Torque Control with Joints Position and Velocity Limits Avoidance
The design of a control architecture that provides the desired motion while respecting the joint limitations of a robotic system is still an open challenge in control and robotics. This paper presents a torque control architecture for fully actuated manipulators that tracks a desired time-varying trajectory while ensuring the joint position and velocity limits. The presented architecture stems from a parametrization of the feasible joint position and velocity space by exogenous states. The proposed parametrization transforms the control problem with constrained states into an unconstrained one by replacing the joint positions and velocities with the exogenous states. With the help of Lyapunov-based arguments, we prove that the proposed control architecture ensures the stability and convergence of the desired joint trajectory along with joint position and velocity limit avoidance. We validate the performance of the proposed architecture through various simulations on a simple two-degree-of-freedom manipulator and the humanoid robot iCub.
I. INTRODUCTION
Nonlinear feedback control of unconstrained fully actuated manipulators is not new to the control community. A large variety of position, velocity, and torque control algorithms have been developed through feedback linearization, backstepping, robust control, and adaptive tools for steering these nonlinear systems towards desired quantities [1]. However, a control algorithm is applicable and safe for a robotic manipulator only when the robot's physical constraints, such as motion and actuation limits, are satisfied. In this paper, we propose a control architecture that ensures joint position and velocity limit avoidance for torque-controlled manipulators.
Joint position and velocity limit avoidance for reference trajectory/path planning has been widely studied through neural networks [2], repulsive potentials [3], optimisation [4], and parametrization [5]. However, ensuring the physical limits for the reference trajectory does not imply that the controller's reaction to initial conditions and disturbances will not violate these limits.
Reducing the joint velocity/acceleration of the robot according to its position and velocity limits is a simple approach for enforcing these limits. For example, the joint accelerations can be constrained by a function of the distance between the joint position and velocity values and the corresponding limits [6]. However, these approaches rely mostly on hand-tuned and/or offline heuristics.
To handle the joint position and velocity limits of robotic systems, the control objectives can be cast as an optimization problem with inequality constraints corresponding to the joint limits. This technique is widely used in humanoid whole-body control and human-robot interaction control [7]. It still lacks, however, theoretical guarantees on the stability and convergence properties of the closed-loop system.
The barrier Lyapunov function has recently been used for handling control problems with joint/state constraints [8], [9]. This function grows to infinity as the joints approach their limits. The method is usually complicated and must cope with model inaccuracies and with conflicts between the tracking objectives and the joint limits [10], [11].
Parametrization is another technique used for handling limitations on the system states and inputs [12], [13]. In this technique, the feasible space of the states/inputs is parametrized by an exogenous state/input. Thus, the constrained control problem is transformed into an unconstrained one where any control tool available in the literature can be employed. To the best of our knowledge, the parametrization technique has not been investigated for handling both the joint position and velocity limits.

This paper proposes a torque control architecture that enforces the joint position and velocity limits of fully actuated manipulators. The proposed controller provides asymptotic stability of the desired trajectory while preserving constant joint position and velocity limits. Compared to existing methods for implementing joint position and velocity limits in torque-controlled manipulators, our architecture, with the help of Lyapunov arguments, ensures that the desired trajectory is asymptotically stable and that the time evolution of the joint positions and velocities always remains within the associated limits, provided that the desired trajectory satisfies these limits. For this purpose, the feasible joint position and velocity space is parametrized by exogenous states. The proposed parametrization introduces a one-to-one map between the joint positions and velocities of the robot and the exogenous states. Using the exogenous states, the control design problem with constrained states (i.e., constrained joint positions and velocities) is transformed into an unconstrained control problem in the exogenous states. A control policy is then proposed that ensures the stability properties of the exogenous states, which in turn ensures the stability properties and joint limit avoidance of the robot. We investigate the performance and limitations of the proposed control architecture through simulations on a simple two-degree-of-freedom manipulator and on the humanoid robot iCub [14].
This paper is organized as follows. Section II introduces the notation and definitions used in the paper. Section III presents the problem statement. In Section IV, a control architecture is proposed for stabilizing a desired joint trajectory while ensuring joint position and velocity limit avoidance. Section V illustrates the simulation results carried out to validate the performance of the proposed control architecture. Finally, Section VI concludes the paper with remarks and perspectives.
II. NOTATION
The following notation is used throughout the paper.
• R is the set of real numbers.
• I_n is the n × n identity matrix.
• For a vector q ∈ R^n, the i-th component of q is written as q_i.
• The transpose operator is denoted by (·)^T.
• For a vector q ∈ R^n, the diagonal matrix of q is written as q̄.
• For a vector q ∈ R^n, the Euclidean norm of q is denoted by ‖q‖.
• For a vector q ∈ R^n, the absolute value vector of q is denoted by |q|.
• For a vector q ∈ R^n, the function tanh(q) : R^n → R^n is defined as tanh(q) = [tanh(q_1), tanh(q_2), ..., tanh(q_n)]^T.
• For a scalar q ∈ R, the function Sat(q, q_min, q_max) : R → R is the saturation Sat(q, q_min, q_max) = min(max(q, q_min), q_max).
• For a vector q ∈ R^n, the function Sat(q, q_min, q_max) : R^n → R^n is defined as Sat(q, q_min, q_max) = [Sat(q_1, q_min1, q_max1), ..., Sat(q_n, q_minn, q_maxn)]^T.
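The saturation and elementwise tanh operators above can be sketched in a few lines of Python. This is a hedged sketch: the scalar Sat definition was garbled in extraction and is assumed here to be the standard clipping to [q_min, q_max], which is consistent with the vector definition that follows it.

```python
import numpy as np

def sat(q, q_min, q_max):
    """Elementwise saturation: clips each component of q to [q_min, q_max]."""
    return np.minimum(np.maximum(q, q_min), q_max)

def tanh_vec(q):
    """Elementwise hyperbolic tangent, as in the notation section."""
    return np.tanh(q)
```

NumPy applies both operations componentwise, so the scalar and vector definitions coincide in one function.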
III. PROBLEM STATEMENT
The equations of motion for a fully actuated robotic manipulator with n degrees of freedom can be written in the following form [15]:

M(q)q̈ + C(q, q̇)q̇ + G(q) = τ,    (1)

where q ∈ R^n is the vector of generalized coordinates, M(q) ∈ R^{n×n} is the inertia matrix, C(q, q̇)q̇ ∈ R^n is the vector of centrifugal and Coriolis effects, G(q) ∈ R^n is the vector of gravitational effects, and τ ∈ R^n is the vector of actuator forces/torques.
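As a minimal instance of the dynamics structure M(q)q̈ + C(q, q̇)q̇ + G(q) = τ, consider a single pendulum; the parameters m, l, g below are illustrative, not taken from the paper.

```python
import numpy as np

# Inverse dynamics of a single pendulum as a one-joint instance of
# M(q) qdd + C(q, qd) qd + G(q) = tau. Parameters are hypothetical.
m, l, g = 1.0, 0.5, 9.81

def inverse_dynamics(q, qd, qdd):
    M = m * l**2                  # scalar inertia for one joint
    C = 0.0                       # no Coriolis/centrifugal term for a single joint
    G = m * g * l * np.sin(q)     # gravity torque
    return M * qdd + C * qd + G
```

For example, at q = 0 the gravity term vanishes and the torque is purely inertial, while at q = π/2 the torque equals the full gravity load m·g·l.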
The control objective is to compute τ such that q tracks the desired trajectory q_d(t). However, τ is feasible only if it preserves the physical limits of the robot, such as the joint position, velocity, and torque limits. In the present paper, we propose a control policy that ensures 1) asymptotic stability of the desired trajectory, and 2) joint position and velocity limit avoidance. Assume that the feasible region for the generalised coordinates is

Q_q = {q ∈ R^n : q_min ≤ q ≤ q_max},

where q_min, q_max ∈ R^n denote the vectors of minimum and maximum values for the generalised coordinates. Moreover, assume that the feasible region for the generalised velocities is

Q_q̇ = {q̇ ∈ R^n : q̇_min ≤ q̇ ≤ q̇_max},

where q̇_min, q̇_max ∈ R^n denote the vectors of minimum and maximum values for the generalised velocities.
We propose a control architecture ensuring that q tracks the desired trajectory q d (t) while the evolution of q andq always remain within Q q and Qq, respectively.
Throughout this paper, we assume that:

Assumption 1. The first and second-order time derivatives of q_d are well-defined and bounded ∀t ∈ R^+.

Assumption 2. Each generalized coordinate possesses a nonzero free motion domain, i.e. q_max_i − q_min_i > 0 and q̇_max_i − q̇_min_i > 0, ∀i = 1 : n.
IV. CONTROLLER ARCHITECTURE
We parametrize the space of the generalised coordinates, as well as the generalised velocities, as follows:

q = q_0 + δ̄_q tanh(ζ),    q̇ = q̇_0 + δ̄_q̇ tanh(ψ),

where q_0 = (q_max + q_min)/2, δ_q = (q_max − q_min)/2, and analogously q̇_0 = (q̇_max + q̇_min)/2, δ_q̇ = (q̇_max − q̇_min)/2. The above parametrization guarantees that (q, q̇) ∈ (Q_q, Q_q̇) for bounded (ζ, ψ), and is a one-to-one nonlinear map between (q, q̇) and (ζ, ψ). The new states (ζ, ψ) are computed from (q, q̇) as

ζ = tanh⁻¹(δ̄_q⁻¹ (q − q_0)),    ψ = tanh⁻¹(δ̄_q̇⁻¹ (q̇ − q̇_0)).

The second time derivative of the generalized coordinates can be computed with respect to (ζ, ψ) as q̈ = J_ψ ψ̇, where J_ψ = δ̄_q̇ (I_n − tanh²(ψ)). Thus, (1) can be written in terms of (ζ, ψ), where J_ζ = δ̄_q (I_n − tanh²(ζ)) and h_ζψ collects the remaining terms.

Remark 1. One can observe several properties of this parametrization; in particular (R1.6), the matrix M J_ψ is positive definite for bounded ψ (equivalently, for q̇ ∈ Q_q̇).
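The parametrization and its inverse can be sketched as follows. The midpoint/half-range choices q_0 = (q_max + q_min)/2 and δ_q = (q_max − q_min)/2 are assumptions consistent with mapping tanh(ζ) ∈ (−1, 1) onto the feasible range; the limit vectors below are illustrative.

```python
import numpy as np

# Sketch of the tanh parametrization of the feasible joint space, assuming
# q0 = midpoint and dq = half-range of the limits (illustrative values).
q_min, q_max = np.array([-1.0, -2.0]), np.array([1.0, 2.0])
q0 = 0.5 * (q_max + q_min)
dq = 0.5 * (q_max - q_min)

def zeta_to_q(zeta):
    """Exogenous state -> joint position; stays within [q_min, q_max]."""
    return q0 + dq * np.tanh(zeta)

def q_to_zeta(q):
    """Inverse (one-to-one) map: joint position -> exogenous state."""
    return np.arctanh((q - q0) / dq)
```

The round trip q_to_zeta(zeta_to_q(ζ)) recovers ζ, and even extreme exogenous states are mapped inside the limits, which is the mechanism that turns the constrained problem into an unconstrained one.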
If the desired trajectory is feasible, i.e. (q_d, q̇_d) ∈ (Q_q, Q_q̇), the desired trajectory of (ζ, ψ) is defined through the inverse map as (ζ_d, ψ_d). To ensure asymptotic stability of (ζ_d, ψ_d), one can use any control technique from the literature; the control policy is only required to ensure that (ζ, ψ) is bounded and converges to (ζ_d, ψ_d). We use the feedback linearisation technique and consider a control policy in which k_1, k_2, k_3 ∈ R^{n×n} are constant matrices.
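The paper's exact policy equations did not survive extraction, so the following is only a generic computed-torque (feedback-linearization) sketch for a single joint with illustrative scalar gains; the actual controller acts on the exogenous states (ζ, ψ) rather than on q directly.

```python
import numpy as np

# Generic computed-torque law: cancel M, C, G and impose linear error dynamics
# qdd = qdd_d + k1*ed + k2*e. All arguments are hypothetical scalars here.
def computed_torque(q, qd, q_d, qd_d, qdd_d, M, C, G, k1, k2):
    e, ed = q_d - q, qd_d - qd
    u = qdd_d + k1 * ed + k2 * e       # stabilizing auxiliary acceleration
    return M * u + C * qd + G          # feedback-linearizing torque
```

At zero tracking error the commanded torque reduces to the feedforward term M·q̈_d + C·q̇ + G, as expected from exact cancellation of the nonlinearities.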
V. SIMULATION RESULTS
In this section, we evaluate the performance of the proposed control architecture for trajectory tracking in two simulation studies: a simple two-link manipulator and the humanoid robot iCub [14] (see Fig. 1). In the first case, the proposed controller is used to control a two-degree-of-freedom manipulator simulated in Simscape/MATLAB [16]. In the second case, the proposed controller is employed on the humanoid robot iCub simulated in Gazebo [17]. In both cases, the performance of the proposed control architecture is compared to the joint limit avoidance torque control (JLATC) [12]. All results are available on GitHub 1.
Implementing the proposed control architecture requires some considerations:

Remark 2. Theorem 1 assumes that the controller is continuous. In both simulation and real experiments, however, the controller is implemented as a discrete system. Thus, the controller must be implemented with an appropriately small time step to achieve the results stated in Theorem 1. The appropriate time step depends on the system dynamics and the desired trajectory, but a small time step drastically slows down the simulations and is not always achievable in real applications. To relax the requirement of small time steps, we modify the control policy as follows.

Remark 3. In some cases, the definition of (ζ, ψ) is not valid; to resolve this issue, we modify (5).

Remark 4. In practice, it can happen that the desired joint trajectory does not satisfy the joint position and velocity limits of the system. In this case, the definition of (ζ_d, ψ_d) is not valid. To resolve this issue, we modify the definition of (ζ, ψ). The resulting condition can be violated because of the error between ζ_d and ζ; thus, we also modify the definition of ψ_r, where γ is a positive constant.
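The time-step sensitivity described in Remark 2 can be illustrated with the classic explicit-Euler stability bound on a toy first-order system (not the robot dynamics): integrating ẋ = −k·x with step dt is stable only when k·dt < 2, so a stiff gain demands a small step.

```python
# Explicit-Euler integration of xdot = -k*x: stable iff k*dt < 2.
# A large effective gain k (stiff closed loop) forces a small time step,
# mirroring the controller's sensitivity to discretization.
def euler_final_state(k, dt, steps=200, x0=1.0):
    x = x0
    for _ in range(steps):
        x += dt * (-k * x)
    return x

stable = abs(euler_final_state(k=10.0, dt=0.01))    # k*dt = 0.1: decays
unstable = abs(euler_final_state(k=10.0, dt=0.25))  # k*dt = 2.5: diverges
```

With k·dt = 0.1 the state contracts by a factor 0.9 per step; with k·dt = 2.5 each step multiplies the error by −1.5 and the discrete system blows up even though the continuous one is stable.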
A. Two-link manipulator
In this study, the two-link manipulator shown in Fig. 1a is simulated in Simscape/MATLAB [16]. The manipulator has two degrees of freedom and moves in the vertical plane. The results are shown in Fig. 2. As can be seen, using the JLATC, the joint trajectory respects the predefined position limits but goes beyond the velocity limits. Using the proposed control architecture instead, the joint trajectory converges to the desired value while respecting the predefined position and velocity limits, irrespective of the large control coefficients. The control coefficients of the proposed control architecture are tuned for each joint to provide a convergence rate equal to or higher than the one provided by the JLATC. In this simulation, the convergence rate provided for the second joint by the proposed control architecture is the maximum allowable convergence rate given the corresponding joint velocity limit. Using the proposed control architecture, the joint trajectory tracks the desired trajectory while respecting the predefined joint position and velocity limits; using the JLATC controller instead, the joint trajectory respects the joint position limits but not the joint velocity limits.
B. Humanoid robot iCub
In this study, the control architecture and iCub are simulated in MATLAB and Gazebo, respectively, and communicate through Yarp channels. A MATLAB discrete integrator with a time step of 0.001 [sec] is used as the simulation integrator.
The leg of the iCub is used for this study, forming a 3 degree-of-freedom manipulator with rotational joints at the hip-pitch, hip-roll, and knee (see Fig. 1b). The ankle joint is kept fixed by a position controller. The joint limits of iCub are given in Tab. I.
1) Constant desired trajectory: In this scenario, the robot reaches the desired joint position q_d = (60, 60, −90) [deg] for the hip pitch, hip roll, and knee joints, respectively, from a given initial position q(0) = (0, 0, 0) [deg]. As can be seen in Fig. 4, using the proposed controller, the joint trajectory converges to the desired constant value and respects the predefined position limits. However, in contrast to the simulation results presented in Section V-A for the two-link manipulator, at the beginning of the simulation the joint velocity fails to respect its corresponding limits, and the joint torques oscillate noticeably. Compared to the JLATC, the proposed control architecture has the advantage that the joint velocity trajectory exceeds its limits only at some time instances and then returns to its feasible range. It is worth noting that the proposed controller satisfies both the joint position and velocity limits, along with smooth joint torques, when the simulation time step is reduced. For the sake of limited space, the simulation results for the small time step are not presented here but are available on GitHub 2. In fact, the proposed controller is sensitive to discretized implementation.
2) Sinusoidal desired trajectory: In this scenario, the robot tracks the desired sinusoidal joint position q = A sin(ωt + ϕ_0) + b, where A = (33. The results are shown in Fig. 5. As can be seen, using the proposed control architecture, the joint trajectory tracks the desired sinusoidal trajectory while respecting both the joint position and velocity limits. Using the JLATC controller instead, the joint trajectory goes beyond the joint velocity limits. Note that both controllers demand high joint torques when the joint trajectory is near its position/velocity limits.

3) Constant desired trajectory and disturbance: In this scenario, the robot reaches the constant desired joint position (60, 60, 90) [deg] for the hip pitch, hip roll, and knee joints, respectively, from the initial joint position (0, 0, 0) [deg]. An external force in the vertical direction is applied to the sole of the robot's foot. As can be seen in Fig. 6, the joint trajectory preserves the predefined joint position limits. However, as explained before, the joint trajectory goes beyond the predefined velocity limits at some time instances at the beginning of the simulation. Later on, the joint trajectory preserves the velocity limits even in the presence of the external force.

2 https://github.com/ami-iit/paper pasandi 2023 icra-joint-limit-avoidance
VI. CONCLUSIONS
This paper presents a torque control architecture with joint position and velocity limit avoidance for fully actuated manipulators. The presented architecture provides convergence of the joint positions of the robot to a desired trajectory while ensuring that the time evolution of the joint positions and velocities remains within the feasible space. A Lyapunov analysis proves the stability and convergence of the tracking error as well as the satisfaction of the joint position and velocity limits. We validated the soundness of the proposed control architecture through simulations on a simple two-degree-of-freedom manipulator, and investigated its limitations through simulations on the humanoid robot iCub. We observed that, in discrete implementation, the proposed controller can generate noticeably oscillatory joint torques and fail in joint velocity limit avoidance, depending on the discrete time step. As future work, we plan to revise the proposed control architecture to improve its performance in discrete implementation.
According to Theorem 1, k_1 and k_3 − k_2 k_1⁻¹ k_2 are diagonal positive definite matrices, and thus P is a symmetric positive definite matrix based on the Schur complement theorem [18]. As a result, V is a positive definite function where V = 0 iff (e_ζ, e_ψ) = (0, 0).
Relating Microstructure and Particle-level Stress in Colloidal Crystals Under Increased Confinement
The mechanical properties of crystalline materials can be substantially modified under confinement. Such modified macroscopic properties are usually governed by the altered microstructures and internal stress fields. Here, we use a parallel plate geometry to apply a quasi-static squeeze flow crushing a colloidal polycrystal while simultaneously imaging it with confocal microscopy. The confocal images are used to quantify the local structure order and, in conjunction with Stress Assessment from Local Structural Anisotropy (SALSA), determine the stress at the single-particle scale. We find that during compression, the crystalline regions break into small domains with different geometric packing. These domains are characterized by a pressure and deviatoric stress that are highly localized with correlation lengths that are half those found in bulk. Furthermore, the mean deviatoric stress almost doubles, suggesting a higher brittleness in the highly-confined samples.
In principle, knowing all particle positions and their interactions is sufficient to determine the suspension's structural order and stress distribution. Unfortunately, resolving the stress distribution within colloidal materials has remained experimentally challenging. In the simplest colloidal system -hard-spheres -particles do not interact until contact. The steep hard-sphere potential and experimental noise in locating particles make potential-based stress calculation (force times relative vector) impractical in experiment. In this work, we use a new technique -Stress Assessment from Local Structural Anisotropy (SALSA) -to measure the stress distribution in confined hard-sphere colloidal polycrystals [33]. SALSA uses the particle positions captured using confocal microscopy to calculate the orientation dependent particle collision probabilities. These probabilities are used to determine the stress at the single particle scale. Using SALSA we follow the evolving stress distribution in a polycrystal as it is compressed quasi-statically between two parallel plates.
A. Samples and instruments
We conduct experiments with suspensions comprised of sterically stabilized Poly(methyl methacrylate), PMMA, particles. The particles have a diameter of 2a = 1.62 µm, polydispersity ≈ 3%, and are fluorescently labeled with DiIC 18 (3) (1,1'-Dioctadecyl-3,3,3',3'-Tetramethylindocarbocyanine Perchlorate) for confocal imaging. The solvent -a mixture of decalin and CXB (cyclohexylbromide) -has a refractive index and density that nearly match those of the PMMA particles. While a slight mismatch between the particle and solvent density ∆ρ ≈ 0.03 g/cm 3 is introduced to induce slow sedimentation for the bulk samples, the confined crystal samples are all density-matched. We approximate the hard-sphere interparticle force by saturating the solvent with tetrabutyl ammonium bromide (TBAB) at a concentration (≈ 260 nM) [34], and using a syringe filter to remove any excess salt granules [35]. The added TBAB salt screens the electrostatic force and results in a Debye length, ≈ 100 nm, substantially smaller than the particle diameter [36].
We use a high-speed confocal laser scanning microscope (Zeiss, LSM 5 LIVE) to image the 3D structure of the sample. We acquire a time series of 15 image stacks where each stack contains up to 512×512×500 voxels, corresponding to a sample volume of 71×71×68 µm 3 . We use our confocal rheoscope to confine our crystal samples between two plates with a separation that is uniform to within ±0.2 µm [37][38][39]. The top plate, a silicon wafer (4 mm × 4 mm), is fixed to the rheoscope's kinematic mount and is static throughout the experiment. The bottom plate, a transparent coverslip for imaging from below, is attached to a multi-axial piezo electric, and can be moved vertically (z-axis) to change the gap size [39]. The multi-axial piezo (PI P-563.3CD) has a travel range 300 µm in the z direction and an accuracy ± 2 nm. Using this parallel-plate setup we reduce the gap heights in a controlled manner from 38 µm down to 6 µm (24≥ h/2a ≥ 4).
B. Structure and Stress Measurements
We process the confocal data and locate the particle positions using the Crocker-Grier feature-finding algorithm [40], which locates particle positions with sub-pixel accuracy (≈ 50 nm) [21, 41-43]. From the particle positions, we employ a standard bond-order parameter method [44-48] to calculate the local three-dimensional (3D) structural order. In the bond-order parameter calculation, we determine the normalized complex order parameter q̄_lm(α) for each particle α,

q̄_lm(α) = (1/C) ⟨Y_lm(r̂_αβ)⟩_{β∈nn},

where the factor C normalizes the order parameter such that Σ_m q̄_lm(α) q̄*_lm(α) = 1, Y_lm(r̂_αβ) is the spherical harmonic function of the unit vector r̂_αβ pointing from particle α to β, and ⟨...⟩_{β∈nn} denotes the average over the neighbors of particle α. The neighboring particles are defined as those with a center-center distance within 1.41(2a), which coincides with the first minimum of the radial distribution function g(r). Following previous protocol, we set l = 6 [44-48]. The number of ordered neighbors is then approximated by summing the complex inner product, N_ord = Σ_{m,β} q̄_lm(α) q̄*_lm(β). Here, the number of ordered neighbors has a range 0 ≤ N_ord ≤ 12. [76]

From the particle positions, we can also measure the stress at the single-particle scale using SALSA. In our Brownian hard-sphere systems, the force with which particles collide is related to the thermal energy k_B T. Using a time series of featured particle positions, we deduce the thermal collision probability and compute the stress arising from these collisions. As shown by previous work [33], the stress tensor σ^α_ij = σ_ij(X^α) at particle α can be approximated by

σ^α_ij = (k_B T / Ω_α)(a/Δ) ψ^α_ij(Δ),

where Ω_α is the volume occupied by the particle and Δ is the cutoff distance from contact.
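A minimal sketch of the bond-order calculation for one particle is given below, using SciPy's spherical-harmonic convention (order m and degree l first, then azimuthal and polar angles). The neighbor search, the 1.41(2a) cutoff, and the averaging over many particles in the full pipeline are omitted; the bond vectors and particle count here are illustrative.

```python
import numpy as np
from scipy.special import sph_harm

def qlm_normalized(bond_vectors, l=6):
    """Normalized bond-order vector q_lm for one particle.

    bond_vectors: (N, 3) array of vectors to neighbors.
    Returns a complex vector of length 2l+1 with sum_m |q_lm|^2 = 1.
    """
    r = np.array(bond_vectors, dtype=float)
    r /= np.linalg.norm(r, axis=1, keepdims=True)
    theta = np.arctan2(r[:, 1], r[:, 0])          # azimuthal angle
    phi = np.arccos(np.clip(r[:, 2], -1.0, 1.0))  # polar angle
    m = np.arange(-l, l + 1)
    # average Y_lm over this particle's bonds, then normalize (factor C)
    q = np.array([sph_harm(mm, l, theta, phi).mean() for mm in m])
    return q / np.linalg.norm(q)

def ordered_neighbors(q_alpha, q_betas):
    """N_ord: sum over neighbors of the complex inner product of q vectors."""
    return float(np.real(sum(np.vdot(q_b, q_alpha) for q_b in q_betas)))
```

Because each q vector is normalized, a neighbor with an identical local environment contributes exactly 1 to N_ord, which is why N_ord approaches 12 in a perfect close-packed crystal.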
Here, ψ^α_ij(Δ) is the time-averaged local structural anisotropy of particle α,

ψ^α_ij(Δ) = ⟨ Σ_{β∈nn} r̂^αβ_i r̂^αβ_j ⟩,

where nn denotes the particles that lie within a distance 2a + Δ from particle α, i and j are spatial indices, and r̂^αβ is the unit vector pointing from particle α to particle β.
In the granular literature, ψ^α_ij(Δ) is also known as the time-averaged fabric tensor [49]. While the trace Σ_i ψ^α_ii determines the number of contacting particles, the off-diagonal terms report the anisotropy of the contact configuration. When averaged over time, the fabric tensor of a selected particle captures the probability of Brownian collisions between it and its neighbors. This probability is linearly proportional to the cutoff distance, or shell thickness, Δ when Δ ≪ a. Therefore, when scaling the collision probability by Δ, the stress is independent of Δ. The last step of the SALSA calculation scales the probability by the energy density per collision, k_B T / Ω_α. In a defect-free crystal, Ω_α is simply the system volume divided by the particle number. However, in a crystal containing defects, the local volume occupied by each particle varies, and this variation must be taken into account to correctly measure the stress near defects. We therefore first calculate the pointwise stress,

σ^pt_ij(x) = (k_B T / dV)(a/Δ) ψ_ij(x, Δ),

and then perform a spatial average to obtain a macroscopic measurement at the particle level [50]. In theory, dV should be an infinitesimal volume; in practice, when the pointwise stresses are assigned to a 3D discrete grid, dV is the volume of a single grid box. The continuum stress field σ^cont_ij(x) is then obtained by smoothing with a weighting kernel w(r), which in our experiments is a Gaussian function [50]. In particular, we set r_w = 2a to remove stress features on length scales smaller than a particle. We have tested different discrete grid sizes and other normalized weighting kernels and find the results insensitive to these changes. This spatial average effectively addresses the local volume variation.
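The fabric-tensor step of SALSA can be sketched for a single particle and a single snapshot as follows. The published method time-averages ψ over many frames and applies the pointwise/smoothing steps described above; the values of kT, a, Δ, and Ω here are illustrative, in arbitrary units.

```python
import numpy as np

# Single-snapshot SALSA-style stress estimate for one particle:
# build the fabric tensor from neighbors within the shell 2a + delta,
# then apply the (kT/omega)*(a/delta) scaling.
def salsa_stress(pos_alpha, pos_others, a, delta, kT, omega):
    d = pos_others - pos_alpha
    dist = np.linalg.norm(d, axis=1)
    near = (dist > 0) & (dist < 2 * a + delta)   # neighbors inside the shell
    rhat = d[near] / dist[near, None]
    psi = rhat.T @ rhat                          # fabric tensor sum_beta r r^T
    return (kT / omega) * (a / delta) * psi
```

A single neighbor along x̂ just inside the shell produces a stress tensor with only the xx component nonzero, directly reflecting the anisotropy of the contact configuration.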
Finally, while SALSA can be modified to account for the contributions due to the confining surfaces, for simplicity, in the data presented here, we exclude boundary particles in all final presentations of local structure and stress distributions.
III. BULK CRYSTALS
In the tested bulk crystal, we observe different types of defects including vacancies, dislocations, stacking faults, grain boundaries, and voids. These defects are often close and can interact with one another. The typical size of a single crystallite is approximately 50 µm × 50 µm × 50 µm, containing on the order of 10 3 particles. We show a horizontal slice of a 3D confocal image in Fig. 1(a). In the image, we see a crystallite in the center surrounded by several other domains. The grain boundary of the center domain is highlighted by the dashed red contour. By performing the bond-order parameter analysis, we determine the number of ordered neighbor particles N ord , and plot its distribution in Fig. 1(b). The red particles represent defect regions with lower N ord ; blue particles are crystalline domains with higher N ord . To better visualize defects deeply embedded in the crystal, we remove crystalline particles with N ord > 5.5. The remaining particles are shown in Fig. 1(c). For further clarity, we remove particles with z-positions higher than 60 µm. We find that the thresholded particle distribution faithfully captures the grain boundaries highlighted in the raw confocal image illustrated by Fig. 1(a).
To determine whether the crystalline order affects the stress distribution, we apply SALSA to our polycrystal. (The stress distribution determined by SALSA is a continuous field; we resample the stress data at individual particle positions and plot them in the same fashion as the local structure data shown in Fig. 1.) Since the magnitudes of individual stress components depend on the orientation of the coordinate system, we focus on primary tensor invariants, including the pressure, the three principal stresses, and the von Mises stress. Our polycrystal sample has a mean pressure P̄ = (1/3)(σ_xx + σ_yy + σ_zz) of approximately 39 mPa, consistent with the prediction from previous numerical simulations [51]. Furthermore, the measured pressure as a function of height is also consistent with the estimated trend of hydrostatic pressure arising from gravitational settling due to the slight mismatch in the particle and solvent densities (Δρ ≈ 0.03 g/cm³). See Sec. VIII for detailed analyses of the bulk pressure.
In Fig. 2(a) we show the 3D pressure field of our polycrystal sample. The red particles indicate regions with higher pressure and blue particles indicate regions with lower pressure. The pressure fluctuation has a length scale ∼ 10 particles considerably smaller than the size of a single domain indicating intragrain stress fluctuations. By comparing the pressure ( Fig. 2(a)) and N ord ( Fig. 1(b)) distributions, we find that the pressure fluctuation has a relatively random spatial distribution, and is not notably correlated with the grain boundary arrangement.
We also determine the three principal stresses, σ_1, σ_2, and σ_3, by calculating the eigenvalues of the measured three-by-three stress matrix; see Sec. IX. We find that the distribution of each principal stress is similar to that of the pressure. We therefore extract the difference between them and calculate the deviatoric (von Mises) stress,

σ_VM = sqrt( [(σ_1 − σ_2)² + (σ_2 − σ_3)² + (σ_3 − σ_1)²] / 2 ).

Similar to pressure, σ_VM is an invariant. Since this invariant satisfies the property that two stress configurations with equal deviatoric strain energy have the same value of σ_VM, this scalar is effectively a stress-field signature capturing the distortion energy of a material under various loads [77]. In continuum elasticity, the mean of σ_VM is often used as a criterion for determining how close a metal or alloy sample is to the yield point when subjected to loads [52]. In atomic-scale simulations, similar yielding criteria have been employed to elucidate the local yielding mechanism in glassy systems [53].
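The invariants used in this section can be computed directly from a 3×3 symmetric stress tensor; a brief sketch:

```python
import numpy as np

# Pressure, principal stresses, and von Mises stress from a symmetric
# 3x3 stress tensor.
def invariants(sigma):
    p = np.trace(sigma) / 3.0                   # pressure (mean normal stress)
    s1, s2, s3 = np.linalg.eigvalsh(sigma)      # principal stresses (ascending)
    vm = np.sqrt(0.5 * ((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2))
    return p, (s1, s2, s3), vm
```

For a uniaxial stress state diag(σ, 0, 0), this gives p = σ/3 and σ_VM = σ, the standard sanity check for the von Mises formula.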
In Fig. 2(b), we show the von Mises stress field σ VM of our bulk polycrystal. We find that the σ VM distribution is also roughly random. Counterintuitively, the distribution of the high σ VM particles does not follow the trend of the grain boundaries shown in Fig. 1(a). In fact, many of the highly stressed regions are well within the crystalline domains (see Sec.IX and Fig. 11). The observed uniform and random fluctuations of P and σ VM are consistent with the evenly spread stress fluctuations previously reported in polycrystals of hard-sphere silica particles [33].
IV. CONFINED CRYSTALS
Confinement is known to affect the structure of crystals [27, 29-31, 54-57]. Here, we use SALSA to determine whether such structural modifications are accompanied by changes in the stress distribution. To create a confined polycrystal, we load our crystalline suspension (volume fraction φ ≈ 0.63) in a confocal rheoscope. The rheoscope has three differential screws that allow us to finely adjust the alignment and the gap h between the two parallel plates holding the sample. We study the confined sample at seven different gap heights, starting with a bulk sample (h = 38 µm ≈ 24(2a)). We then gradually move the bottom plate upward, reducing the gap height down to h = 6 µm ≤ 4(2a). As the gap decreases, the parallel plates compress the sample and induce a squeeze flow that drives particles outward, causing additional structural rearrangement. An oscillatory shear flow with a small strain amplitude of 10% and a frequency of 1 Hz along x̂ẑ is applied for 200 cycles to prevent local jamming and speed up sample recrystallization. Importantly, this low strain amplitude does not generate large structural rearrangements. Thus, the final structure at each height h is primarily determined by the squeeze flow and the degree of confinement.
A. Structure
We show representative orthogonal slices from the confocal images for four gaps in Fig. 3(a-d). The black regions above (green arrow in Fig. 3(b)) and below (orange arrow) the sample (blue arrow) correspond to the top and bottom plates, respectively. We use confocal images to locate particle positions, and perform the bond order parameter analysis. 3D reconstructions of these particles can be seen in Fig. 3(e-h). The particles with high N ord are in blue, and low N ord are in red. By inspection we observe that the local structural order varies non-monotonically with gap, with crystals at intermediate gaps h ≈ 10(2a) appearing more ordered. In addition, the crystal structure appears to break up into smaller domains at small gaps.
We quantify these observed trends by plotting the N ord histograms for the four different gap heights h in Fig. 3(i-l). We find that for large gaps h > 10(2a) the distributions look similar with a relatively narrow width. As the sample is confined to smaller gaps, however, we observe that the distribution becomes broader with a larger probability of having particles with lower order. These behaviors are further quantified in Fig. 4, in which the mean and standard deviation of N ord for all seven gaps investigated are plotted versus h/(2a). These measurements indicate that reduction in structural order and increase in its variance become more pronounced for gaps smaller than ten particle diameters. These observations are consistent with a large body of previous research indicating confinement effects become significant in samples confined to gaps containing less than ten layers [27,[29][30][31].
B. Stress
We calculate the 3D pressure distributions of all tested samples, and show horizontal 2D slices (xŷ at z ≈ h/2) for the same four representative gap heights h in Fig. 5(a-d). Similar to the bulk measurements, the pressure fluctuations in confined samples are evenly spread, and roughly uncorrelated with the grain boundary arrangement. Furthermore, as anticipated, the mean pressure remains approximately constant (to within 10% of the mean) as h decreases. The constant pressure results from the fact that the confined sample is surrounded by a suspension reservoir residing beyond the confining plates that regulates the overall pressure of the confined zone. We also find that the standard deviation of the pressure is independent of h. The uniformity of the standard deviation is illustrated by the histograms of normalized pressure P/P̄ shown in Fig. 5(e), in which all four datasets collapse on a single Gaussian distribution, consistent with the Gaussian pressure distribution found in the bulk crystal samples (see Fig. 9(d)). We note that while the structural order is substantially smaller at small gaps, the corresponding pressure histogram remains Gaussian. In particular, we do not see evidence of significant deviations from Gaussian behavior as is typically observed in glassy systems [58][59][60][61].
Although the mean and standard deviations of the pressure are unaffected by confinement, its spatial distribution substantially changes. In the bulk sample, we find that fluctuations are long-ranged, ∼10 particles. However, under confinement, these fluctuations become localized, see Fig. 5(d). Such a stress localization is characterized by calculating the correlation function C P ( r/2a) = ⟨(P ( x + r/2a) − P̄ )(P ( x) − P̄ )⟩ x /P var . Here, C P ( r/2a) is unity at the origin r/2a = 0 and zero at r/2a → ∞, while P var is the variance of the pressure. For simplicity, we plot the radial part of the correlation function C P (r/2a) in Fig. 5(f). As shown, the correlation function decays faster for smaller gap size h, indicating a more localized pressure fluctuation. At the smallest gap we explored (h/2a = 3.7), the correlation length is ≈ 3.1(2a), about three times shorter than in the bulk sample. This reduction in correlation length tracks the decrease in grain size (also approximately a factor of three) as the crystal is squeezed.
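The normalized correlation function used here can be sketched on a synthetic 2D field. The smoothed-noise "pressure" slice and the periodic boundaries below are simplifying assumptions for illustration, not the experimental analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 64x64 "pressure" slice (an assumption for illustration):
# white noise smoothed by a 5x5 box filter, giving correlations that
# die off over roughly the filter width.
noise = rng.normal(size=(64, 64))
P = sum(np.roll(np.roll(noise, i, axis=0), j, axis=1)
        for i in range(-2, 3) for j in range(-2, 3)) / 25.0

def radial_corr(field, r):
    """Normalized autocorrelation <(P(x+r) - Pbar)(P(x) - Pbar)>_x / var
    at integer shift r along one axis, with periodic boundaries."""
    dP = field - field.mean()
    return float((dP * np.roll(dP, r, axis=0)).mean() / dP.var())

C = [radial_corr(P, r) for r in range(8)]
# C[0] = 1 by construction; C decays toward zero within ~5 pixels,
# the box-filter width.
```

In the experiment, a faster decay of this function at small gaps is what signals the more localized pressure fluctuations.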
We perform the same analysis on deviatoric stress for different gap sizes, and show 2D slices of the von Mises stress σ VM fields for four different h, see Fig. 6 (a-d). Consistent with the pressure distribution, the correlation between σ VM and grain boundary arrangement is negligible. As shown in Fig. 6(e), the histogram of σ VM shifts to the right and broadens with decreasing h, indicating a higher mean value and standard deviation. These trends become more pronounced when h ≤ 10(2a).
Similar to our treatment of the pressure, we quantify the length scale of σ VM fluctuations by calculating its correlation function, defined analogously as C VM ( r/2a) = ⟨(σ VM ( x + r/2a) − σ̄ VM )(σ VM ( x) − σ̄ VM )⟩ x /σ VM,var . We plot the radial component of the correlation C VM (r/2a) in Fig. 6(f). Consistent with the pressure correlation evolution, C VM (r/2a) also decays more rapidly with decreasing h. As with the pressure, this reduction in correlation length also tracks the decrease in grain size as the crystal is squeezed.
The pronounced change in σ VM suggests a link between the local structural order and σ VM . To illustrate this relationship, we plot the mean deviatoric stress σ̄ VM against the ordered neighbor number averaged over the sample, N̄ ord , in Fig. 7. The straight dashed line fits the data, showing a clear correlation between σ̄ VM and N̄ ord . More importantly, the mean deviatoric stress approximately doubles while N̄ ord decreases by only ∼ 15%. In atomic systems, higher σ̄ VM indicates the system is closer to yielding, suggesting that a lower shear stress would be required to induce plastic deformation of the crystal.
V. DISCUSSION AND CONCLUSIONS
Our studies, which combine both structure and stress measurement at the single particle scale, clearly illustrate that confinement can have profound consequences for stress distributions in hard sphere crystals. While we find that the distribution of the pressure and the von Mises stress are weakly correlated with the grain boundary location, their fluctuations become more localized due to the additional grain boundaries introduced by our confining procedure. Thus, the grain boundaries effectively shield each grain from the specific spatial distribution of stresses in neighboring grains.
In addition, we find that the deviatoric stress has a much wider distribution of magnitudes and a mean value that nearly doubles as the gap is reduced from h/2a ≈ 24 to h/2a ≈ 4. In our system, this reduction in gap produces smaller grain sizes. Such a grain size reduction has been suggested to have a great influence on the microstructure [62], defect density [63,64], and particle diffusivity [65] in polycrystals. More importantly, in atomic nanocrystals, grain size reduction leads to substantially lower yield stress, a phenomenon known as the inverse Hall-Petch relation [66,67]. Such trends are consistent with our observed enhancement in the deviatoric stress, typically a measure of how close a system is to yielding. Further experiments in which the normal force is continuously measured or experiments in which the yield stress under shear is determined for crystals under different degrees of confinement would shed light on whether a direct link can be made between our colloidal system and films comprised of nanocrystalline grains.
Such studies would also help determine whether different yielding mechanisms dominate when the grain size is reduced. For example, it has been shown in numerical simulations that in large grains dislocations penetrate the grains and entangle, resulting in strain hardening. In contrast, for small grains, plastic flow induces stacking faults and twinning that localize near grain boundaries, and do not contribute significantly to the flow stress. Experiments in which we can simultaneously plastically deform the crystals while measuring their order and stress evolutions would elucidate whether similar mechanisms are at play in these hard sphere colloidal crystals.
More broadly, combining the bulk stress measurement and SALSA provides a direct way to quantify the interplay between microscopic defect structures and macroscopic mechanical properties. This approach opens the door to uncovering the mechanisms that underlie many defect-dominated phenomena in solid mechanics including defect-assisted premelting, strain hardening, and material fatigue.
VI. APPENDIX A: CRYSTAL SAMPLE PREPARATION AND IMAGING DETAILS
To grow a bulk crystal, we set the solvent density value (∼ 1.20 g/cm 3 ) slightly lower than the particle density (∼ 1.23 g/cm 3 ), so the particles can sediment and form a crystal with a higher volume fraction. We load a suspension with a volume fraction φ ≈ 0.60 in a sample cell and hermetically seal it. Prior to the experiment, the sample is placed on the microscope stage for at least 24 hours, until the sedimentation is complete. The resulting crystal has a total thickness of approximately 280 µm and volume fraction φ ≈ 0.67 ± 0.03. During the crystal growth, the Péclet number for sedimentation is Pe g = ∆ρga 4 /k B T ∼ 0.03, where ∆ρ = 0.03 g/cm 3 refers to the particle-solvent density mismatch, g the gravitational acceleration, and k B T the thermal energy. The small Péclet number, Pe g ≪ 1, indicates that the particle sedimentation rate is much slower than its self-diffusion. As a result, the PMMA sample can form polycrystals with far larger grain size and lower defect density than silica systems [33], where the particle density is significantly mismatched.
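The quoted Péclet number can be checked by direct substitution. The particle radius a ≈ 0.79 µm is an assumption inferred from h = 38 µm ≈ 24(2a) quoted elsewhere in the text:

```python
# Order-of-magnitude check of the sedimentation Peclet number
# Pe_g = delta_rho * g * a**4 / (k_B * T).
delta_rho = 30.0          # kg/m^3, i.e. the 0.03 g/cm^3 density mismatch
g = 9.81                  # m/s^2, gravitational acceleration
a = 0.79e-6               # m, particle radius (assumed value)
kT = 1.38e-23 * 295.0     # J, thermal energy near room temperature

Pe_g = delta_rho * g * a**4 / kT
print(f"Pe_g ~ {Pe_g:.2g}")  # ~0.03, matching the value in the text
```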
Our confined crystal samples are all density-matched, and we did not observe any significant particle sedimentation during the course of the experiment. The confined suspensions have a volume fraction φ ≈ 0.63, slightly lower than the bulk crystal value φ ≈ 0.67, and have a lower viscosity, allowing it to be loaded in a confocal rheoscope [30].
At an imaging rate of 60 frames per second, the acquisition time of one confocal image stack that consists of 400 slices is ∼ 6.7 s. This time scale is comparable to a particle's relaxation time τ D = 6πη 0 a 3 /k B T ∼ 4.8 s, where η 0 ∼ 2.1 mPa·s is the solvent viscosity. The final SALSA stress field is averaged over 15 image stacks requiring ∼ 100 s to collect. We observe short-time stress fluctuations arising from particle random motions within their local environment over the image acquisition time. Our data is focused on these relatively short time scales rather than the stresses associated with the long time annealing of crystal grains [? ] or the glassy behavior near grain boundaries [28]. To visualize the orientations and types of crystals, we perform a bond-angle analysis distinguishing the fcc and hcp crystals [68,69]. We highlight the color of fcc crystals (green) and tone down the color of hcp crystals (red) in Fig. 8(a). In Fig. 8(b) we reverse the color contrast. We find comparable fractions of fcc (≈ 60%) and hcp (≈ 40%) crystals. This finding is consistent with previous computational studies, in which the fcc configuration is shown to be entropically favored over hcp, but only by 10 −3 k B T per particle [70][71][72]. Furthermore, the observed fcc fraction α ∼ 0.6 is also consistent with the values found in previous scattering [73,74] and direct imaging [23,75] experiments, which show that colloidal polycrystals are essentially comprised of randomly stacked hexagonal layers with α ≈ 0.5. We also find that the horizontal fcc and hcp stacks (with a crystal orientation ⟨111⟩ parallel to the z-axis) do not alternate consistently across the field of view, as indicated by the arrows in Fig. 8(b). This stacking inconsistency leads to stacking faults that can be identified by the vertical interfaces between the two crystals.
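The quoted diffusive relaxation time τ D can also be checked by direct substitution; again the particle radius a ≈ 0.79 µm is an assumed value inferred from h = 38 µm ≈ 24(2a):

```python
import math

# Check of the diffusive relaxation time tau_D = 6*pi*eta0*a**3 / (k_B*T).
eta0 = 2.1e-3             # Pa*s, solvent viscosity from the text
a = 0.79e-6               # m, particle radius (assumed value)
kT = 1.38e-23 * 295.0     # J, thermal energy near room temperature

tau_D = 6 * math.pi * eta0 * a**3 / kT
print(f"tau_D ~ {tau_D:.1f} s")  # ~4.8 s, matching the text
```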
The high density of these planar defects in our nearly-equilibrium crystals arises from the small free energy cost associated with the fault formation. By observing the crystal orientation, we also find that most of the crystal domains have a (111) plane parallel to the coverslip. This parallel alignment is consistent with the previously reported mechanism of crystal growth from sedimentation [43]. In this case, the first few crystal layers form simultaneously near the flat bottom plate, while further crystals grow layer by layer. We find excellent agreement between our measured pressure value (horizontal line in Fig. 9(a)) and the value predicted by simulations (orange line) of hard spheres at the volume fraction found (φ ≈ 0.67, vertical line) in our system. To investigate how this pressure value depends on the SALSA shell thickness ∆, we plot P̄ versus ∆ (black joined points) in Fig. 9(b). The constant P̄ highlighted by the green shade and horizontal red line indicates a pressure value insensitive to shell thickness between 50 nm ≤ ∆ ≤ 180 nm. The overestimated pressure at ∆ < 50 nm arises from polydispersity and the particle overlap associated with featuring uncertainties. The underestimated pressure at ∆ > 180 nm arises from the saturation in the collision probability once all the nearest neighbors are included. The observed ∆-independent pressure confirms that both our imaging and particle featuring resolutions are adequate to quantify the particle collision probability and its resulting stress. Throughout all SALSA analyses in this work, we set the shell thickness ∆ = 80 nm.
The particle polydispersity has different influences on the normal and shear stress measurements. A detailed discussion of this issue can be found in our previous work (Supplementary Information of [33]), where the shear and normal stresses near a vacancy defect are measured. Overall, since the pressure measurement relies on a more accurate identification of colliding particles, it is influenced by the polydispersity more notably. In contrast to the pressure, the shear component is primarily related to the angular anisotropy of the neighboring particle configuration; hence, it is affected by the polydispersity less. In the current work, the particle polydispersity (≤3%, ∼40 nm, comparable to the particle featuring error) is smaller than the shell thickness ∆ = 80 nm that defines the colliding criterion. Therefore, we anticipate that the identification of colliding particles is primarily associated with ∆ rather than polydispersity.
Finally, since we grow our bulk polycrystal sample from particle sedimentation, we anticipate the crystal pressure depends on the sample thickness and the mismatched density between particle and solvent. Furthermore, we anticipate that the pressure should not significantly vary with the height z, given that the depth of view ≈ 68 µm is much thinner than the sample thickness z c ≈ 280 µm. We plot the pressure averaged over the x − y plane as a function of height z in Fig. 9(c). Overall, the pressure trend is consistent with the estimated hydrostatic pressure arising from gravity, ∆ρg(z c − z)φ (blue dashed line), and we do not observe a clear decay in pressure. The pressure variation is mainly dominated by the fluctuations from the defect distribution in the sample. In Fig. 9(d) we show the histogram of the pressure fluctuation. The histogram can be described well by a Gaussian distribution (orange line) with a standard deviation ≈ 6 mPa corresponding to 15% of the mean. This magnitude of pressure fluctuation is consistent with previous results found in silica systems [33].
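The hydrostatic estimate ∆ρg(z c − z)φ can be evaluated directly with the numbers quoted in the text; the sketch reports the value at the bottom plate (z = 0) and the variation across the imaged depth:

```python
# Evaluating the hydrostatic estimate delta_rho * g * (z_c - z) * phi.
delta_rho = 30.0      # kg/m^3, particle-solvent density mismatch
g = 9.81              # m/s^2
z_c = 280e-6          # m, total sample thickness
phi = 0.67            # crystal volume fraction
depth = 68e-6         # m, imaged depth of view

P_bottom_mPa = delta_rho * g * z_c * phi * 1e3   # Pa -> mPa
dP_view_mPa = delta_rho * g * depth * phi * 1e3  # variation over imaged depth

print(f"P(z=0) ~ {P_bottom_mPa:.0f} mPa, "
      f"variation over imaged depth ~ {dP_view_mPa:.0f} mPa")
```

The few-tens-of-mPa scale is consistent with the ≈6 mPa pressure fluctuations (15% of the mean) reported in Fig. 9(d), and the modest variation over the imaged depth is consistent with the absence of a clear pressure decay with height.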
IX. APPENDIX D: PRINCIPAL AND VON MISES STRESSES IN BULK CRYSTALS
We calculate the three principal stresses σ 1 , σ 2 , and σ 3 and show them in Fig. 10. The three principal stresses are the eigenvalues of the stress tensor, and are thus independent of the coordinate orientation. The common trend between the three principal stresses corresponds to the hydrostatic pressure variation, while the difference is related to the deviatoric stress σ VM .
We calculate σ VM and visualize its distribution within the sample by excluding particles with σ VM < 22 mPa. The thresholded field is shown in Fig. 11. We find no obvious correlation between the distribution of the particles with large σ VM and the grain boundaries within the polycrystal.
Upregulation of cortical GABAA receptor concentration in fibromyalgia
Supplemental Digital Content is Available in the Text. A widespread upregulation of cortical GABAA receptors was observed using [18F]flumazenil positron emission tomography in fibromyalgia patients, which correlated with functioning and pain.
Introduction
Fibromyalgia (FM) is a chronic pain condition, characterized by widespread pain, sensory hypersensitivities, and cognitive and affective disturbances. 13 An increased ratio between excitatory and inhibitory neurotransmission, measured using magnetic resonance spectroscopy (MRS) in the brain, has been suggested to contribute to FM. 15,20 This perhaps explains symptom diversity through exaggerated gain setting. In support of an increased excitatory/inhibitory ratio, higher glutamate levels compared with control participants have been observed in patients with FM in the posterior insula, the posterior cingulate cortex, the ventrolateral prefrontal cortex, and the amygdala (systematically reviewed in Ref. 39), some of which were related to increased pain sensitivity. Patient studies on inhibitory neurotransmission are less common, but one study reported reduced GABA levels in the insula in FM. 15 Animal studies support the notion that an imbalance between (increased) glutamatergic and (decreased) GABAergic neurotransmission in the insular cortex 46 or the anterior cingulate cortex (ACC) 54 is causally related to pain sensitivity by augmenting central pain processing.
Although useful to measure glutamate and GABA neurotransmitter levels in humans, MRS is associated with difficulties and disadvantages. 1 The resonance frequency of both molecules is close to the frequencies of other metabolites, which is why glutamate is often quantified together with glutamine and GABA together with macromolecules. In addition, the in vivo concentration of GABA (1-2 mM) is at the lower end of the detectable range for MRS, 42 potentially explaining why so few studies have quantified GABA levels in FM. Finally, voxel sizes in MRS are typically relatively large (on the order of several cubic centimeters), and data acquisition is relatively long, resulting in data on only 1 or perhaps 2 brain regions in most studies.
Sponsorships or competing interests that may be relevant to content are disclosed at the end of this article.
Positron emission tomography (PET) using the tracer [ 18 F]flumazenil yields quantitative voxel-wise, whole-brain information on cortical GABA A receptors. Flumazenil binds to the benzodiazepine site of GABA A (but not GABA B ) receptors, which are densely expressed at inhibitory synapses in the cortex. 35 We reasoned that quantifying GABA A receptors using [ 18 F]flumazenil PET allows for investigation of a voxel-wise index of an important part of inhibitory neurotransmission. Cortical GABA A receptor concentration was compared between patients with FM and matched controls. We investigated whether GABA A receptor concentration is related to clinical characteristics of FM, including pain, sensitivity to mechanical stimuli, and functional, affective, and cognitive function. This work is a secondary analysis of a data set we have previously published (Ref. 38).
Participants
Twenty-six postmenopausal women with FM were included, with the diagnosis confirmed by an experienced rheumatologist (M.-A.F.) according to the 2012 Canadian Guidelines for diagnosis and management of FM. 13 Thus, patients with FM had to have chronic widespread pain for at least 3 months that was not explained by any other cause as well as sleep disturbances, fatigue, cognitive complaints, and/or other somatic symptoms and mood disorders to variable degrees. A group of 25 control participants was matched at the group level to the FM group for age, body mass index, education level, income, and physical activity level (based on the short version of the International Physical Activity Questionnaire 7 ). The McGill Institutional Review Board approved the study (certificate A08-M75-11B), and participants gave written informed consent before inclusion. Exclusion criteria for both groups were pain conditions other than FM, uncontrolled medical conditions, any diagnosed psychiatric or neurological disorders, body mass index greater than 30 kg/m 2 , and alcohol intake more than 15 glasses/week. There was no significant difference in alcohol intake between groups (controls had on average 2.7 ± 4 units/week and FM patients 1.7 ± 3 units/week, P > 0.3). Participants using benzodiazepine medication more than once a week were excluded. Participants using benzodiazepines occasionally (4 patients once a week, 2 patients biweekly) were off medication for at least 48 hours before the PET scan to avoid competitive binding with the radiotracer. Additional details on participants can be found in our previous report. 38
Data acquisition and image processing
Participants took part in three 1.5-hour long sessions: one psychophysical/questionnaire session, one magnetic resonance imaging (MRI) session, and one PET session. Thirty-eight participants out of 51 had the 3 sessions within a 2-week period, 9 within 1 month, and 4 within 3 months.
The psychophysical/questionnaire session served to assess different diagnostic domains of FM, specifically pain sensitivity, affective disturbances through questionnaires, and cognitive function through 2 validated cognitive tasks. Participants were asked to mark their current pain level on an 11-point numeric scale ranging from 0 (no pain) to 10 (worst bearable pain) and to complete questionnaires on depression (Beck Depression Inventory 2 ), anxiety (Hospital Anxiety and Depression Scale 53 ), and functional status (Fibromyalgia Impact Questionnaire [FIQ], 3 patients with FM were instructed to refer to their clinical pain, and controls were given no specific instructions). Cognitive function of participants was assessed using the Attention Network Task 10 and the Auditory Consonant Trigram test 45 because it has been shown that memory and attention are the most affected cognitive domains in FM. 50 Finally, pressure pain threshold and pressure pain tolerance were assessed using a calibrated hand-held pressure algometer with a 1-cm diameter round tip applied at constant rate on the thumbnail of the nondominant hand. Participants verbally indicated when the first sensation of pain occurred (pain threshold) and when the pain became intolerable (pain tolerance level). The average pressure of 3 assessments was used as pain threshold and tolerance threshold, respectively.
In the MRI session, a structural scan was acquired for coregistration of PET images. T1-weighted images were acquired using a 3-Tesla Tim Trio Siemens MRI scanner (Siemens, Erlangen, Germany) with a 12-channel head coil and a 3D magnetization-prepared rapid acquisition by gradient echo sequence (repetition time 2300 ms, echo time 2.98 ms, flip angle 9°, field of view 256 mm, 192 slices in the sagittal plane, resolution 1 × 1 × 1 mm, acquisition time: 10 minutes). Images from other MRI sequences were acquired, but the results are not presented here (see Ref. 38 for more details).
In the PET session, data were acquired using an ECAT High-Resolution Research Tomograph (Siemens Medical Solution, Knoxville, TN), which has a spatial resolution of 2.3 to 3.4 mm at full width at half maximum. The radiopharmaceutical [ 18 F]flumazenil was synthesized as published previously. 29 After a transmission scan for attenuation correction ( 137 Cs-source), approximately 370 MBq of [ 18 F]flumazenil was injected intravenously as a slow bolus over 60 seconds. List-mode data were acquired for 60 minutes after injection and were subsequently binned into fully 3D sinograms for a total of 17 time-frames (40, 20, 2 × 30, 3 × 60, 4 × 150, 3 × 300, and 3 × 600 seconds). Raw PET images were reconstructed by fully 3D-filtered back projection by a 3D-reprojection method and corrected for participants' head motion. The nondisplaceable binding potential (BP ND ) maps were computed because they represent the signal arising from the fraction of radiotracer that is specifically bound to the benzodiazepine site of GABA A receptors, using the idSURF method with the eroded white-matter segments as reference region. 17 Higher specific binding of [ 18 F]flumazenil can be due to higher receptor concentration or greater ligand affinity. 23 Because there is no evidence for GABA A receptors to be structurally different in FM, we refer to BP ND values as GABA A receptor concentration in the remainder of the article. Please note that subcortical regions are not well represented in the gray-matter mask because the pipeline was specifically designed for cortical gray matter. Resulting BP ND maps in the Montreal Neurological Institute standard space (ICBM152 30 ) were spatially blurred with a 7 × 7 × 7 mm full width at half maximum Gaussian smoothing kernel.
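As a sanity check, the listed frame durations can be summed to confirm that they tile the full 60-minute list-mode acquisition:

```python
# The 17 time-frames quoted in the text: 40, 20, 2x30, 3x60, 4x150,
# 3x300, and 3x600 seconds.
frames = [40, 20] + [30] * 2 + [60] * 3 + [150] * 4 + [300] * 3 + [600] * 3

n_frames = len(frames)
total_s = sum(frames)
print(n_frames, total_s)  # 17 frames, 3600 s = 60 minutes
```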
Statistical analyses
Pressure pain thresholds and tolerance, questionnaire scores, and performance in the attention network task were analyzed using independent-sample two-sided t-tests to compare patients with FM and control participants. A two-way repeated-measures analysis of variance was used to assess the interaction between performance for the different recall delays (within-subject factor Recall delay, 4 levels) and Group (2 levels: patients and controls) in the Auditory Consonant Trigram task, followed by pair-wise post hoc tests.
Whole-brain, voxel-wise statistical analysis was performed in SPM8 (revision 4667, Wellcome Trust Centre for Neuroimaging); a general linear model was applied to the BP ND maps to (1) compare FM patients with control participants, including age as a covariate of no interest and (2) investigate voxel-wise regressions with clinical scores and cognitive performance, including age as a covariate of no interest. For statistical inference, a voxel-based threshold of P < 0.001 uncorrected and a cluster extent threshold of k > 53, calculated in SPM8 based on the smoothness of the image and random-field theory, 49 were used. Voxel-wise regression analyses were considered exploratory in nature, and therefore, we did not correct for the number of regressions tested.
Participants
Patients and control participants were well matched: no significant group differences were observed for age, body mass index, education level, income, and physical activity level ( Table 1).
Increased pain sensitivity in patients with FM
Compared with control participants, patients with FM had higher current pain levels (mean ± SD, controls: 0.7 ± 0.9, FM: 4.7 ± 2.2, T = 8.8, P < 0.0001) and were more sensitive to pressure stimuli (pain threshold, controls: 5. (Fig. 1). It is important to note that because the FIQ was administered without specific instructions to the elderly control participants, their scores likely reflect the impact of some sort of pain on their daily function. Indeed, some of the controls indicated some pain from, eg, osteoarthritis in the initial interview.
Delayed recall, but not attention, is affected in the FM sample
The Auditory Consonant Trigram test was used to assess immediate memory, and results showed that, in accordance with previous studies, 45 when a delay was introduced, patients with FM recalled a significantly smaller number of consonant letters compared with control participants. This is shown by the statistically significant interaction between Group and Recall Delay (F(3, 147) = 107.8, P = 0.034, Fig. 2 left panel). Post hoc pair-wise comparisons showed that the number of consonant letters recalled was significantly smaller in patients with FM at the 9-second delay (controls: 12.
Upregulation of cortical GABA A receptor concentration in FM
In controls, the distribution of GABA A receptors was similar to what has been previously reported in healthy subjects with the highest concentrations in the visual cortex (Fig. 3). 28,29,47 Six clusters showed higher GABA A receptor concentration in patients with FM compared with controls, including the right precuneus/cuneus, superior frontal gyrus, right angular gyrus, middle occipital cortex, inferior parietal/supramarginal gyrus, and lingual gyrus ( Fig. 4 and Table 2). Mean cluster BP ND was 11% to 31% higher in patients with FM compared with control participants ( Table 3). No region showed significantly lower GABA A receptor concentration in patients compared with controls.
3.5. GABA A receptor concentration is positively associated with clinical scores

GABA A receptor concentration was associated with functional impairment across the 2 groups, as shown by the whole-brain regression analysis of flumazenil BP ND onto FIQ scores. The pattern for this association overlaps with regions showing increased GABA A receptor concentration in patients (Fig. 5).
GABA A receptor concentration was also associated with participants' pain, as shown by the whole-brain regression analysis between GABA A receptor concentration and current pain levels. There was no significant whole-brain correlation with attentional measures, pressure pain thresholds, or tolerance levels.
Discussion
In this study, we demonstrated for the first time widespread upregulation of GABA A receptors in patients with FM using [ 18 F]flumazenil PET to quantify GABA A receptor concentration. We showed that the binding potential of flumazenil was more than 10% higher in FM compared with control participants, reflecting a significant increase in GABA A receptor concentration. There was no significant GABA A receptor concentration downregulation in patients with FM. The observed change is almost certainly driven by neuronal GABA A receptors because the concentration of GABA A receptor mRNA in neurons is nearly 2 orders of magnitude greater than in astrocytes. 16 However, it is unlikely to be driven by increased concentration of neurons in the gray matter because we also observed GABA A receptor upregulation in regions without any significant gray-matter change. 38 Because GABA is the most abundant inhibitory neurotransmitter in the brain and is involved in a myriad of brain functions, an upregulation of GABA A receptors across multiple brain regions could have important consequences. More specifically, GABA A (and not GABA B ) receptors contribute to cortical levels of excitability as demonstrated with short-interval intracortical inhibition. 31,44,52 In rodent chronic pain models, increased neuronal excitability and decreased inhibition have been demonstrated in the spinal cord 6 and in the brain, mainly in the ACC, 51 changes that were positively associated with pain sensitivity. 22 The present results are therefore in line with previous reports suggesting an imbalance between neuronal excitation and inhibition in FM and other chronic pain conditions. 48 There are several mechanisms that might underlie the observed increase in GABA A receptor concentration. Common to all explanations is a decrease in GABA neurotransmitter concentration: a decrease in GABA neurotransmitter would lead to a compensatory increase in receptor concentration, 18 as we observed in FM.
However, the increase in receptor concentration would still be insufficient because symptoms persist in FM. The notion of decreased GABA levels has been supported in the insula in diabetic neuropathy 36 and has been associated with higher pain levels in the ACC in knee osteoarthritis. 40 This is also supported by another PET study showing that flumazenil BP ND correlated inversely with plasma levels of GABA. 25 GABA levels could be decreased, for example, by a primary defect in (one of) the enzyme(s) that converts glutamate into GABA, ie, glutamic acid decarboxylase, 14 which would increase glutamate concentration and result in lower GABA concentration.

Figure 1. Mean pressure pain sensitivity and mean questionnaire scores for each group. Patients with fibromyalgia have lower pressure pain threshold and tolerance levels compared with control participants. Patients with fibromyalgia have higher pain, depression and anxiety scores, catastrophizing scores, and fibromyalgia impact scores compared with control participants. BDI, Beck depression inventory; CPL, current pain level; FM, patients with fibromyalgia; FIQ, Fibromyalgia Impact Questionnaire; HAD, hospital anxiety and depressions scale; PCS, pain catastrophizing scale; ***P < 0.001, **P < 0.005, *P < 0.05.

Figure 2. Impaired immediate memory in patients with fibromyalgia. Left panel, patients with fibromyalgia recalled fewer consonant letters compared with control participants as soon as a recall delay was introduced. Patients and control participants were both able to recall trigrams with no delay, but the longer the delay between test and recall (higher difficulty), the lesser the patients could recall the trigrams, which shows a significant impairment of immediate memory in patients with fibromyalgia. Right panel, no difference was observed in the performance on the attention network task between patients with fibromyalgia and control participants. FM, patients with fibromyalgia; NS, nonsignificant, *P < 0.05.

January 2020 · Volume 161 · Number 1 www.painjournalonline.com
As outlined in the introduction, increased glutamate levels have indeed been observed in FM and other chronic pain conditions. 11,15,20 Alternatively, GABA A receptors might become excitatory instead of inhibitory 21,41 because of a change in the neuron's depolarization pattern, as has been described for the spinal cord in rodent models of chronic pain. 4 In this scenario, GABA neurotransmitter production would be reduced to compensate for the switch from inhibition to excitation. As described above, decreased GABA levels in chronic pain have been observed in certain brain regions using MRS, specifically the insula and the ACC, presumably because these areas are important pain-processing regions. 33,37,43 Our study replicates the findings of increased GABA A receptor concentration in the ACC and the insula.
However, because our study afforded a whole-brain analysis approach, impossible with MRS, increases in GABA A receptor concentration were observed in additional brain regions. The regions with increased GABA A receptor concentration seem to belong to 2 main brain networks: the attention network (superior frontal gyrus and supramarginal gyrus) and the default-mode network (precuneus and medial prefrontal cortex). Interestingly, enhanced connectivity between the medial prefrontal cortex, a key node of the default-mode network, and other regions of the default-mode network has been linked to patients' degree of rumination about their pain. 27 Furthermore, intrinsic connectivity within the default-mode network has been observed to be inversely related to GABA levels in the default-mode network. 24 Therefore, our findings of increased GABA A receptor concentration in the default-mode network and perhaps the attention network might reflect the hypervigilance and impaired disengagement from pain commonly observed in FM and other chronic pain conditions. 13 However, it should be noted that we did not observe a (linear) relationship between GABA A receptor concentration and catastrophizing scores or performance in the attention network task, which should at least partly be related to hypervigilance and impaired disengagement. Nevertheless, increased GABA A receptor concentration was positively correlated with functional status (FIQ score) and current pain levels, and negatively correlated, in the hippocampus, with performance in the memory task, indicating that there likely is a clinical relevance of altered receptor levels. This notion is supported by the observation that intracortical inhibition, as measured with short-interval intracortical inhibition, is reduced in FM and correlates with fatigue. 32

It is important to point out that because the correlations observed between GABA A receptor levels and clinical measures are present only across groups, it does not appear that an imbalance between increased excitation and decreased inhibition is specific to FM; rather, it might be a common feature across different types of chronic pain. Indeed, increased insular glutamate levels have been reported in painful diabetic neuropathy 36 and migraine. 5 Decreased GABA levels have been reported in the insula in painful diabetic neuropathy, 36 and GABA levels have been associated with higher pain levels in the ACC in knee osteoarthritis. 40 Also, in this study, the scatter plots of the correlations of GABA A receptor concentration with FIQ scores and current pain levels, respectively, clearly show that some degree of functional impairment/current pain in the controls (eg, back pain, osteoarthritis, and ankle pain) contributed to the significant correlation. Thus, it is unlikely that dysregulation of excitatory and inhibitory neurotransmitter systems constitutes the etiology of FM or of any other pain condition for which increased glutamate/decreased GABA has been reported. Rather, it might be a factor contributing to the maintenance of various symptoms across different chronic pain conditions. Importantly, it might be a factor that can be therapeutically targeted. For example, pregabalin, which blocks the voltage-dependent calcium channels involved in glutamate release through an increase in Ca2+ influx, 8,12 has been shown to reduce glutamate levels in the posterior insula of patients with FM, 19 and memantine, an NMDA receptor antagonist, improved symptoms in FM compared with placebo. 34
Limitations
This study has some limitations that should be noted. Although using [18F]flumazenil PET to measure GABA A receptors has the advantage that it allows whole-brain (cortical) quantification, it does not directly measure GABA levels. Nevertheless, the observed GABA A receptor alterations indicate some sort of dysregulation of the GABAergic system, perhaps resulting from reduced GABA levels. To truly comment on an altered imbalance between increased excitatory and decreased inhibitory neurotransmission, concentrations of inhibitory and excitatory neurotransmitters ought to be measured. Furthermore, GABA A receptor concentration was not measured in subcortical areas or in the spinal cord, and GABA B receptor levels cannot be assessed with flumazenil. This limits our interpretation of the present results as different neurotransmitter systems in different regions could interact to result in the observed increase in cortical GABA A receptor levels. In addition, measuring GABA A receptors in vivo in humans is a coarse measurement and does not reveal subcellular or molecular processes. Indeed, different diseases have been related to decreased GABA levels in the brain (reviewed in Ref. 42) that might be related to different pathophysiological mechanisms. However, macro-level findings often seem to reflect finer metrics of GABA function and provide a link between up-stream biochemistry and macro-level behavioral function. 9,42 In addition, there could be instances in which it is sufficient or even advantageous to target more downstream processes, for example, because of redundant or compensatory neurobiological pathways. In support of this notion, (spinal) administration of GABAergic compounds shows considerable promise to reverse pain phenotypes in animal models. 26

Figure caption: Increases in GABA A receptor concentration (BPND), patients with fibromyalgia > control participants; whole-brain results are presented at a voxel-wise threshold of P < 0.001 uncorrected and a cluster-extent threshold of k > 53 (expected number of voxels per cluster). Please note that no voxel survived P < 0.001 uncorrected for the control participants > patients with fibromyalgia comparison; L, left; R, right.

Table caption: Mean (SD) Z-score for each cluster significantly different between control participants and patients with fibromyalgia, for each group, and percentage of increase in BPND from controls to patients; L, left; R, right.
Conclusion and perspectives
We show that cortical GABA A receptor concentration is upregulated in FM and associated with pain levels and function. This upregulation occurs in a widespread network of brain regions and is not restricted to typical pain-processing regions. This finding adds evidence to the hypothesis of an imbalance between excitatory and inhibitory neurotransmission, which seems to apply to different chronic pain conditions. Future studies should test for glutamate and GABA neurotransmitter levels as well as the expression of receptors in the same patients, ideally in longitudinal designs, to further investigate the imbalance hypothesis.
Research on the distribution of aerodynamic noises of high-speed trains
This paper established a computational model for the aerodynamic noise of a high-speed train in 3-train formation, including 3 car bodies, 6 bogies, 2 windshields and 1 pantograph system. Based on Lighthill's acoustic theory, large eddy simulation (LES) and the FW-H model were adopted to numerically simulate the aerodynamic noise of the train and to analyze the aerodynamic flow behavior and noise distribution of the whole train. Results showed that the main aerodynamic noise sources of high-speed trains were the pantograph, pantograph region, streamlined region of the head train, bogies, bogie regions, windshield regions, air-conditioning units and other regions. The pantograph head, the junction of the upper and lower arms, and the chassis region were the main aerodynamic noise sources of the pantograph. Compared with the other 5 bogies, the bogie at the first end of the head train was the main aerodynamic noise source. In addition, vortex shedding and flow separation were the main causes of the aerodynamic noise of high-speed trains. When the train ran at 300 km/h and 400 km/h, the main energy of the whole train was concentrated in the range 1000 Hz-4000 Hz, and the aerodynamic noises were broadband in the analyzed frequency domain. At the longitudinal observation point 25 m away from the center line of the track and 25 m away from the nose tip of the head train, the total noise sound pressure level reached maximum values of 96.5 dBA and 101.4 dBA, respectively. Compared with the inflow, the wake flow had a greater influence on the aerodynamic noise around the train. The main radiation directions of the pantograph aerodynamic noise were the left and right sides of the pantograph head, and its main radiation energy lay in the mid-high frequencies. In the high-frequency range, the pantograph head made the greatest contribution to the far-field aerodynamic noise.
Introduction
With the constant development of high-speed trains, their noise problem becomes increasingly prominent. As a comfort index that can be directly perceived by drivers and conductors, noise has gradually become a key factor affecting the commercial operation of high-speed trains [1]. At high speed, the dynamic environment of train operation is dominated by aerodynamic action [2]. When the train speed exceeds 300 km/h, or once wheel-rail noise has been treated, aerodynamic noise replaces wheel-rail noise as the main sound source of high-speed trains [3]. Aerodynamic noise caused by the operation of high-speed trains has thus become a factor restricting speed. Regarding Japanese S250 high-speed trains, their design speed and experimental speed exceed 350 km/h, but their aerodynamic noise reaches an unbearable level, so these trains have to run at 300 km/h. Similarly, the design speed of maglev trains is over 430 km/h; limited by noise criteria, they have to run at 200 km/h [4-6].
High-speed trains make constant progress through introduction, absorption and re-innovation. Compared with research on structures and systems, research on aerodynamic noise is relatively less developed. At present, the aerodynamic noise of high-speed trains is studied mostly through numerical computation and, due to the complexity of the problem, most computations focus on the aerodynamic noise of a certain part of the train. Xiao [7] took the longitudinal symmetry plane of a high-speed train as the researched object, established a large eddy simulation model for it, studied the spectral characteristics and change rule of aerodynamic noise in that plane, and obtained the optimal shape of the junction. Liu [8] established a mathematical model for the three-dimensional flow field of the head train, used the Lighthill acoustic analogy to compute far-field aerodynamic noise, and applied a broadband noise source model to compute aerodynamic noise on the train surface. Zhang [9] adopted detached eddy simulation (DES) and the Lighthill acoustic analogy to study different pantograph structures and installation positions, and obtained a pantograph layout proposal that reduced the sound pressure level of the whole train by up to 3.2 dBA. Yan [10] built a computational model including head, middle and tail trains and computed the noise source intensity on the body surface and the far-field noise; the model did not consider bogies or give a detailed analysis of the far-field noise results. Yuan [11] established a computational model including head and tail trains, computed the aerodynamic noise source intensity and the surface noise, and improved computational accuracy compared with models of the head train only. Zhang [12] built an aerodynamic model including head, middle and tail trains and computed the near-field and far-field aerodynamic noise, but only took body-surface noise sources into account, neglecting bogies and pantograph. Sun [13] established an aerodynamic model of 3-train formation, analyzed the flow field characteristics of the train head, junctions and tail, and studied the contribution of different parts of the body to aerodynamic noise; bogies and pantograph were not considered in the model. Liu [14] used the half-free-space Green function to solve the FW-H equation according to the actual situation of high-speed trains, established an acoustic integral formula considering the ground effect, studied the impact of the ground effect on aerodynamic noise, and made the computation of noise under the train more accurate. Yang [15] numerically simulated and analyzed the external flow field and aerodynamic noise with and without an air deflector and pointed out that a reasonable design could enable the deflector to guide the airflow well and so reduce aerodynamic noise around the power collection equipment. Du [16] adopted a separated-vortex turbulence model and acoustic analogy theory to predict the aerodynamic noise of a simplified pantograph; results showed that the beams at the top of the pantograph were the main aerodynamic noise sources. Yu [17] adopted a nonlinear acoustic solver and acoustic analogy theory to carry out numerical research on 3 kinds of pantograph air deflectors, and found that the sound pressure level decreased by 3 dB when the deflector structure of the pantograph was designed similar to a windshield in the span-wise direction. Huang [18] established an analytical model for the aerodynamic noise of bogies, focused on the case where bogies were the noise sources, and analyzed the noise reduction effect of bogie apron boards on the noise radiated to both sides. Reference [19] carried out numerical research on the aerodynamic noise of trailer bogies and found that their far-field aerodynamic noise was broadband, with directivity, attenuation and amplitude characteristics.
Currently, many studies have been conducted on the aerodynamic noise of high-speed trains. However, due to the complexity of the problem, computations of the overall aerodynamic noise generally consider only the body surface: the structure surface of the body is taken as the aerodynamic noise source, while the pantograph and bogies [20-22], which are main aerodynamic noise sources of high-speed trains, are neglected. The difficulty is that the pantograph and bogies have complex structures whose dimensions are small compared with the body, which makes it hard to establish an aerodynamic computational model including body, pantograph and bogies. This paper adopted a modular modeling method: the overall aerodynamic model of the train body was built first, the pantograph and bogie models were built separately and assembled into the corresponding positions of the body, and a computational model for the aerodynamic noise of the whole train, composed of head train, middle train, tail train, 6 bogies, 3 air-conditioning units, 1 pantograph region and 1 pantograph, was established. The aerodynamic flow behavior of the high-speed train, the distribution of its aerodynamic noise and the propagation characteristics of the pantograph aerodynamic noise were then obtained.
Analytical theories of aerodynamic noises of high-speed trains
Aerodynamic noise is the result of the interaction between fluid and structure when fluid flows over a solid surface. As a general-purpose fluid computation package, Fluent integrates a strong capability for computing aerodynamic noise. Fluent can directly obtain the generation and propagation of sound waves by solving the fluid dynamic equations. This direct simulation method is called CAA (Computational Aero-Acoustics); it accurately resolves viscosity and turbulence effects by directly solving the unsteady N-S equations or the unsteady Reynolds-averaged (RANS) equations [23, 24]. The CAA method requires high-precision numerical schemes, fine meshes and non-reflecting acoustic boundary conditions, so its computational cost is high and it cannot currently be adopted for the aerodynamic noise problem of high-speed trains. Another method available in Fluent is the widely used Lighthill acoustic analogy, also known as the AAA (Aero-Acoustic Analogy) method. Different from CAA, the acoustic analogy decouples the wave equation from the flow equation: the unsteady flow equations are solved first, the result is taken as the noise source, and the acoustic solution is then obtained separately by solving a wave equation. This improves computational efficiency and makes the solution of large, complex aeroacoustic problems possible. Based on the mass and momentum conservation equations of fluid mechanics, Lighthill [25] deduced the wave equation of aerodynamic noise generated by turbulence within a small-scale region surrounded by static fluid:

$$\frac{\partial^2 \rho'}{\partial t^2} - c_0^2 \nabla^2 \rho' = \frac{\partial^2 T_{ij}}{\partial x_i \partial x_j}, \tag{1}$$

wherein $\rho'$ is the disturbance quantity of the fluid density, $\rho' = \rho - \rho_0$; $\rho$ and $\rho_0$ represent the density in the disturbed and undisturbed states, respectively; $T_{ij}$ is the Lighthill stress, $T_{ij} = \rho u_i u_j - \tau_{ij} + \delta_{ij}(p - c_0^2 \rho')$, where $\tau_{ij}$ is the viscous stress, $\delta_{ij}$ is the Kronecker delta and $c_0$ is the sound velocity. The left-hand side of Eq. (1) is the same as that of the general acoustic wave equation, while the right-hand side is equivalent to a sound source term, called the Lighthill source term. If the right-hand side were 0, the equation would reduce to the general acoustic wave equation with sound speed $c_0$ in static fluid. As a matter of fact, the right-hand side of Eq. (1) contains the flow variables themselves, so Eq. (1) is not an acoustic wave equation in a strict sense; in essence, it is still a fluid flow equation. However, Lighthill pointed out that Eq. (1) is a typical acoustic wave equation if the right-hand side is regarded as a quadrupole source term. As a result, the method is called the "acoustic analogy" method.

Taking the Lighthill equation as the foundation, FW-H (Ffowcs Williams and Hawkings) applied the generalized Green function and the generalized Lighthill acoustic analogy to the problem of sound generated by flow past arbitrary solid boundaries, namely the sound of objects moving in a fluid, and obtained the widely used FW-H equation [23]:

$$\frac{1}{c_0^2}\frac{\partial^2 p'}{\partial t^2} - \nabla^2 p' = \frac{\partial^2}{\partial x_i \partial x_j}\left[T_{ij} H(f)\right] - \frac{\partial}{\partial x_i}\left[\left(P_{ij} n_j + \rho u_i (u_n - v_n)\right)\delta(f)\right] + \frac{\partial}{\partial t}\left[\left(\rho_0 v_n + \rho (u_n - v_n)\right)\delta(f)\right], \tag{2}$$

wherein $c_0$ is the sound velocity; $\rho$ and $\rho_0$ represent the density in the disturbed and undisturbed states, respectively; $H(f)$ is the Heaviside function and $\delta(f)$ the Dirac delta of the surface function $f$; $t$ is the computational time; $T_{ij}$ is the Lighthill stress; $\nabla^2$ is the Laplace operator; $u_n$ is the velocity component of the fluid perpendicular to the integration surface; $v_n$ is the moving velocity component of the integration surface; $P_{ij} n_j$ is the surface force term; and $x_i$, $x_j$ are spatial position coordinates.

The right-hand side of the FW-H equation can also be regarded as sound source terms. The first term is the Lighthill source, a quadrupole; it exists only outside the surface of the moving object and is 0 on the surface. The second term is the source caused by the surface fluctuating pressure (a force distribution); it is a dipole source. The third term is the source caused by the surface acceleration (a distribution of fluid displacement); it is a monopole source. The second and third source terms are generated only at the solid surface.
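The source hierarchy above implies different speed scalings of radiated acoustic power in classical aeroacoustic theory: roughly $v^8$ for quadrupole (free-turbulence) sources and $v^6$ for dipole (surface-pressure) sources. As a minimal illustrative sketch (textbook scaling, not a computation from this paper), the implied dB increase when the speed rises from 300 km/h to 400 km/h can be evaluated as:

```python
import math

def spl_increase_db(v1, v2, exponent):
    """dB change in radiated acoustic power when speed rises from v1 to v2,
    assuming power scales as v**exponent."""
    return 10.0 * exponent * math.log10(v2 / v1)

# Classical scalings: dipole (surface pressure) ~ v^6,
# quadrupole (free turbulence) ~ v^8.
dipole = spl_increase_db(300.0, 400.0, 6)      # ~7.5 dB
quadrupole = spl_increase_db(300.0, 400.0, 8)  # ~10.0 dB
print(round(dipole, 1), round(quadrupole, 1))
```

Because dipole sources grow more slowly with speed than quadrupoles but dominate at train Mach numbers, surface-pressure (dipole) sources are the ones usually tracked in train aeroacoustics.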
Geometrical model
This paper took a high-speed train as the researched object and selected a 3-train formation including head train, tail train and a middle train with pantograph. Each train contained two bogies. As the train body was not smooth, the model was simplified and some parts with small dimensions were removed. The windshields at the junctions of the train bodies were modeled as the external (fully enclosed) type. The simplified train model is shown in Fig. 1. Head train and tail train were set symmetrically. According to the dimension parameters of the train, its length, width and height were 76.55 m, 3.26 m and 3.64 m, respectively.
Computational domain
The computational domain for the aerodynamic noise of the high-speed train is shown in Fig. 2. The train length L = 76.55 m was taken as the benchmark: the length, width and height of the computational domain were 4L, L and 0.5L, respectively. The distance between the nose tip of the head train and the fluid entrance was L; the distance between the nose tip of the tail train and the fluid exit was 2L; the distance between the train and the ground connected with the track was 0.2 m. Cross section abcd right in front of the high-speed train was the inlet boundary and was set as a velocity inlet condition; in the computations, the velocities were 300 km/h (83.3333 m/s) and 400 km/h (111.1111 m/s). Cross section efgh right behind the high-speed train was the outlet boundary and was set as a pressure outlet condition at 1 standard atmospheric pressure. Cross sections bfgc right above the high-speed train, cghd at its left side and aefb at its right side were set as symmetric boundary conditions. The surface of the high-speed train was set as a fixed, no-slip wall boundary. To simulate the ground effect, ground aehd was set as a slip (moving) ground whose slip velocity equaled the running speed of the high-speed train.
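The domain proportions described above are internally consistent: the inlet gap, the train itself and the outlet gap must together fill the domain length. A small sketch (values taken from the text) that checks this bookkeeping:

```python
L = 76.55  # train length in metres, used as the benchmark dimension

# Domain dimensions stated in the text: 4L x L x 0.5L
domain_length = 4 * L   # 306.2 m
domain_width = L        # 76.55 m
domain_height = 0.5 * L # 38.275 m

inlet_to_nose = L       # nose tip of head train to inlet
tail_to_outlet = 2 * L  # nose tip of tail train to outlet

# inlet gap + train length + outlet gap should equal the domain length
assert abs(inlet_to_nose + L + tail_to_outlet - domain_length) < 1e-6
print(domain_length, domain_width, domain_height)
```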
Meshes of the high-speed train
ICEM CFD was adopted to generate the meshes; unstructured meshes were selected. The maximum element size in the flow field was 1500 mm; the biggest mesh on the train surface was 60 mm; the biggest mesh on the pantograph was 10 mm; the biggest mesh on the bogies was 30 mm; the biggest mesh on the air-conditioning surfaces was 40 mm. The train body, bogie and pantograph surfaces adopted triangular meshes. The size of the three-dimensional meshes was amplified according to a certain scale factor, and hexahedral meshes were used far away from the train body, with pentahedral pyramid meshes used for the transition from tetrahedral to hexahedral meshes. The total number of meshes was about 74,220,000; the mesh model is shown in Fig. 3.
Aerodynamic flow behavior of high-speed trains
Fig. 4 displays the contour of pressure on the surface of the high-speed train running at 300 km/h. As shown in Fig. 4(b), the maximum pressure of the head train occurred at the nose tip and was 4427 Pa, while the maximum negative pressure occurred at the cowcatcher and was 10578 Pa. This is because the flow stagnates at the nose of the head train (the local velocity is 0) and the airflow separates there, so the positive pressure at the nose position of the head train is the maximum; owing to the resistance on the windward side of the exhaust barrier of the head train, airflow separation occurred there and the flow quickly entered the bogie region, so the negative pressure on the leeward side of the head train was the maximum. As shown in Fig. 4(c), the maximum positive pressure of the tail train was at the window of the driver's cabin and was 1816 Pa, caused by the window geometry of the tail train; the maximum negative pressure was at the leeward side of the air conditioner of the tail train and was 6148 Pa. The maximum positive pressure of the whole train was at the windward side of the pantograph head and was 4914 Pa: the cross section of the pantograph-head skateboard is rectangular, and the windward face of the skateboard acts as a blocking surface, so it carries the maximum positive pressure on the surface of the whole train. The maximum negative pressure of the whole train was in the bogie region and was 20150 Pa, because the flow in the bogie region is turbulent and the vortices there are very complex. The aerodynamic noise sources of high-speed trains are mainly dipole noises; a dipole source is determined by the fluctuating pressure on the train surface [10], so the magnitude of the fluctuating pressure can be used to reflect the noise radiation at the generating surface. According to the three governing equations of the flow field together with the turbulent kinetic energy and turbulent dissipation rate equations, the turbulent kinetic energy can be used to assess the distribution characteristics of the noise sources on the train surface.
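As a sanity check on the reported nose pressure, the stagnation overpressure of an incompressible free stream is q = ½ρv². A minimal sketch, assuming a standard sea-level air density of 1.225 kg/m³ (an assumption; the paper does not state the density used), lands close to the 4427 Pa reported at the nose, with the residual plausibly due to compressibility and local geometry:

```python
rho = 1.225            # assumed air density at sea level, kg/m^3
v = 300 / 3.6          # 300 km/h converted to m/s (~83.33 m/s)
q = 0.5 * rho * v**2   # free-stream dynamic (stagnation) pressure, Pa
print(round(q))        # ~4253 Pa vs 4427 Pa reported at the nose tip
```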
Fig. 5 displays the distribution of turbulent kinetic energy on the surface of the high-speed train and the pantograph region. As shown in Fig. 5, areas of high turbulent kinetic energy lay in the transition between the nose tip and the non-streamlined part of the head train, between the windshield at the first end of the train and the pantograph region, between the air conditioning and the pantograph region, and between the pantograph region and the windshield at the second end of the train. In addition, turbulence from the front end of the pantograph air deflector impinged on the pantograph, resulting in large noise there, and continued to impinge on the rear of the pantograph region, which, together with the vortex shedding of the pantograph, further increased the noise radiation in the pantograph region. Similarly, a high turbulent kinetic energy area also existed at the windshield at the A end of the car, indicating that this windshield was also a main noise source region. Thus, the pantograph, pantograph region, nose tip of the head train, cowcatcher of the head train, bogie regions and windshield regions were the main noise sources of the high-speed train; moreover, the sound source areas were located where airflow separated easily and turbulence was violent. Fig. 6 displays the distribution of the vorticity iso-surface based on the Q-criterion (with a value of 0.001) when the high-speed train ran at 300 km/h. As shown in Fig. 6, the main vortices were in the streamlined region of the head train, bogie regions, windshield regions, pantograph region, air deflector region of the air conditioners and the non-streamlined region of the tail train. It can again be seen that the main aerodynamic noise sources were the pantograph, pantograph region, streamlined region of the head train, bogies, bogie regions, windshield regions and air-conditioning regions; the vortex shedding and flow separation of the whole train were the main causes of its aerodynamic noise. As shown in Fig. 6(b), vortices in the pantograph and its region were more violent than those in other areas. Large vortices existed at the pantograph head, the junction of the upper and lower arms and the chassis region, indicating that these were the main aerodynamic noise sources of the pantograph; large vortices could also be found in the pantograph wake, showing that the pantograph region was likewise a main noise source of the whole train, and that the main sources lay where the curvature of components changed greatly or vortices changed drastically. As shown in Fig. 6(c) and Fig. 6(d), the vortex distribution of the bogie region at the first end of the head train covered a wider range than that at the first end of the tail train; therefore, compared with the other bogies, the bogie at the first end of the head train was the main aerodynamic noise source of the whole train.
Analysis on fluctuating pressure at the surface of high-speed trains
Research showed that the aerodynamic noise of high-speed trains was mainly determined by the fluctuating pressure on the body surface [13]. Therefore, it was necessary to analyze the change rule of the fluctuating pressure on the train body surface. When the train ran at a certain speed, a comparative analysis of the fluctuating pressure at various observation points showed that the fluctuating pressure in the streamlined part of the head train changed greatly [13], reaching its maximum at the nose tip. This was because the airflow flowing past the nose tip of the train separated: a part flowed upward along the body surface while another part flowed downward along the bottom, producing the most intense airflow disturbance and separation at the nose tip. As a result, this paper took the observation point at the nose tip of the head train as an example to analyze the time-domain and frequency-domain characteristics of the fluctuating pressure. Fig. 7 shows the time-domain curve of the fluctuating pressure at this observation point when the train speed was 300 km/h, and Fig. 8 shows the corresponding power spectral density. As displayed in Fig. 7 and Fig. 8, the pressure on the body surface of the high-speed train fluctuated randomly and showed irregular changes in the time domain. The fluctuating pressure was a broadband signal in the frequency domain and its energy was mainly at low frequencies: within the range up to 800 Hz, the power spectral density dropped quickly with increasing frequency, while above 1000 Hz it became stable and changed little.
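The time-domain/frequency-domain analysis described above amounts to estimating a power spectral density from a pressure time series. A minimal sketch on a synthetic signal (the 120 Hz tone and noise level are invented for illustration, not taken from the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 10_000                     # sampling rate in Hz (hypothetical)
t = np.arange(0, 1.0, 1 / fs)   # one second of samples

# Synthetic surface-pressure signal: strong low-frequency content
# plus weak broadband noise, mimicking the trend described for Fig. 8.
p = 50 * np.sin(2 * np.pi * 120 * t) + 5 * rng.standard_normal(t.size)

# One-sided power spectral density via the periodogram
P = np.fft.rfft(p)
psd = (np.abs(P) ** 2) / (fs * t.size)
freqs = np.fft.rfftfreq(t.size, 1 / fs)

peak = freqs[np.argmax(psd)]
print(peak)  # energy concentrated at the low-frequency tone (~120 Hz)
```

In practice a Welch-style average over several windows would be used to reduce the variance of the estimate; the single-window periodogram keeps the sketch short.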
Distribution characteristics of longitudinal aerodynamic noise of high-speed train
Fig. 9 displays the comparison curves of sound pressure levels at the longitudinal observation points when the high-speed train ran at 300 km/h and 400 km/h, respectively. These observation points were 25 m away from the center line of the track and 3.5 m above the rail surface; 78 noise observation points were distributed longitudinally along the train with a spacing of 1 m between adjacent points [26, 27]. As displayed in Fig. 9, the sound pressure level of the longitudinal aerodynamic noise presented a decreasing trend along the train, with the total sound pressure level reaching its maximum just behind the bogie at the first end of the head train. From the nose tip of the head train to x = 12 m, the noise sound pressure level increased by up to 17.1 dBA; afterwards, the noise sound pressure level of the whole train changed little. At x = 12 m from the nose tip of the head train, the sound pressure level reached the maximum among all observation points of the whole train, namely 96.5 dBA. In the streamlined part of the tail train, sound pressure levels decreased rapidly, with a maximum attenuation of 9.2 dBA. In the same way, total sound pressure levels reached local maxima around the bogie at the second end of the head train, the bogies at the first and second ends of the middle train, and the bogies at the first and second ends of the tail train, with maximum sound pressure levels of 93.2 dBA, 93.3 dBA, 92.2 dBA, 91.9 dBA and 93.5 dBA, respectively.
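Sound pressure levels in dBA combine on an energy basis, not arithmetically. Purely as an illustration of that dB bookkeeping (the five local maxima above are measured at different positions along the train, not at a single receiver, so this sum is not a physical prediction from the paper):

```python
import math

# local maxima reported near the five trailing bogie regions, dBA
levels = [93.2, 93.3, 92.2, 91.9, 93.5]

# incoherent (energy) summation of sound pressure levels
total = 10 * math.log10(sum(10 ** (l / 10) for l in levels))
print(round(total, 1))  # ~99.9 dBA if all five acted at one receiver
```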
From the comparative analysis of Fig. 9, the sound pressure levels at the longitudinal observation points increased markedly with the running speed of the train. At 400 km/h, the sound pressure level reached its maximum among all observation points of the whole train at = 12 m, namely 101.4 dBA. The local maxima around the bogie at the second end of the head train, the bogie at the first end of the middle train, the bogie at the second end of the middle train, and the bogies at the first and second ends of the tail train were 97.9 dBA, 98.3 dBA, 97.2 dBA, 96.9 dBA and 98.3 dBA, respectively. When the running speed increased from 300 km/h to 400 km/h, the maximum sound pressure level of the whole train increased by 4.9 dBA. Fig. 10 compares the one-third-octave spectra at the observation point with the maximum longitudinal sound pressure level. As shown in Fig. 10, the aerodynamic noise of high-speed trains had a wide frequency spectrum, i.e., broadband noise whose main energy lay between 1000 Hz and 4000 Hz. With increasing running speed, the aerodynamic noise energy shifted toward higher frequencies. The computational result was also compared with the experimental result from reference [28]. The trends were similar, and the difference was small at high frequencies, where the noise is mainly aerodynamic. At low frequencies, however, the measured noise included both mechanical and aerodynamic components, so the computational result was lower than the experimental one.
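The one-third-octave presentation referred to above uses standard band centre frequencies. Below is a small helper computing the centre frequencies that cover the 1000-4000 Hz range named in the text; the base-2 spacing referenced to 1 kHz is an assumption (IEC-style base-10 spacing differs slightly).

```python
import math

def third_octave_centers(f_lo, f_hi, ref=1000.0):
    """Base-2 one-third-octave centre frequencies covering [f_lo, f_hi]."""
    n_lo = math.ceil(3 * math.log2(f_lo / ref))    # lowest band index
    n_hi = math.floor(3 * math.log2(f_hi / ref))   # highest band index
    return [ref * 2 ** (n / 3) for n in range(n_lo, n_hi + 1)]

bands = third_octave_centers(1000.0, 4000.0)   # 7 bands from 1 kHz to 4 kHz
```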
Distribution of aerodynamic noises in pantograph area
Five noise observation points were distributed horizontally ( -axis) along the train, at places 3.5 m from the track, 48 m from the nose tip of the head train, and 6.25 m, 12.5 m, 25 m, 50 m and 100 m from the pantograph area (each observation point twice as far as the previous one). Fig. 11 compares the sound pressure levels at the noise observation points in the pantograph area when the high-speed train ran at 300 km/h and 400 km/h. As shown in Fig. 11: (1) The attenuation amplitudes of horizontal noise between adjacent observation points were 1.7 dBA, 0.1 dBA, 2.7 dBA and 2.6 dBA, respectively, at 300 km/h.
(2) The attenuation amplitudes of horizontal noise between adjacent observation points were 1.8 dBA, 0.6 dBA, 2.7 dBA and 2.4 dBA, respectively, at 400 km/h. Therefore, when = 25 m, the total sound pressure level decreased by 2.7 dBA between two adjacent observation points whose distance doubled.
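The reported 2.7 dBA drop per doubling of distance can be compared against idealized spreading laws: spherical (point-source) spreading gives about 6 dB per doubling, while cylindrical (line-source) spreading gives about 3 dB, and a train is closer to an extended line source. The following is a back-of-envelope check, not a computation from the paper.

```python
import math

def doubling_attenuation(geometric_exponent):
    """SPL drop in dB per doubling of distance when the mean-square
    pressure decays as 1 / r**geometric_exponent."""
    return 10 * geometric_exponent * math.log10(2)

point_source = doubling_attenuation(2)   # spherical spreading, ~6.0 dB
line_source = doubling_attenuation(1)    # cylindrical spreading, ~3.0 dB
```

The measured 2.7 dBA sits just below the cylindrical value, consistent with an extended source observed at moderate distance.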
(3) When the running speed increased from 300 km/h to 400 km/h, the sound pressure levels at the 5 horizontal observation points increased by 5.7 dBA, 5.6 dBA, 5.1 dBA, 5.1 dBA and 5.3 dBA, respectively.
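The speed dependence implied by these increases can be summarized by an effective power-law exponent n, with ΔSPL = 10·n·log10(v2/v1). The sketch below uses the mean of the five reported increases; the dipole value often quoted for aeroacoustic sources (n = 6, i.e. about 7.5 dB for 300 to 400 km/h) is given only as a reference point, not a claim from the paper.

```python
import math

def speed_exponent(delta_spl_db, v1, v2):
    """Effective exponent n in SPL = const + 10 * n * log10(v)."""
    return delta_spl_db / (10 * math.log10(v2 / v1))

increases = [5.7, 5.6, 5.1, 5.1, 5.3]        # dBA, 300 -> 400 km/h
mean_increase = sum(increases) / len(increases)
n = speed_exponent(mean_increase, 300.0, 400.0)   # roughly 4.3
```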
Propagation characteristics of aerodynamic noises of pantograph
The analysis above showed that the pantograph was the main aerodynamic noise source of the high-speed train. Therefore, this paper studied the radiation characteristics of the aerodynamic noise of the pantograph at a running speed of 300 km/h. The time-domain fluctuating-pressure signals on the pantograph were extracted from the flow field, and the boundary element method was adopted to solve for the sound pressure. The acoustic software Virtual.Lab was used to compute acoustic propagation at the pantograph surface: the fluctuating pressure of the pantograph was mapped onto the acoustic meshes as sound pressure boundary conditions, the surface fluctuating-pressure data were transformed by the Discrete Fourier Transform (DFT), and an acoustic response computation then yielded the radiation characteristics of the pantograph's aerodynamic noise. Fig. 12 shows the acoustic meshes of the pantograph; the largest mesh size satisfied the requirement of the maximum frequency. Fig. 13 compares the radiated aerodynamic noise at 500 Hz, 1000 Hz and 2000 Hz.
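The mapping step described above (a DFT turning each node's time-domain fluctuating pressure into per-frequency complex boundary loads for the BEM solver) reduces, per surface node, to the following sketch. The sampling parameters and the random stand-in signal are assumptions; Virtual.Lab's actual data formats are not reproduced here.

```python
import numpy as np

fs = 8192                          # Hz, assumed CFD output sampling rate
n = 4096                           # samples per node record (0.5 s)
rng = np.random.default_rng(1)
p_node = rng.normal(size=n)        # stand-in for one node's pressure history

# DFT of the pressure record -> one complex amplitude per frequency bin.
# A BEM solver consumes these amplitudes as boundary conditions, one
# acoustic response computation per frequency of interest.
spectrum = np.fft.rfft(p_node) / n
freqs = np.fft.rfftfreq(n, 1 / fs)          # bin spacing fs / n = 2 Hz

k = int(np.argmin(np.abs(freqs - 1000.0)))  # bin nearest 1000 Hz
p_hat_1000 = spectrum[k]                    # complex load at ~1000 Hz
```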
According to the comparative analysis of Fig. 13, the aerodynamic noise of the pantograph of the high-speed train was mainly distributed in the mid-to-high frequencies, and the low-frequency noise energy was lower than the high-frequency energy. At 300 km/h, the main radiation directions of the pantograph's aerodynamic noise at 500 Hz were the upper left and upper right. At 1000 Hz, the main radiation direction was directly above, and the two sides were not main radiation directions. At 2000 Hz, the main radiation directions were again the upper left and upper right, and the energy there was more intense than at 500 Hz, which was mainly attributed to the contribution of the pantograph head to the radiated aerodynamic noise energy. As a result, the aerodynamic noise of the pantograph came mainly from the pantograph head. Thus, a low-noise optimization design of the pantograph head would yield the most obvious noise reduction effect.
Conclusions
Based on Lighthill's acoustic theory, this paper adopted LES and the FW-H acoustic model to numerically compute the aerodynamic noise of high-speed trains, analyzed the aerodynamic flow behavior and aerodynamic noise characteristics, included the microscopic structures of the train (pantograph, pantograph area, bogies, air conditioning, junctions between cars and so on) in the model, established a computational model for the aerodynamic noise of a 3-car train formation, and reached the following conclusions: 1) The pantograph, pantograph area, streamlined area, bogies, bogie area, windshield area and air-conditioning area were the main aerodynamic noise sources of the high-speed train. The aerodynamic noise sources of the pantograph were mainly distributed at the pantograph head, the junction of the upper and lower arms, and the chassis area. Compared with the other 5 bogies, the bogie at the first end of the head train was the main aerodynamic noise source. In addition, vortex shedding and flow separation were the main causes of the aerodynamic noise of high-speed trains.
2) A comparative analysis of the total sound pressure levels at the observation points (25 m from the center line of the track and 3.5 m above the rail surface) showed that the total sound pressure level was highest at the observation point 12 m from the nose tip of the head car: 96.5 dBA at 300 km/h and 101.4 dBA at 400 km/h. Along the direction of travel, the longitudinal sound-pressure-level distribution showed local maxima at the bogies at the first and second ends of the head train, middle train and tail train. To reduce the total sound pressure level of the whole train, this paper suggests focusing noise-reduction measures on these 6 positions; the noise reduction effect for the whole train should then be very obvious.
3) The aerodynamic noise of high-speed trains was broadband noise with a wide frequency spectrum, whose main energy lay between 1000 Hz and 4000 Hz. With increasing speed, the aerodynamic noise energy moved toward higher frequencies.
4) The distribution of aerodynamic noise in the pantograph area had the following characteristics: the total sound pressure level between two adjacent observation points whose distance doubled decreased by 2.7 dBA when = 25 m, and the average change in sound pressure level at the horizontal observation points was 5.4 dBA when the running speed increased from 300 km/h to 400 km/h. 5) The propagation of the pantograph's aerodynamic noise had the following characteristics: the main radiated energy was in the mid-to-high frequencies, and the main radiation directions were the upper left and right sides of the pantograph head; at high frequencies, the aerodynamic noise of the pantograph came mainly from the pantograph head.
Fig. 2. Computational domain of the high-speed train
Fig. 3. Meshes of the high-speed train
Fig. 4. Contours of pressure at the surface of high-speed trains: a) the whole train, b) head train, c) tail train
Fig. 5.
Fig. 6 displays the distribution of the vorticity contour surface based on the Q-criterion (value 0.001) when the high-speed train ran at 300 km/h. As shown in Fig. 6, the main vortexes were in the streamlined area of the head train, the bogie areas, the windshield area, the pantograph area, the air-deflector area of the air conditioners and the non-streamlined area of the tail train. Likewise, it could be seen that the main aerodynamic noise sources were the pantograph, pantograph area, head streamlined area, bogies, bogie areas, windshield area and air-conditioning area. Thus, the vortex shedding and flow separation of the whole train were the main causes of its aerodynamic noise. As shown in Fig. 6(b), vortexes formed in the pantograph and pantograph area.
Fig. 6. Distribution diagram of vorticity of high-speed trains
Fig. 7. Fluctuating pressure at the observation point of the nose tip of the head train
Fig. 8.
Fig. 9. Sound pressure levels under different running speeds
Fig. 10. Comparison at observation points under one-third octave (position of the longitudinal maximum sound pressure level)
Fig. 11. A comparison curve of sound pressure levels in pantograph area | 2018-12-11T17:06:11.962Z | 2017-03-31T00:00:00.000 | {
"year": 2017,
"sha1": "6a916aaa4be7468b65dbae6c8b14f143b32842fb",
"oa_license": "CCBY",
"oa_url": "https://www.jvejournals.com/article/18139/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6a916aaa4be7468b65dbae6c8b14f143b32842fb",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
245384571 | pes2o/s2orc | v3-fos-license | Evaluation of the COVID-19 Infection Rate in the Perioperative Period of Elective Surgeries of the Hand and Microsurgery
Background: After the beginning of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic (coronavirus disease 2019 [COVID-19]), the world started reducing the number of elective surgeries to reduce the transmission of the coronavirus. Some priority elective surgeries were performed, and there was no increase in contagion rates thanks to safety protocols and protection measures. This study presents the coronavirus infection rate of elective hand surgeries and microsurgeries performed during the pandemic. Methods: A retrospective study evaluating 188 patients who underwent elective surgical procedures. The exclusion criteria were patients infected by COVID-19 before the surgery and patients who underwent trauma surgery. Only 108 patients were eligible for this study. The mean age was 47.8 years (range: 15 days-81 years). There were 63 females and 45 males. They were divided into 2 groups: outpatient (n = 49) and inpatient (n = 59) procedures. Results: The overall COVID infection rate was 6.48%. The outpatient infection rate was 2.08%, whereas the inpatient infection rate was 10.17% (Student t test: P = .089). The main factor correlated with infection in the postoperative period was the number of postoperative outpatient visits (Student t test: P = .089). No statistical differences were observed between the variables studied, but there was a tendency for patients who underwent inpatient surgery to become infected by COVID-19 (P = .089). The statistical power was 0.8 (Cohen's d test), showing that larger samples are needed to analyze the correlation better. Conclusion: We concluded that the safety of performing elective hand surgery during the pandemic remains unclear, and more studies with larger samples are needed.
Introduction
The end of 2019 and the beginning of 2020 were marked by a global catastrophe in health systems, caused by the pandemic of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the new coronavirus, in a disease that was named COVID-19 (coronavirus disease 2019). Due to its high mortality and aerosol transmissibility, social isolation precautions and lockdowns were initiated, which severely disrupted the treatment of orthopedic conditions in the health system, among other specialties. 1,2 Elective surgeries were suspended, and hospitals began to be overcrowded with inpatients due to the severe forms or sequelae of COVID-19. Meanwhile, patients with urgent and emergency orthopedic injuries still underwent surgery. Fortunately, most of these patients did not acquire COVID-19 during the perioperative period, either at admission or during follow-up. [1][2][3][4][5] As the COVID-19 pandemic came under control, some "urgent" or priority outpatient surgeries were performed throughout 2020 and at the beginning of 2021, before the "second wave" of the disease (COVID-19). In addition, most of these patients were not infected with COVID-19 during the perioperative period, hospitalization, or outpatient visits. However, there are no international studies that correlate this information with hand surgery, microsurgery, and upper limb surgery. The main question was: is it safe to subject patients to surgical procedures during this pandemic?
Thus, studies carried out in a quaternary hospital, such as this Hospital in the State of São Paulo, become fundamental for understanding, considering that it is the only quaternary hospital in its administrative region ("Health Regional Department" or "Departamento Regional de Saúde/DRS") and in the heartland of the state of São Paulo. The Hospital was the main reference center for serious cases of COVID-19, as well as for the other urgent and emergency diseases that continued to exist in parallel. We therefore proposed this study to correlate these data, with a view to creating guidelines for future safe protocols and assessing which surgeries carry a lower risk of transmission of COVID-19. The aim of this study is to evaluate the safety of operating on patients for elective hand surgery and microsurgery.
Methods
We retrospectively reviewed the medical records of 188 patients who underwent elective hand surgeries during the pandemic, between April 2020 and April 2021, at a quaternary hospital in Ribeirão Preto, in the heartland of the State of São Paulo. The inclusion criteria were patients who underwent elective hand surgeries and microsurgeries according to urgency criteria for each disease. Each patient who underwent surgery was previously instructed about and aware of the risk of undergoing elective surgical procedures during the pandemic, as stated in the consent form for each surgery. The exclusion criteria were patients infected by COVID-19 before the surgery and patients who underwent trauma surgery. None of the patients operated on was infected by COVID (positive polymerase chain reaction [PCR]) or had any symptoms on the day of the procedure. None of the patients had received any vaccine yet. All the surgeries were performed when the COVID-19 hospitalization rate and the local number of new cases were under control, so there was flexibility to restart elective surgery scheduling in patients at risk of functional loss or in advanced stages of disease.
Scheduling of Surgery and COVID Screening of Patients
The surgeries were scheduled according to the specific urgency for each disease. Patients received a phone call the day before surgery, asking if they had any symptoms or were awaiting test results for COVID (PCR test). On the day of surgery, patients attended the Hospital for surgery by appointment and without a companion. They underwent an assessment of symptoms and measurement of temperature. In case of normality, the surgery was performed. In case of abnormality, we rescheduled the surgery, and the patient was transferred to a COVID care area.
Postoperative Follow-Up and COVID Screening of Patients After Surgery
After surgery, patients were instructed to contact the Hospital if they had any symptoms of COVID or temperature changes. A return outpatient consultation was scheduled after 1 week, and the patient was evaluated. If any of them had symptoms, we performed a teleconsultation, and a new consultation was scheduled after 15 days. Symptomatic patients were also instructed to go to the nearest health facility for COVID PCR testing. If a patient at the hospital reception for the outpatient consultation reported or had any symptom or temperature change, he or she was referred to a COVID area for evaluation and also received an evaluation by the hand surgery team (wearing N95 masks, face shields, gloves, and protective clothing).
Personal Protective Equipment for Health Care Professionals in the Operating Room and on Returns
For patients without symptoms of COVID or temperature changes, health workers wore surgical masks. N95 masks and face shields were optional during surgery.
Health Care Professionals Screening
Health professionals had to answer a questionnaire about symptoms every time they logged into their virtual medical record account. If symptoms were present, the health professional had to go to the occupational medicine section for COVID care. In addition, it is essential to remember the ethical aspects of self-care and the medical oath not to harm the patient; therefore, it is expected that no health professional worked with symptoms of COVID.
Only 108 patients were eligible for this retrospective study. The mean age was 47.8 years (range: 15 days-81 years). There were 63 females (mean age: 51.2 years; range: 3-81 years) and 45 males (mean age: 42.9 years; range: 15 days-80 years). According to the procedure, there were 52 soft tissue, 48 bone, and 8 infection surgeries. The procedures were divided into 2 groups: outpatient (n = 49) and inpatient (n = 59). All the postoperative COVID-19 infections occurred more than 15 days after surgery. Table 1 presents the details of the surgeries (age, gender, total procedure time, type of anesthesia, and type of procedure) in both groups, as well as the number of health professionals in the operating room. The number of health care professionals refers to the overall number of people in the room.
Some procedures were performed only with a hand surgeon and the fellow, using local anesthesia. Figures 1 and 2 present the localization and percentage of the surgeries performed for the inpatient and outpatient procedures.
A 2-tailed P value of less than .05 was considered statistically significant. All analyses were performed using SPSS for OS X, Version 22.0.0 (SPSS; IBM Corp., Armonk, New York).
Results
There were 7 patients (6.48%) who became infected by COVID-19 during the postoperative period, detected during outpatient care. The inpatient infection rate was 10.17%, whereas the outpatient infection rate was 2.08% (Student t test: P = .089). There were no infections during inpatient care. There were no statistical differences for the total time of the procedure (Student t test: P = .805), the number of health care professionals in the operating room (Student t test: P = .554), the type of procedure (Student t test: P = .250), or the use of public transportation (Student t test: P = .235).
Patients with more outpatient visits (Student t test: P = .089) and with associated diseases showed a tendency toward COVID-19 infection in the outpatient postoperative period (Student t test: P = .077). None of the 108 patients died of COVID-19, and the only one who needed intensive care was discharged after 4 weeks. There was a tendency for patients who underwent inpatient surgery to acquire COVID-19 during the postoperative period (Mann-Whitney U test: P = .089), as observed in Table 2.
There was a gender difference between the groups, with more women in the inpatient group (P < .001), who were also older (P < .001). Although more women acquired COVID during outpatient visits, there was no statistical difference between the inpatient and outpatient groups (P = .472). The statistical power was greater than 0.8 based on Cohen's d test for each variable, showing that larger samples are needed to analyze the correlation better.
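The borderline group comparison above (1 infected of 49 outpatients vs 6 of 59 inpatients, counts implied by the reported 2.08% and 10.17% rates) can also be tested with Fisher's exact test, which suits small counts better than the t test used in the paper; this substitution and the stdlib-only implementation are the sketch's assumptions.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum of probabilities of all tables no more likely than the observed."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)

    def prob(x):
        # Hypergeometric probability of x "successes" in row 1.
        return comb(row1, x) * comb(n - row1, col1 - x) / denom

    p_obs = prob(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    return sum(p for p in (prob(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Rows: outpatient / inpatient; columns: infected / not infected.
p_value = fisher_exact_two_sided(1, 48, 6, 53)
```

The non-significant result at this sample size matches the paper's conclusion that larger samples are needed.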
Discussion
During the COVID-19 pandemic, Brazil was not the best example of combating the new coronavirus. However, considering the entire social, economic, and political context observed in the country, and considering the postoperative infection rates of elective surgeries performed in a quaternary hospital, we reached a COVID-19 postoperative infection rate of 6.48% for hand surgery, microsurgery, and upper limb surgery. Furthermore, this Hospital is still the main reference center for admissions of severe cases of COVID-19.
The ASA (American Society of Anesthesiologists) classification did not interfere with the surgical intervention performed, as our hospital sees 2 patient profiles: patients requiring complex surgeries (due to the disease, the need for a specialist, or surgical complications), who generally have fewer comorbidities (ASA I or II), and ASA III or IV patients undergoing less complex surgeries, who need a more experienced anesthesiology team. The anesthesiology team helped to perform more outpatient procedures.
Surgical triage scoring systems were widely used during the pandemic, such as the MeNTS (Medically Necessary Time-Sensitive) scoring system described by the American College of Surgeons; unfortunately, our team has not yet adopted this system. 6 Transmission rates in Brazil during the same period were reported between 2358 and 4474 cases per million, with mortality between 156.6 and 331.8 cases per million. 4,5 It is also important to emphasize that our hospital followed the World Health Organization (WHO) and Centers for Disease Control and Prevention (CDC) determinations, creating guidelines that meet the standards of excellence of other hospitals.
In addition, the Hospital is part of one of the largest universities in the world and hosts other teams that contributed to the research, creation, and testing of vaccines and protocols to control the new coronavirus. These factors demonstrate the contrast experienced by different medical teams during this catastrophic period. It is also important to emphasize that all patients in this study come from a hospital belonging to the free public health system ("SUS" or "Sistema Único de Saúde"), and the socioeconomic and cultural profile of patients referred to this Hospital is very low. However, there was no statistical difference between patients. We emphasize that many of the patients operated on at this Hospital used public transport, ambulances, and private vehicles for transportation. We were unable to identify whether COVID-19 contamination occurred within the Hospital, during the patient's journey to medical care, or at home. However, of the 7 patients infected during outpatient follow-up, 1 became symptomatic 4 weeks after surgery and 6 patients after 12 weeks; no patient was infected on the same day as an outpatient consultation. While this article was being written, the death toll in Brazil was 454,429, with 9.09% of the population fully vaccinated and 80,486 new cases in 24 hours; for the state of São Paulo, the figures were 109,241 deaths and 16,671 new cases in 24 hours. It is also essential to note that none of these patients had symptoms of COVID before surgery, and none had received any dose of vaccine. All patients were operated on by the authors of this study, who completed their vaccine doses after March 2021. 5,[7][8][9][10] None of the health care professionals had COVID symptoms before, during, or after the surgeries. The hospital has an efficient communication network between employees and patients, which prevents inadequate exposure.
Honestly, we are in an underdeveloped country, which was not an excellent example of combating the pandemic. For those of us who have lived through all these difficulties, within this precarious situation in a country of high socioeconomic contrast, having this outpatient infection rate is encouraging. However, we agree that these rates are high and worrying compared with developed regions such as the United States and Europe. Given this, we considered that the safety of performing elective hand surgery during the pandemic remains unclear.
On the contrary, regarding the 2.08% outpatient infection rate, and considering that Brazil had one of the worst international scenarios for coronavirus control, there might be a light at the end of the tunnel, provided that the preventive measures recommended by the WHO/CDC are followed. Our outcomes demonstrate that outpatient surgeries with a low number of outpatient consultations did not result in any contamination, even in older patients. Furthermore, our results are comparable with data in other reports that present no deaths in patients who underwent elective surgeries. [7][8][9] It is important to emphasize that the 188 surgeries performed electively in 1 year during the pandemic may seem significant compared with the approximately 500 surgeries/year without a pandemic, our maximum staff capacity given the hospital's structural conditions. Before the pandemic, we had more than 2000 patients awaiting a vacancy for outpatient evaluation by our team for possible elective surgery in our surgical schedule, and our team used to provide outpatient care for 300 patients a day. During the pandemic, we provided only preoperative and postoperative outpatient consultations. This information may sound aberrant, but it is part of the reality experienced by an underdeveloped country whose health system, despite its contrasts, can manage patients with dignity.
Other studies have presented alternatives for postoperative follow-up, as virtual clinic consultations could reduce exposure during postoperative evaluation. 10,11 However, this does not eliminate the risk associated with changing wound dressings during follow-up or the need for radiographic evaluation. We also found similar results when comparing with other reports that evaluated orthopedic surgery (trauma or elective). 12,13
Conclusion
We concluded that the safety of performing elective hand surgery during the pandemic remains unclear, and more studies with larger samples are needed to clarify this question. However, no study can reproduce the reality of each country's health system.
Ethical Approval
This study was approved by our institutional review board.
Statement of Human and Animal Rights
This report has been performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments.
Statement of Informed Consent
The patient and his family were informed that data from the case would be submitted for publication and gave their consent.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. | 2021-12-23T06:22:45.678Z | 2021-12-21T00:00:00.000 | {
"year": 2021,
"sha1": "35384240a6ceea8cf045f1e1920d9169bcaae858",
"oa_license": null,
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/15589447211064360",
"oa_status": "BRONZE",
"pdf_src": "Sage",
"pdf_hash": "267ba06112a589cac8b7b7bd966650674e41b1a4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
11884946 | pes2o/s2orc | v3-fos-license | A comprehensive transcriptome and immune-gene repertoire of the lepidopteran model host Galleria mellonella
Background The larvae of the greater wax moth Galleria mellonella are increasingly used (i) as mini-hosts to study pathogenesis and virulence factors of prominent bacterial and fungal human pathogens, (ii) as a whole-animal high throughput infection system for testing pathogen mutant libraries, and (iii) as a reliable host model to evaluate the efficacy of antibiotics against human pathogens. In order to compensate for the lack of genomic information in Galleria, we subjected the transcriptome of different developmental stages and immune-challenged larvae to next generation sequencing. Results We performed a Galleria transcriptome characterization on the Roche 454-FLX platform combined with traditional Sanger sequencing to obtain a comprehensive transcriptome. To maximize sequence diversity, we pooled RNA extracted from different developmental stages, larval tissues including hemocytes, and from immune-challenged larvae and normalized the cDNA pool. We generated a total of 789,105 pyrosequencing and 12,032 high-quality Sanger EST sequences which clustered into 18,690 contigs with an average length of 1,132 bases. Approximately 40% of the ESTs were significantly similar (E ≤ e-03) to proteins of other insects, of which 45% have a reported function. We identified a large number of genes encoding proteins with established functions in immunity related sensing of microbial signatures and signaling, as well as effector molecules such as antimicrobial peptides and inhibitors of microbial proteinases. In addition, we found genes known as mediators of melanization or contributing to stress responses. Using the transcriptomic data, we identified hemolymph peptides and proteins induced upon immune challenge by 2D-gelelectrophoresis combined with mass spectrometric analysis. Conclusion Here, we have developed extensive transcriptomic resources for Galleria. 
The data obtained are rich in gene transcripts related to immunity, remarkably expanding our knowledge about immune and stress-inducible genes in Galleria and providing the complete sequences of genes whose primary structures had only partially been characterized using proteomic methods. The generated data provide for the first time access to the genetic architecture of immunity in this model host, allowing us to elucidate the molecular mechanisms underlying the pathogen and parasite response and to perform detailed analyses of both its immune responses against human pathogens and its coevolution with entomopathogens.
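The similarity screen behind the "approximately 40% of ESTs with E ≤ e-03" figure in the abstract amounts to tallying queries whose best BLAST hit clears that threshold. Below is a sketch assuming BLAST tabular output (-outfmt 6, E-value in column 11); the records here are toy stand-ins, and queries with no hits at all would additionally have to be counted from the input FASTA.

```python
import io

# Toy stand-in for BLAST -outfmt 6 output (qseqid sseqid pident length
# mismatch gapopen qstart qend sstart send evalue bitscore).
blast_tab = io.StringIO(
    "contig_0001\tsp|P12345\t78.2\t210\t40\t2\t1\t630\t5\t214\t1e-45\t180\n"
    "contig_0002\tsp|Q99999\t35.0\t90\t55\t3\t10\t280\t1\t90\t0.02\t38\n"
    "contig_0003\tsp|P67890\t60.1\t150\t55\t1\t3\t452\t2\t151\t5e-04\t95\n"
)

hits, queries = set(), set()
for line in blast_tab:
    fields = line.rstrip("\n").split("\t")
    qseqid, evalue = fields[0], float(fields[10])
    queries.add(qseqid)
    if evalue <= 1e-3:                # significance threshold from the text
        hits.add(qseqid)

fraction_with_hit = len(hits) / len(queries)
```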
Background
The introduction of novel high-throughput sequencing technologies provides insight into the genetic architecture of an increasing number of non-model organisms, including insects. Next-generation (NextGen) pyrosequencing has become an important tool in transcriptomic studies and allows targeted identification of genes that are (differentially) expressed in distinct tissues or cells, during development, or upon activation of immune responses. This technology has been used, for example, to characterize both the midgut-specific and the immunity-related transcriptome of Manduca sexta, which has emerged as a model in lepidopteran biochemistry and physiology [1,2]. In this study, we subjected the immunity-related transcriptome of the greater wax moth Galleria mellonella to a combination of Sanger and NextGen sequence analysis. Our study was motivated by two reasons. Firstly, Galleria is suited to identifying ancient features of innate immunity in lepidopterans because it belongs to the family Pyralidae, which has been placed in a basal phylogenetic position within the Lepidoptera. Secondly, Galleria represents a powerful, reliable and proven model system for innate immunity studies. It is currently used as a host system to reconstruct rapid reciprocal adaptations during host-parasite coevolution [3] and as an alternative model host for testing human pathogens, which is ethically more acceptable than mammalian hosts such as mice, rats and rabbits [4,5]. Galleria caterpillars prosper worldwide as alternative mini-hosts because they combine advantages shared with other invertebrate host models with benefits that are unique to this lepidopteran. The advantages of the nematode Caenorhabditis elegans and the fruit fly Drosophila melanogaster are complete, well-annotated genomes and the availability of microarrays, RNA interference libraries and mutant strains, which allow analysis of host-pathogen interactions at the molecular level [6].
In contrast, the larger size of Galleria caterpillars enables precise injection of antibiotics or a number of pathogens, easy manipulation, and collection of tissue and hemolymph samples to study pathophysiology with, for example, proteomic approaches.
Further advantages of Galleria are (i) the low overall costs of breeding large numbers, providing an inexpensive whole-animal high-throughput infection assay system [7], (ii) their worldwide commercial availability, e.g., they are sold as bait for fishermen or as food for pets (reptiles), (iii) the positive correlation between the pathogenicity of bacteria and fungi evaluated in Galleria and in mice [8], and (iv) that this heterologous insect host can be adapted in the laboratory to human physiological temperature (37°C). The latter is essential in order to mimic the physiological conditions in mammals, because human pathogens are adapted to the physiological temperature of their host, which is often required for the synthesis and release of their pathogenic or virulence factors [4,5]. These advantages have convinced an increasing number of researchers to favor Galleria as a mini-host model for prominent pathogenic bacteria and fungi that are responsible for severe human diseases, such as Bacillus cereus [9], Enterococcus faecalis [10], Listeria monocytogenes [11], Pseudomonas aeruginosa [12], Staphylococcus aureus [13], Candida albicans [14] and Cryptococcus neoformans [15]. In addition, a number of antimicrobial peptides and inhibitors of microbial virulence factors have been discovered in Galleria during the past decade, whose therapeutic potential in medicine and plant protection is presently being explored [16,17].
The major disadvantage of Galleria as a heterologous host system is that neither genome nor transcriptome sequence data are available; therefore, important information about immunity- and stress-related genes and their expression is lacking. Consequently, this study was designed to fill this gap and to provide a data set which enables more detailed studies, for example microarray or proteomic analyses, in the future. In order to induce expression of immunity-related genes in this lepidopteran species, we injected a bacterial lipopolysaccharide (LPS) preparation into last-instar larvae, which has been proven to be a potent elicitor of immune responses in Galleria [18] and other insect species [19,20]. Normalized larval dscDNA was sequenced using Roche 454 FLX and Sanger (directional long reads) methods. Combining both technologies provided deep sequencing coverage of the expressed genes relevant to this research project.
Because of the large hemolymph sample volumes that can be obtained from Galleria caterpillars, their host response to pathogens can easily be studied at the peptide and protein level [21]. To test the correlation between transcriptomic and proteomic data, we collected hemolymph samples from untreated and LPS-injected larvae. In order to identify peptides and proteins that are secreted into the hemolymph upon activation of innate immune responses, we used 2D gel electrophoresis combined with mass spectrometric analysis of spots that appear or are enhanced upon injection of LPS. Complementary proteomic analysis of hemolymph samples confirmed induced expression and release into the hemolymph of proteins known to mediate recognition of microbes, immunity-related signaling or killing of microbes.
Results and Discussion
Transcriptome assembly and functional analyses using Gene Ontologies
Galleria has emerged as a powerful, surrogate and ethically acceptable model host for human pathogens, but the lack of genomic data is a major impediment for its use in preclinical research. In order to provide more detailed information about its transcriptome, we subjected normalized larval cDNA to NextGen pyrosequencing (Roche 454 FLX) and Sanger analysis. Normalization of the larval cDNA reduced over-abundant transcripts and produced an even distribution of transcripts ranging from 0.2 to > 3.0 kb in size. The average size of the cDNAs of the Galleria mixed larval cDNA library that were cloned and sequenced was 1,150 bp. With the 454 platform we generated over 172 million bases of sequence information. The total number of reads was 789,105, with an average read length of 218 bases. The data assembly comprised 22,203 contigs and 11.7 million bases, among which 7,265 contigs were larger than 500 bases. The average size of contigs larger than 500 bases was 1,132 bases. The largest contig had 4,762 bases, and the N50 contig size was 1,309 bases. The Sanger sequencing yielded a total of 12,032 high-quality ESTs, which after assembly resulted in a total of 2,120 contiguous sequences (contigs), 4,775 singletons represented by a single EST, and a total of 6,895 putative gene objects (summarized in Table 1). Combined, the Sanger and the pyrosequencing data of the Galleria larval cDNA library resulted in a total of 18,690 contigs (putative gene objects). For the subsequent BLAST searches and annotation, we excluded all of the singleton reads from the 454 pyrosequencing data, as singletons may be more error-prone and unreliable compared to contigs with good sequence coverage. The complete set of sequences was subjected to a protein-translated BLASTx search and a gene ontology (GO) analysis using Blast2GO [22].
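The N50 statistic reported above summarizes the contig length distribution: it is the length L such that contigs of at least L bases together cover half of the assembled sequence. A minimal sketch of its computation (the contig lengths below are illustrative, not the actual Galleria assembly):

```python
def n50(contig_lengths):
    """Return the N50: the length L such that contigs of length >= L
    cover at least half of the total assembled bases."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0

# Toy contig set for illustration only:
lengths = [4762, 1309, 1132, 900, 500, 300, 200]
print(n50(lengths))
```

Note that when a single very large contig covers more than half of the assembly, as in this toy set, the N50 equals that contig's length.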
7,556 sequences (40%) matched described sequences in GenBank (nr database; E-value cut-off of 10^-3). More than half of the sequences (11,134) had no BLAST result, indicating a high number of lepidopteran- or species-specific transcripts or transcript parts (orphan UTRs) [23].
We then analyzed which part of the assembled contigs had counterparts in certain species. For this purpose we used BLAST databases for the complete proteomes of Drosophila melanogaster, Bombyx mori, and Homo sapiens. Using this approach we cannot discern gene family members, but we obtain a rough overview of general protein distributions between clades. The Venn diagram (Figure 1) shows the distribution of hits with a score threshold of 150. We chose the score value as the exclusion criterion to make BLAST hits comparable, since P values are influenced by database size. We observed a total of 6,782 hits, of which more than 3,000 are common to all species and therefore belong to the core of all metazoan genes. Given that not all genes are active under the conditions chosen and that there are a number of species-specific genes, the 6,507 identifiable genes shared between Bombyx and Galleria indicate a good coverage of the Galleria transcriptome by our contig data set. Not surprisingly, another large fraction (2,326) is shared between the Lepidoptera only, and 908 contigs are common between Diptera and Lepidoptera. The slightly higher number of genes shared between Lepidoptera and humans than between D. melanogaster and humans (211 versus 134) indicates a D. melanogaster-specific gene loss. Overall, the species distribution of the top BLAST hit against the nr database for the Galleria transcriptome shows a strong preference for matches against Bombyx mori and Tribolium castaneum genes. Both insect species are represented by complete genome sequences in the public databases, as is Drosophila, but the number of top BLAST hits against this insect model organism is much smaller (Additional file 1).
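The Venn-region counting described above can be sketched as a set operation over per-species best-hit scores. This is a minimal illustration, not the actual analysis pipeline; the contig IDs and bit scores below are invented, and only the score threshold of 150 comes from the study:

```python
SCORE_CUTOFF = 150  # threshold used in the study to make hits comparable

def venn_regions(scores_by_species, cutoff=SCORE_CUTOFF):
    """scores_by_species: {species: {contig_id: best_hit_score}}.
    Returns {frozenset_of_species: count} over contigs with a hit >= cutoff."""
    # Contigs passing the threshold for each species.
    hits = {sp: {c for c, s in scores.items() if s >= cutoff}
            for sp, scores in scores_by_species.items()}
    all_contigs = set().union(*hits.values())
    regions = {}
    for contig in all_contigs:
        key = frozenset(sp for sp, members in hits.items() if contig in members)
        regions[key] = regions.get(key, 0) + 1
    return regions

# Hypothetical demo data (not the real BLAST results):
demo = {
    "Dmel": {"c1": 200, "c2": 90,  "c3": 310},
    "Bmor": {"c1": 180, "c2": 160, "c3": 290},
    "Hsap": {"c1": 155, "c4": 170},
}
print(venn_regions(demo))
```

Each frozenset key corresponds to one region of the Venn diagram, e.g. the three-species intersection or a species-exclusive set.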
For functional comparisons, all sequences were subjected to Gene Ontology (GO) analysis in Blast2GO, where we classified all gene objects by Biological Process (Figure 2A) and Molecular Function (Figure 2B) at class level 3. To minimize the number of classes with only few gene objects, we set the minimum number of gene objects (cut-off level) in a class to 2% of the total number of sequences that could be classified. Of the 7,556 contigs in the Galleria cDNA library with high-score matches in the GenBank non-redundant (nr) protein database, 3,438 (45%) could be classified into a GO category, with each class containing at least 17 sequences. Among the 3,438 genes for which we obtained GO terms, we observed a wide range of functional categories. For comparison, the Bombyx Unigene dataset yielded 12,076 genes with a BLAST hit (E-value cut-off of 10^-3), a much larger fraction (73%) compared to the Galleria sequence data set (40%). Of the 12,076 genes with a BLAST hit, 6,058 Bombyx genes had a functional GO category association (Additional file 2). In a direct comparison of the presence and/or abundance of GO terms, three GO classes were absent in the Galleria transcriptome but present in the Bombyx Unigene dataset: Generation of precursor metabolites and energy, Cell recognition, and Regulation of biological process. Based on the relative numbers of gene objects with functional GO category associations, the Galleria sequences fall into GO categories with a roughly similar distribution to that of the Bombyx genome, showing comparable numbers for most GO categories, both for Biological Process and Molecular Function (Additional file 2). This suggests that the Galleria sequence data contain a large diversity of genes involved in a variety of biological processes, and do not contain notable biases towards particular categories of genes.
The patterns of GO category associations nonetheless differed between these two insect species in a few categories: Multicellular organismal development, Anatomical structure morphogenesis and Cellular developmental process were relatively abundant in Bombyx, whereas Biosynthetic process, Macromolecule metabolic process and Response to stress were more abundant in Galleria. In addition, several categories were present only in Galleria (Oxidation reduction and Cell death). Differences in GO category associations between Bombyx and Galleria might be attributed to the fact that the ESTs of the latter originate predominantly from larvae.
Recognition of pathogen- or damage-associated molecular pattern genes
Peptidoglycan recognition proteins (PGRPs) and apolipophorin III, which are known to mediate recognition of pathogen-associated molecular patterns (PAMPs) and damage-associated molecular patterns (DAMPs), have been identified both among the determined transcripts and among the new or enhanced spots of the 2D gels, implicating their release into the hemolymph during humoral immune responses (Figure 3). Insect PGRPs specifically bind to and hydrolyze bacterial peptidoglycan, activate the Toll or IMD signal transduction pathways or proteolytic cascades that generate antimicrobial effectors, and stimulate phagocytosis. Insect genomes have been found to code for up to 19 PGRPs, classified into short (S) and long (L) forms. We identified six putative PGRP sequences in Galleria, among which one pair (Gme_PGRP3 and Gme_PGRP4) solely originates from a recent gene duplication event (Figure 4A). The two major classes of insect PGRPs are well separated in the gene phylogeny, as depicted by the good bootstrap support of the Neighbour-Joining analysis. In the class which includes the gut-expressed Drosophila PGRP-LB gene, we find two Galleria PGRPs (Figure 4B). Apolipophorin III mediates pattern recognition of beta-1,3-glucans and cellular encapsulation in Galleria [24]. Recently, it was shown that apolipophorin III present in the hemolymph of Galleria binds to nucleic acids released by damaged cells and wounded tissues, and these aggregates enhance both humoral and cellular defense reactions that can protect from infection [25].
Gram negative bacteria binding proteins (GNBPs) and β-1,3-glucan recognition proteins (βGRPs) have been extensively studied as pattern recognition proteins in Lepidoptera [26][27][28]. Most of these proteins are produced in the fat body and secreted into the caterpillar's hemolymph. Some are constitutively present, whereas others are induced upon microbial infection. We have identified five different βGRPs in the Galleria EST data collection, including one most similar to the midgut-specific short βGRP with glucanase activity as previously described [26]. To further examine the relationships between the βGRP proteins across insects and the βGRPs identified in Galleria, a total of 45 sequences from 24 species, including many proteins that had previously been found in insect hemolymph, were collected and used to construct a Bayesian phylogeny (Additional file 3). The phylogenetic analysis revealed that these sequences clustered in two distinct clades. One of these clades is clearly separated from the other by a high posterior probability and contains the Helicoverpa armigera Glucanase-1 protein (described in Pauchet et al. [26]) and sequences from cDNA libraries made from midgut tissue of different Lepidoptera species, including one Galleria sequence. This phylogeny suggests that Galleria does have all of the βGRPs found in more derived lepidopteran species, including the gene coding for a protein with glucanase activity. This supports the idea of an ancient βGRP duplication event in Lepidoptera, leading to paralogues that have different functions.

Figure 3 Two-dimensional SDS-PAGE map of immunized Galleria larvae. Hemolymph protein from untreated and LPS-immunized larvae was loaded on 24-cm pH 3 to 11 NL isoelectric focusing strips, followed by Tris-Tricine-SDS-polyacrylamide gel electrophoresis on a 15% gel. Image analysis enabled visualization of new or enhanced spots present in hemolymph samples from immunized larvae, depicted in orange color. Putative identifications of immune-inducible proteins by MALDI-TOF analysis and according to our recent study [21] are depicted next to the respective spots. Molecular mass standards are indicated in kDa (left), and the pI range by an arrow. PGRP, peptidoglycan recognition protein; GNBP, Gram negative bacteria binding protein; GST, glutathione S-transferase; Apo III, apolipophorin III; DLS..., unknown protein.

Bayesian posterior probabilities are shown for all major nodes supported with probability higher than 60%. (B) Amino acid alignment of the 6 predicted proteins from Galleria together with predicted protein sequences deduced from publicly available insect sequence datasets. Amino acid sequence alignments were performed using the MAFFT multiple alignment program without the predicted signal peptide and part of the N-terminus, as in some cases only partial sequence information was available. Identical residues are boxed with dark shading, and conserved residues are boxed with light shading. Species abbreviations: Manduca sexta (Mse), Bombyx mori (Bmo), Trichoplusia ni (Tni), Phlebotomus papatasi (Ppa), Samia cynthia ricini (Scr), Tribolium castaneum (Tca), Glossina morsitans (Gmm), Galleria mellonella (Gme), Tenebrio molitor (Tmo).
Immunity-related signaling
In insects, cell signaling against fungal and bacterial pathogens occurs through the Toll, Imd, and Jak-STAT pathways [29]. These pathways are quite similar to the vertebrate (e.g. TNF) signaling pathways, and induce the expression of antimicrobial peptides and other molecules through interaction with NF-kB factors. The major signaling pathways Toll and Imd are represented by central receptors such as Toll, Toll-like, 18-wheeler and related LRR-repeat-containing G-protein-coupled receptors. We have identified at least three different toll or toll-like receptor transcripts in the Galleria dataset. The exact number of different toll receptors is not easy to evaluate, as some of the transcripts are incomplete and the predicted amino acid sequences do not always overlap. In addition to toll, we identified two different 18-wheeler partial transcripts with homology to Spodoptera frugiperda 18 wheeler (GenBank entry ADV41489: Li, S: A 18 wheeler toll receptor gene from S. frugiperda cell is in response to LPS and Saccharomyces cerevisiae stimulation). However, a critical evaluation of the role of 18-wheeler in Drosophila has put its postulated function as a pattern recognition receptor for Gram negative bacteria into question [30]. Interestingly, we identified transcripts encoding the transcription factors NF-kB and Relish, which function as obligate dimers. Relish regulates the expression of antimicrobial peptides downstream of the IMD pathway in Drosophila [31]. While the signaling pathways that stimulate immune gene expression have been well characterized by genetic analysis in Drosophila, they are far from being well understood in most other insect species. However, several proteins involved in these pathways have recently been characterized in Lepidoptera.
One such pathway involves proteolytic activation of a cytokine called Spaetzle, which functions in dorsal-ventral patterning during early embryonic development and in the antimicrobial immune response in larvae and adults. Most interestingly, it could be shown that injection of Spaetzle into M. sexta larvae stimulated expression of several immune-related peptides and proteins, including cecropin, attacin, moricin and lysozyme [32]. We have identified a Spaetzle homolog in Galleria. The Gme-Spaetzle cDNA encodes a polypeptide with 29%, 42% and 44% identity to N. vitripennis, B. mori and M. sexta, respectively (Additional file 4A: alignment of Galleria, Bombyx and Manduca Spaetzle).
In addition to major immune signaling proteins, we identified a calreticulin sequence in the immune-induced Galleria transcriptome data. Calreticulin is involved in signal transduction events associated with innate immunity, cell adhesion, angiogenesis and apoptosis in mammals. The level of calreticulin on the surface of human dendritic cells and polymorphonuclear phagocytes correlates with their phagocytic ability [33]. Induction of calreticulin upon LPS challenge has recently been determined in other invertebrates, such as the planarian Schmidtea mediterranea, which is suggestive of evolutionarily conserved roles in innate immunity [34]. LPS challenge also induced expression of tetraspanins, whose role in modulating immune signal complexes in vertebrates is well established [35]. Their induced expression upon LPS injection has also been documented in ancient insects such as the firebrat [19]. Similarly, an ankyrin-repeat-domain-containing protein was found both in this basal insect and in Galleria.
Antimicrobial peptides and proteins
Our transcriptomic analysis resulted in the identification of a large number of antimicrobial peptides and proteins (AMPs), among which the moricin-like gene family, the gloverins and the cecropins were prominent. We determined the presence of six genes coding for moricin-like proteins corresponding to the eight moricin peptide fragments (several of which are identical) reported from Galleria. Moricins have been shown to exert in vitro activity against both Gram negative and Gram positive bacteria, as well as against yeast and filamentous fungi [36]. Although protein sequence alignments show highly conserved blocks of amino acids, a phylogenetic analysis of moricin sequences from Galleria and other Lepidoptera indicates species-specific gene duplication events for some gene family members (e.g. most of the Bombyx moricins), while others cluster according to the species phylogeny (Figure 5). Moricins belong to the amphipathic α-helical antimicrobial peptides and were first discovered in the lepidopteran B. mori [37], while gloverins were first found in the silk moth Hyalophora gloveri [38]. The presence of moricins and gloverins seems to be restricted to Lepidoptera. Both proteomic and transcriptomic analyses confirmed the induced expression of gloverins and their secretion into the hemolymph of Galleria (Figure 3). We identified five members among the induced transcripts. Gloverins are basic and heat-stable proteins enriched in glycine residues but lacking cysteine residues (Figure 6). They interact with LPS and thereby increase the permeability and inhibit the formation of the outer membrane in bacteria. At least 7 gloverins have been reported from the genome of the silkworm B. mori [39].
Cecropins represent another group of linear and amphipathic peptides with α-helical structure. The first member of this peptide family exhibiting antibacterial and antifungal activity was discovered in and isolated from the hemolymph of the silk moth Hyalophora cecropia and has therefore been named cecropin [40]. The cecropin-like peptide from Galleria is synthesized as a prepropeptide, with a putative 22-residue signal peptide, a 4-residue propeptide and a 39-residue mature peptide with a mass of 4.3 kDa. Like cecropins from other insects, it exhibits potent activity against both Gram-positive and Gram-negative bacteria [41]. We have identified four different cecropins in the Galleria transcriptome dataset, including a more diverged D-type cecropin. This surprisingly large number of different cecropins (Additional file 4B) covers a large fraction of the amino acid diversity encountered when comparing cecropins from across the Lepidoptera.
We identified both cysteine-rich peptides reported from Galleria which exclusively inhibit the growth of filamentous fungi: the defensin-like antifungal peptides galiomicin [42] and gallerimycin [43]. At least the latter contributes to innate immune responses mediating resistance of G. mellonella larvae against normally lethal infection by the human pathogenic yeast C. albicans [14]. Transgenic expression of gallerimycin has been shown to confer resistance to fungal diseases on crops [43]. A homologue of spodoptericin, the third defensin-like peptide discovered in Lepidoptera [44], is also present in our Galleria transcriptome.
In a previous study, we used the suppression subtractive hybridization method to screen for genes that are induced in Galleria upon challenge with LPS [18]. This approach resulted in the discovery of novel peptide and protein families which were also found in this extended transcriptomic study. For example, we discovered a cobatoxin-like molecule and a protein which was named Gal-6-tox due to its six conserved tandem repeats of cysteine-stabilized alpha-beta motifs (CS-αβ), the structural scaffold characteristic of invertebrate defensins and scorpion toxins. Homologues of Gal-6-tox differing in the number of tandem repeats of the CS-αβ motif were later found in other lepidopterans such as Bombyx mori and Spodoptera exigua. It turned out that they belong to a novel family of atypical defensin-derived immune-related proteins, which is specific to Lepidoptera and which is now called X-tox [45]. Moreover, our study confirmed the induced expression of tenascin-like proteins in Galleria upon LPS challenge [18], which represent immune effector molecules known from vertebrates. However, using RACE-PCR we obtained the full-length cDNA, which is considerably shorter than vertebrate tenascins and lacks characteristic tenascin domains such as fibronectin type-3-like repeats. These findings make the relation of the identified sequences to tenascins unlikely.
With transcriptomic and proteomic analyses we also found different lysozymes, one of which was first identified in Galleria more than 40 years ago, representing the first antimicrobial protein reported from insects [46]. It shares structural similarity with c-type (chicken-type) lysozyme [47], and its activity against Gram-positive bacteria has been attributed to its ability to degrade cell wall peptidoglycan by hydrolysis of the β-1,4 linkages between N-acetylglucosamine and N-acetylmuramic acid residues. Besides moderate activity against Gram-negative bacteria [48], Galleria lysozyme was also shown to exhibit antifungal activity in vitro [49], similar to that of human lysozyme against the pathogenic yeasts Candida albicans and Coccidioides immitis [50]. We identified four c-type lysozyme homologues and an additional i-type lysozyme whose function remains to be elucidated. To further examine the relationships among the lysozyme proteins identified in Galleria and those found in other insects, c-type lysozyme sequences from 12 insect species and from human were aligned and used to construct a gene phylogeny (Figure 7A). The phylogenetic analysis revealed that these sequences clustered in two distinct clades. One of these clades, clearly separated with high bootstrap support, contains most of the lepidopteran lysozymes, including a group with two of the four Galleria lysozymes identified here and a previously identified partial lysozyme sequence (Figure 7B). These findings suggest that Galleria exceeds the number of c-type lysozymes found in other Lepidoptera (e.g. three lysozymes identified in the genome of Bombyx), supporting the idea of species-specific lysozyme gene duplication events in Galleria leading to paralogues with potentially different functions.
The invertebrate i-type lysozymes, although somewhat diverged in their activities, encompass a group of proteins with highly related primary structures. They differ from other lysozymes in having 10 or 12 cysteine residues in the primary sequence. The latter are predicted to form five or six disulfide bonds, which have been suggested to confer stability against heat denaturation, high osmolarity and proteolytic degradation. Although several i-type lysozymes have been shown to be active at low temperatures, the enzyme is stable even after prolonged heating or long-term storage at room temperature [51]. The i-type lysozymes are coded for by single-copy genes in Lepidoptera (Figure 8).
In addition, we identified a full-length cDNA sequence whose deduced protein sequence is identical to that of a Galleria proline-rich peptide [52] and almost identical to two protein fragments identified in a previous study analyzing hemolymph peptide fragments in Galleria [53]. Finally, our transcriptomic analysis confirmed the presence of genes encoding cobatoxin-like peptides [18,45] (Additional file 4C).
Inhibitors of microbial proteases
The immunity-related transcriptome of Galleria comprises a number of serine proteinase inhibitors, among which we identified three genes encoding inhibitors (ISPI-1, ISPI-2, and ISPI-3) that have previously been purified from hemolymph and whose amino acid sequences were partially determined by Edman sequencing. All three ISPIs, with molecular masses between 9.2 kDa (ISPI-1) and 6.3 kDa (ISPI-3), were determined to be capable of inhibiting the major virulence factors, designated Pr1 and Pr2, of the entomopathogenic fungus Metarhizium anisopliae, which is used worldwide in biological control of pest insects [54]. ISPI-1 and ISPI-3 do not share sequence similarity with other known proteins, whereas ISPI-2 turned out to represent a Kunitz-type inhibitor (Figure 9). In addition, we found transcripts encoding the insect metalloproteinase inhibitor (IMPI), which represents the first and to date only peptide known from animals that is capable of specifically inhibiting thermolysin-like microbial metalloproteinases belonging to the M4 family [55]. The latter encompass a number of prominent members, such as aureolysin, bacillolysin, pseudolysin and vibriolysin, which are produced by human pathogenic bacteria. Thermolysin-like metalloproteases are recognized to be responsible for a number of symptoms associated with severe infections, such as increased vascular permeability, hemorrhagic edema, sepsis and necrotic tissue destruction in infected humans, and have therefore been implicated as targets for the development of second-generation antibiotics [56]. The IMPI was discovered in and purified from Galleria larvae which were pre-injected with LPS [57]. The amino acid sequence of the IMPI shares no similarity with other known proteins, and its prominent stability against heat and acid treatment has been attributed to its five intra-molecular disulfide bonds. The IMPI gene has recently been found to encode two distinct inhibitors.
The IMPI peptide encoded by the N-terminal part contributes to innate immune responses by inhibiting microbial metalloproteases, whereas the IMPI peptide encoded by the C-terminal part of the gene has been implicated in the regulation of endogenous matrix metalloproteinases with pleiotropic functions in immunity and development [58].
Transferrin
Several induced transcripts encode transferrin, which represents a multifunctional and evolutionarily conserved player in innate immunity. Its role in binding and removing available free iron ions, thus creating unfavorable environments for bacteria, was first reported in vertebrates [59]. A recent study using B. mori confirmed both the induced expression of transferrin upon LPS challenge and its contribution to an antibacterial iron-withholding strategy in Lepidoptera [60].
Stress response genes
In line with our previous studies, in which we used LPS challenge to screen for inducible immunity-related genes in insects and other invertebrates [18][19][20]29,61], we determined induced expression of genes involved in detoxification and stress adaptation, such as apolipoprotein D, cytochrome P450s, glutathione S-transferases, and a number of heat shock proteins, which further supports our hypothesis that interdependencies between immune and stress responses are evolutionarily conserved in insects [18][19][20]29,61]. Glutathione S-transferases (GSTs) are a large and diverse family of detoxification enzymes found in most organisms. GSTs help to protect cells from oxidative stress, but they also play a central role in the detoxification of both endogenous and xenobiotic compounds (e.g. plant secondary metabolites or insecticides) and are involved in intracellular transport and biosynthesis of hormones. Eukaryotes contain multiple GSTs belonging to different GST classes and with differing enzyme activities to accommodate the wide range of functions of this enzyme family. The insect GST supergene family encodes a group of proteins that have been assigned to at least six classes: Delta, Epsilon, Omega, Sigma, Theta and Zeta [62,63]. The Delta and Epsilon classes, both specific to insects, are the largest classes and are often involved in xenobiotic metabolism, whereas the Omega, Sigma, Theta and Zeta classes have a much wider taxonomic distribution and likely play essential housekeeping roles [62,63]. Herbivorous insects have to cope with toxic plant metabolites taken up with their diet, and GSTs can play an important role in their detoxification [64][65][66].
We identified a total of 19 different GSTs in the Galleria larval ESTs, of which 2 were microsomal GSTs. Five of the six classes identified in other insect species are represented, and most of the Galleria GSTs belong to the insect-specific Delta and Epsilon classes, with 4 and 6 members, respectively. However, in contrast to a comparable larval EST dataset of the generalist plant herbivore lepidopteran H. armigera [67], both the total number of GSTs identified and the overrepresentation of the insect-specific GSTs are much lower in Galleria (Additional file 5). The insect-specific Delta and Epsilon GST classes are often involved in detoxification of xenobiotics, and the limited number of GSTs from those classes may point to the unique ecological niche and highly specialized diet of Galleria, which is devoid of any (potentially toxic) plant secondary metabolites.
Conclusions
We have generated a comprehensive larval transcriptome map of the phylogenetically ancient lepidopteran Galleria mellonella. This data set complements and massively expands the known spectrum of immunity- and stress-related genes of this model host found in previous studies using peptidomic [68] or SSH-based transcriptomic approaches [18]. Besides genes encoding proteins that mediate recognition of microbial signatures, such as GNBPs, βGRPs, PGRPs and Toll, or immunity-related signaling, we determined a broad spectrum of defence-related effectors, such as antimicrobial peptides and proteins, among which moricins and gloverins are restricted to Lepidoptera. In line with other studies, the spectrum of genes upregulated in response to injected LPS includes proteins involved in detoxification (apolipoprotein D, cytochrome P450s, glutathione S-transferases) and stress response (e.g. heat shock proteins). The secretion of induced immunity- and stress-related peptides and proteins into the hemolymph has been confirmed by comparative proteomic analysis of hemolymph samples from untreated and immunized larvae. Importantly, the spectrum of immunity-related genes identified in this study shares high similarity with that found in another lepidopteran species, the tobacco hornworm M. sexta, whose killed-bacteria-induced transcriptome has previously been analyzed by pyrosequencing [1]. Furthermore, except for attacins, we identified in Galleria members of all families of antimicrobial peptides which are predicted from the complete genome sequence of B. mori [39]. Taken together, we postulate that all effector molecule families contributing to lepidopteran innate immunity are present in the phylogenetically basal family Pyralidae, to which Galleria belongs.
The entirety of the generated data provides a valuable platform for more detailed analyses of immune responses in Galleria and will, therefore, improve the suitability of this lepidopteran both as a model host for human pathogens and for studies addressing coevolution with entomopathogens.
Insect material
Galleria mellonella individuals used here were obtained from the laboratory culture which has been used in our previous studies. Galleria caterpillars were reared on an artificial diet (22% maize meal, 22% wheat germ, 11% dry yeast, 17.5% bee wax, 11% honey, and 11% glycerin) at 31°C in darkness. Last-instar larvae, each weighing between 250 and 350 mg, were used for immunization using 10 mg/ml LPS dissolved in water (Sigma, Taufkirchen, Germany). Ten microliters of sample volume per caterpillar was injected dorsolaterally into the hemocoel using 1-ml disposable syringes and 0.4- by 20-mm needles mounted on a microapplicator. Larvae were homogenized at 8 h postinjection for RNA isolation or bled at 24 h postinjection to obtain hemolymph samples.
RNA extraction, cDNA normalization and Next Generation Sequencing
Total RNA was extracted from different life stages, from hemocytes, and from immune-challenged larvae (injections) using TRIZOL, and mRNA was subsequently isolated from total RNA using the MN-NucleoTrap mRNA kit according to the manufacturer's instructions (Macherey & Nagel). cDNAs were generated from 1 μg of poly(A)+ mRNA using the SMART PCR cDNA synthesis kit (BD Clontech) following the manufacturer's protocol. Reverse transcription was performed with the SMART kit reverse transcriptase (Takara) for 60 min at 42°C. In order to prevent over-representation of the most common transcripts, the resulting single-stranded cDNAs were normalized using the Kamchatka crab duplex-specific nuclease method (Trimmer cDNA normalization kit, Evrogen) [69]. Subsequently, SMART kit components and Triple-Taq enzyme with proofreading activity were used to generate full-length-enriched double-stranded long cDNAs. Each step of the normalization procedure was carefully monitored to avoid the generation of artefacts and overcycling. The optimal condition for ds-cDNA synthesis was empirically determined by subjecting the cDNA to a range of thermocycling numbers, and the products were checked by electrophoresis. The optimal cycle number was defined as the maximum number of PCR cycles without any signs of overcycling. The resulting normalized cDNA library was used for 454 pyrosequencing [70] using the Roche 454 FLX machine and for Sanger sequencing using an ABI 3730xl capillary sequencer. The 454 sequence reads were assembled using the Newbler assembler with standard settings, with the CLC Genomics Workbench used as an alternative assembly method. Before assembly, the obtained reads were preprocessed by masking poly(A) tails and removing SMART adapters using custom-written Perl scripts. We compared the resulting contigs to the refseq protein database containing all information on coding sequences so far obtained (March 2010).
Furthermore, we set up species-specific databases for Drosophila, Bombyx, and human in order to find species-specific similarities.
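The read preprocessing described above (masking poly(A) tails and trimming SMART adapters) was done with custom Perl scripts; the same idea can be sketched in a few lines of Python. The adapter sequence and thresholds below are illustrative assumptions, not the exact ones used:

```python
import re

# Hypothetical SMART-like adapter; the actual adapter sequence used is not given in the text.
ADAPTER = "AAGCAGTGGTATCAACGCAGAGT"

def preprocess_read(seq, adapter=ADAPTER, min_polya=8):
    """Strip a leading adapter (if present) and mask a trailing poly(A) tail with N's."""
    if seq.startswith(adapter):
        seq = seq[len(adapter):]
    # Mask a run of at least min_polya A's at the 3' end of the read.
    seq = re.sub(r"A{%d,}$" % min_polya, lambda m: "N" * len(m.group(0)), seq)
    return seq
```

Masking (rather than trimming) the tail preserves read length for the assembler while preventing spurious poly(A) overlaps.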
Sanger Sequencing and Generation of EST Databases
A fraction of the dscDNAs was cloned in the pGEM-T-easy vector. Ligations were transformed into E. coli ELECTROMAX DH5α-E electro-competent cells (Invitrogen). Plasmid minipreparation from bacterial colonies grown in 96 deep-well plates was performed using the 96 well robot plasmid isolation kit (NextTec) on a Tecan Evo Freedom 150 robotic platform (Tecan). Sequencing of both the 5' and 3' termini of cDNA library clones was carried out on an ABI 3730 xl automatic DNA sequencer (PE Applied Biosystems). Vector clipping, quality trimming and sequence assembly using stringent conditions (e.g. high quality sequence trimming parameters, 95% sequence identity cutoff, 25 bp overlap) was done with the Lasergene software package (DNAStar Inc.).
Blast homology searches and sequence annotation
We set up individual searchable databases for the complete sequence dataset and used them to identify the genes we describe in more detail in the text. Blast searches were conducted on a local server using the National Center for Biotechnology Information (NCBI) blastall program. Homology searches (BLASTx and BLASTn) of unique sequences and functional annotation by gene ontology terms (GO; http://www.geneontology.org), InterPro terms (InterProScan, EBI), enzyme classification codes (EC), and metabolic pathways (KEGG, Kyoto Encyclopedia of Genes and Genomes) were determined using the BLAST2GO software suite v2.3.1 http://www.blast2go.de [71]. Homology searches were performed remotely on the NCBI server through QBLAST, following a sequential strategy. First, sequences were searched against the NCBI nonredundant (nr) protein database using an E-value cut-off of 10⁻³, with predicted polypeptides of a minimum length of 15 amino acids. Second, sequences retrieving no BLASTx hit were searched again by BLASTn against the NCBI nr nucleotide database using an E-value cut-off of 10⁻¹⁰. The GO data presented represent the level 3 analysis, illustrating general functional categories. Enzyme classification codes and KEGG metabolic pathway annotations were generated from the direct mapping of GO terms to their enzyme code equivalents. Finally, InterPro searches were performed remotely from BLAST2GO via the InterPro EBI web server. In order to obtain a rough transcriptome coverage estimate for the Galleria larval cDNA library, we went through a series of search steps in order to i) obtain all hits against the conserved KEGG pathway database and ii) estimate genome coverage by identifying the complete ribosomal protein dataset as compared to the full B. mori set. Based on these findings, we estimate the theoretical transcriptome coverage to be close to 90% (e.g. 77/79 B. mori ribosomal proteins were found).
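The sequential homology-search strategy above (BLASTx against nr at E ≤ 10⁻³, then BLASTn at E ≤ 10⁻¹⁰ only for sequences without a protein hit) amounts to a simple per-contig decision rule. A minimal sketch; the function name and argument shape are illustrative, not from the original pipeline:

```python
def annotate(blastx_evalue=None, blastn_evalue=None):
    """Return the annotation route for one contig, mirroring the two-step
    search: protein (BLASTx) hits are preferred, nucleotide (BLASTn) hits
    are a fallback, each with its own E-value cut-off."""
    if blastx_evalue is not None and blastx_evalue <= 1e-3:
        return "blastx_hit"
    if blastn_evalue is not None and blastn_evalue <= 1e-10:
        return "blastn_hit"
    return "no_hit"
```

Note that the fallback threshold is far stricter (10⁻¹⁰) than the protein one, since nucleotide matches at a given E-value are less informative about function.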
Nucleotide sequences were analyzed in more detail using the commercial Lasergene Software package and the freeware BioEdit program. Genes were aligned by their amino acid sequences using the ClustalW function [72] or the MAFFT program. If necessary, alignments were then corrected by eye and reverted back to the nucleotide sequences for the phylogenetic analyses and in order to remove redundant contigs.
Sequence submission
We have deposited the EST (Sanger) and short read (454 Roche) data with the following accession numbers: ERP000555 (SRA) and JG394435-JG406465 (dbEST). Phylogenetic data was deposited at TreeBASE with submission ID 11389. All of the predicted protein sequences used for alignments and phylogenies can be found in additional file 6. Note that the names of the validated proteins are made from the letters Gme followed by the number of the contig from the assembly. An assembly of the Galleria data with contig consensus sequences, Blast2GO hits against nr database, hit accessions, and annotations including InterPro scans can be found in Additional file 7.
Phylogenetic reconstruction
The phylogenetic reconstruction implemented for the analysis of several proteins was performed using two different methods. For the Neighbour-Joining (NJ) method we used the TREECON program. Amino acid sequences were aligned by MAFFT http://mafft.cbrc.jp/alignment/server/index.html and each alignment was visually inspected for regions of high-quality alignment. The NJ consensus tree was generated with TREECON. Distance calculations were performed after Tajima & Nei, and bootstrap analysis was run with 1000 bootstrap samples. Conserved residues in the alignments were highlighted with BOXSHADE 3.21 http://www.ch.embnet.org/software/BOX_form.html. In addition to the Neighbour-Joining method, for some gene trees the phylogenetic reconstruction was done by Bayesian inference using MrBayes 3.1. The prior was set for the amino acid models to mix, thereby allowing model jumping between fixed-rate amino acid models. Markov Chain Monte Carlo runs were carried out for 10,000,000 generations, after which log-likelihood values showed that equilibrium had been reached after the first 5000 generations in all cases; those data were discarded from each run as 'burn-in'. Two runs were conducted for the dataset, showing agreement in topology and likelihood scores. The Neighbour-Joining and the Bayesian tree topologies, including their general subfamily relationships and node supports, were in agreement. The gene trees were visualized and optimized with the MEGA4 software package [73].
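The 1000-replicate bootstrap used for the NJ trees resamples alignment columns with replacement and rebuilds a tree from each pseudo-alignment. The column-resampling step (tree building itself omitted) can be sketched as:

```python
import random

def bootstrap_alignment(alignment, seed=None):
    """Resample alignment columns with replacement to produce one bootstrap
    replicate. `alignment` is a list of equal-length sequence strings."""
    rng = random.Random(seed)
    n_cols = len(alignment[0])
    cols = [rng.randrange(n_cols) for _ in range(n_cols)]
    return ["".join(row[c] for c in cols) for row in alignment]
```

Each replicate has the same dimensions as the input, but some columns are duplicated and others dropped; node support is then the fraction of replicates whose tree recovers a given clade.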
Two-dimensional gel electrophoresis of hemolymph proteins
Proteomic analysis of immune hemolymph was performed as described previously [21]. In brief, hemolymph samples from 10 larvae 24 h post immune challenge and from 10 untreated larvae used as controls were collected directly into 1.5 ml pre-cooled plastic tubes containing traces of phenylthiourea to prevent melanisation reactions. Hemocytes were removed by a brief centrifugation step, and cell-free hemolymph was precipitated by the addition of 3 volumes of 100% acetone and 0.4 volumes of 100% trichloroacetic acid and incubation at 20°C for 1 h. After centrifugation at 20,000 × g for 10 min, the pellet was washed three times with 100% acetone and dissolved under agitation in 8 M urea at 22°C for 16 h. Protein concentrations were determined using a Micro BC assay kit (Uptima, Montlucon, France). Two-dimensional gel electrophoresis was done with the Ettan IPGphor II system and the Ettan DALTsix electrophoresis unit (Amersham Biosciences, Uppsala, Sweden) according to the instructions of the manufacturer. Briefly, 1 mg of protein was mixed with immobilized pH gradient (IPG) buffer (pH 3 to 11 nonlinear gradient [NL]) and applied on an IPG strip (24 cm; pH 3 to 11 NL). Isoelectric focusing was performed at 20°C and 75 μA per IPG strip as follows: swelling for 24 h, isoelectric focusing for 1 h at 500 V, an 8-h gradient to reach 1,000 V, a 3-h gradient to reach 8,000 V, and isoelectric focusing for 4 h at 8,000 V. Prior to Tris-Tricine-sodium dodecyl sulfate (SDS)-polyacrylamide gel electrophoresis with 26- by 20-cm 15% gels, the strips were equilibrated with 6 M urea, 30% glycerin, 2% SDS, and 50 mM Tris-HCl at pH 8 for 30 min. After electrophoresis at 20°C, the gels were stained using colloidal Coomassie brilliant blue (Carl Roth). For image analysis, the gels were scanned using an Umax PowerLook II scanner and analyzed with Delta2D software (Decodon, Greifswald, Germany).
Spot identity was determined by comparing spots with our recent study [21], combined with additional peptide mass fingerprinting analyses. In brief, spots excised from the gel were carbamidomethylated and in-gel digested using mass spectrometry grade trypsin (Promega) in 0.025 M NH4HCO3. The mass spectra of the resulting tryptic peptides were recorded using an Ultraflex TOF/TOF mass spectrometer (Bruker Daltonik, Bremen, Germany) operating under FlexControl 2.4 (Bruker) in the positive-ion reflectron mode, with dihydroxybenzoic acid as the matrix. Peptide mass profiles were analyzed with a local Mascot installation http://www.matrixscience.com, using deduced protein sequences from our present Galleria transcriptome analysis database.
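Peptide mass fingerprinting matches the measured tryptic peptide masses against masses predicted in silico from the database sequences. A minimal sketch of the prediction side, assuming standard trypsin specificity (cleavage C-terminal to K/R, but not before P) and standard monoisotopic residue masses:

```python
# Monoisotopic amino-acid residue masses (Da); a peptide mass is their sum plus one water.
RES = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
       "V": 99.06841, "T": 101.04768, "C": 103.00919, "L": 113.08406,
       "I": 113.08406, "N": 114.04293, "D": 115.02694, "Q": 128.05858,
       "K": 128.09496, "E": 129.04259, "M": 131.04049, "H": 137.05891,
       "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931}
WATER = 18.01056

def tryptic_peptides(protein):
    """Cleave C-terminal to K/R, except when the next residue is P."""
    peptides, start = [], 0
    for i, aa in enumerate(protein):
        if aa in "KR" and (i + 1 == len(protein) or protein[i + 1] != "P"):
            peptides.append(protein[start:i + 1])
            start = i + 1
    if start < len(protein):
        peptides.append(protein[start:])  # C-terminal fragment
    return peptides

def peptide_mass(peptide):
    """Monoisotopic [M] mass of an unmodified peptide."""
    return sum(RES[aa] for aa in peptide) + WATER
```

A search engine such as Mascot scores the overlap between these predicted masses (within a tolerance) and the measured MALDI peak list.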
Additional file 2: Comparison of GO category representations between Bombyx mori (predicted genes) and Galleria mellonella transcriptome data. Each transcript was assigned applicable high-level generic GO terms. Data are presented for Biological Process and Molecular Function at GO level 3. Note that one gene object can be classified into more than one class; therefore, the total number of gene objects classified for both species is not identical to the number of contigs with GO associations.
Additional file 3: Gene phylogeny of βGRP protein sequences. A Bayesian phylogenetic tree of insect βGRP proteins. Bayesian posterior probabilities are shown for all major nodes supported with probability higher than 60%. Amino acid sequence alignments were performed using the MAFFT multiple alignment program without the predicted signal peptide and part of the N-terminus, as in some cases only partial sequence information was available. Identical residues are boxed with dark shading, and conserved residues are boxed with light shading. All Galleria βGRP sequences are depicted in red, and the group of βGRP sequences with beta-glucanase activity is shaded.
Additional file 4: Amino acid alignments of (A) Spaetzle, (B) cecropin and (C) cobatoxin sequences from Galleria and other insect species, deduced from conceptual translation of Galleria transcripts (ESTs) present in the larval dataset. All alignments were performed with MAFFT. Identical residues are boxed with dark shading, and conserved residues are boxed with light shading.
Additional file 5: Gene phylogeny of glutathione S-transferases (GSTs). Neighbour-joining phylogenetic analysis of glutathione-Stransferases from Galleria mellonella (Gme) and other insect species (accession numbers are given). Bootstrap values next to the nodes represent the percentage of 1000 replicate trees that preserved the corresponding clade. Positions containing alignment gaps and missing data were eliminated and not used for the generation of the phylogenetic analysis. An additional Bayesian analysis supported all major nodes with posterior probabilities higher than 60%.
Additional file 6: Deduced protein sequences from Galleria mellonella, other lepidopteran ESTs and NCBI sequences used in the phylogenetic analyses.
Additional file 7: Complete annotation file of the assembled Galleria ESTs. Contig IDs, sequence length, Galleria contig sequences, top BLAST hits (if any) in the NCBI nr database for each unique contig, including accession number, E-value and percentage similarity, EC numbers, GO annotations and InterPro scans are listed.
"year": 2011,
"sha1": "4e3e73c83ff51295e7ec847a5bbf3fc3f9ee0494",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/1471-2164-12-308",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3040851b81b4e897772293ad0c073e308b428428",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Sulfamerazine Schiff-base complex intercalated layered double hydroxide: synthesis, characterization, and antimicrobial activity
Cobalt (Co(II)) and copper (Cu(II)) complexes of a sulfamerazine-salicylaldehyde (SS) ligand intercalated into Mg/Al-layered double hydroxide [Co-SS-LDH/Cu-SS-LDH] were prepared for antimicrobial application. Sulfamerazine and salicylaldehyde were mixed and dissolved in methanol for the synthesis of the SS ligand, which was further modified by complexation with Co(II) and Cu(II) metal ions [Co-SS/Cu-SS]. The delaminating/restacking method was used to intercalate the Mg/Al-NO3-LDH with the metal-complexed ligands (Co-SS/Cu-SS). The obtained materials were analyzed using different characterization techniques to confirm their successful synthesis. The antibacterial activity of the synthesized Co-SS-LDH/Cu-SS-LDH was checked by the inhibition-zone method. The prepared hybrid materials showed good antimicrobial activity against both gram-negative (Escherichia coli, E. coli) and gram-positive (Staphylococcus aureus, S. aureus) bacteria.
Introduction
Layered double hydroxides (LDHs) are a class of two-dimensional solids that carry an excess of positive charge in their brucite-like layers [1]. LDHs are also known as anionic clays or hydrotalcite-like compounds [1,2]. LDHs have shown potential in various applications in recent years owing to their excellent recyclability, their large interlayer surface with a large pore volume and high specific surface area, and their easy preparation routes [1,2]. LDHs possess the skeletal structure [M²⁺₁₋ₓM³⁺ₓ(OH)₂]ˣ⁺(Aⁿ⁻)ₓ/ₙ·mH₂O, where M²⁺ and M³⁺ are di- and trivalent metal cations, Aⁿ⁻ is an interlayer anion, and x is in the range of 0.17–0.33 [2]. The introduction of anions (together with water) into the interlayer space compensates the excess charge that develops when a divalent cation is exchanged for a trivalent cation [2,3,4]. The interlayer anions are easily exchangeable with other anions, which is the most outstanding characteristic of LDHs and provides a high anion-exchange capacity [1]. In this way, LDHs can be used for the adsorption of metal cations from aqueous solution in spite of the positive charge present in the surface layer [5].
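The charge balance behind this formula can be checked numerically: a layer of composition M²⁺₁₋ₓM³⁺ₓ(OH)₂ carries a charge of +x per formula unit, which must be neutralized by x/n interlayer anions of charge n−. A small illustrative sketch (the anion charges used in the comment are textbook values, not taken from this paper):

```python
def anions_per_formula_unit(x, anion_charge):
    """Number of interlayer anions A^(n-) needed to balance a layer
    charge of +x per formula unit: x / n."""
    assert 0.17 <= x <= 0.33, "x outside the usual LDH range"
    return x / anion_charge

# For x = 0.25: 0.25 NO3- per formula unit, but only 0.125 CO3(2-).
```

The same relation explains why divalent anions such as carbonate are held so strongly: half as many ions carry the same compensating charge.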
Sulfamerazine is a sulfa drug used widely as a preventive and therapeutic compound against various bacterial infections, such as urinary tract infections, eye infections, actinomycotic infections, meningitis, and influenza [6,7]. In addition, sulfamerazine can serve as a model compound to probe the mechanisms of drug action. Better complexing ability and biological activity can be obtained by condensing sulfa drugs with aldehydes, ketones, or their derivatives [8].
Similarly, the biological activity can also be improved by complexation with metal ions. Compounds possessing a heterocyclic ring system with both nitrogen and sulfur in their structure show excellent biological activity [8]. Schiff-base ligands are regarded as powerful ligands owing to their ease of preparation via one-pot condensation of primary amines and aldehydes in an alcohol solvent [8,9].
Schiff bases are used widely in various fields, for example as intermediates in organic synthesis and in biological applications, catalysis, dyes, polymer stabilizers, and pigments [10,11]. Metal complexes of these ligands are ubiquitous because of their straightforward synthesis, potential for various chemical modifications, and wide applications. In particular, Schiff bases and/or their metal complexes are reported to possess a broad range of biological activities, such as antibacterial, antifungal, antipyretic, anti-inflammatory, antimalarial, antiviral, and anti-proliferative activity [12,13,14,15]. Various kinds of antibacterial agents have been disclosed in the past two decades [16].
Intercalation is a well-established approach for the inclusion of diverse organic compounds into the interlayer of hydrotalcite-like compounds or anionic clays for a variety of applications. The usability of guest molecules can be expanded by intercalating metal-containing anions into the interlayer space of hydrotalcite.
LDH nanocarrier-modified materials have shown excellent compatibility with cell tissues and sustained delivery behavior [17,18]. Recently, a synthesized LDH-indole acetic acid-liposome material showed potential for use in photodynamic therapy against skin melanoma [19]. Furthermore, the material displayed biodegradable and biocompatible properties as well as sustained drug-delivery behavior toward various drug molecules. Several research works have also focused on the examination of biomolecule-incorporated LDH hybrid materials for biological applications [20,21].
Furthermore, these drug-loaded LDH composites exhibit high chemical stability. Therefore, Schiff base-intercalated LDHs have become a hot topic for developing novel materials with antimicrobial activity and other related biomedical applications. In this work, we report the potential use of a Schiff-base-ligand-based metal-LDH hybrid for antibacterial application.
Materials
The metal nitrate salts, sulfamerazine, salicylaldehyde, and formamide were obtained from Sigma-Aldrich. Sodium hydroxide (NaOH, 97%) was purchased from Junsei Chemical Co. Ltd. All the chemicals were used as received. The pertinent volume of 2 M NaOH was used to control the pH of the solution.
Synthesis of [N-(salicylidene)-sulfamerazine] (SS)
Salicylaldehyde (1.22 g, 10 mmol) dissolved in absolute ethanol (50 mL) was added drop-wise with stirring to a solution of sulfamerazine (2.64 g, 10 mmol) in absolute ethanol (50 mL) in a 250 mL round flask. A yellow-colored solution was obtained by heating the reaction mixture under reflux for 6 h, after which the solid product was isolated [8]. The obtained product was filtered and recrystallized from ethanol. Yield: 58%. m.p. = 215 °C (see Fig. 1).

The Schiff-base ligand metal complexes were obtained by reacting Co(NO3)2·6H2O (0.291 g; 1.0 mmol) or Cu(NO3)2·3H2O (0.241 g; 1.0 mmol) with 0.378 g (1 mmol) of the Schiff-base ligand. First, the Schiff-base ligand solution was prepared separately in absolute ethanol (20 mL), followed by slow addition of the metal salt to the ligand solution [8]. The reaction mixture was stirred at room temperature for 1 h.
The obtained samples were filtered, washed with ethanol, and dried at 50 °C overnight (see Fig. 2).
Mg-Al-NO3-LDH synthesis
Mg-Al-NO3-LDH was synthesized using a co-precipitation method according to the reported procedure with slight modifications [22,23,24]. A mixed metal nitrate solution (0.25 M Mg²⁺ and 0.08 M Al³⁺) was adjusted to pH 9–10 with the addition of 2 M NaOH. A white precipitate was obtained by heating the reaction mixture at 80 °C for 18 h. The product was filtered, washed with water, dried at 85 °C, and ground into a fine powder.
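The metal concentrations chosen for the co-precipitation fix the layer-charge parameter x = [Al³⁺]/([Mg²⁺] + [Al³⁺]); a quick check that the batch above falls inside the 0.17–0.33 range quoted in the Introduction (here x ≈ 0.24):

```python
def layer_charge_x(c_mg, c_al):
    """Layer charge x = trivalent / (divalent + trivalent) for a
    co-precipitation batch, assuming quantitative precipitation."""
    return c_al / (c_mg + c_al)

# Molarities from the synthesis above: 0.25 M Mg2+, 0.08 M Al3+.
x = layer_charge_x(0.25, 0.08)
```

Quantitative precipitation of both cations is an assumption here; the actual layer composition would be confirmed by elemental analysis.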
Intercalation of metal complexes into Mg-Al-NO3-LDH
A translucent colloidal suspension was produced by adding the Mg/Al-NO3-LDH sample (0.075 g) into formamide (30 mL) and keeping it for 24 h under stationary conditions. 25 mL of ethanol containing the Co-SS or Cu-SS complex (0.3–0.4 g) was added separately to this suspension. During this procedure, the system changed from translucent to cloudy. The mixture was stirred vigorously for 24 h, followed by another 48 h of stirring at 70 °C and subsequent cooling to room temperature. The obtained materials were named Cu-SS-LDH and Co-SS-LDH (see Fig. 3).
Antimicrobial study
The antimicrobial study was performed according to the standard of the National Committee for Clinical Laboratory Standards (1993a) [25]. Gram-negative Escherichia coli (E. coli) and gram-positive Staphylococcus aureus (S. aureus) bacteria were used for the in-vitro antimicrobial screening of the prepared materials. The bacterial culture strains were procured from the Microbial Type Culture Collection and Gene Bank (MTCC), Institute of Microbial Technology (IMTECH), Chandigarh, India. Nutrient agar (NA) was prepared by the method reported in the literature [26]. The sterilized NA medium was solidified in petri dishes. A 100 μL sample of the bacterial cultures (E. coli and S. aureus) was spread and allowed to grow on the surface of the NA medium. Various concentrations (25, 50 and 100 mg/mL) of the material solutions were prepared in water, loaded onto sterilized paper disks (diameter: 6 mm), and incubated at 37 °C for 24 h. Water was used as the solvent for testing the antimicrobial activity of the complexes, since an organic solvent such as dimethyl formamide has been reported to have an inhibitory effect on bacterial strains such as Mycobacterium abscessus unless it is diluted significantly [27]. The inhibition zones appearing around the sample disks were taken as the antibacterial effect of the materials, and the inhibition zone diameters were recorded. The test was carried out three times in order to obtain average values.
Characterization
The X-ray diffraction (XRD) patterns were obtained over the wide-angle range from 5° to 70° 2θ in step-scan mode (step size 0.02°, with 1 s/step counting time) using a powder X-ray diffractometer (XRD, Bruker AXN) with Cu-Kα radiation.
High-resolution scanning electron microscopy (HRSEM, JEOL 6400, 20 kV accelerating voltage) was used to examine the surface morphology of the materials. The materials were loaded on a carbon tape surface, followed by platinum coating before analysis. Transmission electron microscopy (TEM, JEOL 2010, 200 kV accelerating voltage) was also performed. For the TEM measurement, samples were dispersed in ethanol and loaded on a copper grid surface by immersion, followed by drying under blown air. Fourier transform infrared (FTIR, JASCO FTIR 4100) spectroscopy was carried out in the frequency range from 400 to 4000 cm⁻¹ using KBr pellets of the samples. Thermogravimetric analysis (TGA, Perkin Elmer Pyris Diamond) was conducted at a heating rate of 10 °C min⁻¹ in air. The ¹H and ¹³C cross-polarization (CP) magic-angle-spinning (MAS) nuclear magnetic resonance (NMR) (Bruker DSX 400) spectra were recorded using dimethyl sulfoxide (DMSO) with tetramethylsilane (TMS) as the internal standard, using a 4 mm zirconia rotor spinning at 6 kHz (resonance frequencies of 79.5 and 100.6 MHz for ¹H and ¹³C CP MAS NMR, respectively). The ultraviolet (UV) absorption spectra of samples in absolute ethanol were recorded using a UV-visible spectrophotometer (U-2010, HITACHI Co.). Elemental analysis was performed using a CHNS analyzer (Carlo Erba Instruments, NQ 1500). The percentages of Co and Cu in the materials were determined using an inductively coupled plasma-optical emission spectrometer (ICP-OES; ACTIVA, JY HORIBA, Japan). Samples were dissolved in hydrofluoric acid prior to measurement.
Characterization of Mg/Al-NO3-LDH
The Mg/Al-NO3-LDH has already been thoroughly studied by many researchers.
Thus, detailed characterization data for the LDH can be found in the literature. Only a few fundamental characterization data are presented here, since the LDH synthesized for this work is used for further intercalation with the cobalt and copper complexes. The FTIR spectrum of the LDH is shown in Fig. 4b. The peak at 1384 cm⁻¹ was assigned to the ν3 vibration of NO3⁻ [29]. The stretching vibrations of hydroxyl groups and of surface and interlayer water molecules were observed as a broad and strong band at 3451 cm⁻¹ [23]. The bending mode of water molecules showed a weaker band at 1638 cm⁻¹. The bands at 446 and 672 cm⁻¹ were due to Al-O and Mg-O lattice vibrations, respectively [8,29]. Fig. 4c and d illustrate the HRSEM and TEM images of Mg/Al-NO3-LDH, respectively. The HRSEM image showed an agglomerated plate-like morphology of the LDH with various particle sizes (diameter: 30–60 nm and 100–500 nm). The TEM image also showed a similar plate-like morphology of the LDH with sizes of 30–500 nm [30,31].

XRD analysis
The loss of free amino groups upon reacting sulfamerazine with the aldehyde and the metal ions would reduce the crystallinity. The XRD patterns of Mg-Al-NO3-LDH, Co-SS-LDH, and Cu-SS-LDH are shown in Fig. 6, where characteristic peaks of Co-SS and Cu-SS as well as of the LDH were well observed when compared with Fig. 5 [28]. The XRD patterns in Fig. 6, however, clearly show shifts of the characteristic crystalline peaks to lower angles, and the intensities of the strong characteristic crystalline peaks of SS, Co-SS, and Cu-SS became weaker and broader. In particular, such a shift toward lower angles can be noticeably observed at low Bragg angles, between 5° and 25° 2θ. This result suggests that the Co-SS and Cu-SS were successfully intercalated into the LDH. The d003 basal spacing increased from 8.67 Å to 12.43 Å, giving a gallery height of the intercalated materials of 7.63 Å. The Debye-Scherrer equation was used to estimate the crystallite size from full width at half maximum (FWHM) values [32], D = Kλ/(β cos θ), where D is the size of the crystal grain; K is a constant (0.94); λ is the X-ray wavelength (1.5406 Å); and θ and β are the Bragg diffraction angle and the integral peak width, respectively. It should also be noted from Fig. 6 that the crystalline peaks of Co-SS and Cu-SS are maintained even after intercalation into the LDH, though some of the crystalline peaks of the LDH overlap with the Co-SS and Cu-SS peaks. It is considered that the SS does not melt in the organic complex and, moreover, that the metal ions have strong interactions with the functional groups in SS and form a stable material, as discussed for Fig. 5. The results are attributed to the fact that the anion-exchange property of the LDH with the metal-complexed Schiff base can maintain the stability even after intercalation [29]. In Fig. 7, the SS ligand showed vibrational bands of -OH, -NH, -CH, C=C, -C-N-C, and S=O groups, which remained even after complexation with Co(II) and Cu(II) ions. The peaks at 2999 and 2850 cm⁻¹ are due to the asymmetric and symmetric -CH stretching of the methyl group in sulfamerazine. The imino and hydroxyl groups in SS were evidenced by the broad band at 3440 cm⁻¹ [8,33].
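The basal spacing, gallery height, and crystallite size discussed in the XRD analysis follow from standard relations: Bragg's law d = λ/(2 sin θ), the Debye-Scherrer equation D = Kλ/(β cos θ), and gallery height = basal spacing minus layer thickness. A sketch, assuming the commonly used brucite-like layer thickness of about 4.8 Å (an assumed literature value, which reproduces the 7.63 Å gallery height from d003 = 12.43 Å):

```python
import math

WAVELENGTH = 1.5406  # Cu-K-alpha, in angstroms

def bragg_d(two_theta_deg):
    """Basal spacing d (angstrom) from a 2-theta peak position (degrees)."""
    theta = math.radians(two_theta_deg / 2)
    return WAVELENGTH / (2 * math.sin(theta))

def scherrer_size(beta_deg, two_theta_deg, k=0.94):
    """Crystallite size D = K*lambda / (beta * cos(theta)), beta = FWHM in degrees."""
    theta = math.radians(two_theta_deg / 2)
    beta = math.radians(beta_deg)
    return k * WAVELENGTH / (beta * math.cos(theta))

def gallery_height(d003, layer_thickness=4.8):
    """Interlayer gallery = basal spacing minus layer thickness (angstrom)."""
    return d003 - layer_thickness

# 12.43 - 4.8 = 7.63 angstrom, matching the gallery height quoted in the text.
```

Note that β must be converted to radians before use, a common source of error when applying the Scherrer equation.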
In addition, the FT-IR spectra showed an N-H band at 1645 cm⁻¹. The broad peaks at 1160 and 780 cm⁻¹ were due to the anti-symmetric stretch of the C-N-C moiety and the N-H wagging vibration, respectively. Furthermore, a peak due to the secondary amine also appeared at 695 cm⁻¹ in each spectrum [8,33]. The new bands in the 550–585 and 430–460 cm⁻¹ regions noticed for all complexes were assigned to stretching vibrations of the (M-O) and (M-N) bonds, respectively. Fig. 8 shows the successful intercalation of the Co-SS and Cu-SS complexes into the interlayer cavity of Mg/Al-NO3-LDH, as evidenced by the presence of the above spectral vibrations with decreased intensities compared to those before intercalation (Fig. 7). Table 1 lists the ¹H NMR spectral data of the different protons in the materials.
Article No. e01521
The imine (CH=N) and O-H protons in the SS ligand were detected at 8.9 ppm and 12.53 ppm, respectively (Fig. 9 and Table 1) [8,33,34,35]. The intensities of the imine (CH=N) and other aromatic proton peaks were partially changed by complexation with the metal, whereas the peak positions are almost identical to those in SS (Table 1) [8,33,34,35]. This reduced peak intensity in the complexes indicates the existence of a metal-nitrogen bond. The ¹³C NMR spectrum of Co-SS-LDH showed several strong peaks at 170.19, 134.64, 112.85, and 21.99 ppm along with some minor peaks, confirming the successful synthesis of the materials (Fig. 10a and Table 1). A similar result was also obtained for Cu-SS-LDH, with slight shifts in the peak positions (Fig. 10b and Table 1).
HRSEM and TEM images
The HRSEM image of Mg/Al-NO3-LDH showed a plate-like morphology, whereas a sheet-like morphology was observed upon intercalating Mg/Al-NO3-LDH with Co-SS (Fig. 11a). The mean particle size of Co-SS-LDH was in the range of 110–160 nm. The TEM image of Co-SS-LDH clearly showed a uniform distribution of LDH particles in the Co-SS, which suggests appropriate intercalation of the materials with each other (Fig. 11b). Fig. 11c and d illustrate the SEM and TEM images of Cu-SS-LDH. A surface morphology almost identical to that of Co-SS-LDH was observed upon the introduction of copper instead of cobalt in the hybrid system.

Fig. 12 shows the TGA curves of Mg/Al-NO3-LDH, Co-SS-LDH, and Cu-SS-LDH. As in previous work, the thermal decomposition of the Mg/Al-NO3-LDH followed four weight-loss steps, including the decomposition of interlayer anions and of the brucite-like layer, as well as the initial weight loss due to the removal of physically adsorbed and interlayer structural water molecules [4,28,36] (Fig. 12a). The thermal stability of Co-SS-LDH and Cu-SS-LDH is shown in Fig. 12b and c. The materials showed an initial weight loss up to 135 °C, due to the loss of physically adsorbed water molecules (Fig. 12b and c). The organic molecules slowly start to melt and decompose as the temperature increases from 135 °C to 330 °C. The temperatures of 5% and 10% weight loss were determined to be 90.5 °C and 140 °C for Co-SS-LDH, and 89.2 °C and 146 °C for Cu-SS-LDH, respectively. The decomposition of the organic complex from the intercalated cavity was observed in the range 330–450 °C, followed by a steady weight loss up to 800 °C (remaining residual mass at 800 °C: 54.4 and 54.7% for Co-SS-LDH and Cu-SS-LDH, respectively) [8]. The organic content in both materials was determined to be approximately 30.5 wt.%.
The obtained TGA curves clearly suggest that structural deformation occurred in both materials upon intercalation, as compared with the pristine LDH.
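The 5% and 10% weight-loss temperatures and the organic content quoted above are read off the digitized TGA curve by interpolation. A minimal sketch of that calculation, using hypothetical (temperature, residual-mass) points rather than the actual measured curves:

```python
import numpy as np

def weight_loss_temperature(temps_c, mass_pct, loss_pct):
    """Interpolate the temperature at which the sample has lost
    loss_pct percent of its initial mass from a digitized TGA curve."""
    target = mass_pct[0] * (1.0 - loss_pct / 100.0)
    # Mass decreases with temperature; np.interp needs ascending x,
    # so interpolate on the reversed arrays.
    return float(np.interp(target, mass_pct[::-1], temps_c[::-1]))

# Hypothetical TGA curve (temperature in deg C, residual mass in %),
# not the measured data for Co-SS-LDH or Cu-SS-LDH.
temps = np.array([25.0, 90.0, 135.0, 200.0, 330.0, 450.0, 800.0])
mass = np.array([100.0, 95.5, 90.0, 86.0, 82.0, 60.0, 54.4])

t5 = weight_loss_temperature(temps, mass, 5.0)
t10 = weight_loss_temperature(temps, mass, 10.0)
# Mass lost across the 330-450 deg C decomposition window
organic_pct = mass[list(temps).index(330.0)] - mass[list(temps).index(450.0)]
print(t5, t10, organic_pct)
```

Applied to the measured curves, the same interpolation would yield the 5% and 10% weight-loss temperatures reported above.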
UV-vis spectroscopy
The UV-vis spectra of Co-SS (Fig. 13Aa) and Cu-SS (Fig. 13Ab) both showed two characteristic peaks, at 273 and 350 nm, corresponding to the π-π* and n-π* transitions within the ligand molecule [35]. Co-SS-LDH (Fig. 13Ba) and Cu-SS-LDH (Fig. 13Bb) exhibited similar absorbance bands. The complex-intercalated LDH hybrids possess a relatively lower abundance of aromatic rings than the free ligand and exhibited a less prominent n-π* peak, partly obscured by the high reflectance in the visible region. These results support the intercalation of the complexes into the LDH.
Antibacterial properties
An agar disc diffusion method was employed to study the antibacterial activity of both LDH-metal complexes against E. coli and S. aureus. The inhibition zones are illustrated in Fig. 14, and the zone diameters around the discs are summarized in Table 2. It has been reported that the antibacterial activity of LDH can be improved when the pristine LDH is chemically or physically modified [37,38,39,40]. In this work, both Co-SS-LDH and Cu-SS-LDH likewise exhibited much better antibacterial activity against gram-negative E. coli and gram-positive S. aureus than the pristine LDH. The good antibacterial property of the complex-intercalated LDH is attributed to the release of the complexes to the bacterial cell membrane; interaction of the complexes with enzyme prosthetic groups can inhibit the replication of DNA. It is interesting to note from Table 2 that Cu-SS-LDH showed relatively better antibacterial activity than Co-SS-LDH, while both LDH-metal complexes exhibited marginally higher activity against E. coli than against S. aureus. Changes in the structure of the compounds arising from the polarity of the metal centres in the complexes may partly explain this difference in activity. The antibacterial activity also depends on the extent to which the positive charge of the metal is shared with the donor groups of the ligands.
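The comparison drawn from Table 2 (Cu-SS-LDH > Co-SS-LDH > pristine LDH, and E. coli > S. aureus) amounts to ranking mean inhibition-zone diameters. A small sketch with hypothetical replicate measurements, since the actual Table 2 values are not reproduced here:

```python
# Hypothetical inhibition-zone diameters (mm) from replicate discs;
# the actual measurements are in Table 2 of the paper.
zones = {
    ("Cu-SS-LDH", "E. coli"):   [14.0, 13.5, 14.5],
    ("Cu-SS-LDH", "S. aureus"): [13.0, 12.5, 13.5],
    ("Co-SS-LDH", "E. coli"):   [12.5, 12.0, 13.0],
    ("Co-SS-LDH", "S. aureus"): [11.5, 12.0, 12.0],
    ("LDH",       "E. coli"):   [8.0, 8.5, 8.0],
    ("LDH",       "S. aureus"): [7.5, 8.0, 8.0],
}

def mean_zone(sample, organism):
    """Mean inhibition-zone diameter for one sample/organism pair."""
    vals = zones[(sample, organism)]
    return sum(vals) / len(vals)

# Rank materials by mean zone diameter against E. coli
ranking = sorted({s for s, _ in zones},
                 key=lambda s: mean_zone(s, "E. coli"), reverse=True)
print(ranking)  # ['Cu-SS-LDH', 'Co-SS-LDH', 'LDH']
```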
Conclusions
In this work, we synthesized a new Schiff base ligand and subsequently complexed it with Co(II) and Cu(II) ions. The Mg/Al-NO 3 -LDH was then intercalated with each metal complex. The synthesized materials were studied by various characterization tools, which confirmed the successful formation of the ligand, the metal complexes, Mg/Al-NO 3 -LDH, and the LDH-intercalated metal complexes.
Furthermore, the antibacterial activity of the pristine Mg/Al-NO 3 -LDH and the LDH-intercalated metal complexes was evaluated. The results suggest that intercalation of the metal complexes into Mg/Al-NO 3 -LDH enhances the antibacterial activity against both gram-negative E. coli and gram-positive S. aureus, indicating that Co-SS-LDH and Cu-SS-LDH have potential for further biomedical applications. Cu-SS-LDH also exhibited slightly better antibacterial activity against both bacteria than Co-SS-LDH.
Declarations
Author contribution statement

Mary J. Barnabas: Performed the experiments.
Surendran Parambadath, Saravanan Nagappan: Analyzed and interpreted the data; Wrote the paper.
Chang-Sik Ha: Conceived and designed the experiments.
Open Approach to Primary Lumbar Hernia Repair: A Lucid Option
Background: Lumbar hernia is a rare type of hernia, and awareness of its anatomical basis is important for proper diagnosis and treatment. Introduction: A lumbar hernia is a protrusion of either extraperitoneal fat or intraperitoneal contents through either of the lumbar triangles. Primary lumbar hernias are extremely rare, rendering such a case reportable and worth bringing to the attention of upcoming surgeons. Case Report: A case of primary lumbar hernia treated successfully by open mesh repair is presented. Discussion: The anatomical aspects underlying this condition, along with diagnostic tests, their pitfalls, and surgical approaches, are discussed. Conclusion: Awareness of this condition is essential for arriving at a clinical diagnosis, and CT scanning provides a road map for deciding the approach. Both the traditional open and the newer laparoscopic approaches are described; however, open meshplasty remains a very safe and effective method of treatment.
Introduction
A lumbar hernia is best defined as a protrusion of either extraperitoneal or intraperitoneal contents through a defect situated in the posterolateral abdominal wall. Barbette was the first to describe this entity in 1672 [1]. Subsequently Petit and Grynfeltt described the anatomical boundaries of the inferior and superior lumbar triangles, respectively [2]. Majority of lumbar hernias arise from these anatomical sites. Due to the rarity of this type of hernia, diagnosis and management of this hernia always pose a challenge to the attending surgeon.
Case Report
A 54-year-old male presented with a swelling in the right lumbar region of one year's duration. There was no history of any undue straining or any medical comorbidities. Physical examination revealed a bulge arising below the twelfth rib (Figure 1). The bulge disappeared on lying down and became obvious on standing and coughing. Both a visible and a palpable impulse on coughing were appreciated. A contrast-enhanced CT scan revealed a defect in the posterolateral body wall with protrusion of omentum.
An open approach through a posterolateral lumbar incision overlying the swelling was adopted. The retroperitoneal fat constituted the hernia (Figure 2). The content was dissected up to the neck of the sac and the extraperitoneal fat was excised. The defect was clearly appreciated and delineated (Figure 3); the herniation was through the superior lumbar triangle. The loose lumbar fascia adjacent to the defect was dissected enough to create flaps for approximation, and the fascia was approximated with interrupted Prolene stitches (Figure 4). The attenuated muscle layers were undermined to create good muscle flaps. A Prolene mesh was placed over this fascial layer and fixed to the overhanging muscle flaps (Figure 5), and the muscle flaps were approximated over the mesh (Figure 6). Care was taken at every step to ensure a tension-free repair. The postoperative recovery was uneventful, and the patient has been followed up for one year with no recurrence.
Discussion
The rarity of lumbar hernias renders this condition enigmatic; a surgeon may not encounter this type of hernia in his entire career. Hence awareness of this distinct entity is pivotal to avoid mismanagement, and understanding the intricate anatomy of the region is essential for a good repair. There are two triangles described in this area: the superior lumbar triangle, described by Grynfeltt, and the inferior lumbar triangle, described by Petit. The boundaries of the superior triangle are the posterior border of the internal oblique muscle anteriorly, the anterior border of the sacrospinalis posteriorly, and the twelfth rib and the serratus posterior inferior muscle superiorly. The floor is formed by the aponeurosis of the transversus abdominis and the roof by the external oblique and latissimus dorsi. The inferior lumbar triangle is bounded by the external oblique anteriorly, the anterior border of the latissimus dorsi posteriorly, and the iliac crest below. The internal oblique constitutes the floor, and loose fascia the roof. The contents of a hernia arising from either of these triangles can therefore include retroperitoneal fat, kidney, colon, and omentum. There is an increased possibility of incarceration in these hernias.
Lumbar hernias can be classified into two types: congenital and acquired. Congenital hernias are very rare and are associated with multiple musculoskeletal anomalies in the region, typically described as the lumbocostovertebral syndrome [3]. The acquired variety may be either primary or secondary. The primary variety is very rare, with approximately 300 cases described in the literature. The secondary variety is usually seen after operations such as iliac bone graft harvesting or drainage of abscesses in the region [4].
The clinical presentation is quite straightforward, with a bulge in the lumbar region which exhibits both a visible and a palpable impulse on coughing. The bulge will disappear on lying flat in a lateral position and become prominent on standing up and coughing. Ignorance of the existence of this entity may lead to misdiagnosis as a lipoma or an abscess, which can have disastrous consequences [5,6]. Contrast-enhanced CT scan is essential before a surgical repair [7,8]. The only pitfall of CT scanning is that retroperitoneal fat is invariably misinterpreted as omentum. However, if organs find their way into the sac then these can be identified preoperatively, providing a road map for surgical repair [8].
Surgical repair is the mainstay of treatment [9]. The traditional open approach still holds true [10]. With the advent of minimally invasive surgery, the laparoscopic approach has gained popularity and is strongly advocated by some [10,11]. Open repair has evolved over a period of time; proper delineation of the defect followed by tension-free placement of a sublay mesh yields good results, especially in primary lumbar hernias. However, in secondary (acquired) lumbar hernias, advanced muscle flaps may be required in addition to the mesh to ensure complete coverage of the defect. Despite the best of surgical repair, failures have been described [4,11]. They have been attributed to limited fascial strength, weakening of the surrounding musculoaponeurotic structures, inadequate hold of sutures in weakened tissues, and bony edges. Laparoscopic repair may be done by either an extraperitoneal or a transperitoneal approach with placement of a mesh, and confers certain advantages: reduced operative morbidity, reduced pain, and early return to routine activity are established [11]. However, long-term outcomes with respect to morbidity and recurrence rates do not differ [11]. Therefore, depending upon the site and size of the defect, the contents of the sac, the attenuated state of the surrounding tissues, and cost, a tailor-made repair has to be performed to ensure a successful outcome of surgical intervention.
Conclusion
Awareness of the anatomy of the lumbar triangles is essential for prompt diagnosis of lumbar hernias.
A contrast enhanced CT scan is essential for confirming the diagnosis.
Repair can be done by both laparoscopic and open approach. Open mesh repair is an easy, safe, and effective means of curing this rare surgical condition.
Consent
Written informed consent of patient was sought specifically for photographs prior to preparing this publication.
Conflicts of Interest
The authors declare that there are no conflicts of interest.
Authors' Contributions
Dr. Ketan Vagholkar was the operating surgeon and was responsible for writing the manuscript. Dr. Suvarna Vagholkar was the assisting surgeon and was responsible for the literature review.
Membranes, molecules and biophysics: enhancing monocyte derived dendritic cell (MDDC) immunogenicity for improved anti-cancer therapy
Despite great medical advancement, cancer remains a disease of global significance. Chemotherapeutics can be very expensive and drain medical resources at a national level, and in some cases the cost of treatment is so great that it prohibits their use by local health authorities. Drug resistance is also a major limiting factor in the successful treatment of cancer, with many patients initially responding well but then becoming refractory to treatment with the same drug, and in some cases becoming multi-drug resistant. The immune system is known to be important in the prevention of tumors by eliminating pre-cancerous or cancerous cells. This concept of immune surveillance has largely been superseded by the concept of immunoediting, whereby the immune system imposes a selective pressure on tumor cells which may either control tumor growth or inadvertently select for tumor cells which have evolved to escape the immune response and which may drive tumor development. Stimulation of the immune system by vaccination offers many benefits in the treatment of cancer. It is highly cost effective, and vaccines can be manipulated to include multiple antigens, which in some cases may overcome equilibrium (and selective pressure) while also preventing the establishment of reactivated cancer cells, since cancer antigen-specific memory would be induced following the initial vaccination/booster phase. To date, studies using vaccination as a treatment for cancer have been a little disappointing, probably due to insufficient levels of immunogenicity. In this review we discuss methods of manipulating the immune system to increase the anti-cancer activity of dendritic cells in vivo, and how monocyte derived dendritic cells may be manipulated ex vivo to provide more robust, patient-specific treatments.
Review The global impact of cancer
Although new chemotherapeutics and greater diagnostic modalities have been developed, cancer remains a disease of global significance. There are around 12.7 million cases of cancer globally per year, and this is estimated to increase to 26 million per year by 2030 [1]. In 2010, 324,579 cases of cancer were diagnosed in the UK and 157,275 deaths due to cancer were recorded [2]. The enormous cost of the disease borne by governments and healthcare providers undoubtedly impacts negatively on other areas of health spending, and in many cases newly developed drugs are too expensive for health authorities to purchase. This cost has been estimated to increase from £30,000 to £40,000 per patient per year between 2010 and 2021 [3]. Furthermore, there is significant emerging resistance to chemotherapy in many cases [2], and a wealth of research has now been published on the mechanisms, detection and amelioration of resistance by tumours to chemotherapeutic drugs [4][5][6][7][8][9]. In the developing world many cancers remain untreated, due simply to the financial burden of current treatments, a burden made even more significant by the projection that the current 12.7 million new cases of cancer per year will rise to 26 million per year by 2030 [1]. Less expensive and effective new treatments would, therefore, be of obvious benefit in both the developed and developing world.
The concept of 'immuno-surveillance' in cancer biology is not new, and suggests that the immune response largely keeps in check the development of tumours. The development of efficient cancer vaccines therefore would appear to be a rational and cost effective therapeutic approach to the treatment of cancer.
The role of dendritic cells in adaptive immunity and vaccination
Antigen presenting cells (APCs) bridge the gap between innate and adaptive immunity, since APCs are innate immune cells which present antigens to the adaptive immune system (T and B lymphocytes). All APCs express major histocompatibility complex II (MHCII) as well as MHCI on their cell membranes, either constitutively or when activated. Antigen presentation to lymphocytes also requires a second (co-stimulatory) signal provided by molecules such as CD40, CD80 and CD86, which engage with their counterparts (CD40 ligand and CD28) on the surface of T lymphocytes. The extent to which these molecules are expressed on the surface of APCs is paramount to the induction of a robust lymphocyte response to antigen. Another factor which may influence the lymphocyte response is the number of lymphocytes with cognate T cell receptors that can engage antigen on the APC surface, and the most effective APCs have large surface areas due to membrane folding or cellular extensions (pseudopodia).
Amongst APCs, only dendritic cells (DCs) have the ability to present antigens to naïve T lymphocytes [10-11] and B lymphocytes [12], and the amount of antigen presented by DCs is orders of magnitude greater than that which can be presented by macrophages [13]. DCs are therefore essential in the development of adaptive immunity and as such are important targets for vaccine antigens and adjuvants. In humans, DCs exist as a number of different sub-populations. These include Langerhans cells (LCs) and interstitial DCs (iDCs) (found in peripheral tissues), conventional DCs (cDCs), which are further subdivided into type 1 and type 2 subsets (mDC1/mDC2), and plasmacytoid DCs (pDCs). In addition to these subsets, monocytes can be differentiated into monocyte derived DCs (MDDCs). Previously, the DC activation model suggested that precursors of LCs and iDCs migrate from blood into tissues and remain in an immature form until activated by pathogens or cytokines. Once activated, these cells mature as they migrate to the draining lymph node, where they present antigens to cognate lymphocytes [14][15]. However, more recent studies have indicated that this model may be too simplistic and that activation of DCs is more tightly controlled than was previously believed. For example, a recent study has suggested that migrated rat pseudo-afferent DCs do not mobilize CD40 stores to the cell membrane during interaction with steady-state T lymphocytes but do mobilize CD40 to the immune synapse during allogeneic interaction, and then only for a limited time period [16], and a study by Geissmann [17] has also shown that MHCII stores in human LCs migrating from inflamed skin into draining lymph nodes remain cytoplasmic, with only low levels of CD86 detected on the cell membrane. LCs may actually migrate into embryonic tissue and renew their population in situ rather than from blood precursors [18], as was previously thought.
However human DC populations are controlled in steady-state conditions or activated and renewed in vivo, it is clear that much of the biology of these DC subsets has yet to be revealed. Furthermore, the relative number of circulating DCs is very low, with cDCs representing <0.6% of total peripheral blood mononuclear cells [19][20] and pDCs representing <0.4% [21]. It would not be a viable proposition to obtain circulating DCs for in vitro manipulation (e.g., culturing with tumor antigen), since the numbers obtained would be much too low. With this in mind, strategies have been developed which attempt to expand DC populations in vivo or to manipulate MDDCs in vitro for reimplantation into the syngeneic patient (Figure 1).
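The impracticality of harvesting circulating DCs follows directly from the frequencies cited above. A back-of-the-envelope sketch of the maximum DC yield from a blood draw; the PBMC recovery per ml of blood is a hypothetical, typical figure, not a value from the text:

```python
# Upper-bound estimate of circulating DC yield from a blood draw, using
# the frequencies cited in the text (cDCs < 0.6%, pDCs < 0.4% of PBMCs).
PBMC_PER_ML_BLOOD = 1.0e6   # assumed PBMC recovery per ml of whole blood
CDC_FRACTION_MAX = 0.006    # cDCs < 0.6% of PBMCs
PDC_FRACTION_MAX = 0.004    # pDCs < 0.4% of PBMCs

def max_dc_yield(blood_ml):
    """Upper bound on (cDC, pDC) numbers recoverable from blood_ml of blood."""
    pbmc = blood_ml * PBMC_PER_ML_BLOOD
    return pbmc * CDC_FRACTION_MAX, pbmc * PDC_FRACTION_MAX

cdc, pdc = max_dc_yield(50.0)  # a 50 ml research blood draw
print(cdc, pdc)  # at most 3e5 cDCs and 2e5 pDCs
```

Even under this generous upper bound, a 50 ml draw yields only a few hundred thousand DCs, far short of what ex vivo manipulation and re-infusion would require.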
Expanding DC populations in vivo as a mechanism for increased tumor immunogenicity

Cytokines
Expanding the tumor antigen expressing DC populations would appear to be a rational approach to increasing the immunogenicity of cancer vaccines. One cytokine known to increase DC populations in lymphoid organs is fms-like tyrosine-kinase 3 ligand (Flt-3 ligand) [22][23][24], and daily administration of Flt-3 ligand has been shown to induce tumor regression and decrease tumor growth in mice with fibrosarcoma [25]. Flt-3 ligand and CD40 ligand were also shown to synergise to further expand DC numbers in mice with B10.2 or B10.5 tumors, and this treatment led to a reduction in tumor growth or, in some cases, complete cure [26]. Flt-3 ligand delivered via particle-mediated transfection was also reported to inhibit the growth of MCA205-induced sarcomas in mice, and in this study increased CD11C+ cDCs were shown in tumor tissues as well as increased CD80 and MHCII expression [27]. Intra-nodal administration of Flt-3 ligand with antigen-encoding RNA has also been reported to expand both cDC and pDC populations in a murine melanoma model [28]. In this study Flt-3 ligand and RNA induced expansion of nodal Th1 cells and increased tumor-specific CD8+ lymphocytes via antigen presentation by pDCs. The study also showed that 70% of mice were cured of tumors following Flt-3 ligand/RNA inoculation, compared with only 11% cured following inoculation with RNA alone. This suggests that Flt-3 ligand may act as a DC adjuvant and that its inclusion in vaccine formulations may significantly enhance the immunogenicity of cancer vaccines.
Conversely, a study by Taylor et al., [29] has reported that progression of murine myeloproliferative disease (MPD) (induced by an inactivating knock-in mutation in the RING finger domain of the c-Cbl E3 ubiquitin ligase) is prevented by treatment with the Flt-3 kinase inhibitor AC220, and remains so for as long as AC220 is administered. A study by Greystoke et al., [30] has also indicated that elevated Flt-3 ligand can be used to predict which patients will develop neutropenic sepsis following chemotherapy against lymphoma. It is difficult to say whether there was a true causal link in this case, but increases in plasma Flt-3 ligand in irradiated non-human primates were shown to be inversely correlated with neutropenia, and when peak concentrations subsided this was correlated with a return of total blood cell counts to normal [31]. The treatment of neutropenia usually requires administration of granulocyte-colony stimulating factor (G-CSF), and a more stable form of recombinant G-CSF (Pegfilgrastim) has been developed [32]. Interestingly, administration of Pegfilgrastim to patients with gynaecological malignancies was shown to significantly increase circulating numbers of both cDCs and pDCs, so as well as preventing the neutropenic sepsis caused by chemotherapy or possible future Flt-3 therapy, Pegfilgrastim appears to expand DC populations in vivo in its own right. Therefore, studies to date have certainly highlighted the immune-therapeutic potential of Flt-3 ligand in cancer, but the use of Flt-3 to increase DC populations in vivo may have some drawbacks which need to be studied more thoroughly.
Although Flt-3 ligand is probably the best studied of the molecules which expand DC populations in vivo, other cytokines have also been shown to have therapeutic potential. Macrophage inflammatory protein 3 alpha (MIP-3α) is an inflammatory chemokine which binds to CC-chemokine receptor 6 (CCR6) [33]. CCR6 is involved in the peripheral recruitment of immature Langerhans cells, but it is CCR7 expression by mature DCs which induces their migration into the T cell areas of lymph nodes. Vaccination of mice with another cytokine, granulocyte macrophage colony stimulating factor (GM-CSF), has been shown to have a very potent anti-tumor effect [34], and more recently Choi and Kim [35] reported that plasmids encoding MIP-3α and GM-CSF synergistically induced DC recruitment and increased T helper (Th) cell and cytotoxic T lymphocyte (CTL) activity at the site of murine EML/Muc1 tumors and in the draining lymph. However, there is also evidence to suggest that tumours may suppress the host immune response via CCR7 expression and that CCR7 expression may also be involved in metastasis [36], via ligation of the cytokine CCL21 [37]. It is clear therefore that the exact relationship between CCR7 expression, DCs and tumor regression, or indeed progression, needs further clarification.
Hohman et al., [38] were the first to identify a population of murine cells with both natural killer (NK) cell and DC-like properties. These cells were termed NKDCs, and later Chan et al., [39] reported that they produce IFN-γ and kill cells expressing tumor antigen. Recently it has been shown that murine splenic NKDCs proliferate in vitro when cultured with IL-21, and when transferred back into B16F10 (melanoma) tumor-bearing C57BL/6 mice, the IL-21-cultured NKDCs were associated with a reduction in tumor size [40]. This may suggest that administration of IL-21 with tumor antigen could increase the immunogenicity of cancer vaccines in vivo. However, one study has suggested that NKDCs may actually be a population of activated NK cells [41], and to date a similar NKDC population has not been reported in humans. IL-24 is another cytokine which has come into prominence recently and is being investigated for its effect on many different cancers. IL-24 was first identified as a protein product encoded by the melanoma differentiation-associated gene 7 (mda-7) [42] and was shown to have an anti-cancer effect in a number of cancerous human cell lines, including colorectal, prostate and cervical cancers [43]. Low expression of mda-7/IL-24 has been associated with increased nodal involvement in human breast cancer [44] and reduced survival of colorectal cancer patients [45]. One mechanism by which mda-7/IL-24 exerts anti-cancer effects is the promotion of cell death, through the generation of ceramide, the production of reactive oxygen species and increased ER stress, and mda-7 gene transfer via vectors such as adenovirus is currently being investigated [46]. High expression of mda-7 has also been shown in DCs within germinal centres of melanoma patients [47], which may suggest a different mechanism by which DCs suppress tumors: direct suppression rather than presentation of tumor antigens.
However, increased expression of molecules essential for antigen presentation (HLA-DR, CD40 and CD80) has also been measured in human DCs transfected with an IL-24-encoding adenovirus and pulsed with lysates from SMMC-7721 cells (a human hepatocellular carcinoma line) [48]. This was associated with increased production of IL-24, IL-12 and TNF-α, and when these DCs were cultured with cytokine-induced killer (CIK) cells, they stimulated greater lytic activity by the CIK cells against SMMC-7721 cells. Once again, these experiments suggest that IL-24 could be an important adjuvant if used in DC vaccines.
Other molecules
Carthamus tinctorius (CT), or safflower, is used in Chinese traditional medicine to improve blood circulation. A study by Chang et al., [49] reported that when murine bone marrow derived DCs were cultured with CT extract and pulsed with a JC cell (murine mammary adenocarcinoma) lysate prior to administration into tumor-bearing mice, they reduced tumor mass by >15%. This was associated with a skewed Th response towards IFN-γ-producing Th1 cells, and in vivo the CT-DC combination induced proliferation of CD8+ lymphocytes. An earlier study also reported that administration of CT polysaccharide into mice bearing either S180 sarcoma or LA795 lung tumors was associated with tumor inhibition or shrinkage, respectively, and with increased activity of CD8+ lymphocytes and natural killer (NK) cells, although DC activity was not studied [50]. Polyinosinic:polycytidylic acid (poly(I:C)) has also been reported to increase DC populations removed from cervical cancer patients, and when pulsed with E7 human papilloma virus antigen these DCs increased surface expression of CD11C and CD86 [51]. It should be noted that although CD11C is a classic marker for myeloid DCs, it also has very important immunological properties. For example, CD11C is a receptor for the complement protein iC3b and increases phagocytosis of iC3b-opsonized particles [52], while antibody blockade of CD11C on the surface of LCs ameliorates proliferation of allogeneic T lymphocytes [53], indicating a key role in the induction of innate and adaptive immunity. Poly(I:C) has also been shown to inhibit metastasis in a B16-F10 murine lung cancer model, in association with increased DC maturation, increased CD8+ (CTL) activity and skewing of the immune response towards Th1/Th17 [54], while administration of a combination of the synthetic lipopeptide Pam3Cys with poly(I:C) induced greater maturation of DCs and conferred tumor protection in mice [55].
The therapeutic potential for Monocyte derived DCs (MDDCs) in patient-specific cancer therapy
DCs can be derived ex vivo from blood monocytes (MDDCs), and since monocytes represent a large population of blood cells, large numbers of MDDCs can be generated in culture media containing granulocyte-macrophage colony stimulating factor (GM-CSF) and interleukin 4 (IL-4) [56]. Compared with their autologous monocytes, these MDDCs have an increased surface area due to the development of dendrites (Figures 2A and 2B) and express a characteristic surface phenotype: high levels of CD11C, CD83 and CD1a (which are not expressed by monocytes) and increased HLA-DR, but very low levels of CD14, which is highly expressed by monocytes (Figures 2C-2G). These immature MDDCs can then be matured using various other cytokines, including TNF-α and IL-1β [57], and we have recently shown that human MDDCs mature in response to IFN-γ, IL-1β and novel cytokines such as IL-36β or IL-36γ, allowing easy measurement of immunological parameters such as further increases in HLA-DR and CD83 with a concurrent decrease in CD1a [58] (Figures 3E-3G). Furthermore, in humans and in murine models, MDDCs appear to be a physiological population generated in vivo during inflammation [59,60], possibly serving to replenish spent peripheral DC populations. Autologous MDDCs can be safely injected back into recipients without inducing a host-versus-DC reaction (provided utmost sterility is achieved). This approach could allow patient-specific vaccines to be developed according to tumor type and antigen expression. As such, a number of studies have now reported that autologous MDDCs preloaded with relevant tumour antigen may be of some benefit in certain cancers [61][62][63].
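The monocyte-to-MDDC phenotype shift described above (gain of CD11C, CD1a and CD83, increased HLA-DR, loss of CD14) is exactly what flow-cytometry gating tests for. A minimal sketch of that gating logic; the marker thresholds are hypothetical and would in practice be set against isotype controls:

```python
# Hypothetical gating thresholds (arbitrary fluorescence units); real
# cut-offs are set against isotype controls for each staining panel.
THRESHOLDS = {"CD14": 500, "CD11C": 300, "CD1a": 200, "CD83": 150, "HLA-DR": 400}

def classify(cell):
    """Classify a cell from its marker intensities using the phenotype
    described in the text: MDDC = CD11C+ CD1a+ CD83+ HLA-DR high, CD14 low;
    monocyte = CD14 high, CD1a/CD83 negative."""
    hi = {m: cell.get(m, 0) >= t for m, t in THRESHOLDS.items()}
    if hi["CD14"] and not (hi["CD1a"] or hi["CD83"]):
        return "monocyte"
    if not hi["CD14"] and hi["CD11C"] and hi["CD1a"] and hi["CD83"] and hi["HLA-DR"]:
        return "MDDC"
    return "other"

monocyte = {"CD14": 900, "CD11C": 250, "CD1a": 20, "CD83": 10, "HLA-DR": 450}
mddc = {"CD14": 60, "CD11C": 800, "CD1a": 600, "CD83": 400, "HLA-DR": 900}
print(classify(monocyte), classify(mddc))  # monocyte MDDC
```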
Meta-analysis of the first 1000 MDDC vaccinees, with different tumours, demonstrated that about half exhibited a clinical response, and that MDDC vaccination was associated with virtually no adverse effects [64]. In another study, survival of patients with advanced melanoma was increased from 5 months to 24 months following injection of autologous MDDCs loaded with melanoma antigen, and was significantly correlated with increased numbers of IFN-γ producing Th1 lymphocytes [65]. It has also been reported [66] that MDDCs pulsed with carcinoembryonic antigen (CEA) induced an antigen-specific T cell response in 8 out of 11 patients with colorectal cancer, and that this technique was far superior to MDDC transfection using CEA mRNA. Phase I clinical trials using the CEA-loaded MDDCs in colorectal cancer patients indicated that they induced both a CD4+ and a CD8+ effector response, but that T suppressor activity may have impacted on the full effect of the vaccine [67]. MDDCs loaded with CEA and melanoma associated antigen 3 (MAGE-3) induced significant increases in the killing of target cells by CTLs [68]. In a murine model of melanoma therapy, MAGE-3 pulsed MDDCs induced tumour-specific CTL responses and inhibited tumour growth [69]. Recent studies have also shown that when mature MDDCs (designed to initiate Th1 cells) were pulsed with HER-2 antigen, they induced a robust IFN-γ response in addition to enhanced tumour cell-killing capacity in breast cancer patients. Furthermore, a recall response was measured after 52 months [70]. The data therefore indicate that tumour antigen loaded MDDCs are a realistic strategy for future cancer therapy. Although DCs have been shown to be important in anti-melanoma immune responses [71], advanced melanoma is normally associated with immune suppression. However, one study has shown that human melanoma-conditioned media had no effect on the maturation of MDDCs, cytokine production (including Th1-inducing IL-12) or T cell proliferation [72]. This may suggest that MDDC vaccines are an ideally suited therapy for melanoma. Conversely, expression of Galectin-1 by lung cancer cell lines (A549 and NCI-H460) induces IL-10 producing MDDCs, and IL-10-producing CD11C+ cells (DCs) have been shown to populate human lung tumors [73]. Since production of IL-10 by DCs is associated with the differentiation of tolerogenic T cells, this may suggest that MDDC vaccines could have detrimental effects on therapy, although the immune status of MDDCs loaded with lung tumor antigen has not been reported.

Figure 2. IL-4 and GM-CSF differentiate human monocytes into monocyte derived dendritic cells (MDDCs) in vitro. After 5 days of culture in IL-4 (10 ng/ml) and GM-CSF (50 ng/ml), human monocytes (A) differentiate into immature MDDCs (B), which are larger and have characteristic dendrites (arrows). MDDCs are also phenotypically distinct from monocytes, expressing high levels of CD11C, CD1a, CD83 and HLA-DR (C-F) but not CD14 (G) on their cell membrane.
By using tumour cell lysates [74], it has been demonstrated that a maximum of 5 mg/ml lysate can be loaded into 1 × 10^6 MDDCs, and that a minimum of 1 mg/ml over a 24 h culture period is required to produce measurable changes in DC immunogenicity. Such studies are critical to the future success of this strategy, since the ability to deliver one effective vaccine dose may be necessary, as booster vaccines using tumour-loaded MDDCs will be killed by memory CTLs. Dhodapkar et al. [75] have reported that a single injection of MDDCs is enough to induce a rapid, antigen-specific and broad T cell response in humans, and it is possible that techniques which can boost the immunogenicity of a single-injection vaccine will produce a greater clinical effect and could contribute to overcoming the effect of suppressor activity.
Enhancing MDDC vaccination by adequately engineering the cell membrane
The membrane is composed of two leaflets, which are themselves composed chiefly of lipids and other amphipathic molecules. For a long time these were considered neutral with regard to the complex biochemical processes taking place within cells, but this view has now changed, and it is well acknowledged that the lipid phase forming the boundary of cells is at least as important as the biochemistry taking place within them. How cells use their membrane has become a much-studied field of research, and from these studies the lipid phase of the membrane can be said to have two main properties: (i) it provides the stability required to allow transmembrane proteins to function properly, and (ii) it allows exchanges with the extracellular medium via the mechanism of membrane recycling, i.e., endocytosis and exocytosis. Both (i) and (ii) have been shown to depend strongly on the biophysical properties of the cell membrane as well as on the ability of cells to biologically control these processes.
What matters in the case of antigen presentation is the potential ability to control membrane exocytosis. Membrane recycling is driven by the ability of cells to form intracellular vesicles. The motor force behind the creation of vesicles resides, initially, in the membrane. The membrane is composed of two leaflets, but these leaflets are not randomly composed: some lipids are preferentially located within the inner leaflet (e.g., phosphatidylserine or phosphatidylethanolamine) whereas others remain in the outer leaflet (e.g., sphingomyelin). The non-random composition of the membrane is attributed to ATP-dependent lipid flippases ordering the membrane [76]. The creation of such an asymmetry in the type of lipids between the two leaflets changes the physical properties of the membrane, with the inner leaflet being more compacted than the outer one [77] (Figure 4). The energy of this compaction cannot be stored in the membrane; the cell must release it, and one way to do so is the creation of intracellular vesicles. Naturally, one could argue that the model suggests that too much endocytosis should deplete the plasmalemma of the cell. However, this never happens, as exocytosis and endocytosis are two faces of the same coin and as a result balance each other [78]. In essence, the lipid asymmetry between the two membrane leaflets permits a low cost in ATP usage and a constant and smooth recycling of the membrane.
How this process can be altered comes down to how the physical properties of the membrane are affected. If the lipid asymmetry is abolished, endocytosis will stop. If the lipid asymmetry is instead reversed, i.e., the outer leaflet is compacted more than the inner leaflet via the addition of exogenous lipid species, endocytosis will stop and exocytosis should be promoted [79][80][81]. In cells, not all receptors are displayed on the membrane; some stay within cells in endosomes or vesicles, and some may even recycle constantly between the membrane and the intracellular vesicles. By physically triggering membrane exocytosis it should be possible to empty all the intracellular compartments of their content, leading to higher "expression" of antigens and relevant immunological molecules such as MHC and CD40. Experimentally, exocytosis can be promoted by simple incubation of cells with phospholipids such as phosphatidylcholine (PC), which is not translocated by the aminophospholipid translocase.
Figure 4. How membrane lipids influence endocytosis and exocytosis
The lipid number asymmetry-induced fluid-phase endocytosis: Sketch representing the current model, as applied to living cells, linking fluid-phase endocytosis (A) and the membrane phospholipid number asymmetry maintained by a lipid flippase (the aminophospholipid translocase). In the left figure, the translocation of dark-head lipids into the inner leaflet induces a differential lipid packing between leaflets (namely a difference in surface pressures), leading to membrane bending and vesiculation. Note that it is assumed that the membrane recycling that occurs in cells, i.e., the exocytosis of vesicles of a size similar to endocytic vesicles (B), also allows the maintenance of the lipid asymmetry at the level of the plasmalemma. The relationship between the lipid number asymmetry and the vesicle radius can be determined fully by considering the physical biology of the cell membrane; see ref (75). A fundamental consequence resulting from the theory is that if the lipid asymmetry is inverted (i.e., if the outer leaflet contains more lipid than the inner leaflet), exocytosis should be promoted to balance the membrane stress. This was demonstrated by Rauch and Loughna (2005) using phosphatidylcholine (PC), a phospholipid that is not translocated by the aminophospholipid translocase (i.e., flippase): the ability of PC to promote exocytosis was demonstrated in C2C12 myoblasts expressing the glucose transporter GLUT-1 (B). Upon incubation with cells, PC switched the balance of GLUT-1 from the intracellular store compartment (in red) to the membrane (in green). Note that phosphatidylserine (PS), which accumulates in the inner leaflet thanks to the aminophospholipid translocase activity, does not generate exocytosis (C: control; PC: phosphatidylcholine; PS: phosphatidylserine).
"year": 2013,
"sha1": "93454b4f854889b5508c2722401efaedb10214d2",
"oa_license": "CCBY",
"oa_url": "http://www.hoajonline.com/journals/pdf/2049-7962-2-20.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "5c824a6111e1c1a33c0651264e20bcf9c9c7e2e8",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Optimization of an electromagnetics code with multicore wavefront diamond blocking and multi-dimensional intra-tile parallelization
Understanding and optimizing the properties of solar cells is becoming a key issue in the search for alternatives to nuclear and fossil energy sources. A theoretical analysis via numerical simulations involves solving Maxwell's Equations in discretized form and typically requires substantial computing effort. We start from a hybrid-parallel (MPI+OpenMP) production code that implements the Time Harmonic Inverse Iteration Method (THIIM) with Finite-Difference Frequency Domain (FDFD) discretization. Although this algorithm has the characteristics of a strongly bandwidth-bound stencil update scheme, it is significantly different from the popular stencil types that have been exhaustively studied in the high performance computing literature to date. We apply a recently developed stencil optimization technique, multicore wavefront diamond tiling with multi-dimensional cache block sharing, and describe in detail the peculiarities that need to be considered due to the special stencil structure. Concurrency in updating the components of the electric and magnetic fields provides an additional level of parallelism. The dependence of the cache size requirement of the optimized code on the blocking parameters is modeled accurately, and an auto-tuner searches for optimal configurations in the remaining parameter space. We were able to completely decouple the execution from the memory bandwidth bottleneck, accelerating the implementation by a factor of three to four compared to an optimal implementation with pure spatial blocking on an 18-core Intel Haswell CPU.
A. Photovoltaic devices and Maxwell's Equations
Photovoltaic (PV) devices play a central role in the recent transition from nuclear and fossil fuels to more environmentally friendly sources of energy. PV technologies range from well-established polycrystalline silicon solar cells with thicknesses up to 300 µm to more recent thin-film technologies with active layer thicknesses of only 1 µm or less. To improve these thin-film PV devices and make them more competitive against other renewable energy sources, optimization of their optical properties is decisive. The importance of optimal collection of the incident light can be seen in any of the most efficient recently developed solar cell designs [1].
To understand the effects of different light trapping techniques incorporated into PV devices and improve upon them, detailed optical simulations are necessary. The simulation code we investigate here uses the Time Harmonic Inverse Iteration Method (THIIM) [2], which is based on the staggered grid algorithm originally proposed by Yee [3] and uses the Finite-Difference Frequency Domain (FDFD) method to discretize Maxwell's Equations.
Although the algorithm performs stencil-like updates on the electric and magnetic field components, the code is different from the well-studied standard stencil benchmarks such as the 7-point stencil with constant coefficients emerging from a discretized Laplace operator. Multiple components per grid cell are involved since six coupled partial differential equations discretized by finite differences must be solved. It uses staggered grids, which results in non-symmetric data dependencies that affect the tiling structure. The loop kernels of the simulation code have very low arithmetic intensity (0.18 flops/byte for the naïve implementation), leading to a memory bandwidth-starved situation. The number of bytes per grid cell is large (40 double-complex numbers), which makes it difficult to maintain a sufficiently small working set to have the necessary in-cache data reuse for decoupling from the main memory bandwidth bottleneck.
The time-harmonic variants of Maxwell's equations are

∇ × Ĥ = (σ + iωε) Ê,    (1)
∇ × Ê = −(σ* + iωμ) Ĥ,    (2)

with permittivity ε, permeability μ, the electric and magnetic conductivities σ and σ*, and the frequency ω of the incident plane wave. The time-independent electric and magnetic field components are related to the time-dependent fields by E = Ê e^{iωτ} and H = Ĥ e^{iωτ}. After discretization of Maxwell's equations in time and space, an explicit leapfrog scheme with time step τ, time step index n, and source terms S_E and S_H is obtained, updating Ĥ^{n+1/2} from Ê^n and Ê^{n+1} from Ĥ^{n+1/2} (Equations 3 and 4). To model materials with negative permittivity (ε < 0, e.g., silver electrodes), the THIIM method applies a "back iteration" scheme to the electric field components of the corresponding grid points (Equation 5). With this method, the optical constants of any material can be used directly in the frequency domain without the need for any approximation or auxiliary differential equations [4], [5], [6], [7]. THIIM has proven to be numerically stable and to give accurate solutions for setups with metallic back contacts [8], [9] and also for the simulation of plasmonic effects, e.g., around silver nano wires [10]. A perfectly matched layer (PML) is used to allow absorption of outgoing waves, employing the split-field technique originally presented by Berenger [11]: all six Ê and Ĥ field components are split into two parts each. For example, the Ê_x component of Equation 1 is split into Ê_x = Ê_xy + Ê_xz, resulting in two equations, one driven by the ∂Ĥ_z/∂y term of the curl and one by the ∂Ĥ_y/∂z term. For all six vector components this procedure is performed on Equations 3, 4 and 5, resulting in a total of 12 coupled equations.
In order to represent complicated light-trapping geometries, such as rough interfaces between layers or curved particle surfaces, the Finite Integration Technique (FIT) [12] is applied on the rectangular structured grids. FIT treats curved interfaces accurately by integrating the material data on an unstructured tetrahedron grid and mapping the data back to the structured grid. Figure 1 shows a sample simulation setup of a thin-film tandem solar cell that can be simulated by the methods mentioned above [13]. The amorphous and microcrystalline silicon layers absorb different ranges of the incident spectrum. Their surfaces are etched to increase the trapping of light inside the cell. Atomic force microscopy is used to obtain height information that is then introduced between the layers in the simulation. Additionally, SiO2 nano particles can be deposited at the back electrode (Ag) to increase the scattering of light. For such a setup, PML boundary conditions are applied vertically and periodic boundary conditions horizontally.
B. Contribution
This work makes the following contributions: We optimize the multi-threaded (OpenMP-parallel) part of the THIIM code using temporal blocking. Our multi-dimensional intra-tile parallelization shows a significant reduction in the cache block size requirement, providing sufficient data reuse in the cache to decouple from the main memory bandwidth bottleneck. As a result, we obtain a 3×-4× speedup compared to an efficient spatially blocked code. In addition to the performance improvements, our results show significant memory bandwidth savings of 38%-80% of the available memory bandwidth, making the code robust even on systems with less memory bandwidth per core. Via appropriate cache block size and code balance models we show that cache block sharing is essential for decoupling from the memory bandwidth bottleneck. We validate these models by analyzing different tile sizes and measuring relevant hardware performance counters.
In the current usage scenarios for this code, the overhead caused by MPI communication is negligible or can be hidden by a dedicated communication thread. An in-depth analysis of the performance implications of communication overhead for the temporally blocked variant is out of scope for this paper and left for future work.
A. Background
Temporal blocking is a well-known technique to reduce the data traffic to the main memory for memory bandwidth-starved stencil computations. It allows the code to iterate multiple times over a subdomain that fits into a cache. In recent years, diamond tiling has moved into the focus of research on temporal blocking [14], [15], [16], [17]. Diamond tiling provides a convenient and unified data structure to maximize the in-cache data reuse [14], has low synchronization requirements, allows more concurrency in tile updates, and can be utilized to perform domain decomposition in a distributed memory setup [17].
Wavefront blocking is another important temporal blocking technique, introduced by Lamport [18]. It maximizes the data reuse in a given space-time block as long as the wavefront tile fits in the desired cache memory level.
The combination of diamond tiling with wavefront blocking in three-dimensional problems is becoming more popular. These techniques have recently been shown to yield good performance [16], [17], and diamond tiling has been implemented in the PLUTO framework [19]. Wavefront diamond blocking is usually applied to the outer two space dimensions. The fast moving (inner) dimension is usually left untouched for better performance, as confirmed in [16], [19], [20]. The contiguous memory access of a long inner loop is important for efficient hardware data prefetching and better utilization of the CPU execution pipelines.
Multicore-aware cache block sharing techniques, introduced in [21], are another way to reduce the data traffic in bandwidth-starved situations. Cache block sharing among the threads of the processor reduces the number of tiles required to fit in the cache memory. As a result, larger tiles can fit in the cache memory to provide more in-cache data reuse. This technique is particularly important for the THIIM stencil in this paper. The THIIM stencil requires many bytes per grid cell, which makes it challenging to fit sufficiently large tiles in the cache memory. We have introduced a more advanced cache block sharing technique in [22], where we propose multi-dimensional intra-tile parallelization to achieve a further reduction in the tile size requirements and maintain architecture-friendly memory access patterns.
Our experiments require a system that allows full control over the tunable parameters of a temporally blocked stencil algorithm. The open source system provided by Malas et al. [17], [22], called Girih, provides these options. Girih uses wavefront-diamond tiling with multi-dimensional intra-tile parallelization to construct a Multi-threaded Wavefront Diamond blocking (MWD) approach. It allows running multiple threads per cache block, fitting larger blocks in the cache memory to reduce the data traffic to main memory. The implementation leverages the performance counter API of the LIKWID multicore tools collection [23] to measure the data traffic in the memory hierarchy via the provided hardware counters.
Threads are assigned to the tiles in Thread Groups (TGs), similar to [21]. Multiple TGs can run concurrently, updating different tiles and observing inter-tile dependencies. The TG size parameter provides a controllable tradeoff between concurrency and sharing of the loaded data from memory among the available threads.
Diamond tiles are dynamically scheduled to the available TGs. A First In First Out (FIFO) queue keeps track of the available diamond tiles for updating. TGs pop tiles from this queue to update them. When a TG completes a tile update, it pushes to the queue its dependent diamond tile, if that has no other dependencies. The queue update is performed in an OpenMP critical region to avoid race conditions. Since the queue updates are performed infrequently, the lock overhead is negligible.
We use the auto-tuner in the Girih system to select the diamond tile size, the wavefront tile width, and the TG size in all dimensions to achieve the best performance. To shorten the auto-tuning process, the parameter search space is narrowed down to diamond tiles that fit within a predefined cache size range using a cache block size model. Figure 2 shows the diamond tiling implementation of the THIIM stencil kernel. We split the Ĥ and Ê field updates in the figure as they have different data dependency directions. The Ĥ and Ê fields have dependencies over the positive and negative directions, respectively, as illustrated in Fig. 3. Splitting the fields allows more data reuse in the diamond tile and provides proper tessellation of diamond tiles. As a result, a full diamond tile update starts and ends with an Ê field update. The horizontal (blue) lines in Fig. 3 divide the components into three regions, which can be handled by three threads. See below for details.
B. Multi-dimensional intra-tile parallelization
The extruded diamond tile is shown in Figure 4. We perform the wavefront traversal along the z dimension (outer dimension) and the diamond tiling along the y dimension (middle dimension). We do not tile the x dimension (fast moving dimension), as we split its work among multiple threads with simultaneous updates in the TG.
The staggered grid and multi-component nature of this application requires different intra-tile parallelization strategies than "standard" structured grid implementations. We use a Fixed-Execution to Data (FED) wavefront parallelization approach [22], which always assigns the same grid points to each thread while the wavefront traverses the tile. This idea maximizes the data reuse in the thread-private caches, since only boundary data instead of the full tile has to travel between threads. The corresponding performance improvement is very limited for simple stencils, but the THIIM stencil and high-order stencils do benefit from it.

Fig. 2: Diamond tile shape along the y dimension for the THIIM stencil. Although the Ĥ and Ê fields are updated in the same iteration of the simulation code, we split them in our tiling implementation to achieve better data reuse and better diamond tile tessellation.
We allow a concurrent update of the x dimension grid cells by the threads in the TG while the data is in cache. This handling of the x dimension has two advantages: it reduces the pressure on the private caches of the threads, and it maintains data access patterns that allow for efficient use of hardware prefetching and the Translation Lookaside Buffer (TLB).
The fixed amount of work per time step in the z and x dimensions leads to good load balancing, but parallelizing the diamond tile along the y dimension can be inefficient since the odd number of grid points at every other time step in the diamond tile makes load balancing impossible for more than one thread along the y dimension. Doubling the diamond tile width is possible, but it would result in doubling the cache block size without increasing the data reuse. Moreover, a load-balanced implementation cannot make the intra-tile split parallel to the time dimension, so more data will have to move between the private caches of the threads. As a result, we do not perform intra-tile parallelization along the diamond tiling dimension for this stencil.
We exploit the concurrency in the field component updates by adding a further dimension of thread parallelism: each field update can process its six components concurrently. We parameterize our code to allow 1-, 2-, 3-, and 6-way parallelism in the field update so that the auto-tuner selects the most performance-efficient configuration. For example, Fig. 3 shows a case of parallelizing the component updates using three threads.
In our spatial blocking and MWD benchmark implementations we use homogeneous Dirichlet boundary conditions in all dimensions to study the performance improvements of our techniques. We expect no significant changes in performance with periodic boundary conditions.
III. DETAILED ANALYSIS OF THE STENCIL CODES
Here we analyze the data traffic requirements per lattice site update (i.e., the code balance) of the stencil code for the naïve, spatially blocked, and temporally blocked variants.
Fig. 4: Extruded diamond tiling of the THIIM kernels, showing an example of D_w = 4 and W_w = 4. The data dependencies of the Ĥ and Ê fields allow more data reuse in the wavefront.
As described above, six components each are used for the electric field Ê and the magnetic field Ĥ. We show the code of two component updates of the THIIM stencil in Listings 1 and 2. The remaining three and seven component updates have very similar memory access and computation patterns. The H_XY update in Listing 1 uses three coefficient arrays (tHyx, cHyx, SrcHy) and the H_ZX update in Listing 2 uses two coefficient arrays (tHzx, cHzx). Overall (i.e., considering all component updates) this results in 4 · 3 + 8 · 2 = 28 domain-sized arrays for the coefficients. In total, 12 + 28 = 40 domain-sized arrays have to be stored using double-complex numbers, leading to a storage requirement per grid cell of 16 · 40 bytes = 640 bytes.
A. Naïve kernel arithmetic intensity
We count the total floating-point operations per lattice-site update (LUP) in the stencil code. The loop nests in Listings 1 and 2 perform 22 flops and 20 flops, respectively. In total we count 4 · 22 + 8 · 20 = 248 Double Precision (DP) flops/LUP. For calculating the data traffic we note that the loop in Listing 1 writes two double precision numbers, reads twelve numbers with no index shift, and reads four numbers with an outer dimension index shift (ishift). If we assume that all accesses to arrays with an outer dimension index shift (in Listing 1 these are Exy and Exz) actually go to main memory, we have a total traffic of 18 double precision numbers in this loop. Whether this is true or not depends on the problem size: if two successive x-y layers of those grids fit into the cache, the shifted and non-shifted accesses to the same arrays come at half the data transfer cost because the access with the smaller index comes from the cache. This reasoning is well known in stencil optimizations [24], [25]. At a problem size of 512³, two layers take up 512² · 16 · 2 bytes = 8 MiB of cache per thread and per array, which exceeds the available cache size by far. See the next section on how this can be corrected.
The code in Listing 2 writes two numbers and reads ten numbers without large index shifts. The shifted accesses to Exz and Exy can be ignored since the shift is only along the middle dimension, and two rows of the data easily fit into some cache. The third variant of array updates is identical to the second in terms of data transfers since it has a very small shift of −2 along the inner dimension only.
Overall we thus have a code balance of B_C = (4 · 18 + 8 · 12) · 8 bytes/LUP = 1344 bytes/LUP, corresponding to an arithmetic intensity of I = 248/1344 flops/byte ≈ 0.18 flops/byte.
B. Spatial blocking arithmetic intensity
The total load/store traffic to memory can be reduced by standard spatial blocking techniques, which establish "layer conditions" along the outer grid dimensions (see, e.g., [25] and references therein). Spatial blocking reduces the memory traffic in each of the four loop nests that are structured as shown in Listing 1 by four double precision numbers, if the blocking sizes in the inner and/or middle dimensions are chosen such that two successive layers of an array with index shifts in the outer dimension (highlighted in the listing) fit into a cache. The new code balance is thus B_C = (4 · 14 + 8 · 12) · 8 bytes/LUP = 1216 bytes/LUP, and the arithmetic intensity becomes I = 248/1216 flops/byte ≈ 0.20 flops/byte. The spatial blocking optimization improves the performance of the code by a mere 10% because the main contributors to the data traffic are not the electric and magnetic fields but the coefficient arrays. Spatial blocking is not effective for these because they are accessed with no temporal locality.
We can now predict the maximum performance for optimal spatial blocking using a simple bottleneck model [26]: the limit due to the maximum memory bandwidth b_S of the CPU is P_mem = b_S/B_C. The Haswell chip we used for our experiments has b_S ≈ 50 GB/s (see Sect. IV-A), hence P_mem = 50 GB/s / (1216 bytes/LUP) ≈ 41 MLUP/s. This prediction is in very good agreement with the measurements. See Sect. IV for details.
C. Diamond tiling arithmetic intensity and cache size requirements
The performance of temporal blocking techniques relies on a reduction of data traffic, especially to and from main memory. Data traffic models are very useful for understanding the expected or observed performance gains. We build a cache block size model and a code balance model based on [22]. The cache block size model estimates the maximum tile size that fits in the cache memory for this application by counting the working data set in the diamond-wavefront tile, such as the one shown in the y-z plane in Fig. 4. The total required number of bytes per tile is C_s = 16 · N_x · (40 · A_w + 12 · (D_w + W_w)). Each point in the diamond-wavefront tile extends over the full length of the x dimension (N_x) with double-complex values (8 · 2 = 16 bytes). The area of the wavefront tile in the y-z plane is A_w = D_w²/2 + D_w · (B_Z − 1), which depends on the diamond tile width (D_w) and the tile size along the z dimension (B_Z). Since each grid cell requires loading 12 components and 28 coefficients, we multiply the wavefront-diamond tile area by 40 numbers per grid cell. Finally, the 12 · (D_w + W_w) part corresponds to the neighbor access of the 12 components around the wavefront-diamond tile, where the wavefront tile width is W_w = D_w + B_Z − 1. For example, in Fig. 4 we have D_w = 4, B_Z = 4, and W_w = 7, so we have C_s = 14912 · N_x bytes per cache block.
For the code balance model we have to estimate the potential reduction of memory bandwidth pressure by temporal blocking. If the tile fits entirely in the L3 cache, the code loads each grid point once from main memory and stores it back only after completing the wavefront updates. We count the total reads and writes per diamond tile and divide by the diamond area (i.e., the data reuse). Each diamond update consists of writing six Ĥ field components per cell at full diamond width (D_w) and writing six Ê field components per cell at width D_w − 1. In total, each diamond requires 6 · (2 · D_w − 1) writes. The diamond tile requires reading 40 numbers per cell plus the neighbor accesses of the 12 components (40 · D_w + 12). The diamond area is D_w²/2. The code balance of the kernel for double-complex numbers is thus B_C = 16 · (6 · (2 · D_w − 1) + 40 · D_w + 12) / (D_w²/2) bytes/LUP. We validate our models and study the potential impact of our temporal blocking techniques using our Single-threaded Wavefront Diamond blocking (1WD) implementation. Figure 5 shows the model predictions (solid black lines) of the code balance and cache block size and the code balance measurements (dashed blue lines). The latter are based on a direct measurement of the memory data traffic via hardware performance counters. We test four diamond tile widths (4, 8, 12, and 16). The red vertical lines indicate the estimated usable block size in the L3 cache of the Haswell processor (as a rule of thumb we assume that half the overall cache size, i.e., 22.5 MiB, is available for tile data). Figures 5a-5c correspond to three wavefront width sizes (B_Z = 1, 6, and 9), where more concurrency is achievable along the z dimension at the cost of using a larger cache block size. We perform our tests on a grid of size 480³ on the 18-core Haswell processor using a single core and a single cache block.
The measurements show that the model accurately predicts the usable cache block size. The measured code balance diverges from the model when more than half of the L3 cache is used (the right side of the vertical red line), which is expected.
Our results also emphasize the importance of multi-dimensional intra-tile parallelism compared to parallelizing the wavefront only, where the maximum number of threads per tile is restricted by the wavefront tile width. Using B_Z = 6 would require three thread groups on the Haswell processor. As a result, the minimum diamond width D_w = 4 requires a cache block size C_s = 30 MiB, which exceeds the available cache memory. Although the cache block size of B_Z = 9 would fit in the L3 cache at D_w = 4, it cannot use larger diamond tiles to enable more data reuse. On the other hand, our approach provides parallelism along the other dimensions without increasing the cache block size (i.e., it uses smaller wavefront tile widths), which saves space for larger diamond tiles. For example, we can set B_Z = 1 and use nine threads per cache block along the other dimensions. This setup allows D_w = 8 with C_s = 20 MiB, providing more data reuse within the usable cache block size limit.
IV. RESULTS
We present performance results for the spatially blocked code, 1WD, and MWD with full parameter auto-tuning to show the performance improvements. In order to get more insight into the performance properties, we show thread scaling at fixed grid size and full-socket performance at increasing grid size (cubic domain). Since 1WD generally performs better than PLUTO and Pochoir, as we show in [22], we do not implement the THIIM stencil in these frameworks for comparison. We also present results using different thread group sizes to show the impact of cache block sharing on the memory transfer volume and the memory bandwidth.
A. Test hardware
We conducted all experiments on an Intel Haswell EP CPU (18-core Xeon E5-2699 v3, 2.3 GHz, 45 MB L3 cache, and 50 GB/s of applicable memory bandwidth), which features the most recent microarchitecture available to date in an Intel Xeon processor. The "Turbo Mode" feature was disabled, i.e., the CPUs ran at their nominal clock speed of 2.3 GHz, to avoid performance fluctuations. The chip was set up in a standard configuration with "Cluster on Die" (CoD) mode disabled, meaning that the full chip was a single ccNUMA memory domain with 18 cores and a shared L3 cache. The documented clock slowdown feature with highly optimized AVX code [27] was not observed on this machine with any of our codes. Simultaneous multi-threading (SMT) was not used.
B. Thread scaling results
We present performance results of the THIIM kernel at increasing numbers of threads for a fixed problem size in Figure 6a. We also show the memory bandwidth measurements in Fig. 6b, the measured code balance in Fig. 6c, and the auto-tuned MWD diamond width parameter in Fig. 6d.

Fig. 5: The cache block size requirements of the application's kernels at three wavefront widths (B_Z = 1, 6, and 9). We use an 18-core Haswell at grid size 480³, running a single thread with the 1WD approach. Smaller wavefront tile widths, which provide less concurrency along the z dimension, enable more data reuse.

The spatially blocked code saturates the memory interface already with six cores, resulting in a performance of around 40 MLUP/s. This is in very good agreement with the bandwidth-based prediction of 41 MLUP/s derived in Sect. III-B. Using separate cache blocks per thread in 1WD alleviates the memory bandwidth pressure and achieves better performance than the spatially blocked code at smaller thread counts, but the cache is too small to accommodate sufficient blocks at larger thread counts, so a performance drop is observed beyond twelve cores. This can be seen more clearly in Fig. 6b: 1WD goes into bandwidth saturation at ten cores. In contrast, MWD does not saturate the memory bandwidth and can still profit from more cores up to the chip limit, showing a parallel efficiency of about 75% on the full chip. It maintains a low code balance of 200-400 bytes/LUP for all thread counts (see Fig. 6c). The comparison of diamond width parameters selected by the auto-tuner in Fig. 6d is quite revealing: at larger core counts, 1WD requires smaller diamonds to meet the stringent cache size limit per core, whereas MWD can employ larger diamonds because several threads share a diamond tile for wavefront updates.
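The bandwidth-based saturation prediction is a simple roofline-style division; a hedged sketch (the ~1220 bytes/LUP spatial code balance used below is back-solved from the quoted 41 MLUP/s at 50 GB/s and is an assumption of this sketch, not a figure stated in this section):

```python
# Roofline-style saturation estimate: performance [MLUP/s] = bandwidth / code
# balance. The ~1220 bytes/LUP value for the spatially blocked code is an
# assumption back-solved from the quoted 41 MLUP/s prediction at 50 GB/s.

def predicted_mlups(bandwidth_gbs, code_balance_bytes_per_lup):
    """Memory-bandwidth-limited update rate in million lattice updates/s."""
    return bandwidth_gbs * 1e9 / code_balance_bytes_per_lup / 1e6

print(predicted_mlups(50, 1220))  # spatially blocked code
print(predicted_mlups(50, 300))   # a 300 bytes/LUP temporally blocked code
```

The second line shows why MWD's 200-400 bytes/LUP decouples the kernel from the memory interface: the bandwidth ceiling moves far above what the cores can actually sustain.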
C. Increasing grid size results
Although thread scaling, as shown in the previous section, reveals many interesting features of the 1WD and MWD algorithms, it is also instructive to study their behavior with changing problem size. We therefore present performance results of the THIIM kernel at different (cubic) grid sizes in Figure 7a, ranging from 64 to 512 with an increment of 64. We also show the auto-tuned MWD intra-tile parallelization parameters in Fig. 7b, the memory bandwidth measurements in Fig. 7c, and measured code balance in Fig. 7d.
1WD performance decays at larger grid sizes because the cache requirements increase as the leading dimension grows. The rise in the memory transfer volume seen in Fig. 7d suggests that the larger cache blocks cause more capacity misses in the L3 cache. Our auto-tuner selects a very small D_W = 4 at all grid sizes for 1WD, whose cache block already exceeds the available cache memory.
Our MWD implementation is decoupled from the memory bandwidth bottleneck over the full range of problem sizes. Compared to the spatially blocked code it has a 6× lower code balance, resulting in a 3×-4× speedup. The memory bandwidth measurements in Fig. 7c show that our approach is immune to even more memory bandwidth-starved situations, where the machine balance (ratio of memory bandwidth to computational performance) would be lower. The auto-tuner selects larger thread groups as the grid size increases, as shown in Fig. 7b, to reduce the cache size requirements. This allows diamond widths in the range 8-16. For all grid sizes, two or three threads are used for the parallel component updates. The component parallelism is a major contributor to reducing the cache block size requirements while maintaining high intra-tile concurrency. On the other hand, parallelizing the wavefront dimension alone would result in a larger cache block size, as described in Sect. III-C.
D. Thread group size impact on performance and memory transfers
In this section we show the impact of the thread group size (i.e., cache block sharing) on the THIIM kernel performance in Figure 8a, on the memory bandwidth measurements in Fig. 8c, and on the code balance in Fig. 8d. We also show the tuned MWD diamond tile width in Fig. 8b.
The cases 6WD, 9WD, and 18WD are able to decouple from the memory bandwidth bottleneck at large grid sizes, allowing them to achieve similar performance. The small performance variations make the auto-tuner select different thread group sizes, as shown in the case of MWD performance at grid size of 512 in Figs. 7 and 8.
Larger thread group sizes reduce the cache size requirements. As a result, increasing the thread group size allows the auto-tuner to select a larger diamond tile width, resulting in more in-cache data reuse, lower memory bandwidth usage, and a smaller memory transfer volume. The 18WD version uses at least D_W = 16 at all grid sizes, as shown in Fig. 8b. The massive in-cache data reuse of 18WD saves more than 38% of the memory bandwidth at all grid sizes. On a CPU with a smaller machine balance we expect an even more pronounced advantage for large thread group sizes.
V. RELATED WORK

Stencil computations are important kernels in many scientific applications, as they appear in many Partial Differential Equation (PDE) codes. As a result, optimization techniques for stencil computations are studied extensively in the literature. Datta [28] showed results for many optimization techniques over several processors and stencil computations.
Temporal blocking techniques for stencil algorithms have been the subject of intense study over the last two decades. Applying all those approaches to an application code in order to find out the highest performing candidate is out of the question, and it is also not necessary: We provide a comprehensive comparison of MWD and its variations (FED and multidimensional intra-tile parallelism) with the widely accepted state-of-the-art frameworks and techniques CATS2/1WD [16], PLUTO [19], and Pochoir [20] in a companion paper [22]. We show there that MWD outperforms the other techniques significantly for four carefully chosen "corner case" stencil schemes. Thus we have restricted ourselves to 1WD and MWD for optimizing the electromagnetics code.
Most of the proposed temporal blocking algorithms use a separate cache block per thread [16], [19], [20], [29]. Our work shows the inefficiency of these techniques for memory-starved stencils on contemporary CPUs. Cache block sharing techniques (introduced by Wellein et al. [21]), on the other hand, achieve better performance by utilizing the shared hardware caches of modern CPUs. Recently, Shrestha et al. [30] introduced cache block sharing techniques within the PLUTO framework to perform source-to-source transformation of stencil codes. To the best of our knowledge, all proposed cache-block-sharing temporal blocking techniques compromise tile size for intra-tile concurrency, which we show to be sub-optimal in this work.
For the simulation of solar cells the THIIM algorithm was chosen because it is a stable method for negative-permittivity materials such as silver. It allows direct use of the refractive index material data in the simulation without any further approximation or an auxiliary differential equation (ADE). To model lossy dispersive media, alternative methods introduce a convolution integral between the electric flux density and the electric field. This integral is then approximated by recursive convolution [4] or piecewise-linear recursive convolution [5].
Another approach expresses the convolution integral by an ADE, which is then discretized by finite differences [6]. Alternatively, a Z-transform (ZT) is applied to obtain field update equations [7]. The ADE and ZT approaches need reformulation for different dispersive media. All these alternative methods either introduce further unknowns or additional equations that need to be solved, increasing the memory requirements and computation time.
VI. CONCLUSION AND OUTLOOK
We have applied multicore wavefront diamond temporal blocking with multi-dimensional intra-tile parallelization to a Maxwell's Equations solver used in a solar cell simulation application, achieving a 3×-4× speedup and a 38%-80% memory bandwidth saving. This stencil code has very low arithmetic intensity (0.20 flop/byte for optimal spatial blocking) and requires many bytes of storage per grid cell (640 bytes). Applying thread parallelism inside shared cache blocks as well as across electric and magnetic field components was decisive in lowering the severe cache size constraints of the code. Using a validated cache block size and code balance model we were able to describe the impact of the tiling parameters and the cache size on the memory traffic and thus limit the effort of the auto-tuner. To our knowledge, none of the existing temporal blocking techniques in the literature can achieve a similarly efficient memory bandwidth reduction for such a memory-starved stencil kernel.
The design and optimization process of solar cells requires thousands of parallel runs of this code. In order to cover the whole visible wavelength spectrum for only a single solar cell configuration, about 80-160 simulations are needed. Our performance improvements reduce the turnaround time of each individual run and also the overall cost of the computations. We believe that our approach is applicable to many algorithms with similar characteristics, i.e., where the code has significant demand for memory bandwidth and cache size.
The solar cell simulation application in this paper uses a cubic domain shape. In many applications, from climate models to reservoir models, one dimension is significantly smaller than the other two, i.e., the domain is "thin." Our approach can benefit such applications significantly: Mapping the thin dimension to the leading array dimension helps both tiling in shared memory and domain decomposition in distributed-memory setups. For shared memory, we show in Eq. 11 that the cache block size is proportional to the leading dimension size, so we can use larger blocks in time with more data reuse. Although tiling a long leading dimension can also reduce the cache block size, it increases the pressure on the TLB and may lead to inefficient hardware data prefetching [25]. In distributed memory, decomposing the leading dimension is usually the most expensive option, as the halo layer is not contiguous in memory. Thin domains reduce the need to decompose the leading dimension while maintaining a favorable surface-to-volume ratio per subdomain. It is worth mentioning that very short leading dimensions (i.e., thin domains with fewer than about 50 cells) are inefficient because of bad pipeline utilization. This effect is amplified by long SIMD units, which lead to even shorter loop lengths and slow (scalar) remainder loops. In this situation the thin domain should be mapped to the middle or outer dimensions.
Our MWD work has eliminated the memory bandwidth bottleneck of this code. We are currently in the process of implementing periodic boundary conditions. These may be introduced along the x dimension by peeling the first and last iterations off the x loop to explicitly specify the contributing grid points at the other end of the domain. Furthermore, we can use the leftmost half-diamond in Fig. 2 to complete the rightmost half-diamond via a memory copy operation, and vice versa.
In the future we plan to investigate further the performance limitations within the core (in particular the SIMD vectorization) and the cache hierarchy, since the code runs at only about 5% of the theoretical peak performance of the CPU despite being cache bound. Hardware performance counter measurements and subsequent chip-level performance modeling will provide more insight here. The temporal blocking optimization will change the communication versus computation characteristics of the code, which also deserve an in-depth analysis.
"year": 2015,
"sha1": "3b043b019dd7b4f2c4724f26dc1f5c18e1806fa8",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1510.05218",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a1e967eedbe66555c40b39afc2bb8776dd6497cf",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
Recalibration of the Limiting Antigen Avidity EIA to Determine Mean Duration of Recent Infection in Divergent HIV-1 Subtypes
Background: Mean duration of recent infection (MDRI) and misclassification of long-term HIV-1 infections, expressed as the proportion false recent (PFR), are critical parameters for laboratory-based assays for estimating HIV-1 incidence. Recent review of the data by us and others indicated that the previously estimated MDRI of the LAg-Avidity EIA required recalibration. We present here the results of recalibration efforts using >250 seroconversion panels and multiple statistical methods to ensure accuracy and consensus.

Methods: A total of 2737 longitudinal specimens collected from 259 seroconverting individuals infected with diverse HIV-1 subtypes were tested with the LAg-Avidity EIA as previously described. Data were analyzed for determination of MDRI at ODn cutoffs of 1.0 to 2.0 using 7 statistical approaches and sub-analyzed by HIV-1 subtype. In addition, 3740 specimens from individuals with infection >1 year, including 488 from patients with AIDS, were tested for PFR at varying cutoffs.

Results: Using different statistical methods, MDRI values ranged from 88-94 days at cutoff ODn = 1.0 to 177-183 days at ODn = 2.0. The MDRI values were similar across methods, suggesting coherence of the different approaches. Testing for misclassification among long-term infections indicated that overall PFRs were 0.6% to 2.5% at increasing cutoffs of 1.0 to 2.0, respectively. Balancing the need for a longer MDRI and a smaller PFR (<2.0%) suggests that a cutoff ODn = 1.5, corresponding to an MDRI of 130 days, should be used for cross-sectional application. The MDRI varied among subtypes from 109 days (subtypes A&D) to 152 days (subtype C).

Conclusions: Based on the new data and revised analysis, we recommend an ODn cutoff = 1.5 to classify recent and long-term infections, corresponding to an MDRI of 130 days (118-142). Determination of revised parameters for estimation of HIV-1 incidence should facilitate application of the LAg-Avidity EIA for worldwide use.
Introduction
Laboratory methods to detect recent HIV infection and estimate HIV incidence using cross-sectional specimens continue to be a high priority because they have the potential to help monitor the leading edge of the epidemic, target resources, and evaluate the success of prevention programs in a very cost-effective and timely manner [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18]. Measurement of HIV-1 incidence is also critical for identifying high-incidence populations for prevention trials, including trials of the efficacy of candidate vaccines and other interventions.
The development of an optimal laboratory method for worldwide use has remained challenging due to the diversity of HIV-1 subtypes, biologic differences among populations, and limitations of the assays [1,[19][20][21][22][23][24][25]. Several reviews and reports have summarized the status of the evolving research in this area; they have stressed the need for accurate calibration of assays or algorithms, but substantive progress has been slow [4,22,[26][27][28][29][30][31].
In the absence of reliable laboratory methods, UNAIDS and others have derived incidence estimates based on mathematical modeling [32][33][34][35][36], while others have used prevalence in younger age groups or successive rounds of prevalence surveys to estimate incidence [37][38][39][40][41][42]. Incidence estimates based on mathematical modeling are retrospective, not timely, and have their own biases. Additional limitations of modeling include the inability to generate subgroup and risk-factor analyses, which are critical for understanding current transmission dynamics and for designing prevention strategies. In addition, increasing but variable ART coverage and decreasing mortality in most countries require the input of additional but uncertain parameters into the models, further contributing to potential biases.
In recent years, definitive progress has been made in the identification of new biomarkers and the development of assays, including molecular methods and rapid tests, to detect and distinguish recent from long-term infections [5][6][7][43][44][45][46]. Reliable laboratory assays, if available, are attractive because of their ease of use, applicability to cross-sectional populations, low recruitment bias, low cost, and provision of real-time incidence estimates. We recently described a novel, single-well limiting-antigen (LAg) avidity assay [5]. This novel concept was further developed into an optimized assay [6] and characterized with respect to its performance in multiple subtypes. Subsequently, we have transferred the assay to two commercial entities for development of a kit and have conducted field evaluations in several populations worldwide, in countries such as Vietnam, Ghana, Swaziland, and Kenya (to be published separately).
In March 2013, we organized a consultation meeting of experts to review data pertaining to the characteristics, performance, and validation of the LAg-Avidity EIA. One of the recommendations was a review of the mean duration of recent infection (MDRI) analysis. Although our previous report described an MDRI of 141 days at a cutoff ODn of 1.0, our and others' subsequent work indicated that the method used to determine the MDRI was not applied optimally and that recalibration of the assay was needed. We describe here revised estimates of the MDRI, using data from >250 seroconverter panels at various cutoffs and multiple statistical methods to ensure that these estimates are reliable, and we recommend a new MDRI at a preferred cutoff for application in cross-sectional incidence estimation.
Materials and Methods

Specimens
Longitudinal specimens (n = 2737) from 259 individuals infected with HIV-1 were collected as part of various cohort studies in different locales by different investigators. The specimens from consenting individuals were made available to permit development and characterization of new incidence assays, including the LAg-Avidity EIA. Some of the basic information about the cohorts, including source, number of seroconverters, available specimens, and likely or confirmed HIV-1 subtypes are shown in Table 1. Of the 259 individuals, 89 of them (n = 393 specimens) were part of our previous study [6]. The following HIV-1 subtypes were included in this study: HIV-1 subtype B (Thailand BMA IDU cohort [47], Amsterdam cohort [48] and Trinidad cohort), subtype AE (Thailand BMA IDU cohort [47]), subtype C (Ethiopia and China cohorts), subtypes A & D (Kenya CSW cohort [49]). Because early antiretroviral therapy can affect the development and maturation of HIV antibodies, only specimens from persons who were not on antiretroviral therapy (ART) were used for determination of MDRI. Time between last negative and first positive specimens ranged from 4 days to 1486 days for different panels with median interval of 125 days and mean of 171 days.
An additional 3740 specimens from treatment-naïve adult individuals with HIV-1 infection longer than 1 year were used to estimate the proportion of specimens misclassified as recent, termed here as the proportion false recent (PFR). The specimens were collected under multiple approved protocols that permitted use of left-over, unlinked specimens for research. This set included 1845 specimens from Vietnam, 952 specimens from Ghana, 455 specimens from China and 488 specimens from individuals with AIDS (CD4<200). Specimens from individuals with AIDS were derived from three sources: 261 specimens were collected in the 1990s from treatment-naïve women with AIDS enrolled in the HIV Epidemiologic Research Study (HERS) [50], while additional specimens were from Thailand (n = 128) and Cote d'Ivoire (n = 99), collected in the 1990s from treatment-naïve AIDS patients with (Cote d'Ivoire) or without (Thailand) tuberculosis (TB). This study was conducted under a protocol approved by Centers for Disease Control and Prevention (CDC) Institutional Review Board (IRB) titled "characterization, validation and application of HIV-1 incidence assays". Selected specimens were collected under multiple CDC approved protocols (IRB # 5533, 5758). Study was also approved by Bangkok Metropolitan Administration Ethics Committee, respective ministries of health and CDC. Individuals donating the blood specimens provided written consent for use of the specimens for biological research.
LAg-Avidity EIA
The LAg-Avidity EIA was performed as described earlier [6]. Following successful transfer of the technology to a company, commercially produced kits were used to perform the testing (Sedia BioSciences, Portland, OR). These kits were verified as having the same performance characteristics as our in-house assay, including a matching Calibrator (CAL) specimen, key to the classification of recent and long-term infection. Details of the assay steps, per the manufacturer's instructions, are as follows: Assay controls [negative control (NC), CAL, low-positive control (LPC), and high-positive control (HPC)] or HIV-positive specimens were diluted 1:101 in specimen diluent, and 100 μL of controls or specimens were added to appropriate wells of antigen-coated plates and incubated for 60 min at 37°C. Controls were included in duplicate (NC) or triplicate (other controls) on each plate, while specimens were tested in singlet. Plates were washed 4 times with 1x wash buffer to remove unbound antibodies. A pH 3.0 buffer was added to each well (200 μL/well) and incubated for 15 min at 37°C to dissociate low-avidity antibodies, if any. Following 4 washes, goat anti-human IgG peroxidase (100 μL/well) was added to each well and incubated for 30 min at 37°C. Tetramethylbenzidine substrate (100 μL/well) was then added and incubated for 15 min at 25°C. Color development was stopped by addition of 100 μL/well of 1 N H₂SO₄. The optical density (OD) was read at 450 nm with 650 nm as a reference using a spectrophotometer.
Raw OD for each specimen was normalized using the CAL OD on each plate as a ratio, such that normalized OD (ODn) = (OD of specimen / median OD of CAL). For the purpose of this exercise, all specimens were tested in two independent runs in singlet and the mean ODn was used for further analysis. Plates were validated using acceptable values of OD and ODn for each control and the CAL as determined for the kit. If one or more of the controls fell outside of the acceptable ranges defined in the kit insert, the run was rejected. Specimens were then retested and ODn values from only valid runs were used for analysis. To assist with data management and analysis, an Excel-based data management tool was developed to auto-validate each plate, calculate ODn, and classify specimens as recent or long-term infections based on the ODn = 1.5 cutoff.
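The normalization and classification logic can be sketched in a few lines; this is an illustrative reimplementation, not the actual Excel tool, and the kit-specific control-validation ranges are omitted:

```python
# Sketch of ODn normalization and recency classification.
# ODn = specimen OD / median CAL OD on the plate; the mean ODn of two
# independent runs is compared against the cutoff (low avidity => recent).
import statistics

def normalize(od_specimen, cal_ods):
    """Normalized OD against the plate's calibrator wells."""
    return od_specimen / statistics.median(cal_ods)

def classify(odn_run1, odn_run2, cutoff=1.5):
    """Mean ODn of two independent runs, classified against the cutoff."""
    mean_odn = (odn_run1 + odn_run2) / 2
    return "recent" if mean_odn <= cutoff else "long-term"

odn = normalize(0.42, [0.55, 0.60, 0.58])   # triplicate CAL wells (made-up ODs)
print(round(odn, 3), classify(odn, odn))
```

Normalizing against a per-plate calibrator is what makes ODn values comparable across plates and laboratories.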
Statistical Methods to Determine MDRI
Formally, the mean duration of recent infection (MDRI) of an assay, a parameter required to estimate incidence, is the mean time which subjects spend classified as 'recently infected' during a period T post-seroconversion. The proportion false recent (PFR) is the population-level probability of obtaining a 'recently infected' result on a randomly chosen person infected for more than time T, which was set to one year [51]. We used seven different statistical approaches to derive the MDRI for the LAg assay at various cutoffs; they are presented below. Methods 1, 3, 6, and 7 assumed that seroconversion occurred at the midpoint of the last negative and first positive dates, while for methods 2, 4, and 5 seroconversion was assumed to have occurred at any time between the last negative and first positive dates with uniform probability.
Methods 1 and 2 (Empirical methods balancing false recent and false long-term): These two methods for estimating the mean duration of recent infection (MDRI) for the LAg-Avidity assay use the "empirically balanced observation time" approach. The methods are based on some of the early work on incidence assays [3] suggesting that the rates of false recents and false long-terms need to balance out, especially within the interval between 0 and 1 year (T = 365 days) post-seroconversion. For method 2, ODn values were raised to the power λ = 1.53, as estimated using a repeated-measures model, in order to linearize the relationship between the ODn and time values. Daily values were determined by linear interpolation between time points or extrapolation from the last two points on either the untransformed (method 1) or the transformed (method 2) ODn^λ scale. The baseline ODn value at day 0 was taken to be 0.05, equivalent to the background signal of the assay. Specimens from all seroconversion panels, without exclusion, were used when applying these methods. Confidence intervals were determined from 10,000 replicate MDRI estimates obtained by subject-level bootstrap resampling.
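The method 2 preprocessing can be sketched as follows; extrapolation from the last two visit points is omitted here for brevity, and the panel data are made up:

```python
# Sketch of Method 2 preprocessing: raise ODn to lambda = 1.53 to linearize
# the response, then interpolate daily values between visits on the
# transformed scale (extrapolation past the last visit is omitted here).
import numpy as np

LAM = 1.53       # estimated with a repeated-measures model in the paper
BASELINE = 0.05  # assay background signal assigned at day 0

def daily_odn(days, odns):
    """Daily ODn values, interpolated linearly on the ODn^lambda scale."""
    days = np.concatenate(([0.0], days))
    vals = np.concatenate(([BASELINE], odns)) ** LAM
    grid = np.arange(0.0, days[-1] + 1.0)
    return grid, np.interp(grid, days, vals) ** (1.0 / LAM)

grid, daily = daily_odn(np.array([40.0, 150.0]), np.array([0.6, 1.8]))
print(daily[40], daily[-1])
```

At the visit days themselves the back-transformed values reproduce the measured ODn; in between, the interpolation is linear on the transformed scale rather than on raw ODn.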
Method 3 (Linear interpolation [SACEMA-1]): This approach is based on linearly interpolating the LAg ODn values between visits, per subject. Each subject is assigned a reading of 0 at infection, and there is no extrapolation beyond the last visit. Using the interpolated readings, P_R(t), the probability of testing 'recent' at time t post-infection, is estimated by the proportion of available results below the threshold at time t post-infection.
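The method 3 estimator can be sketched as follows (toy panels with midpoint infection dates assumed):

```python
# Sketch of Method 3: interpolate each subject's ODn between visits
# (reading 0 at infection, no extrapolation past the last visit), then
# estimate P_R(t) as the fraction of interpolated readings below threshold.
import numpy as np

def p_recent(t, panels, threshold=1.5):
    below, total = 0, 0
    for days, odns in panels:                 # days since (midpoint) infection
        days = np.concatenate(([0.0], days))  # assigned reading of 0 at infection
        odns = np.concatenate(([0.0], odns))
        if t > days[-1]:
            continue                          # no extrapolation beyond last visit
        total += 1
        if np.interp(t, days, odns) < threshold:
            below += 1
    return below / total if total else float("nan")

panels = [(np.array([30.0, 200.0]), np.array([0.8, 2.5])),
          (np.array([50.0, 400.0]), np.array([0.5, 1.2]))]
print(p_recent(120.0, panels))
```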
Methods 4 & 5 (Binomial regression [SACEMA-2 and -3]):
A linear binomial regression is used to model P_R(t) as a function of time since infection, t [52]. Although the regression model does not account for the clustering of data points by subject, the estimates of uncertainty obtained through case bootstrap resampling do. The general form of the linear binomial regression model is g(p) = β^T x, where p is the probability of testing 'recent', g(.) is the link function, and η = β^T x is the linear predictor (β is a vector of model parameters, and x is the vector of predictors). Two models were fitted: (i) a two-parameter model using a log-log link, where the linear predictor is a linear function of time (SACEMA-2); and (ii) a five-parameter model using a logit link, where x consists of the basis functions of a natural cubic spline over [0, T], with knots occurring every two months (SACEMA-3). Data points more than T × 110% post-infection were not used in the fitting.
For both methods, Ω_T = ∫_0^T P_R(t) dt is then estimated using the composite trapezoidal rule for integration (20,000 subintervals). Assuming uniformly distributed infection times between the last HIV-negative and first HIV-positive visits, the (expected) infection times are used (midpoints of the inter-visit intervals), with no further accounting for uncertainty in infection times. Confidence interval (CI) limits are estimated by the percentiles of 1000 replicate MDRI estimates obtained by subject-level bootstrap resampling [53].
Method 6 (Nonparametric Survival Analysis): A nonparametric survival analysis method for interval-censored data was used to estimate the recency period of the assay. This approach requires fewer assumptions than the other approaches. It was adapted to our context, where it is known that seroconversion occurred between two time points t_1 and t_2 and the assay threshold was crossed between time points t_3 and t_4, where t_1 ≤ t_2 ≤ t_3 ≤ t_4. Therefore the recency period lies in the interval (t_3 − t_2) to (t_4 − t_1). If the threshold was not crossed by the last observation, the upper end of this interval was set as censored. These two limits were used to calculate the maximum likelihood estimate of the survival curve. The mean estimates of the recency period were derived directly from the survival curve. For the mean to be finitely defined, it was assumed that the event occurred for the longest-observed subject at the latest observed time. We employed a SAS macro called EMICM to estimate the survival curve for the recency period [54]. Confidence intervals were estimated using bootstrap techniques; upper and lower limits were derived as the 97.5th and 2.5th percentiles of the empirical distribution. Threshold values from 1.0 to 2.0 were used to derive MDRI estimates for each of the subtypes and overall.
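Constructing the interval-censored observations is straightforward; a minimal sketch (the EMICM survival-curve estimation itself is not reproduced here):

```python
# Sketch of building the interval-censored observation for each subject in
# Method 6: seroconversion in (t1, t2), threshold crossing in (t3, t4), so
# the recency period lies in [t3 - t2, t4 - t1]. If the threshold is never
# crossed by the last visit, the upper end is right-censored.
import math

def recency_interval(t1, t2, t3, t4=None):
    lower = t3 - t2
    upper = math.inf if t4 is None else t4 - t1  # right-censored if not crossed
    return lower, upper

print(recency_interval(0, 30, 100, 160))
print(recency_interval(0, 30, 100))  # threshold not crossed by last observation
```

These (lower, upper) pairs are exactly the inputs a Turnbull-type estimator such as EMICM consumes.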
Method 7 (Individual Panel Regression Analysis): Seroconversion (SC) panels were included in the analysis if they exhibited a rise in ODn response over the collection period and a regression equation could be fitted describing the antibody avidity kinetics. A total of 176 optimal SC panels comprising 2076 specimens were included in this analysis (Fig. 1B). The midpoint of the last negative and first positive dates was used as the seroconversion date, designated day 0, and used to calculate days since seroconversion for subsequent longitudinal specimens. Each individual SC panel was plotted and a regression equation was generated using Excel. The regression equation was then solved for the desired cutoff value to estimate the number of days it took that individual to reach the LAg assay cutoff. The mean duration of recent infection was then calculated by averaging all SC panel results, along with the 95% CI for the mean. The mean duration of recent infection was calculated by subtype and overall, considering all subtypes. While the data requirements for this method are minimal, it does not use all the available data; for example, the plateau data points were not fully utilized.
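Method 7 can be sketched as follows; the per-panel regression form used in Excel is not specified in the text, so a straight-line fit through the rising phase is assumed here, and the panel data are made up:

```python
# Sketch of Method 7: fit a per-panel regression of ODn on days since the
# midpoint seroconversion date, solve for the cutoff crossing, and average
# over panels. A linear fit stands in for the (unspecified) Excel regression.
import numpy as np

def crossing_day(days, odns, cutoff=1.5):
    """Day on which the fitted ODn trend reaches the cutoff."""
    slope, intercept = np.polyfit(days, odns, 1)  # ODn ~ slope * t + intercept
    return (cutoff - intercept) / slope

panels = [(np.array([20.0, 100.0, 180.0]), np.array([0.3, 1.0, 1.7])),
          (np.array([15.0, 90.0, 200.0]), np.array([0.2, 0.9, 2.0]))]
times = [crossing_day(d, o) for d, o in panels]
print(np.mean(times))
```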
Determination of Proportion False Recent (PFR)
The PFR was determined as the percentage of specimens, collected more than one year post-seroconversion, that were misclassified as recent HIV-1 infections by the LAg-Avidity EIA. The PFR was calculated at each ODn cutoff of 1.0, 1.25, 1.5, 1.75, and 2.0 to evaluate the extent of misclassification at the various cutoffs. The 95% confidence intervals were calculated for each PFR.
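The PFR computation is a simple proportion with a binomial confidence interval; the paper's exact CI method is not stated, so a normal approximation is assumed in this sketch, and the example counts are hypothetical:

```python
# Sketch of the PFR calculation: fraction of long-infected (>1 year)
# specimens classified 'recent' at a given cutoff, with a normal-
# approximation 95% CI (an assumption; the paper's CI method is not stated).
import math

def pfr_with_ci(odns, cutoff):
    n = len(odns)
    k = sum(1 for x in odns if x <= cutoff)
    p = k / n
    half = 1.96 * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), p + half

# Hypothetical example: 60 of 3740 long-term specimens at or below cutoff 1.5.
odns = [1.0] * 60 + [3.0] * 3680
p, lo, hi = pfr_with_ci(odns, 1.5)
print(f"PFR = {100 * p:.1f}% (95% CI {100 * lo:.1f}-{100 * hi:.1f}%)")
```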
Antibody Avidity Kinetics
Antibody avidity maturation, as measured by the LAg-Avidity EIA, for all 259 seroconverters is shown in Fig. 1A. Overall, there is an increase in the avidity of gp41-specific antibodies following seroconversion, reaching a plateau at about 500 days post-seroconversion. Most individuals exhibited a normal increase in avidity; however, a few individuals showed some decline in antibody avidity over time, while in rare cases antibody avidity remained low. Since statistical methods to calculate the MDRI can be affected by the duration between last negative and first positive results, the collection interval, avidity kinetics, and/or the frequency of specimen collection, we separated optimal and sub-optimal seroconverter specimen sets. Fig. 1B shows 176 seroconverters (2076 specimens) with an optimal time interval between last negative and first positive specimens (<100 days), with 3 or more specimens per donor on regular collection schedules, and exhibiting a typical rise in antibody avidity levels. Fig. 1C shows 83 individuals (641 specimens) with one or more of the following when examined individually: sub-optimal collection schedules, a longer time interval between last negative and first positive specimens (>100 days), or atypical antibody kinetics not crossing the potential cutoff threshold of ODn 1.0 to 2.0. It is interesting to note that, when examined collectively, the antibody kinetics in Fig. 1C are not very different from those in Fig. 1B, except in the rare cases when antibody avidity remained low, a likely contribution from elite controllers.
MDRI and PFR Results
Analysis of the data by 7 different statistical methods for calculation of the MDRI at varying cutoffs between ODn of 1.0 and 2.0 is summarized in Table 2. Overall, the different methods gave very similar MDRI estimates (Table 2). The PFR represents the overall misclassification frequency for all specimens, irrespective of subtype or geographic location. The PFR increased from 0.6% to 2.5% as the threshold cutoff increased from 1.0 to 2.0 ODn. Subtype- or country-specific PFR data will be further analyzed in separate reports.
Differences in the MDRI among the different methods were minimal. For example, method 6, a binomial regression method that utilized all data points, indicated that at a cutoff of 1.0 ODn, the MDRI was 88 days (95% CI 79-98) with a corresponding PFR of 0.6%, while at a cutoff of 1.5 ODn, the MDRI was 130 days (95% CI 118-142) with a corresponding PFR of 1.6%. The MDRI increased to 161 days (95% CI 148-174) at a cutoff of 2.0 ODn, but there was a corresponding increase in the PFR to 2.5%. Fig. 2 shows the MDRIs by subtype or by geographic location (e.g., A&D from Kenya) using Method 6. The MDRIs by subtype were: 129 days (subtype B), 122 days (subtype AE), 152 days (subtype C) and 109 days (subtypes A&D). Although there are some differences, use of an overall MDRI of 130 days (horizontal arrow) is appropriate for application to determine HIV-1 incidence in cross-sectional populations.
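For intuition only (this is not Method 6's binomial regression; the function name, bin width, and synthetic data are our assumptions), the MDRI can be approximated by integrating the empirical probability of testing recent over time since seroconversion, up to a recency time cutoff t_max:

```python
def mdri_days(times, recent_flags, t_max=360.0, bin_width=30.0):
    """Crude empirical MDRI: bin specimens by days post-seroconversion,
    estimate P(recent | t) in each bin, and integrate that probability
    from 0 to the recency time cutoff t_max."""
    n_bins = int(round(t_max / bin_width))
    area = 0.0
    for b in range(n_bins):
        lo, hi = b * bin_width, (b + 1) * bin_width
        in_bin = [r for t, r in zip(times, recent_flags) if lo <= t < hi]
        if in_bin:  # empty bins contribute zero (a simplification)
            area += (sum(in_bin) / len(in_bin)) * bin_width
    return area

# Synthetic panel: specimens test 'recent' only below ~120 days
times = [15 + 30 * k for k in range(12)]
flags = [t < 120 for t in times]
estimate = mdri_days(times, flags)  # ideal MDRI here is 120 days
```

A regression-based method (such as the binomial regression of Method 6) smooths P(recent | t) across bins and handles sparse, irregular collection schedules far better than this piecewise-constant version.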
We examined the calculated MDRIs (Method 6) at different cutoffs overlapping with the avidity kinetics during the early period of seroconversion (<500 days), as shown in Fig. 3. The line joining the MDRIs at the varying cutoffs goes through the middle portion of the increasing avidity in this close-up view of antibody maturation. The 95% CIs around the MDRIs are indicated with the red lines.
Discussion
The MDRI is an essential characteristic of an incidence assay for appropriate application of the assay in cross-sectional estimation of HIV-1 incidence. We previously reported optimization and characterization of the LAg-Avidity EIA [6], including determination of the MDRI using longitudinal specimens from 89 seroconverters. However, further review of the data indicated that the method used to calculate the MDRI was not properly applied to the dataset, resulting in an overestimate of the MDRI for the LAg-Avidity assay at the cutoff of 1.0 ODn. Application of an elevated MDRI would result in an underestimation of HIV incidence if applied to a cross-sectional cohort. Therefore, this recalibration exercise was necessary, and in order to achieve a more representative and robust estimate, we increased the number of seroconversion panels from 89 to >250 panels that represent more diverse subtypes and included multiple statistical methods to ensure accuracy and consensus of the final results. In all, we used seven different statistical methods to determine the MDRI values and found that these methods gave very similar results, providing further confidence in the robust nature of the analysis and the methods employed (Table 2). Our new results show that the MDRI of the LAg-Avidity EIA using the highlighted binomial regression method was 88 days and 161 days at cutoffs of 1.0 and 2.0 ODn, respectively (Fig. 3). Determination of the optimal cutoff for cross-sectional application is a balance between the MDRI (which should not be too small) and the PFR (which should not be too large) [27,31,55,56]. At the cutoff of 1.5 ODn, our overall PFR was 1.6%, lower than the 2% recommended by the WHO Incidence Working Group [57] for new incidence assays. A cutoff of 1.5 ODn provides this balance between duration of MDRI (130 days) and PFR (<2.0%) using specimens in our collection.
Therefore, we propose a default cutoff of 1.5 ODn to classify recent and long-term infections; this represents the mean duration of 130 days (95% CI 118-142) since seroconversion. It is recommended that the studies conducted previously with the LAg-Avidity EIA should reanalyze their data using revised cutoff (ODn<1.5) for recent HIV infection classification and MDRI of 130 days as per our new analysis. This revision does not impact the raw data generated using the LAg-Avidity EIA, just the interpretation and use of the data.
Statistical methods used to determine MDRIs have varied since the first description of the detuned assay in 1998 [9]. Since then, multiple approaches have been used, partly due to a lack of consensus among statisticians about the best methods [3,11,15,18,21,46,58]. Under the auspices of the WHO Incidence Working Group, a statistical workshop was organized in 2011 to develop a consensus and promote a preferred method(s). Although there was some broad agreement and a better understanding of the various approaches used, differences in the approaches remain, and no detailed benchmarking has been carried out. Recently, a benchmarking project has been initiated.
Comparison of the MDRIs by HIV-1 subtype/population shows that the MDRIs varied from 109 days (subtypes A&D) to 152 days (subtype C). Further evaluation of the MDRI in more seroconversion panels collected from individuals infected with divergent HIV-1 subtypes and from more geographic locations should provide further data on subtype differences, if any. Such assessment by independent groups, such as CEPHIA, will be critical for this and other incidence assays. If subtype-specific differences are confirmed, use of the MDRI for the prevalent subtype is appropriate and may be considered when applying the LAg-Avidity EIA, keeping in mind that the trend of HIV incidence measured over time is more important than a single point estimate to assess the impact of HIV prevention efforts.
Although the PFR of 1.6% at the 1.5 ODn cutoff was below the recommended level of <2% in our study, this PFR was determined in ART-naïve populations. We realize that the actual PFR will vary in different populations depending on the state of the HIV epidemic, overall ART coverage, timing of ART initiation and duration of ART. Early initiation of treatment before maturation of antibodies will prevent the development of high-avidity antibodies and will result in misclassification of long-term infections on most antibody-based assays, including LAg-Avidity EIA (unpublished data). Collection of additional clinical information about ART use during the surveys can help address this issue.
The continuing need to determine the local PFR during each round of a survey is a burden for surveillance systems and can be impractical. False-recent classifications are caused primarily by elite controllers or individuals on treatment. Both of these cases can be effectively identified by testing for viral load, such that LAg-recent samples with VL <1000 copies/mL, for example, would be classified as long-term. This approach is attractive for multiple reasons: 1) it reduces the need to conduct exhaustive PFR studies, 2) it identifies misclassified elite controllers and those on ART in the study pool, and 3) it improves the accuracy of incidence estimates. Given that this testing will be done only on LAg-recents (usually <10% of total positives) and many national reference laboratories in developing countries can now perform viral load testing, this is logistically feasible.
Recently, a multi-assay algorithm (MAA) has been suggested that includes 1) BED-capture EIA at a higher cutoff as the first step, followed by 2) Bio-Rad Avidity EIA, again at a higher cutoff 3) then CD4 measurement, and 4) finally VL measurement to classify recent HIV infection [52,59,60]. It is unclear how each of the multiple components of this somewhat complex algorithm contributes, whereas the relative simplicity of a single immunoassay and VL may suffice to provide sufficient precision. Additionally, any algorithm using CD4 will have limited application in surveys that collect dried blood spot (DBS) specimens for incidence testing. It should be pointed out that there are on-going developments in the area of HIV incidence algorithm development.
For wider application of the LAg-Avidity EIA to estimate HIV incidence, the next steps will include field validation of the revised parameters in cross-sectional populations and comparison of the LAg-derived estimates with other reference estimates of incidence in the same population. Further association of demographic and other risk factors in the context of HIV-1 incidence and prevalence should further assist in validation of the assay.
In summary, we have recalibrated the LAg-Avidity EIA using >250 longitudinal seroconversion panels from multiple subtypes derived from diverse geographical locations using several methods. Based on these data, we recommend a cutoff ODn of 1.5, which corresponds to an MDRI of 130 days (95% CI 118-142) for application in a cross-sectional population for estimation of HIV-1 incidence and risk-factor analysis. Determination of these parameters and recent availability of the assay kits from two manufacturers (Sedia BioSciences and Maxim BioMedical), including a dried blood spot (DBS) kit from the latter manufacturer, should further facilitate measurement of HIV-1 incidence in cross-sectional populations for program planning and impact evaluation of prevention and intervention efforts worldwide.
Supporting Information
S1 Table. Spreadsheet with LAg-Avidity EIA data on seroconversion panels (donors = 259; specimens = 2737) from multiple countries as described in the Methods section. Mean ODn of two independent runs was used for this analysis. (XLSX)
S2 Table. Spreadsheet with LAg-Avidity EIA data on specimens from individuals (n = 3740) with known long-term infections (>1 year) for the purpose of determining the proportion of false recent (PFR) classification. Mean ODn of two independent runs was used for this analysis. (XLSX)
Left-handed cardiac looping by cell chirality is mediated by position-specific convergent extensions
In the embryonic heart development of mammals and birds, a straight initial heart tube undergoes left-handed helical looping, which is a remarkable and puzzling event. We are interested in the mechanism of this chiral helical looping. Recently, observations were reported that myocardial cells in the embryonic chick heart show an intrinsic chirality of rotation. The chirality of myocardial cells, via anisotropic polarization of the Golgi inside the cells, leads to a left-right (LR) asymmetry of cell shape. On the cell boundaries of LR-asymmetric cells, phosphorylated myosin and N-cadherin are enriched. Such LR-asymmetric cellular circumstances lead to a large-scale three-dimensional chiral structure, the left-handed helical loop. However, the physical mechanism of this looping is unclear. Computer simulations were performed using a cell-based three-dimensional mathematical model assuming an anterior-rightward-biased contractile force of the cell boundaries on the ventral surface of the heart (orientation of a clock hand pointing to 10 to 11 o'clock). An initially straight heart tube was successfully remodeled into the left-handed helical tube via frequent convergent extension (CE) of collective cells, which corresponds to previously reported observations of chick heart development. Although we assumed that the biased boundary contractile force was uniform all over the ventral side, the orientations of the CEs became position-specific in the anterior, posterior, right, and left regions of the ventral tube. Such position-specific CEs produced the left-handed helical loop. In addition, our results suggest the loop-formation process consists of two distinct phases of preparation and explicit looping. The intrinsic chiral cell properties in this investigation are discussed in relation to the extrinsic factors investigated by other research groups.
Finally, because CE is generally exerted in the axial developmental process across different animal species, we discussed the contribution of CE to the chiral heart structure across species of chick, mouse, Xenopus, and zebrafish.
INTRODUCTION
In mammals and birds, embryonic development of the heart involves the conversion of a straight tubular structure into a three-dimensional (3D) left-handed helical loop. The structure and morphogenesis of heart looping have been investigated by many scientists (1)(2)(3)(4)(5)(6)(7). Heart looping is a mechanical event, and we were interested in the mechanism determining handedness. Cardiac looping in mammals and birds was first identified a century ago (8)(9)(10). At that time, Patten (10) proposed that looping results from a buckling mechanism in a tube elongating between fixed poles. Later, heart looping was observed to form through a combination of ventral bending and rightward rotation (11). Recently, helical looping definitively distinguished from simple bending was also reported to form through the combination of ventral bending and rightward displacement of the heart tube (12). Meanwhile, investigations to understand the cellular and subcellular mechanism of loop formation were performed. Local variation of the arrangement of actin bundles in the looping heart was first investigated in the chick embryonic heart (13,14), and later, the investigation was extended to 3D space (15). The role of actomyosin in the looping and bending of the heart tube was also investigated (6,14). Recently, Kidokoro et al. (16) observed actomyosin-based cell rearrangement and the resulting dynamic tissue reshaping in detail. They elucidated clearly that an elaborate collective cell behavior, cell convergence and tissue extension (CE), was exerted in the looping chick heart tube. Additionally, left-right (LR) asymmetry in the looping chick heart tube was analyzed at the cell and tissue levels using unique mathematical techniques (17). Recently, Ray et al. (18) confirmed a clockwise (CW) rotational chirality of cells in the developing myocardium.
They sequentially examined the following: the chirality of the rotational behavior of myocardial cells; Golgi rightward polarization within cells (rightward means the orientation of a clock hand pointing to around 9 o'clock); the rightward-biased alignment of the cell boundary and LR asymmetry in cell shape; and enrichment of N-cadherin and phosphorylated myosin II (p-myoII) on the rightward-biased cell boundaries. Strong-intensity p-myoII was predominantly aligned toward the anterior-rightward direction, which is expected to produce an anisotropic force leading to left-handed helical looping of the heart tube. These experimental observations were the bases on which we constructed the heart model tube.
The purpose of this study is to elucidate the process of formation of the left-handed helical looping of the heart tube. Because loop formation is a mechanical event in which a cell assemblage is remodeled into a chiral structure, we used a cell-based 3D vertex dynamics model in which we can introduce chiral properties of cells into the heart model tube. The cell-based 3D vertex dynamics model is fundamentally different from other mathematical models that have been used for investigations of heart looping. Using this model, we can examine the process of loop formation in terms of cell-level morphology and the remodeling of the 3D structure of the heart tube.
First, we made a mathematically artificial tube whose surface consists of polygons in the cell-based 3D vertex dynamics system. We assumed an anisotropic contractile force of the edges in the polygons (that is, edges of a certain direction have a specifically strong contractile force). Computer simulations showed two cases: 1) a simple rotation of the tube surface around the stationary tube axis and 2) a large-scale deformation of the tube itself producing a helical loop. We focused our interest on the second case to elucidate the looping mechanism of the heart tube. We made a straight tube similar to the embryonic initial heart tube under assumptions based on recently published observations. The mechanophysical mechanism of the formation of helical looping and how the handedness of the helical looping is determined were examined.
We found the process consisted of two distinct phases: implicit preparation of heart looping and explicit 3D remodeling of the looping heart tube. This investigation is based on the intrinsic properties of the myocardial cells, whereas other researchers have investigated the extrinsic factors of heart looping. The relationship between the contributions of the two kinds of factors was discussed. Finally, because CE is known to be generally exerted in the axial developmental process across different animal species (e.g., chick, mouse, Xenopus, and zebrafish), we discussed the formation of the chiral heart structure accompanied by CE across these animals.
MATERIALS AND METHODS
Some of the methods used in this study were based on those presented previously (12), and are described in the Supporting materials and methods. The main features of the models are summarized here, and methods that are newly used (to our knowledge) in this study are described.
Heart model tube
A mathematical tube that is used in computer simulations is referred to as a heart model tube, as distinguished from a real heart tube. The initial heart tube is a straight tube consisting of a cylindrical surface and two disks at the top and bottom extremities. Although the posterior end of the real heart tube is the bifurcation between the two atrial regions, we simplified it as a posterior pole. On the cylindrical surface, many polygons (452 polygons) are packed without gaps and overlaps. The polygons are approximations of myocardial cells in the heart tube and do not have thickness. Each cell is assumed to have its own polarity that is used as a reference line of anisotropic behaviors of cell edges and cell divisions (see Supporting materials and methods, Cell polarity). The model tube deforms and is remodeled during the computer simulations and its volume increases with time. Centers of the top and bottom disks (anterior and posterior poles, respectively) are fixed, although their peripheral vertices are movable.
Calculation of vertex positions by equations of motion
To describe the shape of the heart model tube we require the x, y, and z coordinates of all vertices in the heart model tube and the neighboring relationships of vertices of the edges in the polygons. Changes in the neighboring relationships were performed using the elementary process of reconnecting the vertices (Fig. S1 B). Migrations of the vertices were calculated by the equations of motion (Eq. S1 in Cell-based 3D vertex dynamics in the Supporting materials and methods). The equation of motion involves potential U. The vertices move to decrease U (strictly, with no increase in U), as shown in Supporting materials and methods, Eq. S2.
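A minimal sketch of this scheme, assuming simple overdamped dynamics and a numerical gradient (the toy potential, function names, and parameter values are ours, not those of Eqs. S1-S3):

```python
import numpy as np

def descend(vertices, potential, eta=1.0, dt=0.01, steps=200, eps=1e-6):
    """Overdamped vertex dynamics: move every vertex down the numerical
    gradient of the total potential U, so that U does not increase."""
    x = vertices.astype(float).ravel()
    for _ in range(steps):
        grad = np.zeros_like(x)
        for i in range(x.size):  # central-difference gradient of U
            d = np.zeros_like(x)
            d[i] = eps
            grad[i] = (potential(x + d) - potential(x - d)) / (2 * eps)
        x -= (dt / eta) * grad
    return x.reshape(vertices.shape)

# Toy potential: one edge with preferred length 1.0 (a stand-in for the
# edge-energy term in U; the real model sums many such terms)
def U(flat):
    p = flat.reshape(2, 2)
    return (np.linalg.norm(p[0] - p[1]) - 1.0) ** 2

verts = np.array([[0.0, 0.0], [3.0, 0.0]])
relaxed = descend(verts, U)  # the edge relaxes toward length 1.0
```

In the actual model, U would also include the surface, volume, planarity, and boundary-restriction terms, and analytic gradients would replace the finite differences for efficiency.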
The potential U in the equations of motion contains the terms for edge energy of the polygons, elastic surface energy of the polygons, elastic volume energy of the tube, elastic deviation energy of the vertices from the planes, and boundary restriction energy of the top and bottom of the model tube (Eq. S3 in the Supporting materials and methods). The ventral and dorsal sides of the heart model tube are distinct from each other. On the ventral side, we introduced the anisotropic contractile force of the edges in the cells.
Anisotropic contractile force of edges
The potential U contains the term for the edge potential energy, U_L = σ_L Σ_<ij> w_ij L_ij, as shown in Eq. S4 in the Supporting materials and methods, where w_ij is the weight of edge <ij> and L_ij is its length. The anisotropic contractile force of an edge is directed by the edge weight, which depends on the orientation of the edge. We can investigate anisotropic morphogenesis of the heart model tube using various values of the edge weight. In the cells on the ventral side, we introduce the anisotropic contractile force of edges; that is, we set w_ij to various values of w, depending on the anisotropic angle. We do not consider an edge contractile force on the dorsal side of the heart model tube.
Determination of anisotropic contractile edges
It was necessary to determine the strong contractile edges, depending on their orientation. However, we do not consider that an edge has the ability to measure its orientation in 3D space and judge whether it has strong contractile properties or not. Rather, a cell determines which edges in the cell should have a strongly contractile force, referring to its polarity. Therefore, we determined particular edges whose orientations were close to the anisotropic angle, as shown in Fig. 1 A. Particular edges are designated by a thick line.
Strength of contractile force of each edge
An edge in polygonal patterns consists of two boundaries of two neighboring cells (Fig. 1 B). For cells of the artificial tube (Fig. 2) and on the ventral side of the heart model tube (e.g., Fig. 3), the strength of the edge contractile force was considered to be the sum of contributions from the two cell boundaries of the neighboring cells. For example, edge <ij> is a boundary between cells A and B and has two contributions, w_ij^A from cell A and w_ij^B from cell B. The strength of the contractile force of edge <ij> is w′_ij = w_ij^A + w_ij^B (Eq. S4 in the Supporting materials and methods). Usually, we use w as the weight of the strength of a strong contractile edge (i.e., the contribution from one cell). When the weight of the strong contractile edges is w = 2.7 and the weight of the other edges is 1.0, the edges on the ventral side of the heart tube have three strength levels; that is, w′ can be 2 (= 1 + 1), 3.7 (= 1 + 2.7), or 5.4 (= 2.7 + 2.7).
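The weighting rule can be sketched as follows (function names and the angular tolerance are our assumptions; the strong weight 2.7 and the resulting strength levels 2, 3.7, and 5.4 follow the text):

```python
def edge_weight(theta_deg, anisotropic_angle=-75.0, tol=15.0, w_strong=2.7):
    """One cell's contribution: a boundary whose axial orientation lies
    within `tol` degrees of the anisotropic angle gets the strong weight."""
    d = abs(theta_deg - anisotropic_angle)
    d = min(d % 180.0, 180.0 - d % 180.0)  # orientations are axial (mod 180)
    return w_strong if d <= tol else 1.0

def total_edge_weight(theta_A, theta_B, **kw):
    """w'_ij = w_ij^A + w_ij^B for edge <ij> shared by cells A and B;
    with w_strong = 2.7 the possible totals are 2.0, 3.7, and 5.4."""
    return edge_weight(theta_A, **kw) + edge_weight(theta_B, **kw)
```

The axial reduction (mod 180°) reflects that an edge has no head or tail: an orientation of 105° is equivalent to −75°.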
Measurement of edge angle with the polarity direction
To analyze results of computer simulations we have to measure edge angles. An edge is defined by two neighboring polygons ( Fig. 1 B). Average orientation of polarities of the two polygons (blue arrow) is obtained from direction cosines of polarities of the two polygons. An angle of the edge is the angle (designated by the arc) between the edge direction and the average orientation of polarities (blue arrow).
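A small sketch of this measurement (names are ours), treating the edge axially so that the reported angle lies in [0°, 90°]:

```python
import math

def edge_angle(edge_vec, polarity_A, polarity_B):
    """Angle between an edge and the average polarity direction of its two
    neighboring polygons (average taken via direction cosines)."""
    def unit(v):
        n = math.hypot(v[0], v[1])
        return (v[0] / n, v[1] / n)
    ua, ub = unit(polarity_A), unit(polarity_B)
    avg = unit(((ua[0] + ub[0]) / 2.0, (ua[1] + ub[1]) / 2.0))
    ue = unit(edge_vec)
    c = abs(ue[0] * avg[0] + ue[1] * avg[1])  # |cos| -> sign-free angle
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))
```

For example, an edge perpendicular to both polarities measures 90°, while an edge along the bisector of two perpendicular polarities measures 0°.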
Shape anisotropy and orientation of cells
Each polygonal cell area S was measured at t = 80. A polygonal cell was approximated by a momental ellipse (an ellipse of inertia). The principal axis and the second axis of the ellipse were a and b, respectively. Cell orientation, O_cell, was defined as the angle of the principal axis of the ellipse from the anterior-posterior axis. Cell anisotropy was defined as A_cell = (a − b)/a.
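As an illustration (not the authors' code), the momental-ellipse quantities can be approximated from the second moments of the polygon's vertex cloud, a stand-in for the full area moments:

```python
import numpy as np

def cell_orientation_anisotropy(vertices):
    """Approximate a polygonal cell by the momental ellipse of its vertex
    cloud: principal axes a >= b from the covariance eigenvalues, O_cell
    as the principal-axis angle from the x (anterior-posterior) axis,
    and A_cell = (a - b) / a."""
    pts = np.asarray(vertices, float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / len(pts)
    evals, evecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    b, a = np.sqrt(evals[0]), np.sqrt(evals[1])
    vx, vy = evecs[0, 1], evecs[1, 1]       # principal-axis eigenvector
    orientation = np.degrees(np.arctan2(vy, vx)) % 180.0
    return orientation, (a - b) / a

# A 2 x 1 rectangle elongated along x: orientation ~0 deg, anisotropy 0.5
o_cell, a_cell = cell_orientation_anisotropy(
    [(-1.0, -0.5), (1.0, -0.5), (1.0, 0.5), (-1.0, 0.5)])
```

An exact momental ellipse would integrate over the polygon's area rather than its vertices, but for roughly convex cells the vertex-based version behaves similarly.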
Shape anisotropy and orientation of colonies around a cell
We considered a colony around cell i (Fig. 1 C, t = 0). Colony members k surround cell i at t = 0 (faint blue polygons in Fig. 1 C), k = 1, …, n_i, where n_i is the number of cells surrounding cell i at t = 0. We quantified the colony shape at t = 80 as follows. First, polygon i, whose vertices are the central points of the colony members k, was normalized as described in the legend of Fig. 1 C. This normalization enabled us to obtain net changes of colony shape anisotropies, regardless of the colony shape. The normalized colony polygon was approximated by a momental ellipse. The principal axis and the second axis of the ellipse were a′ and b′, respectively. Colony orientation, O_colony, was defined as the angle of the principal axis a′ of the ellipse from the anterior-posterior axis. Colony anisotropy was defined as A_colony = (a′ − b′)/a′. The orientation and length of the solid black segment line are O_colony and the relative scale of A_colony, respectively (Fig. 1 C, right). Note in Fig. 1 C that, although the blue cell is flattened horizontally (O_cell; red line segment), the colony involving the n_i cells k is elongated longitudinally (O_colony; black line segment).
FIGURE 1 (A) Method to determine particular edges with strong contractile force: line A passes through the center of the polygon. Next, a line (P1 P2) is drawn that is perpendicular to line A and includes the center of the polygon. Edges of the polygon that cross line P1 P2 are designated as specific edges (thick line). Orientations of these two edges are closer to the anisotropic angle than other edges. (B) Method to measure an edge angle with respect to the polarity direction. An edge consists of two boundaries of neighboring polygons. The average orientation of the polarities of the two polygons (blue arrow) is obtained from the direction cosines of the polarities of the two polygons. The angle of the edge is the angle (designated by the arc) between the edge direction and the average orientation of the polarities (blue arrow). (C) Method to analyze the change of colony shape during morphogenesis. The left and right patterns are at t = 0 and t = 75, respectively. A colony of closely neighboring cells (faint blue cells and blue cell at t = 0) is deformed to become a concave colony (faint blue cells and blue cell at t = 75). Colony i consists of cell i (blue cell) and the n_i cells that surround cell i at t = 0. The surrounding cells are designated by a faint blue color and referred to as cells k (k = 1, 2, 3, …, n_i) throughout the analysis. Their central points at t = 75 are designated by small blue circles. Note that cells designated by the plus sign (+) and the hash symbol (#) do not belong to the cells k, although they are neighbors of cell i at t = 75, because they were not neighbors of cell i at t = 0. The central points of the cells k (k = 1, 2, 3, …, n_i) at t = 75 are normalized (solid black circles) as follows: polygon i, whose vertices are the central points of the colony members k, was normalized using a ratio of distances r_k/r_0k, where r_0k and r_k are the distances of cell k from cell i at t = 0 and t = 75, respectively. Positions of the central points of the normalized colony polygon were (x_k′, y_k′), where x_k′ = x_i + (x_k − x_i) r_k/r_0k and y_k′ = y_i + (y_k − y_i) r_k/r_0k (black solid circles in Fig. 1 C, on the right). This normalization enabled us to obtain net changes of colony shape anisotropies at t = 75, regardless of the colony shape at t = 0. A polygon whose vertices are the normalized central points (solid black segment line) was analyzed using the approximation method of a momental ellipse. The orientations of the red and black segment lines represent cell orientation (O_cell) and colony orientation (O_colony), respectively. The lengths of the segment lines represent cell shape anisotropy (A_cell) and colony shape anisotropy (A_colony), respectively. To see this figure in color, go online.
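The normalization formula can be transcribed literally as follows (function and variable names are ours; positions x_k, x_i are taken at the later time, as written in the text):

```python
import math

def normalize_colony(center0, center1, neighbors0, neighbors1):
    """Normalize colony-member positions by x_k' = x_i + (x_k - x_i) r_k/r_0k
    (and likewise for y), where r_0k and r_k are the distances of cell k
    from cell i at the initial and later times, respectively."""
    xi, yi = center1
    normalized = []
    for (x0, y0), (x1, y1) in zip(neighbors0, neighbors1):
        s = math.dist((x1, y1), center1) / math.dist((x0, y0), center0)
        normalized.append((xi + (x1 - xi) * s, yi + (y1 - yi) * s))
    return normalized

pts = normalize_colony((0.0, 0.0), (0.0, 0.0),
                       [(2.0, 0.0), (0.0, 1.0)],   # positions at t = 0
                       [(4.0, 0.0), (0.0, 1.0)])   # positions at t = 75
```

A neighbor whose distance from cell i is unchanged is left in place (s = 1), while a neighbor that moved away is pushed further out in proportion to the distance ratio.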
RESULTS AND DISCUSSION
The first-round computer simulation We started with a sheet consisting of polygons and first looked at four neighboring polygons in this sheet (Fig. 2 A). Contractile force of specific edges was assumed; that is, that edges whose directions are close to the vertical direction have strong contractile force (anisotropic contractile force) and the change in the polygonal pattern was examined by using a mathematical model system (see Cell-based 3D vertex dynamics in the Supporting materials and methods). As shown in Fig. 2 A, several edges of the four polygons expressed strong vertical contractile force (thick solid line) and the edges were rearranged with each other exchanging connections at vertices. This is a topological pattern transformation. Two neighboring polygons (faint orange color) were intercalated by two dark orange-colored polygons as shown in Fig. 2 A (t ¼ 0-25). A pattern of the four polygons changed from a vertical to horizontally elongated shape (t ¼ 25). Therefore, the anisotropic contractile force of the edges enables a pattern of polygons to be expanded perpendicularly to the direction of contractile force ( Fig. 2 A inset). This is a collective motion of CE. CE of tissues was first identified in the neural plate (19) and has been found in many other tissues. CE caused by the anisotropic contractile force of edges was demonstrated at the cellular level by computer simulation of the mathematical model (20,21).
We applied the above-mentioned anisotropic contractile force to a mathematically artificial model of a straight tube consisting of many polygons (Fig. 2 B). Several simulations were performed with various anisotropic angles. At first, we applied an anisotropic angle of −15° (Fig. 2 C), where edges whose direction was close to −15° expressed a strong contractile force. The direction of −15° is the orientation of a short clock hand pointing to 11:30 o'clock. The polygons migrated perpendicularly to the anisotropic angle to form a right-handed screw pattern on the surface of the tube (Fig. 2 C, red arrow). Indeed, the array of blue polygons that had been vertical in Fig. 2 B rotated to form a right-handed screw, just as a barber's pole would, as shown in Fig. 2 C. When we changed the anisotropic angle from −15° to −55°, the array of blue polygons rotated backward. The array of blue polygons did not rotate on the tube surface at the anisotropic angle of −55°. Instead, the tube itself mechanically deformed in 3D to a bent shape, as shown in Fig. 2 D (white arrow). The array of blue polygons followed the bending of the tube. The bottom figure shows that the central line of the tube, shown as a colored chain, forms a hairpin-like shape (see Centers of sliced model tubes in the Supporting materials and methods). The hairpin-like shape of the colored central line indicates that the tube is bent. The bend looks like buckling of the tube. The array of blue polygons runs parallel with the hairpin-like shape of the central line of the tube. Moreover, when we changed the anisotropic angle to −65°, the tube not only became bent but also deformed its entire shape into a helical loop (Fig. 2 E, blue arrow). The array of blue polygons on the tube surface followed the helical looping of the central line.
It should be noted that the helical tube formed a left-handed helix at the anisotropic angle of −65°, whereas the spiral rotation of the tube surface at the anisotropic angle of −15° was right-handed (Fig. 2 C). We understood these results when we considered the horizontal and vertical components of the extension force separately, as follows. As the absolute anisotropic angle increased (from 15° to 65°), the horizontal component of the extension force decreased, and the surface rotation stopped. Instead, the vertical component of the force increased, so that the tube itself deformed to be bent. Taken together, the anisotropic angle of −55° is a critical angle for the deforming heart model tube. When the absolute anisotropic angle is less than 55°, the array of blue polygons on the tube rotates around the tube axis. When it is more than 55°, the array is fixed to the surface of the tube and moves in parallel with the twisting tube. In conclusion, the anisotropic contractile force of the edges of the polygons on the tube surface causes either 1) spiral rotation of the polygons on the tube surface (rotation of a barber's pole) or 2) 3D deformation of the entire tube shape. When the cell arrays are difficult to rotate because of a weak horizontal component of the force, the tube changes its 3D shape to become a helical loop. This result shows that we have a powerful mathematical cell model, by which we can construct 3D tissues that are remodeled on a large scale. Also, we already know that anisotropic forces cause rotation of the tissue surface in the hindgut and genitalia of Drosophila (22,23).
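The component argument can be made concrete with a two-line decomposition (assuming, as in the text, that the CE extension force acts perpendicular to the contractile edges and that angles are measured from the anterior-posterior axis):

```python
import math

def extension_components(anisotropic_angle_deg):
    """Decompose the CE extension force into a circumferential component
    (drives surface rotation) and an axial component (drives bending and
    looping), with the angle measured from the anterior-posterior axis."""
    theta = math.radians(abs(anisotropic_angle_deg))
    return math.cos(theta), math.sin(theta)

h15, v15 = extension_components(-15)  # rotation-dominated regime
h65, v65 = extension_components(-65)  # bending/looping-dominated regime
```

At small angles the circumferential component dominates (barber's-pole rotation); beyond roughly 55° the axial component wins, and the tube bends and loops instead of rotating.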
Computer simulations of the initial heart tube
The above-mentioned computer simulation showed that the anisotropic contractile force of the edges of polygons causes an artificial tube to loop helically under certain conditions. This result encouraged us to elucidate a physical mechanism underlying the helical looping of the real heart tube. We considered a model tube for the initial embryonic heart (Fig. 3, inset). The model tube consists of the ventral and dorsal sides, in which cells in the dorsal side do not have any anisotropic properties.
We performed computer simulations with various anisotropic angles of cells in the ventral side (Fig. 3). Strong contractile force of edges whose angles were close to the anisotropic angles of -70°, -75°, and -80° produced left-handed helical looping (Fig. 3, A-C). These patterns were more or less the same. The results show a degree of robustness with respect to the anisotropic angle. When a horizontal edge expressed strong contractile force (anisotropic angle = -90°), the model tube simply bent, showing a hairpin pattern (Fig. 3 D). When we used an inverse angle (anisotropic angle = +75°), we obtained the expected inverse (right-handed) helical loop (Fig. 3 E).
Next, we examined the effect of differential strength of the contractile force of edges in the ventral cells (Fig. S2). w is the relative weight of the strength of contractile force of a specific cell edge (see Strength of contractile force of each edge in Materials and methods). To edges whose angles were close to an indicated anisotropic angle, we assigned a large w (e.g., w = 3), whereas the w of the other cell edges was set to 1. For examination of the effect of differential contractile force of edges, we fixed the anisotropic angle at -75°. When we did not consider specific cell edges (i.e., all cell edges had w = 1), the model tube did not loop but bent instead (Fig. S2 A). When we used w = 2.7, we obtained a helical loop as shown in (11).
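The weighting scheme can be written as a small rule: edges whose orientation lies within some tolerance of the anisotropic angle receive the larger weight w (2.7 in the simulations shown), and all other edges keep w = 1. The tolerance value below is an assumption for illustration; the paper states only that edges "close to" the anisotropic angle are strengthened.

```python
def edge_weight(edge_angle_deg, aniso_angle_deg, w_strong=2.7, tol_deg=15.0):
    """Relative weight w of the contractile-force strength of a cell edge.

    Edges oriented within `tol_deg` of the anisotropic angle get the large
    weight `w_strong`; every other edge keeps the baseline 1.0. `tol_deg`
    is an illustrative assumption, not a value taken from the paper.
    """
    # Edge orientations are direction-less, so compare angles modulo 180°.
    diff = (edge_angle_deg - aniso_angle_deg) % 180.0
    diff = min(diff, 180.0 - diff)
    return w_strong if diff <= tol_deg else 1.0

print(edge_weight(-70.0, -75.0))  # close to the anisotropic angle -> 2.7
print(edge_weight(30.0, -75.0))   # far from it -> 1.0
```

The modulo-180° comparison makes the rule invariant to the arbitrary choice of edge direction, which matters when edge angles are reported in the (-90°, +90°] range.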
Distribution of edges with anisotropically contractile force
Ray et al. (18) observed p-myoII on cell boundaries in the ventral myocardium at HH9 and reported that the intensity of p-myoII on cell boundaries with angles between -90° and 0° was higher than that on boundaries with angles between 0° and 90°. Here, to obtain the result of Fig. S3 (t = 150), we used the weight of anisotropic force strength of edge w = 2.7. We confirmed that the angle distribution under the assumption of w = 2.7 was compatible with the observation of Ray et al. (18), as follows. The model tube at t = 12.5 was almost straight, as shown in Fig. S3 (t = 12.5). An enlarged polygonal pattern is shown in Fig. 4 A, where edges with strong contractile force (large w-value) are represented by thick black lines. We measured the angles of all edges of the polygons in the ventral side in the polygonal patterns of t = 0 (Fig. 4 B) and t = 12.5 (Fig. 4 A) using the method described in Measurement of edge angle with the polarity direction in Materials and methods. The results of these analyses are shown in Fig. 4, E and D. The angle distribution, which had been roughly even at t = 0 (Fig. 4 E), came to have three peaks with an ~60° interval at t = 12.5 (Fig. 4 D). The three peaks remind us of a hexagonal pattern resembling that of a bee's nest. The angle distribution of the edges with strong contractile force is shown in Fig. 4 D (dark gray bar). The total numbers of edges of strong contractile force with a negative angle (-90° to 0°) and a positive angle (0° to 90°) are shown in Fig. 4 D′. The number of edges with negative angles was certainly larger than that of edges with positive angles, which corresponded with the observation of Ray et al. (18). We performed a similar examination of the results of other computer simulations with an anisotropic angle of +75°, as shown in Fig. 4 C. The angle distribution of edges is shown in Fig. 4, F and F′. Differences between the results with the anisotropic angle -75° (Fig. 4 D′) and +75° (Fig. 4 F′) were significant, as described in the legend of Fig. 4.
FIGURE 4 Confirmation of the distribution of anisotropic edges with strong contractile force in the computer simulations. We have assumed that edges whose orientation is close to the anisotropic angle have strong contractile force. Here, the edges with strong contractile force were examined in the results of the early stage of the computer simulations (t = 12.5). (A and C) Partially enlarged areas of heart model tubes with anisotropic angles -75° and +75° are shown (t = 12.5). Horizontal cell arrays are indicated by narrow rectangles, which are slightly tilted in the CW or CCW direction (arrow). The angle distribution that is approximately even at t = 0 (E) changed to have three peaks with a 60° interval at t = 12.5 (horizontal arrows with double arrowheads in D and F). Numbers of strong contractile edges are analyzed (dark blue bar). A comparison between the total numbers of edges with a negative angle (-90° to 0°) and a positive angle (0° to +90°) is performed (186 and 113 in D′; 114 and 189 in F′). Statistics for (D′) and (F′) are as follows: the null hypothesis that the orientations of edges with strong contractile force ([-90° to 0°] or [0° to +90°]) and the assumed anisotropic angles (-75° or +75°) are independent was rejected (p < 0.001 by χ² test for independence; degrees of freedom = 1; χ² statistic = 36.38; the χ² value at p = 0.001 is 10.83). In conclusion, when we assumed that the anisotropic angle of edges is -75° and analyzed the numbers of edges with strong contractile force, edges with negative orientation (-90° to 0°) outnumbered those with positive orientation (0° to 90°), as shown in (D′). When we assumed that the anisotropic angle of edges is +75°, the opposite was shown in (F′). To see this figure in color, go online.
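The independence test reported in the figure legend can be reproduced from the published counts alone (186 vs. 113 strong edges for the -75° tube; 114 vs. 189 for the +75° tube). The sketch below computes the Pearson χ² statistic for the 2×2 table without continuity correction, which matches the reported value of about 36.38.

```python
def chi2_independence(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    (no continuity correction)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row = [a + b, c + d]
    col = [a + c, b + d]
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            exp = row[i] * col[j] / n  # expected count under independence
            chi2 += (obs - exp) ** 2 / exp
    return chi2

# Strong-edge counts (negative angle, positive angle) for the -75° and
# +75° model tubes, taken from the figure legend.
stat = chi2_independence([(186, 113), (114, 189)])
print(round(stat, 2))  # -> 36.39, consistent with the reported 36.38
```

With one degree of freedom, this far exceeds the χ² value of 10.83 at p = 0.001, so the null hypothesis of independence is rejected, as stated in the legend.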
Thus, we confirmed that the method of Determination of anisotropic contractile edges in Materials and methods was appropriate for computer simulations of the embryonic heart tube.
Embryonic chick heart tubes consist of heterogeneously chiral cardiomyocytes
Ray et al. (18) demonstrated that chick myocardial cells are intrinsically chiral and exhibit dominant CW rotation in vitro, and that myocardial cell chirality in the heart tube controls the directionality of cardiac looping. However, the chirality of myocardial cells was not homogeneous. Myocardial cells in the rotation system showed 62.5-74.1% clockwise (CW) rotation and 37.5-25.9% counterclockwise (CCW) rotation (Table S1), which suggests that a left-handed helical heart consists of cells not only with edges of rightward strong contractile force but also with edges of the opposite chiral type.
We thus examined the robustness of the handedness of helical looping in the model tubes. Using a series of random numbers, we made nine heart model tubes consisting of various ratios of cell types (0.1-0.9), where the type ratio is the ratio of the number of -75°-type cells to the total number of cells of both types. We examined the handedness of the resultant twisted model tubes. Results are shown in Fig. S4 (top). Color chains that represent the central lines of tubes were projected on the horizontal plane. The patterns of the projected color chains indicate whether or not model tubes are in a helical loop conformation. These conformations were left-handed helix, hairpin-like bend, and right-handed helix, sequentially from left to right in Fig. S4 (top). The change between left- and right-handed helices took place at a ratio between 0.4 and 0.5. We performed four further computer simulations with model tubes made with other series of random numbers. All results are presented as line charts. The green line is the result of the first series of random numbers. The four other similar computer simulations are represented with black lines. The data were analyzed using t-tests, as described in the legend of Fig. S4. The results of our computer simulations of the heart model tube are consistent with the observation of Ray et al. (18).
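The construction step of the mixed-chirality tubes can be sketched as a simple random assignment: each ventral cell is given the -75° type with probability equal to the type ratio, and the +75° type otherwise. Only the assignment is reproduced here; reading out the handedness of the resulting loop requires running the full vertex-dynamics simulation, and the cell count and seed below are illustrative assumptions standing in for the paper's series of random numbers.

```python
import random

def assign_cell_types(n_cells, type_ratio, seed=0):
    """Assign each ventral cell the -75° or +75° anisotropic type.

    `type_ratio` is the fraction of -75°-type cells, as in the paper's
    ratios 0.1-0.9. The seed stands in for one series of random numbers.
    """
    rng = random.Random(seed)
    return [-75 if rng.random() < type_ratio else +75 for _ in range(n_cells)]

types = assign_cell_types(1000, 0.7)
print(types.count(-75) / len(types))  # close to the requested ratio of 0.7
```

Repeating this with different seeds mirrors the paper's use of several independent series of random numbers to check that the handedness crossover near a ratio of 0.4-0.5 is not an artifact of one particular tube.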
Cell rearrangement took place frequently in the early stage of the helical loop formation
We showed that CE leads to a left-handed helical loop. To elucidate the mechanophysical mechanism of the formation of the left-handed helical loop, we examined the process in detail using computer simulations. Fig. S5 A shows the frequency of the rearrangement of four polygons during helical loop formation. The rearrangement was particularly concentrated until t = 10 and then ceased. The rearrangement after t = 10 appeared to be unnecessary for the loop formation. Indeed, we performed another computer simulation in which the rearrangement did not take place after t = 10, and a similar left-handed helical loop was obtained, as shown in Fig. S5 A. We obtained a similar result by computer simulation with anisotropic angle = +75°, in which we obtained the right-handed helical loop (Fig. S5 B). The results are discussed in relation to the process of helical looping in the General discussion.
A comparison between the early-stage cell patterns (t = 12.5) with anisotropic angles of -75° and +75° showed a subtle but distinct difference (Fig. 4, A and C). Some of the horizontal cell arrays are observed in Fig. 4 A (designated by a narrow rectangle and arrow), and the narrow rectangle is slightly tilted in the CW direction. We also observed an opposite tilt of the narrow rectangle in the model tube with a +75° anisotropic angle (Fig. 4 C, arrow). As already mentioned, we examined the angle distribution of the edges with strong contractile force (Fig. 4 D, dark blue bar). The number of edges with an angle between -90° and 0° was certainly larger than that between 0° and 90° (Fig. 4 D′). The opposite result was obtained in the model tube with a +75° anisotropic angle (Fig. 4 F′). In addition, we found interesting changes in cell patterns. A blue cell in Fig. 4 B (t = 0) had five surrounding neighbor cells that enclosed it. Afterward, there were six surrounding neighbor cells for the blue cell. The manner of addition of one cell was different between the model tubes with -75° and +75° anisotropic angles (Fig. 4, A and C). The new surrounding cell in Fig. 4 A came from the right side (cell designated by a plus sign (+)), whereas the new surrounding cell in Fig. 4 C came from the left side (cell designated by an asterisk (*)). These LR asymmetric behaviors of cells are suggested to be the results of the anisotropic cell properties and the causes of the left- and right-handed helical looping. Cell rearrangements in the heart model tube had almost settled by t = 12.5. Such cell rearrangement did not notably change the whole view of the tube shape (t = 12.5-20; see Fig. S3). Thereafter the cells were elongated and enlarged without cell rearrangement, and a large-scale morphogenesis of helical looping took place.
Taken together, the collective motion of CE and the cell rearrangement did not have an explicit, immediate effect on helical looping but provided the potential for the subsequent formation of large-scale helical looping.
Position-specific deformation of cell colonies in the process of helical loop formation
To examine the relationship between cell deformation in local regions and entire helical looping during left-handed helical loop formation, we divided the ventral side of the model tube into four regions for analysis: anterior left (aL), anterior right (aR), posterior left (pL), and posterior right (pR). We did not analyze the dorsal cells because we assumed that the dorsal cells have no anisotropic properties in the computer simulations. During the process of t = 0-80, Fig. 5 A shows that the aR region extended longitudinally and the aL region extended with curving, which was compatible with the observations of the chick embryonic heart by Kawahira et al. (17). In the posterior region, the results were the opposite: the pR region extended with curving and the pL region extended longitudinally. Furthermore, we examined the behaviors of each cell and its neighbors in detail. We measured the cell area (S), cell shape anisotropy (A_cell = (a - b)/a, where a and b are the longest and shortest axes of the approximated ellipse, respectively), and cell orientation (O_cell; orientation of the longest axis a) (see Materials and methods). To examine the deformation of the neighborhood of each cell, we observed the colony of surrounding cells around each cell. We measured colony shape anisotropy (A_colony = (a′ - b′)/a′, where a′ and b′ are the longest and shortest axes of the approximated ellipse, respectively) and colony orientation (O_colony; orientation of the longest axis a′ of the colony). Results of the measurements are shown in Fig. 5, D and E, in which the lateral surface of the heart model tube was unfolded onto a plane. Each measured cell value was plotted on the plane as a line segment whose orientation was O_cell and whose length was the relative A_cell (Fig. 5 D). Cell shapes at t = 80 were plotted in green. Each measured colony value was also plotted in the other plane as a line segment whose orientation was O_colony and whose length was the relative A_colony (Fig. 5 E).
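The shape measures defined above can be computed from vertex coordinates by approximating the cell (or colony) with the ellipse of its point covariance: the square roots of the eigenvalues give the axes a ≥ b, hence A = (a - b)/a, and the leading eigenvector gives the orientation O. This PCA-style ellipse approximation is one common choice and is an assumption here; the paper's exact fitting procedure is described in its Materials and methods.

```python
import math

def shape_anisotropy(points):
    """Anisotropy A = (a - b)/a and orientation O (degrees) of the ellipse
    approximating a 2D point cloud, via its 2x2 covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Eigenvalues of [[sxx, sxy], [sxy, syy]] from trace and determinant.
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    lam1, lam2 = tr / 2 + disc, tr / 2 - disc
    a, b = math.sqrt(lam1), math.sqrt(max(lam2, 0.0))
    # Orientation of the leading eigenvector (longest axis), in degrees.
    orientation = math.degrees(0.5 * math.atan2(2 * sxy, sxx - syy))
    return (a - b) / a, orientation

# A cell elongated 3:1 along the x axis:
A, O = shape_anisotropy([(-3, 0), (3, 0), (0, 1), (0, -1)])
print(round(A, 2), round(O, 1))  # -> 0.67 0.0
```

The same function applies to a colony by passing the pooled vertices of a cell and its neighbors, which is why the colony measures respond to intercalation even when individual cell shapes change little.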
Comparing Fig. 5, D and E, colony orientation and anisotropy (O_colony and A_colony) differed significantly from cell orientation and anisotropy (O_cell and A_cell). O_colony and A_colony are more sensitive indicators of shape change than O_cell and A_cell because the cells in a colony migrate individually and dynamically through repeated cell intercalation. Statistics of these measurements in each region are presented in Fig. S7 A and Table S2 A. In the anterior and posterior regions, we performed t-tests of these five terms and confirmed that the differences in these terms between the left and right sides were significant (Table S2 A). The result is shown schematically in Fig. 5, B and C. When we examined the difference between the left and right sides in the pooled data of the anterior and posterior regions, the difference was not significant (Fig. S7 A; Table S2 A). This result was in agreement with the observations by Kawahira et al. (17). We also performed a similar examination of the right-handed helical loop (with a +75° anisotropic angle) and obtained similar results, as shown in Fig. S6, Fig. S7 B, and Table S2 B. The simulation with a +75° anisotropic angle was used to show that any LR asymmetric bias of the initial model tube was negligible. Because the initial model tube had been made using random numbers (as described in Construction of the initial model tube in the Supporting materials and methods), we had been concerned about a bias in the LR symmetry of the initial model tube. We confirmed that the bias of the initial model tube was so small that it did not disturb the determination of the handedness of helical looping.
Mechanophysical mechanism that determines the handedness of the helical loop
Which term made the largest contribution to the determination of the handedness of the helical looping among the above-mentioned changes of cells and colonies? To answer this question, we superimposed line segments expressing cell orientations (O_cell) of the left side on the left column and line segments expressing cell orientations of the right side on the right column (Fig. 5 F). We also superimposed line segments of colony orientations (O_colony) of the left and right regions in a similar way (Fig. 5 G). The superimposed line segments of colony orientations are noteworthy. The distribution of line segments of colony orientations close to the horizontal direction was strikingly different between the left and right sides. There were many roughly horizontal line segments in the anterior half of the left column, whereas there were few in the anterior half of the right column. The LR asymmetric distribution was the opposite in the posterior half. Longitudinal expansion close to the vertical direction was also remarkable in the anterior half of the right column and the posterior half of the left column. These results show that the colony orientations (O_colony) can be considered the primary factor in the determination of the handedness of the helix. We also performed a similar analysis of the right-handed helical loop with a +75° anisotropic angle (Fig. S6; Table S2 B). The result was confirmed to be mirror-inverted; that is, the left-right (LR) bias of the initial heart model tube was negligible. On the other hand, the difference in cell orientation (O_cell) between the left and right sides was not so clear, as shown in Fig. 5 F. Cell shapes did not seem to be deeply correlated with the handedness of helical looping.
General discussion
Distinctive feature of the cell-based vertex dynamics model
In this study, we succeeded in making a connection between the chirality of myocardial cells and the handedness of the helical heart tube via anisotropic cell behavior. To investigate the physical mechanism producing the handedness, mathematical models were indispensable. Previously, a few mathematical models had been used in the investigation of looping of the heart tube. Shi et al. (24) constructed a finite element analysis model and recapitulated bending and torsion of the heart tube. Computer simulations using another finite element analysis model were performed and demonstrated a recapitulation of large-scale dynamic heart looping (7). We note the advantage of cell-based vertex dynamics over the finite element analysis models. In finite element analysis, the heart tube is assumed to be a sheet of continuous material rather than an assembly of discrete cells. In the cell vertex dynamics used in our simulation, it was instead possible to assume cell polarity, anisotropic edge properties, chiral properties of individual cells, and the orientation of cell division in individual cells.
On contractile force derived from the potential energy density of cell edges
The edge contractile force is produced by differentiating the term for edge potential energy in the equations of motion (Eq. S1 in the Supporting materials and methods). Indeed, p-myoII was observed to be enriched on specific boundaries of cells by Ray et al. (18). Such boundaries are then expected to become short. However, enrichment of p-myoII on a boundary does not necessarily cause the boundary to shorten. Series of boundaries with p-myoII form polarized supracellular myosin cables and align perpendicularly to the direction of tissue extension (16). Formation of series of p-myoII boundaries consisting of unshortened boundaries was also observed in the formation of the neural tube and analyzed using vertex dynamics (21).
According to the observation of Ray et al. (18), not only p-myoII but also N-cadherin is enriched on specific boundaries of cells. The edge contractile force from the term for edge potential energy is also related to cell-cell adhesion. It may be appropriate that we comment on N-cadherin. When cell-cell adhesion of a cell boundary is strong, the boundary is known to be elongated in the theory of differential cell adhesion in the cell sorting system (25)(26)(27)(28)(29). When cell adhesion is weak, the cell boundary becomes short; that is, the boundary contracts. The observation of N-cadherin on specific boundaries enables us to consider the contribution of N-cadherin to the left-handed helical looping. However, further investigation on the interaction between the two contributions of the contractile molecules and cell-cell adhesion molecules is required.
The helical looping process consists of two phases: implicit preparation for looping and explicit remodeling of looping
We demonstrate in Fig. S5 A that cell rearrangement took place through frequent reconnections of paired vertices in the early stages of helical looping, but the heart model tube at this stage is almost straight in shape. After this stage, the heart tube is dynamically remodeled, forming the helical loop without cell rearrangement. The helical loop is not made by continuous accumulation of the elemental process of rearrangement but rather by switching from the stage of cell rearrangement to the stage of alteration of the cells themselves. This result may correspond to the proposal by Ray et al. (18) that, before cardiac looping, LR polarization of N-cadherin and myosin II on cell boundaries could lead to LR asymmetric cellular contraction and junctional remodeling.
Furthermore, the mathematical result of the two-phase process may help to explain a discrepancy between two experimental results. Kidokoro et al. (16) reported that when the heart tubes were exposed to myosin II inhibitors at the HH8/9 stages, the treatment blocked the directional extension of cells and the heart tube did not loop (16). Interestingly, heart tubes that were exposed to myosin II inhibitors at the HH10-12 stages looped normally (6,30,31). In addition, treatment with cytochalasin, an inhibitor of actin filament formation, is known to inhibit the looping of the heart tube (6,14). The heart tube at the HH8/9 stage may be implicitly preparing for heart looping, and heart looping may take place explicitly at the HH10-12 stages by a mechanism that is not inhibited by myosin II inhibitors. However, further detailed investigations of cell rearrangements are required for confirmation.
Intrinsic and extrinsic factors causing the left-handed helical looping
The cell chirality of anisotropic cell edge properties, based on the observation by Ray et al. (18), is the intrinsic factor in chiral helical looping. We demonstrated the possibility that the intrinsic factor alone, without external factors, determines the handedness of heart looping. However, Shi et al. (6), who have extensive experience with the real heart tube, noted the left and right omphalomesenteric veins (OVs) connected caudally to the heart tube. Normally, the left OV is larger and exerts more pushing force than the right OV, causing the heart tube to form a left-handed helical loop. Recently, computer simulations were performed based on detailed observation of developing mouse heart tubes (7), and a recapitulation of large-scale dynamic heart looping was demonstrated. In that study, a rightward rotation of the arterial pole and an asymmetric cell ingression in the venous pole were observed. On the basis of observations of the formation of the initial heart tube, Kidokoro et al. (16) described that the left heart cells may rearrange more actively than the right cells, driving asymmetric heart elongation and looping. In our previous study, we introduced a rightward displacement of the anterior part of the heart tube and succeeded in forming the left-handed helical loop (12). The above-mentioned observations belong to extrinsic factors.
We have two questions about the intrinsic and extrinsic factors in heart tube looping. 1) Which is the major cause of the left-handed helical looping, the intrinsic or the extrinsic factor? 2) The outcomes of the handedness of the helical looping by these two factors seem to be the same; why are the effects of the two factors consistent? For the first question, we consider that the intrinsic and extrinsic factors synergistically determine the handedness of heart looping; that is, an initially subtle asymmetry is amplified via a positive feedback interaction between the intrinsic and extrinsic responses. In fact, in this work, we showed that the individual properties of cell chirality caused the left-handed helical looping. On the other hand, the external rightward displacement of the heart model tube has been suggested to cause asymmetrical arrangements of individual cells in the previous study (12). The individual cells and the global deformation of the heart tube may synergistically interact with one another. The internal and external factors may thus not be redundant. For the second question, we think of the nodal signaling pathway, which is known to be a global molecular signaling pathway establishing the LR bias of embryonic laterality. Nodal signaling may be related to the above-mentioned extrinsic factors that lead to the left-handed helical looping (the larger left OV, the rightward rotation of the arterial pole, and the LR asymmetric cell ingression in the venous pole). On the other hand, according to Ray et al. (18), myocardial cells constructing the heart tube have CW chirality initially, and nodal signaling reverses the chirality of myocardial cells from CW to CCW. The cells in the right side of the heart tube, originating from the nodal-negative lateral plate mesoderm (LPM), exhibit dominant CW chirality, whereas the cells in the left side of the heart tube, receiving contributions from the nodal-positive LPM, exhibit a more randomized cellular bias. Ray et al. (18) reported that such a heart tube forms the left-handed helix. When we consider the effect of nodal signaling, we understand that the intrinsic and extrinsic factors work consistently in the heart tube. Desgrange et al. (32) also mentioned that intrinsic and extrinsic mechanisms are not mutually exclusive and may well occur synergistically to drive morphogenesis.
Consideration of CE of collective cells across different animal species
Behaviors of myocardial cells in the chick heart have been observed in detail (16). The cells intercalated with each other, and p-myoII was enriched in cell edges aligned along the convergence axis and perpendicularly to the direction of tissue extension, indicating that CE occurred. The myocardial cells in the chick heart tube show asymmetry with rightward-biased edges on which N-cadherin and p-myoII were enriched (18). These data suggest that CE works in the heart tube. Under the assumption of the anisotropic contractile force of edges, we then performed computer simulations using the mathematical model. The computer simulation suggested that CE works in the heart model tube and the heart tube was remodeled into a chiral structure of the left-handed helix. Generally, CE is known to be exerted in axial developmental processes across different animal species (e.g., chick, mouse, Xenopus, and zebrafish) (33). We will thus discuss the formation of the chiral heart structure across these animal species.
We have investigated the looping of the mouse embryonic heart (12). Contrary to the heart of the chick embryo, the looping of the mouse embryonic heart is deeply related to the proliferation of cardiomyocytes (12). The mouse heart tube bent in a hairpin fashion through the localized proliferation of cardiomyocytes on the ventral side. By successive anterior-rightward displacement of the tube, we succeeded in remodeling the bent tube loop into a left-handed helix. The mechanism of the anterior-rightward displacement of the heart tube had been unclear in the previous work. On the other hand, our results with the chick heart model tube in this study suggest the possibility that anisotropic edge contractile force contributes, through the rightward displacement of the heart tube, to loop formation of the mouse embryonic heart. We thus performed an additional computer simulation in which the anisotropic edge contractile force works in the mouse heart tube in addition to cell proliferation (Fig. S8). Fig. S8 A is the result for the model tube with only cell proliferation; the model tube simply bent. When we added an anisotropic contractile force of edges (anisotropic angle, -75°), the model tube became a left-handed helix, as shown in Fig. S8 B. The ventral view of Fig. S8 B shows the anterior-rightward displacement of the tube. Inversely, when the anisotropic angle was +75°, we obtained the right-handed helical loop, as shown in Fig. S8 C. It is plausible that the anisotropic contractile force of edges (-75°), via CE of the myocardium, caused the anterior-rightward displacement of the mouse heart tube.
The shape of the Xenopus heart is a left-handed heart tube similar to the chick and mouse hearts (34). CE in the Xenopus heart tube has been experimentally investigated, and CE-defective mutants show heart abnormalities (33). Because heart looping in birds, mice, and frogs appears to be similar, there may be a common role for CE in the formation of heart looping.
In contrast to the heart of amniotes, fish do not show a well-defined helical structure of the heart (35)(36)(37)(38)(39), and the mature shape of the fish heart is a flat S shape (32). In the early stage of development of the zebrafish heart tube, the heart tube forms via the fusion of bilateral cardiac cell populations of the LPMs, which are assembled into a disk that rotates clockwise (38). The disk is remodeled into a cone-shaped intermediate, in which ventricular precursors forming the arterial end are at the tip and atrial precursors forming the venous end are at the base. The cone then telescopes out into a tube (40). As the tube undergoes elongation, its venous end is displaced toward the left, accompanied by rotation around the axis of the venous portion (38). The ventricular chamber starts to bend rightward, a process referred to as cardiac looping. The outer curvature of the cardiac chambers expands under the constriction at the atrioventricular region. The axis of the looped heart in zebrafish then takes the shape of a flat S. Formation of the zebrafish heart involves rotation of the disk and cone, elongation of the cone, and bending of the cone axis; that is, the CE process is expected to be exerted in zebrafish heart formation. Indeed, loss-of-function analysis was performed using gene knockdowns, demonstrating significant impairment of CE (33). In contrast to the normal rightward looping of the heart observed in controls, the heart often failed to loop and instead showed a mirror reversal in the knockdown zebrafish. The CE process thus seems to contribute to the formation of the chiral structure of the zebrafish heart. Further investigation of CE at the cellular level is expected.
CONCLUSION
The results of these computer simulations can be summarized as follows. The anisotropic contractile force of cell edges caused the cell rearrangements, which consequently produced the CE of cell colonies. Cells slipped out from between neighboring cells in the convergent region, and cells intercalated between neighboring cells in the extension region. Such deformation took place with different CE orientations between the left and right regions and between the anterior and posterior regions of the heart tube. Thus, the direction of the edge contractile force is considered to determine whether the helical loop is left- or right-handed. It should be noted that, despite the assumption that the distribution of edges with anisotropic contractile force was entirely uniform on the ventral side, the response of colonies in the heart model tube differed in each region. Such regional differences produced the chiral structure of the helical loop.
AUTHOR CONTRIBUTIONS
H.H. designed the project, performed simulations, and wrote the manuscript.
Selection and the Distribution of Female Hourly Wages in the U.S.
We analyze the role of selection bias in generating the changes in the observed distribution of female hourly wages in the United States using CPS data for the years 1975 to 2020. We account for the selection bias from the employment decision by modeling the distribution of the number of working hours and estimating a nonseparable model of wages. We decompose changes in the wage distribution into composition, structural and selection effects. Composition effects have increased wages at all quantiles while the impact of the structural effects varies by time period and quantile. Changes in the role of selection only appear at the lower quantiles of the wage distribution. The evidence suggests that there is positive selection in the 1970s which diminishes until the later 1990s. This reduces wages at lower quantiles and increases wage inequality. Post 2000 there appears to be an increase in positive sorting which reduces the selection effects on wage inequality.
Introduction
The dramatic increase in female wage inequality in the United States since the early 1980s (see, for example, Katz and Murphy 1992, Katz and Autor 1999, Lee 1999, Autor et al. 2008, Acemoglu and Autor 2011, Autor et al. 2016, and Murphy and Topel 2016) has been accompanied by substantial changes in both female employment rates and the distribution of their annual hours of work. Given the prominence that accounting for the selection bias (see Heckman 1974, 1979) from employment decisions has played in empirical studies of the determinants of female wages, it seems natural to investigate its role in the evolution of wage inequality. This paper examines the sources of changes in the distribution of female hourly real wage rates in the United States from 1975 to 2020 while accounting for movements, and individuals' locations, in the annual hours of work distribution.
The inequality literature allocates wage changes to two sources. The first is the "structural effect" which captures the market value of an individual's characteristics.
This includes skill premia, such as the returns to education (see, for example, Welch 2000 and Murphy and Topel 2016), cognitive and noncognitive skills (Heckman, Stixrud and Urzua 2006), declining minimum wages in real terms (see, for example, DiNardo et al. 1996 and Lee 1999), and the increasing use of non-compete clauses in employment contracts (Krueger and Posner 2018). The second source, referred to as the "composition effect", reflects differences across workers' observed characteristics.
These include increases in educational attainment. Earlier papers (see, for example, Angrist et al. 2006, and Chernozhukov et al. 2013) have estimated these two effects under general conditions. However, as they focus on male wages, they have ignored the role of selection.
Understanding the role of selection in the female wage inequality context is important. First, as the impact of selection is frequently interpreted as reflecting sorting patterns it is valuable from a policy perspective to understand how worker productivity has changed as an increasing proportion of women have entered the labor market. Second, the importance of accounting for selection bias in estimating the determinants of female wages suggests that an evaluation of the role of structural and composition effects requires an appropriate treatment of selection. Third, assessing the impact of selection on wages and inequality is particularly relevant when the composition of the working and nonworking populations have evolved as drastically as has occurred in our sample period. Finally, understanding the impact of selection bias may provide policy makers with guidance as to which measures may be taken to reduce wage inequality.
Three important papers have investigated the role of selection in the United States over a period of increasing female wage inequality. Mulligan and Rubinstein (2008), hereafter MR, correct for selection in the female mean wage in the United States for the years 1975-1999 and argue that the sharp increase of female wages partially reflected that the selected population of working females became increasingly more productive in terms of unobservables. They also find the pattern of sorting turned from negative to positive in the early 1990s. Evaluating the contribution of selection bias on the mean wage is straightforward in additive models as the selection component, under some assumptions, can be separated. The pattern of sorting is inferred from the coefficient for the selection correction. The nonseparable model required for estimating wage distributions has greater difficulties in isolating the selection component. Maasoumi and Wang (2019), hereafter MW, employ the copula based estimator for quantile selection models of Arellano and Bonhomme (2017), hereafter AB. AB and MW define the selection effect as the difference between the observed wage distribution and the counterfactual wage distribution simulated via their models' estimates assuming 100% participation. MW provide a similar conclusion regarding the pattern of sorting as MR for the overlapping years in their studies. Blau et al. (2021) follow Olivetti and Petrongolo (2008) who use a "selection on unobservables" approach to impute wages for nonworkers based on propensity scores for employment. They also compute the predicted wage distribution assuming 100% participation. In contrast to MR and MW, they find a more modest role for selection and that sorting did not change sign over the sample period. We define the selection effect as the difference in the observed wage distribution and the wage distribution that would result under the participation process associated with the year with the lowest participation rate.
We find that the direction of sorting did not change during the years considered by MR.
We address several methodological and empirical issues regarding selection in the female wage inequality context. Our methodological contributions are the following.
First, we extend the Fernández-Val, Van Vuuren, and Vella (2021), hereafter FVV, estimator for nonseparable models with censored selection rules. The FVV estimator incorporates the number of working hours rather than the binary work decision as the selection variable and here we allow for different censoring points conditional on the individual's characteristics. This variation across censoring points captures differences in "fixed working costs" (see, for example, Cogan 1981). Second, we provide a procedure for decomposing changes in the wage distribution into structural, composition and selection effects in a nonseparable model which allows for selection from the choice of annual working hours. We contrast our decomposition approach with the corresponding exercise based on the Heckman (1979) selection model (HSM). Third, we extend our estimator to allow selection into annual hours to reflect two separate selection mechanisms. Namely, the choices of annual weeks and weekly hours. Fourth, we provide an estimator motivated by the ordered treatment model of Heckman and Vytlacil (2007) which allows for bunching in annual hours or annual weeks and apply it via our decomposition method.
Our following empirical contributions feature results based on the two most commonly employed Current Population Survey (CPS) data sets. First, unlike MR and MW who analyze wages for full-time full-year (FTFY) workers, we obtain a fuller picture of the evolution of the wage distribution by including all workers and accounting for selection from the hours of work decision. Second, we confirm previous findings, restricted to FTFY workers, regarding movements in the wage distribution. Female wage growth at lower quantiles is modest although the median wage has grown steadily. Gains at the upper quantiles are large and have produced an increase in female wage inequality. Finally, we provide new evidence regarding the role of selection. Changes in selection are especially important at the lower end of the wage distribution and have generally decreased wage growth and increased wage inequality. Although we are able to reproduce the estimated sorting pattern as MR and MW, we illustrate this reflects the employed identification assumptions.
We show that exploiting the variation in hours worked as a form of identification produces results consistent with positive sorting for the whole sample period.
An important empirical result relates to the pattern of sorting and its implication for the impact of selection on wages and inequality. We find clear evidence of positive sorting in the mid 1970s. The period 1975 to 2000 experiences a shift in the distribution of female annual hours of work, accompanied by a reduction in the level of positive sorting. These two forces decrease wages at lower quantiles and increase wage inequality. For the remainder of our sample period there appears to be a return to higher levels of positive sorting and a decrease in the impact of selection on wage inequality.
The rest of the paper is organized as follows. The next section discusses the data.
Section 3 describes our empirical model and defines our decomposition exercise. It also provides alternative estimators employing ordered or multiple censored selection rules. This section concludes with a comparison of our decomposition approach with that associated with the HSM. Section 4 presents the empirical results. Section 5 reconciles the difference between our results with those of MR, while Section 6 investigates the impact of the changes in selection in wage inequality. Section 7 offers some concluding comments.
Data
We employ the two most commonly analyzed micro-level data sets, the Annual Social and Economic Supplement (ASEC) and the Merged Outgoing Rotation Groups (MORG), from the CPS. Appendix A of Lemieux (2006) provides a comparison of the two data sets. We employ both to contrast results and to allow comparisons with earlier studies.
Annual Social and Economic Supplement
We employ the ASEC for the 46 survey years from 1976 to 2021 reporting annual earnings for the previous calendar year. Unless otherwise stated, we refer to the year for which the data are collected and not that of the survey. The 1976 survey is the first for which information on weeks worked and usual hours of work per week last year are available. To avoid issues related to retirement and ongoing educational investment we restrict attention to those aged 24-65 years in the survey year. This produces an overall sample of 2,219,820 females. The annual sample sizes range from 33,924 in 1976 to 59,622 in 2001. Annual hours worked are defined as the product of weeks worked and usual weekly hours of work last year. Those reporting zero hours usually respond that they are not in the labor force in the week of the March survey. We define hourly wages as the ratio of reported annual labor earnings in the year before the survey, converted to constant 2019 prices using the consumer price index for all urban consumers, and annual hours worked. Hourly wages are unavailable for those not in the labor force.
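The wage construction just described (annual hours as weeks worked times usual weekly hours, nominal earnings deflated to 2019 prices) can be sketched as follows. The CPI values and the column names here are illustrative stand-ins, not the paper's actual inputs.

```python
import numpy as np
import pandas as pd

# Illustrative CPI-U index values (hypothetical numbers for this sketch only).
CPI = {1976: 56.9, 2019: 255.7}

def hourly_wage(df, base_year=2019):
    """Annual hours = weeks worked x usual weekly hours; the hourly wage is
    annual earnings divided by annual hours, deflated to base-year prices.
    Rows with zero annual hours (out of the labor force) get NaN wages."""
    out = df.copy()
    out["annual_hours"] = out["weeks_worked"] * out["usual_hours"]
    deflator = CPI[base_year] / out["year"].map(CPI)
    nominal = out["annual_earnings"] / out["annual_hours"].replace(0, np.nan)
    out["real_wage"] = nominal * deflator
    return out

sample = pd.DataFrame({
    "year": [1976, 1976],
    "annual_earnings": [8000.0, 0.0],
    "weeks_worked": [50, 0],
    "usual_hours": [40, 0],
})
res = hourly_wage(sample)
```

The first row yields 8000 / 2000 = 4 nominal dollars per hour, scaled up by the CPI ratio; the second row, with zero hours, is left without a wage, mirroring the treatment of those out of the labor force.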
For the self-employed, unpaid family workers and the Armed Forces annual earnings or annual hours tend to be poorly measured and we exclude these groups from our sample. This results in a deletion of 5.4, 0.4 and 0.07 percent of the observations for the self employed, unpaid family workers and the Armed Forces, respectively.
The figures for self employed and the armed forces have trended upwards while those for family workers have trended downwards over the sample period. These groups do not show any cyclical variation. The only exception is the number of self employed during the Great Recession which, compared to the total employed, dropped considerably. We use observations with imputed wages for their values of working hours but do not use them in the wage sample. The restriction to civilian dependent employees with positive hourly wages and people out of the labor force last year results in a sample of 2,055,063 females. The subsample of civilian dependent employees with positive hourly wages comprises 1,190,928 observations. A benefit of the ASEC is its extensive family background variables.
Merged Outgoing Rotation Groups
We use the years 1979 to 2019 for the MORG using the CEPR extracts. The MORG contains information on hourly wages in the survey week for those paid by the hour and on weekly earnings from the primary job during the survey week for those not paid by the hour. Lemieux (2006) and Autor et al. (2008) also employ these data. The largest annual sample occurs in 1980 (121,786) and the smallest in 2019 (91,647). The subsample of civilian dependent employees working in the reference week is 2,219,820 observations. This low figure, relative to the ASEC, is expected as employees who did not work in the reference week may have worked in another week. Family background variables are only available since 1984. This restricts the family background characteristics to family size. Figure 1 confirms two observations made by Lemieux (2006). One explanation, discussed below, may be the relatively lower employment rate in the MORG. This implies that the MORG D1 is higher in the population distribution than the ASEC D1.
Descriptive statistics
The difference between the data sets decreases for the MORG in 1979-1981 with a corresponding smaller decrease for the ASEC. The ASEC and the MORG then show similar growth with the ASEC wage consistently below the MORG. As noted by Lemieux (2006), the period 1979 to 1984 displays a sharp increase in the residual variance in the MORG not found for the ASEC. The comparison of wage dispersion across the two data sets is somewhat more complicated. Overall, the MORG has the larger increase but the difference reflects wage movements for the period 1979 to 1984. The increase in the interdecile ratio post 1984 is relatively lower for the MORG. This is consistent with Lemieux (2006) who notes that the ASEC not only has higher wage dispersion but that its dispersion also increases faster over time. Differences across hours categories might also reflect selection effects. The failure to include those who do not work FTFY means that the selection effects in earlier studies may reflect movement from the non-FTFY to FTFY, rather than from non-employment to FTFY.
Econometric analysis

Model with Censored Selection
We consider a version of the HSM where the censoring rule for the selection process incorporates the information on annual hours worked rather than the binary employment/non-employment decision. The model has the form:

Y = g(X, E), observed if D = 1,   (1)
H* = h(Z, V), with H = H* observed if D = 1,   (2)
D = 1{H* > µ(Z)},   (3)

where Y is the logarithm of hourly wages, H is annual hours worked, D is a selection indicator equal to one when the latent hours variable H* is not censored at µ(Z), X and Z are vectors of observable conditioning variables, g, h and µ are unknown functions, and E and V are respectively a vector and a scalar of potentially dependent unobservable variables with cumulative distribution functions (CDFs) F_E and F_V. We assume that X is a, not necessarily strict, subset of Z, i.e. X ⊆ Z.
We refer to equation (3) as the selection rule. It corresponds to censored selection with an unobserved censoring point, that is we observe the censoring status, D, but not the censoring point, µ(Z). Equations (2)-(3) can be considered a reduced-form representation for hours worked. The model is a nonparametric and nonseparable version of the Tobit type-3 model considered by FVV, extended to incorporate an unknown censoring threshold which is a function of Z. This threshold is motivated by fixed labor costs measured in terms of hours. Individuals only work if the desired number of hours exceeds a minimum number given by µ(Z). Cogan (1981) shows that fixed labor costs reduce the number of individuals working very few hours. We allow the fixed labor costs to vary by individual and household characteristics.
Without loss of generality, we can normalize V to be standard uniformly distributed. The potential dependence between E and V implies that Z's independence of E in the entire population does not exclude dependence in the selected population. Lemma 1 extends FVV's result to our model.
Lemma 1 (Existence of Control Function) Under the model in (1)-(3) and Assumption 1: (i) V is a valid control function for the selected population; and (ii) V = F_{H*|Z}(H | Z) for the selected population.

The proof of the first statement follows from the same argument as in Lemma 1 of FVV. The assumption that Z is independent of (E, V) then proves the result, implying that V is an appropriate control function. The result V = F_{H*|Z}(H | Z) follows directly from the assumption that h is strictly increasing in its second argument and the normalization on the distribution of V. Identification of F_{H*|Z} follows from Buchinsky and Hahn (1998). In the Appendix, we propose an estimator of F_{H*|Z} based on distribution regression. This estimator is an alternative to the estimators of Buchinsky and Hahn (1998) and Chernozhukov and Hong (2002), which are based on quantile regression.
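The second statement of the lemma is constructive: the control variable is the conditional rank of observed hours in the latent-hours distribution. A minimal numerical sketch follows, with a discrete Z so that F_{H*|Z} can be estimated by within-cell ranks (a simple stand-in for the distribution-regression estimator mentioned above); the data-generating process is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated censored selection: latent hours H* = h(Z, V) are strictly
# increasing in V ~ U(0,1) and observed only when H* exceeds the
# censoring point (constant here for simplicity).
n = 20000
Z = rng.integers(0, 3, size=n)           # discrete conditioning variable
V = rng.uniform(size=n)                  # true control variable
H = 500 + 600 * Z + 1500 * V             # latent hours, monotone in V
D = H > 800                              # selection indicator

def control_function(H, Z, D):
    """V_hat = F_{H*|Z}(H | Z) for selected observations.  Censored draws
    all lie below the censoring point, hence below every selected H in the
    same Z-cell, so the within-cell rank of a selected H is computable even
    though the censored H* values themselves are unobserved."""
    Vhat = np.full(H.shape, np.nan)
    for z in np.unique(Z):
        cell = Z == z
        sel = cell & D
        n_cens = int((cell & ~D).sum())
        rank_in_sel = H[sel].argsort().argsort()     # 0-based ranks
        Vhat[sel] = (n_cens + rank_in_sel + 1) / cell.sum()
    return Vhat

Vhat = control_function(H, Z, D)
```

In this design F_{H*|Z}(H* | Z) = V exactly, so the estimated control function should track the simulated V closely on the selected sample.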
The decompositions presented below require a wage distribution which incorporates the value of V and a statement regarding the region in which it is identified.
To proceed, we denote the support of random variables and vectors by calligraphic letters while lower case letters in parentheses indicate that the support is conditional on a stochastic vector taking a particular value; e.g. Z(x) is the support of Z | X = x. We define the Local Average Structural Function (LASF) and Local Distribution Structural Function (LDSF) as:

µ(x, v) = E[g(x, E) | V = v],
G(y, x, v) = Pr[g(x, E) ≤ y | V = v].   (4)

They represent the mean and distribution of Y if all individuals with control function equal to v had observable characteristics equal to x. An argument similar to FVV shows that the LASF and LDSF are identified over the set:

XV* = {(x, v) : h(z, v) > µ(z) for some z ∈ Z(x)}.

This set, referred to as the identification set by FVV, is identical to the support of (X, V) among the selected population. Lemma 1 implies that the LASF and LDSF equal the mean and distribution of the observed Y conditional on (X, V) and that they are identified. This follows directly from (E, V) ⊥⊥ Z and that (x, v) ∈ XV* implies the ability to find a (z, v) combination for which h(z, v) > µ(z). We refer to FVV for a discussion on how the size of the identified set depends on the availability of exclusion restrictions on Z with respect to X.
There are different candidates for H in (2). As the ASEC provides both usual hours worked per week and annual hours, calculated as the product of weeks worked last year and the usual number of hours worked per week, we employ several alternatives. Although the usual hours per week may be the variable in the ASEC that is closest to the hours decision in labor supply models (Killingsworth, 1983), it may also reflect whether the job has pre-set hours. Therefore, we employ the annual measure which incorporates the weeks decision. As the extensive margin may capture whether an individual has worked a positive number of hours in the past year, we also investigate the use of the number of worked weeks. A theoretical motivation for this measure follows from search models in which the offered wages depend positively on the job offer arrival rate and negatively on the separation rate (Burdett and Mortensen, 1998). As these rates also determine the number of weeks worked, it implies a relationship between weeks worked and wages. The appropriate censoring variable in the MORG is the number of hours worked in the reference week. Note that this variable solves some of the problems mentioned above.
Counterfactual distributions
We consider counterfactual CDFs constructed by integrating the LDSF with respect to different joint distributions of the conditioning variables and control function:

G^s_Y(y) = ∫ G(y, x, v) dF^s_{Z,V}(z, v),

where F^s_{Z,V} denotes the joint CDF of (Z, V) in the selected population and F_{Z,V} denotes the joint CDF of Z and V in the entire population.
The counterfactual CDFs are constructed by combining the CDFs G and F_{Z,V} with the selection rule (3) for different groups, each group corresponding to a different time period or a subpopulation defined by certain characteristics. Specifically, let G_t be the LDSF in group t, F_{Z_k,V_k} be the joint CDF of Z and V in group k, and let 1{h_r(z, v) > µ_r(z)} be the selection rule in group r. The counterfactual CDF of Y when G is as in group t, F_{Z,V} is as in group k, and the selection rule is as in group r is defined as:

G^s_{Y,t,k,r}(y) = ∫ 1{h_r(z, v) > µ_r(z)} G_t(y, x, v) dF_{Z_k,V_k}(z, v) / ∫ 1{h_r(z, v) > µ_r(z)} dF_{Z_k,V_k}(z, v),   (5)

provided that the integrals are well-defined. Since the mapping v → h(z, v) is strictly monotonic, the condition h_r(z, v) > µ_r(z) in (5) is equivalent to the condition:

v > 1 − π_r(z),   (6)

where 1 − π_r(z) is the probability of working less hours than the censoring point conditional on Z = z in group r and π_r(z) is the propensity of working in that group. Given G^s_{Y,t,k,r}(y), the corresponding counterfactual quantile function (QF) is:

q^s_{Y,t,k,r}(τ) = inf{y : G^s_{Y,t,k,r}(y) ≥ τ}.   (7)

Under these definitions the observed CDF and QF of Y for the selected population in group t are G^s_{Y,t,t,t} and q^s_{Y,t,t,t} respectively. Nonparametric identification of (5) and (7) depends on whether the integrals in (5) are well defined. They are when two conditions are met. First, if Z_k ⊆ Z_r, then π_r is identified over all z combinations in the integral. Second, when (XV_k ∩ XV*_r) ⊆ XV_t, then the LDSF is identified for all combinations of z on which we integrate.
Here, XV*_r denotes the support of (X, V) for the selected population in group r. The identification conditions simplify when we consider two years for k, r, and t, such as 0 and 1, which is relevant for the decompositions. For example, a sufficient condition is that the employment rates in year 0, conditional on X, are lower than those in year 1.
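Operationally, the counterfactual construction amounts to: draw (Z, V) from group k, keep the pairs admitted by group r's selection rule (v > 1 − π_r(z)), average group t's LDSF over the survivors, and invert the resulting CDF for quantiles. A stylized simulation sketch follows; the LDSF, the propensity, and the distribution of (Z, V) below are all invented for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def G_t(y, z, v):
    """Stylized LDSF of group t: Y | Z=z, V=v ~ N(z + v, 1)."""
    return norm.cdf(y - (z + v))

def pi_r(z):
    """Stylized employment propensity of group r (z in {0, 1})."""
    return 0.5 + 0.4 * z

# Draw (Z, V) from group k and apply group r's selection rule v > 1 - pi_r(z).
n = 40000
z = rng.integers(0, 2, size=n).astype(float)
v = rng.uniform(size=n)
keep = v > 1.0 - pi_r(z)
zs, vs = z[keep], v[keep]

def counterfactual_cdf(y):
    """Average the group-t LDSF over the selected (z, v) pairs."""
    return G_t(y, zs, vs).mean()

def counterfactual_quantile(tau):
    """Invert the counterfactual CDF on a grid."""
    grid = np.linspace(-4.0, 6.0, 501)
    cdf = np.array([counterfactual_cdf(y) for y in grid])  # monotone in y
    return grid[np.searchsorted(cdf, tau)]
```

Holding the simulated (z, v) pairs fixed across the grid keeps the estimated CDF monotone in y, so a simple searchsorted inverts it.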
Using (7), we decompose the difference in the observed QF of Y for the selected population between any two groups, say group 1 and group 0, as:

q^s_{Y,1,1,1}(τ) − q^s_{Y,0,0,0}(τ) = [q^s_{Y,1,1,1}(τ) − q^s_{Y,1,1,0}(τ)] + [q^s_{Y,1,1,0}(τ) − q^s_{Y,1,0,0}(τ)] + [q^s_{Y,1,0,0}(τ) − q^s_{Y,0,0,0}(τ)],   (8)

where the first bracketed term [1] is a selection effect that captures changes in the selection rule given the joint distribution of Z and V, the second [2] is a composition effect that reflects changes in the joint distribution of Z and V, and the third [3] is a structural effect that reflects changes in the conditional distribution of Y given Z and V. These effects are relative to the base year. We stress that this definition of the selection effect differs from the standard definition. This is discussed in Section 3.5.
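The three-way decomposition is a telescoping sum: swap one group index at a time in the counterfactual quantile q(t, k, r; τ). A sketch with a toy linear q (purely illustrative) shows the bookkeeping and that the three effects add up to the total by construction.

```python
def decompose(q, tau):
    """Split q(1,1,1) - q(0,0,0) at quantile tau into selection,
    composition and structural effects by changing, in turn, the
    selection-rule index r, the composition index k, and the structural
    index t (indices ordered as (t, k, r))."""
    selection   = q(1, 1, 1, tau) - q(1, 1, 0, tau)
    composition = q(1, 1, 0, tau) - q(1, 0, 0, tau)
    structural  = q(1, 0, 0, tau) - q(0, 0, 0, tau)
    return selection, composition, structural

# Toy counterfactual quantile function: each index shifts wages additively.
def q_toy(t, k, r, tau):
    return 0.3 * t + 0.5 * k + 0.1 * r + tau

sel, comp, struc = decompose(q_toy, 0.5)
```

Because the intermediate counterfactuals cancel pairwise, the sum of the three effects always equals the total change, whatever q is.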
Double selection mechanism
The model can be extended to a multiple censored selection mechanism operating through both weeks and hours. The model has the form:

Y = g(X, E), observed if D_W D_H = 1,
W* = h_W(Z, V_W), D_W = 1{W* > µ_W(Z)},
H* = h_H(Z, V_H), D_H = 1{H* > µ_H(Z)},

where W* and H* denote latent annual weeks and weekly hours. The analysis of this model is similar to that above. However, it is necessary to employ both control functions to, for example, calculate the LDSF, i.e.

G(y, x, v_H, v_W) = Pr[g(x, E) ≤ y | V_H = v_H, V_W = v_W].
The identification conditions change to accommodate that the support condition is defined over two control functions. Using the same notation as Section 3.2, the support requirements for the counterfactuals are ZV_HV_{W,k} ⊆ ZV_HV_{W,r} and (XV_HV_{W,k} ∩ XV_HV*_{W,r}) ⊆ XV_HV_{W,t}. We acknowledge that there are circumstances under which this model will collapse to the single censoring mechanism case. However, as these are somewhat obvious we do not detail them here.
Model with ordered selection
The models above employ control functions which assume that the selection variable is continuous. However, both the numbers of weeks and hours worked feature bunching at specific values (e.g. 40 hours and 52 weeks). The following model with ordered selection incorporates bunching:

Y = g(X, E), observed if H > 0,   (9)
H = h if µ_{h−1}(Z) < V ≤ µ_h(Z), h ∈ {0, 1, ..., K},   (10)
µ_{−1}(Z) := 0, µ_K(Z) := 1,   (11)

where the variables have similar interpretations as above. The main difference between this model and those above is that it allows a discrete distribution of H at the expense of requiring separability in the selection process. We assume Z ⊥⊥ (E, V) and V follows a standard uniform distribution. This model is related to the ordered choice model of Heckman and Vytlacil (2007, p. 4980), but unlike their model g(X, E) does not depend on H. It can also be interpreted as an extension of Newey (2007) to multiple ordered outcomes.
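In the ordered model the thresholds µ_h(Z) partition the unit interval for V, and an individual's hours category is simply which interval V falls into. A small sketch with illustrative constant thresholds:

```python
import numpy as np

rng = np.random.default_rng(2)

def hours_category(V, thresholds):
    """H = h iff mu_{h-1} < V <= mu_h, with mu_{-1} := 0 and mu_K := 1.
    thresholds = (mu_0, ..., mu_{K-1}), an increasing array in (0, 1);
    category 0 means non-participation."""
    return np.searchsorted(thresholds, V, side="left")

mu = np.array([0.2, 0.5, 0.9])        # illustrative thresholds, K = 3
V = rng.uniform(size=100000)
H = hours_category(V, mu)
```

With V uniform, the implied category shares equal the gaps between thresholds (0.2, 0.3, 0.4, 0.1 here); bunching at a popular value such as 40 hours corresponds to a wide interval between two adjacent thresholds.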
We define the identification set as:

XP_K = {(x, p) : µ_{h'}(z) = p for some (h, z) ∈ HZ(x) with h > 0 and h' ≤ h}.

This set collects (x, p) combinations in the selected sample (i.e. H = h > 0) for which there is a (h, z) combination in HZ(x) such that µ_{h'}(z) = p for the propensity score of a value of H smaller or equal than the observed value h. For example, if H = 3, this restriction is satisfied when µ_{h'}(z) = p, for some h' ∈ {0, 1, 2, 3}. We define the LDSF as in (4). We prove the following lemma in the Appendix.
Lemma 2 For any (x, p) ∈ XP_K,

∫_0^p G(y, x, v) dv / p = Pr[g(x, E) ≤ y | V ≤ p],

and the probability in the RHS is identified.
Lemma 2 implies (x, p) ∈ XP_K is also a sufficient condition for identification of the integrated LDSF, since (see also Heckman and Vytlacil, 2007):

Pr[Y ≤ y | X = x, Z = z, H = h] = ∫_{µ_{h−1}(z)}^{µ_h(z)} G(y, x, v) dv / (µ_h(z) − µ_{h−1}(z)).

We need additional assumptions to obtain counterfactual distributions. In the models with continuous censoring we hold the value of the control function constant and change the lowest value at which the individual is participating (see (6)). We cannot follow the same strategy here as V is not point identified. However, from the values of H and Z we know that the value of V is between µ_{H−1}(Z) and µ_H(Z).
This implies that individuals with H = 1 have the lowest values of V. Therefore, if we increase µ_0(Z), while leaving V unchanged, some individuals with H = 1 would no longer participate although we do not know who. Hence, we integrate over the distribution of V and change the range of integration accordingly. We show in the appendix that:

G^s_Y(y | z) = Σ_{h=1}^K ∫_{µ_{h−1}(z)}^{µ_h(z)} G(y, x, v) dv / (1 − µ_0(z)),   (12)

where µ_K(z) := 1 for any z. This equation is comparable to equation (7.2) of Heckman and Vytlacil (2007). Based on (12), the counterfactual distribution when G is as in group t, F_Z is as in group k, and the selection rule is as in group r is:

G^s_{Y,t,k,r}(y) = ∫ Σ_{h=1}^K ∫_{max{µ^k_{h−1}(z), µ^r_0(z)}}^{max{µ^k_h(z), µ^r_0(z)}} G_t(y, x, v) dv dF_{Z_k}(z) / ∫ [1 − max{µ^k_0(z), µ^r_0(z)}] dF_{Z_k}(z).   (13)

The decompositions are identical to (8). The identification restrictions are related to the integrals in (13). The inner integrals can be written as differences of terms of the form ∫_0^{µ^k_h(z)} G_t(y, x, v) dv. For these terms to be identified for any h, we need that XP^k_K ⊆ XP^t_K. For the identification of the integrals involving µ^r_0, a similar argument gives (XP^r_K ∩ XP^k_K) ⊆ XP^t_K. We also need that Z_k ⊆ Z_r otherwise µ^r_0(z) is not identified. The identification restrictions imply that, for example, to identify G^s_{Y,1,1,0}, one needs that XP^0_K ⊆ XP^1_K and Z_1 ⊆ Z_0. The interpretation of these restrictions not only depends on the employment rates between year 0 and 1 but also on whether the propensity scores in year 1 overlap those of year 0.
Despite these requirements, there is a benefit of using an ordered rather than dichotomous selection rule. In the latter case, the restriction for G^s_{Y,1,1,0} would have been that the support of the propensity scores of employment for year 1, µ^1_0(Z), should overlap with those of year 0, µ^0_0(Z). For ordered selection it is only necessary that one of the propensity scores in year 1, i.e. µ^1_h(Z), h = 1, ..., K, overlaps with the propensity score of employment for year 0, i.e. µ^0_0(Z).
Comparison with the Heckman selection model
The decomposition (8) can also be implemented with a parametric model. To illustrate the difference with MR, suppose that the population model in period t is the following parametric version of the HSM:

Y_t = X_t'β_t + E_t, observed if D_t = 1,   (14)
D_t = 1{Z_t'γ_t + V_t > 0},   (15)

where the first element of X_t is the constant term, and E_t and V_t are distributed independently of (X_t, Z_t) as bivariate normal with zero means, variances σ²_{E_t} and σ²_{V_t}, and correlation coefficient ρ_t. The counterfactual mean of Y for the selected population when the LASF is as in group t, F_{Z,V} is as in group k, and the selection rule is as in group r, is:

µ^s_{Y,t,k,r} = E[µ_t(X_k, V_k) | D_r = 1],   (16)

where:

µ_t(x, v) = E[Y_t | X_t = x, V_t = v]

denotes the LASF in group t. The observed mean of Y_t in the selected population, integrating over Z_t, is:

µ^s_{Y,t,t,t} = E[X_t | D_t = 1]'β_t + ρ_t σ_{E_t} E[λ(Z_t'γ_t/σ_{V_t}) | D_t = 1],   (17)

where λ denotes the inverse Mills ratio. We decompose the difference µ^s_{Y,1,1,1} − µ^s_{Y,0,0,0} between two time periods, t = 0 and t = 1, into selection, composition and structural effects.
MR define the selection effect as the difference between the terms involving the inverse Mills ratio in the two periods:

ρ_1σ_{E_1} E[λ(Z_1'γ_1/σ_{V_1}) | D_1 = 1] − ρ_0σ_{E_0} E[λ(Z_0'γ_0/σ_{V_0}) | D_0 = 1].   (18)

This comprises four elements, obtained by sequentially changing the selection rule, the distribution of characteristics, and the sorting parameters, where the counterfactual probability of selection in group k when the selection rule is as in group r is E[Φ(Z_k'γ_r/σ_{V_r})] and Φ denotes the standard normal CDF. We now present the selection, composition and structural effects for our decomposition. Plugging the expression for µ_t(x, v) into (16) gives, after some straightforward calculations:

µ^s_{Y,t,k,r} = E[X_k | D_r = 1]'β_t + ρ_tσ_{E_t} E[λ(Z_k'γ_r/σ_{V_r}) | D_r = 1].

Our selection effect is:

µ^s_{Y,1,1,1} − µ^s_{Y,1,1,0} = (E[X_1 | D_1 = 1] − E[X_1 | D_0 = 1])'β_1 + ρ_1σ_{E_1}(E[λ(Z_1'γ_1/σ_{V_1}) | D_1 = 1] − E[λ(Z_1'γ_0/σ_{V_0}) | D_0 = 1]).   (19)

The first element on the right-hand-side of (19) is the effect on the average wage from changes in the distribution of observable characteristics of the selected population, holding the population distribution constant, resulting from applying the selection equation from period 0 to period 1. It is positive if those entering the selected population have characteristics associated with higher wages. This element is missing in the selection effect in (18). The second element is the corresponding effect for the unobservable characteristics and corresponds to the first in (18).
Our composition effect is:

µ^s_{Y,1,1,0} − µ^s_{Y,1,0,0} = (E[X_1 | D_0 = 1] − E[X_0 | D_0 = 1])'β_1 + ρ_1σ_{E_1}(E[λ(Z_1'γ_0/σ_{V_0}) | D_0 = 1] − E[λ(Z_0'γ_0/σ_{V_0}) | D_0 = 1]).   (20)

The first element on the right-hand side of (20) is the change in the average wage resulting directly from changes over time in the distribution of the observable characteristics while the second element is the same as the second term in (18).
Finally, our structural effect is:

µ^s_{Y,1,0,0} − µ^s_{Y,0,0,0} = E[X_0 | D_0 = 1]'(β_1 − β_0) + (ρ_1σ_{E_1} − ρ_0σ_{E_0}) E[λ(Z_0'γ_0/σ_{V_0}) | D_0 = 1].   (21)

The first element on the right-hand side of (21) reflects the impact of changes over time in the returns to observable characteristics while the second captures the type and degree of sorting and is the same as the sum of the third and fourth elements in (18). As the expectation involving the inverse Mills ratio is positive, its contribution is positive when ρ_1σ_{E_1} > ρ_0σ_{E_0}.
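The role of ρ_tσ_{E_t} as the coefficient on the inverse Mills ratio can be checked in a small simulation of the parametric HSM. For brevity the sketch plugs in the true first-stage index Z'γ (in practice γ is estimated by probit), and all parameter values are invented.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Parametric HSM: Y = X*beta + E observed when Z'gamma + V > 0,
# with (E, V) bivariate normal, corr rho, sd(E) = sigma_E, sd(V) = 1.
n = 200000
X = rng.normal(size=n)
Z = np.column_stack([X, rng.normal(size=n)])   # second column excluded from wages
beta, gamma = 1.0, np.array([0.5, 1.0])
rho, sigma_E = 0.6, 0.8
V = rng.normal(size=n)
E = sigma_E * (rho * V + np.sqrt(1.0 - rho**2) * rng.normal(size=n))
D = (Z @ gamma + V) > 0
Y = beta * X + E

# Second step: OLS of Y on a constant, X and lambda(Z'gamma) over the
# selected sample; the lambda coefficient estimates rho * sigma_E.
idx = Z @ gamma
lam = norm.pdf(idx) / norm.cdf(idx)            # inverse Mills ratio
W = np.column_stack([np.ones(D.sum()), X[D], lam[D]])
coef, *_ = np.linalg.lstsq(W, Y[D], rcond=None)
```

Here ρσ_E = 0.48, and a change in the sign of the estimated coefficient over time is exactly the change in sorting that the comparison with MR turns on.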
Finally, consider a simple example illustrating that the two elements of the structural effect cannot generally be identified in nonseparable models. Consider a multiplicative version of the parametric HSM obtained by replacing (14) with:

Y_t = X_t'β_t + α_t E_t,

and weaken the parametric assumption on the joint distribution of E_t and V_t by only assuming E[E_t | V_t] = ρ_t V_t. α_t and ρ_t cannot be separately identified from the moment condition:

E[Y_t | Z_t, D_t = 1] = X_t'β_t + α_tρ_t E[V_t | Z_t, D_t = 1].

Empirical results
Hours equation
We start by describing the variables included in Z using the ASEC. Following MR, we include six indicator variables for the highest educational attainment reported.
We include indicator variables for Black and Hispanic and four indicator variables
for regions: northeast, midwest, south and west. Finally, we use linear terms for the number of children aged less than 5 years interacted with the indicator variables for marital status. For the MORG we use 5 levels of education as the two lower categories in the ASEC are merged. The variables Black, Hispanic, experience and region are the same. Only one indicator for marital status is used (married or not) and we employ household size, and its interaction with marital status, as the only household characteristic.
With the exception of the household size and composition variables, all of the conditioning variables appear as both determinants of annual hours and hourly wages. While one might argue that household size and composition may affect hourly wage rates, we regard these exclusion restrictions as reasonable and note that similar restrictions have been previously employed (see, for example, MR). However, given their potentially contentious use we explore the impact of not using them below.
The assumption that annual hours of work do not affect the hourly wage rate means that the variation in hours across individuals is a source of identification.
Although our primary focus is the wage decomposition, we highlight the main features of the estimated hours equations. We employ the methodology described in MR for education and potential experience, and only report the result of the latter here; see Figure 5. The selected points in the hours distribution are 0, 1000 and 2000. We find that many of the individual characteristics have an impact on the level of annual hours worked. This is not particularly surprising given the large literature on labor supply documenting the roles of education and marital status on labor market participation. Perhaps what is more surprising is that the magnitude of the impact of these variables does not appear to change substantially over the sample period in either the ASEC or MORG data. The exception is with respect to the exclusion restrictions which became less important over time. This is consistent with Card and Hyslop (2021). Note that the level of education has drastically increased over the sample period and this has had a substantial impact on the hours distribution.
We also estimated models for annual weeks of work using distribution regression and ordered models for annual weeks and annual hours using the ASEC.
Decompositions
The wage equations are estimated for each year by distribution regression over the subsample with positive hourly wages. The conditioning variables are those in the hours equation with the exception of the household size and composition variables.
We also include the appropriate control function, its square, and interactions between the control function and the conditioning variables.

We start with annual hours as the censoring mechanism and Figure 6A presents the decomposition for the mean, which increases by 25% over our sample period.
The total effect is driven by the composition effect although in several instances the structural effect is contributing. It is generally negative and small relative to the composition effect. The contribution of the selection component is negative and small. A negative selection component implies that females are positively selected into employment and those who entered employment between 1975 and 2020 were less productive than those already employed. Annual hours is an economically attractive censoring mechanism as it exploits the variation in annual hours induced both by hours and by weeks. However, it is possible that selection operates either through hours or weeks exclusively. We first address this issue by replacing annual hours with annual weeks as the selection mechanism. The results from these decompositions using the ASEC are in Figures 7A-7F. Their primary feature is their similarity to those for annual hours. This suggests that the control function from the annual hours censoring mechanism is highly correlated with that from annual weeks despite the differences in their respective distributions.
Now consider the decompositions for the MORG, recalling that wages are measured differently than in the ASEC and the hours measure is based on the survey week. We implement our censored selection estimator using hours in the survey week as the censoring mechanism, noting that only a subset of the exclusion restrictions is available.
Results of the double selection model
Our results from Section 4.2 seem robust to the use of either hours or weeks as the selection variable in the censored selection model. Figures 11A-11C report the decomposition for the double selection mechanism. There continues to be no evidence of selection above the median so we report the decompositions for D1, Q1 and the median.
While there are some differences in these figures compared to those for selection using only annual hours or annual weeks, they are relatively small. These results suggest that the unobservables which increase participation on any margin, such as usual weeks, usual hours, and hours in the survey week, are all highly correlated.

We first explore the term ρ t σ Et in (17). We estimate the HSM using the MR sample and exclusion restrictions to obtain the results in Figure E. This contrasts with MR. The three obvious causes are the use of the normality assumption in the HSM, the identifying power introduced through hours as a censoring variable in the selection equation, and the nonseparable nature of our model.
Results of the ordered selection model
To address these issues we first estimate the model using a parametric approach which relies on normality but which exploits the variation in hours for identification purposes. We employ the Vella (1993) estimator based on the Tobit generalized residual. To correspond more closely to the HSM we divide the generalized residual by the estimated standard deviation of working hours, σ Vt. (Hirsch (2005) provides empirical evidence supporting this assumption.) The only difference with the HSM is the use of the Tobit generalized residual rather than the inverse Mills ratio.
We plot the corresponding coefficient on the Tobit generalized residual in Figure 9-B. The coefficient on the Tobit generalized residual also estimates ρ t σ Et .
Two striking features are revealed in Figure 9-B. First, under normality the estimates of ρ t σ Et and the coefficient on the generalized residual should be identical.
However, the estimates are very different and most importantly the coefficient estimate is always positive. As there is no reason that departures from normality will bias the estimates of ρ t σ Et and the coefficient for the Tobit generalized residual in the same manner one could interpret the difference in the estimates as evidence of non-normality. However, recall that the Tobit generalized residual also exploits variation in the hours variable for identification purposes and this could contribute to the difference in the signs and the behavior of the two coefficients. Second, the pattern of movement in the coefficient on the generalized residual is almost identical to the average derivative of our control function despite the drastically different ways in which each is computed. The two procedures are very different but each exploits the variation in hours as a means of identification.
While it appears that the use of the variation in hours as the source of identification is the cause of the differences with MR, it is possible that the departures from normality may also be responsible. The final approach we explore is the use of the propensity score as the control function, noting we allow it to enter the wage equation in a nonseparable manner (see, for example, Newey, 2007). The propensity score employs the exclusion restrictions as the sole source of identification. We estimated the model and computed the average derivative of wages with respect to the propensity score. The results are presented in Figure 9-C. This derivative also changes sign as we move through the sample period and shows behavior similar to ρ t . We conclude that the differences in terms of the relationship between E t and V t between our results and MR are due to the use of the variation in hours, which appears to identify a different pattern of sorting. To see why, consider two working individuals with identical observed characteristics but different hours of work H: the individual working more hours has a relatively larger positive value of E t and will have relatively higher hours and wages. In this setting the value of the inverse Mills ratio for both individuals will be the same, while the Tobit generalized residual of the individual with the higher value of H will be greater than that of the other. This suggests that the Tobit generalized residual is capturing information regarding "sorting" into hours which is ignored by the inverse Mills ratio. Moreover, the inverse Mills ratio, unlike the Tobit generalized residual, is unable to explain the variation in wages across these two individuals.
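To make this comparison concrete, here is a minimal numerical sketch (with made-up index values and hours, not estimates from the paper) contrasting the inverse Mills ratio with the Tobit generalized residual for two workers who share the same covariates but work different hours:

```python
import numpy as np
from scipy.stats import norm

def inverse_mills(index):
    # Inverse Mills ratio phi(index)/Phi(index) from a probit participation index
    return norm.pdf(index) / norm.cdf(index)

def tobit_generalized_residual(h, mean, sigma):
    # Tobit generalized residual E[eps | data] for h = max(0, mean + sigma * eps):
    # the standardized residual for workers (h > 0), and the mean of a truncated
    # normal for non-workers (h = 0)
    h = np.asarray(h, dtype=float)
    idx = mean / sigma
    return np.where(h > 0,
                    (h - mean) / sigma,
                    -norm.pdf(idx) / (1.0 - norm.cdf(idx)))

# Two workers with identical covariates (same latent mean) but different hours
mean, sigma = 1500.0, 500.0
hours = np.array([1000.0, 2000.0])
mills = inverse_mills(np.full(2, mean / sigma))
gres = tobit_generalized_residual(hours, mean, sigma)
# mills[0] == mills[1]: the inverse Mills ratio ignores the variation in hours
# gres == [-1.0, 1.0]: the Tobit residual ranks the high-hours worker higher
```

The two control functions coincide in how they treat the participation margin but differ in whether they exploit the intensive margin, which is the distinction drawn in the text.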
It is important to explore why ρ t might change sign for the models identified solely by exclusion restrictions. A negative ρ t implies that the working individuals with the lowest probabilities of participation should have the lowest observed wages among individuals with the same observed characteristics relevant for the wages, X. The reverse is true for a positive ρ t . We explore this by estimating a wage regression identical to the second stage of MR while replacing the inverse Mills ratio by a dummy variable for a child below the age of 5 years. The impact of having a "young child" was negative until 1982 at which time it turned, and remained, positive. This corresponded to a period, also reported by Card and Hyslop (2021), in which the magnitude of the negative impact of a "young child" on the employment decision decreased. While we acknowledge the presence of other ongoing factors this change in the impact of "young child" could generate a change in the sign of ρ t . For example, in the absence of other influences, the large positive influence of "young child" on the value of the inverse Mills ratio combined with negative correlation between "young child" and wages would produce a negative value of ρ t . In contrast a decreasing effect of "young child" on participation would produce a smaller value for the inverse Mills ratio and that, combined with the positive correlation between "young child" and wages, would produce a positive ρ t .
We highlight that we consider the above discussion as suggestive rather than conclusive. Our objective is to consider the possible causes of the differences in the results from the use of the two control functions. The evidence suggests that both mechanisms play a part over the sample period.

We provide the decompositions of changes in inequality using annual hours as the censored selection variable for the ASEC data, hours in the survey week for the MORG, and annual weeks as the ordered selection variable for the ASEC. For each of these models and selection rules we decompose the interquartile and interdecile ratios. Those for annual hours using the ASEC are reported in Figure 13 and those for the MORG in Figure 14. The interquartile ratio is driven by each of the components. Neither the composition nor the structural effect dominates throughout the sample period. The selection effect contributes throughout the period and clearly increases inequality. The interdecile ratio is driven primarily by the structural effect, especially during the drastic increase at the beginning of the sample period.
The selection effect is clearly important and frequently more important than the composition effect. For the MORG the conclusions regarding the structural and composition effects are similar to those for the ASEC while the selection effects are slightly smaller. This reflects the smaller selection effects at lower quantiles in the wage decompositions (as presented in Figures 6 and 8). The evidence for both data sets supports the view that selection has a modest but important impact on wage inequality that varies in magnitude over the sample period. As the wage decompositions based on the ordered selection rule suggested that selection was more important than in the censored selection models (see Figure 12), we now examine whether this carries over to the inequality decompositions. We do not report the results but note that the evidence is similar to that for the censored selection rule.
Our results indicate that as an increasing number of females have entered the labor market they have reduced wages at the lower parts of the wage distribution while having no impact on wages above Q2. This increases measures of inequality based on ratios involving lower and upper quantiles. Potentially, selection increases inequality because either the observed or the unobserved characteristics of those participating have changed over time. However, an examination of education levels, for example, suggests that observed characteristics have played a minor role. In particular, we find that those with education higher than high school degrees were more likely to participate over the whole sample period and that this did not change over time. This suggests that our results reflect changing unobserved characteristics. The selection effect captures the difference between the observed wage distribution and the counterfactual in which women participated as in 1975. Our decomposition method presented in (5) imposes that this difference captures the exit of females with lower levels of the control function. Figure E.5 reveals a strong and positive relationship between wages and the control function, suggesting that selection effects reflect that women entering the labor market were less productive than observationally identical women participating in 1975.
Our evidence of positive sample selection over the whole period implies that the decision to work is largely based on economic motivations. However, as employment rates have increased, there has been a reduction in sorting on economic grounds. This is consistent with the explanation provided by AB for the U.K. labor market. It is also consistent with the results above that the conventional household background and family characteristics have become less important in explaining participation and hours worked. The reduction in positive sorting describes the changes in the hours distribution from the mid 1980s to the end of the 1990s. Blau et al. (2021) argue that the booming economy and welfare reform may have played an important role in the 1990s. Our collective evidence suggests that post 2000 there was an increase in positive selection. This supports the evidence in Blau et al. (2021). Towards the end of the sample period it appears that the impact of selection on inequality and, more generally, wages has returned to 1975 levels.
Conclusions
This paper documents the changes in female real wages over the period 1975 to 2020.
We decompose these changes into structural, composition and selection components by estimating a nonseparable model with selection. Female wage growth at lower quantiles is modest although the median wage has grown steadily. The increases at the upper quantiles for females are substantial and reflect increasing skill premia.
These changes have resulted in a substantial increase in female wage inequality. As our sample period is associated with large changes in the participation rates and the hours of work of females we explore the role of changes in "selection" in wage movements. We find that the impact of these changes is to decrease wage growth at the lower quantiles.

...for some h ∈ (0, ∞) such that P(T = 1) > 0. The estimator of the LASF is μ(x, v) = w(x, v)'β, where w(x, v) is a d_w-dimensional vector of transformations of (x, v) with good approximating properties, and β is the OLS estimator computed from the regressors W_i = w(X_i, V_i). The estimator of the LDSF is G(y, x, v) = Λ(w(x, v)'β(y)), where y ∈ R and β(y) is the logistic distribution regression estimator. Finally, in the third step we use (6) to estimate the counterfactual CDF (5), where the average is taken over the sample values of V_i and Z_i in group k, n^s_kr = Σ_{i=1}^n 1{V_i > 1 − π_r(Z_i)}, β_t(y) is the logistic distribution regression estimator for group t from the second step, and π_r(z) is the estimator of the propensity score of selection for group r from the first step. Given G^s_{Y_{t,k,r}}, we estimate the counterfactual QF (7) by

q^s_{Y_{t,k,r}}(τ) = ∫_0^∞ 1{G^s_{Y_{t,k,r}}(y) ≤ τ} dy − ∫_{−∞}^0 1{G^s_{Y_{t,k,r}}(y) > τ} dy.
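The second and third estimation steps described above (logistic distribution regression for the conditional CDF, followed by averaging and inverting the estimated CDF to obtain quantiles) can be illustrated on simulated data. The Newton solver, the ridge stabilization, the threshold grid, and the toy data-generating process below are our own choices, not the paper's:

```python
import numpy as np

def fit_logit(X, s, ridge=1e-4, iters=30):
    # Ridge-stabilized Newton-Raphson logit of the binary indicator s on X
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ beta, -30, 30)))
        grad = X.T @ (s - p) - ridge * beta
        hess = (X * (p * (1.0 - p))[:, None]).T @ X + ridge * np.eye(X.shape[1])
        beta += np.linalg.solve(hess, grad)
    return beta

def distribution_regression(X, y, grid):
    # One logit coefficient vector beta(t) per threshold t: G(t|x) = Lambda(x'beta(t))
    return [fit_logit(X, (y <= t).astype(float)) for t in grid]

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n)      # toy outcome standing in for log wages
X = np.column_stack([np.ones(n), x])
grid = np.linspace(-3.0, 5.0, 81)
betas = distribution_regression(X, y, grid)

# Average the fitted conditional CDFs over the sample, monotonize, and take a
# left-inverse of the resulting marginal CDF to read off quantiles
cdf = np.array([np.mean(1.0 / (1.0 + np.exp(-np.clip(X @ b, -30, 30)))) for b in betas])
cdf = np.maximum.accumulate(cdf)
median = grid[np.searchsorted(cdf, 0.5)]    # close to the true median of 1.0
```

The counterfactual versions in the paper additionally reweight this average by an estimated propensity score and control function; the sketch keeps only the distribution-regression and inversion mechanics.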
Following FVV, inference is based on the weighted bootstrap (Praestgaard and Wellner 1993). This method obtains the bootstrap version of the estimator of interest by repeating all the estimation steps including sampling weights drawn from a nonnegative distribution with mean and variance equal to one (e.g., standard exponential).
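A minimal sketch of the weighted bootstrap described above, applied to a simple weighted-median estimator on simulated data (the estimator and the sample are illustrative, not the paper's):

```python
import numpy as np

def weighted_median(y, w):
    # Weighted sample median: invert the weighted empirical CDF
    order = np.argsort(y)
    cw = np.cumsum(w[order])
    return y[order][np.searchsorted(cw, 0.5 * cw[-1])]

def weighted_bootstrap(y, estimator, B=200, seed=None):
    # Rerun the full estimation with i.i.d. standard-exponential weights,
    # whose mean and variance both equal one
    rng = np.random.default_rng(seed)
    return np.array([estimator(y, rng.exponential(1.0, size=len(y)))
                     for _ in range(B)])

rng = np.random.default_rng(1)
y = rng.normal(loc=2.0, size=500)
draws = weighted_bootstrap(y, weighted_median, B=200, seed=2)
se = draws.std()                        # bootstrap standard error of the median
ci = np.percentile(draws, [2.5, 97.5])  # percentile confidence interval
```

In the paper's application the estimator inside the loop would be the full three-step procedure (propensity score, distribution regression, counterfactual quantiles), with the weights entering every step.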
C Proof of Lemma 2

The first equality is by definition. The second equality uses V ∼ U(0, 1) and the third equality uses independence of (E, V) and Z. The final equality uses the definitions of Y and H and is identified because (x, p) ∈ X P_K. (12)

Adapting the representation of the distribution of the observed Y in Section 3.2 to the ordered selection rules yields

G^s_Y(y) = ∫_Z Σ_k ∫ G(y, x, v) 1{v > μ_0(z)} dv dF_Z(z)
Algorithmic Bias and Data Bias: Understanding the Relation between Distributionally Robust Optimization and Data Curation
Machine learning systems based on minimizing average error have been shown to perform inconsistently across notable subsets of the data, which is not exposed by a low average error for the entire dataset. In consequential social and economic applications, where data represent people, this can lead to discrimination of underrepresented gender and ethnic groups. Given the importance of bias mitigation in machine learning, the topic leads to contentious debates on how to ensure fairness in practice (data bias versus algorithmic bias). Distributionally Robust Optimization (DRO) seemingly addresses this problem by minimizing the worst expected risk across subpopulations. We establish theoretical results that clarify the relation between DRO and the optimization of the same loss averaged on an adequately weighted training dataset. The results cover finite and infinite numbers of training distributions, as well as convex and non-convex loss functions. We show that neither DRO nor curating the training set should be construed as a complete solution for bias mitigation: in the same way that there is no universally robust training set, there is no universal way to setup a DRO problem and ensure a socially acceptable set of results. We then leverage these insights to provide a minimal set of practical recommendations for addressing bias with DRO. Finally, we discuss ramifications of our results in other related applications of DRO, using an example of adversarial robustness. Our results show that there is merit to both the algorithm-focused and the data-focused side of the bias debate, as long as arguments in favor of these positions are precisely qualified and backed by relevant mathematics known today.
Introduction
Machine learning algorithms are increasingly used to support real-world decision-making processes. In that context, optimising for the loss averaged on the overall population can easily yield models that perform poorly on specific subpopulations, potentially amplifying the injustices that already plague our society [Datta et al., 2014, Chouldechova, 2017, Dastin, 2018, Rahmattalabi et al., 2019, Qian et al., 2020, Metz and Satariano, 2020, Angwin et al., 2020].
Whether such problems can be addressed by curating the training data leads to contentious debates. For instance, it is difficult and often painful to know whether it is sufficient to ensure that the relevant subpopulations are well represented in the training set, whether the structure of the statistical model must be revisited, or whether the whole system, its goals, and its methods, are fundamentally broken. We argue that there is merit to both the algorithm-focused and the data-focused side of the discussion, as long as arguments in favor of these positions are precisely qualified and backed by mathematics.

Figure 1: Illustration of the failings of Expected Risk Minimization (ERM) and the i.i.d. assumption in a linear binary classification problem, where the dataset can be partitioned into a majority and a minority subpopulation. ERM misclassifies a significant proportion of the minority subpopulation; DRO equalizes the performance across subpopulations. The DRO solution was obtained using the Lagrangian algorithm (Algorithm 1); both solutions use the same linear SVM model.
Distributionally Robust Optimization (DRO) [Ben-Tal et al., 2009] bridges two perspectives on this problem. On one hand, DRO seems to offer a promising solution because it minimizes the worst loss observed on multiple distributions such as those representing each subpopulation. On the other hand, it can be shown under weak conditions that DRO is closely related to minimizing the average loss on an adequate mixture of those distributions, that is, a training set in which the subpopulations have been adequately weighted. Our contributions are:

1. We establish results that clarify the relation between DRO and the optimization of the same loss averaged on an adequately weighted training set (see Section 2, summarizing the theoretical Appendix).
2. We also show that neither DRO nor curating the training set should be construed as a complete solution of our initial problem. In particular, each DRO formulation implicitly makes calibration assumptions on the losses measured on various subpopulations. Making them explicit brings back the contentious issues (see Section 3).
3. We leverage this mathematical understanding to provide a minimal set of practical recommendations to approach such difficult problems. Although what is acceptable or not in real-world applications is not a part of the mathematical problem, elementary mathematics tells us that we cannot obtain an acceptable result with DRO if we are unable to obtain an acceptable result with systems specifically optimized for each subpopulation.
DRO versus data curation
Traditionally, training a model in machine learning seeks parameters, say, the weights w of a neural network, that minimize a risk C_P(w) = E_{z∼P}[ℓ(z, w)], the expectation of a loss function ℓ with respect to a single distribution P of training examples.
Alas, even when the training distribution is representative of the actual testing conditions, the trained system might perform very poorly on selected subsets of examples. For instance, Figure 1 describes a training problem where a majority population and a minority population have different classification boundaries. Minimizing the expected loss over the full dataset (bottom left plot) yields a system whose performance is skewed towards the majority population at the expense of the performance in the minority population. In real life, this can be a source of major injustice. Algorithms that optimize for minimum average error yield models that perform poorly on subpopulations that are already at risk due to pre-existing biases. This is most pronounced when ERM (based on minimising the average prediction error) produces solutions that privilege the majority populations over the minorities. This was shown to be consequential in scenarios such as court verdicts, loan applications, hiring and healthcare interventions [Datta et al., 2014, Chouldechova, 2017, Dastin, 2018, Rahmattalabi et al., 2019, Qian et al., 2020, Metz and Satariano, 2020, Angwin et al., 2020]. For example, hiring systems and ad-targeting algorithms based on minimising average error were found to discriminate against female users by more frequently proposing executive and technical jobs to men [Datta et al., 2014, Dastin, 2018].
Distributionally Robust Optimization (DRO) seemingly addresses this problem by considering instead a collection Q of 'training distributions' and minimizing the expected risk observed on the most adverse distribution:

min_w max_{P ∈ Q} C_P(w).    (1)

For instance, we can substantially reduce the performance differences reported in Figure 1 (bottom left plot). Using DRO with a set Q of two distributions, representing the majority and minority populations, leads to the decision boundary illustrated in Figure 1 (bottom right plot). This classifier does not reproduce the pre-existing imbalance between the majority and the minority subpopulation. Instead, the performance is equalized, and the average accuracy increases from 77.5% to 82.5%.
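The equalizing behavior of DRO can be reproduced on a stylized one-parameter problem. The two quadratic group risks and the 90/10 population mixture below are hypothetical stand-ins for the per-group risks behind Figure 1:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Stylized per-group risks C_P(w) for a one-parameter model:
# the majority prefers w = 1, the minority prefers w = -1
C_maj = lambda w: (w - 1.0) ** 2
C_min = lambda w: (w + 1.0) ** 2

# ERM on the pooled data (90% majority) vs. DRO on the worst group
erm = minimize_scalar(lambda w: 0.9 * C_maj(w) + 0.1 * C_min(w)).x
dro = minimize_scalar(lambda w: max(C_maj(w), C_min(w))).x

# ERM skews toward the majority: erm ≈ 0.8, so C_min(erm) ≈ 3.24 vs C_maj(erm) ≈ 0.04
# DRO equalizes: dro ≈ 0.0, where C_maj(dro) == C_min(dro) == 1.0
```

The ERM solution mirrors the bottom-left plot of Figure 1 (good majority performance, poor minority performance), while the DRO solution mirrors the bottom-right plot.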
As discussed more precisely in Section 3, the approach of using the basic definition of DRO (1) without additional information on the subpopulations is insufficient in practical applications. However, we argue that DRO remains an interesting building block because it provides a bridge between two common approaches to this problem, namely, (i) ensuring that the trained system has consistent performance across subpopulations, and (ii) curating the training set by remixing the populations until obtaining a more palatable result.
One of the contributions of our work is an ensemble of mathematical results that clarify the relation between finding a local minimum of the DRO problem (1) and (2) on one hand, and minimizing the usual expected risk with respect to a single, well crafted, training distribution. We hope these results will be useful for both the data-focused and the algorithm-focused sides of the bias debate in machine learning community.
For convex cost functions, these results are well known, because one can reformulate the DRO problem as a constrained optimization problem by introducing a slack variable L, and relying on standard convex duality results [Bertsekas, 2009] (see Section A.4. in the Appendix).
The point of our theoretical contribution is that similar results hold for the local minima of the nonconvex costs typical of modern deep learning systems, and also hold when the family Q is infinite. We now summarize the main results (the elaboration and proofs of these propositions are in the Appendix).
Let ℓ(z, w) be the loss of a machine learning model where w ∈ R^d represents the parameters of the model and z ∈ R^n belongs to the space of examples. For instance, in least squares regression, the examples z are pairs (x, y) and the loss is ℓ(z, w) = ‖y − f_w(x)‖². When the collection Q is finite, under weak regularity assumptions, a DRO local minimum is always a stationary point of the expectation of the loss with respect to a suitable mixture of the DRO training distributions. Theorem 1 states this fact for a finite collection Q of distributions:

Theorem 1 (Finite case). Let Q = {P_1, . . . , P_K} be a finite set of probability distributions on R^n and let w* be a local minimum of the DRO problem (1) or the calibrated DRO problem (2). Let the costs C_P(w) = E_{z∼P}[ℓ(z, w)] be differentiable in w* for all P ∈ Q. Then there exists a mixture distribution P_mix = Σ_k λ_k P_k such that ∇C_{P_mix}(w*) = 0.
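Theorem 1 can be checked numerically on a convex toy example of our own: locate the DRO minimum of the pointwise maximum of two smooth costs, then recover a mixture weight λ for which the mixture gradient vanishes there:

```python
from scipy.optimize import minimize_scalar

# Two smooth per-group costs and their gradients (illustrative quadratics)
C1, g1 = lambda w: (w - 1.0) ** 2, lambda w: 2.0 * (w - 1.0)
C2, g2 = lambda w: 2.0 * (w + 1.0) ** 2, lambda w: 4.0 * (w + 1.0)

# DRO local minimum of max{C1, C2}; here it lies where the two costs cross
w_star = minimize_scalar(lambda w: max(C1(w), C2(w))).x

# Solve lam * g1(w*) + (1 - lam) * g2(w*) = 0 for the mixture weight.
# By construction the mixture gradient vanishes; the substantive check is
# that lam lies in (0, 1), i.e., P_mix is a genuine mixture of P_1 and P_2.
lam = g2(w_star) / (g2(w_star) - g1(w_star))
mix_grad = lam * g1(w_star) + (1.0 - lam) * g2(w_star)
```

Because the costs are convex here, the stationary point of the mixture is also its global minimum, matching the classical duality discussed below.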
When the collection Q is infinite (possibly uncountably infinite) but satisfies a tightness condition (see Definition 1 in the Appendix), we can still show that a DRO local minimum is a stationary point for a well crafted training distribution. However, this training distribution is not necessarily a finite or countable mixture of the distributions found in Q but is always found in the weak closure of the convex hull of Q. Adversarial robustness is an example of applying DRO on an infinite family of distributions (see a discussion of the implications of Theorem 2 in Section 5).
Theorem 2 (Infinite case). Let Q be a tight family of probability distributions on R^n. Let w* be a local minimum of problem (2). Let Q_mix be the weak convergence closure of the convex hull of Q. Let there be a bounded continuous function h(z, w) defined on a neighborhood V of w* such that ∇_w C_P(w) = E_{z∼P}[h(z, w)] for all P ∈ Q and all w ∈ V. Then there exists a distribution P_mix ∈ Q_mix such that ∇C_{P_mix}(w*) = 0.

Conversely, we consider a local minimum of the expectation of the loss with respect to an arbitrary mixture of distributions from Q. Such a local minimum always is a local minimum of a calibrated DRO problem where one introduces suitable calibration constants r_P that control how we compare the costs for different distributions:

min_w max_{P ∈ Q} { C_P(w) − r_P }.    (2)

We shall see in Section 3 that such calibration coefficients are in fact needed to express the subtleties of the original problem of learning from multiple subpopulations with DRO.

Theorem 3 (Converse). Let P_mix = Σ_k λ_k P_k be an arbitrary mixture of distributions P_k ∈ Q. If w* is a local minimum of C_{P_mix}, then w* is a local minimum of the calibrated DRO problem (2) with calibration coefficients r_P = C_P(w*).
Note that there is a discrepancy between these two theorems. Theorem 3 says that a local minimum of an expected risk mixture is a DRO local minimum, but Theorem 1 only says that a DRO local minimum is a stationary point (that is, a point with null derivative) of an expected risk mixture.
This distinction is moot when ℓ(z, w) is convex in w because all stationary points are not only local minima, but also global minima. When this is the case, Theorems 1 and 3 describe the same equivalence as the standard convex duality results. In contrast, when the cost functions are nonconvex, the stationary point described in Theorem 1 need not even be a local minimum of the expected cost mixture. For instance, if the DRO local minimum is achieved in a region where all C_P(w) have a negative curvature, then any mixture of these costs also has a negative curvature, and, as a result, the stationary point can only be a local maximum. Figure 2 shows how such a situation can arise in theory.
However, the learning algorithms that are typically used to train overparametrized deep learning models empirically follow trajectories where the Hessian is very flat apart from a few positive eigenvalues [Sagun et al., 2018]. Weak negative curvature directions always exist, even when the algorithm stops making progress, but they are very weak. In order to understand whether situations like Figure 2 happen in practice, it makes sense to now take a closer look at the algorithms commonly used to implement DRO.
In the convex case, it is known that increasing the weight of a distribution in the mixture is equivalent to reducing the corresponding calibration coefficient. This observation leads to a plethora of saddlepoint seeking algorithms such as Uzawa iterations [Arrow et al., 1958]. See Algorithm 1 for a representative example. Because such algorithms are reliable and can be made efficient, many authors advocate using similar strategies for nonconvex deep learning systems (e.g., [Sagawa et al., 2020]). Section A.4. in the Appendix shows how our theoretical results offer support for this practice. It also shows that such an algorithm fails to find DRO minima when the associated stationary point of the expected cost mixture is not itself a local minimum. For practical purposes, this means that there is no substantial difference between using such an efficient DRO algorithm and minimizing a well-crafted expected risk mixture.
Algorithm 1: Lagrangian calibrated DRO
Input: Equally sized training sets D_k for k = 1 . . . K
Input: Calibration coefficients r_k. Initial weights w_0.
Input: Temperature β. Stopping threshold ε.
Output: A sequence of weights w_t.
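The kind of Uzawa-style saddle-point iteration that Algorithm 1 describes can be sketched as follows. The update rule below is a standard exponentiated-gradient sketch rather than the paper's exact Algorithm 1, and the toy risks, step size, and temperature are our own choices:

```python
import numpy as np

def lagrangian_dro(costs, grads, r, w0=0.0, beta=1.0, lr=0.05, steps=1000):
    # Alternate exponentiated updates of the mixture weights lambda, driven by
    # the calibrated costs C_k(w) - r_k, with gradient steps on w against the
    # lambda-weighted mixture risk
    lam = np.full(len(costs), 1.0 / len(costs))
    w = w0
    for _ in range(steps):
        c = np.array([C(w) for C in costs])
        lam = lam * np.exp(beta * lr * (c - r))   # up-weight the worst calibrated groups
        lam = lam / lam.sum()
        w = w - lr * sum(l * g(w) for l, g in zip(lam, grads))
    return w, lam

# Toy per-group risks: group 1 prefers w = 1, group 2 prefers w = -1
costs = [lambda w: (w - 1.0) ** 2, lambda w: (w + 1.0) ** 2]
grads = [lambda w: 2.0 * (w - 1.0), lambda w: 2.0 * (w + 1.0)]

# Equal calibration constants: the iterates approach the minimax point w = 0
w_eq, lam_eq = lagrangian_dro(costs, grads, r=np.zeros(2), w0=0.5)
# Tolerating a higher cost on group 2 (r_2 = 2) shifts the solution toward group 1
w_cal, _ = lagrangian_dro(costs, grads, r=np.array([0.0, 2.0]), w0=0.5)
```

On this convex example the iterates spiral into the saddle point; in the nonconvex case discussed above, the same iteration can only find DRO minima whose associated mixture stationary point is itself a local minimum.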
Calibration problems
As promised earlier, we now return to the statement of the simple DRO problem (1) and discuss how it fails to properly account for many subtleties of the bias fighting problem.
One set of issues is described in the algorithmic bias literature [Louizos et al., 2016, Beutel et al., 2017, Hashimoto et al., 2018b, Amini et al., 2019]. Representation disparity refers to the phenomenon of achieving a high overall accuracy but low minority accuracy. For instance, ubiquitous speech recognition systems, such as voice assistants, struggle with accents and dialects [Behravan et al., 2016, Najafian and Russell, 2020]. A minority user becomes discouraged by the poor performance of such a system, which leads to disparity amplification over time due to the increasing gap between the quantity of data provided by active users (majority groups, favored by the system from the beginning) and groups that experienced poor performance due to the initial representation disparity. This shows that it is sometimes justified to augment the DRO statement with means to favor certain subpopulations in order to account for representation disparity or disparity amplification. A related development is the method proposed by Sagawa et al. [Sagawa et al., 2020]: in order to account for the potentially small size of the training data for some distributions P, they augment the cost C_P(w) with a penalty that decreases when the number of training examples for this distribution increases.

Figure 2: Consider the two real functions C_1(w) = tanh(1 + w) − w² and C_2(w) = tanh(1 − w) − w². The minimum w* = 0 of max{C_1(w), C_2(w)} is a stationary point of the mixture cost C_mix(w) = ½ C_1(w) + ½ C_2(w). However, because it is achieved in negative curvature regions of C_1 and C_2, this stationary point is not a local minimum but a local maximum of the mixture cost. In practice, overparametrized deep learning models are unlikely to meet such a situation because the Hessian along the learning trajectories tends to be essentially flat apart from a few positive eigenvalues [Sagun et al., 2018].
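The mechanism behind the Figure 2 example can be verified numerically. The printed formulas appear garbled in this copy, so the code assumes the reconstructed costs C1(w) = tanh(1 + w) − w² and C2(w) = tanh(1 − w) − w², which have the negative curvature at w = 0 that the caption requires:

```python
import numpy as np

# Assumed forms of the Figure 2 costs (a reconstruction, not the printed text)
C1 = lambda w: np.tanh(1.0 + w) - w ** 2
C2 = lambda w: np.tanh(1.0 - w) - w ** 2

w = np.linspace(-0.15, 0.15, 1001)   # neighborhood of the candidate point w* = 0
C_max = np.maximum(C1(w), C2(w))     # DRO objective
C_mix = 0.5 * C1(w) + 0.5 * C2(w)    # mixture objective

w_dro = w[np.argmin(C_max)]   # w* = 0 minimizes the pointwise max locally...
w_mix = w[np.argmax(C_mix)]   # ...yet the same point maximizes the mixture cost
```

Both grid searches return the same point w = 0, confirming that a DRO local minimum can sit at a local maximum of the corresponding mixture when all costs curve downward there.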
There might also be instances where it is justifiable to account for the difference in difficulty across distributions. For instance, it might be known that one of the training distribution represents examples collected with a deficient method, such as, bad cameras, bad conditions, etc. Because the task is more difficult due to data limitations, the cost C P (w) for such a distribution will systematically be higher than for other distributions. The simple DRO formulation (1) then amounts to optimizing only for this distribution. As a consequence, small gains for the deficient distribution will be obtained at the expense of a massive performance degradation for all other distributions, essentially making it as bad as the performance for the deficient distribution. It might then be necessary to reduce the weight of this distribution in order to prevent it from dominating the DRO problem. In other words, the ideal solution to such a problem is to address the deficiencies of the data collection method.
Both issues (the need to account for pre-existing difficulties related to modelling certain subpopulations and the equalizing effect of DRO) can be addressed by augmenting the DRO problem with calibration constants that make such adjustments explicit. This leads to the calibrated DRO problem (2) that we have already introduced when discussing Theorem 3. Specifying a set of calibration constants amounts to describing what we consider to be an acceptable outcome for the original bias fighting problem. What is acceptable or not is obviously problem dependent and can be the object of difficult controversies. Making the calibration constants explicit separates the mathematical optimization statement from the difficult task of deciding what results are acceptable in a real-world problem.
A consequence of the mathematical theory is the practical duality between calibration constants r_P and mixture coefficients λ_k. Theorem 1 says that a DRO local minimum for a particular choice of calibration constants is a stationary point of the expected loss for a particular mixture of the original distributions. Conversely, Theorem 3 says that a local minimum of any particular expected risk mixture is also a DRO local minimum for a particular set of calibration constants.

The calibration constants r_P might in fact be a better way than mixture coefficients λ_k to specify which performance discrepancies are considered acceptable across subpopulations because there are useful reference points for choosing them. The first reference point is to use equal calibration constants. DRO then optimizes the performance of the most adverse subpopulation at the cost of potentially degrading the performance of the system for the remaining subpopulations. See Figure 1 for an example. Another approach is to use the calibration constants r*_P that represent the best performance we can reach with our machine learning model on each distribution P in isolation.
Solving the DRO problem for these calibration constants amounts to constructing a single machine learning system that performs almost as well on each distribution P as a dedicated machine learning system specifically trained for distribution P alone. In the following section, we elaborate on this method of setting calibration constants.
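For instance, the second reference point can be estimated by minimizing each cost in isolation. The sketch below is our own illustration, with assumed closed-form toy costs standing in for real subpopulation risks (in practice, each C_P would be the validation risk of a model trained on subpopulation P):

```python
import numpy as np

# Toy per-subpopulation expected costs C_P(w) (assumed quadratics for
# illustration; in practice C_P is the validation risk of a trained model).
costs = {
    "P1": lambda w: (w - 1.0) ** 2 + 0.1,        # floor r*_P1 = 0.1
    "P2": lambda w: 2.0 * (w + 0.5) ** 2 + 0.3,  # floor r*_P2 = 0.3
}

def best_performance(cost, w0=0.0, lr=0.05, steps=2000, h=1e-6):
    """Estimate r*_P = min_w C_P(w) by descending a numeric gradient."""
    w = w0
    for _ in range(steps):
        grad = (cost(w + h) - cost(w - h)) / (2 * h)
        w -= lr * grad
    return cost(w)

r_star = {name: best_performance(c) for name, c in costs.items()}
# r_star recovers the floors 0.1 and 0.3 of the two toy costs.
```

The resulting dictionary of r*_P values is exactly the set of reference points discussed above, one per distribution trained in isolation.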
Note that, based on Theorems 1 and 3, no DRO solution can achieve a performance better than r*_P on any distribution P, regardless of the chosen calibration constants. If this were to happen, it would simply mean that r*_P had not been correctly estimated, and the new performance would become the corrected r*_P. This simple observation forms the basis for the practical recommendations discussed in the next section.
A minimal set of practical recommendations
In this section, we provide a minimal set of practical recommendations to machine learning engineers who face the difficult task of constructing and deploying bias-sensitive machine learning systems. We do not pretend that these recommendations are sufficient to address the bias problem, but merely represent intuitively sensible steps that are supported by our mathematical insights and should not be avoided. We summarize these recommendations in Inset 1.
We also motivate and elaborate on each step below.
The identification of the subpopulations of concern frames the problem because it also defines the success criterion, that is, bias mitigation with respect to meaningful subpopulations. Key factors to consider are the future users of the system, information on which groups have previously suffered from discrimination in similar scenarios, and the quantity and quality of the data available at training time. In particular, we must at least have enough data to evaluate the subpopulation performances reliably. For instance, in a face recognition system, subpopulations might contain images of people representing distinct ethnicities [Klare et al., 2012].

Fighting bias with DRO: practical recommendations
1. Identify subpopulations P_k at risk in the available data.
2. For each subpopulation, and in isolation, determine the best performance r*_{P_k} that can be achieved with the machine learning model of choice.
3. Decide whether the r*_{P_k} represent an acceptable set of performances. There is no point using DRO if this is not the case. Instead, investigate why the model performs so poorly on the adverse distributions (insufficient data, inadequate model, etc.) until obtaining an acceptable set of r*_{P_k}.
4. Use DRO to construct a single machine learning system whose performance on each subpopulation is not much worse than r*_{P_k}. This can be achieved by using the r*_{P_k} as calibration coefficients in a Lagrangian algorithm.
5. Deploy the system on an experimental basis in order to collect more data. Sample the examples with the lowest accuracy in order to determine whether we missed a subpopulation at risk. If one is found, add the vulnerable subpopulation to the initial data and repeat all the steps.

Inset 1: Summary of practical recommendations.
Working on each subpopulation in isolation attempts to determine the best achievable performance on each subpopulation if this subpopulation were the only target. Data available for minority subpopulations might be limited. In such cases, data from the remaining subpopulations can be used as a regularizer to improve performance on the subpopulation P of interest. For instance, we can train on a mixture of data coming from both the subpopulation P (with weight 1) and the remaining subpopulations (with weight α_P). We then treat α_P as a hyperparameter that we tune to achieve the best validation performance on data from the subpopulation P. Our estimate of r*_P is then the performance of the resulting system, either measured on the validation set, or on held-out data if such data is available in sufficient quantity. This is why it is important to have sufficient data to reliably validate model performance on each subpopulation. Techniques proposed to tackle noisy datasets and scenarios with limited labelled examples, such as active learning [Ren et al., 2020] and transfer learning [Pan and Yang, 2010, Tan et al., 2018], can also be used to increase the performance.
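The α_P tuning described above can be sketched as follows; the synthetic data, the simple mean-estimation model, and the candidate grid of α values are all assumptions made for the sake of a small runnable example:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: a scarce minority subpopulation P and a larger,
# slightly shifted remaining population (all numbers are assumptions).
x_P = rng.normal(1.0, 1.0, size=20)        # training data from P
x_P_val = rng.normal(1.0, 1.0, size=200)   # validation data from P
x_rest = rng.normal(0.8, 1.0, size=2000)   # data from the other subpopulations

def fit_mean(x_p, x_r, alpha):
    """Weighted ERM for a scalar mean: weight 1 on P, weight alpha on the rest."""
    return (x_p.sum() + alpha * x_r.sum()) / (len(x_p) + alpha * len(x_r))

def val_cost(w):
    return float(np.mean((x_P_val - w) ** 2))

alphas = [0.0, 0.001, 0.01, 0.1, 1.0]       # candidate values for alpha_P
best_alpha = min(alphas, key=lambda a: val_cost(fit_mean(x_P, x_rest, a)))
r_star_P = val_cost(fit_mean(x_P, x_rest, best_alpha))
```

By construction, the selected α_P performs at least as well on the validation data from P as training on P alone (α = 0), which is the regularization effect described in the text.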
We can then judge whether the r*_P represent an acceptable set of performances for a final system. As explained in Section 3, no DRO solution can perform better on a subpopulation P than a model trained for this subpopulation P only. If the set of performances obtained in the previous steps is not acceptable, we must identify the root cause of this problem. For instance, if the poor performance stems from insufficient data quality for the subpopulation, the problem will persist when we later seek a consistent system using DRO. We then need to focus on improving data quality for the vulnerable subpopulations. We recommend investigating the root cause of insufficient performance for each of the vulnerable subpopulations in isolation.
If the minimum cost that can be achieved for each subpopulation is acceptable, we can then build a system that works consistently well across the subpopulations using DRO. In the simplest case, the calibration coefficient r_P for each subpopulation is equal to the optimum expected risk for that subpopulation alone, r_P = min_w C_P(w). We can also adjust the calibration coefficients to prevent overfitting to individual subpopulations [Sagawa et al., 2020]. For n examples in a certain subpopulation P, the expected risk C_P(w) can be replaced by its empirical estimate C_{P_n}(w) augmented with a calibration constant that decreases when the number n of training examples increases. Moreover, the model size often needs to be larger than the model size that achieves the best performance on each individual subpopulation. Intuitively, this is needed because handling all subpopulations at once might be more demanding than handling only one. In Section 2 and Section A.4 of the Appendix, we also argue that overparametrization alleviates the issues associated with DRO local minima that are stationary points of an expected loss mixture but are not local minima of this mixture. As a result, overparametrization helps practical Lagrangian DRO algorithms find a good solution.
Finally, we must remain aware that the final system critically depends on the initial selection of the subpopulations of interest. Therefore, it remains essential to cautiously deploy such a system and to monitor its performance during the ramp up. In particular, the worst performing cases should be examined for consistent patterns that might indicate that a vulnerable subpopulation was not considered in the problem specification. When this is the case, the correct solution is to include the initially omitted subpopulation and start again.
DRO for adversarial examples
The previous sections make several observations about the application of DRO to fight bias in machine learning systems. In particular, we have argued that DRO is practically equivalent to training on a well chosen example distribution, and we have also shown that this well chosen example distribution is far from universal but depends on often implicit assumptions hidden in the DRO problem statement, such as calibration coefficients.
These observations extend beyond the bias fighting scenario. For instance, DRO is often presented as a good way to construct systems that are robust to adversarial examples [Szegedy et al., 2014, Madry et al., 2017]. This application of DRO can be formalized by considering a set Φ of all measurable functions ϕ that map an example pattern z to another pattern ϕ(z) that is assumed to be visually indistinguishable from z according to a predefined criterion. For instance, it is common to consider the set of all transformations ϕ such that ‖z − ϕ(z)‖_p ≤ κ, that is, transformations that can only modify an input pattern while remaining in a given L_p ball.
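For concreteness, keeping a perturbed pattern inside such an L_p ball is typically implemented as a projection step. The helper functions below are our own sketch (not code from the paper) for the L∞ and L2 cases:

```python
import numpy as np

def project_linf(z_adv, z, kappa):
    """Project a perturbed pattern back into the L-infinity ball of radius
    kappa around the reference pattern z (elementwise clipping)."""
    return np.clip(z_adv, z - kappa, z + kappa)

def project_l2(z_adv, z, kappa):
    """Project a perturbed pattern into the L2 ball of radius kappa around z."""
    delta = z_adv - z
    norm = np.linalg.norm(delta)
    if norm > kappa:
        delta = delta * (kappa / norm)
    return z + delta

z = np.array([0.2, 0.5, 0.9])
z_adv = z + np.array([0.3, -0.3, 0.05])      # candidate perturbation
z_proj = project_linf(z_adv, z, kappa=0.1)   # satisfies ||z_proj - z||_inf <= 0.1
```

Iterating an attack step followed by such a projection is the usual way to search the perturbation set Φ in practice.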
Let P_ϕ represent the distribution followed by ϕ(z) when z follows the distribution P. Robust solutions against the class of adversarial perturbations Φ can then be expressed as the DRO problem (1) with the distribution family Q = {P_ϕ : ϕ ∈ Φ}. This family is typically much larger than the ones considered in the bias fighting scenario. Instead of representing a finite number of subpopulations, the family Q is usually infinite and uncountable. Therefore one cannot reduce this DRO problem to a finitely constrained optimization problem, and one cannot use a separation lemma because the family Q or its convex hull may not be topologically closed.
Theorem 2 relies on an additional tightness assumption to establish that a DRO local minimum w* is also a stationary point of the expected risk for an example distribution that belongs to the weak closure of the convex hull of Q. The tightness assumption is trivially satisfied when the examples belong to a bounded domain (as is the case for images) or when they remain close to reference images drawn from a single distribution (as is the case for adversarial examples).
At first sight, this result seems to imply that there is a distribution of images on which the ordinary training procedure yields a solution robust to adversarial examples. Is it true that we would not have adversarial example issues if only we had the right examples to start with? Or the right data augmentation scheme?
More precisely, Theorem 2 states that a DRO local minimum is a stationary point of the expected risk for an example distribution that depends on all the details of the DRO problem, and in particular on the definition of the set Φ of adversarial perturbations, or equivalently on which images are considered visually indistinguishable from a reference image. On the one hand, we could use DRO with a class of adversarial perturbations Φ that sits very conservatively below the threshold of visual distinguishability. For instance, the perturbations might be limited to changing pixel values by no more than a small threshold. Alas, the solution might then be fooled by adversarial examples that do not satisfy this strict condition but nevertheless remain visually indistinguishable from the original pattern. On the other hand, we could use DRO with a much broader class of perturbations, potentially including some that would affect a human observer. For instance, dithering patterns might occasionally introduce enough noise to be perceptually meaningful. Because such perturbations can dominate the DRO problem, it becomes necessary to introduce calibration constants in order to account for the variation in performance that can justifiably be expected under such perturbations.
Because DRO is fundamentally related to minimizing the expected cost for a well crafted example distribution, DRO does not really solve the original problem but displaces it into the specification of the class of adversarial perturbations and the selection of the associated cost calibration constants. However, the adversarial example scenario is substantially more challenging than the bias fighting scenario: because the number of potential perturbations is much larger than the number of potentially vulnerable subpopulations, we cannot work around the problem by first working on each of them in isolation as suggested in Section 4. Using DRO for adversarial robustness without a reliable perceptual distance might be fundamentally flawed [Sharif et al., 2018].
Related work
Finding an appropriate choice of adversarial risk that matches some notion of perceptual similarity is a topic of ongoing research. Early work on this topic considered L_p norms as similarity metrics, for example L_0 [Papernot et al., 2015], L_2 [Szegedy et al., 2014], or L_∞ [Goodfellow et al., 2014a]. Sharif et al. [2018] show how these norms, as well as SSIM [Wang et al., 2004], fail to model perceptual similarity and still get fooled by simple adversarial examples. The view of adversarial machine learning through the lens of DRO is shared by Sinha et al. [2018], who use the Wasserstein distance as a measure of perceptual similarity and achieve important statistical guarantees regarding the computed solution, as well as excellent practical performance.
Rahimian and Mehrotra [2019] survey recent research in DRO and, in particular, mention the various ways the risk can be defined. To the best of our knowledge, the question of what the calibration coefficients should be has not been the topic of much investigation. Meinshausen et al. propose setting r_P = Var[Y_P] in order to maximize the minimum explained variance across distributions. Our suggestion instead accounts for the acceptable performance (the best performance obtainable in isolation) on each particular subpopulation.
One thread in the debate on the source of bias was inspired by the outcomes of applying a photo upsampling algorithm [Menon et al., 2020] to images of non-white people. Examples of using DRO to approach similar problems include text autocomplete tasks [Hashimoto et al., 2018a], noisy minority subpopulations and protection with respect to specific sensitive attributes [Taskesen et al., 2020], as well as lexical similarity and recidivism prediction [Duchi et al., 2020]. The phenomenon of neural networks exploiting 'shortcuts' in data [Geirhos et al., 2020] is a related line of work on robustness and fairness.
Conclusion
Whether fighting bias in machine learning systems is a data curation problem or an algorithmic problem has been the object of much discussion. Our theoretical results clarify the relation between a well known algorithmic approach, DRO, and the optimization of the expected cost on a well crafted data distribution. Contrary to the usual convex duality results, these results hold for nonconvex costs and for infinite families of distributions. They also provide some support for the common practice of leveraging this quasi-equivalence to design efficient DRO algorithms. But it also becomes clear that running such an imperfect DRO algorithm is equivalent to optimizing the expected risk for a well crafted distribution.
This analysis also makes clear that this well crafted distribution is not universal but depends on often implicit details of the DRO problem setup such as the calibration constants. Alas, in the same way that there is no universally robust training set, there is no universal way to define calibration constants that ensure an acceptable set of results. However, an elementary argument shows that one cannot reach acceptable results with DRO unless one can reach acceptable results on each subpopulation in isolation. This forms the basis for a minimal set of practical recommendations. Finally, we discuss how our insights (this time with an infinite distribution family) raise concerns about the commonly advocated use of DRO to tame adversarial examples without a reliable perceptual similarity criterion.
Using DRO for fairness or adversarial robustness without a clear understanding of its algorithmic limitations can have a negative societal impact. The recommendations in Section 4 aim to prevent misuses of DRO, such as lowering the performance on the remaining subpopulations to match the error on the most difficult distribution. However, as a consequence of Theorems 1, 2 and 3, it is also necessary to address the underlying problems in the most challenging distribution. On one hand, failure to address the issues in the minority subpopulation leaves it susceptible to discrimination, both in the application at hand and in future applications where the unresolved issues might persist. On the other hand, reducing the performance of the majority subpopulations can lead to an unacceptable average performance; as a result, the system is not going to be used, which might in turn lead to a loss of interest in designing broadly accessible systems for this purpose (e.g., voice assistants robust to minority accents). We hope that our results and discussion will give more context to the debate on the sources of bias in machine learning, as well as help in bias mitigation in real-life scenarios.
A Theoretical Appendix
Notation Let ℓ(z, w) be the loss of a machine learning model, where w ∈ R^d represents the parameters of the model and z ∈ R^n belongs to the space of examples. For instance, in the case of least squares regression, the examples z are pairs (x, y) and the loss is ℓ(z, w) = ‖y − f_w(x)‖².
Distribution Robust Optimization (DRO) Instead of assuming the existence of a probability distribution P(z) over the examples z and formulating an Expected Risk Minimization (ERM) problem

    min_w C_P(w)   with   C_P(w) = E_{z∼P}[ℓ(z, w)],    (3)

the Distribution Robust Optimization (DRO) problem considers a family Q of distributions and seeks to minimize the worst-case cost

    min_w max_{P∈Q} C_P(w).    (4)

Many authors define Q with the purpose of constructing a learning algorithm with additional robustness properties. For instance, Q may be the set of all distributions located within a certain distance of the training distribution [Bagnell, 2005, Namkoong and Duchi, 2016, Blanchet et al., 2019, Staib and Jegelka, 2019]. Different ways to measure this distance lead to different and sometimes surprising solutions. Interesting theoretical possibilities appear when Q also contains the discrete distributions that represent finite training sets. Besides these theoretically justified choices of Q, many practical concerns can be viewed through the prism of DRO on ad-hoc families Q of distributions.
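To make the contrast between (3) and (4) concrete, the following sketch (with toy quadratic costs of our own choosing) evaluates both objectives by grid search:

```python
import numpy as np

# Two assumed toy expected costs, one per distribution in Q = {P1, P2}.
C1 = lambda w: (w - 1.0) ** 2
C2 = lambda w: (w + 1.0) ** 2

ws = np.linspace(-2.0, 2.0, 4001)
erm_obj = 0.5 * C1(ws) + 0.5 * C2(ws)     # expected risk (3) for a 50/50 mixture
dro_obj = np.maximum(C1(ws), C2(ws))      # DRO objective (4): worst case over Q

w_erm = ws[np.argmin(erm_obj)]
w_dro = ws[np.argmin(dro_obj)]
# Here both objectives are minimized at w = 0: the DRO solution coincides
# with ERM on a suitable mixture, as Theorem 1 predicts.
```

In this symmetric example the mixture that reproduces the DRO solution is the 50/50 mixture; the asymmetric case, where the mixture coefficients are nontrivial, is illustrated in Section A.1.1.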
Example 1 (Fighting bias). Let the example distributions P_1 to P_K represent identified subpopulations for which we want to ensure consistent performance. It is appealing to formulate this problem as a DRO problem with Q = {P_1 ... P_K}. However, as discussed in the main text, it is important to realize that much of the original problem is hiding behind the choice of the calibration constants.
Example 2 (Fighting adversarial attacks). Szegedy et al. [2014] have shown that one can almost arbitrarily change the output of a deep learning vision system by modifying the patterns in nearly invisible ways. Let Φ be the set of all measurable functions ϕ that map an example pattern z to another pattern ϕ(z) that is assumed visually indistinguishable from z according to a certain psycho-visual criterion. Let P_ϕ represent the distribution followed by ϕ(z) when z follows the distribution P. Robust solutions against the class of adversarial perturbations Φ can be found with DRO on the distribution family Q = {P_ϕ : ϕ ∈ Φ}.
Calibrated costs The simple DRO formulation (4) makes sense when we know that all distributions define problems of comparable difficulty. It is however easy to imagine that a particular distribution emphasises harder examples. We can introduce calibration terms r_P in the DRO formulation to prevent any single distribution P from dominating the maximum:

    min_w max_{P∈Q} { C_P(w) − r_P }.    (5)
Correctly setting the calibration terms is both difficult and application specific. A simple but costly approach consists in letting r_P be equal to the optimum cost for that distribution alone, r_P = min_w C_P(w). Calibrated DRO (5) then controls the loss of performance incurred by seeking a solution that works for all distributions, as opposed to solutions that are specific to each distribution. Another approach relies instead on the variance of the predicted quantity.
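A small numeric illustration (with toy quadratic costs of our own choosing) shows how calibrating by r_P = min_w C_P(w) prevents an intrinsically harder distribution from dominating problem (5):

```python
import numpy as np

# One easy and one intrinsically harder toy distribution (assumed costs):
C_easy = lambda w: (w - 1.0) ** 2            # best achievable cost 0.0
C_hard = lambda w: (w + 1.0) ** 2 + 2.0      # best achievable cost 2.0

ws = np.linspace(-2.0, 2.0, 4001)
# Plain DRO (4) is dominated by the hard distribution...
w_plain = ws[np.argmin(np.maximum(C_easy(ws), C_hard(ws)))]
# ...while calibrating by r_P = min_w C_P(w) compares each cost to its floor.
r_easy, r_hard = 0.0, 2.0
w_cal = ws[np.argmin(np.maximum(C_easy(ws) - r_easy, C_hard(ws) - r_hard))]
# w_plain = -0.5 is pulled toward the hard distribution's optimum,
# whereas w_cal = 0.0 balances the excess costs of both distributions.
```

This is the behaviour described in Section 3: without calibration, small gains on the deficient distribution come at the expense of the other one.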
Calibration terms can also be used to counter the effect of finite training data. For instance, when we only have n examples for a certain distribution P ∈ Q, the expected risk C_P(w) can be replaced by its empirical estimate C_{P_n}(w) augmented with a calibration constant that decreases when the number n of training examples increases [Sagawa et al., 2020].
A.1.1 Finite case
We first address the case where Q is a finite set of distributions P_1 ... P_K. The following result simplifies Proposition 2 of Arjovsky et al. [2019] by eliminating the KKT constraint qualification requirement. In the rest of this document, we always assume that the mixture coefficients λ_k are nonnegative and sum to one.

Theorem 1. Let Q = {P_1, ..., P_K} be a finite set of probability distributions on R^n and let w* be a local minimum of the DRO problem (4) or the calibrated DRO problem (5). Let the costs C_P(w) = E_{z∼P}[ℓ(z, w)] be differentiable in w for all P ∈ Q. Then there exists a mixture distribution P_mix = Σ_k λ_k P_k such that ∇C_{P_mix}(w*) = 0.
The proof relies on a simple hyperplane separation lemma closely related to Farkas' lemma [Boyd and Vandenberghe, 2014, Sec. 2.5 and Ex. 2.20].

Lemma 1. A nonempty closed convex subset A of R^n either contains the origin or is strictly separated from the origin by a certain hyperplane, that is, there exists a vector u ∈ R^n and a scalar c > 0 such that, for all x ∈ A, ⟨u, x⟩ ≥ c.
Proof. Assume 0 ∉ A. Let u ∈ A be the projection of the origin onto the closed convex set A. For all x ∈ A and all t ∈ [0, 1], the point u + t(x − u) also belongs to the convex set A. Since u is the point of A closest to the origin, ‖u + t(x − u)‖² ≥ ‖u‖², that is, 2t⟨u, x − u⟩ + t²‖x − u‖² ≥ 0. Dividing by t and letting t → 0 yields ⟨u, x⟩ ≥ ‖u‖² = c > 0.

Proof of Theorem 1. Let A ⊂ R^n be the convex hull of the g_k = ∇C_{P_k}(w*) for k = 1 ... K. A is closed and convex. If A does not contain the origin, then, according to the lemma, there exist u and c such that ∀x ∈ A, ⟨u, x⟩ ≥ c > 0. Therefore, for all t > 0, moving from w* to w* − tu reduces all costs C_{P_k} by at least tc + o(t). As a consequence, max_k C_{P_k} is also reduced by at least tc + o(t), contradicting the assumption that w* is a local minimum. Hence A contains the origin, which means that there are nonnegative mixture coefficients λ_k summing to one such that Σ_k λ_k ∇C_{P_k}(w*) = ∇_w C_{P_mix}(w*) = 0.
All local and global solutions of the DRO problem (4) or (5) are therefore stationary points of the expected risk (3) associated with a mixture of the distributions of Q. The exact mixture coefficients depend on the loss functions, the distributions included in Q and, in the case of the calibrated version of DRO, on the calibration constants r P . This result raises several important questions. Is this result valid when Q is not finite? Are these stationary points always local minima? Is the converse true? What is the relation between the mixture coefficients λ k and the calibration constants r P ? How far can such results go without assuming convex losses? These questions will be addressed in the rest of this document.
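The finite-case statement can be checked numerically on a toy example. In the sketch below (our own construction, with two assumed quadratic costs of different scales), we locate the DRO minimum by grid search and recover the mixture coefficient λ that makes the mixture gradient vanish there:

```python
import numpy as np

# Two assumed toy costs with different scales and optima.
C1 = lambda w: (w - 1.0) ** 2
C2 = lambda w: 2.0 * (w + 1.0) ** 2
g1 = lambda w: 2.0 * (w - 1.0)          # gradient of C1
g2 = lambda w: 4.0 * (w + 1.0)          # gradient of C2

# Locate the DRO minimum of max{C1, C2} by fine grid search.
ws = np.linspace(-1.0, 1.0, 200001)
w_star = ws[np.argmin(np.maximum(C1(ws), C2(ws)))]

# Theorem 1: some mixture lam*C1 + (1-lam)*C2 is stationary at w_star.
lam = g2(w_star) / (g2(w_star) - g1(w_star))
mix_grad = lam * g1(w_star) + (1.0 - lam) * g2(w_star)
# lam lands in [0, 1] (about 0.586 here) and mix_grad vanishes.
```

The nontrivial value of λ illustrates the point made above: the exact mixture coefficients depend on the loss functions and the distributions in Q, not on any universal rule.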
A.1.2 Infinite case
The infinite case differs because the convex hull of an infinite set of vectors is not necessarily closed, even when the original set is closed. Therefore we cannot directly apply the lemma to the convex hull A of the gradients g_P = ∇C_P(w*) for all P ∈ Q. Applying the lemma to the closure Ā of A instead yields a substantially weaker result: if w* is a local DRO minimum, then for each ε > 0 there is a mixture P^(ε)_mix of distributions from Q such that ‖∇C_{P^(ε)_mix}(w*)‖ < ε. Note that there is no guarantee that P^(ε)_mix converges to an actual distribution when ε converges to zero. Therefore this weaker result does not help relating the solution of a DRO problem to the solutions of an ERM problem for a suitable training distribution. However, such a stronger result can be obtained at the price of a tightness assumption [Billingsley, 1999].
Definition 1. A family of distributions Q on a Polish space Ω is tight when, for any ε > 0, there is a compact subset K ⊂ Ω such that ∀P ∈ Q, P(K) ≥ 1 − ε.
Tightness is therefore obvious when all the examples belong to a bounded domain. Even when this is not the case, it is known that any finite set of probability distributions on a Polish space is tight [Billingsley, 1999]. This often provides the means to prove the tightness of an infinite family Q of distributions that are "close" enough to a single distribution such as the training data distribution. For instance, in the case of adversarial examples (Example 2), tightness is doubly obvious, first because all images belong to a bounded domain, second because the visual similarity criterion ensures that the distance between z and ϕ(z) is bounded.

Theorem 2. Let Q be a tight family of probability distributions on R^n. Let w* be a local minimum of problem (5). Let Q_mix be the weak convergence closure of the convex hull of Q. Let there be a bounded continuous function h(z, w) defined on a neighborhood V of w* such that ∇C_P(w) = E_{z∼P}[h(z, w)] for all P ∈ Q_mix and such that ‖h(z, w) − h(z, w′)‖ ≤ M‖w − w′‖ for almost all z ∈ R^n. Then Q_mix contains a distribution P_mix such that ∇_w C_{P_mix}(w*) = 0.
As in earlier work, the theorem does not require the loss ℓ(z, w) to be differentiable everywhere as long as the purported derivative h(z, w) has the correct expectation. For our purposes, h must also be bounded and continuous on V and satisfy a Lipschitz continuity requirement.
Proof. Let Ā be the closure of the convex hull of the g_P = ∇C_P(w*) for all P ∈ Q. According to Lemma 1, if Ā does not contain the origin, then there are u and c > 0 such that ∀x ∈ Ā, ⟨u, x⟩ ≥ c. In particular, for all P ∈ Q, we have ⟨u, ∇C_P(w*)⟩ ≥ c > 0. Thanks to the Lipschitz continuity of h(z, w), we have C_P(w* − tu) ≤ C_P(w*) − tc + Mt² for all P ∈ Q. Therefore, for any 0 < t < c/2M and any P ∈ Q, we have C_P(w* − tu) < C_P(w*) − tc/2, contradicting the assumption that w* is a local DRO minimum. Therefore Ā contains the origin. This means that for any t > 0, there exists a mixture P^(1/t)_mix of distributions from Q such that ‖∇C_{P^(1/t)_mix}(w*)‖ < 1/t. Note that if Q is tight, the convex hull of Q is also tight. Therefore the sequence P^(1/t)_mix is also tight and, by Prokhorov's theorem, contains a weakly convergent subsequence whose limit P_mix belongs to the closure Q_mix of the convex hull of Q. Because h(z, w*) is continuous and bounded, the map P → ∇C_P(w*) is continuous for the weak topology. Therefore ∇C_{P_mix}(w*) = 0.
A.2 A local minimum of an expected loss mixture is a local minimum of a calibrated DRO problem
The following elementary result states that if w* is a local minimum of an expected cost mixture C_{P_mix}, then it also is a local minimum of the calibrated DRO problem (5) with calibration constants r_P equal to the costs C_P(w*).
Theorem 3 (Converse). Let P_mix = Σ_k λ_k P_k be an arbitrary mixture of distributions P_k ∈ Q. If w* is a local minimum of C_{P_mix}, then w* is a local minimum of the calibrated DRO problem (5) with calibration coefficients r_P = C_P(w*).
Proof. By contradiction, assume that w* is not a local minimum of (5), that is, for all ε > 0 there exists u such that ‖u‖ < ε and max_{P∈Q} {C_P(w* + u) − r_P} < max_{P∈Q} {C_P(w*) − r_P}. Recalling our choice of r_P yields max_{P∈Q} {C_P(w* + u) − C_P(w*)} < 0. Since C_P(w* + u) < C_P(w*) for all P ∈ Q, we have C_{P_mix}(w* + u) < C_{P_mix}(w*), and w* cannot be a local minimum of C_{P_mix}.
A.3 With and without convexity assumptions
Note that there is a discrepancy between the statements of Theorem 3 and Theorems 1-2. The former requires a local minimum of the expected loss mixture, whereas the latter only provide a stationary point. This distinction becomes moot if we assume that the loss functions ℓ(z, w) are convex in w.

Figure 3: The minimum w* = 0 of max{C_1(w), C_2(w)} is a stationary point of the mixture cost C_mix(w) = ½ C_1(w) + ½ C_2(w). However, this stationary point is not a local minimum but a local maximum of the mixture cost.
When this is the case, all stationary points are not only local minima but global minima as well. 4 Theorems 1 and 3 then provide an exact equivalence between finding a minimum of the calibrated DRO problem (5) and finding a minimum of an expected loss mixture.
However, there is little point in providing theorems for the convex case because, at least in the finite case, it is well covered by the theory of convex duality [Bertsekas, 2009] applied to a simple restatement of convex DRO as a convex optimization problem with an additional slack variable L:

    min_{w,L} L   subject to   C_{P_k}(w) − r_{P_k} ≤ L   for k = 1 ... K.

Convex duality also clarifies the relation between the mixture coefficients λ_k and the calibration constants r_{P_k}. Increasing the weight of a distribution in the mixture is equivalent to reducing the corresponding calibration coefficient. This observation then leads to a plethora of saddle-point seeking algorithms such as Uzawa iterations [Arrow et al., 1958] (see Section A.4).
The nonconvex case is more challenging because the stationary points identified by Theorem 1 need not be local minima. Consider for instance the two real functions C_1(w) = tanh(1 + w) + εw² and C_2(w) = tanh(1 − w) + εw², where the term εw² with 0 < ε ≪ 1 is only present to ensure that each of these functions has a well defined optimum. As shown in Figure 3, their maximum max{C_1(w), C_2(w)} has a minimum at w* = 0. As predicted by Theorem 1, this solution is a stationary point of the mixture C_mix = ½ C_1(w) + ½ C_2(w). However, this stationary point is not a local minimum but a local maximum.
This situation is in fact easy to understand.
The solution w* = 0 of the problem min_w max{C_1(w), C_2(w)} (Figure 3) falls in a region where both C_1 and C_2 have negative curvature. As a result, any mixture of these two costs also has negative curvature at w*. Therefore, the stationary point w* cannot be a local minimum. It is also easy to see that this situation cannot occur when the optimum of the DRO problem is achieved at points where the individual cost functions have positive curvature. Since all mixtures must also have positive curvature at these points, the stationary points can only be local minima. This remark is important because learning algorithms for deep learning problems tend to follow trajectories where the Hessian is very flat apart from a few positive eigenvalues [Sagun et al., 2018]. Weak negative curvature directions always exist, even when the algorithm stops making progress, but they are very weak. Although we lack a good understanding of these landscapes, it seems a safe bet to assume that the situation presented in Figure 3 is often cured by overparametrization. The following section reaches a similar conclusion with a closer look at a popular family of DRO algorithms.

4 Convexity also provides easy means to weaken the differentiability assumption because of the existence of subgradients. One could similarly weaken the differentiability assumptions of Theorems 1-2 by assuming instead the existence of local sub- and super-gradients.
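The Figure 3 counterexample is easy to reproduce numerically. The sketch below is our own code (the value ε = 10⁻³ is an arbitrary small choice); it locates the DRO minimum of the two tanh costs by grid search and estimates the curvature of the mixture there by finite differences:

```python
import numpy as np

eps = 1e-3   # the small regularizer eps*w^2 with 0 < eps << 1
C1 = lambda w: np.tanh(1.0 + w) + eps * w ** 2
C2 = lambda w: np.tanh(1.0 - w) + eps * w ** 2
Cmix = lambda w: 0.5 * C1(w) + 0.5 * C2(w)

ws = np.linspace(-3.0, 3.0, 6001)
w_star = ws[np.argmin(np.maximum(C1(ws), C2(ws)))]   # DRO minimum at w = 0

# Finite-difference second derivative of the mixture at w_star:
h = 1e-4
curv = (Cmix(w_star + h) - 2.0 * Cmix(w_star) + Cmix(w_star - h)) / h ** 2
# curv is negative (about -0.64): the stationary point of the mixture
# is a local maximum, exactly the situation of Figure 3.
```

The negative curvature confirms that the DRO minimum sits where both costs, and hence every mixture of them, curve downward.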
Algorithm 2:
Input: Equally sized training sets D_k for k = 1 ... K.
Input: Calibration coefficients r_k. Initial weights w_0.
Input: Temperature β. Stopping threshold ε.
Output: A sequence of weights w_t.
A.4 Lagrangian algorithms for DRO
The calibrated DRO problem (5) is easily rewritten as a constrained optimization problem by introducing a slack variable L:

    min_{w,L} L   subject to   C_{P_k}(w) − r_{P_k} ≤ L   for k = 1 ... K.

With convex loss functions, finite Q, and under adequate qualification conditions [Boyd and Vandenberghe, 2014, Bertsekas, 2009], convex duality theory suggests to write the Lagrangian

    L(w, L, λ_1 ... λ_K) = L + Σ_k λ_k ( C_{P_k}(w) − r_{P_k} − L )

and the dual function D(λ_1 ... λ_K) = min_{w,L} L(w, L, λ_1 ... λ_K). The solution of this problem must satisfy Σ_k λ_k = 1 because the dual D(λ_1 ... λ_K) is −∞ when this is not the case. With this knowledge, the dual problem becomes

    max_{λ ≥ 0, Σ_k λ_k = 1}  min_w  Σ_k λ_k ( C_{P_k}(w) − r_{P_k} ).

The inner optimization problem is precisely the minimization of the expected risk with respect to the mixture Σ_k λ_k P_k, and therefore lends itself to many popular gradient descent methods. The mixture coefficients λ_k must then be slowly adjusted by ascending the outer optimization objective [Arrow et al., 1958].
Algorithm 2 is a typical example of this strategy. Although this particular instance uses a temperature parameter β to smooth the mixture coefficient update rule, it is also common to focus on a single term with β = +∞. When this is the case, each outer iteration of Algorithm 2 merely amounts to augmenting the training set with an extra copy of the examples associated with the most adverse subpopulation.
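As an illustration, here is a minimal numpy sketch of such a Lagrangian loop on two toy quadratic subpopulation costs. The multiplicative mixture update, the step sizes, and the quadratic costs are our own assumptions for the sake of a runnable example; Algorithm 2 in the text uses a softmax-with-temperature update instead:

```python
import numpy as np

def lagrangian_dro(costs, grads, r, w0, eta=0.1, lr=0.05, outer=200, inner=50):
    """Sketch of a Lagrangian DRO loop: the inner loop descends the current
    mixture cost, the outer loop slowly re-weights the mixture
    multiplicatively according to the calibrated costs C_k(w) - r_k, so
    that adverse subpopulations gain weight."""
    K = len(costs)
    lam = np.full(K, 1.0 / K)
    w = w0
    for _ in range(outer):
        for _ in range(inner):                 # minimize the mixture cost
            w -= lr * sum(l * g(w) for l, g in zip(lam, grads))
        gaps = np.array([c(w) - rk for c, rk in zip(costs, r)])
        lam = lam * np.exp(eta * gaps)         # slow multiplicative ascent
        lam /= lam.sum()
    return w, lam

# Toy convex example: two quadratic subpopulation costs.
costs = [lambda w: (w - 1.0) ** 2, lambda w: 2.0 * (w + 1.0) ** 2]
grads = [lambda w: 2.0 * (w - 1.0), lambda w: 4.0 * (w + 1.0)]
w, lam = lagrangian_dro(costs, grads, r=[0.0, 0.0], w0=0.0)
# w approaches the minimum of max{C1, C2} (about -0.172 here),
# where the two calibrated costs equalize.
```

At the fixed point the calibrated costs are equal and the mixture gradient vanishes, which is exactly the saddle point of the dual problem derived above.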
Because of their simplicity and effectiveness, such Lagrangian DRO algorithms are widely used with deep learning systems with nonconvex objectives [Sagawa et al., 2020, Augustin et al., 2020]. The theoretical results discussed in this appendix provide a measure of support for this practice.
A crucial assumption for this algorithm is the idea that increasing the weight of a distribution in the mixture amounts to finding a local DRO minimum with a lower calibration coefficient for that distribution. This is true in the convex case, but it requires a more precise discussion in the nonconvex case. Suppose for instance that one modifies the mixture coefficients by slightly increasing λ_1 by a small δ > 0 and re-normalizing:

λ'_1 = (1/Z) (λ_1 + δ),   λ'_k = (1/Z) λ_k   for all k > 1.

Such a change can yield two outcomes: either w* remains a local minimum of the new expected cost mixture, or we can follow a descent trajectory and reach a new local minimum w'.

i) Let us first assume that the old cost function increases when one moves from its local minimum w* to the local minimum w' of the new cost function. Subtracting (7) from (6) yields an inequality which, according to Theorem 3, means that the new local minimum w' is a local minimum of a DRO problem with a reduced calibration coefficient for distribution P_1, just as for convex losses.

ii) However, it is also conceivable that (7) does not hold. This means that the new minimum w' achieves a lower cost than w* for both the old and new mixture costs. In other words, tweaking the mixture allowed us to escape the attraction basin of the local minimum w*. From the perspective of Algorithm 2, this disrupts the determination of the mixture coefficients, but it is nevertheless progress because both the old and new mixture costs are lower. In theory, this can only happen a finite number of times in a neural network because there is only a finite number of attraction basins. In practice, this never happens: stochastic gradient descent in neural networks usually follows a path with slowly decreasing cost without hopping from one attraction basin to another [Goodfellow et al., 2014b, Sagun et al., 2018].
As mentioned earlier, it is also conceivable that w* remains a local minimum with the new mixture cost. Algorithm 2 then keeps increasing the weight of distribution P_1 as long as the cost C_P1(w*) = C_P1(w') remains too high with respect to the desired calibration coefficients. This last case covers two distinct scenarios.
iii) The Lagrangian algorithm could keep increasing the weight of the first distribution without moving away from the local minimum w*. The inner loop eventually minimizes the empirical risk for the first distribution only, yet without achieving progress. This suggests that we have reached a disappointing bound on the best performance achievable with our model using training data sampled from this first distribution.

iv) Alternatively, the old mixture local minimum w* could stop being a local minimum of the new mixture once the first distribution weight reaches a certain threshold. Consider for instance the problem of Figure 3. Even though the DRO minimum corresponds to a local maximum of the mixture cost C_mix(w) = (1/2)C_1 + (1/2)C_2, Theorem 3 tells us that both minima of this mixture cost are also local DRO minima, albeit for different calibration constants r_i. Figure 4 shows the case where r_2 > r_1. Figure 5 shows that increasing the weight of the second cost function beyond a certain threshold eventually erases the left minimum and causes Algorithm 2 to jump to the condition r_1 > r_2. In other words, our algorithm is not able to simultaneously keep both cost functions as low as they could separately be. This either suggests that these two goals are incompatible, or that the model does not have enough capacity to achieve them together. As usual with neural networks, the remedy is overparametrization.

One can derive two conclusions from this brief analysis.
First, as long as we use a Lagrangian descent algorithm to solve the DRO problem, there is little point being concerned about stationary points of the mixture cost that are not local minima, because (1) the algorithm is not going to find them anyway, and (2) overparametrizing the network is likely to make them disappear (scenario iv above).

Figure 4: Both minima of C_mix(w) = (1/2)C_1 + (1/2)C_2 are solutions of a DRO problem, albeit one with different calibration constants r_1 and r_2. Here r_2 > r_1.

Figure 5: Increasing the weight of the second distribution beyond a certain threshold erases the first minimum and causes Algorithm 2 to jump to the other minimum, which is a calibrated DRO minimum for r_1 > r_2.

Second, the most concerning scenario is the case where a single distribution or subpopulation dominates the DRO problem because our model is unable to achieve a satisfactory performance even when it is trained to minimize the expected cost for that distribution only. When this is the case, DRO cannot help.
"year": 2021,
"sha1": "0cbb9aea5fe8e4ec493d02319866b3723dda23e3",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "0cbb9aea5fe8e4ec493d02319866b3723dda23e3",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
AWARENESS ABOUT PHARMACOVIGILANCE AMONG RESIDENT DOCTORS IN A TERTIARY CARE HOSPITAL
BACKGROUND: Monitoring and evaluation of adverse drug reactions (ADRs) through a well-organized pharmacovigilance system is vital for the safe use of medicines. ADR reporting by healthcare professionals forms the backbone of the pharmacovigilance system. AIM: To assess the awareness of pharmacovigilance among resident doctors in a tertiary care hospital. MATERIAL AND METHODS: This cross-sectional descriptive study was carried out among a total of 160 resident doctors from various specialities in a government-run tertiary care teaching hospital, who were administered a questionnaire to evaluate their knowledge, attitude and practice of pharmacovigilance. The questionnaire consisted of open- and closed-ended questions. The study was conducted in January/February 2012. RESULTS: Most of the residents had heard the term 'pharmacovigilance', but only 25% knew its actual meaning. About 84% of the residents did not know about the Pharmacovigilance Programme of India (PvPI), and 68% did not know about the existence of any ADR reporting centres in the country. Most of them (75%) agreed that ADR reporting is the responsibility of doctors. So far, none of the residents have reported an ADR under the PvPI. CONCLUSION: The awareness of pharmacovigilance among resident doctors in teaching hospitals is very low. There is an urgent need to train health professionals in pharmacovigilance to improve the current sorry state.
INTRODUCTION:
Adverse drug reactions (ADRs) are common, often preventable, and associated with significant morbidity and mortality. 1,2 ADRs are responsible for increased length of hospital stay and increased economic burden. 2,3 Monitoring and evaluation of ADRs through a well-organized pharmacovigilance system is vital for the safe use of medicines.
To accomplish this task, the government has initiated the Pharmacovigilance Programme of India (PvPI). The WHO defines pharmacovigilance as "the science and activities relating to the detection, assessment, understanding and prevention of adverse effects or any other medicine related problems". 4
Spontaneous ADR reporting by health care professionals forms the backbone of the pharmacovigilance system. However, under-reporting is a major hurdle in achieving the goals of pharmacovigilance. Since improving the reporting rate requires improving the knowledge, attitude and practice (KAP) of healthcare professionals regarding pharmacovigilance and its importance, we conducted a study among postgraduate residents in a tertiary care hospital and its attached medical college. This will also help to build a useful database that will improve the quality of health care in our hospital.
AIM:
To assess the awareness of pharmacovigilance amongst resident doctors in a tertiary care hospital.
MATERIAL AND METHODS:
Type of Study: Cross-sectional, questionnaire-based, descriptive study.
Place of Study: A government-run tertiary care hospital and attached medical college.
Duration of Study: The study was conducted from 10th January to 10th February 2012.
Sample Size: The study was carried out among a total of 160 resident doctors from various specialities.
Study Tools: A pre-designed questionnaire based on KAP was given to the resident doctors.
RESULTS:
Of the 160 filled questionnaires received from resident doctors, 10 were incompletely filled and were excluded from the study, giving a response rate of 93.75%.
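The reported response rate follows from the counts above: 160 questionnaires distributed, 10 excluded as incomplete, leaving 150 analysable responses, and 150/160 gives the quoted 93.75%. A quick arithmetic check:

```python
distributed = 160
incomplete = 10
analysable = distributed - incomplete           # 150 residents analysed
response_rate = 100.0 * analysable / distributed
assert analysable == 150
assert round(response_rate, 2) == 93.75         # matches the rate reported above
```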
KNOWLEDGE ABOUT PHARMACOVIGILANCE: 74% of resident doctors had heard the term 'pharmacovigilance', but only 24.67% knew its actual meaning. 84% of resident doctors did not know about the existence of the Pharmacovigilance Programme of India (PvPI), and 68% did not know about the ADR reporting centres in India.

Attitudes regarding Pharmacovigilance: 75% of the resident doctors believe that ADR reporting is a duty of doctors only, and only 8% believe that ADR reporting can be done by doctors, nurses as well as pharmacists. 96% of resident doctors were willing to undergo pharmacovigilance training, and all residents (100%) believe that ADR reporting is necessary for the safe use of medicines.

Practice regarding Pharmacovigilance: None of the residents have ever reported an ADR under pharmacovigilance.
DISCUSSION:
ADRs are a major clinical problem, 3 and it is a truism that the detection of common or uncommon ADRs requires vigilance. This led to the development of the pharmacovigilance system. The Uppsala Monitoring Centre (UMC, WHO), Sweden, maintains the international database of ADR reports (currently about 4.7 million case reports) received from several national centres (96 member countries). However, it is estimated that only 6-10% of all ADRs are reported. 5 The contribution of India to the UMC database is very small; this is due to a lack of awareness about the pharmacovigilance system among healthcare professionals. The majority of residents responded to our study, with a response rate of 93.75%, which is higher than that of the Hasford et al study, in which the response rate was 50%. 6 74% of the residents of our institute had heard the term pharmacovigilance; however, only 24.66% of them knew the actual meaning of pharmacovigilance, which is much lower than in the study conducted by Olufunsho Awodele et al, in which 79.3% knew the meaning of pharmacovigilance. 7 Many factors are associated with ADR under-reporting among health care professionals. These factors have been broadly classified as personal and professional characteristics of health carers and their knowledge of and attitude to reporting. Only 32% of residents knew about the pharmacovigilance centres, whereas 10% of residents knew that our institute has a pharmacovigilance centre, which is similar to the study conducted by Amrita P et al, in which 8.87% of doctors were aware that their institute has a pharmacovigilance centre. 8 All residents (100%) agreed that ADR reporting is necessary, which is similar to the result obtained by Amrita P et al (99.19%) 8 and higher than that of the LI Quing et al study (94%). 9 None of the residents had ever reported an ADR under the PvPI, as they have a low level of awareness of ADR reporting under the PvPI (16%).
CONCLUSION:
The awareness of pharmacovigilance among resident doctors in teaching hospitals is very low. There is an urgent need for resident doctors to be trained by health professionals expert in pharmacovigilance to improve the current sorry state. This will not only lead to a better standard of reporting, but will also serve the safe and effective use of medicines, and doctors would be more apt to consider ADRs as the cause of complaints at an early stage in their differential diagnosis.
"year": 2015,
"sha1": "b5d71912732fba270eee47997d0e14a9214cdc29",
"oa_license": null,
"oa_url": "https://doi.org/10.14260/jemds/2015/33",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "7339242343ea93ddba537a8a97bb844d9eea798d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Generalized Kubo formula for spin transport: A theory of linear response to non-Abelian fields
The traditional Kubo formula is generalized to describe the linear response with respect to non-Abelian fields. To meet the needs of studying spin transport, the SU(2) Kubo formulae are derived by two conventional approaches with different gauge fixings. These two approaches are shown to be equivalent, where the non-conservation of the SU(2) current plays an essential role in guaranteeing the consistency. Some concrete examples relating to the spin Hall effect are considered. The dc spin conductivity vanishes in systems with a parabolic unperturbed dispersion relation. By applying a time-dependent Rashba field, the spin conductivity can be measured directly. Our formula is also applied to high-dimensional representations, for the interest of some important models such as the Luttinger model and the bilayer spin Hall system.
I. INTRODUCTION
The Kubo formula, one of the most important formulas in linear response theory, has been widely used in condensed matter physics since it was derived by Kubo [1] for the electrical conductivity in solids. There are several kinds of Kubo formulas, corresponding to the different external fields to which a system responds. However, these formulas, such as those for the electrical conductivity and the susceptibility, all describe linear responses to U(1) external fields.
Recently, a newly emerging field, spintronics [2,3], has attracted much attention for its promising applications in quantum information storage and processing. The spin Hall effect [4,5,6,7,8,9,10,11,12,13,14,15,16,17], as a candidate method for injecting spin current into semiconductors, has also been discussed intensively. In this effect, spin-orbit coupling is necessary, whether intrinsic or extrinsic. As this coupling can be regarded as a contribution of an SU(2) gauge potential [18], a new version of linear response theory in the SU(2) formulation is needed. Nevertheless, most of the previous works have mainly focused on the linear response of such systems to an external electric field, and hence the traditional Kubo formula was adopted directly, except Ref. [19], which dealt with the non-Abelian response and considered the spin Hall effect in the presence of an SU(2) gauge field. There were some papers [20,21] discussing responses to a spin-orbit coupling with spatially varying strength, but the authors employed approaches other than the Kubo formula, as the SU(2) Kubo formula had not been established. It thus becomes inevitable to develop a generalized Kubo formula so that the linear response to external SU(2) gauge fields can be evaluated.
In the present paper, we derive a formula which describes the linear response to an SU(2) external field using the strategy employed by Kubo for the U(1) case. It is not a straightforward derivation since the algebra is totally different. In particular, the expression for the SU(2) "electric field" involves one more term of gauge potentials [22] than the U(1) case, due to its non-Abelian feature. It seems obscure at first how to directly establish the equivalence between the Kubo formulas derived with different gauge fixings. We will show that the extra term in the SU(2) "electric field" precisely corresponds to the nonvanishing term in the "continuity-like" equation [18], which includes the spin precession [12,18]. Its origin stems from the definition of the conserved current [18] in the presence of the SU(2) field. Since one of the recent research interests focuses on spintronics, some explicit examples in spin Hall systems are discussed in terms of our SU(2) Kubo formula, such as the spin susceptibility and the spin current in response to the effective spin-orbit coupling [13,14]. In the spin Hall effect, the spin conductivity is believed to be canceled by the effect of disorder in the two-dimensional electron gas [8]. This is due to the parabolic unperturbed dispersion relation [25]. In such a system, the spin current in response to an external spin-orbit coupling also vanishes. Systems with a nonparabolic unperturbed dispersion relation thus become significant. In such systems, the spin conductivity in response to either U(1) or SU(2) external fields does not vanish. An experimentally accessible case is also given, in which the spin conductivity is related to the dielectric function. We also extend the application of our formula to a high-dimensional representation, namely the spin-3/2 representation, which is related to some important systems, such as the Luttinger model [23] and bilayer systems [24].
The spin conductivity in the Luttinger model, which describes the response to the effective field of structural inversion asymmetry, vanishes.
The paper is organized as follows. In Sec. II, we derive a general Kubo formula with respect to a single-frequency SU(2) external field at zero temperature. In Sec. III, we show that this formulation is consistent with the one obtained by choosing a zero-frequency external field at the very beginning. In Sec. IV, we give applications of our SU(2) Kubo formula to some models in the spin-1/2 representation. In Sec. V, our theory is applied to a system in a high-dimensional representation (i.e., the spin-3/2 representation), and several concrete example models are given. In Sec. VI, we give a brief summary with some remarks. In the appendices, we give the detailed calculations of the correlation functions in the Matsubara formalism.
II. SU(2) KUBO FORMULA AT ZERO TEMPERATURE
A generalized total Hamiltonian for a variety of models to study spin transport can be written as [18] whereV dis is the potential caused by disorders. Throughout this paper, the index l refers to the lth particle, a, b, and c refer to spin space while i and j the spatial space, and repeated indices are summed over. τ a stands for the generators of SU(2) algebra, A and A A a are the U(1) and SU(2) gauge potentials, respectively. Usually these gauge potentials consist of two parts, internal and external fields. In order to derive a general formula for the conductivity in response to an SU(2) external "electric field ", we separate the Hamiltonian (1) into two parts, where the operatorπ π l = p l − e c A int (r l , t) − ηA A a int (r l , t)τ a stands for the dynamical momentum involving internal U(1) and SU(2) potentials if any. Note that this separation is up to the first order of A A a ext (r l , t). The relation between SU(2) "electric fields " and the gauge potentials is given by where η is the coupling constant. The SU(2) "electric field " [18] is expected to be realized by spatially [21] or timely [20] dependent Rashba or Dresselhaus coupling strength. We consider the linear response with respect to the "electric" components of a non-Abelian field, E i = E a i τ a . For simplicity, we take the SU(2) external field to be of single-frequency, namely, The problem involving the external field of an arbitrary form on r only differs from this case by a fourier transform. As is well-known, the external field of frequency(q, ω) actually reads E a i (q, ω) cos(q · r − ωt) which is real valued. The expression in Eq. (4) is convenient for calculation. Thus in the subsequent formulas, only the real parts have the physical meaning. 
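The non-Abelian piece of the relation between the SU(2) "electric field" and the gauge potentials can be made concrete numerically. Besides the Abelian derivative terms, the field strength carries a term proportional to ε^{abc} A^b_0 A^c_i arising from the commutator of the Lie-algebra-valued potentials. The sketch below, with arbitrary made-up constant potentials and conventions fixed only up to signs and factors of the coupling η, checks that this term is exactly a cross product of the color vectors (A^a_0) and (A^a_i): it vanishes when the two potentials point along the same direction in the SU(2) algebra (the effectively Abelian case) and is nonzero otherwise.

```python
import numpy as np

# Pauli matrices: generators of SU(2) in the spin-1/2 representation.
tau = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]], dtype=complex)

def commutator_term(A0, Ai):
    """Color components of [A0.tau, Ai.tau]/(2i); equals the cross product A0 x Ai."""
    M0 = np.einsum('a,aij->ij', A0, tau)
    Mi = np.einsum('a,aij->ij', Ai, tau)
    M = M0 @ Mi - Mi @ M0
    # [a.tau, b.tau] = 2i (a x b).tau, so project back onto the tau basis
    # using Tr(tau^a tau^b) = 2 delta_ab.
    return np.real_if_close(np.trace(tau @ M, axis1=1, axis2=2) / (4j))

# Non-commuting color directions: the extra term in the field strength survives.
assert np.allclose(commutator_term(np.array([1.0, 0.0, 0.0]),
                                   np.array([0.0, 2.0, 0.0])), [0.0, 0.0, 2.0])
# Parallel color directions (effectively Abelian): the extra term vanishes.
assert np.allclose(commutator_term(np.array([1.0, 0.0, 0.0]),
                                   np.array([3.0, 0.0, 0.0])), [0.0, 0.0, 0.0])
```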
Now we choose a gauge that A a 0 = 0, which corresponds to the temporal gauge in the U(1) case, then the SU(2) external "electric field" comes into the Hamiltonian through the vector potential A a i , Hereafter, we omit the subscript specifying the external field for simplicity. Based on the definition for single particle, one can define the SU(2)-current operator for the unperturbed system J a (r) = η 4m l {τ a ,π π l }δ(r−r l )+δ(r−r l ){τ a ,π π l } , (6) where the curl bracket denotes anticommutator and the velocity operator of the lth particle determined by the Heisenberg equation of motion,π π l /m = [r l , H 0 ]/(i ) is matrix-valued for the SU(2) case. In terms of this current, the perturbation Hamiltonian H ′ can be expressed as whereĴ a i (q) is the Fourier image ofĴ a i (r). Clearly, the interaction term is the product of the SU(2) current and the SU(2) "electric field ".
Taking the perturbation of the external field into account, we have Π l =π π l − ηA A a (r l )τ a . Then the total SU(2) current driven by the external SU(2) "electric field " reads where n 0 is the particle density. At zero temperature, this SU(2) current is evaluated for the ground state of the system. In the interaction representation, the state |ψ(t) of the system at time t is related to the eigenvector |φ of H 0 by the S-matrix, i.e., |ψ(t) = S(t, −∞)|φ . Up to the linear order of Then the average of the first term in the total SU(2) current Eq. (8) is given by where Ĵ a i (r, t) 0 has been dropped since no SU(2) current is considered to follow in the absence of the external fields. Together with the second term, we obtain the following expression: Since the conductivity represents the property of the whole system, we need take the average over the system to get the SU(2) conductivity, with V the volume of the system. The spin conductivity here is a tensor in spin space rather than a vector as in the case of linear response to the U(1) external field. As a conventional strategy, a retarded current-current correlation function is thus introduced to calculate this conductivity, where θ(t − t ′ ) is the step function which vanishes unless t > t ′ . The Fourier transform of Eq. (12) is given by Comparing with Eq. (11), we obtain To simplify the calculations, we introduce a Matsubara function Q ab ij (q, iν) which reduces to the retarded correlation function Q ab ij (q, ω) by changing iν to ω + iδ, where T u denotes the u-ordering operator and β = (k B T ) −1 with k B the Boltzmann constant. We thus have derived a generalized Kubo formula for spin transport in response to an external SU(2) "electric field ".
III. AN EQUIVALENT FORMULATION FOR ZERO FREQUENCY
In the previous section, we derived the SU(2) Kubo formula choosing the gauge potential A a 0 = 0. To obtain the dc conductivity, one just needs to take the limit ω → 0. As is well-known in the conventional electrical conductivity, the Kubo formula can also be derived alternately by choosing a constant external field as a start point. The continuity equation for electric charge conservation guarantees the two derivations to be equivalent. Whereas, in the SU(2) case, the current defined by Eq. (6) is not conserved as long as an SU (2) interaction is present. For example, the spin current, a special SU(2) current with η = , is not conserved if there exists the Zeeman term or spin-orbit coupling. In this case, the continuity equation does not hold [18,19], instead, we have the following relation: (16) where σ a (r) = ηψ † (r)τ a ψ(r) and J a (r, t) are the SU (2) density and current respectively, and notations σ = etc. are adopted. Unlike the charge current which is conserved, the spin current is not conserved, thus a natural question is whether the SU(2) Kubo formula we derived in the previous section is still consistent with the other derivation? Now let us choose ∂ 0 A a i = 0 for the zero frequency case, then the SU(2) electric field and the perturbation Hamiltonian are given by and By means of the method suggested by Luttinger, the total SU(2) current can be obtained once the density matrix ρ is introduced. The density matrix including the deviations caused by the perturbation takes the form where ρ 0 refers to the density matrix with respect to the unperturbed Hamiltonian and δρ(t) is brought about by the perturbation one, H ′ . From the equation of motion for the perturbed part of the density matrix, we can obtain a solution for δρ(t) With the help of the density matrix, the SU(2) current can be then evaluated by taking the average where the equilibrium part of the current tr(ρ 0Ĵ a i (r)) is assumed to be zero. 
The derivative of H ′ I with respect to time t is calculated as Using the "continuity-like" equation (16) and integration by parts, we have where we did not write out the arguments in the first line for simplicity. Substituting it into Eq. (22), we obtain Consequently, the dc SU(2) conductivity is obtained from the above equation after integrating r over the volume V , This result is obviously independent on the frequency. It is also consistent with the one which we derived in the previous section once we introduce the representation of the eigenstates |n of H 0 . Note that the spin procession terms, η A i × J i (r, t)−η A 0 × σ(r, t), precisely compensate the second term of Eq. (17), which makes our theory selfconsistent. Since the SU(2) "electric field " includes an extra term of gauge potential in comparison to the U(1) field, the nonconservation of the SU(2) current plays an essential role in guaranteeing the consistency. That is to say, the SU(2) current exactly responds to the SU(2) "electric field " no matter which gauge is chosen.
IV. APPLICATIONS FOR SPIN-1/2 REPRESENTATION
From now on, we give some applications of our SU(2) Kubo formula. In this section, we mainly focus on examples in the spin-1/2 representation, without impurities and in the limit q → 0.
A. Spin susceptibility
Spin is a category of SU(2) entity with η = . The spin degree of freedom has been discussed extensively in recent years for its promising applications. The effective spin-orbit coupling, which emerges significantly in some semiconductors [13,14], is of importance for its possible use in manipulating spin. Using our SU(2) Kubo formula, we can directly calculate the spin susceptibility, which describes the linear response of the spin density to the spin-orbit coupling.
The spin susceptibility χ^ab_i is defined accordingly, where Ŝ^a = Σ_k C†_k τ^a C_k is the spin density. Here we adopted a simplified notation for the operators creating a spin-up particle of momentum k, etc. The corresponding retarded correlation function in the Matsubara formalism is denoted Π^ab_i(iν). Hereafter, we take the unperturbed Hamiltonian to be H_0 = Σ_k C†_k (ε(k) + d_a(k) τ^a) C_k for its elegant form in Green's functions. The second term represents the internal SU(2) field, with d_a the components of this field. This system has two bands, E_− = ε(k) + |d| and E_+ = ε(k) − |d|, with |d| = √(d_a d_a). In the limit ω → 0, the susceptibility is obtained, where n_{F−} and n_{F+} are the Fermi distribution functions and "−, +" label the different bands. This result is antisymmetric in the indices labeling the spin degree of freedom. Using this result, we calculate the spin susceptibilities for two kinds of internal fields, the Rashba and Dresselhaus couplings. These two kinds of couplings dominate in narrow-gap semiconductors such as GaAs, and here we take their two-dimensional (2D) forms to represent the effective spin-orbit couplings in the two-dimensional electron gas (2DEG). In these cases, the components χ^{xy}_i vanish since d_z = 0. The results are shown in Table I, where we have taken the usual parabolic form ε(k) = ℏ²k²/2m.
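The two-band structure quoted above is easy to verify numerically: for H(k) = ε(k)I + d_a(k)τ^a the eigenvalues are ε(k) ± |d(k)|, and for a 2D Rashba field with d(k) = α(k_y, −k_x, 0) (a common convention; the sign pattern may differ from Table I), the splitting is ε(k) ± α|k|. A sketch with made-up parameter values, not material constants:

```python
import numpy as np

# Pauli matrices tau^a.
tau = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]], dtype=complex)

hbar, m, alpha = 1.0, 1.0, 0.3        # illustrative units, not material values
kx, ky = 0.7, -0.4

eps = hbar**2 * (kx**2 + ky**2) / (2 * m)      # parabolic U(1) part
d = alpha * np.array([ky, -kx, 0.0])           # Rashba internal SU(2) field

H = eps * np.eye(2) + np.einsum('a,aij->ij', d, tau)
bands = np.linalg.eigvalsh(H)                  # sorted ascending

# Eigenvalues split symmetrically: eps -/+ |d|, with |d| = alpha * |k|.
kmag = np.hypot(kx, ky)
assert np.allclose(bands, [eps - alpha * kmag, eps + alpha * kmag])
```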
B. Spin conductivity
With η = , the spin current follows from the general definition (6). After calculating the Matsubara function (see Appendix A) and changing iν → ω, we derive the conductivity. This expression manifests that the conductivity is antisymmetric in the spin indices a, b and symmetric in the spatial indices i, j for the parabolic dispersion relation. Note that when the U(1) part of H_0 is parabolic, i.e., ε(k) = ℏ²k²/2m, and d_a is linear in k_a, the summation over k vanishes. Since the conventional spin-orbit couplings are the Rashba and Dresselhaus couplings, which contain no quadratic terms in k, we should consider ε(k) = (ℏ²/2m)[(k_x + c)² + (k_y + c′)²] for nonvanishing results, which represents a shift of the momentum k in the material. Table II shows the spin conductivities for the two kinds of internal fields.
At this stage, it is worthwhile to recall some previous work on the spin Hall effect. Up to now, a general consensus has been reached that the spin conductivity in response to an external Maxwell electric field is exactly canceled by the effect of disorder in a two-dimensional electron gas with spin-orbit coupling. The cancellation is due to the parabolic form of the unperturbed band structure [25]. It is worthwhile to point out that our SU(2) conductivity also vanishes when ε(k) takes the parabolic form, even in the absence of disorder. An essential difference is that our conductivity refers to the linear response to an external Yang-Mills electric field, which is also a vector in the SU(2) Lie algebra space whose bases, the Pauli matrices, anticommute. In any case, systems with a nonparabolic dispersion relation are of great importance, since the conductivity, whether in the usual spin Hall effect with disorder or derived by our SU(2) Kubo formula without disorder, is expected to be observed in experiments.
Finally, we consider an experimentally available case. Since the Rashba coupling strength can be tuned by the gate voltage applied to a 2DEG, we take a Rashba coupling with time-dependent strength as the external SU(2) field and the Dresselhaus coupling as an internal field.
Then we can obtain an ac conductivity depending on the frequency ω. The result reads as given above, where ǫ_D(ω) is the dielectric function caused by the Dresselhaus spin-orbit coupling [26]. This dielectric function is a macroscopic quantity and can be measured directly. Carrying out the integration over k gives a resonant result, where k_{F−} and k_{F+} refer to the Fermi momenta of the two bands. The same resonance is also shown in Ref. [27]. Other components are given by σ^{zx}_{xy} = (1/2)σ^{zy}_{xy}, while σ^{zx}_{yy} and σ^{zy}_{xy} differ from them by c → c′.
V. APPLICATIONS FOR SPIN-3/2 REPRESENTATION
In the previous section, we discussed several examples using the SU(2) Kubo formula in the spin-1/2 representation. It is well known that there exist many important systems which carry the spin-3/2 representation of the SU(2) algebra, for example, the Luttinger model [23] containing the intrinsic spin-orbit coupling, and bilayer systems [24] taking the spin degree of freedom into account. Thus it is worthwhile to extend our discussion to high-dimensional representations, such as the spin-3/2 representation. The examples mentioned above are also discussed, which may be instructive for experiments.
Since Γ^α, Γ^{αβ} together with the identity I span the space of 4×4 Hermitian matrices, one can write out a general Hamiltonian in the spin-3/2 representation in terms of these gamma matrices, where C†_k = (C†_{k,1}, C†_{k,2}, C†_{k,3}, C†_{k,4}), with the second index referring to either spin-band or spin-layer labels. Here we do not include the linear combination of Γ^{αβ}, which would make the Green's functions difficult to calculate. For this unperturbed Hamiltonian, there exist two types of perturbation part H′: one is constructed from Γ^{αβ} and the other from Γ^α. The problem of the spin current in the Luttinger model in response to the effective field caused by structure inversion asymmetry is of the first type. In this case, the structure inversion asymmetry is taken as the perturbation. The linear response of the spin current to H′ then follows, and in calculating the retarded correlation function Q^ab_i(iν) we encounter the Matsubara function G(k, iω_n), for which the detailed calculation is given in Appendix B.
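The algebraic statement opening this section is easy to verify numerically. One concrete choice, a Dirac-type construction from Kronecker products of Pauli matrices (the paper's own basis may differ by a relabeling), gives five mutually anticommuting 4×4 Hermitian matrices Γ^α with {Γ^α, Γ^β} = 2δ^{αβ}I; the ten Γ^{αβ} = [Γ^α, Γ^β]/(2i) then complete, together with Γ^α and the identity, a 16-dimensional basis of the 4×4 Hermitian matrices:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# One possible Clifford basis for the spin-3/2 (4x4) representation.
Gamma = [np.kron(sx, sx), np.kron(sx, sy), np.kron(sx, sz),
         np.kron(sy, I2), np.kron(sz, I2)]

# Check {Gamma_a, Gamma_b} = 2 delta_ab * I for all pairs.
for a in range(5):
    for b in range(5):
        anti = Gamma[a] @ Gamma[b] + Gamma[b] @ Gamma[a]
        assert np.allclose(anti, 2 * (a == b) * np.eye(4))

# Gamma_ab = [Gamma_a, Gamma_b]/(2i) are Hermitian; 1 + 5 + 10 = 16 matrices
# are linearly independent, hence span the 4x4 Hermitian matrices, as used
# for the general Hamiltonian above.
Gab = [(Gamma[a] @ Gamma[b] - Gamma[b] @ Gamma[a]) / 2j
       for a in range(5) for b in range(a + 1, 5)]
basis = [np.eye(4, dtype=complex)] + Gamma + Gab
flat = np.array([M.flatten() for M in basis])
assert np.linalg.matrix_rank(flat) == 16
```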
Since the traces of gamma matrices are always real, the appearance of two Γ^{αβ} matrices makes Eq. (40) real, and the summation of the Matsubara function also gives no imaginary contribution after changing iν → ω. This directly results in a vanishing spin conductivity.
The second type of perturbation is constructed from Γ_α; we will discuss some concrete examples later. The SU(2) Kubo formula is then
The corresponding retarded correlation function is given by After changing iν → ω and taking the limit ω → 0, we obtain the dc conductivity. In this model, the tunneling between the two layers is included, with ξ accounting for the tunneling strength.
We take the SU(2) flux to be the external perturbation, H′ = φ σ_z ⊗ σ_z, i.e., h_5 = φ. A direct calculation of the spin conductivity gives the following result: In the limit ξ → 0, σ^{x5}_x reduces to −√3/(4πα). As another example, we take the tunneling term to be the perturbation, that is, H′ = ξ′ σ_x ⊗ I and H_0 = ε(k) + α σ_z ⊗ (k_x σ_y − k_y σ_x); correspondingly, h_4 = ξ′. Then we have the following result:
VI. SUMMARY AND REMARKS
In this paper, we have generalized the Kubo formula to describe the linear response of the SU(2) current to an external SU(2) "electric field", in analogy with the traditional formula describing the response to an external U(1) field. Starting from two distinct routes with different gauge fixings, we derived the SU(2) Kubo formula and showed that the two approaches are equivalent. The non-Abelian character of the SU(2) electric field involves one more gauge-potential term in comparison to the U(1) case, and this term precisely compensates the nonconserved part in the SU(2) continuity-like equation for the SU(2) current.
Motivated by the interest in spin transport, we applied our formula to calculate the spin susceptibility and spin conductivity in systems containing a Rashba or Dresselhaus field. The results show that in the usual system, where ε(k) = ℏ²k²/(2m), the spin susceptibility is constant. However, the spin conductivity vanishes, much like the case in the spin Hall effect, where the spin conductivity in response to the external electric field vanishes in the presence of disorder. To obtain a nonvanishing spin conductivity, systems with a nonparabolic unperturbed band structure are necessary, and the spin conductivity, whether in response to the U(1) or the SU(2) electric field, is expected to be observable in such systems. Moreover, we also discussed an experimentally accessible case: in response to a time-dependent Rashba field, the spin conductivity is related to the dielectric function, which can be measured directly.
Generalized to higher-dimensional representations, our SU(2) Kubo formula can be applied to the Luttinger model as well as to the bilayer spin Hall effect. The spin conductivity in response to the effective field caused by structural inversion asymmetry in the Luttinger model always vanishes.
The work was supported by NSFC Grant No.10225419.
APPENDIX B: SPIN CONDUCTIVITY IN SPIN-3/2 REPRESENTATION
Before we calculate the spin conductivity in the spin-3/2 representation, it is wise to warm up with the Clifford algebra. The 4 × 4 gamma matrices Γ_α are constructed from the 2 × 2 sigma matrices and satisfy {Γ_α, Γ_β} = 2δ_αβ and Γ_1Γ_2Γ_3Γ_4Γ_5 = −1. Using these gamma matrices, one can also compose ten antisymmetric matrices Γ_αβ = (1/2i)[Γ_α, Γ_β]. Together with the identity matrix, Γ_α and Γ_αβ span the space of 4 × 4 Hermitian matrices. The SU(2) generators τ_a in the spin-3/2 representation can also be expressed as linear combinations of Γ_αβ, i.e., τ_a = (1/4i) L^a_{αβ} Γ_{αβ}. Note that the L^a are antisymmetric and satisfy the commutation relation [L^a, L^b] = iε_abc L^c; thus they form a representation of the SU(2) algebra, and we can calculate the trace term (1/4i) tr g(k, iω_n) + d_µ τ_µ ∂ε(k). Note that the L^a are all imaginary; the dc conductivity is then given by
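The Clifford-algebra relations quoted above can be verified numerically. The sketch below assumes one standard (non-unique) tensor-product construction of the five Γ matrices from Pauli matrices; the paper does not fix a particular basis, so this construction is our own choice for illustration:

```python
import numpy as np

# Pauli matrices and the 2x2 identity
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# One standard construction of five mutually anticommuting
# Hermitian 4x4 gamma matrices (the basis choice is ours).
G = [np.kron(sx, sx), np.kron(sy, sx), np.kron(sz, sx),
     np.kron(I2, sy), np.kron(I2, sz)]

# Clifford algebra: {G_a, G_b} = 2 * delta_ab * I4
for a in range(5):
    for b in range(5):
        anti = G[a] @ G[b] + G[b] @ G[a]
        assert np.allclose(anti, 2 * np.eye(4) * (a == b))

# Product Gamma_1 ... Gamma_5 = -1 (times the identity)
assert np.allclose(np.linalg.multi_dot(G), -np.eye(4))

# The ten antisymmetric combinations G_ab = (1/2i)[G_a, G_b] are Hermitian
for a in range(5):
    for b in range(a + 1, 5):
        Gab = (G[a] @ G[b] - G[b] @ G[a]) / (2j)
        assert np.allclose(Gab, Gab.conj().T)
print("Clifford relations verified")
```

Any unitary rotation of this basis satisfies the same relations, so the checks above are basis-independent statements about the algebra itself.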
"year": 2006,
"sha1": "5574fcfc20101387940753db43d5d7094b406622",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0605067",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e04786d0862b686295322eb2aad652ed3e097663",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
The Relationship Between Bullying Victimization and Cyber Aggression Among College Students: The Mediating Effects of Relative Deprivation and Depression
Introduction: Based on the general aggression model, the present study aims to examine the relationship between bullying victimization and cyber aggression as well as the mediating effects of perceived relative deprivation and depression on this relationship. Methods: The present study employed a 3-wave longitudinal method featuring 6-month intervals to investigate 795 Chinese college students (476 female; M_age = 19.67). Structural equation modeling was used to evaluate the relationships among bullying victimization, perceived relative deprivation, depression and cyber aggression. Results: The results suggested that bullying victimization positively and significantly predicts cyber aggression in college students and that both perceived relative deprivation and depression play positive mediating roles in this relationship. Moreover, bullying victimization affects cyber aggression via the chain mediating roles of perceived relative deprivation and depression. Discussion: This study offers valuable insight into ways of considering perceived relative deprivation and depression in the context of prevention and intervention strategies to help attenuate cyber aggression among victims of bullying.
Introduction
Bullying victimization in a school context is defined as a phenomenon in which an individual is exposed, repeatedly and over time, to negative actions (ie, to actions that inflict injury or discomfort upon another) on the part of one or more other students. 1 Bullying has long been perceived as a widespread phenomenon in schools. Worldwide, approximately 32% of students experience bullying from others; 2 a comparative study of bullying victimization in schools across the US and China found that 22.05% of US students and 21.77% of Chinese students had suffered from bullying. 3 In addition, evidence from a review of the literature indicated that bullying was a prevalent issue for college students, as approximately 30-40% of college students reported being victimized by bullying. 4 Notably, predominant research concerning college students has shown that individuals who experience bullying victimization are vulnerable to a wide range of psychological and behavioral problems, in which context the impact of bullying on aggression is receiving increasing attention from researchers. [5][6][7] Compared with populations at other developmental stages, college students, who are mostly emerging adults, are more likely to engage in cyber aggression rather than traditional aggression. 8 Cyber aggression refers to aggressive behavior that takes place via an online platform, such that individuals harm others on purpose and repeatedly by taking advantage of the convenience and anonymity of cyberspace. 9 Compared with traditional aggression, cyber aggression is characterized by features such as anonymity, rapid spread, an unlimited potential audience, a lack of social cues, limited social control and decreased inhibition, all of which contribute to the higher vulnerability to cyber aggression faced by college students. 
10,11 Previous studies have revealed that approximately 59.5% of college students have been victims or perpetrators of cyber aggression, thus indicating the prevalence of cyber aggression in this population. 12,13

The General Aggression Model (GAM) is a comprehensive framework that explains in detail how personal and situational factors affect the occurrence of aggressive behavior by influencing an individual's present internal state, specifically by altering a person's affect, cognition, and arousal. 14 According to GAM, aggressive cues might function as situational inputs and thus influence aggressive behavioral outcomes by changing an individual's cognitions (eg, perceived relative deprivation), feelings (eg, depression), and arousal levels.
Although multiple empirical studies have supported GAM, 8,[15][16][17] the majority of the extant research has focused on the collective effect of input variables and present internal state variables, and relatively little attention has been given to the way in which input variables influence present internal state variables and the manner in which present internal state variables in turn influence aggressive behavior. In fact, the present internal state of an individual is highly susceptible to influence by input variables. 18,19 Therefore, based on GAM, this study seeks to explore the underlying mechanisms of chain mediation by which situational factors affect aggression among college students via psychological factors by reference to individuals' internal states.
Bullying Victimization Affects Cyber Aggression
The external factors affecting aggression mainly stem from adverse environmental stressors in the individual's context, 20 particularly the individual's exposure to violence in daily life. 21 GAM proposes that consistent exposure to violent information, such as in the case of bullying victimization, predisposes individuals to activate and reinforce aggressive schemas and automatizes related associations, thus increasing the likelihood of aggression. 22 According to the frustration-aggression hypothesis, 23 frustration always leads to some form of aggression, and the more lenient the anticipated punishment is, the more likely aggression is to occur. By remaining anonymous online, the perpetrators of cyberbullying face less anticipated punishment, which encourages the victims of bullying to perpetrate cyberbullying themselves. 5,22 Studies have supported that bullying victimization in school, as a frustrating experience for college students, can easily elicit aggression from victimized individuals. 24,25 Many studies have found that some victims of school bullying are also perpetrators of cyberbullying. 5,26-28 A meta-analysis of cyberbullying revealed that bullying victimization is significantly and positively associated with cyber aggression. 25,28 Longitudinal studies have also supported the longitudinal predictive effect of bullying victimization on cyber aggression. 29 Taken together, these findings suggest that the experience of bullying victimization in college may have a positive impact on cyber aggression.
While a large body of research has examined the relationship between traditional bullying and cyberbullying and the corresponding mechanisms, 30-32 very little research has investigated the underlying mechanism through which bullying victimization affects cyber aggression. Based on GAM, the cognitive and affective factors of individuals might play a mediating role in the effects of bullying victimization on cyber aggression.
The Mediating Role of Relative Deprivation
Relative deprivation refers to a subjective experience featuring negative emotions and a perception of being at a disadvantage when compared with others. 33,34 Social comparison is key to the development of relative deprivation. 35 Victims of bullying may suffer from marginalization in the peer group, 36 which in effect puts them at a disadvantage in social comparison and produces feelings of unfairness, 37 matching the conditions of relative deprivation. 38 Meanwhile, bullying victimization could have a long-term negative effect on self-esteem, 39 which can render victims of bullying vulnerable to social comparison and elicit more experiences of discrimination, disadvantage and social injustice, thus leading to feelings of relative deprivation. 40 The link between bullying victimization and relative deprivation has also received empirical support regarding college students in China. 41 Classical relative deprivation theory proposes that high levels of relative deprivation affect an individual's psychological development, leading to behavior such as aggression. 42 The frustration-aggression hypothesis also supports relative deprivation as a source of frustration that creates aggressive inclinations. 23,34 While previous research has supported the predictive role of relative deprivation in offline aggressive behavior, 43 this study places more emphasis on online deviant behaviors caused by relative deprivation, as Internet use gratifies the subjectively deprived individual's desire to escape from reality. 44 Studies have demonstrated the positive relationship between relative deprivation and online deviant behavior, including online gaming addiction and online flaming. 45,46 Therefore, a similar correlation with cyber aggression can be expected as well. Accordingly, we propose that experiences of bullying victimization are predictive of cyber aggression by increasing feelings of relative deprivation.
The Mediating Role of Depression
In addition to the cognitive factor of relative deprivation, depression, as a common negative emotion, may also play a mediating role in the effects of bullying victimization on cyber aggression. General strain theory suggests that negative stimuli (eg, bullying victimization) produce negative emotions (eg, depression) in individuals, which in turn lead to the development of aggression. 47 A retrospective study conducted in 2021 revealed that college students who had previously suffered from cyberbullying exhibited higher levels of depressive symptoms. 48 Studies have also supported that cyberbullying victimization significantly and positively predicts depression, 5 even after controlling for traditional bullying victimization. 49 However, inconsistent findings have been reported by previous studies regarding the effects of depression on aggression. Some research has found that depression does not increase aggression in individuals. 50,51 The reason for this finding may be that a typical symptom of depression is a pronounced sense of tiredness and exhaustion, 52 which causes such individuals to be less able to resort to violence, even when they are aware of potential harms from the outside world. However, other research has reported that depression increases the likelihood of aggression. A review conducted in 2013 supported the claim that depression acts as a risk factor for aggression; 53 evidence has also suggested that depression is related to higher levels of self-directed aggression (eg, nonsuicidal self-injury) in college students. 54 Hence, we propose that the experience of bullying victimization may affect cyber aggression by enhancing the individual's level of depression.
In addition, perceived relative deprivation is also considered to be a factor that can affect depression. Previous research has found that perceived relative deprivation is positively associated with depressive symptoms and might act as a means of enhancement; 55 after controlling for family income, this connection remains among the adolescent population; 56 the greater the perceived relative deprivation of college students is, the higher their risk of depression. 57 Therefore, we propose that experiences of bullying victimization may act as a predictor of cyber aggression via the chain mediation of relative deprivation and depression.
The Present Study
In summary, based on GAM, the frustration-aggression hypothesis, classic relative deprivation theory and general strain theory, the present study employed longitudinal tracking data to explore the impact of bullying experiences on cyber aggression among college students and to examine the mediating effects of relative deprivation and depression on this relationship.
Specifically, we tested the following hypotheses (H): (H1) Bullying victimization is positively related to cyber aggression. (H2) Relative deprivation mediates the relationship between bullying victimization and cyber aggression.
(H3) Depression acts as a mediator in the association between bullying victimization and cyber aggression. (H4) Bullying victimization indirectly predicts cyber aggression via the chain mediation of relative deprivation and depression.
Since a literature review has revealed mixed findings regarding the relationship between gender and bullying victimization and demonstrated the variability of bullying victimization across ages, we also included gender and age as covariates in the model. 58 This study contributes to improving our understanding of the effect of bullying victimization experiences on cyber aggression and the underlying mechanisms of this relationship, and it enriches theories related to the development of aggression. In addition, the study has important implications for the prevention of the adverse consequences of bullying and the control of cyber aggression among college students.
Participants and Procedures
We used a longitudinal method to collect data over three waves with six-month intervals; the data concerned college students from five universities located in different parts of mainland China, and the data collection took place in April 2019 (Time 1), October 2019 (Time 2), and April 2020 (Time 3). Specifically, 2445 college students completed a survey concerning bullying victimization at T1, 2150 college students completed a survey pertaining to relative deprivation and depression at T2, and 2181 college students completed a survey focusing on cyber aggression at T3. After removing invalid samples (ie, students who did not complete all waves of the survey), a total of 795 students who participated in the complete survey were included; 476 of these students (59.9%) were female, and the mean age of the sample was 19.67 (SD = 1.43), with a range of 17-25 years. The purpose of the study was explained, and written informed consent was obtained from all student participants. The study protocol was approved by the Research Ethics Committee of Beijing Normal University, which confirmed that all research processes related to this study were in accordance with the relevant ethical standards.
Measures Perceived Relative Deprivation
We assessed the perceived relative deprivation of participants by adapting the four-item Relative Deprivation Scale developed by Ma, 59 which showed excellent reliability and validity for use among Chinese populations. Response options for each item were scored on a 6-point Likert-type scale ranging from 1 (strongly disagree) to 6 (strongly agree); higher scores indicated stronger feelings of relative deprivation. A sample item included in the scale is "My life should have been better than it is now given all the work and dedication I have contributed". In the current study, Cronbach's alpha for this scale was 0.86.
Depression
We evaluated participants' levels of depressive symptoms by reference to the Center for Epidemiological Study -Depression Scale (CES-D). 60 The Chinese version of this scale, as revised by Chen et al, showed good reliability and construct validity for use among Chinese populations. 61 Each of the 20 items included in the measure was scored on a 4-point Likert-type scale ranging from 0 (never) to 3 (always). Participants with higher total scores were considered to have higher depressive symptoms. In this study, Cronbach's α for this scale was 0.92.
Cyber Aggression
The Adolescent Cyber Aggression Scale measures the level of individuals' engagement in reactive aggression online (eg, "I often insult and scold others when playing online games" and "I exclude someone from our network of friends"). 62 The scale showed good reliability and construct validity for use among Chinese populations. 63,64 Participants were asked to rate 15 propositions on a four-point Likert scale, with answers ranging from 1 (never) to 4 (always); higher scores indicated higher levels of engagement in cyber aggression. In this study, the internal consistency of the scale was excellent (Cronbach's alpha = 0.95).
Bullying Victimization
The frequency of participants' experiences of bullying victimization over the past 6 months was evaluated using a Chinese version of the Delaware Bullying Victimization Scale-Student (DBVS-S). 65,66 This 12-item inventory contains three dimensions indexing the participants' victimization in terms of verbal, physical, and relational bullying. Sample items include "Some of my classmates have said some mean things to me", "I have been pushed and shoved by others" and "Some students have told or urged others not to be friends with me". Each of the items was scored on a 6-point Likert scale, with answers ranging from 1 (never) to 6 (everyday); higher total scores indicated higher levels of exposure to bullying victimization. In the current study, the scores of this scale showed high internal consistency (Cronbach's alpha = 0.97).
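Each scale above is summarized by Cronbach's alpha as its reliability index. As an illustration of how this internal-consistency statistic is computed from the variance decomposition of item scores (the toy Likert responses below are invented for demonstration and are not the study's data), a minimal sketch:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) response matrix."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)    # variance of the total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy 6-point Likert responses (invented for illustration only)
responses = np.array([
    [5, 4, 5, 4],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
])
print(round(cronbach_alpha(responses), 3))  # -> 0.952
```

Higher alpha reflects stronger positive covariation among items relative to their individual variances; values near the 0.86-0.97 range reported above indicate highly consistent scales.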
Statistical Analyses
All analyses in this study were conducted using SPSS Version 22.0 and Amos 17.0. The result of Harman's single-factor test showed that the variance explained by both the rotated and unrotated first factors was below the threshold of 40%, suggesting that no significant common method bias affected this study. 67 In addition, no missing values were found in the current sample, as participants were required to answer all questions in order to submit the survey on the online questionnaire system that we adopted. It should also be noted that despite the loss of samples due to reasons such as graduation and hindrances related to teaching arrangements, no significant differences were found in the key variables between participants who participated in all waves of surveys and those who did not.
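Harman's single-factor test, used above to screen for common method bias, checks whether a single unrotated factor explains more than a threshold (here 40%) of the variance across all items. A minimal sketch using a principal-component decomposition of standardized items (the synthetic data and factor structure below are assumptions for illustration, not the study's items):

```python
import numpy as np

def first_factor_share(items):
    """Share of total variance carried by the first principal component
    of the standardized (z-scored) item matrix, i.e. the unrotated solution."""
    X = np.asarray(items, dtype=float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    s = np.linalg.svd(Z, compute_uv=False)  # singular values of Z
    return (s[0] ** 2) / (s ** 2).sum()     # s^2 are component variances

rng = np.random.default_rng(0)
n, k = 600, 8
noise = rng.normal(size=(n, k))                  # independent items
common = rng.normal(size=(n, 1))                 # one shared "method" factor
biased = common + 0.5 * rng.normal(size=(n, k))  # factor-dominated items

print(first_factor_share(noise) < 0.40)   # no dominant single factor
print(first_factor_share(biased) > 0.40)  # one factor dominates
```

A first-factor share below the 40% threshold, as in the study, is taken as evidence that no single method factor accounts for the bulk of the covariance.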
First, descriptive statistics were collected and correlation analysis was conducted to clarify the preliminary associations among variables. Based on those data, we examined the direct effect of bullying victimization on cyber aggression and the mediating roles played by perceived relative deprivation and depression by developing a structural equation model (SEM) after controlling for age and gender. The bullying victimization latent variable was evaluated in terms of relational bullying, physical bullying and verbal bullying, whereas the cyber aggression latent variable was evaluated in terms of explicit aggression and relational aggression. Finally, 5000-fold bias-corrected bootstrap analyses were performed to test the mediating effects.

Table 1 provides an overview of the descriptive statistics, including the means, standard deviations, and intercorrelations among study variables. The results revealed that gender differences were significant in both bullying victimization and cyber aggression. Specifically, women's vulnerabilities to bullying victimization and cyber aggression were both significantly greater than men's. There was also a significant negative association between age and bullying victimization. In addition, all of the psychological variables were found to be significantly and positively correlated with each other.
Structural Equation Model Analyses
First, we developed a structural equation model that comprised the two observed variables and two latent variables (ie, bullying victimization and cyber aggression). After taking gender and age at T1 as covariates, the direct effect model between bullying victimization at T1 and cyber aggression at T3 fit the data well, exhibiting the following fit indices: χ2/df = 3.512, CFI = 0.993, TLI = 0.988, RMSEA (90% CI) = 0.056 (0.039-0.074). Specifically, the results indicated that bullying victimization was a significantly positive predictor of cyber aggression at T3 (β = 0.17, p<0.001).
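As a side note on the fit indices above, RMSEA can be recovered from the model χ², its degrees of freedom df, and the sample size N via RMSEA = sqrt(max(χ² − df, 0) / (df(N − 1))). Because df cancels once χ² is written as (χ²/df)·df, the reported χ²/df = 3.512 together with N = 795 pins down RMSEA ≈ 0.056 regardless of the (unreported) df, matching the value given. The sketch below assumes the N − 1 convention (some software divides by N) and an arbitrary df for illustration:

```python
import math

def rmsea(chi2, df, n):
    """RMSEA from the model chi-square, degrees of freedom, and sample size
    (using the N - 1 convention; some software divides by N instead)."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# A model whose chi-square does not exceed its df gives RMSEA = 0
print(rmsea(chi2=10.0, df=10, n=795))  # -> 0.0

# df cancels when chi2 = (chi2/df) * df, so the reported chi2/df = 3.512
# with N = 795 fixes RMSEA no matter which df is assumed here.
print(round(rmsea(chi2=3.512 * 20, df=20, n=795), 3))  # -> 0.056
```

This also explains why RMSEA and χ²/df move together across the direct-effect and mediation models reported in this section.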
During the second step of the analysis, perceived relative deprivation and depression were included as mediators in the correlation between bullying victimization and cyber aggression to construct a mediation model. The model fit was acceptable: χ2/df = 2.864, CFI = 0.991, TLI = 0.986, RMSEA (90% CI) = 0.048 (0.035-0.062). Compared with the direct effects model, the fit indices of the mediation model were lower, which was largely due to the increase in the degrees of freedom as we added the mediating variables to the model. 68,69 The results showed in detail that bullying victimization significantly and positively predicts perceived relative deprivation at T2, depression at T2 and cyber aggression at T3; perceived relative deprivation has a significantly positive effect on depression at T2; and bullying victimization and depression at T2 each have a significant and positive effect on cyber aggression at T3 (See Figure 1).
Finally, 5000-fold bias-corrected bootstrap analyses were conducted to evaluate the mediating effects of perceived relative deprivation and depression in further detail. The results demonstrated that bullying victimization has an indirect effect on cyber aggression via depression. Additionally, perceived relative deprivation and depression have a chain-mediating effect on the relationship between bullying victimization and cyber aggression.
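The bootstrap procedure used here deems an indirect effect significant when its confidence interval excludes zero. The following sketch illustrates the core idea with a plain percentile bootstrap on synthetic observed variables and OLS path estimates; it simplifies the paper's approach (latent-variable SEM, bias-corrected intervals, 5000 draws), and all data values are simulated assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def coefs(y, *preds):
    """OLS coefficients of y on an intercept plus the given predictors."""
    A = np.column_stack([np.ones_like(y)] + list(preds))
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1:]  # drop the intercept

# Synthetic X -> M -> Y chain with a true indirect effect a*b = 0.25
n = 300
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)             # path a = 0.5
y = 0.5 * m + 0.2 * x + rng.normal(size=n)   # path b = 0.5, direct c' = 0.2

boot = np.empty(2000)                        # the paper draws 5000 samples
for i in range(boot.size):
    idx = rng.integers(0, n, size=n)
    (a_hat,) = coefs(m[idx], x[idx])             # M regressed on X
    b_hat, _ = coefs(y[idx], m[idx], x[idx])     # Y on M, controlling for X
    boot[i] = a_hat * b_hat                      # indirect effect a*b

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
print(lo > 0)  # CI excludes zero: mediation judged significant
```

Resampling respondents (rather than residuals) preserves the joint distribution of the paths, which is why the bootstrap handles the non-normal sampling distribution of the product a·b better than a normal-theory Sobel test.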
Direct Effect of Bullying Victimization on Cyber Aggression
Our study showed that, in line with previous research, 24,25,28,29 bullying victimization has a significant direct effect on cyber aggression, thus supporting the frustration-aggression hypothesis and GAM. Bullying victimization has been perceived as prevalent among college students. 70 However, the unpleasant feelings and negative cognitions produced by bullying victimization can act as a source of frustration and can accumulate, thus enhancing the victim's tendencies toward aggression. 23 College students are recognized as among the most frequent and least supervised Internet users and are thus at heightened risk of problematic Internet use (in particular, cyber aggression). Characterized by anonymity, cyber aggression is associated with less anticipated punishment, which makes the online platform an attractive and convenient place for college students to perpetrate bullying without fear of retaliation. 5
Mediating Roles of Perceived Relative Deprivation and Depression
These results also support our hypotheses that both perceived relative deprivation and depression serve as significant mediators in this context. First, consistent with general strain theory 47 and previous research, 5,48,49 bullying victimization is related to higher levels of depression, which in turn function as stressors and contribute to cyber aggression. However, this finding is inconsistent with the conclusions of the studies by Benarous et al and Yu et al, 50,51 who argued that depression is not correlated with a higher inclination toward aggression. One possible explanation for this discrepancy lies in the different characteristics of the samples used in these studies. Studies that have reported evidence supporting our conclusion have been based on samples of college student populations, while the aforementioned studies reporting contradictory findings have used data drawn from clinical patient populations. Most college students are midway through the process of developing an independent personality, which makes them more susceptible to depressive thoughts regarding themselves. Meanwhile, the anonymity of online media compensates for the imbalance of power between victims and perpetrators, making it easier for originally disadvantaged individuals to impulsively turn to cyber aggression as an outlet for negative emotions.
Second, bullying victimization indirectly affects cyber aggression by increasing perceived relative deprivation, which is consistent with previous findings. 41 College students suffering from bullying victimization are likely to be marginalized in the peer group. 36 Given the vital importance of peer relationships for college students, this marginalization may predispose them to lower self-esteem; 39 on the other hand, they may also feel as if they are at a disadvantage because of this marginalization and experience injustice during peer comparison, thereby leading to a sense of relative deprivation. 40 To either avoid this negative cognition or turn this perceived disadvantage around, affected individuals may use cyber aggression as a coping strategy. 45 However, we found that this adverse perception may not directly predict the act of aggression, as the path from relative deprivation to cyber aggression did not reach significance in this study. Despite that, we found a chain mediating effect by which perceived relative deprivation affects depression with respect to the relationship between bullying victimization and cyber aggression. This finding is in line with the conclusions of previous studies. [55][56][57] Perceived relative deprivation induced by bullying victimization may contribute to an increase in negative automatic thoughts toward the self. 55 Faced with pressure from both study and career path decisions, college students who feel relatively deprived are more likely to develop lower expectations regarding the future, thereby contributing to higher levels of depressive symptoms, which in turn are externalized as aggressive behavior, in this study specifically as cyber aggression.
The present study tested and extended the existing theory of GAM. According to GAM, bullying victimization can function as a situational input and influence internal cognitive and affective states; that is, in this study, such victimization produces a sense of relative deprivation and a feeling of depression, and the internal state variables that initially occur in this context can also influence subsequent variable(s) (eg, cognition of relative deprivation can influence depressive affect) and ultimately contribute to the behavioral outcome of aggression. 14 Despite the particular proneness of college students to cyber aggression, relatively few studies have examined its relation with bullying victimization based on GAM. 70 Therefore, the present study extends the GAM theory by applying it to a broader context of aggression (ie, cyber aggression).
Implications, Strengths, Limitations, and Directions for Future Research
This study uncovered the effect of bullying victimization on cyber aggression and the mechanisms underlying this relationship among college students. Cyber aggression may not only be committed by students who have acted as perpetrators in the context of traditional bullying but also by those who used to be victims, who are likely to adopt the opposite role in cyberbullying contexts. 5 Therefore, it is important to focus on both sides of bullying and to develop targeted intervention strategies to reduce cyber aggression. Most college students are young adults, and their experiences and mental well-being at this stage have an impact not only on their academic performance but also on their future growth. By exploring the relationship between bullying victimization and cyber aggression as well as the mechanism underlying this relationship, this study can provide theoretical support for the prevention of and interventions in school bullying in the future.
The chain mediating effect we found in the study suggests that by reducing the perception of relative deprivation, it is feasible to reduce the depressive symptoms of bullying victims and hence their likelihood of aggression. To prevent cyber aggression among college students with a history of bullying victimization, on the one hand, more attention should be given to their perceptions of relative deprivation, and timely intervention and control measures, such as fostering independent personality and creating a good family atmosphere, 43 should be implemented to lower their levels of perceived relative deprivation; on the other hand, attention should also be given to the mental health of college students with experiences of bullying victimization, and guidance should be provided to mitigate their negative emotions through social support and mindfulness. 71,72 The present study has several strengths. First, this study enhances our understanding of the relationship between bullying victimization and cyber aggression among college students. Second, this study offers empirical support for utilizing GAM theory as the predominant theoretical lens to explore the development of cyber aggression among college students, and it contributes to existing knowledge concerning the mechanism of GAM by investigating the interactions among situational factors and multiple present internal state variables. Moreover, a longitudinal design featuring three waves was used in this study, thus making the causal association between bullying victimization and cyber aggression and the underlying mechanism of the chain mediation effect of perceived relative deprivation and depression more rigorous and allowing us to draw more reliable conclusions.
Nevertheless, several limitations of this study should be noted. First, all data used in this study were collected via self-report questionnaires, which may lead to a certain degree of social desirability bias. Future research should employ various methods, such as peer nominations and observations, to examine the role played by bullying victimization in cyber aggression
via perceived relative deprivation and depression. Second, this study included only two covariates (ie, gender and age) and omitted other potential variables, such as trauma exposure, personality, and family socioeconomic status. Additional variables should be considered in future studies to enhance the robustness of the conclusions. Finally, this study examined only the outcomes of cyber aggression rather than those of noncyber aggression, but there may be significant differences between these varieties. To extend our findings, future studies should explore the mechanisms underlying the relation between bullying victimization and noncyber aggression in further detail.
Ethics Approval
Prior to data collection, this study was approved by the ethics committee of the Faculty of Psychology, Beijing Normal University, and it was conducted in accordance with the Declaration of Helsinki.
Informed Consent
Written informed consent forms were obtained from each participant in this study.
Mutating for Good: DNA Damage Responses During Somatic Hypermutation
Somatic hypermutation (SHM) of immunoglobulin (Ig) genes plays a key role in antibody mediated immunity. SHM in B cells provides the molecular basis for affinity maturation of antibodies. In this way SHM is key in optimizing antibody dependent immune responses. SHM is initiated by targeting the Activation-Induced Cytidine Deaminase (AID) to rearranged V(D)J and switch regions of Ig genes. The mutation rate of this programmed mutagenesis is ~10−3 base pairs per generation, a million-fold higher than the non-AID targeted genome of B cells. AID is a processive enzyme that binds single-stranded DNA and deaminates cytosines in DNA. Cytosine deamination generates highly mutagenic deoxy-uracil (U) in the DNA of both strands of the Ig loci. Mutagenic processing of the U by the DNA damage response generates the entire spectrum of base substitutions characterizing SHM at and around the initial U lesion. Starting from the U as a primary lesion, currently five mutagenic DNA damage response pathways have been identified in generating a well-defined SHM spectrum of C/G transitions, C/G transversions, and A/T mutations around this initial lesion. These pathways include (1) replication opposite template U generates transitions at C/G, (2) UNG2-dependent translesion synthesis (TLS) generates transversions at C/G, (3) a hybrid pathway comprising non-canonical mismatch repair (ncMMR) and UNG2-dependent TLS generates transversions at C/G, (4) ncMMR generates mutations at A/T, and (5) UNG2- and PCNA Ubiquitination (PCNA-Ub)-dependent mutations at A/T. Furthermore, specific strand-biases of SHM spectra arise as a consequence of a biased AID targeting, ncMMR, and anti-mutagenic repriming. Here, we review mammalian SHM with special focus on the mutagenic DNA damage response pathways involved in processing AID induced Us, the origin of characteristic strand biases, and relevance of the cell cycle.
INTRODUCTION
Somatic hypermutation (SHM) occurs in antigen-activated germinal center B cells and contributes to antibody affinity maturation (1)(2)(3)(4)(5)(6)(7)(8). Class switch recombination (CSR) involves a deletional rearrangement process within the immunoglobulin (Ig) heavy constant region, which enables B cells to switch the isotype of the clonotypic antibody, adapt its effector functions, and alter its tissue distribution (9)(10)(11). SHM and CSR are both initiated by the Activation-Induced Cytidine Deaminase (AID), which has a preference for targeting single-stranded DNA of rearranged Ig genes (5,(12)(13)(14)(15)(16)(17). SHM correlates with transcription, and promoter-proximal transcriptionally active regions in immunoglobulin genes appear to be preferred targets of AID (18)(19)(20)(21)(22). AID deaminates cytosines in the DNA into deoxy-uracil (U) during the G1 phase of the cell cycle, though Us and abasic sites may persist into S phase (23)(24)(25). AID activity and error-prone processing of the resulting U increase the mutation rate in Ig genes by an estimated six orders of magnitude, specifically from ∼10−9 to ∼10−3 mutations per base pair per division (26)(27)(28). AID preferentially targets the WRCY motif (W = A/T, R = A/G, and Y = C/T). Another hotspot is the WA motif, though this motif is not targeted by AID (29)(30)(31)(32). Five modes of mutagenic U processing are thought to be involved in generating the well-defined mutational spectrum of somatically mutated Ig genes (Figure 1) (1,8,(33)(34)(35)(36)(37). SHM profiles comprise both transversions, where a pyrimidine base (C or T) is substituted by a purine base (A or G) or the reverse, and transitions, where a pyrimidine base (C or T) or a purine base (A or G) is replaced by the other base of the same class.
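The hotspot definitions above lend themselves to a quick computational check. The following sketch, not from the review itself, expands the IUPAC-degenerate motifs WRCY and WA into regular expressions and reports all (overlapping) hotspot start positions in a toy sequence; the sequence and function names are illustrative only.

```python
# Illustrative sketch: scanning a sequence for the AID hotspot motif WRCY
# (W = A/T, R = A/G, Y = C/T) and the WA hotspot, using IUPAC degeneracy
# expanded into regular expressions. Names and the toy sequence are assumptions.
import re

IUPAC = {"W": "[AT]", "R": "[AG]", "Y": "[CT]",
         "A": "A", "C": "C", "G": "G", "T": "T"}

def motif_to_regex(motif):
    """Translate an IUPAC motif such as 'WRCY' into a regex pattern."""
    return "".join(IUPAC[base] for base in motif)

def find_hotspots(seq, motif):
    """Return 0-based start positions of all (overlapping) motif matches."""
    pattern = re.compile("(?=(" + motif_to_regex(motif) + "))")
    return [m.start() for m in pattern.finditer(seq)]

seq = "TAGCTAACGTATGCATACGT"
print(find_hotspots(seq, "WRCY"))  # → [1]
print(find_hotspots(seq, "WA"))    # → [0, 4, 5, 9, 15]
```

The lookahead `(?=...)` is used so that overlapping hotspots, which occur frequently in AT-rich stretches, are not missed.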
In this review we discuss these modes, which are (1) replication opposite template U generates transitions at C/G, (2) UNG2 dependent translesion synthesis (TLS) generates transversions at C/G, (3) a hybrid pathway between non-canonical mismatch repair (ncMMR) and UNG2 dependent TLS generates transversions at C/G, (4) ncMMR generates mutations at A/T, and (5) UNG2 and PCNA Ubiquitination (PCNA-Ub) dependent mutations at A/T. Furthermore, the origin of tandem mutations, the characteristic strand biases, and cell cycle dependency of SHM are discussed. This review focuses on reports in mammalian systems. Findings in the DT40 lymphoma cell line have been reviewed elsewhere (38,39).
DNA DAMAGE RESPONSES INVOLVED IN SOMATIC HYPERMUTATION
The DNA damage response plays a key role in SHM and takes advantage of defined components involved in DNA repair and DNA damage tolerance (40,41). Here we will explain how these DNA damage repair and DNA damage tolerance pathways are repurposed to establish SHM pathways.
Normally, base modifications including Us in the DNA are efficiently recognized and repaired by the base excision repair (BER) pathway (42)(43)(44). To excise U from the DNA backbone, BER can use one of four glycosylases, namely UNG, TDG, SMUG1, and MBD4 (45). However, only UNG has been implicated in SHM (46). SMUG1 is downregulated in hypermutating B cells, though upon overexpression it is able to rescue Ung deficiency (47). Furthermore, SMUG1 was only found to have an effect on SHM and CSR in an Ung-deficient background (48). Initiating BER of Us, the uracil-N-glycosylase UNG recognizes and excises the irregular base from the sugar-phosphate backbone, creating an abasic site. At abasic sites the sugar-phosphate backbone is intact. To further repair the abasic site, an incision is made beside the abasic site by the AP endonuclease APEX1 or APEX2 to yield an adjacent 3′ hydroxyl and a 5′ deoxyribose-phosphate (dRP). APEX2 is active in germinal center B cells, although it has a weaker AP-nuclease activity compared to APEX1 (49)(50)(51). Subsequently, POLB processes the abasic site and fills in the single-nucleotide gap. Finally, ligase LIG1 or LIG3 seals the 3′ hydroxyl and 5′ phosphate groups. This pathway is known as short patch BER. In contrast, long patch BER involves strand displacement synthesis by the replicative polymerase POLD. In contrast to POLB, POLD requires the homotrimeric DNA clamp PCNA as a processivity factor. After the incision by an AP endonuclease, strand displacement synthesis by POLD is followed by removal of the displaced single-stranded DNA flap by FEN1, and finally ligation by LIG1 completes long patch BER.
In addition to processing of U by BER, the U can also be recognized as a U-G mismatch and repaired by mismatch repair (MMR) (52,53). During replication, DNA polymerase errors can give rise to mismatches, which are recognized by the mismatch recognition complex MSH2/MSH6 (52,54). MSH2/MSH6 is tethered to the replication fork in order to detect mismatches and initiate mismatch repair. Next, the endonuclease complex of PMS2 and MLH1 can make an incision 5′ of the mismatch. This nick serves as an entry point for the exonuclease EXO1, which by means of its 5′ to 3′ exonuclease activity creates a single-stranded gap that is normally filled in by replicative polymerases (55)(56)(57). Next to the canonical replication-associated MMR, an alternative ncMMR pathway that is predominantly active during G1 has been identified (52,53,58). ncMMR is thought to introduce mutations during SHM. During ncMMR, MSH2/MSH6 recognizes the AID-induced U-G mismatch. The incision is made by the PMS2/MLH1 endonuclease complex (53,58). Subsequently, the exonuclease EXO1 generates a single-stranded gap, though EXO1 does not necessarily remove the U-containing strand. Next, the gap is filled in an error-prone manner by the translesion synthesis (TLS) polymerase POLH, a member of the Y-family of TLS DNA polymerases, which lack proofreading activity (52,53,58). Effective recruitment of POLH to the single-stranded gap depends on monoubiquitination of the DNA clamp PCNA at lysine residue K164 (PCNA-Ub) (59)(60)(61). By tethering DNA polymerases to the template, the homotrimeric DNA clamp PCNA serves as a critical processivity factor of DNA polymerases.
To establish somatic mutations, TLS, as part of the DNA damage tolerance system, plays an important role during SHM. During TLS, specialized polymerases can continue DNA replication or DNA repair synthesis by inserting a nucleotide opposite the lesion. During replication, TLS may occur at the replication fork to allow replication to continue, or during the filling in of post-replicative gaps. In this manner TLS is thought to prevent prolonged fork stalling or even a fork collapse (62,63). When the replication fork or repair synthesis is stalled by an abasic site, PCNA is monoubiquitinated by the RAD6/RAD18 ubiquitin ligase complex (59,60,(64)(65)(66). The formation of PCNA-Ub is a key step in the recruitment of damage-tolerant, error-prone TLS polymerases, where the PCNA-interacting peptide (PIP) warrants specificity and the UBM or UBZ motif in TLS polymerases increases the affinity to PCNA. In addition, REV1 exerts a PCNA-K164 ubiquitination-independent function.

FIGURE 1 | Mutagenic pathways of SHM. Deamination of C by AID during SHM leads to a specific mutagenic spectrum. The creation of the full SHM spectrum depends on (1) replication opposite a template U, which is read as a T and generates transitions at C/G. (2) UNG2-dependent TLS generates C/G transversions. UNG2 converts a U into an abasic site. As abasic sites are non-instructive, TLS opposite these sites generates both transitions and transversions. (3) A hybrid pathway between non-canonical mismatch repair (ncMMR) and UNG2-dependent TLS generates transversions at C/G. (4) ncMMR generates the majority of mutations at A/T. (5) UNG2- and PCNA ubiquitination (PCNA-Ub)-dependent mutations at A/T. This non-canonical long-patch BER pathway generates a minor but significant subset of A/T mutations (∼8%).
An alternative DNA damage tolerance mode involves repriming behind the fork-stalling lesion by PRIMPOL (70)(71)(72)(73). PRIMPOL activity is thought to be restricted to the leading strand, while the replicative primase POLα primes continuously on the lagging strand (73,74).
REPLICATION OPPOSITE DEOXY-URACIL GENERATES C/G TRANSITIONS
After C deamination by AID, DNA synthesis by any known DNA polymerase across the template U creates a C/G > T/A transition (Figure 1, point 1). If not recognized and processed by BER or MMR, the U in the DNA template will, because of its close similarity to a T, instruct the insertion of an A opposite the U (34,75). In line with this, in the absence of UNG2 and MSH2 or MSH6, Us remain and cannot be shunted into other mutagenic pathways (see below). Consequently, C/G transitions were found almost exclusively in this setting. The resulting SHM profile is considered the DNA footprint of AID activity (34,76).
UNG2 DEPENDENT TRANSLESION SYNTHESIS CREATES C/G TRANSVERSIONS
When a U is processed by UNG, an abasic site is generated. As abasic sites are non-instructive, replication opposite this lesion generates both C/G transversions and transitions (Figure 1, point 2) (36,46,77). A strong decrease of C/G transversions is observed in Ung-deficient mice; however, C/G transitions are increased, suggesting that abasic sites are key intermediates in the generation of C/G transversions (36,46). Apparently, when Us are no longer processed into abasic sites, the increased number of Us leads to increased C/G transitions, as described above. Two different splice variants are produced from the Ung gene, the mitochondrially localized UNG1 and the nuclear UNG2 (78). As expected, only the nuclear isoform UNG2 was found to be relevant for SHM (79,80).
POLB has a central function in short patch BER. At present, the role of POLB during SHM remains controversial. In ex vivo analyses, Polb-deficient B cells derived from transplanted fetal liver HSCs and progenitors showed unaffected SHM (81). In contrast, another study found Polb deficiency to mildly suppress SHM and CSR (82). The different results may be explained by a variation in the methods: the latter study isolated and cultured the B cells for 4 days before analysis, while the former performed the analysis immediately after harvesting germinal centers.
As abasic sites are non-instructive and stall replicative DNA polymerases, specialized TLS polymerases are recruited to bypass this lesion. The TLS polymerase REV1 was found to be able to tolerate abasic sites (83,84). The structure of its active site only allows REV1 to insert dCMP (85,86). As such, REV1 is considered a dCMP transferase rather than a genuine polymerase. Besides its transferase capacity, REV1 has a BRCT domain and a C-terminal domain; the latter can recruit the other Y-family members POLH, POLI, and POLK (63). The inactivation of Rev1 selectively prevents C/G to G/C transversions, in line with the dCMP transferase activity of REV1 (37,87). The N-terminal BRCT domain of REV1 is involved in binding PCNA and does not affect SHM (88,89). Further studies have shown that the catalytic domain of REV1 is key for C/G > G/C transversions (90,91). In addition, in the presence of a catalytically inactive REV1, the TLS polymerase POLH appears to be involved in the generation of C/G > G/C transversions, though a Polh single mutant does not affect C/G > G/C transversions (91). The TLS recruitment function of REV1 seems to play a very limited role in SHM, as the SHM profile of the catalytically dead REV1 mutant is similar to that of the full knockout.
To date, it remains unknown which polymerases are involved in G/C > T/A transversions, though in the DT40 chicken lymphoma cell line POLD3, a subunit of the replicative polymerase delta, has been suggested (92). This finding awaits corroboration in the mammalian system.
HYBRID PATHWAY OF NON-CANONICAL MISMATCH REPAIR AND UNG2 DEPENDENT TRANSLESION SYNTHESIS GENERATES C/G TRANSVERSIONS
Interestingly, about half of all C/G transversions depend on the hybrid UNG2 and ncMMR pathway (Figure 1, point 3) (36,37). In both the ncMMR-dependent and -independent arm, REV1 creates C/G > G/C transversions (37). This raises the question of what distinguishes C/G transversions relying only on UNG2 from those relying on both UNG2 and ncMMR. It has been suggested that C/G transversions in the AID hotspot AGCW depend on UNG2 alone, while C/G transversions outside this motif rely on both UNG2 and ncMMR (93). The authors propose that the ncMMR-dependent and -independent C/G transversions relate to the status of the cell cycle. How ncMMR and UNG2 cooperate to introduce C/G transversions remains largely unaddressed. We proposed that ncMMR either creates a single-stranded DNA substrate for AID, or that additional Us and abasic sites were already present before excision. Consequently, Us are modified by UNG2 into abasic sites to generate C/G transversions (36). The U on single-stranded DNA can be processed by UNG2, leading to C/G transversions. UNG2 is around 1.7-fold more effective on single-stranded DNA compared to double-stranded DNA, and therefore the MMR-generated single-stranded gap may provide a preferred UNG2 substrate (94). Alternatively, there may be a mutagenic repair pathway involving both UNG2 and ncMMR. Biochemical studies indicated that such a pathway indeed exists and demonstrated that UNG2 is involved in resolving U-G mismatches in cooperation with ncMMR (95). As UNG2 itself cannot provide the nick, APEX2 can instead nick the DNA downstream of UNG2 during abasic site processing in germinal center B cells, as initially proposed (51). Further studies should reveal which of the above sources of abasic templates contribute to the generation of MSH2/UNG2-dependent G/C transversions and the exact interplay between UNG2 and ncMMR.
MUTAGENIC NON-CANONICAL MISMATCH REPAIR GENERATES A/T MUTATIONS
Cytosine deamination by AID generates a U-G mismatch, which can be repaired by ncMMR (52). During SHM, more than ninety percent of all A/T mutations depend on ncMMR (Figure 1, point 4). Recognition of the U-G mismatch requires the heterodimer MSH2/MSH6. The inactivation of Msh2 or Msh6 leads to impaired A/T mutagenesis (36,75,96,97). Consistent with the single-nucleotide mismatches generated by AID, the alternative mismatch recognition complex MSH2/MSH3 is not involved in A/T mutagenesis during SHM, as it recognizes only long insertion/deletion loops and mismatches involving multiple bases (96,97). Unexpectedly, the endonuclease complex PMS2/MLH1, which is involved in ncMMR by making the incision for the exonuclease EXO1, has little to no effect on A/T mutagenesis (98)(99)(100). However, when Pms2- and Ung-defective alleles are combined, the number of A/T mutations was found to be reduced to 50%, suggesting that both PMS2 and UNG2 are involved in making the incision for the entry of EXO1 during ncMMR (24). After recognition of the mismatch, the exonuclease EXO1 is key in generating single-stranded DNA patches (101). These gaps can be filled in by the TLS polymerase POLH in an error-prone manner (52,102,103). POLH is a highly error-prone polymerase with an in vitro error rate of 10−1 to 10−2 mutations per base pair (104). Therefore, one would expect it to contribute to all mutation types during SHM. However, POLH has been shown to be preferentially error-prone at template TW motifs in vitro, explaining the contribution of POLH to A/T mutagenesis (105,106). Interestingly, in vitro still one fourth of POLH-induced mutations are C/G mutations, although this is not reflected in Polh-deficient mouse models (102,103,107). Another polymerase that does not share POLH's preference for mutating WA likely fills in the gap. Orthologs of POLH, such as the other Y-family polymerases REV1, POLK, and POLI, may fill in the gap in the absence of POLH.
Indeed, the closest ortholog of POLH, POLK, does contribute to A/T mutagenesis in the absence of POLH (108)(109)(110).
PCNA ubiquitination is generally associated with replication stalling; the question of how PCNA is ubiquitinated during ncMMR in G1 remains. As during S phase, this may be mediated by RPA-coated single-stranded DNA, which recruits RAD6/RAD18.
In conclusion, ncMMR is key in generating the vast majority of A/T mutations during SHM.
UNG2 AND PCNA UBIQUITINATION DEPENDENT A/T MUTAGENESIS
A minor but significant subset of A/T mutations, about eight percent, is generated independently of ncMMR but does depend on UNG2 and PCNA-Ub (Figure 1, point 5) (36,117). Comparison of the mutation spectra from Ung−/− and PcnaK164R/K164R single mutant to Ung−/−;PcnaK164R/K164R double mutant germinal center B cells revealed a further reduction of A/T mutations (36). This observation suggests that long-patch BER involving PCNA-Ub also contributes to A/T mutagenesis downstream of UNG2. Apparently, this non-canonical long patch BER has a minor but significant contribution to the generation of A/T mutations. This finding is in line with the observation that in B cells U-G mismatches and U-A base pairs are both mainly repaired by short patch BER (79). During long-patch as well as short patch BER, APEX proteins are involved in the repair of abasic sites. Surprisingly, the ubiquitously expressed APEX1 is downregulated while APEX2 is upregulated in germinal center B cells (50,51). Furthermore, Apex2-deficient B cells show a reduction of A/T mutagenesis (118,119). Our reanalysis using the mutation frequency instead of the percentage of mutations revealed a 60-80% reduction of A/T mutagenesis, and also showed that C/G transitions and transversions were decreased in both datasets (Supplemental Table 1). The general reduction of all mutation types in Apex2-deficient mice is in line with the conclusion drawn in Sabouri et al. The discrepancy stresses the importance of determining mutation profiles with frequencies instead of percentages. Since long patch BER only generates a minor proportion of A/T mutagenesis, APEX2 is likely involved in the ncMMR pathway, for example by making incisions in the DNA which can be used by EXO1, as suggested previously (50,51,119) (Figure 2). Indeed, EXO1 is activated by a 5′ incision, a mismatch, and MSH2/MSH6 (120). In line with this, in vitro analysis of MMR and BER activity on a U-G mismatch-containing plasmid demonstrated that a UNG2-dependent nick can be processed by EXO1 and MSH2/6 to effectuate MMR, independently of MLH1/PMS2 (95).
In summary, during SHM UNG2 and APEX2 contribute to the generation of a minor and a major part of A/T mutations, respectively. The prime role of APEX2 in A/T mutagenesis may be to provide the nick for both ncMMR and long patch BER.
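The caveat above, that mutation percentages can mask what mutation frequencies reveal, can be illustrated with a toy calculation. In this sketch the counts are invented, not the published data: every mutation class drops 4-fold in a hypothetical knockout, which leaves the percentage profile unchanged while the frequency profile exposes the reduction.

```python
# Hypothetical numbers (not the published data): why mutation *percentages*
# can hide an overall reduction that mutation *frequencies* reveal.
def profile(counts, bases_sequenced):
    """Return (percentage-of-total, per-base-frequency) for each mutation class."""
    total = sum(counts.values())
    pct = {k: 100 * v / total for k, v in counts.items()}
    freq = {k: v / bases_sequenced for k, v in counts.items()}
    return pct, freq

wt = {"A/T": 400, "C/G transition": 400, "C/G transversion": 200}
ko = {"A/T": 100, "C/G transition": 100, "C/G transversion": 50}  # every class drops 4-fold

wt_pct, wt_freq = profile(wt, bases_sequenced=1_000_000)
ko_pct, ko_freq = profile(ko, bases_sequenced=1_000_000)

# Percentages are identical (40/40/20 in both genotypes)...
assert wt_pct == ko_pct
# ...but frequencies show a uniform 75% reduction in every class.
for k in wt:
    print(k, 1 - ko_freq[k] / wt_freq[k])  # → 0.75 for each class
```

The same arithmetic explains why a uniform reduction of all mutation types, as in the Apex2-deficient data, is invisible in a percentage-based profile.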
TANDEM MUTATIONS
Tandem mutations are two mutations in neighboring bases, and around 5% of all mutations are tandem mutations (121). These may arise through independent mutagenic events in the neighboring bases. However, analysis of SHM profiles revealed that tandem mutations are found more frequently than expected by chance (121,122). Furthermore, a part of all tandem mutations relies on the presence of MSH2 or MSH6, POLζ (the REV3/REV7 complex), and POLI. Remarkably, both Poli-deficient and POLζ subunit Rev3-deficient B cells displayed a decrease in tandem mutations (121,122). The mutation load in Rev3-defective B cells was lower; however, this may be due to an impaired proliferation rate of these cells (123). It is reasoned that POLI fills the gap that is generated by EXO1 during ncMMR in an error-prone manner. As POLI can generate mismatches but cannot extend efficiently from them, the mismatch provided by POLI is likely extended by POLζ, which subsequently generates the second mutation. POLH is also involved in ncMMR; however, Polh deficiency has no effect on tandem mutations (121,122). These data suggest that during ncMMR gap filling is performed predominantly by POLH, but also by POLI.
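The claim that tandem mutations exceed the chance expectation can be made concrete with a back-of-the-envelope calculation. Assuming independent per-base hits (the mutation frequency and region length below are illustrative assumptions, not measured values), the expected fraction of mutations that sit in a tandem pair is far below the ~5% observed.

```python
# Back-of-the-envelope sketch (assumed numbers): expected fraction of
# mutations belonging to a tandem (adjacent double) mutation if every base
# mutated independently of its neighbors.
mu = 1e-3          # per-base mutation frequency, order of magnitude from the text
L = 360            # length of a mutated V region in bp (assumption)

expected_singles = mu * L                 # expected mutations per sequence
expected_tandems = (L - 1) * mu * mu      # two independent hits on neighbors

# Each tandem event involves 2 mutated bases, hence the factor 2.
fraction_tandem = 2 * expected_tandems / expected_singles
print(f"{fraction_tandem:.2%}")  # ≈ 0.2% — far below the ~5% observed
```

Under independence the tandem fraction scales with the per-base mutation frequency itself (roughly 2μ), so a ~25-fold excess over this expectation points to a coupled mechanism such as POLI-initiated, POLζ-extended gap filling.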
MUTATIONAL STRAND BIASES OF SHM

AID Targeting and C/G Transition Bias
AID targeting occurs on both the coding and the non-coding strand of Ig loci. Differential targeting of AID to these strands has been implicated in the C/G transition bias observed in Msh2;Ung double mutant, as well as Msh6;Ung and PcnaK164R;Ung double mutant mice (34,36,75,76). This C/G transition bias consists of a 1.5-fold higher number of C>T over G>A mutations on the coding strand. After nucleotide correction, 60% of all C/G transitions arise on the coding strand and 40% on the non-coding strand (Figure 3A). This difference was considered to represent the AID targeting bias. Indeed, as shown by measuring the U content in the DNA, 60% of the AID-dependent Us in the switch region are on the coding strand and 40% on the non-coding strand (124). AID is thought to target the single-stranded DNA of a transcription bubble of the coding strand, which is consistent with the C>T over G>A transition bias (17,125). AID can also target DNA in DNA/RNA hybrids, G-structures, and supercoiled DNA, which can all be found in transcribed genes (126)(127)(128). Differential distribution of these structures may contribute to the C/G transition bias.

FIGURE 2 | Detailed model of ncMMR in A/T mutagenesis. After U induction by AID, UNG2 processes the U into an abasic site. In this more detailed model there are two arms of ncMMR: (1) UNG2 and APEX2 provide the incision for EXO1. EXO1 requires MSH2/MSH6 and a 5′ incision relative to the mismatch to generate single-stranded DNA. PCNA-Ub recruits POLH, which can fill in the single-stranded DNA gap. (2) The MSH2/MSH6 complex recognizes the mismatch and activates the PMS2/MLH1 complex to make the incision. EXO1 creates a single-stranded DNA gap, which is filled in by POLH.
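The "nucleotide correction" behind the 60/40 split amounts to normalizing strand-specific transition counts by the number of available target bases on each strand. A minimal sketch, with hypothetical counts and base tallies chosen so that the corrected split comes out 60/40, assuming C>T reports deamination on the coding strand and G>A on the non-coding strand:

```python
# Illustrative sketch (hypothetical counts): correcting raw C>T vs G>A
# transition counts for base composition to estimate per-strand fractions.
def corrected_strand_fractions(c_to_t, g_to_a, n_c, n_g):
    """Normalize transition counts by the number of available C and G bases
    on the coding strand, then express the result as strand fractions."""
    rate_coding = c_to_t / n_c       # C>T reports deamination on the coding strand
    rate_noncoding = g_to_a / n_g    # G>A reports deamination on the non-coding strand
    total = rate_coding + rate_noncoding
    return rate_coding / total, rate_noncoding / total

# Hypothetical region: 300 Cs and 200 Gs on the coding strand, with a raw
# excess of C>T (216) over G>A (96) mutations.
coding, noncoding = corrected_strand_fractions(216, 96, n_c=300, n_g=200)
print(round(coding, 2), round(noncoding, 2))  # → 0.6 0.4
```

Without the base-composition correction the raw count ratio here would be 2.25-fold, illustrating why the per-base rates, not the raw counts, are the meaningful measure of strand preference.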
A/T Mutation Bias
Another strand bias found in SHM is the A/T bias, where on the coding strand A mutations are 2-fold more frequent than T mutations (Figure 3B) (33,129,130). As A/T mutagenesis is largely dependent on the ncMMR pathway, it is likely that the A/T bias involves the ncMMR pathway as well. During ncMMR, EXO1 creates a single-stranded gap to remove the mismatch. Error-prone filling of the gap by POLH is likely the cause of the A mutations at template TW (131). As POLH is especially error-prone at TW templates, giving rise to WA hotspots, for A mutations to arise the non-coding strand is used as template by POLH, whereas for T mutations the coding strand is used as template. This means that the coding strand contains the gaps more frequently than the non-coding strand. The difference in gap formation suggests that the U-containing strand is removed, and that the AID targeting bias co-determines the A/T bias. However, in ncMMR the U-containing strand was found not to be targeted specifically when the U-G mismatch is repaired (53).
To gain more insight into the A/T bias, a transgene containing a stretch of A and T with a C or a G in the middle was made (132). The analysis showed that a C in the coding strand leads to an increase in A/T mutagenesis in the surroundings of the C, whereas a G in the coding strand leads to suppression of A/T mutagenesis. The downstream mutation bias could be suppressed by impairing Msh2. This suggests that ncMMR is needed to induce the mutation bias, though the study analyzed a limited number of mutations. Strangely, the MMR component PMS2 seems to counterbalance the A/T bias, even though PMS2 does not affect the number of A/T mutations. Pms2-defective mice show an increased A/T bias, due to both an increase of A mutations and a decrease of T mutations (24,133). The authors hypothesize about the existence of an MLH1/PMS2-dependent pathway and a UNG/APEX2-dependent mismatch repair pathway. According to this model, the MLH1/PMS2 mismatch pathway has no strand bias, but the UNG/APEX2-dependent pathway does. The actual strand bias is proposed to be the result of averaging the bias of both pathways. When the MLH1/PMS2 pathway is impaired, the increased activity of the APEX2-dependent pathway could lead to an increased strand bias. This model suggests that the location of the U dictates the A/T bias. Hereafter, UNG2 and APEX2 cooperate to provide the incision on the U-containing strand, which is needed for EXO1 activity. As more Us are found on the coding strand, there will be more EXO1-dependent single-stranded gaps on the coding strand, which lead to A mutations (Figure 3B) (124). This notion fits with the direction of the AID targeting bias and the A/T bias. Accordingly, one expects that an Apex2 or Ung defect would lead to a decreased strand bias. However, analysis of Ung-defective mice and our reanalysis of Apex2 knock-out mice fail to demonstrate a decrease in the A/T bias (24,118,119) (Supplemental Table 1).
Another model suggests that the A/T mutations arise through reverse transcriptase activity (134,135). In line with this, POLH has been identified as a DNA polymerase, an RNA polymerase, and a reverse transcriptase (136)(137)(138). In fact, the reverse transcriptase activity of POLH has been implicated in A/T mutagenesis. However, potential POLH reverse transcriptase activity using the pre/mRNA as a template and the observed A/T strand bias oppose each other, i.e., it would predict a higher amount of T rather than A mutations (Figure 4). The pre/mRNA is unlikely the source of A/T mutagenesis, unless it involves RNA editing at A, preferentially in WA motifs. A POLH mutation conferring an RNA or DNA specificity may provide the ultimate test regarding the role of reverse transcription in SHM. However, involvement of RNA editing in A/T mutagenesis seems unlikely, as A/T mutagenesis fully depends on AID and ncMMR, both of which target DNA.

FIGURE 3 | Strand biases in SHM. (A) AID targeting with a preference for the coding strand leads to a C/G transition strand bias. Us on the coding strand lead to C>T transitions, while Us on the non-coding strand lead to G>A transitions. (B) During error-prone mismatch repair, the MSH2/MSH6 complex recognizes the U-G mismatch, after which APEX2 or PMS2 provide the incisions for EXO1. POLH is especially error-prone on template TW. Therefore, the orientation of the gap made by EXO1 likely governs the A/T bias. (C) Replicative forks can be stalled on both the leading and lagging strand by AID-dependent abasic sites (1). After priming on the lagging strand, a replicative polymerase resumes DNA synthesis. PRIMPOL establishes the G>C over C>G transversion bias found in the Jh4 intron of the Igh gene, likely through anti-mutagenic activity on the leading strand of replication. PRIMPOL restarts by repriming after stalled DNA synthesis (2) and prevents TLS (3). On the lagging strand, TLS opposite the abasic site leads to G>C mutations (4). PRIMPOL activity likely activates a homology-driven error-free pathway such as template switching to prevent mutagenesis (5). (C) adapted from Pilzecker et al. (73).

FIGURE 4 | The potential but unlikely role of pre/mRNA in A/T mutagenesis. As the pre/mRNA is copied from the non-coding strand, it can only act as a template for repair synthesis on the non-coding strand. A gap can arise in the DNA-RNA hybrid on the non-coding DNA strand. As POLH has reverse transcriptase activity, this gap could be filled in an error-prone manner by POLH. However, if this potential mechanism, or any other mechanism using the cDNA as an intermediate, were a dominant mode, a higher rate of T mutations compared to A mutations would be expected, which directly contrasts with the observed A/T bias.
In conclusion, the A/T bias involves ncMMR activity and arises through POLH using the non-coding strand as template more frequently.
G>C Over C>G Transversion Bias
An additional bias found in the Jh4 intronic region in SHM is the 3.5-fold higher frequency of G>C over C>G mutations (73). C>G and G>C mutations arise from abasic sites, which suggests that there is either an unequal number of abasic sites on the coding and non-coding strand, or a difference of anti- or pro-mutagenic DNA damage responses between the coding and non-coding strand. The G>C over C>G bias is governed by PRIMPOL, as Primpol deficiency was found to increase C>G transversions to the level of G>C transversions (Figure 3C). Apparently, an anti-mutagenic activity of PRIMPOL prohibits C>G transversions, which suggests that PRIMPOL exerts strand-biased anti-mutagenic activity at abasic sites. The dominant origin of replication in the Igh locus in B cells lies near the 3′ regulatory region enhancer (139)(140)(141). Therefore, most C>G mutations are likely to arise from abasic sites on the leading strand, whereas most G>C mutations arise from abasic sites on the lagging strand. Apparently, PRIMPOL has an anti-mutagenic activity on the leading strand (73). The notion that PRIMPOL acts as a conservator of the genome is supported by the anti-mutagenic activity of PRIMPOL on mutagenesis induced by the AID family member APOBEC in invasive breast cancer. In a genome-wide setting, anti-mutagenic activity of PRIMPOL on the leading strand against APOBEC-dependent mutagenesis would be expected. In line with this notion, an enrichment of APOBEC mutagenesis was actually found on the leading strand (142,143).
Using purified PRIMPOL, it was demonstrated that this polymerase/primase is stalled at abasic sites under nuclear conditions (70). Therefore, repriming by PRIMPOL has an important anti-mutagenic function. This anti-mutagenic function may be explained by redirecting DDT from error-prone TLS to error-free homology-directed template switching. This has also been observed in yeast, where the leading/lagging-strand primase POLα has been shown to promote recombination-directed template switching (144).
We propose a role for PRIMPOL during S phase, despite the observation that AID is mainly active in G1 of the cell cycle, an observation based mainly on overexpression settings (24,25). However, overexpression of AID outside of G1 is toxic to cells; it is therefore not possible to exclude S/G2 activity of AID in overexpression settings (145). Furthermore, Us and abasic sites may persist into S phase. In addition, another study suggested that C/G transitions and transversions can occur to some extent during S phase (23).
In conclusion, the G>C over C>G transversion bias is established by the anti-mutagenic activity of PRIMPOL, which likely reprimes behind abasic sites to stimulate error-free template switching. Strand biases in SHM are established by pro-mutagenic biases, such as AID targeting and ncMMR, as well as by anti-mutagenic activities, such as the PRIMPOL-dependent G>C over C>G mutation bias.
CELL CYCLE REGULATION OF SOMATIC HYPERMUTATION
AID is active during the G1 phase of the cell cycle (25,145). Accordingly, G1 is also the cell cycle phase in which the highest levels of Us can be found in the immunoglobulin genes (25). Still, Us and abasic sites may persist into S phase. Furthermore, expression of an AID modified to be specifically expressed in G1 provides all mutations characterizing SHM, whereas AID modified for S/G2/M expression does not support mutagenesis (24). However, the lack of mutagenesis in S/G2/M may be due to the toxicity of overexpressing AID during S phase (145). While most A/T and C/G mutagenesis has been suggested to be limited to G1, C/G transitions and transversions can occur during S phase (23).
During G1, dNTP levels are very low. These low dNTP levels have been shown to impair A/T mutagenesis. Deficiency of the dNTP triphosphohydrolase Samhd1 increases the concentration of nucleotides in G1 (146). Surprisingly, this led to a decrease of A/T transversions, but not of A/T transitions. However, knocking out Samhd1 led to increased arrest in late G1, due to high nucleotide levels (147,148). As AID is active in early G1, the late-G1 arrest may affect the SHM profile independently of dNTP levels.
In conclusion, AID is active in G1. A/T and C/G mutations are generated primarily during G1, though a substantial fraction of C/G mutations may arise during S phase.
CONCLUSION
The discovery of AID laid the foundation for solving the molecular puzzle underlying SHM. Further detailed characterization uncovered distinct mutagenic pathways responsible for generating the typical mutation spectrum of somatically mutated Ig genes. Both pro- and anti-mutagenic activities contribute to establishing defined strand preferences responsible for the specific mutation biases recurrently found in these independently generated spectra. Many details regarding the transformation of faithful DNA damage response pathways into effective mutator pathways have been uncovered; these pathways often take advantage of error-prone DNA polymerases, as proposed in the Brenner & Milstein model (149), now identified as members of the Y-family of TLS polymerases. At the same time, insights from SHM studies fueled general mutation studies and provided novel insights into genome maintenance in general.
AUTHOR CONTRIBUTIONS
All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.
"year": 2019,
"sha1": "c3dbdd9685a198833e26ad682e6c59b53ab569a0",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2019.00438/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c3dbdd9685a198833e26ad682e6c59b53ab569a0",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Isospin-Violating Dark Matter and Neutrinos From the Sun
We study the indirect detection of dark matter through the neutrino flux from their annihilation in the center of the Sun, in a class of theories where the dark matter-nucleon spin-independent interactions break isospin symmetry. We point out that, while the direct detection bounds with heavy targets like Xenon are weakened and reconciled with the positive signals in the DAMA and CoGeNT experiments, indirect detection using neutrino telescopes can impose a relatively stronger constraint and bring tension to such an explanation, if the annihilation is dominated by heavy-quark or $\tau$-lepton final states. As a consequence, a qualified isospin-violating dark matter candidate has to preferably annihilate into light flavors.
I. INTRODUCTION
It is well established that Dark Matter (DM) dominates the matter in the universe, but the identity of DM remains unclear. The direct detection experiments aim to decode the DM non-gravitational interactions by observing the scattering of DM off detector materials [1]. Many efforts have been made to search for such events for decades. Two collaborations, DAMA [2] and CoGeNT [3], have claimed evidence for annual modulation in the differential event rate, which is a characteristic property due to the motion of the Earth around the Sun [4]. The simplest explanation points to a low-mass O(10) GeV DM with spin-independent (SI) elastic scattering off nucleons with cross sections around (2 − 5) × 10⁻⁴ pb. It, however, turns out that DAMA tends to favor a relatively larger cross section than CoGeNT does. Moreover, they contradict the null results of the CDMS [5] and XENON [6] experiments, which put the most stringent constraints on the SI DM-nucleon scattering cross sections.
In order to alleviate the tension between the CoGeNT and DAMA results and the constraints of CDMS and XENON, various theoretical attempts and solutions have been put forward [7,8]. Among them, isospin-violating dark matter (IVDM) [8][9][10][11] draws a lot of interest. It was proposed that the DM particles might couple differently to protons and neutrons. Under this generic assumption, one therefore gains an additional degree of freedom, f_n/f_p, the ratio between the two couplings. If it satisfies f_n/f_p = −Z/(A − Z) for a given nuclear isotope (A, Z), the scattering amplitudes will interfere destructively and cancel each other. Therefore, the constraints from the corresponding isotope could be completely evaded.
There are also huge experimental efforts to detect DM indirectly through the detection of secondary products of DM annihilation in the galaxy or in astrophysical bodies [12,13]. One promising way is to detect the highly energetic neutrino signals resulting from the annihilation of DM particles that have been gravitationally captured by the Sun [14,15], using neutrino telescopes on the Earth, such as Super-Kamiokande (Super-K) [16] and IceCube [17]. The most severe constraints on low-mass DM are given by Super-K data. In this paper, we study the neutrino flux coming from the annihilation of IVDM in the Sun.
Throughout the discussion, we will assume the DM particle is symmetric, namely that it can annihilate significantly when the number density is high. We will comment on the scenario of asymmetric DM at the end of the paper.
II. CONSEQUENCES OF ISOSPIN VIOLATION
In this section, we discuss the general consequences of isospin-violating DM-nucleon interactions in various DM direct/indirect detection approaches. We will focus on the class of spin-independent interactions.
For a ground-based direct detection experimental target containing a certain element i with nucleon and proton numbers (A_i, Z_i), the ratio of the isospin-violating (IV) cross section to the isospin-conserving (IC) cross section is σ_i^IV/σ_i^IC = [Z_i + (A_i − Z_i) f_n/f_p]²/A_i². The phenomenologically favored ratio is found to be f_n/f_p ≈ −0.7. Due to the destructive interference in the amplitude, the direct detection rate gets reduced significantly. In the Sun, the contributions of heavier elements are in addition limited by their small chemical abundance. Therefore, the suppression factor for capture is, schematically, C_IV/C_IC ≈ Σ_i w_i µ_i² [Z_i + (A_i − Z_i) f_n/f_p]² / Σ_i w_i µ_i² A_i², where w_i is the abundance weight of element i in the Sun, µ_i = m_χ m_i/(m_χ + m_i), and m_i is the mass of nucleus i. Taking into account the presence of different isotopes, we list the reduction factors in the direct detection and solar capture rates in Table I. The key observation from Table I is the hierarchy in the suppression factors, among which the solar capture rate receives the weakest suppression from isospin violation. For DM mass around 10 GeV, the capture rate is reduced only by a factor of 0.04. Therefore, indirect detection using the neutrino flux can give relatively stronger bounds on the DM-nucleon SI interactions, if the interactions are isospin violating. This serves as the main point of this paper. In the next section, we illustrate this statement quantitatively.
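The single-isotope cancellation can be made concrete numerically. The sketch below assumes the single-isotope form [Z + (A − Z) f_n/f_p]²/A² with couplings normalized to f_p, and evaluates it for hydrogen, helium-4, and xenon-131:

```python
def iv_suppression(Z, A, r):
    """Ratio of isospin-violating to isospin-conserving SI cross
    sections off a single isotope (A, Z), with r = f_n / f_p."""
    return (Z + (A - Z) * r) ** 2 / A ** 2

r = -0.7  # phenomenologically favored coupling ratio

h_ratio = iv_suppression(1, 1, r)      # hydrogen: no neutrons, no cancellation
he_ratio = iv_suppression(2, 4, r)     # helium-4
xe_ratio = iv_suppression(54, 131, r)  # xenon-131: near-complete cancellation
```

For f_n/f_p = −0.7 the xenon amplitude nearly vanishes (Z/(A − Z) ≈ 0.70 for ¹³¹Xe), while hydrogen is untouched, which is the origin of the hierarchy of suppression factors discussed above.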
III. INDIRECT DETECTION FROM THE SUN
Weakly interacting DM can be captured in astrophysical bodies like the Sun. The capture process usually happens through the scattering between DM and the nuclei. As DM particles accumulate near the core region of the Sun, there can be a significant annihilation process whose rate is proportional to the squared number density, so that dN/dt = C_⊙ − C_A N², where C_⊙ is the capture rate and C_A parametrizes the annihilation rate of DM particles in the Sun. For simplicity we neglect the evaporation term, which could be important for DM lighter than 3-4 GeV [24]. Assuming that C_⊙ and C_A do not depend on time, one can readily solve for the DM number, N(t) = √(C_⊙/C_A) tanh(√(C_⊙ C_A) t). If the time needed to reach equilibrium is much smaller than the age of the solar system, i.e., 1/√(C_⊙ C_A) ≪ t_⊙, the capture and annihilation processes are in equilibrium and the two terms on the right-hand side of Eq. (3) are balanced with each other.
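The evolution equation and its tanh-shaped approach to equilibrium can be cross-checked with a quick numerical sketch (the rates below are illustrative, not solar values):

```python
import math

def N_analytic(t, C, CA):
    """Closed-form solution N(t) = sqrt(C/CA) * tanh(sqrt(C*CA) * t)
    of dN/dt = C - CA * N^2 with N(0) = 0."""
    return math.sqrt(C / CA) * math.tanh(math.sqrt(C * CA) * t)

def N_numeric(t, C, CA, steps=200000):
    """Forward-Euler integration of the same equation, as a cross-check."""
    dt = t / steps
    N = 0.0
    for _ in range(steps):
        N += (C - CA * N * N) * dt
    return N

# illustrative rates: equilibrium number is N_eq = sqrt(C/CA)
C, CA = 3.0, 0.5
```

At times much longer than 1/√(C·CA) both solutions sit at the equilibrium value √(C/CA), where capture and annihilation balance.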
A. Capture rate
The capture rate of DM by element i in the Sun can be calculated following [18,19]; the total rate is a sum over all the elements, and the DM local density is chosen to be ρ_χ = 0.3 GeV/cm³. The capture rate C_⊙,i is proportional to the corresponding DM-nucleus scattering cross section. We use the standard chemical composition of the Sun given in [20], with n_i the mass fraction of a given element i in the Sun. The DM velocity distribution is taken to be of the standard Maxwell-Boltzmann form, with characteristic velocity v̄ = 220 km s⁻¹.
The nuclear form factor F²(E_R) takes the Helm form in its exponential approximation, F_i²(E_R) = exp(−E_R/E_i), where E_i = 3/(2 m_i R_i²) and R_i = (0.9 A_i^{1/3} + 0.3) fm. For a heavier nucleus, it is easier to deposit more energy in each scattering. Thus, the recoil energy integral is approximately proportional to the reduced mass squared, µ_i². This explains the additional factor of the reduced mass ratio in Eq. (2).
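As a sketch of how the coherence energy scales with nuclear mass, assuming the exponential form F²(E_R) = exp(−E_R/E_i) implied by the quoted E_i and approximating m_i ≈ A times the atomic mass unit:

```python
import math

HBARC = 0.1973  # GeV fm, conversion factor
AMU = 0.931     # GeV, atomic mass unit, so m_i ~ A * AMU (approximation)

def E_coh(A):
    """Coherence energy E_i = 3 / (2 m_i R_i^2) in GeV,
    with R_i = (0.9 A^(1/3) + 0.3) fm."""
    R = 0.9 * A ** (1.0 / 3.0) + 0.3
    return 3.0 * HBARC ** 2 / (2.0 * A * AMU * R ** 2)

def form_factor_sq(E_R, A):
    """Exponential suppression of coherence at recoil energy E_R (GeV)."""
    return math.exp(-E_R / E_coh(A))
```

Heavier nuclei have a smaller E_i, so their form-factor suppression sets in at lower recoil energies, partially offsetting the A² coherent enhancement.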
The lower limit of the recoil energy integral in Eq. (5) is the minimal energy transfer needed to capture the DM, E_R,cap = m_χ u²/2, while the upper limit is the largest energy transfer allowed by kinematics, E_R,max = (2µ_i²/m_i)(u² + v_esc²(r)). A valid recoil energy integral must satisfy E_R,cap < E_R,max, so for the case m_χ ≫ µ_i, the initial velocity u at infinity must be small enough for the capture to happen.
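The kinematic condition E_R,cap < E_R,max translates into a maximum speed at infinity below which capture is possible; a minimal sketch (masses in GeV, speeds in arbitrary common units):

```python
import math

def u_max(m_chi, m_i, v_esc):
    """Largest DM speed at infinity for which capture on a nucleus of
    mass m_i is kinematically allowed, solving
    m_chi * u^2 / 2 < (2 mu^2 / m_i) * (u^2 + v_esc^2) for u."""
    mu = m_chi * m_i / (m_chi + m_i)
    k = 2.0 * mu * mu / m_i
    if m_chi / 2.0 <= k:  # light DM or matched masses: any speed can be captured
        return float("inf")
    return v_esc * math.sqrt(k / (m_chi / 2.0 - k))
```

For a 10 GeV DM particle, a heavier target such as Oxygen can capture considerably faster particles than Hydrogen can, which is part of why heavy elements matter despite their low abundance.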
The escape velocity of the Sun, v_esc(r), follows the solar density profile. Compared with Hydrogen (protons), the cross sections off heavy nuclei are enhanced by both A_i² and µ_i²/µ_p² [21]. Of course, the capture rates for heavier elements are further suppressed by their abundance in the Sun. Among all the relevant elements, as can be seen in Fig. 1, Helium plays the most dominant role in capturing the DM when the DM is lighter than 30 GeV, while Oxygen is most important for heavier DM.
On the other hand, in the isospin-violating case, the DM-nucleus scattering cross section is related to that between the DM and the proton by σ_χi = σ_χp (µ_i²/µ_p²)[Z_i + (A_i − Z_i) f_n/f_p]². The cross sections for heavy nuclei are suppressed by the destructive interference between the protons and neutrons inside. Therefore, it turns out that Hydrogen is the dominant species for capturing DM with mass lower than 40 GeV, as shown in Fig. 1. In Fig. 2, we plot the ratio of the total capture rates between the isospin-conserving and -violating scenarios. As a rough estimate, the ratio is proportional to A_He⁴ ρ_He/ρ_H ≈ 20 for m_χ ≈ 10 GeV. As one can see in the plot, there is an upper bound C_IC/C_IV ≲ 50 for f_n/f_p = −0.7.
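The per-element suppression of capture can be tabulated the same way; the sketch below uses only the single-isotope interference factor for the dominant isotopes of the main solar targets (abundance and kinematic weights omitted):

```python
def interference(Z, A, r=-0.7):
    """Single-isotope interference factor [Z + (A - Z) r]^2 / A^2."""
    return (Z + (A - Z) * r) ** 2 / A ** 2

# dominant isotopes of the main solar capture targets
solar = {"H": (1, 1), "He": (2, 4), "N": (7, 14), "O": (8, 16), "Fe": (26, 56)}
suppression = {el: interference(Z, A) for el, (Z, A) in solar.items()}
```

Hydrogen keeps its full rate while every Z ≈ N nucleus is cut to a few percent, which is why Hydrogen dominates the isospin-violating capture in Fig. 1.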
B. Annihilation and final state neutrinos
The annihilation rate coefficient can be well approximated as C_A = ⟨σv⟩/V_eff, where V_eff is the effective volume of the core of the Sun, found to be V_eff ≈ 2.0 × 10²⁶ cm³ (1 TeV/m_χ)^{3/2} [12]. There is in principle a competition between the capture and annihilation processes happening around the center of the Sun. It has been shown that for the DAMA and CoGeNT favored region, capture-annihilation equilibrium has been reached [22]. In fact, for a fixed spin-independent interaction σ_SI^χN and annihilation rate ⟨σv⟩, in the isospin-violating scenario the processes reach equilibrium more quickly due to a smaller capture rate. After the capture and the annihilation processes become balanced, the flux from the annihilation process is completely controlled by the capture rate.
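To get a feel for when equilibrium is reached, one can combine V_eff with an assumed annihilation cross section and capture rate. The thermal-relic ⟨σv⟩ and the capture rate below are both illustrative assumptions, not values from the text:

```python
import math

M_CHI = 10.0                              # GeV
V_EFF = 2.0e26 * (1000.0 / M_CHI) ** 1.5  # cm^3, from the formula quoted above
SIGMA_V = 3.0e-26                         # cm^3/s, thermal-relic value (assumption)
C_CAP = 1.0e23                            # s^-1, illustrative capture rate (assumption)

C_A = SIGMA_V / V_EFF                     # effective annihilation coefficient, s^-1
tau_eq = 1.0 / math.sqrt(C_CAP * C_A)     # equilibration timescale, s
T_SUN = 4.6e9 * 3.156e7                   # age of the solar system, s
```

With these inputs tau_eq comes out below the solar age, so the annihilation flux would be set entirely by the capture rate.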
We are mainly interested in the final-state neutrinos from the annihilation, which can be detected by neutrino telescopes such as the Super-K experiment. We use the results of Ref. [23] to obtain the neutrino spectrum (dN_ν/dE_ν)_F per annihilation process, taking into account hadronization, hadron stopping, and neutrino absorption, and assuming that the effect of neutrino oscillation to the Earth averages over the three neutrino flavors [12]. Here F denotes the annihilation product of the DM. For light DM, the important final states are ττ, bb and cc, which can further decay to neutrinos. The neutrino flux arriving at the Earth is then dΦ_ν/dE_ν = [C_⊙/(8πD²)] Σ_F B_F (dN_ν/dE_ν)_F, with D the Sun-Earth distance and B_F the branching fraction into final state F.
C. Muon rate at Super-K
The state-of-the-art technique to study the neutrino flux from cosmic rays is to observe up-going muons in the detector. These muon signals are generated by muon neutrinos interacting with the rock as well as the material (e.g., water) inside the detector. Here we follow Refs. [24][25][26] to calculate the muon rate generated in the presence of the neutrino flux obtained in the previous subsection.
The muon rate is obtained by convolving the neutrino flux with the probability of a muon being generated in charged-current weak interactions, with cross section σ_ν, and traveling through the average length R_µ; here ρ is the mass density of the rock or the water. The charged-current interaction cross section with nucleons grows approximately linearly with the neutrino energy, σ_ν ≈ a_ν E_ν. The average length that a muon travels before its energy drops below the detector threshold energy E_µ^th is parametrized by R_µ = (1/βρ) ln[(α + β E_µ)/(α + β E_µ^th)], where we have taken the parameters α = 2.3 × 10⁻³ GeV cm² g⁻¹ and β = 4.4 × 10⁻⁶ cm² g⁻¹ in the calculations.
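A sketch of this range parametrization, which follows from integrating the energy-loss relation dE/dx = −ρ(α + βE) between the initial and threshold energies:

```python
import math

ALPHA = 2.3e-3  # GeV cm^2 g^-1, ionization-loss coefficient (from the text)
BETA = 4.4e-6   # cm^2 g^-1, radiative-loss coefficient (from the text)

def muon_range(E_mu, E_th, rho):
    """Distance (cm) a muon travels in a medium of density rho (g/cm^3)
    while its energy falls from E_mu to the threshold E_th (GeV)."""
    return math.log((ALPHA + BETA * E_mu) / (ALPHA + BETA * E_th)) / (BETA * rho)
```

For example, a 100 GeV muon in water (rho = 1 g/cm³) with a 1.6 GeV threshold travels a few hundred meters, which sets the effective target length R_µ.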
The Super-K experiment [16] measures the Cherenkov radiation of energetic muons generated in charged-current interactions. The effective area of detection is around A_eff = 900 m², and the τ = 1679.6 live days of measurement allow at most 11 events beyond those originating from the atmospheric neutrino background [24] at 95% confidence level. We use this as the upper bound on the number of events N_µ = Φ_µ A_eff τ/2 from DM annihilations in the Sun, where the factor 1/2 accounts for the nighttime.
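The corresponding upper bound on the muon flux is simple arithmetic; a sketch with the numbers quoted above:

```python
A_EFF = 900.0 * 1.0e4         # effective area: 900 m^2 in cm^2
LIVE_TIME = 1679.6 * 86400.0  # 1679.6 live days in seconds
N_MAX = 11.0                  # events allowed above background at 95% CL

# inverting N_mu = Phi_mu * A_eff * tau / 2 gives the flux bound:
phi_max = 2.0 * N_MAX / (A_EFF * LIVE_TIME)  # muons cm^-2 s^-1
```

The result is a flux limit of order 10⁻¹⁴ muons cm⁻² s⁻¹, which is then compared against the predicted flux for each annihilation channel.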
We have plotted the constraints on the DM-nucleon cross section in Fig. 3, including both direct detection and indirect detection via neutrinos from the Sun. We focus on the low-mass DM region in light of the recent direct detection excitement. As was noticed in [10], the positive signals from DAMA and CoGeNT can be reconciled by including isospin-violating DM-nucleon interactions. The isospin violation effect can also relieve the tension with the null results of the XENON experiments, but cannot remove the constraints from CDMS, which uses the same material as CoGeNT [10].
An interesting finding is that indirect detection with neutrinos from DM annihilation in the Sun imposes a stronger constraint if the annihilation final states are neutrino-rich, i.e., ττ or bb (or marginally cc), as shown by the solid curves in Fig. 3. Annihilation to light quarks or muons is still allowed, since they would lose most of their energy before decaying, due to their relatively longer lifetimes. Therefore, the qualified IVDM candidates should annihilate preferably into light flavors. This is a model-independent result, originating from the hierarchical nature of the reduction factors for the different rates, as shown in Table I. It brings more challenge to DM model building. One possibility could be that the portal bridges the DM sector only to the first-generation fermions in the SM sector. The couplings to second-generation quarks (like cc) could then be induced but safely suppressed by the Cabibbo mixing angle.
A second message we can learn from Table I and Fig. 2 is that the suppression factor of the total capture rate is always larger than 1/50, still less suppressed compared to the factors in direct detection experiments. Therefore, even in the case of inelastic scattering, where elements heavier than Helium are more likely to dominate [27], indirect detection from the Sun could still impose an important constraint.
Finally, we comment on the scenario where the DM particle is asymmetric. Asymmetric dark matter has been proposed in order to understand why the baryonic matter and DM have similar relic densities [28]. In such a scenario, due to the lack of its antiparticle, the dark matter cannot annihilate as it accumulates inside the Sun. If this happens, it is more difficult to obtain an indirect detection signal of the asymmetric DM if it is completely stable, and the above indirect detection bounds from the Sun will no longer hold. However, if the dark matter self-interaction is strong enough, self-capture could still play an important role and reveal its existence [29].
IV. CONCLUSION
In summary, we studied the capture of low-mass isospin-violating DM in the Sun and the corresponding neutrino flux from their subsequent annihilation. Low-mass isospin-violating dark matter has been proposed to reconcile the annual modulation signals observed by DAMA and CoGeNT with the constraints put by the null direct-detection experiments. The isospin-violating effects make the scattering cross section of DM off certain isotopes relatively suppressed, which therefore helps to solve the contradiction between the positive signals and the null experiments. However, we find that the indirect detection of neutrino signals through the neutrino telescope Super-K sets stronger constraints on the DM-nucleon interactions and brings further tension to such an explanation, if the DM particles annihilate into neutrino-rich final states, e.g., tau leptons or bottom quarks.
"year": 2011,
"sha1": "0152c14c78708d99f7aa4e98b7ffc129748a12eb",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1106.4044",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0152c14c78708d99f7aa4e98b7ffc129748a12eb",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
MR1-Restricted T Cells in Cancer Immunotherapy
Major histocompatibility complex class I-related (MR1) was first identified as a cell membrane protein involved in the development and expansion of a unique set of T cells expressing an invariant T-cell receptor (TCR) α-chain. These cells were initially discovered in mucosal tissues, such as the intestinal mucosa, so they are called mucosal-associated invariant T (MAIT) cells. MR1 senses the presence of intermediate metabolites of riboflavin and folic acid synthesis that have been chemically modified by the side-products of glycolysis, glyoxal or methylglyoxal. These modified metabolites form complexes with MR1 and translocate from the endoplasmic reticulum to the plasma membrane where MAIT cells’ TCRs recognize them. Recent publications report that atypical MR1-restricted cytotoxic T cells, differing from MAIT cells in TCR usage, antigen, and transcription factor profile, recognize an as yet unknown cancer-specific metabolite presented by MR1 in cancer cells. This metabolite may represent another class of neoantigens, beyond the neo-peptides arising from altered tumor proteins. In an MR1-dependent manner, these MR1-restricted T cells, while sparing noncancerous cells, kill many cancer cell lines and attenuate cell-line-derived and patient-derived xenograft tumors. As MR1 is monomorphic and expressed in a wide range of cancer tissues, these findings raise the possibility of universal pan-cancer immunotherapies that are dependent on cancer metabolites.
Introduction
Recently, there has been increasing interest in the role of the microbiome in developing and activating the mucosal host immune response against tumor formation, infections, and inflammation [1][2][3][4][5][6]. The major histocompatibility complex (MHC) class I-related protein (MR1) is a cell membrane protein expressed in cells infected by certain types of bacteria or in cells under metabolic stress [7][8][9][10][11]. Tumor cells also express MR1 [12,13]. Unlike the highly polymorphic MHC-I molecules, MR1 is monomorphic. Rather than binding 8-10-amino acid peptides as MHC-I does, MR1 binds small metabolite molecules derived from bacterial biosynthesis of riboflavin (vitamin B2) or folic acid (vitamin B9) [7,8,14]. The binding of those vitamin B metabolites to MR1 requires their modification through chemical reactions with glyoxal or methylglyoxal, which are generated endogenously by glycolysis or derived from exogenous food sources [7,[15][16][17][18][19][20]. Notably, bacteria and tumor cells use glycolysis as a source of ATP [21,22]. The MR1-metabolite complex is then presented to a class of T cells, called mucosal-associated invariant T (MAIT) cells [23], as they were originally discovered to be preferentially located in the gut lamina propria of humans and mice [7,8]. Significant advances have been made in the understanding of MAIT cell functions following the discovery of MR1 ligands [7,14]. Recent publications report that a subpopulation of MR1-restricted cytotoxic T cells recognizes an as yet unknown cancer-specific metabolite presented by MR1 in cancer cells [24]. These atypical MR1-restricted T cells (called MR1T), with their distinctive TCRs, represent a unique opportunity for the development of pan-cancer immunotherapies. It is not known how CD8− MAIT cells stabilize that interaction. All TRAV1-2+ MAIT cells express both CD161 and CD26 at high levels, as well as the T-cell memory marker CD45RO [28,35].
It is unknown whether they express immune checkpoint inhibitory surface molecules (e.g., PD1, CTLA4, TIM3, LAG3, TIGIT, VISTA, or B7/H3), co-stimulatory molecules (e.g., CD80, CD86, OX40, ICOS, GITR, 4-1BB, or CD40), or other molecules found in the tumor microenvironment, such as IDO or TLR. Classical MAIT cells express the transcription factors promyelocytic leukemia zinc finger (PLZF) and RAR-related orphan receptor γt (RORγt) [28]. The latter transcription factor is also expressed by regulatory T cells and interleukin-17 (IL-17)-producing helper T cells (Th17) [36]. 5-OP-RU is a major antigen recognized by MR1 and the MAIT TCRs. MR1 also binds 6-FP, a byproduct of folic acid biosynthesis [7,14] (Figure 2). The cytotoxic capacity of MAIT cells is likely due to their ability to express perforin and granzyme B upon activation [28]. MAIT cells also express granulysin [28]. It is unknown whether they also use the FAS/FAS ligand (FASL) or other pathways for their cytotoxic function. Upon TCR activation, MAIT cells can produce IL-17, along with IL-22, TNF, IL-2, and interferon-γ (IFN-γ). IL-17 may increase the proliferative capacity of different types of tumors [28].
Non-classical MAIT Cells without TRAV1-2
There are MAIT cells that have no TRAV1-2 in their TCRs (TRAV1-2−) [28,37]. These non-classical MAIT cells account for less than 0.01% of circulating T cells, yet they share many features with TRAV1-2+ classical MAIT cells [28]. Their functions are less well known, and some of them have shown reactivity against several tumors expressing MR1. We summarize the characteristic features of non-classical MAIT cells in Table 1. Similar to classical MAIT cells, non-classical MAIT cells express PLZF, CD161, and IL-18R, as well as recognize 5-OP-RU as an antigen; however, the TCRs expressed by non-classical MAIT cells are significantly different from those expressed by classical MAIT cells, with one subgroup showing more diverse TCRs than the other (Table 1; Figure 3).
Figure 2 (caption, in part): Metabolites, such as 6-FP, derived from an intracellular infection utilize an alternative pathway to present the MR1-ligand complex at the cell surface. The MR1-ligand binding complex either passes to the plasma membrane or translocates into endosomes. In a low-pH environment, the intracellular ligands are exchanged with exogenous ligands, such as 5-OP-RU or 5-OE-RU, and the MR1-ligand complex then translocates to the plasma membrane to be recognized by the TCR (green dashed arrow). β2M, β2-microglobulin; 6-FP, 6-formylpterin.
MAIT Cells and Cancer
Clinical studies of the role of MAIT cells in solid tumors are mostly focused on hepatocellular carcinoma (HCC) and colorectal cancer (CRC). It is remarkable that MAIT cells are abundant in normal hepatic tissues (~50% of T cells in the liver are MAIT cells; Table 1) but are decreased in HCC tumor tissues [12,38,39]. Duan et al. reported that tumor-educated MAIT cells in HCC expressed higher levels of PD1, CTLA4, and TIM3, yet produced lower levels of IFNγ, IL-17, granzyme B, and perforin [38]. Thus, HCC-infiltrating MAIT cells were functionally compromised and likely reprogrammed from being tumor-suppressive to being tumor-promoting [38]. In contrast, more MAIT cells infiltrate CRC tumor tissues than healthy colorectal tissues [40][41][42][43]. MAIT cells in CRC express higher levels of IL-17A and lower levels of IFN-γ than those in healthy tissue [42]. Perhaps MAIT cells in CRC patients differentiate towards a Th17 phenotype. A transwell assay showed an increase in the migration of MAIT cells towards mucosal cancer cells [44]. In addition, it is known that normal cells express low levels of MR1 [7,8,24]. Perhaps the inflammatory milieu of the tumor is enriched in factors that increase MR1 expression in tumor tissues, including both tumor cells and stromal cells. Thus, the MR1-specific cytotoxic activity of MAIT cells may reach many cell types within a tumor, disrupting the integrity of the tumor stroma, vessels, and lymphoid tissues, and facilitating the dissemination of tumor cells beyond the tumor milieu.
The frequency of circulating MAIT cells in patients with non-mucosa-associated solid tumors (e.g., kidney, breast, and thyroid cancers) was found to be high [44]. Interestingly, in kidney, urothelial, and prostate cancers, high MR1 mRNA expression levels are associated with a better probability of survival than low MR1 mRNA expression levels [28]. In patients suffering from breast, thyroid, and lung cancers, the MR1 expression level is not associated with survival [28]. Worth mentioning is the fact that, in several different cancers, including glioma, melanoma, colorectal, stomach, liver, pancreatic, head and neck, cervical, endometrial, and ovarian cancers, a high MR1 expression level is associated with a decreased probability of survival [28]. It should be noted that levels of MR1 expression might not correlate with levels of MAIT cell infiltration into tumors; these two parameters should be investigated in parallel to provide clear conclusions. More studies using larger cohorts and complete histopathological classification of the tumors will provide a precise understanding of the role of MAIT cells in the various cancers and their relationship to clinical outcomes.
In addition to solid tumors, it has been shown that multiple myeloma (MM) cell lines express MR1 in complex with vitamin B metabolites. Consequently, MAIT cells derived from healthy controls were able to efficiently kill MM cells in vitro [44]. Interestingly, in a study on newly diagnosed MM patients, the frequency of MAIT cells was lower than that in healthy controls [44]. Moreover, the percentages of CD8 + and CD8 -CD4 -MAIT cells in MM patients were lower than in healthy controls [44]. Thus, MAIT cells may play a protective role in MM. In addition, a study using peripheral blood from 91 patients suffering from myelogenous leukemia and 20 healthy controls showed that MAIT cells displayed TCR-activated cytotoxic activity [44]. Indeed, after in vitro activation they were cytotoxic to myelogenous leukemia cells [44].
MAIT Cells and IL-17 Family Members
It is interesting that MAIT cells or different subsets of MAIT cells appear to play dichotomous roles, promoting the growth of or eliminating cancer cells [12,13,28]. One school of thought is that MAIT cells may promote cancer growth at early stages of tumorigenesis and eliminate cancer cells at late stages [12,13,28]. We reasoned that this dual role might have to do with the subtype of MAIT cells infiltrating the tumor and/or the tumor microenvironment.
Such a dichotomous role of MAIT cells may have something to do with their capacity to secrete IL-17 and/or with the expression levels and isoforms of IL-17 receptors (IL-17RA to IL-17RE) within tumors [45,46]. IL-17 family members are widely reported to have dual roles in tumorigenesis [45,46]. The IL-17 family comprises six molecules (IL-17A to IL-17F, including IL-17E, which is also called IL-25) [45,46]. They display high homology, are secreted as homodimers or heterodimers, and have different binding affinities for IL-17Rs [45,46]. For instance, IL-17A may exhibit 20-fold higher-affinity binding than IL-17F to, presumably, the IL-17RA/RC heterodimer, providing a signal that is more robust than that of IL-17F [45,46].
That MAIT cells secrete IL-17 may explain their tumor-growth-promoting ability, through IL-17's role in shaping the tumor microenvironment. IL-17 induces the secretion of CCL2, CXCL1, CXCL5, CXCL6, and CXCL8 from several types of cells that infiltrate the tumor, and subsequently it recruits myeloid-derived suppressor cells, including tumor-associated macrophages [45,46]. On the other hand, IL-17 family members may elicit immune defense against tumors by orchestrating anti-tumor activity, promoting the secretion of chemokines that attract tumor-suppressive lymphocytes, such as CD8+ T cells, to the tumor [45,46]. In addition to IL-17 production for host defense against tumors, MAIT cells are able to directly eliminate tumor cells through the expression of perforin and granzyme B after activation through their TCRs, which are specific for target tumor cells expressing MR1 in complex with metabolite antigens in the plasma membrane [7][8][9][10][11][12][13][14]28,30,31,41,47]. Moreover, IL-17A is known to stimulate tumor cell proliferation and metastasis in non-small-cell lung cancer and breast cancer [48][49][50][51][52]. Perhaps this is one of the mechanisms associated with the poor prognosis of HCC and CRC patients whose tumors harbor IL-17A-expressing MAIT cells.
Until we know in detail how IL-17 family members, secreted from MAIT cells, act as factors to promote tumor cell growth or to enhance tumor cell killing and MAIT activation, we cannot propose a safe intervention to antagonize or stimulate IL-17 production, secretion, and functions for cancer therapies. Given the fact that vitamins B2 and B9 are important for all mammalian cell metabolic functions, targeting the inhibition or activation of their receptors is unlikely to be beneficial for the inhibition or activation of MAIT cells [53,54].
Atypical MR1-Restricted T cells
In 2017, Lepore et al. reported a novel population of MR1-restricted T cells that expressed diverse TCR-α and TCR-β genes, yet they were unable to recognize previously identified microbial or folate-derived ligands of MR1 [55]. These cells were named MR1T cells. The frequency of circulating MR1T cells is estimated to be 0.02% of total T cells [55]. Six MR1T lines were cloned and their TCRs sequenced (Table 2). All MR1T cells expressed IFN-γ, and some expressed other cytokines, as in Th1 cells (IL-2, TNF-α, and TNF-β), Th2 cells (IL-3, IL-4, IL-5, IL-6, IL-10, IL-13), and Th17 cells (IL-17A, G-CSF, GM-CSF), supporting remarkable functional plasticity [55]. Two MR1T cell clones expressed transcription factors FOXP3, FOSL2, and IRF4 [55]. Three years later, Crowther et al. reported a single clone of MR1T cells that mediated the in vitro killing of cancer cells and the in vivo regression of autologous and non-autologous tumors [24]. They isolated a clone, MC.7.G5 (Table 2), that was able to kill A549 cells and sequenced its TCR. This MR1T clone expressed IFN-γ and TNF-α upon activation; no information was provided regarding PLZF [24], similar to the Lepore et al. report [55]. Crowther et al. made cDNA constructs with the TCR for the transfection and expression of those TCRs in cytotoxic T cells and performed comprehensive genome-wide knockout screening using the CRISPR-Cas9 system. They were able to find six genes (MR1, β2M, RFX, RFXANK, RFXAP, and STAT6) in HEK293T cells essential for the activation of MC.7.G5 cells [24]. They discovered that MR1 is one of the essential proteins needed for MR1T TCR-mediated targeting of cancer cells while sparing noncancerous cells [24]. Clone MC.7.G5 expresses TCRs capable of recognizing MR1-metabolite complexes in the plasma membrane of several different types of tumor cell lines and primary tumor cells in vitro [24]. However, as mentioned above, these metabolites (i.e., antigens recognized by the TCR) are currently unknown [24].
Crowther et al. showed that MR1T cells induced cell death in autologous and non-autologous melanoma, as well as lung, breast, colon, prostate, ovarian, and hematological cancer cell lines [24]. The cytotoxicity of MR1T cells depends on: (1) the expression of MR1 in complex with β2M on the tumor-cell surface; (2) the unknown antigen or antigens that are restricted by MR1-β2M; (3) the TCR of the MR1T cells that recognizes the MR1-β2M-ligand complex. We cannot completely rule out that the MR1T TCRs require an additional and unidentified membrane molecule for stable interaction with MR1-β2M-ligand. The chemical natures of the antigen or antigens within tumor cells bound to the MR1 of tumor cells and the TCR of MR1T cells remain elusive, but it is possible that they are unstable intermediates resulting from the reaction of vitamin B2 (riboflavin) intermediates with glyoxal or methylglyoxal (see below).
TCR Antigens of MAIT Versus MR1T Cells
It is important to mention that neither riboflavin (vitamin B2) nor folic acid (vitamin B9) is produced by mammalian cells [25,27,56]. Thus, microbial infection or colonization with microorganisms expressing functional vitamin B-metabolizing enzymes might be required to produce 5-OP-RU and 5-OE-RU. Circulating unstable metabolites of vitamins B2 and B9 produced in microbial-colonized mucosa may reach the tumor cell milieu and be captured by specific receptors and translocated to the intracellular space, where they react with glyoxal or methylglyoxal to generate the MR1-binding tumor metabolites [8,57]. Whether the resulting MR1-metabolite complex is able to activate MAIT cells in vivo is not clear because 5-OP-RU and 5-OE-RU, but not 6-FP, can activate MAIT cells in vitro [7,14].
In regard to the manner by which MAIT cells' TCR α- and β-chains dock the MR1-antigen metabolite complex, Patel et al. [14] described the astonishing similarity of this interaction with that of conventional TCR α- and β-chains with MHC class I/peptide complexes [14]. However, upon interaction of MAIT cells' TCR with MR1-antigen complexes, the former undergoes remodeling of the antigen-binding cleft, a feature observed in innate pattern-recognition receptors [14]. Thus, the TCR of MAIT cells might be an innate-like pattern-recognition receptor targeted towards vitamin B metabolites, and MAIT cells are considered reminiscent of innate immunity, bridging innate and cognate mechanisms of ligand recognition. In support of the innate-like nature of MAIT cells' TCRs is the fact that they recognize the monomorphic MR1 in complex with a limited number of metabolite antigens [7,14]. In contrast, the classical MHC-I proteins are highly polymorphic and the peptides they bind are diverse, depending on the flexibility of the TCR V region, which is often sufficient to allow the TCR to interact with numerous MHC-I/peptide complexes [58].
We have limited knowledge of the tumor antigens recognized by the αβ TCRs of MR1T. The anti-cancer MR1T clones isolated by Crowther et al. did not recognize MR1 tetramer complexes with the microbial-derived T cell activator 5-OP-RU [24,35]. Yet, the recognition of target cancer cells by MR1T was reduced when loaded with either MAIT-activating bacteria, Mycobacterium smegmatis or Salmonella enterica, or the MR1 ligand acetyl-6-FP [24]. These results indicate that the as yet unknown ligand or ligands restricted by MR1, and thereby recognized by MR1T cells, have a similar structure to acetyl-6-FP and other MAIT TCR ligands. In addition to being byproducts of glycolysis, glyoxal and methylglyoxal come from food sources, yet it is tantalizing to speculate that MR1-expressing cancer cells undergo increased glycolysis to produce glyoxal and methylglyoxal, which react with vitamin B metabolites to form the antigen or antigens that bind MR1 and are recognized by the TCRs of MR1T cells. As the unknown metabolite antigen or antigens presented by MR1 are specific to or associated with cancer, they may represent a novel class of neoantigens, beyond the neo-peptides arising from altered tumor proteins and presented by classical MHC-I or MHC-II.
MR1-Restricted γδ T Cells
Virtually all MR1-restricted T cells were reported to have αβ TCRs until late 2019, when Le Nours et al. reported a class of γδ TCRs in MR1-restricted T cells [59] (Table 1). This group detected MR1-tetramer + γδ T cells that accounted for <0.001% to 0.1% of CD3 + circulating T cells and <0.1% to 5% of γδ T cells. MR1-5-OP-RU tetramer + γδ T cells were mostly CD4 − CD8α − or CD8α + with variable CD161 expression, resembling other cells of the γδ T cell lineage [59]. MR1-restricted γδ T cells were detected by staining in the liver, stomach, lung, and duodenum of healthy subjects and were enriched in a celiac duodenum and a Merkel cell carcinoma [59]. The group solved the crystal structure of a γδ TCR-MR1-5-OP-RU complex and found that the γδ TCR binds underneath the MR1 antigen-binding cleft, rather than binding to the presented antigen within the cleft [59]. Thus, MR1-restricted γδ TCRs in T cells can adopt diverse binding modes with MR1, representing noteworthy progress for both γδ T cell and MR1 biology. We classify these γδ T cells as an independent subset of MR1-restricted T cells (Table 1).
MR1T Cells in Cancer Immunotherapy
The discovery of MR1T opens the door to translating the findings of MR1-restricted T cells to clinical application. The potential of MAIT cells in immunotherapy is limited, as they are essential for host immunity to bacterial infections, and their antigens are not specific to or associated with cancer [28]. The nature of the as yet unknown neoantigen to MR1T prompts us to envision four potential routes of MR1T cells in cancer immunotherapy: TCR gene therapy, monoclonal antibody therapy, chimeric antigen receptor (CAR) T therapy, and bispecific T cell engager (BiTE) therapy ( Figure 4). First, autologous T cells are genetically engineered with the TCRα and TCRβ subunits of MR1T cells (such as those listed in Table 2). Second, a monoclonal antibody is developed to bind both MR1 and its tumor-specific antigens to induce antibody-dependent cytotoxicity. Third, the single-chain variable fragment (scFv) of the above antibody is used in CAR T cells to target the MR1-antigen complex from tumor cells. Finally, the above scFv is fused to the scFv of an antibody against CD3 in a BiTE design to attract CD3 + T cells. The last three approaches would require us to identify the tumor-specific antigen or antigens first before generating the monoclonal antibodies. All these potential therapeutic approaches are dependent on MR1 expressed on the tumor cell surface and bound to the cancer-specific metabolites (i.e., neoantigens) but independent of antigen-presenting cell processing. A less appealing use of MR1T cells is to increase the generation or accumulation of the as yet unknown antigen in tumor cells as vaccines or treatments. This mechanism of action in immunotherapy has been attempted using neoantigens based on mutant tumor-protein-derived peptides. Yet, given the small size and the potentially transient nature of these metabolite neoantigens for MR1T cells, this approach is suboptimal. 
Concluding Remarks
Immune surveillance is the mechanism by which the immune system detects and destroys any threat to the body, such as microbial pathogens or cancer cells. MAIT cells are capable of activating or inactivating a great variety of immune cells and are believed to, in some circumstances, participate in immune surveillance [28]. The activation of classical and non-classical MAIT cells using ligand agonists from microbes, such as vitamin B metabolites, is of great interest in host defense. Most of our knowledge on MR1-restricted T cells in cancer comes from studies on MAIT cells, which have been shown to have a dichotomous role in prognosis [28]. However, MR1T cells, the atypical MR1-restricted T cells, have a predominant role in killing tumor cells while sparing noncancerous counterparts. As MR1 is monomorphic and expressed in a wide range of cancer tissues, a future MR1T cell-based immunological therapy against all MR1-expressing cancers can be hypothesized and pursued by academia and industry. A possible obstacle in translating MR1 findings is T cell exhaustion and anergy. If this is the case, a combination of an MR1T agent and an antibody blocking immune checkpoints, such as anti-PD1 or anti-CTLA4, might hold promise. Moreover, future clinical studies should embrace efficient biomarkers to select patients who may benefit the most from this type of treatment.
Granular computing with multiple granular layers for brain big data processing
Big data is the term for a collection of datasets so huge and complex that they become difficult to process using on-hand theoretical models and technique tools. Brain big data is one of the most typical and important kinds of big data, collected using powerful equipment for functional magnetic resonance imaging, multichannel electroencephalography, magnetoencephalography, positron emission tomography, near-infrared spectroscopic imaging, as well as other various devices. Granular computing with multiple granular layers, referred to as multi-granular computing (MGrC) for short hereafter, is an emerging computing paradigm of information processing, which simulates the multi-granular intelligent thinking model of the human brain. It concerns the processing of complex information entities called information granules, which arise in the process of data abstraction and derivation of information and even knowledge from data. This paper analyzes three basic mechanisms of MGrC, namely granularity optimization, granularity conversion, and multi-granularity joint computation, and discusses the potential of introducing MGrC into intelligent processing of brain big data.
Introduction of big data and data science
To gain an insight of philosophy into the nature of brain data and the significance of processing it, we would firstly introduce a broad view on some related concepts such as physical space, social space, data space, natural sciences, social sciences, and data sciences.
For a long time, the physical space and the social space have been used to describe phenomena in the natural world and human society, respectively, and research on these spaces has led to natural science and social science. In recent years, the ubiquitous digitalization of both the natural world and human society has produced huge amounts of data. Along with ''big data'' becoming a hot topic for researchers, entrepreneurs, and government officials, people have realized that a data space has come into existence.
The connection and interaction among people is one of the key sources of human intelligence; in other words, the interactions of elements in social space produce human intelligence. So, similarly, it is expected that the relations and interactions of entities in data space would produce other forms of intelligence, such as machine intelligence and web intelligence [1].
The data space is ''relatively independent'' of physical space and social space, since it remains stable in a way despite being a reflection of them. Once the data have been generated, they will not evolve as the described objects change unless a special mechanism is arranged. One dataset, as a mirror of entities from the natural world or human society, would yield new results if interacted with others, and then the results may have a reaction on the natural world or human society with the assistance of automatic control devices or human beings. The data may have a powerful reaction on the real world even if they are fabricated; e.g., rumors spread via mobile phones and the Internet played a vicious role in the 2011 London riots [2].
It is generally agreed that research on data space will lead to data science, which differs in many ways from natural and social science with respect to research objectives, methodologies, and technologies. In some circumstances, data science can be used interchangeably with big data [3]. To get the best out of big data, funding agencies should develop shared tools for optimizing discovery and train a new breed of researchers, says Mattmann [4]. Data science need not always be for big data; however, the fact that data are scaling up makes big data an important aspect of data science [3].
''Big data'' has been the most highlighted term of the past two years, and it can be expected with much confidence that it will continue to be popular in the next few years for its promising utility in many fields such as commerce and business, biology, public administration, materials science, and cognition in the human brain, just to name a few. People from academia, industry, and the open source community have done a lot of work concerning big data analytics.
The studies of big data by the academic community can be classified into two categories: basic research and application research.
The basic research on big data concerns basic concepts, rules, procedures, and so on. Fisher discussed the challenges lying in the interactions in big data analytics [5]. A community white paper developed by leading researchers across the United States discussed the application of big data in several typical fields and proposed a data analysis pipeline [6]. Recently, Wu presented a HACE theorem that characterizes the features of the big data revolution, and proposed a 3-tiered big data processing model [7]. A closeup view of big data was demonstrated by Chen and Zhang, which included applications, opportunities, and challenges of big data; the state-of-the-art techniques and technologies; as well as several underlying methodologies to handle the data deluge [8]. Han presented a novel skyline algorithm on big data showing significant advantage over the existing skyline algorithms [9], and many other studies fall into this category, such as [10][11][12][13].
Application research on big data refers to the applications of big data analytics in many different fields. In commerce and business, Chen introduced in detail the evolution of business intelligence, analytics, and the impact of big data in typical areas [14]. In biology, powerful computers and numerous tools for data analysis are crucial in drug discovery and other areas, and biologists get neither their feet nor their hands wet [15]. In public administration, the Trento big data platform offers the service of representing the mean availability of cars in regions of Munich at noon, which can be easily used to improve customer satisfaction by identifying bottlenecks [16]. In materials science, advances in data analysis have placed the field on the verge of a revolution in how researchers conduct their work, analyze properties and trends in their data, and even discover new materials [17].
There are also quite a few research works which address some challenges in big data analytics with keywords like ''huge data,'' ''large scale dataset,'' and ''high speed streaming data,'' but not ''big data''. These works surely should be noticed and appreciated by big data researchers and practitioners [18][19][20].
The international IT giants such as Google, IBM, Microsoft, Oracle, and EMC have developed their own big data solution systems and platforms, which are Dremel, InfoSphere BigInsights and InfoSphere Streams, HDInsight, ExaData, Greenplum and so forth [21][22][23][24][25][26]. Most of the big data platforms are based on Hadoop. Apache also supports other projects related to Hadoop such as HBase, Hive, Pig, Mahout, and Spark, each of which has special effect in dealing with different challenging aspects in big data processing (BDP) [27]. In addition to the projects supported by Apache, there are other open source big data projects, such as Cloudera Impala [28] and RHIPE [29].
The rest of the paper is organized in the following fashion. Section 2 discusses brain big data and its applications. Section 3 introduces the three mechanisms of MGrC and discusses their relationship with five major theoretical models of MGrC. Some key issues of BDP based on MGrC are also analyzed in this section. In Sect. 4, we propose the potential of using MGrC to explore brain big data. The conclusions are drawn in Sect. 5.
Brain big data
Among the methods of generating data from the natural world and human society, using equipment such as fMRI, EEG, and MEG to collect brain data is of great interest to interdisciplinary researchers in computing, neuroscience, and cognitive psychology [30]. Because the techniques of noninvasive studies of human brain function have been in widespread use to detect metabolic activity and neuronal activity throughout the brains of different subjects all around the world, huge amounts of complex datasets are collected every day. There is no doubt that brain data are a significant category of big data, which hold great potential to unlock mysteries of the human mind [31].
Brain data are in the forms of pulse curves, 2D images [32], and 3D structures reconstructed from 2D images [33], as shown in Fig. 1. Pulse curves are generated by EEG; 2D images are produced by fMRI, MEG, OCT, etc.; and 3D structures are reconstructed from 2D images using computer graphics technology. Furthermore, 4D models of the brain can be based on imaging and modeling its 3D structure at a sequence of time-points [33]. We can see that brain data are more complex than regular information tables, which leads to difficulties in modeling and processing them.
Research on brain data can lead to a new understanding of the brain, new treatments for brain diseases (such as Alzheimer's and Parkinson's [34]), and new brain-like computing technologies [35]. The significance of brain data research has been realized so clearly that governments of the EU and USA started their own brain projects [35,36]. There have been some successful studies in this field. Ryali described a novel method based on logistic regression using a combination of L1 and L2 norm regularization to identify relevant discriminative brain regions and accurately classify fMRI data [37]. Zhong and Chen proposed Data-Brain, a new conceptual model of brain data, to explicitly represent various relationships among multiple human brain data sources, with respect to all major aspects and capabilities of human information processing systems [32].
3 Multi-granularity computing for big data

''GrC is a superset of the theory of fuzzy information granulation, rough set theory and interval computations, and is a subset of granular mathematics,'' stated Zadeh in 1997. Granules are any subsets, classes, objects, clusters, and elements of a universe as they are drawn together by distinguishability, similarity, or functionality [38]. Yao considers GrC to be a label of theories, methodologies, techniques, and tools that make use of granules in the process of problem solving [39]. GrC has become one of the fastest growing information processing paradigms in the domain of computational intelligence and human-centric systems [38]. There are two fundamental issues in GrC: granulation and granular structure. Different semantic aspects and algorithm aspects of granulation will lead to different granular structures of the universe. Chen defined five classes of modal-style operators to construct granular structure and hierarchical structure of data based on the lattice of concepts [40].
Evolved from GrC, MGrC emphasizes jointly utilizing multiple levels of information granules (IG) in problem solving, instead of considering only one optimal granular layer.
Three basic mechanisms and five theoretical models of MGrC
MGrC considers multiple levels of IG when solving a problem, and there has been a lot of research in this regard [41][42][43][44][45][62][63][64][65][66][67][68][69]. Three basic mechanisms of MGrC can be summarized from these research works with regard to the way in which multi-granular levels are used in problem solving. They are granularity optimization, granularity conversion, and multi-granularity joint computation. In granularity optimization, the most suitable granular level of a domain is chosen for the multi-granular information/knowledge representation model (MGrR), and the most efficient and sufficiently satisfactory solution is generated on it [41][42][43]. Granularity conversion means the working granularity layer will be switched between adjacent layers or jump to a higher or lower granular layer, in accordance with the requirements of solving a problem [44,45]. Multi-granularity joint computation takes a problem-oriented MGrR as input, and every layer of the MGrR is employed jointly to achieve a correct solution to the problem. Each of the three mechanisms has its particular type of problem to deal with. The three basic mechanisms are a new perspective on GrC. Then, what is the relationship between the three mechanisms and the models that implement GrC, such as fuzzy set, rough set, quotient space, cloud model, and deep learning? We will see that some models suit certain mechanisms better, which is introduced in detail as follows.

Fig. 1 Forms of brain data. a Pulse curves (EEG)-http://www.trueimpact.ca/introduction-to-electroencephalogram-eeg/, b 2D images (fMRI)-http://irc.cchmc.org/research/fmri/cochlear.php, c 3D structures (reconstructed)-http://cloakunfurled.com/tag/fmri/
Granularity optimization
The theories of fuzzy set and rough set are good choices for the mechanism of granularity optimization. The fuzzy set theory presented by Zadeh in 1965 starts with definitions of membership functions: the more membership functions defined over an attribute, the finer the fuzzy IGs into which the attribute is granulated. The reason for fuzzy IG is that crisp IG (e.g., an interval partitioned by exact values) does not reflect the fact that the granules are fuzzy in almost all of human reasoning and concept formation [46,47]. The number of concepts formed through fuzzy granulation reflects the corresponding granularity being relatively fine or coarse, and deciding on this number is an application-specific optimization problem.
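As a minimal illustration of how the number of membership functions fixes the grain size, the sketch below granulates a hypothetical ''age'' attribute with two coarse fuzzy granules and, alternatively, four finer ones. All granule names and breakpoints are illustrative choices, not taken from [46,47].

```python
# Sketch: triangular membership functions granulating the attribute "age".
# The number of functions fixes the granularity: more functions -> finer granules.
def triangular(a, b, c):
    """Return a triangular membership function peaking at b over (a, c)."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Coarse granulation: two fuzzy granules over ages 0..100.
coarse = {"young": triangular(-1, 0, 60), "old": triangular(40, 100, 101)}
# Finer granulation: four fuzzy granules over the same domain.
fine = {"child": triangular(-1, 0, 20), "young": triangular(10, 25, 45),
        "middle": triangular(35, 50, 70), "old": triangular(60, 100, 101)}

def granulate(x, granules):
    """Map a crisp value to the fuzzy granule with the highest membership."""
    return max(granules, key=lambda g: granules[g](x))

print(granulate(48, coarse))  # -> 'young'
print(granulate(48, fine))    # -> 'middle'
```

The same crisp value lands in different concepts at different granularities, which is exactly the choice that granularity optimization must make per application.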
The rough set theory developed by Pawlak in 1982 is an effective model to acquire knowledge in information systems, with upper approximation and lower approximation as its core concepts, making decisions according to the definitions of the distinguishable relation and attribute reduct. Researchers of related fields have made a great variety of improvements to the classic rough set theory, mainly by redefining the distinguishable relation and approximation operators [48][49][50], and integrated it with other knowledge acquisition models, which yields rough neural computation [51], rough fuzzy set and fuzzy rough set [52], and so on.
Rough set can be used to granulate a set of objects into IGs. The grain size of the IG is determined by how many attributes and how many discrete values each attribute takes in the subset of the whole attribute set, which is selected to do the granulation. Generally, the more attributes and the more values each attribute takes, the finer the resulting IGs.
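This granulation can be sketched concretely with a small, invented decision table: the indiscernibility classes under a chosen attribute subset are the IGs, and a coarser attribute subset yields rougher lower/upper approximations of a target set. The table, attribute names, and object labels below are all hypothetical.

```python
from collections import defaultdict

# Hypothetical decision table: objects described by condition attributes.
table = {
    "p1": {"headache": "yes", "temp": "high"},
    "p2": {"headache": "yes", "temp": "high"},
    "p3": {"headache": "no",  "temp": "high"},
    "p4": {"headache": "no",  "temp": "normal"},
}

def partition(table, attrs):
    """Granulate objects into indiscernibility classes w.r.t. chosen attributes."""
    blocks = defaultdict(set)
    for obj, row in table.items():
        blocks[tuple(row[a] for a in attrs)].add(obj)
    return list(blocks.values())

def approximations(blocks, target):
    """Pawlak lower and upper approximations of a target set of objects."""
    lower, upper = set(), set()
    for b in blocks:
        if b <= target:       # granule certainly inside the target
            lower |= b
        if b & target:        # granule possibly overlapping the target
            upper |= b
    return lower, upper

flu = {"p1", "p2"}
fine = partition(table, ["headache", "temp"])    # 3 granules
coarse = partition(table, ["temp"])              # 2 coarser granules
```

With both attributes the target is exactly definable (lower = upper = {p1, p2}); dropping ''headache'' coarsens the granules so the lower approximation becomes empty while the upper grows to {p1, p2, p3}. The grain size thus directly controls how sharply knowledge can be expressed.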
In the perspective of knowledge transformation [53], the process of data analyzing and problem solving by fuzzy sets or rough sets is actually to find a mapping from the information represented by the original finest-grained data to the knowledge hidden behind a set of optimized coarser and more abstract IGs.
Granularity conversion
The quotient space theory proposed by Zhang is a model for problem solving with the basic idea of conceptualizing the world at different granularities and shifting the focus of thinking onto a different abstract level [54,55]. It is not hard to tell that quotient space is meant to solve problems that require granularity conversion. In the quotient space theory, a problem space is described by a triplet (X, f, T) with X as its domain, f as its attributes, and T its structure. Suppose R is an equivalence relation on X; then [X] is the quotient set under R. Taking [X] as a new domain, we have a new problem space ([X], [f], [T]). The worlds with different granularities are represented by a set of quotient spaces. Based on these descriptions, the construction of quotient spaces of different grain sizes and problem solving on these spaces have been researched [55].
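The construction of ([X], [f]) from (X, f) can be sketched in a few lines. Here the equivalence relation R is given as a class-labelling function, and the lifted attribute [f] is taken to be the class mean, which is one illustrative choice rather than something prescribed by the theory.

```python
def quotient(X, f, r):
    """Build the quotient domain [X] under the equivalence relation induced by
    r (x ~ y iff r(x) == r(y)) and lift the attribute f to [f] by class means."""
    classes = {}
    for x in X:
        classes.setdefault(r(x), []).append(x)
    qX = list(classes.values())
    qf = [sum(f(x) for x in c) / len(c) for c in qX]
    return qX, qf

X = list(range(10))
f = lambda x: float(x * x)
r = lambda x: x // 5                 # two equivalence classes: 0..4 and 5..9
qX, qf = quotient(X, f, r)           # the coarser problem space ([X], [f])
```

Reasoning first on ([X], [f]) (here, comparing just two class summaries) and descending to the original space only where necessary is the shift of abstraction level that the theory formalizes.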
The quotient space theory has attracted the attention of researchers from the fields of information science, automatic control, and applied mathematics [56,57]. Integrating the idea of fuzzy mathematics into quotient space theory, Zhang proposed fuzzy quotient space theory subsequently, which provides a powerful mathematical model and tool for GrC [58,59]. Fuzzy quotient space theory introduces fuzzy equivalence relation into the construction of quotient space, in which different threshold values of the membership function will lead to quotient spaces of different grain sizes. By setting different threshold values, an MGrR can be derived.
The cloud model proposed by Li realizes the uncertain transformation between qualitative concepts and quantitative values and can be further used to realize the bidirectional cognition, i.e., from concept intension to extension and from extension to intension [60], as shown in Fig. 2. Since a concept definitely has the property of granularity, mapping of quantitative values to a suitable grain-sized qualitative concept is also the process of granularity optimization.
Inspired by the idea of MGrC, Liu constructed an MGrR using cloud model with an Adaptive Gaussian Cloud Transformation (A-GCT) algorithm [61]. Multi-granular concepts are generated by clustering academicians of Chinese Academy of Engineering (ACAE) with regard to age based on the definition of parameter concept clarity, as shown in Fig. 3.
Therefore, granularity conversion can be implemented using cloud model with A-GCT algorithm and a set of different values of parameter concept clarity.
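The intension-to-extension direction of Fig. 2 is realized by the standard forward normal cloud generator, sketched below; the parameters (Ex, En, He) are expectation, entropy, and hyper-entropy, and the specific numbers are illustrative rather than taken from the ACAE experiment of [61].

```python
import math
import random

def forward_cloud(Ex, En, He, n, seed=0):
    """Forward normal cloud generator: turn a qualitative concept (Ex, En, He)
    into n quantitative cloud drops (x, membership degree)."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        En_i = rng.gauss(En, He)                 # second-order randomness
        x = rng.gauss(Ex, abs(En_i))
        mu = math.exp(-(x - Ex) ** 2 / (2 * En_i ** 2)) if En_i else 1.0
        drops.append((x, mu))
    return drops

# Two grain sizes of a "middle-aged" concept: a larger entropy En gives a
# coarser, more inclusive concept spanning a wider range of ages.
coarse = forward_cloud(Ex=50, En=15, He=1.0, n=1000)
fine = forward_cloud(Ex=50, En=5, He=0.5, n=1000)
```

The A-GCT algorithm of [61] works in the opposite direction: it fits a set of such Gaussian concepts to data and merges them until each reaches the required concept clarity, which is how multi-granular age concepts like those in Fig. 3 are produced.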
Multi-granularity joint computation
Deep learning is a breakthrough of learning by neural networks in recent years. Starting with Hinton publishing his research in Science magazine in 2006, whose major contribution is using deep auto-encoder networks to learn low-dimensional codes for data of high dimensionality [62], other research works on closely related topics were reported afterward [63][64][65]. The structure of restricted Boltzmann machines (RBMs) and a deep belief network based on RBMs are shown in Fig. 4. Deep learning also draws attention from the IT industry. Researchers from Google and Stanford University consider the problem of building high-level, class-specific feature detectors from only unlabeled data, and train a face detector on a cluster with 1,000 machines (16,000 cores) for three days, without having to label images as containing a face or not. The experiment obtains 15.8 % accuracy, a leap of 70 % relative improvement over the previous state-of-the-art [66]. Deep learning was selected as the first of the 10 breakthrough technologies 2013 [67]. The core idea of deep learning is training a deep architecture with many layers of neural network, and the constraints between the adjacent layers are set beforehand. Although deep learning did not become the most popular topic in machine learning until 2006, research with a similar idea can be traced back to the 1990s; for example, Jang presented an adaptive-network-based fuzzy inference system (ANFIS) to implement fuzzy inference with a 5-layered neural network [68], as depicted in Fig. 5. Using a hybrid learning procedure, ANFIS can construct an input-output mapping based on both human knowledge (in the form of fuzzy if-then rules) and stipulated input-output data pairs. And we proposed a triple-valued or multiple-valued logic neural network (TMLNN) to represent and process triple-valued or multiple-valued logic knowledge using neural networks [69], as illustrated in Fig. 6.
The fundamental element of TMLNN is a novel neuron model, the triple-valued or multiple-valued logic neuron (TMLN). Each TMLN can represent a triple-valued or multiple-valued logic rule by itself. There are two types of triple-valued or multiple-valued logic neurons: TMLN-AND (triple-valued or multiple-valued ''logic and'' neuron) and TMLN-OR (triple-valued or multiple-valued ''logic or'' neuron).
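The idea of a single neuron encoding one three-valued rule can be sketched as follows, using Kleene-style min/max semantics over {-1, 0, 1} with weights in {-1, 1} acting as optional negations. This is an illustrative reading only; the exact TMLN formulation in [69] may differ in detail.

```python
# Truth values: -1 = false, 0 = unknown, 1 = true.
def tmln_and(inputs, weights):
    """TMLN-AND sketch: a 'logic and' neuron; weight -1 negates its input."""
    return min(w * x for w, x in zip(weights, inputs))

def tmln_or(inputs, weights):
    """TMLN-OR sketch: a 'logic or' neuron; weight -1 negates its input."""
    return max(w * x for w, x in zip(weights, inputs))

# One neuron per rule: (a AND NOT b) is a single TMLN-AND with weights (1, -1).
rule = lambda a, b: tmln_and([a, b], [1, -1])
print(rule(1, -1))   # true AND NOT false   -> 1 (true)
print(rule(1, 0))    # true AND NOT unknown -> 0 (unknown)
```

A network of such AND neurons feeding an OR neuron then represents a whole rule base in disjunctive normal form.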
The application of deep learning to pattern recognition reflects that humans first take as input the pixels of an object projected onto the retina and then detect the edges of the object, recognize the parts of it, and finally form the high-level abstract concept of the object [70], as shown in Fig. 7a. This means that the visual concept formation procedure is local-to-global, which seems to be contradictory to the topologically global first visual perception theory developed by Chen in 1982 [71]. According to the results of experiments conducted by Chen's team, the human visual system is sensitive to global topological properties, as illustrated in Fig. 7b. However, after a careful analysis, we realize that the two theories are NOT contradictory. In fact, they reflect different facets of human visual cognition. Chen's theory focuses on the last phase of the whole visual concept formation, since the experiments are conducted using noninvasive measurement on the human brain cortex. However, visual concept formation in deep learning considers all the organs of the visual system and the whole perception process.
What can be learnt from Chen's ''global first'' theory and deep learning from the standpoint of MGrC for BDP is that the original finest-grained data (compared to the pixels projected on the retina) are certainly the information source, but we should not confine ourselves to them. Exploiting components of higher-level abstraction (compared to edges and parts) and the relations among them (compared to the topological relations of visual stimuli) is helpful for efficient problem solving.
Deep learning itself is a typical model of multi-granularity joint computation, and it can be expanded to a more general structure for multi-granularity joint computation (MGrJC). The major differences between MGrJC and deep learning are that the input of deep learning is the finest-grained data whereas MGrJC takes an MGrR as its input, and that each layer-wise learner in deep learning is usually a neural network whereas MGrJC generalizes it to any type of learning model.
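A minimal Python sketch may help make the idea of joint computation over several granularities concrete. The mean-pooling granulation and the averaging combination rule below are illustrative assumptions chosen for simplicity, not the MGrJC formulation itself:

```python
# Sketch: represent one signal at several granularities, apply the same
# simple "learner" at each level, then combine the per-level results.
# Granulation scheme (mean-pooling) and combination rule (averaging)
# are illustrative assumptions.

def granulate(signal, window):
    """Coarsen a signal by mean-pooling over non-overlapping windows."""
    return [sum(signal[i:i + window]) / window
            for i in range(0, len(signal) - window + 1, window)]

def joint_compute(signal, windows, learner):
    """Apply the learner at each granularity and average the results."""
    results = [learner(granulate(signal, w)) for w in windows]
    return sum(results) / len(results)

signal = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
# A trivial learner: the mean of the (granulated) representation.
estimate = joint_compute(signal, windows=[1, 2, 4],
                         learner=lambda xs: sum(xs) / len(xs))
print(estimate)  # the mean survives mean-pooling at every granularity: 4.5
```

A real MGrJC system would replace the trivial learner with any learning model and the mean-pooled lists with a full MGrR, but the joint structure stays the same.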
Although we introduced fuzzy set and rough set in the section on granularity optimization, and quotient space and cloud model in the section on granularity conversion, this does not mean that the GrC models are limited to the corresponding mechanisms. Actually, fuzzy set and rough set could be used in granularity conversion, and quotient space and cloud model could be used in granularity optimization as well. The relationship between the three mechanisms and five models is summarized in Table 1.
Fig. 7 a Visual concept formation in deep learning proceeds from local to global [70] and b the visual system was more sensitive to the topological distinction between a connected component with a hole and one with no hole [71]
Key issues for BDP
There are quite a few issues that remain unaddressed despite much effort having been devoted to BDP, some of which share the same cause: analytics always starts from the original, finest-grained data.
Issue 1: Lacking BDP models of human level machine intelligence (HLMI)
The founder of fuzzy set theory, Zadeh, argues that precisiated natural language computing, which originated from CW (computing with words), is the cornerstone of HLMI [72]. The current BDP models fail to simulate the way human thinking grasps the proper granularity of information when solving a problem, and consequently lose the opportunity to build human-centric data processing systems. The research team led by Chen founded the topologically ''global first'' visual perception theory in 1982 [71]. Always dealing with data at the finest granularity does not accord with this law of human perception.
Issue 2: Lacking measures to effectively reduce the size of data in BDP
Volume is the most highlighted challenge compared to other aspects of BDP, and many difficulties are directly caused by it. To cope with this problem, a straightforward idea is to reduce the data size while preserving as much of its information as possible, which avoids excessive reliance on the finest-grained data and reduces the cost of storage and communication.
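One simple way to sketch this size-versus-information trade-off is piecewise aggregate approximation (PAA), where each non-overlapping segment of a series is replaced by its mean. PAA is our illustrative choice here, not a method prescribed by the text; the segment length acts as a tunable granularity parameter:

```python
# Reduce data volume while preserving coarse information: each
# non-overlapping segment is summarized by its mean (PAA).

def paa(series, segment_len):
    segments = [series[i:i + segment_len]
                for i in range(0, len(series), segment_len)]
    return [sum(s) / len(s) for s in segments]

series = [0, 0, 10, 10, 5, 5, 5, 5]
print(paa(series, 2))   # [0.0, 10.0, 5.0, 5.0] -- half the size
print(paa(series, 4))   # [5.0, 5.0]            -- quarter size, coarser
```

Longer segments give greater compression at the price of a coarser granularity, which is exactly the knob a multi-granular representation exposes.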
Issue 3: Lacking the methods to offer effective solution to big data problems with various constraints
There are some situations where a user does not insist on a precise answer to a particular problem regarding BDP, since a coarser-grained, imprecise result would satisfy him/her. There are other situations where the precise answer is not available in time due to the problem complexity, the data amount and complexity, and the capacity of computing and communication; but if the problem granularity is shifted to a coarser granular level, an imprecise yet acceptable result may be obtained in time. Therefore, it is necessary to introduce the term ''effective solution'', which means that the solution meets the user's requirements regarding granularity and timeliness simultaneously; in other words, the solution has a fine enough granularity with respect to the user's request and is delivered in time.
MGrC is able to tackle the issues listed above. For Issue 1, computation with information described in natural language ultimately reduces to computation with granular values, which is the province of GrC [72]. Therefore, MGrC will help BDP move toward HLMI. For Issue 2, multi-granular representation of the original data is a form of simplification or abstraction; hence a considerable reduction in data volume can be realized. When it comes to Issue 3, the most highlighted feature of employing MGrC in BDP is that it can manage to offer effective solutions under various constraints.
MGrC for brain big data
As mentioned in Sect. 2, the targets for brain BDP are to achieve a new understanding of the brain, new treatments for brain disease, and new brain-like computing technologies. These targets are mainly qualitative rather than quantitative; that is, we do not need a solution in the form of a precise value or mathematical function, but a result that can be described in words. This is the very province of MGrC.
There have been some related works on pulse signal processing and remote sensing images with the GrC methodology, from which future research on processing brain big data with MGrC can benefit a lot. For example, Gacek and Pedrycz developed a general framework for a granular representation of ECG signals [73], which share many common features with the EEG form of brain data. Furthermore, Gacek recently discussed the granular representation of time series with a number of representation alternatives and the question of forming adjustable temporal slices, and presented an optimization criterion based on the sum of volumes of information granules (IG) [74]. Meher and Pal presented a new rough-wavelet granular space-based model for land cover classification of multispectral remote sensing images [75], which can be used as a reference for analyzing 2D brain image data.
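In the spirit of the temporal-slice granulation discussed above, a signal can be cut into slices and each slice described by an interval information granule. The slice width and the [min, max] descriptor below are illustrative choices, not the exact construction of the cited works:

```python
# Granulate a time series into temporal slices, each summarized by the
# interval [min, max] it occupies. Slice width is a granularity choice.

def interval_granules(series, slice_len):
    """Summarize each temporal slice by the interval it occupies."""
    granules = []
    for i in range(0, len(series), slice_len):
        window = series[i:i + slice_len]
        granules.append((min(window), max(window)))
    return granules

eeg_like = [0.1, 0.3, -0.2, 0.0, 0.9, 1.1, 0.8, 1.0]
print(interval_granules(eeg_like, 4))
# [(-0.2, 0.3), (0.8, 1.1)] -- a quiet slice and an active slice
```

An optimization criterion such as the sum of interval volumes could then be used to tune the slice boundaries, as in the granular time-series framework cited above.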
The three mechanisms of MGrC have great potential for brain BDP in three facets. Firstly, brain big data is multi-granular in nature. As shown in Fig. 8a, EEG signals can be granulated like other kinds of time series [74] and processed subsequently. For 2D and 3D brain data, they can be viewed hierarchically considering two factors. One is the organizational granularity of the brain, e.g., we could view the brain in this hierarchy: whole brain → lobes → gyrus and sulcus → neurons, with lobes, gyrus, and sulcus shown in Fig. 8b, c, respectively. The other is the measurement granularity of activation degrees in a particular region of the subject's brain, e.g., we could granulate the activation degrees as follows: strongest → very strong → strong → weak → very weak. Secondly, the computation performed on brain big data needs to be multi-granular and produce results of variable precision. As previously mentioned, the targets for brain BDP can be described with words; thus, they are of multi-granularity. For example, research on the cure of one kind of brain disease may focus on the changes of certain gyri and sulci, while another kind of brain disease needs the neurons of the temporal lobe to be investigated. Therefore, the granularity optimization mechanism is useful for the former disease and granularity conversion for the latter. And if the brain disease is the result of multiple causes, then multi-granularity joint computation may be required.
Thirdly, MGrC can help identify the proof or signs of granular thinking in the human brain and offer valuable inspiration to computing technologies. The existence of granular thinking in human beings is already common sense shared by the cognition and computing communities, but to the best of our knowledge, the process of granularity optimization, granularity conversion, or MGrJC in human thinking has not been explicitly depicted by equipment such as fMRI, EEG, or MEG. Therefore, many details of granular thinking in the human brain still remain unknown. Using MGrC to identify and interpret the multi-granular computation occurring in the human brain is a meaningful direction for future work.
Conclusion
In this paper, we first review data space, data science, and research on BDP, and discuss the source, form, significance, and research works of brain big data. We propose the three mechanisms of MGrC and discuss their relationship with five major models of MGrC, i.e., fuzzy set, rough set, quotient space, cloud model, and deep learning. We also discuss the key issues of current BDP and the reasons why MGrC can tackle them. Then we propose the potential of exploring brain big data with MGrC. Future research may include representing real-world brain big data with MGrR and conducting intelligent computation based on it, so as to offer effective solutions to the problems of brain BDP.
Fig. 8 Granularities in brain big data. a Linear models and granular models of time series [74], b lobes of the human brain (coarser granularity)-http://www.brightfocus.org/alzheimers/about/understanding/anatomy-of-the-brain.html and c gyrus and sulcus of the human brain (finer granularity)-http://what-when-how.com/neuroscience/overview-of-the-central-nervous-system-gross-anatomy-of-the-brain-part-1/
Advances in Ophthalmic Optogenetics: Approaches and Applications
Recent advances in optogenetics hold promise for vision restoration in degenerative eye diseases. Optogenetics refers to techniques that use light to control the cellular activity of targeted cells. Although optogenetics is a relatively new technology, multiple therapeutic options are already being explored in pre-clinical and phase I/II clinical trials with the aim of developing novel, safe, and effective treatments for major blinding eye diseases, such as glaucoma and retinitis pigmentosa. Optogenetic approaches to visual restoration are primarily aimed at replacing lost or dysfunctional photoreceptors by inserting light-sensitive proteins into downstream retinal neurons that have no intrinsic light sensitivity. Such approaches are attractive because they are agnostic to the genetic causes of retinal degeneration, which raises hopes that all forms of retinal dystrophic and degenerative diseases could become treatable. Optogenetic strategies can also have a far-reaching impact on translational research by serving as important tools to study the pathogenesis of retinal degeneration and to identify clinically relevant therapeutic targets. For example, the CRY-CIBN optogenetic system has been recently applied to animal models of glaucoma, suggesting a potential role of OCRL in the regulation of intraocular pressure in trabecular meshwork. As optogenetic strategies are being intensely investigated, it appears crucial to consider the opportunities and challenges such therapies may offer. Here, we review the more recent promising optogenetic molecules, vectors, and applications of optogenetics for the treatment of retinal degeneration and glaucoma. We also summarize the preliminary results of ongoing clinical trials for visual restoration.
Introduction
Figure 1. Optogenetic molecules can be delivered and targeted to different regions of the mouse visual system; once in place, optogenetic modules can be activated using light illumination at the appropriate wavelength for the respective module used.
Optogenetic therapy, a new treatment strategy that combines optic and genetic techniques, offers fresh hope for targeting various eye conditions, regardless of the gene responsible for the disease. Optogenetics is a relatively novel and elegant biological technique based on genetic engineering technologies, which makes use of light to control the localization of proteins of interest with a high temporal and spatial resolution to target specific areas within cells, tissues, or organs [1]. Inherited retinal degenerations are estimated to affect 1 in 4000 people worldwide and have become the most common cause of blindness amongst the working-age population in developed countries [2][3][4]. Over the past decade, optogenetics has been intensively applied in both groundbreaking experimental laboratory settings for the discovery of novel biological mechanisms and in the explorations of novel technological techniques to convert optogenetics into innovative therapeutic methods [5][6][7][8]. Especially from a therapeutic standpoint, the feasibility of this technique was proven on multiple occasions [9,10]. One recent example demonstrated that optogenetic molecular tools can be successfully adapted to restore neuron light sensitivity by targeting gene expression in selective retinal cell types. In rodent models of retinitis pigmentosa (RP), a group of progressive, hereditary diseases of the retina that lead to incurable blindness, light-insensitive retinas were successfully reactivated using optogenetics [11]. Importantly, these preliminary studies showed that for the reactivation to occur, it sufficed to deliver genes of light-activated proteins to the few surviving cell types in the remaining retinal circuits, supporting the advantage of optogenetics over classical gene therapy, which relies on the presence of the original cell population [11].
Although in recent years novel approaches to treat diverse eye conditions have emerged from the field of gene therapy, most of these approaches are based on classical local gene-replacement techniques, accompanied by their typical problems and limitations [12][13][14]. There are two main disadvantages of classical gene therapy, which can be summarized by the example of RP. The most obvious limitation is that gene therapy can only target and restore the function of genes in living cells, while the genetic defects in RP result in progressive dysfunction and irreversible loss of photoreceptors [15]. The photoreceptor cells are of two types: rods and cones. These specialized cells, located in the outer neural retina, contain the visual pigments (rod and cone opsins) and function in the capture of light and its conversion into electrical signals via phototransduction [16]. Rods are highly sensitive to light and play a role in peripheral vision and in vision under low-light conditions. Cones function under bright light and are responsible for central discriminative vision and for color detection. In the healthy retina, photoreceptors form synaptic connections with bipolar cells, which in turn synapse with retinal ganglion cells (RGCs). These connections are referred to as the 'primary vertical pathway' to the brain, whereas amacrine cells and horizontal cells form the lateral connections [17]. Indeed, photoreceptor cell death tends to be the final, irreversible event in many blinding diseases, including RP [18]. Despite this, the remaining visual pathway stays largely viable, which provides the rationale for the approaches toward which the field of gene therapy is headed, including optogenetics [19][20][21].
However, even if gene therapy could be applied early, before the loss of rod photoreceptor cells becomes too incapacitating, RP is caused by the mutation of not one but of up to 50 different genes, making the targeting choice for gene therapy a daunting undertaking [22][23][24]. Even so, except for LUXTURNA, a gene replacement therapy for the rare form of autosomal recessive RP caused by mutations in the retinal pigment epithelium 65 (RPE65) gene, there is no approved therapy for RP [12]. Gene-replacement therapy primarily aims to prevent vision loss by slowing the rate of progression of photoreceptor degeneration, which makes it unsuitable for patients with undetermined genotypes and those who present with late-stage disease in which there is profound photoreceptor cell loss. In such degenerated retinas, downstream neurons can be transduced to express opsins thereby conferring sensitivity to light [9,10]. Such strategies are attractive because they have the potential to salvage vision irrespective of the genetic etiology. Hence, optogenetic approaches to treat diseases that affect the inner retina, such as glaucoma, are also being explored [5,6]. As optogenetic strategies are being intensely investigated, it appears crucial to consider the opportunities and challenges such therapies may offer. Here, we review the more recent promising optogenetic molecules, vectors, and applications of optogenetics for the treatment of retinal degeneration and glaucoma. We also summarize the preliminary results of ongoing clinical trials for vision restoration.
Opsins Used for Optogenetic Therapy
The opsin genes used for optogenetic vision restoration are divided into two superfamilies: microbial opsins (Type 1) and animal opsins (Type 2). Although both opsin families encode photoactive proteins consisting of seven transmembrane α-helical domains, they greatly differ in their functions, sensitivity to light, and utility for vision restoration [25]. In Type 1 opsins, the proteins are covalently linked to the all-trans-retinal chromophore, which isomerizes upon light absorption to induce a conformational change in the opsin and directly influence ion channels or pumps. By contrast, Type 2 opsins are usually covalently bound to 11-cis-retinal, and light illumination activates downstream intracellular G-proteincoupled receptor (GPCR) signaling cascades to indirectly influence ion channels [26]. For this reason, microbial opsins, such as channelrhodopsin-2 (ChR2), are the most widely used optogenetics tools in neuroscience to impart light-induced membrane permeability to neurons [27]. For applications specific to ophthalmology, four opsin classes have proven useful in optogenetic vision restoration: channelrhodopsin (ChR), halorhodopsin (HR), melanopsin (OPN4), and human rhodopsin (RHO) [26,28,29].
The different functional properties of each opsin, such as ion selectivity and light sensitivity, enable researchers to optimally target specific subpopulations of neurons within the retina. For example, light-insensitive cone photoreceptors with damaged outer segments caused by retinal degenerative diseases, such as RP, can be reactivated using HR. HR derived from the archaeon Natronomonas pharaonis (NpHR) acts as a light-gated chloride pump with a peak wavelength sensitivity of 580 nm; light activation leads to hyperpolarization of the cell, mirroring the native response of a cone [30]. Alternatively, optogenetics can be used to target the surviving inner retinal neurons following photoreceptor loss, such as retinal bipolar cells via ectopic expression of ChR2 or RHO [29,31].
The ChRs are a family of cation channel-forming opsins that were the first to be identified as optogenetics tools. ChR2, isolated from the green algae Chlamydomonas, is the most commonly used microbial opsin to date and maximally absorbs blue light at a wavelength of approximately 470 nm, which generates a depolarizing inward current in the cell [32]. Several studies have successfully exploited the ectopic expression ChR2 in retinal bipolar cells to rescue visual function in the rd1 mouse model of photoreceptor degeneration [33,34]. ChR2 expression in ON bipolar cells mediated by intravitreal injection of a recombinant adeno-associated virus (AAV) vector leads to the restoration of ON and OFF visual responses at the retinal and cortical levels, highlighting its potential for optogenetic therapy. Natural and engineered variants of ChR, such as ChrimsonR and ChronosFP, have different kinetics and sensitivities to light and are also used in optogenetic therapies [35].
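The wavelength dependence described above can be illustrated with a toy numeric model. Only the peak sensitivities (ChR2 at ~470 nm, NpHR at ~580 nm) come from the text; the Gaussian tuning shape and its 40 nm bandwidth are illustrative assumptions, not measured action spectra:

```python
# Toy model of wavelength-dependent opsin activation. Peaks are from
# the text; Gaussian shape and 40 nm bandwidth are assumptions.
import math

OPSIN_PEAKS_NM = {"ChR2": 470.0, "NpHR": 580.0}

def relative_activation(opsin, wavelength_nm, bandwidth_nm=40.0):
    """Relative activation (0..1) of an opsin at a given wavelength."""
    peak = OPSIN_PEAKS_NM[opsin]
    return math.exp(-((wavelength_nm - peak) ** 2) / (2 * bandwidth_nm ** 2))

# Blue light (470 nm) maximally drives ChR2 but only weakly drives NpHR,
# which is why distinct wavelengths can address distinct opsins.
print(round(relative_activation("ChR2", 470.0), 3))  # 1.0
print(round(relative_activation("NpHR", 470.0), 3))
```

This separation in spectral sensitivity is what allows, in principle, independent optical control of depolarizing and hyperpolarizing opsins in the same tissue.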
RHO is the native visual pigment in rod photoreceptors; its ectopic expression in ON bipolar cells effectively restores visual responses to light in rd1 mice [36]. In fact, light responses of RHO-treated mice are more sensitive and occur at lower light intensities than those of ChR2-treated mice [37]. Human opsins such as RHO and OPN4 are GPCRs, which have integral signal amplification cascades in which a single photon can activate multiple downstream signaling proteins that gate cyclic nucleotide-gated ion channels. In addition to their signal amplification, the ectopic expression of endogenous human proteins may be preferable to that of other species for clinical applications to limit the risk for non-self immune responses [29,37].
In cases of advanced retinal degeneration, RGCs can be targeted with OPN4 or ChR2 [28,38]. OPN4 is a mammalian opsin expressed in a subpopulation of photoreceptive ganglion cells, which support a variety of non-image-forming visual functions, such as circadian photoentrainment and the pupillary light reflex, and are categorized as intrinsically photosensitive retinal ganglion cells (ipRGCs) [28]. Upon photoactivation, OPN4 acts as a GPCR to open cation channels in the cell membrane of ipRGCs [39]. OPN4 has been successfully expressed in the RGCs of rd1 mice; these cells show greater light sensitivity, yet with a slower and more sustained response than RGCs transduced with ChR2 [26,38,39].
Cryptochrome-Based Dimerizers in Optogenetics
An alternative branch of optogenetics derives from different families of blue light receptors in higher plants, fungi, and algae. Studies from this branch of optogenetics have principally focused on photosensory receptors, which, in contrast to photosynthetic pigments responsible for photosynthesis, mediate non-photosynthetic light responses. These types of light photolyase-like receptors were originally discovered in Arabidopsis thaliana, a plant in which the genome encodes multiple different types of photoreceptors; these include phototropins (phot1 and phot2), three LOV/F-box/Kelch domain proteins (ZTL, FKF, and LKP2), red-light receptor phytochromes (phyA, phyB, phyC, phyD, and phyE), and the blue-light receptor photolyase-like flavoproteins commonly known as cryptochromes (CRY1 and CRY2) [40].
The more extensively studied cryptochromes mediate blue-light regulation of gene expression and photomorphogenic responses not only in plants but also in other organisms. The name of these optogenetic modules derives from the 'cryptic' (CRY) origin of this protein family. Blue-light receptors have been extensively used in experimental optogenetic approaches. These wide-ranging approaches include interactions between the photoreceptor cryptochrome 2 (CRY2) and the putative transcription factor cryptochrome-interacting basic-helix-loop-helix 1 (CIB1), and various interactions involving light-oxygen-voltage-sensing domains (LOV domains), which are photosensors found in bacteria, archaea, plants, and fungi that detect blue light via a flavin cofactor [41,42].
Similar to cryptochromes, LOV domains can be found at the N-terminal of diverse signaling and regulatory domains, such as sensor histidine kinases, DNA-binding domains, and factor regulators [40,42]. Structurally, LOV domains are defined by the presence of an active site flavin cofactor and a GXNCRFLQ consensus flavin-adduct protein motif [43,44]. Upon blue-light absorption, these flavin-binding domains function in the regulation of enzymatic activity and signaling cascades that govern cellular responses, such as plant phototropism and bacterial phototaxis [45,46].
Approaches for the Delivery of Optogenetic Therapy in the Eye
The successful delivery and expression of various optogenetic constructs in chosen target areas represent a novel field of increasing interest to the scientific community, and multiple studies aim to optimize the expression and tissue specificity with which optogenetic modules target cells. AAV-based vectors, which have been studied extensively in animal models, are one of the favored modes of delivery. The final transduction efficiency of these vectors has proven superior to that of lentiviral vectors [47,48]. They have been shown to efficaciously express the light-sensitive opsin molecule or CRY2/CIBN-based enzymes in different parts of the eye [5,49]. Furthermore, the AAV-based delivery strategy has translated successfully into an effective treatment for patients; the first approved human AAV gene therapy (LUXTURNA) recently received approval for the treatment of an inherited retinal disease, Leber's congenital amaurosis (LCA). This demonstrated clinical utility bolsters the choice of AAVs as a preferred delivery method for optogenetic approaches.
Although systemic delivery of AAV vectors in gene therapy has proven to be more challenging, AAV vectors in optogenetic applications have the advantage of offering flexibility in the choice of the delivery route [50]. Both intravitreal and subretinal injections provide a more targeted approach than systemic administration, and recent studies highlight the generally minimal adverse events in clinical trials [51]. Common drawbacks that have plagued the usage of AAVs as vectors for gene delivery, including the ability to only transfect particular cell types and a size limitation of packaging capability, do not seem to limit the applicability of optogenetic approaches because optogenetic coding sequences are relatively small in comparison to genes in supplementation strategies [52]. Moreover, transfection efficiency is likely to increase in the future since optogenetic therapies have recently driven investigation of novel engineered capsid variants able to specifically target a variety of different eye cell types, including the retinal pigment epithelium and trabecular meshwork [5,7].
Non-viral vectors, such as nanoparticles, for the ocular delivery of therapeutic materials to the retina are of particular interest for their advantages over viral-based strategies. Viral vectors are used to deliver functional genes to the retina, whereas nanoparticles can deliver both drugs and genes to the retina. Furthermore, the maximum capacity of AAVs to accommodate genomic information is approximately 4.8 kb, which prevents their use for the delivery of large genes, such as USH2A [53]. Several pre-clinical studies have used murine models to assess the safety and expression profiles of nanoparticle-mediated gene transfer to treat various inherited retinal diseases [32,36,54,55]. Liposome-protamine-DNA nanoparticles have been shown to promote cell-specific delivery and long-term expression of the RPE65 gene in RPE65 knockout mice in vivo [25]. Subretinal injection of compacted DNA nanoparticles containing the retinal degeneration slow (Rds) gene resulted in improved cone and rod function in an RP mouse model [26]. Further, biodegradable poly(β-amino ester) nanoparticle-based delivery of a plasmid encoding vascular endothelial growth factor (VEGF) neutralizing protein, p3sF1t1Fc, significantly suppressed vascular leakage and neovascularization in a mouse model of wet age-related macular degeneration (AMD) [36]. These studies suggest that nanoparticles are less immunogenic than viral therapies and can drive high levels of transgene expression in ocular tissues.
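As a back-of-the-envelope illustration of the packaging constraint discussed above, the sketch below compares construct sizes against the roughly 4.8 kb AAV capacity from the text; the example construct sizes are rough illustrative figures, not exact sequence annotations:

```python
# Simple check of the AAV packaging constraint: constructs larger than
# ~4.8 kb (per the text) cannot be packaged in a standard AAV vector.
# Example sizes below are rough illustrative figures.

AAV_CAPACITY_KB = 4.8

def fits_in_aav(construct_kb, capacity_kb=AAV_CAPACITY_KB):
    """Return True if a construct of the given size fits in an AAV."""
    return construct_kb <= capacity_kb

print(fits_in_aav(1.0))   # a compact optogenetic cassette -> True
print(fits_in_aav(15.6))  # a USH2A-scale coding sequence -> False
```

This is why typical optogenetic coding sequences pose no packaging problem for AAVs, while large genes such as USH2A push strategies toward nanoparticles or other non-viral carriers.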
The next decade will likely see dramatic improvements in optogenetic delivery approaches thanks to additional natural and synthetic AAV serotypes and enhanced specificity of promoters that will restrict expression to specific cell types. Furthermore, the optogenetic modules themselves will improve because of developments in their sensitivity as artificial photoreceptors or as sensors responsive to a wide range of wavelengths, including near-infrared wavelengths [56].
Using Optogenetics in Retinal Degeneration and Glaucoma
Several light-induced molecules have been applied to eye diseases, particularly in the field of retinal degeneration. Retinal degeneration primarily affects light-detecting rod cells, which are mainly responsible for peripheral and dim light vision, and cone photoreceptors, which function most effectively in relatively bright light and are primarily responsible for high acuity central vision and color vision. Retinal degenerations cause severe visual impairment and blindness in millions of individuals worldwide [57][58][59]. Current research focuses on investigating the role of each gene in initiating and driving disease progression because there is a strong genetic component to the development of retinal degeneration, and over 300 causative genes have been identified to date [23,60,61]. This effort will eventually narrow down possible targets for gene replacement therapy via AAV. An optogenetic approach takes advantage of the evidence that, although the underlying role of the responsible genes is variable, the pathogenesis of retinal degeneration appears to converge on common final pathways, which produces considerable similarities in the physiological changes ultimately observed in the affected retina. These points of convergence can then be universally targeted by providing light-sensitive molecules that facilitate light perception in the surviving layers of cells in the neural retina [62].
The recent advances in optogenetics have also been applied to the field of glaucoma [5,6,63]. Glaucoma comprises a group of progressive optic neuropathies characterized by an abnormal increase in intraocular pressure (IOP), which damages the optic nerve over time and consequently results in injury and eventual loss of RGCs and retinal nerve fiber layers [64]. RGCs and their projections to the brain play a key role in visual perception; the loss of these cells ultimately represents the final stages of the converging pathways and mechanisms leading to visual impairment. The pressure for new therapeutic strategies has increased in recent years because elevated IOP is a leading risk factor for glaucoma [65,66].
Optogenetic approaches are used in animal models of glaucoma to understand the mechanisms that ultimately lead to the loss of RGCs. These studies use the optogenetic CRY2-CIBN system to study the role of a specific enzyme, inositol polyphosphate 5-phosphatase (OCRL), in modulating IOP prior to RGC loss [5,6]. Evidence of the role of this enzyme in glaucoma pathogenesis derives from patients with mutations in OCRL who develop Lowe syndrome, a multisystem disease causing abnormalities in the brain, kidneys, and eyes. The development of high IOP characterizes the disease; thus, it provides an apt model for understanding congenital glaucoma. Available evidence suggests that OCRL plays a key role in the mechanisms that modulate IOP early in the disease [5]. Interestingly, OCRL is localized in the primary cilia and plasma membrane of trabecular meshwork (TM) cells, a spongy tissue located around the base of the cornea involved in draining the aqueous humor via the anterior chamber of the eye. Targeted optogenetic modulation of OCRL to the primary cilia and plasma membrane was shown to cause contraction of the TM and an increase in outflow facility, which are correlated with a decrease in IOP [6]. These findings support the idea that defects in the regulation of aqueous humor outflow may be critical for the subsequent development of glaucoma.
Optogenetic approaches have also recently been applied to understanding the final stages of glaucoma. Under normal conditions, nerve cells that die in the eye because of glaucoma do not grow back, indicating that the loss of vision is inevitable and irreversible. For this reason, there has been a considerable effort to identify molecular mechanisms that will allow damaged RGCs to reconstitute nerve connections with the brain before dying [67]. One of these studies successfully applied a modified optical coherence tomography (OCT) technology to image the neuronal connections of genetically engineered animals with acute nerve cell injury in the retina. This specific platform, which allows direct imaging of neurons in living experimental animals, provides an extremely useful model for studying the correlation between the loss of specific neuron process (axon) connections and the death of the neurons, particularly when combined with the optogenetic studies designed to restore RGC growth mentioned above [67].
In this context, the current classical approach to glaucoma focuses on two main categories of research: mechanisms controlling intraocular pressure, which is a major risk factor for glaucoma, and mechanisms contributing to the development of neuroprotective therapies to protect RGCs. To expand the treatment possibilities in the later stages of the diseases, it would be desirable to add a third research category, which would stimulate lost RGCs to grow back or completely replace them by optogenetic modules that assume some of the functions of image transmission.
Importantly, if the above-mentioned research category is to be successful, the type of stimulation needed to restore original RGC function will require not only the identification of the responsible pathways but also the precise localization of where these mechanisms are activated. It is, therefore, relevant that several studies provide tentative evidence that RGC survival depends in part on support from the target neurons to which they ultimately project, suggesting that targeting mechanisms should not focus exclusively on RGCs. However, this broader focus has important new implications for applying replacement signaling molecules to different cell types, because the signaling pathways induced by target-derived factors are distinctly different from those activated when the same molecules are provided at the RGC [63]. The importance of this difference in signaling pathways receives support from a recent study showing that an optogenetic approach could be employed to induce a controllable, prolonged activation of neuronal activity in a visual target center in the brain. This approach enhanced RGC survival in a mouse glaucoma model through consistent neuronal activation by repeated stimulations. Specifically, when optogenetic constructs known as stabilized step function opsins (SSFO) were directed to the superior colliculus (SC), one of the targets of RGC axons in the mouse brain, they produced kinetics that allowed a lasting signal in response to only a brief light pulse. These results support the idea that increasing the neuronal activity of the visual system target centers involved in the bidirectional circuit between RGCs and the brain will protect against RGC degeneration over time and can potentially be exploited as a future strategy to increase neuroprotection in glaucoma [63].
A key consideration before using optogenetics to treat retinal degeneration in humans is safety. Determining the parameters of light stimulation necessary to drive the optogenetic proteins is of particular importance, as the retina can be damaged by certain light intensities and wavelengths [68,69]. Standard limits for ocular exposure to radiation have been defined, and the maximal permissible exposure for a given optogenetic therapy is determined by a variety of factors including wavelength, exposure duration, and pupil size [70][71][72]. Several pre-clinical optogenetic studies have used mouse and primate models to assess the safety and functional characteristics of opsins in terms of their translatability to human subjects [9,33,34,73]. These studies have demonstrated that while microbial opsins, such as ChR2, can restore visual responses when targeted to inner retinal neurons, the short wavelength of blue light sufficient to activate ChR2 requires an intensity that exceeds the safety threshold for retinal illumination in humans, increasing the risk of photochemical damage to the retina [74][75][76]. As a result, in the last few years, human and animal studies have favored the use of opsins with red-shifted wavelengths, such as ChrimsonR, which can be activated at light intensities well below the safety threshold [8,77,78].
Inflammation is another crucial factor to consider when evaluating the translational potential of optogenetic studies. The introduction and expression of foreign proteins, mainly of microbial origin, on the membranes of retinal cells pose an inherent risk for an undesired immune response. Furthermore, vectors derived from AAV are the most commonly used platform for delivery of retinal gene therapy and have been previously associated with inflammation in many tissues [79][80][81]. One general concern for AAV vectors is that pre-existing immunity and subsequent induced adaptive immunity following vector administration can significantly reduce retinal gene expression, as shown in pre-clinical studies [82]. Furthermore, there is growing appreciation for the risk of gene therapy-associated uveitis following AAV administration, which appears to be related to the vector dose and route of administration [83,84]. However, optogenetic clinical trials thus far have demonstrated that intravitreal delivery of AAV-based vectors is mostly safe and well-tolerated [8]. Nonetheless, clinical assessment of patients in current optogenetic clinical trials uses a standardized assessment of ocular inflammation according to the international guidelines of the Standardization of Uveitis Nomenclature Working Group [8,85,86].
Current Clinical Trials
Although optogenetics is a relatively new technology, there are currently four companies that have advanced optogenetic retinal gene therapies into clinical trials (Table 1). PIONEER is a Phase 1/2a, open-label, dose-escalation study that was designed to evaluate the safety and tolerability of GS030 in patients with advanced non-syndromic RP (ClinicalTrials.gov, accessed on 1 January 2022, Identifier: NCT03326336). GS030, developed by GenSight Biologics, is a novel optogenetic treatment that combines a drug product (GS030-DP) administered by a single intravitreal injection with a wearable visual stimulation medical device (GS030-MD). GS030-DP introduces a gene encoding the light-sensitive ChR protein ChrimsonR-tdTomato (ChrR-tdT) into RGCs by intravitreal delivery of an AAV2.7m8 capsid variant that features a peptide on its heparin binding site [8,87]. ChrimsonR is a microbial opsin with a peak wavelength sensitivity of 590 nm, which is approximately 100 nm more red-shifted than ChR2. Theoretically, it allows greater safety and causes less pupillary constriction than the highly phototoxic blue light used to activate many other sensors [35,77]. In 2017, GS030-MD was granted Orphan Drug Designation by the FDA for the treatment of RP. The optronic device (GS030-MD) works in conjunction with the gene therapy to activate the newly photosensitive RGCs with local 595 nm light pulses from autonomous pixels that detect light intensity changes in the visual field as distinct events [88]. In May 2021, a case report described the first successful example of partial functional vision recovery after optogenetic therapy in a 58-year-old male patient in the PIONEER study treated with 5.0 × 10^10 vector genomes (vg) of GS030-DP. Preliminary safety data of the ongoing PIONEER clinical trial suggest good tolerance of GS030-DP in three dose-escalation cohorts administered 5.0 × 10^10, 1.5 × 10^11, and 5.0 × 10^11 vg per eye (gensight-biologics.com, accessed on 9 September 2021).
However, one-year post-injection results of all treated patients are not expected until 2023.
In 2015, Allergan (formerly RetroSense Therapeutics) initiated a Phase 1/2a, open-label, dose-escalation study to evaluate the safety and tolerability of RST-001 in patients with advanced RP (ClinicalTrials.gov, accessed on 1 January 2022, Identifier: NCT02556736). RST-001 is an AAV-2-based vector encoding ChR2 for the transduction of RGCs through intravitreal injection. In June 2021, results from the Phase 1 sequential dose-escalation study reported no serious adverse events, suggesting that RST-001 is well tolerated. However, no data regarding the effect of RST-001 on improving visual function have been released. Because ChR2 is a Type 1 opsin that requires a very high-intensity blue light for activation, this therapy would require an artificial light source that can potentially be toxic to the retina [9]. In 2014, the FDA granted Orphan Drug Designation for RST-001 as treatment for RP, and Allergan plans to expand the indication of RST-001 to include dry AMD. Phase 2a of the study, in which patients will receive RST-001 at the maximum tolerated dose, is currently ongoing.
Bionic Sight has created an enhanced light-sensitive ChR (ChronosFP) that is currently being tested in a Phase 1/2 clinical trial (ClinicalTrials.gov, accessed on 1 January 2022 Identifier: NCT04278131). The open-label, dose-escalation study delivers its BS01 gene therapy to the RGCs of patients with advanced RP by intravitreal injection of a recombinant AAV-2 vector expressing ChronosFP. This study is not only of interest because of its use of a variant of ChR but also because it combines gene therapy with an advanced neuroprosthetic that incorporates the "retina's neural code" [89]. The prosthetic system consists of two parts: an encoder and a transducer. The encoder uses the retina's neural code to convert visual input into signals the brain can interpret. The transducer then projects the encoded pulses onto the retina, which creates the potential for patterns of light that are visually meaningful. In March 2021, Bionic Sight announced that the first four patients who received BS01 could detect light and motion (bionicsightllc.com, accessed on 9 September 2021). These early observations are very encouraging, and the company plans to escalate to higher doses of BS01 and report additional results later this year.
Nanoscope Therapeutics is a clinical-stage biopharmaceutical company that has developed an optogenetic therapy employing proprietary ambient light-sensitive, polychromatic opsins that have the potential to restore vision in different color environments without the need for artificial light interventions. RESTORE is an ongoing Phase 2b randomized, placebo-controlled, dose-ranging study that will evaluate the safety and efficacy of a single intravitreal injection of a virus carrying multi-characteristic opsin (vMCO-010) in adults with RP (ClinicalTrials.gov, accessed on 1 January 2022, Identifier: NCT04945772). At the American Academy of Ophthalmology's 2020 annual meeting, Nanoscope Therapeutics announced preliminary data from their completed Phase 1/2a open-label, dose-escalation study. The company reported that all 11 patients with advanced RP who received a low dose (1.75 × 10^11 vg per eye) or high dose (3.5 × 10^11 vg per eye) of vMCO-010 experienced significant dose-dependent improvements in visual acuity from baseline to 16 weeks (nanostherapeutics.com, accessed on 9 September 2021). vMCO-010 uses an AAV2 vector to target bipolar cells, rather than RGCs, which may be preferable in the earlier stages of the disease to preserve as much of the visual processing circuitry as possible [90]. MCO1 is a microbial opsin with a broad activation spectrum that can be stimulated by ambient light ranging from blue to red wavelengths [91]. vMCO-010 has received Orphan Drug Designations for RP and Stargardt macular degeneration from the FDA. In July 2021, the company announced that the first patient was dosed with vMCO-010 in their Phase 2b clinical trial.
Future Perspectives
Over the past decade, novel experimental therapies, especially those based on genetic engineering, have driven scientific innovation in no field more prominently than in the treatment of eye disease. Future therapies affecting eye function are likely to benefit from a combination of these novel approaches, including interventions to delay photoreceptor degeneration and more recently conceived methods such as implantation of stem cell/photoreceptor precursors combined with synergistic optogenetic procedures to indirectly or directly restore or completely replace RGC function if the retinal cycle regenerating the photopigment is completely lost.
Future clinical trials in optogenetics are likely to focus on determining the optimal route of delivery of gene therapy without causing inflammation or potentially detaching the retina. Clinical trials for gene therapy to treat LCA have demonstrated the safety and efficacy of subretinal delivery of transgenes to the retina via an AAV2 vector [92,93]. An advantage of subretinal injection is a more favorable biodistribution profile compared to intravitreal injection, thereby decreasing the risk of systemic exposure and inflammation [94]. However, subretinal delivery involves transient detachment of the retina, which may further damage a fragile and degenerated retina. Intravitreal injection is a less technically challenging procedure and therefore less prone to complications [95,96]. Still, higher viral titers are required to reach therapeutic levels of viral transduction from the vitreous, increasing the risk of intraocular and systemic inflammation [82,94,97]. Alternative drug delivery methods, including suprachoroidal injection and sub-internal limiting membrane injection, are under pre-clinical investigation [98,99].
MetaLab-MAG: A Metaproteomic Data Analysis Platform for Genome-Level Characterization of Microbiomes from the Metagenome-Assembled Genomes Database
The studies of microbial communities have drawn increased attention in various research fields such as agriculture, environment, and human health. Recently, metaproteomics has become a powerful tool to interpret the roles of the community members by investigating the expressed proteins of the microbes. However, analyzing metaproteomic data sets at genome resolution is still challenging because of the lack of efficient bioinformatics tools. Here we develop MetaLab-MAG, a specially designed tool for the characterization of microbiomes from metagenome-assembled genomes databases. MetaLab-MAG was evaluated by analyzing various human gut microbiota data sets and performed comparably to, or better than, searching the gene catalog protein database directly. MetaLab-MAG can quantify genome-level microbiota compositions and supports both label-free and isobaric labeling-based quantification strategies. MetaLab-MAG removes the obstacles of metaproteomic data analysis and provides researchers with in-depth and comprehensive information from microbiomes.
INTRODUCTION
The microbiome is the community of microorganisms inhabiting various environments. Research on the microbiome has developed rapidly in recent years, in particular on its role in human health: the human gut microbiota has been associated with type 2 diabetes, 1 inflammatory bowel diseases, 2,3 cardiovascular disease, 4 and other diseases. Currently, multiomics approaches including metagenomics, metatranscriptomics, metaproteomics, and metabolomics provide comprehensive information on microbiomes. 5 Constructing metagenome-assembled genomes (MAGs) from metagenome data reveals the composition and predicted functional potential of the complex community. 6,7 In contrast, mass spectrometry (MS)-based metaproteomic strategies focus on the characterization of the expressed proteins from the microbiome. Metaproteomics provides dynamic insights into the functional, enzymatic, and pathway changes occurring at the level of individual microbes and of the microbiome as a system. 3,8,9 The experimental workflows of metaproteomics have similarities to the conventional approaches in proteomics studies. Briefly, proteins are extracted from the samples and digested into peptides, then subjected to high-performance liquid chromatography-electrospray ionization tandem mass spectrometry (MS/MS) analysis. The generated MS data sets are searched against a theoretical protein sequence database to determine the peptides and proteins. Taxonomic analysis and functional annotations are performed based on the identified peptides and proteins using specialized metaproteomics postanalysis tools. 10 In general, proteomics studies focus on single species. In contrast, metaproteomic studies have to deal with complex microbiota, often with hundreds of different species, each having up to a few different strains. Moreover, this is further complicated by significant differences in microbiome compositions between individuals.
This complexity brings significant obstacles to the data analysis of metaproteomics studies. In particular, the microbiota protein database is derived from a gene catalog consisting of nonredundant genes, which could come in two forms: (1) metagenomics/metatranscriptomic sequencing of the sample of interest, which provides a sample-specific database, and (2) integrated public gene catalogs. The quality of individualized protein sequence databases derived from sample-specific metagenomics/metatranscriptomic sequencing affects the identification of peptides. If a gene is missed in the gene catalog database, the corresponding protein cannot be detected. It can be challenging to obtain a high-quality gene catalog database for researchers who lack the resources and/or experience for deep sequencing. An alternative approach is the use of a public microbiome gene catalog database. These databases are comprehensive but very large. For example, a commonly used human gut microbiome database, the Integrated Gene Catalog (IGC), contains 9,879,896 sequences; 11 by contrast, the Homo sapiens database in Uniprot contains 20,361 sequences. A large database produces an enormous search space, which causes extremely long data processing times and a low identification rate. We developed the MetaLab 12,13 software to solve the problem of identifying peptides from huge gene catalog databases such as the IGC. An iterative searching strategy 14 is implemented in MetaLab, in which a first search is performed against the gene catalog database to generate a refined database. Then a second search against the refined database is performed for peptide identification. Before the first search, a spectra clustering step is performed to select the delegate spectra used for the search, which greatly decreases the processing time. We also apply an open search strategy in the second search to improve peptide identification.
Although specially designed metaproteomics data analysis software is available, 12−16 many studies still use conventional proteomic tools. A recent metaproteomic benchmark study utilized four bioinformatic workflows for data analysis and three of them were proteomic tools. 17 Their result showed that the proteomic toolset SearchGUI 18 /PeptideShaker 19 performed best, but the highest spectra identification rate was only 34.79% for fecal samples.
Another challenge of metaproteomics data analysis is the taxonomic assignment, which is usually performed based on the identified peptides/proteins using specialized metaproteomics postanalysis tools. The taxonomic information is retrieved from a peptide-to-taxon or protein-to-taxon database, which is constructed from a repository of annotated proteins such as NCBI and Uniprot. A drawback of this method is that it is a generic solution, not one specific to the target microbiome. For example, a Uniprot-based peptide-to-taxon database will contain information from all species, so a peptide will likely be matched to multiple proteins from various species. For one peptide, multiple taxonomic lineages will then be found in the database. In this case, a commonly used algorithm named the lowest common ancestor (LCA) algorithm is used, with the result that the taxonomic information is usually obtained only at a higher phylogenetic level.
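The LCA step described above can be sketched in a few lines: walk the taxonomic lineages from root to leaf and stop at the first rank where they disagree. The function name and the abbreviated lineages below are illustrative, not part of any specific tool.

```python
def lowest_common_ancestor(lineages):
    """Return the deepest taxon shared by all lineages (root-to-leaf lists)."""
    if not lineages:
        return None
    lca = None
    for ranks in zip(*lineages):          # walk the ranks from root downward
        if len(set(ranks)) == 1:          # every lineage agrees at this rank
            lca = ranks[0]
        else:
            break
    return lca

# A peptide matched to proteins from two Lactobacillus species can only be
# assigned at the genus level (ranks abbreviated for illustration).
lin_a = ["Bacteria", "Firmicutes", "Bacilli", "Lactobacillus", "L. gasseri"]
lin_b = ["Bacteria", "Firmicutes", "Bacilli", "Lactobacillus", "L. reuteri"]
print(lowest_common_ancestor([lin_a, lin_b]))  # Lactobacillus
```

This is why LCA-based assignment tends to drift toward higher ranks: the more species a peptide matches, the shallower the shared prefix of their lineages becomes.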
To address these challenges, we developed a complete metaproteomics data processing platform, MetaLab-MAG (metagenome-assembled genomes), which uses the publicly available MAGs as the resource for peptide/protein identification and genome-level taxonomic/functional annotation. Currently, four MAGs databases in MGnify 20 including Cow Rumen v1.0, 21 Unified Human Gastrointestinal Genome (UHGG) v2.0, 22 Human Oral v1.0, and Marine v1.0 were supported (https://www.ebi.ac.uk/metagenomics/browse/genomes). In this paper, we evaluated the performance of MetaLab-MAG by analyzing human gut microbiota data sets with the UHGG 2.0 database consisting of 4,744 species-level genomes. In these cases, the taxonomic identification was restricted to the MAGs of the human gut microbiome, which is more precise than that acquired from the NCBI or Uniprot whole databases. To accelerate and improve the data processing, a two-step database search strategy was utilized. The first search against the high-abundant protein (HAP) database is used to generate the sample-specific MAGs database, 23 and the second search against the refined MAGs database is used to identify peptides and proteins. The results from multiple data sets demonstrated that the MS/MS identification rates were close to or even exceeded the values obtained by searching the customized or public gene catalog database from the same samples. Analyzing the same fecal data set of the above benchmark study, the average identification rate reached 52%. The reliability of the results has been confirmed by comparing the data from searching the reference or gene catalog database. MetaLab-MAG was used for qualitative and quantitative analysis and supported both label-free and isobaric labeling-based quantification.
Figure 1 outlines the MetaLab-MAG workflow: (1) raw files are searched against the HAP database consisting of 331,054 proteins; (2) from the search results, the possible components of the samples are determined, and proteins from the corresponding genomes are compiled to create a refined database; (3) the raw files are searched against the refined database; (4) data tables at the peptide, protein, taxa, genome, and function levels are obtained.

MetaLab-MAG is free for academic use and can be downloaded from https://imetalab.ca/.
The Workflow of MetaLab-MAG
The workflow of MetaLab-MAG is shown in Figure 1. In the first step, each raw file from microbiota analysis was searched against a HAP database, which was composed of all the ribosomal and elongation factor proteins from the MAGs. MetaLab-MAG utilized pFind 24 as the database search engine for the two-step search. A target-decoy strategy was adopted in both search steps for the assessment of the false discovery rate (FDR). Then, a peptide-spectrum-match (PSM) list with FDR < 1% was generated for each raw file. A PSM may have multiple possibilities for which genome it came from, so Occam's razor principle was applied to keep a minimal genome list that can explain the sources of all the PSMs. The generated genome list was used to compose the sample-specific MAGs database by adding all the proteins from the selected genomes to the refined database. The next step was searching each raw file against the sample-specific database. The open search strategy has been demonstrated to be a valuable method not only for identifying modified peptides but also for improving the overall identification rate. 24−26 We have utilized the open search method in metaproteomics data analysis, and the identification at both the peptide and taxa levels was significantly improved. 13 Therefore, the open search strategy was used in this step. The target-decoy strategy was similarly used to evaluate the FDR. After the peptides and proteins were identified, the results from each raw file were combined. A minimal protein group list that can explain the attribution of all peptides was kept. In the peptide list, multiple source proteins were listed for each peptide, and the protein with the highest pFind score was selected as the razor protein for that peptide. In the quantification part, the intensity of a peptide contributed only to its razor protein and the corresponding genome. Both label-free and isobaric labeling strategies were supported in this step.
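The Occam's razor step amounts to finding a small set of genomes that jointly explain every PSM. Exact minimum set cover is NP-hard, so a natural sketch uses a greedy approximation; whether MetaLab-MAG uses this exact heuristic is not stated in the text, and the PSM and genome identifiers below are hypothetical.

```python
def minimal_genome_set(psm_to_genomes):
    """Greedy set cover: pick few genomes that together explain every PSM.

    psm_to_genomes maps each PSM id to the set of candidate genomes
    whose proteins could have produced it.
    """
    # Invert the mapping: genome -> PSMs it can explain.
    genome_to_psms = {}
    for psm, genomes in psm_to_genomes.items():
        for g in genomes:
            genome_to_psms.setdefault(g, set()).add(psm)
    uncovered = set(psm_to_genomes)
    selected = []
    while uncovered:
        # Choose the genome covering the most still-unexplained PSMs.
        best = max(genome_to_psms, key=lambda g: len(genome_to_psms[g] & uncovered))
        gained = genome_to_psms[best] & uncovered
        if not gained:
            break  # remaining PSMs have no candidate genome at all
        selected.append(best)
        uncovered -= gained
    return selected

psms = {
    "psm1": {"MGYG-A", "MGYG-B"},
    "psm2": {"MGYG-A"},
    "psm3": {"MGYG-C"},
}
print(minimal_genome_set(psms))  # ['MGYG-A', 'MGYG-C']
```

Here MGYG-B is dropped because MGYG-A already explains both of its PSMs, mirroring how the refined database keeps only the genomes needed to account for the first-pass identifications.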
Taxonomic annotations based on each genome and functional annotations based on each protein were available for all of the four MGnify MAGs databases. This information could be retrieved after the quantification of peptides, proteins, and genomes was finished. Then according to the quantitative information, data tables about the peptides, proteins, genomes, taxa, and functions were exported. Visualized reports of the result were also available in the form of web pages. If the metadata was provided, multivariate statistical analyses such as a principal component analysis score plot and a hierarchical clustering heatmap were generated as well in the report.
The Construction of the MAGs Database
Currently, four MAGs databases were available in MGnify, including Cow Rumen v1.0, Unified Human Gastrointestinal Genome (UHGG) v2.0, Human Oral v1.0, and Marine v1.0. Users can select and download what they need in MetaLab-MAG. These public MAGs databases are constructed based on large-scale metagenomic sequencing data from the target microbiomes. The enormous amount of sequencing data is clustered into genomes representing the composition of the microbiomes. In other words, all the genes/proteins in the MAGs database belong to specific genomes. Proteins identified from the MAGs could be readily linked to the corresponding genomes. Let us take the UHGG for example. This database included 4,744 genomes derived from the human gut microbiome. For each genome, the protein sequence database and taxonomic and functional information were available. Based on the UHGG database, we constructed an integrated built-in database in MetaLab-MAG for peptide/protein identification, taxonomy analysis, and functional annotation. This database contained four components: (1) the original protein sequence databases of the 4,744 genomes; (2) the functional annotation information generated by eggNOG and taxonomy information based on Genome Taxonomy Database r202 (GTDB, https://gtdb.ecogenomic.org/); (3) a HAP database including 331,054 sequences, which was created by extracting all the ribosomal and elongation factor proteins from the original database; (4) a host protein database, which was a Homo sapiens protein database downloaded from Uniprot (https://www.uniprot.org/). The taxonomy information provided by the UHGG project was based on GTDB. We added the corresponding NCBI taxon information for the genomes to the taxonomic database. The taxonomy and function database was packed within the MetaLab-MAG software, and the other databases can be downloaded from the MetaLab-MAG software interface; no manual configuration step is required.
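Building the HAP database amounts to filtering the genome FASTA files for ribosomal and elongation factor proteins. A minimal sketch, assuming the protein description is carried in the FASTA header (the header format and sequences below are invented for illustration):

```python
def extract_hap(fasta_lines, keywords=("ribosomal", "elongation factor")):
    """Yield (header, sequence) records whose description matches a keyword."""
    header, seq, keep = None, [], False
    for line in list(fasta_lines) + [">EOF"]:  # sentinel flushes the last record
        line = line.strip()
        if line.startswith(">"):
            if keep and header:
                yield header, "".join(seq)
            header, seq = line, []
            keep = any(k in line.lower() for k in keywords)
        else:
            seq.append(line)

fasta = [
    ">g1|50S ribosomal protein L2", "MKLV",
    ">g1|ABC transporter", "MTTA",
    ">g1|Elongation factor Tu", "MSKE",
]
print([h for h, _ in extract_hap(fasta)])
# ['>g1|50S ribosomal protein L2', '>g1|Elongation factor Tu']
```

Because ribosomal and elongation factor proteins are consistently high-abundance across taxa, a keyword filter of this kind yields a compact first-pass database (331,054 sequences for UHGG) instead of the full multi-million-protein catalog.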
Sample Description and LC-MS/MS Acquisition
The data set of the bacterial strain samples was obtained from a recent work of our lab, in which we analyzed proteomic profiles of these strains cultured with or without added sugars (glucose, sucrose, and kestose) in a glucose-free Yeast Casitone Fatty Acids (YCFA) broth. Samples were analyzed on an Orbitrap Exploris 480 mass spectrometer. A 60 min gradient of 5 to 35% (v/v) buffer B at a 300 μL/min flow rate was used to separate the peptides on a tip column (75 μm inner diameter × 10 cm) packed with reverse phase beads (3 μm/120 Å ReproSil-Pur C18 resin, Dr. Maisch GmbH, Ammerbuch, Germany). Buffer A was 0.1% formic acid (v/v), and buffer B was 0.1% formic acid with 80% acetonitrile (v/v). The MS full scan ranging from 350 to 1200 m/z was recorded in profile mode at a resolution of 60,000. Data-dependent MS/MS scans were performed on the 15 most intense ions at a resolution of 15,000. Dynamic exclusion was enabled for a duration of 30 s with a repeat count of one. The data set of the four intestinal aspirate samples, obtained by high-pH reversed-phase fractionation and an Orbitrap Exploris 480 mass spectrometer (Thermo Fisher Scientific, USA), was taken from our previous work, with a detailed experimental procedure described before. 27 The data set of the TMT labeling samples was obtained from our previous work, with a detailed experimental procedure described before. 28
Metagenomics Gene Catalog from Real Human Gut Microbiota Samples
Illumina paired-end reads from the microbiomes of four human gut aspirate samples, i.e., HM454, HM455, HM466, and HM503, obtained from another study 14 were used to construct a gene catalog. The raw sequence data are accessible in NCBI under the Sequence Read Archive (SRA) with the accession number SRP068619. Raw reads were filtered to remove the adapter and low-quality sequences with the trimming and quality filtering step of the MOCAT pipeline. 29 Reads with human origin were also filtered out using the SOAP2 30 package by mapping the sequences against the human genome database (hg19). The high-quality reads were assembled into contigs using MEGAHIT v1.2.9 31 with default options. The sequence data of each sample were assembled individually. The assembled contigs from the four samples were then used for gene prediction with the prodigal v2.6.3 32 software. The contigs were translated
into amino acid sequences using the anonymous gene prediction mode (prodigal -p meta) and default parameters. The protein-coding gene sequences of the four samples were compiled into FASTA files and used as the metagenome-inferred protein database for benchmarking the MetaLab-MAG pipeline. For the taxonomic annotation, the amino acid sequences of the proteins in the catalog were searched against the UHGG database with DIAMOND v2.0.15 33 using the blastp command with default settings. To estimate the abundance of each predicted protein sequence, the high-quality reads were first aligned to the assembled contigs with minimap2 v2.24-r1122, 34 and the generated BAM files were used to create the read count matrix of the protein sequences using featureCounts v2.0.1. 35
Metaproteomics Data Analysis
All the MetaLab-MAG workflows in this paper utilized uniform parameter settings. The protein databases were downloaded from http://ftp.ebi.ac.uk/pub/databases/metagenomics/ mgnify_genomes/human-gut/v2.0/. The host database was a reviewed human protein database from UniProtKB containing 20,387 sequences.
In the pFind open search workflow, modifications were also accepted. Cysteine carbamidomethylation was set as the fixed modification, and methionine oxidation and protein N-term acetylation were set as variable modifications. The enzyme was set as trypsin. Fully specific digestion mode was used and up to two missed cleavages were allowed. The mass tolerances of precursor and fragment ions were 10 and 20 ppm, respectively. The false discovery rate was controlled to less than 1% at the PSM level. When using the iterative searching strategy in MetaLab 2.3.0, the IGC human gut microbiome database was used. 10 Other database searches in this paper were all performed by pFind directly. The databases used for the searches are described in the corresponding sections. Other parameters were all consistent with those used in the MetaLab-MAG workflow.
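The PSM-level FDR control via target-decoy searching can be sketched as follows: PSMs are ranked by search score and the list is cut at the deepest rank where the decoy-to-target ratio still stays under the threshold. pFind's actual scoring and q-value details differ; this is only the generic scheme, with made-up scores.

```python
def filter_psms_by_fdr(psms, max_fdr=0.01):
    """Keep the highest-scoring PSMs such that #decoy / #target <= max_fdr.

    psms is a list of (score, is_decoy) tuples; higher score is better.
    """
    ranked = sorted(psms, key=lambda p: p[0], reverse=True)
    targets = decoys = 0
    cutoff = 0
    for i, (score, is_decoy) in enumerate(ranked, start=1):
        decoys += is_decoy
        targets += not is_decoy
        if targets and decoys / targets <= max_fdr:
            cutoff = i  # deepest rank still under the FDR threshold
    # Report only target PSMs above the cutoff; decoys are discarded.
    return [p for p in ranked[:cutoff] if not p[1]]

# 50 confident targets, then a decoy outranking one last target:
# accepting the decoy would give 1/50 = 2% FDR, above the 1% threshold.
psms = [(100 - i, False) for i in range(50)] + [(49.5, True), (49, False)]
print(len(filter_psms_by_fdr(psms)))  # 50
```

The same filter is applied independently in both search steps, which is why the sample-specific database can be built from first-pass identifications without inflating the final error rate.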
Sensitivity and Accuracy of MetaLab-MAG in Identifying Single Bacterial Species
We first evaluated the performance of MetaLab-MAG by analyzing samples with known taxonomic compositions. Here we used a data set consisting of six single-species samples (Supplementary Table 1) to assess the sample-specific database generation and the qualitative/quantitative identification of peptides, proteins, and taxa. As a comparison, we also analyzed this data set by searching against the Uniprot single bacteria database and the IGC database using the conventional iterative searching strategy in MetaLab's previous version (2.3.0). We found that the processing time was significantly decreased compared with searching the IGC database in MetaLab 2.3.0 (Figure 2a). The time cost for peptide and protein identification was only about 15% of that of the conventional iterative searching method. The reason was that, although the total size of the MAGs database was large, there was no need to search the whole MAGs database directly. The first search was against a HAP database and the second search was against the selected genomes. A target-decoy search could be performed in both steps, which benefits FDR control. Then we investigated whether the correct species could be selected from the MAGs database. The average MS/MS identification rate was 6.7% at FDR < 1% by searching the HAP database. We matched the identified PSMs to the corresponding Uniprot single species database and found that 94% of PSMs matched. According to these identifications, the identified genomes were selected to generate the sample-specific database. On average, 45 genomes and 117,375 proteins were contained in one database. This result showed that the correct species were identified and their proteins were selected successfully to form an appropriately sized sample-specific database.
The peptide/protein identifications were obtained by searching the sample-specific MAG database, yielding MS/MS identification rates similar to those from searching against the Uniprot single-species database (Figure 2b) and higher than those obtained by the conventional iterative search against the IGC database. Searching the Uniprot single-species database is the ideal method for peptide identification from single-species samples; in the MetaLab-MAG workflow, however, only a slight decrease in the identification rates of about 5% was observed. We then tested whether the PSMs were correctly identified from the expected species. The Uniprot single-species database was digested in silico, and we matched the PSMs to the theoretical peptide sequences. We found that 97.1% of fully tryptic PSMs matched the corresponding Uniprot single-species database (Figure 2c). This result shows that, although multiple genomes were collected to create the sample-specific database, most of the obtained PSMs and peptides were from the correct species thanks to the target-decoy search. The relative abundances of taxa identified from the samples are shown in Figure 2d. The taxonomic identification was accurate at the genus level. For some of the samples, the relative abundance of the correct species was relatively low, because part of the proteins was attributed to very similar species; as mentioned above, the peptide identifications themselves were correct. Overall, the quantitative results support the same conclusion: the MetaLab-MAG workflow successfully identified the target species from the single-species samples.
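The in silico digestion and PSM-matching check described above can be sketched as follows. The cleavage rule (after K/R, not before P) and the length/missed-cleavage limits are standard tryptic-digestion assumptions, not necessarily the exact settings used in the paper.

```python
import re

def tryptic_peptides(protein, max_missed=2, min_len=6):
    """In silico tryptic digestion: cleave after K or R unless followed by P,
    allowing up to `max_missed` missed cleavages."""
    fragments = re.split(r"(?<=[KR])(?!P)", protein)
    peptides = set()
    for i in range(len(fragments)):
        # join 1 .. (1 + max_missed) consecutive fragments
        for j in range(i + 1, min(i + 2 + max_missed, len(fragments) + 1)):
            pep = "".join(fragments[i:j])
            if len(pep) >= min_len:
                peptides.add(pep)
    return peptides

def fraction_matching(psm_peptides, theoretical):
    """Share of PSM peptide sequences found in the theoretical digest."""
    if not psm_peptides:
        return 0.0
    return sum(p in theoretical for p in psm_peptides) / len(psm_peptides)
```

Applied to every protein of a reference proteome, `fraction_matching` reproduces the kind of check behind the 97.1% figure reported above.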
These results demonstrated that MetaLab-MAG can easily identify high-abundance bacteria. However, the human gut contains a highly diverse microbial community, in which a significant number of microorganisms have very low abundance. Hence, we tested whether low-abundance bacteria could be identified from a spike-in sample, i.e., a data set of human cell samples mixed with Escherichia coli at different concentrations (2, 3, 4, 5, or 6%). 36 MetaLab-MAG provides a workflow enabling the simultaneous identification of host proteins by concatenating a human protein database. With this strategy, the MS/MS identification rate reached 72%, higher than searching a Homo sapiens-E. coli combined database (Figure 2e, Supplementary Table 2). The proportion of non-Homo sapiens PSMs from E. coli was about 97% (Figure 2f). The relative quantitative results also fitted well with the expected values of 1-, 1.5-, 2-, 2.5-, or 3-fold (Figure 2g). This result shows that relatively low-abundance components can also be identified and quantified correctly using MetaLab-MAG.
From these results, we demonstrated that MetaLab-MAG is an efficient and reliable tool for metaproteomic data analysis. First, MetaLab-MAG outperformed the conventional iterative search strategy in sensitivity, accuracy, and searching speed. Second, compared with the ideal matched Uniprot database search, the loss in MS/MS identification rate was less than 5%. Considering that the database used in MetaLab-MAG was a generic MAGs database consisting of 4,744 genomes, identifying single-species samples with this sensitivity and accuracy is encouraging.
Evaluation of MetaLab-MAG Using Synthetic Microbial Community
We then used MetaLab-MAG to analyze a data set derived from a synthetic microbial community named SIHUMIx. The SIHUMIx sample was composed of eight types of bacteria found in the human gut microbiome. 37 We determined that all eight species had corresponding genomes in the UHGG database, and 139 genomes in UHGG came from the same genera as the eight microbes (Supplementary Table 3). In the following analysis, we defined PSMs/peptides from the genomes of the correct species as "correct species identifications" and PSMs/peptides from the 139 genomes of the correct genus as "correct genus identifications". We analyzed 15 raw files by searching against the HAP database, and 67,748 PSMs were obtained at FDR < 1%, with an MS/MS identification rate of 4.4%. The correct genus identifications accounted for 93.5% of the PSMs and 90.8% of the peptides. Two microbes (Clostridium butyricum DSMZ 10702 and Lactobacillus plantarum DSMZ 20174) were not found in all the 15 raw files, and Lactobacillus plantarum DSMZ 20174 was only found in 7 raw files. These microbes also could not be found in the metagenomics database based on the assembled contigs (termed DB2MG below), suggesting that their relative abundances were very low in the samples (Supplementary Table 3). 17 Sample-specific databases were generated based on these identifications; on average, one sample-specific database contained 82 genomes and 212,644 proteins. These results show that by searching the HAP database, the high-abundance components of the samples were effectively identified and could be retrieved to form the sample-specific database.
The overall identification rate using the above-generated sample-specific MAG database reached 58%, slightly higher than searching against the DB2MG (57.1%) or against the eight-species database selected from Uniprot (53.4%) (Figure 3a). Moreover, most of the peptides identified by these three methods were the same (Figure 3b, Data S1, S2, S3), suggesting valid peptide identifications from all three types of databases. Theoretically, the most suitable database for metaproteomic data analysis is a reference database of proper size or a sequencing read-based database. This result showed that the MetaLab-MAG strategy achieved performance on par with these optimal databases.
Next, we investigated the results at the taxon level. We found that 1,041,817 PSMs were identified from bacteria, of which 1,018,307 (97.7%) were from the correct species and 1,029,947 (98.9%) from the correct genus (Figure 3c, Supplementary Table 4). By contrast, only 393,345 (38.1%) and 869,219 (84.1%) PSMs were identified from the correct species and correct genus, respectively, by searching the DB2MG. The large difference did not arise at the peptide level but at the taxon level. The taxonomic annotation of the metagenomics database was performed against the NCBI database at the protein level, which did not perfectly match the theoretical composition. For example, in the DB2MG, 382 proteins were assigned to the species Erysipelatoclostridium ramosum (a correct species in SIHUMIx) but 1,012 proteins to the species Coprobacillus sp. 3_3_56FAA (not present in SIHUMIx). As a result, for the DB2MG, 84,981 PSMs were assigned to Coprobacillus sp. 3_3_56FAA and only 9,425 PSMs to the correct species Erysipelatoclostridium ramosum. The 84,981 PSMs matched to Coprobacillus sp. 3_3_56FAA corresponded to 6,833 unique peptide sequences, among which 6,217 were also identified by the MetaLab-MAG workflow, and 6,164 (99.1%) of those were from the genome "MGYG000001400" (Erysipelatoclostridium ramosum). This result illustrates a major advantage of using the MAGs database for metaproteomic data analysis: the taxonomic annotations are generated at the genome level, and the genomes are constructed from the target microbiome. By comparison, the taxonomic annotation of the metagenomics database was performed at the protein level against the NCBI nonredundant database, which is more general and less sensitive. From the quantitative results, we also observed that the relative abundances of taxa were more similar between the Uniprot database search results and the MetaLab-MAG results (Figure 3d).
Through the analysis of the SIHUMIx data set, we found that MetaLab-MAG fulfilled our requirement to successfully identify and quantify the components of a simplified human gut microbiome. Similar MS/MS identification rates and better performance of taxonomy analysis were achieved compared with the workflows searching against reference and metagenomic sequencing databases.
Application of Label-Free Metaproteomics of Human Gut Microbiota
We then explored the application of MetaLab-MAG to the analysis of individual human gut microbiota samples. The first data set was from fecal samples analyzed by the Critical Assessment of MetaProteome Investigation (CAMPI) project, with available metagenomic and metatranscriptomic sequencing data. 17 Another human metaproteomic data set included four intestinal aspirate samples collected from pediatric individuals. 27 For deeper measurement of these samples, in addition to the conventional liquid chromatographic separation (1D in Figure 4a), a two-dimensional separation was adopted: high-pH reversed-phase fractionation was performed first, and each fraction was then subjected to LC-MS/MS analysis. Metagenomic sequencing data were available for this data set. The taxonomic annotation was performed by DIAMOND 38 against the UHGG 2.0 database. Here we analyzed the CAMPI and the intestinal aspirate data sets using MetaLab-MAG (Data S4−S9); these two data sets were also searched against the corresponding multiomics databases (Data S10, S11). Figure 4a shows the MS/MS identification rates of the samples, illustrating that for the analysis of real human gut microbiome samples, the performance of MetaLab-MAG remained similar to searching the metagenomics database. At the same time, the identification rates were 21.3% to 66.3%, much higher than those obtained by the SearchGUI/PeptideShaker workflow in the CAMPI study (12% to 34.8%) 17 (Data S4). It is worth noting that MetaLab-MAG outperformed the metagenomic database searching result for the intestinal aspirate samples. We compared the identified peptides and found that, for each data set, the peptide sequences identified by the two strategies overlapped substantially (Figure 4b). MetaLab-MAG analysis of the CAMPI fecal samples identified 110,993 peptides, 17,235 protein groups, and 678 genomes.
Higher MS/MS identification rates and more peptide identifications were obtained by the MetaLab-MAG workflow in the analysis of the intestinal aspirate samples. The numbers of proteins and genomes identified from the four samples are shown in Figure 4c,d. Nearly 10,000 proteins and 200 genomes could be obtained from a single raw file of the 1D-separation experiments, and the 2D separation yielded about three times the protein identifications and four times the genome identifications.
Since the multiomics database searching strategy and the MetaLab-MAG workflow performed similarly in identification, we compared the quantitative results obtained by the two methods. As a representative example, Figure 4e shows a good correlation between the relative abundances of genomes quantified by the two methods from the HM454-2D samples. Finally, we compared the taxa composition estimated using three different methods based on (1) the read counts of the metagenomic data; (2) the metaproteomic result from searching against the metagenomic database; and (3) the metaproteomic result from MetaLab-MAG (Figure 4f). The relative abundances of taxa were very similar between the two metaproteomic workflows. In the previous section, different trends in the relative abundance of taxa were observed between these two types of results, mainly because the taxonomic annotation was performed based on the NCBI rather than the UHGG database. Here, the taxa information obtained by MetaLab-MAG and by searching the metagenomic database was quite consistent, which demonstrates the credibility of the quantitative information provided by both MetaLab-MAG and the multiomics database workflow. Obvious differences were observed compared with the metagenomics results (read counts), suggesting that combining metagenomic and metaproteomic methods will help researchers obtain a better profile of microbiota samples.
Through the analysis of these data sets from the real human gut microbiota samples, we found that although the multiomics database was decent for the metaproteomics analysis, MetaLab-MAG provided an alternative solution for the interpretation of human gut microbiota samples, which showed similar or better performance. This will greatly expand the applicability of metaproteomics in the analysis of human gut microbiota samples, in particular for samples without matched multiomics data.
Application of Isobaric Quantitative Analysis of Human Gut Microbiota Samples
The most prominent advantage of isobaric labeling strategies is the multiplexing capability, enabling the relative quantification of more than 10 samples in a single MS run. This technique greatly reduces MS running time in high-throughput experiments such as drug screening and clinical sample analysis. The commonly used isobaric labeling methods, including tandem mass tags 39 (TMT) and isobaric tags for relative and absolute quantification 40 (iTRAQ), are all supported in MetaLab-MAG. Here we tested three TMT 11-plex labeled human gut microbiome data sets (Data S12−S14). The MS/MS identification rates exceeded 50% in all three data sets (Figure 5a). Generally, over 4,500 protein groups could be identified from a single raw file (Figure 5b).
Another advantage of isobaric labeling methods is higher accuracy, because samples in different channels are injected into the MS analysis together, which greatly reduces the variability from different batch injections. However, if the number of samples exceeds the number of channels of the isobaric labeling reagent, multiple experiments are still required to quantify all the samples, which can introduce variability. To solve this problem, we took two measures. First, the MS1 intensity was utilized for quantification, and normalization of the MS1 intensity was adopted. The intensities of the reporter ions were used to determine the relative abundances of the peptides from different channels, and the abundance of each peptide was calculated as the corresponding normalized MS1 intensity multiplied by the proportion of the reporter ion's intensity. Second, a reference channel containing the same sample in every experiment was required; according to the quantitative information of the reference channel, all the channels in different experiments could be aligned and normalized. To assess the accuracy of the quantitative results, we compared the calculated genome abundances between replicate experiments from different raw files. After normalization and alignment, good correlations between the replicate experiments were observed (Figure 5c). In particular, we noticed that the higher the abundance of a genome, the smaller the deviation observed. This illustrates that the quantitative information on high-abundance species is credible, even when the comparison is made across different MS runs.
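The two quantification measures described above — distributing a peptide's MS1 intensity across channels by reporter-ion proportion, and rescaling each run via its reference channel — can be sketched as follows. The function names and the scaling rule are illustrative assumptions, not MetaLab-MAG's exact implementation.

```python
def channel_abundances(ms1_intensity, reporter_intensities):
    """Distribute a peptide's normalized MS1 intensity across channels in
    proportion to the reporter-ion intensities."""
    total = sum(reporter_intensities)
    if total == 0:
        return [0.0] * len(reporter_intensities)
    return [ms1_intensity * r / total for r in reporter_intensities]

def align_to_reference(run_values, ref_value, target_ref=1.0):
    """Rescale one labeling experiment so that its reference channel matches
    a common target, making channels comparable across MS runs."""
    scale = target_ref / ref_value
    return [v * scale for v in run_values]
```

With a shared reference sample in every TMT experiment, `align_to_reference` puts all runs on one scale before comparing replicate genome abundances, as in Figure 5c.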
An isobaric labeling strategy is suitable for large-scale quantitative analysis. For example, in Data 3 we tested the impacts of frozen-storage conditions of live microbiota and of kestose treatment on ex vivo cultured human gut microbiota; more details can be found in our previous study. 28 We built two UMAPs based on the quantitative information of the genomes and proteins, respectively (Figure 5d,e). All the samples clustered by culturing condition, which conformed with our expectations and reflects the confidence of the quantitative results. We also observed that the samples were better distinguished at the protein level than at the genome level. Based on the quantitative information at the peptide, protein, genome/taxon, and function levels, the data can be interpreted from various angles. Figure 5f shows the total intensities of eight high-abundance taxa (relative abundance above 2%) in the different groups; all of them had higher abundances in the PBS-treated samples. However, different trends were observed in the relative abundances (Figure 5g): the relative abundances of Phocaeicola dorei and Alistipes putredinis were lower than in BL and KES.
In this part, we analyzed multiple metaproteomic data sets with isobaric labeling quantification. The performance of MetaLab-MAG was stable and reliable: generally, 4,500 to 5,000 protein groups could be identified from a ∼90 min MS run with an MS/MS identification rate higher than 50%. The accuracy of the quantitative results was verified by comparing replicate samples from different labeling experiments. Abundance information is readily available at various levels, including proteins, genomes/taxa, and functions, and investigating the responses to different conditions is straightforward based on the information provided by MetaLab-MAG. We believe that MetaLab-MAG can be a helpful tool for researchers aiming to study changes in human gut microbiota samples, not only at the composition level but also at the level of active functions.
■ DISCUSSION
In metaproteomics data analysis, customized gene catalog databases from the metagenomics/metatranscriptomics sequencing of individual samples are generally perceived as the most reliable databases for peptide identification from the same microbiota samples. However, in the majority of projects, the sequencing information is not available, and instead searches against a generic gene catalog database would be performed. The drawback is that the generic gene catalogs contain numerous bacterial genomes, the vast majority of which are not present in a specific sample. Usually, the size of the generic database is huge, and identifying peptides against the database suffers from a long processing time and low identification rate.
MetaLab integrates the iterative search strategy and enables the characterization of the microbiome from the public gene catalog. However, the taxonomic and functional information on the gene catalog is annotated based on the gene. The relationship between different genes/proteins is nonexistent.
By contrast, in this work we developed MetaLab-MAG, a specialized metaproteomic data analysis tool that uses publicly available MAGs databases for peptide/protein identification. A significant advantage of using the MAGs database for metaproteomics is that the genomes are constructed with additional information, such as which genes are likely from the same species; a sequencing read-based protein database from metagenomics and/or metatranscriptomics is not required. An efficient refined database generation strategy, namely the high-abundance protein (HAP) database search, 23 is adopted in MetaLab-MAG. Compared with the conventional iterative searching method, searching the HAP database is more efficient and accurate, yielding more identifications in less time. Moreover, the performance is comparable to or even better than searching the corresponding multiomics database: the MS/MS identification rates are similar, and the identified peptides share a substantial common part. The taxonomic information is obtained at the genome level. One current drawback of the MAGs database is the more limited taxa identification; however, with the continuous improvement of MAGs databases in public data repositories (such as MGnify), the taxonomic identification results will become more reliable.
Sample-specific metagenomics/metatranscriptomics can be used for metaproteomic data analysis; however, the reality is that they are not routinely performed on all samples and the lack of corresponding information should not hinder the application of metaproteomics. That is the motivation for the development of MetaLab-MAG. MetaLab-MAG is readily accessible to researchers with limited bioinformatics backgrounds. The researchers only need to provide their MS raw files. The MAGs databases for peptide/protein identification, taxonomy analysis, and functional annotation can be downloaded in MetaLab-MAG with a click. We anticipate that with the improvement of MGnify resources, other types of microbiomes would be supported for analysis. Rich information is generated automatically, including all data tables at the peptide, protein, genome, taxonomy, and function levels. The web-based report contains many useful charts and is usable for research manuscripts. The continuous development of MetaLab-MAG will include in the future more statistical functions to help the research community better understand their data sets. We believe that MetaLab-MAG can help researchers from various fields interested in using metaproteomics to investigate microbiomes.
■ ASSOCIATED CONTENT Data Availability Statement
The mass spectrometry proteomics data from our lab and the result tables including the peptide, protein, genome, and function annotation lists (Data S1−S14) have been deposited to the ProteomeXchange Consortium via the PRIDE 41 partner repository with the dataset identifier PXD037839. The other dataset identifiers are PXD005590 (E. coli spiked in human samples) and PXD023217 (SIHUMIx and fecal samples).
"year": 2022,
"sha1": "82dc6c6161736525de21f35a809733f9bd8ba801",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "a6391287a6dba549165c80473589d0a79bf81890",
"s2fieldsofstudy": [
"Environmental Science",
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Efficacy of anti-oxidation potential of homoeopathic medicine Hydrastis canadensis 6C and 200C
This study was intended to investigate the anti-oxidation potential of the homoeopathic medicine Hydrastis canadensis 6C and 200C. The anti-oxidation potential of each medicine was evaluated using the DPPH method, calculated from absorbance readings taken with a photoelectric colorimeter. Ascorbic acid and ethanol were taken as standards for comparison. Hydrastis canadensis 200C showed 81.8% anti-oxidation activity and Hydrastis canadensis 6C showed 100% anti-oxidation activity in comparison to ascorbic acid, whereas the anti-oxidation activity of ethanol was 54%.
Introduction
The World Health Organization states, "Cancer is a large group of diseases that can start in almost any organ or tissue of the body when abnormal cells grow uncontrollably, go beyond their usual boundaries to invade adjoining parts of the body, or spread to other distant organs" [1]. Globally, in 2018 malignancies were the second leading cause of death, accounting for 9.6 million deaths per year, and they impose a heavy physical, emotional, and financial strain on patients, families, communities, and health systems. Free radicals can be hazardous at high concentrations and, following exposure to ionizing radiation and other environmental toxins, can damage all major components of cells, including DNA, proteins, and cell membranes, resulting in diseases such as cancer [2,3]. Anti-oxidants, the free-radical scavengers, are chemicals that interact with and neutralize free radicals, preventing them from causing damage; in animal studies, increased levels of exogenous antioxidants have been shown to prevent the free-radical damage associated with cancer development.
In the homoeopathic system of medicine, years of clinical experience have led several homoeopaths to report the utility of Hydrastis canadensis in cancer, but the reason behind its action in cancer cases has not been studied. This study was therefore intended to investigate the presence of antioxidant potential in Hydrastis canadensis 6C and 200C.
Materials and Method
Materials
Homoeopathic medicine Hydrastis canadensis 6c, 200c (Procured from Dr. William Schwabe India Pvt. Ltd.) and Chemicals including DPPH (Sisco Research Laboratories; Batch no: 3592381; Procured from Sri Durga Laboratory), Methanol (Procured from Sri Durga Laboratory) and Ascorbic acid (Procured from Sri Durga Laboratory) were procured.
Method
DPPH solution was prepared by adding 1 particle of DPPH powder to 6 ml of methanol. The control solution was prepared with ascorbic acid by adding 5 ml of DPPH solution to 0.005 g of ascorbic acid. To a clean test tube, 90 microliters of DPPH solution, 20 microliters of medicine, and 3 ml of distilled water were added; for the ethanol control, 20 microliters of ethanol was added to 3 ml of distilled water. The test tube was covered with aluminium foil and stored in a dark room for 30 minutes.
The absorbance of the control and medicine solutions was measured using a photoelectric colorimeter after calibration. The readings were recorded, and the DPPH radical scavenging activity was calculated using the following formula.
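The calculation can be sketched as follows, assuming the standard DPPH radical-scavenging formula, % inhibition = (A_control − A_sample)/A_control × 100, since the manuscript's own formula did not survive extraction; the absorbance values in the usage example are illustrative, not the study's measurements.

```python
def dpph_scavenging(abs_control, abs_sample):
    """Percent DPPH radical-scavenging activity from absorbance readings,
    using the standard formula (assumed here):
        % inhibition = (A_control - A_sample) / A_control * 100
    A lower sample absorbance (more DPPH reduced) means higher activity.
    """
    return (abs_control - abs_sample) / abs_control * 100.0
```

For instance, a control absorbance of 0.50 and a sample absorbance of 0.10 would correspond to 80% scavenging activity.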
Result
The recorded results are tabulated in Table 1.
Discussion
Hydrastis canadensis 200C showed 81.8% anti-oxidation activity and Hydrastis canadensis 6C showed 100% anti-oxidation activity in comparison to ascorbic acid. The difference between the two potencies was about 18.2%, which is not a significant difference. In the case of ethanol, the anti-oxidation activity was 54%. Studies by Tayyeba Rehman and Saeed Ahmad in 2019 on Pulsatilla nigricans mother tincture and Cinchona officinalis mother tincture showed 85% and 89% anti-oxidation activity, respectively [4]. Another study, conducted in 2017 among Syzigium jambolanum, Damiana, Cinchona officinalis, Chelidonium, and Coca tinctures, showed the highest anti-oxidation activity for Syzigium jambolanum tincture.
Conclusion
As mentioned earlier, 100% anti-oxidation activity is present in Hydrastis canadensis 6C and 81.8% in Hydrastis canadensis 200C. This raises a research question on the relation of the antioxidant property of Hydrastis canadensis to the treatment of malignancies. Further research is required to investigate the role of the anti-oxidants of Hydrastis canadensis in the effective treatment of cancers.
"year": 2021,
"sha1": "bd7ee6649efc36db1e9c07d9f36cc11b770a8449",
"oa_license": null,
"oa_url": "https://www.homoeopathicjournal.com/articles/382/5-1-105-391.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "a797e3c9866cc72cd54772c2f371d86374ac4ea3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Huge vulvar varicosities in pregnancy: case report and systematic review
The appearance of severe vulvovaginal varicosities (VVs) is challenging in pregnancy. The management of VVs may require a multidisciplinary approach, including radiologists, vascular surgeons, and obstetricians. We report a rare case of enormous VVs and pubic varicosities and summarize similar cases in the literature. A woman in her 20s with a full-term pregnancy visited our hospital for severe VVs and pubic varicosities. She had been in a spoke maternity unit where a cesarean section was scheduled. After a multidisciplinary evaluation, we offered her the chance to have a vaginal delivery (VD). The woman had an uneventful VD, and VVs disappeared after 40 days. A comprehensive literature search on this topic showed 11 cases of VVs during pregnancy (five VDs and six cesarean sections). The presence of VVs represented the indication for surgery in 70% of cases. Severe complications occurred in 20% of VDs vs. 50% of CSs. In pregnant women with VVs, the risk-benefit ratio suggests a chance of having VD.
Introduction
The vulva surrounds the vagina and external urethral meatus, including the labia minora, labia majora, clitoris, vestibule, and mons pubis. The pelvis contains numerous venous plexuses and anastomoses between the veins of the pelvis and legs, thus offering many opportunities to develop varices.
Woman's Health Sciences Department, Gynecology and Obstetrics Section, Polytechnic University of Marche, Ancona, Italy During pregnancy, the rate of vulvar varicosities (VVs) increases. VVs occur in 8% of pregnancies. 1 There are anatomical (venous compression by the pregnant uterus) and hormonal changes that favor the formation of VVs during pregnancy. Ovarian and internal iliac veins with incompetent or absent valves can cause pelvic venous insufficiency. 1 Downstream VVs may develop owing to blood pooling in the pelvis. 1 The signs and symptoms of VVs usually depend on the severity of the clinical conditions. The symptoms range from mild local discomfort to difficulty in walking. The slight appearance of VVs is not an absolute contraindication to vaginal birth. VVs usually disappear within approximately 30 to 40 days after delivery. 2 The occurrence of severe VVs is a rare event in pregnant women. In these cases, the decision-making process is more demanding regarding the timing and the mode of delivery. The fear of vulvovaginal lacerations with possible severe bleeding makes management more uncertain. Several cases in the literature showed cesarean section (CS) as the delivery mode of choice in these extreme clinical conditions. [2][3][4][5][6][7] A further crucial variable in VVs is represented by the setting where the birth occurs. CS rates are frequently higher in spoke maternity units than in hub hospitals because of unsuitable settings for prompt action with severe obstetric complications. 8 To date, there is no clear evidence on the best mode of delivery for these women. This report describes a rare case of huge VVs and pubic varicosities where the patient was able to have a vaginal delivery (VD). We also summarize similar cases in the literature.
Case presentation
The reporting of this study conforms to the CARE guidelines. 9 A 23-year-old woman with a full-term pregnancy visited our hospital for severe VVs and pubic varicosities, and she had difficulty in walking. She had been in a spoke maternity unit where a CS was scheduled.
The woman noticed an enlargement of the varicosities in the previous 2 weeks with worsening local discomfort and severe asthenia. Up to that point, her pregnancy had been uneventful. She had already had a vaginal birth without the appearance of VVs in her previous pregnancy. Her personal and obstetric history was unremarkable. She was not taking any medicine. All laboratory findings of blood and urine were within normal limits.
The patient's external genitalia showed impressive pubic varicosities and VVs (Figure 1). On palpation, the varicosities were soft and slightly painful. The vaginal introitus was edematous and congested. We performed an ultrasound with color Doppler evaluation. Examination of the venous drainage of the VVs showed bilateral reflux in the great saphenous vein. We were unable to evaluate possible drainage into the obturator circle. The superficial and deep femoral circles were bilaterally patent. These findings suggested pubic varicosities and VVs with probable drainage into the pelvic circulation. No thrombotic complications were observed.
After a multidisciplinary evaluation with vascular surgeons and radiologists, we offered the patient the opportunity to have a VD, and her consent for treatment was obtained. Following the induction of labor with a cervical ripening balloon, the woman had an uneventful vaginal birth. The appearance of the VVs and pubic varicosities changed during labor and after VD (Figure 2): the varicosities began to regress during the second stage of labor and disappeared after 40 days.
After 3 days, the patient had a regular postpartum course and was discharged with her healthy newborn.
Literature search and methods
The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed to review the literature systematically. 10 A literature search was performed in the PubMed and Scopus databases (1946-2021; accessed on 12 December 2021) (Figure 3). Retrospective observational studies (case reports, case series) including a full case description and reporting the following items were included: age, gestational weeks, signs/symptoms, lesion location, previous delivery, mode of delivery, indications for CS, and delivery outcome. Review articles and cases in a non-English language were excluded. The search strategy combined the terms "vulvar, vaginal, vulvovaginal varicosities/varices/veins", AND "delivery", AND "vaginal delivery", AND "cesarean section", AND "pregnancy", AND "bleeding", AND "hemorrhage". The only filter used was the English language. Relevant articles were obtained in full-text format and screened for additional references.
Two independent reviewers (L.G. and M.M.) selected the studies using a two-step screening method. First, titles and abstracts were screened to assess eligibility and inclusion criteria and to exclude irrelevant studies. The two reviewers then evaluated the full texts of the included articles to (1) assess study eligibility and inclusion criteria and (2) avoid duplication of the included cases. Two other authors (J.D.G. and G.D.C.) performed a manual search of reference lists for additional relevant publications. A.C. checked the extracted data. The objective of this systematic review was to summarize the literature regarding a rare event for which there are limited data, such as the mode of delivery and outcome, in pregnant women with VVs.
The data collection was study-related (authors and year of study publication) and case-related (age, gestational weeks at delivery, signs/symptoms, varicosity location, previous delivery, mode of delivery, indication for cesarean section, and outcome). The collected data are reported as continuous or categorical variables. Continuous variables were tested for a normal or non-normal distribution using the Kolmogorov-Smirnov test. According to the distribution, the data are expressed as mean ± standard deviation or median and interquartile range. Categorical variables are expressed as frequency and percentage. MedCalc® Statistical Software version 20 (MedCalc Software Ltd., Ostend, Belgium; https://www.medcalc.org; 2021) was used for the analysis.
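The distribution-dependent choice of summary statistic described above can be sketched as follows. The age values and the critical-value approximation (1.36/√n from the classical KS table, which strictly speaking overstates the threshold when the mean and SD are estimated from the same data, as in the Lilliefors variant) are illustrative assumptions; the study itself used MedCalc.

```python
import math
from statistics import NormalDist, mean, stdev, median, quantiles

def ks_stat_vs_normal(values):
    """One-sample Kolmogorov-Smirnov statistic of standardized data vs N(0,1)."""
    m, s = mean(values), stdev(values)
    xs = sorted((v - m) / s for v in values)
    n, nd, d = len(xs), NormalDist(), 0.0
    for i, x in enumerate(xs, start=1):
        c = nd.cdf(x)
        # KS distance: largest gap between empirical and normal CDF
        d = max(d, abs(i / n - c), abs(c - (i - 1) / n))
    return d

def summarize(values):
    # ~5% critical value from the classical KS table: 1.36 / sqrt(n)
    if ks_stat_vs_normal(values) < 1.36 / math.sqrt(len(values)):
        return f"{mean(values):.1f} +/- {stdev(values):.1f}"   # looks normal
    q1, _, q3 = quantiles(values, n=4)                         # else median/IQR
    return f"{median(values):.1f} (IQR {q1:.1f}-{q3:.1f})"

ages = [24, 26, 27, 28, 29, 29, 30, 31, 33, 35, 38]  # hypothetical ages
print(summarize(ages))  # -> "30.0 +/- 4.1"
```

With only 11 cases, the test has low power, so a normal-looking summary here should be read cautiously.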
Discussion
We report a successful VD in a woman with enormous VVs and pubic varices. Such huge VVs are a rare occurrence in pregnancy. Our patient had recently been in a spoke maternity unit where a CS had already been scheduled.
In primary maternity units, the resources readily available for obstetric emergencies are more limited compared with those in hub hospitals. 8 In this case, the fear of facing severe bleeding in an unsuitable setting may have led to choosing to have a CS performed. Our hospital is the only regional obstetric hub hospital. Our hospital manages all high-risk pregnancies (preterm fetuses, severe pregnancy pathologies), while low-risk and full-term pregnant women can also give birth in primary maternity units (spoke centers).
We carried out a multidisciplinary consultation to decide on the mode of delivery. We offered the patient the chance to have VD because of the absence of mandatory indications for surgery and a suitable setting for dealing with any complications. This situation emphasizes that, in a hub and spoke obstetric network, these women should be referred to a high-volume obstetric unit to avoid unnecessary CS.
The literature on VVs is limited. Few cases of VVs with severe varicosity have been reported. Furthermore, there are no indications for obstetric management of this condition. Therefore, more in-depth research on this topic needs to be performed to provide more detailed information for daily obstetric practice. To date, a summary including all cases of VVs in pregnancy reported in the literature is missing. In our literature review, 11 cases of VVs in pregnancy were identified at the final analysis, including the present case (Table 1). [2-7,11-14] The median age was 29 years. All of the women were pregnant at full term. Almost all patients reported a vulvar and/or vaginal localization. Our case is the only one that showed a pubic location. The signs and symptoms appeared nonspecific. Most women reported mild local vulvar discomfort. Approximately 55% of the women were nulliparous. Interestingly, three women had Klippel-Trenaunay syndrome. Klippel-Trenaunay syndrome is a rare malformation that includes the presence of capillary malformations, hypertrophy of soft tissues and bones, and atypical lateral varicosities. Pregnancy in these women can exacerbate the signs/symptoms of the disease. Five (45%) women had a vaginal birth, while six (55%) underwent a CS. Approximately 67% (4/6) of the women who underwent CS did not have a mandatory indication for surgery; therefore, the reason for CS was the presence of VVs. Severe complications occurred in 20% (1/5) of VDs vs. 50% (3/6) of CSs.
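The percentages above follow directly from the case tally; recomputing them from the counts reported in the text (note that 6/11 rounds to 55% and 4/6 to 67%):

```python
vd_total, cs_total = 5, 6                      # 11 reviewed cases in total
n = vd_total + cs_total
print(f"VD: {vd_total}/{n} = {100 * vd_total / n:.0f}%")          # 45%
print(f"CS: {cs_total}/{n} = {100 * cs_total / n:.0f}%")          # 55%
print(f"CS without mandatory indication: 4/6 = {100 * 4 / 6:.0f}%")  # 67%
print(f"Severe complications, VD: 1/5 = {100 * 1 / 5:.0f}%")      # 20%
print(f"Severe complications, CS: 3/6 = {100 * 3 / 6:.0f}%")      # 50%
```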
The descriptive analysis of these cases provides further information on VVs in pregnancy. In the presence of severe VVs, a CS was preferred more often than VD. These women had a non-negligible complication rate. The only case of severe complication in a VD was reported in 1951 after using "prophylactic" forceps and episiotomy. 13 Although the small sample size cannot provide mandatory indications for daily obstetric practice, the risk-benefit ratio should allow offering the chance of having VD for these women.
In conclusion, our patient reported satisfaction with her successful VD and avoiding surgery. Our findings suggest the following two main points: 1) the mode of delivery in women with huge VVs should be managed by hub obstetric centers with a multidisciplinary consultation including vascular surgeons and radiologists; and 2) on the basis of only a few reported cases, the risk-benefit ratio suggests a chance of having VD for these women.
Author contributions
L.G.: conceptualization, methodology, formal analysis, data curation, patient management, and writing-original draft preparation; M.M., J.D.G., and G.D.C.: methodology, data curation, and writing-original draft preparation; A.C.: methodology, data curation, patient management, supervision, writing-review, and editing. All authors have read and agreed to the published version of the manuscript.
Ethics statement
Review board approval was not required because of the nature of this study (case report). The patient provided written informed consent for publication of this case report.
Data availability statement
The data supporting the findings of this study are available within the article.
Cauchy horizon stability and mass inflation with a cosmological constant
Motivated by the strong cosmic censorship conjecture, we consider the Einstein-Maxwell-scalar field system with a cosmological constant Λ (of any sign), under spherical symmetry, for characteristic initial conditions, with outgoing data prescribed by a (complete) subextremal Reissner-Nordström black hole event horizon. We study the structure of the future maximal (globally hyperbolic) development, analyze the mass inflation scenarios, identifying, in particular, large choices of parameters for which the Hawking mass remains bounded, and study the existence of regular extensions. We also discuss why our results, although valid for all signs of Λ, only provide evidence for the failure of strong cosmic censorship in the case of a positive cosmological constant.
Introduction
The existence and stability of Cauchy horizons is intimately related to the question of global uniqueness for the Einstein equations and, in particular, to the celebrated strong cosmic censorship conjecture. To study this, we consider the following problem:

Problem. Given spherically symmetric characteristic initial data for the Einstein-Maxwell-scalar field system with a cosmological constant Λ, with the data on the outgoing part of the null initial hypersurface prescribed by a (complete) subextremal Reissner-Nordström black hole event horizon, with non-vanishing charge, and the remaining data otherwise free, verify if the corresponding maximal globally hyperbolic development is future inextendible as a suitably regular Lorentzian manifold.

This is a direct extension of the framework of [1] with the introduction of a cosmological constant in the field equations.
Here, we will review some of the results obtained in a recent series of papers [2,3,4] dedicated to the above problem, where we generalize the results of Dafermos concerning the stability of the radius function, for a cosmological constant of any sign. This has the remarkable consequence of allowing continuous extensions of the metric across the Cauchy horizon. We analyze the mass inflation scenarios and, in particular, identify large choices of parameters for which the Hawking mass remains bounded. Then we carefully unveil the consequences of this last fact concerning the existence of regular extensions of the metric beyond the Cauchy horizon. More precisely, we obtain continuous extensions of the metric across the Cauchy horizon with square integrable Christoffel symbols and, under slightly stronger conditions, we construct (nonisometric) extensions which are classical solutions of the Einstein equations. To the best of our knowledge, these are the first results of this kind.
Framework
The Einstein-Maxwell-scalar field system with a cosmological constant Λ is given by (1), with T_µν obtained from the sum of the stress-energy tensor of a massless scalar field φ, satisfying the wave equation, and an electromagnetic field F solving the (source-free) Maxwell equations. Imposing a spherically symmetric ansatz (2), with u the ingoing and v the outgoing null coordinates, respectively, the electromagnetic field completely decouples (3). Introducing the renormalized Hawking mass ϖ through (4), the field equations reduce to (5)-(8): a wave equation for the scalar field (5), a wave equation for the radius function (6), and equations prescribing the gradient of the mass, (7) and (8). From these we can easily derive the remaining unknowns: the metric component Ω from (4) and the Faraday tensor F from (3).
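For orientation, since the displayed equations are referenced only by number here, the standard shape of the first of these relations in double-null gauge is the following sketch (conventions follow the Dafermos-style literature on this problem; the paper's exact signs, normalizations and the Λ-dependence in the definition of ϖ may differ):

```latex
% Einstein equations with cosmological constant, cf. (1):
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda g_{\mu\nu} = 8\pi T_{\mu\nu}
% spherically symmetric double-null ansatz, cf. (2):
g = -\Omega^2(u,v)\, du\, dv + r^2(u,v)\, \mathring{\sigma}_{S^2}
% a common definition of the renormalized Hawking mass \varpi, cf. (4):
\partial_u r\, \partial_v r = -\frac{\Omega^2}{4}
  \left(1 - \frac{2\varpi}{r} + \frac{e^2}{r^2} - \frac{\Lambda}{3}\, r^2\right)
```

Setting φ ≡ 0 in this system recovers the Reissner-Nordström family discussed next.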
The Reissner-Nordström solution and strong cosmic censorship
The first reason to consider the previous Einstein-matter system is that it admits the full Reissner-Nordström (RN) family of solutions, where we include the anti-de Sitter (Λ < 0), the asymptotically flat (Λ = 0) and the de Sitter (Λ > 0) subfamilies as particular solutions. This family is obtained by setting φ ≡ 0. Then, the usual mass and charge parameters are given by M = ϖ and Q = 4πe. In this context, it is natural to try to derive the black hole solutions of this family from a characteristic initial value problem for the system (5)-(8). To do that in a unified way for all signs of Λ, we can prescribe their corresponding characteristic data on the event horizon: the future maximal globally hyperbolic developments obtained will then coincide with the shaded region in figure 1. But such developments can be continued to the future in a highly non-unique way, with the metric extending smoothly. This well-known fact is quite disturbing, since what happens in such extensions is not determined, in any way, by the prescribed initial data. In such extensions, a Cauchy horizon arises as the boundary of the maximal development, signaling the failure of global uniqueness. This puts into question the deterministic character of general relativity. A heuristic argument devised by Simpson and Penrose [5] suggests that small perturbations might turn the Cauchy horizon into a singularity, via a blue-shift instability, beyond which spacetime cannot be continued in any meaningful way. As a consequence, global uniqueness might still hold as a generic feature of initial value problems for the Einstein equations. These are the core ideas behind the strong cosmic censorship conjecture (SCC). Later, Poisson and Israel [6] identified the blow up of the Hawking mass as a source for the expected Cauchy horizon instability, a process known as mass inflation.
The previous discussion motivates the problem in the Introduction. Regarding the specific choices made there, the self-gravitating real massless scalar field is the simplest non-pathological matter model with dynamical degrees of freedom in spherical symmetry; moreover it exhibits a wavelike behavior reminiscent of the general Einstein vacuum equations. A non-vanishing charge parameter is needed to exclude the Schwarzschild subfamily, whose solutions do not contain a Cauchy horizon to start with.
We introduce a cosmological constant because it plays a fundamental role in modern physics and, from a purely mathematical point of view, provides rich geometrical structures and dynamical behaviors. But it is not clear why it should matter in the context of the SCC. Note, for instance, that our problem concerns what happens in the interior of a black hole region, whereas the effects of the cosmological constant are especially relevant at large scales. Nonetheless, during the late 90s, there was a considerable amount of activity concerning the SCC with Λ > 0. The first impressions were that the SCC should fail. But, later on, such claims were dismissed by arguing that the conclusions were based on over-simplified models. A full non-linear analysis, for instance as provided by our problem, was clearly in order.
Regularity of extensions and Cauchy horizon stability
In the formulation of our problem, a fundamental role is played by the regularity that we allow the extensions to have beyond the maximal globally hyperbolic development. This regularity may be viewed as a measure of the stability of the Cauchy horizon: the more regular the extensions are, the more stable the Cauchy horizon can be thought of. We will now briefly discuss some regularity requirements on the metric.

Inextendibility of the metric in C²: This is, first of all, motivated by the fact that the Einstein equations are of second order. Also, the blow up of the Kretschmann scalar (a measure of tidal forces) implies this criterion. It is clear that its failure would provide overwhelming evidence against the SCC. Nonetheless, its success would not provide such a compelling argument in favor of the conjecture, since there are plenty of relevant solutions of the Einstein equations whose regularity is well below this threshold. In fact, we have constructed classical solutions which are not C².
Inextendibility of the metric in C⁰: This corresponds to Christodoulou's choice in his original formulation of the SCC. Its validity would provide overwhelming evidence in favor of the deterministic nature of general relativity.
It turns out that neither of the two previous criteria is enough to capture the full richness of the subject. In fact, in [8], Dafermos proved the existence of (spherically symmetric) extensions, with metric in C⁰ \ C¹, for the Λ = 0 case.
We are thus led to consider a third type of requirement, based on the possibility of trying to exclude extensions satisfying the field equations.
Inextendibility as a Lorentzian manifold with Christoffel symbols in L²_loc: This is enough to ensure inextendibility of the metric, even as a weak solution of the Einstein equations, and it was proposed by Christodoulou in response to Dafermos' results. Chruściel had already proposed a similar criterion, with the following conditions imposed directly at the level of the metric: inextendibility with g ∈ H¹_loc and g⁻¹ ∈ L^∞_loc. It turns out that, under the genericity conditions alluded to above, this criterion holds for the (Λ = 0) solutions of Dafermos.
Main results
Given a reference subextremal RN solution, from whose event horizon we take the outgoing initial data (recall that the remaining data is fixed freely), we first show that it has a unique maximal development (M, g, φ), with the metric (2) and scalar field φ defined on M = P ×_r S², where P is a past set P ⊂ [0, U] × [0, ∞[. In our coordinates, the Cauchy horizon of the reference RN solution is given by v = +∞.
Stability of the radius function at the Cauchy horizon. We show that there exist U > 0 and r₀ > 0 for which r ≥ r₀ on [0, U] × [0, ∞[. So, we see that the symmetry orbits do not collapse to points on [0, U] × [0, ∞[. This is a stability result for the Cauchy horizon. We have that (M, g, φ) extends, across the Cauchy horizon {v = ∞}, to (M̂, ĝ, φ̂), with ĝ and φ̂ in C⁰.
We now need to introduce the quantity ρ, defined as the ratio between the surface gravity of the Cauchy horizon and the surface gravity of the event horizon of the reference solution. Note that ρ > 1 and that it provides a measure of how close the initial data is to extremality; the case ρ = 1, which we have excluded from the beginning, corresponds to the extremal case, where the radius of the event horizon and the radius of the Cauchy horizon coincide. After controlling r, the fundamental quantities that will determine the regularity of extensions are the mass ϖ and the radial derivative of the scalar field along lines of fixed u, ∂_r(φ|_{u=c}). As we will see, our control over these quantities will depend on the decay rate imposed on the free data φ( · , v = 0) and on how close we are to extremality.
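Since ρ is built from the surface gravities of the reference solution, it can be computed in closed form from (M, Q) in the Λ = 0 subfamily; a small illustrative sketch (the closed-form horizon radii below hold only for Λ = 0; for Λ ≠ 0 the horizons are roots of a higher-degree polynomial):

```python
import math

def rho_subextremal_rn(M, Q):
    """Ratio of Cauchy- to event-horizon surface gravity for Lambda = 0
    Reissner-Nordstrom with 0 < |Q| < M (sub-extremal)."""
    disc = math.sqrt(M**2 - Q**2)
    r_plus, r_minus = M + disc, M - disc   # event / Cauchy horizon radii
    # kappa_pm = |f'(r_pm)| / 2 = sqrt(M^2 - Q^2) / r_pm^2
    # for f(r) = 1 - 2M/r + Q^2/r^2, so the ratio collapses to (r_+/r_-)^2:
    return (r_plus / r_minus) ** 2         # rho = kappa_- / kappa_+ > 1

print(round(rho_subextremal_rn(1.0, 0.8), 6))   # 16.0: well inside rho > 2
print(round(rho_subextremal_rn(1.0, 0.99), 2))  # 1.76: approaching extremality
```

This makes concrete how ρ measures closeness to extremality: as |Q| → M the two horizon radii merge and ρ → 1.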
Mass inflation. If ρ > 2, then, for a suitable class of free data, the renormalized mass ϖ blows up at the Cauchy horizon. In particular, no C¹ (spherically symmetric) extensions across the Cauchy horizon exist, since ϖ is at the level of first derivatives of the metric. In fact, the Christodoulou-Chruściel inextendibility criterion holds. We stress the fact that we are only considering the existence of spherically symmetric extensions: for instance, when considering general extensions, the blow up of the mass, a priori, only excludes the existence of C² extensions, since it leads to the blow up of the Kretschmann scalar. The existence, under these circumstances, of non-spherically symmetric extensions with metric in C¹ is, to our knowledge, an open problem.
No mass inflation.
If the free data satisfies a suitable decay condition (9), then there exists C > 0 such that ϖ remains bounded by C up to the Cauchy horizon. Moreover, the Christodoulou-Chruściel inextendibility criterion fails, i.e. (M, g, φ) extends, across the Cauchy horizon, to (M̂, ĝ, φ̂), with ĝ and φ̂ in C⁰, Christoffel symbols Γ̂ in L²_loc, and φ̂ in H¹_loc. Bounding ∂_r(φ|_{u=c}). If |∂_u φ(u, 0)| ≤ c u^s, for some s > 13ρ/9 − 1, then ∂_r(φ|_{u=c}) remains uniformly bounded up to the Cauchy horizon. Consequently, (M, g, φ) extends (in a non-unique way), across the Cauchy horizon, to a spherically symmetric (classical) solution (M̂, ĝ, φ̂) of the Einstein-Maxwell-scalar field system with cosmological constant Λ, with ĝ and φ̂ in C¹. Moreover, the Kretschmann scalar is uniformly bounded. To the best of our knowledge, these are the first results where the generic existence of extensions as solutions of the Einstein equations is established.
6. Remarks about the strong cosmic censorship conjecture

Strictly speaking, our results, which hold for all signs of Λ, do not apply directly to the SCC. The reason for this is that we are considering exact RN data on the initial outgoing hypersurface, with our dynamical degrees of freedom provided by the free data on the ingoing hypersurface. So our results only cover the case where the scalar field has compact support on the event horizon. This scenario is not expected to arise from the gravitational collapse of generic (and appropriate) Cauchy (for Λ ≥ 0) or Cauchy and boundary (for Λ < 0) initial data, to which a proper formulation of the SCC should refer. In fact, for Λ = 0, the expected generic behavior along the event horizon is described by a Price's law of the form v^(−p), for some p > 0 and v an Eddington-Finkelstein coordinate as above. For initial data satisfying this law, Dafermos has shown in [8] that, for all 1 < ρ < ∞, mass inflation occurs and the Christodoulou-Chruściel criterion holds (see also [7]). For Λ < 0, in the general non-symmetric case, the decay along the horizon is expected to be even slower, and the process of gravitational collapse is far less understood. Therefore we excuse ourselves from extrapolating from our results (which, recall, hold when Λ < 0 as well).

For Λ > 0, the expected Price law provides "fast" decay, of the form e^(−Δv) for some positive Δ, which leads to an exponential approach to the stationary data considered here. So, in the Λ > 0 case, the no mass inflation scenario and some of its remarkable consequences are expected to be true for dynamical data close to extremality (ρ ≈ 1) and satisfying (9). This puts the validity of the SCC, in the Λ > 0 case, into question. From the previous discussion it is clear that the stationary data case at the event horizon, discussed here, is the most natural setting to study, in a unified framework, the influence of the sign of Λ on the stability of Cauchy horizons. Our results and techniques show that differences concerning Cauchy horizon stability and mass inflation when Λ changes sign are due solely to the differences in the decay rates of φ along the event horizon, as described by a Price's law.
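The dichotomy just described — polynomial decay v^(−p) for Λ = 0 versus exponential decay e^(−Δv) for Λ > 0 — can be illustrated numerically: any exponential tail eventually drops below any polynomial one, no matter how small Δ or how large p. The rates below are arbitrary illustrative choices, not values from the paper.

```python
import math

def crossover(p, delta, v0=2.0, step=0.5, v_max=1e6):
    """First v >= v0 (on a coarse grid) where exp(-delta*v) < v**(-p)."""
    v = v0
    while v <= v_max:
        if math.exp(-delta * v) < v ** (-p):
            return v
        v += step
    return None  # not reached on this grid

# Near v0 the polynomial tail is smaller; far out the exponential always wins.
v_star = crossover(p=3.0, delta=0.1)
print(v_star, math.exp(-0.1 * v_star) < v_star ** (-3.0))
```

This is why, for Λ > 0, dynamical horizon data is expected to approach the stationary (compactly supported) data considered in the paper, making the no-mass-inflation regime relevant there.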
For Λ > 0 the expected Price law provides "fast" decay for some positive ∆, which leads to an exponential approach to the stationary data considered here. So, in the Λ > 0 case, the no mass inflation scenario and some of its remarkable consequences are expected to be true for dynamical data close to extremality ρ ≈ 1 and satisfying (9). This puts the validity of the SCC, in the Λ > 0 case, into question. From the previous discussion it is clear that the stationary data case at the event horizon, discussed here, is the most natural setting to study, in a unified framework, the influence of the sign of Λ in the stability of Cauchy horizons. Our results and techniques show that differences concerning Cauchy horizon stability and mass inflation when Λ changes sign are due solely to the differences in the decay rates of φ along the event horizon, as described by a Price's law. | 2019-04-19T13:04:42.801Z | 2015-04-28T00:00:00.000 | {
"year": 2015,
"sha1": "82688134fdc6414977a9cf927e80b9de21f6b629",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/600/1/012031",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "f3e9e1e9075ecd4dd2b6c205b287b04efd285a21",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
An atypical presentation of a mesenteric Meckel's diverticulum in a 7-year-old warmblood mare: case report
Background: Meckel's diverticula are a rare cause of small intestinal strangulation, diagnosed at laparotomy or necropsy. This congenital anomaly of the gastrointestinal tract originates from a remnant of the vitelline duct. In reported equine cases, they present as a full-thickness diverticulum on the antimesenteric border of the distal jejunum or proximal ileum.

Case presentation: On laparotomy, a Meckel's diverticulum positioned at the mesenteric side was found to be the cause of small intestinal strangulation. This position is very uncommon and, to the best knowledge of the authors, there is no unambiguous description of another case.

Conclusions: Meckel's diverticula should be on the list of differential diagnoses in cases of small intestinal strangulation. As in humans, equine Meckel's diverticula can have the standard antimesenteric as well as a more exceptional mesenteric location. This case adds to the series of anecdotal reports of anomalies with regard to Meckel's diverticula in the horse.
Background
Vitelline anomalies are a rare and often overlooked cause of small intestinal strangulation in horses [1,2]. During the early embryonic stages, the vitelline duct (also called omphalomesenteric duct) connects the yolk sac with the developing midgut [3][4][5]. As the yolk regresses and the placenta takes over, the vitelline duct and paired vitelline arteries atrophy [3,5]. In case of an incomplete obliteration, a mesodiverticular band, a vitelline duct cyst or a Meckel's diverticulum may remain [2,3,[5][6][7][8]. Vitelline remnants can be an incidental finding during exploratory laparotomy, but they may as well lead to small intestinal strangulation for which surgical intervention is needed [1,3,8].
This report describes the case of a 7-year old warmblood mare with an atypical mesenteric positioned Meckel's diverticulum as the cause of small intestinal strangulation diagnosed at exploratory laparotomy.
Case presentation
History
A 7-year-old, 495 kg Dutch warmblood mare developed signs of acute colic (attempts to lie down, rolling, restlessness and pawing) after riding. The horse did not respond to intravenous analgesia with flunixin meglumine given by the referring veterinarian and was referred to the Utrecht University Equine Clinic (Utrecht, the Netherlands). The mare arrived at the university clinic 2.5 h after the onset of symptoms.
Clinical examination
On presentation, the mare continued showing signs of abdominal pain. A general physical examination revealed a heart rate of 48 beats/min, a respiratory rate of 10 breaths/min and a rectal temperature of 37.2°C. Oral mucous membranes were slightly pale and tacky, with a prolonged capillary refill time (CRT). The abdomen was slightly distended. On abdominal auscultation, borborygmi were absent on the left and reduced in frequency on the right side. Nasogastric intubation yielded no reflux. Rectal examination revealed a mildly impacted pelvic flexure; no distended loops of small intestine could be palpated, or seen on transcutaneous abdominal ultrasonography, at this time point. Results of a hematological assessment and biochemical panel were within normal limits.
Based on these clinical findings and particularly on the severe persisting signs of colic, which were unresponsive to repeated analgesia and sedation, a tentative clinical diagnosis of a small intestinal strangulation was made. The mildly impacted pelvic flexure was assumed to be secondary. An exploratory laparotomy under general anesthesia was recommended.
With the horse in dorsal recumbency, a standard midline approach was used, and a 25 cm incision was made through the linea alba. Manual exploration of the abdomen revealed a small intestinal strangulation, several dilated loops of small intestine and a mild impaction of the ascending colon. The strangulation was located in the distal jejunum (Figure 1), which after reduction and repositioning appeared to have a Y-shaped anatomy in this horse (Figure 2). Given its appearance, a congenital anomaly was presumed. The tube-like diverticulum originated from the mesenteric side of the jejunum instead of the anti-mesenteric side, as is most often described for a Meckel's diverticulum [9][10][11]. Oral to the strangulation, the jejunum was fluid-filled and distended over a length of 6-7 m. After correction of the strangulation, the small intestines were decompressed by gently pushing their contents into the cecum before resection was started.
The blind-ending diverticulum was located in the distal segment of the jejunum, not involving the jejuno-ileal junction; it measured 44 cm in total length and 5 cm in diameter. The proximal 17 cm ran parallel to the jejunum, after which it diverged into a free diverticulum (27 cm) with its own mesentery (Figure 3). The luminal part had a full length of 34.5 cm. The distal tip, a fat pedicle that secured the loop of strangulation (Fig. 1), had a dark red hemorrhagic serosal discoloration. The serosa of the adjacent jejunum showed a band of pink-purple discoloration on the sides of strangulation; in total, a loop of 30 cm of jejunum was strangulated by the diverticulum (Fig. 3).
Intestinal motility appeared to be decreased oral to and normal aboral to the strangulation. All parts of the intestine were considered viable based on their motility, wall thickness and color. However, to prevent recurrence of a strangulation or other complications caused by the diverticulum (e.g. impaction), the decision was made to perform 'en bloc' resection of the anomaly and the associated jejunum of approximately 1 m. An end-to-end jejuno-jejunostomy single-layer anastomosis as described by Freeman (2019) [8] was performed. Vessels in the mesentery were ligated with polyglactin 910 (Vicryl) USP 0. The one-layered anastomosis was created with an interrupted Lembert pattern of polydioxanone (PDS) USP 3-0. The mesentery was closed with a simple continuous suture of polyglactin 910 (Vicryl) USP 0. At the end of surgery, 1 L of 2% carboxymethyl cellulose (Carmellose gel) was deposited intra-abdominally and the horse received dalteparin (Fragmin, 50 IU/kg) subcutaneously to prevent intestinal adhesions. The abdominal wall was closed in four layers: the peritoneum was sutured with a simple continuous pattern of polyglactin 910 (Vicryl) USP 0, the linea alba with a simple continuous pattern of polyglactin 910 (Surgicryl) USP 6, the subcutis with a simple continuous pattern of polyglactin 910 (Vicryl) USP 2-0 and finally the skin with a continuous intradermal pattern of poliglecaprone (Monocryl) USP 2-0. A rolled gauze stent was sutured over the wound with interrupted cruciate sutures of nylon (Ethylon) USP 0. Recovery from anesthesia was uneventful.
Histopathology
Histopathological examination of the resected intestine was performed. The lumen of the diverticulum was covered by a stratified squamous epithelium. The second layer consisted of loose fibrous tissue, followed by a layer of adipose tissue and, finally, at the periphery, a mesothelial lining.
In the early post-operative period, the horse developed post-operative ileus with enterogastric reflux and mild colic signs (decreased appetite, weight shifting and pawing). On rectal examination, the previously encountered mild colonic impaction had increased in size and its consistency was much firmer as a result of dehydration. Medical treatment with laxatives and a metoclopramide CRI (Emeprid, 0.04 mg/kg BW/h) did not resolve the condition. Based on these findings, and as colic persisted, the decision was made to perform a relaparotomy. During the repeat laparotomy, the anastomosis was checked and assessed to be patent and viable, with no impacted ingesta and a normal luminal diameter. The small intestines were decompressed into the cecum and the impaction in the ascending colon was resolved through an enterotomy at the pelvic flexure.
Post-operative course after the second laparotomy was favorable. Supportive therapies were gradually discontinued based on clinical findings and bloodwork. Infusions and dalteparin were stopped after 3 days. Antibiotics were discontinued after 5 days. Gastroprotectants and NSAIDs were given for 10 days after surgery; 5 days of intravenous flunixin meglumine (Megluxin) at 1.1 mg/kg BW q12h was followed by 5 days of oral meloxicam (Metacam) at 0.6 mg/kg BW q24h. The amount of roughage and soaked feed was gradually increased until the horse was back to full ration 8 days after the second surgery. The horse was discharged after 14 days of hospitalization.
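For reference, the per-dose amounts implied by the weight-based regimens above can be checked arithmetically (495 kg body weight as reported; a back-of-envelope check only, not dosing guidance):

```python
body_weight_kg = 495  # reported body weight of the mare
regimens = [
    ("flunixin meglumine, IV q12h", 1.1, "mg"),
    ("meloxicam, PO q24h", 0.6, "mg"),
    ("dalteparin, SC", 50, "IU"),
]
for drug, dose_per_kg, unit in regimens:
    # total amount per administration = dose rate x body weight
    print(f"{drug}: {dose_per_kg * body_weight_kg:g} {unit}/dose")
```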
Four months after discharge, the owner was contacted by telephone for follow-up. The mare had not shown any new episodes of colic, nor signs of wound infection, during the previous months. Riding had been gradually reintroduced by the time of the telephone survey.
Discussion and conclusions
Meckel's diverticula (MD) are a congenital disorder of the gastrointestinal tract found in several species, including humans, pigs and horses [1,2,12]. In rare cases the vitelline duct (partially) fails to atrophy during embryonic development and a tube-like diverticulum remains, called a Meckel's diverticulum [2,3,8]. This blind-ending branch of small intestine is located at the distal end of the jejunum or proximal ileum, has a lumen and can vary in length from 10 to 35 cm [1,3,13]. The diverticulum encountered in this case consisted orally of a part that was attached to and ran parallel to the distal jejunum, sharing the serosa but separated by a mucosa-covered septum (Figure 4), similar to the cases described by Yovich and Horney (1983) [14] and Barakazai et al. (2003) [1]. Because of this, the diverticulum measured a total length of 44 cm, with a free part of 27 cm in length (Fig. 3). It has been postulated that the stratified squamous epithelium, as found in the diverticular part, might develop secondary to a chronic impaction [1,2]. Similar to the findings described by Verwilghen et al. (2010) [2], no lamina muscularis could be identified in the case described here.
In the literature, a Meckel's diverticulum is usually characterized by an origin at the antimesenteric aspect of the small intestine; however, in humans there are a few cases known to have had a mesenterically positioned diverticulum [12,15,16]. To our knowledge, this is the first equine case report to describe a mesenterically positioned tube-like diverticulum of the distal jejunum (Figure 5). The unusual condition encountered in this case is of value to report, to assist surgeons encountering it during an emergency laparotomy, as its location also influences the surgical possibilities. The case report by Wefel et al. (2011) [17] also describes a mesenteric diverticulum associated with a mesodiverticular band; however, as that case did not present the typical anatomy of a blind conical extension of a Meckel's diverticulum, it might have been an acquired diverticulum [17][18][19]. The anatomy of the diverticulum in the case described here must have been a congenital anomaly rather than an acquired one. Based on the typical anatomy, location in the distal jejunum and histological composition, this is very likely a variety of a Meckel's diverticulum rather than a jejunal duplication [12] (Fig. 3).
Just like a Meckel's diverticulum, a mesodiverticular band is a kind of vitelline anomaly [20]. A mesodiverticular band is a remnant of a vitelline artery, usually the left, and its associated mesentery [6,8,21]. It forms a triangular fold running from the intestinal mesentery to the antimesenteric side of the distal jejunum [8,10,20]. These vitelline anomalies are rather rare findings: Van den Boom and Van der Velden (2001) [2] reported that 2 out of 1008 horses admitted for acute colic had Meckel's diverticula on laparotomy or post-mortem examination, and Sprinkle et al. (1984) [3] found only 5 Meckel's diverticula in 15,000 (0.03%) post-mortem examinations. Vitelline anomalies can go undetected, and a horse can live without complications for years, as is reflected by the wide age range of cases reported in the literature [1]. Based on this, the figures mentioned above might be an underestimation of the real prevalence in the equine population [9,23].
Meckel's diverticula can be an incidental finding; however, they are most often found to be related to the cause of colic [3,8,9]. They can cause intestinal strangulation, form an axis for volvulus, or become herniated, and the blind ending can become impacted and ultimately rupture, causing a fatal peritonitis [4,8,13,18,24,25]. Similarly, mesodiverticular bands can lead to volvulus, and small intestine can become strangulated in the blind sac or through a rent in the mesodiverticular band [8,17,20]. In the case described here the diverticulum was entangled around an adjacent part of the distal jejunum, while the blind ending of the diverticulum was only slightly impacted with feed material.
Surgical intervention is straightforward for symptomatic diverticula; however, prophylactic surgical intervention for asymptomatic incidental diverticula is debatable [9,12,26]. Based on the risk factors, most surgeons advise resection of the diverticulum or mesodiverticular band if found at laparotomy, as it may predispose to episodes of colic [11,16,17,27]. In human medicine several surgical options have been described for cases of Meckel's diverticula [27]. Based on the encountered anatomy, its relation to the colic signs and, if applicable, the viability of the strangulated part, a diverticulectomy, a wedge resection, or a segmental resection is performed [25,26]. Because of its location, a diverticulectomy as described by Bartmann et al. (2002) [13] was not possible in this case, as this would impair the blood supply to the remaining jejunum. A segmental resection with a one-layered anastomosis was performed using a method described by Freeman (2019) [8]. This kind of anastomosis minimizes luminal diameter reduction [8], as could be evaluated during the re-laparotomy.
Although there was a mild secondary impaction palpated during the initial surgery, the decision was made not to empty and flush the colon through a pelvic flexure enterotomy at that time. It was assessed that this impaction could be resolved on its own, based on the large amount of fluid decompressed from the small intestines into the cecum; also, this decision helped minimize surgery time. However, in the first 36 h after surgery a larger and firmer impaction developed, probably in part secondary to ileus causing fluid retention in the small intestinal lumen. A re-laparotomy was performed to resolve the impaction and from that point the horse recovered well.
Small intestinal strangulations carry a poor to guarded prognosis, with the highest mortality in the peri-operative period [22]. Short-term survival rates of horses that are allowed to recover from small intestinal surgery range from 68% upward [8,22,28]. Ileus, post-operative colic, and repeat laparotomy are factors associated with non-survival after small intestinal resection [8,29]. Approximately half of the Meckel's diverticula cases described were euthanized [1,2,13,17,18,24,25,30]. The horse described here recovered well after repeat laparotomy and was gradually reintroduced to its previous level of work.
In conclusion, Meckel's diverticula are a rare cause of small intestinal strangulation encountered in horses of a wide age range. In the literature, they are often defined as an anomaly of the vitelline duct, presenting as a full-thickness diverticulum on the antimesenteric border of the distal aspect of the jejunum or the proximal aspect of the ileum [11]. This case adds to the series of anecdotal reports of anomalies with regard to Meckel's diverticula in the horse. To our knowledge, this report describes the first mesenteric variant of a Meckel's diverticulum in a horse, with a Y-shaped bifurcation of the distal jejunum. It is worthwhile to report on the different types of aberrations encountered, as this may aid the accessibility of epidemiological data in future and may help surgeons to identify and recognize these anomalies.
Current Commonly Used Dynamic Parameters and Monitoring Systems for Perioperative Goal-Directed Fluid Therapy: A Review
Goal-directed fluid therapy (GDFT) is usually recommended in patients undergoing major surgery and is essential in enhanced recovery after surgery (ERAS) protocols. This fluid regimen is usually guided by dynamic hemodynamic parameters and aims to optimize patients’ cardiac output to maximize oxygen delivery to their vital organs. While many studies have shown that GDFT benefits patients perioperatively and can decrease postoperative complications, there is no consensus on which dynamic hemodynamic parameters to guide GDFT with. Furthermore, there are many commercialized hemodynamic monitoring systems to measure these dynamic hemodynamic parameters, and each has its pros and cons. This review will discuss and review the commonly used GDFT dynamic hemodynamic parameters and hemodynamic monitoring systems.
INTRODUCTION
Fluid resuscitation is critical during the perioperative period to ensure that vital organs receive adequate oxygen perfusion, especially since patients usually fast for 8 hours before surgery, which leads to dehydration [1]. Poor intraoperative fluid resuscitation can lead to poor patient outcomes [2].
Traditionally, intravenous (IV) fluid regimens have been "liberal," meaning high fluid volumes are given intraoperatively; in abdominal surgery, up to 7 liters (L). However, these regimens can lead to a weight gain of 3 to 6 kilograms, which suggests fluid overloading [3]. Other problems associated with liberal regimens include pulmonary complications [4,5], prolonged wound healing [6], and bowel edema causing prolonged ileus [7]. Moreover, hypervolemia may increase atrial natriuretic peptide release from the heart, which enhances the deterioration of the endothelial glycocalyx, a vital part of the vascular permeability barrier; this leads to loss of the vascular barrier and an increase in interstitial edema [8]. On the other hand, in patients undergoing low- to moderate-risk surgeries, a liberal fluid approach may be beneficial. For example, Holte et al. showed that patients who underwent laparoscopic cholecystectomy had significantly improved postoperative pulmonary function, with less nausea, dizziness, and fatigue, when given 40 mL/kg rather than 15 mL/kg of IV fluids [9].
More recently, major surgery and enhanced recovery after surgery (ERAS) protocols have been adopting more "restrictive" IV fluid regimens. In previous studies, intraoperative restrictive fluid regimens varied from 1.0 to 2.7 L in abdominal surgeries [10]. While these regimens may avoid the side effects of the liberal method, they may cause hypotension leading to organ damage. For example, the Restrictive versus Liberal Fluid Therapy in Major Abdominal Surgery (RELIEF) trial, which randomized 3,000 patients to a liberal or a restrictive fluid regimen, showed an increase in acute kidney injury with the restrictive regimen (8.6% vs 5.0%), although there was no difference in the rate of disability-free survival at one year. Other studies have shown that restrictive fluid regimens do have significant benefits in major surgeries. For example, multiple studies have shown that restrictive fluid regimens shorten postoperative ileus recovery in colon resections [11,12] and decrease the length of stay as well as 60-day surgical complications in patients who underwent hyperthermic intraperitoneal chemoperfusion [13].
While liberal and restrictive fluid regimens each have pros and cons, the definition of "liberal" or "restrictive" is arbitrary and depends on individual institutions or clinicians. Therefore, Goal-Directed Fluid Therapy (GDFT) has been utilized in several surgeries to avoid hypotension and fluid overloading by giving the "just right" fluid amount. This review aims to evaluate the current evidence and literature on commonly used dynamic parameters and hemodynamic monitoring systems, which have been used in multicenter GDFT randomized clinical trials (RCTs) and verified perioperatively in other studies (Table 1 and Table 2).
WHAT IS PERIOPERATIVE GOAL-DIRECTED FLUID THERAPY?
Perioperative GDFT aims to maximize oxygen delivery to tissue by optimizing cardiac output (CO) through fluid resuscitation. Optimal cardiac output is defined as the top of the Frank-Starling preload-stroke volume curve [14]. At maximum capacity, at the top of the curve, an increase in end-diastolic pressure increases the stroke volume (SV) less than it does on the lower part of the curve; the relationship is thus less "elastic." The ideal fluid status is at maximum capacity, where an increase in end-diastolic volume does not change the SV significantly (Figure 1).
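To make the plateau argument concrete, the following is a small illustrative sketch (not from this review): a generic saturating preload-stroke volume curve, with hypothetical numbers, showing that the same preload increment raises SV far more on the steep portion of the curve than near the plateau.

```python
# Illustrative only: a hypothetical saturating Frank-Starling-like curve.
# Curve shape and all numbers are assumptions chosen to show why the
# relationship is "less elastic" at the top of the curve.

def stroke_volume(edv, sv_max=100.0, k=60.0):
    """Hypothetical preload (EDV, mL) to stroke volume (mL) relationship."""
    return sv_max * edv / (edv + k)

def sv_gain(edv, bolus=20.0):
    """SV increase produced by the same preload increment at a given EDV."""
    return stroke_volume(edv + bolus) - stroke_volume(edv)

steep = sv_gain(40.0)     # low preload: steep part of the curve
plateau = sv_gain(200.0)  # high preload: near the plateau
print(f"SV gain on steep part: {steep:.1f} mL")
print(f"SV gain near plateau:  {plateau:.1f} mL")
```

With these assumed numbers, the identical bolus yields roughly a six-fold larger SV increase on the steep portion, which is the rationale for giving fluid only while the patient remains "fluid responsive."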
CURRENT EVIDENCE FOR GOAL-DIRECTED FLUID THERAPY
Multiple large-scale RCTs have shown GDFT's benefits in the perioperative setting. Additionally, smaller studies have demonstrated that GDFT has clinical benefits in high-acuity patients undergoing non-cardiac, cardiac, and vascular surgery [15][16][17][18]. PubMed and clinicaltrials.gov searches with the terms "Goal-Directed Fluid Therapy" and "multicenter randomized clinical trials" are summarized in Table 3. On the other hand, pushing the patient to (or beyond) the top of the Frank-Starling curve will lead to increased natriuretic peptide secretion by the heart, which provokes vasodilation, capillary leakage, and diuresis [19].
The largest GDFT multicenter RCT to date, the Optimization of Cardiovascular Management to Improve Surgical Outcome (OPTIMIZE) trial, randomized 734 high-risk adult patients undergoing major gastrointestinal surgeries. The GDFT group, which used LiDCO™rapid (LiDCO Ltd, Cambridge, UK) as a monitor to guide fluid resuscitation, had a lower rate of 30-day moderate or major complications and mortality (36.6% vs 43.4%) compared to control, though there was no difference in morbidity on day 7; infection, critical care-free days, and all-cause mortality at 30 days; all-cause mortality at 180 days; or length of hospital stay [20]. Serum biomarkers from participants in the OPTIMIZE trial also showed no evidence of GDFT-induced cardiac damage, as there was no elevation of troponin I or N-terminal pro-brain natriuretic peptide concentrations [21]. Furthermore, a sub-study of the OPTIMIZE trial showed that GDFT reduced health care costs [22]. Another multicenter GDFT RCT, the FEDORA trial, randomized 450 low- to moderate-risk patients undergoing major abdominal, urological, gynecological, or orthopedic surgery to esophageal doppler-guided GDFT or control groups. The FEDORA trial showed that patients in the GDFT group had fewer moderate to severe complications, such as acute kidney injury, pulmonary edema, and respiratory distress syndrome (8.6% vs 16.6%), and a shorter length of stay. However, there was no change in mortality [23].
Conversely, some studies have shown that GDFT lacks benefits. For instance, Gómez-Izquierdo et al. demonstrated that GDFT using esophageal doppler did not decrease the incidence of postoperative ileus in an RCT of 128 patients, even though it increased CO and SV and reduced perioperative IV fluid administration [24]. Moreover, Challand et al. showed no difference in discharge readiness and length of stay between patients who underwent major colorectal surgery in an esophageal doppler-guided GDFT group and a control group [25]. Besides abdominal surgeries, GDFT has been shown to lack effect in major vascular surgeries: Bisgaard et al. showed that in patients who underwent open elective abdominal aortic surgery, GDFT guided by LiDCO™plus (LiDCO Ltd, Cambridge, UK) did not decrease postoperative complications or length of stay in the intensive care unit [26]. Additionally, very recently, Fischer et al. demonstrated in an RCT of 447 intermediate-risk patients who underwent hip or knee arthroplasty that GDFT guided by the Pleth Variability Index (PVi) had no effect on hospital stay, acute renal failure, or cardiac complications when compared to the control group [27].
While there are currently mixed GDFT trial results, more large-scale multicenter trials with a heterogeneous patient population may help resolve debates about its clinical benefits. Two notable differences in all these clinical trials are that they use different fluid responsiveness parameters and monitoring systems to guide fluid resuscitation. Ongoing accurate in low perfusion states due to its use of perfusion index.
PULSE PRESSURE VARIATION (PPV) IN GOAL-DIRECTED FLUID THERAPY
PPV is a fluid responsiveness dynamic parameter described by Coyle et al. in 1983 [34]. PPV is calculated by measuring PP alterations during the respiratory cycle ( Figure 2b) [35]. In general, patients with a PPV <12% are unlikely to benefit from further fluid therapy, whereas those with >12% are more likely to benefit from fluid resuscitation [36]. Compared to CVP, which requires a central venous catheter, PPV requires a minimally invasive or noninvasive monitor, commonly an arterial catheter. It is important to note that PPV does not indicate a patient's fluid status or preload; rather, it is only an indicator of the patient's position on the Frank-Starling curve [37].
PPV has been used to guide GDFT. For example, Malbouisson et al. showed that PPV-guided GDFT reduced postoperative complications such as respiratory, renal, and hepatic dysfunctions, and hospital length of
DYNAMIC PARAMETERS FOR GOAL-DIRECTED FLUID THERAPY
The criterion standard of CO measurement remains using intermittent thermodilution with a pulmonary artery catheter (PAC). However, this measurement requires a PAC, making it impractical in many perioperative settings. As such, there are many CO surrogates and parameters developed to measure a patient's fluid status and SV as defined by "static" and "dynamic." These parameters are summarized in Table 1.
Although CVP is traditionally used as a static parameter to assess fluid responsiveness, multiple studies showed that it is unreliable [28]. In contrast, dynamic parameters were shown to estimate fluid responsiveness and status with reasonable accuracy [29][30][31][32].
DYNAMIC PARAMETERS AND THE FRANK-STARLING CURVE
These dynamic parameters depend on the Frank-Starling relationship. Under the relationship, the left ventricle (LV) SV changes due to intrathoracic pressure. This is because increased intrathoracic pressure compresses the right ventricle (RV), which decreases RV preload. If a patient's fluid status is at the Frank-Starling curve plateau, respiratory pressure will have little effect on the RV preload. Thus, the RV preload respiratory variations will be low, and there will be no effect on the LV SV [33]. Because the LV SV and arterial compliance correlate with the arterial pressure, thus arterial pressure parameters, such as pulse pressure (PP), correlate with the LV SV variations.
As shown in Figure 2a and b, the arterial wave area under the curve (AUC) is the LV SV, whereas the wave peak is the systolic blood pressure (SBP) and the wave trough is the diastolic blood pressure (DBP). PP equals SBP -DBP. Both LV SV and PP vary during the respiration cycle (Figure 2d). If a patient is at the Frank-Starling curve plateau, both the arterial pressure AUC and the PP will have minimal variations during the respiration cycle. Figure 2c shows the pulse oximetry plethysmography which also correlates with the arterial pressure waves and thus also oscillates during the respiratory cycle. In the section below we will discuss the four commonly used fluid responsiveness dynamic parameters for GDFT guidance: pulse pressure variation (PPV), pleth variability index (PVi), stroke volume variation (SVV), and aortic blood flow peak velocity variation (ΔVpeak). Figure 3 shows the dynamic parameter equations, which are crucial to understand how to interpret each parameter accurately, and for providers to judge the appropriateness to use each parameter in different physiological states or surgeries. For example, PVi may not be arterial pulsation (AC) relative to total amount of light absorbed (DC). PVi is then calculated from the PI (Figure 4a-c) [48].
In a single-center RCT, Cesur et al. showed that in 70 ASA I and II patients, PVi-guided GDFT decreased the amount of fluids administered and the return of bowel function time [47]. Additionally, Patrice et al. demonstrated that PVi-guided GDFT decreases lactate level and fluid administration during major abdominal surgery [48]. Likewise, a study by Yu et al. found Pvi-guided GDFT decreases blood lactate level in the first hour of surgery and reduces fluid administration. However, the study found no changes in lactate level beyond the first hour, possibly due to its small sample size of 30 patients [49].
Currently, most GDFT studies using PVi are in abdominal surgeries. While there are great promises for this parameter, more studies must be done on other types of surgery to verify its use. Moreover, there is no definitive PVi cutoff in predicting fluid responsiveness, but most RCTs decided that it is around 13% to 14% [48,50]. Additionally, because PI is measured by pulse oximetry, nail stay, in high-risk patients undergoing open surgeries [38]. However, there have been small RCTs with negative clinical results on PPV-guided GDFT. For instance, Suzuki et al. found that PPV-guided GDFT did not significantly affect renal, hemodynamic, and metabolic variables in patients after they underwent cardiac surgery [39].
Moreover, PPV faces some limitations. To ensure accurate PPV measurement, the patient must be mechanically ventilated, the chest must be closed (an open chest will affect the interaction between the pericardium and mechanical ventilation), be in sinus rhythm, and intra-abdominal pressure must be within the normal range [33]. Other parameters that may affect PPV readings include extreme bradycardia or high respiratory rate, low tidal-volume ventilation (must be between 6 to 8 ml/kg), high positive end-expiratory pressure (PEEP) (PEEP should be between 0 and 5 cm H 2 O), low arterial compliance (high dose vasopressors, severe atherosclerosis), and RV or LV failure [32,33,40,41]. Also, if respiratory system compliance is ≤30 mL/cm H 2 O, such as in patients with ARDS, PPV will become less accurate [41,42].
PLETH VARIABILITY INDEX (PVi) IN GOAL-DIRECTED FLUID THERAPY
PVi is calculated from plethysmographic waveform amplitudes and PP derived from pulse oximetry or other devices. It was commercialized in 2007 [43]. This dynamic parameter has been shown to predict fluid responsiveness as accurate as Stroke Volume Variation [44], esophageal doppler [45], and PPV [46]. PVi also depends on the increased intrathoracic pressure interaction with the RV preload and the LV SV upon mechanical ventilation, which causes variation in the plethysmographic waveform amplitudes and area under the curve ( Figure 2c). PVi is calculated by measuring perfusion index (PI) alterations during the respiratory cycle, which is described as the percentage of light absorbed as a result of sponsive, it will be vice versa. Marc et al. first described this parameter in 2001, where he reported that ΔVpeak of 12% has a sensitivity of 100% and a specificity of 89% to distinguish fluid responders from non-responders in adults [60].
ΔVpeak can be used to determine a patient's fluid responsiveness in situations when PPV, SVV, or PVi cannot be measured accurately. This is especially true in pediatric and neonatal patients since they are ventilated by small tidal volumes, have higher arterial vascular compliance, and have higher chest wall and lung compliance. Therefore, in these patients, changes in intrathoracic pressure during mechanical ventilation may not cause the same circulatory changes as with adults. For example, Gan et al. demonstrated in a systemic review that ΔVpeak is the only dynamic or static variable that can predict fluid responsiveness in pediatric patients. PPV, SVV, and PVi did not predict fluid responsiveness in children [61].
Currently, there are no optimal cutoff points for ΔVpeak to assess fluid responsiveness. Furthermore, vasoactive drug effects on this parameter in the pediatric population are largely unknown [62]. However, this newer dynamic parameter holds many promises in pediatric resuscitation. A multicenter RCT is currently investigating ΔVpeak-guided GDFT on postoperative outcomes in pediatric patients undergoing elective or urgent major noncardiac surgery [63]. It will be interesting to see what this RCT shows and how ΔVpeak can further enhance pediatric surgery fluid optimization and postoperative outcomes.
GOAL-DIRECTED FLUID THERAPY MONITORING SYSTEMS
Currently, there are many methods to monitor fluid responsiveness dynamic parameters in the perioperative setting [51]. Traditionally, the Swan-Ganz catheter has been used to measure CO, a fluid responsiveness static parameter, despite multiple studies showing Swan-Ganz catheters do not improve clinical outcomes [64]. Moreover, it is invasive and can lead to numerous complications such as thrombosis, pulmonary artery rupture, infection, and arrhythmia on insertion [51]. Therefore, other minimal or noninvasive hemodynamic monitoring systems have been developed [65,66]. In the section below, we will focus on common and popular monitoring systems which have been verified and used preoperatively for GDFT. A summary of the monitoring systems is shown in Table 2.
Flotrac™ for Goal-Directed Fluid Therapy
Flotrac™ (Edwards Lifesciences, Irvine, CA, USA) was first introduced in 2005 to be used with the EV1000 monitor or Vigileo monitor [67]. It uses a blood flow sen-coloring, skin coloring, and altered physiological states (such as methemoglobin) may change infrared light absorption leading to inaccurate calculations. Finally, because PVi is based on the same principle as PPV, both suffer the same limitations.
STROKE VOLUME VARIATION (SVV) IN GOAL-DIRECTED FLUID THERAPY
SVV is calculated from the difference between the maximum and minimum SV over the respiratory cycle and is caused by changes in RV preload due to alterations in intrathoracic pressure (Figure 2b). Patients with a SVV of <10% are unlikely to be fluid responsive, whereas those with SVV >15% are likely to benefit from fluid resuscitation [51].
SVV-guided GDFT has been shown in multiple single-center RCTs that it is an effective way to decrease postoperative complications in bowel, orthopedics, and neurosurgery [52][53][54][55][56]. For instance, Gottin et al. compared three methods of resuscitation in patients undergoing pancreatic surgery: liberal, restrictive, and SVV-guided GDFT. This study demonstrated that SVV-guided GDFT and restrictive fluid resuscitation decreased major surgical complications such as postoperative fistula, abdominal collection, and hemorrhage compared to liberal fluid resuscitation [57]. However, while most studies showed that SVV-guided GDFT improved postoperative outcomes, some studies showed the contrary. For example, Iwasaki et al. showed that SVV-guided GDFT hepato-biliary-pancreatic surgery patients had greater amount of fluids administered, and lower PaO 2 /FiO 2 ratio on postoperative day one [58].
Interestingly, Wang et al. showed that in elective retroperitoneal tumor resections, patients who were resuscitated at SVV 9% has increased serum syndecan-1, interleukin-6, and tumor necrosis factor-α, as well as higher incidence of fever and blood transfusion when compared to patients resuscitated at SVV 14%. This suggests that increased fluid administration may enhance perioperative glycocalyx shedding leading to significant inflammatory responses [59].
Because SSV is based on the same principle as PPV, it also faces the same restriction as discussed above [41].
AORTIC BLOOD FLOW PEAK VELOCITY VARIATION (ΔVpeak) IN GOAL-DIRECTED FLUID THERAPY
Because LV SV changes during different phases of mechanical ventilation, the aortic blood flow peak velocity will also vary. If the patient is at the plateau of the Frank-Starling relationship and thus fluid nonresponsive, such variations will be minimal. If the patient is fluid re-
LidCO™ for Goal-Directed Fluid Therapy
LiDCO™plus (Masimo, Irvine, CA, US) must be calibrated by lithium dilution before being used. This calibration method boluses 0.5 to 1.0 ml of lithium chloride (0.15 mmol ml -1 ) through a central or peripheral venous line. Then, blood is aspirated through an arterial catheter at a constant rate of 4 ml min −1 with a disposable electrode selective for lithium. The change in electrode voltage is converted to plasma lithium concentration, and a lithium concentration vs time curve is plotted to calculate plasma flow. Blood flow is calculated with plasma flow divided by 1 -packed cell volume [51].
After calibration, it uses a propriety PulseCO™ algorithm, a pulse power algorithm, to calculate the CO. The algorithm uses the conservation of mass, rather than pulse contour, and assumes the net power change in a heartbeat between the input of a mass (SV) of blood minus the blood mass lost to the periphery during the beat has a linear relationship with net flow [75]. By doing so, LiDCO™ can also calculate the SV, SVR, SVV, oxygen delivery, and PPV.
LiDCO™plus has been validated and compared against other hemodynamic monitoring systems. For example, Mora et al. showed that LiDCO™plus is comparable to PAC thermodilution in patients with impaired LV SV after cardiac surgery [76]. The newer models of LiDCO™, such as the LiDCO™rapid and LiDCO™ LXi, are noninvasive and require no calibration. Instead, they use a two-finger cuff applied to the patient's hand to measure the parameters. These newer models still use the same algorithm and have shown benefits in multiple RCTs [77,78]. More importantly, the OPTIMIZE trial used LiDCO™rapid-guided GDFT, which showed postoperative clinical benefits.
However, like Flotrac™, some studies have questioned LiDCO™rapid's accuracy. For example, Asmaoto et al. examined Flotrac™ and LiDCO™rapid across a broad range of cardiac indexes (CI). Both monitors tended to underestimate CIs when the PAC measured CIs were relatively high [79]. LiDCO™rapid is a convenient and noninvasive option to guide fluid therapy, though there are unanswered questions about its utility in different physiological states. More studies on different types of surgery may help address some of these issues.
Masimo Radical 7 Pulse CO-Oximeter™ for Goal-Directed Fluid Therapy
Masimo Radical 7 Monitor (Masimo, Irvine, CA, US) can measure the PVi using a Masimo pulse oximeter, which can be attached to the ear, digits, or forehead [80]. This monitor does not need to be calibrated and is noninvasive. It is also the only commercially available monitor to measure PVi. sor that is attached to an arterial catheter. Hemodynamic parameters such as CO, SV, SVV, and systemic vascular resistance (SVR) are calculated every 20 seconds with a proprietary algorithm using pulse contour analysis. SV is derived from the equation: K × Pulsatility (standard deviation of the arterial pressure wave over 20 seconds), where K is a constant. K is derived from the sex, age, height and weight according to methods described by Langewouters et al. [68] and the skewness and kurtosis of the individual arterial waveform. K is recalculated automatically every minute based on the patient's specific vascular compliance. Then, the CO is calculated by multiplying SV with pulse rate [69].
Flotrac™ does not need to be calibrated for use [67]. However, because the hemodynamic parameter calculations are heavily dependent on the constant, K, the software needs to recognize changes in vasodilation or vasoplegia to calculate an accurate CO.
Some studies have found that Flotrac™ is comparable to other hemodynamic monitoring systems. For example, Mclean et al. found that Flotrac/Vigileo™ is comparable to transthoracic doppler echocardiography when measuring CO [70]. Also, Cannesson et al. showed that in patients undergoing coronary artery bypass grafting, the CO estimated by Flotrac™ is comparable with PAC thermodilution [71]. However, many studies questioned its accuracy, even when it was updated to its newest software version: the fourth generation. While Flotrac™ is reasonably accurate in stable patients, in patients with low SVR, Flotrac™ does not provide accurate CO measurement when compared to invasive CO monitoring [67]. For instance, Murata et al. showed that in end-stage liver failure patients, fourth-generation Flotrac™ and PAC readings had poor agreement with each other during liver transplantation [72]. Moreover, Lin et al. found that fourth-generation Flotrac™ had a 61.82% and 51.80% error in estimating the CO before and after cardiopulmonary bypass, respectively. Therefore, found it unsuitable to be used during or after cardiopulmonary bypass [73].
Nonetheless, Flotrac™ remains popular and has been used in multiple RCTs to guide GDFT with good clinical outcomes [53,74]. For example, Yu et al. demonstrated that in major gynecologic oncology surgery patients, Flotrac™-guided GDFT decreased postoperative complications risk significantly (OR = 0.572), especially in surgical site infection risk (OR = 0.127). Furthermore, GDFT patients received significantly less fluid infusion than the control group [74].
While Flotrac™ is easy and convenient to use, many questions remain about its use in high-risk patients, especially in patients suffering from vasoplegia. In the future, newer software updates from the manufacturer may resolve such issues and allow it to better estimate hemodynamic parameters.
Multiple studies used Masimo pulse oximetry for GDFT to achieve improved postoperative outcomes. For instance, Saugel et al. showed that PVi-guided GDFT significantly reduced ileus and decreased fluid administered in colorectal surgical patients [81]. Also, Collange et al. found that a PVi >9% before renal artery unclamping is an individual risk for delayed graft function in renal transplant patients [82]. However, the Optimization using the Pleth Variability Index (OPVI) trial, in which 447 ASA I and ASA II patients were randomized, showed that forehead-measured PVi-guided GDFT did not shorten hospital stay or reduce postoperative complications such as AKI. This study, however, attributed these results to patients being of lower acuity [27].
Compared to other dynamic parameters, PVi is equivalent. For example, Coeckelenbergh et al. showed that in low- to intermediate-risk abdominal surgeries, patients who underwent PVi-guided GDFT or PPV-guided GDFT had no significant differences in hospital length of stay, postoperative outcomes, and amount of fluids administered [83]. In another study, Pişkin et al. showed that PVi and the inferior vena cava distensibility index could predict fluid responsiveness comparably in intensive care patients [84].
However, PVi may be inaccurate in specific patient populations. Konur et al. found that in liver transplant patients, PVi cannot distinguish fluid responders from non-responders. PVi values were similar at baseline between responders and non-responders in the dissection and anhepatic phases. Only SVV measured by the PiCCO™ monitoring system was reliable [85]. Likewise, Le Guen et al. showed that PVi does not respond to fluid challenges and had poor agreement with esophageal Doppler measurements in renal transplant patients [86].
Because PVi measurements require adequate perfusion, a change in blood flow may affect its reading. Currently, the Masimo Radical 7 Pulse CO-Oximeter™ cannot determine if a decrease in blood flow is due to a depressed cardiovascular system or impaired blood flow to the tissue. Additionally, any physiological or pathological influences on peripheral vascular flow, such as in patients with severe Raynaud's phenomenon or with peripheral vascular disease, may affect PVi readings [87]. Thus, PVi should be used and interpreted cautiously.
ClearSight™ for Goal-Directed Fluid Therapy
ClearSight™ (Edwards Lifesciences, Irvine, CA, USA) is a noninvasive monitor which uses a finger cuff to estimate CO, SV, SVV, SVR, and mean arterial pressure (MAP). It was introduced in 2007 as Nexfin™. The finger cuff inflates and deflates to keep the diameter of the finger arteries constant throughout the cardiac cycle. From the finger pressure waveform, it calculates the brachial pressure waveform using an algorithm. Then, it uses the pulse contour method to estimate CO [88].
Conflicting studies have shown that ClearSight™ may or may not be comparable to other invasive or minimally invasive hemodynamic monitoring systems. For example, Wang et al. showed that ClearSight™ is comparable to PAC thermodilution in estimating CO in cardiac surgery patients [89]. However, while Tanioku et al. found that ClearSight™ was interchangeable with arterial catheters in measuring MAP, it had significant biases when measuring SBP and DBP in cardiovascular surgeries [90]. Moreover, Kanazawa et al. demonstrated that ClearSight™ was inaccurate in estimating CI in patients with reduced cardiac ejection fraction (<55%) when compared to PAC thermodilution, although it was accurate in measuring MAP [91].
Some studies also found that ClearSight™-guided GDFT has no postoperative benefits. Stens et al. showed that using ClearSight™-calculated CI, PPV, and MAP for GDFT guidance had no impact on postoperative complications [92]. Furthermore, Fischer et al. found that in low-risk colorectal patients, ClearSight™-guided GDFT does not affect hospital length of stay, postoperative mortality, or the total number of complications [93]. Similarly, Davies et al. found that ClearSight™-guided GDFT offers no postoperative complication benefits in high-risk patients undergoing hip fracture repair [94]. Given these questionable study results, this monitoring system needs to be studied more in multicenter RCTs to verify its benefit in GDFT.
PiCCO™ for Goal-Directed Fluid Therapy
PiCCO™ (Pulsion Medical System, Munich, Germany) uses the transpulmonary thermodilution technique for CO estimation. To calibrate the monitor, it requires a bolus of cold saline in a central venous catheter; a thermo-sensor-tipped arterial catheter then senses the decrease in blood temperature. The difference between this system and PAC thermodilution is that the cold solution is bolused into a central vein, not into the right atrium, and the temperature change is therefore detected at a peripheral artery rather than the pulmonary artery. Uniquely, this monitoring system can also measure global end-diastolic volume [95], global ejection fraction, intrathoracic blood volume, and extravascular lung water [96].
Goedje et al. found that PiCCO™ had a comparable and robust correlation with PAC thermodilution, with a mean bias of 0.07 liter min-1 (2 SD 1.4 liter min-1) in post-cardiac surgery patients. These patients had various CO (ranging from 3.0 to 11.8 liter min-1) and SVR (ranging from 252 to 2434 dyn s cm-5) [97]. Moreover, it has been shown to improve postoperative outcomes when used to guide GDFT. For instance, Jing et al. demonstrated that PiCCO™-guided GDFT decreased volume infusion, hospital stay length, and time needed for ambulation [98]. However, in emergency surgeries, PiCCO™-guided GDFT increased major complications (95% vs 40% in the control group) and did not decrease in-hospital mortality [99].
While PiCCO™ offers a less invasive approach than PAC, it still requires a central venous catheter and an arterial catheter, which may be impractical in many surgeries, and it can only be used in high-risk patients. Therefore, this method continues to have many limitations.
Esophageal Doppler for Goal-Directed Fluid Therapy
Esophageal Doppler (ED) is an ultrasound-based technique introduced in the 1970s, which allows aortic blood flow measurements in the descending thoracic aorta to calculate CO and SV [100]. It is the only minimally invasive CO monitor evaluated and endorsed by the United States Agency for Healthcare Research and Quality and the United Kingdom Centre for Evidence-based Purchasing [101].
The ED is a 6-mm probe positioned at the distal esophagus to measure blood flow velocity in the descending thoracic aorta. The blood flow velocity waveform can also be used to measure flow time, which is the time from the start of the waveform upstroke to its return to baseline. When corrected for HR, flow time is found to be a sensitive measure of LV filling, thus an accurate RV preload measurement [102].
ED is comparable to PAC thermodilution in measuring intensive care patients' CO [103]. Studies also showed that ED produced similar clinical results as Flotrac™-guided GDFT [104] and NICOM-guided GDFT [105]. Furthermore, a systematic review by ECRI Evidence-based Practice Center, an independent federally certified patient safety organization by the US Department of Health and Human Services, has found that there is high-quality evidence that ED reduces postoperative complications and hospital length of stay [106]. Due to its accuracy and longevity, ED has become the noninvasive fluid status monitor of choice. Both the RELIEF and FEDORA trials used ED-guided GDFT.
Despite much evidence supporting ED's use in GDFT, some limitations exist. First, it requires some skill; thus, the user must be trained. Second, it cannot be used in esophageal surgeries or in patients with esophageal anatomical anomalies or recent esophageal surgeries. Third, liver cirrhosis patients may have esophageal varices, and ED can induce bleeding. Finally, the Doppler signal can be poor due to aortic or user issues.
Echocardiogram (ECHO) for Goal-Directed Fluid Therapy
Transesophageal (TEE) or transthoracic echocardiogram (TTE) are frequently used to guide fluid therapy in high-risk cardiothoracic and liver transplant surgeries. They are ultrasound imaging techniques that can visualize the heart or the aorta directly [107]. This direct visualization allows multiple hemodynamic parameters to be calculated or measured in real time. For example, the LV outflow tract velocity time integral (VTI) is a measurement by ECHO of how far blood travels during the flow period. It is an LV systolic function surrogate, which can be used to calculate the SV and then the CO, by multiplying with HR [108]. Moreover, TEE can estimate the preload using the LV end-diastolic area [109]. These visualized changes in preload can detect even minor volume deficits. TEE and TTE can also detect inferior vena cava diameter variations (ΔIVCD) during tidal respiration in mechanically ventilated patients, which correlates with SVV [110]. ΔIVCD can also be used to calculate the caval index (Figure 3); a caval index <50% is strongly associated with a CVP of <8 mmHg [111].
While TEE and TTE are well-established monitors for intraoperative fluid resuscitation and have shown benefits in intensive care patients, studies on postoperative outcomes in low- and moderate-risk surgeries are lacking. Furthermore, TEE has limitations. It cannot be used in patients with previous esophageal surgeries and cannot be used in esophageal surgeries. Probe insertion can lead to bleeding in cirrhotic patients with esophageal varices. Also, both TTE and TEE probes must be positioned by a trained professional to interpret the images. Finally, the TEE probe cannot be used in awake patients.
Currently, TEE-guided GDFT is being investigated in a multicenter RCT for pediatric surgery postoperative outcomes [112]. It will be interesting to see how it will impact pediatric resuscitation.
CONCLUSION
In the last 20 years, there have been many advances in the noninvasive detection of fluid status perioperatively as an alternative to using PAC thermodilution. Each monitoring system uses different physiological principles and offers various pros and cons. Furthermore, these systems use different algorithms and methods to calculate dynamic parameters. Because of these different calculations, different monitors can display different values for the same dynamic parameter. As such, a clinician should consider the type of surgery being done and the current evidence for GDFT guided by that monitoring system and dynamic parameter. Moreover, a clinician should understand and be familiar with how dynamic parameters are calculated and how they are derived from the monitoring systems.
Dynamic parameters are all based on the physiological principles of mechanical ventilation, intra-thoracic pressure, and LV SV changes. Thus, they have the same limitations. To ensure accurate measurement, the patient must be: 1) mechanically ventilated, 2) with a closed chest, 3) in sinus rhythm, and 4) with intra-abdominal pressure in the normal range. In addition, low arterial compliance and respiratory system compliance ≤30 mL/cm H2O can decrease the accuracy of parameters [42,43]. However, these factors may be compensated by the monitoring systems' algorithms.
The selection of dynamic parameters for GDFT guidance is also affected by the surgery itself. For example, PVi has been shown to be unreliable in predicting fluid responsiveness in cardiac surgery patients when compared to PPV and SVV [113]. Even if the dynamic parameter is selected carefully, it has been demonstrated that few patients fit all the criteria for accurate dynamic parameter measurements in both perioperative and intensive care settings [40,114]. For example, Maguire et al. found that only 39% of surgical patients met the criteria for accurate noninvasive plethysmographic waveform variation monitoring [40]. This suggests that many providers may not be aware of all the requirements and surgical nuances for using dynamic parameters, leading to false readings and wrong interpretations.
Currently, neither the American Society of Anesthesiologists nor other international medical societies endorse the use of any hemodynamic monitoring systems or dynamic parameters to guide GDFT. Furthermore, there are no GDFT clinical guidelines due to ongoing debates about its uses and benefits. However, there is strong evidence that esophageal Doppler, as endorsed by the United States Agency for Healthcare Research and Quality and the United Kingdom Centre for Evidence-based Purchasing, can improve postoperative outcomes [101].
OUTLOOK
While GDFT has become more prevalent than in previous decades, adoption of this fluid regimen continues to be slow, even though multiple studies have reported that GDFT offers postoperative benefits and is cost-effective. In a survey of anesthesiologists, only 35% in the United States reported they "always" use GDFT. Worse yet, only 15% and 10% of anesthesiologists in the United Kingdom and Australia/New Zealand, respectively, reported they "always" use GDFT. The most common reason was the lack of availability of monitoring tools, followed by a lack of experience with the instruments [115].
With more RCTs, hopefully, there will be a change in attitude towards using GDFT and purchasing hemodynamic monitoring systems. It will be exciting to follow the current global multicenter RCT, the OPTIMISE II trial, which launched in January 2017. With over 30 participating centers in multiple countries, it has enrolled 2,502 patients undergoing major elective gastrointestinal surgery. This study will be the largest GDFT trial to date and will be using ClearSight™ or Flotrac™ as hemodynamic monitors, with SVV as the dynamic parameter. Primary outcomes will be 180-day mortality, 30-day AKI rate, and acute cardiac event within 24 hours and 30 days [116].
As technology advances, perioperative hemodynamic monitors have also become more noninvasive. One exciting technology is the wireless and wearable Doppler, which may be comparable to TEE. For example, Kenny et al. showed that a wearable wireless carotid Doppler at the common carotid artery is equivalent to TEE during coronary bypass surgeries; it can accurately detect the common carotid artery VTI and significant changes in SV and aortic VTI after a straight leg raise test [117,118]. Wang et al. also described a wearable and flexible ultrasound Doppler device, similar to an electrocardiogram lead, which can monitor real-time blood flow velocities in human arteries [119]. These new technologies and devices can circumvent the limitations of minimally invasive or invasive hemodynamic monitoring systems.
Another exciting development is the increasing use of artificial intelligence, machine learning, and big data to solve hemodynamic problems. Hatib et al. described the hypotension prediction index (HPI), which was developed by machine learning from the arterial waveforms of 13,000 past hypotensive events and 12,000 non-hypotensive events. HPI can predict hypotension 15 minutes before the actual event occurs [120]. In the future, a new hemodynamic algorithm may go beyond PPV or SVV to allow patient fluid resuscitation before hypotension happens.
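The dynamic parameters discussed throughout this review are, at bottom, simple statistics of the arterial or plethysmographic waveform. As a rough, hypothetical illustration (not any vendor's actual algorithm; the function name and sample values here are our own), PPV over one respiratory cycle can be computed from beat-to-beat pulse pressures:

```python
def pulse_pressure_variation(systolic, diastolic):
    """Percent PPV over one respiratory cycle from per-beat pressures (mmHg):
    PPV = 100 * (PPmax - PPmin) / mean(PPmax, PPmin)."""
    pp = [s - d for s, d in zip(systolic, diastolic)]
    pp_max, pp_min = max(pp), min(pp)
    return 100.0 * (pp_max - pp_min) / ((pp_max + pp_min) / 2.0)

# Hypothetical beats across one mechanical breath
sys_p = [118, 122, 126, 121, 116]
dia_p = [70, 71, 72, 71, 70]
print(round(pulse_pressure_variation(sys_p, dia_p), 1))  # 16.0 (percent)
```

Commercial monitors average such values over several respiratory cycles and apply artifact rejection; a PPV above roughly 13% is commonly cited as suggesting fluid responsiveness.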
Optimal filtering of multidimensional random fields generated by autoregressions with multiple roots of characteristic equations
The use of mathematical models allows one to compare theoretical expressions and simulation results. Autoregressive random fields can be used for the description of images; however, such models have pronounced anisotropy, and the simulated images are too sharp. This drawback can be eliminated through the use of models with multiple roots of characteristic equations. The analysis shows that using models with multiple roots in filtering images with smoothly varying brightness provides smaller errors than the use of autoregressive random fields. However, the dependences of filtering efficiency on various model parameters and signal-to-noise ratios for multidimensional autoregressive random fields have hardly been studied. The article discusses the solution of the problem of optimal filtering of images based on models with multiple roots of characteristic equations. Theoretical dependences of the relative variance of the filtering error on the dimension of random fields are obtained. Furthermore, some results of filtering real images with such a model are presented in comparison with an autoregressive model.
Introduction
Currently there are many different mathematical models of random fields (RF) using for describing images [1][2][3][4][5]. The popularity of this approach is due to a number of advantages that mathematical models provide. First of all, it is the generation of sufficiently large volumes of material for research, and also mathematical models act as a tool for developing and testing various algorithms. The simplest autoregressive models generate RF with pronounced anisotropy and such models are suitable for describing only a narrow class of real multidimensional images. Doubly stochastic models [6,7] provide a change in the probabilistic properties of the generated RF at each point, but on average the properties of such a model depend on the model chosen for the main RF simulation. Therefore, to obtain RFs that are close to isotropic fragments of multispectral images, it is necessary to use autoregression with multiple roots of characteristic equations [8][9][10].
However, one of the main tasks of signal processing is noise reduction, or filtering. It is often considered that the observed signal is an additive mixture of the information (useful) signal and white noise. In this paper, we analyze the efficiency of spatial Wiener filtering of multidimensional autoregressive RFs with multiple roots of characteristic equations against an additive white Gaussian noise background. The investigation focuses on models of different multiplicities that provide equivalent correlation properties. The developed filtering algorithms can become a very useful tool in solving various applied problems of image processing, among which an important place is occupied by the detection and localization of various objects in an image [11,12]. Furthermore, filtering and segmentation tasks are of interest [13,14].
Model of a multidimensional random field and its linear filter
The following equations are commonly used to describe a multidimensional autoregressive Gaussian RF. For such a model it is easy to find the transfer function of a linear filter: applying the Z-transformation to model (1) yields a spatial linear filter described by a transfer function of the form (2). It should be noted that the transfer function (2) depends on the parameters of the signal model, as does the energy spectrum of such a RF; the relationship between the transfer function (2) and the energy spectrum of the RF is given by expression (3). The analysis of the probabilistic properties of the RF is simplified if the transfer function of the multidimensional filter can be factorized, with multiple roots ρ_k of the characteristic equations having multiplicities m_1, m_2, ..., m_N. The transfer function of such a RF will be factorizable.
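The factorized form suggests a simple way to synthesize such fields: cascading m identical first-order autoregressive sections gives a characteristic equation with the root ρ of multiplicity m. A minimal one-dimensional sketch (illustrative only; the paper's models are multidimensional, and the function name here is our own):

```python
import numpy as np

def ar_multiple_root_1d(n, rho, m, rng):
    """Pass white noise through m cascaded first-order AR sections
    x[k] = rho * x[k-1] + u[k]; the cascade's characteristic equation
    then has the single root rho with multiplicity m."""
    x = rng.standard_normal(n)
    for _ in range(m):              # each pass adds one more root rho
        for k in range(1, n):
            x[k] += rho * x[k - 1]
    return x

rng = np.random.default_rng(0)
field = ar_multiple_root_1d(2000, rho=0.95, m=2, rng=rng)
print(field.shape)  # (2000,)
```

A separable multidimensional field can be obtained by applying the same cascade along each axis of a noise array; higher multiplicity yields smoother, less "sharp" realizations, consistent with the motivation above.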
Filtering efficiency of multidimensional random fields with multiple roots of characteristic equations
One of the difficult tasks of filtering image sequences on multidimensional grids is the analysis of the effectiveness of such filtering. In this case, the necessary criterion for analysis is the dependence of the variance of the filtering error on various model parameters and noise. Formally, spatial covariance matrices of estimation errors can be calculated using the recurrence relations for the Kalman filter [6,7]. However, if it is necessary to compare the algorithms for different values of the parameters of the stochastic equations and noise levels, the determination of even steady-state values of the elements of the covariance matrices becomes a very laborious task.
Consider a relatively simple way to determine the effectiveness of estimating homogeneous fields on infinite grids based on the basic principles of Wiener's filter theory [8]. Using the observations, the optimal estimate can be written as a system of linear equations, which can be considered as a spatial analogue of the Wiener-Hopf equations.
Using the multidimensional z-transformation, it is possible to obtain the relative variance of the filtering error. Figure 1 shows the dependence of the relative variance of the filtering error on the correlation interval k0 for models of different dimensions and orders with q=0.1. The correlation interval is less than 100 in figure 1a and less than 500 in figure 1b. This interval determines the equivalence of models of different multiplicities. On the graphs, the dimension is indicated as N, the multiplicity as m.
The analysis of the obtained dependences shows that for sufficiently small correlation intervals (k0 < 10) the variances of the filtering errors of autoregressive RFs of the 1st and 2nd orders are rather close. An increase in the dimension and a further increase in the correlation interval lead to a separation of the curves. At the same time, the smallest values of the relative variances of the filtering errors are obtained for the cases N=3, m=(2,2,2) and N=4, m=(2,2,2,2). This is because when m=1 the autoregressions along the axes are quite sharp and their filtering is a more difficult task.
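The steady-state error variances behind curves like these can be checked in the simplest scalar case. The following is a hypothetical sketch (not the paper's multidimensional derivation): for x[k] = ρ·x[k-1] + ξ[k] observed in additive white noise with noise-to-signal variance ratio q, iterating the scalar Riccati recursion of the Kalman filter gives the steady-state relative filtering error:

```python
def relative_filter_error(rho, q):
    """Steady-state relative error variance for the scalar model
    x[k] = rho*x[k-1] + xi[k] observed as z[k] = x[k] + n[k],
    where q = sigma_n^2 / sigma_x^2. Iterates the scalar Riccati
    recursion (predict/update) of the Kalman filter to convergence."""
    sx2 = 1.0                      # signal variance (normalized)
    sxi2 = (1.0 - rho ** 2) * sx2  # driving-noise variance
    sn2 = q * sx2                  # observation-noise variance
    p = sx2
    for _ in range(10_000):
        m_pred = rho ** 2 * p + sxi2         # prediction variance
        p = m_pred * sn2 / (m_pred + sn2)    # update variance
    return p / sx2                 # relative error variance

print(round(relative_filter_error(rho=0.99, q=0.1), 4))  # ~0.035
```

As in the figures above, a longer correlation interval (ρ closer to 1) and a smaller q both drive the relative error down.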
Using expression (7), one can obtain the corresponding equations for models of various dimensions, presented in Table 1. An analysis of the curves in the graph shows that in the case of a large dimension of the RF, for example N=3, the variance of the filtering error is rather small. Consequently, effective filtering is obtained both for small signal-to-noise ratios and for large ones. At the same time, an increase in the multiplicity of the models leads to a decrease in the relative variance of the filtering error.
N A Andriyanov, K K Vasiliev, V International Conference on "Information Technology and Nanotechnology" (ITNT-2019)
Figure 3 shows the dependences of the relative variance of the filtering errors on the dimension of the RF for the cases k0=100, q=0.01 with multiplicities m=1 and m=2. The analysis of the curves presented in Figure 3 shows that increasing the dimension of the RF leads to a significant increase in filtering efficiency, which is associated with a large number of correlations in the multidimensional model. At the same time, large dimensions provide a variance of filtering errors tending to 0 (~10^-15) already with multiplicities m=2 along each axis, whereas if m=1 the variance of the filtering error is several orders of magnitude greater.
Real image processing
The filtering algorithm based on a multiple-root model was tested on a multidimensional satellite image compared to an algorithm based on autoregressive models. Figure 4 shows the filtering results for one of the images. Figure 4a shows the source image, figure 4b shows the noisy image, figure 4c shows the filtering results using autoregressive model of the first order and figure 4d shows the filtering results using autoregressions with multiple roots model.
The analysis of the presented pictures shows that the model with multiple roots provides better results in terms of the variance of the filtering error. For example, for the image in figure 4, the relative error variance for figure 4c is 0.782 and for figure 4d is 0.358. The signal-to-noise ratio is 0.5.
Conclusion
Thus, in this paper, the filtration efficiency of multidimensional RF with multiple roots of characteristic equations is investigated. At the same time, an increase in the dimensions and orders of the models leads to a significant decrease in the relative dispersion of filtering error. Therefore, it is advisable to use less computationally sophisticated mathematical models of RFs that provide fairly small errors. For example, already for the dimension N=3 it is possible to achieve relative error equal 10 -5 for q=0.01 and multiplicities m= (3,3,3). In addition, studies have been conducted on the effectiveness of filtration depending on the dimension of the RF. It should be noted that in the logarithmic axes, these dependencies are close to linear for the dimensions N=1,...,4. Such models are also useful in processing real images having strong correlation properties. | 2019-12-05T09:24:57.959Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "0b2edb87688464cd6ae5153846d0b7844b31d58a",
"oa_license": null,
"oa_url": "https://doi.org/10.18287/1613-0073-2019-2391-72-78",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "2069a17e134732510321c00eabf4a6970c4e0de9",
"s2fieldsofstudy": [
"Mathematics",
"Computer Science",
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Characterization of Fructose-1,6-Bisphosphate Aldolase 1 of Echinococcus multilocularis
Glycolysis is one of the important ways by which Echinococcus multilocularis acquires energy. Fructose-1,6-bisphosphate aldolase (FBA) plays an important role in this process, but it has not yet been fully characterized in E. multilocularis. Genome-wide analysis showed that the Echinococcus species contain four fba genes (FBA1-4), all of which have the class I FBA domain and multiple conserved active sites. EmFBA1 was mainly located in the germinal layer and the posterior of the protoscolex. The enzyme activity of EmFBA1 was 67.42 U/mg, with Km and Vmax of 1.75 mM and 0.5 mmol/min, respectively. EmFBA1 was susceptible only to Fe3+ and not to the other four ions tested (Na+, Ca2+, K+, Mg2+), and its enzyme activity was markedly lost in the presence of 0.5 mM Fe3+. The current study reveals the biochemical characteristics of EmFBA1 and is informative for further investigation of its role in glycolysis in E. multilocularis.
Introduction
Echinococcus multilocularis, the causative agent of alveolar echinococcosis (AE), is widely distributed in the Northern hemisphere [1], and is mainly found in Tibet, Qinghai, Gansu and other high-altitude areas in China [2]. In the life cycle, the adult worm resides in the intestine of canids (mainly foxes) acting as a definitive host and the eggs expelled with the feces contaminate food and water, which may cause rodents and occasionally humans to be infected. Then the eggs develop into the metacestodes in the liver and lung. If foxes prey on these infected rodents, the metacestodes grow into the adults, thus finishing an entire life cycle. In an intermediate host, E. multilocularis displays a tumor-like infiltrative growth. The latent period of AE is long, up to five years, and the approaches for AE treatment are very limited. Clinically, parasites can be removed by surgery with a high risk of secondary infection and serious immune responses due to cyst fracture [3,4]. It was estimated that the mean annual incidence of AE was approximately 0.26 per 100,000 population in Switzerland, with mortality of >90% within 10 to 15 years after diagnosis in patients with or without proper treatment [5].
The energy metabolism of parasites mainly depends on the Embden-Meyerhof-Parnas pathway (EMP) [6,7]. Fructose-1, 6-bisphosphate aldolase (FBA) is a key enzyme that is involved in the first stage of glucose metabolism, where glucose is broken down into pyruvate. FBA is able to catalyze the breakdown of fructose-1, 6-diphosphate (FDP) into one molecule of dihydroxyacetone phosphate (DHAP) and one molecule of glyceraldehyde-3-phosphate (GAP), and this reaction process is reversible. To date, FBA has been described in many parasites, including Toxoplasma gondii [8], Plasmodium knowlesi [9], Schistosoma mansoni and Schistosoma japonicum [10], Trichinella spiralis [11] and E. granulosus [12]. FBA plays an important role in parasite growth, development, metabolism and substance transport [8,13,14]. FBA not only participates in the glycolytic pathway, but also plays an important role in other processes in parasites. For instance, it was possibly involved in the invasion and motility of Plasmodium sporozoites [15,16], and immune evasion during S. japonicum infection [17,18]. There is growing evidence to support that FBA can act as a potential drug therapeutic target for parasitic diseases [19]. However, little is known about E. multilocularis FBA.
In this study, we conducted the genome-wide analysis of the fba genes in the Echinococcus species, and analyzed the secondary structure of E. multilocularis FBA1 (EmFBA1) and its localization by immunofluorescence assay. Finally, we determined the enzyme kinetics of EmFBA1.
Parasites
The larva of E. multilocularis was passaged in Meriones unguiculatus in our lab. The cyst mass was aseptically dissected from infected M. unguiculatus and washed several times in PBS. Then samples were immediately immersed in 4% paraformaldehyde for the preparation of sections or stored at −80 °C for total RNA extraction.
Identification and Phylogenetic Analysis of FBA Genes
The full amino acid sequence of EmFBA1 (EmuJ_000905600) was used to query protein data of Echinococcus species (E. multilocularis and Echinococcus granulosus) in the WormBase ParaSite (https://www.parasite.wormbase.org, accessed on 1 August 2020). For identification of motifs and/or domains, each FBA amino acid sequence was aligned and analyzed using the database of Conserved Domains in NCBI (https://www.ncbi.nlm.nih.gov/Structure/ cdd/docs/cdd_search.html, accessed on 1 August 2020).
Alignment of the FBA amino acid sequences was performed using Clustal W (MEGA 7.0, Mega Limited, Auckland, New Zealand) with the default parameter. Before the construction of a phylogenetic tree, an optimal model was selected using Molecular Evolutionary Genetics Analysis (MEGA 7.0, Mega Limited, Auckland, New Zealand). The phylogenetic tree was created by the maximum likelihood method using the WAG model with 1000 bootstrap replications.
RNA Extraction and cDNA Synthesis
Total RNA was extracted using TRIzol reagent (Invitrogen, Carlsbad, CA, USA) according to the method provided by the manufacturer. Briefly, parasites were ground into powder in liquid nitrogen, followed by homogenization in TRIzol, aqueous and organic phase separation using chloroform, and RNA precipitation using isopropanol. After centrifugation, RNA was washed in 75% alcohol and dissolved in RNase-free water. The concentration and integrity of the extracted RNA were analyzed by Nanodrop 2000 (ThermoFisher Scientific, Waltham, MA, USA).
1 µg of total RNA was used for cDNA synthesis using RevertAid cDNA First-strand Synthesis Kit (ThermoFisher Scientific) in accordance with the instructions.
Polymerase Chain Reaction and Prokaryotic Expression of EmFBA1
Using the previously synthesized cDNA as a template, a pair of specific primers (designed with Primer Premier 5.0) was used to amplify the open reading frame of the emfba1 gene by PCR: FBA-F: 5′-GGATCCATGTCTCGTTTTGTTCCCTAC-3′ and FBA-R: 5′-GTCGACCCTAGTAGGCATGGTTGGCC-3′. PCR was performed using a T100 Thermal Cycler (Bio-Rad, California, USA) with the following steps: 95 °C for 5 min, followed by 35 cycles of 98 °C for 10 s, 55 °C for 30 s, and 72 °C for 1 min, and finally 72 °C for 5 min. PCR products were resolved using 1.2% agarose gel and purified by PCR Purification Kit (TSINGKE).
The purified PCR products were subcloned into pET-28(+) (TaKaRa) and transformed into trans-5α competent cells. The positive plasmid was verified by double digestion and sequencing (TSINGKE), and then transformed into BL21 (DE3) competent cells. The positive clone was induced with IPTG (BBI Life Sciences) at a final concentration of 0.5 mmol/L at 20 °C for 8 h. The collected bacterial cells were treated by ultrasound and centrifuged at 4 °C for 20 min at 10,000× g, and the supernatant was used for purification of EmFBA1.
Preparation of Polyclonal Antibodies against EmFBA1
The recombinant EmFBA1 was purified by Ni Sepharose 6 Fast Flow (GE Healthcare, Stockholm, Sweden) and dialyzed by 10K centrifuge filter units (Merck, Branchburg, NJ, USA). The purified protein was resolved using 10% SDS-PAGE gel and analyzed by ImageJ, and its concentration was determined using BCA Protein Assay Kit (Beyotime, Hangzhou, China).
Polyclonal antibodies were produced by immunizing New Zealand white rabbits with emulsified recombinant EmFBA1. First, purified EmFBA1 was mixed with an equal volume of Freund's complete adjuvant and well emulsified. Then rabbits were subcutaneously injected at multiple locations with 200 µg of emulsified EmFBA1 per rabbit. After 15 days, a second immunization with 200 µg of EmFBA1 emulsified with an equal volume of Freund's incomplete adjuvant was administered to each rabbit. Finally, the last immunization was performed after another 15 days in the same way. Sera were collected and sequentially purified using saturated ammonium sulfate and HiTrap™ protein G (GE Healthcare) according to the manufacturer's instructions.
Immunofluorescence and Western Blotting
Paraffin sections were dewaxed in xylene, followed by antigen retrieval in 0.01 M citrate buffer. After washing, the sections were blocked with 5% BSA (Amresco, Pittsburgh, PA, USA). The slides were then incubated overnight at 4 °C with the purified anti-EmFBA1 antibodies diluted 1:100. After washing, Alexa Fluor 594 goat anti-rabbit antibodies diluted 1:10,000 (Merck) were added and incubated at room temperature for 1 h, followed by overnight incubation with DAPI (Merck). The slides were observed under a fluorescence microscope (Leica, Berlin, Germany).
Western blotting was conducted as previously described [20]. Briefly, 10 µg of crude tapeworm proteins (E. multilocularis, E. granulosus, Taenia hydatigena and Taenia asiatica) stored in our laboratory was resolved on a 10% SDS-PAGE gel and then transferred to a PVDF membrane (Millipore, Burlington, MA, USA). The membrane was sequentially incubated with purified anti-EmFBA1 diluted 1:1000 or anti-actin (Abcam, London, UK) diluted 1:10,000, and then with HRP-linked goat anti-rabbit IgG (SeraCare, Gaithersburg, MD, USA) diluted 1:10,000. After washing, the membrane was developed with an ECL HRP Chemiluminescent Substrate Reagent Kit (Invitrogen, Carlsbad, CA, USA) and exposed to X-ray film (Carestream, Rochester, NY, USA) for visualization. In this experiment, healthy rabbit serum was used as a control.
Determination of Enzyme Kinetics
The enzyme activity was determined using an FBA Activity Detection Kit (Solarbio, Beijing, China) according to a previously reported protocol [18]. Briefly, EmFBA1 (40 µg/mL) was mixed with the following reagents in a 200 µL reaction system: 2 mM fructose-1,6-bisphosphate (Sangon Biotech, Shanghai, China), 400 µM NADH (Sigma, River Edge, NJ, USA), 100 µg/mL bovine serum albumin (BSA), 1 unit of glycerol phosphate dehydrogenase (Sigma) and 1 unit of triose phosphate isomerase (Sigma). At fixed substrate and EmFBA1 concentrations, 50 mM Tris-HCl buffer with pH values from 6.0 to 9.0 and reaction temperatures from 25 to 50 °C were used to screen the optimal pH and temperature, respectively. Absorbance values at a wavelength of 340 nm were recorded using an absorbance microplate reader (Molecular Devices). In these experiments, PBS was used as a control.
Using the same reaction system, the enzyme kinetics were determined under the optimal conditions (pH 7.5, 37 °C), and the experimental data were fitted in Origin 7.5 to the Michaelis-Menten equation [21]:

V = Vmax[S] / (Km + [S])

where Vmax is the maximum velocity, [S] is the substrate concentration, and Km is the Michaelis-Menten constant. Similarly, different metal ions (Ca2+, K+, Na+, Fe3+ and Mg2+) at different final concentrations were added to evaluate their effects on the enzyme activity of EmFBA1.
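For illustration, the parameter-estimation step above can be sketched in pure Python. Note two assumptions: this sketch uses the Lineweaver-Burk double-reciprocal linearization rather than the non-linear fit performed in Origin, and the substrate concentrations and noise-free velocities below are invented for demonstration (only the constants Km = 1.75 mM and Vmax = 0.5 mmol/min come from the Results).

```python
def mm_velocity(s, vmax, km):
    """Michaelis-Menten rate law: V = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

def fit_lineweaver_burk(s_vals, v_vals):
    """Estimate Vmax and Km from the double-reciprocal linearization
    1/V = (Km/Vmax) * (1/[S]) + 1/Vmax, via ordinary least squares."""
    x = [1.0 / s for s in s_vals]
    y = [1.0 / v for v in v_vals]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    vmax = 1.0 / intercept      # intercept is 1/Vmax
    km = slope * vmax           # slope is Km/Vmax
    return vmax, km

# Hypothetical substrate concentrations (mM); velocities generated
# noise-free from the reported constants, for demonstration only.
s = [0.25, 0.5, 1.0, 2.0, 4.0, 8.0]
v = [mm_velocity(si, 0.5, 1.75) for si in s]
vmax_fit, km_fit = fit_lineweaver_burk(s, v)
print(round(vmax_fit, 3), round(km_fit, 3))  # → 0.5 1.75
```

With real data, the non-linear fit used by the authors is preferable, since the double-reciprocal transform amplifies error at low substrate concentrations.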
Data Analysis
Data were presented as mean ± SD (n = 3). Origin Pro 7.5 (Northampton, MA, USA) was used to plot and analyze the kinetic data and to estimate Km and Vmax using the non-linear fit function.
Identification of fba Genes in Echinococcus Species
The genome-wide analysis revealed that E. multilocularis contained four potential fba genes, named emfba1-4, and that there were also four homologues in E. granulosus (Figures 1A and S1). Each fba had only one copy, and the four fba genes in each parasite were distributed on different chromosomes (Figure 1A). Among these fba genes, the overall amino acid similarity was 74.64%, whereas each fba shared >92% similarity with its corresponding ortholog. The phylogenetic tree showed that these fba genes formed four branches, each of which consisted of one fba of E. multilocularis and its corresponding ortholog of E. granulosus (Figure 1B). Moreover, all FBA proteins were predicted to contain one FBA I domain and multiple conserved active-site residues (Figures 1C and S1).
Localization of EmFBA1
The purity of EmFBA1 was estimated to be more than 92% (Figure S2). Western blotting showed that natural EmFBA1 (~40 kDa) was clearly recognized by the antibodies raised in this study (Figures 2A and S3). As expected, the antibodies also cross-reacted with natural FBA1 from E. granulosus, Taenia asiatica, and Taenia hydatigena, suggesting that FBA1 is widely expressed in cestodes. Regarding the localization of EmFBA1, strong immunofluorescent signals were predominantly present along the germinal layer and in the posterior of unevaginated and evaginated protoscoleces (Figure 2B). No signals were observed in the control slides (Figure 2B).
Enzyme Kinetics of EmFBA1
Comparison of EmFBA1 enzyme activity demonstrated that the optimal reaction temperature and pH were 37 °C and 7.5, respectively (Figure 3A,B). Nevertheless, EmFBA1 remained active over a broad range of temperatures and pH values (Figure 3A,B). The enzyme activity of EmFBA1 was 67.42 U/mg, and it completely converted the substrate within 10 min, whereas the PBS control showed no significant change (Figure 3C). The Km and Vmax of EmFBA1 were 1.75 mM and 0.5 mmol/min, respectively (Figure 3D).
It was further found that Fe3+ significantly inhibited EmFBA1 activity compared with the other ions, Na+, Ca2+, Mg2+ and K+ (Figure 3E,F). Moreover, the enzyme activity of EmFBA1 was almost completely abolished by the addition of Fe3+ at a final concentration of 0.5 mM (Figure 3F).
Discussion
Glycolysis is the process of decomposition of glucose or glycogen into lactic acid with the production of ATP, in the presence or absence of oxygen (aerobic or anaerobic glycolysis, respectively). Both E. multilocularis and E. granulosus can produce energy by glycolysis, with no significant difference in the rate of glycogen use under aerobic and anaerobic conditions [22,23]. It has also been shown that glycolysis is critical for the survival of Trypanosoma brucei during its lifecycle, as it is the parasite's only source of ATP; inhibiting glycolysis by only 50% is enough to kill T. brucei, which makes the pathway a potent drug target [24,25]. Therefore, it is plausible to develop anti-echinococcosis interventions by targeting the glycolysis pathway.

FBA, a member of the aldolase family, is involved in glycolysis, gluconeogenesis and the Calvin cycle in photosynthesis. It is widely present in animals, plants and microorganisms, showing different responses under various stress conditions [26][27][28]. In S. japonicum, FBA was potentially involved in the growth and development of sporocysts [13]. In apicomplexan parasites, aldolase acts as a bridge between cell-surface adhesion and the actin cytoskeleton during invasion [29,30]. In both E. multilocularis and E. granulosus, all eight FBAs shared the same active-site residues, which were also reported for the FBA isoenzymes of Clonorchis sinensis [31], suggesting that they catalyze their substrates through a similar mechanism. In the phylogenetic tree, each EmFBA formed a single branch with its homologue from E. granulosus. This may be explained by the fact that the set of fba genes formed before the speciation of the two Echinococcus parasites.
It was herein shown that EmFBA1 was widely localized along the germinal layer and in the posterior of the protoscolex. This distribution pattern is consistent with that of FBA1 in E. granulosus [12]. As the germinal layer is involved in the growth of cysts and the generation of protoscoleces [32], EmFBA1 may play a role in these processes. Moreover, EmFBA1 was only localized in the posterior of both unevaginated and evaginated protoscoleces, suggesting that the posterior is a main site of anaerobic oxidation of glucose. Previous studies reported that FBA is also localized on the sucker in Opisthorchis viverrini, where it may provide energy to muscle tissues [33], and on the surface of Plasmodium spp., where it may be involved in the movement and invasion of the parasite [15,34]. It will be of interest to investigate the localization of the other EmFBAs in future studies.
FBA plays a key role in the anaerobic metabolism of glucose. The activity of EmFBA1 was 67.42 U/mg, approximately three times lower than that of S. japonicum, which has eight different FBAs [18]. This discrepancy may be explained by the fact that the E. multilocularis FBA proteins are functionally redundant. As expected, EmFBA1 showed the highest enzymatic activity at 37 °C and pH 7.5. In the presence of four ions, Na+, Ca2+, Mg2+ and K+, EmFBA1 exhibited no significant alterations in enzymatic activity, suggesting ion independence. It was also shown that EmFBA1 was susceptible to the Fe3+ ion, which has not been described for other FBAs. The mechanism of this adverse effect of Fe3+ is still unclear. Whether other EmFBAs have similar traits needs to be investigated in future studies.
Conclusions
We herein genetically and biochemically characterized EmFBA1. The results demonstrated that FBA1 is conserved in all the cestodes investigated and that EmFBA1 shows relatively low enzymatic activity. The current study reveals the distinctive biochemical characteristics of EmFBA1 and is informative for further investigation of its role in glycolysis in E. multilocularis.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/vetsci9010004/s1. Figure S1: Alignment of the FBA amino acid sequences from Echinococcus species. Figure S2: Purity analysis of EmFBA1. Figure S3: Western blotting analysis of natural FBA1 in E. multilocularis and other tapeworms.
Institutional Review Board Statement: The study was reviewed and approved by the Animal Ethics Committee of Zhejiang A&F University (2017YFC1200500, 07/2021). All animal experiments were carried out in strict accordance with the guidelines.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The original contributions presented in the study are included in the article, and further inquiries can be directed to the corresponding authors.
The Mentalized Affectivity Scale (MAS): Development and validation of the Italian version
This study proposes a psychometric validation of the Italian version of the Mentalized Affectivity Scale (MAS) developed by Greenberg and colleagues in 2017. The mentalized affectivity construct integrates mentalization ability into the process of emotion regulation. An adult sample (N = 506) completed the 60-item online version of the MAS. In contrast to the three-factor structure of the original version, exploratory and confirmatory factor analyses on a split sample (EFA: n = 258; CFA: n = 248) revealed a five-factor structure in the Italian context. The hierarchically structured MAS factors are: Emotional Processing (being able to process emotions in situations); Expressing Emotions (talking about and knowing emotions); Identifying Emotions (awareness of emotions); Control Processing (controlling emotional reactions and expression); and Autobiographical Memory (related to childhood experiences). We also verified the convergent validity and reliability of the Italian version of the MAS by correlating the above five factors with measures of emotion regulation and reflective functioning. Moreover, we analyzed the relationships among the MAS factors, personality measures and well-being indexes, such as life satisfaction and self-efficacy: the new 35-item MAS showed robust correlations with all the tested constructs. Our results confirm that the MAS is a useful measure to assess mentalized affectivity, with the Italian version showing a more complex structure than the original English one, thus enriching the literature on mentalization.
Introduction
Beginning from early childhood, people learn how to manage their emotions in everyday life in order to adapt appropriately to social situations [1]. This ability, known as emotion regulation, is defined as "the extrinsic and intrinsic processes responsible for monitoring, evaluating, and modifying emotional reactions, especially their intensive and temporal features, to accomplish one's goals".

Recently, emotion regulation and mentalization have been the subject of interest in cross-cultural studies and reflections. Regarding emotion regulation, Ford and Mauss [10] consider the classic distinction between cultures promoting interdependence (Eastern cultures) and cultures promoting independence (Western cultures) and highlight that cultures play an important role in determining the motivation of individuals' emotion regulation and in the use of specific strategies in different regulatory contexts. In interdependent cultures, individuals prefer to regulate their emotions as a way to maintain collective harmony, using strategies like expressive suppression. On the contrary, in independent cultures, expressive suppression is not considered an adaptive strategy, because individuals are less motivated to be concerned with collective harmony. Although studies about cultural differences in mentalization are still limited, in a recent review Aival-Naveh and colleagues [11] reflect upon the difference between individualistic and collectivistic cultures: in the former, mentalization is mainly self-oriented, whereas in the latter, mentalizing abilities develop first with the aim of understanding others and are then applied to understanding oneself. Commenting on this review, Fonagy and Campbell [12] suggest that this is due to the different ways in which attachment bonds are constructed.
In individualistic cultures, parenting practices require that the newborn interact mainly with the caregiver, whereas in collectivistic cultures, the caregiver welcomes the infant to interact with all the members of the community; in the former, the baby is a primary focus of the caregiver's mind and, consequently, of her/his own, while in the latter, the focus is on other people's minds and the ability to interact with them. In spite of the limited number of available studies, Jurist and Sosa [13] argue for the importance of identifying cross-cultural differences in mentalization, and particularly in mentalized affectivity, a complex construct that, connecting emotion regulation, mentalization and autobiographical memory, is likely to be strongly influenced by the culture in which it develops.
How to measure mentalized affectivity?
In order to evaluate the three components of mentalized affectivity, Identifying emotions, Processing emotions, and Expressing emotions, Greenberg and colleagues created the Mentalized Affectivity Scale (MAS; [1], Italian translation in [14]), a 7-point Likert scale on which respondents indicate their degree of agreement with 60 statements. A principal-component analysis (PCA) with varimax rotation showed that the Kaiser-Meyer-Olkin measure of sampling adequacy was .95 and that the 60-item scale explained 43% of the variance [1]. Moreover, the hierarchical analysis revealed a three-factor structure: Identifying, Processing, and Expressing. Examples of the Identifying-factor items are: "Understanding my emotional experience is an ongoing process" or "I am curious about identifying my emotions". For the Processing factor, some items are "When I am filled with a negative emotion, I know how to handle it" or "I am good at controlling my emotions". Finally, for the Expressing factor, some examples are "People tell me I am good at expressing my emotions" or "I often keep my emotions inside".
As can be seen from the examples above, answering the items of the MAS requires mentalizing about one's own positive and negative emotions. This process focuses both on one's personal experience in emotion management (e.g., "I am good at distinguishing between different emotions that I feel") and on the tendency to take other people's point of view in relation to oneself (e.g., "I am open to other people's view of me because it helps me to better understand myself"). Given the complexity of the mentalization process concerning one's own emotions, Greenberg and colleagues [1] also investigated the characteristics of the MAS in a sample with psychological disorders, showing that the scale identifies significant differences between typical and atypical populations. Comparing a typical sample with a sample covering 18 clinical diagnoses, the authors demonstrated that people with psychological disorders scored higher on the Identifying factor and lower on the Processing factor than the typical population. Although the authors did not identify a cut-off score, the MAS seems to discriminate some relevant components of mentalized affectivity in clinical populations, offering significant indications with respect to the type of treatment that could help these subjects deal with emotions. Research using a clinical population is now being conducted.
The involvement of mentalization in emotion regulation makes the MAS an innovative tool in the international panorama, with translations into 11 different languages [13]. In fact, as already highlighted by Greenberg and colleagues [1], several instruments assessing constructs close to mentalized affectivity have been created over the past years, but they capture only certain aspects of the larger construct of mentalized affectivity. For example, the Emotion Regulation Questionnaire (ERQ; [15], in the Italian version of [16]) is a 10-item, 7-point Likert-scale tool detecting the use of two different emotion regulation strategies: cognitive reappraisal, rethinking a situation in order to modify its emotional meaning and impact, and expressive suppression, referring to modifying or reducing emotional behavior. In this case, there are some similarities between the cognitive reappraisal tested by the ERQ and the Processing factor of the MAS, as well as between the expressive suppression factor of the ERQ and the negative pole of the Expressing factor of the MAS, but the component of the Identifying factor is lacking in the ERQ while it is present in the MAS. Well-known measures developed so far to assess mentalization are the Reflective Functioning Scale [17], based on the Adult Attachment Interview, and the Reflective Functioning Questionnaire (RFQ), the first self-report measure developed to specifically assess one's own mentalization ability [18].
These measures test mentalization in terms of reflective functioning, whereas the MAS emphasizes mentalized affectivity. Although reflective functioning and mentalized affectivity are overlapping constructs, as both imply the ability to reflect on oneself, the former mainly concerns the reinterpretation of the past during critical life situations [19], whereas the latter is focused on the capacity to make use of current, lived emotional experience. This difference is also evident in the structure of the RFQ, which has two subscales, Certainty and Uncertainty in mentalization. High scores on the Certainty subscale are related to hypermentalizing in reflective functioning, i.e., an "over-mentalizing" attitude in which the attributed mental states do not correspond to reality. High scores on the Uncertainty subscale indicate hypomentalizing, that is, a poor understanding of one's own and others' mental states [20]. Thus, the RFQ seems particularly sensitive in assessing distortions of mentalization [21], whereas the MAS aims to capture mentalization along the continuum of typical and atypical functioning. We therefore concluded that the development of an Italian version of the MAS is desirable, in order to have a useful tool for research and intervention on mentalization in the Italian context alongside the other above-mentioned measures developed in past years.
Aims
In the light of the increasing interest in mentalized affectivity and in its evaluation, we aimed to test the psychometric validity and the reliability of the Italian version of the MAS in a cohort of Italian adults. Specifically, we aimed to:
1. test the factorial validity (with confirmatory factor analysis, CFA) and the hierarchical structure of the model proposed by Greenberg et al. [1]. We hypothesized that the Italian version of the MAS would reduce to three distinct factors based on mentalized affectivity theory, as in the original version of the scale;
2. test the assessment's reliability and its concurrent and convergent validity by examining associations between the MAS and its socio-affective correlates: emotion regulation, tested with the Emotion Regulation Questionnaire, and reflective functioning, tested with the Reflective Functioning Questionnaire. According to the theoretical model, emotion regulation and reflective functioning are two constructs closely related to mentalized affectivity. Notably, we chose the ERQ in line with the theoretical proposal of Greenberg and colleagues [1], who state that the MAS assesses emotion regulation, despite relevant differences between the two instruments in terms of awareness and mentalization of emotions. Regarding the relation between the MAS and the RFQ, given that both constructs involve mentalization skills (according to the mentalized affectivity model), we hypothesized the existence of associations between these two competences;
3. examine the psychological correlates of mentalized affectivity, including personality measures and well-being indexes (such as life satisfaction and self-efficacy). In line with the results obtained by Greenberg and colleagues [1], we hypothesized that high levels of mentalized affectivity would correlate with some personality traits, such as openness to experience and extraversion (both related to emotional experience), and with high levels of well-being.
Participants
The total number of participants was 779.
Procedures
Data were collected through an online survey hosted on the Qualtrics platform from February 2018 to January 2019. Once the study protocol was implemented, a link to the survey was presented in university Psychology courses at the Department of Human and Social Sciences of the University of Bergamo and at the Faculty of Education of the Catholic University of the Sacred Heart of Milan. The same link was sent to personal contacts and circulated further through a snowball sampling method. Along with the link to the survey, participants were given all the necessary information, including the purpose of the study, the instructions, and the duration of the survey, estimated at about 30 minutes. On the first page of the survey, participants were informed about personal data processing, and only those who gave their informed consent were included in the data collection. All participants were treated in accordance with the ethical guidelines for research provided by the Declaration of Helsinki [22], the American Psychological Association [23] and the Italian Psychological Association [24]. The study was approved by the local ethics committee of the Department of Psychology of the Catholic University of the Sacred Heart of Milan, according to APA ethical standards. Participants first provided some sociodemographic information, then completed the Mentalized Affectivity Scale in the Italian translation provided in Jurist [11]. In order to test the validity of the scale, other questionnaires concerning personality, emotion regulation, satisfaction with life, self-efficacy and reflective functioning were included.
Measures
Sociodemographic information. All participants were asked to provide sociodemographic information such as gender, year of birth, education level, marital status, employment status, and residence type.
Emotion regulation. The Emotion Regulation Questionnaire. ERQ [25], in the Italian translation by Balzarotti and colleagues [16], is a self-report scale that evaluates participants' emotion regulation. It is a 7-point Likert scale from 1 (strongly disagree) to 7 (strongly agree) consisting of 10 items representing the emotion regulation strategies of cognitive reappraisal (6 items) and expressive suppression (4 items) [16]. Scoring is obtained by combining the scores of the two subscales into an overall score, which ranges from 10 to 70. Cronbach's α for the cognitive reappraisal subscale is 0.847, while Cronbach's α for the suppression subscale is 0.747.
Reflective functioning. The Reflective Functioning Questionnaire. RFQ [26], in the Italian version retrieved from Fonagy's Psychoanalysis Unit at University College London (https://www.ucl.ac.uk/psychoanalysis/research/reflective-functioning-questionnaire-rfqme). The short version of the scale was used: an 8-item self-report scale assessing reflective functioning from 1 (strongly disagree) to 7 (strongly agree). The scale has two subscales, Certainty and Uncertainty in mentalization, evaluated on a 7-point Likert scale. Scoring is obtained by summing the items belonging to each subscale, 6 for Certainty (range 0-18) and 6 for Uncertainty (range 0-18).
Personality. The Ten-Item Personality Inventory. TIPI [27], in the Italian version of Chiorri and colleagues (I-TIPI; 25). The Italian version of the scale was freely downloaded from Samuel Gosling's website (http://homepage.psy.utexas.edu/homepage/faculty/gosling/scales_we.htm). The I-TIPI is a self-report scale that investigates five dimensions of personality, developed using descriptors from Big Five instruments. The five personality dimensions are [28]: Extraversion (E), linked, from an evolutionary perspective, to the ability to adapt to social contexts; Agreeableness (A), having an optimistic view of human nature and getting along well with people; Conscientiousness (C), being organized, methodical, and regarded by others as reliable; Neuroticism (N), related to anxiety and depression and defined as emotional instability; and Openness to Experience (O), being willing to try new things and having many and varied interests [29]. Each dimension consists of two items, for a total of 10 items on a 7-point Likert scale from 1 (strongly disagree) to 7 (strongly agree). The score is calculated by summing the two items for each factor; the minimum and maximum scores range from 2 to 14. Cronbach's α for each factor is: Extraversion α 0.661; Agreeableness α 0.199; Conscientiousness α 0.456; Neuroticism α 0.496; and Openness to Experience α 0.457.
Life satisfaction. The Satisfaction with Life Scale. SWLS [30], in the Italian version of Di Fabio and colleagues [31], is a self-report scale that assesses respondents' perception of satisfaction with their lives. It is a 5-item scale designed to measure global cognitive judgments of one's life satisfaction. Participants indicate how much they agree or disagree with each of the 5 items using a 7-point scale ranging from 1 (strongly disagree) to 7 (strongly agree). The score is obtained by summing the scores of the 5 items and ranges from a minimum of 5 (extreme dissatisfaction) to a maximum of 35 (extreme satisfaction). Cronbach's α 0.855.
Self-efficacy. The General Self-Efficacy scale. GSE [32], in the Italian version of Sibilia, Schwarzer and Jerusalem [33], is a self-report scale evaluating respondents' perception of their sense of self-efficacy, referring to personal agency. It has 10 items on a 4-point Likert scale from 1 (not at all true) to 4 (exactly true). The score is obtained by summing all the answers, from a minimum of 10 to a maximum of 40. Cronbach's α 0.868.
Statistical analysis
Data analyses were performed using the jamovi statistical software [The jamovi project (2020), jamovi (Version 1.2) (Computer Software), retrieved from https://www.jamovi.org]. For the sample characteristics, mean values and standard deviations (SDs) were calculated for continuous variables, and frequencies and percentages were computed for categorical/nominal variables. Skewness and kurtosis of the MAS items were first checked to assess normal distribution; West, Finch, and Curran [34] recommend concern if skewness > 2 and kurtosis > 7.
The factorial validity of the MAS, considering the model proposed by Greenberg et al. [1], was assessed with confirmatory factor analysis (CFA). Hu and Bentler's guidelines [35] for various fit indices were used to determine whether the expected model fit the data. The chi-square test statistic was employed, but, considering its sensitivity to sample size, other fit indices were also evaluated: (a) the comparative fit index (CFI ≥ 0.90 indicates a good fit); (b) the root mean square error of approximation (RMSEA ≤ 0.08 indicates an acceptable fit); and (c) the standardized root mean square residual (SRMR ≤ 0.08 indicates an adequate fit).
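As a minimal illustration, the three cut-offs above can be expressed as a single check; this is a sketch of the decision rule only, and the numeric examples are hypothetical, not fit results from this study.

```python
def acceptable_fit(cfi, rmsea, srmr):
    """True when all of Hu & Bentler's cut-offs used in this study hold:
    CFI >= .90, RMSEA <= .08 and SRMR <= .08."""
    return cfi >= 0.90 and rmsea <= 0.08 and srmr <= 0.08

# Hypothetical fit indices, for demonstration only.
print(acceptable_fit(cfi=0.93, rmsea=0.06, srmr=0.05))  # → True
print(acceptable_fit(cfi=0.85, rmsea=0.10, srmr=0.09))  # → False
```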
Scales translated into different languages and administered in different cultural contexts often do not retain the latent factor structure of the original version; in such cases, it is appropriate to examine the latent structure of the instrument through an exploratory factor analysis (EFA), followed by a new confirmatory factor analysis (CFA). Since this was the case in this study, the total sample was randomly divided into two halves: the first was used to perform an EFA (Sample A, n = 258), and the second was used to perform a CFA in order to validate the EFA structure (Sample B, n = 248).
On Sample A, the Kaiser-Meyer-Olkin (KMO) measure and Bartlett's test of sphericity were run in order to ensure that the correlation matrix could be subjected to analysis (KMO should be > 0.5; Bartlett's test of sphericity should be significant). The Cattell scree test (judging the elbow of a scree plot) and the Kaiser-Guttman criterion (eigenvalue greater than one) were used to identify the number of factors to be extracted in the EFA. EFA with Oblimin oblique rotation was used to analyze the MAS items; oblique rotation was used because the factors extracted from the MAS are likely to correlate with each other. In the first step, all 60 items were included. Subsequent factor analyses were conducted in a stepwise fashion in order to eliminate items until a stable factor solution emerged. Loadings of .40 or above are generally considered the cut-off for substantial loadings [36,37]; for this reason, items with a factor loading < |.40| were excluded, and, after the first step, items that loaded at > |.40| on more than one factor were excluded. Moreover, in order to obtain a more refined and clear-cut solution, we retained only those items that showed a loading higher than |.40| on the intended factor and a ratio higher than 2 between the primary loading and the highest secondary loading (i.e., the primary loading was at least two times the highest secondary loading).
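The stepwise item-retention rule described above can be sketched as a small function. This is an illustrative reconstruction of the rule (primary loading ≥ |.40|, no secondary loading ≥ |.40|, primary/secondary ratio > 2); the loading vectors in the examples are invented, not actual MAS loadings.

```python
def keep_item(loadings, min_loading=0.40, min_ratio=2.0):
    """Return True if an item's factor loadings survive the pruning rule."""
    mags = sorted((abs(l) for l in loadings), reverse=True)
    primary = mags[0]
    secondary = mags[1] if len(mags) > 1 else 0.0
    if primary < min_loading:            # weak primary loading -> drop
        return False
    if secondary >= min_loading:         # cross-loading -> drop
        return False
    if secondary > 0 and primary / secondary < min_ratio:
        return False                     # primary not clearly dominant
    return True

# Invented loading vectors across factors, for demonstration only.
print(keep_item([0.62, 0.18, -0.05]))  # → True  (clear primary loading)
print(keep_item([0.45, 0.41, 0.10]))   # → False (cross-loading)
print(keep_item([0.35, 0.12, 0.08]))   # → False (primary below .40)
```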
On Sample B, CFA was conducted. Maximum Likelihood (ML) was used as an estimation method. Hu and Bentler's guidelines for various fit [35] indices were used to determine whether the expected model fit the data.
Cronbach's alpha coefficients were computed on the total sample to examine internal consistency; values below .60 are considered unacceptable [38].
To examine the hierarchical structure of the scale, the one-component through five-component solutions were explored using the procedure proposed by Goldberg [39]. First, a single component was specified in a PCA; then, in four subsequent PCAs, we specified two, three, four, and five orthogonally rotated components. The component scores were saved for each solution, and correlations between component scores at adjacent levels were computed.
The concurrent validity of the MAS was evaluated by correlating the MAS factors with age, education, personality, emotion regulation, life satisfaction, self-efficacy and reflective functioning, using Pearson's r correlation coefficient. Following Cohen's guidelines [40] we interpreted correlations as measures of effect size. Correlations were considered weak (|0.10| < r < |0.29|), moderate (|0.30| < r < |0.49|) or strong (|0.50| < r < |1|). Furthermore, t-tests were used to test differences in the profiles of the MAS factors due to gender. Missing values were treated via listwise deletion.
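Cohen's bands can be encoded as a small helper (the handling of values exactly at the .29/.30 and .49/.50 boundaries is a convention choice; this function is illustrative only, not part of the study's analysis):

```python
def cohen_strength(r):
    # classify a Pearson correlation by Cohen's effect-size bands
    a = abs(r)
    if a >= 0.50:
        return "strong"
    if a >= 0.30:
        return "moderate"
    if a >= 0.10:
        return "weak"
    return "negligible"
```

For example, an r of .45 would be reported as a moderate effect and an r of -.61 as a strong one.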
Descriptive analysis of MAS items
The descriptive analysis of the MAS items is presented in Table 2. The average scores of the responses to the 60 items from all the participants ranged from 60 to 420 and were split into three factor scores (Expressing from 14 to 98; Identifying from 24 to 168 and Processing from
Factor structure of the Mentalized Affectivity Scale. Exploratory factor analysis
Data from Sample A and 60 items were used in these analyses. Bartlett's sphericity test (χ2 = 7605, p < .001) and the KMO measure (0.84) ensured that the correlation matrix could be subjected to factor analysis. The Cattell scree test and the Kaiser-Guttman criterion indicated that a five-factor solution was the most appropriate. EFA was then conducted, with five factors extracted. The initial pool of 60 general items, after subsequent factor analyses conducted in a stepwise fashion, was reduced to 35. Twelve items were excluded because their loadings were lower than |.40|. The following thirteen items were excluded because they showed a ratio lower than 2 between the primary loading and the highest secondary loading: "I can express my emotions clearly to others."; "It is hard for me to manage my emotions."; "I often think about my past experiences to help me understand emotions that I feel in the present."; "I am able to keep my emotions to myself if the timing to express them isn't right."; "People tell me I am good at expressing my emotions."; "I often look back at my life history to help inform my current emotional state and situation."; "People get confused when I try to express my emotions."; "I can see how prior relationships influence the relationships that I have now."; "It is helpful to think about how my emotions stem from family dynamics."; "I rarely think about the reasons behind why I am feeling a certain way."; "It's important to understand the major life events that have had an impact on my behavior."; "I am more comfortable "talking around" emotions I am feeling, rather than talking about them directly."; "When I express my emotions to others, it is usually jumbled". The pattern of factor loadings from the five-factor exploratory measurement model for the MAS with 35 items is given in Table 3. The first extracted factor explains 12.85% of the variance after rotation.
It showed loadings from ten items evaluating a self-assessment of one's ability to be aware of one's own emotions. This factor can be named "Identifying Emotions". The second extracted factor explains 10.64% of the variance after rotation. It showed strong loadings from eight items assessing the way people try to express and communicate their emotions to others, i.e. externalizing them. This factor can be labelled "Expressing Emotions". The third extracted factor explains 9.53% of the variance after rotation. It showed loadings from seven items assessing people's ability to identify and label their emotions. This factor can be called "Curiosity about Emotions". The fourth extracted factor explains 7.80% of the variance after rotation. It showed strong loadings from seven items assessing people's ability to control their emotions using cognition. This factor can be named "Processing Emotions". The fifth extracted factor explains 4.89% of the variance after rotation. It showed loadings from three items assessing people's memories of personal childhood emotional experiences. This factor can be labelled "Autobiographical Memory". The total variance explained by the five extracted factors was 45.7%. As shown in Table 3, no item displays a loading lower than .40. The extent of cross-loading between factors was moderate; the size of the secondary loadings was usually small, below .30.
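The stepwise item-retention rules applied above (primary loading at least |.40|, no secondary loading above the cut-off, and a primary-to-secondary ratio of at least 2) can be summarized, for a single item's row of loadings, as follows (a hypothetical sketch, not the authors' code):

```python
def retain_item(loadings, cutoff=0.40, ratio=2.0):
    # loadings: one item's loadings on all extracted factors
    mags = sorted((abs(l) for l in loadings), reverse=True)
    primary, secondary = mags[0], mags[1]
    if primary < cutoff:
        return False                      # no substantial loading anywhere
    if secondary >= cutoff:
        return False                      # cross-loads on more than one factor
    return primary >= ratio * secondary   # primary at least twice the runner-up
```

For example, loadings of (.62, .18, .05, ...) pass all three checks, while (.50, .30, ...) fails the ratio criterion.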
Factor structure of the Mentalized Affectivity Scale. Confirmatory factor analysis
CFA was conducted on data from Sample B using the 35 items; the assignment of items to CFA factors was based on the EFA loadings. Table 3 presents the standardized factor loadings in Sample B. The fit of the CFA model to the data from the 248 subjects was acceptable (χ2(584) = 1076.00, p < .001; RMSEA = .058; SRMR = .074). Loadings from the CFA were comparable with those found in the EFA, identifying the same five factors.
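The reported RMSEA can be reproduced from the chi-square statistic via the standard formula RMSEA = sqrt(max(χ2 − df, 0) / (df · (N − 1))); this is a sanity check, not the fit software used in the study:

```python
import math

def rmsea(chi2, df, n):
    # root mean square error of approximation from a chi-square fit statistic
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

print(round(rmsea(1076.00, 584, 248), 3))  # -> 0.058, matching the reported value
```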
Hierarchical structure
The hierarchical analysis of the one- through five-component solutions was conducted using the procedure proposed by Goldberg [39] on the total sample of participants. The resulting hierarchical structure is displayed in Fig 1. Items that loaded highest on the one-component solution (FUPC) represented Identifying and Expressing Emotions, which are related to the ability to recognize emotions and to express them, including "I am good at distinguishing between different emotions that I feel", "I try to put effort into identifying my emotions.", and "If I feel something, I will convey it to others". Items in the two-component solution appeared to represent the "Identifying and Processing" and "Curiosity and Expressing Emotions" dimensions of mentalized affectivity. Items that loaded high on the "Identifying and Processing" dimension were "I can quickly identify my emotions without having to think too much about it", "I often figure out where my emotions stem from", and "I can easily label "basic emotions" (fear, anger, sadness, joy, and surprise) that I feel". This component remained virtually unchanged in the three-component solution. Items that loaded high on the "Curiosity and Expressing Emotions" dimension were "If I feel something, it often comes pouring out of me", "If I feel something, I prefer not to discuss it with others", and "I am curious about identifying my emotions". In the three-component solution, the "Curiosity and Expressing Emotions" dimension split into two subcomponents that differentiated "Curiosity about present and past" affects from "Expressing" affects. Items that loaded highly on the "Curiosity about present and past" dimension were "I can tell if I am feeling a combination of emotions at the same time", and "I try to put effort into identifying my emotions".
Items that loaded highly on the "Expressing" dimension were "If I feel something, I prefer not to discuss it with others", "I often know what I feel but choose not to reveal it outwardly", "I often keep my emotions inside". In the four-component solution, both the "Curiosity about present and past" and "Expressing" dimensions remained virtually unchanged; the "Identifying and Processing" component split into two subcomponents that differentiated "Identifying" affects from "Processing" affects. Items that loaded highly on the "Identifying" dimension were "I am good at distinguishing between different emotions that I feel", "I can quickly identify my emotions without having to think too much about it", and "I am able to understand my emotions within the context of my surroundings"; items that loaded highly on the "Processing" dimension were "When I am filled with a negative emotion, I know how to handle it", "I am good at controlling emotions that I do not want to feel", and "I can still think rationally even if my emotions are complex". Finally, in the five-component solution the "Identifying", "Expressing", and "Processing" dimensions remained unchanged. "Curiosity about present and past" split into two subcomponents that differentiated "Curiosity about emotions" and "Autobiographical memory". Items that loaded highly on the "Curiosity about emotions" dimension were "I try to put effort into identifying my emotions", "I try to understand the complexity of my emotions", and "I am curious about identifying my emotions", and items that loaded highly on the "Autobiographical memory" dimension were "I can pinpoint childhood experiences that influence the way that I often think and feel", "Knowing about my childhood experiences helps to put my present emotions within a larger context", and "I often think about how the emotions that I feel stem from earlier life experiences (e.g. family dynamics during childhood)".
Reliability of the Mentalized Affectivity Scale and correlations among factors
All the factor scores showed an acceptable distribution; skewness and kurtosis indicated normality (skewness ranged from -0.72 to 0.21; kurtosis from -0.42 to 0.58). The analysis of reliability performed on the data collected from all participants showed that the scale has adequate internal consistency for all factors. All Cronbach's alphas were adequate: "Identifying Emotions" = .86, "Expressing Emotions" = .84, "Curiosity about Emotions" = .82, "Processing Emotions" = .79, "Autobiographical Memory" = .75. As for correlations among the five factors, "Identifying Emotions" and "Curiosity about Emotions", "Expressing Emotions" and "Processing Emotions", and "Curiosity about Emotions" and "Processing Emotions" are not significantly correlated, whereas all the other pairs of factors show significant positive correlations.
Convergent validity
Convergent validity was assessed through correlations between the five mentalized affectivity factors and reflective functioning and emotion regulation.
As can be seen in Table 4, Identifying Emotions is positively correlated with the ERQ Cognitive reappraisal emotion regulation strategy and with Certainty in reflective functioning, and it is negatively correlated with the emotion regulation strategy of Expressive suppression and with Uncertainty in reflective functioning. Expressing Emotions is strongly negatively correlated with the Expressive suppression emotion regulation strategy. Curiosity about Emotions is significantly positively correlated with the Cognitive reappraisal strategy, while it is negatively correlated with the Expressive suppression strategy. Processing Emotions is significantly positively correlated with both scales of the Cognitive reappraisal strategy and with Certainty in reflective functioning, while it is negatively correlated with Uncertainty in reflective functioning. Finally, Autobiographical Memory is strongly positively correlated with the Cognitive reappraisal strategy, and it is negatively correlated with the Expressive suppression strategy.

Table 4. Convergent correlations with reflective functioning and emotion regulation.
Demographics, personality, well-being, life satisfaction and self-efficacy
Correlations among mentalized affectivity factors and the other measures are displayed in Table 5.
Demographics. Identifying Emotions is moderately positively correlated with age, and weakly positively correlated with education. Expressing Emotions and Curiosity about Emotions are both significantly positively correlated with education. Finally, Processing Emotions is significantly positively correlated with age.
Personality. As for the personality scales, Identifying Emotions is positively correlated with the Conscientiousness and Agreeableness traits, it is weakly positively correlated with the Openness and Extraversion traits, while it is strongly negatively correlated with Neuroticism. Expressing Emotions is strongly positively correlated with Openness and Extraversion. Curiosity about Emotions is positively correlated with the Openness, Agreeableness and Extraversion traits. Processing Emotions is significantly positively correlated with Agreeableness, it is weakly positively correlated with the Openness and Conscientiousness traits, and it is negatively correlated with Neuroticism. Finally, Autobiographical Memory is not correlated with the other measures.
Life satisfaction. Identifying Emotions, Expressing Emotions and Processing Emotions are strongly positively correlated with Life satisfaction.
Self-efficacy. Identifying Emotions and Processing Emotions are positively correlated with General Self-Efficacy, while Expressing Emotions and Curiosity about Emotions are weakly positively correlated with this construct.
Discussion
The present research tested the factorial validity of the Italian version of the MAS in an Italian sample. Moreover, we tested reliability, concurrent and convergent validity by examining associations between the MAS and its socio-affective correlates, such as emotion regulation and reflective functioning. Finally, we explored possible links among mentalized affectivity as tested with the MAS and other measures of personality and well-being, such as life satisfaction and self-efficacy. Referring to the factorial structure of the Italian version of the MAS, the confirmatory factor analysis did not confirm the original three-factor structure. As is often the case, scales translated into different languages and analyzed in different cultural contexts may not have the same latent factor structure as the original version: we therefore conducted an exploratory factor analysis, followed by a new confirmatory factor analysis, to examine the latent structure of the Italian version of the MAS. Following these steps, we delineated a new five-factor structure: Identifying Emotions, Expressing Emotions, Curiosity about Emotions, Processing Emotions and Autobiographical Memory.
Research has started to explore cultural differences underlying the construct of mentalized affectivity, and the growing interest in cultural differences in mentalization may provide a helpful path for the interpretation of our results. In a recent review, Aival-Naveh and colleagues [11] proposed that mentalizing development could be interpreted from different perspectives: a universalist one, which highlights the role of innate aspects of mentalization; a relativist one, which underlines the importance of context in mentalization development; and an intermediate one, which lies between the other two perspectives. This last hypothesis seems compatible with the mentalized affectivity theoretical model proposed by Greenberg and colleagues [1], because it assumes the existence of basic psychological processes, similar across cultures, which are affected during human development by specific cultural factors (a relevant cultural factor can be, for example, the possibility to establish attachment relationships in extrafamilial contexts, e.g. at school; see [43,44]). Also, following the theoretical model proposed by Greenberg and colleagues [1], the biological bases of mentalization develop during infancy and childhood through parental attachment and early social experiences, which are deeply influenced by culture, and then generate mentalized affectivity. Regarding mentalization, Aival-Naveh and colleagues [43] proposed a macro-difference between individualistic vs. collectivistic cultures: in individualistic/Western cultures, mentalization is mainly oriented to the self, whereas in collectivistic/Eastern cultures, mentalization abilities develop firstly with the aim of understanding others. Notwithstanding the above arguments, research in this area is still limited and does not analyze specific cultures in detail. In our case, although American and Italian culture can both be considered Western cultures, i.e.
individualistic cultures, it is possible to speculate about the existence of some differences in mentalistic and affective development that can have an impact on a complex skill such as mentalized affectivity. The hierarchical model that we proposed has shown that, in the structure of the Italian version of the MAS, the three original factors are already present at the second level, accompanied by a fourth factor, Curiosity, which splits at the third level into Curiosity about Present and Past and, finally, is divided into Curiosity about Emotions and Autobiographical Memory. Both the Curiosity about Emotions and Autobiographical Memory factors refer to the individual's tendency to question their present and past emotions, in particular the role that these emotions play in their current experience; these factors are well explained by the theory of mentalized affectivity. Regarding Curiosity about Emotions, Jurist [3] notes, for example, that not all people in therapy are necessarily interested in their emotions, which might predict the duration of a therapy. Curiosity may be regarded as the basis of mentalized affectivity; in order to become able to "being aware of one's own affect by remaining within that affective state", it is important to be interested in emotional experience and to consider this experience relevant to oneself. At the same time, curiosity may relate to the present or the past, so the individual may develop an interest in understanding how her/his previous experiences impact her/his current emotional state. According to Greenberg and colleagues [1], autobiographical memory can serve to illuminate how past emotional experience has an impact upon current experience, a critical element of mentalized affectivity. We may speculate that the cultural differences between Italy and the US influence the styles with which people reflect on the past and consider such reflections useful for understanding the present.
In fact, European cultures and education stress the value of knowledge of the past: children and adolescents read classical texts, in their mother language or in the original language (i.e., in Latin), that often focus on the relationship between the inner world of the characters and their behavior (think of the Homeric classics, or the Romantics). Moreover, history is considered important because educators believe that it is only possible to understand the present through knowledge of one's own origins. We can suppose that US culture and education are different, more oriented to the present and to the future: wide space is given to the study of technology, and the continuous impulse to innovate is promoted as part of this culture and education, which tends to deemphasize the need for revisiting the past. For these reasons, it is possible that Italians are more used to questioning their emotions and interpreting the present in light of the past than Americans [45]. So, the Italian version of the MAS seems to represent a detailed description of the mentalized affectivity dimensions, as it suggests two new factors that had so far only been hypothesized in the literature.
In order to assess the reliability and the concurrent and convergent validity, we examined the links between the MAS factors and the emotion regulation and reflective functioning constructs, confirming our hypotheses. In fact, referring to emotion regulation, results showed positive correlations between cognitive reappraisal and four of the five MAS factors, Identifying Emotions, Curiosity about Emotions, Processing Emotions and Autobiographical Memory; on the contrary, the link between cognitive reappraisal and the expression of emotions is not present. We can assume that all the factors correlated with cognitive reappraisal refer to internal abilities of the individual, that is, being able to identify, to be curious, to process (as suggested by Greenberg and colleagues [1]) and to use autobiographical memory, without expressing those mental states externally. It is possible that the Expressing Emotions component of mentalized affectivity comes into play when the emotion must not only be thought about, as in the case of cognitive reappraisal, but also communicated to others, i.e. in the interpersonal sphere.
As regards the construct of reflective functioning, our results provide evidence that both the emotional and cognitive dimensions of processing evaluated in the MAS are positively related to the tendency to hyper-mentalize and negatively related to the tendency to hypo-mentalize. In the theoretical perspective proposed by Greenberg and colleagues [1], the processing dimension indicates the tendency to modulate, refine and regulate emotions, i.e. the tendency to think about emotions, a trait directly involved in the ability to mentalize well. We can assume that people with a "hyper-mentalizing" attitude are able to focus on emotions, in terms of awareness and control, i.e. Processing, whereas people with a "hypo-mentalizing" attitude are not focused on their own emotions, so that they are not able to recognize and manage these internal states.
Moreover, we found several links among four out of the five MAS factors and the other constructs examined in this study. As regards personality, the results showed that Identifying Emotions and Processing Emotions are positively related to almost all the personality characteristics, and negatively to Neuroticism. Moreover, the Expressing and Curiosity components of mentalized affectivity are related to the Openness, Extraversion and Agreeableness personality characteristics. Although the link between mentalization and personality is already well-established in the literature, especially pertaining to personality disorders (just think about borderline personality disorder; [6,46]), recently Karterud and Kongerslev [47] proposed the Temperament-Attachment-Mentalization-Based theory of personality: the above-cited constructs represent innate or experiential components of the personality, intrinsically linked to each other, as they contribute to personality construction through emotion regulation abilities. So, this theory aims at explaining both typical and pathological personality in a structure similar to that of mentalized affectivity: in both cases, the authors assume the existence of inner developmental bases that allow the subject to live relational experiences, such as attachment relationships, that impact on her/his ability to manage emotions. We can also hypothesize that mentalized affectivity is a fundamental aspect of adult personality, which derives precisely from the encounter of mentalizing and emotion regulation skills.
At the same time, correlations among four out of the five factors of the MAS (excluding Autobiographical Memory), life satisfaction and self-efficacy confirm, as in Greenberg and colleagues [1], the important role of mentalized affectivity in individual well-being: being curious about emotions and being able to properly process, identify and express emotions allow the understanding of oneself and of others, favoring individual well-being and the ability to face life effectively.
The lack of links between the Autobiographical Memory factor and the other scales is not surprising: although personal memories play an important role in mentalized affectivity, we have to remember that this construct is mainly oriented to the present, because it refers to the capacity of being aware of one's own affect by remaining within that affective state [6]: probably, when responding to questions about personality, life satisfaction and self-efficacy, people tend to refer to their present experience and to put their past ones on the back burner.
As far as personal information is concerned, the Identifying Emotions and Processing Emotions factors correlate with age. As the literature argues [1][2][3], mentalized affectivity is an adult ability, and we can suppose that Identifying and Processing Emotions are two complex factors of this construct, improving with age. Items composing the Identifying Emotions factor require recognizing the link between the emotional experience and its context, or identifying the origin of the emotion, both operations involving high-level cognitive activity. At the same time, Processing Emotions integrates emotional and cognitive skills, requiring a high level of self-awareness. Therefore, it may be conceivable that precisely these components of mentalized affectivity are the most apparent in adulthood.
With regard to the structural characteristics of our sample, we found a correlation between educational level and the Identifying Emotions, Expressing Emotions and Curiosity about Emotions factors. These three factors involve the ability and the propensity to think about one's own emotions and to reflect on their origin, to monitor oneself and one's own abilities, and to name emotions appropriately: all these activities are related to metacognitive and self-regulatory reasoning, as well as to linguistic skills. People with a high level of education develop more advanced metacognitive and self-regulatory skills, and are more articulate than those who have fewer years of study; this can affect their propensity to be aware of their emotions and to name and manage them properly. At the same time, people with lower self-regulation skills may be less likely to reach a high level of education, so in adulthood they may struggle to apply metacognitive strategies to reason about emotions. Finally, age and educational level are usually associated, and this can explain the double correlation between Identifying Emotions, age and education: the older a person is, the more elevated their level of education is, prompting them to have more cognitive tools to engage in identifying their emotions. Among the mental features related to age that facilitate the identification of emotions we can mention the "cognitive reserve" [48], which emphasizes the role of individual differences in cognitive coping with emotional and mental burdens, helping people to become increasingly resilient to external stressors. This paper also has several limitations that need to be considered. First, compared with the original sample, the Italian sample is smaller and does not include clinical information about the participants.
In the future, it will be interesting to test the validity of the MAS also in a clinical sample, in order to compare the Italian data with the original ones (for the relevance of mentalization in clinical samples see, for example, [49,50]). Moreover, in this research the educational level of the sample is different from the original one: to pursue our speculation about cultural differences as a way to interpret the new factors that emerged, it would be desirable to compare samples similar on this variable. In addition, in accordance with the original research, our online survey did not inquire about the specific region where the participants live: although we aimed to compare the US and Italian populations, it is possible that knowing the specific area where the Italian participants live could provide additional relevant information. With respect to the measures used, we wanted to remain aligned with the original paper, so we evaluated personality characteristics with the Italian validation of the TIPI. This test shows low reliability values, similar to those of the original version [27]: the reliability seems to be a limit of the scale structure, and it is possible that the use of another measure of personality might have provided more accurate and consistent results with respect to this construct.
Conclusion
In this paper we proposed the validation of the Italian version of the MAS, and we found a more articulated factorial structure than that of the original scale. Specifically, the new factors of Curiosity about Emotions and Autobiographical Memory emerged, thus highlighting two important components of mentalized affectivity that in our sample are well distinguished from the other ones. We also verified the validity of this factorial structure, and we confirmed the relationship of the mentalized affectivity construct with other psychological correlates, highlighting the role of mentalized affectivity in individual well-being. Taken altogether, our findings show that the Italian version of the MAS can be considered a useful tool in the Italian context, both for research activities and clinical practice, enriching the complexity of the construct of mentalization and the variety of tasks devised to test such a critical ability for social life.
Jackiw-Teitelboim Gravity in the Second Order Formalism
We formulate the path integral for Jackiw-Teitelboim gravity in the second order formalism working directly with the metric and the dilaton. We consider the theory both in Anti-de Sitter (AdS) and de Sitter (dS) space and analyze the path integral for the disk topology and the "double trumpet" topology with two boundaries. We also consider its behavior in the presence of conformal matter. In the dS case the path integral evaluates the wavefunction of the universe which arises in the no-boundary proposal. In the asymptotic AdS or dS limit without matter we get agreement with the first order formalism. More generally, away from this limit, the path integral is more complicated due to the presence of modes from the gravity-dilaton sector and also the matter sector with short wavelengths along the boundary that are smaller than the AdS or dS scales. In the double trumpet case, for both AdS and dS, we find that bosonic matter gives rise to a diverging contribution in the moduli space integral, rendering the path integral ill-defined. The divergence occurs when the size of the wormhole neck vanishes and is related to the Casimir effect. For fermions this divergence can be avoided by imposing suitable boundary conditions. In this case, in dS space the resulting path integral gives a finite contribution for two disconnected universes to be produced by quantum tunneling.
Jackiw-Teitelboim (JT) gravity is a theory of two-dimensional gravity which has received considerable attention recently [1-106]. In this paper we carry out a path integral quantization of the theory in the second order formalism which involves the metric and the dilaton. This is to be contrasted with the first order formalism, used in much of the recent discussion [3, 25, 107-109], which involves the spin connection and Vierbein along with the dilaton. Some of the motivation for our work comes from wanting to compare the results we get from the second order formalism with those obtained in the first order formalism. The second order formalism allows matter to be added in a direct way, and this permits us to generalize our study of JT gravity to also include matter. Finally, one might hope to glean some lessons about quantizing gravity in higher dimensions from the second order formalism.
We study both JT gravity in Anti-de Sitter (AdS) space and in de Sitter (dS) space here. Our analysis includes the path integral for spacetimes with the topology of a disk which have Euler character χ = 1, with one boundary, and also spacetimes of the "double trumpet" kind with χ = 0 and two boundaries.
For the disk topology we find in the asymptotic AdS or dS limit, obtained by taking the dilaton and length of the boundary to diverge while keeping their ratio fixed, that the results of the second order path integral quantization agree with those obtained from the first order formalism. In particular, we find, as in the first order formalism, that the dynamics of the gravity-dilaton system is given by the reparametrization modes of the boundary (called time reparametrization modes), which are governed by an action involving the Schwarzian derivative. However, away from this limit, for example even when working with a boundary of large but fixed length, we find that the path integral is more complicated to evaluate. This happens because the measure for diffeomorphisms is more complicated in general, due to the mixing of small and large diffeomorphisms with each other, and also because the determinants which now arise have a complicated dependence on the large diffeomorphisms. The underlying reason for all this is that away from the asymptotic limit there are modes with wavelengths shorter than the AdS or dS scale which need to be included in the path integral, and their dynamics is not simple. In particular, one needs to include an arbitrarily large number of higher derivative terms, beyond the Schwarzian, valued in Diff(S^1)/SL(2, R) in order to incorporate their coupling with the large diffeomorphisms.
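The Schwarzian derivative that governs these boundary modes is {f, t} = f'''/f' − (3/2)(f''/f')². Two of its defining properties — vanishing on SL(2, R) Möbius maps (which is why the action lives on Diff(S^1)/SL(2, R)) and a constant value on f = tan t — can be verified symbolically; this is an illustrative check, not part of the paper's computation:

```python
import sympy as sp

t = sp.symbols('t')

def schwarzian(f, t):
    # {f, t} = f'''/f' - (3/2) * (f''/f')**2
    f1 = sp.diff(f, t)
    return sp.simplify(sp.diff(f, t, 3) / f1
                       - sp.Rational(3, 2) * (sp.diff(f, t, 2) / f1) ** 2)

assert sp.simplify(schwarzian(sp.tan(t), t) - 2) == 0        # tan t has constant Schwarzian 2
assert sp.simplify(schwarzian((2*t + 1) / (t + 3), t)) == 0  # Moebius maps are annihilated
```

The second assertion is the statement of SL(2, R) invariance for one sample Möbius map; the zero mode structure of the Schwarzian action follows from it.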
The matter we include is conformally invariant: mostly free bosons or fermions, although some of our results are more general. In the presence of matter, for the disk, we find again that in the asymptotic AdS or dS limit the path integral can be carried out, but away from this limit the matter determinant from quantum fluctuations has a dependence on the large diffeomorphisms which requires us to go beyond the Schwarzian action and include the higher derivative terms mentioned above. It is worth mentioning that simple dimensional counting shows that the quantum effects of matter only arise away from the asymptotic limit, when one is working at finite boundary length, and including them in a systematic manner along with the quantum effects from the gravity-dilaton sector is quite non-trivial. Such an analysis would need to be carried out to go beyond the semi-classical limit, which has been analyzed in considerable detail recently, where the number of matter fields N → ∞ and the gravity-dilaton sector is treated as classical.
In AdS space the path integral for a single boundary has the interpretation of computing the partition function of the boundary theory at finite temperature. In dS space the path integral we carry out evaluates the wavefunction of the universe obtained from the no-boundary proposal first put forward by Hartle and Hawking [110]. The contour in this case is more complicated and goes over spacetime regions with different signatures. We consider both the Hartle-Hawking (HH) contour, which involves initially a Euclidean region with signature (2, 0) and then de Sitter space of signature (1, 1), and also an alternate contour suggested by Maldacena [24], which involves a spacetime with signature (0, 2), with metric −AdS_2, followed by a region of dS space. We find agreement between both contours and discuss how to carry out the path integral by analytically continuing fluctuations across regions of different signature. In the dS case, the short distance modes we were referring to above, which render the calculations more complicated away from the asymptotic limit, are modes which are still dynamical and have not yet exited the horizon. It is important to mention that, especially in the dS case, one would like to obtain the wavefunction when the universe has finite size (and the dilaton takes a finite value); it is therefore important to try and go beyond the asymptotic limit after including these modes, although we will not be able to make much progress in this direction here.
The contribution of the double trumpet geometry is suppressed compared to the disk topology, since it has lower Euler character, by a factor of e^{Φ_0/4G}, where Φ_0 can be thought of as the horizon value of the volume of the internal space which gives rise to the JT AdS or dS theory after dimensional reduction. The double trumpet can be thought of as a wormhole joining the two boundaries. Most of the comments above about agreement with the first order formalism in the absence of matter and in the asymptotic AdS or dS limits apply to the double trumpet geometry as well. The path integral now involves two sets of large diffeomorphisms which act independently at the two boundaries, and also moduli, including one which corresponds to the size of the "neck" of the wormhole (called b below). We show how the correct measure for summing over these diffeomorphisms and moduli arises in the second order formalism.
Once matter is added, we find that its quantum effects give rise to a contribution in the path integral which diverges when the neck goes to zero size. The quantum effects can be thought of as giving rise to a Casimir effect which diverges when the neck becomes vanishingly small. We show that this happens both for bosons and also for fermions. The bosons have periodic while the fermions have anti-periodic boundary conditions along the time or temperature direction. This divergence is analogous to the tachyon divergence which arises on the world sheet for Bosonic string theory. Alternatively, we can also consider periodic boundary conditions along this direction for the fermions, as would be appropriate, for example, if one is evaluating an index Tr((−1)^F e^{−βH}) instead of the partition function.^1 In this case we find that the quantum effects from fermions do not diverge when b → 0 and the path integral is well behaved.
In dS space the double trumpet gives rise to an amplitude for two universes to arise after quantum tunnelling from "nothing". The divergence in the b → 0 limit mentioned above arises in this case as well and can also be avoided by taking the fermions to have periodic boundary conditions along the spatial directions of the two universes. In fact, the periodic boundary conditions for the fermions cannot be satisfied for the disk topology, so with these boundary conditions the leading contribution to the wavefunction, in an expansion in Euler character, arises from the amplitude to pair produce the two universes from the double trumpet wormhole.
This paper is organized as follows. In section 2 we elaborate on the quantization of JT gravity in the second order formalism for the case of Euclidean AdS spacetime with disk topology. In section 3 we extend this analysis to include additional matter fields for the disk topology. Following this, in section 4 we repeat the analysis for the case of Euclidean AdS spacetime with "double trumpet" topology, both with and without additional matter. In sections 5 and 6 we redo the analysis of sections 2, 3 and 4, now in dS spacetime. Finally, we end with conclusions in section 7. Appendices A-J contain important additional details.
Pure JT theory path integral in AdS
In this section we will consider JT gravity in Euclidean AdS space. The path integral for the system is given by

Z = (1/Vol(Ω)) ∫ Dg_ab Dφ e^{−S_JT}   (2.1)

where S_JT, the action for Jackiw-Teitelboim gravity, involves the metric and a scalar φ, the dilaton. In Euclidean signature the action is given by

S_JT = −(1/2) ∫ d²x √g φ(R + 2) − ∮_∂ du √γ φ K   (2.2)

where we have set the AdS_2 length R_AdS = 1. Note that the sum in eq.(2.1) is over all metric and dilaton configurations, and our main task here will be to make this precise. This problem has received considerable attention recently, [3,13]. In general, one must sum over all topologies subject to the boundary conditions that are imposed. Our approach will be to work directly in the second order formalism, which involves a sum over metric configurations, and not with the first order formalism which has been used in much of the previous literature. This will also allow us to include matter easily, as we will see later. We will restrict ourselves in this section to the relatively simple case of the disk topology with one boundary.
Before proceeding let us note that the action of JT gravity actually includes one additional term which is topological,

S_top = −Φ_0 χ   (2.3)

where χ is the Euler characteristic of the manifold, related to the number of handles H and boundaries B by

χ = 2 − 2H − B   (2.4)

Such a topological term arises, for example, when one constructs the JT action by dimensionally reducing from higher dimensions in the near horizon region of a near extremal black hole, and in that case it accounts for the ground state entropy of the extremal black hole. We will mostly ignore S_top for now, since we will be working on the disk topology with fixed χ = 1, and work with the action eq.(2.2). We will formulate the path integral for a boundary of fixed length l with the dilaton taking a fixed value φ_B at this boundary. An important limit in which the path integral eq.(2.2) has been studied is the asymptotic AdS limit. In this limit we introduce a cut-off ε to regulate the theory and take the limit ε → 0, with the dilaton and length of the boundary scaling like φ_B ∼ J/ε, l ∼ 2πβ/ε, with J, β fixed. By rescaling we can set β = 1, so there is actually only one dimensionless parameter specifying the limit, given by βJ. So, we take the boundary conditions for φ and l in this section as follows:

φ_B → J/ε   (2.7)
l → 2π/ε   (2.8)

We will see below when dealing with determinants that there is also the issue of taking the cut-offs, introduced to regulate the determinants, to infinity; the asymptotic AdS limit then needs to be defined more precisely, keeping track of the correct order of limits. In this asymptotic AdS limit we will find complete agreement between the path integral in the second order and first order formalisms. In particular, we will show below how the sum over large diffeomorphisms, which correspond to fluctuations of the boundary, arises, with the correct measure, in the second order formalism as well. In the more general case, where the dilaton takes a fixed value φ_B at the boundary of length l, we will show how the path integral can be defined quite precisely, but will not be able to carry out the evaluation till the very end. Interestingly, we will find that the more general case differs from the asymptotic AdS one in important ways, even when φ_B, l ≫ 1. This more general case will also be of interest when we turn to de Sitter space later in the paper.
We have not specified yet what the Vol(Ω) factor in eq.(2.1) refers to. In defining the path integral for any gauge theory one would only like to sum over physically distinct configurations. This can be achieved by summing over all configurations and then dividing by the volume of the gauge group. For our case we would therefore divide by the volume of all diffeomorphisms which leave the geometry -along with the boundary -unchanged.
These diffeomorphisms, which will be defined more precisely below, will be referred to as "small diffeomorphisms", and Vol(Ω) in eq.(2.1) then refers to the volume of these diffeomorphisms. In contrast, there will also be a set of "large diffeomorphisms"; these are physically distinct configurations corresponding to different boundaries and we will sum over them without treating them as gauge transformations.
The path integral defined more precisely
We first consider in our discussions below the general case of a disk with a boundary of length l where the dilaton takes value φ B . As a limiting case we will then turn to the asymptotic AdS boundary conditions, eq.(2.7) and (2.8).
Let us begin by specifying the measure for the sum over metrics more carefully. The starting point is as follows. We consider the space of metrics satisfying the required boundary conditions itself to be a Riemannian manifold and denote this space as R. A point in this space is a metric g_ab on a manifold with disk topology and boundary of length l. The tangent space of all metric deformations at any particular point in R, T_gR, corresponds to small deformations δg_ab. This space is endowed with an ultra-local inner product which takes the form

⟨δ_1g, δ_2g⟩ = ∫ d²x √g g^{ac} g^{bd} δ_1g_ab δ_2g_cd   (2.9)

for two deformations δ_1g, δ_2g. The inner product then defines a metric on R, and the measure for summing over different metrics is defined using the volume element which follows from this metric. In two dimensions things become especially simple because a general metric g_ab, after a coordinate transformation, can always be locally written in terms of a conformal factor σ as

g_ab = e^{2σ} ĝ_ab   (2.10)

where ĝ_ab is a fiducial metric. For the disk topology, the manifold can be covered by a single coordinate chart and we can take ĝ_ab to be a constant negative curvature metric with curvature R̂ = −2 in this chart. Furthermore, a small metric deformation in two dimensions can in general be decomposed in the following manner, [111]:

δg_ab = 2 δσ g_ab + (P V)_ab + (moduli deformations)   (2.11)

Note that this is an orthogonal decomposition with respect to the inner product, eq.(2.9). In eq.(2.11), δσ is a perturbation in the conformal factor, P is an operator acting on vector fields V as

(P V)_ab = ∇_a V_b + ∇_b V_a − g_ab ∇_c V^c   (2.12)

and P†, which is the adjoint of P, acts on traceless metric perturbations as

(P† δg)_b = −2 ∇^a δg_ab   (2.13)

It is well known that the kernel of P† in general corresponds to moduli, which together with the conformal factor then determine the metric, up to coordinate transformations.
In fact eq.(2.11) is the statement that any perturbation around a given metric can be written as a combination of an infinitesimal conformal transformation, an infinitesimal diffeomorphism and an infinitesimal change in the moduli. The kernel of P† vanishes for the disk since it has no moduli.
The orthogonal decomposition in eq.(2.11) means that the measure for summing over metrics can then be written as

Dg_ab = D[σ] D[P V]   (2.14)

Here D[σ] involves the volume element in the space of conformal deformations which arises from the inner product, eq.(2.9). For deformations δ_1σ, δ_2σ, eq.(2.9) takes the form

⟨δ_1σ, δ_2σ⟩ = 8 ∫ d²x √g δ_1σ δ_2σ   (2.15)

Similarly, the measure D[P V] includes the volume element which arises from eq.(2.9) for the metric perturbations of the form δg_ab = (P V)_ab.
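The orthogonality of the trace and traceless pieces in the decomposition eq.(2.11), which underlies the factorized measure eq.(2.14), can be checked symbolically. The following SymPy sketch (an illustration of ours; the matrix entries and symbol names are not from the paper) verifies that a pure-trace perturbation 2δσ g_ab is orthogonal, under the pointwise density of the inner product eq.(2.9), to the traceless part of an arbitrary symmetric perturbation.

```python
import sympy as sp

# A generic 2x2 metric at a point
a, b, c = sp.symbols('a b c', positive=True)
g = sp.Matrix([[a, b], [b, c]])
ginv = g.inv()

# Pure-trace (conformal) perturbation: delta g_ab = 2 * dsigma * g_ab
dsigma = sp.symbols('dsigma')
h_trace = 2*dsigma*g

# Traceless part of a generic symmetric perturbation h_ab (2d projection)
h11, h12, h22 = sp.symbols('h11 h12 h22')
h = sp.Matrix([[h11, h12], [h12, h22]])
tr_h = sum(ginv[i, j]*h[j, i] for i in range(2) for j in range(2))
h_tl = h - sp.Rational(1, 2)*tr_h*g

def inner_density(h1, h2):
    """Pointwise density g^{ac} g^{bd} h1_ab h2_cd of the inner product (2.9)."""
    return sp.simplify(sum(ginv[i, k]*ginv[j, l]*h1[i, j]*h2[k, l]
                           for i in range(2) for j in range(2)
                           for k in range(2) for l in range(2)))

print(inner_density(h_trace, h_tl))   # -> 0: the two pieces are orthogonal
```

The same contraction applied to two pure-trace deformations reproduces the δσ inner product up to the overall numerical factor, which is how the normalization in eq.(2.15) arises.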
We now come to the main new element in this problem. The set of diffeomorphisms we sum over, whose measure we schematically denoted as D[P V ] above, includes both "small" and "large" diffeomorphisms as mentioned above. Small diffeomorphisms leave the boundary unchanged and roughly speaking "fall off fast enough" towards the boundary. Large diffeomorphisms in contrast do not leave the boundary unchanged, in fact they can be thought of as modes which describe the fluctuations of the boundary.
Before proceeding, let us note that in general the space of vector fields on the disk also has a natural inner product given by

⟨V_1, V_2⟩ = ∫ d²x √g g_ab V_1^a V_2^b   (2.16)

For P† to be the adjoint of P, it is easy to see that a boundary term arises during the manipulation

⟨P V, δg⟩ = ⟨V, P† δg⟩ + (boundary term)   (2.17)

For the boundary term in eq.(2.17) to vanish, the vector field V^a must satisfy appropriate boundary conditions. We choose the small diffeomorphisms to correspond to vector fields which satisfy the following two boundary conditions,

n_a V^a = 0,   t^a n^b (P V)_ab = 0   (2.19)

where t^a, n^b are the tangent and normal vectors to the boundary respectively. It is easy to see that the first condition ensures that the boundary remains unchanged, and together the two boundary conditions ensure that the boundary term vanishes for δg = P V in eq.(2.17). Acting on the space of all such small diffeomorphisms, P†P is therefore a self-adjoint operator. As was mentioned above, the small diffeomorphisms, which we have now defined precisely, correspond to the gauge transformations, and therefore Vol(Ω) in eq.(2.1) is given by

Vol(Ω) = Vol(sDiffeo)   (2.20)

where Vol(sDiffeo) denotes the volume of the group generated by the small diffeomorphisms.
The additional large diffeomorphisms we would like to include arise from zero modes of P†P. We turn to describing them next. For now, putting together all the information we have acquired so far, the partition function for the disk topology is given by

Z = ∫ (D[σ] D[P V] Dφ / Vol(sDiffeo)) e^{−S_JT}   (2.21)

where D[P V] refers to the measure for the sum over the small and large diffeomorphisms together.
Large diffeomorphisms
Physically, as has already been noted [7], one can think of the large diffeomorphisms as follows. Consider carrying out the path integral by first fixing a metric, summing over all configurations of the dilaton for this metric, and then summing over all metrics. As the dilaton varies, due to the boundary condition that φ = φ B on the boundary, the boundary must also fluctuate. The diffeomorphisms we are including correspond to these fluctuations of the boundary and they can be thought of as different ways of cutting out a single connected component, meeting our boundary conditions, from a given disk geometry. In particular, we will consider such diffeomorphisms which preserve the boundary length to be l.
We will see in the next subsection that on carrying out the path integral for the dilaton first, along the contour we choose, we obtain a delta function constraint that localizes the metric path integral to geometries with constant curvature R = −2. We restrict ourselves to describing the large diffeomorphisms for such a geometry here.
In general, any vector field on the disk can be written in terms of two scalar fields ξ, ψ as

V_a = ∇_a ξ + ε_a{}^b ∇_b ψ   (2.22)

For a constant curvature metric with R = −2 it is easy to see that zero modes of P, and therefore of P†P, arise from scalars ξ, ψ which satisfy the equation

(∇² − 2)ψ = 0   (2.23)

The large diffeomorphisms arise from modes where ξ = 0, with ψ satisfying eq.(2.23). To be more explicit, take the metric for AdS_2 in "polar coordinates" given by

ds² = (r² − 1) dθ² + dr²/(r² − 1)   (2.24)

The θ coordinate is periodic, θ ∈ [0, 2π], and can be thought of as the Euclidean time direction. Solutions to eq.(2.23) in this coordinate system, with ψ ∼ e^{imθ}, which are regular at the origin r = 1, take the form

ψ_m = e^{imθ} ((r − 1)/(r + 1))^{|m|/2} (r + |m|)   (2.25)

The modes with m = 0, 1, −1 give rise to Killing vectors, corresponding to the SL(2, R) isometries of AdS_2.
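These statements can be verified with a short SymPy computation. The sketch below assumes the polar-coordinate form ds² = (r²−1)dθ² + dr²/(r²−1) for the Euclidean AdS_2 disk and mode functions ψ_m ∝ e^{imθ}((r−1)/(r+1))^{m/2}(r+m), written here for m > 0 (negative m follows by complex conjugation); these are our parametrizations of eq.(2.24) and (2.25). It checks that the metric has R = −2 and that ψ_m solves (∇² − 2)ψ = 0.

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

# Assumed polar-coordinate metric for the Euclidean AdS2 disk:
# ds^2 = f(r) dtheta^2 + dr^2/f(r),  f(r) = r^2 - 1, origin at r = 1
f = r**2 - 1

# For a 2d metric of this form the Ricci scalar is R = -f''(r)
R = -sp.diff(f, r, 2)
print(R)   # -> -2

def laplacian(psi):
    # scalar Laplacian for ds^2 = dr^2/f + f dtheta^2 (here sqrt(g) = 1)
    return sp.diff(f*sp.diff(psi, r), r) + sp.diff(psi, theta, 2)/f

# Assumed large-diffeomorphism mode, regular at r = 1 (shown for m = 2)
m = 2
psi_m = sp.exp(sp.I*m*theta)*((r - 1)/(r + 1))**sp.Rational(m, 2)*(r + m)

print(sp.simplify(laplacian(psi_m) - 2*psi_m))   # -> 0, i.e. (nabla^2 - 2) psi_m = 0
```

The same check passes for any integer m, including m = 0, ±1 where ψ_m generates the Killing vectors mentioned above.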
For other values, |m| > 1, we get zero modes of P†P which correspond to the large diffeomorphisms of interest. The corresponding vector field for ψ in eq.(2.25) is given by

V_{L,m}^a = ε^{ab} ∇_b ψ_m,   i.e.   V^r_{L,m} = i m ψ_m,   V^θ_{L,m} = −∂_r ψ_m   (2.26)

where the subscript L, m denotes that it is a large diffeomorphism with mode number m.
The resulting metric perturbations are

δg_ab = (P V_{L,m})_ab   (2.27)

Note that these large diffeomorphisms do not satisfy the boundary conditions eq.(2.19) in general.
For r → ∞ the vector field takes the form

V^r_{L,m} → i m r e^{imθ},   V^θ_{L,m} → −e^{imθ}   (2.28)

and a general large diffeomorphism is a linear combination V_L = Σ_{|m|>1} ĉ_m V_{L,m}. In particular, since the θ coordinate transforms as θ → θ + V^θ, the vector field generates reparametrizations of the θ coordinate (Euclidean time) in this limit. Also note that the metric in the large r limit is given by ds² ≈ r² dθ² + dr²/r², and we see that δĝ_θr and the fractional changes in the components, δĝ_rr/g_rr and δĝ_θθ/g_θθ, vanish in this limit. As a result these diffeomorphisms give rise to asymptotic isometries in the limit r → ∞. Note that the requirement that the vector field and the associated metric perturbations be real gives the condition

ĉ_{−m} = ĉ*_m   (2.30)

The action for the metric perturbations generated by the large diffeomorphisms stays finite even in the asymptotic limit, with the dilaton and total length scaling like eq.(2.7) and (2.8), as is well known and as we will also see below in section 2.5. The physical reason for this is the low dimensionality of the spacetime we are working in here and the fact that the diffeomorphisms are asymptotic isometries in this limit. Due to their finite action these large diffeomorphisms need to be included in the path integral.
To be very explicit, for the metric eq.(2.24), we note that the boundary of length l is located at r = r_B, where, when l ≫ 1, we have r_B ≈ l/2π ≫ 1. Once a diffeomorphism is turned on we go to new coordinates r̃ = r + V^r, θ̃ = θ + V^θ, where (V^r, V^θ) is the vector field leading to the diffeomorphism. The boundary will now be located at r̃ = r_B, and so

r = r_B − V^r   (2.32)

at the boundary. For large diffeomorphisms, where V·n does not vanish, unlike for small diffeomorphisms, eq.(2.19), the boundary will change. Let us also note that for the diffeomorphisms eq.(2.28) with |m| > 1 the boundary length does not change to linear order in ĉ_m, showing that these give rise to length preserving diffeomorphisms at the boundary. In the asymptotic AdS limit, eq.(2.7) and (2.8), we have that r_B ≈ l/2π, and we see that r_B → ∞.
More generally, away from the asymptotic AdS limit, when we consider a boundary of finite length l and finite boundary value of the dilaton φ_B, the large diffeomorphisms continue to give rise to physically distinct geometries and we need to include these modes in the path integral in the general case as well.
The resulting measure in the space of small and large diffeomorphisms is actually quite complicated in general. This is because the inner product which follows from eq.(2.9) is not orthogonal between the small and large diffeomorphisms, and as a result the metric in the space of diffeomorphisms has off-diagonal components between the large and small diffeomorphisms. In the asymptotic AdS limit though these off-diagonal elements vanish, in a precise manner which we estimate below. As a result the measure simplifies allowing the path integral to be explicitly carried out. More generally, for fixed φ B , l carrying out the path integral is more challenging.
To estimate how the off-diagonal components in the space of diffeomorphisms vanish in the limit when r_B → ∞, let us first consider the diagonal components of the metric. Starting from eq.(2.12), for two small diffeomorphisms V_{s_1}, V_{s_2} meeting the boundary conditions eq.(2.19), we get that

⟨P V_{s_1}, P V_{s_2}⟩ = ⟨V_{s_1}, P†P V_{s_2}⟩   (2.34)

The subscripts s_i are to indicate that the vector fields correspond to small diffeomorphisms. Note that the inner product on the left is between two metric deformations, eq.(2.9), and on the right between two vector fields, eq.(2.16). On the other hand, the inner product between two large diffeomorphisms can be written as a boundary term, since they are zero modes of P†P. With the metric eq.(2.24) and boundary at r = r_B, this takes the form

⟨P V_{L,m_1}, P V_{L,m_2}⟩ = ∮ dθ √γ n^a V_{L,m_1}^b (P V_{L,m_2})_ab ∝ δ_{m_1,−m_2}   (2.35)

where V_{L,m} is given in eq.(2.26) and δ_{m_1,−m_2} is the Kronecker delta symbol.
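The Kronecker delta structure in the boundary inner product simply reflects the angular integral over the e^{imθ} mode factors; a one-line SymPy check (our illustration) makes this explicit:

```python
import sympy as sp

theta = sp.symbols('theta', real=True)

def angular_overlap(m1, m2):
    # \oint dtheta e^{i (m1 + m2) theta} over [0, 2pi]: nonzero only for m1 = -m2
    return sp.integrate(sp.exp(sp.I*(m1 + m2)*theta), (theta, 0, 2*sp.pi))

print(angular_overlap(2, -2))   # -> 2*pi
print(angular_overlap(2, 3))    # -> 0
```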
A general small diffeomorphism can be decomposed in a basis of vector fields going like ∼ e^{imθ}, which are eigenmodes of −i∂_θ and also eigenmodes of P†P with eigenvalues λ. Denoting such a basis element as V_{s,λ,m}, the inner product between a large diffeomorphism V_{L,m} and V_{s,λ,−m} can also be expressed as a boundary term and is given by

⟨P V_{L,m}, P V_{s,λ,−m}⟩ = ∮ dθ √γ n^a V_{L,m}^b (P V_{s,λ,−m})_ab |_∂   (2.36)

where √γ = √(r_B² − 1) is the boundary volume element, n^r is the unit normal and |_∂ denotes the boundary values at r = r_B. It is easy to see that the expression above does not vanish in general.
As discussed in appendix E, in the asymptotic AdS limit, when r_B → ∞, one gets that the ratio

⟨P V_{s,λ,−m}, P V_{L,m}⟩ / [⟨P V_{s,λ,−m}, P V_{s,λ,m}⟩ ⟨P V_{L,m}, P V_{L,−m}⟩]^{1/2} → 0   (2.37)

It is this ratio which determines the importance of the off-diagonal terms compared to the diagonal ones in the volume element for the sum over all diffeomorphisms. Since it vanishes, we learn that the off-diagonal terms can be neglected when r_B → ∞ and the measures for the small and large diffeomorphisms decouple. We learn that

∫ D[P V_s] / Vol(sDiffeo) = (det'(P†P))^{1/2}   (2.38)

where in the last equality we have carried out the integral over the small diffeomorphisms, since the action is independent of them. The prime in det'(P†P) indicates that the zero modes have been removed. In fact, for the disk, with the boundary conditions eq.(2.19), there are no zero modes, as we explain in appendix D. The path integral in the asymptotic AdS limit then becomes

Z = ∫ D[σ] D[P V_L] Dφ (det'(P†P))^{1/2} e^{−S_JT}   (2.39)

From eq.(2.35) we also learn the behaviour of ⟨P V_{L,m}, P V_{L,−m}⟩ when r_B → ∞, leading to the measure in eq.(5.40), where we used eq.(2.30) in obtaining the above form of the measure. Before closing this subsection, let us introduce a variable u which is the rescaled proper length along the boundary. Using a small diffeomorphism, we can bring the boundary line element to the form

ds|_∂ = (l/2π) du   (2.42)

For the metric eq.(2.24), when the boundary length l → ∞, we have near the boundary, where r ≫ 1,

ds² ≈ r² dθ² + dr²/r²   (2.43)

Using the form of the vector fields specifying a large diffeomorphism, V^r_L, V^θ_L in eq.(2.28), it is easy to then show that u = θ̃. For infinitesimal transformations we can therefore write

u = θ + V^θ   (2.44)

On comparing, we find that the measure obtained in eq.(5.40) agrees with that which arises in the first order formalism, as discussed in [3] and [13]. Let us also note, as was mentioned above, that the θ direction can be thought of as the Euclidean time direction. From eq.(2.44) we also see that large diffeomorphisms act as reparametrizations of θ when l → ∞. For this reason we will sometimes refer to the large diffeomorphisms as time reparametrizations below.
Integral over the Dilaton
Having described the diffeomorphisms, small and large, in some detail, and the measure for summing over them we now come back to a systematic evaluation of the path integral. Our starting point is eq.(2.21). We will first carry out the integral over the dilaton, then over the conformal factor σ and finally turn to the sum over the diffeomorphisms. Also, to begin here we will consider the case of a general boundary of length l with the dilaton taking a value φ = φ B on the boundary, and consider the asymptotic AdS limit as a special case in subsection 2.5.
We carry out the dilaton path integral in the background of a general metric g_ab given by eq.(2.10). To begin we write a general dilaton field as

φ = φ_0(r) + δφ   (2.46)

where φ_0(r) is a fixed function of r given by

φ_0(r) = A r   (2.47)

The constant A is fixed to take the value

A = φ_B / r_B   (2.48)

so that for the metric in eq.(2.24) the locus r = r_B also has the required boundary value of the dilaton, φ = φ_B, with δφ = 0. Let us note in passing that φ_0(r) satisfies the classical equations of motion in JT gravity.
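The statement that φ_0(r) solves the classical equations of motion can be verified symbolically. The sketch below assumes the standard JT metric equation of motion ∇_a∇_bφ − g_ab∇²φ + g_abφ = 0 (our normalization, with R_AdS = 1) and the disk metric in the form ds² = dr²/(r²−1) + (r²−1)dθ²; the linear profile φ_0 = A r then makes every component vanish.

```python
import sympy as sp

r, theta, A = sp.symbols('r theta A', positive=True)
x = [r, theta]
f = r**2 - 1
g = sp.diag(1/f, f)        # assumed disk metric: ds^2 = dr^2/f + f dtheta^2
ginv = g.inv()

# Christoffel symbols Gamma^k_{ij}
Gamma = [[[sp.simplify(sum(ginv[k, l]*(sp.diff(g[l, i], x[j]) + sp.diff(g[l, j], x[i])
                                       - sp.diff(g[i, j], x[l]))/2
                           for l in range(2)))
           for j in range(2)] for i in range(2)] for k in range(2)]

phi = A*r                  # classical dilaton profile phi_0(r) = A r

def hess(i, j):
    """nabla_i nabla_j phi"""
    return sp.diff(phi, x[i], x[j]) - sum(Gamma[k][i][j]*sp.diff(phi, x[k])
                                          for k in range(2))

box_phi = sum(ginv[i, j]*hess(i, j) for i in range(2) for j in range(2))

# JT metric equation of motion: nabla_a nabla_b phi - g_ab box phi + g_ab phi = 0
eom = sp.Matrix(2, 2, lambda i, j: sp.simplify(hess(i, j) - g[i, j]*box_phi
                                               + g[i, j]*phi))
print(eom)
```

The printed matrix vanishes identically, confirming that φ ∝ r is the classical dilaton solution on this background for any value of A.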
The path integral for the dilaton requires us to sum over various values for δφ. To carry out this sum we adopt the prescription, now commonly adopted, [3], of rotating the contour for δφ to lie along the imaginary axis [−i∞, i∞], so that after taking δφ → iδφ, δφ has the range [−∞, ∞]. Next we expand δφ into eigenmodes of the operator (∇̂² − 2), where ∇̂² is the scalar Laplacian obtained from a metric ĝ_ab of constant negative curvature R̂ = −2. We require that δφ vanishes at the boundary (at r = r_B) so that the dilaton, eq.(2.46), takes the value φ_B at the boundary as required.
With this prescription the path integral over the dilaton gives rise to a delta function,

∫ D[δφ] e^{(i/2) ∫ d²x √g δφ (R[σ]+2)} ∝ δ(R[σ] + 2)   (2.50)

where R[σ] is the Ricci scalar which arises from the metric eq.(2.10), δ(R[σ] + 2) denotes a delta function which has non-trivial support only when the curvature R[σ] = −2, and S_JT,∂ is the boundary part of the JT action, eq.(2.2), which with φ = φ_B takes the form

S_JT,∂ = −φ_B ∮ du √γ K   (2.51)

After carrying out the dilaton path integral, the partition function therefore takes the form

Z = ∫ (D[σ] D[P V] / Vol(sDiffeo)) δ(R[σ] + 2) e^{−S_JT,∂}   (2.52)

Let us note before proceeding that one could have considered another contour for doing the dilaton integral. In fact, from the higher dimensional point of view it is perhaps more natural to consider a contour where the dilaton is real, with a range [−Φ_0, ∞], where Φ_0 is the prefactor of the topological term, eq.(2.3), since this ensures that the volume of the internal space does not become negative. We will not have anything further to say about such a contour here and leave it for future consideration.
We have glossed over one subtlety above. The path integral, as we mentioned at the beginning, is being done for a general metric of the form eq.(2.10), and the measure for the dilaton integral will therefore depend on the conformal factor σ. This measure arises from an inner product which, for two dilaton perturbations δ_1φ, δ_2φ, takes the form

⟨δ_1φ, δ_2φ⟩ = ∫ d²x √g δ_1φ δ_2φ   (2.53)

where g_ab is the metric including the conformal factor. The dependence of the measure on the conformal factor is the same as for a scalar field satisfying Dirichlet boundary conditions, and can be obtained from the conformal anomaly, as is discussed in appendix F. Since, as we will see shortly below, satisfying the delta function in eq.(2.52) results in setting σ = 0, this dependence results at most in a constant multiplying the partition function. We can therefore ignore this subtlety, since we are not keeping track of the overall multiplicative constant in the partition function.
Integral over the Liouville mode
Next, we turn to the path integral over the Liouville mode. The delta function we obtained in eq.(2.52) makes this easy. The argument of the delta function manifestly vanishes when σ = 0. Linearizing around this configuration we get

R[σ] + 2 = −2(∇̂² − 2)δσ + O(δσ²)   (2.54)

It is easy to see that in conformal gauge the action, eq.(2.2), gives rise to a well-defined variational principle with δσ vanishing on the boundary (where φ_B, l are fixed). This is true because we have included the Gibbons-Hawking boundary term in the action. As a result we will sum over all Liouville mode fluctuations in the path integral subject to the condition that δσ vanishes on the boundary.
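The linearization of the curvature constraint can be checked explicitly. The 2d identity R = e^{−2σ}(R̂ − 2∇̂²σ), valid for any conformal pair g = e^{2σ}ĝ, gives R + 2 = −2(∇̂² − 2)δσ + O(δσ²) once R̂ = −2. The SymPy sketch below (our illustration) verifies this to first order for the disk metric in the assumed polar form and an illustrative radial perturbation δσ = ε r².

```python
import sympy as sp

r, theta, eps = sp.symbols('r theta epsilon')
x = [r, theta]

def ricci_scalar(g):
    """Ricci scalar of a 2d metric g(x) via the Christoffel symbols."""
    ginv = g.inv()
    Gamma = [[[sum(ginv[k, l]*(sp.diff(g[l, i], x[j]) + sp.diff(g[l, j], x[i])
                               - sp.diff(g[i, j], x[l]))/2 for l in range(2))
               for j in range(2)] for i in range(2)] for k in range(2)]
    def ricci(i, j):
        return (sum(sp.diff(Gamma[k][i][j], x[k]) for k in range(2))
                - sum(sp.diff(Gamma[k][i][k], x[j]) for k in range(2))
                + sum(Gamma[k][k][l]*Gamma[l][i][j] - Gamma[k][j][l]*Gamma[l][i][k]
                      for k in range(2) for l in range(2)))
    return sp.simplify(sum(ginv[i, j]*ricci(i, j) for i in range(2) for j in range(2)))

f = r**2 - 1
ghat = sp.diag(1/f, f)                 # fiducial disk metric with Rhat = -2
s = r**2                               # illustrative delta-sigma profile
g = sp.exp(2*eps*s)*ghat               # conformally perturbed metric

R = ricci_scalar(g)
R_lin = R.subs(eps, 0) + eps*sp.diff(R, eps).subs(eps, 0)

lap_s = sp.diff(f*sp.diff(s, r), r)    # hat-Laplacian of s (sqrt(ghat) = 1)
target = -2 - 2*eps*(lap_s - 2*s)      # -2 - 2 (hat nabla^2 - 2) delta sigma

print(sp.simplify(R_lin - target))     # -> 0
```

The operator (∇̂² − 2) appearing here is the same one whose determinant shows up after the δσ integral in the next step.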
Carrying out the integral over the non-zero modes of (∇̂² − 2) then gives rise to a factor of (det(−∇̂² + 2))^{−1} in the partition function. The zero mode is fixed by requiring that the boundary has length l. We note that, as for the dilaton above, the measure D[σ] in general has a non-trivial dependence on σ, since the inner product eq.(2.15) involves the full metric, eq.(2.10). However, again, this dependence, which is the same as for a massless scalar and can be obtained from the conformal anomaly, only gives rise to an overall multiplicative factor in the partition function once we set σ = 0. The resulting determinant det(−∇̂² + 2) then only depends on the metric ĝ_ab.
Putting all this together gives

Z = ∫ (D[P V] / Vol(sDiffeo)) (det(−∇̂² + 2))^{−1} e^{−S_JT,∂}   (2.55)

Note that the determinant which appears above depends on the metric ĝ_ab and therefore on the large diffeomorphisms.
The asymptotic AdS limit and some remarks
So far we have been considering the general case of a boundary of finite length. There are two complications in going further with the evaluation of the partition function when the boundary has a finite length, even for the case when the length is large. First, as was discussed in section 2.2, the metric perturbations generated by the small and large diffeomorphisms are not orthogonal, and as a result the measure for summing over them is quite complicated to obtain. Second, the dependence of the determinant det(−∇̂² + 2) on large diffeomorphisms is also not easy to obtain. To proceed, we will therefore take the asymptotic AdS limit, where we take the length to go to infinity, while also taking the dilaton to diverge at the boundary, as given in eq.(2.7) and (2.8). Actually, this limit has to be defined more precisely in the path integral, where we are dealing with determinants of various operators. These determinants are formally infinite and need to be regulated. We will regulate the determinants by first keeping only the contributions of eigenmodes with eigenvalues below a finite cut-off, then taking the asymptotic AdS limit, where l → ∞, eq.(2.8), and then finally taking the cut-off on the eigenvalues to infinity. This order of limits is part of our definition of the asymptotic AdS limit. It will be responsible for some of the simplification which occurs.
In the more general case where l is finite we need to regulate the determinants and then take the cut-off introduced for regulating the determinants to zero², keeping l fixed. This makes the evaluation of the determinants more complicated. To explain some of the resulting complications, consider evaluating the determinant det(−∇̂² + 2). We cannot use conformal invariance for evaluating this determinant, unlike det(−∇̂²), which arises for a massless scalar and for which at least some information can be obtained, as we will see in the next subsection. A direct evaluation of det(−∇̂² + 2) is also not easy. For example, consider evaluating this determinant in the metric eq.(2.24). The eigenmodes of (−∇̂² + 2) can also be simultaneously chosen to be eigenmodes of ∂_θ. Denoting these modes by φ_{λ,m}, we have that φ_{λ,m} ∼ e^{imθ} and (∇̂² − 2)φ_{λ,m} = −λ φ_{λ,m}. In the asymptotic AdS limit, as mentioned above, we first take the boundary r_B → ∞, keeping m, λ fixed, and then take m, λ → ∞. This means that we are including modes whose wavelength along the θ direction Λ = r_B/m ≫ 1. Reinstating the radius of AdS, R_AdS, in this relation, we see that in the asymptotic AdS limit we are only including modes with

Λ ≫ R_AdS   (2.56)

For such modes the asymptotic form for φ_{λ,m} can be used, and this considerably simplifies the analysis. One can then show that the determinant in this limit is independent of the large diffeomorphisms, as discussed in appendix F.
In the more general case, when the length is finite, there are modes with m ≥ r_B whose wavelength

Λ ≤ R_AdS   (2.57)

and the contributions of these modes also need to be included. This is harder to do, since we need to include terms with an arbitrary number of derivatives beyond the Schwarzian term. For example, these can arise in the action due to the expansion of the trace of the extrinsic curvature, to obtain the analog of eq.(B.38) in the Euclidean AdS disk.
Similarly, there are high wavenumber modes, for both the large and small diffeomorphisms, also with Λ < R_AdS, whose contribution would need to be included at any finite value of l. This is again complicated, for two reasons. First, our estimate that the ratio in eq.(2.37), for the inner product of normalized small and large diffeomorphisms, is suppressed at large r_B is valid only for modes with fixed mode number m as r_B → ∞, as discussed further in appendix E. The mixing between large and small diffeomorphisms discussed in subsection 2.2 above therefore does not vanish for modes with wavelength Λ < R_AdS. Second, obtaining the contribution due to such modes, even after neglecting this mixing, is not straightforward, since terms beyond the Schwarzian derivative for the large diffeomorphisms would need to be included, for example in the determinant det'(P†P) which arises from the small diffeomorphisms; see the discussion above eq.(F.1). As a toy model, in appendix H we show how these high wavenumber modes could potentially have a significant effect on the behaviour of the partition function when l, the boundary length, becomes very big. In the cosmological context which we study below, eq.(2.57) is replaced by

Λ < 1/H   (2.58)

where H is the Hubble constant, and this condition therefore corresponds to modes which have not yet exited the horizon and "frozen out". Our analysis shows that such modes can significantly affect the wavefunction.
More details on the asymptotic AdS limit
For all these reasons, hereafter in this subsection we only consider the asymptotic AdS limit. Since the small and large diffeomorphisms become orthogonal in this limit, the path integral eq.(2.55) simplifies. The measure for the large diffeomorphisms is given in eq.(5.40). The prime in det′(P†P) is to indicate that the zero modes in the space of small diffeomorphisms of the operator P†P are to be excluded in calculating the determinant. There is in fact one zero mode for the operator P†P satisfying the boundary conditions eq.(2.19). More discussion of this zero mode is contained in appendix F.1, above eq.(F.41).
In the asymptotic AdS limit as we have defined it above, one can show that both det′(P†P) and det(−∇² + 2) become independent of the large diffeomorphisms, as discussed in appendix F. Up to a multiplicative constant, which we are not keeping track of, we then obtain a final expression which agrees completely with what has been obtained from the first order formalism, [3].
For completeness let us carry out the remaining integral over V_L here. As is well known, the boundary action S_JT,∂ gives rise to the Schwarzian term involving the time reparametrization generated by the large diffeomorphism, as follows. As discussed in appendix B, the extrinsic curvature for a general boundary curve specified as (r(u), θ(u)) can be written down explicitly, where r(u) is the radial coordinate along the boundary as a function of proper time u and prime denotes a derivative with respect to u. From eq.(2.43) we obtain dθ/du up to corrections sub-leading in r_B, with r(u) given in eq.(2.32). Substituting then gives the Schwarzian form of the boundary action. The net result is the path integral eq.(2.60) with action and measure eq.(5.40). This agrees with the result obtained earlier. In particular note that the measure we have obtained from the second order formalism above agrees with the measure obtained in [13], see also [3]. The integral over the modes ĉ_m is in fact one-loop exact [13]. Using the measure eq.(5.40), the action eq.(2.65), and noting eq.(2.30), we obtain the path integral; adding the topological term eq.(2.3) for completeness gives the partition function in the asymptotic AdS limit. Let us conclude this section with a remark. As mentioned above, for the boundary located at a finite value of the length l, with the dilaton taking value φ_B, the path integral we have defined is still quite explicit, eq.(2.21), but much harder to fully evaluate. This is true even when the boundary length l ≫ 1, which one might expect to be simpler than the general case. We hope to return to this issue, and also to the analogous one in dS JT gravity, where it is related to computing the wavefunction at late but finite time after including modes which have not yet exited the horizon, in the future.
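For reference, since eq.(2.65) is not displayed here, the Schwarzian derivative of a reparametrization θ(u) appearing in the boundary action is the standard combination

```latex
\operatorname{Sch}\bigl(\theta,u\bigr)
\;=\;
\frac{\theta'''}{\theta'}\;-\;\frac{3}{2}\left(\frac{\theta''}{\theta'}\right)^{2},
```

and the one-loop exactness quoted above refers to the integral over fluctuations of θ(u) with this action and the measure eq.(5.40).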
JT path integral with matter in AdS
We shall next extend the analysis of the previous section to include additional massless scalar matter fields. The path integral takes the same form as before, where S_JT is the same action for the JT theory, the measures Dφ, Dg_μν and the volume of small diffeomorphisms, Vol(sdiffeo), are the same as above, and S_M is the action for N minimally coupled massless scalar fields ϕ_i. As can be seen from this action, the matter fields do not directly couple to the dilaton. We will carry out the path integral for fixed boundary values ϕ̄_i(u) of the scalar fields, with u being, up to a multiplicative constant, the proper length along the boundary, eq.(2.42). The resulting partition function is a functional of ϕ̄_i(u), besides being a function of the length l and the boundary value of the dilaton φ_B, as before. We discuss the general case of finite l, φ_B first and then turn to the asymptotic AdS limit below.
Working in conformal gauge we can carry out the integral over the dilaton and the Liouville mode. Since the matter fields do not couple to the dilaton directly, the dilaton integral will localize the path integral to constant negative curvature metrics as before and allow us to set the Liouville mode σ in eq.(2.10) to vanish. After the Liouville mode integral is done we are then left with the integral over diffeomorphisms and the matter fields. The measure for the scalar fields in this path integral is to be evaluated using a metric ĝ_ab with curvature R̂ = −2, and follows from the standard ultra-local inner product for two scalar perturbations. Thus the background geometry for the scalar path integral is hyperbolic space with a boundary determined by the large diffeomorphisms.
To perform the path integral over the fields ϕ_i, we first expand them around the classical solution obtained by solving the scalar Laplace equation with the boundary condition specified by eq.(3.3), and also demanding that the solution is regular everywhere in the interior.
Let us denote the resulting solution by ϕ_i^(0). The boundary condition eq.(3.3) then translates to a Dirichlet condition for the fluctuation δϕ_i. We can write the path integral for the matter fields in terms of this fluctuation, where S_M,cl, the classical contribution resulting from ϕ_i^(0), is given, after using the equations of motion, by a boundary term, with ∂_n being the normal derivative at the boundary. Note that the Laplacian ∇̂² has no zero modes for the Dirichlet boundary conditions satisfied by δϕ_i. Thus the path integral over δϕ_i is straightforward. In much of the discussion later in this paper, we will drop the factor of 1/2 that appears in the determinant in the resulting expression, as it will only change the overall numerical coefficient of the path integral, which we are not keeping track of. Both S_M,cl and the determinant on the RHS depend on the large diffeomorphisms. This dependence is not easy to obtain for the general case of a finite boundary of length l, as is discussed in appendix F. The reason, related to the discussion towards the end of subsection 2.5, is the presence of high wave number modes with wavelength less than the AdS radius, Λ < R_AdS. The Schwarzian action is no longer sufficient to describe the dependence on the large diffeomorphisms for such modes. In addition, as is also discussed in subsection 2.2, the subsequent step involving the integral over the diffeomorphisms is also not easy to carry out in this case.
Keeping these points in mind we again restrict ourselves to the asymptotic AdS limit for the subsequent evaluation of the path integral. As discussed in appendix F, the dependence on the large diffeomorphisms in det(−∇²) vanishes in the asymptotic AdS limit when r_B → ∞, after a suitable length dependent counter term is added. If the boundary values of the scalars ϕ_i^(0) vanish, the path integral is therefore unchanged (up to an overall temperature independent prefactor) by the presence of the matter in the asymptotic AdS limit. And the thermodynamics essentially does not change, other than a possible change in the ground state entropy.
When the boundary values ϕ̄_i(u) are non-zero, the matter sector does couple to the large diffeomorphisms. In appendix G, eq.(G.13), we obtain the dependence of S_M,cl for r_B → ∞. Here θ(u) specifies the time reparametrization as a function of the boundary proper length u, and θ̇ = dθ/du; ϕ̄_i(u) is the value of the scalars along the boundary and the function F is defined in eq.(G.7). At linear order in δθ(u) this simplifies, eq.(3.11). Up to an overall constant the path integral then takes the form of an integral over the large diffeomorphisms. The measure D[P V_L] is defined in eq.(5.40), and the action S_JT,∂ involves the Schwarzian derivative of θ(u), eq.(2.65). The path integral can then be done by integrating out the large diffeomorphisms perturbatively, including the self interactions from the Schwarzian term and the interactions with the matter fields, to obtain Z_JT+M as a function of the boundary values of the scalar fields ϕ̄_i(u), β and J. We will not go into the details here. These calculations are also discussed in [7].
Further remarks
Let us end this section with some comments. We have seen that the path integral at finite values of the boundary length l is difficult to calculate even when l ≫ 1. Some of the reasons for this were mentioned above. On the other hand, the quantum effects of matter vanish when l → ∞, since the matter determinant does not couple to the large diffeomorphisms in this limit anymore, as was also mentioned above.
One way to obtain a tractable situation where quantum effects due to matter can be incorporated is to consider a semi-classical limit by taking G, the gravitational constant which appears in front of the JT action eq.(2.2), to vanish, G → 0, with the number of scalar fields N → ∞, keeping GN fixed, [18]. In this limit the measure for the diffeomorphisms is not important, since gravity is classical and quantum fluctuations over these diffeomorphisms can be ignored; similarly the dependence of det(−∇² + 2) on the large diffeomorphisms can be neglected. However the quantum effects of matter remain. This limit has received considerable attention recently, [5,8,18,27]. The saddle point equations in this limit for the system we are considering were obtained in [18]. It was found that they can typically be solved only in slowly varying situations where the excited modes have wavelengths Λ ≫ R_AdS, as discussed in [18].
More generally one could consider a system away from the semi-classical limit, with a finite number of matter fields, where we are interested in the response to slowly varying sources, provided for example by the boundary values ϕ̄_i(u). In this case one can consider constructing a Wilsonian effective action, which will contain the sources coupled to the large diffeomorphisms, by integrating out the other degrees of freedom. The determinants which arise are functionals on Diff(S¹)/SL(2, R) and can be expanded in a derivative expansion. The leading term in this expansion which depends on the large diffeomorphisms is the Schwarzian term; other terms involve more derivatives and would be suppressed when eq.(3.14) is met. The resulting effective action, after adding a suitable counter term to cancel a boundary length dependent term, is then given in eq.(3.15). The action S_M,cl above arises from the quadratic action of the scalar fields and depends on the boundary values ϕ̄_i(u) and the time reparametrization θ(u); to leading order it is given in eq.(3.11), and a correction at the next order can also be similarly obtained. The measure in eq.(3.15) is the Diff(S¹)/SL(2, R) invariant measure given in eq.(5.40) above. The coefficient C in front of the Schwarzian action to begin with, before the short wavelength modes have been integrated out, receives several contributions. The first term, φ_B, is from the classical JT action. The matter fields contribute the second term, GN, eq.(F.16); the third term, −26G, comes from det′(P†P), eq.(F.23); and the term with coefficient q², which we have not been able to determine and should be of order unity, arises from det(−∇² + 2), see eq.(F.24). The factor of G multiplying these corrections to the Schwarzian shows that the effect of the matter determinant etc. vanishes when G → 0. Note that the coefficient C will be renormalized from this starting value once we integrate out the short wavelength modes.
This Wilsonian effective action can then be used for calculating the long-wavelength properties of the system, including computing loop effects from modes meeting eq.(3.14). If necessary a renormalization procedure can be carried out to make such calculations optimal. We leave a further analysis along these lines for the future.
Double trumpet in AdS
In this section we will extend our discussion to consider the path integral over connected geometries with two boundaries in Euclidean AdS. These spaces have Euler character χ = 0. Such a spacetime is often referred to as the double trumpet geometry. The corresponding path integral is denoted as Z_DT. The action is given as in eq.(2.2) above, with boundary terms at both boundaries. Note that in this case the boundary contribution to the path integral will arise from both boundaries. The non-trivial part of the calculation, as for the disk, is to correctly identify the metric configurations which need to be included in the path integral and to obtain a measure for summing over them. We discuss this issue first here, and thereafter in the next subsection will carry out the path integral in a systematic manner, analogous to section 2, by first summing over the dilaton, the conformal mode and then the diffeomorphisms. The boundary conditions we impose are that φ takes values φ_B,1, φ_B,2 at the two boundaries, which are taken to have lengths l_1, l_2 respectively. It will be convenient to keep in mind a background metric for the double trumpet, eq.(4.2), which has curvature R = −2, with two boundaries at r → ∞ and r → −∞ (henceforth also referred to as the right and left boundaries respectively). Note that this metric can be obtained from eq.(2.24) by the analytic continuation eq.(4.3); however, now the periodicity of θ is a free parameter b. This parameter actually corresponds to a modulus and we will integrate over it in the path integral, as we will see below.
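Since eq.(4.2) is not displayed here, a standard representative of such a constant negative curvature metric with two boundaries, consistent with the factors of (r ± i) appearing in the mode solutions below (our reading of the form of eq.(4.2), not fixed by the text itself), is

```latex
ds^{2} \;=\; \frac{dr^{2}}{1+r^{2}} \;+\; \bigl(1+r^{2}\bigr)\, d\theta^{2},
\qquad \theta \simeq \theta + b,\quad r \in (-\infty,\infty).
```

In this form the minimal-length geodesic winding the θ circle sits at r = 0 and has length b, matching the "neck" discussion of section 4.2 below.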
While we first set up the path integral with general boundary conditions, as in the case of the disk, it will turn out to be difficult to carry out the calculations all the way through in this general case. As a result, at some point in the discussion below we will specialize to the asymptotic AdS limit. This limit is defined for two boundaries by taking the length of both boundaries to go to infinity and also taking φ_B → ∞, while keeping the ratio φ_B/l fixed at each boundary; the ratio φ_B/l takes an independent value at each of the two boundaries. The resulting answer will then depend on both the parameters Jβ_1 and Jβ_2. The asymptotic limit corresponds to taking ϵ_1, ϵ_2 → 0.
The path integral for the disk topology can be interpreted as the partition function of the boundary theory, as was mentioned above. After a suitable rescaling of ϵ_1, ϵ_2 we can take the renormalized lengths of the two boundaries in the double trumpet to be β_1, β_2, so that the path integral for the double trumpet geometry can be interpreted as giving a contribution to the connected two point function ⟨Z(β_1)Z(β_2)⟩ of the partition functions of two boundary theories. We will have more to say about this interpretation in section 4.2 below.
In general, metric perturbations about a metric g_ab can be decomposed, similar to eq.(2.11) in subsection 2.1, as δg_ab = δσ g_ab ⊕ δg̃_ab, (4.5) where δg̃_ab are traceless metric perturbations which will include, for the double trumpet, perturbations produced by small diffeomorphisms, large diffeomorphisms and moduli. We describe all three types of perturbations below.
Perturbations produced by small diffeomorphisms are generated by vector fields V_s and are of the form δg_ab = (P V_s)_ab, where the operator P is given in eq.(2.12). The vector fields V_s satisfy the boundary conditions eq.(2.19) at both boundaries. These perturbations describe the same spacetime after a coordinate transformation and therefore give rise to gauge transformations. Their volume Vol(sdiffeo) is the factor Vol(Ω) in the denominator of eq.(4.1). The perturbations produced by large diffeomorphisms describe fluctuations of both boundaries in the double trumpet case. They correspond, as in the disk, to zero modes of the operator P†P, with the operator P† as defined in eq.(2.13) acting on traceless metric perturbations. Denoting a vector field which generates such a transformation by V_L, the condition V_L satisfies is P†P V_L = 0 (4.7) and the metric perturbation it produces is δg_ab = (P V_L)_ab. Taking a cue from the disk case, in identifying these we look for diffeomorphisms which reduce to reparametrizations of the two boundary circles in the limit where both boundaries have large length. Such diffeomorphisms can be obtained by setting V_L = *dψ, with ψ being a scalar field satisfying eq.(2.23). Solutions to this equation for the background metric eq.(4.2) can be obtained from eq.(2.25) after noting the analytic continuation eq.(4.3), and are given in eq.(4.9). Note that we have twice the number of solutions compared to the disk case, since there are two modes for every value of m. And unlike the disk, there is no condition of regularity in the interior which cuts down the number of solutions, since the coordinate system in which the metric in eq.(4.2) is written is non-singular everywhere. We should also mention that the sum in eq.(4.9) does not include an m = 0 mode. We will have more to say about this sector shortly.
Before proceeding let us note that the solution in eq.(4.9) has functions involving (r + i) and (r − i) raised to various powers. These are defined, for a general exponent a, via the log function, with the branch cut in both cases taken along the negative real axis, see eq.(4.13). With these definitions we see that the reality of ψ imposes a condition relating the coefficients in eq.(4.9).
We will sometimes find it convenient to work with linear combinations of the basis elements used in the expansion in eq.(4.9), and rewrite ψ in terms of coefficients γ_m, δ_m, which are related to A_m, B_m by a linear transformation. It is easy to see that the radial functions multiplying γ_m and δ_m vanish, to leading order, as r → ∞ and r → −∞ respectively. In this basis it is therefore manifestly clear that the expansion eq.(4.9) includes independent perturbations at the two ends. Using eq.(4.8) it is also easy to see that the resulting diffeomorphisms become independent reparametrizations of the θ direction at r → ±∞.
We now turn to the m = 0 sector. There are two solutions to eq.(2.23) in this sector; these are independent of θ and we denote them ψ_1, ψ_2, with arbitrary coefficients t, s. It is easy to see that ψ_1 corresponds to the U(1) isometry under which θ → θ − s. Since it keeps the metric and boundary unchanged it does not correspond to a distinct spacetime, and we must not sum over it in the path integral.
On the other hand, ψ_2 gives rise to the diffeomorphism V_tw = *dψ_2. The subscript tw is to denote that this vector field introduces a relative twist between the two boundaries. It is easy to see that V^r_tw = 0, and that this diffeomorphism produces a relative twist between the θ variables parametrizing the circles at the two boundaries in the limit when the boundaries have length l → ∞. It is in fact one of the two moduli associated with this geometry, with a corresponding metric perturbation. The other modulus for this space is related to the parameter b, which is the size of the θ circle, eq.(4.2). Consider a vector field V_b under which θ → θ + (δb/b)θ, so that the periodicity of θ changes. The subscript b is to denote that this is the vector field corresponding to the modulus parameter b. This vector field is not single valued on the circle; however, the metric perturbation it gives rise to is well-defined and single valued. We will also include this metric perturbation in the sum over all configurations in the path integral. Note that both V_b and V_tw satisfy the zero mode condition eq.(4.7). To summarize the discussion so far, then, we have argued that the general metric deformations which we sum over include changes in the conformal factor and deformations associated with small diffeomorphisms which preserve the boundaries, large diffeomorphisms which change the boundaries, and two moduli.
Let us now turn to the measure on the space of all metric deformations. This measure arises from the inner product in the space of metric perturbations, eq.(2.9), as described in section 2. The decomposition in eq.(4.5) is an orthogonal one with respect to the inner product eq.(2.9). Thus the measure in eq.(2.1) can be written as a product, with the measure for the sum over conformal factors following from the inner product, eq.(2.15).
As discussed in appendix E, in general the large and small diffeomorphisms included in D[g_ab] are not orthogonal to each other and the resulting measure is hard to obtain, even when the lengths l_1, l_2 of the two boundaries are large but finite. This is because modes of wave number m̃ coming from small and large diffeomorphisms, meeting the condition eq.(4.24), can mix with each other. The resulting complications for the disk topology are discussed in subsection 2.5, and there are similar issues in the double trumpet as well, see the discussion after eq.(E.40).
In order to avoid these complications we will therefore finally have to resort to the asymptotic AdS limit as described above. In this limit there are no modes which meet the condition eq.(4.24) (since the l → ∞ limit is taken while keeping the mode number m̃ fixed). The small and large diffeomorphisms, and the moduli, are all orthogonal to each other in this limit, see the discussion after eq.(I.22), and the measure then splits up into a product, eq.(4.25), where the three terms on the RHS denote the measures for summing over the small and large diffeomorphisms and the two moduli, with V_mod denoting V_tw, V_b.
We will describe these three measures in more detail next. The definition of D[P V_s] is the same as in the disk case, and the integral over the small diffeomorphisms can be carried out in an analogous way. Note that there is a zero mode of P†P which corresponds to the U(1) isometry of eq.(4.2).
The measure on the space of large diffeomorphisms is described in appendix I.1. Expressing the complex modes γ_m, δ_m appearing in eq.(4.15) in terms of the real variables p_m, q_m, r_m, s_m, as in eq.(4.27), we find that the measure in terms of these coefficients is given by eq.(I.13), involving ∏_m dp_m dq_m dr_m ds_m (16 b m³ + m sinh²(πm))². (4.28) Finally, for the moduli, from eq.(4.20) and eq.(4.22) it is straightforward to evaluate the inner products using eq.(2.9), and we find that the measure for the moduli, after integrating over the twist modulus, is proportional to b db. Here we have used the fact that the range of t is [0, b], since a twist between the two ends which is bigger than b in magnitude can always be brought to lie in this range using the periodicity θ ≃ θ + b.
Carrying out the path integral
We now turn to a systematic evaluation of the path integral eq.(4.1). As for the disk, we will first carry out the dilaton path integral, then the integral over the Liouville mode and finally, after taking the asymptotic AdS limit, the integral over the diffeomorphisms and moduli. It is convenient, but not essential, to expand the dilaton about a background φ_0. The coefficients α, β appearing in the background can be adjusted so that φ_0 takes the values φ_B1, φ_B2 at the two boundaries in the metric eq.(4.2), which are located at r_B1, r_B2. For boundary lengths l_1, l_2 ≫ 1 the locations r_B1, r_B2 are correspondingly large. The fluctuation δφ about the background then satisfies Dirichlet boundary conditions at the two ends. Let us also note that there is no solution to the equations of motion where the metric has the required form eq.(4.2), so as to meet the condition R = −2, and the dilaton takes the values φ_B1, φ_B2 at both ends with φ_B1,2 > 0. We carry out the dilaton integral by taking a contour along which δφ is purely imaginary, as in the disk topology. (As in the disk, we will not carefully keep track of any overall constants in Z from here on.) Note that the resulting delta function imposes the constraint R = −2, and there are contributions from both boundaries in the exponent on the RHS now.
For the integral over σ we write the metric as g_ab = e^{2σ} ĝ_ab, where ĝ_ab is a metric obtained from eq.(4.2) after carrying out changes due to small and large diffeomorphisms as well as the moduli. We will impose Dirichlet boundary conditions on σ at the two ends. This condition is needed to obtain a well defined variational principle in the presence of the Gibbons-Hawking boundary terms at the boundaries.
Carrying out the σ integral gives a factor involving det(−∇² + 2). Note that with the Dirichlet boundary conditions we are imposing on the Liouville mode, det(−∇² + 2) has no zero modes.
At this point, to simplify the measure and also to deal with det(−∇² + 2), which in general depends on the large diffeomorphisms and also the moduli, in a tractable manner, we take the asymptotic AdS limit described earlier, eq.(4.4). The measure D[g_ab] then becomes a product, eq.(4.25), and we proceed as discussed at the end of the previous subsection. As argued in appendix F, both det′(P†P) and det(−∇² + 2) become independent of the large diffeomorphisms in this limit. Furthermore their ratio has important cancellations. In particular, the exponential divergences that we shall discuss shortly cancel in this ratio, see appendix I.3. The action, obtained in eq.(I.22), is expressed in terms of p_m, q_m, r_m, s_m, which are related to the modes γ_m, δ_m appearing in eq.(4.15) by the relation eq.(4.27). The measure for summing over the large diffeomorphisms is given in eq.(4.28). All this then leads to eq.(4.39), which agrees with eq.(127) of [3]. Note that we have cancelled terms in the action which are proportional to the length, arising from the various determinants, by adding a counterterm with a suitably chosen coefficient, see the discussion around eq.(F.25).
Doing the integrals over p_m, q_m, r_m, s_m in eq.(4.39), and then the remaining integral over the modulus b, yields the final result, eq.(4.41), in agreement with eq.(135) of [3] (we have not been careful about the overall numerical factor, as discussed above).
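As a sanity check on the structure of this final b-integral, one can verify numerically that gluing two Schwarzian "trumpet" factors of the schematic form Z_tr(β, b) ∝ β^(−1/2) e^(−b²/4β) with the moduli measure b db reproduces the characteristic √(β₁β₂)/(β₁+β₂) dependence of eq.(135) of [3]. The normalization and the 1/4β coefficient here are convention-dependent assumptions, not taken from the equations above:

```python
import math

def z_trumpet(beta, b):
    # Schematic Schwarzian trumpet factor; the 1/(4*beta) coefficient and the
    # beta**(-1/2) prefactor are convention-dependent assumptions.
    return math.exp(-b * b / (4.0 * beta)) / math.sqrt(beta)

def glue(beta1, beta2, b_max=30.0, n=100000):
    # Trapezoidal estimate of  int_0^inf  b db  Z_tr(beta1, b) Z_tr(beta2, b),
    # the gluing integral over the neck modulus b with measure b db.
    h = b_max / n
    total = 0.0
    for i in range(1, n):
        b = i * h
        total += b * z_trumpet(beta1, b) * z_trumpet(beta2, b)
    return total * h

beta1, beta2 = 0.7, 1.9
numeric = glue(beta1, beta2)
# The Gaussian integral in closed form: 2*sqrt(beta1*beta2)/(beta1+beta2).
closed_form = 2.0 * math.sqrt(beta1 * beta2) / (beta1 + beta2)
```

The agreement of `numeric` with `closed_form` illustrates how the e^(−yb²) falloff of the boundary actions makes the b-integral convergent at large b.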
We note that away from the asymptotic AdS limit the additional modes meeting the condition eq.(4.24) would enter the calculation, and one would have to determine the dependence of both det′(P†P) and det(−∇² + 2) on them. This dependence is not easy to obtain and would involve an infinite number of higher derivative terms beyond the Schwarzian. Similarly, the measure for summing over such modes is not easy to calculate. For all these reasons we will not attempt a calculation of Z_DT in this more general case here.
Adding matter
Next we turn to adding matter to the theory and consider its effect on the path integral while summing over connected geometries with two boundaries. To begin we take a free massless bosonic scalar field, whose action is the same as eq.(5.47) with N = 1. We will also consider fermionic matter subsequently.
The matter field does not couple to the dilaton, and we can carry out the integral over φ and thereafter over the conformal factor σ as before, leading to the partition function Z_DT+M, in which S_JT,∂1,2, given by eq.(B.17), appears at each of the boundaries.
First let us consider the case where the matter vanishes at the boundaries. Carrying out the path integral over ϕ then gives a factor of (det(−∇²))^(−1/2). This factor arose from the integral over the matter field and, with the matter field vanishing at both boundaries, is obtained from the product of eigenvalues of the Laplacian ∇² with Dirichlet boundary conditions. It is easy to see that with these boundary conditions the operator has no zero modes. In general (det(−∇²))^(1/2) will depend both on the moduli, see appendix I.2, and on the large diffeomorphisms, as discussed in appendix F. To proceed we now take the asymptotic AdS limit, eq.(4.4). In this limit the dependence of (det(−∇²))^(1/2) on the large diffeomorphisms vanishes, as discussed in appendix F. However there is still an important dependence on the modulus b, as we discuss shortly below and in appendix I.2. Also the measure breaks up into a measure over the small and large diffeomorphisms and moduli, as mentioned in eq.(4.25). Carrying out the integral over the small diffeomorphisms then leaves the integral over the large diffeomorphisms and moduli. Here the measure for summing over the large diffeomorphisms D[P V_L] is given in eq.(4.28), and the two boundary actions S_JT,∂1, S_JT,∂2 are given in eq.(B.18). Note that in the asymptotic AdS limit both det′(P†P) and det(−∇² + 2) are independent of the large diffeomorphisms. Again, as mentioned earlier, their ratio has crucial cancellations, as discussed in appendix I.3. The matter determinant depends on the modulus b, and this dependence is expressed in terms of η(τ), the Dedekind eta function. Keeping all these facts in mind and carrying out the integral over the large diffeomorphisms then gives eq.(4.47). Now we come to a rather interesting consequence of having added the matter.
Using the well known properties of η(τ) under modular transformations it is easy to see, as discussed in appendix I.2, eq.(I.50), that the matter determinant factor blows up exponentially as b → 0. As a result the integral over the modulus b diverges as b → 0, and the partition function Z_DT+M is in fact not well defined. To examine the behaviour of the integrand as b → ∞, we note from the results for the determinants evaluated in appendices I.2, I.3, see eq.(I.52), (I.70), that in this limit the contribution from the various determinants can at most grow as e^(xb), x > 0. However the boundary terms of the JT theory, after integrating over the large diffeomorphisms, behave as e^(−yb²), y > 0, see eq.(4.47). Thus the integral is convergent at the other end, as b → ∞.
Why does the divergence as b → 0 arise? We see from eq.(4.2) that as b → 0 the "neck" of the double trumpet gets thinner and thinner. More precisely, the geometry eq.(4.2) has a geodesic winding around the θ direction with minimum length b, and the length of this geodesic goes to zero when b vanishes. The divergence is related to the quantum stress tensor of matter giving a negative contribution, due to the Casimir effect, which blows up as the size of the neck vanishes.
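The Casimir-type divergence can be checked directly. Taking the relevant one-boson determinant factor to be 1/η with nome q = e^(−b) (the precise relation between the modular parameter and b is our assumption here, for illustration), the modular transformation η(−1/τ) = √(−iτ) η(τ) implies −b ln η → π²/6 as b → 0, up to an explicit logarithmic correction, so the factor 1/η blows up like e^(π²/6b) per boson:

```python
import math

def ln_eta(b, n_terms=400):
    # ln eta(tau) for tau = i*b/(2*pi), i.e. nome q = exp(-b):
    #   eta = q^(1/24) * prod_{n>=1} (1 - q^n)
    q = math.exp(-b)
    out = -b / 24.0
    for n in range(1, n_terms + 1):
        out += math.log(1.0 - q ** n)
    return out

b = 0.5
# Modular transformation eta(-1/tau) = sqrt(-i*tau) eta(tau) gives, as b -> 0,
#   -b ln eta = pi^2/6 - (b/2) ln(2*pi/b) + O(exp(-4*pi^2/b)),
# i.e. the determinant factor 1/eta grows like exp(pi^2/(6 b)).
est = -b * ln_eta(b) + (b / 2.0) * math.log(2.0 * math.pi / b)
target = math.pi ** 2 / 6.0
```

The exponentially small error term reflects the fact that only the transformed vacuum (Casimir) energy survives as b → 0.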
The result for the double trumpet partition function in the absence of matter, eq.(4.41), can be interpreted as a two point correlation ⟨Z(β_1)Z(β_2)⟩ between the partition functions of the two boundary theories, [3], which could arise for example in a boundary theory with random couplings. The divergence, once bosonic matter is added, suggests that the dominant contribution in the sum over geometries will arise when the neck goes to zero size, resulting in the two ends not being connected at all and the connected two point function for the partition function vanishing. This suggests that in the presence of bosonic matter one is describing a more conventional system without random couplings. To put it another way, the theory with matter is ill-defined due to the divergence above. To make it well-defined, one possibility could be to take the result from the double trumpet, which peaks at b = 0, as a clue, and simply disallow all topologies except the disk.
However, it could well be that this is not the only possibility; it is certainly not a very elegant one. Instead, perhaps further study will show that the path integral can be made well defined in various ways, and the resulting dynamics would then determine whether wormholes are allowed or not, depending on how the divergence is tamed. We leave a more detailed investigation along these lines for the future.
We can also consider what happens if fermionic matter is added instead of the bosonic matter we considered above. Let us take as an example one complex free fermion field ψ with central charge c = 1 and action S_M,f, where the subscript f is to indicate the fermionic nature of the matter. Since we are thinking of the θ direction as the Euclidean time direction, or the temperature direction, we impose anti-periodic boundary conditions along it. In addition let us also impose anti-periodic conditions, i.e. NS boundary conditions, in the radial direction.
The partition function Z_M,f as a function of the modulus b can then be easily written down and is given by Z_M,f = Tr_(NS) e^(−bH), where H is the Hamiltonian. In the notation used in [112], eq.(10.7.8a), this is Z^0_0(τ) for the boundary conditions above. To understand the b → 0 limit we can do a modular transformation. Since Z^0_0(τ) = Z^0_0(−1/τ), (eq.(10.7.14) of [112]), we learn that the same exponential blow up occurs as in the bosonic case. As a result, once again the integral over b diverges. We could also consider imposing periodic (Ramond) instead of anti-periodic boundary conditions in the radial direction at the two ends of the double trumpet, while still keeping the boundary conditions along the temperature direction anti-periodic. This gives Z_M,f = Tr_(R) e^(−bH), which in the b → 0 limit diverges in the same way as eq.(4.53). However, suppose we sum over both the NS and Ramond sectors in the path integral with an opposite relative sign, Z_M,f = Tr_(NS) e^(−bH) − Tr_(R) e^(−bH), eq.(4.54); then the leading divergence at small b would cancel. From eq.(10.7.14) of [112] we see that Tr_(R) e^(−bH) → Z^0_1(−1/τ) under the modular transformation τ → −1/τ. Thus, after this modular transformation we get a trace weighted by (−1)^F, where F denotes the fermion number operator, under which the NS vacuum has charge 0 and the ψ, ψ† operators have charge ±1 respectively. It is now easy to see that the vacuum state after the modular transformation is projected out, and the leading contribution on the RHS in eq.(4.55) arises from the first excited states with F = 1. These have H = 1/2 − 1/24 = 11/24, so that as b → 0 now Z_M,f ∼ e^(−(4π²/b)(11/24)). We see therefore that Z_M,f now decays very rapidly as b → 0, and this renders the integral over b convergent in the region where b → 0. We have not explored in detail whether the relative minus sign between the NS and R sectors in eq.(4.54) gives a consistent theory.
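The modular properties invoked here can be checked numerically for a single complex free fermion. With nome q, the traces have the standard product forms Tr_(NS) q^H = q^(−1/24) ∏(1 + q^(n−1/2))², Tr_(NS)(−1)^F q^H = q^(−1/24) ∏(1 − q^(n−1/2))², and Tr_(R) q^H = 2 q^(1/12) ∏(1 + q^n)² (our identification with the Z^a_b notation of [112]; under τ → −1/τ, Z^0_0 is invariant and Z^1_0(τ) = Z^0_1(−1/τ)):

```python
import math

def z00(q, n=200):
    # NS sector trace, anti-periodic in both directions.
    out = q ** (-1.0 / 24.0)
    for k in range(1, n + 1):
        out *= (1.0 + q ** (k - 0.5)) ** 2
    return out

def z01(q, n=200):
    # NS sector trace with (-1)^F inserted.
    out = q ** (-1.0 / 24.0)
    for k in range(1, n + 1):
        out *= (1.0 - q ** (k - 0.5)) ** 2
    return out

def z10(q, n=200):
    # R sector trace: two ground states with H = 1/8 - 1/24 = 1/12.
    out = 2.0 * q ** (1.0 / 12.0)
    for k in range(1, n + 1):
        out *= (1.0 + q ** k) ** 2
    return out

t = 1.3                          # tau = i*t, so -1/tau = i/t
q = math.exp(-2.0 * math.pi * t)
qt = math.exp(-2.0 * math.pi / t)

inv00 = abs(z00(q) - z00(qt))    # Z00 is modular invariant
swap = abs(z10(q) - z01(qt))     # Z10(tau) = Z01(-1/tau)
```

The vanishing of `inv00` and `swap` is the content of the theta-function identities behind eq.(10.7.14) of [112], and it is the swap Z¹₀ ↔ Z⁰₁ that projects out the vacuum in the NS − R combination.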
In string theory (where there is additional matter and world sheet supersymmetry) the contributions of the NS and R sectors do come with relative opposite sign, as in eq.(4.54) above, and this is easy to understand on the basis of spacetime spin statistics. However, our problem is different and in particular spacetime itself is two dimensional here.
Finally, we can consider imposing periodic boundary conditions for the fermion along the θ direction 5 . This would correspond to calculating not the partition function but an index, $\mathrm{Tr}[(-1)^F e^{-\beta H}]$. For the disk topology it would not be possible to impose this boundary condition, since the θ direction shrinks to zero size and going around it is a 2π rotation, under which the fermion must be anti-periodic. But we can do so for the double trumpet, since the θ circle has a finite size everywhere in the geometry. Imposing NS boundary conditions along the radial direction in the periodic case would give $Z_{M,f} = Z^0_1(\tau)$, which after a modular transformation becomes $Z^1_0(-1/\tau)$. In the limit b → 0 this vanishes and the divergence goes away. The periodic boundary conditions have reversed the sign of the Casimir energy, and there is now no obstruction to a wormhole connecting the two ends. If we impose periodic boundary conditions along the radial direction as well as along the θ direction, the partition function continues to behave like eq.(4.57) as b → 0, resulting in no divergence. In both these examples we would conclude that the two point function of the index on the two boundaries is non-zero.
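The reversed Casimir energy can be sketched similarly (our reconstruction; for a complex fermion the R-sector ground state energy is 1/8 − 1/24 = 1/12 > 0, and the two-fold ground state degeneracy is an assumption of this sketch):

```latex
Z_{M,f} \;=\; Z^0_1(\tau) \;=\; \mathrm{Tr}_{(NS)}\, (-1)^F e^{-bH}
\;\xrightarrow{\ \tau \to -1/\tau\ }\;
Z^1_0(-1/\tau) \;=\; \mathrm{Tr}_{(R)}\, e^{-\frac{4\pi^2}{b} H}
\;\approx\; 2\, e^{-\frac{4\pi^2}{b}\cdot \frac{1}{12}} \;\to\; 0 , \qquad b \to 0 ,
```

so the integral over b now converges at its lower end, as stated above.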
The main purpose of the last few examples above was to show that for suitable matter, added in a consistent manner, one can avoid the divergence seen in the bosonic case. The integral over b should then converge, the double trumpet geometry connecting the two ends would contribute to the path integral and the dual theory would involve averaging over coupling constants in some way.
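For comparison, the small-b behaviours encountered above follow standard Cardy-type asymptotics; schematically (our sketch, for N free scalars with total central charge c = N, normalizations assumed):

```latex
Z_{M,s}(b) \;\sim\; e^{\frac{\pi^2 c}{3 b}} \;\xrightarrow{\ b \to 0\ }\; \infty ,
\qquad\Longrightarrow\qquad
\int_{0} db\, \rho(b)\, Z_{M,s}(b) \;=\; \infty
```

for any power-law measure ρ(b), reflecting the diverging negative Casimir energy as the neck shrinks. With the fermionic boundary conditions discussed above, the exponent effectively changes sign and the moduli integral converges.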
Basic set-up
In this section we consider JT gravity in dS space. This corresponds to the action for gravity and the dilaton. In comparison with eq.(2.2) we see that the cosmological constant is positive here, and we are working in units where the Hubble constant H = 1. Note also that the boundary term above differs from the corresponding one in AdS. In the AdS case there is a term proportional to the length of the boundary, going like $\int dx \sqrt{\gamma}$, which is absent here. For the AdS case this term can be thought of as a counter term which is added, with a suitable coefficient, to remove a divergence which arises in the path integral when we take the limit where the length of the boundary diverges. However, in the dS case the dependence of the wavefunction on the length of the boundary is of physical significance and we should not be adding such a term 6 .
We also note our conventions for the path integral; the measure which appears in it will be discussed in more detail below. Matter can also be added to the system. Later on we will consider conformal matter, specifically scalar fields with action eq.(5.47) or fermionic fields with action eq.(5.48).
We will study the wavefunction of the universe as given by the no boundary proposal. This wavefunction gives the probability amplitude for a universe which has length l when the dilaton takes the value φ_B, and it is given by a partition function. For a single connected universe this partition function needs to be calculated over geometries which have one boundary, of length l, on which φ = φ_B. One can think of φ as providing a clock for the universe, and of the wavefunction as giving the amplitude for the universe to have different lengths at time φ_B.
A key new element in the calculation, in comparison to the AdS case with disk topology, is that the path integral involves metrics of different spacetime signatures. There are two contours which have been suggested to calculate the no-boundary wavefunction. In the conventional Hartle-Hawking proposal, [110], the contour studied involves Euclidean dS which is a sphere, S 2 , with metric of signature (2, 0), which is then connected along the contour, at the equator of the S 2 , to Minkowski dS with signature (1, 1). To implement the no-boundary proposal the contour starts at say the north pole of the S 2 . We will refer to this as the Hartle-Hawking (HH) contour below. In contrast, in the Maldacena contour, [24], we start at the north pole but evolve along −AdS 2 which is a Euclidean geometry of signature (0, 2), eventually then continuing to Minkowski dS. We will explore these contours in the second order formalism here. These contours are elaborated more after eq.(5.12).
As far as our analysis below reveals, we find no difference between the two contours for the resulting wavefunction. The reason for this, which will become clearer as we proceed, is that the fluctuations over which we sum while carrying out the path integral are analytically continued in going from one signature to another, and we do not encounter any singularities while carrying out these continuations.
For signature (2, 0) or (0, 2) we denote the corresponding action by $S_{JT,edS}$. For a contour which passes through regions of different signature we will calculate piece-wise the contributions to $S_{JT}$ and add them, keeping track of the boundaries which arise when pieces of different signature are glued together. Note that there is actually an additional topological term in the action, eq.(2.3). This term is also present in the de Sitter case we are considering here, both for the (2, 0) and (0, 2) signature cases and for (1, 1) signature, and it involves a parameter $\Phi_0$ which suppresses topological fluctuations. When dS JT gravity arises from higher dimensions, $\Phi_0$ is related to the volume of the extra dimensions and the topological term gives a contribution proportional to the higher dimensional dS entropy. For a contour which passes through regions of different signature we will again calculate the contributions to $S_{top}$ piece-wise and add them.
In this section we focus on the case with a single boundary. In this case $S_{top} = -\frac{\Phi_0}{4G}$, with the contribution coming from the part of the contour which has (2, 0) or (0, 2) signature. Note that the higher dimensional de Sitter entropy, eq.(5.9), is twice in magnitude compared to $S_{top}$. The three spacetimes mentioned above can be described with the metric eq.(5.10): the region r < 1 describes the $S^2$, and r > 1, $-AdS_2$. Taking r → ±ir, eq.(5.11), gives dS space with signature (1, 1), which can be written as eq.(5.12).
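For orientation, here is a minimal reconstruction of these metrics (our paraphrase of eqs.(5.10)-(5.12), assuming standard dS-JT conventions with H = 1, not a quotation of the paper's equations):

```latex
ds^2 \;=\; \frac{dr^2}{1-r^2} \;+\; (1-r^2)\, d\theta^2 ,
\qquad
\begin{cases}
r<1: & S^2,\ \text{signature } (2,0) , \\
r>1: & -AdS_2,\ \text{signature } (0,2) ,
\end{cases}
\\[6pt]
r \to \pm i r : \qquad
ds^2 \;=\; -\frac{dr^2}{1+r^2} \;+\; (1+r^2)\, d\theta^2 ,
```

which is (1, 1)-signature global $dS_2$, with curvature R = 2 throughout and the boundary at $r = r_B$ where the θ circle has length $l \approx 2\pi r_B$ for $r_B \gg 1$, consistent with eq.(5.15).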
We consider below a value of the length of the boundary l > 2π, which is classically allowed for dS spacetime with H = 1. The contours, in the (r, θ) coordinate system mentioned above, are shown in Fig.1. The HH contour, shown in blue, corresponds to starting at r = 1 (P), going till r = 0 (O), and then continuing till either Q or S, depending on whether we started at r = 1 just above or below the real axis respectively. The Maldacena contour, shown in green, starts at r = 1 (P) in the −AdS metric, proceeds along the real axis to r ≫ 1 (T), and is then analytically continued to Q or S.
The steps to calculate the path integral have many similarities to the AdS case considered previously and we will mainly emphasize some of the key new points below.
We first find a classical solution meeting our boundary conditions, obtain its contribution to the action, then expand around it and compute the contributions due to the quantum fluctuations.
For the first contour, the classical equations in the (2, 0) signature case are solved by the $S^2$ with metric eq.(5.10) and dilaton eq.(5.13). On continuing to (1, 1) signature using eq.(5.11) we get Minkowski dS space with φ_0 = Ar. (5.14) Note that if the boundary value φ_B is real, A must be real, and therefore in the Euclidean signature region the dilaton is imaginary. To meet the boundary conditions we take the boundary to be at r = r_B, where l = 2πr_B, (5.15) and fix A by requiring φ_0(r_B) = φ_B, i.e. A = φ_B/r_B. For the second contour, in the (0, 2) signature part the solution is also given by eq.(5.13) (but now for r > 1), and again continuing to (1, 1) signature gives the same result for the dilaton and the same values of r_B, A as above.
The resulting classical action gives rise to the saddle point contribution to the wavefunction [24]. For l ≫ 1, the leading term, denoted by $\hat S_{cl}$, is given by eq.(5.18). The result for the action and the wavefunction is the same for the two contours [24,110]. This is because the value of the on-shell classical action only depends on the data close to the boundary (φ_B and the extrinsic curvature of the surface of length l), and these are determined by the dS part of the contour.
When l < 2π we are in the classically disallowed region. Note that the wavefunction in the WKB approximation is exponentially suppressed in the classically disallowed region, with the exponential suppression growing with time, φ_B, and is oscillatory in the classically allowed region.
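As a consistency sketch (our reconstruction, with H = 1; overall normalization and sign conventions are assumptions), evaluating the boundary term of eq.(5.1) on the solution above gives a classical action and WKB wavefunction of the form

```latex
S_{cl} \;\sim\; \frac{\phi_B}{8\pi G}\,\sqrt{l^2 - 4\pi^2} ,
\qquad
\Psi_{WKB} \;\sim\;
\begin{cases}
e^{\pm i S_{cl}} , & l > 2\pi \quad \text{(allowed, oscillatory)} , \\[4pt]
e^{-\frac{\phi_B}{8\pi G}\sqrt{4\pi^2 - l^2}} , & l < 2\pi \quad \text{(disallowed, suppressed)} .
\end{cases}
```

For l ≫ 2π this reproduces a leading phase growing like φ_B l, whose rapid oscillations are noted later in the section, while the suppression in the disallowed region grows with φ_B, as stated above.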
Quantum fluctuations
Next, expanding about the classical part, we compute the contributions due to the quantum fluctuations in the path integral. The calculation is closely related to that in section 2.3. We write the dilaton as φ = φ_0 + δφ, (5.21) where δφ is the fluctuation which we will integrate over, and we work in conformal gauge by writing the metric as $g_{ab} = \hat g_{ab}\, e^{2\sigma}$, (5.22) where $\hat g_{ab}$ is a conformally flat metric of appropriate signature with curvature $\hat R = 2$.
The Liouville mode fluctuations can be expanded in eigenmodes of the operator $(\hat\nabla^2 + 2)$,
where the modes $\phi_\lambda$ are regular at the north pole, and the coefficients $c_\lambda$ are chosen so that δσ is real in the (2, 0) or (0, 2) signature parts of the contour. For the Minkowski (1, 1) part we take the modes to be given by analytic continuation from the values they take in the (2, 0) or (0, 2) segments of the geometry, depending on whether we are considering the HH or Maldacena contour, and impose Dirichlet boundary conditions at the boundary r = r_B. This ensures δσ = 0 at the boundary. For the dilaton fluctuation we do a similar expansion, $\delta\phi = \sum_\lambda d_\lambda \phi_\lambda$, (5.25) in terms of the same mode functions $\phi_\lambda$, except that the expansion coefficients $d_\lambda$ are chosen to be "phase mismatched" compared to $c_\lambda$, so that in the (2, 0) or (0, 2) parts the dilaton fluctuation is purely imaginary. Again by analytic continuation we obtain δφ in the (1, 1) region and impose Dirichlet boundary conditions on it as well.
The dilaton and Liouville mode path integrals can then be done in a way very similar to the AdS case, leading to eq.(5.26), where $S_{JT,\partial}$ arises entirely from the boundary term in eq.(5.1) and $\hat S_{cl}$ is given in eq.(5.18). Note that when the boundary length l ≫ 1, so that r_B, eq.(5.15), meets the condition r_B ≫ 1, we get, as discussed in appendix B.3, the boundary action eq.(5.27), with the line element along the boundary given in eq.(5.28). Let us also mention that, since the contour goes over metrics with different signatures, care needs to be taken in how we define the operator $(\hat\nabla^2 + 2)$ (similar comments also apply to $(P^\dagger P)$ and $\hat\nabla^2$, which will appear below). We do this by analytic continuation as follows. Suppose we are working in the metric eq.(5.10) and its analytic continuation eq.(5.12). We promote the radial variable which appears in the metric and in the operator $(\hat\nabla^2 + 2)$ to a complex coordinate and analytically continue the operator as we go along the contour. The eigenmodes $\phi_\lambda$ are constructed to be regular at the north pole or the origin of the disk, for the (2, 0) or (0, 2) signature regions respectively, and are then analytically continued to dS spacetime. The eigenvalues are then determined by imposing Dirichlet boundary conditions at the boundary. Let us also note that the fluctuations $\phi_\lambda$ will not be real everywhere along the two contours. We will take them to be real in the (2, 0) or (0, 2) regions; this results in them being complex, in general, in the dS region of the path integral.
The vector fields V which appear in eq.(5.26) include large and small diffeomorphisms, V L and V s , respectively as in the AdS case. The small diffeomorphisms V s , which satisfy the boundary conditions eq.(2.19), generate the gauge transformations whose volume is in the denominator in eq.(5.26). The large diffeomorphisms V L are zero modes of P † P and correspond to different ways in which the boundary wiggles with φ = φ B on the boundary. The operators P, P † are given in eq.(2.12) and (2.13). These vector fields can be defined in the (2, 0), (0, 2) signature regions as in the AdS case and are also then analytically continued to the (1, 1) region.
with the coefficients $\hat c_m$ being chosen so that ψ is real in the region r ≫ 1 of the dS part of the contour; this implies a reality relation among the $\hat c_m$. In general the vector field $V_L$ obtained from ψ is then complex in this region. It is given in eq.(5.33), which also determines the corresponding metric perturbations. Now we come to a complication similar to what we found in the AdS case. While the path integral is quite clearly defined, as we have seen, evaluating it explicitly for fixed φ_B and l is difficult, even when l is large. This is because the metric perturbations resulting from the small and large diffeomorphisms are not orthogonal, resulting in a measure that is difficult to calculate, and also because evaluating the determinant of $(\hat\nabla^2 + 2)$ is non-trivial.
To simplify things we therefore consider the asymptotic limit of eq.(5.35), eq.(5.36). In this limit the inner product between metric perturbations generated by small and large diffeomorphisms becomes orthogonal and the measure simplifies, as in section 2.5 above. The integral over the small diffeomorphisms can then be carried out. Moreover, both $\det(P^\dagger P)$ and $\det(\hat\nabla^2 + 2)$ do not depend on $V_L$ in this limit and can therefore be taken out of the integral over $V_L$; the arguments leading to this conclusion are analogous to the AdS case, see also appendix F. The measure for summing over the diffeomorphisms is obtained from the measure for metric perturbations. This is given in eq.(2.9), with $\delta g_{ab}$ given by eq.(2.27) for the (2, 0) or (0, 2) segment of the contour and obtained in the (1, 1) segment by continuation. With the resulting inner product, eq.(5.38), the integral over the large diffeomorphisms reduces to the integral over the modes $\hat c_m$ with the standard, well known measure [3,13]. The action in terms of the modes $\hat c_m$ is obtained from eq.(5.27) by noting that θ(u) is specified by eq.(2.32) and eq.(2.62), so that $\theta(u) \approx u + \sum_{|m|\ge 2} m\, \hat c_m\, e^{imu}$. The action then becomes a Schwarzian action, whose integral with the above measure is known to be one-loop exact [13], leading to the wavefunction in this limit, where we have also added the contributions from $S_{top}$ and $\hat S_{cl}$, eq.(5.8) and (5.17). Putting in eq.(5.35) gives the final result. More correctly, this is the value of Ψ up to an overall coefficient which we have not fixed. Note that in the limit we are considering, the first term in the exponent, which arises from eq.(5.18), is large, and thus the wavefunction has very rapid fluctuations in its phase.
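As a check on the one-loop exactness claim, the quadratic expansion of the Schwarzian action has the standard form (our sketch; the overall coupling, and a possible factor of i from the Lorentzian continuation, are suppressed). Writing θ(u) = u + ε(u) with $\epsilon(u) = \sum_m \epsilon_m e^{imu}$,

```latex
S^{(2)} \;\propto\; \sum_{|m|\ge 2} m^2\,(m^2-1)\,|\epsilon_m|^2 ,
\qquad \epsilon_m = m\,\hat c_m \ \text{in the normalization used above.}
```

The modes m = 0, ±1 are SL(2,R) zero modes of the Schwarzian and drop out, matching the restriction |m| ≥ 2, and the Gaussian integral over the remaining modes produces the one-loop factor.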
It is worth mentioning that the two determinants on the RHS above in general depend on l and can also give rise to a term diverging linearly like l in the exponent of Ψ, as is discussed especially for the AdS case in appendix F.1 in considerable detail. There are some subtle issues which arise in this context having to do with how the determinants are regulated in the UV, and related to the order of limits involved while taking the asymptotic dS limit. This is also connected to the discussion below.
It is worth emphasizing that while we have considered the asymptotic limit, eq.(5.35), eq.(5.36), since it is analogous to the asymptotic AdS limit which was also tractable, in the context of cosmology one really wants to obtain Ψ for fixed values of l, φ_B. The l → ∞ limit is then the limit of Ψ obtained first for such fixed values.
The case with l fixed is considerably more complicated, as was emphasized for AdS spacetime, and we unfortunately have to postpone such an investigation to the future. It is however worth noting that the different order of limits required when we work at fixed l, taking the cut-off on the eigenvalues which regulates the determinants to infinity first, can yield a significantly different result. In this limit, modes of the form $e^{im\theta}$ with mode number m ≳ l have a physical wavelength which lies within the universe, and such modes can play an important role in determining the behaviour of the determinants and the resulting behaviour of the wavefunction. In contrast, in the asymptotic limit, since l → ∞ first, all modes which are kept in the determinants have a diverging physical wavelength. To illustrate how the behaviour at fixed and large l might be different, we evaluate the integral over the large diffeomorphisms with a measure obtained from the inner product eq.(5.38). We find that the behaviour of Ψ changes quite dramatically at large l and begins to decay exponentially, going like $\Psi_{dS} \sim e^{-l}$, see eq.(H.16) in appendix H. We hasten to reiterate that this calculation is not really self consistent, because these modes mix with the small diffeomorphisms, since we are at finite l, eq.(2.37), and this mixing needs to be included in obtaining the correct measure while integrating over them. Our purpose in presenting the discussion of appendix H is mainly to emphasize that a potentially different result can be obtained with the different order of limits, due to such modes, once they are correctly included in the path integral.
Adding matter
We end this section by making some comments about the case with matter. We again consider N free bosonic scalar fields, as in the AdS case, with the action eq.(5.47). Although we will only discuss bosonic fields for now, we also give, for completeness, the action for a free fermionic field, eq.(5.48). At late times, for r_B ≫ 1, the classical action for the bosonic fields as a functional of their boundary values is given by eq.(5.49), where u is the rescaled proper length along the boundary, eq.(5.28), and $\bar\varphi_i(u)$ is the late time value of $\varphi_i$. The details of the above result can be found in appendix G, see eq.(G.17). The quantity F in eq.(5.49) is given by eq.(G.7) of appendix G. To obtain the behaviour of the wavefunction in the asymptotic dS limit we would couple the Schwarzian action to the matter action above and integrate over the large diffeomorphisms to obtain the wavefunction as a functional of the boundary values $\bar\varphi_i(u)$. This was studied in considerable detail in [24]. We will not pursue this line of investigation further here.
One can also include quantum corrections due to the matter fields. These come from the factor $(\det(-\nabla^2))^{-N/2}$ which arises when one integrates out the matter fields. The dependence of this determinant on the large diffeomorphisms is suppressed at large l, going like $O(1/r_B)$, analogous to the case of the AdS disk, see the discussion after eq.(F.16); the resulting term, to quadratic order in the diffeomorphisms, vanishes in the asymptotic dS limit. One can include the quantum effects of matter, while neglecting those due to the other degrees of freedom which, as mentioned above, are difficult to obtain at finite l, by working in the semi-classical limit where we take N → ∞ and G → 0 keeping GN fixed. Solving the resulting saddle point equations with the additional quantum effective action then yields the wavefunction as a function of φ_B for large values of l in the theory. We leave further investigation of this interesting limit for the future.
One can also try to go beyond the semi-classical limit and include the quantum effects of matter as well as of the gravitational degrees of freedom by working at fixed and large l. However, modes within the horizon, meeting the condition eq.(5.46), will then need to be included, and this is more challenging, as discussed above. We also leave an investigation of this interesting case for the future.
Before ending let us give a few more details on the matter determinant calculation. To calculate the matter path integral, with the scalar ϕ subject to Dirichlet boundary conditions at the dS boundary, we proceed as follows. We consider the operator $\hat\nabla^2$, which is obtained in the different regions of spacetime along the contour by analytic continuation, as per our discussion of the operator $(\hat\nabla^2 + 2)$ above, and expand ϕ in terms of the complete set of eigenmodes of this operator, which satisfy $\hat\nabla^2 \varphi_\lambda = -\lambda \varphi_\lambda$. The eigenmodes are analytically continued from the (2, 0) or (0, 2) regions to the (1, 1) region. Specifically, in the (2, 0) or (0, 2) regions these modes, which satisfy the regularity condition at r = 1 (r being the radial coordinate in eq.(5.10)), are given in terms of the associated Legendre functions $P^{-|m|}_{v-1/2}(r)$. In the complex r plane shown in Fig.1, $P^{-|m|}_{v-1/2}(r)$ has singularities at ±1. The contours shown in Fig.3 illustrate how the analytic continuations we have in mind are to be carried out. They also show why the HH and Maldacena contours will agree, since both avoid any singularities and the solutions along these contours can be analytically continued into each other. For completeness we should also mention that the inner product eq.(2.9), which goes into defining the measure of the path integral, (5.40), should be analytically continued as well along the contour.
de Sitter double trumpet
Let us now turn to the analogue of the double trumpet spacetime in the context of de Sitter space. More specifically, as in section 5, we will consider the no-boundary proposal for calculating the wavefunction, but now ask about the amplitude for two disconnected universes, of lengths $l_1, l_2$, to arise when the dilaton takes the values $\phi_{B1}, \phi_{B2}$ respectively. The result for this amplitude is suppressed by a factor of $e^{S_{dS}/2}$, where $S_{dS}$ is given by eq.(5.9), compared to the amplitude for producing one universe. We will find that the amplitude to produce two disconnected universes is non-zero in pure JT gravity. Once matter is included, the result for the double trumpet can be finite, or can have a divergence of the kind we found in the AdS case, which arises when the neck of the wormhole shrinks to zero size due to quantum effects of the matter stress tensor.
We will start by considering pure dS JT gravity and then add matter.
Let us note that the pure JT theory does not have a classical solution with the double trumpet topology and the dilaton meeting its boundary conditions. To carry out the path integral in this case we have to use the Maldacena contour. Along this contour the geometry has a segment with the $-AdS_2$ metric of signature (0, 2), which then connects to dS space, ending in two boundaries, as shown in Fig.2. The $-AdS_2$ segment is now described by the double trumpet geometry with signature (0, 2). One can think of doing the path integral by starting with a fiducial metric of the form eq.(2.24), with $ds^2 \to -ds^2$ for $-AdS_2$ space, and incorporating fluctuations about this fiducial metric. To join this spacetime to dS space with two disconnected boundaries we continue the two ends of the double trumpet, taking r → ±ir. This gives at each end a metric which can easily be seen to have curvature R = 2 and to describe the "Milne" region of dS space. By taking the boundaries to lie at $r = r_{B1,2}$, and choosing $r_{B1,2}$ suitably at the two ends, we can impose the condition that the two universes have lengths $l_1, l_2$. To carry out the path integral for the dilaton we expand it about a background value $\phi_0$, the analogue of eq.(4.32), which takes the values $\phi_{B1}, \phi_{B2}$ at the two ends of de Sitter space. The fluctuation about this background is denoted δφ, and both $\phi_0$ and δφ are analytically continued across the different regions of the spacetime with signatures (0, 2) and (1, 1). Similarly, the fluctuations in the conformal factor, δσ, are also defined across the two regions of different signature by analytic continuation. Both δφ and δσ are expanded in terms of the eigenmodes of the operator $(\hat\nabla^2 + 2)$, with a relative factor of i between their expansion coefficients, as for the dS path integral with disk topology.
The result of the path integral over the dilaton and the conformal factor, when $l_1, l_2 \gg 1$, then gives for the wavefunction $\Psi_{DDT}$ the expression in eq.(6.2), which involves $\mathrm{Vol}(\text{sdiffeo})$, $\det(\hat\nabla^2 + 2)$ and the boundary factor $e^{-S_{JT,\partial_1} - S_{JT,\partial_2}}$; here $D[g_{ab}]$ is the measure for summing over traceless metric deformations, which includes the small and large diffeomorphisms and the moduli.
As discussed in section 5 earlier in the dS case, the case of physical interest is one where the lengths l 1 , l 2 are finite. However this is a difficult situation in which to make progress since there are modes meeting the condition, eq.(5.46), which have not yet exited the horizon. The dynamics of such short distance modes is difficult to evaluate. To make progress we therefore consider the asymptotic limit where eq.(4.4) is met. In this asymptotic limit all modes have exited the horizon and are frozen out by the exponential expansion of the universe. It could be, as suggested in [24] eq.(2.12), that in fact this asymptotic limit is the more fundamental quantity in dS space and the finite length case should be thought of as arising from it by integrating back in some of the modes which are yet to exit the horizon.
In the asymptotic limit, reasoning analogous to that in the AdS case in subsection 2.5, and for the disk topology in dS in subsection 5.2, leads to a simplification of the measure; doing the integral over $p_m, q_m$, etc. in eq.(6.6) then gives the result for $\Psi_{DDT}$. We can also consider adding matter. Consider a massless scalar field with action eq.(5.47). We impose vanishing boundary conditions for the scalars at the two ends. Other boundary conditions can also be dealt with by a straightforward extension of the methods discussed in this paper, but we will not do so here; these cases would give rise to the wavefunction for two disconnected universes with the scalar field taking some specified values in these universes.
The path integral can be carried out along the lines described above for this case too. It follows from the previous section that, in the asymptotic limit discussed above, the result involves the matter partition function $Z_{M,s}(b)$ in the integral over the modulus b. As discussed in appendix J, when b → 0, $Z_{M,s}$ diverges like eq.(4.48). As in the AdS case, this divergence arises due to the Casimir effect for matter, which results in a diverging stress tensor when the size of the neck of the wormhole goes to zero. Now let us turn to the fermionic case, with action (5.48). For AdS the anti-periodic boundary condition along the θ direction is needed for computing the thermal partition function; here, however, it is up to us to specify whether the fermion satisfies periodic or anti-periodic boundary conditions along the θ direction. When the boundary condition is anti-periodic, the disk topology path integral corresponding to producing one universe gives a well-defined result, but the double trumpet has a divergence of the form eq.(4.53). When the boundary condition is periodic (for both universes), the disk topology does not contribute, since the θ circle shrinks to zero size and the fermion must therefore necessarily have anti-periodic boundary conditions along it. The leading contribution then arises from the double trumpet topology for two connected universes. In this case one cannot produce one universe from nothing through quantum tunnelling; universes must come in at least a pair! Let us end with one comment. The continuation from $-AdS_2$ to $dS_2$ can be done by taking r → ±ir. We have not specified, either in the case of the disk or the double trumpet, which of the two analytic continuations needs to be carried out; for the disk topology, as far as we can tell, it does not make a difference and both are allowed. For the double trumpet, a continuation can be chosen at either end. More precisely, as can be seen from the figure, we can start at r = −∞ either above the real axis at A (green contour) or below the real axis at E (blue contour).
In each case there are again two possibilities after going along the imaginary axis. For example, for the blue contour, at B, we can continue to either A or F. All of these seem to be allowed, as far as we can tell. When dealing with eigenmodes for various determinants, e.g. for a scalar Laplacian, the modes need to be analytically continued; we find it is possible to do so for all four continuations. A proper understanding of this issue is also left for the future.
Conclusion
In this paper we have formulated the path integral for JT gravity in the second order formalism working directly with the metric and the dilaton. This allows one to incorporate matter easily as well. It also allows one to investigate whether spacetimes with different topologies can contribute to the path integral. We considered both AdS and dS spacetimes in our analysis. For matter, we have mostly considered free bosons or fermions, but many of our conclusions extend more generally to conformal matter and some even more generally to non-conformal matter.
Many questions remain to be followed up.
We found agreement with the first order formalism for the pure JT theory in the asymptotic AdS or dS limits. Away from this limit the path integral is more complicated due to the presence of modes whose wavelength along the boundary is short, in particular smaller than the radius of AdS or dS. Understanding the dynamics of these modes, and carrying out the path integral more generally, is an important open question. One might hope that some of the determinants which arise can be made more tractable by a general analysis, of the kind that leads to a simplification in the ratio of determinants $\sqrt{\det(P^\dagger P)}/\det(-\nabla^2 + 2)$ for compact manifolds ([109,113,114]), and such simplifications might help with the problem.
In dS space the path integral is more non-trivial to define, since the no-boundary proposal, which is what we explored here, requires one to include regions of spacetime with different signatures along the contour of the integral. In our formulation we continue modes analytically between these regions while carrying out the path integral. While this means the general metric, dilaton and matter configurations we sum over are complex, the resulting path integral is well defined, as best as we can tell. However, this needs to be investigated further and more carefully. Another issue for the dS case is how to deal with the divergences which arise when we take the asymptotic limit. In AdS space these are dealt with by the standard procedure of holographic renormalization, after adding suitable counter-terms which are local on the boundary. But it is less clear if such a procedure is the correct one to adopt in dS space. It would be worth understanding this issue better as well. Calculating the path integral away from the asymptotic limit is especially important in the dS case, since we are interested in the wavefunction for a universe of finite size and at finite values of the dilaton. More generally, it will be worth establishing firmly whether a precise and sensible formulation of JT gravity can be given in dS space, as a start even for spacetimes with the topology of the disk.
Adding matter introduces some interesting new facets. Most important, as we have seen above, is the divergence which arises in the integral over moduli space while computing the path integral for the double trumpet topology in the presence of matter. This divergence is due to the Casimir effect leading to a negative stress tensor, which diverges when the neck of the wormhole goes to zero size, and is the analogue of the tachyon divergence in bosonic string theory. While the precise result we get is simply that the path integral is ill-defined, the divergence suggests that perhaps the presence of matter would cause the spacetime to disconnect into two pieces, each of disk topology with one boundary. In fact, taking a cue from the divergence, one could simply posit that to get a well-defined theory one should disallow higher topologies and only keep spacetimes with the topology of the disk. However this is clearly too premature. Rather, further analysis is needed to see what are the possible ways to make the path integral well-defined, and whether the divergence can be cured while still allowing the two boundaries to stay connected. It could well be that the fate of the wormhole depends on the details of how the divergence is tamed. This is an important issue which we hope to investigate more fully in later work. The double trumpet geometry in AdS can be thought of as a contribution to the connected two point function of the partition functions of the two boundary theories, $\langle Z(\beta_1) Z(\beta_2)\rangle$. If the double trumpet, and more generally wormholes, survive in the path integral, it would suggest that the boundary theories dual to JT gravity involve some unconventional features, for example random couplings over which one needs to sum, resulting in this connected correlation.
JT gravity can be obtained by dimensional reduction from higher dimensional near-extremal black holes [16,17], and one expects that the higher dimensional systems which arise, for example in string theory, are more conventional, with a Hamiltonian with fixed coupling constants. The dimensional reduction typically gives rise to a lot of extra matter. It would be satisfying if the presence of this matter itself causes wormholes to pinch off and the two boundaries to disconnect. However, as was mentioned above, this needs to be investigated further. It could also be that the dimensional reduction to two dimensions removes essential degrees of freedom of the system, and that wormholes are therefore allowed in the two dimensional theory 7.
It is also worth drawing attention to the fact that the divergence mentioned above does not always arise. For example, in the double trumpet, if one considers fermions instead of bosons, with periodic boundary conditions along the two boundaries, then the Casimir effect reverses its sign and the divergence goes away. The periodic boundary conditions in the AdS context would be appropriate for computing an index Tr((−1)^F e^{−βH}) instead of the partition function Tr(e^{−βH}). Investigating the behavior of the divergence as we vary the kind of matter and the boundary conditions we impose on it is another important direction to pursue.
A similar divergence for the double trumpet also arises in the dS case. And, as in the AdS case with fermions, by imposing periodic boundary conditions along the spatial boundary the divergence goes away. This suggests that for appropriate matter and boundary conditions the wavefunction of the universe in the no-boundary proposal can have an amplitude to create multiple universes. It is clearly important to understand this more deeply. In this context one would also like to study the "pants diagram", which corresponds to one universe tunnelling into two, and more generally to study the role of higher topologies. If wormholes do arise, the proper setting for quantum cosmology would be the third quantized one, where one is dealing with a multiverse.
We have focussed on the no-boundary proposal in dS space here. There are other possibilities that are also worth investigating [115-117]. One would also like to add an inflaton to the theory and study the resulting dynamics of the system, including how it behaves in the presence of a potential for the inflaton with metastable minima. It would also be interesting to explore the role of "bra-ket" wormholes and their contribution to density matrices [58,118,119].
Finally, we have not explored the Lorentzian AdS theory here. The fact that the quantum effects of matter only arise when we are away from the asymptotic AdS limit, with a boundary of finite length, is true for the Lorentzian case as well, as was discussed in [18]. Thus, for discussing the effects of Hawking radiation by coupling the JT system to external radiation, as has been done quite extensively in the recent literature [8,18,27,28,34], one needs to work away from the asymptotic limit. To incorporate corrections beyond the leading semi-classical analysis (obtained with N matter species by taking N → ∞) one would then also need to include the effects of the short wavelength modes mentioned in the first few paragraphs above. The dynamics of these modes might in fact play a key role in the recovery of information during the evaporation process. A discussion of the Lorentzian theory will also be of interest from the point of view of potentially taming the moduli space divergences that were discussed above, see [120]. [Footnote 7: Another possibility is that the higher dimensional theory itself has wormholes due to averaging over the various ground states of the extremal system. We are grateful to Ashoke Sen for emphasizing this possibility to us.]
Clearly, two dimensional gravity is a rich and fascinating playground. While results obtained in lower dimensional settings might not always apply to higher dimensions, one can hope to gain some important insights from them. We look forward eagerly to exploring some of the questions mentioned above in the future.
A.1 Euclidean AdS disk
The metric for the Euclidean AdS₂ disk geometry is given by eq.(A.1). Redefining the coordinates t, r as in eq.(A.2), we get eq.(A.3). Defining the coordinate r_* as

r_* = log tanh(ρ/2),   (A.4)

the metric then becomes eq.(A.5). Writing it in terms of the complex coordinates defined in eq.(A.6), we find that the metric is given by eq.(A.7). Further doing the coordinate transformation

ζ = arctan(x), ζ̄ = arctan(x̄) ⇒ x = tan(ζ), x̄ = tan(ζ̄),   (A.8)

it is easy to see that the metric becomes eq.(A.9), which can further be written in terms of r̂, defined by

r̂ = exp(r_*) = (r − 1)/(r + 1),   (A.10)

as

ds² = 4 (dr̂² + r̂² dθ²) / (1 − r̂²)².   (A.11)
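For orientation, the standard hyperbolic-disk forms that this chain of transformations passes through can be summarized as follows (a sketch using textbook conventions, which may differ in detail from the equation numbers quoted above):

```latex
% Euclidean AdS_2: from global coordinates to the Poincare disk (standard forms)
ds^2 = \frac{dr^2}{1+r^2} + (1+r^2)\,d\theta^2
\;\xrightarrow{\;r=\sinh\rho\;}\;
ds^2 = d\rho^2 + \sinh^2\!\rho\,d\theta^2 ,
\qquad
r_* = \log\tanh\tfrac{\rho}{2},\quad \hat r = e^{r_*}=\tanh\tfrac{\rho}{2},
\qquad
ds^2 = \frac{4\left(d\hat r^2 + \hat r^2 d\theta^2\right)}{\left(1-\hat r^2\right)^2}.
```

One can check directly that with r̂ = tanh(ρ/2) the last form reproduces dρ² + sinh²ρ dθ², so the disk metric (A.11) is the familiar Poincaré-disk presentation of hyperbolic space.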
A.2 Euclidean AdS double trumpet
The line element for this geometry is given by eq.(A.12). The two boundaries correspond to the limits r → ∞ and r → −∞. Performing the coordinate transformations

r = sinh(ρ), t = θ,   (A.13)

we find that the metric is given by eq.(A.14). It has to be noted that the θ direction is periodic with period 2π. Defining the r_* coordinate as in eq.(A.15), in terms of the r_* coordinates the metric becomes eq.(A.16). This can be written in the complex coordinates of eq.(A.17), in which the metric becomes eq.(A.18). To get it to the Poincaré form, consider the further coordinate transformation

x = coth(ζ), x̄ = tanh(ζ̄),   (A.19)

and hence the metric becomes eq.(A.20).
A.3 de Sitter
The metric for the de Sitter spacetime in 2 dimensions, for Lorentzian signature, is given in global coordinates by eq.(A.21). As before, we define the coordinate r_* by

dr_* = dτ/cosh(τ) ⇒ r_* = 2 arctan(tanh(τ/2)), cosh(τ) cos(r_*) = 1,   (A.22)

and the metric in the r_* coordinates becomes eq.(A.23). Doing the transformation eq.(A.24) gives us eq.(A.25). From eq.(A.22) and eq.(A.24), we find eq.(A.26). Defining the null coordinates as in eq.(A.27), the line element becomes, in these coordinates, eq.(A.28). Under the coordinate transformation eq.(A.29), the line element eq.(A.28) becomes eq.(A.30).
B.1 Euclidean AdS disk
Consider the metric eq.(B.1). The boundary is located in the region where r → ∞. The general boundary curve is specified by (r(u), θ(u)), where u is the proper boundary time. The line element on the boundary is eq.(B.2), where ε ≪ 1. The tangent vector to the boundary curve is given by eq.(B.3), and the unit normalized normal vector by eq.(B.4). The extrinsic trace is given by eq.(B.5), where it is understood that r, θ are functions of u. Differentiating the line element relation gives eq.(B.6). Using this to simplify eq.(B.5), we get eq.(B.7). Noting from eq.(B.1) and eq.(B.2) that to leading order eq.(B.8) holds, this relation needs to be extended to one higher order in ε to obtain the leading Schwarzian term in the extrinsic trace. Doing so gives eq.(B.9). With this relation between θ and r, expanding the extrinsic trace to quadratic order in ε, we get eq.(B.10). We will now provide some formulae that will be useful later on in appendix H.
B.2 Euclidean AdS double trumpet
The line element is given by eq.(4.2). The geometry now has two boundaries in the asymptotic region, i.e. near r → ±∞. We need to be a bit careful when evaluating the boundary terms in the JT action, as the relative signs play a crucial role in the final result of the path integral. Consider a curve given by (r(u), θ(u)), where u is proportional to the boundary proper time. We have eq.(B.11). The unit normalized normal vector is then given by

n^r = ± (r² + 1)^{3/2} θ′ / √(θ′²(r² + 1)² + r′²), n^θ = ∓ r′ / ((r² + 1) √(θ′²(r² + 1)² + r′²)),   (B.12)

where the upper sign corresponds to the right boundary and the lower sign to the left boundary. The extrinsic trace K = ∇_µ n^µ is given by eq.(B.13), where it is understood that r and θ are functions of u. Consider the situation when both boundaries of the double trumpet geometry have the line element given in eq.(B.2), with the same parameter ε. The coordinate u is chosen such that its range is the same as the range of the θ coordinate. From eq.(4.2), we have to leading order eq.(B.14), where we need to use the + sign at the right boundary and the − sign at the left boundary. More generally, the parameter ε need not be the same at both boundaries. The above relation needs to be extended to one higher order in ε to obtain the leading Schwarzian term in the extrinsic trace.
Doing so gives eq.(B.15). With this relation between θ and r, expanding the extrinsic trace to quadratic order in ε, we get eq.(B.16). We get the above action for both signs in eq.(B.15), in other words at both boundaries. This relative plus sign between the boundary terms at the two boundaries is important because of the dependence of the integral over the large diffeomorphisms on the moduli, as we shall see later in appendix I. The boundary term of the JT action, eq.(2.51), at either boundary then becomes eq.(B.17). Denoting the boundaries ∂₁, ∂₂ and writing the two boundary terms explicitly, we have eq.(B.18).
B.3 de Sitter
Consider the metric in global coordinates given by eq.(B.19). The general boundary curve is specified by (τ(u), θ(u)), where u is the proper boundary time. The line element on the boundary is given by eq.(B.20). The tangent vector to the boundary curve is given by eq.(B.21), and the unit normalized normal vector by eq.(B.22). The extrinsic trace is given by eq.(B.23), where it is understood that τ, θ are functions of u. Noting from eq.(B.19) and eq.(B.20) that to leading order eq.(B.24) holds, and correcting this relation to one higher order in ε, we have eq.(B.25); expanding to quadratic order in ε, we get eq.(B.26). We can also simplify the extrinsic trace formula eq.(B.23) using eq.(B.19) and eq.(B.20); doing so, we get eq.(B.27). We now provide some useful formulae from the same calculations in the metric eq.(5.12). The unit normal vector components are given by eq.(B.28), and the extrinsic trace is then given by eq.(B.29). Expanding r(u) in terms of θ(u) as in eq.(B.30), we recover the extrinsic trace as in eq.(B.26). We now derive some formulae that will be used in appendix H. Taking eq.(B.31), we see that the metric in eq.(B.19) becomes eq.(B.32), which is the negative of the metric for global AdS₂. Using eq.(B.31), the extrinsic trace then becomes eq.(B.33). Now, taking the line element on the boundary to be eq.(B.34), where τ₀ is an arbitrary fixed value of τ, the boundary relation reads eq.(B.35). Expanding τ and θ as in eq.(B.36) and solving for δτ iteratively to quadratic order in δθ, we get eq.(B.37). Using eq.(B.37) to expand the extrinsic trace eq.(B.33) to quadratic order in δθ, we find eq.(B.38).
C Zeta-function regularization
In this appendix we mention some useful formulae pertaining to zeta-function regularization that are used in this work. The Riemann zeta function, denoted ζ(s), is given by eq.(C.1), and it has the specific values listed in eq.(C.2). The generalized zeta function ζ(s, m₀) is given by eq.(C.3). Consider the sum Σ_{m>1} ln α_m. Defining ζ_A(s) as in eq.(C.4) and using λ_m = α_m in that definition, we get eq.(C.5). Using the result eq.(C.5) in eq.(C.4), we get eq.(C.6). Now consider the sum Σ_{m≥2} ln(m − m₀). Defining ζ_B(s, m₀) as in eq.(C.7) and using λ_m = m − m₀, we see that eq.(C.8) holds. Using the result eq.(C.8) in eq.(C.7), we get eq.(C.9). Generalizing the above results, we note here a general formula for the zeta-function-regularized product, eq.(C.10).
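As a concrete illustration of how such formulae are applied, two standard zeta-regularized products follow from the well-known values ζ(0) = −1/2 and ζ′(0) = −(1/2) ln 2π (these are textbook results, not specific to this paper):

```latex
% Standard zeta-regularized products (illustrative)
\prod_{m=1}^{\infty} m \;\equiv\; \exp\!\Big(-\frac{d}{ds}\sum_{m=1}^{\infty} m^{-s}\Big|_{s=0}\Big)
\;=\; e^{-\zeta'(0)} \;=\; \sqrt{2\pi},
\qquad
\prod_{m=1}^{\infty} a \;\equiv\; a^{\zeta(0)} \;=\; a^{-1/2}.
```

The determinant formulae in this appendix are of exactly this type, with the bare eigenvalue sums replaced by the analytically continued ζ_A and ζ_B.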
D Conformal Killing Vectors in the Euclidean AdS Disk
In this appendix we shall explicitly evaluate the conformal Killing vectors (CKVs) for the Euclidean AdS disk topology, with the metric given in polar coordinates as in eq.(2.24). First, note that conformal Killing vectors satisfy the condition P V = 0. This immediately implies that P†P V = 0 for a CKV. Thus, we need only look for CKVs in the sector of zero modes of the operator P†P. For a general zero mode of this operator, we can write the vector field as in eq.(D.1), where ψ_m is given in eq.(2.25) and the condition that the vector field be real means that k₁, k₂ are real. The metric components are then given by eq.(D.2). It can be seen from the above that if all the metric components δg_ab = (P V_m)_ab were to vanish, the possibilities are those in eq.(D.3). The diffeomorphisms corresponding to k₂ ≠ 0, k₁ = 0 in eq.(D.3) are exact isometries of AdS₂. This is straightforward to see: if m = 0, ±1, it follows from eq.(D.2) that P V = 0, and with k₁ = 0 it follows that ∇ · V = 0, leading to the conclusion that ∇_a V_b + ∇_b V_a = 0.
Among the set of CKVs, those given by eq.(D.3) correspond to vector fields of the form eq.(D.4), where the q_i are arbitrary real constants. These give rise to an SL(2, R) algebra. The functions ψ̃_i appearing there are in fact linear combinations of the solutions appearing in eq.(2.25) for the m = 1, −1, 0 modes.
An important observation is the following. The CKVs in both eq.(D.3) and eq.(D.4) do not satisfy the boundary conditions eq.(2.19) corresponding to the allowed small diffeomorphisms. Thus, on the disk the operator P†P has no zero modes.
E Estimate of the inner product of metric perturbations arising from large and small diffeomorphisms
We are interested in calculating the inner product eq.(E.1). Let us calculate each of the terms in this expression. In terms of the scalar fields ψ₀, ξ_λ, ψ_λ, the vector fields corresponding to large and small diffeomorphisms are given, respectively, in eq.(E.2).
The large diffeomorphism V^a_L being a zero mode of P†P translates into a condition on the scalar fields, and the small diffeomorphism being an eigenmode of P†P with eigenvalue λ, i.e. P†P V_s = λV_s, translates into corresponding equations for ξ_λ and ψ_λ. The first of the boundary conditions for the small diffeomorphisms in eq.(2.19) just becomes eq.(E.7) at the boundary r = r_B. The second condition becomes t^a n^b (P V_{s,λ,m})_ab = 0 ⇒ (P V_{s,λ,m})_θr = 0 at r = r_B, where the last equality is obtained by using the first condition, eq.(E.7). Expressed in terms of ξ_{λ,m}, ψ_{λ,m}, this condition becomes eq.(E.9). To understand more clearly how these conditions can be met in the asymptotic AdS limit, r_B → ∞, let us first examine the scalar field equation for ψ_{λ,m} carefully. The general solution is given by eq.(E.10), where P^m_α, Q^m_α are the associated Legendre functions of the first and second kind respectively. Regularity at the origin forces us to choose c₂ = 0. The behaviour of this solution for r ≫ 1 can then be immediately obtained from the asymptotic forms of the associated Legendre functions and is given by eq.(E.12), where f₁(λ, m), g₁(λ, m) are specific expressions which can be read off from the asymptotic behaviour of the associated Legendre functions, and the θ dependence is not shown explicitly. From the above, it is clear that if v is imaginary, the expression has the functional form eq.(E.13), where w = −iv and F, β are given by eq.(E.14). It is clear from the expression eq.(E.13) that the magnitude of the scalar field solution behaves as ψ_{λ,m} ∼ c₁/√r. The same analysis holds for ξ_{λ,m}, albeit with a different constant in place of c₁, say d₁. It is now clear that one way to satisfy the conditions eq.(E.7) and eq.(E.9) is to choose the constants c₁, d₁ such that d₁ ∼ c₁ r_B, so that in eq.(E.7) the two terms are comparable and cancel each other, whereas in eq.(E.9) the second term dominates, giving rise to the condition eq.(E.15), which determines the eigenvalue λ.
The alternative way is to choose the constants c₁, d₁ such that c₁ ∼ d₁ r_B, so that in eq.(E.9) the terms are comparable and cancel each other, whereas in eq.(E.7) the first term dominates, giving rise to the condition

∂_r ξ_{λ,m} |_{r=r_B} ≈ 0,   (E.16)

thus determining the eigenvalue λ. These two ways of meeting the conditions eq.(E.7) and eq.(E.9) give rise to two sets of eigenvalues, and in fact exhaust all the possibilities.
We now proceed to evaluate the various inner products. We shall first evaluate the expressions in general and then take the asymptotic AdS limit to get the estimates. Consider the inner product of two metric perturbations, one corresponding to a large diffeomorphism and the other to a small diffeomorphism, eq.(E.17).
In the large r_B limit, from eq.(2.27), we have eq.(E.18). From eq.(E.2), we get

V^θ_{s,λ,m} = g^{θθ} imξ_{λ,m} − ∂_r ψ_{λ,m} ≈ (im/r²) ξ_{λ,m} − ∂_r ψ_{λ,m},   (E.19)

which, for either set of eigenvalues determined by eq.(E.15) or eq.(E.16), becomes, using the equations of motion, eq.(E.20). Then, using eq.(E.18) and eq.(E.20) in eq.(E.17), we get eq.(E.21). The inner product of two metric deformations both corresponding to large diffeomorphisms has already been obtained in eq.(2.35), which in the asymptotic AdS limit becomes eq.(E.22). We are now left to calculate the inner product of two metric perturbations corresponding to small diffeomorphisms. This is given by eq.(E.23). The inner product of two small diffeomorphisms can be manipulated as in eq.(E.24).
For either set of eigenvalues, only one of the terms in the bulk integral dominates. Since the bulk integral is positive definite, we see that eq.(E.25) holds. Putting together eq.(E.21), eq.(E.22) and eq.(E.25), we find, in the asymptotic AdS limit, eq.(E.26). Let us now consider the case when there are some modes satisfying eq.(E.27). These modes will need to be included when we are considering the general case with a boundary of finite length, and our estimates need to be revised for such modes. We will examine below the case where the eigenvalue satisfies λ ≪ m² while m becomes big, meeting eq.(E.27).
We start with the analysis of the scalar field solution. For r ∼ O(1), and λ such that m² ≫ λ, the equation eq.(E.5) for the scalar field ψ_{λ,m}, with θ dependence e^{imθ}, is eq.(E.28), the solution of which we take to be eq.(E.29). This solution can be extended to the region where r satisfies eq.(E.30). So, in this region, the solution eq.(E.29) becomes eq.(E.31). The scalar field equation eq.(E.5) for r ≫ 1 is eq.(E.32), the solution of which is given by eq.(E.33), where I_a is the modified Bessel function of the first kind and v is as defined in eq.(E.11).
Matching this with the solution eq.(E.31) in the region 1 ≪ r ≪ m, we find eq.(E.34), and so the scalar solution becomes eq.(E.35), where K_v is the modified Bessel function of the second kind. The scalar ξ_{λ,m} will also behave in the same way. We also see from eq.(E.35) that the scalar field solution ψ_{λ,m} is a function of the combination m/r ∼ O(1) near the boundary. We also see from eq.(E.15) and eq.(E.16) that the boundary conditions can be met when the relative coefficient between ψ_{λ,m} and ξ_{λ,m} is of order unity.
We can now estimate the magnitudes of the required quantities. First, from eq.(E.18) and eq.(E.22), we find eq.(E.36). To estimate the magnitude of V_{s,λ,m}, we note, as mentioned above, that the scalar fields ψ_{λ,m}, ξ_{λ,m} in eq.(E.35) are functions of the combination m/r ∼ O(1) near the boundary. This gives eq.(E.37), where the prime denotes a derivative with respect to that combination. To estimate the value of ⟨P V_{s,λ,m}, P V_{s,λ,−m}⟩, we show using eq.(E.24) that this quantity is of O(√m). To see this, consider the bulk integral in eq.(E.24), eq.(E.38), where r_c is such that 1 ≪ r_c ≪ m. It is also easy to see, by noting eq.(E.37), that the boundary term is of the same order, and so eq.(E.39) holds. Thus, putting together eq.(E.39), eq.(E.37) and eq.(E.36), and noting eq.(E.17), we find eq.(E.40). We can easily extend the above analysis to the case of the double trumpet topology in Euclidean AdS spacetime, discussed in section 4. To evaluate the quantity in eq.(E.1), we proceed as before. The discussion up to eq.(E.16) continues to hold, except that the eigenvalues are now determined by imposing either eq.(E.15) or eq.(E.16) at both boundaries. We take the left and right boundaries to be located at r = −r_{B1} and r = r_{B2} respectively. Since the vector fields for small and large diffeomorphisms can be chosen so that the modes corresponding to the left and right boundaries decouple, the corresponding boundary terms in eq.(E.17) are independent of each other. The inner product of two large diffeomorphisms in the basis in eq.(4.15) is calculated in detail in appendix I.1, with the final result appearing in eq.(I.11). The calculation of the inner product of small diffeomorphisms is again analogous to the disk case, with the expression in eq.(E.24) interpreted as having two boundary terms which are again independent of each other. Thus it immediately follows that the result analogous to eq.(E.26) also holds in the present case of the double trumpet topology.
F Estimation of various determinants in Euclidean AdS disk
In this section we shall discuss in detail the computation of various determinants in AdS spacetime for Euler characteristic χ = 1, corresponding to the disk topology. To begin with, we will compute the determinant of the scalar Laplacian. This requires the specification of appropriate boundary conditions, which we take to be Dirichlet boundary conditions. We will mostly consider the case when the boundary has large length l ∼ 1/ε ≫ 1 and obtain the dependence of the determinant on the large diffeomorphisms discussed above. In the asymptotic AdS limit, where ε → 0, we will find that a length-dependent counter-term needs to be added to get a finite result, and that the dependence on large diffeomorphisms vanishes. We will make essential use of the conformal anomaly in the analysis.
Similar results will also be obtained for det′(P†P). For det(−∇² + 2), on general grounds, up to O(ε), the dependence on the large diffeomorphisms will be shown to be of the form of the Schwarzian action with a coefficient which is linear in ε, but we will not be able to obtain the precise value of this coefficient.
Coming back to the scalar case, we are interested in the dependence of det(−∇²) on the large diffeomorphisms. On general grounds this dependence should be a functional on Diff(S¹)/SL(2, R), since it is easy to see that diffeomorphisms lying in the SL(2, R) isometry group of AdS₂ must leave the determinant unchanged 8. This imposes a strong restriction on the kind of terms that can appear in the final result for the determinant. The simplest such term is proportional to the length of the boundary. The next term, which involves two derivatives with respect to u, eq.(2.42), the rescaled proper time along the boundary, is uniquely given by the Schwarzian action. On dimensional grounds its coefficient must go like ε, and by using the conformal anomaly we can obtain the coefficient in front of this action, as we show below. Beyond this, in general, additional terms will also be present; these will involve additional derivatives with respect to u and correspondingly additional powers of ε. If we consider modes whose mode number m, eq.(2.26), is small enough to meet the condition mε ≪ 1 (F.1), so that their wavelength meets the condition Λ ≫ R_AdS, eq.(2.56), then these additional terms will be suppressed. For modes of higher mode number, where eq.(F.1) is not met, these higher order terms must all be retained and the resulting behaviour of the determinant is much more non-trivial to obtain. These arguments can also be applied to the double trumpet with two boundaries, and to the de Sitter case when we calculate the no-boundary wavefunction by analytic continuation from the (2, 0) or (0, 2) signature metrics, as discussed in section 5.1.
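The statement that SL(2, R) acts trivially can be made concrete through the standard Möbius invariance of the Schwarzian derivative (standard identities, quoted here for convenience):

```latex
% Schwarzian derivative and its SL(2,R) (Mobius) invariance
\mathrm{Sch}(F,u) \;=\; \frac{F'''}{F'} \;-\; \frac{3}{2}\left(\frac{F''}{F'}\right)^{2},
\qquad
\mathrm{Sch}\!\left(\frac{aF+b}{cF+d},\,u\right) \;=\; \mathrm{Sch}(F,u),
\quad ad - bc \neq 0 .
```

Any admissible term in the determinant must share this invariance, which is why the boundary length and the Schwarzian action appear as the first two allowed structures.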
Let us now show how the conformal anomaly can be used to obtain the coefficients of the first two terms mentioned above, involving the boundary length and the Schwarzian derivative. We can expand the determinant as in eq.(F.2), where u is the renormalized boundary proper time, eq.(2.42), and θ is the coordinate appearing in the line element of the AdS metric in eq.(2.24). Our task now simplifies to evaluating the constants c₁ and c₂. To fix the constants, we consider a non-wavy boundary specified by r = r_B and use the conformal transformation property of the determinant to fix its dependence on the value of r_B. For this it is convenient to work in the coordinate system r̂, θ, in which the line element is given by eq.(A.11).
The boundary r = r_B in the metric eq.(2.24) is specified in terms of the r̂ coordinate as in eq.(F.3). Defining the coordinate ρ̂ by r̂ = ρ̂ r̂_B (F.4), we find that the boundary specification now becomes ρ̂ = 1. The line element becomes eq.(F.5), where ḡ_ab is defined in eq.(F.6), and, as mentioned earlier, the boundary is located at ρ̂ = 1. It is easy to see from eq.(F.5) that the r_B dependence is entirely in the conformal factor, with the flat metric ḡ_ab independent of r_B. Now, we note that the conformal transformation property of the determinant of a scalar Laplacian with Dirichlet boundary conditions, for conformally related metrics as in eq.(F.7), is given by eq.(F.8), where S_σ is given by eq.(F.9). From eq.(F.5) and eq.(F.6), we note that eq.(F.10) holds, with R̄ = 0, and the normal vector to the boundary normalized with ḡ_ab and the corresponding extrinsic trace are given in eq.(F.11). Using these results, we get eq.(F.12). Now, take the case where r_B = 1/ε ≫ 1, so that the boundary length l ≫ 1.
To be more precise, the ε defined here through the boundary value of r_B is the same as the ε in eq.(2.42) only to leading order; there will be subleading corrections relating the two variables. However, for now we shall work consistently with ε defined through the value of r_B. Using eq.(F.4) we get eq.(F.13), where we used eq.(2.8) to obtain the final equality. Note that det(−∇²)_ḡ is some constant independent of r_B, and hence of ε. From eq.(F.2), for the boundary at r_B = ε^{−1}, we get eq.(F.14). So comparing, we get the values of c₁ and c₂, with c₂ = 1/(4π) (F.15), and so we have eq.(F.16). Note that the dimensional analysis mentioned above, which fixes the powers of ε in each term in eq.(F.14), can be understood as follows. The line element eq.(2.42) is invariant under (ε, u) → (λε, λu). Under this rescaling (with tan(θ/2) unchanged), the Schwarzian term transforms as Sch(tan(θ/2), u) → (1/λ²) Sch(tan(θ/2), u), while the line element ds is invariant. This fixes the powers of ε appearing in the coefficients. Also note that, after adding a counter-term to cancel the length-dependent first term in eq.(F.16), which goes like 1/ε, we are left with the Schwarzian term and additional subleading corrections which all vanish in the asymptotic AdS limit, ε → 0.
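The thermal saddle used in this computation, θ(u) = 2πu/β, has Sch(tan(θ/2), u) = 2π²/β², a standard identity. As an illustrative numerical check of that identity (not part of the paper's calculation; the parameter values are arbitrary), one can evaluate the Schwarzian by finite differences:

```python
import math

def schwarzian(f, u, h=1e-3):
    """Finite-difference estimate of Sch(f, u) = f'''/f' - (3/2)(f''/f')^2."""
    # Central-difference stencils for the first, second and third derivatives.
    f1 = (f(u + h) - f(u - h)) / (2 * h)
    f2 = (f(u + h) - 2 * f(u) + f(u - h)) / h**2
    f3 = (f(u + 2 * h) - 2 * f(u + h) + 2 * f(u - h) - f(u - 2 * h)) / (2 * h**3)
    return f3 / f1 - 1.5 * (f2 / f1) ** 2

beta = 5.0                                   # arbitrary inverse temperature
f = lambda u: math.tan(math.pi * u / beta)   # tan(theta(u)/2) with theta = 2*pi*u/beta
val = schwarzian(f, 0.7)                     # evaluate at an arbitrary point u = 0.7
expected = 2 * math.pi**2 / beta**2          # the standard closed-form value
print(val, expected)
```

The agreement to several digits reflects the fact that the Schwarzian of tan(πu/β) is exactly constant, which is what makes the on-shell action eq.(F.19)-type evaluation at finite temperature so simple.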
It is also easy to see that, with this result for the determinant, the on-shell action S_OS for JT gravity in the presence of matter fields satisfying vanishing Dirichlet boundary conditions, at finite temperature and in the semi-classical limit G → 0, N → ∞ with GN fixed, is given by eq.(F.17), where S_{JT,∂} is given by the first line in eq.(2.65) and S_{M,qm} is given by eq.(F.18). For the finite temperature case, taking θ(u) = (2π/β)u, φ_B = 1/(Jε) + GN/3, and introducing the counter-term mentioned above to cancel the length-dependent first term in det(−∇²), we see that the value of the on-shell action becomes eq.(F.19), which indeed matches the results in [18]. We should alert the reader to an important issue connected to the above calculation. The formula relating the scalar Laplacian determinants for conformally related metrics, given in eq.(F.9), is different from the one appearing in [121] by an extra term, eq.(F.20). Indeed, in general, the bulk conformal anomaly and Wess-Zumino consistency conditions [122] fix the form of the action S_σ completely, up to the possibility of an additional term of this type. While [121] do report that such a term arises for the determinant with Dirichlet boundary conditions, we find that its presence leads to disagreement with the semi-classical results in [18], and we have accordingly not included it here. We now extend the above considerations to compute the value of the determinant of the operator P†P. This is in fact straightforward. Once again, we can expand the determinant in powers of ε, with the first two terms being

ln det′(P†P) = k₁ ∫ ds + k₂ ∫ du Sch(tan(θ(u)/2), u).   (F.21)

We will now use the same trick of considering a non-wavy boundary, use the conformal transformation property of det P†P to compute its r_B dependence, and then match the coefficients by expanding in ε, where r_B = 1/ε. For conformally related metrics as in eq.(F.7), the determinants of the operator P†P are related as [121]

(det P†P)_ĝ / (det P†P)_ḡ = exp{−26 S_σ}.   (F.22)
We note that only the prefactor in the exponent in eq.(F.22) is different from the scalar case, due to the difference in the central charges 9. So again, by comparing eq.(F.21) and eq.(F.22) for the geometry eq.(2.24) with boundary at r_B = 1/ε, we get, up to an ε-independent prefactor which we are not retaining, eq.(F.23). This is of the same form as in the scalar case and, once the first term is removed by a suitable counter-term, again vanishes in the ε → 0 limit. [Footnote 9: We must mention though that we have not been too careful about the possible presence of a counter-term of the form eq.(F.20) in eq.(F.22). Also there could be some subtleties due to zero modes.]
The determinant det(−∇² + 2) is more complicated. Since it arises after doing the path integral for a massive scalar of mass 2, we cannot use the conformal anomaly to obtain useful information about it. However, we can still argue, from the requirement that the determinant is a functional on Diff(S¹)/SL(2, R), that it can be expanded as eq.(F.24), where q₁, q₂ are O(1) constants. Once again it then follows that the dependence on the large diffeomorphisms vanishes in the asymptotic AdS limit.
To reiterate a point made earlier, note that the length-dependent terms in the various determinants, eq.(F.16), eq.(F.23) and eq.(F.24), grow like 1/ε, i.e. linearly in the length of the boundary, and thus diverge in the asymptotic AdS limit, ε → 0. To obtain a finite result in this limit we need to add a boundary term to the JT action, eq.(2.51), and fix the constant A appropriately so as to cancel this divergence.
F.1 Asymptotic AdS limit case
We now come to a subtlety having to do with the order in which the various limits are taken while evaluating the determinants. The computations of the determinants above used the Weyl anomaly and are valid for an arbitrary boundary. We see above that the result at leading order in the length l for the determinants goes like eq.(F.26), where C is a constant which depends on the operator Ô. More precisely, the determinants above are obtained, for a given geometry with a finite length boundary, by introducing a cut-off on large eigenvalues, regulating the product of eigenvalues of the relevant operators, and then taking the cut-off to infinity in a manner which is consistent with the Weyl anomaly. In contrast, in the asymptotic AdS limit, as was mentioned above, we are interested in first taking the limit where the boundary length l → ∞, keeping the cut-off on eigenvalues fixed, and thereafter taking the limit where this cut-off goes to infinity 10.
Here we will show that this second order of limits can give a different result; in particular, the leading term, eq.(F.26), which is exponential in l, can be absent in the asymptotic AdS limit. To carry out the calculations in this limit we will use a method first discussed by Coleman [123].
Let us illustrate this method for the Simple Harmonic Oscillator (SHO). Consider two simple harmonic oscillators with frequencies w₁ and w₂, constrained to move between x = 0 and x = L. Let ψ^(1)_λ be the solution to the equation eq.(F.27) with the appropriate boundary conditions; ψ^(2)_λ satisfies a similar equation with the frequency w₂. Suppose we impose Dirichlet boundary conditions at both ends. Coleman's formula then states eq.(F.28). In this formula the left and right hand sides are to be regarded as functions of the complex variable λ. The formula follows from noting that the zeros and poles of the left and right hand sides are the same, and that both sides go to unity as λ → ∞ in any direction except along the real axis. It then also follows that, up to a constant independent of ω, eq.(F.29) holds, where the RHS is the value of the wavefunction at x = L obtained for the operator with frequency ω by starting at x = 0 with the correct boundary conditions. One important point to note here is that the normalization of the solution ψ_λ(x) should be fixed in such a way that, when viewed as a function of λ, any spurious zeros or poles in ψ_λ(x), other than those corresponding to the actual eigenvalues of the operator (−∂²_x + ω₁²), are cancelled, and such that the ratio of two solutions with different frequencies goes to unity as λ → ∞ in any direction other than along the real axis 11. The solution ψ_{λ=0} is then obtained by taking the λ = 0 limit of this appropriately normalized ψ_λ solution.
A similar formula also follows for Neumann boundary conditions, or for mixed boundary conditions where we set aψ′ + bψ = 0 (at, say, both x = 0 and x = L), with eq.(F.29) being replaced by eq.(F.30), where the subscript refers to taking the λ = 0 solution as in eq.(F.29). The normalization of the solution ψ₀ is fixed as explained before.
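Coleman's formula in the Dirichlet case is easy to verify numerically: the Dirichlet spectrum of −∂²_x + ω² on [0, L] is (nπ/L)² + ω², n = 1, 2, …, and the λ = 0 solution with ψ(0) = 0, ψ′(0) = 1 is sinh(ωx)/ω, so the ratio of determinants should equal the ratio of these wavefunctions at x = L. A quick stdlib-Python check (illustrative only; the parameter values are arbitrary):

```python
import math

def det_ratio_product(w1, w2, L, nmax=200000):
    """Truncated product over Dirichlet eigenvalues (n*pi/L)^2 + w^2."""
    r = 1.0
    for n in range(1, nmax + 1):
        k2 = (n * math.pi / L) ** 2
        r *= (k2 + w1**2) / (k2 + w2**2)
    return r

def psi_ratio(w1, w2, L):
    """Coleman's RHS: ratio of lambda = 0 solutions psi(x) = sinh(w x)/w at x = L."""
    return (math.sinh(w1 * L) / w1) / (math.sinh(w2 * L) / w2)

L, w1, w2 = 1.3, 2.0, 0.7   # arbitrary box size and frequencies
num = det_ratio_product(w1, w2, L)
exact = psi_ratio(w1, w2, L)
print(num, exact)
```

The truncated eigenvalue product converges to the wavefunction ratio as 1/nmax, which is the content of eq.(F.28)-(F.29) for this simple example; the AdS computation below replaces the x = 0 boundary condition by regularity in the interior.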
We will now adopt the same strategy to calculate the determinants det(−∇²), det(−∇² + 2) and det′(P†P) in the case of the asymptotic AdS spacetime. In applying this method to the AdS case, we expand the modes of the operator in the basis of modes in the θ direction, e^{imθ}, and then, working at any fixed value of m, obtain a one-dimensional problem in the radial direction. In this one-dimensional problem we take the boundary to infinity, l → ∞, and then use the Coleman method to obtain the determinant of the radial operator. The full determinant is then obtained by taking the product of the contributions over all values of the mode number m, and then taking |m| → ∞ in this product. In the calculations below we therefore work in the asymptotic AdS limit with the order of limits mentioned above: first taking l → ∞ and then taking the mode number m → ∞.
Let us first compute det(−∇^2) in this manner. The boundary condition at x = 0 in the SHO is now replaced by the requirement of regularity of the solution in the interior. The solution to the eigenvalue equation ∇^2 ψ_λ = −λψ_λ for fixed mode number m which is regular everywhere in the interior, chosen such that the ratio of two solutions with different mode numbers goes to unity as |λ| → ∞ in any direction other than along the real axis, and without having any spurious zeros, poles or branch points in the variable λ, is given in terms of P^β_α, the associated Legendre function of the first kind. In fact, using the asymptotic form of the P^β_α, eq.(F.32), we see that the asymptotic form has v → −v symmetry. For the case of the Dirichlet boundary condition at r = r_B, the eigenvalues are obtained by solving

ψ_{λ,m}(r_B) = 0. (F.33)

Let the eigenvalues be labelled λ_{m,n}; the subscript m in λ_{m,n} denotes the mode number and n labels the various eigenvalues for this particular mode number. To compute the determinant in the asymptotic AdS limit we take the asymptotic form of the solution eq.(F.37), with λ = 0, which is given by (F.34). (We have followed the conventions of [124] in obtaining the asymptotic forms here and elsewhere in this manuscript.)
Next consider the determinant of the operator (−∇^2 + 2). The eigenvalue equation for this operator, for the mode number m, now has the regular solution eq.(F.37). To compute the determinant in the asymptotic AdS limit we take the asymptotic form of the solution eq.(F.37), with λ = 0. We now use this to compute det(−∇^2 + 2) with the Dirichlet boundary condition eq.(F.33) using eq.(F.28). This gives a result which is independent of r_B, since Σ_{m=−∞}^{∞} ln r_B = 0 in zeta-function regularization.
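The vanishing of the sum Σ_{m=−∞}^{∞} ln r_B is a zeta-function-regularization statement: the regularized number of integer modes is Σ_{m=−∞}^{∞} 1 = 1 + 2ζ(0), and ζ(0) = −1/2. A quick numerical check using mpmath's analytic continuation of ζ:

```python
# The regularized count of integer modes m in (-inf, inf) is
#   sum_{m=-inf}^{inf} 1 = 1 + 2*zeta(0),
# and zeta(0) = -1/2 by analytic continuation, so the count vanishes and
# sum_m ln(r_B) = (1 + 2*zeta(0)) * ln(r_B) = 0 for any r_B.
from mpmath import zeta

regularized_mode_count = 1 + 2 * zeta(0)
print(regularized_mode_count)  # 0.0
```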
Thus we see that for both the operators considered above, we get no dependence growing exponentially as in eq.(F.26), in the asymptotic AdS limit. For det(−∇ 2 ) we saw in the previous subsections that such a dependence does arise when we consider a different order of limits.
The computation of the determinant det(P†P) is very similar. Let us first study the case when the index v in eq.(F.37) is imaginary. We shall later see that there exists one eigenvalue for which v is real. For v imaginary, the main difference in this computation is the boundary conditions on the scalar field. For a general vector field decomposed as in eq.(2.22), the boundary conditions eq.(2.19) in the asymptotic AdS limit give two possible conditions on the scalar fields, eq.(F.40). The determinant of the eigenvalues determined by the second of these conditions, ∂_r ξ_{λ,m} = 0, is straightforward to compute. The appropriately normalized solution is again given by eq.(F.38), and so the product of these eigenvalues is obtained by taking the derivative of eq.(F.38), which still gives eq.(F.39).
The contribution from the other set of eigenvalues, with ∂_r^2 ψ_{λ,m} = 0, is more complicated. First, let us note that even though the boundary condition is a second order equation, it can be understood as a mixed boundary condition of the form eq.(F.30) upon using the eigenvalue equation eq.(F.36) for ψ_{λ,m}. The contribution from the modes m ≠ 0 is straightforward to obtain by taking the second derivative of the corresponding asymptotic expression in eq.(F.38). However, for the m = 0 mode, since the solution is just ψ_{0,0} = r, taking the double derivative gives zero, which using the analog of eq.(F.29) then shows that the determinant for the m = 0 sector is zero. The zero mode is in fact ψ_{0,0} itself and it corresponds to the U(1) isometry of AdS_2 under which θ → θ + c. We are actually interested in evaluating det(P†P) and would therefore need to evaluate the determinant without the zero eigenmode.
We have not been able to find a fully satisfactory way of dealing with this complication. One might hope to proceed as follows. We consider in the m = 0 sector the operator det(−∇^2 + 2 − λ) for non-zero λ and then take the λ → 0 limit suitably, removing the extra zero mode and thereby obtaining the determinant over the non-zero modes. Computing the solution ψ_{λ,0} in a perturbation series in λ near λ = 0 by imposing regularity near the origin, we obtain the solution to O(λ), and hence the determinant. The resulting r_B dependence of the net value of the determinant det(P†P), after doing the sum by zeta function regularization, is then given by

ln det(P†P) = 2 ln r_B. (F.46)

This is not a very satisfactory result though, since in the r_B → ∞ limit the resulting divergence in the determinant cannot be removed by a local counter-term (unlike for a term which grows linearly with r_B). We leave a proper resolution of this puzzle for the future.
Let us end with some comments. It is easy to see that there exists one additional discrete eigenvalue, with v = 1/2, corresponding to λ = 2. Consider the scalar fields ψ_{λ,m}, ξ_{λ,m} that satisfy the equation ∇^2 ψ_{λ,m} = 0 = ∇^2 ξ_{λ,m} with mode number m, the regular solutions for which are taken as given. It is then easy to see that near the boundary, r ≫ 1, with the choice of constants β_m = i sgn(m) α_m, the boundary conditions (2.19) are satisfied, thus showing that λ = 2 is a genuine eigenvalue which does not belong to either of the sets of eigenvalues in eq.(F.40). Also, note that this discrete eigenvalue exists only for m ≠ 0, since when m = 0 the vector field constructed out of these scalar fields vanishes identically everywhere. Further, there are no other eigenvalues apart from the ones we have obtained so far. Including this eigenvalue of course does not change the r_B dependence obtained above.
The calculation of determinants discussed in this subsection can be easily extended to the general case when the boundary is located at a large but finite value of r_B, and also to de Sitter spacetime.
G Matter coupling to the time reparametrization modes in AdS
In this appendix we will describe in more detail the coupling of the matter to the time reparametrization modes in the classical action eq.(3.9). The result can be obtained for a general boundary of length l, but for simplicity we will work out the case l ≫ 1 below. We begin with the metric eq.(2.24), in which the boundary is at r_B given by eq.(5.15) in terms of l, so that r_B ≫ 1. We next turn on a large diffeomorphism. Under such a diffeomorphism the new coordinates asymptotically close to the boundary are given in terms of a function f(θ), with the boundary lying at r̃ = r_B; f′(θ) denotes the derivative of f(θ) with respect to θ. It is easy to see from our definition of the rescaled proper time u, eq.(2.42), and eq.(2.33) how the boundary time transforms. The infinitesimal version of these transformations follows from eq.(2.28) and is discussed in eqs.(2.44), (2.45). Note that the coordinate r varies along the resulting wavy boundary. We will consider one massless scalar ϕ here. A general solution to the massless scalar equation ∇^2 ϕ = 0 is given by a mode sum with coefficients p_m which are fixed by the form of ϕ at the boundary. Near the boundary, at large r, we get the expansion in terms of ϕ_− and ϕ_+. If ϕ is given by the function φ̃(u) along the boundary, with u being the rescaled proper length as above, then we get (to leading order) a relation which determines the Fourier coefficients p_m in terms of the functions φ̃ and f. It is easy to see that the classical action for the scalar reduces on shell to a boundary term, where ds is the line element along the boundary and n^µ the unit normal. This gives an expression in which ϕ_−(θ) is given in terms of the boundary function φ̃(u) and f(u) by eq.(G.8). Inverting eq.(G.2) we can express θ as a function of u, which allows us to also express eq.(G.11) in terms of u. This gives the classical action in terms of the boundary time reparametrizations specified by θ(u) and the boundary value of the scalar φ̃(u).
At linear order in the diffeomorphisms we have θ̃ given in terms of θ in eqs.(2.44), (2.45); inserting this in eq.(G.13) gives eq.(G.14). This result agrees with (3.12) in subsection 3, after being generalised to N scalar fields.
Also, for the discussion in subsection (3.1) these formulas need to be extended to O( ) if we are to include the dependence on the large diffeomorphisms coming from the quantum part, i.e. the scalar laplacian determinant in (3.10). This can be done in a straightforward fashion along the lines above, but we spare the reader the details. This, in the limit r ≫ 1, has an asymptotic form in which ϕ_−, ϕ_+ are as before in eq.(G.5). The matter coupling can then be obtained in a manner analogous to that in the AdS case above. Doing so, we get an expression with F as defined in eq.(G.7). So, comparing eq.(G.11) and (G.17), we see that the two expressions agree up to a factor of i, and so the linearized version in eq.(G.14) will also have an additional factor of i.
H de Sitter wavefunction using Euclidean AdS contour
In this appendix we calculate the wavefunction for the de Sitter spacetime in the non-asymptotic limit, by considering modes which have m > l, where l is the length of the boundary. Although in such a case we need to carefully calculate various quantities, such as the determinants and the measure for large and small diffeomorphisms which do not decouple, we ignore all such subtleties, evaluate the measure for large diffeomorphisms, and try to do the path integral. To evaluate the wavefunction we follow the Maldacena contour, described in subsection 5.1, and so we first do the computation in the negative AdS metric of signature (0,2) and then analytically continue to the (1,1) de Sitter spacetime. Consider the metric given in eq.(H.1). It is easy to compute the Ricci scalar for this metric, which turns out to have the value R = 2. This metric is the negative of the AdS metric written in global coordinates. As before, we find that the zero modes of the operator P†P are given by vector fields taken as either the gradient or the curl of a scalar which satisfies the scalar Laplacian equation in the background eq.(H.1); this determines the vector field. For an arbitrary τ = τ_0, the boundary term for the inner product of two metric perturbations is analogous to that of eq.(2.17). This gives the measure for the path integral over the modes ĉ_m, where the superscript on δK indicates that this is the quadratic term in the time reparametrization modes c̃_m. The relation between c̃_m and ĉ_m is obtained by noting that δθ = V^θ, and is given by

c̃_m = −ĉ_m csch^2 τ (|m| cosh τ + sinh^2 τ + m^2) tanh^{|m|}(τ/2). (H.9)

Using this to find the quadratic action in terms of the variables ĉ_m, we obtain

δK^{(2)} = − Σ_{m≥1} ĉ_m ĉ_{−m} |m|^2 (m^2 − 1) csch^3 τ_0 sech^3 τ_0 (|m| cosh τ_0 + sinh^2 τ_0 + m^2)^2 tanh^{2|m|}(τ_0/2). (H.10)

The path integral over the modes ĉ_m then gives Ψ_{nAdS}, where u is related to the proper time on the boundary and is defined through the relation eq.(B.34).
Note that we have ignored the contribution from the topological term eq.(2.3); the exponential prefactor above is the classical contribution coming from the leading term in the extrinsic trace eq.(B.33). The sum in eq.(H.11) can be regulated using zeta-function regularization. Defining the variables α, m_1, m_2, m_3, m_4, with

α = 32π cosh^3 τ_0 γ φ_B, γ = 1/(8πG),

we get the regularized value. For evaluating the large τ_0 ≫ 1 behaviour, we note that π e^{τ_0} = l, and so, using the Stirling approximation for the Gamma functions, we obtain the asymptotic form. Now doing a continuation to the Lorentzian de Sitter spacetime by taking the continuation in eq.(H.15), we arrive at eq.(H.16). We see that there is an exponential damping term for large l. The ± signs in the above expression correspond to the ± signs in eq.(H.15) for the different ways of analytic continuation. As can be seen from eq.(H.16), the exponential damping is independent of the choice of analytic continuation. Moreover this exponential damping cannot be removed by adding a length-dependent counterterm with a real coefficient, as that would have an explicit factor of i as in the action eq.(5.1).
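The large-τ_0 step above uses the Stirling approximation ln Γ(z) ≈ (z − 1/2) ln z − z + (1/2) ln 2π, whose leading correction is 1/(12z). A quick numerical check of the size of that error at an illustrative argument:

```python
# Stirling approximation used for the large-tau_0 limit of the Gamma
# functions: ln Gamma(z) ~ (z - 1/2) ln(z) - z + (1/2) ln(2 pi),
# with leading correction 1/(12 z).
from mpmath import mp, loggamma, log, pi, mpf

mp.dps = 30
z = mpf(50)  # illustrative large argument
stirling = (z - mpf(1) / 2) * log(z) - z + log(2 * pi) / 2
error = loggamma(z) - stirling
print(error)  # close to 1/(12*50) ~ 0.00167
```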
I.1 Measure for large diffeomorphisms and Schwarzian action
In this appendix we will elaborate more on the calculation of the measure for the large diffeomorphisms in the double trumpet topology and also show the calculation of the Schwarzian action in explicit detail. The line element is given by eq.(4.2). The solutions for the scalar field ψ satisfying eq.(2.23) are given in eq.(4.15). We will use the form in eq.(4.9) to calculate the measure and the form in eq.(4.15) to evaluate the Schwarzian action, and finally relate them using eq.(4.16). We can now construct the vector field corresponding to the large diffeomorphisms. In the disk topology, the modes m = ±1, 0 for the vector field taken as the curl of the scalar field turned out to be isometries of the spacetime. However, we now have only one isometry, corresponding to the m = 0 mode, so the large diffeomorphisms correspond to modes with |m| ≥ 1. The components of the vector field eq.(4.8), computed in terms of the solution eq.(4.9), determine the metric perturbation δg_{ab} = (PV)_{ab}. It is now straightforward to compute the measure for the modes corresponding to the large diffeomorphisms. The measure is obtained by taking the inner product of two metric perturbations P V_L^{(1)} and P V_L^{(2)} using eq.(2.9), which just reduces to the boundary term given in eq.(2.17). We now have two boundary terms due to the two boundaries as r → ±∞. In the value of this boundary term at a single boundary, the upper sign is to be used at the right boundary (r → ∞) and the lower sign at the left boundary (r → −∞). The relative sign between the two boundaries arises due to the change in the sign of the outward normal used to compute this boundary term in eq.(2.17). In the asymptotic limit, eq.(I.3) simplifies. We will now evaluate each of the boundary terms separately. The contribution to the measure coming from the boundary term at r → ∞, denoted M_2, can be evaluated by following the conventions in eqs.(4.12), (4.13).
Using the behaviour as r → ∞ together with eq.(I.5), and noting the corresponding behaviour as r → −∞, the contribution to the measure from the boundary term at r → −∞, denoted M_1, follows. Reading off the measure for the large diffeomorphism modes p_m, q_m, r_m, s_m from the above, we have

(16 b m (m^2 + 1) sinh(πm))^2 dp_m dq_m dr_m ds_m. (I.13)

We will now evaluate the action for the large diffeomorphisms. The action is given by the boundary term in the JT action eq.(2.51). Using eqs.(I.5), (I.7), we see that the scalar field solution eq.(4.15) as r → ∞ becomes

ψ|_{r→∞} ≈ 2 Σ_m e^{imθ} δ_m r sinh(mπ), (I.14)

and at r → −∞ we get a similar expression (I.15), which shows that the large diffeomorphisms at the left and right boundaries are independent, and so we can compute the action at each of the boundaries independently. In the asymptotic AdS limit, parametrizing θ(u) in terms of the diffeomorphism as in (I.16), the boundary term in the JT action, eq.(B.17), to quadratic order in the large diffeomorphisms becomes a boundary expression, where we used the fact that the dilaton is of the form eq.(2.7) at the boundary in the asymptotic AdS limit. Noting the form of the large diffeomorphism near the boundary at r → ∞, the term quadratic in the large diffeomorphism in the action follows, where we have used eq.(I.10) and the first line in eq.(4.4) to obtain it.
Similarly, the action for the boundary term near r → −∞ can be obtained by noting that the vector field corresponding to the large diffeomorphisms is given by

e^{imu} γ_m sinh(mπ), (I.20)

and the term quadratic in the large diffeomorphism in the action follows, where again we have used eq.(I.10) and the second line in eq.(4.4) to obtain the result. Combining eq.(I.21) and eq.(I.19) and expressing in terms of p_m, q_m, r_m, s_m using eq.(4.27), the net action then follows. We will now elaborate on the orthogonality of the different classes of metric perturbations, namely those corresponding to the twist, the b-modulus, and the small and large diffeomorphisms. The discussion regarding the inner product of small and large diffeomorphisms, which in the asymptotic AdS limit satisfies the stated inequality, is presented towards the end of appendix E, with the notation being the same as in appendix E. Let us first consider the inner product of the metric perturbation corresponding to the b-modulus with the others. From eqs.(4.22), (4.20), noting the metric eq.(4.2) and using the definition eq.(2.9), it is straightforward to see that

⟨P V_mod, P V_tw⟩ = 0. (I.24)

Further, since there is no θ dependence in P V_b whereas the large diffeomorphisms have the θ dependence e^{imθ}, m ≥ 1, the corresponding inner products also vanish. Considering the inner product of a metric perturbation for a small diffeomorphism with P V_mod, in obtaining the second line we used eq.(4.23) and, for a small diffeomorphism, that eq.(E.6) is satisfied at the boundary. The vanishing of the second line then follows by noting that P V_b does not have off-diagonal components, see eq.(4.20). Now we shall consider the inner product of V_tw with the other metric perturbations. The inner product with P V_b was already obtained in eq.(I.24).
The inner product of P V_tw with the large diffeomorphisms also vanishes since, as before, the large diffeomorphisms have a non-trivial θ dependence whereas the twist perturbation has no θ dependence, as it corresponds to the m = 0 sector, see eq.(4.20). The inner product of P V_tw and P V_{s,λ,m} can be simplified following the steps in eq.(I.26), and is non-zero for the m = 0 mode of V_{s,λ,m} after noting eq.(4.20). So, to estimate it, we consider the quantity

⟨P V_tw, P V_{s,λ,m}⟩ / (⟨P V_tw, P V_tw⟩ ⟨P V_{s,λ,−m}, P V_{s,λ,m}⟩)^{1/2}. (I.28)

We shall show that in the asymptotic AdS limit this quantity goes as O(r^{−3/2}). The quantity ⟨P V_tw, P V_tw⟩ has already been computed in eq.(4.29). For m = 0 we use eq.(E.3). It is also easy to see from the line element eq.(4.2) that the normal vector at either of the boundaries, r = −r_{B1} or r = r_{B2}, has the stated behaviour, where ∂ stands for both boundaries. From eq.(4.20) we obtain the corresponding behaviour of P V_tw. Further, noting that ⟨P V_tw, P V_tw⟩ ∼ O(r^0) from eq.(4.29), and combining eqs.(I.29), (I.30), (I.31), we have

⟨P V_tw, P V_{s,λ,m}⟩ / (⟨P V_tw, P V_tw⟩ ⟨P V_{s,λ,m}, P V_{s,λ,−m}⟩)^{1/2} ∼ O(r^{−3/2}), (I.33)

which vanishes in the asymptotic AdS limit. So, in total, noting all the above results, eq.(4.25) then follows immediately.
I.2 Matter in double trumpet calculations
In this appendix we shall elaborate on the details used in the discussion in subsection 4.2.
We shall carefully evaluate the determinant of the scalar laplacian, det(−∇^2). We consider a massless scalar in the background of the double trumpet topology, with the metric written in the conformally flat coordinate system, eq.(A.16). We can compute the dependence on b by noting that the metric above is conformally flat, so we can use the conformal anomaly to evaluate the contribution due to the conformal factor and then compute the contribution from the flat metric separately; since the theory of a massless scalar field is a conformal field theory, the conformal anomaly fixes the former contribution. The relation between determinants of conformally related metrics ĝ_{ab} = e^{2σ} ḡ_{ab} is standard, where quantities denoted by hats are calculated with respect to the metric ĝ. In the case at hand

σ = − ln cos r_*, (I.36)

and R̄ = 0. For a non-wavy boundary, the boundary can be specified by

r_* = −r_{*1}, r_* = r_{*2}, with r_{*1,2} > 0, (I.37)

where the subscripts 1 and 2 correspond to the left and right boundaries respectively. The normal vectors normalized with respect to the flat metric ḡ at the left and the right boundaries are given by

n̄^µ_2 = (1, 0), n̄^µ_1 = (−1, 0), (I.38)

and the extrinsic curvature K̄ = 0. So the anomaly contribution involves (tan r_{*1} + tan r_{*2} − r_{*1} − r_{*2}). (I.39) We shall now show the computation of the b dependence in the determinant coming from the flat metric, where we have shifted the range of r_* by π/2 so as to simplify the computations. Taking the θ circle as the time direction in the Euclidean scalar field theory, the periodicity of the θ circle determines the temperature at which we need to calculate the thermal partition function.
So, we have the subscript f in Z_f to denote that the computation is being done for the flat metric; H is the Hamiltonian. The solution of the matter equation can be expanded in modes, and the Hamiltonian in terms of the modes then follows. Using the modular transformation property of η(τ), we can study the behaviour of the partition function near b = 0, giving eq.(I.50). Putting together eq.(I.39) and eq.(I.48), we get the full b dependence of det(−∇^2), eq.(I.51), involving (tan r_{*1} + tan r_{*2} − r_{*1} − r_{*2}). The terms tan r_{*1}, tan r_{*2} in the exponent diverge when r_{*1}, r_{*2} → π/2. This can be avoided by adding a length-dependent term with the appropriate coefficient, following which we have, in the limit r_{*1}, r_{*2} → π/2, an expression involving (tan r_{*1} − sec r_{*1} + tan r_{*2} − sec r_{*2} − π) and η(ib/2π). One can also compute the contribution to the determinant from the flat metric eq.(I.40) directly by noting that the eigenvalues in the geometry eq.(I.40), with Dirichlet boundary conditions along the θ and r_* directions, are given by

λ = n^2 + m̃^2, (I.53)

where n is the mode number along the r_* direction and m̃ is related to the mode number m in the θ direction by eq.(4.10).
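The b → 0 behaviour extracted above uses the modular transformation of the Dedekind eta function, η(−1/τ) = √(−iτ) η(τ). This can be checked numerically; the sketch below builds η from the q-Pochhammer symbol, η(τ) = q^{1/24} (q; q)_∞ with q = e^{2πiτ}, and verifies the transformation at the sample point τ = 2i.

```python
# Numerical check of the modular transformation of the Dedekind eta function,
#   eta(-1/tau) = sqrt(-i*tau) * eta(tau),
# using eta(tau) = q^{1/24} * (q; q)_infinity with q = exp(2*pi*i*tau).
from mpmath import mp, exp, pi, mpc, sqrt, qp

mp.dps = 30

def eta(tau):
    q = exp(2j * pi * tau)
    # qp(q) is the q-Pochhammer symbol (q; q)_infinity
    return exp(1j * pi * tau / 12) * qp(q)

tau = mpc(0, 2)               # sample point tau = 2i
lhs = eta(-1 / tau)           # eta(i/2)
rhs = sqrt(-1j * tau) * eta(tau)
print(abs(lhs - rhs))  # essentially zero
```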
I.3 Coleman method for computing determinants in AdS double trumpet
In this section we shall show in detail how to evaluate the various determinants in the Euclidean AdS double trumpet geometry. The metric for the double trumpet geometry is given by

ds^2 = dr^2/(r^2 + 1) + (r^2 + 1) dθ^2. (I.54)

The left and right boundaries are taken to be located at r = −r_{B1} and r = r_{B2} respectively, where r_{B1}, r_{B2} > 0. We shall use the form of the general solution of the wave equation to compute the determinants of (−∇^2) and (−∇^2 + 2) with Dirichlet boundary conditions. To begin with, let us compute the value of the determinant of (−∇^2) using the Coleman trick. To do this, we need to impose the required boundary conditions and find the appropriately normalized solution to get the correct m̃ dependence. We note, however, that the overall r_B normalization of the solution, where r_B > 0 refers to either r_{B1} or r_{B2}, the location of the boundary, cannot be uniquely fixed. We will discuss more about this point later. Consider first the case of m̃ ≠ 0 modes. Imposing the Dirichlet boundary condition at the left boundary, specified as r = −r_{B1}, we get the relation between the constants k_1 and k_2. Now we need to fix the constant k based on the following requirements. Viewed as a complex function of λ, the ratio of two solutions with different mode numbers m_1 and m_2 should have no extra poles or zeroes other than those corresponding to the eigenvalues of the operator (−∇^2). Also, the ratio of two solutions should approach unity as |λ| goes to infinity in any direction other than the positive real axis. This completely fixes the m̃ dependence of the constant k, although the r_{B1}, r_{B2} dependence is ambiguous. The latter can be fixed by the requirement that the final answer should be independent of r_B, since otherwise, in the asymptotic AdS limit, we would get a ln r_B divergence in the net action, which cannot be cancelled away by a local counterterm.
First, it is useful to note the asymptotic expansions of the associated Legendre functions P^b_a(±ir_B). From eq. 8.1.5 of [124] we obtain the form at the left boundary. Using this and the fact that F(a, b, c, z) ≈ 1 + O(z) for small z, we find the asymptotic forms at −ir_{B1} and ir_{B2}. However, there is a small subtlety, which is as follows. After imposing the Dirichlet boundary conditions at the left boundary and relating the coefficients, the solution has to be continued through r = 0 to the right boundary. In doing so, one has to use eq. 8.1.4 of [124], and so our asymptotic expansions will have a relative factor of e^{m̃π} between z = −ir_B and z = ir_B due to the presence of the branch cut along (1, −∞). The physics problem of continuing the solution through r = 0 is unambiguous, as it is a perfectly well-defined point in the double trumpet, whereas the expansions in [124] are defined with a different choice of branch, which can be seen from the factor (z^2 − 1)^{−µ/2} in eqs. 8.1.4 and 8.1.5. Thus the solution is not continuous across r = 0. To get rid of this problem, we multiply the asymptotic form at ir_B by an extra factor of e^{πm̃}. Using these asymptotic forms, we can immediately calculate the solution at r = ir_{B2}. From the above we see that as |v| → ∞ the second term and the corresponding v → −v term are subleading, and so we need to choose k such that the m̃ dependence in the remaining terms cancels in this limit and there are no extra poles or zeroes. So, in the limit |λ| → ∞, we get

ψ_{λ,m̃} = (2 sinh(m̃π) k / (π √(4 r_{B2} r_{B1}))) [(4 r_{B2} r_{B1})^v + (4 r_{B2} r_{B1})^{−v}]. (I.66)

From the above expression we see that there will be no spurious poles or zeroes in the ratio of two solutions.
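The step "F(a, b, c, z) ≈ 1 + O(z) for small z" is just the leading term of the hypergeometric series F(a, b, c, z) = 1 + (ab/c) z + O(z²). A quick numerical check with illustrative parameter values (not the specific ones appearing in the Legendre expansions):

```python
# Leading small-z behaviour of the Gauss hypergeometric function:
#   F(a, b, c, z) = 1 + (a*b/c) z + O(z^2).
# The parameter values below are illustrative.
from mpmath import mp, hyp2f1, mpf

mp.dps = 30
a, b, c = mpf('0.5'), mpf('1.5'), mpf(2)
z = mpf('1e-6')
val = hyp2f1(a, b, c, z)
leading = 1 + (a * b / c) * z
print(val - leading)  # residual is O(z^2)
```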
So we choose k as

k = 1/(2 sinh(m̃π)). (I.67)

The solution then follows. Comparing this with the answer obtained earlier for the determinant in eq.(I.51), we see that there is a mismatch of the exponential term coming from the conformal part of the metric in the earlier calculation. We could not satisfactorily establish the reason for this, but it could be due to the different order of limits being implemented in the Coleman method used above in calculating the determinant, as was also mentioned in F.1. Now we will evaluate det(−∇^2 + 2) in the same manner for Dirichlet boundary conditions. The solution is the same as in eq.(I.58) except that the value of v is now given by

v = √(9/4 − λ). (I.72)

The arguments leading to eq.(I.68) remain the same and hence we get the same expression as in eq.(I.68). In the final step of computing the determinant, when we set λ = 0, we get v = 3/2, and so the leading term in the solution becomes

ψ_{0,m̃} = r_{B2} r_{B1} / (Γ(2 − im̃)Γ(2 + im̃)) = (sinh(m̃π) / (πm̃(1 + m̃^2))) r_{B2} r_{B1}. (I.73)

To get rid of the r_{B1}, r_{B2} dependence, we can further normalize the solution by a factor of 1/(r_{B1} r_{B2}). The reasoning for this is the same as in the calculation for (−∇^2), namely the absence of local counterterms to cancel a ln r_B divergent term in the determinant. We would also get dependence on r_{B2}, r_{B1} from the m̃ = 0 sector, which can again be normalized to unity, and so we will ignore the contribution from the m̃ = 0 sector. The value of the determinant of (−∇^2 + 2) then follows. Now we shall repeat the steps for the operator P†P. As in the case of the disk, we have two sets of eigenvalues corresponding to the boundary conditions ∂_r ξ = 0 and ∂_r^2 ψ = 0, see eqs.(E.15), (E.16). Let us first compute the contribution to the eigenvalues from ξ with the boundary condition ∂_r ξ = 0.
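The second equality in (I.73) uses the Gamma-function identity Γ(2 − ix)Γ(2 + ix) = πx(1 + x²)/sinh(πx), which follows from the reflection formula Γ(1 + ix)Γ(1 − ix) = πx/sinh(πx). A numerical spot-check at an arbitrary value of x:

```python
# Check of the Gamma-function identity used in (I.73):
#   Gamma(2 - i*x) * Gamma(2 + i*x) = pi * x * (1 + x^2) / sinh(pi * x),
# equivalently 1/(Gamma(2-ix)Gamma(2+ix)) = sinh(pi x)/(pi x (1 + x^2)).
from mpmath import mp, gamma, sinh, pi, mpf

mp.dps = 30
x = mpf('1.7')  # arbitrary sample point
lhs = gamma(2 - 1j * x) * gamma(2 + 1j * x)
rhs = pi * x * (1 + x**2) / sinh(pi * x)
print(abs(lhs - rhs))  # essentially zero
```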
The boundary condition here is eq.(I.76). Imposing it relates the constants k_1, k_2 in the solution for ξ, and we get the solution, which using eq.(I.64) becomes eq.(I.87). Again noting that in the limit |λ| ≫ 1 the first term dominates, we choose k as in eq.(I.67). With this choice of normalization in eq.(I.85), setting λ = 0, we get the leading term as

∂_r^2 ψ_{λ,m̃}(r)|_{r=r_{B2}} = m̃ sinh(m̃π) / (π r_{B1}^3 r_{B2}^3).
(I.88) In getting the above result, one has to keep the subleading terms arising from expanding the hypergeometric functions appearing in eq.(I.62) when evaluated for ν = v − 1/2 = 1. Again ignoring the r_{B1}, r_{B2} dependence, with the understanding that it can be cancelled by an appropriate normalization, and also ignoring the m̃ = 0 sector, we get the total value of the determinant of P†P. So we find that there is a non-trivial b-dependence in the ratio √(det P†P)/det(−∇^2 + 2). However, using results in string theory for the partition function of the ghost fields, it is straightforward to compute the determinant of the operator P†P directly. To do so we use the conformally flat form of the double trumpet geometry as in eq.(I.34). The contribution due to the conformal factor is the same as in eq.(I.39), with a factor of −26 multiplied in the exponent. The contribution from the flat part is then obtained by reading off the result from eq. 7.4.1 of [122], the string theory vacuum amplitude for open strings on a cylinder, which gives

det(P†P) ∼ η(ib/2π) b^2, (I.91)

which then has the correct b → 0 limit as in eq.(I.75).
J.1 Coleman method computation
In this section we shall compute the determinants in the de Sitter double trumpet topology. We will find that the computation of the determinants is very similar to that in the case of the AdS double trumpet, and so we shall only work out in detail the case of the scalar laplacian determinant. To do so, we view the metric of the de Sitter double trumpet as arising from the analytic continuation of the −AdS double trumpet metric: continuing by

r → ±ir (J.2)

at both ends of the AdS double trumpet, we get the metric of the dS double trumpet. The solutions for an eigenvector of the scalar laplacian in the AdS double trumpet, with mode number m, satisfy the corresponding equation. Now, to have the correct asymptotic expansions at the left and right boundaries, we need to look at the contour a bit more carefully. Due to the presence of the branch cut from (−1, ∞) in the complex r plane, we take the contour for the de Sitter double trumpet to start at r = −∞ just below the real axis and first rotate counterclockwise to r = −ir_{B1}, r_{B1} → ∞. We then end up in the AdS double trumpet geometry, in which we go from the left end to the right end along the imaginary axis, from which we pick up a relative factor of e^{m̃π}, see the discussion after eq.(I.63). From the point r = ir_{B2}, r_{B2} → ∞, we rotate clockwise to end up at r = ∞. However, note that there are other equally good choices of contour, say rotating counterclockwise in the last step, or beginning from above the cut at r = −∞. In total we have four possible choices of contour, which are depicted in Fig. 3. Our choice corresponds to the path EDCBF. Other choices will only change the phase factor e^{±iπv} and will end up giving the same final answer.
Now, following the same steps as in the AdS double trumpet, imposing the Dirichlet boundary conditions at one end of the boundary, r = −r_{B1}, we get the form of the solution as in eq.(I.61). With the choice of contour we made, the asymptotic forms of the associated Legendre functions change accordingly, and hence we get the value of the scalar field solution at r = r_{B2} ≫ 1. The same reasoning as in the case of the AdS double trumpet, around eq.(I.66), shows that k should be chosen as in eq.(I.67), and so the wavefunction, and consequently the laplacian determinant, are as given by eq.(I.70). It should be noted that the only difference between the associated Legendre function expansions in the two cases, as seen from eq.(I.64) and eq.(J.9), is in the overall constant phases and factors of v-dependent exponentials, due to the fact that the argument of the associated Legendre function is imaginary and real in the AdS and dS double trumpets respectively. However, this does not make a difference in the value of k, and hence the value of the solution ψ_{λ,m̃} for v = 1/2 becomes

ψ_{0,m̃} = sinh(m̃π) / (m̃π), (J.11)

which is the same as in eq.(I.69). It then immediately follows that the value of the scalar laplacian determinant is the same as before, eq.(I.70), up to an irrelevant numerical constant.
The solution satisfying the Dirichlet boundary conditions is given by

ψ ∼ e^{imθ} sin(√(λ − m̃^2) r_{*,AdS}), (J.18)

where m̃ is related to m by eq.(J.6). Continuing this solution to dS using eq.(J.14) gives

ψ ∼ ±e^{imθ} sinh(√(λ − m̃^2) r_{*,AdS}), (J.19)

which correctly satisfies the Dirichlet boundary condition as r_{*,AdS} → 0. To obtain the continuation at the right end, where r_{*,AdS} → π, we do the analogous continuation, which combined with the contribution from the conformal factor gives the full dependence. Note that the flat-part contribution is the same as in the AdS case since the eigenvalues are the same, and hence we will get the same b → 0 dependence as in eq.(I.50) for the scalar partition function.
"year": 2021,
"sha1": "8486eda0c095f5148606d28f3481fb4f86c0fc68",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP10(2021)204.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "97eef2cd5999a342564c6451ebc6c1615dbb64f0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Alexithymia and emotional regulation: A cluster analytical approach
Background Alexithymia is a familiar concept in psychosomatic research. The aim of this study was to investigate whether there are subtypes of alexithymia associated with different traits of emotional expression and regulation in a group of healthy college students. Methods 1788 healthy college students completed the Chinese version of the 20-item Toronto Alexithymia Scale (TAS-20) and a set of questionnaires assessing emotional status and regulation. A hierarchical cluster analysis was conducted on the three factor scores of the TAS-20, and the cluster solution was cross-validated against the corresponding emotional-regulation measures. Results The results indicated four subtypes of alexithymia: extrovert-high alexithymia (EHA), general-high alexithymia (GHA), introvert-high alexithymia (IHA) and non-alexithymia (NA). The GHA was characterized by high scores on all three factors; the IHA by high scores on difficulty identifying feelings and difficulty describing feelings but a low score on externally oriented thinking; the EHA by a high score on externally oriented thinking but normal scores on the others; and the NA by low scores on all factors. The GHA and IHA were dominated by a suppressive pattern of emotional regulation and expression, with worse emotional status than the EHA and NA. Conclusions The current findings suggest four subtypes of alexithymia characterized by different emotional-regulation manifestations.
Background
Alexithymia has been a familiar concept, described as "no words for feelings," in psychiatry and psychosomatic medicine since the term was first coined by Sifneos [1]. Its definition has since been refined into five dominant features: (1) difficulty identifying one's emotions; (2) difficulty describing one's feelings verbally; (3) a reduced capacity, or incapacity, to experience emotions; (4) an absence of tendencies to imagine others' emotions, or an externally oriented cognitive style; and (5) a poor capacity for fantasy or symbolic thought [2]. Alexithymia refers to a specific disturbance in emotional processing, especially reduced capacity for verbalizing and recognizing emotion. Longitudinal studies also suggest that alexithymia is significantly associated with the severity of depression [3], anxiety [4] and schizophrenia [5]. The prevalence of alexithymia is significantly higher in patients with psychosomatic disorders, such as eating disorders [6], fibromyalgia syndrome [7] and low-back pain [8], than in control groups.
Researchers [9,10] found that alexithymia overlaps with various dimensions of the Five-Factor Model (FFM) of personality, such as external locus of control and irrational beliefs, but not impulsiveness, in an undergraduate student sample. It has been speculated that alexithymia is a cognitive state of externally oriented thinking, with emotional instability and insecure performance in controlling stressful situations. However, it has also been debated whether alexithymia is an affect-deficit disorder (state-oriented) or a continuous personality variable (trait-oriented). Tolmunen et al. [11] considered alexithymia a stable personality trait in general, and their 11-year follow-up study suggested that alexithymia might increase vulnerability to depressive symptoms [11]. Honkalampi [12] further demonstrated that depressive symptoms might act as a mediator between alexithymia and psychiatric morbidity. Parker and Mattila used taxometric analysis to synthesize several studies of alexithymia in large samples, including the general population and psychotic patients [11,12]. These findings suggest that "alexithymia is not a discrete affect deficit type of person but represents 'the lower tail' of an emotion processing ability that is continuously distributed in the general population" [11].
The purpose of this study was to examine whether there are subtypes of alexithymia characterized by different behavioural manifestations. To do so, the current study adopted a cluster analytical approach to examine whether there were natural groupings of people characterized by different psychological features associated with alexithymia. Cluster analysis is a statistical procedure for determining whether cases can be placed into groups because they share properties in common, while cases in different clusters are as dissimilar as possible. It was hypothesized that there are various subtypes of alexithymia characterized by different psychological features associated with alexithymia.
Individuals at various levels of alexithymia would be expected to adopt different ways of expressing and regulating their emotions. The higher-alexithymia groups were expected to show more severe depressive or anxious emotional status and to be more likely to adopt improper regulation strategies.
Method
Participants 1788 college students (freshmen and sophomores) were recruited from three regional universities in Guangzhou, south China. 1071 were male and 616 were female, aged 20.44 ± 1.40 years and 20.51 ± 1.39 years respectively; 101 individuals did not report their gender or age. Economic status was recorded with a multiple-choice question on monthly income per person. Among all subjects, 110 individuals did not report their economic status.
All subjects were informed in writing that the aim of the study was to examine the psychological status of Chinese youngsters, and all participated voluntarily. All of them received feedback on the assessment results via email. This study was approved by the ethics committee of Sun Yat-Sen University.
Measurements
Alexithymia was assessed with the 20-item Toronto Alexithymia Scale (TAS-20) [13,14]. It is a 20-item self-report instrument rated on a 5-point Likert-type scale ranging from 1 (strongly disagree) to 5 (strongly agree). Total scores range from 20 to 100, with higher scores indicating a higher level of alexithymia. Within the five criteria, the TAS-20 consists of 3 factors: difficulty identifying feelings (DIF); difficulty describing feelings (DDF); and externally oriented cognitive style of thinking (EOT). The Chinese version, which was adopted in this study, has been shown to have the same factor structure as the original and good internal consistency [15]. Its Cronbach's α coefficient was 0.83, the test-retest reliability coefficient was 0.87, the mean inter-item correlation coefficients ranged from 0.13 to 0.32, the correlation coefficients of the three factors with the total scale score ranged from 0.72 to 0.82, and the correlation coefficients among the three factors ranged from 0.29 to 0.54 [16].
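The TAS-20 scoring described above can be sketched as a short function. The item-to-factor assignments and reverse-keyed items below follow commonly reported TAS-20 scoring keys and are assumptions of this sketch; verify them against the scale manual before use.

```python
# Sketch of TAS-20 scoring: 20 items rated 1-5, summed into three
# factors (DIF, DDF, EOT) and a 20-100 total, as described in the text.
REVERSE = {4, 5, 10, 18, 19}          # reverse-keyed items (1-indexed)
FACTORS = {
    "DIF": [1, 3, 6, 7, 9, 13, 14],
    "DDF": [2, 4, 11, 12, 17],
    "EOT": [5, 8, 10, 15, 16, 18, 19, 20],
}

def score_tas20(responses):
    """responses: list of 20 ratings (1-5), in item order 1..20."""
    if len(responses) != 20 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("expected 20 ratings in 1..5")
    # flip reverse-keyed items so higher always means more alexithymic
    keyed = [6 - r if i + 1 in REVERSE else r for i, r in enumerate(responses)]
    scores = {f: sum(keyed[i - 1] for i in items) for f, items in FACTORS.items()}
    scores["total"] = sum(keyed)      # ranges 20 (low) to 100 (high)
    return scores
```

For example, a respondent answering 3 on every item scores 60 in total, the midpoint of the 20-100 range.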
Emotion expression tendency was assessed with the Chinese version of the Emotional Expressivity Scale (EES) [17], a 17-item self-report measure of the ability to express emotion, rated on a 6-point Likert-type scale (1 = never true to 6 = always true). There are two factors in the Chinese version, namely emotional suppression and emotional expression [18]. The Cronbach's alpha coefficient for the total scale showed high internal consistency (0.816), and Cronbach's alphas for the two factors were 0.84 and 0.78 respectively, indicating adequate internal consistency [18]. A higher total score reflects a greater ability to express emotion; a higher expression factor score indicates a stronger intention to express, whereas a lower suppression factor score indicates a stronger inclination to control emotion.
The Emotion Regulation Questionnaire (ERQ) [19] was used to measure emotion regulation. The ERQ is a 10-item checklist capturing two commonly used emotion-regulation strategies, reappraisal and suppression. Reappraisal refers to changing the way one thinks about a potential emotional event, whereas suppression refers to suppressing one's response when facing the same emotional event. Subjects rated their responses on a 7-point Likert-type scale (1 = totally disagree to 7 = totally agree) regarding their usual ways of regulating emotion. The test-retest reliability and α coefficient of the Chinese version of the ERQ were 0.82 and 0.85 for the reappraisal dimension, and 0.79 and 0.77 for suppression [20]. A higher score indicates a greater tendency to adopt that strategy. Reappraisal is considered a more appropriate way to regulate emotion than suppression.
Depression was measured with the Beck Depression Inventory (BDI) [21,22], a 21-item scale assessing depressive problems, with higher scores representing a greater depressive tendency. The current study adopted the Chinese version of the BDI, whose Cronbach's alpha coefficient was found to be 0.85 [23].
Anxiety was assessed with the Chinese version of the state portion of the State-Trait Anxiety Inventory (STAI-T) [24], a self-report scale containing 20 items assessing the level of anxiety, rated on a 4-point Likert-type scale (1 = never to 4 = always). The Cronbach's alpha coefficient indicated high internal consistency (0.81). A higher score indicates a more severe anxious state.
Data analysis
The Statistical Package for Social Sciences (SPSS) 15.0 (SPSS Inc, Chicago, IL, USA) was used for all statistical analyses reported.
Independent-samples t-tests were conducted to analyze the gender effect on total TAS-20 scores. ANOVA was conducted to evaluate the potential effect of economic status on total TAS-20 scores, and correlation analyses were used to explore associations of education and age with TAS-20 scores.
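The preliminary analyses just described (t-test, one-way ANOVA, Pearson correlation) can be sketched with SciPy. The data below are synthetic, generated only to mimic the sample's reported means and SDs; they are not the study's data.

```python
# Illustrative sketch of the preliminary analyses: an independent-samples
# t-test for gender, a one-way ANOVA across economic-status groups, and a
# Pearson correlation of TAS-20 totals with age. All data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tas_male = rng.normal(49.6, 8.7, 1071)      # synthetic TAS-20 totals
tas_female = rng.normal(49.0, 8.6, 616)

t, p_t = stats.ttest_ind(tas_male, tas_female)           # gender effect
income_groups = [rng.normal(49.5, 8.7, 300) for _ in range(5)]
f, p_f = stats.f_oneway(*income_groups)                  # economic status
age = rng.normal(20.5, 1.4, 1687)
r, p_r = stats.pearsonr(age, rng.normal(49.5, 8.7, 1687))  # age correlation
print(round(t, 2), round(f, 2), round(r, 2))
```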
Cluster analyses were conducted in two phases. First, a hierarchical cluster analysis was conducted using the three TAS-20 factor scores (difficulty identifying feelings; difficulty describing feelings; externally oriented cognitive style of thinking) as the clustering variables, with the between-group linkage method and a squared Euclidean distance measure to discriminate clusters. Second, the cluster solution was validated with analyses of variance (ANOVA) comparing the identified groups on the Emotional Expressivity Scale and its subscales, the Emotion Regulation Questionnaire, the Beck Depression Inventory and the state portion of the State-Trait Anxiety Inventory.
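The first phase above can be sketched with SciPy. SPSS's "between-groups linkage" corresponds to average (UPGMA) linkage, and the distance measure is squared Euclidean, as stated in the text. The synthetic factor-score profiles below are assumptions chosen only to resemble the four reported cluster shapes.

```python
# Sketch of hierarchical clustering on (DIF, DDF, EOT) factor scores with
# average ("between-group") linkage and squared Euclidean distance, cut
# into four clusters. Data are synthetic, not the study's.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
X = np.vstack([
    rng.normal([14, 10, 22], 2, (120, 3)),   # EHA-like: high EOT only
    rng.normal([24, 18, 24], 2, (30, 3)),    # GHA-like: high on all
    rng.normal([24, 18, 14], 2, (25, 3)),    # IHA-like: high DIF/DDF
    rng.normal([12, 9, 13], 2, (25, 3)),     # NA-like: low on all
])
d = pdist(X, metric="sqeuclidean")           # squared Euclidean distances
Z = linkage(d, method="average")             # between-group (UPGMA) linkage
labels = fcluster(Z, t=4, criterion="maxclust")
print(sorted(np.bincount(labels)[1:]))       # cluster sizes
```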
Results
No significant difference was found in total mean TAS-20 scores between males and females (49.63, SD 8.70 vs 48.96, SD 8.60; p = 0.13). Age was significantly correlated with the total mean TAS-20 score (r = 0.05, p = 0.04), whereas there was no significant association between education and the TAS-20 total score (r = 0.04, p = 0.11). No significant difference in total mean TAS-20 scores across economic-status groups was found (F = 2.06, p = 0.08). Given that the effect of age on TAS-20 scores was negligible, it was not controlled for in subsequent between-cluster comparisons. Table 1 shows the four subtypes of alexithymia: extrovert-high alexithymia (EHA), general-high alexithymia (GHA), introvert-high alexithymia (IHA) and non-alexithymia (NA). The EHA group was characterized by a relatively high score on externally oriented cognitive style, normal scores on difficulty identifying feelings and difficulty describing feelings, and contained most of the cases (77.3%). The GHA group was characterized by high scores on every alexithymia factor. The IHA group was characterized by significantly high scores on difficulty identifying feelings and difficulty describing feelings, which refer to one's own emotional experience, but a relatively low score on externally oriented cognitive style of thinking. Finally, the NA group was characterized by generally low alexithymia scores.
An ANOVA showed that the four subtypes of alexithymia differed significantly in emotional status, emotional expression and regulation, as expected (Table 2). The general-high alexithymia (GHA) and introvert-high alexithymia (IHA) groups showed significantly higher levels of depression and anxiety than the extrovert-high alexithymia (EHA) and non-alexithymia (NA) groups. The GHA group demonstrated significantly higher scores on the suppression factors of the Emotional Expressivity Scale (EES) and Emotion Regulation Questionnaire (ERQ). The IHA group also demonstrated a higher suppressive tendency in expressing emotion, but adopted more reappraisal strategies in regulating emotion than the GHA group. The NA group exhibited the greatest willingness to express emotions and to choose reappraisal strategies to regulate them, and was associated with the fewest depressive and anxiety problems. The EHA group's scores fell between those of the NA and GHA groups.
Discussion
The major findings of this study showed four subtypes of alexithymia and were consistent with previous studies. For example, Vorst and Bermond [25] suggested that there were two types of alexithymia characterized by the emotional and cognitive factors of the Bermond-Vorst Alexithymia Questionnaire (BVAQ) [26]. They proposed that Type I alexithymia is characterized by a low degree of conscious awareness of emotional arousal and a low degree of emotion-accompanying cognitions, whereas Type II alexithymia is characterized by a normal or high degree of conscious awareness of emotional arousal together with a low degree of emotion-accompanying cognitions. Our cluster analysis showed four subtypes of participants associated with different degrees of alexithymia among the college students, namely the extrovert-high alexithymia (EHA), general-high alexithymia (GHA), introvert-high alexithymia (IHA) and non-alexithymia (NA). The GHA was characterized by a generally low profile of emotional cognition, including identifying and describing one's own emotions and external imagination, and was similar to Vorst's Type I alexithymia. The IHA was dominated by low arousal of one's own emotional experience but a normal ability in externally oriented thinking style, which was very similar to Type II. The EHA was characterized by a normal range of self-emotional arousal and a high score on externally oriented thinking style. These features were very similar to those of Type II alexithymia.
Validation of the cluster solution suggested that these subtypes of alexithymia were characterized by different emotional expression and regulation abilities. The general-high alexithymia (GHA) and introvert-high alexithymia (IHA) groups were characterized by poorer emotional regulation and expression with worse emotional status. In more detail, the extrovert-high alexithymia (EHA) group appeared moderate in emotional status, regulating emotion more efficiently than the GHA group. These results suggest potential functional outcomes of the different subtypes. Mattila [27] found that individuals with alexithymia reported significantly lower satisfaction on many dimensions of general life than individuals without alexithymia. Our findings likewise showed that individuals with GHA and IHA tended to use less effective ways to regulate their emotions and might face more stress in social life than the other groups. These findings highlight the need for timely and appropriate psychological counseling for these individuals, and the characteristics associated with the different clusters suggest that they may require different intervention regimes to protect or maintain emotional regulation and expression. It should be noted that the EHA cluster includes 77.3% of the sample and shows alterations mostly on externally oriented cognitive style of thinking (EOT). Some studies have shown that difficulty identifying feelings (DIF) and difficulty describing feelings (DDF) have good internal reliability but EOT does not [28], and some researchers have argued that the EOT dimension follows a different developmental path from DIF and DDF [29].
Moreover, caution is warranted because the clusters found in the current study may not be stable over time and/or may be an artefact of the instruments used. It would be necessary to reassess the students to evaluate validity, and without a longitudinal approach any speculation in the discussion should be treated only as a hypothesis to be confirmed. Our results indicated that most cases were clustered into extrovert-high alexithymia (EHA), suggesting that alexithymia may be a general phenomenon of an emotion-processing ability distributed in the general population. More investigation is needed to clarify the relation between alexithymia and personality.
The current study has several limitations. First, the main limitation is that participants were recruited from a convenience sample; whether these cluster groups can be discovered in broader populations requires future studies adopting a more rigorous epidemiological approach to improve representativeness. Second, the findings were based on subjective self-report measures; more rigorous methodologies adopting experimental designs or neurophysiological approaches such as ERP or imaging paradigms should be employed in the future to validate the potentially different neural bases of these subtypes of alexithymia. Finally, the current cross-sectional design could not examine the stability of the cluster solutions across time points; future studies should adopt a longitudinal design to test the stability of the cluster solution.
Conclusions
The current findings suggest there were four subtypes of alexithymia characterized by different emotional regulation manifestations.
Review on Listeriosis in small ruminants and public health significance in Ethiopia
Thoroughly cook raw food from animal sources, such as beef, pork, or poultry. Wash raw vegetables thoroughly before eating. Keep uncooked meats separate from vegetables and from cooked and ready-to-eat foods. Avoid raw (unpasteurized) milk or foods made from raw milk. Wash hands, knives, and cutting boards before and after handling uncooked foods.
Introduction
Milk and milk products are an important source of many disease-producing microbes, including Listeria monocytogenes, a Gram-positive, motile bacterium that is the principal cause of listeriosis in humans and in a wide variety of animals, including birds. The disease occurs in sporadic as well as epidemic forms, following the ingestion of food contaminated by this organism. Worldwide, it is becoming an important foodborne bacterial disease, with low incidence but a high case-fatality rate (Pal et al. 2012a). Listeriosis is caused by several species of Listeria, bacteria that live as saprophytes in the environment but occasionally cause disease in a wide range of vertebrates including mammals, marsupials, birds, and reptiles; the organisms are most often ingested in food, where they can proliferate even at refrigeration temperatures.
Most illnesses are caused by Listeria monocytogenes; L. ivanovii is found occasionally, and there are rare reports of clinical cases caused by other species of Listeria [1]. L. monocytogenes has been recovered from dust, soil, water, sewage, decaying vegetation, at least 42 species of wild and domestic mammals, 17 avian species, crustaceans, pond trout, ticks, and flies. Among food sources, milk and milk products, uncooked vegetables, fish and shellfish, ready-to-eat meat products, ground beef, and poultry have all been found to contain the organism [2].
Listeriosis affects all ages and sexes, but animals less than three years of age are more prone to clinical disease than older animals. The disease is seen clinically in animals in one of four forms and is more common during the winter or spring months; adult animals usually develop the encephalitic form, while neonates often develop the septicemic form. The genus Listeria includes species such as L. seeligeri, L. ivanovii, L. murrayi, and L. grayi, but L. monocytogenes is the only species of Listeria that is pathogenic for both humans and animals [5].
The genus Listeria contains seven species, but one is most pathogenic for both animals and human beings. The most important species is L. monocytogenes, a Gram-positive, facultatively anaerobic bacillus 0.5 to 2 μm long and 0.5 μm in diameter that is motile at temperatures between 20 °C and 25 °C. It is beta-hemolytic on blood agar, forming a narrow band of hemolysis around the colonies (unlike L. ivanovii, which forms a wide band). A noteworthy characteristic of L. monocytogenes is its ability to grow at low temperatures and at a pH between 6 and 9 [6].
Epidemiology
Geographical location: Although the organism is widespread in nature, clinical disease in animals occurs mainly in the northern and southern latitudes and is much less common in tropical and subtropical than in temperate climates. In the northern hemisphere, listeriosis has a distinct seasonal occurrence, probably associated with the seasonal feeding of silage, with the highest prevalence in December [7].
Listeria monocytogenes is ubiquitous in the environment and can be found in soil, water, faeces, silage, effluents, foods, and sewage. The bacteria are widely distributed in nature, especially in the food chain. Most cases occur sporadically, but foodborne and nosocomial outbreaks have been documented. Foods associated with infection include unpasteurized milk, soft cheeses, processed meats, and contaminated vegetables. Newborns, the elderly, immunocompromised persons, and pregnant women are at greater risk of infection. About 30% of all cases occur in newborns within the first 3 weeks of life [8].
Therefore, it can survive in food-processing environments and become persistent. Such persistence of L. monocytogenes has been shown, often for many years, at both larger-scale and smaller artisan facilities across different production sectors. Because L. monocytogenes is ubiquitous in the environment and frequently present in processing environments, it can contaminate foods including fish, crustaceans, poultry, meat, soft cheeses, and ready-to-eat products [9]. Listeria monocytogenes is considered a widespread bacterium in nature, as it is part of the faecal flora of many mammals and a common foodborne source. The main route of transmission is believed to be the consumption of contaminated food such as meat (sausages, pâté, ham, salami, and chicken), vegetables, ready-to-eat seafood (such as smoked fish or mussels), raw seafood, unpasteurized milk, soft-serve ice cream, and soft cheeses [10].
Listeriosis is not a reportable disease, but data from two active surveillance studies performed in 1980-1982 and 1986 by the Centers for Disease Control and Prevention (CDC) indicate annual infection rates of 7.4 cases per million population, accounting for 1,850 cases and 425 deaths per year in the United States (Gellin et al. 1986).
Mode of transmission
Although Listeria monocytogenes was recognized as an animal pathogen over 80 years ago, the first outbreak confirming indirect transmission from animals to humans was reported only in 1983, in Canada's Maritime Provinces. In that outbreak, cabbages stored in the cold over the winter were contaminated with Listeria through exposure to infected sheep manure [11].
The laboratory methods used to detect L. monocytogenes in food have improved in recent years, and the organism has been found, usually in small numbers, in many foods including raw fish, shellfish, and fish products; raw meat, poultry, and their products, including hot dogs and pâté; raw and processed vegetables; ripened soft cheeses; ice cream; retail cook-chill meals; salads including coleslaw; raw and inadequately pasteurized milk; and raw and liquid egg (Aureli et al. 2000).
The primary mode of transmission for Listeria is soil contamination and ingestion of contaminated feed. Calves that develop the septicemic disease may acquire infection from contamination of the cow's teat, from ingestion of milk containing the organism or from a cow with subclinical bacteremia, through the navel from the environment, or as a congenital infection [12].
After a short incubation, L. monocytogenes can spread systemically and cross the blood-brain barrier or the placenta. At present, the pathogenesis of neurolisteriosis is not entirely understood and, importantly, the required infectious dose remains unknown. Oral infection is a common feature in both animals and humans, but the subsequent mechanisms of access to the brain are likely to differ between host species; in humans, it is assumed that L. monocytogenes passes the gastrointestinal barrier and spreads hematogenously to the brain [16]. L. monocytogenes, its nucleic acids, and its antigens may be detected in the placenta, fetus (e.g., fetal stomach contents), or uterine discharges after an abortion; in the blood of septicemic animals; in samples from sites of localization, such as cerebrospinal fluid (CSF) or ocular swabs; and in postmortem tissue samples such as the liver, kidneys, spleen, and brain [1].
Listeria grows on most conventional laboratory media, with small, round colonies observed on agar media after incubation. Control plans are applied to reduce chemical and biological adulterants; in poultry slaughtering, control points can include the scalder, evisceration, final wash, chilling, and storage for further processing [28].
Disease status in Ethiopia
Humans acquire Listeria monocytogenes via contaminated food and/or water, or by zoonotic infection. In Ethiopia, studies have detected L. monocytogenes and other Listeria species in a variety of raw and ready-to-eat food products in Addis Ababa, including raw meat and dairy products such as raw milk, cottage cheese, and cream cake collected from the capital and five neighboring towns. The serotypes of Listeria monocytogenes identified belonged to 1/2b, 4b, and 4e (Derra M. 2013).
Although foods of animal origin such as milk, cheese, meat, and poultry are widely consumed in Ethiopia, published information on the status of food-borne listeriosis caused by L. monocytogenes is very limited and incomplete in both the veterinary and public health sectors. In Ethiopia, a study has shown the presence and distribution of Listeria in a variety of raw and ready-to-eat food products in Addis Ababa, with an overall prevalence of 5.1% and a prevalence of 4.1% in raw meat and dairy products such as raw milk, cheese, and cream cake collected from the capital and five neighboring towns [29].
According to one study, the overall prevalence of Listeria species was 28.4%, and that of L. monocytogenes specifically was 5.6%. Considering the prevalence of Listeria species, cheese was found to be the most heavily contaminated (60%), followed by pasteurized milk samples (40%), raw milk (18.9%), and yogurt (5%) [30].
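The prevalence figures above are simply positives divided by samples tested. The sample counts in the sketch below are hypothetical, chosen only to illustrate the calculation; the cited study's raw counts are not given in this review.

```python
# Prevalence = 100 * positives / total samples tested.
# The counts below are hypothetical and used only to show the arithmetic.
def prevalence(positives, total):
    return 100.0 * positives / total

samples = {                      # (positives, samples tested) - hypothetical
    "cheese": (12, 20),
    "pasteurized milk": (8, 20),
    "raw milk": (17, 90),
    "yogurt": (1, 20),
}
for product, (pos, n) in samples.items():
    print(f"{product}: {prevalence(pos, n):.1f}%")
```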
Public health significance
Listeria is an opportunistic intracellular pathogen that has become an important cause of human foodborne infections worldwide. Although L. monocytogenes can infect all human population groups, it has a propensity to cause especially severe problems in pregnant women, neonates, the elderly, and immunosuppressed individuals. Direct transmission is also possible, especially among veterinarians performing gynecological interventions on aborting animals. Animals may be diseased or asymptomatic carriers of L. monocytogenes, shedding the organism in their feces [31].
Milk constitutes a complex ecosystem for various microorganisms, including bacteria. Milk products like cheese, ice cream, and curd are widely consumed, and markets for them have existed in many parts of the world for generations. Raw milk and other dairy products are consumed by all age groups, including populations at risk (Pal et al. 2012a). The disease primarily affects the elderly, pregnant women, newborns, and adults with weakened immune systems, although, rarely, persons without these risk factors can also be affected. Among the different species of the genus Listeria, L. monocytogenes is known to cause listeriosis in humans and animals [32,33].
Animals naturally harbor many food-borne bacteria in their intestines that can cause illness in humans but often do not cause illness in the animals themselves. During slaughter, meat and poultry carcasses can become contaminated if they are exposed to small amounts of intestinal contents (Pal, 2015; Pal and Mahendra, 2015). The Public Health Agency of Canada convened an expert panel in August 2008 to provide information to health care professionals and the general public on the diagnosis and management of listeriosis during the recent outbreak. The following information is based on the panel's discussion and addresses what should be done for patients who have eaten food items suspected of being contaminated with Listeria and who have symptoms of diarrhea with or without fever. For healthy adults and children with a normal immune system, no Listeria-specific investigation is required; gastroenteritis due to Listeria infection has a short duration and is self-limited in this population [34].
Healthy adults and children occasionally become infected with L. monocytogenes, but they rarely become seriously ill. The body's defense against L. monocytogenes is called "cell-mediated immunity" because it depends on our cells, especially lymphocytes called "T-cells." Therefore, individuals whose cell-mediated immunity is suppressed are more susceptible to the devastating effects of listeriosis. Pregnant women naturally have a depressed cell-mediated immune system, and the immune systems of fetuses and newborns are very immature and extremely susceptible to these types of infection (Richard et al. 2008).
Many food-borne zoonoses are of serious public health concern, with long-term sequelae in various organs. Among these, listeriosis can cause severe and life-threatening complications. Owing to changes in food habits towards ready-to-eat products, changes in food production systems, processing, and supply, refrigeration for food preservation, interest in organic and natural products and free-range birds, and greater health awareness, listeriosis is now considered an emerging food-borne zoonosis of increased public health significance [35-48].
Conclusions and recommendations
Listeriosis has gained recognition as a global human and animal pathogen because of its increasing incidence and diagnosis. The organism is widespread in nature, living naturally in soil and food-processing environments, and has the potential to contaminate foods of both animal and plant origin. Meat products should be heat-treated before consumption, which can kill Listeria species or reduce them below a detectable level. | 2022-07-29T15:15:20.277Z | 2022-07-12T00:00:00.000 | {
"year": 2022,
"sha1": "66e259db99c92b66f133201604ed53ac377c0f2f",
"oa_license": null,
"oa_url": "https://doi.org/10.17352/ijvsr.000119",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ef508f7385993b54470441cdf965a9b7360fe392",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
249248056 | pes2o/s2orc | v3-fos-license | TOWARDS BETTER COASTAL MAPPING USING FUSION OF HIGH TEMPORAL SENTINEL-2 AND PLANETSCOPE-2 IMAGERIES: 12 BANDS AT 3 M THROUGH NEURAL NETWORK MODELLING
Coastal interfaces are subject to an unprecedented rate of risks, gathering waves and rainfalls’ hazards, human assets’ densification, sea-level rise and precipitation intensification. Their sound management requires iterative observation at the highest possible spatial resolution. Sentinel-2 (S-2) datasets, provided with 13 spectral bands, leverage high temporal resolution (one week) but their spatial resolution (from 60 to 10 m) often remains too coarse to finely classify and monitor the coastal patches. PlanetScope-2 (PS-2) imagery benefits from very high temporal resolution (<one week) and high spatial resolution (3 m) for its blue-green-red-near-infrared dataset. This research paper proposes to, first, downscale 12 S-2 bands (cirrus S10 being evicted) by using neural network (NN) regressions built on the 4 PS-2 bands following two methods, and second, evaluate the NN classification performance of the 12-band datasets at 3 m for mapping 8 common coastal classes on a representative site (Brittany, France). Straightforward and stepwise downscaling procedures, respectively based on 12 and 22 NN regressions, generated very good performances (R²test=0.92 ± 0.02 and 0.95 ± 0.01, respectively). The 3-m NN classifications were considerably improved by the number of spectral bands (overall accuracy, OA, of the 4 bands: 48.12%) but also the precision of the downscaling (OA of the straightforward and stepwise downscaling: 75.25% and 93.57%, respectively). For the best classification, examination of the contribution of the individual bands revealed that S5, S7, S1, S9, S6 and S8A were meaningful (62.42, 55.02, 50.82, 46.4, 45.1, 31.02%, respectively), contrary to S12, S11 and S8 (12.47, 0 and 0%, respectively).
Downscaling Satellite Optical Remote Sensing Imagery
Downscaling allows spectrally rich, regionally acquired, coarse-scale imagery to be refined to higher spatial resolution. The MODIS chlorophyll-a level-3 product at 4 km pixel size has been downscaled to 30 m using Landsat-8 OLI for complex coastal water monitoring (Fu et al., 2018). In turn, Landsat-8 OLI 30-m multispectral bands have been pansharpened to 15 m, then downscaled to 5.8 m (using ZiYuan-3 imagery) to better extract coastlines (Wang et al., 2018). Bathymetry mapping (without in situ data) has been enhanced from Landsat-8 to very HSR (VHSR) satellite sensors (Gaofen-1/2, ZiYuan-3, WorldView-2; Liu et al., 2021). S-2 MSI 60-m and 20-m bands have been successfully downscaled to 10 m for improving LULC mapping (Zheng et al., 2017). The S-2 coastal band, downscaled from 60 m to 10 m, was deemed very promising for improving both water-depth and seagrass mapping in the Mediterranean Sea (Poursanidis et al., 2019). Recently, the Landsat-8 OLI 15-m panchromatic band has been downscaled to 3-m PS Dove imagery, then used for pansharpening the 30-m multispectral bands in order to map bathymetry (Gabr et al., 2020).
Sentinel-2 and PlanetScope-2 fusion
Landsat-8/9 and S-2 bands or by-products constitute tangible advances for coastal mapping when downscaled with higher-resolution commercial optical spaceborne sensors. Given its high temporal resolution (HTR), comparable with that of Landsat-8/9 and S-2, the PS constellation can be considered a good candidate for downscaling the freely available HSR NASA and ESA imagery. Unlike the finer-resolution SPOT imagery series, PS leverages greater global distribution and cheaper prices (even free for grant-holding scientists; Planet Team, 2017). This research study innovatively proposes to produce 12 bands at 3 m spatial resolution for coastal mapping by fusing HTR (two 60-m, six 20-m and four 10-m) S-2 and (four 3-m) PS-2 imageries through neural network (NN) modelling. First, the NN regression downscaling will be optimized by testing the statistical reliability of the straightforward versus the stepwise approaches. Second, the NN classification of a complex, thus representative, coastal area, composed of 8 common classes (Figure 1), will be quantified at the scene scale for the two downscaling techniques and also for the original PS-2 dataset, for the sake of comparison.
Study Area
The investigated coastal zone (48°37'N; 2°7'W) is located along the Emerald Coast in Brittany (France). It features shallow coastal and optically deep seawaters, muddy estuaries, sandy beaches, rocky cliffs, resort cities, crop fields and semi-natural vegetated areas (Figure 1 and Table 1). Subject to a megatidal regime, the intertidal zone alternates between emersion and immersion phases, so sediment classes must be examined in both their wet and dry states. For the sake of transferability, the generic terms crop and soil encompass all features of the herbaceous stratum and of the unvegetated, bare but living layer, respectively.
PlanetScope-2:
The PS-2 dataset was derived from a Dove Classic nanosatellite (1-day revisit and a frame size of 24 km × 8 km). The Dove Classic constellation was launched in 2016 and 2017. The imagery was acquired on 09 October 2019 at 10 h 49 min 48 sec (UTC), then orthorectified and radiometrically corrected to surface reflectance (the equivalent of the BOA). The resulting dataset consisted of 4 bands at 3-m pixel size (Table 3).

Table 3. Spectral specificities of the PlanetScope-2 Dove.
Imagery Processings
Downscaling Process: The entire scene was first divided into calibration, validation and test sub-datasets, whose numbers of pixels were equalized and randomized for each spectral band.
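The equalized, randomized three-way split described above can be sketched as follows; the function name, seed and pixel count are illustrative, not taken from the paper:

```python
import random

def equalized_split(pixel_ids, seed=0):
    """Shuffle a band's pixel identifiers and cut them into three
    equal-sized calibration / validation / test sub-datasets."""
    rng = random.Random(seed)
    ids = list(pixel_ids)
    rng.shuffle(ids)
    n = len(ids) // 3  # equalized pixel count per sub-dataset
    return ids[:n], ids[n:2 * n], ids[2 * n:3 * n]

cal, val, test = equalized_split(range(9000))
```

Fixing the random seed keeps the split reproducible across the 12 per-band regressions.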
Two approaches of imagery fusion based on NN regression were designed and evaluated.
The first procedure, coined straightforward, sought to predict the 12 S-2 bands from the 4 PS-2 bands, regardless of the S-2 native resolution (Figure 2).
The activation function was a hyperbolic tangent (TanH), scaling values between the -1 and 1 bounds: TanH(z) = (e^z - e^-z) / (e^z + e^-z), where z is a linear combination of the predictors (X).
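A minimal sketch of one such TanH regression neuron follows; the weights, bias and input reflectances are placeholders for illustration, not fitted values from the paper:

```python
import math

def tanh_neuron(x, w, b):
    """One NN regression neuron: z = w.x + b, output = TanH(z) in (-1, 1)."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return math.tanh(z)

# Four PS-2 band reflectances as predictors (illustrative values)
out = tanh_neuron([0.05, 0.08, 0.06, 0.30], w=[0.4, -0.2, 0.1, 0.9], b=0.05)
assert -1.0 < out < 1.0  # TanH bounds the prediction
```

In practice the bounded output is rescaled back to the reflectance range of the target S-2 band.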
Pixel-wise Classification:
The 8 classes were each represented by 1000 pixels, randomly split into calibration and validation datasets. The "calval" pixels were selected based on a pansharpened multispectral Pléiades-1 imagery acquired on 22 October 2019 at 11 h 25 min 49 sec (UTC; James et al., 2020). The 500-pixel calibration datasets were used to construct NN learners defined as one-neuroned two-layered perceptrons provided with a logistic (sigmoid) activation function. Following the classification procedure, the 500-pixel validation datasets were used to quantify the overall accuracies (OAs) derived from the PS-2, straightforward- and stepwise-downscaled S-2 confusion matrices.
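The overall accuracy is simply the confusion-matrix diagonal (correctly classified validation pixels) divided by the total pixel count; a sketch with a hypothetical 3-class matrix (the counts are invented for illustration):

```python
def overall_accuracy(cm):
    """Overall accuracy (OA): correctly classified pixels (the diagonal)
    divided by all validation pixels in the confusion matrix."""
    correct = sum(cm[i][i] for i in range(len(cm)))
    total = sum(sum(row) for row in cm)
    return correct / total

# Hypothetical 3-class confusion matrix (rows: reference, columns: predicted)
cm = [[480, 15, 5],
      [20, 460, 20],
      [10, 30, 460]]
oa = overall_accuracy(cm)  # 1400 correct out of 1500 pixels
```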
RESULTS AND DISCUSSION
The fusion of the 12-band S-2 and 4-band PS-2 yielded, despite a slight difference, very satisfactory results for both NN downscaling procedures. The classification of the 8-class coastal scene was much better with the 12-band than the 4-band dataset at 3 m.
Performance of the Downscaling Procedure
The straightforward downscaling required 12 NN regressions, compared with 22 for the stepwise approach, which made it the faster process.
Straightforward Downscaling:
The lowest regression performance, though still satisfactory, was obtained for the coastal band (B1, R²test=0.79), followed by B5 (R²test=0.83) and B2 (R²test=0.84). The regressions of the remaining bands bottomed at 0.91 (B9) and topped at 0.99 (B8). The overall mean and standard deviation reached 0.92 ± 0.02. The 60-m, 20-m and 10-m means and standard deviations were 0.85 ± 0.06, 0.94 ± 0.02 and 0.94 ± 0.01, respectively (Figure 4).

Stepwise Downscaling:
Three regression steps enabled the prediction of the 12 bands at 3 m. The first regressions downscaled S1 and S9 from 60 to 20 m, with R²test ranging from 0.84 to 0.99, respectively. The second round of regressions downscaled both new 20-m S1 and S9, as well as the six native 20-m bands, with R²test ranging from 0.95 (S6) to 0.99 (S7, S8A, S9 and S11). Finally, the last round of regressions downscaled the 12 10-m bands, native or not, to 3-m spatial resolution, with R²test ranging from 0.84 (S2) to 0.99 (S1 and S8). The overall mean and standard deviation of the last round achieved R²test=0.95 ± 0.01, and the statistics of the first and second rounds hit 0.91 ± 0.05 and 0.98 ± 0.01, respectively (Figure 5).
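The three-round stepwise data flow, and the total of 22 regressions it implies (2 + 8 + 12), can be summarized as follows; the regressors themselves are omitted, only the band lists per round are shown (S10 excluded, per the text):

```python
# Each round regresses the coarser bands onto the next finer grid.
rounds = [
    ("60m->20m", ["S1", "S9"]),
    ("20m->10m", ["S1", "S9", "S5", "S6", "S7", "S8A", "S11", "S12"]),
    ("10m->3m",  ["S1", "S2", "S3", "S4", "S5", "S6", "S7", "S8",
                  "S8A", "S9", "S11", "S12"]),
]

def stepwise_regression_count(rounds):
    """One NN regression per band per round: 2 + 8 + 12 = 22, as reported."""
    return sum(len(bands) for _, bands in rounds)
```

Chaining the rounds means each prediction error propagates to the next finer scale, which is why the per-round R²test values matter individually.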
The first augmentation (by 27.13%), from the 4-band to the straightforward 12-band dataset, could be driven by the tripling of the number of spectral predictors. This result is intuitive and corroborates the RS assumption: the higher the number of spectral bands, the better the discrimination (Collin, Planes, 2011). The second enhancement (by 18.32%, that is to say, by 45.45% compared to the first OA) might be attributed to the finer process tied to the stepwise versus the straightforward approach. Downscaling NN regressions were indeed more successful for the last round of the stepwise regressions (R²test=0.95 ± 0.01) than for the single batch of the straightforward regressions (R²test=0.92 ± 0.02). Even if the process is longer (10 more regressions), it is highly recommended to implement the stepwise approach to produce a very satisfactory classification. Based on the best OA (that of the stepwise), the contribution of the individual NN-downscaled bands was quantified by reference to the OA of the standard blue-green-red (S2-S3-S4) dataset (Figure 7). No spectral band, additional to the standard dataset, diminished the standard classification performance (OA=12.5%). S8 and S11 did not improve the standard OA, resulting in an absence of gain. S12 then brought the lowest contribution to the standard OA. The poor results of those three bands could be explained by their large bandwidths compared to the other bands (106, 91 and 175 nm, respectively). The narrow near-infrared S8A band (21 nm bandwidth) then provided a tangible increase of the standard OA (by 31.02%). The water vapour S9 and coastal aerosol S1 narrow bands (20 and 21 nm, respectively) markedly improved the standard OA (by 46.4 and 50.82%). Finally, the three vegetation red edge bands (S6, S7 and S5) produced the highest contributions (45.1, 55.02, and 62.42%, respectively).
Figure 7.
Overall accuracies of the sole standard dataset, composed of S2-S3-S4 (blue-green-red, in blue bars), and the joint standard with other individual bands (in orange bars).
CONCLUSIONS
The fusion of the 12 S-2 bands, radiometrically corrected at the bottom-of-atmosphere, with the four 3-m PS-2 bands was successful using a fully connected one-neuroned one-layered NN downscaling. Two approaches were assessed: one, straightforward, directly predicting S-2 bands at 3 m, irrespective of the native pixel size (that is to say, 12 NN regressions); and another, stepwise, first regressing both 60-m S-2 bands at 20 m, second regressing the resulting and native eight 20-m bands at 10 m, and third regressing the 12 10-m bands at 3-m pixel size (that is to say, 22 NN regressions). Both approaches were highly conclusive, with a slight advantage for the more precise but longer one (overall R²test=0.92 ± 0.02 and 0.95 ± 0.01, respectively). One-neuroned two-layered NN classifications of the 3-m scene, including 8 common coastal use and cover classes, showed that the 4-band PS-2 dataset was increasingly surpassed by the 12-band straightforward and stepwise datasets (OA=48.12, 75.25 and 93.57%, respectively). Added to the blue-green-red standard dataset, the most contributing spectral bands issued from the NN downscaling were the vegetation red edge narrow bands, followed by the atmosphere and near-infrared narrow bands, while the wider bands barely or did not contribute to the standard OA. Even if the NN regression downscaling is more laborious, we advocate implementing the stepwise approach, especially when the goal is the supervised classification of the coastal landscape. | 2022-06-02T15:08:43.964Z | 2022-05-30T00:00:00.000 | {
"year": 2022,
"sha1": "9944bc3cc8c195ee75465e8052e1d89c5568237a",
"oa_license": "CCBY",
"oa_url": "https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLIII-B3-2022/479/2022/isprs-archives-XLIII-B3-2022-479-2022.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ab951dd82a37cdbc9ed7838cd068001248453eaa",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
8084755 | pes2o/s2orc | v3-fos-license | Factors influencing the decisions of senior UK doctors to retire or remain in medicine: national surveys of the UK-trained medical graduates of 1974 and 1977
Objective To report attitudes to retirement of late-career doctors. Design Questionnaires sent in 2014 to all UK medical graduates of 1974 and 1977. Setting United Kingdom. Participants 3695 medical graduates. Main outcome measures Factors which influenced doctors’ decisions to retire and factors which encouraged doctors to remain in work. Results The response rate was 85% (3695/4369). 55% of respondents overall were still working in medicine (whether they had not retired or had retired and returned; 61% of men, 43% of women). Of the retirees, 67% retired when they had originally planned to, and 28% had changed their retirement plans. Fifty per cent of retired doctors cited ‘increased time for leisure/other interests’ as a reason; 43% cited ‘pressure of work’. Women (21%) were more likely than men (11%) to retire for family reasons. Women (27%) were more likely than men (9%) to retire because of the retirement of their spouse. General practitioners (GPs) were more likely than doctors in other specialties to cite ‘pressure of work’. Anaesthetists and GPs were more likely than doctors in other specialties to cite the ‘possibility of deteriorating skill/competence’. Radiologists, surgeons, obstetricians and gynaecologists, and anaesthetists were most likely to cite ‘not wanting to do out-of-hours work’. Doctors who were still working were asked what would encourage them to stay in medicine for longer. Factors cited most frequently were ‘reduced impact of work-related bureaucracy’ (cited by 45%) and ‘workload reduction/shorter hours’ (42%). Men (30%) were more motivated than women (20%) by ‘financial incentivisation’. Surgeons were most motivated by ‘reduction of on-call or emergency commitments’. Conclusions Retention policy should address ways of optimising the clinical contribution of senior doctors while offering reduced workloads in the areas of bureaucracy and working hours, particularly in respect of emergency commitments.
Introduction
When doctors retire, health services lose their experience, knowledge and support. 1 In the UK, 1 in 10 specialty and associate specialist doctors and general practitioners (GPs) are aged over 60 years. 2 In 2015, over 80% of senior hospital doctors in the UK were considering early retirement, with stress identified as the main cause. 3 A survey of 1400 GPs by the Wessex Local Medical Committee showed that one-fifth planned to retire early. 4 The early retirement of such experienced doctors creates challenges for the medical workforce, places more stress on remaining staff and may adversely affect patient safety. 2 3 A recent survey of consultant physicians in the UK found that the most common reasons for retiring cited by these doctors included pressure of work, length of working hours and dissatisfaction with the National Health Service (NHS). 5 Interviews with GPs have identified further influences, including the following: ageing and health, family life, uncertainty about the future of primary care, concerns around revalidation, increased administrative burden and lack of time with patients. 6 7 A cohort of UK-trained doctors, across all specialties, in their early 50s revealed that these doctors considered early retirement for family/leisure reasons and because of concerns about health, workload and changes in the NHS. 8 Much of the research focus on how to retain doctors nearing retirement has been on GPs. In interviews, GPs have suggested a need for reduced workload, a focus on their own health, and improvements in morale. 6 Other research on retention of the GP workforce calls for a slower pace of administrative change and less work outside face-to-face patient care. 7 In one study, half of hospital consultants working in Scotland said they would postpone retirement if their workload reduced. 9 We have studied the careers of the UK-trained medical qualifiers of 1974 and 1977 periodically from the first year after they qualified. In our latest survey, we asked about retirement status, intentions to retire for those who had not already done so, and attitudes to retirement. The aim of this paper is to report on factors which had influenced the decision to retire and on factors that might encourage doctors to stay in medicine longer. We compared the replies of men and women and of those working (or who had worked) in different specialties.

Strengths and limitations of this study
► This is a large nationwide study with a very high response rate.
► For retired doctors, the data are based on actual retirements and not intentions.
► Doctors who have not yet retired were surveyed at a key stage for retirement planning.
► Although a large number of retired doctors responded, some may not have done, and there is a possibility of some responder bias.
► The small minority of non-contactable doctors largely comprised those who were unregistered, either through retirement or through having left medicine or the UK. It is possible that, if surveyed, their views on the topics of the paper would differ from those of the respondents.
Methods
In 2014, the UK Medical Careers Research Group surveyed the UK medical graduates of 1974 and 1977 using identical postal and web-based questionnaires. Up to four reminders were sent to non-respondents. Further details of the methodology are available elsewhere. 10 The surveys sent to both cohorts were identical and comprised structured, 'closed' questions and statements, with scope and encouragement for additional free-text comment. Doctors were asked to indicate which one of seven phrases best described their current employment status: working full time in medicine; working part time in medicine; working full time outside medicine; working part time outside medicine; retired, not now working in medicine; retired and 'returned' for some medical work; and other.
Retirees and doctors who had 'retired and returned' were asked to indicate which, if any, of the following factors had influenced their decision to retire when they did: pressure of work; not wanting to do out-of-hours work; family reasons; to increase time for leisure/other interests; reduced job satisfaction; retirement of spouse/partner; financial security/ insufficient financial incentive to stay; possibility of deteriorating skills/competence; the prospect of revalidation; poor health; to maintain good health; 'none of the above -I just wanted to retire'; and other.
Doctors still working in medicine (full time or part time) were asked 'Would any of the following factors encourage you to stay working in medicine longer?' Doctors could choose from one or more of the following factors: workload reduction/shorter hours; reduction of on-call or emergency commitments; reduced impact of work-related bureaucracy; financial incentivisation; improved working conditions, other than (or as well as) hours; career change and development opportunities; more involvement in direct patient care; less involvement in direct patient care; none of these; and other.
In each case, the list of response options offered was developed in part by studying text responses from doctors to previous surveys and in part by reviewing the literature on retirement and factors affecting retirement decisions.
We allocated a career specialty to each respondent using their recorded job history as reported to us in successive surveys, together with additional information about their specialist registration with the GMC as reported by the doctors in these surveys. For a small number of respondents, we were unable to allocate a single career specialty, either because we did not have sufficient data about the doctor's career or because the doctor had worked in different specialties during their career. The career specialty allocation allowed us to analyse the responses and employment status of doctors in different specialties. Respondents were grouped for analysis into: hospital medical specialties, surgical specialties, paediatrics, emergency medicine, obstetrics and gynaecology, anaesthesia, radiology, clinical oncology, pathology, psychiatry, and general practice/family medicine (GP).
The replies were analysed using χ² tests and Mann-Whitney U tests to explore differences in views towards retirement between men and women, between cohorts and between doctors working in different specialties. Statistical analysis was undertaken using SPSS V.22. Numbers of doctors were small in a few combinations of (for example) specialty and gender, but we report them for the record.
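A Pearson χ² statistic of the kind used to compare groups here can be computed from a contingency table as follows; the 2×2 counts are hypothetical, not taken from the survey data:

```python
def chi_square(table):
    """Pearson chi-squared statistic for an r x c contingency table
    (no continuity correction)."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    grand = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / grand  # expected under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical 2x2 table: rows = men/women, columns = cited a factor yes/no
stat = chi_square([[120, 380], [90, 210]])
```

The statistic is then compared against a χ² distribution with (r-1)(c-1) degrees of freedom to obtain the p values reported in the Results.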
Results

Demographics and response rates
There were 5482 graduating doctors in the two year-of-graduation cohorts we studied: 1974 (2347 graduates) and 1977 (3135 graduates). These two cohorts graduated many years before the recent increase in the UK in the proportion of women among successive cohorts of UK medical graduates. In 1974, the graduation cohort was 73.2% male (1717/2347), and the cohort of 1977 was 67.5% male (2116/3135).
Across both cohorts, 677 doctors were not contactable, 210 were deceased, 70 had told us that they did not wish to participate and 156 doctors who had never replied to any of our previous surveys were not contacted. The aggregated response rate of the remaining contactable doctors, over both surveys, was 84.6% (3695/4369). The response rate among men was 84.4% (2554/3026) and that among women was 85.0% (1141/1343). Taking the two cohorts together, the responders represent 70% of all surviving graduates and 85% of the contactable doctors. An abbreviated questionnaire which omitted some of the content reported here was completed by 98 graduates: we exclude these from further analysis.
The median age of the doctors from the cohorts at the time of the surveys was 64 (men: 64, women: 63) for the 1974 cohort and 61 (men: 61, women: 60) for the 1977 cohort.
Current employment status
Over both cohorts, 44% (1572/3597) of respondents had retired from medicine and were no longer working in medicine (38% of men, 56% of women), 26% (935/3597) had retired and returned for some medical work (29% of men, 20% of women) and 29% (1043/3597) were still working in medicine (32% of men, 23% of women). Therefore, 55% of respondents overall were still working in medicine (whether they had not retired or had retired and returned; 61% of men, 43% of women). Over both cohorts, 0.8% were working outside medicine and 0.5% did not give their employment status.

Retired, and retired and 'returned' doctors: circumstances of retirement
When asked 'What were the circumstances of your retirement?', most of the retired doctors (66.8%) had retired when they had planned to retire, while 27.7% had retired not when originally planned (14.0% due to changes in the work environment and 13.7% due to changes in personal circumstances; table 1). Retired doctors from the 1974 cohort were a little more likely to have retired when they had planned to compared with the 1977 cohort. In comparing the responses of doctors in different specialties, we did not consider doctors in emergency medicine or clinical oncology, owing to small counts. Radiologists were most likely to have retired when they had planned to retire (79.4%) and hospital medical specialists were least likely (62.4%; table 1). Psychiatrists were most likely to report that their retirement was unplanned and due to a change in the work environment (21.3%). GPs were most likely to report that their retirement was unplanned and due to changes in personal circumstances (15.2%).
Retired doctors: reasons for retiring
When asked which factors had influenced their decision to retire, the retired doctors most frequently cited wanting 'increased time for leisure/other interests' as contributing to their decision to retire when they did (50.4%; table 2). 'Pressure of work' was cited by 42.8%. Women were more likely than men to retire because of the retirement of a spouse or partner (table 2). Further inspection within each specialty revealed that this gender difference was present in all specialties except surgery, obstetrics and gynaecology, radiology, pathology, and psychiatry: while more women than men in these specialties cited 'retirement of a spouse/partner', these differences within each of these specialties were not significant. In general practice, 28.4% of women cited this reason compared with 10.4% of men.
Overall, women were more likely than men to retire for family reasons (table 2): this difference was most pronounced among GPs (cited by 20.7% of women and 12.2% of men, p<0.001) and hospital medical specialists (cited by 30.6% of women and 13.3% of men, p<0.01). Men were more likely than women to retire for financial reasons. This difference was significant within both cohorts. Further inspection within each specialty grouping revealed that this gender difference was only present among GPs (men GPs: 33.3%, women GPs: 23.7%).
Men were also more likely than women to retire because they did not want to do out-of-hours work: this difference was most marked among hospital medical specialists (men: 16.4%, women: 5.6%) and paediatricians (men: 26.7%, women: 10.1%) and was significant within both cohorts.
Retired doctors from the younger 1977 cohort were significantly more likely than doctors from the 1974 cohort to have retired due to pressure of work. Retired GPs were more likely to cite 'pressure of work' as a reason for retiring than doctors in other specialties (table 3). Anaesthetists and GPs were more likely to cite the 'possibility of deteriorating skill/competence' than doctors in other specialties. Radiologists, surgeons, obstetricians and gynaecologists, and anaesthetists were more likely to cite 'not wanting to do out-of-hours work' compared with doctors in other specialties. This pattern of reasons for retirement was observed in both cohorts (p<0.001). For numbers corresponding to the percentages in table 3, and numbers for small number specialties, see online supplementary appendix 1.
Doctors still working: retirement plans
The doctors still working were asked which factors would encourage them to stay in medicine for longer. These doctors most frequently cited 'reduced impact of work-related bureaucracy' as a factor that would encourage them to stay working in medicine for longer (45.4%; table 4) and 'workload reduction/shorter hours' (42.2%). Men were more likely than women to be encouraged by 'financial incentivisation'. This difference was significant within both cohorts (p<0.05). Doctors from the 1977 cohort were significantly more likely than doctors from the 1974 cohort to be encouraged to remain by 'workload reduction/shorter hours' (1974: 32.5%, 1977: 46.6%) and 'improved working conditions' (1974: 14.0%, 1977: 22.8%).
There was little variation by specialty grouping in the scoring of factors which would encourage doctors to stay in medicine longer (table 5). There was significant variation on only one factor: reduction of on-call or emergency commitments was assigned more importance by surgeons than by others. For numbers corresponding to the percentages in table 5, and numbers for small number specialties, see online supplementary appendix 2.
Main findings
Over half of the respondents were still working in medicine (whether having retired and returned or having never retired). Of those doctors who had retired, two-thirds had retired when they had planned to retire and a quarter had an unplanned retirement due to either a change in the work environment or a change in personal circumstances. Doctors retired mainly to spend more time on leisure and other interests or due to work pressures. More women than men retired because of the retirement of a spouse or partner: this difference was pronounced in general practice and was less pronounced in specialties such as surgery. More women than men retired due to family reasons (especially among GPs and hospital medical specialists). Thus, not only are female doctors influenced much more than men by family factors in their career decisions when they are young, 11 but the male-female differences, in respect of family and career decisions, persist with age into their 60s. More men GPs than women GPs retired for financial reasons, citing 'financial security/insufficient financial incentive to stay'. More men than women in the hospital medical specialties and paediatrics cited retiring because they did not want to do out-of-hours work. Certain retirement factors were cited more by the younger 1977 cohort than the 1974 cohort (pressure of work, reduced job satisfaction, financial reasons), suggesting that doctors considering early retirement are more influenced by these factors. There were differences between specialties. More GPs cited 'pressure of work', more anaesthetists and GPs cited the 'possibility of deteriorating skill/competence' and more radiologists, surgeons, obstetricians and anaesthetists cited 'not wanting to do out-of-hours work' compared with doctors in other specialties.
The doctors still working cited two main factors that would encourage them to stay working in medicine for longer: 'reduced impact of work-related bureaucracy' and 'workload reduction/shorter hours'. More men than women could be encouraged to remain in medicine by financial incentivisation. More surgeons cited a reduction of on-call or emergency commitments as influential to a future decision to stay.
Strengths and limitations
This is a large study with a very high response rate among contactable doctors (85%). In the case of the retired doctors in our study, the data are based on actual retirements and not intentions. In the case of the doctors who have not yet retired, we ask about factors that might encourage them to stay at a key stage when these doctors are typically considering retirement; therefore, their answers are gathered at an optimal time. As has been mentioned elsewhere, retired, though contactable, doctors may be less likely to respond. 1 This may be for a variety of reasons including deteriorating health. The non-contactable doctors largely comprised those who had ceased GMC registration, either through retirement or through having left medicine or the UK. It is possible that, if surveyed, their views on the topics of the paper would differ from those of the respondents.
Comparison with existing literature
We found that a desire to spend more time on leisure/other interests and pressure of work were key factors in retirement decisions for the retired doctors we surveyed. A recent UK study found that 81% of senior hospital doctors were considering retiring early due to work pressures 3; similarly, the most common reason for intended early retirement cited by UK consultants is pressure of work. 5 Family reasons and leisure time were the main reasons cited by senior UK doctors when considering early retirement. 8 A systematic review of retirement planning among doctors found that workload and burnout were the most common reasons provided for early retirement. 12 A 2015 literature review of occupational health issues among UK doctors 13 revealed interesting findings which concurred with our observed specialty differences among senior doctors. Stress and burnout were widely reported across specialties, though the reported levels of problems varied. For example, one referenced study showed that radiologists reported particularly high levels of job-related exhaustion compared with surgeons, oncologists and gastroenterologists. 14 Our finding that more GPs cited 'pressure of work' than doctors in other specialties is consistent with other research which has found that GPs are concerned about high workloads. 6 We found that doctors could be encouraged to stay in practice by reducing work-related bureaucracy and reducing workload/hours. Other research has found that doctors can be encouraged to stay working in medicine by reducing workplace frustration and workload pressure. 12
Further work
The results reported here cover only some of the themes explored by us in these surveys. We asked specific questions about any adverse effects on health and well-being of work as a doctor, published elsewhere. 15 We also asked the doctors about their roles in addition to clinical work, specifying 'teaching and training, research, management' and 'other (please describe)'. In addition to our closed questions, we asked for some 'free-text' responses to questions about changes in policy and practice that respondents would like to be implemented in medicine in the UK. We are in the process of analysing responses and, in further work, we will publish on these.
Implications/conclusions
Doctors described a variety of professional and personal motivators for choosing the timing of their retirement. Some of the reasons given may be amenable to policy initiatives, which could result in securing a longer contribution to the health service by some doctors than would otherwise be the case. Retention policy should address ways of optimising the clinical contribution of senior doctors. For example, staged retirement with reduced workloads in the areas of bureaucracy and working hours, particularly in respect of emergency commitments, may enable seniors to continue using their skills for longer.
A higher percentage of women than men had retired. As noted in the Results section, these were cohorts in which the majority of doctors were men: they pre-dated the substantial increase in the intake of women into medicine. If the male-female differences in the likelihood of early retirement become evident in younger generations of doctors, these may become an important source of future attrition from the medical workforce overall.
"year": 2017,
"sha1": "94d3c47ef34da9f8ee33459b28bfe5ffb67bc11e",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/7/9/e017650.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "94d3c47ef34da9f8ee33459b28bfe5ffb67bc11e",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Prenylflavonoid Xanthohumol Reduces Alzheimer-Like Changes and Modulates Multiple Pathogenic Molecular Pathways in the Neuro2a/APPswe Cell Model of AD
Alzheimer’s disease (AD) is a progressive neurodegenerative disorder that has proved refractory to drug treatment. Given evidence of neuroprotection in animal models of ischemic stroke, we assessed the prenylflavonoid xanthohumol from the Common Hop (Humulus lupulus L.) for therapeutic potential in murine neuroblastoma N2a cells stably expressing human Swedish mutant amyloid precursor protein (N2a/APP), a well-characterized cellular model of AD. ELISA and Western-blot analyses revealed that xanthohumol (Xn) inhibited Aβ accumulation and APP processing, and that Xn ameliorated tau hyperphosphorylation via the PP2A and GSK3β pathways in N2a/APP cells. The amelioration of tau hyperphosphorylation by Xn was also validated in HEK293/Tau cells, another cell line with tau hyperphosphorylation. Proteomic analysis (2D-DIGE-coupled MS) revealed a total of 30 differentially expressed lysate proteins in N2a/APP vs. wild-type (WT) N2a cells (N2a/WT), and a total of 21 differentially expressed proteins in lysates of N2a/APP cells in the presence or absence of Xn. These 51 differential proteins could be classified into seven main categories according to their functions: endoplasmic reticulum (ER) stress-associated proteins; oxidative stress-associated proteins; proteasome-associated proteins; ATPase and metabolism-associated proteins; cytoskeleton-associated proteins; molecular chaperone-associated proteins; and others. We used Western-blot analysis to validate Xn-associated changes of some key proteins in several biological/pathogenic processes. Taken together, we show that Xn reduces AD-related changes in stably transfected N2a/APP cells. The underlying mechanisms involve modulation of multiple pathogenic pathways, including those involved in ER stress, oxidative stress, proteasome molecular systems, and the neuronal cytoskeleton. These results suggest Xn may have potential for the treatment of AD and/or neuropathologically related neurodegenerative diseases.
INTRODUCTION
Xanthohumol (Xn) is an abundant prenylated polyphenol, or chalcone (Figure 1A), in cones of the Common Hop (Humulus lupulus L.), a species of flowering plant native to temperate regions of the Northern Hemisphere. Xn is reported to scavenge reactive oxygen species (ROS) at relatively low concentrations (Gerhauser et al., 2002; Yamaguchi et al., 2009) while exhibiting pro-oxidative (Strathmann et al., 2010; Blanquer-Rossello et al., 2013) or pro-apoptotic (Strathmann et al., 2010; Festa et al., 2011) effects at higher concentrations. In addition, as a Michael acceptor, this chalcone irreversibly binds to and modifies cysteine residues of proteins, such as those involved in the NF-κB activation pathway, which Xn inhibits (Harikumar et al., 2009). Both of these chemical properties appear to underwrite the poly-pharmacological activities of Xn, including anti-proliferation (Deeb et al., 2010), anti-inflammation (Dorn et al., 2013), and anti-adipogenesis (Yang et al., 2007).
Xn is also reported to protect rat neuron-like PC12 cells from oxidative damage; to promote neuronal differentiation and neurite outgrowth of mouse embryonic forebrain neural precursors and murine N2a neuroblastoma-derived cells; to improve cognitive flexibility in young mice, and to protect brain tissue in a rodent model of cerebral ischemia (Yen et al., 2012;Oberbauer et al., 2013;Zamzow et al., 2014;Yao et al., 2015). Although the neuroprotective effects of Xn have been attributed to its free-radical scavenging property, the chalcone also positively modulates central regulators of cellular redox and energy balance via actions on endoplasmic reticulum (ER) (activation of nuclear factor E2-related factor 2: Nrf2) and mitochondria (activation of AMP-activated kinase: AMPK) in mouse fibroblasts (Zimmermann et al., 2015).
The aforementioned observations have encouraged the view that prenylflavonoids such as Xn may have therapeutic value in cancer, diabetes, atherosclerosis, acute brain and spinal cord injury and in chronic progressive neurodegenerative disorders by promoting neurogenesis, neuroregeneration, and neuroprotection (Yen et al., 2012;Oberbauer et al., 2013;Zamzow et al., 2014;Yao et al., 2015). In the present study, we have used murine neuroblastoma N2a cells stably transfected with human amyloid precursor protein (APP) to assess whether and how Xn affects molecular mechanisms relevant to Alzheimer's disease (AD). This is a progressive neurodegenerative disorder in which dysfunction of multiple cellular organelles correlates with the formation of amyloid β (Aβ) plaques and neurofibrillary tangles (NFT) containing hyperphosphorylated tau, the two neuropathological hallmarks of the disease (Ittner and Götz, 2011). We found that Xn attenuated Aβ accumulation and tau phosphorylation via APP processing, GSK-3β and PP2A pathways as well as reducing ER and oxidative stress and related proteasome processing, properties consistent with therapeutic potential in AD and other progressive neurodegenerative disorders with AD-related neuropathology.
Reagents
Xn (stated ≥98%) was purchased from Aladdin Company Biochemical Technology Co., Ltd. (Shanghai, China). The stock solution of Xn (10 mM) was prepared in dimethylsulfoxide (DMSO, Thermo Fisher Scientific, Waltham, MA, United States) and was used directly. Selections of antibodies are listed in Table 1.
Cell and Cell Culture
Wild-type murine neuroblastoma Neuro2a cells (N2a/WT) and N2a cells stably transfected with the human APP Swedish mutant (N2a/APP) were gifts from Professor Jian-Zhi Wang (Tongji Medical School, Wuhan, China). The cells were maintained in medium consisting of equal volumes of Dulbecco's modified Eagle's medium (DMEM) and Opti-MEM with 5% fetal bovine serum in 5% CO₂ at 37 °C. Stably transfected cells were screened in the presence of 0.2 g/L Geneticin (Thermo Fisher Scientific). Human Embryonic Kidney 293 cells stably transfected with tau protein (HEK293/Tau) were also gifted by Prof. Jian-Zhi Wang. These cells were cultivated in DMEM with 5% fetal bovine serum and 0.2 g/L Geneticin in 5% CO₂ at 37 °C. The cells were grown in 25 cm² or 75 cm² culture flasks and passaged when there were 1.2 × 10⁶ cells in a 25 cm² flask and 3-4 × 10⁶ in a 75 cm² flask.
Cell Viability Assay
N2a/WT or N2a/APP cells were seeded on a 96-well cell culture microplate (10⁴ cells per well). Once the cells had attached, the original medium was removed and fresh medium with Xn or vehicle (0.5% v/v DMSO) was added to the plate. After 24 h incubation with 0-25 µM Xn, the medium was removed and fresh medium with cell counting kit-8 solution (Dojindo Laboratories, Shanghai, China) was added. After 1 h incubation, the plate was read by a microplate reader (Tecan M1000, Männedorf, Switzerland) at 450 nm. The cell viability of a well was calculated as the absorbance of the well containing cells and culture medium minus the absorbance of a well containing culture medium only. The relative cell viability was the viability of the treated cells normalized by the viability of the control (vehicle).
FIGURE 1 | Chemical structure (A) and viability of N2a/WT cells (B) and N2a/APP cells (C) treated for 24 h with 0-25 µM Xn. N = 3. **p < 0.01, ****p < 0.0001 compared with N2a/WT cells treated with vehicle. ####p < 0.0001 compared with untreated N2a/APP cells.
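The background subtraction and normalization described above can be sketched as follows; the absorbance readings are hypothetical, not values from the paper.

```python
def relative_viability(a_treated, a_blank, a_control):
    """CCK-8 readout at 450 nm: subtract the medium-only blank,
    then normalize the treated well to the vehicle control."""
    return (a_treated - a_blank) / (a_control - a_blank)

# Hypothetical 450-nm absorbance readings from the plate reader
rel = relative_viability(a_treated=0.85, a_blank=0.10, a_control=1.00)
print(round(rel, 3))  # 0.833
```

In practice each condition would be averaged over replicate wells before normalization.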
Western-Blot Analysis
After 24 h of drug treatment, cells were washed twice with cold PBS and 200 µL IP lysis buffer added per culture flask, placed on ice for 30 min, scraped from the flasks and collected in 1.5-mL tubes, and centrifuged at 18,000 g at 4 °C for 20 min. BCA Protein Assay Reagent (Thermo Fisher Scientific) was used to measure the total protein concentration of debris-free supernatants. Total protein (20 µg) was boiled for 8 min in 5× SDS loading buffer (Thermo Fisher Scientific), separated by 10% SDS-polyacrylamide gel electrophoresis, and transferred to polyvinylidene difluoride (PVDF) membranes. Non-specific binding was prevented by incubating membranes in 5% nonfat milk dissolved in 1× TBST buffer at RT for 1 h. The membranes were incubated overnight at 4 °C with solutions of primary antibody diluted in 1× TBST (see Table 1). The membranes were washed and incubated with anti-mouse, anti-rabbit or anti-goat IgG conjugated to horseradish peroxidase (HRP) (1:3000) at RT for 1 h before development in ECL solution (Thermo Fisher Scientific). The densitometry of the blots was quantified with ImageQuant 1D software (GE Healthcare, Pittsburgh, PA, United States).
Protein Preparation and Labeling
After 24 h of drug treatment, cells were washed twice with cold PBS and dissolved in 600 µL 1× DIGE lysis buffer (7 M urea, 2 M thiourea, 30 mM Tris-HCl, 4% CHAPS, pH 8.5) per 75 cm² culture flask, placed on ice for 30 min, then scraped from the flasks and collected in 1.5-mL tubes. The cell suspensions were ultrasonicated for 2 min in cycles (4 s on and 6 s off) at 45% power with a Fisher 550 Sonic Dismembrator (Pittsburgh, PA, United States) until the samples were pellucid. Samples were incubated on ice for 30 min and then centrifuged at 20,000 g at 4 °C for 60 min. The supernatants were ultrafiltered at 14,000 g at 4 °C for 30 min to remove salt and other impurities, and then diluted in 1× DIGE lysis buffer. The protein solutions were collected and stored at −80 °C. The 2-D Quant Kit (GE Healthcare, United States) was used to measure protein concentrations according to the manufacturer's protocol. All the samples from N2a/WT cells and N2a/APP cells treated with or without Xn were diluted to 5 µg/µL after protein quantification. The protein samples (25 µg) were labeled with Cy3 or Cy5, and the internal standards derived from the mixture of all samples were labeled with Cy2. Each vial of CyDye, Cy2 (GE Healthcare, 25-8008-62), Cy3 (GE Healthcare, 25-8008-61), and Cy5 (GE Healthcare, 25-8008-62), was dissolved in 99.8% anhydrous N,N-dimethylformamide (DMF) (Sigma 227056) to obtain a stock concentration of 1 nmol/µL. DMF was added to obtain working solutions of 200 pmol/µL of each CyDye with which to label protein samples. Labeling reactions were carried out on ice in the dark and were quenched by adding 1 µL 10 mM lysine at 4 °C in the dark for 10 min. After protein labeling, the Cy2-, Cy3-, and Cy5-labeled samples were mixed together and rehydration buffer [7 M urea, 2 M thiourea, 4% CHAPS, 2% DTT, 2% (v/v) IPG buffer, pH 3-11 NL] added to a total sample volume of 450 µL.
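The CyDye working-solution step is a simple C₁V₁ = C₂V₂ dilution (1 nmol/µL stock brought to 200 pmol/µL with DMF); a sketch with hypothetical volumes:

```python
def dmf_to_add(stock_pmol_per_ul, stock_vol_ul, working_pmol_per_ul):
    """Volume of DMF to add so a CyDye stock reaches the working
    concentration (conservation of dye amount: C1*V1 = C2*V2)."""
    final_vol_ul = stock_pmol_per_ul * stock_vol_ul / working_pmol_per_ul
    return final_vol_ul - stock_vol_ul

# 1 nmol/uL stock = 1000 pmol/uL; dilute to the 200 pmol/uL working solution
print(dmf_to_add(1000, 1.0, 200))  # 4.0 uL DMF per 1 uL of stock
```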
Protein Separation
The first-dimension protein separation employed the Ettan IPGphor Isoelectric Focusing (IEF) System (GE Healthcare). For each separation, 75 µg protein were loaded onto 24-cm pH 3-11 NL Immobiline DryStrips (GE Healthcare). Then, 1.5 mL mineral oil was added to cover each strip to reduce solvent evaporation. Proteins were immobilized onto the strips at 50 V for 18 h, followed by focusing at 300 V for 12 h, a 500 V step for 2 h, a 1000 V gradient for 2 h, an 8000 V gradient for 8 h, and an 8000 V step for 8 h at 20 °C. The temperature for IEF was kept at 18 °C. After IEF, each strip was immediately equilibrated in 15 mL of a buffer of 6 M urea, 75 mM Tris-HCl (pH 8.8), 30% (v/v) glycerol, 2% (w/v) SDS, and 1% (w/v) DTT for 15 min at RT on a shaking table, and subsequently re-equilibrated for 15 min in the same buffer with 4.5% (w/v) IAA in place of the DTT. The equilibrated strips were loaded on top of 12.5% SDS-PAGE gels and covered with 0.5% (w/v) ultralow-melting-point agarose sealing solution [25 mM Tris, 192 mM glycine, 0.1% SDS, 0.5% (w/v) agarose, 0.002% (w/v) bromophenol blue]. Electrophoresis was executed with an Ettan DALTsix Electrophoresis System (GE Healthcare) at 1 W/gel for 1 h, then 11 W/gel for 4.5 h at 12.5 °C in the dark. After peptide separation in the second dimension had been completed, the gels were immediately scanned with a Typhoon TRIO Variable Mode Imager (GE Healthcare).
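IEF programs are often compared by total volt-hours; the focusing schedule above can be tallied as below. Averaging each gradient phase between its start and end voltage is our assumption for illustration — the text does not say how the instrument integrates gradients.

```python
def volt_hours(program):
    """Total volt-hours of an IEF program. Gradient phases are
    approximated by the mean of the previous and target voltages."""
    total, v_prev = 0.0, 0.0
    for volts, hours, mode in program:
        v_eff = (v_prev + volts) / 2 if mode == "gradient" else volts
        total += v_eff * hours
        v_prev = volts
    return total

# Focusing program from the protocol (after the 50 V immobilization step)
program = [
    (300, 12, "step"),
    (500, 2, "step"),
    (1000, 2, "gradient"),
    (8000, 8, "gradient"),
    (8000, 8, "step"),
]
print(volt_hours(program))  # 106100.0
```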
Image Analysis
The DIGE gels were analyzed with the DeCyder software package (Version 6.5 GE Healthcare). After confirmation of appropriate spot detection, matching, and normalization, the spot statistics were reviewed. Both DeCyder and Progenesis employed one-way ANOVA to quantify differential expression of spots among the experimental groups. The normalized spot density was further compared across the gels of the replicate groups. Protein spots found to be statistically significant (p ≤ 0.05) were isolated for further analysis.
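The "normalized spot density" relies on the Cy2 internal standard (the pooled mixture of all samples) run on every gel. A minimal sketch with hypothetical spot volumes — DeCyder's actual model also applies log transforms and gel-wide corrections, which are omitted here:

```python
def standardized_abundance(sample_spot_vol, cy2_spot_vol):
    """Express a Cy3/Cy5 spot volume relative to the Cy2 internal
    standard on the same gel, making spots comparable across gels."""
    return sample_spot_vol / cy2_spot_vol

# The same protein measured on two gels with different overall intensities
gel_1 = standardized_abundance(2.4e6, 2.0e6)
gel_2 = standardized_abundance(1.8e6, 1.5e6)
print(round(gel_1, 2), round(gel_2, 2))  # 1.2 1.2
```

Because both gels yield the same ratio, gel-to-gel intensity variation cancels out before the across-group ANOVA is applied.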
Protein Identification
Protein (1.4 mg) was separated by the same DIGE method but without protein labeling. The gel was stained with Coomassie blue solution (0.12% Coomassie Brilliant Blue G-250, 20% ethanol, 10% phosphoric acid, 10% ammonium sulfate). Proteins displaying significant variation (p ≤ 0.05) were manually excised from the blue-stained gel and quenched with 50% acetonitrile in 25 mM ammonium bicarbonate, followed by dehydration in 100% acetonitrile. After the reagents had been removed, the gel pieces were digested with 0.15 µg of sequencing-grade trypsin (Promega, Madison, WI, United States) in 15 µL digestion buffer containing 25 mM ammonium bicarbonate. The mixture was incubated overnight at 37 °C and then subjected to analysis using mass spectrometry (MS).
Peptide analysis was performed by MALDI-TOF-MS/MS (SCIEX TOF/TOF 5800 System, AB SCIEX, Framingham, MA, United States). In brief, 1 µL of peptide extract was crystallized with 0.7 µL of 10 mg/mL α-cyano-4-hydroxycinnamic acid (CHCA) in 0.1% trifluoroacetic acid (TFA), 50% acetonitrile (ACN) on the target and dried at room temperature. The spectra were externally calibrated, and MASCOT (Matrix Science, United Kingdom) was used to search the Swiss-Prot database for mouse brain proteins. The search was conducted with a mass measurement tolerance of 100 ppm in MS mode and 0.3 Da in MS/MS mode. Protein molecular weight (MW) was also used for protein identification, based on the location of the excised protein spot on the 2-D gel.
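The 100 ppm MS tolerance is relative, so the absolute mass window scales with the peptide mass; a sketch (the 1500 Da peptide mass is hypothetical):

```python
def ppm_window(mass_da, tol_ppm):
    """Absolute mass window (Da) allowed by a relative ppm tolerance."""
    delta = mass_da * tol_ppm / 1e6
    return mass_da - delta, mass_da + delta

lo, hi = ppm_window(1500.0, 100)  # 100 ppm around a 1500 Da peptide
print(round(hi - lo, 3))  # 0.3 -- same order as the 0.3 Da MS/MS tolerance
```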
Immunocytochemistry
N2a/WT and N2a/APP cells were seeded on coverslips and treated with Xn or vehicle for 24 h. The cells on the coverslips were fixed for 10 min in cold 4% paraformaldehyde (PFA), rinsed three times with 1× PBS and then permeabilized in 0.3% Triton X-100 in 1× PBS for 30 min. After fixation, the cells were blocked in 5% BSA diluted in 1× PBS for 30 min. Cells were incubated with primary antibody against 8-OHdG overnight at 4 °C, then with peroxidase-labeled donkey-anti-goat secondary antibody for 1 h in the dark at RT, and rinsed with 1× PBS four times. Next, DAPI (4′,6-diamidino-2-phenylindole) was added to the cells for 5 min in the dark, followed by four rinses with 1× PBS. Finally, Fluo-Antifading Medium (Beyotime, Beijing, China) was added to the microslide and a coverslip applied. The cells were examined by laser confocal microscopy.
Bioinformatics Analysis and Statistics
Functional annotation of differentially expressed proteins was performed with the Database for Annotation, Visualization and Integrated Discovery Resource (DAVID 1 ). Gene ontology (GO) terms for biological processes (BP), molecular functions (MF), and cellular components (CC) were obtained with default statistical parameters.
Results are expressed as means ± SEM. One-way ANOVA was used to determine the statistical significance of differences among the groups, followed by post hoc assessment with the Student-Newman-Keuls method (GraphPad Prism 7.0 2 ), unless otherwise indicated. A p-value of less than 0.05 was considered statistically significant. Detailed p-values of the analyses are listed in the Supplementary Tables S6-S9.
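As a plain-Python illustration of the one-way ANOVA used throughout (the Student-Newman-Keuls post hoc step is not shown), the F statistic compares between-group to within-group variance. The group values below are hypothetical, not data from the paper:

```python
def one_way_anova_F(*groups):
    """F statistic of a one-way ANOVA: ratio of between-group to
    within-group mean squares."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical normalized band densities, N = 3 per group as in the figures
control = [1.00, 0.95, 1.05]
xn_low = [0.80, 0.85, 0.78]
xn_high = [0.55, 0.60, 0.52]
print(round(one_way_anova_F(control, xn_low, xn_high), 1))  # 81.9
```

An F this large, with 2 and 6 degrees of freedom, corresponds to p well below the 0.05 threshold used in the paper.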
Xn Has Low Cytotoxicity for N2a/WT and N2a/APP Cells
We first studied the cytotoxicity of Xn on both N2a/WT and N2a/APP cells. Compared with the vehicle control (0.5% DMSO), no difference in cell viability was observed in N2a/WT cells treated for 24 h with up to 12.5 µM Xn (Figure 1B). For N2a/APP cells, loss of cell viability was seen at concentrations exceeding 5.0 µM Xn (Figure 1C). Accordingly, treatment for 24 h with a maximum concentration of 3.0 µM Xn was used in the following investigations.
Xn Inhibited Aβ Accumulation and APP Processing
To study the effect of Xn on Aβ accumulation, we determined the levels of Aβ1−42 and Aβ1−40 in lysates and culture medium of N2a/WT and N2a/APP cells with or without 24 h Xn treatment. As shown in Figures 2A,B, the levels of "toxic" Aβ1−42 and "non-toxic" Aβ1−40 in N2a/APP cell lysates were both about one third higher than those in N2a/WT cell lysates. The ratio of Aβ1−42 to Aβ1−40 in N2a/APP cell lysates showed a trend toward a modest increase compared with that in N2a/WT cell lysates (Figure 2C). The levels of both Aβ1−42 and Aβ1−40 in N2a/APP cell culture medium were higher than those in N2a/WT cell culture medium, although the differences were smaller than those in cell lysates (Figures 2D,E). After 0.75 and 3 µM Xn treatment for 24 h, N2a/APP cell lysates showed significant reductions in Aβ1−42, Aβ1−40, and the Aβ1−42 to Aβ1−40 ratio compared with those from vehicle-treated cells (Figures 2A-C). The reductions of Aβ1−42, Aβ1−40, and the Aβ1−42 to Aβ1−40 ratio in cell medium were not significant (Figures 2D-F), except for the level of Aβ1−40 in 3 µM Xn-treated cells (Figure 2F). The ELISA data show the potency of Xn in reducing Aβ generation.
Since Xn treatment suppressed Aβ generation by N2a/APP cells, we used Western-blot analysis (Figures 2G,H) to explore the critical proteins implicated in APP processing. As shown in Figures 2G,H, N2a/APP cells had significantly higher mean levels of total APP and BACE1 than N2a/WT cells. N2a/APP cells treated with 3.0 µM Xn had significantly reduced levels of APP, BACE1, and PS1 compared with vehicle-treated N2a/APP cells. The mean level of sAPPα in Xn-treated vs. vehicle-treated N2a/APP cells increased with Xn concentration (Figures 2G,H). These Western-blot data suggest that Xn favors the non-amyloidogenic pathway in N2a/APP cells.
Xn Attenuated Tau Phosphorylation in Both N2a/APP Cells and HEK293/Tau Cells
To investigate the effect of Xn on tau phosphorylation, we profiled various phosphorylation sites of tau using Western-blot analysis. As shown in Figures 3A,B, N2a/APP cells treated with 3.0 µM Xn showed significant reductions of phosphorylated tau at serines 404, 396, and 262. The effect of Xn on tau phosphorylated at threonine 231 in N2a/APP cells was not as pronounced as at the sites above.
We validated the effect of Xn treatment in another cell line with tau hyperphosphorylation, the HEK293/Tau cell line. As shown in Figures 3C,D, the responses of HEK293/Tau cells to Xn were similar to those observed with N2a/APP cells: HEK293/Tau cells treated with 3.0 µM Xn had significant reductions in phosphorylated tau at serines 404, 396, and 262.
While many kinases and phosphatases contribute to the phosphorylation state of tau, we focused on the protein pathways most often associated with tau phosphorylation, namely p-GSK3β, PP2A, p35/25, and CDK5 (Figures 3E,F). N2a/APP cells treated with 3.0 µM Xn had significantly higher levels of p-GSK3β (Ser 9) and significantly lower levels of p-PP2A, while showing no significant changes in the levels of p35/25, the co-activators of CDK5.
Xn Modified Critical Proteins Involved in ER Stress and Oxidative Stress
To explore molecular species affected by Xn treatment, we performed a comparative proteomic analysis using 2D-DIGE peptide separation and identification by MS. A total of 51 proteins (shown in Figure 4) in the 2D-DIGE gels were significantly different (adjusted p-value < 0.05 in one-way ANOVA) in any of four comparison pairs (N2a/APP vs. N2a/WT, 0.19 µM Xn vs. N2a/APP, 0.75 µM Xn vs. N2a/APP, or 3.0 µM Xn vs. N2a/APP). Thirty of the 51 differentially expressed lysate proteins distinguished N2a/APP from wild-type (WT) N2a cells (N2a/WT), and an additional 21 lysate proteins characterized differences in N2a/APP cells in the presence and absence of Xn. The 51 proteins were grouped in seven categories: ER stress-associated proteins; oxidative stress-associated proteins; proteasome pathway-related proteins; cytoskeleton-associated proteins; molecular chaperones; energy metabolism; and others. We performed functional annotation using DAVID to gain further insight into the 51 differentially expressed proteins. The major categories of the differentially expressed proteins in N2a/APP cells were: "protein folding," "response to ER stress," and "toxin transport" in biological process (Figure 5A); "extracellular exosome," "ER chaperone complex," "ER-Golgi intermediate compartment," and "nucleotide binding" in the CC (Figure 5B); and "poly(A) RNA binding" and "RNA binding" in MF (Figure 5C).
FIGURE 2 legend (continued) | ... are shown on the right (C,F). Levels of t-APP, s-APPα, BACE1, and PS1 were determined by Western-blot analysis (G,H). β-Actin was used as a loading control. N = 5 for (A-C); N = 4 for t-APP and BACE1 (G); N = 3 for the rest. *p < 0.05, **p < 0.01 compared with N2a/WT cells. #p < 0.05, ##p < 0.01, ###p < 0.001, ####p < 0.0001 compared with untreated N2a/APP cells.
The major categories of differentially expressed proteins in Xn-treated N2a/APP cells were: "ER-associated ubiquitin-dependent protein catabolic process," "glycolytic process," and "metabolic process" in biological process (Figure 5D); "extracellular exosome," "proteasome complex," and "peroxisome" in the CC (Figure 5E); and "poly(A) RNA binding," "proteasome-activating ATPase activity," and "ATP binding" in MF (Figure 5F). The functional enrichment supported our finding that Xn modified proteins relevant to ER stress, oxidative stress, and metabolic dysfunction.
As supplemental and validating data, the expression levels of peroxiredoxin-4 (PRDX4) (Figure 6B and Supplementary Figure S4), protein disulfide isomerase (PDIA1) (Figure 6C and Supplementary Figure S3), the ER chaperone protein GRP78 (Figure 6D and Supplementary Figure S2), and phosphorylated eIF2α (p-eIF2α) (Figure 6F) were determined by Western-blot analysis. Expression levels of PRDX4, PDIA1, and GRP78 were higher in N2a/APP cells than in N2a/WT cells, and lower in Xn-treated N2a/APP cells than in vehicle-treated N2a/APP cells. These three proteins showed consistent trends in the Western-blot and proteomics analyses.
DISCUSSION
We examined the action of xanthohumol (Xn) on murine neuroblastoma N2a cells stably expressing human Swedish mutant APP, a well-characterized cellular model of AD. We found higher levels of Aβ and activation of the amyloidogenic pathway in these cells, consistent with a previous report (Yan et al., 2012). Xn reduced the cellular burden of Aβ1−42, which is cytotoxic and has the highest propensity for aggregation; of Aβ1−40, which is less cytotoxic and has a lower propensity for aggregation (Meisl et al., 2014); and of the ratio of Aβ1−42 to Aβ1−40, which is widely used in clinical diagnosis (Janelidze et al., 2016). Further, Xn suppressed the levels of total APP, of BACE1, which is responsible for the N-terminal cleavage that produces Aβ (Vassar et al., 1999), and of PS1, which is responsible for the C-terminal cleavage that produces Aβ (De Strooper et al., 1998). In addition, Xn suppressed the hyperphosphorylation of tau at the sites Ser 404, Ser 396, and Ser 262, which are critical for the formation of paired helical filaments (Goedert, 1996). The fact that Xn robustly increased phosphorylated GSK-3β is consistent with a previous report (Lv et al., 2017) describing Xn activation of GSK-3β via the AMPK pathway. Xn treatment thus attenuated the two major characteristics of AD, namely Aβ accumulation and tau hyperphosphorylation. Furthermore, proteomics and functional analyses revealed that Xn modulated 51 proteins involved in oxidative stress, ER stress, proteasomal function, chaperones, the cytoskeleton, ATPases, and metabolism. A detailed discussion follows.
FIGURE 7 | The mode of action of xanthohumol. Xn treatment suppressed AD-related changes, notably Aβ and tau phosphorylation via APP processing and the GSK-3β and PP2A pathways, which may be attributed to modifications of proteins related to functions of oxidative stress, ER stress, proteasomes and molecular chaperones.
Oxidative Stress-Related Proteins and Oxidative DNA Damage
Our results suggest that Xn relieves oxidative stress, consistent with previous reports (Dorn et al., 2013; Lv et al., 2017). The category "oxidative stress-related proteins" includes protein DJ-1 (PARK7), PRDX4, and superoxide dismutase [Cu-Zn] (SODC), two of which (PRDX4, SODC) were reduced in N2a/APP cells treated with the higher concentrations of Xn. Additionally, the intensity of PRDX4 (a peroxiredoxin that removes peroxides) was increased in N2a/APP vs. N2a/WT cells, which is consistent with oxidative stress, and was reduced with Xn treatment. Further evidence of oxidative stress in N2a/APP vs. N2a/WT cells included higher levels of 8-OHdG in the AD-transgenic line and Xn-associated reductions in this DNA damage marker. 8-OHdG adducts are generated when ROS, the most common of which is the hydroxyl radical, react with nuclear and mitochondrial DNA; accordingly, urine 8-OHdG is a reliable index of DNA oxidation and a potential biomarker of early cellular dysfunction in AD (Valavanidis et al., 2009; Zhang and Rissman, 2017).
ER Stress-Related Proteins and Proteasomal Proteins
Accumulation of Aβ and tau is associated with perturbations in the unfolded protein response (UPR) in AD and a number of other progressive neurodegenerative diseases (Stutzbach et al., 2013). We found perturbations of 18 UPR-related proteins in N2a/APP vs. N2a/WT cells. Some of these proteins are critical in proteolysis and AD progression. We found that levels of three 26S protease regulatory subunits (PRS7, PRS8, and PRS10) were higher in N2a/APP cells and lower in Xn-treated cells. An ATPase (MS37) that acts as a regulatory subunit of the 26S proteasome is reported to immunoreact with NFTs, plaque neurites, and neuropil threads in the hippocampus of AD brains (Fergusson et al., 1996).
Another example is peptidyl-prolyl cis-trans isomerase, which has been shown to have decreased expression (Sultana et al., 2007) and to be inactivated (Lee et al., 2011) in AD brains. We also found lower expression of two subunits of peptidyl-prolyl cis-trans isomerase (PPID and PPIA) in N2a/APP cells, and the level of PPID was restored in Xn-treated cells.
Our proteomics study revealed some intriguing patterns of protein expression in the categories of ER stress-related proteins and proteasomal proteins. For example, calreticulin (CALR) has been reported to show low immunoreactivity and reduced CALR mRNA in AD brains (Taguchi et al., 2000); however, the expressed level of this protein in N2a/APP cells was higher than in N2a/WT cells. As another study revealed (Bollo et al., 2010), CALR could be crucial to cell-protective mechanisms against ER stress, as the level of CALR rose rapidly when cells were treated with increasing levels of an ER stressor. Thus, the high expression of CALR in N2a/APP cells suggests the presence of elevated ER stress. Indeed, our proteomics and Western-blot analyses (Figure 6) show clear evidence of ER stress in N2a/APP cells and relief of ER stress in these cells by Xn treatment.
The unfolded protein response is reported to have an important role in the prevention and attenuation of AD and Parkinson disease progression by preventing neuronal accumulation of misfolded proteins (Milisav et al., 2015). In the UPR cascade, ATF-4 and PERK, which are activated by dissociation of GRP78/BiP, serve as two of the three classes of main ER stress sensors. Phosphorylated PERK initiates phosphorylation of eIF2α, and PDIA1 assists by reducing proteins with improperly paired disulfide bonds. Thus, in response to ER stress, levels of ATF-4, p-PERK, p-eIF2α, and PDIA1 would be expected to increase, as we observed in N2a/APP vs. N2a/WT cells. Importantly, Xn treatment was associated with reduced levels of the aforementioned proteins. Taken together, therefore, our finding that Xn suppressed ER stress is consistent with a previous study showing modulation of ER stress upon Xn activation of the AMPK pathway (Zimmermann et al., 2015).
Cytoskeletal Proteins
The category of cytoskeletal proteins contains some proteins that closely interact with Aβ and tau. For example, the intermediate filament protein vimentin, which co-localizes with Aβ in the cerebral cortex, cerebellum, and hippocampus of AD brains (Levin et al., 2009), was increased in N2a/APP vs. N2a/WT cells and suppressed in N2a/APP cells treated with Xn (0.19 and 0.75 µM). Stathmin (STMN1), a microtubule-destabilizing neuroprotein, was more heavily expressed in N2a/WT cells than in N2a/APP cells, consistent with loss of support for potential neuritic processes in the AD culture model. Similarly, in the frontal and temporal cortex of AD brain, STMN1 shows reduced expression and preferentially localizes in NFTs (Jin et al., 1996; Cheon et al., 2001). Importantly, the level of STMN1 was higher with Xn treatment (0.19 and 0.75 µM), which suggests the chalcone may have been acting to maintain the neuronal cytoskeleton which, in vivo, may serve to preserve synaptic plasticity (Uchida et al., 2014).
Taken together, Xn-associated modification of expression of these cytoskeletal proteins may correlate with the suppression of Aβ accumulation and tau hyperphosphorylation upon treatment of N2a/APP cells with Xn.
CONCLUSION
Xanthohumol significantly suppresses Aβ production and tau hyperphosphorylation in N2a/APP cells via APP processing and the GSK-3β pathway, as envisaged in the cascade of molecular events shown schematically in Figure 7. Comparative proteomics and functional studies revealed that Xn induced modulation of redox regulation proteins, ER, proteasomal, and cytoskeleton/cytoplasmic proteins, which may correlate with the suppression of Aβ production and tau hyper-phosphorylation in this cell model of AD. Xn treatment also altered the expression of other proteins, such as those involved in energy regulation and metabolism.
Taken in concert, these in vitro findings suggest that Xn is a promising candidate for AD therapy that merits prompt evaluation for efficacy and safety in animal models of AD. The ability of Xn to modulate multiple proteins, pathways and functions to effect reduction of AD-like pathology expressed by N2a/APP cells might have advantages in treating this complex neurodegenerative disease. This view is consistent with the therapeutic failures of drug candidates that target specific AD-related molecules, such as immune therapies or inhibitors directed at β- or γ-secretases (Kikuchi et al., 2017; Moussa, 2017).
While Xn appears to have multiple physiological effects and therapeutic potential, any effective secondary prevention or treatment of AD will require long-term therapy with minimal adverse side effects. In this regard, it is noteworthy that Xn, acting as a weak electrophilic Michael acceptor, is thought to exert chemopreventive effects via induction of detoxification enzymes such as NADPH quinone oxidoreductase 1 (NQO1) (Dietz and Bolton, 2011), which has also been associated with AD (Chhetri et al., 2017).

JCYJ20150529164656093, JCYJ20150529153646078, JCYJ20140416122811964, and JCYJ20160422143433757), and Sanming Project of Medicine in Shenzhen (SZSM201611090).
SUPPLEMENTARY MATERIAL
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fphar.2018.00199/full#supplementary-material

FIGURE S1 | No protein (a) was changed in four comparison groups. One protein (ATPA) (b) was changed in various concentrations of Xn-treated N2a/APP cells compared with untreated N2a/APP cells.
Study on the Effect of Citric Acid-Modified Chitosan on the Mechanical Properties, Shrinkage Properties, and Durability of Concrete
As an environmentally friendly natural polymer, citric acid-modified chitosan (CAMC) can effectively regulate the hydration and exothermic processes of cement-based materials. However, the influence of CAMC on the macroscopic properties of concrete and the optimal dosage are still unclear. This work systematically investigates the effects of CAMC on the mixing performance, mechanical properties, shrinkage performance, and durability of concrete. The results indicated that CAMC has a thickening effect and prolongs the setting time of concrete. CAMC has a negative impact on the early strength of concrete, but it is beneficial for the development of the subsequent strength of concrete. With the increase in CAMC content, the self-shrinkage rate of concrete samples decreased from 86.82 to 14.52 με. However, the CAMC-0.6% sample eventually expanded, with an expansion value of 78.49 με. Moreover, the long-term drying shrinkage rate was decreased from 551.46 to 401.94 με. Furthermore, low-dose CAMC can significantly reduce the diffusion coefficient of chloride ions, improve the impermeability and density of concrete, and thereby enhance the freeze–thaw cycle resistance of concrete.
Introduction
Mass concrete structures frequently experience cracking brought on by temperature stress. Temperature cracks have a number of detrimental impacts on high-volume concrete structures, including alkali-aggregate reaction, steel corrosion, and concrete carbonation, and can even have a major impact on the durability of concrete, resulting in early building retirement and significant losses [1][2][3]. Therefore, timely control of the shrinkage amplitude of concrete is of positive significance for controlling cracks.
Chitosan, as a natural biopolymer, has good biocompatibility and biodegradability [4,5]. Its degradation products are non-toxic, highly adsorptive, and noncarcinogenic, and it is widely used in the biomedical, wastewater treatment, flocculation, and civil engineering fields [6][7][8][9]. The previous literature [10,11] has shown that chitosan, as a polymer added to cement-based materials, has great application prospects in improving the early fracture toughness of cement-based materials, improving their rheological properties, and regulating the heat release process of cement hydration, owing to the interaction between its molecular chains and the surface of cement particles.
However, chitosan's effectiveness as a polymer additive in cement-based structures is limited by its insolubility under alkaline pH conditions. Therefore, many scholars have modified chitosan to improve the properties of cement-based materials. The impact of three distinct modified chitosans on the fresh mixing performance of cement mortar was examined by M. Lasheras-Zubiate [12]. The findings demonstrated that the ionic derivative carboxymethyl chitosan (CMCH) postponed the hydration of cement particles and cut the slump of cement mortar by fifty percent. Nevertheless, the slump was only marginally affected by the higher-molecular-weight viscosity promoters, hydroxypropyl methyl cellulose (HPMC) and hydroxypropyl cellulose (HPC). Bezerra [13] investigated the effects of chitosan and latex on the mechanical properties and durability of concrete. When 2% chitosan and latex were added, the strength increased and polymer fibers were found on the fracture surface of the composite material. Ustinova's [14] research results showed that the addition of chitosan increased the strength of cement components. The introduction of modified chitosan is beneficial to reducing the overall pore volume in cement components and increasing their frost resistance and bacterial resistance. Our survey of the literature found that research on the application of chitosan and its modified products in the construction industry is still quite limited. Additionally, they are mostly used as additives and coatings in cement slurry or slurry mixtures. It is worth noting that most studies have focused on using chitosan-modified products as high-efficiency water-reducing agents, investigating their workability, setting properties, etc., [15][16][17] without conducting systematic research on their fresh properties, mechanical properties, or durability. In order to obtain more accurate results to design and control the impact of chitosan and its modified products on the performance of
concrete, it is necessary to choose more effective modified products to study the basic properties of concrete, especially to conduct a comprehensive investigation of the fresh-mix performance, mechanical properties, and durability of concrete.
Based on previous research, citric acid-modified chitosan (CAMC) was prepared through an acylation reaction and applied to cement-based materials, effectively regulating the hydration process of the cement and, as a result, the temperature rise and the temperature cracks inside mass concrete [18][19][20]. Therefore, this study systematically investigated the effects of CAMC on the fresh-mix performance, mechanical properties, shrinkage performance, and durability of concrete, providing data support for the further promotion and application of CAMC in concrete.
Materials
P II 52.5 Portland cement (OPC) (CEM P II 52.5, supplied by Xiaoyetian Co., Ltd., Nanjing, China) and a fly ash (FA) (Shenzhen Daote Technology Co., Ltd., Shenzhen, China) are adopted as the cementitious materials in this work. The oxide composition of the OPC was obtained through XRF analysis, and the mineral composition of the cement was calculated from the oxide content using the Bogue model [21]; its basic physical and mechanical properties and composition are shown in Tables 1 and 2. The auxiliary cementitious material is a 2 µm special-grade FA. The fineness modulus of the river sand is 2.86, its apparent density is 2650 kg/m³, and its bulk density is 1650 kg/m³. The basalt aggregate has an apparent density of 2720 kg/m³, consisting of continuously graded small stones (5 mm~10 mm) and large stones (10 mm~20 mm). The oxide compositions of the basalt aggregate and river sand are shown in Table 3. The CAMC used in this experiment was citric acid-modified chitosan prepared in the laboratory. The modification preparation method and the characteristics of the polymer-modified products can be found in references [18][19][20].
Methods
In order to compare the effects of different amounts of CAMC on the fresh mixing performance, mechanical properties, and durability of concrete, CAMC was added at 0%, 0.2%, 0.4%, and 0.6% of the cement mass, with the mix proportions shown in Table 4.
Fresh Mixing Performance
The slump and spread of fresh concrete are measured using a concrete slump meter, with the test method based on the standard GB/T 50080-2002 [22]. The setting time of fresh concrete slurry with different types and contents of admixtures is measured using a penetration resistance meter, with the test method based on JTG E30-2005.
Mechanical Properties
The quasi-static compressive performance test of concrete is conducted on an electric servo-hydraulic testing machine with a capacity of 3000 kN. The compressive strength and splitting tensile strength tests are conducted using cubic specimens (each 100 × 100 × 100 mm³). The compressive strength of concrete is determined at a constant loading rate of 0.8–1.0 MPa/s, and the splitting tensile strength is determined at a constant loading rate of 0.05–0.08 MPa/s.
Shrinkage Performance
The early autogenous shrinkage test of fresh concrete is conducted using the SBT-AS 200 autogenous shrinkage tester. Two parallel samples are formed for each mix proportion, and the test samples are placed in a corrugated PE tube with a length of 420 ± 5 mm and a diameter of 50 ± 0.2 mm to test the early autogenous shrinkage of the concrete. The sample is then placed directly in a standard curing room for curing and testing (temperature of 23 ± 2 °C and relative humidity of 60 ± 2%). The zero point of autogenous shrinkage is taken at the initial setting time of the system, which is determined using the penetration resistance method. The early autogenous shrinkage strain is calculated as εa(t) = ξ(t) − ξ(0), where εa(t) is the self-shrinkage strain at time t, ξ(t) is the measured linear strain at time t, and ξ(0) is the measured linear strain at time 0.
The drying shrinkage performance of the samples was tested according to GB/T 50080-2002 [22]. Three groups of 100 × 100 × 515 mm³ prism specimens were made for each mix proportion, and repeated experiments were conducted. After the specimens were formed, they were sent to a standard curing room and demolded after 48 h of mold curing. The samples were then cured in the standard curing room. The change in the vertical length of the prism specimens was recorded with a micrometer caliper, and the long-term drying shrinkage rate was calculated.
εd(t) = [x(0) − x(t)]/l, where εd(t) is the drying shrinkage strain at time t, x(t) is the micrometer caliper reading at time t (mm), x(0) is the initial reading of the test piece placed in the environmental chamber (mm), and l is the initial vertical length (mm).
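The strain formulas in Equations (1) and (2) reduce to simple differences; the minimal Python sketch below assumes the signs and normalization implied by the variable definitions in the text, so treat those as assumptions.

```python
def autogenous_shrinkage(xi_t, xi_0):
    # Eq. (1), assumed form: self-shrinkage strain at time t relative to
    # the strain at the initial setting time (strains are dimensionless).
    return xi_t - xi_0

def drying_shrinkage(x_0, x_t, l):
    # Eq. (2), assumed form: shrinkage is positive when the specimen
    # shortens; x_0 and x_t are micrometer caliper readings (mm), and
    # l is the initial vertical length (mm).
    return (x_0 - x_t) / l

# Example: a 515 mm prism shortening by 0.2 mm is ~388 microstrain.
eps_d = drying_shrinkage(x_0=10.20, x_t=10.00, l=515.0)
print(round(eps_d * 1e6))  # -> 388
```

Reported values such as the 551.46 µε long-term drying shrinkage in the abstract correspond to this strain multiplied by 10⁶.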
Durability Performance
(1) Rapid chloride ion migration coefficient method (RCM) The concrete RCM test is performed according to GB/T 50082-2019 [23]: after vacuum water saturation, the test sample is placed on a concrete chloride ion migration coefficient tester (Beijing RCM-NTB). After power-on, a 0.1 mol/L AgNO₃ colorimetric indicator solution is used to reveal the chloride ion migration depth. After about 15 min, a white silver chloride precipitate can be seen at the sample interface. A marker is used to draw the color boundary, which is divided into 10 equal parts along the interface. The distance from the boundary to the bottom of the sample is measured and averaged as the penetration depth, accurate to 0.1 mm. The RCM coefficient of the concrete is calculated according to Equation (3).
where D is the non-steady-state chloride migration coefficient of the concrete, in 10⁻¹² m²/s; T is the average of the initial and final temperatures of the anode solution, in °C; U is the absolute value of the applied voltage in the experiment, in V; L is the thickness of the concrete specimen, in mm; Xd is the penetration depth of chloride ions, in mm; and t is the energization time, in h.
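Equation (3) itself is not reproduced in this excerpt; the listed variables (D, T, U, L, Xd, t) match the standard non-steady-state migration formula of NT Build 492 / GB/T 50082, so the sketch below assumes that form and should be checked against the paper's actual Equation (3).

```python
import math

def rcm_coefficient(T, U, L, Xd, t):
    """Assumed NT Build 492 form of Eq. (3).
    T: mean anolyte temperature (degC), U: applied voltage (V),
    L: specimen thickness (mm), Xd: chloride penetration depth (mm),
    t: energization time (h). Returns D in units of 1e-12 m^2/s."""
    front = 0.0239 * (273.0 + T) * L / ((U - 2.0) * t)
    depth = Xd - 0.0238 * math.sqrt((273.0 + T) * L * Xd / (U - 2.0))
    return front * depth

# Example: 25 degC, 30 V across a 50 mm slice, 20 mm depth after 24 h.
D = rcm_coefficient(T=25.0, U=30.0, L=50.0, Xd=20.0, t=24.0)
print(round(D, 1))  # -> 9.3 (i.e., 9.3e-12 m^2/s)
```

A lower D indicates better resistance to chloride ingress, which is the basis for the paper's conclusion that low-dose CAMC improves impermeability.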
(2) Concrete Freezing-Thawing Cycle Test Refer to GB/T 50082-2019 [23] for the freezing-thawing cycle test of concrete. Before the freezing-thawing cycles, the test block was first kept in the standard curing room for 24 days, then immersed in water for 4 days to reach saturation. After removal, the surface moisture was wiped off with a cloth, and the mass was measured and recorded. The test block was then placed in a rapid concrete freezing-thawing test machine for the freezing-thawing cycle test. After the desired number of cycles was reached, the test block was weighed and ultrasonic testing was performed.
(1) Mass loss The mass loss rate is calculated according to Equation (4): ∆m = (m0 − mt)/m0 × 100%, where ∆m is the mass loss rate of the test block at erosion age t; m0 is the initial mass of the test block; and mt is the mass of the test block at erosion age t.
(2) Relative dynamic elastic modulus The dynamic elastic modulus of concrete is measured using a non-metallic ultrasonic tester. Ultrasonic waves with a frequency of 54 kHz are used to test the dynamic elastic modulus of concrete specimens at different erosion ages, and the relative change in the dynamic elastic modulus is calculated. The calculation method is given by Equations (5) and (6).
where E represents the dynamic elastic modulus, in MPa; E0 and En represent the dynamic elastic modulus before erosion and at erosion age n, respectively, in MPa; Erd represents the relative dynamic elastic modulus (dimensionless); ν represents the Poisson's ratio of the cement-based material (dimensionless); ρ represents the density of the cement-based material, in kg/m³; V represents the ultrasonic velocity, in m/s; t0 represents the ultrasonic transit time before erosion, in µs; and tn represents the ultrasonic transit time at erosion age n, in µs.
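Equations (4)–(6) can be collapsed into short expressions; the sketch below assumes the standard forms (a percentage mass loss, E proportional to ρV², and a fixed ultrasonic path so that Erd = (t0/tn)²). These reductions are assumptions to verify against the paper's own equations.

```python
def mass_loss_rate(m0, mt):
    # Eq. (4), assumed form: percentage mass loss at erosion age t.
    return (m0 - mt) / m0 * 100.0

def dynamic_modulus(rho, V, nu):
    # Eq. (5), assumed form: dynamic elastic modulus from ultrasonic
    # velocity V (m/s), density rho (kg/m^3), and Poisson's ratio nu.
    # Result in Pa; divide by 1e6 for MPa.
    return rho * V**2 * (1 + nu) * (1 - 2 * nu) / (1 - nu)

def relative_dynamic_modulus(t0, tn):
    # Eq. (6), assumed form: with a fixed path length, V is proportional
    # to 1/t and E to V^2, so Erd = (t0/tn)^2 (dimensionless).
    return (t0 / tn) ** 2

print(mass_loss_rate(2500.0, 2450.0))                  # -> 2.0 (%)
print(round(relative_dynamic_modulus(20.0, 22.0), 3))  # -> 0.826
```

A longer transit time tn after freeze-thaw erosion therefore maps directly to a lower Erd, which is how internal damage is quantified without destroying the specimen.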
Mercury Intrusion Porosimetry (MIP)
The pore structure distribution of the hardened paste was tested and analyzed using an AutoPore IV 9500 mercury porosimeter (Micromeritics, USA). Prior to testing, the sample was soaked in anhydrous ethanol to terminate hydration and dried to constant weight in a 50 °C vacuum drying oven. The pore size analysis range was from approximately 4 nm to approximately 300 µm, and high-purity mercury (99.99%) was used for the test at room temperature.
Working Performance
The changes in the slump and spread of fresh concrete with different CAMC contents are shown in Figure 1. The slump of the blank group is 335 mm, while the slump of the concrete with 0.2% CAMC added decreases to 315 mm. When the CAMC content is further increased to 0.4% and 0.6%, the slump decreases to 300 mm and 290 mm, respectively. The corresponding spreads are 210 mm, 202 mm, 195 mm, and 190 mm. This indicates that, with increasing CAMC content, the fresh mixing performance of the concrete is affected: CAMC has a slight thickening effect. Although modification increases the water solubility of chitosan (CTS) and changes its molecular structure, the conclusion obtained from this experiment is the same as for natural chitosan, which also has a thickening effect.
CTS is mainly extracted from chitin and consists of glucosamine and acetylglucosamine units. When CTS is added, it increases the viscosity of concrete, resulting in a decrease in the fluidity of the mixture [12]. Moreover, its effect is almost independent of the dose. The larger the molecular weight of the CTS, the greater the detected thickening effect, which is caused by the increase in entanglement and crosslinking between chains in calcium-rich systems. The setting time at low doses is mainly affected by the molecular weight of the polymer, while the degree of deacetylation at high doses is the main controlling factor. Due to the interaction between the polymer and the cement particles, CTS in cement mortar also has a retarding effect. However, as a modified derivative of CTS, CAMC has a more significant effect on workability and plays a greater role in thickening.
Setting Time
Figure 2 shows the variation in the setting time of fresh concrete under different CAMC content conditions. It can be observed from Figure 2 that the initial setting time of the blank-group concrete is approximately 4 h. After the addition of CAMC, the initial setting times increase to 5.5 h, 15 h, and 26 h, respectively, and the final setting times also increase from 8 h to 9.5 h, 21 h, and 33 h, respectively. The hardening process of cement paste is mainly the formation of the calcium silicate hydrate (C-S-H) phase. The addition of CAMC leads to interaction between the CAMC and cement minerals, which delays the diffusion of water to the unhydrated phase, delaying the formation of C-S-H and prolonging the setting time. The increase in setting time induced by the presence of CTS and its derivatives has been reported in previous studies [24].
Mechanical Properties
Compressive Strength
The compressive strength of concrete with various CAMC contents at different ages can be found in [18]. Figure 3 shows the strength development index of concrete with different CAMC contents and ages. In the early stage of hardening, CAMC had a negative impact on the initial strength of the concrete. On the first day, the compressive strengths of the C0 and CAMC-0.2% samples reached 21.2 MPa and 22.1 MPa, respectively. As the CAMC content increased to 0.4%, the compressive strength of the CAMC-0.4% sample decreased to 15.2 MPa. After 3 days, the compressive strength values of all samples were above 30 MPa, and the early strength loss caused by the CAMC had basically disappeared. It is worth noting that, when the compressive strength was measured at 28 days, the strength of the concrete increased with the increase in CAMC content. The compressive strength of C0 was 49.6 MPa, and the compressive strengths of the concretes with added CAMC increased by 0.6%, 17.8%, and 21.4%, respectively. From the strength development index (Figure 3), it can be seen that, after 3 days of age, the strength of the samples increases more rapidly with increasing CAMC content. This indicates that, although CAMC as an additive had a negative impact on early strength, it was beneficial for subsequent strength development.
Splitting Tensile Strength
The splitting tensile strength of concrete under different CAMC content conditions is shown in Figure 4. The figure shows that the splitting tensile strength decreases with increasing CAMC content in the early stage of hardening, while after hardening has stabilized, the splitting tensile strength increases slightly, except for CAMC-0.6%. In the early stage of hydration, the splitting tensile strength decreases significantly. At 3 days of age, the splitting tensile strength of the blank group is 2.88 MPa, and after adding different amounts of CAMC, the splitting tensile strengths decrease to 2.74 MPa, 2.51 MPa, and 1.08 MPa, respectively. When hydration progresses to 7 days of age, the splitting tensile strength of the blank group reaches 3.25 MPa, and after adding CAMC, the splitting tensile strengths increase to 3.25 MPa, 3.43 MPa, and 3.11 MPa, respectively. Among the samples, the splitting tensile strengths of the higher-CAMC-content samples increase faster, with CAMC-0.4% increasing by 36.65% compared to 3 days of age, while CAMC-0.6% increases by 188% compared to 3 days of age. This is mainly because, before 3 days of age, CAMC causes a significant delay in the hydration of the cement paste; as the hydration time increases, the hydration rate accelerates and the strength increases significantly. When the curing age reaches 28 days, the splitting tensile strength still shows a significant increase compared to 7 days, with the blank group and CAMC-0.2% reaching 4.09 MPa, while CAMC-0.4% increases to 4.32 MPa, and CAMC-0.6% shows a slight decrease compared to the blank group, with a splitting tensile strength of 3.94 MPa. CTS bonds cement particles together through its viscosity, rather than accelerating the production rate of crystallization products. The concrete slump and spread experiments also show that CAMC increases the viscosity of the cement paste while also improving its mechanical strength to some extent. However, when the CAMC content is too high, it can affect
the hydration process of cement, which in turn affects the development of later splitting tensile strength.
Autogenous Shrinkage
The non-contact mortar shrinkage tester was used to detect the autogenous shrinkage development of low-temperature-rise concrete from the initial setting stage to the early hydration stage within 7 days.The test results are shown in Figure 5. From the early autogenous shrinkage variation trend of the four groups of proportions, it can be seen that the first three groups all experienced significant shrinkage in the early stage, while the CAMC-0.6% group experienced a slight expansion phenomenon.Among them, the blank group and the CAMC-0.2% group both experienced a relatively rapid shrinkage at the earliest stage, followed by an expansion behavior.After the expansion reached its peak, the sample experienced a second shrinkage.Based on its early sample shrinkage behavior, the entire change stage of the sample can be divided into three stages: shrinkage, expansion, and then shrinkage [25].The four characteristic values of △P, △E, △H, and △S represent the changes in shrinkage behavior in the three stages, where △P is the first-stage shrinkage change value, △E is the absolute value of the early expansion phenomenon in the second stage, △H is the change value after the expansion reaches its maximum value and then shrinks up to day 7 in the third stage, and △S is the final shrinkage change value of the system during the entire early autogenous shrinkage process from the initial setting time to 7 days.
Autogenous Shrinkage
The non-contact mortar shrinkage tester was used to detect the autogenous shrinkage development of low-temperature-rise concrete from the initial setting stage to the early hydration stage within 7 days.The test results are shown in Figure 5. From the early autogenous shrinkage variation trend of the four groups of proportions, it can be seen that the first three groups all experienced significant shrinkage in the early stage, while the CAMC-0.6% group experienced a slight expansion phenomenon.Among them, the blank group and the CAMC-0.2% group both experienced a relatively rapid shrinkage at the earliest stage, followed by an expansion behavior.After the expansion reached its peak, the sample experienced a second shrinkage.Based on its early sample shrinkage behavior, the entire change stage of the sample can be divided into three stages: shrinkage, expansion, and then shrinkage [25].The four characteristic values of ∆P, ∆E, ∆H, and ∆S represent the changes in shrinkage behavior in the three stages, where ∆P is the first-stage shrinkage change value, ∆E is the absolute value of the early expansion phenomenon in the second stage, ∆H is the change value after the expansion reaches its maximum value and then shrinks up to day 7 in the third stage, and ∆S is the final shrinkage change value of the system during the entire early autogenous shrinkage process from the initial setting time to 7 days.As shown in Table 5, the blank group and CAMC-0.2% group had a shrinkage value of 12.91 and 12.59 µε for △P, respectively, while the CAMC-0.4% and CAMC-0.6%groups did not exhibit the initial shrinkage phenomenon.When developing into the second stage of early expansion, the blank group and CAMC-0.2%sample group had an expansion value of 15.8 and 18.9 µε for △E, respectively, with the CAMC-0.2%sample group having a slightly increased expansion value compared to the blank group in the second stage.Similarly, the CAMC-0.4% sample did not exhibit any expansion phenomenon, while the 
CAMC-0.6%sample had an expansion value of 9.29 µε for △E.When developing into the third stage of re-shrinkage, it was found that the first three groups of samples had shrinkage values of 87.44, 88.07, and 14.85 µε for △H, respectively, while the CAMC-0.6%sample continued to expand with an expansion value of 69.2 µε.When analyzing the final shrinkage value of the entire system, it was found that the shrinkage of the sample decreased with an increasing CAMC content, with the blank group, CAMC-0.2%, and CAMC-0.4% samples having shrinkage values of 86.82 µε, 70.89 µε, and 14.52 µε, respectively.However, the CAMC-0.6%sample ultimately expanded, with an expansion value of 78.49 µε.
As shown in Table 5, the blank group and the CAMC-0.2% group had shrinkage values of 12.91 and 12.59 µε for ∆P, respectively, while the CAMC-0.4% and CAMC-0.6% groups did not exhibit the initial shrinkage phenomenon. In the second stage of early expansion, the blank group and the CAMC-0.2% group had expansion values of 15.8 and 18.9 µε for ∆E, respectively, the CAMC-0.2% group expanding slightly more than the blank group. The CAMC-0.4% sample did not exhibit any expansion, while the CAMC-0.6% sample had an expansion value of 9.29 µε for ∆E. In the third stage of re-shrinkage, the first three groups of samples had shrinkage values of 87.44, 88.07, and 14.85 µε for ∆H, respectively, while the CAMC-0.6% sample continued to expand, with an expansion value of 69.2 µε. When analyzing the final shrinkage value of the entire system, the shrinkage of the samples decreased with increasing CAMC content: the blank, CAMC-0.2%, and CAMC-0.4% samples had shrinkage values of 86.82, 70.89, and 14.52 µε, respectively, whereas the CAMC-0.6% sample ultimately expanded, with an expansion value of 78.49 µε.

There are currently three main theories that can explain the mechanism of autogenous shrinkage: surface tension theory, disjoining pressure theory, and capillary tension theory. The surface tension theory suggests that a decrease in humidity leads to a reduction in the adsorbed water layer between particles within the gelling material, resulting in an increase in surface tension and macroscopic shrinkage of the material [26]. The disjoining pressure theory suggests that the separation pressure between solid particles is a combination of complex forces such as van der Waals forces and layer repulsion; when the humidity decreases, the separation pressure between solid particles decreases and shrinkage occurs [27]. The capillary tension theory suggests that a decrease in humidity leads to the formation of a meniscus at the interface between the gas phase and the liquid phase, resulting in macroscopic shrinkage [28–30]. The hydration of cement particles proceeds with the consumption of water and the formation of a porous structure, as shown in Figure 6a. In capillary theory, due to the existence of surface tension, a curved liquid surface forms in unsaturated pores; this meniscus causes capillary stress in the pores, resulting in a decrease in volume [31]. The lower the water-cement ratio, the higher the autogenous shrinkage rate, the finer the pore structure, and the lower the porosity. The consumption of free water gradually causes it to move from large pores into small pores to achieve thermodynamic equilibrium. Due to the significant reduction in pore volume, the consumption of water in low-water-cement-ratio mixtures results in faster changes in pore saturation, which helps to reduce the relative humidity faster in low-water-cement-ratio slurries. In capillary pores with a low relative humidity, the radius of curvature of the meniscus is smaller, so the capillary stress is larger and the autogenous shrinkage is also larger [32–34].
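The chain "lower relative humidity → smaller meniscus radius → larger capillary stress" can be sketched numerically with the Kelvin–Laplace relation. This is an illustrative estimate only, not part of the study's measurements: the temperature, surface tension, and molar volume of pore water are assumed textbook values, and full wetting of the pore walls is assumed.

```python
import math

R = 8.314        # gas constant, J/(mol K)
T = 293.15       # assumed pore temperature, K (20 degrees C)
GAMMA = 0.072    # assumed surface tension of pore water, N/m
V_M = 1.8e-5     # assumed molar volume of water, m^3/mol

def capillary_stress(rh):
    """Kelvin-Laplace estimate of the capillary tension (Pa) in pore water
    in equilibrium with a given internal relative humidity rh (0..1)."""
    return -R * T * math.log(rh) / V_M

def meniscus_radius(rh):
    """Radius of curvature (m) of the meniscus sustaining that tension,
    from the Laplace equation p = 2*gamma/r."""
    return 2.0 * GAMMA / capillary_stress(rh)

# Lower RH gives a smaller meniscus radius and a larger capillary stress,
# consistent with the trend described in the text.
for rh in (0.95, 0.90, 0.80):
    p = capillary_stress(rh)
    r = meniscus_radius(rh)
    print(f"RH={rh:.2f}: capillary stress ~{p/1e6:5.1f} MPa, "
          f"meniscus radius ~{r*1e9:4.1f} nm")
```

At 90% internal relative humidity this sketch predicts a tension on the order of 14 MPa acting on menisci of roughly 10 nm radius, which is the pore-size scale where autogenous shrinkage becomes significant.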
Through the analysis of the above data, it is found that, during the early autogenous shrinkage process, some samples undergo a process of micro-expansion. The expansion in this stage mainly has three sources: first, the expansion pressure caused by the formation of hydration products (CH, AFt); second, the thermal expansion caused by the heat release of hydration [35,36]; and third, the influence of slurry bleeding and its reabsorption [29,30]. When the sample contains a high content of CAMC, the hydration process is significantly delayed, and there is a significant amount of excess free water in the cement paste, resulting in a "bleeding-like" phenomenon (as shown in Figure 6b,c). Previous 1H-NMR research results have also confirmed that CAMC can cause a "bleeding-like" phenomenon [18]. In most cases, the reabsorption of secreted water is the most important reason for this. The free water content and relative humidity inside the high-water-cement-ratio sample are still high, and the capillary stress is relatively small, so the autogenous shrinkage is very small and the expansion phenomenon occurs [37,38].
Shrinkage Analysis at Different Stages of the Entire Process
The drying shrinkage is shown in Figure 7. The long-term drying shrinkage rate of low-temperature-rise concrete decreases with time, and the long-term drying shrinkage gradually decreases with the increase in CAMC content. The drying shrinkage value of the blank group at 90 days is about 551.46 µε. After adding 0.2% CAMC, the long-term drying shrinkage value decreases by 42 µε. As the CAMC content increases further, the long-term drying shrinkage decreases to 500.97 and 401.94 µε. The long-term drying shrinkage of the four groups of samples is basically consistent with the change law of autogenous shrinkage described in the previous section.

The overall shrinkage change in low-temperature-rise concrete during the whole process is mainly the sum of the early autogenous shrinkage before demolding and the long-term drying shrinkage after demolding, and was determined in a standard environment. The early autogenous shrinkage and long-term drying shrinkage of low-temperature-rise concrete have been tested and measured in our previous article. Table 6 analyzes the early autogenous shrinkage and long-term drying shrinkage data of the four groups of samples; the specific division method can be found in the literature [25]. In Table 6, A3 is the early autogenous shrinkage value of each of the four groups of samples in Figure 5, A14 is the stable autogenous shrinkage value at 14 days, T14 is the long-term total shrinkage value of the four groups of low-temperature-rise concrete after demolding at 14 days, and E14 is the total shrinkage value of the low-temperature-rise concrete after initial setting. E14 can be calculated by the following formula:

E14 = A3 + T14 (7)

D14 is the drying shrinkage value of concrete at the age of 14 days, and D14 can be calculated by the following formula:

The analysis of Table 6 reveals that no expansion was observed in any of the low-temperature-rise concrete samples, indicating that the shrinkage caused by drying can balance (or mask) the expansion observed in the early autogenous shrinkage phase. The overall trend is that the total shrinkage value decreases with the increase in CAMC content: the total shrinkage value of the blank group is 473.37 µε, and after the addition of 0.6% CAMC, the total shrinkage value decreases to 370.91 µε. The
decrease in the total shrinkage of the CAMC-0.6% sample is due to the expansion phenomenon in its autogenous shrinkage and the reduction in long-term drying shrinkage. The total shrinkage rate mainly depends on the drying shrinkage rate, as the observed trends are the same in both cases; this is because the magnitude of drying shrinkage is larger than that of autogenous shrinkage.
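The whole-process bookkeeping of Equation (7) is simple addition of the two phases. A minimal sketch, with expansion counted as a negative shrinkage by assumption (the paper does not state a sign convention) and the T14 value below a placeholder rather than a measured datum:

```python
def total_shrinkage(a3, t14):
    """E14 = A3 + T14 (Equation 7): early autogenous shrinkage before
    demolding plus total shrinkage after demolding, in microstrain.
    Expansion is represented as a negative value by assumption."""
    return a3 + t14

# Hypothetical illustration: an early autogenous shrinkage A3 of 86.8 ue
# combined with a post-demolding total T14 of 386.6 ue (placeholder value)
# gives a whole-process shrinkage E14 of ~473.4 ue.
e14 = total_shrinkage(86.8, 386.6)
print(f"E14 = {e14:.1f} microstrain")

# A sample that expanded early (negative A3) correspondingly ends up with
# a smaller whole-process total.
print(total_shrinkage(-78.5, 386.6))
```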
Chloride Ion Diffusion Coefficient
Figure 8 shows the results of the chloride diffusion coefficient of low-temperature-rise concrete samples with different CAMC contents. According to Figure 8, the average DRCM of the blank concrete sample is 7.24 × 10⁻¹² m²/s. With the increase in CAMC content, the chloride diffusion coefficient of low-temperature-rise concrete samples shows a trend of first decreasing and then increasing. The chloride diffusion coefficients of low-temperature-rise concrete are in the order CAMC-0.6% > blank group > CAMC-0.4% > CAMC-0.2%, with the CAMC-0.2%, CAMC-0.4%, and CAMC-0.6% samples having average DRCM values of 6.28 × 10⁻¹², 6.64 × 10⁻¹², and 9.63 × 10⁻¹² m²/s, respectively. Several parameters, such as the water-cement ratio, type of cement and admixtures, curing conditions, the existence of chemical erosion, and characteristics of micro-cracks, can affect chloride penetration. Chloride ions can penetrate into concrete through diffusion caused by concentration gradients and capillary forces, which are related to the volume and size of pores and micro-cracks, as well as their interconnections. Under low-CAMC-content conditions, the chloride diffusion coefficient can be significantly reduced, which plays an important role in improving the durability of low-temperature-rise concrete. By comparing the mechanical properties and shrinkage performance, it can also be found that low-content CAMC can improve the mechanical properties and durability of concrete. Although 0.6% CAMC can still improve the mechanical properties and shrinkage performance of low-temperature-rise concrete in the later stage of hydration, an excessive CAMC content can lead to an increase in the porosity of the concrete structure in the later stage of hydration. The increase in porosity allows salt solutions to pass through the porous regions and cause precipitation in internal pores and voids, resulting in higher permeability of the concrete, leading to an increase in the chloride diffusion coefficient of low-temperature-rise concrete.
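As a rough illustration of how a diffusion coefficient translates into chloride ingress, Fick's second law for a semi-infinite medium gives C/Cs = erfc(x / (2·sqrt(D·t))). This is a sketch only: DRCM from a rapid migration test is not strictly a Fickian diffusion coefficient, and the 50 mm cover depth and 10-year horizon below are assumptions for illustration.

```python
import math

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def chloride_ratio(depth_m, d_m2_s, years):
    """Relative chloride concentration C/Cs at a given depth, from the
    error-function solution of Fick's second law for a semi-infinite
    medium with constant surface concentration."""
    t = years * SECONDS_PER_YEAR
    return math.erfc(depth_m / (2.0 * math.sqrt(d_m2_s * t)))

# Blank group's measured average D_RCM (7.24e-12 m^2/s); depth and time
# are hypothetical service conditions.
ratio = chloride_ratio(0.050, 7.24e-12, 10)
print(f"C/Cs at 50 mm after 10 years: {ratio:.2f}")
```

A lower coefficient (e.g. the 6.28 × 10⁻¹² m²/s of the CAMC-0.2% sample) yields a proportionally lower relative concentration at the same depth and time, which is the sense in which low CAMC contents improve durability.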
Freeze-Thaw Cycle
The quality changes in low-temperature-rise concrete under different freeze-thaw cycles with different amounts of CAMC are shown in Table 7. When the low-temperature-rise concrete is exposed to freeze-thaw conditions, the quality loss in the low-temperature-rise concrete increases with the increase in the number of freeze-thaw cycles. As shown in Figure 9a, the quality loss rate of the low-temperature-rise concrete increases almost linearly with the number of freeze-thaw cycles. At the same time, the addition of CAMC can reduce the quality loss in the low-temperature-rise concrete. When the number of freeze-thaw cycles reaches 150, the surface of the blank group sample exhibits significant peeling, and the coarse aggregate is exposed significantly, with a quality loss rate exceeding 5% (Figure 9c). When 0.2% and 0.4% of CAMC are added, the appearance of the sample remains basically intact, and no peeling of the sample surface occurs. After 200 freeze-thaw cycles, the quality loss rates are finally 2.39% and 3.62%, respectively. When the amount of CAMC is increased to 0.6%, the surface of the sample begins to exhibit surface erosion, and although the quality loss rate is only 1.79%, it can be observed from the relative dynamic modulus (Figure 9b) that the relative dynamic modulus of the CAMC-0.6% sample after 50 freeze-thaw cycles is lower than that of the samples with other amounts of CAMC. As the amount of CAMC increases, the relative dynamic modulus of the low-temperature-rise concrete shows a trend of first increasing and then decreasing. However, after 125 freeze-thaw cycles, the blank group cannot yield an ultrasonic wave velocity, indicating that the internal structure of the sample has been damaged.
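The two quantities tracked in this test follow the usual freeze-thaw conventions: mass (quality) loss as a percentage of the initial mass, and relative dynamic elastic modulus estimated here from the squared ratio of ultrasonic pulse velocities (assuming the dynamic modulus scales with velocity squared). The specimen masses and velocities below are hypothetical.

```python
def mass_loss_rate(m0, mn):
    """Mass (quality) loss rate in percent after n freeze-thaw cycles,
    given initial mass m0 and current mass mn (same units)."""
    return (m0 - mn) / m0 * 100.0

def relative_dynamic_modulus(v0, vn):
    """Relative dynamic elastic modulus in percent, estimated from
    ultrasonic pulse velocities before (v0) and after (vn) cycling,
    assuming E_d is proportional to v^2."""
    return (vn / v0) ** 2 * 100.0

# Hypothetical specimen: 2500 g initially, 2440 g after 200 cycles (~2.4%
# loss), pulse velocity dropping from 4500 to 4200 m/s (~87% relative
# modulus). A specimen that yields no ultrasonic velocity, like the blank
# group after 125 cycles, cannot be scored and counts as failed.
print(mass_loss_rate(2500.0, 2440.0))
print(relative_dynamic_modulus(4500.0, 4200.0))
```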
Freeze-thaw cycles can significantly weaken the quality, elastic modulus, and other properties of concrete. The causes of these destructive phenomena can be attributed to water pressure, osmotic pressure, ice crystallization pressure, microscopic ice lenses, and thermal effects that result from the mismatch between the ice and solid phases. As the frozen pore solution expands within the pores [39,40] and micro-cracks expand and merge, the connectivity of the pores can be enhanced, further increasing the permeability and accelerating damage. The pore structure determines the degree of freeze-thaw damage, and the pore size distribution of concrete is wide, ranging from 0.5 nm to several centimeters. When air is trapped in the pores, it forms a bubble with a diameter from 10 µm to 1 cm [41]. During hydration, the water in the pores is consumed, and C-S-H gel can fill the pores; therefore, the capillary pore volume decreases while the gel pore volume increases. As shown in Figure 10, the pore structures of the concrete samples after 150 freeze-thaw cycles showed a significant change. The pore diameter of the control group mainly ranged from 10 to 100 µm. With an increase in CAMC, the porosity in the concrete decreases, and the pore size distribution mainly ranges from 10 to 100 nm. Compared with the control group, the total porosity decreases from a high of 30% to around 15%, indicating that, after the addition of CAMC, harmful pores (>200 nm) in the concrete matrix gradually transform into less harmful pores (20-50 nm) and harmless pores (<20 nm) [42,43]. The addition of CAMC can increase the impermeability of the concrete matrix and improve the density of the concrete.
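The pore-harmfulness bands cited above map naturally onto a small classifier. Only three bands are named in the text; the 50-200 nm range is not labeled explicitly, so it is tagged "intermediate" here as a placeholder, and the measured diameters in the example are hypothetical.

```python
def classify_pore(diameter_nm):
    """Classify a pore by diameter using the bands named in the text:
    harmless (<20 nm), less harmful (20-50 nm), harmful (>200 nm).
    The unnamed 50-200 nm band is tagged 'intermediate' as a placeholder."""
    if diameter_nm < 20:
        return "harmless"
    if diameter_nm <= 50:
        return "less harmful"
    if diameter_nm <= 200:
        return "intermediate"
    return "harmful"

def porosity_by_class(pores_nm):
    """Count pores per class for a list of measured diameters (nm), e.g.
    from mercury intrusion porosimetry data."""
    counts = {}
    for d in pores_nm:
        cls = classify_pore(d)
        counts[cls] = counts.get(cls, 0) + 1
    return counts

# Hypothetical set of measured pore diameters (nm):
print(porosity_by_class([5, 15, 30, 120, 500, 900]))
```

Applied to a measured pore-size distribution, a shift of counts from the "harmful" bucket into the "less harmful" and "harmless" buckets is exactly the refinement attributed to CAMC in Figure 10.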
Conclusions
This work systematically studies the influence of CAMC on the fresh concrete properties, macroscopic mechanical properties, shrinkage behavior, and durability of concrete. The main conclusions are as follows:

(1) CAMC enhances the water solubility of CTS, and the interaction between CAMC and cement particles increases the consistency of cement paste. As the amount of CAMC increases, the slump and spreading of concrete decrease. Meanwhile, CAMC prolongs the setting time of concrete. When the content of CAMC is too high, it affects the hydration process of cement, delays the formation of early C-S-H and hydration products, and has a negative impact on the early compressive strength and splitting tensile strength. However, the acceleration of the hydration rate in the later stage is beneficial to the development of subsequent strength.

(2) The early shrinkage of the studied specimens can be divided into three phases: shrinkage, expansion, and re-shrinkage. With the increase in CAMC content, the early shrinkage values of the specimens were 86.82 µε, 70.89 µε, and 14.52 µε, respectively. When the CAMC content was 0.6%, the hydration process was significantly delayed, and a large amount of excess free water in the cement paste led to a "bleeding-like"
Figure 2. Setting time of fresh concrete.

Figure 3. Compressive strength development index of concrete with various CAMC contents at different ages.

Figure 4. Splitting tensile strength of concrete with different dosages of CAMC.

Figure 5. Effect of CAMC on early autogenous shrinkage of cement paste.

Figure 6. The effect of moisture distribution on the self-shrinking properties of concrete; (a) dense slurry, (b) bleeding, (c) test sample.

Figure 7. Effect of CAMC on the drying shrinkage of low-temperature-rise concrete.

Figure 8. Chloride diffusion coefficients of different samples.

Figure 9. Freeze-thaw cycle test results of samples with different CAMC contents; (a) mass loss rate; (b) relative dynamic elastic modulus; (c) appearance of samples after 150 freeze-thaw cycles.

Figure 10. Effect of freeze-thaw cycles on concrete pore structure.
Table 1. Physical properties of OPC.

Table 2. Chemical and mineral compositions of Portland cement and fly ash (wt.%).

Table 4. Mix proportions of concrete.

Table 5. The characteristic value of early autogenous shrinkage change in cement paste.

Table 6. Shrinkage value of whole process of low-temperature-rise concrete mixed with CAMC.

Table 7. Mass changes in samples with different CAMC contents.
Distinct lipid droplet characteristics and distribution unmask the apparent contradiction of the athlete's paradox
Objective: Intramyocellular lipid (IMCL) storage negatively associates with insulin resistance, albeit not in endurance-trained athletes. We investigated the putative contribution of lipid droplet (LD) morphology and subcellular localization to the so-called athlete's paradox.
Methods: We performed quantitative immunofluorescent confocal imaging of muscle biopsy sections from endurance Trained, Lean sedentary, Obese, and Type 2 diabetes (T2DM) participants (n = 8/group). T2DM patients and Trained individuals were matched for IMCL content. Furthermore, we performed this analysis in biopsies of T2DM patients before and after a 12-week exercise program (n = 8).
Results: We found marked differences in lipid storage morphology between Trained subjects and T2DM: the latter group mainly stores lipid in larger LDs in the subsarcolemmal (SS) region of type II fibers, whereas Trained store lipid in a higher number of LDs in the intramyofibrillar (IMF) region of type I fibers. In addition, a twelve-week combined endurance and strength exercise program resulted in a LD phenotype shift in T2DM patients partly towards an 'athlete-like' phenotype, accompanied by improved insulin sensitivity. Proteins involved in LD turnover were also more abundant in Trained than in T2DM and partly changed in an 'athlete-like' fashion in T2DM patients upon exercise training.
Conclusions: Our findings provide a physiological explanation for the athlete's paradox and reveal LD morphology and distribution as a major determinant of skeletal muscle insulin sensitivity.
INTRODUCTION
Compromised insulin-stimulated glucose uptake in skeletal muscle is the culprit in type 2 diabetes and correlates negatively with excessive storage of intramyocellular lipid (IMCL) [1–3]. Paradoxically, IMCL levels are also elevated in endurance-trained athletes, who are nevertheless very insulin sensitive [4]. This phenomenon is known as the athlete's paradox and indicates that factors other than total IMCL storage per se must be responsible for compromised insulin sensitivity in the diabetic state. Total IMCL storage is defined as the product of the number and size of lipids dispersed in lipid droplets (LDs) containing predominantly neutral lipids. LDs are covered by a phospholipid monolayer coated with proteins involved in LD synthesis and degradation, rendering LDs into dynamic organelles that can range substantially in size and number depending on tissue distribution [5], subcellular location, and metabolic state [6].
Upon a combined dietary-exercise intervention in obese individuals without diabetes, LD size decreased, and the change in LD size correlated with the change in insulin sensitivity [7]. Muscle fiber types have been shown to differ in LD morphology and localization [8] as well as insulin sensitivity [9]. Importantly, muscle fiber typology is affected by exercise training as well as by aging and inactivity (associated with a higher fraction type II fibers). At the muscle fiber level, storage of lipid in either the intramyofibrillar (IMF) space, where LDs are closely linked to mitochondria [8,10], as opposed to the subsarcolemmal (SS) region, may influence lipotoxic effects. Thus, despite similar levels of IMCL in trained individuals and patients with type 2 diabetes, one can hypothesize that differences in LD size, number, and subcellular distribution, along with fiber type specific differences, may contribute to the negative association between IMCL content and insulin sensitivity in insulin resistant subjects and the seemingly paradoxical absence of such an association in trained athletes.
The dynamic nature of LDs involves the interplay of a variety of LD coat proteins like PLIN2, PLIN3, and PLIN5, supposedly involved in regulating LD synthesis and degradation [11–13], and the main TAG lipase ATGL [14] and its coactivator CGI-58 [15]. Compromised dynamic behavior is often paralleled by hampered cell function [5]. Given the putative role of these proteins in LD dynamics and their responsiveness to exercise training [16–18], levels of LD coat proteins and lipolytic proteins may contribute to the athlete's paradox. LD number, size, subcellular distribution, and muscle fiber type dependency have been studied using different approaches in various (patho)physiological states. A comprehensive and systematic analysis of all these LD characteristics against the background of similar IMCL content, however, has never been performed. Thus, we aimed to examine the athlete's paradox in individuals with similar levels of IMCL but over a wide range of insulin sensitivity. We therefore cross-sectionally studied LD characteristics and marker proteins of LD dynamics in highly insulin sensitive, lean endurance-trained individuals (Trained), and insulin resistant obese patients with type 2 diabetes (T2DM) matched to IMCL content, along with normoglycemic lean sedentary individuals (Lean) and normoglycemic obese sedentary individuals (Obese) characterized by intermediate levels of insulin sensitivity. We hypothesized LD characteristics and content of marker proteins of LD dynamics to be different between Trained and T2DM. In addition, we examined in T2DM patients if a training intervention induced a shift of the LD phenotype, resembling the LD phenotype observed in Trained.
Participant selection
From our database of endurance-trained athletes (Trained; VO2max > 55 ml/kg/min), lean sedentary (Lean; VO2max < 45 ml/kg/min), obese sedentary (Obese), and T2DM participants (from Vosselman et al. [19] and Brouwers et al. [20]), eight participants per group were semi-randomly selected for inclusion (Supplemental Figure 1), targeting similar IMCL levels for Trained and T2DM, as measured with widefield microscopy via staining with Bodipy 493/503 (D3922, Molecular Probes, Leiden, The Netherlands) and calculating the area fraction covered by LDs (Table 1). Following this, Obese were matched to T2DM for age, BMI, and VO2max, and Lean to Trained for age and BMI. For studying the effects of a 12-week training program, eight T2DM patients who increased whole-body insulin sensitivity (GIR) upon the training program were randomly selected (from the studies of Meex et al. [21] and Brouwers et al. [20]) (Supplemental Figure 1). For both the cross-sectional and the training study, all participants gave written informed consent before participating in the original study. Furthermore, these studies were approved by the Medical Ethical Committee of Maastricht University and performed in agreement with the Declaration of Helsinki.
Muscle biopsy and metabolic measurements
After an overnight fast, a muscle biopsy was taken from the m. vastus lateralis, directly frozen in melting isopentane, and stored at −80 °C until histochemical analyses. Directly after the muscle biopsy, glucose infusion rate (GIR) was determined during a hyperinsulinemic-euglycemic clamp at 40 mU/m²/min of insulin as a measure of whole-body insulin sensitivity. Detailed methods can be found in Meex et al. [21] and Brouwers et al. [20]. Body composition was measured with a DXA scan and plasma glucose was determined. In addition, maximal oxidative capacity (VO2max) was determined by a graded maximal cycling test until exhaustion via indirect calorimetry (Omnical, Maastricht, The Netherlands). To account for differences in fat mass, both GIR and VO2max are expressed per kg lean body mass (LBM), as opposed to per kg body weight, which was used for inclusion. For the training study, the same measurements were performed again after the 12 weeks of training. Muscle biopsies were obtained 48–72 h after the last exercise session.
2.3. Training program
VO2max and maximal power output (Wmax) were determined during a graded maximal cycling test until exhaustion before and after the training. One-repetition maximum (1RM) was measured to determine maximal strength. The 12-week exercise training protocol was supervised, progressive, and combined aerobic and resistance exercise. Details of the training program can be found in Meex et al. [21] and Brouwers et al. [20]. In short, aerobic exercise was performed twice a week for 30 min on a cycling ergometer and resistance exercise once a week, focusing on the large muscle groups.
Histochemical analyses
To determine IMCL content, LD size, and number, 7 µm thick sections were cut and mounted on glass slides. Stainings were imaged using the 405 nm, 532 nm, and 488 nm laser lines, respectively. Type I fibers were identified based on positive myosin heavy chain type I staining; all other fibers were considered to be type II fibers. The fiber type ratio was determined using a Nikon E800 fluorescence microscope prior to confocal imaging to account for bias by differences in fiber type ratio. For each participant, cross-sections of 20 fibers were imaged with the confocal microscope with the previously determined type I to type II ratio. After image acquisition, images were deconvolved using Huygens Essential software (Scientific Volume Imaging B.V., Hilversum, The Netherlands). For each fiber type, lipid area fraction, LD size, and number were analyzed using ImageJ [22]. LD size distribution and LD-membrane distances were calculated with Matlab R2012a (The Mathworks, Inc., Natick, Massachusetts, USA). Based on these LD-membrane distances, LDs were separated into LDs located in the near vicinity of the cell membrane (SS region) and LDs located more towards the core of the myofiber (IMF region). Starting at the cell membrane, the SS region was set to be within 5% of the maximal distance from the cell center to the cell membrane, resulting in an SS surface area of around 8% for all cells. This approach was based upon combined examination of available transmission electron microscopy images and the published conversion of this for application in widefield microscopy that used a fixed distance of 2 µm from the sarcolemma to the core of the fiber [23].
As the SS region contains SS mitochondria (which are more abundant in type I fibers than in type II fibers), and muscle fiber cross-sectional area and mitochondrial density differ between trained individuals and T2DM patients, we converted the fixed distance of 2 µm into a value relative to cell size, resulting in a value of 5%. This converts into an SS area of ~8% of total cell area, which is somewhat higher than the ~4% reported in the literature for obese sedentary middle-aged males [24]; this can be explained by the inclusion of lean trained and untrained individuals in the present study.
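The SS/IMF assignment rule described above reduces to a relative-distance threshold. A minimal sketch, in which the 40 µm center-to-membrane distance and the droplet distances are hypothetical values chosen only to illustrate the cut-off:

```python
def classify_ld(dist_to_membrane_um, max_center_to_membrane_um,
                ss_fraction=0.05):
    """Assign a lipid droplet to the subsarcolemmal (SS) or
    intramyofibrillar (IMF) pool: droplets whose distance to the
    sarcolemma is within ss_fraction (5%) of the maximal
    center-to-membrane distance count as SS."""
    threshold = ss_fraction * max_center_to_membrane_um
    return "SS" if dist_to_membrane_um <= threshold else "IMF"

# Hypothetical fiber with a 40 um center-to-membrane distance: the SS
# shell is the outer 2 um, analogous to the fixed 2 um criterion of [23].
print(classify_ld(1.5, 40.0))  # SS
print(classify_ld(6.0, 40.0))  # IMF
```

Making the threshold relative to fiber size, rather than a fixed 2 µm, keeps the SS shell a constant fraction of the cross-section across fibers of very different calibers, which matters when comparing trained and T2DM muscle.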
Statistics
Results are presented as mean ± SEM. Statistical analyses were performed using SPSS version 21.0 (SPSS, Chicago, IL, USA). For the cross-sectional study, statistical differences between groups and fiber types were tested with mixed model ANOVA, with fiber type as within-subject factor for LD morphology. If the interaction effect for fiber type was significant, one-way ANOVA was used to test for statistical differences between groups for each fiber type. A Bonferroni post-hoc test was performed to specify which groups statistically differed. A paired-sample t-test was used to test, for each group, for significant differences between fiber types. For examining statistical differences between subcellular locations (SS and IMF), mixed model ANOVA with location as within-subject factor was performed for each fiber type. In case of a significant interaction effect, subsequent statistical testing was done as described above. To test for training effects and fiber type differences in LD morphology, two-way repeated measures ANOVA was performed with fiber type and time point (pre- and post-training) as within-subject factors. In case of a significant interaction effect for fiber type and/or time point, a paired-sample t-test was used to test for significant differences between fiber types at each time point or between time points for each fiber type, respectively. Paired-sample t-tests were used to test for differences in LD morphology in all fibers.
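Two of the simpler tests in this pipeline, the paired-sample t-test and the Pearson correlation used below, can be sketched with SciPy. The data here are invented for illustration (eight hypothetical participants); only the test choices mirror the text, and the mixed model ANOVAs are not reproduced.

```python
import numpy as np
from scipy import stats

# Hypothetical paired LD-size data (same 8 participants pre/post
# training, arbitrary units) -- illustrative, not the study's data.
pre  = np.array([0.55, 0.62, 0.48, 0.71, 0.66, 0.59, 0.52, 0.64])
post = pre - np.array([0.08, 0.12, 0.09, 0.11, 0.10, 0.10, 0.09, 0.11])

# Paired-sample t-test: within-subject change pre vs. post training
t_stat, p_paired = stats.ttest_rel(pre, post)

# Pearson correlation between insulin sensitivity (GIR) and LD size,
# again with invented numbers showing a negative linear association
gir     = np.array([10, 15, 20, 25, 30, 35, 40, 45], dtype=float)
ld_size = np.array([0.80, 0.74, 0.70, 0.63, 0.60, 0.54, 0.50, 0.43])
r, p_corr = stats.pearsonr(gir, ld_size)
```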
Pearson's correlation coefficients were used to test for significant linear association between variables. P < 0.05 was considered to be statistically significant.
Participant characteristics
Participant characteristics of the cross-sectional study are shown in Table 1. By design, Obese and T2DM participants were significantly older and had a higher body weight, BMI, and percentage fat mass than Trained and Lean. In addition, Obese and T2DM participants had a lower GIR and oxidative capacity compared to Lean and Trained (p < 0.01). By definition, T2DM patients had higher fasting glucose levels compared to the other groups (p < 0.01). Trained had a higher GIR (p < 0.01) and VO2max (p < 0.01) compared to the other groups. Participant characteristics of the training study are shown in Table 2. Upon exercise training, GIR increased significantly (p < 0.001). In addition, body fat percentage (p < 0.05), fasting glucose (p < 0.05), and VO2max (p < 0.05) improved with exercise training.
Differential lipid storage between trained and T2DM patients associates with insulin sensitivity
By design, total IMCL content showed no differences between Trained and T2DM patients (p = 0.274, Figure 1A), whereas a significantly lower IMCL content was observed in Lean and Obese compared to Trained (p < 0.05 and p < 0.01 respectively, Figure 1A). In Trained, IMCL content in type I fibers was almost 2-fold higher than in T2DM (p < 0.05, Figure 1A). Moreover, in Trained the IMCL content was higher in type I fibers than in type II fibers (p < 0.05), whereas for T2DM patients IMCL content was comparable for both fiber types (p = 0.12, Figure 1A). The higher IMCL content of Trained versus Lean and Obese originates from significantly more LDs (p < 0.01, Figure 1B,C). Conversely, in T2DM, the high IMCL content compared to Obese and Lean was explained by an increase in LD size rather than number, an observation that was accounted for by type II fibers (p < 0.05, Figure 1D,E). Detailed examination of the LD size frequency distribution revealed that the higher average LD size in type II fibers in T2DM originates from a trend towards fewer small LDs (≤0.25 µm², p = 0.075) and significantly more large LDs (≥1.00 µm², p < 0.05) compared to the other groups (Figure 1F). No such observations were made in type I fibers (Figure 1F). So, in the face of similar IMCL content, Trained predominantly store their lipids in a larger number of small-sized LDs in type I fibers, while in T2DM patients the lipid is stored in fewer but larger LDs, predominantly in type II fibers. Next, we explored if this differential IMCL storage pattern was associated with the differences in peripheral insulin sensitivity, expressed by GIR. A positive correlation was observed between the number of LDs and insulin sensitivity in type I fibers (r = 0.401, p < 0.05, Figure 2A) but not in type II fibers (r = −0.073, p = 0.690, Figure 2B).
Conversely, LD size correlated negatively with insulin sensitivity in type II fibers (r = −0.398, p < 0.05, Figure 2D), but not in type I fibers (r = −0.012, p = 0.949, Figure 2C).
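The size-frequency analysis above bins droplet areas into small (≤0.25 µm²) and large (≥1.00 µm²) classes. A minimal sketch of that binning step, with invented example areas and a function name of our choosing:

```python
import numpy as np

def size_fractions(areas_um2, small=0.25, large=1.00):
    """Fraction of LDs that are 'small' (area <= 0.25 µm²) or
    'large' (area >= 1.00 µm²), mirroring the bins of Figure 1F."""
    a = np.asarray(areas_um2, dtype=float)
    return (a <= small).mean(), (a >= large).mean()

# Six hypothetical droplet areas in µm²: two small, two mid, two large
small_frac, large_frac = size_fractions([0.1, 0.2, 0.5, 0.7, 1.2, 1.5])
```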
Subsarcolemmal lipid storage is higher in T2DM patients
Subsequently, we examined if the differences in LD morphology were specific for the SS or IMF region. Of all LDs measured, Trained store only 8.1% in the SS region, whereas T2DM store 2-fold more LDs in the SS region (16.1%, p < 0.01, Figure 3A). This difference in the fraction of SS LDs predominantly originates from differences in type II fibers (10.0% vs. 21.8% in Trained and T2DM, respectively; p < 0.001, Figure 3A). Interestingly, obese normoglycemic individuals also store a significantly higher fraction of their LDs in the SS region of type II fibers than Trained (10.0% vs. 20.2% in Trained and Obese, respectively; p < 0.01, Figure 3A). In T2DM, the increased fraction of LDs in type II fibers translated into a striking 4.5-fold higher SS IMCL content than in Trained (p < 0.01, Figure 3B). This is in contrast to the Obese, in which the fraction of SS LDs in type II fibers was high, whereas IMCL content in the SS region was not elevated (Figure 3B). The high IMCL content in T2DM in the SS region of type II fibers compared to Trained was accounted for by a significantly higher number of LDs (0.042 vs. 0.020, p < 0.05, Figure 3C) as well as significantly larger LDs (0.47 vs. 0.24, p < 0.01, Figure 3D). The high IMCL content in the IMF region of type I fibers in Trained (Figure 3B) originates from a high number of LDs (Figure 3C) similar in size compared to the other groups (Figure 3D). To summarize, Trained store IMCL predominantly in the IMF region of type I fibers in many normally sized LDs. Conversely, T2DM patients store IMCL predominantly in the SS region of type II fibers in more LDs of larger size. In addition, we examined if IMCL content, LD number, and LD size in the SS or IMF region correlated with whole-body insulin sensitivity expressed by GIR. In type II fibers, IMCL content (r = −0.527, p < 0.01, Figure 3E), LD number (r = −0.502, p < 0.01, Figure 3F), and LD size (r = −0.566, p < 0.01, Figure 3G) in the SS region correlated negatively with insulin sensitivity.
LD size in the SS region also associated negatively with insulin sensitivity in type I fibers (r = −0.363, p < 0.05, data not shown). Furthermore, in type I fibers in the IMF region, a positive correlation with insulin sensitivity was observed for both IMCL content (r = 0.397, p < 0.05, data not shown) and LD number (r = 0.408, p < 0.05, data not shown), but not LD size (r = −0.023, p = 0.091, data not shown).
Reduced LD size parallels improved insulin sensitivity upon exercise training in T2DM patients
The pronounced differences in the pattern of IMCL storage between Trained and T2DM triggered us to examine if a 12-week exercise training program in T2DM patients would elicit changes in LD morphology and localization resembling the phenotype observed in Trained. Neither total IMCL content nor the number of LDs was significantly affected by 12 weeks of training in T2DM (Figure 4A,B). Upon training, however, LD size was reduced in type I and II fibers (Figure 4C,E), a reduction that reached statistical significance in type I fibers (p < 0.05, Figure 4D). This change in LD size in type I fibers originates from a training-associated shift towards a higher fraction of smaller LDs (Figure 4F). Although the average size change in type II fibers upon training failed to reach significance (p = 0.146), also in type II fibers the fraction of small LDs (≤0.25 µm²) increased at the expense of large LDs (≥1.00 µm²) (Figure 4F). The training-associated shift in LD size frequency distribution in both fiber types hence matches our cross-sectional observation of Trained having more LDs of smaller size than T2DM. The 12-week training program did not affect subcellular LD localization; the overall drop in LD size was not specific for either the SS or IMF region (Supplemental Figure 2).
DISCUSSION
We aimed to examine characteristics of LDs in relation to the athlete's paradox. We show that T2DM patients and Trained, with similar total IMCL content but significantly different insulin sensitivity, exhibit a vastly different lipid storage pattern. T2DM patients store more IMCL in fewer but larger LDs, located predominantly in type II fibers in the SS area, whereas Trained store IMCL in numerous normally sized LDs in type I fibers, predominantly in the IMF region. Although a high number of LDs in type I fibers has previously been reported in trained vs. T2DM [23], our study is the first to report this in the context of similar IMCL levels. In the past, LD size has been linked to insulin sensitivity [7]; here we specify this negative association to originate from large LDs in type II fibers. To explore these fiber type specific differences and their relationship with insulin sensitivity in more detail, we developed a confocal microscopy-based approach to assess LD characteristics in the SS and IMF space in a fiber type specific fashion. Thus, we observed that in T2DM patients, IMCL content was almost 4-fold higher in the SS region of type II fibers compared to Trained. This difference originates from larger LDs in the SS region of type II fibers and extends non-fiber-type-specific observations by electron microscopy [25]. In type II fibers, IMCL content, LD size, and LD number in the SS region correlated negatively with insulin sensitivity, suggesting that specifically SS LDs may somehow interfere with insulin sensitivity. This is in line with the negative association of specifically SS LD diameter and insulin sensitivity observed previously in healthy young men [26]. LDs in the SS region are in the direct vicinity of the cell membrane, a site where LDs, or bioactive lipids released from these droplets, can readily interfere with insulin signaling. LDs in the IMF region are in close proximity of mitochondria [8,27].
Spatially, this facilitates direct transfer of intrinsically cytotoxic fatty acids to mitochondria upon demand [28]. In that respect, it is interesting to note that it has recently been reported that bio-active insulin-desensitizing lipids, i.e. DAGs and sphingolipids, in T2DM originate from subsarcolemmal as opposed to cytosolic LDs [29], and that differences in myocellular lipid synthesis and partitioning between Trained and T2DM exist [30]. To extend our cross-sectional observation that high IMCL content in insulin resistant T2DM originates from large LDs in the SS area of type II muscle fibers, whereas high IMCL content in highly insulin sensitive trained individuals originates from many small-sized LDs in the IMF area of type I muscle fibers, we analyzed muscle biopsies from insulin resistant individuals undergoing a 12-week exercise training program. We observed that a 12-week combined endurance-resistance exercise training program improved insulin sensitivity without a significant effect on total IMCL. This is somewhat surprising, as in untrained lean individuals with low IMCL levels prior to training, IMCL mostly increases upon training. However, in T2DM patients, training does not necessarily augment IMCL content [31], and sometimes a reduction has been reported [32,33]. In the present study, LD size decreased significantly in type I fibers and near-significantly in type II fibers upon training. A specific decrease in SS IMCL content and SS LD size has previously been observed upon endurance training [24-26,34]. Clearly, these exercise-mediated changes push the insulin resistant LD phenotype towards a phenotype mimicking that of trained insulin sensitive individuals. Thus, differences in LD size, along with the subcellular and fiber type specific distribution of LDs, can be at the basis of the athlete's paradox. Exactly how LDs interfere with insulin sensitivity mechanistically remains elusive.
However, large LDs have a lower surface-to-volume ratio than small LDs, rendering them less accessible for lipolysis or for recruitment of proteins activating or inhibiting lipolysis [35,36]. This is of particular relevance in type II fibers, in which the lipolytic machinery is not abundant [37,38]. Hence, control of regulated lipolysis may be less tight for large LDs, rendering large LDs prone to incomplete lipolysis. Although the literature on this is inconsistent, incomplete LD lipolysis may result in release rather than sequestration of bioactive (insulin-desensitizing) lipid moieties. Upon exploring the content of proteins involved in LD dynamics, we observed that PLIN2 was higher in T2DM than in Trained and correlated negatively with insulin sensitivity. This observation matches the notion that large LDs with low lipolytic rates are coated with PLIN2 [11,39]. However, this cannot directly be reconciled with the increasing PLIN2 levels upon training observed previously [18,40], although we could not confirm these observations in this study. Conversely, ATGL and PLIN5 protein levels were higher in Trained than in T2DM, with ATGL correlating positively with insulin sensitivity and PLIN5 being induced upon exercise training. These results partly match previous literature, in which trained athletes reportedly had higher protein levels of PLIN3, PLIN5, ATGL, and CGI-58 compared to lean untrained individuals [16]. Furthermore, exercise training increased muscle perilipin and ATGL protein content in obese subjects [17,41] and in young, healthy volunteers [18]. We previously reported that unilateral overexpression of PLIN5 in rat skeletal muscle prevented high-fat diet induced insulin resistance in the transfected leg [42] and promoted LD-mitochondrial tethering [42,43].
This underscores our previous notion of Trained having significantly more PLIN5-coated LDs than patients with type 2 diabetes, along with the observation that the number of PLIN5-coated LDs significantly contributed to the variance in insulin sensitivity (glucose infusion rate) [44]. For a more detailed understanding of these changes in protein content and their putative effect on LD lipolysis, subcellular distribution analysis is warranted. Strengths of our study include the combination of both a cross-sectional and an intervention component and the comprehensive analysis of the LD phenotype across the full spectrum of insulin sensitivity. Furthermore, our study is the first to examine the contribution of LD phenotype and localization to the athlete's paradox in light of similar IMCL levels. The obese and T2DM participants in the present study and the individuals undergoing the training intervention were selected from two previous studies [20,21]. The inclusion criteria of these studies were similar, albeit not identical, and there were modest differences in the training program with respect to the balance of endurance and resistance exercise. Subgroup analysis revealed that these modest differences in inclusion and training did not affect the endpoints of the present study. The difference in age between the Trained and T2DM groups can be considered a limitation. Although this limitation is partly overcome by adding two control groups (lean young to match the Trained and obese normoglycemic to match the T2DM), we cannot discriminate between the effects of obesity and age in this study. Furthermore, we measured total levels of LD coating proteins in whole muscle lysates. This does not directly relate to protein levels or distribution on the surface of LDs, which would be interesting to address in future studies.
CONCLUSION
In conclusion, revisiting the athlete's paradox over a wide range of insulin sensitivity, but in the face of similar muscle fat content, reveals that this paradox can be explained from a physiological perspective rather than being truly paradoxical. Insulin sensitive trained individuals possess high levels of muscle fat that is predominantly dispersed in small lipid droplets in oxidative type I muscle fibers in the intermyofibrillar space, where fatty acids released from the LDs are deemed to fuel oxidation. On the other hand, in the insulin resistant type 2 diabetic state, most of the muscle fat is found in large lipid droplets in the subsarcolemmal space of type II muscle fibers, a region where interference with insulin signaling is likely from a spatial perspective. With high protein levels of ATGL and PLIN5 and low levels of PLIN2 in the trained state, the capacity to promote LD turnover in Trained is most likely high. These cross-sectional observations are strengthened by the observation that upon exercise training, insulin sensitivity improved in patients with type 2 diabetes without affecting total muscle fat content, while the LD phenotype and the content of proteins involved in LD turnover in patients with type 2 diabetes partly shifted towards the phenotype observed in trained individuals.
"year": 2018,
"sha1": "3a3bad49446c8f948fa763a49134f7ff0a91b6b8",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.molmet.2018.08.004",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3a3bad49446c8f948fa763a49134f7ff0a91b6b8",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Quantum sensing of weak radio-frequency signals by pulsed Mollow absorption spectroscopy
Quantum sensors—qubits sensitive to external fields—have become powerful detectors for various small acoustic and electromagnetic fields. A major key to their success have been dynamical decoupling protocols which enhance sensitivity to weak oscillating (AC) signals. Currently, those methods are limited to signal frequencies below a few MHz. Here we harness a quantum-optical effect, the Mollow triplet splitting of a strongly driven two-level system, to overcome this limitation. We microscopically understand this effect as a pulsed dynamical decoupling protocol and find that it enables sensitive detection of fields close to the driven transition. Employing a nitrogen-vacancy center, we detect GHz microwave fields with a signal strength (Rabi frequency) below the current detection limit, which is set by the center's spectral linewidth 1/T2*. Pushing detection sensitivity to the much lower 1/T2 limit, this scheme could enable various applications, most prominently coherent coupling to single phonons and microwave photons.
Sensitive detectors for weak radio-frequency (>100 MHz) signals of electric, magnetic, or pressure fields would shift several frontiers of physics. They could advance the exploration of phonons on the single-particle level and reveal weak microwave signals encountered in quantum information processing, biomedical imaging, or, more exotically, the search for extraterrestrial intelligence 1 .
Detectors based on solid state spin qubits could potentially overcome these limitations. Optically active spin qubits such as nitrogen-vacancy (NV) centers can be optically polarized, that is, effectively laser-cooled to a temperature of a few 10 mK, even in a substrate at higher temperature. Magnetic tuning of their spin transition enables resonant coupling to external fields at any frequency up to 100 GHz. Theory proposals (Fig. 1a) suggest that both single microwave phonons 17 and photons 18 can be coupled sufficiently strongly to drive a full spin-flip within the spin coherence time T2 (ms 19 to s 20 , depending on species and temperature).
However, radio-frequency sensing by spin qubits is currently precluded by a major roadblock. It is illustrated in the detection protocol of Fig. 1b, where an incoming signal drives the qubit transition, inducing a spin flip which is subsequently detected by readout of the spin. To drive a full spin-flip, an incoming signal has to saturate the spin transition. Therefore, the signal strength (Rabi frequency) has to exceed the inhomogeneous transition linewidth Δω ≳ 1/T2*. Since 1/T2* is much broader than 1/T2 (MHz vs. kHz for an NV center in a natural abundance crystal at room temperature), coupling of spins to high-frequency signals remains inefficient. As a specific example, interfacing spins to single phonons or photons (Fig. 1a) is currently precluded, since coupling would be possible within T2 but remains out of reach within T2*.
For signal frequencies below a few MHz, dynamical decoupling protocols can break this limit 21 (Fig. 1c). Here, the transition is driven by a strong continuous or pulsed control field (frequency ω0) to create a pair of photon-dressed qubit states, split by the driving field Rabi frequency Ω 22,23 . This new transition has a far narrower linewidth 1/T2, and can hence absorb much weaker signals. However, practical limitations on drive power limit the frequency range which can be probed in this way.
As the key idea of this work, we note that the fully hybridized spin-photon states (the 'Jaynes-Cummings ladder') support another set of transitions at frequencies (ω0 − Ω, ω0, ω0 + Ω), the Mollow triplet 24 , which has been extensively studied in quantum optics 25-29 and has been proposed as a narrowband tunable photon source 30 . Since these transitions equally link pairs of dressed states, we posit that they should allow for T2-limited sensing of signals with a frequency much higher than the available Rabi frequency Ω. We will show in the following that these transitions can indeed be harnessed to detect high-frequency signals. Moreover, we will demonstrate that Mollow sidebands can be created by robust dynamical decoupling protocols acting as a strong drive. In contrast to a continuous drive, these protocols set the effective Rabi frequency by timing rather than power, an experimental advantage that has made low-frequency sensing by decoupling a widely adopted technique. We will analyze the sensitivity of the resulting schemes, concluding that they could enable coherent coupling of solid state spins to single phonons and photons.
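The dressed-state splitting underlying the Mollow triplet can be verified with a few lines of linear algebra. The sketch below diagonalizes the standard rotating-frame Hamiltonian H = (δ/2)σz + (Ω/2)σx (ħ = 1, δ the drive detuning); on resonance (δ = 0) the eigenstates are split by exactly Ω, placing the sidebands at ω0 ± Ω. The function name is ours.

```python
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def dressed_splitting(omega_rabi, detuning=0.0):
    """Energy splitting of the dressed states of a driven two-level
    system, from H = (detuning/2)·σz + (omega_rabi/2)·σx (ħ = 1).
    Equals the generalized Rabi frequency sqrt(detuning² + Ω²)."""
    H = 0.5 * detuning * sz + 0.5 * omega_rabi * sx
    evals = np.linalg.eigvalsh(H)      # ascending eigenvalues
    return evals[1] - evals[0]

# On resonance the splitting is exactly Ω, so the Mollow sidebands
# appear at ω0 ± Ω as stated in the text.
```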
Results
Continuous wave Mollow absorption. We demonstrate the creation of dressed states by the scheme of Fig. 2a. Here, the spin is initialized into the dressed state |+⟩ = (|0⟩ + |1⟩)/√2 by a (π/2)Y pulse (Y labeling the carrier phase ϕY = π/2). This state is locked as an eigenstate of a strong dressing field with orthogonal carrier phase ϕX = 0. We find that a weak probe field at the detuned frequency ω0 ± Ω indeed induces rotation at its native Rabi frequency, as evidenced by measurements on an NV center. The central Mollow resonance is absent in this measurement, since it couples dressed states with identical spin projection, as has been previously observed in superconducting qubits 31 . It can be recovered by preparing into an orthogonal state and changing the phase of the signal (Fig. 2b) to account for the different quadratures of the Mollow sidebands 25 .
Pulsed Mollow absorption as a sensing protocol. We now convert Mollow absorption spectroscopy into a pulsed sensing protocol (Fig. 3a) to mitigate an important problem: the continuous wave (CW) protocol is prone to decoherence, since fluctuations of the drive field power (Ω) directly translate into frequency noise of the dressed state transition ω0 ± Ω. We will see that pulsed protocols shift the frequency of the Mollow sideband absorption from ω0 ± Ω to ω0 ± π/τ, τ denoting the pulse spacing as shown in Fig. 3b. Since timing (τ) is controlled better than power (Ω), decoherence is reduced to the intrinsic limit set by the spin qubit. Importantly, absorption on these transitions will induce a spin trajectory similar to standard Rabi oscillations. This enables sensing of the absorbed field's amplitude, effectively turning the probe into a signal field.

Fig. 1 caption: a Examples of signals (mechanical resonator phonons, superconducting-cavity photons) which could be coupled to spins within T2, but which are out of reach of current T2*-limited protocols 17,18 (g coupling strength, SC superconductor). b High-frequency signals driving the bare spin transition have to be stronger than the inhomogeneous linewidth 1/T2*. c Dressing of the spin by a strong drive creates a set of narrow transitions between new eigenstates, described by the photon occupation n and the qubit state, with transition frequencies ω0, ω0 ± Ω, which enable T2-limited sensing.
Conversion into a pulsed protocol is best understood from tracking the spin evolution across the sideband absorption sequence (Fig. 3a). We decompose the strong drive into a series of π pulses, spaced by a time τ = π/Ω, and split the weak signal into a commensurate series of weak pulses with pulse area ϵ ≪ π. We equally discretize its detuning Δ (a continuous decrease in carrier phase) into periodic inversions of its axis, that is, a discrete decrease of the phase by π occurring with period π/Δ. At the resonance condition Δ = Ω, this period matches the spacing τ = π/Ω of the strong drive. In this case, the weak signal is resonantly rectified in the toggling frame of the spin (Fig. 3a), analogous to the situation in low-frequency sensing. We note that discretization preserves the axes of all fields involved up to a sign, so that the resulting absorption resonance remains phase-sensitive. While it picks up signals with a carrier phase along Y, it is blind to signals along X.
Our pulsed scheme (Fig. 3b) is an explicit implementation of this discretized sequence. We emulate the strong drive with Rabi frequency Ω by short π pulses with a spacing τ = π/Ω. We equally discretize the amplitude of the signal into a sequence of pulses, applied between the π pulses of the strong drive. We do not discretize its phase, allowing instead for a continuous detuning Δ, since this property should be sufficient to induce the advance in carrier phase discussed above. We find that the weak signal is most strongly absorbed at a detuning Δ/2π = ±(2τ) −1 . The absorption resonance remains locked to this frequency as we scan τ while keeping all other parameters constant (Fig. 3c). All our observations match well with an explicit time domain simulation of the spin evolution (bottom half of Fig. 3c, Supplementary Note 3). We note that discretization (switching) of the signal is not strictly required. All of the above analysis remains valid for a continuous signal, despite the fact that it overlaps with the control pulses during part of the time. We have confirmed this prediction experimentally (Supplementary Note 5), but stick with the switched implementation in the following, since it allows for easier implementation of more robust decoupling sequences (see below).
The bandwidth Δω = π/T of this pulsed Mollow resonance is limited by the finite duration T = 2nτ of the sequence containing 2n π-pulses. Crucially, this bandwidth drops below the inhomogeneous linewidth 1/T2* if we choose a sequence longer than T2* (Fig. 3d). The Mollow resonance is framed by sidebands with nodes at frequencies ω0 ± π/τ ± kπ/T with k ∈ ℕ. These are another consequence of the finite sequence length: since sensitivity is nonzero only in a rectangular window in the time domain, the sequence has a sinc response in the frequency domain. Tracing a Rabi oscillation of the weak signal along the resonance hyperbola Δ = ±π/τ, we find its native Rabi frequency Ωrf to be reduced to a value Ωpulsed = (2/π)·Ωrf. We attribute this reduction to the fact that the detuned signal, rotating at an angular frequency of Δ on the Bloch sphere, has a phase orthogonal to the strong drive phase only during a fraction of the free evolution time τ. It therefore has to be scaled by a factor (∫₀^τ sin(ϕ(t)) dt)/τ, with ϕ(t) denoting the phase of the signal. All of these properties are analogous to similar features in low-frequency decoupling sequences 32 .
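Both the resonance position Δ = π/τ and the 2/π reduction factor can be reproduced numerically from the toggling-frame picture. The sketch below (our own illustrative code, assuming instantaneous π pulses and an ideal ±1 toggling sign) computes the overlap between the toggling modulation and a detuned signal over a 24-pulse sequence; the response peaks near Δ = π/τ with height approaching 2/π, framed by sinc-like sidebands of width π/T.

```python
import numpy as np

def filter_response(delta, tau, n_pulses, n_samples=20000):
    """Magnitude of the overlap between the toggling-frame sign
    f(t) = ±1 (flipped by each of n_pulses π pulses, spacing tau) and
    a signal detuned by the angular frequency delta, normalized by the
    total sequence duration T = n_pulses·tau."""
    T = n_pulses * tau
    t, dt = np.linspace(0.0, T, n_samples, endpoint=False, retstep=True)
    f = (-1.0) ** np.floor(t / tau)    # sign toggles after every π pulse
    return np.abs(np.sum(f * np.exp(1j * delta * t)) * dt) / T

tau = 1.0                               # pulse spacing (arbitrary units)
deltas = np.linspace(0.5, 6.0, 1101)    # detunings to probe
resp = np.array([filter_response(d, tau, 24) for d in deltas])

peak = deltas[np.argmax(resp)]          # resonance sits near pi/tau
height = resp.max()                     # approaches the 2/pi factor
```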
Using the time domain simulation and an analytical model (Supplementary Notes 2 and 4), we find Mollow resonances in many decoupling sequences, including the robust sequences CPMG, XY4, and XY8 (Fig. 3e). A detailed discussion of the effect of decoupling sequence structure on position and shape of the resonances is given in Supplementary Note 4.
Small-signal limit and impact of decoherence. We finally demonstrate the performance and an important limit of our method by the protocol of Fig. 4a. Here, we adopt the XY8 sequence for the strong drive, in order to be maximally robust against experimental fluctuations. We phase-modulate the signal to gain a constructive contribution to the Rabi rotation during every evolution period τ*. In this setting we have been able to drive slow Rabi oscillations with a period as long as 100 µs (Fig. 4b). While this clearly breaks the T2* limit in terms of signal strength, the limit reappears as a constraint on the pulse spacing τ, which has to be short against T2*. For longer spacings (corresponding to slower Rabi frequencies in the CW sequence), the Mollow resonance merges with the inhomogeneously broadened transition. To verify this limit explicitly, we artificially shorten T2* of the NV center by averaging multiple measurements taken at different, Gaussian-distributed frequencies of the microwave drive. Tuning decoherence by this technique, we find that sensitivity breaks down if pulses are spaced by more than T2* (Fig. 4c).
Discussion
With these insights, we are finally in a position to evaluate the sensitivity that could be reached by a microwave spin sensor. Table 1 presents a series of such estimates for three typical experimental scenarios: a single NV center at ambient temperature, a NV center at cryogenic temperature with single-shot readout, and an ensemble of NV centers in a densely doped diamond. Our estimates derive from two assumptions: 1) We assume high-frequency sensing to be as robust against experimental fluctuations as low-frequency sensing, since it is based on the very same decoupling protocols. In particular, we assume that the same T2 time can be reached and the same number of control pulses can be applied. 2) In contrast to low-frequency sensing, T2 is bounded by an upper limit of Nmax·T2*, where Nmax ≈ 1000 denotes the maximum number of control pulses that can be applied before pulse errors deteriorate coherence 19 . This condition arises from the additional constraint that pulses have to be spaced by less than T2*, as discussed in the context of Fig. 4. While this condition does not set the limit for experiments on single NV centers, where isotopic purification can push T2* times into the range of 100 µs, it is the limiting factor for ensemble sensing, where inhomogeneous broadening shortens T2* times down to the sub-microsecond timescale. More importantly, these estimates suggest that NV centers should be able to couple coherently to photons and phonons in the scenarios of Fig. 1 within their coherence time T2 (assuming the values of Table 1). This would enable detection of both particles by coherent absorption and subsequent detection of the spin state, a more powerful measurement than time-averaged detection of a signal with a mean strength on the single-particle level. It could pave the way to a quantum bus based on these signals, mediating coupling between distant spins or to other qubits.
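Assumption 2) amounts to a simple back-of-the-envelope bound, sketched below. Only Nmax ≈ 1000 and the quoted T2* orders of magnitude come from the text; the specific intrinsic T2 values are illustrative assumptions, not the entries of Table 1.

```python
def usable_t2(t2_intrinsic, t2_star, n_max=1000):
    """Coherence time (in seconds) available for pulsed Mollow sensing:
    the intrinsic T2, capped by n_max pulses spaced by at most T2*."""
    return min(t2_intrinsic, n_max * t2_star)

# Single NV, isotopically purified: T2* ~ 100 µs, so the pulse-number
# bound (0.1 s) is irrelevant; an assumed intrinsic T2 ~ 1 ms limits.
single_nv = usable_t2(t2_intrinsic=1e-3, t2_star=100e-6)

# Dense ensemble: sub-microsecond T2* caps the usable coherence time at
# N_max·T2* = 0.5 ms, below an assumed longer intrinsic T2.
ensemble = usable_t2(t2_intrinsic=10e-3, t2_star=0.5e-6)
```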
The narrow transition provided by our scheme could aid the development of room-temperature MASERs based on optically initialized spins 14 . Their use as amplifiers could provide another approach to sensing of weak signals, complementary to optical detection.
In summary, we have pushed spin-based quantum sensing to frequencies much higher than the available Rabi frequency Ω. In the language of superconducting amplifiers, this promotes spins to phase-sensitive microwave detectors that might provide sufficient sensitivity to detect single phonons and photons. Compared to competing approaches such as Josephson parametric amplifiers, our scheme has a very narrow bandwidth. It absorbs signals only within a narrow window of width 1/T2 (≈100 Hz-10 kHz for NV centers) and, operated as a detector, would be limited to a maximum count rate of the same order of magnitude. It seems plausible, however, that a future extension of our experiment could continuously shift this window across frequencies up to several hundred GHz, tuning the spin transition e.g. by a magnetic field 33. Crucially, the absorption frequency ω0 ± π/τ is set only by the timing and frequency of the external drive, which can be controlled well. It is independent of the native spin transition and hence resilient to drifts in surrounding fields.
From a fundamental perspective, we have provided an intuitive microscopic understanding of the Mollow triplet as a pulsed quantum protocol. It appears most intriguing to extend this novel perspective to other effects of quantum interference, such as electromagnetically induced transparency.
Methods NV center preparation. All experiments have been performed on single NV centers spontaneously created inside a polycrystalline electronic grade IIa diamond during chemical vapor deposition (Element Six, part N°145-500-0356).
Quantum control. Both the strong drive and the weak signal were generated by an Arbitrary Waveform Generator (Rigol DG5352), which was mixed onto a GHz frequency carrier, amplified (MiniCircuits ZHL16W-43-S+), and applied to the NV center by a coplanar waveguide.

Fig. 2 caption: a The sequence of Fig. 2a can be understood as a dynamical decoupling protocol by dissecting the strong drive into a train of π pulses and the probe into a train of weak pulses (ε). The detuning Δ of the probe translates into periodic inversions of its axis, which are resonantly rectified by the strong drive. b Pulse sequence for high-frequency sensing, a direct implementation of the interpretation given in a. π pulses at frequency ω0 emulate a strong drive to resonantly enhance a weak signal at frequency ω = ω0 + Δ. The pulse spacing τ incorporates the duration of a π pulse; τ* is obtained from τ by subtracting the π pulse duration. c Pulsed Mollow resonance, as measured on a NV center (upper plot) and simulated (lower plot). A resonance at Δ = π/τ is framed by sidebands with nodes at Δ = ±π/τ ± kπ/T (with k ∈ ℕ, sequence duration T = 2nτ, cycle number n = 4). d Linewidth of the resonance. The line narrows below the natural linewidth 1/T2* (as observed in an optically detected magnetic resonance (ODMR) experiment, see also Supplementary Note 1) for sequences longer than the dephasing time (T > T2*). e Simulated spectral response to sensing sequences with different decoupling protocols. The pulse spacing between 24 π pulses was kept constant at τ = 127.6 ns. The stated effective Rabi frequencies are for Rabi oscillations driven on the Mollow resonance. 'XY8, phase switching' refers to the protocol of Fig. 4. More detailed discussion in Supplementary Note 4. Data traces in d, e have been shifted vertically for better comparison by an offset of 1.0. The ODMR trace in d is shifted horizontally by 2623 kHz.

All given microwave excitation powers refer to the input of the coplanar waveguide.
They have been calculated from the output power of the Arbitrary Waveform Generator by adding a constant offset of +56 dBm to account for all gains and losses along the excitation path.
Spin readout. The spin state was measured by fluorescence readout in a high-NA confocal microscope (excitation 532 nm, ~1 mW power, detection in the >650 nm band, objective lens Olympus UPLSAPO 60×/1.35 oil). In total, 4-8 × 10^5 readout repetitions per trace were made, corresponding to a measurement time of 15-30 min for each trace. All sequences were recorded twice, with and without an additional π pulse before readout. The difference of both datasets was normalized to the signal contrast of a Rabi oscillation to yield a quantitative estimate of |⟨0|ψ⟩|².
Data availability. All relevant data is available from the authors upon request.

Table 1 notes: <77 K refers to cryogenic conditions, 12C to isotopically pure diamond, and SSR to single-shot readout. Sensitivities have been estimated as η = 1/(2πγ√(σ T2 N_NV)), where γ denotes the gyromagnetic ratio, σ the spin detection efficiency, and N_NV the number of NV centers. References refer to published values for the respective parameter. Parameters without reference are computed from the others or have been measured in this work.
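The sensitivity formula quoted in the Table 1 footnote, η = 1/(2πγ√(σ T2 N_NV)), can be evaluated with a short script. The parameter values below (detection efficiency σ and T2) are placeholders for illustration, not the published values referenced in Table 1; only the gyromagnetic ratio is a standard literature figure:

```python
import math

GAMMA_NV = 28e9  # NV gyromagnetic ratio, ~28 GHz/T (standard literature value)

def sensitivity(sigma, t2_s, n_nv, gamma=GAMMA_NV):
    """Sensitivity eta = 1/(2*pi*gamma*sqrt(sigma*T2*N_NV)) in T/sqrt(Hz)."""
    return 1.0 / (2 * math.pi * gamma * math.sqrt(sigma * t2_s * n_nv))

# Example: a single NV center (N_NV = 1) with an assumed spin detection
# efficiency sigma = 0.03 and T2 = 100 us
eta = sensitivity(sigma=0.03, t2_s=100e-6, n_nv=1)
print(f"{eta * 1e9:.1f} nT/Hz^0.5")  # a few nT per root Hz
```

The √(N_NV) factor in the denominator shows directly why densely doped ensembles gain sensitivity despite their shorter T2* limiting the usable T2.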
Protocol for north of England and Scotland study of tonsillectomy and adeno-tonsillectomy in children (NESSTAC). A pragmatic randomised controlled trial comparing surgical intervention with conventional medical treatment in children with recurrent sore throats
Background Uncertainties surrounding the effectiveness and cost-effectiveness of childhood tonsillectomy for recurrent sore throat led the NHS Health Technology Assessment Programme to commission this research to evaluate the effectiveness and cost-effectiveness of tonsillectomy and adeno-tonsillectomy in comparison with standard non-surgical management in children aged under 16 with recurrent throat infections. The aim is to evaluate if tonsillectomy and adeno-tonsillectomy reduces the number of episodes of sore throats among children to a clinically significant extent. Methods/design A simple prospective pragmatic randomised controlled trial with economic analysis and prospective cohort study of non-trial participants comparing surgical intervention with conventional medical treatment. The treatment arm will receive tonsillectomy and adeno-tonsillectomy while in the control arm non-surgical conventional medical treatment only will be used. The primary outcome measure will be reported number of episodes of sore throat over two years with secondary outcomes measures of reported number of episodes of sore throat, otitis media and upper respiratory tract infection which invoke a GP consultation; reported number of symptom-free days; reported severity of sore throats and surgical and anaesthetic morbidity. The study will take place in five hospitals in the UK. The trial population will be 406 children aged 4–15 on their last birthday with recurrent sore throat referred by primary care to the 5 otolaryngology departments. The duration of the study is seven years (July 2001- July 2008). Discussion As with all pragmatic randomised controlled trials it is impossible to control the external environment in which the research is taking place. 
Since this trial began a number of factors have arisen which could affect the outcome, including: a reduction in the incidence of respiratory tract infections, marked socio-economic differences in consultation rates, the results from the National Prospective Tonsillectomy Audit and the Government's waiting list initiatives.
Background
In the UK sore throats cost the NHS an estimated £60 million in GP consultations alone, result in 90,000 tonsillectomy procedures, approximately half of which are in children, and a loss of more than 35 million school or work days annually [1]. The incidence of tonsillectomy has risen since the early 1990's, although levels are still much lower than in the 1930's, when 100,000 operations were performed in UK school children [2]. Adenoidectomy is performed with tonsillectomy in about one third of patients. Private medical insurance is associated with higher selective ENT surgical rates under the age of seven years [3] and 16% of UK ENT activity is in the independent sector. Therefore figures based purely on NHS returns inevitably underestimate the total activity. In addition to the health care costs, tonsillectomy incurs parental costs as one parent usually resides in hospital overnight. Thereafter the average time to return to normal activity for under 15 year olds is 12 days [4].
There is a broad similarity in the criteria for tonsillectomy in clinical guidelines in the UK [5,6] and North America [7]. The minimum criteria are typically a two year history of three to four sore throats of moderate severity (five day duration) per annum. This is despite evidence that even histories that seem impressive may not be confirmed on close scrutiny in the majority [8]. The complex psychosocial influences on tonsillectomy rates include parental enthusiasm for intervention [9], lack of information [10] and maternal use of psychotropic drugs which increases two-fold the rate of consultation for childhood sore throat [11,12]. Guidelines may not be uniformly implemented, even when locally derived. Surgeons tend to break guidelines more often in favour of performing than withholding surgery [5].
National and international variations in the rates of adeno-tonsillectomy have been recognised for decades. Even in the 1930's, 50% of UK and USA children received a tonsillectomy, while the rate was 0.5% or lower in Germany [2]. A survey of such variation in Quebec highlighted the importance of clinical uncertainty among physicians about the recommendation of surgical intervention [13], providing further support for conducting primary research. The Scottish National Tonsil Audit showed that rates of tonsillectomy in childhood varied from <4/10,000 in Forth Valley to almost 10/10,000 in Dumfries and Galloway [14].
Differential costs and benefits of surgery at different age groups are not known. The tonsils are traditionally thought to undergo a period of physiological enlargement around school entry. At this time also, pathological sequelae may include otitis media. Older children and adolescents, may have a somewhat different natural history, and illness at this age has rather different (educational) implications.
Mortality from tonsillectomy has been estimated at 1/16,000 to 1/35,000 [15], but surgical risk at this level is hard to measure, to conceptualise and to convey. The major nonfatal complications are infection, haemorrhage (2.15%), and pain which lasts on average five to six days [16,17] and may be inadequately treated in children [18]. Haemorrhage is unpleasant, requires intravenous fluid administration, with or without blood transfusion and return to theatre. The reported rate of second anaesthetic for haemostasis varies widely, from 0.75% in one British review [4] to as low as 0.06% in a study of 9409 children in Toronto [19]. The post tonsillectomy readmission rate is up to 7% [4], but in Newcastle in childhood is only 2.3% (unpublished data; Department of Clinical Effectiveness, Freeman Hospital, Newcastle). The overall reported complication rate ranges from 8% [14] to 14% [17], the majority being relatively minor such as sore throat, nausea, fever, dysphagia. Most two- to 10-year-olds undergoing ENT surgery show behavioural changes such as attention seeking, temper tantrums and night waking, and there is also anecdotal evidence for depression after tonsillectomy [20]. Younger children, due to cognitive immaturity, seem less well able to adapt to hospitalisation [21,22]. Late sequelae may include lower postoperative serum immunoglobulin levels but these have been ascribed to reduction in antigen stimulation [23]. There is continuing debate about the suggestion that tonsillectomy increases the risk of Hodgkin's lymphoma [24]. A substantial Scandinavian population based cohort study found an increased risk of Hodgkin's disease, especially in younger children [25]. The risk of transmission of nvCJD from contaminated tonsillectomy instruments remains unquantified. Some centres are costing the use of disposable tonsillectomy sets.
Despite the frequency of tonsil dissection, there is a remarkable lack of robust evidence for its efficacy. Uncontrolled patient reports suggest the procedure to be very effective but recurrent sore throat, particularly in childhood, may be a self limiting disease. Where non-intervention control groups have been studied, the benefits of tonsillectomy seem almost to disappear after two years. Available studies are either 20 to 30 years old or confined to small numbers of severely affected individuals with limited general applicability. The most recently published Cochrane review concludes that there is no evidence from randomised controlled trials to guide the clinicians in formulating the indications for surgery in children or adults [26]. The authors state the need for high quality evidence from randomised controlled trials to establish its effectiveness and that these should assess the effectiveness of the procedure in patients with throat infections of differing severity and frequency. A recent Dutch randomised controlled trial of adenotonsillectomy versus watchful waiting reported no differences between treatment arms for children with mild symptoms and only a small difference of less than one episode of fever a year between treatment arms for children with moderate symptoms [27].
The Scottish National Tonsillectomy Audit [14] showed high levels of patient satisfaction and that 80% of subjects did not consult a doctor in the subsequent 12 months. However, over the past 30 years a number of controlled studies with longer follow-up indicate marginal and diminishing levels of clinical benefit over a period of nonintervention. There are no substantial claims for the benefit of childhood tonsillectomy after 2 years. Roos [28] assessed the benefit to be 1 to 1.5 fewer sore throats (0.5 to 1 episode per annum) over the first two years after surgery in those with three to four episodes per annum preoperatively. Other studies [29][30][31] showed benefits of the order of ~1.5 fewer sore throats versus controls in the first postoperative year and on average one fewer episode in the second year. All of these and other available studies provide inadequate evidence because of poor definition of entry and outcome criteria, failure to include intention to treat calculations and small or skewed samples [32]. Even the only scientifically acceptable study by Paradise and colleagues [17] suffered from comparatively small numbers of a skewed population of more severely affected children. The benefits of surgery were more marked (approximately 1.75 fewer episodes in year 1, 1.5 in year 2) but equally short lived. The drop out rate was 34% by the end of year 2 and 1 in 3 of the control group underwent surgery and were excluded from analysis. Also, the very active therapy of the control arm may have mitigated any impact of surgery. The Paradise group went on to study a more typical i.e. less severely affected group of children, but the full results of this study, near completion in 1992 have never been reported.
Weight gain is a cited supplementary benefit of tonsillectomy. Two recent studies showed accelerated weight gain postoperatively, but as the children were shown to be of normal or above average height and weight preoperatively, this effect may be undesirable [33]. There appears so far to be only minimal additional benefit from adenoidectomy or adenotonsillectomy in recurrent acute otitis media [16].
A straw poll, for this protocol, of consultant otolaryngologists asked: what level of reduction in sore throat would justify removal of the tonsils? Replies were remarkably consistent: at least 2 fewer sore throats per annum. No published trial to date shows a benefit of this magnitude, even in the first year after surgery. There is a pressing need for a UK pragmatic trial to evaluate the effectiveness and cost-effectiveness of childhood tonsillectomy.
The purpose of this study therefore is to answer the key research question "What is the effectiveness and cost-effectiveness of tonsillectomy/adeno-tonsillectomy in comparison with standard non-surgical management in children aged under 16 with recurrent throat infections?" Assessment of outcome will emphasise those which are important to children themselves and their parents or carers. Specific research questions are:

• Does tonsillectomy/adeno-tonsillectomy reduce the number of episodes of recurrent sore throats among children to a clinically significant extent?
• What is the cost-effectiveness of tonsillectomy/adeno-tonsillectomy among children and what are the costs and benefits to families?

• What are the important outcomes of tonsillectomy/adeno-tonsillectomy for children and their parents/carers and what is the importance of these to children and their parents' quality of life?
• What are parents' (and older children's) preferences for different treatment options for recurrent sore throat?
• How representative of the target population are trial participants?
Methods/design
Trial design A simple prospective pragmatic randomised controlled trial with economic analysis comparing surgical intervention with conventional medical treatment.
Cohort design
We anticipate that a large majority of participants who decline randomisation to the trial will opt for, and receive, surgery. Therefore, in order to assess the external validity of the trial results, we will recruit a cohort of children from those who decline to participate in the trial. The cohort will include both children who opt for surgery and those who choose conventional medical treatment. They will be followed up for 24 months.
Interventions
The treatment arm will receive tonsillectomy and adenotonsillectomy while in the control arm non-surgical conventional medical treatment only will be used.
Treatment
Tonsillectomy and adeno-tonsillectomy with adenoid curettage and tonsillectomy by dissection or bipolar diathermy. Most (80%) UK surgeons use the conventional dissection method [4] and the remainder use bipolar diathermy. Both methods will be allowed in the trial according to surgical preference. Surgical intervention will take place within four weeks of randomisation.
Control
Non-surgical conventional medical treatment only will be used. There will be no active intervention protocol since no single prescribing strategy would be able to cover all patients [34]. The referring GP will be free to treat as in their current practice. The use of usual treatment rather than an active intervention protocol is considered important for the implementation of study findings since surgical enthusiasts may argue against the findings were the control group to be atypically and over rigorously treated.
Outcome measurement
The primary clinical outcome is the reported number of episodes of sore throat in the two years after randomisation. Secondary clinical outcomes include reported number of episodes of sore throat, otitis media and upper respiratory tract infection which invoke a GP consultation; reported number of symptom-free days; reported severity of sore throats and surgical and anaesthetic morbidity. In addition to the measurement of these clinical outcomes, the impact of the treatment on costs and quality of life will be assessed. There will also be an economic evaluation.
Setting
Inpatient facilities and outpatient clinics of 5 hospitals in the North of England and Scotland: Freeman Hospital, Newcastle upon Tyne; Alder Hey Children's Hospital, Liverpool; Booth Hall Children's Hospital, Manchester; Yorkhill Royal Hospital for Sick Children, Glasgow; and Bradford Royal Infirmary and general practices with which study participants are registered. Freeman Hospital, Newcastle is a large teaching hospital with a mixed adult and paediatric ENT unit. The Unit has a wide urban and rural catchment area including Newcastle and Gateshead, Northumberland and north west Durham. Alder Hey Hospital, Liverpool and Booth Hall Hospital, Manchester house two of the largest paediatric ENT units in the UK covering catchment areas in and around Liverpool and Manchester. Yorkhill is a busy university hospital with the largest children's ENT unit in Scotland and Bradford Royal Infirmary is one of the major hospitals within West Yorkshire. It has recently obtained teaching hospital status with the opening of its medical school. The ENT unit acts as a hub and supports clinics in Airedale and Dewsbury.
The unit supports the majority of adult and paediatric care.
Target population
The trial population will be children aged 4-15 on their last birthday with recurrent sore throat referred by primary care to 5 otolaryngology departments in Newcastle, Liverpool, Manchester, Glasgow and Bradford. In 1999 a total of 2683 tonsillectomy/adeno-tonsillectomy procedures were done for children in these centres: Liverpool (750), Manchester (440), Newcastle (545), Glasgow (498) and Bradford (450) of which some two-thirds will be referrals for recurrent sore throat.
Inclusion criteria
The study will use entry criteria drawn from the Northern regional guidelines [5]. Children (or carers) reporting experience of 4 or more episodes of sore throat within each of 2 years or 6 or more episodes of sore throat within 1 year will be eligible. We have considered pre-randomisation prospective data recording to operationalise stricter inclusion criteria for severity, but have rejected these as our aim is to operationalise current UK clinical practice.
Exclusion criteria
Children will be excluded if they require hospitalisation due to quinsy; have obstructive symptoms suggestive of clinically significant sleep apnoea syndrome, have rare medical conditions such as glomerulonephritis or Henoch Schonlein purpura; have previously had a tonsillectomy; have suspected velopharyngeal insufficiency, have co-morbidity that means they are unable to undergo the operation within the next 6 months, have a bleeding disorder, or have congenital/valvular heart disease.
Number of subjects required
We estimate a completed sample size at follow up of 284 children. Allowing for an attrition rate of around 30% we will need to recruit a total of 406 children to the trial to achieve the estimated sample of 284 (who will complete the trial). Within the original three study hospitals some 1700 tonsillectomies/adeno-tonsillectomies are currently performed annually. Only two thirds of these will have recurrent sore throats. In any trial where the intervention is widely used in current practice there are likely to be large numbers of eligible participants who opt for the intervention treatment and decline participation in the trial. We estimate that this could be up to one half of all eligible referrals from primary care. The maximum available for randomisation is therefore estimated as 566 per annum. Loss of eligible subjects in the trial is expected due to holiday periods and 'winter pressures'. On the experience of loss in other trials (50%) a conservative estimate would be 283 per annum. If we assume a conservative rate of attrition of 30% over two years we would expect 198 completing trial participants to be recruited in a 12 month period. Given seasonal effects a full 2 years would be necessary to recruit the estimated sample size. The cohort sample will be identified from participants who indicated a preference not to be randomised within the trial and who agreed to data collection. An appropriate sampling fraction will be used once non-participation in the trial can be estimated.
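The recruitment arithmetic above can be restated as a one-line calculation; this sketch simply recomputes the stated figures (284 completers, 30% attrition over two years):

```python
import math

def required_recruitment(completers, attrition):
    """Number of children to randomise so that `completers` remain after attrition."""
    return math.ceil(completers / (1.0 - attrition))

print(required_recruitment(284, 0.30))  # 406, as stated in the protocol
```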
Subject recruitment
Recruitment to the study will take place in secondary care. All GP referrals to study centres of children with recurrent sore throat will be considered by participating surgeons. Arrangements are in place in each centre for eligible children to be referred to the clinical applicants. GPs will be informed of this reorganisation. This will facilitate efficient use of the outpatient clinics at which trial participants will be recruited. Trained Research Nurses will introduce the trial to patients, who will be shown a video regarding the main aspects of the trial. Patients will also receive information sheets. Research Nurses will discuss the trial with patients in light of the information provided in the video and information sheets. Patients will then be able to have an informed discussion with the participating consultant. Research Nurses will obtain written consent from patients willing to participate in the trial. Information sheets and consent forms are provided for all parents involved in the trial; however, these have been amended to provide separate information sheets and consent forms suitable for children and teenagers. All information sheets, consent forms and the video transcript have been translated into Bengali, Punjabi, Gujarati, and Urdu. There are also separate information sheets and consent forms for the cohort group.
Randomisation
Independent world wide web based computer randomisation will allocate participants to treatment arms. Randomisation will take place once informed consent to the study has been completed and baseline data collected. The sample will be stratified by age of child at last birthday. Blocked randomisation will be used to ensure that within each centre, within each of the three age groups (4-7, 8-11, 12-15) children will be allocated in equal numbers to each arm of the trial. Where trial sites are unable to access the world wide web they will telephone the coordinating centre (University of Newcastle) in order for web based randomisation to be completed on their behalf. Sampling for the cohort study will similarly be stratified by age.
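As an illustration only, stratified blocked allocation of the kind described above can be sketched as follows. The block size of 4 is an assumption for the sketch, since the protocol does not state the block size used by the web-based system:

```python
import random

ARMS = ["surgery", "medical"]
AGE_STRATA = ["4-7", "8-11", "12-15"]

def permuted_block(block_size=4):
    """One block containing each arm in equal numbers, in random order."""
    block = ARMS * (block_size // len(ARMS))
    random.shuffle(block)
    return block

def allocate(n, block_size=4):
    """Allocation list for n children within one centre/age stratum."""
    allocations = []
    while len(allocations) < n:
        allocations.extend(permuted_block(block_size))
    return allocations[:n]

# Within each centre, each age stratum gets its own balanced sequence,
# so arms stay in equal numbers per stratum as the protocol requires.
schedule = {stratum: allocate(20) for stratum in AGE_STRATA}
print({s: (a.count("surgery"), a.count("medical")) for s, a in schedule.items()})
```

Blocking guarantees balance within each stratum even if recruitment stops mid-way through a block boundary, which simple randomisation does not.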
Blinding
Health technology assessment is essentially a pragmatic activity conducted in normal clinical practice, rather than an exploratory activity conducted in highly controlled settings. It follows that blinding doctors and patients to treatment is not desirable since it distorts normal clinical practice. Nor is it practicable. In contrast, blinding assessors is important because it minimises subjective bias towards a given treatment. All research staff conducting interviews or processing postal questionnaires and diaries will be blind to treatment modalities of all participants. This will be facilitated by separating the responsibility for recruitment and randomisation from outcome assessment. Furthermore, participants will be encouraged to respond to questions without describing their treatment regime. In this way, we will minimise subjective bias towards a given treatment.
Data collection and follow up
All participants will be followed up for 24 months from the date of initial randomisation. To minimise recall bias, data on sore throats will be gathered by a simple, structured daily health diary completed and returned by participants on a monthly basis for 24 months. Experience of similar studies suggests that with appropriate telephone reminders 90% of diaries will be returned completed. In addition simple outcome questionnaires, using two postal reminders and a telephone reminder, will be sent to trial and cohort study participants. Overall we anticipate an 80% response rate. Postal surveys will be done at 3, 12 and 24 months after randomisation. A baseline questionnaire will be completed by all participants upon recruitment to the trial. The greater frequency of data collection in the first 12 months is necessary in order to capture data on expected changes in direct and social costs to participants in the first 12 months. Experience also suggests that data on consultation rates and prescribed medication can be gathered most accurately and reliably from medical records. Manual abstraction will be performed by trained research nurses at the end of follow up for all participants.
Adverse events will be recorded by self completion daily diaries (parent or child) which will be collected four weekly and GP records which will be examined at the end of the 24 months follow up period. Expected adverse events include infection, haemorrhage and pain following tonsillectomy with possible hospital readmission as well as sore throat, nausea, fever and dysphagia. All adverse events will be managed as per normal care, since the intervention process of this study does not deviate from normal care.
Data handling and record keeping
Only anonymised non-identifiable data will be recorded by the site's research teams from personal medical records. Health diaries and follow-up questionnaires will be anonymous and returned to the trial centre in reply-paid envelopes. For linking purposes these data sets will have unique study identifiers. Only the lead researcher, trial manager and trial administrator will have access to the key which links study identifiers to individual data sets. Personal details (participants' full name and address) will be stored on a secure database at CHSR for the purpose of sending out questionnaires and diaries centrally. All data held for analysis will be held in accordance with the Data Protection Act. On completion of the study and associated dissemination the Trial Master File will be archived in the CHSR for 10 years. Trial sites will be responsible for archiving their own documentation.
Economic evaluation
An economic evaluation will be carried out alongside the clinical trial in order to ascertain the cost-effectiveness from a societal perspective with a focus on health service and families [35]. The cohort sample will not be included in the economic evaluation except for the purpose of validation and estimating the representativeness of cost and benefit data for trial participants.
Measure of benefits used and study type
Cost consequences analysis (CCA), cost-effectiveness analysis (CEA) and cost utility analysis (CUA) will be conducted. In CCA, all the outcomes used in the clinical study will be adopted as measures of benefits, including the QoL dimensions. In CEA, the benefits will be measured by the number of events of recurrent sore throat and the number of symptom-free days. In CUA, different health outcomes will be combined with QoL dimensions.
Resources data collected within the trial and costing methods
Medical resource data will relate to the interventions under investigation, any use of health care services due to 'sore throat' episodes not averted, treatment of drug side-effects, surgery complications and long term sequelae. Services to be monitored include: outpatient visits and hospitalisations, investigations, A&E admissions, visits and telephone consultations to and from the GP and any other health care professionals, use of medications (including antibiotics, analgesics, and drugs to manage antibiotic side-effects), and any other use of health care services in both the private and public sectors. Manpower data will be collected separately for each main category of staff. Participants' out of pocket expenses such as over the counter medicines will be reported. Costing of health care resources will be undertaken in a parallel study and a mixed approach using micro-costing and gross-costing methods will be used [36]. We will cost resources using health service pay and price data. Where appropriate, these will be integrated using national published data [37][38][39]. Where relevant, costs will be broken down into capital, staff, consumable and overhead costs. This will aid the production of different cost scenarios. The impact of the interventions on the time 'invested' by children and carers because of illness, treatment and rehabilitation will also be assessed. Children's days of restricted activity and their level of functioning; time off school; carers' time off work; children's and carers' time involved in outpatient attendance (such as travel time, waiting time and the duration of the clinical visit) and impact on children's and carers' quality of life will be monitored. For carers in paid/unpaid work, time will be valued in monetary terms. Costing will be undertaken using the human capital approach and the friction cost method [40]. Those resources for which we find a statistically significant difference between the groups will be costed.
Those which show no statistically significant difference but are of practical significance in their contribution to costs will also be costed. The cost analysis will not differ across the different types of economic evaluations. However, in the CUA, when carers' preferences are assessed, particular caution will be used to avoid double counting the loss of income due to work absences [41]. Whenever applicable, a discount rate of 6% will be used, which is the rate currently used by the public sector in the UK. Costs will be expressed in UK pounds sterling, in the prices of the year in which the final analysis is carried out; if necessary, cost data will be updated for inflation.
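The 6% discounting rule described above amounts to dividing a future cost by (1 + 0.06) raised to the number of years ahead. A minimal sketch, with purely illustrative figures that are not trial data:

```python
def present_value(cost, years_ahead, rate=0.06):
    """Discount a cost incurred `years_ahead` years in the future to its
    present value, at the public-sector rate of 6% per year."""
    return cost / (1 + rate) ** years_ahead

# Illustrative: a 100 GBP cost incurred two years after baseline
pv = present_value(100.0, 2)  # about 89.00 GBP in present-value terms
```

The same function with a different `rate` argument supports the sensitivity analysis over a range of discount rates mentioned later in the protocol.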
Resources/costs data collected outwith the trial
The study is not powered to detect significant differences for rare events. Given the relatively low incidence of surgical complications, long-term sequelae due to surgery and drug side-effects, data on the related use of resources, costs to the carers and impact on children will be gathered outwith the trial, from the literature and from experts' opinions. Consensus estimates will be obtained by interviewing a panel of experts, including members of the study team and others. The source of the data will always be explicitly stated.
Synthesis of costs and benefits
Depending on the outcome measure, if there is no statistically significant evidence that one treatment strategy is more effective than another, a cost-minimisation framework will be used and the less expensive form of care will be recommended. If one strategy appears to be dominant (i.e. to be more effective and less costly than the alternative), its uptake will be recommended. If one form of care appears to be more effective and more expensive than the comparator, estimates of incremental cost-effectiveness (and cost-utility) ratios will be generated. A judgement will be required in a policy making context to establish whether the additional benefits justify the additional costs. In any case, recommendations will be made taking into account the generalisability of the results.
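The decision logic above (cost-minimisation, dominance, or an incremental ratio) can be sketched as follows; the function name and the figures in the usage example are illustrative, not part of the protocol:

```python
def compare_strategies(cost_a, effect_a, cost_b, effect_b):
    """Compare strategy B (e.g. tonsillectomy) against comparator A.

    Returns 'B dominant' / 'A dominant' when one strategy is cheaper and
    at least as effective (or equally costly and more effective),
    'cost-minimisation' when effectiveness and cost are tied, otherwise
    the incremental cost-effectiveness ratio (extra cost per extra unit
    of effect)."""
    d_cost, d_effect = cost_b - cost_a, effect_b - effect_a
    if d_cost <= 0 and d_effect >= 0 and (d_cost, d_effect) != (0, 0):
        return "B dominant"
    if d_cost >= 0 and d_effect <= 0 and (d_cost, d_effect) != (0, 0):
        return "A dominant"
    if d_effect == 0:
        # no effectiveness difference: choose the cheaper strategy
        return "cost-minimisation"
    return d_cost / d_effect

# Illustrative: B costs 500 GBP more and averts 2 extra episodes -> ICER 250
icer = compare_strategies(1000.0, 2.0, 1500.0, 4.0)
```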
Sensitivity analysis
To handle uncertainty not related to sampling variations and to enhance the generalisability of the results, one-way, multi-way and extreme scenario analyses will be undertaken as appropriate and confidence intervals for cost-effectiveness ratios will be estimated under different scenarios [42]. A sensitivity analysis taking into account differences in resource use which are practically significant (i.e. potentially costly) but which have not been shown to be statistically significant will also be undertaken. The sensitivity analysis will also make explicit all the simplifying assumptions made to collect the data [43]. The application of discounting to the benefits will also be tested in the sensitivity analysis, as well as a range of discount rates. Particular attention will also be given to whether the costs data used reflect the true marginal opportunity costs of the resources used. When more than one reliable source of information is available, such data will be used as a term of comparison. The use of different costing methods for multi-centre studies will be explored. Earlier studies [28][29][30][31] suggest that longer term outcomes such as reduction in recurrent sore throat may show only marginal benefits. An equivalence trial with a substantially larger sample size would be necessary to capture significant longer-term outcomes. To contain the cost of the trial we have not proposed a three year follow up. However, the future sequence of clinical events and economic impact will be modelled beyond the 2-year follow-up. The relevant data will be derived from available studies and experts' opinions.
Measuring participants' preferences and utilities
There is a need to value the effectiveness of interventions taking account of the risk of surgery and its long-term sequelae (e.g. sleep, eating and speech disturbances, regressive behaviour [44]). The utility assessments will also provide insight into informed choice models [45]. Older children's and carers' values will be used to elicit preferences for the trade-off between the perceived risks and benefits of surgery versus drug treatment. Preferences will relate to temporary and chronic scenarios associated with morbidity and QoL because of symptoms and treatment complications. The scenarios will be developed by selecting the health outcomes and QoL domains relevant to the problem. Interviews will be carried out with a sample of older children and carers from the cohort group, and the Standard Gamble method [46] will be used to derive utilities.
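As a rough sketch of the Standard Gamble: the respondent repeatedly chooses between the chronic health state for certain and a gamble offering full health with probability p (death otherwise); the utility of the state is the probability at which they are indifferent. The `prefers_gamble` callback below is a hypothetical stand-in for the interview responses, not part of the cited method:

```python
def elicit_utility(prefers_gamble, tol=0.01):
    """Bisection search for the Standard Gamble indifference probability.

    `prefers_gamble(p)` should return True when the respondent prefers a
    gamble giving full health with probability p (death otherwise) over
    living in the chronic health state for certain.  The utility of the
    state equals the indifference probability."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        p = (lo + hi) / 2
        if prefers_gamble(p):
            hi = p  # gamble already attractive: indifference point is lower
        else:
            lo = p
    return (lo + hi) / 2

# Hypothetical respondent whose true utility for the state is 0.8:
u = elicit_utility(lambda p: p > 0.8)
```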
Statistical considerations
Sample size calculation
In this trial we anticipate a fairly large difference in the primary clinical outcome (the reported number of episodes of sore throat in the two years after randomisation) with an effect size of around 1.0, but a smaller difference in a number of psycho-social outcomes including health-related quality of life, with an effect size of 0.33. No standard sample size formula is available for economic evaluations, and a number of methods have been proposed [47][48][49]. The information which is currently available limits the use of such methods in practical applications. Published data [17] suggest that tonsillectomy may lead to a reduction of approximately 1.5 days per year in missed schooling. Given a reported standard deviation of 4.5, to detect this difference with 80% power we would need approximately 142 children in each arm of the trial assuming a significance level of 5%. A sample size of 142 children in the cohort group will allow us to detect similar differences between the cohort group and propositi. The sample will be stratified by age (4-7, 8-11, 12-15). With a total of 284 children, we will have approximately 47 randomised to each treatment arm in each stratum. Given that the standard deviation of the number of sore throats per year is 2.0 [18], we will be able to estimate the difference between treatments in each stratum with a standard error of 0.41. (Equivalently we would have 90% power to detect a difference of 1.35 episodes of sore throat per year in each stratum assuming a type 1 error of 0.05). It is anticipated that the difference in outcome between the two arms of the trial will be approximately 2 episodes in the second year of follow up. A sample size of 142 children in each arm should enable us to measure this difference with sufficient precision to undertake a meaningful economic analysis.
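Under a standard two-sample normal-approximation formula (an assumption, since the protocol does not state which formula was used), the quoted figures can be reproduced:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Children needed per arm to detect a mean difference `delta`
    (two-sided test, normal approximation, equal variances)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil(2 * (sd / delta) ** 2 * (z_a + z_b) ** 2)

# 1.5 days per year of missed schooling, SD 4.5, 80% power, 5% significance
n = n_per_arm(delta=1.5, sd=4.5)  # 142 per arm, matching the protocol
# standard error of a between-arm difference with 47 children per stratum arm
se = sqrt(2 * 2.0 ** 2 / 47)      # about 0.41 episodes per year
```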
Main analysis
An intention to treat analysis will be performed. In particular, children randomised to non-surgical conventional medical treatment will be retained in that group for the analysis even if they subsequently receive a tonsillectomy. The primary clinical outcome measure will be the number of episodes of sore throat. This variable will be analysed using generalised linear modelling assuming a Poisson error structure with a log link function [50]. By fitting the difference between the two experimental groups as a fixed effect, interval estimates of the effect of tonsillectomy (in each of the first two years of follow up) will be generated. These estimates will then be used in the economic analysis. The same approach will be used to analyse the other outcomes. A Poisson error structure will be assumed for data in the form of a count (such as the number of episodes of absence from school) and a normal error structure adopted for continuous variables (such as the quality of life indices).
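In practice the Poisson GLM would be fitted with a statistics package; as a hedged illustration, with a single binary treatment indicator and equal follow-up, the log-link model reduces to a log rate ratio with a simple Wald-type standard error. The episode counts below are invented:

```python
import math

def log_rate_ratio(counts_treat, counts_control):
    """Log rate ratio (tonsillectomy vs medical management) and a
    Wald-type standard error under the Poisson assumption, assuming
    equal person-time of follow-up in the two arms."""
    s_t, n_t = sum(counts_treat), len(counts_treat)
    s_c, n_c = sum(counts_control), len(counts_control)
    beta = math.log(s_t / n_t) - math.log(s_c / n_c)
    se = math.sqrt(1 / s_t + 1 / s_c)  # variance of the log of a Poisson total
    return beta, se

# Invented sore-throat counts for six children:
beta, se = log_rate_ratio([1, 2, 1], [4, 3, 5])
ci = (beta - 1.96 * se, beta + 1.96 * se)  # interval estimate on the log scale
```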
Secondary analysis
The aim of secondary analysis is to determine whether we can identify groups of children who benefit from surgical treatment. It is hypothesised that disease severity may be an important factor. A severity index based on history of the condition during the year before entry to the study will be derived using data recorded in GP records. The relationship between severity and the effect of tonsillectomy will then be investigated using the modelling approach described above.
Economic analysis
We expect skewness in the distribution of use of resources/costs [51]. In the presence of skewness, the logarithmic transformation of data is not recommended, and the application of non-parametric tests can provide misleading results (economic studies aim to base the analysis on arithmetic means and not median values) [52,53]. The non-parametric bootstrap test may be the most appropriate [53], since it does not require any assumptions about the normality of data and equality of the variance or shape of the distributions. The t-test can be safely used if the sample size is not too small [52]. Depending on the level of skewness of the data obtained we will make a judgement on which of these two methods can be safely applied. The mean cost estimates, (incremental) cost-effectiveness ratios, and conventional measures of variance will be reported [42].
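A minimal sketch of a non-parametric bootstrap for skewed cost data, here producing a percentile confidence interval for the difference in arithmetic mean costs (the exact resampling scheme used in the trial may differ); the cost figures are invented:

```python
import random

def bootstrap_ci_mean_diff(costs_a, costs_b, n_boot=10000, seed=1):
    """Percentile-bootstrap 95% confidence interval for the difference in
    arithmetic mean costs (arm A minus arm B).  No normality assumption,
    which suits skewed cost distributions."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        resample_a = [rng.choice(costs_a) for _ in costs_a]
        resample_b = [rng.choice(costs_b) for _ in costs_b]
        diffs.append(sum(resample_a) / len(resample_a)
                     - sum(resample_b) / len(resample_b))
    diffs.sort()
    return diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot)]

# Invented, right-skewed cost data (GBP) for two arms:
lo, hi = bootstrap_ci_mean_diff([100, 120, 110, 460, 90], [10, 20, 15, 25, 30])
```

An interval excluding zero is evidence of a cost difference without assuming any distributional shape.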
Cohort analysis
The cohort of patients who decline to be randomised will be used to assess the external validity of the main study. Baseline characteristics of the cohort will be compared with those of the study population using standard tests for the comparison of two independent samples (e.g. the t-test or Mann-Whitney test as appropriate). Outcome for the cohort will be compared with outcome for the two groups of study participants using the modelling approach described above.
Trial steering committee
The study has a Trial Steering Committee which meets 6 monthly. The Trial Steering Committee is responsible for monitoring public interest and ensuring issues relating to research governance are met. The trial does not have a data monitoring committee since it examines routine therapies.
Consumer involvement
Consumer involvement will be encouraged and facilitated throughout the study by the establishment of a consumer advisory panel. We will use the advisory panel to help clarify important outcomes for children and their parents (or carers) and to assist in the development of participant-oriented data-collection methods. By consumer we include here children and their parents as well as representatives of appropriate advocacy groups such as the Patients Association. Our experience of consumer panels in the development and implementation of other studies (e.g. quality of life of people with dementia and treatment for primary biliary cirrhosis of the liver) has highlighted the different types of involvement and the different ways that consumers can be involved in primary research. Parents and children will be involved in an advisory capacity rather than in a full participatory role. We will establish and convene regularly the consumer advisory panel, in which the group process will use focus group methods. Throughout the project (at least annually) we will use the advisory panel to voice participants' concerns and to identify participant-oriented solutions to such concerns.
Ethical approval
The conduct of this study will be in accordance with the ethical principles set out in the Declaration of Helsinki. The trial has approval from MREC and all the associated LRECs. The trial also holds a Clinical Trial Authorisation from the MHRA. The trial has NHS R&D and Caldicott Guardian approval from each participating site. There are no particular ethical problems with this trial. The ethical challenge is as with any surgical randomised trial where one arm is an irreversible procedure under general anaesthesia and the other arm is effectively maintenance of the status quo, with reverting to surgery remaining an option. Set against the surgical risk, however, is the essentially curative nature of the intervention: no tonsillitis can occur once the tonsils have been removed. Further, the children under consideration all have qualifying levels of sore throat and would otherwise be eligible for surgery. In other words the issue is more the withholding of tonsillectomy rather than one of random allocation to intervention. All subjects will provide written informed consent before any study procedures are carried out and a participant information sheet will be provided. As part of the consent process participants must agree to researchers and regulatory representatives having access to their medical records. Participants will also be informed that they have the right to withdraw from the study at any time.
The NHS Trust has liability for clinical negligence that harms individuals toward whom they have a duty of care. NHS Indemnity covers NHS staff and medical academic staff with honorary contracts conducting the trial.
Discussion
As with all pragmatic randomised controlled trials it is impossible to control the external environment in which the research is taking place. Since this trial began a number of factors have arisen which could affect the outcome. Firstly, there appears to be a reduction in the incidence of respiratory tract infections, or at least a reduction in the number of patients presenting to primary health care with respiratory tract infections [54]. This will inevitably lead to a reduction in the number of children being referred to secondary care for recurrent throat infections. Secondly, it has come to light that there are marked socio-economic differences in consultation rates in primary health care which are not reflected in operation rates for tonsillitis in secondary care [55]. Lower socio-economic groups use NHS services for tonsillitis less in relation to need than higher socio-economic groups. Again this has implications for the rate of referral to secondary care. The results from the recent National Prospective Tonsillectomy Audit [56] may also have led to an alteration in the surgical techniques favoured by our trial consultants; however, surgical methods and any associated post operative complications are recorded for the trial. In addition there is anecdotal evidence that the Government's waiting list initiatives may impact the study by exporting surgery outside the NHS.
"year": 2006,
"sha1": "fb9cc83d7e7854576cf72448d82e3fe53af52614",
"oa_license": "CCBY",
"oa_url": "https://bmcearnosethroatdisord.biomedcentral.com/track/pdf/10.1186/1472-6815-6-13",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "fb9cc83d7e7854576cf72448d82e3fe53af52614",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Periplasmic Acid Stress Increases Cell Division Asymmetry (Polar Aging) of Escherichia coli
Under certain kinds of cytoplasmic stress, Escherichia coli selectively reproduce by distributing the newer cytoplasmic components to new-pole cells while sequestering older, damaged components in cells inheriting the old pole. This phenomenon is termed polar aging or cell division asymmetry. It is unknown whether cell division asymmetry can arise from a periplasmic stress, such as the stress of extracellular acid, which is mediated by the periplasm. We tested the effect of periplasmic acid stress on growth and division of adherent single cells. We tracked individual cell lineages over five or more generations, using fluorescence microscopy with ratiometric pHluorin to measure cytoplasmic pH. Adherent colonies were perfused continually with LBK medium buffered at pH 6.00 or at pH 7.50; the external pH determines periplasmic pH. In each experiment, cell lineages were mapped to correlate division time, pole age and cell generation number. In colonies perfused at pH 6.0, the cells inheriting the oldest pole divided significantly more slowly than the cells inheriting the newest pole. In colonies perfused at pH 7.50 (near or above cytoplasmic pH), no significant cell division asymmetry was observed. Under both conditions (periplasmic pH 6.0 or pH 7.5) the cells maintained cytoplasmic pH values at 7.2–7.3. No evidence of cytoplasmic protein aggregation was seen. Thus, periplasmic acid stress leads to cell division asymmetry with minimal cytoplasmic stress.
Introduction
Asymmetry is a much debated property of the bacterial cell [1][2][3][4][5][6][7][8]; see also Table 1. Some bacteria show morphological and functional asymmetry, such as Caulobacter crescentus, whose cell division yields a stalked cell and a flagellated cell. Others, such as Escherichia coli, show bilateral symmetry and generate daughter cells that appear functionally equivalent. Yet even E. coli are asymmetric in that each daughter cell inherits an "old pole" (which existed for one or more previous generations) and a "new pole" formed by septation. The old-pole and new-pole cells may show differential division times and reproductive potential, a property termed cell division asymmetry [4,7,9]. Under certain conditions, old-pole cells undergo polar aging, defined as an increase in division time and higher rates of cell death over several generations (generally five or more generations are observed). Polar aging also occurs in stalked cells of Caulobacter crescentus [2]. Other bacteria such as rhizobia [10] and mycobacteria [11] show polar "rejuvenation" by elongating at alternate poles. In mycobacteria, old-pole and new-pole cells differ in their resistance to various antibiotics. The fitness advantage of polar aging may derive from partitioning of damage to the cytoplasm [6]. When Escherichia coli fissions, each daughter cell inherits cytoplasmic components located at the old poles, as well as components at the mid-cell plane where septation forms the new poles. E. coli may experience "aging" from the selective partitioning of damaged cytoplasmic components such as denatured and aggregated proteins [12,13]. The cytoplasmic aggregates are stored preferentially in a pole, and ultimately accumulated by old-pole cells [14]. Thus, cell damage is relegated to an older generation while the younger generation receives the newer components. The asymmetric allocation leads to decreased growth rates and higher mortality rates in the old-pole daughter cells over multiple generations [9,12,13,15]. In effect, the old pole acts as a "disposable soma", preserving the new pole as an "immortal germ line" [1,5].
The degree of polar asymmetry depends upon growth conditions. A matrix model of cell growth rate and biomass yield [16] predicts that an environment with high growth potential should favor asymmetry; that is, higher growth rate of new-pole cells (rejuvenation) at the expense of losing the biomass yield of old-pole cells. On the other hand, low nutrient levels and low cytoplasmic stress favor symmetrical cell division [7,9]. Division asymmetry requires a cytoplasmic damage agent causing protein aggregation, such as streptomycin [9], heat shock [13], or fluorescent reporter proteins [4,12]. Protein aggregates [12] and individual RNA-MS2-GFP complexes [14] migrate preferentially toward the old pole. For comparison, the experimental design, results, and statistical analysis of key investigations in this field are compiled in Table 1.
Previous reports have focused on the cytoplasm as the source of stress-induced cell division asymmetry. The effects of periplasmic and envelope stress conditions on cell division asymmetry have yet to be tested. An important periplasmic and envelope stress is low pH; for review, see [17,18]. The periplasm is a major site of acid damage because it equilibrates rapidly with the external environment [19]. E. coli K-12 growth at pH 4.5-6.0 requires many protective stress responses including amino-acid decarboxylase systems such as the Gad regulon, periplasmic acid chaperones, modulation of outer membrane proteins, and adjustment of proton flux by the electron transport system [17,20,21]. Nevertheless, bacteria maintain a high degree of cytoplasmic pH homeostasis. During growth in broth medium over a range of external (periplasmic) pH 6.0-7.5, E. coli cells maintain cytoplasmic pH at 7.6, in liquid culture [17,19,22], or at pH 7.1-7.3, in adherent colonies with perfusion [23]. The cell maintains a remarkably strong cytoplasmic pH homeostasis, despite allowing the periplasmic pH to equal the external pH. Thus, low pH permits a test of primarily periplasmic and envelope stress on cell division asymmetry.
[Table 1, excerpt: cells expressed an MS2 coat protein fused to GFP along with an RNA target plasmid; the migration of individual RNA-MS2-GFP complexes was tracked within cells. Cells were induced for MS2 and cultured on sealed LB-agarose at 37°C supplemented with antibiotics and arabinose inducer; after 1 h, cells were tracked for 2 h with images obtained every minute; cell division time was 1.5 h (4 generations tracked). The degree of biased polar segregation of aggregates was analyzed using the model of biased binomial partitioning of RNA-MS2-GFP complexes. Complex migration shows strong bias for the old pole. doi:10.1371/journal.pone.0144650.t001]
We hypothesized that cells cultured at low extracellular pH would show polar aging or cell division asymmetry, due to periplasmic acid stress. Here, we show that extracellular and periplasmic acid (pH 6.0) increases cell division asymmetry within an E. coli colony, compared to colonies cultured at pH 7.5. After six generations at external pH 6, the old-pole cell shows consistently longer division times than the new-pole cell. This cell division asymmetry occurs despite comparable cytoplasmic pH homeostasis within new-pole and old-pole cells. Cells cultured at pH 7.5, however, show no cell division asymmetry.
Strains and plasmids
We used JLS1105, a strain of E. coli K-12 W3110 that contains a pH reporter plasmid, pGFPR01, with pHluorin expression under P BAD [23]. Ratiometric fluorescence of pHluorin was used to measure cytoplasmic pH and observe cell fission. Ampicillin concentration was 50 μg/ml; the P BAD promoter was induced with 6 mg/ml L-arabinose.
Microscopy
Cells were observed using a 100x oil immersion objective lens on an Olympus BX61WIF-5 microscope. Filters D410 and D470 (Chroma Technology Corp) were used for pHluorin excitation with a xenon arc lamp (LB-LS/OF17; Sutter Instrument). Fluorescence of adherent cells was observed at excitation wavelengths 410 nm and 470 nm [23]. Fluorescence emission was captured at wavelengths 510-560 nm using filter HQ535. Images were captured and fluorescence intensities were recorded using Metamorph Metafluor software (Molecular Devices) (binning = 1, gain = 0). Wavelength exposure times were controlled for photobleaching. Excitation ratios (470/410) for each fluorescence image within a given experiment were recorded [23]. Adherent cells were located within a field and observed until the first division, so as to confirm viability. After the first successful division, the cells were tracked through at least five more generations of division. Focus was maintained manually, and time-lapse images of the bright field and fluorescent images were acquired at continually decreasing intervals as generation numbers increased. All images were labeled with time stamps and saved in chronological order.
Image and pole age analysis
Experiments recording at least six generations of growth were analyzed and used to construct cell lineages. During cell fission, each daughter cell inherits an old pole (preexisting in the parent) and a new pole (formed by septation). Each pole retains its initial code throughout the experiment, and each new division introduces a new color code. All phase contrast and fluorescent images within a given experiment were coded manually following a color system that indicated the generation of each pole on every cell as divisions accumulated over time (see lineage for specific color scheme). Fluorescence images captured in the Metafluor program were used concurrently to confirm division.
The coded results were used to create two half-lineages stemming from the first two progeny of the initial parent cell. For a given half-lineage to be included in analysis, at least 16 cells (half) had to reach the sixth generation. Lineages include subsequent cell divisions and also indicate the division time of each cell from its initial inception until it divides into two daughter cells. The ages of the two poles (generation number since original cell division) for an individual cell are indicated by two shaded boxes within the lineage. After a given lineage was complete, each cell was assigned a numerical value according to its location within the lineage. Each value was standardized to represent the coordinates of the cell within every half-lineage.
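The pole-age bookkeeping described above can be sketched with a small data structure; the class and field names are hypothetical, not taken from the authors' analysis code:

```python
# Hypothetical bookkeeping for pole ages, mirroring the color coding above:
# each cell records the age (in generations) of its two poles; at division
# every surviving pole ages by one and each daughter gains a brand-new pole.
class Cell:
    def __init__(self, pole_a_age=0, pole_b_age=0):
        self.pole_ages = [pole_a_age, pole_b_age]

    def divide(self):
        """Return the two daughter cells produced by septation."""
        aged = [age + 1 for age in self.pole_ages]
        return Cell(aged[0], 0), Cell(aged[1], 0)

    @property
    def oldest_pole(self):
        return max(self.pole_ages)

# Follow the old-pole line for two generations:
parent = Cell()
d1, d2 = parent.divide()  # both daughters inherit a 1-generation pole
g1, g2 = d1.divide()      # g1 keeps the now 2-generation-old pole
```

Tracking `oldest_pole` down a lineage reproduces the distinction between the line always inheriting the old pole and the line always inheriting a new pole.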
Statistical analysis
For each tracked colony, mean division times were calculated for the old-pole and new-pole cell lines. Thus, each colony generated paired replicates for half-lineages A and B. In effect, the cell division times were binned over several generations for the line containing the oldest pole and for the line containing the new pole at each division. This binning across generations gave us power to detect relatively subtle differences in division rate.
Over multiple colonies (15 half-lineages at pH 6.0, and another 15 at pH 7.5) the difference between old-pole and new-pole division times was tested for significance using the Wilcoxon signed rank test [24] with modification to include rank data with zeros [25]. Also, an alternative test was conducted, a computational resampling permutation test for paired replicates [26].
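A sketch of the resampling permutation test for paired replicates: under the null hypothesis, each old-pole-minus-new-pole difference is equally likely to carry either sign, so sign-flipped resamples give the null distribution of the summed difference. Details of the cited procedure [26] may differ:

```python
import random

def paired_sign_flip_test(diffs, n_resamples=10000, seed=1):
    """Resampling permutation test for paired replicates: under the null
    hypothesis of no old-pole/new-pole difference, each paired difference
    is equally likely to carry either sign.  Returns a two-sided p-value."""
    rng = random.Random(seed)
    observed = sum(diffs)
    extreme = 0
    for _ in range(n_resamples):
        stat = sum(d if rng.random() < 0.5 else -d for d in diffs)
        if abs(stat) >= abs(observed):
            extreme += 1
    return extreme / n_resamples
```

With consistently positive differences across lineages the p-value is small; with differences symmetric about zero it approaches 1.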
Cytoplasmic pH measurement
Cytoplasmic pH measurements were obtained using the pH-dependent ratio between excitation peaks of pHluorin [23]. For the standard curve, the pH gradient was collapsed using 40 mM methylamine hydrochloride and 40 mM potassium benzoate. LBK medium containing 100 mM buffer at pH 4.0-8.5 with a step size of 0.5 of a unit was fitted with a Boltzmann sigmoid best-fit curve, to correlate the intracellular 410/470 excitation ratio as a function of pH. The curve is used to set the false-color scale bar to the minimum and maximum values of fluorescence excitation ratio. Typical min and max ratios were 0.45 (pH 4.95) and 1.40 (pH 8.83).
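The Boltzmann sigmoid calibration can be sketched as below. The midpoint pH and slope values are assumptions for illustration; only the typical minimum and maximum ratios (0.45 and 1.40) come from the text:

```python
import math

def boltzmann(ph, r_min=0.45, r_max=1.40, ph_half=7.0, slope=0.5):
    """Boltzmann sigmoid: excitation ratio as a function of pH.
    r_min and r_max follow the typical values quoted in the text;
    ph_half and slope are assumed for illustration."""
    return r_max + (r_min - r_max) / (1 + math.exp((ph - ph_half) / slope))

def ratio_to_ph(ratio, r_min=0.45, r_max=1.40, ph_half=7.0, slope=0.5):
    """Invert the calibration curve to read cytoplasmic pH from a ratio."""
    return ph_half + slope * math.log((r_min - r_max) / (ratio - r_max) - 1)
```

In practice the four parameters would be obtained by least-squares fitting of the calibration curve; once fitted, `ratio_to_ph` converts each cell's measured excitation ratio into a cytoplasmic pH estimate.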
Fluorescence intensities were extracted from raw images acquired in Metafluor. Images were analyzed and plot profiles of intensity levels across individual cells were generated using ImageJ software (http://rsb.info.nih.gov/ij/, created by NIH). Plots of cells in each experimental condition were produced.
Results
Cell division asymmetry occurs at external pH 6.0 but not at external pH 7.5
Cell division times and inheritance patterns were recorded for 15 half-lineages at external pH 6.0, and for 15 half-lineages at external pH 7.5. Within each half-lineage, we identified the cell line retaining the old pole (old-pole cells, numbered 2, 4, 8, 16, 32). All old-pole cells show a pole color-coded yellow (Figs 3 and 4). We also identified the line of cells receiving a pole that was new at the previous division (new-pole cells, numbered 3, 6, 13, 26, 53). In each case, we compared equivalent cell lines: the line always inheriting the old pole, and the line always inheriting a newly formed pole. Comparison of binned cell division lines revealed cumulative differences. A histogram for the difference in average division times (old pole line minus new pole line) is plotted in Fig 5. In this histogram, a disproportionate number of cell line pairs with large positive differences in division time would be consistent with a polar aging effect (that is, slowing cell division rates for old-pole cells). The high number of zeros arises with several lineages that have the same total experiment duration time and number of cell divisions, within the resolution of our experiment.
The distribution of the differences of average cell division times was non-normal, requiring nonparametric (or "distribution-free") statistical procedures for pairwise comparisons, such as the Wilcoxon signed rank procedure [24]. For lineages of cells perfused at pH 6.0, the Wilcoxon test finds that the old-pole path has a significantly larger mean division time than the new-pole path (p = 0.017).
A concern with the standard Wilcoxon test is that it requires reducing the sample by discarding zeros. In rare cases, the large number of zeros can cause the Wilcoxon test to behave inconsistently. For this reason, we also applied a modified Wilcoxon procedure which assures a consistent test by including zeros in the ranking [25]. At pH 6.0, the modified Wilcoxon test found that the old-pole average division time (45.8 ± 4.3 min) exceeds the new-pole average division time (38.1 ± 3.0 min) by 20% (p = 0.017). Thus, the modified Wilcoxon test still found a longer mean division time for the old poles compared to the new poles.
Another nonparametric procedure, a resampling permutation test for paired replicates, uses computer simulation to derive the distribution of test statistics under the assumption of no difference of paired replicate means [26]. In the current study, the permutation test allows the determination of whether a particular sample (i.e. the sample at hand) is inconsistent with the assumption of no difference in mean division times for the old-pole and new-pole paths. After simulating 10,000 resamples, only 149 resamples show a stronger deviation from equal mean division rates than our sample at hand. This result leads to an estimated p-value of 149/10,000 = 0.0149. Like the two previous Wilcoxon tests, the permutation test confirms a larger mean division time for the old-pole cells perfused at pH 6.0.
For cells perfused at pH 7.5, the average division time of the old-pole cells was 45.0 ± 5.3 min, and the average of the new-pole cells was 45.4 ± 4.6 min. The difference (less than 1%) was not found to be significant by any of our tests (p values = 0.789, Wilcoxon; 0.750, modified Wilcoxon; 0.742, permutation test).

Many studies associate asymmetrical cell divisions with preferential accumulation in the cytoplasm of protein aggregates and inclusion bodies at old poles [12,13]; see also Table 1. We sought evidence for deposition of aggregates in our tracked colonies. To identify protein aggregation, we quantified the cytosolic fluorescence intensities across the entire length of the cell (Figs A and B in S2 File). pHluorin fluorescence should remain diffuse throughout the cytoplasm; any localized sharp increase or decrease in fluorescence values would be an indication of aggregates. Previously, we reported observing aggregates or inclusions using pHluorin expressed under a constitutive high-level P bsr promoter [23]. Here, we show the same effect (S3 File) in a strain containing the P bsr plasmid. Dark regions in the phase-contrast or in the fluorescence micrograph indicate inclusion bodies thought to be composed of misfolded pHluorin.
The arabinose-inducible strain of our present experiment, however, displays normal cell morphology, and never forms detectable inclusion bodies. In the present study, phase contrast and fluorescent intensity profiles were consistent across the entire cell (Figs A and B in S2 File). The only variation from the normal curve was a dip in the center of the profile of some cells, indicating septation. There were also slight decreases in fluorescence at the ends of the cell but no sharp peaks or valleys were detected. Thus, while we cannot rule out polar deposition of damaged protein in the cytoplasm, we found no positive evidence of it.
Cells in tracked colonies maintained cytoplasmic pH homeostasis
In E. coli, cell growth declines steeply with depression of cytoplasmic pH [27,28]. We tested whether cytoplasmic pH depression might be involved in pH-dependent polar aging. We used ratiometric pHluorin fluorescence to measure the cytoplasmic pH of all tracked cells. Over the course of the experiment in both conditions, the cytoplasmic pH decreased slightly as cells divided out past five generations. The average cytoplasmic pH of all cells following culture at pH 6.0 was 7.26 ± 0.05, and the average cytoplasmic pH of cells at external pH 7.5 was 7.23 ± 0.13. There was no significant difference between these conditions.
We also compared the cytoplasmic pH of the new-pole cells against old-pole cells within experiments. In colonies perfused at pH 6.0, the average cytoplasmic pH of new-pole cells was 7.28 ± 0.08 and the average pH of the old-pole cells was 7.26 ± 0.08. Although the difference showed significance upon paired t-test (P = 0.035), the cytoplasmic difference of two hundredths of a pH unit was small and unlikely to cause the observed differences in cell division rates. In colonies perfused at pH 7.5, old-pole cells had an average pH of 7.25 ± 0.13 and new-pole cells had an average pH of 7.19 ± 0.18. These values are consistent with previous reports of adherent cell pH at external pH 7.5 [23] and were not significantly different from each other. Thus, both at external pH 6.0 and at pH 7.5, cytoplasmic pH homeostasis was maintained at equivalent levels.
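Ratiometric pHluorin measurement maps a ratio of fluorescence intensities at two excitation wavelengths to a pH estimate through a calibration curve. The sketch below illustrates that mapping with entirely hypothetical calibration points; the real curve must be measured for the specific strain and imaging setup, and NumPy is assumed to be available.

```python
import numpy as np

def ratio_to_ph(ratio, cal_ratios, cal_ph):
    """Map pHluorin excitation ratios to pH by linear
    interpolation on a monotonic calibration curve."""
    # np.interp requires the calibration x-values in ascending order
    order = np.argsort(cal_ratios)
    return np.interp(ratio, np.asarray(cal_ratios, float)[order],
                     np.asarray(cal_ph, float)[order])

# Hypothetical calibration table: ratio rises with pH for pHluorin
cal_ratios = [0.4, 0.7, 1.0, 1.3, 1.6]
cal_ph     = [6.0, 6.5, 7.0, 7.5, 8.0]

# Made-up per-cell ratios, converted to per-cell pH estimates
cells = np.array([1.13, 1.16, 1.10])
print(ratio_to_ph(cells, cal_ratios, cal_ph))
```

In practice each cell's ratio would come from segmenting the two fluorescence channels and averaging intensity over the cell body before taking the quotient.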
Discussion
Cell division asymmetry and polar aging have important implications for development of bacterial populations, and for occurrence of antibiotic resistance. Several investigations have examined polar aging of cells in the presence of various external stressors including antibiotics, high temperature, and the overproduction of fluorescent reporters. In these experiments, the occurrence and the mechanism of cell division asymmetry have been controversial. Many researchers find that selective segregation of damaged proteins in these experiments is specific to the stressors used (Table 1), such as streptomycin exposure or protein overexpression [9]. An advantage of our approach is that by binning cell divisions over several generations, we could measure significant effects of an important stress condition (periplasmic acidity) despite concurrent fluctuations in growth patterns within a surface-adherent colony. The significance of our results was confirmed by two different nonparametric tests, the Wilcoxon signed rank procedure and the resampling permutation procedure, chosen because the distribution of the pairwise differences between replicate old-pole and new-pole lines is non-normal (Anderson-Darling p-value < 0.005, indicating strong deviation from normality) (Fig 5; doi:10.1371/journal.pone.0144650.g005).
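The paragraph above applies a Wilcoxon signed-rank test and a resampling permutation test to paired old-pole/new-pole differences. A minimal sketch of such a paired (sign-flip) permutation test follows, on made-up division-time differences rather than the study's half-lineage data:

```python
import numpy as np

def paired_permutation_test(diffs, n_resamples=10000, seed=0):
    """Two-sided sign-flip permutation test for paired differences.
    Under H0 (no old-pole/new-pole effect) each pairwise difference
    is equally likely to be positive or negative."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(diffs, dtype=float)
    observed = abs(diffs.mean())
    # Randomly flip the sign of each difference in every resample
    signs = rng.choice([-1.0, 1.0], size=(n_resamples, diffs.size))
    resampled = np.abs((signs * diffs).mean(axis=1))
    # Include the observed statistic so p can never be exactly 0
    return (1 + np.sum(resampled >= observed)) / (n_resamples + 1)

# Hypothetical old-minus-new division-time differences (min)
diffs = [4.1, 2.7, 5.0, -0.8, 3.3, 2.1, 4.6, 1.9, 3.8, 0.5]
print(f"p = {paired_permutation_test(diffs):.4f}")
```

Because the test statistic's null distribution is built directly from the data, no normality assumption is needed, which is exactly why it suits the non-normal differences reported above.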
The periplasm of E. coli is strongly affected by acid stress, and its regulation is not well understood. Our work adds new information on the role of acid stress in polar aging, at a pH value that primarily affects the envelope and periplasm [19]. We show that at pH 6, but not at pH 7.5 (where periplasmic and cytoplasmic pH are nearly equal), old-pole cells show a longer average generation time and a delay in cell division compared with new-pole cells. Our results are consistent with the interpretation that asymmetric cell division enables sequestering of damage resulting from periplasmic acid stress. The selective distribution may allow a cell line to exist longer and survive under increasingly toxic conditions [9,16].
The previous literature points to old-pole accumulation of cytoplasmic protein aggregates as the chief cause of polar aging [12,13]. While we cannot rule out a cytoplasmic effect during external acid stress, we find no positive evidence for significant cytoplasmic pH stress, nor for protein aggregation (Figs A and B in S2 File) in acid-stressed cells. Nevertheless, acid has important effects on the periplasm [29]. The acid-induced protein-folding chaperones HdeA and HdeB protect periplasmic proteins from aggregation and cellular damage [30][31][32]. It would be interesting to test whether hdeA and hdeB mutants show increased cell division asymmetry at low external pH.
The outer membrane proteins (OMPs) provide another possible source of pH effect. In the closely related Gram-negative pathogen Shigella, the OMPs show polar accumulation in order to modulate host actin-propulsion motility [33]. Thus, we propose that either periplasmic or outer-membrane proteins distribute asymmetrically and may cause polar aging.

S3 File. Inclusion bodies observed in E. coli cells expressing pHluorin from P bsr. Phase-contrast (top) and ratiometric fluorescence (bottom) images show the strain JLS1013, which expresses pHluorin under the constitutive promoter P bsr [23]. Cultures were incubated at 37°C with rotation to stationary phase (14 h) in LBK media supplemented with 50 μg/ml ampicillin and buffered with 100 mM MOPS at pH 7.5. The cells were suspended in 0.35% agarose and spread on the 40 mm coverslip as described under Methods. The chamber was perfused with LBK media buffered at pH 7.5 (MOPS) during observation. Inclusion bodies resulted in regions of decreased fluorescence (arrow). (PDF)

One half-lineage tracks cell A, and the other tracks cell B. The members of half-lineages A and B are delineated by white dotted lines in each panel with color-coded poles. Within each half-lineage, we identified the cell line retaining the old pole (old-pole cell, numbers 2, 4, 8, 16, 32) as well as the line receiving a new pole at each division (new-pole cell, numbers 3, 6, 13, 26, 53). Division times and inheritance patterns were recorded for all cells within each of 15 half-lineages at external pH 6.0; and for cells within 15 half-lineages at external pH 7.5.
Cell division asymmetry was assessed in E. coli colonies under agarose pads with continual perfusion at pH 6.0 (Fig 1) or at pH 7.5 (Fig 2). The first adherent cell division (generation 0) formed two daughter cells, each with a known old pole and a known new pole. Each of these
Fig 1 .
Fig 1. Tracking of individual cells perfused at pH 6.0. The panel above displays time-lapsed images captured during the tracking of cells in a colony for a representative experiment at pH 6.0. Cells are located manually within a field and then followed through six consecutive divisions. The first row of phase-contrast images shows the original images captured during the experiment. The color-coded images below are the phase-contrast images marked with relative pole ages (same color scale appearing in lineages). The bottom row shows the corresponding fluorescent images captured at the same time intervals as the phase-contrast images. Scale bar = 5 μm. Time stamps indicate elapsed time from the start of the experiment. doi:10.1371/journal.pone.0144650.g001
Fig 2 .
Fig 2. Tracking of individual cells perfused at pH 7.5. The panel above displays time-lapsed images captured during the tracking of cells in a colony for a representative experiment at pH 7.5. Images were obtained and analyzed as in Fig 1.
doi:10.1371/journal.pone.0144650.g002
differences in growth rate at low pH, despite the "noise" of individual fluctuations arising from individual cell positions within a colony.
Fig 3 .
Fig 3. Cell half-lineages A and B in colonies perfused at pH 6.0. Half-lineages A and B each include six generations of cells dividing. Each box represents a single cell at that time point. Individual poles of the cell and their corresponding pole age are labeled according to the color scale in the upper right-hand corner. Each cell in the half-lineage is given a distinct number (left of the box). Cell numbers are standardized across all lineages. Colors correspond to the relative polar ages of each cell. Time (min) at each box indicates the division time of that cell, the time from initial existence of the cell until the point where it divided into two daughter cells. Time (min) in red beneath the final division indicates the time the cell existed until the experiment was ended. doi:10.1371/journal.pone.0144650.g003
Fig 4 .
Fig 4. Cell half-lineages A and B in colonies perfused at pH 7.5. Half-lineages include six generations of cells dividing. Analysis was conducted as in Fig 3.
Fig 5 .
Fig 5. Differences in mean division rates of old-pole and new-pole cell lines. The stacked histograms represent the differences in average division times (old-pole cell line minus new-pole cell line) for each half-lineage. Mean division times for the old-pole line and new-pole line were calculated for each lineage.

Intensity plot profiles of old-pole and new-pole cells at pH 7.5. Cells possessing the oldest pole (A) and the newest pole (B) were scanned end to end, and fluorescence was measured as for Fig A in S2 File. (PDF)
Table 1 .
Literature on polar aging in Escherichia coli colonies.
| 2016-05-02T18:17:47.175Z | 2015-12-29T00:00:00.000 | {
"year": 2015,
"sha1": "3806aee9ffd988f006028ae7f5f96a8609ca6425",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0144650&type=printable",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "3f85981225c9b47938eb47b3364e143f8b50d4d1",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
268083378 | pes2o/s2orc | v3-fos-license | Prospective study examining the impact of cerebral angiography on quantitative pupillometer values in the interventional radiology suite
Objectives The purpose of this pilot study was to obtain baseline quantitative pupillometry (QP) measurements before and after catheter-directed cerebral angiography (DCA) to explore the hypothesis that cerebral angiography is an independent predictor of change in pupillary light reflex (PLR) metrics.
Design This was a prospective, observational pilot study of PLR assessments obtained using QP 30 min before and after DCA. All patients had QP measurements performed with the NPi-300 (Neuroptics) pupillometer.
Setting Recruitment was done at a single-centre, tertiary-care academic hospital and comprehensive stroke centre in Dallas, Texas.
Participants Fifty participants were recruited undergoing elective or emergent angiography. Inclusion criteria were a physician-ordered interventional neuroradiological procedure, at least 18 years of age, no contraindications to PLR assessment with QP, and nursing transport to and from DCA. Patients with a history of eye surgery were excluded.
Main outcome measures Difference in PLR metrics obtained from QP 30 min before and after DCA.
Results A statistically significant difference was noted in the pre and post left eye readings for the minimum pupil size (a.k.a., pupil diameter on maximum constriction). The mean maximum constriction diameter prior to angiogram of 3.2 (1.1) mm was statistically larger than after angiogram (2.9 (1.0) mm; p<0.05); however, this was not considered clinically significant. Comparisons for all other PLR metrics pre and post angiogram demonstrated no significant difference. Using change in NPi pre and post angiogram (Δpre=0.05 (0.77) vs Δpost=0.08 (0.67); p=0.62), we calculated the effect size as 0.042. Hence, detecting a statistically significant difference in NPi, if a difference exists, would require a sample size of ~6000 patients.
Conclusions Our study provides supportive data that in an uncomplicated angiogram, even with intervention, there is no effect on the PLR.
INTRODUCTION
Assessment of the pupillary light reflex (PLR) is widely recognised as a fundamental tenet of the modern neurological examination for all critically ill patients.1 2 This simple noninvasive assessment can detect neurological deterioration, retinal axonal loss and autonomic dysfunction in critically ill patients but suffers from poor inter-rater reliability when assessed by clinicians.3 4 Although PLR assessment by human observers has limited reliability,5 quantitative pupillometry provides high reliability, precision and accuracy to provide information on acquired brain injury.6 7 Commensurate with the increasing pace at which pupillometry is being adopted into practice,8 9 there is a need to fully understand the limits of this technology. Specifically, there are no data examining how invasive neurological procedures impact pupillometer measurements. We propose that angiography may exert a direct effect on PLR measurements based on the invasive nature of the procedure, along with prior studies suggesting that global cerebral dysfunction may result from intra-arterial iodinated contrast administration (such as contrast-induced encephalopathy) leading to diffuse cerebral oedema that can potentially be diagnosed using pupillometry.10 The purpose of this pilot study is to obtain baseline pupil measurements before
STRENGTHS AND LIMITATIONS OF THIS STUDY
⇒ Obtaining pupillometer values prospectively before and after angiography is a recognised strength of the study design.
⇒ The sampling strategy used was pragmatic and limited by staff availability.
⇒ Sampling a diverse cohort has the advantage of providing greater generalisability.
⇒ The results should be interpreted with caution as they were obtained from a single centre where quantitative pupillometry is standard of care.
Open access
and after catheter-directed cerebral angiography (angiography) to explore the hypothesis that angiography is an independent predictor of change in PLR metrics.
Although developed roughly 100 years ago, the success of Seldinger's work in the 1950s spawned a steady increase in the type, frequency and availability of angiography globally.11 [12][13][14][15] What remains unknown is whether changes in PLR metrics can be attributed to periprocedural events or whether these changes reflect new or worsening neurological deficits. Addressing this fundamental question will provide clinicians with knowledge that directly impacts practice and will provide researchers with data to determine the degree to which periprocedural events are confounding variables in pupillometry-related research.
METHODS
This is a single-centre, prospective, observational pilot study of PLR assessments obtained using a pupillometer both 30 min before (to establish baseline) and after angiography (to assess for neurological deterioration). All procedures were per the ethical standards set forth by the Helsinki Declaration of 1975 and its subsequent addendums. The study was approved by the university institutional review board, which determined that patients were exempt from written consent because routine serial PLR assessment with pupillometers is the standard of care at our institution.
Inclusion criteria were a physician-ordered interventional neuroradiological procedure. This includes patients scheduled for procedures including, but not limited to, angiography, embolisation, biopsy, vertebroplasty and thrombectomy, and those admitted to the neurosciences intensive care unit (ICU) with a neurological or neurosurgical diagnosis; at least 18 years of age; no contraindications to PLR assessment with a pupillometer; and nursing transport to and from angiography. Patients with a history of eye surgery were excluded. Pragmatic recruitment was used to identify the first 50 eligible patients. All patients had pupil measurements performed with the NPi-300 (Neuroptics) pupillometer, with assessments being performed in the ICU and in the angiography suite. All pupillometer data were downloaded directly from the device to an electronic spreadsheet. Pupillometry data before surgery began were obtained in the holding area in our angiography suite or the neurointensive care unit. Data after surgery were taken in the post-op area or the neuro ICU. Lighting conditions were similar but were not recorded as part of our data collection. Prior studies suggest that ambient lighting conditions are unlikely to be a significant factor affecting the accuracy of pupil size or reactivity measurements.6 Demographic and baseline data were extracted from the electronic medical record and entered into an electronic case report form.

Data were uploaded into SAS V.9.4 for Windows for analysis. This observational study reported the variation in paired pupillary readings before and after angiography. Data are reported as mean (SD) or frequency (per cent) unless noted otherwise. Paired t-test models were constructed to evaluate the differences between pre and post angiogram pupil metrics. The Fisher exact test was used to compare categorical variables. A p value of <0.05 was considered statistically significant for all analyses.
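The statistical plan above (paired t-tests for continuous pupil metrics, Fisher exact for categorical variables, α = 0.05) was run in SAS; an equivalent sketch in Python with SciPy, using made-up readings rather than the study's data, might look like this:

```python
import numpy as np
from scipy import stats

# Hypothetical paired NPi readings, pre and post angiogram
pre  = np.array([4.5, 4.2, 4.7, 4.4, 4.6, 4.3, 4.8, 4.5])
post = np.array([4.4, 4.3, 4.6, 4.4, 4.5, 4.3, 4.7, 4.6])

# Paired t-test: is the mean pre-post difference zero?
t, p = stats.ttest_rel(pre, post)
print(f"t = {t:.3f}, p = {p:.3f}")

# Fisher exact test for a 2x2 categorical comparison, e.g. counts
# of abnormal PLR (NPi < 3.0) before vs after (counts made up)
table = [[3, 47], [4, 46]]
odds, p_fisher = stats.fisher_exact(table)
print(f"Fisher exact p = {p_fisher:.3f}")
```

The paired design is what matters here: each patient serves as their own control, so the test is on within-patient differences rather than on two independent groups.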
Patient and public involvement Patients and the public were not involved at any stage.
RESULTS
Pupillometer readings were obtained within 30 min before the angiogram in all 50 patients, and 44 (88%) had follow-up readings obtained within 30 min of completing the angiogram. There were no procedural complications, and none of the patients in our sample had clinically significant neurological changes during the perioperative period up to 2 hours post angiogram. Nurses were more likely to obtain post-angiogram pupil measurements for patients with lower baseline NPi values of 4.3 (0.38) compared with those with higher mean NPi values at baseline (4.6 (0.16); p<0.05).
The QP values met the assumption of being approximately normally distributed, and therefore the primary null hypothesis was explored using paired t-tests. Models were constructed to explore each PLR metric (table 2). The only statistically significant difference was noted in the pre and post left eye readings for the minimum pupil size (a.k.a., pupil diameter on maximum constriction). The mean maximum constriction diameter prior to angiogram of 3.2 (1.1) mm was statistically larger than after angiogram (2.9 (1.0) mm; p<0.05); however, this was not considered clinically significant. Comparisons for all other PLR metrics pre and post angiogram demonstrated no significant difference (table 2). Using change in NPi pre and post angiogram (Δpre=0.05 (0.77) vs Δpost=0.08 (0.67); p=0.62), we calculated the effect size as 0.042. Hence, detecting a statistically significant difference in NPi, if a difference exists, would require a sample size of ~6000 patients.
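The ~6000-patient figure can be reproduced with the standard normal-approximation sample-size formula for a paired comparison. A sketch, assuming a two-sided α of 0.05 and 90% power (the power level is our assumption; the text reports only the effect size of 0.042):

```python
import math
from statistics import NormalDist

def paired_sample_size(effect_size, alpha=0.05, power=0.90):
    """Required n for a paired test, normal approximation:
    n = ((z_{1-alpha/2} + z_{power}) / d) ** 2
    where d is the standardized effect size (Cohen's d)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided alpha
    z_b = NormalDist().inv_cdf(power)          # desired power
    return math.ceil(((z_a + z_b) / effect_size) ** 2)

print(paired_sample_size(0.042))  # ≈ 5957, close to the ~6000 quoted
```

The effect size of 0.042 itself is a standardized mean difference (mean paired NPi change divided by its standard deviation), which is why such a tiny effect drives the required sample into the thousands.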
DISCUSSION
Pupil constriction is largely driven by mechanisms such as smooth muscle contraction, which is not impacted by normal-dose neuromuscular blockade agents and normal-dose narcotic agents.17 18 We expected QP to identify post-angiography patients developing either cerebral oedema (as anecdotally reported in contrast-associated encephalopathy19 20) or expansion of intracranial lesions.21 Research examining pupil reactivity in the perioperative and sedated patient is inconclusive, with some studies suggesting that changes are associated with nociceptive afferent stimulation,17 22 and others finding that pupil size change is the most common PLR effect.22 Jolkovsky et al23 found that although there were significant differences in pupil size and constriction velocity, compared with healthy controls, intoxicated patients had no difference in NPi values.
The findings must be taken in context with the intent of angiography to diagnose or treat cerebral ischaemia. Delayed neurological improvement represents a phenomenon in which there is an absence of neurological improvement during the immediate period following acute ischaemic stroke treatment.24 Although our study did not collect data on delayed neurological improvement following the angiogram, changes in PLR have repeatedly been found to be associated with neurological recovery and decline in the periprocedural patient.25 26 Quantitative pupillometers have the advantage of high reliability for a subtle neurological exam finding (changes in PLR), which has been suggested as a biomarker of neurological recovery or decline.27 28 PLR recovery after cardiac arrest or global cerebral hypoxaemia predicts recovery, but the temporality of this relationship is not well defined.29 The results add to the body of evidence that a well-designed large-scale clinical trial is needed to resolve conflicting reports on the use of pupil reactivity as a biomarker of neurological injury. Recent evidence demonstrates that NPi is correlated with intracranial pressure change in multiple studies,[32][33][34][35][36] but a recent study found no correlations between changes in pupil metrics and outcomes of cerebral autoregulation in critically ill patients.37 Therefore, there is little consensus on the use of measurement of pupillary function as a technique of non-invasive neuromonitoring follow-up in critically ill patients after undergoing an angiogram. Changes in the PLR can be associated with early detection of intracranial disorders, and angiography is one test to identify these maladies.38

The finding of a difference in maximum constriction size (despite being only in the left eye, and the difference being ~0.3 mm) is worthy of discussion. Pupil diameter and reactivity depend on the intact and coordinated functioning of the sympathetic and parasympathetic nervous systems, including the hypothalamus, brainstem, and upper cervical ganglions and their output.39 It is not unreasonable that the statistically significant difference simply represents a spurious finding. Constriction diameter was not the primary hypothesis, and controlling for multiple testing would result in failure to reject the null. Another possibility is that the nurses were purposeful in timing their assessments with the lights on, and the brighter lights resulted in more constriction.40

Our study provides supportive data that in an uncomplicated angiogram, even with intervention, there is no effect on the PLR. This may be difficult to quantify, as this was a pilot study, so the numbers of patients with specific pathologies were small. However, with our preliminary data and an effect size calculation of 0.042, a population of 5959 patients would be required to detect any subtle differences. A previous study has also supported the accuracy and stability of PLR metrics when no neurological change has occurred.41 By assessing the discrete elements of the PLR before and after angiography, our study provides insights for future research, as the values did not differ statistically.
Limitations
There are several limitations to our study that might affect its generalisability. First, this is a single-centre study with a limited cohort of patients that might be different from other centres. Second, a 30-min time window was selected in anticipation that patients were at risk for acute neurological worsening, and 24% of the readings were outside the 30-min window due to logistics involved in patient transport. Room turnover times are a common metric for measuring operating room and procedural efficiency and may have impacted the ability of staff to focus on postprocedural neurological assessment.42 Third, this is an observational study, and we are limited by our initial patient sampling, as patients were screened solely for the procedure (ie, angiography) and not for pathology. Our data set did include varying pathologies, such as haemorrhage, stroke and aneurysm, but there were both emergent and elective cases that might undermine the effect of certain pathologies if they are not represented in sufficient numbers.
Another limitation is our inability to exclude the effects of sedative medications used intraoperatively that may have a pharmacological effect on the pupils.
CONCLUSION
These results demonstrate that a routine angiography is not associated with a change in PLR metrics measured with quantitative pupillometry.The study also provides compelling evidence in favour of the feasibility of using pupillometers in the angiography suite.Therefore, any and all postprocedural changes in the PLR exam should be considered indicative of possible new or worsening neurological deficit until ruled out through comprehensive inquiry.These data should serve as baseline for such future and ongoing research in that they suggest there is no need to control for angiography in multivariable analyses where PLR metrics are an independent or dependent variable.
Twitter DaiWai M Olson @DaiWaiOlson

Contributors All authors have contributed to and approved of the manuscript. Design and conceptualisation: BN, DMO, SES and VA. Data analysis: CR, FDA and DMO. Interpretation of the results: BN, JLM, CR, FDA, DO and VA. Manuscript draft and preparation: BN, DMO, VA and BR. Critical revisions: all authors. BR accepts full responsibility for the work and/or the conduct of the study, had access to the data, and controlled the decision to publish.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Patient consent for publication Not applicable.
Table 1
Baseline characteristics of patients
Table 2
Pupil metrics before and after cerebral angiogram. *The pupillary light reflex (PLR) is considered abnormal if the NPi value is <3.0.
| 2024-03-02T06:17:34.916Z | 2024-02-01T00:00:00.000 | {
"year": 2024,
"sha1": "f03e2378a49c544a437409835385bfa7809f08fb",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "75fc9e9e47b66c2e1299204ca24924d3b831abb7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18866688 | pes2o/s2orc | v3-fos-license | Efficient vitreolysis by combining plasmin and sulfur hexafluoride injection in a preclinical study in rabbit eyes
Purpose To investigate the efficacy of plasmin and sulfur hexafluoride (SF6) on the vitreoretinal junction, as well as the long-term safety in the eye and effect on the recipient’s general health after application in the eye.
Methods The study design included four groups of rabbits with three animals in each group. Group 1 received an intravitreal injection (IVI) of plasmin and SF6 in the right eye; group 2 received an IVI of plasmin in the right eye; group 3 received an IVI of SF6 in the right eye; and group 4 received an IVI of balanced salt solution in the right eye, which served as a normal control. Long-term safety (up to approximately three months) after plasmin and/or SF6 injection was evaluated morphologically by clinical examination, histology, and immunohistochemistry, and functionally by electroretinograms (ERGs). General health evaluations after intravitreal injection included the assessment of weight gain, food intake, body temperature, and complete blood count analysis.
Results Plasmin plus SF6 injection resulted in complete posterior vitreous detachment (PVD), whereas plasmin or SF6 injection alone resulted in only partial PVD. Balanced salt solution did not induce PVD. Eighty days after intravitreal injection, there were no major differences among the eyes of the three groups of animals compared with the normal control animals upon clinical evaluation, or regarding retinal morphology and ERGs. The lenses examined remained clear for up to 80 days following the intravitreal injection of plasmin plus SF6, except one eye in the plasmin-treated group. ERGs decreased transiently one week after intravitreal injection in groups 1 through 3, but animals recovered fully to normal status afterward. General health was not affected after the injection of plasmin plus SF6.
Conclusions Efficient vitreoretinal separation could be achieved, and an acceptable long-term safety profile was noted after plasmin plus SF6 injection in the eye.
No major ocular toxicity or systemic toxicity was found after the injection of plasmin plus SF6. These results provide good support for the future clinical use of plasmin plus SF6 for treatment of a variety of vitreoretinopathies.
Vitreous traction on the retina can be a significant pathological factor in certain retinopathies, including central retinal vein occlusion, pediatric vitreoretinopathy, diabetic retinopathy, age-related macular degeneration, and cystoid macular edema [1][2][3][4][5][6][7]. Studies have shown that patients with posterior vitreous detachment (PVD), which is characterized by a lack of vitreous traction on the retina, had a better visual prognosis in certain retinopathies such as retinal vessel occlusion and age-related macular degeneration [4,7]. Relief of vitreous traction by the induction of PVD is theoretically helpful for these retinopathies.
Plasmin is a serine protease that mediates the fibrinolytic process and modulates the extracellular matrix [8]. It hydrolyzes a variety of glycoproteins, including laminin and fibronectin, both of which are present at the vitreoretinal interface and are thought to play a key role in vitreoretinal attachment [9,10]. Plasmin enzyme has been proven to cause vitreous liquefaction and PVD [11][12][13][14][15][16][17]. Pharmacological vitreolysis with microplasmin, a truncated form of plasmin, increases vitreous diffusion coefficients [18] and oxygen levels in the vitreous [19]. Therefore, plasmin might be useful in treating a variety of retinopathies because it reduces vitreous traction and retinal ischemia.

Correspondence to: Chi-Chun Lai, Department of Ophthalmology, Chang Gung Memorial Hospital, Taoyuan, Taiwan; Phone: 886-3-3281200; FAX: 886-3-3287798; email: chichun.lai@gmail.com
In cases without cellular attachment in the vitreoretinal junction, plasmin injection could weaken vitreoretinal adhesion and result in PVD. However, in cases with tough vitreoretinal adhesion with a cellular component, plasmin injection alone results in partial PVD [20]. This condition is termed "anomalous PVD," and is associated with a worse outcome. Therefore, the plasmin enzyme was used clinically, mainly as an adjuvant, to reduce vitreoretinal adhesion during vitrectomy surgery [11][12][13][14][15][21][22][23]. With the application of a plasmin enzyme, PVD was less traumatic than when mechanical methods alone were used [24]. To increase its efficiency in cleaving vitreoretinal adhesion, Sebag was the first to propose the concept of combination pharmacologic vitreolysis therapy [25]. Combination therapy could work more effectively than a single agent in conditions characterized by firm vitreoretinal adhesion.
Several animal studies have shown a good safety profile for plasmin when used in the eye [20,[26][27][28]. Most of the studies used plasmin as a single agent to induce PVD and were conducted over short study periods. Very few studies have addressed the combination vitreolysis technique [20,29]. The long-term safety of plasmin combined with other agents in the eyes remains unknown. The purpose of this study was to investigate the long-term effect of plasmin plus sulfur hexafluoride (SF6) on vitreoretinal adhesion, ocular safety, and the recipient's general health after application in the eye.
Evaluation:
The effect of PVD produced by the plasmin enzyme was investigated by transmission and scanning electron microscopy (TEM and SEM, respectively). The long-term safety profile of an intravitreal plasmin injection was evaluated by examining morphological as well as functional changes in the retina. Clinical examinations included slit lamp examinations, indirect ophthalmoscope, and fundus photos. Morphological studies included retinal histology and immunohistochemistry (IHC) by various antibodies that recognized specific layers of retinal cells. Functional studies of the retina used electroretinograms (ERGs) to identify functional changes in the retina after plasmin injection. An evaluation of general health after intravitreal injection included the assessment of weight gain, food intake, body temperature, and complete blood count (CBC) analysis. Animals: Japanese white rabbits (1.5-1.7 kg) were used in this study. The animals were purchased from the Animal Health Research Institute, Council of Agriculture (Executive Yuan, Jhunan, Taiwan) and were housed in the animal care facilities of the Chang Gung Memorial Hospital, Taoyuan, Taiwan. Animal handling was performed in accordance with the regulations at Chang Gung Memorial Hospital for the use of experimental animals and the Association for Research in Vision and Ophthalmology statement for the use of animals in Ophthalmic and Vision Research. The execution of this project followed the guidelines and standards of Good Laboratory Practice. Animals in group 4 received a BSS injection in the right eye and no injection in the left eye. Each group consisted of three animals. The rabbits were anesthetized with intramuscular injections of 1.5 mL/kg of an equal volumemixture of 2-(2.6-xylidino)-5.6-dihydro-4H-1.3thiazine-hydrochloride, methylparaben (Rompun; Bayer AG, Leverkusen, Germany) and 50 mg/ml ketamine (Ketomin; Nang Kuang Pharmaceutical Co., Tainan, Taiwan). 
Topical anesthesia (Alcaine; Alcon-Couvreur, Puurs, Belgium) was administered to reduce the animals' discomfort [30].
The intravitreal injection was performed 2 mm posterior to the limbus while the eye was being observed under a surgical microscope (M691; Wild Heerbrugg, Heerbrugg, Switzerland), with the help of a prism lens. Care was taken to avoid damage to the lens and the retina during the injection.
Clinical observations and electrophysiological examination:
Slit-lamp (SL-15; Kowa, Tokyo, Japan) examinations and indirect ophthalmoscopy (Omega 500; Heine, Herrsching, Germany) were performed. The degree of conjunctival congestion was evaluated by the Cornea and Contact Lens Research Unit (CCLRU) grading scale [31]. The severity of conjunctival redness ranged from very slight (grade 1) to slight (grade 2), moderate (grade 3), and severe (grade 4). External photos and color fundus photos were obtained to document the status of the cornea, conjunctiva, lens, vitreous, and retina after injection of plasmin with or without SF6. For ERG recordings, the rabbits were anesthetized, their pupils were dilated, and a topical anesthetic was applied to the cornea. After 1 h of dark adaptation, ERGs were recorded with an ERG recording system (RETIport ERG; Roland Consult, Brandenburg, Germany) at baseline and at 1, 3, 7, 14, 28, and 80 days after the intravitreal injections. ERGs were recorded with a contact lens electrode that contained light-emitting diodes as a stimulator and that was connected to an electrode on the forehead. A ground electrode was attached to the ear. Amplitudes and implicit times of a- and b-waves were evaluated. These protocols have been published previously [30]. The luminance of the stimulus was 3 cd/m², with a duration of 10 ms. Scotopic 0-dB ERGs were recorded with a standard white flash and a dark background. Twenty responses elicited by identical flashes applied at 10-s intervals were averaged in the dark-adapted state.
Histological and electron microscopic examination: Eighty days after intravitreal injection, all animals were simultaneously sacrificed by an overdose of anesthetics. After enucleation, the eyes were opened with a razor blade, which was used to penetrate the vitreous adjacent to the pars plana to ensure rapid penetration of the fixative. Care was taken to avoid damage to the adjacent retina and lens. Morphological examinations by histology, SEM, and TEM were performed in each eye. One third of the retina was sectioned for IHC, one third for SEM examination, and one third for retinal histology and TEM examination. Therefore, each eye was used for all three morphological analyses.
For TEM, after fixation in a mixture of 3% glutaraldehyde and 2% paraformaldehyde, the tissue was dehydrated in an ethanol series, postfixed in 1% osmium tetroxide, and embedded in epoxy resin (Epok 812; Oken, Tokyo, Japan). Semithin sections were stained with 0.5% toluidine blue. Ultrathin sections were stained for contrast with 8% uranyl acetate and lead citrate, and were analyzed using electron microscopy (H7500; Hitachi, Tokyo, Japan). The observers were blinded to group classification when they interpreted the morphological data. SEM was performed to verify the TEM findings.

Immunohistochemistry with confocal microscopy: IHC was used to visualize cells in different retinal layers 80 days after intravitreal injection. The integrity of the intermediate filament proteins of Müller cells and of the rods was verified. A protocol that was published in a previous study was used with modifications [30]. In brief, after the cornea, lens, and vitreous were removed, the eye cup was cut into three pieces, and one of these pieces was used for the IHC study. The retinas were fixed in 4% paraformaldehyde overnight. They were then incubated in 30% sucrose (USB Corp., Cleveland, OH) overnight at 4 °C, embedded in an optimal cutting temperature compound (Sakura Finetek, Torrance, CA), and sectioned with a microtome cryostat (CM3050S; Leica, Wetzlar, Germany). The sections were placed on slides that had been coated with silane (Muto Pure Chemicals, Tokyo, Japan) to promote adhesion of the sections to the glass surface. Samples were blocked with 1% BSA (in PBS) for 60 min after washing in PBS. After removal of the blocking serum, the following primary antibodies were added: antivimentin (1:1; Dako, Glostrup, Denmark) and antirhodopsin (1:50; Santa Cruz Biotechnology, Santa Cruz, CA). IgG-fluorescein isothiocyanate was used as a secondary antibody. The resulting sections were then viewed under confocal microscopy (TCS SP2; Leica, Wetzlar, Germany).
Body temperature, food intake, and weight gain: Body temperature, food intake, and weight gain were viewed as indicators of general health in the rabbits. Body temperature and food intake were measured on a daily basis, and the weight of each animal was measured at least every 2 weeks and on the day of anesthesia.

Complete blood count: Before and after the experiment, a CBC was taken to detect signs of infection, anemia, or abnormalities in the blood. Blood from the ear was collected for analysis.

Statistical evaluation: We compared ERG results between the study eyes and the control eyes before and after intravitreal injection and among the different treatment groups. The results among the different groups were compared using ANOVA and the Dunnett post hoc test. The Wilcoxon signed-rank test was used to compare ERG results at baseline and after treatment, as well as to compare results between study eyes and control eyes. Amplitudes and implicit times of the a- and b-waves were analyzed by group mean comparisons. CBC data were compared before and after intravitreal injection and among the different treatment groups. p<0.05 was considered statistically significant.
RESULTS
Clinical examinations and electroretinograms: At 7, 14, 28, and 80 days after intravitreal injection, animals were anesthetized, and external photographs were taken with a digital camera. The degree of conjunctival congestion was assessed by the CCLRU grading scale. Signs of wound infection and corneal epithelial defects were recorded. The degree of conjunctival congestion was similar between the experimental and control eyes. After surgery, moderate to severe conjunctival congestion lasted for approximately two weeks, after which time the redness decreased gradually. There was no significant difference in the degree of conjunctival redness assessed by the CCLRU grading scale up to 80 days after surgery among the treatment groups. None of the eyes developed corneal or conjunctival infection by the end of the experiment. A mild cellular reaction was noted in the anterior chamber after intravitreal injection of plasmin with or without SF6, but the reaction cleared in all eyes within one week. The lens remained clear up to 80 days following the intravitreal injection of plasmin plus SF6, except for one eye in the plasmin-treated group. Dilated fundus examinations revealed no signs of vitreous opacity, retinal detachment, vessel occlusion, or retinal necrosis in any animal (Figure 1). ERG data showed a transient decrease in a- and b-wave amplitude within one week after the injections in groups 1, 2, and 3. At one week after the injections, the amplitude returned to baseline. Representative ERG b-wave changes are shown in Figure 2.

Histology of the retina: There were no noticeable differences in the histology of the retina among the three treatment groups by the end of this study (Figure 3). The retinal morphology of eyes treated with plasmin with or without SF6 was similar to that of the control eyes. Some vacuolar changes could be seen in all groups of animals, even in the normal control group.
However, no significant morphological changes, including retinal structure deformation, thinning, or retinal layer loss, were seen in a specific group of animals. Therefore, such morphological changes may have been associated with the tissue handling or the normal physiologic apoptosis or aging changes of the retina.
TEM was used to investigate the presence of vitreoretinal adhesion and to characterize the ultrastructure of the different retinal layers after treatment with plasmin with or without SF6. Dense collagen fibrils were still attached to the internal limiting membrane (ILM) in the control eyes. In contrast, eyes treated with plasmin or SF6 alone showed a cleaner ILM with limited adhesion of collagen fibrils. Eyes treated with plasmin plus SF6 were free from adhesion of collagen fibrils (Figure 4). These results showed that BSS-treated eyes did not undergo PVD. Partial PVD was observed in the plasmin injection group (group 2) and the group receiving the long-acting gas, sulfur hexafluoride (group 3). However, the plasmin enzyme plus the long-acting SF6 gas (group 1) produced complete PVD. The ultrastructure of the retinal pigment epithelium, the outer segments of photoreceptors, the inner segments of photoreceptors, the mitochondria in the inner segments, and the cells in the outer nuclear layer showed normal morphology in all four treated groups. SEM and TEM findings were consistent.
Immunohistochemistry: IHC using antibodies that recognized different layers of cells in the retina was performed to compare treated and control eyes. We did not find any major differences in the IHC results, indicating that the specific cells within the retina were not affected by treatment with plasmin with or without SF6 (Figure 5).
Body temperature, food intake, weight gain, and complete blood count: We did not find any differences in body temperature, food intake, or weight gain among rabbits receiving plasmin plus SF6, plasmin alone, or SF6 alone. There were no major differences in the CBC before and after intravitreal injection among the three groups of animals (data not shown).
DISCUSSION
It is important to assess the long-term safety of plasmin plus SF6, including ocular safety and systemic safety, before beginning the clinical application of these agents. Our data also showed good efficacy of PVD induction by plasmin plus SF6. Our clinical observations, as well as our morphological and functional studies on the eye, revealed good safety profiles of plasmin plus the long-acting gas SF6 when injected into the eye. The rabbits' general health was not affected by the injection of plasmin plus SF6. This study has provided important preclinical information regarding the application of plasmin plus SF6 in treating a variety of retinopathies.
Plasmin plus other agents or surgery is necessary to produce a complete PVD in clinical situations with tight vitreoretinal adhesion. In the clinical trial of vitreomacular traction treated with intravitreal microplasmin (ThromboGenics Ltd., Dublin, Ireland), a truncated form of plasmin, PVD occurred in 8% to 44% of cases, depending on the dose of microplasmin used [32]. Repeated injection of microplasmin (up to three times) increased the incidence of PVD to 58% [32]. These data suggest that even with the use of plasmin and after multiple enzyme injections, the induction of PVD did not reach 100%. Moreover, the plasmin enzyme alone could not produce a complete PVD in some retinal disorders with firm vitreoretinal adhesion, such as diabetic retinopathy [20]. Plasmin injection alone in diabetic retinopathy weakens the vitreoretinal adhesion and most often results in only partial PVD, which is an even more dangerous situation with a worse clinical outcome [33,34]. Additional procedures, such as vitrectomy or combination with other agents, are needed to induce a complete PVD in retinopathies with prominent cellular proliferation at the vitreoretinal junction. SF6 injection alone has been shown to induce PVD clinically [35,36]. Combining plasmin with the long-acting gas SF6 might be more effective in cases with particularly strong vitreoretinal adhesion and may induce a complete vitreoretinal separation, unlike that achieved by the use of a single agent. In addition, this procedure is less aggressive than combining plasmin and vitrectomy.
Although the half-life of plasmin is short, its effects could last for a long time. For instance, cataract formation might not be visible immediately after the injection of plasmin but may appear later. Therefore, the long-term effect is important if plasmin and SF6 are to be used to treat a variety of vitreoretinopathies with tougher vitreoretinal adhesion in the future. Hikichi et al. [29] were the first to use plasmin and SF6 to induce PVD in rabbit eyes. They found effective PVD induction after the use of plasmin combined with SF6. They observed the short-term effect of combined vitreolysis by plasmin and SF6 for up to seven days; we extended the follow-up to almost three months. In addition to the ocular safety evaluation, we added an evaluation of systemic safety. Our findings support the potential of combining plasmin and SF6 because of the desired clinical effect and the good safety profile associated with such treatment.
In the current study, we observed transient ERG changes following the injection of plasmin. After one week, the ERG returned to baseline levels. We hypothesize that this phenomenon is associated with the pressure changes following intravitreal injection. The pressure changes might be related to the properties and the volume of the agents injected into the vitreous. We noted mild transient inflammation after the injection of plasmin into the rabbit eyes. The intraocular pressure was high immediately after intravitreal injection, but gradually returned to normal [37]. This indicates that the agents injected into the vitreous caused transient photoreceptor dysfunction, but the ERG values returned to pretreatment levels shortly after the injection. Eyes in group 4 received the lowest injected volume of all the groups, and BSS did not cause an inflammatory reaction after its injection into the eyes. These two factors could contribute to the lack of ERG changes after intravitreal injection in these animals. Transient and mild inflammation could be encountered following the injection of plasmin with or without SF6. Cataracts could also arise following intravitreal injection of plasmin, although the incidence is not high. This could be attributed to the intravitreal injection itself or to the effect of plasmin on the lens. The combination of plasmin plus SF6 seems to be well tolerated in the eye and was not associated with increased complications. Yet, our study is limited by the small number of animals used. No definite conclusions can be drawn, although the initial results are promising.
Notably, the PVD effect produced by plasmin is dose and time dependent [38]. In the clinical trial of vitreomacular traction syndrome treated with microplasmin, increased exposure and doses augmented the incidence of PVD induction [39]. Therefore, sufficient doses of plasmin and time for action are required to produce the complete effect of vitreoretinal separation. Insufficient doses or time for action will produce either no separation at all or only partial vitreoretinal separation, a condition that is associated with poor prognoses, as mentioned above. In conclusion, our results suggest an efficient vitreoretinal separation and a good long-term safety profile for plasmin plus SF6 injection into the eye. No major systemic toxicity was found after the injection of plasmin plus SF6. These results provide support for the future clinical use of plasmin plus SF6 to treat a variety of vitreoretinopathies with tougher vitreoretinal adhesion.
Nutritional Issues Faced by Patients with Intestinal Stoma: A Narrative Review
The incidences of colorectal cancer and inflammatory bowel diseases are increasing in the developed countries of Western Europe and North America, and consequently, the prevalence rate of temporary or permanent stomas has increased in recent years. Nevertheless, the amount of research in the nutrition field in the case of patients with stoma is rather limited. This review article aims to assess the impact of nutrition on an intestinal stoma and possible sequelae resolved with nutritional therapy. The research analyses conducted thus far indicate a significantly increased risk of developing malnutrition, vitamin deficiency (especially group B), and an increased number of discharges from the stoma and its relevance with abnormalities found in electrolyte concentrations.
Introduction
A stoma is created when, for various reasons, physiological excretion of faeces is impossible [1]. The term intestinal stoma may be used interchangeably with artificial rectum. This review deals with excretory intestinal stomas and does not discuss enteral feeding, i.e., gastrostomies and jejunostomies [1,2]. There are many research difficulties regarding studies on the nutrition of stoma patients due to the wide range of different patients. The medical indications for performing an excretory stoma are mainly neoplastic diseases, especially those affecting the distal parts of the gastrointestinal tract, such as the large intestine. Other common indications are congenital anomalies and inflammatory bowel diseases (IBD) such as Crohn's disease (CD) and ulcerative colitis (UC) [1,[3][4][5][6][7]. A Swedish population-based study showed that even with the increasing use of anti-TNF (tumor necrosis factor) drugs and low rates of proctectomy, the total number of stomas removed in patients with a diagnosis of CD, within 5 years of diagnosis, did not decrease from 2003 to 2019 [4]. Intestinal stomas placed pro tempore decompress the diseased part of the intestine for the duration of the treatment. After eliminating the cause of the illness, the continuity of the digestive system is restored, unless there are contraindications [1,3]. Both temporary and permanent intestinal stomas require changes in the diet due to the emergence of disparate gastroenterological problems. Partially altered bowel function and the underlying pathology expose patients to numerous nutritional deficiencies, including malnutrition.
This narrative review aims to analyse the scientific reports on the nutrition problems of patients with an intestinal stoma. This review discusses nutrition-related complications among patients with a stoma, their relationship to the stoma site, and measures to cope with it. In addition, this review assesses the occurrence of malnutrition in patients with an intestinal stoma, the risk of developing nutritional deficiencies, and the supply of macroand micronutrients.
Not only is a nutritional intervention after stoma creation crucial, but so is the nutritional status of the patient before the surgery. Fulham et al. [8] emphasized that the nutritional status before the stoma surgery, as well as the patient's age and the cause of the main disease, are of immense importance. Arnott et al. [9], in their study published in 2022, also considered the importance of patient nutrition before surgery to restore the continuity of the gastrointestinal tract. The researchers report that a considerable number of patients undergoing Hartmann's reversal (HR) were identified as being at substantial risk for malnutrition. Poor nutrition was associated with worse postoperative outcomes, including wound infection or patient death. The cited team of researchers supports the use of preoperative screening for malnutrition. Such screening can make it possible not only to diagnose malnutrition among patients, but more importantly, to take appropriate steps to improve the patient's nutritional status and avoid complications [9].
In 2001, the Audit Commission, and in 2003 the British Association for Parenteral and Enteral Nutrition (BAPEN), highlighted the importance of screening in the context of malnutrition, both upon admission to the hospital and regularly during the entire stay [10]. In 2018, the Global Leadership Initiative on Malnutrition (GLIM) issued unified criteria for the diagnosis of malnutrition, which may also be useful for patients with an ostomy. The main criteria suggested are non-volitional weight loss, low body mass index (BMI), reduced muscle mass, reduced food intake or assimilation, and disease burden or inflammation. As diagnostic tools in the case of malnutrition, GLIM recommends the use of the following scales: NRS-2002 (Nutritional Risk Screening 2002), MNA-SF (Mini Nutritional Assessment-Short Form), MUST (Malnutrition Universal Screening Tool), SGA (Subjective Global Assessment), and many others [11].
There are no dedicated recommendations regarding which nutritional assessment tool to use in either the ileostomy or the colostomy group. Usually, universal questionnaires such as MUST are used. MUST is widely used in hospitals to predict the length of stay and mortality. According to the authors, the tool can help diagnose malnutrition as well as overweight and obesity, and most importantly, it can be used even in patients with fluid disturbances. The MUST questionnaire permits a quick assessment of the patient's risk of malnutrition, as it consists of only five steps: BMI assessment, evaluation of unplanned loss of body weight in the last 3-6 months, and the presence and possible impact of acute disease on the patient's nutritional status. In the last two steps, the points are added up, and according to the level of risk, appropriate dietary guidance is issued [10]. However, its modified version, the Perioperative Nutrition Screen (PONS), is recommended by Wischmeyer et al. [12] for assessing the nutritional status of patients after a gastrointestinal surgery. PONS is another short questionnaire that indicates the need for preoperative nutrition and clinical or nutritional intervention.
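The five MUST steps described above can be sketched as a small scoring function. The cut-offs below follow the commonly cited BAPEN "MUST" bands (BMI and weight-loss scores, plus 2 points for an acute-disease effect); treat this as an illustrative encoding under those assumptions, not a clinical implementation.

```python
def must_score(bmi: float, weight_loss_pct_6mo: float,
               acute_disease_no_intake: bool) -> tuple[int, str]:
    """Illustrative MUST calculation: three component scores are summed
    (steps 1-3) and the total is mapped to a risk category (steps 4-5)."""
    # Step 1: BMI score (kg/m^2)
    if bmi >= 20.0:
        score = 0
    elif bmi >= 18.5:
        score = 1
    else:
        score = 2
    # Step 2: unplanned weight loss in the past 3-6 months (%)
    if weight_loss_pct_6mo >= 10.0:
        score += 2
    elif weight_loss_pct_6mo >= 5.0:
        score += 1
    # Step 3: acute disease effect with no nutritional intake for >5 days
    if acute_disease_no_intake:
        score += 2
    # Steps 4-5: overall risk level, which drives the dietary guidance issued
    risk = "low" if score == 0 else ("medium" if score == 1 else "high")
    return score, risk

print(must_score(19.0, 7.0, False))  # -> (2, 'high')
```

A total of 0 maps to routine care, 1 to observation, and 2 or more to referral for nutritional treatment, which matches the "points are added up" description in the text.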
Patients qualify if at least one of the following occurs: BMI < 18.5 kg/m² (or < 20.0 kg/m² if age > 65 years), recent weight loss, or appetite reduced by at least half. An additional criterion related to surgical risk is a decrease in the albumin level below 3.0 g/dL [12].
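These trigger criteria can be encoded directly. The function below is a hypothetical illustration of the PONS logic as cited in the text; the name `pons_positive` and its parameters are mine, not part of the published tool.

```python
def pons_positive(bmi, age, recent_weight_loss, appetite_reduced_by_half,
                  albumin_g_dl=None):
    """Return True if any PONS trigger from the text is met: low BMI
    (age-adjusted cut-off), recent weight loss, appetite reduced by at
    least half, or, when measured, serum albumin below 3.0 g/dL."""
    bmi_cutoff = 20.0 if age > 65 else 18.5
    if bmi < bmi_cutoff:
        return True
    if recent_weight_loss or appetite_reduced_by_half:
        return True
    return albumin_g_dl is not None and albumin_g_dl < 3.0
```

Note the age-dependent cut-off: `pons_positive(19.5, 70, False, False)` flags a 70-year-old with BMI 19.5, while the same BMI at age 40 passes the screen.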
Santamaria et al. [13] used the SGA, a well-known and standardized tool, for the assessment of nutrition in a group of ileostomy patients with a high-output stoma (HOS). In addition, patients with colon cancer who underwent scheduled ileostomy surgery were assessed before and after surgery with the MNA [14]. The last two questionnaires are based on similar questions and are slightly longer, but still fit on one page. This allows them to be carried out efficiently and repeated regularly during the patient's stay in the hospital [13][14][15]. If a patient qualifies for nutritional treatment, the first step is to perform an in-depth evaluation of the nutritional status of that patient. The next step is qualification for the selected type of nutritional intervention. Overall, an in-depth assessment of the patient's condition is performed through a nutritional interview, anthropometric measurements, and biochemical and immunological tests [8,16]. The body mass index, also known as the Quetelet II index, is most often used to assess body weight. Other commonly used measures are the thickness of the skin fold above the triceps muscle of the non-dominant arm (triceps skinfold thickness, TST), mid-arm circumference (MAC), and whole-body bioelectrical impedance analysis (BIA) [10].
Immediate Post-Operative Period
In the initial period, after the surgery and during convalescence, the greatest attention is paid to the physical and psychological aspects of preparing the patient for a life with an ostomy [17]. Therefore, it is easy to overlook the nutritional needs resulting from comorbidities such as cancer or diseases of the gastrointestinal tract, which may exacerbate nutrient deficiencies or weight loss. Malnutrition is a condition that very often accompanies patients devastated by cancer or with intestinal diseases. It results from malabsorption and long-term hospitalization [18,19]. Nutritional complications are usually caused by stoma output, stoma site and base disease ( Figure 1). are added up, and according to the level of risk, appropriate dietary guidance is issued [10]. However, its modified version, the Perioperative Nutrition Screen (PONS) is recommended by Wischmeyer et al. [12] for assessing the nutritional status of patients after a gastrointestinal surgery. PONS is another short questionnaire that indicates the need for preoperative nutrition, and clinical or nutritional intervention. Patients are qualified in case of at least one of the following occurs: BMI < 18.5 kg/m 2 or <20.0 kg/m 2 (age > 65 years), recent weight loss and reduced half appetite at the very least. An additional condition related to the surgical risk is the decrease of the albumin level below 3,0 g/dL [12].
Santamaria et al. [13] used the SGA for the assessment of nutrition in a group of patients with ileostomy with High Output Stoma (HOS), which is a well-known and standardized tool. In addition, patients with colon cancer who underwent a scheduled ileostomy surgery, were diagnosed before and after with the MNA [14]. The last two questionnaires are based on similar questions, and are slightly longer, however, still fit on one page. This allows them to be carried out efficiently and repeated regularly during the patients' stay in the hospital [13][14][15]. If a patient is qualified for a nutritional treatment, the first step is to perform an in-depth evaluation of the nutritional status of that patient. The next step is the qualification to the selected type of nutritional intervention. Overall, an indepth assessment of the patient's condition is performed by a nutritional interview, anthropometric measurements, and biochemical and immunological tests [8,16]. The body mass index, also known as the Quetelet II index, is most often used to assess body weight. Other commonly used measures are the thickness of the fat-dermal fold above the triceps muscle of the non-dominant arm (triceps skinfold thickness, TST), mid-arm circumference (MAC), and whole-body bioelectrical impedance analysis (BIA) [10].
Immediate Post-Operative Period
In the initial period after the surgery and during convalescence, the greatest attention is paid to the physical and psychological aspects of preparing the patient for life with an ostomy [17]. Therefore, it is easy to overlook the nutritional needs resulting from comorbidities such as cancer or diseases of the gastrointestinal tract, which may exacerbate nutrient deficiencies or weight loss. Malnutrition is a condition that very often accompanies patients devastated by cancer or intestinal diseases. It results from malabsorption and long-term hospitalization [18,19]. Nutritional complications are usually caused by the stoma output, the stoma site, and the underlying disease (Figure 1).
An observational study conducted by Mukhopadhyay et al. [16] involving patients with a created ileostomy demonstrated that early implementation of balanced oral feeding was associated with normal nutritional status. Bodyweight decrease and serum hemoglobin levels were noticed after one week of oral feeding. However, baseline biochemical (hemoglobin, albumin) and anthropometric (body weight) parameters were normal at the twelfth week of a follow-up [16].
Delayed Post-Operative Period
Due to the physiology of the gastrointestinal tract, nutritional deficiencies may affect patients to a greater or lesser extent, depending on the location of the stoma. People with an ileostomy are more prone to nutritional deficiencies because most of the digestive system is bypassed temporarily or permanently, and hence the mechanism of nutrient absorption is disturbed [8,15]. The sites of absorption of individual nutrients are shown in Table 1 [15].

Table 1. Sites of absorption of nutrients in the intestine [15].
Ileostomy
The small intestine is responsible for the absorption of most nutrients, including amino acids, carbohydrates, fats, minerals, and water [15]. When an ileostomy is established in the initial section of the ileum, the surface area of nutrient absorption may be significantly reduced. In such a case, the efficiency of the process may be impaired due to a reduced number of intestinal villi and microvilli [8,15].
When the intestinal stoma is created within the small bowel, the patient should be monitored for the consumption of fat and of fat-soluble vitamins such as A, D, E, and K. It is also worth paying attention to the level of vitamin B12 (cobalamin), which is absorbed in the ileum in the presence of Castle's intrinsic factor. In the case of an ileostomy, when the entire colon is absent or not in continuity, the patient is also deprived of the endogenous source of vitamins and nutrients produced by the gut microbiota. These include vitamin K, B vitamins, folic acid, and short-chain fatty acids (SCFA) [8,15]. The study by de Oliveira et al. [20] showed that in patients with an ileostomy, low absorption of fat and niacin occurs much more often than in those with a colostomy.
A stoma in the small intestine predisposes patients to an increased amount of water excreted due to the lack of reabsorption in the large intestine. Patients with an ileostomy most often pass stools in a liquid or semi-liquid form. If the stoma is in the proximal part of the small intestine, in addition to absorbing water, there may be problems with the absorption of sodium and potassium. The remaining nutrients should be absorbed normally if no inflammation develops in the intestine and there is no damage to the intestinal villi [8,15].
In the scientific literature on stomas and nutrition, authors pay a lot of attention to three complications: an excessive amount of discharge into the stoma bag, disturbances in the water and electrolyte balance, and leakage of intestinal contents in the stoma area [21][22][23]. Arenas-Villafranca et al. [22] found that a high-output stoma, which produces an increased amount of fluid (>1500-2000 mL/day), is associated with electrolyte disturbances. Their results showed that all HOS cases were associated with an ileostomy and resulted in an extension of the hospital stay by 6 days compared to patients without HOS [22]. De Oliveira et al. [20] showed that the percentage of patients avoiding the consumption of selected groups of food products is significantly higher among patients with an ileostomy. According to the authors, adjusting the diet to one's own needs and excluding certain foods from it may also increase the risk of malnutrition in patients with a stoma in the small intestine [20].
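As a minimal sketch, the HOS criterion above can be expressed as a threshold check over a series of daily output volumes. The 1500 mL default and the two-day persistence requirement are assumptions chosen for illustration, since the cited cut-off spans 1500-2000 mL/day and definitions of persistence vary between studies.

```python
def flag_high_output_stoma(daily_outputs_ml, threshold_ml=1500.0, min_days=2):
    """Return the 1-based days on which stoma output exceeded the threshold,
    and whether the high output persisted long enough to be called a HOS
    (persistence rule is an assumption, not from the cited study)."""
    high_days = [day for day, volume in enumerate(daily_outputs_ml, start=1)
                 if volume > threshold_ml]
    return high_days, len(high_days) >= min_days
```

For outputs of 1200, 1700, and 1900 mL over three days, the function flags days 2 and 3 and reports a sustained high output; a single high-output day alone would not meet the two-day assumption.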
Colostomy
The large intestine is responsible for water and electrolyte absorption; sodium and potassium are the most important electrolytes here. In people with a preserved large intestine, water is recovered from the contents of the intestine along its entire length. The body recovers water even from the formed stool; this mechanism causes constipation if the stool is held for too long. The second function, apart from water absorption, is the uptake of sodium in this section of the gastrointestinal tract. It occurs in the colon and helps maintain the body's water and electrolyte balance. In the case of partial or complete exclusion of the colon, patients often develop electrolyte disturbances [8,15].
The risk of nutritional deficiency is greater when the colostomy is created due to inflammatory bowel disease (CD or UC). However, it is related to the underlying disease, especially during its exacerbation. In both cases, there is a malabsorption of proteins and the medications used can lead to a deficiency of calcium, potassium, and zinc. Frequently, deficiencies of fat-soluble vitamins, folic acid, vitamin B12, as well as the minerals calcium, magnesium, and zinc occur in patients with CD as a result of extensive inflammation or removal of a part of the intestine [21,24]. Due to bleeding, patients with ulcerative colitis tend to lose iron, fluid, and electrolytes [25]. Chandramouli et al. [26] indicate that malnutrition may be one of the complications of colostomy. Furthermore, they indicate that transverse colostomy had a lower rate of malnutrition than sigmoidal colostomy.
Demand for Energy, Macronutrients, Vitamins, and Micronutrients
There is no scientific data on the validity of increasing or not increasing energy supply in patients with an ostomy. Such data could be helpful, especially in patients who are malnourished or at risk of malnutrition, as well as in those who suffer from peristomal complications related to nutrition.
De Oliveira et al. [20], in a study evaluating the consumption of energy and nutrients, did not find any deterioration in nutritional status in either ileostomy or colostomy patients. On the other hand, the average usual energy consumption in the group of patients with an ileostomy was 20% lower than in patients with a colostomy.
In the study by Migdanis et al. [27], a decrease in the average energy consumption among patients with an ileostomy was reported to be approximately 100 kcal compared to the preoperative period. There were no such changes in the control group. In addition, patients with an ileostomy also lost weight and body fat percentage during this period. However, the authors emphasize that these data should be carefully applied to the general population with an ileostomy, as patients were examined in the period immediately after the surgery (40 days after the surgery). These people often eat food selectively for fear of an increased stoma secretion and leakage [27].
Nutrient absorption can be impaired in patients with ostomies; therefore, energy requirements may increase by about 30% for inpatients with HOS [23]. The caloric value of the diet can be increased by dietary modification or by using oral nutritional supplements (ONS).
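As a rough illustration of how such an adjustment might be applied, the sketch below estimates daily energy needs from body weight. The 25 kcal/kg baseline is an illustrative assumption (not taken from the cited sources); only the ~30% increase for inpatients with HOS follows the figure quoted above.

```python
def estimated_energy_kcal(weight_kg, baseline_kcal_per_kg=25.0, hos=False):
    """Estimate daily energy needs, with a ~30% increase for inpatients with HOS.

    The 25 kcal/kg baseline is an illustrative assumption, not a guideline value.
    """
    requirement = weight_kg * baseline_kcal_per_kg
    if hos:
        requirement *= 1.30  # ~30% higher requirement reported for inpatients with HOS [23]
    return requirement

# For a hypothetical 70 kg inpatient: 1750 kcal baseline, ~2275 kcal with HOS.
```

In practice the baseline would come from an individual nutritional assessment rather than a fixed per-kilogram factor.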
The topic of energy loss is closely related to fat supply. Several years ago, clinicians recommended severe restrictions on fat intake to reduce the fatty diarrhoea that often accompanied patients, especially those with an ileostomy. Currently, very low-fat diets are not recommended, because low fat consumption can markedly reduce the caloric value of the diet [23]. Such restrictions can aggravate undernourishment or low weight in patients trying to regain lost weight. Additionally, a fat deficit exposes the patient to deteriorated absorption of fat-soluble vitamins [8]. It is recommended to limit saturated fats in favour of unsaturated fats, which are also a source of phytosterols [28][29][30]. No strict guidelines have been found regarding the percentage of this macronutrient in the diet of a patient with a stoma. Each patient should be treated individually, and the fat supply should be matched to their current condition and the condition of the digestive tract.
The total amount of carbohydrates is generally not restricted, with a recommendation to ingest 40-50% of calories as carbohydrates; however, the type of carbohydrate ingested is important [23]. A more pressing issue concerns the quality and the gradual introduction of soluble fibre, although the amount of fibre in the diet is a matter of individual consideration and depends on the type of stoma and the time elapsed since the surgery [8,23]. The basic products recommended for patients with a stoma are starch products such as potatoes, white rice, refined flour pasta or white bread [8]. Patients with an ileostomy often suffer from dehydration or HOS, so it is recommended that they avoid hypotonic fluids as well as hypertonic fluids such as sweet fruit juices, soft drinks and other sugar-sweetened beverages, which may worsen these problems [31]. For the same reasons, patients with HOS are also not recommended to consume sweeteners and products such as chewing gum [23]. A low-FODMAP (fermentable oligo-, di-, monosaccharides and polyols) diet may improve gastrointestinal symptoms and hydration in patients with HOS [32]. This diet, especially in its initial stages, is very restrictive and should be carried out under the supervision of a dietitian to ensure that nutrient intake is adequate and balanced [23,32]. On the other hand, among patients with a colostomy, whose gastrointestinal passage is less altered, the preventive use of foods with a higher fibre content, such as brown rice or whole wheat bread, may be helpful. This may help to avoid the constipation problems that accompany a significant percentage of patients with this type of stoma [8].
The usual protein supply for patients with ostomies is recommended at the level of 20%, unless there are indications to increase it, such as convalescence or malnutrition [8,23]. Burch et al. [33] recommend that ostomy patients consume two to three servings of protein per day. Due to the limitation of saturated fat, lean meat (e.g., poultry), fish, eggs and lean dairy products are considered the preferred protein sources. Because of their high content of insoluble fibre fractions and their bloating nature, the inclusion of legume seeds should be approached with caution, especially among patients with ileostomies. Protein is a macronutrient relatively well tolerated by patients with both an ileostomy and a colostomy; therefore, all individuals with a stoma are advised to consume it daily in appropriately selected portions [8,23]. It should be remembered that, besides its protein content, dairy is also a reliable source of calcium, which may protect the patient against osteoporotic changes [33].
The scientific literature reports an increased risk of osteomalacia and osteoporosis, combined with calcium deficiency, in people who have an intestinal stoma created as a result of inflammatory bowel disease. There is a need for further research and guidelines from scientific societies in this regard, because preventive measures can bring many benefits to patients [34,35].
Unfortunately, thus far none of the global associations dealing with clinical nutrition have developed guidelines for the supplementation of vitamins or macro- and micronutrients in patients with a stoma. In the scientific literature, however, there are references to the risk of cobalamin and vitamin K2 deficiency in people who have an ileostomy or a colostomy, because these two vitamins are absorbed in the terminal ileum and colon, not in the duodenum and jejunum, where most vitamins and minerals are absorbed [36]. No scientific papers directly addressing supplementation have been found; however, in a cross-sectional study involving patients with an intestinal stoma, Schiergens et al. [7] associated vitamin B12, iron, and zinc deficiencies with a reduced quality of life. The study found an association between the lack of these nutrients and both physical well-being and a reduced gastrointestinal quality of life, measured by the Gastrointestinal Quality of Life Index (GIQLI). The strongest correlation was found between vitamin B12 deficiency and the mental health of patients [7].
It is worth emphasizing that the distribution of macro- and micronutrients may depend to a considerable extent on the underlying disease and the stages of its exacerbation or remission. The individual condition of the patient and the complications associated with the procedure or maintenance of the stoma will also be meaningful. In Figure 2, we have summarized the key aspects of the relationship between nutritional complications and stoma output, stoma site and underlying disease.
Recommendations to Prevent Complications and Deficiencies
A balanced diet and properly conducted nutritional education perform a key role among the methods of preventing peristomal complications and deficiencies that may accompany patients [8,23,33,37].
A patient with an ileostomy is encouraged to chew their food thoroughly and to introduce fibre into the diet gradually after the creation of the ileostomy. These interventions can reduce common ailments such as diarrhoea, bloating, and odour, as well as intestinal obstruction. Patients with ileostomies can easily experience dehydration and electrolyte disturbances; thus, education on dehydration symptoms and treatment is important [8,37]. In the past, dehydrated patients with an ileostomy were advised to add a teaspoon of salt to drinks or to consume additional salty snacks, such as crisps, between meals. Currently, due to the high degree of processing of many food products (cold cuts, cheese and bread), and thus the relatively high salt content of the daily diet, this method is not recommended [8,33]. In line with the principles of a healthy diet, such as the Dietary Approaches to Stop Hypertension (DASH) eating pattern, all people should be careful about additional salt intake. Patients with HOS should be provided with adequate sodium rehydration solutions prepared with salt in strictly defined proportions. On the other hand, patients with an ileostomy who do not have this problem, and patients with a colostomy, should follow the principles of a healthy diet, e.g., not overdoing the salt supply, monitoring for signs of dehydration and checking sodium levels prophylactically [23,38]. Pedersen et al. [39] report that in patients with an ileostomy, a single spot urine sodium sample taken between morning and noon can reliably estimate the 24-h urine sodium excretion. Such testing in people with an ileostomy can therefore quickly identify a sodium deficiency, and the sodium supply can be increased if needed [39]. According to Medlin et al. [23] and Mountford et al.
[31], patients with HOS and dehydration who frequently experience electrolyte disturbances should avoid excessive intake of hypotonic drinks (e.g., water, tea, and coffee limited to 500 to 1500 mL/day). While these hypotonic free water liquids are recommended for hydration of healthy individuals, these solutions may aggravate dehydration in patients with an ileostomy in whom sodium cannot be properly absorbed [23,31]. Instead, it is recommended for patients with a stoma to consume a rehydration solution consisting of 1000 mL of water, 20 g of glucose, 2.5 g of sodium bicarbonate, and 3.5 g of sodium chloride (table salt) [23,31]. According to Kelly et al. [40], the use of appropriate rehydration solutions improves fluid absorption (↑ 60%) and sodium absorption (↑ 40%) in the jejunum and may improve the ileal absorption of these substances (↑ 20-30%) [40]. In patients with an ileostomy, hydration status assessment is recommended, and it may be based on urine urea and electrolyte levels, but the monitoring of urine sodium is a better option. Urine sodium, especially in patients with HOS, is recommended to be monitored every 2-3 months [23].
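To see why the recipe quoted above behaves like an oral rehydration solution, the quick calculation below converts the stated amounts into an approximate osmolarity. The molar masses are standard values; treating each salt as fully dissociating into two osmotically active particles is a simplifying assumption.

```python
# Approximate osmolarity of the rehydration solution quoted above:
# 1000 mL water, 20 g glucose, 2.5 g sodium bicarbonate, 3.5 g sodium chloride.
MOLAR_MASS = {"glucose": 180.16, "NaHCO3": 84.01, "NaCl": 58.44}  # g/mol
GRAMS_PER_L = {"glucose": 20.0, "NaHCO3": 2.5, "NaCl": 3.5}
PARTICLES = {"glucose": 1, "NaHCO3": 2, "NaCl": 2}  # assumes full dissociation

def osmolarity_mosm_per_l():
    total = 0.0
    for solute, grams in GRAMS_PER_L.items():
        mmol = grams / MOLAR_MASS[solute] * 1000.0  # mmol per litre
        total += mmol * PARTICLES[solute]
    return total
```

The result comes out at roughly 290 mOsm/L, close to plasma osmolarity, which is consistent with the improved fluid and sodium absorption reported by Kelly et al. [40] for such glucose-electrolyte solutions.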
Other dietary interventions depend on the specific problem the patient is facing. In patients with bile acid stagnation, it is recommended to use oat bran with native β-glucans to increase their excretion. A study by Ellegård et al. [28] confirmed an improvement in bile acid metabolism following this dietary intervention. It was manifested in the increased excretion of 7α-hydroxy-4-cholesten-3-one, which was an intermediate in the synthesis of bile acids from cholesterol [28]. As a result, two other papers by these authors investigated the effect of plant phytosterols on patients with an ileostomy. The first study concerned the restriction of saturated fatty acids and increased supply of dietary fibre, while the second study concerned plant sterols derived from rapeseed oil and olive oil [29,30]. It was noticed that patients using food rich in phytosterols were characterized by an increased cholesterol excretion. This relationship is presumably due to the competition of phytosterols for a place in micelles with cholesterol from animal products [29]. It was also emphasized that rapeseed oil tends to reduce cholesterol absorption, increase the excretion of cholesterol and bile acids, and lower serum cholesterol compared to olive oil. The authors of the study suggest that this may be due to the difference in the concentrations of natural plant sterols occurring in these products [30].
Patients with an ileostomy will often find substantial amounts of fibre in the form of raw or unpeeled vegetables and fruits as the cause of some gastrointestinal problems. However, for people with a colostomy, these products can be immensely helpful in preventing constipation. Additionally, a varied diet rich in fruit and vegetables may also protect against vitamin and mineral deficiencies [33]. Arenas-Villafranca et al. [22] warn patients with a stoma against water-insoluble fibre fractions that may lead to intestinal obstruction. At the same time, they state that there is a small amount of good-quality clinical trials on soluble fibre in the nutrition of patients with a stoma [22].
Gastrointestinal problems in patients with ostomies may be triggered by the ingestion of gas-causing foods, carbonated or sweetened beverages, or inadequate fluid intake. In addition, ingestion of some foods such as nuts, corn, beans, coconut, dried fruit, and mushrooms can lead to a mechanical blockage of the stoma. Consuming expired or improperly stored food carries the risk of microbial contamination [41]. Figure 3 summarizes the foods that may cause the most common gastrointestinal problems in patients with ostomies (Figure 3).
Nutrition Consultation and Education as a Tool for Effective Protection against Complications
Recently, some interesting studies on educational support and nutritional consultation for patients with an intestinal stoma have been published [13,42,43].
In a Spanish cohort study conducted by Santamaria et al. [13] involving 170 patients (85 each in the study and control groups), patients underwent two nutritional consultations after hospital discharge. The authors found that such consultations significantly reduced the rate of hospital readmissions associated with a high-output stoma and dehydration, which also resulted in significant savings for the hospital [13]. Other researchers have reached similar conclusions, citing HOS as the most common cause of readmission [37][38][39]. Several retrospective studies likewise reported significantly lower rates of hospital readmission after the implementation of a follow-up nutritional consultation compared to non-intervention groups [13,[43][44][45].
In addition, Qu et al. [46], in a study published this year, reported the effects of a combination of nutritious meals, online advertising and education on the postoperative nutritional and psychological status of patients with a colostomy created due to cancer. Patients in the intervention group achieved statistically significantly better scores on indicators related to nutrition, immunological parameters and the WHO Quality of Life Assessment Test (WHOQOL-BREF). The scores of the self-rating anxiety scale (SAS) and self-rating depression scale (SDS), as well as the overall incidence of complications, were statistically significantly lower in the intervention group [46].
Fernández-Gálvez et al. [42] subjected 253 ileostomy patients (117 in the control group and 136 in the study group) to a nutritional intervention based on the principles of the Mediterranean diet. In addition, patients were provided with nutritional counselling during periods of the reintroduction of oral nutrition, hospital discharge and the first follow-up visit. Patients in the study group mostly reported weight gain (positive effect), a decrease in dehydration, and a lower rate of re-hospitalization for stoma complications. The dietary model and nutritional consultation introduced made it easier for patients to eat a minimum of five meals a day and reduced their doubts or concerns about diet [42].
Basic Nutritional Guidelines for an Intestinal Stoma
At the outset, it should be noted that the diet of a person with a stoma is also dependent on their underlying pathology and may require the use of a given eating pattern or specific nutritional supplements, or the elimination of a particular group of products [16].
The diet of a patient with a stoma is expanded gradually, and a return to the habitual diet often takes a few weeks. Nutrition in the postoperative period begins with a liquid and semi-liquid diet, gradually moving to an easily digestible diet with a limitation of nutrients such as fibre, fat or simple carbohydrates [47,48]. It is recommended that new products be introduced into the diet individually and in small amounts. Thanks to this, when any problem occurs, it is easy to identify and eliminate the product that caused it. After the initial recovery period, a style of eating consistent with the principles of a healthy and varied diet is recommended for patients with ostomies [14,45,48].
In the study by Toledano et al. [49], the impact of the time at which oral nutrition was introduced after stoma creation on the patient's postoperative results was assessed. Previous, mostly theoretical post-surgical nutritional protocols suggested that oral food and fluids should be withheld to reduce the risk of complications [48,49]. Toledano et al. [48] concluded that faster introduction of an oral diet after the procedure resulted in a shorter time to the first passage of gas, faster passage of the first stool and a shorter time to restoration of proper gastrointestinal function. The main limitations of the study were its retrospective nature and the fact that most patients who received oral nutrition sooner had undergone laparoscopic surgery, which is not always feasible. In addition, nutritional progress was measured on a daily, not hourly, scale, which made it difficult to capture more accurate data and perform a more detailed statistical analysis [48]. On the other hand, the team led by Petrelli [50] indicated that early oral feeding is safe and applicable in patients with a history of colorectal cancer and colectomy. The reasons for failure of such an intervention were intraoperative blood loss and too large a volume of the expander used.
The general guidelines for ostomy patients emphasize the importance of regular meals, consuming plenty of fluids, and thoroughly chewing food to avoid stagnation in the gastrointestinal tract. Patients with a stoma are also recommended to consult a stoma nurse and a dietitian at every stage of nutritional management or in case of doubts [41].
Limitations
A limitation of this review is certainly the relatively small number of clinical trials on dietary nutrition or nutritional treatment conducted in patients with an intestinal stoma. The dietary strategies used were also often heterogeneous, and the study groups cited in randomized clinical trials were often small. Many of the studies on intestinal stoma patients are observational cohort or retrospective studies. In summary, scientific papers on feeding patients with intestinal stomas are few and often provide a very low level of evidence.
Some of the cited studies also administered nutrition concurrently with medications, which may not fully reflect the actual situation regarding dietary nutrition for patients after an intestinal stoma procedure or who are in recovery. Few studies have also considered the nutritional status of the patient before the intestinal stoma procedure; however, sometimes this was impossible due to the unplanned nature of the operation.
As authors, we tried to make the studies we collected as objective as possible, which may be indicated by, among other things, clear inclusion and exclusion criteria for studies in the review, as well as the presentation of all study results, not just those supporting the hypothesis of a beneficial effect of nutrition on the patient.
Nevertheless, despite the authors' efforts, a narrative review is not as objective a form of scientific work as a systematic review or meta-analysis of scientific studies. In contrast, these more highly regarded standards for synthesizing scientific evidence tend to focus only on a narrow area of an issue, are very time-consuming and require similar research criteria and conditions. In the field of the issue of feeding patients with an intestinal stoma, due to the small number of studies, the widely varying research conditions in the interventions used, or the stomas being placed on different segments of the intestine (small or large), it would be a very complicated or even impossible task to conduct a systematic review or meta-analysis.
There is a need for studies involving patients who have had an intestinal stoma created for distinct reasons. Future studies should also standardize research conditions and consider conducting multicentre studies. It would also be necessary to look closely at the patient's nutritional status before surgery and the impact of this factor on improving the prognosis for patients with an intestinal stoma.
Conclusions
Progress in stoma care has improved the prognosis and quality of life of many patients worldwide. However, it has also revealed the essential role of professional care provided by an interdisciplinary team: a doctor, an ostomy nurse, a psychologist and a nutrition specialist. Many complications can be avoided by using an appropriately balanced diet, monitoring the patient's nutritional status, and properly conducting nutritional education. A summary of dietary recommendations for patients with ostomies is presented in Table 2. Table 2. Summary of nutritional recommendations for patients with ostomies.
Ileostomy
Avoid products such as popcorn, nuts, seeds, raisins, etc., as they can block the stoma. Proper hydration is crucial to prevent excessive secretion and fluid disorders. Parenteral nutrition may be necessary due to the higher risk of nutritional deficiencies.
Extending the diet after surgery should be done carefully and individually. This will allow easy detection and exclusion of harmful products for patients.
Colostomy
The diet does not deviate from the principles of a healthy and balanced diet recommended for healthy people.
For possible constipation problems, plenty of exercise and a greater supply of food rich in insoluble fibre fractions, such as whole-grain bread, dark pasta and brown rice are recommended.
Both
The diet should be individually selected to meet the needs of the patient and be varied and rich in vitamins and minerals.
The basis of the diet should be fresh products, preferably without preservatives and artificial food dyes. The recommended forms of food processing are cooking (traditional or steaming) and possibly baking in foil; fried, roasted and stewed-after-frying foods should be avoided. Products that should be avoided or consumed in limited quantities are alcohol (especially beer), sweets (especially chocolate and its products), carbonated drinks, egg yolk, salt and hot spices.
Researchers are particularly leaning toward appropriate stoma intervention and access to education for stoma patients, either in person or via the Internet. However, there is still a lack of good-quality nutritional research and supplementation guidelines for patients with an established intestinal stoma. Further research into an appropriate diet to minimize the risk of complications and re-hospitalization of patients with an intestinal stoma is needed.
"year": 2023,
"sha1": "bfa257d307b623ce446e5d8473f5814700486823",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/12/2/510/pdf?version=1673168335",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d13596072c633f0a50b34b172ad29d388d274585",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Trend Analysis of Temperature and Rainfall across Agro Climatic Zones of Karnataka-A Semi Arid State in India
The study was carried out for the ten agro-climatic zones of Karnataka state, India, using temperature and rainfall data from 1979-2019, a period of about 40 years. To understand the spatiotemporal rainfall pattern, the Rainfall Anomaly Index (RAI), a drought-indicator technique, was used to classify positive and negative severities in rainfall anomalies. RAI values below 0.2 are considered to indicate a dry zone. The analysis showed that all zones fall into the dry category, with RAI values in the range of 0.2 to 0.4. Over the past five years, the North Eastern Transition Zone most often recorded RAI values below 0.2 and near zero. Statistical techniques such as linear trend estimation and the coefficient of determination (R square) were applied to annual and seasonal series to identify temperature variation across the zones. A trend is considered statistically meaningful when r2 ≥ 0.65 and p ≤ 0.05. The analysis found that the Hilly Zone experienced a decreasing trend in both minimum and maximum temperature in all seasons, which was ultimately reflected in a decreasing annual temperature with high R square values.
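The Rainfall Anomaly Index referred to above can be computed from a series of annual rainfall totals. The sketch below uses the classical van Rooy (1965) formulation, which scales anomalies by the means of the ten wettest and ten driest years; note this standard definition may differ in scaling from the variant applied in this study.

```python
import numpy as np

def rainfall_anomaly_index(annual_rainfall):
    """Van Rooy (1965) RAI: positive anomalies scaled by the mean of the ten
    wettest years, negative anomalies by the mean of the ten driest years."""
    p = np.asarray(annual_rainfall, dtype=float)
    mean = p.mean()
    ordered = np.sort(p)
    low_extreme = ordered[:10].mean()    # mean of the ten driest years
    high_extreme = ordered[-10:].mean()  # mean of the ten wettest years
    return np.where(
        p >= mean,
        3.0 * (p - mean) / (high_extreme - mean),
        -3.0 * (p - mean) / (low_extreme - mean),
    )
```

Under this definition, a year matching the wettest-decile mean scores +3 and one matching the driest-decile mean scores -3, so positive values mark wet anomalies and negative values mark dry anomalies.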
INTRODUCTION
For several decades, industrialization in developed countries and the destruction of forests for fuel in developing nations have driven the growth of greenhouse gas emissions, which raises the earth's temperature. The accelerating pace of climate change is likely to affect agricultural production in general and food security in particular. The available evidence from various climate change impact studies suggests that developing countries are likely to be more adversely affected than developed countries because of their geographical location and large dependence on climate-sensitive sectors such as agriculture. Higher temperatures eventually reduce yields of desirable crops while encouraging weed and pest proliferation [1][2][3]. Changes in precipitation patterns increase the likelihood of short-run crop failures and long-run production declines.
India, a sub-tropical country, is highly sensitive to climate change, mainly because of the variability of its rainfall. About 80 per cent of the total rainfall over India occurs during the four months from June to September as a result of the south-west monsoon. Agricultural activities are very sensitive to climate and weather conditions [4][5][6]. Despite its declining share in national income, Indian agriculture continues to be an important sector of the economy because of its strategic importance to food security, livelihood and poverty reduction. Among the agro-ecological zones of India, the semi-arid zone holds a prominent place in the economy, contributing about 42 per cent of total food grain production, supporting 60 per cent of the livestock population and employing nearly 37 per cent of marginal farmers through agricultural activities [7,8]. This zone comprises parts of Andhra Pradesh, Karnataka, Tamil Nadu, the Deccan plateau, and parts of Gujarat and Maharashtra, with cultivation of major crops such as rice, wheat, sugarcane, pulses, sorghum and tur. Despite its extensive cultivation, with 62 per cent of cropped area and 55 per cent of net sown area, this zone is susceptible to famine, water shortages and land degradation, and is considered the most drought-prone region, covering 34% of the country's area (112 of 329 million ha).
Karnataka is one of the major agrarian states and is often affected by natural calamities; it ranks second after Rajasthan in drought proneness, with 50 per cent of its area classed as semi-arid, districts experiencing uneven rainfall distribution roughly every 5 years, and severe drought conditions once every 8-9 years [9]. A collaboration between the state Department of Agriculture and the University of Agricultural Sciences has delineated 10 distinct agro-climatic zones based on soil characteristics, rainfall distribution, irrigation pattern, cropping pattern, and ecological and social characteristics, viz., Central Dry Zone, Eastern Dry Zone, Northern Dry Zone, North Eastern Dry Zone, Southern Dry Zone, Northern Transition Zone, North Eastern Transition Zone, Southern Transition Zone, Coastal Zone and Hilly Zone [10]. There are five dry zones with relatively low rainfall [10][11][12] and three transitional zones with relatively higher rainfall, while the Hilly and Coastal Zones receive the highest rainfall, exceeding 1500 mm mean annual rainfall. Rainfall and temperature [13] are the most fundamental physical parameters of climate, since they determine the environmental conditions of a region and thereby affect agricultural productivity [14,15]. The spatial and temporal variability of rainfall makes it necessary to examine its changing pattern in response to changing temperature, because rainfall strongly influences a region's agriculture and food production, thus affecting the socio-economic status of resource-poor marginal and small farmers. Trend analysis of rainfall [16][17][18][19], temperature [20][21][22] and other climatic variables at different spatial scales will help in constructing future climate scenarios.
METHODOLOGY
Temperature and rainfall trends observed in the different zones over a period of 40 years are presented. Climatic variables such as daily rainfall and temperature across Karnataka state were collected from the India Meteorological Department (IMD). Although secondary-data-based assessments have certain limitations, such as questions of data authenticity, inconsistency in some cases and data gaps, they still provide a useful means of assessment at the macro level. Statistical methods are used to determine the form of a trend, to estimate the trend itself along with some measure of uncertainty, and to determine its statistical significance.
Linear Trend Equation and R Square
Most line equations are in the form Y = MX + C, with Y as the variable on the y-axis, M as the slope or coefficient of the X variable (the values on the x-axis), and C as the constant, i.e., the value of Y when X is zero. R-squared (R²) is a statistical measure that represents the proportion of the variance of a dependent variable that is explained by an independent variable or variables in a regression model. R-squared expresses to what extent the variance of one variable explains the variance of a second variable: if the R² of a model is 0.50, approximately half of the observed variation can be explained by the model's inputs.
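As a concrete illustration of the trend line and R² described above, here is a minimal least-squares sketch. The year/temperature values are invented for illustration; they are not the study's IMD data.

```python
# Minimal sketch of the linear trend (Y = M*X + C) and R-squared described
# above, computed with ordinary least squares. Data values are synthetic.

def linear_trend(xs, ys):
    """Least-squares fit of y = m*x + c; returns (m, c, r_squared)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    m = sxy / sxx
    c = mean_y - m * mean_x
    # R^2 = 1 - unexplained variation / total variation
    ss_res = sum((y - (m * x + c)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return m, c, 1.0 - ss_res / ss_tot

years = list(range(1979, 1991))
temps = [25.1, 25.0, 25.3, 25.2, 25.4, 25.5,
         25.4, 25.6, 25.7, 25.8, 25.7, 25.9]
m, c, r2 = linear_trend(years, temps)  # m > 0 would indicate a warming trend
```

A positive slope M with a high R² corresponds to the "strongly explained" increasing trends reported in the Results section.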
R-squared (R²) = 1 − unexplained variation / total variation.

The Rainfall Anomaly Index (RAI), developed by Rooy [23], is used to classify the positive and negative severities in rainfall anomalies. The RAI considers two anomalies, i.e., a positive anomaly and a negative anomaly. First, the precipitation data are arranged in descending order. The ten highest values are averaged to form a threshold for the positive anomaly and the ten lowest values are averaged to form a threshold for the negative anomaly. The arbitrary threshold values of +3 and -3 have, respectively, been assigned to the mean of the ten most extreme positive and negative anomalies. Nine abnormality classes, ranging from extremely wet to extremely dry conditions, are then defined against a scale of numerical values of the RAI. Positive anomalies have values above the average and negative anomalies have values below it.

For positive anomalies: RAI = 3 [(RNF − RNFm) / (X − RNFm)]
For negative anomalies: RAI = −3 [(RNF − RNFm) / (Y − RNFm)]

where RNF = current monthly/yearly rainfall (mm); RNFm = monthly/yearly average rainfall of the historical series (mm); X = average of the ten highest monthly/yearly precipitations of the historical series (mm); Y = average of the ten lowest monthly/yearly precipitations of the historical series (mm).
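The RAI computation above can be sketched as a short function. The rainfall series below is synthetic; a real application would use a zone's historical IMD series (at least ten years of data are required for the thresholds).

```python
# Sketch of the RAI computation following the formulas above. The series of
# yearly totals is synthetic, not observed rainfall.

def rai(rnf, series):
    """Rainfall Anomaly Index of one year's rainfall against a series."""
    rnfm = sum(series) / len(series)             # historical mean (RNFm)
    ordered = sorted(series, reverse=True)
    x = sum(ordered[:10]) / 10.0                 # mean of ten highest years (X)
    y = sum(ordered[-10:]) / 10.0                # mean of ten lowest years (Y)
    if rnf >= rnfm:                              # positive anomaly
        return 3.0 * (rnf - rnfm) / (x - rnfm)
    return -3.0 * (rnf - rnfm) / (y - rnfm)      # negative anomaly

series = [float(v) for v in range(500, 540)]     # 40 synthetic yearly totals (mm)
```

By construction, a year equal to the mean of the ten wettest years scores +3, a year equal to the mean of the ten driest years scores −3, and an average year scores 0.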
RESULTS AND DISCUSSION
The increasing and decreasing temperature and precipitation trends were analyzed using daily temperature and precipitation data for the ten agro-climatic zones spatially distributed across Karnataka. Although trend analysis is often used to predict future events, it can also be used to characterize uncertain events in the past. In particular, the weather data exhibit increasing or decreasing trends that can be statistically distinguished from random behaviour. For the trend analysis of temperature, linear trend estimation and R² were used; the Rainfall Anomaly Index was used for the rainfall drought intensity level.
Linear Trend Estimation of Temperature across Agro Climatic Zones 1978-79 to 2018-2019
A positive temperature trend was observed in all zones, while a negative trend was recorded in the Hilly Zone (Table 2), which is strongly explained by R² values of 74 per cent and 87 per cent for minimum and maximum temperature, respectively.
During winter, a negative trend in minimum temperature was noticed in all zones except the SDZ (Table 3), while a positive trend in maximum temperature was noticed in all zones except the HZ, for which R² is strongly explained at 86 per cent. It can be concluded that the Hilly Zone was less hot compared with the other zones.

RAI range    Classification
Above 4      Extremely humid
2 to 4       Very humid
0 to 2       Humid
-2 to 0      Dry
-4 to -2     Very dry
Below -4     Extremely dry
Source: Freitas [24], adapted by Araujo et al. [25]

During summer, a negative trend in minimum temperature was noticed in the HZ and a positive trend in all other zones (Table 4), while an increasing trend in maximum temperature was noticed in all zones except the HZ, which showed a decreasing trend strongly explained by an R² of 80 per cent.
There is a change in the trend of minimum temperature across all agro-climatic zones, with an upward slope in all zones except the HZ (Table 5), while the trend in maximum temperature is positive in all zones. The change in the trend of minimum temperature is positive in all zones except the NDZ and HZ (Table 6), while the change in the trend of maximum temperature is positive in all zones except the HZ, which is explained by an R² of 62 per cent. On the whole, the Hilly Zone is experiencing a decreasing trend in both minimum and maximum temperature in all seasons, which is ultimately reflected in a decreasing annual temperature. The annual maximum temperature has increased in the dry, transitional and Coastal Zones.
ACZ VI: The Southern Dry Zone experienced many years with RAI = 0 (1979, 1980, 1983, 1988 and 2007), and many years with RAI < 0.2, indicating a dry, drought-prone climate and low rainfall in this zone in comparison to other zones; it recorded RAI = +1.5 only in 2003 (Fig. 6), with a few years in the range of RAI +0.3.
ACZ VII: The Southern Transition Zone recorded RAI = 0 for the years 1982, 1983, 2004 and 2008 (Fig. 7), and many years recorded RAI between 0.4 and 0.8, indicating that the zone received a good amount of rainfall; a few years fell in the range of RAI < 0.2.
ACZ VIII: The Northern Transition Zone experienced RAI = 0 only in the year 2000, and many years experienced RAI between 0.4 and 1.2 (Fig. 8); only a few years fell into the category of RAI = 0.15.
ACZ IX: The Hilly Zone experienced RAI = 0 in 1993, 1997, 2004, 2010 and 2018, and many years experienced RAI = +0.4 (Fig. 9). Compared with all other zones, the Hilly and Coastal Zones are the least drought-affected, as they receive more than 1500 mm average annual rainfall.
A comparative analysis of the rainfall pattern for the five years 2015-2019 across the 10 agro-climatic zones (Fig. 11) showed that the North Eastern Transition Zone recorded RAI = 0 most often (in 2016, 2017 and 2019), followed by the Northern Transition Zone in second place with RAI = +0.1.
In the Western Ghats, the IISc revealed a loss of evergreen forest cover from 16% to 11% between 1985 and 2018, with increasing anthropogenic pressure owing to unplanned developmental activities. The study reported that the loss of forest cover has affected local rainfall through changes in heat and weather processes, causing extreme weather conditions. Lower evapotranspiration following deforestation across the region caused a delay in the onset of the rainy season and a decline in the number of rainy days, with drier conditions (Indian State of Forest Report 2019). In addition to this deforestation, the occurrence of extreme weather events has increased in both frequency and intensity across Karnataka in recent decades. Hydro-meteorological disasters such as drought, flood, hailstorm, cyclone, heat wave, thunderstorm and lightning events have occurred in the state more frequently in recent years [9]. Between 2001 and 2019, the state experienced droughts of varying severity in 15 years and severe floods in 2005, 2009, 2018 and 2019. The recurrence of droughts and floods caused by changing rainfall patterns under climate change would be detrimental to surface and groundwater recharge and also poses a great challenge to water security.
Effect of Temperature on Mean Rainfall across Agro Climatic Zones in Karnataka
A simple graphical comparison of the two climate variables, temperature and rainfall, showed that rising temperature has a negative effect on the rainfall pattern (Fig. 12). An increase in rainfall was seen in the Hilly Zone, while all other zones showed an inverse relationship between rainfall and temperature rise over the 40 years.
CONCLUSION
Karnataka has warmed over the years, and climate variability has certainly occurred during the four-decade study period 1979-2019, as there was an increasing trend in both minimum and maximum temperature over the 40 years. Through the Rainfall Anomaly Index, it was noticed that all zones fall into the dry category, with RAI in the range of 0.2 to 0.4, and the North Eastern Transition Zone recorded RAI from below 0.2 to near zero. Therefore, an in-depth ground-level analysis is needed for the NETZ to determine the impact of climate change on agriculture and on the socio-economic condition of farmers, and to suggest measures to overcome the adverse impacts of climate variability on agricultural productivity.
Menopausal Symptoms, Postural Balance, and Functional Mobility in Middle-Aged Postmenopausal Women
The aim of the present study was to determine the associations between the severity of the menopausal symptoms and postural balance and functional mobility in middle-aged postmenopausal women. A cross-sectional study was performed (171 participants, 57.18 ± 4.68 years). Severity of the menopausal symptoms (on the Menopause Rating Scale), postural balance (stabilometric platform) with eyes open and closed, and functional mobility (timed up and go test) were determined. A multivariate linear regression was performed, with body mass index, waist to hip ratio, age and fall history as possible confounders. Our findings showed that a greater severity of the menopausal symptoms at a psychological level was associated, under both eyes open and closed conditions, with worse postural control as assessed by the length of the stabilogram (adjusted R² = 0.093 and 0.091, respectively), the anteroposterior center of pressure displacements (adjusted R² = 0.051 and 0.031, respectively) and the center of pressure velocity (adjusted R² = 0.065 for both conditions). Older age was related to greater mediolateral displacements of the center of pressure with eyes open and closed (adjusted R² = 0.045 and 0.058, respectively). There were no associations between the menopausal symptoms' severity and functional mobility. We can conclude that a greater severity of psychological menopausal symptoms was independently associated with worse postural balance in middle-aged postmenopausal women.
Introduction
Menopause is defined as the permanent cessation of menstruation, confirmed after twelve consecutive months of amenorrhea. Menopause is characterized by physiological, psychosocial, and sociological changes associated with the reduction in ovarian function [1]. Menopausal symptoms include vasomotor symptoms such as hot flashes and night sweats, physical and mental fatigue, sleep problems and urogenital symptoms such as vaginal dryness and bladder and sexual dysfunctions [2,3]. It has also been shown that depression and anxiety symptoms, which are very prevalent in middle-aged women, increase during the climacteric period [4,5]. The menopause transition is also associated with increased body weight and changes in body composition [6]. In fact, estrogen decline has been linked to increased visceral fat and central adiposity [7].
Falls are a major public health problem, being the second leading cause of unintentional injury deaths worldwide [8], thus, it is important to give priority to fall-related research to determine effective policies aimed at fall prevention [9]. Age is one of the most important fall risk factors. Older people have the highest risk of fall-related injury and death, and this risk increases with age [8]. Nevertheless, it has been reported that the number of co-morbidities increased the risk of a fall in women over 40 years old, and this risk escalated with additional co-morbidities if they were over 60 years [10]. This greater risk of falling, as well as the presence of menopause-related decreased bone mineral density and strength, increases the number of fractures and fall-related injuries, making falls a major health concern in postmenopausal women [11].
Altered balance and gait have been recognized as important fall risk factors [12]. It has been reported that the menopause transition is associated with balance deterioration [1], and that estrogen treatment increased balance performance in postmenopausal women [13]. Postural instability is associated with the risk of falling, and stabilometric parameters have been shown to predict falls in postmenopausal women aged 50-65 years old [14]. In the menopause, there is an increase in intra-abdominal fat and total body weight [15], and agedrelated loss of muscle mass and strength accelerate [16], and all these changes may affect functional mobility. Functional mobility is an adequate method to assess dynamic balance when performing activities during daily life, and thus, it is a key factor for preventing not only falls but also disability and dependency [17,18]. The timed up and go (TUG) test [19] is one of a range of measures identified in clinical guidelines as a possible screening tool to evaluate gait and balance functions and to identify older people at risk of falling [20].
The goal of the present study was to determine the possible associations between the impact of the menopausal symptoms and postural balance and functional mobility in middle-aged postmenopausal women. We hypothesized that a greater impact of the menopausal symptoms is associated with worse stabilometric parameters and longer times in performing the TUG test.
Study Participants
An analytical cross-sectional study was conducted on 171 postmenopausal women. Participants were recruited by contacting the staff of several associations of postmenopausal women from Granada and Jaén (Spain). This study was approved by the Research Ethics Committee of the University of Jaén, Spain (OCT.18/4.PRY). All participants gave their written informed consent to participate in this study, which was conducted in accordance with the Declaration of Helsinki, good clinical practices, and all applicable laws and regulations. Inclusion criteria were: 45-65 years old, at least 12 months of amenorrhea, being able to understand the instructions and complete the questionnaires, and willing to provide their written informed consent to participate in the study. Exclusion criteria were being under hormonal replacement therapy, taking vestibular sedatives or other central nervous system depressants, suffering from any chronic and/or severe medical diseases or any neuropsychiatric disorder that could limit balance and physical activity (i.e., vestibular or neurological conditions).
Sociodemographic and Anthropometric Data
All women were questioned by well-trained interviewers, who collected demographic and clinical data such as age, occupational status, education and marital status, and the number of falls experienced in the previous 12 months (history of falls). Women were classified as fallers if they experienced one or more falls in the last year. A fall was defined as unintentionally coming to rest on the ground, floor, or other lower levels [21]. Body mass index (BMI) was obtained by dividing body weight (kg) obtained with a 100 g-130 kg precision digital weight scale (Tefal), by height (m 2 ) calculated with an adult height scale (Asimed). A 1.5 m flexible tape was used to evaluate waist and hip circumferences. Waist to hip ratio (WHR) was calculated by dividing waist circumference (cm) by hip circumference (cm). As for BMI, values <25 indicated normal weight, between 25 and <30 overweight, and ≥30 obesity [22]. Regarding WHR, values <0.76 indicated a gynoid pattern of body fat distribution, between 0.76 and 0.86 a uniform pattern, and ≥0.86 an android pattern [23].
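The BMI and WHR arithmetic with the cut-offs listed above can be sketched as follows; the example measurements are invented for illustration.

```python
# BMI and waist-to-hip ratio with the cut-offs described above.
# BMI: <25 normal weight, 25 to <30 overweight, >=30 obesity.
# WHR: <0.76 gynoid, 0.76 to <0.86 uniform, >=0.86 android.

def bmi_category(weight_kg, height_m):
    bmi = weight_kg / height_m ** 2
    if bmi < 25:
        return bmi, "normal weight"
    if bmi < 30:
        return bmi, "overweight"
    return bmi, "obesity"

def whr_category(waist_cm, hip_cm):
    whr = waist_cm / hip_cm
    if whr < 0.76:
        return whr, "gynoid"
    if whr < 0.86:
        return whr, "uniform"
    return whr, "android"

bmi, bmi_label = bmi_category(70.0, 1.65)   # synthetic measurements
whr, whr_label = whr_category(80.0, 100.0)
```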
Postural Balance
In order to assess postural balance, we used a FreeMED resistive multisensor platform (Sensor Medica, Rome, Italy) and the Free-Step Standard 3.0 software (Sensor Medica, Rome, Italy). The Romberg test was performed under eyes-open (EO) and eyes-closed (EC) conditions. Participants stood barefoot, with their feet at a 30° angle, heels 2 cm apart, with their arms at their sides. Each session lasted 30 s, with a 1-min interval between sessions. The following parameters related to the center of pressure (CoP) under each condition were obtained: velocity of CoP displacements (V, mm/s), length of the stabilogram (L, mm), and the root mean square amplitude of CoP in the mediolateral (RMSX) and anteroposterior (RMSY) directions (mm). These stabilometric variables were obtained under both EO and EC conditions.
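The stabilometric parameters listed above (L, V, RMSX, RMSY) can be computed from a sampled CoP trajectory. This is an illustrative sketch, not the platform software's implementation; the short trajectory in the test is synthetic.

```python
# Stabilometric parameters from a CoP trajectory (x = mediolateral,
# y = anteroposterior, in mm) sampled at `fs` Hz.
import math

def stabilometry(xs, ys, fs):
    """Return (L, V, RMSX, RMSY) for a CoP trajectory."""
    n = len(xs)
    # L: total path length of the stabilogram (mm)
    length = sum(math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
                 for i in range(n - 1))
    velocity = length / ((n - 1) / fs)          # V: mean CoP velocity (mm/s)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # RMS amplitudes about the mean CoP position
    rmsx = math.sqrt(sum((x - mean_x) ** 2 for x in xs) / n)  # mediolateral
    rmsy = math.sqrt(sum((y - mean_y) ** 2 for y in ys) / n)  # anteroposterior
    return length, velocity, rmsx, rmsy
```

Longer L, higher V, and larger RMS amplitudes indicate worse postural control, which is the direction of the associations reported in the Results.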
Functional Mobility
In order to assess functional mobility, the TUG test was used [19]. This test has been proven a sensitive and specific instrument for identifying community-dwelling adults who are at risk of falls [24]. For this test, participants had to rise from a seated position on a chair, walk three meters, turn around, return, and sit down again. The time required to complete this test was recorded. Longer times correspond to worse functional mobility.
Severity of the Menopausal Symptoms
The Menopause Rating Scale (MRS) was used to assess the severity and the impact of menopausal symptoms on quality of life [25,26]. This scale has 11 items (scored from 0-4) organized in a total score and three domains: somatic (four items), psychological (four items), and urogenital (three items). Greater scores reflect a greater severity of menopausal symptoms. Values equal to or greater than 17 (total score), and 9, 7, and 4 for the somatic, psychological, and urogenital domains, respectively, indicate severity of the symptoms.
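The MRS domain structure and severity cut-offs described above can be sketched as a scoring function. The item ordering (which of the 11 items belongs to which domain) is an assumption made for illustration, not the scale's official layout.

```python
# Sketch of MRS scoring: 11 items scored 0-4, grouped into somatic (4 items),
# psychological (4 items) and urogenital (3 items) domains, with the stated
# severity cut-offs (>=17 total; >=9, >=7 and >=4 per domain).

CUTOFF = {"somatic": 9, "psychological": 7, "urogenital": 4, "total": 17}

def mrs_scores(items):
    """items: 11 scores (0-4); assumed order: somatic 0-3, psychological 4-7,
    urogenital 8-10."""
    assert len(items) == 11 and all(0 <= s <= 4 for s in items)
    scores = {
        "somatic": sum(items[0:4]),
        "psychological": sum(items[4:8]),
        "urogenital": sum(items[8:11]),
        "total": sum(items),
    }
    severe = {domain: scores[domain] >= CUTOFF[domain] for domain in scores}
    return scores, severe
```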
Sample Size Calculation
According to Concato et al. [27], in a multivariate linear regression model at least 20 subjects per analyzed variable are required for an adequate sample size. Four independent variables (MRS somatic, psychological, and urogenital domains, and the MRS total score), together with four possible confounders (age, BMI, WHR, and the history of falls), were used in this study, and hence, 160 participants were required for this analysis. The final number of participants was 171.
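The sample-size arithmetic above (20 subjects per modelled variable, 8 variables in total) reduces to a one-line check:

```python
# Rule-of-thumb sample size for the multivariate model: at least 20
# participants per variable entered into the regression.

def min_sample_size(n_variables, per_variable=20):
    return n_variables * per_variable

required = min_sample_size(4 + 4)  # 4 MRS scores + 4 confounders
```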
Data Analysis
Data management and analysis were performed with the SPSS statistical package for the social sciences for Windows (SPSS Inc., Chicago, IL, USA). Categorical variables were presented as frequencies and percentages, whereas continuous variables were described using means and standard deviations. The Kolmogorov-Smirnov test was used to evaluate normality. To evaluate the individual associations between stabilometric variables and MRS domains and total scores, as well as other confounders such as age, BMI, WHR and the history of falls, a bivariate correlation analysis was employed. A Student's t-test was performed to analyze differences regarding the history of falls. In order to study the multivariate independent associations between variables, a multivariate linear regression model was used, with the postural control parameters as dependent variables. Those independent variables and confounders exhibiting significant results (p < 0.05) in the bivariate analysis and the Student's t-test were included in the multivariate linear regression. In order to calculate the effect size coefficient of multiple determination in the linear models, we used adjusted-R². According to Cohen [28], adjusted-R² can be classified as insignificant when <0.02, small if between 0.02 and 0.15, medium if between 0.15 and 0.35, and large if >0.35. A 95% confidence level was used (p < 0.05).
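Cohen's bands for the adjusted-R² effect size quoted above map directly to a small classifier:

```python
# Cohen's [28] classification of the adjusted-R-squared effect size:
# <0.02 insignificant, 0.02-0.15 small, 0.15-0.35 medium, >0.35 large.

def cohen_effect_size(adj_r2):
    if adj_r2 < 0.02:
        return "insignificant"
    if adj_r2 < 0.15:
        return "small"
    if adj_r2 < 0.35:
        return "medium"
    return "large"
```

Applied to the study's own values, an adjusted-R² of 0.093 (length of the stabilogram, eyes open) falls into the "small" band.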
Results
Sociodemographic and clinical characteristics of the participants are presented in Table 1. Of the 171 women (57.17 ± 4.71 years), 18.71% reported having experienced a fall in the last year. Mean values of BMI and WHR indicated overweight (only 16.2% were obese) and a uniform body fat distribution pattern, respectively. All MRS scores were considered non-severe. In the analysis of the individual associations between postural control and the severity of menopause-related symptoms and hot flashes (Table 2), higher values in all stabilometric variables except RMSXEO, RMSYEO and RMSYEC were associated with a greater impact of the menopausal symptoms on the MRS total score (all p < 0.01), as well as on the psychological MRS domain or subscale (all p < 0.01), which was also related to RMSYEC (p < 0.05). A bigger impact of the symptoms at a somatic level was linked to elevated VEO (p < 0.01), LEO, VEC, LEC and RMSYEO (p < 0.05). No association between postural control and the MRS urogenital domain was found. As for functional mobility assessed by the TUG test, the analysis did not show any statistical correlation with the severity of the menopausal symptoms.
As for the possible confounders (Table 3), older age was associated with increased mediolateral displacements of the CoP with both eyes open and closed (p < 0.01). There were no differences regarding the history of falls for postural control or functional mobility.
The analysis of the independent associations performed by the linear regression (Table 4) showed that a greater severity of the menopausal symptoms at a psychological level was independently associated with a longer length of the stabilogram, a higher CoP velocity and larger anteroposterior CoP displacements under both eyes-open and eyes-closed conditions, while older age was independently associated with greater mediolateral CoP displacements under both conditions.
Discussion
The objective of the present study was to assess the associations between the severity of the menopausal symptoms and postural control and functional mobility in Spanish middle-aged postmenopausal women. Our findings suggest that, taking into account possible confounders such as age, BMI, WHR, and the history of falls, the impact of the menopausal symptoms at a psychological level was independently associated with worse postural balance. There were no associations between the severity of the menopausal symptoms and functional mobility.
Menopausal symptoms significantly affect the quality of life of middle-aged women [29]. It has been shown that, among women aged 40-64 years worldwide, 62%, 57%, and 50% reported muscle and joint pain, vasomotor symptoms, and sleep disorders, respectively [30]; these items are included in the MRS somatic domain. The results of the present study showed that the highest impact on quality of life was observed at the somatic level. In fact, the MRS somatic domain had the highest percentage of women with severe symptoms (MRS ≥ 9), followed by the urogenital and, finally, the psychological subscale. This is in accordance with the findings of a multicenter cross-sectional study conducted on postmenopausal women from 11 Latin American countries, although their percentages were lower than those found in the present study. This may be because the mean age in the Núñez-Pizarro et al. study [31] was lower (52.5 ± 4.7 years) and 50.2% of the participants were in the early postmenopausal stage (≤5 years since menopause onset).
It has been reported that physical and mental fatigue may influence postural control in older adults [32]. Besides, mood states and anxiety have been linked to alterations in the sensory and motor systems of balance control in healthy subjects [33]. This association may be explained by the neural connections between the brain areas for emotional control and those for controlling posture and balance [34]. When analyzing the independent associations between the severity of the menopausal symptoms and postural control, our results showed that only a greater impact of the psychological symptoms was related to worse postural control. These findings are in accordance with those previously described, since fatigue, anxiety, and depression are included in the MRS psychological domain. More precisely, these associations were found with all the posturographic parameters studied except for the mediolateral displacements of the CoP, which were only related to older age. This could be explained because, according to the inverted pendulum model, mediolateral postural sway is associated with the hip strategy for maintaining the standing position, which is mainly adopted by older adults [35]. Given that the participants of this study were under 65 years old (mean age of 57.18 years), it is possible that most of them adopted the ankle strategy, which affects the anteroposterior but not the mediolateral displacements of the CoP, and thus the lower mean values of the latter may limit its association with the menopausal psychological symptoms.
During the menopause transition, women are predisposed to experience musculoskeletal pain [36], which is associated with decreased physical activity and mobility. As for the psychological factors, depression, but not anxiety, has been previously related to poorer functional mobility and described in women aged 60 years and over [37]. On the other hand, Ercan et al. [38] described that self-perceived fatigue was linked with functional mobility in middle-aged obese women [39]. The results of the present study regarding functional mobility do not show any significant associations with the MRS total score and the domains. This could be due to the fact that the times in performing the TUG test were lower, and thus reflected better functional mobility than those of these previous studies.
Obesity is considered a fall risk factor in people aged 60 years and older [40]. It has recently been demonstrated that obesity, regardless of the presence of metabolic dysfunction, increases the risk of poor physical performance as assessed with several tests, including the TUG [41]. However, a study conducted on 5970 Spanish people aged ≥65 years (European Health Survey 2014) concluded that there was no association between BMI and the degree of functional limitation [42]. On the other hand, higher BMI and WHR have been related to worse postural balance [43], and several adiposity measures have been shown to be associated with reduced postural balance, with waist circumference showing the strongest association [44]. The results of this study did not reveal any associations between postural control or functional mobility and either BMI or WHR. This may be explained by the fact that the WHR and BMI mean scores indicated a uniform body fat distribution and overweight (with only 16.2% being obese), respectively.
The history of falls has been described as an important fall risk factor. Older adults who have experienced one or more falls are more likely to fall again, and fall history is considered an essential part of screening for falls [20]. Moreover, the history of falls is associated with the fear of falling again, which is linked to other negative consequences apart from falling, such as restriction or avoidance of activities of daily living, diminished social contact and poor quality of life [45]. As for postural control, Merlo et al. [46] described that the history of falls was associated with worse posturographic parameters assessed under different visual, proprioceptive, and cognitive conditions; however, we could not confirm this association. On the other hand, the association between the history of falls and functional mobility is not conclusive in the literature. TUG test times and a history of falls have been found to be related, but the clinical relevance of this association is limited [47]. Nevertheless, Asai et al. [48] found that slower times in the TUG test were related to fall history among independent community-dwelling older adults, while Kim et al. [49] reported that the TUG test score did not show a statistical relation to the history of falls. The findings of the present study did not show an association between the history of falls and either functional mobility or postural balance. This may be due to the younger mean age of the participants in the present study, and to the fact that the mean TUG test time was 7.86 ± 1.41 s, which is lower than that described as a cutoff point for discriminating people at fall risk [16].
Limitations
Some limitations to our study should be noted. The cross-sectional nature of the study design does not allow causal relationships to be established. Moreover, this study was carried out in postmenopausal women recruited from a specific geographical area, and thus any generalization of the results should be limited to people with characteristics similar to those of our sample. Although the validity and reliability of the TUG test have been shown in different populations regardless of age [50], the participants of the present study showed a mean score that can be considered low, which may be responsible for the lack of significant differences. The history of falls was obtained by interviews and self-reports that attempted to identify the number of falls retrospectively, thus introducing a recall bias that may have influenced the estimated frequency of falls. Future studies should consider prospective designs with a more diverse population, using tests other than the TUG to assess functional mobility (i.e., gait speed or multiple sit-to-stands), and employing a diary to record the number of falls in order to avoid recall bias.
Conclusions
The results of this cross-sectional study indicated that, considering possible confounders, a higher impact of menopause-related psychological symptoms was a significant predictor of poorer postural control. More specifically, a greater severity of menopausal symptoms at a psychological level was associated, under both eyes-open and eyes-closed conditions, with (i) a longer length of the stabilogram, (ii) a greater velocity of the CoP displacements, and (iii) larger anteroposterior displacements of the CoP.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data shown in this study are available upon request from the corresponding author. The data is not available to the public, since taking into account the sensitive nature of all the questions asked in this study, all participants were guaranteed that the data obtained would be confidential and would not be shared.
Conflicts of Interest:
The authors declare no conflict of interest.
Trait heritability in major transitions
Background: Increases in biological complexity and the origins of life's hierarchical organization are described by the "major transitions" framework. A crucial component of this paradigm is that, after the transition in complexity or organization, adaptation occurs primarily at the level of the new, higher-level unit. For collective-level adaptations to occur, though, collective-level traits (properties of the group, such as collective size) must be heritable. Since collective-level trait values are functions of lower-level trait values, collective-level heritability is related to particle-level heritability. However, the nature of this relationship has rarely been explored in the context of major transitions.
Results: We examine relationships between particle-level heritability and collective-level heritability for several functions that express collective-level trait values in terms of particle-level trait values. For clonal populations, when a collective-level trait value is a linear function of particle-level trait values and the number of particles per collective is fixed, the heritability of a collective-level trait is never less than that of the corresponding particle-level trait and is higher under most conditions. For more complicated functions, collective-level heritability is higher under most conditions, but can be lower when the environment experienced by collectives is heterogeneous. Within-genotype variation in collective size reduces collective-level heritability, but it can still exceed particle-level heritability when phenotypic variance among particles within collectives is large. These results hold for a diverse sample of biologically relevant traits.
Conclusions: Rather than being an impediment to major transitions, we show that, under a wide range of conditions, the heritability of collective-level traits is actually higher than that of the corresponding particle-level traits. High levels of collective-level trait heritability thus arise "for free," with important implications not only for major transitions but for multilevel selection in general.
Electronic supplementary material: The online version of this article (10.1186/s12915-018-0612-6) contains supplementary material, which is available to authorized users.
Background
Major transitions, or evolutionary transitions in individuality, are a framework for understanding the origins of life's hierarchy and of biological complexity [1,2]. During such a transition, a new unit of evolution emerges from interactions among previously existing units. This new unit, or collective, has traits not present before the transition and distinct from those of the units that comprise it (particles; see [3] for an in-depth discussion of collective-level traits). These collective-level traits are potentially subject to selection. Over the course of the transition, the primary level of selection shifts from the particle (lower-level unit) to the collective (higher-level unit), for example, from cells to multicellular organisms or from individual insects to eusocial societies.
Evolution by natural selection requires heritable variation in phenotypes that affect fitness at the level at which selection occurs [4,5]. The breeder's equation of quantitative genetics shows that heritability and strength of selection contribute equally to the adaptive response (see the "Analytical model" section below). When a collective-level trait is exposed to selection, it is collective-level heritability (the heritability of the collective-level trait) that determines the magnitude of the response. Collective-level heritability of traits is thus necessary for collective-level adaptations, but the emergence of collective-level heritability during a major transition has often been assumed to be difficult. For example, Michod considers the emergence of collective-level heritability through conflict mediation a crucial step in major transitions [2,6,7]. Simpson says that "From the view of some standard theory, these transitions are impossible," in part because particle-level heritability greatly exceeds collective-level heritability [8].
Major transitions can be conceptualized as a shift from MLS1 to MLS2, in the sense of Damuth and Heisler [5], as in Okasha [9] (see also Godfrey-Smith [10], Shelton and Michod [11]). In MLS1, properties of the particles are under selection; in MLS2, it is the properties of the collectives. We follow Okasha [9] in referring to the lower-level units in a transition as 'particles' and the higher-level units as 'collectives.' Although our biological analogies are presented in terms of cells as particles and multicellular organisms as collectives, in principle our model could be extended to any pair of adjacent levels.
According to Michod [6], "…the challenge of ETI [evolutionary transitions in individuality] theory is to explain how fitness at the group level in the sense of MLS2 emerges out of fitness at the group level in the sense of MLS1." But fitness, or selection, is only half of the breeder's equation. Predicting the response to selection requires an estimate of heritability. Whether or not collective-level fitness in MLS2 is a function of particle-level fitness is a matter of some disagreement (for example, Rainey and Kerr say no [12]). However, collective-level phenotypes must be functions of particle-level trait phenotypes, unless we accept strong emergence, a philosophical position tantamount to mysticism [13]. The function may be complex and involve cell-cell communication, feedbacks, environmental influences, etc., but it is still a function that is, in principle, predictable from particle-level trait values.
Nevertheless, the relationship between the heritability of particle-level traits and that of collective-level traits has rarely been considered in the context of major transitions, leading Okasha [14] to wonder, "Does variance at the particle level necessarily give rise to variance at the collective level? Does the heritability of a collective character depend somehow on the heritability of particle characters? The literature on multi-level selection has rarely tackled these questions explicitly, but they are crucial." Similarly, Goodnight [15] says, "...we really do not have a good understanding of what contributes to group heritability, how to measure it, or even how to define it." While the role of selection has often been considered in the context of major transitions, the role of trait heritability has been relatively neglected. We examine relationships between particle-level heritability and collective-level heritability for several functions that express collective-level trait values in terms of particle-level trait values. For the simplest (linear) function, we derive an analytical solution for the relationship. For more complex functions, we employ a simulation model to explore the relationship over a range of conditions.
Analytical model
There are several ways to estimate heritability, the proportion of phenotypic variation explained by genetic variation. If the strength of selection is known, heritability can be estimated by back-calculating from the breeder's equation: R = h²S, where R is the response to selection, S the selection differential, and h² the narrow-sense heritability (i.e., the proportion of phenotypic variation explained by additive genetic variation). This can be rearranged as h² = R/S. Another method is to compare parent and offspring trait values: the slope of the parent-offspring regression is an estimator of heritability [16]. We use the latter method in the simulations described in the next subsection.
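As a minimal sketch of the regression estimator, consider a clonal population in which a parent and its offspring share a genetic value exactly and each adds independent developmental noise; all numbers here are illustrative, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative clonal population: phenotype = genetic value + noise.
# Offspring inherit the genetic value exactly (asexual reproduction).
n = 500
genetic_values = rng.normal(10.0, 2.0, size=n)   # heritable component, V_G = 4
sigma_dev = 1.0                                  # developmental noise, V_E = 1

parents = genetic_values + rng.normal(0.0, sigma_dev, size=n)
offspring = genetic_values + rng.normal(0.0, sigma_dev, size=n)

# Slope of the offspring-on-parent regression estimates heritability.
# For this setup the true broad-sense H^2 is V_G / (V_G + V_E) = 4/5 = 0.8.
slope = np.polyfit(parents, offspring, 1)[0]
print(f"estimated H2 = {slope:.2f}")
```

With clonal inheritance the expected slope is the broad-sense heritability; with sexual reproduction and midparent values it would instead estimate narrow-sense h².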
Since heritability can be defined as the proportion of phenotypic variance explained by genetic variance, one method of estimation is to partition total variance into its components using an analysis of variance. We employ this approach in an analytical model to derive the relationship between the heritability of a collective-level trait and that of the particle-level trait from which it arises. For the sake of tractability, we begin with the simplest case, assuming that the size (number of particles) of collectives is fixed and that the collective-level trait value is a linear function of the particle-level trait values. We further assume that reproduction is asexual, so the proper measure of heritability is broad-sense heritability, H², which describes the proportion of phenotypic variation explained by all genetic variation [17].
We imagine a population in which collectives are made up of particles and genetically distinct clones are made up of collectives. As a concrete example, we can think of a population of undifferentiated volvocine algae, such as Gonium, in which case the particles are cells and the collectives are colonies. Because of asexual reproduction, many genetically-identical collectives may comprise a clone. Genetic variation among clones may arise through mutation or because the population is facultatively sexual, in which case these results will only hold for evolution within the asexual phase (in the Gonium example, during the summer bloom that precedes autumn mating and winter dormancy).
Broad-sense heritability is the ratio of genetic variance (V_G) to total phenotypic variance (V_P), estimated as the ratio of among-clone variance to total phenotypic variance [17]. Inherent in this concept is that genetically identical individuals are not always phenotypically identical; V_P includes both genetic and non-genetic variation. Non-genetic variation can arise from maternal effects, environmental (including microenvironmental) effects, and random developmental noise. Phenotypic variation among genetically identical individuals has been extensively documented, including in bacteria [18,19], unicellular eukaryotes [20], plants [21], animals [17], and volvocine algae [22].
We used an ANOVA framework to estimate heritability as a ratio of sums of squares. Strictly speaking, heritability is a ratio of variances, not of sums of squares. However, the ratios of the relevant sums of squares converge to that of the variances as the number of categories increases, and for all but tiny or genetically uniform biological populations, the difference between the two ratios is negligible (see Additional file 1).
The ANOVA approach yields the following ratio of collective-level heritability, H²_z, to particle-level heritability, H²_y, the derivation of which is explained in the "Methods" section:

H²_z / H²_y = (SSA + SS(B/A) + SS(C/B)) / (SSA + SS(B/A))

where SSA is the sum of squared deviations from the mean among clones, SS(B/A) that among collectives within clones, and SS(C/B) that among particles within collectives. Given the assumptions of the model (linear function, constant particle number, clonal reproduction), collective-level heritability is therefore never less than particle-level heritability (i.e., the ratio of heritabilities is never less than 1) and is greater unless SS(C/B) = 0, in other words, unless particles within each collective have identical phenotype. Although we have derived this relationship assuming that the collective-level trait value is the average of particle-level trait values, the result holds for any linear function.
The results of the analytical model hold for all but tiny and/or extremely genetically depauperate populations. For example, for a population comprised of ten clones each with ten collectives, the approximation differs from the true value by less than 1%. The number of particles within a collective does not play a role, so our results are relevant even early in a major transition, when the collectives are likely to be small. For most real biological populations, the difference between the true heritability and the sums of squares approximation will be negligible (see Additional file 1 for a simple numerical example).
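As a numerical sanity check on this variance partition, here is a minimal sketch on a simulated balanced clonal design (all parameter values hypothetical); the nested sums of squares are computed directly, and heritability at each level is estimated as among-clone sum of squares over the total at that level, the collective trait being the (linear) mean of particle values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Balanced clonal design: clones -> collectives -> particles.
n_clones, n_coll, n_part = 20, 10, 8
clone_means = rng.normal(10.0, 2.0, size=n_clones)        # genetic values

# Particle phenotype = clone mean + collective-level effect + particle noise.
coll_effect = rng.normal(0.0, 0.5, size=(n_clones, n_coll))
y = (clone_means[:, None, None]
     + coll_effect[:, :, None]
     + rng.normal(0.0, 1.0, size=(n_clones, n_coll, n_part)))

grand = y.mean()
clone_bar = y.mean(axis=(1, 2))          # per-clone means
coll_bar = y.mean(axis=2)                # per-collective means

# Nested sums of squares.
SSA = n_coll * n_part * ((clone_bar - grand) ** 2).sum()      # among clones
SSBA = n_part * ((coll_bar - clone_bar[:, None]) ** 2).sum()  # collectives within clones
SSCB = ((y - coll_bar[:, :, None]) ** 2).sum()                # particles within collectives

# Among-clone SS over total SS at each level.
H2_particle = SSA / (SSA + SSBA + SSCB)
H2_collective = SSA / (SSA + SSBA)

print(H2_collective / H2_particle)   # never below 1, since SS(C/B) >= 0
```

Because SS(C/B) is a sum of squares, the printed ratio cannot fall below 1, matching the analytical claim.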
Simulation model
The correspondence between particle-level and collective-level trait values is likely to be more complicated than a linear relationship for many interesting and biologically relevant cases. Here, we explore more complicated trait mapping functions using a simulation model. In this model, we consider two sources of non-genetic effects on particle phenotype (Fig. 1), each of which should lower the heritability of both particle- and collective-level traits. The first is intrinsic reproductive stochasticity in particle phenotype, analogous to developmental instability [23]. In the model, we determine the phenotype of daughter cells by sampling from a distribution centered on the parent's genetic mean, with standard deviation σ. As shown in the analytical model above, by averaging out this variation, collectives can gain a heritability advantage over cells.
Our simulation also considers the phenotypic effects of environmental heterogeneity. Here, we model collectives as independently experiencing different environmental conditions that affect the phenotypes of all cells within them in the same manner. To extend the biological analogy offered above, Gonium colonies growing near the surface of a pond (where light and CO₂ are abundant) may form colonies with larger cells than clonemates near the bottom. We implemented this in our model by assigning a size modifier, drawn from a normal distribution centered on 1 with standard deviation σ′, to each collective. We then multiplied the phenotype of each particle within the collective by this modifier. This source of phenotypic heterogeneity should reduce the heritability of collectives more than particles, simply because collectives experience a relatively higher frequency of stochastic events than particles do (each collective gets assigned a different size multiplier, but every particle within that collective experiences the same size multiplier).
We examine the effect of each of the above sources of phenotypic variation independently for the example of cells (particles) within nascent multicellular organisms (collectives). For a linear relationship, collective size is simply the sum of the sizes of cells within the collective. For both cells and collectives, heritability is assessed by calculating the slope of a linear regression on parent and offspring phenotype [16]. In this simple case, mean collective-level heritability is always greater than or equal to cell-level heritability. Only when σ = 0 (i.e., when all cells within a collective have identical phenotype) are cell- and collective-level heritability equal, in agreement with the analytical model. Greater developmental instability for cell size increases the advantage of collective-level heritability over cell-level heritability (Fig. 2a). Larger collectives, which average out cellular stochasticity more effectively, experience a greater increase in heritability than smaller collectives (Fig. 2a). Note that the simulations run in Fig. 2a reflect a very patchy environment in which environmental effects on cell size within collectives are large (σ′ = 0.25). While our model is not explicitly spatial, when σ′ is high, different collectives experience different environmental effects on their mean cell size, simulating the effects of a patchy environment.
Fig. 1 Two non-genetic modifiers to cell phenotype. There are two non-genetic influences on particle phenotype (cell size in this example) in our model: developmental instability, a stochastic effect that varies a particle's phenotype from its genetic mean (with standard deviation σ), and environmental effects, which modify the phenotype of all particles in a collective by the same amount (with standard deviation σ′).
Increasing the magnitude of these environmental effects on cell size diminishes the difference in heritability between collectives and cells, but mean collective-level heritability is still greater than cell-level heritability for all parameter combinations (Fig. 2b). The simulations yielding Fig. 2a, b are provided as Additional files 2 and 3, respectively.
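The two noise sources can be sketched in a toy one-generation simulation (all parameter values hypothetical; the collective trait is total cell size, and heritability is the parent-offspring regression slope, as in the text):

```python
import numpy as np

rng = np.random.default_rng(2)

def heritabilities(n_cells=32, sigma=0.25, sigma_env=1e-4, n_lineages=600):
    """Toy clonal model. Each lineage has a genetic mean cell size g.
    A cell's size is drawn around g (developmental noise, sd sigma), and
    every cell in a collective is then scaled by one per-collective
    environmental modifier (sd sigma_env). Returns the offspring-on-parent
    regression slope at the cell level (paired cells) and at the
    collective level (summed cell sizes)."""
    g = rng.normal(1.0, 0.2, size=n_lineages)

    def generation():
        cells = rng.normal(g[:, None], sigma, size=(n_lineages, n_cells))
        env = rng.normal(1.0, sigma_env, size=(n_lineages, 1))
        return cells * env

    parent, child = generation(), generation()
    h2_cell = np.polyfit(parent.ravel(), child.ravel(), 1)[0]
    h2_coll = np.polyfit(parent.sum(axis=1), child.sum(axis=1), 1)[0]
    return h2_cell, h2_coll

# High developmental noise, negligible environmental heterogeneity:
# summing over cells averages out sigma, so the collective slope is higher.
h2_cell, h2_coll = heritabilities(sigma=0.25, sigma_env=1e-4)
print(f"cell-level H2 = {h2_cell:.2f}, collective-level H2 = {h2_coll:.2f}")
```

Raising `sigma_env` in this sketch adds non-heritable variance shared by all cells of a collective, which erodes the collective-level advantage, qualitatively as in Fig. 2b.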
The volume of the cellular collective (Figs. 2 and 3a), which is simply the sum of the cell volumes within it, represents the simplest function mapping cellular to multicellular trait values. We now consider more complicated nonlinear functions relating cellular to multicellular trait values, some of which have biological relevance to the evolution of multicellularity. The first nonlinear collective-level trait we consider is its diameter. Large size is thought to provide a key benefit to nascent multicellular collectives when they become too big to be consumed by gape-limited predators [24,25]. For a collective that is approximately spherical, the trait that actually determines the likelihood of being eaten is diameter, which is therefore an important component of fitness. For geometric simplicity, we assume that the cells within the collective are pressed tightly together into a sphere. Collective volume (Fig. 3a) and diameter (Fig. 3b) exhibit similar dynamics, with collective-level heritability always exceeding cell-level heritability and being maximized under conditions of strong cell size stochasticity (high σ) and no environmental heterogeneity (low σ′).
Next, we consider swimming speed as a function of cell radius. We based this simulation on the hydrodynamics model of volvocine green algae derived by Solari et al. [26]. For simplicity, we modeled 32-celled, undifferentiated collectives (GS colonies in [26]), which would be similar to extant algae in the genus Eudorina. In this model, the swimming force of cells is independent of cell size, so, as cells get larger, the collective will become heavier (more negatively buoyant) without a corresponding increase in total swimming force, and therefore, its upward swimming speed will decrease. Thus, upward swimming speed is a monotonically declining function of cell radius (Fig. 3c inset), unlike the functions for volume and diameter (Fig. 3a, b insets), both of which are monotonically increasing. Nevertheless, the general behavior of heritability is very similar to the previous ones, and for a wide range of parameter values, the collective-level trait has a higher heritability than the cell-level trait (Fig. 3c).
Fig. 2 Collective-level heritability of size is greater than particle-level heritability for size. In a, we hold the effect of the environment fixed (standard deviation σ′ = 0.25) and vary the degree of developmental instability for particle size σ: 10⁻⁴ (purple), 0.0625 (blue), 0.125 (green), 0.1875 (yellow), 0.25 (red). In the absence of developmental instability for size, collective- and cell-level heritabilities are identical. Greater developmental instability increases relative collective-level heritability. b Here, we hold developmental instability fixed at σ = 0.25 and vary between-collective environmental effects on cell size from σ′ = 10⁻⁴ (purple) to 0.25 (red). When developmental instability is nonzero, larger collectives improve collective-level heritability. We ran ten replicates of each parameter combination and simulated populations for nine generations of growth.
Next, we consider a function describing a collective's survival rate in the presence of a predator that can only consume collectives below a certain size. We calculated the survival rate (c) as a logistic function of the collective's radius, effectively assuming that predation efficiency drops off quickly when collectives reach a threshold size (Fig. 3d inset). As with the previous functions (Fig. 3a-c), collective-level heritability is greater than cell-level heritability for much of the trait space and is maximized under conditions of high cellular stochasticity (σ) and low environmental heterogeneity (σ′; Fig. 3d).
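The trait-mapping functions above can be sketched as follows; the tight spherical packing and the logistic parameters (`threshold`, `steepness`) are illustrative assumptions, not values from the paper:

```python
import math

def collective_volume(cell_radii):
    """Linear map: collective volume as the sum of spherical cell volumes."""
    return sum(4.0 / 3.0 * math.pi * r ** 3 for r in cell_radii)

def collective_diameter(cell_radii):
    """Nonlinear map: diameter of a sphere holding the total cell volume
    (assumes cells are pressed tightly into a sphere, as in the text)."""
    v = collective_volume(cell_radii)
    return 2.0 * (3.0 * v / (4.0 * math.pi)) ** (1.0 / 3.0)

def survival_rate(diameter, threshold=10.0, steepness=2.0):
    """Logistic survival under gape-limited predation; predation efficiency
    drops off quickly once the collective passes a threshold size.
    Parameter values here are hypothetical."""
    return 1.0 / (1.0 + math.exp(-steepness * (diameter - threshold)))

# 32 unit-radius cells: diameter = 2 * 32**(1/3)
radii = [1.0] * 32
d = collective_diameter(radii)
print(round(d, 2))   # -> 6.35
```

Because diameter grows as the cube root of summed volume, and survival is a steep sigmoid of diameter, small changes in cell size map very differently onto these collective traits, which is why their heritability surfaces need to be examined separately.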
Finally, we consider the case in which the simplifying assumption of constant cell number does not hold. Instead, the number of cells per collective fluctuates around the genetic mean N̄. In this case, each collective produces two new collectives, but the number of cells per new collective is a random variable drawn from a normal distribution with mean N̄ and coefficient of variation CV_N (the coefficient of variation for a normal distribution is the ratio of standard deviation to the mean).
Fig. 3 Relative heritability of various collective-level traits to cell-level heritability for size. Here, we examine the heritability of four multicellular traits that depend on the size of their constituent cells, relative to cellular heritability for size. The relationships between the size of the cells within collectives and the multicellular traits are shown as insets. We consider three biologically significant traits with different functions mapping the size of cells within the collective onto collective phenotype. The heritability of collective size (a) and diameter (b) is always higher than cell-level heritability for size and is maximized when cellular developmental noise is greatest and among-collective environmental effects are smallest (lower right corner). We modeled swimming speed (c) based on the model of Solari et al. [26] for volvocine green algae. We also considered survival rate under predation as a logistic function of radius (d). Like a and b, collective-level heritability is highest relative to cell-level heritability when environmental heterogeneity is minimal. Pink contours denote relative heritability of 1. In these simulations, we consider 32-cell collectives grown for seven generations. The colormap denotes collective-level heritability divided by cell-level heritability for size across 1024 σ, σ′ combinations.
We chose to represent variation in the number of cells per collective as CV_N instead of standard deviation so that the scale of variation would not change with the size of the collective.
Variation in cell number, unlike the developmental and the environmental variation, does not affect the heritability of cells, only that of collectives. Therefore, we expected that increasing CV_N would decrease the ratio of collective-level to cell-level heritability. The simulation shows that CV_N has a strong effect on collective-level heritability (Fig. 4). As CV_N increases, the ratio of collective- to cell-level heritabilities decreases, falling below one when the magnitude of σ is similar to or smaller than that of CV_N (Fig. 4).
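The fluctuating cell number can be sketched like this, drawing daughter-collective sizes from a normal distribution with mean N̄ and standard deviation CV_N · N̄; rounding to a positive integer count is our added assumption:

```python
import numpy as np

rng = np.random.default_rng(3)

def offspring_cell_counts(mean_n=32, cv_n=0.25, n_collectives=1000):
    """Draw cell numbers for daughter collectives from a normal
    distribution with mean mean_n and sd cv_n * mean_n, then round to a
    positive integer (the rounding/flooring rule is our assumption)."""
    draws = rng.normal(mean_n, cv_n * mean_n, size=n_collectives)
    return np.clip(np.rint(draws), 1, None).astype(int)

counts = offspring_cell_counts()
print(round(counts.mean(), 1))              # close to the genetic mean, 32
print(round(counts.std() / counts.mean(), 2))  # realized CV, close to 0.25
```

Because this draw is made independently for every daughter collective, none of the resulting size variation is heritable, which is why it depresses collective-level heritability without touching cell-level heritability.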
Discussion
When particles (cells, for example) first form collectives (multicellular colonies, for example), traits inevitably come into being that did not previously exist [3]. Some such traits, such as collective size, will be simple functions of particle-level traits; others will be more complex functions. We are interested in how these new collective-level traits respond to selection, and the breeder's equation tells us that the response is proportional to the trait's heritability (for a given strength of selection). We can predict that heritability if we know two things: the heritability of the underlying individual-level trait and the relationship between particle-level and collective-level heritability. Estimating the former is straightforward; if we can define the latter, we can in principle predict the efficacy of selection on the new trait.
Using a quantitative genetics framework, we have derived an analytical solution for the relationship between particle-level and collective-level heritability in clonal populations for a limited case. When particle number is constant and the collective-level trait value is a linear function of the particle-level trait values, the organismal heritability turns out to be a simple function of the cell-level heritability. In contrast to claims that particle-level heritability is always higher than collective-level heritability (e.g., [8]), we have shown that collective-level heritability is higher over a wide range of conditions. Because this result depends on the number of clones and the number of colonies within a clone, it may not hold for very small populations or those with little genetic variation. This is not a major limitation, though, since tiny, genetically homogeneous populations are unlikely to be the ones experiencing selectively driven evolutionary transitions in individuality.
This analytical result is a step toward understanding the relationship between heritabilities at two adjacent hierarchical levels, but the assumptions of constant particle number and linear function are restrictive. The simulation model shows that the results are somewhat dependent on the function relating the trait values at the two levels. However, these functions were chosen to be diverse, and the behavior of the relative heritabilities is nevertheless qualitatively similar, increasing with cellular developmental variation (σ), decreasing with environmental heterogeneity (σ′), and exceeding 1 for most of the parameter space.
Fig. 4 Relative heritability of collective size to cell size when the number of cells per collective varies. When the coefficient of variation for cell number per collective (CV_N) is low, collective-level heritability is always higher than cell-level heritability, but this advantage is undercut by increased variation in cell number. The ratio of collective- to cell-level heritability is maximized when developmental variation in cell size (σ) is large and variation in the number of cells per collective is zero. The pink contour denotes a ratio of collective-level to cell-level heritability of 1. In these simulations, we consider collectives with a genetic mean of 32 cells grown for seven generations. The colormap denotes collective-level heritability divided by cell-level heritability for size across 1024 σ, CV_N combinations.
Of course, we have not (and cannot) comprehensively explored the universe of possible functions relating collective-level traits to particle-level traits. What we have done is explore a small sample of this space, with functions ranging from extremely simple (volume) to somewhat more complex (swimming speed, survival under predation). We do not claim that the high heritabilities estimated for these collective-level traits would apply to all such traits, and a full accounting of possible functions is beyond the scope of this (or any) study. Rather, we have shown that for at least some such functions, the resulting collective-level traits can have high heritability, and thus be altered by selection, early in an evolutionary transition in individuality.
All four of the collective-level traits in the simulation models are potentially biologically relevant. Volume and diameter are both aspects of size, which can be an important component of fitness both in evolutionary transitions in individuality [27] and in life history evolution [28]. Swimming speed is a measure of motility, which has selective consequences for a wide range of organisms, including many animals and microbes. For planktonic organisms, a positive upward swimming speed provides active control of depth, allowing some control over light intensity (for autotrophs) and prey abundance (for heterotrophs). Survival under predation obviously has important fitness implications for many organisms, and both theoretical and experimental evidence implicate predation as a possible selective pressure driving the evolution of multicellularity. Kirk, for example, suggests that a "predation threshold" above which algae are safe from many filter feeders may have driven the evolution of multicellularity in the volvocine algae [29]. Microbial evolution experiments in the algae Chlorella and Chlamydomonas have shown that predation can drive the evolution of undifferentiated multicellular clusters [30][31][32].
In our simulations, we examined the effects of three independent sources of phenotypic variation affecting the relative heritability of particle and collective-level traits. Stochastic variation in cell size around the clone's genetic mean (σ) reduces the absolute heritability of cells and collectives by introducing non-heritable phenotypic variation. By averaging across multiple cells, however, collectives reduce the effects of this phenotypic variation, providing them with a relative heritability advantage over cells.
We also considered the effect of environmental heterogeneity in which all of the cells within a collective are affected in the same manner (σ'). Collectives are disproportionately affected: each collective is assessed a different size modifier, but all of the cells within these collectives are affected in the same manner. As a result, collectives experience n-fold more stochastic events (where n is the number of cells per collective), which reduces their heritability relative to cells. The influence of these sources of variation is evident in the contour plots of Fig. 3: the relative heritability of collectives to cells is maximized when the cellular stochastic variation is high and environmental heterogeneity low (lower right corner of the plots).
Finally, we considered variation in the number of particles per collective. Such variation substantially reduces the heritability of a collective-level trait. Even with reasonably large variation in collective size, though, the collective-level trait retains most of the heritability of the particle-level trait on which it is based (for example, 55% at a CV_N in particle number of 0.25). A large number of previous studies have addressed heritability in the context of multilevel or group selection. Heritability above the individual level has been called group heritability (e.g., [33]), populational heritability (e.g., [34]), community heritability (e.g., [35]), and heritability of the family mean [16]. These prior treatments differ from ours in one or more of the following respects: they are concerned with the evolution of individual-level traits rather than group-level traits (particle- rather than collective-level in our terminology), are based on MLS1 rather than MLS2 models, and are focused on narrow-sense rather than broad-sense heritability. Furthermore, few previous studies have addressed these questions in the context of the major transitions. Without attempting a comprehensive review, we summarize several such studies, and important differences from our own, below.
Queller [33] presents a useful reformulation of the Price equation for selection at two levels:

ΔG = h²_b S_b + h²_w S_w

in which ΔG is the change in average trait value, S_b and S_w are the selection differentials between collectives and within collectives, respectively, and h²_b and h²_w are the heritabilities of the collective-level and individual-level traits, respectively. This formulation partitions the response to selection on a particle-level trait into within- and among-collective change, but the focus is still on particle-level traits. Our focus is on the evolution of collective-level traits. In the terminology of Damuth and Heisler [5], our focus is on MLS2, while Queller's is on MLS1. In addition, Queller makes no attempt to derive the relationship between collective-level heritability and particle-level heritability.
Michod and Roze [2] have previously modeled the relationship between particle-level and collective-level heritability of fitness during a major transition. However, as Okasha [14] points out, the heritability of fitness only ensures that mean population fitness will increase over time. For selection to result in directional phenotypic change, it is phenotypes that must be heritable. Furthermore, Michod and Roze focused on within-organism genetic change. Our models assume that such change is negligible, as is likely to be true early in a transition, when collectives (e.g., nascent multicellular organisms) presumably include a small number of clonally replicating particles (e.g., cells).
Okasha [36] considers heritability in MLS1 (which he refers to as group selection 2) and MLS2 (his group selection 1) but does not attempt to derive a relationship between heritabilities at two levels. Knowing the ratio of heritabilities is necessary, though not sufficient, to predict the outcome of opposing selection at two levels and so has important implications for collective-level traits that arise from cooperation among particles. The presumed higher heritability of the particle-level traits has been seen as a problem for the evolution of cooperation that benefits the collective [2,8,[37][38][39]. Our results show that this problem does not always exist, though we would need to know not only the relative heritabilities but also the relative strengths of selection to predict the outcome of opposing selection at two adjacent levels.
Several previous papers have shown that group-level heritability (collective-level heritability in our terminology) exists and can be substantial. Slatkin [34], for example, showed that one measure of group-level heritability, fraction of total variance between lines, is substantial both in an analytical model and in the Tribolium experiments of Wade and McCauley [40]. Under some conditions, the between-line variance of a linear trait such as the one we consider in our analytical model exceeds the within-line variance.
Bijma et al. [41,42] and Wade et al. [43] showed that variance in the total breeding value of a population can be increased, even to the point of exceeding phenotypic variance, by interactions among individuals. Our model does not consider (or require) interactions among individuals. Further, their model and empirical example are exclusively concerned with individual-level traits (particle-level traits in our terminology), for example, survival days in chickens. They do not estimate group heritability as such and judge that "it is unclear how this parameter should be defined or estimated." Goodnight [44] addresses the effect of environmental variance at two scales on the responses to individual and group selection in Arabidopsis thaliana. Although Goodnight's study focused on an individual-level trait (leaf area, thus an MLS1 scenario) of an obligately sexual organism (thus narrow-sense heritability), our results (shown in Fig. 3) showed analogous effects. In both cases, environmental variation at a fine scale (analogous to our σ) increased the efficacy of group selection relative to individual selection, while environmental variation among demes (analogous to our σ′) had the opposite effect. Wade [45] examines a similar case from a theoretical perspective and finds that increased relative efficacy of group selection results from environmental variation among particles decreasing particle-level heritability.
Goodnight [15] considers the ratio of group-level heritability to individual-level heritability (in the narrow sense) using contextual analysis. Although this paper does not provide a formula to calculate this ratio, its inequality 5 sets a minimum bound (with the assumption that selection at the two levels is in opposition). As in our analyses, Goodnight shows that group-level heritability can exceed individual-level heritability in some circumstances.
Several simplifying assumptions underlie our models, most importantly the genetic identity of particles within collectives. This condition only applies to a subset of the major transitions. Queller recognized two subcategories within Maynard Smith and Szathmáry's [1] list of transitions, which he called "egalitarian" and "fraternal" transitions [46]. Briefly, egalitarian transitions involve particles that may be genetically distinct, or even from different species, such as the alliance of a bacterium with an archaeon that gave rise to the eukaryotic cell. Fraternal transitions are those in which the particles are genetically similar or identical, such as the origins of eusociality and of most multicellular lineages.
Because of our assumptions of asexual reproduction and genetic identity among particles, we cannot generalize our results to all types of major transitions. Egalitarian transitions will not normally meet this criterion. A possible exception is aggregative multicellularity, as seen in cellular slime molds and myxobacteria, when assortment is so high that fruiting bodies are genetically uniform. This is probably uncommon [47], but it does happen [48,49]. Transitions in which reproduction of particles is obligately sexual, such as the origins of eusociality, also violate this assumption.
A better fit for our models is clonal multicellularity, which is probably the most common type of major transition. An incomplete list of independent origins of clonal multicellularity includes animals; streptophytes; chytrid, ascomycete, and basidiomycete fungi; florideophyte and bangiophyte red algae; brown algae; peritrich ciliates; ulvophyte green algae; several clades of chlorophyte green algae; and filamentous cyanobacteria [50][51][52][53]. In most cases, the early stages in these transitions probably violated the assumption of uniform particle number per collective, but our simulations show that our main results are robust to reasonable violations of this assumption.
One example that does approximate all of our assumptions is that of the volvocine green algae, an important model system for understanding the evolution of multicellularity. Volvocine algae undergo clonal reproduction only occasionally punctuated by sex, are small enough that within-collective mutation probably has negligible phenotypic effects, and have cell numbers that are under tight genetic control.
Conclusions
A great deal of work has gone into understanding the selective pressures that may have driven major evolutionary transitions. However, heritability is just as important as the strength of selection in predicting evolutionary outcomes. We have shown that, given some simplifying assumptions, heritability of collective-level traits comes "for free;" that is, it emerges as an inevitable consequence of group formation. Qualitatively, this result holds across a wide range of parameters and for a diverse sample of biologically relevant traits. Collective-level heritability is maximized (relative to particle-level heritability) when phenotypic variation among particles is high and when environmental heterogeneity and variation in collective size are low. Understanding the emergence of trait heritability at higher levels is necessary to model any process involving multilevel selection, so our results are relevant to a variety of other problems.
Analytical model
Treating particles and collectives separately, the phenotype of particle k in collective j within clone i can be expressed as

y_ijk = m + A_i + B_j(i) + C_k(ij),    (2)

where m is the mean genotypic value of all clones, A_i is the deviation of clone i from m, B_j(i) is the deviation of collective j from the mean of clone i, and C_k(ij) is the deviation of particle k from the mean of collective j within clone i. The model in (2) describes a nested ANOVA framework, in which the sums of squared deviations from the population mean are partitioned into among-clone, among-collectives-within-clones, and within-collective components: the among-clone component SSA (the sum of squared deviations of A from m), the sum of squared deviations of collectives within clones SS(B/A), and the sum of squares among particles within collectives SS(C/AB), where a, b, and c are the number of clones, collectives within a clone, and particles within a collective, respectively. The total sum of squares is their sum, SSA + SS(B/A) + SS(C/AB). Broad-sense heritability of a particle-level trait, H²_y, is the ratio of genetic variance to total phenotypic variance:

H²_y = SSA / [SSA + SS(B/A) + SS(C/AB)].    (7)

We now turn our attention to collective-level traits. The phenotype of collective j within clone i can be expressed as

z_ij = μ + α_i + β_j(i),    (8)

where μ is the mean genetic value of all clones, α_i is the deviation of clone i from μ, and β_j(i) is the deviation of collective j from the mean of clone i. The corresponding sums of squares are SSα (the sum of squared deviations of α from μ) and SS(β/α) (the sum of squares among collectives within clones), and the total sum of squares is SSα + SS(β/α). Broad-sense heritability of a collective-level trait, H²_z, is the ratio of genetic variance to total phenotypic variance,

H²_z = SSα / [SSα + SS(β/α)].    (12)

If the collective-level trait value is the average of cell-level trait values, z_ij = ȳ_ij•, z̄_i• = ȳ_i••, and z̄_•• = ȳ_•••. Thus SSα = cSSA, and SS(β/α) = cSS(B/A). Substituting into (12), we get

H²_z = cSSA / [cSSA + cSS(B/A)] = SSA / [SSA + SS(B/A)].    (13)

The ratio of collective-level heritability to particle-level heritability is thus

H²_z / H²_y = [SSA + SS(B/A) + SS(C/AB)] / [SSA + SS(B/A)].    (14)

This result holds for any linear function.
The substitution that gets us from (12) to (13) introduces the constant c, which scales both numerator and denominator and therefore cancels out. Different linear functions would change the magnitude of the constant relating SSα to cSSA and SS(β/α) to cSS(B/A) but not the fact that numerator and denominator are scaled by the same constant.
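The prediction that averaging over particles raises collective-level heritability can be checked with a minimal Monte Carlo sketch. All names and parameter values below are illustrative choices of ours, not taken from the paper: particles receive a genotypic mean plus a shared collective-level environmental deviation (σ′) and an individual particle-level deviation (σ), and each heritability is computed as among-clone (genetic) variance over total phenotypic variance.

```python
import random

random.seed(1)

a, b, c = 10, 200, 32            # clones, collectives per clone, particles per collective
sigma, sigma_prime = 0.2, 0.05   # particle-level and collective-level environmental SDs

# Genotypic means spaced evenly between 1 and 2, as in the simulation model
genotypes = [1 + i / (a - 1) for i in range(a)]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

particle_vals, collective_vals = [], []
clone_particle_means, clone_collective_means = [], []
for g in genotypes:
    clone_particles, clone_collectives = [], []
    for _ in range(b):
        e_coll = random.gauss(0, sigma_prime)        # environment shared by a collective
        cells = [g + e_coll + random.gauss(0, sigma) for _ in range(c)]
        clone_particles.extend(cells)
        clone_collectives.append(sum(cells) / c)     # collective trait = mean of cells
    particle_vals += clone_particles
    collective_vals += clone_collectives
    clone_particle_means.append(sum(clone_particles) / len(clone_particles))
    clone_collective_means.append(sum(clone_collectives) / len(clone_collectives))

# Broad-sense heritability: among-clone (genetic) variance over total phenotypic variance
H2_y = var(clone_particle_means) / var(particle_vals)
H2_z = var(clone_collective_means) / var(collective_vals)
```

Averaging over c cells shrinks the within-collective environmental variance by roughly a factor of c, so with σ > 0 and small σ′ the collective-level heritability H2_z exceeds the particle-level heritability H2_y, as in the analytical result.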
Simulation model
All simulation models were carried out using custom Python scripts. As in the analytical model, particles grow in clonal collectives, which reproduce by forming two new collectives, each with as many particles as its parent. The initial population is founded by ten genetically distinct clones, each of which has a different genetically determined mean particle phenotype (spaced evenly between 1 and 2). These are grown for at least seven generations, resulting in at least 127 collective-level reproductive events per genotype and 127n (where n is particle number per collective) particle-level reproductive events per genotype. For each function, we calculated the relative heritability of collective- to cell-level traits for 32-celled collectives across 1024 combinations of σ and σ′ ranging from 0 to 0.25.
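The generation counts quoted above can be sanity-checked with a short sketch of the doubling scheme (the function and variable names are ours): starting from one founder collective, seven rounds of splitting yield 1 + 2 + … + 64 = 127 collective-level reproductive events, and 127n particle-level events.

```python
def grow(n_particles, generations):
    """Each collective splits into two daughter collectives of the same size."""
    collectives = [n_particles]      # a single founder collective per genotype
    collective_events = 0
    particle_events = 0
    for _ in range(generations):
        next_gen = []
        for n in collectives:
            next_gen += [n, n]       # one collective-level reproduction event
            collective_events += 1
            particle_events += n     # n particle-level replications per split
        collectives = next_gen
    return collectives, collective_events, particle_events

collectives, collective_events, particle_events = grow(32, 7)
```

With 32-celled collectives and seven generations this gives 127 collective-level and 127 × 32 particle-level reproductive events, matching the minimum counts stated in the text.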
For the simulation of collective volume, we assume a linear relationship between cell volume and collective volume, specifically that collective volume is the sum of cell volumes. The simulation of collective volume is provided as Additional file 4.
For the simulation of collective diameter, we assume that the particles within the collective are pressed tightly together into a sphere, allowing us to calculate collective diameter as d = 2(3V/4π)^(1/3), where V is the sum of the cell volumes within the collective. The simulation of collective diameter is provided as Additional file 5.
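The diameter calculation above is a one-liner in code; here is a small helper (our own naming) that treats the collective as a sphere whose volume is the sum of its cells' volumes:

```python
import math

def collective_diameter(cell_volumes):
    """d = 2 * (3V / 4*pi)^(1/3), with V the summed cell volume."""
    V = sum(cell_volumes)
    return 2.0 * (3.0 * V / (4.0 * math.pi)) ** (1.0 / 3.0)

# A single cell of radius 1 (volume 4*pi/3) gives a collective of diameter 2;
# eight such cells double the diameter, since volume scales with the cube of radius.
```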
For the simulation of swimming speed, the function relating cell radius to upward swimming speed in Solari et al. [26] (Eq. 4 from [26]) can be simplified to an expression (Eq. 15) in which f is the average effective upward swimming force per cell, N is the number of cells per collective, η_w is water viscosity, r is the average radius of cells in the collective, and Δρ_c is the density difference between cells and water. Additional file 6 provides a more detailed description of the derivation of Eq. 15.
Using the numerical values in Solari et al. [26], η_w = 0.01 g/cm·s, Δρ_c = 0.047 g/cm³, and f = 2.4 × 10⁻⁷ g·cm/s², so Eq. 15 reduces to a function of cell radius and cell number alone. The simulation of swimming speed is provided as Additional file 7.
For the simulation of survival rate in the presence of a predator, we calculated the survival rate (c) as a logistic function of the collective's radius. The simulation of survival under predation is provided as Additional file 8.
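The logistic form can be sketched as follows; note that the paper does not state the midpoint or steepness it used, so both parameters below (r0 and k) are placeholder values of ours, chosen only to illustrate the shape of the function:

```python
import math

def survival_rate(radius, r0=1.0, k=5.0):
    """Logistic survival as a function of collective radius.

    r0 (midpoint radius, survival = 0.5) and k (steepness) are illustrative
    values, not parameters from the paper.
    """
    return 1.0 / (1.0 + math.exp(-k * (radius - r0)))
```

Survival rises monotonically with radius, so larger collectives escape predation more often, which is the selective pressure the simulation is meant to capture.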
For the simulation of variable cell number, we calculated the relative heritability of size (volume) for collectives and cells across 1024 combinations of σ and CV_N ranging from 0 to 0.25 with a mean cell number (N̄) of 32. The simulation of variable cell number is provided as Additional file 9.
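Per-collective cell counts with a target coefficient of variation can be drawn as in the sketch below. The choice of a normal distribution and the truncation at a minimum of one cell are our assumptions; the paper does not specify the sampling distribution.

```python
import random

random.seed(0)

def sample_cell_numbers(n_collectives, mean_n=32, cv_n=0.25):
    """Per-collective cell counts with mean mean_n and coefficient of variation cv_n.

    Assumes a normal distribution truncated at 1 cell (illustrative choice).
    """
    sd = cv_n * mean_n
    return [max(1, round(random.gauss(mean_n, sd))) for _ in range(n_collectives)]

counts = sample_cell_numbers(10000)
mean_count = sum(counts) / len(counts)
```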
Subcutaneous Adipose Tissue from Obese and Lean Adults Does Not Release Hepcidin In Vivo
Hepcidin is the main regulator of systemic iron homeostasis and is primarily produced by the liver but is also expressed, at the mRNA level, in peripheral tissues including the subcutaneous and visceral adipose tissue. Obesity is associated with elevated hepcidin concentrations and iron depletion, suggesting that the exaggerated fat mass in obesity could contribute significantly to circulating hepcidin levels, consequently altering iron homeostasis. The objective of this study was to determine if abdominal subcutaneous adipose tissue (AbScAT) releases hepcidin in vivo and if release is modified by obesity. Arterio-venous differences in concentrations of hepcidin were measured across AbScAT in 9 obese and 9 lean adults. Overall (n = 18), mean plasma hepcidin concentrations were significantly higher in arterialized compared to AbScAT venous samples [mean difference (arterialized − AbScAT venous plasma hepcidin) = 4.9 ± 9.6 ng/mL, P = 0.04]. Net regional release was not calculated because mean venous plasma hepcidin concentrations were lower than mean arterialized concentrations, indicating no net release. Significant correlations between AbScAT venous and arterialized plasma hepcidin concentrations and anthropometric variables were not observed. Findings from this vein drainage study suggest there is no net release of hepcidin from the AbScAT depot and thereby no ability to signal systemically, even in obesity.
INTRODUCTION
Hepcidin is a small peptide hormone that functions as the homeostatic regulator of systemic iron regulation and as the link between host defense and iron metabolism [1]. Hepcidin directs the flux of iron into plasma by regulating the ferroportin-1 (Fpn) exporter. Ferroportin-1 facilitates the export of iron from intestinal enterocytes, hepatocytes, reticuloendothelial macrophages, and the placenta [2]. When hepcidin binds to Fpn, the two proteins are internalized and degraded within lysosomes [3]. Sensing of circulating iron and iron stores is thought to occur in the liver which is the primary site of hepcidin production [2,3]. Hepcidin produced by the liver is simultaneously regulated by iron stores, anemia, hypoxia, and inflammation. When body iron levels are elevated or inflammation or infection is present, liver hepcidin production is increased, resulting in diminished Fpn expression and decreased iron bioavailability (i.e., cellular iron sequestration and decreased dietary iron absorption) [2][3][4]. Conversely, when body iron levels are depleted or anemia or hypoxia exists, liver hepcidin expression is minimal, allowing for increased dietary iron absorption and mobilization from body stores via active Fpn transporters. However, at any given time, hepcidin expression can be determined by the interplay of these pathways and the relative strength of each of the individual signals [2][3][4][5][6][7]. The bioactive form of hepcidin, hepcidin-25, is measurable in human plasma, serum, and urine [8,9].
Hepcidin is also expressed in other tissues including the heart, placenta, kidneys, and adipose tissue [4]. Expression within these tissues is believed to be regulated by hypoxia and inflammation and not by body iron stores [4]. The impact of acute inflammation on hepcidin mRNA expression in these tissues has been studied in both surgical and injury models. In a recent study, subcutaneous adipose hepcidin mRNA expression and cytokines were significantly increased immediately following cardiac surgery compared to subcutaneous adipose explants assessed before surgery [10]. Also, in an animal model, acute myocardial infarction was associated with increased hepcidin mRNA expression in the ischemic myocardium [11]. However, it is not known if hepcidin expressed in these peripheral tissues is actively secreted or has the potential to exert a systemic impact.
Obesity is associated with a multitude of metabolic abnormalities, including altered iron homeostasis. In both adults and children, obesity is linked to diminished iron stores and impaired iron absorption despite adequate dietary iron intake [12][13][14][15][16][17][18][19][20][21][22]. Several researchers have demonstrated that serum levels of hepcidin are significantly elevated in obese compared to lean women and children [18,19,23], suggesting that hepcidin may play a significant role in iron dysregulation observed in obesity. As individuals become obese, their adipose tissue enlarges and undergoes a series of alterations, including increased production of several proinflammatory factors such as TNF-α and IL-6 [24]. Previous research has demonstrated that several of these proteins, referred to as adipokines, including IL-6 and leptin, are actively secreted by the adipose tissue and can act as endocrine signals bringing about systemic metabolic impact [25,26]. Recently, Bekri et al. [27] reported that subcutaneous and visceral adipose hepcidin mRNA expression was significantly higher in obese compared to lean women, while liver mRNA expression was similar. This discovery led to the hypothesis that the altered systemic iron homeostasis and increased serum levels of hepcidin observed in obese individuals are, in part, a result of inflamed, exaggerated fat mass secreting bioactive hepcidin into circulation. However, to date, there are no data demonstrating in vivo release of hepcidin from human adipose tissue. Therefore, the objective of this study was to determine if abdominal subcutaneous adipose tissue (AbScAT) releases hepcidin in vivo, and if this release is modified by obesity.
Subjects
Nine obese (body mass index (BMI) ≥30.0 kg/m²) and nine lean (BMI: 18.5-24.9 kg/m²) adult volunteers were recruited from the greater Oxford community by advertisement or from the Oxford BioBank [28]. Participants attended the Oxford Center for Diabetes, Endocrinology and Metabolism Clinical Research Unit at the National Institute for Health Research, Oxford Biomedical Research Center. The participants were fasted from 10 PM the night prior and asked to refrain from strenuous exercise and alcohol consumption for 24 hours before the study day. None of the subjects took medication known to affect iron or inflammatory status, and all were normoglycemic based on fasting glucose levels. The protocol was approved by the National Health Service/National Research Ethics Service, and all participants gave written, informed consent.
Anthropometric Measures
Subjects were asked to remove shoes, outdoor clothing, and excess items from their pockets before anthropometric measures were obtained. Weight was measured to the nearest 0.01 kg using an electronic scale. Height was measured to the nearest 0.5 cm using a wall-mounted stadiometer. Waist circumference was measured at the midpoint between the bottom of the rib cage and the iliac crest using a flexible tape measure.
Cannulas and Blood Sampling
Arterio-venous differences were measured across AbScAT. Under local anesthesia, a superficial epigastric vein draining AbScAT and an arterialized dorsal hand vein (with the hand kept in a warming box at 60 °C) were cannulated [29]. The cannulae were kept patent with an intravenous infusion of 0.9% saline. After a resting period of ∼45 min, blood samples were taken simultaneously from the two sites. Blood was drawn into lithium heparin syringes, and plasma was prepared rapidly at 4 °C and stored at −80 °C until analysis. Previous studies have shown that venous blood from epigastric veins represents the efflux from the adipose tissue bed, and arterio-venous differences across AbScAT yield results that are similar to microdialysis studies [25,30].
Plasma Hepcidin Assay
Plasma hepcidin was determined using a competitive enzyme-linked immunosorbent assay at Intrinsic LifeSciences in La Jolla, Calif, USA. Detailed methods and performance of this assay have been published elsewhere [31]. The reference ranges for plasma hepcidin using this assay are 29-254 ng/mL in healthy men and 17-286 ng/mL in healthy women.
Blood Flow Measurements and Calculations
Abdominal subcutaneous adipose tissue blood flow was calculated from the washout of ¹³³Xe, assuming a partition coefficient of 10 mL/g. This method is based on the principle that the disappearance of ¹³³Xe is proportional to adipose tissue blood flow [32]. Abdominal subcutaneous adipose tissue plasma flow was calculated as the AbScAT blood flow × (1 − hematocrit) [33]. Hematocrit was measured by the microcapillary method.
If AbScAT venous plasma hepcidin concentrations are found to be higher than levels in arterialized samples (arterio-venous difference), the net regional release of hepcidin from the AbScAT (ng·100 g adipose tissue⁻¹·min⁻¹) will be calculated using the Fick principle, as the product of the arterio-venous plasma hepcidin difference and the AbScAT plasma flow (net regional hepcidin release = ([AbScAT venous plasma hepcidin] − [arterialized plasma hepcidin]) × AbScAT plasma flow) [25]. This value will be calculated for all subjects combined (overall), for the obese and lean groups separately, and on an individual basis.
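The plasma-flow correction and the Fick calculation described above amount to two one-line formulas. The sketch below expresses them in code; the function names and the numeric values in the usage example are hypothetical, not study data.

```python
def adipose_plasma_flow(blood_flow, hematocrit):
    """AbScAT plasma flow = blood flow * (1 - hematocrit)."""
    return blood_flow * (1.0 - hematocrit)

def net_regional_release(venous, arterialized, plasma_flow):
    """Fick principle: (venous - arterialized) concentration times plasma flow.

    Positive values indicate net release by the tissue; negative values
    indicate no net release (or net removal).
    """
    return (venous - arterialized) * plasma_flow

# Hypothetical example: venous 30 ng/mL, arterialized 25 ng/mL,
# blood flow 3.0 mL/100 g/min, hematocrit 0.40
flow = adipose_plasma_flow(3.0, 0.40)              # mL plasma/100 g/min
release = net_regional_release(30.0, 25.0, flow)   # ng/100 g/min
```

In the study's overall data the arterio-venous difference ran the other way (arterialized higher than venous), giving a negative value, which is why net release was not computed.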
Statistical Analysis
Demographic data are presented as means (with standard deviations) or frequencies. Anthropometric and biochemical data are presented as means (with standard deviations). Comparisons between the obese and lean groups were made using Student's t-tests for continuous variables and Fisher's exact test for categorical variables.
RESULTS
The demographic and clinical characteristics are presented in Table 1. All participants were identified as Caucasian. Mean age did not differ significantly between obese and lean participants (45.8 versus 44.2 years, resp.). Although there were more males in the obese compared to the lean group (obese: 55.6% (n = 5) versus lean: 22.2% (n = 2), resp.), the difference was not statistically significant. As expected, mean BMI and waist circumference were significantly greater (36.8 kg/m² versus 22.6 kg/m² and 115.1 cm versus 77.7 cm, resp.) in the obese group. Hematocrit was not statistically different between the obese and lean groups; however, when hematocrit was assessed by gender (data not shown), obese men had lower hematocrit compared to lean men (obese: 41.2 versus lean: 46.5%; P = 0.48) and obese women had higher hematocrit compared to lean women (obese: 41.8 versus lean: 37.3%; P = 0.01). Significant differences between obese and lean groups for AbScAT venous or arterialized plasma hepcidin concentrations were not observed. Also, no differences were observed for AbScAT venous or arterialized plasma hepcidin concentrations when assessed by obese and lean gender groups (data not shown). Overall (all participants), postabsorptive concentrations of plasma hepcidin were significantly higher in the arterialized compared to the AbScAT venous samples (mean difference (arterialized − venous plasma hepcidin) = 4.9 ± 9.6 ng/mL, P = 0.04). When differences in arterialized and venous plasma hepcidin concentrations were assessed by group (obese and lean separately), venous concentrations were lower than arterialized (Figure 1), but these differences failed to reach statistical significance. This may be due to the small sample size within each group. Net regional hepcidin release was not calculated overall or by group because mean venous plasma hepcidin concentrations were lower than mean arterialized plasma concentrations, indicating no net hepcidin release.
However, in five subjects (Table 2), AbScAT venous concentrations of plasma hepcidin were higher than in the arterialized samples, although the reverse was true for a greater number of participants (n = 12; adipose tissue blood flow could not be determined for one lean female). Notably, there was no consistent pattern to the hepcidin arterio-venous difference, with the obese individuals (n = 3) no more likely to show net release than lean participants (n = 2) (Fisher's exact test, P = 0.99); although women (n = 4) were more likely to show net release compared to men (n = 1), this was not statistically significant (Fisher's exact test, P = 0.60). For those demonstrating net hepcidin release from AbScAT, release did not differ between the obese and lean individuals (obese: 9.8 ± 5.6 ng·100 g adipose tissue⁻¹·min⁻¹ versus lean: 9.9 ± 7.0 ng·100 g adipose tissue⁻¹·min⁻¹; Student's t-test, P = 0.98). Significant correlations between AbScAT venous and arterialized plasma hepcidin concentrations and anthropometric variables were not observed (data not shown).
DISCUSSION
These data demonstrate, for the first time, that there is no overall net in vivo secretion of hepcidin by AbScAT in fasted obese or lean individuals. Also, no significant correlations were observed between anthropometric measures and arterialized or venous plasma hepcidin concentrations. Collectively, these findings are somewhat surprising, as subcutaneous and visceral adipose hepcidin mRNA expression has been reported to be significantly elevated in obese compared to lean women [27], and adipose-derived hepcidin was hypothesized to be associated, in part, with the increased hepcidin concentrations and the iron depletion observed in obesity [27]. Nonetheless, like IL-6 [34], hepcidin has a high turnover rate in plasma [35], and one would anticipate an arterio-venous difference across a tissue if the tissue was consistently and actively releasing significant amounts of the protein [25]. Further, in this study, when assessed overall, venous hepcidin concentrations were significantly lower than arterialized concentrations, suggesting that net removal or degradation of hepcidin by AbScAT may be possible. Yet, arterio-venous studies cannot rule out the possibility that subcutaneous adipose-derived hepcidin could still have an autocrine or paracrine effect on tissue-level iron regulation, such as promotion of iron sequestration within the adipose tissue. This effect, however, may be minor considering that a recent study reported that in morbidly obese women, tissue iron content, assessed by Perls' Prussian blue stain, a semiquantitative technique, was minimal in AbScAT [18]. Additional studies including a quantitative approach to assessing adipose tissue iron content would be needed to examine the autocrine and paracrine effects of adipose-derived hepcidin in obesity. Also, it is important to note that net secretion of hepcidin was evident in five individuals, but obese subjects were no more likely to show net release than lean subjects. Moreover, no clinical features could clearly distinguish these individuals from those displaying no net secretion, although four of the individuals were younger women. Additional larger scale studies are required to better understand this disparity. However, similar results have been reported for tumor necrosis factor-α (TNF-α): researchers concluded that there was no overall net secretion of TNF-α from AbScAT in a group of obese and lean individuals, and although, as with hepcidin, a few subjects within the group did display net secretion, the authors reported that no clinical or demographic characteristics could explain why TNF-α net secretion was observed in these individuals [25]. Two previous studies [18,27] have demonstrated that hepcidin mRNA expression in AbScAT and visceral adipose tissue from obese women did not differ significantly. However, it is well understood that adipose tissue is not homogenous [36]. Therefore, despite finding that the AbScAT does not release hepcidin, we cannot exclude the possibility that the visceral adipose depot secretes hepcidin and has an influential role in systemic or localized iron regulation.
Collectively, previous research and findings from this study suggest that liver-and not adipose-derived hepcidin may be associated with the increased systemic hepcidin concentrations and the iron dysregulation observed in obesity. First, hepatic hepcidin mRNA expression was reported to be significantly greater than that observed in AbScAT and visceral adipose tissue from obese women [18,27]; in one study, it was reported to be more than 700 times greater [18]. Also, Tussing-Humphreys and colleagues [18] reported that liver hepcidin mRNA expression was positively correlated with serum hepcidin concentrations, whereas adipose mRNA expression was not. However, Bekri and colleagues [27] reported that liver hepcidin mRNA expression was similar in hepatic biopsies from obese and lean premenopausal women suggesting that the liver is not the source of increased hepcidin in obesity. But this observation is somewhat flawed as the iron status of the lean women was not described [27] and could have differed considerably from the obese women. As reported by Bekri et al. [27], iron status was depleted, based on transferrin saturation (<20%), in 60% (n = 15) of the obese women. Matching the iron status of obese and lean subjects is critical as hepatic hepcidin expression is simultaneously regulated by body iron stores, inflammation, hypoxia, and anemia [2][3][4][5][6]. It is also critical that when matching on iron status, gender and age are also considered, as iron status differs between men and women and also with menopause [37]. Several studies have investigated the regulation of hepatic hepcidin by opposing stimuli and suggest that the strength of the individual stimuli, rather than the hierarchy of signaling pathways, determines hepatic hepcidin production [6,7]. 
Thus, restricting a comparison group to healthy, noninflamed, lean men or women (limited to either premenopausal or postmenopausal women) with similar iron status can allow one to evaluate the impact of the inflammatory signal on hepcidin expression in obesity. For example, in a previous study, in which obese and lean premenopausal women were matched on iron status (hemoglobin, transferrin saturation, and ferritin concentrations), median serum hepcidin concentrations (obese: 103.6 ng/mL versus lean: 16.3 ng/mL; P < 0.0001) and hepcidin regulation appeared to differ significantly between the two groups [18]. In the obese women, serum hepcidin was positively correlated with central adiposity and inflammation and weakly correlated with markers of iron status including transferrin saturation, serum transferrin receptor, and serum iron. Conversely, in the lean women, serum hepcidin was strongly correlated with the markers of iron status assessed. This suggests that in obesity, the inflammatory signal counterregulates the signal related to iron depletion, which could lead to a modest increase in liver hepcidin production. Interestingly, serum hepcidin concentrations in obese individuals are similar to those reported in healthy, iron-replete individuals [18,31]. Therefore, if the lean women assessed in the Bekri et al. [27] study were iron replete, it would not be surprising that their liver hepcidin mRNA expression was similar to that of the obese women; this suggests that hepcidin expression in obesity is upregulated, despite underlying iron depletion. Hence, to confirm that liver-derived hepcidin is the source of increased systemic hepcidin concentrations in obesity, obese and lean individuals with similar iron status would need to participate in a hepatic vein catheterization study to assess if hepatic hepcidin release is significantly greater in obesity despite similar iron levels.
This study is not without limitation. First, although the sample was relatively small and heterogeneous in age and gender, it was not unlike samples reported in other arterio-venous drainage studies investigating adipose tissue physiology [25,[38][39][40]. Also, the small sample size did not impact our ability to demonstrate that collectively there was no net secretion of hepcidin by AbScAT in vivo from obese and lean individuals, as the mean AbScAT venous plasma hepcidin concentrations were significantly lower than in arterialized samples, and secretion was not modified by obesity. However, the small sample size likely impeded our ability to observe a similar association when the obese and lean groups were assessed separately (Figure 1), as the sample size was reduced to only nine individuals per group. Second, this study does not rule out the possibility that AbScAT releases hepcidin at different times of the day or in response to food consumption, exercise, or starvation, warranting further investigation. Finally, a comprehensive assessment of iron status and data pertaining to menopause and menstruation for females may have allowed for a more thorough characterization of the findings.
CONCLUSION
In summary, there was no overall net secretion of hepcidin by AbScAT from obese or lean individuals. The mechanism linking obesity to elevated hepcidin concentrations and iron depletion remains unclear, although it is attractive to speculate that liver-derived hepcidin stimulated by inflammation, and not adipose-derived hepcidin, plays an important role in this phenomenon. Additional research is required to confirm this hypothesis. Gaining an understanding of the relationship between obesity and diminished iron status, in light of the burgeoning obesity epidemic, is critical since the clinical significance of suffering simultaneously from both morbidities remains unknown [41].
The utility of practical theology: mapping the domain, goals, strategies and criteria of practical theological research
Historically, practical theology entered the theological encyclopaedia as a discipline of 'crises' in the praxis which ministers, congregations and churches experienced in making the Christian tradition relevant in the life of individuals, communities or in society as a whole. Among scholars in practical theology there is a deep consensus that practical theology starts with practical concerns and contributes to practice, but what is the utility of practical theology? This article wants to 'deconstruct' the consensus that all practical theology is by default practical. If practical theology wants to construct knowledge about the improvement of practice, some strategies are preferable to others. The question of utility refers to the methodological criteria of empirical research regarding (a) the object of research or the problem to be solved, (b) the needs of the stakeholders as to the research and its results, and (c) the type of knowledge to be produced. If utility is the target of practical theological research, then the question is which research strategies meet these methodological criteria better than others. Some research strategies are strong in reaching certain goals, but weak regarding others. It is only in the complexity of the type of knowledge, research strategy and methodological criteria that the focus on practice orientation can be decided.
INTRODUCTION
How can we contribute to the future of the Christian religion in the historical, socio-cultural and societal conditions in which people live? This is the core question of practical theology as a discipline. Historically, practical theology entered the theological encyclopaedia as a discipline of 'crises' in the praxis which ministers, congregations and churches experienced in making the Christian tradition relevant in the life of individuals, communities, or in society as a whole. Don Browning, who can be regarded as one of the founding fathers of modern practical theology, formulates it as follows: "Religious communities go from moments of consolidated practice to moments of deconstruction to new, tentative reconstructions and consolidation. Then a new crisis emerges and the communities must launch into the entire process once more" (Browning 1991:6).
There is a deep consensus among scholars in practical theology that practical theology starts with practical concerns and contributes to practice. According to Browning, the structure of theological reasoning runs from practice, to theory, to practice. He is even inclined to mention that this should be the structure of all theology (Browning 1991:9). 1 For Browning, this is not a dialogue restricted to the Christian community and the Christian sources, but a "critical reflection on the church's dialogue with Christian sources and other communities of experience and interpretation with the aim of guiding its action toward social and individual transformation" (Browning 1991:36).
What is the utility of practical theology? Some will answer this question on the basis of the role of practical theology in learning practical skills in order to do theology. We know that scholars in practical theology often argue on this basis. It is not wrong, but it does not answer the question of the practice orientation of knowledge. Of course, we can (and should) use this knowledge in theological training and education. But what type of knowledge meets the demands of practice orientation? What do we mean by utility of knowledge? How is this knowledge created in research?
In this article, we wish to 'deconstruct' the consensus that all practical theology is by default practical. We start with Rick Osmer's (2004;2008;2011) formulation of the 'consensus' of practice orientation, and claim that this 'consensus' does not help clarify the practice orientation of practical theology, because it does not clearly distinguish the different domains, goals, strategies and criteria of practical theological research. 1 In this instance, Browning agrees with Rahner (1972).
First, we formulate four questions regarding Osmer's consensus of the fourfold tasks of practical theology. We take up these questions in the remainder of this article. The first issue is the role of the hermeneutical understanding of core ideas of the Christian tradition in research programmes. Next, we distinguish four goals of research programmes in terms of the types of knowledge they produce: descriptions, explanations, designs, and concepts. The first three goals imply empirical research strategies. Is every research strategy fit to contribute to the development of each type of knowledge? We argue that this is not the case. If practical theology wants to construct knowledge about the improvement of practice, some strategies are preferable to others. Finally, we address the issue of utility in terms of the methodological criteria of empirical research. Utility refers to methodological criteria regarding the object of research or the problem to be solved; the needs of the stakeholders as to the research and its results; and the type of knowledge to be produced. If utility is the target of practical theological research, then the question is: Which research strategies meet these methodological criteria better than others?

BEYOND THE 'CONSENSUS' IN PRACTICAL THEOLOGY

Rick Osmer (2004:149), one of the leading scholars in practical theology, has put a great deal of effort into formulating the new consensus in practical theology. His 'consensus idea' resembles the four dimensions of Browning's (1991) practice-theory-practice orientation. According to Osmer, the major difference between his theory and that developed by Browning lies in the fact that he distinguishes four core operations within practical theology, whereas Browning distributes them across the theological encyclopaedia (Osmer 2004:150).
There may be less difference than Osmer presents, if we understand the four dimensions as submovements within a practice-theory-practice orientation of all theological scholarship (Browning 1991:36). His concept of a fundamental practical theology is presented as an overarching container concept, incorporating four sub-movements, namely descriptive, historical-normative, systematic, and strategic, within a hermeneutical engagement with the community and its practices that displays the religious dimensions of the situation. In this section, we restrict ourselves to the formulation of Osmer's (2004;2008;2011) new consensus. We evaluate the claim of the new consensus that it builds action-guiding theories of religious praxis (Osmer 2004:152). Does the structure of the practice-theory-practice orientation, as developed in the new consensus, help practitioners improve their praxis?
What are the tasks of practical theology? According to Osmer (2011:2), there are four interconnected tasks:
• Descriptive-empirical: What is going on? Gathering information to better understand particular episodes, situations, or contexts.
• Interpretive: Why is this going on? Entering into a dialogue with the social sciences to interpret and explain why certain actions and patterns are taking place.
• Normative: What ought to be going on? Raising normative questions from the perspectives of theology, ethics and other fields.
• Pragmatic: How might we respond? Forming an action plan and undertaking specific responses that seek to shape the episode, situation, or context in desirable directions.
The central claim is that practical theology attends to four tasks along the lines of a hermeneutical circle (Osmer 2008:22-23). This hermeneutical circle is composed of five moments: pre-understanding, putting some facet of our pre-understanding in question, dialogical interplay between the horizon of the interpreter and that of the text, person or object being interpreted, fusion of horizons, and application. Osmer (2011:2) situates the fourfold task on the level of pastoral and ecclesial leadership. When practical theology emerged as an academic discipline in the modern research university, "its task was to develop 'theories of practice' and 'rules of art' that might guide the reflective practice of the leaders of the Church" (Osmer 2011:2). According to Osmer, this level needs to be distinguished from the meta-theoretical level. Practical theologians conceptualize and carry out the four tasks differently on the basis of decisions made on a meta-theoretical level. Osmer identifies four issues: the theory-praxis relationship; the sources of justification (scripture, tradition, reason and experience); models of cross-disciplinary work; and the theological rationale (Osmer 2011:3).
We wish to raise four questions regarding the formulation of the 'consensus' by Osmer:
1. The fourfold task is situated on the level of the reflective practitioner, not on the level of the theory formation of scholars. Scholars are not reflective practitioners within a hermeneutical circle, but critical agents within research programmes (and teaching programmes related to the knowledge created in research programmes).
2. Within research programmes, the empirical cycle is at the heart of the enterprise. Research programmes aim to develop new knowledge and to put our knowledge claims to the test. Within theological research programmes, scholars share knowledge claims (core ideas) of the Christian community. Hermeneutical interpretation is necessary to understand the knowledge claims of the Christian community. But the hermeneutical circle is not the overall purpose of research programmes.
3. In Osmer's formulation, the empirical task is called the descriptive-empirical task. Descriptions are grounded on a form of attending as a spirituality of presence, and can also incorporate critical reflection on the current practice (Osmer 2008:58).
WHAT IS A RESEARCH PROGRAMME IN PRACTICAL THEOLOGY?
If practical theology is about the development of 'theories of practice' and 'rules of art' (cf. Osmer), then the first question is: What is a research programme in practical theology in which 'theories of practice', and so on, are built? The work of scholars in practical theology cannot be situated on the level of reflective practice, but on the level of a research programme (and, in relation to this, an educational task). Practical theologians contribute to practice (for example, of leaders, congregations, believers) by developing research programmes, creating knowledge about practices in which they are interested. In this section, we will first formulate a concept of a research programme based on the work of the Dutch philosopher of science Kuipers (2001; 2005; 2007), who positions himself as a neoclassical scholar in the line of Kuhn and Lakatos. Next, we will discuss the thorny issue that practical theology starts from a hermeneutical pre-understanding of the Christian practice. Can we, on the one hand, acknowledge that we share notions grounded in the Christian traditions and, on the other, put our knowledge claims to the test?
Research is fundamentally an interested enterprise; it serves the agenda of a research programme. Kuhn and Lakatos introduced this idea into the philosophy of science. Since the 1980s, it has become more or less accepted that science develops in encompassing units called research programmes (Kuipers 2007:2). The development of scientific knowledge does not proceed through the development of specific hypotheses and theories, but in more encompassing terms. Structural features of research programmes are: a domain of existing or not yet existing phenomena; the goal of solving some problem associated with it, be it finding its true description or its true theory, or the construction of an intended intervention or concept; 2 a core idea, or a set of coherent ideas couched in a certain vocabulary, about how to solve the problem; and additional ideas (heuristics) suggesting how to safeguard the core idea against prima facie failures to solve the problem (Kuipers 2005:31; 2007:63-64).
What is the domain of practical theology? Christian practice as religious-communicative action naming God? In the past decades, the domain of the research programme of practical theology widened: from the practices of ministers, to the practices of believers within the community of the Church, to the practices of Christian believers in the coordinates of Church and society (public domain), to the interaction of Christian believers with adherents of other religions (Hermans 2014). This widening of the material object of practical theology has created more diversity in research programmes in practical theology. Where some research programmes include religion on the World Wide Web and in virtual communities, others focus on the practices of ministers and congregations.
A core idea of research programmes in practical theology regards the origin of naming God in the narrative of death and resurrection of Jesus Christ. It is impossible to build a research programme in practical theology beyond the core notions of our epistemic community. All inquiry is from 'accepted' knowledge within an epistemic community to 'new' knowledge. Research aims at expanding our knowledge, but it does not start from nowhere, nor do we need to presuppose "universal, hard and fast premises, clear-cut concepts, straight and narrow theories and universal, irrefutable test results" (Van der Ven 2010:95). This is the epistemological position of strong rationality. There is no research programme that does not start from ideas which are accepted within an epistemic community as knowledge base. "We begin our conversations by bringing our fallible views and judgments to those who traditionally make up our epistemic communities" (Van Huyssteen 1999:265). Each judgment is made in the context of a specific community, and is based on arguments and ideas that are accepted in that community. We cannot abstract from a conversational context when justifying concepts, ideas and viewpoints. This is the epistemological position of 'weak rationalism'. We cannot claim absolute knowledge. We are part of an epistemic community in which we share concepts, ideas and viewpoints but at the same time accept that we only have fallible knowledge, which can stand the test of falsification.

2 Problems refer to what we 'do not know' about a phenomenon (such as a professional practice or congregational development) and the challenges related to it.
'Weak' rationalism demands that we should extend our individual evaluation to communal evaluation, and further to trans-communal evaluation (Van Huyssteen 1999:265). This does not imply a demand to include all possible rational agents in our justification, regardless of time and place. If we were to demand this, it would be impossible to determine the validity of our justifications. Scientific truth is something that is established over time by the community of inquiry, including future generations and different contexts. The principle of fallibilism 3 demands that we need to ground scientific theory not in the origins of our knowledge (context of discovery), but in the rules and norms of inquiry (context of justification). "Our claims to knowledge are legitimized not by their origins -for the origins of knowledge are diverse and fallible -but rather by the norms and rules of inquiry itself" (Bernstein 1971:175).
What is the distinctive role of the hermeneutical or interpretative task and the empirical task in a theological research programme? In a research programme, the hermeneutical task aims at understanding the meaning of practices from the perspective of the epistemic community that is involved in these practices. In order to understand what healing means for a specific (type of) evangelical community, we need to understand hermeneutically the meaning this community connects with this practice of healing. But it is wrong to say that the four tasks are related to each other within a hermeneutical circle. The tasks function within a research programme that aims to solve problems within a specific domain (e.g., religious-communicative practices) based on a core idea or set of related ideas (e.g., the Christian origin narrative of death and resurrection of Jesus Christ). The hermeneutical task plays a role in the context of discovery, but not in the context of justification. In this instance, the empirical task is centrally based on the principle of fallibilism. We put the knowledge of an epistemic community to the test with the help of the norms and rules of inquiry. If some image of God blocks the practice of mourning for believers in a certain type of Christian community, this insight fulfils a critical role towards a theological pre-understanding which considers this image to be beneficial for believers who mourn over a beloved one (Zuidgeest 2001).
GOALS OF PRACTICAL THEOLOGICAL RESEARCH
The overall goal of practical theological research is to solve problems in the domain which is the target of a research programme. In the 'consensus', the general idea is that all practical theological research contributes to solving practical problems. But is this really the case? In this section, we will distinguish different goals of practical theological research. Following Kuipers (2001; 2005; 2007), we distinguish between four goals of a research programme, based on the type of knowledge it produces, namely descriptions, explanations, designs and concepts.
Descriptive programmes are the first type of research. They are meant "to describe a certain domain of phenomena, primarily in terms of individual facts (individual programmes) or primarily in terms of general observable factors (general or inductive programmes)" (Kuipers 2001:6; 2007:59). Individual facts are, for example, specific rituals of different Christian denominations. General observable factors refer to, for example, the experience of a divided self. According to James (1961:114-142), all of us have some amount of discordancy in our character, but in some this heterogeneity is more extreme than in others. Human beings want to overcome experiences of a divided self, because the self is experienced as being in conflict with itself, thus causing anxiety. Descriptive programmes are also known as observation programmes: the research takes the form of more or less selective observations, and the resulting facts are couched in so-called observational terms. We would like to stress (again) that observational terms are not given by the natural world, but are created through a theoretical lens through which researchers examine their research object.
Explanatory research programmes have a different aim: "They are directed at the explanation and further prediction of the observable individual and general facts in a certain domain of phenomena" (Kuipers 2005:29). An explanatory programme is (quasi-)deductive, whereas a descriptive programme is dominated by inductive reasoning. Explanatory programmes are always built on underlying descriptive programmes. Different explanatory programmes can arise from the same descriptive programme. For example, observations of religious rituals, practices, beliefs and emotions can be explained by different theories. For example, secularization theory (e.g., Bruce 2011) and religious market theory (e.g., Stark & Finke 2000) both profess to explain religious decline and revival. Next, we could critically evaluate to what extent these theories indeed envisage elements of religious decline and/or revival (Aarts et al. 2008;Aarts 2010;Sterkens & Vermeer 2012).
Intervention design research programmes involve the development of knowledge on interventions (in terms of the improvement of a certain practice) or from interventions (in terms of the construction of a certain practice). This type of research is often neglected in favour of description, explanation and prediction. However, research in the field of organizations and policy is mainly design research (Van Aken et al. 2008) or policy research (Verschuren 2009a). Since design programmes often use knowledge obtained in descriptive and explanatory programmes, the design process will only be considered scientific if it is not fully based on existing knowledge and techniques. That is, new theories have to be developed or new experiments have to be performed if a design programme is to be scientific (Kuipers 2005:30). For example, we can imagine design research into the development and testing of a protocol of pastoral counselling of people in different contexts (e.g., hospital settings and religious institutions) and different religious groups (e.g., different Christian Churches). It could include research into the development and implementation of a programme format for television (or internet), structured according to the core aspects of a spiritual biography.
Finally, explicative research programmes are directed to concept explication, that is the formal construction of simple, precise and useful concepts that are similar to given informal concepts (Kuipers 2007:62). The strategy of concept explication starts by deriving conditions of adequacy from the intuitive concept to be explicated and, if relevant, from empirical findings, which the explicated concept will have to satisfy, and evident examples and counter-examples that the explicated concept has to include or exclude. Explication may go beyond what the explicated concept has to include or exclude and beyond the explication of intuitive concepts. It may also aim at the explication of intuitive judgments, including their justification, demystification or even undermining.
RESEARCH STRATEGIES OF PRACTICAL THEOLOGY
What type of research strategy should get preference in practical theological research? The choice of a research strategy is, in "real academic life", often related to the preferences of the researcher. Researchers often continue to use research strategies in which they have been successful. However, research strategies should be chosen in view of the goal of the research and the research question related to this. In this section, we will first distinguish seven main categories of a research strategy. Then we will answer the following question: Which type of research strategy is useful to develop a specific type of knowledge?
A research strategy incorporates all the decisions of the research design. It includes the method of research, in the sense of data collection and data analysis, but also decisions, whether it is theory oriented or practice oriented, about the depth or width of the research, the number of research units, and so on (Verschuren 2009a). Creswell formulates nine core questions which a research strategy should answer, including what we want to understand better about our topic, the unit(s) that we want to study, method, data analysis, how to validate our results, ethical issues, and the practicality or value of the result (Creswell 2003:49-50). We will follow the typology of research strategies as offered by Verschuren (2009a;2009b). The reason for this is that his typology is interesting in view of the discussion on utility in the next section. In general, the majority of authors focus on the same research strategies that exist in the field. The major difference is the ordering of the research strategies.
Research strategies can be distinguished regarding the input of the research that forms the basis of the analysis. The first question in choosing a research strategy is: What do we analyse? In data-based strategies, what we analyse is data that is available through empirical observation. The input in human resource-based research consists of the capacities of the human mind (reflection; analysis) (Verschuren 2009a:36).
In data-based types of research strategy, the next decision relates to the choice for quantitative or qualitative approaches (Creswell 2003:13-17; Verschuren 2009b:280-282). The choice for quantitative approaches is led by the need to generalize the insights of the research. In order to be able to generalize the results, the focus of this research strategy is on the magnitude (mass) of a phenomenon, large samples, and external validity (a-selective methods of sampling; e.g., does the sample reflect the population?). Qualitative research aims at depth of understanding of a phenomenon, and internal validity. Depth refers to the variety of the phenomenon, which demands interpretative and comparative ways of analysing the data. Internal validity refers to questions regarding the process of production of the data, namely whether the observed data reflect the ontological reality which exists 'out there'. The focus of qualitative research calls for small samples, and labour-intensive processes of data collection and data analysis.
There are two main types of research strategies in quantitative types of data-based research, namely correlational and experimental testing designs. Correlational research is more than survey research, although this is the major kind of research in this type (Bryman 2008:45). In this type of research, the main operation in data analysis is establishing the association between phenomena. Correlation coefficients give insight into the strength and direction of this association. Based on the correlation coefficients, other types of multivariate analysis are possible, such as scaling techniques, factor analysis, and regression analysis. Subtypes within this category are survey research, panel research, trend research, and cohort research. Experimental testing designs include a manipulation of the independent variable (e.g., an intervention) in order to determine whether it does have influence on the dependent variable (such as the beliefs, attitudes and behaviour of people) (Bryman 2008:50). There are two types of experimental designs: true and quasi-experimental designs. True designs demand that the researcher can manipulate all the variables involved in the research. Subjects are distributed on a random basis to different treatment conditions. This type of research is conducted in laboratory settings (e.g., in psychology). Quasi-experimental designs are conducted in real-life situations in which not all variables can be manipulated (e.g., the composition of the groups involved in the different treatments cannot be randomised).
We distinguish three types of research in qualitative research: Q-type research, q-type research, and design research (see Verschuren 2009a; 2009b). The Q-type of qualitative research is characterised by an inductive approach, in the sense that it is driven by empirical data and not by theory (e.g., assumptions derived from theory). It uses labour-intensive methods of data collection (open interviews, observation) and of data analysis (interpretation, constant comparative method). The research process is flexible and evolves in response to the lived reality of the phenomenon of the research (such as life narratives, ritual expressions, group decision-making processes). Different subtypes can be distinguished within this category: grounded theory (Glaser & Strauss 1967), ethnographic research, narrative research (De Haardt et al. 2014), discourse analysis (Wijsen 2013) and phenomenological research (Heimbrock 2007). Within qualitative research, the q-type resembles quantitative research, in the sense that it follows a mainly theory-driven approach and the empirical cycle (from theory to data and back to theory). But this type shares the use of qualitative methods of data collection and data analysis with the Q-type of qualitative research. The two most well-known variants of this strategy are the comparative case study (Yin 2009) and the qualitative survey (Jansen 2010). The third type of qualitative research is practice-oriented research or design research. The difference from the first two types is that this type implies an intervention. The start and goal of practice-oriented research is an intervention (or action) problem, and not a theoretical knowledge problem as in quantitative testing research (see above). The problem of the research can be either an improvement problem or a design problem.
In the former, the focus is on improving a practice by means of an intervention (Verschuren 2009b); in the latter, the focus is on the interventions themselves, e.g., designing interventions (McKenney & Reeves 2012).
All the strategies mentioned above are data driven. There are also strategies driven by the capacity of the human mind to produce knowledge. Verschuren (2009a) calls them human resource-driven research. He distinguishes two subtypes: knowledge-based research and reflective research. By contrast, the inputs for human resource-based research are capacities of the human mind. In knowledge-based research, the input is knowledge that exists in literature and/or in the heads of experts. By confronting and analysing these insights, the researcher endeavours to produce new knowledge. Examples are literature research, Delphi research (Keeney et al. 2010), and focus groups with experts (Stewart & Shamdasani 2015). In reflective research, the knowledge basis does not exist in literature and/or in the mind of experts, but is produced by the reflective capacity of the researcher. In this type of research, the researcher is an instrument for both 'observation' and analysis.
We can now answer the question to what degree a research strategy can serve different goals. Figure 1 presents an overview of our analysis. Correlational research strategies are strong in producing descriptive and explanatory knowledge. Experimental strategies are strong in producing explanatory knowledge, e.g., knowledge on the causal relationship between x and y. There is a difference between experimental and quasi-experimental research in this regard. As explained earlier, quasi-experimental research is conducted in real-life situations, in which not all variables can be manipulated. The strength of producing causal explanations is, therefore, weaker in quasi-experimental than in experimental research. Hence, we place a '+' between brackets for quasi-experimental research. The three other data-driven strategies are weak in producing explanations. Both the Q-type and the q-type are strong in producing descriptions. The Q-type is also useful in producing new concepts using an inductive approach. This is different with the q-type, which is theory driven, because it takes the existing knowledge base as the frame of reference of the research. Within the q-type, there is one research design that is strong in developing knowledge on interventions, namely the comparative case study design. Design research is very strong in producing new knowledge on (and from) interventions, but it can also serve the goal of producing new concepts. Both types of human resource-based research strategies have their strength in concept explication.

In order to answer the question of utility, we will relate different types of practical theological research to different criteria of research. Some criteria are related to the question of truth; others are related to utility. What we intend to clarify is that some types of research score high on truth, others score high on utility, and still others score high on both criteria.
We start by making a distinction between two types of criteria in scientific research, namely internal-scientific or theory-oriented criteria and external-scientific or practice-oriented criteria (Verschuren 2009a:29). The majority of handbooks on methodology only refer to internal-scientific criteria, namely internal validity, external validity (or generalizability), verifiability of the results, and cumulativity of theory development (Bryman 2008). With regard to the utility of research, we need external-scientific criteria. Regarding the question of utility, Verschuren introduces three demands, referring to the object of research or the problem to be solved, the needs of the stakeholders as to the research and its results, and the type of knowledge to be produced (Verschuren 2009a:18-19). We will briefly elaborate on each type of demand. As to the object of research, the main question is: What are the characteristics of the problem we are researching? What are the characteristics of the problem in practices in the real-life world? Do we seek the uniqueness of phenomena and their variability in different local contexts? Did the phenomenon emerge relatively recently, which implies that we need to incorporate aspects that are relatively unknown to us? Do we include the social processes influencing the phenomena that we research? Is the interconnectedness (interaction) between people influencing the phenomena of our research? Are we able to incorporate the specificity of different subgroups in our research?
The next demand of utility relates to the needs of stakeholders. These needs result from the fact that stakeholders want to do something with the knowledge that is produced in the research; for example, make a decision, develop a plan, perform a ritual practice, and so on. Verschuren (2009a:20-21) distinguishes four criteria in this category: comprehensibility, acceptability, legitimacy, and research as a learning process. In order for stakeholders to recognize that the knowledge produced in the research fulfils their needs, the results must be comprehensible to them. The stakeholders must accept the knowledge as relevant. They must perceive the knowledge, and the way in which it was produced, as legitimate. And finally, the stakeholders need to be involved in a learning process that helps them change.
The last demand of utility relates to the type of knowledge produced in the research. Verschuren (2009a:21-22) distinguishes the following criteria in this category: holism, interdisciplinarity, context restraint, profoundness, and transformation. Holism refers to the fact that one researches the entire phenomenon as it manifests itself in real life, in its conditions of time and place. Interdisciplinarity refers to the fact that many real-life problems cut across different disciplines. The knowledge is bound to the context in which it is gathered. Profoundness refers to the fact that a myriad of underlying causes can influence a problem. Finally, transformation refers to the fact that it does not suffice to understand phenomena. The important issue is whether the problem can be addressed and whether the stakeholder can do it.
Based on the formulated criteria, we can now answer the following question: which research strategy in practical theological research fulfils the demand of utility more than other strategies? As Figure 2 shows, the two data-driven quantitative strategies are strong on internal criteria, but they are weak in fulfilling the demands of utility. The three qualitative data-driven strategies score high on utility, but they can also deal with the internal scientific criteria of truth. The two human resource-driven strategies score well on practice-oriented demands but are weak with regard to internal scientific criteria. In summary: the question of utility relates to a set of practice-oriented demands of practical theological research. These demands refer to issues regarding the object of research, the needs of stakeholders, and the type of knowledge generated in research. As we have shown, some research strategies can meet the demands of practice orientation better than others. It is not simply qualitative vs. quantitative, or inductive vs. deductive; we need to argue from the level of research strategies in order to decide which research type meets the demands of practice orientation better than other types. Within each research strategy, there are different types of methods. Finally, we need to connect the selection of a research strategy to the goal of the research. We have distinguished research goals according to the type of knowledge that results from the research: descriptions, explanations, intervention designs, or concepts. Some research strategies are strong in reaching certain goals but weak regarding others. It is only in the interplay of the type of knowledge, the research strategy and the methodological criteria that the focus on practice orientation can be decided.
"year": 2016,
"sha1": "78b64c26dc87d471b87b28cd8795223e35d5d267",
"oa_license": "CCBY",
"oa_url": "https://www.ajol.info/index.php/actat/article/download/146076/135588",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "bf2fafc44a1914e8e3073f28725fb435d03af96e",
"s2fieldsofstudy": [
"Philosophy"
],
"extfieldsofstudy": [
"Sociology"
]
} |
Anthocyanins Profile of Grape Berries of Vitis amurensis, Its Hybrids and Their Wines
Anthocyanins are responsible for the color of grapes and wine, an important attribute of their quality, and many authors have used anthocyanin profiles to classify grape cultivars and verify wine authenticity. The anthocyanin profiles of grape berries of Vitis amurensis, its hybrids and their wines were analyzed by HPLC-ESI-MS/MS. The results identified 17 anthocyanins in these grape cultivars, including 11 anthocyanin monoglucosides (five pyranoanthocyanin monoglucosides and one acylated pyranoanthocyanin monoglucoside) and six anthocyanin diglucosides. Likewise, 15 anthocyanins were detected in the wines, including six diglucosides and nine monoglucosides of anthocyanidins, among which four pyranoanthocyanin monoglucosides (Petunidin-3-O-glucoside-4-acetaldehyde, Malvidin-3-O-glucoside-4-pyruvic acid, Malvidin-3-O-glucoside-acetaldehyde and Peonidin-3-O-glucoside-4-pyruvic acid) were detected. In addition, a total of 14 anthocyanins, including six diglucosides and eight monoglucosides of anthocyanidins, were identified in the skins, among which two pyranoanthocyanin monoglucosides (Peonidin-3-O-glucoside-4-pyruvic acid, Malvidin-3-O-glucoside-4-vinylphenol) and one acylated pyranoanthocyanin monoglucoside (Malvidin-3-O-(6-O-acetyl)-glucoside-4-vinylphenol) were detected. The anthocyanin profile of the grape skin of V. amurensis and its hybrids consists of anthocyanin monoglucosides, diglucosides and pyranoanthocyanins, while the wines produced show a slightly different anthocyanin distribution. Pelargonidin-3,5-diglucoside was found for the first time in the skins and wines; however, no acetylated anthocyanins were detected in the wines. The principal component analysis results suggest that the anthocyanin profiles are helpful for classifying these cultivars of V. amurensis.
Introduction
Anthocyanins are a group of pigments in grapes and wine and an important quality parameter, contributing appealing color and antioxidant activity to red grapes and wines. Due to their bright color and high water solubility, they are considered a potential natural pigment to replace artificial food colorants. They also possess potent antioxidant capacity and health-promoting properties, even reducing the risk of cardiovascular diseases and some inflammatory diseases for people who consume wine, berries, and grapes [1][2][3][4].
The anthocyanin profile of a grape and its wine, determined by the relative proportions of the different anthocyanins, is characteristic of each variety. The anthocyanin profile, or fingerprint, has also been used by many authors to differentiate grape cultivars and to verify the authenticity of red wines, especially if the grape cultivar is mentioned on the bottle label. Authors have reported that anthocyanin profiles provide enough information to differentiate classes in the studied wines or grapes [5][6][7][8][9][10].
V. amurensis is native to north-eastern China and is resistant to low temperatures, even at -40 °C. Wines made from these grapes have an unusual color, aroma and taste, quite different from wines made from V. vinifera grapes. However, the anthocyanin profile of grape berries from V. amurensis and their wines is still not clear. In addition, it has not been sufficiently verified whether the grapes and wines contain pelargonidin anthocyanins, and whether the grape skins include oligomeric anthocyanins. The objective of this work is to identify the anthocyanin profiles of grape berries of V. amurensis and to clarify the above questions by HPLC-ESI-MS/MS analysis.
Samples
Grape berries of seven cultivars were used: SF (Shuang Feng, intraspecific hybrid of V. amurensis, hermaphroditic), SH (Shuang Hong, intraspecific hybrid of V. amurensis, hermaphroditic), SY (Shuang You, V. amurensis, hermaphroditic), Z1 (Zuo Shan Yi, V. amurensis, female, selected from wild resources), Z2 (Zuo Shan Er, V. amurensis, female, selected from wild resources), ZH (Zuo Hong Yi, interspecific hybrid, V. amurensis × Мускат Розавый × V. amurensis, hermaphroditic) and ZY (Zuo You Hong, interspecific hybrid, V. amurensis × Мускат Розавый × V. amurensis, hermaphroditic). They were cultivated at the Institute of Special Wild Economic Animal and Plant, Chinese Academy of Agricultural Sciences, and were sampled at maturation and at harvest based on the color of the grape berries. To obtain a sample representing a vineyard population, we sampled according to the method described by Boulton et al. [21]: three 100-berry samples were selected from at least seven 10-cluster selections at similar positions on 30 whole-vine selections. The fresh samples were kept in refrigerated bags and taken to the laboratory within a few hours; the skins were peeled with forceps, frozen in liquid N2, ground to a powder and stored at -40 °C for anthocyanin extraction.
Wines were made in small glass containers (10 L). The berries of the seven cultivars for winemaking were picked at harvest. To each must, 50 mg/L SO2 was added before alcoholic fermentation, and then activated yeast was added. After fermentation had proceeded for four days, the pomace was separated from the must and the fermentation was carried out to completion. The wine samples were analyzed by HPLC-ESI-MS/MS with direct injection after filtration.
Extraction of Anthocyanins
The extraction of anthocyanins was performed according to Liang et al. [8] with some modification. 20 mL of methanol containing 5% (v/v) formic acid was added to a 100 mL Erlenmeyer flask containing 1 g of grape skin powder. Anthocyanins were extracted at 30 °C for 30 min in the dark; this was repeated five times and the extract solutions were combined. The extract was concentrated to dryness under vacuum at 30 °C using a rotary evaporator. The dry extract was redissolved in 5 mL of 2% formic acid in distilled water, and about 1 mL of the solution was passed through a 0.45 μm Millipore filter for HPLC-ESI-MS/MS analysis.
HPLC-MS Analysis
An Agilent 1200 series LC-MSD, equipped with a UV detector and a reversed-phase column (Kromasil C18, 250 mm × 4.6 mm, 5 μm), was used. The solvents were (A) aqueous 2% formic acid and (B) acetonitrile containing 2% formic acid. The gradient was from 6% to 10% B over 4 min, from 10% to 25% B over 8 min, isocratic at 25% B for 1 min, from 25% to 40% B over 7 min, from 40% to 60% B over 15 min, from 60% to 100% B over 5 min, and from 100% back to 6% B over 5 min, at a flow rate of 1.0 mL/min. The injection volume was 30 μL, and the detection wavelength was 525 nm. Mass spectrometry (MS) conditions were as follows: electrospray ionization (ESI) interface, positive ion mode, 35 psi nebulizer pressure, 10 L/min drying gas flow rate, 350 °C drying gas temperature, and scans at m/z 100-1000. All analyses were performed in duplicate.
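The gradient program above sums to a 45 min run. As a quick sanity check, it can be encoded and interpolated; the segment values are transcribed from the text, while the function name and data layout are our own:

```python
# HPLC gradient program for solvent B (acetonitrile + 2% formic acid),
# transcribed from the text: (duration in min, %B at start, %B at end).
segments = [(4, 6, 10), (8, 10, 25), (1, 25, 25), (7, 25, 40),
            (15, 40, 60), (5, 60, 100), (5, 100, 6)]

def percent_b(t_min):
    """Return %B at time t_min by linear interpolation within the active segment."""
    elapsed = 0.0
    for duration, b_start, b_end in segments:
        if t_min <= elapsed + duration:
            frac = (t_min - elapsed) / duration
            return b_start + frac * (b_end - b_start)
        elapsed += duration
    raise ValueError("time beyond end of gradient program")

total_runtime = sum(d for d, _, _ in segments)  # 45 min total run time
```

Evaluating `percent_b` at segment boundaries reproduces the stated setpoints (e.g., 10% B at 4 min, 25% B through the 1 min isocratic hold).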
Statistical Analysis
All individual anthocyanins were quantified from the chromatographic results and expressed as malvidin-3-glucoside equivalents. If an anthocyanin was not detected in a sample, it was represented by zero in the data matrix for principal component analysis (PCA). PCA was performed with the statistical software SPSS 15.0 (USA).
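The data-matrix construction described above (samples × anthocyanins, undetected compounds set to zero) can be sketched with an SVD-based PCA. This is only an illustration of the procedure, not the SPSS computation itself, and the concentration values below are entirely hypothetical:

```python
import numpy as np

# Rows = samples (cultivars/wines), columns = individual anthocyanins,
# quantified as malvidin-3-glucoside equivalents; undetected compounds -> 0.
# All values are hypothetical, for illustration only.
X = np.array([
    [12.1,  3.4, 0.0, 5.6],
    [10.8,  2.9, 0.0, 6.1],
    [ 0.5, 15.2, 7.8, 0.0],
    [ 0.7, 14.8, 8.1, 0.2],
])

Xc = X - X.mean(axis=0)            # center each anthocyanin column
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                     # principal-component scores per sample
explained = s**2 / np.sum(s**2)    # fraction of variance per component
```

With data like this, the first component separates the two groups of samples, mirroring how the PCA plots in this section separate the cultivars.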
Anthocyanins in Grape Skins
There were 17 anthocyanins identified in V. amurensis grapes, their hybrids and their corresponding wines by HPLC-ESI-MS/MS (Table 1 and Figure 1).
In the grapes of cultivars SY, Z1 and Z2, which all belong to V. amurensis, HPLC-ESI-MS/MS detected 12 anthocyanins, although some anthocyanins were not detected at maturation. However, peonidin-3-O-glucoside-4-pyruvic acid was not detected in the cultivars SF and SH, which also belong to V. amurensis and comprise 11 anthocyanins.
Anthocyanins in Wines
The presence of 15 anthocyanins in the seven wines made from V. amurensis and its hybrids can be seen in Table 1. These wines contain six anthocyanin diglucosides and nine anthocyanin monoglucosides (four of them pyranoanthocyanin monoglucosides).
PCA Results of Grape Cultivars and Wines
Principal component analysis was also performed; the first three components account for more than 80.0% of the total variance in all of these analyses.
As shown in Figure 3A, the cultivars G-ZH-M and G-ZY-M stand out: they are clearly distanced from the rest and well separated by PC1. The cultivars G-SY-M, G-Z1-M, G-SH-M, G-Z2-M and G-SF-M were mainly separated along PC2; however, G-SY-M, G-Z1-M and G-SH-M could not be distanced sufficiently even though they were located in different quadrants. The cultivars G-Z2-M and G-SF-M were located in the same quadrant and also were not well separated. The result of the bi-plot of PC1 versus PC3 was similar to that of PC1 versus PC2. This suggests that these cultivars may share a similar anthocyanin biosynthesis and are more closely related to each other than to the rest of the cultivars. Figure 3B is a scatter plot showing the distribution of these grape cultivars according to PC1 versus PC2 and PC1 versus PC3 at harvest. In the scatter plot of PC1 versus PC2, G-ZY-H and G-ZH-H were well distanced, although both were located in the same quadrant, whereas the cultivars G-SH-H, G-Z2-H, G-SF-H and G-SY-H were closely clustered in the fourth quadrant. In the scatter plot of PC1 versus PC3, the cultivar G-SY-H was separated into the first quadrant, close to G-Z1-H; otherwise the results were similar to the scatter plot of PC1 versus PC2.
Comparing the scatter plot at maturation with the plots at harvest, it can be seen from Table 1 that the anthocyanin profiles are influenced by the period of grape development. The W-ZH wine was totally separated from the other wines along PC1 (Figure 3C). According to PC1 and PC2, W-Z2 and W-SH were located at the top left, while W-Z1 and W-SY were located at the bottom left; however, W-ZY was very close to W-SF, as were W-SH, W-Z1 and W-SY. When the samples were separated according to PC1 and PC3, W-ZY and W-SF were sufficiently separated into two different locations, but not all the wines could be differentiated by applying the two principal component plots (PC1 versus PC2 or PC1 versus PC3). This is the case for some of the groups (see Table 1 for definitions of abbreviations).
Conclusion
In this experiment, 17 anthocyanins were identified from the skins and wines of seven grape cultivars, including 11 anthocyanin monoglucosides (five pyranoanthocyanin monoglucosides and one acylated pyranoanthocyanin monoglucoside) and six anthocyanin diglucosides. Fifteen anthocyanins were identified in the wines, including nine anthocyanin monoglucosides (four pyranoanthocyanin monoglucosides) and six anthocyanin diglucosides, and 14 anthocyanins were identified in the skins, including eight anthocyanin monoglucosides (two pyranoanthocyanin monoglucosides and one acylated pyranoanthocyanin monoglucoside) and six anthocyanin diglucosides. In the skins of V. amurensis grapes and their hybrids, the anthocyanin profiles consist of anthocyanin monoglucosides, diglucosides and pyranoanthocyanins, while the wines made from these grapes show a different anthocyanin distribution. Pelargonidin-3,5-O-diglucoside was detected in V. amurensis, its hybrids and their wines, and pyranoanthocyanins were also detected in the grape skins. These characteristics of the anthocyanin profile of V. amurensis and its hybrids are helpful for distinguishing these grape cultivars from other grape species (e.g., V. vinifera, V. labrusca, V. rotundifolia) and for identifying the authenticity of red wines. The PCA results also suggested that the anthocyanin profiles are helpful for classifying these cultivars of V. amurensis. In addition, the anthocyanin profile of V. amurensis is also important for the study of anthocyanin biosynthesis, because it includes pyranoanthocyanins while excluding the non-acylated anthocyanins.
"year": 2010,
"sha1": "89fbc981db509a93a3f73415ae8e8ca15a988be7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/11/5/2212/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "89fbc981db509a93a3f73415ae8e8ca15a988be7",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Experimental study on creep characteristics of anchorage rock with cracks under cable pulling condition
The creep characteristics of anchorage rock mass with cracks are the basis for the stability control of anchorage rock and a current research hotspot. Relying on the Muzhailing soft rock tunnel project and taking the anchorage segment as the research object, numerical simulation is used as the main method to systematically study the instantaneous and creep mechanical properties of rock with different crack combinations under the condition of cable pulling. The results show that: 1) under instantaneous pulling conditions, the effective anchorage range is funnel-shaped and the maximum value of the maximum shear stress (MSS) appears at the front end of the anchorage rock, while under long-term pulling conditions the effective anchorage range is uniformly distributed along the cable axis and the maximum value of the MSS appears at the end of the cable; 2) cracks with specific combinations can expand the anchorage range along the axial direction and slow down the shear failure at the front end of the anchorage rock, which is especially obvious under instantaneous pulling conditions; 3) the optimal number of cracks and the optimal crack specific area can be obtained by the "virtual intersection method", and both the instantaneous MSS and the creep MSS show an obvious 45° crack angle effect. The research results are intended to provide a reference for further exploring the creep mechanism of anchorage rock with complex crack combinations.
Introduction
In recent years, with the extension of rock engineering into deeper underground space, geological disasters, especially creep disasters caused by the "three highs and one disturbance", have increased, and the engineering accidents caused by them have become common. A large amount of data indicates [1][2][3] that most anchorage engineering failures occur in the late stage, when the engineering structure has been put into use, while the creep effect of the surrounding rock is rarely considered in the design of underground engineering support. With the continuous deepening of our understanding of rock as an anisotropic, complex geological medium, the importance of the creep characteristics of anchorage rock with complex crack combinations is becoming more and more obvious. Therefore, research on the creep characteristics of anchorage rock with cracks under cable pulling conditions is a prerequisite for disaster prevention in anchoring engineering [4][5][6][7][8].
Since the concept of creep was proposed, a large number of scholars have conducted extensive research on the creep properties of rocks and have achieved abundant results. Since the beginning of the 21st century, with the development of information technology and the continuous improvement of experimental equipment, research on the creep characteristics of cracked rock has increased significantly [9][10][11][12][13][14]. In recent years, with the emergence of anchor cables, represented by NPR anchor cables, that can accommodate large rock deformations, the creep characteristics of the anchored body under cable pulling conditions have gradually become a new research focus [15][16][17][18]. Although the above research results are abundant, they mostly concern cracked rocks without anchor cables or anchorage rock without cracks; there is little research on the creep of cracked anchorage rock, and the instantaneous mechanical properties of cracked anchorage rock are also poorly understood. Based on a comparison of the instantaneous and creep mechanical properties of anchorage rock, this paper presents numerical experiments on the creep properties of anchorage rock with different crack combinations under instantaneous and long-term cable pulling conditions, aiming to enrich the research results in this field and provide a reference for understanding the creep mechanism of anchorage rock with cracks.
Experimental design
This paper relies on the Muzhailing soft rock tunnel project. The tunnel has a large burial depth, and the rock joints and cracks are relatively developed. It adopts an NPR end-anchored cable support method with alternating lengths. The pre-tensioning force of the anchor cable is 300 kN, which is used as the pulling force of the anchor cable in the numerical tests. The row spacing between anchor cables is 1000 mm × 1200 mm, and the length of the anchorage section is 1500 mm; this section is taken as the research object. The geometric characteristics of the cracks in the anchorage section are described by the number of cracks, the crack specific area and the crack angle. The number of cracks is 0, 1, 2 or 3, dividing the anchorage segment equally; the crack specific area, expressed as the ratio of the crack area to the cross-sectional area of the anchorage rock, is 0.03, 0.13, 0.33 or 0.53; and the crack angle, the angle between the crack and the vertical direction, is 0°, 30°, 45° or 60°. The numerical model shown in Figure 1 was established with the FLAC3D software; the anchorage depth of the anchor cable is 1500 mm and the model size is 1000 mm × 1200 mm × 3000 mm.
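The factor levels above can be enumerated to form a test matrix. The level values are taken from the text, but the grouping into cases (a crack-free baseline plus a full factorial of the three crack factors) is our own assumption; the paper does not state how factors were combined:

```python
from itertools import product

# Factor levels as stated in the text. A crack count of 0 has no area or
# angle, so it is handled as a separate baseline case (our assumption).
crack_numbers = [1, 2, 3]
specific_areas = [0.03, 0.13, 0.33, 0.53]  # crack area / cross-sectional area
angles_deg = [0, 30, 45, 60]               # measured from the vertical

baseline = [{"n_cracks": 0}]               # intact anchorage rock
cases = baseline + [
    {"n_cracks": n, "specific_area": a, "angle_deg": ang}
    for n, a, ang in product(crack_numbers, specific_areas, angles_deg)
]
```

Under the full-factorial assumption this yields 49 cases (1 baseline + 3 × 4 × 4); a one-factor-at-a-time design, as the figures in this paper suggest, would need far fewer runs.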
Parameters acquisition
In order to obtain the basic mechanical parameters of the rock, uniaxial compressive strength tests were carried out on rock samples taken from the Muzhailing tunnel. These samples contain beddings and no cracks, and the angles of the beddings are 0°, 30°, 45° and 60°, respectively. The test results are shown in Figure 2, and the basic mechanical parameters of the rock are shown in Table 1, consistent with [19], in which the rock samples are the same as those in this paper. The Mohr-Coulomb criterion is used as the instantaneous constitutive model of the numerical experiment, and the Burgers creep model is used as the creep constitutive model, as shown in Figure 3 [20]. The creep equation of the Burgers model is [20]:

ε(t) = σ/E2 + σt/η2 + (σ/E1)[1 − exp(−E1t/η1)]

where σ is the normal stress; E2, η2 are the elastic modulus and viscosity coefficient of the Maxwell body; and E1, η1 are the elastic modulus and viscosity coefficient of the Kelvin body. The parameters of the Burgers creep equation were fitted using the monitoring data of points A and C on sections K222+001 and K222+003 of the Muzhailing tunnel, as shown in Figure 4. The fitted results are shown in Figures 5 and 6, and the fitted parameter values are listed in Table 2.
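The Burgers creep law can be checked term by term: the Maxwell spring gives the instantaneous strain σ/E2, the Maxwell dashpot a steady strain rate σ/η2, and the Kelvin body a delayed strain saturating at σ/E1. A minimal sketch, using arbitrary illustrative parameter values rather than the fitted values of Table 2:

```python
import math

def burgers_strain(t, sigma, E1, eta1, E2, eta2):
    """Creep strain of the Burgers model under constant stress sigma:
    eps(t) = sigma/E2 + sigma*t/eta2 + (sigma/E1)*(1 - exp(-E1*t/eta1)),
    with E2, eta2 the Maxwell body and E1, eta1 the Kelvin body."""
    return (sigma / E2
            + sigma * t / eta2
            + (sigma / E1) * (1.0 - math.exp(-E1 * t / eta1)))

# Illustrative parameters (arbitrary units, NOT the fitted values of Table 2)
sigma, E1, eta1, E2, eta2 = 1.0, 2.0, 5.0, 10.0, 50.0

eps0 = burgers_strain(0.0, sigma, E1, eta1, E2, eta2)   # instantaneous: sigma/E2
late_slope = (burgers_strain(1001.0, sigma, E1, eta1, E2, eta2)
              - burgers_strain(1000.0, sigma, E1, eta1, E2, eta2))  # -> sigma/eta2
```

At t = 0 the strain equals σ/E2, and at large t the curve approaches a straight line of slope σ/η2 offset by σ/E1, the three regimes visible in a typical creep curve.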
Instantaneous and creep displacement analysis
When the 300 kN pretension is applied, the instantaneous displacement field (IDF) inside the anchorage rock changes with the number of cracks as shown in Figure 7. It can be seen from the figure that under the condition of anchor cable pulling, the maximum instantaneous displacement appears at the front end of the rock, and the funnel-shaped displacement field gradually attenuates towards the deep part, indicating that the effective anchorage range is concentrated into a funnel shape at the front end and that the anchorage effect gradually weakens towards the rear end. With the appearance of cracks and the increase in their number, the displacement field forms a new funnel shape after crossing each crack, as if a new anchorage displacement field formed behind the crack. The reason is that the cracks increase the stress carried by the anchor cable in the crack area compared with the crack-free case, so the anchorage range expands towards the rear end and the displacement field grows accordingly, indicating that increasing the number of cracks can expand the anchorage range of the anchor cable. Figure 8 shows the creep displacement field (CDF) of the anchorage rock. The creep duration is 48 hours, and the displacement at the intersection of the rock and the anchor cable is recorded. It can be seen from the figure that the maximum creep displacement also appears at the front end of the rock. The difference is that the maximum creep displacement develops deeper along the anchor cable than in the instantaneous case, and the funnel-shaped range of the CDF expands deeper than that of the IDF, indicating that under long-term pulling loads the anchorage range of the rock develops deeper with time. Figure 9 shows the change of the IDF of the anchorage rock with the crack specific area.
It can be seen from the figure that with the increase of the crack specific area, the new funnel-shaped displacement field formed behind the crack gradually grows, indicating that increasing the crack specific area can expand the anchorage range of the anchor cable in the axial direction. Figure 9. Variation of IDF with crack specific area. Figure 10 shows the change of the CDF with the crack specific area. It can be seen from the figure that as the crack specific area increases, the new funnel-shaped displacement field formed behind the crack gradually grows, indicating that under long-term anchor cable pulling, increasing the crack specific area can also expand the anchorage range of the anchor cable in the axial direction. Figure 11 shows the change of the IDF of the rock with the crack angle. It can be seen from the figure that with the appearance of a crack angle, an asymmetric distribution of the displacement field of the anchorage section begins to appear; as the angle gradually increases, the asymmetry becomes more and more obvious near the crack and less and less obvious deeper behind the crack.
Figure 11. Variation of IDF with crack angle. In Figure 12, the asymmetry of the CDF expands with the increase of the crack angle, and the change is more obvious than that of the IDF. Figure 12. Variation of CDF with crack angle. According to the above numerical test results, (a), (b) and (c) in Figure 13 respectively show the changes of the instantaneous displacement and the creep displacement at the front end of the anchorage rock with the number, specific area and angle of cracks. The figure shows that in every case the creep displacement is much larger than the instantaneous displacement, which is consistent with the large-deformation characteristics of the Muzhailing soft rock. As the number and the specific area of cracks increase, both the instantaneous and creep displacements increase linearly; as the crack angle increases, both decrease nonlinearly. For the instantaneous displacement, the rate of decrease is slow before 45° and increases sharply after 45°, so the crack angle has an obvious 45° effect on the instantaneous displacement; for the creep displacement, the rate of decrease grows gradually as the angle increases.
Instantaneous and creep MSS analysis
Due to the pulling of the anchor cable, the rock mass around the anchor cable is mainly subjected to shear, and when the anchor cable is pulled out, shear failure occurs between the rock mass, the binder and the anchor cable. Therefore, the stress discussed in this paper is the maximum shear stress. Figure 14 shows the variation of the instantaneous MSS field of the anchorage rock with the number of cracks. From the figure, the maximum instantaneous MSS occurs in a small region at the front end of the rock closely surrounding the anchor cable. Compared with the displacement field, the maximum shear stress field is mainly concentrated in a cone-shaped distribution around the anchor cable, and it attenuates more slowly in the axial direction of the anchor cable than in the lateral direction. With the appearance of cracks, a new cone-shaped maximum shear stress field appears behind the cracks, indicating that the occurrence of cracks and the increase in their number can effectively expand the anchorage range of the anchor cable in the axial direction. At the same time, the cracks share part of the stress concentration at the front end of the rock, reducing the concentrated stress and preventing premature tensile-shear failure at the front end of the rock. It can be seen from the creep results that after a certain creep time, the MSS field of the rock is basically uniformly distributed along the axial direction of the anchor cable and tightly wound around it. The maximum value of the MSS appears at the end of the anchor cable, which is the opposite of the instantaneous situation, indicating that during the creep process the concentrated stress is gradually released from the front end towards the rear in the axial direction until it is evenly distributed around the anchor cable and reaches its maximum at the end of the anchor cable.
Moreover, the MSS curve at the front end of the rock shows a decaying trend, and with the appearance and the increasing number of cracks, the MSS field expands slightly in the lateral direction of the anchor cable, with no obvious change in the axial direction. Figure 16 shows the change of the instantaneous MSS of the rock with the crack specific area. It can be seen from the figure that as the crack specific area increases, the new cone-shaped MSS field formed behind the crack gradually grows, showing that increasing the crack specific area can also expand the anchorage range of the anchor cable in the axial direction and reduce the concentrated stress at the front end of the rock, although the effect is not obvious. Figure 17 shows the change of the creep MSS field of the rock with the crack specific area. It can be seen from the figure that increasing the crack specific area slightly expands the distribution range of the MSS field in the lateral direction of the anchor cable, while the MSS field is hardly affected in the axial direction. Figure 18 shows the variation of the instantaneous MSS field of the rock with the crack angle. It can be seen from the figure that with the increase of the crack angle, the instantaneous MSS field becomes obviously asymmetric in the lateral direction of the anchor cable, and as the angle increases further, this asymmetry becomes more and more obvious. This rule also applies to the variation of the creep MSS field of the rock shown in Figure 19; the asymmetry of both fields appears in the lateral direction of the anchor cable.
Figure 19. Variation of creep MSS field with crack angle (panels at 0°, 30°, 45° and 60°).

The shear stress distribution of the cable interface is shown in Figure 20. Before x0, the shear stress increases linearly with depth; after x0, it decreases exponentially according to the distribution function τ0(x), with maximum shear stress τ0max. The tensile force P of the cable is therefore obtained by integrating the shear stress over the cable surface, where d is the diameter of the cable. When there is a crack within the effective anchorage range, the shear stress of the cable interface vanishes at the crack, and the shear stress on both sides of the crack redistributes, as shown in Figure 21; the tensile force P of the cable is then obtained in the same way from the redistributed profile. If the cable tension P remains constant, it follows that the maximum shear stress of the cable interface is less than in the crack-free case and the effective anchorage depth is greater; the same reasoning extends to the situation with multiple cracks. This paper only discusses the theoretical distribution of shear stress at the cable interface without considering the time effect; further studies are needed for the creep case.

Figure 20. Theoretical distribution of shear stress at cable interface [21]. Figure 21. Theoretical distribution of shear stress at cable interface with crack.

According to the above numerical test data, panels (a), (b) and (c) of Figure 22 respectively show the changes of the maximum instantaneous MSS and creep MSS with the number, specific area and angle of the cracks. In every case the maximum instantaneous MSS of the rock exceeds the maximum creep MSS, because the creep process allows stress to be released through deformation, slowing the damage of the rock material. With increasing number and specific area of cracks, the maximum instantaneous MSS tends to increase slowly and then decrease sharply, while the maximum creep MSS increases approximately linearly. In the same coordinate system the two curves scarcely intersect; however, since the damage caused by creep stress exceeds that caused by instantaneous stress, after the creep effect is amplified the two curves have a virtual intersection point, and the abscissa of that point gives the optimal number of cracks or the optimal crack specific area. As the crack angle increases, the maximum instantaneous MSS fluctuates slightly before 45° and the maximum creep MSS decreases slowly before 45°; both decrease rapidly after 45°, showing that the instantaneous and creep MSS of the anchorage rock have a marked 45° crack-angle effect. In practical anchorage engineering, grouting is a common method of improving the stability of cracked rock mass. According to the conclusions of this paper, properly controlling the grouting amount and slurry strength so as to preserve a certain amount of cracking is beneficial to the long-term stability of the anchorage rock.
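The interface model just described (shear stress rising linearly to τ0max at x0, then decaying exponentially) lends itself to a direct numerical check: the cable tension is the integral of the interface shear stress over the cable surface, P = πd ∫ τ(x) dx, which is one plausible reading of the elided formula above. The sketch below is illustrative only: the profile shape and every parameter (tau_max, x0, decay rate k, anchorage length L, diameter d) are hypothetical and not taken from the Muzhailing tests.

```python
import math

def shear_stress(x, tau_max, x0, k):
    """Illustrative interface shear stress: linear rise to tau_max at x0,
    then exponential decay beyond x0 (assumed form, not fitted to data)."""
    if x <= x0:
        return tau_max * x / x0
    return tau_max * math.exp(-k * (x - x0))

def cable_tension(d, L, tau_max, x0, k, n=100_000):
    """P = pi * d * integral of tau(x) over [0, L], midpoint rule."""
    h = L / n
    s = sum(shear_stress((i + 0.5) * h, tau_max, x0, k) for i in range(n))
    return math.pi * d * s * h

def cable_tension_exact(d, L, tau_max, x0, k):
    """Closed form of the same integral, as a sanity check."""
    area = tau_max * x0 / 2 + (tau_max / k) * (1 - math.exp(-k * (L - x0)))
    return math.pi * d * area
```

With a crack inside the anchorage range, the same integral would simply be taken over the redistributed profile with τ = 0 at the crack position.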
Conclusion
Based on the Muzhailing soft rock tunnel project, this paper takes the anchorage segment as the research object and systematically studies the instantaneous and creep mechanical properties of anchorage rock with different crack combinations under anchor cable pulling. The main results are as follows: (1) Under instantaneous cable pulling, the displacement field and MSS field of the anchorage rock are funnel-shaped along the cable axis, and the maximum displacement and MSS appear at the front end of the rock. Under long-term cable pulling, the displacement field is likewise funnel-shaped but its maximum appears at the end of the rock, while the MSS field is evenly distributed along the anchor cable axis and its maximum appears at the end of the anchor cable.
(2) The increase in the number and the specific area of cracks can expand the effective anchorage range of the anchor cable, reduce the concentrated stress and avoid premature shear failure at the front end of the anchorage rock to a certain extent. For the instantaneous displacement of the front end of the anchorage rock and the maximum instantaneous MSS, there is an obvious 45° crack angle effect, but for the creep displacement and the maximum creep MSS, the effect is not obvious.
(3) The existence of certain crack geometric characteristics is beneficial to the distribution of the MSS of the anchorage rock. Among them, the optimal number and specific area of cracks can be obtained by the virtual intersection method described in this paper. After the crack angle is greater than 45°, the maximum instantaneous displacement and creep displacement, the maximum instantaneous MSS, and the maximum creep MSS all have a clear decreasing trend.
"year": 2020,
"sha1": "47838236c70a31a2d720e64b30790537406f10da",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/570/2/022040",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "b0ba372a3e8d626f8a264455157f32dac09763df",
"s2fieldsofstudy": [
"Engineering",
"Geology"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
Uniting The Sun’s Hale Magnetic Cycle and ‘Extended Solar Cycle’ Paradigms
Through meticulous daily observation of the Sun’s large-scale magnetic field the Wilcox Solar Observatory (WSO) has catalogued two magnetic (Hale) cycles of solar activity. Those two (∼22-year long) Hale cycles have yielded four (∼11-year long) sunspot cycles (numbers 21 through 24). Recent research has highlighted the persistence of the “Extended Solar Cycle” (ESC) and its connection to the fundamental Hale Cycle, albeit through a host of proxies resulting from image analysis of the solar photosphere, chromosphere and corona. This short manuscript presents the correspondence of the ESC, the surface toroidal magnetic field evolution, and the evolution of the Hale Cycle. As Sunspot Cycle 25 begins, interest in observationally mapping the Hale and Extended cycles could not be higher, given the potential predictive capability that synoptic-scale observations can provide.
INTRODUCTION
For over four centuries solar observers have pondered the physical origins of the canonical marker of solar activity: the sunspot. It took more than 200 years after the sketching and cataloguing of sunspots commenced before it was discovered that the number of sunspots waxes and wanes over an approximately 11-year period (Schwabe, 1849). A half century later, mapping the latitudinal variation of the spotted Sun yielded the "butterfly diagram," a pattern progressing from latitudes around 30° (north and south) to the equator over the 11-year period (Maunder, 1904). In the golden age of solar astronomy that followed, it was first suggested (Hale, 1908) and then demonstrated (Hale et al., 1919) that sunspots were sites of intense magnetism protruding through the Sun's photosphere, and that the polarities of the butterfly's wings alternated in sign with a period of about 22 years (Hale and Nicholson, 1925). This alternating magnetic polarity cycle is synonymously identified with its discoverer as the eponymous (22-year) "Hale Cycle," or "Hale Magnetic Polarity Cycle." Understanding how the magnetic spots, their butterfly patterning, and the polarity flipping are tied together to drive solar activity has formed the keystone problem of observational (Babcock, 1961) and theoretical (Leighton, 1969) solar- and astro-physics in the intervening century (e.g., Hathaway, 2010).
For over four decades another term describing solar activity has sporadically appeared in the literature: the "Extended Solar Cycle." The extended solar cycle (ESC; e.g., Wilson, 1987) was used to describe a spatio-temporal extension of the sunspot butterfly pattern to higher solar latitudes (to around 55°) and further back in time (by almost a decade). A culmination of many years of painstaking observation, the ESC is exhibited in prominences and filaments (e.g., Bocchino, 1933; Hansen and Hansen, 1975), 'ephemeral' (small-scale transient) active regions (e.g., Harvey and Martin, 1973), global-scale features of the Sun's corona (e.g., Altrock, 1988) and the zonal flow patterns (e.g., Howard and Labonte, 1980; Snodgrass and Wilson, 1987) of the 'torsional oscillation.' In effect, this assortment of observational phenomena created a set of spatio-temporally overlapping chevron-like activity patterns, revisited by McIntosh et al. (2014; hereafter M2014). They identified a pattern of coronal and photospheric features that was greatly extended in time and latitude relative to the sunspot butterfly. They deduced that the activity bands observed were the (toroidal) magnetic bands of the Hale Cycle, but no concurrent photospheric magnetic measurement was available to affirm their deduction. The core inference of their study was that the spatio-temporal overlap and interaction of the extended activity bands observed contributed directly to the shape (the butterfly) and modulation (the amplitude) of the sunspot cycle.
Figure 1 shows the evolution of the total sunspot number, the latitudinal distribution of sunspots, and the data-inspired construct introduced by M2014 that inferred the magnetic activity band arrangement and progression of the Hale Cycle and how those bands contribute to the modulation of sunspot cycles. This 'band-o-gram,' introduced in Section 3 (and Fig. 8) of M2014, was intended as a qualitative, not quantitative, illustration of the position, timing and magnetic field strength of the bands, with the emphasis on their phasing. The activity bands in the band-o-gram start their (assumed) linear progression towards the equator from 55° latitude at each hemispheric maximum, meeting and disappearing at the equator at the terminator. At the terminator the polar reversal process commences at 55° latitude, progressing poleward at an (assumed) linear rate and reaching the pole at the appropriate hemispheric maximum. Thus, given a list of hemispheric maxima and terminators, a band-o-gram can be constructed. The width of the bands is prescribed by a Gaussian distribution 10 degrees in latitude, commensurate with those observed in the coronal bright points originally studied by M2014.
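The band-o-gram construction just described is fully algorithmic: a band starts at 55° latitude at a hemispheric maximum, drifts linearly to the equator, vanishing at the terminator, and carries a Gaussian latitudinal profile about 10° wide. A minimal sketch of that construction follows; the epoch dates used in the test are made up, not the M2014 catalogue values.

```python
import math

def band_center(t, t_max, t_term, start_lat=55.0):
    """Linear equatorward drift: the band sits at start_lat at the
    hemispheric maximum t_max and reaches the equator at the terminator."""
    return start_lat * (t_term - t) / (t_term - t_max)

def band_profile(lat, center, width=10.0):
    """Gaussian latitudinal profile, ~10 degrees wide as in M2014."""
    return math.exp(-((lat - center) / width) ** 2)

def bandogram(times, lats, t_max, t_term, polarity=+1):
    """One band of a band-o-gram as a (time x latitude) grid of signed
    weights; zero outside the band's lifetime."""
    grid = []
    for t in times:
        if t_max <= t <= t_term:
            c = band_center(t, t_max, t_term)
            grid.append([polarity * band_profile(l, c) for l in lats])
        else:
            grid.append([0.0 for _ in lats])
    return grid
```

A full band-o-gram superposes several such bands, one per hemisphere per cycle with alternating polarity, keyed to the list of hemispheric maxima and terminators.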
DATA & METHOD
The Wilcox Solar Observatory (WSO) began collecting daily low spatial resolution observations of the Sun's global (or mean) magnetic field in May 1975 (Scherrer et al., 1977), and a very well-known WSO data product is the evolution of the Sun's polar cap magnetic fields (Svalgaard et al., 1978). These low-resolution synoptic observations are ideal for identifying large-scale, long-lived patterns, reducing the effects of the small-scale, rapidly changing fields of emerging magnetic regions. Following Duvall et al. (1979), the daily WSO magnetograms are obtained by scanning boustrophedonically along 11 east-west rows (i.e., observing alternate rows in opposite directions: if one row is taken from left to right then the next is taken from right to left). The 180" magnetograph aperture moves 90" between points in the east-west direction and 180" north or south between rows, taking a 15 s integration of the Fe I 5247 Å line at 195 points on the solar disk, resulting in a total of about 2 hours per daily map. Because of the large aperture size of the magnetograph, the regions from 70° to the poles lie entirely within the last aperture and are not resolved.
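The boustrophedon scan order described above is simple to state in code. The sketch below generates the visiting order for a rectangular grid of aperture pointings; the real WSO raster follows the solar disk, so its 11 rows have unequal lengths (195 points in total), which this illustration ignores.

```python
def boustrophedon(rows, cols):
    """Return (row, col) pointings in boustrophedon order:
    even rows scan left-to-right, odd rows right-to-left."""
    order = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cs:
            order.append((r, c))
    return order
```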
Following the method of Howard (1974) and Duvall et al. (1979), the daily WSO magnetographs can be decomposed into poloidal and toroidal components which, according to dynamo models, are regenerated from one another, alternating and repeating in an approximately 22-year cycle (e.g., Charbonneau, 2010). The method used to perform this decomposition is detailed by Shrauner and Scherrer (1994): the daily WSO magnetographs are first separated into their positive and negative magnetic field polarities, which are then tracked as they cross the solar disk. They are then fitted to estimate the average east-west inclination angle of the magnetic field, i.e. the toroidal component of the photospheric magnetic field (see Fig. 1 of Lo et al., 2010, for an illustration of the geometry).
In this paper we use the Shrauner and Scherrer (1994) derivative data product of the WSO toroidal magnetic field component in the photosphere, and the WSO polar magnetic field estimate using the five central aperture pointings (central meridian ± two) in the first and last rows of observations, as documented by Svalgaard et al. (1978).
RESULTS
An initial study of this slowly evolving behavior (Shrauner and Scherrer, 1994) noted the potential relationship with the ESC. Figure 2 contrasts four and a half decades of WSO observations with the evolution of the sunspot number over the same timeframe. Panel B shows the latitude-time variation of the WSO toroidal magnetic field component in addition to the field strength of the northern and southern polar regions.
Several features of Figure 2 are immediately visible, but perhaps the most striking are the strong overlap in time of the toroidal magnetic systems; the short transitions from one polarity to the next, evidenced by the narrow white (very nearly 0 G) zones; the lack of field migration across the Sun's equator; and the close association of these last two features at the Sun's equator four times in the record (in 1978, 1988, 1998 and 2011). The patterns, including a strong resemblance to the ESC, are described in more detail by Shrauner and Scherrer (1994) and Lo et al. (2010).
The last of these features, synchronized zero-crossing transitions at the lowest latitudes in each hemisphere, are concurrent with events that mark the end of the Hale Cycle progressions, or termination events as they have become known, first described by M2014 and explored in more detail recently (McIntosh et al., 2019; hereafter M2019). The termination events are illustrated with dashed vertical lines in Figure 2. These events signify the final cancellation, at the equator, of the magnetic systems responsible for the previous sunspot cycle and, near-simultaneously, a period of very rapid growth of the next sunspot cycle at mid-solar latitudes. Interestingly, M2019 also noted that these termination events at the equator were co-temporal with the start of the polar magnetic field reversal process. This process is perhaps best visualized through the observed progression of the highest-latitude filaments (the polar crown filaments) to the pole, the so-called "rush to the poles" (e.g., Babcock, 1961; Sheeley, Wang, and Harvey, 1989). The time at which this poleward march completes corresponds to when the measured polar magnetic field crosses zero.
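Terminator epochs are read off the data as zero crossings of the low-latitude toroidal field. A generic sign-change detector for any sampled series, with linear interpolation of the crossing time, might look like this; it is illustrative only, and the published terminator dates come from the analyses cited above, not from this routine.

```python
def zero_crossings(times, values):
    """Return interpolated times at which a sampled series changes sign."""
    crossings = []
    for i in range(1, len(values)):
        a, b = values[i - 1], values[i]
        if a == 0.0:
            crossings.append(times[i - 1])
        elif a * b < 0.0:
            # Linear interpolation between the two bracketing samples.
            frac = a / (a - b)
            crossings.append(times[i - 1] + frac * (times[i] - times[i - 1]))
    return crossings
```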
In order to visually compare the WSO observations [Figure 2B] and the ESC band-o-gram [Figure 1C] (extended to cover the baseline of the WSO observations) we convert the WSO data from sine latitude to latitude and the result can be seen in Figure 3.
High-Res/Low-Res & The 2021 Hale Cycle Termination
The alternating toroidal field patterns clearly visible in the WSO observations are borne out also by considerably higher spatial resolution observations from space with SOHO/MDI and SDO/HMI, shown in Fig. 4 and in Fig. 2 of Liu and Scherrer (2022), which, unlike our previous plots, are current to the time of publication. In tandem, the three magnetograph observations illustrate the clear pattern of the ESC consistent with previous studies. Further, as discussed immediately above, we observe that another zero crossing of the toroidal magnetic field at the equator, characteristic of a Hale Cycle terminator event, occurred very recently. In a forthcoming publication we will explore this event in detail (McIntosh et al., in preparation).
DISCUSSION
A general criticism of the M2014 band-o-gram is that it was based on catalogued proxies of the photospheric magnetic field through chromospheric and coronal features. Those tracked features, formed by the overlapping activity bands observed, were not necessarily representative of the photospheric or interior magnetic field itself. It is clear from the WSO observations that, while comparison of the observed progression with the band-o-gram remains qualitative, there is an overwhelming correspondence between the features observed by the WSO and those of the highly idealized band-o-gram. We note that a similar treatment of higher spatial resolution photospheric observations from the Mt. Wilson Solar Observatory over a shorter timeframe yields a similar correspondence (Ulrich and Boyden, 2005). Further, it is known that the heliosphere exhibits a 'sector' structure. The sector, or Hale sector, structure reflects the polarity of the heliospheric magnetic field relative to the solar direction, in a state of either "away" from or "towards" the Sun, and expresses the largest spatial scales of solar magnetism and connectivity (e.g., Hudson, Svalgaard and Hannah, 2014). Since the earliest articles about sector structure (e.g., Rosenberg and Coleman, 1969) the solar cycle has been noted to have a strong annual modulation around solar minimum. At that time the heliospheric current sheet (HCS) is so flat that for six months of the year (early December to early June) the Earth is at southern heliographic latitudes and the dominant polarity corresponds to the Sun's southern hemisphere. For the other six months of the year in these epochs the Earth almost exclusively samples the dominant polarity of the north, holding at a level of ∼85% (e.g., Svalgaard and Wilcox, 1975).

The top panel of Fig. 5 shows the tilt of the HCS as computed by the WSO from 1976 to the present. The slowly evolving solar minimum behavior of the HCS is shown graphically in the lower panel of the figure, an adaptation of Fig. 1 of Echer and Svalgaard (2004). The wavelet transform is used to illustrate the prominent periodicities in the sector structure: one at approximately one Carrington rotation (CR) timescale, the other at approximately one year. There are two clear results shown in Figure 5: (1) the strongest signal at one year indeed corresponds to the times of extreme HCS flatness, but the strongest signal reverts to CR timescales at Hale Cycle terminators, when the tilt rises sharply with new and stronger new-cycle active regions emerging at mid latitudes; (2) the onset of the annual periodicity signal is at approximately 0.4 cycles (first dotted vertical line) for even-numbered cycles and at 0.6 cycles (second dotted vertical line) for odd-numbered cycles. We reserve discussion of the 22-year difference between odd- and even-numbered cycles to a manuscript in preparation that looks at a longer epoch than the WSO era we focus on here. Nevertheless, this highly ordered large-scale sector structure is one more piece of evidence consistent with the data-inspired ESC schematic based on the timing of the Hale Cycle terminators.
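The dominance of the annual periodicity in the sector polarity during flat-HCS epochs can be illustrated with a toy spectral check: build a daily ±1 polarity series with an annual square-wave modulation and compare Fourier power at the annual period against the ~27-day Carrington rotation period. This is a stdlib sketch on synthetic data, not the Morlet wavelet analysis used for Figure 5.

```python
import math

def polarity_series(days, period=365.25):
    """Toy daily sector polarity: a +1/-1 square wave of the given period."""
    return [1 if math.sin(2 * math.pi * d / period) >= 0 else -1
            for d in range(days)]

def fourier_power(series, period):
    """Squared amplitude of the Fourier component at the given period (days)."""
    w = 2 * math.pi / period
    c = sum(s * math.cos(w * i) for i, s in enumerate(series))
    q = sum(s * math.sin(w * i) for i, s in enumerate(series))
    return c * c + q * q
```

For a purely annually modulated polarity record the annual power dwarfs the power near one Carrington rotation; in the real record the balance flips at terminators, as Figure 5 shows.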
CONCLUSION
The meticulous daily synoptic-scale observations of the WSO have captured two complete 22-year Hale cycles. These observations have permitted a mapping of the Sun's photospheric toroidal magnetic field component over that timeframe. Key features of the WSO observations compare directly to the data-inspired schematic of the ESC that was conceived to illustrate how the activity bands of the ESC can interact to shape the latitudinal progression of sunspot cycles and their amplitude. The WSO observations should unambiguously unify the Hale magnetic cycle and the ESC as being, physically, one and the same. These low spatial resolution ground-based observations are corroborated by higher resolution space-based magnetographic observations from SOHO and SDO, with all three identifying the zero-crossing events we associate with Hale Cycle terminators. As Lo et al. (2010) and M2014 inferred, there is predictive capability in these synoptic analyses through the ESC, providing strong indicators of the current progression and potential evolution of upcoming solar activity on the decadal scale, beyond those amenable through the analysis of sunspots. This result demonstrates the intrinsic power of synoptic observations at a time when it is becoming increasingly difficult to sustain such efforts.
Figure 1 .
Figure 1. Sunspot evolution since 1996. Comparing and contrasting the evolution of the total sunspot number (panel A), the spatio-temporal distribution of sunspots provided by the US Air Force and NOAA (panel B), and a data-driven schematic of the Hale Cycle evolution constructed by M2014, the band-o-gram (panel C).
Figure 2 .
Figure 2. WSO inferred toroidal magnetic field evolution since 1976. Comparing and contrasting the evolution of the total sunspot number (panel A) with the spatio-temporal distribution of the derived toroidal magnetic field component (central) and polar magnetic field components (above north and below south) derived from daily WSO observations (panel B). Note that the toroidal field panel is in its native sine latitude format (Lo et al., 2010). The horizontal dashed lines indicate a latitude of 55° while the vertical dashed lines in each panel mark the times of the Hale Cycle termination events studied by M2019.
Figure 3 .
Figure 3. Comparing and contrasting the WSO toroidal magnetic field (panel A; see Figure 2B, now expressed in latitude) and polar cap measurements with the data-inspired band-o-gram (cf. Fig. 1C), now extended to cover the WSO record. The horizontal dashed lines indicate a latitude of 55° while the vertical dashed lines in each panel mark the times of the Hale Cycle termination events studied by M2019.
Figure 4 .
Figure 4. Comparing and contrasting the WSO toroidal magnetic field (panel A) and a version derived from the higher-resolution full-disk space observations of SOHO/MDI and SDO/HMI (panel B), the latter described by Liu and Scherrer (2022) and updated to the present. The vertical dashed line in panel B indicates the transition from SOHO/MDI observations (prior to May 2010) to SDO/HMI (following May 2010).
Figure 5 .
Figure 5. (top) The tilt angle of the heliospheric current sheet as measured by the WSO. (bottom) Morlet wavelet map of the interplanetary magnetic field polarity 1976-2022, showing periodicities from 4 days to 22 years. As in Figures 2 and 3, the vertical dashed lines in each panel mark the times of the Hale Cycle termination events studied by M2019; the dotted vertical lines correspond to 0.4 and 0.6 of the cycles' duration from terminator to terminator (cf. Leamon, McIntosh and Title, 2022). The dashed horizontal line in the HCS tilt panel is drawn at 23.4°; exceeding this value is a reasonable scalar proxy for the Hale Cycle terminator.
"year": 2021,
"sha1": "61c609c45041300c7f40aa3c9a4381ff7e74e27a",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-1086644/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c6e0ba3f2691858f5ed91d4f76a43f928d07b68f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
Denominators in cluster algebras of affine type
The Fomin-Zelevinsky Laurent phenomenon states that every cluster variable in a cluster algebra can be expressed as a Laurent polynomial in the variables lying in an arbitrary initial cluster. We give representation-theoretic formulas for the denominators of cluster variables in cluster algebras of affine type. The formulas are in terms of the dimensions of spaces of homomorphisms in the corresponding cluster category, and hold for any choice of initial cluster.
Introduction
Cluster algebras were introduced by Fomin and Zelevinsky in [FZ1]. They have strong links with the representation theory of finite dimensional algebras (see e.g. the survey articles [BM,K2]), with semisimple algebraic groups and the dual semicanonical basis of a quantum group (see e.g. the survey article [GLS]), and with many other areas (see e.g. the survey article [FZ2]); these articles contain many further references.
Here we consider acyclic coefficient-free cluster algebras of affine type, i.e. those which can be given by an extended Dynkin quiver. We give a formula expressing, with respect to any given initial cluster, the denominators of cluster variables in terms of the dimensions of certain Hom-spaces in the corresponding cluster category. The representation theory, and hence the cluster category, is well understood in the tame case; thus the formula can be used to compute the denominators explicitly.
We assume that k is an algebraically closed field. Caldero and Keller [CK2] (see also [BCKMRT]) have shown, using the Caldero-Chapoton map [CC], that for an acyclic quiver Q, the cluster variables of the acyclic cluster algebra A Q are in bijection with the indecomposable exceptional objects in the cluster category C H , where H = kQ is the path algebra of Q. Furthermore, under this correspondence the clusters correspond to cluster-tilting objects. We denote by x M the cluster variable corresponding to the exceptional indecomposable M in C kQ .
Recall that an indecomposable regular H-module X lies in a connected component of the AR-quiver of H known as a tube, which we denote by T X . For a regular indecomposable exceptional module X, we let W X denote the wing of X inside T X , i.e. the category of subfactors of X inside T X . We let τ denote the Auslander-Reiten translate.
We prove the following theorem.
Theorem A. Let Q be an extended Dynkin quiver, let H be the path algebra of Q, and let {y_1, . . . , y_n} = {x_{τT_1}, . . . , x_{τT_n}} be an arbitrary initial seed of the cluster algebra A_Q, where T = ∐_i T_i is a cluster-tilting object in C_{kQ}. Let X be an exceptional object of C not isomorphic to τT_i for any i. Then, in the expression x_X = f/m in reduced form, we have m = ∏_i y_i^{d_i}, where d_i = dim Hom_C(T_i, X) + 1 if there is a tube of rank t ≥ 2 containing T_i and X, the quasi-length of T_i is t − 1 and X ∈ W_{τT_i}, and d_i = dim Hom_C(T_i, X) otherwise.
We remark that representation-theoretic expressions for denominators of cluster variables for an arbitrary initial seed were given in [CCS1] for type A and for any simply-laced Dynkin quiver in [CCS2,RT]. In the general case, for an initial seed with acyclic exchange quiver, it was shown in [BMRT,CK2] that denominators of cluster variables are given by dimension vectors (see the next section for more details). The general case for an arbitrary initial seed was studied in [BMR2]. In particular, it was shown that for an affine cluster algebra, provided the cluster-tilting object corresponding to the initial seed contains no regular summand of maximal quasilength in its tube, the denominators of all cluster variables are given by dimension vectors. Cluster variables in affine cluster algebras of rank 2 have been studied in [CZ, MP, SZ, Ze]. The present article completes the denominator picture (for an arbitrary initial seed), in terms of dimension vectors, for affine (coefficient-free) cluster algebras.
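The rank-2 affine case mentioned above (the Kronecker quiver, b = c = 2) is governed by the standard exchange recurrence x_{n+1} x_{n−1} = x_n^2 + 1, and the Fomin-Zelevinsky Laurent phenomenon forces every cluster variable to be a Laurent polynomial in the initial cluster, so specializing x_1 = x_2 = 1 must yield integers. A quick stdlib check of this consequence (the recurrence is standard; the resulting denominators x_1^a x_2^b are the d-vectors discussed in this paper):

```python
from fractions import Fraction

def kronecker_cluster(n, x1=Fraction(1), x2=Fraction(1)):
    """First n cluster variables of the rank-2 (b = c = 2) cluster algebra,
    computed from the exchange relation x_{k+1} = (x_k^2 + 1) / x_{k-1}."""
    seq = [x1, x2]
    while len(seq) < n:
        seq.append((seq[-1] ** 2 + 1) / seq[-2])
    return seq
```

At x_1 = x_2 = 1 one obtains 1, 1, 2, 5, 13, 34, 89, ... (every second Fibonacci number); the fact that each division comes out exact is a shadow of the Laurent phenomenon.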
In [FK,5.6] it is shown that for any cluster category (and in fact in a wider context), the dimension vector of a module coincides with the corresponding f -vector in the associated cluster algebra with principal coefficients. Thus our results determine when Conjecture 7.17 of [FZ3] holds for affine cluster algebras. We also remark that in Theorem A, each exponent in the denominator is less than or equal to the corresponding entry in the dimension vector, in agreement with [FK,5.8] and [DWZ].
In Section 1, we recall some of the results described in the previous paragraph. In Section 2, we recall some standard facts about tame hereditary algebras. In Section 3, we show that, in order to prove Theorem A for every indecomposable object in C which is not regular it is sufficient to prove this holds for the summands of a single cluster-tilting object with no regular summands. In Section 4, we study regular objects in the cluster category, and then in Section 5 we prove the main theorem, and in Section 6 we give a small example to illustrate it.
Preliminaries
Let Q be a finite connected acyclic quiver and k an algebraically closed field. Then H = kQ denotes the (finite dimensional) path algebra of Q over k. Let D b (H) be the bounded derived category of finite dimensional left H-modules. The category D b (H) is a triangulated category with a suspension functor [1] (the shift). Since H is hereditary, the category D b (H) has almost split triangles; see [Ha], and thus has an autoequivalence τ , the Auslander-Reiten translate. Let C = C H = D b (H)/τ −1 [1] be the cluster category of H (introduced in [CCS1] for type A and in [BMRRT] in general). Keller [K1] has shown that C is triangulated. For more information about the representation theory of finite dimensional algebras, see [ARS, ASS], and see [Ha] for basic properties of derived categories.
We regard H-modules as objects of C = C H via the natural embedding of the module category of H in D b (H). For a vertex i of Q, let P i denote the corresponding indecomposable projective H = kQ-module. Note that every indecomposable object of C is either an indecomposable H-module or of the form P i [1] for some i.
We denote homomorphisms in C simply by Hom( , ), while Hom_H( , ) denotes homomorphisms in mod H (or D^b(H)). For a fixed H, writing F = τ^{-1}[1], we have Hom(X, Y) = Hom_{D^b(H)}(X, Y) ⊕ Hom_{D^b(H)}(X, F Y), where X, Y are direct sums of H-modules or objects of the form P_i[1]; a morphism lying in the second summand is called an F-map. Note that the composition of two F-maps is zero.
An H-module T is called a partial tilting module if Ext 1 H (T, T ) = 0, an almost complete tilting module if in addition it has n − 1 nonisomorphic indecomposable summands, and a tilting module if it has n such summands (by a result of Bongartz [Bo] this is equivalent to the usual notion of a tilting module over H). We shall assume throughout that all such modules are basic, i.e. no indecomposable summand appears with multiplicity greater than 1. For more information on tilting theory see [AHK].
The corresponding notions of cluster-tilting object, partial cluster-tilting object and almost complete cluster-tilting object in C can be defined similarly with reference to the property Ext 1 C (T, T ) = 0; see [BMRRT]. Note that every cluster-tilting object in C is induced from a tilting module over some hereditary algebra derived equivalent to H [BMRRT,3.3].
Let A = A(Q) ⊆ F = Q(x_1, x_2, . . . , x_n) be the (acyclic, coefficient-free) cluster algebra defined using the initial seed (x, Q), where x is a free generating set {x_1, x_2, . . . , x_n} for F; see [FZ1]. For an H-module X with dimension vector (d_1, . . . , d_n), let c_X = x_1^{d_1} x_2^{d_2} · · · x_n^{d_n}. The following gives a connection between cluster categories and acyclic cluster algebras.

Theorem ([CK2]). There is a bijection β : X → x_X from indecomposable exceptional objects of C to cluster variables of A such that for any indecomposable exceptional kQ-module X, we have x_X = f/c_X as an irreducible quotient of integral polynomials in the variables x_1, . . . , x_n.

In [BMRT] a map α from the cluster variables of A to the indecomposable exceptional objects of C was defined.

Theorem ([BCKMRT]). The maps α and β are mutual inverses.
We now recall some results and definitions from [BMR2]. Assume Γ is a quiver which is mutation-equivalent to Q. By the above theorem there is a seed (y, Γ) of A, where y = {y 1 , y 2 , . . . , y n } is a free generating set of F over Q. Let T i = τ −1 α(y i ) for i = 1, 2, . . . , n, so that we have α(y i ) = τ T i . Then ∐ n i=1 τ T i is a cluster-tilting object in C and Γ is the quiver of End C (τ T ) op ≃ End C (T ) op by [BMR1].
Let x be a cluster variable of A. We say that x, expressed in terms of the cluster y, has a T-denominator if either: (I) α(x) = X for some exceptional indecomposable object X of C not isomorphic to τT_i for any i, and x = f/∏_i y_i^{dim Hom(T_i, X)} in reduced form; or (II) x = y_i for some i (so that α(x) = τT_i).

Theorem ([BMR2]). Let T = ∐_i T_i be a cluster-tilting object in C = C_{kQ} for an acyclic quiver Q and let A = A(Q) be the cluster algebra associated to Q. Then: (a) If no indecomposable direct summand of T is regular then every cluster variable of A has a T-denominator. (b) If every cluster variable of A has a T-denominator, then End_C(T_i) ≃ k for all i. Suppose in addition that kQ is a tame algebra. Then the following are equivalent: (i) every cluster variable of A has a T-denominator; (ii) End_C(T_i) ≃ k for all i.
The main result (Theorem A) of this paper gives a precise description of the denominators of all cluster variables for the tame case, i.e. also including the case when T has a regular summand T i of quasi-length r − 1 lying in a tube of rank r.
Fix an almost complete (basic) cluster-tilting object T ′ in C. Let X, X * be the two complements of T ′ , so that T = T ′ ∐ X and T ′′ = T ′ ∐ X * are cluster-tilting objects (see [BMRRT,5.1]). Let be the exchange triangles corresponding to X and X * (see [BMRRT,§6]), so that B → X is a minimal right add(T ′ )-approximation of X in C and B ′ → X * is a minimal right add(T ′ )-approximation of X * in C. The following definition is crucial: If M is compatible with every exchange pair (X, X * ) in C we call M exchange compatible.
We also have: (a) Suppose that (X, X * ) is an exchange pair such that neither X nor X * is isomorphic to τ M . Then the following are equivalent: is exact, or the sequence is exact. (b) Let M be an exceptional indecomposable object of C and suppose that X ≃ τ M or X * ≃ τ M . Then we have that Note that (c) is used as an induction step in [BMR2] for showing that cluster variables have T -denominators. Also, in [BMR2] it is shown that in (c) the cluster variable x X * = (x ′ k ) * has a T -denominator if and only if each summand T i of T is compatible with the exchange pair (X, X * ), but we shall not need this stronger statement.
Tame hereditary algebras
In this section we review some facts about tame hereditary algebras, cluster categories and cluster algebras.
We fix a connected extended Dynkin quiver Q. The category mod kQ of finite dimensional modules over the tame hereditary algebra H = kQ is well understood; see [R]. Let τ denote the Auslander-Reiten translate. All indecomposable kQ-modules X are either preprojective, i.e. τ m X is projective for some m ≥ 0; preinjective, i.e. τ −m X is injective for some m ≥ 0; or regular, i.e. neither preprojective nor preinjective.
The Auslander-Reiten quiver of H consists of: (i) the preprojective component, consisting exactly of the indecomposable preprojective modules; (ii) the preinjective component, consisting exactly of the indecomposable preinjective modules; (iii) a finite number d of regular components called non-homogeneous (or exceptional) tubes, T 1 , . . . , T d ; (iv) an infinite set of regular components called homogeneous tubes. For a fixed tube T , there is a number m such that τ m X = X for all indecomposable objects in T . The minimal such m is the rank of T . If m = 1 then T is said to be homogeneous.
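In symbols, the rank of a tube described above can be restated as follows (this display only repackages the definition given in the text):

```latex
\operatorname{rank}(\mathcal{T})
  = \min\{\, m \ge 1 \;:\; \tau^{m} X \cong X
    \text{ for all indecomposable } X \in \mathcal{T} \,\},
\qquad
\mathcal{T} \text{ homogeneous} \iff \operatorname{rank}(\mathcal{T}) = 1.
```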
We will also use the following facts about maps in mod H. Let P (respectively, I and R) be preprojective (respectively, preinjective and regular) indecomposable modules, and R ′ a regular indecomposable module.
The transjective component
We will call an indecomposable object in the cluster category transjective if it is not induced by a regular module. Note that the transjective objects form a component of the Auslander-Reiten quiver of C. One of our aims is to show that every transjective object has a T -denominator for tame hereditary algebras. In this section, we show that for this it is sufficient to find one transjective cluster-tilting object all of whose summands have a T -denominator. Note that the results in this section do not require H to be tame, but hold for all finite dimensional hereditary algebras.
Remark 3.1. We remark that, given a finite set of indecomposable transjective objects in the cluster category, we can, by replacing the hereditary algebra H with a derived equivalent hereditary algebra, assume that all of the objects in the set are preprojective [BMRRT,3.3]. We shall make use of this in what follows.
We start with the following observation.
Proof. Part (a) is well-known and follows directly from the fact [BMRRT] that Ext 1 C (X, τ X) ≃ k when (X, τ X) is an exchange pair.
For part (b) we can assume that τ X → E → X → is induced by an almost split sequence in mod H ′ , by Remark 3.1, so that C H = C H ′ . Then we use [BMR2,5.1] to obtain that Hom C (M, ) applied to the AR-triangle τ X → E → X → gives an exact sequence. The claim then follows from Proposition 1.6.
The following summarizes some facts that will be useful later.
Proof. Part (a) is a theorem of Riedtmann and Schofield [RS]. Part (b) is a special case of a theorem by Happel and Unger [HU]. Part (c) is contained in [BMRRT] and part (d) is obvious.
The following is also well-known and holds for any finite dimensional hereditary algebra H. Note that for H of finite representation type, all modules are by definition preprojective.
Lemma 3.4. For every preprojective tilting module U in mod H there is a finite sequence of preprojective tilting modules

Proof. We use the fact that the preprojective component is directed, so there is an induced partial order on the indecomposable modules, generated by X ≤ Y if Hom(X, Y ) ≠ 0. For the above exchange sequences we have M * j ≤ M j . The result now follows directly from Proposition 3.3.
Next we consider transjective exchange pairs. Lemma 3.5. Let (X, X * ) be an exchange pair, where both X and X * are transjective. Then any regular indecomposable exceptional M is compatible with (X, X * ).
Proof. We choose a hereditary algebra H ′ derived equivalent to H such that both X and X * correspond to preprojective H ′ -modules (see Remark 3.1). Hence one of the exchange triangles, say is induced by a short exact sequence, by [BMRRT]. It is clear that the middle term B is also induced by a preprojective module. Note that we have C H ≃ C H ′ .
We want to show that we get a short exact sequence (7). Since there is a path of H ′ -maps from X * to X in the preprojective component of H ′ , and this component is directed, we have that there is no H ′ -map X → τ X * . Hence the nonzero map X → τ X * induced from the exchange triangle is an F ′ = F H ′ -map. Any map M → X is also an F ′ -map, using that there are no H ′ -maps from regular objects to preprojective objects. But any composition of two F ′ -maps is zero. Hence every map M → X will factor through B → X, so the sequence (7) is right exact.
Assume there is a map M → X * . Then this map must be an F ′ -map, and hence the composition is zero. Hence the map M → X * is zero, and we have shown left-exactness of (7). This finishes the proof by Proposition 1.6.
A slice in mod H (see [R]) is a tilting module V with a hereditary endomorphism ring. Note that End C (V ) is hereditary if and only if End H (V ) is hereditary by [ABS].
Lemma 3.6. Assume there is a slice V = ∐ i V i such that each indecomposable direct summand V i has a T -denominator. Then every transjective indecomposable object has a T -denominator.
Proof. This follows from combining Lemma 3.2 with Proposition 1.6.
Lemma 3.7. Assume there is a transjective cluster-tilting object U = ∐ i U i such that each indecomposable direct summand U i has a T -denominator. Then there is a slice V = ∐ i V i such that each indecomposable direct summand V i has a T -denominator.
Proof. We choose a hereditary algebra H ′ derived equivalent to H, so that all the U i are preprojective modules in mod H ′ and hence U is a preprojective tilting module in mod H ′ (see Remark 3.1). It is clear that each W j in Lemma 3.4 is a cluster-tilting object in C H , and that the object H ′ forms a slice in C H . Also it is clear that the short exact sequences (6) are exchange triangles in C H = C H ′ , with transjective end-terms. So the claim follows from Lemma 3.5 and Proposition 1.6.
We can now state the main result of this section.
Proposition 3.8. Assume that there is a transjective cluster-tilting object U = ∐ i U i such that each indecomposable direct summand U i has a T -denominator. Then every transjective indecomposable object has a T -denominator.
Proof. This follows directly from combining Lemmas 3.6 and 3.7.
Wings
For this section assume that H is a tame hereditary algebra. We state some properties and results concerning regular objects in the cluster category of H.
Recall that a module M over an algebra A is known as a brick if it is exceptional and End A (M ) = k. In fact, it is known that if A is hereditary, every exceptional A-module is a brick. We say that an object M in the cluster category C is a C-brick if M is exceptional with End C (M ) = k. The following lemma summarizes some well-known facts, including the fact that there are bricks in the cluster category of H which are not C-bricks.

Let W M be the full category of subfactors of a regular exceptional indecomposable module M in T M . This is called the wing of M . Suppose that q. l. M = t. We consider W M as an abelian category equivalent to mod Λ t , where Λ t is the hereditary algebra given as the path algebra of a quiver of Dynkin type A t , with linear orientation; see [R]. The module M is a projective and injective object in W M , and a tilting object in W M has exactly t indecomposable direct summands. The following is well-known by [S].

We recall the notion of a Bongartz complement:

Proof. See [Bo] for (a) and [Ha] for (b) and (c).
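Side by side, the two notions just recalled read as follows (a summary of the definitions above, in the notation of the text):

```latex
M \text{ is a brick in } \operatorname{mod} A
  \iff M \text{ is exceptional and } \operatorname{End}_{A}(M) = k,
\\
M \text{ is a } \mathcal{C}\text{-brick}
  \iff M \text{ is exceptional and } \operatorname{End}_{\mathcal{C}}(M) = k.
```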
We are especially interested in the Bongartz complements of certain regular modules.
Lemma 4.4. Let X = X t be an exceptional regular indecomposable module with q. l. X = t. For i = 1, . . . , t − 1, let X i be the regular indecomposable exceptional module such that there is an irreducible monomorphism X i → X i+1 . Then there is a preprojective module Q such that:

Proof. For (a), first note that Ext 1 H (X, τ A) = 0 while Ext 1 H (A, τ A) ≠ 0 for any indecomposable module A which is either preinjective or regular with T A ≠ T X . Hence by (B1) the summands in Q are either preprojective or regular and lie in T X . The property (B2) shows that any regular summand of the Bongartz complement E of X must be in W τ X , by Lemma 4.1. The fact that E is a complement implies that any regular summand must be in W X ′ , where X ′ → X is an irreducible monomorphism, since an object Z in W τ X \ W X ′ has Ext(X, Z) ≠ 0. We claim that for any indecomposable regular summand E ′ of E there is a monomorphism E ′ → X. Assume E ′ is an indecomposable regular summand of E. Then, if E ′ is in W X ′ , but there is no monomorphism to X, the module τ E ′ will satisfy Ext 1 (X, τ E ′ ) = 0, while Ext 1 (E ′ , τ E ′ ) ≠ 0, a contradiction to (B1). Since X ∐ E is a cluster-tilting object in C, it follows from Lemma 4.2 that all indecomposable regular objects in the tube of X with monomorphisms to X are summands of E.
Part (b) is easily verified, noting that (B1) and (B2) are satisfied. For (c) we show that if a module A satisfies Ext 1 H (Y, A) = 0, then it satisfies Ext 1 H ( X, A) = 0. Then it follows that Ext 1 H (Q, A) = 0, which implies that Q satisfies (B1); (B2) is clearly satisfied.
To see that Ext 1 H ( X, A) = 0 we use that W X is equivalent to mod Λ t , where Λ t is the path algebra of the Dynkin quiver A t with linear orientation. Now let Y i be a direct summand in Y which is generated by Y /Y i , and consider the induced exact sequence. Repeating this many times, which is possible by Lemma 3.4, we get that Ext 1 H ( X, A) = 0.

[Figure 1. A complement N of X in W X with summands (indicated by •) in W τ M ; see Lemma 4.5(a).]
Lemma 4.5. Let T be a tube of rank t + 1 and M an exceptional object in T which is not a C-brick. Let X = X s be an exceptional indecomposable with q. l. X = s ≤ t such that X ∈ W τ M .
(a) There is a complement N of X in W X all of whose summands lie in W τ M . (b) The partial tilting module X ∐ U has a preprojective complement Q which generates X.
Proof. See Figure 1 for a pictorial representation of this lemma. For (a) consider the relative projective tilting module in W X given by is an epimorphism. The last statement follows since τ M is not a factor of X s−1 .
We also claim that X ′ s−1 is a complement of X 1 ∐ · · · ∐ X s−2 ∐ X s in W X . This follows from the fact that the map X s−1 → X is a minimal left add X 1 ∐ · · · ∐ X s−2 ∐ X s -approximation, together with Proposition 3.3. Now, if necessary, we exchange X s−2 using the minimal add X 1 ∐ · · · ∐ X s−3 ∐ X ′ s−1 ∐ X s -approximation X s−2 → X. The same argument as above shows that the cokernel of this map gives us a complement in W τ M . We iterate this at most s − 1 times, until we obtain a complement all of whose summands lie in W τ M , as required. For (b), let Q be the Bongartz complement in mod H of the partial tilting H-module X 1 ∐ · · · ∐ X s−1 ∐ X s , and apply Lemma 4.4. By Lemma 4.3(a), Q generates N , and thus, in particular, it generates X.
The main result
In this section, we show the main theorem. The proof will follow from a series of lemmas. Throughout this section, let T be a cluster-tilting object in the cluster category C H of a tame hereditary algebra H. We assume that T has a summand which is not a C-brick. We have the following preliminary results.
Lemma 5.1. Let Z be an exceptional indecomposable regular module. Let X ∐ Y be a tilting object in W Z , with X indecomposable. Assume U ∐ X ∐ Y is a tilting module in mod H, where U has no preinjective summands.
(a) Let B → X be the minimal right add Y -approximation in W Z and

Proof. Let U = U p ∐ U r where U p is preprojective and U r is regular. By assumption U r has no summands in W Z .
We claim that Hom H (U r , B) → Hom H (U r , X) is also surjective. For this note that there is an indecomposable direct summand B ′ in B such that the restriction B ′ → X is surjective. Let U ′ r be a summand in U r such that Hom H (U ′ r , X) ≠ 0. By assumption Hom H (U ′ r , τ X) = 0; it follows that any non-zero map U ′ r → B ′ is an epimorphism, and hence factors through B ′ → X, and the claim follows. Hence B → X is a minimal right add(U ∐Y )-approximation. This completes the proof of (a). The proof of (b) is similar.

Proof. Note that there is a hereditary algebra H ′ , with C H ′ = C H , such that τ T ′′ as an H ′ -module has only regular and preprojective direct summands (see Remark 3.1). Assume q. l. M ≤ t, and that the rank of T is t + 1. Let U = τ T ′ = τ N 1 ∐ · · · ∐ τ N t−1 ∐ τ M be the tilting object in W τ M . Using Proposition 3.3 and Lemma 3.4, we have that all tilting objects in W τ M can be reached from U by a finite number of exchanges, given by exchange sequences in W τ M . Using Lemma 5.1 these exchange sequences are also exchange sequences in mod H ′ and hence in C H ′ = C H . This shows (a). For (b) it suffices to show that each such exchange pair is compatible with T . Consider the exchange triangle (8). By Proposition 1.7, the pair (X ′ , X ′′ ) is compatible with all summands in T which are C-bricks. It is clearly compatible with any regular summand T j of T with T T j ≠ T which is not a C-brick, since Hom(T j , ) vanishes on all terms of the sequence. By Lemma 4.1(a) we only need to consider compatibility with M in case q. l. M = t. But, since the exchange triangle (8) lies inside W τ M , we see that Hom (M, ) vanishes when applied to (8). Since T has at most one indecomposable summand in a given tube of quasilength one less than the rank of the tube (by Lemma 4.1(b)), this finishes the proof of (b).
Lemma 5.3. Let X be an exceptional regular indecomposable object of C which is a C-brick.
(a) An exchange pair (X, Z) is compatible with any regular object M for which either M is a C-brick, or is an irreducible monomorphism in case q. l. X > 1 and X ′ = 0 otherwise, with the property that Y and Q are transjective.
Proof. (a) If M is a C-brick then this holds by Proposition 1.7. For the other cases note that Hom(M, X) = 0 = Hom(M, τ −1 X), and hence when Hom (M, ) is applied to the exchange triangle Z → Q ′ → X →, one obtains a short exact sequence. For (b), let E be the Bongartz complement of the H-module X, and consider the minimal right add E-approximation E ′ → X (as H-module). By Lemma 4.3, E generates X, so the approximation is surjective, and we have a short exact sequence 0 → Y → E ′ → X → 0 and thus an induced approximation triangle Y → E ′ → X → in C. By Lemma 4.4(a), we have that X ′ is the only regular summand of E ′ and the other summands are preprojective. Since E ′ → X is surjective, we also have, using Lemma 4.4(a), that E ′ has a preprojective summand, and the claim follows.
We now deal with the transjective objects.
Proposition 5.4. All transjective objects have a T -denominator.
Proof. By Proposition 3.8 it is sufficient to show that there is one transjective cluster-tilting object all of whose indecomposable direct summands have T -denominators. Without loss of generality we can assume that T has at least one indecomposable direct summand which is not a C-brick.
Assume T = Q ∐ R, where Q is transjective and R is regular. Then, using Lemma 4.2, there are indecomposable summands M 1 , . . . , M z of R such that each summand of R lies in one of the wings W M i . We choose a minimal such set of summands. Since Ext 1 C (M i , A) ≠ 0 for any object A whose wing overlaps W M i , any two of the W M i must be either equal or disjoint.
By definition, all summands of τ T have T -denominators. By Lemma 5.2, we can, for each i, replace the summands of τ T in W τ M i with the indecomposable objects in the tube of M i which have a monomorphism to τ M i . We obtain a new cluster-tilting object U = (∐ z i=1 τ M i ) ∐ U ′ all of whose indecomposable direct summands have T -denominators.
Fix N = M 1 of quasilength t and let N 1 , N 2 , . . . , N t = N be the indecomposable objects in T N with monomorphisms to N , where q. l.(N i ) = i for all i. Then we can write U = (∐ t i=1 τ N i ) ∐ Y . We claim that, via a sequence of exchanges, the τ N i can be replaced by transjective summands Q i which have T -denominators. When repeating this for M 1 , M 2 , . . . , M z , we will end up with a transjective cluster-tilting object having T -denominators as required.
We exchange τ N with a complement (τ N ) * , via the exchange triangles. If (τ N ) * is not induced by an H-module, it is induced by the shift of a projective module, and we are done. So we can assume that (τ N ) * is induced by a module. Then one of these two exchange triangles must arise from a short exact sequence of modules.
If it is the first, then clearly Hom H (X, τ N ) = 0 for any regular summand X of U not in T N . But if X lies in T N and not in W τ M 1 , again Hom H (X, τ N ) = 0 since the wings W τ M i do not overlap (and q. l. (M i ) is less than the rank of its tube for all i). Let N 0 = 0. Since τ N t−1 does not generate τ N t = τ M 1 , it follows that B has a nonzero preprojective summand, and hence that (τ N ) * is preprojective.
If it is the second, then clearly Hom H (τ N, X) = 0 for any regular summand X of U not in T N . But if X lies in T N and not in W τ M 1 , again Hom H (τ N, X) = 0 since the wings W τ M i do not overlap. Since Hom H (τ N, τ N j ) = 0 for all j, it follows that B ′ has a nonzero preinjective summand, and hence that (τ N ) * is preinjective.
Hence, in either case, (τ N ) * is transjective. We next show that (τ N ) * has a T -denominator, by considering two cases: CASE I: We assume first that N has End(N ) = k, i.e. N is a C-brick.
Every summand of T in T = T N is a C-brick (by the choice of the M i ), so by Lemma 5.3(a) we obtain that the exchange pair (τ N, (τ N ) * ) is compatible with all summands of T , and hence that (τ N ) * has a T -denominator by Proposition 1.6. We then repeat this procedure for τ N t−1 , . . . , τ N 1 . CASE II: N has End(N ) ≠ k, i.e. N is not a C-brick. Arguing as above, we see that we can exchange τ N with a transjective object (τ N ) * . Since N is a summand of T , we have that T is compatible with the exchange pair (τ N, (τ N ) * ) by definition. So (τ N ) * has a T -denominator by Proposition 1.6. We can then exchange the other summands τ N t−1 , . . . , τ N 1 with transjectives, all having T -denominators, as in Case I.
Hence, there is a transjective cluster-tilting object having a T -denominator, and we are done.
Lemma 5.5. Let T be a tube such that each direct summand of T lying in T is a C-brick, or such that T has no summands in T . Then each exceptional indecomposable object in T has a T -denominator.
Proof. Let X be an exceptional indecomposable object in T . We prove the Lemma by induction on the quasilength of X.
If q. l. X = 1, then by Lemma 5.3(b) there is an exchange triangle Y → Q → X → with Q and Y transjective. By Proposition 1.7, we need only show that (Y, X) is compatible with any regular non C-brick summand M of T . But this follows from Lemma 5.3. Now assume that any exceptional indecomposable object Y of quasilength less than t has a T -denominator. We want to show that the result also holds for the exceptional indecomposable X with q. l. X = t. For this we use Lemma 3.2.
It now remains to deal with the exceptional objects which are in W τ M for a non C-brick summand M of T . For this the following lemma is crucial.
Lemma 5.6. For each indecomposable exceptional object X in W τ M , there are exchange sequences

Proof. By Lemma 4.5(a), there is an object N in W X such that N ∐ X is a tilting object in the wing W X and all direct summands of N are in W τ M .
By Lemma 4.5(b), we have that N ∐ X has a preprojective complement Q in mod H, such that Q generates X. Let R = Q ∐ N and let B → X (respectively, X → B ′ ) be the minimal right, (respectively, minimal left) add R-approximations of X. We claim that the induced exchange triangles satisfy (i), (ii) and (iii).
Consider the exchange triangle. Since Q generates X in mod H, it is clear that this triangle is induced by a short exact sequence in mod H, and hence X * is induced by a preprojective module (showing (iii)), since X * → B is nonzero and B must have a preprojective summand as N does not generate X. Apply Hom (M, ) to obtain the long exact sequence. We claim that dim coker((M, B) → (M, X)) = 1. By Lemma 4.1, we have that dim Hom H (M, X) = 1 and it is clear that an H-map M → X will not factor through B, since N is in W τ M , and hence Hom H (M, N ) = 0, by Lemma 4.1.
By Lemma 4.1 the space of F -maps M → X is also one-dimensional. We claim that such F -maps will factor through B. For this we consider two possible cases: the object X * is either induced by a projective H-module P or not. First assume that X * is non-projective. Since the composition of two F -maps is 0, it is clear that all F -maps M → X will factor through B → X. Hence the claim follows in this case. Now consider the case where X * is projective. Then the composition [1] is clearly zero, so the claim follows in this case. We next want to show that when Hom (M, ) is applied to the second exchange triangle we do not obtain an exact sequence. The map X → B ′ decomposes into components X → Q 0 and X → N 0 . There is a non-zero F -map M → X and the composition M → X → B ′ will be zero since M → X → Q 0 is the composition of two F -maps and Hom (M, N 0 ) = 0. Hence we obtain (i), and (ii) follows using Lemmas 3.6 and 5.2(b), using the fact that X * and all indecomposable summands of B and B ′ are either transjective or in W τ M .
The proof of the following is an adaptation of parts of the proof of [BMRT, Prop. 3.1]. It completes the proof of our main result, Theorem A.
Proposition 5.7. Let T be a tube such that T has a non C-brick summand M , lying in T . Then each object in W τ M has a reduced T -denominator.
Proof. Let X * be an indecomposable object in W τ M . By Lemma 5.6 there is an indecomposable object X and exchange triangles such that (i) and (ii) in Lemma 5.6 hold.
We have [BMR1] that Assume M = T l . We need to discuss two different cases. CASE I: Suppose that neither X nor X * is isomorphic to τ T i for any i. Let B = B 0 ∐ B 1 , where no summand of B 0 is of the form τ T i for any i, and B 1 is in add τ T . Similarly, write B ′ = B ′ 0 ∐ B ′ 1 . We then have: using that t B = t B 0 since Hom C (T i , τ T j ) = 0 for all i, j, and similarly t B ′ = t B ′ 0 . Since M = T l is a summand in T , we have by Lemma 5.6 that max(dim Hom(T l , B), dim Hom(T l , B ′ )) = dim Hom(T l , X * ) + dim Hom(T l , X) − 1.
For any other summand of T , say T i with i ≠ l, we have that T i is compatible with (X, X * ), and hence We thus obtain: Hence We have that m and m ′ are coprime, by definition of least common multiple. Since B and B ′ have no common direct summands [BMR1, 6.1], y B 1 and y B ′ 1 are coprime. Suppose that m and y B ′ 1 had a common factor y i . Then we would have a summand Z of B ′ 0 such that Hom C (T i , Z) ≠ 0, and τ T i was a summand of B ′ . But then Ext 1 C (Z, τ T i ) ≃ D Hom C (τ T i , τ Z) ≃ D Hom C (T i , Z) ≠ 0. This contradicts the fact that B ′ is the direct sum of summands of a cluster-tilting object. Therefore m and y B ′ 1 are coprime, and similarly m ′ and y B 1 are coprime. It follows that my B 1 and m ′ y B ′ 1 are coprime. It follows from our assumptions that f B 0 (e i ) > 0 and f B ′ 0 (e i ) > 0 for each i ∈ {1, 2, . . . , n}. It is clear that (my B 1 )(e i ) ≥ 0 and (m ′ y B ′ 1 )(e i ) ≥ 0. Using that my B 1 and m ′ y B ′ 1 are coprime, it follows that these two numbers cannot simultaneously be zero, so f B 0 my B 1 + f B ′ 0 m ′ y B ′ 1 satisfies the positivity condition. By assumption, f X also satisfies the positivity condition.
By the Laurent phenomenon [FZ1,3.1], x X * is a Laurent polynomial in y 1 , y 2 , . . . , y n . Clearly t X * /y l is also a Laurent polynomial. Hence u = is also a Laurent polynomial. Since u is defined at e i for all i, it must be a polynomial. By the above, u satisfies the positivity condition.
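The Laurent phenomenon invoked in the paragraph above states, in symbols, that every cluster variable is an integer Laurent polynomial in any fixed cluster:

```latex
x_{X^{*}} \;\in\; \mathbb{Z}\bigl[y_{1}^{\pm 1}, y_{2}^{\pm 1}, \dots, y_{n}^{\pm 1}\bigr].
```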
We have that y l divides t X * = ∏ i y i ^dim Hom C (T i ,X * ) , since dim Hom C (T l , X * ) = 2. Hence we get that t X * /y l is a monomial. This finishes the proof in Case (I).
CASE II: Assume that X ≃ τ T i for some i. Note that i ≠ l, since X, and hence T i , is transjective, while T l is regular.
Since Ext 1 C (T r , T s ) = 0 for all r, s, we have that X * is not isomorphic to τ T j for any j. Using Proposition 1.6 and Lemma 5.6, we have dim Hom C (T j , X) + dim Hom C (T j , X * ) = max(dim Hom C (T j , B), dim Hom C (T j , B ′ )) + ǫ j , where ǫ j = 1 if j = i or j = l, and ǫ j = 0 otherwise.
As in Case (I), but using that x X = y i (as X = τ T i ), we obtain the corresponding expression. Using lcm(t B , t B ′ ) = t X t X * y −1 i y −1 l , we get, as in Case (I), that the numerator satisfies positivity and is a polynomial, and that t X * y −1 l is a monomial. The proof is complete.
6. An example

We give a small example illustrating the main theorem. Let Q be the extended Dynkin quiver and let H = kQ be the path algebra. Then H is a tame hereditary algebra where the AR-quiver has one exceptional tube T , which is of rank 3. The (exceptional part of the) AR-quiver of T is as follows, where the composition factors (in radical layers) of indecomposable modules are given.
For each exceptional object Y in the tube T , we give the dimension vector of Hom C (T, Y ) over End C (T ). Note that Hom C (T, τ T 3 ) = 0
In various areas of computational mechanics, rounding errors can have a considerable influence on the quality of the simulation results; in some cases, they lead to the termination of the numerical calculation. Rounding errors are caused by the limited accuracy of the representation of floating point numbers. Current codes usually use double precision numbers (p = 16 significant digits). Until now, modern multi-precision libraries, with which floating-point numbers can be processed with arbitrary accuracy, have remained largely unused.
Introduction
Increasing computational power not only enables us to compute larger models, but also allows us to perform the analysis with higher numerical accuracy. Modern tools like the Eigen C++ library [1] offer easy access to dense and sparse linear algebra. The library includes sparse matrix solvers, too. Furthermore, its templated implementation offers the possibility to use any kind of number type. In combination with Boost's [2] multiprecision number types it is possible to perform calculations with arbitrary precision. To do so, the cpp_bin_float type is chosen. It represents the numbers with fixed precision, but the number of significant digits can be set arbitrarily at compile time.
Theory
When performing finite element analysis, in most cases the necessity to solve sparse linear systems of equations arises. The system is given in eq. (1), with the stiffness matrix K, the solution vector u and the load vector f :

K u = f (1)

As in computer simulations the number of digits to represent their values is limited, the solution of eq. (1) is affected by round-off errors. According to [3], a conservative estimate of the number of correct digits in the solution is:

s ≥ p − log c(K) (2)

Herein, s is the number of correct digits in the solution, p the number of digits with which each value in eq. (1) is represented, c(K) the condition number of the matrix K and log the logarithm to the base 10. The condition number itself is defined as the maximum eigenvalue λ max divided by the minimum eigenvalue λ min of K. As shown in [3], a better estimate of the numerical accuracy can be obtained if the condition number c(K̃) of the scaled matrix K̃ is used in eq. (2). A common way of scaling the matrix with its diagonal elements is:

K̃ = D K D, with d ii = 1/√k ii (3)

Herein, d ii are the diagonal entries of D and k ii the ones of K.
Numerical Example
As said, p represents the number of digits to store the floating-point values, e.g. p = 8 is the basic single precision and p = 16 the basic double precision number type. For higher order precision (p ≥ 32) Boost's cpp_bin_float is used.
Section 4: Structural mechanics
Numerical Error
To investigate the impact on the numerical error, a simple LSFEM formulation for Bernoulli's beam theory is chosen. Firstly, the differential equation is transformed into a system of first order differential equations, and then the squared L2-norms of these equations are summed up. Finally, this sum is minimized, see eq. (4). To set up the equation system, the variation of eq. (4) is determined. Afterwards each function (vertical displacements w, rotations β, moments M and shear force Q) is discretized using linear Lagrange shape functions. Figure 2 shows the dependence between the condition number and the total number of degrees of freedom (Total Dof) for the modelled system. As one can observe, the condition number of the scaled matrix c(K̃) is several orders of magnitude lower than the one of the original matrix c(K). When comparing the solution error for p = 16 in fig. 3 with the results in fig. 2, it is clearly visible that a better accuracy estimate is achieved when the scaled matrix is used to compute the condition number. Once this condition number reaches a value of approx. 10^16, the error curve starts to deteriorate. In this case, increasing the number of digits to e.g. p = 32 recovers the convergent behavior of the solution.
Performance
In this example the computational cost is investigated. Using direct solvers, the factorization of the matrix is in general the most time consuming part of the calculation. Therefore, the time for the LL T factorization of Eigen's SimplicialLLT solver is measured. The system used is depicted in fig. 4. It is a simple cantilever beam modelled with triangular shell elements using standard finite elements. In fig. 5 the factorization time over the number of degrees of freedom is shown for different number precisions. In case of approximately 3 · 10^5 unknowns, switching from the native number type to the cpp_bin_float number type has a huge impact on the performance, as the time for the LL T factorization increases by a factor of approximately 30, see fig. 6. In contrast to that, switching from single precision to double precision approximately doubles the time needed for factorization. The same happens when switching from p = 32 to p = 64 or from p = 64 to p = 128.
Closure of mesenteric defects during Roux-en-Y gastric bypass for obesity: A systematic review and meta-analysis protocol
Highlights
• Roux-en-Y gastric bypass is an effective treatment for severe obesity and its comorbidities.
• Closure of mesenteric defects may reduce the overall incidence of internal herniation.
• Closure of mesenteric defects may also increase early small bowel obstruction and bleeding.
• This review is designed to establish the evidence for and against closure of mesenteric defects.
Introduction
Metabolic/bariatric surgery (MBS) has become an important and highly effective treatment in the management of obesity and its related diseases, such as hypertension and type 2 diabetes. A Cochrane review of 22 randomised controlled trials showed MBS to achieve more effective weight loss after two years than non-surgical measures [1], and this superior efficacy has been shown to endure for at least two decades, accompanied by multiple health improvements [2]. Several types of bariatric procedure exist, and laparoscopic Roux-en-Y gastric bypass (RYGB) has remained a popular choice, with particularly impressive endurance and comorbidity resolution [3].
Internal herniation and intermittent internal herniation are recognised late complications of RYGB and represent a common cause for reoperation. Small bowel internal herniation may occur through the retroalimentary space, through the mesojejunal defect created by the jejuno-jejunal anastomosis, or through the transmesocolic space, and can lead to potentially life-threatening small bowel obstruction, ischaemia and necrosis. Several studies have suggested that the routine primary closure of these mesenteric defects may decrease the rates of internal hernia, but this practice is currently not routinely adopted worldwide.
Rationale
Closure of the various mesenteric defects is a relatively simple and quick task to perform, which may reduce risk of internal herniation and the complications thereof. A number of closure methods have been described, including the use of sutures, surgical clips and topical adhesives. If demonstrated to be effective upon consolidation and comparison of all existing data, it may be possible to recommend closure as standard practice worldwide in order to reduce the incidence of these long-term complications. However, it is also essential to document and take account of the extent of excess complications associated with closure of mesenteric defects, such as early reoperation for bleeding and small bowel obstruction. The findings may raise questions regarding the management of patients who have already undergone RYGB without mesenteric defect closure [4].
Objectives
To identify, retrieve and assess all studies comparing the incidence of early and late complications following closure of mesenteric defects during laparoscopic RYGB surgery, with the primary outcome being reoperation for suspected small bowel obstruction, and secondary outcomes including internal herniation, kinking or narrowing at the jejuno-jejunal anastomosis, adhesions, complications (<30 days and >30 days after surgery), 30-day mortality, and any other outcome deemed relevant and reported in more than one study.
We hypothesise that closure of mesenteric defects: 1. decreases incidence of internal herniation and thus the need for reoperation; 2. is safe to perform, with acceptable rates of immediate and delayed complications.
Method
This review will follow the recommendations of the Cochrane Handbook for Intervention Reviews (version 5.1), will be compliant with AMSTAR 2 [5] and will be reported in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [6]. This protocol has been developed a priori, and the review is registered on the PROSPERO international prospective register of systematic reviews (www.crd.york.ac.uk/prospero); registration number: CRD42018118934.
Study inclusion and exclusion criteria
A summary of inclusion and exclusion criteria is found in Table 1.
Search strategy
Embase and Medline (via Ovid), the Cochrane Central Register of Controlled Trials, and the ClinicalTrials.gov database will be searched using the strategy illustrated in Table 2. The search will be restricted to items published in English up to 1st January 2019. The reference lists of included studies, and the related-articles function of Medline via PubMed, will be searched for additional relevant studies.
Types of studies included
Randomised controlled trials and comparative non-randomised studies comparing closure vs. non-closure of mesenteric defects for RYGB, and reporting outcomes of interest in an extractable form, will be included. Non-comparative studies, review papers, case reports/series, published abstracts, along with reports not written in English will be excluded.
Types of participants
The population of interest is adults (aged 18 years and over) undergoing RYGB for obesity. Studies examining individuals under the age of 18 years will be excluded.
Types of intervention
The intervention of interest will be closure of any or all of: the retroalimentary space, jejuno-jejunal mesenteric defect and/or transmesocolic space during RYGB, using sutures, staples or topical adhesive compounds.
Types of comparator
Only comparative studies will be included and the comparator of interest will be non-closure of any or all of the retroalimentary space, jejuno-jejunal mesenteric defect and/or transmesocolic space during RYGB. Subgroup analyses will be undertaken where data are available, to determine the relative effects of closure of each specific anatomical defect.
Primary outcome
The primary outcome will be reoperation for suspected small bowel obstruction after RYGB.
Secondary outcomes
Secondary outcomes will include internal herniation, kinking or narrowing at the jejuno-jejunal anastomosis, adhesions, complications (<30 days and >30 days after surgery), 30-day mortality, and any other outcome deemed relevant and reported in more than one study. In anticipation of variation in the duration of follow up in relevant studies, a time frame of at least one year of follow-up will be considered as beyond the short term, and subgroup analyses of different follow-up time points will be undertaken where feasible and necessary.
Identification and selection of studies
Articles identified from the electronic search will be recorded into a Microsoft Excel 2017 database with titles and abstract. Duplicates will be excluded. Two separate researchers will independently screen titles and abstracts and code inclusion status as: include (1), exclude (2), or undetermined (3). The full text of all studies coded 1 or 3 will be examined for secondary coding. Discussion between coding researchers will be undertaken, with arbitration by a third author as required, to reach a consensus on final coding for inclusion decisions.
Data extraction, collection and management
Data extraction will be performed by two researchers independently, resolving discrepancies to consensus, with arbitration by a third author where necessary. Data will be input into a Microsoft Excel 2017 database, formatted a priori to facilitate simple and consistent data entry.
The following core information will be gathered from each study:
Data analysis
The meta-analysis will be performed in line with the recommendations of the Cochrane Collaboration and the PRISMA guidelines. Analysis will be performed within the Review Manager V.5.3 (RevMan) programme. Statistical analysis will be performed on aggregate data, using proportions for binary outcomes and mean (standard deviation) values for continuous outcomes (or standardised mean values if different scales are used). For outcomes reported in at least five studies, we will fit both random-effects and fixed-effect models. For outcomes reported in less than five studies, we will display the effect estimates in forest plots and perform fixed-effect meta-analysis where appropriate. We will report results with 95 per cent confidence intervals, as well as the I-squared index to assess consistency among results within each outcome.
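As a concrete illustration of the planned fixed-effect approach, inverse-variance pooling of log odds ratios can be sketched as follows (the study counts below are invented purely for illustration; the actual analysis will be run in RevMan on extracted data):

```python
import math

# Each hypothetical study: events and sample size in the closure and
# non-closure arms, e.g. reoperations for small bowel obstruction.
studies = [(12, 1000, 30, 1000), (5, 500, 14, 500), (8, 800, 19, 790)]

def log_odds_ratio(a, n1, c, n2):
    """Log odds ratio and its variance for a 2x2 table."""
    b, d = n1 - a, n2 - c
    return math.log((a * d) / (b * c)), 1/a + 1/b + 1/c + 1/d

# Inverse-variance fixed-effect pooling with a 95% confidence interval
w_sum, wy_sum = 0.0, 0.0
for a, n1, c, n2 in studies:
    y, v = log_odds_ratio(a, n1, c, n2)
    w_sum += 1.0 / v
    wy_sum += y / v

pooled = wy_sum / w_sum
se = math.sqrt(1.0 / w_sum)
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(math.exp(pooled) < 1.0)  # True: fewer events in the closure arm here
```

Random-effects models add a between-study variance term to each study weight; the I-squared index quantifies the share of total variability attributable to that between-study heterogeneity.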
Assessment of bias
The Cochrane tool will be used in the assessment of randomised studies and the Newcastle-Ottawa scale for non-randomised studies.
Ethical approval
Ethical approval is not needed for this research project as it does not involve direct contact with patients or direct reporting of identifiable or individual patient level outcome data.
Funding
No funding has been received for this study.
Author contribution
AB conceived the study. All authors (RT, TO, JB, and AB) contributed to study design, protocol development and writing (NB: no data collection or data analysis was performed for this protocol).
Conflicts of interest
No conflicts of interest are declared.
Guarantor
Mr Andrew Beamish.
Research Registration Number
This study has been registered a priori on the PROSPERO international prospective register of systematic reviews (www.crd.york.ac.uk/prospero); registration number: CRD42018118934.
Development of a test in German language to assess middle school students' physics proficiency
This short contribution reports the development of a test for assessing middle school students' physics proficiency via multiple-choice single-select items in German language. The test assesses students' content and procedural knowledge across various content areas that are typical of physics education at the middle-school level and is based on adapted items developed within the Third International Mathematics and Science Study (TIMSS). We report the study design we used to develop this test, as well as the results and selected parameters regarding the test's psychometric quality.
Theoretical background and aim of the study
Students' learning in the physics classroom strongly depends on their physics proficiency, i.e., the entirety of their previously acquired content and procedural knowledge in physics. 1; 2; 3 As a result, one of the standard procedures of empirical research on physics education is to assess students' physics proficiency via achievement tests in order to use this information, e.g., as a covariate. However, unlike in other school subjects, there is a lack of subject-specific achievement tests for physics in German language that are generally accessible in educational research. 4; 5 To fill this gap, a popular approach among physics education researchers in Germany (e.g., Ref. 6,7,8,9,10) is to assess students' proficiency using self-compiled pencil-and-paper tests based on published items from the Third International Mathematics and Science Study (TIMSS). In this short contribution, we report the results of a research project that was aimed at tackling these issues. More precisely, we report on the development of a test in German language for assessing middle school students' physics proficiency. This test features the following characteristics: (i) The test consists of 20 items in total and has an estimated time requirement of 20 minutes. (ii) The test is based on adapted items developed for the assessment of primary- and middle-school students' science achievement within TIMSS. (iii) To ensure test economy and objectivity, all items are multiple-choice single-select items. (iv) The test assesses students' content and procedural knowledge across various content areas that are typical of physics education in middle schools.
Below, we report the study design we used for the development of our test. Subsequently, we report results, as well as selected parameters regarding the test's psychometric quality.
Study design
For test development, we selected 40 physics-related items from the released TIMSS item sets for the primary- and middle-school levels. 12; 15; 16; 17 We translated items that were only published in English into German. Furthermore, through a discursive process, we adapted items with an open-ended format into multiple-choice single-select items (4-5 options) based on the items' coding manual. 15; 16; 17 The resulting pool of 40 items was administered to N = 177 eighth grade students from Hamburg, Germany (mean age = 14.01 years; share of male participants = 52.54%). According to the TIMSS technical report 14, middle school students require about 1 minute to accomplish one multiple-choice single-select item, resulting in a total test time of approximately 40 minutes. In addition to their responses to these items, we also collected further background data from the participants: we surveyed their interest in science (see Ref. 19), their self-concept in science, their latest school grades in physics, and their cultural and economic capital. To evaluate our test's psychometric quality, we conducted a Rasch analysis of the surveyed data 22 using the R-package eRm version 1.0-2. 23 We used the item-wise Wald test (split criterion: median) to check individual items for differential item functioning. 24 In doing so, we detected poorly fitting items that needed to be removed from the test. After misfitting items were removed from the test, we applied Andersen's likelihood ratio test (split criterion: median) to evaluate whether the Rasch model appropriately fits participants' test-taking behavior 24 and calculated the test's item- and person-separation reliability. 25 Finally, we conducted correlation analyses between the participants' physics proficiency (estimated person parameters from the Rasch analysis) and their interest in science, their self-concept in science, their latest school grades in physics, and their cultural and economic capital in order to examine the correlational validity of our developed test.
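For reference, the Rasch model underlying this analysis expresses the solving probability as a logistic function of the difference between person ability θ and item difficulty b; a minimal sketch with toy values (not our data):

```python
import math

def rasch_p(theta, b):
    """Rasch model: P(correct) for person ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty, the solving probability is exactly 50%,
# and easier items (lower b) are solved with higher probability.
print(rasch_p(0.0, 0.0))                       # 0.5
print(rasch_p(0.0, -1.0) > rasch_p(0.0, 1.0))  # True
```

The item-wise Wald test and Andersen's likelihood ratio test both exploit a core property of this model: the estimated item difficulties b should be invariant across subgroups of persons (here split at the median).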
Results
In total, 20 items were excluded from the test due to differential item functioning (detected via the item-wise Wald test). As summarized in Table 1, the remaining 20 items do not show irregularities regarding their model fit indices. Both the infit and the outfit mean square of the items lie within a model-fitting range. 22 In addition, all 20 items show a positive and sufficient point-measure correlation. 26 Furthermore, Andersen's likelihood ratio test revealed that the Rasch model adequately captures participants' responses (LR-value = 11.82, df = 19, p = 0.89).
The remaining 20 items address different content areas typical of physics education in middle schools (electrics, magnetism, mechanics, optics, thermodynamics), with mechanics (six items) and optics (five items) being particularly prevalent (see Table 1). Analysis of the Wright Map (Figure 1) indicates that all 20 items properly cover the range of the participants' physics proficiency, especially within the proficiency mid-range. Only the lowest and uppermost ends of the Wright Map show noteworthy gaps between the distribution of estimated item difficulties versus person parameters. This result is also reflected in solid reliability indicators for our test. The person-separation reliability reached a coefficient of 0.78 and is thus sufficient. 27 For the item-separation reliability, a coefficient of 0.97 was obtained. Therefore, our test's item-separation reliability can be considered excellent. 27 Regarding correlations with additional participant attributes, our results were in line with expectations in terms of our test's correlative validity. We found a moderate and positive correlation between the participants' estimated person parameters and their interest in science (r_Pearson = 0.45; p < 0.01), as well as their self-concept in science (r_Pearson = 0.34; p < 0.01). The rank correlation between estimated person parameters and latest school grades in physics was negative, as expected (see note b), but low (ρ_Spearman = -0.24; p < 0.01). For the participants' cultural and economic capital, the rank correlation was also low but positive (ρ_Spearman = 0.22; p < 0.01). (Note a: Item adaptation and data collection for this study were provided by a pre-service physics teacher as part of his master's thesis at the Universität Hamburg; for details, see Ref. 18.)
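The rank correlations reported above are Spearman coefficients; a pure-Python sketch (without tie handling, on made-up data) shows why better ability should correlate negatively with the German grade scale:

```python
# Spearman rank correlation for untied data: 1 - 6*sum(d^2)/(n*(n^2-1))
def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Invented values: higher ability goes with a better (i.e. lower) grade,
# since German grades run from 1 = very good to 6 = insufficient.
ability = [1.2, 0.3, -0.5, 2.0, -1.1]
grades = [2, 3, 4, 1, 5]
print(spearman(ability, grades))  # -1.0 for this perfectly monotone toy data
```

Real data yield weaker coefficients (such as the -0.24 reported above), but the sign logic is the same.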
Summary and Discussion
The test developed in this study allows us to assess students' physics proficiency at the middle-school level based on 20 translated (German) and adapted TIMSS items. Based on the TIMSS technical report 14, the test requires approximately 20 minutes to complete (1 minute per item), which is in accordance with our experiences. One particular advantage of the test is the multiple-choice single-select format of all items, because this format enables us to assess students' physics proficiency in a highly test-economic and objective manner.
Administering the developed test to N = 177 middle school students from Hamburg (Germany) and performing a Rasch analysis of the surveyed data revealed a sufficient person-separation reliability, an excellent item-separation reliability, and a satisfactory alignment between the distributions of estimated item difficulties and person parameters. These findings, as well as consistent results from our correlational analyses, provide cogent evidence that our developed test allows for a valid assessment of middle school students' physics proficiency.
Finally, it should be noted that there are several limitations regarding the results of our study. We administered our developed test only to eighth grade students from Hamburg, Germany. Therefore, based on our results, it is not yet possible to draw valid conclusions regarding whether our developed test is suitable for students of higher or lower grades or within school systems significantly differing from the Hamburg education system. Furthermore, because electrics, which is a key topic within the physics curriculum for middle schools in Germany, is underrepresented among the test's items (2 out of 20 items), it is reasonable to assume that our developed test captures students' proficiency within this content area only to a limited extent. Consequently, in future research, it might be reasonable to extend our developed test by adding further items on electrics. Finally, as our Wright Map analysis revealed, it could be advantageous to extend our test by adding some items with a very high and/or very low item difficulty. Presumably, this could further improve the psychometric quality of our developed test, particularly for assessing students with very high and/or low physics proficiency.
Note
For non-commercial purposes, our developed test can be made available upon request by contacting the authors. (Note b: The negative correlation between estimated person parameters and latest school grades in physics results from the German school grade scale, which ranges from 1 = very good to 6 = insufficient.)
SPATL: Salient Parameter Aggregation and Transfer Learning for Heterogeneous Clients in Federated Learning
Federated learning (FL) facilitates training and deploying AI models on edge devices. Preserving user data privacy in FL introduces several challenges, including expensive communication costs, limited resources, and data heterogeneity. In this paper, we propose SPATL, an FL method that addresses these issues by: (a) introducing a salient parameter selection agent and communicating selected parameters only; (b) splitting a model into a shared encoder and a local predictor, and transferring its knowledge to heterogeneous clients via the locally customized predictor. Additionally, we leverage a gradient control mechanism to further speed up model convergence and increase the robustness of the training process. Experiments demonstrate that SPATL reduces communication overhead, accelerates model inference, and enables stable training processes with better results compared to state-of-the-art methods. Our approach reduces communication cost by up to 86.45%, accelerates local inference by reducing up to 39.7% of FLOPs on VGG-11, and requires 7.4× less communication overhead when training ResNet-20.
I. INTRODUCTION
Distributed machine learning (ML) is extensively used to solve real-world problems in high performance computing (HPC) environments. Typically, training data is first collected at a central location like a data center or HPC cluster. Afterwards, the data is carefully distributed across the cluster nodes based on the availability of resources, and training is conducted in a distributed, resource- and data-aware fashion. However, new legislation such as the General Data Protection Regulation (GDPR) [1] and the Health Insurance Portability and Accountability Act (HIPAA) [2] prohibits user data collection. In response to user privacy concerns, federated learning (FL) [3] was proposed to train ML models while maintaining local data privacy (restricting direct access to private user data).
FL trains a shared model on edge devices (e.g., mobile phones) by aggregating locally trained models on a cloud/central server. This setting, however, presents three key challenges: First, imbalanced/non-independent identically distributed (non-IID) local data easily causes training failure in the decentralized environment. Second, frequent sharing of model weights between edge devices and the server incurs excessive communication overhead. Lastly, the increasing demand for computing, memory, and storage of AI models (e.g., deep neural networks, DNNs) makes them hard to deploy on resource-limited edge devices. (Code is available at: https://github.com/yusx-swapp/SPATL) These challenges suggest that designing efficient FL models and deploying them effectively will be critical in achieving higher performance on future systems. Recent works in FL and its variants [4]-[6] predominantly focus on learning efficiency, i.e., improving training stability and using the minimum number of training rounds to reach the target accuracy. However, these solutions induce extra communication costs. As such, there is no superior solution that addresses the three key issues jointly. Furthermore, the above methods aim to learn a uniform shared model for all heterogeneous clients, but this provides no guarantees of model performance on every non-IID local dataset.
Deep learning models are generally over-parameterized and can easily overfit during local FL updates; in such cases, only a subset of salient parameters decides the final prediction outputs. It is therefore unnecessary to aggregate all the parameters of the model. Additionally, existing works [7], [8] demonstrate that a well-trained deep learning model can be easily transferred to non-IID datasets. Therefore, we propose to use transfer learning to address the data heterogeneity issue of federated learning. As such, we train a shared model and transfer its knowledge to heterogeneous clients by keeping its output layers customized on each client. For instance, computer vision models (e.g., CNNs) usually consist of an encoder part (which embeds the input instance) and a predictor head (the output layers). In this case, we only share the encoder part in the FL communication process and transfer the encoder's knowledge to local non-IID data using a customized local predictor.
Although we use the encoder-predictor based model as an example, our idea can be extended to any AI model whose knowledge is transferable (i.e., whose output layers can be kept heterogeneous on local clients).
Based on these observations, we propose an efficient FL method through Salient Parameter Aggregation and Transfer Learning (SPATL). Specifically, we train the model's encoder in a distributed manner through federated learning and transfer its knowledge to each heterogeneous client via locally deployed predictor heads. Additionally, we deploy a pretrained local salient parameter selection agent to select the encoder's salient parameters based on its topology. Then, we customize the pre-trained agent on each local client by slightly fine-tuning its weights through online reinforcement learning. We reduce communication overhead by only uploading the selected salient parameters to the aggregating server. Finally, we leverage a gradient control mechanism to correct the encoder's gradient heterogeneity and guide the gradient towards a generic global direction that suits all clients. This further stabilizes the training process and speeds up model convergence.
In summary, the contributions of SPATL are:
• SPATL reduces communication overhead in federated learning by introducing salient parameter selection and aggregation for over-parameterized models. This also results in accelerating the model's local inference.
II. MOTIVATION -USE CASES
With advancements in the performance of mobile and embedded devices, more and more applications are moving to decentralized learning on the edge. Improved ML models and advanced weight pruning techniques mean a significant amount of future ML workload will come from decentralized training and inference on edge devices [10]. Edge devices operate under strict performance, power, and privacy constraints, which are affected by factors such as model size and accuracy, training and inference time, and privacy requirements. Many edge applications, such as self-driving cars, could not be developed and validated without HPC simulations, in which HPC accelerates data analysis and the design process of these systems to ensure safety and efficiency. Therefore, the prevailing edge computing trend alongside FL requirements and edge constraints motivate SPATL to address challenges in HPC. Firstly, frequent sharing of model weights between edge devices and the central server incurs a hefty communication cost [11], [12]. Thus, reducing communication overhead is imperative. Secondly, the increasing demand for computing, memory, and storage for AI models (e.g., deep neural networks -DNNs) makes it hard to deploy them on resource-constrained Internet-of-Things (IoT) and edge devices [13], [14]. Transfer learning can be a viable solution to address this problem. Thirdly, latency-sensitive applications with privacy constraints (e.g., self-driving cars [15], augmented reality [16]) in particular, are better suited for fast edge computing [17]. Hence, cutting back on inference time is quite important. Tech giants like Google, Apple, and NVIDIA are already using FL for their applications (e.g., Google Keyboard [18], [19], Apple Siri [20], [21], NVIDIA medical imaging [22]) thanks to their large number of edge devices. Hence, scalability is important in FL and HPC settings. 
Lastly, training data on client edge devices depends on the user's unique usage causing an overall non-IID [3], [23] user dataset. Data heterogeneity is a major problem in decentralized model training [4]- [6], [11], [23]- [29]. Thus designing efficient decentralized learning models and deploying them effectively will be crucial to improve performance of future edge computing and HPC.
III. RELATED WORK
A. Federated Learning
With increasing concerns over user data privacy, federated learning was proposed in [3], to train a shared model in a distributed manner without direct access to private data. The algorithm FedAvg [3] is simple and quite robust in many practical settings. However, the local updates may lead to divergence due to heterogeneity in the network, as demonstrated in previous works [4], [26], [30]. To tackle these issues, numerous variants have been proposed [4]- [6]. For example, FedProx [6] adds a proximal term to the local loss, which helps restrict deviations between the current local model and the global model. FedNova [5] introduces weight modification to avoid gradient biases by normalizing and scaling the local updates. SCAFFOLD [4] corrects update direction by maintaining drift variates, which are used to estimate the overall update direction of the server model. Nevertheless, these variants incur extra communication overhead to maintain stable training. Notably, in FedNova and SCAFFOLD, the average communication cost in each communication round is approximately 2× compared to FedAvg.
Furthermore, federated learning has been extended to real-life applications [44], [45]. One promising direction is personalized federated learning [46]-[51], which tries to learn personalized local models among clients to address data heterogeneity. These works, however, fail to address the extra communication overhead. Very few works, such as [52], [53], focus on addressing communication overhead in FL; they rely on either knowledge distillation or aggregation protocols, and the resulting reduction in communication overhead is not significant.
Additionally, benchmark federated learning settings have been introduced to better evaluate FL algorithms. The FL benchmark LEAF [54] provides benchmark settings for learning in FL, with applications including federated learning, multi-task learning, meta-learning, and on-device learning. The Non-IID benchmark [9] is an experimental benchmark that provides non-IID splits of CIFAR-10 and standard implementations of SOTAs. The framework Flower [55] provides FL SOTA baselines and is a collection of organized scripts used to reproduce results from well-known publications or benchmarks. IBM Federated Learning [56] provides a basic fabric for FL on which advanced features can be added. It is not dependent on any specific machine learning framework and supports different learning topologies (e.g., a shared aggregator) and protocols. It is meant to provide a solid basis for federated learning that enables a large variety of models and topologies, particularly in enterprise and hybrid-cloud settings.
B. Salient Parameter Selection
Since modern AI models are typically over-parameterized, only a subset of parameters determine practical performance. Several network pruning methods have been proposed to address this issue. These methods have achieved outstanding results and are proven techniques to drastically shrink model sizes. However, traditional pruning methods [57]- [59] require time-consuming re-training and re-evaluating to produce a potential salient parameter selection policy. Recently, AutoML pruning algorithms [60], [61] offered state-of-the-art (SoTA) results with higher versatility. In particular, reinforcement learning (RL)-based methods [62]- [65], which model the neural network as graphs and use GNN-based RL agent to search for pruning policy present impressive results. However, AutoML methods need costly computation to train a smart agent, which is impractical to deploy on resource-limited edge FL devices.
The enormous computational cost and effort of network pruning make it difficult to apply directly in federated learning. To overcome the challenges of previous salient parameter selection methods, and inspired by RL-based AutoML pruning methods, we utilize a salient parameter selection RL agent pre-trained on the network pruning task. Then, with minimal fine-tuning, we implement an efficient salient parameter selector with negligible computational burden.
IV. METHODOLOGY
SPATL consists of three main components: knowledge transfer learning, a salient parameter selection agent, and gradient-control federated learning. Figure 1 shows the SPATL overview. Unlike mainstream FL solutions, which attempt to train the entire deep learning model, SPATL only trains the encoder part of the model in a distributed manner and transfers the knowledge to heterogeneous clients. In each round of federated learning, the client first downloads the encoder from the cloud aggregator (step 1 in Figure 1) and transfers its knowledge using a local predictor through local updates (step 2 in Figure 1). After local updates, the salient parameter selection agent evaluates the training results of the current model based on the model performance (step 3 in Figure 1), and finally the selected clients send the salient parameters to the server (step 4 in Figure 1). Additionally, both clients and the server maintain a gradient control variate to correct the heterogeneous gradients, in order to stabilize and smooth the training process.
Fig. 1: SPATL Overview. SPATL trains a shared encoder through federated learning, and transfers the knowledge to heterogeneous clients. Clients upload salient parameters selected by a pre-trained RL-agent. The selected parameters are then aggregated by the server.
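The gradient control variate mentioned above follows a SCAFFOLD-style correction; a minimal sketch with toy scalar values (an illustration of the idea, not SPATL's actual update rule):

```python
# Each client corrects its local gradient by the difference between the
# server's control variate (c_global) and its own (c_local) before stepping,
# steering the update toward a direction that suits all clients.
def corrected_step(w, grad, c_local, c_global, lr=0.5):
    return w - lr * (grad + c_global - c_local)

w_new = corrected_step(w=1.0, grad=0.5, c_local=0.25, c_global=0.75)
print(w_new)  # 0.5
```

A client whose local gradient drifts from the global direction (c_local far from c_global) thus receives a larger corrective term, which is what stabilizes training under data heterogeneity.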
A. Heterogeneous Knowledge Transfer Learning
Inspired by transfer learning [7], SPATL aims to train an encoder in the FL setting and to address the heterogeneity issue by transferring the encoder's knowledge to heterogeneous clients. Formally, we formulate our deep learning model as an encoder E(w_e, x) and a predictor P(w_p, e), where w_e and w_p are the encoder and predictor parameters respectively, x is an input instance to the encoder, and e is an input instance to the predictor (i.e., an embedding).
SPATL shares the encoder E(w_e, x) with the cloud aggregator, while the predictor P_k(w_p^k, e) for the k-th client is kept private on the client. The forward propagation of the model on local client k composes the shared encoder with the private predictor, ŷ_k = P_k(w_p^k, E(w_e, x)). During local updates, the selected k-th client first downloads the shared encoder parameter w_e from the cloud server and optimizes it together with the local predictor head w_p^k through back propagation. Equation 3 shows the optimization function.
Here, l refers to the loss when fitting the label y_i for data x_i, and n_i is a constant coefficient.
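As an illustrative sketch of this split local update, the toy example below jointly optimizes a shared encoder and a client-private predictor; the tanh/linear layers and the squared-error loss are assumptions for illustration, not the paper's exact architecture.

```python
import numpy as np

# Toy sketch of SPATL's split model: a shared "encoder" E(w_e, x) and a
# client-private "predictor" P(w_p, e).
rng = np.random.default_rng(0)

def encoder(w_e, x):
    return np.tanh(x @ w_e)            # embedding e = E(w_e, x)

def predictor(w_p, e):
    return e @ w_p                     # prediction y_hat = P(w_p, e)

def local_update(w_e, w_p, x, y, lr=0.01):
    """One gradient step on the shared encoder and the local predictor."""
    e = encoder(w_e, x)
    err = predictor(w_p, e) - y        # dL/dy_hat for 0.5*||y_hat - y||^2
    g_wp = e.T @ err                   # predictor gradient
    g_e = err @ w_p.T                  # back-propagate into the embedding
    g_we = x.T @ (g_e * (1.0 - e**2))  # tanh'(z) = 1 - tanh(z)^2
    return w_e - lr * g_we, w_p - lr * g_wp

x = rng.normal(size=(8, 4)); y = rng.normal(size=(8, 2))
w_e = rng.normal(size=(4, 3)); w_p = rng.normal(size=(3, 2))

def loss(w_e, w_p):
    return 0.5 * np.sum((predictor(w_p, encoder(w_e, x)) - y) ** 2)

loss0 = loss(w_e, w_p)
for _ in range(200):
    w_e, w_p = local_update(w_e, w_p, x, y)
loss1 = loss(w_e, w_p)
```

In a federated round, only w_e would be uploaded while w_p stays on the client.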
In federated learning, not all clients are involved in communication during each round. In fact, there is a possibility a client might never be selected for any communication round. Before deploying the trained encoder on such a client, the client will download the encoder from the aggregator and apply local updates to its local predictor only. After that, both encoder and predictor can be used for that client. Equation 4 shows the optimization function.
B. RL-based Topology-Aware Salient Parameter Selection
One key issue of FL is the high communication overhead caused by the frequent sharing of parameters between clients and the cloud aggregator server. Additionally, we observed that deep learning models (e.g., VGG [66] and ResNet [67]) are usually bulky and over-parameterized. As such, only a subset of salient parameters decides the final output. Therefore, in order to reduce the communication cost, we implement a local salient parameter selection agent that selects salient parameters for communication. Figure 2 shows the idea of the salient parameter agent. Specifically, inspired by the topology-aware network pruning task [62], [65], we model the neural network (NN) as a simplified computational graph and use it to represent the NN's states. Since NNs are essentially computational graphs, their parameters and operations correspond to the nodes and edges of the computational graph. We then introduce a graph neural network (GNN)-based reinforcement learning (RL) agent, which takes the graph as input (the RL environment state) and produces a parameter selection policy from the topology through GNN embedding. Additionally, the RL agent uses the selected sub-model's accuracy as the reward to guide its search for the optimal pruning policy. Training a smart agent directly through RL, however, is costly and impractical to deploy on the edge. To address this issue, we first pre-train the salient parameter agent on the network pruning task, and then customize the pre-trained agent on each local client by slightly fine-tuning its weights through online reinforcement learning (detailed hyper-parameter settings in Section V).
1) Reinforcement Learning Task Definition: Defining the environment states, action space, reward function, and RL policy is essential for specifying an RL task. In this section, we discuss these components in more detail. Algorithm 1 shows the RL search process.

Algorithm 1: Salient parameter selection with PPO
Input: shared encoder E(w_e) and model size constraint d
Output: selected salient parameters and corresponding indices

In each search step, we first initialize the target encoder Ê(ŵ_e) with the input encoder E(w_e) and convert it to a graph. If the size of Ê does not satisfy the constraint, the proximal policy optimization (PPO) [68] RL agent produces a parameter selection policy a (i.e., the RL action) to update Ê. If Ê satisfies the size constraint, the RL agent uses its accuracy as the reward to update the policy. Finally, the parameters w and the corresponding parameter index idx of the target encoder Ê with the best reward are uploaded to the cloud server.

Environment States. We use a simplified computational graph G(v, e) to represent the NN model [65]. In a computational graph, nodes represent hidden features (feature maps) and edges represent primitive operations (such as 'add', 'minus', and 'product'). Since the NN model involves billions of operations, it is unrealistic to use primitive operations directly. Instead, we simplify the computational graph by replacing the primitive operations with machine learning operations (e.g., conv 3x3, ReLU, etc.).

Action Space. The actions are the sparsity ratios for the encoder's hidden layers. The action space is defined as a ∈ [0, 1]^N, where N is the number of the encoder's hidden layers. The actor network in the RL agent projects the NN's computational graph to an action vector, as shown in Equations 5 and 6.
Here, G is the environment state, g is the graph representation, and MLP is a multi-layer perceptron. The graph encoder learns the topology embedding, and the MLP projects the embedding into the hidden layers' sparsity ratios.

Reward Function. The reward is the accuracy (×100) of the selected sub-network on the validation dataset.
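As an illustrative sketch, the following shows how a per-layer action vector of keep-ratios could be applied to pick salient parameters. Magnitude-based top-k selection here is an assumption for illustration; the paper's agent derives its policy from a GNN over the computational graph.

```python
import numpy as np

def select_salient(layers, actions):
    """layers: list of weight arrays; actions: keep-ratio in [0, 1] per layer.
    Returns a list of (selected values, flat indices), one entry per layer."""
    selected = []
    for w, a in zip(layers, actions):
        flat = w.ravel()
        k = max(1, int(round(a * flat.size)))      # number of weights to keep
        idx = np.argsort(np.abs(flat))[-k:]        # top-k by magnitude (assumed)
        selected.append((flat[idx], idx))
    return selected

# Hypothetical two-layer "encoder" and an action vector from the agent.
layers = [np.array([[0.1, -2.0], [0.3, 0.05]]), np.array([1.0, -0.2, 0.4])]
sel = select_salient(layers, actions=[0.5, 0.34])
```

Only the selected values and their indices would be uploaded, which is where the communication saving comes from.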
2) Policy Updating: The RL agent is updated end-to-end through the PPO algorithm. The RL agent trains on the local clients through continual online learning over each FL round. Equation 8 shows the objective function used for the PPO policy update.
Here, θ is the policy parameter (the actor-critic network's parameters), Ê_t denotes the empirical expectation over time steps, r_t(θ) is the probability ratio between the new and old policies, Â_t is the estimated advantage at time t, and ε is a clipping hyper-parameter, usually 0.1 or 0.2.
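The clipped surrogate of Equation 8 can be sketched numerically as follows; this is the standard PPO form, and the batch values are illustrative.

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """L(theta) = E_t[ min( r_t * A_t, clip(r_t, 1-eps, 1+eps) * A_t ) ]."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantage
    return np.mean(np.minimum(unclipped, clipped))

r = np.array([0.8, 1.0, 1.5])   # new/old policy probability ratios r_t
A = np.array([1.0, -1.0, 2.0])  # estimated advantages A_t
obj = ppo_clip_objective(r, A)  # the third term is clipped: min(3.0, 2.4) = 2.4
```

The clip keeps the policy update conservative when the new policy moves far from the old one.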
C. Generic Parameter Gradient Controlled Federated Learning
Inspired by stochastic controlled averaging federated learning [4], we propose generic-parameter gradient-controlled federated learning to correct the heterogeneous gradients. Due to client heterogeneity, local gradient update directions move towards local optima and may diverge across clients. To correct the overall gradient divergence by estimating the gradient update directions, we maintain control variates both on the clients and on the cloud aggregator. However, controlling the entire model's gradients would hurt the local model's performance on non-IID data. To compensate for this performance loss, SPATL only corrects the generic parameters' gradients (i.e., the encoder's gradients) while maintaining a heterogeneous predictor. Specifically, in Equation 9, during local updates of the encoder, we correct gradient drift by adding the estimated gradient difference (c_g − c_l).
Here, the control variate c_g is the estimate of the global gradient direction maintained on the server side, and c_l is the estimate of the update direction for local heterogeneous data maintained on each client. In each round of communication, c_l is updated as in Equation 10, where E is the number of local epochs and η is the local learning rate, while c_g is updated by Equation 11, where Δc_k is the difference between the new and old local control variates c_l of client k, N is the set of clients, and K is the set of selected clients. Algorithm 2 shows SPATL with gradient-controlled FL. In each update round, the client downloads the global encoder's parameter w_g and update direction c_g from the server and performs local updates. When updating the local encoder parameter w_e, (c_g − c_l) is applied to correct the gradient drift. The predictor head's gradient remains heterogeneous. Before uploading, the local control variate c_l is updated by estimating the gradient drift.
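As a minimal sketch of this gradient-controlled local update (Equations 9-10), the toy example below uses SCAFFOLD-style control-variate corrections; the quadratic local objective and the constants are assumptions for illustration.

```python
import numpy as np

def client_update(w_g, c_g, c_l, grad_fn, eta=0.1, E=5):
    """Local encoder update with drift correction, then control-variate update."""
    w_e = w_g.copy()
    for _ in range(E):                                  # E local epochs
        w_e = w_e - eta * (grad_fn(w_e) + c_g - c_l)    # Equation 9: corrected step
    c_l_new = c_l - c_g + (w_g - w_e) / (E * eta)       # Equation 10 (SCAFFOLD-style)
    return w_e, c_l_new, c_l_new - c_l                  # delta_c goes to the server

grad_fn = lambda w: w - 3.0       # toy local objective 0.5 * (w - 3)^2
w_e, c_l_new, dc = client_update(np.array([0.0]), np.zeros(1), np.zeros(1), grad_fn)
```

With zero control variates the correction term vanishes, so the first round reduces to plain local SGD; from the next round on, (c_g − c_l) pulls the local trajectory toward the global direction.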
Algorithm 2: SPATL with gradient-controlled FL

Server executes:
    initialize w_g, c_g
    for each round t = 1, 2, . . . , T do
        K ← random set of clients ∈ N
        for each client k ∈ K in parallel do
            communicate with client k
            w_k, i_k, Δc_k ← ClientUpdate(w_g, c_g)

ClientUpdate(w_g, c_g):
    B ← split local dataset into batches
    initialize the local encoder w_e ← w_g, and control c_l
    for epoch = 1, 2, . . . , E do
        . . .
    communicate w_e, idx, Δc
    c_l ← c_l*

1) Aggregation with Salient Parameters: Due to the non-IID local training data in heterogeneous clients, the salient parameter selection policy varies among the heterogeneous clients after local updates. Since the selected salient parameters have different matrix sizes and/or dimensions, directly aggregating them would cause a matrix dimension mismatch. To prevent this, as Figure 3 shows, we only aggregate partial parameters, according to each client's salient parameter index, on the server side. Equation 12 shows the mathematical representation of this process. Here, w_g is the global parameter, w_k is the k-th client's salient parameter, i_k is the index of w_k corresponding to the original weights, and η is the update step size. By aggregating only the salient parameter w_k and its corresponding index i_k (a negligible burden), we can significantly reduce the communication overhead and avoid matrix dimension mismatches.
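A sketch of this index-based partial aggregation (Equation 12): each upload moves only the global entries named by the client's salient index toward the client's values. Averaging per entry over the clients that selected it is an assumption about how overlapping indices are combined.

```python
import numpy as np

def aggregate(w_g, uploads, eta=1.0):
    """uploads: list of (salient values w_k, flat indices i_k) per client."""
    delta = np.zeros_like(w_g)
    counts = np.zeros_like(w_g)
    for w_k, i_k in uploads:
        np.add.at(delta, i_k, w_k - w_g[i_k])   # accumulate per-entry updates
        np.add.at(counts, i_k, 1.0)             # how many clients touched each entry
    out = w_g.copy()
    mask = counts > 0
    out[mask] += eta * delta[mask] / counts[mask]  # untouched entries stay as-is
    return out

w_g = np.zeros(4)
uploads = [(np.array([1.0, 2.0]), np.array([0, 1])),
           (np.array([4.0]), np.array([1]))]
new_wg = aggregate(w_g, uploads)
```

Because each client only sends (values, indices) pairs, clients with different selection policies never force a full-matrix merge.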
V. EXPERIMENT
We conducted extensive experiments to examine SPATL's performance. Overall, we divided our experiments into three categories: learning efficiency, communication cost, and inference acceleration. We also performed an ablation study and compared SPATL with state-of-the-art FL algorithms.
A. Implementation and Hyper-parameter Setting
Datasets and Models. The experiments are conducted on FEMNIST [54] and CIFAR-10 [69]. For FEMNIST, we follow the LEAF benchmark federated learning setting [54]. For CIFAR-10, we use the Non-IID benchmark federated learning setting [9]: each client is allocated a proportion of the samples of each label according to a Dirichlet distribution (with concentration α). Specifically, we sample p_k ∼ Dir_N(α) and allocate a p_{k,j} proportion of the instances of class k to client j. Here we choose α = 0.1. The deep learning models used in the experiments are VGG-11 [66] and ResNet-20/32 [67].
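The Dirichlet-based non-IID split can be sketched as follows; the shuffling and rounding details here are assumptions, and the benchmark's exact implementation may differ.

```python
import numpy as np

def dirichlet_split(labels, n_clients, alpha, rng):
    """For each class, sample p ~ Dir_N(alpha) and give client j a p_j share."""
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        p = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(p)[:-1] * len(idx)).astype(int)  # split points per share
        for j, part in enumerate(np.split(idx, cuts)):
            client_idx[j].extend(part.tolist())
    return client_idx

rng = np.random.default_rng(0)
labels = np.repeat(np.arange(10), 100)   # toy dataset: 10 classes, 100 samples each
parts = dirichlet_split(labels, n_clients=5, alpha=0.1, rng=rng)
```

Small α (e.g., 0.1) concentrates each class on a few clients, producing strongly non-IID local datasets.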
Federated Learning Setting. We follow the Non-IID benchmark federated learning setting and implementation [9]. In SPATL, the models on each client are different; thus, we evaluate the average performance of the models across heterogeneous clients. We experiment with different numbers of clients and sample ratios (the percentage of clients participating in each round), from 10 to 100 clients and sample ratios from 0.4 to 1. During local updates, each client performs 10 local update rounds. The detailed settings can be found in the supplementary materials.

RL Agent Settings. The RL agent is pre-trained on ResNet-56 via a network pruning task. We fine-tune the RL agent in the first 10 communication rounds, with 20 epochs in each update round, and only update the MLP's (i.e., the output layers of the RL policy network) parameters when fine-tuning. We use the PPO [68] RL policy; the discount factor is γ = 0.99, the clip parameter is 0.2, and the standard deviation of actions is 0.5. The Adam optimizer is applied to update the RL agent, with learning rate 3 × 10^-4 and β = (0.9, 0.999).
Experimental Setup. Our experimental setup ranges from 10 to 100 clients, a scale on par with existing state-of-the-art works. To better compare and fully investigate the optimization ability, some of our experiments (e.g., communication efficiency) are set at a larger scale than many SoTA methods' experiments (such as FedNova [5], FedAvg [3], FedProx [6], and SCAFFOLD [4]). Recent FL works, such as FedAT [40], pFedHN [41], QuPeD [51], FedEMA [42], and FedGen [43], are evaluated at the same scale as our experiments. Since FL is an optimization algorithm, we mainly investigate training stability and robustness; a larger experiment scale would show a similar trend.
FL Benchmark. We use two standard FL benchmark settings: LEAF [54] and the Non-IID benchmark [9]. LEAF [54] provides benchmark settings for learning in FL, with applications including federated learning, multi-task learning, meta-learning, and on-device learning. We use LEAF to split FEMNIST into non-IID distributions. The Non-IID benchmark [9] is an experimental benchmark that provides non-IID splits of CIFAR-10 and standard implementations of SoTA methods. Our implementations of FedAvg, FedProx, SCAFFOLD, and FedNova are based on the Non-IID benchmark.
B. Learning Efficiency
In this section, we evaluate the learning efficiency of SPATL by investigating the relationship between communication rounds and the average accuracy of the model. Since SPATL learns a shared encoder and each local client has a heterogeneous predictor, the model's performance differs among clients. Instead of evaluating a global test accuracy on the server side, we allocate to each client a local non-IID training dataset and a validation dataset, and evaluate the top-1 accuracy of the model (i.e., the highest-probability prediction must be exactly the expected answer) across heterogeneous clients. We train VGG-11 [66] and ResNet-20/32 [67] on CIFAR-10 [69], and a 2-layer CNN on FEMNIST [54], separately, until the models converge. We then compare the model performance of SPATL with state-of-the-art methods (SoTAs), i.e., FedNova [5], FedAvg [3], FedProx [6], and SCAFFOLD [4]. The experiments in Figure 4 include a 10-client setting where we sample all 10 clients for aggregation; there, the effect of heterogeneity is not significant compared to a real-world scale, and SPATL moderately outperforms the SoTAs on CIFAR-10. Results for the 2-layer CNN trained on FEMNIST, however, are an exception: in this case, the model trained by SPATL has slightly lower accuracy than the SoTAs. We suspect this has to do with the small size of the 2-layer CNN and the large data quantity. In particular, our "model over-parameterization" assumption no longer holds in this case, making it hard for the salient parameter selection to fit the training data. To verify our analysis, we increase the complexity of our experiments and conduct further experiments at larger FL scales, increasing the number of clients to 30, 50, and 100 with different sample ratios.
As heterogeneity rises with the increasing number of clients, SPATL demonstrates its superiority in coping with data heterogeneity. Experimental results in Figure 4 show that for more complex FL settings, SPATL outperforms the SoTAs by larger margins. In the 30-client FL setting², for ResNet-20, ResNet-32, and VGG-11, SPATL outperforms the SoTA FL methods. Notably, SPATL yields better convergence accuracy and a substantially more stable training process. In the 50-client and 100-client settings, the improvements become more significant, as SPATL outperforms the SoTAs by a larger margin. Moreover, we noticed that the gradient-control-based method SCAFFOLD [4] suffers from gradient explosion when the number of clients increases; even though we set a tiny learning rate, the explosion problem persists. Other researchers face the same issue when reproducing SCAFFOLD, and our results are consistent with finding 6 in [9].
Next, we investigate the models' convergence accuracy. Figure 5 shows the convergence accuracy compared with the SoTAs. SPATL surpasses the SoTAs in all the FL settings and achieves higher convergence accuracy. Again, the superiority of SPATL grows progressively with the heterogeneity of the FL setting. For instance, on ResNet-20 with 30 clients, SPATL outperforms the SoTAs only in terms of final convergence accuracy; however, with 50 heterogeneous clients, SPATL achieves 42.54% final accuracy, around 10% higher than FedAvg and FedProx (which achieve 32.71% and 32.43%, respectively). Additionally, it is worth mentioning that in the 100-client experiment setting, we compare accuracy within 200 rounds, since all the baselines except SPATL diverge within 200 rounds. This further demonstrates that SPATL optimizes and improves the quality of the model progressively and stably.
Trained model performance on heterogeneous local clients is also an essential indicator when evaluating FL algorithms with regard to deploying AI models on the edge. Since edge devices have various application scenarios and heterogeneous input data, models will likely exhibit divergence on such devices. We further evaluate the robustness and feasibility of FL methods for distributed AI by testing local model accuracies on all clients. Figure 6 shows each client's ResNet-20 accuracy on CIFAR-10 after training is complete (10 clients in total, trained by SPATL and SCAFFOLD for 100 rounds). The model trained by SPATL produces better performance across all clients. In particular, the edge models trained by SPATL produce more stable performance across clients, whereas models trained by the baselines exhibit more variance; for instance, all the edge models trained by SPATL have similar accuracy. Since SPATL uses heterogeneous predictors to transfer the encoder's knowledge, the model is more robust when dealing with non-IID data. However, our baseline methods (such as SCAFFOLD [4]) share the entire model when training on non-IID clients, leading to variance in model performance across non-IID clients and poor performance on some of them.

² In Fig. 4, SCAFFOLD [4] diverges with gradient explosion in the Non-IID benchmark settings [9] when there are more than 10 clients.
C. Communication Efficiency
A key contribution of SPATL that makes it stand out among the SoTAs is its significant reduction of communication overhead due to salient parameter selection. Although SoTAs like FedNova [5] and SCAFFOLD [4] achieve stable training via gradient control or gradient normalization variates, their average communication cost doubles compared to FedAvg [3] as a result of sharing the extra gradient information. We present two experimental settings to evaluate communication efficiency: first, we train all models to a target accuracy and calculate the communication cost; second, we train all models to convergence and calculate the communication cost of each FL algorithm. The communication cost is calculated as:

#Rounds × Client's round cost × #Sampled Clients    (13)

Table I shows detailed communication costs (with FedAvg [3] as the benchmark) when training models to a target accuracy. SPATL remarkably outperforms the SoTAs. On ResNet-20, SPATL reduces communication cost by up to 7.6× (FedNova costs 8.12 GB while SPATL only costs 1.1 GB). Moreover, SPATL reduces communication by up to 102 GB when training VGG-11, compared to FedNova.
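A minimal sketch of the accounting in Equation 13; the numbers below are hypothetical, not the paper's measured values.

```python
def comm_cost_gb(rounds, round_cost_gb, sampled_clients):
    """Equation 13: total cost = #rounds x per-client round cost x #sampled clients."""
    return rounds * round_cost_gb * sampled_clients

# Hypothetical comparison: halving the per-round payload (partial upload)
# and needing fewer rounds both multiply into the total.
full_upload = comm_cost_gb(rounds=200, round_cost_gb=0.01, sampled_clients=10)
partial_upload = comm_cost_gb(rounds=150, round_cost_gb=0.005, sampled_clients=10)
```

The formula makes clear why SPATL's savings compound: salient-parameter uploads shrink the round cost, and faster convergence shrinks the round count.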
There are two main benefits of SPATL in reducing communication overhead. First, training in SPATL is much more efficient and stable; our experiments show that SPATL requires fewer communication rounds to achieve the target accuracy. For example, on VGG-11, FedProx uses 296 training rounds while SPATL uses 250 fewer rounds to achieve the same target accuracy, which significantly reduces the communication cost. We provide a more comprehensive comparison of the number of rounds different models take to achieve the target accuracy. As Figure 7 shows, across different FL settings SPATL consistently requires fewer rounds than the SoTAs (except on ResNet-20 with 10 clients and an 80% accuracy target, where SPATL requires 3 more rounds than SCAFFOLD; however, as shown in Table I, its total communication cost is still significantly less than all others). Second, since the salient parameter selection agent selectively uploads partial parameters, SPATL significantly reduces the communication cost. As shown in Table I, compared to gradient-control-based methods such as SCAFFOLD [4] and FedNova [5], SPATL remarkably reduces the per-round communication cost. For instance, SPATL's round cost on ResNet-20 is 2× lower than that of traditional FL such as FedAvg [3]. Even when factoring in the gradient control information, salient parameter selection enables SPATL to drop unnecessary communication burdens, making its round costs comparable to FedAvg. Furthermore, we investigated the convergence accuracy of models using the SoTA and SPATL optimizations. We consider a performance upper bound by creating a hypothetical centralized case where images are heterogeneously distributed across 30, 50, and 100 clients. Table II shows the results of training the models to convergence. Compared to FedAvg [3], gradient-control-based FL algorithms obtain higher accuracy at the expense of communication efficiency; for instance, FedNova [5] achieves slightly higher accuracy.
However, their communication budget increases by more than 2×. Models optimized by SPATL achieve significantly higher accuracy than all other baselines. In particular, on VGG-11 with 50 clients, SPATL achieves 17.8%, 19.86%, and 17.62% higher accuracy than FedAvg, FedNova, and FedProx, respectively.
D. Inference Acceleration
In this section, we evaluate the inference acceleration of SPATL. Local inference is a crucial measure for deploying AI models on the edge, since edge devices have limited computing power and edge applications (e.g., self-driving cars) are inference-sensitive. In SPATL, when the salient parameter selection agent selects salient parameters, it prunes the model as well. For a fair evaluation of inference, instead of recording the actual run time (which may vary across platforms) of the pruned models, we calculate the FLOPs (floating-point operations). Table III shows the inference acceleration after training is complete. SPATL notably reduces the FLOPs in all the evaluated models. For instance, on ResNet-32, the average FLOPs reduction among 10 clients is 29.6%, and the client with the highest reduction achieves 38.4% fewer FLOPs than the original model, while the client models have a relatively low sparsity ratio (the sparsity ratio is the ratio of salient parameters to the entire model's parameters). The low sparsity ratio can further help accelerate the models on parallel platforms such as GPUs. Additionally, we evaluate the salient parameter selection agent's pruning ability and compare it with SoTA pruning methods. As shown in Table V, our agent achieves outstanding results on the pruning task and outperforms popular AutoML pruning baselines. The results indicate that SPATL can significantly accelerate model inference with an acceptably small accuracy loss.
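For intuition, here is a sketch of how channel pruning maps to a FLOPs reduction for a single convolution; the 2× multiply-accumulate convention and the layer shapes are assumptions, not the paper's counting method.

```python
def conv_flops(c_in, c_out, k, h, w):
    """FLOPs of a k x k convolution producing a (c_out, h, w) output,
    counting each multiply-accumulate as 2 operations (convention varies)."""
    return 2 * c_in * c_out * k * k * h * w

orig = conv_flops(c_in=16, c_out=32, k=3, h=8, w=8)
pruned = conv_flops(c_in=16, c_out=20, k=3, h=8, w=8)  # keep 20 of 32 channels
reduction = 1 - pruned / orig                           # 37.5% fewer FLOPs
```

Because a channel removed from one layer also shrinks the next layer's input, whole-network reductions can exceed the per-layer figure.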
E. Transferability of the Learned Model
In SPATL, since only a partial model (i.e., the knowledge encoder) is trained in a distributed manner, we conducted a transferability comparison experiment to test for successful transfer of knowledge among heterogeneous edge clients. Specifically, we transfer the neural network trained by SPATL and by the SoTAs (e.g., FedAvg, FedNova, SCAFFOLD, etc.) separately to a new portion of data and compare the performance of the transferred models. The experimental setting is as follows: we split CIFAR-10 [69] into two separate datasets, one with 50K images (for federated learning) and another with 10K images (for transfer learning after FL is finished). We use ResNet-20 and set 10 clients for federated learning, where each client has 4K images of local training data and a 1K validation set. Transfer learning was conducted in a regular manner, without involving the federated training distribution. Table IV shows the results. The model trained by SPATL achieves transfer learning results comparable to the SoTAs. This further shows that SPATL, which only trains a shared encoder in a distributed manner (as opposed to training the entire model), can successfully learn and transfer the knowledge of a heterogeneous dataset.
1) Salient Parameter Selection vs. No Parameter Selection:
Modern AI models are huge (involving billions of parameters) and often over-parameterized, so only a subset of salient parameters significantly affects the model's final performance. As such, a reasonable pruning of redundant parameters should not negatively impact model training. This section investigates the impact of salient parameter selection on federated learning. Specifically, we compare SPATL with and without salient parameter selection; Figure 8 shows the results. We conducted the experiment on ResNet-20 with various FL settings. All of the results indicate that properly pruning some unimportant weights of over-parameterized networks does not harm training stability in federated learning; instead, it can produce better results in some cases. Especially in the 10-client setting, SPATL optimized a higher-quality model, meaning that it can prune redundant parameters and significantly reduce the FLOPs of the pruned model with negligible accuracy loss. Moreover, state-of-the-art salient parameter selection methods, such as SFP [70], DSA [72], and FPGM [71], are usually non-transferable for a given model: they require a time-consuming search and re-training to find a target model architecture and salient parameters. For instance, as shown in [73] (Table 2), a combined model compression method needs 85.08 lbs of CO2 emissions to find a target model architecture, which makes it expensive to deploy on edge devices. In SPATL, the RL agent is a tiny GNN followed by an MLP: it computes the target salient parameters in a single one-shot inference (0.36 ms on an NVIDIA V100), and its memory consumption is 26 KB, which is acceptable on edge devices.
2) Transfer Learning vs. No Transfer Learning: SPATL transfers the shared encoder to local non-IID datasets to address the heterogeneity issue of FL. To investigate the effect of transfer learning on SPATL, in this section we disable SPATL's transfer learning. Figure 9(a) shows the results. We train ResNet-20 [67] on CIFAR-10 [69] with 10 clients and sample all the clients in each communication round. SPATL without transfer learning performs poorly when optimizing the model. Combining this with the results presented in Figure 6, we can infer that a uniform model deployed on heterogeneous clients can cause performance diversity (i.e., the model performs well on some clients but poorly on others). Intuitively, clients whose data distribution is similar to the global data distribution usually perform better, whereas clients far from the global data distribution are hard to converge. This suffices to show that, by introducing transfer learning, SPATL can better deal with heterogeneity in FL: transfer learning enables every client to customize the model on its non-IID data and produces significantly better performance than without transfer learning.
3) Impact of Gradient Control: SPATL maintains control variates both locally and in the cloud to help correct the local update directions and guide the encoder's local gradient towards the global gradient direction. Figure 9(b) shows the results of SPATL with and without gradient control. We train VGG-11 [66] on CIFAR-10 [69] with 10 clients. Training the model on heterogeneous non-IID local datasets typically causes high variance of the local gradients, leading to poor convergence. The gradient control variates in SPATL maintain the global gradient direction and correct the gradient drift, thus producing a stable training process. The results are in line with our expectation that gradient control remarkably improves the training performance of SPATL.

4) Fine-tuning the Reinforcement Learning Agent: This section discusses the cost of pre-training the reinforcement learning agent and the cost of customization by slightly fine-tuning the agent's weights through online reinforcement learning. We pre-train the RL agent by performing network pruning on ResNet-56, then transfer the agent to ResNet-18 by fine-tuning, updating only the predictor of the RL agent's policy network. Figure 10 shows the average reward the RL agent obtains on the network pruning task against the agent's update round. On both ResNet-18 and ResNet-56, the RL agent converges rapidly, in around 40 rounds of RL policy updates. In particular, on ResNet-18, slight fine-tuning of the RL agent achieves rewards comparable to those on ResNet-56. This means the agent can be successfully transferred to a newly deployed model, and further shows the feasibility of fine-tuning the pre-trained salient parameter selection agent.
VI. CONCLUSION AND DISCUSSION
In this paper, we presented SPATL, a method for efficient federated learning using salient parameter aggregation and transfer learning. To address data heterogeneity in federated learning, we introduced a knowledge-transfer local predictor that transfers the shared encoder to each client. We proposed a salient parameter selection agent to filter the salient parameters of the over-parameterized model before communicating them to the server; as a result, the proposed method significantly decreases the communication overhead. We further leveraged a gradient control mechanism to stabilize the training process and make it more robust. Our experiments show that SPATL has a stable training process and achieves promising results. Moreover, SPATL significantly reduces the communication cost and accelerates model inference. The proposed approach may have poor performance on simple models: as Figure 4 shows, our approach works well on over-parameterized neural networks, such as ResNet [67] and VGG [66], but for less-parameterized models, such as 2-layer CNNs, the salient parameter selection may degrade performance, making the model converge more slowly than the baselines. In practice, less-parameterized models are rarely used in real-world applications. Second, not all AI models are transferable. In future work, we will continue to improve the universality of our method.
"year": 2021,
"sha1": "b7d7616c7b7531f3088a398258c72ee67cd16685",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "04029cb70ab0441cd9711c3ad2a15e5d6a85b7ff",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
14349202 | pes2o/s2orc | v3-fos-license | Einstein-Weyl structures and Bianchi metrics
We analyse in a systematic way the (non-)compact four dimensional Einstein-Weyl spaces equipped with a Bianchi metric. We show that Einstein-Weyl structures with a Class A Bianchi metric have a conformal scalar curvature of constant sign on the manifold. Moreover, we prove that most of them are conformally Einstein or conformally K\"ahler ; in the non-exact Einstein-Weyl case with a Bianchi metric of the type $VII_0, VIII$ or $IX$, we show that the distance may be taken in a diagonal form and we obtain its explicit 4-parameters expression. This extends our previous analysis, limited to the diagonal, K\"ahler Bianchi $IX$ case.
Introduction
In the last years, Einstein-Weyl geometry has raised some interest, in particular when, in a recent paper, Tod [1] exhibited the relationship between a particular Einstein-Weyl geometry without torsion (the four-dimensional self-dual Einstein-Weyl geometry studied by Pedersen and Swann [2]) and local heterotic geometry (i.e. the Riemannian geometry with torsion and three complex structures, associated with (4,0) supersymmetric non-linear σ models [3,4,5]).
To extend these ideas to other situations, we analysed in a first step [6] (hereafter referred to as [GB]) the Einstein-Weyl equations in the subclass of diagonal Kähler Bianchi IX metrics (in the standard classification [7,8]). In the present work, we study (non-)compact 4-dimensional Einstein-Weyl structures (for recent reviews see refs. [2,9]) on cohomogeneity-one manifolds with a 3-dimensional group of isometries transitive on codimension-one surfaces, i.e., in the general relativity terminology, Bianchi metrics, and we require neither a diagonal metric nor the Kähler property; we nevertheless obtain interesting results for any (class A) Bianchi metric.
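For reference, the defining equations of a (torsion-free) Weyl structure and the Einstein-Weyl condition can be sketched as follows; sign and factor conventions for the one-form γ vary across references, so this is one common choice rather than the paper's fixed convention:

```latex
% Weyl connection D: torsion-free and compatible with the conformal class [g]
% through the one-form \gamma
D_{\mu}\, g_{\nu\lambda} = \gamma_{\mu}\, g_{\nu\lambda}\,,
% Einstein--Weyl condition: the symmetrised Ricci tensor of D is
% proportional to the metric
R^{D}_{(\mu\nu)} = \Lambda\, g_{\mu\nu}\,.
```

When γ is exact, a conformal rescaling gauges it away and the structure reduces to an ordinary Einstein metric, which is why the non-exact case is the interesting one below.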
Let us recall that, in the compact case, on general grounds, strong results on Einstein-Weyl structures have been known for some time: • There exists a unique metric in a given conformal class [g] such that the Weyl form is co-closed [10], ∇^µ γ_µ = 0.
One then speaks of the "Gauduchon's gauge" and of a "Gauduchon's metric".
• The analysis of the Einstein-Weyl equations in this gauge gives two essential results: the dual of the Weyl form γ is a Killing vector [11], and four-dimensional Einstein-Weyl spaces have a constant conformal scalar curvature [12]. The paper is organised as follows: in the next Section, we recall the classification of Bianchi metrics and the expressions of geometrical objects, separating the 4-dimensional metric g into a "time part" and a 3-dimensional homogeneous one. Focusing on Class A Bianchi metrics, we exhibit a specific Gauduchon's gauge and show how, in the diagonal case, the Einstein-Weyl equations simplify and ensure that the dual γ^µ of the Weyl one-form γ is a Killing vector, as in the compact case, and that the metric is either conformally Einstein or conformally Kähler. In particular, this proves that four-dimensional Einstein-Weyl spaces equipped with a diagonal Bianchi IX metric are necessarily conformally Kähler, i.e. that our previous solution [GB] is the general one, up to a conformal transformation.
In Section 3, we show that for all class A Bianchi metrics, there exists a simple Gauduchon's gauge such that the conformal scalar curvature is constant on the manifold and the dual γ^µ of the Weyl one-form γ satisfies D^µ ∇_(µ γ_ν) = 0, where D denotes the covariant derivative with respect to the Weyl connection γ. Using these results, we prove that for Bianchi VI₀, VII₀, VIII and IX, the most general solution of the Einstein-Weyl constraints is the same as in the diagonal case, i.e., in the non-conformally-Einstein cases, the Kähler one of the previous subsection, up to a conformal transformation. Finally, we also prove that the only self-dual Einstein-Weyl structures are the Bianchi IX ones of Madsen [9,13].
The geometrical setting
• A Weyl space [2] is a conformal manifold with a torsion-free connection D and a one-form γ satisfying a compatibility condition for each representative metric g in the conformal class [g]. A different choice of representative metric, g → g̃ = e^f g, is accompanied by a change in γ: γ → γ̃ = γ + df. Conversely, if the one-form γ is exact, the metric g is conformally equivalent to a Riemannian metric g̃ with D_µ g̃_νρ = 0. In that case, we shall speak of an exact Weyl structure.
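The compatibility condition defining D in terms of a representative metric can be written as follows (in one common convention; normalizations of γ differ between authors, so this is indicative rather than the authors' exact equation):

```latex
% Weyl compatibility: D preserves g only up to the Weyl one-form.
D_{\mu}\, g_{\nu\rho} = \gamma_{\mu}\, g_{\nu\rho},
\qquad
g \;\longrightarrow\; \tilde g = e^{f} g
\quad\Longrightarrow\quad
\gamma \;\longrightarrow\; \tilde\gamma = \gamma + df .
```

With this normalization the stated transformation rule follows directly, since D_µ g̃_νρ = e^f (∂_µ f) g_νρ + e^f D_µ g_νρ = (γ_µ + ∂_µ f) g̃_νρ.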
• On the other hand, Bianchi metrics are real four-dimensional metrics with a three-dimensional isometry group, transitive on 3-surfaces. Their classification was done by Bianchi in 1897 according to the Lie algebras of their isometry groups, i.e. according to the Lie algebra structure constants C^i_jk (i, j, k = 1, 2, 3); on general grounds, these may be decomposed into two parts [14], where the symmetric 3×3 tensor n^il may be reduced to a diagonal matrix with entries 0, 1 or −1 and the vector a_l satisfies n^il a_l = 0.
This splits Bianchi metrics into two classes : class A in which the vector a l is zero, and class B in which it has one non vanishing component, say a 1 .
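For reference, the decomposition of the structure constants invoked above is, in the standard Ellis-MacCallum form (index placements and signs may differ slightly from the authors' conventions):

```latex
% Ellis--MacCallum decomposition of Bianchi structure constants
C^{i}{}_{jk} = \epsilon_{jkl}\, n^{li} + a_{j}\,\delta^{i}_{k} - a_{k}\,\delta^{i}_{j},
\qquad
n^{il} a_{l} = 0 ,
```

with n^il symmetric and reducible to a diagonal matrix with entries 0, ±1. The trace then satisfies C^i_ji = 2a_j (up to sign conventions), so Class A (a_l = 0) is exactly the trace-free case.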
• An invariant Weyl structure may then be written in terms of three one-forms σ^i invariant under the isometries of the homogeneous 3-space, characterised by the aforementioned structure constants C^i_jk. Notice that there is no loss of generality in choosing the metric element g_00 = 1, as this corresponds to a choice of "proper time" T, but the matrix h_ij is a priori non-diagonal. On the other hand, one may always choose a representative in the conformal class [g] such that γ_0(T) ≡ 0.
The Ricci tensor R^(D)_µν associated to the Weyl connection D is related to R^(∇)_µν, the Ricci tensor associated to the Levi-Civita connection [GB]. R^(∇)_µν may be expressed (see for example [14]) in terms of the 3-dimensional Ricci tensor associated to the Levi-Civita connection of the homogeneous space, which, in the basis of the one-forms σ^i, may be expressed as a function of the 3-metric h_ij and of the structure constants of the group [14,15]. In the same way, the 4-dimensional Bianchi identity splits into two equations (see the appendix); we do not find equation (8) in the standard textbooks on gravity.
The Gauduchon's gauges
We computed (using equations (39, 40) of the appendix) the components of the tensor ∇_(µ γ_ν). When C^i_ij ≡ 2a_j = 0, which corresponds to class A Bianchi metrics, a special Gauduchon's gauge is obtained through the choice (11). In the compact case, the choice (11) is the unique good one ([9], Proposition 5.20).
The Einstein-Weyl equations
Einstein-Weyl spaces are defined by equation (12). Note that for an exact Einstein-Weyl structure, γ = df, the representative metric is conformally Einstein. Note also that the conformal scalar curvature S^D is related to the scalar curvature R^(∇). For Class A Bianchi metrics, in the special Gauduchon's gauge (11), the Einstein-Weyl constraints (12) split into equations (14)-(16).
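The Einstein-Weyl condition of (12), and the relation between S^D and R^(∇) quoted in Section 3, can be summarized as follows (the trace fixes the proportionality function; the second relation holds in the co-closed gauge, where the divergence term of the general formula drops out):

```latex
% Einstein--Weyl condition and conformal scalar curvature (co-closed gauge)
R^{(D)}_{(\mu\nu)} = \frac{S^{D}}{4}\, g_{\mu\nu},
\qquad
S^{D} = R^{(\nabla)} - \tfrac{3}{2}\,\gamma_{\mu}\gamma^{\mu}
\quad (\nabla^{\mu}\gamma_{\mu} = 0).
```

The second relation is the one the authors use in the proof of Lemma 2 below.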
Diagonal metrics and conformal Kählerness
Let us restrict ourselves to the diagonal Bianchi metrics, usually written as in (17) [7,8], and define the α_i accordingly. In [7], Dancer and Strachan gave the conditions on the α_i under which the four-dimensional diagonal Bianchi metric is Kähler, but not hyper-Kähler. These conditions are: for Class A, two of the α_i have to be equal, the third one vanishing; for Class B, the three α_i have to be proportional to ω_1 and to satisfy a further relation. Under a conformal transformation preserving the cohomogeneity-one character of a Bianchi metric, g̃ = µ²(T) g, these conditions are easily converted into conditions for Kählerness up to a conformal transformation: the metric (17) is conformal to a Kähler one iff: • Class A metric (a_i = 0): two of the α_i are equal; • Class B metric (a_i = a δ_i1): certain proportionality relations hold. For a Class A diagonal Bianchi metric, equation (15) gives a first condition on the Γ_i, and equations (16), written for i = j, give further conditions. By inspection of the different possibilities for the n_ii [14], it is readily shown that at least two of the Γ_i necessarily vanish, with no other constraint for Bianchi I and II; for Bianchi VI₀, all three of them vanish and the metric is necessarily conformally Einstein; for Bianchi VII₀ and VIII [n_11 = n_22 = +1], the third, Γ_3, is constrained by Γ_3 [ω_1² − ω_2²] = 0; then either the metric is conformally Einstein or, with ω_1² = ω_2², the metric is conformally Kähler (thanks to Lemma 1). For the Bianchi IX case, the same result holds, the special direction being unfixed (it will be chosen in the same direction as for Bianchi VII₀ and VIII). A corollary of this analysis is that in all Class A cases, the dual of the one-form γ is a Killing vector.
In these three types of Bianchi metrics, the remaining equations (14, 16) read, in the vierbein basis corresponding to (17), as the system (21) (a comma indicates a derivative with respect to t). Consider the function u(t) = α_3 ω². Its derivative is readily obtained using the difference of the (00) and (33) equations of (21); one can then change the variable t into u and integrate. The difference of the (11) and (33) equations of (21) then gives a second-order linear differential equation for ω²(u). Its solution, together with equations (22, 25), gives the distance¹ and Weyl form as functions of the new "proper time" u. Finally, the conformal scalar curvature is a constant. Under the conformal transformation g̃ = [(Γ_3)² + u²] g/2, the metric may be rewritten in the standard form (17) with ω̃_1 = ω̃_2 = Ω √((Γ_3)² + u²), ω̃_3 = −n_33 u + k, with the corresponding "proper time" t̃,
ensuring that the metric g̃ is Kähler. We have thus proved Theorem 1: The most general (non-)compact non-exact Einstein-Weyl structure with a diagonal Bianchi VII₀, VIII or IX metric is conformal to a Kähler four-parameter one. In particular, in the Bianchi IX case, the Kähler metric is the one found in [GB, equ. (27)].
In the following Section, we shall consider non-diagonal Bianchi metrics², but still restrict ourselves to Class A ones, where the particular choice of Gauduchon's gauge (11) will be of great help.
¹ Of course, the 4 parameters k, Γ_3, λ_1, λ_2 and the "time" variable u are constrained by positivity.
² When γ = 0 (Einstein equations), and for Bianchi VIII and IX metrics, it was shown in [8] that, thanks to (15), the looked-for Einstein metrics may be chosen to be diagonal. I thank Paul Tod for a clarifying discussion on that assertion.
(Non-)compact Einstein-Weyl structures with class A Bianchi metrics.
We first prove Lemma 2: in the special gauge γ_0 = 0, Einstein-Weyl structures with a Class A Bianchi metric have a constant conformal scalar curvature S^D.
Acting with ∇^µ on the Einstein-Weyl constraint (12) in the Gauduchon gauge, and using the four-dimensional Bianchi identity and the constant value of S^D ≡ R^(∇) − (3/2) γ_µ γ^µ, one gets equation (34). Note that in the compact case, contraction of this identity with γ^ν, followed by an integration over the manifold, leads to the vanishing of ∇_(µ γ_ν) [11].
• Considering the ν = 0 component of the previous equation, the expression of ∇_(µ γ_ν) given by (9), and formula (41) of the appendix, we obtain a first relation, (35), valid for any class A Bianchi metric. • In the same way, considering the ν = α component of equation (34) and multiplying by σ^α_i gives, after some manipulations [using the expression of ∇_α σ^i_β given in the appendix (40)], equation (36), where the 3×3 symmetric matrix X_ij[h_mn] may be expressed for a Class A Bianchi metric as in (38). The contraction of (36) by γ^i and the use of (35) finally give: Lemma 4: For any Class A Bianchi metric h_ij such that (γ, h) is an Einstein-Weyl structure, the Weyl form γ may be written in our particular Gauduchon's gauge as γ = Γ_i h_ij(T) σ^j, where the Γ_i are constant parameters. Indeed, at any given time T one can find coordinates σ̃^i such that h̃_ij is a diagonal matrix, the structure constants being unchanged. The matrix X̃ is then diagonal too, with elements given by an explicit expression and its circular permutations. Since h̃_ij is a strictly positive definite matrix, we get the vanishing of dγ̃_i/dT and hence, at that time, of dγ_j/dT in any coordinate frame; the same result then holds at any proper time. Q.E.D.
We are now in position to discuss the issue of the diagonal hypothesis for the metric h_ij(T). In the Einstein-equation analysis, as explained by Tod [8], condition (15) (with γ_i = 0) ensures, at least for the Bianchi IX and VIII cases³, a possible simultaneous diagonalisation of the matrices h_ij and dh_ij/dT or K_ij at T_0, with no change of the structure constants n_ij. Here⁴, let us start from a proper time T_0 such that h_ij (and n_ij) is diagonal. By inspection of the possible values of n_ii, equation (38) ensures that the values of the constants Γ_i fall into one of three cases: • all zero: in particular, this is the sole solution in the Bianchi VI₀ case. In such a situation, there exists no non-exact Einstein-Weyl structure, and Tod's argument ensures that for the Bianchi VI₀, VII₀, VIII and IX cases, there is no loss of generality in the choice of a diagonal metric h_ij(T).
• at most one of them vanishes: this may happen only in the Bianchi IX case with h_ij(T_0) = h_0 δ_ij. Then a simultaneous diagonalisation of the matrix dh_ij/dT or K̃_ij at T_0 is possible, and (38), at T = T_0 + ε, enforces the equality of the K̃_ii at T_0. So, at that time, the matrices ñ, h̃, K̃ are proportional to the 3×3 unit matrix. Then one can find new coordinates where dK/dT is also diagonal, which ensures that the matrices h and K stay in a diagonal form. But equation (16), where the term γ̃_i γ̃_j is not in a diagonal form, contradicts the hypothesis of at most one of the Γ_i vanishing.
• one of them subsists: this happens for Bianchi II, but (15) enforces no further constraint on the metric and it seems hard to prove that the metric will stay in a diagonal form; it also occurs in the Bianchi VII₀, VIII and IX cases, when at that time one of the X̃_ii given previously vanishes, say X̃_33. For these three cases, and for a non-exact Einstein-Weyl structure, condition (15) ensures that at T_0 the matrices h̃_ij, ñ_ij and dh̃_ij/dT have a particular block-diagonal structure; as a consequence, they may be simultaneously diagonalised at T_0. So h̃_ij and K_ij (thanks to equ. (16)) stay diagonal, and we have proved that the constraints resulting from the Einstein-Weyl equations for Bianchi IX, VIII and VII₀ in the non-diagonal case are the same as the ones in the diagonal situation.
We can summarize this discussion in a theorem. Theorem 2: (Non-)compact Einstein-Weyl Bianchi metrics of types VI₀, VII₀, VIII and IX are conformally Kähler or conformally Einstein, and the metric may be taken in a diagonal form. In the non-exact Einstein-Weyl case, the metric and Weyl form were given in equ. (27). The conformal scalar curvature has a constant sign on the manifold and, in our particular Gauduchon's gauge, the dual of the Weyl form is a Killing vector.
Theorem 1 then gives the following corollary.
Concluding remarks
In this paper, we have presented a (nearly) complete analysis of the Einstein-Weyl structures (g, γ) corresponding to Class A Bianchi metrics. We have shown that, also in the non-compact case, there exists a conformal gauge in which the conformal scalar curvature is a constant, and we have proved that types VI₀, VII₀, VIII and IX, diagonal or not, are conformally Kähler or conformally Einstein. We have explained why, in these cases, one can restrict oneself to diagonal metrics. Moreover, in the non-exact Einstein-Weyl cases, the explicit expression for the distance and Weyl 1-form, depending on 4 parameters subject to some positivity requirements, has also been obtained in subsection 2.4.
The further requirements of completeness and compactness will restrict the parameters of our solutions: in particular, Bianchi VI₀, VII₀ and VIII metrics cannot give compact metrics, their isometry groups being non-compact. We shall give elsewhere the full family of compact Bianchi IX Einstein-Weyl metrics, which, as we have proven here, are conformally Kähler [16].
Due to positivity requirements on Ω², solutions exist only in the Bianchi IX case; they were given in [GB, Corollary 3] [2,13].
Then, the covariant derivatives of the three basis vectors σ^i_α are found to be as in (40). The expression K^j_i σ^β_j ∇_β σ^i_α = C^i_jk K^j_i σ^k_α will be useful, as well as (41). The ν = 0 component of the Bianchi identity, 2∇^µ R^(∇)_µν = ∇_ν R^(∇), is split according to µ = (0, α). Using (6, 40), one obtains an intermediate identity and, as a consequence, equation (8).
"year": 1998,
"sha1": "394b13aca989ed19c38e0f50d85c9d35318456cf",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/gr-qc/9803009",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "394b13aca989ed19c38e0f50d85c9d35318456cf",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
102347668 | pes2o/s2orc | v3-fos-license | Quantal Release of Dopamine and Action Potential Firing Detected in Midbrain Neurons by Multifunctional Diamond-Based Microarrays
Micro-Graphitic Single Crystal Diamond Multi Electrode Arrays (μG-SCD-MEAs) have so far been used as amperometric sensors to detect catecholamines from chromaffin cells and adrenal gland slices. Besides having time resolution and sensitivity that are comparable with carbon fiber electrodes, that represent the gold standard for amperometry, μG-SCD-MEAs also have the advantages of simultaneous multisite detection, high biocompatibility and implementation of amperometric/potentiometric protocols, aimed at monitoring exocytotic events and neuronal excitability. In order to adapt diamond technology to record neuronal activity, the μG-SCD-MEAs in this work have been interfaced with cultured midbrain neurons to detect electrical activity as well as quantal release of dopamine (DA). μG-SCD-MEAs are based on graphitic sensing electrodes that are embedded into the diamond matrix and are fabricated using MeV ion beam lithography. Two geometries have been adopted, with 4 × 4 and 8 × 8 microelectrodes (20 μm × 3.5 μm exposed area, 200 μm spacing). In the amperometric configuration, the 4 × 4 μG-SCD-MEAs resolved quantal exocytosis from midbrain dopaminergic neurons. KCl-stimulated DA release occurred as amperometric spikes of 15 pA amplitude and 0.5 ms half-width, at a mean frequency of 0.4 Hz. When used as potentiometric multiarrays, the 8 × 8 μG-SCD-MEAs detected the spontaneous firing activity of midbrain neurons. Extracellularly recorded action potentials (APs) had mean amplitude of ∼-50 μV and occurred at a mean firing frequency of 0.7 Hz in 67% of neurons, while the remaining fired at 6.8 Hz. Comparable findings were observed using conventional MEAs (0.9 and 6.4 Hz, respectively). To test the reliability of potentiometric recordings with μG-SCD-MEAs, the D2-autoreceptor modulation of firing was investigated by applying levodopa (L-DOPA, 20 μM), and comparing μG-SCD-MEAs, conventional MEAs and current-clamp recordings. 
In all cases, L-DOPA reduced the spontaneous spiking activity in most neurons by 70%, while the D2-antagonist sulpiride reversed this effect. Cell firing inhibition was generally associated with increased APs amplitude. A minority of neurons was either insensitive to, or potentiated by L-DOPA, suggesting that AP recordings originate from different midbrain neuronal subpopulations and reveal different modulatory pathways. Our data demonstrate, for the first time, that μG-SCD-MEAs are multi-functional biosensors suitable to resolve real-time DA release and AP firing in in vitro neuronal networks.
Ion-Induced Damage Simulation
The ion-induced damage density profiles were obtained using a numerical simulation performed with the "Stopping and Range of Ion in Matter" SRIM-2013.00 Monte Carlo code (Ziegler et al., 2010) in "Detailed calculation with full damage cascades" mode, by setting a displacement energy value of 50 eV (Wu and Fahy, 1994). The output of the simulation process (i.e., the number of vacancies created per unit of depth and ion, in #vac #ions −1 µm −1 units) was then multiplied by the ion fluence, assuming the absence of non-linear effects such as damage saturation or formation of defect complexes.
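As a rough illustration (our sketch, not the authors' processing code), the conversion from the SRIM per-ion damage profile to an absolute vacancy-density profile amounts to a single multiplication by the fluence, valid only under the stated assumption of linear damage accumulation (no saturation, no defect complexes):

```python
# Hypothetical numbers for illustration only. A SRIM "full cascades" run
# reports vacancies per ion per micrometre of depth; multiplying by the
# ion fluence (ions per cm^2) and dividing by the bin thickness in cm
# gives an absolute vacancy density per cm^3.

def vacancy_density(vac_per_ion_um, fluence_cm2):
    """Convert a SRIM damage profile (#vac / ion / um) to vacancies per cm^3.

    1 um = 1e-4 cm, so (#vac / ion / um) * (ions / cm^2) / (1e-4 cm)
    yields vacancies / cm^3, assuming linear damage accumulation.
    """
    return [v * fluence_cm2 / 1e-4 for v in vac_per_ion_um]

# Toy depth profile peaking at the Bragg peak near the ion end of range.
profile = [0.02, 0.05, 0.30, 1.50, 0.10]   # vac / ion / um at 5 depth bins
fluence = 1e16                              # ions / cm^2 (hypothetical)
density = vacancy_density(profile, fluence)
# Graphitization would then require density[i] to exceed a critical threshold.
```

The graphitization criterion described in the paper would then be a simple per-bin comparison of `density` against the critical vacancy density.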
Cell Cultures
The method for the primary culture of mesencephalic dopamine neurons from the substantia nigra (SN) was adapted from Pruszak et al. (2009). The ventral mesencephalon area was dissected from embryonic (E15) C57BL6 TH-GFP mice (Sawamoto et al., 2001; Matsushita et al., 2002). TH-GFP mice were kept heterozygous by breeding TH-GFP mice with C57BL/6J mice. All animals were housed under a 12-h light/dark cycle in an environmentally controlled room with food and water ad libitum. All experiments were conducted in accordance with the European Community's Council Directive 2010/63/UE and approved by the Italian Ministry of Health and the local organism responsible for animal welfare at the University of Turin (Authorization DGSAF 0011710-P-26/07/2017). HBSS (Hank's balanced salt solution, without CaCl₂ and MgCl₂), enriched with 0.18% glucose, 1% BSA, 60% papain (Worthington, Lakewood, NJ, United States) and 20% DNase (Sigma-Aldrich), was stored at 4 °C and used as digestion buffer. Neurons were plated at final densities of 600 cells mm⁻² on petri dishes, or 2000 cells mm⁻² on conventional MEAs or µG-SCD-MEAs. Cultured neurons were used at 7 DIV for current-clamp experiments and at 14 DIV with MEAs or µG-SCD-MEAs. Petri dishes, as well as MEAs and µG-SCD-MEAs, were coated with poly-L-lysine (0.1 mg ml⁻¹) as an adhesion substrate. Cells were incubated at 37 °C in a 5% CO₂ atmosphere, with Neurobasal Medium containing 1% pen-strep, 1% ultra-glutamine, 2% B-27 and 2.5% FBS; pH 7.4.
Potentiometric recordings with MEAs and µG-SCD-MEAs were performed while keeping the cells under a controlled CO₂-enriched atmosphere and at a stable temperature.
Amperometric Recordings
Amperometric recordings were performed by means of µG-SCD-MEAs (4 × 4 channels geometry) and dedicated electronics, which were designed at the Institute of Electron Devices and Circuits (Ulm University). The whole electronic chain was inserted into a Faraday cage to minimize noise. The chip carrier was directly plugged-in to the front-end electronics connected to a data acquisition unit (National Instruments USB-6216). The circuit was grounded by means of a reference Ag/AgCl electrode, which was immersed in the extracellular solution. Amperometry was performed by holding the 16 electrodes at a constant potential of +0.65 V relative to the Ag-AgCl reference electrode.
The acquisition electronics consisted of low-noise transimpedance amplifiers with an input bias current of 1 pA and a gain, set by feedback resistors, of 100 MΩ. The amplified signals were filtered at 4 kHz with 4th-order Bessel low-pass filters and were subsequently acquired at a sampling rate of 25 kHz per channel. The National Instruments DAQ interface was connected to a computer via high-speed USB. We used data acquisition control software that was developed in LabView. The noise level was evaluated in spike-free trace segments and then averaged over the 16 electrodes, leading to a mean amplitude of 5.5 ± 0.7 pA, with a mean signal-to-noise ratio (S/N) of ∼3. Spike analysis was performed using the "Quanta Analysis" routine (Mosharov and Sulzer, 2005) in the Igor Pro 5.00 data analysis software by WaveMetrics. No change in current output was observed when electrode polarization was lowered below 50 mV.
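The acquisition chain described above implies a simple conversion from the digitized amplifier output back to input current; the following sketch (an illustration under the stated parameters, not the published analysis routine) applies the 100 MΩ transimpedance gain and estimates the signal-to-noise ratio from a spike-free baseline segment:

```python
import statistics

GAIN_OHM = 100e6  # transimpedance gain set by the feedback resistor (100 MOhm)

def to_current_pA(voltage_V):
    """Amplifier output voltage (V) -> input current (pA), using I = V / R_f."""
    return [v / GAIN_OHM * 1e12 for v in voltage_V]

def snr(peak_pA, baseline_pA):
    """S/N estimate: spike peak amplitude over the baseline standard deviation,
    computed on a spike-free trace segment as described in the text."""
    return abs(peak_pA) / statistics.stdev(baseline_pA)

# Example: a 1.5 mV output step corresponds to a 15 pA amperometric spike.
current = to_current_pA([1.5e-3])
```

With the quoted 5.5 pA noise amplitude, a typical 15 pA spike would give an S/N near 3, matching the figure reported above.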
Potentiometric Recordings Using µG-SCD-MEAs and MEAs
Potentiometric recordings were performed while the cells were kept in their culture medium. Recordings took place inside a dedicated incubator, at controlled temperature and under a 5% CO₂ atmosphere. An MCS MEA 1060-Inv-BC amplifier from Multi Channel Systems (Reutlingen, Germany) was used as the readout unit for both µG-SCD-MEAs (8 × 8 geometry) and conventional MEAs (60MEA200/300iR-Ti).
Data acquisition was controlled using MC_Rack software. The threshold for spike detection was set at −30 µV and the sampling frequency at 10 kHz. Data were then analyzed using Clampfit software (Molecular Devices, Silicon Valley, CA, United States).
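A minimal sketch of the detection step (assuming the −30 µV threshold and 10 kHz sampling stated above; the actual detection was done in MC_Rack) counts downward threshold crossings and converts them to a firing rate:

```python
def detect_spikes(trace_uV, threshold_uV=-30.0, fs_hz=10_000):
    """Return spike times (s) at downward threshold crossings.

    A spike is counted when the trace crosses the threshold from above,
    mirroring the -30 uV detection threshold used in the recordings.
    """
    times = []
    for i in range(1, len(trace_uV)):
        if trace_uV[i] <= threshold_uV < trace_uV[i - 1]:
            times.append(i / fs_hz)
    return times

def firing_rate_hz(spike_times, duration_s):
    """Mean firing frequency over the recording duration."""
    return len(spike_times) / duration_s

# A flat 10 ms trace with two brief downward deflections -> two spikes.
trace = [0.0] * 100
trace[20] = -55.0
trace[70] = -60.0
spikes = detect_spikes(trace)
rate = firing_rate_hz(spikes, len(trace) / 10_000)
```

Real spike sorting would add a refractory period and waveform checks; this only illustrates the thresholding and rate arithmetic.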
Current Clamp Recordings
Patch-clamp experiments were performed using pClamp software (Molecular Devices, Silicon Valley, CA, United States). All experiments were performed at a temperature of 22-24 °C. Data analysis was performed using Clampfit software.
Fluorescence Images
Images were acquired using a Zeiss Primovert microscope at 40× (Carl Zeiss, LLC, United States). In the fluorescence configuration, samples were excited with visible light at a characteristic wavelength λ_ex = 470 nm, and the emission wavelength was λ_em = 505 nm, which is typical of GFP staining.
INTRODUCTION
Dopamine (DA) plays fundamental roles in a variety of neurophysiological functions and neurological diseases. Dopaminergic microcircuits are involved in movement, reward, memory and cognition (Waelti et al., 2001), while the degeneration of the nigrostriatal pathway in Parkinson's disease (PD) impairs control and planning of movement, causing tremors and postural instability.
Fluctuations of DA concentration occur on the second to sub-second time scale, making them suitable for study with carbon fiber electrodes (CFEs) and fast-scan cyclic voltammetry (Hafizi et al., 1990; Kawagoe and Wightman, 1994; Heien and Wightman, 2006; Patel and Rice, 2013). However, in vivo electrochemical detection of DA is hampered by electrode fouling, caused by accumulation of oxidized products, and by interference from ascorbic and uric acid, ultimately limiting electrode sensitivity and selectivity (Suzuki et al., 2007).
Since early synaptic dysfunction is a key target when attempting to understand the molecular mechanisms that lead to neurodegenerative processes, the development of multifunctional sensing tools for simultaneously monitoring neurotransmitter release and electrical activity is extremely relevant for addressing key aspects of neurotransmission in the early stages of neurodegenerative diseases (Suzuki et al., 2013; Schirinzi et al., 2016; Castagnola et al., 2018; Ghiglieri et al., 2018; Picconi et al., 2018).
In this regard, conventional multielectrode arrays (MEAs) have been employed to investigate firing properties in SN pars compacta slices (Berretta et al., 2010), while amperometric detection of DA release from cultured neurons was initially performed using CFEs (Pothos et al., 1998; Pothos, 2002; Staal et al., 2004; Mosharov and Sulzer, 2005). A range of different amperometric microarrays have been designed to detect DA release from PC12 cells (Chen et al., 1994; Lin et al., 2012; Trouillon and Ewing, 2014), striatal slices (Suzuki et al., 2013) and isolated dopaminergic somas from the pond snail Lymnaea stagnalis. Dopamine release from striatal slices has been detected by carbon nanotube multielectrode arrays, and the same device could successfully detect APs and field postsynaptic potentials from cultured hippocampal neurons and slices (Suzuki et al., 2013). In spite of this, to the best of our knowledge, no data have been reported concerning the detection of quantal release and electrical activity from the same cultured neurons using the same multiarray prototypes. Micro-graphitic single-crystal diamond multielectrode arrays (µG-SCD-MEAs) are a powerful sensor for investigating neurosecretion in living cells (Picollo et al., 2013, 2015b). Previous findings have demonstrated their ability to monitor spontaneous and evoked quantal catecholamine release from cultured mouse and bovine adrenal chromaffin cells (Picollo et al., 2016b) as well as from fresh mouse adrenal slices (Picollo et al., 2016a; Carabelli et al., 2017). Besides providing simultaneous recordings from a variety of cells, which can be plated and cultured on the planar array for several days, µG-SCD-MEAs possess high time resolution and sensitivity for the detection of amperometric events with different shapes, such as the small-amplitude, previously identified "stand-alone-foot" events (Picollo et al., 2016a).
Taking advantage of diamond biocompatibility (Bonnauron et al., 2008;Nistor et al., 2015;Piret et al., 2015;Alcaide et al., 2016), we have succeeded in culturing primary midbrain neurons on µG-SCD-MEAs. In the present work, we have provided the first evidence that µG-SCD-MEAs can detect the quantal exocytosis of neuronal synaptic vesicles as well as spontaneous neuronal firing activity.
Fabrication of µG-SCD-MEAs
The electrochemical sensors that have been used in the present work consist of multi electrode arrays with either 16 or 60 graphitic electrodes that have been embedded into an artificial single-crystal diamond substrate. The two devices combine the properties of diamond, including: (1) biocompatibility, guaranteeing the plating and maintenance of primary cultures for weeks (Tang et al., 1995;Nistor et al., 2015); (2) chemical inertness, which prevents modifications to the employed solutions; and (3) wide optical transparency.
Conventional fabrication schemes cannot be used for the assembly of these sensors due to the extreme chemical/physical characteristics of diamond, meaning that an advanced MeV-ion-beam-implantation-based process was used (Olivero et al., 2010; Picollo et al., 2015b). This fabrication technique allows the selective phase transition from diamond to graphite to be promoted by taking advantage of the metastable nature of the substrate. Indeed, if the density of ion-beam-induced defects (commonly parametrized in terms of vacancy density) overcomes a critical threshold, the graphitization of the damaged region is obtained upon high-temperature thermal treatment. Moreover, the fact that the defects created by irradiation with MeV ions follow a typical distribution, which is characterized by the so-called "Bragg peak" and the ion end of range (Figure 1A), means the position of the graphitic electrodes can be modulated along the substrate depth, thus guaranteeing intrinsic electrical passivation due to the presence of the diamond cap layer. Only the electrode end-points emerge to the surface, thus allowing interfacing with the front-end electronics in the peripheral region of the sensor and cell coupling in the central region.
Figures 1B,C show the sensor and a magnification of the graphitic electrodes, which were arranged in a 4 × 4 matrix, while Figures 1D,E show analogous representations for the device with an 8 × 8 matrix of electrodes (without the four electrodes on the corners). In both cases, active areas were regularly spaced with a ∼200 µm step.
Real-Time Detection of Quantal DA Release From Midbrain Neurons Using µG-SCD-MEAs
DA release from cultured midbrain neurons can occur either at the somato-dendritic or at the axon-terminal level (Rice and Patel, 2015). In this work, we have cultured midbrain neurons for 20 DIV on µG-SCD-MEAs and found that quantal exocytotic events can be detected after 10 DIV. Under our experimental conditions, the density of cell plating on the multiarray (see section "Materials and Methods") allowed us to reveal amperometric signals from approximately 25-30% of the electrodes. Unstimulated (i.e., spontaneous) release was barely detectable (5% of trials) and occurred at low frequency (0.11 ± 0.07 Hz) in 2 mM CaCl₂ (Figure 2A). Amperometric spikes were characterized by a mean maximum current amplitude (I_max) of 13.2 ± 1.0 pA and a half-time width (t_half) of 0.57 ± 0.03 ms (n = 5). Stimulation with 30 mM KCl (Figures 2A-C) increased the release frequency to 0.40 ± 0.03 Hz; however, spike parameters were unaffected: I_max was 18.5 ± 1.1 pA and t_half was 0.52 ± 0.01 ms (n = 13 cells, from 4 µG-SCD-MEAs). On the other hand, 200 µM CdCl₂ suppressed Ca²⁺-dependent exocytosis through voltage-gated Ca²⁺ channels, as shown in Figure 2A. No events were detected when the recording electrodes were polarized to 0 mV to nullify dopamine detection (bottom trace). Representative recordings of simultaneous acquisition from five different electrodes of the same µG-SCD-MEA are shown in Figure 2B: multiple events, such as the one visible in the first trace, were discarded from the analysis. Some representative spikes recorded in the presence of KCl are shown on a more expanded time scale in Figure 2C.
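The spike parameters quoted here (I_max and t_half) can be extracted from a baseline-subtracted event roughly as follows (a simplified sketch; the paper used the "Quanta Analysis" IGOR routine, and a real analysis would interpolate between samples rather than count them):

```python
def spike_parameters(spike_pA, fs_hz=25_000):
    """Return (I_max in pA, t_half in ms) for one baseline-subtracted spike.

    I_max is the peak current; t_half is the time the spike spends above
    half of its peak amplitude, estimated by counting samples at the
    25 kHz amperometric sampling rate (no sub-sample interpolation).
    """
    i_max = max(spike_pA)
    half = i_max / 2.0
    above = sum(1 for s in spike_pA if s > half)
    t_half_ms = above / fs_hz * 1e3
    return i_max, t_half_ms

# A toy triangular spike sampled at 25 kHz (values in pA).
spike = [0, 3, 6, 9, 12, 15, 12, 9, 6, 3, 0]
imax, thalf = spike_parameters(spike)
```

Sample counting quantizes t_half in 0.04 ms steps at 25 kHz, which is adequate for the ∼0.5 ms half-widths reported above.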
Detection of Spontaneous AP Firing From Cultured Midbrain Neurons
After assessing the sensitivity of µG-SCD-MEAs to reveal quantal DA release in dopaminergic neurons, we tested if these sensors were able to measure the electrical activity of cultured midbrain neurons. For this purpose, µG-SCD-MEAs were patterned with a higher electrode number (8 × 8 array) than those used for amperometry (4 × 4 array). Recordings were performed in parallel using µG-SCD-MEAs and conventional MEAs [Multi Channel Systems (MCS)], for a more rigorous interpretation of acquired data. As has already been observed in cultured hippocampal neurons (Gavello et al., 2012; Allio et al., 2015) and other brain regions (Martinoia et al., 2005), mesencephalic DA neurons start generating spontaneous APs after 7 DIV (Henderson et al., 2016), while network functionality was well-resolved at 14 DIV.
Representative recordings of spontaneously firing midbrain neurons, measured using µG-SCD-MEAs and conventional MEAs, are shown in Figure 3A. This spontaneous spiking activity occurred under physiological conditions (2 mM Ca 2+ ) and was suppressed by blocking the firing during the exogenous application of 300 nM TTX (data not shown). Unlike amperometric spikes, which exhibit monopolar waveforms, single APs (Figure 3B) were characterized by a fast downward deflection (negative peak), which corresponds to the AP rising phase, followed by an upward deflection (positive antipeak), which is associated with the AP repolarising phase (Fromherz, 1999;Marcantoni et al., 2007). The mean amplitude of the negative peaks recorded by µG-SCD-MEAs (n = 10) was −50.2 ± 3.6 µV, with S/N of ∼4, while the mean signal amplitude was equal to −54.0 ± 4.7 µV, with S/N of ∼5, for conventional MEAs (n = 10). The amplitude of the positive antipeak, when detectable, was approximately 30% of the negative peak amplitude. For instance, for the channels indicated by the asterisks in Figure 3A, the mean positive antipeak amplitudes were 25.4 ± 0.4 and 26.0 ± 0.4 µV (with µG-SCD-MEAs and MEAs, respectively), while the mean peak amplitudes were −60.1 ± 0.6 and −67.1 ± 0.5 µV (with µG-SCD-MEAs and MEAs, respectively). Since the positive antipeak was not detectable in all neurons, we limited our analysis to the negative peak, in good agreement with our previous observations (Vandael et al., 2010).
Firing frequencies ranged between 0.1 and 15 Hz (Figure 3C), which is consistent with the presence of distinct neuronal populations within the network (Berretta et al., 2010;Ramayya et al., 2014). Most neurons (67%) were spontaneously active, fired with a basal frequency that did not exceed 4 Hz, and had a mean firing frequency of 0.66 ± 0.14 or 0.90 ± 0.10 Hz, when measured with µG-SCD-MEAs and conventional MEAs (p > 0.1), respectively. The remaining neurons had a much higher basal firing frequency, ranging between 4 and 11 Hz and characterized by mean values of 6.8 ± 1.4 and 6.4 ± 0.5 Hz, when measured with µG-SCD-MEAs and MEAs (p > 0.1), respectively. This heterogeneity of responses can, most likely, be ascribed to the presence of distinct neuronal populations such as DA neurons from SN, GABAergic and DA neurons from the nearby ventral tegmental area (Berretta et al., 2010;Cucchiaroni et al., 2011). Optical images of GFP-TH + neurons that were cultured on µG-SCD-MEAs are provided in Figure 3D.
Despite the above-mentioned heterogeneity, these recordings are the first experimental evidence that µG-SCD-MEAs are suitable for potentiometric recordings from primary cultures of brain neurons.
D 2 -Autoreceptor Induced Inhibition of Repetitive Firing in Current-Clamp Recordings
The firing of nigral dopaminergic neurons is down-regulated by DA release through a D 2 -autoreceptor mediated pathway (Aghajanian and Bunney, 1977;Mercuri et al., 1990). Since this down-regulatory pathway has been observed in midbrain slices and we were recording from primary cultured midbrain dissociated neurons (Lacey et al., 1987;Guatteo et al., 2013), we aimed to identify this inhibitory down-regulation under our experimental conditions first (Bigornia et al., 1990). Preliminary experiments were performed in whole-cell current-clamp configuration, by applying the DA precursor levodopa (L-DOPA) (20 mM). Recordings were selectively performed on 7 DIV dopaminergic neurons that were identified by means of GFP staining (Figure 3D). Although the responses to applied L-DOPA varied, it caused a 70 ± 4% reduction of the firing frequency in 80% of cases (n = 20 cells, from 1.36 ± 0.02 to 0.41 ± 0.11 Hz; Figures 4A,F). Maximum inhibition occurred within 2-5 min of L-DOPA perfusion, and was reversed some minutes after the application of the D 2 antagonist sulpiride (10 µM). The repetitive firing frequency measured in the presence of the D 2 antagonist recovered to 1.2 ± 0.2 Hz, thus confirming the autocrine inhibition that is induced by released DA (Guatteo et al., 2013). It is worth mentioning that the reduced firing frequency was associated, in 70% of the cases, with a membrane hyperpolarization of −7.8 ± 1.1 mV and with a sharp increase in AP peak amplitude (from 27 ± 3 to 35.6 ± 1.6 mV; n = 14, p < 0.05; Figures 4C,D). All this was most likely induced by the DA-mediated activation of a G-protein-coupled potassium channel (GIRK) (Lacey et al., 1987). Both effects were reversed after perfusion with sulpiride.
In the remaining 30% of neurons, the nearly threefold reduction of firing frequency occurred without causing either the significant hyperpolarization of the membrane potential, or alterations in the AP waveform (Figures 4F,G). For this subset of neurons, in some cases sulpiride restored the control firing frequency, even though the recovery was not always complete. This variability reveals the probable existence of distinct modulatory pathways that may originate from different midbrain neuron subpopulations (Dragicevic et al., 2015;Duda et al., 2016).
Phase plane plot analysis was performed in order to gain further insights into the AP properties and their modulation by L-DOPA (Vandael et al., 2012;Marcantoni et al., 2014). By plotting the time derivative of voltage (dV/dt) versus voltage, parameters such as the AP threshold can easily be inferred from the voltage value at which dV/dt suddenly increases. The phase-plane plots in Figure 4B refer to the same APs that are indicated by the asterisks in Figure 4A. From the plot we found that: (i) the maximum derivative (dV/dt max ), which is associated with the maximum current density through voltage-gated Na v channels, was drastically enhanced by L-DOPA (from 67 to 129 mV ms −1 , p < 0.05, Figure 4D), suggesting a sustained recruitment of Na v channels (Guarina et al., 2018); (ii) the AP hyperpolarization peak was significantly augmented by L-DOPA, from −37.2 ± 1.3 to −40 ± 2 mV (p < 0.05, Figure 4D); (iii) the AP threshold, measured from the phase-plane plot when an abrupt change in dV/dt was observed (at 4.5 ± 1.2 mV ms −1 for control and 6.4 ± 0.9 mV ms −1 for L-DOPA-treated neurons), decreased from −25.3 ± 1.8 to −31.9 ± 1.8 mV (p < 0.05), respectively. This again confirms a potentiated recruitment of Na v channels during L-DOPA treatment.
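The phase-plane procedure can be illustrated with a small numerical sketch. The trace and the helper functions below are hypothetical (not the authors' analysis code): the voltage trace is differentiated numerically, dV/dt is paired with V, and the threshold is read off where dV/dt first exceeds a fixed criterion, in the spirit of the 4.5-6.4 mV ms −1 values quoted above.

```python
# Illustrative sketch (hypothetical synthetic trace, not the authors' code):
# phase-plane analysis of an action potential. dV/dt is plotted against V,
# and the AP threshold is the voltage at which dV/dt first exceeds a
# fixed criterion (the paper quotes thresholds around 4.5-6.4 mV/ms).
import math

dt = 0.01  # ms per sample
# Synthetic AP-like voltage trace (mV): slow ramp, then a fast upstroke.
trace = [-60 + 0.5 * k * dt for k in range(1000)]           # 0.5 mV/ms ramp
trace += [trace[-1] + 8.0 * k * dt for k in range(1, 500)]  # 8 mV/ms upstroke

def phase_plane(v, dt):
    """Central-difference dV/dt (mV/ms) paired with V (mV)."""
    return [((v[k + 1] - v[k - 1]) / (2 * dt), v[k]) for k in range(1, len(v) - 1)]

def ap_threshold(v, dt, criterion=4.5):
    """Voltage at which dV/dt first exceeds `criterion` mV/ms, or None."""
    for dvdt, volt in phase_plane(v, dt):
        if dvdt > criterion:
            return volt
    return None

print(round(ap_threshold(trace, dt), 1))  # threshold near the ramp/upstroke junction
```

On real recordings the derivative would be computed on the sampled trace in the same way; the criterion value and the synthetic slopes here are assumptions chosen only to make the mechanics visible.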
In order to compare the AP waveform recorded intracellularly with those recorded extracellularly, the negative first derivatives of the AP traces shown in Figure 4C are reported in Figure 4E. They correspond to the AP shape recorded extracellularly by the MEAs (Fromherz et al., 1991), identified as a biphasic AP waveform in which, similarly to that of Figure 3B, a large negative peak and a small positive antipeak component can be distinguished.
Finally, a range of different effects on neuronal activity were detected in the neurons that were not inhibited by L-DOPA (20% of neurons). L-DOPA accelerated repetitive firing by 80 ± 20% (17% of neurons), while it was ineffective in the remaining ones (3% of neurons).
Heterogeneity of L-DOPA Induced Responses in Cultured Midbrain Neurons Observed Through µG-SCD-MEAs and Conventional MEAs
Potentiometric recordings using µG-SCD-MEAs were performed to simultaneously detect spikes arising from different neuronal populations and to investigate their responses to the applied drugs. Compared with the patch-clamp experiments, performed on isolated and young neurons (7 DIV), these trials were designed to provide a rapid screening of the effects of L-DOPA on mature networks (14 DIV). After the firing properties under control conditions were monitored for a couple of minutes, the addition of L-DOPA to the culture medium revealed three different responses, confirming the existence of heterogeneous firing, as measured in SN slices (Berretta et al., 2010). In most cases (70% of neurons), the firing activity was significantly reduced by L-DOPA and the inhibitory effect required some minutes for completion (Mercuri et al., 1990). As shown in a representative recording using µG-SCD-MEAs, the firing frequency was reduced by 80% after 2-3 min, and the extracellular AP peak increased from −75 ± 1 to −87 ± 3 µV, while sulpiride reversed both effects, suggesting that D 2 autoreceptors are involved (Figures 5A,C). On average L-DOPA decreased the spontaneous spiking activity from 1.1 to 0.3 Hz and increased the negative peak amplitude by 14% (n = 5 µG-SCD-MEAs, p < 0.05), suggesting a prominent recruitment of Na v channels following L-DOPA hyperpolarization. In order to validate these experimental findings, we repeated the same experiments using conventional MEAs. Once again, the majority of neurons (64%) responded to L-DOPA by reducing the mean firing frequency, on average from 1.5 ± 0.7 to 0.29 ± 0.09 Hz (p < 0.05, n = 10 MEAs; Figure 5B), while sulpiride restored the basal frequency to 1.1 ± 0.3 Hz (Figure 5C). In this subset of neurons, firing frequency reduction was also associated with a 20% increase in the negative peak amplitude, confirming the prominent role that D 2 -autoreceptors play in L-DOPA induced inhibition.
Nevertheless, a relevant fraction of neurons in the mature networks (30 and 36%, respectively for µG-SCD-MEAs and MEAs) also displayed a significant increase (up to sixfold) in spontaneous frequency and a 30% reduction in the negative peak amplitude following exposure to L-DOPA. This is in good agreement with the heterogeneity of responses that we observed in dissociated neurons under current-clamp conditions. In the example shown in Figure 6A for µG-SCD-MEAs, the negative peak amplitude decreased from −44.1 ± 1.2 to −34.2 ± 1.1 µV, while the firing frequency increased from 0.5 to 2.9 Hz (Yasumoto et al., 2004). Similarly, the potentiation of firing activity by L-DOPA occurred with a mean threefold increase in firing frequency when using conventional MEAs, and was usually accompanied by a 28% decrease in the negative peak amplitude (Figure 6B).
L-DOPA reduced the spiking activity without affecting the AP shape in the remaining cases. Examples of this modulation are shown in Figure 6C. The unaltered amplitude of the negative AP peak, revealed by both µG-SCD-MEAs and conventional MEAs (Figure 6D), confirms the findings obtained under patch-clamp conditions, in which 30% of neurons displayed a reduced firing frequency without alterations of the AP rising phase.
DISCUSSION
We have provided the first evidence that µG-SCD-MEAs allow long-term neuronal cultures to be performed and that they can function as sensing devices for recording quantal exocytosis and spontaneous AP firing.
Amperometric Detection of Quantal Dopamine Release
Amperometric recordings have proved that µG-SCD-MEAs are suitable for the real-time detection of exocytosis from neuronal networks. Under physiological conditions (2 mM Ca 2+ ), µG-SCD-MEAs can resolve spontaneous secretory events as amperometric spikes of <20 pA I max and mean quantity of charge Q of 0.01 pC, which can most likely be associated with the tonic discharge activity of the network (Sulzer et al., 2016). These exocytotic events are significantly smaller than those of released catecholamines from large dense-core vesicles of adrenal chromaffin cells, whose I max is in the order of tens of pA, and Q is > 1.5 pC, as has already been reported using the same µG-SCD-MEAs (Picollo et al., 2016b).
When using KCl as a secretagogue to increase the probability of release from DA neurons, a mean release of 3.7 × 10 4 DA molecules/spike was found. Comparable neurotransmitter content values (∼3 × 10 4 DA molecules) were estimated from axonal DA vesicles using CFEs (Sulzer et al., 2016). In other CFE experiments on postnatally derived midbrain neurons, performed at a 100 kHz sampling rate to discriminate between single-spike and flickering events, DA release values were estimated to be around 1 × 10 4 and 2.4 × 10 4 DA molecules, respectively for single-spike and flickering events (Staal et al., 2004). It is worth noting that estimates of quantal size can be affected by the different experimental configurations used, such as detection being performed from the cell apex using CFEs or from the cell bottom using µG-SCD-MEAs (Amatore et al., 2007).
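As a back-of-envelope check of such estimates, the number of molecules per spike follows from Faraday's law: dopamine oxidation at the electrode transfers two electrons per molecule, so N = Q/(2e). The short sketch below (our illustration, not the paper's analysis code; the function name is ours) applies this to the 0.01 pC mean charge quoted above.

```python
# Back-of-envelope check (not from the paper): each dopamine molecule is
# oxidised with the transfer of 2 electrons, so the number of molecules
# per amperometric spike is N = Q / (2 * e), with Q the spike charge.
E_CHARGE = 1.602176634e-19  # elementary charge, C

def molecules_from_charge(q_coulomb, n_electrons=2):
    return q_coulomb / (n_electrons * E_CHARGE)

# The mean spontaneous-event charge quoted in the text, Q = 0.01 pC:
n = molecules_from_charge(0.01e-12)
print(f"{n:.2e}")  # 3.12e+04 -> the same order as the reported ~3.7e4/spike
```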
Detection of Spontaneous Firing From Cultured Midbrain Neurons
Besides detecting the quantal release of DA, µG-SCD-MEAs can be exploited to measure the electrical activity of cultured midbrain neurons. The 8 × 8 µG-SCD-MEAs and the conventional MEAs can reveal APs only after 7 DIV in approximately 30% of the electrodes, due to the delayed maturation of the network after dissociation. However, the network activity becomes detectable in most (i.e., 70%) of the electrodes at a later stage of maturation (14 DIV). Spontaneous firing at this stage of maturation exhibits great variability of responses, even within the same µG-SCD-MEA or conventional MEA. Neuronal firing frequencies were scattered throughout a range of frequencies varying from 0.1 to 15 Hz ( Figure 3C) for both devices, which is in reasonably good agreement with previous reports of low frequency activity in isolated SN DA neurons (Bean, 2007;Margolis et al., 2010). A broad distribution of firing frequencies has also been observed in SN slices positioned onto conventional MEAs: the majority of neurons (∼94%) exhibit low firing frequencies (i.e., 1-3 Hz), whereas the remaining ones fire at higher frequencies (5-10 Hz) (Berretta et al., 2010). Different firing patterns have also been described for in vivo recordings, where SN DA neurons display both slow single-spike activity (1-10 Hz), and higher frequency discharges (∼13-20 Hz) (Dragicevic et al., 2015;Hage and Khaliq, 2015). A final consideration concerns the low firing rate (<1 Hz) that was recorded in the majority of neurons, and the hardly detectable DA release that occurred at 0.1 Hz. Both processes suggest the existence of a tonic discharge activity at rest, partially tuned by D 2 -autoreceptors (Al-Hasani et al., 2011).
Using µG-SCD-MEAs for Pharmacological Studies: The L-DOPA-Induced Down-Modulation of Spontaneous Firing
The neuronal discharge of SN DA neurons is inhibited by D 2 -autoreceptor-mediated GIRK activation and is prevented by the D 2 -antagonist sulpiride (Mercuri et al., 1990;Dragicevic et al., 2014). To assay the sensitivity of the µG-SCD-MEAs, we tested this inhibitory pathway in current-clamped TH-GFP neurons, as well as in mature midbrain DA neurons, cultured for 2 weeks on µG-SCD-MEAs (or conventional MEAs). This autocrine inhibition is induced by adding L-DOPA, which is converted to DA and then released from dopaminergic neurons.
As shown, L-DOPA caused a range of effects (Figure 4). In most current-clamped neurons, the response caused a firing frequency reduction together with a slow membrane hyperpolarization and an increased AP amplitude that was reverted by sulpiride. The increased AP amplitude and the rapid increase in the AP rising phase was revealed as an enhanced dV/dt peak amplitude in the phase-plane plot analysis of AP recordings. The same occurrence was detected using both the µG-SCD-MEAs and conventional MEAs (Figure 5). Both MEAs revealed the reduced firing frequency and the increased AP rising phase, which were reverted by sulpiride. It is worth noting that, in the case of MEAs, the increased AP rising phase in current-clamp recordings is converted to an increased peak amplitude in the extracellular APs.
From a physiological point of view, both measurements are in excellent agreement and suggest a sustained recruitment of Na V channels due to the increased cell hyperpolarization induced by GIRK K + channel activation. Sustained cell hyperpolarizations increase the rate of Na V channel recruitment from steady-state inactivation (Vandael et al., 2015;Guarina et al., 2018), although the recruitment of a different Na v channel isoform characterized by a lower threshold of activation cannot be excluded.
Concerning the opposing effect that was observed in a minority of neurons, in which L-DOPA increased the spiking activity (Figure 6), variable responses have also been described in Substantia Nigra pars compacta (SNc) neurons, using MEA recordings from midbrain slices (Berretta et al., 2010). In that case, neurons that fired at high rates (>5 Hz) were insensitive to DA, while low-firing neurons were either highly or weakly inhibited by DA. Furthermore, a fraction of low-rate spiking neurons were insensitive to DA, or excited by DA, and a minority of neurons were potentiated by L-DOPA. Under our experimental conditions, where midbrain neurons were cultured for weeks on the microarray, signal detection may occur from different DA subpopulations (Lammel et al., 2008;Liss and Roeper, 2010), either from non-DA neurons or from DA neurons of the nearby ventral tegmental area. Indeed, the excitatory effects of L-DOPA on nigral dopaminergic neurons have been previously described, and were featured as an "early" and a "late" phase of excitation (Guatteo et al., 2013).
Finally, regarding the fraction of neurons that were inhibited by L-DOPA and did not undergo relevant membrane potential hyperpolarization, several pathways may be responsible for this modulation, based on the involvement of K + channels other than GIRK (Yang et al., 2013), or D 1 -mediated signaling cascades (Surmeier et al., 2007).
CONCLUSION
Our data demonstrate that µG-SCD-MEAs are highly reliable as multi-functional sensing multiarrays for long-term recordings of neuronal activity under variable pharmacological conditions. With respect to conventional approaches, the real-time measurement of quantal exocytosis and neuronal firing makes the µG-SCD-MEA a promising biosensor for in vitro investigation of neuronal circuit properties as well as a valid tool for studying mistuned neurotransmission in neurodegenerative disorders.
AUTHOR CONTRIBUTIONS
GT performed the experiments and analyzed acquired data. FP fabricated the sensors, performed the experiments, and contributed to manuscript preparation. AB fabricated the sensors. BP contributed to critically editing the manuscript. SDM contributed to critically editing the manuscript. AP made the hardware and software of the 16-channel setup and revised the manuscript. PO contributed to the design of the diamond biosensor and to the preparation of the manuscript. AM contributed to experimental design and manuscript preparation. PC contributed to critically editing the manuscript. EC contributed to the interpretation of AP recordings and helped with a critical revision of the manuscript. VC contributed to planning the experimental design, manuscript writing, and overall revision.
FUNDING
This work was supported by the following projects: Project 2015FNWP34 (from Italian MIUR) to PC and VC and CSTO165284 (from Compagnia di San Paolo) to VC. DIACELL project (from National Institute of Nuclear Physics) to FP. MiRaDS project (from CRT Foundation), "Finanziamento ex-post di progetti di ricerca di Ateneo" (from CSP Foundation), "Departments of Excellence" (L. 232/2016) project (from Italian MIUR) to PO. Ion beam irradiation was performed at the AN2000 accelerator of the Legnaro National Laboratories of the Italian Institute of Nuclear Physics (INFN) within the "Dia.Fab." beamtime.
We develop flexible multi-parameter regression survival models for interval censored survival data arising in longitudinal prospective studies and longitudinal randomised controlled clinical trials. A multi-parameter Weibull regression survival model, which is wholly parametric, and has non-proportional hazards, is the main focus of the paper. We describe the basic model, develop the interval-censored likelihood and extend the model to include gamma frailty and a dispersion model. We evaluate the models by means of a simulation study and a detailed re-analysis of data from the Signal Tandmobiel® study. The results demonstrate that the multi-parameter regression model with frailty is computationally efficient and provides an excellent fit to the data.
Introduction
Interval censored survival data can arise in longitudinal epidemiological studies where the response variable Y (t) is binary. Typically, at baseline, t 0 , patients start in an initial state, e.g., for the ith patient Y i (t 0 ) = 0 (say) and later, as follow-up proceeds at scheduled inspection times, the event of interest may occur at a time t s whence Y i (t s ) = 1, where t s > t 0 . This leads naturally to the use of "time to event" survival modelling in order to determine the effect of selected risk factors measured at baseline on the time to the event of interest. The interval censoring arises because patients are not monitored continuously but rather at a finite schedule of m follow-up examinations at times t k , k = 1, . . ., m. Thus, if, for the ith subject, the event occurs between the (k − 1)th and the kth follow-up examination, the binary indicators become Y i (t s ) = 0 for s = 1, . . ., (k − 1) and Y i (t s ) = 1 for s = k, and the time to event then lies in the interval (t k−1 , t k ].

As a motivating study we consider the Tandmobiel study, a longitudinal prospective oral health study conducted in Flanders (Belgium) from 1996 to 2001. A cohort of 4430 randomly sampled school children who attended the first year of primary school at the beginning of the study were examined annually (6 times). The response was time to the emergence of the permanent upper left first premolars (tooth 24 in European dental notation). When emergence occurs between annual follow-ups the exact time of emergence is not known, and the time to emergence is said to be interval censored.
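The mapping from scheduled examinations to a censoring interval can be sketched in a few lines of Python. The helper below is our illustration (the function name and the annual schedule are assumptions, not from the paper): given the binary indicators Y i (t k ) at the examination times, it returns the interval (a, b] containing the event time, or a right-censored interval if the event is never observed.

```python
# Illustrative sketch: derive the censoring interval (a_i, b_i] from a
# subject's binary event indicators at scheduled examination times.
import math

def censoring_interval(times, indicators):
    """times: examination times t_1..t_m; indicators: Y_i(t_k) in {0, 1}.

    Returns (a, b) representing the half-open interval (a, b] containing
    the event time, or (t_m, inf) if the subject is right censored."""
    previous = 0.0  # t_0, the baseline examination
    for t, y in zip(times, indicators):
        if y == 1:
            return (previous, t)       # event occurred in (t_{k-1}, t_k]
        previous = t
    return (previous, math.inf)        # right censored at the last visit

# Tooth emerged between the 3rd and 4th annual examination:
print(censoring_interval([1, 2, 3, 4, 5, 6], [0, 0, 0, 1, 1, 1]))  # (3, 4)
```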
Modelling such data presents a variety of challenges. MacKenzie and Peng (2013) developed interval censoring methods for parametric models, some of which were non-PH, and they compared the use of standard right censored likelihoods based on midpoints with interval censored likelihoods. They showed that the use of mid-points led to artificially precise estimators in PH models when analysing time to loss of vision in a longitudinal trial of Age-Related Macular Degeneration (ARMD) (Hart, 2002). Survival data arising in longitudinal vision studies were analysed earlier by MacKenzie (1999) and later by Al-tawarah and MacKenzie (2002). See also Finkelstein (1986) for an important early paper in the field, Huang and Wellner (1997) for a more theoretical review, and the books by Sun (2006) and Bogaerts et al. (2017) for comprehensive treatments of the subject. Nearly all of these papers employ models where covariates enter through a single parameter. In this paper such models are designated as single parameter regression (SPR) models.
In contrast, the concept of multi-parameter regression (MPR) survival modelling was developed in Burke and MacKenzie (2017). In MPR survival models the scale and shape parameters are modelled simultaneously by means of two separate linear predictors: these models are parametric and intrinsically more flexible than classical proportional hazards (PH) survival models. In the 2017 paper, MPR models were investigated in the context of right censored survival data from the Northern Ireland Lung Cancer Study (Wilkinson, 1995).
In this paper, we extend MPR models for interval censored survival data arising in longitudinal studies and introduce an MPR model with gamma frailty and a dispersion model to re-analyse data from the Signal Tandmobiel study (Bogaerts et al., 2002;Gómez et al., 2009).
MPR modelling framework
We envisage the class of two-parameter parametric survival models supporting scale and shape parameters. Within that class we model the scale and shape parameters simultaneously by means of two separate linear predictors which may involve the same set, or different sets, of covariates. In this paper we focus on the Weibull MPR model in order to illustrate the MPR approach to analysing interval-censored data, with hazard function

λ(t) = λγt^(γ−1), t > 0, (2.1)

where the scale and shape parameters, λ > 0 and γ > 0, depend on covariates through log-links,

λ = exp(x T β), γ = exp(z T α), (2.2)

with x = (1, x 1 , . . ., x p ) T and z = (1, z 1 , . . ., z q ) T the scale and shape covariate vectors, and β and α their regression coefficients. This model has proved useful in other contexts and has the advantage of directly extending a standard proportional hazards model.
Frailty extension
We extend the Weibull MPR model of Burke and MacKenzie (2017) to incorporate a multiplicative frailty term via the conditional hazard

λ(t | u) = u λ(t),

where λ(t) is defined in (2.1), and u is the frailty term, which we will assume follows a gamma distribution with density

f(u) = b^a u^(a−1) exp(−bu) / Γ(a), u > 0,

where a = b = 1/φ, such that E(U) = 1 and the frailty variance is Var(U) = φ. This is the classical frailty model in which the random effect, u, measures additional person-specific heterogeneity not accounted for by the covariates (Vaupel et al., 1979). Since u is an unobserved variable, the marginal distribution, obtained by integrating over u, has survivor function given by

S m (t) = {1 + φΛ(t)}^(−1/φ),

where the subscript "m" indicates marginal, and Λ(t) = exp(x T β) t^exp(z T α) is the cumulative hazard function associated with (2.1). Note that, in the absence of frailty, i.e., φ → 0, we have that S m (t) → exp{−Λ(t)}, restoring the familiar (non-frailty) relationship between a survivor function and its cumulative hazard. See Hougaard (1995, 2000) and Duchateau and Janssen (2008) for more details on frailty models.
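The gamma-frailty marginal survivor function is easy to evaluate directly. The following sketch (our illustration, not code from the paper; the covariate and parameter values are hypothetical) computes S m (t) = {1 + φΛ(t)}^(−1/φ) for the Weibull MPR model and checks numerically that the φ → 0 limit recovers exp{−Λ(t)}.

```python
# Sketch (not the authors' code): marginal survivor function of the
# Weibull MPR model with gamma frailty, S_m(t) = (1 + phi*Lambda(t))^(-1/phi),
# where Lambda(t) = exp(x'beta) * t^exp(z'alpha).
import math

def cum_hazard(t, x, beta, z, alpha):
    lam = math.exp(sum(xj * bj for xj, bj in zip(x, beta)))   # scale
    gam = math.exp(sum(zj * aj for zj, aj in zip(z, alpha)))  # shape
    return lam * t ** gam

def marginal_survivor(t, x, beta, z, alpha, phi):
    return (1.0 + phi * cum_hazard(t, x, beta, z, alpha)) ** (-1.0 / phi)

x = z = [1.0]                 # intercept-only covariate vectors (hypothetical)
beta, alpha = [-1.0], [0.3]   # hypothetical coefficients
# As phi -> 0 the marginal survivor recovers exp(-Lambda(t)):
no_frailty = math.exp(-cum_hazard(2.0, x, beta, z, alpha))
small_phi = marginal_survivor(2.0, x, beta, z, alpha, phi=1e-8)
print(abs(small_phi - no_frailty) < 1e-6)  # True
```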
Dispersion model
The MPR frailty model can be extended further with advantage. Usually the frailty variance, φ, is a constant, but we can allow the frailty variance to be person-specific via another regression model, i.e.,

φ = exp(w T ψ), (2.4)

where w = (1, w 1 , . . ., w r ) T and ψ = (ψ 0 , ψ 1 , . . ., ψ r ) T are vectors of covariates and their coefficients respectively. This dispersion model (DM) allows one to investigate the structure of the dispersion and provides a convenient framework for testing the homogeneity of frailty variances among covariates (e.g., between sexes). In addition, when the frailty variance is unstructured, the model reduces to the Weibull MPR frailty model of Section 2.2. The concept of modelling the structure of the dispersion can be traced back to joint mean-dispersion modelling (Smyth, 1989;Lee and Nelder, 2001;Pan and MacKenzie, 2003), but its adoption in the frailty paradigm, in survival analysis, is more recent (Lynch and MacKenzie, 2014). Furthermore, the combination of frailty dispersion modelling with an underlying MPR model is novel in the literature. It will be apparent that the frailty dispersion model, which introduces a third regression (2.4), is an entirely natural development in the MPR paradigm.
Likelihood Functions
In most longitudinal studies the idea of a fixed schedule of follow-up examinations is too rigid, as many subjects fail to respect their exact re-examination appointment dates. Accordingly it is usual to allow the intervals to be person-specific such that t i ∈ I ik = (t i(k−1) , t ik ]. In general, t ik is close to the scheduled t k and of course t i0 = 0 ∀i. This notation, whilst accurate, is rather cumbersome and it is convenient to abbreviate it to I i = (a i , b i ] in the equations which follow. Then a general likelihood for interval-censored (IC) data is given by

L IC (θ) = ∏ i=1..n {S(a i ) − S(b i )}^δ i {S(t ci )}^(1−δ i ),

where a i < b i , ∀ i. Here δ i = 1 denotes an interval censored observation and δ i = 0 denotes a right censored observation with censoring time t ci . In this IC setting, the ith subject either "fails" in interval I i , or is right-censored. In total, there are n patients of whom n c are right censored or withdrawn at specific times, leaving n − n c patients who are interval-censored. Note that the interval-censored subjects play the same role as "failures" in the right-censored setting and often right-censoring occurs at times completely unrelated to the scheduled follow-up examinations, e.g., an early withdrawal from the study. Thus, this notational setup is advantageous when it is important to distinguish between interval-censored and right-censored observations. See MacKenzie and Peng (2013) for more details on this approach.
Here, however, we re-write the likelihood above as

L(θ) = ∏ i=1..n {S(a i ) − S(b i )},

for notational convenience, in which we define the ith right censored observation as lying in the interval (a i , b i ] = (t ci , ∞]. Accordingly, there are now n intervals, with b i = ∞, and hence S(b i ) = 0, for the right-censored observations. This representation is the most commonly occurring form in the IC literature and we use it below. The IC likelihood for the Weibull MPR DM model (i.e., the most general model of Section 2) is

L(θ) = ∏ i=1..n [{1 + φ i Λ(a i )}^(−1/φ i ) − {1 + φ i Λ(b i )}^(−1/φ i )],

where Λ(a i ) = λ i a i^γ i and Λ(b i ) = λ i b i^γ i are the cumulative hazard functions for the ith individual evaluated at the end-points of I i (where, for notational convenience, we avoid expressions such as Λ(a i ; x i , z i , w i ) and Λ(b i ; x i , z i , w i )), λ i = exp(x T i β) and γ i = exp(z T i α) from (2.2), φ i = exp(w T i ψ) from (2.4), and x i = (1, x 1i , . . ., x pi ) T , z i = (1, z 1i , . . ., z qi ) T , and w i = (1, w 1i , . . ., w ri ) T are the scale, shape, and dispersion covariate vectors respectively (which may or may not contain covariates in common).
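The interval-censored log-likelihood can be evaluated directly from the marginal survivor function. The sketch below is our illustration of this construction (hypothetical data and parameter values, not the authors' implementation): it computes Σ log{S m (a i ) − S m (b i )} with b i = ∞, and hence S m (b i ) = 0, for right-censored subjects.

```python
# Sketch of the interval-censored log-likelihood for a Weibull MPR
# gamma-frailty dispersion model: pi_i = S_m(a_i) - S_m(b_i), with
# b_i = inf (so S_m(b_i) = 0) for right-censored subjects.
import math

def _lp(cov, coef):
    return sum(c * b for c, b in zip(cov, coef))  # linear predictor

def surv_m(t, x, z, w, beta, alpha, psi):
    if math.isinf(t):
        return 0.0
    lam, gam, phi = (math.exp(_lp(x, beta)), math.exp(_lp(z, alpha)),
                     math.exp(_lp(w, psi)))
    return (1.0 + phi * lam * t ** gam) ** (-1.0 / phi)

def log_lik(intervals, X, Z, W, beta, alpha, psi):
    """intervals: list of (a_i, b_i]; X, Z, W: covariate rows per subject."""
    ll = 0.0
    for (a, b), x, z, w in zip(intervals, X, Z, W):
        ll += math.log(surv_m(a, x, z, w, beta, alpha, psi)
                       - surv_m(b, x, z, w, beta, alpha, psi))
    return ll

# Two interval-censored subjects and one right-censored subject,
# intercept-only covariates (hypothetical numbers):
data = [(1.0, 2.0), (2.0, 3.0), (4.0, math.inf)]
ones = [[1.0]] * 3
print(round(log_lik(data, ones, ones, ones, [-1.0], [0.2], [-0.5]), 4))
```

In practice this function would be passed to a numerical optimiser to obtain the maximum likelihood estimates.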
Score Functions
We now let

π i = S m (a i ) − S m (b i ),

so that the loglikelihood function can be written as ℓ(θ) = log L(θ) = ∑ i=1..n log π i . The score functions are then given by

U(β) = ∂ℓ/∂β = ∑ i=1..n (1/π i ) {S m (a i )^(1+φ i ) ω β (a i ) − S m (b i )^(1+φ i ) ω β (b i )} x i ,
U(α) = ∂ℓ/∂α = ∑ i=1..n (1/π i ) {S m (a i )^(1+φ i ) ω α (a i ) − S m (b i )^(1+φ i ) ω α (b i )} z i ,
U(ψ) = ∂ℓ/∂ψ = ∑ i=1..n (1/π i ) {S m (a i )^(1+φ i ) ω ψ (a i ) − S m (b i )^(1+φ i ) ω ψ (b i )} w i ,

all of which have a similar functional form, differing only with respect to the ω "weight" functions, which are given by

ω β (a i ) = −Λ(a i ), ω α (a i ) = −γ i Λ(a i ) log a i , ω ψ (a i ) = {1 + φ i Λ(a i )} log{1 + φ i Λ(a i )} / φ i − Λ(a i ),

and ω β (b i ), ω α (b i ), and ω ψ (b i ) analogously defined by replacing a i with b i .
Although, in the above, we intend that Λ(a i ) = λ i a i^γ i for the purpose of the current paper (i.e., that of a Weibull MPR model), we have written the above score functions in a generic form so that Λ(•) can be replaced by any cumulative hazard function. If the underlying MPR model had another (positive) shape parameter, say, ρ, modelled as ρ = exp(τ T v), then we would gain an additional score function, U(τ) = ∂ℓ/∂τ; this score function has the same structure as that of U(β) and U(α), but with a different ω. On the other hand, if the frailty distribution was changed, the S m (a i )^(1+φ i ) factor in all of the score functions would change (and not only through S m (•) changing), and, of course, the form of ω ψ would also change. Thus, although we focus on a Weibull-gamma frailty model, the above is easily adapted to a wide range of MPRDM models for IC data.
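The structure of these scores can be checked numerically. Under the gamma-frailty marginal survivor S m (t) = {1 + φΛ(t)}^(−1/φ), differentiation gives ∂S m (t)/∂β = −S m (t)^(1+φ) Λ(t) x, i.e., a β-weight of −Λ(t). The sketch below is our own verification of that identity (intercept-only model, hypothetical values; not code from the paper): the analytic score is compared with a central finite difference of the log-likelihood.

```python
# Numerical check (sketch, not the authors' code) that the analytic beta-score
# of an intercept-only Weibull-gamma-frailty model matches a finite-difference
# derivative of the interval-censored log-likelihood.
import math

def surv_m(t, beta0, alpha0, phi):
    if math.isinf(t):
        return 0.0
    lam, gam = math.exp(beta0), math.exp(alpha0)
    return (1.0 + phi * lam * t ** gam) ** (-1.0 / phi)

def loglik(beta0, alpha0, phi, intervals):
    return sum(math.log(surv_m(a, beta0, alpha0, phi)
                        - surv_m(b, beta0, alpha0, phi))
               for a, b in intervals)

def score_beta(beta0, alpha0, phi, intervals):
    """Analytic dl/dbeta0, using the weight omega_beta(t) = -Lambda(t)."""
    lam, gam = math.exp(beta0), math.exp(alpha0)
    total = 0.0
    for a, b in intervals:
        sa, sb = surv_m(a, beta0, alpha0, phi), surv_m(b, beta0, alpha0, phi)
        term_a = sa ** (1.0 + phi) * (-lam * a ** gam)
        term_b = 0.0 if math.isinf(b) else sb ** (1.0 + phi) * (-lam * b ** gam)
        total += (term_a - term_b) / (sa - sb)
    return total

ivals = [(1.0, 2.0), (0.5, 1.5), (3.0, math.inf)]   # hypothetical intervals
b0, a0, phi = -0.8, 0.1, 0.5                        # hypothetical parameters
h = 1e-6
numeric = (loglik(b0 + h, a0, phi, ivals) - loglik(b0 - h, a0, phi, ivals)) / (2 * h)
print(abs(score_beta(b0, a0, phi, ivals) - numeric) < 1e-5)  # True
```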
Model selection
For the purpose of selecting among models within the MPRDM class, standard information criteria may be used, namely, the Akaike Information Criterion, AIC = −2 log L(θ̂) + 2k, and the Bayesian Information Criterion, BIC = −2 log L(θ̂) + (log n)k, where θ̂ is the maximum likelihood estimator and k = dim(θ). There are two levels of model selection, both of which can be handled by these information criteria, namely: (a) the overall model type, and (b) the covariate set for each regression component within a given model type.
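Both criteria are trivial to compute from a fitted model. The minimal sketch below uses hypothetical log-likelihood values, chosen only to illustrate how the heavier BIC penalty can reverse an AIC-based preference at the Tandmobiel sample size, n = 4430.

```python
# Minimal sketch of the information criteria of this section:
# AIC = -2 log L + 2k and BIC = -2 log L + k log n.
import math

def aic(loglik, k):
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    return -2.0 * loglik + k * math.log(n)

# Hypothetical fits: a PH model (k = 3) vs an MPR model (k = 5) on n = 4430:
print(aic(-2000.0, 3), aic(-1995.0, 5))  # 4006.0 4000.0 -> AIC prefers MPR
print(round(bic(-2000.0, 3, 4430), 1),
      round(bic(-1995.0, 5, 4430), 1))   # 4025.2 4032.0 -> BIC prefers PH
```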
Model types
The MPRDM modelling framework introduced in Section 2 is quite general, containing a range of new and existing regression model types. A natural hierarchy of model types emerges as follows: the underlying model may be PH (λ regression) or MPR (λ and γ regressions), and the frailty component may be absent, present, or present with a φ regression component. The six model types are summarised in Table 1. Note that models PH and PHF are single parameter regression (SPR) models (only one parameter depends on covariates) while all other types are multi-parameter regression (MPR) models - in particular, the PHDM model has a PH baseline component (i.e., SPR), but the overall marginal model is MPR since φ depends on covariates. We have found models without a λ regression to be less useful and, so, these are not considered here.
The first column gives the name of the model; "Baseline" is the baseline covariate structure, such that "PH" is a Proportional Hazards structure where only the scale parameter, λ, depends on covariates, whereas "MPR" is a Multi-Parameter Regression structure where the shape parameter, γ, also depends on covariates (see "Regression" columns); "Frailty" indicates the presence of a frailty term; "Regression" highlights the regression components via the distributional parameters which depend on covariates ("Yes") and which do not depend on covariates ("No").
Covariates
Given a particular model type from Table 1, we will generally wish to select from a set of candidate covariates, say c = (c_0 = 1, c_1, c_2, ...)^T, to appear in the model (note: c_0 is used for the intercept term). In the most general MPRDM model, this amounts to the selection of scale covariates, x ⊂ c, shape covariates, z ⊂ c, and frailty dispersion covariates, w ⊂ c, where the subsets may or may not overlap. While the union, x ∪ z ∪ w, is of interest as these covariates affect survival in some way, so too are the x, z, and w vectors themselves as these characterise the nature of specific covariate effects, e.g., in an MPR model without frailty, c_j ∈ z implies that c_j is a non-PH covariate, and, in a frailty dispersion model, c_j ∈ w indicates that the frailty variance differs in the sub-groups defined by c_j.
In general, the basic parameters of survival models (including the frailty variance) are rarely orthogonal, i.e., estimates of these parameters will be correlated. When covariates enter these parameters, this correlation propagates to the regression coefficients. In particular, if the covariate c_j appears in all regression components simultaneously, then the estimates of its corresponding β, α, and ψ coefficients tend to be quite correlated. It is important to emphasize that this correlation does not lead to convergence issues in model fitting, nor does it imply that a covariate must only appear in one regression component within the model. However, it does have implications for variable selection, e.g., individual Wald-based significance tests (which account only for the variance of estimates, and not covariance, which is important in this context) might render a particular covariate non-significant in all regression coefficients, when, in fact, the overall effect is significant. Covariate selection in MPR models was developed in Burke and MacKenzie (2017), who suggested the use of stepwise procedures in which covariate additions/deletions are carried out for each regression component separately as well as simultaneously, e.g., in an MPR model the covariate c_j could be added to x first (but not to z), then to z (but not to x), and finally to x and z simultaneously.
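A minimal sketch of the component-wise stepwise idea for an MPR model (no frailty): the three candidate additions described above are enumerated and scored. Both the user-supplied `fit` function and the BIC-based scoring are hypothetical stand-ins, not the Burke and MacKenzie (2017) implementation.

```python
import math

def candidate_moves(covariate, x, z):
    """Stepwise additions for an MPR model: add the covariate to the
    scale set only, to the shape set only, or to both simultaneously."""
    moves = []
    if covariate not in x:
        moves.append((x + [covariate], z))                    # scale only
    if covariate not in z:
        moves.append((x, z + [covariate]))                    # shape only
    if covariate not in x and covariate not in z:
        moves.append((x + [covariate], z + [covariate]))      # both
    return moves

def best_move(fit, covariate, x, z, n):
    """Score each candidate move by BIC and return the best; `fit` maps
    (x, z) -> (loglik, n_params) and is a hypothetical stand-in here."""
    best = None
    for new_x, new_z in candidate_moves(covariate, x, z):
        loglik, k = fit(new_x, new_z)
        score = -2.0 * loglik + math.log(n) * k
        if best is None or score < best[0]:
            best = (score, new_x, new_z)
    return best
```

Scoring all three moves jointly, rather than Wald-testing each coefficient separately, is what guards against the correlated-estimates problem described above.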
Simulation study
We conducted a simulation study to assess the estimation properties of the Weibull MPR model for interval-censored data. Failure times were generated from the Weibull regression model (with or without frailty) with two covariates: x_1, a binary covariate with Pr(X_1 = 1) = 0.5, mimicking a treatment effect for example, and x_2, a continuous baseline covariate distributed as N(0, 0.5).
In addition, we constructed censoring intervals (A_i, B_i) for each individual in the study from U^(1) and U^(2), independent continuous variables with uniform distribution on the interval (0, c). Zhang (2009) used this approach which, by construction, defines intervals that are non-informative about the survival time distribution, T. Furthermore, it can be shown that E(B_i − A_i) = 2c/3 (proof omitted). In this simulation study, we set c = (3d/2)E(T), i.e., E(B_i − A_i) = dE(T), so that the average inspection length is proportional to the average survival time, E(T), where we use d ∈ {0.1, 0.5}.
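The exact interval construction is garbled in our copy, but the stated expectation E(B_i − A_i) = 2c/3 equals the mean of max(U^(1), U^(2)) for two independent Uniform(0, c) draws. A Monte Carlo check of that identity (not a claim about the exact construction used by Zhang (2009)):

```python
import random

def mean_max_uniform(c, n_sim=200_000, seed=1):
    """Monte Carlo estimate of E[max(U1, U2)] for independent
    U1, U2 ~ Uniform(0, c); analytically this equals 2c/3."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sim):
        total += max(rng.uniform(0.0, c), rng.uniform(0.0, c))
    return total / n_sim

c = 3.0
print(mean_max_uniform(c), 2 * c / 3)  # estimate should be close to 2.0
```

With c = (3d/2)E(T), this expectation reduces to dE(T), matching the scaling used in the study.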
In the simulations, the proportion of right (random) censoring was controlled by using an exponential distribution where the estimate of the controlling parameter, ϕ, was obtained from the "J(·)-function" approach of MacKenzie and Peng (2013). Suppose the independent censoring times follow an exponential distribution with density g(t; ϕ). Let J(ϕ) = [∫_0^∞ g(t; ϕ) S(t; θ) dt − p]^2, where S(t; θ) is the survival function and p is the censoring proportion required; the integral is the probability that an individual is censored. Then, ϕ̂ = arg min[J(ϕ)] ensures that, on average, the proportion of censored individuals in each simulation equals p. We set p ∈ {0, 0.3} in this simulation study. The entire simulation was conducted in the R software package (R Core Team, 2018). Three sample sizes were used, n ∈ {200, 500, 1000}. Each scenario was replicated 5000 times, and, for each replicate, the model was fitted using the likelihood function given in (3.6). Note that the true coefficient values were set as β_0 = 2.0, β_1 = 0.5, β_2 = 0.3; α_0 = 2.0, α_1 = 0.25, α_2 = −0.1; φ = 0.5, φ_1 = 0.15, φ_2 = −0.2.
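A numerical sketch of the calibration idea behind the J(·)-function approach: choose the exponential censoring rate ϕ so that the implied censoring probability ∫ g(t; ϕ)S(t; θ) dt equals p. The trapezoidal integration and bisection solver below are stand-ins, not the actual MacKenzie and Peng (2013) implementation:

```python
import math

def censor_prob(phi, surv, upper=50.0, n=4000):
    """Pr(censored) = integral_0^inf phi*exp(-phi*t) * S(t) dt,
    approximated by the trapezoidal rule on [0, upper]."""
    h = upper / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * phi * math.exp(-phi * t) * surv(t)
    return total * h

def calibrate_phi(surv, p, lo=1e-6, hi=50.0, tol=1e-8):
    """Bisection on phi so that the censoring probability equals p;
    works because Pr(censored) increases monotonically in phi."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if censor_prob(mid, surv) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Weibull survival S(t) = exp(-lam * t**gam)
lam, gam = 1.0, 1.5
phi = calibrate_phi(lambda t: math.exp(-lam * t ** gam), p=0.3)
print(phi)
```

For an exponential survival time with rate λ the answer is available in closed form, Pr(censored) = ϕ/(ϕ + λ), which gives a convenient check of the solver.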
Results
Table 2 shows the median estimate, standard error, and average relative percentage bias (= 100 × (θ̂ − θ)/θ) for each parameter arising from our simulation study. We see that the estimators from the MPR IC likelihood have very little bias, and that both the bias and standard error tend to reduce with increasing sample size. As expected, larger right-censoring and inspection lengths tend to reduce performance, but, even in the worst case of 30% right-censoring and average inspection length of 0.5 E(T), the results are quite good.
Table 3 and Table 4 show the results of the simulation from the Weibull MPR with frailty, φ = 0.5, and the Weibull MPR with dispersion model, φ = 0.5, φ_1 = 0.15, φ_2 = −0.2, respectively. For the smallest sample size (n = 200) the frailty estimates can be somewhat biased (and even for n = 500 in the more complicated dispersion model case), but the bias is much reduced for n = 1000. Of course, we expect that more complicated models tend to require larger sample sizes, but, overall, our simulation suggests that the model parameters are recoverable for reasonable sample sizes.
Data analysis
We analysed a subset of data from the Signal Tandmobiel study. Following other authors (e.g., Gómez et al. (2009)), time to emergence, T, was measured as "child's age minus five years" (i.e., age − 5) since emergence of pre-molars does not occur before age five; more generally, of course, this threshold parameter could be estimated. Two covariates were analysed, namely, sex, where 0 = boy (52%) and 1 = girl (48%), and dmf, where 0 (57%) indicates that the primary predecessor tooth was sound and 1 (43%) indicates that it was decayed, missing due to caries, or filled. Gómez et al. (2009) excluded 44 (1%) school children in whom the dmf status was unknown, thus leaving 4386 children for our analysis. It should be noted that a more extensive data set from the Signal Tandmobiel study has been analysed by Bogaerts et al. (2002), Lesaffre and Komárek (2005) and Komárek and Lesaffre (2009).
We investigated four covariate structures: (I) sex only, (II) dmf only, (III) sex and dmf together, and (IV) sex and dmf together along with their interaction term. Each of these covariate structures was included in the six different model types designated by: PH, PHF, PHDM, MPR, MPRF, MPRDM (see Table 1). Once the best-fitting model type is identified (using the mean AIC averaged over covariate structures as a guide), covariate selection proceeds within the best type, and the final model selected may, if MPR, have different covariates in the regressions. Thus, there are 24 initial models in total defined by the combination of four covariate structures in each of six model types. These models were fitted to the data using a specially written R programme (R Core Team, 2018) which called the routine nlm to maximize the likelihood function (3.6) and compute the observed information matrix. Furthermore, we computed the non-parametric maximum likelihood estimator of the survivor function (Turnbull, 1976) using the R package interval (Fay and Shaw, 2010).
Results
A summary of the 24 initial models fitted to the data is given in Table 5. Firstly, looking at the four regression structures (I)-(IV), we see that, for all model types, the dmf models, (II), have lower AICs and BICs than the sex models, (I), suggesting that the status of the primary predecessor tooth (sound versus decayed/missing/filled) has a greater effect than the sex of the child. This is not to say that sex is unimportant, as the simultaneous inclusion of both sex and dmf, (III), yields a greater reduction in AIC and BIC relative to the two single-factor models. On the other hand, interestingly, the addition of the interaction term, (IV), reduces the AICs, but increases the BICs, suggesting that there may be a weak interaction effect.
We now discuss the merits of the various model types. First, we notice in Table 5 that generally the MPR models outperform their simpler PH counterparts in terms of AIC and BIC (the only exceptions being some MPRDM versus PHDM comparisons). Visually, this improvement in model fit is clear by comparing Figures 1 (a) and (b) to Figures 1 (c) and (d). This shows the additional value gained by modelling the shape, and highlights that one or more of the covariates have non-PH effects.
The addition of frailty to PH or MPR models improves the fit in all cases. This suggests that additional heterogeneity (e.g., via unobserved covariates) exists within the data. That this is so for the MPR models is noteworthy, since the MPR model already explains variation beyond the PH model by means of its person-specific shape regression; but it is clear that there is additional heterogeneity present in these data.
It is noteworthy that the inclusion of dispersion models (DM) is supported for the PH model cases, but not for the MPR cases. From (2.3), we see that φ plays a role in describing the shape of the marginal distribution (albeit that γ is the primary shape parameter). Thus, the support for DM on top of a basic PH model essentially highlights the benefit of modelling shape in addition to scale, as advocated by Burke and MacKenzie (2017). In particular, the comparison of PHDM against MPRF allows us to assess the value of modelling shape via the frailty variance, φ, or the shape parameter, γ. The latter models outperform the former in terms of AIC and BIC, suggesting that there is more value in modelling the shape than the frailty variance, at least in these data. This is perhaps not surprising since γ is the main shape parameter as mentioned.
Overall, we find that the best model in terms of AIC is MPRF(IV). This is the MPRF model with λ and γ regression components both containing sex, dmf, and the sex×dmf interaction. The best model in terms of BIC is the MPRF(III) model, obtained by omitting the interaction term from the regression components of MPRF(IV). Gómez et al. (2009) did not consider a non-PH process for time to emergence of tooth 24. Here, both of these best-fitting models are non-PH, and the non-PH-ness arises in two ways: (a) the shape parameter, γ, depends on covariates, and, (b) through the presence of frailty. The coefficients for these two models are shown in Table 6 where we see that, in MPRF(III), the sex effect appears to be statistically significant in neither the scale nor shape. A naive approach to variable selection might treat the scale and shape regression components completely separately, thereby removing sex entirely from the model. Such a removal would bring us back to MPRF(II) which, from Table 5, has much higher AIC and BIC values. This highlights the more involved nature of variable selection within MPR models, i.e., we cannot simply rely on separate Wald tests for (potentially highly) correlated scale and shape effects (see Burke and MacKenzie (2017) for further details). In this case, it is clear that the sex effect is required, but eliminating it from the shape yielded model MPRF(III)R which has improved AIC and
BIC; note that eliminating sex from the scale instead produces similar AIC and BIC values (not shown). Similarly, careful reduction of MPRF(IV) yielded MPRF(IV)R which has the lowest AIC and BIC of all models considered. Both of these reduced models are shown in Table 6 and, of course, the reduction produces more interpretable models. MPRF(IV)R is the best-fitting model by both AIC and BIC measures, and we can confirm that the fit is near-perfect via Figures 1 (e) and (f). Before we interpret this model, however, we first highlight PH(IV), shown in Table 6, which is the simplest full-covariate model fitted. The scale coefficients of this model suggest that tooth 24 emerges first in girls with dmf, then girls without dmf and boys with dmf (whose emergence times are close), and, lastly, boys without dmf. On the other hand, model MPR(IV), also shown in Table 6, directly extends this model and improves the fit, which suggests that in fact something more complex arises. Returning to MPRF(IV)R, because this has a reduced shape covariate structure (compared with MPR(IV)), we can see that dmf has a highly non-PH effect due to its appearance in the shape component.
Figures 2 (a) and (b) show the hazard functions for the four groups defined by the sex×dmf interaction for PH(IV) and MPRF(IV)R, while Figures 2 (c) and (d) display hazard ratios relative to the boys without dmf group. We can see that the MPRF(IV)R model permits quite different hazard shapes for each group, whereas, for the PH(IV) model, the shapes are constrained to be the same; in both cases the hazards increase with time, indicating the inevitability of the emergence of tooth 24 later in time. Overall, both models agree in terms of the highest and lowest emergence hazards which correspond, respectively, to the girls with dmf and boys without dmf groups. On the other hand, within the MPRF(IV)R model, it seems that boys with dmf have a higher hazard than girls without dmf earlier in time, which reverses later in time, at about time point 5 (i.e., when the children are aged 10); this crossing effect cannot be handled by the simpler PH(IV) model which forces these two groups to be equal. Table 7 shows the estimated median emergence times for the four groups under the PH(IV) and MPRF(IV)R models; the results are in line with the hazard-based interpretations. Recall that we have modelled age − 5 years so that adding 5 to the above emergence times gives the emergence age.
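Median emergence times follow directly from the cumulative hazard Λ(t) = λt^γ: setting Λ(t) = log 2 and adding back the five-year threshold gives the median emergence age. A sketch with hypothetical parameter values (the fitted Table 6 coefficients are not reproduced here):

```python
import math

def median_emergence_age(lam, gam):
    """Marginal (no-frailty) Weibull median: solve lam * t**gam = log 2
    for t, then add the 5-year threshold since time = age - 5."""
    t_med = (math.log(2.0) / lam) ** (1.0 / gam)
    return 5.0 + t_med

# Hypothetical group-level (lam, gam) values, not the fitted Table 6 ones
for label, lam, gam in [("group A", 0.05, 2.5), ("group B", 0.08, 2.5)]:
    print(label, round(median_emergence_age(lam, gam), 2))
```

A larger λ (higher hazard) yields an earlier median emergence age, as in the group ordering described above.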
While the nature of the covariate effects can be determined by examining Figure 2, it is instructive to consider the effects of dmf and sex separately (albeit they do interact). Thus, Figures 3 (a) and (b) present the dmf hazard ratios (i.e., dmf versus non-dmf) in boys and girls. We can see that the effect of dmf, which is highly time-dependent and greater in boys, is to increase the hazard of emergence, although, later in time, the strength of this effect reduces. Figures 3 (c) and (d) present the sex hazard ratios (i.e., girls versus boys) in the non-dmf and dmf groups. We can see that girls have a greater hazard of emergence than boys; the effect is reduced when dmf is present. Compared with the dmf hazard ratios, we can see that the sex effect is weaker (as was apparent from Table 5) and is much less time-dependent due to the lack of sex in the shape regression (in fact the MPRF(IV)R sex hazard ratios are much closer to their PH(IV) counterparts).
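The time-dependence (or not) of such hazard ratios is easy to see directly, assuming the marginal hazards take the Weibull form h(t) = λγt^(γ−1) (frailty ignored for simplicity):

```python
def weibull_hazard(t, lam, gam):
    """Weibull hazard h(t) = lam * gam * t**(gam - 1)."""
    return lam * gam * t ** (gam - 1.0)

def hazard_ratio(t, lam1, gam1, lam0, gam0):
    """HR(t) = (lam1 * gam1 / (lam0 * gam0)) * t**(gam1 - gam0):
    constant when the shapes agree (PH), time-varying otherwise,
    and able to cross one when gam1 != gam0."""
    return weibull_hazard(t, lam1, gam1) / weibull_hazard(t, lam0, gam0)

# Equal shapes: proportional hazards, the ratio does not depend on t.
print(hazard_ratio(1.0, 2.0, 1.5, 1.0, 1.5))  # prints 2.0
print(hazard_ratio(9.0, 2.0, 1.5, 1.0, 1.5))  # prints 2.0
# Different shapes: here HR(t) = 1/t, which crosses one at t = 1.
print(hazard_ratio(2.0, 2.0, 1.0, 1.0, 2.0))  # prints 0.5
```

This is exactly why a covariate in the shape regression produces the crossing hazard ratios seen in Figures 2 and 3, while a scale-only (PH) covariate cannot.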
Discussion
In this paper we investigated the utility of MPR models in the context of interval-censored data, by way of simulation and a practical application, which, to the best of our knowledge, is the first time such an investigation has been carried out. In particular, we have found that the parameters can be estimated with reasonable precision even in relatively small samples of interval-censored data (albeit the most complicated model structures work best in larger samples). Moreover, we found the MPR extension, and the additional extensions of frailty and dispersion modelling, to be fruitful in the context of the Signal Tandmobiel study. Thus, as we might expect, the utility of the MPR Weibull model, and indeed the MPR framework in general, extends beyond right-censored data and the specific lung cancer application considered in Burke and MacKenzie (2017). Our analysis of the tooth data considered a variety of additional model structures not previously explored in the existing literature (Bogaerts et al., 2002; Gómez et al., 2009; Lesaffre and Komárek, 2005; Komárek and Lesaffre, 2009). It is noteworthy that Lesaffre and Komárek (2005), who developed a spline-based AFT model, suggest that "parametric methods do not offer enough flexibility to correctly model survival data". In contrast, we have demonstrated the appropriateness of relatively simpler parametric models for these data, which achieve flexibility through a combination of multi-parameter regression and frailty modelling. Interestingly, Lesaffre and Komárek (2005) and Komárek and Lesaffre (2009) briefly considered a dispersion model extension of their AFT model which they refer to as a mean-scale model (citing earlier work by Pan and MacKenzie (2003) on mean-covariance modelling in longitudinal studies). While an investigation of MPR approaches was not their focus, they nonetheless found, like us, that an MPR extension yielded a superior fit to the data.
It is fortunate that these data are sufficiently extensive to permit the investigation of such models. We find that time to emergence of the permanent upper left first premolars depends on sex and dmf. In particular, emergence times are significantly earlier in children whose predecessor tooth was decayed, missing, or filled (dmf), and emergence times are earlier in girls. However, we have also found that frailty effects are supported within the data, i.e., there may be further unmeasured features at play. The time-dependent nature of the dmf hazard ratios is quite interesting, and suggests that the dmf group becomes more like the non-dmf group later in time. Recall that the dmf group is a mixture of individuals with decayed, missing, and filled teeth. With this in mind, one might speculate that filled teeth are more similar to sound (non-dmf) teeth, with larger emergence times, while missing teeth are perhaps quite different, with shorter emergence times. This, at least, would be a frailty interpretation of time-varying effects, i.e., there is a mixture of groups, some of which "fail" earlier. However, the time variation is much greater than that supported by frailty alone, due to the presence of the significant dmf shape effect within the MPRF(IV)R model.
The extension of the MPR framework to interval-censored survival data with frailty permits the examination of a variety of potential data structures. An appealing aspect of this approach is that the breadth of models supported exists within a reasonably straightforward parametric setup which is not computationally intensive. Such a framework provides a practical and useful adjunct to existing methods which may reveal new insights.
Figure 2:
Figure 2: Hazard functions (panels (a) and (b)) and hazard ratios (panels (c) and (d)) for boys without dmf (solid), girls without dmf (dash), boys with dmf (dot), and girls with dmf (dash-dot) for the PH(IV) (panels (a) and (c)) and MPRF(IV)R (panels (b) and (d)) models respectively. Note that, for the hazard ratios, we are using boys without dmf (the lowest hazard group) as the reference group for the four groups formed via the sex×dmf interaction.
Figure 3:
Figure 3: Ratio of dmf to non-dmf marginal hazards in boys (panel (a)) and girls (panel (b)), and ratio of girls to boys marginal hazards in the non-dmf (panel (c)) and dmf (panel (d)) groups. These marginal hazard ratios are shown for both the PH(IV) (dash) and MPRF(IV)R (solid) models; a reference line at one is also shown.
Table 1:
Model types
Table 2:
Simulation: ML estimates for the MPR Weibull model with various sample sizes and censoring rates - without frailty †
Table 3:
Simulation: ML estimates for the MPR Weibull model with various sample sizes and censoring rates - with frailty variance φ = 0.5 †
Table 4:
Simulation: ML estimates for the MPR Weibull model with various sample sizes and censoring rates - with frailty dispersion †
Table 5:
Summary of initial models fitted. "sex" and "dmf" indicate a single-factor model in which one of sex or dmf appears; "sex+dmf" indicates a model with both sex and dmf; "sex×dmf" indicates a model with both sex and dmf along with the interaction between these two; note that all models contain an intercept term in the scale and shape; ℓ(θ̂) is the log-likelihood value; dim(θ) is the number of parameters in the model; "AIC" and "BIC" are the Akaike Information Criterion and Bayesian Information Criterion respectively; dAIC = AIC − min(AIC), where min(AIC) represents the lowest AIC among the models shown in this table, which is that of model MPRF(IV) whose AIC is 10950.1; dBIC = BIC − min(BIC), where min(BIC) corresponds to model MPRF(III) whose BIC is 11001.9. Note that, for each of the six model types, the mean AIC, BIC, dAIC, and dBIC values are shown to facilitate quick comparison of the basic model types.
Table 7:
Estimated median emergence times with 95% confidence intervals
The beta-strength in beta-delayed particle decays has up to now been defined in a somewhat ad hoc manner that depends on the decay mechanism. A simple, consistent definition is presented that fulfils the beta strength sum rules. Special consideration is given to the modifications needed when employing R-matrix fits to data. As an example the 11Be(beta-p) decay is investigated through simple models.
Motivation
Close to the beta-stability line all beta-decays will populate particle-bound states, i.e. states that are long-lived (stable, beta or gamma decaying) and therefore have narrow widths, less than 1 keV. Moving towards the driplines a larger and larger fraction of beta-decays will feed states that are embedded in the continuum. A general overview of the physics changes this brings about can be found in recent reviews [1,2]. Close to the dripline beta-delayed particle emission can become the dominating decay mode and the question of how beta strength is assigned to transitions to unbound levels becomes important. This has been discussed at several instances, e.g. [3,4,5,6]; the aim of this paper is to provide a consistent answer that is independent of the mechanism for the particle emission. Quite apart from the conceptual interest this also has a very practical implication for the way the total beta strength is calculated: as remarked earlier [5] current approaches give a strength corresponding to decays in an energy region that is proportional to ∫(ft)^−1 dE for decays going directly to the continuum versus (∫ ft dE)^−1 for decays through a resonance.
To simplify the notation I shall mainly consider Gamow-Teller transitions. For transitions to bound states the decay rate is

w = (ln 2/K) f g_A^2 B_GT,

where f is the phase space factor, K = 2 ln 2 π^3 ħ^7/(m_e^5 c^4) (m_e being the electron mass), the beta strength B_GT is given by the reduced matrix element squared |<f|β_±|i>|^2 and the weak interaction constant g_A is factorized out explicitly from the operator β_± that flips spin and isospin. The basic suggestion of this paper is to define the beta strength for final unbound states so that the following expression holds for the decay rate:

dw(E)/dE = (ln 2/K) f(Q − E) g_A^2 B_GT(E),   (1)

where there is an implicit sum over all final states with the same E. This definition is in principle experimentally simple to implement, but can be more complex to use theoretically since it does not distinguish between different cases such as isolated resonances in the continuum, interfering resonances, one or several decay channels etc. Essentially one takes out the lepton part (Q − E is the energy going to the beta particle and the neutrino), so it is an attempt to separate the weak interaction part (the "incoming channel") and the strong interaction part (the "outgoing channel"). A complete separation is not possible unless each decay goes through one and only one intermediate state. As explained in detail later, different definitions of B_GT have been employed in earlier papers. Section 2 presents an argument based on the Gamow-Teller sum rule for why the above suggestion is appropriate, and the two following sections compare the definition to existing frameworks. Section 3 looks in detail on beta decays going directly to continuum states and how they have been treated theoretically so far. Section 4 deals with the treatment of decays through intermediate resonances as done in the R-matrix formalism and how this can be modified to be consistent with the proposed definition.
Section 5 presents the conclusions and the appendix gives more mathematical details relevant for the R-matrix treatment.
Beta strength sum rule
The Gamow-Teller sum-rule is very useful for beta decay studies. It gives a natural scale for B_GT for a given decay and is derived by using the completeness relation for rewriting the summed strength for an initial state |i> as

Σ_f B_GT(f) = Σ_f |<f|β_±|i>|^2 = <i|β_±^† β_±|i>,

and by evaluating the commutation relations of the beta operators one gets

Σ B_GT^− − Σ B_GT^+ = 3(N − Z).

In this standard derivation of the sum-rule one implicitly assumes a discrete set of final states, each with beta strength B(GT)_f = |<f|β_±|i>|^2; this must of course be changed when significant contributions come also from continuum states.
A pragmatic way of proceeding, that shall be explored later in section 4.1, is to use as a first step a discretized continuum by imposing a finite (but large) quantization volume. By construction the rules for a discrete spectrum apply and the sum rule is unchanged. In the continuum limit of increasing quantization volume one would naturally obtain equation (1) and the Gamow-Teller strength will obey the sum rule. Calculations of continuum spectra that proceed by this route will be safe, but other approaches are possible that throw more light on the intricacies of the continuum.
A more formal treatment of the question of how to formulate the completeness relation including continuum states was given by Berggren and collaborators [7,8]. With a careful definition of the continuum wavefunctions one can derive general sum rules [9,10] where for our specific case the sum over discrete states is replaced by a sum over bound states and an integral over all (real values of the) momenta in the continuum,

Σ_f B_GT(f) = Σ_bound B_GT(f) + ∫ B_GT(k) dk.   (2)

Berggren further showed how one by allowing complex momenta k and modifying the contour of integration in eq. (2) could extend the sum over bound states to include also contributions from resonance states. Conceptually this gives the crucial insight that even though the physical decay mechanism may favour the description in terms of resonances or the one in terms of continuum transitions, we are in principle at liberty to use both (or, in the general case, a mixture). There are two important points to note: first that even when all physical resonances are included in the sum there may remain a small continuum contribution, secondly that in practical implementations one may encounter non-positive contributions from individual terms in the sum, as shown explicitly in [11] for the corresponding case of an electric dipole. The resonances that emerge in this framework can therefore not be replaced by or simply identified with the resonances occurring e.g. in the R-matrix framework. If one in equation (2) integrates B_GT(k) over all momenta corresponding to the same energy E one obtains the B_GT(E) from above. It is therefore possible to consistently define Gamow-Teller strength in the "pure continuum" so that the sum rule is maintained. If one wishes to assign strength to a specific resonance this can be done, but there is in principle a risk of obtaining non-positive values. The question of when continuum contributions will remain important is treated in [9,10].
Decay directly to continuum
The Berggren approach is being implemented in nuclear structure calculations via the Gamow Shell Model [12], but has so far not been applied to beta-delayed particle emission. Calculations of beta decays directly to a continuum state |k> have been made within several approaches with different conventions for the normalization of continuum states and correspondingly different choices for the normalization of the reduced matrix element B_GT(k): in [13] the continuum was discretized in a large volume and the wavefunction normalized to one particle per volume; in other calculations, see e.g. [14,15,16], the wavefunctions at large radii become scattering wavefunctions. When calculating the decay rate as a function of the continuum energy, dw(E), one must sum over all states with the same energy E. An explicit "phase space factor" for the outgoing particle should therefore be included, a factor that of course also depends on the chosen normalization, thereby bringing some confusion to the notation. The conversion to the present definition involves for [13] a factor depending on k and m, the momentum and mass of the outgoing particle; for [14] a factor depending on v, the velocity of the outgoing particle; and for [16] an extra ratio of coupling constants due to a different convention that includes them in the definition of B_GT. From the previous section it follows that calculations of beta decays going directly to continuum states should essentially automatically fulfil the Gamow-Teller sum rule. The main point in the present definition, equation (1), is for this case only a redefinition of B_GT(E) as a sum of all B_GT(k) so that the calculation dependent "phase space factor" is not included in the strength definition. This makes comparisons between calculations and between experiment and theory easier.
Up to now the decays that have been described as going directly to continuum states are some of the decay channels of halo nuclei [2]. In the specific case of beta-delayed deuteron decays of two-neutron halo nuclei this is the standard assumption in most theoretical descriptions of the process (based on the picture of the two halo neutrons decaying "remotely" into a deuteron), see e.g. [16] for the most recent calculation of this decay mode in 6He. However, a description within the R-matrix approach has also been done [17] and more experimental data may be needed in order to settle whether direct decay is the only reasonable description.
Sequential decay
The case of decays through resonances is considerably more complex, in particular for broad resonances where the f-factor changes significantly across the level and where interference may play a role. This case is typically analysed with the R-matrix formalism [18,19] that allows adjusting level parameters to better fit experimental data. Before going into technical details it may be useful to recall why a resonance description is used at all. It is the natural description when there are narrow lines in the experimental spectrum, but it is also of interest more generally since a resonance description summarizes much information into a few numbers. If a few resonances can describe all the structure in a spectrum it gives an economical description that furthermore can be extrapolated (with caution) to neighbouring regions that may be harder to access experimentally.
The R-matrix approach (or an equivalent framework, see [20] for an overview of theoretical approaches that have been used to describe resonance reactions) is essential if there is strong coupling to the continuum or if resonances overlap so that interference occurs; some examples from the light nuclei are the decays of 8B, 12N, 17Ne and 18N. Appendix A contains a more detailed exposition of the R-matrix formalism for beta-delayed decays. I shall mainly consider the single-level, single-channel case where, as shown in [6], the decay rate is

dw(E)/dE = (ln 2/K) f(Q − E) g_A^2 B^R_GT ρ(E)/π,

where the size of the beta strength parameter B^R_GT (essentially the square of the parameter g in [4]) depends explicitly on the normalization of the line shape that is given by

ρ(E) = P(E) γ_λ^2 / [(E_λ − γ_λ^2 (S(E) − B) − E)^2 + (P(E) γ_λ^2)^2].

Here P, S and B are the penetrability, shift function and boundary parameter and E_λ and γ_λ are the level energy and width parameters. By comparison to the continuum description one sees that B^R_GT will give the summed beta strength for an isolated level if the integral of ρ(E) is π. If the integral differs from π the basic suggestion of the present paper is that the strength derived from the continuum description is the correct one (it leaves the sum rule unchanged) and is related to the R-matrix parameter so that the integrated strength of the decay through a specific isolated level is B^R_GT ∫ρ(E)dE/π. A very similar correction has been applied by Barker [3,4,6], who for narrow levels approximates ∫ρ(E)dE = π/(1 + γ_λ^2 dS/dE), where the derivative is evaluated at E_λ. This question is analysed in more detail in Appendix A where the limitations of the approximation are exposed. There is no general unique prescription that in a simple manner will give the total beta strength corresponding to a level. Furthermore, if one tries to determine the total strength by performing the integral ∫ρ(E)dE, the contribution to the integral above an energy E_h will be proportional to γ_λ^2/√E_h and therefore be potentially large for wide levels.
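A toy numerical check of the line-shape normalization, using an s-wave-like penetrability P(E) = √E and a constant shift (S = B); these are illustrative assumptions, not the parametrization used in the paper:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def rho(E, E_lam, gamma2):
    """Toy single-level R-matrix line shape with P(E) = sqrt(E) and the
    shift cancelled (S = B):
    rho(E) = P*gamma2 / ((E_lam - E)**2 + (P*gamma2)**2)."""
    P = math.sqrt(E)
    return P * gamma2 / ((E_lam - E) ** 2 + (P * gamma2) ** 2)

E_lam, gamma2 = 1.0, 0.01   # narrow level: width ~ 2*P*gamma2 = 0.02
f = lambda E: rho(E, E_lam, gamma2)
# Integrate finely across the peak and coarsely over the wings/tail.
integral = (simpson(f, 0.0, 0.8, 2000)
            + simpson(f, 0.8, 1.2, 20000)
            + simpson(f, 1.2, 400.0, 40000))
print(integral, math.pi)   # close to pi for a narrow level
```

For this narrow level the integral comes out close to π, so B^R_GT is essentially the summed strength; broadening the level (larger gamma2) pushes a slowly decaying tail, scaling like γ_λ^2/√E, out to high energies, which is the source of the upper-limit sensitivity discussed above.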
It is not obvious that this contribution at high energies is physically relevant and therefore not obvious which upper integration limit should be used. The best one can do is to employ equation (4) and e.g. determine the strength for decays through a specific level in a given energy range.
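As a quick numerical check of the π-normalization claim, the sketch below integrates the single-level line shape for the simplest case, an s-wave neutron channel, where P = kR and S = B = 0 so that no level-shift term enters. The level energy, reduced width, channel radius and reduced mass are illustrative values chosen here, not numbers from the text.

```python
import numpy as np
from scipy.integrate import quad

hbarc = 197.327   # MeV fm
mu = 853.6        # reduced mass of n + 10Be, MeV (approximate)
R = 4.0           # channel radius, fm (assumed)
E_lam = 1.0       # level energy E_lambda, MeV (assumed)
gam2 = 0.1        # reduced width gamma_lambda^2, MeV (assumed)

def P(E):
    # s-wave neutron penetrability P = kR; the shift function S vanishes
    return R * np.sqrt(2.0 * mu * E) / hbarc

def rho(E):
    # single-level line shape with S = B = 0
    return P(E) * gam2 / ((E_lam - E) ** 2 + (P(E) * gam2) ** 2)

I, _ = quad(rho, 0.0, 500.0, points=[E_lam], limit=200)
print(I / np.pi)  # close to 1: the integral of rho(E) is pi for s-wave neutrons
```

The integral approaches π from below as the upper limit grows (the truncated tail falls off as E^{-3/2}); for higher partial waves the energy dependence of P and S makes the integral deviate from π, which is exactly the correction discussed in the text.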
The extension from the single-channel, single-level case to the more general situation will not give qualitative changes in the picture. Numerically, the interference that enters in the multi-level case redistributes the beta strength rather than changing its total value. (It seems also to diminish the dependence on E_h mentioned above.) In any case, if interference effects are large the whole procedure of attributing beta strength to each individual level may be questioned. The beta strength to a given level cannot be extracted immediately from a spectrum. If one in such cases chooses to quote a B_GT (or, equivalently, an ft-value) the price to pay is that the sum rule is no longer valid and that an evaluation of total strength directly from the spectrum will give a different result (that fulfils the sum rule).
A pragmatic way to extract beta strength when fitting with the R-matrix formalism is the following: if the resonances are narrow and isolated one can normally use the same procedure as for bound states, except when the variation of the f -factor across the level is substantial. In the latter, and other more complex cases, one can either switch to using eq. (1) or equivalently use equation (4) and calculate explicitly the integral of ρ(E) or the corresponding integrals for the multi-channel, multi-level cases given in [4]. Barker uses the Q-value as the upper limit for the integration range (this would correspond to including only the observed strength within the energetically available window), but if the choice has any effect it must be carefully stated.
A model case: 11 Be(βp)
The general results will now be exemplified via a simple tractable case, namely the beta-delayed proton emission from 11 Be. This decay mode should be similar to the beta-delayed deuteron decays from two-neutron halo nuclei mentioned above, but is conceptually simpler. A recent paper [21] contains more details on this decay with references to the literature. The model considered here is too simple to be applied immediately to the decay; it considers, e.g., neither isospin nor decays of core nucleons. Nevertheless, it will serve as a useful illustration.
The basic assumption in this discretized continuum direct decay model (DCDD) is that the initial state is an s-wave neutron in the potential given by 10 Be that is assumed to be inert. The final states are continuum wave functions of a proton in an s-wave in the combined Coulomb and nuclear potential from 10 Be. The Gamow-Teller operator simply converts the halo neutron into a proton (the spin operator will not change the physics) so the matrix element reduces to the overlap between the two wavefunctions. Fermi transitions are assumed to go mainly to the isobaric analogue state and are therefore not included. The final wavefunctions are found as the discrete set of positive energy solutions in a finite volume. The "energy resolution" given by the differences in level energies decreases as the radius of the quantization volume is increased; in the calculations the radius of the volume varied between 400 fm and 4000 fm.
The strong potential between the core and the nucleons is taken as a square well of radius 4.0 fm. This gives an appropriate halo wavefunction for the 11 Be ground state when using wavefunctions with one node inside the potential (if wavefunctions with no node are used, the potential radius should be reduced to 3.5 fm). For the initial state the potential strength is adjusted to 33.819 MeV to fit the known 11 Be neutron separation energy [22] of 501.64(25) keV. For the final state a square well is used up to 4 fm and a pure Coulomb potential for radii beyond this. The structure of the solutions depends on the well depth used in the final state. For most values one obtains small overlaps with wavefunctions within the 280.7 keV window open for βp decay and a featureless spectrum. A very similar result was obtained in the more sophisticated two-body calculations of the decay in [23]. This is the "non-resonant" regime with nothing conspicuous appearing in the calculated decay spectrum.
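The quoted depth of 33.819 MeV can be reproduced from the standard s-wave square-well matching condition k cot(ka) = −κ, with one node inside the well (i.e. ka between π and 2π). The sketch below is an independent check under that assumption, using the reduced mass of n + 10 Be:

```python
import numpy as np
from scipy.optimize import brentq

hbarc = 197.3270              # MeV fm
mn = 939.5654                 # neutron mass, MeV
M10 = 10.0135338 * 931.4941   # 10Be mass, MeV
mu = mn * M10 / (mn + M10)    # reduced mass of n + 10Be
a = 4.0                       # square-well radius, fm
Sn = 0.50164                  # 11Be neutron separation energy, MeV

kappa = np.sqrt(2.0 * mu * Sn) / hbarc  # exterior decay constant

def match(V0):
    # interior wave number for well depth V0 and binding energy Sn;
    # the returned expression vanishes when k cot(k a) = -kappa
    k = np.sqrt(2.0 * mu * (V0 - Sn)) / hbarc
    return k / np.tan(k * a) + kappa

# bracket chosen so that k*a stays on the one-node branch (pi, 2*pi)
V0 = brentq(match, 30.0, 38.0)
print(f"V0 = {V0:.3f} MeV")  # ~33.82 MeV, matching the value quoted in the text
```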
When the depth of the final square well potential is in the range 33-34 MeV one obtains significantly higher overlaps (in this range the final state wavefunctions have one node inside the potential; significant overlaps are also obtained in limited regions where wavefunctions have no or two nodes inside the potential). The obtained overlaps are shown in figure 1 for a 1000 fm confining radius. One observes a "resonance" inside the small window with a width that becomes smaller for lower resonance positions. This behaviour is in contrast to the one observed if one puts the Coulomb potential to zero in the final state (as if the final nucleons were neutrons), in which case there is a broad maximum in the overlaps. A similar behaviour to the one seen here appears in the calculations of beta-delayed deuteron emission in [24], where the final state spectra generally are broad and almost featureless but display a low energy peak for small ranges of the potential depth. It may be of interest to briefly compare the found resonances with the famous case of the 8 Be ground state. Comparing the Schrödinger equations in the tunneling region for the α + α case and for p + 10 Be, one finds that scaling the radius in the latter case up by the ratio of reduced masses in the two systems, and the energies down by the same factor, gives exactly the same equation. In other words, the 8 Be ground state corresponds to a proton resonance at 42 keV. Higher resonance energies correspond to systems more unstable than 8 Be. In a similar way, when the p + 10 Be system is compared to the d + 9 Li one, if the proton radial distances are scaled by a factor 27/20 and the proton energy by a factor 81/80, one obtains exactly the same Schrödinger equation for the two systems in the region below the Coulomb barrier. I.e., the energy scaling factor is here very close to one.
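The scaling argument can be checked directly: α + α and p + 10 Be both have Z₁Z₂ = 4, so below the barrier the radial equations map onto each other when radii are scaled by the ratio of reduced masses and energies by its inverse. Taking the 8 Be ground state to lie about 91.8 keV above the two-alpha threshold (a standard value, not quoted in the text):

```python
# masses in MeV/c^2
m_alpha = 3727.379
m_p = 938.272
M10 = 10.0135338 * 931.494

mu_aa = m_alpha / 2.0               # alpha + alpha reduced mass
mu_p10 = m_p * M10 / (m_p + M10)    # p + 10Be reduced mass

# same Z1*Z2 in both systems, so resonance energies scale with the
# ratio of reduced masses
E_8Be = 91.8e-3                     # 8Be g.s. above 2-alpha threshold, MeV
E_scaled = E_8Be * mu_p10 / mu_aa
print(E_scaled * 1e3)               # ~42 keV, as stated in the text
```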
The DCDD model clearly produces a resonance when the effects of the Coulomb barrier are sufficiently strong to confine the wavefunctions. The asymmetry in the line profiles in figure 1 is caused by the decreasing effect of the Coulomb barrier as the energy is raised. An interesting feature can be seen in figure 2 that compares overlaps for three different initial neutron separation energies, 5 keV, 500 keV and 5 MeV. For final state energies around 200 keV the wavefunction gets above the Coulomb barrier at about 29 fm and will start oscillating, i.e. change sign periodically. For the case of 5 keV separation energy the initial wave function will reach out to large distances and the opposite sign contributions are sufficiently strong to give a clearly visible interference dip in the upper tail of the resonance. The finite energy resolution inherent in the direct decay model gives problems for resonance structures at low energy since one cannot be sure to cover the resonance if its width becomes too small and the total overlap from the model becomes prone to numerical uncertainties and therefore unreliable. One can still find the resonance position and width from the procedure outlined in [25], but care must be taken when extracting overlaps.
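The "about 29 fm" quoted above is simply the classical Coulomb turning point r = Z₁Z₂e²/E for a 200 keV proton on 10 Be; a one-line check:

```python
e2 = 1.440      # e^2 = alpha * hbar * c, MeV fm
Z1Z2 = 4        # proton on 10Be
E = 0.200       # final-state energy, MeV
r_turn = Z1Z2 * e2 / E
print(r_turn)   # ~28.8 fm, consistent with the ~29 fm in the text
```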
Since all involved states have low energy one would expect the results to be insensitive to the details of the potential shape. This has been tested by performing calculations also with a Woods-Saxon potential with parameters taken from the potential used in [23]. Very similar results were obtained, as shown in figure 3 that displays the calculated total branching ratio for beta-delayed proton emission in the two models as a function of the position of the resonance. The B_GT was taken as 3, the sum-rule value for a single neutron. To obtain the branching ratio the calculated total decay rate is divided by w_tot = ln 2/t_{1/2}. If one has a mismatch between the nodal structures, e.g. one node in the neutron wave function within the potential and no nodes in the final state wave functions, the decay rate decreases, as also shown in the figure. (One expects the neutron and proton in the 11 Be decay to have the same nodal structure.) The DCDD model calculations point to a resonance dominated decay, so it is natural to employ also R-matrix calculations of the decay. At first I assume that the level at energy E_0 fed in beta-decay only decays via proton emission. The expression for the decay rate, converted into a differential branching ratio, is then given by eq. (5), where Δ = −(S(E) − S(E_0))γ² and P and S are the penetrability and shift factors. The value of γ² is taken as ħ²/(ma²), which is the maximum possible, the Wigner limit, and the channel radius used is a = 1.4(1 + 10^{1/3}) fm. The B_GT is again taken as 3. A simpler approximation sometimes used [5] is to neglect the energy dependence of the shift factor. Doing this and allowing also for α-decay from the level gives the expression of eq. (6), where Γ_tot = Γ_α + Γ_p, the α decay width Γ_α is assumed to be constant over the Q-window (this should be a good approximation as the α threshold is more than 2.5 MeV lower), Γ_p = 2Pγ² and P is the standard (energy-dependent) penetrability factor.
Integration over the Q-window gives the total branching ratios shown in figure 3 as a function of E_0 for different values of Γ_α. The branching ratios agree well with the ones from the DCDD model in the region where proton decay dominates; note that the calculation with zero Γ_α becomes unreliable at very low resonance energies. As discussed in detail in Appendix A, R-matrix parameters should not be identified immediately with experimentally observed quantities; a correction factor (1 + γ²dS/dE)^{-1}, which for our case decreases slightly from 1/2.5 at 50 keV to 1/3 at 200 keV, often enters. To illustrate this explicitly, figure 4 displays the differential spectra for the DCDD model with potential depth 33.5 MeV and R-matrix calculations from eqs (5) and (6) where in both cases γ² is taken as the Wigner limit, B_GT is 3 and E_0 is 181 keV (the approximate resonance position for the DCDD calculation). Comparing first the full R-matrix calculation, eq. (5), with the DCDD results, the resonance width and overall shape are very similar but the overall strength is reduced by the above factor. This explains immediately the corresponding reduction in intensity in figure 3. For the simpler expression from eq. (6) the width is clearly too large, again due to the same correction factor; this underlines that one must insert the observed width (and not the R-matrix width) when using eq. (6). A further effect then enters as the energy dependence of the beta-decay f-factor distorts the spectral shape and moves the peak position several keV down. This effect increases with Γ_tot, as also seen in figure 4; a simple evaluation where the energy dependence of the width is neglected and the f-factor is approximated as (Q − E)^5 gives a shift of −(5/8)Γ²_tot/(Q − E_0).
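The quoted peak shift can be verified numerically by maximizing a Breit-Wigner weighted by the (Q − E)^5 approximation of the f-factor. The width below is an assumed value for illustration; E_0 and the Q-window are the values given in the text.

```python
import numpy as np
from scipy.optimize import minimize_scalar

Q = 0.2807    # beta-delayed proton Q-window, MeV
E0 = 0.181    # resonance position, MeV
Gam = 0.010   # total width, MeV (assumed for illustration)

def spectrum(E):
    # Breit-Wigner distorted by the phase-space factor (Q - E)^5
    return (Q - E) ** 5 / ((E - E0) ** 2 + (Gam / 2.0) ** 2)

res = minimize_scalar(lambda E: -spectrum(E), bounds=(0.10, 0.25), method="bounded")
shift_num = res.x - E0                          # numerically found peak shift
shift_est = -(5.0 / 8.0) * Gam**2 / (Q - E0)    # -(5/8) Gamma_tot^2 / (Q - E0)
print(shift_num * 1e3, shift_est * 1e3)         # both about -0.6 keV
```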
For the calculation of the total branching ratio a wrong value of the width does not matter as long as the resonance is narrow since the integration over the Breit-Wigner then gives a constant, but it could explain that the results from eq. (6) lie above the other results in figure 3 at energies above 250 keV. When the correct parameter values are inserted in the R-matrix calculations the spectra corresponding to figure 4 agree very well, as do the integrated intensities.
Implications for R-matrix fits
If fits are made only in a small energy range it does not matter which approximation of R-matrix theory is used. The larger the energy range, and the larger the effect of having several levels and/or several decay channels, the more obvious is the need to employ the full theory. However, whatever method is used, it is essential to distinguish clearly observed parameters from R-matrix parameters. The main difference in fitting comes from including or neglecting the shift factor (compare eqs (5) and (6)), whereas the energy dependence of the level width Γ = 2P(E)γ² may be inserted or not according to whether its variation is significant. In eq. (6) observed parameters must be inserted (except for the explicit energy variation of the level width), in eq. (5) the R-matrix parameters. The conversion between the two parameter sets is, for narrow levels, via the correction factor (1 + γ²dS/dE)^{-1}. For wide levels it eventually becomes meaningless to attempt a conversion. The important fact to note is now that the Wigner limit applies to the R-matrix parameter value, whereas the Gamow-Teller sum rule applies to the observed value. Note further that the "observed B_GT" only represents the strength present close to the peak. It may be an acceptable value for narrow peaks, but for broader peaks one should apply eq. (4). This holds in particular when interference occurs.
In some cases the observed B_GT gives a misleading impression even for narrow levels. This is when the small width Γ = 2Pγ² is due to a small penetrability rather than a small value of γ², which measures the strength of the coupling to the outgoing channel. In this case one may get a sizable contribution also at higher energies where the penetrability has increased, the "ghost peak" of Barker and Treacy [26]. This effect is also seen in the 11 Be(βp) case in the DCDD model for resonance energies below about 65 keV. The effect is mainly due to the change in penetrability; the change in the shape of the final state wave function inside the potential that will be present in the DCDD model is small. Barker preferred initially [3] to work with ft-values rather than B_GT values. There is no conceptual advantage in doing so for broad levels, but it may be slightly simpler experimentally and one does not have to worry about the unfortunate ambiguity in the literature on whether the ratio of coupling constants (g_A/g_V)² is included in B_GT or not. For the cases where it is imperative to use R-matrix fits rather than simply treating a resonance as a bound level one has no advantage in using ft-values, and the comparison to theory anyway becomes less direct.
Several values of B_GT given in the literature will be affected if the present recommendation is followed. The intriguing case of the large asymmetry in the decays of the mirror nuclei 9 Li and 9 C will certainly be affected, but the correction factors quoted in [27] are too small to solve the problem. An extreme case with a clear effect is the beta-decay into states in 12 C above the three-alpha particle threshold. Recent experiments [28,29] gave a summed strength to identified states up to and including the 12.71 MeV state of 1.0-1.1 for 12 B and 12 N. However, further strength is clearly seen at 15-16 MeV excitation energy and if this is interpreted as due to a tail from higher-lying states, as would naturally be assumed from the R-matrix fitting in [29], one would violate the sum rule drastically. If on the other hand the observed strength is summed bin by bin, as done in [28], one obtains a strength of about 0.6 for 12 N, which is perfectly allowed.
Summary
Using the Gamow-Teller sum rule as a guideline, this paper puts forward a simple general definition of the beta strength in beta-delayed particle decays. The strength defined in this way differs from the strength entering in the R-matrix formalism; for extreme cases such as the 12 N decay the difference is large. When beta decays proceed through narrow resonances the expression B^R_GT/(1 + γ²dS/dE) is a good approximation for the summed strength close to the resonance, but if broad levels are involved or the Q_β-window is large one should use the definition in eq. (1) directly.
The suggested definition also differs quantitatively from the ones employed so far in calculations of decays directly to the continuum, but has the following advantages: (i) the expression for the decay rate, equation (1), is independent of the normalization of the wavefunctions, (ii) the sum over bound states in the sum-rule extends naturally to an integral over the continuum (with the definitions used so far one had to include an explicit phase space factor in the integral), and (iii) the experimental determination of beta strength becomes the same independently of whether the beta decay feeds bound states or the continuum: one simply uses eq. (1) for each bin. Finally, since most theoretical calculations so far have been carried out in a bound state approximation, any comparison between experiment and theory must be done with great care once the effects treated in this paper are significant.
since the penetrability contains a factor kR = √(c′E). The poles of ρ_λ(E) appear where the denominator vanishes, and apart from the "classical" pole at E_- = E_0 − iΓ/2 (and its mirror E_+ = E_0 + iΓ/2) one can have poles from the terms P(E) and S(E). One now extends the integral along the negative real axis and closes it in the lower half plane so that the residue theorem can be used. The added part in the lower complex plane gives a vanishing contribution and the integral along the negative real axis gives a purely imaginary contribution (except for possible poles on the axis, which then have to be evaluated explicitly) due to the factor √E; this can be shown explicitly for neutral particles but probably holds in general. The integral can therefore be evaluated simply from the residues at the poles of the function in the lower half plane, where the main contribution (at least for small γ²) will be from the pole at E_-. Close to this pole the behaviour is similar to that of the standard Breit-Wigner function, but the residue contribution, which for the Breit-Wigner gives 2πi(Γ/2)/(E_+ − E_-) = 2πi(Γ/2)/(iΓ) = π, becomes more complex due to the analytic continuation of P(E) into the complex plane. The correct result for the integral turns out to be π also for s-wave neutrons (where P(E) = kR and S(E) = 0), but for higher angular momenta deviations will occur. One can show in general that each pole E_p will give a contribution to the integral that, if the imaginary term can be neglected to lowest order, at first glance resembles the result of eq. (A.3) but with opposite sign! Furthermore, the poles that arise from P(E) give a contribution that is of the same order as the correction terms from the classical pole. In fact, an explicit calculation for p-wave neutrons (where P(E) = (c′E)^{3/2}/(1 + c′E) and S(E) = −1/(1 + c′E)) gives to lowest order an integral in which the classical pole and a pole close to −1 contribute equally to the last term.
The reason that the Lane and Thomas expansion gives the wrong result is, as pointed out explicitly in [29] for the case of the Hoyle state, that part of the strength appears away from the pole position as so-called ghosts [26]. Whether this contribution is physically important or not depends on the specific circumstances of a decay. The summed strength close to the resonance energy will for narrow levels be given to a good approximation by eq. (A.3).
For the one-level multi-channel case one can see easily from the general formulae in [4] that the only change is to insert a sum over all channels in the terms (S(E) − B)γ² and P(E)γ². Summing the contribution in all outgoing channels one therefore obtains a result similar to the one-level one-channel case. However, in the multi-level one-channel case interference between the levels will appear. Numerically this does not seem to have a major effect; in fact, for levels that interact constructively between the two peak positions it seems that the total areas are less perturbed than for individual single levels, even for cases with rather strong interference (the destructive interference away from the levels seems to suppress the contributions that make the integrals deviate from π).
The Effectiveness of Marketing Strategies from the Perspective of Consumer Psychology
In the contemporary world economy, marketing has played a crucial role in determining product values. However, it is subject to changes in consumer psychology that are the inevitable outcomes of social developments. The topic of this research is to analyze the timeliness of marketing strategies based on consumer psychology in the new media era. The investigation is carried out by reviewing existing papers relevant to this topic, thus providing insight into the effectiveness of marketing strategies over time. The sources of information are relevant and recent, to keep track of changes as much as possible. The research suggests that pricing strategy is losing its place in modern marketing practices as consumers come to favor emotional value rather than the practicality of products, while social media is gaining popularity in creating commercial value for businesses. Therefore, marketers are recommended to draw up customer-centric marketing strategies that leverage the power of technology, including simplifying promotional content to avoid information overload and depending on the interactions between potential customers to expand influence.
Introduction
Marketing has been playing an essential role in the global economy in modern times. Following the revolutionary changes in the external environment, previous marketing strategies are considered outdated and are urgently undergoing a transformation to adapt to the ever-changing social context. It is significant to identify social trends in consumers' tastes and preferences over marketing practices, factors that affect their purchase intention, and ways to increase firms' revenue through the renewal of marketing strategies. Awareness of the changes happening allows marketing managers to further understand consumer behavior, which may help a firm exceed its competitors by modifying its marketing strategies in a timely manner to fit the current trend. Based on the compelling view of marketing tactics by Marshall, Bain, and Galbraith, brand-driven preferences could make consumers less price-elastic, creating barriers to entry and strengthening the market power of existing businesses [1]. Another informative approach, advocated by Stigler or Telser, explains that the primary purpose of marketing is to inform consumers about product features [1]. Either way, marketing is well justified as raising welfare, which adds to the importance of keeping track of social trends.
Existing research has focused on the absolute impact of social media on marketing. People are spending more time online as time goes on, particularly on social media platforms like Facebook, LinkedIn, and Instagram. Brands, then, are now spending more money on digital marketing than on traditional marketing [2]. This once again showcases the significant impact of the changing context on marketing practices, as technological improvements have enabled this innovation to occur. These changes are noticeable since they imply that modern marketing practices are becoming more and more diverse. The variety of measures has allowed high degrees of product differentiation, as the brand images established through distinctive marketing tactics make brands appeal to their target consumers for different reasons. In contrast to earlier times, when marketing only assisted in informing consumers about the availability of products without understanding their actual needs and demands, the relationship is now reversed [2]. Today's businesses manufacture products based on consumer demand and then market them accordingly [2]. This allows for more forms of marketing, which are successful as long as they correctly identify the pain points of consumers and make them aware of their need for the product. One obvious change is that marketers are spending less time on pricing strategy and more time on creating a satisfying brand image. However, few studies have explored the relationship between the change in consumer psychology and the timeliness of marketing strategies on a macro level. To prepare for the changing consumer psychology of the future, this study aims to explore the timeliness of marketing strategies and how best to respond to these changes. This is of great significance for the future development of the marketing industry.
Based on the literature about consumer psychology and marketing strategies from the last five years, this research will analyze how consumer psychology changes over time and what implications this has for the timeliness of marketing strategies. This is an analysis of marketing strategies based on consumer psychology in the new media era, trying to put forward suggestions for the future direction of marketing strategies. This study will start from the economic, social, and technological backgrounds, discussing changes that have occurred in the external environment. By studying the impact of the diversification of marketing strategies, access to information, and alternative payment systems on consumer psychology, this research will evaluate the effectiveness of traditional pricing strategies for marketing purposes, and eventually put forward some proposals on using a combination of online marketing tactics for success.
Economic Factors
The world's economy has been growing rapidly in recent decades. Benefiting from the advocacy of globalization, the development of international trade, finance, and labor migration has been enhanced. Compared to the twentieth century, when a limited number of international transactions took place, permitting free trade between countries has greatly improved efficiency and therefore led to progress in raising the universal standard of living. The gross domestic product (GDP) per capita increased consistently on a global scale from 1960 to 2020 [3]. The global economy has maintained a positive annual growth rate for most of this time, indicating a steady increase in income per capita and hence in consumers' demand for goods and services. Although the world's economy has experienced two major economic crises, namely the COVID-19 pandemic and Russia's invasion of Ukraine, it is undoubtedly capable of recovering and continuing its growth, even at a lower rate.
Cultural Factors
As time passes and social trends alter, consumers value how people around them evaluate a product and focus more on emotional appeals than on physical attributes. There are plenty of similar brands that sell homogeneous products in the market. However, consumers' search intensity is often not high enough for them to distinguish the brands. Brand memories, then, play a critical role in the decision. The term "brand memories" refers to brand cues in customers' minds that are made up of general brand knowledge, brand associations, historical brand encounters, virtual brand experiences, word-of-mouth communications, and various other forms of marketing promotions [4]. Such brand awareness is established mainly through the influence of the community or people around them, or even, on a larger scale, social media. The trust between consumers and the brand is built by customers' sharing of knowledge and experience, which enables potential customers to get over their doubts about the product and inspires a desire to buy. Brand memories consequently serve as important determinants of consumer preference, as they build brand images that direct purchase intention and subsequent purchase decisions.
Technical Factors
Technological advancements in the telecommunication sector boost digital payment systems. The emergence of so-called electronic commerce (e-commerce) has made the need for electronic payment systems grow. Several alternative platforms known as "electronic wallets (E-wallets)" have emerged during the past ten years. By enabling clients to make online payments, a digital wallet renders carrying a physical wallet unnecessary. To facilitate smooth money transfers on a global scale, the digital payment industry includes a wide range of highly interoperable vendors. The majority of e-commerce websites and digital services accept a variety of payment methods, including PayPal, Alipay, WeChat, etc. [5]. The impact of technology on one's life is deep, changing the way people socialize and work, and thereby changing marketers' methods of creating value for their brands and competing for profits. In summary, modern consumers pay attention to the emotional values a brand generates, societal perceptions of its products and services, as well as the use of the Internet for promotion and advertising. The implications of universally growing income suggest a possible decrease in consumers' income elasticity of demand and price elasticity of demand, making them less sensitive to changes in either income or price. This means that a decline in prices or a rise in real disposable income is unlikely to be a factor that determines consumers' purchasing intention in modern times. Also, considering emotional appeals, the extent to which consumers are capable of associating themselves with the brand drives purchase behavior. This spiritual link between brands and consumers is built upon trust and brand image. Lastly, the preference for technology throughout the transaction, referring to social media and digital payments, is then noticeable in understanding how consumers make purchase decisions.
Diversification of Marketing Strategies
Undoubtedly, market structures have shifted over time. The emerging literature on general equilibrium analysis during the middle of the 20th century worked with models of perfectly competitive markets, with a freely flowing distribution of information and goods from sellers to buyers and no role for investments in marketing [1]. In a perfectly competitive market, products tend to be homogeneous in the eyes of consumers. An individual company can only increase its sales, and hence revenue, via price competition, that is, lowering the price. This market structure focuses merely on objective product attributes, dismissing any role for marketing to create value beyond clarifying who sells the product. In contrast, the most prevalent market structure nowadays, the monopolistically competitive market, enables marketing to function effectively. Monopolistic competition recognizes the slight differences in products manufactured by distinct producers and even encourages product variation by characterizing brand images uniquely through non-price competition, which is essentially marketing. It has urged the need for a firm to distinguish itself and stand out from its competitors via non-price competition, thereby gaining exposure and establishing brand awareness to increase sales. This brings the diversification of marketing strategies, including the most common instruments such as advertising, promotions at pivotal moments, targeted messages, trade shows, public relations, etc. This inclusive pool of marketing varieties defines the modern marketing industry.
Access to Information
Modern transactions are more transparent as a result of increased access to information. Consumers' purchasing habits, both on more conventional Internet devices and in physical stores, are being impacted by the quick uptake of smartphones and growing reliance on the mobile Internet. This is reflected in differences in the search intensity of consumers. More specifically, access to the Internet provides consumers with the motivation and opportunity to obtain a full picture of the evaluated product. The perceived costs and advantages of traditional web, offline, and mobile channels all vary. With access to mobile platforms, consumers may act right away and conduct searches. Users of smartphones have high expectations, are impatient, are urgently in need of information, and their searches are in the moment [6]. The freedom and openness to search for product information further enhance the effects of product variation formed under monopolistic competition, as it becomes less costly to compare and contrast available products and look for lower-priced substitutes. It also opens up more aspects of the product for consumers to consider when carrying out an evaluation, namely the prototypicality, familiarity, and usage situations of the product, as well as the brand preference of the general public and retrieval cues from Big Data. The comparative evaluations assisted by the Internet may make certain brands more appealing than others through the compromise and attraction effects, influencing consumers' purchasing decisions. Retail searches, on the other hand, require a significant time and financial commitment to visit stores and speak with salespeople. Customers can try things in-store and receive immediate gratification when they make a purchase. But overall, access to the Internet has increased consumers' search intensity, giving them higher motivation and greater opportunity to acquire an understanding of the product from a well-rounded perspective.
Alternative Payment System
Technology also brings another revolutionary change to the business sector: the digital payment system. The differences between cash and electronic payments are therefore significant to investigate. Recently, several supportive studies have drawn a connection between the use of E-wallets and impulsive purchases. Studies have indicated that service quality, perceived convenience, perceived benefits, trust, and promotion can positively influence users' attitudes towards mobile applications, such as Shopee, OVO, and Gopay, which generates users' acceptance of and intention to purchase online [7]. Moreover, the sense of enjoyment, gratification, and pleasure in users can be shaped from the outset by how the E-wallet is designed. With an emphasis on interactivity, visual appeal, and the establishment of subjective norms, the process of using E-wallets can itself bring satisfying emotional value to consumers, even when not associated with the product purchased. In particular, developers can evoke a sense of satisfaction in users by enhancing E-wallets' responsiveness and personalization. Visual elements such as the background color, images, and layout can also appeal to users visually. Combined with encouragement from family, friends, or the surrounding community, the strong word-of-mouth effect produced can further elicit users' positive emotional responses towards digital payment systems. Perceived enjoyment is a significant antecedent in triggering E-wallet users' impulse purchases. On the condition that E-wallet users feel that the process of usage is pleasant, enjoyable, and fun, the perceived experience will induce unplanned purchase desires [8].
Effectiveness of Pricing Strategies for Marketing Purposes
Pricing strategy appears to be outdated in modern times, as price has become a less decisive factor in consumers' decision-making process. That is because changes in the external environment have brought a growing variety of marketing strategies, increasing access to information, and the widespread adoption of digital payment systems. Market sellers continue to develop novel marketing practices that place less importance on price as a physical attribute of the product than on the brand itself. By making the brand appeal to target consumers, the resulting positive attitude shapes consumers' preferences for its products. The growing variety of marketing strategies driven by the transition to monopolistically competitive markets has made price competition a less powerful means of achieving the marketing purpose of raising sales revenue. Similarly, technological advancements have upgraded access to information by enabling consumers to carry out comparative evaluations with higher search intensity. This enhances the subsequent effects of product variation, which ultimately makes price a less influential indicator of a product's attractiveness and therefore of consumers' intention to purchase. The fact that digital payments can bring users a perceived sense of enjoyment and potentially impact impulsive purchases further challenges the rationality of consumers in considering price when making a purchase, especially when the positive emotional response to E-wallets can be indirectly manipulated by designers. When impulse purchase intention is promoted, consumers tend to ignore practicality, as represented by price, while evaluating products. These all support the conclusion that people who are highly inventive and more likely to use online purchasing applications are less price-conscious [9]. As a result, pricing strategies may be ineffective in promoting purchases in modern social contexts.
Uninformative Advertising
To cater to the decline in consumer patience for reviewing product quality, repeated uninformative advertising is the best remedy for modern consumers. Consumers incur transaction costs before making purchasing decisions in most shopping contexts, whether internal, such as thinking and judging through comparison, or external, such as browsing and research. The reason consumers are prompted to choose branded products or services is that they are less costly to consider. Avoiding the unnecessary burden of transaction costs is always a priority for consumers. In this case, consumers are more than likely to adopt the peripheral route during evaluation, which can be conducted without scrutiny of the merits of the information and hence with less effort. By recognizing a brand and recalling relevant product information, transaction costs can be reduced, resulting in consumers' preference for branded products. This can be achieved through a long-term, repeated, and uninformative way of marketing. A temporary price reduction might not last long, but brand memories forged through decades of upfront marketing investment are so embedded in consumers' minds that they pop up without any deliberate effort. In fact, most forms of brand promotion only serve to remind consumers of the brand and don't generally offer much accurate information [1]. Even when cheaper alternatives are available, consumers frequently pay more for branded goods [1]. In this era of information overload, applying repeated uninformative advertising is rather effective.
Social Media Promotion
Social media platforms have been growing in prominence lately as a means of creating, fostering, and maintaining relationships. Individuals have reduced their use of traditional media like TV and radio as a result of the development of social media. These interactions then evolve into a typical buyer-seller relationship. The growing influence of social media platforms has encouraged companies to take advantage of these platforms for marketing, as they connect brands well with all prospective consumers. Social media helped businesses adopt a customer-centric strategy rather than a mass-market one by realigning the marketing focus from a supplier stance to a customer stance [10]. This corresponds to the change in consumer habits introduced in the previous sections, where sellers are expected to identify consumers' needs even when consumers are not aware of what they want. The incorporation of social media in the campaigning structure also facilitates word-of-mouth (WOM), which is defined as the practice of sharing product information among customers [10]. While looking for information to help them make decisions about what to buy, consumers frequently turn to WOM. This is because WOM is made and distributed by friends and family, which is considered a more credible source of information than messages produced by marketers. Given these desirable impacts, social media is a favorable place for marketing.
Influencer Marketing
Businesses are progressively adopting the conceptual model of influencer marketing. The practice can be best described as employing key opinion leaders (KOLs), celebrity or non-celebrity, who have a large following on social media to encourage favorable consumer responses to the brand's interests through posts shared on those platforms [11]. It also enables influencers and followers to collaborate on the co-creation of the brand image [11]. Influencer marketing on Instagram, a commonly used online platform, is becoming so popular that the global industry is anticipated to grow from 13.8 billion dollars in 2021 to almost double that amount by 2025 [12]. To take advantage of the existing influence of KOLs, the key is to limit commercial orientation, reduce brand control, and ensure congruence between the brand image and the influencer. Just as social media shapes marketing practices to be customer-centric, influencer marketing should focus on the followers. This can be assisted by digital celebrities, who should be capable of influencing their followers' purchases by appealing to their impulses to own the promoted products [13].
Intensive brand control may reveal commercial proposals; the focus should be on the influencer, not the brand. This brings up another point: selecting a suitable influencer. The chosen KOLs should have an image that fits well with the brand and a certain degree of reputation among the target audience. By establishing authority and remaining congruent, the influencer and the brand can work collaboratively towards success.
Conclusion
As suggestions for existing marketing practices, strategies that combine repeated uninformative advertising, social media promotion, and influencer marketing are desirable. Contemporary consumers are tired of information overload because they are exposed to all kinds of information all the time. Without directly informing consumers about the advantages of products through advertisements, marketing can still be successful as long as consumers think of the brand at the time of purchase. Marketers are therefore advised to strip marketing of information regarding the objective performance of the product itself and instead provide sentimental value or act as a reminder of the brand. Realizing the influence of modern technology on the way people socialize, all businesses should look forward to integrating online marketing practices to leverage the power of social media. Shifting the focus from suppliers to consumers, marketers should start to listen to consumers' voices. This can be achieved by creating and actively operating brands' online social media accounts, interacting with prospective consumers, and carefully monitoring changes in social trends to get up-to-date information. Together, these help businesses capture consumers' insights and hence establish an adaptable and accessible brand image. Likewise, influencer marketing leads to growing brand awareness with the help of KOLs, who play a critical role in linking potential customers with the brand. Marketers are then encouraged to avoid traditional endorsement formulas and allow influencers to come up with their own messages based on their understanding of the brand. Provided the influencer remains congruent with the brand's values, marketers should welcome and allow different perspectives and interpretations to further enlarge brand influence. Transitioning from a traditional pricing strategy to a multi-faceted mix of new media marketing strategies is the prospect of the marketing industry. This research argues that pricing strategies should be replaced in contemporary marketing tactics and that social media is progressively used to help businesses expand. It therefore provides insights for marketers to create customer-focused marketing strategies by keeping track of changes in consumer psychology, acknowledging advancements in the external environment, and constantly updating their strategies based on their research. In the future, more relevant research should be continued to ensure that marketing strategies keep pace with social development.
Proceedings of the 2023 International Conference on Management Research and Economic Development. DOI: 10.54254/2754-1169/26/20230594
Whole-exome sequencing implicates UBE3D in age-related macular degeneration in East Asian populations
Age-related macular degeneration (AMD) is a leading cause of irreversible central blindness among the elderly worldwide. We use exome sequencing to analyse nonsynonymous single-nucleotide variants (SNVs) across the whole genome of 216 neovascular AMD cases and 1,553 controls. As a follow-up validation, we evaluate 3,772 neovascular AMD cases and 6,942 controls from five independent cohorts in the East Asian population. Here we show strong evidence of an association at a novel, missense SNV, rs7739323, which is located in the ubiquitin protein ligase E3D (UBE3D) gene (Pmeta=1.46 × 10−9, odds ratio (OR)=0.74, 95% confidence interval (CI): 0.63–0.88). Furthermore, ablation of the UBE3D protein leads to an abnormal amount of pigment granules deposited in the retinal pigment epithelium microvilli area and an abnormal response on electroretinography (ERG) in UBE3D+/− heterozygous mice. Our findings indicate that the ubiquitin-proteasome system may play a role in the pathogenesis of neovascular AMD. Age-related macular degeneration is a prominent cause of irreversible blindness among the elderly. Here Huang et al. identify a novel missense variant in UBE3D that sheds new light on the pathogenesis of the disease.
Age-related macular degeneration (AMD) is a leading cause of irreversible central blindness among the elderly worldwide 1 . Neovascular AMD is a subtype of advanced AMD and is responsible for >80% of severe vision loss caused by AMD 2 . The aetiology of AMD is multifactorial and is affected by age, the environment, and genetic susceptibility [3][4][5] .
Previous genome-wide association studies have identified numerous risk-associated variants within 19 susceptibility loci for AMD [6][7][8][9][10] . However, genome-wide association studies focus on common variants (minor allele frequency (MAF) >5%), which are usually located in the intronic regions of the chromosome and do not encode specific functional proteins 11,12 . The functional coding variants in these susceptibility loci have not been systematically investigated.
Whole-exome sequencing (WES)-based studies are emerging as a popular approach to test for the association of coding variants with complex traits and diseases [13][14][15] . Exome sequencing evaluates the exonic regions of the chromosome, which constitute ~1% of the entire genome and encode functional proteins. It is possible to obtain deep coverage, which is necessary for the detection of variants, with relatively few reads 16 . Therefore, there is strong interest in exome sequencing because of its potential to identify genes underlying complex traits and enable straightforward functional annotation of coding variation 15 .
The purpose of the present study is to identify the genetic variants that confer susceptibility to neovascular AMD. We screen the susceptibility loci for neovascular AMD in the coding exome using WES. Then, we validate the contribution of functional coding variants to genetic susceptibility for AMD in six independent samples. Furthermore, we generate UBE3D+/− heterozygous mice (no UBE3D−/− mice survive) and perform electron microscopy to investigate the structural changes and electroretinography (ERG) to measure retina function. Our findings indicate that the ubiquitin proteasome system (UPS) may play a role in the pathogenesis of neovascular AMD.
Results
SNP associations in the exome-sequencing discovery cohort. In the discovery stage, we performed WES of 216 neovascular AMD cases and 1,553 controls (Table 2 and Supplementary Fig. 2). We identified 40,062 SNPs that were previously reported in the Single Nucleotide Polymorphism Database (dbSNP) and 22,808 novel SNPs (Fig. 1). Compared with the SNPs in the 1000 Genomes Project (China Han + Japan Tokyo (CHB+JPT), release 2010-03), the recovery rate of common SNPs (MAF ≥ 5%) was 91.4% (Supplementary Table 3), suggesting a low false-negative rate in our data. In addition, we performed a validation assay using mass-spectrum genotyping technology on 79 randomly selected SNPs in 252 exome-sequenced samples (Supplementary Table 4). The genotype concordance rate between the genotyping and sequencing was 99.5% (19,281/19,387), demonstrating the high quality of our sequencing data and the accuracy of our genotype calling. Initial association analyses performed using the 216 AMD cases and 1,553 controls from this discovery stage confirmed previously reported associations of AMD with sequence variants mapping to CFH, HTRA1, B3GALTL, C2 and SKIV2L (Supplementary Table 5). UBE3D is not a highly polymorphic gene in the East Asian population. The MAF of rs7739323 is between 0.13 and 0.15 in the Chinese population, and 0.09 in the Japanese (according to the 1000 Genomes Project; http://www.1000genomes.org/). In our study, the MAF ranged from 0.13 to 0.22 in the control Chinese population and was 0.11 in the Japanese population (Supplementary Fig. 3). The transition/transversion ratio was 2.72 for MAF = 0.1–0.2 (Supplementary Table 6).
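For concreteness, the two bookkeeping quantities quoted above, minor allele frequency and genotype concordance, can be computed as follows. This is an illustrative sketch, not the authors' pipeline, and the function names are ours.

```python
def minor_allele_freq(n_ref_hom, n_het, n_alt_hom):
    """MAF from genotype counts: each diploid individual carries two alleles."""
    total_alleles = 2 * (n_ref_hom + n_het + n_alt_hom)
    alt_alleles = 2 * n_alt_hom + n_het
    p_alt = alt_alleles / total_alleles
    # the minor allele is whichever is rarer
    return min(p_alt, 1.0 - p_alt)

def concordance_rate(n_agree, n_total):
    """Fraction of genotype calls that agree between two platforms."""
    return n_agree / n_total
```

For example, `concordance_rate(19281, 19387)` reproduces the 99.5% figure reported above.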
We performed linkage disequilibrium (LD) analysis of rs7739323 and found that this SNP was not linked with any other previously reported AMD loci on chromosome 6 (C2-CFB, VEGFA, COL10A1 and IER3-DDR1); in addition, no other SNPs were in high LD with rs7739323, which indicates that the results can be attributed to rs7739323 and not to neighbouring SNPs (Supplementary Fig. 4).
Furthermore, we performed a SKAT-O test, and found that UBE3D was associated with AMD with P = 3.55 × 10−3, which is consistent with the association of rs7739323 and AMD (Supplementary Data 1).
Figure 1: The association results of the single-variant analysis (χ²-test P values (−log10 P); 216 neovascular AMD cases and 1,553 controls) are plotted against the genomic position. The previously identified loci are presented in blue, and the locus that was identified in the current study is presented in red.
Independent replication of SNP associations and meta-analysis.
A total of 53 SNP markers not mapping to known AMD loci surpassed P < 1 × 10−4, and thus were used for replication genotyping in two independent cohorts (North China and Beijing China cohorts), comprising an additional 1,653 cases and 1,251 controls (Table 2 and Supplementary Table 7). We note significant evidence of replication at rs7739323, which encodes a non-synonymous V>M substitution at position 379 of the gene for UBE3D (P_replication for Northern China = 0.011 and P_replication for Beijing = 2.12 × 10−4; P_replication China overall = 3.57 × 10−6; Table 2). We then evaluated whether this association could be extended outside of China, and thus further assessed it in three additional independent AMD cohorts comprising 2,119 AMD cases and 5,691 controls drawn from Hong Kong, Japan and Singapore. We observed supportive evidence for replication in these three additional samples, yielding a genome-wide significant observation when all six AMD cohorts were meta-analysed (P_meta = 1.46 × 10−9, odds ratio (OR) = 0.78, 95% confidence interval: 0.67–0.90; Fig. 2a). We note some degree of heterogeneity upon meta-analysis of all six cohorts (heterogeneity χ² = 16.85, P = 0.005, I² index = 70.3%). However, this heterogeneity arose mainly upon the inclusion of the Singapore cohort. We then repeated the meta-analysis after removing each of the independent cohorts one at a time. No individual study affected the overall OR, as the omission of any single study did not produce a significant difference (Supplementary Fig. 5).
The homology study of rs7739323 showed that the UBE3D gene is conserved in most animals. However, most animals have a T at this position; only humans and pigs have a C (Fig. 2b).
Electron microscopy was performed to evaluate the retinal anatomy of the mice, and it showed looser retinal pigment epithelium microvilli and more pigment granules (PG) in UBE3D+/− heterozygous mice (3 months old) when compared with age-matched controls (Fig. 3b). In ERG, an objective assessment of retinal function, the amplitudes of the a-wave and b-wave were decreased in UBE3D+/− heterozygous mice (Fig. 3c). These data demonstrate that ablation of the UBE3D protein leads to structural and functional changes in mice.
Discussion
In this study, we performed a comprehensive association analysis on 3,988 neovascular AMD cases and 8,495 controls from East Asian cohorts, and identified a nonsynonymous amino-acid substitution, rs7739323 (encoding a V>M change at position 379 in UBE3D), that was significantly associated with AMD risk. UBE3D, also known as ubiquitin-conjugating enzyme E2C-binding protein (UBE2CBP), accepts ubiquitin from specific E2 ubiquitin-conjugating enzymes and transfers it to substrates, generally promoting their degradation by the proteasome. Moreover, ablation of the UBE3D protein leads to retinal structural changes and abnormal ERG in UBE3D+/− heterozygous mice. Therefore, this study demonstrates a potential link between AMD and the UPS 17 .
The cellular ubiquitin-proteasome system, which is important for regulating the intracellular sorting and degradation of proteins, plays a central role in cellular protein homeostasis 17,18 . The ubiquitin-conjugating system requires three enzymes: E1, a ubiquitin-activating enzyme; E2, a ubiquitin-conjugating enzyme; and E3, a ubiquitin ligase [19][20][21] . The ubiquitin ligase recognizes the target protein that is to be degraded. Thus, an association between AMD and the UPS is biologically plausible. The retina is an integral part of the central nervous system and, similar to other regions of the brain, it is derived from the neural tube, a precursor of the central nervous system. Consequently, AMD is a neurodegenerative disease 22,23 . In the nervous system, the UPS plays a role in normal physiological functions, and evidence gathered over the past decade also indicates a role for the UPS in neurodegenerative diseases 24,25 . A previous study found that deficiency in ubiquitin ligase led to the accumulation of neurofilament light chain and to neurodegeneration 26 . Impairment of the ubiquitin-proteasome pathway can also disrupt signal transduction pathways, resulting in inflammation and angiogenesis; both are important features of AMD 27 . Therefore, dysfunction of ubiquitin protein ligase (E3) may lead to the accumulation of extracellular deposits (a component of drusen), degeneration of photoreceptors and the retinal pigment epithelium (RPE), and late-stage AMD.
Our targeted gene knockout study confirmed the above hypothesis. First, we did not obtain UBE3D−/− mice; only heterozygous UBE3D+/− mice survived. This suggests that UBE3D, although rarely studied, may play an important role in the development and growth of mice. Second, more pigment granules were deposited in the retinal pigment epithelium microvilli in UBE3D+/− heterozygous mice than in wild-type mice, as demonstrated by the electron microscopy analysis, which indicates that dysfunction of ubiquitin protein ligase (E3) leads to the accumulation of extracellular deposits. Third, heterozygous UBE3D+/− mice displayed severely declined ERG responses, which shows that dysfunction of ubiquitin protein ligase (E3) leads to functional changes in the retina.
Our results also suggest that rs7739323 is representative of genetic heterogeneity between East Asian regions (Mainland China, Hong Kong and Japan) and Southeast Asia (Singapore). No heterogeneity was found between the Chinese and Japanese cohorts, which may indicate that the association of AMD with the UBE3D variant is limited to the East Asian population. In addition, we collected the existing data and analysed the effect of age, sex and smoking, and found no significant statistical difference between them (Supplementary Table 8).
In summary, we performed a comprehensive analysis of coding single-nucleotide variants in a large sample and uncovered a strong association between AMD risk and a non-synonymous amino-acid substitution within UBE3D in East Asian populations, thus indicating that the UPS may play a role in the pathogenesis of neovascular AMD.
Methods
Subjects. Six independent study cohorts, totalling 12,483 individuals, were evaluated in this study. The samples in the initial stage (China cohort, exome sequencing) included 216 AMD cases and 1,553 controls; the remaining cohorts (Table 1 and Supplementary Table 1) were recruited in the replication stage. The subjects received a standard ophthalmic examination, including a visual acuity measurement, slit-lamp biomicroscopy and dilated fundus examination, all of which were performed by a retinal specialist. All of the subjects who were diagnosed with AMD underwent fundus fluorescein angiography, optical coherence tomography and indocyanine green angiography. All of the participants provided written informed consent. The study was approved by the ethics committee of Beijing University People's Hospital and was conducted according to the Declaration of Helsinki principles. The investigators were not blind to group allocation during the experiment or when assessing outcomes.
AMD grading and definition. The diagnosis of exudative AMD was made at each site by retinal physicians according to the standard clinical definition. If retinal photographs were available, AMD was also graded following the Wisconsin Age-Related Maculopathy Grading System 28 , the Age-Related Eye Disease Study criteria 29 or the International Classification 19 . The diagnosis of polypoidal choroidal vasculopathy was based on the indocyanine green angiography results using the Japanese Study Group guidelines 30 . Patients demonstrating a branching vascular network that terminated in aneurismal enlargements, that is, polypoidal lesions, and subjects exhibiting other macular abnormalities, such as pathological myopia, idiopathic choroidal neovascularization, presumed ocular histoplasmosis, angioid streaks or other secondary choroidal neovascularization, were excluded from the study. Unaffected subjects were defined as individuals exhibiting no clinical evidence of early or late AMD in either eye or any other eye disease, with the exception of mild age-related cataracts. Subjects with severe cataracts were excluded from the study.
Exome sequencing. We sequenced 1,769 whole exomes in the first stage of the project at BGI Shenzhen. The genomic DNA for each individual was hybridized with the NimbleGen 2.1M-probe sequence capture array 31 to enrich for exonic DNA in each library. We performed sequencing using the Illumina HiSeq 2000 platform for each captured library independently to ensure that each sample had an average coverage of ~31-fold. The raw image files were processed using the Illumina Pipeline (version 1.3.4) for base calling with default parameters, and sequences for each individual were generated as 90-bp reads.
Alignment and genotype calling. The sequence reads were aligned to the human genome assembly hg18 (GRCh36) using SOAP aligner.
Quality control of samples and SNPs. We performed a principal component analysis to control for population stratification; sites with a call rate (quality ≥20 and depth ≥8) >90% were used (1000 Genomes Project data as a reference, release 2010-03, pilot1) (Supplementary Fig. 8). Pairwise Identity-By-Descent (IBD) was used to assess the relationship between individuals; the SNP sites used for IBD had a call rate (quality ≥20 and depth ≥8) meeting the SNP quality control below.
SNP quality control. The following filter criteria were used to control for SNP quality: (a) the SNP should be located in the target region (NimbleGen 2.1M capture kits) and the ±200-bp flanking region; (b) call rate: 90% of the individuals should exhibit high-quality genotypes (quality ≥20 and depth ≥8); (c) alternative depth: at least one individual should have >4 reads supporting an alternative allele; (d) control HWE test P value ≥10−4; (e) strand bias: the distribution of the major and minor allele should not be enriched on either strand, Fisher's exact test P value ≥1.0 × 10−7; (f) homopolymer run ≤6; and (g) end enrichment: the major and minor alleles should not be enriched at the end of the reads, Fisher's exact test P value ≥1.0 × 10−7. The missing SNPs that were located in indels in the ±5-bp region were detected using the 1000 Genomes Project (http://www.1000genomes.org, release 2012-10-16).
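The per-SNP filters (b)–(g) above amount to a conjunction of thresholds. A minimal sketch of such a filter, with hypothetical field names of our own choosing rather than those of the original pipeline:

```python
def passes_snp_qc(snp):
    """Apply the per-SNP quality filters described above.

    `snp` is a dict of summary statistics for one site; the key names
    here are illustrative, not from the original pipeline.
    """
    return (
        snp["call_rate"] >= 0.90             # (b) high-quality genotype rate
        and snp["max_alt_reads"] > 4         # (c) alternative-allele depth
        and snp["hwe_p"] >= 1e-4             # (d) control HWE test
        and snp["strand_bias_p"] >= 1e-7     # (e) Fisher's exact, strand bias
        and snp["homopolymer_run"] <= 6      # (f) homopolymer run length
        and snp["end_enrichment_p"] >= 1e-7  # (g) Fisher's exact, read-end bias
    )
```

A site failing any single criterion, for example the HWE test, is dropped from further analysis.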
Replication cohorts. We examined the association of the top SNPs from our discovery cohort in two independent case-control cohorts ( Supplementary Fig. 1).The most highly associated SNP from the discovery cohort was directly genotyped in six replication cohorts.
Generation of CRISPR-mediated UBE3D knockout mice. The CRISPR-mediated UBE3D knockout mice were produced by Beijing View Solid Biotechnology, China. The linear plasmid pCAG-T7-Cas9 cut by the NotI restriction enzyme was used as the in vitro transcription template. After gel purification, Cas9 mRNA was transcribed with the mMESSAGE mMACHINE T7 Ultra Kit (Life Technologies). UBE3D-gRNA2 templates were amplified based on the gRNA scaffold using T7 promoter sequence-conjugated primers: T7-UBE3D-g2-FP and gRNA-RP (Supplementary Table 9). UBE3D-gRNA2 was transcribed with a fast in vitro transcription T7 kit (cat. no. VK010, Beijing View Solid Biotechnology, China) and frozen at −80 °C. Zygotes of C57BL/6 mice (n = 120) were injected with Cas9 mRNA and UBE3D-gRNA2 in M2 media (Millipore) using a FemtoJet micromanipulator (Eppendorf, Germany). After microinjection, zygotes were transferred to pseudopregnant females. All mice were maintained in a specific pathogen-free facility. The tail-derived DNA from 2-week-old newborn mice was genotyped by sequencing the PCR products amplified with the primers UBE3D-sens and UBE3D-anti (Supplementary Table 9). The mutant mice were mated with wild-type C57BL/6 mice to obtain heterozygous UBE3D+/− mice. Western blotting analysis was performed to confirm the UBE3D expression (Supplementary Fig. 6E and Supplementary Fig. 7). The study was approved by the ethics committee of Beijing University People's Hospital.
Electron microscopy. Fixed tissues were osmicated for 45–60 min in 0.5–1% OsO4 in 0.1 M cacodylate buffer, processed in maleate buffer for staining with uranyl acetate and resin-embedded 32 . Ultrathin sections were cut at 90 nm using a Leica ultramicrotome and imaged at 80 keV using a JEOL JEM 1400 electron microscope. Images were captured using a 16-megapixel GATAN UltraScan 4000 camera.
ERG measurement of retina function. Heterozygous UBE3D+/− mice and age-matched wild-type mice were dark-adapted overnight and anaesthetized. The coordination of ERG stimulation and the recording of electrical responses were completed using the Phoenix Micron IV Retinal Imaging Microscope (Phoenix, Pleasanton, CA, USA) according to the manufacturer's instruction manual 33 . The pupils were dilated and a gold-tipped electrode was placed centrally on the cornea. The Ganzfeld mouse default setting was used. Two flashes were delivered with a 15-s interval and five responses were averaged. The amplitudes of the a-wave and b-wave were recorded.
Meta-analysis. A meta-analysis across cohorts was performed as previously reported 34,35 . The meta-analysis was performed using Stata/SE version 11.0 (Stata Corp., College Station, TX, USA). Between-group heterogeneity was assessed using Cochran's Q-test and the I² index. We conducted influence analysis by omitting each study to find potential outliers.
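To make the procedure concrete, a fixed-effect inverse-variance meta-analysis over per-cohort odds ratios, together with Cochran's Q and the I² index, can be sketched as below. This is our own minimal illustration, not the Stata code used in the study.

```python
import math

def meta_fixed_effect(odds_ratios, std_errors):
    """Inverse-variance fixed-effect meta-analysis on log odds ratios.

    `std_errors` are the standard errors of the log(OR) estimates.
    Returns (pooled OR, SE of pooled log OR, Cochran's Q, I^2 in %).
    """
    logs = [math.log(o) for o in odds_ratios]
    weights = [1.0 / se**2 for se in std_errors]
    pooled = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    pooled_se = 1.0 / math.sqrt(sum(weights))
    # heterogeneity: weighted squared deviations from the pooled estimate
    q = sum(w * (l - pooled) ** 2 for w, l in zip(weights, logs))
    df = len(odds_ratios) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return math.exp(pooled), pooled_se, q, i2
```

A Q statistic well above its degrees of freedom signals between-cohort heterogeneity, as with the I² = 70.3% reported for the six-cohort analysis; influence analysis then repeats the pooling with each cohort left out in turn.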
Statistical analysis. The exome sequence data were analysed with a χ²-test using PLINK software (http://pngu.mgh.harvard.edu/~purcell/plink, version 1.07) and the parameter set '--assoc'. The OR and MAF of the affected and unaffected samples were calculated at the same time. The allelic and genotypic distributions between the different groups were compared using a χ²-test. For the low-frequency mutations (MAF < 5%), SKAT-O was used. SKAT-O 36 is a SNP set-level (for example, a gene or a region) test for the association between a set of rare or common variants and dichotomous or quantitative phenotypes. SKAT aggregates individual score test statistics of SNPs in a SNP set and efficiently computes SNP set-level P values. A meta-analysis across cohorts was performed using Stata/SE version 11.0 (Stata Corp.).
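The allelic test reduces to a χ² statistic with one degree of freedom on a 2×2 table of allele counts, from which the OR also follows. A hand-rolled sketch of these two quantities (the same ones PLINK's allelic association test reports per SNP):

```python
def allelic_assoc(case_minor, case_major, ctrl_minor, ctrl_major):
    """Chi-square (1 d.f.) and odds ratio for a 2x2 allele-count table."""
    n = case_minor + case_major + ctrl_minor + ctrl_major
    cross = case_minor * ctrl_major - case_major * ctrl_minor
    # standard shortcut formula for the 2x2 chi-square statistic
    chi2 = (n * cross**2) / (
        (case_minor + case_major) * (ctrl_minor + ctrl_major)
        * (case_minor + ctrl_minor) * (case_major + ctrl_major)
    )
    odds_ratio = (case_minor * ctrl_major) / (case_major * ctrl_minor)
    return chi2, odds_ratio
```

Identical minor-allele frequencies in cases and controls give χ² = 0 and OR = 1; a protective allele, as reported for rs7739323, shows up as OR < 1.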
"year": 2015,
"sha1": "847b0604b800bebb8f18dc47d58227f594987a66",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/ncomms7687.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "c884967fe63c5fc279b0710e436fa3dc00629c94",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
10011191 | pes2o/s2orc | v3-fos-license | Topological Quenching of Spin Tunneling in Mn_12-acetate Molecules
We investigate the topological quenching of spin tunneling in Mn_12-acetate molecules with an applied magnetic field along the hard axis. The model Hamiltonian describing this system has a fourth-order term due to the tetragonal anisotropy. We treat this model using the discrete phase integral formalism in which the Schrödinger equation corresponding to the Hamiltonian becomes a nine-term recursion relation. We solve this recursion relation and find that the tunnel splitting is quenched at certain values of the applied field. We also present a qualitative treatment based on the instanton approach.
I. INTRODUCTION
Quantum tunneling of a spin or spin-like degree of freedom has been discussed for over a decade now [1], but unambiguous evidence for its existence has only come recently [2] from studies on the magnetic molecule [Fe 8 O 2 (OH) 12 (tacn) 6 ] 8+ (Fe 8 for short). This molecule has a total spin S = 10, is biaxially symmetric, and can be modeled by the spin Hamiltonian with k 1 > k 2 > 0. In the first approximation, it has two degenerate ground states, approximately given by S z = ±10, which are separated by an energy barrier in the xy-plane. The question of interest is to understand how these states are admixed by quantum tunneling. Direct numerical diagonalization of Eq. (1.1) using the experimentally determined values k 1 ≃ 0.33 K, k 2 ≃ 0.22 K [3][4][5] reveals that the tunnel splitting ∆ is ∼ 10 −9 K, which is too small to be observed directly. Wernsdorfer and Sessoli [2] overcome this difficulty by applying a small-amplitude ac magnetic field along the z direction, which causes the S z = ±10 levels to cross one another. Transitions between these levels are now possible via the Landau-Zener-Stückelberg (LZS) process [6], and the underlying tunneling matrix element ∆ can be deduced from a measurement of the incoherent LZS relaxation rate for the total magnetization. The key experimental fact that supports this interpretation of the relaxation (which could after all be due to a classical activation process a priori) is a systematic and remarkable oscillatory dependence of the inferred splitting ∆ on the strength of the magnetic field H when this field is applied along the hard direction x̂. This phenomenon was predicted some time ago [7] based on an instanton approach. Briefly, when H ∥ x̂, there are two symmetry-related instanton paths that wind around x̂ in opposite directions, and together form a closed loop on the (complexified) unit sphere.
The actions for the instantons are complex, and differ by a real valued Berry phase given by S times the area of the loop, giving rise to interference [8]. Now, however, the Berry phase is not fixed at 2πS, but may be continuously varied by varying H. As a function of H, therefore, the tunnel splitting oscillates, and is completely quenched at values of H where the Berry phase is an odd integer times π.
In this paper, we wish to study topological quenching of tunneling in a second magnetic molecule, Mn 12 -acetate (or Mn 12 -ac for short), which has also been the subject of several experimental studies [9][10][11][12][13]. The reason for our interest in Mn 12 -ac is that, in contrast to Fe 8 , it has tetragonal symmetry. The spin Hamiltonian of Mn 12 -ac can be written as where A ≫ B ≫ C > 0. The easy axis z now has four-fold symmetry, the hard axes are ±x and ±y, and the medium axes are the lines y = ±x in the xy-plane. Here, the symmetry of a pair of instanton paths is preserved when a magnetic field is applied along one of the four hard axes. Thus, the quenched spin tunneling phenomenon is also anticipated in Mn 12 -ac.
Whether it can be observed or not depends on how strong the environmental decoherence is, and is a question that we shall not investigate here.
To investigate the topological quenching of the spin tunneling in Mn 12 -ac molecules we use the discrete phase integral (DPI) (or discrete WKB) method [14,15]. The DPI method has been applied to spin tunneling problems in a recent series of works by one of us (AG) [16][17][18][19]. This method is semiclassical just as the instanton approach is, but it is easier to use for the study of the splittings of higher pairs of levels. It is particularly well suited to study tunneling when H is not along the hard axis. As in Fe 8 , we anticipate that ∆ will be quenched at a number of such field values [17,20], which correspond to non-trivial diabolical points [21,22] in the magnetic field space. In this paper, however, we shall not consider such general orientations of the field as the calculations for H ∥ x̂ are already quite complex.
In the following section we present the DPI formalism for the present model. Unlike the previously studied model for Fe 8 , the Schrödinger equation corresponding to the spin Hamiltonian for Mn 12 -ac becomes a nine-term recursion relation because of the fourth-order term. We give a systematic analysis for this recursion relation. We then calculate the tunnel splittings as a function of the applied field for the first few energy levels. The results will be compared with those obtained by numerical diagonalization of the spin Hamiltonian. In Sec. III we give a qualitative discussion of spin tunneling based on the instanton approach. This gives a good physical picture of the quenched spin tunneling and explains some of the interesting features found in the DPI results. A summary of the results in Sec. IV concludes the paper.
II. DPI CALCULATION OF TUNNEL SPLITTINGS
We consider the spin Hamiltonian in Eq. (1.2) with the magnetic field applied along the x axis. For convenience we divide the Hamiltonian by A to work with dimensionless quantities. With this choice we can write Here, µ B is the Bohr magneton, g = 2, and S is the spin. Following Ref. [13], A/k B = 0.556 K, B/k B = 1.1 × 10 −3 K, C/k B = 3 × 10 −5 K, so that λ 1 = 1.98 × 10 −3 , λ 2 = 5.4 × 10 −5 , and H c = 0.414 T. Let |n̂⟩ = |θ, φ⟩ be the spin coherent state with maximal spin projection along the direction n̂, with spherical coordinates θ and φ. We introduce the classical energy When h x = 0, H c has minima at θ = 0, θ = π. As h x is increased, these minima move toward θ = π/2, φ = 0, lying in the xz-plane. At a certain critical field, h xco , these minima will merge with each other, giving rise to a double zero of ∂H c (θ, φ = 0)/∂θ at θ = π/2. By using this condition, we can show that With the experimental numbers given above, h xco = 1.946 [23].
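The quoted numbers can be cross-checked in a few lines. The identifications λ 1 = B/A, λ 2 = C/A, and g µ B H c = A used below are inferred from the quoted values themselves (the defining equation is not reproduced in the text), so they should be read as an assumption consistent with the numbers:

```python
# Reproduce the dimensionless parameters quoted in the text from the
# anisotropy constants of Ref. [13].  The relations lambda1 = B/A,
# lambda2 = C/A and g*muB*Hc = A are inferences from the quoted values.
kB  = 1.380649e-23      # Boltzmann constant, J/K
muB = 9.2740100783e-24  # Bohr magneton, J/T
g   = 2.0

A = 0.556 * kB          # easy-axis anisotropy, J
B = 1.1e-3 * kB
C = 3e-5 * kB

lam1 = B / A            # ~ 1.98e-3, as quoted
lam2 = C / A            # ~ 5.4e-5, as quoted
Hc   = A / (g * muB)    # ~ 0.414 T, as quoted
```

All three reproduce the quoted values to the precision given in the paper.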
A. Recursion Relation
The DPI formalism can be started from the Schrödinger equation in the S z representation. Introducing H|ψ⟩ = E|ψ⟩, S z |m⟩ = m|m⟩, ⟨m|ψ⟩ = C m , and ⟨m|H|m ′ ⟩ = t m,m ′ , the Schrödinger equation corresponding to the Hamiltonian (2.1) can be expressed as This is a nine-term recursion relation with diagonal terms t m,m from S 2 z and S 4 z , and off-diagonal terms t m,m±1 , t m,m±4 which are from the S x and S 4 ± parts, respectively. Since there are no S 2 ± or S 3 ± terms in the Hamiltonian, we have t m,m±2 = t m,m±3 = 0. The recursion relation (2.5) may be interpreted as the Schrödinger equation of an electron in a one-dimensional tight-binding model. That is, we can consider the diagonal and off-diagonal terms as the on-site energy and hopping terms, respectively. Once this analogy is recognized, assuming t m,m±α (α = 0, 1, or 4) vary slowly with m, we can treat the recursion relation within a continuum quasiclassical approximation or a phase integral method [18,14,15]. With this approximation we can define smooth functions For the present model, t α 's are given by where we have used the approximation S(S + 1) ≈S 2 . Introducing the DPI wavefunction within the semiclassical approximation we have the Hamilton-Jacobi equation (2.10) In Eqs. (2.8) and (2.10), q(m) is a local, m-dependent Bloch wave vector obtained by solving Eq. (2.9) for q for any given energy E. It is very useful to have a physical picture of these equations. For a given value of m, Eq. (2.9) gives an energy band E(q) which defines the classically allowed range of energies. In Fig. 1 we show possible E vs. q curves for our problem. At the lower and upper edges of the band the transport equation shows that v(m) becomes zero because the slope ∂E(q)/∂q is zero. This means the band edges are related to the classical turning points. These are not the only turning points, however. Such points are more generally defined by the condition that the velocity v(m) vanishes.
This condition produces additional loci in E − m space, which we call critical curves, along with the mdependent band edges. These curves are crucial to understanding how the oscillating tunnel splitting, i.e., the quenching effect, appears.
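The nine-term structure, and the exact diagonalization used later for comparison, can be sketched by building the Hamiltonian matrix in the S z basis. The operator form assumed below, H = −A S z 2 − B S z 4 + (C/2)(S + 4 + S − 4 ) − g µ B H x S x , is an inference (the displayed Eq. (1.2) is not reproduced in the text) chosen to be consistent with the stated signs t 0 , t 1 < 0 and t 4 > 0, with the constants of Ref. [13] in kelvin:

```python
import numpy as np

S = 10
dim = 2 * S + 1
m = np.arange(-S, S + 1)

# Ladder operator S+ in the |m> basis: <m+1|S+|m> = sqrt(S(S+1) - m(m+1))
Sp = np.zeros((dim, dim))
for i in range(dim - 1):
    Sp[i + 1, i] = np.sqrt(S * (S + 1) - m[i] * (m[i] + 1))
Sz = np.diag(m.astype(float))
Sx = 0.5 * (Sp + Sp.T)

# Anisotropy constants in kelvin (Ref. [13]); operator form assumed as above
A, B, C = 0.556, 1.1e-3, 3e-5
Sp4 = np.linalg.matrix_power(Sp, 4)     # S_+^4: hops m -> m + 4

def H(hx_kelvin):
    """Spin Hamiltonian with a hard-axis field, g*muB*Hx expressed in K."""
    return (-A * Sz @ Sz - B * np.linalg.matrix_power(Sz, 4)
            + 0.5 * C * (Sp4 + Sp4.T) - hx_kelvin * Sx)

# At zero field only the diagonal and Delta-m = 4 hops survive, the
# nine-term structure of the recursion relation.
E = np.linalg.eigvalsh(H(0.0))          # ground level ~ -A*S^2 - B*S^4 = -66.6 K
```

At h x = 0 the matrix has no Δm = 1, 2, 3 elements, which is exactly the statement t m,m±2 = t m,m±3 = 0 with only the S x term able to generate Δm = ±1.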
B. Critical Curves

From Eq. (2.10) the condition v(m) = 0 is satisfied when q = 0, or q = π, or q = q * , where q * is the solution of 32 t 4 (m) cos 3 q * (m) − 16 t 4 (m) cos q * (m) + t 1 (m) = 0. (2.11) Substituting these into Eq. (2.9) we obtain the following energy curves for each of the three q's where U 0 (m) ≡ E(0, m), U π (m) ≡ E(π, m), and U * (m) ≡ E(q * (m), m). Whenever a given energy E crosses one of these curves a turning point occurs. Various types of turning points depending on the characteristics of the critical curves have been analyzed in Ref. [18]. An interesting feature of this analysis is the existence of novel turning points inside the classically forbidden region, which is crucial for the quenching of spin tunneling. The recursion relation studied there was based on a spin Hamiltonian which includes terms up to second order, and there were only three critical curves to be considered. Here, we expect to have up to five curves, U 0 (m), U π (m), and up to three U * (m)'s from the cubic equation (2.11).
In order to proceed further, it is necessary to analyze the critical curve structure more closely, in particular, its dependence on h x . To do this, let us first compare U 0 (m) with U π (m). From Eq. (2.7) it can be easily seen that U π (m) > U 0 (m) since t 0 (m) < 0, t 1 (m) < 0, and t 4 (m) > 0 for all |m| <S. Thus, U π (m) can be the upper band edge. However, in order for this to be so we still need to prove that U π (m) > U * (m). This is not obvious. Indeed, since the Eq. (2.11) is a cubic in cos q * , it is possible to have complex solutions. These solutions will yield a complex U * (m), which is not of interest because the Hamilton-Jacobi equation E = U * (m) can not then be satisfied. A careful consideration of the solutions of Eq. (2.11) is therefore necessary.
Defining x = cos q * , µ = m/S, and using Eq. (2.7) for the t α 's, we can write Eq. (2.11) as (2.13) A sketch of the function f (x) is drawn in Fig. 2. This sketch incorporates the following easily verified properties of f (x): It follows that a curve of type marked (a), characterized by one real zero of f (x), arises when h x is large, or when |m| is large, and that a curve of type marked (b), characterized by three real zeros, arises when h x is small, or when |m| is small. Let us denote the largest zero by x 1 , and the other two, when they are real, by x 2 and x 3 with x 2 > x 3 . The corresponding values for q * (m) and U * (m) are denoted by q * i and U * i (m), with i = 1, 2, or 3. It is obvious that x 1 > 0, and that −1 < x 3 < −1/√6 < x 2 < 0. The first real root yields a positive value for cos q * 1 , but since we cannot say if x 1 is greater or less than 1, q * 1 may be real or pure imaginary. The other two real roots, when they exist, always yield real wavevectors q * 2 and q * 3 .
The transition from one to three real roots occurs when f (x) has a double zero, i.e., f (x) and f ′ (x) both vanish simultaneously. It is easily shown that this condition is equivalent to The curve h xc (m) and some special values of h x are displayed in Fig. 3. The physical meanings of these values are listed in Table I. From the arguments of the previous paragraph, it follows that we will have three zeros when h x < h xc (m), and one zero when h x > h xc (m).
When h x < h xmax , we can also ask for the points ±m a (h x ) at which we change from one to three real roots of f (x). These are directly given by solving Eq. (2.14) for h xc (m) = h x : (2.16) Next, let us investigate whether U * 1 (m) is inside or outside the classically allowed energy band. Since x 1 moves to larger positive values as |m| increases (see Fig. 2), we see that U * 1 lies inside the band if x 1 < 1, i.e., |m| < m * , where m * is such that f (1) = 0. Solving this equation we get Clearly, m a < m * . Let us also explore whether the U * i (m)'s, when they are real, are larger or smaller than U π (m) or U 0 (m). We consider the following differences: where i, j = 1, 2, or 3 and we have used Eq. (2.11) to eliminate t 1 (m) in favor of t 4 (m). From these equations, and using the facts that t 4 (m) > 0, plus, we find: 1. When there is only one real root, 2. When there are three real roots, which, from Eqs. (2.21) and (2.22), is equivalent to cos q * 3 (m = 0, h xi ) = −2/3. We can now list the various types of critical curve patterns that arise in our problem, and the corresponding ranges of the field h x . In the following, U − (m) and U + (m) denote the lower and upper bounds of the energy band, and U f (m) and U i (m) mean the forbidden and internal energies, respectively.
Case I : h xr < h x < h xco . In this case U * 2 and U * 3 are not real for any m, and q * 1 (m) is imaginary, i.e., U * 1 (m) is outside the band for all |m| ≤S. The energy band E(q) is of the type in Fig. 1a for all m, and the critical curves become which are shown in Fig. 4.

Case II : h xmax < h x < h xr . Now, U * 2 and U * 3 continue to be complex for all m, but q * 1 (m) is real in the central region |m| < m * . In this region, the energy band is as in Fig. 1b, while in the outer region it is of the type in Fig. 1a. Accordingly, the critical curves have the structure shown in Fig. 5 and can be written as

Case III : 0 < h x < h xmax . There are now three m regions. In the outer region, |m| > m * , U * 2 and U * 3 are still complex, U * 1 is outside the band, and E(q) has the shape in Fig. 1a. In the intermediate range m a < |m| < m * , U * 2 and U * 3 continue to be complex, but U * 1 is inside the band, and E(q) has the shape in Fig. 1b. In the central range, |m| < m a , U * 2 and U * 3 become real, and E(q) has the shape shown in Figs. 1c and 1d. The critical curves can be expressed as in Fig. 6a [25]. When h x < h xi , as in Eq. (2.25), there is a range of m values in which U * 3 < U 0 (see Fig. 6b). For the experimental parameters relevant to Mn 12 -ac, the field h xi is rather small, and the points m 0 , m * , m a , and m i are all clustered tightly near m =S. This means that for the low lying states, there will be four turning points very close to one another, and the DPI analysis would have to be done using a quartic turning point formula, analogous to the quadratic turning point formula as discussed by Berry and Mount [26]. Since we know the qualitative structure of the energy spectrum for fields as small as h xi , based on the arguments of Sec. III, e.g., this exercise is largely academic, and we have chosen not to perform it. This means that our analysis is not quite correct at very small fields, and this can be seen in Fig.
7, especially in the behavior of the splitting between the first excited pair of levels. As we shall discuss in Sec. III, this splitting is rigorously zero at h x = 0, whereas we appear to find a zero at a slightly non-zero value of h x .
As discussed in Ref. [19] the quenching of spin tunneling occurs when q(m) has a real part as well as an imaginary part inside the forbidden region. From the viewpoint of energy curves this happens when there is an energy curve inside the forbidden region. From the above analysis we can see that only U * 1 (m) resides inside the forbidden region. For a given energy E such that U 0min ≤ E < U * 1max , q changes from pure imaginary to complex as m passes from the |m| > m c region to the |m| < m c region, where m c is the point where E intersects U * 1 (m) (for example, see Fig. 4). When q becomes complex the semiclassical wavefunction in Eq. (2.8) oscillates with an exponentially decaying or growing envelope. The quenching of spin tunneling arises from this oscillating nature of the wavefunction inside the forbidden region.
We note here that for the experimental Mn 12 parameters, the field h xmax is quite small (see the legend in Fig. 3), and so in the entire field range for Case III, even though there is a forbidden region turning point, the behavior of the ground state tunnel splitting is qualitatively similar to that for h x = 0. The behavior of the splitting of the next two levels is more interesting, and as can be seen from Fig. 7, the DPI method does capture it, at least qualitatively, and perhaps even quantitatively.
C. Tunnel Splittings
We now calculate the energy splitting due to the spin tunneling between degenerate states in Mn 12 -acetate. In Ref. [19], tunnel splittings for a five-term recursion relation have been obtained from Herring's formula. The final result is, however, quite general so that it can be applied to a recursion relation which includes more than five terms. Moreover, as we can notice from the above classifications, although the present nine-term case has more critical curves the possible types of the turning points are all included in those discussed in Ref. [18], and no new type of turning point emerges here. Thus, we can directly apply the formula for the tunnel splittings obtained in Ref. [19] to the present problem. Since our calculation is based on this formula we quote the main results here. The tunnel splitting for the nth pair of states is given by Here, κ and χ are the imaginary and real parts of complex q, respectively, and which is of order (1/S) compared to the energy barrier, as long as n ≪ S. This is why m c is modified, and also why the primary phase integral for the Gamow factor Γ 0 runs from −m 0 to m 0 , the minima of U 0 (m), rather than between the points where U 0 (m) = E n . Since all energy curves are a function of both m and h x , these points still depend on h x , which in turn makes the ∆ n depend on h x . The mass M and frequency ω 0 in Eq. (2.30) are obtained by approximating U − (m) near its minima by a parabola, i.e., U − (m) = E + (1/2)M ω 0 2 (m ± m 0 ) 2 . For m = −m 0 we find, The application of the formulas (2.29)-(2.33) cannot be carried out in closed form all the way, and we must resort to numerical methods. We explain the principal steps in our numerical calculation below.
In step 1, we must find ±m 0 , and U − (±m 0 ). For our problem we discover that U − (m) is always given by U 0 (m) near the classically allowed minima. The equation for the minima can be reduced to another cubic, where y = m 2 . For the parameters λ 1 and λ 2 of interest to Mn 12 , and h x < h xco , all three roots of this cubic equation are real, but only one is positive. This root gives us m 0 , and substitution of this value into Eq. (2.12) for U 0 (m) gives E, and Eqs. (2.33) then give M and ω 0 .
Step 2 is to obtain the points ±m c given by the roots of the equation As discussed after Eq. (2.32), up to terms of relative order (1/S), the points ±m c are the actual turning points for the low lying energies. Note that it is U * 1 which appears in Eq. (2.35) since this is the critical curve that lies in the classically forbidden region. To solve Eq. (2.35) numerically, we first solve Eq. (2.11) for the function cos q * 1 (m), which can be done in closed form. This solution is then substituted in Eq. (2.12) to obtain U * 1 (m). The entire procedure can be implicitly implemented in the numerical routine. The same holds for dU * 1 (m)/dm. Since U − (m 0 ) is known from step 1, any of the standard root-finding methods (Newton-Raphson, bisection, secant, etc.) can be applied to Eq. (2.35).
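Step 2 amounts to a one-dimensional root search. The sketch below uses a toy stand-in for U * 1 (m) (in the paper the real curve comes from the cubic (2.11) inserted into the band energy, which is not reproduced here) and Brent's method as one of the bracketing root-finders the text mentions:

```python
from scipy.optimize import brentq

# Toy stand-in for the forbidden-region critical curve U*_1(m); any
# smooth curve with a single maximum at m = 0 shows the same mechanics.
def U_star(m, S=10.0):
    return 1.0 - (m / S) ** 2          # hypothetical curve, max at m = 0

E = 0.4                                 # energy from step 1 (hypothetical)
# Turning point m_c > 0 where the energy E intersects U*_1(m)
m_c = brentq(lambda m: U_star(m) - E, a=0.0, b=10.0)
```

Brent's method only needs a sign change on the bracket [a, b], which is guaranteed here since U * (0) > E and U * (S) < E.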
Step 3 is to find q(m), in particular its real and imaginary parts κ 0 (m) and χ 0 (m). This is done by solving the Hamilton-Jacobi equation (2.9) with the energy E found in the first step. The problem amounts to solving a quartic equation in cos q and making sure that one has the correct solution, which can be done easily by making use of the properties that we have found above. Thus in the region m c < |m| < m 0 , there are two solutions of the form iκ (with κ real), and two of the form π − iκ. We discard the latter, and of the former select that one which continuously tends to 0 as m → ±m 0 . In the region |m| < m c , the solutions can be written as iκ ± χ, and as π − (iκ ± χ), where χ → 0 as m → ±m c . We discard the latter two, and read off κ(≡ κ 0 ), and χ(≡ χ 0 ) from the imaginary and real parts of the first two. Note that both κ 0 and χ 0 are taken to be positive.
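Step 3 can be made concrete. Writing the band as E(q) = t 0 + 2 t 1 cos q + 2 t 4 cos 4q, a form consistent with Eq. (2.11) since setting ∂E/∂q = 0 reproduces that cubic, the substitution c = cos q together with cos 4q = 8c 4 − 8c 2 + 1 turns the Hamilton-Jacobi equation into the quartic in cos q mentioned in the text. The t-values in the example are hypothetical placeholders:

```python
import numpy as np

def q_of_E(E, t0, t1, t4):
    """Solve E = t0 + 2 t1 cos q + 2 t4 cos 4q for the (generally complex)
    Bloch wavevector q at fixed m.  With c = cos q and
    cos 4q = 8c^4 - 8c^2 + 1 this is a quartic in c."""
    # 16 t4 c^4 + 0 c^3 - 16 t4 c^2 + 2 t1 c + (t0 + 2 t4 - E) = 0
    coeffs = [16 * t4, 0.0, -16 * t4, 2 * t1, t0 + 2 * t4 - E]
    roots_c = np.roots(coeffs)
    # q = arccos(c); complex c gives complex q = chi + i*kappa, the
    # oscillating-and-decaying solutions inside the forbidden region
    return np.arccos(roots_c.astype(complex))

# Hypothetical t-values; in the paper t0(m), t1(m), t4(m) come from Eq. (2.7)
qs = q_of_E(E=-1.0, t0=-1.2, t1=-0.3, t4=0.05)
```

The four roots realize exactly the cases described above: pure imaginary q = iκ, and q = π − iκ, or complex q = ±χ + iκ, depending on the region of m.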
Step 4 is to find the ǫ partial derivatives κ ′ 0 and χ ′ 0 , in effect ∂q(ǫ, m)/∂ǫ. (B ′ 0 is directly obtainable from κ ′ 0 .) We differentiate the Hamilton-Jacobi equation with respect to E: Since q(m) is found in step 3, this equation gives ∂q/∂ǫ for any m directly. We now have all the ingredients needed to evaluate the one-dimensional integrals Γ 0 , Λ n , F , and Q 1 . This is a straightforward numerical procedure. The only point worth noting is that the integrand for Q 1 is non-singular at m = −m 0 , and behaves, in fact, as (m + m 0 ).
In Fig. 7 we show the tunnel splittings for first three pairs of states as a function of the field parameter h x . For comparison we also plotted the tunnel splittings obtained from exact diagonalization of the Hamiltonian.
From the results we observe several interesting features. First, as anticipated, the tunnel splittings are completely suppressed at certain values of h x . The overall pattern of zeros, their number, and the dependence of this number on n, the pair index, is understandable on general grounds as we shall discuss. What is surprising is how regularly spaced these values of h x are. For the first pair of splittings, e.g., the intervals between successive zeros decrease by 2 or 3% only, and the last interval is 92% of the first. For the next pair, ∆ 2 , the last interval is 95% of the first. The mean interval between zeros for the first three pairs is The regularity of the zeros means that the phase integral Λ n decreases almost linearly with h x . (From Fig. 7, the Gamow factor Γ 0 also appears to be quite linear in h x .) While this variation is clearly expected to be smooth, we have no a priori way to judge how linear it will be. A similarly strong regularity of quenching intervals is experimentally observed in Fe 8 . The simplest model Hamiltonian for Fe 8 entails only second-order terms in the components of the spin operator, and in this model, the spacing of zeros is exactly equal [28], but to describe actual Fe 8 , one must add fourth-order terms. These terms change the spacing significantly, but still seem to preserve its regularity. It would be interesting to find a physical argument for this feature, which appears to be somewhat general.
Second, at h x = 0 the tunnel splitting alternates between zero and nonzero as the level number goes up. This is due to the fourth-order terms in the spin Hamiltonian. These terms cause the tunneling in the Mn 12 system, but they also restrict transitions to those for which the difference, ∆m = |m − m ′ |, between levels E m and E m ′ is a multiple of 4. For tunneling between degenerate states E m and E −m this requires the condition 2m = 4p, where p is an integer. Therefore, there is no tunneling between +m and −m when m is an odd number.
To give a more detailed argument of this point, we note that when H x = H z = 0, because of the S 4 ± terms, the Hamiltonian can be divided into the following subspaces for S = 10, where the numbers in brackets give the m quantum numbers. The subspace V 1 contains 6 levels, which form 3 pairs split by tunneling due to the CS 4 ± terms. The space V 3 contains 5 levels, of which ±8 and ±4 are split by tunneling, and m = 0 is isolated. There is no degeneracy amongst the 5 levels in space V 2 , but because of time reversal, this space is isomorphic to V 4 , and we therefore conclude that in the full spectrum of H, there should be five pairs of strictly degenerate levels, corresponding approximately to m = ±(2n + 1) with integer n. Third, there are five quenching points in the tunneling of the ground and first excited states, and the number of quenching points decreases as the level number goes up. For the ground-state tunneling, the allowed number of quenching points can be explained qualitatively by thinking in terms of instantons. Since the instanton approach also gives a good geometrical structure to the quenching, we present it in a separate section.
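The grouping into V 1 , ..., V 4 quoted above follows from sorting m by its residue mod 4, since S 4 ± changes m in steps of 4. A sketch for S = 10 (Python's % operator conveniently returns non-negative residues for negative m, e.g. (−9) % 4 == 3):

```python
# At Hx = Hz = 0 the only off-diagonal terms are S_+^4 and S_-^4, so the
# Hamiltonian is block diagonal in the classes m mod 4.  For S = 10:
S = 10
classes = {}
for m in range(-S, S + 1):
    classes.setdefault(m % 4, []).append(m)

V1 = classes[2]  # [-10, -6, -2, 2, 6, 10] : 3 tunnel-split pairs
V3 = classes[0]  # [-8, -4, 0, 4, 8]       : +/-8, +/-4 split, m = 0 isolated
V2 = classes[3]  # [-9, -5, -1, 3, 7]
V4 = classes[1]  # [-7, -3, 1, 5, 9]       : time-reverse of V2 -> 5 exact pairs
```

The assignment of the labels V 1 , ..., V 4 to the four residue classes is our reading of the text; the block sizes (6, 5, 5, 5) and the pairing structure match the counts quoted there.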
III. QUALITATIVE TREATMENT
In this section we give a qualitative treatment of the spin tunneling in Mn 12 using instanton methods. Adding a hard-axis field to Eq. (1.2), the Hamiltonian becomes The instanton method is based on spin coherent state path integrals. In the spin coherent state representation the anisotropy energy corresponding to the Hamiltonian is given by where α and β are the polar and azimuthal angles with respect to x̂, i.e., cos α = n̂ · x̂, sin α cos β = n̂ · ŷ, sin α sin β = n̂ · ẑ (3.3).
The energy (3.2) exhibits two degenerate minima at n̂ i = (α, β) = (α 0 , π/2) and n̂ f = (α, β) = (α 0 , −π/2), where α 0 = π/2 for H x = 0, and decreases smoothly to 0 as H x is increased. The level splitting due to tunneling between these minima can be obtained from the imaginary time propagator where S E is the Euclidean action, which is complex in general. Here, the boundary conditions are n̂(0) = n̂ i , n̂(T ) = n̂ f . Geometrically, for a closed path, the integral in Eq. (3.6) can be interpreted as a surface area on the unit sphere enclosed by this path, which can be verified by Stokes' theorem. In the large spin limit the path integral can be approximated by the sum of all contributions from paths that minimize the action, that is, the instanton paths. The instantons for the present model are not simple because of the fourth-order terms. However, we can construct a qualitative argument to find the quenching effect without performing explicit calculations. Since the Euclidean action S E has both real and imaginary parts we can generally express the ground state tunnel splitting as where j labels the various instantons, S Rj , S Ij are the real and imaginary parts of the instanton action, respectively, and D j are prefactors. With these ingredients we now discuss how the quenching appears in the present model. Let us first consider the case when H x = 0. Since the energy has four-fold symmetry, an argument of von Delft and Henley can be applied [8]. If n̂(τ ) is an instanton path, so is Rẑ(π/2)n̂(τ ), where Rẑ(π/2) is a rotation through π/2 about ẑ. Keeping in mind that n̂(τ ) is complex, when we project onto the real unit sphere, there are four saddle point paths passing through each of the four medium directions. Because of symmetry, each has the same real contribution to the action integral S R . However, since their azimuths about the easy axis are different, the imaginary part of the action, i.e., the phase S I , will not be the same.
From the geometrical meaning of the integral in Eq. (3.6), the phase difference between two instanton paths equals S times the surface area on the unit sphere enclosed by these instanton paths.
To visualize the interference effect we map the two-sphere onto a plane, as in an ordinary atlas (see Fig. 8). The hard axes are mapped onto four equally separated points lying on the equator, and the points exactly halfway between these correspond to the medium axes. Thus, the real projections of the instanton paths can be drawn as curves which start from +z, pass through the medium points, and end at −z. The area enclosed by two adjacent instanton paths equals π, since the sphere is divided equally by the four instanton paths. Thus, the phase difference between adjacent paths becomes Sπ. Choosing the phase of path 1 as the base, we can perform the summation in Eq. (3.7). Recalling that by symmetry the contributions from the real parts of the instanton actions are all the same, as are the prefactors D j , we have where γ is an irrelevant phase. This result gives us two quenching conditions. From the factor cos πS, we obtain the quenching of spin tunneling for half-integer S, which is just the Kramers degeneracy effect. The second cosine implies that the ground state spin tunneling is quenched for odd integer spins, i.e., S = 1, 3, 5, etc., and so ∆ is non-zero only for S = 2p, where p is an integer [27]. We now consider the case with H x ≠ 0. Since the field is assumed to be applied along the +x axis, both the easy axis and all four medium axes move close to the +x axis. Thus, the two-dimensional picture becomes the one shown in Fig. 9. The four-fold symmetry is now broken, but there are two pairs of instanton paths surrounding the +x axis: (a, a ′ ) and (b, b ′ ). The real parts of the instanton actions within a pair are the same, but differ between the pairs. The phase differences in each pair are the areas enclosed by each pair of instanton paths (the small and large oval regions in Fig. 9) and are dependent on the field H x . If we choose the straight line joining +z ′ to −z ′ as a reference, S Ia ′ = −S Ia , S Ib ′ = −S Ib , so that the summation in Eq.
(3.7) can be performed as where S Ra , S Rb are the real parts of the instanton actions in each pair, and A a (H x ), A b (H x ) are the areas enclosed by the pairs (a, a ′ ) and (b, b ′ ), respectively. For H x > 0 the saddle points through which the paths (a, a ′ ) pass are lower than those for (b, b ′ ), which means that S Ra < S Rb . The main contribution to ∆ in Eq. (3.9) then comes from the first term, and we can neglect the second term. The quenching of the ground state tunnel splitting thus arises when A a (H x ) ≃ (2n + 1)π/S, where n is a non-negative integer. To see how many quenching points are allowed we note that A a (H x ) < A a (0), where A a (0) = π (the area enclosed by the two paths 1 and 4 in Fig. 8). From this condition we find n < (S − 1)/2. For S = 10 there are thus five values of H x at which the quenching appears.
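Both interference counts above can be checked numerically. At zero field the four symmetry-related instantons carry Berry phases spaced by πS, so up to a common prefactor ∆ ∼ |Σ j exp(i j πS)|; and the number of quenching fields follows from reading the condition as S·A a = (2n + 1)π with A a (H x ) < A a (0) = π, which is the quoted bound n < (S − 1)/2:

```python
import numpy as np

# Zero-field interference of the four symmetry-related instantons,
# whose Berry phases are spaced by pi*S: Delta ~ |sum_j exp(i j pi S)|.
amps = {S: abs(sum(np.exp(1j * j * np.pi * S) for j in range(4)))
        for S in (0.5, 1, 2, 3, 10)}
# Quenched (amp = 0) for half-integer S (Kramers) and odd integer S;
# unquenched only for S = 2p, e.g. S = 10.

# Number of quenching fields for Hx > 0: odd integers 2n+1 with
# S*A_a = (2n+1)*pi and A_a < pi, i.e. n < (S-1)/2.
S = 10
n_quench = sum(1 for n in range(S) if 2 * n + 1 < S)   # five for S = 10
```

The five surviving values of n (0 through 4) reproduce the five quenching fields of the ground-state splitting seen in the DPI results of Fig. 7.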
The instanton argument provides another way of seeing that the region of very small (but non-zero) H x is special. Exactly at H x = 0, four instantons are important, but for large H x only two are important. There must therefore be a regime of small H x where we make a smooth transition between these two behaviors. The width of this regime can be quite small since S Rb > S Ra as soon as H x ≠ 0, and these actions appear in the exponents in Eq. (3.9), so that the difference (S Rb − S Ra ) is amplified. We thus have another way of seeing why the formula (2.29) fails near h x = 0. It contains only the cosine factor, and effectively ignores the second term in Eq. (3.9).
IV. SUMMARY
In this paper, we have used the DPI method to study tunneling in Mn 12 , especially its behavior with a hard-axis field, which is expected to show oscillation as in Fe 8 . The recursion relation now has nine terms, complicating the analysis. There may be up to five critical curves, which leads to many more turning points. The DPI method still works, however, even though the phase integrals and integrands must be evaluated numerically. But the numerical procedures required are simple, and involve only root finding and integration in one variable. Except for some special narrow field regions, where two or more turning points merge, the DPI analysis based on linear turning point formulas is extremely good, and agrees with exact numerical results quantitatively.

Note U π (m) = U + in all cases. (a) When h xr < h x < h xco with |m| <S, and when h xc (m) < h x < h xr with |m| > m * . In this case, U 0 (m) = U − , and U * 1 (m) does not appear since q * 1 is imaginary. (b) When h xc (m) < h x < h xr with m a < |m| < m * . Here, U * 1 (m) = U − , U 0 (m) = U i . (c) When h xi < h x < h xmax with |m| < m a , and when 0 < h x < h xi with m i < |m| < m a . (d) When 0 < h x < h xi with |m| < m i . Note that, in both (c) and (d), U * 1 (m) = U − , and U 0 (m), U * 2 (m), and U * 3 (m) are inside the band and thus all denoted U i .

FIG. 4. The critical curves for Case I. At points ±m 0 , U 0 has minima, and the points ±m c denote the intersection between E and U * 1 . Note U * 1 = U f and U 0 = U − for all |m| ≤S. For a given value of E, q becomes complex for |m| < m c , which lies inside the classically forbidden region. In this region the semiclassical wavefunction C m oscillates with a decaying or growing envelope.

There are five critical curves. Note, however, that U * 2 and U * 3 appear only in the region |m| < m a because they are complex outside this region.

The value below which the wavefunction can have an oscillating part inside the forbidden region.
h xr The value above which q * 1 becomes real. | 2014-10-01T00:00:00.000Z | 2000-09-29T00:00:00.000 | {
"year": 2000,
"sha1": "1bd2239d5ce97b225d72f17f194252fdb89cfc7e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a8c71c42fb5004b73bfcca1db646388c86b209db",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
32811942 | pes2o/s2orc | v3-fos-license | ATP synthase. Conditions under which all catalytic sites of the F1 moiety are kinetically equivalent in hydrolyzing ATP.
Conditions have been reported under which the F1 moiety of bovine heart ATP synthase catalyzes the hydrolysis of ATP by an apparently cooperative mechanism in which the slow rate of hydrolysis at a single catalytic site (unisite catalysis) is enhanced more than 10(6)-fold when ATP is added in excess to occupy one or both of the other two catalytic sites (multisite catalysis) (Cross, R. L., Grubmeyer, C., and Penefsky, H. S. (1982) J. Biol. Chem. 257, 12101-12105). In the novel studies reported here, and in contrast to the earlier report, we have (a) monitored the kinetics of ATP hydrolysis of F1 by using nucleotide-depleted preparations and a highly sensitive chemiluminescent assay; (b) followed the reaction immediately upon addition of F1 to ATP, rather than after prior incubation with ATP; and (c) used a reaction medium with Pi as the only buffer. The following observations were noted. First, regardless of the source of enzyme, bovine or rat, and catalytic conditions (unisite or multisite), the rates of hydrolysis depend on ATP concentration to the first power. Second, the first order rate constant for ATP hydrolysis remains relatively constant under both unisite and multisite conditions declining only slightly at high ATP concentration. Third, the initial rates of ATP hydrolysis exhibit Michaelis-Menten kinetic behavior with a single Vmax exceeding 100 micromol of ATP hydrolyzed per min/mg of F1 (turnover number = 635 s-1) and a single Km for ATP of about 57 microM. Finally, the reaction is inhibited markedly by low concentrations of ADP. It is concluded that, under the conditions described here, all catalytic sites that participate in the hydrolysis of ATP within the F1 moiety of mitochondrial ATP synthase function in a kinetically equivalent manner.
ATP synthase (F 0 F 1 -ATPase), the enzyme that synthesizes and hydrolyzes the γ-phosphate bond of ATP, is crucial for the life of aerobic organisms. The enzyme resides in the inner mitochondrial membrane of animals, plants, yeast, and Neurospora; in the cytoplasmic membrane of bacteria; and in the thylakoid membrane of chloroplasts in plants (see Refs. 1-6 for reviews). In accordance with the chemiosmotic hypothesis (7), the electrical energy of respiration is first conserved as a protonmotive force (Δμ H+ ), which via the F 0 moiety (8) of the synthase delivers the accumulated energy to the β-subunits of the F 1 moiety (α 3 β 3 γδε), presumably through conformational changes in the stalk. There is a wealth of experimental evidence supporting the view that the step with the largest demand for energy is the one involved in the release of ATP from the catalytic sites of the enzyme. According to basic postulates of the "binding change" mechanism (9-11), there is an equivalent participation of the three β-subunits in the synthesis of ATP as they proceed through a cycle of "open," "loose," and "tight" states. Thus, at any one time, all three catalytic sites are in different conformations, but all pass sequentially through the same conformations (9-11). The alternating participation of catalytic sites was shown, for the first time, by Adolfsen and Moundrianakis (12) in hydrolytic reactions catalyzed by bacterial F 1 -ATPase.
Although the equivalent, alternating participation of catalytic sites within ATP synthases does not require positive cooperativity among them, many investigators believe that such cooperativity does in fact occur. The genesis of this long-held view derives primarily from studies (13,14) in which the F 1 moiety of the bovine heart enzyme was compared under unisite and multisite catalytic conditions, i.e. conditions in which the ATP/F 1 ratio was adjusted so that either one or all sites were operating. The resultant multisite/unisite rate enhancement ratio of 10 6 was interpreted as reflecting strong positive cooperativity among catalytic sites (13). The bovine heart F 1 preparation used in these experiments contained bound nucleotide (2.8-3.5 mol/mol of F 1 ) and had been preincubated with ATP (13).
In studies reported here, and for the first time, the kinetics of ATP hydrolysis catalyzed by nucleotide-depleted F 1 have been determined by measuring the entire time course of individual reactions from the moment ATP (in the range from 1 nM to ~20 μM) enters in contact with the catalytic sites until it is totally hydrolyzed. Thus, catalytic sites are open when the experiment is commenced, and data are collected throughout the entire range from unisite through multisite reaction conditions. This study demonstrates that, regardless of the ATP/F 1 molar ratio and enzyme source, bovine or rat, all catalytic sites participating in ATP hydrolysis within F 1 can function in a kinetically equivalent manner.
EXPERIMENTAL PROCEDURES
Sources of Enzymes, Chemicals, and Materials-Bovine heart F 1 was obtained from Drs. William Allison and J. M. Jault (University of California, San Diego). The enzyme was prepared by a modification (15) of the procedure of Knowles and Penefsky (16) and depleted of nucleotides as described by Garrett and Penefsky (17). It was stored prior to use in the refrigerator at 4°C in 100 mM Tris-Cl, pH 8.0, 4 mM EDTA, and 50% glycerol (v/v). Rat liver F 1 was purified by a modification (18) of the procedure of Catterall and Pedersen (19). The purified enzyme, in 250 mM KP i and 5.0 mM EDTA, was divided into 100-μl aliquots, lyophilized to dryness, and stored at −20°C until use. Immediately before use the enzyme was redissolved in 100 μl of water and precipitated twice with ammonium sulfate. The bovine heart and rat liver F 1 preparations contained, respectively, <0.04 (11 determinations) and 0.9 (7 determinations) tightly bound nucleotide/mol enzyme when determined after denaturation by the highly sensitive chemiluminescent assay described below. As it is known that such preparations can bind a total of 5-6 mol of nucleotide/mol of F 1 (20,21), these preparations are appropriately defined as "nucleotide-depleted." In fact, the bovine heart F 1 preparation employed here is essentially nucleotide free.
ATP was a product of Sigma and the 1243-200 ATP monitoring reagent, a mixture of luciferin and luciferase, was a product of BioOrbit. The luminometer used in these studies to monitor ATP disappearance was a product of LKB (Wallac model 125), and the attached recorder was a product of Soltec (model 330). [γ-32 P]ATP was purchased from DuPont NEN and its radioactivity assessed in a Beckman LS600IC liquid scintillation counter using Budget Solve complete counting mixture (Research Products International Corp.). Activated charcoal (number C4386), prewashed with HCl, was a product of Sigma, and the filtration device (Centricon 100, molecular weight cutoff = 100,000) used to separate F 1 and charcoal-bound [γ-32 P]ATP from 32 P i was obtained from Amicon.
Chemiluminescent Method for Monitoring ATP Hydrolysis-The standard reaction medium, in 1.0 ml final volume at 24°C, consisted of 200 mM sucrose, 50 mM KCl, 10 mM NaP i , pH 7.05, 2 mM MgSO 4 , and 50 μl of a 5-ml solution in distilled water of the ATP monitoring reagent. ATP in amounts indicated in the legends to the tables and figures was then added to the stirred reaction medium to elicit the chemiluminescent response, followed by the addition of F 1 to initiate the ATPase reaction. The reverse (adding ATP to suspensions of F 1 ) was also possible without affecting the steady-state kinetics of the reaction. Changes in ATP concentration were monitored with an LKB Wallac model 125 luminometer. The electrical signal elicited by light emission was suitably amplified up to 10,000 times by changes in the current (from 10 −6 to 10 −8 A) and the voltage (from 10 V to 1 mV), and recorded using a Soltec model 330 multichannel recorder, usually run at a chart speed of 120 cm/min. The contents of the cell were stirred with a magnetic bar rotating at speeds of near 1000 rpm by means of an electrical device placed on the side of the reaction chamber. The concentration of ATP in different standard solutions was determined spectrophotometrically from the absorbance at 259 nm using a millimolar extinction coefficient of 15.4.
As the observed initial rates of decrease of the chemiluminescent signal were always first order with respect to the concentration of ATP (−d[ATP]/dt = k[ATP]), the rate constant, k, was calculated from the integrated form of the equation (ln[ATP] t = ln[ATP] 0 − kt) by plotting ln[ATP] t versus t at 400-ms intervals. Only steady-state segments (>70% of the reaction) were considered in these calculations (Fig. 2).
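The rate-constant extraction just described (a least-squares slope of ln[ATP] t versus t, sampled at 400-ms intervals) can be sketched as follows. The trace below is synthetic and the function name is illustrative, not from the paper:

```python
import math

def first_order_k(times, atp):
    """Least-squares slope of ln[ATP] versus t; returns k = -slope (s^-1)."""
    logs = [math.log(a) for a in atp]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(logs) / n
    num = sum((t - tbar) * (y - ybar) for t, y in zip(times, logs))
    den = sum((t - tbar) ** 2 for t in times)
    return -num / den

# Synthetic first-order decay sampled every 400 ms: [ATP]_t = [ATP]_0 * exp(-k t)
k_true = 0.475                                  # s^-1, value used for illustration
times = [0.4 * i for i in range(50)]            # s
atp = [1.0 * math.exp(-k_true * t) for t in times]
```

On an ideal first-order trace the fitted slope recovers k exactly; on real data, only the steady-state segment of the trace would be fed to the fit, as the paper does.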
Radioactive Method for Monitoring ATP Hydrolysis-The entire reaction was carried out in a Centricon-100 filtration device. The standard reaction medium, again in 1-ml final volume at 24°C, was identical to that described above except that the ATP monitoring reagent was omitted and [γ-32 P]ATP (2.1 × 10 6 cpm) rather than "cold" ATP was included in the assay. The reaction was initiated with F 1 , quenched at 2 s by addition of 1 ml of 100 mg/ml activated charcoal, and immediately centrifuged at 3000 rpm (1086 × g) in an SS-34 rotor in a Sorvall RC-2B centrifuge at 4°C. The filtrate containing 32 P i released in the ATPase reaction was then assessed for radioactivity in 10 ml of Budget Solve as indicated above.
Determination of Protein-Protein was determined by the method of Lowry et al. (22) after first precipitating with 5% trichloroacetic acid.
RESULTS AND DISCUSSION
Response Time and Accuracy of the Luciferin-Luciferase Chemiluminescent Assay in Detecting ATP-In the kinetic studies described below, ATP hydrolysis was monitored by following the disappearance of the chemiluminescent signal induced by adding ATP to an assay medium containing luciferin and luciferase. Prior to commencing these experiments, it was important to define the response time of the system for detecting ATP and to establish whether concentrations of ATP could be accurately detected at ratios of ATP/F 1 in the assay ranging from less than 1 to much greater than 1 (i.e. from unisite to multisite conditions). Confirming previous studies of DeLuca et al. (23,24), the results depicted in Fig. 1A (traces a to c) show that, regardless of the amount of ATP (1-20,000 pmol), the response time of the luciferin-luciferase system is close to 300 ms, much faster than the time course of the ATPase reaction at the concentrations of ATP and F 1 used in this study. The accuracy of the technique depicted in Fig. 1B, the data of which were derived from over 70 different experiments, shows that the correlation coefficient between light emission and ATP concentration is better than 0.999 at ATP concentrations of ≤15-17 μM. As very low F 1 concentrations are used in the studies described below, the technique allows for accurate ATP measurements to be made with assays in which the ATP/F 1 ratio ranges from ~0.1 to ~1900. Above 15-17 μM ATP, the extent of the chemiluminescent signal rapidly decreases as the concentration of ATP is increased (Fig. 1), due most likely to the accumulation of dehydroluciferin (23). For this reason, and for the reason indicated below, the hydrolytic reaction was initiated by injecting F 1 into the reaction cell already containing ATP.
Time Course of ATP Hydrolysis Catalyzed by either Nucleotide-depleted Bovine Heart or Rat Liver F 1 -Three conditions were adhered to in the performance of all experiments. First, nucleotide-depleted preparations of F 1 (bovine heart or rat liver) were used to avoid any possible allosteric effects resulting from nucleotides bound to noncatalytic sites, i.e. sites located predominantly on α-subunits (25). Second, the reaction was initiated by adding F 1 to the reaction mixture already containing ATP to allow the nucleotide-free catalytic sites to bind and hydrolyze ATP immediately upon contacting ATP. Third, and in contrast to the earlier studies (13,14), prior incubation of F 1 with ATP ("aging") was avoided, both to allow for detection of the actual initial rates of ATP hydrolysis and to avoid product (ADP) inhibition that might give rise to "apparent" cooperative kinetics upon addition of excess ATP. Fig. 2 shows that under the above conditions the initial rates of ATP hydrolysis depend on ATP concentration to the first power, as precisely defined by the first order rate equation. This is true regardless of the source of enzyme (bovine heart or rat liver) or whether the reaction takes place under "unisite" (ATP/F 1 = 0.02, Fig. 2A) or "multisite" (ATP/F 1 = 48.5, Fig. 2B) catalytic conditions. Plots (insets in Fig. 2) of the integrated form of the first order rate equation yielded the first order rate constant, k, which in turn was used to calculate the turnover number. For example, for the hydrolysis of 1 pmol of ATP by bovine heart F 1 (45.9 pmol), the first order rate constant, k, obtained from the plot ln[ATP] versus t (Fig. 2A, inset) was 0.475 s −1 , and the calculated turnover number (1 pmol of ATP × 0.475 s −1 /45.9 pmol of F 1 ) was 0.010 s −1 .
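A minimal sketch of the turnover-number arithmetic in the worked example above (the function name is illustrative, the numbers are from the text):

```python
def turnover_number(pmol_atp, k_per_s, pmol_f1):
    """Turnover number (s^-1) = (pmol ATP hydrolyzed) * k / (pmol F1)."""
    return pmol_atp * k_per_s / pmol_f1

# Worked example from the text: 1 pmol ATP, k = 0.475 s^-1, 45.9 pmol F1
tn = turnover_number(1.0, 0.475, 45.9)   # ~0.010 s^-1
```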
Initial Rates of ATP Hydrolysis by F 1 in the Substrate Concentration Range Spanning 1 nM to 17.1 μM ATP-It has been reported that the F 1 moiety of bovine heart ATP synthase catalyzes the hydrolysis of ATP by a mechanism in which the slow rate of hydrolysis at a single catalytic site is enhanced by ~10 6 -fold when the ATP concentration is increased by more than 3 × 10 3 -fold, a process interpreted to result in strong cooperative interactions between catalytic sites (13). The important data leading to this conclusion are actually derived from two separate experiments. The first was conducted under unisite conditions (see Fig. 5 in Ref. 14), in which a turnover number of 3.6 × 10 −4 s −1 was determined by monitoring the dissociation of ADP remaining bound to F 1 after prior incubation (aging) of ATP and excess F 1 (ATP/F 1 = 0.2). The second experiment was conducted under multisite conditions (Fig. 1 in Ref. 13), where a turnover number of 300 s −1 was determined by monitoring release of 32 P i from [γ-32 P]ATP after adding ATP (2.5 ml, 10 mM) in excess to a unisite mixture of F 1 (3 μM in 2.5 ml) and [γ-32 P]ATP (0.3 μM in 2.5 ml) that had been preincubated for 2 s. (The multisite turnover number is somewhat higher (600 s −1 ) when measured by the authors under steady-state conditions in the presence of an ATP regenerating system.) The ~10 6 -fold enhancement factor was calculated from the simple multisite/unisite ratio of 300 (or 600 s −1 )/3.6 × 10 −4 s −1 .
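The enhancement factor quoted from the earlier report is just the ratio of the two turnover numbers; as a quick check (values taken from the text):

```python
# Multisite/unisite ratio from the earlier report's turnover numbers (Refs. 13, 14)
multisite_tn = 300.0      # s^-1, multisite turnover number
unisite_tn = 3.6e-4       # s^-1, unisite turnover number
enhancement = multisite_tn / unisite_tn   # ~8.3e5, i.e. on the order of 10^6
```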
In experiments reported here (Tables I and II, Fig. 3), and in contrast to those described above, nucleotide-depleted F 1 preparations (bovine and rat) were employed; a single method (chemiluminescent assay) was used to measure both unisite and multisite turnover rates; the entire time course of the reaction was monitored at each ATP concentration as indicated in Fig. 2; and ATP concentration was varied over a wide range. Tables I and II summarize kinetic data obtained for typical experiments with bovine heart and rat liver F 1 , respectively. The first order rate constant for ATP hydrolysis remains relatively constant under both unisite and multisite conditions, declining only slightly at high concentrations of ATP. Turnover numbers under unisite conditions range from 0.01 to 0.06 s −1 at the lowest ATP/F 1 assay ratios, i.e. from 0.11 to 0.55. Fig. 3, A and B, summarizes V versus ATP and 1/V versus 1/ATP (Lineweaver-Burk) plots of the data presented in Table I for bovine heart F 1 . Significantly, in neither plot, which spans the range from unisite conditions, with an ATP/F 1 assay ratio as small as 0.11, to multisite conditions, with an ATP/F 1 ratio as high as 1900, is there any indication of the sigmoidicity characteristic of cooperative behavior. Rather, it is clear that bovine heart F 1 assayed under these conditions exhibits strictly Michaelis-Menten kinetic behavior, consistent with the view that all participating catalytic sites are kinetically equivalent. The extrapolated single K m is 57 μM, and the extrapolated V max is 103 μmol of ATP hydrolyzed per min/mg of F 1 . The turnover number of 635 s −1 is very close to the value of 600 s −1 obtained under multisite conditions in the earlier study (Ref. 13; see discussion above). Similar results were obtained with rat liver F 1 (Table II, Fig. 3B), which exhibits a V max nearly identical to that of bovine heart F 1 and a slightly higher K m of 79 μM.
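A sketch of the Michaelis-Menten parameter extraction via a Lineweaver-Burk (double-reciprocal) fit of the kind used for Fig. 3B. The data below are synthetic, generated from the paper's extrapolated V max and K m for bovine heart F 1 purely to exercise the fit; the function name is illustrative:

```python
def lineweaver_burk_fit(s_conc, rates):
    """Least-squares fit of 1/V = (Km/Vmax)*(1/S) + 1/Vmax; returns (Vmax, Km)."""
    xs = [1.0 / s for s in s_conc]
    ys = [1.0 / v for v in rates]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    vmax = 1.0 / intercept        # y-intercept of the double-reciprocal plot is 1/Vmax
    return vmax, slope * vmax     # slope is Km/Vmax, so Km = slope * Vmax

# Synthetic rates from V = Vmax*S/(Km + S) with the paper's extrapolated values
VMAX_TRUE, KM_TRUE = 103.0, 57.0        # umol/min/mg, uM
S = [1.0, 2.0, 5.0, 10.0, 17.0]         # uM ATP
V = [VMAX_TRUE * s / (KM_TRUE + s) for s in S]
```

Because noiseless Michaelis-Menten data are exactly linear in double-reciprocal coordinates, the fit recovers V max and K m exactly here; with real data the reciprocal transform amplifies error at low S, which is why the paper also shows the direct V versus [ATP] plot.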
Comparison of the Chemiluminescent Assay for Monitoring ATP Hydrolysis with an Assay That Monitors Release of 32 P i from [γ-32 P]ATP-Although the turnover number of 635 s −1 for multisite catalytic conditions reported here for the bovine heart F 1 is nearly identical to that reported in the earlier study (13), the unisite turnover numbers at the lowest ATP/F 1 ratios are much higher, in the range of 0.01-0.06 s −1 (Tables I and II), rather than near 10 −4 s −1 (13,14). Consequently, data obtained here result in multisite/unisite ratios near 10 4 rather than near 10 6 (13) as reported earlier.
[Table I legend residue] Reactions were initiated by injecting 9.16 pmol of nucleotide-depleted bovine heart F 1 into a 1-ml chemiluminescent reaction system containing the indicated amount of ATP. Assays were carried out exactly as described under "Experimental Procedures." The first order rate constant (k) of the reaction was calculated from the slope of the line that results from plotting the integrated form of the first order rate equation: −d[ATP]/dt = k[ATP].
FIG. 2. Time course of the ATPase reaction catalyzed by F 1 under unisite and multisite catalytic conditions. Assay conditions are exactly as described under "Experimental Procedures." In A, the reaction was initiated by adding 45.9 pmol of bovine heart F 1 to a medium containing only 1 pmol of ATP. In B, the reaction was initiated by adding 20.6 pmol of rat liver F 1 to a medium containing 1000 pmol of ATP. The first order rate constant, k, was calculated from the slope of the line that results from plotting the integrated form of the first order rate equation at 0.4-s intervals (see "Experimental Procedures"). ATP concentration = light intensity in arbitrary units.
For this reason, it might be argued that the chemiluminescent assay, which monitors ATP disappearance, may not accurately report ATP hydrolytic rates at low ATP/F 1 assay ratios (unisite conditions). To address this question, we compared the chemiluminescent assay for monitoring ATP hydrolysis under unisite conditions with an assay that monitors 32 P i release from [γ-32 P]ATP. Two experimental conditions were chosen, one in which the ATP/F 1 assay ratio was only 0.11 (lowest data point in Table I), and one in which the ratio was 1.1. The assay medium was identical to that used for monitoring ATP disappearance except that luciferin and luciferase were omitted. Significantly, the specific activities of 2.9 ± 0.21 and 14.5 ± 2.9 nmol of P i released per min/mg of protein obtained using the 32 P i release assay (Table III) compare favorably with those of 1.8 and 18 nmol of ATP disappeared per min/mg of protein obtained using the chemiluminescent assay (Table I), for ATP/F 1 ratios, respectively, of 0.11 and 1.1. Therefore, it seems clear that the chemiluminescent assay is a reliable indicator of the ATP hydrolytic rate under unisite as well as multisite conditions. Summary and Mechanistic Implications-In studies described here, the bovine heart F 1 preparation was essentially nucleotide-free, and the rat liver F 1 preparation contained less than 1 mol/mol of F 1 . Both enzymes exhibited strictly Michaelis-Menten kinetic behavior with maximal turnover numbers of 635 s −1 and multisite/unisite rate enhancement ratios near 10 4 , consistent with the view that F 1 -ATPases can catalyze ATP hydrolysis by a mechanism in which all participating catalytic sites are kinetically equivalent. In earlier studies (13,14), the bovine heart F 1 preparations used contained 2.8-3.5 mol of bound nucleotide/mol of F 1 .
These preparations exhibited maximal turnover numbers of 600 s −1 and multisite/unisite rate enhancement ratios of 10 6 , consistent with the view that F 1 -ATPases can catalyze ATP hydrolysis by a mechanism in which positive cooperativity occurs between catalytic sites. Perhaps the simplest interpretation of the two studies is that, depending on nucleotide content and its subunit distribution, F 1 -ATPases can exist in different conformational states, one in which all participating catalytic sites are kinetically equivalent and one in which they are kinetically nonequivalent. In support of this view are two different x-ray structures of F 1 (25,26), one in which the catalytic β-subunits appear more structurally equivalent (25) than in the other (26).
Alternative interpretations are possible and require further investigation. One possibility is that F 1 -ATPases normally function during ATP hydrolysis as simple Michaelis-Menten enzymes and that deviations from this behavior (i.e. sigmoid kinetic behavior), rather than reflecting positive catalytic cooperativity, simply reflect inhibitory ADP bound at a catalytic site that is displaced as ATP concentration is raised. As shown in Fig. 4, ADP is a potent inhibitor of bovine heart F 1 , with over 90% inhibition being observed when the ADP concentration equals the ATP concentration at values ≥5 μM. Significantly, F 1 preparations are frequently stored or isolated in ATP, the hydrolysis of which results in F 1 ·ADP complex formation.
[Fig. 3 legend residue] … Table I for bovine heart F 1 . See legend to Table I for details of the assay. Note that the first 7 of the 14 data points are plotted on an expanded scale in the inset. Values for V max and K m were obtained from a Lineweaver-Burk plot shown in B. B, Lineweaver-Burk plots of the data tabulated in Table I for bovine heart F 1 (A) and in Table II for rat liver F 1 (B). See legends to Tables I and II.
[Table III legend residue] Reaction conditions were identical to those used in the chemiluminescent assay (see "Experimental Procedures") except that the luciferin-luciferase reagent was omitted and [γ-32 P]ATP rather than "cold" ATP was present. Upon addition of 9.16 pmol of nucleotide-depleted bovine heart F 1 to the assay medium contained within a Centricon-100 filtration device, the reaction was allowed to proceed for 2 s followed by immediate quenching with activated charcoal. Separation of 32 P i from charcoal-bound [32 P]ATP was performed by centrifugation (see "Experimental Procedures"). Specific activity values are reported as mean ± the standard deviation.
Specifically, as it applies to the earlier studies (13,14) leading to the view that bovine heart F 1 exhibits positive catalytic cooperativity, it is interesting to note that the unisite turnover number of 10 −4 s −1 was based neither on the release of P i nor ADP following ATP hydrolysis per se, but rather on the dissociation of ADP remaining bound to F 1 following hydrolysis (Fig. 5 in Ref. 14). Thus, the possibility exists that the unisite turnover number of 10 −4 s −1 obtained from this earlier analysis is not an accurate reflection of the actual value and corresponds to release of ADP from a noncatalytic site. It is important to note that the same investigators in a separate study (Fig. 1 in Ref. 13) find that incubation of [γ-32 P]ATP (0.3 μM in 2.5 ml) under unisite conditions with F 1 (3 μM in 2.5 ml) for 2 s prior to adding excess ATP results in 20% hydrolysis of the total ATP present. Had these investigators calculated the unisite turnover number under these conditions, they would have obtained a value of 0.01 s −1 (i.e. 0.15 μM ATP × 0.20/(1.5 μM F 1 × 2 s)). This value is almost identical to the unisite turnover number of 0.011 s −1 reported here as the first entry in Table I and consistent with Michaelis-Menten rather than cooperative kinetic behavior.
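The back-of-envelope unisite turnover recomputed above from the earlier study's own data is simple arithmetic (concentrations are those after 1:1 mixing of the two 2.5-ml volumes):

```python
# 20% of 0.15 uM ATP hydrolyzed in 2 s by 1.5 uM F1 (post-mixing concentrations)
atp_hydrolyzed_uM = 0.15 * 0.20                   # uM of ATP consumed
unisite_tn = atp_hydrolyzed_uM / (1.5 * 2.0)      # s^-1: per uM F1, over 2 s
```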
Finally, consistent with the studies reported here, recent work on nucleotide-depleted Escherichia coli F 1 (27) shows that in the ATP concentration range of 1 μM to 1 mM a single K m value of 38 μM is sufficient for an adequate description of the ATP hydrolytic behavior of the enzyme. However, these authors believe that all catalytic sites (presumably three) must be occupied to achieve significant rates of ATP hydrolysis. Clearly, this is not the case with the nucleotide-depleted F 1 preparations from the animal systems examined here, as significant rates of ATP hydrolysis are readily detected at ATP/F 1 assay ratios as low as 0.02 (Fig. 2A). Moreover, at 17 μM ATP, well below the K m of 57 μM, the catalytic turnover number is already 149 s −1 (~24 μmol of ATP hydrolyzed per min/mg of F 1 ), as shown in Table I. Thus, nucleotide-depleted animal F 1 preparations, in accordance with simple Michaelis-Menten kinetic behavior, show significant rates of ATP hydrolysis at all concentrations of ATP tested.
FIG. 4. Inhibition of ATP hydrolysis catalyzed by bovine heart F 1 . The chemiluminescent assay conditions are exactly as described under "Experimental Procedures" except that the reaction was initiated by adding ATP to a medium containing 9.16 pmol of F 1 and an amount of ADP equal to that of the added ATP. | 2019-08-17T09:39:01.915Z | 1996-12-20T00:00:00.000 | {
"year": 1996,
"sha1": "2fafc2d6f6e95b8a67be2737bd3d0ab531b1c1ee",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/271/51/32546.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "4fc5955fe6d99f9415d3821d4eb477f185b8f4ee",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": []
} |
135446190 | pes2o/s2orc | v3-fos-license | Renal Failure Impact on the Outcomes of ST-Segment Elevation Myocardial Infarction Patients Due to a Left Main Coronary Culprit Lesion Treated Using a Primary Percutaneous Coronary Intervention
Background: Patients with ST-segment elevation myocardial infarction (STEMI) and primary percutaneous coronary intervention (PPCI) on a left main culprit lesion have very high mortality rates. The interaction of chronic kidney disease (CKD) with such a catastrophic acute event, on the background of their highly complex atherosclerotic lesions, is not well established. Therefore, we sought to evaluate in these patients the influence of the estimated glomerular filtration rate (eGFR) on short- and long-term mortality. Methods: We retrospectively analyzed renal function in 81 patients with STEMI and PPCI on a left main culprit lesion from two tertiary centers. Results: Patients were divided in two groups according to an eGFR cut-off of 60 mL/min/1.73 m2: 40 patients with CKD and 41 without CKD. Patients with renal failure were older, had more diabetes, and had experienced more frequent myocardial infarctions (MIs). CKD patients had a higher baseline SYNTAX score (p = 0.015), higher residual SYNTAX score (p < 0.001), and lower SYNTAX revascularization index (SRI) (p = 0.003). Mortality at 30-day, 1-year, and 3-year follow-ups was not significantly different between the two groups. However, when analyzed as a continuous variable, eGFR emerged as a predictor of 1-year mortality, both in univariate analysis (OR = 0.97, 95% CI: 0.95–0.99, p = 0.005) and in multivariate analysis, after adjusting for cardiogenic shock and Thrombolysis in Myocardial Infarction (TIMI) 0/1 flow (OR = 0.975, 95% CI: 0.95–0.99, p = 0.021). Conclusions: In STEMI with PPCI on a left main culprit lesion, renal failure was associated with more complex coronary lesions and less complete revascularization, and emerged as an independent predictor of mortality at 1-year follow-up.
Introduction
Chronic kidney disease (CKD) is a strong risk factor for fatal and nonfatal cardiovascular events [1][2][3]. Even mildly decreased kidney function, with an estimated glomerular filtration rate (eGFR) of 60-90 mL/min/1.73 m 2 , independently predicts long-term mortality, both in stable coronary [4] and in ST-segment elevation myocardial infarction (STEMI) patients [5]. There are data showing that renal dysfunction per se might initiate and accelerate adverse cardiac events [4].
Revascularization improves the poor outcomes in CKD patients with coronary artery disease (CAD). Although the percutaneous coronary intervention (PCI) procedure itself is challenging and associated with an upfront risk of further kidney injury or procedural complications [6], it actually decreases long-term mortality [6].
Patients with STEMI due to an unprotected left main coronary artery (UPLMCA) lesion are particular in sharing both a catastrophic acute instability and a very high atherosclerotic burden, quantified by the baseline SYNTAX (synergy between PCI with TAXUS™ and cardiac surgery) score [7]. Survivors of an acute UPLMCA occlusion treated using PCI have a damaged myocardium and residual coronary lesions (residual SYNTAX score) consistent with the completeness of revascularization. Therefore, they are vulnerable to residual ischemia or future acute events. These vulnerabilities are presumed to be magnified in patients with renal failure because of a more extensive and complex CAD [8] and more vulnerable non-culprit plaques [3,9] than in patients with normal renal function. What is more, in the CKD group, the myocardial substrate struck by the index acute left main event might already be damaged by increased fibrosis [3].
Although there are data from randomized trials on the outcomes of CKD patients with stable left main disease [10], the situation is different regarding STEMI patients due to UPLMCA with renal impairment, where data are few and conflicting, with gaps in the evidence [11][12][13][14]. Some of these studies [11,13] used creatinine levels to characterize renal function, which is known to be unreliable when compared to the eGFR. Therefore, we sought to retrospectively evaluate the influence of eGFR, calculated using the most currently accepted equation (CKD Epidemiology Collaboration), on short- and long-term mortality in patients with STEMI due to UPLMCA treated using primary PCI.
Materials and Methods
This was a retrospective observational cohort study. Patients with STEMI and primary PCI for a UPLMCA culprit lesion were identified in the registries of two centers in Cluj-Napoca, Romania. These two centers perform more than 3000 PCI procedures annually, of which 900 are primary PCIs, in the setting of acute STEMI. Eight senior operators were involved in the study, each performing more than 250 interventions annually, 15-50 of these being left main PCI procedures. The inclusion period was between January 2010 and March 2017.
The inclusion criteria, presented in detail within a previous paper [7], were ongoing ischemic chest pain with a duration of more than 30 min, accompanied by ST-segment elevation of at least 0.2 mV in two contiguous electrocardiographic (ECG) leads, left main STEMI equivalent ECG changes [15], new left bundle branch block, and/or cardiogenic shock.
Coronary flow was graded according to the Thrombolysis in Myocardial Infarction TIMI classification system. Collateral flow was evaluated using Rentrop criteria [16].
Left main was considered "unprotected" in the absence of any patent left coronary artery bypass grafts.
UPLMCA was considered the culprit vessel in the case of a more than 90% stenosis or in the case of an angiographic complicated lesion: dissection, thrombus, plaque rupture, or TIMI 0-2 flow.
Baseline SYNTAX score I (synergy between PCI with TAXUS™ and cardiac surgery), baseline SYNTAX score II, and residual SYNTAX score were calculated for each patient by two independent senior interventional cardiologists. The SYNTAX score revascularization index (SRI) represents the proportion of CAD burden treated using PCI [17]. It was calculated using the formula [17]:

SRI = (1 − (rSS/bSS)) × 100,

where rSS is the residual SYNTAX score and bSS is the baseline SYNTAX score.

Technical success was defined as less than 30% residual stenosis in the presence of TIMI 3 flow.
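As an illustration, the SRI formula above can be computed directly (the function name is ours, not from the paper):

```python
def syntax_revascularization_index(baseline_syntax: float, residual_syntax: float) -> float:
    """SRI = (1 - rSS/bSS) * 100: the percentage of the baseline
    CAD burden that was treated by PCI."""
    if baseline_syntax <= 0:
        raise ValueError("baseline SYNTAX score must be positive")
    return (1.0 - residual_syntax / baseline_syntax) * 100.0

# A patient whose bSS of 30 was reduced to an rSS of 6 had 80% of
# the angiographic burden revascularized.
print(syntax_revascularization_index(30, 6))  # 80.0
```

Complete revascularization (rSS = 0) gives an SRI of 100, and no revascularization (rSS = bSS) gives 0.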
Baseline Blood Investigations
Baseline serum creatinine was derived from the venous sample acquired in the emergency ward prior to cath-lab admission. The estimated glomerular filtration rate (eGFR) was calculated using the CKD Epidemiology Collaboration (CKD-EPI) equation, as per the Kidney Disease: Improving Global Outcomes (KDIGO) guidelines [18]. CKD was defined as an eGFR < 60 mL/min/1.73 m2. Patients were separated into two groups: CKD, with eGFR < 60 mL/min/1.73 m2, and non-CKD, with eGFR ≥ 60 mL/min/1.73 m2. Urine output was monitored before and after the primary PCI procedure, with oliguria defined as an output of less than 0.5 mL/kg/h.
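The CKD-EPI calculation and the dichotomization at 60 mL/min/1.73 m2 can be sketched as follows. This is a minimal illustration of the published 2009 CKD-EPI creatinine equation, not the study's code; the race coefficient of the original equation is omitted from the sketch and noted in a comment, and the function names are ours.

```python
import math

def egfr_ckd_epi_2009(scr_mg_dl: float, age: float, female: bool) -> float:
    """2009 CKD-EPI creatinine equation, in mL/min/1.73 m^2.
    (The published equation also multiplies by 1.159 for black
    patients; that coefficient is omitted in this sketch.)"""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    return egfr

def ckd_group(egfr: float) -> str:
    """Dichotomization used in the paper: CKD if eGFR < 60."""
    return "CKD" if egfr < 60.0 else "non-CKD"
```

For example, a 50-year-old man with a creatinine of 1.0 mg/dL falls in the non-CKD group.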
All-cause mortality was the primary end-point of the study. Mortality data were obtained from medical records for inpatient deaths. Regarding end-point after discharge, information was obtained from primary care physician records, telephone interviews, questionnaires sent by mail, or domicile visits. Data on the vital status was available in the electronic records of the national assurance company in all patients. Medical records of the follow-up events were obtained in 90% of the patients. All-cause mortality was reported according to Academic Research Consortium recommendations [19]. Follow-up ended on 31 December 2017.
Statistical Analysis
Normally distributed continuous variables (e.g., age) were presented as mean ± standard deviation and the significance between independent groups was tested using the independent samples t-test. Skewed continuous variables (e.g., bSYNTAX, rSYNTAX scores) were presented as median (interquartile range) and were analyzed using the Mann-Whitney U test. Categorical variables were presented as counts and proportions, and for statistical comparisons we used the chi-square or the Fisher's exact tests. Normality in variable distributions was tested using the Shapiro-Wilk test. Correlations between continuous variables were assessed using the partial Spearman correlation coefficient, with adjustment for age, diabetes, left ventricular ejection fraction less than 30%, and prior myocardial infarction (MI) with the associated statistical test.
To assess the relationship between variables of interest and mortality, logistic regressions were used. Unadjusted models were built first, followed by models that included the variables of interest. Two models were developed: one adjusted for the presence of shock and TIMI flow 2/3 versus 0/1 and a second one adjusted for left ventricle low ejection fraction (<30%), diabetes mellitus, and prior MI. The odds ratio along with 95% confidence intervals and p-values were computed for each regression. The goodness-of-fit, the presence of multicollinearity, and misspecification were checked for each model. The log-linearity assumption was checked for continuous variables.
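The modeling strategy above, a logistic regression of mortality on the variables of interest with adjustment covariates, can be sketched on synthetic data. This is an illustrative sketch only: the data, effect sizes, and variable names below are invented for demonstration and are not the study's estimates.

```python
import numpy as np

def fit_logistic(X, y, iters=50):
    """Maximum-likelihood logistic regression via Newton-Raphson.
    X: (n, p) design matrix WITH an intercept column; y: (n,) in {0, 1}."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)                    # observation weights
        H = X.T @ (X * W[:, None])           # Hessian of the negative log-likelihood
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

# Synthetic cohort: mortality odds falling with eGFR, rising with shock.
rng = np.random.default_rng(0)
n = 500
egfr = rng.normal(63, 28, n)
shock = rng.binomial(1, 0.25, n)
true_logit = 1.0 - 0.04 * egfr + 1.2 * shock
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

X = np.column_stack([np.ones(n), egfr, shock])   # adjusted model: eGFR + shock
beta = fit_logistic(X, y)
odds_ratio_per_unit_egfr = np.exp(beta[1])
```

The exponentiated coefficient `np.exp(beta[1])` is the odds ratio per unit of eGFR, the quantity reported with its confidence interval in the paper's tables.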
A two-sided p-value < 0.05 was considered statistically significant. Statistical analysis was performed using the R environment for statistical computing and graphics (R Foundation for Statistical Computing, Vienna, Austria), version 3.4.4.
Results
Baseline renal function was evaluated in 81 patients with STEMI due to UPLMCA whose outcomes were presented in a previous paper [7]. Among the 30-day survivors, mean follow-up was 36 months.
At presentation, the mean eGFR was 63.03 ± 28.7 mL/min/1.73 m 2 . The distribution of baseline eGFR was roughly normal. Among the 81 enrolled patients, 40 were classified into a CKD group with a mean eGFR of 38.75 ± 13.97 mL/min/1.73 m 2 and 41 into a non-CKD group. All patients had preserved urine output before the PCI procedure.
Discussion
In patients with STEMI due to a UPLMCA culprit vessel treated using primary PCI, the main findings on the influence of renal dysfunction on short- and long-term outcomes are summarized below. When comparing the groups with and without CKD, as defined by the standard cut-off eGFR of 60 mL/min/1.73 m2, there were no differences in short- and long-term mortality. However, when renal function was analyzed as a continuous eGFR variable, it turned out to be an independent predictor of mortality at the 1-year follow-up, even when adjusted for cardiogenic shock and TIMI 0/1 flow. This means that progressively worse renal impairment was associated with steadily increasing all-cause mortality rates.
Our analysis offers some data on the interplay between renal dysfunction and mortality. Of the patients with STEMI due to UPLMCA, the CKD subgroup was more severely diseased, presenting with a higher lesion extent and complexity, as it is shown from their b-SYNTAX scores, which increase progressively with decreasing renal function (eGFR). The results of our study confirm those from post-mortem [8,20] and SYNTAX score studies [21] and extend them to these very acute patients with STEMI due to UPLMCA.
Following primary PCI, patients with renal failure had a higher r-SYNTAX score and lower SRIs as a consequence of less complete revascularization, most probably due to a propensity for complex and heavily calcified lesions. More residual lesions lead to poor outcomes since it is established that a greater CAD burden confers a significantly higher risk for clinical plaque progression [22]. Moreover, it has also been shown that patients with multivessel CAD during the original PCI were more likely to require non-target lesion PCI [22,23].
Plaques are more prone to rupture due to increased inflammation in CKD [3,9]. Optical coherence tomography (OCT) is one of the best imaging techniques for visualizing vulnerable plaques [24]. OCT studies revealed that a lower eGFR was associated with a larger lipid core in non-culprit plaques [9,25]. Additionally, calcium deposits damage vascular smooth muscle, impair vascular reactivity, and increase plaque rupture [1]. However, we did not observe an increase in nonfatal MI. It is likely that such events add to the outcomes as deaths and not non-fatal MI. This is in line with a previous study showing an increase in risk of cardiovascular death at 1 year without a concomitant increase in risk of MI [1].
CKD patients might develop acute heart failure more easily when the LM occludes, due to concerted myocardial damage: fibrosis, coronary microcirculatory dysfunction [3], and increased microvascular obstruction after primary PCI [26]. Our data showed a correlation between eGFR and cardiogenic shock.
Evaluation of kidney function in our retrospective study has several shortcomings. We deemed it useful to quantify renal impairment in our patients as it is an important predictor of outcome in the settings of both acute and chronic kidney diseases [27].
The main issue is the differential diagnosis between impaired kidney function due to CKD and acute kidney injury (AKI). Detection of renal impairment in STEMI patients at the moment of hospital admission may reflect a combination of acute hemodynamic instability and chronic renal disease, and therefore affects both short- and long-term risk [27]. In the absence of previous data on renal function, the probability of CKD versus AKI has to be inferred from the clinical context. In our case, creatinine was assessed at presentation, within the first few hours after the onset of pain. Specifically, in the cardiogenic shock patients of our cohort, the mean delay between pain onset and presentation to the emergency ward was 6.3 h. This short period of clinically evident acute illness is generally insufficient to generate a measurable increase in serum creatinine after acute kidney damage in most cases. In a large study of hypotensive patients, the average acute kidney injury onset time was 2.3 days after intensive care unit admission [28]. It is known that the rise in serum creatinine is commonly delayed after kidney function declines [29] and does not depict real-time changes in renal function. On the other hand, a decrease in urine output is a more sensitive early marker of AKI after onset of the risk factor, owing to the speed of the response [30]. Therefore, in this context, in patients with preserved urine output, impairment of kidney function was presumed to be chronic and was therefore evaluated via estimation of GFR according to the KDIGO guidelines [18]. We also acknowledge that not all patients with mildly decreased GFR (grades 1 and 2) are certain to have CKD, as other markers of abnormalities of kidney structure or function persisting for >3 months were not documented in our cohort. Nevertheless, eGFR is an accurate measurement of kidney function, even in the absence of such markers of chronic kidney damage [30].
This is a common problem in trials evaluating the impact of renal insufficiency on the outcomes of patients with UPLMCA lesions treated using PCI [10,11]. The affected population is not large, and previous records of renal function are usually unavailable [10]. It should be stressed that STEMI due to UPLMCA accounts for roughly 2% of all primary PCIs, making this an even smaller group [11]. Due to severe and extensive myocardial damage, the mortality rates are high, and some of the patients die before any medical contact.
However, regardless of the type of renal impairment, in our study, eGFR emerged as a predictor of one-year mortality independent of the cardiogenic shock.
There are studies suggesting that renal dysfunction is a good indicator of overall systemic function and of biological age, and therefore is not causative of mortality per se [5]. Indeed, in our study, CKD patients were older, more likely to have diabetes or prior MI, and had a lower left ventricular ejection fraction. The significant association of eGFR with 1-year mortality does not necessarily imply a causal relationship, although it turned out to be independent of diabetes, antecedents of MI, and ejection fraction. Previous studies have also shown an independent association with all-cause cardiac mortality, even after adjusting for cardiac risk factors [4], suggesting that renal dysfunction might initiate and accelerate adverse cardiac events [4].
Evidence from previous trials on the influence of low eGFR in STEMI patients due to UPLMCA is scarce and conflicting, but rather negative [11][12][13][14]. In the largest registry to date, Patel et al. [11] did not find renal dysfunction to be an independent short- or long-term mortality predictor in a large number of STEMI patients due to an occluded left main. In a Korean multiregistry analysis [31], the risk of stent failure itself was similar between CKD and non-CKD patients.
Limitations
First, the present study is an observational one; therefore, it could not account for unmeasured confounders, and causal inferences are difficult to make. Second, the number of patients is small. It could easily be increased if databases with angiographic Digital Imaging and Communications in Medicine (DICOM) standardized structured reporting, similar to those developed by our group in echocardiography [32,33], were developed and widely available. Third, the problem of the differential diagnosis between CKD and AKI is a major limitation. It must be emphasized that serum creatinine was measured on admission, and this might have led to the inclusion of individuals with acute kidney injury, whose eGFR was not estimated at equilibrium. However, this is an improbable major selection bias, since the time from symptom onset to sample collection was limited to a few hours and all included patients had preserved urinary output before the primary PCI procedure. Fourth, due to the low event-per-variable ratio, we could not combine all the variables from the two models into a single one without excessive overfitting.
Conclusions
In patients with STEMI due to UPLMCA lesions treated using primary PCI, low eGFR was associated with increased mortality at the 1-year follow-up. These worse outcomes in patients with low eGFR may be explained, at least partially, by the more complex coronary lesions (higher b-SYNTAX score) and less complete revascularization (higher r-SYNTAX score and lower SRI).
Gas Outburst Prediction Model Using Improved Entropy Weight Grey Correlation Analysis and IPSO-LSSVM
This paper investigates the problem of gas outburst prediction in the working face of a coal mine. Firstly, based on a comprehensive analysis of the influencing factors of gas outburst, an improved entropy weight algorithm is introduced into a grey correlation analysis algorithm; thus, reasonable weights and the correlation order of the influencing factors are obtained, improving the objectivity of the evaluation. The main controlling factors obtained are used as the input of the prediction model. Secondly, by utilizing improved particle swarm optimization (IPSO), the penalty factor and kernel parameter of the least squares support vector machine (LSSVM) are optimized to enhance the global search ability and avoid local optimal solutions, and a new prediction model of gas outburst based on IPSO-LSSVM is established. Finally, the prediction model is applied in the tunneling heading face 14141 of the Jiuli Hill mine in Jiaozuo City, China. The case study demonstrates that the prediction accuracy of the proposed model is 92%, which is improved compared with that of the SVM model and the GA-LSSVM model.
Introduction
Gas outburst is a complicated dynamic phenomenon in the process of coal mining [1]: a large amount of gas is ejected from gas-bearing coal rock into a mining face at rapid speed in a very short time. China is one of the countries most seriously stricken by gas outbursts and gas explosions in the world. As mining depth continues to increase, gas outbursts are becoming increasingly serious, especially under high geostress and high gas pressure conditions [2]. They gravely threaten the health and safety of miners, damage facilities, and decrease coal production. Therefore, scientific and accurate prediction of gas outburst is very important for the safety of coal mine production.
Over the past years, gas outburst prediction research has achieved fruitful results [3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20]. Numerous approaches are currently used to predict gas outbursts. Roughly speaking, these methods can be classified into two types: classical prediction methods and artificial intelligence prediction methods. Classical prediction methods use single-factor or multifactor comprehensive effects to predict from the outburst mechanism, mainly including D-K comprehensive factor prediction, geodynamic division, and other methods [3,4]. Although these methods are relatively simple in theory, a large amount of experimental data is required; the data acquisition process is complicated and carries high time and economic costs. With the rapid development of data mining technology and machine learning, methods such as data mining algorithms [5,6], neural networks [7], D-S evidence theory [8], rough set theory [9], machine learning algorithms [10][11][12], support vector machines (SVM) [13][14][15], and wavelet transforms [16,17] have gradually been adopted in the field of gas outburst prediction. In [6], a BP neural network and D-S theory were used to establish a gas outburst prediction model: the BP neural network was optimized by PSO in the feature layer, and D-S theory was used to make the fusion decision across the time and space domains. In [7], a PCA-BP neural network prediction model was established, with three principal components extracted through the PCA method. In [8], a gas outburst evaluation strategy was established by combining a fuzzy neural network with D-S theory, and the outburst level was accurately predicted. In [9], by combining an artificial neural network (ANN) with coupled fault tree analysis (FTA), a prediction model was proposed to improve prediction of the potential risk of gas outburst events.
The eight main model parameters identified by the FTA method were used as input variables to the ANN model. In order to overcome the problems of the traditional GA-BP model, an effective genetic algorithm was proposed to reduce its iterative calculation time so that the parameters of the BPNN could be determined automatically, enhancing the training ability of the BPNN [12]. In [14], a decision table of gas outburst intensity was established by employing RS theory and SVM, using the attribute reduction algorithm of rough set theory and the PSO algorithm to optimize the parameters of the SVM, which improved the prediction accuracy. All of the above studies have proposed their own methods for the prediction of gas outbursts and have achieved good results, but they also have certain limitations. For example, neural networks are prone to local minima and overfitting for small-sample problems, and face issues such as dimensional disaster and network-structure selection. In addition, the selection of the kernel function and penalty parameters of the traditional SVM model is difficult. Therefore, it is necessary to introduce a new effective method to improve gas outburst prediction accuracy.
Motivated by the above discussion, in this paper we study the analysis of prediction factors through improved entropy weight grey correlation and an IPSO-LSSVM prediction model. The main contributions can be summarized as follows: (1) Based on a comprehensive analysis of the influencing factors of gas outburst, an improved entropy weight algorithm is introduced into a grey relational analysis algorithm; thus, reasonable weights and the correlation order of the influencing factors are obtained, improving the objectivity of the evaluation. (2) By using IPSO to optimize the penalty factor and kernel width of LSSVM, a gas outburst prediction model based on IPSO-LSSVM is established, which better accounts for the combined effect of the influencing factors on outburst risk and improves the accuracy of gas outburst prediction. The rest of this paper is organized as follows: Section 2 identifies the main controlling factors influencing gas outburst. Section 3 adopts the LSSVM and IPSO methods to establish the gas outburst prediction model. Section 4 presents the application of the IPSO-LSSVM model in gas outburst prediction, and Section 5 concludes.
Identify the Main Controlling Factors Influencing Gas Outburst

Grey system theory is generally used to study uncertainty problems with small samples, poor information, and multiple factors. According to the entropy weight method, the smaller the information entropy of a factor, the greater the variation degree of the factor's value, the greater the role it plays in the analysis, and the greater its weight [18]. The entropy-weighted grey correlation method is concise and clear, with low data requirements and a small calculation amount, which makes it suitable for analyzing the main controlling factors.
Improved Entropy Weight Method.
The entropy weight method uses information entropy to measure the information utility value of each evaluation factor and to determine the entropy weight; it is an objective weighting method. The steps of the improved entropy weight method are as follows:

(1) Establish the original evaluation matrix R = (r_ij)_{m×n}. Suppose there are n evaluation factors and m items to be evaluated, where r_ij is the evaluation value of the i-th item under the j-th factor.

(2) Normalize the original evaluation matrix to obtain the normalized matrix R' = (r'_ij)_{m×n}. Since the dimensions of the evaluation factors differ, they cannot be analyzed and compared directly, so each factor is made dimensionless. For a positive factor,

r'_ij = (r_ij − r_min) / (r_max − r_min), (2)

and for a negative factor,

r'_ij = (r_max − r_ij) / (r_max − r_min), (3)

where r_min and r_max are the minimum and maximum values of the factor, respectively.

(3) Calculate the entropy value and entropy weight of each evaluation factor. The entropy of the j-th factor is

H_j = −(1/ln m) Σ_{i=1}^{m} P_ij ln P_ij, (4)

where P_ij = r'_ij / Σ_{i=1}^{m} r'_ij (i = 1, 2, ..., m; j = 1, 2, ..., n). The traditional entropy weight method calculates the entropy weight ω_j as

ω_j = (1 − H_j) / Σ_{j=1}^{n} (1 − H_j). (5)

It can be seen from this equation that when H_j → 1 (j = 1, 2, ..., n), the differences between the entropy values of the evaluation indexes are small, but the differences between the entropy weights are large. Therefore, this paper uses an improved formula, equation (6), in place of equation (5), so that a slight change in the entropy value of a factor produces only a slight change in its entropy weight; for factors carrying equivalent useful information, the weight information then remains consistent with the corresponding entropy level.
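The entropy computation in step (3) can be sketched as follows, using the traditional weight formula of equation (5); the paper's improvement alters only this last weighting step, and the helper name is ours.

```python
import numpy as np

def entropy_weights(R):
    """Entropy values and traditional entropy weights for an m x n
    evaluation matrix whose columns are already normalized to [0, 1]."""
    m, n = R.shape
    P = R / R.sum(axis=0, keepdims=True)             # share of each item under each factor
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)  # convention: 0 * log 0 = 0
    H = -plogp.sum(axis=0) / np.log(m)               # information entropy per factor
    w = (1.0 - H) / (1.0 - H).sum()                  # traditional weight of equation (5)
    return H, w
```

A factor whose values are identical across all items has entropy H = 1 and thus carries no discriminating information, so its weight collapses to (numerically) zero, which is exactly the behavior the improved formula of equation (6) is designed to soften.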
Improved Entropy Weight Grey Correlation Analysis Algorithm

Grey relational analysis (GRA) is a multifactor statistical analysis method. Compared with other theories, it plays an irreplaceable role in research on small-sample, poor-information uncertainty problems. Its principle is to use the similarity between the change curve of the object to be analyzed and an ideal data curve to determine the level of correlation between the two, and it easily handles correlations between elements with uncertain and incomplete information [16]. The grey correlation degree is calculated as follows:

(1) Determine the reference and comparison columns: let X_i = {x_i(k) | k = 1, 2, ..., n} be a comparison column and X_0 = {x_0(k) | k = 1, 2, ..., n} be the reference column.

(2) Apply dimensionless processing to the data.
(3) Calculate the absolute differences: with X_0 as the reference column, the absolute difference between x_i and x_0 at each point is

δ_i(k) = |x_0(k) − x_i(k)|.

(4) Calculate the grey correlation coefficients: the correlation coefficient between the k-th value of comparison column i and the k-th value of the reference column is

ξ_i(k) = (δ_min + λ δ_max) / (δ_i(k) + λ δ_max),

where δ_min and δ_max are the minimum and maximum of all the absolute differences, and λ ∈ (0, 1) is the resolution coefficient. This coefficient is introduced to improve the significance of the differences between the correlation coefficients.
(5) Calculate the grey correlation degree: in order to reduce the impact of the raw correlation coefficients and of human subjectivity, the entropy weight method is introduced to weigh each influencing factor objectively, and the improved grey relational entropy (GRE) method is proposed, removing the subjective impact while improving the calculation accuracy of the grey relational analysis. The grey correlation degree μ_i is obtained by combining the correlation coefficients ξ_i(k) with the entropy weights. The influencing factors are then sorted according to the correlation degree μ_i; the influencing factors with larger correlation degrees have a greater impact on the outburst risk.
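A minimal sketch of steps (3)–(5), assuming already-dimensionless series and using an equal-weight mean for the correlation degree (the paper instead combines the coefficients with the entropy weights; the function name is ours):

```python
import numpy as np

def grey_relational(reference, comparisons, lam=0.5):
    """Grey relational coefficients and degrees.
    reference: (n,) reference series; comparisons: (m, n) comparison series,
    both already dimensionless. lam is the resolution coefficient in (0, 1)."""
    delta = np.abs(comparisons - reference)          # absolute differences, step (3)
    dmin, dmax = delta.min(), delta.max()            # global extrema over all columns
    xi = (dmin + lam * dmax) / (delta + lam * dmax)  # correlation coefficients, step (4)
    degree = xi.mean(axis=1)                         # equal-weight correlation degree, step (5)
    return xi, degree
```

A comparison column identical to the reference attains the maximum degree of 1; ranking the degrees then yields the correlation order of the influencing factors.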
Least Squares Support Vector Machine (LSSVM)

SVM has great advantages over other methods in gas outburst prediction. Its characteristic is to construct the algorithm based on the principle of structural risk minimization, which gives good generalization ability and allows the global optimal solution to be obtained [15]. However, the standard SVM algorithm needs to solve a quadratic programming problem, which requires a large amount of calculation. LSSVM uses a least squares algorithm instead of the quadratic optimization in SVM and transforms the convex quadratic programming problem into the solution of a set of linear equations, which simplifies the calculation process and improves the learning speed [13,21,22]. The principle of LSSVM is to use a nonlinear function to map the input vector into a high-dimensional feature space and to construct a linear function in that space describing the nonlinear relationship between the input vector and the output variable while minimizing the structural risk. Let the training sample set be L = {(x_i, y_i) | i = 1, 2, ..., n}, where x_i ∈ R^d is the d-dimensional input vector and y_i ∈ R is the corresponding output. x_i is mapped into the high-dimensional feature space by a nonlinear function f(x), giving the linear regression function

y(x) = ω^T f(x) + b,

where ω and b are the weight vector and the bias, respectively. The optimization problem is expressed as

min J(ω, e) = (1/2) ω^T ω + (c/2) Σ_{i=1}^{n} e_i²,   subject to   y_i = ω^T f(x_i) + b + e_i,   i = 1, 2, ..., n,
where J(ω, e) is the structural risk, c is the adjustment (penalty) parameter used to balance generalization ability against empirical risk, and e_i ∈ R is the error variable. The Lagrange method is adopted to solve this optimization problem, and according to the KKT optimality conditions the following linear system is obtained:

[ 0   1^T      ] [ b ]   [ 0 ]
[ 1   Ω + I/c  ] [ α ] = [ y ],

where α is the vector of Lagrange multipliers. According to the Mercer condition, the kernel function is defined as Ω_kl = K(x_k, x_l) = f(x_k)^T f(x_l). The regression model of LSSVM is finally derived as

y(x) = Σ_{i=1}^{m} α_i K(x, x_i) + b,

where m is the number of input samples, x_i is the i-th input sample, and x is an input variable. Different kernel functions build different LSSVM models with different performance. The most widely applied radial basis kernel function is chosen in this paper:

K(x, x_i) = exp(−‖x − x_i‖² / (2σ²)),

where σ is the kernel function width.
Improved Particle Swarm Optimization Algorithm (IPSO).
The PSO algorithm considers individuals as particles moving at a certain velocity. The velocity is dynamically adjusted according to the optimal history of the particle itself and of the entire community, and the position is updated according to the adjusted velocity until the optimal solution is found [17,23]. In the m-dimensional search space, the entire community X is composed of n particles, X = {x_1, x_2, ..., x_n}. The position and velocity of the i-th particle are X_i = (x_i1, x_i2, ..., x_im)^T and V_i = (v_i1, v_i2, ..., v_im)^T, respectively. The historical optimal position of the particle is P_i = (p_i1, p_i2, ..., p_im)^T, and the historical optimal position of the entire community is P_g = (pg_1, pg_2, ..., pg_m)^T. The particles update velocity and position according to equations (16) and (17):

v_id ← ω v_id + c_1 r_1 (p_id − x_id) + c_2 r_2 (pg_d − x_id), (16)
x_id ← x_id + v_id, (17)

where ω is the inertia weight, r_1 and r_2 are random numbers in the range [0, 1], and c_1 and c_2 are learning factors. After multiple iterations, the individuals finally achieve optimization in the free search space. In the PSO algorithm, each particle moves toward the individual and global extrema, which can easily cause all particles to gather at a certain extreme point and stagnate, resulting in decreased convergence. Reference [17] proposed an s-dPSO optimization algorithm, which updates each dimension of the particle in each iteration and evaluates the fitness after the update; the new position of the i-th particle is formed dimension by dimension, as given in equations (18) and (19). Since each dimension is updated based on the fitness value before the update, the new particle values are the best values found since the iteration began, and the new velocity is updated accordingly. The aggregation degree of the particles is described by the dimensional diversity h(j): the smaller h(j), the worse the diversity and the more the particles aggregate in that dimension. The values h(j) are sorted in order, and the leading particles are mutated: the value of the particles in that dimension is redefined, breaking up the aggregated state of all particles in that dimension.
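A sketch of the standard global-best PSO updates of equations (16) and (17) on a toy objective; the paper's IPSO additionally performs per-dimension updates and diversity-triggered mutation, which are omitted here, and all names are ours:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0, seed=1):
    """Global-best PSO:  v <- w v + c1 r1 (pbest - x) + c2 r2 (gbest - x);
    x <- x + v  (equations (16) and (17))."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fi = f(xs[i])
            if fi < pbest_f[i]:               # update individual best
                pbest[i], pbest_f[i] = xs[i][:], fi
                if fi < gbest_f:              # update global best
                    gbest, gbest_f = xs[i][:], fi
    return gbest, gbest_f

# Sphere function: global minimum 0 at the origin.
best, best_f = pso_minimize(lambda x: sum(t * t for t in x), dim=2)
```

In the IPSO-LSSVM pipeline, f would instead be the validation error of an LSSVM trained with a candidate (c, σ) pair, so dim = 2 with appropriate bounds.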
The flowchart of optimizing the LSSVM parameters with the IPSO algorithm is displayed in Figure 1. The steps are as follows: (1) Normalize and preprocess the sample set. (2) Initialize the particle swarm parameters. (3) Calculate the fitness value of each particle. (4) Update the individual optimal fitness value of each particle and the global optimal fitness value of the population. (5) Optimize the velocities and positions of the particles according to equations (18) and (19). (6) Determine whether the termination condition is met; if it is, end the search and output the result; otherwise, return to (3) and continue. (7) Substitute the obtained optimal values of the penalty factor c and the kernel function parameter σ into the LSSVM regression prediction model, input the training samples, and start training. (8) Determine whether the number of iterations has been reached; if it has, end; otherwise, return to (7).
Establishment of the IPSO-LSSVM Prediction Model

The specific process of the IPSO-LSSVM prediction model for gas outburst is as follows:

Step 1. Collect data for the prediction area.
Step 2. Divide the collected data into reference columns and comparison columns, analyze the data with the improved entropy weight grey correlation analysis, sort by correlation degree, and use the prediction model to initially train and predict with factors constrained by different correlation levels. The dimension of the input parameters is optimized according to the prediction results.
Step 3. Part of the selected parameter data is used as training samples and the rest as test samples.
Step 4. Optimize the two parameters of LSSVM through IPSO and find the optimal combination of c and σ.
Step 5. Put the optimized c and σ parameters into the LSSVM model for training.
Step 6. Substitute the test samples into the trained model and evaluate the errors and fitting degree between the prediction results and the actual results. The flowchart of the IPSO-LSSVM model of gas outburst prediction is displayed in Figure 2.
Extraction of Factors Influencing Gas Outburst.
The mechanism of gas outburst in the working face is complicated; it is the result of a combination of factors such as geostress, coal seam gas, and the physical and mechanical properties of the coal body [5]. Based on historical data of gas outburst accidents and relevant references [2][3][4][24], this paper selects the failure type of coal (X1), initial velocity of gas emission (X2, mL/s), coal solidity coefficient (X3), gas content (X4, m³/t), drill cuttings amount (X5, kg/m), gas pressure (X6, MPa), depth of coal mining (X7, m), and coal seam thickness (X8, m) as the influencing factors for the occurrence of outbursts. The amount of coal (rock) ejected is usually used as the basis for classification. According to the actual situation on site, the outburst intensity has 4 prediction classes: I (0–1 t), II (1–50 t), III (50–100 t), and IV (100 t or more). Among them, I represents no outburst, II a small outburst, III a medium outburst, and IV a large-scale outburst [25,26]. The data set for prediction is from the No. 4 coal seam of the Jiuli Hill mine in Jiaozuo City, China. There are 6 minable coal seams, and the coal produced is mainly lean coal and anthracite. Due to the high gas content and the many geological structures near the working face, the coal seam has had many outbursts. In the grey correlation analysis between outburst risk and the influencing factors, the outburst risk is taken as the reference column and the influencing factors as the comparison columns. The influencing factors in 60 data sets of field engineering cases from the Jiuli Hill mine were selected as sample data to establish the sample matrix, as shown in Table 1.
Based on the improved entropy weight grey correlation analysis algorithm, the initial data are standardized using equations (2) and (3), and the entropy weight of each influencing factor is then calculated using equations (4) and (6). The results are shown in Table 2, which shows that the initial velocity of gas emission (X2) has the largest weight of all the influencing factors, indicating that it provides the most useful information for the prediction of gas outburst.
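Equations (4) and (6) are not reproduced in this excerpt, but the standard entropy weight computation they refer to can be sketched as follows (Python, with the exact normalization in the paper possibly differing):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weights for an (n samples x m factors) positive data matrix."""
    n, m = X.shape
    P = X / X.sum(axis=0)                      # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        plnp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plnp.sum(axis=0) / np.log(n)          # entropy of each factor
    d = 1.0 - e                                # degree of diversification
    return d / d.sum()                         # normalized weights
```

A factor whose values barely vary across samples has entropy near 1 and therefore near-zero weight, which is exactly why a highly informative factor such as the initial velocity of gas emission receives the largest weight.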
In order to analyze and calculate the correlation between the influencing factors and the outburst risk in coal and gas outburst prediction, the type of gas outburst was selected as the reference column and the influencing factors as the comparison columns.

Figure 1: Flowchart of optimizing the LSSVM parameters with IPSO.
Mathematical Problems in Engineering 5
Figure 2: Flowchart of the IPSO-LSSVM gas outburst prediction model.

After preprocessing the data in Table 1, the grey correlation coefficient is calculated using equation (8), and the average grey correlation and embedded grey entropy weight are then obtained from equations (9) and (10). The results are shown in Tables 3 and 4. As can be seen from Table 4, the order of the correlation degrees of the influencing factors is X4 > X2 > X3 > X1 > X6 > X7 > X8 > X5. The weighted grey correlations of gas content and the coal solidity coefficient are 0.1685 and 0.1593, respectively, indicating that these two factors have the greatest impact on the danger of gas outburst. Therefore, in practice, gas content and the initial velocity of gas emission can be emphasized when controlling the abovementioned factors for gas outburst prediction, which can produce more ideal results.
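Equation (8)'s grey relational coefficient is conventionally ξᵢ(k) = (Δmin + ρΔmax)/(Δᵢ(k) + ρΔmax), with Δᵢ(k) = |x₀(k) − xᵢ(k)| and resolution coefficient ρ = 0.5; averaging the coefficients over samples gives the relational degree of equation (9). A minimal Python sketch with illustrative data (the embedded entropy weighting of equation (10) is omitted here):

```python
import numpy as np

def grey_relational(ref, cmp_cols, rho=0.5):
    """Grey relational coefficients and degrees.

    ref      : (n,) reference series (outburst risk), already normalized
    cmp_cols : (n, m) comparison series (influencing factors), normalized
    """
    delta = np.abs(cmp_cols - ref[:, None])
    dmin, dmax = delta.min(), delta.max()
    xi = (dmin + rho * dmax) / (delta + rho * dmax)   # coefficients, in (0, 1]
    return xi, xi.mean(axis=0)                        # degree per factor
```

A factor identical to the reference series attains degree 1; the closer a factor tracks the outburst risk, the higher its degree, which is the basis for the ranking X4 > X2 > ... reported above.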
Prediction Experiment of the IPSO-LSSVM Model.
This experiment was performed in MATLAB R2016b and involved 60 data sets, of which 48 were used for training and 12 for prediction. According to the results of the improved entropy weight grey correlation analysis, gas content (X4), initial velocity of gas emission (X2), coal solidity coefficient (X3), and failure type of coal (X1) were selected as the main controlling factors influencing coal and gas outburst. In order to verify the effectiveness of the prediction model in this paper, the IPSO-LSSVM model is compared with the GA-LSSVM [13] and SVM [15] models. The population optimal fitness and average fitness curves of the IPSO and GA algorithms are shown in Figure 3, and the prediction results are shown in Figure 4.
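The IPSO velocity and position updates of equations (18) and (19) are not reproduced in this excerpt; the sketch below therefore uses the standard PSO form with a fixed inertia weight (an improved variant would typically adapt ω over the iterations). The parameter values and the sphere test function are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    # Standard updates: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x); x <- x + v
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

def pso_minimize(f, dim=2, n_particles=20, iters=100):
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        x, v = pso_step(x, v, pbest, gbest)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()
```

In the prediction model the fitness function would be the LSSVM cross-validation error as a function of (c, σ) rather than the sphere function used here.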
As can be observed from the population fitness curves, for the IPSO algorithm the optimal fitness value of the population began to stabilize after the 4th generation. Although there were slight fluctuations in the subsequent evolution process, the fluctuation range was small, and the value was basically stable near 0.027. For the GA, the optimal fitness value converged to a small extent after the population evolved to the 7th generation, converged substantially between the 22nd and 23rd generations, and basically stabilized at the 27th generation. Therefore, the convergence speed of the IPSO is significantly better than that of the GA. In addition, considering calculation speed and simplicity, the GA has the disadvantages of a complex structure, many parameters, and slow calculation. The kernel function of the LSSVM model was selected by comparing three kernel functions, with the other parameters set to the system default values under the same conditions. The relative error and standard deviation of the gas outburst prediction models established with the three kernel functions during training were compared, as shown in Table 5. Table 5 shows that the relative error and standard deviation of the prediction model established with the RBF_kernel function are smaller than those of the models established with the LIN_kernel and Poly_kernel functions. Therefore, the RBF_kernel function is used in this study. Figure 4 shows that the prediction results of the IPSO-LSSVM model agree well with the actual intensity. The accuracy of the SVM prediction model is 67%, that of the GA-LSSVM model is 83%, and that of the proposed model is 92%, indicating that the IPSO-LSSVM model has higher accuracy. Figure 5 shows that the average relative errors of the three prediction models are 2.0%, 5.8%, and 9.0%, respectively; the IPSO-LSSVM model has the smallest error, followed by the GA-LSSVM model.
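The reported accuracies (e.g., 11 of 12 test cases correct ≈ 92%) and average relative errors follow directly from the test-set predictions. A small sketch with made-up predictions, not the paper's data:

```python
import numpy as np

def accuracy(pred, actual):
    # Fraction of exactly matching class labels.
    pred, actual = np.asarray(pred), np.asarray(actual)
    return float((pred == actual).mean())

def mean_relative_error(pred, actual):
    # Average of |pred - actual| / |actual| over the test set.
    pred = np.asarray(pred, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.mean(np.abs(pred - actual) / np.abs(actual)))
```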
Both the IPSO-LSSVM and GA-LSSVM models have certain optimization effects, but the IPSO-LSSVM model shows higher adaptability. For the SVM model, the combination of its two parameters is liable to fall into a local optimum, which affects the prediction accuracy.
Conclusions
In this paper, a new prediction model of gas outburst based on the improved entropy weight grey correlation analysis and IPSO-LSSVM is established. The main conclusions are as follows: (1) The improved entropy weight algorithm is introduced into the grey correlation analysis algorithm to identify the main influencing factors. Reasonable weights and a correlation ordering of the influencing factors are obtained, avoiding the subjectivity of weight determination and reducing the fluctuation of the correlation coefficients. (2) Using IPSO to optimize the penalty factor and kernel width parameter of the LSSVM enhances the global search ability and convergence speed of the entire system. The gas outburst prediction model based on IPSO-LSSVM is established, which improves the accuracy of gas outburst prediction. (3) The IPSO-LSSVM model proposed in this paper is applied in the tunneling heading face 14141 of the Jiuli Hill mine in Jiaozuo City, China. The case study demonstrates that the IPSO-LSSVM model has higher accuracy than the SVM and GA-LSSVM models. It should be mentioned that although the proposed prediction model is shown to be more accurate, gas outburst is a complex nonlinear dynamic process affected by many factors. There are still interesting issues to be studied, such as the gas outburst mechanism and the application of newly developed methods.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Mechanisms, Risk Factors, and Management of Acquired Long QT Syndrome: A Comprehensive Review
Long QT syndrome is characterized by prolongation of the corrected QT (QTc) interval on the surface electrocardiogram and is associated with precipitation of torsade de pointes (TdP), a polymorphic ventricular tachycardia that may cause sudden death. Acquired long QT syndrome describes pathologic, excessive prolongation of the QT interval upon exposure to an environmental stressor, with reversion back to normal following removal of the stressor. The most common environmental stressor in acquired long QT syndrome is drug therapy. Acquired long QT syndrome is an important issue for clinicians and a significant public health problem, given the large number of drugs with this potentially fatal adverse effect, the large number of patients exposed to these drugs, and our inability to predict the risk for a given individual. In this paper, we focus on mechanisms underlying QT prolongation and risk factors for torsades de pointes, and describe the short- and long-term treatment of acquired long QT syndrome.
Introduction
Acquired long QT syndrome is a disorder of cardiac repolarization most often due to specific drugs, hypokalemia, or hypomagnesemia that may precipitate torsade de pointes and cause sudden cardiac death. Selzer and Wray first reported QT prolongation and ventricular fibrillation as a response to quinidine in 1964 [1]. Two years later, Dessertenne [2] described torsades de pointes, a polymorphic ventricular tachycardia in which QRS complexes twist around the isoelectric line in a sinusoidal fashion, in an elderly woman with complete atrioventricular block and syncopal attacks (Figure 1). Torsade de pointes is usually self-limited but may degenerate into ventricular fibrillation. The incidence of acquired long QT syndrome is difficult to estimate. Although the chances of provoking torsades de pointes with a noncardiac medication are generally lower than with antiarrhythmic medications, a number of noncardiovascular drugs have recently been withdrawn from the market because of unexpected sudden cardiac deaths associated with prolongation of the QT interval and torsades de pointes [3]. The frequency of drug-induced long QT syndrome and our inability to predict the risk for a given individual make long QT syndrome an important issue for clinicians. This paper focuses on mechanisms underlying QT prolongation and risk factors for torsades de pointes, and describes the short- and long-term treatment of acquired long QT syndrome.
QT Interval Measurement
The QT interval on the surface electrocardiogram reflects ventricular depolarization and repolarization. It is measured from the beginning of the QRS complex to the termination of the T wave and averaged over 3 to 5 beats in a single lead. The longest QT intervals are usually measured in the precordial leads, and leads V3 or V4 appear more reliable for assessing QT prolongation [4]. Prominent U waves should be included in the measurement if they merge into the T wave. The QT interval is influenced by heart rate, so the RR interval preceding the QT interval should be measured for rate correction. Several formulae have been proposed for heart rate correction of the QT interval. The most commonly used are Fridericia's cube root formula (QTc = QT/RR^(1/3)) and Bazett's square root formula (QTc = QT/RR^(1/2)). Although there is no consensus on the best QTc method, Bazett's formula is considered the gold standard, even though it may overestimate QT prolongation [5]. In general, QT prolongation is considered present when the QTc interval is greater than 440 ms, but arrhythmias are most often associated with values of 500 ms or more. The QTc interval is longer in adult women because of a relative shortening of the QTc interval in men during adolescence [6]. Intervals of 440 to 460 ms in men and 440 to 470 ms in women are considered borderline [7]. QT intervals may also vary due to ECG acquisition technique, electrolyte imbalance, sympathovagal activity, intra- and interobserver variability, and diurnal variation, which can be up to 75-100 ms [8,9]. It is important to note that every individual has a different relation between the QT interval and the heart rate, and even though rate-correction formulae are clinically useful, they may not be accurate enough, especially when assessing the minor drug-induced changes of the QT interval.
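The two correction formulae can be written directly in code; the thresholds below are the ones quoted in this section, while the function names and example values are illustrative.

```python
def qtc_bazett(qt_ms, rr_s):
    """Bazett: QTc = QT / RR^(1/2) (QT in ms, RR in seconds)."""
    return qt_ms / rr_s ** 0.5

def qtc_fridericia(qt_ms, rr_s):
    """Fridericia: QTc = QT / RR^(1/3)."""
    return qt_ms / rr_s ** (1.0 / 3.0)

def classify_qtc(qtc_ms, sex):
    """Rough classification using the thresholds quoted in the text."""
    borderline_upper = 460.0 if sex == "male" else 470.0
    if qtc_ms >= 500.0:
        return "high risk"        # values most often associated with TdP
    if qtc_ms > borderline_upper:
        return "prolonged"
    if qtc_ms > 440.0:
        return "borderline"
    return "normal"
```

For example, at a heart rate of 75 beats/min (RR = 0.8 s) a measured QT of 400 ms gives a Bazett QTc of about 447 ms (borderline for a man) but a Fridericia QTc of about 431 ms (normal), illustrating Bazett's tendency to yield longer corrected values at rates above 60 beats/min.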
Mechanisms of Drug-Induced QT Prolongation
The QT interval on the surface electrocardiogram represents the summation of action potentials in ventricular myocytes. QT prolongation entails action potential prolongation, which results from an increase in inward current (e.g., through sodium or calcium channels) or a decrease in outward current (e.g., through potassium channels). Myocardial repolarization is primarily mediated by efflux of potassium ions. Two subtypes of the delayed rectifier potassium current, IKr (rapid) and IKs (slow), are predominantly responsible for repolarization. The two currents have different activation, inactivation, and deactivation characteristics, different sensitivities to blocking drugs [10][11][12], and different rate and catecholamine sensitivity [13,14], and were later found to be the result of the expression of different genes [15,16]. The hallmark mechanism of acquired LQTS and TdP is the blockade of IKr by specific drugs [17]. IKr current proteins are encoded by the human ether-a-go-go-related gene HERG (now termed KCNH2) [18]. Two structural characteristics account for the unusual susceptibility of IKr channels to various drugs [19]. First, the presence of aromatic amino acids (Tyr652 and Phe656) with side chains directed towards the large central cavity of the pore region provides high-affinity binding sites for many compounds. These amino acids are not present in most other potassium channels, and mutation of KCNH2 at these sites to other amino acids reduces the binding affinity of several drugs. Second, while most potassium channels contain two proline residues in the helix that forms part of the pore and restricts access to the drug binding site, these two prolines are absent in KCNH2. Mutation of these residues to the conserved Pro-Val-Pro results in reduced drug binding [20].
IKr blockade causes a delay in the phase 3 rapid repolarization of the action potential (Figure 2), which is reflected as QT prolongation. Prolonged repolarization can cause early afterdepolarizations (EADs) due to activation of inward depolarizing currents (L-type calcium channels or the sodium-calcium exchange current) [21], which appear as depolarizing oscillations in membrane voltage during phases 2 and 3 of the action potential (Figure 3). EADs that reach threshold voltage can cause a ventricular extrasystole preceded by a long QT interval on the surface ECG. On the other hand, dispersion of refractoriness due to heterogeneity in ventricular repolarization can create zones of unidirectional block. Repetitive extrasystoles, unidirectional block, and zones of slow conduction can lead to reentry and TdP [22]. Torsades de pointes is usually preceded by a short-long-short ECG sequence (Figure 4) [23]. In this case, one or more premature ventricular complexes are followed by a compensatory pause. The subsequent sinus beat may have an especially long QT interval and deformities of the T or U waves, and is followed by another premature ventricular complex that precipitates torsades de pointes [24].
Several other ECG variables besides the QT interval have been proposed as predictors of TdP. QT dispersion, which represents the difference between maximum and minimum QT intervals, was supposed to be a more direct measure of spatial heterogeneity of repolarization [25], but proved to be a disappointing tool, because it is mostly dependent on T wave morphology [26]. An increasing number of basic and clinical studies suggest that the interval from the peak to the end of the electrocardiographic T wave (Tp-e) corresponds to transmural dispersion of repolarization [27][28][29]. Prolonged QTc interval and Tpeak-Tend were found to correlate with increased risk for torsades de pointes during acquired bradyarrhythmias [30]. The Tp-e/QT ratio serves as a more sensitive index of arrhythmogenesis, as it provides an estimate of dispersion of repolarization relative to the total duration of repolarization. Thereby, it eliminates the confounding effects of variability of heart rate and interindividual variation of the QT interval [31]. The outlined evidence clearly suggests the applicability of the Tp-e/QT ratio as a potentially important index of arrhythmogenesis, even though direct validation of the Tp-e interval as a body surface index of transmural dispersion of repolarization is still lacking [32]. More recent studies have also provided guidelines for the estimation of transmural dispersion of repolarization in the case of more complex T waves, including negative, biphasic, and triphasic T waves [33]. In such cases, the interval from the nadir of the first component of the T wave to the end of the T wave was shown to provide an electrocardiographic approximation of transmural dispersion of repolarization. T-wave alternans, or a change in amplitude or polarity of the T wave on alternating beats, has been observed in LQTS as a precursor to TdP [34].
T-wave alternans is thought to result from alternation of the M-cell APD, leading to exaggeration of transmural dispersion of repolarization during alternate beats, and thus the potential for development of TdP [35]. Abnormal, giant T-U waves and a slow QRS upstroke separate TdP initiation in LQTS patients from PVCs in other heart disease and from other PVCs in LQTS patients.
Abnormal T-U waves support the notion that EADs are the trigger for TdP in LQTS. If found, they may be an indicator of imminent risk of TdP [36]. Short-term variability of QT intervals (as measured from 30 consecutive beats) is increased in patients with a history of drug-induced long QT syndrome, suggesting that it could prove to be a useful, noninvasive, easily obtainable parameter aiding the identification of patients at risk for potentially life-threatening arrhythmia in the context of drugs with QT-prolonging potential [37].
Although evaluating the effect of a new drug on the QTc interval is important, conclusions on the potential clinical risk of TdP associated with its use, based solely on its ability to prolong the QTc interval, might turn out to be highly flawed. Tpeak-Tend measurement and Tp-e/QT ratio, giant T-U waves, slow QRS upstroke, and short-term variability of QT intervals tend to be useful clinical variables to predict risk of TdP.
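Given the fiducial points of a single beat, the Tp-e interval and the Tp-e/QT ratio reduce to simple interval arithmetic. A sketch with hypothetical landmark times (not from the paper):

```python
def tpe_qt_ratio(q_onset_ms, t_peak_ms, t_end_ms):
    """Tp-e (peak-to-end of the T wave) divided by the full QT interval."""
    qt = t_end_ms - q_onset_ms       # QRS onset to T-wave end
    tpe = t_end_ms - t_peak_ms       # T-wave peak to T-wave end
    return tpe / qt
```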
The Scientific World Journal
Risk Factors
Multiple clinical risk factors (Table 1) are often present in an individual case. These factors provide a starting point for basic research into underlying mechanisms at the genetic, molecular and cellular level. The occurrence of drug-induced LQTS is unpredictable in any given individual, but a common observation is that most patients have at least one identifiable risk factor in addition to drug exposure [38].
A female preponderance has been consistently observed in multiple studies, with TdP occurring two to three times more commonly in women than in men [39]. These clinical observations, coupled with the finding that the QT interval shortens after puberty in males but not in females [40], suggest that sex hormones modulate repolarization. Testosterone, by increasing IKr, shortens the QTc and has been implicated as the major factor lowering the risk of TdP in males [41].
Hypokalemia is another common risk factor in drug-induced LQTS. Low extracellular potassium paradoxically reduces IKr through enhanced inactivation [42] or exaggerated competitive block by sodium [43]. As a result, hypokalemia prolongs the QT interval. Most important in clinical practice, however, is the fact that low extracellular potassium increases drug-induced IKr blockade [44]. Correction of extracellular potassium to the high normal range can shorten the QT interval and correct associated morphological abnormalities [45,46].
Pauses, usually after an ectopic beat, precipitate druginduced TdP. It is presumed that pause generates the dispersion of many electrophysiological properties, notably repolarization times, that underlie torsades de pointes [47]. In Holter recordings of patients with drug-induced TdP an increase in underlying sinus heart rate was reported in the minutes prior to an event [48]. This finding suggests that a pause in the setting of heightened sympathetic activation and long QT intervals may be especially arrhythmogenic.
The period shortly after conversion of atrial fibrillation is characterized by increased risk of torsades de pointes. Studies using QT/RR plots during atrial fibrillation have shown rate-independent QT prolongation after conversion to sinus rhythm [49]. Dofetilide causes only minor QT prolongation during atrial fibrillation, but significantly more QT prolongation when given to the same patients after cardioversion to sinus rhythm [50]. Congestive heart failure [51] and left ventricular hypertrophy are other high-risk situations for drug-induced torsades de pointes, but further investigation is needed on molecular and cellular mechanisms. For the majority of drugs (with the exception of class IA drugs), risk increases with higher drug concentrations. Class IA drugs (quinidine, disopyramide, and procainamide) block outward potassium currents and inward sodium currents. Sodium current blockade increases as serum levels increase, but potassium current blockade predominates at low serum levels. Therefore, TdP frequently occurs at low or subtherapeutic serum levels [52]. Administration of more than one drug that prolong repolarization increases the risk of drug-induced LQTS, but in most cases the mechanism of increased risk is due to drug-drug interactions altering metabolism, rather than simple additive effects on IKr. Cytochrome P450 superfamily of proteins is responsible for the metabolism of most of the drugs by liver and CYP3A4 is the predominant cytochrome P450. Coadministration of drugs which are substrates for CYP3A4 and/or IKr blockers results in further QT prolongation. Drugs that prolong QT interval and inhibitors of CYP3A4 are shown in Tables 2 and 3. Amiodarone rarely causes torsades de pointes despite significant QT prolongation. 
Amiodarone blocks IKr without reverse use dependence and prolongs action potential duration in a homogeneous manner, thus reducing heterogeneity of refractoriness and making the myocardium less susceptible to reentry. Additional electrophysiologic effects that explain its safety include noncompetitive β antagonism and inward L-type calcium channel blockade, which may reduce EADs [53]. The incidence of torsades de pointes at currently used doses is <1% [54,55], while with sotalol the incidence ranges from 0.8 to 3.8% [56] and with ibutilide from 3.6 to 8.3% [57,58]. Verapamil, even though a potent IKr blocker, rarely causes torsades de pointes [59]. Verapamil reduces EADs by blocking the inward calcium current [60], reduces transmural dispersion of refractoriness, and shortens the QT interval and the incidence of TdP in a model of acquired LQTS from combined IKs and IKr block [61]. Subclinical mutations or polymorphisms in congenital LQTS genes have been described as a risk factor for the drug-associated form [38]. Patients with subclinical congenital LQTS may develop TdP after exposure to a QT-prolonging agent [62]. In approximately one hundred patients with the drug-induced form of LQTS, congenital LQTS disease genes were identified in 5-10%, and their mutations classified them as having the congenital syndrome [63]. Identification of these cases emphasizes the increasing recognition of incomplete penetrance. As a result, many patients with the congenital syndrome have normal baseline ECGs but may be at increased risk for torsades during drug challenge [64,65]. First-degree relatives of patients with drug-induced TdP exhibit greater abnormalities of cardiac repolarization in comparison with first-degree relatives of patients tolerating QT-prolonging antiarrhythmic therapy [66].
Understanding this variability in the risk of torsade de pointes involves defining the molecular mechanisms that control the duration of the action potential and the QT interval in the normal heart and in diseases such as the congenital long QT syndrome or heart failure. The term repolarization reserve has been proposed as a unifying framework for the analysis of risk factors and their clinical mechanisms [67]. Repolarization reserve characterizes the capacity of the myocardium to effect orderly and rapid repolarization through normal mechanisms. In normal hearts, repolarization is accomplished by multiple and redundant mechanisms. The presence of a single risk factor is usually insufficient to elicit a LQTS phenotype; multiple subclinical lesions in the repolarization process are necessary before superimposition of an IKr-blocking drug can produce marked action potential prolongation and torsades de pointes.
Genetic modulation of repolarization reserve is thought to contribute to interindividual differences in susceptibility to QT-prolonging drugs. Genetic variations in KCNH2, KCNE, KCNE2, and ANKB have been identified in some patients with drug-induced torsades de pointes [68,69]. The gene in which mutations seem most common is KCNQ1, whose expression results in IKs [62,63]. Studies in human myocytes and in computational models have implicated variability in IKs amplitude as a major contributor to variability in the response to IKr block (i.e., to repolarization reserve) [70,71]. IKs amplitude is readily increased by interventions such as adrenergic stimulation [72] or endothelin [73], but also by a compensatory increase through posttranscriptional upregulation of the underlying subunits of IKs, likely mediated by microRNA changes due to sustained reductions of IKr [74]. The way in which these and other mechanisms might contribute to IKs regulation during challenge with an IKr blocker remains a broad area for investigation, both at the clinical and at the molecular level.
Current studies investigate the relationship of common variants within the human genome termed polymorphisms and variable risk for torsades de pointes. The most compelling example to date is a single nucleotide polymorphism (SNP), common in African Americans, that results in substitution of a tyrosine for serine at position 1103 of the cardiac sodium channel. The SCN5A variant S1103Y points to the potential role of ethnicity as a genetic determinant of repolarization reserve [75]. This is further confirmed by the unique distribution of certain ion channel variants across different ethnic groups [76][77][78]. Pharmacogenetics additionally may determine arrhythmia risk in patients with acquired LQTS. Genetically determined reduced activity of cytochrome P450 enzyme CYP3A4 may decrease efficient metabolism of the QT prolonging drugs thioridazine, erythromycin, and terfenadine [79].
Treatment
The cornerstone of the management of acquired LQTS is the identification and discontinuation of any precipitating drug and the aggressive correction of any metabolic abnormalities, such as hypokalemia or hypomagnesemia. Most episodes of torsade de pointes are short-lived and terminate spontaneously. However, prolonged episodes result in hemodynamic compromise and require immediate cardioversion.
Short-term treatment of the syndrome focuses on preventing recurrence of torsade de pointes and includes administration of intravenous magnesium sulfate and temporary transvenous cardiac pacing; intravenous isoproterenol is rarely needed. An important step in the management of acquired LQTS is withdrawal of offending agents and correction of electrolyte abnormalities [80]. Although lidocaine, phenytoin, and atropine have been reported to be beneficial, their effectiveness is uncertain [81].
Intravenous magnesium is the agent of choice for immediate treatment of torsade de pointes, irrespective of the serum magnesium level. A 2 g bolus of magnesium sulfate is followed by an intravenous infusion of magnesium at a rate of 2-4 mg per minute [82]. The mechanism by which magnesium prevents recurrences of torsade de pointes is unclear; its action is probably mediated through blockade of sodium or calcium currents. The only side effect of intravenous magnesium is flushing during the bolus injection. Administration of potassium is an important adjunct to intravenous magnesium for the short-term prevention of torsade de pointes, especially if the serum potassium level is low; serum potassium should be maintained in the high normal range. Overdrive transvenous pacing shortens the QTc and is highly effective in preventing recurrences of torsades de pointes [83], especially when they are precipitated by a pause or bradycardia. Short-term pacing rates of 90 to 110 beats/min are recommended. Cardiac pacing prevents pauses and shortens the QTc interval by enhancing the repolarizing potassium currents [84]. Isoproterenol is useful if temporary pacing is unavailable or while preparing for transvenous catheter insertion [85]. Unlike in acquired LQTS, isoproterenol is contraindicated in patients with congenital LQTS or ischemic heart disease. Its side effects include palpitations and flushing.
Long-term treatment is rarely required. Conditions that predispose to electrolyte imbalance must be corrected. In cases of sick sinus syndrome or atrioventricular block with bradycardia, permanent pacing may be indicated [86].
The Politics of Subnational Undemocratic Regime Reproduction in Argentina and Mexico
This article studies the continued existence of subnational undemocratic regimes in Argentina and Mexico, two countries that have recently experienced national democratization. The first part of the article offers a conceptualization of subnational democracy and measures its territorial extension across all subnational units. The second part explores a common, albeit not systematically tested explanation about subnational undemocratic regime continuity, namely, that these regimes persist because they meet national incumbents’ strategic political needs. This claim is tested using statistical analyses to contrast patterns of spending across undemocratic subnational units during the presidencies of Menem (1989-1999), De la Rúa (2000-2001), Duhalde (2002), and Kirchner (2003-2007) in Argentina, and Fox (2000-2006) in Mexico. Contradicting conventional wisdom, the results show that presidents only reproduce a handful of subnational undemocratic regimes, as not all of them can meet presidential needs. In addition, the results reveal that the strategic calculation of presidents regarding this reproduction is dictated by factors that have been largely overlooked by the literature.
Introduction
During the past two decades a growing number of Latin American countries have transitioned away from autocracy and authoritarianism towards democracy.[1] However, as several works show, much of this democratic advancement has been territorially uneven and mostly limited to the national level.[2] As a result of this phenomenon, many new democracies are characterized by what scholars have referred to as "regime juxtaposition": the prevalence of subnational undemocratic regimes alongside a democratic national government.[3] The existence of regime juxtaposition has presented analysts with the twofold challenge of defining and measuring subnational undemocratic regimes (SURs),[4] and advancing general explanations to account for the causes of SUR continuity. With the exceptions of Solt (2003) and Gervasoni (2010, in this issue), however, researchers have devoted little attention to issues of conceptualization. As a result of this neglect, SURs have not been clearly defined, and for that reason have been largely misidentified and misclassified. In terms of explanations, scholars have produced compelling and rich accounts of the causes of SUR continuity. These explanations, nevertheless, have been mostly drawn from case studies and small-N works, and have rarely been tested on the totality of existing SURs. As a result, these accounts have fallen short of providing general explanations that are valid for the universe of SURs. This lack of theory testing has had two main drawbacks: little accumulation of replicable findings on the causes of SUR continuity, and an under-specification of the scope conditions that guide these explanations.
Drawing on Argentina and Mexico, this paper seeks to contribute to the study of regime juxtaposition in Latin America in three ways. Conceptually, the article advances a careful characterization and operationalization of subnational political regimes. In doing so, the paper helps researchers "map the terrain" and spell out more clearly what we talk about when we talk about SURs. Empirically, the article measures the level of democracy in all Argentine and Mexican subnational units. In this way, the paper not only helps distinguish between democratic and undemocratic subnational regimes, but also singles out the universe of SURs (in each country) upon which theories of regime juxtaposition should be applied.

1 For their excellent comments on previous drafts I thank Juan Bogliaccini, Todd Eisenstadt, Eugenia Giraudy, Florencia Guerzovich, Jonathan Hartlyn, Evelyne Huber, Sandra Ley, Lars Schoultz, John D. Stephens, and two anonymous reviewers. 2 See Eisenstadt (2004); Snyder (1999); Solt (2003); Gibson (2005); Gervasoni (2010 in this issue); Montero (2007, 2010 in this issue); Behrend (2008); Fenwick (2010 in this issue); Durazo Herrmann (2010 in this issue); Benton (forthcoming), among others. 3 In these subnational regimes, elections are severely manipulated, civic liberties of the local populations are partially suppressed, varying degrees of harassment and violence skew the playing field in favor of incumbents, and incumbents exert a tight grip over the Legislative Branch, local agencies of control, subnational party organizations, and the local media, as well as local civic organizations. 4 Unless otherwise specified, the term "subnational" only refers to state/provincial-level (rather than municipal or lower-tier) governments/regimes/politicians.
Theoretically, the study refines and specifies the scope conditions of the so-called strategic calculation explanation, which posits that SURs continue to exist because they meet national incumbents' strategic political needs. To test the strategic calculation explanation about SUR continuity, the paper focuses on the allocation of federal transfers, as this distribution is one of the potential mechanisms through which national politicians can help reproduce SURs. Using time-series statistical analyses to contrast patterns of federal spending across all Argentine and Mexican SURs, the article shows that national incumbents only help reproduce a handful of SURs, as just some, and not all, can meet presidents' strategic political needs. More importantly, the study finds that these regimes are not reproduced exactly for the reasons stipulated in the literature. The article is organized as follows: The first section conceptualizes and measures subnational democracy in Argentina and Mexico. The following section presents the tenets of the strategic calculation explanation and specifies its scope conditions by carefully defining the main factors that shape presidents' strategic calculi about SUR reproduction. Building on this theoretical specification, a set of hypotheses about SUR reproduction is subsequently spelled out. The next two sections discuss the measures of the dependent and independent variables, as well as the analytic technique used to test the hypotheses. The results of the study of the universe of SURs in each country and their discussion are presented in the subsequent section. The final section closes with a brief discussion of the implications of this study for future research on regime juxtaposition and intergovernmental relations.
Subnational Democracy in Argentina and Mexico: Conceptualization and Measurement
The definition of democracy adopted in this article follows the tradition of Joseph Schumpeter (1947), who understands democracy in procedural terms. Yet unlike some leading analysts, who adopt a procedural minimal definition of democracy (Collier and Levitsky 1997; Diamond et al. 1999; Mainwaring, Brinks, and Pérez-Liñán 2007), this article subscribes to Przeworski et al.'s (2000) procedural subminimal, or electoral, definition of democracy. 5 Accordingly, democracy is conceived of as having three constitutive elements: a) fully contested elections (for both legislative and executive posts), b) clean elections, and c) alternation (turnover) in office. Figure 1 outlines in graphic terms the operationalization of democracy used in this article and provides a brief description of the aggregation procedure. Table 1 discusses the indicators that were employed to measure subnational democracy. The decision to adopt a subminimal definition of democracy responds mostly to data availability problems. The lack of subnational data to measure, for instance, freedom of speech and assembly, or human rights violations, prevents an operationalization of democracy along the minimal standards. 6 The definition of democracy adopted in this study uses the "necessary and sufficient condition" concept structure (Goertz 2006). Therefore, in order for a subnational political regime to be conceived of as democratic, a number of conditions must be present (i.e., are necessary), and these conditions, in turn, are jointly sufficient to make a given polity be classified as democratic. If any of these conditions is absent, the subnational polity cannot be conceived of as democratic. To translate a necessary and sufficient concept structure into mathematical terms without violating concept-measure consistency, this study follows Goertz's (2006) suggested aggregation procedure of multiplying (rather than adding) individual indicators.
Consequently, as Figure 1 shows, contestation (for both executive and legislative posts), clean elections, and turnover -i.e., the necessary conditions -are "connected" via the logical AND -a first cousin of multiplication (denoted with the * symbol).

Table 1 notes: Post-electoral conflict ranges from 1 to 4, where 1 = absence of post-electoral conflict; 2 = post-electoral conflict lasted less than a week (7 days) and there were no deaths and/or human/material casualties; 3 = post-electoral conflict lasted more than one week (from 8 to 30 days), and/or people were held in custody, and/or there were human/material casualties; 4 = post-electoral conflict lasted more than one month and/or there were deaths. The indicator is based on a review of major local (state-level) newspapers, 1991-2006 (Giraudy 2009); newspapers were reviewed for a period of four consecutive weeks starting with the day the election was held. All indicators were standardized to make scales comparable. Little fraud or manipulation of the vote-counting procedures has taken place in post-1983 Argentina; see Levitsky and Murillo (2005), Gervasoni (2010 in this issue).
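The multiplicative aggregation described above can be illustrated with a short sketch (hypothetical; the function name and example scores are invented for illustration and are not taken from the article's dataset):

```python
# Sketch of the Goertz-style "necessary and sufficient" aggregation:
# each indicator is standardized to the [0, 1] interval, and the overall
# democracy score is their product, so any necessary condition scoring
# zero drives the whole score to zero.

def democracy_score(contestation_exec, contestation_leg, clean_elections, turnover):
    # The logical AND of necessary conditions is operationalized as multiplication.
    return contestation_exec * contestation_leg * clean_elections * turnover

# A unit with contested, clean elections but no turnover scores zero overall.
print(democracy_score(0.8, 0.7, 0.9, 0.0))            # -> 0.0
# A unit moderately strong on every component gets a positive score.
print(round(democracy_score(0.8, 0.7, 0.9, 0.5), 3))  # -> 0.252
```

This is why additive aggregation would violate the concept structure: adding would let a high score on one component compensate for the complete absence of another necessary condition.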
The dataset used to measure subnational democracy covers 24 provinces in Argentina and 32 states in Mexico. The data span the 1983-2006 period in Argentina, and the 1997-2006 period in Mexico, with time intervals in each country starting with the onset of the (latest) transition to democracy at the national level. The results presented in Figures 2 and 3 show that, consistent with the trend observed in previous small-N and case studies, democracy in Argentina and Mexico has not trickled down evenly. In fact, at least five out of 24 provinces in Argentina (20.83 percent) -La Rioja, San Luis, Santiago del Estero, Santa Cruz, and Formosa -have remained undemocratic (i.e., obtaining scores close to zero), indicating a sustained democratic deficit over time. By contrast, only five provinces (20.83 percent) -Mendoza, San Juan, Ciudad de Buenos Aires, Chubut, and Entre Ríos -have reached high and sustained levels of democracy throughout the 1980s, 1990s, and the first decade of this century. In other provinces, such as Chaco, Jujuy, and Salta, democracy has made some mild progress and also suffered setbacks, indicating that subnational democracy is not static, but rather that it ebbs and flows. A similar pattern of democratic territorial unevenness can be observed in Mexico, where at least ten out of 32 states (31.25 percent) have remained undemocratic (i.e., scoring near zero), and only six states (18.75 percent) have experienced some political liberalization. The states of Oaxaca, Puebla, Baja California, Coahuila, Colima, Hidalgo, Tabasco, Tamaulipas, Veracruz, and Yucatán show that democracy has not made inroads in all districts, whereas the states of Chihuahua, Querétaro, Michoacán, Nuevo León, Mexico City, and Zacatecas present a pattern of progressive democratization.
In between these two extremes, a group of states -Tlaxcala, Aguascalientes, and Nayarit -show that the pattern of regime evolution has been more erratic, in that democracy has undergone progress and retrenchment. Altogether, the results of Figures 2 and 3 show that the territorialization of democracy in Argentina and Mexico is generally uneven. The data also reveal that since the onset of the latest democratization process in Argentina and Mexico, many subnational units have never experienced democratic progress, thus highlighting that undemocratic regimes in these countries are "sticky." The remaining part of this article explores the scope of one common explanation employed to address this regime resilience.
Theory, Scope Conditions, and Hypotheses
Several studies show that undemocratic governors in Latin American countries are key partners for electoral coalition-making (Hagopian 1996; Snyder 1999; Gibson 1997, 2005; Hunter and Power 2007). Indeed, with their tight control over local party machines, governors from SURs can help deliver votes that have decisive impact on general and mid-term national elections (Snyder 1999; Gibson 2005). Furthermore, undemocratic governors may provide invaluable legislative support for the passage of bills that are central to national incumbents' political projects. 7 Finally, these governors can help maintain political stability and manage security threats, thus assisting presidents in areas that are strategic to national security and governability. For instance, recalcitrant governors, who usually control paramilitary forces, can be charged, as Snyder (1999) points out, with the presidential "mission" of managing security threats in key geographic areas.
The evidence presented in these works lays the groundwork for the strategic calculation explanation, which posits that SURs in nationally democratic countries continue to exist because they often meet national incumbents' strategic political needs (Fox 1994;Cornelius 1999;Gibson 2005). Underlying this explanation is the idea that federal incumbents opt to reproduce these regimes because the political costs of challenging them outweigh the benefits of leaving them intact. Since undemocratic governors can provide much valued political support and stability, democratically elected national politicians have strong incentives to sustain the very same regimes that keep these governors in power.
This explanation, however, has not been tested beyond the specific case studies or small-N works from which it is drawn. Thus, one central question is the extent to which it is valid when the universe of SURs (in any given country) is taken into consideration. This article claims that democratically elected national politicians do not always have incentives to contribute to SUR reproduction, as differences across SURs alter the strategic calculations of presidents regarding SUR maintenance. In other words, presidents opt to sustain some, but not all, SURs because only some of these regimes can or are likely to meet federal incumbents' strategic political needs. 7 Additionally, governors from SURs become attractive partners for legislative coalition-making because they usually rule small and underpopulated provinces/states, which are overrepresented in the National Congress, and whose legislative votes weigh far more heavily than those of larger and more democratic districts (Samuels and Snyder 2001; Calvo 2001; Gibson 2004; Jones and Hwang 2005).
To explore whether or not this is the case, this article draws on the universe of SURs present in Argentina and Mexico and retests the main hypotheses derived from the strategic calculation explanation. Before turning to this analysis, the next section specifies the scope conditions of the strategic calculation explanation by carefully defining the main factors that shape presidents' strategic calculi about SUR reproduction in Argentina and Mexico. Hypotheses about SUR reproduction are subsequently spelled out.
From the literature on federalism and legislative politics we know that governors' ability to deliver legislative support stems from their capacity to discipline legislators' voting behavior. Several studies show that provincial/state-level executives in Argentina and Mexico can influence legislative behavior simply because they control legislators' political careers (De Luca, Jones, and Tula 2002; Gordin 2004; Jones and Hwang 2005; Samuels 2003; Díaz-Cayeros 2006; Langston 2004, 2005; Langston and Aparicio 2008). This capacity to influence legislators' votes converts governors into legislative brokers and key partners for legislative coalition-making, as they can ensure presidents the delivery of congressional support.
Yet undemocratic governors differ considerably in their ability to deliver legislative votes. Because their capacity to ensure votes depends on their ability to control legislators' political careers, governors can only exert leverage over deputies and senators who belong to their own political parties. They cannot, by contrast, influence the voting behavior of opposition legislators, simply because governors do not control their political careers. It thus follows that provincial executives' capacity to ensure and deliver legislative support is determined by the share of legislators who belong to their party. Governors who control the bulk of the legislative delegation (i.e., the largest share of copartisans), and who in turn can secure national incumbents more legislative votes, should be substantially more attractive to presidents than provincial executives who control small shares of legislators. For instance, Oaxaca's Governor Ulises Ruiz of the Partido Revolucionario Institucional (PRI), who between 2006 and 2009 only controlled 21 percent of Oaxaca's PRI national deputies (i.e., four out of 19 deputies) and 33 percent of the senators (one PRI senator out of three), was less attractive than, for instance, PRI Governor Manuel Ángel Núñez of the state of Hidalgo, who controlled, and thus could secure the votes of, five PRI deputies, that is, 50 percent of the state's legislative delegation. 8 On these grounds, it can be hypothesized that presidents will reproduce SURs where governors control a larger share of copartisan federal legislators, as these governors are in a better position to ensure the delivery of legislative votes.
The disciplining capacity of governors over legislators is also mediated by electoral institutions and campaign financing rules that are specific to each country, which is why undemocratic governors from some, but not all, federal countries can ensure the delivery of legislative support. The case of Mexico nicely illustrates how electoral rules can limit governors' capacity to ensure legislative support. Mexico has a mixed electoral system, with 300 of the 500-member Chamber of Deputies filled through plurality races in single-member districts (SMDs) and 200 through closed proportional representation (PR) lists. Voters in Mexico cast only one ballot to choose SMD deputies and do not participate directly in selecting PR deputies. In this context, as noted by Langston and Aparicio, "PR deputy candidates do not run electoral campaigns; if they are placed high enough on the closed list, they will enter the Chamber" (2008: 9). SMD legislators, by contrast, must win plurality races, and thus are interested in running successful campaigns. Given that successful campaigns entail access to resources, SMD candidates are dependent on the national party headquarters and, above all, on governors for a good deal of their campaign funds (Langston 2005). Governors, then, become candidates' lenders of last resort, and consequently stand in a position to make SMD deputies far more beholden than PR deputies. These electoral and campaign rules limit considerably the capacity of Mexican governors to guarantee the votes of both SMD and PR deputies. Hence it is expected that presidents in Mexico will only contribute to the reproduction of those SURs where governors control the largest share of copartisan SMD deputies.
In the same way that not every undemocratic governor is by default an efficient provider of legislative support, not all undemocratic governors are attractive partners for electoral coalition building. Undemocratic governors, who exert power within the context of patrimonial state administrations, and who, as a result, monopolize political authority, command extensive clientelistic and patronage networks, and control state agencies as well as state resources, can easily manipulate voters and guarantee presidents the delivery of solid electoral majorities. 9 By contrast, undemocratic governors who exercise power within bureaucratic state administrations, and thus concentrate far less political authority over persons, state resources, and state agencies, are less capable of mobilizing the electorate and influencing voters' behavior. 10 Given that patrimonial undemocratic governors are more likely to deliver solid electoral majorities on Election Day than bureaucratic undemocratic governors, it is expected that presidents will contribute to the reproduction of SURs ruled by governors who concentrate, rather than disperse, political authority.

Undemocratic governors are, in principle, strong and well-entrenched provincial bosses who are hard to discipline. As such, they are prone to challenge presidential orders, and to refuse to meet presidential strategic political needs. In the best of cases, they may use their entrenched political position as a bargaining chip to support a president's agenda in exchange for important concessions from the federal government. However, not all undemocratic governors are equally powerful in the face of national ruling elites. They simply are not because, as discussed in detail below, other variables unrelated to the political regime -such as copartisanship and financial dependency on the national government -render subnational undemocratic incumbents vulnerable vis-à-vis central state elites and susceptible to being induced to meet presidential political strategic needs. It is thus expected that presidents will only contribute to the reproduction of SURs if they are ruled by governors upon whom national rulers can exert effective control and disciplinary power.

9 Examples of governors who rule within the context of patrimonial state administrations can be found in La Rioja, Santiago del Estero, and Formosa in Argentina, and Oaxaca, Tabasco, and Guerrero in Mexico (see Gibson 2005; Leiras 2006; Lakin 2008; Giraudy 2009; Durazo Herrmann 2010 in this issue).
From the literature on political parties we know that partisan organizational structures facilitate national incumbents' capacity to discipline and control subnational copartisans (Mainwaring 1999;Stepan 2000;Samuels 2003;Wibbels 2005;Levitsky 2003;Leiras 2006). The mechanisms through which parties exert control over low-level copartisans are manifold and depend on the internal organizational structure of each party. 11 Regardless of which partisan mechanism is put in place to discipline lower-level copartisans, the important point is that governor-president copartisanship should induce presidents to reproduce SURs ruled by copartisan governors, as these are more likely to be controlled and disciplined via partisan mechanisms, and are, in turn, more prone to be induced to meet presidents' strategic political needs.
SUR financial dependency on the central government also enables presidents to exert control and induce the cooperation of undemocratic governors.
10 For a classification of patrimonial vs. bureaucratic SURs in Argentina and Mexico, see Giraudy (2009). 11 For instance, centralized and bureaucratic parties, which tightly control local party branches, are more likely to discipline, and thus induce, the cooperation of copartisan undemocratic governors (Levitsky 2003). Similarly, national party leaders who exert leverage over the selection, nomination, and appointment of candidates can easily control copartisans by determining the fates of their political careers (Samuels 2000;De Luca, Jones, and Tula 2002;Garman, Haggard, and Willis 2001).
From the literature on fiscal federalism, we know that highly indebted or financially profligate governors, who depend on the central government for their subsistence, can be expected to comply with the central government's political demands for fear of being deprived of funds (Wibbels 2005; Falleti 2005). By contrast, governors from fiscally responsible and lightly indebted SURs, or those who rule undemocratic provinces that amass abundant revenues due to efficient tax collection or to the existence of profitable natural resources, enjoy greater financial autonomy from the federal government, and thus more independence vis-à-vis national incumbents (Giraudy 2009). On these grounds, it can be hypothesized that presidents will reproduce SURs that are in financial dire straits, rather than SURs that are financially sound, as the latter are more likely to refuse to cooperate with the federal government rather than meet presidential strategic political needs.

Finally, presidents' calculations regarding SUR reproduction may also be shaped by municipal factors. Democratically elected national politicians may have more incentives to reproduce SURs where the president's party has a strong municipal presence. At least two reasons may lead presidents to act in this way. First, keeping loyal copartisan mayors in office may be key to increasing presidents' electoral success, as mayors usually command substantial distributive resources (i.e., public positions and subsidies) and thus can deliver support during electoral races and primaries (De Luca, Jones, and Tula 2002; Jones and Hwang 2005). Indeed, in many instances, the support of mayors, rather than that of governors, is central to winning on Election Day.
Second, presidents may have an interest in enhancing local copartisan structures because this facilitates presidential penetration into subnational undemocratic arenas, and consequently empowers presidents to challenge and control undemocratic governors from within. In effect, these national-local alliances may prove critical to pressuring and inducing undemocratic governors to meet presidential political needs. On these grounds, it can be hypothesized that national executives will have incentives to reproduce SURs in which the share of municipalities that belong to the president's party is higher, as these regimes are more likely to meet the strategic political needs of presidents.
Before exploring the validity of these hypotheses, a caveat about presidential strategies of SUR reproduction is in order. There are different ways through which presidents can help sustain SURs in power. Presidents can, for instance, veto legislative pieces such as a declaration of federal intervention to overturn undemocratic regimes in specific subnational units, or discourage bills or veto laws intended to create conditions that would eventually democratize subnational districts. Alternatively, they can prevent federal agencies of control, such as the Supreme Court or federal auditing agencies, from sanctioning undemocratic governors for their abuses of power and financial misdoings, thus helping to prevent SURs from weakening. Lastly, presidents can also actively sustain SURs in power by benefiting them economically. They can, for instance, reward SURs with additional subsidies or with special federal transfers and programs through which they help consolidate and maintain these regimes in power.
This article focuses on the latter type of presidential strategy of SUR reproduction. Several empirical studies show that national politicians in Argentina and Mexico allocate a wide variety of federal resources and programs in a discretionary manner to meet their political strategic needs. 12 Drawing on this evidence, the next section explores whether the allocation of two specific federal transfers -1) funds for public works and 2) financial subsidies (PA-FEF in Mexico and ATN in Argentina) -has also been used to sustain SURs that were more capable of meeting presidents' strategic political needs.
Measures of the Dependent and Independent Variables
The Programa de Apoyos para el Fortalecimiento de las Entidades Federativas (PAFEF), the Aportes del Tesoro de la Nación (ATN), and funds for public works (i.e., the dependent variables) were selected because of their propensity to be distributed in a discretionary manner. Previous works show that each of these funds has been allocated on the basis of political rather than universal criteria (see Díaz-Cayeros 2006;Giraudy 2006;Cetrángolo and Jiménez 1997;Gibson and Calvo 2000;Bonvecchi and Lodola 2008). There are thus good reasons to suspect that these programs might have been used by presidents to reproduce SURs from above. All federal transfers are measured as a percentage of provincial/state-level total income (see Table 2 for a detailed description of each variable and its source).
The first independent variable of theoretical interest, governor's legislative support, is calculated as the percentage of copartisan federal deputies who belong to the governor's legislative delegation. 13 For the reasons outlined above, in Mexico, this measure only includes federal SMD deputies. The second variable of theoretical relevance, dispersed political authority, is measured using Giraudy's (2009) additive index of patrimonial and bureaucratic state administrations which combines indicators of horizontal and societal accountability, rulers' fiscal discretion, and patronage (see appendix for a detailed explanation of how the index was built). The index ranges from zero to one, where zero denotes higher concentration of political authority (i.e., higher levels of patrimonialism) and one indicates more dispersed authority (i.e., higher levels of bureaucratization).
The third independent variable, copartisanship between presidents and undemocratic governors, is measured in Argentina using a dummy variable that scores one when the presidential party equals a governor's party, and zero otherwise. For the case of Mexico, where some governors of the Partido de Acción Nacional (PAN) have come to power through electoral coalitions with other national parties, two dummy variables were created. The first variable captures Panista governors who governed without a coalition (i.e., governor-president copartisanship [non-coalition]) and the other gauges Panista governors who won elections, and thus governed, in coalition (i.e., governor-president copartisanship [coalition]). Each of the dummies scores one for the years in which each of these two types of PAN governors ruled a given state, and zero otherwise.
Two indicators, debt and surplus, are used to tap a subnational unit's financial autonomy from the federal government. Both predictors were calculated as the percentage of state/province's total revenues. High scores of debt indicate higher levels of indebtedness (that is, lower financial autonomy), whereas higher scores of surplus reveal greater financial autonomy. 14 The fifth variable, municipalities belonging to presidential party, captures the president's partisan presence at the local level. It is coded as the percentage of municipalities controlled by the president's party in any given year.
When appropriate, the models were run with additional control variables. The distribution of funds for public works should, in principle, be determined by infrastructural needs. Other things being equal, one would expect that states and provinces which lag behind in terms of infrastructural development should receive a greater proportion of funds for public works than subnational units whose infrastructure is more developed. Similarly, highly populated districts, where the demand for infrastructure (sewage, housing, and paved roads) is higher, should also receive more funds for public works. To control for these effects, an index of infrastructure was employed in the Mexican models, and necesidades básicas insatisfechas (unsatisfied basic needs) -a proxy for poverty -was used in the Argentine regressions. Lastly, the variable population was included as an additional control.
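The variable coding described in this section can be sketched as follows (a hypothetical illustration; the party labels, column names, and numbers are invented and do not reproduce the article's data):

```python
import pandas as pd

# Three invented province-year observations.
df = pd.DataFrame({
    "governor_party":            ["PJ", "UCR", "PJ"],
    "president_party":           ["PJ", "PJ", "PJ"],
    "copartisan_deputies":       [5, 2, 4],    # deputies in the governor's delegation
    "delegation_size":           [10, 8, 19],
    "pres_party_municipalities": [12, 3, 20],
    "municipalities":            [20, 15, 27],
})

# Governor's legislative support: percentage of the federal delegation
# made up of the governor's copartisans.
df["leg_support"] = df["copartisan_deputies"] / df["delegation_size"] * 100

# Copartisanship dummy: one when governor and president share a party.
df["copartisan"] = (df["governor_party"] == df["president_party"]).astype(int)

# Presidential party's municipal presence, as a percentage of municipalities.
df["pres_municipal_share"] = df["pres_party_municipalities"] / df["municipalities"] * 100

print(df[["leg_support", "copartisan", "pres_municipal_share"]])
```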
The argument about presidents reproducing SURs suggests that this reproduction holds regardless of electoral cycles. In other words, SURs should be reproduced not only during electoral years, but rather on a constant basis (i.e., every year). Thus, no effect or a negative effect of electoral processes is expected on presidents' decisions to sustain SURs. To control for the effects of electoral cycles, three dummy variables -gubernatorial, presidential, and legislative election -were included in the models. Each variable was coded as one in the year in which presidential, legislative, or gubernatorial elections were held, and zero otherwise. Given that electoral calendars across levels of government differ in each country (i.e., staggered vs. concurrent electoral calendars in Mexico and Argentina, respectively), different combinations of these three electoral variables were included in each country model. 15

Presidents who are native to SURs, such as Carlos Menem from La Rioja or Néstor Kirchner from Santa Cruz, may have a strong inclination to channel funds to their strongholds not only to sustain SURs but also for personal reasons. For instance, as former President Menem noted, "an outstandingly large amount of money was sent to La Rioja [one of Argentina's SURs] not so much to keep the regime alive but also to reward the loyalty of former staffers and to improve the wellbeing of my Riojanos" (interview by author, La Rioja, May 9, 2008).
To control for these effects, a dummy variable, province of President, was included. Finally, the time span analyzed in Argentina (1990-2006) covers four presidencies. Dummy variables were included to control for political and partisan effects occurring during the presidencies of Menem (baseline), De la Rúa, Duhalde, and Kirchner.
Data and Analytic Technique
The balanced panel dataset used for the statistical analyses comprises all Argentine and Mexican SURs -that is, the states and provinces that score a perfect zero (see Figures 2 and 3). 16 The existence of cases that rank just above zero, such as the Argentine province of La Rioja, which gets an average score of 0.006, or the Mexican state of Oaxaca, with an average score of 0.009, makes it hard to exclude these districts from the group of SURs. Thus, these cases were also included in the dataset. 17 Data span the periods 1990-2006 (Model 1), 1996-2006 (Model 2), and 2000-2006 (Models 3 and 4). 18 When data are pooled across time and units, several of the ordinary least squares (OLS) standard assumptions are violated, and consequently the usual procedures for hypothesis testing are no longer appropriate (Long and Ervin 2000). Authors have provided alternative solutions to deal with these violations, including fixed-effects and random-effects models (FEM and REM, respectively), panel-corrected standard errors (PCSE), lagged dependent variable (LDV) models, and autoregressive (AR) models with corrections for first-order autoregression (AR1) (see Beck and Katz 1995; Achen 2000; Huber and Stephens 2001; Plümper, Manow, and Troeger 2005, among others).

15 The existence of concurrent national and subnational elections in Argentina yields high correlations across the three electoral variables. Thus, one or two (at best) dummy variables were included in the models. By contrast, the staggered nature of electoral calendars in Mexico permits the inclusion of the three electoral variables. 16 Data in each country are analyzed separately, not pooled.
Some of these analytic techniques, such as PCSE and AR models, are inappropriate because the data used in this study are not temporally dominated (i.e., t > N), but rather cross-sectionally dominated (i.e., N > t). Other techniques, such as FEMs or LDV models, are also inadequate given that several key independent variables have level effects and are relatively time invariant (i.e., they only change at a slow pace). In the presence of such variables, a FEM will improperly absorb the significance of these predictors (Plümper, Manow, and Troeger 2005; Achen 2000).
For all these reasons, the problem of correlated errors in panel data is addressed using a combination of OLS estimation of the regression coefficients with a robust-cluster estimator of the standard errors. The robust-cluster variance estimator, as noted by Huber et al., "provides correct coverage in the presence of any pattern of correlations among errors within units, including serial correlation and correlation attributable to unit-specific components" (2006: 957).
17 Cluster analyses were employed to set the cut-off points separating states and provinces that rank zero or near zero from those ranking higher on the democracy scale. In a scale that ranges from zero to one, the cut-off point was set at 0.17 in Argentina and at 0.08 in Mexico. This means that all Argentine provinces and Mexican states scoring below 0.17 and 0.08, respectively, are included in the dataset (see Figures 2 and 3). Results of cluster analyses are available from the author. 18 Time periods for Argentina (Models 1 and 2) differ because data for the dependent variable were available for varying time points.

Table 3 shows that one of the article's main findings of theoretical interest is that, contradicting conventional wisdom, national incumbents do not have incentives to benefit, and thus help sustain, every single existing SUR. Rather, the analysis reveals that when the universe of cases (in Argentina and Mexico) is taken into consideration, presidents only contribute to the reproduction of the SURs that serve them politically well. A second major finding of this study is that SUR reproduction is not necessarily driven by the potential legislative and electoral support that undemocratic governors may deliver. By contrast, other political factors, such as dispersed political authority, financial dependency on the federal government, and the share of municipalities belonging to a president's party, strongly shape presidents' strategic calculi regarding SUR reproduction.
The lack of incentives for presidents to benefit, and thus help reproduce, SURs that may eventually deliver legislative support is confirmed by the fact that the governor's legislative support variable only comes out positively signed and statistically significant in Model 1, where an increase of 1 percent in governor's legislative support (which equals a one-point increase in the share of deputies belonging to the governor's partisan congressional delegation) is associated with an increase in ATN allocation of 0.04 percent of undemocratic provinces' total revenues. By contrast, both Mexican regressions and Argentine Model 2 show that undemocratic governors who control the bulk of their legislative delegation, and who in turn stand in a better position to secure more legislative votes, are no more attractive to presidents than governors who control small shares of loyal deputies.
The fact that both Mexican models invalidate the legislative aspect of the strategic calculation explanation, and that Argentine Model 1 substantiates it, points to important cross-country differences regarding the capacity of Argentine and Mexican governors to discipline legislators. In Argentina, as numerous studies show, provincial executives are the main principals exerting leverage over deputies (Jones and Hwang 2005; Gordin 2004). In Mexico, by contrast, a multiplicity of principals, including party leaders in Congress, the national party leadership, and governors, have control over deputies' behavior (Casar 1999; Langston 2005; Langston and Aparicio 2008). These cross-country differences may explain why presidents in Argentina (i.e., Model 1) favor SURs on the basis of the potential legislative support that may accrue from these districts and why Mexican presidents abstain from benefiting SURs on the basis of the potential legislative support which undemocratic governors can secure.

The refusal of Argentine and Mexican presidents to reproduce SURs that are in a position to ensure the delivery of electoral support is evidenced by both the positive sign and statistical significance of the dispersed political authority variable in Models 2 and 3, and its statistical insignificance in Models 1 and 4. 19 Models 2 and 3 show that undemocratic regimes where political authority is more dispersed (i.e., where governors control smaller provincial/state-level patronage networks, have less capacity to discretionally distribute state resources, and are checked by more autonomous subnational agencies of control) are rewarded with more federal transfers than regimes that concentrate more political authority, and thus stand in a better position to ensure presidents' solid electoral majorities.
This counterintuitive finding may be explained by at least four reasons. First, presidents may attach less value than previously thought to the capacity of governors to deliver electoral support, as they may be less efficient in providing solid electoral majorities than other brokers, for example mayors (see below). Second, the refusal of presidents to benefit governors who concentrate political authority and exert power in a patrimonial way may respond to presidents' aversion to enhancing the position of already territorially and politically powerful bosses, who could eventually challenge presidential authority. Third, actively sustaining clientelistic, illiberal, and patrimonial governors who plainly violate democratic and human rights standards, such as Governor Carlos A. Juárez in Santiago del Estero or Governor José Murat in Oaxaca (Gibson 2005), may also impose high political and reputational costs on presidents, as these governors are usually regarded as unpopular political actors. By contrast, while supporting undemocratic governors who exercise power in a more bureaucratic manner may bring about fewer electoral benefits, it may also be less costly in terms of the reputational and political price that presidents may be willing to pay. Finally, the fact that SURs that concentrate political authority get a lower share of transfers may also be explained by ideological factors. Indeed, from studies conducted in Mexico we know that presidents who advanced technocratic and neoliberal agendas, such as Presidents Salinas (1988–1994) and Zedillo (1994–2000), refrained from empowering and siding with entrenched traditional and anti-neoliberal governors, the so-called dinosaurios (dinosaurs), as they systematically opposed presidential initiatives and federal-led projects (Centeno 1994; Eisenstadt 2004; Hernández Rodríguez 2008).
The results presented in Model 3 indicate that similar ideological factors may have shaped presidential behavior vis-à-vis SURs during the years of the first Panista administration. Several studies show that Fox's political weakness, which largely stemmed from the fact that 21 (out of 32) governors belonged to the PRI, and from the lack of majorities in both chambers of Congress, led him to build legislative and electoral alliances with PRI undemocratic governors (Hernández Rodríguez 2008; Madrazo 2007). These alliances, however, were selective in that they were struck with PRI undemocratic governors who shared and upheld the PAN's business-oriented, neoliberal, and technocratic worldview, that is, governors who ruled states where political authority was exercised in a more bureaucratic (rather than patrimonial) manner. In fact, as Gibson (2005), Giraudy (2009), and Durazo Herrmann (2010, in this issue) show, Fox refrained from siding with traditional, patrimonial, and highly clientelistic undemocratic governors, such as the Governor of Oaxaca, as these governors' behavior was largely incompatible with the PAN's ideological stance.
Models 2, 3, and 4 show that presidents channeled more funds to SURs where the share of municipalities belonging to the presidential party was larger. Indeed, a 1 percent increase in the share of municipalities belonging to the president's party is associated with an increase in public works spending of 0.02 percent of the SURs' total income in Argentina (Model 2), and of 2.44 and 1.77 percent of the SURs' total revenues in Mexico (Models 3 and 4). These results indicate an important and often overlooked aspect of presidents' strategic calculi regarding SUR reproduction, namely, that federal incumbents opt to benefit and thus sustain copartisan local structures, as these structures, and not state-level/provincial patronage networks controlled by governors, may be more effective to deliver electoral support during Election Day. 20 Recent events in Argentine politics substantiate this finding, as evidenced by former President Néstor Kirchner's strategy of striking electoral coalitions with mayors of undemocratic districts and not solely with patrimonial governors (Leiras 2006).
The greater share of federal transfers flowing to SURs where the share of municipalities belonging to the presidential party is higher may also respond to presidents' strategy of strengthening local copartisan bases of support. As noted earlier, local copartisan structures are "springboards" that facilitate presidential penetration in subnational undemocratic arenas, and thus serve as key vehicles to gain leverage to both challenge and control undemocratic governors from within. It is by applying pressure from within that presidents can further induce provincial/state-level ruling elites to deliver strategic political support to advance the presidential agenda. Evidence from Mexico indicates that President Fox rewarded SURs where the share of PAN-ruled municipalities was high because by strengthening and expanding the PAN's local presence he could challenge opposition governors' territorial power and, eventually, constrain undemocratic governors' authority from within (Giraudy 2009). 21 Altogether, the impact of municipal aspects on presidential strategies of SUR reproduction underscores that researchers of regime juxtaposition, as Fenwick (2010 in this issue) notes, may need to scale further down to the municipal level to uncover how local aspects determine the political fates of SUR reproduction. Casting doubt over previous theoretical expectations, the negative sign and statistical insignificance of the governor-president copartisanship variable in both Argentine regressions (Models 1 and 2), and Model 3 in Mexico, shows that governor-president copartisanship does not necessarily determine presidents' strategy of cross-SUR federal funds allocation.

20 The transfers analyzed in this study can flow directly to municipalities (as is the case of ATN and PAFEF) or be earmarked for specific municipal infrastructural projects. As such, they can be used to strengthen local governments without necessarily empowering undemocratic governors.
Even though no conclusive (statistical) assertion about the role played by copartisanship in SUR reproduction can be made, qualitative evidence indicates that both Argentine and Mexican presidents have helped sustain, and thus reproduce, SURs from the opposition. Several works and reports conducted in Argentina show that presidents were eager to strike coalitions with opposition undemocratic governors who were willing to meet presidential strategic needs. President Menem's strategy of delivering selective benefits to SURs of the opposition in exchange for support to pass key pieces of legislation to implement his neoliberal agenda (Botto 1998; Gibson and Calvo 2001), as well as President Néstor Kirchner's multiparty coalition, the so-called Concertación Plural (plural agreement), which entailed alliances with opposition undemocratic governors in exchange for legislative and electoral support, 22 are some cases in point. A similar relationship between opposition undemocratic governors and presidents was observed in Mexico during the presidency of Vicente Fox, when the federal government rewarded undemocratic PRI governors, such as the rulers of Puebla, Veracruz, Hidalgo, Sinaloa, and Sonora, with subsidies and special transfers in exchange for electoral and legislative support (Madrazo 2007). In consonance with these findings, the statistical results presented in Model 4 show that SURs ruled by PAN governors who came to power in an electoral coalition are associated with an increase in public works transfers of 0.93 percent of states' revenues.

21 It should be noted that the enhancement of local copartisan structures may eventually help presidents to topple SURs, in which case national incumbents would be contributing to subnational democratization and not SUR reproduction. This phenomenon, however, occurs in t2, and not in t1 – the period which this study focuses on. 22 Diario Río Negro (2006), February 23, 24, 26, and March 10.
By contrast, in SURs governed by non-coalitional PAN governors, a decrease in public works funds of 1.29 percent of states' total income was recorded.
The negative sign and statistical insignificance of the governor-president copartisanship variable may also indicate that Argentine and Mexican parties' internal organizational structures are not effective enough to enable presidents to control and discipline provincial/state-level copartisans, and thus induce their cooperation. Studies have shown that the internal organization of the Peronist party in Argentina, which is both informal and decentralized, is largely inoperative to exercise control over provincial branches and lower-level copartisans (Levitsky 2003). In a similar fashion, the PAN's organizational structure, despite being far more institutionalized than the Peronist party in Argentina, prevents national party leaders from exerting control over governors, as the party's rank and file, and not national party leaders, determine Panistas' career paths (Mizrahi 2003; Langston 2004).
Lastly, one of the most consistent findings of this article is that, ceteris paribus, presidents reward SURs that are in dire financial straits. In effect, increasing provincial surplus in Argentina by 1 percent leads to a decrease in ATN spending of 0.07 percent of the SURs' income (Model 1), and to a decrease in public works spending of 0.05 percent of the SURs' total revenues (Model 2). Likewise, a 1 percent increase in state-level surplus in Mexico results in a decrease in public works spending of 0.02 percent of the SURs' income (Model 4). The presidential decision to allocate more funds to SURs that have larger deficits and which are ruled by profligate undemocratic governors substantiates the theoretical claim that presidents opt to reward governors who are more susceptible to being controlled and induced to meet presidential strategic political needs. As noted earlier, financial dependency on the federal government not only seriously limits undemocratic governors' capacity to challenge and oppose the presidential agenda, but more importantly, it poses strong constraints on provincial incumbents to follow presidential orders. 23 Qualitative and quantitative evidence presented by Wibbels (2005) and Giraudy (2009) in analyses of several undemocratic profligate Argentine provinces reveals that governors from these districts are forced to support most presidential initiatives for fear of being deprived of financial resources. The results presented in this paper reveal that this claim also holds true when a larger set of SURs is taken into consideration.
23 Another possible interpretation of the greater share of ATNs flowing to profligate SURs is the president's determination to ensure national macroeconomic stability.
Conclusion
This study has contributed to knowledge accumulation in the area of regime juxtaposition by conceptually defining and measuring subnational democracy in all Argentine provinces and Mexican states, and by identifying the regimes that comprise the universe of SURs in these two countries. It has also made an important contribution by retesting and specifying the scope conditions of the strategic calculation explanation. Building on statistical analyses to contrast patterns of spending across SURs, the paper shows that, contrary to conventional wisdom, national incumbents have incentives to reproduce only a handful of SURs, as not all of them can meet presidential needs. In addition, the paper shows that the strategic calculation of presidents regarding this reproduction is not necessarily dictated by governors' capacity to deliver legislative and electoral support, but rather by factors that have been largely overlooked by the literature. Three important implications for the study of regime juxtaposition follow from this article. First, the results show that presidents help sustain SURs that are easily controllable and penetrable, such as those that are financially weak and where copartisan mayors exist. This finding reveals that presidents' decision to reproduce SURs is shaped, to a very large extent, by presidents' capacity to exert leverage over undemocratic governors. Instances that facilitate financial and political control, either from above or from within, over recalcitrant rulers and which, in turn, give presidents additional instruments to induce their political cooperation seem to be important determinants of SUR reproduction. More studies are needed to explore how specifically the possibilities of federal control (or lack thereof) over SURs affect the prospects of their continuity and change, and how the existence of different mechanisms of control, that is, political or fiscal, shape presidents' strategies of SUR reproduction.
Secondly, this study reveals that under certain circumstances, presidents are active promoters of SUR continuity. The fact that national actors shape the prospects of SUR sustainability underscores, as many studies have already pointed out, that subnational political processes are not impervious to the political dynamics that take place at the national level of government; quite to the contrary, they are greatly shaped by events occurring at higher levels of government (Gibson 2005, 2008; Montero 2010 in this issue). Future theories about SUR continuity or change necessarily have to assess and theorize about the impact of both national and subnational variables, as their combined interaction is crucial to the understanding of how and why these regimes continue to exist or not.
Finally, this study has shown that the pattern of interaction between presidents and undemocratic governors varies from SUR to SUR. Differences across these subnational regimes have important consequences for presidents' strategic calculations regarding regime reproduction. Extant explanations about SUR continuity, however, tend to overlook these subnational differences, taking for granted that SURs within countries are homogenous or uniform entities, all deemed to be analyzed and treated as equivalents, especially with regard to the relation they maintain with national rulers or national institutions. On these grounds, existing explanations infer that the causal mechanisms that shape the interaction between one or several SURs and national government/institutions (i.e., the mechanisms that account for regime reproduction) are generalizable across all SURs. As a result, most accounts about SUR continuity remain largely incomplete and theoretically underspecified. By showing that the causal mechanisms that account for regime resilience are not homogenously valid across the universe of SURs, this study not only points to the perils of assuming unit homogeneity across subnational units, but also makes an important contribution to advance more complete and specified explanations about SUR continuity in nationally democratic countries.
Wibbels, Erik (2005), Federalism and the Market: Intergovernmental Conflict and Economic Reform in the Developing World, Cambridge, New York: Cambridge University Press.
Chemotaxing neutrophils enter alternate branches at capillary bifurcations
Upon tissue injury or microbial invasion, a large number of neutrophils converge from blood to the sites of injury or infection in a short time. The migration through a limited number of paths through tissues and capillary networks seems efficient, and 'traffic jams' are generally avoided. However, the mechanisms that guide efficient trafficking of large numbers of neutrophils through capillary networks are not well understood. Here we show that pairs of neutrophils arriving closely one after another at capillary bifurcations migrate to alternating branches in vivo and in vitro. Perturbation of chemoattractant gradients and the increased hydraulic resistance induced by the first neutrophil in one branch biases the migration of the following neutrophil towards the other branch. These mechanisms guide neutrophils to efficiently navigate through capillary networks and outline the effect of inter-neutrophil interactions during migration on overall leukocyte trafficking patterns in confined environments.

Neutrophils mobilize rapidly and travel through the vasculature and microcirculation en masse in response to inflammatory stimuli. Here the authors use a microfluidic device and intravital microscopy to show neutrophils move through tissues in a highly ordered pattern, taking turns to file into alternate branches at bifurcations.
Neutrophils are the most abundant subpopulation of white blood cells in the blood circulation. They serve as the first line of host defense in tissue injury and infections. Upon tissue injury or microbial invasion, neutrophils rapidly migrate to the sites, eliminating microbes and mediating further immune responses 1,2. This process is partly facilitated by chemotaxis, a process in which neutrophils migrate along the gradient of chemoattractant released by damaged tissue, microbes, or other leukocytes 2. In vivo studies in the liver 3, skin 4, lymph node 5, and lungs 6,7 of animal models have shown that neutrophils navigate in a coordinated and uniform fashion toward targets, through tissues and capillary networks. However, the principle that governs the efficient trafficking of a group of neutrophils through capillary networks remains to be uncovered.
Here, we study the migration patterns of consecutive neutrophils through capillary branches in vivo and in vitro. In mouse models of liver injury and lymph node infection, we find that consecutive neutrophils moving through capillaries toward sites of infection and injury take alternative routes more often than predicted by random decisions. To explore the mechanisms that could explain this unexpected phenomenon, we employ microfluidic devices with branching channels. We find that consecutive neutrophils in vitro take alternating branches with even higher precision than observed in vivo. Enabled by the controlled microenvironment of these experiments, we uncover that neutrophils moving through small channels can obstruct the channels and bias the migration of follower neutrophils toward alternative routes by perturbing the chemoattractant gradients behind them and increasing the hydraulic resistance of the channels they are entering.
Results
Neutrophil migration bias at capillary bifurcations in vivo. We employed multiphoton microscopy and spinning-disk confocal intravital microscopy to observe the trafficking of neutrophils in capillary networks towards sites of tissue damage in the liver (N = 4 mice). We also observed the trafficking of neutrophils toward sites of Staphylococcus aureus infection in the mouse lymph node (N = 8 mice) (Fig. 1, Supplementary Fig. 2, Supplementary Movie 1, Supplementary Movie 2, Supplementary Table 1). We focused on the capillary bifurcations where two neutrophils migrated consecutively. We defined these neutrophils as two-neutrophil squads. If neutrophils migrate randomly at the bifurcations, one would expect 50% of them to enter different branches and 50% to enter the same branch. Surprisingly, we found that 23 out of 32 two-neutrophil squads diverged into different capillary branches rather than advancing in the same branch (Supplementary Table 1 and Supplementary Movies 1 and 2). The difference between the observed (72%) and expected (50%) frequencies is significant (p < 0.05, two-tailed test). The in vivo results also show that inside capillary branches with cross-sections smaller than 50 µm², the percentage of neutrophils entering alternate branches is 80% (N = 20 neutrophil squads). In capillary branches larger than 50 µm² the percentage drops to 60% (N = 10 neutrophil squads, Supplementary Table 1). These data suggest that the divergence of two-neutrophil squads at capillary bifurcations may not be a random process. Instead, consecutive neutrophils most often enter alternative branches.
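The significance claim above can be checked with an exact two-sided binomial test on the reported counts (a sketch; SciPy's `binomtest` is used here as a stand-in for whichever two-tailed test the authors applied):

```python
# Exact two-sided binomial check of the in vivo divergence frequency:
# 23 of 32 consecutive-neutrophil squads entered different branches, against
# a null of random, independent branch choices (p = 0.5 per squad).
from scipy.stats import binomtest

result = binomtest(k=23, n=32, p=0.5, alternative="two-sided")
observed = 23 / 32                       # ~72% observed vs. 50% expected
print(f"observed = {observed:.2%}, two-sided p = {result.pvalue:.4f}")
```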
Microfluidic bifurcations for studying migration bias. We employed a microfluidic device consisting of two-level bifurcations leading toward chemoattractant reservoirs to recapitulate this alternating migration pattern in vitro (Supplementary Movie 3). Strikingly, all neutrophils in squads following a formyl-methionyl-leucyl-phenylalanine (fMLP) gradient sequentially migrated into alternate branches at bifurcations, mirroring the in vivo observations.
To systematically study the directional decisions of groups of chemotactic neutrophils at bifurcations, we designed microfluidic chips that contain more than 200 simplified microfluidic bifurcation units (Fig. 2). Each unit contains a two-branch bifurcation that starts with a cell-loading channel and is connected to a chemoattractant chamber. A chemoattractant gradient is established in each migration channel between the chemoattractant chamber (serving as chemoattractant reservoir) and the cell-loading channel (serving as sink), consistent with previously demonstrated principles 8. Neutrophils are loaded in the cell-loading channel and migrate up the migration channel toward the chemoattractant chamber.
We characterized the gradient profile in the channels along each branch using fluorescein. The fluorescent intensity profile showed that the gradient was successfully established and maintained in both branches for more than 90 min (Supplementary Fig. 1a). The gradients in the left and right branches remained identical in the absence of neutrophils (Supplementary Fig. 1b). Chemotaxing neutrophils made one directional decision at the bifurcation (Fig. 2c). We quantified the direction of neutrophils at the bifurcation as Left (L) and Right (R), using the direction of migration relative to the glass bottom as spatial reference.
Migration patterns of neutrophil squads at bifurcations. We investigated the direction of successive neutrophils arriving at bifurcations in four-neutrophil squads (Fig. 2d). Theoretically, if arriving neutrophils enter branches at random, the directional patterns of 4 consecutive neutrophils passing through a bifurcation should be distributed equally among the 16 possible L/R combinations, and the percentage of each pattern would equal 6.25% (Fig. 2e, blue dots, x-axis). Strikingly, we observed that neutrophils in 4-cell "squads" enter alternate branches, following only two patterns: LRLR or RLRL (Fig. 2e). The total frequency of LRLR and RLRL patterns was ~55%, equal for the two patterns (N = 4 repeats, N = 68 four-neutrophil squads, Fig. 2e). The frequencies of LRLR and RLRL patterns were higher than all the other patterns combined (Fig. 2e). The frequencies of LLLL and RRRR patterns, when consecutive neutrophils followed each other into the same branch, were 0. In control experiments, we verified the symmetry of the system by quantifying the frequency of the first neutrophil entering either one of the two migration channels at the bifurcation, in the absence of any following neutrophils. We found that the neutrophils enter the L and R branches with equal frequencies (N = 6, N = 447, Fig. 3b). Together, our observations show that neutrophils, when migrating closely in squads and arriving at bifurcations, do not enter the two branches randomly. Instead, they display alternate patterns.
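The null model invoked above can be made concrete by enumerating the pattern space (a small sketch; the ~55% figure is the experimental observation quoted above, not an output of the enumeration):

```python
# Enumerating the null pattern space for four-neutrophil squads: 16 equally
# likely left/right sequences, each expected 6.25% of the time under random,
# independent branch choices. Only two of them are strictly alternating.
from itertools import product

patterns = ["".join(p) for p in product("LR", repeat=4)]
expected_pct = 100.0 / len(patterns)
alternating = [p for p in patterns if all(a != b for a, b in zip(p, p[1:]))]
print(len(patterns), expected_pct, alternating)
```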
Neutrophil pairs enter alternate branches at bifurcations. To investigate the mechanisms that determine the alternate migration patterns of a group of neutrophils, we focused on the behavior of two-neutrophil squads (Fig. 3). We summarized the four possible combinations of neutrophil migration patterns into two states: same branch (LL and RR) and different branches (LR and RL) (Fig. 3a–c). We observed that the percentage of first neutrophils entering the left and right branches was equal (50 vs. 50%, N = 3, N = 261), indicating no bias for the first neutrophil entering the branches. However, the percentage of a second neutrophil entering a different branch is ~80%, a significantly higher percentage than that of entering the same branch (N = 3, N = 261). This confirmed that neutrophils in two-cell squads are more likely to enter alternate branches than to follow each other.
We investigated whether the cross-section of the bifurcation impacted the distribution of neutrophils from two-neutrophil squads at bifurcations (Fig. 3d). We designed the cross-section of the two branches after the bifurcation to be 15 µm² (3 µm × 5 µm, width × height), 25 µm² (5 µm × 5 µm, width × height), or 50 µm² (10 µm × 5 µm, width × height) (Fig. 3d, horizontal schematics). We maintained the cross-section before the bifurcation at 5 × 5 µm² for all networks. We also categorized the intercellular distance between the two neutrophils into three categories: contact, distant, and individual (Fig. 3d, vertical schematics). "Contact" represents the condition in which the second neutrophil is in physical contact with the first neutrophil when arriving at the bifurcation. "Distant" indicates that when the second neutrophil arrives at the bifurcation, the first neutrophil is away from it, but still moving in one of the branches. "Individual" indicates neutrophils arriving at the bifurcation in the absence of other neutrophils in the migration channels. In the 3 × 5 and 5 × 5 µm² bifurcations, we measured the percentage of neutrophils entering alternate branches to be 100% for "contact", 95% for "distant", and 50% for "individuals" (N = 3, N = 20 to 100). These results suggest that two neutrophils are more likely to enter alternative branches when they are moving closer to each other (Supplementary Fig. 3). The percentages sharply decrease in the 10 × 5 µm² bifurcation, to 76% for contact and 60% for distant, and do not change (~50%) for individuals (N = 3, N = 20 to 100). Our results show that in the symmetrical bifurcation, the chance of consecutive neutrophils entering alternate branches is correlated with the cross-section of the channels as well as the distance between the two neutrophils in the squad. The chances are higher in smaller bifurcation channels and when the two neutrophils are closer.
Based on these observations, we hypothesize two mechanisms that impact the migration patterns of chemotactic neutrophil squads arriving at bifurcations. The first mechanism involves the obstruction of small channels by the first neutrophil, which alters the hydraulic resistance of the channels. Subsequent neutrophils are sensitive to variations in pressure in front of them (barotaxis), and this mechanism could bias them toward the lower-hydraulic-resistance path 9. The second mechanism involves the alteration of chemoattractant gradients in small channels by moving neutrophils. The first neutrophil passing through a channel network could alter the chemoattractant gradient and bias the migration of subsequent neutrophils toward the alternate branch, where the gradient is intact.
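The first mechanism can be illustrated with an elementary parallel-resistance calculation (a sketch under the simplifying assumption of two identical branches sharing a common pressure drop; the resistance factor f is a hypothetical parameter, not a measured value):

```python
# Elementary parallel-resistance sketch of the obstruction mechanism. For two
# identical branches under a common pressure drop, Q = dP / R per branch; if a
# neutrophil raises one branch's hydraulic resistance by a factor f, the
# fraction of total flow through the open branch becomes f / (1 + f).
def open_branch_flow_fraction(f):
    """Flow fraction through the unobstructed branch; f = 1 means no plug."""
    return f / (1.0 + f)

for f in (1, 2, 10, 100):
    print(f"resistance factor {f:>3}: "
          f"{open_branch_flow_fraction(f):.0%} of flow in the open branch")
```

Even a modest resistance increase in the occupied branch shifts most of the flow, and with it any pressure-driven bias on the follower cell, toward the open branch.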
Migrating neutrophils alter channel hydraulic resistance. We tested the interplay between hydraulic and chemical signals in straight (Figs. 4 and 5) and bifurcating channels (Fig. 6). We employed a small number of fresh human red blood cells (RBCs), preloaded in the channels during the priming steps, to determine whether fluid is displaced in front of the moving neutrophils (Fig. 4a). In one typical example, inside a 5 × 5 µm² channel, we observed that an RBC 100 µm ahead of a neutrophil changed its position at an average velocity comparable to that of the moving neutrophil (Fig. 4b-d, left panels). In another example, in a 10 × 5 µm² channel, an RBC 150 µm ahead of a moving neutrophil floated at 0-5 µm per min ahead of the neutrophil moving at ~20 µm per min, while other RBCs were bypassed by the moving neutrophil (Fig. 4b-d, right panels). These observations suggest that fluid displacement occurs in front of neutrophils moving through the smallest channels.
Migrating neutrophils alter chemoattractant gradients. To test whether neutrophils can alter the chemoattractant gradients, we spiked fluorescein into the fMLP solution and loaded it inside straight channels. Since fMLP and fluorescein have similar molecular weights (MW 437.55 g per mol and 332.31 g per mol, respectively), both molecules diffuse at a similar rate. The fluorescent intensity profile along the channel was used to indicate the profile of the fMLP gradient. As Fig. 5a shows, as a neutrophil migrated in a 3 × 5 µm² channel, the fluorescence intensity in front of it dramatically increased, becoming much higher than the intensity without the neutrophil at the same location. The fluorescent intensity was not altered in 10 × 5 µm² channels. We measured the fluorescent intensity profile with and without the presence of the neutrophil in various cross-sections (Fig. 5c). The results show that neutrophils alter the chemoattractant gradient in 3 × 5 µm² and 5 × 5 µm² channels. The fluorescence intensity is enriched in front of the neutrophil and decreased at the back, creating a much sharper gradient along the neutrophil than the initial gradient. In 7 × 5 µm² and 10 × 5 µm² channels, the fluorescence intensity profile remained the same, indicating that the chemoattractant gradient is not altered by the neutrophil.
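The claim that the two molecules diffuse at similar rates can be sanity-checked with a simple Stokes-Einstein scaling argument (a rough sketch, not a measurement from the study; the assumption that hydrodynamic radius scales as MW^(1/3) is our simplification):

```python
def diffusivity_ratio(mw_tracer, mw_target):
    """Stokes-Einstein: D ~ 1/r; for compact molecules assume r ~ MW^(1/3),
    so D_target / D_tracer ~ (MW_tracer / MW_target)^(1/3)."""
    return (mw_tracer / mw_target) ** (1.0 / 3.0)

# fMLP (437.55 g per mol) vs. fluorescein tracer (332.31 g per mol)
ratio = diffusivity_ratio(332.31, 437.55)
print(f"D_fMLP / D_fluorescein ~ {ratio:.2f}")  # ~0.91, i.e. within ~10%
```

Under this assumption the two diffusivities differ by less than 10%, consistent with using the fluorescein intensity profile as a proxy for the fMLP gradient.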
Neutrophils alter gradients and move fluid at bifurcations. Our experiments using straight channels suggest that chemotactic neutrophils can alter the chemoattractant gradient inside the channels and alter the hydraulic resistance of the channels. We applied this new knowledge to test the two proposed hypotheses regarding the mechanisms that govern the alternating paths of neutrophils moving through bifurcating channels (Fig. 6). To evaluate the fluid displacement in front of neutrophils moving through bifurcating channels, we loaded the channels sparsely with 2 µm TRITC-labeled polystyrene microbeads. We tracked the migration of neutrophils and the relative displacement of microbeads (Fig. 6a-c). In 3 × 5 and 5 × 5 µm² bifurcations, the microbeads moved forward as the neutrophils migrated into the same branch (Fig. 6a). The microbeads moved randomly in 10 × 5 µm² channels, independently of the migration of the neutrophils. We measured the migration distance of the neutrophil and the microbead displacement over time (N = 5, Fig. 6b, c). In a typical example at the 5 × 5 µm² bifurcation, a neutrophil migrated 28 µm in 1.5 min, while a microbead was displaced 23 µm (Fig. 6b). In another example, in the 10 × 5 µm² bifurcation, the migration distance for a neutrophil (60 µm in 4.5 min) was larger than the bead displacement (9 µm, Fig. 6c). The neutrophil, moving persistently up the branch, eventually passed the bead. Taken together, the results in small and large bifurcating channels suggest that moving neutrophils displace the fluid through the channels ahead of them. This implies that neutrophils are relatively impermeable to fluid and could alter the hydraulic resistance of the channels inside which they reside.
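One way to compare the two examples above is the ratio of bead displacement to neutrophil migration distance, using only the numbers reported in this paragraph (treating the ratio as a crude fluid-cell coupling index is our illustration, not a metric used by the authors):

```python
def coupling_index(bead_disp_um, cell_disp_um):
    # Fraction of the neutrophil's advance that appears as fluid (bead) displacement.
    return bead_disp_um / cell_disp_um

small = coupling_index(23, 28)  # 5 x 5 um^2 bifurcation: ~0.82
large = coupling_index(9, 60)   # 10 x 5 um^2 bifurcation: 0.15
print(small, large)
```

The near-unity value in the small channel versus ~0.15 in the large channel reflects the same contrast the text describes: strong fluid displacement in tightly confining channels, weak coupling when fluid can flow around the cell.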
We then measured the fluorescent intensity profiles in the presence of migrating neutrophils through bifurcations (Fig. 6d-g). The fluorescence intensity profiles were measured along the two lines indicated in Fig. 6e. In the 5 × 5 µm² channels, the presence of the chemotactic neutrophil in the right branch biased the chemoattractant gradient to the left (Fig. 6f). The fluorescent intensity behind the first neutrophil decreased, leading to asymmetrical gradient profiles in the two branches. Along profile 1, the gradient in the branch without the neutrophil is significantly steeper than in the one with the neutrophil (Fig. 6h, Supplementary Fig. 4). The peak fluorescence intensity is also significantly higher in the branch without the neutrophil than in the branch with the neutrophil (Fig. 6j). Consequently, the second neutrophil would experience a steeper chemoattractant gradient in the alternate branch. In contrast, in the 10 × 5 µm² channels, the fluorescent intensity profiles remained symmetrical in the two branches, despite the presence of the neutrophil (Fig. 6g, h, k).
Our results indicate that migrating neutrophils through bifurcations alter the chemoattractant gradients in the two branches with a small cross-section. To test the contribution of hydraulic resistance alterations, we probed the migration of neutrophils through asymmetric bifurcations.
Neutrophil pairs migrate through asymmetric bifurcations. We explored the migration patterns of two-neutrophil squads at asymmetric bifurcations (Fig. 7a). The cross-sections of the branches are 3 × 5 µm² and 10 × 5 µm², and their lengths are equal. The hydraulic resistance R of the channels is different, estimated using the following equation 10: R ≈ 12µL / [wh³(1 − 0.63 h/w)], where w, h, and L represent the width, height, and length of the channel and µ represents the viscosity of the fluid. The ratio of the hydraulic resistance of the narrow and wide channels is calculated to be ~10 (Fig. 7a). We measured the fluorescent intensity profiles along the narrow and wide branches, confirming that the chemoattractant gradients in the two channels are similar (Fig. 7b, c). We monitored the trajectories of individual neutrophils at bifurcations. We found that 87% of the individual neutrophils migrated to the wide branch with lower hydraulic resistance (Fig. 7d, e). We also studied the migration pattern of neutrophils in two-neutrophil squads (Fig. 7f). We observed that while the first neutrophil most often entered the wider branch, the trajectory of the second neutrophil depended on the intercellular distance between the two neutrophils. When the two neutrophils closely followed each other at the bifurcation (Fig. 7f, g), we observed that 86% of the time the second neutrophil entered the narrower branch. The percentage sharply decreases to 26% for "distant" conditions and to 12% for "individuals" conditions (Fig. 7g). The results suggest that the impermeable neutrophils could alter the hydraulic resistance of small channels and are consistent with the barotaxis bias of neutrophils, which prefer to migrate along the path with lower hydraulic resistance 9.
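The resistance ratio quoted above, based on R ≈ 12µL / [wh³(1 − 0.63 h/w)], can be reproduced numerically (a sketch; the viscosity of water and a common branch length are assumed here, and both cancel in the ratio):

```python
def hydraulic_resistance(width, height, length, mu):
    """R ~ 12*mu*L / (w*h^3*(1 - 0.63*h/w)) for a rectangular channel, valid for h <= w."""
    w, h = max(width, height), min(width, height)  # use the smaller dimension as h
    return 12.0 * mu * length / (w * h**3 * (1.0 - 0.63 * h / w))

mu = 1.0e-3   # Pa*s, water (assumed)
L = 100e-6    # m, equal for both branches (assumed; cancels in the ratio)
R_narrow = hydraulic_resistance(3e-6, 5e-6, L, mu)   # 3 x 5 um^2 branch
R_wide = hydraulic_resistance(10e-6, 5e-6, L, mu)    # 10 x 5 um^2 branch
print(R_narrow / R_wide)  # ~10.2, matching the ~10 ratio reported in Fig. 7a
```

The ratio comes out to roughly 10.2 with this formula, consistent with the ~10 value cited for the asymmetric bifurcation design.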
Migration of neutrophil armies through bifurcating networks. We explored the migration pattern of a large group of neutrophils in a microfluidic bifurcation network, which mimics complex microvascular networks in vivo. The device contains 63 bifurcations arranged into 11 columns and 6 rows (Fig. 8a). The channels in the device have a cross-section of 5 × 5 µm². We observed that neutrophils preferred to migrate to alternate branches throughout the network (Supplementary Movie 4). As Fig. 8b shows, six neutrophils arriving at one bifurcation consecutively moved to alternate branches, displaying an LRLRLR pattern. We measured the fraction of neutrophils moving to alternate branches at each of the 63 bifurcations (Fig. 8c). The fraction is larger than 0.6 at all the bifurcations. The histogram further shows that at more than 87% of the bifurcations, the fraction of neutrophils moving to alternate branches is larger than 0.7 (Fig. 8d). Taken together, our results indicate that neutrophils follow this alternating migration pattern not only in simple bifurcations but also in complex, interwoven networks of channels.
Discussion
We investigated the migration pattern of multi-neutrophil squads through branching capillaries in vivo and through microfluidic bifurcations in vitro. We found that consecutively migrating neutrophils in squads preferably enter alternate branches.
(Fig. 6 caption fragment: the blue and red areas correspond to profiles 1 and 2; fluorescence gradients in the two bifurcation branches without vs. with a neutrophil along profile 1 in 5 × 5 µm² (h) and 10 × 5 µm² (i) channels; peak fluorescence intensity along profile 2 in the branches without vs. with a neutrophil in 5 × 5 µm² (j) and 10 × 5 µm² (k) channels; **p < 0.01, ***p < 0.001, paired, two-tailed t test.)
The in vivo and in vitro models employed in this study share several key features. The geometries of the microfluidic bifurcations and networks resemble the in vivo capillary junctions and networks 4,[11][12][13]. The cross-section of the microfluidic channels (15-50 µm²) was within the range of mouse capillaries (~7-100 µm²) 11,14. The match between the in vivo and in vitro data suggests potentially robust mechanisms that guide the alternating migration pattern at non-flow capillary junctions. The extravasation, infiltration, and trafficking of neutrophils through interstitial space upon tissue injury and infection have been well studied 15. Moreover, neutrophils also migrate through small capillary networks in organs and tissues such as the liver 3, lymph nodes 5, skin 4, and lungs 6,7. However, how neutrophils efficiently navigate as a group through capillary networks and interstitial spaces is not well understood. Here, we found that the interplay between consecutive neutrophils plays a pivotal role in diverging their migratory paths at capillary bifurcations. This process was often neglected in previous studies [16][17][18] and may be important for reducing "traffic jams" in capillary networks. Some of the principles of neutrophil traffic through capillary networks may apply to traffic through cell-rich tissues that confine the moving neutrophils and limit their directional choices. Recent reports have shown that neutrophils could attract other neutrophils through LTB4 and other mediators [19][20][21]. They could also guide T-cell migration by leaving long-lasting chemokine-containing trails 22. Based on these mechanisms, it would be expected that the first neutrophil migrating into one branch would attract the second neutrophil to the same branch.
However, our results show that neutrophils do not follow each other at bifurcations, suggesting that the hierarchy of the different neutrophil-neutrophil coordination mechanisms depends on the circumstances. Near large microbial targets, neutrophils would follow each other, whereas at a distance from the targets and in conditions of sterile inflammation, the neutrophils would distribute uniformly across patent trafficking paths.
One mechanism responsible for the neutrophils in multineutrophil squads entering alternating branches at bifurcations involves the perturbation of chemoattractant gradients behind neutrophils confined and moving in small channels. While the first neutrophil blocks the diffusion of chemoattractant from the anterograde source, the retrograde concentration of chemoattractant in the branch with the moving neutrophil can decrease below the concentration in the alternative branch. The trailing neutrophil follows the higher concentration and enters the unoccupied branch. This mechanism is consistent with the known ability of human neutrophils in channels to steer effectively in the direction of steeper chemoattractant concentrations 23 . The chemical gradients in channels are sensitive to the presence of sustained fluid flow through the channels. Thus, it is important to note that there is no flow in the microfluidic bifurcations in vitro and there is limited or no blood flow in the small capillary junctions obstructed by neutrophils in vivo.
A second mechanism involves the increased hydraulic resistance of the channels due to the presence of neutrophils. The higher hydraulic resistance of neutrophil-occupied channels can also bias the trajectory of the trailing neutrophils towards the unoccupied channels that have lower hydraulic resistance. This bias is consistent with recent studies showing that neutrophils moving through channels may respond to differences in hydrostatic pressure 9. This study also offers additional insights into the balance between chemical gradients and hydraulic resistance. Our observation that the alternation between branches at asymmetric bifurcations still takes place when neutrophils do not completely block the larger channel suggests that the alteration of the chemical gradients may be sufficient to bias neutrophil trajectories. The occasional bias of the second neutrophil toward a higher hydraulic resistance branch is also consistent with the dominant effect of chemical gradients over hydraulic resistance 9.
Other strategies that guide the traffic of neutrophils through capillary networks may also be possible. Contact guidance by the side wall of the channel has been proposed for cancer cells based on observations of their migration on printed collagen I lines and bifurcations 24. Here, we avoided this contact effect by designing the channel before the bifurcation to be 5 × 5 µm², such that neutrophils contact all channel walls along their entire circumference. Physical contact between cells has been proposed as a mechanism that alters the trajectory of chemotaxing cancer cells 25. Here, we observed pairs of neutrophils that maintained contact during migration in channels and found that most "trailing" neutrophils picked alternate branches at bifurcations. These results suggest that physical contact between neutrophils does not play a role in their alternating trajectories. A mechanism involving water transport through the cell membrane has been proposed to play critical roles in regulating cell migration, including migration through the confinement of channels 26,27. Our observations of RBCs and microparticles moving in front of neutrophils at speeds comparable to those of the neutrophils suggest that, unlike cancer cells, neutrophils are effective at pushing the fluid in front of them. One must note that the average speed of migrating neutrophils in a confined channel is ~20 µm per min, one order of magnitude faster than that of cancer cells (~50 µm per h) 28. It is still possible that for neutrophils, the water flux through aquaporins is significantly smaller than the volume of fluid pushed forward by the fast-moving neutrophils, which would explain the sensitivity to the hydraulic resistance of the channel in front of the moving neutrophils.
In summary, our study reveals that when marching through complicated capillary networks, neutrophils undertake alternate paths. Chemical and physical mechanisms help distribute the neutrophil traffic uniformly at bifurcations. These mechanisms may contribute to the efficiency of trafficking during infections and inflammation and may help guard against disseminating bacteria.
Methods
Device design and fabrication. The microfluidic devices were fabricated with soft lithography. We fabricated the 2-layer master mold in negative photoresist (SU-8, Microchem, Newton, MA) on a 4-in. silicon wafer. The first layer was 5 µm thick and consists of the migration channels. The second layer was 150 µm thick and consists of the cell-loading channels and chemoattractant chambers. A mixture of PDMS base and curing agent (10:1) (PDMS, Sylgard 184, Elsworth Adhesives, Wilmington, MA) was cast on the wafer and cured at 65 °C overnight. After overnight curing, we peeled and diced the PDMS layer into individual devices. We punched the inlets and outlets of the devices using a 0.75 mm diameter biopsy puncher (Harris Uni-Core, Ted Pella) and irreversibly bonded them to a glass-bottom multiwell plate (MatTek Co., Ashland, MA).
Multiphoton microscopy in vivo. Animal experiments were performed with male and female adult mice (6-10 weeks old), and all experimental animal protocols were approved by the University of Calgary Animal Care Committee and were in compliance with the Canadian Council for Animal Care Guidelines. Multiphoton microscopy was employed to assess the movement and location of neutrophils in lymph node capillaries following infection with S. aureus. 2.5 × 10⁷ CFU were injected into the footpad before imaging. Mouse popliteal lymph nodes were imaged by anesthetizing the mouse and exposing the popliteal lymph node in the right hindlimb. Anesthetic and TRITC-Dextran 70,000 kDa were delivered by cannulation of the right jugular vein. Image acquisition of the popliteal lymph node was performed using an upright multiphoton microscope (Olympus FV1000 MPE, Richmond Hill, Ontario, Canada). Neutrophils and vasculature were visualized simultaneously in separate channels using 830 nm pulsed Ti:sapphire laser excitation (Coherent Chameleon Ultra II). The fluorescence emission was directed through bandpass filters (GFP: 520 ± 20 nm and TRITC: 600 ± 30 nm) and detected by non-descanned photomultiplier detectors (Olympus). Fluoview (FV10-ASW4.2) software was used to drive the confocal microscope and for 3D rendering of images. Files in OIF format were imported into Fiji (7) for analysis or Volocity 6.3 (Perkin Elmer) for export.
Spinning-disk confocal intravital microscopy. A tail vein catheter was inserted into mice after anesthetization with 200 mg/kg ketamine (Bayer Animal Health) and 10 mg/kg xylazine (Bimeda-MTC). Sterile inflammation was induced as described previously 3. In brief, mice were anesthetized with isoflurane and a <1-cm incision was made just below the level of the diaphragm to expose the liver. A single focal injury was induced on the surface of the liver using the tip of a heated 30-gauge needle mounted on an electro-cautery device. Mice were prepared for intravital microscopy of the liver as previously described 3,13,29. Briefly, a midline laparotomy was performed, followed by removal of the skin and abdominal muscle along the costal margin to the midaxillary line to expose the liver. Mice were placed in the right lateral position and a single liver lobe was exteriorized on the pedestal of a custom-made, heat-controllable Plexiglas microscope stage. All exposed tissues were moistened with saline-soaked gauze to prevent dehydration during imaging. For the duration of all experiments, the liver was continuously superfused with physiological saline buffer. Images were acquired using an Olympus IX81 inverted microscope equipped with an Olympus focus drive and a motorized stage (Applied Scientific Instrumentation, Eugene, OR), fitted with a motorized objective turret equipped with 10×/0.40 UPLANSAPO and 20×/0.70 UPLANSAPO objective lenses, and coupled to a confocal light path (WaveFx; Quorum Technologies, Guelph, ON) based on a modified Yokogawa CSU-10 head (Yokogawa Electric Corporation, Tokyo, Japan). The hepatic microvasculature and neutrophils were visualized by intravenous (i.v.) infusion of 1.2 mg of PE-conjugated anti-PECAM-1 and APC-conjugated anti-Ly6G antibodies, respectively. We assume a capillary has a circular cross-section and estimate the cross-sectional area of the capillary as π(w/2)², where w is the width of the capillary.
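The circular cross-section estimate above can be sketched in a few lines (a minimal illustration; under this assumption, the ~7-100 µm² capillary range quoted in the Discussion corresponds to widths of roughly 3-11 µm):

```python
import math

def capillary_cross_section(width_um):
    # Assume a circular cross-section: area = pi * (w/2)^2
    return math.pi * (width_um / 2.0) ** 2

print(capillary_cross_section(3.0))   # ~7 um^2, lower end of the mouse capillary range
print(capillary_cross_section(11.3))  # ~100 um^2, upper end
```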
Laser excitation wavelengths of 491, 561, and 642 nm (Cobolt) were used in rapid succession together with the appropriate band-pass filters (Semrock). A back-thinned EMCCD 512 × 512 pixel camera was used for fluorescence detection. Volocity software (Perkin Elmer) was used to drive the confocal microscope.
Neutrophil isolation. Human blood samples from healthy donors (aged 18 years and older) were purchased from Research Blood Components, LLC (Brighton, MA). Human neutrophils were isolated within 2 h of the blood being drawn, using the human neutrophil direct isolation kit (STEMcell Technologies, Vancouver, Canada). Isolated neutrophils were stained with Hoechst 33342 trihydrochloride dye (Life Technologies) and then suspended in Iscove's Modified Dulbecco's Medium (IMDM) containing 20% fetal bovine serum (FBS) (Thermo Fisher Scientific) at a concentration of 1 × 10⁷ cells per mL.
Microfluidic device operation and imaging. The formylated peptide fMLP (Sigma-Aldrich) was diluted in IMDM containing 20% FBS to 100 nM. To prime the device, 10 µL of the chemoattractant solution was pipetted into each device. The well plate was then placed in a desiccator under vacuum for 10 min and then taken out for 15 min until the devices were filled completely with the solution. Three microlitres of media (IMDM + 20% FBS) were then added to each well to cover the devices. Ten microlitres of media were then pipetted from the inlet to replace the chemoattractant in the cell-loading channel with chemoattractant-free media, which created the gradient along the migration channels. Two microlitres of neutrophil suspension were then pipetted into each device. To characterize the chemoattractant gradient, the fMLP solution was spiked with fluorescein (Sigma-Aldrich) at a concentration of 1 µg/mL and loaded into the device. To characterize the movement of fluid in the channels, RBCs (~2 × 10⁸ per mL) or 2 µm nile-red polystyrene microbeads (1.04 g per mL, 0.1% w/v) (Spherotech Inc., Lake Forest, IL, USA) were spiked into the fMLP solution and loaded into the device.
Time-lapse images at regions of interest were captured at 10× or 20× magnification with a time interval from 30 s to 3 min between two cycles, using a fully automated Nikon TiE microscope (Micro Device Instruments). The microscope is equipped with a biochamber heated at 37°C and 5% CO 2 . The trajectories of cells and microbeads as well as the fluorescent intensity profiles were analyzed using Fiji ImageJ.
Estimating fluid displacement in channels by neutrophils. To estimate the fluid displacement in front of moving neutrophils, we employed floating microparticles (fresh human RBCs or polystyrene microbeads). We loaded the channels with a small number of microparticles during the device-priming steps. We identified neutrophils moving in channels inside which one or more microparticles were present. We measured the change in the position of microparticles that were 50 µm or more in front of moving neutrophils. Whenever the microparticles moved with velocities comparable to those of the moving neutrophils, we concluded that fluid flow is present in front of the moving neutrophils. We only made qualitative estimates regarding the displacement of fluid in front of moving neutrophils because several factors may affect the microparticle velocity and preclude the use of microparticles for quantitative measurements of fluid flow. For example, the velocity of RBCs in the centerline of a ~10 µm diameter channel during Poiseuille flow can be up to ~1.5 times faster than the average velocity of the fluid in the channel 30, an effect known as the Fahraeus effect 31. Other factors, including the size of the microparticle relative to the size of the channel and the friction between the microparticle and the walls of the channel, could reduce the velocity of particles below the average velocity of the fluid. Consequently, our analysis of the microparticle and neutrophil velocities could only identify whether fluid is being displaced inside small channels during neutrophil migration.
Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Visualization Experiment and Numerical Simulation of Cracks Caused by Pulsed-Plasma Rock Fracturing
Shale oil and gas reservoirs, as well as compact oil and gas reservoirs, are important oil exploration resources. However, their tight lithology and extremely low permeability and porosity hinder extraction. Production stimulation measures, such as reservoir fracturing transformation, are needed to increase production from low-permeability oil and gas reservoirs. This study explored a new type of fracturing technology, namely, pulsed-plasma rock fracturing, to increase rock fracturing efficiency and recovery. First, a model of stress distribution on the sidewalls of open holes was established. Then, a fracturing experiment was conducted on polymethyl methacrylate (PMMA). Morphological features were analyzed on the basis of the visual characteristics of PMMA, and the influential factors of fracture breakdown were summarized. Meanwhile, the stress-changing rule was analyzed by simulating pulsed-plasma shock-wave rock fracturing with LS-DYNA. Results show that pulsed-plasma fracturing can generate valid cracks with actiniform and wave-form features. High discharge voltages and large loads on rocks are associated with long crack lengths. In the simulation, as the applied load strength increases from 9 MPa to 30 MPa, the crack length increases from 16 mm to 67.5 mm. At the same time, the width and number of fractures show an increasing trend, and the radiating, multi-branched cracks inside the rocks become increasingly complex. This study provides a practical and reliable reference for the technology of pulsed-plasma rock fracturing.
Introduction
The world has invested massive amounts of manpower and material resources in technologies for increasing oil production. Fracturing is one of the important means for increasing oil production. However, the formation of effective fractures in hard rock formations has always troubled scholars in China and other countries. Pulsed-plasma fracturing technology is one of the latest fracturing technologies with the greatest potential developed in recent years. As early as 1905, Swedbery found that high-voltage discharge in water can generate pulsed-plasma shock waves. This phenomenon was called the "electrohydraulic effect" by succeeding scholars. In the subsequent decades, scholars from Russia, Australia, France, China, the U.S.A., and Ukraine devoted themselves to studies on relevant technologies [1][2][3][4][5]. This technology was introduced in China to deblock oil layers. Related studies and applications have been performed in Zhongyuan Oilfield [6], Karamay Oilfield [7], Qiaokou Oilfield [8], and Henan Oilfield [9]. First, pulsed-plasma shock waves induce cracks in a reservoir, and the internal fluid resonates. Second, the high-temperature plasma beam dissolves waxes and asphaltenes. On the basis of these effects, a blocked oil layer can be dredged with an efficiency of 90%. Given that pulsed-plasma shock waves can crack reservoirs, scholars all over the world have applied this technology in rock breaking and fracturing. Pulsed-plasma fracturing technology has attracted the attention of a large number of scholars given its natural advantage over present fracturing technology, which often pollutes reservoirs and wastes water resources. Scholars have mainly studied plasma-generating mechanisms, performed shock-wave numerical simulation, developed experimental devices, and optimized the factors that influence fracturing effects [10][11][12][13][14].
In this study, laboratory experiments were conducted to further study the law of crack initiation with pulsed-plasma rock fracturing technology. A model of stress distribution on the sidewalls of open holes was established by using LS-DYNA to simulate the process of pulsed-plasma shock-wave rock fracturing. Meanwhile, by taking advantage of the visual characteristics of PMMA, a fracturing experiment was carried out to observe and analyze the morphological and macroscopic features of cracks and to summarize the influential factors of crack initiation.
State of the art
Scholars have performed numerous works on pulsed-plasma fracturing technology. South Korean scholars Rim et al. [15] developed a rotating arc gap-switch driven by a self-induced magnetic field, with a voltage of 3-11 kV and a peak current of 400 kA. They adopted a copper-tungsten electrode to improve switching performance. Experiments have shown that intense electromagnetic radiation occurs and the electric circuit exhibits sharp changes in current and voltage during plasma generation. Decun et al. [16] imposed anti-interference measures on laboratory equipment to solve these problems. They used a single-phase 50 Hz, 220 V AC power supply to avoid the conduction coupling interference caused by the grid supply. Their experimental instrument adopted a piezoresistive pressure sensor to avoid electromagnetic interference. Sun et al. [17] conducted follow-up studies. They developed a pulsed power supply with a power input of 220 V and 2 kW, a maximum stored energy of 40 kJ, a rated high DC voltage of 20 kV, and a maximum discharge current of 70 kA. They utilized optical fiber to isolate and transmit high-voltage signals between the high-voltage unit and the control system. Scholars have not only developed devices but also crushed rocks to verify the feasibility of such devices. For example, Bing from Dalian University of Technology developed a large-scale high-pressure liquid-phase discharge device with a length of 5 m and a maximum discharge voltage of 30 kV to test impulse shock-wave pressure in water, and fitted a calculation formula for the shock-wave peak pressure [18,19]. Kuznetsova et al. [20] from the Institute of High-Tech Physics of Tomsk Polytechnic University in Russia conducted experiments on concrete blocks with dimensions of 620 mm × 580 mm × 1000 mm and 700 mm × 450 mm × 300 mm. The high-voltage pulse device they developed has a voltage of 15-20 kV and a specific energy of 60 kJ/cm³ and exhibits an energy accumulation in the discharge channel of up to 2.5 × 10³ MPa.
Khomeriki et al. [21] studied the directed fracture of rocks under impulse load. Madhavan et al. [22] performed one-dimensional and two-dimensional hydrodynamic simulations to study the action of shock waves on rocks in water. Chen and Maurel [23] established a simplified model of shock wave propagation in water under liquid-phase discharge to study the propagation law of pulsed-plasma shock waves in water and within surrounding rocks. French scholars Deroy and Claverie [24.25] experimentally studied the formation of plasma and the propagation of pulsed-plasma shock waves. By using optical and imaging diagnostic methods, they observed shock wave propagation and bubble formation and estimated the energy absorbed in water during discharge. Yi et al. [26] discussed the influences of optimal gaps between electrodes and plasma channel length on the intensity of pulsed-plasma shock waves. The most intense pulsed-plasma shock wave can be produced with the optimal electrode gap. The plasma channel length adjustment (PCLR) method can effectively improve the efficiency of energy transfer and reduce the requirements for capacitors and switches. In addition, the PCLR method has been applied to demonstrate that pulsedplasma shock waves can improve permeability effectively. Given that this experiment was carried out in an environment without confining pressure, future experiments applied the triaxial condition. Mao [27] studied the possibility of multiple cracking and predicted the size of cracks through a triaxial pressure simulation experiment. Meanwhile, pulse attenuation has been explained by using a dynamic fracture model. Multiple cracks with ideal lengths and widths are induced when the pressure amplitude generated by the pulsed-pressure wave reaches 50 MPa. Experiments have shown that liquid-phase high-voltage pulsed-discharge cracking technology can be used not only for compact reservoirs but also for oriented cracking. Rongyao et al. 
[28] conducted a three-dimensional topographic analysis of cracks generated by simulated rock cracking with the same pulse power supply (maximum stored energy of 40 kJ at 20 kV). The cracks exhibit surface roughness values of 0.430-1.075 mm and a certain conductivity. An electrical impulse fracturing device with a hydrostatic pressure of up to 30 MPa has been established to simulate the ambient pressure at a depth of 3000 m in deep wells, and plasma impact fracturing experiments at different hydrostatic pressures have been conducted. These experiments showed that numerous short cracks are present, crack turning is common, and annular cracks are found in local areas [29]. The groups of Yan [30], Usov [31] and Inoue [32] also conducted relevant experimental studies on this problem.
The above-mentioned studies discussed hydroelectric crushing mechanisms, experimental equipment, the occurrence process, and the resulting action. However, systematic research that grounds experimentation and simulation in theory remains scant. The impossibility of observing the development of cracks inside test objects, and thus of understanding the trajectory of pulsed-plasma shock waves when rocks are cracked by plasma, remains the major problem in experimental research. In this study, PMMA was used as an experimental object to observe the propagation path of pulsed-plasma shock waves in an ideal state.
The rest of the study is organized as follows. Section III describes how the model of stress distribution at the sidewalls of open holes was established, how the plasma fracturing experimental equipment was built in-house, how the PMMA visualization experiment was carried out, and how the numerical model was constructed by using LS-DYNA. Section IV analyzes pulsed-plasma fracturing, compares pulsed-plasma fracturing with hydraulic fracturing, and discusses the results of the numerical model. The summary and relevant conclusions of this study are given in the last section.
Model of stress distribution on the sidewalls of open holes and shock-wave pressure at fracture
Analyzing the instantaneous stress conditions of rock rupture on the sidewalls of open holes reveals that when the permeability of the formation is low, the rock is subjected mainly to ground stress, fluid column pressure inside the wellbore, and the pressure of plasma-generated shock waves. The fluid column pressure, the pressure of the plasma-generated shock waves, and the ground stress are studied, and the final distribution of the stress field on the surrounding sidewall rock is obtained via the superposition principle. The stress conditions at the well sidewall comprise the radial stress (σr), vertical stress (σz), and circumferential stress (σθ); compressive stress is taken as positive and tensile stress as negative. The distribution of the stress field around the well sidewall is shown in Fig. 1. Stress of the fluid column pressure on the well sidewall: (1). Stress of the shock wave generated by the plasma on the well sidewall: (2). Stress of the maximum horizontal principal stress on the well sidewall: (3). Stress of the minimum horizontal principal stress on the well sidewall: (4). Stress of the overlying strata pressure on the well sidewall: (5), where ν is Poisson's ratio and is dimensionless.
In accordance with the superposition principle, under the joint action of the fluid column pressure, shock-wave pressure, and ground stress, the distribution of formation stress on the well sidewall is given by Equations (6) and (7). The rock breaks when the circumferential stress acting on it exceeds the tensile strength of the rock, that is, when the condition of Equation (8) holds, where σt is the tensile strength of the rock in MPa. As can be seen from Equation (7), the circumferential stress decreases as the shock-wave pressure increases. When σθ is negative, that is, when the circumferential stress on the rock is converted into tensile stress, the rock breaks if the tensile stress is sufficient to overcome the tensile strength; this occurs first where σθ is minimal, at θ = 0° or 180°.
where Pp is the pore pressure in MPa, and α is the dimensionless effective stress coefficient.
Substituting Equation (10) into Equation (9) yields the shock-wave pressure required for the rock to break, given in Equation (11).
As can be seen from Equation (11), the rock-fracturing pressure is related to the rock tensile strength, the maximum horizontal principal stress, the minimum horizontal principal stress, and the pore pressure. A small difference between three times the minimum horizontal principal stress and the maximum horizontal principal stress means that the shock-wave pressure required for rock fracturing is low. Likewise, a large pore pressure or fluid column pressure means that rock fracturing requires a low shock-wave pressure. Without applied confining pressure, the fracturing pressure of the rock is approximately equal to its tensile strength.
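Since Equation (11) itself is not reproduced here, the dependence described in this paragraph can be illustrated with the classical Hubbert-Willis-type breakdown condition. The symbol names and the effective stress coefficient α are assumptions of this sketch, not necessarily the paper's exact formula:

```python
def required_shock_pressure(sigma_H, sigma_h, P_p, sigma_t, alpha=1.0):
    """Shock-wave pressure needed to initiate a tensile fracture at the
    open-hole wall, in the classical Kirsch/Hubbert-Willis form implied
    by the text: low when 3*sigma_h - sigma_H is small and when pore
    pressure P_p is large. All pressures in MPa; alpha is the
    dimensionless effective stress coefficient."""
    return 3.0 * sigma_h - sigma_H - alpha * P_p + sigma_t

# Without confining pressure or pore pressure, the fracturing pressure
# reduces to the tensile strength of the rock, as stated in the text.
print(required_shock_pressure(sigma_H=0.0, sigma_h=0.0, P_p=0.0, sigma_t=8.0))  # 8.0
```

The sketch reproduces the qualitative statements of the paragraph: raising the pore pressure or closing the gap between 3σh and σH lowers the required shock-wave pressure.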
Experimental device and method for the pulsed-plasma fracturing of PMMA
(1) Experimental equipment
A pulsed-plasma fracturing experimental device, through which plasma fracturing can be applied simultaneously at different angles of a rock and in multiple holes, was built in this study. The experimental device has two modules, one for charging and energy storage and one for plasma generation. It includes a charging source, a discharge switch, a capacitor, a high-voltage protection inductor, a protective freewheeling diode, a discharge electrode set, and a pressure sensor. The connections of the pulsed-plasma fracturing experimental device are shown in Fig. 2.
(2) Experimental method
PMMA was used for visualization to summarize the fracturing law and observe cracks directly. First, six 100 mm × 100 mm PMMA blocks were selected. A hole with a diameter of 30 mm and a depth of 60 mm was drilled into the middle of the upper surface of each PMMA block and filled with water. Then, the electrode set was placed in the hole and sealed for the experiment. The schematic of the experimental device is shown in Fig. 3. Plasma fracturing experiments were carried out on five pieces of PMMA at 3, 4, 5, 6, and 7 kV to observe the fracture morphology. The last piece of PMMA was hydraulically fractured at 8 MPa with a displacement of 30 ml/min to compare the morphology of cracks induced by hydraulic fracturing with that induced by plasma fracturing.
Establishment of the numerical model of pulsed-plasma rock fracturing
A shock wave generated by a plasma pulse was simulated by applying a time-varying pulse load in the fluid domain to further study the mechanism of pulsed-plasma rock fracturing. In this simulation, the Euler algorithm was adopted in the fluid domain and the Lagrangian algorithm in the rock specimen. The nonlinear coupling between the shock wave and the rock mass was realized by using the multi-material fluid-solid coupling algorithm. The failure criterion was stress-controlled, that is, failure occurred when the tensile stress exceeded the tensile strength of the rock. The physical model was specified as a homogeneous rock with dimensions of 150 mm × 150 mm × 150 mm, containing at its center a hole with a diameter of 50 mm and a depth of 60 mm.

As the discharge voltage increases, the number of secondary cracks between the two main cracks increases greatly, and the cracks exhibit a radial shape as a whole. The properties of the cracks of each group of PMMA are summarized in Table 1. The basic law of sample cracking is analyzed by plotting the relationship between the discharge voltage and the total number of cracks, as shown in Fig. 5. The analysis of Table 1, Fig. 4, and Fig. 5 shows that the total number of cracks in PMMA increases continuously with increasing discharge voltage; a high voltage is associated with a complex fracture morphology. Cracks caused by hydraulic fracturing and plasma fracturing were compared by subjecting PMMA to hydraulic fracturing. The experimental results are shown in Fig. 6, which clearly shows that cracks induced by hydraulic fracturing are flat, whereas the surfaces of the cracks induced by plasma fracturing are undulating.
Given that plasma fracturing can be considered as fracturing in which the plasma discharge channel acts as a point vibration source generating spherical shock waves, the cracks generated by plasma cracking are more difficult to close than those caused by hydraulic fracturing. Cracks induced by hydraulic fracturing are difficult to steer without applying triaxial pressure, whereas cracks caused by pulsed-plasma fracturing are complex. As can be seen from Fig. 4, as the input energy increases, the cracks elongate, their shapes become increasingly complicated, and their number increases. Therefore, cracks caused by pulsed-plasma fracturing are more complicated than those caused by hydraulic fracturing. The simulation results are summarized in Table 2, which shows that tensile failure occurs first and that shear failure occurs under the load of 30 MPa. As depicted in Fig. 7 and Table 2, the width and maximum length of the cracks increase with increasing load because the high effective stress quickly exceeds the tensile strength of the rock, and the rock at the loading surface is broken instantaneously such that the shock wave spreads far. In addition, near the free surface, stress waves are reflected and converted into tensile waves, resulting in the well-known surface spallation phenomenon.
A measuring point was selected on each of the loading and free surfaces, and the stress time-history curves of the same measuring points under multiple simulations were compared, as presented in Figs. 8 and 9. These figures show that the effective stress reaches its peak value rapidly and then drops quickly, a characteristic shock-wave feature. The drop is followed by a slow rise and then a descent because the shock wave is reflected at the free surface. The severe oscillation of the curve is a manifestation of insufficient stress. As the simulated applied load increases, the maximum peak and the reflection at the free surface increase. In Fig. 8, the arrow at the peak value of 30 MPa indicates the post-processing isosurface mode at that moment; it shows that the pulsed plasma extends outwards in the form of a spherical wave. The arrow at the peak value of 30 MPa in Fig. 9 marks the grid-point mode at that moment, from which the diffusion path of the energy can be inferred.
Conclusion
This study analyzed the mechanism of pulsed-plasma rock fracturing by combining tests and numerical simulation to explore and reveal its characteristics and mechanism. The following conclusions were drawn: (1) The model of stress distribution on the sidewalls of open holes during plasma fracturing suggests that the rock-fracturing pressure in pulsed-plasma rock fracturing is related to the rock tensile strength, the maximum horizontal principal stress, the minimum horizontal principal stress, and the pore pressure. In the absence of confining pressure, the fracturing pressure of the rock is approximately equal to its tensile strength.
(2) The basic fracturing law of pulsed-plasma fracturing can be obtained intuitively by conducting a fracturing experiment on PMMA. The number and complexity of cracks continue to increase with increasing voltage. The comparative experiment with hydraulic fracturing shows that cracks formed by plasma fracturing exhibit a radial pattern with a large fracture density and undulating, wave-form surfaces.
(3) Numerical simulation reveals that the number, length, and width of rock cracks increase with increasing applied load, and that energy diverges as radiating spherical waves. Although the basic law of crack initiation in pulsed-plasma rock fracturing has been obtained experimentally, study of this topic remains insufficient and large gaps remain. Data collection by the experimental devices can be improved further, and the attenuation of shock-wave pressure after rock fracturing must be clarified. Subsequent studies on the characteristics of crack steering and on the expansion pattern of combined radial and axial cracks are required.
This study combined laboratory experiments with theory to clearly understand the crack propagation law of pulsed-plasma rock fracturing. A visual experiment was performed to directly observe and describe the characteristics of fracture development, and it has a certain reference value for follow-up studies on pulsed-plasma rock fracturing. In contrast to rocks, PMMA is homogeneous: although cracks in PMMA describe the shock-wave propagation path well, they are idealized. Thus, PMMA differs from heterogeneous rocks, and how to compensate for this deficiency is a direction for future research.
Isotopic study of Raman active phonon modes in b-Ga2O3
Holding promising applications in power electronics, the ultra-wide band gap material gallium oxide has emerged as a vital alternative to materials like GaN and SiC. The detailed study of phonon modes in b-Ga2O3 provides insights into fundamental material properties such as crystal structure and orientation and can contribute to the identification of dopants and point defects. We investigate the Raman active phonon modes of b-Ga2O3 in two different oxygen isotope compositions (16O, 18O) by experiment and theory: by carrying out polarized micro-Raman spectroscopy measurements on the (010) and (2̄01) planes, we determine the frequencies of all 15 Raman active phonons for both isotopologues. The measured frequencies are compared with the results of density functional perturbation theory (DFPT) calculations. In both cases, we observe a shift of Raman frequencies towards lower energies upon substitution of 16O with 18O. By quantifying the relative frequency shifts of the individual Raman modes, we identify the atomistic origin of all modes (Ga-Ga, Ga-O or O-O) and present the first experimental confirmation of the theoretically calculated energy contributions of O lattice sites to Raman modes. The DFPT results enable the identification of Raman modes that are dominated by the different, inequivalent O- or Ga-atoms of the unit cell. We find that oxygen substitution on the O III site leads to an elevated relative mode frequency shift compared to the O I and O II sites. This study presents a blueprint for the future identification of different point defects in Ga2O3 by Raman spectroscopy.
Crystal structure and electronic properties
The crystal structure of the b-polymorph (Fig. 1) is monoclinic (space group C2h^3; C2/m) 12,19,32 with lattice parameters a = 12.29 Å, b = 3.05 Å, c = 5.81 Å and a monoclinic angle of b = 103.77° between the crystallographic a and c axes. 19 The commercial availability of b-Ga 2 O 3 substrates with different surface planes enables lattice-matched, strain-free homoepitaxial growth of b-Ga 2 O 3 films with low defect densities. 33-40 Nonetheless, even in the case of homoepitaxy, different substrate orientations can result in the formation of structural defects in the deposited layers, e.g. twin domains in (100) and (2̄01) films. 33,38 In order to overcome this problem, homoepitaxial growth of b-Ga 2 O 3 is mostly performed on (010)-oriented substrates, 34 although viable alternatives such as (001) homoepitaxy 33 or the employment of proper offcuts in (100)-oriented substrates 41 were recently reported.
Previous works have demonstrated the possibility of extrinsic n-doping by substituting Ga atoms with electron donors like Si, 42 Sn, 43 Ge 44 and Nb. 45 To date, the reported charge carrier densities in b-Ga 2 O 3 bulk crystals 46,47 and thin films 44,45,48 have rarely exceeded the high 10¹⁹ cm⁻³ range. Free carrier concentrations are commonly limited by point defects and structural defects. Theoretical works 49-56 have addressed the effects of Ga and O point defects in gallium oxide on the electrical and optical properties. Using hybrid functional calculations, it was revealed that oxygen vacancies (V O ) are deep donors with ionization energies above 1 eV and hence do not contribute to the observed n-type conductivity in unintentionally doped b-Ga 2 O 3 . 49 Succeeding studies found that all other native defects, except for gallium interstitials (Ga i ), are deep, too. 53-55 While Ga i may act as shallow donors, their high mobility, coupled with large formation energies under n-type conditions, inhibits their contribution to the electrical conductivity. 56 Whereas intrinsic defects should not be the main source of electrical conductivity, different impurities including H, Si, Ge, Sn, F and Cl could act as extrinsic shallow donors. 49,50 Moreover, a large concentration of point defects can also affect the mobility of free charge carriers. Nonetheless, from an experimental point of view, both (i) the control over the formation of different point defects during material synthesis and (ii) their unambiguous identification are extremely difficult to achieve, but potentially of paramount importance for fully exploiting the material's potential. In order to address this challenge, we conduct a combined experimental and theoretical study of the impact of different lattice atoms on phonon modes in b-Ga 2 O 3 isotopologues.
Lattice dynamics
Polarized micro-Raman spectroscopy constitutes a powerful tool that enables e.g. the study of strain in epitaxial thin films, 58 the detection of dopants and point defects via local vibrational modes, 59 the identification of lattice sites via angular-resolved measurements, 60 and the determination of the thermal conductivity via Raman thermometry in its one- and two-laser implementations. 61,62 Symmetries and spectral positions of Raman-active phonons of b-Ga 2 O 3 in the natural 16 O isotopic composition have been calculated theoretically and verified experimentally in a number of publications. 60,63-66 Two recent works 60,66 employed polarized micro-Raman spectroscopy to separate Raman modes of different symmetries. Infrared-active phonons were investigated by IR ellipsometry. 67 The influence of lattice expansion was studied by temperature-dependent Raman spectroscopy of b-Ga 2 O 3 . 63 Employing valence force field calculations, the authors identified three categories of Raman-active phonons with respect to the motions of Ga and O atoms. The transition from the b- to the a-phase under high pressure was investigated by carrying out Raman spectroscopy in diamond anvil cells. 64 Raman tensor elements were determined in angular-resolved measurements 60 using a modified Raman tensor formalism proposed in a preceding publication. 68 Though the effects of isotope substitution on the frequencies of the Raman-active phonon modes in b-Ga 2 O 3 were investigated theoretically, 69 no corresponding experimental study is available in the literature.
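In the harmonic approximation, the expected magnitude of such isotope shifts can be estimated from the change in oscillating mass: a mode carried entirely by oxygen motion scales as ω ∝ m_O^(−1/2), giving a maximal 16O → 18O downshift of 1 − √(16/18) ≈ 5.7%, while a purely Ga-Ga mode is unshifted. A minimal sketch, in which the effective oxygen fraction is an assumed illustrative parameter rather than the DFPT projection used later in the paper:

```python
import math

def isotope_shifted_frequency(omega_16, oxygen_fraction):
    """Harmonic estimate of a Raman mode frequency after full
    16O -> 18O substitution. `oxygen_fraction` (0..1) is the share of
    the mode's vibrational energy carried by oxygen motion (an assumed
    effective parameter for illustration)."""
    full_shift = 1.0 - math.sqrt(16.0 / 18.0)   # ~5.72% for a pure O mode
    return omega_16 * (1.0 - oxygen_fraction * full_shift)

# A pure Ga-Ga mode is unshifted; a pure O-O mode shifts by ~5.7%.
print(isotope_shifted_frequency(200.0, 0.0))  # 200.0
print(round(isotope_shifted_frequency(766.7, 1.0), 1))
```

Measured relative shifts between 0% and 5.7% thus interpolate between Ga-dominated and O-dominated mode character.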
Raman vibrations may be excited by oscillations of Ga or O atoms. By calculating the displacements of the individual O or Ga lattice sites, density functional perturbation theory (DFPT) calculations enable the identification of the atomistic origins of vibrational modes (Ga-Ga, Ga-O or O-O). From an experimental point of view, the identification of individual lattice sites in materials with large unit cells is a challenging endeavour. A powerful tool is the usage of different isotopes to investigate their impact on the vibrational properties of a material. This approach was successfully applied in TiO 2 , enabling the experimental identification of Raman modes without any contribution of oxygen lattice vibrations, as evidenced by the unchanged frequency of the E g (1) and B 1g (1) modes in anatase TiO 2 . 70,71 Alternatively, the introduction of different isotopes of the same dopant can provide an unambiguous identification of dopant-related local vibrational modes. ZnO is a well-studied wide band-gap oxide, whose point defects have been thoroughly investigated by vibrational spectroscopy. Apart from Zn and O vacancies, point defects in ZnO include Zn on interstitial sites (Zn i ) and oxygen on interstitial sites (O i ). 72,73 Undoped ZnO exhibits intrinsic n-type conductivity, which is commonly attributed to one or more of the following origins: (i) intrinsic defects acting as shallow donors, 74,75 (ii) shallow donors due to H absorbed from the air, 76-79 (iii) shallow donors formed by residual trace amounts of group-III impurities such as Al, Ga and In. 80 The substitution of N on O lattice sites in ZnO produced a pair of additional modes at ν ≈ 274 and 510 cm⁻¹, 81-84 irrespective of the implanted N isotope. 85,86 Moreover, since the doping of ZnO with Ga, Fe, Sb, Li and Al yielded the same vibrational modes, 87 their occurrence was tentatively attributed to a Zn i -N i or Zn i -O i complex.
83 By investigating undoped and nitrogen-doped ZnO thin films in different Zn isotope compositions, the presence of the 274 cm⁻¹ mode was eventually revealed to be related to interstitial Zn clusters depending on the surface polarity of ZnO. 84 This experimental approach could represent an important milestone for the future identification of different point defects in Ga 2 O 3 layers as a function of different synthesis (e.g. deposition, annealing) conditions. 88 Oxygen defects such as O vacancies and nitrogen on O sites are deep donors and acceptors, respectively, which can strongly impact the electronic behavior (traps, scattering centers, centers that get charged and discharged in space-charge devices). These defects need to be studied, and we propose in the long run to use Raman spectroscopy for this purpose.
Towards this goal, this work investigates the phonon frequencies of b-Ga 2 O 3 in the natural 16 O and in the 18 O-enriched isotopic composition.
Experimental and theoretical methods
A b-Ga 2 O 3 layer was homoepitaxially deposited (deposition time 445 min) on top of an unintentionally doped (010)-oriented substrate with In-mediated metal-exchange catalysis (MEXCAT) 89-91 in an MBE chamber equipped with an oxygen-plasma source run at a power of 300 W. For this deposition run, nominally 97.39% isotopically enriched 18 O 2 was employed to provide an oxygen flux of 0.38 standard cubic centimeters per minute (sccm) during the layer growth. For gallium (with natural isotopic abundance), a beam equivalent pressure (BEP) of BEP Ga = 1.27 × 10⁻⁷ mbar was used (equivalent to a particle flux of F Ga = 2.2 nm⁻² s⁻¹), while the additional In flux necessary to allow the catalytic growth of the layer at a substrate temperature of T g = 900 °C was set to F In = 1/3 F Ga (BEP In = 5.2 × 10⁻⁸ mbar).
The 18 O isotope fraction, n*, within the sample was determined by time-of-flight secondary ion mass spectrometry (ToF-SIMS) depth profiles on a ToF-SIMS IV machine (IONTOF GmbH, Münster, Germany). Measurements were performed with a 25 kV Ga + analysis beam and a 2 kV Cs + sputter beam. The isotope fraction is directly accessible from the 18 O and 16 O intensities via n* = I(18 O)/[I(18 O) + I(16 O)]. Crater depths were analysed by interference microscopy with a WYKO NT1100 (Veeco Instruments Inc., USA).
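As a minimal sketch, the intensity-ratio definition of n* reads (the example intensities are chosen to match the reported film fraction of about 96.3%, not actual counts):

```python
def isotope_fraction(I_18, I_16):
    """18O fraction n* from ToF-SIMS intensities of the two isotopes:
    n* = I(18O) / (I(18O) + I(16O))."""
    return I_18 / (I_18 + I_16)

# Illustrative intensities reproducing the reported film value of ~96.3%.
print(round(100 * isotope_fraction(I_18=963.0, I_16=37.0), 1))  # 96.3
```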
Raman scattering at room temperature (293 K) was excited by a 632.816 nm He-Ne laser on a LabRAM HR 800 spectrometer (Horiba Jobin-Yvon, France). The laser beam was focused onto the sample using a 100× Olympus objective with a numerical aperture (NA) of 0.9, with the scattered light being collected in backscattering geometry. Back-reflected and elastically scattered light (the Rayleigh component) was filtered using an ultra-low-frequency (ULF) filter unit and then spectrally dispersed by a monochromator spectrometer with a grating of 1800 lines per mm. The light was detected by a charge-coupled device (CCD). The spectral dispersion of the setup is 0.35 cm⁻¹ per pixel. The spectral resolution at 650 nm (about 400 cm⁻¹) using the 1800 lines per mm grating is about 0.5 cm⁻¹. The precision in the determination of peak positions relative to neighboring Raman modes after line-shape fitting is better than 0.1 cm⁻¹ for non-overlapping modes. The sample was placed beneath the objective with the respective surface normal parallel to the direction of light propagation. A λ/2 wave plate in the excitation path was set at 0° or 45° to polarize the incident light parallel or crossed with respect to the scattered light, which was selected using a fixed polarizer in the detection path. Prior to each measurement, the Raman spectrometer was calibrated using the silicon T 2g peak at 520.7 cm⁻¹.
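For orientation, the absolute wavelengths quoted above map onto Raman shifts via the usual reciprocal-centimeter relation; with 632.816 nm excitation, scattered light at 650 nm corresponds to a shift of roughly 418 cm⁻¹, consistent with the quoted "about 400 cm⁻¹". A minimal sketch:

```python
def raman_shift_cm1(lambda_exc_nm, lambda_scat_nm):
    """Stokes Raman shift in cm^-1 from the excitation and scattered
    wavelengths (nm): 1e7 * (1/lambda_exc - 1/lambda_scat)."""
    return 1e7 * (1.0 / lambda_exc_nm - 1.0 / lambda_scat_nm)

print(round(raman_shift_cm1(632.816, 650.0)))  # 418
```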
Simulations of the lattice vibrations were performed within the framework of density functional perturbation theory (DFPT) on the level of the local density approximation (LDA), as implemented in the Quantum Espresso suite. 92 The Ga(3s,3p,3d) and O(2s,2p) states were treated as valence electrons using multi-projector optimized norm-conserving Vanderbilt (ONCV) pseudopotentials 93 from the Pseudo Dojo repository, 94 with a large plane-wave basis-set cutoff of 180 Ry. All reciprocal-space integrations were performed by a discrete k-point sampling of 7 × 7 × 7 points in the Brillouin zone. We fully optimized the atomic positions and cell parameters of the primitive cell of b-Ga 2 O 3 until the residual forces between atoms and the cell stress were smaller than 0.0025 eV Å⁻¹ and 0.01 GPa, respectively. The threshold for the total energy was set to 10⁻¹⁵ Ry, which ensured tightly converged interatomic forces for the geometry optimization and a tightly converged ground-state density and wavefunctions for the DFPT calculations. As a result of the intrinsic overbinding of the LDA exchange-correlation functional, our optimized lattice constants are underestimated by less than 1% compared to the experimental values (Table 1). On the other hand, it is well known that the LDA overbinding gives rise to typically very good agreement of the DFT phonon frequencies, 95 motivating our choice for the purposes of this work.
Results and discussion
The Raman-active phonon modes of b-Ga 2 O 3 in the 16 O and 18 O isotopologues (substrate and film, respectively) were acquired in polarized micro-Raman measurements, the results of which are depicted in Fig. 2a-c. ToF-SIMS experiments verified the dominant presence of 18 O in the homoepitaxial film: the isotope fraction is 96.3% and remains constant up to a depth of 1.5 µm from the surface (Fig. S2 in ESI †). At the interface between film and substrate, the isotope fraction drops over an intermediate region of 300 nm to 0.3%, which is slightly higher than the natural isotope abundance of 0.2%. The difference might be caused by diffusion during the deposition process at elevated temperatures. The depth at which the isotope fraction reaches 50% of its initial value marks the interface of the film, resulting in a film thickness of 1.65 µm (AFM micrograph of the layer reported in Fig. S1 of ESI †).
The MEXCAT deposition process 33,91,98 widens the deposition window of Ga 2 O 3 to otherwise forbidden growth regimes (e.g. the high T g and metal-to-oxygen flux ratios used in this work), while at the same time keeping the incorporation of the catalyzing element low.
The primitive unit cell of the monoclinic structure consists of 10 atoms: Ga lattice sites occur in tetrahedral (Ga I ) or octahedral (Ga II ) coordination, with three nonequivalent oxygen sites (O I , O II and O III ) in between. At the Γ-point, these correspond to 27 optical phonons: 60,63,66 Γ_opt = 10A_g + 5B_g + 4A_u + 8B_u, (2)
[Table 1: Lattice constants a, b, c and the monoclinic angle β obtained from DFT-LDA calculations in this work compared to experimental and theoretical data. The exchange-correlation functional applied in each theoretical work is specified.]
15 modes are Raman-active (index g), with symmetries A and B, and 12 modes are infrared (IR) active (index u). All 15 Raman-active phonon modes of symmetries A_g and B_g were separated by utilization of the measurement geometries illustrated in Fig. 2d. The sample was irradiated normally in a confocal setup (left) and on the edge (right) to access the (010) or (2̄01) plane, respectively. The choice of the (010) and (2̄01) planes is advantageous as it enables the selective detection of Raman modes with A_g or B_g symmetry and thus facilitates the separation of modes with closely matching phonon frequencies. For the (010) plane, B_g modes are forbidden by the Raman selection rules. For excitation normal to the (2̄01) plane, A_g modes have vanishing intensity for crossed polarization of incident and scattered light when the polarization of the incident radiation aligns with the [010] direction. 60,63 Fig. 2a illustrates the Raman spectra of the (010) plane in parallel polarization between incident and scattered light. Using the Porto notation, the applied scattering geometry can be written as z(yy)z̄, where z and y correspond to the [010] and [100] directions, respectively (cf. Fig. 1). 60,63,66 In this configuration, the A1g, A2g, A3g, A7g and A8g modes have maximum intensity.
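The mode count in Equation (2) can be checked against the 10-atom primitive cell (3N modes in total, minus 3 acoustic modes):

```python
# Sanity check of the phonon count: the 10-atom primitive cell gives
# 3*10 = 30 modes, of which 3 are acoustic and 27 optical, matching
# Gamma_opt = 10 A_g + 5 B_g + 4 A_u + 8 B_u.
atoms_per_cell = 10
optical_modes = 3 * atoms_per_cell - 3
decomposition = {"A_g": 10, "B_g": 5, "A_u": 4, "B_u": 8}
assert optical_modes == sum(decomposition.values()) == 27
raman_active = decomposition["A_g"] + decomposition["B_g"]   # gerade modes
print(optical_modes, raman_active)  # 27 15
```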
As we only investigated phonon modes accessible in Raman measurements, the index g denoting a Raman-active mode is omitted in the following discussion. The Raman spectra of the substrate with 16 O (dark yellow) and the epilayer with 18 O (blue) were recorded in a confocal set-up by moving the z-focus into the substrate (minimum signal contribution from the film) and into the film (maximum film contribution). By subtracting the two spectra from each other, we obtained a difference spectrum (green), in which the 18 O Raman modes dominate. For clarity, the spectra are vertically offset and the difference spectrum is scaled by a constant factor. 18 O Raman modes are labeled with * to distinguish them from 16 O modes.
Subsequently, the polarization vectors of both the incident and scattered radiation were rotated by 90° around the [010] direction. The x-axis and the [001] direction enclose an angle of 13.77° (Fig. 1), yielding the configuration z(xx)z̄, in which the A5, A6, A9 and A10 modes are maximal. The same procedures as above were applied to acquire the substrate, epilayer and difference spectra, which are displayed in Fig. 2b. The objective's large NA (0.9) implies a relaxation of the selection rules, as light is also collected from directions other than perpendicular to the (010) plane. Hence, the most intense B2 mode is weakly present in Fig. 2b.
To access the B modes, Raman scattering was performed in a cross-section configuration. The thickness of the film equals approximately twice the diffraction-limited extent of the laser spot of about 800 nm. In order to obtain the Raman spectrum of the homoepitaxial layer, we performed cross-section line scans with 200 nm step size. Based on these line scans, we selected two positions for long-integration Raman measurements: one at which the 18 O related Raman modes reach maximum intensity and a second one at which only substrate modes are visible. Subtracting the resulting spectra yielded the difference spectrum (green) in Fig. 2c.
The in-plane crystalline directions of the (010)-oriented sample were identified (i) by the presence of a rounded edge in the used substrate corresponding to the [102] crystalline direction (orthogonal to the (2̄01) planes, cf. Fig. 2d), (ii) by X-ray diffraction (XRD) 2θ- and φ-scans performed on different (010) crystals, and (iii) by the recognition of (110) facets visible as oriented lines along the [001] in-plane orientation in AFM micrographs (cf. Fig. S1b, ESI †). 91,98 Indeed, the correct probing of the (2̄01) planes was experimentally verified via Raman spectroscopy, which exhibited the mode intensity ratios expected for the (2̄01) planes (Fig. 2c). 60 We thus selected an edge whose surface plane corresponds to the (2̄01) plane. For normal incidence z′ (cf. Fig. 1) with crossed polarization in the z′(x′y′)z̄′ configuration (x′||[010], y′||[102]), the Raman selection rules predict vanishing intensity for the A modes and maximum intensity for the B modes. Due to the edge's imperfect preparation, A modes were still present, yet less intense. An intensity ratio of about 3 : 1 between the most intense B2 and A3 modes was achieved. All five B modes are hence available in this configuration.
The spectral positions of the Raman modes are listed in Table 2 for both O isotopes in conjunction with the results of DFPT calculations. A few modes deserve particular attention, as the determination of their spectral positions and relative frequency shifts is challenging due to small frequency shifts or overlapping modes. This applies to the low-frequency A₁, B₁, B₂ and A₂ modes, for which the frequency shifts are of the order of the respective mode's linewidth.
Furthermore, in the Raman spectra of the substrate (dark yellow) shown in Fig. 2c, the B₃ resides in the right shoulder of the A₅. As for the ¹⁸O distribution (green), the two modes are superimposed in a joint, widened peak. With the A₅* position derived from Fig. 2b, we analysed this peak by keeping the position of the A₅* fixed and varying the position of the B₃* until the intensity ratio A₅*/B₃* was equal to the intensity ratio A₅/B₃. In the ¹⁶O substrate spectrum the B₅ is closely neighbored by the A₉. The B₅* envelope in the ¹⁸O difference spectrum is composed of three individual modes: in addition to the B₅*, the A₉* as well as the A₈ mode from the substrate lie in close proximity. The intensity of the A₈ can be regarded as negligible due to its suppression in the difference spectrum. In analogy to the analysis of the B₃* spectral position, the B₅* Raman frequency is obtained by keeping the A₉* position determined from the difference spectrum in Fig. 2b fixed and varying the B₅* position until obtaining equal intensity ratios for A₉*/B₅* and A₉/B₅. Distinguishing the A₇ and B₄ modes in both isotope compositions has been a formidable task in the literature, as the two modes are located at nearly the same frequency. With the exception of one publication,66 previous experimental works have usually reported both modes at the same frequency or have only assigned one mode,60,63,64 whereas theoretical works have calculated a frequency difference ranging from 0.3 to 13.4 cm⁻¹ between these two modes.60,63–65 Using a (2̄01)-oriented β-Ga₂O₃ sample as reference, we obtained an intensity ratio B₄/B₂ > 1/28 between the B₄ and B₂ modes in the z′(x′y′)z̄′ configuration. Using this intensity ratio, we conclude that the B₄ and B₄* will contribute more than 70% to the total intensity of the peaks occurring at 475.9 cm⁻¹ and 453.6 cm⁻¹.
Hence, we assign these peaks to the B₄ and B₄*, respectively. Analogously, the peak positions of the A₇ and A₇* are determined from the analysis of the (010) spectra in Fig. 2a, in which the B₄ and B₄* intensities are negligible. Consequently, we obtain a previously unresolved mode spacing between the A₇ and B₄ of 0.6 cm⁻¹ and 0.3 cm⁻¹ for the ¹⁶O and ¹⁸O isotopologues, respectively.
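The ratio-matching deconvolution used above for the overlapping peaks (hold the known A* center fixed, vary the B* center until the fitted intensity ratio reproduces the ¹⁶O reference ratio) can be sketched as a one-parameter grid search. The toy version below uses synthetic Lorentzian line shapes; all positions, widths and intensities are illustrative placeholders, not the measured Ga₂O₃ values.

```python
# Toy ratio-matching deconvolution of two overlapping peaks. The A* center is
# held fixed, candidate B* centers are scanned, and the center whose fitted
# height ratio matches the known reference ratio is kept.

def lorentzian(x, x0, gamma):
    """Unit-height Lorentzian line shape centered at x0."""
    return gamma ** 2 / ((x - x0) ** 2 + gamma ** 2)

A_POS, B_POS_TRUE, GAMMA = 320.0, 316.0, 1.5     # cm^-1, illustrative
A_INT, B_INT = 1.0, 0.5
TARGET_RATIO = A_INT / B_INT                     # reference A/B ratio (16O)

xs = [300.0 + 0.05 * i for i in range(800)]      # spectral grid
data = [A_INT * lorentzian(x, A_POS, GAMMA)
        + B_INT * lorentzian(x, B_POS_TRUE, GAMMA) for x in xs]

f1 = [lorentzian(x, A_POS, GAMMA) for x in xs]   # fixed A* basis function
s11 = sum(v * v for v in f1)
y1 = sum(u * d for u, d in zip(f1, data))

best_pos, best_err = None, float("inf")
for j in range(121):                             # scan candidate B* centers
    b_pos = 312.0 + 0.05 * j
    f2 = [lorentzian(x, b_pos, GAMMA) for x in xs]
    s22 = sum(v * v for v in f2)
    s12 = sum(u * v for u, v in zip(f1, f2))
    y2 = sum(v * d for v, d in zip(f2, data))
    det = s11 * s22 - s12 * s12                  # normal-equation determinant
    a = (y1 * s22 - y2 * s12) / det              # fitted A* height
    b = (y2 * s11 - y1 * s12) / det              # fitted B* height
    err = abs(a / b - TARGET_RATIO) if b != 0 else float("inf")
    if err < best_err:
        best_err, best_pos = err, b_pos

print(f"recovered B* center: {best_pos:.2f} cm^-1 (true: {B_POS_TRUE})")
```

At each candidate center the two peak heights are obtained by linear least squares (2×2 normal equations), so only the unknown center needs to be scanned; the candidate that reproduces the reference intensity ratio recovers the true B* position.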
Following this detailed analysis, we were able to determine the spectral positions of all 15 Raman-active phonon modes in the ¹⁶O and ¹⁸O isotopologues of β-Ga₂O₃ (summarized in Table 2).
While a slight change in the oxygen mass does not affect the formation of point defects during the MBE growth, altering the mass of one of the two elements of a binary oxide induces a shift of those Raman modes in which atomic vibrations of the respective element are present. Owing to the larger relative mass difference between ¹⁶O and ¹⁸O compared to the stable Ga isotopes, oxygen isotopes produce a larger frequency shift and are therefore preferably used to study the variation of phonon frequencies in different isotopologues.99 The observed shift of Raman modes towards lower frequencies upon substitution of ¹⁶O with ¹⁸O corresponds to the increase in isotopic mass.99,100 Table 2 further lists the absolute and relative frequency shift for each mode. Based on the data displayed in Table 2, Fig. 3a depicts the experimentally (blue) and theoretically (green) determined relative mode frequency shifts for all 15 Raman-active modes. Errors in the experimental data originate from the uncertainties in the determined peak positions as described above. A qualitative agreement between experimental and theoretical data is apparent, with a slight overestimation of the frequency shifts of the majority of modes by the DFPT calculations.
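For a mode carried entirely by oxygen motion, the harmonic approximation predicts ω ∝ 1/√m, so the expected relative downshift on substituting ¹⁶O by ¹⁸O is 1 − √(16/18) ≈ 5.7%, an upper bound that O-dominated modes approach. A minimal check of this arithmetic (assuming a pure-oxygen mode and no anharmonic corrections):

```python
import math

def isotope_shift_percent(m_light: float, m_heavy: float) -> float:
    """Relative downshift (%) of a harmonic mode frequency, omega ~ 1/sqrt(m),
    when the vibrating mass changes from m_light to m_heavy."""
    return (1.0 - math.sqrt(m_light / m_heavy)) * 100.0

# Pure-oxygen mode, 16O -> 18O substitution
shift = isotope_shift_percent(16.0, 18.0)
print(f"{shift:.2f}%")  # -> 5.72%
```

Modes with mixed Ga and O character shift less than this bound, in proportion to how much of the mode energy resides on the oxygen sites.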
The analysis of the experimental Raman mode frequency shifts in Fig. 3a reveals several noteworthy results: (i) the relative shift strongly varies between the different modes, with the smallest and largest shifts of 0.43% for the A₁ and 5.47% for the B₃, respectively; (ii) low-energy phonons between 110 and 170 cm⁻¹ exhibit weak frequency shifts below 1.03%, whereas high-frequency phonons with Raman shifts above 350 cm⁻¹ experience large relative shifts close to 5% upon O isotope substitution; (iii) phonons with wavenumbers between 200 and 350 cm⁻¹ show intermediate relative shifts, which do not scale linearly with increasing phonon energy. Fig. 4 illustrates a scheme of the Raman-active A₁, A₅ and A₁₀ modes as representatives of the low-energy phonons, phonons of intermediate energies and high-energy phonons. Modes of A symmetry oscillate within the (010) plane, with arrows indicating the amplitude of vibration. A scheme of all Raman-active phonon modes is presented in Fig. S3 in the ESI.† In order to explain the reasons for observations (i)–(iii), we calculate the relative energy contribution of the three oxygen (O_I, O_II, O_III) and two gallium (Ga_I, Ga_II) lattice sites to the total phonon energy for each mode (Fig. 3b). By treating the individual atoms as harmonic oscillators, the displacement of atom k in the x-direction is given by x_k(t) = A_k sin(ωt), where A_k denotes the vibration amplitude of atom k, ω is the frequency at which all atoms contributing to a specific Raman mode vibrate, and t is the time. The Raman frequencies ω as well as the amplitudes A_k of the corresponding atomic vibrations are readily available from our DFPT calculations. In this approximation, each atom's kinetic energy is proportional to the product of the atom's mass and the square of the absolute value of its amplitude, E_k ∝ m_k|A_k|².
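The relative energy contribution of each lattice site then follows by normalizing the per-atom kinetic energies m_k|A_k|² over all atoms of the mode. The sketch below illustrates the bookkeeping; the amplitudes are made-up placeholders, not the DFPT values.

```python
# Relative contribution of each lattice site to a mode's kinetic energy,
# E_k ~ m_k * |A_k|^2, normalized over all atoms in the mode.
# Amplitudes below are illustrative placeholders, not DFPT results.
masses = {"Ga_I": 69.72, "Ga_II": 69.72, "O_I": 16.0, "O_II": 16.0, "O_III": 16.0}
amplitudes = {"Ga_I": 0.02, "Ga_II": 0.03, "O_I": 0.10, "O_II": 0.08, "O_III": 0.12}

energies = {site: masses[site] * amplitudes[site] ** 2 for site in masses}
total = sum(energies.values())
contributions = {site: 100.0 * e / total for site, e in energies.items()}

for site, pct in contributions.items():
    print(f"{site}: {pct:.1f}%")
o_total = sum(pct for site, pct in contributions.items() if site.startswith("O"))
print(f"O total: {o_total:.1f}%")
```

A mode with a large oxygen share of this sum is expected to shift strongly upon ¹⁶O → ¹⁸O substitution, which is the correlation exploited in Fig. 3b.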
As a result, the energy stored in a respective Raman mode is determined by the sum of the individual kinetic energies of all contributing atoms. The smallest relative frequency shifts, seen in the low-frequency A₁, B₁, B₂ and A₂ modes, correspond to a comparatively low energy contribution from O lattice sites to the respective modes. An increased frequency shift relative to the low-frequency modes is distinctive of the A₃, A₄, A₅ and A₆ modes. O lattice sites are expected to contribute 61% to the energy of the A₃, which is reflected in an observed relative frequency shift of 3.32%. Conversely, the lower frequency shifts of 1.54% and 2.74% for the A₄ and A₅, respectively, stem from a less prominent energy contribution from O lattice sites. Oxygen lattice-site vibrations are predominant in the B₃ and in the remaining high-frequency modes from the A₇ onward, varying between 81% for the B₄ and 99% for the B₃. Correspondingly, these modes exhibit the greatest mode frequency shifts, with the largest shift observed for the B₃.
Finally, we address the impact of oxygen isotope substitution on the three distinct oxygen lattice sites. A direct comparison of the observed mode frequency shifts and the phonon energy contributions of the individual lattice sites indicates that the size of the relative frequency shift depends on which of the three O lattice sites contributes to the vibrational mode. The experimental data indicate that replacing ¹⁶O with ¹⁸O on the O_III lattice site results in a larger mode frequency shift than on the O_I and O_II sites. As for the A₃ and A₆, oxygen vibrations account for 61% and 63% of the mode energy, respectively. However, the A₆, characterized by a more intense contribution from O_III lattice sites and a reduced contribution from O_II sites, has an increased relative frequency shift. Equally, oxygen vibrations account for 86% in the A₇ and A₈ and for 89% in the A₉. Yet, the observed frequency shift is most pronounced for the A₈, where O_III lattice sites prevail. As for the B₅, the O energy contribution is comprised entirely of O_III sites. Despite a lower overall oxygen contribution (82%) compared to the A₇ and B₄ modes, the B₅ is characterized by an increased relative frequency shift. While the exact values of the computed relative energy contributions may vary slightly depending on the employed XC functional, the above LDA-based conclusions also hold true for PBE, PBEsol and opt-B88.
Conclusions
In summary, we examined a homoepitaxial β-Ga₂O₃ thin film in the ¹⁸O isotope composition, deposited on top of a substrate with ¹⁶O isotope distribution. Polarized micro-Raman spectroscopy was carried out to record Raman spectra of both isotopologues. By probing the (010) and (2̄01) planes, we were able to separate all 15 Raman-active phonon modes and determine their phonon frequencies for both isotopologues. The replacement of ¹⁶O with the heavier ¹⁸O atoms resulted in a reduction of the phonon frequencies. We provided the spectral positions of the ¹⁸O Raman modes and quantified the absolute as well as relative frequency shifts compared to the ¹⁶O Raman modes. Based on the comparison of experimental data with DFPT calculations of mode frequency shifts and on the calculation of the relative energy contributions of each lattice site to the total phonon energy, we identified and quantified the atomistic vibrations of each of the five distinct lattice sites which give rise to the 15 different Raman-active phonon modes in β-Ga₂O₃. A large relative frequency shift seen in the modes of higher frequencies is emblematic of modes governed predominantly by O lattice-site vibrations. In contrast, the low-frequency A₁, B₁, B₂ and A₂ modes exhibit the smallest overall shifts and hence are governed by the vibration of both Ga and O atoms. By determining dedicated Raman modes dominated by one of the three inequivalent O sites and relating their relative frequency shifts to the calculated relative amount each lattice site contributes to a respective mode's energy, we conclude that substituting ¹⁶O with ¹⁸O isotopes on the O_III lattice site results in an elevated mode frequency shift compared to the O_I and O_II lattice sites. This discovery paves the way for the identification of O-site-related defects by Raman spectroscopy in future studies.
Furthermore, micro-Raman spectroscopy may be carried out to investigate the formation of oxygen vacancies in different lattice sites as a function of different synthesis (e.g. deposition, annealing) conditions.
Conflicts of interest
There are no conflicts to declare.

{
"year": 2021,
"sha1": "0499d5e4cb89ff1694bc2be2fe80090834ac9fd9",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2021/tc/d0tc04101g",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f5bc66b35637530fb8aec5dcd9a12ebc1df0bc1d",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
}
Changes in blood catecholamines during induction of general anesthesia in patients with post-induction hypotension undergoing laparoscopic cholecystectomy: A single-center prospective cohort study
Background Post-induction hypotension (PIH) often occurs during general anesthesia induction. This study aimed to investigate blood catecholamine levels during induction of general anesthesia in patients with PIH undergoing laparoscopic cholecystectomy. Methods This prospective study included 557 adult patients who underwent laparoscopic cholecystectomy under general anesthesia. PIH was defined as a greater than 20% decrease in systolic blood pressure from the pre-induction value, a systolic arterial pressure of less than 90 mmHg, or both. Plasma concentrations of epinephrine and norepinephrine during the induction of general anesthesia were determined using enzyme-linked immunosorbent assay. Multivariate logistic regression analysis evaluated the association between clinical factors and PIH. Results Of the 557 patients, 390 had PIH, and the remaining 167 were allocated to the non-PIH group. Changes in blood adrenaline levels, noradrenaline levels, or both were more pronounced in the PIH group than in the non-PIH group (p<0.05). Age, body mass index, a history of hypertension, preoperative systolic blood pressure, and propofol or sufentanil dose were independent predictors of PIH. Conclusion The changes in blood catecholamines in patients with more stable hemodynamics during the induction of general anesthesia are smaller than those in patients with post-induction hypotension. Trial registration ChiCTR2200055549, 12/01/2022.
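The PIH criterion stated in the Methods maps directly to a simple predicate. The helper below is a hypothetical illustration of that definition (function name and structure are ours, not code from the study):

```python
def is_pih(baseline_sbp: float, post_induction_sbp: float) -> bool:
    """Post-induction hypotension per the study definition: a >20% drop in
    systolic blood pressure from the pre-induction value, an absolute SBP
    below 90 mmHg, or both."""
    relative_drop = (baseline_sbp - post_induction_sbp) / baseline_sbp
    return relative_drop > 0.20 or post_induction_sbp < 90.0

print(is_pih(130, 100))  # True: ~23% relative drop
print(is_pih(120, 100))  # False: ~17% drop and SBP still >= 90 mmHg
print(is_pih(110, 89))   # True: absolute SBP below 90 mmHg
```

Note that the two criteria are independent: a patient can meet the relative-drop threshold while remaining above 90 mmHg, or fall below 90 mmHg with a drop under 20%.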
(Protocol version 1.0, version date 2020-09-07.)

In strict accordance with the inclusion and exclusion criteria, after signing of the informed consent form, the anesthesiologist will measure the baseline blood pressure of the patient before the operation; non-invasive arterial blood pressure will be used to measure the baseline blood pressure before induction. At the same time, the height, weight, BMI, ASA grade, basic diseases, and other characteristics of each patient were obtained from the clinical digital anesthesia record sheet. Radial artery blood was collected from all patients at three time points: before anesthesia induction (T0), after anesthesia induction (T1), and after tracheal intubation (T2). The blood samples of 80 patients were randomly selected for examination, and the changes of blood catecholamines before and after induction were observed, combined with the characteristics of the patients before and after the operation and the anesthetic use during the operation, so as to understand the relationship between post-induction hypotension and the above indicators and provide useful information for disease research.
Observation indicators
(1) The occurrence of hypotension after induction.
(2) Heart rate, systolic blood pressure and diastolic blood pressure (perioperative period).
(3) Changes of blood catecholamines in some patients before and after anesthesia induction.
(4) Use of narcotic drugs.
4. Adverse event observation
4.1 Possible adverse events
(1) When blood samples are taken, transient palpitations, cold sweats and other dizziness reactions may occur.
(2) Postoperative anxiety due to pre-induction evaluation and blood sampling.
Risk prevention and treatment
(1) Pay close attention to the patient's vital signs during sampling, and seek the help of a professional psychologist at any time to avoid stress reactions. (2) Ask the physical and mental health doctor to conduct psychological counseling, allowing the patient to confide and vent, to avoid aggravating anxiety.
Review by the Ethics Committee
The protocol, the written informed consent form and the data directly related to the subjects were submitted to the ethics committee, and the study was officially carried out after obtaining the written approval of the ethics committee. The Ethics Committee will be informed in writing when the study is suspended or completed. All changes in the research work (such as revisions of the protocol or informed consent form) will be reported to the Ethics Committee in a timely manner, and these changes will not be implemented without the approval of the Ethics Committee, unless they are made to eliminate obvious and direct risks to the subjects. In such cases, the Ethics Committee will be informed.
6.2 Informed consent
6.2.1 Procedures for obtaining informed consent
The researcher must provide the subject or his legal representative with an easy-to-understand informed consent form approved by the ethics committee, and give the subject or his legal representative sufficient time to consider the study. The subject shall not be enrolled until the signed written informed consent form is obtained from the subject. During participation, all updated informed consent forms and written information will be provided to the subjects. The informed consent form shall be kept as an important document of the clinical trial for future reference.
7. Confidentiality measures
The results of the research of this project may be published in medical journals, but we will keep the patients' information confidential according to the requirements of the law. Unless required by relevant laws, the patients' personal information will not be disclosed. When necessary, the government management department, the hospital ethics committee and its relevant personnel can consult the patients' information according to the regulations.
8. Expected progress and completion date of the research project
February–November 2022: complete the collection of formal experimental data.
November 2022–January 2023: summarize data and write papers.
January–February 2023: paper revision, submission and publication.
5.1 Inclusion of sample content
Patients who underwent simple laparoscopic cholecystectomy from February 2022 to November 2022 in the General Department of Hepatobiliary Surgery and Surgery of the Affiliated Hospital of Southwest Medical University were included.

5.2 Statistics and analysis of research data
Statistical analysis was performed using R 4.0.5. Categorical variables are expressed as counts and percentages (%), while continuous variables are expressed as mean ± standard deviation or median and interquartile range. The independent-sample t-test was used to compare parametric values between the two groups, the Mann-Whitney U-test was used to compare non-parametric values, and the chi-square test was used to compare categorical variables. Variables with a p-value ≤ 0.1 were included in the multivariate logistic regression model to identify independent risk factors, and a nomogram of the prediction model was then constructed. All tests were two-sided, and the significance level was set at P < 0.05. The ROC curve was drawn and the area under the curve (AUC) calculated to evaluate discrimination. The calibration of the prediction model was then evaluated by the Hosmer-Lemeshow test and the calibration curve, and finally decision curve analysis (DCA) was used to evaluate the clinical net benefit.
Congruity of Virtual Reality In-Game Advertising
With virtual reality (VR) video game users beginning to see beta advertisements within game play, this conceptual article adds a needed digital and interactive marketing research foundation to the new construct of VR in-game advertising (IGA) activation. New consumer VR technology continues to disrupt traditional media as a $7.7 billion USD industry that is expected to reach $57.55 billion by 2027. As such, marketing researchers must continue to evolve and understand the interdisciplinary VR research evolution, as many VR users are likely to view IGA as intrusive. IGA and VR are not new constructs, but IGA within VR is unique, as VR consumers have not yet experienced VR IGA intrusion. This article utilizes a sport marketing focus to provide an industry-specific set of examples for the reader; however, it can be applied to broader fields including communications and interactive marketing. The main contributions of this article are twofold. First, a VR In-Game Advertising Congruity Framework is developed through a review of the literature and its application to VR IGA in the topical areas of congruity of the IGA, interactivity of the IGA, intrusiveness of the IGA, realism of the experience, telepresence, brand awareness, and attitude toward the IGA. Second, a proper VR-context definition of telepresence is provided through a review of the literature that takes into account the interaction of a VR participant. This article aims to aid marketers in making informed IGA development decisions through strategic choice, via a centralized VR IGA congruity framework, that not only enhances brand awareness but leaves participants with a favorable attitude toward the IGA to increase sales activation.
INTRODUCTION
The purpose of this conceptual article is to add a much needed timely digital and interactive marketing research foundation to virtual reality (VR) in-game advertising (IGA). In July 2021 people with Oculus Quest headsets are scheduled to start seeing beta advertisements in virtual reality (Olson, 2021). IGA is not a new construct, but IGA in VR is as unique as this conceptual analysis since VR consumers have not yet experienced VR IGA intrusion. As this new consumer VR technology continues to disrupt traditional media, marketing researchers must continue to evolve and understand the interdisciplinary VR research evolution. This work uses a sport video game context to examine the emerging VR gaming industry, the user intrusion anticipated by IGA, and the VR IGA's sport marketing ramifications across disciplines such as advertising.
Due to the novelty of VR as well as the importation of ideas from fields other than sport marketing, there are numerous conceptual and definitional inconsistencies in the literature around terms such as presence, spatial presence, and telepresence along with elements related to the immersive characteristics of VR technology. This concept is practically relevant to the design and evaluation of media products and human-computer interactions, such as VR games and IGA. For this reason, scholars have paid much attention to presence, spatial presence, and telepresence and suggested various definitions since the mid-90s (Lee, 2004). One major goal of this work is an attempt to provide some clarity of common IGA terms within the VR context as consumer experience VR IGA for the first time in 2021 in order to enhance successful communication among scholars. Topical areas of congruity of the IGA, interactivity of the IGA, intrusiveness of the IGA, realism of the experience, telepresence, brand awareness, and attitude toward the IGA are unraveled within the literature. Congruity was identified as the centralized variable of the VR IGA congruity framework based on the topical VR context with IGA referring to the compatibility, agreement, or harmony between the elements of the game and of the VR advertisement. Two main contributions of this article are the development of a VR In-Game Advertising Congruity Framework (see Figure 1) and a proper telepresence definition through review of the literature with a VR active participant context.
It is important to note that augmented reality (AR), digital overlay on top of a real-life first person view, often goes hand in hand with VR discussions (i.e., AR/VR) and that this article focuses solely on VR technology and research opportunities for VR marketing exploration. VR uses computing technology but its technical properties far exceed those of a computer, giving it potentially superior applications in many fields. As a medium of work and entertainment, VR (like the television and the computer, powered by the Internet) has great potential as a sport marketing medium that brings advertisers in contact with target "eye balls, " only VR is more interactive and more immersive.
Just like advertisers have targeted sport video games as an outlet for showcasing their products and services, VR holds even more promises, only if advertisers can get assurance that they are properly and adequately engaging their target audience. Brand awareness is therefore a key goal that advertisers want to achieve in product placement (Karrh et al., 2003), which in the case of VR is through IGA. In addition to providing greater clarity to the concept of IGA within VR, this study leverages on the state achieved by the participant to analyze the VR user's attitude toward the IGA with a view toward proposing and contributing effective evaluation of interactive marketing initiatives within VR game environments. This improved knowledge of the participant experience, within the VR gaming environment, aims to assist sport marketers in making informed IGA development decisions through strategic choice that not only enhances brand awareness, but leaves participants with a favorable attitude toward the IGA to increase sales activation.
LITERATURE REVIEW
In this study, a sport VR In-Game Advertising Congruity Framework is developed and proposed based on an extensive review of literature to guide researchers and marketers in the sport VR gaming industry (see Figure 1). Each variable of this conceptual framework is developed and supported within the literature review below for future empirical research lines and practical marketer application. This research team believes that insufficient attention has been dedicated to the relationship between sport video games along with sport consumer behavior and this is especially evident in the sport VR video game setting. This is despite the fact that the sport video game industry is a growing, multi-billion dollar industry with significant recent growth in the sport VR area (Hong and Magnusen, 2017). For instance, the most popular sport video game in the world, FIFA, generated over $1.6 billion in revenue for EA Sports in the 2020 and this accounts for over 25% of the total revenue for the company (Murphy, 2021). The majority of this revenue came from FIFA Ultimate Team and a FIFA VR game for PlayStation 5 that has been released after years of development. Consequently, a sport-related VR gaming environment is utilized to discuss the variables of congruity of the IGA, interactivity, intrusiveness, realism, telepresence, brand awareness, and attitude toward the IGA throughout the general business and advertising literature.
Congruity of the IGA
Advertisers desire that, notwithstanding the primary task the player is involved in, they may notice and remember the in-game ad. Kim and Ko's (2019) limited-capacity model of attention suggests that it may however be a challenge for players to focus equally on the game as well as on the IGA. Game players can be selectively focused (Kahneman, 1973) or more or less intensely focused (Olshavsky, 1994), and since these resources can be exhausted, cognitive resources would have to be freed from a primary task in order for spare capacity to be available for a secondary task.
Congruity may influence the ease with which players are able to share attention in this way. Lee and Faber (2007) further suggest that the interplay between primary task capacity and spare capacity may further explain how product placement in games affects brand memory. Congruity in a digital media context refers to the harmony between the elements it contains. These elements include visual or verbal elements of an IGA (Heckler and Childers, 1992), an IGA and the in-game context (Moorman et al., 2007), or harmony between the ad sponsor and the event being sponsored (De Pelsmacker et al., 2019).
The causes of the perceived congruity can be intrinsic to the game, a property of the IGA, or located in the user. Congruity may also be an aspect of the game structure, and thus can be perceived in the simulation, gameplay or game narrative (Verberckmoes et al., 2016). IGA congruity may also be thematic, that is, harmony between the game elements and the IGA (Lee and Faber, 2007), such as displaying an IGA of a sport-related brand inside a sport VR game, or advertising an energy drink within a space game, where the player's need for virtual energy may be a cue to the physical body's need for energy from an energy drink. There is research suggesting that congruent information is better remembered than incongruent information (Lambert, 1980; Moorman et al., 2007), the argument being that when people encounter new information, they try to match it with a preexisting schema. If the two information sets match, the new one will be absorbed into the old schema and thus be more easily remembered.
Other authors argue the contrary: that incongruent information is more likely to be remembered because, not matching pre-existing information or schemas, it tends to stand out and is thus more easily remembered (Heckler and Childers, 1992; Russell, 2002; Dimofte et al., 2011). Despite the differences, the mechanism of the effect on memory seems to be based on both the degree of congruity and the amount of cognitive processing resources available to the individual.
We suggest that the same mechanism may explain the effect of IGA within VR on brand memory and awareness. What is at issue here, though, is the effect that this attention dispersal might have on the player based on their perception of harmony in the gameplay experience, and hence on perceptions of congruity within the VR environment. A positive gameplay experience, such as a low cognitive load, aesthetic pleasure derived from good graphics design, or the absence of vestibular disturbances like motion sickness, would contribute to a mental, affective, and physical perception of congruence by the player.
Interactivity of the IGA
Interactivity is the degree to which users can change the form and content of a mediated environment in real time (Steuer, 1992). Interactivity in the context of VR video games is how actively involved the player is in the activities and outcome of the interaction. Players are thus able to modify their in-game experience and change the course of the game. For instance, a player in a VR auto racing game may choose a car with specific attributes, such as a higher top speed but slower acceleration; choices that can change their playing experience (Herrewijn and Poels, 2014). IGA marketers can also take advantage of the possibilities and expectations of interactivity in VR to have players be more actively involved with their ads, which can be made a part of the gameplay experience (Nelson et al., 2004; Lee and Faber, 2007; Lee et al., 2014). For example, a player may be able to customize their in-game experience within a football game by picking a branded uniform for their team, or a player in a NASCAR-themed racing game may be able to virtually drink a Monster Energy drink to enhance their in-game focus.
Previous research has demonstrated that this IGA brand interaction typically leads to positive outcomes for brands and game developers, as users want to play games with interactive advertising more often and may exhibit a higher level of brand awareness than players exposed to passive, non-interactive IGA (Wu and Liu, 2007; Wu et al., 2008; De Pelsmacker et al., 2019). Papadopoulos (2020) found that VR gamers had a higher level of recognition of both familiar and unfamiliar brands when they had brand interactions within a VR environment. In short, participant interactivity with the IGA enhances brand awareness, as opposed to the lower level of brand awareness for participants merely "viewing" or being "exposed to" the IGA.
Research has also shown interactivity to be a multidimensional construct, though there is as yet little consensus in the literature regarding the dimensions of interactivity as they relate to IGA (Steuer, 1992; Gao et al., 2009). Liu and Shrum (2002) identified three dimensions of interactivity related to advertising: active control, two-way communication, and synchronicity. These three dimensions have been included in multiple mobile and VR video game studies, and Gao et al. (2009) added connectedness, playfulness, and interpersonal communication as additional dimensions of interactivity related to mobile video game IGA.
Intrusiveness of the IGA
According to previous research, a non-congruent IGA is likely to be intrusive. An intrusive IGA will augment the experience of incongruence, which produces an outcome that increases the likelihood of the IGA being noticed, stored in memory and recalled. Intrusiveness is defined as "the degree to which advertisements in a media vehicle interrupt the flow of an editorial unit" (Ha, 1996, p. 77). Editorial units in the digital media context should be broadly defined to include all media types including VR sport games.
We define intrusiveness in the context of VR video games as the degree to which advertisements embedded in the game do not conform with the digital medium's reality and/or impact the participant's VR experience. For instance, a player in a VR soccer game may expect to see advertisements embedded around the arena matching the real-world arena. However, the player may not expect there to be additional advertisements in nontraditional places such as on the goal posts and may place virtual blame on the IGA for the intrusive experience. With the unrestricted nature of VR game environments providing an unparalleled amount of advertising inventory, advertisers have an opportunity to push the envelope while not incurring intrusiveness "that occurs when an audience's cognitive processes are interrupted" (Li et al., 2002, p. 39). As such, enhanced brand awareness and recall of an intrusive IGA may produce a negative attitude toward the IGA resulting in decreased purchase intentions and actual purchase.
Marketers must be cognizant of new VR advertising inventory, but wary of the temporal, visual, and flow characteristics of advertisements that consumers find intrusive, so that negative responses are minimized (Riedel et al., 2018). For example, it would be realistic in a VR basketball game to have the participant's players experience fatigue tied to time between whistles and hydration. This presents an opportunity for advertisers to activate potential sponsors such as Gatorade, where the participant has a chance to enhance their player's strength during timeouts by giving them Gatorade instead of water. However, this brand activation comes with a price and may be considered intrusive if not properly embedded, or if the enhanced stamina is only available after watching a 10-second commercial. It is worth noting that Poels et al. (2013) found a positive relationship between game players' attitude toward IGA intrusiveness and IGA context authenticity. As such, in this context we observe a parallel between intrusiveness and a lack of congruity: what is incongruous can be so because it is intrusive, and likewise, what is intrusive produces incongruity.
Realism of the Experience
IGAs congruent in function, lifestyle, image, or advertising add to the realism of the game experience. Due to a natural harmony between components of the IGA and the game environment, the senses are made to believe that the VR experience is real rather than simulated. Realism is the element of VR media that receives the most attention, yet it remains comparatively limited in empirical research. With realism being the driving industry force behind VR game development and advertiser activation, perceived realism should be a key focus, as it can influence mental processing of media messages, attitudes, and behavior, in some cases intensifying effects (Potter, 1988). Krcmar et al. (2011) found that video game realism correlates with attention retention outcomes. As such, integrated VR advertisements with improved graphics and enhanced graphical realism may be more salient to players and thus lead to more identification and a greater sense of being "in" the game experience (Tamborini, 2000). For example, a participant playing a VR basketball game in a crystal-clear VR environment should see a realistic in-game advertising experience complete with current jersey sponsors and in-arena sponsor signs. More specifically, a gamer playing an NBA VR game with the Denver Nuggets should see Western Union prominently placed on the players' jerseys, as this is the team's current sponsor at the time of writing. Gamers demand realism in their gaming experience, and this sense of realism may have a significant impact on the brand awareness of sponsoring companies.
Authenticity has also been found to impact participants' perceptions of game play realism (Malliet, 2006). Hall (2003) stated that events or behaviors are defined as plausible when they have "the potential to occur in the real world" (p. 629). Tavinor (2019) found that participants' sense of being within the world, the realistic appearance of the world's environments, and the feeling of joy, anxiety, and fear provoked by the events depicted, all make for a greater impression in VR. As such, sport VR game advertisers must place importance on exposing participants to a higher degree of visual realism, as visual realism experience has been correlated to a stronger sensation of presence (Hvass et al., 2017).
Telepresence
Telepresence is a term derived from presence; it refers to "the mediated perception of an environment," whereas presence is "the natural perception of an immediate physical environment" (Steuer, 1992, p. 6). That is, contrary to the experience of "being here," telepresence is the experience of "being there," or being in a parallel space through a specific computer-mediated vehicle (Faiola et al., 2013). This definition has guided a stream of studies as researchers have adopted telepresence within their work in the fields of VR, marketing communication, web uses, and consumer behavior (e.g., Klein, 2003; Nah et al., 2011; Hyun and O'Keefe, 2012; Kim and Ko, 2019).
Lombard and Ditton's (1997) conceptual study of presence has guided several presence and telepresence research studies. They defined presence as "the perceptual illusion of nonmediation," treating it as the same concept as telepresence. That is, when the individual feels a sense of presence, the individual does not perceive or acknowledge the existence of a medium in his/her communication environment and responds as he/she would if the medium were not there, in a form of illusion of nonmediation. They conceptualized presence using six aspects: social richness, realism, transportation, immersion, social actor within medium, and medium as social actor. This multi-dimensional view of presence has provided a conceptual foundation for telepresence studies and now needs clarification for VR adoption.
When applying Lombard and Ditton's (1997) conceptualization of presence, researchers have defined telepresence with a focus on a different dimension (e.g., transportation, immersion, etc.) or used different subscales to reflect their contexts. Lombard and Ditton argued that among the aspects of presence, transportation has the longest history and has been "often used in discussions of VR, which takes users to virtual environment and leads to the suspension of disbelief that they are in a world other than where their real bodies are located" (Slater and Usoh, 1993, p. 222). For this reason, the transportation aspect has taken a central role when defining telepresence (e.g., Held and Durlach, 1992; Biocca and Levy, 1995; Steuer, 1995).
As various media technologies have advanced, so has the level of immersion. Therefore, several researchers have stressed immersion as a dimension of telepresence. Mollen and Wilson (2010) proposed that telepresence is characterized by cognitive and sensory arousal, control, and immersion. In their conceptual work, immersion referred to perceiving oneself to be steeped in and interacting with an environment that sustains a continuous stream of stimuli and experiences. Furthermore, some researchers have even used telepresence and immersion interchangeably (e.g., McGloin et al., 2011, 2013, 2015; Nelson et al., 2013), furthering the need for a proper VR-based definition of telepresence. In a game-based learning study, Faiola et al. (2013) highlighted the role of telepresence in enhancing a user's sense of being totally immersed in a virtual space. They suggested that virtual world users often feel completely immersed in the interactivity of a game, losing their sense of time while feeling a heightened sense of pleasure, or what has been considered the gamers' optimal experience.
Although the work of Steuer (1992), Kim and Biocca (1997), and Lombard and Ditton (1997) has primarily guided telepresence research, it is critical to revisit how to define telepresence in the head-mounted display (HMD) VR gaming setting due to its fast technological advancement, growing market demand, and its applicability in areas beyond VR sport games, such as training, entertainment, education, and health treatment. Previous studies have focused on different aspects of telepresence in line with Lombard and Ditton's (1997) work, but this inconsistency has caused substantial confusion among researchers. Consequently, we need to reconsider the definition of telepresence as it applies to the HMD VR context.
When defining telepresence in the application of HMD VR, we need to pay attention to the notion that "being there" as a spectator is not the same as "being there" as an actor (Kim and Biocca, 1997; Klein, 2003; Nelson et al., 2013). In the current upward trend of highly interactive media, VR gaming is one of the most relevant areas requiring close interaction between technology and users to optimize gaming experiences. Telepresence plays a significant role in shaping sport gaming experiences and deserves attention from both researchers and practitioners, to the benefit of consumers, the industry, and the overall body of business knowledge.
As such, the following clarifying VR definition has been developed through the literature to concisely guide future research. In the context of HMD VR, telepresence is defined as a participant being present within a parallel continuous streaming digital space as if the participant was actually a part of the digital space. We highlight the active role of VR sport gamers in our definition because of users' increasing level of autonomy in the VR environment due to advancing VR technology. This actor-focused definition is grounded in Lee's (2004) presence theory, in which presence was defined as "a psychological state in which virtuality of experience is unnoticed" (p. 32). Our definition can help researchers further examine telepresence as a predictor of brand awareness in the VR gaming setting.
According to Cummings and Bailenson (2015), individuals may be more likely to perceive the virtual environment as a plausible space, and themselves as located within it, if spatial cues in the virtual environment have a logical consistency. Therefore, it is likely that individuals feel as if they are actually playing their sport in a stadium if the IGA is highly congruent with that sport during game play. For example, a Cleveland Cavaliers fan who plays an NBA-themed VR game may feel as if he/she played basketball in Rocket Mortgage FieldHouse when surrounded by Gatorade signage in the virtual arena. Additionally, Bae et al. (2020) found that the characteristics of mixed reality, which represent presence, at cultural and artistic visitor attractions positively affect brand awareness. This suggests that gamers may recognize an IGA brand more readily when they feel as if they were in the gaming setting.
Brand Awareness
Brand awareness has long been known as a measure of advertising effectiveness and predictor of future sales in a variety of settings (Aaker and Brown, 1972;Cornwell and Maignan, 1998). Brand awareness has been researched extensively in the sport sponsorship context and the main measures typically include consumer recall and recognition. Recall is typically unaided and researchers ask consumers to identify the advertisers from a website, event, or a video game without giving them further information. For instance, a researcher may ask a consumer who recently played a sport VR video game to identify all of the advertisers they saw during the course of playing the game in an open-ended format. Recognition typically requires consumers to identify official advertisers in a multiple-choice format from a list of actual advertisers and fake advertisers. For example, a researcher may ask study participants to identify all of the official advertisers after playing a sport video game from a list of both official advertisers and non-advertisers (Pham, 1992;Tripodi, 2001;Tripodi et al., 2003). While early advertising and sport sponsorship research has extensively investigated brand awareness as a measurement of advertising effectiveness, more recent research has focused on brand awareness as a measure of IGA effectiveness in sport video games (Cianfrone et al., 2008).
Initial IGA effectiveness studies utilizing brand awareness were first related to digital video games on computer and gaming console platforms, with more recent research focusing on mobile and VR games (Nelson, 2002; Cianfrone et al., 2008; Herrewijn and Poels, 2014; Wu et al., 2018). VR games, and specifically sport VR games, provide a highly interactive virtual advertising platform for advertisers to showcase their brands with IGA. As VR technology has improved, advertisers have been able to fully engage potential customers by increasing the interactivity of their embedded ads. For example, an advertiser in a basketball VR game may offer gamers the ability to change their branded shoes to a different model/color with varying performance attributes. One type of basketball shoe may increase a player's speed while another may increase jumping ability, and if the gamer is immersed in the virtual basketball world, they may physically feel these attributes during gameplay. This type of advertising experience is unique to VR games, and advertising effectiveness research within VR and mobile gaming has shown a direct relationship between interactivity and brand awareness (Gao et al., 2009; Lee et al., 2014; Wu et al., 2018).
Cianfrone and Zhang (2013) and Kim et al. (2008) are seminal studies on the consumptive behaviors of sport video gamers and their relationship to consumer behavior toward IGA. The researchers demonstrated that sport video game sponsorships/IGA were extremely effective in improving purchase intentions, especially for heavy gamers and sport consumers and those with a previous interest in the sponsoring brand. For instance, if a gamer playing a soccer video game like FIFA was already a loyal Apple consumer, Apple could potentially increase purchase intentions with strategically placed IGA in the game. While there is considerable research on the effects of IGA on brand awareness in the traditional sport video game setting, further investigation of how IGA influences brand awareness in the sport VR video game setting is warranted. Additionally, it may be important to examine VR gamers' attitudes toward the IGA embedded in games and their previous brand loyalties. Similar to the issue of too much advertising at a live sports event, there is the potential for a negative effect from too much IGA in VR sport video games.
Attitude Toward the IGA
Studies suggest that there is an increase in positive consumer attitude toward advertised brands when the IGA is congruent with the video game content (Lee and Faber, 2007; Chang et al., 2010). Since the IGA represents the brand being advertised, a positive attitude toward the IGA should translate to the same toward the brand, and vice versa. How a player feels about an IGA depends a lot on the nature of the in-game interaction and whether it is favorable or not. This in turn depends on whether the IGA enhances or impedes the gaming experience, the primary motivation for engaging with the VR medium in this case.
Other factors that can affect attitude toward VR IGA are perceived congruity of the IGA, perceived realism of the game play, and perceived intrusiveness of the IGA. Verberckmoes et al. (2016) showed that IGA congruity decreased perceived intrusiveness and increased realism, both of which contribute to a positive attitude toward the IGA. Contrary to this, other authors show that IGAs that seem out of place with respect to other game environmental characteristics get noticed more, and hence are better recalled, suggesting benefits for incongruity (Verberckmoes et al., 2016). Advertisers want customers to notice the brand, like it, and then take action toward sustained engagement, ending in purchase. Incongruent IGA may upset the player, and since brand affinity, beyond awareness, is also a goal, a moderately congruent IGA may, as Lewis and Porter (2010) suggest, serve to both increase awareness and keep affinity positive.
DISCUSSION
Intrusiveness and Attitude Toward the IGA
Two arguments may be presented here. The first is that experienced video game players, by being able to deliberately block out IGA, demonstrated that they were aware (at least momentarily) of the IGAs; one is only able to deliberately block out what one is aware of at that moment. This should lead to strong recall and recognition, but Lee and Faber (2007) argued the contrary. An alternative argument is that the players are used to the game play and its environment containing the IGA (due to repeated practice). The participant players are thus able to block out IGAs not because of an active awareness of the ads, but because of an active awareness of the game and a total lack of attention to anything not considered part of it. In either case, it is unlikely that the player would have a positive attitude toward an intrusive IGA in standard or VR game play.
Inversely, functional congruity occurs when the product category of the IGA is an essential element of game play (Gwinner and Eaton, 1999). This type of IGA may be tolerated and even welcomed by participant players. The interactivity of the IGA should be judged acceptable by the same criteria as intrusiveness, that is, only to the extent that it is functionally congruent. This should in turn lead to a harmonious game experience that contributes to a positive participant experience, ultimately leading to a favorable attitude toward the IGA within an immersive VR sport gaming environment.
Where IGA intrusiveness is low and interactivity contributes to telepresence and realism, and ultimately a more pleasurable game play experience, the attitude toward the IGA is expected to be positive. Functional congruity should help bring about this result because, while advertisers achieve brand promotion objectives through the IGA, they simultaneously contribute to the primary goal of gameplay. In short, when low intrusiveness and a positive attitude toward the IGA are aligned, the IGA should produce positive marketing outcomes, especially in sport VR experiences, where advertising inventory exceeds what is available in real life.
Interactivity Dimensions
Connectedness builds on Ha and James's (1998) research and refers to the feeling of being linked to the product and company outside of the mobile or VR environment. Playfulness emphasizes entertainment and the inner joy experienced by the player of the mobile or VR game, and it is the first dimension of interactivity that emphasizes self-communication rather than interaction with others. The final dimension of interactivity is interpersonal communication, which refers to the degree to which the media platform allows users to communicate in a mediated, interactive environment. Additionally, researchers believe that improvements in technology such as VR headsets and improved graphics can significantly enhance interpersonal communication. Regardless of the ongoing dissent over the definition and measurement of interactivity in VR games, researchers agree that interactivity can lead to significant improvements in both attitude toward an advertised brand and brand awareness of the IGA (Gao et al., 2009).
Being able to manipulate or change in-game objects is a key part of the interactivity of immersive VR sport experiences. An IGA can be embedded on an object with which the user interacts, as long as the IGA is thematically congruent with the object. An example would be a drink label (IGA) imprinted on an in-game drink bottle, such as Gatorade, as it would not be surprising for a drink bottle to have a label. Interactivity in the sport game would require the user to hold, touch, or change the object to achieve hydration.
Interactivity introduces dynamism that breaks the monotony of the normal, thus capturing the user's attention. Following the argument that incongruent IGAs receive superior cognitive attention, it is expected that the more interactive the IGA, the more incongruent it is; hence the greater the likelihood that the user will interact with the object bearing the IGA, and the greater the chance that the IGA will be remembered. To take advantage of the interactivity in this Gatorade example, the user would have to pick up the bottle. But what if the user does not want to drink? Offering incentives during game play might lead them to pick up the bottle, but what kind of incentive could be offered if it does not contribute to the game play objective? Having players move aside embedded IGA objects could be regarded as interactive play that contributes to the game objective, to the extent that removing the barrier allows them to proceed within game play. Further IGA research is needed to understand how cognitive involvement compares between this "negative engagement" (removing an obstacle) and a positive engagement such as picking up an object to use it within an immersive VR environment.
Additionally, De Pelsmacker et al. (2019) argue that interactivity of the IGA will distract from the game play. Li (2015) found that domain experts believe interactivity is neutral and that what is important is thematic congruity. According to Li (2015), there would be no distractions when the quality of game design is good enough to achieve thematic congruity between the IGA and the game. The limited-capacity model of attention (Lee and Faber, 2007) suggests that VR game players have limited cognitive resources, which they would prefer to deploy toward the primary motive for playing the game, leisure. Besides subjective affective feelings, successful completion of VR in-game challenges may be confirmed by the attainment of high game scores. As Lee and Faber (2007) suggested, the more experienced players become, the more they learn to block out IGA, which they regard as extraneous to the sport competition objective. In the end, IGA recall and recognition are likely to be low even in an immersive VR environment.
This research team recommends differentiating between IGA that is designed to be interactive by the game designer and IGA that is actually engaged with by the user. The former would include IGAs with the dynamism of mobility but without any connection to game play. From the advertiser's point of view, such IGAs would be more noticeable. However, since their interactivity is not a result of user-initiated actions, it may produce a lack of congruity, being an event outside normal game play or unrelated to the sport competition. Thus, advertiser-generated IGAs might result in incongruity and may lead to an eventual negative attitude toward the IGA.
Implications and Future Research
This paper advances the VR sport marketing research discussion within the literature and provides a unique IGA congruity conceptual framework (see Figure 1) that can be tested in subsequent studies. Grounded in both the VR and sport advertising bodies of literature, this study attempts to address several emerging issues in the VR landscape. Additionally, as VR gaming becomes more affordable and accessible there will be significantly more opportunities for IGA within VR sport game development following the 2021 Oculus VR IGA beta testing. Consequently, the importance of studying the effectiveness of IGA in the VR gaming setting will become increasingly important. The VR In-Game Advertising Congruity Framework developed through a comprehensive review of literature will serve as a blueprint for researchers to further investigate VR IGA effectiveness in a variety of business settings. The theoretical framework created in this study builds off of the Pelsmacker et al. (2019) framework with inclusion of telepresence and brand awareness. The researchers also utilized seminal studies related to traditional sport video game consumer behavior to verify that this framework fits appropriately in a sport specific setting within a VR context.
A secondary, but very important, theoretical implication of this study is a comprehensive review of the definition of telepresence, which is key to understanding the effects of IGA in VR games. While researchers have debated the specific definition of telepresence for decades, the researchers in this study focused specifically on the interactivity aspect of telepresence within VR. In the context of IGA within VR gaming, this involves a feeling that the participant is actually in the digital space, and the influence of telepresence on brand awareness warrants further investigation (Faiola et al., 2013; Herrewijn and Poels, 2014). Again, in the context of HMD VR, this research team defines telepresence through the literature as a participant being present within a parallel continuous streaming digital space as if the participant was actually a part of the digital space. As the VR sport gaming experience continues to improve, researchers will have additional opportunities to examine telepresence and how we should look at IGA within the VR gaming environment under this refined definition.
In addition to the effect of telepresence on brand awareness, the conceptual framework proposed in this study (see Figure 1) also examines the effect of the IGA's congruity on brand awareness. Lee and Faber (2007) suggested four dimensions of congruity when considering the relationship between the product category of the embedded IGA and the content of the game within which it is embedded: (1) functional congruity is when the advertised product category is used in the game; (2) life congruity is when both the product category and the game content are designed for the same sociodemographic group; (3) image congruity is when the image of the IGA's product category matches that of the game focus; and (4) advertising congruity is when the IGA product category is appropriate for the game context. Some authors are of the opinion that congruency between in-game elements produces a superior impact on memory (Lambert, 1980; Shamdasani et al., 2001; Moorman et al., 2007; Rodgers, 2013), especially when people have a pre-existing mental schema against which they compare new information and experiences. A match between the new information and the existing schema, where the former is absorbed by the latter, results in greater recall of the congruent information. These authors found that incongruent information has a comparatively inferior effect on memory, which suggests weaker brand awareness for incongruent IGA within sport VR games.
Others argue that incongruent information has a superior effect on memory because the novelty and uniqueness of out-of-place information make it noticeable (Heckler and Childers, 1992; Forehand et al., 2002; Russell, 2002; Dimofte et al., 2011). It is in the attempt to make sense of the incongruous that cognition becomes more engaged, making the information easier to recall (Srull and Wyer, 1979; Mandler and Shebo, 1982). Lee and Faber (2007) believed that neither congruency nor incongruency per se adequately explains memory recall. On the one hand, they argued, it depends on the degree of either congruity or incongruity: higher levels are remembered more easily than low or moderate levels. On the other hand, recall depends on how much attentional or processing resource is left over after being dedicated to the primary task, e.g., an online game. Thus, for memory resources to be allocated to the new information, its incongruency ought to be particularly large for the brain to undertake the cognitive elaboration needed to make sense of it. The more incongruent the brand, the greater the brand recall and recognition (memory). This may contradict some traditional sport sponsorship/brand awareness research in which sponsor/brand fit is very important (Kim et al., 2008; Cianfrone and Zhang, 2013), and further research is needed to test the VR In-Game Advertising Congruity Framework to determine how far IGA incongruity can be pushed without resulting in a negative attitude toward the IGA within an immersive VR environment.
The level of involvement of the player within the game can also affect the impact of congruity on memory. Players who are more involved, which in VR can be operationalized by the interactivity of either the game or the IGA, will be so focused on the game that they tend to block out any information considered extraneous, in this case, an incongruous IGA. Thus, it is hypothesized for future research that more experienced VR gamers should exhibit lower recall than moderate or less involved VR gamers. For example, experienced VR gamers playing FIFA may have high fan identification for their selected club and be able to block out more incongruous IGA based on familiarity with the competition and immersive VR setting.
Considering the different ways in which congruity has been conceptualized will be helpful in understanding the interactions within VR, with the special characteristics of this immersive media. Lee and Faber (2007) identified some of these conceptualizations, which either relate to the IGA itself, or refer to aspects of the IGA as well as the game context. They include the relationships between the visual and the verbal elements of the IGA; the sponsor and the advertised event; the IGA and the context where it appears; the relationship between the modality (the audio and the visual placements of the IGA) and the plot.
Future research needs to empirically examine each of the variable relationships within the VR In-Game Advertising Congruity Framework under the VR sport gaming context. Given the multidisciplinary nature of the research framework, its examination should not be restricted to just a sport marketing context, and rather multidisciplinary VR efforts will be optimal. Although the VR gaming industry has been exponentially growing in the last few years, there is room for improvement for both hardware and software within the $7.7 billion USD industry (Wood, 2020) that is expected to reach $57.55 billion USD by 2027 (Fortune Business Insights, 2021). Multidisciplinary contribution can enhance the collective level of understanding that researchers have about human behaviors in an extensive range of contexts (Kim et al., 2008). For example, collaborations among scholars in psychology, marketing, industrial engineering, and computer software science can produce rich findings and provide a great deal of insights for both researchers and practitioners.
This VR In-Game Advertising Congruity Framework can be tested using various VR games, and through group difference testing such as ANOVA, in order to examine framework variable similarities across different games genres (e.g., sport, action, action-adventure, military, role-playing games, etc.). Researchers can also utilize eye tracking built into VR HMDs such as the HTC VIVE Pro Eye with Tobii analytical software on either commercially available games or games in development creating further research opportunities. For example, researchers can develop a VR basketball game that allows IGA to be embedded or excluded in a way to manipulate the level of intrusiveness. VR players can be exposed to either a highly intrusive condition or unintrusive condition while playing the game. Following the completion of the game, the participants can complete a VR In-Game Advertising Congruity Framework guided survey or interview on their gaming experience, which includes IGA brand awareness and attitude toward the IGA. Additionally, researchers can explore framework ANOVA relationships to eye tracking analytics measuring gamer glances on the IGA through heat maps.
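To make the group-difference testing described above concrete, the sketch below computes a one-way ANOVA F statistic in plain Python for the kind of design proposed: brand-recall scores compared across intrusiveness conditions. All data values, condition names, and the 0-10 recall scale are invented for illustration only; a real study would use validated measures and a full statistics package.

```python
def one_way_anova_f(*groups):
    """One-way ANOVA F statistic: between-group variance over within-group variance."""
    k = len(groups)                                 # number of conditions
    n = sum(len(g) for g in groups)                 # total participants
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares, weighted by group size
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical brand-recall scores (0-10) under three intrusiveness conditions
unintrusive = [7.1, 6.8, 7.4, 6.9, 7.2, 7.0]
moderate = [6.2, 6.5, 6.0, 6.4, 6.1, 6.3]
highly_intrusive = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2]

f_stat = one_way_anova_f(unintrusive, moderate, highly_intrusive)
print(f"F = {f_stat:.2f}")  # compare against an F(k-1, n-k) critical value
```

A large F relative to the F(k-1, n-k) critical value would indicate that mean recall differs across intrusiveness conditions; eye-tracking metrics such as glance counts from heat maps could be analyzed analogously.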
Finally, from a practitioner perspective, this study aims to benefit video game developers and advertisers. The proposed In-Game Advertising Congruity Framework and comprehensive literature review of this study provide a basis for video game developers and advertisers to make informed decisions regarding IGA in VR video games. This work is unique, as there is a paucity of research within the literature on sport video games and none to date within the VR sport game context. The In-Game Advertising Congruity Framework provides practitioners and academics alike with a baseline to measure fandom with respect to the irrational choice of sport consumption and whether purchase behaviors extend to a VR sport gaming experience. While it is difficult to conceptualize the role of each framework variable without empirical evidence, the context and role of IGA in VR reveal a unique boundary condition of existing theoretical frameworks. Additionally, VR gamers will potentially benefit from this study through more effective IGAs that are less intrusive and offer gamers targeted advertisements related to products that they may be interested in purchasing.
AUTHOR CONTRIBUTIONS
All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.
FUNDING
The publication fees for this article were funded through the Feliciano School of Business at Montclair State University.
Equations of motion for charged particles in strong laser fields
Starting from the Dirac equation coupled to a classical radiation field a set of equations of motion for charged quasi-particles in the classical limit for slowly varying radiation and matter fields is derived. The radiation reaction term derived in the paper is the Abraham-Lorentz-Dirac term.
Introduction
The interaction of electrons and positrons with their radiation field is described by the Dirac equation coupled to Maxwell's equations.
The goal of the present paper is to outline the derivation of a dynamical framework for charged quasi-particles in the classical limit neglecting spin for slowly varying matter and radiation fields from first principles.
The present paper is structured as follows. First, a classical Vlasov equation for spinless electrons and positrons coupled to Maxwell's equations is derived from the fundamental theory of electromagnetism. Next, the concept of quasi-particles for scalar electrons and positrons is introduced. In a third step, dynamical equations for the energy-momentum tensors of the matter and radiation fields are derived. The latter are utilized to obtain a set of classical molecular dynamics (MD) equations of motion for electrons and positrons coupled to their retarded radiation fields, and the Lorentz-Abraham-Dirac (LAD) term for radiation reaction is obtained. The LAD term, however, is not the only radiation reaction force that can be derived with the methodology presented in this paper.
Matter and radiation fields
We start by defining the concept of a Wigner operator as outlined in [1], built from a kernel Ψ̂ of Dirac field operators. With the help of the Dirac equation coupled to the radiation field, equations of motion for the Wigner operator (1) are obtained [1]; in the limit of a slowly varying classical radiation field they simplify considerably. Next, it is useful to expand the Wigner operator in spin space [1]. This yields a decomposition in which F̂ is a scalar and α̂ is an axial vector. In the classical limit, transport equations for both are found to hold, and both equations come along with constraints, among them the mass-shell constraint (8). The 4-current is given by

ĵ^µ(x) = −e ∫ d⁴p (p^µ/m) :F̂(x, p):

and the energy-momentum tensor by (11), where normal ordering is implied. In what follows we neglect spin. The associated classical radiation field is obtained with the help of the ensemble-averaged current. In addition to the energy-momentum tensor of the Dirac field (11), the radiative energy-momentum tensor (12) is needed.

On-shell scalar Vlasov equation

Next, we decompose F̂ into positive- and negative-energy parts due to the constraint equation (8). An on-shell equation is obtained by performing an energy average, following the outline in [2], which defines the on-shell distribution functions (17) and (18). Equation (16) is the desired scalar Vlasov equation for the particle and antiparticle distributions f(x, p) and f̄(x, p). Finally, (16) has to be augmented with Maxwell's equations (19).
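For orientation, a scalar Vlasov system of the type referred to as (16) and (19) has the following standard textbook form; this is only a sketch, and the paper's exact signs and normalization conventions may differ:

```latex
% Relativistic Vlasov equations for electrons (charge -e) and positrons (+e),
% coupled to Maxwell's equations; a standard-form sketch, not the paper's exact (16), (19).
p^{\mu}\,\partial_{\mu} f(x,p)
  - e\,F^{\mu\nu}(x)\,p_{\nu}\,\frac{\partial f(x,p)}{\partial p^{\mu}} = 0,
\qquad
p^{\mu}\,\partial_{\mu} \bar{f}(x,p)
  + e\,F^{\mu\nu}(x)\,p_{\nu}\,\frac{\partial \bar{f}(x,p)}{\partial p^{\mu}} = 0,
\qquad
\partial_{\nu} F^{\mu\nu}(x) = \mu_{0}\, j^{\mu}(x),
\qquad
j^{\mu}(x) = -e \int d^{4}p\, \frac{p^{\mu}}{m}
  \bigl[\, f(x,p) - \bar{f}(x,p) \,\bigr].
```

The opposite signs of the force terms reflect the opposite charges of particles and antiparticles.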
The concept of quasi-particles
We depart from f and f̄, defined as continuous functions on phase space, and make the quasi-particle ansatz (20) and (21) to approximate the on-shell distribution functions (17) and (18). We require that the approximations hold with arbitrary precision ε, ε̄ > 0 for an appropriate proximity measure. Since quasi-particles interact via their radiation fields, retardation constraints will be encountered in space-time.
The radiation field
Next, (19) is solved with the help of (20) and (21). For the 4-current we obtain (24). We pick the retarded vector-potential solutions of Maxwell's equations, which satisfy (25), where the retarded Green's function is given by (26). Plugging (24) and (26) into (25) yields the field as an integral over the quasi-particle worldlines. Defining the retarded times τ^q_ret and τ̄^q_ret for particles and anti-particles by (28) and (29), we can solve (25) by observing [3] that the particle worldlines intersect the backward light cone of the observation point x^α at the retarded times. Hence, we obtain the familiar retarded field solutions (32) [3], where x^α_q(τ^q_ret) is the location of particle q at its retarded time τ^q_ret. The same holds for the anti-particles, labeled with a bar.
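The retarded Green's function and the resulting retarded (Liénard-Wiechert) potential referred to as (26) and (32) have the familiar textbook forms below (SI units, metric signature (+,−,−,−), x⁰ = ct); the paper's exact normalization may differ:

```latex
% Retarded Green's function of the wave operator:
G_{\mathrm{ret}}(x - x') =
  \frac{\delta\!\left(x^{0} - x'^{0} - |\mathbf{x} - \mathbf{x}'|\right)}
       {4\pi\,|\mathbf{x} - \mathbf{x}'|},
\qquad
% Lienard-Wiechert potential of a point charge q with 4-velocity u^\mu:
A^{\mu}(x) =
  \frac{q}{4\pi\varepsilon_{0} c}\,
  \frac{u^{\mu}(\tau_{\mathrm{ret}})}
       {u_{\nu}(\tau_{\mathrm{ret}})\,\bigl[x - x_{q}(\tau_{\mathrm{ret}})\bigr]^{\nu}}.
```

Evaluating the delta function against a worldline parametrized by proper time is what produces the denominator u·(x − x_q) evaluated at the retarded time.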
The radiation field (32) is linked to the worldlines of the particles and anti-particles. Hence, it is not defined for all space-time points due to the retardation conditions (28) and (29).
Equations of motion for quasi-particles
To derive equations of motion for the quasi-particles we make use of the energy-momentum tensors for the matter (11) and radiation (12) fields. They are given by (33) and (13). To obtain an equation of motion for (33) along the worldlines of quasi-particles we make use of (16). We pick an arbitrary quasi-particle at x_p(t) and define a spherical volume V_p(t) with radius R_sp(t) surrounding it in such a way that there is no second quasi-particle at x_q(t) with q ≠ p in the same volume.
From (32) we conclude that V_p(t) contains the retarded field from the quasi-particle at x_p(t) inside V_p(t) and the fields from all quasi-particles at x_q(t) with q ≠ p outside V_p(t). The latter form the external field seen in V_p(t) by the quasi-particle at x_p(t).
To shorten notation it is useful to split the total radiation field into the source field F^µν_p of quasi-particle p and the external field F^µν_ext produced by all quasi-particles q ≠ p. For the matter field we obtain the balance equation (34). To obtain an equation of motion for (13) we consider only the field of quasi-particle p inside the volume V_p; hence, we find (35). We note that only the field of quasi-particle p contributes. Adding (34) and (35) we find (36). Equation (36) does not contain singular terms and can be used to define a set of delay equations for radiation reaction. We will not do this here but follow the traditional derivation of radiation reaction terms, which leads us to the LAD equations.
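The balance equations invoked above take the standard local form of energy-momentum exchange between matter and field; this is a textbook sketch (the factor 1/c depends on the unit convention), not a reproduction of the paper's (34)-(36):

```latex
% Energy-momentum exchange between matter and field:
\partial_{\nu} T^{\mu\nu}_{\mathrm{mat}}(x) = \frac{1}{c}\,F^{\mu\nu}(x)\, j_{\nu}(x),
\qquad
\partial_{\nu} T^{\mu\nu}_{\mathrm{rad}}(x) = -\frac{1}{c}\,F^{\mu\nu}(x)\, j_{\nu}(x),
\qquad
\partial_{\nu}\bigl[ T^{\mu\nu}_{\mathrm{mat}} + T^{\mu\nu}_{\mathrm{rad}} \bigr] = 0 .
```

The two source terms cancel in the sum, so the total energy-momentum of matter plus radiation is locally conserved.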
LAD equations
We now solve (35) instead of (36) explicitly for the retarded field solution (32). To do this we first infer the current of source p from (24) and integrate it over the volume V_p around p. We next evaluate the resulting self-field contribution, following the outline in [4]. After a few intermediate steps one finds the self-force terms, where τ₀ = q²_p/4πε₀c³ and a^µ_p is the 4-acceleration. Finally, we obtain the set of ordinary differential equations (41) for each quasi-particle, the solutions of which have to be plugged into (32) to obtain the field distribution of the particle and anti-particle ensemble.
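For reference, the LAD equation in its common textbook form reads as follows; note that the mass is often absorbed into the characteristic time, τ₀ = q²/6πε₀mc³, whereas the paper's τ₀ = q²_p/4πε₀c³ uses a different normalization:

```latex
% Lorentz-Abraham-Dirac equation, metric signature (+,-,-,-),
% with a^\mu = \dot{u}^\mu the 4-acceleration:
m\,\dot{u}^{\mu} = F^{\mu}_{\mathrm{ext}}
  + \frac{q^{2}}{6\pi\varepsilon_{0} c^{3}}
    \left( \dot{a}^{\mu} - \frac{a^{\nu} a_{\nu}}{c^{2}}\, u^{\mu} \right).
```

The ȧ^µ (jerk) term is responsible for the well-known runaway and pre-acceleration pathologies mentioned in the text.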
The self-force terms in (41) are part of the LAD equations, which have well-known mathematical problems [4]. We note that the derivation of the LAD equations given here makes the assumption that the radiation fields can be Taylor expanded around their singularities along the worldlines. No similar Taylor expansions would be needed in the case of the aforementioned delay equations as a replacement for LAD.
The dynamical framework
Taking everything together we obtain a set of classical MD equations of motion for the quasi-particle worldlines, where m̃_p denotes the renormalized mass. The external electromagnetic field at x^α_p is generated by all surrounding particles. The retarded electromagnetic fields are obtained from (32); they are the familiar Liénard-Wiechert fields [3]. The retardation constraint is (50), where x^µ_q,ret = x^µ_q(τ^q_ret).
Constraint (50) must be solved for all q ≠ p. If a solution exists, particle q contributes to the external field at particle p; otherwise it does not. The situation is illustrated in Fig. 1. The discussion of the setup problem of the radiative MD system is omitted in this paper.
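As an illustration of how a retardation constraint of the form (50) can be solved numerically for a given trajectory, the following sketch finds the retarded time by bisection; the function names and the bracketing strategy are our own illustrative choices, not part of the paper:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def retarded_time(x_obs, t_obs, trajectory, tol=1e-12, max_iter=200):
    """Solve the retardation constraint |x_obs - x_q(t_r)| = c (t_obs - t_r)
    for the retarded time t_r by bisection.

    `trajectory(t)` returns the particle position (3-tuple, meters) at time t.
    Assumes subluminal motion, so gap(t_r) = c (t_obs - t_r) - |x_obs - x_q(t_r)|
    is strictly decreasing and has a single root t_r < t_obs.
    """
    def gap(t_r):
        # positive if t_r is earlier than the retarded time, negative if later
        return C * (t_obs - t_r) - math.dist(x_obs, trajectory(t_r))

    hi = t_obs
    lo = t_obs - 1.0
    # bracket the root by stepping further into the past until gap > 0
    while gap(lo) < 0.0:
        lo = t_obs - 2.0 * (t_obs - lo)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if gap(mid) > 0.0:
            lo = mid  # root lies later than mid
        else:
            hi = mid  # root lies earlier than mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

For a charge at rest the constraint reduces to t_r = t_obs − |x_obs − x_q|/c, which provides a simple consistency check of the solver.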
Conclusions
Starting from the field equations of electrodynamics, equations of motion for scalar quasi-electrons and positrons have been derived. Together with their radiation fields they form a set of MD equations with self-force effects. The derivation of the latter makes use of the energy-momentum tensors of the matter and radiation fields. In the paper the LAD terms for radiation reaction have been motivated.
Exotic Plants Used by the Hmong in Thailand
Exotic species are an integral part of the plants used by many ethnic groups, but they usually receive little attention and are treated as alien elements in ethnobotanical data. Here, we analyze the plants used by Thai Hmong refugees that are not native to their current habitats in Thailand. We attempt to understand the sources of this knowledge: do people maintain the original traditional knowledge related to exotic species when they migrate to a new region, or does new knowledge originate from acculturation? We interviewed 16 specialist Hmong informants in Nan province, Thailand, about their traditional knowledge of 69 exotic species used. Acquisition of this knowledge has a long history; several species are the same as plants used by the Hmong in China and other countries, and others are globally useful species that have entered the pool of species about which the Hmong have developed local knowledge. However, migration also involves the integration of local knowledge from other cultures and its adaptation to urban settings. This includes using closely related exotic taxa that replace some of the species they used in their original homelands. The migrants' traditional knowledge in their new habitats is more complicated and also involves the development of local knowledge that is entirely new.
Introduction
Exotic species are such ones that have been introduced by humans into new places outside their native habitats [1]. Some exotic plants have long histories of introduction in cultivation, and they may reach a stage where they produce offspring without intervention by humans and, finally, they may become naturalized [2][3][4]. For instance, Solanum torvum Sw. was recently found for the first time in Europe as naturalized in a riverbed, and it probably escaped from cultivation because it is used as rootstock for some edible Solanaceae species (e.g., S. lycopersicum L., S. melongena L.) [5]. Some exotic plants have potential to spread rapidly and to become more competitive than local plants [6,7]. They are then sometimes characterized as weeds or invasive species that have negative effects on the biodiversity [6][7][8]. Weeds and invasive species are unwanted plants from the human point of view [1,4] because in farmlands, they may reduce agricultural crop yields when they compete with the cultivated plants for light, nutrients, and other resources [9].
Ethnobotanists have mostly focused on native plants [10] that evolved within the study regions where they have their natural interactions with the environment and other organisms. Exotic species may also be present in the same regions, but they have usually received less attention than the native plants [10]. As use of exotic species generates new knowledge [11], replacement of native plants Eight of the species were cosmopolitan weeds that occur in agricultural and forestry areas ( Table 2). Even if these species are unwanted in the fields, they are at the same time used for many purposes by the Hmong. For instance, they use the leaves of Chromolaena odorata (L.) R.M.King & H.Rob. that invades crop fields and edges of forests to stop bleeding when there is an emergency in the field. The Hmong wear a triangle-shaped amulet pouch with Mimosa pudica L. fragments around their neck or pin it to their clothes to ward off evil spirits. Celosia argentea L. is an important medicinal weed used by the Hmong to treat amenorrhea and dysmenorrhea. Moreover, some weeds rapidly spread around and serve as animal feed such as Amaranthus spinosus L. and Crassocephalum crepidioides (Benth.) S.Moore. It is a good way to balance the weeds by natural control.
Of the 69 exotic species encountered, 58 were used as medicine, whereas smaller numbers were used for food, social purposes, animal feed, and materials (Table S1).
The medicinal exotic species were used by the Hmong to treat health conditions in 17 different categories. Most medicinal plants were used to treat diseases of the genitourinary system (17%), especially menstrual cycle bleeding disorders; diseases of the digestive system (16%); and endocrine, nutritional, or metabolic diseases, especially nutritional disorders (11%). The menstrual cycle bleeding disorders included amenorrhea, dysmenorrhea, and female fertility. The popular species used in this category often had red and orange colors (e.g., Celosia argentea, Impatiens balsamina L., and Tagetes erecta L.). Digestive system disorders such as diarrhea, peptic ulcer, and stomachache are common; Psidium guajava L. was commonly used for treating diarrhea. The common methods of plant preparation are decoction (40%), cooking (22%), and pounding (18%). The plant materials are chopped and boiled in water for a prolonged period for oral consumption as an herbal tea. Culinary medicine is important to the Hmong, who cook herbs in their meals. Clear chicken soup is a popular Hmong recipe in which a black chicken with the entrails removed is boiled with a variety of herbs and then salted to give the dish a mild taste. The soup is served with hot steamed rice. Exotic ingredients such as Artemisia lactiflora Wall. ex DC. and Iresine herbstii Hook. may be included, showing the importance of exotic plants in this signature dish of Hmong culture. Exotic food plants were vegetables (53%), fruits (30%), carbohydrate sources (10%), and food additives (7%). Most of the exotic food plants are commonly eaten in Asia or worldwide (e.g., Manihot esculenta Crantz and Zea mays L.). The traditional Hmong religion is animist; it often involves protection against black magic and the treatment of the negative effects of that magic, evil spirits, and other harms, for instance through soul-calling.
The plants used for these purposes were often ones that are valued as ornamentals in Thailand and elsewhere in the tropics. Zephyranthes carinata Herb. leaves were used to protect the owner from evil spirits by keeping them in a triangle-shaped amulet pouch. Hippeastrum × johnsonii was used to protect against evil spirits, and Gladiolus × hortulanus L.H. Bailey was used in soul-calling rituals to treat a frightened soul when it had fallen away and was scared or panicked. Caladium bicolor (Aiton) Vent. is widely grown as an ornamental foliage plant because of its many color patterns and variations. It is commonly used in the villages to treat pain from evil black magic. We are not aware of previous reports of the plant being used for this purpose.
Geographic Origins
The 69 species of exotic plants used in the seven Hmong villages originated from five regions (Figure 1). Most species (40) came from America, and of these, six species were weeds (Table 2). The second most important region of origin was Asia, with 21 species, of which 16 were native to China. For Chinese herbs, there was consensus about their uses among the Hmong. For instance, Artemisia lactiflora and Chrysanthemum indicum L. were used as tonics, Artemisia vulgaris L. was used to treat malaria and fever, and Sedum sarmentosum Bunge was used as a tonic and as food for pregnant women. Fourteen species were native to Africa, and only a few of the Hmong's exotic species came from Europe and Australia.
Sources of Hmong's Knowledge
Hmong's knowledge of exotic plants is derived from four main sources. Most knowledge of exotic species (46%) was acquired from worldwide sharing. Many species were well-known and globally used the same as they were used for by the Hmong. Among the American species, the Hmong used Manihot esculenta, Solanum lycopersicum, and Zea mays for food and as cash crops, Nicotiana tabacum L. for smoking, and Annona squamosa L., Carica papaya L., and Psidium guajava as edible fruits. Of those native to Asia, many had been promoted during a long time of cultivation as directly edible or for commercial use. This was true for Anethum graveolens L., Artocarpus heterophyllus Lam., Citrus maxima (Burm.) Merr., Psophocarpus tetragonolobus (L.) DC., Solanum melongena, and Solanum torvum. Exotics that are native to Africa included Aloe vera (L.) Burm.f., which was commonly used to treat burns, Crassocephalum crepidioides, which was used for food, and Lagenaria siceraria (Molina) Standl., which was used for making utensils. Moreover, some exotic species are commonly used elsewhere in Thailand such as Amaranthus spinosus L., Ayapana triplinervis (Vahl) R.M.King & H.Rob., and Passiflora foetida L., which were eaten as local vegetables.
The second source of exotic plant knowledge was transmission from their original homeland to their new homeland (22%). Several species of plants used by the Hmong are the same as in Thailand, China and other countries where they have settled. Among these, Impatiens balsamina, Iresine herbstii, and Mirabilis jalapa L. were commonly used to treat menstruation disorders. Tradescantia zebrina Bosse was boiled in chicken soup as a tonic, and Bryophyllum pinnatum (Lam.) Oken was also used as a tonic, to treat muscular-skeletal disorders and injuries.
Interestingly, some of the uses of exotic plants appear to have evolved on site and to be unique to the Thai Hmong; they were not known in Hmong communities in other countries (19%). Euphorbia tithymaloides L. was used as a lactation stimulant. Crinum × amabile Donn ex Ker Gawl. was used to treat abscesses, bone fractures, bruises, and sprains (strains). Verbena officinalis L. was used to treat foot dermatitis and itchy rashes, which are common health conditions because most Hmong work as farmers and their feet are constantly in contact with soil and water.
Finally, the Thai Hmong had acquired new exotic plant knowledge (13%) from new habitats. These species were the first records of the plants being used by the Hmong, e.g., Gladiolus × hortulanus, which was used for social purposes and Plectranthus scutellarioides (L.) R.Br., which was used to treat flatulence and liver disorders.
Exotic Species Used
As shown above, exotic plants are important in Hmong culture. This has been documented in previous ethnobotanical studies, which reported that 22-38% of the plants used by the Thai Hmong were exotic [34,35,37,[39][40][41][42]45]. The family with the most exotic plants used by the Hmong was Asteraceae. It is one of the largest plant families globally and is well-known for having many weedy species [9]. Asteraceae are often widely distributed because of their numerous light seeds, which are dispersed by wind and adhesion [46]. Basically, weeds are plants that grow in places where they are not appreciated, and they are seen as undesirable from a human point of view [4]. Here, we focus on the useful aspects of exotic weeds. We suggest that they should not be subjected to inattentive and indiscriminate eradication without consideration of their potential usefulness [4]. Good management of exotic weeds is better achieved through targeted control than through complete elimination.
As for the medicinal plants, many exotic herbs were used to treat women's health conditions. Women's healthcare is crucial to human life in most cultures, including the Hmong in Thailand, who use a large diversity of herbs as remedies for such purposes [47]. Plants with reddish color are believed to be efficient in the treatment of blood-related ailments; therefore, red plants are often used to treat health problems connected to menstruation [48]. Nguanchoo (2014) found that the Hmong used many exotic species to treat common medical problems, for instance nutritional and gastrointestinal disorders. When preparing the medicine, herbs were usually decocted, which is a simple pharmaceutical method and perhaps the most common way of preparing medicine since ancient times [49][50][51]. As in China, food plants and medicinal plants are mixed in diets and form an important link between food and health. It is very popular among the Hmong to boil herbs with chicken to make a clear soup. The soup is used as a tonic, which is an important part of the Hmong identity [52]. Tonics are commonly used in many cultures for balancing elements, enhancing immunity, and boosting rapid recovery [49].
For social purposes, exotic plants are part of Hmong beliefs and rituals that are deeply rooted in their culture. We therefore expected that the plants used by the Hmong for religious and ritual purposes would be native plants that had always been available to them, and we did not expect to find exotic species in this category of uses, since exotic species would have been added to their plant arsenal long after their beliefs and rituals developed. Nevertheless, we found nine exotic species among those used for purposes related to beliefs and rituals. The uses of some of these are the result of acculturation, because they are hybrids and newly introduced ornamental plants such as Gladiolus × hortulanus and Hippeastrum × johnsonii. However, some species have a long history, such as Zephyranthes carinata, which is a sacred plant. A closely related species, Z. rosea, is also used by the Hmong in Thailand [35,42,45] and in Minnesota, USA [53]; it was brought along during the migration from southern China and has become naturalized in Thailand [34]. Another exotic plant used in rituals is Caladium bicolor, which contains calcium oxalate crystals that may cause skin irritation and vomiting and be toxic when ingested [54]. However, calcium oxalate can be destroyed by drying or heating [55], so the Hmong pound the rhizome to make a liniment for areas that suffer pain from evil black magic. It produces intense burning and itching that is believed to be strong enough to eliminate the evil power; this belief makes it possible to fight pain from magic with pain from poisoning. The oldest record of the Amerindian charm Caladium bicolor is a collection from 1837 in the Leiden herbarium. It is widely used by all 11 indigenous groups in the Guianas [56]. Organs of desired game animals were burnt to powder, mixed with the juice of the Caladium tuber, and rubbed on the hunter's body as a hunting charm. This had to be painful because the hunter had to be prepared for the pain and danger of the forthcoming hunt [56,57].
Sansevieria trifasciata Prain is commonly known as the "snake plant". The Hmong use it to protect themselves from snakes and evil spirits, and also to treat snake bites with a leaf poultice or decoction [37,45]. In Bangladesh, Kenya, India, and Nigeria, S. trifasciata is also used to repel snakes and to treat snake bites [58][59][60][61]. Supposedly, snakes do not like the appearance of the plant because of its shape and the sharp margins of its leaves [58]. This supports the Doctrine of Signatures, in which the physical characteristics of plants reveal their therapeutic value [48]. This doctrine led to the discovery of many medicinal plants, and plant characteristics are believed to be linked with a charm, ritual, or sacred plant [56]. It has been shown that the ethanolic extract of S. trifasciata induces potent antiallergic and anti-anaphylactic activity [62]. As mentioned above, many researchers have reported the use of at least 19 exotic species for ritual and belief-related purposes in Hmong culture [37,45]. Some of the previous reports agree with those recorded here, for example, that Jatropha gossypiifolia L. and Sansevieria trifasciata are grown around the Hmong's houses to protect from evil spirits and to expel snakes, respectively. Mimosa pudica is kept in a triangular pouch as a sacred plant.
Geographic Origins
The Hmong used many plants introduced from America. This may not be surprising because native American plants became popular and widespread throughout the world following the exploration and colonization era, especially the voyages of Christopher Columbus and his discovery of the New World in 1492. Moreover, historical evidence shows that some plants were introduced into Asia in pre-Columbian times including Zea mays [63]. Many seeds were carried to be planted in Europe and subsequently rapidly spread to Asia and Africa where there were European colonies [64,65]. The same is true for the fruits and vegetables. They were introduced to European colonies along commercial sailing routes around the 16th century and have been cultivated there ever since. Examples include Annona squamosa, first cultivated in Malaysia by the Dutch [66], Carica papaya which was introduced to the Philippines by the Spanish [67], and Cucurbita moschata Duchesne, which was introduced to India, Southeast Asia, and Japan by the Spanish [68]. Eryngium foetidum L. was introduced in Southeast Asia by the Chinese using it as a substitute for coriander [69], and Muntingia calabura L. was probably first introduced in Thailand or Vietnam by the Portuguese [66].
The Hmong in our study also used many exotic herbs introduced from China. They frequently carried plants and seeds to be grown in the new homelands where they settled [33,34]. This helped their traditional knowledge endure and be transmitted to new generations in the settled areas. Artemisia lactiflora and A. vulgaris are native to China, and both are important in traditional Chinese medicine for menstrual problems and liver disorders [70,71]. Leaves of A. lactiflora contain volatile oil, and the Hmong in Thailand boil it in chicken soup as a tonic [34,40,41,45,52]. Artemisia lactiflora was reported for Thailand in the scientific literature for the first time in 1993, when it was documented that it had been brought from southern China with the Hmong migration [34]. Artemisia vulgaris has long been used to treat malaria and as a fever remedy by the Hmong in Thailand and Minnesota [53]. Moreover, many exotic plants in China are commonly cultivated and naturalized and have long been used in traditional Chinese medicine and as food, including Impatiens balsamina, Mirabilis jalapa, Talinum paniculatum (Jacq.) Gaertn. [72], and Eryngium foetidum [73]. Therefore, some exotic plant uses among the Thai Hmong may derive from traditional knowledge acquired in China before the Hmong migration.
Sources of Hmong's Knowledge
The Hmong migrated to new homelands, and the exotic plants that they now use have arrived along many routes. They brought traditional knowledge with them from their homelands, but also acquired knowledge through acculturation in their new habitats, and in some cases developed entirely new and unique knowledge. Some species are used by all Hmong, in Thailand, China, and the other countries where the Hmong settled, suggesting that there has been a long history of acquisition of knowledge related to exotic plants. This use can be said to be part of their traditional knowledge and cultural heritage. Wherever they settled far from their homeland, the Hmong used these plants, which are in fact exotic in their new lands, to resist acculturation. One plant that is widely known by the Hmong is Bryophyllum pinnatum, which is used to treat bruises and bone fractures. The Chinese Hmong use it to treat burns and scalds [74] and bone fractures [36], and among the Hmong in Minnesota it is used to treat wounds, chicken pox, fever, stomachache, and sores [53]. Additionally, it is used as a symbolic and cultural medicine in Laos [75]. It is also used in folk medicine in tropical Africa, America, India, China, and Australia [76]. Another example is Canna indica L., whose burned rhizome is eaten as an alternative source of starch by many rural people [77]. In addition, the Hmong in Thailand and California use Canna indica seeds to treat appendicitis, flatulence, and stomachache [35,41,42,44]. Elsewhere in Asia, Canna indica is used to treat gonorrhea in China, insect bites and swelling in Cambodia, Vietnam, and Laos, and as a diuretic in the Philippines [78]. The Thai Hmong used Impatiens balsamina and Mirabilis jalapa to treat amenorrhea and dysmenorrhea and for postpartum recovery [37,41,42,45,47]. Laotian refugees in America and California grow I. balsamina and use it for hastening childbirth [44,53].
The Hmong in China used the same species for relieving pain, regulating menstruation and treating broken bones [79]. Mirabilis jalapa is used to promote delivery of blood clots after birth by the Hmong in Laos [33] and in Minnesota [53]. The Hmong in China used Mirabilis jalapa to treat edemas, relieve toxicity and pain, and activate blood flow [79]. Both species are recognized medicines worldwide. Impatiens balsamina has been widely used in traditional Chinese medicine to treat rheumatism, difficult labor, and puerperal pain [80]. Mirabilis jalapa was used to treat abscesses in India and Java, diabetes in China, infection in Thailand, as a laxative in Europe, and for many other therapeutic purposes in Mexico and Brazil also among indigenous people [81]. Moreover, it is mentioned as a uterine stimulant to hasten childbirth in the USA [82]. Tradescantia zebrina is used as a tonic. The Hmong in China use T. zebrina to treat gastritis [36]. In Jamaica, Cuba, and Malaysia, T. zebrina is highly appreciated as a remedy to improve kidney function, and it is used as nutritional medicine in China [83]. The leaves are decocted, mixed with lemon and drunk as a tonic in Mexico [84]. Verbena officinalis is commonly used to treat foot dermatitis and itchy rashes by the Thai Hmong [35,37,[40][41][42]45] and to treat indigestion in California [44]. The Chinese Hmong used remedies prepared from V. officinalis to treat cold-fever, hepatitis and enteritis [36], to relieve toxicity and pain, regulate menstruation, and to treat broken bones [79]. Moreover, V. officinalis is used as medicine in Canada, Italy, Spain and USA [85][86][87] and it has been well-known in the treatment of toxic dermatitis in traditional Chinese medicine for several thousand years [88]. Studies of the pharmacological action of the extract showed that it had significant potential because of its anti-inflammatory effects [89].
Immigrants of the ethnic minorities may come into direct contact with mainstream culture, which may cause acculturation. The knowledge about the many common species has been influenced by knowledge sharing. They were used by the Thai Hmong in the same way as they are used in many other parts of the world. Historically, immigration, commodity exchange, and colonization have been the means of carrying knowledge and have led to the acquisition of plant knowledge by cultural interaction and exchange. Crops and edible plants rose in popularity all over the world. Nowadays, shared information has been diffused by globalization. For instance, Aloe vera was used already in the old Mesopotamia, Egypt, and Greece for treatment of the skin, wounds, dermatitis, thermal burn, and sunburn [90]. Today it is frequently used in households and it is currently the most used medicinal plant worldwide [91]. Aloe vera is also used by the Hmong who usually grow it in a pot in their home gardens for treating injuries from burns by cutting a leaf and applying the sap to the affected area. Some species used in agricultural extension were introduced in the late 19th Century including Sechium edule (Jacq.) Sw. and Passiflora edulis [92]. Plant introductions increase interconnectedness in multicultural societies and tend to be universal [93]. Knowledge of exotic plants is also commonly shared in a worldwide context which integrates knowledge for human benefit in cultural globalization. Only a few exotic species used by the Thai Hmong had been the subject of development of new knowledge. Their inclusion here is the first record of their uses by the Hmong and they are, therefore, not broadly used in a worldwide context. They have mostly very specific uses, mostly as medicine and for social purposes.
Some species have been widely used among the Thai Hmong, but are not used by Hmong in other countries. Perhaps Hmong refugees in Thailand developed new knowledge that differs both from the original Hmong knowledge and from that of other Hmong migrants. Chrysanthemum indicum is native to China and has a long history of cultivation. Its dried flowers are used for various herbal remedies, and they are commonly drunk as a tonic tea in China, and also in Thailand [94]. More recently, the Thai Hmong have used its leaves and shoots as a tonic in chicken soup, but this has not been reported for the Hmong from other countries. The first report of this use by the Thai Hmong appeared as recently as 2012 [40,45]; since then, the use of this plant has boomed, and it is now used by all Hmong in Thailand [52]. Sambucus canadensis L. was cultivated in America for its edible fruits, which were used by commercial processors, wineries, jam makers, and bakeries [95]. To the Thai Hmong, S. canadensis does not provide an edible fruit; instead, the leaves are popular and are applied as a poultice to treat muscular and skeletal disorders and injuries [35,37,42,45]. Talinum fruticosum (L.) Juss. is widely used in chicken soup as a tonic and for muscle pain relief [37,40,41,45]. Informants said that they had learned this traditional use from Hmong kinsmen in other villages. They always grow this plant in a pot in their home gardens, which shows strong transmission of plant knowledge and helps shape the Hmong identity in Thailand.
Incorporating new exotic plants to generate new Hmong knowledge in Thailand has not happened easily. Changes related to acculturation may be derived from ecological changes induced by an impinging culture [96]. Likewise, Akha refugees in Thailand brought a strong cultural tradition from China with them, but their knowledge has been applied to a different set of species because of ecological changes [97]. Resettlement forces newcomers to accommodate to the new conditions of their life. This involves using exotic species that are closely related to taxa known from their ancient knowledge as substitutes for the original plants, since these relatives are easily found in the resettlement area. The Thai Hmong still strive to preserve their cultural heritage. Succulent species in Crassulaceae such as Bryophyllum, Kalanchoe, and Sedum have similar vegetative morphologies and are widely used as medicine by the Hmong. The Thai Hmong say that they can be used for treating bone fractures, injuries, and morning sickness, and as food during pregnancy. Sedum sarmentosum is native to China, and the Chinese Hmong use it to relieve the effects of toxicity, swelling, pain, and cough, and to treat broken bones [79]. Laotian Hmong refugees in California used S. sarmentosum and S. spectabile Boreau to relieve upset stomach and to treat sores [44], while in Minnesota, S. telephium L. is used in the pregnancy and postpartum diet and to treat bruises and bone fractures [53]. Talinum fruticosum is used only by the Thai Hmong, whereas T. paniculatum is an exotic plant used across Hmong culture: as a tonic in Thailand [34,40,41,52] and Minnesota [53], and to treat incontinence by the Chinese Hmong [36]. Species of Sedum and Talinum are examples of the use of closely related taxa as substitutes for the original plants in Hmong culture in different regions. Thai ethnic minorities use plant classifications [98] that correspond closely to scientific taxa [99].
Moreover, they believe that plants are divided into two main groups. The first group is wild herbs that grow naturally in the forest and sometimes include naturalized species. Another group includes the domesticated herbs, which are often exotic species in home gardens [100]. A couple of species may be closely related taxa with the same therapeutic property, but wild plants are thought of as more potent than domesticated plants. Some wild species can be substituted with domesticated exotic plants in pots for easy use. Therefore, plant substitution is a combination of adaptation and cultural recognition. Sambucus canadensis was widely used only by the Thai Hmong. A closely related species, S. javanica Blume, is native to Thailand and tropical Asia and was used for the same purpose by Thai and Laotian Hmong [37,41,75]. Sambucus javanica is thought to be a wild plant whereas S. canadensis is a domesticated plant in the folk classification. The Chinese Hmong use S. adnata Wall. ex DC. and S. williamsii Hance that are both native to China [36]. Buddleja asiatica Lour. is a native plant in Thailand and is used as a wild plant for medicine by Thai [35,40,42] and Chinese Hmong [36]. Buddleja paniculata Wall. is an exotic plant and a new record as used by the Hmong with the status of domesticated plant. Accordingly, species of Sambucus and Buddleja are examples of commonly used plant genera in Hmong traditional medicine, but different species are used depending on the region and the status of domesticated plants.
Materials and Methods
Villagers in six Hmong villages in the Nan province, in northern Thailand (Table 3 and Figure 2), were interviewed about their use of exotic plants. The old people in these villages had migrated from Laos across the border to Nan. We asked local leaders and community members to point to recognized specialists and we selected 16 of them as our key informants [76,101]. Almost all informants were herbalists who used plants in their daily life or shamans who used belief and rituals related to plants in their practice [100,102]. The research protocol was approved by Chiang Mai University Research Ethics Committee with the certificate of approval number COA No. 020/61. Informants were interviewed using semi-structured interviews and field interviews [103]. Questions asked were about the application of plants used, therapeutic properties, and the methods of use. Vouchers were collected of all except seven very common plants for subsequent identification and they were deposited at Queen Sirikit Botanic Garden Herbarium, (QBG), Chiang Mai, Thailand. The plant names were standardized following The Plant List (http://www.theplantlist.org/). Their status as exotic or not was determined following the Thai Plant Names [104]. Use categories followed Cook (1995) for non-medicinal categories [105] and the WHO Classification of Diseases version 11 (ICD-11) for medicinal categories (https://www.who.int/classifications/icd/en/). We cross-checked exotic species with the species described in 16 [38], Pake (1987) [33], Pongsattayapipat (1993) [39], Spring (1989) [53], Srisanga (1994) [34], Srithi (2012) [40], Tichachart (2004) [41], Tovaranonte (1998)
Conclusions
Our research on the traditions of uses of exotic plants by the Thai Hmong demonstrated a deep relationship between exotic species and Hmong culture. The exotic plants provide medicine, food, animal feed, and material, and are used for many social purposes. A number of exotic species have a long history of acquisition of the knowledge related to them among the Hmong. The Hmong have used exotic plants since they lived in their original homeland in China, and they brought plants into their new habitats when they migrated south; several species that the Thai Hmong use today are the same plants that were used by the Hmong in China and wherever they have resettled in recent times. This enlightens us about the original traditional knowledge related to the use of exotic plants by the Hmong. However, cultural interaction and globalization have had large effects on plants that are shared and commonly used worldwide. Acculturation of knowledge and adaptation occurred through exotic plants accommodating to the existing conditions of the newly colonized lands. This accommodation is based on inherited knowledge, using closely related taxa as substitutes for the original plants. Our research also shows that the Thai Hmong evolved their current knowledge out of their original plant knowledge through transmission and exchange between Hmong communities. The use of exotic species by refugees after resettlement is complex and cannot simply be attributed to acculturation and new knowledge, because each species has a different source, importance, and history.
Supplementary Materials: The following are available online at http://www.mdpi.com/2223-7747/8/11/500/s1, Table S1: Alphabetical listing of 69 exotic species used by the Hmong in six villages in Nan province in northern Thailand.
"year": 2019,
"sha1": "3ac54e0bfb17be815a5df43d0115ee17592c7489",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/plants8110500",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3ac54e0bfb17be815a5df43d0115ee17592c7489",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography",
"Medicine"
]
} |
Tailings after Iron Extraction in Bayer Red Mud by Biomass Reduction: Pozzolanic Activity and Hydration Characteristics
Bayer red mud (BRM) is a kind of solid waste with a high hematite content, and its effective utilization is difficult because of its special physicochemical properties. In this work, Fe2O3 in BRM was reduced to Fe3O4 by biomass, and an iron concentrate and highly active tailings were obtained after magnetic separation. The pozzolanic activity and hydration characteristics of the tailings were systematically studied. The results showed that the relatively stable polymerized Si-O and Al-O structures in BRM are destroyed by biomass reduction at 650 °C, and broken bonds and activation points are formed in the structures. The aluminosilicate phases in the BRM then readily transform into active Si and Al substances. The pozzolanic activity of the tailings is greatly improved, with a pozzolanic activity index of 91%. Gel with a high degree of polymerization and ettringite are formed, since the additional active substances and alkali in the tailings promote the hydration reaction of cement-based cementitious materials; this gives the cementitious materials a dense matrix, good mechanical properties, and good environmental performance. This work realizes the fully quantitative utilization of BRM and provides a feasible route for its resource utilization.
Introduction
BRM is a strongly alkaline industrial solid waste discharged during alumina production from bauxite. Normally, 1.0-1.5 tons of BRM are produced for every ton of alumina [1][2][3]. The emission of BRM is increasing rapidly with the growth of global alumina production. In 2018, about 160 million tons of BRM were produced globally, and the current global stockpile of BRM is about 4 billion tons [4][5][6]. It is worth noting that a large amount of BRM is currently stacked in BRM dams, since the comprehensive utilization rate of BRM is only about 5% [4]. BRM has a high specific surface area, and fine BRM particles float on the wind after weathering, causing haze, which seriously affects the normal breathing of humans and animals. Furthermore, if a BRM storage yard is breached, the highly alkaline BRM leachate will cause soil salinization, disturb the normal growth of plant roots, and seriously damage the ecological environment. Therefore, it is urgent to develop methods to improve the utilization rate of BRM.
One of the ways to significantly improve the comprehensive utilization rate of BRM is to use BRM as a supplementary cementitious material to prepare building materials [1,[7][8][9]. The current research results showed that most of the phases in red mud have little reactivity in the mixture with cement [5], and the utilization rate of BRM as a supplementary cementitious material is about 10% due to its low pozzolanic activity and high sodium alkali content [4]. There are two main reasons for the low pozzolanic activity of BRM: 1.
The content of silicon and aluminum in BRM is low; 2. most of the silicon and aluminum in BRM is inert. Therefore, the pozzolanic activity of BRM can be effectively improved by increasing the content of silicon and aluminum and activating it. One of the effective ways to activate BRM to improve its pozzolanic activity is to calcine it. The purpose of calcination is to decompose some inert phases (such as gibbsite and Katoite, etc.) in red mud and convert them into phases that are easy to react with water or OH − [7][8][9]. The active silica alumina minerals produced by calcined red mud will be dissolved in water or OH − and participate in the reaction of cement to form C-A-S-H gel [4]. Therefore, the general consensus seems to be that the prerequisite of red mud reactivity is calcination, otherwise the inert substances in red mud will not participate in the hydration reaction of cement, which leads to its low utilization in cement. In addition, the content of silicon and aluminum can be effectively increased by extracting the iron oxide in BRM as BRM contains more iron oxide (about 40%). In recent years, although the global iron and steel industry has developed rapidly, iron ore resources have become less and less, and iron ore grades have become lower and lower. In order to alleviate this problem, many researchers have studied the iron recovery in BRM. Zhu et al. [10] blended BRM and Na 2 CO 3 , then added soft coal as a reducing agent after drying, roasted at 1050 • C for 80 min, and then ground the roasted product and magnetically separated the iron from it. Liu et al. [11] used low ash coal char with a carbon content of 92.9% as a reducing agent. After the BRM was roasted under inert gas, the Na and Al in the BRM were recovered by water leaching, and the residue was finally recovered iron by magnetic separation. Borra et al. 
[12] used graphite powder as a reducing agent and wollastonite powder as a fluxing additive, and recovered iron from BRM through a reduction smelting process in an argon atmosphere. Samouhos et al. [13] used hydrogen as a reducing agent to convert hematite into magnetite at 480 °C, and extracted the magnetite by wet magnetic separation. The above studies on extracting iron from BRM achieved good results. However, there are few investigations of the tailings left after iron extraction from BRM.
Using biomass to reduce BRM can effectively achieve the goals of energy conservation and clean production, as biomass is a renewable resource that pollutes less than traditional reducing agents such as coal and carbon. Our research group previously studied the reduction of BRM by biomass, which can reduce all of the iron oxide to magnetite; the magnetite is then separated from the BRM by magnetic separation [4]. An important phenomenon was discovered: 650 °C is not only the optimal reduction temperature, but also the temperature at which thermal calcination maximizes the pozzolanic activity of BRM. In our previous report [4], the mechanism of the magnetization of Fe2O3 in BRM to magnetite by biomass was analyzed in depth. However, the mechanism by which the pozzolanic activity of BRM increases while biomass reduces its iron component has not been studied in depth. Elucidating this mechanism can therefore guide research on BRM as a supplementary cementitious material.
In this work, the pozzolanic activity and hydration characteristics of the tailings obtained after iron extraction from biomass-magnetized BRM were investigated in depth. The influence of the tailings, used as a supplementary cementitious material, on the properties and microstructure of cementitious materials was explored. The microstructure of the cementitious materials was characterized, and the mechanism behind the enhanced pozzolanic activity of the tailings is revealed.
Materials and Methods
BRM was obtained from an aluminum plant in Shanxi, China, where it is produced by the Bayer process. The BRM was dried, ground, and sieved to a particle size range of 0-0.074 mm. Biomass (bamboo powder) was obtained from a company in Beijing, China, and was crushed to a particle size range of 0-2 mm. The cement (grade 42.5) was obtained from a cement company in Hebei, China. XRF was used to analyze the main chemical constituents of BRM and cement, as shown in Table 1. The selected bamboo powder was air-dried, crushed, and sieved according to particle size. The elemental analysis of the bamboo powder was carried out according to the standard GB/T 212-2008 [14], and Table 2 shows the results. Volatile matter accounts for the largest proportion of the bamboo powder, and the ash content is the lowest. Following our previous research [4], the BRM and bamboo powder were uniformly mixed at the set mass ratio (5:1), and cylindrical test pieces of ϕ 8 × 8 mm were prepared under a pressure of 20 MPa; the test pieces were then placed in the center of a tubular furnace. High-purity N2 gas (flow rate: 100 mL·min−1) was introduced into the furnace, which was then heated at 10 °C/min to 650 °C, held for 30 min, and cooled to room temperature under the nitrogen atmosphere. The Fe3O4 in the BRM was extracted by magnetic separation to obtain the Tailings. The chemical composition of the Tailings is shown in Table S1 (see Supplementary Materials), and their particle size distribution is shown in Figure S1 (see Supplementary Materials). In addition, BRM calcined at 650 °C in the tubular furnace (CBRM) was prepared as a control group.
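As a quick sanity check on the roasting schedule above, the ramp and total furnace times follow from simple arithmetic (the 20 °C ambient start temperature is an assumption; the text gives only the ramp rate, target temperature, and hold time):

```python
def furnace_schedule(start_c=20.0, target_c=650.0, rate_c_per_min=10.0, hold_min=30.0):
    """Minutes for the linear heating ramp and for the whole ramp-plus-hold run."""
    ramp_min = (target_c - start_c) / rate_c_per_min
    return ramp_min, ramp_min + hold_min

ramp, total = furnace_schedule()
print(ramp, total)  # 63.0 93.0
```

So a single reduction run spends roughly an hour ramping and half an hour at 650 °C before the nitrogen-atmosphere cool-down.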
BRM, CBRM, and Tailings were used to prepare cementitious materials according to the proportions in Table 3. Referring to GB/T 17671-1999 [15], cement mortar experiments were carried out on 40 × 40 × 160 mm samples with a water-cement ratio of w/c = 0.5. Specimens were demoulded after curing for one day at (20 ± 1) °C and R.H. = (95 ± 1)%, and then cured to the corresponding age under the same conditions. The preparation process is shown in Figure 1. In addition, in order to study the hydration behavior of the cementitious materials, pastes of BRM-cement, CBRM-cement, and Tailings-cement were prepared.

Table 3. Design of different cement mortars (g).

Number           BRM   CBRM  Tailings  Cement  Standard Sand
BRM-cement       135   -     -         315     1350
CBRM-cement      -     135   -         315     1350
Tailings-cement  -     -     135       315     1350
Cement           -     -     -         450     1350

The X-ray diffraction (XRD) patterns of the samples were collected with an X-ray diffractometer (D8 ADVANCE, Bruker, Karlsruhe, Germany). Thermogravimetry (TG) was performed on a NETZSCH STA 449F5 (Selb, Germany). The structure and chemical bonds of the samples were characterized by Fourier transform infrared spectroscopy (FTIR, Nicolet iS10). Scanning electron microscopy with energy dispersive spectroscopy (SEM-EDS, SU8020) was used to observe and analyze the micromorphology of the samples. The 29Si nuclear magnetic resonance (NMR) spectra were recorded on a Bruker AVANCE III instrument (Karlsruhe, Germany). The pore structure of the samples was analyzed by mercury intrusion porosimetry (MIP, AutoPore IV 9500, Atlanta, GA, USA).
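The proportions in Table 3 correspond to a 30% mass replacement of cement and, at w/c = 0.5, a water dose of 225 g per batch. A small sketch (computing water on the total binder mass, which is the usual convention for this mortar test but is stated here as an assumption, since the table lists only the solids):

```python
def mortar_batch(scm_g, cement_g, sand_g=1350.0, w_b=0.5):
    """Replacement ratio and water mass for one mortar batch (w/b on total binder)."""
    binder = scm_g + cement_g
    return scm_g / binder, w_b * binder

replacement, water = mortar_batch(135.0, 315.0)  # e.g. the Tailings-cement row
print(replacement, water)  # 0.3 225.0
```

All three blended mortars use the same 30% replacement level, which is why their 28 d strengths can be compared directly against the plain cement control.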
The pozzolanic activity index was calculated in accordance with the Chinese standard GB/T 12957-2005, "Test method for activity of industrial waste slag used as addition to cement" [16]. Thirty percent BRM was added to the cement to prepare the red mud-cement mortar, and the compressive strength of the red mud-cement mortar at 28 d was tested. The pozzolanic activity index K was calculated by the following formula:

K = (R1/R2) × 100%

where R1 is the 28 d compressive strength of the red mud-cement mortar and R2 is the 28 d compressive strength of the cement mortar.

The pozzolanic activity of BRM is positively related to its content of amorphous phase. The amorphous phase structure of BRM is composed of Si and Al tetrahedra, which are connected by bridging oxygen bonds in two ways: Si-O-Si and Si-O-Al. Si (Al) can be divided into the structures SiQ0, SiQ1, SiQ2, SiQ3, and SiQ4 according to the number of bridging oxygen bonds. The aluminosilicate frameworks in BRM have different degrees of polymerization, and the degree of polymerization reflects the crystallization tendency of the aluminosilicate. Zhang [17] proposed the concept of the relative bridging oxygen number (RBO) to evaluate the degree of polymerization, calculated as

RBO = (1/4) Σ n·Q^n

where Q^n is the relative area of the fitted peak corresponding to n bridging oxygens.

The toxicity leaching tests of the cementitious materials were determined following the European standard EN 12457. The leaching tests were carried out by mixing 100 g of cementitious material with 1 L of deionized water in a 1 L polypropylene container subjected to a mixing system. Mixing took place for 24 ± 0.5 h at 30 rpm, after which the mixture was left to equilibrate and then vacuum filtered. Lastly, the filtrate was acidified (pH ≈ 2.5-3) and the concentrations of Na, Cr, Cd, Pb, and As in the filtrate were measured by inductively coupled plasma mass spectrometry (ICP-MS).
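The activity index defined by GB/T 12957-2005 reduces to a 28 d strength ratio; a minimal sketch (the paper reports R1 values and the resulting indices but not the plain-cement reference strength R2, so the 48.7 MPa below is purely illustrative):

```python
def pozzolanic_activity_index(r1_mpa, r2_mpa):
    """GB/T 12957-2005 activity index: 28 d compressive strength of the blended
    mortar (R1) over that of the plain cement mortar (R2), as a percentage."""
    return 100.0 * r1_mpa / r2_mpa

# 44.3 MPa is the reported 28 d strength of Tailings-cement; 48.7 MPa is a
# hypothetical reference strength chosen only to illustrate the calculation.
print(round(pozzolanic_activity_index(44.3, 48.7)))  # 91
```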
Mechanical Properties and Pozzolanic Activity

The compressive and flexural strengths of BRM-cement, CBRM-cement, and Tailings-cement at 3, 7, and 28 d are shown in Figures 2 and 3. Here, it can be seen that the compressive and flexural strengths of all the samples increase with hydration time. The compressive strength of Tailings-cement is slightly higher than that of BRM-cement and CBRM-cement; at 3, 7, and 28 d it is 23.7, 28.8, and 44.3 MPa, respectively, which meets the requirements of grade 42.5 ordinary Portland cement. The flexural strength of all the samples follows the same trend as the compressive strength; for Tailings-cement it is 6.44, 8.37, and 9.01 MPa at 3, 7, and 28 d, respectively, which also meets the grade 42.5 requirements. It is worth noting that the different compressive strengths of BRM-cement, CBRM-cement, and Tailings-cement indicate that they have different pozzolanic activities. According to GB/T 12957-2005, "Test method for activity of industrial waste slag used as addition to cement", the pozzolanic activity indices of BRM, CBRM, and Tailings were calculated, as shown in Figure 4: 74%, 86%, and 91%, respectively. Next, the XRD, TG-DTG, and FTIR results for BRM, CBRM, and Tailings were analyzed to reveal why the pozzolanic activity of Tailings is enhanced compared with BRM and CBRM.
XRD, TG, and FTIR Analysis of BRM, CBRM, and Tailings

The phase composition has an important influence on the pozzolanic activity of BRM. The amorphous phase is the most active phase in BRM, while most of the crystalline components are considered inert and hardly participate in the hydration reaction. The XRD patterns of BRM are shown in Figure 5a; the main mineral compositions of BRM include Ca3Al2(SiO4)(OH)8. Compared with the BRM pattern in Figure 5a, the diffraction peak intensities of the CBRM crystals decrease gradually, especially those of cancrinite and calcite. The decrease in the peak intensities of the crystals reveals the appearance of amorphous substances, and the amorphous hump between 15° and 40° becomes larger. This shows that some aluminosilicate crystals are decomposed and transformed into active silicon and aluminum substances, which may be why the pozzolanic activity of CBRM is higher than that of BRM. Figure 5c shows the XRD patterns of Tailings, whose main mineral compositions are CaAl2Si2O7(OH)2·H2O (lawsonite), Na6Ca2Al6Si6O24(CO3)2·2H2O (cancrinite), CaO (lime), and Ca3Al2(Si3O4, CO3, OH)3 (grossular). Compared with CBRM, Fe2O3 is almost completely transformed into Fe3O4 and removed by magnetic separation, and the diffraction peak intensities of all the crystals in Tailings decrease further. The amorphous hump at 15°-40° becomes significantly larger, indicating that some aluminosilicate phases are transformed into active silicon and aluminum substances under the action of the biomass. This may be why the pozzolanic activity of Tailings is higher than that of CBRM. The results show that the degree of amorphization of each crystal phase in BRM, CBRM, and Tailings is different, as is its contribution to the pozzolanic activity. The presence of bamboo powder is conducive to increasing the degree of amorphization of all the crystal phases in BRM.
The TG-DTG curves of BRM are shown in Figure 6. Here, it can be seen that the continuous mass loss of BRM occurs at 0-1000 • C and the mass loss rate is 14.63%. The mass loss rate in the range of 0-225 • C and 700-1000 • C is slower than that in the range of 225-700 • C. The DTG curve shows that BRM mainly contains two obvious mass loss stages in the heating process. The first stage is 30-500 • C, which is mainly the removal of physically adsorbed water and chemically bound water in BRM. A large amount of physically adsorbed water has been removed during the drying process as the BRM was dried at 100 • C before the experiment, only 0.26% of the physically adsorbed water is removed within the range of 30 • C to 105 • C. The weight loss rate is 7.92% in the range of 105-500 • C, which is mainly the removal of chemically bound water. The second stage is located in the temperature range of 500 to 770 • C, and the weight loss rate is 4.86%, which is mainly caused by the decomposition of aluminosilicate-carbonate in the BRM to release CO 2 .
It is worth noting that the FTIR spectra at 800-1200 cm−1 of BRM, CBRM, and Tailings are similar, but the peak areas show some differences. Based on existing research results, the characteristic peaks of SiQ0, SiQ1, SiQ2, SiQ3, and SiQ4 are 840-900 cm−1, 900-950 cm−1, 950-1030 cm−1, 1030-1100 cm−1, and 1100-1200 cm−1, respectively. The Origin software was used to separate and fit the peaks at 800-1200 cm−1, and the peak areas and RBO were calculated. The related peak information is shown in Figure 7 and Table 4. It can be seen from Table 4 that the content of SiQ4 in BRM is higher than that in CBRM and Tailings. Combined with the XRD results, SiQ4 in BRM represents the structure with more Si-O-Si (Al), which indicates that its crystals have a high degree of polymerization and stable chemical properties. The SiQ4 content of CBRM is lower than that of BRM, which indicates that some stable SiQ4 structures in BRM are depolymerized and substances with low degrees of polymerization are formed at 650 °C. It is worth noting that the SiQ4 content of Tailings is significantly lower than that of CBRM, which indicates that bamboo powder promotes the depolymerization of the SiQ4 structure at 650 °C. The alkaline environment of the cement hydration reaction is conducive to the dissolution of the silicon and aluminum components in Tailings, which makes the pozzolanic activity of Tailings higher than that of BRM and CBRM.
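The peak-separation step described above (performed with Origin in this study) can be sketched in Python: fit Gaussian bands to the 800-1200 cm−1 region, compute relative band areas, and derive an RBO value. The band centres, the Gaussian line shape, and the RBO formula (area-weighted mean number of bridging oxygens divided by four) are illustrative assumptions here, not the authors' exact fitting protocol; the demo runs on a synthetic spectrum.

```python
# Sketch of the FTIR deconvolution into SiQ0..SiQ4 bands (assumed protocol).
import numpy as np
from scipy.optimize import curve_fit

# Mid-band centre guesses (cm^-1) inside the ranges quoted in the text.
BAND_CENTRES = {0: 870.0, 1: 925.0, 2: 990.0, 3: 1065.0, 4: 1150.0}

def gaussian(x, amp, cen, sig):
    return amp * np.exp(-0.5 * ((x - cen) / sig) ** 2)

def model(x, *p):
    # p = (amp0, cen0, sig0, amp1, cen1, sig1, ...)
    y = np.zeros_like(x)
    for i in range(0, len(p), 3):
        y += gaussian(x, *p[i:i + 3])
    return y

def fit_si_qn(wavenumber, absorbance):
    """Fit five Gaussians; return {n: relative area} and an RBO value."""
    p0 = []
    for c in BAND_CENTRES.values():
        p0 += [absorbance.max() / 5, c, 25.0]
    popt, _ = curve_fit(model, wavenumber, absorbance, p0=p0, maxfev=20000)
    areas = {}
    for i, n in enumerate(BAND_CENTRES):
        amp, _, sig = popt[3 * i:3 * i + 3]
        areas[n] = amp * abs(sig) * np.sqrt(2 * np.pi)  # analytic Gaussian area
    total = sum(areas.values())
    rel = {n: a / total for n, a in areas.items()}
    # One common definition of the relative bridging-oxygen number:
    rbo = sum(n * a for n, a in rel.items()) / 4.0
    return rel, rbo

# Demo on a synthetic, noiseless spectrum built from the same band centres.
x = np.linspace(800, 1200, 801)
true = [(0.2, 870, 20), (0.3, 925, 20), (0.5, 990, 25), (0.8, 1065, 25), (0.4, 1150, 30)]
y = sum(gaussian(x, *t) for t in true)
rel, rbo = fit_si_qn(x, y)
```

A lower RBO from such a fit would correspond to the depolymerization the text describes for Tailings relative to BRM.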
The results showed that the Si-O-Si (Al) bond of the aluminosilicate in BRM is destroyed after being reduced by biomass, and the polymerization degree of the aluminosilicate decreases, which leads to an increase of the active substances of silicon and aluminum in Tailings.

Figure 7. FTIR spectra of BRM, CBRM, and Tailings.

In summary, the diaspore in BRM decomposes into Al2O3 at about 500 °C; there is no corresponding characteristic peak in the XRD of CBRM and Tailings, which indicates that the alumina may be amorphous. In the range of 300-600 °C, katoite and yugawaralite begin to lose part of their -OH and gradually transform into chantalite and lawsonite. The free -OH of the aluminosilicates in BRM is removed, which is conducive to the depolymerization of the aluminosilicates and the reduction of the polymerization degree of CBRM. Cancrinite in BRM begins to decompose into lime and amorphous aluminosilicate at about 650 °C. More importantly, the lime and the reducing gas H2 produced by the pyrolysis of bamboo powder can transform all the hematite in BRM into magnetite, and also promote the decomposition of the aluminosilicate phase (cancrinite), which produces CaO and active silicon and aluminum. At the same time, the FTIR spectra of Tailings show that Si-O and Al-O bonds are broken, which indicates that the polymerization degree of the aluminosilicate is reduced and the pozzolanic activity of Tailings is improved.
XRD Analysis of Cement-Based Cementitious Materials
The XRD patterns of hardened pastes of BRM-cement, CBRM-cement, and Tailings-cement at 28 d are shown in Figure 8. Here, it can be seen that the ettringite and aluminosilicate are generated in BRM-cement, CBRM-cement, and Tailings-cement, which promote the strength development. It is worth noting that the peaks of Fe 2 O 3 appear in the BRM-cement and CBRM-cement, which indicate that Fe 2 O 3 does not participate in the hydration reaction of cementitious materials. Therefore, the high content of Fe 2 O 3 in BRM is one of the main reasons for its low pozzolanic activity, which also shows that one of the reasons for the increase in pozzolanic activity of Tailings is that the Fe 2 O 3 transformed into Fe 3 O 4 is magnetically separated. Compared with the XRD of BRM, CBRM, and Tailings, the center of miscellaneous diffusion peaks (15-40 • ) of cementitious materials shift to the right, which indicates the formation of amorphous phase in the product (C-S-H gel and C-A-S-H gel) [18]. The peaks with Ca(OH) 2 appear in the XRD patterns of BRM-cement, CBRM-cement, and Tailings-cement. However, the intensity of the peak is different due to the different extent of hydration reaction. Ca(OH) 2 is formed by the reaction of CaO with H 2 O, which can generate a pozzolanic reaction with the active SiO 2 and Al 2 O 3 in cementitious materials, and lead to the formation of gel and ettringite [19]. Therefore, compared to BRM-cement, the CBRM-cement and Tailings-cement get better activation and produce more active SiO 2 and Al 2 O 3 and with stronger pozzolanic reactivity.
FTIR Analysis of Cement-Based Cementitious Materials

The absorption spectrum of Tailings-cement is the widest [19]. The weak absorption peaks at 1643 cm−1 indicate the vibration spectrum of H-O-H. The vibration peak observed at 871 cm−1 is caused by the asymmetric stretching vibration of Si-OH, and 461 cm−1 represents an absorption band of Si-O [20].

The absorption peaks at 3643 cm−1 and 1637 cm−1 indicate the telescopic and free-water bending vibrations of Ca(OH)2, respectively [21]. The absorption peaks of Tailings-cement at 3644 cm−1 and 1643 cm−1 weaken, implying that more Ca(OH)2 is involved in the hydration reaction, and more free water is changed to bound water during the reaction. Compared with the BRM-cement and CBRM-cement, the absorption peak value of the spectra at 3440 cm−1 shows a relative increasing trend, implying that a much higher amount of [Al(OH)6]3− is generated in addition to the hydration products. This is similar to the results of the XRD analysis. The hydration degree of Tailings-cement is high, and more hydration products are formed.
Pore Structure Analysis of Cement-Based Cementitious Materials

Characterization of the pores is an important component of the microstructure of cementitious materials. The pore diameter distribution, the number of gel pores, and the gel density of the cementitious materials affect their strength. The MIP results are shown in Table 5. It is generally believed that in cementitious materials, pores larger than 1 µm are harmful, and cementitious materials with fewer harmful pores have higher mechanical properties [18][19][20][21][22]. It is obvious that the Tailings-cement has the fewest harmful pores, and the CBRM-cement takes second place. As is well known, pores of size ≤10 nm are called gel pores, and the gel pores were generated with C-S-H gel and C(N)-A-S-H gel in BRM-cement, CBRM-cement, and Tailings-cement [23]. The pozzolanic activity of BRM, CBRM, and Tailings can be inferred from the content of gel pores. The gel pores in BRM-cement, CBRM-cement, and Tailings-cement are 32.82%, 34.32%, and 35.94%, respectively. The amount of gel pores is highest in Tailings-cement, which indicates that the most gel is produced in Tailings-cement and that the Tailings-cement has higher pozzolanic reactivity. A small total pore volume of a cementitious material indicates that its gel density is large and its mechanical properties are high [24]. The total pore volume, average pore diameter, and porosity of Tailings-cement are the best, which indicates that the Tailings-cement produces more hydration products.
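The pore classification used above (gel pores ≤ 10 nm, harmful pores > 1 µm) can be applied to MIP data with a few lines of code. The thresholds come from the text; the input format, a list of pore-diameter bins paired with incremental intruded volumes, and all the numbers in the demo are hypothetical.

```python
# Toy classification of an MIP pore-size distribution into the categories
# used in the text: gel pores (<= 10 nm), harmful pores (> 1 um), and the rest.

def pore_fractions(diameters_nm, volumes):
    """Return the volume fraction of gel, harmful, and intermediate pores."""
    total = sum(volumes)
    gel = sum(v for d, v in zip(diameters_nm, volumes) if d <= 10)
    harmful = sum(v for d, v in zip(diameters_nm, volumes) if d > 1000)
    return {
        "gel": gel / total,
        "harmful": harmful / total,
        "intermediate": (total - gel - harmful) / total,
    }

# Hypothetical MIP bins (diameter in nm) and incremental volumes (mL/g).
d = [5, 8, 50, 200, 800, 2000, 5000]
v = [0.010, 0.008, 0.015, 0.010, 0.005, 0.001, 0.001]
frac = pore_fractions(d, v)
```

On real data, a higher gel-pore fraction together with a lower harmful-pore fraction would support the ranking Tailings-cement > CBRM-cement > BRM-cement reported in Table 5.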
The curves of log differential intrusion and cumulative pore volume of BRM-cement, CBRM-cement, and Tailings-cement are shown in Figures 10 and 11. The pore size distribution curves of BRM-cement, CBRM-cement, and Tailings-cement are approximately normally distributed, and the cumulative pore volume of Tailings-cement is the smallest. This indicates that the Tailings-cement obtained the best and densest pore structure and has the highest strength.
Figure 10. The curves of log differential intrusion of BRM-cement, CBRM-cement, and Tailings-cement.
Thermal Analysis of Cement-Based Cementitious Materials
The TG-DTG curves of the hardened pastes of BRM-cement, CBRM-cement, and Tailings-cement at 28 d are shown in Figure 12. The mass loss of BRM-cement, CBRM-cement, and Tailings-cement is continuous with the increase of temperature, and their total mass losses are 20.86%, 23.02%, and 22.40%, respectively. The mass loss of BRM-cement, CBRM-cement, and Tailings-cement can be divided into four main ranges: The first range is 30-100 °C, which represents the mass loss of free water in the hardened paste. The second range is 100-300 °C, which represents the mass loss of bound water of C-S-H gel, C(N)-A-S-H gel, and ettringite in BRM-cement, CBRM-cement, and Tailings-cement [25]. The third temperature range is about 300-750 °C, which represents the decomposition of the Si(Al)-OH structure and Ca(OH)2 in BRM-cement, CBRM-cement, and Tailings-cement [26]. The fourth temperature range is about 750-1000 °C, which corresponds to the decomposition of calcium carbonate in BRM-cement, CBRM-cement, and Tailings-cement.

Figure 11. The curves of cumulative pore volume of BRM-cement, CBRM-cement, and Tailings-cement.
The mass loss parameters of BRM-cement, CBRM-cement, and Tailings-cement in different ranges are shown in Table 6. The mass loss of BRM-cement, CBRM-cement, and Tailings-cement in different ranges can indirectly reflect the corresponding content of hydration products. The mass loss of BRM-cement, CBRM-cement, and Tailings-cement in the range of 30-100 °C is 4.44%, 5.32%, and 5.14%, respectively. The mass loss of BRM-cement, CBRM-cement, and Tailings-cement in the range of 100-300 °C is 3.40%, 3.61%, and 4.54%, respectively. This indicates that the total amount of C-S-H gel, C(N)-A-S-H gel, and ettringite in the hydration products of Tailings-cement is the largest. The mass loss of BRM-cement, CBRM-cement, and Tailings-cement in the range of 300-750 °C is 10.19%, 10.40%, and 10.96%, respectively, which indicates that the total amount of Si(Al)-OH structure and Ca(OH)2 of Tailings-cement is the largest. It is worth noting that the XRD patterns show that the content of Ca(OH)2 in Tailings-cement is less, which indicates that Tailings-cement has more Si(Al)-OH structure. Therefore, the active substances of silicon and aluminum in the Tailings-cement participate in the hydration reaction.
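A sketch of how such per-range mass losses are read off a TG curve: interpolate the residual mass at each range boundary and take the difference. The temperature ranges are the ones defined in the text; the linear TG curve in the demo is synthetic, chosen only so the arithmetic is easy to check.

```python
# Per-range mass loss from a TG curve (illustrative sketch).
import numpy as np

RANGES = [(30, 100), (100, 300), (300, 750), (750, 1000)]  # deg C

def mass_loss_per_range(temperature, mass_pct, ranges=RANGES):
    """Mass loss (in % of initial mass) inside each temperature range."""
    losses = []
    for t_lo, t_hi in ranges:
        m_lo = np.interp(t_lo, temperature, mass_pct)
        m_hi = np.interp(t_hi, temperature, mass_pct)
        losses.append(m_lo - m_hi)
    return losses

# Synthetic TG curve: 100% at 30 C dropping linearly to 80% at 1000 C.
T = np.linspace(30, 1000, 500)
m = 100 - 20 * (T - 30) / (1000 - 30)
losses = mass_loss_per_range(T, m)
```

On a measured curve the four values would correspond to the free-water, gel/ettringite bound-water, Si(Al)-OH/Ca(OH)2, and carbonate losses discussed above.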
The mass loss of BRM-cement, CBRM-cement, and Tailings-cement in the range of 750-1000 °C is 2.83%, 3.69%, and 1.76%, respectively. Compared with BRM-cement and CBRM-cement, the carbonization degree of Tailings-cement is smaller, which indicates that the hydration reaction of Tailings-cement is more thorough. Amorphous and weakly amorphous Si phases in cementitious materials were characterized by 29Si NMR. In this research, the 29Si NMR spectra were used to study the deeper structural transformation of the cementitious materials [27]. It is worth noting that polymerization or depolymerization can be revealed by the change in the number of bridging oxygen bonds during the hydration reaction. The relative bridge oxygen (RBO) number allows an accurate calculation of the degree of polymerization of the [SiO4] tetrahedra [28].
The chemical shift ranges of the structural units Qn in CBRM-cement and Tailings-cement are shown in Figure 13 and Table 7. There are four resonance peaks in the hydration products of CBRM-cement and Tailings-cement, which represent bridging oxygen structures with different Si-O. Replacement of Si by Al results in a chemical shift of 3 to 5 ppm towards significantly more positive values [29]. As a result, Q3(1Al) is characterized by peaks approaching −91 ppm, while the chemical shift around −100 ppm corresponds to Q4(1Al). It can be seen that the chemical shifts in CBRM-cement are −75.91, −91.49, −101.25, and −110.84 ppm, respectively. The chemical shifts of Tailings-cement are similar to those of CBRM-cement. The main peaks are Q0, Q3(1Al), Q4(1Al), and Q4, respectively. Q0 is mainly caused by C2S or C3S in the raw materials, indicating that some Si does not participate in the hydration reaction. Q3(1Al), Q4(1Al), and Q4 are caused by C-S-H gel, C-A-S-H gel, and aluminosilicate [27]. The peak intensity of Q0 in CBRM-cement is significantly stronger than that in Tailings-cement, which means that the hydration of Tailings-cement is more complete. The relative areas for CBRM-cement and Tailings-cement are shown in Table 8; the relative areas of Q4(1Al) and Q4 in Tailings-cement are greater than those in CBRM-cement, which proves that more gel products are generated in Tailings-cement, which then shows better activation [30]. Meanwhile, the RBO of CBRM-cement and Tailings-cement are 60.07% and 62.5%, respectively. This indicates that the polymerization level of the [SiO4] tetrahedra in Tailings-cement is higher than in CBRM-cement, and the Tailings-cement has a better polymerization structure. It is worth noting that [AlO4]− in the C-A-S-H gel has a charge effect on cations, and the [AlO4]− structure content in the Tailings-cement is higher, which indicates that Na+ in the red mud can be effectively solidified.
SEM can be used to observe the morphology of the cementitious materials prepared with BRM, CBRM, or Tailings as raw materials at different hydration ages. Meanwhile, the chemical elements of the characteristic points were measured and analyzed by EDS. Combined with the energy spectrum analysis, the composition of the surface materials of the sample can be identified, and the factors influencing the strength growth of the net slurry sample can be investigated. Figure 14 shows the SEM pictures and EDS of BRM-cement, CBRM-cement, and Tailings-cement at 28 d. Table 9 shows the element distribution of regions 1 and 2 in Figure 14(C2). It can be seen from Figure 14 and Table 9 that there are fibrous C-A-S-H gel and rod-like ettringite in CBRM-cement and Tailings-cement. Only a few rod-like ettringites can be observed in BRM-cement. It can be clearly observed that ettringite is mostly distributed in the pores and pits, which makes the structure of the slurry more compact [20]. More fibrous C-A-S-H gel is interwoven in the network structure, and the ettringite is connected together to reduce porosity, which gives the cementitious materials better cementitious properties [17]. It is worth noting that Tailings-cement has more hydration products and fewer pores, and it has the highest compressive and flexural strength.
Leaching test findings have been contrasted against the EU Directive on drinking water quality (98/83/EC). Table 10 shows that the hazardous metals Na, Cr, and Pb comply with the standard requirements. According to previous research results [19][20][21][22], the C-S-H gel, C-A-S-H gel, and ettringite produced in cementitious materials can effectively solidify Na and heavy metal ions. It can be seen from Table 10 that heavy metal ions were not detected in the cementitious materials at 28 d, and the leaching concentration of Na in BRM-cement, CBRM-cement, and Tailings-cement is 47.261, 28.543, and 10.397 mg/L, respectively, which indicates that more active silicon and aluminum in Tailings are involved in the hydration reaction of the cementitious materials, and the gel and ettringite can effectively solidify Na and heavy metal ions. Therefore, the cementitious materials are environmentally friendly, and the pozzolanic activity and environmental performance of Tailings meet the requirements of supplementary cementitious materials for cement. These results show that the method of using bamboo powder to reduce BRM in order to extract iron and synergistically activate the pozzolanic activity of its Tailings is very promising.
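The compliance check described above can be expressed as a small sketch. The Na concentrations are the values quoted in the text; the limit values used here (Na 200 mg/L, Cr 0.05 mg/L, Pb 0.01 mg/L) are our reading of Directive 98/83/EC and should be verified against the directive itself.

```python
# Minimal leaching-compliance check against assumed 98/83/EC limit values.
LIMITS_MG_L = {"Na": 200.0, "Cr": 0.05, "Pb": 0.01}  # assumed limits, verify

def complies(measured, limits=LIMITS_MG_L):
    """Return {element: True/False} for each measured concentration (mg/L)."""
    return {el: c <= limits[el] for el, c in measured.items()}

# Na leaching concentrations quoted in the text (mg/L).
na_leaching = {"BRM-cement": 47.261, "CBRM-cement": 28.543, "Tailings-cement": 10.397}
results = {name: complies({"Na": c})["Na"] for name, c in na_leaching.items()}
```

All three pastes pass the Na check under these assumed limits, consistent with the compliance statement in the text.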
Conclusions
The pozzolanic activity and hydration characteristics of Tailings after iron extraction in BRM by biomass reduction were systematically studied, and the following conclusions were obtained: The relatively stable polymerization structures of Si-O and Al-O in BRM were destroyed under the effect of biomass reduction at 650 • C, and some fracture bonds and activation points are formed in the structures. The inert aluminosilicate in BRM was easy to be transformed into active substances, which made the Tailings have high pozzolanic activity. The pozzolanic activity index of the Tailings was 91%, which was higher than that of BRM. As a supplementary cementing material, the Tailings can meet the requirements of ordinary Portland cement 42.5 in terms of compressive and flexural properties at 3, 7, and 28 d.
The active substances of Si and Al formed the gel structure with a high degree of polymerization during the hydration reaction process, and the alkali in the BRM made the hydration reaction of the system more complete. The cementitious materials prepared from Tailings and cement contain a large amount of C-S-H gel, C(N)-A-S-H gel, and ettringite, which not only play a positive role in improving the properties, but also have a good solidification effect on Na and heavy metal ions. The close combination of gel and ettringite effectively fills the pores to obtain a higher density of the matrix.
In summary, hematite in BRM is reduced to magnetite by biomass, and the iron concentrate separated by magnetic separation can be used for ironmaking. Additionally, the Tailings can be used to prepare cementitious materials. This realizes the full utilization of BRM and has economic and environmental value.
Author Contributions: Conceived and designed the analysis, collected the data, contributed data or analysis tools, performed the analysis, wrote the paper, Y.W.; conceived and designed the analysis, collected the data, contributed data or analysis tools, performed the analysis, wrote the paper, X.L.; contributed data or analysis tools, performed the analysis, wrote the paper, Y.L.; contributed data or analysis tools, performed the analysis, D.L.; contributed data or analysis tools, performed the analysis, W.Z.; contributed data or analysis tools, performed the analysis, wrote the paper, Y.X. All authors have read and agreed to the published version of the manuscript.
The AMIGA sample of isolated galaxies IX. Molecular gas properties
Aims: We characterize the molecular gas content, using CO emission, of a redshift-limited subsample of isolated galaxies from the AMIGA (Analysis of the interstellar Medium of Isolated GAlaxies) project. Methods: We present the 12CO(1-0) data for 273 AMIGA galaxies. We constructed a redshift-limited sample containing galaxies with 1500 km s−1 < v < 5000 km s−1 and excluded objects with morphological evidence of possible interaction. This sample (n = 173) is the basis for our statistical analysis. It is dominated, both in absolute number and in detection rate, by galaxies of type T = 3-5 (Sb-Sc). Most galaxies were observed with a single pointing towards their centers. Therefore, we performed an extrapolation to the total molecular gas mass expected in the entire disk based on the assumption of an exponential distribution. We then studied the relationships between the molecular gas mass M_H2 and other galactic properties (the blue luminosity L_B, the square of the optical diameter D^2_25, the K-band luminosity L_K, the far-infrared luminosity L_FIR, and the atomic gas mass M_HI). Results: We find correlations between M_H2 and L_B, D^2_25, L_K, and L_FIR. The tightest correlation of M_H2 holds with L_FIR and, for T = 3-5, with L_K, and the poorest with D^2_25. The correlations with L_FIR and L_K are very close to linearity. The correlation with L_B is nonlinear so that M_H2/L_B increases with L_B. The molecular and the atomic gas masses of our sample show no strong correlation. We find a low mean value, log(M_H2/M_HI) = -0.7 (for T = 3-5), and a strong decrease in this ratio with morphological type. The molecular gas column density and the surface density of the star formation rate (the Kennicutt-Schmidt law) show a tight correlation with a roughly unity slope. We compare the relations of M_H2 with L_B and L_K found for AMIGA galaxies to samples of interacting galaxies from the literature and find an indication for an enhancement of the molecular gas in interacting galaxies of up to 0.2-0.3 dex.
Introduction
A major and long-lasting debate in astronomy involves the relative roles of "nature" and "nurture" in galaxy formation and evolution (e.g. Sulentic 1976; Larson & Tinsley 1978; Joseph & Wright 1985; Bushouse 1987). Although it is broadly accepted that galaxy evolution strongly depends on the environment, the quantitative effect of "nurture" on certain galactic properties is still a matter of debate.
The molecular gas content is an important quantity of a galaxy because it is directly related to its capacity for star formation (SF). We still need to determine, however, how the environment affects the amount of the molecular gas. Galaxies in clusters (Kenney & Young 1989; Boselli et al. 1997, Scott et al. in prep.) and groups (Verdes-Montenegro et al. 1998; Leon et al. 1998) seem to have a normal molecular gas content, even though they can be highly deficient in atomic gas. On the other hand, some authors (Braine & Combes 1993; Combes et al. 1994; Casasola et al. 2004) find an enhanced molecular gas content in interacting galaxies, in contrast to the results of Perea et al. (1997), who concluded that the molecular gas content is not affected by interaction in strongly interacting pairs or Virgo cluster galaxies. (Tables 1, 4 and 5 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5), via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/, and from http://amiga.iaa.es/.)
To clarify the role played by the environment, a well-defined sample of isolated galaxies is needed to serve as a zero level for studies dealing with the effect of interactions. Most previous studies investigating the properties of molecular gas in isolated and interacting galaxies (Solomon & Sage 1988;Sage 1993;Boselli et al. 1997;Helfer et al. 2003;Leroy et al. 2009) have generally not defined any very clear criterion for isolation. Perea et al. (1997) carried out a CO study comparing isolated and interacting galaxies. Their sample of isolated galaxies is composed of 68 galaxies from various sources, selected in a much less rigorous way than the present study and biased towards infrared-luminous objects. The only survey explicitly focusing on isolated galaxies, and in particular on galaxies from the Catalogue of Isolated Galaxies, is the one by Sauty et al. (2003). They present the CO data of 99 optically-selected spiral galaxies with recession velocities up to 14000 km s −1 and briefly compare the properties of the molecular gas mass to the blue luminosity and atomic gas mass. A detailed analysis of the properties of that sample is, however, not presented there. The largest previous CO survey was the FCRAO Extragalactic CO survey (Young et al. 1995) observing ∼ 300 nearby galaxies. The major difference with respect to the present study is that it did not consider the isolation of the galaxies as a criterion. Furthermore, it only contained bright galaxies (either m B,corr < 13 mag, or F 60µm > 5 Jy or F 100µm > 100 Jy) whereas our samples also includes fainter objects.
The project AMIGA ("Analysis of the interstellar Medium of Isolated GAlaxies", Verdes-Montenegro et al. 2005) was started to provide such a reference sample by characterizing the properties of the interstellar medium (ISM) and star formation (SF) in isolated galaxies in the local Universe. It is based on the Catalogue of Isolated Galaxies (CIG, Karachentseva 1973), which is composed of 1050 galaxies located in the Northern hemisphere. The AMIGA project is presented in Verdes-Montenegro et al. (2005). A considerable amount of work has been done since then in order to refine the sample. This work includes the revision of all CIG positions (Leon & Verdes-Montenegro 2003), the determination of POSS2-based morphologies and the identification of galaxies showing signs of possible interaction (Sulentic et al. 2006), and the reevaluation and quantification of the degree of isolation (Verley et al. 2007a,b). The results of this project consistently find that the AMIGA galaxies have the lowest SF activity as well as the lowest presence of Active Galactic Nuclei (AGN) in the local Universe. This is obtained both from the far-infrared (FIR) luminosity derived from IRAS data (Lisenfeld et al. 2007) and the radio continuum emission, which are both SF tracers. The rate of AGN candidates, derived from IRAS colors and radio continuum emission of the AMIGA galaxies, is the lowest compared to similar studies from the literature. Optical photometric analysis of Sb-Sc galaxies in the AMIGA sample showed that most galaxies have pseudo-bulges instead of classical bulges, and a comparison with samples of spiral galaxies selected without isolation criteria revealed that the isolated galaxies tend to host larger bars and are more symmetric, less concentrated, and less clumpy (Durbala et al. 2008). These findings strongly support that the AMIGA sample represents the most isolated galaxies in the local Universe, where secular evolution is dominant. Espada et al. (in prep.)
study the HI content and (Espada et al. 2011) found the smallest fraction of asymmetric HI profiles in the AMIGA sample when compared with any sample yet studied.
The revised AMIGA sample is reasonably complete (∼ 80-95%) down to m_B,corr ≤ 15.0 mag (Verdes-Montenegro et al. 2005) and is currently one of the largest samples of nearby isolated galaxies in the Northern hemisphere. It consists of galaxies whose structure and evolution have been driven largely or entirely by internal rather than external forces, at least during the last 3 Gyr (Verdes-Montenegro et al. 2005). The data are being released and periodically updated at http://amiga.iaa.es, where a Virtual Observatory interface with different query modes has been implemented.
In the present paper we present and analyze CO observations of a redshift-limited subsample of this catalogue. The goal is to characterize the properties of the molecular gas, traced by CO, of isolated galaxies and to provide a reference sample for studies investigating the role of the environment.
The sample
For a study of the molecular gas content we had to restrict the number of galaxies, since observing the entire optically complete sample (n ∼ 700) would have required too much telescope time. We chose to build a redshift-limited subsample by selecting galaxies with recession velocities in the range of 1500−5000 km s−1. The completeness limit of the AMIGA sample, 15 mag, corresponds to blue luminosities of log10(L_B/L_⊙) = 8.55 and log10(L_B/L_⊙) = 9.60 at the distances derived for these velocities with a Hubble constant of 75 km s−1 Mpc−1. The range was chosen in order to avoid (i) very nearby galaxies, for which the condition of isolation is not reliable (Verley et al. 2007a), and (ii) distant galaxies, which are difficult to detect in CO. The restriction in velocity provides us with a sample probing a defined volume in space.
There are 278 galaxies in this velocity range in the CIG. We have CO data for 201 of these objects, mostly from our own observations (180 galaxies) with the 30m telescope of the Instituto de Radioastronomía Milimétrica (IRAM) at the Pico Veleta and with the 14m Five College Radio Astronomical Observatory (FCRAO), and the rest from the literature. We then excluded those galaxies that were identified by us in a visual inspection of optical images as having signs of a possible present or past interaction (see description of Table 1 for more details on the criteria). This leaves us with 173 isolated galaxies with CO data in the velocity range between 1500 and 5000 km s −1 . We refer to this sample as the redshift-limited CO sample and we will use it for the statistical analysis throughout this paper.
Additionally, we have CO data for 72 galaxies outside this velocity range. Six galaxies are from our own observations (with velocities between 5000 and 5500 km s−1) and the rest are from the literature. Thus, in total, we have ¹²CO(1-0) data for 273 CIG galaxies. We refer to this sample as the total CO sample and list the corresponding data in Tables 1 and 5, but we do not use it for any statistical analysis. Recently, an update of the basic properties of the galaxies in the AMIGA sample was carried out for the blue magnitude, optical isophotal diameter D_25, velocity, and morphology and interaction degree, based on higher resolution images (from SDSS or our own images). The details are described in Espada et al. (in prep.). In order to provide a self-contained data set for the present paper, we list the following items in Table 1 (Helou et al. 1988):

10. log(L_K): Decimal logarithm of the luminosity in the K-band, in units of the solar luminosity in the K_S band (L_K,⊙ = 5.0735 × 10^32 erg s−1), calculated from the extrapolated magnitude in the K_S (2.17 µm) band from the 2MASS Extended Source Catalogue (Jarrett et al. 2000). The magnitudes were available for 250 galaxies of our sample. We calculated the K_S luminosity, L_K, from the total (extrapolated) K_S flux, f_K, as L_K = ν f_K(ν), where ν is the central frequency of the K_S band. L_K is a good measure of the total stellar mass.
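As an illustration of this conversion, the following sketch turns a total 2MASS K_S magnitude and a distance into log(L_K/L_K,⊙). The K_S zero-point flux (≈666.8 Jy) and the explicit 4πD² luminosity factor are assumptions of ours, not stated in the text; only the solar K_S luminosity is taken from it.

```python
import math

# ASSUMED 2MASS Ks zero-point flux (~666.8 Jy) and central wavelength (2.17 um);
# the solar Ks luminosity is the value quoted in the text.
F0_KS_JY = 666.8
NU_KS = 2.998e10 / 2.17e-4   # Hz, c / lambda with lambda in cm
L_KS_SUN = 5.0735e32         # erg/s
MPC_CM = 3.086e24            # 1 Mpc in cm

def log_LK(m_Ks, D_mpc):
    """log10(L_K / L_Ks_sun) from a total 2MASS Ks magnitude and a distance."""
    f_nu = F0_KS_JY * 10.0 ** (-m_Ks / 2.5) * 1e-23   # erg s^-1 cm^-2 Hz^-1
    nu_f_nu = NU_KS * f_nu                            # erg s^-1 cm^-2
    L = 4.0 * math.pi * (D_mpc * MPC_CM) ** 2 * nu_f_nu
    return math.log10(L / L_KS_SUN)
```

For example, m_Ks = 9.0 at D = 40 Mpc gives log(L_K/L_K,⊙) ≈ 10.9, a plausible value for a luminous spiral.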
In Fig. 1 we present some of the basic characteristics of the CO samples and compare them to those of the optically complete sample (Verdes-Montenegro et al. 2005). The latter (n = 712) is composed of CIG galaxies with m_B in the range of 11-15 mag and is 80-95% complete. The main difference between the CO samples and the optically complete sample is the larger spread in velocity of the latter (Fig. 1a). The CO samples lie at lower velocities, especially the redshift-limited sample. This leads to a slightly lower optical luminosity (Fig. 1b) of the CO samples, since the number of luminous objects is higher at larger distances due to the Malmquist bias.
The distribution of the morphological types (Fig. 1c) is very similar in all three samples. All samples are dominated by spiral galaxies. The relative number of early-type galaxies (T = (−5)−0) is 8% for the optically complete sample and 13% for the redshift-limited CO sample. The distribution is peaked around galaxies with types T = 3−5 (63% for the optically complete sample and 51% for the redshift-limited CO sample).
CO observations and analysis
We carried out CO(1-0) observations with the FCRAO and with the IRAM 30m telescope. We observed galaxies with isophotal diameters D 25 < 100 ′′ at the 30m telescope and galaxies with D 25 ≥ 100 ′′ at the 14m FCRAO telescope. In this way we tried to optimize the agreement between beam size and optical diameters and minimize the fraction of missing flux in the observations done with a single pointing. We observed 100 galaxies at the FCRAO radio-telescope and 101 at the IRAM 30m telescope. In order to check the consistency of the results we observed 15 galaxies at both telescopes.
IRAM 30m telescope
We observed the ¹²CO(1-0) line at 115 GHz with the IRAM 30m telescope on Pico Veleta using the dual polarization receivers A100 and B100, together with the 512 × 1 MHz filterbanks. The observations were done in wobbler switching mode with a wobbler throw of 120″ in azimuthal direction. Pointing was monitored on nearby quasars every 60-90 minutes. The integration time on source was typically 0.5 to 1.5 hours. The mean system temperature was 320 K on the T*_A scale. All CO spectra and intensities are presented on the main beam temperature scale (T_mb), which is defined as T_mb = (F_eff/B_eff) × T*_A. The IRAM forward efficiency, F_eff, was 0.95 and the beam efficiency, B_eff, 0.75. The peak intensities of our sources ranged between about 10 and 80 mK (T_mb).
Most galaxies were observed at the central position with a single pointing. The galaxies with the strongest emission at the center (19 galaxies) were mapped along the major axis with a spacing of 15 ′′ , until a root mean square (rms) noise of about 3 mK was reached for a velocity resolution of 10.6 km s −1 .
FCRAO 14m telescope
The observations at the FCRAO were done with the receiver SEQUOIA, a 4 × 4 pixel array operating from 85 to 115 GHz. We used the so-called "beam switching" mode, in which the telescope switches position between the source and a reference position 89.2″ apart in azimuth. Two of the pixels in the array alternated between the ON- and the OFF-position, doubling in this way the effective integration time.
The typical observing time per object was about 2 hours. The pointing was checked between the observations of different sources using a nearby quasar. The mean system temperature was 380 K (on the T*_A scale). All CO spectra and luminosities are presented on the main beam temperature scale (T_mb), which is defined as T_mb = T*_A/B_eff. The main beam efficiency is B_eff = 0.45. We observed each galaxy with one pointing at its central position. The peak intensities of our sources ranged between about 10 and 80 mK (T_mb).
Literature data
We furthermore searched the literature and found CO(1-0) data for 131 objects, 87 of which had not been observed by us. We list the references for these data in Table 2. Some galaxies were observed at several telescopes. Table 3 provides information about the telescopes used in the different surveys: the antenna size (column 3), the half power beam width (HPBW) (column 4), and the conversion factor Jy/K (on the T_mb scale) at 115 GHz (column 5).
Data reduction
The data from both telescopes were reduced in the standard way using the CLASS software in the GILDAS package 1. The data reduction consisted of discarding poor scans, flagging bad channels, subtracting a baseline, and averaging the spectra for the same object and position. In most cases a constant baseline was subtracted; only in a few cases was the subtraction of a linear baseline required.
Spectra and integrated intensities
The CO(1-0) profiles of the detections and tentative detections observed by us at the IRAM 30m and FCRAO 14m are shown in Appendix A. The velocity integrated intensity, I_CO = ∫ T_mb dv (in K km s−1), was calculated from the spectra with a velocity resolution of 10.6 km s−1 for the IRAM spectra and 13.1 km s−1 for the FCRAO spectra, by summing up all channels with significant emission. Its error was calculated as:

ΔI_CO = σ (W_CO δV_CO)^1/2,

where σ is the rms noise of the spectrum, W_CO is the CO line width, and δV_CO is the spectral resolution. For undetected galaxies, we calculate a 3σ upper limit, assuming a line width of 300 km s−1, as:

I_CO,lim = 3σ (300 km s−1 × δV_CO)^1/2.

In Table 4 we list the following items:

1. Entry number in the Catalogue of Isolated Galaxies (CIG). An asterisk added to the number means that the detection is marginal. In our statistical analysis we treat marginal detections as upper limits.
2. Off. α: RA offset from the center in arcsec.
3. Off. δ: Declination offset from the center in arcsec.
4. rms: root mean square noise in mK for a velocity resolution of 10.6 km s−1 (IRAM) or 13.1 km s−1 (FCRAO).
5. I_CO: velocity integrated CO line temperature ∫ T_mb dv, in K km s−1, and its error.
6. V_CO: mean velocity of the CO line, in km s−1.
7. W_CO: line width at zero level of the CO spectrum, in km s−1.
8. Tel: radio-telescope code, as listed in Table 3.
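The error and upper-limit recipe can be sketched as follows; the explicit forms σ√(W_CO δV_CO) and 3σ√(300 km s−1 · δV_CO) are the standard radiometric expressions implied by the quantities defined in the text.

```python
import math

def delta_ico(sigma, w_co, dv):
    """Uncertainty of the integrated intensity: sigma * sqrt(W_CO * dV),
    with sigma the channel rms (K), W_CO the line width and dV the
    channel width (both km/s)."""
    return sigma * math.sqrt(w_co * dv)

def ico_upper_limit(sigma, dv, w_assumed=300.0):
    """3-sigma upper limit for nondetections, assuming a 300 km/s line width."""
    return 3.0 * sigma * math.sqrt(w_assumed * dv)
```

For a typical IRAM spectrum with σ = 3 mK and δV_CO = 10.6 km s−1, the 3σ upper limit comes out at ≈ 0.5 K km s−1.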
Comparison between IRAM and FCRAO data
In order to check the relative calibration between the IRAM 30m and the FCRAO 14m telescope and to guarantee that these two data sets are comparable, we observed 15 galaxies with both telescopes. We expect a ratio of the velocity integrated intensities of I CO−IRAM /I CO−FCRAO = 1 for emission homogeneously filling the beams, and I CO−IRAM /I CO−FCRAO = (Θ FCRAO /Θ IRAM ) 2 = 4.5, where Θ FCRAO and Θ IRAM are the FWHM of the respective beams, for a point-like emission.
Four galaxies (CIG 66, 181, 281 and 330) were detected at both telescopes. The ratios of I_CO−IRAM/I_CO−FCRAO range between 1.1 and 2.3, consistent with the value expected for slightly concentrated emission. Six galaxies were detected at IRAM, but only tentatively detected (CIG 176, CIG 355) or undetected (CIG 217, CIG 561, CIG 609, CIG 622) at the FCRAO. The lower limit for I_CO−IRAM/I_CO−FCRAO in five cases was between 0.94 and 3.6, consistent with the expected range of values. For CIG 217 this value is higher (I_CO−IRAM/I_CO−FCRAO = 6.1) than the theoretical upper limit. Since the detection at IRAM has a high signal-to-noise ratio, the most likely reason is an underestimate of the upper limit of the FCRAO data. There is one object with a detection at the FCRAO and only a tentative detection at IRAM (CIG 433), and one with a nondetection at IRAM (CIG 268). The ratio of the intensities in both cases is I_CO−IRAM/I_CO−FCRAO = 0.3, indicating an underestimate in the IRAM data. In the remaining three cases, both observations were either nondetections or tentative detections.
We conclude that there is very good agreement between the detected values at both telescopes, and in most cases (with the exception of three galaxies) also for objects only detected at one telescope. This gives us confidence that the calibration of the two data sets is consistent.
Calculation of the molecular gas mass
The molecular gas mass (M_H2) is calculated using a Galactic conversion factor of N(H2)/I_CO = 2.0 × 10^20 cm−2 (K km s−1)−1 (e.g. Dickman et al. 1986), yielding:

M_H2 [M_⊙] = 75.0 × I_CO × D² × Ω,    (1)

where I_CO is the velocity integrated CO line intensity in K km s−1, D is the distance in Mpc and Ω is the area covered by the observations in arcsec² (i.e. Ω = 1.13 Θ_B² for a single pointing with a Gaussian beam of FWHM Θ_B). We do not include the mass of heavy elements (mostly helium) in the molecular gas mass.
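A minimal sketch of this conversion, deriving the numerical coefficient (≈ 75 M_⊙ for I_CO in K km s−1, D in Mpc and Ω in arcsec²) directly from the quoted conversion factor rather than hard-coding it:

```python
# Galactic conversion factor quoted in the text and physical constants (cgs).
X_CO = 2.0e20                  # cm^-2 (K km s^-1)^-1
M_H2_GRAM = 2.0 * 1.6726e-24   # mass of one H2 molecule in g (no helium correction)
PC_CM = 3.0857e18              # parsec in cm
MSUN_G = 1.989e33              # solar mass in g

# 1 arcsec at a distance of 1 Mpc subtends 4.848 pc on the sky.
CM_PER_ARCSEC_AT_1MPC = 4.848 * PC_CM

def m_h2(i_co, d_mpc, omega_arcsec2):
    """M_H2 in solar masses from I_CO (K km s^-1), distance D (Mpc) and the
    solid angle Omega covered by the observations (arcsec^2)."""
    area_cm2 = omega_arcsec2 * (CM_PER_ARCSEC_AT_1MPC * d_mpc) ** 2
    return X_CO * i_co * M_H2_GRAM * area_cm2 / MSUN_G
```

Evaluating m_h2(1, 1, 1) reproduces the coefficient of ≈ 75; a single 21″ IRAM pointing (Ω ≈ 500 arcsec²) with I_CO = 2 K km s−1 at 40 Mpc corresponds to roughly 10^8 M_⊙.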
Most of our objects were observed at the central position in a single pointing since the mapping of the entire galaxy would have been too time-consuming. We therefore might have missed part of the CO emission for galaxies where the emission is more extended than the beam. This fraction depends on the galaxy size, inclination and on the telescope beams. It is thus necessary to correct for this loss, and we do this by extrapolating M H 2 observed in the central beam to the total mass in the galaxy. In the next subsection we explain how we carried out this correction.
Aperture correction
In order to apply an aperture correction, we need to predict the distribution of the CO emission. CO maps of nearby spiral galaxies (Regan et al. 2001; Leroy et al. 2008) have shown that the radial distribution of I_CO(r) in galaxies can be well described by an exponential function with a scale length r_e:

I_CO(r) = I_CO(0) exp(−r/r_e).    (2)

The CO scale length, r_e, is well correlated with and similar to the optical exponential scale length (Regan et al. 2001; Leroy et al. 2008). It also correlates, although less tightly, with the optical radius at the 25 mag isophote, r_25. Leroy et al. (2008) derived for spiral galaxies from the THINGS survey a mean value of α = r_e/r_25 = 0.2. We derived the same mean value for α from the data of Regan et al. (2001) for 15 spiral galaxies observed in the BIMA Survey of Nearby Galaxies (BIMA-SONG) and from the data for 25 spiral galaxies observed with the Nobeyama 45m telescope. We also used the data of Young et al. (1995), who studied the molecular gas content and distribution in a sample of 300 nearby galaxies, to derive r_e. They found a mean ratio between the effective CO diameter, D_CO, the diameter within which 70% of the CO emission is situated, and the optical diameter of D_CO/D_25 = 0.5. For an exponential distribution one can derive that D_CO × 0.5 = 2.5 r_e. Thus, their data also yield r_e/r_25 = (D_CO × 0.5)/(2.5 × D_25 × 0.5) = 0.5/2.5 = 0.2. Finally, we use the data of the 19 galaxies (all of them with morphological type T ≥ 2) mapped along the major axis with the IRAM 30m telescope as a further test. Although our data are not sufficiently detailed to fit the radial distribution (we have only 3-5 detected spectra along the major axis), we can use them to test whether (i) an exponential distribution is a reasonable description of the CO distribution, (ii) the scale length derived by other studies is in agreement with our data, and (iii) the predictions for the extrapolated M_H2 are in agreement with our mapped values.
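The geometrical step quoted above, D_CO × 0.5 ≈ 2.5 r_e, can be checked numerically: for an exponential disk the flux enclosed within radius u·r_e is 1 − (1 + u)e^(−u), and setting this to 70% gives u ≈ 2.44. A small sketch (function names are ours):

```python
import math

def flux_fraction(u):
    """Fraction of the total flux of an exponential disk, I(r) ~ exp(-r/r_e),
    enclosed within radius r = u * r_e."""
    return 1.0 - (1.0 + u) * math.exp(-u)

def radius_for_fraction(frac, lo=0.0, hi=20.0, tol=1e-10):
    """Invert flux_fraction (monotonically increasing) by bisection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flux_fraction(mid) < frac:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

radius_for_fraction(0.7) ≈ 2.44, so D_CO ≈ 2 × 2.44 r_e (the "2.5 r_e" above), and with D_CO/D_25 = 0.5 this gives r_e/r_25 = 0.5/2.44 ≈ 0.2.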
For the first two tests, we fitted an exponential distribution independently to each side of the CO distribution along the major axis. From the 38 resulting fits, there were only six cases where no exponential fit could be applied within the error bars. In five cases we derived α ≥ 0.4, in seven cases 0.25 < α < 0.4, in three cases α < 0.15 and in 17 cases, the majority, 0.15 ≤ α ≤ 0.25. Thus, our data are in general consistent with the value of α found by other studies.
In summary, we conclude that an exponential distribution of the molecular gas with α = r_e/r_25 = 0.2 is a good approximation based on the CO maps of nearby spiral galaxies available to date, and is also consistent with our data. We adopt the same value for both spiral and early-type galaxies. In early-type galaxies the molecular gas extent is much less well known. However, for our study this uncertainty is not important because the number of objects with T ≤ 0 is low (n = 23, with eight detections) and we focus our results and conclusions on spiral galaxies, in particular of types T = 3−5, which dominate the sample.

We now use these results to calculate the aperture correction, which we define as the ratio between the total (extrapolated) molecular gas mass, M_H2, and the molecular gas mass in the central pointing, M_H2,center:

f_ap = M_H2 / M_H2,center.    (3)

The total molecular gas mass is calculated by spatially integrating I_CO(r) from Eq. 2 and using Eq. 1. This yields:

M_H2 ∝ D² ∫ I_CO(0) exp(−r/r_e) 2πr dr = 2π I_CO(0) r_e² D².    (4)

Similarly, we calculate M_H2,center by convolving the exponential CO intensity distribution with a Gaussian beam of FWHM Θ_B:

M_H2,center ∝ D² ∫∫ I_CO(x, y) exp[−4 ln 2 (x² + y²)/Θ_B²] dx dy,    (5)

where the inclination i of the disk enters through the elliptical shape of the intensity distribution I_CO(x, y) on the sky. The integration of Eq. 5 is carried out numerically. Thus, the correction factor, f_ap = M_H2/M_H2,center, depends on the ratio of the scale length and the beam size, r_e/Θ_B, as well as the galaxy inclination i.

Fig. 2 shows the distribution of f_ap for the galaxies in our sample. The correction factors are generally low: 81% of the galaxies have f_ap < 2, and 92% f_ap < 3. Only nine galaxies have a correction factor above 5. All of them are nearby (v < 1000 km s−1) galaxies with a large angular size (between 4′ and 20′) that are not included in our redshift-limited sample.
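A sketch of this numerical evaluation: an inclined exponential disk is weighted by a Gaussian beam and compared with the total emission. Grid size and extent are arbitrary choices of ours, and all constant prefactors (including the optically-thin 1/cos i brightening of an inclined disk) cancel in the ratio.

```python
import math

def f_ap(re_over_beam, incl_deg, ngrid=400, extent=8.0):
    """Aperture correction f_ap = M_H2 / M_H2,center for an exponential disk
    of scale length r_e, given in units of the beam FWHM (re_over_beam =
    r_e / Theta_B), inclined by incl_deg degrees (< 90) and observed with a
    single Gaussian beam centered on the nucleus."""
    cosi = math.cos(math.radians(incl_deg))
    a = 4.0 * math.log(2.0)              # Gaussian beam exp(-a*rho^2), FWHM = 1
    step = 2.0 * extent / ngrid
    total = center = 0.0
    for ix in range(ngrid):
        x = -extent + (ix + 0.5) * step  # sky coordinate along the major axis
        for iy in range(ngrid):
            y = -extent + (iy + 0.5) * step
            r = math.hypot(x, y / cosi)  # deprojected galactocentric radius
            disk = math.exp(-r / re_over_beam)
            total += disk                            # total emission
            center += disk * math.exp(-a * (x * x + y * y))  # beam-weighted
    return total / center
```

For a disk much smaller than the beam, f_ap → 1; it grows quickly once r_e becomes comparable to Θ_B, and at fixed r_e/Θ_B a higher inclination pushes more of the emission into the beam and lowers f_ap.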
In order to carry out test (iii), we compared, for the 19 galaxies that were mapped along their major axis with the IRAM 30m telescope, the extrapolated M_H2 to the mapped molecular gas mass, which we extrapolated to the mass in the entire disk by assuming azimuthal symmetry (M_H2,map,extra). Fig. 3 presents the ratio of these two masses as a function of D_25 and i. For most objects both masses agree reasonably well, with ratios ranging between 0.5 and 1.3. There are only two outliers, with ratios around 2. These galaxies (CIG 84 and CIG 28) have a very flat CO distribution along the major axis, so that our extrapolation, assuming an exponentially decreasing distribution, underestimates the true amount of molecular gas. No trend with either D_25 or the inclination is seen, showing that no apparent bias is introduced by the aperture correction. The mean mass ratio is 1.0, with a standard deviation of 0.3.
Molecular gas content
In Table 5 we list M_H2 for the individual galaxies. If observations from different references were available for a single galaxy, we inspected the spectra and discarded those of poorer quality. In case of similarly good data we gave preference to mapped observations or to observations from the telescope with a larger beam in order to avoid flux loss. For the 44 objects that had data from the literature and were also observed by us, our data were in general of better quality, with the exception of three objects (CIG 512, CIG 604, CIG 626). The columns in Table 5 are (an asterisk denotes tentative detections that are treated as upper limits in the statistical analysis):

3. log(M_H2) mapped: Decimal logarithm of the mapped M_H2 in solar masses, calculated in the following way: for the data from the literature, the angular separation between the individual pointings was always larger than the beam size, so that the total M_H2 could be calculated as the sum of the individual pointings. For our own observations with the IRAM 30m telescope, the spacing between the individual pointings was 15″, which is smaller than the FWHM of the beam (21″), so that in this case we had to take the overlap of the individual pointings into account. We calculated the mapped M_H2 from Eq. 1, where I_CO is taken as the mean value of the different pointings and Ω is the total area covered by the mapping, approximated as 21″ × 36″, 21″ × 51″, 21″ × 66″ and 21″ × 81″ for 2, 3, 4, and 5 pointings, respectively.
4. log(M_H2) extrapol.: Total (extrapolated) M_H2 in solar masses, calculated as described in Section 3.5.1. An asterisk denotes tentative detections that are treated as upper limits in the statistical analysis.
5. Tel.: Radio-telescope code, as in Table 3.
6. Ref.: Bibliographic code, as in Table 2.

Fig. 5. The molecular gas mass for the redshift-limited sample as a function of morphological type. Triangles denote the mean value and its error for a range of morphological types and squares the median value, as listed in Table 6. The error bars in the x-direction denote the range of morphological types over which the mean and median have been taken.

The mean and median values were calculated with the package ASURV, which takes into account the upper limits. Here and throughout the paper we use the ASURV package, which applies survival analysis in the presence of upper or lower limits and calculates the mean value based on the Kaplan-Meier estimator; the results are listed in Table 6. Our sample is dominated by spiral galaxies of type T = 3−5 (Sb-Sc). Not only is the total number of objects greatest in this range, but also the detection rate. Therefore, we can derive the most reliable results for these types. Both for earlier and for later types, the detection rates are very low, making a detailed analysis difficult. The molecular gas mass is largest for spiral galaxies of T = 3−5, and decreases both for earlier and later types. There are eight early-type galaxies, of type S0 and S0a, with detections in M_H2, and five of them have unusually high molecular gas masses in the range of those for spiral galaxies (CIG 332, CIG 481, CIG 498, CIG 733 and CIG 1015).
Relation of M H 2 to other parameters
In the following we investigate the relations between M_H2 and L_B, D_25, L_FIR, L_K and M_HI. The first two quantities (L_B, D_25) were chosen because they are in general available for any galaxy and are therefore useful to predict the expected M_H2. L_FIR is very closely related to M_H2 because of their common relation to SF. L_K is dominated by the emission of low-mass stars, which are the result of the long-term SF history of an object and determine the gravitational potential that influences the SF activity. Finally, we compare M_H2 to M_HI in order to derive the molecular-to-atomic gas mass ratio as a function of morphological type.
The results of the regression analysis as well as the Spearman rho correlation coefficients derived in this section are listed in Table 7, and the mean and median values of the ratios for different subsamples are in Table 6. We use the package ASURV to calculate the bisector regression line applying Schmitt's binning method (Schmitt 1985), the only method offered by ASURV able to deal with censored data in both the dependent and the independent variable 4.

Fig. 6 shows the relation between M_H2 and L_B for the entire redshift-limited sample and for subsamples of different morphological types, together with the best-fit bisector regression lines. The high number of upper limits for galaxies of type T = (−5)−2 and T = 6−10 impedes the calculation of a reliable regression line for these individual subgroups. However, we note that there is no apparent deviation of these two groups from the mean best-fit regression for all morphological types. A slope considerably larger than 1 is found for the correlation between M_H2 and L_B, in agreement with Perea et al. (1997). This means that the ratio M_H2/L_B tends to increase with L_B. Even though the scatter around the best fit is large, so that no strong correlation exists between M_H2/L_B and L_B, we confirm the variation of M_H2/L_B by finding that the mean value is indeed higher for high L_B than for low L_B: for galaxies (all morphological types) with L_B < 10^10 L_⊙ (n = 123 galaxies, 68 of them upper limits) we obtain log(M_H2/L_B) = −1.36 ± 0.05, and for galaxies with L_B ≥ 10^10 L_⊙ (n = 50, 11 upper limits) log(M_H2/L_B) = −1.10 ± 0.06. The corresponding numbers for the T = 3−5 subsample are −1.16 ± 0.06 (L_B < 10^10 L_⊙, n = 48, 16 upper limits) and −1.04 ± 0.05 (L_B ≥ 10^10 L_⊙, n = 40, 5 upper limits). This trend has to be taken into account when using this ratio as an indicator of an enhancement of M_H2.
Optical luminosity
The ratio between M_H2 and L_B is shown in Fig. 7 as a function of morphological type; the values are listed in Table 6. The error bars in the x-direction denote the range of morphological types over which the mean and median have been taken. The ratio is highest for galaxies of type T = 3−5. It is, however, remarkable that galaxies of type S0 and S0a with detections in CO have much higher values of M_H2/L_B, in the range of spiral galaxies of type T = 3−5, than early-type galaxies with nondetections in CO.

4 The Schmitt method was partially reimplemented and wrapped into Python. It can be found at http://amiga.iaa.es/software/python-asurv.
In order to check whether the extrapolation of the molecular gas mass to the entire disk has introduced any biases, we show in Fig. 8 the relation between M_H2 and L_B only for galaxies with f_ap < 1.5. No significant difference compared to the entire redshift-limited CO sample can be seen, and the best-fit regression coefficients are the same within the errors (Table 7). We found this good agreement between the entire redshift-limited CO sample and the subsample of galaxies with f_ap < 1.5 for all correlations studied. Therefore, in the following subsections we do not show the f_ap < 1.5 correlation separately, but we list in Tables 6 and 7 the mean values and the regression coefficients for this subsample.

Fig. 8. M_H2 vs. L_B for galaxies in the redshift-limited CO sample with an aperture correction factor f_ap < 1.5 (n = 78). The solid line is the best-fit bisector line derived with ASURV for this restricted sample, and the dashed line gives, for comparison, the fit for the entire redshift-limited CO sample.

Fig. 9 shows the relation between M_H2 and D²_25 for the entire redshift-limited sample and for subsamples of different morphological types, together with the best-fit bisector regression lines. For early-type galaxies (T ≤ 0) only a very poor correlation is visible, whereas galaxies of type T = 6−10 seem to follow the same correlation as those of type T = 3−5.
Optical isophotal diameter
The correlation between M_H2 and D²_25 has the lowest correlation coefficient (r ∼ 0.5) among those considered in this paper (Table 7). Although the bisector slope is formally larger than 1, we do not find a variation of M_H2/D²_25 with increasing optical diameter. This shows that for this poor correlation the regression slope has to be taken with caution.

Fig. 10 shows the ratio M_H2/D²_25 as a function of morphological type. The values are listed in Table 6. The ratio is highest for spiral galaxies of type T = 3−5. Similarly to M_H2/L_B, S0 and S0a galaxies with detections in CO have high values of M_H2/D²_25, similar to those of spiral galaxies of type T = 3−5, whereas M_H2/D²_25 of the nondetections is much lower.

Fig. 11 shows the relation between M_H2 and L_K for the entire redshift-limited sample and for subsamples of different morphological types, together with the best-fit bisector regression lines. The relation is close to linear. The correlation is very good (r = 0.73) for spiral galaxies of type T = 3−5. The distribution of the emission from spiral galaxies of later types (T = 6−10) is consistent with this correlation. However, early-type galaxies (T ≤ 0) show a very poor correlation. Only the objects with CO detections follow the same correlation as spiral galaxies, whereas most galaxies with CO nondetections have upper limits for M_H2 that lie considerably below it.

Fig. 9. M_H2 vs. D²_25 for the redshift-limited CO sample (n = 173), including all morphological types, and for groups of different morphological types. The dashed line gives the best-fit bisector (derived with ASURV) for all morphological types and the solid line is the best fit for the T = 3−5 sample (see Table 7).

Fig. 10. The ratio between M_H2 and D²_25 for the redshift-limited sample as a function of morphological type. Triangles denote the mean value and its error for a range of morphological types and squares the median values, as listed in Table 6. The error bars in the x-direction show the range of morphological types over which the mean and median have been taken.
Luminosity in the K-band
The ratio M_H2/L_K is shown in Fig. 12. It is lowest for early-type galaxies (up to T = 2), and increases for later types by a factor 3-5. From T = 3 on, the ratio is approximately constant and does not show the decrease seen for M_H2/L_B and M_H2/D²_25. Early-type (E+S0) galaxies have a lower mean molecular gas mass per L_K than spiral galaxies. This is caused by galaxies not detected in CO, whereas, as seen before, S0 and S0a galaxies detected in CO have higher values of M_H2/L_K, in the same range as spiral galaxies.

Table 6. Mean and median values. Mean value and its error calculated with the program ASURV for different subsamples, and (below) median value (calculated treating the upper limits as detections). For the mean and median value of log(M_H2/L_FIR) and log(M_H2/M_HI) only galaxies with detection in L_FIR or M_HI, respectively, were taken into account, since ASURV cannot handle both upper and lower limits. Below: Total number of galaxies and number of upper limits in M_H2 taken into account for the means and medians.

Fig. 11. The relation between M_H2 and L_K for the redshift-limited CO sample (n = 173), including all morphological types, and for groups of different morphological types. The dashed line gives the best-fit bisector (derived with ASURV) for all morphological types and the solid line is the best fit for the T = 3−5 sample (see Table 7).

Fig. 12. The ratio between M_H2 and L_K as a function of morphological type. Triangles denote the mean value and its error for a range of morphological types and squares denote the median values, calculated treating upper limits as detections, as listed in Table 6. The error bars in the x-direction denote the range of morphological types over which the mean and median have been taken.
FIR luminosity, SF rate and efficiency
A good correlation is known to exist between M_H2 and L_FIR (e.g. Young & Scoville 1991) because both quantities are directly related to SF: the molecular gas as the fuel for SF and L_FIR as a tracer of SF based on the heating of the dust by newly born stars. Fig. 13 shows the relation between M_H2 and L_FIR for the entire redshift-limited sample and for subsamples of different morphological types, together with the best-fit bisector regression lines. We find a good correlation with a roughly linear slope (Table 7).

Notes to Table 7. The entries are: Column 1: Subsample considered. Column 2: Total number of galaxies in the respective samples. Column 3: Number of galaxies with upper limits in M_H2 and/or L_FIR. Column 4: Bisector slope and its error of the best-fit regression line derived with the Schmitt binning method in the ASURV package. The slope and intercept are defined as log(A) = intercept + log(B) × slope, where A is M_H2 or Σ_H2 and B is L_B, L_K, L_FIR, (D_25)² or Σ_SFR, respectively. Column 5: Bisector intercept and its error. Columns 6 and 7: Slope and intercept and their errors of the best-fit regression line derived with the Schmitt binning method in the ASURV package adopting L_B, L_FIR, L_K, D²_25 or Σ_SFR as independent variable. Column 8: Spearman's rho correlation coefficient, calculated with ASURV. * Only galaxies for which the aperture correction factor, f_ap (see Sect. 3.5.1), is less than 1.5.

Fig. 13. M_H2 vs. L_FIR for the redshift-limited CO sample (n = 173), including all morphological types, and for groups of different morphological types. The dashed line gives the best-fit bisector (derived with ASURV) for all morphological types and the solid line is the best fit for the T = 3−5 sample (see Table 7).
For early (T ≤ 0) and late (T = 6−10) type galaxies, the distribution of M_H2 vs. L_FIR is consistent with the correlation found for galaxies of type T = 3−5. Fig. 14 shows the ratio L_FIR/M_H2 as a function of morphological type. Early-type galaxies have a higher value, possibly due to a large fraction of their FIR emission not being heated by young stars but by the general interstellar radiation field. For late-type spirals (T ≥ 6) the mean ratio increases again, but the low number of values and detections makes any firm conclusion difficult.
These values and trends for L FIR /M H 2 are consistent with earlier results of Young et al. (1996), who studied them in a sample of 120 galaxies included in the FCRAO survey. They used a higher value of N(H 2 )/I CO = 2.8 × 10 20 cm −2 (K km s −1 ) −1 , but they also define the FIR luminosity in the range between 1 − 500 µm which is, according to their prescription, between ∼ 0.1 − 0.2 dex higher. Since both differences roughly compensate, their values of L IR /M H 2 are comparable to our values of L FIR /M H 2 . They find very similar values of L FIR /M H 2 for the different morphological types as we do. They obtain values between 0.55 ± 0.08 and 0.61 ± 0.06 for morphological types T = 3 − 5 (n = 45), higher values (between 0.70 ± 0.13 and 1.53 ± 0.24) for later-type galaxies (T = 6 − 10, n = 19), and, for earlier spiral types (T = 1 − 2, n = 14), mean values (0.65 ± 0.20 and 0.53 ± 0.09) that are similar to those of galaxies of type T = 3 − 5. Their sample only includes three galaxies of type S0-S0a and no elliptical galaxies, so that we cannot compare earlier types.
L FIR is a good tracer for the star formation rate (SFR) for two reasons: (i) young stars are formed in dense regions where dust opacities are high and (ii) the dust extinction curve peaks in the ultraviolet, so that the radiation from young, massive stars is preferentially absorbed. Therefore, L FIR is a good SFR tracer, especially for actively star-forming galaxies. It has to be used with some caution in galaxies with a low SFR, where dust heated by old stars can contribute to L FIR , or in galaxies with a low metallicity and thus a low dust opacity (e.g. Bell 2003).

Fig. 14. The ratio between L FIR and M H 2 as a function of morphological type. Galaxies with upper limits at both wavelengths are excluded. Triangles denote the mean value and its error for a range of morphological types and squares denote the median values, as listed in Table 6. The error bars in the x-direction show the range of morphological types over which the mean and median have been taken.
Keeping these limitations in mind, we use the formula of Kennicutt (1998) to calculate the SFR: SFR(M ⊙ yr −1 ) = 4.5 × 10 −44 L IR (erg s −1 ), where L IR is the total IR luminosity in the range 8 − 1000 µm. This formula assumes a Salpeter initial mass function (IMF). We convert this to a value based on the Kroupa (2001) IMF by dividing by a factor of 1.59. In our analysis we use L FIR , calculated following the formula given by Helou et al. (1988), which estimates the FIR emission in the wavelength range of 42 − 122.5 µm. We estimate L IR from L FIR using the result of Bell (2003) that on average L IR ∼ 2 × L FIR for a heterogeneous sample of normal and starbursting galaxies. Adopting this factor and the conversion to the Kroupa IMF, we calculate the SFR from L FIR as: SFR(M ⊙ yr −1 ) = 5.7 × 10 −44 L FIR (erg s −1 ) (7) = 2.2 × 10 −10 L FIR (L ⊙ ).
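The calibration chain described here (Kennicutt 1998 calibration for a Salpeter IMF, division by 1.59 for a Kroupa IMF, and L IR ≈ 2 L FIR from Bell 2003) can be sketched in a few lines of Python; the function name and the adopted solar luminosity of 3.846 × 10 33 erg s −1 are illustrative choices, not taken from the paper:

```python
# Sketch of the SFR calibration chain described above.
# Assumptions: L_sun = 3.846e33 erg/s; L_IR ~ 2 x L_FIR (Bell 2003);
# Kennicutt (1998) Salpeter calibration divided by 1.59 for a Kroupa IMF.
L_SUN_ERG_S = 3.846e33  # solar luminosity in erg/s

def sfr_from_lfir(l_fir_lsun):
    """Return the SFR in Msun/yr from L_FIR (42-122.5 um) given in Lsun."""
    l_ir_erg_s = 2.0 * l_fir_lsun * L_SUN_ERG_S  # L_IR(8-1000 um) ~ 2 x L_FIR
    sfr_salpeter = 4.5e-44 * l_ir_erg_s          # Kennicutt (1998), Salpeter IMF
    return sfr_salpeter / 1.59                   # convert to a Kroupa (2001) IMF
```

Multiplying the factors out, 2 × 4.5 × 10 −44 / 1.59 × 3.846 × 10 33 ≈ 2.2 × 10 −10 per L ⊙ , which reproduces the coefficient of Eq. (7).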
We derive an average SFR for galaxies of type T = 3 − 5 of ∼ 0.7 M ⊙ yr −1 . The ratio L FIR /M H 2 is proportional to the star formation efficiency (SFE), defined here as the ratio between SFR and M H 2 . The mean value of the SFE for galaxies of type T = 3 − 5 is 10 −9 yr −1 (Table 6). Fig. 15 shows the relation between M H 2 and M HI for different groups of morphological types. The correlation with M HI is much poorer than the other correlations considered here, with Spearman correlation coefficients of r = 0.44, 0.44, 0.29 and 0.57 for the entire redshift-limited sample, T = (−5) − 2, T = 3 − 5 and T = 6 − 10, respectively. In most galaxies M HI is higher than M H 2 . Some differences can be seen for the different morphological types: Early-type galaxies (T = (−5) − 2) have a high number of upper limits in both M H 2 and M HI . The upper limits mainly populate the low M HI and M H 2 part of the diagram, whereas the detections have values of M H 2 and M HI comparable to those of spiral galaxies. Late-type galaxies (T = 6 − 10) are shifted towards higher M HI ( > ∼ 10 9 M ⊙ ) but lower ( < ∼ 10 9 M ⊙ ) M H 2 , with a large number of upper limits in the latter. Galaxies of morphological types T = 3 − 5 have both high atomic and molecular gas masses ( > ∼ 10 9 M ⊙ ), with a low number of upper limits in M H 2 and almost none in M HI . Fig. 16 displays the ratio M H 2 /M HI as a function of morphological type, showing a strong variation with T . The highest values are found for early-type galaxies, up to T = 0 (albeit with a high uncertainty due to the high number of upper limits). For later types, M H 2 /M HI decreases strongly. The mean ratio M H 2 /M HI is significantly lower than 1 for all morphological types.
Expected molecular gas content in a galaxy
We showed in Sect. 4.2 that good correlations exist between M H 2 and other parameters of a galaxy (L B , L K and L FIR ) and a somewhat poorer correlation with D 2 25 . In Table 7 we list, apart from the linear regression parameters, the Spearman's rho correlation coefficient, and in Table 6 the different ratios as a function of morphological type. All these relations can be used to study differences in the molecular gas content of other samples, such as interacting galaxies, with respect to isolated galaxies.
The best correlation exists between M H 2 and L FIR (r ≥ 0.8). L FIR is thus a very reliable parameter for predicting the expected molecular gas content in a galaxy. However, L FIR might not be a good parameter when searching for variations of M H 2 in interacting galaxies because L FIR itself, tracing the SFR, is easily affected in such an environment. A very good, and roughly linear, correlation also exists between M H 2 and L K for galaxies of type T = 3 − 5 (r = 0.73). The luminosity in the K-band, as a measure of the total stellar mass, is less affected by recent events than L FIR or L B and is thus a good normalization parameter when searching for changes in M H 2 . This correlation is, however, poor for early-type galaxies (T ≤ 2), where the undetected objects have ratios M H 2 /L K well below the values for spiral galaxies with T ≥ 3 (see Fig. 11 and Table 6), so that for those types L K is not recommended as a measure of the expected M H 2 . There is also a good correlation between M H 2 and L B (r ∼ 0.65). The ratio M H 2 /L B is, however, not constant, but increases with L B , which has to be taken into account in any comparison. Finally, the poorest correlation (r ∼ 0.5) exists with the isophotal diameter, which is not a very reliable parameter for predicting M H 2 .
Apart from using the ratio of M H 2 to another parameter, we can use the correlations, defined by the linear regression parameters listed in Table 7, for predicting the expected M H 2 and determine whether a deficiency or an excess of M H 2 exists in an object in comparison to isolated galaxies.
We define the deficiency in M H 2 in an analogous way to the definition of the M HI deficiency by Haynes & Giovanelli (1984): Def(M H 2 ) = log(M H 2 ,predicted ) − log(M H 2 ,observed ). Note that in this definition, a positive value of Def(M H 2 ) means a deficiency and a negative value means an excess. We can derive the predicted M H 2 , M H 2 ,predicted , from any of the parameters, X, (L B , D 2 25 , L K or L FIR ) as: log(M H 2 ,predicted ) = intercept + slope × log(X), where intercept and slope are the parameters of the best fit listed in Table 7. These definitions allow us to directly take the effect of a nonlinearity of a correlation into account. In Table 7 we give the values for two types of regressions: the bisector fit and the fit obtained by taking M H 2 as the dependent variable, O(Y|X), and minimizing the distance of the M H 2 measurements from the best-fit regression line. The regression parameters are different for the two methods because of the scatter in the data, and the difference is larger for poorer correlations. The O(Y|X) fit is the appropriate regression for predicting M H 2 from L B , whereas the bisector regression is the best estimate for the underlying correlation between two parameters (Isobe et al. 1990). For a sample covering the same luminosity range, O(Y|X) is the best way of predicting the expected M H 2 . For samples with a different luminosity range, however, the bisector fit is better since it provides a more reliable extrapolation.
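The deficiency parameter and the regression-based prediction can be written as a short helper. The slope and intercept must be taken from Table 7 for the chosen predictor; the values in the example below are placeholders, not numbers from the paper:

```python
import math

def predicted_log_mh2(x, slope, intercept):
    """log(M_H2,predicted) = intercept + slope * log(X), with the
    fit parameters for the chosen predictor X (L_B, D25^2, L_K or L_FIR)."""
    return intercept + slope * math.log10(x)

def h2_deficiency(m_h2_observed, x, slope, intercept):
    """Def(M_H2) = log(M_H2,predicted) - log(M_H2,observed).

    A positive value means a deficiency, a negative value an excess.
    """
    return predicted_log_mh2(x, slope, intercept) - math.log10(m_h2_observed)

# With a hypothetical fit (slope=1.0, intercept=0.0), a galaxy whose
# observed M_H2 equals the predicted value has Def(M_H2) = 0.
```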
The Kennicutt-Schmidt law
We used our data to calculate the Kennicutt-Schmidt law, i.e. the relation between the disk-averaged gas column density and the disk-averaged SFR per area, Σ SFR . Fig. 17a shows the relation with the molecular gas column density, Σ H2 , and Fig. 17b with the total (molecular+atomic) gas column density, Σ H2+HI . The surface densities were calculated by dividing the SFR, respectively the (extrapolated) molecular or total gas mass, by the galaxy surface πD 2 25 /4. A clear correlation exists with the molecular gas, but none with the total gas, showing a lack of correlation with the atomic gas column density. Our findings are in agreement with previous results based on spatially resolved analyses (Wong & Blitz 2002; Bigiel et al. 2008; Verley et al. 2010), showing that the SFR is strongly related to the molecular gas only. The best-fit regression yields Σ SFR ∝ (Σ H 2 ) 0.89±0.07 (Table 7). This slope, close to unity, is again consistent with the results of Wong & Blitz (2002) and Bigiel et al. (2008) for a spatially resolved analysis.
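The disk-averaged surface densities used here follow directly from the stated prescription (a global quantity divided by π D 2 25 /4). A minimal sketch, with an illustrative function name and example values:

```python
import math

def disk_averaged_density(quantity, d25):
    """Disk-averaged surface density: a global quantity (e.g. M_H2 in
    Msun, or the SFR in Msun/yr) divided by the galaxy surface
    pi * D25**2 / 4. Units follow from the inputs (e.g. Msun/kpc^2
    for D25 in kpc)."""
    return quantity / (math.pi * d25**2 / 4.0)
```

For example, a molecular gas mass of 10 9 M ⊙ spread over D 25 = 20 kpc gives ∼ 3.2 × 10 6 M ⊙ kpc −2 , i.e. ∼ 3.2 M ⊙ pc −2 .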
Discussion
We compare our data to those of other samples of "normal" galaxies and also search for differences in samples of interacting galaxies. The samples of normal galaxies that we consider are made of not obviously interacting galaxies, which are, however, selected without a clear isolation criterion, and some of these samples contain, e.g., several cluster galaxies. In all these comparisons we have adjusted the molecular gas masses to our definition, i.e. the same conversion factor and no consideration of the helium fraction.
Relation between molecular gas, FIR and blue luminosity
The nonlinear relation between M H 2 and L B has been found by other groups as well (see Perea et al. 1997, and references therein), with slopes similar to ours. Their study was based on a smaller number (n = 68) of galaxies, selected with a less rigorous criterion with respect to the environment. They discussed the cause of the nonlinearity and concluded that the most likely reason is extinction affecting L B and increasing with galaxy luminosity. They predicted that the relation between M H 2 and luminosities at longer wavelengths should be more linear. This prediction is confirmed by the nearly linear relation found between M H 2 and L K in our analysis. We compared our mean value of M H 2 /L B for galaxies of type T = 3 − 5 to results for a large sample of normal galaxies studied by Bettoni et al. (2003). They searched the literature for galaxies with data for their ISM properties, excluding galaxies with a known peculiarity (interacting, disturbed, galaxies with polar rings or counterrotation) and with active galactic nuclei. Their sample includes 177 galaxies of type T = 3 − 5 with CO(1-0) data (160 detections and 17 upper limits) with values of log(L B ) between 9 and 11.

Fig. 17. Panel a: The relation between the surface density of the molecular gas and the SFR per area for morphological types T = 3 − 5, calculated by dividing the molecular gas mass and the SFR derived from L FIR by the surface of the galaxies, π(D 25 /2) 2 . The full line shows the bisector best fit for this sample, and the dashed line the best fit to the sample of all morphological types. Panel b: The same relation with the total (atomic + molecular) gas surface density. The blue triangles show the galaxies with either an upper limit in HI or in CO. The gas surface density for these galaxies has a lower and an upper limit, which for the galaxies in this figure are very close together (within the size of the triangle).
They derive a mean value (adapted to our convention for the calculation of M H 2 and L B ) of log(M H 2 /L B ) = −0.92 ± 0.04 for these galaxies. When restricting the range of L B to log(L B ) = 10 − 10.6, the mean value is −0.82 ± 0.05, about 0.2 dex higher than the corresponding value for the AMIGA sample (see Table 8), showing that AMIGA galaxies have a lower molecular gas content. When comparing their values for M H 2 /D 2 25 a similar difference is found.
The relation between M H 2 and L FIR in nonstarburst galaxies has been found to be close to linear in other studies (e.g. Perea et al. 1997;Gao & Solomon 2004), in agreement with our results.
Gas depletion time
We derived in Sect. 4.2.4 the SFE (defined as the SFR/M H 2 ). The SFE is directly related to the molecular gas depletion time, τ dep , by τ dep = SFE −1 . The gas depletion time for our sample has a mean value of log(τ dep ) = 8.9 yr for the entire redshift-limited sample and log(τ dep ) = 9 yr for galaxies of type T = 3 − 5 (see Table 6), with a spread of values roughly ranging between log(τ dep ) = 8.5 yr and log(τ dep ) = 9.5 yr (see Fig. 14).
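The inversion between SFE and depletion time quoted above is a one-line relation; the example numbers below are illustrative, not measurements from the paper:

```python
import math

def depletion_time(m_h2, sfr):
    """Molecular gas depletion time tau_dep = M_H2 / SFR = 1 / SFE,
    in yr for M_H2 in Msun and SFR in Msun/yr."""
    return m_h2 / sfr

# An SFE of 1e-9 /yr corresponds to log(tau_dep) = 9, i.e. 1 Gyr:
# e.g. a hypothetical galaxy with M_H2 = 7e8 Msun and SFR = 0.7 Msun/yr.
tau = depletion_time(7e8, 0.7)
assert math.isclose(math.log10(tau), 9.0)
```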
This value can be compared to those found in recent surveys. Bigiel et al. (2011) derived a mean gas depletion time of 2.35 Gyr from spatially resolved observations of 30 nearby galaxies in the HERACLES survey. This value includes a helium fraction of a factor of 1.36, thus giving τ dep = 1.7 Gyr without helium. They furthermore showed that this value is consistent with a wide range of molecular gas depletion times from the literature, albeit with a large standard deviation of 0.23 dex. This value is only slightly higher than our value for isolated galaxies. The small difference could be due to the fact that our value is global, and thus might encompass some FIR emission not directly associated with SF from the outskirts of the galaxies, whereas the value from Bigiel et al. (2011) is from a spatially resolved study. Saintonge et al. (2011) studied the molecular gas depletion time, τ dep , for a volume-limited sample of 222 galaxies with 0.025 < z < 0.05 observed in the COLD GASS survey. They found values for the gas depletion time in the range of roughly log(τ dep ) = 8.6 yr to 9.5 yr, with a mean value of τ dep = 1 Gyr (for a Chabrier (2003) IMF, which gives masses comparable to within ∼ 10% of the Kroupa IMF). Both the range and the mean agree very well with our values.
They furthermore found a good correlation of τ dep with M * and with the specific SFR, sSFR = SFR/M * . We tested both correlations with our data. We did not find a similarly good correlation with L K (which is, to a good approximation, proportional to the stellar mass M * ), although our sample covered a similar range of stellar masses, up to several 10 11 M ⊙ . Taking into account all morphological types, we found evidence for a weak trend (correlation coefficient r = 0.25), which completely disappeared when restricting the sample to T = 3 − 5 (r = 0.08).
We could confirm with our data the existence of a correlation between τ dep and sSFR. This correlation is however expected because both τ dep and the sSFR depend on the SFR, and L K and M H 2 show a good correlation. Thus, the respective ratios are expected to correlate.
The molecular-to-atomic gas ratio
The value of M H 2 /M HI in galaxies as a function of morphological type has been controversial. In a large survey of spiral galaxies, Young et al. (1995) mapped a sample of about 300 galaxies with the FCRAO telescope. Young & Knezek (1989) studied the dependence of M H 2 on M HI for that sample. They derived a continuous decrease of M H 2 /M HI from early to late-type galaxies, with mean values above 1 for early types. Casoli et al. (1998) studied the molecular gas properties of a large (n = 528), heterogeneous sample of galaxies, composed of data from the literature and their own observations. Their data consisted both of mapped galaxies and of objects where only the central position had been observed. In agreement with Young et al. (1995), they found a decrease in M H 2 /M HI from early to late-type galaxies, but obtained much lower values for M H 2 /M HI , especially for early-type spirals. In Fig. 18 we compare our results to these two studies and furthermore to the THINGS sample (Leroy et al. 2008) and the Nobeyama survey. Bettoni et al. (2003) (not included in Fig. 18) provide values for a sample of 427 normal galaxies in agreement with Casoli et al. (1998) for T > 0 and higher values (by 0.5 − 1 dex) for T ≤ 0.
The data from all studies show a decrease of the M H 2 /M HI gas mass towards late morphological types. A pronounced step in the ratio takes place at T ∼ 6. The low values found for late-type galaxies could be due to two effects: (i) late-type galaxies are richer in HI (Haynes & Giovanelli 1984) or (ii) they have a lower molecular gas content. A comparison of M HI and M H 2 to the blue luminosity or the optical diameter shows that both effects take place: The ratio M HI /L B (respectively M HI /D 2 25 ) increases for types T ≥ 6 by ∼ 0.2 − 0.3 dex whereas M H 2 /L B (respectively M H 2 /D 2 25 ) decreases by ∼ 0.3 − 0.5 dex (Young & Knezek 1989;Casoli et al. 1998;Bettoni et al. 2003). Our data show the same behaviour. The strong decrease of M H 2 /L B and M H 2 /D 2 25 could be due to a real decrease in the molecular gas mass or due to the fact that late-type galaxies tend to have lower metallicities so that we probably underestimate the true M H 2 by using the Galactic conversion factor.
Although the general trends in the data sets are the same, with a pronounced and continuous decrease of M H 2 /M HI from early to late-type galaxies, there are considerable differences between the different samples (see Fig. 18). The differences are up to an order of magnitude for elliptical, lenticular and early-type spiral galaxies (up to T ∼ 3), and less (up to 0.4 dex) for later-type spirals. There is in general a good agreement between the values found by Casoli et al. and ours, the match being very good for early-type galaxies (up to T = 0), whereas for later-type galaxies our values lie somewhat below. The results from the THINGS survey are also in agreement with the data of Casoli et al., and only slightly higher than our values. The results by Young & Knezek (1989) and the Nobeyama survey give much higher values than the other surveys. This could be due to the sample selection. In the sample of Young & Knezek (1989), FIR or optically bright galaxies were selected, so that a high CO emission can be expected. Furthermore, the Young & Knezek (1989) survey contains objects from the Virgo Cluster, and cluster galaxies are known to be HI deficient on average. Cluster galaxies were excluded in Casoli et al. A noticeable point is that the mean M H 2 /M HI of the AMIGA sample represents, for all morphological types, the lower limit of all samples. For late types, the AMIGA galaxies have the lowest M H 2 /M HI , even though the M H 2 is extrapolated, unlike in other studies that are usually based on a central pointing only. This is in line with the low molecular gas content that we found in comparison to the sample of Bettoni et al. (2003).

Fig. 18. Comparison of M H 2 /M HI as a function of morphological type: Young & Knezek (1989) (red crosses), Leroy et al. (2008) (blue circles) and the Nobeyama survey (magenta squares). The molecular gas masses have all been adapted to a common conversion factor of 2 × 10 20 cm −2 (K km s −1 ) −1 , with no consideration of the helium mass.
5.2. Comparison to studies of interacting galaxies

5.2.1. Is the molecular gas content enhanced in interacting galaxies?
Several studies in the past have concluded that M H 2 in interacting galaxies is enhanced (Braine & Combes 1993;Casasola et al. 2004;Combes et al. 1994), based on a higher value of M H 2 /L B or M H 2 /D 2 25 compared to noninteracting galaxies. However, we found that the ratio M H 2 /L B increases with L B and this trend has to be taken into account when comparing isolated and interacting samples. Indeed, Perea et al. (1997) found no difference in the correlation between M H 2 and L B for samples of isolated, strongly and weakly perturbed galaxies.
We use the sample of Casasola et al. (2004) of interacting galaxies to search for a possible excess in M H 2 with respect to AMIGA galaxies. Their sample includes 153 galaxies with molecular gas data from different sources. Casasola et al. found that the mean ratio of M H 2 /L B for spiral galaxies was, depending on the morphological type, between about 0.2 and 1.0 dex higher than for a sample of 427 noninteracting galaxies from Bettoni et al. (2003) and concluded that M H 2 was enhanced in interacting galaxies. However, since their sample of interacting galaxies is on average 0.5 mag brighter, a higher M H 2 /L B is already expected due to the higher luminosity. We furthermore include the samples of strongly and weakly perturbed galaxies from Perea et al. (1997) in this test. The weakly perturbed sample has 43 galaxies and includes classes 1, 2, and 3 of Solomon & Sage (1988) and class 2 objects from the luminous IRAS sample of Sanders et al. (1991). The strongly perturbed sample has 35 galaxies and includes interaction class 4 of Solomon & Sage (1988), interaction classes 3 and 4 of Sanders et al. (1991) and closely interacting pairs from Combes et al. (1994).
We searched for a possible enhancement of M H 2 in comparison to L B and to L K in these samples, both by applying the deficiency parameter and by comparing the ratios. The mean values are listed in Table 8. We note that the interacting samples are more luminous (by about 0.5 dex) in both L B and L K than the AMIGA sample, so that we have to extrapolate the relations found for the AMIGA galaxies to higher luminosities. For the calculation of the deficiency parameter we therefore use the bisector fit as the best fitting method for extrapolations. We use the fits for the AMIGA T = 3 − 5 subsample to compare to the weakly and strongly perturbed samples WPER and SPER, since spiral galaxies are the better comparison for these actively star-forming objects. For the Casasola et al. sample, information about the morphological types is available and we are able to do the analysis both for the entire sample and for T = 3 − 5, using the values for the corresponding morphological types in AMIGA. When comparing the ratio M H 2 /L B , we restrict the galaxies that we consider to the same range of L B in order to avoid effects caused by the nonlinearity of the correlation. For the roughly linear M H 2 /L K ratio this is not necessary (we checked that no difference in M H 2 /L K was present for low and high L K in the AMIGA sample). Fig. 19 shows M H 2 vs. L B for the three samples compared to the bisector fit of the AMIGA T = 3 − 5 sample. An excess of M H 2 is visible for both the Casasola T = 3 − 5 and the SPER sample. This is confirmed by the mean value of the deficiency (Table 8). When doing a similar analysis based on L K (see Fig. 20), we find again no indications for an enhancement in M H 2 for the WPER sample nor for the total Casasola sample, neither from the deficiency nor from the ratio.
There is a clear excess in M H 2 for galaxies of type T = 3 − 5 from the Casasola sample, quantified both in the ratio, which is about 0.3 dex higher than for T = 3 − 5 AMIGA galaxies, and in the deficiency parameter. Also the SPER sample shows a higher M H 2 /L K ratio than the AMIGA sample, however with no indication of an M H 2 excess from the deficiency parameter. The discrepancy between the two indicators (mean ratio and deficiency) is probably due to the small sample size and the small amount of the molecular gas excess. We suggest relying in this case on M H 2 /L K as the more robust indicator.
In summary, by comparing the molecular gas mass to L K and L B and using both the deficiency and the ratios M H 2 /L K and M H 2 /L B , we found the clearest evidence for an enhancement of M H 2 in the Casasola sample of interacting galaxies of type T = 3 − 5, which is the largest comparison sample in our study. Evidence for an enhancement was also found for the Perea sample of strongly perturbed interacting galaxies (SPER). No evidence for any enhancement was found for the Perea sample of weakly perturbed interacting galaxies (WPER). Based on the present data, it is not entirely clear where the differences between these samples come from. A possible reason could be a lower degree of interaction in the WPER sample. We would like to point out the importance of matching the luminosity range when comparing parameters with a nonlinear correlation, such as M H 2 /L B , and the importance of comparing the same morphological types, since there are generally large differences between early-type and spiral galaxies, in particular for M H 2 /L K .

Fig. 19. The molecular gas mass vs. L B for different samples of interacting galaxies: galaxies of type T = 3 − 5 from the sample of Casasola et al. (2004) and samples of weakly (WPER) and strongly (SPER) perturbed galaxies from Perea et al. (1997). We adapted both the molecular gas masses and L B to our definition. The full line is the regression fit obtained for the AMIGA T = 3 − 5 sample from Table 7.
What is the influence of the fact that the molecular gas masses in these samples are not aperture corrected, as in the AMIGA sample? We do not expect this effect to be very important in interacting galaxies, where the SF and thus the molecular gas usually tend to be more concentrated towards the central regions. However, we can make an estimate of the importance of this effect for the Casasola et al. sample. The median angular diameter of the galaxies of type T = 3 − 5 in this sample is 90 ′′ . If we assume that the galaxies were observed only at the central position with a beam size of 50 ′′ (the typical beam size of the radio telescopes used in surveys), the predicted aperture correction according to our prescription is between a factor of 1.2 and 1.4 (for edge-on and face-on galaxies, respectively). In this case the total molecular gas content would be 0.08 − 0.15 dex higher. Thus, if the molecular gas in the interacting galaxies of the Casasola sample is distributed in a similar way as in isolated galaxies, this difference would strengthen the finding that M H 2 is enhanced in this sample of interacting galaxies. It also shows the importance of mapping the molecular gas in galaxies in order to be able to compare different samples in a reliable way.
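The dex figures quoted here follow from a simple base-10 logarithm of the multiplicative correction factor; a one-line check (the function name is illustrative):

```python
import math

def factor_to_dex(f):
    """Express a multiplicative correction factor in dex (log10)."""
    return math.log10(f)

# Factors of 1.2-1.4 (edge-on vs. face-on, as assumed above):
print(round(factor_to_dex(1.2), 2), round(factor_to_dex(1.4), 2))  # prints 0.08 0.15
```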
Are L FIR and the SFE enhanced in interacting galaxies?
We use the same samples as in the previous subsection to look for a possible enhancement of L FIR in interacting galaxies. In Fig. 21, L FIR is compared to the blue luminosity. We include the best-fit regression line found for the AMIGA T = 3 − 5 sample presented in Lisenfeld et al. (2007) (Table 6, log(L FIR ) = 1.35 × log(L B ) − 3.98). We see a clear excess of L FIR with respect to this regression line for the three samples, strongest for the sample of strongly perturbed galaxies (SPER). The excess can be quantified by calculating the L FIR deficiency, defined in an analogous way as the M H 2 deficiency. The values are listed in Table 8 and confirm the visual impression of an excess in L FIR of about an order of magnitude for the SPER sample. The large excess in L FIR together with a smaller (or no) excess in M H 2 results in a higher value of SFE ∝ L FIR /M H 2 compared to the AMIGA sample for the strongly and weakly perturbed samples. Comparing these values to the results for the AMIGA sample, we find an increase of about a factor of 5 for the strongly perturbed sample and of about 2 for the weakly perturbed sample. The SFE in the Casasola sample is similar to the AMIGA sample.

Fig. 20. The molecular gas mass (adapted to our conversion factor) vs. L K for the same samples as in Fig. 19. The full line is the regression fit obtained for the AMIGA T = 3 − 5 sample from Table 7.
For higher infrared luminosities, the value of L FIR /M H 2 is known to increase strongly. Sanders et al. (1991) studied the molecular gas content in luminous infrared galaxies, ranging from L IR = 10 10 L ⊙ to several 10 12 L ⊙ , and showed that the ratio L IR /M H 2 increases strongly with IR luminosity and with the degree of interaction. Whereas for isolated, low-luminosity (L IR < 10 11 L ⊙ ) galaxies they found values of L IR /M H 2 similar to ours, L IR /M H 2 increases by a factor of about 10 for galaxies with L IR ∼ 10 12 L ⊙ , which are mostly advanced mergers. Similarly high values of L FIR /M H 2 ∼ 50 have been found by Solomon et al. (1997) for ULIRGs.

Fig. 21. The FIR luminosity vs. blue luminosity for different samples of interacting galaxies: Casasola et al. (2004) (only spiral galaxies with type T = 3 − 5 are shown, in order not to overload the plot), and samples of weakly (WPER) and strongly (SPER) perturbed galaxies from Perea et al. (1997). The full line is the regression fit obtained for the AMIGA T = 3 − 5 sample from Lisenfeld et al. (2007).
Conclusions and Summary
We presented molecular gas masses, based on CO observations, for a sample of 273 isolated galaxies and we performed a statistical analysis for a redshift-limited sample of 173 isolated galaxies with recession velocities between 1500 and 5000 km s −1 . The observations covered in most cases only the central position of the galaxies. In order to correct for the missing molecular gas mass outside the observed area, we derived and applied an aperture correction assuming an exponentially decreasing CO disk. We obtained the following results:

1. We compared the molecular gas mass to different parameters (L B , L K , D 2 25 and L FIR ) in order to characterize the relations followed by isolated galaxies and establish a baseline that can be used to find possible deviations for interacting galaxies. We concentrated our analysis on the morphological types T = 3 − 5 (Sb-Sc), which represent the bulk of our sample (51% of the galaxies are of these types) and for which the detection rate of CO (74%) was highest.

2. We found good correlations with these parameters, roughly linear in the case of L K and L FIR and nonlinear for L B and D 2 25 . The tightest correlations are with L FIR and, for spiral galaxies with T = 3 − 5, with L K , whereas the correlation with D 2 25 is the poorest. Due to the nonlinearity of the correlation, the ratio M H 2 /L B changes with L B , which has to be taken into account when comparing it to other samples. We describe a deficiency parameter, defined in analogy to the deficiency parameter for the atomic gas, as the difference between the logarithm of the expected molecular gas mass and the logarithm of the observed molecular gas mass. The expected molecular gas mass can be calculated from any of the parameters studied by us (L B , D 25 , L K , L FIR ) using the correlation coefficients listed in Table 7.

3. We applied these relations and the resulting expressions to three samples from the literature (Perea et al. 1997; Casasola et al. 2004). For the sample of Casasola et al. and the sample of strongly interacting galaxies of Perea et al., we found clear evidence for an enhancement of M H 2 in comparison to L B and L K , while for a sample of weakly interacting galaxies from Perea et al. no difference with respect to the AMIGA sample was found. A possible reason for this difference could be a higher degree of interaction in the first two samples.

4. We derived a mean molecular gas depletion time, τ dep (defined as M H 2 /SFR), of log(τ dep ) = 9.0 yr for spiral galaxies (T = 3 − 5) and a slightly lower value of log(τ dep ) = 8.9 yr for all morphological types (both values for a Kroupa IMF), in reasonable agreement with other studies of nearby galaxies (Bigiel et al. 2011; Saintonge et al. 2011).

5. No good correlation was found between M H 2 and M HI . The ratio between the molecular and the atomic gas mass decreases significantly from early to late-type galaxies, with a difference of up to a factor of 10. The ratio M H 2 /M HI of the AMIGA galaxies is well below 1 for all morphological types, with a mean value of log(M H 2 /M HI ) = −0.72 for galaxies of type T = 3 − 5. We compared our values to those of other noninteracting samples and found that the AMIGA galaxies had the lowest values for all spiral galaxies.

6. We used our data to compare the disk-averaged surface densities of the molecular, Σ H 2 , and molecular+atomic gas, Σ H 2 +HI , to those of the SFR, Σ SFR . We found a good correlation between the logarithms of Σ H 2 and Σ SFR , with a slope close to 1. No correlation of Σ SFR with Σ H 2 +HI was found.
Fig. A.1. CO(1-0) spectra for the galaxies detected with the IRAM 30m telescope. The x-axis represents the recession velocity in km s^-1 and the y-axis the main beam brightness temperature T_mb in K. The spectral resolution is 20.8 km s^-1 in most cases except for some individual galaxies for which a higher or lower resolution was required to clearly show the line. The full (red) line segment shows the line width of the CO line adopted for the determination of the velocity integrated intensity. The dashed (black) line segment is the HI line width at 20% peak level, W_20. An asterisk next to the name indicates a marginal detection. | 2011-08-10T09:32:35.000Z | 2011-08-10T00:00:00.000 | {
"year": 2011,
"sha1": "85bcf28ee830392067b52548fd3e5480f265c9f0",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2011/10/aa17056-11.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "85bcf28ee830392067b52548fd3e5480f265c9f0",
"s2fieldsofstudy": [
"Physics",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
260075680 | pes2o/s2orc | v3-fos-license | An Unusual IgE-Mediated Hypersensitivity: Two Case Reports of Paracetamol Allergy
Paracetamol is one of the most commonly used analgesic and antipyretic agents worldwide, attributed in part to its excellent safety profile when administered at recommended doses. Paracetamol allergy is not common, and the majority of the reactions are related to the pharmacological action of cyclooxygenase 1 inhibition. Selective and Immunoglobulin E (IgE)-mediated hypersensitivity reactions are rare. In this article, the authors report two cases of paracetamol allergy in which the mechanism of IgE-mediated hypersensitivity was demonstrated by positive skin tests and basophil activation tests. We highlight the relevance of identifying the mechanism underlying the reaction since patients with IgE-mediated paracetamol allergies will be able to tolerate non-steroidal anti-inflammatory drugs.
Introduction
Paracetamol is one of the most commonly used analgesic and antipyretic agents worldwide, in part due to the paucity of its adverse effects when administered at recommended doses [1]. At therapeutic doses, paracetamol is a weak cyclooxygenase (COX) 1, COX-2, and COX-3 inhibitor, but it is not a non-steroidal anti-inflammatory drug (NSAID) [2][3][4]. Paracetamol allergy is not common, and most of the reported reactions are related to the pharmacological action of COX-1 inhibition [5]. Most reactions are documented by oral challenge tests [6][7][8], which do not elucidate the underlying pathophysiological mechanism that, in rare cases, has been shown to be IgE-mediated, with positive skin tests and/or detectable serum-specific IgE [9][10].
In this article, we present two rare cases of paracetamol allergy in which the mechanism of IgE-mediated hypersensitivity has been demonstrated and supported by in vivo and in vitro tests.
These data were presented as a poster at the 40th annual meeting of the Sociedade Portuguesa de Alergologia e Imunologia Clínica in October 2019.
Case Presentation
The first case is a 30-year-old female with no history of drug reactions who presented her first episode of generalized urticaria and oropharyngeal tightness starting 10-20 minutes after oral administration of 1000 mg of paracetamol (Ben-u-ron®) as a symptomatic treatment for headaches. A few months later, she reported a second and third episode of generalized urticaria within 10-15 minutes after oral administration of 250 mg of paracetamol, 20 mg of pyrilamine maleate, and 30 mg of caffeine (Antigrippine®) and 1000 mg of paracetamol (Ben-u-ron®), respectively, for symptomatic relief of headaches and coryza. There were no gastrointestinal, pulmonary, or cardiovascular symptoms in any episode. After the third episode, the patient was referred to an Immunoallergology appointment with no instructions to avoid any particular drug. The patient described prior tolerance to acetylsalicylic acid (ASA), ibuprofen, and nimesulide, which she maintained after the three episodes. The investigation of suspected immediate hypersensitivity to paracetamol was performed with skin prick tests (SPT) with injectable paracetamol solution (10 mg/ml) [11], with the appearance of a 4 mm mean diameter wheal (6.5 mm mean diameter wheal for histamine) and erythema with a largest diameter of 15 mm. A serum-specific IgE measurement for paracetamol using the ImmunoCap® method (Phadia, Thermo Fisher Scientific, Sweden) was specially requested and was negative. A basophil activation test (BAT) with a paracetamol solution (1 mg/mL) was also performed. This was positive, with a basophil activation rate of 24.61% (proportion of CD63+ cells) corrected for the negative control (0.70%) and a stimulation index (SI) of 35 (ratio of the CD63% of activated cells to the negative control) (Figure 1). This concentration did not induce any significant basophilic activation in four healthy individuals. The patient was advised to avoid paracetamol to prevent the recurrence of similar episodes.
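For clarity, the stimulation index reported above is simply the ratio of the stimulated to the baseline CD63+ percentage; a minimal sketch (the positivity threshold for a BAT varies by laboratory and is not specified here):

```python
def stimulation_index(cd63_stimulated_pct, cd63_negative_control_pct):
    """SI = % CD63+ basophils after drug stimulation
    divided by % CD63+ basophils in the negative control."""
    return cd63_stimulated_pct / cd63_negative_control_pct

# Case 1: 24.61% CD63+ with paracetamol vs 0.70% in the negative control
si = stimulation_index(24.61, 0.70)   # close to the reported SI of 35
```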
The second case is a 26-year-old female who described a first episode of generalized urticaria after oral intake of 1000 mg of paracetamol (Doliprane®) and a second episode characterized by urticaria and eyelid angioedema 10 minutes after oral administration of 1000 mg of paracetamol (Ben-u-ron®). A third episode of palmar and plantar itching, urticaria, and conjunctivitis started within 10 minutes after oral administration of 500 mg of paracetamol and 65 mg of caffeine (Ben-u-ron Caff®) for headaches. There were no signs or symptoms suggestive of pulmonary, gastrointestinal, or cardiovascular involvement in any episode. During the third episode, she was observed in the emergency department and treated with antihistamines and corticosteroids. She was referred to an outpatient Immunoallergology clinic with an indication to avoid paracetamol. After these three episodes, she tolerated ibuprofen, metamizole, and ASA. The allergy workup included SPT and intradermal tests with an intravenous paracetamol solution.
The intradermal test at a concentration of 0.1 mg/ml (a non-irritant concentration as recommended by the European Network on Drug Allergy [11]) showed an immediate positive result at 15 minutes, with an increase in the papule diameter of 3 mm and surrounding erythema with a largest diameter of 14 mm. Based on the activation rate and the SI, Figure 2 shows a positive BAT for four paracetamol concentrations (0.31, 0.5, 1.0, and 1.25 mg/mL). The serum-specific IgE measurement for paracetamol was not performed. Written informed consent was obtained from both patients, and they were treated according to the ethical standards established in the Declaration of Helsinki.
Discussion
Hypersensitivity reactions to paracetamol are rare. Paracetamol is a weak COX inhibitor, and, therefore, inhibition of prostaglandin synthesis may be a mechanism explaining paracetamol reactions, similar to the mechanism of NSAID reactions [2][3][4]. However, cases of selective paracetamol reactions in NSAID-tolerant patients have been described [5]. In these cases, an IgE-mediated mechanism is possible.
In the few described cases of selective hypersensitivity to paracetamol, the diagnosis is mostly based on the oral challenge test, which does not elucidate the underlying pathophysiological mechanism [6][7][8]. The IgE-mediated mechanism was demonstrated by positive skin tests in only 10 reported cases [5,[12][13][14][15][16][17] (Table 1). The authors describe two clinical cases of IgE-mediated hypersensitivity to paracetamol. Both are clinically characterized by the occurrence of several immediate-onset reactions 10-15 minutes after paracetamol administration, reproducible with repeated exposure, with no involvement of other drugs or allergen ingestion.
Both patients tolerate several NSAIDs, including ASA, demonstrating that COX-1 inhibition is not involved in these reactions. Although it is a limitation of the study, we considered that the benefit/risk ratio did not justify performing oral challenge tests with paracetamol in these patients: despite being the diagnostic gold standard of drug allergy, oral challenges are time- and resource-intensive and expose the patient to the risk of severe reactions. NSAIDs are effective antipyretic and analgesic drugs, making them a valid alternative to paracetamol.
The underlying specific IgE-mediated mechanism was demonstrated in both patients using paracetamol-positive skin tests. We also performed a BAT, which was positive in both patients. This basophil activation pattern confirmed the significant in vitro basophil degranulation specifically induced by paracetamol, which in these patients suggested an IgE-mediated mechanism [18][19][20]. Serum-specific IgE measurement for paracetamol was negative in the first case and was not performed in the second case. This analysis is not used in routine clinical practice, and some authors have reported negative results in patients with a proven IgE-mediated mechanism, which indicates that it has very low sensitivity [14,15]. | 2023-07-23T15:24:27.816Z | 2023-07-01T00:00:00.000 | {
"year": 2023,
"sha1": "bd127e2238596ca533c8988e855ee4456ec92f10",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/case_report/pdf/170849/20230721-12114-1g54meb.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d9d488838cbd9f92bc5c2ec4ea3f807b835e5149",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
17484577 | pes2o/s2orc | v3-fos-license | The effect of gender and age on the association between weight status and health-related quality of life in Australian adolescents
Background: Evidence suggests an inverse relationship between excess weight and health-related quality of life (HRQoL) in children and adolescents; however, little is known about whether this association is moderated by variables such as gender and age. This study aimed to investigate these relationships.

Methods: Participants were secondary school students (818 females, 52% and 765 males, 48%) from 23 secondary schools in Victoria, Australia. Age ranged from 11.0 to 19.6 years (mean age 14.5 years). The adolescent version of the Assessment of Quality of Life (AQoL) instrument (AQoL-6D), a self-reported measure of adolescent quality of life, was administered and anthropometric measures (height and weight) were taken. Weight status was categorized using the Body Mass Index (BMI).

Results: HRQoL was associated with gender and age, but not weight status or socio-economic status, with males and younger adolescents having higher HRQoL scores than their female and older adolescent counterparts (both p < 0.05). There was also a significant interaction of weight status by gender whereby overweight females had poorer HRQoL (-0.06 units) relative to healthy weight females (p < 0.05).

Conclusions: This study contributes to the evidence base around factors associated with adolescent HRQoL and reveals that gender and age are important correlates of HRQoL in an Australian adolescent population. This knowledge is critical to inform the design of health promotion initiatives so they can be tailored to be gender- and age-specific.

Trial registration: Australian Clinical Trials Registration Number 12609000892213.
Background
Obesity is a major health concern. Globally, it has been estimated that 10% of children and adolescents aged five to 17 years old are overweight and, of these, two to three per cent are obese [1]. The most recent data in Australia (2011-2012) revealed the prevalence of overweight and obesity in Australian adults has increased to 63.4% (35.0% overweight, 28.3% obese), and children aged 5-17 years to 25.3% (17.7% overweight, 7.6% obese) [2]. The health implications of obesity include the development of heart disease, cardiovascular disease, hypertension, type 2 diabetes and musculoskeletal problems due to the mechanical stress on the body [3][4][5].
Obesity contributes to the global burden of chronic disease and disability and has been found to be associated with social, economic and cultural factors and satisfaction with life [5,6]. Consequences of obesity also extend to psychological and social aspects of well-being [7] which also are vital to good health. The World Health Organisation Constitution states that health is not merely the absence of disease or infirmity, but a state of complete physical, mental and social well-being [8]. Functional status and well-being is commonly referred to as health-related quality of life (HRQoL) [9] and the impact of diseases (such as obesity), environmental and economic factors such as income and education can all influence HRQoL [10].
HRQoL is a multidimensional measure based upon an individual's satisfaction or happiness in various life domains that affect or are affected by health [11]. Factors such as weight status, age, and gender can affect HRQoL [7,9,12]. With regard to an individual's weight status, recent research in adult populations has suggested that obesity impacts negatively on functional health and wellbeing (HRQoL) [9,10]. Research has expanded to child and adolescent populations and supports the associations found in adult populations, whereby poorer HRQoL was experienced by children and adolescents with excess weight [6,[13][14][15][16][17][18]. Furthermore, studies examining gender effects on HRQoL have revealed female children and adolescents to report lower HRQoL in comparison to their male counterparts [7,11,[19][20][21]. Studies have also revealed an association between increasing age and poorer HRQoL scores across 12 European countries [11], greater physical and psychological well-being in children compared to adolescents [19] and evidence to suggest the higher the age, the lower the HRQoL scores in adolescents [7].
In recent studies, many variables affecting HRQoL are beginning to be examined together. Gender influences the association of HRQoL and weight status, with females with excess weight having lower HRQoL [7,12,21]. A relationship has also been observed between HRQoL and weight status as children and adolescents age with younger overweight adolescents reporting significantly lower HRQoL scores [12]. This pattern has also been observed in students with obesity whereby younger students with obesity have higher HRQoL compared to older students with obesity [16,22]. Subsequent studies analysing the effect of age further, suggest the association of lower HRQoL and obesity is weak and/or absent in very young children (aged 2-5 years) but appears more in school years, and steadily strengthens with age [18].
Evidence suggests an association between HRQoL and weight status; however, less is known regarding gender and age as moderating factors on the association between weight status and HRQoL. HRQoL and BMI may track strongly longitudinally in children growing into adolescents [23]. This is a concern and suggests we need to understand the issues and subsequently intervene early in the life-stage to avoid the development of overweight and obesity, the potential associated chronic health conditions and poorer HRQoL. Much of the research already conducted examining the effects of weight status has largely documented the impacts on adults and children and, to a lesser extent, defined adolescents as a cohort separate from children [6,15,17,24]. There are distinct changes occurring during the growth from a child to an adolescent, namely the physical and psychological changes accompanying the onset of puberty [25][26][27]. Therefore it is essential to separately assess how children and adolescents perceive their own situation [11] and examine any differences in HRQoL as they age.
The present study aims to build the evidence base by investigating 1) the association of weight status (healthy weight vs overweight and obese), gender and age (younger vs older adolescents); on self-reported HRQoL; and 2) examine whether the association of weight status on HRQoL is moderated by either age or gender among a sample of Australian adolescents.
Participants
Participants consisted of 1583 secondary school students recruited from 23 schools in various communities across Victoria, Australia (818 female (51.7%) and 765 male (48.3%)) [28]. The participants were aged from 11.0 to 19.6 years (mean age 14.5, SD = 1.5 years). Schools in the current study were part of a larger health promoting study and selected for involvement as previously described [28]. Briefly, schools within intervention communities were invited to participate in the study, and subsequently comparison schools were selected using stratified random sampling to match intervention school demographics such as school type, school size, level of disadvantage and location [28]. This study utilises baseline data only; consequently, the intervention or comparison status of schools is irrelevant. Parents provided written consent and participants provided verbal consent prior to data collection. Approval for this study was granted by the Deakin University Human Research Ethics Committee (EC98-2008), the Department of Education and Early Childhood Development and relevant Catholic dioceses where appropriate. The project was registered with the Australian Clinical Trials registry (registration number 12609000892213).
Materials and apparatus

Demographics
A combined plain language statement and consent form were used to obtain information about age, gender, Aboriginal and/or Torres Strait Islander ethnicity, migration status and residential postcode which was used to calculate socio-economic status (SES). The 2006 Census data was used to determine the Socio-Economic Index For Areas (SEIFA) score on the index of relative socioeconomic disadvantage [29,30]. This area-level index is based on data collected from the 2006 Australian census of population and housing, and incorporates variables such as income, education, occupation, living conditions, access to services and wealth. A lower score on the index indicates that an area is more disadvantaged [31].
Health-related quality of life
Participants were asked to complete the adolescent version of the Assessment of Quality of Life instrument (AQoL-6D), which measures adolescent HRQoL. Developed in Australia, the AQoL-6D adolescent survey is an adapted version of the AQoL 2 designed for and validated in adults [32,33]; the utility weights have been recalibrated for adolescents [34]. The AQoL theoretical framework was based on the effects of ill health on a person's capacity to function; the health descriptions were established using the WHO's disabilities and impairments framework [32,35]. This self-reported instrument consists of 20 items that produce scores on six domains. Each domain is measured by three to four items pertaining to that domain: physical ability (4 items), social and family relationships (3 items), mental health (4 items), coping (3 items), pain (3 items) and vision, hearing and communication (3 items) [36].
Anthropometry
Height and weight were measured and recorded as previously defined [28]. Briefly, weight and height were measured by trained research staff in a private and sensitive manner behind screens. Each measurement was taken twice, and a third measurement was only taken if the first two measurements were outside defined parameters as previously reported [28]. Heavy clothing and shoes were removed prior to measurement. Weight was recorded to the nearest 0.1 kilogram using calibrated digital scales. Height was recorded to the nearest 0.1 centimetre, using a portable stadiometer with a movable headboard that lowered to touch the crown of the head [28]. BMI was calculated as weight in kilograms divided by height in metres squared (kg/m^2). Standardized BMI scores were used to categorize weight status into healthy and overweight/obese categories using the World Health Organisation Growth Reference BMI cutoff values for 5- to 19-year-old children [37]. The thin category was excluded from the dataset due to low numbers (n = 4).
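A minimal sketch of the BMI calculation described above (the categorization step is deliberately omitted, since the WHO growth reference assigns weight status from age- and sex-specific BMI-for-age values rather than a single fixed cutoff):

```python
def bmi(weight_kg, height_cm):
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    height_m = height_cm / 100.0
    return weight_kg / height_m ** 2

# e.g. a 60.0 kg adolescent measuring 165.0 cm has a BMI of about 22 kg/m^2
```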
Data treatment and analysis
Data were double entered by research staff. Data were cleaned and analysed using Stata 10.0.
AQoL-6D
Weighted item scores from the 20 questions were combined to form dimension scores, which were then combined into a single multiplicative score using a scoring algorithm [38]. This algorithm includes a specific adjustment of the overall multiplicative score for participants who are Australian adolescents [34].
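The published algorithm [38] and its adolescent recalibration [34] define the actual weights; purely to illustrate the general shape of a multiplicative multi-attribute model, a hypothetical sketch follows (the weights and functional form below are not the real AQoL-6D coefficients):

```python
from math import prod

def multiplicative_utility(dimension_scores, weights):
    """Combine per-dimension scores (1.0 = full health on that dimension)
    into one utility via a multiplicative model. Weights are hypothetical."""
    return prod(w * s + (1.0 - w) for s, w in zip(dimension_scores, weights))

# Full health on all six dimensions yields a utility of 1.0;
# any deficit on any weighted dimension pulls the product below 1.0.
```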
Coding of variables
The age variable was dichotomized into younger adolescents (11.00 to 14.99 years) and older adolescents (≥15.00 to 19.00 years) [25].
Descriptive data were summarised as means with standard deviations (±SD), or proportions, for the total population and for male and female subgroups to describe characteristics of the sample. Associations between key demographic variables were tested using Chi-square tests. Separate univariate ANOVAs were used to test for significant differences in AQoL by weight status, gender and age group. Multiple linear regression (MLR) analysis was also used to test for associations between weight status and AQoL score, and effects are reported as unstandardized coefficients (B). Three MLR models were tested: model 1 tested for associations with weight status; model 2 also tested for associations with weight status but with gender, age and area-SES covariates included; model 3 included the same covariates as model 2 but also included the interaction terms of weight status by gender and weight status by age. All models were adjusted for clustering by school. P < 0.05 was considered statistically significant. Note that demographics and surveys were collected from 1583 students; however, anthropometric measurements were taken from 944 students, as indicated by the sample numbers displayed in tables. Two rounds of data collection occurred at each school. Round 1 involved collecting demographic information, survey (AQoL-6D) and anthropometric data from participating students. Due to school-related limits on student access for data collection in round 2, it was only possible to collect demographic information and survey data from these participating students. As data was collected from the same schools at both time points, the characteristics of the sample at round 1 and round 2 are similar.
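Model 3's interaction structure can be sketched with a plain least-squares fit on simulated data (the variable names and simulated effect sizes are illustrative, not the study's data; clustering by school is omitted here because it affects only the standard errors, not the coefficient estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
overweight = rng.integers(0, 2, n).astype(float)
female = rng.integers(0, 2, n).astype(float)
older = rng.integers(0, 2, n).astype(float)
ses = rng.normal(0.0, 1.0, n)
# simulate an AQoL-like outcome where only overweight females score lower
aqol = 0.8 - 0.06 * overweight * female - 0.02 * older + rng.normal(0.0, 0.05, n)

# model 3 design: intercept, main effects, and the two interaction terms
X = np.column_stack([np.ones(n), overweight, female, older, ses,
                     overweight * female, overweight * older])
beta, *_ = np.linalg.lstsq(X, aqol, rcond=None)
# beta[5], the weight-status-by-gender coefficient, should land near -0.06
```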
Results
Characteristics of the adolescent sample are shown in Table 1. Over two-thirds of the student population were <15 years old and approximately one quarter were overweight or obese. The majority of students were born in Australia and only a small proportion were of Aboriginal and/or Torres Strait Islander origin. Over three-quarters of the students were from areas classified into the two lower SES quartiles (<50 th percentile).
The mean total HRQoL scores for healthy weight and overweight/obese adolescents (overall and stratified by gender and age category) are displayed in Table 2. Results of ANOVA revealed a significant difference in HRQoL for gender and age; males and younger adolescents had higher self-reported HRQoL than their female and older adolescent counterparts.
Results of the multiple linear regression analyses are shown in Table 3. There was no association between weight status and self-reported HRQoL (model 1), even when the gender, age and area-SES covariates were included (model 2); gender and age were associated with HRQoL however area-SES or weight status was not. Similarly, in model 3, which included the two interaction terms, weight status was not associated with HRQoL. Of the covariates, gender and area-SES were not associated but age was associated with HRQoL. The interaction of weight status by age was not associated with HRQoL but the interaction of weight status by gender was; overweight females had significantly poorer HRQoL (-.06 units) relative to healthy weight females.
Discussion
The purpose of this study was to investigate the association of weight status, gender and age on self-reported HRQoL, and to examine whether the association of weight status on HRQoL is moderated by either gender or age in a sample of Australian adolescents. The findings from this study indicate that individual variables such as gender and age affect self-reported HRQoL in an Australian adolescent population. In contrast, there was no relationship between weight status and HRQoL. However, when gender was added to the model examining weight status and HRQoL, a significant interaction was discerned whereby gender moderated the association between weight status and HRQoL. Specifically, the interaction indicated that, relative to males, females who were overweight had significantly poorer HRQoL compared to healthy weight females.
Weight status and HRQoL
The lack of associations regarding weight status in this study could be due to the HRQoL assessment tool utilized: perhaps the AQoL-6D is less sensitive to weight-related variations in HRQoL, and a comparison of common HRQoL tools with the AQoL-6D in the same population would be worth further investigation. Relationships between weight status and particular domains of HRQoL assessment tools have been demonstrated [6,18]. Not only do the six AQoL-6D domains differ from other commonly utilised HRQoL tools such as the PedsQL, SF36 and KIDSCREEN52, but analysis of the AQoL-6D domains is not currently possible. This tool has currently only been weighted in an adolescent population for the overall score, not the individual domains (which are adult-weighted) [34]. However, if domain analysis were possible, associations between weight status and domains could possibly be identified.
The lack of an association between weight status and HRQoL is in contrast to other studies that have suggested that overweight and obese adolescents report lower HRQoL compared to healthy weight children and adolescents [9,12,13,[15][16][17][18][39][40][41] and that increasing weight status negatively impacts overall paediatric HRQoL [9]. There are three studies which have not reported a significant association between weight status and HRQoL [42][43][44]; however, potential reasons why this was the case were not discussed by the authors. The systematic review by Tsiros et al. [9] which examined weight status and HRQoL also had the limitation of including clinical treatment-seeking populations, which may have resulted in an overestimation of the strength of the associations presented [23] and a lack of generalisability to the population [12]. The fact that previous studies have reported an association while we did not might reflect a number of methodological differences; specifically, the different HRQoL assessment tools utilised (e.g. self-report and parent-report); different sample sizes and characteristics (e.g. small sample sizes, clinical treatment-seeking populations); and the methodology surrounding anthropometric measurements (e.g. self-reported, parent-reported measurements). Environmental, economic and cultural factors can substantially affect well-being [10,11] and could also be an explanation for the lack of a significant association between weight status and HRQoL in the current study. For example, a meaningful negative association between excess weight and HRQoL was unable to be demonstrated in a population of Fijian students, possibly due to socioeconomic and socio-cultural factors [22].
The current sample population in this study was socio-economically disadvantaged, included some large rural and regional areas, and some communities were culturally and linguistically diverse: all potential influencing factors which might explain the lack of an association and which require further investigation. Additionally, it could be speculated that these communities have different social norms, or that community efforts to support and not stigmatise overweight and obesity have been effective, hence no meaningful association between excess weight and HRQoL. However, despite the cultural differences between Fiji and Australia, it is intriguing that the findings regarding a lack of association between weight status and HRQoL in Fiji are supportive of and strengthen the current study findings in this adolescent population in Australia. Given that the association between excess weight and HRQoL became significant once gender was added into the model, perhaps there is some other unknown factor complicating this relationship yet to be determined.
Gender and HRQoL
This study revealed significant differences in HRQoL for male and female adolescents, with females reporting lower HRQoL. Various reasons have been proposed to explain this association in the literature, including the notions that puberty is physically more extreme for females (e.g. menstruation), that females have different coping mechanisms (coping patterns are inwards for females, outwards for males), puberty hormones, social demands which can be difficult to achieve, the influence of traditional female stereotypes, confronting beauty ideals [19], and greater body image concerns [21]. Bonsergent et al. (2012) propose that girls may be more attuned to and aware of their bodies compared to males, and may seek to be thin and fit due to the ideal body shape portrayed in television, advertising and magazines and the social stigmas attached to obesity [7]. Many studies support the finding of lower HRQoL for females [7,11,[19][20][21].
Age and HRQoL
Older adolescents in this study reported lower levels of HRQoL than younger adolescents. Differences in HRQoL as children and adolescents age could result from the physical and social transition experienced as they grow and age, particularly if combined with transitioning to new schools [19]. Adolescents undergo a process of individuation and autonomy that is very important to them [19]. Adolescents begin to develop their own values and cultural norms, and are challenged with new developmental tasks and a vast number of new experiences, including being socially accepted as peers become more important than adults [19]. Bonsergent et al. (2012) found decreasing HRQoL with increasing age [7], which was further supported by a European multi-country study which demonstrated better HRQoL values in children compared to adolescents [11].
Whilst there was no significant overall association for weight status on HRQoL (model 2), when the weight status-HRQoL association was modelled in conjunction with gender (model 3), a more complex association was found, which suggests that gender is an important moderating factor and strengthens the relationship between weight status and HRQoL. The association for gender found in model 2 was not significant when the interaction term was included (model 3). Together this complex set of findings suggests that the association of weight status on HRQoL is particularly salient for adolescent females but of little impact to males. Other studies have reported similar findings regarding gender [7,12,21], with one reporting that whilst both sexes experience significant decreases in HRQoL associated with obesity, the effect was doubled in magnitude for females [39]. This association can possibly be explained by the different hormones females experience during puberty, their inward coping mechanisms, social demands and the influences of traditional female stereotypes, beauty ideals [19] and body image concerns [21]. With regard to the association between weight status and HRQoL in conjunction with age, we found that age did not moderate this relationship. This could possibly be due to differing tools to assess HRQoL and differing sample populations (i.e. treatment-seeking populations, which limit generalisability to the population). However, previous research has indicated that age plays an important role in moderating the association between weight status and HRQoL [9,12,18]. It has been suggested that early adolescence (<14 years) is a particularly vulnerable period for decreases in HRQoL in overweight/obese adolescents, potentially due to emotional development and awareness of social exclusion [9].
Limitations and future directions
This study adds value to population data and trends for adolescent health in Australia, where little is known regarding the associations of different factors with adolescent self-reported HRQoL. A particular strength of the study is the large sample of adolescent females and males who were not treatment-seeking individuals but were sampled from the general population. It employed a widely used, psychometrically tested and validated Australian-based instrument to measure self-reported HRQoL, which has been specifically calibrated and validated in adolescents [34,39]. The focus of this research on an adolescent population further adds to the development of Australian population norms and health research into the effects of obesity.
Whilst this study extends current Australian health research into obesity and HRQoL by focusing specifically on the developmental phase of adolescence, we acknowledge several study limitations. Whilst the different tools used to collect HRQoL limit comparability with other Australian studies that have investigated children [17,24], the study findings still highlight the importance of investigating these moderating factors further. The self-reported and cross-sectional nature of the data is also a limitation. Future research could include exploring potential associations between HRQoL and subgroups of excess weight (i.e. analysing overweight and obese categories separately) and in-depth examination of family and socio-economic patterns to determine any influences on HRQoL and weight status, such as household finances, familial eating patterns, lifestyle behaviours and relationships. Longitudinal studies would be particularly beneficial to track HRQoL changes over time. Several of the authors on this study have previously demonstrated the effect of ethnicity on child overweight and obesity over and above socioeconomic status [45]. Exploration of ethnic diversity and body image to better understand HRQoL in a cultural context would continue to add to diversity and population norms.
Conclusion
The present investigation found that females had lower HRQoL compared to males, and lower HRQoL was reported for older compared to younger adolescents. Additionally, in this adolescent population, gender acted as a significant moderator of the association between weight status and HRQoL, such that overweight females had poorer HRQoL compared to healthy weight females. More understanding of these associations from longitudinal studies would shed light on the temporal nature of these types of associations, their causal pathways and specific mechanisms [18]. Information from this study will help inform the design of health promotion initiatives so they can be tailored to be gender- and age-specific. Further research into adolescent HRQoL and weight status is beneficial in developing targeted health promotion programs that incorporate evidence-based interventions for adolescents who are in the critical stage of establishing poor lifestyle behaviours and are at risk of developing obesity. Promoting a normal body weight has the potential to improve health and well-being in the young, and affect the risk of disease later in life [18].
Taxonomy of anaerobic digestion microbiome reveals biases associated with the applied high throughput sequencing strategies
In the past few years, many studies investigated the anaerobic digestion microbiome by means of 16S rRNA amplicon sequencing. Results obtained from these studies were compared to each other without taking into consideration the followed procedure for amplicons preparation and data analysis. This negligence was mainly due to the lack of knowledge regarding the biases influencing specific steps of the microbiome investigation process. In the present study, the main technical aspects of the 16S rRNA analysis were checked giving special attention to the approach used for high throughput sequencing. More specifically, the microbial compositions of three laboratory scale biogas reactors were analyzed before and after addition of sodium oleate by sequencing the microbiome with three different approaches: 16S rRNA amplicon sequencing, shotgun DNA and shotgun RNA. This comparative analysis revealed that, in amplicon sequencing, abundance of some taxa (Euryarchaeota and Spirochaetes) was biased by the inefficiency of universal primers to hybridize all the templates. Reliability of the results obtained was also influenced by the number of hypervariable regions under investigation. Finally, amplicon sequencing and shotgun DNA underestimated the Methanoculleus genus, probably due to the low 16S rRNA gene copy number encoded in this taxon.
Results and Discussion
Experimental setup. The microbial communities under investigation were grown in three laboratory scale Continuous Stirred Tank Reactors (CSTR) operated at thermophilic conditions (54 ± 1 °C) and fed with cattle manure. Samples were collected twice from each reactor: the first sampling was performed when the influent feedstock was composed only of cattle manure, the second when the feedstock was supplemented with Na-oleate at a concentration of 12 g/L. Although the addition of long chain fatty acids has a relevant effect on microbial composition [36][37][38][39] , the present study does not focus on the interpretation of biological data, but on the investigation of potential biases that different high-throughput sequencing approaches determine on the taxonomic results obtained. For this reason, biological results are not thoroughly discussed in the paper. To identify these biases it is important to reduce as much as possible the sources of variability that could arise during sample collection, such as lysis of bacterial cells and extraction of nucleic acids. To achieve this goal, both DNA and RNA were extracted using the same kit, and the protocols used for Illumina sequencing were very similar for all the samples (see Materials and Methods for details).
Regarding the bioinformatics analysis, the procedure is depicted in Fig. 1 and a detailed description is reported in the Materials and Methods section. Four main bioinformatics investigations were performed (Fig. 1): (1) evaluation of the influence determined by the database used for training the Bayesian classifier (RDP, Greengenes and SILVA) (Fig. 1, C1) 8,9 ; (2) calculation of the minimum number of sequences needed in order to obtain a solid and reliable taxonomic result (Fig. 1, C2); (3) influence on taxonomic assignment of the merging step of the "forward" and "reverse" reads (Fig. 1, C3); (4) influence of the sequencing method used (amplicon sequencing, shotgun RNA sequencing and shotgun DNA sequencing) (Fig. 1, C4 and C5). The latter bioinformatics check was performed both without preliminary merging of the forward and reverse read pairs (C4) and after the paired-end merging step (C5). The loss of reads due to the merging step led to a marked reduction of the data available for the comparison; for this reason, step "C5" was performed only on the sequencing methods which provided the highest number of sequences mapped on the 16S rRNA gene: amplicon sequencing and shotgun RNA sequencing.
Influence of the training set on taxonomic results. It was previously reported that different training sets can have a remarkable influence on the number of reads that the Bayesian RDP Classifier can assign to the taxonomy and on the abundance calculated for different taxa 40 . In the same study it was also reported that the number of unassigned reads is lower for well-characterized microbial communities (e.g. the human gut microbiome), while it is more problematic when many unknown species are dominant. To test the influence of the training set on the taxonomic investigation of the AD microbiome, the RDP classifier was trained using three different databases: RDP, Greengenes and SILVA. After training, a taxonomic assignment was performed, and the results obtained at phylum level for sample CSTR01a are reported in Fig. 2 (Supplementary Table S1). These results evidenced that at phylum level the highest number of taxonomic assignments was obtained using SILVA and the lowest using RDP (Fig. 2). The limitations associated with the use of the RDP database in the taxonomic investigation of the AD microbiome were also reported in previous studies 40 . Unfortunately, a more detailed analysis performed at taxonomic levels lower than phylum revealed that ∼160,000 sequences were not assigned in a reliable way, but were allocated to "uncultured taxa". Due to the difficulty in filtering out these "fake" assignments and in calculating abundance at low taxonomic levels, the results obtained using Greengenes were selected for all the subsequent analyses.
The biological results obtained are in agreement with previous data, with a vast majority of sequences assigned to Firmicutes, Bacteroidetes and Proteobacteria, the three main phyla characterizing the AD microbiome (Fig. 2) 29,[41][42][43][44] . Interestingly, using RDP release 11, the fraction of sequences assigned to Bacteroidetes and Tenericutes is extremely low, evidencing that results are strongly biased by the training set (Fig. 2). Analysis of the shotgun RNA reads gave strongly different results in comparison to the other two methods (Fig. 2C); this can be due to differences in the transcriptional activity of some microbes or to the 16S rRNA gene copy number, as described in more detail in the section "Comparison between results obtained using different sequencing methods". By considering the reads aligned to the 16S rRNA gene, it is evident that for amplicon sequencing the fraction of those assigned to specific phyla can be higher than 90% (Fig. 2A). For shotgun DNA and shotgun RNA this value is lower than 50% (Fig. 2B and C). This marked difference is due to the random distribution of the shotgun reads and, more specifically, to those aligned to the conserved regions of the 16S rRNA gene. These regions are highly conserved among different taxa and useless for taxonomic analysis. Obviously, in 16S rRNA amplicon sequencing all the reads are localized on hypervariable regions, and this represents an advantage because all of them are taxonomically informative. Minimum number of sequences required for a reliable taxonomic investigation. The 16S rRNA gene(s) represent less than 1% of the entire genome, and for this reason the fraction of shotgun DNA reads assigned to this gene is low 15 (see also Methods). This problem can prevent the investigation of rare taxa; thus, a rarefaction approach was used in order to verify the minimum number of reads needed for a reliable taxonomic analysis in samples obtained from biogas reactors.
The random sampling was performed on 16S rRNA amplicons starting from 1,000 reads, increasing the number stepwise up to 700,000 and repeating the taxonomic analysis five times for each step (Fig. 3). Although the results reported in Fig. 3 indicate that more than 200,000 reads were needed to reach a plateau in the number of taxonomic groups, the most abundant ones were already identified with a lower number of sequences. In particular, 29% of genera, 49% of families, 56% of orders, 60% of classes, and 59% of phyla were identified with 10,000 sequences. These values increased to 62%, 81%, 81%, 79% and 76% using 100,000 sequences. It should be noticed that all the taxonomic groups with abundance higher than 0.1% can be identified (with 10 or more sequences) using as few as 10,000 reads. This finding indicates that, using the shotgun DNA approach and a number of sequences around 60-70 thousand (as in the present study), the analysis can be deep enough to cover most of the taxonomic groups. This result can be considered as a reference to determine the number of 16S rRNA reads needed to analyze the AD microbiome.
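The rarefaction procedure described above (random subsampling at increasing depths, five replicates per step, counting the taxa observed) can be sketched as follows. This is a minimal illustration on an invented toy community, not the study's actual data or pipeline:

```python
import random
from statistics import mean

def rarefy(assignments, depths, replicates=5, seed=0):
    """For each depth, subsample the read-level taxonomic assignments
    without replacement and count distinct taxa, averaged over replicates."""
    rng = random.Random(seed)
    curve = {}
    for depth in depths:
        observed = []
        for _ in range(replicates):
            sample = rng.sample(assignments, min(depth, len(assignments)))
            observed.append(len(set(sample)))
        curve[depth] = mean(observed)
    return curve

# Invented toy community: 3 abundant genera plus 7 rare ones
community = (["Methanoculleus"] * 500 + ["Syntrophomonas"] * 300
             + ["Sphaerochaeta"] * 150 + [f"rare_{i}" for i in range(7)] * 5)
curve = rarefy(community, depths=[10, 100, len(community)])
```

At full depth all ten taxa are always recovered, while shallow depths mostly capture the abundant genera, mirroring the plateau behaviour seen in Fig. 3.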
Comparison between results obtained using different sequencing methods. The main topic of the present study is the investigation of biases in taxonomic results associated with the three sequencing approaches used. It was possible to identify both differences determined by PCR amplification biases (16S rRNA amplicons), and those determined by the expression level of rRNA genes (shotgun RNA sequences).
Since the number of sequences mapped on the 16S rRNA gene is strongly variable among the three approaches used, we performed an initial investigation by subsampling, randomly selecting 60,000 reads for each sample. This number was chosen considering the least numerous sample of the random DNA sequencing (Supplementary Table S2). From this analysis it was found that two main phyla were strongly under-represented in the amplicon analysis in comparison to random shotgun DNA sequencing (Fig. 4A, red bars): Spirochaetes and Candidate Division TM7 (Candidatus Saccharibacteria). On the contrary, the Euryarchaeota phylum was highly represented only in random RNA sequencing, suggesting a very high transcriptional activity (Fig. 4A). The Methanoculleus genus was mainly responsible for this result (Fig. 4C). This peculiar characteristic of methanogenic Archaea was previously reported and associated with the remarkable transcriptional activity of the genes involved in methanogenesis 30,31 . A possible alternative explanation for the results obtained for Euryarchaeota (Methanoculleus) could be the presence of a low number of 16S rRNA genes encoded in the genome. This is a peculiar characteristic of some taxa which can result in an underestimation of the abundance determined with amplicons and shotgun DNA 45 . Differently from this, the gene copy number has a low impact on shotgun RNA, which is influenced only by the expression level. In order to investigate the influence of gene copy number, the average number of 16S rRNA genes was calculated for each taxonomic group 46 . Results indicated that, among the taxa reported in Fig. 4, those with the lowest rRNA gene copy number were Chloroflexi, Tenericutes, Spirochaetes, Verrucomicrobia, Thermotogae and Euryarchaeota.
Among these, only Euryarchaeota had a markedly higher abundance value estimated with the shotgun RNA method, suggesting that gene copy number can influence the results obtained with shotgun DNA and amplicons, but it is not the only determinant. An independent analysis was performed by determining abundance levels for all the taxa with the MetaPhlAn 2 software 47 , which can align the shotgun DNA sequences on unique clade-specific marker genes other than the 16S rRNA. Interestingly, the results obtained with MetaPhlAn 2 correlated well with those obtained with shotgun RNA, confirming that there is a substantial underestimation of Euryarchaeota determined by the utilization of the 16S rRNA marker gene both in amplicon and shotgun DNA sequencing (Supplementary Table S3). Since this bias is absent in the shotgun RNA-based approach, it is probably due to the 16S rRNA gene copy number, but it remains unclear why it does not influence other phyla characterized by a low 16S rRNA copy number.
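As a sketch of how such a copy-number effect could be compensated for, the snippet below divides each taxon's 16S read count by an assumed rRNA gene copy number and renormalizes. The taxon names and copy numbers are hypothetical, and this is not a step of the study's pipeline:

```python
def correct_copy_number(read_counts, copy_numbers, default=1.0):
    """Divide each taxon's 16S read count by its estimated rRNA gene
    copy number, then renormalize to relative abundances."""
    corrected = {taxon: count / copy_numbers.get(taxon, default)
                 for taxon, count in read_counts.items()}
    total = sum(corrected.values())
    return {taxon: value / total for taxon, value in corrected.items()}

# Hypothetical counts: a Firmicutes member with many rRNA operons versus
# a single-operon Methanoculleus; raw read counts overstate the former.
counts = {"Firmicutes_sp": 800, "Methanoculleus_sp": 100}
copies = {"Firmicutes_sp": 8.0, "Methanoculleus_sp": 1.0}
abundance = correct_copy_number(counts, copies)
```

After correction both taxa come out at 50%, whereas the raw read counts would suggest Methanoculleus makes up only about 11% of the community.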
In amplicon sequencing, a substantially lower abundance of genera such as Methanosarcina, Sphaerochaeta, Acetivibrio and Peptostreptococcus was found (Fig. 4C). This result was confirmed both by shotgun RNA and DNA sequencing.
A second analysis was performed only on the two sequencing approaches providing the highest number of 16S rRNA reads (16S amplicon sequencing and shotgun RNA sequencing). In this comparison 700,000 sequences were collected for each sample (Fig. 4B and D) (Supplementary Table S4). This investigation was performed using the sequences obtained after paired-end merging. It should be noted that, in the present study, independent analysis of forward and reverse reads provided very similar results in comparison to the use of merged paired-ends, as reported in Supplementary Dataset S1. Moreover, the threshold of the Bayesian classifier was decreased to 0.5 to verify also the presence of sequences belonging to taxa more difficult to identify. Despite the modifications introduced in the parameters, the results obtained substantially confirmed the previous ones, indicating that the analysis is robust. An interesting difference was found regarding the Armatimonadetes phylum, which was identified at higher abundance in the shotgun RNA sequencing.
Comparison of results obtained investigating different hypervariable regions.
As reported in the previous section, different sequencing approaches revealed discrepancies in the abundance of specific taxa. This can be determined by two main effects: (1) different distribution of the reads on the 16S rRNA gene, and (2) amplification biases of the universal primers used for PCR. Regarding the first effect, it should be evidenced that amplicon sequencing specifically targets the V4 region, while shotgun sequencing allows investigation of multiple hypervariable regions. It was previously reported that different hypervariable regions can provide different results in the taxonomic assignment 48 . To determine the contribution of specific hypervariable regions to the taxonomic assignment, sequences derived from shotgun RNA were classified in six main classes according to their position on the 16S rRNA gene sequence (Supplementary Table S5). Some regions (V1-V2, V5-V6 and V7-V8) were not considered individually, but were investigated in pairs because they were shorter than the average length of the shotgun sequences. The analysis was performed only on shotgun RNA because the number of reads was very high and this allowed a reliable analysis even after subsampling the reads on hypervariable regions. On the contrary, the low number of sequences obtained for shotgun DNA made this analysis unreliable.
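One way to perform this positional classification is to overlap each read's alignment coordinates with hypervariable-region boundaries. The sketch below uses rough, commonly quoted E. coli-based coordinates; these are assumptions for illustration, not the exact boundaries used in the study:

```python
# Approximate hypervariable-region spans (E. coli 16S numbering, indicative
# only); the paired regions follow the six classes used in the text.
REGIONS = [("V1-V2", 69, 242), ("V3", 433, 497), ("V4", 576, 682),
           ("V5-V6", 822, 1043), ("V7-V8", 1117, 1294), ("V9", 1435, 1465)]

def classify_read(aln_start, aln_end):
    """Assign a 16S-aligned read to the region class that overlaps its
    alignment the most; return None for reads on conserved stretches."""
    best, best_overlap = None, 0
    for name, start, end in REGIONS:
        overlap = min(aln_end, end) - max(aln_start, start)
        if overlap > best_overlap:
            best, best_overlap = name, overlap
    return best
```

Reads falling entirely between two hypervariable regions get None, which is consistent with the observation above that reads on conserved regions are taxonomically uninformative.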
An evaluation of the number of sequences assigned to the taxonomy revealed that the best result was obtained using those aligned to regions V1 and V2 (Fig. 5A). This was more evident at low taxonomic levels (e.g. genus and family). Similar evidence was found by previous analyses performed on activated sludge 48 , suggesting that V1-V2 are good target regions for taxonomic analysis. At genus level the percentage of taxonomically assigned reads varied from 33% for the V5-V6 regions to 76% for the V1-V2 regions. As expected, the results obtained considering only sequences positioned on the V4 region were similar to those obtained using amplicons (Fig. 5A). These findings indicate that the hypervariable region used for classification is one of the main determinants of the discrepancies identified using different sequencing methods.
Interestingly, the abundances calculated for the same taxa considering different hypervariable regions were quite different (Fig. 5B); this result was confirmed at different taxonomic levels. To simplify the representation, only results obtained for sample CSTR01a are reported in Fig. 5B and C, but the other samples showed similar results (Supplementary Table S5). This finding also evidences that an analysis restricted to one specific region cannot provide an accurate estimate of the abundance of taxonomic groups in the microbiome. Correlation between the abundance of phyla calculated using amplicons (V4 region) and those obtained with shotgun RNA revealed that the highest correlations were observed with sequences assigned to regions V3 (R 2 0.83), V4 (R 2 0.7) and V5-V6 (R 2 0.89). Correlation values at lower taxonomic levels were variable; for example, at genus level the correlation calculated between amplicons and shotgun RNA sequences assigned to the V4 region was very low (R 2 0.43), while higher values were evidenced considering the V3 region (R 2 0.72) and the V5-V6 regions (R 2 0.99).
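The R² values quoted above are squared Pearson correlations between taxon abundance profiles; computing them requires nothing more than the following (the two example profiles are invented for illustration):

```python
from math import sqrt

def r_squared(x, y):
    """Squared Pearson correlation between two abundance profiles."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return (cov / sqrt(var_x * var_y)) ** 2

# Invented phylum-level abundances (%) from two hypothetical methods
amplicon_v4 = [40.0, 25.0, 10.0, 5.0, 1.0]
shotgun_v3 = [35.0, 28.0, 12.0, 4.0, 2.0]
r2 = r_squared(amplicon_v4, shotgun_v3)
```

Note that a high R² only indicates that two methods rank and scale taxa consistently; it does not imply that either abundance estimate is unbiased.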
The discrepancies in abundance identified at phylum level (Euryarchaeota) and at genus level (Sphaerochaeta, Acetivibrio, Peptostreptococcus and Methanoculleus) were investigated in more detail, considering also the sequencing method used (previous section and Fig. 4). As evidenced in Fig. 5B, the "efficiency level" in taxonomic assignment of the V4 region was not identified as the main determinant of the lower abundance of Euryarchaeota (Methanoculleus) and Spirochaetes (Sphaerochaeta); in fact, the abundance estimated using the V4 region was close to the average value obtained with sequences mapped to other hypervariable regions. The efficiency of the V4 region was found to be more relevant for Peptostreptococcus, Tepidimicrobium and Acetivibrio. Again, this was revealed by the lower abundance value obtained using the V4 region in comparison to the average value obtained using the other hypervariable regions. In particular, for Peptostreptococcus and Acetivibrio, results were biased by an extremely high number of reads aligned on regions V9 or V5-V6 (Fig. 5).
Another finding is that for Euryarchaeota all the hypervariable regions (except V3) confirmed a higher number of reads in shotgun RNA in comparison to amplicons (Fig. 5A). No clear evidence was obtained for the TM7 phylum and for the Sphaerochaeta genus.
Identification of possible amplification biases determined by the universal primers 515 F/806 R.
The presence of possible biases during PCR amplification of the 16S rRNA gene was evaluated by testing the limitations of universal primers 515 F/806 R in efficiently hybridizing to the 16S rRNA sequences. This was performed with a two-step process based on an initial assembly of full-length 16S sequences, followed by a "virtual PCR" useful to investigate potential amplification drawbacks. Although this approach is probably less precise than a real PCR verification, it allowed the verification of a high number of different templates in a short time. The full-length 16S rRNA sequences used for the virtual PCR were obtained by assembling shotgun DNA and RNA sequences with dedicated software and subsequently clustering the resulting sequences at 97% and 99% similarity levels. Since the universal primers used in the present study matched the V4 region, the 16S rRNA sequences obtained from the assembly were tested to verify whether they included the V4 region. This procedure led to the selection of 1397 16S rRNA sequences clustered at 97% similarity and 1876 sequences clustered at 99% similarity.
Sixty-six out of 1397 bacterial sequences clustered at 97% similarity (4.7%), and 81 out of 1876 (4.3%) of those clustered at 99% similarity, did not pass the "virtual amplification" (Supplementary Table S6). The percentage of failures for Archaea was higher (19%), as evidenced by the identification of 6 (out of 32) 16S rRNA sequences which failed the "virtual amplification" check. The taxonomic assignment of the sequences that failed the "virtual PCR" was investigated to verify potential drawbacks on specific taxonomic lineages. The phyla having the highest fraction of amplification failures were Spirochaetes (∼71%) (Sphaerochaeta genus), Bacteroidetes (∼17%) (Thermonema genus), Actinobacteria (∼16%) (Arthrobacter genus) and Euryarchaeota (∼23%) (Methanoculleus genus) (sequences clustered at 97% similarity). Results obtained for sequences clustered at 99% similarity were very similar. These findings suggest that the predicted fraction of 16S genes failing the amplification is limited and that, among the taxa with low estimated abundance in amplicons, only Sphaerochaeta and possibly Methanoculleus were biased by failures at the PCR amplification step. Previous studies reported the presence of biases in 16S rRNA amplification for Spirochaeta, particularly in association with primers "63 F", "1389 R" and "S-D-Bact-0347-a-S-19", "S-D-Bact-0785-a-A-19" 49,50 . These findings suggest that species belonging to this phylum are particularly refractory to 16S rRNA amplicon sequencing.
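The "virtual PCR" check, i.e. testing whether both universal primers hit a candidate full-length 16S sequence, can be sketched with IUPAC-aware regular expressions. The 515F/806R sequences below are the commonly published variants and the template is synthetic, so both should be treated as assumptions rather than the exact inputs of the study:

```python
import re

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "R": "[AG]", "Y": "[CT]",
         "S": "[CG]", "W": "[AT]", "K": "[GT]", "M": "[AC]", "B": "[CGT]",
         "D": "[AGT]", "H": "[ACT]", "V": "[ACG]", "N": "[ACGT]"}
COMP = {"A": "T", "T": "A", "G": "C", "C": "G", "R": "Y", "Y": "R",
        "S": "S", "W": "W", "K": "M", "M": "K", "B": "V", "V": "B",
        "D": "H", "H": "D", "N": "N"}

def to_regex(primer):
    """Expand IUPAC degeneracy codes into character-class regex syntax."""
    return "".join(IUPAC[base] for base in primer.upper())

def revcomp(primer):
    """Reverse complement, preserving degeneracy codes."""
    return "".join(COMP[base] for base in reversed(primer.upper()))

def virtual_pcr(template, fwd, rev):
    """True if the forward primer and the reverse complement of the
    reverse primer both hit the template (zero mismatches allowed)."""
    f = re.search(to_regex(fwd), template)
    r = re.search(to_regex(revcomp(rev)), template)
    return bool(f and r and f.end() <= r.start())

FWD_515F = "GTGCCAGCMGCCGCGGTAA"   # commonly published 515F sequence
REV_806R = "GGACTACHVGGGTWTCTAAT"  # commonly published 806R sequence
# Synthetic template carrying both priming sites around a fake V4 region
template = ("AAAA" + "GTGCCAGCAGCCGCGGTAA" + "ACGT" * 10
            + "ATTAGAAACCCCAGTAGTCC" + "AAAA")
```

A real pipeline would also allow a small number of mismatches and check 3'-end complementarity, so the zero-mismatch criterion here is deliberately strict.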
The main results obtained in the present study and some suggestions to improve the taxonomic analysis of the AD microbiome are summarized in Table 1.
Impact of different sequencing methods on the identification of taxa influenced by oleate addition.
The microbial community under investigation was sampled in two different conditions, before and after the addition of supplemental amounts (12 g/L-feed) of unsaturated fatty acids (in the form of Na-oleate) in cattle manure feedstock. In the previous sections it was demonstrated that the sequencing method had a relevant influence on the abundance of the taxa identified; for this reason, it is expected that the sequencing approach can also influence the identification of taxonomic groups changing in abundance in response to Na-oleate.
By checking changes in abundance at genus level, the results obtained from amplicon sequencing were more similar to those obtained from shotgun DNA (R 2 0.65) (red in Fig. 6), while they diverged from those obtained with shotgun RNA (R 2 0.34) (blue in Fig. 6). This is expected because in amplicon sequencing and in shotgun DNA sequencing the number of reads per OTU is mainly determined by species abundance, while in RNA sequencing it is influenced both by species abundance and by the expression level of the 16S rRNA gene. Shotgun DNA sequencing does not depend on gene-targeted primers or PCR amplification; thus, it is not affected by primer bias or chimeras and for this reason it provides a better representation of the taxonomic abundance in comparison to shotgun RNA sequencing. Despite this, analysis at the transcript level is probably more representative of the activity of microbial species 18 . By checking results at phylum level, most of the changes in abundance determined by Na-oleate were concordant for all three sequencing approaches (Supplementary Tables S7 and S8). Only one discrepancy was evidenced, for Spirochaetes. In this phylum, both amplicons and shotgun DNA evidenced a decreased abundance after Na-oleate addition: 5.97 fold for amplicons (p-value 0.00127) and 4.89 fold for shotgun DNA (p-value 0.0007). In contrast, results obtained from shotgun RNA revealed only a 1.97 fold decrease (p-value 0.56).
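As an illustration of how such fold changes and p-values can be derived from raw read counts, the sketch below uses a simple two-proportion z-test (normal approximation). This is not necessarily the statistical test used in the study, and the counts are invented:

```python
from math import sqrt, erfc

def fold_change_test(k_before, n_before, k_after, n_after):
    """Fold change of a taxon's relative abundance between two conditions
    (k reads assigned to the taxon out of n total 16S reads), with a
    two-sided two-proportion z-test p-value (normal approximation)."""
    p_before, p_after = k_before / n_before, k_after / n_after
    fold = p_before / p_after if p_after > 0 else float("inf")
    pooled = (k_before + k_after) / (n_before + n_after)
    se = sqrt(pooled * (1 - pooled) * (1 / n_before + 1 / n_after))
    z = (p_before - p_after) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided tail probability
    return fold, p_value

# Invented counts: a taxon dropping from 300/1000 reads to 50/1000
fold, p_value = fold_change_test(300, 1000, 50, 1000)
```

Here the taxon shows a 6-fold decrease with a vanishingly small p-value; a taxon with unchanged counts would give a fold change of 1 and a p-value near 1. With small counts, an exact test (e.g. Fisher's) would be preferable to the normal approximation.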
As evidenced in Fig. 6, most of the genera which underwent a modification in abundance after Na-oleate addition were confirmed by the three methods used; for example, both Syntrophomonas and Alcaligenes showed a markedly increased abundance, while Sedimentibacter was dramatically decreased. Results also evidenced some interesting inconsistencies, as for example in the Anaerofustis genus. This taxon was not detected using shotgun RNA sequencing, while both amplicon and shotgun DNA sequencing evidenced a strong decrease after Na-oleate addition. This intriguing result suggests a very low transcriptional activity for this genus, which can be determined only using RNA-seq.
Conclusions
This is the first study performing an in-depth comparative evaluation of three widely used sequencing methods for investigating the taxonomic composition of the anaerobic digestion microbiome. It was demonstrated that classical 16S rRNA amplicon sequencing is biased by two main effects, which are the limited number of hypervariable regions investigated (V4 in the present study) and, to a lesser extent, the failure of universal primers to match all the 16S rRNA targets. These two biases influenced different taxonomic groups and, more specifically, amplification drawbacks were more problematic for Euryarchaeota and Spirochaetes. Interestingly, the analysis of shotgun DNA reads performed using a group of clade-specific marker genes other than 16S rRNA confirmed that the use of this marker gene can lead to an underestimation of the abundance of Euryarchaeota in the AD system. This finding also indicates that the use of multiple marker genes, or analysis at the transcriptional level, could improve the evaluation of abundance for crucial taxonomic groups. Moreover, it is concluded that the absolute abundance level of different taxa is markedly influenced by the selected hypervariable region and also by the set of sequences used to train the Bayesian classifier. These two limitations suggest caution in considering absolute abundance levels of taxa determined using amplicon sequencing results. It was also evidenced that investigation of more than one hypervariable region (including for example V1 and/or V2) can improve the quality of the results. From a general point of view, the abundance estimation obtained using 16S rRNA amplicons correlates well with the corresponding one obtained using shotgun DNA sequencing, while more divergent results were found in the comparison with the shotgun RNA data.
Configuration of the biogas reactors, management and samples collection. Shotgun DNA and shotgun RNA sequences analyzed in this study were obtained from previous studies 32,33 , while 16S amplicons were specifically generated and sequenced for this comparative analysis. A detailed description of the reactors used and of the operational conditions was previously reported 32 . Experiments were carried out in triplicate continuous stirred tank reactors (CSTR), denoted as CSTR01, CSTR02, and CSTR03, having a 1.5 L working volume. All reactors were equipped with magnetic stirrers, and thermal jackets were used to maintain the operating temperature at 54 ± 1 °C. Initially, the reactors were inoculated with thermophilic inoculum obtained from Snertinge biogas plant, Denmark. During the first period the reactors were fed exclusively with cattle manure, and then the influent feedstock was supplemented with sodium oleate (12 g/L-feed). The hydraulic retention time (HRT) of all reactors was kept constant at 15 days. Samples for genomic DNA and RNA extraction (∼15 mL each) were collected from each reactor during the steady state condition of each period (i.e., a period with stable biogas production and a daily variation lower than 10% for at least 5 days). The three samples (biological replicates) obtained from the first period were indicated as CSTR01a, CSTR02a, and CSTR03a, while the samples obtained from the second period were indicated as CSTR01b, CSTR02b, and CSTR03b. DNA/RNA extraction, shotgun DNA, shotgun RNA and amplicon sequencing. Barley residues present in the manure were removed using a 100 μm nylon cell strainer filter as previously described 29 . The filtered samples were centrifuged at 5000 rpm for 10 min and the supernatant was discarded, leaving ∼2 g of material. To avoid RNA degradation, 3.5 mL of phenol/chloroform/isoamyl alcohol (25:24:1, pH 6.7/8.0) (Amresco, Incorporated) were added to the pellet after centrifugation. The samples were immediately processed for extraction of nucleic acids.
Total RNA was extracted from 2 g of pellet using the RNA PowerSoil Kit (MO BIO Laboratories, Carlsbad, CA). Genomic DNA was extracted from the same samples after separation from RNA, using the RNA PowerSoil DNA Elution Accessory Kit (MO BIO Laboratories, Carlsbad, CA). The quality and quantity of the nucleic acids were determined using both a NanoDrop (ThermoFisher Scientific, Waltham, MA) and a Qubit fluorometer (Life Technologies, Carlsbad, CA). RNA integrity was determined with an Agilent Bioanalyzer, genomic DNA integrity was determined using agarose gel electrophoresis, and results were previously reported 32,33. RNA libraries were prepared using the TruSeq RNA Library Preparation Kit (Illumina, San Diego, CA), while genomic libraries were prepared with the Nextera DNA Library Preparation Kit (Illumina, San Diego, CA). The V4 region of the 16S rRNA gene was amplified using universal primers 515F-806R from the same genomic samples used for shotgun sequencing.

The three approaches (16S rRNA amplicon, shotgun DNA, and shotgun RNA sequencing) can be compared as follows:

Number of reads assigned to the 16S rRNA gene
- 16S amplicon: Low (>10,000). All the sequences target the 16S rRNA gene, which allows reliable investigation of the main taxa with few reads.
- Shotgun DNA: Very high (>1,000,000) total reads, but the number of reads assigned to the 16S rRNA gene is low.
- Shotgun RNA: Intermediate (>100,000). Loss of reads due to the presence of transcripts other than the 16S rRNA gene is quite limited.
- Possible suggestions: Increase the number of clade-specific marker genes other than 16S rRNA using dedicated software (e.g. MetaPhlAn).

Hypervariable regions
- 16S amplicon: Analysis targets one or two selected regions. This can reduce accuracy in calculating the abundance of specific taxa (e.g. Peptostreptococcus, Tepidimicrobium and Acetivibrio).
- Shotgun DNA: Analysis targets all the hypervariable regions. This can increase both the efficiency of taxonomic analysis and the evaluation of abundance for most taxonomic groups.
- Shotgun RNA: Same as shotgun DNA.
- Possible suggestions: Increase the number of hypervariable regions under investigation with longer reads (e.g. using PacBio SMRT technology) or by analyzing more than one amplicon. The V1-V2 regions seem particularly promising for improving taxonomic results.

Primers
- 16S amplicon: Universal primers introduce biases (e.g. Sphaerochaeta and Methanoculleus) due to their inability to hybridize to all 16S rRNA molecules.
- Shotgun DNA: No amplification step is needed, which reduces biases in the taxonomic investigation.
- Shotgun RNA: Same as shotgun DNA.
- Possible suggestions: Perform an accurate check for potential biases in 16S rRNA gene amplification. Use more than one pair of universal primers.

Target molecule
- 16S amplicon: This approach targets genomic DNA; transcriptional activity cannot be monitored, and the expression level of the 16S rRNA gene does not influence the analysis.
- Shotgun DNA: Same as 16S rRNA amplicon seq.
- Shotgun RNA: This approach targets RNA molecules and provides insights into the activity of specific taxa. The analysis can be inaccurate in determining the abundance of taxa characterized by high or low activity.
- Possible suggestions: Combine different sequencing approaches to gain insights both on microbial abundance and on their activity.

Conversion of reads from fastq to fasta format was performed with QIIME (1.9.0 + dfsg-0biolinux5) "convert_fastaqual_fastq.py" 6. Chimera sequences were removed using usearch (7.0.1090_i86linux32) (-uchime option) with Greengenes as the reference database. Taxonomic assignment was performed using the Bayesian RDP classifier 7 trained with RDP (v11) 9, Greengenes (13_08), or SILVA (v128) 8. The output of the RDP classifier was further analyzed using self-written perl scripts to accelerate the examination of results at different taxonomic levels. Sequences were taxonomically assigned using the Bayesian classifier without a preliminary clustering-based step. This choice was determined by the random distribution of the shotgun DNA and RNA sequences over the 16S rRNA gene, a characteristic which made the OTU-based approach impractical 12. To overcome this limitation, Illumina reads were directly assigned to taxonomy after removal of the large number of "non-16S sequences" present in the shotgun DNA and RNA samples. This "pre-filtering step", performed with the riboPicker software 52, retained only sequences aligned to the 16S rRNA gene and made the evaluation of the taxonomic results more robust. In the comparison between taxonomic results independently obtained for the PE (For and Rev) sequences, taxonomic assignment was performed using the RDP classifier (trained on Greengenes) and results were compared using self-written perl scripts. Briefly, the RDP results obtained for the two paired-ends were compared and the "lowest" concordant taxonomic assignment was selected. The script calculated the percentage of concordant results for each taxonomic level and reported as output a file with the same format as the RDP classifier software.
Analysis of the minimum number of reads providing a reliable taxonomic result was performed starting from the RDP classifier output (trained on the Greengenes DB) and using self-written perl scripts to calculate the taxonomic results on a subset of randomly chosen sequences (perl "rand" function). The script allows the selection of parameters such as "repeat random resampling N times" (selected N = 5 resamplings) and "increase the number of reads by K at each step" (selected K = 1,000 between 0 and 10,000; K = 10,000 between 10,000 and 100,000; K = 100,000 between 100,000 and 700,000).
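The resampling analysis described above can be sketched in a few lines (a Python re-implementation of the idea, not the authors' perl script; the taxonomic labels used below are hypothetical):

```python
import random
from collections import Counter

def resample_profile(assignments, sample_size, n_repeats=5, seed=0):
    """Estimate genus-level relative abundances from random read subsets.

    `assignments` is a list of per-read taxonomic labels (e.g. RDP genus
    calls).  Mirrors the repeated-random-resampling idea from the text.
    """
    rng = random.Random(seed)
    profiles = []
    for _ in range(n_repeats):
        subset = rng.sample(assignments, sample_size)  # without replacement
        counts = Counter(subset)
        profiles.append({taxon: n / sample_size for taxon, n in counts.items()})
    return profiles

def max_deviation(profiles, reference):
    """Largest absolute abundance deviation from the full-data profile."""
    return max(abs(p.get(t, 0.0) - f)
               for p in profiles for t, f in reference.items())
```

Plotting `max_deviation` against increasing `sample_size` reproduces the saturation-style analysis: once the deviation stops shrinking appreciably, adding more reads no longer changes the taxonomic profile.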
Assembly of shotgun reads assigned to the 16S rRNA gene was performed using EMIRGE 54. Sequences were clustered at 97% and 99% similarity using the QIIME (1.9.0 + dfsg-0biolinux5) "pick_otus.py" software 6, and the taxonomy of the 16S rRNA sequences obtained was assigned using the RDP classifier trained on the Greengenes database. The presence of the V4 region in the 16S rRNA sequences was verified by aligning each 16S rRNA sequence with nhmmer (v3.1b1) (parameter -E 0.0001) against two hidden Markov models (bac.ssu.rnammer.hmm, arc.ssu.rnammer.hmm) obtained from the RNAmmer (v1.2) software 55. Sequences including the V4 region were recovered by considering the start/end positions of the alignment on the hidden Markov models. A "virtual PCR" was performed using MFEprimer-2.0 56 in order to verify the ability of the universal primers to match each sequence. The MFEprimer-2.0 software was launched on each 16S rRNA sequence using an automated pipeline which also parsed and verified the output files obtained from the analysis, selecting the sequences which passed the virtual amplification test. For each taxon, the number of sequences failing the virtual amplification test was compared with the total number of sequences identified for the same taxon.

Figure 6. Abundance ratio (log 2) determined for the 100 most abundant genera before and after Na-oleate addition. The x and y axes report the log 2 ratios obtained by dividing the abundance level of genera "after" by "before" Na-oleate addition. Blue dots represent the comparison between the log 2 ratio determined for amplicon sequencing (x-axis) and the log 2 ratio determined for shotgun RNA sequencing (y-axis) (60,000 subsampled sequences). Red dots represent the comparison between the log 2 ratio determined for amplicon sequencing (x-axis) and the log 2 ratio determined for shotgun DNA sequencing (y-axis). | 2018-04-03T00:48:35.699Z | 2018-01-31T00:00:00.000 | {
"year": 2018,
"sha1": "74f516a2868d857469ab1462b0c50c6cf7cc8792",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-018-20414-0.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "84969f93b405c29d69e64206e07841ecfdbd04b6",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
427432 | pes2o/s2orc | v3-fos-license | A Clean Slepton Mixing Signal at the LHC
In supersymmetric scenarios where the scalar tau lepton is stable or long-lived, a search for the decay mode chi0 --> stau + mu at the LHC has good sensitivity to flavor mixing in the scalar lepton sector. We demonstrate that sensitivities to the mixing angle at the level of sin(theta) = 0.15 are possible with an integrated luminosity of 100 fb^{-1} if the total production cross section of supersymmetric particles is of the order of 1 pb. The sensitivity to the mixing parameter can be better than the experimental bound from the tau --> mu + gamma decay, depending on model parameters.
If new physics contains a charged stable particle, such as the scalar tau lepton (τ̃) in supersymmetric (SUSY) models, it provides a very clean signal at the LHC experiments. Once they are produced, most of them penetrate the detectors and leave charged tracks just like muons.
By measuring the velocity at the muon system, we can easily distinguish them from the muon background. A very precise mass measurement is possible by combining this with the momentum measurements [1].
Scenarios with such a charged stable or long-lived particle have sounded exotic and been regarded as alternative possibilities. However, recent serious considerations of SUSY phenomenology have shown that they are indeed theoretically motivated [2]. The presence of such particles does not immediately contradict cosmological history; there are interesting cosmological scenarios and even motivations for such a long-lived particle [3,4,5,6]. If this is the case, we will have new kinds of signals in new-physics search experiments.
In this paper, we propose a search strategy for flavor mixing in the scalar lepton sector in the stable (or long-lived) τ̃ scenario at the LHC. In the presence of flavor mixing, we will have the decay mode of the neutralinos χ̃⁰ → τ̃µ. By looking for sharp peaks in the τ̃-µ invariant mass, we show that we will be able to discover lepton flavor violation for Γ(χ̃⁰₁ → τ̃µ)/Γ(χ̃⁰₁ → τ̃τ) ≃ tan²θ₂₃ ≃ 10⁻², where θ₂₃ is the slepton mixing angle.
There have been many studies on lepton flavor violation at the LHC and e⁺e⁻ colliders assuming the neutralino to be the lightest SUSY particle. The possibility of observing e±µ∓ + missing E_T final states at e⁺e⁻ colliders was pointed out in Ref. [7]. The correct treatment of the process including quantum interference (slepton oscillation) was studied in Ref. [8], and the discussion was extended to the LHC experiments and CP violation in Ref. [9]. Following those papers, LHC studies on searches for the decay processes χ̃⁰₂ → l±ᵢ l∓ⱼ χ̃⁰₁ with i ≠ j have been done in Refs. [10]. Sensitivities of O(0.1) for the mixing angles have been derived in various SUSY models.
Lepton flavor violation in the long-lived τ̃ scenario has also been studied. In Ref. [11], the decay of τ̃ into e or µ and a gravitino is studied under the assumption that a significant number of τ̃'s will be collected at the LHC or future linear collider experiments by placing a massive stopper material close to the detectors [12]. A linear collider study with a long-lived τ̃ has also been done in Ref. [13], where it is proposed to search for lepton flavor violating final states such as (e⁺τ±τ∓)τ⁻ through slepton pair production processes. Very good sensitivities of sin θ ∼ (a few)×10⁻² are reported in both works. In the following, we study the LHC signals of slepton flavor mixing without new detectors or future colliders. It therefore serves as the first search strategy that can be carried out immediately after the LHC starts if τ̃ is stable or long-lived.
In order to estimate the sensitivity, we performed the following Monte Carlo simulation. We used the model of Ref. [2], where the spectrum of the SUSY particles is parametrized by four quantities: the µ parameter and M_gaugino control the Higgsino mass and the gaugino masses, respectively.
The messenger scale M_mess and the number of messenger particles N_mess determine the masses of the scalar particles relative to the gaugino masses. We have chosen two parameter points where τ̃ is the lightest SUSY particle (except for the gravitino):

Model I: µ = 300 GeV, M_gaugino = 900 GeV, M_mess = 10¹⁰ GeV, N_mess = 1;
Model II: µ = 500 GeV, M_gaugino = 900 GeV, M_mess = 10⁸ GeV, N_mess = 1.

With these SUSY spectra, we generated 40,000 SUSY events for each model using the Herwig 6.50 event generator [14] with the CTEQ5L parton distribution function [15]. This corresponds to an integrated luminosity of 33 fb⁻¹ (46 fb⁻¹) at the LHC for Model I (Model II). We set the mixing angle of the right-handed sleptons to sin θ₂₃ = 0.33, for which the branching ratio of the lightest neutralino satisfies Γ(χ̃⁰₁ → τ̃µ)/Γ(χ̃⁰₁ → τ̃τ) ≃ 0.1. Heavier neutralinos do not have significant branching ratios for the χ̃⁰ → τ̃µ decays because the amplitudes are suppressed by the Yukawa coupling constant of the muon. For SUSY spectra in which the lightest neutralino is almost the Bino and the lighter τ̃ is almost right-handed (which is the case in the above two models), the following method is not sensitive to mixings among the left-handed sleptons.
The events are passed through the detector simulator AcerDET 1.0 [16], where muon momenta are smeared according to the resolutions of the ATLAS detector. We have also smeared the momenta and velocities of the τ̃'s according to the resolutions obtained in Ref. [17], parametrized by k₁ = 0.0118%, k₂ = 2% and k₃ = 89%, with the momentum p in GeV; the velocity resolution is taken from the same reference. We have ignored the η dependence of the resolutions. Also, in the following analysis, we assume that the τ̃ mass is known with good accuracy by the method of Ref. [1].
We have followed the strategy of Ref. [1] for the identification of τ̃. We require the candidate tracks to be within |η| < 2.4, with P_T > 20 GeV and βγ_meas > 0.4. The cut on the measured velocity ensures that the τ̃ reaches the muon system. A consistency condition |β′ − β_meas| < 0.05 is imposed, where β′ is the velocity calculated from the momentum, β′ = p/√(p² + m²_τ̃). By also requiring the measured velocity of at least one candidate τ̃ to be within 0.4 < βγ_meas < 2.2 for each event, this selection strategy reduces the background from mis-identified muons to a negligible level [1]. We therefore ignore in the following analysis the background from standard model processes as well as from muons in SUSY events.
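The per-track selection can be sketched as follows (a simplified, track-level illustration of the quoted cuts; the stau mass and the kinematic values in the usage note are assumptions for the example, and the event-level 0.4 < βγ_meas < 2.2 requirement is not included):

```python
import math

M_STAU = 185.0  # GeV; assumed stau mass, taken as known per the method of Ref. [1]

def is_stau_candidate(p, pt, eta, beta_meas, m_stau=M_STAU):
    """Apply the track-level cuts described in the text.

    p and pt are in GeV; beta_meas is the velocity measured at the muon
    system.  A sketch, not the actual ATLAS selection code.
    """
    if abs(eta) >= 2.4 or pt <= 20.0:
        return False
    # measured beta*gamma must exceed 0.4 so the track reaches the muon system
    gamma_beta = beta_meas / math.sqrt(1.0 - beta_meas**2)
    if gamma_beta <= 0.4:
        return False
    # consistency between velocity from momentum and measured velocity
    beta_prime = p / math.sqrt(p**2 + m_stau**2)
    return abs(beta_prime - beta_meas) < 0.05
```

A slow, heavy track with consistent velocity passes, while a relativistic muon-like track (β_meas ≈ 1) fails the consistency cut because its momentum implies a much smaller β′ for the stau mass hypothesis.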
In order to look for lepton flavor violating neutralino decays, we selected events with exactly one isolated muon with P_T > 20 GeV and at least one opposite-sign τ̃ candidate. If there are two opposite-sign τ̃-µ pairs, we use both of them in the analysis. The invariant mass M_τ̃µ is calculated for each candidate pair.
The invariant mass distribution is shown in the left panel of Fig. 1, where we can clearly see a peak at the lightest neutralino mass (187 GeV). The shape and normalization of the background distribution can be obtained from the M_τ̃e distribution directly from the data (shaded histogram). By subtracting this estimated background, we obtain the histogram in the right panel, where we see that the background is successfully subtracted. Therefore we can reliably use the M_τ̃e distribution as the expected background. We can also find an excess around the mass of the lighter Higgsino-like neutralino (∼ 276 GeV). Most of the background originates from the χ̃⁰ → τ̃τ decay followed by τ → µνν. Although the signal region is at the kinematic endpoint of this background for each neutralino, background τ̃µ pairs from heavier neutralinos fall into the signal region. There are also backgrounds from leptonic decays of W bosons. The numbers of such background events depend on the cascade decay patterns of the heavy SUSY particles.
We fitted the two peaks in the right panel of Fig. 1 with Gaussian functions and defined the signal regions to be the 1σ regions around the peaks: |M_τ̃µ − 185.2 GeV| < 3.8 GeV and |M_τ̃µ − 276 GeV| < 10 GeV. In the case where there are not enough events to find the neutralino masses from the M_τ̃µ distribution, one should look for edges in the invariant mass of the τ̃ and the τ-jet, M_τ̃τj, for the neutralino mass measurements, as is done in Ref. [2]. There are S + B = 584 events in the signal region, whereas the number of expected background events in the signal region is B = 374. Therefore we obtain a 9σ excess with 33 fb⁻¹ of data.* Normalizing to an integrated luminosity of 100 fb⁻¹, the number of signal events needed for a 5σ discovery is 181, corresponding to sin θ₂₃ > 0.18.
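The quoted significances and the 181-event threshold are consistent with a simple S/√(S+B) counting estimate, which we assume here (the text does not state the significance definition explicitly):

```python
import math

def significance(s_plus_b, b):
    """Simple S/sqrt(S+B) significance estimate (an assumption about the
    counting used in the text, not necessarily the authors' exact method)."""
    s = s_plus_b - b
    return s / math.sqrt(s_plus_b)

def signal_for_discovery(b, n_sigma=5.0):
    """Signal count S solving S/sqrt(S+B) = n_sigma.

    From S^2 - n_sigma^2 * S - n_sigma^2 * B = 0, taking the positive root.
    """
    n2 = n_sigma**2
    return (n2 + math.sqrt(n2**2 + 4.0 * n2 * b)) / 2.0
```

With S + B = 584 and B = 374 this gives about 9σ, and scaling B linearly from 33 fb⁻¹ to 100 fb⁻¹ reproduces the 181 signal events needed for a 5σ discovery.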
We repeat the same analysis for Model II and the result is shown in Fig. 2. There is only one peak, associated with the Bino-like neutralino, because the number of Higgsinos in cascade decays is reduced and the branching ratios B(χ̃⁰₃,₄ → τ̃τ) are suppressed for the relatively heavy Higgsinos. A slightly better sensitivity than for Model I is obtained. In the signal region, |M_τ̃µ − 193.0 GeV| < 6.9 GeV, we find S + B = 539 and B = 238 for 46 fb⁻¹. Normalizing to 100 fb⁻¹ of data, we obtain a 5σ sensitivity of sin θ₂₃ > 0.15.
If no peak is found due to small mixing angles, one can put a bound on the branching fraction (or equivalently the mixing angle). This requires counting the χ̃⁰ → τ̃τ events, which involves measuring the efficiency of the τ identification. That will be the dominant uncertainty in setting the experimental bound. As far as the order of magnitude is concerned, the sensitivity will be at the level of sin θ₂₃ ∼ 0.1 − 0.2.
The search for the τ → µγ decay has already put a stringent bound on a combination of various SUSY parameters involving the slepton mixings. Although a model-independent comparison is not possible, we can get a sense of the sensitivities to the mixing parameter by calculating the τ → µγ branching ratio with a particular parameter set. We have done that for Model I and Model II. Compared with the current experimental bound, 4.5×10⁻⁸ [18], the LHC sensitivities can be much better (or worse) depending on model parameters. One should also note that the measurement of Γ(χ̃⁰ → τ̃µ)/Γ(χ̃⁰ → τ̃τ) at the LHC will directly probe the slepton mixing parameter. Therefore, measuring/constraining the branching fractions of both processes will be important for understanding the flavor structure of SUSY models. We summarize the results in Table 1.

* This level of excess is somewhat optimistic given that we know the correct locations of the peaks. In the actual experimental situation, the peak locations (the neutralino masses) will be measured by looking for the endpoint locations of the invariant mass M_τ̃τj. The uncertainty of this measurement is estimated to be at most of order 5% [2], taking into account the effects of fake τ-jets and the uncertainties in the calibration of the τ-jet energies. If we use the central values given in Table 2 of Ref. [2] and the 5% errors for the definition of the signal regions, i.e., 194 ± 10 GeV and 279 ± 14 GeV, we obtain about a 7σ excess, where most of the significance is a contribution from the second peak.
A similar analysis will go through for a χ̃⁰ → τ̃e search at the LHC. Also, if a linear collider is built in the future, searches for the decay mode χ̃⁰ → τ̃µ through a neutralino pair production process may give a better sensitivity to the mixing angle, as the backgrounds from heavier neutralinos and W bosons will be under better control. | 2008-03-04T17:21:46.000Z | 2008-01-23T00:00:00.000 | {
"year": 2008,
"sha1": "e6e3975af7a8b2a9c6adf26a1e4da3dcad75d007",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0801.3486v2.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e6e3975af7a8b2a9c6adf26a1e4da3dcad75d007",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
235266235 | pes2o/s2orc | v3-fos-license | Analysis and classification of main risk factors causing stroke in Shanxi Province
In China, stroke has been the leading cause of death in recent years. It is a major cause of long-term physical and cognitive impairment, which puts great pressure on the National Public Health System. Evaluation of the risk of getting stroke is important for the prevention and treatment of stroke in China. A data set with 2000 hospitalized stroke patients in 2018 and 27583 residents surveyed from 2017 to 2020 is analyzed in this study. Due to data incompleteness, inconsistency, and non-structured formats, missing values in the raw data are filled with -1 as an abnormal class. With the cleaned features, three models of the risk level of getting stroke are built using machine learning methods. The importance of the "8+2" factors from the China National Stroke Prevention Project (CSPP) is evaluated via decision tree and random forest models. Beyond the "8+2" factors, the feature importance and SHAP values of lifestyle information, demographic information, and medical measurements are evaluated and ranked via a random forest model. Furthermore, a logistic regression model is applied to evaluate the probability of getting stroke for different risk levels. Based on census data from both communities and hospitals in Shanxi Province, we investigate different risk factors of getting stroke and their ranking with interpretable machine learning models. The results show that Hypertension (systolic blood pressure, diastolic blood pressure), Physical Inactivity (lack of sports), and Overweight (BMI) are ranked as the top three high-risk factors of getting stroke in Shanxi Province. The probability of getting stroke for a person can also be predicted via our machine learning model.
Introduction
Stroke, an acute cerebrovascular disease, is caused by brain tissue damage due to abnormal blood supply to the brain with cerebrovascular blockage. It includes hemorrhagic stroke and ischemic stroke. According to the Global Burden of Diseases, Injuries, and Risk Factors Study and other research [1][2][3], stroke is the third leading cause of death in the world and the first in China. Recent results from the National Epidemiological Survey of Stroke in China (NESS-China) [4] report the prevalence of stroke in China during 2012-2013. Investigation into the risk factors of getting stroke is essentially important for the prevention of stroke. Research shows that risk factors can be divided into two categories: reversible factors and irreversible factors.
Reversible factors mainly refer to unhealthy lifestyles such as smoking, excessive alcohol consumption and physical inactivity, while irreversible factors mainly refer to chronic diseases such as hypertension, diabetes, and hyperlipidemia. A number of studies on stroke risk analysis have been carried out for European and American populations [5,6]. However, they cannot be directly applied to the Chinese population due to racial differences.
In China, stroke-related research has mostly been carried out on risk prediction models with pathogenic factors. The most widely used one is the 10-year risk prediction model for cardiovascular and cerebrovascular diseases, which gives the probability of stroke and coronary heart disease incidence. The CHINA-PAR project (Prediction for ASCVD Risk in China), led by Gu Dongfeng's team [7], proposed a revised model which considers not only the 10-year risk but also a lifetime risk assessment. By analyzing data on the incidence of stroke in 32 of the 34 provincial regions of China, Xu et al. [8] concluded that there is a stroke belt in north and west China.
In recent years, machine learning methods have been applied to stroke prediction. In 2010, a combination of a Support Vector Machine and a Cox Proportional Hazards Model was proposed by Khosla et al. [9]. Benjamin [10] implemented an interpretable method using Decision Lists with Bayesian Analysis to quantify the probability of stroke. Chi-Chun Lee's team [11,12] compared multiple methods, including Deep Neural Networks, for stroke prediction with Electronic Health Records (EHR). In their research, they focus on patients' 3-year and 8-year stroke rates. However, few of these studies modeled the early screening and prevention of stroke.
Evaluation of the risk of getting stroke is important for the prevention and treatment of stroke in China. The China National Stroke Prevention Project (CSPP) proposed "8+2" main risk factors for identifying Chinese residents' risk level of getting stroke [13][14][15]. With the proposed "8+2" main risk factors, the risk level of getting stroke can be classified as:
1. High risk: having at least three factors from factor 1 to 8, or one of a and b;
2. Medium risk: having fewer than three risk factors from factor 1 to 8, with at least one being factor 1, 2 or 3;
3. Low risk: having fewer than three risk factors from factor 4 to 8.
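The three rules above can be written directly as a small function (a sketch that refers to the factors only by the indices used in the text, since the factor list itself is not restated here):

```python
def risk_level(factors_1_to_8, has_a=False, has_b=False):
    """Classify stroke risk per the '8+2' rules quoted in the text.

    `factors_1_to_8` is a set of present factor indices (1..8); `has_a`
    and `has_b` are the two additional history factors ('a' and 'b').
    The low-risk rule (fewer than three factors among 4..8) is covered
    implicitly once the first two rules fail.
    """
    if len(factors_1_to_8) >= 3 or has_a or has_b:
        return "high"
    if factors_1_to_8 & {1, 2, 3}:  # fewer than three, at least one of 1, 2, 3
        return "medium"
    return "low"
```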
However, the ranking of the risk factors may differ across provinces. Based on census data from both communities and hospitals in Shanxi Province, this paper investigates different stroke risk factors and their ranking. It shows that hypertension, physical inactivity (lack of sports), and overweight are ranked as the top three stroke risk factors in Shanxi. The probability of getting a stroke is also estimated through our interpretable machine learning methods. The study provides theoretical support for stroke prevention and control in Shanxi Province.

Each record in both datasets contains 177 features, providing not only information on the "8+2" risk factors but also patients' other information. Data cleansing is a preparation step in data analysis that removes or corrects corrupt or inaccurate data. The raw data in the above datasets needs to be cleaned due to data incompleteness, inconsistency and non-structured formats, which may lead to a failure of feature engineering. In this paper, missing values of a feature are filled with -1 as an abnormal class. If over 60% of the values in a column are missing, we delete the column, since it cannot provide much information. Inconsistent values are found and corrected with prior medical knowledge. For instance, diastolic blood pressure should be lower than systolic blood pressure.
Dataset and Preprocessing
After the data cleansing, there are in total 23289 records (low: 9718, mid: 6742, high: 5610) with 32 features remaining, as shown in Table 2.
Models
Decision-Tree is a classic non-parametric machine learning algorithm. A tree is created by learning decision rules inferred from the data features. Starting from the top root node, data are split into different internal nodes according to certain cutoff values of the features, and finally arrive at the terminal leaf nodes, which give the final classification result. ID3 [16] and CART [17] are classic Decision-Tree algorithms, which employ Information Gain and Gini Impurity from entropy theory [18] as measurements for making the best splitting rules.
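Both impurity measures and the resulting split criterion can be computed in a few lines (a minimal sketch, independent of any particular library):

```python
import math
from collections import Counter

def gini(labels):
    """Gini impurity used by CART: 1 - sum_k p_k^2."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def entropy(labels):
    """Shannon entropy used by ID3, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def split_gain(parent, left, right, impurity=gini):
    """Impurity decrease of a candidate split, weighted by child sizes."""
    n = len(parent)
    weighted = (len(left) / n) * impurity(left) + (len(right) / n) * impurity(right)
    return impurity(parent) - weighted
```

A tree builder simply evaluates `split_gain` for every candidate cutoff and keeps the split with the largest gain.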
Random-Forest is a machine learning algorithm proposed by Leo Breiman [19] in 2001. Instead of using one decision tree, which is non-unique and may exhibit high variance, a random forest generates a number of individual decision trees operating as a committee. The bootstrapping technique is used to train the individual decision trees in parallel on different sub-datasets and feature subsets, with random sampling with replacement. The final classification decision is aggregated by voting and averaging. With the wisdom of crowds, a random forest can easily overcome the overfitting problem and reduce the model bias caused by data imbalance, and thus shows good generalization. The logistic model is a generalized linear model which is widely used in data mining. It assumes that the dependent variable y follows a Bernoulli distribution and introduces non-linear factors through the Sigmoid function:

σ(z) = 1 / (1 + e^(−z)),

where z = β₀ + β₁x₁ + · · · + βₙxₙ and n is the number of features.
Assume that y represents a binary outcome {0, 1}, X is the array of features, and βᵢ is the coefficient of feature xᵢ [20]. The coefficients in logistic regression are called log odds and are used in the logistic regression equation for predicting the dependent variable y from the independent variables X. Let p = P(y = 1); then

log(p / (1 − p)) = β₀ + β₁x₁ + · · · + βₙxₙ.

In practice, logistic regression can be used in many applications, for instance advertising and disease diagnosis, as it can provide the probability of a user buying a certain product or of a patient suffering from a certain disease. In our case, we use the "8+2" risk factors and residents' lifestyle factors as input and output the probability of stroke incidence, which provides a forward-looking prediction.
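The sigmoid and log-odds equations combine into a one-line predictor; the feature names and coefficient values below are purely illustrative, not the fitted model from this study:

```python
import math

def stroke_probability(features, coefficients, intercept):
    """Predicted probability p = sigmoid(beta0 + sum_i beta_i * x_i).

    `features` and `coefficients` are dicts keyed by feature name; the
    names and values used in the test are hypothetical examples.
    """
    z = intercept + sum(coefficients[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))
```

The probability increases monotonically in any feature with a positive coefficient, which is what makes the fitted log odds directly readable as risk contributions.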
Model's Interpretation
The model's interpretability and explanations are crucial for medical data analysis: a medical diagnosis system must be transparent, understandable, and explainable. With these properties, the doctor and the patient can know how the model makes decisions, which features are important, and how the features affect the model's result [21,22]. In this section, we mainly introduce feature importance, permutation importance and the SHAP value, which help interpret the model.
Feature importance, also called Gini importance or Mean Decrease in Impurity (MDI) [19,23,24], is the average decrease in node impurity attributable to each variable, weighted by the probability of a sample reaching that node. For a Random-Forest model with response Y and N trees, the average variable importance of feature Xᵢ is

Imp(Xᵢ) = (1/N) Σ_T Σ_{j∈T: v(s_j)=Xᵢ} p(j) Δi(s_j, j),

where p(j)Δi(s_j, j) is the weighted impurity decrease for feature Xᵢ at node j, p(j) is the probability of a sample reaching node j (p(j) = N_j/N, the number of samples reaching node j over the total number of samples), and i(s_j, j) is the impurity measure at node j with split s_j. v(s_j) is the variable used in split s_j (split s_j means the split at node j; hence v(s_j) = X_m means that at node j the splitting variable is X_m).
For the Decision-Tree model, which contains only one tree (N = 1), the feature importance reduces to

Imp(Xᵢ) = Σ_{j: v(s_j)=Xᵢ} p(j) Δi(s_j, j).

Permutation importance [19,24-26] answers how a certain feature influences the overall prediction by evaluating the change in the model's accuracy when the feature's values are permuted. Let s be the model accuracy on the full dataset D; the permutation feature importance of the i-th feature is

PIᵢ = s − (1/K) Σ_{j=1}^{K} s_{i,j},

where j indexes the j-th of K repetitions of shuffling the i-th feature, and s_{i,j} is the model accuracy on the modified dataset D̃_{i,j} with the i-th feature shuffled. The average change in accuracy before and after shuffling evaluates the importance of the i-th feature.
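The permutation procedure can be sketched as follows (a plain-Python illustration of the K-repeat shuffling idea, not a library call):

```python
import random

def permutation_importance(model_accuracy, X, y, feature, n_repeats=5, seed=0):
    """Mean accuracy drop after shuffling one feature column.

    `model_accuracy(X, y)` returns the accuracy of an already-fitted
    model; X is a list of feature dicts.
    """
    rng = random.Random(seed)
    base = model_accuracy(X, y)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature] for row in X]
        rng.shuffle(col)  # break the feature-label association
        X_perm = [dict(row, **{feature: v}) for row, v in zip(X, col)]
        drops.append(base - model_accuracy(X_perm, y))
    return sum(drops) / n_repeats
```

A feature the model ignores gets importance exactly 0, since shuffling it cannot change any prediction; a feature the model relies on shows a positive accuracy drop.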
The goal of SHAP is to explain the prediction of an instance x i by computing the contribution of each feature to the prediction model. The formula of SHAP is an addictive feature attribution linear model, and it is shown below: With this method, we calculate how the feature contributes to each coalition of each decision tree model and sum them up to get the total contribution of the whole prediction model. In this equation, F \{x i } represents all the possible subsets without feature x i , S represents the sub-feature set that did not contain the result, v(S {x i }) represents the model output (precision, recall or accuracy, etc.) after feature x i is added to subset S, v(S) represents the model output using subset S. With the multiplication for the occurrence probability for each subsets without that feature and the output different with and without that feature, the marginal contribution of each feature x i is calculated.
SHAP has three properties: local accuracy, missingness, and consistency [28]. Local accuracy means that when approximating the original model for a specific input x, the explanation model must at least match the output of the model for the simplified input x′. Missingness means that if a feature is missing in the sample, it does not affect the output of the model. Consistency means that when the model changes such that the marginal contribution of a feature increases, the corresponding Shapley value also increases. Due to these three properties, SHAP is more accurate and principled for interpreting machine learning models.
Main Risk Factors Ranking
Due to geographic and cultural differences, the same disease may have different manifestations in different regions. We hope to find the most influential factors in Shanxi Province. Table 3 shows each risk factor's exposure rate and risk attribution (RA) based on our data.
To assess the ranking of the main risk factors, in the first experiment we used dataset 2 with the "8+2" factors as features and implemented the Decision-Tree model. Figure 1 shows the feature importance and permutation importance based on the Decision-Tree model, which gives the ranking of these main risk factors: both evaluation methods confirm that hypertension, physical inactivity, and hyperlipidemia are estimated to be the top three informative features in the Decision-Tree model.
Lifestyle and Medical Measurement Ranking
For the second experiment, we would like to identify more risk factors for Shanxi Province beyond the "8+2" risk factors by using dataset 2 with features such as lifestyle habits and medical measurements. Table 5 shows the classification results and Figure 4 shows the feature and permutation importance. The results in Figure 4 confirm that systolic blood pressure, diastolic blood pressure, physical inactivity, BMI, smoking, FBG, TG, HDL, family history of stroke, and weight are the top ten factors when we consider only lifestyle habits, demographic information, and medical measurements. Medically, these factors correspond closely to chronic diseases [29][30][31]. To detail how each feature contributes to each individual, we calculate the SHAP values in the Random Forest model and use a summary plot to show their importance. The ordered mean sample SHAP value for each feature is shown in Figure 5, which shows the distribution of the contribution each factor makes to the cause of stroke. The color represents the feature value (red represents high, blue represents low). The larger the difference between the distributions for high and low feature values, the better the feature separates patients with different risk levels. Figure 5 shows that diastolic blood pressure, physical inactivity, systolic blood pressure, BMI, smoking, FBG, and TG are positively correlated with the risk of stroke, and HDL is negatively correlated with the risk.
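The ordering used by a SHAP summary plot is simply the mean absolute SHAP value of each feature across samples. A minimal sketch of that ranking step, using hypothetical per-sample SHAP values rather than the study's treeSHAP output, might look like:

```python
import numpy as np

def rank_by_mean_abs_shap(shap_matrix, feature_names):
    """Order features by mean(|SHAP value|) over samples, the ordering a
    summary plot uses (rows = individuals, columns = features)."""
    mean_abs = np.abs(np.asarray(shap_matrix)).mean(axis=0)
    order = np.argsort(mean_abs)[::-1]
    return [(feature_names[j], float(mean_abs[j])) for j in order]

# Hypothetical per-sample SHAP values for three features
shap_matrix = [[0.30, -0.05,  0.10],
               [0.25,  0.02, -0.12],
               [0.35, -0.01,  0.08]]
ranking = rank_by_mean_abs_shap(shap_matrix, ["diastolic_bp", "hdl", "bmi"])
```

The sign of each per-sample SHAP value still carries direction (positive pushes toward the stroke class), which is what the red/blue coloring of the summary plot visualizes.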
Quantitative Prediction of Stroke's Incidence
For the third experiment, a logistic model is established to quantify the probability of stroke incidence. To achieve this goal, we combine datasets 1 and 2 and relabel the data: the original low-risk and medium-risk samples become class 0, and the high-risk and stroke samples become class 1. The features contain lifestyle information, demographic information, and the "8+2" factors.
Logistic regression is feature-sensitive, so feature selection is done before modeling. To mitigate the multicollinearity problem [32], highly correlated features are removed first; for example, we keep BMI and remove height and weight. In addition, Variance Threshold [33] is used to remove low-variance features. It is a simple feature selection method that deletes every feature whose variance does not meet a given threshold. For example, most of the respondents in our survey are Han Chinese, so we remove ethnicity.
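The Variance Threshold step can be sketched with scikit-learn's VarianceThreshold; the toy matrix below, where a constant "ethnicity" column mirrors the mostly-Han-Chinese respondents, is illustrative:

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

# Columns: [height, weight, ethnicity]; the third column is constant in this sample
X = np.array([[170.0, 65.0, 1.0],
              [172.0, 66.0, 1.0],
              [168.0, 64.0, 1.0],
              [175.0, 70.0, 1.0]])

sel = VarianceThreshold(threshold=0.0)   # drop features with zero variance
X_sel = sel.fit_transform(X)
kept = sel.get_support()                 # boolean mask over the original columns
```

Note that in the paper's pipeline the correlation pruning (keeping BMI, dropping height and weight) happens first, so this variance filter runs on the already-reduced feature set.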
The logistic model results are shown in Table 6, including each feature's coefficient, standard error, and confidence interval. According to the coefficients, history of stroke, physical inactivity, hypertension, hyperlipidemia, smoking, diabetes mellitus, BMI, family history of stroke, and heart disease are positively correlated with stroke incidence; education level, frequency of vegetable intake, and occupation are negatively correlated with stroke incidence. Compared with the qualitative ranking method, we quantify the risk factors of stroke and convert the scoring grades into probabilities, making the prediction of stroke risk more intuitive. Moreover, our logistic model predicts the incidence promptly based on the current actual circumstances, which makes it more time-sensitive.
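Converting the fitted coefficients into a probability uses the standard logistic link. A minimal sketch follows, with hypothetical coefficients rather than the Table 6 values:

```python
import math

def stroke_probability(x, coef, intercept):
    """Predicted probability from a fitted logistic model:
    p = 1 / (1 + exp(-(b0 + sum_j b_j * x_j)))."""
    z = intercept + sum(b * v for b, v in zip(coef, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients (NOT the fitted values from Table 6):
# positive for hypertension and smoking, negative for education level
coef = [1.2, 0.8, -0.5]
p_high = stroke_probability([1, 1, 0], coef, intercept=-2.0)  # hypertensive smoker
p_low  = stroke_probability([0, 0, 1], coef, intercept=-2.0)  # educated, no risk factors
```

This is what "converting the scoring grades into probabilities" amounts to: each coefficient shifts the log-odds, and the sigmoid maps the total back to a probability between 0 and 1.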
The Risk Factors in Shanxi Province
Based on the treeSHAP values and the feature and permutation importance of lifestyle and medical measurements, we have found the most important factors causing stroke. The treeSHAP dependence plot is applied to compare the contributions of two features. Figure 6 shows that diastolic blood pressure (> 90 mmHg) is more suitable for diagnosing a patient's risk of stroke than systolic blood pressure (> 140 mmHg). Similarly, based on the comparison between HDL and LDL (see Figure 7), we find that high-density lipoprotein is better than LDL at identifying non-stroke patients in the low-HDL range.
Feature Validity
Missing data due to technical errors (such as typos and facility errors) is a common problem in census analysis. To find out how such erroneous or missing data in the datasets might influence the final results, we conducted an experiment on missing data in features. The Random Forest classifier is adopted to predict stroke risk with different proportions of a single feature missing, looped 100 times at random locations. Moreover, to prevent the precision score from remaining unchanged due to strongly correlated features, some specific feature pairs are cleaned up. The result is shown in Figure 8. In Figure 8, the curve for each feature shows how the average weighted precision score changes as the proportion of that feature missing increases, and the shadows are the 95% confidence areas over the 100 runs for each feature. Based on the result, we can see that diastolic blood pressure, physical inactivity, BMI, smoking, alcohol, HDL, and FBG are, in order, important factors for identifying the cause of stroke, while the other factors do not noticeably influence the models. An interesting observation is that HDL appears to be a highly influential factor even when only a small proportion of its values is missing, compared with most of the other influential factors.
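The missing-feature experiment can be sketched as follows; the paper does not state how blanked cells were handled or how the scores were evaluated, so mean imputation and scoring on the training set are assumptions of this illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score

def precision_with_missing(X, y, feat, frac, n_repeats=10, seed=0):
    """Blank a random fraction of one feature, refit a random forest,
    and return the weighted precision score of each repeat."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_repeats):
        Xm = X.copy()
        idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
        Xm[idx, feat] = X[:, feat].mean()   # "missing" cells -> column mean (assumed)
        clf = RandomForestClassifier(n_estimators=30, random_state=0).fit(Xm, y)
        scores.append(precision_score(y, clf.predict(Xm), average="weighted"))
    return scores

# Illustrative synthetic data, not the survey dataset
rng = np.random.default_rng(0)
X_demo = rng.normal(size=(200, 3))
y_demo = (X_demo[:, 0] > 0).astype(int)
scores = precision_with_missing(X_demo, y_demo, feat=0, frac=0.5, n_repeats=3)
```

Sweeping `frac` over a grid and averaging the repeats per fraction yields one curve per feature, which is the shape of Figure 8.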
Moreover, a Recursive Feature Elimination (RFE) process is performed to evaluate how many factors are needed to analyze a patient's risk level of stroke. The RFE procedure is as follows. First, the estimator is trained on the initial feature set and the importance of each feature is obtained from a specific or callable attribute. Then, the least important features are removed from the current feature set. This process is repeated recursively over the pruned set until the desired number of features is reached. Based on Figure 9, we found that approximately 7 features allow the Random Forest model to reach a stable precision for the different risk levels. Therefore, the validity of those features is demonstrated. Figure 9: The relationship between the number of features and the precision
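The RFE loop described above is available directly in scikit-learn; the sketch below runs it on synthetic data (features 0 and 1 carry the signal by construction), not the study's dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # only features 0 and 1 are informative

# RFE repeatedly drops the least important feature(s) until 2 remain
rfe = RFE(RandomForestClassifier(n_estimators=50, random_state=0),
          n_features_to_select=2).fit(X, y)
selected = np.flatnonzero(rfe.support_)
```

Repeating the fit for each target feature count and plotting precision against that count reproduces the kind of curve shown in Figure 9.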
"year": 2021,
"sha1": "51fbe52971e83a51306c2b29b75061e475a1b031",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "51fbe52971e83a51306c2b29b75061e475a1b031",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Effect of vitamin C treatment on some central nervous system functions in young rats whose mothers treated with hydrogen peroxide during the lactation period
Article history: Received August 6, 2020; Accepted January 23, 2021; Available online October 1, 2021. This study was designed to evaluate the effect of vitamin C on nervous system function in young rats whose mothers were exposed to 1% hydrogen peroxide in drinking water during the lactation period. The study consisted of three groups: a control group, a hydrogen peroxide group, and a vitamin C with hydrogen peroxide group. Treatment of mothers with hydrogen peroxide led to a significant decrease in the weights of young rats at age 21 days and a significant increase in the righting reflex test, cliff avoidance, and olfactory discrimination test in young rats at one week of age, as well as in the onset of movement test and the negative geotaxis test in young rats at age 21 days. Treatment of mothers with hydrogen peroxide also caused a significant decrease in open-field activity, in the number of times standing on the hind legs during 3 minutes, and in the approach, touch, and click responses of young rats at age 21 days, while hydrogen peroxide had no effect on the tail pinch response compared with the other groups. Treating mothers with vitamin C produced a significant increase in the weights of young rats at age 21 days, in the onset of movement test, in the number of squares crossed within 3 minutes in the open-field test, and in the number of times standing on the hind legs during 3 minutes in young rats at age 21 days, along with a significant decrease in the righting reflex test, olfactory discrimination, and negative geotaxis test compared with the hydrogen peroxide-treated group. It was concluded from the study that vitamin C plays an important role as an antioxidant by improving nervous system function in young rats whose mothers were exposed to hydrogen peroxide in drinking water.
Introduction
Oxygen is one of the elements necessary for life, and most of it is naturally reduced to water through respiration in mitochondria. The remaining 1-2% of oxygen undergoes incomplete reduction reactions that generate the superoxide anion radical, one of the most toxic free radicals, along with other reactive oxygen species including hydrogen peroxide, the hydroxyl radical, and singlet oxygen (1)(2)(3). Many studies confirm the role of oxidative stress in causing many pathological conditions by generating free radicals and reactive oxygen species (4). Free radicals are defined as molecules that have one or more unpaired electrons in the outer shell (5). These free radicals exert toxic effects on the phospholipids of the cell membrane, which they oxidize to form malondialdehyde (MDA) in a process called lipid peroxidation; this increases membrane permeability and impairs cell function (6). Several studies have also confirmed that free radicals contribute to many diseases of the nervous system (7), and young animals are the most vulnerable to these toxic effects since their main source of exposure to such toxic substances is milk. The lactation period is very important in the life of the young because it is characterized by rapid growth and development of the nervous system, and the kinetics of these toxic substances differ between young and adult animals, which places the young at great risk (8). The period of brain tissue formation, called the brain growth spurt, occurs at the last stage of pregnancy and lasts for 18 months after birth in humans; in rats, the brain growth spurt begins immediately after birth and reaches its peak on the tenth day of life, and this period is characterized by the formation of network connections and myelination (9).
Vitamin C is one of the water-soluble vitamins. It is stable in acidic solutions but very sensitive to oxidation, which is accelerated by the presence of copper salts, high temperatures, and exposure to air and light (10). Vitamin C accelerates the absorption of iron in the gut and contributes to the utilization of the iron stored in the body (11). It is considered one of the most powerful natural antioxidants (12), as it is found in high concentrations in many tissues and acts as a scavenger of reactive oxygen species, especially the superoxide anion radical, hydrogen peroxide, the hydroxyl radical, and singlet oxygen (13). Vitamin C is an important factor in preventing the effects of oxidative stress in poultry despite being synthesized in their bodies (14), and it is also important in eliminating the neurological signs resulting from sodium nitrate poisoning, which causes the production of free radicals (15).
Many recent studies have confirmed the power of this vitamin in reducing the severity of oxidative stress, which generates the reactive oxygen species responsible for destroying cell components (16)(17)(18). For this reason, this study aimed to use vitamin C and evaluate its effect on the efficiency of the nervous system in young rats whose mothers were exposed to hydrogen peroxide in drinking water during the lactation period.
Materials and methods
Female albino rats aged 100-120 days, with weights ranging between 200-220 g, were used in this study. The rats were reared under laboratory conditions with a natural light cycle (10 hours of light and 14 hours of darkness) in plastic cages prepared for this purpose at a temperature of 18 ± 2 °C; water and food were provided ad libitum. Females were placed with adult males for mating at a ratio of 3:1, then each mother was isolated with her offspring in a separate cage on the first day after birth and the treatments were performed. The experiment included 3 groups. Mothers of the first group (G1) were given water and a regular diet during the 21 days of lactation. Mothers of the second group (G2) were given 1% hydrogen peroxide (Al-Hayat Laboratory, Iraq) in drinking water, and mothers of the third group (G3) were given vitamin C at 500 mg/kg body weight (BDH, England) (19) using an intragastric gavage needle together with 1% hydrogen peroxide in drinking water. Tests were conducted on the nervous system of young rats at the age of 21 days, including the righting reflex test, cliff avoidance test, and olfactory discrimination test as neurobehavioral tests, and the onset of movement test, open-field activity, and negative geotaxis test as indicators of central nervous system activity. The approach, touch, click, and tail pinch responses were also measured. Furthermore, the weights of the young rats at one day and at 21 days of age were recorded (20,21).
The data were statistically analyzed using Tukey's test, with differences considered significant at the probability level P<0.05 (22). Data for the sensory and motor stimulatory responses of young rats at the age of 21 days, which were in the form of numerical scores, were analyzed using the Mann-Whitney U test.
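For score-type data such as these, the Mann-Whitney U comparison between two groups can be sketched with SciPy; the scores below are hypothetical, not the study's measurements:

```python
from scipy.stats import mannwhitneyu

# Hypothetical numerical response scores for two groups
control = [3, 3, 4, 2, 3, 4, 3]
h2o2    = [1, 2, 1, 2, 1, 1, 2]

# Two-sided rank-based test; appropriate for ordinal scores with ties
stat, p = mannwhitneyu(control, h2o2, alternative="two-sided")
significant = p < 0.05
```

The rank-based test is chosen here because ordinal scores violate the normality assumption behind parametric procedures such as Tukey's test.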
Results
The results in Table 1 show a significant decrease in the body weight of young rats whose mothers were treated with hydrogen peroxide for 21 days compared with the control group, while giving vitamin C to mothers for 21 days led to a significant increase in the weights of young rats compared with those whose mothers were treated with hydrogen peroxide. Table 2 reveals a significant increase in the righting reflex test, cliff avoidance, and olfactory discrimination in young rats whose mothers were treated with hydrogen peroxide during a week of lactation, compared with the control group. When vitamin C was given with hydrogen peroxide to mothers, a significant decrease was observed in the righting reflex test and olfactory discrimination in young rats compared with those whose mothers were treated with hydrogen peroxide for a week, and values returned to the normal values of the control group. No significant difference in cliff avoidance was recorded between young rats whose mothers were treated with hydrogen peroxide and vitamin C and those whose mothers were exposed to hydrogen peroxide alone at one week of age, although cliff avoidance was significantly increased in the hydrogen peroxide plus vitamin C group compared with the control group in the first week of age.
The results of Table 3 show that treatment of mothers with hydrogen peroxide led to a significant increase in the onset of movement test and the negative geotaxis test in young rats at the age of 21 days compared with the control group, while a significant decrease was observed in the number of squares crossed during 3 minutes and the number of times standing on the hind legs during 3 minutes in 21-day-old young rats whose mothers were treated with hydrogen peroxide during the lactation period, compared with the control group. When mothers were treated with hydrogen peroxide and vitamin C, a significant increase was observed in the number of squares crossed during 3 minutes and the number of times standing on the hind legs during 3 minutes in 21-day-old young rats, with values returning to those of the control group. A significant decrease was also observed in the negative geotaxis test in 21-day-old young rats whose mothers were treated with hydrogen peroxide and vitamin C when compared with those whose mothers were treated with hydrogen peroxide alone, again returning to control values. Treatment with hydrogen peroxide and vitamin C did not cause a significant change in the onset of movement test compared with the hydrogen peroxide group at 21 days of age, although a significant increase in the onset of movement test was observed in the hydrogen peroxide plus vitamin C group when compared with the control. Table 4 shows that giving hydrogen peroxide to mothers resulted in a significant decrease in the approach, touch, and click responses in 21-day-old young rats compared with the control group, with no significant difference observed in the tail pinch response compared with the control group.
Administration of vitamin C with hydrogen peroxide to mothers returned the approach, touch, click, and tail pinch responses to the normal values of the control group.
Discussion
This study examined the effect of the antioxidant vitamin C on some nervous system functions of young rats whose mothers were exposed to oxidative stress induced by hydrogen peroxide in drinking water. The results showed a significant decrease in the weights of 21-day-old young rats whose mothers were treated with hydrogen peroxide, which is attributable to the decrease in the amount of feed consumed by the young; this is consistent with the findings of Aziz (23) and is due to the increase in reactive oxygen species that deplete glutathione and catalase as antioxidants (24). Meanwhile, the results revealed that giving vitamin C for 21 days led to a significant increase in the body weights of the young rats, in line with what Al-Taie reported (19), as vitamin C is the first line of defense against oxidative stress and thus improves the animal's food intake (25). The study also showed a significant increase in the righting reflex, cliff avoidance, and olfactory discrimination tests in one-week-old young rats whose mothers were treated with hydrogen peroxide during lactation. This is due to changes in the metabolism of neurotransmitters in the central nervous system and the occurrence of neuropathy in the brain, spinal cord, and peripheral nerves (26). The neurotransmitters play an important role in the function of the central nervous system and interact with each other in a complex way; acetylcholine in particular plays an important role in learning and memory (27).
The administration of hydrogen peroxide also resulted in a significant increase in the onset of movement test and the negative geotaxis test, and a decrease in the number of squares crossed and the number of times standing on the hind legs within 3 minutes. The reason is that hydrogen peroxide generates free radicals that cause neuropathy in the brain, spinal cord, and surrounding nerves, affecting the neurotransmitters of the cholinergic and serotonergic systems, which leads to breakdown of the cell membrane, release of proteins, and destruction of the DNA of neurons in young rats whose mothers were treated with hydrogen peroxide (27). This agrees with the findings of Mustafa and Al-Baggou (28), who showed that administration of chlorpyrifos and deltamethrin in mice led to a significant decrease in open-field activity and a significant increase in the time to complete the negative geotaxis test. Qasim (15) also found that administration of sodium nitrate led to a significant decrease in motor activity in the open field, which is consistent with the findings of the current study, although the current study did not agree with Qasim's findings in the negative geotaxis test (15). The present results nevertheless agree with Qasim (15) in demonstrating that vitamin C has antioxidant effects on the functions of the nervous system; Hassan (12) likewise found that vitamin C reduced the effects of heat stress in broilers.
The current study showed a significant decrease in the stimulatory responses to approach, touch, and click in 21-day-old young rats whose mothers were treated with hydrogen peroxide. The reason may be neuropathy, a disorder of the nervous system caused by the generation of reactive oxygen species, which play a major role in damaging the cells of the brain, spinal cord, and surrounding nerves and lead to a change in the metabolism of neurotransmitters in the nervous system (29).
Giving vitamin C with hydrogen peroxide to mothers significantly increased the stimulatory responses to approach, touch, and click in 21-day-old young rats. This is due to the role of vitamin C in protection against oxidation, as well as its important role in peptide amidation and myelin formation, in protecting neural synapses from free radicals, and in protecting neurons from the glutamate toxicity caused by free radicals (29).
Conclusions
It was concluded from this study that vitamin C has an important role in ameliorating nervous system dysfunction in young rats whose mothers were treated with hydrogen peroxide during the lactation period.
"year": 2021,
"sha1": "3424ef350ac2c0122ea0a96fee3bd57352b378fb",
"oa_license": "CCBY",
"oa_url": "https://vetmedmosul.com/article_168576_ac49d9f6971e735a9da9a31fcc17dc36.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3a7369fa7a3f713e7c76362be9e3e9459316b3e6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Simvastatin impairs hippocampal synaptic plasticity and cognitive function in mice
Lipophilic statins, which are blood-brain barrier (BBB) permeable, are speculated to affect cholesterol synthesis and neural functions in the central nervous system. However, whether these statins can affect cholesterol levels and synaptic plasticity in the hippocampus, and the in vivo consequences, remain unclear. Here, we report that long-term subcutaneous treatments of simvastatin significantly impair mouse hippocampal synaptic plasticity, reflected by the attenuated long-term potentiation of field excitatory postsynaptic potentials. The simvastatin administration causes a deficiency in recognition and spatial memory but fails to affect motor ability and anxiety behaviors in the mice. Mass spectrometry imaging indicates a significant decrease in cholesterol intensity in the hippocampus of the mice receiving chronic simvastatin treatments. Such effects of simvastatin are transient because drug discontinuation can restore the hippocampal cholesterol level, synaptic plasticity, and memory function. These findings may provide further clues to elucidate the mechanisms of neurological side effects, especially the brain cognitive function impairment, caused by long-term usage of BBB-permeable statins.
Introduction
Statins are the most effective low density lipoprotein-cholesterol lowering medications by targeting 3-Hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase in blood and liver [1,2]. Statins have widely been recognized as the first-line medications for the therapy of strokes and cardiovascular diseases for years [3,4]. Various types of statins including atorvastatin, lovastatin, rosuvastatin and simvastatin have been approved by the U.S. Food and Drug Administration (FDA) [5]. According to their capacity to cross the blood-brain barrier (BBB), statins are classified as lipophilic statins including atorvastatin, simvastatin and lovastatin which are BBB-permeable, and hydrophilic statins including rosuvastatin and pravastatin which are BBB-impermeable [6]. The lipophilic simvastatin has been reported to significantly reduce brain cholesterol level in mice, when compared with hydrophilic pravastatin [7]. Clinical studies have also shown that atorvastatin and simvastatin usage could cause reversible cognitive function impairment [8,9]. However, the underlying mechanisms upon how statins affect the brain cognitive function remain unsolved.
Cholesterol is ubiquitous in the central nervous system (CNS). Accurate maintenance of brain cholesterol level is essential for normal brain function including signaling and synaptic plasticity [10,11]. Brain cholesterol metabolic deficiency has been linked to varieties of neurological disorders, such as Alzheimer's disease, Parkinson's disease and Huntington disease [12][13][14]. Human studies have demonstrated that low levels of total cholesterol are associated with poor performance on cognitive function [15]. Animal studies also indicated that animals with cholesterol synthesis deficiency suffer severe declines in learning and memory abilities [16,17]. Dietary cholesterol can improve performance of rodents in Morris Water Maze (MWM) tests. Such improvement is suggested to be associated with the changes in synaptic plasticity of hippocampus [18,19].
Hippocampal synaptic structure and function are always linked to brain cognition [20,21]. Hippocampal cholesterol loss may impair brain synaptic functions including electrical or chemical signal transmission and therefore may lead to the poor cognition [22][23][24][25]. Although BBB-permeable statins have been suggested to affect brain cognition, it remains unclear whether they affect cholesterol levels in hippocampus and the hippocampal synaptic plasticity. To answer these questions, here we combined our recently developed desorption electrospray ionization mass spectrometry with photoionization assistance (paDESI-MS) imaging technology [26] with field potential recordings and behavioral tests. Chronic simvastatin treatments indeed significantly reduced long-term potentiation (LTP) in hippocampal slices of mice and impaired their recognition memory. The MS imaging revealed a remarkable down-regulation of cholesterol in hippocampus in simvastatin-treated mice. Furthermore, drug withdrawal significantly restored the hippocampal synaptic plasticity and the memory function of mice, with simultaneous recovery of cholesterol level in the hippocampus. These findings provide a basis for studying the neurological and cognitive side effects of BBB-permeable lipophilic statins.
Animals
All procedures have been approved by the Institutional Animal Use and Care Committee of School of Life Sciences, University of Science & Technology of China. Adult C57BL/6J male mice at 5 weeks of age were obtained from Vital River Laboratory Animal Technology Co., Ltd. (Beijing, China). After acclimating for a week, mice received administration of simvastatin (S.C., 30 mg/ kg) or vehicle for 26 consecutive days. All behavioral tests were performed from Day 21 to Day 26. All mice were housed at 18-23 ℃ with 40-60% humidity under a 12-h dark/light cycle (lights off at 7 p.m) and free access to food and water.
Morris water maze (MWM)
After receiving 20 consecutive days (Day 1-Day 20) of vehicle/simvastatin treatments, the mice were arranged for the MWM tests (Day 21). Mice of each group were trained in a large tank (120 cm in diameter and 40 cm in depth) which was divided into four quadrants. A hidden 10-cm-diameter platform (1 cm below the surface of water) was placed in the center of a quadrant. The pool was surrounded by a black curtain with four visual cues on the wall of pool. Water was kept at 20° C and opacified with titanium dioxide. The trials were conducted 4 times daily at the same time point for 5 successive days followed by a probe test on Day 6. Mice were placed into four quadrants in order (20 min interval) and swam freely for a maximum of 60 s. If a mouse did not find the platform within a 60-s period, it was gently guided to the platform and allowed to stay on the platform for 15 s. The latency, distance and speed of mice to find platform were recorded. For probe test, the platform was removed from the pool and the mouse was put into the quadrant opposite to where the platform located and allowed to swim for 30 s. The time of the mice spent in each quadrant was recorded.
Novel object recognition (NOR)
After receiving 20 consecutive days of vehicle/simvastatin treatments, another group of mice were arranged for the NOR tests (Day 21-22). The open-field apparatus consisted of an acrylic chamber (40 cm × 40 cm × 30 cm). Two different objects were prepared in duplicate: towers of rectangular Lego bricks (built from blue, green and yellow bricks) and circular Lego bricks (built from yellow and red bricks). The objects were placed 10 cm away from the walls and attached to the floor. Mice were tested in the dark (active phase between 7:00 p.m. and 7:00 a.m.). During the familiarization session, mice were allowed to freely explore two identical objects (rectangular Lego) placed into the arena at fixed locations for 3 min. The ANY-maze video-tracking system (Stoelting, Wood Dale, USA), which is based on nose-point detection, was used to record the time mice spent exploring objects. Active exploration was defined as mice sniffing or touching the object when the gap between the nose and the object was less than 2 cm. Climbing over the object or gnawing the object was not considered as exploratory activity. At the end of the test, each mouse was returned to its home cage, and the chamber and objects were cleaned using 75% ethanol, then air-dried for 3 min. The mice with no significant preference for the two identical objects were selected for further tests. In the NOR tests, 6 of 34 mice were excluded based on their abnormal preference for specific Legos. After an intersession interval (ISI) of 24 h, one of the familiar objects was replaced by a novel object (circular Lego). The location of the novel object (left or right) was randomized among the mice and the groups tested. Object preference was calculated by using the following formula: preference % = (time to explore the individual object/total exploration time for both objects) × 100%. Data were excluded if the total exploration time was less than 10 s.
After the novel object recognition test, mice were allowed to recover for 2 days before further behavioral tests.
Open field test (OFT)
The open field test was performed 2 days after the NOR test ( Day 24). An open field test system (XR-XZ301, Xinruan, Shanghai, China) was used. Mice were individually transferred from their home cages to an open field chamber (width, 45 cm; length, 45 cm; height, 45 cm) for locomotion tests for 15 min. Locomotor activity was recorded by a camera and the distance each mouse travelled was analyzed by the ANY-MAZE software (Global Biotech Inc.).
Rotarod test (RT)
The rotarod test was performed on the next day after the OFT (Day 25). A rotarod training system (XR1514, Xinruan, Shanghai, China) was used. Before the first training session, mice were habituated to stay on a stationary rod for 2 min. A total of six trials for the rotarod test were carried out using an accelerating protocol from 4 to 60 rpm in 300 s with 20-min inter-trial intervals. After falling, the mice were immediately placed back to their home cages and the time to fall was automatically recorded by the rotarod software. Once the trial reached to 300 s, the mice were manually removed from the rod immediately. The apparatus and testing area were cleaned with 75% ethanol (w/v) after each trial.
Elevated plus maze (EPM)
The elevated plus maze was performed on the next day after RT (Day 26). The EPM apparatus consisted of a cross-shaped maze (with 25 cm × 5 cm arms) elevated by a 60-cm support. Two opposite arms were surrounded by a 20-cm wall, while the other two were open (only with a 1-cm contention step). Mice were individually placed in the central area of the apparatus, facing one of the closed arms, and their mobility within the maze was assessed over 5 min. The exploration profile within the different areas of the maze (open arms, closed arms and center) was analyzed. The anxiety behavior was assessed by examination of the open arm exploration. Animals that fell from the apparatus had to be censored from the analyses. Arm preference was automatically analyzed by the ANYmaze video tracking software.
Hippocampal slice preparations and electrophysiological recordings
The mice were sacrificed on the next day after all behavioral tests were finished (Day 27). Coronal hippocampal slices (350-μm thick) from adult male mice were prepared with a Leica Vibratome in ice-cold cutting solution containing (in mM) 30 NaCl, 26 NaHCO3, 10 glucose, 194 sucrose, 4.5 KCl, 1.2 NaH2PO4 and 1 MgCl2, continuously bubbled with carbogen (95% O2 + 5% CO2). The slices were then recovered at room temperature for 1 h. Slices were transferred into the recording chamber, continuously perfused at 12 ml/min with artificial cerebrospinal fluid (ACSF) at 37 °C. The ACSF contained (in mM) 124 NaCl, 4.5 KCl, 1 MgCl2, 2 CaCl2, 1.2 NaH2PO4 and 26 NaHCO3, continuously bubbled with carbogen. Long-term potentiation (LTP) was triggered by high frequency stimulation (HFS, 100 Hz, 1 s) in the hippocampal CA3 area. Field excitatory postsynaptic potentials (fEPSPs) were recorded using a glass electrode (filled with NaCl, 3-6 MΩ) placed into the stratum radiatum of the CA1 area. Signals were amplified (gain 100), filtered (3 kHz) and digitized (10-100 kHz; National Instruments). After a 20-min baseline recording, recordings were continued for at least 50 min following LTP induction. LTP was quantified by the fEPSP slope normalized to the baseline. Paired-pulse ratio (PPR) was obtained by delivering two stimulation pulses with an interstimulus interval of 50 ms. PPR values were quantified by calculating the ratio between the mean amplitude of the second and the first fEPSP. Synaptic responses were evoked at 0.1 Hz using a bipolar tungsten electrode. Data were collected and analyzed on- or off-line using pClamp 10.4 software (Molecular Devices, Sunnyvale, CA).
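The two quantification rules stated above (PPR as the ratio of mean second to mean first fEPSP amplitudes, and LTP as the fEPSP slope expressed relative to baseline) reduce to simple arithmetic. A minimal illustration with hypothetical amplitude and slope values (the function names are ours, not the pClamp workflow):

```python
from statistics import mean

def paired_pulse_ratio(fepsp1_amps, fepsp2_amps):
    """PPR = mean amplitude of the 2nd fEPSP / mean amplitude of the 1st."""
    return mean(fepsp2_amps) / mean(fepsp1_amps)

def normalized_ltp(slopes, baseline_slopes):
    """Each post-HFS fEPSP slope expressed as % of the mean baseline slope."""
    base = mean(baseline_slopes)
    return [100.0 * s / base for s in slopes]

ppr = paired_pulse_ratio([0.50, 0.52], [0.70, 0.72])
print(round(ppr, 3))  # → 1.392
print(normalized_ltp([0.6, 0.75], [0.5, 0.5]))  # → [120.0, 150.0]
```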
paDESI-MS imaging
The mice used for paDESI-MS imaging also received vehicle/simvastatin treatments and behavioral tests (except those in Fig. 3c) and were then sacrificed on the next day after the behavioral tests (Day 27). The brain was immediately removed from the skull and flash frozen in liquid nitrogen for 15 s. The frozen mouse brain was transferred to the cryostat chamber of a Vibratome (VT 1200S, Leica, Germany) at − 20 °C. Brains from the vehicle group and the simvastatin group were separately cut into 16-μm-thick coronal sections. In each group, three adjacent hippocampal slices were collected for parallel experiments. One slice from the control group and one slice from the simvastatin group were placed on the same microscope slide to avoid matrix effects caused by different slides. The slide was then scanned by paDESI-MS. The cholesterol intensity was normalized to 13C3-cholesterol (0.1 mg/mL), which had been added into the spray. The paDESI-MS imaging system consisted of a DESI sprayer, a 2D scanning stage, and a postphotoionization interface. The solvent was infused at a flow rate of 3 μL/min through a DESI sprayer (50 μm i.d. and 150 μm o.d. inner fused silica capillary and a 250 μm i.d. and 350 μm o.d. outer fused silica capillary) and directed onto the surface of a tissue slice with a 53° angle of incidence, with the assistance of the nebulizing N2 gas (120 psi). The flow of the solvent was driven by a syringe pump, and the metal needle tip was connected to a high-voltage power supply (3500 V for the positive ion mode and − 4000 V for the negative ion mode). The desorbed compounds were drawn into the heated transfer tube (i.d. 0.5 mm, o.d. 1/16 in.) with a 10° angle of collection, and the un-ionized neutral molecules were ionized in an ionization tube (i.d. 4 mm, o.d. 10 mm) by a coaxially oriented krypton DC discharge vacuum ultraviolet (VUV) lamp, which was positioned to shine toward the exit of the transfer tube.
The ionized species were then transferred into the capillary of the mass spectrometer. In order to improve the transfer efficiency, an air-flow-assisted transport arrangement was added to this interface, and a pneumatic diaphragm pump (60 L/min, model GM-1.0A, Jinteng Experimental Equipment Co., Ltd., Tianjin, China) was connected to the side port of the ionization tube. In the experiments, the transfer tube and ionization tube were kept at 300 °C. Note that the krypton lamp was turned off in the DESI mode and turned on in the DESI/PI mode. All imaging data were collected on an Agilent 6224 Accurate-Mass TOF mass spectrometer (Agilent, USA). The flow rate and temperature of the drying gas of the mass spectrometer were set at 5 L/min and 325 °C, respectively. A programmable motorized X-Y scanning stage (GCD-203050 M, Daheng, Beijing, China) was used for tissue imaging, and the scanning process was synchronized with the Agilent mass spectrometer data acquisition by customized stage control software. The sample surface was line-scanned in the X direction with a stepper motor at a velocity of 370 μm/s while acquiring mass spectra every 0.5 s. The distance between adjacent scan lines in the Y direction was 200 μm. The acquired scan lines were combined into one data file, and ion distribution images were generated using the freely available standalone version of the MSiReader software.
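The acquisition settings imply an effective pixel size: 370 μm/s with one spectrum every 0.5 s gives 185 μm per pixel along X, with 200 μm line spacing in Y; intensities are then normalized to the co-sprayed 13C3-cholesterol internal standard. A sketch of both calculations (the function names and example counts are hypothetical, not part of the authors' pipeline):

```python
def pixel_size_um(scan_speed_um_s=370.0, spectra_interval_s=0.5, line_spacing_um=200.0):
    """(X, Y) pixel footprint implied by the stage and acquisition settings."""
    return scan_speed_um_s * spectra_interval_s, line_spacing_um

def normalize_to_standard(analyte_counts, standard_counts):
    """Ratio of analyte signal to the co-sprayed internal standard, per pixel."""
    return [a / s for a, s in zip(analyte_counts, standard_counts)]

print(pixel_size_um())  # → (185.0, 200.0)
print(normalize_to_standard([1200.0, 900.0], [400.0, 300.0]))  # → [3.0, 3.0]
```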
For the simvastatin discontinuation experiments, the control group and simvastatin group received 26-day vehicle or simvastatin treatments and were then sacrificed on Day 27. The brain was then removed and frozen at − 80 °C for further MS imaging. The discontinuation group underwent 4 weeks of simvastatin discontinuation after the 26-day simvastatin treatment. After the discontinuation session, the mice were sacrificed and the brain was removed and frozen at − 80 °C. Brains from the vehicle group, simvastatin group and simvastatin discontinuation group were separately cut into 16-μm-thick coronal sections. In each group, three adjacent hippocampal slices were collected for parallel experiments. One slice from the control group, one slice from the simvastatin group and one slice from the simvastatin discontinuation group were placed on the same microscope slide to avoid matrix effects caused by different slides. The slide was then scanned by paDESI-MS. The identification of most of the peaks was facilitated by accurate m/z values, comparison of isotope distribution patterns, and tandem mass spectrometry.
Statistics
All experiments and data analysis were conducted in a blinded way. All statistical analyses for in vitro recording and behavioral experiments were performed using Prism7 software (GraphPad). Data were statistically compared by unpaired t tests, as indicated in the specific figure legends. Average values are expressed as the mean ± SEM. P < 0.05 was considered significant.
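The unpaired t test and mean ± SEM summaries used here can be sketched with standard-library Python. This is the classic pooled-variance (Student's) form of the unpaired t statistic; it is a hedged illustration under the assumption of equal variances, not the authors' actual Prism analysis:

```python
from math import sqrt
from statistics import mean, stdev

def sem(xs):
    """Standard error of the mean: sample SD / sqrt(n)."""
    return stdev(xs) / sqrt(len(xs))

def unpaired_t(xs, ys):
    """Student's two-sample t statistic with pooled variance
    (the classic unpaired t test; df = n1 + n2 - 2)."""
    n1, n2 = len(xs), len(ys)
    sp2 = ((n1 - 1) * stdev(xs) ** 2 + (n2 - 1) * stdev(ys) ** 2) / (n1 + n2 - 2)
    return (mean(xs) - mean(ys)) / sqrt(sp2 * (1 / n1 + 1 / n2))

a = [1.0, 2.0, 3.0]  # hypothetical group measurements
b = [2.0, 3.0, 4.0]
print(round(unpaired_t(a, b), 3))  # → -1.225
```

The resulting t value would then be compared against the t distribution with n1 + n2 − 2 degrees of freedom to obtain the P value.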
Hippocampal LTP is inhibited in simvastatin-treated mice
First, we examined LTP, a main form of synaptic plasticity that underlies synaptic information storage within the CNS [27], in hippocampal slices of mice receiving chronic subcutaneous (S.C.) simvastatin administration (30 mg/kg/day, 26 days). Field excitatory postsynaptic potentials (fEPSPs) were recorded in the CA1 area in response to electrical stimulation of the Schaffer commissural pathway (Fig. 1a). After positioning the stimulating and recording electrodes in hippocampal CA3 and CA1, an input-output curve was constructed by stimulating at intensities ranging from 0 to 0.6 mA. Before LTP recording, we assessed the effects of simvastatin on presynaptic function in CA1 using a paired-pulse ratio (PPR) test. Simvastatin-treated mice showed a PPR similar to that of vehicle-treated mice, suggesting that the presynaptic release probability is unchanged (Fig. 1b, c). We then examined whether the basal synaptic field responses in the hippocampus were altered by simvastatin, by comparing input-output curves constructed from the stimulation intensity vs fEPSP slope. No significant differences between vehicle- and simvastatin-treated mice in the overall input-output curves were observed (Fig. 1d). These results suggest that long-term treatment with simvastatin does not affect basal synaptic transmission. We next investigated whether simvastatin would affect synaptic plasticity. High frequency stimulation (HFS, 100 Hz, 1 s) was used to induce LTP, before which a 20-min baseline recording was performed. The HFS-induced potentiation of the fEPSP was significantly reduced in simvastatin-treated mice compared with vehicle-treated mice (Fig. 1e, f). These results indicate that chronic simvastatin usage may impair hippocampal synaptic plasticity.
Chronic simvastatin treatments impair recognition and spatial memory
We next conducted behavioral tests, including the novel object recognition (NOR) and Morris water maze (MWM) tests, to examine the effects of simvastatin on recognition and spatial memory, both of which heavily involve hippocampal synaptic plasticity (Fig. 2a, g).
For the MWM test, mice were required to find a hidden platform to escape from swimming in a pool of water. The pool contained four quadrants, and the mice were placed into the four quadrants in order (20-min interval) to swim freely for a maximum of 60 s. Four consecutive trials were conducted daily at the same time point for five successive days, from Day 1 to Day 5. The simvastatin-treated mice showed an increased latency to find the platform compared with vehicle-treated mice on Day 5 (Fig. 2b, c). Additional probe trials demonstrated that simvastatin-treated mice spent less time in the target quadrant than the vehicle-treated mice (Fig. 2d). Similarly, simvastatin-treated mice also travelled a longer distance than vehicle-treated mice on Day 5 (Fig. 2e). These results show that long-term simvastatin treatment may cause a deficiency in spatial memory. Such impairment seems to be independent of swimming ability and sensitivity to water, because the swimming speed was unchanged in simvastatin-treated mice (Fig. 2f).
For the NOR tests, the vehicle-and simvastatin-treated mice were adapted to the training room for 30 min. Then, the mice were allowed to freely explore two identical objects (rectangular lego) placed into the arena at fixed locations for 3 min. The mice with no significant preference for the two identical objects were selected for further tests. After an intersession interval (ISI) of 24 h, one of the original objects was replaced by a novel object (circular lego) and the object preference was calculated (Fig. 2h). The vehicle-treated mice spent more time exploring the novel object compared with the familiar object. Such preference to the novel object was significantly inhibited in the simvastatin-treated mice, indicating a deficiency in recognition memory (Fig. 2i). Such deficiency in memory is certainly not due to the preference of mice to the shape of lego itself (Fig. 2j).
We further examined the effects of simvastatin on other neurological behaviors. Simvastatin did not affect locomotor activity and motor coordination of mice, reflected by unchanged travel distance in the open field test and unaltered time to fall in the rotarod test (Fig. 2k, l). In the elevated plus maze test, time spent in the open and closed arms was not changed in the simvastatin treated mice compared with the vehicle-treated mice (Fig. 2m).
Chronic simvastatin treatments reduce cholesterol levels in hippocampus
To examine whether long-term usage of the BBB-permeable simvastatin affects the hippocampal cholesterol level, we used our recently developed paDESI-MS imaging technique [26] to quantify the intensity of cholesterol in the hippocampus of mouse brain sections (Fig. 3a, b). The paDESI-MS technique combines conventional DESI with postphotoionization. The advantage of this technology is that it enhances the ionization and imaging of desorbed neutral molecules, such as cholesterol, in biological tissue sections. Considering that it would take a long time for paDESI-MS to scan a whole brain slice, and such a long scan may cause degradation of metabolites, in this study we only screened and analyzed a small brain area containing the hippocampus (Fig. 3c). Long-term simvastatin administration significantly reduced the brain cholesterol concentration in the hippocampus of mice. There was a strong correlation between hippocampal cholesterol intensities and the recognition memory (Fig. 3d) and the spatial memory of mice (Fig. 3e). Taken together, these results suggest that the simvastatin-induced synaptic plasticity impairment and cognition deficiency are correlated with the down-regulation of the cholesterol level in the hippocampus. [Figure legend fragment: latency data from the last day of MWM training are shown; the dots correspond to individual mice randomly selected from the simvastatin-treated group; n = 10 mice.]
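The reported correlation between per-mouse hippocampal cholesterol intensity and memory score amounts to a Pearson coefficient over paired measurements. A self-contained sketch; the data values below are invented for illustration and do not reproduce the study's numbers:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length measurement lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

chol = [1.0, 2.0, 3.0, 4.0]   # hypothetical normalized cholesterol intensities
pref = [0.2, 0.4, 0.5, 0.9]   # hypothetical memory discrimination indices
print(round(pearson_r(chol, pref), 3))  # → 0.965
```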
Simvastatin discontinuation restores hippocampal cholesterol levels, synaptic plasticity and memory
To investigate whether the neurological side effects of simvastatin are reversible, the medication was weaned over a 4-week period in the simvastatin-treated mice. After that, the hippocampal cholesterol levels, LTP amplitude and memory capacity were all re-examined in these mice. The hippocampal cholesterol concentration was restored to the normal level, as verified by paDESI-MS imaging (Fig. 4a). Both the simvastatin-impaired recognition memory and spatial memory were significantly restored after simvastatin discontinuation (Fig. 4b-f). In addition, the LTP of fEPSP slopes in hippocampal CA1 slices was also recovered (Fig. 4g, h). These results suggest that the simvastatin-induced impairment of hippocampal cholesterol, synaptic plasticity and memory is transient and reversible.
Discussion
Statins are widely known as medications that lower low-density lipoprotein (LDL) cholesterol, often referred to as bad cholesterol [28]. Emerging evidence suggests that statins may affect cognitive brain function [8,29]. However, the underlying mechanism is still poorly understood. The data presented in this study provide several lines of evidence that BBB-permeable simvastatin may impair cognition by reducing hippocampal cholesterol. First, long-term simvastatin treatment causes a significant reduction in hippocampal LTP and leads to inferior performance in the MWM and NOR tests. Second, simvastatin reduces the hippocampal cholesterol concentration, and the hippocampal cholesterol level is well correlated with the memory function of mice. Third, simvastatin discontinuation reverses the negative effects of simvastatin on hippocampal cholesterol level and synaptic plasticity. Together, these results suggest that simvastatin may impair cognitive function by reducing the cholesterol concentration in the hippocampus. More importantly, the present study may have guiding significance for clinical practice. Although the effects of simvastatin are transient, patients requiring long-term statin usage should select BBB-impermeable drugs whenever possible, especially patients with cognitive disorders. In the present study, the paDESI-MS imaging technique is introduced to directly measure cholesterol concentration in the hippocampus [26]. Cholesterol levels in biological tissues are usually determined by indirect measurements, such as classical chemical methods, enzymatic assays and analytical instrumental approaches including gas and liquid chromatography [30]. Compared with conventional approaches, paDESI-MS imaging exhibits several unique advantages. First, paDESI-MS detects cholesterol directly rather than indirectly measuring the H2O2 yielded from oxidase-mediated oxidization of cholesterol [31].
Second, the MS imaging achieves the in-situ detection of cholesterol in specific subregions of the brain, allowing us to specifically measure cholesterol in hippocampus without interference from cholesterol-rich regions close to the hippocampus such as the corpus callosum. Thus, the paDESI-MS imaging is a powerful technique for qualitative and quantitative analysis of brain cholesterol.
Beyond the hippocampus, other brain regions may also be affected by simvastatin. Although our MS imaging tests focused only on the hippocampal area, cholesterol reduction in white matter and in a few brain regions adjacent to the hippocampus, such as the corpus callosum, was also observed. Considering this, simvastatin may also affect other neurological functions such as motor function and emotion. However, our present results indicate that simvastatin has no effects on the motor ability or anxiety behaviors of mice. This is consistent with clinical studies in which no side effects on motor function or emotional state have been observed in patients treated with statins [32-37]. These negative results can be attributed to several reasons. For example, simvastatin may have a weaker cholesterol-lowering effect in brain regions related to motor function and emotional regulation than in the hippocampus. In addition, compensatory pathways for cholesterol synthesis in these brain regions may be activated after simvastatin administration. Thus, future studies should focus on the heterogeneity among different brain regions in cholesterol synthesis and metabolism. Notably, even in the hippocampus, factors such as drug dose, duration of treatment and age may differentially affect the effects of simvastatin. For example, previous studies have reported that hippocampal LTP could be enhanced when animals were administered a lower dose of simvastatin [38], when brain slices were treated with acute simvastatin incubation [39], or when older animals were used [40].
Lipophilic and hydrophilic statins differ in their capacity to cross the BBB [6]. Cholesterol in the brain is locally synthesized, independent of peripheral circulating cholesterol, owing to the presence of the BBB [11,41,42]. Thus, BBB-permeable lipophilic statins may affect brain cholesterol synthesis and the corresponding neurological functions. The present study shows that simvastatin reduces the hippocampal cholesterol level and impairs hippocampal synaptic plasticity and memory function. Mounting evidence indicates that hippocampal cholesterol is correlated with learning and memory [19,22,43]. Increased cholesterol efflux impairs hippocampal synaptic plasticity and causes neurodegeneration [22]. Hippocampal cholesterol reduction impairs brain synaptic plasticity and leads to cognition impairment [23-25]. In addition, LTP formation has been shown to be mediated by many synaptic membrane proteins, such as voltage-gated K+ channels, Na+ channels and Ca2+ channels, NMDA receptors and AMPA receptors [43-49]. Cholesterol has been widely reported to modulate the function of these ion channels [44,48,49]. Thus, simvastatin may affect synaptic membrane fluidity and the function of ion channels in the synaptic membrane by lowering hippocampal cholesterol synthesis.
Effect of diversity and missing data on genetic assignment with RAD-Seq markers
Reduced representation libraries are being used as a preferred source of markers to address population genetic questions. However, libraries of RAD-Seq variants often suffer from a significant percentage of missing data. In addition, the algorithms used to mine SNPs from the raw data may underrepresent biological variation. We investigate the effect of biological diversity on SNP mining with the program STACKS and the effect of missing data on individual assignment implemented in STRUCTURE. We observed that changing diversity parameters in STACKS significantly alters the number of SNPs discovered, and that allowing a higher percentage of missing data retrieves more loci and possibly more power for individual assignment. Electronic supplementary material The online version of this article (doi:10.1186/1756-0500-7-841) contains supplementary material, which is available to authorized users.
Reduced representation genomic libraries are increasingly used to answer diverse questions in evolutionary biology that would otherwise remain unresolved. Various restriction-site based genome scans have become standard tools for both population genetic and phylogenetic analyses [1]. Although extremely useful, these techniques are cost prohibitive, and the generation of data from hundreds of individuals across multiple runs may not be an option for many research labs. Moreover, technical issues related to data generation often require multiple runs to troubleshoot, increasing the cost manifold.
RAD-Seq (restriction site-based reduced representation genomic libraries) generates tens of thousands of loci per individual, but overlapping loci across all individuals are much fewer, resulting in significant missing data. Since missing data could impact inference, it is important to test its effect on the analysis. While missing data may not significantly impact phylogenetic inference [2], other forms of population genetic inferences remain untested.
We compared genetic assignment and group membership of individuals as inferred from nuclear autosomal microsatellite loci and from genome-wide single nucleotide polymorphisms (SNPs) obtained by RAD-Seq for ten individuals of two Cynopterine fruit bat species, Cynopterus sphinx and C. brachyotis (Additional file 1: Table S1). We assessed the effect of missing data on group assignment in RAD-Seq. Because software tools are used to ascertain SNPs in RAD data, SNP calling is affected by the assumptions of those tools, and the number and quality of the markers obtained are highly influenced by the software. We also investigated how such assumptions impact results, specifically the effect of changing diversity parameters (mismatches between loci) within a widely used tool for mining SNPs from RAD-Seq data, STACKS 1.09 [3].
Library preparation and analyses
As part of an ongoing project to estimate gene flow and understand its dynamics between these two bat species in sympatry, we used data from ten individuals drawn from a pool of 387 genotyped individuals (Additional file 1). We prepared standard RAD libraries with individual barcodes (Additional file 1: Table S1). We performed a standard paired-end run (single lane) on a HiSeq 1000. The data output was 91 million reads, but the paired-end (reverse) read quality was poor, so these reads were not considered in further analyses (Additional file 1). We subsequently analyzed ~47 million forward reads. We observed more than 90% data loss due to ambiguous barcodes (Additional file 1). The average number of reads per individual was 468,612.3 (range: 366,389 to 731,138, Additional file 1: Table S1). Within the STACKS pipeline, the basic algorithm for arranging reads into stacks depends upon absolute nucleotide matches and has often been regarded as conservative [4]. Additionally, there is evidence that when sequence diversity is high (as in our case, where we are examining sequences from two distinctly diverged genomes) STACKS may remove a majority of the loci from its analysis, or may separate a single locus into two [5].
We assessed the sensitivity of STACKS to these deviations in differentiating between these well-defined taxa. First, we obtained SNPs under different run parameters, allowing 50% missing data. Then, for the parameter combination that provided a meaningful number of loci, we obtained SNPs with different extents of missing data (10%, 30%, 50%, 70% and 90%). For all these parameter sets we performed independent STRUCTURE runs (50,000 burn-in and 100,000 MCMC iterations) assuming two genotypic clusters (K = 2) [6]. Each run was replicated five times, and the mean ancestry coefficient across these replicates was used to obtain trends.
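Averaging ancestry coefficients over replicate STRUCTURE runs can be sketched as below. Note that this naive average assumes cluster labels are already aligned across replicates (in practice, label switching must be handled first, e.g. with a tool such as CLUMPP); all names and values here are illustrative, not the authors' script:

```python
def mean_ancestry(replicates):
    """Average per-individual ancestry coefficients (Q) over replicate
    STRUCTURE runs, assuming cluster labels are already aligned."""
    n_rep = len(replicates)
    n_ind = len(replicates[0])
    n_clusters = len(replicates[0][0])
    return [
        [sum(rep[i][k] for rep in replicates) / n_rep for k in range(n_clusters)]
        for i in range(n_ind)
    ]

runs = [
    [[0.9, 0.1], [0.2, 0.8]],   # replicate 1: two individuals, K = 2
    [[0.8, 0.2], [0.4, 0.6]],   # replicate 2
]
print([[round(v, 2) for v in row] for row in mean_ancestry(runs)])  # → [[0.85, 0.15], [0.3, 0.7]]
```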
Results
We obtained 17 to 2954 loci across all parameter combinations tested in our study (Additional file 1: Table S2), with substantial variation between individuals. Our results suggest that in the absence of any mismatch between loci (default settings in denovomap.pl) the SNPs obtained cannot differentiate between the two species. However, species differentiation is accurate when mismatches are allowed, and there is no significant effect of varying degrees of mismatches (Figure 1A). We observe that beyond the mismatch parameters of M = 3 (mismatches between reads within a locus) and n = 5 (mismatches between loci when comparing across individuals) the number of SNPs (or loci) does not increase significantly (Additional file 1: Tables S2 and S3). The presence of mild to considerable missing data (Additional file 1: Table S4) also did not adversely affect the assignment scores in STRUCTURE, and most individuals were ascribed group memberships in agreement with the microsatellite dataset. However, one individual (CA002) considered a genetic intermediate (microsatellite data) was consistently assigned to C. brachyotis based on the SNP analysis (Figure 1, Additional file 2: Figure S1). Further, we also observed that allowing only 10% missing data returned too few loci (228 loci) to differentiate between the two species, whereas 90% missing data obliterated any intermediate ancestry coefficients (Figure 1B). It appears that too much missing data reduces the power of assignment to a considerable extent. However, the cutoff percentage of missing data from STACKS does not represent the actual extent of missing data (Additional file 1: Table S4). Though the cutoffs vary greatly in stringency (10% to 90%), the actual extent of missing data does not vary drastically between the datasets (55% to 74%). This may be because, in our analysis, a locus in STACKS is recognized if it is present in at least one of the species.
Thus the percentage cutoff only corresponds to the species in which the locus is present and not to the complete dataset. As the two populations are not of the same sample size, the extent of missing data is greater than indicated by the cutoff. Rather, the cutoffs represent an increase in the number of loci when the cutoff parameters become less stringent. We conclude that biological diversity should be taken into consideration while generating SNPs in STACKS (similar checks should be performed with other SNP calling programs as well), and generally RAD-Seq population genomic datasets with considerable missing data may actually mine a considerably higher number of loci and have more power [7] to perform population genetic analyses such as assignment tests.

Figure 1. Ancestry coefficient for individuals (obtained from STRUCTURE) A) with increase in the number of mismatches allowed to generate loci in STACKS (default: two mismatches between reads within a locus; M2n2: two mismatches between reads within a locus and two mismatches between loci when comparing across individuals; M3n5: three mismatches within a locus and five between loci across individuals; M3n7: three mismatches within a locus and seven between loci across individuals; M3n5N7: three mismatches within a locus, five between loci across individuals, and additionally seven mismatches allowed to align secondary reads to a locus) and B) with increasing proportion of missing data.
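The distinction drawn above between the STACKS cutoff and the realized missing-data fraction can be made concrete by computing missingness directly from a genotype matrix, as tools such as PLINK do. An illustrative sketch (the matrix layout and names are ours, not the PLINK file format):

```python
MISSING = None  # placeholder for an uncalled genotype

def missingness(genotypes):
    """Overall and per-locus missing-data fractions for an
    individuals x loci genotype matrix."""
    n_ind, n_loci = len(genotypes), len(genotypes[0])
    per_locus = [
        sum(1 for row in genotypes if row[j] is MISSING) / n_ind
        for j in range(n_loci)
    ]
    overall = sum(per_locus) / n_loci
    return overall, per_locus

g = [
    ["A", None, "G"],   # four individuals, three loci
    ["A", "C", None],
    [None, "C", "G"],
    ["T", "C", "G"],
]
print(missingness(g))  # → (0.25, [0.25, 0.25, 0.25])
```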
Ethics statement
The sampling were approved by the institutional ethics committees (Internal Research Review Board (IRB), Ethical Clearance (EC), Biosafety and Animal Welfare committee approval to BC dated 21-11-2005 Madurai Kamaraj University and Institutional Animal Ethics Committee (IACE) to UR id UR-3/2009, National Centre for Biological Sciences).
Data accessibility
Raw sequence reads have been deposited in the Sequence Read Archive (SRA) (accession no. SRP042963).
Additional files
Additional file 1: Supplementary information. Table S1. Details of samples used for RAD-Seq library preparation. Table S2. Number of loci per sample for each data set. Table S3. Number of SNPs obtained by varying different parameters in the denovomap.pl program in STACKS. Table S4. Number of SNPs obtained by varying the level of missing data. The average level of missing data was calculated in PLINK 1.07 [7] (url: http://pngu.mgh.harvard.edu/purcell/plink/). Additional file 2: Figure S1. Bar plot of ancestry coefficient for individuals a) with increase in the number of mismatches allowed to generate loci in STACKS and b) with increase in the proportion of missing data.
An Exact Bosonization Rule for c=1 Noncritical String Theory
We construct a string field theory for c=1 noncritical strings using the loop variables as the string field. We show how one can express the nonrelativistic free fermions which describes the theory, in terms of these string fields.
Introduction
Noncritical string theories provide useful toy models to study various dynamical aspects of string theory (for reviews, see [1]-[4]). They are exactly solvable and have many features in common with the critical ones.
The c = 1 case is the most interesting one, in which a two-dimensional target space interpretation is possible. It is solved by matrix model techniques and can be reduced to a free nonrelativistic fermion system. On the other hand, c = 1 noncritical string theory can be described by a string field theory [5]. Although the string field theory does not look simple, its Hamiltonian can be diagonalized by using the free fermions. Therefore, from the point of view of the string field theory, it is an intriguing problem to find a way to express the fermions in terms of the string fields.
There have been many attempts to give such a "bosonization rule" [6]-[10]. In [6], in particular, the fermions were expressed approximately by using the perturbative fluctuation of the string field. This is based on a standard bosonization formula for the two-dimensional relativistic fermion. In this paper, we would like to propose an exact bosonization rule. We first rewrite the string field theory for c = 1 strings using the loop variables. The Hamiltonian consists of joining-splitting type terms, whose forms are quite similar to those in the string field theory for critical strings. Then we will give a bosonization rule to express the fermions in terms of these string fields. The bosonization rule we propose is exact even before the continuum limit is taken.
Our bosonization rule is a generalization of the D-instanton operator proposed in [11]. In [11], the D-instanton operator was used to investigate the functional form of the chemical potential of D-instantons [11]-[19]. Therefore our bosonization rule will be useful in understanding the nature of D-branes in c = 1 string theory. We will perform some perturbative calculations using this rule.
The organization of this paper is as follows. In section 2, we describe a string field theory for c = 1 strings in terms of the loop variables and show that it is equivalent to the Das-Jevicki's formulation [5]. In section 3, we give the bosonization rule. In section 4, we take the double scaling limit and describe the string field theory and the bosonization rule in the continuum limit. In section 5, we describe how one can perform perturbative calculations using the string field theory and the bosonization rule. Section 6 is devoted to discussions.
2. Collective field theory

c = 1 noncritical string theory can be described by matrix quantum mechanics, where M(t) is an N × N hermitian matrix. We can take the double scaling limit in which β → ∞ with an appropriate matrix potential U(M). The matrix Hamiltonian is derived from the action. In order to study the c = 1 theory, we are mainly interested in the so-called singlet sector; what is then relevant is the eigenvalues ζ_i (i = 1, ..., N) of the matrix M, and the wave function Ψ(ζ⃗) is given as a function of these eigenvalues. The Hamiltonian is expressed as a differential operator in terms of the ζ_i, where △(ζ⃗) is the Vandermonde determinant.
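For orientation, the matrix-model action and the eigenvalue Hamiltonian referred to above have the following standard form in one common convention (the distribution of factors of β varies between references, so this should be read as a sketch rather than the paper's exact normalization):

```latex
S = \beta \int dt \, \operatorname{Tr}\!\left( \tfrac{1}{2}\dot{M}^2 - U(M) \right),
\qquad
H = \sum_{i=1}^{N} \left[ -\frac{1}{2\beta}\,
     \frac{1}{\triangle(\vec{\zeta})}\,
     \frac{\partial^2}{\partial \zeta_i^{\,2}}\,
     \triangle(\vec{\zeta}) + \beta\, U(\zeta_i) \right],
\qquad
\triangle(\vec{\zeta}) = \prod_{i<j} (\zeta_i - \zeta_j).
```

The similarity transformation by the Vandermonde determinant is what turns the symmetric eigenvalue problem into a system of free fermions.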
Loop variables
Now we would like to construct the collective field theory for this matrix quantum mechanics. The basic idea of the collective field theory is to express the wave function of the system as a functional of the loop variable ϕ(l). In the matrix model, this quantity corresponds to a boundary on the worldsheet with length l; thus we consider this operator for l > 0. The wave function Ψ(ζ⃗) is now expressed as a functional Ψ[ϕ]. As we will see, any function Ψ(ζ⃗) can be expressed as such a functional, but the space of such functionals is bigger than the Hilbert space of the matrix quantum mechanics. We define the operators φ(l) and φ̄(l) acting on such functionals, and it is easy to see that the Hamiltonian eq. (2.2) can be expressed in terms of these variables.
Relation to Das-Jevicki variables
This collective field theory is of course equivalent to the Das-Jevicki theory. In order to rewrite the Hamiltonian in the Das-Jevicki form, we express the field ϕ(l) in terms of the density ρ(ζ) of the matrix eigenvalues. We assume that ρ(ζ) has compact support on the real axis. Taking the Laplace transform of ϕ(l), and defining the corresponding quantity for φ̄, it is straightforward to show the relation between our variables and the Das-Jevicki variables ρ, π, where ζ is real and δ > 0 is very small. The commutation relation (2.6) implies that π(ζ) is the canonical conjugate of ρ(ζ). Using this relation, we can rewrite the Hamiltonian (2.7) as exactly the Das-Jevicki Hamiltonian.
Since ρ and π are defined to be hermitian operators, the Hamiltonian is hermitian. From eqs. (2.11), we can obtain the hermitian conjugates of ϕ and φ̄; in particular, ϕ†(l) = ϕ(l). Before closing this subsection, one comment is in order. The Hamiltonian in eq. (2.12) is actually only the classical part of the Das-Jevicki Hamiltonian. In Das-Jevicki's formulation [5] there exist higher order terms, which we are not able to reproduce. We will come back to this point later.
Bosonization
The string field formulation using the loop variables is just another expression of the familiar Das-Jevicki formalism. However, the loop variables are convenient for guessing the form of the bosonization formula. The algebra of the loop variables is quite analogous to that of the bosonic oscillators. Indeed, if we compare ϕ(l) and φ̄(l) to α_n and α_{−n}, with l (l > 0) corresponding to n > 0, the commutation relation eq. (2.6) should correspond to [α_n, α_{−m}] = nδ_{n,m}. (3.1) Using this analogy, one can guess how to construct fermions from the bosonic operators ϕ and φ̄: from the usual bosonic oscillators, one constructs a fermionic operator roughly as an exponential of the boson, so it is conceivable that an analogous exponential of the loop variables will behave as a fermionic operator. However, things are not so straightforward. For one thing, zero modes play important roles in the usual bosonization, and we need to find a substitute for them in the collective field theory. Secondly, the above analogy is not exact with regard to the hermiticity of the operators, and this may cause trouble in defining the fermion conjugate to the one above. What we will show is that the above guess is essentially correct and that we can construct fermionic operators in the Hilbert space of the collective field.
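For comparison, the standard bosonization of a two-dimensional relativistic fermion, on which the guess above is modeled, reads (with zero modes written explicitly and cocycle factors suppressed):

```latex
\psi(z) \;\sim\; :\! e^{\,i\phi(z)} \!:\,, \qquad
\phi(z) = \hat{q} - i\,\alpha_0 \ln z + i \sum_{n\neq 0} \frac{\alpha_n}{n}\, z^{-n},
\qquad
[\alpha_n, \alpha_{-m}] = n\,\delta_{n,m}.
```

The two complications mentioned in the text correspond precisely to the zero-mode pair (q̂, α₀), which has no obvious counterpart among the loop variables, and to the hermiticity property α_n† = α_{−n}, which the pair (ϕ, φ̄) does not share.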
Collective field Hilbert space
Before constructing the fermionic operators, we construct the Hilbert space of the collective field ϕ so that it can describe the matrix quantum mechanics. Let ⟨ϕ₀| be the eigenstate of ϕ(l) with eigenvalue 0, i.e. ⟨ϕ₀|ϕ(l) = 0. (3.4) The bra ⟨ζ⃗| is then an eigenstate of ϕ(l). Now, for a state |Ψ⟩, we define the wave function as Ψ(ζ⃗) = ⟨ζ⃗|Ψ⟩ and identify Ψ(ζ⃗) with the wave function for the matrix eigenvalues. In such a representation, using eqs. (2.7) and (3.6), the collective field Hamiltonian (2.7) coincides with the Hamiltonian (2.3) for the matrix eigenvalues.
The state |Ψ⟩ should be of a form built by acting with φ̄ on |0⟩_φ̄, where |0⟩_φ̄ is the eigenstate of φ̄ with eigenvalue 0. Assuming that ⟨ϕ₀|0⟩_φ̄ = 1, we obtain an expression in which any symmetric function of the ζ_i can be represented. This is the basic idea on which the collective field theory is constructed.
Bosonization
Let us define the operators O_±(ζ). Here, ǫ > 0 and δ > 0 are small numbers, and we take the limits ǫ → 0, δ → 0 eventually. ǫ is necessary to regularize the divergence at l ∼ 0 in the integral. We consider ζ to be on the real axis, and the integral ∫_ǫ^∞ (dl/l) e^{−(ζ+iδ)l} ϕ(l) is supposed to have a cut on the real axis; δ specifies how to avoid the cut. We eventually consider these operators between the bra ⟨ϕ₀| and the ket |0⟩_φ̄. Thus we express O in the form in which all the ϕ's come to the left of the φ̄'s.
The hermitian conjugate of O_± can be given, in the above-mentioned operator ordering, essentially by flipping the sign of the exponent, as in the usual bosonization formula. We would like to use O_± and (O_±)† to define fermionic operators satisfying the canonical anticommutation relations. In order to do so, we need to calculate the commutation relations among these operators. Let us consider the product O_±(ζ)O_±(ζ′), for example. In order to define such a product, we first define it in the case ζ′ > ζ, and deal with the other case by analytic continuation. This procedure should be considered as a variant of the radial ordering in the usual case. We can then show the corresponding exchange identity, using ∫_ǫ^∞ (dl/l) e^{−ζl} ∼ −ln(ǫζ). (3.14) Similar formulas can be proved for the other products. Now let us define the fermionic operators ψ, ψ† satisfying the canonical anticommutation relation. ψ† can be considered as the creation operator, and since ⟨ϕ₀|ψ†(ζ) = 0, (3.18) ⟨ϕ₀| can be considered as the vacuum. We can define another pair of canonical fermions ψ̄, ψ̄†, which satisfy ⟨ϕ₀|ψ̄†(ζ) = 0 (3.21) and anticommute with ψ and ψ†. Thus we have constructed fermions ψ and ψ̄ which act on the Hilbert space of the collective field theory.
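The asymptotic identity (3.14) is the small-argument expansion of the exponential integral:

```latex
\int_{\epsilon}^{\infty} \frac{dl}{l}\, e^{-\zeta l}
 \;=\; E_1(\epsilon \zeta)
 \;=\; -\gamma_E - \ln(\epsilon \zeta) + O(\epsilon \zeta),
```

where γ_E is the Euler–Mascheroni constant, so up to a finite constant the integral behaves as −ln(ǫζ) for ǫ → 0.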
Description via fermions
Using the fermionic operator ψ(ζ), we can express the inner product of wave functions Ψ(ζ⃗) within the framework of the collective field theory. Defining the fermion wave function by Ψ_fermion(ζ⃗) ≡ △(ζ⃗)Ψ(ζ⃗), we obtain exactly the standard relation between the second-quantized fermion operator and the many-body wave function. The partition function of the matrix quantum mechanics is expressed as a trace in which the state |Ψ⟩ encodes the boundary condition. For large T, Z ∼ e^{−iE₀T} and E₀ can be identified with the free energy of the string theory. Since ⟨ϕ₀|H = 0, E₀ is the lowest energy eigenvalue of the N-fermion state. Thus the fermion operator ψ(ζ) is exactly the nonrelativistic free fermion which describes the matrix quantum mechanics. The other fermions ψ̄, ψ̄† correspond to matrix eigenvalues with "negative norm": indeed ⟨ϕ₀|ψ̄(ζ₁)···ψ̄(ζ_N)ϕ(l) = ⟨ϕ₀|ψ̄(ζ₁)···ψ̄(ζ_N) (−Σ_i e^{ζ_i l}). (3.26) Such eigenvalues appear in the supermatrix model; therefore the collective field theory can also describe the supermatrix model. It is natural for the collective field Hilbert space to include such modes, because it includes π(ζ), the conjugate of ρ(ζ), which makes it impossible for ρ(ζ) to take only nonnegative values. Moreover, if we take the potential U(ζ) to be bounded below, the fermions ψ̄† generate an energy spectrum that is not bounded below.
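The relation between the second-quantized fermion and the many-body wave function mentioned above has the standard Slater-determinant form (written schematically, with the precise normalization following the conventions of the text):

```latex
\Psi_{\mathrm{fermion}}(\vec{\zeta})
 \;\equiv\; \triangle(\vec{\zeta})\, \Psi(\vec{\zeta})
 \;=\; \langle \phi_0 |\, \psi(\zeta_1) \cdots \psi(\zeta_N)\, |\Psi\rangle,
\qquad
\triangle(\vec{\zeta}) = \det_{i,j} \zeta_j^{\, i-1} = \prod_{i<j} (\zeta_i - \zeta_j).
```

Antisymmetry of Ψ_fermion is supplied entirely by the Vandermonde factor, since Ψ(ζ⃗) itself is a symmetric function of the eigenvalues.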
The double scaling limit
So far we have been studying the matrix quantum mechanics without taking the continuum limit. Let us now take the double scaling limit of the bosonization rule given above. Putting U(ζ) = −½ζ², we take the limit β → ∞. The continuum string field should be defined as ϕ_c(l) ≡ ϕ(√β l). (4.1) We use the variable y = √β ζ to describe the continuum quantities. The Hamiltonian in the continuum limit is given as H_c ≡ βH; introducing the chemical potential μ, we obtain the continuum Hamiltonian. Thus, by replacing ζ by y and rescaling ǫ and δ by √β, all the bosonization rules are the same as the ones given in the previous section, and ψ, ψ† are defined accordingly. We will omit the subscript c, and suppress the fact that ǫ and δ are rescaled by √β, in the following. The continuum Hamiltonian can be written in terms of the fermions. The continuum limits of the Das-Jevicki variables are defined in the same way: from the relation (2.8) we get (4.5), and the relation (2.11) implies ϕ(y ± iδ) + φ̄(−y) = i∂_y π(y) ∓ iπρ(y), which gives the relation between the variables in the continuum limit. The collective Hamiltonian in the Das-Jevicki form becomes H_c = ∫ dy [ −½(y² − 2μ)ρ(y) + ½(∂_y π(y))² ρ(y) + (π²/6) ρ³(y) ]. (4.8)
Perturbative calculations
Since we have the exact expression for the fermion variables in terms of the string field, in principle, we can calculate the amplitudes involving fermions perturbatively using the string field. In this section, we perform some calculations for a simple example, and point out a subtlety involved in such calculations.
Expansions of the Das-Jevicki variables
In order to consider the theory around the vacuum, the Das-Jevicki variables are more convenient than ϕ, φ̄. From the continuum Hamiltonian (4.8), we can see that a static vacuum configuration is obtained for a suitable eigenvalue distribution ρ₀(y).
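Extremizing the Hamiltonian (4.8) over ρ at π = 0 fixes the static distribution; in the standard conventions it reads:

```latex
\pi \rho_0(y) \;=\; \sqrt{y^2 - 2\mu}\,, \qquad y \le -\sqrt{2\mu},
```

and with the parametrization y = −√(2μ) cosh τ introduced below, this becomes πρ₀ = √(2μ) sinh τ, since y² − 2μ = 2μ sinh²τ.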
The collective field theory around this vacuum is most conveniently described by introducing the variable τ satisfying y = −√(2μ) cosh τ, (5.2) together with the field φ(τ) and its canonical conjugate π_φ(τ). Using these variables, the Hamiltonian for the fluctuation becomes that of a massless boson φ in the two-dimensional spacetime (t, τ) with nonrelativistic interactions. One can expand the operators φ and π_φ in oscillators, where the boundary conditions for φ and π_φ are chosen as in [5]. b and b† are hermitian conjugates of each other and satisfy the standard commutation relation. The free part of the Hamiltonian (5.5) then takes the standard diagonal form.
Perturbative calculations
Substituting eq. (5.6) into eq. (5.4) and using eq. (4.6), we can expand the fields ϕ, φ̄ in terms of the oscillators b(E), b†(E). In principle it is possible to perform perturbative calculations using these oscillators.
Here let us study the fermionic operators taking the one-loop effects into account. Using eq. (3.11), it is straightforward to see how the fermions can be written for y ≤ −√(2μ). We have introduced Λ ∼ ǫ⁻¹ to regularize the integral in place of ǫ, and we set y = −√(2μ) cosh τ. Rewriting eq. (5.10) in normal ordered form with respect to b, b†, we can take the one-loop effects into account; the result involves a numerical constant C. This form of the fermionic operator is quite like those found in [6]. In particular, we obtain the WKB wave function precisely as a factor, but multiplied by a divergent constant. Actually, such a calculation is subtle for y ≤ −√(2μ). In order to deal with the matrix model, we should restrict ourselves to states in the string field Hilbert space which are annihilated by ψ̄†. Suppose we are given a coherent state ⟨ϕ₀| which satisfies ⟨ϕ₀|ϕ(l) = ϕ₀(l)⟨ϕ₀|, (5.13) and let us examine what conditions ⟨ϕ₀| should satisfy in order to be annihilated by ψ̄†. We assume that ϕ₀(l) can be written as ϕ₀(l) = ∫ dζ e^{ζl} ρ₀(ζ), (5.14) with ρ₀(ζ) ≥ 0. ⟨ϕ₀|ψ̄†(ζ) = 0 holds when the two terms in the last line cancel each other. When ρ₀(ζ) ≥ 0, we expect that the difference between the two comes from the difference in the imaginary parts of the exponents. Therefore ⟨ϕ₀|ψ̄†(ζ) = 0 if and only if ∫^ζ dζ′ ρ₀(ζ′) is an integer. This is automatically satisfied before the continuum limit is taken, but it is a very subtle condition in the continuum limit. In the vacuum configuration, the eigenvalues are distributed in the region y ≤ −√(2μ); hence for y ≤ −√(2μ) this condition is very subtle. The divergences in eq. (5.12) are signs of this subtlety.
The situation is quite similar to that in the c = 0 [17] and c < 1 [14] cases. In those cases, we encounter divergences similar to the ones in eq. (5.12). However, in calculating physical quantities such as the chemical potential of instantons, they are cancelled by divergences from other factors and we eventually obtain finite results. We expect that similar things happen in the c = 1 case.
Conclusions and discussions
In this paper, we have constructed a string field theory for c = 1 noncritical string theory using the loop variables. We have given an exact bosonization rule by which the nonrelativistic free fermions can be expressed in terms of the string field. The description by the string field involves fermions with negative norm and energy, besides the usual fermion. This is inevitable, because we should introduce a canonical conjugate to the eigenvalue distribution function ρ in the string field theory. The existence of such extra fermions causes subtleties in the perturbative calculations.
We have argued that the Hamiltonian written in terms of the loop variables is equivalent to the classical part of Das-Jevicki's Hamiltonian. Higher order terms are necessary for Das-Jevicki's Hamiltonian to reproduce the results of the matrix quantum mechanics. On the other hand, the nonrelativistic fermion formulation of the matrix quantum mechanics is reproduced from the string field theory. It is conceivable that our prescription for the normal ordering of the fermionic operators has something to do with this discrepancy. It is an important and intriguing issue to clarify this point by comparing our results with the perturbative calculations in [25, 26], for example. We leave it as a future problem.
In [10], nonperturbative effects in c = 1 string theory were investigated using another bosonization rule. The authors of [10] showed that there is a nonperturbative correction to the zero mode of the bosonic fields. In our rule, on the other hand, the bosons φ(τ) and π_φ(τ) cannot have any zero mode, by construction, due to the boundary condition (5.7). This suggests that their approach and ours will give different results for nonperturbative effects.
Since the fermions in c = 1 strings can be considered as D-branes, our results will be useful in understanding how one should treat D-branes in the context of closed string field theory, and in clarifying the relation between string theory and the matrix models. The Hamiltonian for our string field involves a joining-splitting interaction similar to that of the light-cone gauge string field theory for critical strings. Therefore, we may be able to use our approach in critical string theory to investigate the above-mentioned issues.
"year": 2007,
"sha1": "2f579e085b472cdfedea34a6248b791800c1cc47",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0704.2960v2.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a268dc3965274388b3d4eda6a36a0d3f662cbb28",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Forecasting the Cell Temperature of PV Modules with an Adaptive System
The need to reduce energy consumption and to optimize energy production processes has pushed technology towards the implementation of hybrid systems for the combined production of electric and thermal energy. In particular, recent research looks with interest at the installation of hybrid PV/T systems. To improve the energy performance of these systems, it is necessary to know the operating temperature of the photovoltaic modules. The determination of the operating temperature T_c is a key parameter for the assessment of the actual performance of photovoltaic panels. In the literature, it is possible to find different correlations that evaluate T_c referring to standard test conditions and/or applying theoretical simplifications and assumptions. Nevertheless, the application of these different correlations, under the same conditions, does not lead to unequivocal results. In this work an alternative method, based on the employment of artificial neural networks (ANNs), is proposed to predict the operating temperature of a PV module. This methodology does not require any simplification or physical assumption. The paper describes the ANN that obtained the best performance: a multilayer perceptron network. The results have been compared with experimentally monitored data and with some of the most cited empirical correlations proposed by different authors.
Introduction
In a world energy scenario affected by the reduction of the fossil fuel supply used for the production of electrical and thermal energy, the potential offered by renewable energy sources (RES) is strategic for industrial countries [1]. The exploitation of RES has promoted several thermal and electric technologies, improving the overall energy conversion efficiency [2]. The development of PV systems and of solar thermal technology is playing an important role in building integration, covering the electricity and thermal needs for the production of hot water and for internal heating.
In recent years, several research efforts have led to the installation and study of hybrid devices [3]: hybrid photovoltaic/thermal (PV/T) collectors.
The temperature of a PV module increases when it absorbs solar radiation; this temperature rise causes a decrease in electrical conversion efficiency, and a PV/T system is capable of partially avoiding this undesirable effect. Hybrid technologies permit controlling the system temperature: the presence of a water/air circulation system cools the PV module, helping to increase the electrical output, and furthermore the hot water can afterwards be exploited, improving the overall efficiency of the whole system [4]. Furthermore, the operating temperature of PV modules is of significant importance when they are used in building integrated photovoltaic (BIPV) energy systems. Many authors have pointed out that the operating cell temperature must be evaluated for the optimal sizing of a BIPV system [5-9]. The assessment of T_c can therefore give important results in terms of indoor thermal quality in the sustainable building field, especially in the application of BIPV as smart windows [10].
For these reasons, the evaluation of the PV temperature is essential to ensure high performance.
As described by Skoplaki and Palyvos in [11], many correlations for predicting the electrical performance of a photovoltaic module have been proposed and used by different authors. This review highlights the role of the silicon temperature as the main parameter affecting the conversion efficiency. However, as better explained in the following, the equations describing the energy balance of a PV system make the determination of the operating temperature of a PV module far from simple. In fact, because of the different natures of the variables present in the balance (physical, thermoelectrical, and environmental) and the difficult determination of many key parameters (e.g., the overall heat transfer coefficient and the optical properties), the evaluation of the cell temperature by means of an empirical correlation can lead to unreliable results.
In this work, the authors explore the possibility of offering an alternative method to assess the operating temperature of PV devices by using adaptive techniques. Adaptive systems such as ANNs should allow predicting, in a fast and reliable way, the temperature of the PV module under varying boundary conditions. To validate the reliability of the proposed ANN, two different modules were tested and the results were compared with experimentally monitored data.
The Cell Temperature of a PV Module
In general, the performance of a photovoltaic module is defined according to the "peak power," which identifies the maximum electric power supplied by a PV system when it receives a solar irradiance of 1 kW/m² (G_ref) at a cell temperature of 25 °C (T_ref). These conditions are only nominal, because the solar irradiance has a variable intensity and the module is subject to considerable temperature changes [12].
Indeed, in actual conditions it is essential to evaluate the operating point under all possible circumstances of solar irradiance G, cell temperature T_c, wind speed, air temperature T_air, and electric load.
In Figure 1, it is possible to observe how the intersection between the load line and the I-V curves identifies the working point; with the same graphical method, it is possible to identify the working point in terms of electric power (the red circles identify the maximum power points). As is easy to understand, T_c is a key parameter that affects the energy conversion efficiency of a PV module: increasing the temperature decreases the delivered power.
In the literature, several empirical correlations are available that employ the PV module operating temperature, such as the expression proposed by Evans [13], which describes the module's efficiency for given values of T_c and G as

η = η_ref [1 − β (T_c − T_ref) + γ log₁₀ G],  (1)

where η_ref is the efficiency at standard test conditions (STC), G is expressed in kW/m², and the temperature coefficient β and the insolation coefficient γ have values of 0.004 K⁻¹ and 0.12, respectively, for crystalline silicon modules [14]. Other authors [15-17], instead, proposed the use of empirical constants whose values refer only to a few models of PV panels. The most common procedure to determine the cell temperature consists in using the normal operating cell temperature NOCT [18-20]. The value of this parameter is given by the PV module manufacturer: T_c is then dependent on the air temperature T_air and on the solar irradiance G according to (2) as follows:

T_c = T_air + (NOCT − 20 °C) · G / (800 W/m²).  (2)
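As a minimal numerical sketch of the two correlations above (function and variable names are ours, not the paper's; the logarithmic insolation term is written with G normalized to G_ref = 1 kW/m², one common convention for Evans' formula):

```python
import math

def cell_temp_noct(t_air, g, noct=45.0):
    """NOCT estimate of the cell temperature, Eq. (2):
    T_c = T_air + (NOCT - 20 degC) * G / (800 W/m^2).
    t_air [degC], g [W/m^2], noct [degC]."""
    return t_air + (noct - 20.0) * g / 800.0

def evans_efficiency(t_c, g, eta_ref=0.15, t_ref=25.0,
                     beta=0.004, gamma=0.12, g_ref=1000.0):
    """Evans' correlation, Eq. (1), for crystalline silicon:
    eta = eta_ref * (1 - beta*(T_c - T_ref) + gamma*log10(G/G_ref))."""
    return eta_ref * (1.0 - beta * (t_c - t_ref)
                      + gamma * math.log10(g / g_ref))
```

At STC (G = 1000 W/m², T_c = 25 °C) the correlation returns η_ref, and the efficiency drops by 0.4% (relative) for every kelvin above T_ref.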
This very simple method yields satisfying results only if the PV modules are not roof integrated.
However, the NOCT approach estimates T_c based only on the passive behaviour of the PV module, taking into account neither the actual weather variables nor the electricity production regime. This approach neglects the fact that not all the absorbed solar irradiance is converted into electricity: generally, only 15-18% is converted, while the remaining part of the insolation is transformed into heat, contributing to increasing the cell temperature.
The heat transfer between the PV panel and the surrounding environment is driven by a global heat transfer coefficient, which describes the radiative and convective exchange processes.
For these reasons, after a simplified description of the energy balance of a PV module, which highlights the dependence of the operating temperature on environmental parameters and on the thermophysical properties of the PV system, the authors underline how the determination of T_c using conventional calculation procedures is often complex and difficult. On the other hand, the use of any empirical relationship, typically characterized by assumptions and simplifications, often affects the results. In the following, the application of the ANN approach to predict the operating temperature of a PV module is proposed.
The Energy Balance of a PV Module
As already noted, the efficiency of a photovoltaic panel depends mainly on the intensity of the solar irradiance and cell temperature.Considering a generic PV system the energy exchanges can be depicted by Figure 2.
In heat transfer analysis, some bodies can be observed to behave as a "lump" whose interior temperature remains essentially uniform at all times during a heat transfer process; the temperature of such bodies can be taken to be a function of time only [32]. Applying this approach to the energy balance of a photovoltaic panel, the lumped system analysis describes the heat exchange as in (3), where P_ele is the electrical power produced by the module [W]; P_sol is the absorbed solar power [W]; C is the thermal capacity of the PV system (glass, silicon, and plastic layers) [J/K]; A is the surface of the panel [m²]; U is the global heat exchange coefficient between the module and the surrounding environment [W/m²K]; and t is the time [s]. The absorbed solar power depends on the transmission coefficient of the glass and the absorption coefficient of the silicon, while the electrical power is the product of the voltage V and the current I delivered by the panel. Generally, a PV cell is represented as a current generator connected in parallel with a diode, together with two resistances connected in series and in parallel, respectively; the I-V characteristic can then be described by the one-diode equation (6), in which the photocurrent, the diode saturation current, and the ideality factor of the diode appear. Applying the definition of the electric power in (5) and considering the one-diode approach represented by (6), P_ele can be stated accordingly. Knowing the electric power and the absorbed solar power, it is possible to evaluate the thermal power that contributes to heating the PV panel, and in this way (3) can be rewritten. If at time t we assume constant T_air, thermal capacity C, coefficient U, and thermal power, it is possible to integrate the balance and obtain the expression of the temperature of the PV system. As can be observed, even in this simple lumped-parameter approach, the evaluation of the cell temperature is not immediate, and it strongly depends on the solar irradiance and the air temperature. Furthermore, the presence of exponential terms and the implicit expression of the electric power complicate the resolution procedure, not allowing a direct mathematical calculation. In addition, the determination of the global heat transfer coefficient U must be taken into account.
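Under the constant-coefficient assumptions stated above, the lumped balance admits a closed-form solution; the sketch below is our own illustration with our own variable names, not the authors' code:

```python
import math

def lumped_cell_temp(t, t0, t_air, q_th, u, a, c):
    """Solution of the lumped balance C dT/dt = q_th - u*a*(T - T_air):
        T(t) = T_inf + (T0 - T_inf) * exp(-u*a*t/C),
    with steady state T_inf = T_air + q_th/(u*a).
    t [s], temperatures [degC], q_th [W], u [W/m^2 K], a [m^2], c [J/K]."""
    t_inf = t_air + q_th / (u * a)          # asymptotic cell temperature
    return t_inf + (t0 - t_inf) * math.exp(-u * a * t / c)
```

The exponential term makes clear why no simple algebraic formula for T_c exists once the thermal input itself depends implicitly on the temperature through the electrical power.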
Unfortunately, the case of hot inclined rear surfaces is still an open problem [33].
If we give up the idea of analytically solving the problem, as previously described in Section 2, it is possible to find in the literature different correlations for the value of U [11, 17, 34-37]. However, each correlation is characterised by simplifications and/or assumptions that do not accurately represent the complexity of the PV energy balance.
Definition of ANN
ANNs are computational intelligence architectures based on emulating biological neural networks, and they have the capability of "learning" the behavior of input data. The basic unit of an ANN is the neuron. An artificial neuron (AN) acts in much the same way as a biological neuron: each has a set of inputs and produces an output based on them. A biological neuron produces an output by comparing the sum of its inputs to a threshold value; in addition, it is able to vary the weight of each input according to its priority. The inputs and outputs of a biological neuron are called synapses; they may act as inputs to other neurons or as outputs. Thus, the set of neurons and synapses creates an interconnected network, which produces an output based on weights, sums, and comparisons [22].
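The behaviour of a single artificial neuron described above can be sketched as a weighted sum followed by a nonlinear activation (the sigmoid used here is our choice of nonlinearity, not necessarily the paper's):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of the inputs plus a bias,
    squashed by a sigmoid activation."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))
```

With zero weights the neuron sits at the sigmoid midpoint (0.5); increasing a weight on a positive input drives the output towards 1.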
Generally, an artificial neural network consists of multiple interconnected artificial neurons, arranged in several layers; Figure 3 shows a schema of a typical arrangement of neurons in an ANN.
The use of ANNs often makes it possible to identify correlations between data that are very complex to assess.
Experimental Setup
To apply a neural approach, it is necessary to have a large database of data representing the analysed system. For this reason, to build and train a specific ANN, a dedicated test facility situated on the roof of the Energy Department of the University of Palermo (38.1070°N, 13.1220°E) was set up. The experimental device was built to acquire the thermoelectrical parameters of the photovoltaic modules and the weather variables that define the energy balance of a PV system. The test facility consists of two silicon panels (Kyocera KC175GHT-2 and Sanyo HIT 240 HDE4) connected to a precision resistance set (Figure 4). The technical data of the two panels provided by the manufacturers are shown in Tables 1 and 2. The measurements were performed with a National Instruments NI USB-9221 data acquisition module and a Delta Ohm pyranometer (mod. LP PYRA 02 AV) linked to an Advantech ADAM 6024 module. A Davis Vantage PRO 2 Plus weather station was used to collect the measurements of air temperature and relative humidity, wind speed and direction, horizontal global solar irradiance, and atmospheric pressure. The temperature of the panels was measured using thermocouples (type T, copper-constantan) placed at three different points of each panel [34].
In this way, in addition to the weather data collected by the weather station and the cell temperature measured by the thermocouples, it was also possible to measure and collect the electrical data related to the operating panel: the delivered power, the short-circuit current I_SC, and the open-circuit voltage V_OC.
Preliminary Analysis of the Collected Data
To correctly apply the neural approach, all data must undergo a preprocessing step consisting of a preliminary analysis that permits identifying possible outliers, removing incorrect values, carrying out a statistical analysis, and performing a correlation analysis. The analysis of the usual current-voltage curves given by the manufacturers at constant temperature or constant solar irradiance does not allow a correct evaluation of the thermoelectrical behaviour of a photovoltaic panel because, in actual conditions, the temperature and the solar irradiance change simultaneously.
Indeed, to predict the yield of a photovoltaic system, it is essential to evaluate the operating condition under all possible circumstances of solar irradiance, cell temperature, wind speed, air temperature, and electric load, when the photovoltaic elements are working and producing electricity. To identify the operating regimes of the panel as a function of the electricity production regimes, the authors have chosen to compare the operating voltage V with the maximum power point voltage V_mpp,panel for given solar irradiance and cell temperature (Figure 1). In this way, the ratio V/V_mpp,panel easily allows identifying the operating regimes of the panel as follows:
(i) when the ratio between the working voltage V and the maximum power voltage V_mpp,panel is less than 0.95, the I-V characteristic is almost horizontal, and the power output is proportional to the incident insolation;
(ii) when the ratio V/V_mpp,panel is greater than 1.05, the I-V characteristic of the panel decreases much more rapidly and the influence of insolation becomes less significant (saturation conditions);
(iii) the regime identified by a ratio 0.95 < V/V_mpp,panel < 1.05 characterizes the state of a PV panel connected to a maximum power point tracking (MPPT) system, in which the load dynamically adapts to generate the maximum power.
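The three regimes above amount to a simple threshold test on the ratio V / V_mpp,panel; a minimal sketch (labels are ours):

```python
def operating_regime(v, v_mpp):
    """Classify the panel operating point from the ratio V / V_mpp,panel
    using the 0.95 and 1.05 thresholds given in the text."""
    r = v / v_mpp
    if r < 0.95:
        return "insolation-proportional"  # nearly horizontal I-V branch
    if r > 1.05:
        return "saturation"               # steeply falling I-V branch
    return "MPPT"                         # near the maximum power point
```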
The evaluation of the maximum power voltage was carried out by using correlations in which two constants were determined, respectively, by a logarithmic interpolation (applying the least squares technique) of the power output values at different solar irradiances, and by a linear interpolation (again with least squares) at different temperatures. The two pairs of constants pertaining to the two modules were evaluated using Tables 3 and 4 for the Kyocera module and Tables 5 and 6 for the Sanyo module. For the Kyocera module, the mathematical interpolation of the data collected in these tables yielded the values 1.1395 and 26.172.
For the Sanyo module, the mathematical interpolation of the data collected in these tables yielded the values 1.7007 and 26.153.
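The logarithmic least-squares fit mentioned above can be sketched as follows. Note that the exact functional form of the correlation is not recoverable from the text here, so the form V_mpp = a·ln(G) + b is an assumption made purely for illustration of the fitting procedure:

```python
import numpy as np

def fit_log(g, v_mpp):
    """Least-squares fit of v_mpp = a * ln(g) + b (assumed illustrative form).

    g      : array of solar irradiance values
    v_mpp  : array of measured maximum power point voltages
    returns the fitted constants (a, b)
    """
    # Build the design matrix [ln(g), 1] and solve the linear least-squares problem.
    A = np.column_stack([np.log(g), np.ones_like(g)])
    (a, b), *_ = np.linalg.lstsq(A, v_mpp, rcond=None)
    return a, b
```

The linear fit against temperature follows the same pattern with `g` replaced by the temperature values and `np.log(g)` replaced by the temperatures themselves.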
In this way, filtering the data for the operating regime represented by a ratio 0.95 < V/V_mpp,panel < 1.05, it was possible to identify the data close to the maximum power points. Filtering the data collected for both modules, two datasets, indicated in Table 7, have been created; 15% of the filtered data was set aside as a test dataset (not used during the ANN training phase).
The correlation analysis for the two different PV modules (Figures 5 and 6) permits a first evaluation of the mutual relationships between the cell temperature and all the other features. The preliminary correlation analysis identified a strong correlation between the cell temperature and the solar irradiance, the short-circuit current I_SC, the open-circuit voltage V_OC, and the electrical power P_ele; on the other hand, it identified a moderate correlation with the air temperature and the wind speed.
A statistical analysis made it possible to assess the maximum (Max), mean (Mean), and minimum (Min) values and the standard deviation (StDev) of all considered features (Tables 8 and 9).
Application of ANN
According to the type and nature of the data collected from the experimental setup, several topologies of neural networks could be chosen. Different simulations relating to several ANN topologies have been tested, but in this work only the best ANN will be described.
One Hidden Layer MLP.
A multi-layer perceptron (MLP) is a kind of ANN consisting of multiple layers of ANs in which each layer is fully connected to the next one. Except for the input ANs, each node is a neuron with a nonlinear activation function. An MLP utilizes a common supervised learning technique for training the network. This topology is one of the simplest available for ANNs; our MLP network is composed of two input sources, two function blocks, two weight layers, one hidden weight layer, and one error criterion block. Figure 7 schematizes our one hidden layer MLP topology used to evaluate the cell temperature.
The input source represents all the available data that can be used for training or testing; in this case it contains all the vectors obtained after applying the filter (Table 7): 85% of the filtered vectors were used for the training phase and 15% for the testing phase.
The function blocks can be seen as nonlinear thresholds for the propagation of the signals. They give the adaptive system its nonlinear computing capabilities, and those used in the proposed network have a sigmoidal function.
The weight layers represent the long-term memory of the system and are adjusted during the learning phase. Finally, the error criterion is a block that takes two signals and compares them according to a specific criterion; here the signals stop flowing across the system. The training phase has been optimised with respect to the number of epochs to avoid overfitting.
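A minimal numerical sketch of such a one hidden layer MLP is given below, using plain gradient descent on a mean-squared-error criterion. The layer sizes, learning rate, and weight initialisation are illustrative choices, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class OneHiddenMLP:
    """One hidden layer MLP: sigmoidal function block, linear output layer."""

    def __init__(self, n_in, n_hidden):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)   # hidden layer (sigmoidal block)
        return self.h @ self.W2 + self.b2          # linear output, e.g. a temperature

    def train(self, X, y, epochs, lr=0.3):
        # The epoch count is capped externally to avoid overfitting.
        for _ in range(epochs):
            out = self.forward(X)
            err = out - y[:, None]                 # error criterion block (MSE gradient)
            dW2 = self.h.T @ err / len(X)
            db2 = err.mean(axis=0)
            dh = err @ self.W2.T * self.h * (1 - self.h)
            dW1 = X.T @ dh / len(X)
            db1 = dh.mean(axis=0)
            self.W1 -= lr * dW1; self.b1 -= lr * db1
            self.W2 -= lr * dW2; self.b2 -= lr * db2
```

In practice one would feed the filtered vectors (irradiance, wind speed, air temperature, etc.) as `X` and the measured cell temperatures as `y`, stopping training at the epoch count reported in Table 10.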
Table 10 reports the number of epochs needed to avoid overfitting for the two different PV panels. After the training, for each ANN the postprocessing phase evaluates the mean error (ME) and the mean absolute error (MAE), which measures how close forecasts or predictions are to the eventual outcomes. Furthermore, the standard deviation (StDev), which shows how much variation or "dispersion" exists from the average (mean, or expected value), was evaluated.
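The three postprocessing quantities described above can be computed directly from the predicted and measured values; a minimal sketch (function name is illustrative):

```python
import numpy as np

def postprocess(pred, target):
    """Return ME, MAE, and the standard deviation of the errors."""
    err = np.asarray(pred, dtype=float) - np.asarray(target, dtype=float)
    return err.mean(), np.abs(err).mean(), err.std()
```

Applied to the test vectors, these are the values reported in Table 11.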
Figures 8 and 9 show the ME distribution for the Kyocera and Sanyo modules, respectively, in terms of cell temperature.
Figures 10 and 11 show the MAE distribution for the Kyocera and Sanyo modules, respectively, in terms of cell temperature.
Table 11 reports the values of ME, MAE, and StDev for the two different PV panels.
Furthermore, the confidence plot, which gives an estimated range of values likely to include the calculated cell temperature with a probability of 95%, was evaluated. Due to the physics of the analyzed problem, the confidence plot was evaluated only for daylight hours, neglecting the temperature trend during the night. In Figure 12 the confidence band of the cell temperature for the Kyocera module is ±0.23 °C, and in Figure 13 the confidence band for the Sanyo module is ±0.74 °C. In these figures the range of the confidence plot is indicated by the red (high) and blue (low) lines, while the measured data are indicated by the black line and those obtained from the ANN by the purple line. As can be seen, the output data coincide almost perfectly with the desired data, within a very narrow confidence band.
Results and Discussion
Based on the analysis of the results obtained, it is clear that the proposed ANN approach for evaluating the operating temperature performs very well, with an extremely narrow confidence band that does not exceed ±0.7366 °C. This methodology is characterised by great flexibility and reliability, providing excellent results for any kind of module, monocrystalline and/or polycrystalline. Furthermore, this approach does not rely on any simplifications or assumptions; finally, the learning time is very short.
To validate the ANN methodology, a comparison between the ANN results and the cell temperature calculated with some of the most cited empirical correlations was carried out, extracting the MAE values. The following correlations were chosen.
Servant correlation (see [35]), where three constants are defined by the author's model.
Duffie-Beckman correlation (see [36][37][38]), where the efficiency appears and a constant ratio is defined by the authors' model.
Hove correlation (see [39]), where the efficiency appears and a constant ratio is determined experimentally.
The results shown in Table 12 were obtained by applying the previous correlations to the same modules (Kyocera and Sanyo) and the same data set (the filtered data), and calculating the mean absolute error between the measured and the calculated data.
Conclusion
In this paper, an artificial neural network approach has been proposed to determine the operating temperature of PV modules. As previously described, the energy balance of a generic PV system is not easy to solve, because of the different natures of the parameters (thermoelectric, environmental, and physical) and the complex mathematical formulation. Different authors have proposed several empirical correlations intended to yield the PV module operating temperature directly, developed for common geometries and weather conditions. Generally, if the correlations are in implicit form, an iterative procedure is necessary for the calculation; if the correlations are in explicit form, they very often refer only to reference conditions or use fixed constant values. In any case, these assumptions and simplifications can affect the reliability of the results. The application of the ANN approach, instead, represents a simple and fast solution to correctly evaluate the operating regime of a PV module under varying boundary conditions.
In this work the authors proposed a one hidden layer MLP to determine the cell temperature of a generic PV panel. The ANN has been trained and tested with experimental data consisting of air temperature, wind speed, solar irradiance, power output, open-circuit voltage, short-circuit current, and cell temperature. The resulting ANN constitutes a reliable tool to forecast the cell temperature of the PV panel. Comparing the performance of this network with that of some of the most cited empirical correlations, the ANN results present a significantly lower MAE. Furthermore, the very short time required for the training phase suggests that the ANN could be integrated into software for run-time evaluation of the cell temperature.
Figure 1 :
Figure 1: Working point of a generic PV panel at constant irradiance (1000 W/m 2 ) with varying temperature and electric load.
Figure 2 :
Figure 2: Energy balance in a PV system.
Figure 3 :
Figure 3: Generic schema of an ANN.
Figure 5 :
Figure 5: Correlation analysis between operating temperature and all input data for the Kyocera module.
Figure 6 :
Figure 6: Correlation analysis between operating temperature and all input data for the Sanyo module.
Figure 7 :
Figure 7: Schema of one hidden layer MLP topology for the cell temperature evaluation.
Figure 8 :
Figure 8: Error distribution over 50 vectors of the cell temperature with the one hidden layer MLP topology for the Kyocera module.
Figure 9 :
Figure 9: Error distribution over 31 vectors of the cell temperature with the one hidden layer MLP topology for the Sanyo module.
Figure 10 :
Figure 10: Absolute error distribution over 50 vectors of the cell temperature with the one hidden layer MLP topology for the Kyocera module.
Table 3 :
Characteristic data values of Kyocera KC175GHT-2 at constant temperature (25 ∘ C) with varying solar irradiance.
Table 5 :
Calculated characteristic data values of Sanyo HIT240HDE-4 at constant temperature (25 ∘ C) with varying solar irradiance.
Table 7 :
Datasheet and dataset vectors.
Table 8 :
Statistics evaluation of Kyocera panel.
Table 9 :
Statistics evaluation of Sanyo panel.
Table 10 :
Number of epochs and learning time.
Figure 11: Absolute error distribution over 31 vectors of the cell temperature with the one hidden layer MLP topology for the Sanyo module.
Table 11 :
Mean error and mean absolute error of the one hidden layer MLP topology.
Table 12 :
Mean absolute error results for different correlations.
Current assimilation of integral experiments often consists in adjusting multi-group cross sections with feedback from critical reference benchmarks. In order to maintain the constraints coming from nuclear models, we present here a method to achieve assimilation of integral experiments on nuclear parameters, from which nuclear data are evaluated. This method, based on Bayesian inference, uses continuous-energy reactivity sensitivities to nuclear parameters across all nuclear data types (cross section, angular distribution, energy distribution, fission multiplicity, and spectrum). This improvement was made possible by coupling a stochastic transport code and a nuclear data evaluation code. The study of a test case, the assimilation of the Jezebel ICSBEP benchmark on a plutonium-239 toy evaluation, shows that angular and energy distributions have a non-negligible impact on the assimilation process and results.
Introduction
Nuclear data evaluation relies, in part, on adjustments of nuclear parameters on differential experiment data. To add another source of information, one can use the data from integral experiments. Usually, this process is applied to multi-group cross sections, using a deterministic transport code. Unfortunately, this may deteriorate physical constraints coming from the nuclear models used to evaluate nuclear data. To get the best adjustment and maintain the physical correlations of nuclear data, one has to work directly with the nuclear parameters: thus, we can take into account the influence of parameters not only on cross sections, but also on angular (AD) and energy (ED) distributions, when the latter are obtained from the same set of parameters. Above all, to exclude multi-group discretization bias, continuous-energy data should be used. Previous works have successfully extended the assimilation process to nuclear parameters, even using continuous energy [1,2]. However, these were limited to the influence of cross sections, and continuous-energy calculations were limited to specific nuclear parameters (resonance parameters). A recent coupling [3] between the nuclear reaction code CONRAD [4] and a development version of the stochastic code TRIPOLI-4® [5] allows calculating full sensitivities of integral experiment reactivity to nuclear parameters, with contributions from all nuclear data, without energy or model restrictions [6,7]. With such a tool, we are now able to adjust nuclear parameters and phenomenological models on integral experiment results, and especially here, on criticality experiment reactivities.
* e-mail: elias.vandermeersch@cea.fr
Complete feedback on nuclear parameters
The assimilation of criticality experiments may be achieved with Bayesian inference, where one updates prior knowledge of the nuclear parameters with results from integral experiments. With the help of the CONRAD code, such a problem is here linearized and solved by iterations of a Newton method. Stochastic methods can be used to solve this type of problem and avoid linearization, but with a high computational cost, particularly for the adjustment of nuclear parameters. Here, using information about the initial nuclear parameter values x⃗_prior, their correlations M_x^prior, the expected integral experiment reactivities y⃗, the reactivity sensitivities S to nuclear parameters, the correlations between the integral experiments M_y, and the reactivity estimation t⃗, one tries to find the set of parameters x⃗ that minimizes the generalized chi-square:

χ²(x⃗) = (x⃗ − x⃗_prior)ᵀ (M_x^prior)⁻¹ (x⃗ − x⃗_prior) + (y⃗ − t⃗)ᵀ (M_y)⁻¹ (y⃗ − t⃗)   (1)

For each iteration, the nuclear parameter values, their correlations, the sensitivities, and the estimated reactivity are updated.
Reactivity estimation ⃗ t is obtained here by a stochastic transport simulation.
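The linearized Bayesian update behind each Newton iteration can be sketched as follows. This is an illustrative single Gauss-Newton step, not CONRAD's actual implementation; variable names and the use of plain matrix inversion are simplifications:

```python
import numpy as np

def gls_update(x_prior, M_x, y, M_y, t, S):
    """One Gauss-Newton step of the generalized least-squares problem

        chi2(x) = (x - x_prior)^T M_x^{-1} (x - x_prior)
                + (y - t(x))^T M_y^{-1} (y - t(x)),

    linearized around x_prior with sensitivities S = dt/dx.
    Returns the updated parameters and their updated covariance.
    """
    Mx_inv = np.linalg.inv(M_x)
    My_inv = np.linalg.inv(M_y)
    # Posterior covariance combines prior and experimental information.
    M_post = np.linalg.inv(Mx_inv + S.T @ My_inv @ S)
    # Posterior parameters move toward the measured reactivities.
    x_post = x_prior + M_post @ S.T @ My_inv @ (y - t)
    return x_post, M_post
```

In the iterative scheme, `t` and `S` would be recomputed by the transport simulation at each step, with the output of one step fed back as the next linearization point.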
At the same time, the CONRAD/TRIPOLI-4® weak coupling computes the reactivity sensitivities to nuclear parameters. Indeed, the sensitivity S of the reactivity ρ to a nuclear parameter Γ may be expressed, by the chain rule, as

S_Γ = ∂ρ/∂Γ = Σ_r ( ∂ρ/∂σ_r · ∂σ_r/∂Γ + ∂ρ/∂AD_r · ∂AD_r/∂Γ + ∂ρ/∂ED_r · ∂ED_r/∂Γ )

with σ, AD and ED the cross sections, angular and energy distributions, and r the neutron-induced nuclear reactions. For the prompt neutron multiplicity ν_p and prompt neutron spectrum χ_p, driven by the nuclear parameters λ and T respectively:

S_λ = ∂ρ/∂ν_p · ∂ν_p/∂λ,   S_T = ∂ρ/∂χ_p · ∂χ_p/∂T

One should note that only σ, AD and ED share the same models, meaning that Γ, λ and T are completely separated, non-interfering nuclear parameters. The CONRAD code computes the nuclear data sensitivities to nuclear parameters, and TRIPOLI-4® the reactivity sensitivities to nuclear data, using perturbation theory with the Iterated Fission Probability method [8]. These sensitivity calculations are a main component of the assimilation process, and this work seeks to study the impact of a complete assimilation, where the sensitivities are calculated by taking all the nuclear data into account.
Application case on a plutonium-239 evaluation
This work focuses on the ICSBEP benchmark Jezebel [9], a bare sphere of a plutonium and gallium alloy, producing a fast neutron spectrum. A plutonium-239 toy evaluation is generated with CONRAD, from a set of nuclear parameters and their associated models. As CONRAD cannot produce delayed neutron data, these have been extracted from JEFF-3.1.1 [10] and will be excluded during the adjustment. Regarding the prompt neutron multiplicity, a simple linear model has been implemented, using two parameters: the intercept (thermal value) and the slope of the linear curve. Such a model underestimates the plutonium-239 prompt neutron multiplicity compared to the JEFF-3.1.1 evaluation, as shown in Figure 1. Finally, Optical Model Potential (OMP) parameters, from the Morillon-Romain [11] parameterization, have been fitted on Poenitz differential experiments, in order to obtain some initial correlations.
We focused on the nine most effective parameters, estimated here with a variation of the standard regression coefficient (SRC), defined by

SRC_x = (∂ρ/∂x) · δx

Figure 3: Studied parameters' initial correlation matrix.
With x the nuclear parameter of interest and δx its standard deviation. Based on the results presented in Figure 2, it appears that the Jezebel assimilation will mostly impact the Optical Model, the prompt fission spectrum parameters, and the thermal value of the prompt neutron multiplicity. We added the slope of the multiplicity to the set of studied parameters, to allow more flexibility in the simplest model of our evaluation. All the selected parameters, their values, and their associated nuclear data are recalled in Table 1. The initial correlation of the parameters is depicted in Figure 3: at this step, there is no link between models; for example, fission spectrum parameters are not correlated to Optical Model or fission parameters. Regarding the multiplicity, its parameters are, for now, considered uncorrelated.
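The parameter ranking described above can be sketched in a few lines. Here the SRC-like importance measure is assumed to be the reactivity sensitivity weighted by the parameter's standard deviation, which matches the quantities named in the text but may not be the exact variant used by the authors:

```python
import numpy as np

def rank_by_src(names, sensitivities, std_devs):
    """Rank parameters by |(d rho / d x) * delta_x|, largest first."""
    src = np.abs(np.asarray(sensitivities, dtype=float)
                 * np.asarray(std_devs, dtype=float))
    order = np.argsort(src)[::-1]          # descending importance
    return [(names[i], float(src[i])) for i in order]
```

Keeping the first nine entries of the ranked list would then give the "nine most effective parameters" retained for the assimilation.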
The first assimilation of Jezebel's reactivity was conducted using full sensitivities (through the cross sections, angular distributions (AD), and energy distributions (ED)), and all nuclear data are updated at each iteration. Results are presented in Table 2, and the final parameter correlations in Figure 4.
The expected multiplication factor (k_eff) for Jezebel is 1.00000 ± 110 pcm [9]. Focusing on the first case, the assimilation manages to fit the parameters so that the simulated Jezebel k_eff reaches the expected value. Most of the changes concern the prompt fission spectrum, as expected from Jezebel's characteristics and because of the large parameter uncertainties. Correlations are altered; for instance, an anti-correlation between our multiplicity parameters appears. Regarding links between models, they appear mostly for the parameters highly modified by the assimilation.
This assimilation leads to a probable over-estimation of the prompt neutron multiplicity, particularly in the thermal energy range (in comparison to the JEFF-3.1.1 value, cf. Figure 1). The assimilation manages to find a balance that fits our experiment; however, some constraints, such as our linear multiplicity model, may lead to a non-realistic evaluation. Such a problem may be corrected by using a proper multiplicity model fitted on differential experiments, a stochastic solving of the Bayesian problem, and the assimilation of multiple experiments in the same process.
Angular and energetic distribution impact
In the second case, feedback from the prompt multiplicity and fission spectrum is used and updated at each iteration. However, for the fission and Optical Model parameters (used for σ, AD, ED), only the cross-section feedback is used and updated. This means that angular and energy distributions remain unchanged during the process. Once again, the assimilation process manages to find a balance, with a k_eff within the expected experimental range. The correlation matrix, depicted at the right of Figure 4, is very similar to the one obtained with the full assimilation. Interestingly, by removing some freedom of action on the Optical Model and fission parameters, we mainly influenced the parameters of the other models: as in our evaluation flexibility is primarily offered by the prompt multiplicity and fission spectrum parameters, the assimilation compensates for the lack of freedom on these parameters. Such an example illustrates how the current method, based only on cross-section adjustment, may wrongly fit some parameters by removing degrees of freedom from the assimilation process.
Conclusion
This weak coupling between a nuclear data code and a stochastic transport code made it possible to assimilate reactivity on nuclear parameters, with feedback through all nuclear data and using continuous energy. A simple test case, with the Jezebel benchmark, shows the impact of neglecting angular and energy distributions. Further work is underway to achieve a strong coupling, in order to avoid formatting nuclear data and to facilitate the assimilation process. Such a coupling will also ease the development of more complex assimilation methods, such as those based on the Generalized Perturbation Theory, in order to achieve the assimilation of reaction rates, for example. At the same time, CONRAD's evaluation capabilities will be extended, in order to work on more complete evaluation data.
In June 2009, at the Transcontinental Trusts conference in Geneva, His Honour Justice David Hayton said that the New Zealand Court of Appeal had got aspects of the law of trusts wrong in its decision in Official Assignee v Wilson [2008] 3 NZLR 45. The Court held that the test for proving a trust was a sham was whether the trustees and settlor had a common intention that the trust was not to be a genuine entity. Hayton prefers a more objective approach, looking to the objective effect of a shammer's conduct rather than searching for secret dishonest intentions which will hardly ever be revealed. Hayton's approach, by declaring such trusts to be shams, would ensure that trust property is made available to creditors so that they are paid what they are due. Family trusts have become big business in New Zealand and are commonly used to protect a businessman's assets from creditors. While there is nothing illegal in setting up a family trust, it is my contention that the law pertaining to family trusts in New Zealand has become so far removed from the accepted principles of equity as to demand investigation. This paper explores the origins of equity and compares the modern family trust against the equitable principles which have been developed over hundreds of years, reaching back even to Plato's Greece. The paper links the equity of Ancient Greece to Cicero in Rome, through the early Roman Church, to the Chancellors serving English Kings. The law of England leads to the law of modern New Zealand. The paper goes on to examine the way the family trust has been used to defeat the legitimate claims of creditors. I aim to show that the approach taken by the New Zealand Court of Appeal is too narrow and favours the 'shamming' settlor at the expense of creditors who have given good consideration, in comparison with volunteer beneficiaries.
Moreover the family trust has become a mechanism that bears little relationship to recognised equitable principles and should lose the protection accorded to properly constituted trusts.
In Chudleigh's case in 1594 (Note 1), Sir Edward Coke said that there are "two inventors of uses [trusts], fear and fraud; fear in times of troubles and civil wars to save their inheritances from being forfeited; and fraud to defeat due debts, lawful actions, wards, escheats, mortmains, etc.". Although the fear of losing inheritances through civil war may be past, many trusts are regularly created through a desire to avoid the claims of creditors and of what have been described as 'unwelcome beneficiaries' (Note 2). The setting up of a family trust has been advertised as a smart move to ensure that a business is structured to operate in a financially beneficial way, protecting assets while paying the lowest possible tax (Note 3). This particular firm refers to trusts as being 'a unique business form' and calls the settlors 'directors' of the trust, who enjoy shared control of the assets with the other directors of the trustee company. Very often the companies promoting their services as trust advisers attract clients by offering to help them avoid tax, to shield their wealth from creditors, to safeguard against future inheritance taxes, and to show them how to protect themselves against possible means testing of superannuation, medical, or residential care benefits. The family trust is being sold as a clever way to make someone appear poor without him or her suffering the rigours of poverty.
While the law continues to assume that trusts are created for altruistic reasons, companies are selling their family trust services on the basis of the fringe benefits that a trust can confer. In fact, if asked, a client may reveal that the main reason for the creation of the trust was to defeat creditors. In the recent case of Taylor v Official Assignee, heard in the High Court in Auckland in August 2009, the Court found that the Taylors had established their family trust to shield family assets from legitimate claims for unpaid taxes arising out of business activities and to place their primary assets out of the reach of creditors. This is not to say that there are no proper reasons for setting up a family trust. Commonly these are used as legitimate vehicles for the protection of children's assets in case of relationship breakdown, for example to prevent a second wife from taking 50% of the family assets without regard for the interests of the children of the first marriage.
A common selling point offered by purveyors of family trusts is to say that the settlors will retain control over the trust assets. Settlors pass assets and funds to the trust with themselves as trustees and, often without reference to any other trustee, continue to use those assets and funds as if they were their own. For example, in Charman v Charman (Note 4) the English Court of Appeal developed a test to decide whether the assets in the trust had been used as a resource by the settlor. In Charman the husband had set up a discretionary trust of which he was settlor and one of a class of beneficiaries. He instructed the trustee company that they must consult him before taking any action with regard to the trust. Mr Charman had sole power to remove and appoint trustees. If he required any of the assets held by the trust, they were to be transferred to him without question. He also told the trustees that they should regard him as the principal beneficiary during his lifetime and that all income should be held for him alone. At the time of the marriage break-up the accumulated income of the trust was 4 million pounds and the trust's assets were valued at 68 million pounds. The husband appealed the trial judge's decision that the assets and income of the trust were in fact the property of the husband and therefore available to be included in the division of property between the spouses. The Court of Appeal dismissed his appeal because the husband had exerted so much control over the assets as to make them his property. Two earlier cases, In the Marriage of Ashton (1986) (Note 5) and In the Marriage of Goodman (1990), were heard by the Full Court of the Family Court of Australia, where it was found that the husband in each case was in full control of the assets of the trust to the extent that the trust property was in fact the husband's own property.
Control of the trust can have far-reaching consequences. In 2006 the Federal Court of Australia heard the case of Australian Securities & Investments Commission v Carey (No 6) (Note 6), in which it was held that the trustee of a discretionary trust controlled the trust to the extent that the trustee was the alter ego of a beneficiary. In this case the property held in trust was seen to be the property of the beneficiary to the extent that receivers could take control of that property. This case turned on the effective control of the trust property, which the court said could then be said to be included in the term "individual property" and therefore subject to the jurisdiction of the appointed receivers. These tests, developed in the Australian case of Carey and the UK case of Charman to identify the settlor's 'control' and whether the trust assets were used as 'resources', do not hinge on intention by settlor or trustee. The courts in question were able to look through the trust to the reality of the consequences beyond the trust mechanism, without the need to interpret intention as a central issue.
In the New Zealand case of Taylor the issue of intention was raised. This related to s 60 of the Property Law Act 1952, where it was necessary to show "intent to defraud". However, the Court cited the Supreme Court judgment in Regal Castings Ltd v Lightbody (Note 7), where the Court considered that the expression "intent to defraud" was "not happily chosen" and that the phrase had come to be treated as shorthand for an intent to hinder, defeat, or delay a creditor in the exercise of any right over the property of the debtor. The wording of Section 345(1)(a) of the Property Law Act 2007 reflects Blanchard J's interpretation of the way s 60 had come to be used. The new PLA demands that it be shown the debtor acted with intent to prejudice a creditor. Furthermore, the proof of intent in Taylor was arrived at by looking at the consequences of Mrs Taylor's actions. Her actions in transferring property to the Trust put the only valuable asset out of the reach of creditors, and her continuing to indulge in a lifestyle beyond her means put the Commissioner of Taxation at serious risk of not recovering the amounts owing. It was held that she must have intended this consequence even though it was not actually her wish to cause the Commissioner loss at the time the trust was established. The Taylors may have intended to set up a trust at the onset of their dealings, and the trust may have been validly constituted, but the subsequent actions of the parties, the deliberate ignorance of Mr Taylor, and his wife's machinations were enough to allow the Court to remove the assets from the trust under the powers granted by s 60 of the Property Law Act 1952.
In New Zealand cases, even where there are no statutory provisions to be met, the issue of the parties' intentions at the time when a trust was created has become paramount. In Official Assignee v Wilson (Note 8) it was held that "there must be a common intention before a transaction is found to be a sham", meaning that both the settlor and the trustee must share a common intention to create an illusion of a trust before the court will find that the trust is a sham. Despite arguments like that of Jessica Palmer, that the intention of a trustee has never been required in the creation of a trust, sham or otherwise, the Court of Appeal followed the line of cases established by Snook v London & West Riding Investments (Note 9) and sought subjective intent on the part of the settlor and the trustee as parties to a transaction. This was said to be the decisive feature determining whether an act or document was intended to be operative according to its tenor or whether it was meant to cloak another, different, transaction.
It appears to me that this is the point at which the focus of the courts in these cases has been misdirected, and because of this successful misdirection the misuse of equitable principles to hide assets has become obscured, to the point where the courts seem to be interested only in the state of mind of the settlor and the trustee at the time of the formation of the trust. Once it has been accepted that the settlor and the trustee must have been ad idem in the purpose of creating some edifice that looked like a trust but was in fact something else entirely, it becomes almost impossible to penetrate the trust, unless the circumstances can be said to allow section 357 of the Property Law Act 2007 to be used to set aside a transfer of property where it is detrimental to a creditor. However, in most of the cases where the settlor intends to retain control of the trust assets and make use of the trust format as a device to protect those assets, the settlor retains such control over the selection of the trustee that the original trustee named at the creation of the trust is likely to have been replaced several times, in accordance with the powers given to the settlor by the trust instrument. In those circumstances it is absurd to look for the intention of the trustee. The trustee's intention will simply be to retain the trust as a client and to adhere to the terms of the trust instrument, to avoid breaching his fiduciary duty to the trust however it has been set up.
It is possible to see how settlors and the courts might have come to this point, where the certainties present at the time of the trust's formation have come to be more important than the way equitable principles are being used. The trust has come to be referred to as a 'transaction' or a 'business form'. Many trusts are advertised as being capable of operation as a trading business with profits spread among beneficiaries. Sometimes settlors are advised to link a limited liability company to the trust, to be utilised as the trustee. The trust has become associated with companies, and the language of the company is commonly used when referring to a trust, to the extent that looking through the trust mechanism is being likened to piercing the corporate veil. This is evident in A v A (Note 10), where Munby J was considering an application to allow trust assets to be made available to meet a claim in the Family Division. Munby J refers to the three divisions of the High Court in England and says that there is "but one set of principles…equally applicable in all three Divisions, determining whether or not it is appropriate to 'pierce the corporate veil'." Those divisions are Chancery, Family and the Queen's Bench. While it is clearly appropriate to talk of piercing the corporate veil when dealing with issues of breach of directors' duties, I would contend that it is simply not appropriate to refer to corporate veils in relation to trusts; but if we accept the trust as being some alternative business form, with the settlor's transfer of assets seen as a transaction, then we are increasingly denying the importance of equitable principles in the operation of a trust. The family trust ceases to be a creature of equity, subject to equity's unique rules, and becomes another way for a well informed person to hide assets from legitimate claimants.
In recent years fiduciary duties have been found to exist outside the trust and the more traditionally accepted roles of solicitor, trustee, agent etc. More and more we are importing fiduciary obligations into business transactions. The joint venture is a very good example of this, where people bound by contract can find themselves also bound by equity. Sir Anthony Mason said in 1994: 'Equity, by its intervention in commerce, has subjected the participants in commercial transactions, where appropriate, to the higher standards of conduct for which it is noted and has exposed the participants to the advantages and detriments of relief in rem'. It is true that there has been an attempt to bring to commerce a higher standard of trust and reliance in some circumstances. However this trend is undermined if we allow the blurring of the distinction between common law and equity to dilute or in any way erode or diminish the principles upon which equity was founded. It is those tenets that made the trust possible, and it is not logical to now suspend those principles just because the elements of the three certainties that every law student must learn happen to be present. In some family trusts the form of a trust may be there but the essence of equity is not. In Official Assignee v Wilson (Note 11) at paragraphs 42 and 43 Robertson J cites cases where the three certainties of intention, subject matter and object are discussed, and he uses the case of Re Kayford Ltd (Note 12) as authority to say that a court cannot hold that a trust exists unless in the surrounding circumstances an intention to create a trust has been manifested. This is undoubtedly true. But it is not true that, simply because an intention to create a trust exists, that entity then becomes impervious to the rigours of equitable principles and may escape unscathed even when the consequences of the trust are that injustice is permitted to prevail.
It is sometimes said that equity has evolved far beyond its historical roots and that, while equity may have been created to thwart injustice, the liberal approach to equity such as Lord Denning espoused is outdated. Graham Virgo describes today's equity as a 'technical system' administered by the Court of Chancery. Lord Denning's approach has, of course, been criticised heavily. For example, in Duport Steels Ltd v Sirs (Note 13) Lord Scarman referred to the purposive, liberal approach to judging as that of an 'unguided, even if experienced, sage sitting under the spreading oak tree'. Dr J Morris (Note 14) called it 'Palm Tree Justice in the Court of Appeal'. However other notions of equity still prevail, such as overriding or correcting the effect of rules and embodying good conscience and natural justice.
Fifteen years ago Sir Anthony Mason expressed the purpose of equity in modern society: …the ecclesiastical natural law foundations of equity, its concern with standards of conscience, fairness, equality and its protection of relationships of trust and confidence, as well as its discretionary approach to the grant of relief, stand in marked contrast to the more rigid formulae applied by the common law and equip it better to meet the needs of the type of liberal democratic society which has evolved in the twentieth century. (Note 15) Despite this excellent description of modern equity, it appears from some of the judicial pronouncements we have seen since that these standards of "conscience, fairness, equality" have been forgotten when dealing with family trusts. Instead of striving to maintain equitable standards, courts have only looked at the form of the trust instrument and have become so fixated on questions of intention that the wider issues of whether equity has been served are lost. The other thing that strikes me in Sir Anthony Mason's words is his reference to equity's "discretionary approach to the grant of relief". This seems to have been forgotten in cases such as those that follow the Snook line of reasoning, that if the trust is to be a sham then common intention between the trustee and the settlor must be there at the beginning. For example, in Shalson v Russo [2005] Ch 281 it was held that "unless that intention is from the outset shared by the trustee (or later becomes so shared) I fail to see how the settlement can be regarded as a sham." This reasoning led to the situation where the Official Assignee has to establish a common intention to create a sham trust by the settlor and the trustee at the outset (Official Assignee v Wilson (Note 16)). This requirement is tantamount to being insurmountable, especially in cases where the settlor retains power to replace trustees who do not comply with his wishes. If the opinion expressed in (Note 17) is correct then a genuine
trust (one where there is no common intention between the settlor and trustee) can never become a sham, even if a newly appointed trustee is the mere puppet of the settlor. Even if a subsequent trustee is removed for breach of trust, the initial trust document remains valid, no matter what effect it may have.
I do not believe it is satisfactory to say that family trusts in New Zealand are often conducted in a manner which might be thought to be unconventional by reference to traditional concepts of trust administration. If family trusts are to be given the protections and flexibility offered by equity, such as the imposition of fiduciary duties and the ability to split legal and equitable ownership, then the parties to the trust must also be bound by the higher standards imposed by equity. Sir Anthony Mason has identified the origins of those standards when he talks about the ecclesiastical beginnings of equity. But the obligations demanded by equity go back much further than the influence of the English chancellor on his King's conscience. The ecclesiastical monopoly over equity held by the king's chancellor dates from about 1274, when Edward I appointed Robert Burnell to the post of chancellor. Burnell was also Bishop of Bath and Wells from 1275. But the use of equitable principles to overcome the rigidity and shortcomings of the common law dates back to Plato in the 4th century BC.
Equity is not just a mechanism or 'business form' to be used for the advantage of businessmen, no matter how much this might seem to be the case from the attitude of many providers in the family trust industry. Instead equity is the product of thousands of years of philosophical consideration of the law and its relationship to justice. In Plato's Statesman (or Politicus) the concept of equity is clearly articulated as being the conflict between the need to administer uniform law in general terms and the demands of justice in particular cases. Although Plato prefers the certainty of posited law "based on long experience and the wisdom of counsellors", he does leave some room for equity by recommending in The Laws, when the Athenian stranger replies to Cleinias, that every law should be issued with a preamble in order to expose the spirit of the written law. This idea was new in Plato's time and it is important because it offered for the first time the opportunity to distinguish between the spirit and the letter of the law. Nature appealed to the scholars of the 4th century BC as the counterpart of law. At a later stage of philosophical development, most particularly with the advent of Stoicism, the law of nature began to be considered as a practical source of law. The explanation for this would appear to lie in the powerful development of equity in the jurisdiction of the democratic courts of this period.
Aristotle's contribution to the development of equity is generally accepted as being much greater than that of Plato. In the Nicomachean Ethics (Note 18) Aristotle identifies an "annex" to his main scheme of justice which is equity or fairness; the correction of the law in cases where the law is found to be deficient by reason of its universality. This is similar to Plato's expression of the concept of equity, but Aristotle refines his articulation of equity in Rhetoric as being "justice that goes beyond the written law". Aristotle advises that when the written law is against us we should have recourse to "the general law and equity as more in accordance with justice" (Note 19). It is possible for us to see some actual examples of the use of equity in Ancient Greek courts through the translation of some of the speeches by orators of the time. In particular the speeches of Isaios show how the orators who were employed to plead cases made recourse to principles of equity in cases of contract and testamentary disposition. In many cases appeal to equitable principles was necessary because of the archaic nature of some of the laws and the extremely cumbersome legislative process. The Heliastic Courts represented the sovereign people and possessed wide discretionary powers, exercising a residuary justice which arose out of prerogative power. Later it was the monarch's prerogative power that enabled English Kings to depart from the common law and apply equity to promote justice in particular cases. This appeal by Greek orators to equity when faced with rigid or archaic statutes shows that equity was not merely an abstract conception but was an essential part of Greek justice. (Note 20) When the Greek city states were absorbed into the Macedonian Empire of Alexander the Great, the philosophers became less involved with politics and looked to the achievement of peace of mind in a changing world. The Stoics emerged as a movement that concentrated on the common humanity that united all
people. Theirs was a conception of an ideal world state, in which everyone is a citizen living in harmonious order governed by a rational, universal set of rules - a law of nature binding on everybody. As the Macedonian Empire gave way to the growing power of the Roman Empire, the Romans adopted the Stoic concept of a universal natural law and used it to accommodate the legal systems of other communities into an overarching Roman law. This development in Roman philosophy was partly due to the great influence of Cicero, a Stoic and a prolific and gifted writer.
The writings of Cicero are an important step in the development of equity. Even at a time when rulers were supported by armies rather than philosophy, Cicero shows his sympathy with Plato in his belief that the pursuit of justice is the true aim of the state, with law fulfilling the role of the instrument that is used to achieve the state's aim. Later the English kings used equity to promote justice while exercising a prerogative power arising out of the status of the Crown as the origin of all law and justice. More proof that Cicero was influenced by the teachings of Plato can be seen in De Legibus, where he says that it is the "crowd's definition of law" that identifies law with written decrees issuing commands and prohibitions. This would seem to be the attitude of judges who follow the Snook line of reasoning by taking the attitude that if the form of the trust exists in accordance with trusts law then it cannot be challenged, even if it results in a terrible injustice. For Cicero, "Justice is one; it binds all human society and is based on one Law, which is right reason applied to command and prohibition." (Note 21) We can see here the same dichotomy, previously expressed by Plato, that exists between the written law and an unwritten standard of justice. There is another aspect of Roman Law that served to promote the development of equity, which is the power enjoyed by the praetor, the Roman chief magistrate, to set aside the jus civile and to allow equitable defences and remedies. This would appear to be a clear expression of the influence of Greek philosophy. Gradually principles of equity and bona fides became prevalent in Roman jurisprudence. This is an important step, as it was through the influence of the Roman Catholic Church that equity came to be the tool of English kings and chancellors.
As the influence of Christianity was felt on the Roman legal system, there was a reciprocal exchange of ideas from Roman law into the organization of the Church. The Christian Fathers lived under this jurisprudence, and it is not surprising that they adopted some ideas from the legal system of their home state and adapted them for use in the organizational structure of the new faith. Eventually the Bishop of Rome became a legislator for the whole of the Christian church, just as the Emperors of the Roman Empire had legislated for all the citizens in their empire.
The reciprocal relationship between the Church and the Empire was in no way an immediate development; in fact it took several centuries. The early Christian church suffered persecution at the hands of the Romans until the first quarter of the fourth century, when Constantine took the first crucial steps that linked the Church with Rome. There is a story that he had a vision on the eve of a battle, after which he instructed his soldiers to put a Christian monogram on their shields. After winning the battle in 312 he continued to acknowledge his old religion, based on the cult of the sun, but at the same time he began to show favour to Christians by restoring their property that had been denied them while Christianity had been an illegal religion. He went on to make substantial gifts to Churches, in particular the Church of Rome, and by 320 the sun no longer appeared on new coins. Constantine came to see himself as responsible for the well being of the Church and in 324, having defeated an imperial rival who had been persecuting Christians, declared himself to be a Christian. In 325 he called the first ecumenical council, attended by some 300 bishops, who laid down a Creed. The Christian Church from that time had the protection of Imperial Rome, and just as the Emperor ruled a united Roman Empire so too did the Pope of the Church of Rome aspire to ruling an all-embracing church. The laws promulgated by the Church were influenced by the Roman law and political organization, and Roman jurists adapted the ideas of the Christian Fathers, identifying the 'jus naturale' of Roman law with the law of God. The Christian church attained the charisma of Rome and Imperial power.
During the early middle ages in Europe the two main influences on philosophy were Christian doctrine and knowledge of Greek philosophy, which was made available through the Latin texts of scholars like Cicero and Seneca. The influence of Cicero on the Roman legal system culminated in the Code written by the Emperor Justinian in the sixth century and its accompanying Digest. This great work was finished just as the barbarians destroyed the Roman Empire in the west. It is to these works that scholars would turn in the great revival of legal philosophy in the twelfth century. In the thirteenth century many of the works of Aristotle were rediscovered and translated, once more changing the face of theological tradition.
The story of how the Roman law theory of equity came to be incorporated into English law, and from there to New Zealand, is as follows. The Roman army abandoned England in the 5th century, leaving the country vulnerable to the invasion of barbarous Picts and Scots from the north and of the Jutes in the south east. Across Europe, in the wake of the Roman troops who were slowly converging on Rome to defend their homeland, the rule of Rome was displaced by the laws of the hordes that were sweeping across the empire, ripping it to shreds. But even after the fall of the Roman Empire the influence of Roman jurisprudence did not altogether die out. The invading barbarian rulers who took over many of the Roman dominions compiled books of laws and preserved or imitated as much as they could of Roman culture and institutions. The influence of the Christian church was not entirely swept away during these dark years. Christianity had been established as the official religion of the Roman Empire a century before the fall of the Empire in the west, and the Christian churches continued to look to the Church of Rome for guidance. Even though the Empire of Rome had fallen, the influence of the Church of Rome continued to expand. In 596 Pope Gregory sent Augustine and other monks to England, where Ethelbert, who was at that time King of Kent, was converted. His wife was already a Christian, her family having been converted by earlier Christian pilgrims. The conversion of much of England followed, very often through the auspices of Christian queens who persuaded their husbands to look kindly on the religion that Saint Augustine of Canterbury and his monks were practicing. Paulinus converted Northumbria and established his see as Bishop of York. The seventh century saw the area of Northumbria flourish as an artistic, cultural and educational centre. Communication and exchanges of ideas between the Church of Rome and the scholars of Northumbria continued.
So it was in this way that the influence of the Greek and Roman philosophers, who saw the role of the state as being a vehicle for the promotion of equity and justice, came to the English kings through Christian teaching and influenced the future development of English law. From the end of the sixth century onwards Codes of Anglo-Saxon law were compiled and published. They were written in Anglo-Saxon, not Latin, and were expressions of the laws of England, but those laws show the direct influence of the Church. The Church lent a new sanctity to the King as defender of the faith. The Church laid stress on the motives and intention behind people's actions, which modified older notions of liability for wrongdoing. In modern times we say that equity operates on the conscience of the person who has wrongfully taken or kept what was not his beneficially. The Church that had inherited the ideas of the older civilization acted as a bridge to the 'modern' world, and so it was inevitable that what the Church passed on would be coloured by the theories of law and equity that had been assimilated from the ancient Roman and Greek world.
The equity that was available for dispensation by the English Kings after William I was in part a product of the influences that shaped legal philosophy throughout Europe for hundreds of years before the Normans came to Britain. The twelfth century brought a renaissance of civilization in Europe and the rediscovery of the Emperor Justinian's Corpus Juris, written in the sixth century but lost after the fall of the Roman Empire. Some of the countries of southern Europe were becoming more prosperous. People had more leisure time and were becoming more cultured. There was a great desire for knowledge of all sorts, and a need for a form of law that would befit the emerging enlightened world.
In eleventh century England William I began the process of securing his power against the local lords and establishing his own royal power, incorporating centralized power and a common law. However the laws that he enacted were of less importance than the English laws that were already in existence at the time of his accession. The laws of Edward the Confessor (1042-1066) were confirmed and the Christian religion was preserved. During the reigns of William I, William II and Henry I some laws were enacted, but against a background of the customary law, which now had to adapt to the new situation created by the Conquest.
We are very fortunate in the fact that some of the early records of the Curia Regis were preserved. These are first hand materials showing the development of the Common Law beginning in 1194. The Plea Rolls give us an insight into the working of the Curia Regis in its early days, and the cases that were decided there are the earliest authoritative statements of the Common Law. The influence of Henry II on the law during his 35 year reign (1154-89) can be seen through the work of his clerks; usually men in Holy Orders, who compiled, illustrated and coordinated the information available in the court rolls. From the early years of his reign pleas were heard by itinerant judges who visited the counties, and among them was Thomas Becket when he was Henry's Chancellor. In 1178 the King chose five men, two clerics and three laymen, to form a permanent King's Court. This was to be a body of men who dispensed justice habitually in a central court. Most important from the point of view of the story of equity is the reserve of justice which remained in the King himself. One of those who were chosen to do this work was the lawyer Ranulf de Glanvil, who acted as counsellor and adviser to King Henry II. Glanvil was influential in the great strides made in the reform of English law and the founding of the common law by the King, and it is said that he invented the action of Replevin. His fame comes to us through a book ascribed to his authorship but which is probably the work, at least in part, of his nephew, Hubert Walter, who later became Archbishop of Canterbury and chancellor. Glanvil's treatise is a work of fourteen volumes and gives details of the differences between criminal and civil pleas, and of the pleas that were heard in the King's court and those which were heard in other courts. Glanvil gives detailed accounts of the relationship between the king's courts and the ecclesiastical courts.
The earlier King's court was not fettered by rules of precedent, and the judges showed they were willing to act on principles of equity. For example, the view taken of cases involving mortgage reflected the same attitude that was ultimately adopted by the court of Chancery. This is in stark contrast to the attitude to mortgages that came to be taken by the later common law courts. As an educated man Glanvil was well versed in Roman law, and there are some instances in the reforms of Henry where it is clear that the imprint of Roman law, with all its attendant influences, is present.
The rules of the King's court were, therefore, evolving at the hands of the ablest men of the day, men who had been educated in the Christian tradition, who used their knowledge of Roman law to rationalize the old customary law, adapting principles of Roman law to the needs of England. The remedies that could be dispensed by the King's court and the writs that could start an action were not strictly limited. This court would not hesitate to overrule old customary law in the name of equity. It had a power and efficiency that made it superior to all its rivals. The Common law continued to be developed for some time by men who had been educated by the Catholic Church and whose ideas of law were influenced by Roman law.
However, gradually, and certainly by the reign of Edward I (1272-1307), lawyers ceased to be predominantly those from an ecclesiastical background, and a distinct profession began to emerge. These new lawyers were not versed in the equitable elements of Roman law and therefore were not able to use Roman law to supplement the deficiencies of English law. For a while the common law remained flexible enough to continue to dispense equity in the royal courts. While the number of writs available was not fixed, the law could develop to meet new situations. The King's judges were not tied to rules of substantive law or procedure, so they could continue to do equity.
Eventually, during the fourteenth century, broad principles of law gave way to technical, less rational, adherence to rules. Before that time, however, we can see a new development which would enable the further development of equity. This comes about while Robert Burnell is chancellor to Edward I. In 1265 he emerged as one of Edward's clerks and, after travelling in France with Edward, he became Edward's friend. In 1274 Edward made him chancellor and in 1275 he became Bishop of Bath and Wells. During his time as chancellor his position was akin to that of Prime Minister, trusted by the King and powerful in government. He was head of a commission of enquiry into serious corruption in the judiciary, after which two out of every three judges of the Court of King's Bench and four out of every five of the judges of the Court of Common Pleas were dismissed.
Burnell was very active in Chancery and has been described by historians as the first of the great chancellors in English law. While Chancery was not yet a court, or curia, it was receiving petitions when the King was absent from England. During Burnell's time in office the Chancellor ceased to follow the King whenever he went abroad and instead remained in England to respond to these petitions. It was during this time that Parliament, comprised of the House of Lords and the House of Commons, was established, and the law of the church began to grow more and more distinct from the common law. Ecclesiastical courts were in competition with royal courts, and the legal profession had split into the two branches of attorneys and pleaders, who became known as serjeants. In Parliament the judicial discretion enjoyed by the King was vested in the King's Council, which was assisted by judges in the enactment of laws. It was possible for cases to be brought before the King's council, and frequently the King's response would be to tell the litigant to go to the common law. As yet the king's courts were still able to dispense equity from within the common law. However, as the common law courts became more and more bound by strict rules of procedure, the role of the King and his Chancellor changed to allow them to decide cases in equity for litigants who could have no redress in law.
The common law judges now advised litigants whose claims could not be satisfied in their courts to apply directly to the King and to those officials who were so close to the king that they were able to exercise his prerogative in his name. This responsibility therefore devolved to the Chancellor. The King's duty was to do justice, even though the result was to overturn the decision of the common law court. The Chancellor, as close adviser to the King and having a state department of his own, was the obvious choice to act on the King's behalf. The equity that he dispensed was based on conscience in deciding what was morally right, and equitable rules sat very comfortably with the canon rules with which the Chancellor, as a religious leader, would be very familiar.
The story of equity from this point is one of conflict between two systems of justice followed by reconciliation, but throughout that history there is a constant theme. The court that dispensed equity did so according to its own rules, which remained rooted in the wisdom of Aristotle, brought to the courts via the law of Rome through the Christian church. Equity is once again being dispensed by common law courts, as it was in the King's Court of Henry II, and the availability of equitable remedies is dependent on the justice of the case. The discretionary nature of those remedies means that if justice will not be served by their application, the judge is not bound to hand down such a remedy. In the case of family trusts that serve no purpose but to act as a repository for assets that justice demands should be distributed to creditors, it is not within the province of equity to assist the settlor. In the Australian tax case of Raftland, in the appeal to the High Court, it was said that the courts can "send a clear signal that they will not be deceived into giving effect to unreal transactions, just because such transactions are expressed in documents that, to a greater or lesser extent, observe legal forms and give effect to apparent legal objectives."
As far as finding common intention was concerned, Heydon J said (Note 22): It may be inferred that, so far as the intention of the Settlor, Mrs Sommerville, was relevant, that intention was to be found in the minds of the Heran brothers, the principals of Mr Tobin, who was Mrs Sommerville's employer. So far as the intention of the Trustee, Raftland Pty Ltd, was relevant, the same was true in view of its directors and shareholders. In assessing that intention any evidence by Mr Tobin, the architect of the transactions, could be taken into account, particularly if it were adverse to the interests of his principals. It may also be inferred that the intention of Mr Carey as Trustee of the E & M Unit Trust was the intention of Mr and Mrs Thomas, the controllers of that Trust, and that that intention was the same as that of the Heran brothers. This is demonstrably a departure from the formalistic approach that is so often a product of the reasoning in Snook. The inflexible approach of the New Zealand courts in similar cases, involving allegations of sham, is demonstrated by some older and some recent judgments.
In Paintin and Nottingham (Note 23) Turner J made it clear that, in New Zealand, "[t]he word 'sham' has no applicability to transactions which are intended to take effect, and do take effect, between the parties thereto according to their tenor". In an earlier decision, in Bateman Television Ltd v Coleridge Finance Co Ltd, his Honour had remarked, to like effect: (Note 24) "[T]he occasions on which Courts have set aside the form of a transaction as a 'sham' are confined to cases in which, really doing one thing, the parties have resorted to a form which does not fit the facts in order to deceive some third person, often the revenue authorities, into the belief that they were doing something else." Later, in Mills v Dowdall (Note 25), Richardson J postulated, as a test for sham, whether "the [documents do] not reflect the true agreement between the parties". Later still, in Marac Life Assurance Ltd v Commissioner of Inland Revenue, Richardson J emphasised that: (Note 26) "The true nature of a transaction can only be ascertained by careful consideration of the legal arrangements actually entered into and carried out: not on an assessment of the broad substance of the transaction measured by the results intended and achieved or of the overall economic consequences." This approach has laid the ground for a narrow operation of the doctrine of sham in New Zealand. So much was reaffirmed in the recent decision of the New Zealand Court of Appeal in Accent Management Ltd v Commissioner of Inland Revenue: "[A]rtificiality and lack of commercial point (other than tax avoidance) are not indicia of sham. And the concepts of sham and tax avoidance are not correlatives. As well, while there are elements of pretence (and certainly concealment) associated with [the] transactions [here at issue], these are explicable on bases other than sham".
While this narrow approach, with its emphasis on the evidence of intention to create a sham, prevails in New Zealand, the principles of equity are being used as instruments in schemes allowing people to avoid their ordinary financial obligations. This use is contrary to the very foundation of equity, which is to supplement the common law in the pursuit of justice, a purpose for which equity has been developed since Plato lived in Ancient Greece.
Special Agents Hunting Down Women Silent Killer: The Emerging Role of the p38α Kinase
Ovarian cancer is sensitive to chemotherapy with platinum compounds; however, the therapy success rate is significantly lowered by a high incidence of recurrence and by the acquisition of drug resistance. These negative outcomes mainly depend on altered apoptotic and drug resistance pathways, determining the need for the design of new therapeutic strategies to improve patient survival. This challenge has become even more critical because it has been recognized that hindering uncontrolled cell growth is not sufficient as the only curative approach. In fact, while current therapies are mostly conceived to impair survival of highly proliferating cells, several lines of research are now focusing on cancer-specific features to specifically target malignant cells with the aim of avoiding drug resistance and reducing adverse effects. Recently, great interest has been generated by the identification of metabolic reprogramming mechanisms occurring in cancer cells, such as the increase in glycolysis levels. In this light, pharmacologic manipulation of relevant pathways involved in cancer-specific metabolism and drug resistance could prove an effective approach to treat ovarian cancer patients.
Introduction
Ovarian cancer has historically been called the "silent killer," even though around 80% of patients do actually have symptoms. Indeed, only 20% of ovarian cancers are currently diagnosed while still limited to the ovaries, the stage at which up to 90% of patients can be cured using available therapies. Its poor prognosis is related to late diagnosis, which usually occurs at advanced stages, and to the acquisition of chemoresistance [1]. To date, more than 30 oncogenes and tumor suppressor genes involved in ovarian oncogenesis have been identified, inducing modifications in proliferation, apoptosis, anoikis, motility, adhesion, and invasion [2].
Genetic Alterations in Ovarian Cancer
Although ovarian cancer risk is, at least in part, influenced by hormonal, environmental, and racial factors, a major role is played by genetic factors. Indeed, a key advance in the study of ovarian cancer etiology has been the identification of mutations in the BRCA genes. BRCA1 and BRCA2 genes act as tumor suppressor genes and, when mutated, are associated with the accumulation of chromosomal abnormalities and thus with a higher risk of developing cancer. Inheritance of mutations in BRCA genes is associated with a 27% to 44% lifetime risk of ovarian cancer. A higher incidence of carcinomas of the ovary has also been detected in families affected by the HNPCC syndrome (hereditary nonpolyposis colorectal cancer) [3], which is caused by mutations in DNA mismatch repair genes. HNPCC carriers account for approximately 1% of ovarian cancer patients, and their estimated lifetime risk of ovarian cancer is 9% to 12% [4].
Mutations in BRAF, KRAS, and erbB2 oncogenes and in the tumor suppressor PTEN have been found in a large subset of ovarian cancers [5,6]. The inactivation of PTEN and an activating mutation of KRAS are sufficient to induce ovarian endometrioid carcinoma in a mouse model [7]. Furthermore, mutations of beta-catenin have been detected both in ovarian carcinomas and in their precursor lesions [8]. Indeed, inactivation of the Wnt/beta-catenin and the PI3K/PTEN pathways has been shown to induce the development of endometrioid carcinoma in an engineered mouse model [9]. The small G-protein RAB25, which regulates motility, aggressiveness, apoptosis, and autophagy and mediates survival in response to stress, has also been found upregulated in the majority of ovarian cancers [10].
The Aurora-A kinase (Aurora-A) is associated with tumor initiation and progression and is overexpressed in various malignancies. Inhibition of Aurora-A induces cell cycle arrest and decreases proliferation of epithelial ovarian cancer stem cells, which represent the chemoresistant population and act as a source of recurrence [11]. All of these and several other amplified oncogenes are potential targets for ovarian cancer therapy.
Chromatin Remodeling and Ovarian Cancer
Molecular genetic changes in chromatin remodeling genes have been identified as a new mechanism in cancer pathogenesis. ARID1A (BAF250a), which promotes the formation of SWI/SNF chromatin remodeling complexes containing BRG1 or BRM, has emerged as a candidate tumor suppressor gene based on its frequent mutations in gynecological cancers. Somatic sequence mutations in ARID1A are found in 46%-57% of ovarian clear cell carcinomas, 40% of uterine endometrioid carcinomas, and 30% of ovarian endometrioid carcinomas [12][13][14]. Guan and colleagues recently reported that restoring wild-type ARID1A expression in ovarian cancer cells that harbor ARID1A mutations is sufficient to suppress cell proliferation and tumor growth in mice. Moreover, they showed that ARID1A/BRG1 complexes directly interact with p53 and that mutations in the ARID1A and TP53 genes were mutually exclusive in tumor specimens. The regulation of p53-related genes by ARID1A raises the possibility that ARID1A cooperates at the molecular level with p53 to inhibit tumor growth. In non-transformed cells, ARID1A and p53 act as a pair of gatekeepers that prevent tumorigenesis by transcriptional activation of tumor-inhibiting downstream genes, such as CDKN1A and SMAD3. The authors found that all tumors with mutated ARID1A contained wild-type TP53 and tumors with mutated TP53 harbored wild-type ARID1A. Mutations in either ARID1A or TP53 were sufficient to inactivate the ARID1A/BRG1/p53 complex and silence transcription of CDKN1A and SMAD3. This recent study suggests a close collaboration between genetic and epigenetic alterations in cancer pathogenesis [15].
Imprinting and Ovarian Cancer
Genomic imprinting is a molecular mechanism that plays an important role in development, growth, and cell differentiation in mammals. However, only 74 genes have been identified as imprinted among the over 30,000 that can be expressed in human cells. Several of these imprinted genes have been implicated in human oncogenesis. Indeed, while functional inactivation of non-imprinted genes usually requires two genetic alterations, loss of function of imprinted genes may occur following a single genetic or epigenetic event (including loss of heterozygosity (LOH), hypermethylation, and altered transcriptional regulation) occurring on the single functional allele. Moreover, in the case of ovarian oncogenesis, spontaneous mutations may occur during the proliferation of ovarian epithelium to repair ovulatory defects. In this light, downregulation of the imprinted growth-inhibitory genes Aplasia Ras homologue member I (ARHI) and paternally expressed 3 (PEG3) may be particularly important in the pathogenesis of ovarian cancer [16].
ARHI, also known as DIRAS3, is a maternally imprinted tumor suppressor gene encoding a 26 kDa GTPase with 55%-62% homology to Ras and Rap, which inhibits cancer cell growth, motility, and invasion. It is expressed by ovarian epithelial cells and is lost or markedly downregulated in 60%-70% of ovarian cancers [17][18][19]. Loss of ARHI expression is associated with tumor progression and poor prognosis, while its re-expression in cancer cells inhibits signaling through the Ras/MAPK pathway, induces p21WAF1/CIP1, and downregulates cyclin D1 [19]. Besides, Lu et al. [16] demonstrated that ARHI re-expression causes autophagic death of ovarian cancer cells in culture and participates directly in autophagosome formation by upregulating the ATG4 enzyme that processes the microtubule-associated protein LC3I to LC3II. Autophagy is a process of "self-eating" that involves enzymatic digestion and recycling of cellular constituents in response to stress. While it can contribute to cancer cell death in response to chemotherapeutic agents [20], its role in oncogenesis remains ambiguous as it may also permit survival of cancer cells in response to environmental stress or cytotoxic drugs [21][22][23]. Indeed, induction of ARHI in xenografts does not kill ovarian cancer cells but instead induces tumor dormancy [24], and its subsequent downregulation rapidly resumes cancer growth.
PEG3 is an imprinted gene encoding a 140 kD Kruppeltype (C2H2) zinc-finger protein that plays an important role in the p53/c-myc-mediated apoptotic pathway. It is significantly downregulated in the majority of ovarian cancers due to promoter hypermethylation and LOH, and its re-expression markedly inhibits ovarian cancer growth. Of note, a high degree of correlation has been found between ARHI and PEG3 in terms of mRNA levels and promoter methylation [25].
Current Therapies and New Therapeutic Targets
The platinum compounds cisplatin and carboplatin are the most effective chemotherapy agents currently used in ovarian cancer. The antitumor activity of cisplatin (cis-diamminedichloroplatinum (II)) was discovered by Rosenberg and colleagues in 1961 [26]. Cisplatin has been the most active drug used for the treatment of ovarian cancer for the last 4 decades, and response to cisplatin is considered a prognostic factor for patients with ovarian cancer [27]. A high percentage of women with ovarian cancer respond to frontline platinum combination chemotherapy, but in most of them the disease will become resistant to cisplatin, ultimately leading to death [27]. Thus, methods of preventing resistance to cisplatin could prove very useful against ovarian cancer.
The classical therapeutic sequence combines maximal debulking surgery followed by adjuvant platinum-and paclitaxel-based chemotherapy [28,29]. Unfortunately, 20% of patients do not respond to chemotherapy and recurrent disease occur in >50% of those who initially achieve complete remission, with a 5-year overall survival of only 30%-40% for all stages [30].
New therapeutic approaches based on targeted biologic agents have generated great interest and are currently being investigated in several clinical trials focused on treatments for recurrent ovarian cancer (Figure 1). As is the case for other cancers, angiogenesis is a key process implicated in the metastatic spread of ovarian cancer. Several growth factors, including vascular endothelial growth factor A (VEGFA), lysophosphatidic acid (LPA), interleukin 6 (IL6), interleukin 8 (IL8), and fibroblast growth factor 1 (FGF1) and 2 (FGF2) are involved in this process [31,32]. To date, agents that target the VEGF pathway have proven the most effective against the disease.
VEGFA activity has been inhibited by various mechanisms. Bevacizumab, a VEGFA-specific antibody, induced an objective response rate in 16% of patients with recurrent ovarian cancer and stabilized disease for 5.5 months in 50% of patients [33], while improved response rates have been observed in platinum-resistant disease when it was used in combination with cytotoxic chemotherapy [34]. The VEGF Trap is based on a different approach [35]: it is a fusion protein that acts as a soluble VEGF receptor and binds with high affinity to VEGF. Several small molecule inhibitors have been used in ovarian cancer to target VEGF and other pathways. Sorafenib, an oral multikinase inhibitor with activity against Raf and other receptor kinases (including the VEGF receptor (VEGFR), the platelet-derived growth factor receptor (PDGFR), and c-Kit) may have antiangiogenic effects through inhibition of VEGFR. This inhibitor has also been used with promising results in combination with bevacizumab and in combination with chemotherapy, both in recurrent disease and as initial therapy in newly diagnosed patients. Sunitinib is an oral agent that inhibits a number of receptor tyrosine kinases implicated in epithelial ovarian cancer (EOC) growth and metastasis, including VEGFR and PDGFR. It has been assessed in phase II studies for the treatment of advanced or metastatic recurrent EOC [36]. Cediranib (AZD2171) is an oral tyrosine kinase inhibitor with selective activity against VEGFR1, VEGFR2, VEGFR3, and c-Kit. Recent clinical trials showed that cediranib has anticancer activity in recurrent EOC [37]. Pazopanib is an oral angiogenesis inhibitor targeting VEGFR, PDGFR, and c-Kit, which is currently being tested in clinical trials on ovarian cancer.
The epidermal growth factor receptor (EGFR) family is commonly overexpressed in ovarian cancer and has been associated with a negative prognosis; however, limited efficacy has been observed with molecules targeted to the EGFR pathway. Gefitinib and erlotinib, which are inhibitors of EGFR, stabilized disease in 11%-44% of patients with ovarian cancer but produced objective regression in only 4%-6% of cases [38,39]. The effect of EGFR inhibitors might be reduced by activation of the RAS-MAPK signalling pathway, as happens in colorectal cancers [40]. ErbB2 (also known as HER2) expression in ovarian cancer is associated with advanced stage, higher recurrence frequency, shorter survival time, and lower sensitivity to platinum-based chemotherapy. Trastuzumab and pertuzumab are humanized antibodies targeted against HER2, which act through different mechanisms [41,42]. In phase II monotherapy clinical studies, trastuzumab has shown activity in certain ovarian cancers overexpressing HER2, while pertuzumab is currently undergoing ovarian cancer trials in combination with cytotoxic agents including gemcitabine [43] and carboplatin [44].
The estrogen receptor α (ERα) has also been targeted for the treatment of ovarian cancer. Phase II trials of aromatase inhibitors (AIs) have shown modest response but rather better disease stabilization rates, especially when patients are selected on the basis of ERα expression [45].
Activation of the PI3K pathway, which occurs in approximately 70% of ovarian cancers, is associated with resistance to cytotoxic chemotherapy. Inhibitors of PI3K and Akt prevent the growth of ovarian cancer xenografts and potentiate the cytotoxic effects of paclitaxel and cisplatin [46]. Perifosine is an alkylphospholipid compound that inhibits Akt and is currently being tested in combination with docetaxel. Development of more specific Akt inhibitors is currently underway and PI3K inhibitors are entering phase I-II trials [47].
Overexpression of IL6 has been detected in the majority of ovarian cancers. It induces a signaling pathway that ultimately stimulates proliferation, inhibits apoptosis, and promotes angiogenesis. Antibodies against IL6 and inhibitors of proteins involved in its pathway, such as JAK2 and STAT3, are currently in development for use in ovarian cancer [48].
Upregulation of the LPA receptors LPAR2 and LPAR3 has been described during the malignant transformation of ovarian surface epithelial cells. An approach targeting this pathway in ovarian cancer cells through antibodies capable of neutralizing LPA and through inhibitors of LPA receptors is currently being studied [49].
Constitutive activation of the NFκB transcription factor has been observed in the majority of ovarian cancers [50,51]. Activated NFκB induces upregulation of anti-apoptotic genes, growth regulatory cytokines (IL6 or growth regulated α (Gro1)), and angiogenic factors (IL8) [52]. A clinical trial is currently underway to study the efficacy of liposomal adenoviral E1A, which interferes with NFκB signaling, in combination with paclitaxel in patients with recurrent ovarian cancer.
The use of poly(ADP-ribose) polymerase (PARP) inhibitors in ovarian cancer is being evaluated in various preclinical and clinical studies. By interfering with PARP single-strand DNA repair activity, this strategy is aimed at increasing the cytotoxicity associated with DNA damage induced by chemotherapy, and it takes advantage of the fact that loss of function of BRCA genes, which are also involved in the repair of DNA strand breaks, is a common feature of this type of cancer [53]. Response to treatment has been observed in 46% of ovarian cancer patients with a BRCA mutation administered the oral PARP inhibitor AZD2281 (Olaparib) [54]. Moreover, several clinical trials are studying the efficacy of PARP inhibitors in combination with cytotoxic compounds including monofunctional alkylating agents, topoisomerase-I poisons, and DNA-crosslinking agents [55].
p38α and Ovarian Cancer Cell Survival
The high rate of drug resistance acquisition observed in ovarian cancer patients has led to a recent shift in the design of therapeutic strategies: pathways involved in drug resistance are being investigated in depth in order to identify new putative targets, and the potential to manipulate cancer-specific features is being evaluated with the aim of specifically targeting tumor cells in order to reduce adverse effects (Figure 1). As for this second aspect, major attention has been focused on the metabolic reprogramming occurring in cancer cells, which display increased levels of glycolysis compared with their normal counterparts. Indeed, conventional therapies, such as chemotherapy and radiation, produce heavy adverse effects because they are mainly designed to affect survival of highly proliferating cells and thus also damage healthy tissues characterized by a high cellular turnover. In recent years, the observation made in the 1920s by Nobel Prize winner Otto Warburg that tumor cells produce 50% of their adenosine triphosphate through the glycolytic flux versus the 10% observed in normal cells, the so-called Warburg effect, has been revisited and is now considered a promising target for new therapeutic approaches [56]. This phenomenon is already successfully exploited for the detection of metastasis of most epithelial tumors by positron emission tomography combined with computed tomography (PET/CT) [57,58]. The Warburg effect seems to be achieved through stable genetic or epigenetic alterations that promote the constitutive activation of the glycolytic pathway and induce a decrease in mitochondrial oxidative phosphorylation, a phenomenon known as aerobic glycolysis. The transcription factor HIF1α is one of the central players of cancer-specific aerobic glycolysis. Indeed, its stabilization leads to overexpression of target genes involved in key regulation steps of glucose transport, glycolysis, lactate production, and lactate/proton extrusion [59].
Concomitantly, deregulated HIF1α also induces suppression of mitochondrial metabolic pathways, such as oxidative phosphorylation, lipid synthesis, and β-oxidation [60]. The role of HIF1α has been well documented in cancers originating in the ovary. Tumor xenografts obtained from stable HIF1α-silenced ovarian cancer cells show increased cell death and necrosis [61], and the expression levels of HIF1α have been proposed as an independent prognostic factor in patients with epithelial ovarian tumors [62]. HIF1α activity is regulated by several pathways, including the mitogen-activated protein kinase cascade, and p38α has been demonstrated to be involved in the stabilization of HIF1α in various normal and cancer cell types [63,64]. The p38α pathway regulates proliferation, differentiation, metabolism, and cell death in a cell type-specific and signal-dependent manner [65]. Starting from our promising results obtained on colorectal cancer, showing that p38α blockade promotes autophagy, cell cycle arrest, and non-apoptotic programmed cell death both in vitro and in vivo [66][67][68][69], we recently demonstrated that ovarian cancer cells are highly sensitive to p38α inhibition [70]. Inhibition of p38α activity by the specific inhibitor SB202190 impairs the expression of genes sustaining the altered metabolism of ovarian cancer cell lines and induces a shift from HIF1α- to FoxO3A-dependent transcription (Figure 1) [70]. SB202190 promotes a time-dependent reduction of HIF1α protein levels, ultimately leading to an acute energy need that triggers the activation of AMPK and the consequent induction of the FoxO3A transcriptional program. In turn, FoxO3A promotes upregulation of crucial mediators of autophagy, cell cycle control, and cell death. Upon p38α inhibition, autophagy is first accompanied by G1 arrest, but prolonged inactivation of p38α leads to autophagic cell death [70].
Autophagy represents a promising target for the design of new therapeutic strategies relying on pharmacological manipulation in tumors displaying resistance to apoptosis. Besides, as aerobic glycolysis represents a differentiating factor between normal and cancer cells, inhibition of genes involved in cancer cell metabolic reprogramming may provide both specificity and efficacy in countering the energetic demand of transformed cells, thus hampering growth and inducing energy failure-dependent death processes. Thus, therapies based on p38α-specific inhibitors could represent a valuable tool against cancer.
The rationale for manipulating the p38 pathway in ovarian cancer is further corroborated by recent findings indicating that p38α is a major mediator of drug resistance in response to chemotherapy with 5-fluorouracil and irinotecan [71,72]. Moreover, as p38α inhibitory compounds are currently being investigated in clinical trials for inflammatory diseases and cancer [73], these findings could be leveraged for clinical translation and support the idea that p38α could be one of the special agents engaged by clinicians to hunt down the silent killer.
Supplement levels and functional oils to replace virginiamycin for young bulls during early dry season on grasslands and finishing phase in feedlot systems
Aim of study: To assess the effects of replacing virginiamycin (VM) with functional oils (FO) from castor beans and cashew nut in a beef cattle system during the early dry season (Experiment I) and during the finishing phase, in which the historical effect was evaluated, keeping the treatments and methods intact (Experiment II). Area of study: These experiments were conducted at the Forage Crops and Grasslands section of São Paulo State University, "Julio de Mesquita Filho" (Unesp–Jaboticabal, São Paulo, Brazil). Material and methods: Two supplementation levels combined with two additives (four treatments in total) were evaluated: LSVM, low supplementation (0.3% body weight [BW]) with VM; LSFO, low supplementation (0.3% BW) with FO; HSVM, high supplementation (0.6% BW) with VM; and HSFO, high supplementation (0.6% BW) with FO. In both experiments, the experimental design was completely randomized with a 2 × 2 factorial arrangement (supplementation levels × additives). Main results: In Exp. I, the VM additive provided greater average daily gain (ADG, p = 0.02), and the higher supplementation level resulted in higher ADG (p = 0.04) and the greatest crude protein apparent digestibility (p = 0.002). However, no effects of supplementation level, additive, or their interaction (p ≥ 0.11) were observed on voluntary intake and ruminal parameters. In Exp. II, the LSVM treatment resulted in lower dry matter intake (p = 0.04). Animals maintained on LSFO during the early dry season exhibited lower carcass yield (p = 0.004). Research highlights: FO can be used to replace VM in beef cattle diets during the finishing phase in the feedlot without altering animal performance.
Introduction
Many technologies have been used to improve the beef cattle chain, resulting in system intensification. Among these, animal supplementation (Romanzini et al., 2018) and the use of feedlots during the finishing phase can be highlighted. These technologies can reduce seasonal effects, mainly in the dry season. Specifically, the use of the feedlot system to finish animals during the period of low forage availability would be more efficient if the forage production potential were exploited to produce heavier animals during the growing phase. This condition is of great importance for the viability of Brazilian beef cattle production systems (Barbero et al., 2017).
Additives are important ingredients in beef cattle diets because of their beneficial action. These compounds can mitigate the negative effects of diet changes, caused mainly at the start of the finishing phase. During this phase, animals are fed diets containing high proportions of quickly fermentable carbohydrates, which can cause several changes in the rumen. These changes occur because of an increase in free glucose that elevates volatile fatty acid and lactic acid production, causing ruminal acidosis, laminitis, and tympanism (Millen et al., 2009). Lemos et al. (2016) reported that the additive virginiamycin improves animal performance, in addition to changing ruminal fermentation, resulting in fewer metabolic issues. According to Cocito (1979), virginiamycin is a non-ionophore antibiotic that acts on protein synthesis, specifically on the 50S ribosomal subunit, thereby inhibiting the growth of gram-positive bacteria. However, some marketplaces, mainly in Europe, do not purchase animal products from countries that feed animals these additives, because of new rules regarding food security. These rules, put forth by Council Regulation 2821/98 (EC, 1998), prohibit growth-promoting additives such as monensin and virginiamycin. Therefore, to meet the requirements of these countries, conventional additives need to be replaced by plant extracts and functional essential oils (Fugita et al., 2017).
This replacement is one way to overcome such marketplace bans; however, it may have undesirable consequences on animal performance, as the mechanisms of action of these oils on rumen fermentation are not yet fully defined. Therefore, new studies to understand their mechanisms of action are necessary. These plant essential oils could be used to replace traditional additives because they contain secondary plant metabolites that normally have antimicrobial properties. These secondary metabolites are cardol and cardanol, from cashew nut, and ricinoleic acid, from castor beans. According to Nagabhushana et al. (1995), cardol and cardanol are phenolic compounds that act as monovalent ionophores. Lima et al. (2000) reported an antimicrobial effect of cardol, a compound derived from anacardic acid, against gram-positive bacteria such as Streptococcus mutans and Staphylococcus aureus, and Amoratti et al. (2001) and Trevisan et al. (2006) reported both anti-inflammatory and antioxidant functions for cardanol. Ricinoleic acid, which has a function similar to that of a divalent ionophore (Vieira et al., 2001), has been studied for its capacity to reduce digestibility, the acetate:propionate ratio, and methane synthesis, in addition to changing the resistance of rumen bacteria to ionophores. Consequently, because of their composition, functional essential oils could modulate ruminal fermentation and improve nutrient utilization by animals (Calsamiglia et al., 2007). Therefore, the aim of this study was to investigate whether different inclusion levels of functional essential oils containing cardol, cardanol, and ricinoleic acid, used to replace virginiamycin as additives, would change the metabolic parameters, animal performance, and carcass yield of beef cattle.
Material and methods
Two experiments were performed: the first during the early dry season, when the tropical grasses start to decline in productivity and quality, and the second during the finishing phase, using a feedlot system. Both experiments were conducted at the Forage Crops and Grasslands section of São Paulo State University, "Julio de Mesquita Filho" (Unesp-Jaboticabal, São Paulo, Brazil). This forage unit is located at 21°15ʹ22ʹʹ S, 48°18ʹ58ʹʹ W, at an altitude of 595 m a.s.l., and the climate is subtropical humid with dry winters and wet summers. The experimental procedures were approved by the Ethics, Bioethics, and Animal Welfare Committee of Unesp, Jaboticabal (protocol 022368/12).
Area, period, grazing method, and herbage characterization
Urochloa brizantha 'Marandu' (Marandu grass) was sown in the experimental area used during this phase. Eight experimental paddocks with feed bunks (60 linear cm per animal) and water drinkers were used. The evaluation period in this phase was from May to August 2015, with 14 d for adaptation to the diets and 90 d for evaluation. During this experimental phase, the pasture management adopted was continuous stocking, and at the beginning the available herbage mass was close to 5 t of dry matter (DM).
All the herbage evaluations (herbage mass, morphological components, total DM, availability of leaf, and chemical composition) were performed according to Barbero et al. (2015) and Delevatti et al. (2019).
Animals and treatments
Forty Nellore (Bos taurus indicus) bulls with a mean initial BW (IBW) of 403 ± 24 kg were used. Each treatment used two experimental paddocks, where 10 tester animals were allocated following a randomized distribution. The treatments studied were a combination of the additives virginiamycin (VM) and functional oils (FO) with low (LS) and high levels (HS) of supplementation, creating four treatments: (1) VM with 0.3% BW level of supplementation (LSVM) corresponding to 175 mg/kg of supplement; (2) FO with 0.3% BW level of supplementation (LSFO) corresponding to 2,260 mg/kg of supplement; (3) VM with 0.6% BW level of supplementation (HSVM) corresponding to 70 mg/kg of supplement; (4) FO with 0.6% BW level of supplementation (HSFO) corresponding to 810 mg/kg of supplement, with 98, 491, and 221 mg/kg of cardol, cardanol, and ricinoleic acid, respectively (Table 1).
Tester animals were weighed every 30 d after the adaptation period (14 d), always at 5:30 am and after a feed and water fast of 14 h. The weights were used to evaluate the gain during the period and to adjust the supplement supply. The total BW of the animals in each paddock, during each experimental period, allowed the calculation of the stocking rate in animal units per hectare (AU/ha; 1 AU = 450 kg BW).
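As an illustration, the stocking-rate calculation described above can be sketched as follows. The 450 kg per animal unit is the definition given in the text; the paddock area and body weights are hypothetical values, not data from the study.

```python
# Minimal sketch of the stocking-rate calculation (AU/ha), where 1 AU = 450 kg
# of body weight (BW) as defined in the text. Paddock area and BW values in the
# example are illustrative assumptions, not data reported in the paper.

AU_KG = 450.0  # kg BW per animal unit

def stocking_rate(body_weights_kg, paddock_area_ha):
    """Total BW of the animals in a paddock, expressed as AU/ha."""
    total_bw = sum(body_weights_kg)
    return (total_bw / AU_KG) / paddock_area_ha

# Example: ten bulls of ~403 kg (the mean initial BW reported) on a 2-ha paddock
rate = stocking_rate([403.0] * 10, paddock_area_ha=2.0)
print(round(rate, 2))  # 4.48 AU/ha
```

Because the stocking rate is recomputed from the 30-d weighings, the same function can simply be re-run each period with the updated weights.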
Chemical analysis
All samples were dried at 55 ± 5 ºC to a constant weight and then ground through a 1 mm screen in a shear mill (Thomas-Wiley Laboratory Mill Model 4; H. Thomas Co., USA) for further processing. The analyses of DM followed AOAC (1990). Following the same reference, the level of CP was determined using the Dumas combustion method, with a Leco F528 N analyzer (LECO Corporation, St. Joseph, MI, USA). The concentrations of NDF and ADF were determined following Mertens (2002), using an ANKOM2000 Fiber Analyzer (ANKOM Technologies, Macedon, NY, USA). The concentration of NDF in the supplements was measured according to Van Soest et al. (1991) using alpha-amylase, as these feeds have high starch levels. Lignin was measured in the ADF residues after hydrolysis of the cellulose in 72% H2SO4 (Van Soest & Robertson, 1985). Forage residues of NDF and ADF were recovered and analyzed for CP to determine NDICP and ADICP. The concentrations of iNDF were determined as described by Casali et al. (2008). Samples were weighed and placed in the rumen of a cannulated steer for 240 h and subsequently analyzed for NDF as described earlier in this section.
Ruminal parameters
Four rumen-cannulated Nellore steers with a mean BW of 406 ± 39 kg were used. These animals were distributed in a 4 × 4 Latin Square experimental design (treatments × periods) and grazed the same paddocks as the animals used in the performance trial, where the rumen-cannulated animals were evaluated under the different treatments. The Latin Square design was performed twice: a second square was run using the same treatments, animals, and number of periods. Four periods of 14 d each were assigned, with the first 10 d used for adaptation to the diets, and intake and nutrient digestibility evaluations were performed between days 8 and 14 of each period.
Herbage intake was estimated according to Delevatti et al. (2019), using fecal excretion and iNDF as an internal marker. To estimate fecal excretion, 10 g of chromium oxide (Cr2O3) was used as an external marker. Cr2O3 was administered via the rumen cannula immediately after the supplement was placed directly into the rumen at 11:00 h. This process lasted 10 d: the first 7 d for adaptation and the last 3 d for fecal sampling. Fecal sampling was conducted twice daily at one of the following time combinations: 07:00 and 13:00 h; 09:00 and 15:00 h; or 11:00 and 17:00 h. After all samplings were completed, the samples were processed as described in the Chemical analysis section, maintaining the individual identification of each sample. The iNDF concentrations in feces and in herbage samples obtained by the grazing-simulation (hand-plucking) method from approximately 20 points/ha (Sollenberger et al., 1995) were determined following Casali et al. (2008). The daily supplement intake of each animal was set by the supplementation level: 0.3% BW for the low and 0.6% BW for the high level.
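The double-marker calculation described above can be sketched as follows. The function names and numeric values are illustrative only, and the equations are the standard external/internal marker relations (fecal output from marker dose over fecal marker concentration; herbage DMI from fecal iNDF output corrected for supplement iNDF), not code from the study:

```python
def fecal_output_kg(marker_dose_g: float, marker_conc_g_per_kg_dm: float) -> float:
    """Fecal DM excretion (kg/day) from an external marker (e.g., Cr2O3):
    daily marker dose divided by marker concentration in fecal DM."""
    return marker_dose_g / marker_conc_g_per_kg_dm

def herbage_dmi_kg(fecal_dm_kg: float, indf_feces: float,
                   indf_herbage: float, supplement_indf_kg: float = 0.0) -> float:
    """Herbage DM intake (kg/day) from iNDF as an internal marker.
    indf_* are fractions of DM (0-1)."""
    return (fecal_dm_kg * indf_feces - supplement_indf_kg) / indf_herbage

# Hypothetical numbers: 10 g/day Cr2O3, 2.5 g marker per kg fecal DM,
# 50% iNDF in feces, 25% iNDF in herbage, 0.1 kg/day iNDF from supplement.
fecal = fecal_output_kg(10.0, 2.5)             # 4.0 kg DM/day
print(herbage_dmi_kg(fecal, 0.50, 0.25, 0.1))  # -> 7.6 kg DM/day
```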
To evaluate the pH and rumen ammonia nitrogen (RAN), the sampling of rumen fluid was performed during two consecutive days (d 12 and 13 of each experimental period) to reduce possible changes occurring during grazing cycles and in animal behavior each day. These samples were obtained at different times, 0, 2, 4, 6, 8, and 12 h after supplementation. The first sampling was performed before administration of the supplement, which occurred at 11:00 am. The ruminal fluid was collected manually from three different locations in the rumen. The material was filtered through three layers of cheesecloth, and the pH was measured using an electronic sensor (DM-23-DC model, DIGIMED, Digicrom Analytic, São Paulo, Brazil). The RAN was measured by the colorimetric method according to Chaney & Marbach (1962).
Volatile fatty acids (VFAs) were evaluated following the methods described by Delevatti et al. (2019), where the ruminal fluid was defrosted in the refrigerator overnight and centrifuged at 4 °C and 20,000×g for 30 min. The resultant supernatant was analyzed for VFA concentrations (Palmquist & Conrad, 1971) by gas chromatography (GC2014; Shimadzu Corporation, Kyoto, Japan) using an HP-INNO wax capillary column (30 m × 0.32 mm; Agilent Technologies, Loveland, CO, USA) at an initial temperature of 80 °C and a final temperature of 240 °C.
During the 14th d of each experimental period, urine was collected from the cannulated steers by spot sampling, before the supplementation time and 4 h later. The urine was filtered and two aliquots were taken: the first (10 mL) was diluted in 40 mL of sulfuric acid (Valadares et al., 1999), and the second (40 mL) was kept pure for further analysis of total nitrogen. Both samples were stored at -20 °C until analysis.
Purine derivatives were determined from the first aliquot as the sum of allantoin and uric acid (Delevatti et al., 2019). Allantoin concentrations were measured according to Young & Conway (1942), and concentrations of uric acid and creatinine using diagnostic colorimetric test kits (Labtest Diagnostica SA, Lagoa Santa, MG, Brazil). All equations used to obtain the final values were from Delevatti et al. (2019). The second aliquot was analyzed by the Dumas combustion method using a Leco F528 N analyzer (LECO Co.).
Blood collection was performed at the same time as urine sampling. Blood was collected directly from the caudal vein using a vacutainer with coagulation accelerator gel. The blood from each animal, at each time point (before supplementation and 4 h after supplementation), was centrifuged at 1,500 × g for 20 min to obtain serum and plasma. The first sample was stored at -20 ºC until analysis to determine the level of serum urea nitrogen. Following the methodology described by Valadares et al. (1999), blood urea nitrogen was used to calculate nitrogen balance (NBal).
The daily urine volume was calculated according to the equation proposed by Chizzotti et al. (2006). The nitrogen balance, measured as g/day and as % of intake, was calculated using equation (1): NBal = NI − (NEfe + NEur) (1), where NI is nitrogen intake, NEfe is nitrogen excreted in feces, and NEur is nitrogen excreted in urine, the latter two measured in g/day.
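Equation (1) reduces to simple arithmetic; a minimal sketch follows, with illustrative input values rather than data from the trial:

```python
def nitrogen_balance(ni_g: float, ne_fe_g: float, ne_ur_g: float) -> tuple[float, float]:
    """Return NBal in g/day and as % of nitrogen intake (equation 1)."""
    nbal = ni_g - (ne_fe_g + ne_ur_g)
    return nbal, 100.0 * nbal / ni_g

# Hypothetical: 266 g/day N intake, 60 g/day fecal N, 45 g/day urinary N.
nbal, nbal_pct = nitrogen_balance(266.0, 60.0, 45.0)
print(nbal, round(nbal_pct, 1))  # -> 161.0 g/day, 60.5% of intake
```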
Experiment II -Finishing phase
The total experimental period was 74 d, with 14 d for adaptation to the diets and 60 d for the evaluation, conducted between October and December 2015. During this phase, all forty Nellore bulls from the previous phase were used. The animals were allocated to individual pens of 16 m² (4 × 4 m) with a roof, an individual feed bunk, and water. The diets evaluated during this phase were formulated with a roughage/concentrate ratio of 30/70 to meet the nutritional requirements of a 450 kg BW animal kept in the feedlot system for a daily weight gain close to 1.5 kg/day (Valadares Filho et al., 2010). The roughage used was corn silage, and the concentrates contained the same additives evaluated during the first phase: 29.2 mg/kg of VM + 31.7 mg/kg of monensin (VMMon), or 700 mg/kg of FO, compounded of 85, 424, and 191 mg/kg of cardol, cardanol, and ricinoleic acid, respectively (Table 2).
The diet was supplied twice per day, at 7:00 and 14:00 h, and the amount supplied was adjusted to allow refusals between 5 and 7% of the total diet. The adaptation period (first 14 d) followed a restrictive dietary adaptation of 7 d, always maintaining the roughage/concentrate ratio of 30/70. On the first day, 1% BW of DM was supplied, and this value was increased daily by 0.2% BW of DM until the animals had a stabilized DM intake. On day 7 of adaptation, the animals reached a DM intake close to 2.2% BW, having left no refusals of the total amount supplied on the last day. Between days 8 and 14 of the adaptation period, the amount of diet supplied was controlled by the refusal level, ranging between 5% and 7%.
The treatments evaluated during the finishing phase were labelled the same as in the previous phase to track the historic effect. Therefore, animals that received FO (LSFO and HSFO) during the dry season were also supplemented with FO during the finishing phase (FO concentration of 700 mg/kg DM). The animals that received VM in the dry season (LSVM and HSVM) received VMMon as the additive in the finishing phase, with VM and monensin concentrations of 29.2 and 31.7 mg/kg DM, respectively. The chemical composition of the diets was analyzed following the methodologies described previously in the chemical analysis section.
The slaughter criterion was the length of stay in the feedlot system (60 d), not counting the adaptation time of this phase. During the finishing phase, animal performance was evaluated as average daily gain (ADG), DM intake (DMI), feed conversion ratio (FCR), and carcass yield. The ADG was obtained from weighings at the beginning (after the adaptation period) and at the end of the experimental period. Weighing was always conducted at 05:30 am, after a 14-h feed and water fast. The total BW gain was divided by the total number of days in the feedlot system to obtain the ADG, measured in kg/day. The DMI was calculated as the daily difference between the DM amount supplied and the DM amount in refusals. The FCR was calculated as the total DMI divided by the BW gain during the total period. The carcass yield was calculated using the hot carcass weight after slaughter and the final BW obtained at the last weighing (Barbero et al., 2017).
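The performance measures defined above are straightforward ratios; the sketch below uses illustrative values (not results from the study) purely as a worked example of the formulas:

```python
def adg_kg_per_day(initial_bw: float, final_bw: float, days_on_feed: int) -> float:
    """Average daily gain: total BW gain divided by days in the feedlot."""
    return (final_bw - initial_bw) / days_on_feed

def feed_conversion_ratio(total_dmi_kg: float, bw_gain_kg: float) -> float:
    """FCR: total DM intake divided by total BW gain over the period."""
    return total_dmi_kg / bw_gain_kg

def carcass_yield_pct(hot_carcass_kg: float, final_bw_kg: float) -> float:
    """Carcass yield: hot carcass weight as a percentage of final BW."""
    return 100.0 * hot_carcass_kg / final_bw_kg

# Illustrative values: 487 -> 583 kg over 60 d, 660 kg total DMI, 334 kg carcass.
print(adg_kg_per_day(487, 583, 60))              # -> 1.6 kg/day
print(round(feed_conversion_ratio(660, 96), 2))  # -> 6.88
print(round(carcass_yield_pct(334, 583), 1))     # -> 57.3 %
```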
Experiment I -Early dry season
For animal performance, the experimental design was a completely randomized 2 × 2 factorial arrangement (additives [AD] × levels [L]). The paddocks were considered experimental units (n=8). Normality of errors and homoscedasticity of the data were verified by the PROC UNIVARIATE procedure, and significance was declared at p<0.05 by Tukey's test. The interactions between levels and additives (AD × L) were evaluated. The model for these analyses was as follows: Yij = μ + ai + bj + (ab)ij + εij, where Yij is the dependent variable, μ is the overall mean, ai is the additive effect, bj is the level effect, (ab)ij is the interaction effect between additive and level, and εij is the residual experimental error. The analysis of the ruminal parameters was performed using a double Latin Square with repeated measurements as the experimental design, run with the PROC MIXED procedure. Treatment and time were considered fixed effects in the Latin Square, and animal and period were considered random effects. The Latin squares were analyzed together to account for the statistical errors in the degrees of freedom. The data were tested for the best covariance structure considering the lowest Akaike Information Criterion scores, which was then included in the REPEATED statement to complete the analysis.
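To make the 2 × 2 factorial model concrete, the sketch below simulates Yij = μ + ai + bj + (ab)ij + εij for the additive × level layout. The effect sizes are arbitrary illustration values, and no claim is made about the study's actual SAS code:

```python
import itertools
import random

def simulate_factorial(mu, additive_eff, level_eff, interaction,
                       sigma=0.05, n_paddocks=2, seed=42):
    """Simulate Y_ij = mu + a_i + b_j + (ab)_ij + e_ij for each
    additive x level cell, with n_paddocks replicates per cell."""
    rng = random.Random(seed)
    rows = []
    for ad, lv in itertools.product(additive_eff, level_eff):
        for _ in range(n_paddocks):
            y = (mu + additive_eff[ad] + level_eff[lv]
                 + interaction.get((ad, lv), 0.0) + rng.gauss(0.0, sigma))
            rows.append({"additive": ad, "level": lv, "adg": y})
    return rows

# Illustrative ADG effects (kg/day): VM and the high level raise the mean.
data = simulate_factorial(
    mu=0.70,
    additive_eff={"VM": +0.04, "FO": -0.04},
    level_eff={"LS": -0.035, "HS": +0.035},
    interaction={},  # no AD x L interaction term, for illustration
)
print(len(data))  # -> 8 simulated paddock means (2 additives x 2 levels x 2 paddocks)
```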
Differences between means were determined by Tukey's test when p<0.05. The statistical model was as follows: Yij(k)m = μ + SQm + PERim + ANI(SQm)j + τ(k) + εij(k)m, where Yij(k)m is the dependent variable, μ is the overall mean, SQm is the Latin square effect, PERim is the period effect, ANI(SQm)j is the effect of animal j within Latin square m, τ(k) is the effect of treatment k, εij(k)m is the residual experimental error, k is the treatment number ranging from 1 to r, and m is the Latin square number ranging from 1 to b.
Experiment II -Finishing phase
The experimental design was completely randomized, considering four treatments (animal history from the early dry season) and ten tester animal replicates per treatment. The procedures were the same as those of Exp. I (animal performance analysis), without considering the interaction mentioned previously (AD × L). The model used in these analyses was as follows: Yij = μ + τi + εij, where Yij is the dependent variable, μ is the overall mean, τi is the treatment effect, and εij is the residual experimental error.
Experiment I -Early dry season
The DMI of supplements differed (p<0.001) between the levels evaluated (LS and HS) (Table 3): animals supplemented at the higher level (0.6% BW) consumed 2.55 kg/day, whereas those supplemented at 0.3% BW consumed 1.27 kg/day. There was no difference between levels in herbage (p=0.23) or total DMI (p=0.56). Accordingly, nutrient intake (CP [p=0.09], NDF [p=0.72], and total digestible nutrients (TDN) [p=0.18]) was not altered by the treatments. Although there was no difference (Table 3), CP and TDN intakes were 1.46 and 4.05 kg/day, respectively, for animals supplemented with 0.3% BW. Similarly, NDF intake at the lower supplementation level showed no differences (p=0.64) and was 4.73 kg/day for animals fed FO (Table 3). Apparent nutrient digestibility differed for CP (p=0.002) and NDF (p=0.01) between levels. However, DM (p=0.59) and OM (p=0.49) digestibility did not differ in any evaluation (additive, level, or interaction). For animal performance, ADG differed between both levels (p=0.04) and additives (p=0.02), with no significant interaction effect (p=0.71). Higher ADG was observed for the high supplementation level (0.6% BW; 0.77 kg/day) and for the conventional additive (VM; 0.78 kg/day). Furthermore, the higher supplementation level (0.6% BW) resulted in a higher stocking rate of 5.18 AU/ha (p=0.02, Table 3); no differences were observed between additives (p=0.85).
Table 3. Nutrient intake, apparent digestibility, and performance of young bulls fed on Marandu grass pastures with different supplement levels and additives during early dry season (Experiment I)

Concerning ruminal parameters (Table 4), differences were observed only between sampling times for pH (p<0.001). The evaluations of levels and additives did not result in differences in either pH (p=0.24 and p=0.57) or RAN (p=0.36 and p=0.69). The means obtained for pH and RAN across treatments were 6.41 and 32.04 mg/dL, respectively. There was no difference in VFA levels. However, a tendency in the acetic acid:propionic acid ratio could be reported in response to the level effect (p=0.06): animals supplemented at the higher level (0.6% BW) had a greater acetic acid:propionic acid ratio of 4.32, against 4.13 for animals supplemented with 0.3% BW (Table 4). There was no difference among all evaluations for either EMICS (p=0.66) or NBal (p=0.11) (Table 5). The mean EMICS was 16.36 g microbial nitrogen/kg TDN, or 102.24 g microbial CP/kg TDN. The mean NBal was around 161.60 g/day, or 60.75% of intake (Table 5). No interaction effect was observed between supplementation level and additive for any variable analyzed.
Experiment II -Finishing phase
Differences were not observed in the IBW (p=0.19) or final BW (FBW; p=0.64) of animals finished in the feedlot system (Table 6). The animals started this phase with a mean weight of 487 kg and finished with 583 kg. The DMI differed between treatments in both units, kg/day (p=0.04) and %BW (p=0.002). When DMI was evaluated in kg/day, animals from the FO treatments (LSFO and HSFO) had greater values than animals from the LSVM treatment; however, the DMI of animals from the HSVM treatment did not differ from that of the first group. The difference between treatments was emphasized when DMI was expressed in %BW: animals fed FO (LSFO and HSFO) had a DMI that was 12.6% higher than that of animals fed VM (LSVM and HSVM).
The ADG and FCR were not different (p=0.20 and p=0.17, respectively) between the treatments evaluated. The means obtained for these variables were 1.601 kg/ day and 7.28 for ADG and FCR, respectively (Table 6).
No difference was observed (p=0.22) in the carcass weight of animals finished in the feedlot system, with a mean of 334 kg. However, the carcasses from animals fed LSFO during the early dry season had a lower yield (55.8%) compared with carcasses from the other treatments (average of LSVM, HSVM, and HSFO: 57.7%, Table 6).
Experiment I -Early dry season
In grazing beef cattle systems, herbage DMI can change because of factors such as grazing intensity (Barbero et al., 2015) and energy supplementation (Moore et al., 1999). In this study, during the early dry season (Exp. I), the different supplementation levels resulted in contrasting supplement DMI calculated for the animals. However, a crucial point to highlight is that, because animals provided with the low supplementation level (0.3% BW) had the same total DMI, an increase in herbage DMI can be understood to have caused these results, which is consistent with previous studies. We found differences in the behaviour patterns of animals in each treatment. Casagrande et al. (2011) reported that animals show constant grazing behaviour once adapted to management. The use of FO did not result in changes in animal behaviour, as demonstrated by Ornaghi et al. (2017); the adjustments in animal intake that allowed similar results between treatments actually occurred because of changes in herbage DMI.
Although this experiment was conducted during the early dry season, the forage quality was high relative to historic tropical grass data, with CP 15.9%, NDF 64.7%, ADF 32.7%, iNDF 24.8%, and lignin 4.45% of DM (Poppi & Mclennan, 2007). According to this study, the higher the pasture quality, the lower the difference in response obtained by supplementation of different kinds or levels. Thus, knowing that animals kept in grazing systems should obtain all or most of their nutrients from herbage (Barbero et al., 2015), nutrient intake (CP, NDF, and TDN) in this study would be expected to track herbage DMI. The increase in supplement intake could be explained by the greater CP availability and by nitrogen fractions that are more quickly metabolized (Orskov et al., 1980), which could improve the apparent digestibility of CP. Milis & Liamadis (2007) reported that the effects of CP levels on nutrient digestibility are variable and depend on factors such as the levels and sources of protein. Thus, animals fed the higher level of supplementation had greater and quicker protein digestion, which could increase apparent CP digestibility. Regarding apparent NDF digestibility, Valadares Filho et al. (2016) reported that an increase in the concentrate level supplied to animals, expressed through its interaction with voluntary DMI, has negative effects on fiber digestibility. As the animals supplemented with 0.6% BW had greater supplement DM intake, the responses in our study are consistent with these findings. Specifically for FO, there is limited information about the effect of adding these compounds to ruminant diets. Ornaghi et al. (2017) noted an absence of differences among diets, such as in in vitro digestibility, with the use of these natural additives, which is consistent with our results.
The high ADG obtained with high supplementation levels is consistent with previous studies, but the difference in ADG between additives could be explained by the specific compounds in each additive. FO are secondary metabolites produced by plants, which can affect animal performance at different points, from modulation of the microbiota to disturbance of substrate colonization (Calsamiglia et al., 2007). Castor oil acid and cashew nutshell liquid, both used as ingredients to compose the functional oils additive evaluated in this study, have ricinoleic acid, anacardic acid, cardanol, and cardol as their main components (Zotti et al., 2017). These compounds have antimicrobial activity, especially the fatty acids in castor oil, which have been described as inhibitors of biohydrogenation and methane production and have some effects on gram-positive bacteria (Morales et al., 2012). All these activities are related to improved animal performance; however, the low supplementation levels during this phase (0.3% and 0.6% BW) and the limitations of FO, such as low water solubility and stability (Rai et al., 2017), could have caused the lower ADG compared to VM. VM, in turn, is used as a feed additive mainly to reduce acidosis risk, which makes it highly prevalent in Brazilian feedlot systems (Millen et al., 2009). This compound is also used for other purposes, as a growth promoter and for disease prevention (Benatti et al., 2017). Some studies evaluating VM reported high ADG with its use; Costa et al. (2018) reported an improvement of 14% using VM with an energy supplement. In this study, the improvement was 30% between the different additives.
Ruminal parameters have an important effect on animal performance; however, the results from this study did not show differences in these variables, except for pH between sampling times. The minimum pH value that causes a reduction in cell wall degradation is 6.2 (Hoover, 1986). Changes during these sampling times were expected and were caused by the ingestion behavior and grazing timing of the steers. Grazing activity typically increases at the end of the afternoon (Casagrande et al., 2011). This could contribute to the pH reduction, which intensifies 6 h after ingestion (Owens & Goetsch, 1993). Another important parameter that can alter microbial synthesis is RAN; some studies have reported the minimum values needed to maintain and improve ruminal digestion in tropical pastures. Detmann et al. (2014) reported 6.3 mg/dL for tropical conditions similar to those in this study, and it should be noted that all means in this study were above this value. A possible association could be the similarity in another variable, such as NBal (Broderick et al., 2010). The similarity of pH values between the analyzed effects (levels and additives) can be related to the absence of differences in VFA. Total VFA concentration is directly associated with dietary composition and intake level; both characteristics are similar when herbage is the main feed source, regardless of supplementation level. Similarly, Cardozo et al. (2006) also reported that extracts composed of other types of FO had no effect on total VFA concentration, which is consistent with our results.

Table 6. Initial body weight (IBW), final BW (FBW), dry matter intake (DMI), yield and carcass weight, average daily gain (ADG) and feed conversion ratio (FCR) of Nellore bulls finished in feedlot system following consumption of historic feed during early dry season (Experiment II)
Regarding EMICS and NBal, no differences were observed despite the known antimicrobial effect of FO, probably because of the levels supplied to the animals during this phase. This condition was reported in other studies (Molero et al., 2004; Newbold et al., 2004), where blends of essential oils resulted in small and variable changes, depending on the feed being degraded, the type of ration fed to the animals, and the length of the adaptation period. Benchaar et al. (2006) reported that when dairy cattle were fed a blend of essential oils (composed of thymol, eugenol, vanillin, and limonene), ruminal pH and ADF digestion increased, but no changes were observed in VFA, NBal, protozoa counts, or animal performance. Therefore, according to Calsamiglia et al. (2007), in vivo studies similar to this one should be performed to determine, among other points, the optimal dose in units of the active component and the effects on animal performance.
Experiment II -Finishing phase
The initial BW of the animals showed that all treatments had similar effects during the growing phase (Exp. I), allowing the animals to begin the finishing phase under the same conditions. Traditionally, during the finishing phase, ruminants are fed with diets containing high amounts of fermentable carbohydrates in feedlot systems. This situation can alter ruminal conditions, decrease rumen pH values drastically, and cause ruminal acidosis due to the accumulation of acids in the rumen (González et al., 2012).
The adequate adaptation period and the maintenance of the animals' supplementation history allowed differences in DMI to be observed between treatments. The higher DMI measured in animals fed FO can be attributed to the palatability properties of this compound and may also be associated with a slight improvement in rumen fermentation. Franz et al. (2010) reported the presence of volatile and odorant compounds in FO, which could determine its potential use as a feed additive in animal nutrition.
During the finishing phase, when high-grain diets are used, natural additives have the potential to modulate rumen fermentation (Valero et al., 2014). Therefore, these additives, represented by FO in this study, can be used as natural products to replace ionophores and growth promoters (monensin and VM, respectively) in beef cattle production without future problems in trade to specific marketplaces with rigorous food safety bans (Ornaghi et al., 2017).
The differences in carcass yield could be due to the historic feed supplied during the growing period, when the young bulls fed the lower supplementation level combined with FO as the additive had a lower ADG, whereas the other treatments promoted a higher ADG. The animals from this treatment probably had high deposition of lean tissue and consequently improved carcass yield. This is supported by the ADG during the finishing phase, which remained unchanged in the feedlot. Regarding these results, our study did not agree with the findings of other studies that reported improvements in animal performance (Ornaghi et al., 2017) and feed efficiency (Valero et al., 2014; Fugita et al., 2017); these differences can be related to the composition of each additive used in those studies. Besides decreasing methane production, which improves energy use (Ornaghi et al., 2017), FO from different plants have different organic actions, as they have the potential to modulate rumen fermentation according to the antimicrobial activity of each FO used (Geraci et al., 2012). These actions can cause different responses in ruminants. Thus, more research on the effects of FO from plant extracts needs to be conducted to refine their use in animal diets.
Hence, the supplementation level of 0.6% BW and VM as an additive increased beef cattle performance on grasslands during the early dry season when compared to FO, a mix of cardol, cardanol, and ricinoleic acid. Young bulls receiving 0.3% BW of supplement with this mix of FO during this phase on grasslands exhibited lower carcass yield after the finishing phase in the feedlot. The mix of cardol, cardanol, and ricinoleic acid (FO) can replace VM as an additive for beef cattle during the finishing phase in the feedlot without changing the ADG and FCR.
"year": 2021,
"sha1": "7999b07f61c9f7aa4c9cbcc525cd0c7a802d054f",
"oa_license": "CCBY",
"oa_url": "https://revistas.inia.es/index.php/sjar/article/download/15795/5251",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e97fb1d8ef79d6521d6d4490eb5880d4a6e7cc5f",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
The Geometric Increase in Meta-Analyses from China in the Genomic Era
Meta-analyses are increasingly popular. It is unknown whether this popularity is driven by specific countries and specific meta-analyses types. PubMed was used to identify meta-analyses since 1995 (last update 9/1/2012) and catalogue their types and country of origin. We focused more on meta-analyses from China (the current top producer of meta-analyses) versus the USA (top producer until recently). The annual number of meta-analyses from China increased 40-fold between 2003 and 2011 versus 2.4-fold for the USA. The growth of Chinese meta-analyses was driven by genetics (110-fold increase in 2011 versus 2003). The HuGE Navigator identified 612 meta-analyses of genetic association studies published in 2012 from China versus only 109 from the USA. We compared in-depth 50 genetic association meta-analyses from China versus 50 from USA in 2012. Meta-analyses from China almost always used only literature-based data (92%), and focused on one or two genes (94%) and variants (78%) identified with candidate gene approaches (88%), while many USA meta-analyses used genome-wide approaches and raw data. Both groups usually concluded favorably for the presence of genetic associations (80% versus 74%), but nominal significance (P<0.05) typically sufficed in the China group. Meta-analyses from China typically neglected genome-wide data, and often included candidate gene studies published in Chinese-language journals. Overall, there is an impressive rise of meta-analyses from China, particularly on genetic associations. Since most claimed candidate gene associations are likely false-positives, there is an urgent global need to incorporate genome-wide data and state-of-the art statistical inferences to avoid a flood of false-positive genetic meta-analyses.
Introduction
Meta-analyses are influential publications [1]. They can summarize evidence quantitatively across diverse disciplines and can inform decisions about the need for further research and/or practical implementations of the research findings [2]. The method's popularity has surged in the last two decades with the advent of evidence-based medicine. Noticeably, the application of meta-analyses has extended globally, involving many other countries beyond the United States (US) and selected western countries. This trend reflects the globalization of scientific research, the deluge of published data in the current era, and the need for knowledge integration [3,4]. Given the influence of meta-analyses in assessing the robustness of scientific evidence, there is a need to evaluate the volume of meta-analyses and their quality, specifically in the scientific fields which have undergone the most rapid expansion in production of data.
To obtain a better understanding of the expansion of the meta-analyses literature, in this paper we systematically mapped and evaluated the extent and pace of growth of the meta-analysis literature in biomedical science worldwide. The number of meta-analyses published annually approximately doubled in the last 5 years. We were intrigued to document a very rapid rise in the production of meta-analyses from China. Meta-analyses from the US (traditionally the key producer of meta-analyses) less than doubled in the last 5 years. Conversely, meta-analyses from China now outnumber those from the US in current production, while very few meta-analyses came from China until a few years ago. The advent of Chinese meta-analysis production was driven primarily by genetics, a field that until recently was dominated by papers from the US and a few European countries. To understand the dynamics and patterns of this growing literature, we also performed a more in-depth evaluation of meta-analyses of genetic associations from China and the US to compare their methodologic characteristics.
Survey of Meta-analyses -Search Strategies
We searched PubMed (last search date September 1, 2012) for publications classified as type ''meta-analysis'' and performed also counts per publication year from 1995 until 2012. We generated separate counts for all meta-analyses worldwide, as well as those with listed affiliation from the People's Republic of China (China [affiliation] NOT Taiwan [affiliation]) and, for comparison, those with affiliation from the US (USA [affiliation] OR US [affiliation] OR United States [affiliation]), since US has historically been the top producer of meta-analysis publications.
We then separated the meta-analysis publications according to field as follows. First, genetics-related meta-analyses were searched using the strategy ''gene OR genetic OR polymorphism OR genome OR mutation OR haplotype''. Of those meta-analyses not captured with this strategy, we used the search ''trial OR random* OR treatment'' to identify treatment-related meta-analyses. Of those captured with neither of these two strategies, we used the search term ''sensitivity'' to identify meta-analyses of diagnostic performance. Of those not captured with any of these three searches, we used the search ''cohort or case control'' to identify other meta-analyses mentioning studies with such designs. All remaining meta-analyses were placed in a miscellaneous group.
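The cascading field classification described above can be expressed as a small first-match filter. The sketch below is an illustrative reimplementation of the logic (applied here to free text rather than actual PubMed queries), not the authors' code:

```python
import re

# Search terms from the paper, applied in order; first match wins.
FIELD_FILTERS = [
    ("genetics",      r"\b(gene|genetic|polymorphism|genome|mutation|haplotype)\b"),
    ("treatment",     r"\b(trial|random\w*|treatment)\b"),
    ("diagnostic",    r"\bsensitivity\b"),
    ("observational", r"\b(cohort|case control)\b"),
]

def classify_meta_analysis(title_abstract: str) -> str:
    """Assign a meta-analysis record to the first matching field,
    mirroring the sequential PubMed searches; anything left over
    falls into the miscellaneous group."""
    text = title_abstract.lower()
    for field, pattern in FIELD_FILTERS:
        if re.search(pattern, text):
            return field
    return "miscellaneous"

print(classify_meta_analysis("MTHFR polymorphism and stroke risk"))   # -> genetics
print(classify_meta_analysis("Randomized trials of statin therapy"))  # -> treatment
print(classify_meta_analysis("Acupuncture for chronic pain"))         # -> miscellaneous
```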
Given that the greatest share of meta-analyses from China was identified in the genetics-related group, we also mapped the evolution of the number of meta-analyses in genetics (total, from China and from US for comparison). We also evaluated whether genetics-related meta-analyses from China are published in the English or Chinese language, as well as whether they address genetic associations of gene variants or other research gene-related questions.
The HuGE Navigator [5] (last update search performed January 13, 2013) was also used to map annually the evolution of the number of published meta-analyses on genetic associations of gene variants since 2000 around the world and the number of meta-analyses published per year was plotted for the 10 most prolific countries during 2000-2012.
Comparative Evaluation of Recent Meta-analyses in Genetics
The field of genetics has experienced a paradigm transformation since 2005. Previously, most genetic studies followed a candidate gene approach: one or a few genes and variants thereof were chosen based on biological reasoning to test for association with some phenotype/disease of interest. More recently, genome-wide association studies (GWAS) have probed associations across the whole genome and claim discoveries only after rigorous replication and stringent criteria of multiplicity-adjusted significance are met [6,7]. Large-scale agnostic studies have also permitted testing previously proposed candidate gene associations. In such evaluations the majority of associations proposed in the candidate gene era have not been replicated [8][9][10][11].
Given the large amount of genetic meta-analyses from the US and China, we compared genetic meta-analyses from the two countries to describe the current state of the meta-analysis approach in the published literature. We evaluated 100 genetic association meta-analysis articles (50 from China and 50 from the US) published in 2012. We defined genetic association meta-analyses to include studies that use published and/or new data on candidate or GWAS-derived associations (newly proposed, or further validated) of genetic variants with any outcome/phenotype of interest. We included in this category meta-analyses of variants with pharmacogenetic associations and meta-analyses of Mendelian randomization, provided they also addressed some clinical phenotype. We excluded meta-analyses of somatic mutations and of gene expression data. Article selection was done by systematically screening through the meta-analyses published in 2012 in chronological order of PubMed indexing until we identified 50 eligible meta-analyses from each country group.
For these 100 meta-analyses, we extracted the following information: journal of publication (so as to subsequently identify also the 2011 Journal Impact Factor from Thomson ISI); number of authors; language of publication; disease/phenotype (cancer, cardiovascular, infectious diseases, other disease, non-disease); type of data included (literature, investigators' own, both); inclusion of any unpublished data other than those of the meta-analysis investigators; number of genes assessed (1, 2, 3, >3); number of genetic variants assessed (1, 2, 3, >3); any new associations proposed (yes/no; if no, whether the previously proposed genes that were probed had been derived from agnostic approaches (GWAS) (none [candidate genes only], some, all; the GWAS Catalog constructed by the National Human Genome Research Institute (NHGRI) [12] was consulted to identify whether any of the gene-phenotype associations had been identified in GWAS)); models used for data synthesis (fixed effect, random effects, both); relative risks or absolute differences reported in the abstract (relative risk, absolute difference, both, none); largest relative risk (most deviating from 1.00 in either direction) reported in the abstract; largest nominally statistically significant relative risk reported in the abstract; any nominally statistically significant results (P<0.05 or 95% confidence interval excluding the null) reported in the abstract (yes/no); any genome-wide statistically significant results (conventionally defined as P<5×10⁻⁸) reported in the abstract (yes/no); conclusion of the abstract regarding whether there is some association or not (yes/no); abstract suggesting differences in populations of different ethnicity/ancestry (yes/no); abstract suggesting significant associations only with a particular inheritance model, and if so, which; and any suggestions made in the abstract that more data are needed (yes/no).
We also evaluated whether the meta-analyses included any data from GWAS (yes/no); whether the eligibility criteria aimed to include data from Chinese-language studies (yes/no); whether literature searches included Chinese biomedical literature databases [13] or were limited to Western databases such as PubMed and EMBASE; and whether any Chinese-language studies were indeed included in the meta-analysis calculations, and, if so, how many. For comparison, we also assessed whether studies in any other language besides English/Chinese were considered eligible.
We compared the two groups using the chi-square test with Yates' correction, Fisher's exact test, the Freeman-Halton or chi-square test adjusting for trend, and the Mann-Whitney U test, as appropriate. With n = 50 meta-analyses in each group, and assuming that 20% of the US meta-analyses would assess candidate gene variants, we had at least 90% power to find a significant difference at alpha = 0.05 if the proportion of meta-analyses from China addressing candidate genes were 50% or higher [14]. One comparison used all 100 meta-analyses; a further analysis focused only on meta-analyses that addressed genetic variants that had not been validated in GWAS, since the large majority of these associations are likely to be spurious [8,9,10,15,16]. All P-values are two-tailed.
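For illustration, the 2x2 chi-square test with Yates' continuity correction can be computed with the standard library alone. The example counts (46/50 vs 14/50 literature-only meta-analyses) are derived from the 92% and 28% proportions reported later in the Results; the helper itself is our sketch, not the authors' code:

```python
# Chi-square statistic with Yates' continuity correction for a 2x2 table
# [[a, b], [c, d]]. Stdlib-only sketch for illustration.

def yates_chi2(a, b, c, d):
    n = a + b + c + d
    num = n * (max(abs(a * d - b * c) - n / 2, 0)) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# 46/50 China-based vs 14/50 US-based meta-analyses used literature data only
stat = yates_chi2(46, 4, 14, 36)
print(round(stat, 2))  # 40.04 -- far beyond the critical value 3.84 at alpha = 0.05
```

A statistic this large corresponds to P < 0.0001 on 1 degree of freedom, consistent with the highly significant China-US differences reported.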
Number of Meta-analyses
As of September 1, 2012, PubMed tagged a total of 34,238 publications as meta-analyses. The number published each year since 1995 is shown in Table 1. Overall, there has been substantial growth over time, with an 11-fold increase in the annual number between 1995 and 2011. In the same time frame, the annual volume of items indexed in PubMed has approximately doubled (n = 443,543 in 1995, n = 865,176 in 2011).
The United States was the most common country affiliation, accounting for 8,886 of the 34,238 meta-analyses (26%). There is a clear decline in the proportion of the total represented over time (from 38% in 1995 down to 20% in 2012). Conversely, China has emerged as a dominant publisher of meta-analyses. When all years are considered, it lagged behind the US with a total of 2,587 (8%) publications overall; nevertheless, its rate of growth in meta-analyses is rapid. Prior to 2003, China contributed less than 1% of the total meta-analyses. Between 2003 and 2011, the annual number of Chinese meta-analyses increased over 4000%, versus 140% for the US. In 2012, meta-analyses from China surpassed meta-analyses from the US (21% versus 20% of the total).
The sharp proportional increase in the number of published meta-analyses is specific to China. Japan accounts for 1.5% of meta-analyses published (1.3% when limited to 2012 alone), and no other Asian country accounts for more than 1% of meta-analyses published over all time or in 2012 alone. Several European countries have long published meta-analyses and continue to do so with modest increases in the number published per year, but currently their relative contribution is far less prominent than that of the US or China (not shown in detail).
Types of Meta-analyses
As shown in Table 2, the majority of meta-analyses overall pertain to clinical trials and treatment topics (68%). Genetics-related meta-analyses accounted for 11% of the total when all years are considered and substantially higher proportions in later years (19% in 2012).
Among genetics-related meta-analyses, production from China accounts for 26% of all meta-analyses when all years are considered and approximately half of the published papers (48%) in 2012. In 2012, among papers from China, genetics-related meta-analyses outnumber in absolute numbers meta-analyses of clinical trials and treatments; conversely, meta-analyses of clinical trials and treatments remain more than 5 times more common than genetics-related meta-analyses among papers coming from countries other than China (n = 1191 versus n = 231). Table 3 shows the yearly publication of genetics-related meta-analyses from 1995 to 2012 for all countries, China, and the US. With very few exceptions, the genetics-related meta-analyses from China indexed in PubMed are published in English-language journals (902/942, 96%). Following a suggestion raised during peer review, we assessed whether any of the meta-analyses from China had been published in duplicate, in both the Chinese and English language. Careful scrutiny of a sample of 15 Chinese-language meta-analyses on genetic associations showed that for 2 of them, a corresponding meta-analysis on the same variant and phenotype and with partially overlapping authors had been published also in an English-language journal within <1 year time difference. In one pair, the Chinese-language meta-analysis [17] concluded that GSTM1 is significantly associated with colorectal cancer risk, while the English-language meta-analysis [18] concluded that GSTM1 is not significantly associated with colorectal cancer risk, while other metabolic enzyme genetic polymorphisms were associated.
In the other pair, the Chinese-language meta-analysis [19] found associations for SLC11A1 (formerly NRAMP1) gene polymorphisms and tuberculosis susceptibility focusing on East Asian populations, while the English-language meta-analysis [20] included diverse ethnic groups and also found significant associations for the same variants in East Asian populations, but not with identical effect sizes.
Growth of Genetics-related Meta-analyses
The large majority of the Chinese genetics-related meta-analyses addressed genetic associations of gene variants (95/100 among the latest 100 meta-analyses indexed in PubMed, 90/95 evaluating only a single gene). Figure 1 shows the data from HuGE Navigator on the ten most prolific countries of genetic association meta-analyses with data updated to the end of 2012. Consistent with the PubMed data, the genetic association meta-analyses from China followed a geometric growth, while the US slowed, and other countries contributed comparatively few meta-analyses. Among Asian countries, South Korea is fourth in the number of genetic association meta-analyses in 2012, but very far behind both China and the US. By the end of 2012, there were 612 genetic association meta-analyses published from China in that single year, versus only 109 from the US, 49 from the United Kingdom, and 44 from South Korea. No other country produced more than 30 genetic association meta-analyses in 2012.

Table 4 shows the distribution of articles published in 2012 for China and the US by selected characteristics. As shown, meta-analyses from China and the US differed significantly in several features, including source of data, number of genes studied, and type of approach (candidate-gene or GWAS). Ninety-two percent of Chinese articles performed their meta-analyses using only data abstracted from the literature, as compared to 28% from the US. Conversely, 72% of US investigators included their own genetic data, compared to 8% of Chinese investigators. Unpublished data beyond those of the investigators participating in the meta-analysis were rarely included in any meta-analysis (n = 1 in China-based articles, n = 0 in US-based articles). Meta-analyses from the US tended to examine more genes than ones from China. A majority (88%) of Chinese meta-analyses addressed gene variants that were previously identified using a candidate-gene approach.
In comparison, US meta-analyses were evenly split between candidate genes and GWAS-derived genes and/or new GWAS discoveries.
Comparative Evaluation of Genetic Association Meta-analyses from China and the US
As shown in Table 5, there was a significant difference in China- versus US-based meta-analyses in the use of fixed and/or random effects models for data synthesis (P<0.0001), because US articles were mostly GWAS, where fixed effects analyses are long established as the standard method applied for making discoveries. The largest relative risks and the largest statistically significant relative risks were not significantly different in the two groups of meta-analyses when all 100 meta-analyses were considered. The largest significant risks were larger in China- rather than US-based meta-analyses when limited to the set that did not include GWAS-validated genes (median 1.81 vs 1.21, P = 0.036). Both groups of meta-analyses claimed significant results in the abstract (76% versus 82% in China versus US), but genome-wide significant findings were observed predominantly in US meta-analyses (n = 23), with only one exception in the China group (P<0.001 for the China-US comparison). Both groups usually concluded favorably for the presence of genetic associations (80% versus 72%). There was a suggestion that the Chinese group was more likely to invoke ethnicity/ancestry differences (24% versus 8%, P = 0.054) and possibly also associations that are specific to an inheritance model (8% versus 2%) in their discussion of the results, but these differences were not nominally statistically significant.
Inclusion of Data from GWA Studies and from Chinese-language Literature
One study from China included data from GWA studies, while GWAS data were included in 31 of the 50 meta-analyses from the US (P<0.001). Among the 48 studies from China and 12 from the US that performed literature searches, the eligibility criteria aimed to include data from published Chinese-language studies in 38 meta-analyses versus 8, respectively (P = 0.448). Literature searches perused Chinese-language biomedical literature databases in 21 versus 1 meta-analyses, respectively (P = 0.041). Chinese-language studies were included in the meta-analysis calculations in 20 versus 2 meta-analyses (P<0.001 for all studies, P = 0.180 for studies using literature searches). A total of 65 Chinese-language genetic association studies were included in the calculations of 20 China-based meta-analyses, and 9 meta-analyses included ≥3 Chinese-language studies. In contrast, a total of 9 Chinese-language genetic association studies were included in the calculations of 2 US-based meta-analyses. Articles in languages other than English and Chinese were considered eligible according to the Methods section in 29 meta-analyses from China and 7 meta-analyses from the US, among the 48 and 12 meta-analyses respectively that performed any literature searches (P = 1.00).
Discussion
Our empirical overview of the meta-analysis literature shows a rapid increase in meta-analysis studies worldwide. The increase is most prominently seen in China, specifically in genetics. Chinese publication of genetic association meta-analyses was more than 5 times higher than US-published meta-analyses during 2012, and currently China dominates the global production of papers in this field. In the last 8 years, the annual production of meta-analyses from China has increased 40-fold overall, with a 110-fold increase in genetics.
The vast majority of the genetics meta-analyses from China resulted from combining data from association studies evaluating one or two specific gene variants that had been proposed in the candidate gene era. These studies based significant results on nominal significance (P<0.05) rather than genome-wide significance thresholds. US investigators tended to include their own genetic data in the meta-analyses, whereas Chinese papers typically did not address GWAS-proposed variants. Many of the China-based meta-analyses also include published Chinese-language studies that are not indexed in western databases.
China is becoming an increasingly important player in biomedical research, as illustrated by the exponential growth of published work from Chinese investigators. To our knowledge, the meta-analysis literature from China has not previously been evaluated comparatively in a systematic fashion. One exception is an empirical evaluation of systematic reviews of acupuncture from China, which suggested that the systematic reviews conducted lacked rigor in appraising the risk of bias in included studies [21]. Likewise, empirical investigations in some other fields, including single genetic association studies of candidate genes, clinical trials, and randomized trials on acupuncture, have suggested that Chinese studies present a prominent excess of significant results [22][23][24][25] that requires cautious interpretation.
Strong evidence has accumulated on the low replication rates of past candidate gene associations when these associations are evaluated in large-scale consortia with agnostic testing of gene variants across the whole genome [8,9,15]. Thus, one may infer that the large majority of significant associations proposed in meta-analyses of candidate gene studies are likely to perpetuate false-positive findings. Therefore, it is likely that the majority of China-produced genetic meta-analyses are reaching false-positive conclusions. The reasons for this may be manifold, but we have identified some potential explanations.
First, China-based meta-analyses do not employ data from GWA studies. This may be because these data are not readily available to a wider public, or because they require approval processes that hinder access by Chinese meta-analysts. On some occasions, the genetic variants of interest may not be captured by agnostic platforms, although this is not common with current GWA platforms, which have very high genome coverage; imputation should make this concern even less important [26]. Second, China-based meta-analyses tended to focus on testing single genes and gene variants that were proposed in the candidate-gene era. This approach does not address genetic variants that have emerged with far stronger statistical support from large-scale consortia performing GWAS [27]. Overall, these meta-analyses have not followed the evolution of human genome epidemiology in the direction of GWA studies; e.g., current large-scale synopses of genetic association studies routinely try to incorporate the GWA data, which are typically the largest datasets in each field [28,29]. Lack of inclusion of GWA data in the current meta-analysis literature may not necessarily be exclusive to China. Meta-analysts from other countries may also continue to perform candidate-gene meta-analyses. Unfortunately, the number of such meta-analyses from other countries is too small compared to China to allow a meaningful evaluation.
Third, many China-based meta-analyses performed more exhaustive literature searches than US-based meta-analyses and included data from Chinese-language publications that are often indexed only in Chinese literature databases [13]. While inclusiveness is commendable in principle, previous experience suggests that language bias could affect the results in different ways depending on the field involved [4,30,31]. In some disciplines, it may be preferable to exclude rather than to include data from specific countries. It was demonstrated [25] that there is a prolific Chinese language literature on genetic association studies addressing variants of the candidate gene era and this literature might be biased towards reporting of statistically significant results [25]. This was not due to the quality of the studies based on the reported features of their methods and conduct [25], but may be due to selective reporting of ''positive'' results. Under such circumstances, inclusion of these data may propagate further false-positive results at the meta-analysis level.
We should acknowledge some limitations in our study. We did not assess in depth the quality of the evaluated meta-analyses, which could further explain the differences between China and US meta-analyses. Quality of published genetic association meta-analyses is difficult to assess, as there is no guarantee that reported features accurately reflect real practices adopted during the conduct of the meta-analysis [32]. As previously observed [25], the reported quality of Chinese genetic association studies was comparable to that of studies performed in other countries. Moreover, quality comparisons between China and the US might not be meaningful or even feasible, because the types of research endeavors differ between the two countries with minimal overlap, e.g., meta-analyses of published data on candidate gene associations versus consortium analyses of GWA data. Eventually, one needs to assess the essential features and reliability of a meta-analysis regardless of its country of origin. However, the extremely rapid increase in Chinese meta-analyses of candidate gene variants is a phenomenon that cannot be overlooked, and it has no parallel in any other country to date.
Allowing for these caveats, our empirical evaluation documents an extraordinary, geometric growth of the meta-analysis literature produced by authors from China in the last few years, with strong emphasis on genetic associations. This surge in meta-analysis applications presents conflicting issues. On the one hand, meta-analyses of data abstracted from the literature and from candidate-gene studies may propagate an epidemic of false claims for candidate gene associations. Conversely, the extraordinary scientific potential of China can offer tremendous impetus to evidence-based medicine in general and genetics more specifically, if it is appropriately harnessed. Efforts should be made to familiarize Chinese meta-analysts with the newer waves of genetic studies; improved access to large-scale consortium databases and active participation in such consortia [33] may be useful steps in this direction. China is already a leading power in modern genomic technologies, with unparalleled sequencing capacity, and is already assuming a leading role in the emerging omics fields [34][35][36]. Meta-analyses could also be facilitated to reach higher levels of reliability.
Dihydroartemisinin Induces Growth Arrest and Overcomes Dexamethasone Resistance in Multiple Myeloma
The discovery of artemisinin (ART) for malaria treatment won the 2015 Nobel Prize in Medicine, which inspired the rediscovery and development of ART for the treatment of other diseases, including cancer. In this study, we investigated the potential therapeutic effect of ART and dihydroartemisinin (DHA) on multiple myeloma (MM) cells, including primary MM cells, and in the 5TMM3VT mouse model. Both in vitro and in vivo experiments showed that DHA might be a more promising anti-MM agent, with significantly improved efficacy compared to ART. Mechanistic analyses suggested that DHA activated the mitochondrial apoptotic pathway by interacting with ferrous (Fe²⁺) ions and oxygen to produce reactive oxygen species (ROS). Intriguingly, DHA could reverse the upregulated expression of B-cell lymphoma 2 (Bcl-2) protein, a typical mitochondrial apoptotic marker, induced by dexamethasone (Dexa) in MM. We further demonstrated that DHA treatment could overcome Dexa resistance and enhance Dexa efficacy in MM. Additionally, DHA combined with Dexa resulted in increased ROS production and cytochrome C translocation from the mitochondria to the cytoplasm, altering the mitochondrial membrane potential and triggering caspase-mediated apoptosis. In summary, our study demonstrated that DHA was superior to ART in MM treatment and overcame Dexa resistance both in vitro and in vivo, providing a promising therapeutic strategy for MM therapy.
INTRODUCTION
Multiple myeloma (MM) remains an incurable hematological malignancy of plasma cells, despite the therapeutic advances over the past two decades, with numerous agents gaining clinical approval, including proteasome inhibitors such as bortezomib (1), ixazomib (2), and carfilzomib (3); monoclonal antibodies such as elotuzumab (4) and daratumumab (5); immunomodulatory drugs such as pomalidomide (6) and lenalidomide (7); and other treatments including chimeric antigen receptor (CAR) T-cell therapy (8,9). Dexamethasone (Dexa) remains the most widely used drug for the treatment of MM, despite the development of resistance in patients after prolonged exposure to its high doses (10), which, in turn, is associated with poor prognosis of MM. Recurrent MM patients treated with numerous anti-MM drugs and high-risk patients with MM have poor median survival rates (11); thus, novel molecular targeting therapies are required to overcome drug resistance in MM treatment.
An increasing number of studies have demonstrated that artemisinin (ART)-type endoperoxide drugs exhibit anticancer properties (12,13). Dihydroartemisinin (DHA)-mediated inhibition of cancer cell proliferation may be associated with the production of reactive oxygen species (ROS) (14,15), induction of apoptosis (16), inhibition of angiogenesis (17), and ferroptosis (18). For example, in lung cancer, DHA inhibits proliferation, induces cell cycle arrest, and decreases tumor growth by suppressing invasion and migration, increasing the concentration of calcium (Ca²⁺) ions, and activating p38 (19). In glioma, DHA induces autophagy, apoptosis, and cell cycle arrest by increasing the cleavage of caspase-3, decreasing the expression of phosphorylated protein kinase B (p-AKT), and downregulating AKT phosphorylation, followed by caspase-3 activation (20). Furthermore, artemisinin-type drugs increase the sensitivity of resistant cancer cells to conventional drugs (21). Examples include DHA combined with gemcitabine, which is efficacious against pancreatic tumor cells by suppressing gemcitabine-induced nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) activation (22); DHA combined with cyclophosphamide, which inhibits spontaneous pulmonary metastasis (23); and a combination of DHA and gemcitabine, which decreases hepatoma tumor growth (24). However, the synergistic effects of DHA with the conventional drug Dexa for the treatment of MM have not yet been determined.
In the present study, two artemisinin-type drugs, ART and DHA, were examined for their anti-MM effect in vitro and in vivo. We also established their mechanism of action.
Cell Culture
Human MM cell lines MM.1S and MM.1R were purchased from ATCC (CRL-2974 and CRL-2975, respectively); ARP1 and H929 cells were kind gifts from Dr. Siegfried Janz (University of Iowa, Iowa City, IA, USA); and mouse 5TMM3VT cells were donated by Dr. Wen Zhou (Xiangya School of Medicine, Central South University, Key Laboratory of Carcinogenesis and Cancer Invasion, Ministry of Education; Key Laboratory of Carcinogenesis, National Health and Family Planning Commission, Changsha, China). All cells were cultured in RPMI-1640 medium (Biological Industries, Beit Haemek, Israel) supplemented with 10% heat-inactivated fetal bovine serum (FBS; Biological Industries, Israel) and 1% penicillin/streptomycin at 37 °C with 5% CO₂. Primary human CD138⁺ cells were collected from the blood samples of each participant and cultured under the same conditions as described above, which was approved by the ethics committee of the Affiliated Hospital of Nanjing University of Chinese Medicine (No. 2018NL-KS13).
Cell Proliferation Assay
Cell growth was evaluated using the MTT assay according to the method described in the literature (25). Cells were seeded at a density of 8 × 10³ cells/well in 96-well plates. MM cells were cultured with different treatments for 48 h, and primary human CD138⁺ cells were treated for 24 h at 37 °C with 5% CO₂. 3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT; 5 mg/mL) was added to each well. After incubation for 4 h, the supernatant was removed and DMSO was added to dissolve the formazan. The absorbance was measured at 570 nm with a microplate reader (Thermo Fisher Scientific, Inc., USA).
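As a hedged sketch, the 570 nm absorbance readings can be converted to percent viability relative to untreated controls; the blank-subtraction normalization and the example absorbance values below are our assumptions (a standard MTT normalization), not values from the study:

```python
# Hypothetical MTT normalization: percent viability relative to the untreated
# control wells after subtracting the no-cell blank. Example readings are made up.

def viability_percent(a_treated, a_control, a_blank=0.0):
    """Percent viability = (A_treated - A_blank) / (A_control - A_blank) * 100."""
    return 100.0 * (a_treated - a_blank) / (a_control - a_blank)

# Example: treated wells read 0.45, untreated controls 0.90, blank 0.05
print(round(viability_percent(0.45, 0.90, 0.05), 1))  # 47.1
```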
Cell Cycle Analysis
Cells were collected into centrifuge tubes and washed twice with PBS, then fixed with 75% ethanol for 12 h. Subsequently, cells were washed with PBS, treated with 200 µg/mL RNase for 15 min at 37 °C to remove RNA contamination, and stained with 50 µg/mL PI (Sigma-Aldrich, Merck KGaA) for 30 min at room temperature in the dark. Cells were filtered through a 35 µm mesh to remove clumps before analysis by flow cytometry.
Mitochondrial Isolation Assay
Mitochondrial isolation was performed according to the manufacturer's instructions for the Cell Mitochondria Isolation Kit (Beyotime Institute of Biotechnology, Jiangsu, China, #C3601). MM.1S and MM.1R cells were plated in a 100-mm dish containing 5 × 10⁸ cells and treated with DHA, Dexa, or the combination of DHA with Dexa for 48 h. After treatment, cells were washed with cold PBS three times, then collected into centrifuge tubes (1,200 g for 5 min) and resuspended in isolation buffer containing protease inhibitor (1:1,000, Biolegend, USA, CAT: 640941). After the suspension stood for 15 min, the cells were homogenized with a Dounce glass homogenizer for several complete up-and-down cycles and kept on ice. Next, the liquid was transferred into centrifuge tubes to remove debris, including nuclei, by centrifugation at 600 g for 10 min at 4 °C. The supernatant was then transferred into another centrifuge tube and centrifuged at 11,000 g for 10 min at 4 °C. The pellet was the crude mitochondrial fraction, which was lysed with mitochondrial lysis fluid for western blot analysis. After being placed on ice for 15 min, the lysate was centrifuged at 12,000 g for 5 min at 4 °C.
Measurement of Fe²⁺ Levels
1 × 10⁶ cells were harvested and incubated with 5 µM Calcein-AM (a non-fluorescent lipophilic ester; Yeasen, Shanghai, China, #40719ES50) for 15 min at 37 °C and with 50 µg/mL PI for 10 min at 4 °C after treatment with 10 µM ART or DHA for 48 h. Cells were then washed twice with PBS, and the fluorescence intensity signals of the cells were analyzed using flow cytometry (Ex = 490 nm / Em = 515 nm for Calcein-AM; Ex = 535 nm / Em = 617 nm for PI). Cytosolic esterases hydrolyze Calcein-AM to release the fluorescent calcein, which binds to the intracellular labile iron pool, resulting in quenching of the fluorescent signal. Cytosolic iron mobilization was calculated from the mean cellular calcein fluorescence intensity measured by flow cytometry. A reduction in calcein fluorescence intensity represents an increase in chelatable cytosolic Fe²⁺ (29).
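The quench readout described above can be summarized as a percent loss of mean calcein fluorescence intensity (MFI) relative to untreated cells; the helper and the example MFI values below are hypothetical illustrations, not data from the study:

```python
# Illustrative calculation: a drop in mean calcein fluorescence indicates a
# rise in chelatable cytosolic Fe2+. Example MFI values are made up.

def calcein_quench_percent(mfi_control, mfi_treated):
    """Percent loss of calcein fluorescence relative to untreated cells."""
    return 100.0 * (mfi_control - mfi_treated) / mfi_control

print(round(calcein_quench_percent(1000.0, 620.0), 1))  # 38.0
```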
ROS Determination
Intracellular ROS levels were quantified by measuring the fluorescence intensity of 2′,7′-dichlorofluorescein diacetate (DCFH-DA; Beyotime Institute of Biotechnology, Jiangsu, China, #S0033) with PI as a probe using flow cytometry. The non-fluorescent DCFH-DA is deacetylated by intracellular esterases to the non-fluorescent DCFH, which is subsequently rapidly oxidized by intracellular ROS to the fluorescent 2′,7′-dichlorofluorescein. Cells were washed twice with PBS and incubated with 10 µM DCFH-DA at 37 °C for 30 min in the dark. Subsequently, the cells were washed twice and resuspended in PBS. ROS levels were determined using a flow cytometer.
Human Myeloma Xenograft Mice Model
All animal procedures were conducted in accordance with government-published recommendations for the Care and Use of Laboratory Animals and were approved by the Institutional Ethics Review Boards of Nanjing University of Chinese Medicine (No. ACU170501 and 201905A003). MM.1S (sensitive to Dexa) and MM.1R (resistant to Dexa) cells (3 × 10⁶) were subcutaneously injected into the abdominal area of 6-8-week-old NOD-SCID mice (n = 10 per group) from Beijing Vital River Laboratory Animal Technology, Co., Ltd. (Beijing, China). Starting on day 3 after cell transfer, mice were treated with DHA (25 mg/kg) three times a week and Dexa (9 mg/kg) every other day. Tumor volumes were measured using calipers at the indicated time points. When tumor diameters reached 20 mm, the mice were sacrificed. Tumor volume (mm³) was calculated as (length × width²)/2 (30).
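The stated tumor volume formula as a one-line helper (the caliper readings in the example are hypothetical):

```python
# Tumor volume from caliper measurements, as defined in the text:
# volume (mm^3) = (length x width^2) / 2.

def tumor_volume(length_mm, width_mm):
    return length_mm * width_mm ** 2 / 2

print(tumor_volume(10.0, 8.0))  # 320.0
```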
5TMM3VT Myeloma Mice Model
5TMM3VT murine myeloma cells (1 × 10⁶) were injected through the tail vein into 6-week-old C57BL/KaLwRij mice (n = 10 per group). The mice were divided into three groups: a DHA (50 mg/kg) group, an ART (50 mg/kg) group, and a control group (castor oil:ethanol:saline = 2:1:7). Starting 2 days after injection, the 10 mice in each group were treated via intraperitoneal injection three times a week and followed for up to 75 days or until death. DHA and ART were dissolved in 70% saline, 20% castor oil, and 10% ethanol.
Statistical Analysis
Data were expressed as the mean ± SD. The Student's t-test was used to determine significant differences between groups, including differences in the proportion of cells in G2/M phase and in apoptosis; *p < 0.05, **p < 0.01, and ***p < 0.001 were considered statistically significant. Mouse survival was analyzed with GraphPad Prism 5 software (GraphPad Software Inc., La Jolla, CA, USA) using the log-rank (Mantel-Cox) test. The interaction between DHA and Dexa was analyzed with the CalcuSyn software program (CalcuSyn Version 2.1, Biosoft); isobologram analysis was based on the Chou-Talalay method with the combination index (CI), where CI < 1.0 indicates synergism, CI = 1.0 additive activity, and CI > 1.0 antagonism.
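The Chou-Talalay CI thresholds described above map to a simple classifier (a sketch; the function name and numeric tolerance are our assumptions):

```python
def interpret_ci(ci: float, tol: float = 1e-9) -> str:
    # Chou-Talalay combination index: <1 synergism, =1 additive, >1 antagonism
    if ci < 1.0 - tol:
        return "synergism"
    if ci > 1.0 + tol:
        return "antagonism"
    return "additive"

# CI reported in this study for 17.5 uM DHA + 60 uM Dexa in MM.1S cells
print(interpret_ci(0.269))  # -> synergism
```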
DHA Is a Potential Drug in the Treatment of Myeloma
To evaluate the potential of ART or DHA as a treatment for MM, the therapeutic effects of ART and DHA were determined on overall survival rate of C57BL/KaLwRij MM-prone mouse model established using 5TMM3VT cells. Kaplan-Meier survival curve showed that the MM mice with ART treatment had significantly improved overall survival (median survival, 53 days) compared with the untreated control animals (median survival, 38 days; P = 0.0085). Additionally, MM mice treated with DHA had a significantly longer survival (median survival >75 days) compared with the untreated control animals (P = 0.0020). None of the untreated control mice survived >6 weeks; however, mice treated with ART survived for <8 weeks (39% increase) while mice treated with DHA survived >10 weeks (90% increase). The improved survival rate of DHA-treated mice compared with those of ART-treated mice (P = 0.0116; Figure 1A) demonstrated the therapeutic potential of DHA compared with ART for the treatment of MM.
The effects of ART and DHA on the proliferation of the MM cell lines were determined (Figure 1B). Treatment of ARP1 and H929 cells with ART or DHA resulted in dose-dependent cytotoxicity. The IC₅₀ of ART was significantly higher than that of DHA in both ARP1 (2.84 mM vs. 2.937 µM, respectively) and H929 (815 µM vs. 7.931 µM) MM cells (Figure 1B), highlighting the better efficacy of DHA in the treatment of MM. This observation was further confirmed by a cell cycle assay and apoptosis analysis. In the cell cycle assay, the proportion of ART- and DHA-treated cells in the G2/M stages decreased compared with that in the untreated control group, and the proportion of ARP1 (15.23 ± 3.66%) and H929 cells (11.46 ± 4.26%) in these phases was significantly lower in the DHA-treated group than in the ART-treated group (ARP1 cells: 28.40 ± 5.63%, P < 0.05; H929 cells: 29.97 ± 1.38%, P < 0.01; Figure 1C). Flow cytometry analysis of apoptosis showed a significant increase in apoptotic levels in the DHA-treated ARP1 (25.83 ± 7.80%) and H929 cells (66.23 ± 1.80%) compared with the ART-treated ARP1 (9.08 ± 0.72%, P < 0.01) and H929 cells (12.54 ± 1.59%, P < 0.01) and the untreated ARP1 (5.35 ± 0.96%, P < 0.001) and H929 cells (7.29 ± 0.14%, P < 0.001; Figure 1D).
DHA Treatment Increases the Concentration of ROS in MM Cell Lines
ROS serve an important role during apoptosis (31) and ROS concentration is the most intuitive indicator to evaluate the oxidant activity of ART and DHA (32). Thus, the ROS levels in MM cells treated with ART or DHA were determined. As shown in Figure 2A, DHA significantly increased ROS production compared with ART treatment (P < 0.001) in ARP1 and H929 cells. At 2.5 µM DHA in ARP1 cells, the generation of ROS (1.65 ± 0.06) was significantly higher than that in the untreated cells (1.00 ± 0.02) and ART-treated cells (1.15 ± 0.10). Similar results were observed in H929 cells and at other concentrations, suggesting that DHA increased intracellular ROS production compared with the ART and untreated cells (Figure 2A).
Previous studies have reported that ART treatments, in the presence of free iron, result in the production of alkylating agents which damage cancer cells (33). Therefore, the effect of DHA on intracellular iron in MM cells was examined. The fluorescence intensity of intracellular iron staining was determined using Calcein-AM after 0, 24, 48, and 72 h of treatment and flow cytometry (Figure 2B). To determine whether ART and DHA altered mitochondrial OCR, MM cells were treated with ART or DHA for 48 h. Oligomycin A was added to the cells to examine oxygen consumption coupled with ATP synthesis. The uncoupler carbonyl cyanide-4-(trifluoromethoxy)phenylhydrazone (FCCP) was added to determine the maximal respiratory capacity. Antimycin A and rotenone were used to determine the spare respiratory capacity. As shown in Figure 2C, only treatment with DHA resulted in a significant increase in OCR, suggesting that DHA, but not ART, increases the concentration of ROS primarily through increasing mitochondrial OCR.
DHA Induces the Mitochondrial Apoptosis Pathway in MM
To confirm the effect of DHA on MM cell apoptosis and the involvement of the mitochondrial pathway, the expression of caspase-3, PARP, Bcl-2, and Bad was determined, all of which are indicators of apoptosis in a number of different types of cancer (34). Western blotting showed that cleaved caspase-3 and PARP expression levels in ARP1 and H929 cells treated with DHA were significantly increased compared with cells treated with ART or untreated cells after 12, 24, and 48 h (Figure 3A). Bcl-2 expression levels were significantly decreased in cells treated with DHA after 24 h (Figure 3B). To further confirm the involvement of mitochondria-dependent apoptosis in cells treated with DHA, Bad expression was measured. As shown in Figure 3C, DHA treatment downregulated the expression of Bad in both ARP1 and H929 cells in a time-dependent manner.
DHA Overcomes Resistance to Dexa in MM
Since Bcl-2 is a typical marker of Dexa resistance in MM patients, we further assessed the efficacy of DHA for the treatment of Dexa-resistant MM (35). Paired Dexa-sensitive (MM.1S) and Dexa-resistant (MM.1R) MM cell lines, established from the peripheral blood of the same MM patient, were utilized (36). The viability of MM.1S and MM.1R cells was significantly reduced by DHA in a concentration-dependent manner (Figure 4A). Of note, the MM.1R cells exhibited some sensitivity to DHA.
As Dexa is the most widely used treatment for patients with MM despite the development of drug resistance (37), the effect of DHA in combination with Dexa was determined. As shown in Figure 4B, a combination of DHA and Dexa significantly reduced the viability of MM.1S (combination index, CI, of 17.5 µM DHA with 60 µM Dexa, 0.269) and MM.1R cells (CI of 17.5 µM DHA with 15 µM Dexa, 0.167). Comparable results of combined treatment and Dexa were observed, confirming that combined treatment significantly reduced viability compared with either treatment alone, irrespective of the Dexa dose.
To determine the clinical potential of DHA combined with Dexa, MM cells obtained from MM patients were treated with the drugs to evaluate their effects on growth in short-term cultures for 12 h. The CD138⁺ cells showed significantly reduced growth when treated with the combination of drugs compared with 7.5 µM Dexa or 120 µM DHA alone (both P < 0.05; Figure 4C). Additionally, apoptosis was detected using flow cytometry, and both MM.1S and MM.1R cells treated with the combination of drugs exhibited increased apoptosis compared with Dexa or DHA alone (both P < 0.001; Figure 4D), consistent with the results of ARP1 and H929 cells treated with the drugs either alone or in combination.
DHA Combined With Dexa Increases Translocation of Cytochrome C (Cyt C) From the Mitochondria to the Cytoplasm
In order to elucidate the mechanism through which DHA overcomes Dexa resistance, we further explored whether DHA combined with Dexa could increase oxidative stress and ROS production in MM.1S and MM.1R cells. As shown in Figure 5A, the levels of ROS in cells treated with the combination of drugs were significantly higher than those in cells treated with either drug alone. Treatment with Dexa or DHA alone, or combined, resulted in a loss of ψm, represented by an increase in JC-1 green fluorescence (Figure 5B). The alterations of ψm induced by the combination of drugs are associated with translocation of mitochondrial proteins to the cytosol, including Cyt C (38). As shown in Figure 5C, treatment with the combination of drugs resulted in increased translocation of Cyt C to the cytosol compared with treatment with either Dexa or DHA alone. It has been reported that during mitochondrial permeability transition, Cyt C activates caspase-3 (39). The expression levels of cleaved caspase-3 were increased by DHA and Dexa alone, and further increased when cells were treated with a combination of the two drugs (Figure 5D). Cyt C translocation from the mitochondria to the cytosol depends partly on the expression of Bcl-2 (40); thus, Bcl-2 expression in cells treated with Dexa, DHA, or both combined was examined. Dexa increased Bcl-2 expression in a time-dependent manner (0-96 h), peaking at 48 and 72 h at a concentration of 3.75 µM, in MM.1S and MM.1R cells (Figure 5E), suggesting that Dexa may increase Bcl-2 expression during the first 24 h. However, increasing the concentration of Dexa (0, 3.75, 7.5, 15, 30, 60, and 120 µM) within 24 h resulted in a decrease in Bcl-2 expression, including in the MM.1R cells (Figure 5F). Irrespective of the concentration of Dexa used, the combination treatment reversed the Dexa-induced Bcl-2 expression in the resistant cells, suggesting that DHA treatment was able to overcome Dexa resistance.
These results show that the cytotoxic effects of DHA combined with Dexa were mediated by increased ROS production, altering ψ m , reversing changes to Bcl-2 expression mediated by Dexa, increasing release of mitochondrial Cyt C, activating caspase-3, and finally inducing caspase-mediated apoptosis.
Anti-MM Activity of DHA Combined With Dexa in a MM Xenograft Mouse Model
To replicate our findings in vivo, an MM xenograft mouse model was used to evaluate the antitumor activity of DHA in combination with Dexa. NOD-SCID mice were injected subcutaneously with MM.1S or MM.1R cells; tumor neoplasms in untreated mice developed over ∼1 month, whereas tumors in the Dexa- or DHA-treated mice developed more slowly than in the untreated mice (Figure 6A).
DISCUSSION
Despite the advancements in therapeutic target discovery and targeted therapy, MM is still incurable. Typically, MM patients respond to treatment initially, but the majority ultimately relapse. Therefore, the development of novel treatments is urgently needed. Artemisinin-type drugs are typically metabolized in vivo to the active metabolite DHA (41), and the anticancer effects of ART and its derivative DHA have been demonstrated in a number of different types of cancer (41)(42)(43)(44)(45). For instance, in breast cancer, DHA induced apoptosis and G0/G1 arrest by activating Bid, increased the expression of Bim, decreased the expression of Bcl-2, induced the translocation of Cyt C from the mitochondria to the cytosol, and increased the expression of caspase-8 and cleaved caspase-9 (43,44). However, so far, there have been no studies examining the effects of DHA on MM, and in particular, the combination of DHA with other chemotherapeutic drugs, including Dexa, on MM. Our study first evaluated the effects of ART and DHA on MM in vitro and in vivo, and demonstrated that DHA possesses a greater efficacy than ART against MM (Figures 1A,B). At the cellular level, ART could only induce weak apoptosis of MM cells and a modest decrease in G2/M-stage cells compared with DHA (Figures 1C,D), suggesting that ART may be converted into activated DHA in the microenvironment of MM cells. The in vivo study in 5TMM3VT myeloma C57BL/KaLwRij mice showed that both ART and DHA improved the survival of the mice; however, MM mice treated with DHA survived longer than ART-treated mice. This phenomenon may be due to the fact that ART in vivo is not completely converted to the activated DHA.
Consistent with previous reports that DHA prevented MM tumor growth through a caspase-mediated pathway (43), the current study found that the mitochondria were the primary target through which DHA exerted its effects. DHA activated apoptosis via caspase-3, followed by PARP cleavage (Figure 3A), and decreased ψm (Figure 5B), which in turn increased the translocation of Cyt C from the mitochondria to the cytoplasm (Figure 5C) (45). Additionally, DHA interacted with iron and activated oxygen consumption (Figures 2B,C), resulting in increased production of ROS (Figures 2A, 5A); this was assumed to be an after-effect of permeabilization of the outer mitochondrial membrane (46), since outer mitochondrial membrane permeabilization results in a loss of ψm and an increase in ROS levels (47). For MM treatment, drug resistance and the related relapse are major clinical obstacles, and we further showed that DHA is effective in treating drug-resistant MM (Figures 4A,B). DHA-induced mitochondrial apoptotic pathways are largely related to the pathways affected by Dexa. Dexa increases the release of the mitochondrial apoptogenic factors second mitochondria-derived activator of caspases (Smac)/DIABLO and Cyt C from the mitochondria to the cytosol, and activates the SAPK/JNK-independent pathway, which is associated with downregulation of MAPK and p70S6K (48). However, DHA reduced the expression of Bad and Bcl-2 (Figures 3B,C), while Dexa increased Bcl-2 expression in both MM.1S and MM.1R cells within the first 48 h at lower concentrations (Figure 5E) and suppressed the growth of MM cells at higher concentrations (Figure 5F). Therefore, we infer that DHA might counteract the effects of Dexa to balance Bcl-2 expression (Figure 5G), thus regulating the translocation of Cyt C to further promote cell apoptosis (summarized in Figure 7).
This hypothesis is further strengthened by the report that Bcl-2 inhibits the translocation of Cyt C from the mitochondria to the cytoplasm (49), and that Cyt C increases caspase-3 expression and ultimately induces caspase-3-mediated apoptosis (50).

Figure 7 caption: In previous studies, DHA was found to influence the progression of different cancers by regulating the expression of mitochondrial, nuclear, and cytoplasmic proteins. At the protein level, Dexa inhibits the proliferation of MM cells mainly by adjusting the expression of Smac, cytochrome C, caspase-3, caspase-9, MAPK, and p70S6K. In our study, DHA was found not only to induce the mitochondrial apoptosis pathway, but also to inhibit Bcl-2 expression induced by Dexa. A synergism of DHA and Dexa increases ROS levels and the translocation of cytochrome C from the mitochondria to the cytoplasm, and then alters the mitochondrial membrane potential as well as caspase-mediated apoptosis. (The triangle on the left shows the effect of Dexa on protein expression, and the other side shows the effect of DHA. The red triangle indicates an increase and the blue triangle a decrease in protein expression. The blue characters and lines represent Dexa-acting proteins. The black lines and bold characters represent the summary of this study, and the gray lines represent the summary of the literature.)
Therefore, we further confirmed that DHA could overcome Dexa resistance in MM (Figures 4A-C). DHA-induced cytotoxicity in MM.1S and MM.1R cells was enhanced by Dexa, suggesting that the apoptotic signaling cascades through which DHA and Dexa exert their effects may be different (Figure 4D). It has been demonstrated that Dexa induces caspase-9 activation via a Smac-dependent and Cyt C-independent pathway (51), whereas, in the present study, DHA activated caspase-3 and increased the release of Cyt C from the mitochondria (Figures 5C,D). DHA synergistically augmented Dexa-induced cytotoxicity (Figure 4B), including the decrease in proliferation and the increase in apoptosis of MM cells (Figure 4D). Importantly, DHA re-sensitized MM.1R cells to Dexa in vitro and in vivo (Figures 6A-C). Through these mechanisms, DHA alone not only exhibits anticancer effects in MM but also augments and complements the effects of the widely used anti-MM agent Dexa. These data provide a rationale for future clinical studies to develop a novel therapeutic regimen comprising DHA alone or in combination with other agents for patients with MM, particularly relapsed/refractory MM patients.
In conclusion, our study provides the first evidence that DHA may be an effective standalone treatment for MM patients and that it can overcome Dexa resistance. Mechanistically, DHA prevents MM development and progression through a mitochondrial apoptotic pathway involving Fe²⁺ and oxygen. DHA augments the effects of Dexa, resulting in increased production of ROS and translocation of Cyt C from the mitochondria to the cytoplasm, and downregulates Dexa-induced expression of Bcl-2, a biomarker of Dexa resistance. Therefore, DHA may be a promising therapeutic option for patients with refractory/relapsed MM. Further studies are necessary to determine the clinical efficacy and detailed therapeutic targets.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher.
ETHICS STATEMENT
The studies involving human participants, including the use of primary human CD138⁺ cells, were reviewed and approved by the ethics committees of the Affiliated Hospital of Nanjing University of Chinese Medicine (No. 2018NL-KS13). The patients/participants provided their written informed consent to participate in this study. The animal experiments were reviewed and approved by the Ethics Committee of Nanjing University of Chinese Medicine (Nanjing, China; animal ethics registration nos. ACU170501 and 201905A003).
AUTHOR CONTRIBUTIONS
This study was designed and conceived by YY and CG. Experiments were performed by YC, RL, YZ,
Controllable Text Simplification with Explicit Paraphrasing
Text Simplification improves the readability of sentences through several rewriting transformations, such as lexical paraphrasing, deletion, and splitting. Current simplification systems are predominantly sequence-to-sequence models that are trained end-to-end to perform all these operations simultaneously. However, such systems limit themselves to mostly deleting words and cannot easily adapt to the requirements of different target audiences. In this paper, we propose a novel hybrid approach that leverages linguistically-motivated rules for splitting and deletion, and couples them with a neural paraphrasing model to produce varied rewriting styles. We introduce a new data augmentation method to improve the paraphrasing capability of our model. Through automatic and manual evaluations, we show that our proposed model establishes a new state-of-the-art for the task, paraphrasing more often than the existing systems, and can control the degree of each simplification operation applied to the input texts.
Since 2016, nearly all text simplification systems have been sequence-to-sequence (seq2seq) models trained end-to-end, which have greatly increased the fluency of the outputs (Zhang and Lapata, 2017; Nisioi et al., 2017; Zhao et al., 2018; Kriz et al., 2019; Dong et al., 2019; Jiang et al., 2020). However, these systems mostly rely on deletion and tend to generate very short outputs at the cost of meaning preservation (Alva-Manchego et al., 2017). Table 1 shows that they neither split sentences nor paraphrase well, as reflected by the low percentage of splits (< 1%) and new words introduced (< 11.2%). While deleting words is a viable (and the simplest) way to reduce the complexity of sentences, it is suboptimal and unsatisfying. Professional editors are known to use a sophisticated combination of deletion, paraphrasing, and sentence splitting to simplify texts (Xu et al., 2015). Another drawback of these end-to-end neural systems is the lack of controllability. Simplification is highly audience dependent, and what constitutes simplified text for one group of users may not be acceptable for other groups (Xu et al., 2015; Lee and Yeung, 2018). An ideal simplification system should be able to generate text with varied characteristics, such as different lengths, readability levels, and numbers of split sentences, which can be difficult to control in end-to-end systems. To address these issues, we propose a novel hybrid approach that combines linguistically-motivated syntactic rules with data-driven neural models to improve the diversity and controllability of the simplifications.

Table 1: Output statistics of 500 random sentences from the Newsela test set. Existing systems rely on deletion and do not paraphrase well. OLen, %new, %eq and %split denote the average output length, percentage of new words added, percentage of system outputs that are identical to the inputs, and percentage of sentence splits, respectively. †We used the system outputs shared by their authors.
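The Table 1 statistics (OLen, %new, %eq, %split) can be approximated from system outputs with a short script. This is a hedged sketch, not the authors' evaluation code: it assumes whitespace-tokenized text and uses a naive ". "-based sentence-split heuristic.

```python
def output_stats(inputs, outputs):
    """Approximate Table 1 statistics (whitespace tokenization; naive split heuristic)."""
    n = len(outputs)
    olen = sum(len(o.split()) for o in outputs) / n
    pct_eq = 100 * sum(i.strip() == o.strip() for i, o in zip(inputs, outputs)) / n
    new_ratios, splits = [], 0
    for i, o in zip(inputs, outputs):
        src = set(i.lower().split())
        toks = o.lower().split()
        # %new: output tokens never seen in the corresponding input
        new_ratios.append(sum(t not in src for t in toks) / max(len(toks), 1))
        # %split: output contains more than one sentence (". " as a crude boundary)
        if len([s for s in o.split(". ") if s.strip()]) > 1:
            splits += 1
    return {"OLen": olen, "%new": 100 * sum(new_ratios) / n,
            "%eq": pct_eq, "%split": 100 * splits / n}
```

In practice one would substitute a proper tokenizer and sentence splitter, but the aggregate logic is the same.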
We hypothesize that the seq2seq generation model will learn lexical and structural paraphrases more efficiently from the parallel corpus when we offload some of the burden of sentence splitting (e.g., split at comma) and deletion (e.g., remove trailing preposition phrases) decisions to a separate component. Previous hybrid approaches for simplification (Narayan and Gardent, 2014; Siddharthan and Mandya, 2014; Sulem et al., 2018c) used splitting and deletion rules in a deterministic step before applying an MT-based paraphrasing model. In contrast, our approach provides a more flexible and dynamic integration of linguistic rules with the neural models through ranking and data augmentation (Figure 1).
We compare our method to several state-of-the-art systems in both automatic and human evaluations. Our model achieves overall better performance measured by SARI (Xu et al., 2016) and other metrics, showing that the generated outputs are more similar to those written by human editors. We also demonstrate that our model can control the extent of each simplification operation by: (1) imposing a soft constraint on the percentage of words to be copied from the input in the seq2seq model, thus limiting lexical paraphrasing; and (2) selecting candidates that underwent a desired amount of splitting and/or deletion. Finally, we create a new test dataset with multiple human references for Newsela (Xu et al., 2015), the widely used text simplification corpus, to specifically evaluate lexical paraphrasing.
2 Our Approach

Figure 1 shows an overview of our hybrid approach. We combine linguistic rules with data-driven neural models to improve the controllability and diversity of the outputs. Given an input complex sentence x, we first generate a set of intermediate simplifications V = {v_1, v_2, ..., v_n} that have undergone splitting and deletion (§2.1). These intermediate sentences are then used for two purposes: (1) selected by a pairwise neural ranking model (§2.2) based on the simplification quality and then rewritten by the paraphrasing component; (2) used for data augmentation to improve the diversity of the paraphrasing model (§2.3).
Splitting and Deletion
We leverage the state-of-the-art system for structural simplification, called DisSim (Niklaus et al., 2019), to generate candidate simplifications that focus on splitting and deletion. The English version of DisSim applies 35 hand-crafted grammar rules to break down a complex sentence into a set of hierarchically organized sub-sentences (see Figure 1 for an example). We choose a rule-based approach for sentence splitting because it works remarkably well in practice: in our pilot experiments, DisSim successfully split 92% of 100 complex sentences from the training data with more than 20 words, and introduced errors for only 6.8% of these splits. We consider these sub-sentences as candidate simplifications for the later steps, except those that are extremely short or long (compression ratio ∉ [0.5, 1.5]). The compression ratio is calculated as the number of words in a candidate simplification v_i (which may contain one or more sub-sentences) divided by that of the original sentence x.
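The compression-ratio filter described above might look like the following (a sketch assuming whitespace-tokenized sentences; the function names are ours):

```python
def compression_ratio(candidate: str, source: str) -> float:
    # Words in the candidate divided by words in the original sentence
    return len(candidate.split()) / len(source.split())

def keep_candidate(candidate: str, source: str, lo: float = 0.5, hi: float = 1.5) -> bool:
    # Discard extremely short or long candidates (ratio outside [0.5, 1.5])
    return lo <= compression_ratio(candidate, source) <= hi

src = "The house was built in 1810 , when the area was farmland ."
print(keep_candidate("The house was built in 1810 .", src))  # -> True
print(keep_candidate("The house .", src))                    # -> False
```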
To further increase the variety of generated candidates, we supplement DisSim with a Neural Deletion and Split module trained on the text simplification corpus (§3.1). We use a Transformer seq2seq model with the same configuration as the base model for paraphrasing (§2.3). Given the input sentence x, we constrain the beam search to generate 10 outputs with splitting and another 10 outputs without splitting. Then, we select the outputs that do not deviate substantially from x (i.e., Jaccard similarity > 0.5). We add outputs from the two systems to the candidate pool V.
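The Jaccard-similarity filter over beam outputs can be sketched as follows (word-level sets over whitespace tokens; the helper names are ours):

```python
def jaccard(a: str, b: str) -> float:
    # Word-level Jaccard similarity between two sentences
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def filter_outputs(source: str, outputs, threshold: float = 0.5):
    # Keep only beam outputs that stay close to the input sentence
    return [o for o in outputs if jaccard(source, o) > threshold]

print(filter_outputs("a b c d", ["a b c", "x y z"]))  # -> ['a b c']
```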
Candidate Ranking
We design a neural ranking model to score all the candidates that underwent splitting and deletion, V = {v_1, v_2, ..., v_n}, then feed the top-ranked one to the lexical paraphrasing model for the final output. We train the model on a standard text simplification corpus consisting of pairs of complex sentence x and manually simplified reference y.
Scoring Function. To assess the "goodness" of each candidate v_i during training, we define the gold scoring function g* as a length-penalized BERTScore (Eq. (1)). BERTScore (Zhang et al., 2020b) is a text similarity metric that uses BERT (Devlin et al., 2019) embeddings to find soft matches between word pieces (Wu et al., 2016) instead of exact string matching. We introduce a length penalty to favor the candidates that are of similar length to the human reference y and to penalize those that deviate from the target compression ratio φ_y. λ defines the extent of penalization and is set to 1 in our experiments. φ_vi represents the compression ratio of v_i compared to the input x. In principle, other similarity metrics can also be used for scoring.
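Since Eq. (1) itself is not shown above, the following sketch assumes a multiplicative exponential penalty on the compression-ratio gap; the exact functional form in the paper may differ. Here `sim` stands in for BERTScore(v_i, y), and the function name is ours:

```python
import math

def length_penalized_score(sim: float, cr_candidate: float,
                           cr_target: float, lam: float = 1.0) -> float:
    """Assumed form: similarity discounted by exp(-lam * |cr_candidate - cr_target|).

    sim          -- stand-in for BERTScore(v_i, y)
    cr_candidate -- compression ratio of the candidate (phi_vi)
    cr_target    -- target compression ratio of the reference (phi_y)
    lam          -- penalty strength (the paper sets lambda = 1 for ranking)
    """
    return sim * math.exp(-lam * abs(cr_candidate - cr_target))
```

A candidate matching the target ratio keeps its full similarity score, while length deviations decay it smoothly.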
Pairwise Ranking Model. We train the ranking model in a pairwise setup, since BERTScore is sensitive to the relative rather than absolute similarity when comparing multiple candidates with the same reference. We transform the gold ranking of V (|V| = n) into (n choose 2) pairwise comparisons over all candidate pairs, and learn to minimize the pairwise ranking violations using a hinge loss, where g(·) is a feedforward neural network, m is the number of training complex-simple sentence pairs, k is the index of training examples, and n_k represents the number of generated candidates (§2.1). On average, n_k is about 14.5 for a sentence of 30 words, and can be larger for longer sentences. We consider 10 randomly sampled candidates for each complex sentence during training.
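A plain-Python sketch of a pairwise hinge loss over candidate scores; the margin value of 1.0 is our assumption, and the paper's trained network g(·) is replaced here by precomputed scores:

```python
def pairwise_hinge_loss(scores, gold_scores, margin: float = 1.0) -> float:
    """For every ordered pair where the gold scoring says candidate i outranks j,
    penalize the model when score[i] does not exceed score[j] by the margin."""
    loss, pairs = 0.0, 0
    n = len(scores)
    for i in range(n):
        for j in range(n):
            if gold_scores[i] > gold_scores[j]:
                loss += max(0.0, margin - (scores[i] - scores[j]))
                pairs += 1
    return loss / max(pairs, 1)

print(pairwise_hinge_loss([3.0, 1.0], [1.0, 0.0]))  # correctly ordered -> 0.0
```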
Features. For the feedforward network g(·), we use the following features: the number of words in v_i and x, the compression ratio of v_i with respect to x, the Jaccard similarity between v_i and x, the rules applied on x to obtain v_i, and the number of rule applications. We vectorize all the real-valued features using Gaussian binning (Maddela and Xu, 2018), which has been shown to help neural models trained on numerical features (Liu et al., 2016; Sil et al., 2017; Zhong et al., 2020). We concatenate these vectors before feeding them to the ranking model. We score each candidate v_i separately and rank them in decreasing order of g(v_i). We provide implementation details in Appendix A.
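Gaussian binning of a scalar feature can be sketched as follows; the bin count and the width factor are our assumptions (see Maddela and Xu, 2018 for the original formulation):

```python
import math

def gaussian_binning(x: float, low: float, high: float,
                     n_bins: int = 10, gamma: float = 0.5):
    """Represent scalar x as its Gaussian membership in evenly spaced bins
    over [low, high], yielding a dense vector instead of a raw number."""
    step = (high - low) / n_bins
    centers = [low + step * (k + 0.5) for k in range(n_bins)]
    sigma = gamma * step
    return [math.exp(-((x - c) ** 2) / (2 * sigma ** 2)) for c in centers]

vec = gaussian_binning(0.55, 0.0, 1.0)
# The bin whose center coincides with x gets the maximal membership of 1.0
```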
Paraphrase Generation
We then paraphrase the top-ranked candidate v̂ ∈ V to generate the final simplification output ŷ. Our paraphrase generation model can explicitly control the extent of lexical paraphrasing by specifying the percentage of words to be copied from the input sentence as a soft constraint. We also introduce a data augmentation method to encourage our model to generate more diverse outputs.
Base Model. Our base generation model is a Transformer encoder-decoder initialized by the BERT checkpoint (?), which achieved the best reported performance on text simplification in the recent work (Jiang et al., 2020). We enhance this model with an attention-based copy mechanism to encourage lexical paraphrasing, while remaining faithful to the input.
Copy Control. Given the input candidate v̂ = (v̂_1, v̂_2, ..., v̂_l) of l words and the percentage of copying cp ∈ (0, 1], our goal is to paraphrase the remaining (1 − cp) × l words in v̂ into a simpler version. To achieve this, we convert cp into a vector of the same dimension as the BERT embeddings using Gaussian binning (Maddela and Xu, 2018) and add it to the beginning of the input sequence v̂. The Transformer encoder then produces a sequence of context-aware hidden states H = (h_1, h_2, ..., h_l), where h_i corresponds to the hidden state of v̂_i. Each h_i is fed into the copy network, which predicts the probability p_i that word v̂_i should be copied to the output. We create a new hidden state h̃_i by adding h_i to a vector u scaled according to p_i, that is, h̃_i = h_i + p_i · u. In other words, the scaled version of u informs the decoder whether the word should be copied. A single vector u is used across all sentences and hidden states, and is randomly initialized and then updated during training. The Transformer decoder generates the output sequence from H̃ = (h̃_1, h̃_2, ..., h̃_l). Our copy mechanism is incorporated into the encoder, rather than copying the input words during the decoding steps (Gu et al., 2016; See et al., 2017). Unless otherwise specified, we use the average copy ratio of the training dataset, 0.7, for our experiments.
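The copy-scaled encoder states, h̃_i = h_i + p_i · u, reduce to a simple element-wise operation; this sketch uses plain Python lists in place of tensors, with names of our choosing:

```python
def copy_shift(hidden_states, copy_probs, u):
    """h~_i = h_i + p_i * u: shift each encoder state h_i by the shared copy
    vector u, scaled by the predicted copy probability p_i for that word."""
    return [[h + p * uj for h, uj in zip(h_i, u)]
            for h_i, p in zip(hidden_states, copy_probs)]

# A word with p_i = 0 passes through unchanged; larger p_i shifts it toward u
print(copy_shift([[1.0, 2.0]], [0.5], [2.0, 4.0]))  # -> [[2.0, 4.0]]
```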
Multi-task Training. We train the paraphrasing model and the copy network in a multi-task learning setup, where predicting whether a word should be copied serves as an auxiliary task. The gold labels for this task are obtained by checking if each word in the input sentence also appears in the human reference. When a word occurs multiple times in the input, we rely on the monolingual word alignment results from JacanaAlign (Yao et al., 2013) to determine which occurrence is the one that gets copied. We train the Transformer model and the copy network jointly by minimizing the cross-entropy loss for both decoder generation and binary word classification. We provide implementation and training details in Appendix A.
Data Augmentation. The sentence pairs in the training corpus often exhibit a variable mix of splitting and deletion operations along with paraphrasing (see Figure 1 for an example), which makes it difficult for the encoder-decoder models to learn paraphrases. Utilizing DisSim, we create additional training data that focuses on lexical paraphrasing. For each sentence pair (x, y), we first generate a set of candidates V = {v_1, v_2, ..., v_n} by applying DisSim to x, as described in §2.1. Then, we select a subset of V, called V′ = {v′_1, v′_2, ..., v′_n′} (V′ ⊆ V), that is fairly close to the reference y but has only undergone splitting and deletion. We score each candidate v′_i using the length-penalized BERTScore g*(v′_i, y) in Eq. (1), and discard those with scores lower than 0.5. While calculating g*, we set φ_y and λ to 1 and 2, respectively, to favor candidates of similar length to the reference y. We also discard the candidates that have a different number of split sentences with respect to the reference. Finally, we train our model on the filtered candidate-reference sentence pairs (v′_1, y), (v′_2, y), ..., (v′_n′, y), which focus on lexical paraphrasing, in addition to (x, y).
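The candidate filtering for data augmentation can be sketched as follows. This is a hedged sketch: the sentence-count heuristic assumes tokenized text with " . " between sentences, and `score_fn` stands in for the length-penalized BERTScore of Eq. (1):

```python
def augment_pairs(x, y, candidates, score_fn, min_score=0.5):
    """Keep (candidate, reference) pairs for paraphrase-focused training.

    x          -- original complex sentence (kept for signature clarity)
    y          -- human-written simplified reference
    candidates -- split/delete-only candidates generated from x (e.g., by DisSim)
    score_fn   -- stand-in for the length-penalized BERTScore g*(v, y)
    """
    n_ref_sents = y.count(" . ") + 1  # crude sentence count on tokenized text
    kept = []
    for v in candidates:
        if score_fn(v, y) < min_score:          # discard low-scoring candidates
            continue
        if v.count(" . ") + 1 != n_ref_sents:   # must match reference split count
            continue
        kept.append((v, y))
    return kept
```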
Controllable Generation
We can control our model to concentrate on specific operations. For split- or delete-focused simplification, we select candidates with the desirable length or number of splits during the candidate generation step. We perform only the paraphrase generation step for paraphrase-focused simplification. The paraphrasing model is designed specifically to paraphrase with minimal deletion and without splitting. It retains the length and the number of split sentences in the output, thus preserving the extent of deletion and splitting controlled in the previous steps. We control the degree of paraphrasing by changing the copy ratio.
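The candidate-selection side of this control can be sketched as a simple filter; the split counter and compression-ratio computation below are illustrative simplifications, not the system's actual implementation.

```python
def select_candidates(candidates, source, target_splits=None, max_cr=None):
    # Operation-specific selection: before the paraphrasing step, keep
    # only candidates with the desired number of splits (split-focused)
    # and/or a low enough compression ratio (delete-focused).
    kept = []
    src_len = len(source.split())
    for v in candidates:
        if target_splits is not None and max(1, v.count('.')) != target_splits:
            continue
        if max_cr is not None and len(v.split()) / src_len > max_cr:
            continue
        kept.append(v)
    return kept
```

The degree of paraphrasing is then controlled separately, by passing a different copy ratio cp to the paraphrasing model.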
Experiments
In this section, we compare our approach to various sentence simplification models using both automatic and manual evaluations. We show that our model achieves a new state-of-the-art and can adapt easily to different simplification styles, such as paraphrasing and splitting without deletion.
Data and Experiment Setup
We train and evaluate our models on the Newsela (Xu et al., 2015) 3 and Wikipedia corpora (Zhu et al., 2010; Woodsend and Lapata, 2011; Coster and Kauchak, 2011).

Table 2: Automatic evaluation results on the NEWSELA-AUTO test set. We report SARI, the main automatic metric for simplification, and its three edit scores, namely precision for delete (del) and F1 scores for add and keep operations. We also report FKGL (FK), average sentence length (SLen), output length (OLen), compression ratio (CR), self-BLEU (s-BL), percentage of sentence splits (%split), average percentage of new words added to the output (%new), and percentage of sentences identical to the input (%eq). Bold typeface denotes the best performances (i.e., closest to the reference).
articles with each article rewritten by professional editors for students in different grades. We used the complex-simple sentence pairs automatically aligned by Jiang et al. (2020). To demonstrate that our model can be controlled to generate diverse simplifications, we evaluate under the following settings: (i) standard evaluation on the NEWSELA-AUTO test set, following the methodology in the recent literature (Jiang et al., 2020; Dong et al., 2019; Zhang and Lapata, 2017), and (ii) evaluation on different subsets of the NEWSELA-AUTO test set that concentrate on a specific operation. We selected 9,356 sentence pairs with sentence splits for split-focused evaluation. Similarly, we chose 9,511 sentence pairs with compression ratio < 0.7 and without sentence splits to evaluate delete-focused simplification. We created a new dataset, called NEWSELA-TURK, to evaluate lexical paraphrasing. 4 Similar to the WIKIPEDIA-TURK benchmark corpus (Xu et al., 2016), NEWSELA-TURK consists of human-written references focused on lexical paraphrasing. We first selected sentence pairs from the NEWSELA-AUTO test set of roughly similar length (compression ratio between 0.8 and 1.2) and with no sentence splits, because they more likely involve paraphrasing. Then, we asked Amazon Mechanical Turk workers to simplify the complex sentence without any loss in meaning. 5 To ensure the quality of the simplifications, we manually selected the workers using the qualification test proposed in Alva-Manchego et al. (2020), during which the workers were asked to simplify three sentences. We selected the top 35% of the 300 workers that participated in the test. We periodically checked the submissions and removed workers who produced poor simplifications. In the end, we collected 500 sentences with 4 references for each sentence.
Existing Methods
We use the following simplification approaches as baselines: (i) BERT-initialized Transformer (?), where the encoder is initialized with the BERT_base checkpoint and the decoder is randomly initialized; it is the current state-of-the-art for text simplification (Jiang et al., 2020). (ii) EditNTS (Dong et al., 2019), 6 another state-of-the-art model that uses a neural programmer-interpreter (Reed and de Freitas, 2016) to predict the edit operation on each word and then generates the simplified sentence. (iii) LSTM, a vanilla encoder-decoder model used in Zhang and Lapata (2017). (iv) Hybrid-NG (Narayan and Gardent, 2014), 7 one of the best existing hybrid systems, which performs splitting and deletion using a probabilistic model and lexical substitution with a phrase-based machine translation system. We retrained all the models on the NEWSELA-AUTO dataset.

Table 3: Automatic evaluation results on NEWSELA-TURK, which focuses on paraphrasing (500 complex sentences with 4 human-written paraphrases). We control the extent of paraphrasing of our models by specifying the percentage of words to be copied (cp) from the input as a soft constraint.
Automatic Evaluation
Metrics. We report SARI (Xu et al., 2016), which averages the F1/precision of n-grams (n ∈ {1, 2, 3, 4}) inserted, deleted, and kept when compared to human references. More specifically, it computes the F1 score for the n-grams that are added (add), 8 which is an important indicator of whether a model is good at paraphrasing. The model's deletion capability is measured by the F1 score for n-grams that are kept (keep) and precision for those deleted (del). 9 To evaluate a model's paraphrasing capability and diversity, we calculate the BLEU score with respect to the input (s-BL), the percentage of new words (%new) added, and the percentage of system outputs identical to the input (%eq). Low s-BL, low %eq, or high %new indicate that the system is less conservative. We also report Flesch-Kincaid (FK) grade level readability (Kincaid and Chissom, 1975), average sentence length (SLen), the percentage of splits (%split), compression ratio (CR), and average output length (OLen). We do not report BLEU because it often does not correlate with simplicity (Sulem et al., 2018a,b; Xu et al., 2016).

8 We slightly improved the SARI implementation by Xu et al. (2016) to exclude spurious n-grams while calculating the F1 score for add. For example, if the input contains the phrase "is very beautiful", the phrase "is beautiful" is treated as a new phrase in the original implementation even though it is caused by the delete operation.

9 The SARI score of a reference with itself may not always be 100, as it treats 0 divided by 0 as 0, instead of 1, when calculating n-gram precision and recall. This avoids the inflation of del scores when the input is the same as the output.

Table 6: Human evaluation of 100 random simplifications from the NEWSELA-AUTO test set and the split-focused subset of the same test set. Has Split and Correct Split denote the percentage of the output sentences that have undergone splitting and the percentage of coherent splits, respectively.
* denotes that our model is significantly better than the corresponding baseline (according to a t-test with p < 0.05).

deletion as they show high self-BLEU (>66.5) and FK (>8.8) scores despite having compression ratios similar to other systems. The Transformer model alone is rather conservative and copies 10.2% of the sentences directly to the output. Although Hybrid-NG makes more changes than any other baseline, its SARI and add scores are 3.7 and 1.7 points lower than our model's, indicating that it generates more errors. Our model achieves the lowest self-BLEU (48.7), FK (7.9), and percentage of sentences identical to the input (0.4), and the highest add score (3.3) and percentage of new words (16.2%). In other words, our system is the least conservative, generates more good paraphrases, and mimics the human references better. We provide examples of system outputs in Table 9 and Appendix C.

Tables 3, 4, and 5 show the results on NEWSELA-TURK and on the split-focused and delete-focused subsets of the NEWSELA-AUTO test set, respectively. For these experiments, we configure our model to focus on specific operations (details in §2.4). Our model again outperforms the existing systems according to SARI, the add score, and the percentage of new words, which means that our model performs more meaningful paraphrasing. We show that we can control the extent of paraphrasing by varying the copy ratio (cp). Our model splits 93.5% of the sentences, which is substantially better than the other models.
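For concreteness, a stripped-down version of SARI's add/keep/del edit scores (unigrams only, a single reference; the real metric averages n ∈ {1, ..., 4} over multiple references with weighted counts) can be written as follows, including the 0/0 → 0 convention described above:

```python
def unigrams(sentence):
    return set(sentence.split())

def safe_div(a, b):
    return a / b if b else 0.0  # treat 0/0 as 0, avoiding inflated scores

def sari_edit_scores(source, output, reference):
    I, O, R = unigrams(source), unigrams(output), unigrams(reference)
    # add: n-grams in the output (resp. reference) but not in the input
    add_p = safe_div(len((O - I) & (R - I)), len(O - I))
    add_r = safe_div(len((O - I) & (R - I)), len(R - I))
    add_f1 = safe_div(2 * add_p * add_r, add_p + add_r)
    # keep: n-grams retained from the input
    keep_p = safe_div(len((O & I) & (R & I)), len(O & I))
    keep_r = safe_div(len((O & I) & (R & I)), len(R & I))
    keep_f1 = safe_div(2 * keep_p * keep_r, keep_p + keep_r)
    # del: n-grams dropped from the input; only precision is used
    del_p = safe_div(len((I - O) & (I - R)), len(I - O))
    return add_f1, keep_f1, del_p
```

This sketch omits the spurious-n-gram correction mentioned in footnote 8, which requires position-aware n-gram matching.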
Human Evaluation
We performed two human evaluations: one to measure the overall simplification quality and the other to specifically capture sentence splitting. 11 For the first one, we asked five Amazon Mechanical Turk workers to evaluate the fluency, adequacy, and simplicity of 100 random simplifications from the NEWSELA-AUTO test set. We supplemented the fluency and adequacy ratings with binary questions described in Zhang et al. (2020a) for the second evaluation, over another 100 simplifications from the NEWSELA-AUTO split-focused test set. We asked if the output sentence exhibits splitting and if the splitting occurs at the correct place. While fluency measures the grammaticality of the output, adequacy captures the extent of meaning preserved when compared to the input. Simplicity evaluates whether the output is simpler than the input. Each sentence was rated on a 5-point Likert scale, and we averaged the ratings from the five workers. We chose the majority value for the binary ratings. We used the output of our model that is tailored for sentence splitting for the second evaluation.

2-3 readability levels in NEWSELA-AUTO, which contained more lexical overlaps and inflated the scores for EditNTS.
11 We provide instructions in Appendix E.

Table 6 demonstrates that our model achieves the best fluency, simplicity, and overall ratings. The adequacy rating is also very close to that of Transformer_bert and EditNTS even though our model performs more paraphrasing (Table 2), which verifies that the changes made by our system are meaningful. Our model achieves the highest number of correct sentence splits (90%) and the highest fluency (4.19) for syntactic simplification, showing that it can generate more coherent sentence splits than the other models.
Model Analysis
In this section, we analyze the contribution of each model component and examine the system errors.
System Ablations
We evaluate our key design choices, namely candidate ranking based on length-penalized BERTScore and paraphrase generation that uses data augmentation and copy attention. Table 8 summarizes the results. Our pairwise ranking model (BERTScore_len) achieves an increase of 3.2 points in SARI when compared to choosing a random (Random) candidate. Randomly selecting a candidate also performs fairly well. Compared to our final model (Our Model), its variants without data augmentation (− augmentation) and copy mechanism (− copy attn) suffer a drop of 1.0 and 2.6 points in SARI, respectively, and a decrease of at least 3.0% in new words, which demonstrates that these components encourage the system to paraphrase. Our model trained on only DisSim (− only DisSim) or only Transformer (− only Transformer) candidates performs close to our best model (Our Model) in terms of SARI.

Table 7 examples:

Complex
The Seattle kids petitioned Washington state last year to adopt stricter science-based regulations to protect them against climate change.
Simple
The eight kids from Seattle who petitioned Washington state last year are fighting for the state to adopt stricter laws to protect their future against climate change.
Our Model
The Seattle kids asked the Washington state last year to take tougher rules against climate change.

Hallucinations (13%)
Complex
"It's more of a family than living outside," said Jessica Konczal, 33, whose husband is Sgt. Matthew Konczal.
Simple
Jessica Konczal, age 33, says the base is its own type of family.
Our Model
"It's more of a family than living outside," said Jessica Konczal, one of the protesters.

Fluency Errors (14%)
Complex
Parental feedback on the menu additions so far, from some of the early adopter markets, has been "phenomenal," Leverton said.
Simple
Leverton said parents love the new food.
Our Model
So far, parents parents have feedback on the menu changes has been a great deal, he added.

Bad Substitution (15%)
Complex
One of the device's inventors explained to the president that the machine was a prototype.
Simple
One of the girls explained it was just a prototype.
Our Model
One of the inventors told the president that the first design was a design.

Anaphora Resolution / Entailment (12%)
Complex
Sea slugs dubbed sacoglossans are some of the most remarkable biological burglars on the planet.
Simple
The sea slugs known as sacoglossans are some of the most interesting burglars on the planet.
Our Model
Scientists say these are some of the most interesting creatures on the planet.

Human Reference Errors (4%)
Complex
They float in and out of formations that combine the underwater world with the stage.
Simple
In the dance on a round stage on the seabed, the dancers swim together and apart.
Our Model
They float in and out of places that combine stage with the underwater.
Error Analysis
To understand the errors generated by our model, we manually classified 200 simplifications from the NEWSELA-AUTO test set into the following categories: (a) Good, where the model generated meaningful simplifications; (b) Hallucinations, where the model introduced information not in the input; (c) Fluency Errors, where the model generated ungrammatical output; (d) Anaphora Resolution, where it was difficult to resolve pronouns in the output; (e) Bad Substitution, where the model inserted an incorrect simpler phrase; and (f) Human Reference Errors, where the reference does not reflect the source sentence. Note that a simplification can belong to multiple error categories. Table 7 shows examples of each category.
Related Work
Before the advent of neural networks, text simplification approaches performed each operation separately in a pipeline manner using either handcrafted rules (Carroll et al., 1999; Siddharthan, 2002; Siddharthan et al., 2004) or data-driven methods based on parallel corpora (Zhu et al., 2010; Woodsend and Lapata, 2011; Narayan and Gardent, 2014). Following neural machine translation, the trend changed to performing all the operations together end-to-end (Zhang and Lapata, 2017; Nisioi et al., 2017; Zhao et al., 2018; Alva-Manchego et al., 2017; Vu …). Controllable text simplification has been attempted before, but only with limited capability (Scarton and Specia, 2018). Another long body of research focuses on a single simplification operation and can be broadly divided into three categories: (1) Lexical Simplification (Specia et al., 2012; Horn et al., 2014; Glavaš and Štajner, 2015; Paetzold and Specia, 2015, 2017; Maddela and Xu, 2018; Qiang et al., 2020), where complex words are substituted with simpler words.

… have revealed who owned the ship.
Our Model (cp = 0.6)
scientists have found a secret deal. they have discovered who owned the ship.
Our Model (cp = 0.7)
scientists have found documents in portugal. they have also found out who owned the ship.
Our Model (cp = 0.8)
scientists have found documents in portugal. they have discovered who owned the ship.

Complex
Experts say China's air pollution exacts a tremendous toll on human health.
Simple
China's air pollution is very unhealthy.
Hybrid-NG
experts say the government's air pollution exacts a toll on human health.
LSTM
experts say china's air pollution exacts a tremendous toll on human health.
Transformer_bert
experts say china's pollution has a tremendous effect on human health.
EditNTS
experts say china's air pollution can cause human health.
Our Model (cp = 0.6)
experts say china's air pollution is a big problem for human health.
Our Model (cp = 0.7)
experts say china's air pollution can cause a lot of damage on human health.
Our Model (cp = 0.8)
experts say china's air pollution is a huge toll on human health.
Conclusion
We proposed a novel hybrid approach for sentence simplification that performs better and produces more diverse outputs than the existing systems. We designed a new data augmentation method to encourage the model to paraphrase. We created a new dataset, NEWSELA-TURK, to evaluate paraphrasing-focused simplifications. We showed that our model can control various attributes of the simplified text, such as number of sentence splits, length, and number of words copied from the input.
Acknowledgments
We thank the anonymous reviewers for their valuable feedback. We thank Newsela for sharing the data and NVIDIA for providing GPU computing resources. This research is supported in part by the NSF award IIS-1822754, ODNI and IARPA via the BETTER program contract 19051600004. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of NSF, ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
A Implementation and Training Details
We implemented two separate Transformer models for the neural deletion and split component ( §2.1) and paraphrase generation ( §2.3) using the Fairseq 12 toolkit. Both the encoder and decoder follow the BERT_base 13 architecture, and the encoder is also initialized with the BERT_base checkpoint. For the neural deletion and split component, we used a beam search of width 10 to generate candidates. The copy attention mechanism is a feedforward network containing 3 hidden layers with 1000 nodes each and tanh activation, and a single linear output node with sigmoid activation. We used the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.0001, linear learning rate warmup of 40k steps, and 100k training steps. We used a batch size of 64 and the BERT WordPiece tokenizer. During inference, we constrained the beam search to not repeat trigrams and emitted sentences that avoided aggressive deletion (compression ratio ∈ [0.9, 1.2]). We chose the best checkpoint based on the SARI score (Xu et al., 2016) on the dev set, saving a checkpoint after every epoch. We did not perform any hyperparameter search and directly used the hyperparameters of the BERT-initialized Transformer described in ?. The model takes 10 hours to train on 1 NVIDIA GeForce GPU.

Our pairwise ranking model, implemented using the PyTorch framework, consists of 3 hidden layers with 100 nodes each, tanh activation, and a single linear output node. We used the Adam optimizer with a learning rate of 0.01 and 10 epochs, and applied a dropout of 0.2. For Gaussian binning, we vectorized the numerical features into 10-dimensional vectors. The model takes half an hour to train on 1 NVIDIA GeForce GPU. We did not perform any extensive hyperparameter tuning; we only examined a few values for the learning rate (0.001, 0.01, and 0.1) and chose the best based on the SARI score on the dev set. We used the original code for DisSim.
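The learning-rate schedule described above (linear warmup over the first 40k of 100k steps) can be written as a small function. Holding the rate constant after warmup is an assumption on our part, since the text specifies only the warmup phase and not any subsequent decay.

```python
def learning_rate(step, base_lr=1e-4, warmup_steps=40_000):
    # Linear warmup from 0 to base_lr over `warmup_steps`, then constant.
    # (Assumption: no post-warmup decay is specified in the text.)
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr
```

In practice a schedule like this is usually passed to the optimizer as a per-step multiplier (e.g. via a lambda-based scheduler) rather than called manually.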
12 https://github.com/pytorch/fairseq
13 https://github.com/google-research/bert
14 https://github.com/Lambda-3/DiscourseSimplification

B Annotation Interface
C System Outputs

Complex
This year, the FAA has approved dozens of permits for agricultural drone businesses.
Simple
This year, it approved dozens of permits for agricultural drone businesses.
Hybrid-NG
this year, the government has approved dozens of drone permits for agricultural businesses.
LSTM
this year, the faa has approved dozens of permits for agricultural drone businesses.
Transformer_bert
this year, the faa has approved dozens of permits for agricultural businesses.
EditNTS
this year, the government has approved dozens of permits for drone businesses for no permission.
Our Model (cp = 0.6)
this year, the faa has allowed many businesses to use drones.
Our Model (cp = 0.7, 0.8)
this year, the faa has approved dozens of permits for drones.
Complex
The room echoed with the sounds of song, the beat of drums, the voices of young men.
Simple
As she spoke, the building echoed with music and the beat of drums.
Hybrid-NG
echoed the room.
LSTM
the room echoed with the sounds of song, the voices of young men.
Transformer_bert
the room echoed with the sound of song, the beat of drums, the voices of young men.
EditNTS
the room echoed with the sounds of song, the beat of drums, the voices of young men who are hungry and legs.
Our Model (cp = 0.6)
the sound of the room was full of sounds of young men and the voices of cellos.
Our Model (cp = 0.7)
the sound of the room sounded like a lot of music, and the voices of young men.
Our Model (cp = 0.8)
the sound of the room sounded like a song, the beat of drums, and the voices of young men.

Table 11: Automatic evaluation results on a subset of the Newsela test set that focuses on paraphrasing (8,371 complex-simple sentence pairs with compression ratio > 0.9 and no splits). We control the extent of paraphrasing of our models by specifying the percentage of words to be copied (cp) from the input as a soft constraint.

We use the complex-simple sentence pairs from WIKI-AUTO (Jiang et al., 2020), which contains 138,095 article pairs and 604k non-identical aligned and partially-aligned sentence pairs. To capture sentence splitting, we join the sentences in the simple article mapped to the same sentence in the complex article. Similar to Newsela, we remove the sentence pairs with high (>0.9) and low (<0.1) BLEU (Papineni et al., 2002) scores. For validation and testing purposes, we use the following two corpora: (i) the TURK corpus (Xu et al., 2015) for lexical paraphrasing and (ii) the ASSET corpus (Alva-Manchego et al., 2020) for multiple rewrite operations. While the former corpus has 8 human-written references for 2000 validation and 359 test sentences, the latter provides 10 references for the same sentences. We remove the validation and test sentences from the training corpus. Tables 12 and 13 show the results on TURK and ASSET respectively.
Q&A. What Motivates Cyber-Attackers?
A. The need to understand the motivations of cyberattackers is great, given that "cybersecurity risks pose some of the most serious economic and national security challenges of the 21st Century" (The White House, 2009). However, the motivations behind cyberattacks intended to cause economic impacts may be different from those posing a threat to national security. And, in many cases, the real purpose and primary objective of a cyber-attack may be hidden or obscured, even if the attacker claims responsibility (Shakarian et al., 2013).
Q&A
Chen Han and Rituja Dongre
Nonetheless, to help tease out and understand common motivations, cyber-attackers may be categorized, noting that a given attacker may belong to more than one category (Andress & Winterfeld, 2011). For example, politically motivated cyber-attacks may be carried out by members of extremist groups who use cyberspace to spread propaganda, attack websites, and steal money to fund their activities or to plan and coordinate physical-world crime (Gandhi et al., 2011). The reason for non-politically motivated attacks is generally financial, and most such attacks are considered cybercrime (Andreasson, 2011), but many cyber-attacks are motivated by deeply-rooted socio-cultural issues (Gandhi et al., 2011).
As shown in Figure 1, cyber-attackers can be broadly considered "insiders" or "outsiders" (Russell & Gangemi, 1993), meaning that they act from within an organization or attempt to penetrate it from the outside.
The three basic categories of insiders are: i) disgruntled employees, who may launch retaliatory attacks or threaten the safety of internal systems; ii) financially motivated insiders, who may misuse company assets or manipulate the system for personal gain (although some insiders may be acting on ethical grounds or for other reasons); and iii) unintentional insiders, who may unwittingly facilitate outside attacks, but are not strictly speaking primary attackers (Andress & Winterfeld, 2011).
Outsiders can be classified based on their organization, motives, and professional level: organized attackers, hackers, and amateurs.
1. Organized attackers: include organizations of terrorists, hacktivists, nation states, and criminal actors. Terrorists are those who seek to make a political statement or attempt to inflict psychological and physical damage on their targets, in order to achieve political gain or create fear in opponents or the public (Howard, 1997; Lewis, 2002; Cohen et al., 1998). Hacktivists seek to make a political statement, and damage may be involved, but the motivation is primarily to raise awareness, not to encourage change through fear. Nation-state attackers gather information and commit sabotage on behalf of governments (Cohen et al., 1998); they are generally highly trained, highly funded, tightly organized, and often backed by substantial scientific capabilities. In many cases, their highly sophisticated attacks are directed toward specific goals, but their specific motives may be mixed (Cohen et al., 1998). Criminal actors are usually "organized groups of professional criminals" (Cohen et al., 1998), and they may act within complex criminal ecosystems in cyberspace that are both "stratified and service oriented" (Grau & Kennedy, 2014). Perpetrators of organized crime are typically focused on control, power, and wealth (Gragido et al., 2012).
2. Hackers: may be perceived as benign explorers, malicious intruders, or computer trespassers (Hafner & Markoff, 1991; Lachow, 2009). This group includes individuals who break into computers primarily for the challenge and the peer status attained from obtaining access (Howard, 1997). In some cases, hacking is not a malicious activity; a "white hat" hacker is someone who uncovers weaknesses in computer systems or networks in order to improve them, often with permission or as part of a contract with the owners. In contrast, "black hat" hacking refers to malicious exploitation of a target system for conducting illegal activities. In most cases, black hat hackers may be hired or sponsored by criminal organizations or governments for financial gain or political purposes. Thus, hacking can involve espionage (i.e., obtaining secrets without the permission of the holder of the information, primarily for personal, political, or criminal purposes), extortion (i.e., extracting money, property, or other concessions by threatening harm), theft (i.e., stealing valuable data, information, intellectual property, etc.), or vandalism (i.e., causing damage) (Shakarian et al., 2013; Cohen et al., 1998; Howard, 1997).
3. Amateurs: less-skilled hackers, also known as "script kiddies" or "noobs", often use existing tools and instructions that can be found on the Internet. Their motivations vary: some may simply be curious or enjoy the challenge, while others may be seeking to build up and demonstrate their skills to fulfill the entry criteria of a hacker group (Andress & Winterfeld, 2011). However benign their intentions may be, the tools used by amateurs can be very basic but powerful. Despite their lower skill levels, they can cause a lot of damage or, after gaining enough experience, may eventually "graduate" to professional hacking.
Although these categories are presented as discrete groups, there can be some overlap or difficulty placing a given situation into a particular box. For example, a group of hackers can act in a coordinated fashion, and in this sense could be considered "organized attackers." The categories of cyber-attackers enable us to better understand attackers' motivations and the actions they take. As shown in Figure 2, operational cybersecurity risks arise from three types of actions: i) inadvertent actions (generally by insiders) that are taken without malicious or harmful intent; ii) deliberate actions (by insiders or outsiders) that are taken intentionally and are meant to do harm; and iii) inaction (generally by insiders), such as a failure to act in a given situation, either because of a lack of appropriate skills, knowledge, guidance, or availability of the correct person to take action (Cebula & Young, 2010). Of primary concern here are deliberate actions, of which there are three categories of motivation (Gandhi et al., 2011).
About the Authors
Chen Han is a graduate student in the Technology Innovation Management (TIM) program at Carleton University in Ottawa, Canada. She has more than 8 years of working experience in product design, user interface design, and project management. She built and led an independent technical team that provides overall solutions and outsourcing services for various clients, including the world's top media, Internet startups, and multinational firms. Currently, she is working with the founding team of Pricebeater, a global startup offering tools for online shopping in North America.
Rituja Dongre is a graduate student in the Technology Innovation Management (TIM) program at Carleton University in Ottawa, Canada. She holds a Bachelor's degree in Electronic and Telecommunication from Nagpur University, India, and has worked as an Associate Consultant at Capgemini India.
Figure 2. Types of cyber-attacker actions and their motivations when deliberate
In Search of a Professional Identity: An Analysis of Photographic and Illustrative Images of Nurses in Japan
Purpose: Inaccurate portrayals of nurses and their roles in the media contribute to misleading public stereotypes that negatively affect the status and value of nurses. This study aims to clarify how nurses are portrayed visually on the Internet in Japan and what these images suggest about the role, value, and professional status of nurses in Japan. Methods: A descriptive mixed methods study: Forty photographs of nurses and 40 illustrations of nurses were analysed to make judgments about consistency with a contemporary profile that informs guidelines for professional education and practice. Results: Over three quarters (77.5%) of the images showed only female nurses, and 55% were depicted smiling politely. None of the images depicted nurses engaged in high-level care consistent with that of a professional dealing with typical contemporary client/patient needs. Conclusion: Our study suggests that, in common with overseas research, nurses were predominantly portrayed as feminine and caring but not as professionals capable of independent practice. Although representations were usually positive, the depictions were not entirely accurate, and this was more evident in the illustrations. Imagery can have a profound effect on ideas, attitudes, and behavior; it is imperative that nurses are depicted as appropriately educated professionals and that stereotypes such as nurses being young, subservient, and female should be challenged.
INTRODUCTION
Visual research focuses on what can be seen and perceptions and meanings attributed to an image or object (Reavey & Prosser, 2012). There has been a significant amount of research into the visual representation of nurses in the media over many years and longstanding concern over the public image of nursing as a profession (Takase, et al. 2006). The public's stereotypical images of nursing impact the status and value of nurses (Godsey, et al. 2020) in their own eyes, by other health professionals, and the public. This, in turn, impacts nurses' working conditions, approach to education and learning outcomes, visibility, leadership, ability to influence decision making, and scope of practice (Godsey, et al. 2020;Ravn et al., 2020). Nurses and the general public have stereotyped views of the nursing profession that have proved enormously resistant to change (Fletcher, 2007).
While media images are only one part of the solution to improving the image of a profession like nursing, it is, nevertheless, an important one. Image is socially and culturally constructed, and accurate portrayals of nurses and their roles are necessary in all media (Carroll & Rosa, 2016). While a lot has been written over the years on this subject, recent literature on images of nurses - as distinct from images of nursing - is scarce, suggesting that this subject should be revisited. Our mixed methods, descriptive research focuses on an analysis of how nurses are portrayed visually in Japan on the Internet. We were interested in interpreting emerging trends in the depiction of nursing and what the images suggest about nursing stereotypes and roles. The researchers were asking i) Will the full suite of contemporary nursing roles and functions in contemporary society be represented? and ii) What is the impact of the emerging trends in imagery on attracting newcomers to the profession? We assumed that expectations about levels of education for contemporary practice and the value of nursing in Japan would also be impacted.
Literature review
Research into visual representations of nurses fits within the extensive literature on the public image of nurses. Imagery pervades our world to a greater extent than in the past, from pictures in advertising, posters, and packaging (Buresh, Gordon, & Bell, 1991) to film, television, and health information. Images are potent and imbued with symbolic and emotional meaning (Buresh et al., 1991), and when a picture accompanies text, we usually view the picture first (Kang, 1997). Examination of visual images of nurses is vital because of their ability to engage the viewer and the immediacy of this method of communication (Escober et al., 2007).
Some of the earliest visual images of nurses, dating from the fifteenth century, show female nurses caring for mostly male patients in bed, and this representation of nurses wearing uniforms and working in acute areas of nursing has remained dominant (Darbyshire & Gordon, 2005; Price & McGillis Hall, 2013), while images from other fields of nursing have not been foregrounded. While researchers have examined visual imagery of nurses over a range of modalities, including film (Stanley, 2008); depictions on YouTube (Kelly, et al. 2012); children's books (Carroll & Rosa, 2016); greetings cards (Pierce et al., 2002); advertisements (Lusk, 2000); websites of American hospitals and universities (Kasoff, 2006); and the nurse as heroine in propaganda posters and postcards (Dower, 2008), none has examined cartoons and images on the Internet.
Stereotypes
The term stereotype originally described a printing method in which a metal printing plate was formed from a mould of the movable type so that the plate could be used repeatedly without alteration (History Teaching Institute, 2021). Stereotypical images of people are conventional, formulaic, and oversimplified, and particular clothing stereotypes are used as a shorthand to denote professions, for example, a white coat and stethoscope around the neck for a doctor, a nurse's cap or apron for a nurse. In the west, the stethoscope has an iconic status for doctors and medical students and has become a symbol of their high status (Rice, 2010), and anecdotal evidence suggests that the same is true in Japan. Clothing norms are used in subtle but purposeful ways to reflect varying degrees of cultural and economic capital and reflect professional boundaries in education and practice and authority (Jenkins, 2014).
Stereotyping also includes value judgments, which may be positive or negative; however, stereotyping may lead to "gross simplifications" and overgeneralization (van der Cingel & Brouwer, 2020). Stereotyping is evident in representations of nurses: some of the more stereotypical images of nurses involve 'transactions or routines', showing them performing tasks such as giving medicine to patients, taking temperatures, or assisting with personal hygiene (van der Cingel & Brouwer, 2020). The public and nurses themselves have been found to have a stereotypical view of nursing, in which nurses may be regarded as not part of a profession, as less intelligent than doctors, subservient to doctors, lacking in authority to make sound decisions and clinical judgments, powerless, and underpaid (Fletcher, 2007; Takase et al., 2006). This portrayal may be perpetuated by inaccurate visual portrayals underestimating the full extent of the role and expertise of a nurse (Carroll & Rosa, 2016). An early media study identified four commonly depicted stereotypes: ministering angel, battle-axe, naughty nurse, and doctor's handmaiden (Bridges, 1990). The nurse as heroine has also been used in propaganda posters and postcards (Dower, 2008), and in the current pandemic crisis the stereotype of nurses as self-sacrificing angels has been reignited; controversial images intended to show the sacrifice and heroism of Chinese nurses having their hair shaved were widely shared (Yan, 2020). These harmful stereotypes distort the public's concept of nursing, affect the quality and number of people who enter nursing and the decisions of policymakers, and undermine nurses' self-confidence (Fletcher, 2007; Price & McGillis Hall, 2014).
Gender has been an important factor in the stereotyped image of nurses, and problems with image "are bound to broader societal gender issues" (Fletcher, 2007). The World Health Organization (2002) defines gender as the term used to describe the characteristics of women and men that are socially constructed, while sex refers to those that are biologically determined. The social construction of gender relates to behaviors and attributes based on received ideas of masculinity and femininity. In contrast, sex refers to the biological aspects of an individual as determined by their anatomy and affected by chromosomes and hormones (Office for National Statistics, 2019).
Caring is often seen as being a feminine characteristic and nurses as being typically female (van der Cingel & Brouwer, 2020): this has its basis historically in nursing being a form of "gendered and class-bound domestic work" (Kelly et al., 2012). These ideals and values are inseparable from the identification of nursing as women's work and the societal value placed on women and caring (Fletcher, 2007). A content analysis using Goffman's (1976) indicators of gender stereotyping suggested that, although the images of nurses changed in a positive direction over the years, advertisements in journals portrayed nurses as "conciliatory rather than autonomous, and attractive rather than learned", with female nurses placed in a subservient position compared to men.
Imagery
The concept of nursing image representation is complex (Kasoff, 2006). Kasoff (2006) analysed how nurses were depicted on hospital websites in North America with the view that these images, and the captions about them, are essential for recruitment and for providing an accurate picture of a nurse's role. She developed a checklist to assess images, which included whether pictures showed nurses of different ethnicities, cultures, genders, and ages, and how they were dressed. She found that pictures of nurses and their roles were scarce and difficult to locate and concluded that, given that nurses comprise the largest health profession, they are underrepresented (Kasoff, 2006).
Most of the research on the image of nursing has been in Western countries, including the U.K., USA, Australia, and Canada (Kalisch, et al. 2007), with other research from Hong Kong and Taiwan (Kress, et al., 2018; Tzeng, 2006) and, more recently, Africa (Ndirangu, et al. 2021). There are few if any studies on the visual depictions of nurses from southeast Asia: the exception is Korean research indicating that nurses were portrayed from the perspective of femininity and mothering in hospital administration journals and television dramas (Kim et al., 2013; Park, et al., 2010).
In a country famous for manga, Japanese graphic novels, visual communication is used extensively in advertising and conveying important information to the public. This form of communication is effective because it grabs the attention, is more concise than only the written word, and simplifies communication (Nomoto, 2019) but it may also exaggerate or reinforce stereotypes and overly simplify the message.
The status of nursing in Japan
In order to examine imagery around nurses and nursing, it is first essential to ascertain the contemporary role of the nurse globally and in Japan. Nurses provide healthcare from a holistic perspective, which centers on working with the whole person (mind, body, and spirit) and preventing or treating symptoms and curing illness and disorders (Klebanoff & Hess, 2013). There are similarities and differences between the Japan Nurses Association (JNA)'s definition of nursing, which is brief and straightforward, and the International Council of Nurses (ICN) definition. Although the JNA definition does note that nurses care for patients "on their own," it also states that the nurse often acts under the direction of a doctor. The ICN definition also notes that the nurse practices with awareness and conducts research to improve nursing practice; these are elements consistent with the designation as a profession. While the JNA mentioned clinical practice in its definition, the ICN notes the nurse's role in supervising and training nursing and health care auxiliaries. Similarly, the ICN definition notes the role of patient education. A 2013 Japanese survey of the general public revealed that 79.5% (N = 2,244) of the respondents viewed nurses as kind and considerate; disturbingly, their perceptions of what a nurse's role entailed included "accurately inform your doctor of your illness" (61.5%); none of the other roles they noted extended beyond basic nursing care, kindness, and understanding (Cabinet Office, 2013). Although such care is fundamental to clinical safety and excellence, nurses also have a range of other competencies, such as the ability to lead. Asked about the impression they had of nurses as people, 64% said "being gentle and warm"; other high-rating answers included "having a bright personality", "kindness", "a feeling of cleanliness", "being reliable", and "having a sense of responsibility" (Cabinet Office, 2013).
Less frequent answers included "being snappy", "scary", "cold", "strict", and "unfriendly" and in relation to what desirable qualities a nurse should have, none mentioned intellectual ability; "being kind" and "compassionate" was seen as the major attribute (Cabinet Office, 2013).
Conceptual framework
The basis of nurses' self-concept and professional identity is "public image, work environment, work values, education, and traditional social and cultural values" (Hoeve, et al., 2014). A negative image of nursing affects the quality and quantity of people attracted to the profession (Kalisch et al., 2007), and the perception of nursing as a female profession affects recruitment (Fletcher, 2007;Seago, et al., 2006). It impacts the public's view of nurses' role and professionalism and the policy maker's allocation of resources and decisions about salary (Kalisch et al., 2007).
Various methodologies have been used to research the image of nursing. Koo and Lin (2016), in 2014, set out to explore how the nursing profession was being portrayed on the Internet by two popular sources of photographic images. They analysed the first 100 images obtained using the search term "nurse" on both Google Images and Shutterstock but searched only photographs. The visual content of the retrieved images was analysed for attributes such as sex, ethnicity, eye contact with the camera, facial expression, posture, whether the nurse was shown in the presence of a patient or other medical personnel, whether the nurse was shown touching a patient, the presence of a stethoscope, what the nurse was holding, what activity the nurse was engaged in, and the image's context. Goffman's (1979) classic study of latent meaning in advertisements also guided the analysis. He maintained that subtle details in images, rather than their manifest content, can reveal assumptions about gender. Thus, in addition to Koo and Lin's methodology (2016), and following Lusk (2000), we used specific, detailed criteria to categorize images to facilitate objective, unambiguous interpretation and to allow for replication. We carefully noted context, positioning, eye contact, and spatial arrangement of the images. This study was designed to explore how nurses are portrayed on Google in Japan in the contrasting media of photographs and illustrations. Previous research (Lusk, 2000) led us to anticipate that nurses would be portrayed as young women and would be depicted in a subordinate status relative to male physicians. Cartoon or drawn images of nurses have not been analysed in previous research. This study includes these because they are commonly used in teaching materials and conference PowerPoints in Japan and elsewhere. Only Google Images was used for the search because Google is a commonly used search engine.
METHOD
Following a small-scale pilot study using the intended search terms to standardize the process and identify potential problems, the student researchers (HA & MI) used two sets of search terms, "Japan + nurse" and "Japan + nurse + illustration", on Google Images. The first 40 images retrieved using these search terms from Google Images on a single day in June 2021 were analysed. Having acquired the 40 images, the trends were clear, so the researchers believed they had a representative sample that would allow for analysis, that is, interpretation of messages conveyed through the choice of imagery. In addition, the "SafeSearch" filter, which helps to block explicit images from the search results, was activated.
The images, website addresses, and attributes for analysis were recorded on an Excel sheet (Supplementary Table 1) and discussed with the supervisor, a PhD-qualified nurse researcher.
Inclusion and exclusion criteria
The criteria for the inclusion of images were based on the literature review and consideration of the Japanese context of the study. Included were images that related to cosplay; actors/actresses playing nurses; nurses depicted on university and hospital sites; and nurses in films and news items. Excluded were historical images of nurses, apparently non-Japanese people, images from non-Japanese sites, and advertisements for nurses' uniforms/costumes.
Analysis
The retrieved images were analysed for several attributes: context, gender, age, facial expression, eye contact with observer, posture, mask, nurses' cap-wearing, other elements, care activities, and other people present. Additionally, the underlying aim to assess how nurses were presented in the media and whether the full suite of contemporary nursing roles and functions in contemporary society was represented was considered in the analysis. There was consideration of what the image suggested about the role/value/gender of nurses in Japan.
For this study, gender was defined as culturally established categories of sex, such as female or male, and the term gender display was defined as conventionalized portrayals of these correlates (Kang, 1997). Some elements of gender display were used in the analysis of the images, including the relative size of males and females; function ranking, which refers to the status of a function performed by males/females (typically, the male will be depicted performing the more prestigious or executive function); and ritualization of subordination, in which postures of deference include lowering oneself physically or adopting other postures of subjugation (Kang, 1997).
Trustworthiness
Trustworthiness is concerned with the extent to which the study is conceptually sound, based upon which the value of the qualitative research can be judged (Bowen, 2009). To enhance the trustworthiness, rigor, and transparency of the research, an audit trail was created. This notated how the study was carried out and conclusions made (Carcary, 2020).
Following Rodgers and Cowles (1993), this included methodological decisions, procedures for data collection and analysis, and self-awareness of the researcher. The student researchers, translator (JM), and supervisor (TS) shared weekly reflections by email, and decisions made at meetings were recorded on this email trail, and the documentation retained. This study followed the COREQ checklist criteria for reporting qualitative research (Tong, et al., 2007).
In common with other qualitative research, the research design changed through the processes of data collection and analysis and required us to alter decisions (Carcary, 2020). Following a pilot data collection, it was decided that, unlike Koo and Lin (2016), we would collect only 40 photographs and 40 illustrations because we planned to describe them in more detail. There was extensive discussion of the disparity between the Japanese and Western researchers in interpreting the illustrations. As a result, the decision was made that the Japanese researchers would do the initial coding, and discrepancies were to be discussed following their interpretation.
Ethics
Ethics approval was not required for this research because information was freely available in the public domain.
Findings
Eighty images were retrieved from the internet search: 40 of these were photographs and 40 illustrations (Supplementary Table 1). The majority (n = 23) of the 40 photographs were derived from hospital websites; seven (P1, 9, 18, 19, 22, 29, 30) from health organization websites (e.g., Japanese Nursing Association, Japan Association for Emergency Nursing, and Japan Council of Nursing Programs Colleges); six (P15, 27, 32-35) related to news items; three (P13, 37, 38) from university websites; and one (P26) was an educational item regarding a nursing textbook.
Of the 40 illustrations, 24 were taken from a free illustration site and seven from a commercial website entitled "Nursing-roo!" that claims to be a media support site aimed at supporting nursing students and young nurses (2021). The site shares nursing information, medical news, a nursing terminology dictionary, tips for the national examination, and nursing technology videos, and hosts bulletin boards, lifestyle articles, manga, psychological tests, and omikuji (fortune-telling) (Nursing-roo!, 2021). Four of the remaining illustrations (D21, D26, D30, D38) were from a health staff recruitment website; two (D23, D27, D28) from the front covers of nursing textbooks; one (D28) from a nurse's autobiography; and two (D10, D34) from a website dedicated to collating illustrations drawn by the general population, for whom nurses are a popular subject.
The attributes of the retrieved images are summarized in Table 1.

Table 1. Attributes of nurse images from Google images of "Japan + nurse" and "Japan + nurse illustration"
Gender
Thirty-nine of the 40 photographs depicted female nurses. One photograph (P33) was unclassified because the person was photographed from a distance with the face obscured. Twelve photographs included male nurses.
All illustrations included a female nurse. The female nurses were shown with males in eight illustrations (D2, D15, D23, D26, D28, D29, D30, D36). Of the 16 males depicted (Supplementary Table 1), only five (D2, D15, D26, D29, D30) were judged to be nurses. The other 11 males included six patients (D23, 26, 28, 36), one of which was a baby in a humidicrib; three figures were judged to be doctors (D15, 28, 30); and one (D15) an allied health staff member. We were unable to classify the intended identity of one male figure (D26).
There was little difference in the relative size of males and females in the photographs, although naturally the male nurses were somewhat taller than their female counterparts. In contrast, P39 shows the only male nurse at the back and to the side of the photograph, while P17 and P30 have males to the side of the photographs with females in the central, dominant position. Neither males nor females were shown deferring to each other.
With regard to the illustrations, there was little difference in the relative size of males and females, except for D30, which depicts a male doctor as larger than the female nurse in the foreground and a taller male nurse with a female nurse in the background, and D2 and D26, which show the male nurse as being somewhat taller than the female.
In 22 of the 40 illustrations the nurse is depicted as looking at the viewer with eyes closed (D1, 2, 5, 19, 20, 24, 26, 28, 31, 35, 36); at other people in seven illustrations (D20, 23, 26, 27, 28, 30, 37); at some other point in three illustrations (D13, 21, 32); at a laptop computer in one (D11); and at a paper in one (D38). Two of these (D13, 32) showed the character with a "thought bubble" symbol to depict thinking. In six of the seven illustrations where the nurse was looking at another person, it was at a patient (D23, 26, 27, 28, 30, 37) because they were undertaking care activities, and the other (D20) showed a junior nurse looking intently at a senior nurse.
All of the illustrations showed nurses in uniforms: most of these were white, and a small minority were pink or pale blue. A small majority of the female nurses wore uniform dresses, but several wore tunics and trousers (D2, 11, 12, 19, 20, 21, 23, 27, 29). The other twenty-one illustrations showed figures from the waist up, and it was impossible to ascertain what they were wearing. In contrast to the photographs, 13 of the illustrations (D1, 3, 8, 10, 14-16, 18, 24, 25, 34, 37, 39) pictured nurses wearing caps, and only one (D11) showed a nurse wearing a mask.
Care activities
Only two photographs showed nurses engaged in patient care: one (P9) depicted a nurse taking a blood pressure and the other (P22) a nurse pushing a patient in a bed. There were only six illustrations of nurses conducting patient-related activities. Nurses were shown talking with a patient (D23, 28, 30); pushing a wheelchair (D26); massaging a patient's shoulders (D27); hugging a baby (D28); checking a patient's health (D28); and washing a patient's hair (D36).
Other people present
Thirty-six photographs show nurses without other people. In those depicted with other people, one (P7) shows a nurse with a baby in an incubator; one (P15) with television presenters; one (P19) with a doctor; and one (P22) with a patient lying in bed.
Thirty-three illustrations showed only a nurse. Of those with others, six (D23, 26-28, 30, 36) were with a patient; three (D15, 28, 30) were with a doctor; three (D15, 23, 26) were with other health staff; and one (D26) was with people dressed in suits.
Ethnicity
The inclusion criteria dictated that only Japanese nurses and nurses from Japanese websites were included. Therefore, all photographs were of Japanese nurses.
DISCUSSION
Our research set out to examine how nurses are portrayed in visual images on the Internet in Japan and what the images suggest about the stereotypes, role, and value of nurses in Japan. Our findings suggested that nurses in Japan were portrayed in Google photographs and illustrations predominantly as young women in traditional hospital settings and were not depicted engaged in highly skilled/professional care; the photographic images of nurses were assessed as being more representative of contemporary Japanese nursing than were the illustrations.
The majority of the photographs analysed in this small-scale study were derived from hospital websites; the target audience for these is the general public as well as health professionals. Few nurses were shown in a clinical setting, but those that were all appeared in typical acute inpatient settings. It is likely that the intent of the photographs is to showcase positive aspects of the hospital, familiarize the public with the services, and lower the anxiety of potential patients. Data collection was conducted in the second year of the COVID-19 pandemic; of the 40 photographs, six were news items, and of these, five (P27, 32, 33, 34, 35) related to COVID-19. It was pleasing to note that several of the photographs showed nursing leaders - presidents of nursing organizations - and Certified Nurses and Certified Nurse Specialists who have extended clinical roles. It is likely these represent public relations news releases and hospitals showcasing their specialist staff.
Many of the illustrations were derived from sites providing stock images for public use. A significant number came from a commercial site, https://www.kango-roo.com (Nursing-roo!, 2021), with a character called kangoru, a slightly clumsy nurse targeting nursing students and young nurses. The Japanese word for nursing is kango, and the similarity in pronunciation of kango and kangaroo may have led to the creation of the character. The stated goal of this character is to make all nurses happy, and she has a range of nursing tips packed in her "tummy pocket" (Nursing-roo!, 2021). While the site may offer practical advice along with fun activities such as fortune-telling and cartoons, the "kawaii" illustrations may reinforce the stereotype of cute but unprofessional female nurses.
The nurses depicted in photographs in our study fitted the stereotype in terms of being predominantly female and predominantly young, although the photographs also depicted more mature female nurses. This does reflect the reality that women constitute the majority of Japan's nurses: in 2018, male nurses accounted for only 7.8% of the total nursing workforce (Ministry of Health Labour and Welfare, 2019). Linguistically, the word for nursing was gendered: it was as late as 2001 that the name for nursing in Japanese changed from kangofu (nursing woman) to kangoshi (nurse), a gender-neutral term (Sakashita, 2018). Although the sparse representation of male nurses in photographs and illustrations may represent the reality, it also reinforces a stereotype of nurses as female and may discourage men from considering nursing as a career choice. There was little to suggest differences in function ranking or ritualization of subordination (Kang, 1997), although P6 shows the lone male nurse in the central position and P36 shows the male nurse instructing the females; neither males nor females were shown in definite roles or were shown deferring to each other.
Western articles have discussed a range of stereotypes associated with nursing, including angel, battle-axe, and nymphomaniac or sex object, and "the empty-headed nurse who is kind but dumb" (Darbyshire & Gordon, 2005), capable only of being a doctor's handmaiden. None of the sampled photographs showed these explicit stereotypes; all the nurses were dressed professionally and, although none were shown performing anything that highlighted clinical expertise or competence, they were not depicted performing trivial tasks. None of the images identified in our research showed a battle-axe image, although three illustrations showed angry nurses.
In contrast, the illustrations typified stereotypes of "kind but dumb" and pretty nurses. Closed eyes often denote smiles or winking in Japan, and the characters are shown with closed or wide eyes. Most figures tended to have large heads, which render them younger and more childlike. The smiling females look cute (kawaii) rather than intelligent, submissive and willing rather than autonomous or professional, and none were depicted in any challenging clinical activities. The Japanese culture of kawaii or cute pervades manga and is evident in many of the illustrations of nurses in our study: kawaii is used to describe an aesthetic of "adorable beauty and innocent attractiveness", cheerfulness, and purity, as well as being mute, insecure, helpless, and bewildered (Cheok & Fernando, 2012; Kinsella, 2015). Manga characters express exaggerated emotions and are frequently drawn with two dots for eyes and a smiling mouth (Kinsella, 2013), as in many of the nurse illustrations. The head of many illustrations is disproportionately large in relation to the size of an adult body, and this elicits a nurture response because the proportions are typical of a baby (Cheok & Fernando, 2012). Stereotypical gestures were also common among the images, with nurses frequently shown punching the air in triumph, gesturing helpfully, looking shocked, or coming to sudden realizations.
Two illustrations (D10 and 34) conform to the sexualized stereotype of female nurses looking beautiful and ultra-feminine. Two other illustrations are suggestive of medical dominance: in one (D15), nurses stand respectfully with hands crossed submissively in front of them, to the side of the central medical figures; the male doctor is in a dominant stance, and he, the female with a clipboard, and the allied health workers all have stethoscopes and more dominant stances. The other (D28), concerningly, is the cover of a first-person account of nursing showing a medical doctor front and center. This accords with Lusk's (2000) finding, in her study of Western hospital administration journals from 1930-1950, that nurses were pictured in a subordinate status relative to male physicians.
Our study suggests that, in common with overseas research, nurses were predominantly portrayed as feminine and caring but not as leaders or professionals capable of independent practice (Stanley, 2008) and that, although the representations were usually positive, the depictions were not accurate (Brien & McAllister, 2019). Our research found nurses were presented as caring and trusted yet lacking the influence and autonomy consistent with that of a professional, as did Godsey et al. (2020). As Fletcher stated, "problems with [nursing] image are tied to the broader problem of gender" (2007, p. 210), and the status of nursing is linked to the low status of women in Japan. This gendered presentation of nurses can result in men rejecting nursing as a career and having difficulty identifying themselves as professional nurses (O'Connor, 2015). It has been noted that, instead of portraying nurses as autonomous professionals with a "distinct scientific discipline," media depictions tend to sexualize nurses and present them as subservient to other professionals in outdated, stereotypical roles (Lusk, 2000; McNally, 2009; Stanley, 2008). Our research distinctly showed nurses in outdated and stereotypical aspects of nursing care and, in many illustrations, wearing caps, which are no longer used in Japan.
Unfortunately, nursing professionalism and concomitant educational preparation for practice in Japan have not developed to the extent they have in many other countries; autonomy is limited by the Japanese Nursing Law, which defines clinical scope and legally prescribes nurses' work as under medical doctors' orders (Tanaka, et al., 2014). Other Japanese nurse researchers agree that professionalization is limited and nurses' work is largely "care work" (Asakura, 2015; Takeshita, Iwawaki & Matsuoka, 2011). Unless the situation has changed since these papers were published, there should be a concerted effort to advance the status of nursing as a profession in Japan. Nurses have Codes of Conduct and Ethics as well as Standards for practice: this is consistent with the status of a profession (Fitzgerald, 2020). Unless nurses unpack their beliefs, appreciate their levels of responsibility and accountability around their actions, and promote education programs with outcomes consistent with those of the professions, they will seek avoidance of 'discomfort' and choose to remain subservient.
Attire, Masks, and Caps
The majority of the photographs and illustrations showed female nurses wearing white uniforms of some description. White is symbolic of purity, and angels are typically associated with white, but it would be a stretch of interpretation to assert that this is associated with the symbolic stereotype of "the saintly purity of the nurse as angel" (Darbyshire & Gordon, 2005). Japan has been described as "the land of uniforms" (McVeigh, 2000), and uniforms are ubiquitous in Japan for many professionals, service staff, and factory workers and make a person's role readily apparent. It is therefore not surprising that most of the photographs and illustrations depicted nurses wearing uniforms, including those of advanced-level Certified Nurses and Certified Nurse Specialists, who have a broader scope of practice than registered nurses. The illustrations depicted more nurses wearing caps and less of a range of clothing than did the photographs.
Our findings suggest that recently published photographs of nurses on the Internet in Japan are more representative of contemporary nursing in Japan. In contrast, in 12 of the 40 illustrations of nurses, there was at least one figure wearing a cap. It appears that the stereotype has not caught up with contemporary Japanese nurses' attire, as the cap was phased out in 2007. Illustrators use stereotypes, such as the nurses' uniform and cap, as a type of symbol and a shorthand way to represent groups of people or complex ideas (History Teaching Institute, 2021). Previously a cap was an almost universally recognized symbol of being a nurse; originally, caps or veils were styled following a nun's habit and were designed to cover the wearer's hair. Capping ceremonies were a solemn rite of passage and marked the end of nurse training (Sibbald, 2001). These are still held in some countries, but in most, caps have lost their symbolic meaning as a sign of accomplishment and have been abolished to promote a professional image (Sibbald, 2001).
Data were collected during the ongoing COVID-19 pandemic, and undoubtedly this influenced the photographs retrieved from the Google search, in terms of nurses wearing masks in clinical settings and of nurses featured in news items. Previous researchers have noted that nurses have been largely invisible in news items about healthcare (Buresh et al., 1991), but one of the effects of the current COVID-19 pandemic has been an almost threefold increase in media coverage of nursing, and the predominant imagery has been of heroic nurses, battling nurses, and nurses sacrificing themselves for their patients (Bennett et al., 2020). Some of the most striking imagery has been of exhausted nurses, with marks on their faces after hours of mask-wearing, and the sacrifice of female Chinese nurses having long hair shorn before working on COVID-19 wards (Stevenson, 2020). In our research, this is exemplified in P33, a picture suggestive of an exhausted and overwhelmed nurse during the pandemic.
A frequent theme that emerged in our analysis and discussion about the illustrations was that the Japanese students repeatedly shared in our work together that they had not thought deeply about the illustrations or what they may represent. It is likely that this is because, whereas verbal information is processed rationally and consciously, visual imagery is perceived and partially processed preconsciously (Buresh et al., 1991). Cultural factors and familiarity are also likely to play a part. Therefore, the nursing profession and nursing students need to be aware of and critically reflect on the implicit meanings of imagery in the media and ensure the public is given a realistic and accurate description or profile of them and their profession (Stanley, 2008). The divergence in opinion between the British/Australian researcher and her Japanese colleagues about the ethnicity of the illustrated characters, based on hair color, skin tone, and face shape, has been previously noted by Lu (2009). Japanese anime has its own characteristics and stylization: the characters often have colorful hair, long eyelashes, small mouths and noses, and large expressive eyes, with female characters often having large breasts, and facial coloring is usually white with a tint of yellow, pink, or brown (Ocansey, 2018). Lu (2009) reported that, although the ethnicity of more than half of the 341 anime characters examined was initially designed to be Asian and only a tiny fraction was intended to be Caucasian, many were perceived as Caucasian by the predominantly Caucasian raters. Raters tended to perceive the characters as being of the same racial group as themselves; thus, Asian raters saw more characters as Asian and Caucasian raters saw more as Caucasian (Lu, 2009). Images may not always have a fixed or transparent meaning because understanding is derived from the context (Van Leeuwen, 2005). Similarly, the values communicated will also be dependent on shifting cultural contexts.
Limitations
The age of nurses depicted on the Internet was based on estimates. As previously mentioned, the classification of profession and age was necessarily subjective. A few photographs and illustrations were not straightforward to classify. Judging the intended profession of figures depicted with nurses ironically involved some reliance on using stereotypes. For example, in P19, a man on the far right and a woman in the middle are shown wearing white coats but we judged only the male to be a doctor. Sample size and methodology mean that the findings should be interpreted with caution. The strength of this study is that it is the first to examine nurses' image in Japan and include illustrations of nurses.
Implications
The increasing role of social media in our lives, along with digital developments that generate a 'world full of images', makes visual methodologies increasingly relevant (Ravn et al., 2020) and the exploration of images of nurses of continued relevance. Nurses need to be aware that the image of nursing is socially and culturally constructed (Carroll & Rosa, 2016) and that imagery can have profound emotional and attitudinal impacts (Buresh et al., 1991). Nurses should ensure they circulate accurate contemporary images of nurses and their role and challenge stereotyped images when they are encountered (Fletcher, 2007). Realistic and contemporary images of nurses are vital to attracting people to nursing and may also be linked to job satisfaction and workforce retention (Price & McGillis Hall, 2013; Takase et al., 2006) and self-identity (Lusk, 2000). The images selected by universities and websites help shape the public's view of the profession and create the first impression for prospective applicants as to the nature and value of the nursing profession (Escober et al., 2007), but our study suggests that this imagery is not realistic or contemporary. It is also possible that images influence expectations around the nature of education appropriate for contemporary practice, i.e., developing the ability to collaborate with patients and peers, to question therapeutic processes and outcomes, to think critically in order to make informed judgments about evidence for optimal care processes, and to contribute to innovation in systems and processes.
Working with the media to help create more realistic and contemporary portrayals of the whole range of nursing's work can foster a more balanced view of healthcare and professional education. Darbyshire and Gordon (2005) pointed out that criticism alone cannot rectify the situation and that nurses should be proactive in providing media outlets with positive and acceptable alternatives. It is not only images generated by others that may misrepresent nurses; images used by nurses themselves may also do this (Kelly et al., 2012), and nurses need to be mindful of the images they use.
CONCLUSIONS
Visual representations or images carry a more immediate emotional impact than written and spoken words, and visual images are perhaps particularly pertinent in Japan, with its unique and visually oriented culture. Profiles of the profession of nursing and the roles of nurses are ever evolving; examining how nurses are portrayed visually in the media will remain a topic that needs revisiting. Our research suggests that photographic representations of nurses on Google Images showed a more contemporary and balanced view of nursing than did illustrations of nurses. If there is one positive to come out of the devastating COVID-19 pandemic, it is to foreground the importance of nursing and show nurses as essential, well-informed health professionals.
Nurses have been trained or 'groomed' to believe that they were only 'givers' in a transactional way and under the direction of others, and the illustrations we sampled reflect this. If nurses value education and assume the full suite of their professional roles and functions, they will see that they do make decisions with and on behalf of patients and that they do behave autonomously.
Nurses, nurse educators, and managers themselves have a responsibility to consciously select images that reflect the contemporary professional roles of nursing in their presentations, on websites, and in promotional material drawn from a range of nursing contexts; to represent the diversity in gender and context of the current nursing workforce; and to promote research and education programs that prepare professionals for evidence-based practice. Adoption of curriculum designs, such as a practice-based philosophy and methodologies that incorporate stimulus (learning) material based on the reality of practice, the full suite of abilities, and a range of contexts of practice, can only enhance the likelihood of alternative imagery around the profession of contemporary nursing.
SUPPLEMENTARY MATERIALS
Further details on supplementary materials are presented online (available at 10.24313/jpbl.2022.00206).
Designing Computer-Supported Complex Systems Curricula for the Next Generation Science Standards in High School Science Classrooms
We present a curriculum and instruction framework for computer-supported teaching and learning about complex systems in high school science classrooms. This work responds to a need in K-12 science education research and practice for the articulation of design features for classroom instruction that can address the Next Generation Science Standards (NGSS) recently launched in the USA. We outline the features of the framework, including curricular relevance, cognitively rich pedagogies, computational tools for teaching and learning, and the development of content expertise, and provide examples of how the framework is translated into practice. We follow this up with evidence from a preliminary study conducted with 10 teachers and 361 students, aimed at understanding the extent to which students learned from the activities. Results demonstrated gains in students' complex systems understanding and biology content knowledge. In interviews, students identified influences of various aspects of the curriculum and instruction framework on their learning.
Introduction
Ranging from a single fertilized egg to the climate of the entire planet, complex systems are part of the fabric of our natural and social environments. Knowledge of these systems is essential for a nuanced understanding of the world around us. Recognizing the importance of complex systems, the Next Generation Science Standards (NGSS) prominently feature them in their goals for educating K-12 students in the United States in the Cross Cutting Concepts, featuring topics such as Systems and System Models; Energy and Matter: Flows, Cycles, and Conservation; and Stability and Change. In the pure and applied sciences, the National Academies Keck Futures Initiative [1] has stressed the need to investigate the fundamental science behind these systems, as well as their benefits to society. However, the very nature of complex systems, with their many moving parts, nonlinear relationships, and unpredictability, makes them challenging to teach and to understand. To address these challenges, we present a computer-supported curriculum and instruction framework for teaching and learning about complex systems in high school science classrooms. In this design, development, and implementation project, we describe the features of the framework and document our work in translating it into practice in high school biology classrooms. We provide evidence from a preliminary study, aimed at gauging the extent to which students learned from these curriculum and instruction activities, which shows promising outcomes for further exploration and scaling.
Teaching and Learning about Complex Systems: Motivations and Theoretical Considerations
Although complex systems vary in their physical components, a common feature of all such systems is the presence of multiple interconnected elements that are bounded by structure and function. The interactions among the elements form a collective network of relationships that exhibit emergent properties not observable at subsystem levels. When perturbations occur, the network may self-organize in unpredictable ways, allowing new properties to emerge. The manner in which complex systems communicate, respond to perturbations, and self-organize is understood by studying the evolution of dynamic processes [2,3]. For example, human activities such as the burning of fossil fuels that emit carbon dioxide have impacted the carbon cycle in the atmosphere, which has in turn affected global weather systems in ways that are difficult to predict, e.g., when and where hurricanes will strike. The lower-level interactions between human behavior and the environment are perturbations in normal atmospheric carbon cycling that cause the system to adapt and self-organize, which emerges as extreme weather patterns at a macro level. What makes complex systems challenging to understand is that the mechanisms that fuel them, such as adaptation and self-organization, are often hidden and also take place over a long period of time. Nevertheless, learning about complex systems is a critical undertaking because so many human and environmental issues depend on our ability to investigate and manage them, e.g., the spread of disease, power grid robustness, and biosphere sustainability [1].
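The dynamic just described, local interactions producing macro-level patterns that no individual rule mentions, can be made concrete with a toy simulation. The sketch below is not drawn from the paper's materials; it is a minimal, illustrative agent-based model of disease spread (one of the issues named above), in which each grid cell follows only local infection and recovery rules, yet an epidemic curve emerges at the system level:

```python
import random

random.seed(42)

GRID = 20        # agents live on a GRID x GRID torus
STEPS = 60
INFECT_P = 0.25  # chance a sick neighbor infects a susceptible one per step
RECOVER_T = 10   # steps an agent stays infected before recovering

# Per-cell state: 'S' susceptible, 'I' infected, 'R' recovered.
state = [['S'] * GRID for _ in range(GRID)]
timer = [[0] * GRID for _ in range(GRID)]
state[GRID // 2][GRID // 2] = 'I'  # a single initial case

def neighbors(r, c):
    """The four orthogonal neighbors, wrapping at the grid edges."""
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        yield (r + dr) % GRID, (c + dc) % GRID

history = []  # number of infected agents at each step
for _ in range(STEPS):
    new_state = [row[:] for row in state]
    for r in range(GRID):
        for c in range(GRID):
            if state[r][c] != 'I':
                continue
            timer[r][c] += 1
            if timer[r][c] >= RECOVER_T:
                new_state[r][c] = 'R'
            for nr, nc in neighbors(r, c):
                if state[nr][nc] == 'S' and random.random() < INFECT_P:
                    new_state[nr][nc] = 'I'
    state = new_state
    history.append(sum(row.count('I') for row in state))

# The purely local rules typically yield a rise-and-fall epidemic curve
# at the macro level; no single rule mentions "epidemic" or "peak".
```

Plotting `history` reveals the characteristic epidemic curve; classroom platforms such as the ones discussed below animate this kind of model step by step, which is what makes the emergent pattern visible to students.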
In order to understand the hidden dimensions and mechanisms of systems, scientists use computer simulations to visualize the dynamics, capture relational data at large scales, and make predictions [1]. In developing our computer-supported curriculum and instruction framework for complex systems, we are motivated by two strands of research in science education and the learning sciences: research documenting the value of computer simulations in the study of complex systems at the high school level, and research calling for the need to identify which facets of new curriculum and instruction activities are the most effective and efficient for the broadest possible audiences.
Using Computer Simulations to Teach Complex Systems
Computer modeling of complex systems, particularly within science education, has been an emerging focus in educational and learning sciences research. The ability to visualize patterns based on non-intuitive processes such as emergence or decentralization [4,5] is thought to be better enabled through computational simulations than through static images or descriptions found in textbooks and other traditional curricular resources [6]. Several software applications and associated curricula, including StarLogo, NetLogo, Biologica, and handheld Participatory Simulations [6-11], have been created to teach complex systems and models. These projects, which include several of our own, represent initial steps in the design and development cycle of computational and curricular supports intended to improve students' understanding of complex systems. Much of this initial work focused on developing simulations of phenomena and understanding how students learn from the simulations. However, more work is needed in order to fully integrate these supports with existing classroom resources and to align them with new NGSS reforms. Specifically, we need to develop frameworks for implementation that can accommodate a range of variables, such as availability of computers and teacher knowledge, that influence classroom practice. In a recent National Research Council [12] report on implementing NGSS, the authors write, "Teachers need resources...that bring together target core ideas, crosscutting concepts, and practices. Such resources may include text materials, online resources (such as simulations or access to data), materials and equipment for investigative and design activities, and teacher manuals that include supports for pedagogical strategies needed to enact these lessons." (p. 53)

At the time of the writing of that report, the authors suggested no curricular resources had been explicitly built for the NGSS, although we do know now that a number of initiatives are well underway, for example, Amplify Science and iQWST for elementary and middle school levels (see https://www.amplify.com/ and http://www.activatelearning.com/iqwst/). In the work reported here, we aim to support systems learning by incorporating many of the scientific practices and crosscutting themes articulated in the NGSS for high school science.
Understanding Implementation of Educational Tools for Teaching Complex Systems
Our study is also motivated by a need to determine which curriculum and instruction activities have the potential to succeed in a variety of learning environments and to influence the greatest number of students and teachers. This research challenge has been cogently described by Penuel and his colleagues [13]. To illustrate the necessary design elements for innovative educational tools and practices to take hold, Penuel's team developed a design-based implementation research (DBIR) framework. The first element of the DBIR framework is the formation of teams of multiple stakeholders who are focused on persistent problems of practice. Such collaborative teams of researchers and practitioners should address important problems that present challenges to implementation, with practitioners having a say in defining the problems. The second element is a commitment to iterative collaborative design, in which teams focus their efforts on designing, collecting and analyzing data, learning from, and redesigning project activities that may address curriculum construction, professional development (PD), and organizational change. The third element entails developing theories about how both students and teachers learn in particular contexts, and the activities, resources, and organizations that support this learning. Finally, the fourth element of the DBIR framework concerns developing capacity for sustaining change within systems through intentional efforts to develop organizational infrastructures.
The DBIR framework has guided us throughout the design and implementation of our study. In constructing and carrying out our curriculum and instruction framework, for example, we attended a great deal to the first two DBIR elements (see Section 2.1 for more description about how we worked with teachers as collaborators in the design). We addressed the third element by incorporating a number of theories (see Section 1.2.2) on how people learn in particular contexts (e.g., teachers in PD) with particular resources (e.g., computer simulations). We build on previous research on integrating curricula and tools into science classroom instruction (e.g., [14]) and professional development activities that support these efforts (e.g., [15]), with the goal of contributing new insights and extensions to these theories and research. Finally, in this article, we set our sights on the fourth DBIR element by offering a modular framework for other researchers and practitioners to adopt, assess, and adapt when building robust learning opportunities centered on complex systems for use in science classrooms.
Research on Science Instruction and Modeling Tools
In developing a curriculum and instruction framework for teaching and learning about complex systems, we build on research that has harnessed technology's potential to scaffold learning experiences in the science classroom [16]. While a complete review of this research is beyond the scope of this paper, we briefly describe two long-standing projects that have shaped our efforts in developing classroom experiences for science students and teachers through anchoring curriculum in inquiry-based instruction, developing student scientific practices through simulations, and offering insights into how to work with teachers in professional development activities. WISE has pioneered a curriculum and instruction design that focuses on making thinking visible through modeling tools, developing activities that allow all students to access science content by bridging it to students' prior knowledge, enabling collaboration between students to construct understanding, and promoting autonomous learning by engaging students in student-centered inquiry activities [17]. Similarly, BGuILE has developed technology-infused curricular units that help students construct scientific explanations through the negotiation of in-depth empirically collected data [18]. In the BGuILE curriculum design, students are guided to make connections between observed experimental patterns and domain theories through scientific explanations and scientific practices like argumentation [19]. Digital environments are used to simulate experiences that scientists might encounter in the real world. For example, Tabak and Reiser [20] describe an investigation strategy in evolutionary biology, enabled by the BGuILE modeling tool, that asks students to observe patterns of variable interactions, compare structural and behavioral outcomes induced by variable changes, relate structural and behavioral changes to function, and explain theories that emerge from the investigation process.
In addition to developing curriculum and technology-based tools for science instruction, researchers have focused on the PD activities needed to support teachers in the classroom. A review of the literature on PD for technology-enhanced inquiry science by Gerard and colleagues [15] suggests that teachers are more likely to improve students' experiences of inquiry-based science learning when PD enables teachers to engage in constructivist learning processes. Such PD involves capitalizing on what teachers know, allowing them time to integrate ideas, and providing space for them to customize the curriculum through successive classroom tests and refinements. Professional guidance to help teachers customize the curriculum is also an important factor in this process, as is PD that continues for more than a year (which is typically the amount of time teachers need to make the necessary adjustments based on their unique instructional practices and classroom environments). Moreover, science inquiry projects in which the technology is embedded within the curricular units support implementation more than projects that rely on teachers to determine the technology's affordances or to develop their own curricular units. Finally, partnerships between researchers and teachers help address a number of issues, including the technical challenges that impede instruction, a point that further supports the need for PD that lasts longer than the initial year of implementation. Beyond these best practices, we also considered the supports needed for (a) teaching and learning about complex systems as a content domain and (b) students programming their own computer models [9,21].
Complex Systems Curriculum and Instruction Framework
Our complex systems curriculum and instruction framework is depicted in Figure 1. The framework has four major categories: curricular relevance, cognitively rich pedagogies, tools for learning and teaching, and content expertise. These categories build on the previously reviewed literature and are additionally aligned with the literature on needs and best practices for STEM teaching and learning through computer-supported modeling tools and complex systems content.
Curricular Relevance
This first category of the framework focuses on developing 21st-century competencies [22], ensuring alignment with standards [23], and supporting collaboration with teachers to promote teacher ownership [24-26]. All of these aspects of education work toward creating utility for students in terms of the scientific knowledge and practices they are engaged in, as well as ensuring that teachers find relevance and utility in their teaching. In our project, we emphasized building 21st-century skills by incorporating into the curriculum problem solving, critical thinking, and self-directed learning through self-paced experiments performed in teams of two. We ensured alignment with standards by drawing on content, practices, and crosscutting concepts outlined in the NGSS. For example, each unit required that students interpret graphs and, in many cases, construct their own graphs from the data they collected. Modeling and interpreting models of biological phenomena, which highlight aspects of systems, allowed students to participate in practices that cut across science fields while deepening their content knowledge. We collaborated with teachers as research partners by seeking continual feedback about challenges in classroom implementation and by working collectively with teachers to solve problems, improve the project, and promote optimal implementation. Finally, we facilitated peer sharing through an online database where teachers could post lesson plans and comment on implementation details.
Cognitively Rich Pedagogies
The second framework category involves pedagogies that promote (a) the social construction of knowledge through collaboration and argumentation [27] and (b) constructionist learning through the construction of models [28]. For example, in every unit, experimental results are examined and discussed through a process of argumentation whereby student groups are asked to reflect on evidence that can be brought to bear on claims made about the results. Students are also required to provide the reasoning linking their claims and evidence, with small-group responses shared in the larger group to enable group members to check their understanding. Figure 2 provides an example of an argumentation activity from the unit on evolution. Furthermore, in this category, using the StarLogo Nova application (described in more detail below), students learn how simulations work by deconstructing existing simulations (getting guided tours of the code behind them) and constructing their own simulations (often adding on to existing simulations to give them a head start). Both of these activities connect the code in the simulation to the underlying scientific phenomena in these units.
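StarLogo Nova programs are built from graphical blocks rather than text, so a listing cannot reproduce the actual interface; as a rough, hypothetical text analogue (the function names below are invented for illustration and are not StarLogo Nova's real block vocabulary), each draggable block can be thought of as a small procedure that every agent runs on each tick of the simulation:

```python
import math
import random

random.seed(1)

# One dictionary per "turtle"-style agent, as in StarLogo-like models.
agents = [{"x": 0.0, "y": 0.0, "heading": random.uniform(0, 360)}
          for _ in range(50)]

# Each function stands in for one draggable block (names are invented
# for illustration; they are not StarLogo Nova's actual block set).
def right(agent, degrees):
    agent["heading"] = (agent["heading"] + degrees) % 360

def forward(agent, dist):
    rad = math.radians(agent["heading"])
    agent["x"] += dist * math.cos(rad)
    agent["y"] += dist * math.sin(rad)

def wiggle(agent):
    # A classic agent rule: turn a little at random, then take a step.
    right(agent, random.uniform(-30, 30))
    forward(agent, 1.0)

for _ in range(100):  # the model's "forever" loop
    for a in agents:
        wiggle(a)

# Individually each agent just wanders, but collectively the population
# spreads out from the origin: a small instance of emergence.
mean_dist = sum(math.hypot(a["x"], a["y"]) for a in agents) / len(agents)
```

Deconstructing a model then amounts to reading rules like `wiggle` and asking what each contributes to the collective pattern; constructing one means adding or modifying such rules, which is the head start the curriculum gives students.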
The second framework category involves pedagogies that promote (a) the social construction of knowledge through collaboration and argumentation [27] and (b) constructionist learning through the construction of models [28]. For example, in every unit, experimental results are examined and discussed through a process of argumentation whereby student groups are asked to reflect on evidence that can be brought to bear on claims made about the results. Students are also required to provide the reasoning linking their claims and evidence, with small-group responses shared in the larger group to enable group members to check their understanding. Figure 2 provides an example of an argumentation activity from the unit on evolution. Furthermore, in this category, using the StarLogo Nova application (described in more detail below), students learn how simulations work by deconstructing existing simulations (getting guided tours of the code behind them) and constructing their own simulations (often adding on to existing simulations to give them a head start). Both of these activities connect the code in the simulation to the underlying scientific phenomena in these units.
Tools for Teaching and Learning
The third category builds knowledge through computational modeling tools [29]. Participants use an agent-based modeling platform called StarLogo Nova that combines programming based on graphical blocks with a 3D game-like interface, as illustrated in Figure 3a,b. Figure 3a shows the graphical programming language in which computational procedures are built into easily assembled blocks [30,31] to execute commands. Using the blocks programming language, students can simply drag and drop blocks of code, which are organized into categories. This eliminates the need for students to know or remember command names and the accompanying syntax, which are both significant barriers to novice programmers. Figure 3b shows how the language is translated into the graphical interface of the system being modeled. These images depict an ecological system in which fish and algae interact. Throughout the project, students interacted with models that visually represent scientific phenomena such as ecological and evolutionary systems. The models help students understand the dynamic processes of systems, such as self-organization and emergence, by visually representing system states and processes at multiple scales. Students use the models to conduct experiments by collecting and analyzing data and drawing evidence-based conclusions. Teacher guides and student activity packs promote teacher and student autonomy and also provide suggestions for adapting and extending practice, which we encourage teachers to do. Both the teacher guides and student activity packs make explicit connections to scientific practices, model representations of scientific ideas, and complex system processes. See Figure 4 for an excerpt from the teacher guide to the unit on evolution. Here we see instruction in collecting data from multiple trials (scientific practices) and how random variation both manifests in the model and is programmed to exhibit fish movement. Other examples of scientific practices include aggregating data for more accuracy and precision, controlling variables, changing variables to test hypotheses and make claims, arguing from evidence, and making sense of emerging system patterns through graphs that run in real time alongside the simulation. Other complex system characteristics that students can see through studying and manipulating the code include how agents interact with other agents, how agents and the environment interact, and the effects of perturbations on system states, e.g., equilibrium.
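The fish and algae dynamics in Figure 3 can be illustrated with a minimal agent-based sketch in Python. This is a hypothetical simplification for readers — plain Python rather than StarLogo Nova blocks, with invented energy and regrowth parameters — not the project's actual model:

```python
import random

def run_sim(steps=50, n_fish=20, n_algae=200, seed=1):
    """Minimal fish/algae agent-based model (illustrative only)."""
    rng = random.Random(seed)
    fish = [5] * n_fish             # each fish agent starts with 5 units of energy
    algae = n_algae                 # algae tracked as a simple pool
    history = []
    for _ in range(steps):
        survivors = []
        for energy in fish:
            # A fish finds algae with probability proportional to supply.
            if algae > 0 and rng.random() < min(1.0, algae / 500):
                algae -= 1
                energy += 2         # eating restores energy
            energy -= 1             # moving costs energy every tick
            if energy > 10:         # well-fed fish reproduce
                survivors.append(5)
                energy -= 5
            if energy > 0:          # starved fish are removed
                survivors.append(energy)
        fish = survivors
        algae += rng.randint(0, 5)  # algae regrow at a random rate
        history.append((len(fish), algae))
    return history

history = run_sim()
```

Even in this toy version, the population curves that `history` traces are not programmed anywhere explicitly; they emerge from many simple agent-level rules, which is the complex-systems point the units make.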
Content Expertise
The fourth category builds deeper understanding of complex systems [2] and biology [32]. The project team built instructional sequences for five high school biology units: Genetics, Evolution, Ecology, the Human Body, and Animal Systems. There is no set sequence for the units; instead, teachers can implement the units in the order that suits their school curriculum. The curricular materials for each unit take two or three days to complete and include popular and academic literature about complex systems, short movies, PowerPoint presentations, and vocabulary lists. These are provided to teachers.
An additional area of expertise that we intended to develop tangentially is computational thinking. Models are set up to allow students to explore the program that executes the model with the goal of developing skills related to computation, such as algorithmic thinking; as stated earlier, some models require students to manipulate the program and construct their own systems. However, we wanted to use these notions of computational thinking in the service of developing content expertise and did not explicitly measure its development in the project. For example, in predator-prey interactions, students become familiar with variables, how they are controlled, and how the variables can be programmed to exhibit system behaviors on the screen.
Methodology
In the following sections we describe our preliminary investigation of student learning outcomes from a yearlong implementation of project units in high school biology classrooms.
Context
The data and outcomes reported here result from classroom implementations of the project's five curricular units during the 2013-2014 school year by teachers who trained with us over the course of two years (week-long summer PD sessions and workshops during the academic year). An extended description of PD activities can be found elsewhere, in addition to information on teacher learning and change in instructional practices [6]. However, we provide some brief characteristics of the PD in order to demonstrate the efforts made to ensure a high-quality training experience. We designed and conducted the PD following professional judgments about what constitutes high-quality PD: (a) aligned content; (b) active learning opportunities; (c) coherence with professional demands; (d) at least 20 h in duration and spread over a semester; and (e) collective participation of teachers from the same school, grade, or subject [23,33]. Of these five characteristics, we considered active learning to be particularly important. Due to the well-documented, steep learning curve teachers experience in adopting new technologies in their classroom [24,34], we emphasized exposure to computers [25] and extensive training on computers [35] through active engagement to give teachers needed experiences to support adoption efforts. We also incorporated the other characteristics judged to be important for a quality intervention. For example, we aimed to achieve coherence with professional demands by working closely with teachers to understand and incorporate daily issues they may face in negotiating new technology-based programs. A critical element in this partnership with teachers was the iterative program design cycles, in which teachers became co-designers in upgrading the simulation and curricular activities with the goal of improving classroom implementation. We delivered 80 h of face-to-face PD (40 in the first year and 40 in the second year). We also focused on collective participation by working only with high school biology teachers, which provided them with opportunities to work with others in the same content area. In some cases, several teachers from the same school were able to work together, which gave them opportunities to share knowledge of situational affordances and constraints.
After the summer PD in each year, teachers were expected to implement the five project units in their biology classrooms throughout the year. The units typically spanned two to three days of instruction. Students worked in pairs and completed inquiry prompts that required them to run experiments, make predictions, collect and interpret data, and answer argumentation questions based on the StarLogo Nova simulation for that unit. The units covered the biology topics of Genetics, Evolution, Ecology, the Human Body, and Animal Systems. Nine of the teachers completed all five units, and one teacher completed three of the units. We provide data and results from their second year of implementation.
Participants
We recruited 10 teachers (seven women and three men) from seven Boston area public schools. The teachers came from a diverse set of schools. One school enrolled as many as 71% ethnic/racial minority students, while another school was almost entirely white (3% minority). School-level percentages of low-income students ranged from 14% to 59%. The percentage of students who were proficient or advanced on the state standardized science test ranged from 65% to 94%. Teachers, on average, had eight years of teaching experience, with a range of 3.5 to 19 years. We collected student data in the 10 teachers' classrooms from a total of 361 students ranging from freshman to 12th grade college prep and honors levels. The schools did not release individual student demographic and achievement data to us, so we cannot report accurate sample data in these areas. However, due to the range of classrooms and academic levels, we believe that the students we worked with are a relatively representative sample of the population-level statistics that are reported. See Tables 1 and 2 for detailed demographic information and numbers of students distributed across grade levels.
Data Sources and Analyses
To investigate learning outcomes, we conducted a mixed methods evaluation of students over the course of the 2013-2014 school year. Both before and after the intervention, measurements were collected on student understanding of biology and complex systems. We also conducted focus group interviews with students. Biology content understanding was assessed using 14 multiple-choice questions compiled from several state and national standardized science exams. Students also completed an assessment of complex systems understanding, at both the start and the end of the school year. This instrument consisted of an open-ended ecology prompt, described below, asking students to write down anticipated changes in a biological complex system: "Imagine a flock of geese arriving in a park in your town or city, where geese haven't lived before. Describe how the addition of these geese to the park affects the ecosystem over time. Consider both the living and non-living parts of the ecosystem." Responses were scored on a scale of 1 (not complex) to 3 (completely complex) for each of four different dimensions of complex system understanding. These components were derived from earlier research [2,3,36,37], the categories of which are listed in Table 3. Aggregate scores on this exam ranged from 4 to 12. Table 3. Categories of complex system components.
Predictability
The emphasis is on the predictability of the effects caused by the agent in question.
According to the clockwork framework, the way in which a part or agent operates or affects other components of the system is predictable. In a complex framework, it is impossible to anticipate precisely the behavior of the system. This is because the actions of agents cannot be predicted (as random forces or chance factors can affect an agent's actions) even if we know the rules or characteristics of the agent.
Processes
Processes refer to the dynamism of the mechanisms that underlie the phenomena (i.e., how the system works or is thought to work). In a clockwork framework, there is a beginning, middle, and end in the system. The system is composed of static events. While perturbations (actions by/on parts) in the system may cause change to occur, the change terminates once an outcome is achieved. In a complex system framework, there is no definite beginning or end to the activity. System processes are ongoing and dynamic.
Order
The focus is the organization of the system or phenomenon as centralized or decentralized. A clockwork framework assumes that all systems are controlled by a central agent (e.g., all action is dictated by a leader). Order is established top-down or determined with a specific purpose in mind. In a complex systems framework, control is decentralized and distributed to multiple parts or agents. Order in the system is self-organized or 'bottom-up' and emerges spontaneously.
Emergence and Scale
Emergence refers to the phenomenon where the complex entity manifests properties that exceed the summed traits and capacities of individual components. In other words, these complex patterns simply emerge from the simpler, interdependent interactions among the components. In a clockwork framework, parts of the system are perceived to be isolated, with little interdependency among them. This is because of the linear nature that characterizes these relationships. Thus, there are no large, global patterns that emerge from actions imposed on the system. Rather, these actions cause only localized changes (e.g., geese eat plants, causing a decrease in grass). In a complex system, because parts or agents are interdependent in multiple ways, an action (small or large) that is imposed on the system may have large and far-reaching consequences on the numerous parts and agents of the system. This may in turn result in large-scale change and evolution.
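A response rated on the four Table 3 dimensions, each from 1 (not complex) to 3 (completely complex), aggregates to the 4-12 range reported above. The sketch below (with hypothetical helper and dimension-key names, not the study's actual scoring code) just makes that arithmetic explicit:

```python
DIMENSIONS = ("predictability", "processes", "order", "emergence_and_scale")

def aggregate_score(ratings):
    """Sum the four 1-3 dimension ratings into a 4-12 aggregate score."""
    if set(ratings) != set(DIMENSIONS):
        raise ValueError("a rating is required for each of the four dimensions")
    if any(not 1 <= r <= 3 for r in ratings.values()):
        raise ValueError("each dimension is rated on a 1-3 scale")
    return sum(ratings.values())

# Extremes of the scale: a fully clockwork view scores 4, a fully complex view 12.
low = aggregate_score(dict.fromkeys(DIMENSIONS, 1))
high = aggregate_score(dict.fromkeys(DIMENSIONS, 3))
```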
For the coding, three raters were rigorously trained over a series of several weeks. Due to the complexity of this task, all raters were asked to code 20% of the data. We then ran reliability tests between pairs of raters in order to determine which raters had the highest alpha scores in each of the complex systems categories. Alpha scores for each pair ranged between 0.731 and 0.823. The pair of raters then scored all of the student responses in their assigned categories. All discrepant codes were discussed and a single code was assigned. Student growth in complex systems understanding was determined from their scored responses through a paired t-test.
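The paired t-test mentioned at the end of the paragraph compares each student's pre- and post-scores on the same instrument. A from-scratch sketch (with made-up illustrative scores, not the study's data) looks like this:

```python
import math

def paired_t(pre, post):
    """Return the paired t statistic and its degrees of freedom.

    t = mean(differences) / (sd(differences) / sqrt(n)), with df = n - 1.
    """
    if len(pre) != len(post):
        raise ValueError("pre and post must pair up student by student")
    n = len(pre)
    diffs = [b - a for a, b in zip(pre, post)]       # per-student growth
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    se = math.sqrt(var_d / n)                        # standard error of the mean diff
    return mean_d / se, n - 1

# Five hypothetical complex-systems aggregate scores, before and after the units.
t_stat, df = paired_t([5, 6, 5, 7, 6], [6, 7, 6, 8, 8])
```

Pairing matters here: because each student serves as their own control, the test is sensitive to consistent individual growth even when between-student variability is large.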
Focus groups were also conducted at the end of each classroom observation with four to five students who volunteered to be interviewed. Teachers were instructed to select a range of students who they considered to be high, medium, and low achievers. The interview consisted of five structured and semi-structured questions to probe student learning of complex systems and biology when interacting with the simulations and curricula. The questions were: (1) How was your learning experience similar or different to how you learn normally in a classroom?; (2) Did you understand how to answer the argumentation questions?; (3) What was the main biology idea represented in this unit?; (4) What do you know about complex systems?; and (5) Have you ever used computer simulations or models before in this class or any other classroom? In total, 12 focus group interviews of 50 students (28 female, 22 male) were conducted in seven schools, lasting a total of 6 h and 3 min. The focus group transcripts were coded according to a framework adapted from the NGSS to assess the impact of the curriculum and instructional framework on student learning. A categorization manual was constructed using descriptions of the five disciplinary core ideas, the eight scientific practices, and the seven crosscutting concepts from the NGSS [38]. Table 4 shows definitions used for three of the 20 NGSS categories (for a full description of the categorization manual, see [39]).
NGSS Category Categorization Manual Definition
Developing and using models: Modeling can begin in the earliest grades, with students' models progressing from concrete "pictures" and/or physical scale models (e.g., a toy car) to more abstract representations of relevant relationships in later grades, such as a diagram representing forces on a particular object in a system. Modeling in 9-12 builds on K-8 experiences and progresses to using, synthesizing, and developing models to predict and show relationships among variables between systems and their components in the natural and designed worlds.
Cause and effect: Mechanism and explanation. Events have causes, sometimes simple, sometimes multifaceted. A major activity of science is investigating and explaining causal relationships and the mechanisms by which they are mediated. Such mechanisms can then be tested across given contexts and used to predict and explain events in new contexts. In grades 9-12, students understand that empirical evidence is required to differentiate between cause and correlation and to make claims about specific causes and effects. They suggest cause and effect relationships to explain and predict behaviors in complex natural and designed systems. They also propose causal relationships by examining what is known about smaller scale mechanisms within the system. They recognize that changes in systems may have various causes that may not have equal effects.
Systems and system models: Defining the system under study (specifying its boundaries and making explicit a model of that system) provides tools for understanding and testing ideas that are applicable throughout science and engineering. In grades 9-12, students can investigate or analyze a system by defining its boundaries and initial conditions, as well as its inputs and outputs. They can use models (e.g., physical, mathematical, and computer models) to simulate the flow of energy, matter, and interactions within and between systems at different scales. They can also use models and simulations to predict the behavior of a system, and recognize that these predictions have limited precision and reliability due to the assumptions and approximations inherent in the models. They can also design systems to do specific tasks.
Initial coding was completed through a modified method of interaction analysis (IA). This typically involves analyzing video and/or audio data to examine the details of social interactions [40]. The analysis is normally performed in collaborative work groups. In our study, four researchers read through the transcripts of four focus group interviews (33% of the total number of focus groups interviewed) and identified responses from students that indicated learning in the NGSS categories. Responses could be coded in multiple categories. After an acceptable level of agreement was reached, one researcher coded the remaining eight focus group interviews.
The following excerpt provides further details about how students' responses were coded. To answer the first interview question, "How was your learning experience similar or different to the way you learn normally in a classroom?", one student said: "Visualization helps and it helps that it's moving so you know how long it takes, how fast the graphs; if you just look at the graph without any idea of what the simulation is, it would just make no sense to anybody. But if you watch the fish swim around and eat the algae, you saw the population plummet. It just shows you the exact numbers versus something that you don't really understand what's going on." (Focus Group 5, 17 October 2013). This student's comment was coded in the first two categories listed in Table 4. Here the student discusses using the simulation to understand the relationship between the fish and algae in the ecology model, showing an understanding of developing and using models. He also mentions the use of the graph to provide evidence of population changes, showing an understanding of cause and effect relationships.
Results
The results reported below illustrate the curriculum and instruction framework's influence on student learning.This section is organized around student content expertise, curricular relevance to the NGSS, and what students believed supported their learning.
Content Expertise
The project sought to develop content expertise in the topics of biology and complex systems. From students' pre- and post-surveys, the results indicated gains in both content areas. Table 5 displays results from the paired t-test of biology scores, conducted at the beginning and end of the 2013-2014 academic year. The results show that student scores increased significantly from pre- to post-assessment, from a mean of 7.67 to 9.43 (SD = 2.47), where t(345) = 12.5, p < 0.001, and the effect size is 0.67 (Cohen's d). Analysis of responses from the complex systems ecology prompt shows similar gains. Table 6 shows that students' complex systems understanding demonstrated positive significant growth, moving from a mean of 5.80 (SD = 1.23) to 6.79 (SD = 1.29), where t(360) = -12.26, p < 0.001, and the effect size is 0.65 (Cohen's d). Our research design does not allow us to identify whether or not these gains would have occurred in the absence of our curriculum and instruction framework. As this was a preliminary study, we did not randomly assign students or classes to control groups. It is also important to note that average gains in student understanding were small in absolute terms. The typical student improved his or her score by only one or two points or questions over the course of the year. Nonetheless, statistical tests demonstrate that learning clearly occurred, and the effect size of students' respective growth in content expertise is encouraging. The effect sizes of 0.67 and 0.65 are interpreted as medium effects in Cohen's d terms [41]. Moreover, according to Bloom and colleagues, ninth graders typically experience an effect size of around only 0.19 in science learning over the course of their freshman year, as measured by several nationally normed tests [42]. As ninth grade was the modal, or most common, grade level in our study, it is quite possible that the science learning under our framework exceeded that of traditional science classrooms.
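As a sanity check on the reported statistics: one common convention for a paired design computes Cohen's d from the difference scores (sometimes written d_z), which reduces to d = t / sqrt(n). Assuming that convention was used here (the paper does not state which variant it applied), the reported effect sizes follow directly from the reported t values and sample sizes:

```python
import math

def paired_cohens_d(t_stat, n_pairs):
    """Cohen's d for paired scores under the d_z convention: |t| / sqrt(n)."""
    return abs(t_stat) / math.sqrt(n_pairs)

d_biology = paired_cohens_d(12.5, 346)    # t(345) implies 346 student pairs
d_complex = paired_cohens_d(-12.26, 361)  # t(360) implies 361 student pairs
```

Both values round to the paper's reported 0.67 and 0.65, so the reported t statistics, sample sizes, and effect sizes are mutually consistent under this convention.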
Curricular Relevance
A central goal of the framework was to align curriculum and instruction with the NGSS. From the student focus group interviews, we coded 1056 responses indicating student learning outcomes that illustrated the standards. Figure 5 shows the frequency of responses that fell into each category. Students identified learning instances in 18 out of the 20 categories. Not surprisingly, given the emphasis of our simulation tool and curricula, the categories with the most responses were Developing and using models; Cause and effect; and Systems and system models. However, we were encouraged to see that most of the other NGSS categories were represented in what students thought they learned. Other categories showing relatively high frequencies of responses were Planning and Carrying out Investigations, Analyzing and Interpreting Data, Using Mathematics and Computational Thinking, and Patterns. That students discussed their learning experiences with respect to these scientific practices and crosscutting themes is important as the science education community aims to identify resources that enable NGSS goals to take place in science instruction. We discuss further alignment of NGSS categories in the section on students' perceptions of learning supports.
Students' Perceptions of Learning Supports
Students articulated benefits accrued from participating in cognitively rich pedagogies. All focus groups mentioned the positive impact on their learning as a result of participating in the argumentation process. In the following excerpt from a focus group interview, one student spoke about the learning affordances in figuring out what her evidence showed in order to answer one of the argumentation questions, which we included as a process to encourage the social construction of ideas (see Figure 2 for an example): "Well, I think the hardest part about ... it was like figuring out what the claim was like ... what does your evidence show? Why did I just collect all of this evidence? What did I discover off that? But after you figured out what your claim was you had a bunch of evidence to back it up. So everything there on was pretty easy." (Focus Group 7, 17 October 2013) In this excerpt, we can see that this student was cognitively involved in understanding the purpose of the experiment and how the evidence was brought to bear on her understanding.
Another example of student learning resulting from project efforts to build in cognitively rich pedagogies happened while working with the unit "Catching Flies," in which students learned to program their own simulations. The following quote illustrates how building the simulation supported this student's particular style of learning: "I feel like I'm much more of a hands-on learner so for them to be projecting on the board and us watching it's not as helpful for me than us doing it on the computer ... when you get to do it on a computer by yourself because you get to actually experience it and that's kind of important for learning. You can see all your mistakes and you can try to fix them yourself instead of having the teacher do it the first time and getting it right." (Focus Group 9, 12 December 2013) Through the actual manipulation of the program, students were able to examine and modify their own errors, which potentially led to deeper learning. In the following exchange, it is clear that through constructing simulations students were also able to begin testing their developing ideas of complex systems: "I think that all the coding, it was a lot of different things that came together, but then when it was on the simulation it was just a simple few flies moving around randomly and it didn't look that different, but when you went back into the code it had a lot of different parameters." (Focus Group 9, 12 December 2013) In this unit, students program multiple variables into the system and learn about how different initial conditions and the inclusion of different variables can significantly impact the patterns that emerge. The excerpt shows that building the simulation, which was included as a constructivist activity, helped students make connections between the code, behaviors on the screen, and the complex concept that was represented.
Active experimentation with the simulations similarly enabled students to develop a deeper understanding of complex system processes. For example, in the following interview excerpt concerning the unit on enzymes, one student spoke about the ability to manipulate the inputs (the number of starches and enzymes present) in the model, which enabled her to gain a greater understanding of the complex system concept of predictability: "Yeah, I didn't know if they sought out all the time or if they were just moving randomly most of the time. So I tried I think it was like 20 starches and then I added like 10 enzymes. I thought because it was 10 and 20 that it would come out like that, but it didn't. It came out to totally different numbers and that just made me understand how no matter the number, you can always have different outcomes." (Focus Group 10, 21 November 2013) An accurate understanding of complex systems processes suggests that it is not possible to predict the exact outcome of system effects each time. What is important to note from the above quote is that the student recognized that outcomes of system processes may vary based on the initial conditions. The student was able to gain this knowledge by experimenting with a simulation in which random variability was built into the system.
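The unpredictability the student describes — identical inputs, different outcomes — comes from random variability built into the model. A toy sketch (with invented reaction probabilities and function names, not the unit's actual StarLogo Nova code) shows the same behavior:

```python
import random

def enzyme_run(starches=20, enzymes=10, steps=30, seed=None):
    """Toy stochastic enzyme model: at every step, each enzyme independently
    breaks down one remaining starch with a small fixed probability."""
    rng = random.Random(seed)
    remaining = starches
    for _ in range(steps):
        for _ in range(enzymes):
            if remaining > 0 and rng.random() < 0.01:
                remaining -= 1
    return remaining

# Identical inputs (20 starches, 10 enzymes) under different random draws:
# the number of starches left at the end varies from run to run.
outcomes = [enzyme_run(seed=s) for s in range(10)]
```

Knowing the rules and the inputs bounds the possible outcomes but does not fix a single one, which is exactly the complex-systems notion of predictability summarized in Table 3.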
Beyond manipulating the simulations and using evidence to understand phenomena, we were also interested in learning about the impact of the guided inquiry processes included in the curricula. All interviews indicated that students' improvement in understanding the scientific phenomena was enabled by the StarLogo Nova simulations. Moreover, many students spoke about the utility of the student activity packs in providing them with room to explore and learn on their own. One student, for example, discussed how the dynamic visualization helped her to see how the system changed over time. Students also spoke about the hidden information that was revealed by working with the simulations. After the unit on diffusion of sugar into the bloodstream, a student noted how she understood the actual movement of the sugar molecules: "Yeah, and I learned that the molecules bounce off the walls randomly. I thought they just went in a straight line to the bloodstream." (Focus Group 12, 11 February 2014) With respect to the student activity packs, students discussed the curricula's ability to offer additional opportunities to experiment and to answer student-generated questions, as documented in this excerpt: "But learning it beforehand and then having to follow specific directions and then doing it, it really helps because if you're following the directions and it works right it's going to be like 'Oh, okay, I understand now. If I do this and I do that, then I can do that.' But then you can make other questions and hypotheses and everything like that and then you can be like 'If I do this will this happen?' And you can actually try it because you have that freedom of trying it on the simulation." (Focus Group 10, 21 November 2013) While the activity packs provided specific questions to guide student learning, this student used the simulation to answer her own questions.
Discussion
The study of complex systems in education has been supported through policy, scholarship, and the allocation of resources. For example, all seven of the crosscutting concepts in the new NGSS reflect important aspects of complex systems, such as Scale, and Structure and Function. This has raised challenges for educators, who must follow the NGSS alongside other contextual and professional demands. Thus, understanding optimal methods for constructing educational experiences about complex systems is critical. In this paper, we introduced a framework for teaching and learning about complex systems that addressed needs in designing curriculum for classroom implementation [12]. The framework builds on previous research in science education on inquiry-based projects [17,20] and included other known best practices and recent research, demonstrating the importance of developing programs that have curricular alignment, engage in cognitively rich pedagogies, use appropriate tools for teaching and learning about complex systems, and develop content expertise in science and complex systems content. This framework moves the field of complex systems in science education forward by offering a comprehensive program for classroom implementation that has been vetted by teachers as co-designers in the iterative design cycle. In addition to improving curriculum and instruction activities, this co-design model can also be understood as a necessary variable in producing successful implementation and student learning outcomes. Both the collaboration and the iteration represent the first two elements in the DBIR framework [13] in the service of tackling issues in adopting innovative technology-based programs.
In our curriculum and instruction framework design, we also focused heavily on constructing learning experiences that build on what we know about how learning best happens, especially in the domain of science. Much of our curriculum was constructed to capitalize on the ability of computer simulations to easily enable iterative experimentation, provide multiple sources and inputs for data collection, and present dynamic visual details about different aspects of systems that would not be accessible through static images or text-based descriptions. Based on student perceptions of what supported their learning, constructing the simulations also played a central role in understanding the components of the system and how they interacted with each other and the environment. With the inclusion of other important scientific practices, like argumentation and analyzing and interpreting data, in the learning activities, students were able to demonstrate robust learning in the content areas of biology and complex systems, with effect sizes that exceed known yearly learning gains in science classrooms [42]. This is an encouraging result despite the fact that we did not have a control group for comparison. Our next step will be to run an experimental efficacy trial to support these preliminary findings. In other publications (e.g., [39,43]), we have discussed some of the challenges associated with professional development and teacher learning as a result of implementation, which will also be investigated further in subsequent studies.
However, one of our main goals in this paper was to develop and provide promising evidence of the effectiveness of a modular framework for other similar systems researchers and practitioners to assess and use, which is the last element of the DBIR process. We believe this is an important pursuit, particularly as we embark on investigating science programs that can instantiate qualities of the NGSS to improve science learning for all students. In this preliminary study, we hope we have advanced this conversation, and we eagerly seek commentary in the service of advancing complex systems in education research and practice.
Figure 1. Complex systems curriculum and instruction framework.
Figure 2. Curricular example of scaffolding the scientific argumentation process with the simulation activity.
Systems 2016, 4, 38
Figure 3. (a) The blocks-based programming interface for StarLogo Nova, in which students control properties of their virtual systems; (b) StarLogo Nova translates blocks of computer code into a virtual 3D ecosystem of fish, grasses, and algae.
Figure 4. Excerpt of the teacher guide for the evolution unit. The callouts provide background information and suggestions for running the simulation in class.
Figure 5. Frequency of student focus group responses as coded in each NGSS category.
Table 1. School-level demographic information.
* Minority is defined as non-white.
Table 2. Teacher experience and grade level taught, with numbers of students in each grade.
We consulted the 1999-2001, 2010, and 2011 New York State Board of Regents Biology exams, the 2009 California Standards Test in Biology, and the 2000 and 2009 National Assessment of Educational Progress (NAEP) exams. Students completed the test at both the start and end of the school year so that learning gains could be assessed. A paired t-test was conducted to analyze growth in student biology understanding.
Table 5. Results of the paired t-test for biology content learning.
Table 6. Results of the paired t-test for complex systems content learning.
I think the hands-on experience just helps you process the information better because it's right there and you can see it going on. You can see the process. You can see what's happening. Even though it's a short amount of time you understand what would happen over a long amount of time instead of just being told what happens. (Focus Group 7, 17 October 2013)
Online reinforcement learning control of robotic arm in presence of high variation in friction forces
The operation and accuracy of industrial robotic arms can be negatively affected by significant fluctuations in the friction forces within their joints, potentially resulting in financial and operational losses. To mitigate these issues, an online model-free reinforcement learning controller is proposed, specifically designed to handle high variations in the joints' friction forces. To the best of our knowledge, this is the first time a reinforcement learning controller has been used to handle high friction variations in a robotic arm. Initially, the dynamic equations of the robotic arm are derived, verified and validated to ensure an accurate representation of real-world behaviour. The stability of the closed-loop system is analyzed using the Lyapunov second method. The performance of the proposed controller in terms of position tracking is compared against four commonly used controllers found in the literature for similar applications: (i) a nonlinear model-based computed torque controller, (ii) a proportional-derivative controller, (iii) an adaptive iterative learning controller and (iv) a radial basis function neural network adaptive controller. Simulation results demonstrate that the reinforcement learning controller outperforms the other controllers in terms of tracking performance, even in the presence of significant variations in joint friction forces.
Introduction
Motion control of robotic manipulators is challenging, mainly due to the presence of joint friction, nonlinearity and coupled dynamics. The motion control of a robot becomes even more challenging when the robot operates in harsh environments (i.e. in the presence of dust and debris) (Wong et al., 2018). Usually, high friction forces in the manipulator's joints are the most dominant factor affecting controller performance. Friction causes stick-slip in relative motion, affects the durability and reliability of systems, and leads to significant performance losses if it is not adequately considered (Liu et al., 2019a). Friction effects can be cancelled by using friction compensation. This can be done by incorporating an estimate of the friction force into the control signal to cancel out its influence (Caldarelli et al., 2022). Model-based friction compensation schemes require an in-depth understanding of friction characteristics (Gao et al., 2022a). In recent decades, the friction phenomenon has been studied extensively and many friction models have been proposed, including the Coulomb model, viscous model, Stribeck model, Dahl model, LuGre model and Generalized Maxwell-Slip model (Gao et al., 2022a). Generally, utilizing any of the aforementioned friction models within a model-based friction compensation scheme assumes that the model parameters are constant or slowly varying in nature (Huang et al., 2019). This assumption is often invalid, particularly when the robot is operating under harsh conditions as in mining, grinding and polishing applications. In such applications there is a significant presence of pollutants like dust and debris, causing the frictional forces to increase. Therefore, model-based compensation approaches usually lose their effectiveness (Chew et al., 2021) and (Yin et al., 2021). Consequently, model-based controllers are usually combined with another control strategy that helps in compensating for uncertainties, as in Abraham et al. (2020).
Another alternative to the model-based controller is the model-free controller, which does not depend on any mathematical model of the manipulator. Intelligent controllers are among the most promising model-free control strategies. Intelligent controllers are inspired by biological systems and human cognitive capabilities, and they possess learning and adaptation capabilities. Different intelligent control strategies have been investigated in the literature. In Esmaeili et al. (2019), a data-driven observer with an adaptive sliding mode controller for manipulators was investigated. Iterative feedback model-free adaptive learning control of a pneumatic artificial muscle was investigated in Wu et al. (2019). Aliman et al. (2022) designed an adaptive fuzzy proportional-derivative (PD) controller for a rehabilitation lower-limb exoskeleton. In Gundogdu and Celikel (2021), a nonlinear autoregressive control scheme was proposed to control a single-link manipulator at low speed.
Many of the intelligent control strategies mentioned earlier require offline training and tuning. However, when a robot operates in a harsh industrial environment (i.e. dusty), performance with offline training strategies deteriorates because the friction forces can vary significantly. Therefore, learning approaches that estimate and compensate for friction forces online have shown promising performance, as in Roveda et al. (2022).
Recently, some promising artificial intelligence (AI) techniques that are capable of performing training and adaptation online have been utilized to control robots in the presence of uncertainties and disturbances. In Chen et al. (2020), a linear quadratic regulator (LQR) with a radial basis function neural network (RBFNN) was employed to enhance the tracking performance under variable admittance control for human-robot collaboration. In Kumar and Rani (2021), a model-free scheme was integrated with an RBFNN to compensate for the unknown dynamics and uncertainties. In Lee et al. (2019), an adaptive iterative learning control (ILC) algorithm was proposed to adaptively identify the friction model over multiple iterations. In Gao et al. (2022b), an online adaptive backstepping integral nonsingular terminal sliding mode control was proposed for precision trajectory tracking of manipulators under unknown dynamics and external disturbances. Cremer et al. (2020) investigated a model-free online neuro-adaptive controller with inner and outer neural networks (NNs) for human-robot interaction. An active inference online joint-space torque controller for manipulators was proposed in Pezzato et al. (2020). In Liu et al. (2019b), RBFNN-based tracking control of underactuated systems with unknown parameters and with matched and mismatched disturbances was developed and tested on a two-link planar manipulator. In Zhang et al. (2022), a PD controller with an augmented NN was utilized to compensate for both the continuous dynamics and the discontinuous friction of a two-degree-of-freedom (DOF) robotic arm.
Another promising model-free AI control approach that can deal with nonlinearities, uncertainty and significant variation in the friction forces is reinforcement learning (RL). RL is a data-driven decision-making framework that focuses on the interactions of an agent with its environment, where the agent tries to find a set of actions that maximizes the cumulative reward (Li & Deng, 2021). Pane et al. (2019) tested an RL-based controller on a robot manipulator and compared its performance with a PD controller, model predictive control (MPC) and ILC. The results showed better performance with the RL-based controller when compared with PD, MPC and ILC. An RL tracking controller with a kernel-based transition dynamic model was proposed in Hu et al. (2020). In this approach, a reward function was defined according to the features of tracking control to speed up the learning process. The results showed that their proposed algorithm has better tracking performance than NN and adaptive NN controllers when tested on a 2-DOF robotic arm. In Ouyang et al. (2020), adaptive control with actor-critic RL was proposed for a 2-DOF arm with elastic joints. The tracking performance with the RL controller was better than the performance obtained by a PD controller. In Lee and An (2021), an RL-NN-based controller was developed and tested experimentally on a self-balancing quadruped robot. Their results revealed a promising control algorithm that can replace mathematically based robot control systems. An RL-based optimal controller was utilized in Liu et al. (2022) to minimize the tracking errors of a shape memory alloy actuated manipulator.
Paper contributions
Based on the conducted literature review, and to the best of our knowledge, RL has not been evaluated in a position tracking application of a robotic arm in the presence of high variation in the joints' friction forces. On the other hand, friction variation in a robotic arm is unavoidable and is more significant when the robot operates in a harsh industrial environment. If the controller does not compensate for friction variation in real time, then the tracking performance will degrade significantly.
In this paper, an online model-free RL control approach is utilized to control a 3-DOF robotic arm in the presence of high variation in its joints' friction forces. The tracking performance of the proposed controller is compared against four controllers using different desired trajectories. The four controllers selected for comparison represent the major control strategies usually used for similar applications, namely: a nonlinear model-based computed torque (CT) controller, a linear control strategy (i.e. PD), and model-free adaptive control strategies (i.e. ILC and RBFNN).
Paper organization
The remainder of the manuscript is organized as follows: the robot arm dynamics are derived, validated and verified in Section 2; Section 3 introduces the proposed RL controller; the simulation results and comparisons with other controllers are presented in Section 4; finally, concluding remarks are presented in Section 5.
Robotic arm modelling, validation and verification
The dynamic model of the 3-DOF articulated robotic arm shown in Figure 1 is derived using the well-known energy-based Lagrangian dynamic formulation. The dynamic equation of the manipulator is represented in the state-space form of (1), T = M(θ)θ̈ + V(θ, θ̇) + G(θ) + τ_f, where T is the joints' torque vector, θ is the joint angle, θ̇ is the joint angular velocity, θ̈ is the joint angular acceleration, M(θ) is the mass matrix, V(θ, θ̇) is the vector of centrifugal and Coriolis forces, G(θ) is the gravity force vector and τ_f is the friction torque vector.
M(θ) is given in Equations (2)-(6), where m_i is the mass of link i, I_i is the inertia tensor of link i, L_i is the length of link i, L_ic is the distance to the center of mass of link i, and i ranges from 1 to 3. G(θ) is given in Equations (11)-(13), where g is the gravitational acceleration.
In the developed model, the Stribeck friction model was adopted to represent the joints' friction forces, as shown in Figure 2. The friction torque vector τ_f is given in (14), where f_s is the static friction coefficient, f_e is the applied joint torque, f_c is the Coulomb friction coefficient, f_v is the viscous friction coefficient, and θ̇_s is the Stribeck velocity. The mass matrix M(θ) is always positive definite, that is, θ̇ᵀ M(θ) θ̇ is always greater than zero, as stated in Siciliano and Khatib (2016). This condition is evaluated using different sets of joint trajectories, as shown in Figure 3.
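To make the friction model concrete, the following minimal Python sketch implements a common sliding-regime Stribeck form. The coefficient values, and the omission of the static-friction branch driven by the applied torque f_e in Eq. (14), are illustrative assumptions here rather than the paper's exact expression:

```python
import math

def stribeck_friction(vel, f_s=0.5, f_c=0.3, f_v=0.1, v_s=0.05):
    """Sliding-regime Stribeck friction torque (illustrative sketch).

    Combines Coulomb friction, a Stribeck exponential decaying from the
    static level f_s to the Coulomb level f_c, and a viscous term.
    All coefficient values are placeholders, not taken from the paper.
    """
    if vel == 0.0:
        # Static regime: in the paper the friction balances the applied
        # torque f_e up to the break-away level; omitted in this sketch.
        return 0.0
    stribeck = f_c + (f_s - f_c) * math.exp(-(vel / v_s) ** 2)
    return math.copysign(stribeck, vel) + f_v * vel
```

At near-zero velocity the torque approaches the break-away level f_s, and at higher speeds it settles to the Coulomb level plus the viscous term, reproducing the characteristic Stribeck dip.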
Reinforcement learning controller
RL is inspired by how a human learns when subjected to an ambiguous task. In contrast to supervised learning algorithms, which aim to map between known inputs and outputs, the RL algorithm is based on rewarding desired behaviour or punishing undesired behaviour without any prior knowledge of the inputs and outputs.
RL controller design
The proposed controller in this paper contains two NNs, namely critic and actor networks, as shown in Figure 5.
The actor network provides the actual control signal (u), which is the control torque sent to the arm actuators, and the critic network provides long-term performance information (Q) to the actor network, which is used to improve the actor's performance. The critic-actor RL scheme is suitable for learning control policies online and is capable of adapting to time-varying system parameters such as joint friction in the robot arm. The objective of the controller is to bring the joints to the desired angles θ_d(t) while ensuring the stability of the closed-loop system. For a given desired angle trajectory θ_d(t), the angle tracking error vector e(t) is given as e(t) = θ_d(t) − θ(t),
where θ_d(t) and θ(t) are the desired and actual joint trajectories, respectively. A utility function p(t) is defined to measure the current system performance, i.e. the current state of the angle tracking error, and it is expressed as p(t) = ||e(t)||₁, where ||·||₁ is the first norm of the angle tracking error. The long-term performance of the system, which represents the long-term effect of a given control action on the tracking error, is given in (17), where u(t) is the current control action, α is the discount factor, a positive constant with 0 < α < 1, and R is the control horizon, selected as R > 1.
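These quantities can be illustrated with a short Python sketch. The discounted-sum form used for the long-term performance below is one standard choice and an assumption here; the paper's exact Eq. (17) may index the horizon differently:

```python
def utility(err):
    # p(t): first norm (sum of absolute values) of the tracking error vector
    return sum(abs(x) for x in err)

def long_term_performance(future_errors, alpha=0.99):
    """Discounted sum of utilities over a finite horizon R (a sketch of one
    standard form of Q(t); not necessarily the paper's exact expression).
    future_errors[k] is the joint error vector at step t + k."""
    return sum((alpha ** k) * utility(e) for k, e in enumerate(future_errors))
```

With α close to 1, errors far in the future still contribute substantially to Q(t), which is what lets the critic penalize control actions whose bad effects appear only later.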
Critic network
The critic NN is utilized to approximate the long-term performance Q(t) as Q̂(t) = W_c(t)ᵀ N_c(t), where Q̂(t) is the approximation of the long-term performance Q(t), and W_c(t) and N_c(t) are the output-layer weights and the hidden-layer output of the critic NN, respectively. The activation function N_c(t) is defined in (19), where W_ci are the input-hidden weights of the critic network, Z(t) is the critic network input vector defined as Z(t) = [e(t), u(t)], and σ is the smoothing parameter.
The critic prediction error is the difference between the predicted and the actual long-term performance and can be expressed as E_c(t) = Q̂(t) − Q(t). Substituting (17) into (20) leads to (21). Based upon the prediction error E_c(t), and utilizing (18), the update rule of the critic NN is expressed as in (22), where α_c is the adaptation gain of the critic NN weights.
Actor network
The actor network is used to approximate the control signal u(t) as u(t) = W_a(t)ᵀ N_a(t), where W_a(t) and N_a(t) are the output-layer weights and the hidden-layer output of the actor NN, respectively.
The activation function N_a(t) is defined in (24), where W_ai are the input-hidden weights of the actor network, and σ is the smoothing parameter.
The update law of the actor network weights is given in (25), where α_a is the actor network adaptation gain.
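The structure of these networks can be sketched in Python. The Gaussian, distance-based hidden-layer activations and the plain error-driven gradient step below are assumptions for illustration; the paper's exact forms are Eqs. (18)-(25), which are not reproduced here:

```python
import math

def rbf_layer(z, centers, sigma):
    # Hidden-layer output with Gaussian activations (assumed form;
    # sigma plays the role of the smoothing parameter in the text)
    return [math.exp(-sum((zi - ci) ** 2 for zi, ci in zip(z, c)) / sigma ** 2)
            for c in centers]

def critic_eval(w_c, z, centers, sigma):
    # Q_hat(t) = W_c(t)^T N_c(t): linear-in-weights critic output
    n = rbf_layer(z, centers, sigma)
    return sum(w * ni for w, ni in zip(w_c, n)), n

def critic_update(w_c, z, q_actual, centers, sigma, lr=0.1):
    """One illustrative weight update driven by the prediction error
    E_c = Q_hat - Q (a gradient-descent sketch, not the paper's exact
    update law (22)); lr plays the role of the adaptation gain."""
    q_hat, n = critic_eval(w_c, z, centers, sigma)
    e_c = q_hat - q_actual
    return [w - lr * e_c * ni for w, ni in zip(w_c, n)]
```

The actor has the same linear-in-output-weights structure, u(t) = W_a(t)ᵀ N_a(t), with its own update driven by the critic's output rather than by E_c.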
Network convergence analysis
According to the universal approximation theorem, a NN with a single hidden layer that has a sufficient number of hidden neurones can approximate any nonlinear function under certain conditions. For a given function Y and its NN approximation Ŷ, this approximation property is typically expressed as Y = Wᵀ N + ε, where ε is the approximation error of the NN, bounded as |ε| ≤ ζ with ζ a positive number, W is a constant ideal weight vector and N is the output of the hidden layer of the NN. Both the critic and actor networks are used to approximate the long-term performance and the control law, respectively, so the weights of these networks are expected to vary in a bounded range and both networks produce an approximation error bounded as |ε| ≤ ζ.
Stability analysis
To ensure the stability of the closed-loop system, the Lyapunov second method is used for this analysis. If the adaptation gains are selected as in (28) and (29), then the closed-loop system is semi-globally ultimately bounded (SGUB). Proof: Let us define the weight errors E_wa(t) and E_wc(t) of the actor and critic networks, respectively, as E_wa(t) = Ŵ_a(t) − W_a and E_wc(t) = Ŵ_c(t) − W_c, where Ŵ_a(t) and Ŵ_c(t) are the actual output weights of the actor and critic, respectively, and W_a and W_c are the corresponding constant ideal output weights.
Let us consider a Lyapunov function candidate as follows.
The time derivative of the first Lyapunov function L₁(t) is then evaluated.
Simplifying further leads to L̇₂(t) ≤ 0. Since both L̇₁(t) ≤ 0 and L̇₂(t) ≤ 0, the closed-loop system is semi-globally ultimately bounded. The proof is complete.
Simulation results
To test the proposed controller, the previously derived dynamic equations of the robot were implemented and simulated in MATLAB with the ode14x solver on a computer running Windows 10 with a Core i5 processor. The input-hidden weights for the critic and actor networks, W_ai and W_ci, are initialized as W_ai = [−1 : 0.1 : 1] and W_ci = [−1 : 0.1 : 1]. The output weights of the critic and actor networks, W_a and W_c, are initialized to zeros. The critic and actor gains are selected as α_c = 0.1 and α_a = 100, respectively. The values of α and σ were chosen to be 0.99 and √0.5, respectively. The proposed RL control strategy is an online learning scheme; hence no previous learning or training is needed. Figure 6 shows the learning pattern of the RL controller when a sinusoidal trajectory is set as a reference for the three joints. The evolution of the norm of the actor network weights, depicted in Figure 7, indicates that the weights of the actor network converge after some learning time to vary within a bounded range of less than 200. In the beginning, the responses of the three joints oscillate around the desired trajectory, which represents the RL controller's learning stage. However, the responses soon improve significantly, which reflects the fact that the RL controller tuned itself successfully.
In the upcoming subsections, the proposed RL controller is tested with trajectories in both the joint and Cartesian operational spaces, and with the friction forces in the joints increased significantly. Moreover, the tracking performance of the proposed RL controller is compared against the performances obtained with four common manipulator benchmark controllers: (i) CT, (ii) PD, (iii) ILC and (iv) RBFNN.
Benchmark controllers
The performance of the proposed RL-based controller is compared with the CT, PD, ILC and RBFNN control strategies.
The four benchmark controllers are usually used for manipulator position control and represent the major types of controllers presented in the literature. CT is an example of a model-based non-adaptive controller, while PD is an example of a model-free non-adaptive linear controller. On the other hand, the ILC and RBFNN controllers belong to the online adaptive model-free control strategies. In this subsection, the four controllers are introduced, tuned, and then tested on the 3-DOF manipulator discussed in Section 2. Their tracking performances are compared with the performance obtained with the proposed RL controller.
Computed torque control strategy
The CT controller is a special type of feedback linearization and requires full knowledge of the system's dynamics and parameters. Feedback linearization is used to cancel all the nonlinearities in the dynamics of the robot; hence, the overall closed-loop system acts like a fully linear system, and it guarantees the stability of the closed-loop system as stated in Siciliano and Khatib (2016). As shown in Figure 8, it uses two loops: (i) a feedforward loop that cancels out the nonlinearities through the inverse dynamic model, and (ii) a feedback loop used for trajectory tracking with a proportional-velocity (PV) controller. The controller output takes the standard computed-torque form U_CT = M(θ)[θ̈_d + K_v_CT(θ̇_d − θ̇) + K_p_CT(θ_d − θ)] + V(θ, θ̇) + G(θ), where K_v_CT and K_p_CT are the velocity and proportional gains, respectively, and θ̈_d, θ̇_d and θ_d are the desired angular acceleration, velocity and angle, respectively. The design and tuning were performed according to Siciliano and Khatib (2016). For the computed torque controller, the gains are selected as K_v_CT = 300 and K_p_CT = 1000.
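The standard computed-torque structure can be sketched for an n-DOF arm as follows, with the model terms M, V and G passed in as user-supplied functions. This is a generic sketch of the CT law, not the paper's MATLAB implementation; the gain values follow those quoted in the text:

```python
def computed_torque(theta, dtheta, theta_d, dtheta_d, ddtheta_d,
                    M, V, G, kp=1000.0, kv=300.0):
    """Computed-torque control law (sketch): feedback-linearize with the
    model terms M(theta), V(theta, dtheta), G(theta), then apply PV
    feedback on the tracking error. M returns a nested-list mass matrix;
    V and G return vectors (plain lists)."""
    e = [qd - q for qd, q in zip(theta_d, theta)]          # position error
    de = [vd - v for vd, v in zip(dtheta_d, dtheta)]       # velocity error
    # Commanded acceleration: desired accel plus PV correction
    a = [add + kv * dei + kp * ei for add, dei, ei in zip(ddtheta_d, de, e)]
    Ma = M(theta)
    inertial = [sum(Ma[i][j] * a[j] for j in range(len(a)))
                for i in range(len(a))]
    return [ti + vi + gi for ti, vi, gi in zip(inertial, V(theta, dtheta), G(theta))]
```

Because the model terms are injected as callables, the same routine works for any arm; its accuracy, as the results in this section show, stands or falls with the fidelity of M, V and G.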
Proportional-derivative control strategy
The PD controller is a widely used linear controller in industrial processes due to its simple design. Figure 9 shows the block diagram of the PD controller of the robotic arm. The control signal is U_PD(t) = K_p_PD e(t) + K_d_PD ė(t), where K_p_PD is the proportional gain, K_d_PD is the derivative gain, e(t) is the tracking error and ė(t) is the derivative of the tracking error. Similar to the CT, the design and tuning were performed according to Siciliano and Khatib (2016). For the PD controller, the gains are chosen as K_p_PD = 1000 and K_d_PD = 100.
Iterative learning control strategy
ILC is a model-free adaptive control strategy. Due to its simplicity and robustness, ILC is becoming popular in robotics applications. The ILC controller is based on the assumption that tracking errors in a repetitive task stay unchanged in the absence of an explicit external correction. A feedforward ILC term is added to a PD feedback tracking controller to compensate for this repetitive error, as depicted in Figure 10 and given in (47). This PD-type ILC controller is similar to the one adopted in Boudjedir and Boukhetala (2021).
U_{j+1}(t) = H(q)[U_j(t) + K_p_ILC e_j(t + 1) + K_d_ILC (e_j(t + 1) − e_j(t))]   (47)

where j denotes the j-th iteration, H(q) is a low-pass filter, e is the tracking error, and K_p_ILC and K_d_ILC denote the proportional and derivative gains, respectively. For the ILC controller, the gains are chosen as K_p_ILC = 1000 and K_d_ILC = 100.
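The iteration-domain update of the PD-type ILC feedforward term can be sketched as follows. The low-pass filter H(q) is omitted here for brevity, which is an assumption; u_prev and e_prev are the sampled control and error signals recorded during iteration j:

```python
def ilc_update(u_prev, e_prev, kp=1000.0, kd=100.0):
    """PD-type ILC feedforward update between iterations (sketch of a law
    like Eq. (47), with the low-pass filter H(q) omitted).

    u_prev, e_prev: lists of the feedforward control and tracking error
    sampled during iteration j; returns the feedforward for iteration j+1.
    """
    u_next = []
    for t in range(len(u_prev) - 1):
        # Look-ahead error e_j(t+1) and its backward difference drive the update
        u_next.append(u_prev[t] + kp * e_prev[t + 1]
                      + kd * (e_prev[t + 1] - e_prev[t]))
    u_next.append(u_prev[-1])  # final sample: no look-ahead available
    return u_next
```

Each pass over the task refines the stored feedforward, so a repeating error shrinks from one iteration to the next while the PD feedback loop handles non-repeating disturbances.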
Adaptive RBFNN control strategy
When utilized in control systems, RBFNNs have the benefits of superior process-learning capabilities and some degree of disturbance immunity. The proposed RL controller is relatively comparable to the adaptive RBFNN because both learn online and are model-free controllers. The RBFNN control law adopted in this study is similar to the one adopted in Liu et al. (2021). The control law U_RBFNN of the adaptive RBFNN is given in (48) and (49), and the weights are updated according to the error as given in (50); that is, the network modifies its parameters to adapt to variations of the plant. Figure 11 shows the block diagram of the RBFNN controller.
where x is the input vector, w_in are the weights of the input layer, k(x, w_in) is the radial basis function in the hidden layer, w_o are the output-layer weights, α_RBF is the learning rate of the RBFNN and ε is the error vector. The learning rate α_RBF of the RBFNN controller is selected to be 3.4, and the weights and parameters of the RBFNN are initialized as for the actor network.
Joint space simulation results
In this section, the proposed RL controller is tested in joint space and its trajectory tracking performance is compared with the tracking performances obtained with the benchmark controllers described in Subsection 4.1. All controllers were well tuned using the same trajectory presented in Figure 6. Then, the controllers were tested on a sinusoidal trajectory, shown in Figure 12, for the three joints, with the friction increased by 100% and 200% at 4 and 8 s, respectively. The increase in friction forces represents a possible increase in the joints' friction that may happen in a real industrial environment due to pollutants such as dust and other debris. As depicted in Figure 12, all controllers performed well when there was no increase in friction, i.e., from 0 to 4 s. However, when the friction forces increased by 100% at 4 s and by 200% at 8 s, the performance of the CT controller started to degrade significantly, especially for joint 3, compared to the other controllers. The significant failure of the CT controller in tracking the desired trajectory when friction increases is mainly due to the dependency of the CT on an accurate model of the robot and the lack of any adaptation. Figure 13 shows the tracking error for the three joints of the robot with all controllers; the tracking error for CT was excluded since it was significantly higher than the others. It can be seen from Figure 13 that the smallest tracking error was obtained with the RL controller. The performances with RBFNN and ILC were better than the PD performance. This is expected since both RBFNN and ILC are online adaptive controllers. The worst tracking performances were achieved with the non-adaptive controllers, CT and PD. Since the RL controller is a learning-based technique, it learns the changes in friction and updates its policy to overcome these changes. On the other hand, both PD and CT are not capable of changing the control policy and thus they both fail to sustain the performance of the arm. RL in this learning
task is continuous and thus it keeps learning on the fly and updates its parameters using the update laws indicated in (22) and (25). Since RL learns in two folds, using critic and actor NNs, it achieves better performance than ILC and RBFNN, as both of the latter methods have single learning approaches.
Task space simulation results
In this section, the simulation results for the end-effector position in the Cartesian space are discussed, and the desired trajectory is given in the operational task domain (X, Y and Z). Figure 14 shows the responses of the five controllers when the desired trajectory is rectangular and there is no variation in the friction forces. While the RL, CT, ILC and RBFNN controllers are all able to follow the required trajectory, the PD controller fails to follow the trajectory efficiently, as depicted in Figure 14. Table 2 lists the performances of the controllers in tracking the desired trajectory. As shown in Table 2, when there is no variation in the friction forces, the PD has the largest error compared to the other controllers. But when the friction forces increase by 100%, the CT performance degrades drastically and becomes the worst among all controllers, as depicted in Figure 15. Compared to the CT, the performance of the PD did not worsen significantly, but it was still inaccurate. On the other hand, ILC, RBFNN and RL adapted to the changes in friction, in contrast to PD and CT. To further compare the ILC, RBFNN and RL controllers, Figure 16 shows only the tracking errors of PD, ILC, RBFNN and RL; the CT error is excluded because of its visibly poor performance. Figure 16 and Table 2 show the superiority of the RL performance compared with the performance of ILC and RBFNN. For the 200% friction increase case, Figure 17 shows the performance of all the controllers except the CT, because of its out-of-range response. It is clear that the PD did not handle the variation in friction well and its performance worsened in comparison to the previous cases. The performances of ILC and RBFNN worsened when the friction increased to 200%, and both oscillate around the desired reference, as depicted in Figure 18 and Table 2. The RL, on the other hand, adapted to the friction increase and has the lowest error compared with the other four controllers. From Table 2, it was found, in
the 200% friction increase case, that RMSE with RL is less than RMSE with PD, ILC and RBFNN by 95%, 70% and 69.1%, respectively.Moreover, IAE with RL was 94%, 64.7% and 74.6% less than IAE with PD, ILC and RBFNN, respectively.This shows the ability of the RL controller to cope with high changes in the joints' friction forces that may occur in the system, hence maintaining the accuracy of the system.
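The RMSE and IAE figures quoted above follow the usual definitions for a sampled tracking-error signal. As a minimal sketch (the error traces below are synthetic illustrations, not the paper's data), the two metrics can be computed as:

```python
import numpy as np

def tracking_metrics(e, dt):
    """Compute RMSE and IAE for a sampled tracking-error signal.

    e  : 1-D array of tracking errors e(k) = x_desired(k) - x_actual(k)
    dt : sampling period in seconds
    """
    rmse = np.sqrt(np.mean(e ** 2))   # root-mean-square error
    iae = np.sum(np.abs(e)) * dt      # integral of absolute error (rectangle rule)
    return rmse, iae

# Illustrative comparison: hypothetical error traces for two controllers.
t = np.arange(0.0, 12.0, 0.01)            # 12 s run, 10 ms sampling
e_rl = 0.002 * np.sin(2.0 * np.pi * t)    # small, bounded error
e_pd = 0.040 * np.sin(2.0 * np.pi * t)    # much larger error
rmse_rl, iae_rl = tracking_metrics(e_rl, 0.01)
rmse_pd, iae_pd = tracking_metrics(e_pd, 0.01)
print(f"RL : RMSE={rmse_rl:.4f}, IAE={iae_rl:.4f}")
print(f"PD : RMSE={rmse_pd:.4f}, IAE={iae_pd:.4f}")
```

The percentage reductions reported in Table 2 are then simply `1 - rmse_rl / rmse_pd` (and likewise for IAE) expressed as percentages.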
Conclusions
This study presents an online, model-free control approach to deal efficiently with variations in the friction forces acting at the robot's joints. In most cases, friction variation in a robotic arm is unavoidable and typically occurs as the robot ages or operates in a harsh industrial environment. In such situations, model-based and linearized control techniques fail to achieve acceptable tracking performance, especially in the presence of large friction variation. Even most adaptive controllers that are tuned offline, and are therefore incapable of re-adjusting their adaptive parameters online, lose their effectiveness when significant friction variation occurs. In this study, an online model-free RL control approach was proposed and tested on a 3-DOF robotic arm in the presence of large friction variation in the joints. The position-tracking performance of the proposed controller was compared against four popular controllers using different desired trajectories in both the joint and Cartesian domains. The results showed that the RL controller achieves the best tracking performance among the five controllers, even in the presence of 100% and 200% increases in the joints' friction forces. With large friction variation in the robot, the performances in terms of RMSE and IAE degrade significantly for both the PD and CT controllers. The performance of the other adaptive controllers, ILC and RBFNN, worsened when the friction increased by 200% in the task domain, demonstrating the limited adaptability of these two controllers. This confirms the superiority of the RL controller, which adapted to the new friction conditions and followed the desired trajectory accurately with the smallest tracking error.
Figure 2. Adopted Stribeck friction model in the joints of the robotic arm.
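The Stribeck curve of Figure 2 can be sketched with the standard Stribeck friction form; the parameter values below are illustrative placeholders, not the values used in the paper:

```python
import math

def stribeck_friction(v, Fc=0.5, Fs=1.0, vs=0.1, delta=2.0, sigma=0.4):
    """Standard Stribeck friction model (illustrative parameters).

    v     : joint velocity [rad/s]
    Fc    : Coulomb friction level
    Fs    : static (stiction) friction level, Fs >= Fc
    vs    : Stribeck velocity, sets how fast the stiction peak decays
    delta : shape exponent (2 gives the common Gaussian form)
    sigma : viscous friction coefficient
    """
    if v == 0.0:
        return 0.0  # stiction at rest is handled separately in a full model
    dip = (Fs - Fc) * math.exp(-((abs(v) / vs) ** delta))
    return math.copysign(Fc + dip, v) + sigma * v

# Near zero velocity the friction is close to Fs; at higher speed the
# Stribeck dip vanishes and the Coulomb-plus-viscous terms dominate.
print(stribeck_friction(0.001))  # ~ Fs = 1.0
print(stribeck_friction(1.0))    # ~ Fc + sigma*v = 0.9
```

The 100% and 200% friction increases studied in the paper amount to scaling this friction torque at the stated switching times.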
Figure 3(a) shows one of those joints' trajectory sets, and Figure 3(b) shows that the positive-definite condition is satisfied. Moreover, the developed manipulator model has been validated against the experimental results presented in de Jesús Rubio et al. (2014), whose robot is similar to the robotic arm investigated in this paper. Following the validation approach of de Jesús Rubio et al. (2014), Figure 4(a) shows the input voltage for each joint, and Figure 4(b) shows the angle of each joint. The obtained results are very similar to those presented by the authors, which validates the robot model developed in this study.
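The positive-definiteness check described above can be sketched numerically: sample joint configurations along a trajectory and verify that the smallest eigenvalue of the mass matrix M(q) stays positive. The two-link planar-arm mass matrix below is a textbook stand-in, not the paper's 3-DOF model, and all parameter values are illustrative:

```python
import numpy as np

def mass_matrix_2link(q2, m1=1.0, m2=1.0, l1=0.5, lc1=0.25, lc2=0.25,
                      I1=0.02, I2=0.02):
    """Textbook mass matrix of a planar two-link arm (illustrative stand-in
    for the paper's 3-DOF model); q2 is the elbow angle [rad]."""
    c2 = np.cos(q2)
    m11 = m1 * lc1**2 + I1 + m2 * (l1**2 + lc2**2 + 2 * l1 * lc2 * c2) + I2
    m12 = m2 * (lc2**2 + l1 * lc2 * c2) + I2
    m22 = m2 * lc2**2 + I2
    return np.array([[m11, m12], [m12, m22]])

# Sample the elbow angle over a full revolution and record the smallest
# eigenvalue of M(q) at each configuration; eigvalsh returns eigenvalues
# in ascending order, so index 0 is the minimum.
min_eigs = [np.linalg.eigvalsh(mass_matrix_2link(q))[0]
            for q in np.linspace(-np.pi, np.pi, 181)]
print(min(min_eigs) > 0.0)  # True: M(q) is positive definite everywhere
```

The same sampling idea extends directly to a 3-DOF model: evaluate M(q) along the recorded joint trajectories of Figure 3(a) and confirm the minimum eigenvalue remains strictly positive.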
Figure 3. Verification results for the positive-definite condition of M(θ): (a) joints' trajectories; (b) positive-definite condition result.
Figure 4. Model validation: (a) input voltages for each joint; (b) response of the robot's joints.
Figure 6. Learning pattern of the RL controller with a sinusoidal desired trajectory.
Figure 7. Norm of the actor-network weights evolving with time.
Figure 9. Block diagram of the robotic arm with the PD controller.
Figure 8. Block diagram of the robotic arm with the CT controller.
Figure 10. Block diagram of the robotic arm with the ILC controller.
Figure 11. Block diagram of the robotic arm with the RBFNN controller.
Figure 12. Responses of the robot's three joints with the RL, PD, CT, ILC, and RBFNN controllers, with the friction forces increased by 100% and 200% at 4 s and 8 s, respectively.
Figure 13. Trajectory errors of the robot's three joints with the RL, PD, ILC, and RBFNN controllers, with the friction forces increased by 100% and 200% at 4 s and 8 s, respectively.
Figure 14. Responses of the robot's end effector for the rectangular trajectory in Cartesian space with the RL, PD, CT, ILC, and RBFNN controllers and with 0% friction increment.
Figure 15. Responses of the robot's end effector for the rectangular trajectory in Cartesian space with the RL, PD, CT, ILC, and RBFNN controllers and with 100% friction increment.
Figure 16. Tracking-error responses of the robot's end effector for the rectangular trajectory in Cartesian space with the RL, PD, ILC, and RBFNN controllers and with 100% friction increment.
Figure 17. Responses of the robot's end effector for the rectangular trajectory in Cartesian space with the RL, PD, ILC, and RBFNN controllers and with 200% friction increment.
Figure 18. Tracking-error responses of the robot's end effector for the rectangular trajectory in Cartesian space with the RL, PD, ILC, and RBFNN controllers and with 200% friction increment.
Table 1. Simulation parameter values used in the 3-DOF robotic arm model.
Table 2. RMSE and IAE for the PD, CT, ILC, RBFNN, and RL controllers with the rectangular trajectory in Cartesian space and with different friction variations.