| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
| string (3–9 chars) | 1 class: pes2o/s2orc | 1 class: v3-fos-license | string (1.54k–298k chars) | date: 1993-11-25 05:05:38 to 2024-09-20 15:30:25 | date: 1-01-01 00:00:00 to 2024-07-31 00:00:00 | dict |
id: 34705409 | source: pes2o/s2orc | version: v3-fos-license
Constructing a Large Variety of Dirac-Cone Materials in the Bi${}_{1-x}$Sb${}_{x}$ Thin Film System
We theoretically predict that a large variety of Dirac-cone materials can be constructed in Bi${}_{1-x}$Sb${}_{x}$ thin films, and we show here how to construct single-, bi-, and tri-Dirac-cone materials with various amounts of wave-vector anisotropy. These different types of Dirac cones can be of special interest for electronic device design, quantum electrodynamics, and other fields.
Dirac cone materials have recently attracted considerable attention. In an electronic band structure, if the dispersion relation E(k) can be described by a linear function E = v · k, where v is the velocity, k is the lattice momentum, and ℏ = 1, the point where E → 0 is called a Dirac point. A Dirac cone is a two-dimensional (2D) Dirac point. Dirac cone materials are interesting for electronic device design, quantum electrodynamics, desktop relativistic-particle experiments, etc. A single-, bi- or tri-Dirac-cone system has one, two or three different Dirac cones degenerate in E(k) in the first Brillouin zone. Graphene has two degenerate isotropic Dirac cones at the points K and K′ in its first Brillouin zone, and is therefore considered a bi-Dirac-cone system. Many novel phenomena are observed in this system [1], such as the room-temperature anomalous integer quantum Hall effect [2] and the Klein paradox [3], in which fermions around a Dirac cone can transmit through a classically forbidden region with a probability of 1. Dirac fermions can also be immune to localization effects and can propagate without scattering over large distances on the order of micrometers [4].
In this Letter, we show how to obtain single-, bi- and tri-Dirac-cone Bi 1−x Sb x thin films, and how to construct Dirac cones with different anisotropies. We also point out the possibility of constructing semi-Dirac cones in Bi 1−x Sb x thin films.
Bi 1−x Sb x has many special properties that are interesting from the point of view of anisotropic Dirac cones. We recall that bulk Bi 1−x Sb x is a crystalline alloy with a rhombohedral structure, which displays remarkable anisotropy. The first Brillouin zone of bulk Bi 1−x Sb x has one T point and three degenerate L points, L (1) , L (2) and L (3) , as illustrated in Fig. 1.
The bottom of the conduction band is located at the L points, while the top of the valence band can be located either at the T point or at the L points, depending on the Sb composition x when 0 ≤ x ≤ 0.10. In bulk Bi 1−x Sb x , the band structure varies as a function of the Sb composition x, the temperature T, the pressure P and the stress τ [5]. The conduction band is very close to the valence band at the L points, so that these bands are non-parabolically dispersed, as given by Eq. (1) [6], due to their strong interband coupling. When the L-point band gap E g is small, the dispersion relation E(k) becomes linear and Dirac points are formed as E(k) → ±v · k. The L-point band gap E g can approach 0 under some conditions, e.g. when P = 1 atm, E g → 0 at x ≈ 0.04 and T ≤ 77 K [7], or at x ≈ 0.02 and T ≤ 300 K [8]. For simplicity, this Letter focuses on the low-temperature range (T ≤ 77 K), where the band structure of Bi 1−x Sb x does not change much with temperature.
For Bi 1−x Sb x thin films, the 2D band structure also varies as a function of film thickness and film growth orientation, which provides considerable flexibility compared to bulk Bi 1−x Sb x . Furthermore, the quantum confinement effect in the thin-film system is potentially interesting, and its anisotropic properties suggest possible applications.
The energy spectrum near an L-point Dirac cone in a Bi 1−x Sb x thin film is calculated based on the iterative two-dimensional two-band model described below. Here the general two-band model for two strongly coupled bands obeys the relation [9] p · α · p = E(k)(1 + E(k)/E g ) (Eq. (2)), where p is the carrier momentum vector and α is the inverse-mass tensor. The two coupled key parameters α and E g are calculated iteratively in our model through Eqs. (3) and (4), where m 0 is the free electron mass, I is the identity matrix, l z is the film thickness and n denotes the nth step in the iteration. The procedure is repeated until α [n] and E g [n] become self-consistent, and then we obtain accurate solutions for α and E g for thin-film Bi 1−x Sb x . Because of the approximations that are valid for Sb composition 0 ≤ x ≤ 0.10, Eqs. (3) and (4) can be further simplified, converging to the analytical solutions of Eqs. (5) and (6). The dispersion relation E(k) can then be solved by the methods used in Ref. [10] from Eq. (7), where α̃ ij = α i3 α j3 /α 33 − α ij for i, j = 1 and 2, and α = α film (Bi 1−x Sb x ). The Hamiltonian for Bi and Bi 1−x Sb x based on k · p theory in Eq. (2) is equivalent to a Dirac Hamiltonian with a scaled canonical conjugate momentum [11]. Thus, Eq. (7) is also a good approximation for describing the Dirac cones. The band parameters we use in the present calculations are values measured by cyclotron resonance experiments [12]. According to Eqs. (1) and (7), when E g → 0 at an L point, the electronic dispersion relation becomes a perfect Dirac cone, where the energy E is exactly proportional to the lattice momentum k measured from that L point. When E g becomes large enough [13], the linearity of the dispersion relation becomes an approximation, and the Dirac cone becomes a quasi-Dirac cone. If α̃ 11 ≫ α̃ 22 with a finite E g , so that E ∝ k x along one in-film direction and E ∝ k y ² along the other, we call it a semi-Dirac cone. In a semi-Dirac cone, the fermions are relativistically dispersed in one direction (k x ) and classically dispersed in the other direction (k y ).
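Since Eqs. (3) and (4) are not reproduced in the text above, the following minimal Python sketch only shows how such a fixed-point iteration can be organized; the callables `update_alpha` and `update_gap` are hypothetical placeholders standing in for those equations, not the paper's actual expressions.

```python
import numpy as np

def solve_two_band_self_consistently(alpha_bulk, Eg_bulk, lz_nm,
                                     update_alpha, update_gap,
                                     tol=1e-6, max_iter=200):
    """Generic fixed-point iteration for the coupled pair (alpha, Eg).

    update_alpha(Eg, lz_nm) and update_gap(alpha, lz_nm) stand in for Eqs. (3)
    and (4); they must return the film inverse-mass tensor (3x3 array) and the
    film band gap (scalar), respectively.
    """
    alpha, Eg = np.asarray(alpha_bulk, dtype=float), float(Eg_bulk)
    for _ in range(max_iter):
        alpha_new = update_alpha(Eg, lz_nm)
        Eg_new = update_gap(alpha_new, lz_nm)
        # stop once both quantities stop changing, i.e. they are self-consistent
        if abs(Eg_new - Eg) < tol and np.max(np.abs(alpha_new - alpha)) < tol:
            return alpha_new, Eg_new
        alpha, Eg = alpha_new, Eg_new
    raise RuntimeError("self-consistency not reached")
```

The loop structure is all that is being illustrated here; the physics lives entirely in the two update rules, which would have to be taken from Eqs. (3) and (4) of the paper.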
We propose that single-, bi- and tri-Dirac-cone materials can be constructed from Bi 1−x Sb x thin films by using proper synthesis conditions to control the relative symmetries of the three L points. Bi 1−x Sb x thin films grown along the bisectrix axis can be single-Dirac-cone materials, as illustrated in Fig. 2a, where the 3-fold degeneracy of the L (1) , L (2) and L (3) points is broken. The value of the film-direction inverse-mass component α 33 film (Bi 1−x Sb x ) is much smaller for the L (1) point than the corresponding values for the L (2) and L (3) points. The L (1) -point gap E g (1) is negligibly small due to the small value of α 33 film (Bi 1−x Sb x ), so a Dirac cone is formed there, as shown in Fig. 2a. However, the L (2) - and L (3) -point band gaps E g (2) and E g (3) are much larger, which implies that a single Dirac cone at the L (1) point is constructed. Here we are taking advantage of both the extreme anisotropy of Bi 1−x Sb x and the quantum confinement effect of thin films. The quantum confinement effects for the L (1) -point carriers differ remarkably from those for the L (2) - and L (3) -point carriers due to the anisotropy of the L-point pockets. Figure 2b shows that a Bi 1−x Sb x thin film grown along the binary axis can be a bi-Dirac-cone material, where the L (1) -point band gap E g (1) is much larger than the L (2) - and L (3) -point band gaps E g (2) and E g (3) . Thus, E g (2) and E g (3) remain small enough [13] to form two degenerate Dirac cones (quasi-Dirac cones) at the L (2) and L (3) points. In Fig. 2c, the film is grown along the trigonal axis, so that the 3-fold symmetry of the three L points is retained. The three Dirac cones (quasi-Dirac cones) at the L (1) , L (2) and L (3) points are degenerate in energy, which makes this film a tri-Dirac-cone material. By definition, an exact Dirac cone has E g = 0. However, E g = 0 Dirac cones are seldom achieved experimentally, so it is practical to consider E g ≤ k B T as a criterion for an exact Dirac cone. In the temperature range below 77 K that we are considering in this paper, the thermal smearing k B T corresponds to ∼ 7 meV. For the criterion of a quasi-Dirac cone, we can use k B T ≤ E g ≤ E g (Bi) bulk , where E g (Bi) bulk ≃ 14 meV. Thus, we consider the three Dirac cones in Fig. 2c as quasi-Dirac cones, which are plotted for the case of l z = 100 nm and E g ≃ 10 meV. If exact Dirac cones are needed, a larger film thickness can be chosen, e.g. l z = 200 nm, which satisfies E g ≤ k B T .
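As a quick check of the thermal-smearing figure quoted above, using the Boltzmann constant k B ≈ 8.617 × 10⁻⁵ eV/K:

$$k_B T\big|_{T = 77\,\mathrm{K}} \approx (8.617\times10^{-5}\ \mathrm{eV\,K^{-1}})(77\ \mathrm{K}) \approx 6.6\ \mathrm{meV} \approx 7\ \mathrm{meV},$$

which sits below the quoted bulk value E g (Bi) bulk ≃ 14 meV used as the upper bound for a quasi-Dirac cone.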
We now show how to construct anisotropic Dirac cones with different shapes of the wave-vector dependence as a function of cone angle. To characterize the anisotropy of a Dirac cone, we define an anisotropy coefficient γ = v max /v min , where v max and v min are the maximum and minimum in-film carrier group velocities for a Dirac cone, and the group velocity is defined as v(k) = ∇ k E(k). For a perfect Dirac cone, v is a function only of the direction of the lattice momentum k measured from that L point and is independent of the magnitude of k. For an imperfect Dirac cone or a quasi-Dirac cone, this magnitude invariance is exact only when k is large, and becomes an approximation around the apex when k is small [13]. Fig. 3 gives an important guide on how to construct anisotropic L (1) -point Dirac cones. In Fig. 3a, the anisotropy coefficient γ for the L (1) -point Dirac cone is shown as a function of film growth orientation. For a film grown along the bisectrix axis, γ has its minimum value γ min ≈ 2, where the carrier velocity v(k) for the L (1) -point Dirac cone varies only by a small amount with the direction of k, as shown in Fig. 3b. For a film grown along the binary axis, γ ≈ 10, where v(k) varies more with the direction of k, as shown in Fig. 3c, compared to Fig. 3b. For a film grown along the trigonal axis, γ has its maximum value γ max ≈ 14, where v varies significantly with the direction of k, as shown in Fig. 3d.
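To make the definition concrete, here is a small numerical sketch (not from the paper) that estimates γ from finite-difference group velocities sampled on a ring in k-space; the toy dispersion and its two velocity values are invented for illustration only.

```python
import numpy as np

def anisotropy_coefficient(energy, k_mag=1e8, n_dir=720):
    """Numerically estimate gamma = v_max / v_min for a 2D cone E(kx, ky).

    The group velocity v(k) = grad_k E(k) is evaluated by central finite
    differences on a ring |k| = k_mag (far from the apex, where even a
    quasi-Dirac cone is effectively linear).
    """
    thetas = np.linspace(0.0, 2.0 * np.pi, n_dir, endpoint=False)
    dk = 1e-3 * k_mag
    speeds = []
    for th in thetas:
        kx, ky = k_mag * np.cos(th), k_mag * np.sin(th)
        vx = (energy(kx + dk, ky) - energy(kx - dk, ky)) / (2.0 * dk)
        vy = (energy(kx, ky + dk) - energy(kx, ky - dk)) / (2.0 * dk)
        speeds.append(np.hypot(vx, vy))
    speeds = np.array(speeds)
    return speeds.max() / speeds.min()

# Toy anisotropic cone with velocities 1.0e6 m/s and 1.0e5 m/s (made-up numbers):
gamma = anisotropy_coefficient(lambda kx, ky: np.hypot(1.0e6 * kx, 1.0e5 * ky))
print(round(gamma, 2))  # ~10, i.e. v_max / v_min of the toy cone
```

For this toy dispersion the estimate returns γ ≈ 10, the ratio of the two principal velocities, and the same routine can be applied to any numerically tabulated E(k).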
Researchers have tried to realize semi-Dirac cones in oxide layers [14], where the fermions are relativistic in one direction and classical in the orthogonal direction. In the present work, we have found that it is possible to construct semi-Dirac cones in the Bi 1−x Sb x thin-film system. According to Eqs. (1), (2) and (7), for an in-film direction k̂, where k̂ is a unit vector of k in the in-film lattice-momentum space, whether the dispersion relation is linear or parabolic depends on the L-point band gap E g and on the α̃ projection along that direction, defined by α̃ k̂ = k̂ · α̃ · k̂, where α̃ is given by Eq. (7). When E g is small and α̃ k̂ is large [13], the energy is linearly dispersed along k̂; when E g is large and α̃ k̂ is small, the energy is parabolically dispersed along k̂. To construct a semi-Dirac cone, we need to find a proper L-point band gap E g and anisotropy γ, such that E g /α̃ max is small and E g /α̃ min is large. In this case, the electronic energy is linearly dispersed along the α̃ max direction and parabolically dispersed along the α̃ min direction. Here α̃ max and α̃ min are the maximum and minimum values of α̃ k̂ , which correspond to the principal axes of the 2D tensor α̃. The L-point band gap varies as a function of the film thickness l z , the growth orientation and the Sb composition x, as shown by the calculated results given in Fig. 4. To construct a semi-Dirac cone, we need to find a growth direction that ensures a significant anisotropy, and a large enough value of E g to ensure that E(k) becomes parabolically dispersed along the α̃ min direction. However, E g should not be too large, because the linear dispersion relation along the α̃ max direction must be maintained. These requirements can all be met by choosing the proper Sb composition x, film thickness l z and growth orientation, as shown in Fig. 4. From Figs. 2 and 3, we know that the L (1) -point Dirac cone has its maximum k-vector anisotropy when the growth orientation is near the trigonal axis. We also see that the thin film becomes a single-Dirac-cone material when the growth direction is near the bisectrix axis. Thus, a good strategy to construct a semi-Dirac cone is to choose a growth orientation between the trigonal and the bisectrix axes, in the trigonal-bisectrix plane. An example of a semi-Dirac cone is shown in Fig. 5, where the example sample is grown along a direction that is 40° from the trigonal axis, 50° from the bisectrix axis, and perpendicular to the binary axis. A large Sb composition (e.g. x ≈ 0.10) and a small film thickness (e.g. l z ≈ 100 nm) are preferred to make E g large, and x = 0.10 and l z = 100 nm are chosen for this example sample.
In conclusion, we have proposed the growth of Bi 1−x Sb x thin films which, for selected Sb concentrations and film-normal directions, allow different Dirac-cone materials to be constructed. We have shown how to construct single-, bi- and tri-Dirac-cone materials, as shown in Fig. 2, as well as quasi- and semi-Dirac-cone materials, as shown in Fig. 2c and Fig. 5, respectively.
added: 2011-11-23T15:25:37.000Z | created: 2011-11-23T00:00:00.000 | metadata:
{
"year": 2012,
"sha1": "46a0b88095e3458875eaee1b9c7d85145e445fff",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1111.5525",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "46a0b88095e3458875eaee1b9c7d85145e445fff",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science",
"Physics"
]
}
id: 446184 | source: pes2o/s2orc | version: v3-fos-license
Probing small-x parton densities in proton-proton (-nucleus) collisions in the very forward direction
We present calculations of several pp scattering cross sections with potential applications at the LHC. Significantly large rates for momentum fraction, x, as low as 10^-7 are obtained, allowing for possible extraction of quark and gluon densities in the proton and nuclei down to these small x values provided a detector with good acceptance at maximal rapidities is used.
I. INTRODUCTION
In this paper, we study the measurement of quark and gluon distribution functions inside the proton at very small momentum fractions x. We consider several processes in pp collisions at a center-of-mass energy ( √ s) of 14 TeV and show that the range of useful measurements extends down to about x ∼ 10 −6 for most processes, and even down to x ∼ 10 −7 for the Drell-Yan process. 1 Such measurements require a detector which has sufficient acceptance at maximal rapidities. Plans for building a detector of this type (FELIX) are under discussion at the LHC []. We show how the parton densities may be determined. The event rates are high; to estimate them, we use the CTEQ3M distributions [] to provide an extrapolation from the region where measurements currently exist, which is x ≥ 10 −4 . Although our results are obtained for proton-proton interactions, a similar analysis can easily be applied to proton-nucleus interactions in the same accelerator. For example, in the case of proton-calcium collisions, at say a center-of-mass energy of 63 TeV, the same region, down to x ∼ 10 −6 and lower, can also be probed. We show how data from such studies would provide information on the parton densities in nuclei at both large and small x. Previous data for hard processes on nuclei have been confined to fixed-target energies, and so the range of processes for which perturbative QCD (pQCD) calculations are reliable is limited.
At the LHC, the integrated luminosity for a proton-proton luminosity of 10 31 cm −2 s −1 and a run-time of 10 7 sec/yr is 100 pb −1 . However, there is a loss of luminosity in proton-nucleus collisions; for protons on calcium, a luminosity of 10 30 cm −2 s −1 with a shorter running time is envisaged. This loss of luminosity is to a large extent compensated by increased cross sections, which are approximately proportional to the mass number A for large x. At small x, the cross sections presumably behave more like A 2/3 , but as we will see, the event rates are so large that the resulting loss of event rate relative to proton-proton collisions will still enable a lot of physics to be probed.
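For reference, the quoted integrated luminosity is just the instantaneous luminosity times the run time, using 1 pb⁻¹ = 10³⁶ cm⁻²:

$$\int L\,dt = (10^{31}\ \mathrm{cm^{-2}\,s^{-1}})\times(10^{7}\ \mathrm{s}) = 10^{38}\ \mathrm{cm^{-2}} = 100\ \mathrm{pb^{-1}}.$$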
II. KINEMATICS AND CROSS SECTIONS
We consider the hadronic processes listed in expression (1) at a center-of-mass energy √s = 14 TeV, where γ denotes a photon, ll̄ a lepton pair, Q (Q̄) a heavy quark (antiquark) and W, Z are the weak vector bosons. To calculate the cross sections, we use the pQCD formalism, with the lowest-order hard-scattering cross sections found, for example, in Refs. []. For the production of jets, photons and heavy quarks, we impose a cut p T,min = 10 GeV, which keeps us in the region where perturbative calculations are applicable. Since our main interest is to probe parton densities at small x, we will mostly need asymmetric configurations of the momentum fractions, x 1 and x 2 , of the partons entering the hard scattering. From the kinematic inequality x 1 x 2 s ≥ 4p 2 T,min we deduce that the momentum fractions obey x 1 x 2 ≥ 2 × 10 −6 . High-energy data on soft hadron production indicate [] that for fixed y max − y, where y is the particle rapidity, the soft hadronic multiplicity does not increase with energy, although at y = 0 it grows rapidly with s. Thus, for the large values of y that we use in this analysis, soft interactions result in much smaller underlying-event E T pedestals than for y ∼ 0 at LHC energies. Moreover, for x substantially larger than x min = 2 × 10 −6 /max(x 1 , x 2 ), the counting rates are so high for moderate p T of the jets (in processes Jγ and JJ in expression (1)) that it would be possible to restrict the analysis to the region of sufficiently large x (for one of the partons) such that the jets are still produced at large y. Indeed, the expected rates are so high that it would also be possible to check the role of the pedestals by taking data in several x bins. Hence, our choice of p T,min = 10 GeV appears safe. Since our interest is only in estimating rates and not in a detailed extraction of parton densities from data, it is sufficient to perform leading-order calculations for most of the processes considered. Of course, when actual data become available, it will be necessary to fit the parton densities to the data with the aid of theoretical formulae at the best possible accuracy, at least next-to-leading order.
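Spelled out, the bound quoted above follows from requiring the partonic invariant mass to be at least 2p T,min for a 2 → 2 scattering:

$$x_1 x_2\,s = \hat{s} \;\ge\; 4p_{T,\min}^2 \quad\Rightarrow\quad x_1 x_2 \;\ge\; \frac{4\,(10\ \mathrm{GeV})^2}{(14\,000\ \mathrm{GeV})^2} \approx 2\times10^{-6}.$$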
At small x, the gluon density is substantially larger than the quark densities. Since at lowest order the Drell-Yan and vector-boson processes are given by quark-antiquark annihilation, without a gluon-induced subprocess, we will calculate these processes to next-to-leading order, with the hard-scattering coefficients in the MS-bar scheme found in Ref. []. For jet, photon and heavy-quark production, we will set the renormalization and factorization scale µ to the commonly used value of p T . For Drell-Yan and vector-boson production, we set µ to the pair mass and vector-boson mass, respectively.
III. JET PLUS PHOTON
We calculate the cross section for producing a jet and a photon, putting the events in bins of p T and x, where x is the minimum of the momentum fractions, x 1 and x 2 , of the incoming partons. The x bin is defined by expression (2): max(x 1 , x 2 ) < 0.8 and x/∆ < min(x 1 , x 2 ) < x∆, where ∆ = 10 1/20 . This choice of ∆ corresponds to 10 bins per decade in x.
[Fig. 1 caption: The cross section σ(x, p T ) for jet + γ production as a function of x = min(x 1 , x 2 ) and p T , integrated over a p T bin which is 20-40% of the central value (see text).]
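As an illustration of the binning, a short sketch (the decade 10⁻⁶ to 10⁻⁵ is chosen arbitrarily here):

```python
import math

# Logarithmic x bins: each bin is (x/Delta, x*Delta) with Delta = 10**(1/20),
# so successive bin centers are spaced by Delta**2 and there are 10 bins per decade.
delta = 10 ** (1 / 20)
centers = [1e-6 * delta ** (2 * i + 1) for i in range(10)]   # one decade of bin centers
edges = [(x / delta, x * delta) for x in centers]
assert math.isclose(edges[0][0], 1e-6) and math.isclose(edges[-1][1], 1e-5)
```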
One sees that large cross sections (above 10 pb) are obtained for p T up to about 100 GeV. Combined with a luminosity of 100 pb −1 /yr, this gives at least hundreds of events in every bin, which will give good statistical precision. The strong fall-off of the curves at their left end is a result of approaching the kinematic limit: x > 4p 2 T,min /s. To get a useful number of events, it is sufficient for x to be about twice its limit.
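The statistical precision quoted here follows from the usual counting estimate; for a bin at the 10 pb level,

$$N = \sigma \int L\,dt \approx (10\ \mathrm{pb}) \times (100\ \mathrm{pb^{-1}}) = 10^{3}\ \text{events}, \qquad \delta N / N \sim 1/\sqrt{N} \approx 3\%.$$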
The cross section is dominated by gluon-quark scattering. In Fig. 2, the cross section integrated over p T > 10 GeV is split into gq and qq components, and we see that the gq term is about an order of magnitude larger over the whole range shown. As before, the fall-off at the left is a consequence of the chosen minimum p T . Since the incoming quark is typically at large x, where its distribution is already known fairly accurately [], photon-jet production provides a direct measurement of the gluon density for small x in the range x > 2.5 × 10 −6 .
To illustrate the accuracy that can be achieved in a determination of the gluon density, we show in Fig. 3 the CTEQ3M gluon momentum density xG(x, Q 2 = p 2 T ) together with the statistical errors on measurements that correspond to the cross sections in Fig. 1. Notice that there is sufficient precision not merely to measure the gluon density but also to test its evolution. The cross sections presented here are, of course, the result of an extrapolation of parton densities from a region where they have been measured to much smaller x values.
IV. LEPTON PAIRS
We next consider lepton pair production from hadrons (the Drell-Yan process). Using the same kinematic region defined by expression (2), we obtain the next-to-leading order cross sections shown in Fig. 4. Again, the relatively large cross sections lead to a large number of events expected at the LHC. More significantly, by measuring cross sections at smaller values of the pair mass Q, we can obtain parton densities at lower values of x than in the other processes considered in this paper. The advantage of the Drell-Yan process is that one can go to fairly small Q values while still trusting the pQCD formalism.
The dominant contribution for this process comes from the uū channel; the qg contributions are about 30 times smaller than those from qq̄. Using quark distributions q(x 2 ) which are well determined at large x 2 , one may then extract antiquark densities q̄(x) at small x. We show in Fig. 5 a plot of the extrapolated CTEQ3M antiquark density xū(x, Q 2 ) with the statistical errors based on the cross sections shown in Fig. 4. Note that we consider the Drell-Yan process down to relatively small pair masses ∼ 5 GeV. This assumes that, in the experiment, the detector will be able to suppress the background due to heavy-flavor decays. Such suppression might be achieved using information from the forward calorimeters as well as from microvertex detectors.
V. TWO JETS
We next consider the cross section for the production of dijets. Fig. 6 shows the cross section in bins of p T and x, in the region defined by expression (2). The cross sections obtained are about 100 times larger than all the others computed in this paper.
In Fig. 7, we show the x distribution of the cross section. Aside from the total cross section (solid curve), we also exhibit the contributions from the different partonic channels: gg (dashed), gq (upper dotted), qq (lower dotted) and qq̄ (dot-dashed curve). The largest contributions to the cross section come from the gg and gq channels. This process thus provides an independent consistency check of the parton distributions, primarily the gluon density, obtained from other processes.
VI. W-BOSON PRODUCTION
Analysis of the leading diagrams for the x → 1 limit [] suggests that r → 0.2 in this limit. A direct measurement independent of any data requiring knowledge of nuclear effects would be valuable. At next-to-leading order, we calculate the cross sections for W + and W − production in bins of x. One observes from Fig. 8 that significant rates can be measured up to x ∼ 0.8. This implies that quark distributions at Q 2 ∼ 10 4 GeV 2 can be measured from the sum of the cross sections for production of W + and W − bosons down to x ∼ 5 × 10 −5 . At the same time, one would be able to distinguish between different scenarios for the asymptotic behaviour of the u/d ratio in the x → 1 limit.
VII. HEAVY QUARKS
We show in Figs. 9 and 10 the (x, p T ) and x distributions, respectively, for charm quark production in the region defined by expression (2). The same general features as in the plots for jet plus photon and dijets are observed.
In Fig. 10, the contributions from the two active partonic channels are also shown: gg (dashed curve) and qq̄ (dot-dashed curve). The total cross section is of order 1 to 10 2 pb when 2.5 × 10 −6 ≤ x ≤ 4 × 10 −6 . In this region, the qq̄ contribution dominates, being about 10 times larger than that of the gg channel. In the region 10 −4 ≤ x ≤ 10 −3 , the total cross section rises, is of order 10 5 pb and mostly consists of gg-channel contributions; the qq̄ contribution drops to about 10% of the total. Thus, it is possible to extract the antiquark (or quark) density at x ∼ 3−4 × 10 −6 from the charm quark cross section, providing a cross-check of the antiquark densities obtained from Drell-Yan measurements. In this case, the background consisting of gg-channel contributions is well determined with the use of the gluon density obtained from the jet plus photon or dijet cross section as described in Sections III and V. Similarly, it is also possible to extract the gluon density at x ≥ 10 −4 from this type of cross section, providing a cross-check of gluon densities derived from other measurements.
VIII. CONCLUSIONS
We have shown, in several independent ways, how parton distributions in nucleons and nuclei (nuclear shadowing) down to x ∼ 10 −7 may be measured from hadron-hadron interactions at LHC energies. In particular, we show in Figs. 11 and 12 the regions in energy E = (p T , Q or M W ) and momentum fraction x where measurements can be made for extracting antiquark and gluon densities, respectively, at better than 20% accuracy from the different hadronic processes considered, assuming an integrated luminosity of 100 pb −1 .
In Fig. 11, the region marked with cross-hatch shading shows where the Drell-Yan process determines the antiquark densities with better than 20% accuracy. The dashed horizontal line is where W production determines these densities. The small triangle with horizontal shading shows where charm production determines the antiquark densities. The dotted curve shows the minimum value of x as a function of the pair mass Q in the Drell-Yan process.
Similarly, we show in Fig. 12 the regions where data on the different processes can determine the gluon density. The regions marked with vertical, horizontal and slanted lines correspond to jet plus photon, charm quark and dijet production, respectively. The dotdashed curve represents the minimum allowed value of x as a function of the p T of the jet or heavy quark.
In our calculations, we have assumed that the usual DGLAP evolution equations apply to the parton densities in the very small x region being considered, and that the CTEQ3M parton distributions can be extrapolated outside the region in which they are valid. However, it may well be that other physics effects, notably gluon recombination [], are important at these small x values. As we have seen, by measuring parton densities over a wide range of scales and in several processes, one can in fact test whether DGLAP evolution and the rest of the factorization method are valid; see, for example, Figs. 3 and 5. Of course, if the measurements fail consistency checks, for example if DGLAP evolution is found to fail, then one can no longer say that it is the parton densities that are being measured, but rather that one is probing new effects at small x.
[Fig. 11 caption: The region where measurements of the antiquark densities in the proton can be made to within 20% accuracy, assuming an integrated luminosity of 100 pb −1 . The region marked with cross-hatch shading corresponds to the Drell-Yan process. The dashed horizontal line is where W production dominates. The dotted curve shows the minimum value of x as a function of the pair mass in the Drell-Yan process. The small triangle with horizontal shading shows where charm production is dominated by the qq̄ → cc̄ subprocess.]
[Fig. 12 caption: The region where measurements of the gluon density in the proton can be made to within 20% accuracy, assuming an integrated luminosity of 100 pb −1 . The dot-dashed curve represents the minimum allowed value of x as a function of the p T of the jet or heavy quark. The regions marked with vertical, horizontal and slanted lines correspond to jet plus γ, charm quark and dijet production, respectively.]
added: 2014-10-01T00:00:00.000Z | created: 1997-10-27T00:00:00.000 | metadata:
{
"year": 1997,
"sha1": "513919991e2f9240db4ecdfcb867103a7e161b24",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/9710490",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "66ad118e284c1ad1ee41f90b9dd65d18f39d3992",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
id: 8686444 | source: pes2o/s2orc | version: v3-fos-license
Distinguishing Characteristics between Pandemic 2009–2010 Influenza A (H1N1) and Other Viruses in Patients Hospitalized with Respiratory Illness
Background Differences in clinical presentation and outcomes among patients infected with pandemic 2009 influenza A H1N1 (pH1N1) compared to other respiratory viruses have not been fully elucidated. Methodology/Principal Findings A retrospective study was performed of all hospitalized patients at the peak of the pH1N1 season in whom a single respiratory virus was detected by a molecular assay targeting 18 viruses/subtypes (RVP, Luminex xTAG). Fifty-two percent (615/1192) of patients from October, 2009 to December, 2009 had a single respiratory virus (291 pH1N1; 207 rhinovirus; 45 RSV A/B; 37 parainfluenza; 27 adenovirus; 6 coronavirus; and 2 metapneumovirus). No seasonal influenza A or B was detected. Individuals with pH1N1, compared to other viruses, were more likely to present with fever (92% vs. 70%), cough (92% vs. 86%), sore throat (32% vs. 16%), nausea (31% vs. 8%), vomiting (39% vs. 30%), abdominal pain (14% vs. 7%), and a lower white blood count (8,500/µL vs. 13,600/µL; all p-values < 0.05). In patients with cough and gastrointestinal complaints, the presence of subjective fever/chills independently raised the likelihood of pH1N1 (OR 10). Fifty-five percent (336/615) of our cohort received antibacterial agents, 63% (385/615) received oseltamivir, and 41% (252/615) received steroids. The mortality rate of our cohort was 1% (7/615) and was higher in individuals with pH1N1 compared to other viruses (2.1% vs. 0.3%, respectively; p = 0.04). Conclusions/Significance During the peak pandemic 2009–2010 influenza season in Rhode Island, nearly half of patients admitted with influenza-like symptoms had respiratory viruses other than influenza A. A high proportion of patients were treated with antibiotics, and pH1N1 infection had higher mortality compared to other respiratory viruses.
Introduction
Viral respiratory illnesses are responsible for large numbers of hospital admissions each year leading to substantial morbidity and mortality [1]. The etiologic agents include a diverse group of viruses, such as influenza A which is responsible for intermittent pandemics [2]. Reassortment of swine-origin and human strains led to circulating pH1N1 [3,4] and a significant increase in hospital admissions during the 2009-2010 influenza season.
Timely identification of influenza is important as the administration of neuraminidase inhibitors may limit duration and severity of illness if given early [5]. Rapid tests were found to be insensitive in the diagnosis of pH1N1 [6] and unable to subtype the influenza virus. Molecular techniques replaced some of these tests, but the availability, expense and technical training limited widespread use of this technology [7]. Therefore, many clinicians relied on clinical symptoms to diagnose influenza during the pandemic [8].
The inability to reliably diagnose a viral respiratory infection such as influenza A, often leads to coverage of possible bacterial etiologies [27]. Overuse of antibiotics is not without consequence and can lead to complications including Clostridium difficile infection and high rates of resistance [28]. Thus, an accurate diagnosis of influenza and other respiratory viral infections is important to avoid overuse of antibacterial agents and direct appropriate antiviral therapy.
In response to the diagnostic challenges presented by influenza infection, our hospital system instituted a polymerase chain reaction (PCR)-based molecular panel that was able to identify 18 different respiratory viruses. The aim of this study was to examine differences in clinical, laboratory and radiographic findings between pH1N1 and other respiratory viruses with the goal to assist clinicians in more effectively diagnosing and treating pH1N1. To our knowledge, this is the first study to directly compare clinical parameters of pH1N1 to other respiratory viruses using a sensitive molecular diagnostic methodology in a large cohort.
Results
During our peak pH1N1 season, 1,438 RVP samples were collected. Of these, 1,192 were from inpatients (340 samples from patients <5 years, 240 samples from patients 5–18 years, and 612 samples from patients ≥19 years). Six hundred and fifteen patients with positive results were included in the final analysis (Figure 1), with a mean age of 20 years (range: 0–97 years). Forty-seven percent of patients had pH1N1 and 53% had another respiratory virus, with rhinovirus being the second most prevalent in the population analyzed (34%, Table 1). Fewer patients with pH1N1 were under the age of five years compared to those with other viruses, and individuals with pH1N1 were less likely to have cardiac co-morbidities, malignancy, or be admitted from a nursing home. Individuals with pH1N1 were more likely to report a sick contact or to use tobacco.
Individuals with pH1N1 were more likely to present with the following symptoms when compared to those with other respiratory viruses: subjective fever or chills, sore throat, nausea, vomiting, abdominal pain, weakness, fatigue, headache, myalgias, and chest pain. Patients with other respiratory viruses were more likely to present with changes in mental status including dizziness or lethargy (Table 2).
On presentation to the emergency room, patients with pH1N1 exhibited a higher maximum temperature, lower maximum heart rate, respiratory rate, systolic blood pressure, and oxygen saturation (Table 3). Patients with pH1N1 were more likely to have lower white blood counts, platelet counts, and potassium levels. Alternatively, patients with pH1N1 were more likely to have higher hemoglobin/hematocrit and albumin levels.
Of the 529 patients who received a chest radiograph, a greater number of patients with pH1N1 had no acute findings compared to other respiratory viruses (Table 4). Other respiratory viruses were more likely to have an interstitial opacity consistent with viral infection on chest radiograph. Thirty percent (161/529) of patients with a chest radiograph had focal or multi-focal airspace findings.
In patients with cough, the presence of subjective fever/chills independently increased the likelihood of pH1N1 infection ( Table 6). In patients with cough and gastrointestinal complaints, subjective fever/chills independently increased the likelihood of having pH1N1. Using fever alone did not raise the likelihood of having influenza infection versus another respiratory virus. Using age as a covariate, patients 19 to 59 years of age had the highest likelihood of presenting with pH1N1 compared to other age groups.
Discussion
In patients with viral respiratory infections, diagnosis of influenza is important to provide timely and efficient treatment with neuraminidase inhibitors. Rapid antigen tests were insensitive in the diagnosis of influenza during the 2009-2010 pandemic season [29,6,30]. Furthermore, these tests could not distinguish between different influenza A subtypes [31]. Seasonal influenza A (H1N1) was resistant to oseltamivir, whereas pH1N1 was not, making this a critical distinction [32]. While state public health labs had a CDC-based PCR assay for distinguishing influenza subtypes, an FDA-cleared product for clinical laboratories was delayed [33]. Therefore, many institutions, including our own, implemented a molecular-based test to diagnose influenza A [34]. The Luminex xTAG RVP was highly sensitive and able to distinguish 18 viruses causing respiratory infections, including different influenza subtypes.
With the introduction and effectiveness of molecular testing, one goal is more efficient use of antimicrobials and the reduction of unnecessary antibiotic use. Despite the relatively rapid turnaround time of the PCR-based tests, greater than half of the patients in our cohort with documented viral infections received antibacterial agents, presumably for empiric coverage of bacterial pneumonia. Furthermore, almost half of patients without influenza received oseltamivir. As such, implementation of rapid diagnostic testing for respiratory pathogens alone may not limit antibiotic use without other interventions. These data suggest overuse of antibacterial and antiviral agents and an opportunity for a robust antimicrobial stewardship program.
Clinical characteristics of hospitalized patients with pH1N1 were variable. Fever and cough, two criteria for ILI, often occur in influenza A patients [3,14,12,[15][16][17][18]20,21,24]. Although more patients with pH1N1 presented with fever and sore throat compared to those with other viruses in our population, the difference was not enough to make a firm clinical diagnosis of influenza. Furthermore, there was no significant difference in cough alone between patients infected with pH1N1 and other respiratory viruses. However, fever, cough, and gastrointestinal symptoms increased the likelihood of pH1N1 almost 10-fold in the pediatric population and may be useful as a preliminary guide to prompt clinicians to treat influenza infection. Chest radiographs may be useful in diagnosing superimposed bacterial infection. While airspace disease was observed more often in patients with non-influenza viruses, there were no chest radiographic findings that distinguish influenza infection. Over half of patients with pH1N1 had non-specific findings on chest radiograph as previously reported [24].
Our study supports previous findings that pH1N1 tends to infect younger adults, sparing the elderly and young children [3,14,15,17,19,18,20,21]. We found lower rates of influenza from nursing home patients reflecting this age distribution. Of those that died or were hospitalized, many had co-morbidities as previously reported [15,17,19,21,35,36]. In contrast to other studies [15,37,38,21,19], we did not find a high infection or mortality rate during pregnancy but our study was underpowered due to the low number of pregnant women in our cohort.
The mortality rate of 2.1% for hospitalized patients with pH1N1 infection in our cohort was lower than other reports [14,15]. Despite this, it was significantly higher than the mortality associated with other respiratory viral infections and highlights the importance of accurate diagnosis and early treatment of influenza infection. Aside from the retrospective nature of our study, a potential limitation was the small number of pregnant women, likely due to the presence of a neighboring obstetrics and gynecology hospital. A second limitation was the time period for which patients presenting with ILI were evaluated (6 weeks at the peak of the pandemic), whereas a typical respiratory season would last several months and include a greater variety of viruses, especially in the pediatric population. In fact, our data (not shown) do indicate that after the pandemic wave at our institution, a typical peak for RSV, metapneumovirus and parainfluenza viruses followed the presence of pH1N1, much like the rest of the country. A third limitation was that pH1N1 confirmatory testing was not performed for all non-subtypeable influenza A viruses. However, recent literature suggests that 100% of non-subtypeable influenza A H1 identified by the xTAG RVP was pH1N1 [39] and that misinterpretation is uncommon [40]. In addition, our initial investigation of a large number of strains early in the pandemic with the CDC PCR assay confirmed these findings. Many prior studies only assessed the clinical characteristics of patients with influenza, or compared them to individuals whose respiratory tests were negative for influenza, but they did not further delineate those without influenza or positive for another virus [14,16,18,20,21,25,26]. We set out to compare pandemic 2009 influenza A (H1N1) to other respiratory viruses in patients with ILI. To our knowledge, these results provide the first comparison of clinical characteristics between pH1N1 and other common respiratory viruses.
While a specific clinical presentation could not confirm pH1N1 in patients with cough and gastrointestinal complaints, the presence of subjective fever and/or chills increased the likelihood of pH1N1 infection versus another virus. Respiratory infection with pH1N1 infection more often resulted in death compared to other respiratory viruses and should be treated aggressively with supportive measures and antiviral medications. Despite the use of RVP testing, many influenza-infected patients received antibacterial agents and many patients without influenza received antivirals. Use of a highly accurate RVP in conjunction with a robust antimicrobial stewardship program will be necessary to assure prudent antibacterial and antiviral agent use in the future.
Ethics
The study was approved by the Rhode Island Hospital institutional review board. A waiver of informed consent was obtained before onset of the study.
The study included all patients with a respiratory virus panel (RVP) result from a nasopharyngeal swab specimen and who were subsequently hospitalized. Our hospital system consists of Rhode Island Hospital, a tertiary care center licensed for 719 beds, including Hasbro Children's Hospital, as well as The Miriam, Newport and Bradley Hospitals, licensed for 247, 129 and 60 beds, respectively. All respiratory specimens were processed in the microbiology facility at Rhode Island Hospital. Our 18-virus panel detected influenza A/B (H1, H3, and non-subtypeable A consistent with pH1N1), respiratory syncytial virus A and B, adenovirus, metapneumovirus, rhinovirus/enterovirus, parainfluenza 1, 2, 3, 4 and coronaviruses (NL63, OC43, HKU1, and 229E). The panel reported influenza A as seasonal human influenza A (H1N1), seasonal human influenza A (H3N2), or a non-subtypeable influenza A virus consistent with pH1N1. The Rhode Island Department of Health (DOH) confirmed the initial 30 specimens detected by the xTAG RVP as non-subtypeable influenza A H1 as pH1N1, utilizing primers and probes distributed by the Centers for Disease Control and Prevention (CDC). Thus, subsequent non-subtypeable influenza A H1 specimens detected by the RVP were reported as pH1N1.
Statistical Analysis
Medical records of all cases were reviewed. Initial chest radiographs and subsequent chest CTs were reviewed and interpreted independently by three board-certified radiologists. Consensus on all findings was reached. Logistic regressions were used to examine the relationships between variables and patients testing positive for pH1N1 compared with patients testing positive for a different respiratory virus. Subsequently, a series of multiple logistic regressions was constructed by integrating the results from previous literature and our logistic regression results. Individual interactions between variables were checked and those with p < 0.15 were retained, arriving at a final model. Special effort was placed on using symptoms and other clinical information. Co-linearity between predictors was minimized by forming theoretically and clinically guided composites as needed.
All predictors were tested for an interaction with the different age categories (<5 years, 5–18 years, 19 years and older) with regard to predicting pH1N1 in logistic regressions. Models included main effects for the predictor, age, and the interaction of the two. When a statistically significant interaction was detected, the simple effects of the predictors were described in terms of their effects within age categories. Those which did not significantly interact with age were described in terms of their main effect.
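A minimal sketch of this kind of model, assuming a per-patient table with invented column names (this is an illustration, not the authors' code):

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical analysis table: one row per patient; ph1n1 = 1 if pH1N1 was
# detected, 0 if another respiratory virus; fever_chills = 0/1 symptom flag;
# age_cat in {"<5", "5-18", ">=19"}.
df = pd.read_csv("rvp_cohort.csv")

# Main-effects model and a model adding a predictor-by-age interaction,
# mirroring the approach described above; C() marks age_cat as categorical.
m_add = smf.logit("ph1n1 ~ fever_chills + C(age_cat)", data=df).fit(disp=False)
m_int = smf.logit("ph1n1 ~ fever_chills * C(age_cat)", data=df).fit(disp=False)

# Likelihood-ratio test for the interaction term; if it is significant, the
# predictor's simple effects are reported within each age category.
lr = 2.0 * (m_int.llf - m_add.llf)
dof = m_int.df_model - m_add.df_model
print("interaction p-value:", stats.chi2.sf(lr, dof))
print(m_int.summary())
```

When the interaction is significant, the fitted coefficients of `m_int` give the within-age-category effects of the predictor; otherwise the main-effects model `m_add` is the one reported.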
added: 2016-05-12T22:15:10.714Z | created: 2011-09-16T00:00:00.000 | metadata:
{
"year": 2011,
"sha1": "3c3f097f19bb3dbbf56eb0f099af81054b305752",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0024734&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3c3f097f19bb3dbbf56eb0f099af81054b305752",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
id: 119111119 | source: pes2o/s2orc | version: v3-fos-license
Temporal Evolution of the Scattering Polarization of the CaII IR Triplet in Hydrodynamical Models of the Solar Chromosphere
Velocity gradients in a stellar atmospheric plasma have an impact on the anisotropy of the radiation field that illuminates each point within the medium, and this may in principle influence the scattering line polarization that results from the induced atomic level polarization. Here we analyze the emergent linear polarization profiles of the Ca II infrared triplet after solving the radiative transfer problem of scattering polarization in time-dependent hydrodynamical models of the solar chromosphere, taking into account the impact of the plasma macroscopic velocity on the atomic level polarization. We discuss the influence that the velocity and temperature shocks in the considered chromospheric models have on the temporal evolution of the scattering polarization signals of the Ca II infrared lines, as well as on the temporally averaged profiles. Our results indicate that the increase of the linear polarization amplitudes caused by macroscopic velocity gradients may be significant in realistic situations. We also study the effect of the integration time, the microturbulent velocity and the photospheric dynamical conditions, and discuss the feasibility of observing with large-aperture telescopes the temporal variation of the scattering polarization profiles. Finally, we explore the possibility of using the differential Hanle effect in the IR triplet of Ca II with the intention of avoiding the characterization of the zero-field polarization to infer magnetic fields in dynamic situations.
INTRODUCTION
The chromosphere, the interface region between the photosphere and the corona, is a very important part of the solar atmosphere. It is the place where most of the non-thermal energy that creates the corona and solar wind is released, with a heating-rate requirement that is between one and two orders of magnitude larger than in the corona. To infer the thermal, dynamic and magnetic structure of the solar chromosphere is thus a very important goal in astrophysics. For instance, it is believed that the dissipation of magnetic energy in the 10 6 K corona may be significantly modulated by the strength and structure of the magnetic field in the chromosphere (e.g., Parker 2007). However, "measuring" the chromospheric magnetic field is notoriously difficult (e.g., reviews by Casini & Landi Degl'Innocenti 2007; Harvey 2009). While spectroscopic observations allow us to determine temperatures, flows and waves, they do not provide any quantitative information on the chromospheric magnetic field. To this end, we need to measure and interpret the polarization that some physical mechanisms introduce in chromospheric spectral lines. These mechanisms are the Zeeman effect, scattering processes and the Hanle effect. The circular and linear polarization signals that the Zeeman effect can in principle produce in a spectral line are caused by the wavelength shifts between the π and σ transitions of the line, as a result of the Zeeman splitting induced by the presence of a magnetic field. The amplitude of the circular polarization scales with the ratio, R, between the Zeeman splitting and the Doppler line width. The amplitude of the linear polarization scales with R 2 (see Landi Degl'Innocenti & Landolfi 2004). Outside sunspots (where B ≲ 100 G at chromospheric heights) R ≪ 1, which explains why it is so difficult to detect the polarization of the Zeeman effect in a chromospheric line. Typically, only the circular polarization is detected, especially in long-wavelength chromospheric lines such as those of the IR triplet of Ca ii (e.g., Trujillo Bueno 2010, figure 3). But the linear polarization observed in quiet regions of the solar chromosphere has practically nothing to do with the transverse Zeeman effect.
In weakly magnetized regions, the linear polarization of chromospheric lines is dominated by scattering processes. The physical origin of this polarization is the difference among the electronic populations of the sublevels pertaining to the levels of the spectral line under consideration. This so-called atomic level polarization, which is caused by the anisotropic illumination of the atoms, produces selective emission and/or selective absorption of polarization components without the need of a magnetic field (e.g., Manso Sainz & Trujillo Bueno 2003b). The larger the anisotropy of the incident radiation field, the larger the induced atomic level polarization and the larger the amplitude of the linear polarization of the emergent spectral line radiation. In an optically thick plasma like the solar atmosphere, the anisotropy of the radiation field depends mainly on the spatial distribution of the physical quantities that determine, at each point within the medium, the angular variation of the incident intensity. Great attention has been paid to the gradient of the source function (e.g., Trujillo Bueno 2001; Landi Degl'Innocenti & Landolfi 2004) but, in a highly dynamic medium like the solar chromosphere, the gradients of the macroscopic velocity of the plasma may also play an important role (Carlin et al. 2012, and references therein). In fact, in Carlin et al. (2012, hereafter Paper I) we showed that they can significantly affect the scattering polarization of the IR triplet of Ca ii. Our arguments were based on radiative transfer calculations in a semi-empirical model of the solar atmosphere, after introducing ad-hoc velocity gradients and comparing the computed Q/I profiles with those corresponding to the static case. Given the diagnostic potential of the Ca ii IR triplet for exploring the magnetism of the solar chromosphere (e.g., Manso Sainz & Trujillo Bueno 2010; De la Cruz Rodríguez et al. 2012), and the fact that the region where such chromospheric lines originate may be affected by vigorous and repetitive shock waves (e.g., Carlsson & Stein 1997), it is necessary to investigate the radiative transfer problem of scattering polarization in the Ca ii IR triplet using dynamical, time-dependent atmospheric models of the solar chromosphere. In this paper, we show the results of such an investigation.
RESOLUTION PROCEDURE.
We have carried out radiative transfer calculations of the linear polarization produced by scattering in the Ca ii infrared (IR) triplet. The polarization is produced by the atomic level polarization that results from anisotropic radiation pumping in the hydrodynamical (HD) models of solar chromospheric dynamics described in Carlsson & Stein (1997, 2002). We used two time series of snapshots from the above-mentioned radiation HD simulations, each one lasting about 3600 s and showing the upward propagation of acoustic wave trains growing in amplitude with height until they eventually produce shocks. The first one corresponds to a relatively strong photospheric disturbance showing well-developed cool phases and pronounced hot zones at chromospheric heights (see Carlsson & Stein 1997; we refer to this as the strongly dynamic case). The second simulation corresponds to a less intense photospheric disturbance (see Carlsson & Stein 2002; we refer to this as the weakly dynamic case). Thus, the thermodynamical evolution of the atmosphere (including the chromosphere and the transition region) is driven by the bottom boundary condition that is imposed on the velocity. This realistic boundary condition is extracted from the measured Doppler shifts in the Fe i line at 3966.8 Å. Our description focuses mainly on the strongly dynamic case, but in Sec. 4.5 we compare the results with those corresponding to the weakly dynamic case.
To characterize the simulations we can use the following quantities. In terms of the velocity gradients, and using units related to a representative scale height H = 275 km (footnote 4), the temporal average of the maximum velocity gradient along the atmosphere is 40 km · s −1 per scale height (or 145 m · s −1 km −1 ) in the strongly dynamic case and 13 km · s −1 per scale height (or 47 m · s −1 km −1 ) in the weakly dynamic case. Likewise, the temporal average of the minimum temperature in the atmosphere is 3976 K in the strongly dynamic case, and 4292 K in the weakly dynamic case.
Footnote 4: A scale height can be defined as the typical distance over which atmospheric magnitudes such as the density vary by an order of magnitude. But the scale height is not a fixed quantity in a time-dependent model that contains important temporal variations in those magnitudes. For this reason we have defined an averaged scale height as the representative value used for the characterization of the velocity gradients.
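The unit conversion quoted in parentheses is simply the gradient divided by the adopted scale height:

$$\frac{40\ \mathrm{km\,s^{-1}}}{275\ \mathrm{km}} \approx 0.145\ \mathrm{s^{-1}} \approx 145\ \mathrm{m\,s^{-1}\,km^{-1}}, \qquad \frac{13\ \mathrm{km\,s^{-1}}}{275\ \mathrm{km}} \approx 47\ \mathrm{m\,s^{-1}\,km^{-1}}.$$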
At each time step of the HD simulation under consideration we use the corresponding one-dimensional stratifications of the vertical velocity, temperature and density to compute the emergent I(λ) and Q(λ) profiles through the application of the multilevel radiative transfer code of Manso Sainz & Trujillo Bueno (2003a), after the generalization to the non-static case described in Carlin et al. (2012). Specifically, we have solved jointly the radiative transfer (RT) equations for the Stokes I and Q parameters and the statistical equilibrium equations (SEE) for the atomic populations of each energy level and the population imbalances among its magnetic energy sublevels (equivalently, the multipolar tensor components of the atomic density matrix, ρ K 0 (J i ), with J i the angular momentum of each level i). This is the NLTE radiative transfer problem of the second kind (see sections 7.2 and 7.13 in Landi Degl'Innocenti & Landolfi 2004). Once the self-consistent solution of such equations is found at each height in the atmospheric model under consideration, we compute the coefficients of the emission vector and of the propagation matrix (see section 2.2 of Paper I) and solve the RT equations for a line of sight (LOS) with µ = 0.1, where µ is the cosine of the heliocentric angle. This LOS has been chosen in order to simulate a close-to-the-limb observation, such as that shown in figure 13 of Stenflo et al. (2000). To account for the macroscopic motions, we have introduced the Doppler effect in the calculation of the absorption and emission profiles for each wavelength and ray direction (Paper I). The influence of the Doppler effect on the SEE appears directly because the radiative rates depend on the radiation field tensor components. Likewise, the RT equations are affected because the Doppler effect modifies the elements of the propagation matrix and of the emission vector.
Given that the computations reported here are carried out in plane-parallel atmospheric models, it is necessary to introduce a micro-turbulent velocity, that accounts for the Doppler shifts (inducing an effective line broadening) produced by moving fluid elements below the resolution element. In order to estimate a suitable value (assumed constant with height), we have calculated the emergent intensities at disk center and compared them with those of the solar Kitt Peak FTS Spectral Atlas (Kurucz et al. 1984). A good agreement is obtained with 3.5 km s −1 .
RESULTS.
A standard Fourier analysis of the atmospheric model shows that it acts as a pass-band filter for the multifrequency sound waves generated at the lower boundary. The result is that the predominant periods at chromospheric heights and above are around three minutes (Carlsson & Stein 1997). For practical reasons we divided the temporal evolution into 3-min intervals, so that the beginning of each interval coincides with the moment at which the shock front in temperature and velocity is sharpest in that interval (vertical lines in figures with a temporal axis, like Fig. 1). Given the power of the 3-min waves, this division turns out to be "natural" and can be used to mark the most interesting events we see in the emergent polarization.
Inside each three-minute cycle we distinguish between compression and expansion phases. They can be easily identified by following the height at which τ los ν0 = 1, i.e., where the optical depth at line center (ν 0 ) along the LOS equals unity (upper panel of Fig. 1). This quantity is a good marker of the shock fronts when they cross heights between 1 and 2 Mm, because the steep changes in opacity inside the shocks force the τ = 1 region to remain within them. The line transitions at 8542 Å and 8662 Å (green and red lines in the upper panel of Fig. 1) follow a clearer periodic pattern because they form higher, where fewer frequency components of the velocity waves arrive. Compression phases begin when plasma falls down from upper layers (the heights where τ los 8542 = 1 and τ los 8662 = 1 decrease in the top panel of Fig. 1), while simultaneously a new upward-propagating wave emerges amplified into the chromosphere. At the end of this stage a shock wave is completely developed and the τ los ν0 = 1 position is close to ∼1200 km for the three IR lines. The shock waves so created always start in this region between 1 and 1.5 Mm (footnote 5). After that, during what we term the expansion phase (the heights where τ los 8542 = 1 and τ los 8662 = 1 rise in the top panel of Fig. 1), the shock fronts travel upward, increasing the plasma velocities as they encounter lower densities. Figure 1 also shows the time evolution of other quantities during the first 2000 s after the initial transient. In the second row, the location and value of the temperature minimum are displayed, showing a clear correspondence with the expansion and contraction phases. In the third row, we show the ensuing variation of (Q/I) pp , defined as the peak-to-peak difference of the Q/I profile for each spectral line. It is a measure of the linear polarization signal contrast that was used in Paper I to characterize the polarization amplitude and discriminate its variations with respect to static cases. In each cycle we see an amplification of (Q/I) pp occurring during expansion phases and a usually larger amplification during contraction phases. Finally, the time evolution of the emergent Q(λ)/I(λ) profile for the 8542 Å line is illustrated in the lower panel (the vertical axis shows 0.6 Å around the rest wavelength of the line). Here, we observe two distinct areas showing amplifications inside each three-minute cycle. The first amplification is blue-shifted, because it happens during an atmospheric expansion phase (plasma moving towards the observer). It is weaker than the second amplification, which is red-shifted and occurs during the compression phase (plasma moving down in the atmosphere). This indicates that the compression phase is more efficient at producing a polarization amplification than the expansion phase. The reason is that during compression we have stronger velocity and temperature gradients along the main regions of formation. Following the results of Paper I, the larger the gradient, the larger the enhancement of the linear polarization signal. The behaviour is similar in the other transitions.
5 It is in this range of heights that the Ca ii IR triplet forms in typical semi-empirical models.

There is a clear correspondence between the maximum value of the temperature minimum (hot-chromosphere time-steps) and the largest peaks of the (Q/I)pp signal, which occur just before the maximum contraction (dotted vertical lines). As the atmosphere is compressed, the temperature increases at chromospheric heights and the resulting gradient of the source function produces an increase of the radiation field anisotropy in the upper layers, which directly leads to an enhanced emergent linear polarization signal. Conversely, in cold-chromosphere models the expansion reaches its maximum and (Q/I)pp is near its minimum value.
Even in such complex situations, we still see the already known effects of amplification (with respect to the static case), frequency shift, and asymmetry in the linear polarization profiles due to the dynamics. All of these were explained in Paper i, using the semi-empirical FAL-C model of Fontenla et al. (1993) with ad hoc velocity stratifications. The enhancement is produced as a consequence of the velocity gradients and the subsequent anisotropy enhancements. However, some differences exist between those experiments in semi-empirical models and the calculations presented in this paper. First, the velocity stratification in the HD models is, in general, non-monotonic, with a non-constant variation with height. Second, the maximum velocity gradients are located at the shocks, with amplitudes that reach tens or even hundreds of meters per second per kilometer (for comparison, in Paper i we dealt with velocity gradients between 0 and 20 m s−1 km−1). Third, as commented before, we have shocks in temperature that produce larger source function gradients and an additional enhancement of the radiation anisotropy and of the linear polarization. Finally, these variations are usually concentrated in the formation regions of the triplet lines. All these mechanisms act together and enhance the linear polarization of the emergent radiation, with amplification factors of up to ∼10 (in the 8498 Å line) and ∼7 (in the 8542 Å and 8662 Å lines) for the instantaneous values of the Q/I amplitudes with respect to the static FAL-C case. However, if we consider temporal averages of the emergent Stokes profiles over long periods, we get amplification factors of about 2 (time-averaged Q/I amplitudes reach ∼1% for the 8542 Å and 8662 Å lines).
Summarizing, the temporal evolution of the polarization is driven by the temperature and velocity stratifications, which in turn are a result of the dynamical conditions set in the photosphere.
ANALYSIS AND DISCUSSION OF RESULTS.
4.1. The effect of the velocity. A way of visualizing the effect of vertical velocity gradients on the emergent scattering polarization is to compare the evolution of the polarization profiles in the static and non-static cases. In the absence of velocities (lower row of Fig. 2), the maximum of the Q/I profiles is always located at λ = λ0 (i.e., line center), and its temporal evolution presents a sawtooth shape. When the effect of velocities is included in the calculations (upper row of Fig. 2), the maximum of the Q/I signal is no longer located at the central wavelength and its temporal evolution takes a different shape, with two peaks every 3-min period (upper right panel). These wavelength and amplitude modulations are produced by the Doppler effect of the velocity gradients.

(Fig. 1 caption: the dotted vertical lines are located at the local minima of the τ los 8542 = 1 curve and can be considered to indicate the beginning and end of each "three-minute" period, with their corresponding expansion and compression phases. Second row: time evolution of the value and atmospheric height of the temperature minimum. Third row: time evolution of the polarization contrast, max(Q/I) − min(Q/I), for the three lines of the Ca ii IR triplet; the polarization amplitude of the λ8498 line has been multiplied by −5 to show the three lines on the same scale. Note that, by definition, the line contrast is always positive, so this quantity does not indicate whether the polarization signal is positive or negative; the artificial sign inversion of the 8498 Å contrast illustrates that this is the only line whose larger polarization amplitudes are negative. Bottom row: time evolution of the calculated Q(λ)/I(λ) fractional linear polarization profile of the λ8542 line.)
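The polarization contrast used throughout (third row of Fig. 1) is simply the peak-to-peak spread of Q/I across the line. A minimal sketch of that quantity, with our own array layout and spectral window half-width (not code from the paper):

```python
import numpy as np

def qi_peak_to_peak(wl, stokes_q, stokes_i, wl0, half_width=0.3):
    """(Q/I)_pp = max(Q/I) - min(Q/I) inside +/- half_width Angstrom of the
    rest wavelength wl0, evaluated for every time step.

    wl : 1-D wavelength grid [Angstrom]
    stokes_q, stokes_i : arrays of shape (n_time, n_wavelength)
    """
    window = np.abs(wl - wl0) <= half_width
    q_over_i = stokes_q[:, window] / stokes_i[:, window]
    return q_over_i.max(axis=1) - q_over_i.min(axis=1)
```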
It is interesting to compare the mean Q/I amplitudes obtained in the hydrodynamical models with those calculated in the FAL-C model: they differ notably (see the horizontal lines in the right panels of Fig. 2). In the 8542 Å transition we find mean values of around 1%, 0.31%, and 0.42% for the HD models with velocities, the HD models at rest, and the static FAL-C model, respectively. These results were obtained using an integration time of 1040 s (≈17 minutes), the duration of the temporal interval shown in Fig. 2. Neglecting the effect of the velocity gradients in the HD models, we see that the resulting temporally averaged scattering polarization signals (which include the impact of the temperature and density shocks) are similar to the Q/I profiles computed in the static FAL-C semi-empirical model.
4.2. The combined effect of velocity and temperature on the linear polarization.

In Fig. 3 we display some relevant magnitudes for three different situations in the simulation. The first column corresponds to a quiet time-step, with no shocks, zero velocity, and no amplification of any kind; the middle column corresponds to a contraction phase; and the last column displays an expansion phase, in which the atmosphere is expanded and the shocks are already travelling over the transition region. Furthermore, we distinguish between the solutions when motions are taken into account (red lines) and the solutions obtained allowing shocks in all magnitudes but artificially setting the velocity to zero (black lines).
The normalized velocity ξ_z = (ν0/c) v_z / Δν_D, with Δν_D the Doppler width of the absorption profiles (which depends on the temperature), c the speed of light, and v_z the vertical velocity, is the quantity that controls the importance of the atmospheric motions in relation to the radiation anisotropy and the scattering polarization (see Paper i). Note that this quantity considers the combined effect of velocity and temperature. In the HD atmosphere models, ξ_z (solid lines in the upper panels of Fig. 3) is only significant in the formation region of the IR triplet lines (the τ los ν0 ∼ 1 region, with high velocity gradients and not very high temperatures). Although shock waves increase the chromospheric temperature, the effect of the velocity gradients is predominant. The opposite occurs above the transition region, where the thermal line width is much larger than the Doppler shifts.
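Because Δν_D = (ν0/c)·sqrt(2kT/m + v_micro²) (whether the micro-turbulent term belongs in the width here is our assumption), the line frequency cancels and ξ_z reduces to the vertical velocity measured in units of the thermal width of the Ca ii lines. A small sketch:

```python
import numpy as np

K_B = 1.380649e-23               # Boltzmann constant [J/K]
M_CA = 40.078 * 1.66054e-27      # mass of a calcium atom [kg]

def xi_z(v_z, temperature, v_micro=0.0):
    """Normalized velocity xi_z = (nu0/c) * v_z / Delta_nu_D.

    With Delta_nu_D = (nu0/c) * sqrt(2 k T / m + v_micro**2) the factor
    nu0/c cancels, so xi_z is simply v_z divided by the thermal
    (+ micro-turbulent) velocity width of the Ca line.
    v_z and v_micro in m/s; temperature in K.
    """
    doppler_speed = np.sqrt(2.0 * K_B * temperature / M_CA + v_micro**2)
    return v_z / doppler_speed
```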
The expansion and contraction can also be identified in quantities such as the intensity source function and the Planck function (second row in Fig. 3). During contraction phases (middle-column panels), the high temperatures produce a more efficient population pumping towards the upper levels, increasing the emissivity and, consequently, the source function. Additionally, during contraction the temperature shock occurs in optically thick, denser layers (deeper layers, below τ los ν0 = 1), forcing the source function gradient to increase at those heights with respect to the static case. Note how in this case the source function rises as a whole because of the warming (compare the source function in the middle panel, the black solid line obtained neglecting velocities, with the non-dynamic source function in the left column). If the macroscopic velocity is now considered, we additionally get a jump in the source function (red lines in the middle column of Fig. 3) caused by the velocity shock that develops in this contraction phase. This behaviour is accompanied by a significant Doppler-induced anisotropy enhancement that amplifies the linear polarization, as shown in the corresponding lower panels of the same figure.
In the expansion phases, the shock waves move upward and the chromosphere becomes cooler. This induces a lower source function and smaller polarization amplitudes (as compared with the contraction phase). Moreover, as the density of scatterers is now lower around the shock (because it has moved upward to regions with τ los ν0 < 1), the temperature gradients have a smaller effect on the polarization profiles than during the contraction phases. In this expansion time step, the black solid line representing the static source function is similar to the non-dynamic source function of the left column. However, once the motions are introduced, and despite the fact that the shocks have already reached the upper chromospheric layers, the remanent velocity field still has a sizable effect on the emergent polarization.

(Fig. 4 caption, partial: [...] and 8542 Å lines after temporally averaging the Stokes I and Q profiles during 3070 seconds (51 minutes). These ⟨Q⟩/⟨I⟩ profiles may be considered to emulate what can actually be observed with today's solar telescopes. Black solid profiles: static case with v_micro = 3.5 km s−1. Red solid profiles: strongly dynamic case taking into account the effect of the velocity gradients and assuming v_micro = 3.5 km s−1. Black dashed profiles: strongly dynamic case neglecting the effect of the velocity gradients and assuming v_micro = 0. Red dashed profiles: strongly dynamic case taking into account the effect of the velocity gradients and assuming v_micro = 0. The green solid lines show the temporally averaged profiles obtained after applying the velocity-free approximation (VFA) with v_micro = 3.5 km s−1, i.e., neglecting the Doppler shifts of the macroscopic velocities when computing the density matrix elements, but taking them into account when calculating the emergent Stokes profiles.)
In order to compute the average linear polarization signal that one would observe without any temporal resolution, we average Q and I separately (obtaining ⟨Q⟩/⟨I⟩) over 3070 s (≈51 minutes) for four different cases (Fig. 4). We consider the cases with zero micro-turbulent velocity (dotted lines) and with a constant micro-turbulent velocity of 3.5 km s−1 (solid lines). For each case, we distinguish between the results obtained switching off the velocity (black lines) and the results allowing for macroscopic velocity fields (red lines).
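The averaging described here takes Q and I separately before forming the ratio, which is what a long-exposure observation effectively measures. A minimal sketch, with assumed array shapes:

```python
import numpy as np

def time_averaged_qi(stokes_q, stokes_i):
    """<Q>/<I>: average Stokes Q and I over time first, then take the ratio.

    This differs from averaging Q/I directly, because brighter
    (shock-heated) time steps contribute more weight to <Q>/<I>.
    stokes_q, stokes_i : arrays of shape (n_time, n_wavelength)
    Returns the averaged fractional polarization as a function of wavelength.
    """
    return stokes_q.mean(axis=0) / stokes_i.mean(axis=0)
```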
When macroscopic motions are considered, the polarization profiles become asymmetric. Furthermore, they become more negative in the case of the 8498 Å transition and more positive in the other two transitions. The asymmetry of the red profiles is a consequence of the fact that, during the averaging period, the dynamical situations in which the velocity gradient is negative (velocity field mostly decreasing with height) dominate over the situations with mostly positive velocity gradients. This predominance arises not because the situations with negative velocity gradients are more frequent, but because such situations are more efficient at amplifying the linear polarization. This happens during the compression phase because (i) the velocity gradients are larger, (ii) there is also a shock in temperature affecting the formation region, and (iii) the shock fronts are located just below the τ los ν0 = 1 height. The results are qualitatively the same independently of the micro-turbulent velocity value but, when it is not considered, the amplification of (Q/I)pp is larger and the profiles are narrower.
If we decrease the averaging interval to 9 minutes, we obtain profiles that are essentially similar to the ones obtained by averaging over 51 minutes (shown in Fig. 4). If we integrate for less than that, significant variations appear in the shape and amplitude of the emergent profiles. This indicates that, concerning the linear polarization, there is still reliable dynamical information contained in a time interval corresponding to a few 3-min cycles.
4.3. The velocity free approximation.
An approximation that is sometimes applied to solve radiative transfer problems in dynamical atmospheres (whether or not atomic polarization is taken into account) is the velocity free approximation (VFA). It is based on solving the SEE and RTE simultaneously while neglecting the effect of plasma motions; once these equations are consistently solved, the plasma motions are included in the synthesis of the emergent Stokes profiles (along µ = 0.1 in our case). Consequently, the density matrix elements are calculated as if plasma motions did not affect them, which reduces the complexity and computational effort of the problem, since a reduced frequency grid can be used to compute the mean intensity and the anisotropy. The result of applying it to each time step of our HD evolution is the temporal average shown as the green line in Fig. 4. This approximation is clearly not appropriate in our case, given that the profiles merely become asymmetric (with respect to the static profiles) but show no amplification. The reason for this lack of amplification is that the anisotropy controlling the linear polarization is not correctly enhanced (see Paper i). The asymmetry, on the other hand, is purely due to the asymmetric absorption with respect to the line center that the motions produce along the ray under consideration. Hence, in order to obtain reliable results it is mandatory to include the effect of Doppler shifts in the whole set of equations, and we conclude that the VFA should not be applied.
4.4. The effect of photospheric dynamics.
Given that the small velocity fields appearing in the photosphere are amplified by the exponential decrease of the density as the perturbations travel outwards, the properties of the bottom boundary condition are decisive for the behaviour of the emergent Stokes parameters of chromospheric lines. We compare the strongly dynamic case that forms the core of this paper with the weakly dynamic case already introduced in Sec. 3. Although the mean maximum velocity gradient is three times smaller in the weakly dynamic case and the averaged polarization amplitudes are also smaller than in the strongly dynamic one, we still find comparable or even slightly larger instantaneous (Q/I)pp amplitudes (see Fig. 5). The resulting averaged polarization profiles are qualitatively the same, but they differ in amplitude (Fig. 6). This is a reasonable result because in the weakly dynamic scenario the instantaneous velocity gradients are, in general, smaller. Differences are especially critical for the 8498 Å line, whose linear polarization profiles can be positive but also adopt significant negative values at redder wavelengths (when velocity gradients are mainly positive with height) or at bluer wavelengths (when velocity gradients are mainly negative with height). This behaviour produces cancellation effects for integration times larger than a three-minute period. Furthermore, the central depression produced in the 8498 Å average profile when the velocity is neglected in the strongly dynamic simulation (solid black line in the left panel of Fig. 6) does not appear in the average profile corresponding to the weakly dynamic case (dashed black line in the same panel), because of the differences in the instantaneous temperature stratifications. The sensitivity of this spectral line to the instantaneous photospheric perturbations and to the developed chromospheric shocks is larger than in the other two lines.

(Fig. 8 caption: the curves correspond to F = 1, 0.9, 0.7, 0.5, 0.2, 0.1, 0, going from black (fully dynamic case) to lighter grey (static case). The results for the 8662 Å line are very similar to the ones obtained for the 8542 Å line.)
4.6. The effect of the integration time.

In order to detect in the Sun the time evolution of the linear polarization signals, the observations must have enough temporal resolution, signal-to-noise ratio, and spatial resolution. Sufficient spatial coherence is important to avoid cancellation of the contributions from different regions of the chromosphere evolving with different phases. If we consider the expected capabilities of the next generation of solar telescopes (like the European Solar Telescope, EST, or the Advanced Technology Solar Telescope, ATST), we can aim at observing the emergent Stokes profiles of Fig. 7 with a 10 s cadence (upper panel) 6 . However, with present telescopes and instrumentation, we are forced to integrate in time and/or space to detect the scattering polarization signals. If we degrade the temporal resolution of our results to an integration time of 1 min (middle panel of Fig. 7) or 3 min (lower panel of Fig. 7), we clearly see that the time evolution becomes more difficult to detect. In the last case, the profiles are already so smoothed that the original features are completely lost, both in the spectral and in the temporal domain. The amplitudes of the integrated signals are lower than in the original 10 s sequence by a factor of 2 (see the colour scales). However, integration over time intervals of ∼1 min could still reveal the amplification/modulation effect if we capture spectro-polarimetric signals similar to the ones shown in the middle panel.

6 Using EST (telescope diameter of 4 m, instrumental efficiency around 10%) and considering a spectral resolution of 30 mÅ, a spatial resolution of 0.1 arcsec and an integration time of 1 s (ten times better than needed), it would be possible to observe the linear polarization of the 8542 Å line (line-to-continuum ratio of ∼0.2) at the level of Q/I ∼ 10−3 with a confidence of 3σ over the noise.
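Degrading the cadence as in the middle and lower panels of Fig. 7 amounts to block-averaging the synthetic sequence in time. A sketch under our own assumptions (uniform time step, trailing incomplete bin discarded), not the authors' code:

```python
import numpy as np

def degrade_cadence(t, stokes, integration_time):
    """Block-average a (n_time, n_wavelength) Stokes sequence to emulate a
    longer effective integration time [s]. Assumes uniform sampling in t.
    """
    dt = t[1] - t[0]
    n = max(1, int(round(integration_time / dt)))
    n_bins = stokes.shape[0] // n
    trimmed = stokes[:n_bins * n]
    return trimmed.reshape(n_bins, n, stokes.shape[1]).mean(axis=1)

# e.g. degrade_cadence(t, q_profiles, 60.0)    # 1-min effective integration
#      degrade_cadence(t, q_profiles, 180.0)   # 3-min effective integration
```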
4.7. The effect of a decreasing velocity on the averaged profiles.

We also calculated what happens to the emergent averaged profiles in the strongly dynamic case (with 15 minutes of integration, emulating an observation) when we gradually reduce the velocity field by a constant scaling factor F, keeping the rest of the atmospheric magnitudes unperturbed (see Fig. 8). As expected, we find that the polarization amplitudes decrease as F is reduced, from the original case (F = 1) towards the static case (F = 0). Note that the core of the 8498 Å line goes through zero for a certain value of F (near F = 0.6). Thus, depending on the magnitude of the velocity gradients, its linear polarization amplitude will be positive or negative. This fact suggests an additional way to diagnose velocity gradients along the line of sight. However, it is important to keep in mind that this sensitivity also depends on the variations in density and temperature, as shown in Sec. 4.5.
Furthermore, the variation of the Q/I amplitudes is not linear with F . The change is small for small F , is larger for intermediate values of F , and again becomes smaller for the largest F , tending to saturation.
CONSIDERATIONS ON THE HANLE EFFECT
For magnetic field diagnostics with the Hanle effect it is often necessary to know the zero-field polarization reference (e.g., Stenflo 1994; Trujillo Bueno et al. 2004). That is what we have attempted in the previous sections, calculating and explaining the temporal evolution of the linear polarization profiles in dynamic chromospheric simulations. Ideally, this reference has to be computed under the same thermodynamical and dynamical conditions as in the real Sun, but without a magnetic field. As the Hanle effect often depolarizes the linear polarization signals, the difference between the observation and the zero-field calculation can be associated with a magnetic field by adjusting the field topology and strength. The key point is that the reference amplitude must be as precise as possible; if it is imprecise, variations in the Stokes profiles can be attributed to a magnetic field when they are really due to uncertainties in other quantities, such as the temperature or the velocity field. For this reason, the fact that the solar chromosphere is a highly dynamic medium brings some complications for the use of the Hanle effect as a diagnostic tool.
A strategy to avoid the above-mentioned problem is the line-ratio technique. It consists of finding a pair of spectral lines whose thermodynamical behaviour is identical but whose sensitivity to the magnetic field differs in some range of field strengths or inclinations (e.g., Stenflo et al. 1998; Manso Sainz et al. 2004). In that case, the ratio between the polarization amplitudes should change only due to variations in the magnetic field, thus allowing us to measure it after a suitable calibration. As shown by Manso Sainz & Trujillo Bueno (2010), the main magnetic sensitivity difference among the lines of the Ca ii IR triplet is between the λ8498 line (which is sensitive to field strengths between 0.001 G and 10 G) and either of the λ8662 and λ8542 lines (which react mainly to sub-gauss magnetic fields, and up to 10 G in the latter line). Unfortunately, while the line cores of the λ8662 and λ8542 lines originate in similar atmospheric layers, the λ8498 line core originates at significantly deeper layers (see Figs. 1 and 5). Nevertheless, we have found it useful to plot in Fig. 9 the time evolution of the polarization line ratios ρ1 and ρ2, formed from the (Q/I)pp amplitudes of the λ8498/λ8542 and λ8542/λ8662 line pairs, respectively; the third possible ratio is not shown because it is very similar to ρ1 and because it can be obtained from the other two. These quantities were calculated for each simulation considered before (weakly and strongly dynamic cases). The more stable they are, the more useful they will be for inferring the magnetic field. We obtain, on average, ρ1 = 0.15 ± 0.10 and ρ1 = 0.16 ± 0.14 for the strongly and weakly dynamic cases, respectively (lower panel in Fig. 9). The sudden shape variations of the 8498 Å line (including maximum amplitudes passing through zero) induce large instantaneous excursions in ρ1. As expected, a more stable line ratio is obtained for the second pair of transitions, which are precisely the ones that originate at similar chromospheric heights. We find ρ2 = 1.06 ± 0.11 and ρ2 = 1.00 ± 0.09 for the strongly and weakly dynamic cases, respectively (upper panel in Fig. 9). If we repeat the calculations setting the micro-turbulent velocity to zero in the strongly dynamic case, we obtain ρ1 = 0.22 ± 0.09 and ρ2 = 0.97 ± 0.12 (dashed black lines in Fig. 9). These results indicate that the ρ2 line ratio shows a relatively stable behaviour against variations of the velocity and temperature in the solar atmosphere. Consequently, in principle, ρ2 could be used as a suitable line ratio to estimate the magnetic field from spectropolarimetric observations of the λ8662 and λ8542 lines.
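The stability quoted for ρ2 is simply the scatter of the instantaneous amplitude ratio over the simulated sequence. A minimal sketch (the exact numerator/denominator convention is our assumption, since the defining equation is not reproduced above):

```python
import numpy as np

def line_ratio_stats(qi_pp_num, qi_pp_den):
    """Mean and standard deviation of a polarization line ratio, e.g.
    rho_2 built from the (Q/I)_pp amplitudes of the 8542 and 8662 lines.

    qi_pp_num, qi_pp_den : 1-D arrays, one amplitude per time step.
    """
    ratio = qi_pp_num / qi_pp_den
    return ratio.mean(), ratio.std()

# e.g. rho2_mean, rho2_std = line_ratio_stats(qi_pp_8542, qi_pp_8662)
```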
Regarding the sensitivity of these lines to the magnetic field and their applicability for the diagnostics of magnetic fields through the Hanle effect, several considerations have to be taken into account. First, the micro-turbulent velocity has a small influence on the averaged amplitudes and line ratios. Second, once the magnetic field is included in the calculations, the Hanle effect typically operates at the line center in static cases; in a dynamic situation, however, there is no preferred line-center wavelength. As the maximum of the absorption and dispersion profiles occurs at different Doppler-shifted wavelengths, the Hanle effect will operate in a small bandwidth around the line core. Third, according to the static calculations by Manso Sainz & Trujillo Bueno (2010), for chromospheric magnetic fields stronger than 0.1 G in the "quiet" Sun, the Q/I signal of the 8662 Å line is expected to be Hanle saturated. Thus, variations between 0.1 and 10 G could be measured with ρ2, being produced by changes in the linear polarization of the 8542 Å line. Unfortunately, the fluctuations we see in Fig. 9 (due exclusively to the dynamics) have amplitudes of the same order of magnitude as those expected from investigations of the Hanle effect in static model atmospheres (due exclusively to the magnetic field). More realistic results will be obtained by carrying out calculations of the Hanle effect in the Ca ii IR triplet in dynamical model atmospheres. In any case, it is clear that exploiting the polarization of these lines requires instruments of high polarimetric sensitivity.
CONCLUSIONS
The results presented in this paper indicate that the vertical velocity gradients caused by the shock waves that take place at chromospheric heights in the HD models of Carlsson & Stein (1997, 2002) have a significant influence on the computed scattering polarization profiles of the Ca ii IR triplet. The profiles show changes in the shape of the Q/I signals of the three IR lines and clear enhancements in their amplitudes, as well as changes in the sign of the Q/I signal of the λ8498 line. Interestingly enough, such modifications with respect to the static case are evident not only in the temporally resolved Q/I profiles (e.g., see Fig. 2), but also in the temporally averaged ⟨Q⟩/⟨I⟩ profiles (e.g., see Fig. 4). This is true even for moderate macroscopic plasma velocities, simply due to the presence of strong vertical velocity gradients like the ones produced by shock waves. This may explain why the above-mentioned modifications of the scattering polarization profiles of the Ca ii IR triplet are present not only in the strongly dynamic simulation (Carlsson & Stein 1997) but also in the weakly dynamic one (Carlsson & Stein 2002).
Our investigation points out that the development of diagnostic methods based on the Hanle effect in the Ca ii IR triplet should take into account that the dynamical conditions of the solar chromosphere may have a significant impact on the emergent scattering polarization signals. This complication could be alleviated through the application of line ratio techniques. In Sec. 5 we have concluded that the ratio between the polarization amplitudes of the λ8542 and λ8662 transitions would be the best line-ratio choice. However, even in the absence of magnetic fields, the small fluctuations we see in the value of such a line ratio in dynamical model atmospheres could be confused with the presence of magnetic fields in the range between 0.1 and 10 G. Further work is necessary at this point.
In any case, the fact that realistic macroscopic velocity gradients may have a significant impact on the scattering polarization profiles of the Ca ii IR triplet is interesting and important for the diagnostics of the solar chromosphere 7 . On the one hand, it provides a new observable for probing the dynamical conditions of the solar chromosphere (e.g., by confronting observed Stokes profiles with those computed in dynamical models). On the other hand, the exploration of the magnetism of the quiet solar chromosphere via the Hanle effect in the Ca ii IR triplet (either through the forward-modeling approach or via foreseeable Stokes inversion approaches) would have to be accomplished without neglecting the possible effect of the atmospheric velocity gradients on the atomic level polarization.
Several points remain unanswered after this work. First, we need to investigate the sensitivity to the Hanle effect of the Q/I and U/I profiles of the Ca ii IR triplet using magnetized and dynamical atmospheric models. Second, we have to investigate whether our one-dimensional radiative transfer results remain valid when considering realistic three-dimensional models, such as those resulting from magneto-hydrodynamical simulations (e.g., Wedemeyer et al. 2004; Leenaarts et al. 2009). We would tentatively expect that the strong stratification that gravity imposes on the solar atmosphere facilitates shocks that propagate mainly in the vertical direction, but with a reduced strength, given the increased number of degrees of freedom.
Finally, we mention that our results could be of potential interest in other astrophysical contexts. For instance, the mechanism of polarization enhancement due to the presence of shocks might well be the explanation for the changing amplitudes of the linear polarization signals reported in variable pulsating Mira stars (Fabas et al. 2011).
We are grateful to Rafael Manso Sainz (IAC) for several useful discussions and advice with the radiative transfer computations. Financial support by the Spanish Ministry of Economy and Competitiveness through projects AYA2010-18029 (Solar Magnetism and Astrophysical Spectropolarimetry) and CONSOLIDER INGENIO CSD2009-00038 (Molecular Astrophysics: The Herschel and Alma Era) is gratefully acknowledged.
The Anodic Behaviour of Bulk Copper in Ethaline and 1-Butyl-3-Methylimidazolium Chloride
The anodic dissolution of bulk metallic copper was conducted in ionic liquids (ILs): a deep eutectic solvent (DES) comprised of a 1:2 molar ratio mixture of choline chloride, (CH3)3N(C2H4OH)Cl (ChCl), and ethylene glycol (EG), and an imidazolium-based IL, C4mimCl. Electrochemical techniques such as cyclic voltammetry, anodic linear sweep voltammetry, and chronopotentiometry were used. To investigate the electrochemical dissolution mechanism, electrochemical impedance spectroscopy (EIS) was employed, together with spectroscopic techniques (for instance, UV-visible spectroscopy) and microscopic techniques (atomic force microscopy, AFM). The significant industrial importance of metallic copper has motivated several research groups to study this invaluable metal. It was confirmed that the speciation of dissolved copper from the bulk phase at the interface region is [CuCl3]− and [CuCl4]2− in such chloride-rich media, and that the EG determines the structure of the interfacial region in the electrochemical dissolution process. A super-saturated solution was produced at the electrode/solution interface and CuCl2 was deposited on the metal surface.
Experimental
The DES was prepared by mixing choline chloride (ChCl) (Aldrich, 99%) and EG (Aldrich, >99%) in a stoichiometric molar ratio of 1:2 (ChCl:EG); the mixture was then heated to 60 °C with continuous stirring until a clear liquid was produced. The IL, 1-butyl-3-methylimidazolium chloride, (C4mim)Cl, was purchased from Aldrich (99%) and dried under vacuum before use; it still had a water content of ca. 0.1 wt.% (thermogravimetric analysis, Mettler Toledo TGA/DSC1 STARe system), which enabled it to be liquid at 70 °C. The copper wire was purchased from Alfa Aesar (99.9% purity).
Regarding the electrochemical measurements, both cyclic voltammetry and linear sweep voltammetry were conducted using stationary and rotating disc electrodes. The galvanostatic and AC impedance measurements were performed with an Autolab PGSTAT 12 controlled by GPES software, the impedance data being fitted using the FRA impedance module. The impedance spectra were acquired in the frequency range 1-65,000 Hz with a small AC signal amplitude of 10 mV. All electrochemical measurements were carried out in a three-electrode cell comprising a 1 mm diameter copper disc working electrode sealed in glass, a platinum flag (1 cm2 area) as the counter electrode, and Ag/AgCl (0.1 M in 1:2 ChCl:EG) as the reference electrode. All measurements were performed at 20 °C and 70 °C at a 5 mV·s−1 scan rate, except for the determination of the time needed for the copper electrode to reach passivation, for which the sweep rate was varied from 5 to 50.5 mV·s−1.
The UV spectra were recorded with a Shimadzu UV-1601 spectrophotometer using a cell path length of 10 mm.
The morphological examinations were conducted using atomic force microscopy (AFM). Images were acquired with a Digital Instruments Nanoscope IV Dimension 300 (Veeco) atomic force microscope with a 100 mm scanning head operated in contact mode, controlled by Nanoscope version 6.13 software; images were acquired in air.
Anodic Dissolution Mechanism
Cyclic and Linear Sweep Voltammetry

Figure 1a exhibits the cyclic voltammetric response of a metallic copper disc electrode in the choline chloride-based IL at 20 °C. Within the anodic potential range, two oxidation processes can clearly be seen. The anodic current begins to increase at −0.3 V, peaking at ca. 0 V. The current then drops sharply in a manner characteristic of a quasi-passivation process; this might be due primarily to the presence of EG and partly to the chloride ion. The second anodic current rises to a peak at +0.3 V and falls to a steady-state current of approximately 23 mA·cm−2. The second anodic peak beyond +0.25 V could be linked to the further oxidation of copper from Cu(I) to Cu(II). The other tiny peaks are artefacts that might be linked to the complexity of the dissolution process of bulk copper metal in such a viscous liquid.
From Figure 1a,b, one can see the differences which are caused due to a specific cation effect or because of the lower chloride concentration in choline chloride-based DES compared to imidazolium- On the cathodic scan, the current was approximately constant until ca. 0.2 V, when a noisy phenomenon occurred at the same potential that passivation film formation occurred: on the reversed sweep. Indeed, the process is not an artefact but is very reproducible. The main cathodic process begins at ca. 0.0 V, peaking at −0.4 V, with a shoulder at −0.2 V. The noise is likely owed to redissolution of the Cu(II) species formed on the electrode's surface.
In interpreting the overall process, it is helpful to compare the cyclic voltammograms of the metallic copper electrode in the choline chloride-based DES with those of a Pt electrode in a solution of 0.1 M CuCl2·2H2O in the same electrolyte. Abbott et al. documented the electrodeposition of copper using CuCl2·2H2O in this electrolyte, observing that two distinct processes take place, with Cu(II) undergoing a one-electron reduction to Cu(I) at +0.43 V, followed by a one-electron reduction to metallic copper at −0.45 V [24]. The significance of this finding is that both processes are reversible. It is worth mentioning, however, that a direct comparison between the redox potentials in Figure 1 and the previously reported study is not helpful because considerably different reference electrodes were used (a silver wire quasi-reference electrode in the latter case). Figure 1a shows an overlaid cyclic voltammogram (dashed line) of 0.1 M CuCl2·2H2O in the choline chloride-based DES versus Ag/Ag+ at 20 °C. It can clearly be seen that the onset potentials on the anodic scan are similar to a large extent, indicating that the metallic copper first dissolves as Cu(I) in a complexed form. The quasi-passivation observed for the dissolution of the bulk copper electrode occurs at the same potential as the Cu(I)/(II) oxidation in solution. One can conclude that the second process in the dissolution of metallic copper occurs owing to the change in oxidation state of the metal. When Cu(I) salts are dissolved in the choline chloride-based DES, the speciation was found to be [CuCl2]−, while Cu(II) salts tend to produce [CuCl4]2− [20,24]. Both of these complexes are known to be largely soluble, so the cause of the quasi-passivation is not immediately clear.
From Figure 1a,b, one can see the differences caused either by a specific cation effect or by the lower chloride concentration in the choline chloride-based DES compared with the imidazolium-based IL. To probe this further, 1-butyl-3-methylimidazolium chloride (C4mimCl) was diluted by adding EG. The addition of EG to the imidazolium-based IL results in a shift of the oxidation onset potential of metallic copper by approximately 300 mV, as presented in Figure 2. A quasi-passivation profile is similarly observed, showing that passivation is probably associated with the chloride concentration. As the chloride concentration declines with the addition of EG, the onset potential shifts. It is noteworthy that the higher chloride concentration caused a higher current response.
The effect of anion concentration on copper dissolution in choline chloride-based IL was investigated as shown in Figure 3. In the study the concentration of chloride in electrolyte (ChCl:EG 1:2) was manipulated by diluting the electrolyte with EG. As the molar ratio altered from 1:2 to 1:3 and 1:4, the passivation potential was changed to less positive potential values. This was predictable, as in such cases, there is less chloride available to the electrode's surface, so it is more difficult to produce CuCl4 2− and more likely forms CuCl2. As the concentration of chloride declines due to addition of EG, shifting in the onset potential occurs. It is noteworthy that the chloride ion caused a higher current response.
The effect of anion concentration on copper dissolution in the choline chloride-based IL was investigated, as shown in Figure 3. In this study, the chloride concentration in the electrolyte (ChCl:EG 1:2) was manipulated by diluting the electrolyte with EG. As the molar ratio was altered from 1:2 to 1:3 and 1:4, the passivation potential shifted to less positive values. This was predictable: in such cases, less chloride is available at the electrode's surface, so it is more difficult to produce [CuCl4]2− and CuCl2 is more likely to form.
UV-Visible Spectroscopy
It is of utmost significance to know the speciation of dissolved copper in the choline chloride/EG system. After the electrochemical dissolution of bulk metallic copper into Ethaline, a coloured solution of dissolved copper was generated. Of great importance for the dissolved bulk metal is the identity of the species arising from copper dissolution into a chloride-rich electrolyte, as this can be regarded as a key factor in dealing with the kinetics of the electrochemical processes. Herein, a facile, non-destructive technique was used for this task: UV-Vis spectroscopy. To gain information about any species in the visible wavelength range, the solution must be coloured; to fulfil this condition, chromophores with electronic transitions within the d-d orbitals have to exist [22].
From Figure 4, one can see similar UV-Vis spectra for 0.1 mM CuCl2·2H2O in Ethaline and for the solution of electrochemically dissolved copper, obtained by holding a copper disc electrode in Ethaline at a potential of 1.2 V for 1 h at 20 °C. There are, however, two observations to make: first, there is a large peak at around 220 nm in the electrochemically dissolved copper solution which is entirely missing in the control solution; second, the peak at 281 nm in the control solution is blue-shifted in the solution of the electrochemically dissolved copper. These features may be associated with impurities present in the copper wire used (99.9%).
The spectra obtained in both cases were otherwise identical, indicating that the speciation of copper stripped from the bulk metal into Ethaline is exactly the same as that of the solution species obtained by dissolving CuCl2·2H2O. The formation of [CuCl4]2− was evident from the spectra, where three distinct peaks were seen at 233, 281, and 407 nm, and this was confirmed using EXAFS [20,22].
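A simple way to make the comparison between the anodically generated solution and the CuCl2·2H2O reference quantitative is to locate the absorbance maxima in both spectra. A sketch using SciPy; the prominence threshold and variable names are our own assumptions:

```python
import numpy as np
from scipy.signal import find_peaks

def absorbance_maxima(wavelength_nm, absorbance, min_prominence=0.01):
    """Return the wavelengths of the absorbance maxima, e.g. to check for
    the ~233, 281 and 407 nm features attributed to [CuCl4]2-."""
    idx, _ = find_peaks(np.asarray(absorbance), prominence=min_prominence)
    return np.asarray(wavelength_nm)[idx]
```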
Figure 1b exhibits the corresponding experiments in the IL C4mimCl, in other words, Cu electrochemical dissolution and the voltammetry of 0.1 M CuCl2·2H2O on a Pt electrode. This experiment had to be performed at 70 °C owing to the high melting point of C4mimCl. Clearly, a comparable current density was observed for copper dissolution to that in the choline chloride-based IL, and the same onset potential for electrochemical copper dissolution was also obtained. However, the most noticeable difference between the electrochemical dissolution responses in the choline chloride-based IL and the imidazolium-based IL is that there is no quasi-passivation response in the latter; instead, an inflection point occurs at the same potential as the Cu(I)/(II) process. It is also worth noting that the voltammogram for CuCl2·2H2O in C4mimCl is quite comparable to that in Ethaline, showing two one-electron reduction and oxidation processes.

The Effect of Temperature

Basically, the ligands for the electrochemical dissolution of copper should be the same in both liquids (Cl−). However, there is a discrepancy in the electrochemical behaviour in the two electrolytes, which can therefore only be due to kinetic factors (diffusion of copper away from the electrode and of Cl− to the electrode) or to thermodynamic factors related to the solubility of the chlorometallate complexes in the two electrolytes.
A survey of the literature reveals a large number of studies on copper dissolution and deposition in various electrolyte solutions. The kinetics of the electrodeposition of a copper salt in the choline chloride-based IL have been studied, and it was determined that the Cu(I)/(II) process is quasi-reversible with a rate constant of 9.5 × 10−4 cm·s−1 [25]. For copper deposited from its salt in the electrolyte, the amount of copper on the electrode's surface is quite small, so it can all be dissolved in the anodic sweep (returning the current to approximately zero), and a diffusion-limited current is seen for the oxidation of Cu(I) to Cu(II). When copper dissolution is conducted electrochemically at a bulk copper electrode, however, the copper is effectively at infinite concentration, and the solution close to the electrode's surface, i.e., the interface region, can become saturated. It is proposed that this is what occurred in Figure 1a at a potential of about +0.2 V. Saturating the solution at the interfacial region with the copper complex results in a decrease of the oxidative current as the electrode becomes blocked with the dissolution product, and, as a consequence, an asymmetric peak is obtained.
The number of moles involved in the phase transitions at the first oxidation peaks has been computed and is presented in Table 1. One can conclude that the conversion of metallic copper to Cu(I) in Ethaline is about 90 times greater than the stripping of electrodeposited copper in the choline chloride-based and imidazolium-based electrolytes, using platinum as the substrate at 20 °C and 70 °C, respectively. The number of moles of copper stripped electrochemically in the choline chloride-based IL at 20 °C is slightly lower than that calculated in the imidazolium-based IL at 70 °C. This might be due to the higher viscosity of the imidazolium-based IL, even at 70 °C (142 cP), compared with the choline chloride-based IL at 20 °C (36 cP) [26]. The quasi-passivation process in the electrochemical dissolution of bulk copper in the choline-based IL at 20 °C could be a result of the low solubility of CuCl2. To verify this, the experiment was repeated in the choline-based IL at 70 °C, to allow a comparison with the imidazolium-based IL experiment, and the results are shown in Figure 5. As anticipated, when the temperature was raised the anodic current increased (approximately 10-fold) because of the lower solution viscosity. It should, however, be noted that the sharp decline in current still occurred, at 0.6 V instead of 0.2 V. This is presumably because super-saturation at the elevated temperature requires a higher concentration relative to the saturated one, and because the diffusion of chloride was also faster.
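The mole numbers quoted in Table 1 follow from Faraday's law applied to the charge under the first oxidation peak. A minimal sketch; the integration limits and the one-electron assumption are ours, not taken from the paper:

```python
import numpy as np

FARADAY = 96485.0   # C/mol

def moles_from_cv_peak(potential_v, current_a, scan_rate_v_s, n_electrons=1):
    """Moles oxidised under a linear-sweep voltammetric peak.

    For a linear sweep, dt = dE / scan_rate, so the charge is
    Q = (1 / scan_rate) * integral(i dE), and n = Q / (z F).
    """
    charge = np.trapz(current_a, potential_v) / scan_rate_v_s   # [C]
    return charge / (n_electrons * FARADAY)                     # [mol]
```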
It should also be noted that the oxidation onset potential shifted to more negative (less anodic) values as the temperature was raised, demonstrating kinetically accelerated oxidation of the bulk copper. Ultimately, however, the overall electrochemical behaviour did not change at the elevated temperature; in other words, passivation was still effective.
The time taken for the bulk copper electrode to passivate in the choline chloride-based IL as a function of scan rate is presented in Figure 6. At faster sweep rates, a higher concentration of electrochemically dissolved copper is put into solution more rapidly, and the interfacial region between the electrode's surface and the electrolyte saturates more quickly. It appears as if the passivation potential increases, but this is just an artefact of the system not being at equilibrium.
Influence of Mass Transport
When the potential of the bulk copper electrode was held at +0.18 V versus Ag/Ag+ for 10 min in the choline chloride-based IL, the surface initially darkened and a green film slowly formed on the electrode's surface, as presented in previous work [21]. At this potential, the most likely salt is CuCl, which is only sparingly soluble in the electrolyte. The light green colour indicates that further oxidation to Cu(II) occurs; given the applied electrode potential, however, this oxidation could also be caused by the presence of dissolved oxygen.
After the green film was washed off, the electrode metal was darkened quite considerably owing to surface roughening. To study the morphology of the surface, it is important to determine the role of mass transport; the experiment was therefore repeated using a rotating disc electrode, as presented in Figure 7. The morphology of the surface before and after anodic polarisation, with and without stirring, is shown in a later section for a bulk copper electrode.
The influence of mass transport was tested using a rotating disc electrode, as presented in Figure 7. It is important to note that passivation occurs not only in the absence of stirring but even at rotation rates of up to 500 rpm; only at rotation speeds above 1000 rpm was the passivation response lost. As the rotation speed increased, so did the current, owing to the greater supply of anions to the surface and hence the greater extent of reaction. The current did not, however, reach a steady-state value as would be predicted for a solution-based species, so it must be limited by the diffusion of oxidised copper away from the electrode's surface rather than by the diffusion of chloride from the bulk electrolyte to the electrode's surface. The formation of films on the electrode's surface is known to influence the surface morphology of the dissolving copper. Electropolishing to a small extent, i.e., surface levelling, is thought to occur because of film formation: the compact, resistive film restricts metal ions from diffusing away from the electrode's surface.
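For context, the classical Levich expression gives the limiting current expected if the reaction were controlled purely by transport of a solution species (here chloride) to the rotating disc; the fact that the measured currents keep rising with rotation speed without reaching such a plateau supports the interpretation above. A sketch with rough, assumed property values, not data from this paper:

```python
import numpy as np

FARADAY = 96485.0   # C/mol

def levich_current(n, area_cm2, diff_cm2_s, conc_mol_cm3, kin_visc_cm2_s, rpm):
    """Levich limiting current for a rotating disc electrode [A]:
    i_L = 0.620 n F A D^(2/3) w^(1/2) nu^(-1/6) C, with w in rad/s."""
    w = 2.0 * np.pi * rpm / 60.0
    return (0.620 * n * FARADAY * area_cm2 * diff_cm2_s ** (2.0 / 3.0)
            * np.sqrt(w) * kin_visc_cm2_s ** (-1.0 / 6.0) * conc_mol_cm3)

# Illustrative only: 1 mm diameter disc; D, nu and the chloride concentration
# for Ethaline at 20 C below are rough guesses, not values from this paper.
print(levich_current(1, np.pi * 0.05 ** 2, 1e-7, 4.8e-3, 0.3, 1000))
```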
Electrochemical Impedance Spectroscopy (EIS)
The EIS technique is a novel approach to examining the electrical properties of the bulk and interfacial regions of various materials [27-29]. Figure 8 shows the EIS spectra of metallic copper in the choline chloride-based IL at various direct current (DC) potentials. At −0.2 V, there was a single semicircle corresponding to an electron transfer process, which became smaller at 0 V, indicating an increase in the electron transfer rate constant. At +0.2 V, a second semicircle appeared, which grew at +0.4 V and dominated above that voltage, suggesting that an insulating layer had formed on the electrode's surface. The two semicircle responses at +0.2 V could represent the first and second oxidations of copper.
The impedance data acquired at different polarisation potentials (Figure 8b) were fitted to an electrical equivalent circuit (EEC) comprising two Randles circuits in series with a Warburg impedance. An EEC is a straightforward way of representing the behaviour of the medium with circuit elements, and thus of understanding the electrical properties of the system under study [30,31]. From the data analysis, when a potential of +0.8 V was applied, the film capacitance was found to be 4.9 × 10−6 F·cm−2; assuming a dielectric constant of 8.0, the thickness of the film was estimated to be 1.4 µm [32]. This is about an order of magnitude more than expected for the films formed in aqueous media. The film was, however, remarkably thicker than that found for stainless steel electropolishing in the same liquid under comparable conditions, and less thick compared with cobalt in the same electrolyte. As a consequence, the cobalt surface underwent a good mirror-like electropolishing. The thickness was estimated to be only 16 nm for that layer [32]. It should, however, be noted that the speciation is different: for stainless steel, the iron complex formed is a glycolate.
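The building block of the equivalent circuit mentioned here is the Randles element; a minimal sketch of its frequency response follows (this is the generic textbook expression, not the authors' fitting code, and the parameter values in the usage line are arbitrary):

```python
import numpy as np

def randles_impedance(freq_hz, r_s, r_ct, c_dl, sigma_w=0.0):
    """Complex impedance of a Randles circuit: a series resistance R_s plus
    the parallel combination of the double-layer capacitance C_dl and the
    charge-transfer branch (R_ct in series with a semi-infinite Warburg
    element of coefficient sigma_w). Two such elements in series model
    two-semicircle spectra like those described above."""
    w = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    z_warburg = sigma_w * (1.0 - 1j) / np.sqrt(w)
    z_branch = r_ct + z_warburg
    return r_s + 1.0 / (1j * w * c_dl + 1.0 / z_branch)

# e.g. a spectrum over the 1-65,000 Hz range used in the measurements:
freqs = np.logspace(0, np.log10(65000), 60)
z = randles_impedance(freqs, r_s=50.0, r_ct=500.0, c_dl=5e-6, sigma_w=20.0)
```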
The experiment was repeated in the choline chloride-based IL at 70 °C (not shown) and the same dissolution mechanism was observed. A comparison with copper dissolution at 20 °C showed a capacitance of 2.17 × 10⁻⁵ F·cm⁻² at 70 °C and +0.8 V, which corresponds to a layer about 0.3 µm thick [32]. It would seem logical that the diffusion layer in the duplex salt film model should be thinner at a higher temperature, as the salt would be more soluble.
The structure of the double layer at the electrode's surface in molecular solvent systems is completely different from that in ILs, in such a way that in the former, the electrode charge is compensated by both adsorbed counter ions and the diffuse layer, while in the latter, the structure may involve a monolayer of counter ions as compensation, followed by a multilayer involving cations and anions adjacent to each other [33,34].
In Figure 8c, the impedance of a copper electrode in the imidazolium-based IL was measured at 70 °C as a function of DC potential. A single semicircle was observed at −0.2 V, corresponding to an electron transfer process that was most likely the oxidation of copper. In this experiment, the polarisation potential was stepped across the potential window from negative to positive. As the DC potential was shifted from negative values to 0.0 V, the semicircle narrowed as a result of a faster electron transfer process, which was expected given the increase in applied potential. From +0.4 to +1.2 V, a vertical straight line was observed, the signature of a series RC circuit, which was caused by an insulating layer on the electrode's surface. The response of the series RC circuit did not change with potential, indicating that once the film forms, it is not permeable and insulates the electrode's surface. It would therefore be expected that the electrode would not be electropolished. This reveals how the speciation of dissolved copper can affect the behaviour of the electrode. The linear sweep voltammetry of the imidazolium-based IL system likewise gives a diagonal line, indicating a resistor, which can be attributed to the formation of a copper chloride layer on the copper metal.
Electropolishing
Electropolishing can be described as a controlled electrochemical dissolution of a surface with the aim of making it less rough at the macroscale (levelling, >1 µm) and the microscale (brightening, <1 µm) [35]. The basic requirements of this process are film formation and a mass-transport-limited current plateau in the polarisation response. To achieve the conditions for macrosmoothing, either ohmic control or mass transport control has to be established, while for microsmoothing the mass transport mechanism alone is sufficient [35].
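As a rough illustration of the mass-transport-limited plateau mentioned above, the textbook limiting current density i_lim = nFDc/δ can be estimated as follows; the diffusion coefficient, surface saturation concentration and diffusion layer thickness are assumed, order-of-magnitude values and are not measurements from this study.

```python
def limiting_current_density(n, diff_coeff_m2_s, conc_mol_m3, delta_m):
    """Mass-transport-limited current density i_lim = n*F*D*c/delta, in A m^-2."""
    faraday = 96485.0  # C mol^-1
    return n * faraday * diff_coeff_m2_s * conc_mol_m3 / delta_m

# Assumed values for Cu(II) in a viscous chloride-based liquid (illustrative only)
i_lim = limiting_current_density(n=2, diff_coeff_m2_s=1e-11,
                                 conc_mol_m3=100.0, delta_m=50e-6)
print(f"i_lim ≈ {i_lim * 0.1:.2f} mA cm^-2")  # 1 A m^-2 = 0.1 mA cm^-2
```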
In the present work, metallic copper was electropolished in the choline chloride-based IL at 20 °C at a potential of +1.2 V, as shown in Figure 9. This is the first time that a DES has been shown to be a useful electropolishing electrolyte for a single metal. Figure 9 shows a metallic copper surface that is brighter, but there are obvious signs of pitting on the surface. It could be questioned whether this surface is truly electropolished or merely brightened.
Atomic Force Microscopy (AFM)
The AFM images of a bulk copper sheet before and after electrochemical dissolution in the choline chloride-based IL are presented in Figure 10. Metallic copper undergoes pitting under some conditions and electropolishing under others; on the native (unpolished) surface, machining marks and scratches can clearly be seen. Over the anodic sweep, the average surface roughness, Ra, of the electrode increased from 0.75 µm before anodic polarisation to 3.75 µm afterwards. When the electrode was rotated at 3000 rpm, a value of 0.63 µm was obtained. If the liquid was stirred (by rotating the electrode at 3000 rpm), no film was produced at the electrode's surface and the solution became green.
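For reference, the Ra values quoted above are arithmetic-mean roughnesses. A minimal sketch of how Ra (and the related root-mean-square roughness Rq) would be computed from an AFM height profile is shown below, using a synthetic profile because the raw AFM data are not reproduced here.

```python
import numpy as np

def roughness(heights_um):
    """Arithmetic mean roughness Ra and RMS roughness Rq from a height profile (µm)."""
    z = np.asarray(heights_um, dtype=float)
    deviation = z - z.mean()                 # heights relative to the mean line
    ra = np.mean(np.abs(deviation))          # Ra: mean absolute deviation
    rq = np.sqrt(np.mean(deviation ** 2))    # Rq: root-mean-square deviation
    return ra, rq

# Synthetic line scan standing in for AFM data (illustrative only)
rng = np.random.default_rng(0)
profile = 0.5 * np.sin(np.linspace(0, 20 * np.pi, 2000)) + rng.normal(0.0, 0.2, 2000)
ra, rq = roughness(profile)
print(f"Ra = {ra:.2f} µm, Rq = {rq:.2f} µm")
```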
Galvanostatic etching for short times resulted in pitting, leading to an uneven surface, whereas increasing the etch time led to a visibly brighter surface with less microscopic roughness. The machining marks were removed by the electropolishing process. The same pattern was observed for the electropolishing of stainless steel in the choline chloride-based IL [36]. In commercial electropolishing electrolytes, it is well known that levelling only really takes place once the electrolyte is saturated with metal ions. Electropolishing the metallic copper samples with 0.81 M CuCl₂ added to the solution produced two different morphologies on the electrode's surface: at short etch times, pitting was more evident, whereas longer timescales gave more even surface finishes. It is worth mentioning that hydrogen evolution occurs at the cathode during the anodic dissolution of metallic copper in the choline chloride-based IL at 20 °C. Negligible gas evolution, in the form of bubbles, is observed at low current density (<20 mA·cm⁻²), whereas at higher current densities the evolution becomes significant, particularly with the addition of water in any amount. This can be related to the electrolysis of either a trace of water or EG [36]. The hydrogen evolution lowers the current efficiency of the electrochemical dissolution process, which is undesirable. DESs are generally less sensitive to moisture than ILs, but the effect of water on the electrochemical process is still significant [37].
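The current-efficiency loss mentioned above can be quantified by comparing the measured mass loss with the mass predicted by Faraday's law for the charge passed. The sketch below illustrates the calculation; the charge and the measured mass are made-up numbers, since the actual values are not reported here.

```python
def faradaic_mass_loss(charge_c, molar_mass_g_mol, n_electrons):
    """Theoretical anodic mass loss (g) from Faraday's law: m = M*Q/(n*F)."""
    faraday = 96485.0  # C mol^-1
    return molar_mass_g_mol * charge_c / (n_electrons * faraday)

# Hypothetical example: 500 C passed, copper dissolving as Cu(II)
m_theory = faradaic_mass_loss(charge_c=500.0, molar_mass_g_mol=63.55, n_electrons=2)
m_measured = 0.14  # g, assumed value for illustration only
efficiency = 100.0 * m_measured / m_theory
print(f"theoretical loss = {m_theory:.3f} g, current efficiency ≈ {efficiency:.0f}%")
```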
There is an alteration in the morphology of the metallic copper surface from dark to bright, as shown in Figure 10. The roughness is also revealed in Figure 10: the copper sheet is rougher after dissolution, but the reduction in the number of crevices is the key feature of the treated sheet. To decide whether electropolishing occurred, at least in this work, the brightness achieved had to be sufficient. This is desirable from the electropolishing perspective and can be linked to the type of electrolyte, because the electropolishing of copper depends strongly on the nature of the electrolyte. It is also notable that the nature of the interfacial region governs the nature of the electrochemical polishing, which differs to a large extent from that of its aqueous counterparts. It is well reported that electrochemical polishing occurs at the interfacial region between the electrode and the electrolyte, where dissolved ions diffuse from the electrode's surface into the bulk electrolyte [38,39].
Conclusions
In the present work, the study of the mechanism of metallic copper dissolution in two chloride-containing electrolytes, the choline chloride-based deep eutectic solvent ethaline and the imidazolium-based IL C₄mimCl, revealed that CuCl(ads) and CuCl₂(ads) formed in the first oxidation region as a compact film, and that this was followed by a second oxidation resulting in the complexation of the oxidised copper as CuCl₃⁻ and CuCl₄²⁻, producing a porous film which then diffused away from the metallic copper surface. In other words, dissolved copper was released from the bulk solid phase into both electrolytes as Cu(II) rather than Cu(I). The Cu(I)/Cu(II) process can be identified by an asymmetric peak in the anodic regime, the result of passivation of the copper surface primarily by saturated corrosion products. This is evidenced by comparing the voltammograms of CuCl₂·2H₂O and the metallic copper disc in both electrolytes, and can be linked to the chemistry of what actually happens at the interface, considering its composition and structure. It was also seen that EG was responsible for saturation of the interfacial region, drawing chloride ions into that region; the quasi-passivation of the copper in this electrolyte therefore depends on the EG.
The postulated mechanism involves the electrochemical formation of CuCl(ads) and CuCl₂⁻, followed by oxidation to CuCl₂, leading to super-saturation of the interfacial region with [CuCl₃]⁻ and [CuCl₄]²⁻, restricted by the availability of Cl⁻. It is probable that the Cl⁻ ions interact with the oxidised metallic copper, creating insulating CuCl and CuCl₂ which may not cover the entire surface; the parts left free of coverage, in non-stoichiometric proportions, are vulnerable to solvation by Cl⁻ ions, and as a result diffusion away from the surface occurs. Additionally, the kinetics of metallic copper dissolution in both electrolytes of interest was studied to some extent. The influence of temperature and the use of the RDE demonstrated that electrochemical dissolution increased as the temperature was raised and that the mass transport effect shifted towards faster electrochemical dissolution of metallic copper.
Finally, the electropolishing of copper in the deep eutectic solvent was accomplished to some extent, which cannot be achieved by means of an aqueous chloride electrolyte. The electropolishing of copper in this electrolyte (ChCl:EG 1:2) involves levelling (macrosmoothing) and, to some extent, brightening with a reduction in surface roughness. The use of dissolved copper as a counter electrode in a range of experiments, to produce copper ions slowly at the working electrode on a large scale, is another advantage of studying the anodic behaviour of copper metal.
|
2019-10-24T09:12:33.260Z
|
2019-10-17T00:00:00.000
|
{
"year": 2019,
"sha1": "5169538d31bd67c2d2df32ee61a15362f542a69b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/9/20/4401/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "3846f7a7f708b0e996d429912346174ce6dc5339",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Engineering"
]
}
|
250329524
|
pes2o/s2orc
|
v3-fos-license
|
The Association Between Hypertensive Disorders in Pregnancy and the Risk of Developing Chronic Hypertension
Objective This meta-analysis comprehensively evaluated the association between hypertensive disorders in pregnancy (HDP) and the risk of developing chronic hypertension and the associations between specific types of HDP, including preeclampsia (PE) and gestational hypertension (GH), and the risk of developing chronic hypertension. Design Systematic review and meta-analysis. Data Sources The PubMed, Embase and Cochrane Library databases were searched from inception to August 20, 2021. Methods Depending on heterogeneity, the combined odds ratio (OR) of the 95% confidence interval (CI) was obtained with a random-effects or fixed-effects model. We used meta-regression analysis to explore the sources of heterogeneity. We analyzed the OR value after adjusting for age and BMI at recruitment, prepregnancy BMI, age at first delivery, and other factors. Additionally, we evaluated the results of the subgroup analysis by the year of publication (< 2016, ≥ 2016), study design, sample size (< 500, ≥ 500), region (North and South America, Europe, and other regions) and NOS score (< 7, ≥ 7). Results Our systematic review and meta-analysis comprehensively explored the relationships between HDP, GH, and PE and chronic hypertension. Twenty-one articles that included 634,293 patients were included. The results of this systematic review and meta-analysis suggested that women with a history of HDP are almost 3.6 times more likely to develop chronic hypertension than those without a history of HDP, women with a history of GH are almost 6.2 times more likely to develop chronic hypertension than those without a history of GH, and women with a history of PE are almost 3.2 times more likely to develop chronic hypertension than those without a history of PE. In addition, we further calculated the probability of developing chronic hypertension among patients with HDP or PE after adjusting for age and BMI at recruitment, prepregnancy BMI, age at first delivery, and other factors. The results suggested that women with a history of HDP are almost 2.47 times more likely to develop chronic hypertension than those without a history of HDP and that women with a history of PE are almost 3.78 times more likely to develop chronic hypertension than those without a history of PE. People in Asian countries are more likely to develop chronic hypertension after HDP or PE, while American people are not at high relative risk. Conclusion These findings suggest that HDP, GH, and PE increase the likelihood of developing chronic hypertension. After adjustment for age and BMI at recruitment, prepregnancy BMI, age at first delivery, and other factors, patients with HDP or PE were still more likely to develop chronic hypertension. HDP may be a risk factor for chronic hypertension, independent of other risk factors. GH and PE, as types of HDP, may also be risk factors for chronic hypertension. Systematic Review Registration [www.ClinicalTrials.gov], identifier [CRD42021238599].
INTRODUCTION
Hypertension is one of the most common conditions that occur during pregnancy and the main cause of maternal death (1). Ten percent of pregnancies are affected by hypertension, especially those of primiparas. Hypertensive disorders in pregnancy (HDP) include a series of diseases classified as preeclampsia, eclampsia, gestational hypertension, pregnancy complicated with chronic hypertension and preeclampsia superimposed on chronic hypertension (2). Their definitions are shown in Table 1. HDP remains one of the leading causes of maternal and fetal disease incidence and mortality worldwide. Moreover, HDP is closely related to the patient's future health. A study found that women with prepregnancy hypertension and those with both HDP and prepregnancy hypertension had an increased risk of kidney disease 5 years after delivery (3). HDP increases the risk of future cardiovascular events and has been included in the guidelines for the risk assessment and prevention of stroke and cardiovascular disease (CVD) in women (4,5). Recent evidence indicates that the incidence rate of HDP has increased over the past 30 years, suggesting that HDP, a sex-specific CVD risk factor, may become more important in the coming years (6,7). A history of gestational hypertension/preeclampsia is related to subclinical atherosclerosis (increased carotid intimamedia thickness (IMT) and plaque) (8). Pregnancy-induced hypertension is even hereditary, affecting the cardiovascular health of offspring (9).
Studies have shown that women with preeclampsia have a higher risk of developing chronic hypertension. Indeed, comprehensive data show that 20% of women with eclampsia develop hypertension within 15 years (10). However, the risk varies depending on the population studied and the criteria used for diagnosis. According to a study, the risk of hypertension in Sweden 5-12 years after pregnancy is approximately 40% (11,12). Three other studies reached similar conclusions (13)(14)(15). The correlation between HDP and chronic hypertension fluctuates greatly. The results were different depending on the region and follow-up years. There are many other confounding factors, such as race or country; studies have shown that African women with a history of pregnancy-induced hypertension, followed by Hispanic and Asian women, have the highest risk of future high blood pressure. Moreover, individuals with normal blood pressure showed better health-related quality of life than patients with hypertension. Although systemic hypertension has almost always been considered a clinically asymptomatic disease, it can impair the quality of life of patients (16,17). Therefore, the early prevention of hypertension is necessary. If the association between gestational hypertension and chronic hypertension can be identified, the early prevention and treatment of HDP will greatly benefit the long-term health of patients.
This systematic review and meta-analysis assessed recent reports to explore the association between HDP and chronic hypertension and evaluate the associations between specific types of HDP, including preeclampsia (PE), and gestational hypertension (GH), and the risk of developing chronic hypertension. We analyzed both crude and adjusted OR values to better determine the relationships between the variables and the stability of the results. We also conducted subgroup analysis by country and year to analyze the relationship between HDP and chronic hypertension.
METHODS
This systematic review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (18).
Protocol, Eligibility Criteria, Information Sources, and Search Strategy
This review was based on a prior design recommended for systematic reviews and meta-analyses. The PubMed, EMBASE, and Cochrane Library databases were searched electronically in August 2021 using a combination of terms, keywords and word variants related to the medical subject headings (MeSH) "hypertension, pregnancy," "preeclampsia," "eclampsia" and "hypertension." We used EndNote X9 to remove duplicate articles and then browsed the titles and summaries to exclude unrelated articles. Reviews, meta-analyses, articles lacking relevant data, letters and abstracts were excluded. There were no time or language restrictions. The reference lists of relevant articles and comments were manually searched for additional reports. The study was registered in the Prospero database (registration number: CRD42021238599).
Table 1. Definitions of the types of HDP.
Gestational hypertension: Hypertension occurring after 20 weeks of pregnancy, with systolic blood pressure ≥ 140 mmHg and/or diastolic blood pressure ≥ 90 mmHg and a return to normal blood pressure within 12 weeks after delivery; urinary protein (-); the diagnosis can be made after delivery.
Preeclampsia: Systolic blood pressure ≥ 140 mmHg and/or diastolic blood pressure ≥ 90 mmHg after 20 weeks of pregnancy, accompanied by urinary protein ≥ 0.3 g/24 h or random urinary protein (+); or without proteinuria but combined with any of the following: thrombocytopenia (platelets < 100 × 10⁹/L); liver function impairment (serum transaminase level more than twice the normal value); renal function impairment (serum creatinine level > 1.1 mg/dl or more than twice the normal value); pulmonary edema; new central nervous system abnormalities or visual impairment.
Eclampsia: Convulsions that cannot be explained by other causes, occurring on the basis of preeclampsia.
Preeclampsia superimposed on chronic hypertension: In women with chronic hypertension, no proteinuria before pregnancy and proteinuria present after 20 weeks of pregnancy; or proteinuria present before pregnancy that increases significantly after pregnancy; or a further rise in blood pressure; or thrombocytopenia (platelets < 100 × 10⁹/L); or other serious manifestations such as liver and kidney function damage, pulmonary edema, nervous system abnormalities, or visual impairment.
Pregnancy complicated with chronic hypertension: Systolic blood pressure ≥ 140 mmHg and/or diastolic blood pressure ≥ 90 mmHg before 20 weeks of pregnancy (excluding trophoblastic diseases), with no significant aggravation during pregnancy; or hypertension first diagnosed after 20 weeks of pregnancy and continuing beyond 12 weeks postpartum.
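As a rough illustration of how the Table 1 thresholds separate the main categories, the deliberately simplified classifier below encodes only the blood pressure, onset-timing and proteinuria criteria; it ignores the severe-feature, eclampsia and superimposed-preeclampsia branches and is not a clinical tool.

```python
def classify_hdp(sbp, dbp, onset_week, proteinuria, resolved_by_12wk_postpartum):
    """Very simplified HDP categorisation based on the Table 1 thresholds (illustrative only)."""
    if sbp < 140 and dbp < 90:
        return "no hypertensive disorder"
    if onset_week < 20:
        return "pregnancy complicated with chronic hypertension"
    if proteinuria:
        return "preeclampsia"
    if resolved_by_12wk_postpartum:
        return "gestational hypertension"
    return "pregnancy complicated with chronic hypertension"

print(classify_hdp(sbp=150, dbp=95, onset_week=28,
                   proteinuria=True, resolved_by_12wk_postpartum=True))  # -> preeclampsia
```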
Study Selection, Data Collection, and Data Items
The main outcome was the incidence rate of chronic hypertension in patients with HDP or with the specific types PE and GH. We included case-control studies and cohort studies that provided data on how many patients developed hypertension several years after delivery. The research period of the different studies varied: the span was large, and the time period ranged from 1 to 30 years. Hypertension was defined as a systolic blood pressure (SBP) ≥ 140 mmHg and/or a diastolic blood pressure (DBP) ≥ 90 mmHg occurring more than once in a clinical environment. The use of antihypertensive drugs and lower thresholds for defining hypertension were also included in the diagnostic criteria. When data were available, only patients affected by HDP, PE, and GH were considered in the analysis. We excluded studies in which chronic hypertension was present before pregnancy or before 20 weeks of gestation. If a study included patients with chronic hypertension, we considered only the articles that provided the number of patients with chronic hypertension. In addition, we did not include articles about the incidence rate of postpartum hypertension within 1 year of delivery. Two researchers, Xu and Wang, independently performed all abstract screenings. The two researchers retrieved and independently evaluated the full texts of potentially eligible studies. Any inconsistencies or differences were discussed with a third reviewer, and a consensus was reached. Several articles were translated into languages other than English to determine whether they were suitable for inclusion. The reviewers extracted data on the study characteristics and results, especially the author, year, location, study type, population size, and reported results. If multiple studies with the same endpoint were published for the same cohort, the report containing the most comprehensive population information was used to avoid population overlap.
Risk of Bias and Study Quality
The quality of the included studies was assessed using the Newcastle-Ottawa Scale (NOS) for cohort and case-control studies, which was developed by Schokker et al. to assess the quality of non-randomized studies (19). With this protocol, the maximum score for each study was 9. Studies with a score ≥ 7 were considered high-quality articles. The two authors independently reviewed each study and determined whether it was eligible for inclusion in our meta-analysis. If there were any differences, the third author joined the discussion. Since the NOS could not be used to fully evaluate the potential confounding factors in the study analysis, information on which confounding factors were considered in each study was further extracted. Publication bias was assessed by a funnel plot using Begg's and Egger's tests (20). Subgroup analysis by publication year (< 2016, ≥ 2016), study design, location, sample size (< 500, ≥ 500) and NOS score (< 7, ≥ 7) was performed to further evaluate the associations between HDP, PE, and GH and chronic hypertension.
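Since Begg's and Egger's tests are only named here, the following sketch shows the core of Egger's regression test (regress the standardized effect on precision and test whether the intercept differs from zero); the log odds ratios and standard errors are hypothetical inputs, not the extracted study data.

```python
import numpy as np
from scipy import stats

def eggers_test(log_or, se):
    """Egger's test: standardized effect ~ precision; a non-zero intercept suggests asymmetry."""
    y = np.asarray(log_or) / np.asarray(se)   # standardized effect sizes
    x = 1.0 / np.asarray(se)                  # precisions
    n = len(y)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    s2 = np.sum(resid ** 2) / (n - 2)
    se_intercept = np.sqrt(s2 * (1.0 / n + x.mean() ** 2 / np.sum((x - x.mean()) ** 2)))
    t_stat = intercept / se_intercept
    p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)
    return intercept, p_value

# Hypothetical study-level effects for illustration only
intercept, p = eggers_test(log_or=[0.9, 1.2, 0.7, 1.5, 1.1, 0.4],
                           se=[0.30, 0.25, 0.40, 0.20, 0.35, 0.50])
print(f"Egger intercept = {intercept:.2f}, P = {p:.3f}")
```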
Statistical Analysis
We constructed forest plots to obtain pooled ORs and 95% CIs. We applied a random-effects model to calculate the combined effect estimate if I² ≥ 50%; otherwise, we used a fixed-effects model. Sensitivity analysis was used to explore the robustness of the included literature. Publication bias was assessed by funnel plots and linear regression tests. If the funnel plot was obviously asymmetric, we further used the trim-and-fill method to adjust the data. In addition, meta-regression analysis was performed based on publication year, NOS score, country, sample size, and study design to explore the sources of heterogeneity. All analyses were conducted with R version 3.6. The critical value for statistical significance was set at P < 0.05.
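To make the pooling step concrete, below is a minimal sketch of DerSimonian-Laird random-effects pooling of log odds ratios, with the I² heterogeneity statistic. The authors report using R 3.6; this Python snippet is only illustrative, and the example ORs and CIs are hypothetical, not the extracted study data.

```python
import numpy as np

def pool_random_effects(or_values, ci_lower, ci_upper):
    """DerSimonian-Laird random-effects pooling of odds ratios given their 95% CIs."""
    y = np.log(or_values)                                     # log odds ratios
    se = (np.log(ci_upper) - np.log(ci_lower)) / (2 * 1.96)   # SE recovered from the CI width
    w = 1.0 / se ** 2                                         # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                        # Cochran's Q
    k = len(y)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0  # I² heterogeneity (%)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (se ** 2 + tau2)                             # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    ci = np.exp(y_re + np.array([-1.96, 1.96]) * se_re)
    return np.exp(y_re), ci, i2

# Hypothetical per-study ORs and 95% CIs, for illustration only
pooled_or, pooled_ci, i2 = pool_random_effects(
    or_values=np.array([2.5, 4.1, 3.0, 6.0]),
    ci_lower=np.array([1.8, 2.9, 1.5, 3.2]),
    ci_upper=np.array([3.5, 5.8, 6.0, 11.3]),
)
print(f"pooled OR = {pooled_or:.2f}, 95% CI {pooled_ci[0]:.2f}-{pooled_ci[1]:.2f}, I² = {i2:.0f}%")
```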
Study Selection
To obtain relevant literature, we searched the PubMed, Embase and Cochrane Library databases from inception to August 20, 2021. A total of 57,194 studies were obtained (Figure 1). After removing duplicate articles, 45,436 articles remained. Then, we culled articles that were unrelated and lacked data by scanning the titles, abstracts, and full texts. In addition, three studies that were retrieved from the reference lists of previous relevant articles were included. Ultimately, 21 studies met all eligibility criteria .
Study Characteristics
The 21 studies included in this systematic review and meta-analysis varied in study design, year of publication, NOS score, country, and sample size. All studies were observational; 12 were described as cohort studies, eight as case-control studies and one as a cross-sectional study. The publication dates of these articles ranged from 2000 to 2021. Among these articles, the study areas included Europe for seven studies, North and South America for nine studies, and other regions for five studies. The smallest sample size was 28 (38), and the largest was 331,707 (35). Eleven studies examined HDP, three examined GH, and 13 examined PE; five studies included more than one disease. The research characteristics are summarized in Table 2.
Total Pooled Effect
As shown in Figure 2A, the heterogeneity among the eligible articles about HDP was I² = 96% (P < 0.01), so we chose to use a random-effects model. The overall combined effect showed that HDP patients had a higher risk of developing chronic hypertension than healthy controls (OR 3.61, 95% CI 2.18-6.00). We also calculated the GH and PE results and chose to use random-effects models (I² = 73%, P = 0.03 for GH; I² = 97%, P < 0.01 for PE). Women with GH or PE were at higher risk of developing chronic hypertension than healthy controls (GH: OR 6.24, 95% CI 1.73-22.55; PE: OR 3.19, 95% CI 1.52-6.70) (Figures 2B,C).
Some articles reported adjusted OR values for age and BMI at recruitment, prepregnancy BMI, age at first delivery and other factors. We further evaluated the associations between HDP, GH, and PE and chronic hypertension based on the adjusted OR values.
The heterogeneity among the articles about HDP with adjusted OR values was I² = 79% (OR 2.47, 95% CI 1.67-3.64) (Figure 3A), and the heterogeneity among those with unadjusted OR values was I² = 83% (OR 2.36, 95% CI 1.43-3.88) (Figure 3B). The two results were similar, showing that patients with HDP are at higher risk of developing chronic hypertension than healthy controls. The same trend in the risk of chronic hypertension was observed in the PE group with adjusted OR values (I² = 90%, OR 3.78, 95% CI 2.05-6.98) (Figure 3C).
Publication Bias, Sensitivity Analysis and Risk Analysis
Through linear regression and funnel plots, we found that studies on HDP (P = 0.4639) and PE (P = 0.5380) had no publication bias (Figure 4). Figure 5A shows that when omitting one of these studies (22), the sensitivity analysis of the HDP group showed an OR of 4.10 (95% CI 2.49-6.74), which was nearly the same outcome as the total pooled effect (OR 3.61, 95% CI 2.18-6.00). Similarly, when omitting other studies, women with HDP were at higher risk for developing chronic hypertension than healthy controls. Sensitivity analysis of the PE group showed similar results after omitting other studies, and women with PE were at higher risk of developing chronic hypertension than those in the healthy control group (Figure 5B).
The quality assessment and risk of bias analysis of each included study are shown in Table 2.
Meta-Regression Analysis
In the total pooled effect, the heterogeneity of the HDP group was I² = 96%, and the heterogeneity of the PE group was I² = 97%. Thus, we conducted meta-regression analysis based on the publication year, NOS score, country, sample size and study design. The results confirmed that the publication year and study design had a significant effect on the heterogeneity in the HDP group (P = 0.03 for publication year, P = 0.003 for study design). Other factors showed no significant effect on the heterogeneity in the HDP group. The publication year and study design may be the sources of heterogeneity for the experimental results. None of the factors showed a significant effect on the heterogeneity in the PE group (Table 3).
Subgroup Analysis
We conducted subgroup analyses based on the year of publication (< 2016, ≥ 2016), study design, region (North America, South America, Europe, etc.), sample size (< 500, ≥ 500) and NOS score (< 7, ≥ 7) to further evaluate the correlations between HDP, GH, and PE and the risk of chronic hypertension. The subgroup analyses showed some inconsistencies; some of them seemed reasonable, while others did not.
An overall OR value of 5.75 (95% CI 3.92-8.44; I² = 49%) was found for the risk of developing postpartum hypertension among women with a history of HDP. According to the subgroup analysis, the risk of chronic hypertension in patients with HDP was increased on every continent, but there were differences among the continents (P = 0.03). The increase in risk was lowest in North and South America (OR 2.11, 95% CI 1.42-3.14) and highest in Europe (OR 5.52, 95% CI 3.01-10.14), while the risk in Asia was similar to the overall assessment (OR 4.26, 95% CI 1.05-17.21) (Figure 6). According to the analysis by publication year, the increase in the risk of developing chronic hypertension among patients with HDP was significantly lower in studies published before 2016 than in studies published in 2016 or later (P = 0.02; before 2016: OR 1.78, 95% CI 1.04-3.04; 2016 or later: OR 4.33, 95% CI 2.62-7.16) (Figure 7). Grouped by study design, the OR value of the case-control group was 2.47 (95% CI 1.47-4.13), that of the cohort group was 5.19 (95% CI 2.99-9.01), and that of the cross-sectional group was 1.21 (95% CI 0.90-1.64) (Figure 8). Grouped by NOS score and sample size, the increased risk of developing chronic hypertension among HDP patients was similar to that of the overall evaluation (NOS ≥ 7: OR 3.68, 95% CI 2.03-6.66; NOS < 7: OR 3.21, 95% CI 1.19-8.66; sample size > 500: OR 3.21, 95% CI 1.62-6.35; sample size ≤ 500: OR 4.26, 95% CI 1.94-9.33) (Figure 9).
The overall OR for PE was 3.19 (95% CI 1.52-6.70; I² = 97%), and women with a history of PE had a greater risk of developing postpartum hypertension than women without PE. The increased risks in the Americas and Europe were similar to the overall risk (Americas: OR 3.32, 95% CI 1.26-8.74; Europe: OR 2.19, 95% CI 0.3-16.02), while the risk of developing chronic hypertension in Asia was significantly increased (OR 7.54, 95% CI 2.49-22.81) (Figure 10A). According to the analysis by publication year, the increase in the risk of developing chronic hypertension among patients with PE was significantly lower in studies published before 2016 than in studies published in 2016 or later (before 2016: OR 1.54, 95% CI 0.28-8.44; 2016 or later: OR 5.53, 95% CI 3.21-9.53) (Figure 10B). Grouped by study design, the OR value of the case-control group was 2.68 (95% CI 0.45-15.86) and that of the cohort group was 2.70 (95% CI 1.22-11.22) (Figure 10C). The OR value of the NOS score ≥ 7 group was 2.15 (95% CI 0.7-6.64), and that of the other group was 6.88 (95% CI 6.07-7.80) (Figure 11A). Grouped by sample size, the increase in the risk of developing chronic hypertension among PE patients was similar to that of the overall evaluation (sample size < 500: OR 4.05, 95% CI 1.12-14.69; sample size ≥ 500: OR 2.69, 95% CI 0.97-7.45) (Figure 11B).
Principal Findings
Our systematic review and meta-analysis comprehensively explored the associations of HDP, GH, and PE with chronic hypertension. We included 21 articles with a total of 634,293 patients. The results of this systematic review and meta-analysis suggested that women with a history of HDP are almost 3.6 times more likely to develop chronic hypertension than those without a history of HDP, women with a history of GH are almost 6.2 times more likely to develop chronic hypertension than those without a history of GH, and women with a history of PE are almost 3.2 times more likely to develop chronic hypertension than those without a history of PE. In addition, we further calculated the probability of developing chronic hypertension among patients with HDP or PE after adjusting for age and BMI at recruitment, prepregnancy BMI, age at first delivery and other factors. The results suggested that women with a history of HDP were almost 2.47 times more likely to develop chronic hypertension than those without a history of HDP and that women with a history of PE were almost 3.78 times more likely to develop chronic hypertension than those without a history of PE (Figure 12). The above results show that women with HDP are more likely to develop chronic hypertension, and that the increase in risk is even larger for those with a history of GH than for those with PE. Therefore, patients with HDP should monitor their blood pressure more actively in the future and choose a healthy lifestyle, such as a low-salt and low-fat diet, to reduce the possibility of hypertension. One meta-analysis showed that subclinical hypothyroidism during pregnancy is associated with an increased risk of developing HDP, and this association is present regardless of the gestational period (42). Some studies have shown that BMI or maternal prepregnancy obesity and abnormal gestational glucose metabolism are independently associated with an increased risk of HDP. Controlling these factors may reduce the occurrence of HDP (43,44). Preventing or reducing the occurrence of HDP in pregnant women will inevitably reduce the probability of developing hypertension in the future. In terms of countries, women in Asian countries are more likely to develop chronic hypertension after HDP or PE, while the relative risk in the Americas is not as high. This may be related to race, the level of medical care and economic conditions. We look forward to future research.
Comparison With Other Studies
Our systematic review illustrates the risk of developing chronic hypertension among pregnant women with HDP, GH and PE. Although the evidence linking pregnancy-induced hypertension with the development of hypertension has been recognized, there are still many outstanding problems in a number of specific aspects (45).
In 2007, a systematic review and meta-analysis showed that preeclampsia patients had more than three times the risk of developing hypertension (OR 3.70, 95% CI 2.70-5.05) than those without preeclampsia; the follow-up time was adjusted to 14.1 years (46). Subsequent studies did not adjust the followup years. A systematic review and meta-analysis in 2013 showed that women with a history of preeclampsia or eclampsia had more than three times the risk of developing hypertension (RR 3.13, 95% CI 2.51, 3.89) (14) than those without a history of preeclampsia or eclampsia. In 2016, Mayri Sagady Leslie reviewed 48 unique studies from 20 countries that included a total of 3,598,601 women, and found similar results (47). This outcome was consistent with ours. In 2018, L Brouwers' team found that recurrent preeclampsia was consistently associated with an increased pooled risk ratio for hypertension (RR 2.3; 95% CI 1.9-2.9) (48). The above articles all studied the relationship between preeclampsia and chronic hypertension, and few meta-analyses have directly studied the relationship between HDP or GH and chronic hypertension.
The advantage of our study is that a large number of articles were selected, and the sample size was large. We not only studied the possibility of HDP leading to chronic hypertension but also accounted for the relevant data on various types of HDP and finally chose to analyze the large amount of relevant data for PE and GH. We also performed subgroup analysis (publication year, study design, country, sample size and NOS score) to analyze the sources of heterogeneity and the probability of developing chronic hypertension in each subgroup. In addition, we further calculated the probability of developing chronic hypertension for patients with HDP or PE after adjusting for age and BMI at recruitment, prepregnancy BMI, age at first delivery and other factors. In general, we carried out statistical analysis on all aspects of the obtained data that could be analyzed.
However, this study still has some limitations, which call for further research. Few of the included studies achieved high quality scores. The ages of patients with HDP and chronic hypertension were not statistically analyzed because these data were seriously lacking, which may be a reason for the high heterogeneity. The published literature is insufficient to determine the best screening period for the postpartum detection of hypertension, and we could not set an observation age or follow-up period to limit the screening of the articles. The heterogeneity of the populations and hypertension definitions and the failure to obtain sufficient details may make the results of the meta-analysis misleading, and they could not be adjusted for using statistical tests.
CONCLUSION
HDP, GH, and PE increase the likelihood that patients will develop chronic hypertension. After adjustment for age and BMI at recruitment, prepregnancy BMI, age at first delivery and other factors, patients with HDP or PE were still more likely to develop chronic hypertension. HDP, GH, and PE may be risk factors for chronic hypertension, independent of other risk factors.
|
2022-07-07T13:23:59.853Z
|
2022-07-07T00:00:00.000
|
{
"year": 2022,
"sha1": "70312dc38716fdc40a37b882119620de48cc0d7b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "70312dc38716fdc40a37b882119620de48cc0d7b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
85441330
|
pes2o/s2orc
|
v3-fos-license
|
Inequalities and Child Protection System Contact in Aotearoa New Zealand : Developing a Conceptual Framework and Research Agenda
There is a growing movement to integrate conceptual tools from the health inequalities field into research that examines the relationship between inequalities and chances of child protection system contact. This article outlines the key concepts of an inequalities perspective, and discusses how these apply to inequalities in child protection in the Aotearoa New Zealand context. Drawing on existing research, this article shows that while there is evidence of links between deprivation, ethnicity, location and system contact, a more systematic research agenda shaped by an inequalities perspective would contribute to understanding more fully the social determinants of contact with the child protection system. An inequalities perspective provides balance to the current ‘social investment’ policy approach that targets individuals and families for service provision, with little attention to how structural inequalities impact on system contact. Directions for research are discussed, with some specific questions suggested. These include questions relating to the relationships between social inequalities and various decision points in the child protection system; if a social gradient exists and how steep it is; the inter-relationship between ethnicity, deprivation and patterns of system contact; and how similarly deprived children in different locations compare with each other in relation to child protection system contact, that is, is there an ‘inverse intervention law’ operating?
Introduction
Child protection system interventions are increasingly prevalent in many countries, but this prevalence is not evenly shared across the population. While it is clear that inequalities relating to deprivation, ethnicity and others influence contact with child protection services generally, understanding their complexities in specific national contexts is important so that research and policy strategies are developed in a manner responsive to specific environments. This article draws on international and national research to explore the following key questions: What is an inequalities perspective, and what can it add as a conceptual framework to understanding the chances of child protection system contact in Aotearoa New Zealand? What existing evidence is there about inequalities relating to deprivation, ethnicity and other axes of inequality, and contact with child protection systems? What gaps are there in our knowledge that require research attention?
The population in contact with child protection systems is a diverse and growing one. As definitions and types of abuse have expanded, and expectations of the state to protect children have risen, the rates of children notified to child protection services in Anglophone countries have generally increased (Gilbert 2012; Melton 2005). In some instances, this has led to heightened strain on child protection agencies, multiple re-referrals, and more children in foster care, driving many countries to re-think their child welfare systems (Bilson and Martin 2016; Gilbert et al. 2011; Parton 2010a, 2010b; Spratt 2008).
When comparing Aotearoa New Zealand to broad international trends, similarities and differences emerge. Overall, high rates of children have some kind of child protection system contact. For a 1990-1991 birth cohort, it was found that 15% of all children had been notified at some point in their childhood, with 7% of children having a substantiated finding of abuse. Later cohorts are estimated to have even higher rates, with 20% of a 1993 cohort being notified (Templeton et al. 2016). The latest published research reports the rates for a cohort born in 1998 and followed until the end of 2015: the authors found that 23.5% of children had at least one notification, and 9.7% had a substantiated finding (Rouland and Vaithianathan 2018). Ethnic inequities are marked, with 28% of Māori children, 12% of European, 18% of Pacific and 4% of Asian children having some form of contact with the child protection system (Templeton et al. 2016). This pattern of system contact generally shows an increase over time in notifications and substantiations.
How does this rate compare internationally? Bilson and Martin (2016) found that in the UK, 22.5% of children were notified in a 2010 cohort and 7.8% substantiated, but this was only by age 5, suggesting the overall childhood rate may be somewhat higher than in Aotearoa New Zealand. A Western Australian study of a 1990-1991 cohort found that 13% of all children were reported before reaching the age of eighteen, with only 3% substantiated. Bilson and Martin (2016) note the increasing proportions of children investigated or notified but not substantiated. This has occurred in Aotearoa New Zealand, with the proportion of notifications not substantiated increasing slightly from 53% of the 1990-1991 cohort to 59% of the 1998 cohort. Bilson and Martin (2016) point out that while child protection systems are an important element of state responses to children's needs, they focus intensely on forensic investigations with few responses to address the pressing social needs that are the antecedents to child protection system contact. They observe that there is often less emphasis on prevention, resulting in more children placed in out of home care. In light of this, they argue for "...a change from the current emphasis on individualised and investigative approaches to child protection in order to provide an effective and humane response to children, the majority of whom live in families affected by high levels of deprivation and poverty" (Bilson and Martin 2016, p. 793). Bywaters et al. (2016a) show that deprivation has a marked correlation with contact with the child protection system in England, occurring along a social gradient where those more deprived had higher rates of contact than less deprived children. While this basic correlation is unsurprising, there has been little recent research into either the extent or the underlying causes of child protection inequalities, and "a reluctance to describe differences as inequalities or to propose action on the underlying social determinants" (Bywaters 2013, p. 6). The development of an inequalities perspective in relation to these important questions, as is well established in health, opens up a range of research and policy directions important to promote equity and effectiveness in relation to child protection in Aotearoa New Zealand. An inequalities perspective draws focus to the social determinants of system contact, and can assist with balanced policy responses that address these determinants in addition to family or individual factors.
Key Features of the Inequalities Perspective
The inequalities perspective prominent in health research and policy includes several key features (Bywaters et al. 2009). It draws attention to the relationship between socio-economic circumstances and particular features of people's daily lives - for example, in patterns of income and wealth, employment, health or education or in child welfare and child protection - but it does so not by focusing only on those at the bottom of the spectrum of advantage but by looking across the population. This raises such questions as how is income, wealth, health, education and good childhood development distributed across society? What are the patterns associated with middle or higher incomes as well as the implications of living in poverty? This draws attention away from focusing only on the reasons why some people do badly to question also why others do well. This balance of emphasis should remove the problem of 'othering' from the discussion of social phenomena that can develop when socioeconomic differences are presented as dichotomous categories.
In the child welfare field, this opens up a series of issues. First, this leads to a focus on child development and opportunities across families in all social circumstances, not only on those living in poverty. Some assume that contact with child protection services is reserved for a very small minority of children and that most of those are in very disadvantaged circumstances, but recent English research has suggested that at least 40% of all children subject to child protection plans will be living in neighbourhoods outside the most disadvantaged 20% in the country (Bywaters et al. 2015).
Second, an inequalities perspective aims to establish the social gradient of the phenomenon under examination (Roberts 2012). The social gradient is a measure of how much difference an increase in social advantage or disadvantage makes to the chosen outcome. The magnitude of the social gradient bears on the allocation of resources, but it should also raise questions about the interaction of family disadvantage and other factors - universal and targeted service patterns and priorities, parental skills and attitudes or education, or the availability of informal sources of support at different points in the economic spectrum or in different ethnic communities, for example. This opens up discussion about the relative difference between people across the whole spectrum, not only on those at the extremes, and links this to the supply and demand of services. In this way, it should question not only what percentage of people receiving child protection interventions are in poverty, but how great is the difference in the percentages between those who are most deprived and those at other points on the deprivation spectrum? Given that abuse and neglect are found in families with very different social circumstances, what should be the proportionate allocation of resources and services between different parts of society?
Third, a related question to those concerning the social gradient is whether there is an interdependent relationship between outcomes for different groups: do some people do worse as a consequence of others doing better? For example, if child development is measured in terms of middle-class family values, does this pre-dispose working class families to more negative consequences of involvement with social work services (Bradt et al. 2015)?
A fourth key concept in an inequalities perspective is the concept of intersectionality: the interaction of different dimensions of structured social relations, for example, deprivation with ethnicity, gender, disability, age or sexual orientation (Nadan et al. 2015). The question here is how these varied aspects of people's identities interact with socio-economic inequalities to contribute to unequal outcomes. In child protection, child gender differences in child outcomes appear to be surprisingly small by comparison with differences between ethnic groups, or between disabled and non-disabled children.
A fifth concept relates to the explanatory theories we use to understand inequalities in rates of system contact. Child welfare inequalities research describes differences in rates, but also attempts to explain differences in rates by examining the complex interplay between increased actual exposure to risk factors (and therefore incidence) for some groups, bias within the systems that respond to them, and the structural factors that shape supply and demand of services (Boyd 2014; Bywaters et al. 2015; Cram et al. 2015; Drake et al. 2011). The supply of services - the availability, appropriateness and accessibility of services, both formal statutory child protection services and prevention services - also influences variations in intervention rates and their outcomes (National Audit Office 2016; Fluke et al. 2010). Services may contribute to exacerbating inequalities when they are not designed or applied in proportion to the level of need in different geographical areas, or different social groups. For example, studies in England have found evidence of an 'inverse intervention law' (Bywaters et al. 2015), that is, that local authorities that have high average deprivation also have higher rates of children on child protection plans. But when similarly advantaged or disadvantaged neighbourhoods (small areas within local authorities) are compared between local authorities, the low deprivation local authorities have been found to have much higher rates of intervention than local authorities with high deprivation overall.
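To make the 'inverse intervention law' concrete, the short sketch below computes intervention rates per 10,000 children by neighbourhood deprivation quintile for two hypothetical local authorities; all counts are invented for illustration and do not come from the studies cited above. Within each authority the rates rise with deprivation (the social gradient), while a like-for-like comparison of quintiles shows the low-deprivation authority intervening at higher rates in similarly deprived neighbourhoods, which is the pattern described by Bywaters et al. (2015).

```python
# Hypothetical counts of children on protection plans and child populations,
# by neighbourhood deprivation quintile (1 = least deprived, 5 = most deprived).
local_authorities = {
    "low-deprivation LA": {
        "plans":      {1: 12, 2: 25, 3: 48, 4: 90, 5: 160},
        "population": {1: 60000, 2: 50000, 3: 40000, 4: 25000, 5: 10000},
    },
    "high-deprivation LA": {
        "plans":      {1: 1, 2: 8, 3: 30, 4: 80, 5: 260},
        "population": {1: 10000, 2: 20000, 3: 35000, 4: 50000, 5: 70000},
    },
}

for name, data in local_authorities.items():
    print(name)
    for quintile in range(1, 6):
        rate = 10000 * data["plans"][quintile] / data["population"][quintile]
        print(f"  quintile {quintile}: {rate:.1f} children per 10,000")
```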
Finally, there is the question of whether the degree of income inequality in a population is an additional contributory factor over and above the dimensions of socio-economic circumstances and ethnicity already mentioned. Does a society with high income inequality produce worse (or better, some might argue) outcomes for people at different places on the social spectrum over and above just the consequences of their circumstances? (Wilkinson and Pickett 2009). For example, if a society is more unequal, is there the potential for greater shame, guilt or distress at any given level of disadvantage, further reinforcing the negative effects of a disadvantaged position, than if there is a sense that (almost) everyone faces not too dissimilar odds? There is some, albeit limited, evidence in the child protection field that the degree of inequality in a society creates such additional strains on family life (Eckenrode et al. 2014; Peacock et al. 2014).
A nuanced conceptual framework needs to be developed to explore the interaction of social, economic and environmental inequalities in family resources with patterns of policy and service priorities, resources and practices to produce outcomes. Bywaters et al. (2016d) note that an important overarching question is: do services reflect, reinforce or reduce inequalities?
Child Welfare Research Using an Inequalities Framework
As mentioned above, recent research has drawn on this perspective to examine the complex relationships between social, economic and environmental inequalities and child welfare outcomes for children. This research direction seeks to understand how inequalities affect the chances of children's contact with this system, their experiences once they are in it, and the outcomes of that contact (Bywaters et al. 2015, 2016a, 2016b, 2016c, 2018). Elsewhere, particularly in the US, many studies have examined the intersecting influences of poverty, income and race on contact with the child protection system (Cancian et al. 2013; Conrad-Hiebner and Paschall 2017; Detlaff et al. 2011; Font et al. 2012; Pelton 2015; Raissian and Bullinger 2016; Slack et al. 2017). While these US studies do not explicitly state an inequalities perspective, they nevertheless highlight the ways that contact with the child protection system is shaped by socio-structural factors, adding to our understanding of the causes of system contact and its outcomes. For example, Pelton (2015) examines the interplay of deprivation and ethnicity. He concludes that findings assessing the importance of racial bias as an explanatory factor are mixed "... but leave no doubt that racial disproportionalities within the system are overwhelmingly related to racial disproportionalities in the poverty population. There is continuing evidence that children placed in foster care are predominantly from impoverished families, and that changes in the level of material supports are related to risk of placement" (p. 30).
Others also draw attention to the role of neighbourhood differences in relation to inequalities. They explicate nuanced evidence showing how neighbourhood factors, such as social cohesion, ethnic diversity, transience and adult-to-child ratios, interact with poverty to shape child welfare outcomes (Coulton et al. 2007; Maguire-Jack and Font 2017; Molnar et al. 2016; Shuey and Leventhal 2017). For example, Shuey and Leventhal (2017), using multilevel path models, found that wealthier neighborhoods were "indirectly associated with mothers' lower reports of physical aggression with their children via more neighborhood services for children" (p. 52). Klein and Merritt (2014) found that the risk of referral to child protection services increased for Black, White and Hispanic US children if they lived in multicultural, as opposed to ethnically homogenous, neighborhoods. Molnar et al. (2016) showed that neighborhood social processes such as intergenerational closure, collective efficacy and social networks were correlated with lower rates of all types of abuse substantiations.
Building on these descriptive studies, researchers have attempted to theorise why inequalities have a relationship with child welfare services interventions. Proposed explanations, as mentioned above, include: the increased risk of exposure to poverty as a life stressor increasing actual incidence (both for poorer people overall, and for people from ethnic minorities overrepresented in this group); the impact of other services available to less deprived people outside of the child welfare system; differences in demand and supply of child welfare services; the heightened surveillance more deprived people are exposed to; and the role of bias, of both referrers and decision-makers, within the systems that respond to them (Boyd 2014; Detlaff 2014; Drake et al. 2009; Johnson-Reid et al. 2009; Wells et al. 2009; Widom et al. 2015). Understanding how inequalities interact with decision-making is important. For example, Morris et al. (2018) are researching how site-specific decision-making processes at wealthier and more deprived sites interact with deprivation to affect the chance of children having child welfare system contact. Examining how social workers perceive and respond to poverty in the context of family life is an important aspect of the study. Stokes and Schmidt (2011) found that while neither race nor poverty directly affected decision-makers, other indicators of deprivation such as substandard housing and substance abuse did affect decision-reasoning, suggesting that "The increasingly technocratic discourse in child protection blames individual parents and holds them responsible for not protecting their child from vulnerability, regardless of any historical and structural impediments they may face in attaining adequate resources" (p. 1105). One direction for inequalities research is to draw attention to the interplay between macro contexts, policies, discourses that frame the causes and consequences of child abuse, and decision-making practices.
The Aotearoa New Zealand Context
The macro conditions operating in Aotearoa New Zealand point to a concerning picture, one in which it is pressing to consider the impact of a range of inequalities on the chances of child protection contact and its outcomes. Current policy directions use administrative data to target individuals for service receipt within a 'social investment' approach aimed at reducing future cost to the state; however, the broader social context that contributes to social problems is largely invisible in this policy plan (O'Brien 2016; Keddell 2017). This section outlines the broad macro factors contributing to structural inequalities. It examines current research into variations in child protection system contact that may be related to inequalities, and what is already known about the risk, bias and spatial processes that influence children's chances of being in contact with the child protection system.
Aotearoa New Zealand has high levels of child poverty, with 28% of children living below the 60%-of-median-income (after housing costs) relative poverty line in 2015, up from 24% three years earlier (Simpson et al. 2016). This rate is not evenly distributed by ethnicity, with 33% of Māori (indigenous), 28% of Pasifika (Pacific) and 16% of Pākeha (European ancestry) children living in households in income poverty. Of children living in households in income poverty, 46% are Māori or Pacific (Perry 2015). Of all children, 14% are in material hardship, that is, going without the things most New Zealanders consider essential (Simpson et al. 2015). Auckland, the largest city in Aotearoa New Zealand, has the most unaffordable housing in the world based on the ratio of house prices to household income, with the average house price equal to ten times the average household income (Collins 2014). The result is high rates of homelessness and fragile housing situations. Differences in rates of childhood illness and educational success between different levels of deprivation are marked; for example, Aotearoa New Zealand has a rate of bronchiectasis nine times that of Finland, and rates of rheumatic fever generally not seen in 'developed' countries (Dale et al. 2014).
Inequalities in Contact with the Child Protection System in Aotearoa New Zealand
In this context of poor social conditions, and in the context of extensive reforms of the child protection system, what is the relationship between inequalities, particularly those relating to deprivation and ethnicity, and contact with the child protection system (Expert Panel 2015)? Aotearoa New Zealand has a depth of research in the health inequalities area, yet translating this into examining child protection has so far been limited (Dew and Matheson 2009; Dew et al. 2016; Woodward and Blakely 2016). This section outlines patterns of system contact overall, before examining some of the research already undertaken in relation to deprivation, ethnicity and child protection system contact. Patterns of contact with child protection services in Aotearoa New Zealand have generally increased over the last twenty years. As mentioned above, a recent study of a birth cohort of children born in Aotearoa New Zealand in 1998 found that 23.5% of those children had some contact with child protection services before age 18, and 9.7% had a substantiated finding of abuse (Rouland and Vaithianathan 2018). Other research makes it apparent that many children have multiple notifications. Of the 28,079 children engaged with the statutory agency in 2016, 70% had been previously notified, on average six times (Crichton et al. 2016). While notifications rose between the 1990 and 1998 cohort studies, there is evidence of a decline in very recent years: raw numbers of notifications, substantiations, child and family assessments and further investigations all reduced by between ten and nineteen percent from 2016 to 2017, with a shallower decline across all these decision points since 2012 (Ministry of Social Development 2018). Children having Family Group Conferences (FGCs) and children in foster care, on the other hand, have increased, the first by 4% and the second by 8% between 2016 and 2017 (Ministry of Social Development 2018). The reduction at earlier points of system contact may be shaped by the reforms mentioned above. These reforms (the Vulnerable Children's reform and the Modernising Child Youth and Family reform) aim to 'head off' children before they enter the child protection system, via mechanisms such as children's teams (professional teams outside of the statutory service), or changes in the decision-making tools available at entry (Sturmfels, pers. comm., 2016). However, there has been little increase in the funding of preventive services: contracted non-governmental organisations have had no cost-of-living index increase to their contracts since 2008. This, combined with the lack of direct services available via the Children's Teams, may be raising the threshold for entry to child protection services, while children who are over that threshold face an increased likelihood of entry to care and of remaining in care longer. This conclusion is also suggested by the fact that the increase in children in care reflects a stable rate of entry to care but fewer children leaving care (Ministry of Social Development 2018). The increase in FGCs and placements may be the first effects of the recent changes that aim to move children more quickly into permanent care arrangements once they have system contact (Expert Panel 2015). These patterns suggest that policies, resources and decision-making practices may interact with inequalities, operating together to shape both the reasons families are notified and decision-making pathways post-notification (Putnam-Hornstein et al. 2013; Slack et al. 2017). More research, however, is needed to properly understand the dynamic processes shaping system interactions in Aotearoa New Zealand, as these conclusions are at best tentative.
There are considerable spatial, temporal, ethnic and placement-type variations in child protection system contact within Aotearoa New Zealand, also suggesting a relationship between system contact and inequalities. For example, as seen in Table 1, there are differences in child protection substantiations relative to notifications in different regions of the country and over time. These ranged from 17% in Canterbury to 38% in Counties Manukau in 2013. In 2017, the variation was from 10% in Canterbury to 21% in Bay of Plenty and Waitemata. Different site offices show even more variation, with the proportion of notifications that are substantiated ranging from 16% in Timaru (a town in the southern region) to 54% in Taumarunui (a town in the central region) in 2015. There are also variations in rates of substantiated findings as a proportion of the total child population by site office, from 5 per 1000 children in Alexandra to 62 per 1000 children in Taumarunui in 2015 (Ministry of Social Development 2016). What causes such marked differences in substantiation rates? They may be related to differing levels of exposure to risks such as poverty, surveillance or system bias, site-specific differences in processing cases through the system, differences in the balance between demand and supply of services, or a combination of all these elements (McLaughlin and Jonson-Reid 2017; Kim et al. 2018). Examining these differences from an inequalities perspective helps ascertain the structural contributors to these patterns. Differences in relation to placement type and ethnicity of children in the care of the chief executive of the statutory child protection agency also hint at inequalities. The biggest growth in placement type is for family and whānau (extended family) placements, up from 1698 in 2013 to 2515 in 2017. Does this reflect a growing preference for kin-based care, or a lack of non-kin foster carer availability? There is also a growing disproportionate percentage of Māori children in care, up from 55% of children in care in 2013 to 62% in 2017, despite Māori being 25% of the child population (Ministry of Social Development 2018). The proportion of Pākeha (European ancestry) children in care over the same period dropped from 33% in 2013 to 27% in 2017, while the proportions of Pacific children (8% to 7%), Asian children (1.4% to 1.6%) and children with multiple ethnicities (2% to 1.4%) all remained steady (Ministry of Social Development 2018). A further question is whether the increased percentage of children in contact with child protection services who are Māori is related to increasing exposure to poverty, an increase in implicit bias in the systems that respond (including surveillance bias), or the lack of culturally appropriate prevention services. It may also be heightened by the practice of prioritising ethnicity data, so that the growing multiple-ethnicity child population is categorised as Māori only, depressing counts of children from other ethnic groups (Cram et al. 2015; Cormack and Robson 2010). All of these questions would benefit from exploration from an inequalities perspective, as this assists with understanding the systemic factors contributing to system contact. Bywaters et al. (2016d) argue that: "The differences in rates between Local Authorities (council areas) and between neighbourhoods are not a postcode lottery nor are they simply the result of random differences in LA policies and practice; they are markers of social inequalities" (p. 7). Determining whether the differences in rates of intervention have a relationship with deprivation or other types of inequality, in combination with site-related factors such as supply and demand of services or differences in decision-making, is a key area requiring investigation in Aotearoa New Zealand. While we have as yet limited evidence of the nuances of this relationship explicitly framed within an inequalities perspective, some research provides useful windows into possible links. For example, the recent Expert Panel report into Child Youth and Family (the statutory child protection service) shows that 88% of those seen by child protection services by age 5 had at least one parent in receipt of a welfare benefit, compared to 30% of those who had no contact with child protection services, while 46% of those with contact with child protection services lived in a high-deprivation area, compared to 26% of those with no child protection notifications (Expert Panel 2015). In terms of the possible influence of site office, predictive modelling suggests that site office may have a substantial influence on decision outcomes.
Understanding Patterns of System Contact: The Intersectionality of Deprivation, Ethnicity and Location
A study by Wilson et al. (2015) reported that site office was the fourth most predictive variable of thirteen variables "when the effect of other variables was controlled" (p. 510). This suggests that over and above the other variables (many of which could be considered proxies for poverty; the three most predictive were previous contact with child protection services, length of time on a benefit, and having a parent with child protection system contact), site office remains a strong predictor. This suggests that deprivation and other macro inequalities interact with site factors to influence outcomes.
Other work also suggests more nuanced relationships between poverty, contact with the child protection system, and poor adult outcomes (Ball et al. 2016; Crichton et al. 2016; Templeton et al. 2016). Crichton et al. (2015) found that three risk factors had particularly strong correlations with poor social outcomes (referral to youth justice, lack of NCEA level 2, the National Certificate of Educational Achievement, a key high school educational qualification, and receipt of a benefit as an adult). The three risk factors were: "the proportion of time the child had been supported by benefits since birth; having a parent/caregiver with a corrections history (including both community and custodial sentences); being notified to Child Youth and Family (child protection services)" (pp. 32-33, brackets mine). These all suggest a strong relationship between high deprivation and contact with child protection services (O'Brien 2016; Keddell 2017). These studies give some insight into the connections between deprivation and contact with the child protection system, but do not explicitly frame these as markers of inequalities and therefore as a justice issue.
A persistent inequality in the Aotearoa New Zealand child protection domain, as mentioned above, is that related to ethnicity. In the last five years, Aotearoa New Zealand has seen continued Māori overrepresentation in the child protection and foster care population. Understanding the rates of Māori child protection system contact across deprivation levels compared to other ethnic groups is important, as this would help clarify the relationship between the two axes of inequality (meeting the 'intersectional' criterion of inequalities research described above). One study has examined this intersection. Cram et al. (2015) found that amongst Māori who had spent at least four of the last five years on a welfare benefit (as a proxy for poverty), the rate of substantiated child abuse findings was 156.38/1000 births, and the infant mortality rate was 6.17/1000. For Māori who had spent no time in the last five years in receipt of a benefit, the rate of substantiation was 8.73/1000 births, and infant mortality 1.7/1000. For non-Māori, non-Pacific children, the corresponding rates were 119.06 and 3.52 for the benefit group, and 3.68 and 0.91 for the no-benefit group. This shows marked differences that relate not only to ethnicity but to the combination of ethnicity and deprivation.
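As a small illustration only (my own arithmetic using the substantiation rates quoted above, not a calculation reported by Cram et al.), these figures can be re-expressed as rate ratios to show how strongly benefit receipt and ethnicity combine:

```python
# Illustrative re-expression of the substantiation rates quoted above as rate ratios.
maori_benefit, maori_no_benefit = 156.38, 8.73   # substantiations per 1,000 births
other_benefit, other_no_benefit = 119.06, 3.68   # non-Maori, non-Pacific comparison group

print(round(maori_benefit / maori_no_benefit, 1))    # ~17.9: within Maori, by benefit receipt
print(round(other_benefit / other_no_benefit, 1))    # ~32.4: within non-Maori/non-Pacific, by benefit receipt
print(round(maori_benefit / other_benefit, 2))       # ~1.31: Maori vs non-Maori within the benefit group
print(round(maori_no_benefit / other_no_benefit, 2)) # ~2.37: Maori vs non-Maori within the no-benefit group
```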
International research has explored the relationships between ethnicity and different levels of deprivation. Depending on context, some studies conclude that while children from indigenous and ethnic minority groups are overrepresented, when deprivation is taken into account the differences between ethnic groups can disappear or even reverse. For example, Bywaters et al. (2016b) found that, among children in the poorest decile in the UK, Black children had a lower rate of contact than White children, and that "mixed heritage" children had a higher rate than both. Wulczyn et al. (2013) found that there was a 'placement gap' between African-American and White American children that was reduced by introducing poverty as a variable, while Drake et al. (2009) found, similarly to Bywaters et al. in the English context, that poor White children had a higher rate of contact with the child protection system than poor African-American children. Others have also concluded that ethnic disparities generally reduce as deprivation increases (Kim et al. 2011).
Here in Aotearoa New Zealand, due to significant and increasing overrepresentation, it is important that research attempts to unpick the interrelationships between poverty, ethnicity and bias that are likely contributors to the disproportionate representation of Māori children. Since the 1980s, many have noted that institutional bias, both explicit and implicit, has resulted in more state intervention for Māori children than for others, and is part of the long history of cultural oppression of Māori (Ministerial Committee 1988; Cram 2012; Reid et al. 2016). It is likely that for Māori particularly, exposure to poverty is not the only factor contributing to increased risk. Other factors identified in the literature include everyday exposure to racism, cultural oppression, negative media representation, and the alienation of material resources through the process of colonisation (Blank et al. 2013; Cram 2012; Hackell 2016). The relationship between neighborhood ethnic density and deprivation may also affect exposure to the stressors associated with increased risk of need for child protection services, as high ethnic density may operate as a protective factor, but this can be offset by exposure to high deprivation (Bécares et al. 2013; Cram et al. 2015). The process of being assigned ethnicity by others may also influence the ability of some Māori to access appropriate support services, as has been found in health service access (Reid et al. 2016).
In Aotearoa New Zealand, one study has examined in some depth the intersection between risk and bias, attempting to discover to what extent the overrepresentation of Māori children is related to heightened exposure to risk factors, or to bias within the child protection system (Cram et al. 2015). In this nuanced study, a range of rates of poor outcomes outside the child protection system were examined for Māori children and compared with the rates inside the system, as a method (Drake et al. 2009) of ascertaining whether the disproportionate rates indicated risk rather than bias. The outcomes examined outside of the child protection system were benefit use, mortality and other poor birth outcomes, accidents and hospitalisation rates. They found similarly poor rates for outcomes outside the child protection system for Māori. They draw tentative conclusions from this, stating that "... focusing on 'poverty and its correlates' when attempting to address the overrepresentation of indigenous children in administratively recorded maltreatment may effect more change than focusing on the attitudes of those who come in contact with children and their families" (p. 8). However, they point out that the similarities between the risk measures they used and child protection outcomes do not necessarily preclude an additional effect of bias within the child protection system (p. 9). They also conclude that the traditional risk/bias split may not adequately account for contextual factors such as the history of colonisation, the provision of culturally appropriate (or not) services and the protective factors embedded in Māori culture (Cram et al. 2015; Drake et al. 2009, 2011). They argue for a more complex understanding of indigenous disproportionality.
Other studies of indigenous children also draw complex conclusions. For example, a study based on national data in Canada found that the overrepresentation of Aboriginal children in the Canadian child welfare system was not adequately explained by child maltreatment type, child functioning, or levels of harm. Instead, overrepresentation at all decision points (investigation, substantiation and removal) was associated with poverty, poor housing and substance abuse, pointing to structural disadvantage as the primary factor, rather than either case factors or bias (Fallon et al. 2013). Drake et al. (2011) found that the disparity in child protection system data matched the data on other poorer outcomes for Black children, particularly neonatal deaths, concluding that although decision-making bias may play a role, a more effective way to reduce racial disproportionality in the child protection system would be to address the known risk factors that affect African-American families in the US, rather than to address racial bias within the system. On the other hand, some studies have found that race does increase perceptions of risk, resulting in differences in service outcomes (Ards et al. 2012; Williams and Soydan 2005). Rates of contact for Māori may also be related more directly to bias. For example, the overrepresentation of tamariki Māori (children) increases at each decision point within the child protection system: 40% of children notified are Māori (who are 25% of the child population), but this increases to 60% by the time decisions to remove children into foster care are made (Expert Panel 2015; Statistics New Zealand 2016).
A further possible piece of the puzzle when considering the relationships between deprivation and ethnicity is the influence of culturally appropriate preventive and support services. For example, Fluke et al. (2010) investigated the influence of organisational factors on the rates of decisions to remove Aboriginal children in the Canadian Incidence Study. While Aboriginal status and structural factors such as poverty and housing were influential, they found that the only organisational factor that affected this outcome was the relative proportion of Aboriginal children notified to particular site offices. They conclude that the provision of culturally appropriate services outside the formal child protection system affects rates of Aboriginal children entering the child protection system: without those services, when faced with high numbers of Aboriginal referrals, the child protection system may have little choice but to intervene. These are all important issues affecting the patterns of inequalities for Māori in the child protection system. The issues for children from ethnic groups other than Māori and Pākeha may also have aspects that could be examined from an inequalities perspective. Disproportionate rates of notification for Pacific children, for example, nearly disappear by the removal decision point, despite high levels of community deprivation (Ministry of Social Development 2018).
Building a Research Agenda
Current research and policy directions in Aotearoa New Zealand have drawn increasing attention to persistent risk factors across the population for poor outcomes in education and criminal justice, and link this to a 'social investment' policy agenda (Ball et al. 2016; Crichton et al. 2015). This has a strong focus on outcomes, explained by individual risk factors and the cumulative nature of those risk factors across the lifespan. An inequalities perspective builds on this base, using a different lens with which to analyse data, motivated not only by future economic considerations but by a concern with human rights and social justice. In other words, the argument for greater equality in child protection is not just to avoid costs to the state from poor outcomes, but also because it is a moral imperative. The state's obligation to protect and promote the development of children under the Convention on the Rights of the Child is incompatible with accepting very unequal childhood experiences, including experiences of abuse and neglect, receipt of services, or being separated from one's birth parents.
An inequalities research agenda can be separated into three categories: chances, experiences and outcomes. Chances focus on who is subject to interventions or gets access to services and how this relates to inequalities; experiences focus on the experiences of different groups once they are in the system; and outcomes research aims to establish how inequalities affect the outcomes of system contact (Bywaters 2013). Research questions relating to the chances of contact include:
1. What are the relationships between social inequalities and various decision points in the child protection system?
2. Does a social gradient exist and how steep is it?
3. What are the rates of Māori, Pākeha, Pacific and Asian children at each decision point, by level of deprivation?
4. How do the same levels of deprivation in different locations compare with each other in relation to child protection system contact, that is, is there an 'inverse intervention law' operating?
Of particular interest in relation to ethnicity, incorporating the risk-bias literature, is the question of whether levels of deprivation increase as the severity of child protection system contact increases, and to what extent this explains the overrepresentation of Māori children. If so, does this mean that Māori children are presenting at the 'front door' of the child protection system with more serious and complex problems than other children, relating to their over-exposure to deprivation; or is this increasing overrepresentation the result of bias? This has important system design implications. If risk is increased, then more emphasis is needed on addressing poverty and access to services; if the issue is more strongly affected by bias, then patterns of surveillance and direct bias require correcting within child protection systems.
Qualitative approaches are also needed to understand how chances of contact may be shaped by the perceptions and responses of various social actors: policy makers and politicians; managers and service leaders; front-line staff; parents; children; and the wider public. Research must cover their perspectives and experiences. In terms of experiences, the different pathways of children and their families once in the system require investigation, with a view to identifying differential pathways that may be related to deprivation, ethnicity, or some other type of inequality such as disability or location. This type of research would examine the relative roles of risk, bias, and demand and supply to describe and explain differences in experiences. Questions could include to what extent expenditure for different areas, groups and ages of children reflects levels of need, and whether the type of service provision is equally appropriate and accessible to different groups. Again, while some of these research questions should be quantified, qualitative research into the experiences of practitioners and of children and families in the child protection system is also required. Finally, studies of the outcomes of children who have been in the child protection system and their families are needed, with the relationships of various groups to inequalities examined. Is it, for example, that children removed from more middle-class families do better? Or that different types of foster care (which are differentially resourced) affect children's outcomes?
In conclusion, the evidence base in relation to inequalities in the Aotearoa New Zealand context is slowly growing. Understanding the complex interplay between markers of inequality and the contact chances, experiences and outcomes of the child protection system provides a substantial lens for framing future research, one that may assist with informing policies that can address 'upstream' determinants in addition to the downstream effects of child protection system contact.
* Table 1 data from Ministry of Social Development Key Statistics.
Recent Developments in Optimal Placement of Phasor Measurement Units Considering Incomplete Observability
The Independent System Operator (ISO) monitors the SCADA system to continuously check the health of the power system. The SCADA refresh rate is several seconds, which may delay or even miss important information, so that the ISO may not be able to take appropriate action when it is most needed. An emerging technology is the Phasor Measurement Unit (PMU), which uses the Global Positioning System (GPS) to time-stamp measurements across the whole power system network at much faster refresh rates. In this paper the differences between the Phasor Measurement Unit (PMU) and SCADA are presented. Rules and conditions for optimally placing PMUs in a power system are discussed, a review of methods used for PMU placement based on incomplete observability is given, and the spanning-tree-search-based optimal placement method is also discussed. Keywords: Complete and Incomplete Observability, Optimal Placement, Phasor Measurement Unit (PMU), SCADA
Introduction
State estimation plays a very important role in power system control centers. Critical power system security and electricity market decisions depend on the results of state estimators. Present-day state estimation algorithms rely mainly on asynchronous measurements of real and reactive power, voltage magnitudes and load angles at the buses from which the state vector is calculated. These algorithms are based on iterative solutions, the most widely used being the weighted least squares algorithm. Although these state estimation algorithms are capable of handling very large problems, their reliability and accuracy are affected when the power system is under stress 6.
PMUs are able to improve on the current state estimation inputs provided by SCADA. The SCADA system can measure voltage and current waveforms, but it cannot measure the voltage angle at every bus. This can be done using PMU technology. PMUs have been under research since the early 1980s; with the invention of the Phasor Measurement Unit, the voltage angle was directly measured for the first time. A PMU is a digital device that provides synchronized voltage and current measurements, where phasors are vector representations of the magnitude and phase of the voltage and current. To determine phase angles at different sites, all PMU sites are synchronized to a common time pulse obtained from GPS, of the order of 1 µs, so that very precise voltage and current phasors can be measured. The reporting rate for a SCADA system is once every 4 to 6 seconds, meaning only one reading is available for estimation every 4 to 6 seconds, whereas the reporting rate of a PMU is much faster: it depends on the system frequency, with a minimum of 10 readings per second. Figure 1 shows the typical working block diagram of a PMU. As analog inputs, voltage and current signals are provided to an anti-aliasing filter from the measurement instruments (CTs and PTs). The anti-aliasing filter is used to attenuate frequency components above the Nyquist frequency 12.
The phase-locked oscillator (PLL) converts the GPS one-pulse-per-second signal into a sequence of high-speed timing pulses used to sample the waveform. The A/D converter then converts the analog current and voltage signals into digital samples, which are fed to the phasor microprocessor, where a Discrete Fourier Transform (DFT) is performed to calculate the phasors. The computed phasors are sent to a Phasor Data Concentrator (PDC) and then transmitted via modems.
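As a rough illustration of the phasor calculation stage described above, the following minimal sketch (not from the paper; the 50 Hz nominal frequency, 1 kHz sampling rate and signal values are assumptions for the example) estimates a voltage phasor from one cycle of samples with a single-bin DFT:

```python
# Minimal single-bin DFT phasor estimate from one cycle of samples (illustrative values).
import numpy as np

f0, fs = 50.0, 1000.0                          # nominal frequency and sampling rate (assumptions)
N = int(fs / f0)                               # samples in one nominal cycle
t = np.arange(N) / fs
v = 230.0 * np.sqrt(2) * np.cos(2 * np.pi * f0 * t + np.deg2rad(-30.0))   # synthetic waveform

# One-cycle DFT at the nominal frequency gives the phasor (RMS magnitude and phase angle)
X = (np.sqrt(2) / N) * np.sum(v * np.exp(-1j * 2 * np.pi * f0 * t))
print(round(abs(X), 1), round(np.degrees(np.angle(X)), 1))   # ~230.0 V at ~-30.0 degrees
```

In an actual PMU the sampling instants are locked to the GPS time pulse, so phasors computed at different substations share a common angle reference.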
Several application areas can be built on PMUs in order to study the interconnected grid in an active way. Such applications include 14:
1. Dynamic state estimation
2. Power system protection
3. Monitoring the stress on the electric transmission system
4. Identifying the corrective actions (such as damping) needed in case of discrepancies
5. Various stability studies, including angular stability and voltage stability
6. Monitoring system oscillations
7. Wide Area Monitoring Systems (WAMS)
8. Available transmission capacity
9. Power system control
The use of PMUs has increased considerably in recent years, improving the monitoring and control of power networks. Against these proposed PMU applications stand the relatively high cost of phasor measurement units as well as the cost of communication facilities. Hence, to decrease cost, optimal placement of PMUs is necessary, and it is a significant challenge. Conventional optimization techniques have been introduced to solve the Optimal Placement Problem (OPP), such as linear programming (LP), dynamic programming, nonlinear programming (NLP) and combinatorial optimization. To overcome the problems raised by these techniques, such as difficulties in handling constraints, trapping at local optima, or numerical difficulties, newer techniques have been proposed, such as depth first search (DeFS), simulated annealing (SA), tabu search (TS), differential evolution (DE) and particle swarm optimization (PSO) 5.
Formulation of Optimal Placement Problem
A PMU can measure the voltage phasor at the bus where it is installed and the current phasors of all the lines connected to that bus. The following rules can be used for the optimal placement of PMUs 5.
Rule 1:
Assign a voltage measurement to a bus where a PMU is placed, together with a current measurement for each line and branch connected to that bus.
Rule 2:
Assign a voltage pseudo-measurement to each node that can be reached from another node equipped with a PMU.
Rule 3:
Assign a current pseudo-measurement to each branch connecting two buses whose voltages are known. This allows observable zones to be interconnected.
Rule 4:
Assign a current pseudo-measurement to each line and branch whose current can be calculated indirectly by Kirchhoff's current law (KCL).
This last rule can be applied only when the current balance at the particular node is known, for example at a zero-injection bus.
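The rules above can be applied mechanically as a topological observability check. The following minimal sketch is my own simplification (bus numbers, branch list and the zero-injection set are illustrative, not from the reviewed papers) and propagates Rules 1, 2 and 4 over a small network:

```python
# Topological observability check for a given PMU placement (Rules 1, 2 and 4 only).
def observable_buses(branches, pmu_buses, zero_injection=frozenset()):
    adj = {}
    for a, b in branches:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)

    observed = set(pmu_buses)            # Rule 1: voltage measured at every PMU bus
    for p in pmu_buses:                  # Rule 2: neighbours reached via measured branch currents
        observed |= adj.get(p, set())

    changed = True
    while changed:                       # Rule 4: KCL at zero-injection buses
        changed = False
        for z in zero_injection:
            unknown = [n for n in adj.get(z, set()) | {z} if n not in observed]
            if len(unknown) == 1:        # all but one voltage known -> the last one follows from KCL
                observed.add(unknown[0])
                changed = True
    return observed

# Illustrative 5-bus chain with a zero-injection bus at bus 3 and one PMU at bus 2:
branches = [(1, 2), (2, 3), (3, 4), (4, 5)]
print(observable_buses(branches, pmu_buses={2}, zero_injection={3}))  # bus 5 stays unobservable
```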
The main principle of the OPP problem is the right choice of the minimum number of PMUs to be installed (n_p) and of the optimal locations S(n_p) at which to place them so that complete observability can be achieved. The OPP problem can be formulated as

min n_p , max R(n_p, S(n_p))   (1)

subject to O_bs(n_p, S(n_p)) = 1   (2)

where R(n_p, S(n_p)) is the redundancy measurement index and O_bs is the observability evaluation function. The conditions for observability that have to be met when selecting PMU placements are as follows 5. To obtain a direct state measurement, PMUs should be deployed at the system buses; the measurements include synchronized positive-sequence current and voltage measurements 1. All measurements are assumed to contain a zero-mean, normally distributed noise component. The measured vector M can be formulated as

M = [V ; I] + [ε_v ; ε_i]   (3)

where V and I are the vectors of true values of bus voltage and branch current in rectangular form, and ε_v and ε_i are the corresponding error vectors. The errors are assumed to have the covariance matrix

W = E{ε ε^T},  ε = [ε_v ; ε_i]   (4)

If a branch circuit-element representation is assumed, the relationship between I and V can be written as

I = (y A + y_s) V   (5)

where A is the bus incidence matrix for the current measurements, y is a diagonal matrix of the primitive series admittances of the measured branches, and y_s is the primitive matrix of shunt admittances at the measured ends. Substituting equation (5) into (3) gives a linear measurement model M = B V + ε, and the weighted least squares estimate of the state vector V is

V_hat = G^(-1) B^T W^(-1) M   (6)

where G is the gain matrix, given by

G = B^T W^(-1) B   (7)
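As a numerical illustration of equations (3) to (7), the following minimal sketch uses illustrative two-bus values of my own, with real numbers standing in for the complex rectangular quantities used in practice:

```python
# Weighted least squares estimation of bus voltages from PMU measurements (illustrative values).
import numpy as np

B = np.array([[1.0,  0.0],     # direct voltage measurement at bus 1
              [0.0,  1.0],     # direct voltage measurement at bus 2
              [2.0, -2.0]])    # branch current I = y (V1 - V2) with y = 2 (illustrative)
W = np.diag([1e-4, 1e-4, 4e-4])                       # assumed measurement error covariances
x_true = np.array([1.02, 0.98])
z = B @ x_true + np.random.default_rng(0).normal(0, np.sqrt(np.diag(W)))   # measured vector M

Winv = np.linalg.inv(W)
G = B.T @ Winv @ B                                    # gain matrix, equation (7)
x_hat = np.linalg.solve(G, B.T @ Winv @ z)            # weighted least squares estimate, equation (6)
print(x_hat)
```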
PMU Optimal Placement for Incomplete and Complete Observability
All installed PMUs should be monitored by the SCADA/EMS system. Different methods, such as graph theory and simulated annealing, show that a minimum of about 1/5 to 1/4 of the system buses must be provided with PMUs to make the system completely observable. In optimization techniques the required number of PMUs is minimized while the observability of the system is retained with that reduced number of PMUs 10. Placement of PMUs for incomplete observability is a topological method that systematically distributes PMUs throughout the network. When the number of PMUs is not sufficient and their locations are not optimal, the PMUs cannot cover the whole system; this is called incomplete observability 3. Figure 3 shows a system that is incompletely observed. PMUs installed at bus 2 and bus 6 directly measure the voltages at buses 2 and 6, whereas the voltages at buses 1, 3, 5 and 7 can be determined using the measured voltages and currents. At the same time, bus 4 remains unobservable. Here buses 1, 3, 5 and 7 are calculated buses, and buses 2 and 6, where the PMUs are installed, are defined as directly measured buses. Since one bus is unobservable, this is defined as a depth of one unobservability. If, as shown in Figure 4, PMU 2 is moved one bus away from its initial place, then two buses become unobservable 8. This is defined as a depth of two unobservability, and these two unobservable buses lie between the two PMUs.
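The depth-of-unobservability idea in Figures 3 and 4 can be checked with a few lines. This minimal sketch assumes the same 7-bus radial chain (it is not code from the cited work); a PMU observes its own bus and the adjacent buses, and the remaining buses are listed:

```python
# Unobservable buses on a radial chain for a given PMU placement (illustrative).
def unobserved(chain_len, pmu_buses):
    observed = set()
    for p in pmu_buses:
        observed |= {p - 1, p, p + 1}    # a PMU observes its own bus and the adjacent buses
    return [b for b in range(1, chain_len + 1) if b not in observed]

print(unobserved(7, {2, 6}))   # [4]    -> depth-of-one unobservability
print(unobserved(7, {2, 7}))   # [4, 5] -> depth-of-two unobservability
```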
Another technique for PMU placement is tree search, described in Figure 5. A spanning tree is derived from the parent graph of the power system by eliminating the co-trees. Here it is assumed that node 1 is the reference node. As shown in Figure 4B, to obtain a depth of one unobservability, the next PMU is placed four buses away from the previous PMU installed at bus A. Once the tree has been completely searched, the PMU locations are assigned as shown in Figure 4C. For optimal placement, the minimum number of PMUs should be installed 4.
Illustration
The tree search technique for PMU placement is explained in Figure 6. As shown in the figure, there are 14 nodes and 13 branches extracted from a system graph of 21 branches including the co-trees. Here we choose node 12 as the reference node. Logically, a PMU should be placed at node 6 (PMU-A) so as to observe the reference node. We then follow the forward path determined by the nodal sequence node 6, node 5, node 1, node 2, node 4. The next priority for PMU placement (PMU-B) is node 4, which leaves node 1 as an unobservable node with a depth of one unobservability. Note that the two PMUs are four nodes away from each other 2.
Next, the candidate PMU position is moved to terminal node 9. We then backtrack from terminal node 9 to node 4 and from node 4 to node 7 2. The next move is to terminal node 8, which is observable because the current between node 7 and node 8 is known. We again backtrack until we reach a node from which we can move forward: node 8, node 7, node 4, node 2. From node 2 we move to node 6 along the path node 2, node 1, node 5, node 6. We then move to node 10 in the sequence node 6, node 11, node 10.
Node 10 would be a good place for a PMU, but we do not place one there because node 10 already lies within a depth of one unobservability: nodes 11 and 9 are on observed branches. We therefore backtrack from node 10, return to node 6, and move forward again to node 13. Node 13 has one unobservable bus at depth one, which is node 14. The last backtracking step returns to the reference node 12, finishing the search for the optimal placement. For optimal placement with a depth of one unobservability in the 14-bus system shown in Figure 6, PMUs must be installed at node 6 and node 4.
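A simplified way to reproduce the spirit of this search in code is to treat depth-of-one unobservability as the requirement that every node of the spanning tree lies within two hops of a PMU, and to place PMUs greedily from the leaves upward. The sketch below is my own simplification (it is not the flowchart of Figure 7), and it assumes the networkx package and an illustrative tree:

```python
# Greedy PMU placement on a spanning tree for at most depth-of-one unobservability.
import networkx as nx

def place_pmus_depth_one(tree, root):
    depth = nx.shortest_path_length(tree, root)            # hop count from the reference node
    parent = {v: u for u, v in nx.bfs_edges(tree, root)}   # parent of each node in the tree
    pmus = set()

    def dist_to_nearest_pmu(v):
        return min((nx.shortest_path_length(tree, v, p) for p in pmus), default=None)

    # Work from the deepest nodes upward; when a node has no PMU within two hops,
    # place one at its grandparent so that the new PMU also covers ancestors and siblings.
    for v in sorted(tree.nodes, key=lambda n: depth[n], reverse=True):
        d = dist_to_nearest_pmu(v)
        if d is None or d > 2:
            u = parent.get(v, v)
            u = parent.get(u, u)
            pmus.add(u)
    return pmus

tree = nx.path_graph(range(1, 8))                  # illustrative 7-node chain, nodes 1..7
print(sorted(place_pmus_depth_one(tree, root=1)))  # e.g. [1, 5]: every node within two hops of a PMU
```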
To confirm that the number of PMU placements is minimal, another search should be carried out with a different node chosen as the reference node 10. Figure 7 explains the PMU placement technique using the tree search method for incomplete observability. An outer loop, containing the process box (box 1) through the decision box (box 5), iterates over a subset of spanning trees. The main objective of the flowchart is to find the tree that leads to the optimal placement of PMUs and reduces the number of PMUs needed 11. Note that even a small part of the system has a large number of spanning trees, so the algorithm is designed to find a suitable tree efficiently (Figure 7: algorithm flowchart for finding the tree for incomplete observability 2).
Mongolian Gerbils as a Model for the Study of Cholesteatoma: Otoendoscopy as a Diagnostic Tool
Introduction Cholesteatoma is a disease with significant clinical impact but is incompletely understood. The challenge of performing studies with long-term follow-up in humans is a factor that has restricted the advance of knowledge in this field. Thus, the use of animal models is highly pertinent, and the Mongolian gerbil model has emerged as one of the most useful. Objective The present study aims to evaluate, through serial otoendoscopies, the development and characteristics of pars flaccida retraction pockets and cholesteatoma in Mongolian gerbils after obliteration of the eustachian tube, and to compare them with a control group. Methods Forty Mongolian gerbils were divided into two groups of 20 animals each. In the intervention group, the animals were followed with serial otoendoscopies after eustachian tube obliteration. In the control group, the animals were only followed through serial otoendoscopies. Results At the end of the 16-week follow-up, cholesteatoma was present in 13 of 38 (34.2%) ears in the intervention group, and in 7 of 34 (20.6%) in the control group (p = 0.197). When we considered cholesteatoma and its potential precursor, the pars flaccida retraction pocket, in a combined way, we verified it in 23 of 38 (60.8%) ears in the intervention group and in 11 of 34 (32.3%) in the control group (p = 0.016). Conclusions Over the 16 weeks of follow-up, serial otoendoscopies enabled us to evaluate the formation and development of pars flaccida retraction pockets and cholesteatomas in Mongolian gerbils and proved to be an excellent diagnostic tool.
Introduction
Cholesteatoma is a pathology with high morbidity and high costs to the health system, with significant risk for complications, both intratemporal and extratemporal. 1 Over the past decades, our understanding of the disease has evolved, especially in clinical and pathophysiological aspects. Many gaps persist in our understanding of its pathogenesis; however, several theories have been proposed. Clinical and experimental evidence [2][3][4][5][6] has demonstrated that the development of this pathology may follow a sequence of progressive tympanic retraction, pocket formation, loss of self-cleaning properties, keratin accumulation, and cholesteatoma formation. Nevertheless, some previous studies have failed to clearly demonstrate this evolution. 7,8 In the effort to provide adequate answers to lingering knowledge gaps, and considering the difficulty of performing histological and long-term follow-up studies of this disease in humans, the use of animal models for this purpose is highly valuable. Among these, Mongolian gerbils have emerged as one of the most operational species to meet this need, because, together with humans, they are the only species to develop cholesteatomas spontaneously. According to Chole et al., 9 cholesteatomas in Mongolian gerbils exhibit several characteristics similar to those of humans, both macroscopically and microscopically. [10][11][12][13][14] The obliteration of the external auditory canal has been used more frequently in prior studies, since this technique allows the development of cholesteatoma in almost all the animals that undergo the procedure. 10,15 Although this model has provided us added knowledge of cholesteatoma in general, the pathogenesis involved in the formation of cholesteatoma when this technique is used appears to differ from that observed in humans.
The present study aims to evaluate, through serial otoendoscopies, the development and characteristics of pars flaccida retraction pockets and cholesteatoma in Mongolian gerbils after obliteration of the eustachian tube (ET), and to compare them with a control group.
Materials and Methods
The present research was performed at the Animal Experimentation Unit (AEU) of the Hospital de Clínicas de Porto Alegre, between March 2016 and November 2017. All procedures were performed according to the regulations of the Committee on Animal Ethics (CEUA, in the Portuguese acronym) of this institution (16-0010).
Obliteration of the ET in Mongolian gerbils was performed by cauterization of the tubal ostium bilaterally. The animals underwent the procedure while under general anesthesia using inhaled isoflurane, intraperitoneal tramadol (30 mg/kg), and bupivacaine (0.5%, 4 mg/kg). Cauterization was performed using monopolar cautery with a needle tip inserted in a transpalatal approach, without direct visualization of the tubal ostia. Based on an anatomical study by Wolfman et al., 16 the needle was introduced 5 mm posterior to the transition between the hard palate and soft palate of the animal, maintained for 3 seconds, and angled 30° to the right initially and, afterwards, using the same angulation, to the left (Fig. 1). In the immediate postoperative period, analgesia included intramuscular dipyrone (500 mg/kg) in animals that underwent the intervention. Maintenance of analgesia was performed using tramadol hydrochloride (10 mg/kg) every 12 hours intraperitoneally in animals exhibiting clinical signs of pain.
On otoendoscopy, we considered cholesteatoma to be present in two situations: the presence of a plug of keratin filling a pars flaccida retraction pocket; and keratin accumulation adjacent to the tympanic membrane and partially or totally obliterating the external auditory canal.
The sample size calculation was performed using an estimated incidence of cholesteatoma development of 45% in the control group and 85% in the intervention group, as reported in the literature. 9,11,16 The incidence of cholesteatoma formation in each group was reported as n (%). Differences in the incidences between the groups were assessed using the chi-squared test.
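For the chi-squared comparisons reported below, a minimal re-computation sketch (my own check, not the authors' code, using the combined cholesteatoma/retraction-pocket counts reported in the Results; the paper does not state whether a continuity correction was used, so none is applied here) would be:

```python
# Chi-squared test on the combined counts: 23/38 affected ears (intervention) vs 11/34 (control).
from scipy.stats import chi2_contingency

table = [[23, 38 - 23],    # intervention group: affected / unaffected ears
         [11, 34 - 11]]    # control group
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(chi2, 2), round(p, 3))   # ~5.72, p ~ 0.017 (the paper reports p = 0.016)
```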
Results
Forty Mongolian gerbils > 3 months of age of both genders were included in the study and divided into 2 groups (intervention and control) of 20 animals each (Fig. 2). Five losses occurred throughout the study: three in the control group and two in the intervention group. However, one was included in the analysis because it was followed up for at least 8 weeks, resulting in 72 ears studied. The groups were divided without randomization and did not demonstrate statistically significant differences in their baseline characteristics (Table 1). In the first two evaluations (week zero, week 1), no cholesteatoma was visualized by otoendoscopic evaluation. In week zero, we verified the presence of a pars flaccida retraction pocket in 2 out of 38 (5.2%) ears in the intervention group and in 1 out of 34 (2.9%) ears in the control group (p = 0.654). In week 1, the occurrence of a pars flaccida retraction pocket was observed in 6 out of 38 (15.7%) ears in the intervention group and in 3 out of 34 (8.8%) ears in the control group (p = 0.372).
In week 4, the presence of cholesteatoma was observed in 5 out of 38 (13.1%) ears in the intervention group and in 1 out of 34 (2.9%) ears in the control group (p = 0.117). Considering the ears in the intervention group that developed cholesteatoma this week, 3 out of 5 (60%) presented a pars flaccida retraction pocket in the previous evaluation. A pars flaccida retraction pocket was seen in 9 out of 38 (23.6% [6 new/3 already present in the previous evaluation]) ears in the intervention group and in 6 out of 34 (17.6% [3 new/3 previous]) ears in the control group (p = 0.528).
In the 8th week, 8 out of 38 (21.05%) ears in the intervention group showed cholesteatoma, versus 2 out of 34 (5.8%) ears in the control group (p = 0.063). In this week, we identified 3 new cholesteatomas compared with the previous evaluation; 2 out of 3 (66%) presented a pars flaccida retraction pocket at week 4. The presence of pars flaccida retraction was seen in 9 out of 38 (23.6% [2 new/5 previous]) ears in the intervention group and in 6 out of 34 (17.6% [1 new/5 previous]) in the control group (p = 0.528).
In week 12, we observed the presence of cholesteatoma in 10 out of 38 (26.3%) ears in the intervention group and in 4 out of 34 (11.7%) ears in the control group (p = 0.119). In the intervention group, 2 new cholesteatomas emerged, 1 of which (50%) presented a pars flaccida retraction pocket in the previous evaluation. A pars flaccida retraction pocket was observed in 12 out of 38 (31.5% [4 new/8 previous]) ears in the intervention group and in 6 out of 34 (17.6% [1 new/5 previous]) ears in the control group (p = 0.172).
At the end of the 16-week follow-up, cholesteatoma was present in 13 out of 38 (34.2%) ears in the intervention group and in 7 out of 34 (20.6%) in the control group, a difference that was not statistically significant (p = 0.197). In this final week, we identified 3 new cholesteatomas compared with the previous evaluation, and 2 out of 3 (66%) presented a pars flaccida retraction pocket at week 12. When we consider cholesteatoma and its potential precursor, the pars flaccida retraction pocket, combined, these changes were observed in 23 out of 38 (60.8%) ears in the intervention group and in 11 out of 34 (32.3%) in the control group, a difference with statistical significance (p = 0.016). Using serial otoendoscopic evaluation, the development of cholesteatomas occurred, on average, at 10.5 weeks in animals in the intervention group compared with 14.2 weeks in the control group (p = 0.052).
Over the 16 weeks, 13 cholesteatomas developed in the intervention group and 7 in the control group (Fig. 5). In the intervention group, 8 out of 13 (61.5%) cholesteatomas developed in ears that presented pars flaccida retraction pockets in the immediately previous evaluation, compared with 4 out of 7 (57.1%) in the control group (p = 0.744).
Discussion
An animal model of cholesteatoma has value in the study of pathogenesis. The Mongolian gerbil has been considered a suitable model of study, since it is the only species, other than humans, prone to spontaneous cholesteatoma. The majority of previous studies are based on observations made after closure of the external auditory canal to increase the incidence of cholesteatoma formation, followed by serial sacrifice of the animals and histological analysis of the bulla. Although these studies have shed light on many facets of this intriguing disease, it is often difficult to fully translate their findings into the human clinical scenario. In particular, one of the most accepted theories for the development of cholesteatoma in humans is related to malfunction of the ET, with the consequent generation of negative pressure in the middle ear, inflammation, progressive retraction of the tympanic membrane and, finally, cholesteatoma formation. 17 To follow this progression, the clinician can best assess the affected ear by serial inspection with the aid of otoscopes, microscopes or endoscopes. In other words, experimentally induced cholesteatomas obtained by suturing the ear canal are critically dissimilar to naturally occurring cholesteatomas. Similarly, relying on histopathology to make the clinical diagnosis of cholesteatoma is not consistent with real-world clinical care.
Taking this into account, we chose ET obliteration as the technique to induce cholesteatoma in the present study. We created an experimental model that could simulate, as closely as possible, the development and diagnosis of cholesteatomas in humans. To do so, we decided to induce the development of the disease through ET obliteration and to follow up the animals with serial otoendoscopies. This method allowed us to analyze evolutionary characteristics, formation pathways and cholesteatoma progression over time in the same animal, a scenario not possible using histology. To our knowledge, this is the first time that endoscopes have been used to monitor cholesteatoma in Mongolian gerbils. Using this methodology, we found that ET obliteration showed a tendency to increase the incidence of cholesteatoma in Mongolian gerbils, although this did not achieve statistical significance. However, when we included pars flaccida retraction pockets in the analysis (a condition also closely related to ET obliteration and theoretically a potential precursor of cholesteatoma), we obtained a high combined incidence of these changes in the intervention group, with statistical significance compared with the control group. The inclusion of these findings furthers the concept of a spectral progression to cholesteatoma.
The present study has some limitations. We faced difficulties in the complete visualization of the tympanic membrane through otoendoscopy in cases in which the external auditory canal was completely occluded by the accumulation of keratin. The impossibility of directly visualizing the tubal ostia during the cauterization procedure was another difficulty of the present study. We cauterized in an extended way (3 seconds) and bilaterally in an attempt to increase the chance of success. However, at no point in the study was there confirmation that the obliteration was really effective, which is an important limitation. Furthermore, we must be very careful when projecting results obtained in experimental studies onto humans. In addition to the evident anatomical differences, Mongolian gerbils have a substantially higher incidence of cholesteatoma than humans, demonstrating a greater propensity for the development of this pathology.
Considering our findings in serial otoendoscopies, we hypothesize that ET obliteration and the consequent destabilization of the middle ear (negative pressure and inflammatory process) seem to increase the incidence of cholesteatoma in Mongolian gerbils. This would occur either by pars flaccida retraction and gradual accumulation of keratin, followed by the probable loss of the self-cleaning ability of the retraction pocket, or by progressive accumulation of keratin in the lateral wall of the tympanic membrane with later advancement toward the middle ear and bulla. In addition, we observed that even in the control group there is a high incidence of cholesteatomas that presented a pars flaccida retraction pocket in their immediately preceding evaluation. Therefore, we must consider this formation pathway (retraction pocket to cholesteatoma) as responsible for at least part of the spontaneous cholesteatomas in these animals. Taking this into account, the present study corroborates the possibility of using Mongolian gerbils as an animal model to also study the formation pathways of cholesteatoma, and otoendoscopy proved to be an excellent tool for this purpose.
Conclusion
Over the 16-week follow-up, serial otoendoscopies enabled us to evaluate the formation and development of pars flaccida retraction pockets and cholesteatomas in Mongolian gerbils, and they proved to be an excellent diagnostic tool.
Fig. 5 Findings of each ear studied during the 16-week follow-up.
Table 1 Baseline characteristics.
Changes in Autonomic Nervous System Activity and Mood of Healthy People after Mindfulness Art Therapy Short Version
The aim of this study was to investigate changes in autonomic nervous system (ANS) activity and mood caused by Mindfulness Art Therapy Short version (MATS). The participants were 20 Japanese college students who were separated into high and low risk groups based on the median score of the General Health Questionnaire (GHQ). MATS consisted of mindfulness exercise and making of art in one session. ANS activity (TP: total energy, LF/HF: sympathetic nervous, HF: parasympathetic nervous system, LF: both sympathetic and parasympathetic) and mood (TA: tension arousal, EA: energy arousal) were measured psychologically before and after MATS. In the high risk group, TP significantly decreased and LF, HF, and LF/HF did not change significantly; while TA significantly decreased and EA significantly increased. In the low risk group, TP and LF significantly increased and HF and LF/HF did not change significantly; while TA significantly decreased and EA showed a non-significant increase. These results suggest that MATS affects the ANS differently for participants with different states of mental health, and particularly promotes activity in low-risk participants. Psychologically, MATS decreased tension or anxiety and increased energy. These findings justify further use of this therapy.
Introduction
Cancer patients experience physical, social, psychological, and spiritual pain. Art therapy is a psychotherapy
that has been shown to be effective for anxiety [1], depression [2], spiritual well-being [3], and somatic symptoms [4] in cancer patients. Mindfulness is another effective approach, and Kabat-Zinn [5] developed the Mindfulness-Based Stress Reduction (MBSR) program based on the principle of mindfulness, which is defined as moment-to-moment, present-centered, purposive, non-judgmental awareness. MBSR is effective for improvement of quality of life and mood [6] and for post-traumatic growth and spirituality [7].
Mindfulness-Based Art Therapy (MBAT) integrates mindfulness and art therapy, and leads to significant decreases in symptoms of distress and increased quality of life [8]. Physiologically, MBAT for 8 weeks increases cerebral blood flow (CBF), and there is a correlation between increased CBF and decreased anxiety [9]. However, the 8-week duration of MBAT in a group format may be too hard for patients with advanced cancer, and a simpler version, the Mindfulness Art Therapy Short version (MATS), was developed [10]. MATS is effective for alleviation of tension-anxiety, depression and fatigue, and for elevation of vigor in healthy people [11].
Previous studies of MATS have used questionnaires as psychological indicators, and thus the physiological changes induced by MATS remain unclear. Autonomic nervous system (ANS) activity is a useful physiological indicator, since it reflects heart rate variability (HRV), which in turn reflects the subtle relationship between the heart and lungs. In addition, brain scans and EEG studies require immobility [12], which is not possible for an action-based form of mindfulness or therapeutic use of art; in contrast, resting HRV has become an accepted marker for vagal tone and a neurobiological correlate of internal composure. It is also likely that the mental health state of participants affects the response to MATS, because in a previous study [11] participants' impressions were divided into two areas, with some experiences much more focused on the mind. Thus, we investigated changes in the ANS and mood in participants grouped by their state of mental health.
To measure mood, we chose a scale that measures present states, and to measure mental health, we chose a scale that has been used in studies worldwide.
Participants
The participants were 20 college students at a single college in Japan. They were recruited by advertisement of the study and their participation was voluntary. The inclusion criterion was age >20 years and the exclusion criterion was the presence of severe mental problems. The study was approved by the college ethics board.
Measures
The General Health Questionnaire 30 (GHQ30) was used to measure mental health state, including physical state [13] [14]. Autonomic nervous system (ANS) activity was evaluated using the TAS9 device (YKC Corp.), which uses heart rate variability to measure the total energy of the sympathetic and parasympathetic nervous systems (TP), combined sympathetic and parasympathetic activity (LF: low frequency), parasympathetic activity (HF: high frequency), and the balance between sympathetic and parasympathetic activity, reflecting the sympathetic nervous system (LF/HF). Mood was measured using the Japanese UWIST Mood Adjective Checklist (JUMACLE) [15]. This consists of 10 items for tension arousal (TA), which includes tension or anxiety, and 10 items for energy arousal (EA), which includes activity or vigor. Each item is scored by each participant from 1 to 4 on a Likert scale. A high score indicates high TA or EA.
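The TAS9 device's internal algorithm is not described here, but frequency-domain HRV measures of this kind (TP, LF, HF, LF/HF) are conventionally computed from the power spectral density of the RR-interval series. The sketch below shows that conventional computation under standard band definitions (LF 0.04 to 0.15 Hz, HF 0.15 to 0.40 Hz); the band limits, the resampling rate, and any further transformation of the reported values (the paper's values may be log-scaled by the device) are assumptions for illustration, not specifications of the device.

```python
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

def hrv_frequency_measures(rr_intervals_ms, fs=4.0):
    """Conventional frequency-domain HRV measures from RR intervals (ms)."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                              # beat times in seconds
    # Resample the irregularly spaced RR series onto a uniform grid.
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    rr_uniform = interp1d(t, rr, kind="cubic")(grid)
    # Welch power spectral density of the mean-centered tachogram.
    f, psd = welch(rr_uniform - rr_uniform.mean(), fs=fs, nperseg=min(256, len(grid)))
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    tp_band = (f >= 0.003) & (f < 0.40)
    lf = np.trapz(psd[lf_band], f[lf_band])
    hf = np.trapz(psd[hf_band], f[hf_band])
    tp = np.trapz(psd[tp_band], f[tp_band])
    return {"TP": tp, "LF": lf, "HF": hf, "LF/HF": lf / hf}
```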
The procedure for the Mindfulness Art Therapy Short version program has been described elsewhere [11]. The intervention includes mindfulness and art therapy of 90 minutes in one session. Participants are instructed on mindfulness with assistance from a CD developed by a yoga specialist and a clinical psychologist. First, participants exercised mindfulness by listening to the CD while receiving support from the clinical psychologist. This required about 20 minutes. The simple instructions in the CD were designed to make the participants aware of mind and body without judgment. After mindfulness practice, participants were given art materials including clay, collage materials (fancy paper, felt, glue sticks, magazines), drawing instruments (colored pencils, pastel chalks, pencils, water colors), and sketch books, and were invited to make art by expressing their feelings or emotions freely.
Procedures
Students were recruited by an advertisement in the college. If a student was interested in the research, he or she telephoned the researchers and made a reservation. Each participant received MATS individually with a clinical psychologist. First, the clinical psychologist explained the study in more detail. If the participant continued to agree to participation, they gave signed informed consent and the study was started. Each participant completed the GHQ and JUMACLE questionnaires, and ANS activity was measured by the TAS9 via a sensor. After MATS, the JUMACLE was completed again and ANS activity was measured for a second time.
Analysis
Statistical analysis was conducted with SPSS ver. 21.0 (Japanese version) for Windows (SPSS Inc.). Scores for ANS activity and the JUMACLE pre- and post-intervention were compared by t-test. All reported p values are two-tailed, and p < 0.05 was taken to indicate significance in all analyses.
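As a minimal illustration of the pre/post comparison described above, the following Python sketch performs an equivalent two-tailed test with SciPy instead of SPSS. A paired test is assumed here because each participant is measured twice; the score arrays are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post tension-arousal (TA) scores for one group of 10 participants.
pre_ta = np.array([18, 20, 17, 19, 21, 16, 18, 22, 17, 20])
post_ta = np.array([13, 15, 12, 14, 16, 11, 13, 17, 12, 15])

# Paired (related-samples) t-test; scipy returns a two-tailed p value by default.
t_stat, p_value = stats.ttest_rel(pre_ta, post_ta)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant = {p_value < 0.05}")
```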
Results
The demographic data are shown in Table 1, and the results for HRV and mood are shown in Table 2 to Table 5.
In the high risk group, there were no significant differences between pre- and post-MATS values for LF, HF, and LF/HF. The score for TP significantly decreased from 7.28 to 7.01 (p < 0.05). TA significantly decreased from 18.8 to 13.4 (p < 0.005) and EA significantly increased from 30 to 34 (p < 0.05). In the low risk group, LF significantly increased from 5.08 to 5.92 (p < 0.009) and TP significantly increased from 6.73 to 7.12 (p < 0.004). There were no significant differences in HF and LF/HF pre- and post-MATS. TA significantly decreased from 16.5 to 12.2 (p < 0.001) and there was no significant difference in EA pre- and post-MATS.
Discussion
The absence of significant differences in LF, HF, and LF/HF in the high risk group indicates that there were no significant physiological changes in the sympathetic and parasympathetic nervous systems in this group. However, the significant decrease in TP shows that total energy in the ANS decreased. This may be because performing mindfulness and making art for 90 minutes consumed their physical strength. For the psychological indicators, the significant decrease in TA indicates a decrease in tension or anxiety, while the significant increase in EA indicates an increase in energy. Therefore, the high-risk participants were activated and revitalized by MATS. That is, physiologically, the participants in the high mental health risk group might have been fatigued; however, psychologically, they were relaxed and activated after MATS. A previous study showed decreased sympathetic activity and increased parasympathetic activity after back massage in healthy people with high sympathetic activity at baseline [16]. In contrast, in the current study, there was no significant difference in HF and LF, perhaps because MATS is an active intervention, whereas an intervention such as back massage is passive.
In the low risk group, the significant increase of LF suggests an increase in sympathetic activity. Moreover, the LF/HF of >1.0 suggests that sympathetic activity was activated more than parasympathetic activity. The significant increase in TP indicates that the total energy of the ANS also increased. That is, for the low-risk mental health group, MATS was a physiological activator. Compared with the high risk group, MATS may be more effective for elevating ANS activity in healthy people with low mental health risk. One of the reasons for this effect may be that this therapy increases cerebral blood flow [9], which then activates brain function and increases ANS activity.

Table 6. Merits of MATS for cancer patients.

1) A participant can undertake the therapy individually.

2) The duration of a session is about 60 to 90 minutes, and participants can complete it easily.

3) The mindfulness part of MATS requires little physical movement, so participants with weak physical strength can take part.

4) A participant can take part even while in bed.

5) A CD can be used for the mindfulness exercise, so the therapy can be conducted when a supporter is absent.
Psychologically, the significant decrease in TA in the low risk group indicates a significant decrease in tension or anxiety, as also found for the high risk group. Thus, MATS is useful in this respect for all participants. In contrast, energy was high at baseline and did not increase (no significant change in EA) in the low risk group. These changes show that participants in this group were activated and energetic, and had decreased tension after MATS. This is consistent with the finding by Monti et al. [17] that anxiety decreased after mindfulness art therapy in cancer patients.
Lastly, we consider the merits of MATS for cancer patients. This method may be useful for cancer patients for the reasons listed in Table 6. Moreover, the present study, which measured autonomic nervous system activity, may be an important preliminary step before applying MATS to cancer patients. Based on our findings, we anticipate the following benefits for cancer patients: 1) since MATS alleviated tension-anxiety (TA) in healthy people, it may alleviate tension-anxiety in cancer patients; 2) since it physiologically elevated the energy of healthy people in the low risk group, it may elevate the energy of cancer patients with low mental health risk; and 3) since mental health factors such as depression or anxiety were related to spirituality in a previous study [18], MATS might also affect the spirituality of cancer patients. Future studies are needed to confirm these points.
The study is limited by the small number of participants and the absence of a randomized controlled design. Therefore, evaluation of the robustness and reliability of the findings will require a further study in more subjects in a randomized controlled trial.
Conclusions
We investigated changes in the ANS and mood in participants by the MATS based on their state of mental health.
1) In the high risk group, TP significantly decreased and LF, HF, and LF/HF did not change significantly; while TA significantly decreased and EA significantly increased.
2) In the low risk group, TP and LF significantly increased and HF and LF/HF did not change significantly; while TA significantly decreased and EA showed a non-significant increase.
3) These results suggest that MATS affects the ANS differently for participants with different states of mental health, and particularly promotes activity in low-risk participants.
4) Psychologically, MATS decreased tension or anxiety and increased energy. These findings justify further use of this therapy.
Table 2. Pre- and post-intervention scores for ANS activities in the high risk group (n = 10).

Table 3. Pre- and post-intervention scores for subdomains of mood in the high risk group (n = 10).

Table 4. Pre- and post-intervention scores for ANS activities in the low risk group (n = 10).

Table 5. Pre- and post-intervention scores for subdomains of mood in the low risk group (n = 10).
Provably Efficient Multi-Task Reinforcement Learning with Model Transfer
We study multi-task reinforcement learning (RL) in tabular episodic Markov decision processes (MDPs). We formulate a heterogeneous multi-player RL problem, in which a group of players concurrently face similar but not necessarily identical MDPs, with a goal of improving their collective performance through inter-player information sharing. We design and analyze an algorithm based on the idea of model transfer, and provide gap-dependent and gap-independent upper and lower bounds that characterize the intrinsic complexity of the problem.
Introduction
In many real-world applications, reinforcement learning (RL) agents can be deployed as a group to complete similar tasks at the same time. For example, in healthcare robotics, robots are paired with people with dementia to perform personalized cognitive training activities by learning their preferences [42,21]; in autonomous driving, a set of autonomous vehicles learn how to navigate and avoid obstacles in various environments [27]. In these settings, each learning agent alone may only be able to acquire a limited amount of data, while the agents as a group have the potential to collectively learn faster through sharing knowledge among themselves. Multi-task learning [7] is a practical framework that can be used to model such settings, where a set of learning agents share/transfer knowledge to improve their collective performance.
Despite many empirical successes of multi-task RL (see, e.g., [51,28,27]) and transfer learning for RL (see, e.g., [26,39]), a theoretical understanding of when and how information sharing or knowledge transfer can provide benefits remains limited. Exceptions include [16,6,11,17,32,25], which study multi-task learning from parameter- or representation-transfer perspectives. However, these works still do not provide a completely satisfying answer: for example, in many application scenarios, the reward structures and the environment dynamics are only slightly different for each task; this is, however, not captured by representation transfer [11,17] or existing works on clustering-based parameter transfer [16,6]. In such settings, is it possible to design provably efficient multi-task RL algorithms that have guarantees never worse than agents learning individually, while outperforming the individual agents in favorable situations?
In this work, we formulate an online multi-task RL problem that is applicable to the aforementioned settings. Specifically, inspired by a recent study on multi-task multi-armed bandits [43], we formulate the ǫ-Multi-Player Episodic Reinforcement Learning (abbreviated as ǫ-MPERL) problem, in which all tasks share the same state and action spaces, and the tasks are assumed to be similar, i.e., the dissimilarities between the environments of different tasks (specifically, the reward distributions and transition dynamics associated with the players/tasks) are bounded in terms of a dissimilarity parameter ǫ ≥ 0. This problem not only models concurrent RL [34,16] as a special case by taking ǫ = 0, but also captures richer multi-task RL settings when ǫ is nonzero. We study regret minimization for the ǫ-MPERL problem, specifically:

1. We identify a problem complexity notion named subpar state-action pairs, which captures the amenability to information sharing among tasks in ǫ-MPERL problem instances. As shown in the multi-task bandits literature (e.g., [43]), inter-task information sharing is not always helpful in reducing the players' collective regret. Subpar state-action pairs, intuitively speaking, are clearly suboptimal for all tasks, for which we can robustly take advantage of (possibly biased) data collected for other tasks to achieve a lower regret in a certain task.
2. In the setting where the dissimilarity parameter ǫ is known, we design a model-based algorithm MULTI-TASK-EULER (Algorithm 1), which is built upon state-of-the-art algorithms for learning single-task Markov decision processes (MDPs) [3,46,36], as well as algorithmic ideas of model transfer in RL [39]. MULTI-TASK-EULER crucially utilizes the dissimilarity assumption to robustly take advantage of information sharing among tasks, and achieves regret upper bounds in terms of subpar state-action pairs, in both (value function suboptimality) gap-dependent and gap-independent fashions. Specifically, compared with a baseline algorithm that does not utilize information sharing, MULTI-TASK-EULER has a regret guarantee that: (1) is never worse, i.e., it avoids negative transfer [33]; (2) can be much superior when there are a large number of subpar state-action pairs.
3. We also present gap-dependent and gap-independent regret lower bounds for the ǫ-MPERL problem in terms of subpar state-action pairs. These lower bounds nearly match the upper bounds when the episode length of the MDP is a constant. Together, the upper and lower bounds can be used to characterize the intrinsic complexity of the ǫ-MPERL problem.
Preliminaries
Throughout this paper, we denote by [n] := {1, . . . , n}. For a set A in a universe U, we use A^C = U \ A to denote its complement. Denote by ∆(X) the set of probability distributions over X. For functions f, g, we write f ≲ g or f = O(g) (resp. f ≳ g or f = Ω(g)) to denote that there exists some constant c > 0 such that f ≤ cg (resp. f ≥ cg), and write f ≍ g to denote that f ≲ g and f ≳ g hold simultaneously. Define a ∨ b := max(a, b) and a ∧ b := min(a, b). We use E to denote the expectation operator and var to denote the variance operator. Throughout, we use Õ(·) and Ω̃(·) notation to hide polylogarithmic factors.
Multi-task RL in episodic MDPs. We have a set of M MDPs {M_p = (H, S, A, p_0, P_p, r_p)}_{p=1}^M, each associated with a player p ∈ [M]. Each MDP M_p is regarded as a task. The MDPs share the same episode length H ∈ N_+, finite state space S, finite action space A, and initial state distribution p_0 ∈ ∆(S). Let ⊥ be a default terminal state that is not contained in S. The transition probabilities P_p : S × A → ∆(S ∪ {⊥}) and reward distributions r_p : S × A → ∆([0, 1]) of the players are not necessarily identical. We assume that the MDPs are layered, in that the state space S can be partitioned into disjoint subsets (S_h)_{h=1}^H, where p_0 is supported on S_1, and for every p ∈ [M], h ∈ [H], and every s ∈ S_h, a ∈ A, P_p(· | s, a) is supported on S_{h+1}; here, we define S_{H+1} = {⊥}. We denote by S := |S| the size of the state space and by A := |A| the size of the action space.
Interaction process. The interaction process between the players and the environment is as follows: at the beginning, both (r_p)_{p=1}^M and (P_p)_{p=1}^M are unknown to the players. For each episode k ∈ [K], conditioned on the interaction history up to episode k − 1, each player p ∈ [M] independently interacts with its respective MDP M_p; specifically, player p starts at state s^k_{1,p} ∼ p_0, and at every step (layer) h ∈ [H], it chooses action a^k_{h,p}, transitions to next state s^k_{h+1,p} ∼ P_p(· | s^k_{h,p}, a^k_{h,p}) and receives a stochastic immediate reward r^k_{h,p} ∼ r_p(· | s^k_{h,p}, a^k_{h,p}); after all players have finished their k-th episode, they can communicate and share their interaction history. The goal of the players is to maximize their expected collective reward E[∑_{p=1}^M ∑_{k=1}^K ∑_{h=1}^H r^k_{h,p}].

Policy and value functions. A deterministic, history-independent policy π is a mapping from S to A, which can be used by a player to make decisions in its respective MDP. For player p and step h, we use V^π_{h,p} : S_h → [0, H] and Q^π_{h,p} : S_h × A → [0, H] to denote its value and action-value functions, respectively. They satisfy the following recurrence known as the Bellman equation: for all h ∈ [H], V^π_{h,p}(s) = Q^π_{h,p}(s, π(s)) and Q^π_{h,p}(s, a) = R_p(s, a) + (P_p V^π_{h+1,p})(s, a), where we use the convention that V^π_{H+1,p}(⊥) = 0, and for f : S_{h+1} → R, (P_p f)(s, a) := ∑_{s′ ∈ S_{h+1}} P_p(s′ | s, a) f(s′), and R_p(s, a) := E_{r ∼ r_p(· | s, a)}[r] is the expected immediate reward of player p. For player p and policy π, denote by V^π_{0,p} = E_{s_1 ∼ p_0}[V^π_{1,p}(s_1)] its expected reward.
For player p, we also define its optimal value function V⋆_{h,p} : S_h → [0, H] and optimal action-value function Q⋆_{h,p} : S_h × A → [0, H] using the Bellman optimality equation:

V⋆_{h,p}(s) = max_{a ∈ A} Q⋆_{h,p}(s, a),  Q⋆_{h,p}(s, a) = R_p(s, a) + (P_p V⋆_{h+1,p})(s, a),   (1)

where we again use the convention that V⋆_{H+1,p}(⊥) = 0. For player p, denote by V⋆_{0,p} = E_{s_1 ∼ p_0}[V⋆_{1,p}(s_1)] its optimal expected reward.
Given a policy π, as V^π_{h,p} for different h's are only defined on the respective layers S_h, we "collate" the value functions (V^π_{h,p})_{h=1}^H and obtain a single value function V^π_p : S ∪ {⊥} → R. Formally, for s ∈ S_h we set V^π_p(s) := V^π_{h,p}(s), and V^π_p(⊥) := 0. We define Q^π_p, V⋆_p, and Q⋆_p similarly. For player p, given its optimal action-value function Q⋆_p, any of its greedy policies π⋆_p(s) ∈ argmax_{a ∈ A} Q⋆_p(s, a) is optimal with respect to M_p.
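To make the layered-MDP value recursions above concrete, here is a minimal tabular sketch of policy evaluation (Bellman equation) and the Bellman optimality backup for a single task. It is an illustrative implementation of the equations, not the paper's Algorithm 1; the per-layer NumPy array representation and the function names are assumptions made for this example.

```python
import numpy as np

def evaluate_policy(P, R, policy, H):
    """Exact policy evaluation in a layered tabular MDP.

    P[h] has shape (S_h, A, S_{h+1}); R[h] has shape (S_h, A);
    policy[h] maps each state index in layer h to an action index.
    Returns V[h], the value of each state in layer h under the policy.
    """
    V = [None] * (H + 2)
    V[H + 1] = np.zeros(1)                      # terminal layer: single state ⊥ with value 0
    for h in range(H, 0, -1):
        S_h, A, _ = P[h].shape
        Q = R[h] + P[h] @ V[h + 1]              # Q^π_h(s,a) = R_p(s,a) + (P_p V^π_{h+1})(s,a)
        V[h] = Q[np.arange(S_h), policy[h]]     # V^π_h(s) = Q^π_h(s, π(s))
    return V

def optimal_values(P, R, H):
    """Bellman optimality backup: returns optimal V*[h] and Q*[h] per layer."""
    V = [None] * (H + 2)
    Q = [None] * (H + 2)
    V[H + 1] = np.zeros(1)
    for h in range(H, 0, -1):
        Q[h] = R[h] + P[h] @ V[h + 1]           # Q*_h(s,a) = R_p(s,a) + (P_p V*_{h+1})(s,a)
        V[h] = Q[h].max(axis=1)                 # V*_h(s) = max_a Q*_h(s,a)
    return V, Q
```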
Suboptimality gap. For player p, we define the suboptimality gap of state-action pair (s, a) as gap_p(s, a) = V⋆_p(s) − Q⋆_p(s, a). We define the minimum suboptimality gap of player p as gap_{p,min} = min_{(s,a): gap_p(s,a) > 0} gap_p(s, a), and the minimum suboptimality gap over all players as gap_min = min_{p ∈ [M]} gap_{p,min}. For player p ∈ [M], define Z_{p,opt} := {(s, a) : gap_p(s, a) = 0} as the set of optimal state-action pairs with respect to p.
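A short sketch of these gap quantities for one task, computed directly from the optimal value functions returned by `optimal_values` in the previous code block; the function name and the numerical tolerance are ours, introduced only for illustration.

```python
import numpy as np

def suboptimality_gaps(V_star, Q_star, H, tol=1e-12):
    """Per-layer gaps gap_p(s,a) = V*_p(s) - Q*_p(s,a), the minimum positive gap,
    and the set of optimal state-action pairs Z_{p,opt} for a single task."""
    gaps = {h: V_star[h][:, None] - Q_star[h] for h in range(1, H + 1)}
    positive = np.concatenate([g[g > tol] for g in gaps.values()])
    gap_min = positive.min() if positive.size else np.inf
    Z_opt = {(h, s, a)
             for h, g in gaps.items()
             for s, a in zip(*np.where(g <= tol))}
    return gaps, gap_min, Z_opt
```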
Performance metric. We measure the performance of the players using their collective regret, i.e., over a total of K episodes, how much extra reward they would have collected in expectation if they had been executing their respective optimal policies from the beginning. Formally, suppose that for each episode k, player p executes policy π_k(p); then the collective regret of the players is defined as

Reg(K) = ∑_{k=1}^K ∑_{p=1}^M ( V⋆_{0,p} − V^{π_k(p)}_{0,p} ).

Baseline: individual STRONG-EULER. A naive baseline for multi-task RL is to let each player run a separate RL algorithm without communication. For concreteness, we choose to let each player run the state-of-the-art STRONG-EULER algorithm [36] (see also its precursor EULER [46]), which enjoys minimax gap-independent regret guarantees [3, 8] as well as gap-dependent guarantees. Our goal is to design multi-task RL algorithms that can achieve collective regret strictly lower than this baseline in both gap-dependent and gap-independent fashions when the tasks are similar.
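For intuition, the collective regret is a straightforward double sum over players and episodes; a minimal sketch is below. The argument layout is an assumption for the example: in an analysis setting where the true model is known, each V^{π_k(p)}_{0,p} could be obtained with `evaluate_policy` from the earlier sketch plus an expectation over the initial state distribution.

```python
def collective_regret(V0_star, V0_executed):
    """Collective regret over K episodes and M players.

    V0_star[p]        : optimal expected reward V*_{0,p} of player p.
    V0_executed[k][p] : expected reward V^{π_k(p)}_{0,p} of the policy player p ran in episode k.
    """
    return sum(V0_star[p] - V0_executed[k][p]
               for k in range(len(V0_executed))
               for p in range(len(V0_star)))
```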
Notion of similarity. Throughout this paper, we will consider the following notion of similarity between MDPs in the multi-task episodic RL setting.
If the MDPs in (M_p)_{p=1}^M are 0-dissimilar, then they are identical by definition, and our interaction protocol degenerates to the concurrent RL protocol [34]. Our dissimilarity notion is complementary to those of [6, 16]: they require the MDPs to be either identical or to have well-separated parameters for at least one state-action pair; in contrast, our dissimilarity notion allows the MDPs to be non-identical and arbitrarily close.
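Definition 1 (ǫ-dissimilarity) is not reproduced in this excerpt. The bounds used later in the appendix (expected rewards within ǫ of each other, and transition distributions within ǫ/H in ℓ1 distance, per state-action pair) suggest a per-pair check of the following form; the sketch below therefore reflects an assumed reading of the definition, written for two layered tabular tasks.

```python
import numpy as np

def dissimilarity_holds(R_p, R_q, P_p, P_q, eps, H):
    """Check an assumed form of ǫ-dissimilarity between tasks p and q:
    |R_p(s,a) - R_q(s,a)| <= eps and ||P_p(.|s,a) - P_q(.|s,a)||_1 <= eps / H
    for every layer h and state-action pair (s, a).

    R_*[h] has shape (S_h, A); P_*[h] has shape (S_h, A, S_{h+1}).
    """
    for h in range(1, H + 1):
        if np.abs(R_p[h] - R_q[h]).max() > eps:
            return False
        if np.abs(P_p[h] - P_q[h]).sum(axis=-1).max() > eps / H:
            return False
    return True
```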
We also have an intuitive lemma (Lemma 2, proved in the appendix) showing that the optimal value functions of different MDPs are close, in terms of the dissimilarity parameter ǫ.
Algorithm
We now describe our main algorithm, MULTI-TASK-EULER (Algorithm 1). Our model-based algorithm is built upon recent works on episodic RL that provide algorithms with sharp instance-dependent guarantees in the single-task setting [46, 36]. In a nutshell, for each episode k and each player p, the algorithm performs optimistic value iteration to construct high-probability upper and lower bounds for the optimal value and action-value functions V⋆_p and Q⋆_p, and uses them to guide its exploration and decision-making process.

Empirical estimates of model parameters. For each player p, the construction of its value function bound estimates relies on empirical estimates of its transition probabilities and expected reward function. For both estimands, we use two estimators with complementary roles, which are at two different points of the bias-variance tradeoff spectrum: one estimator uses only the player's own data (termed the individual estimate), which has large variance; the other estimator uses the data collected by all players (termed the aggregate estimate), which has lower variance but can easily be biased, as transition probabilities and reward distributions are heterogeneous. Such an algorithmic idea of "model transfer", where one estimates the model of one task using data collected from other tasks, has appeared in prior works (e.g., [39]). Specifically, at the beginning of episode k, for every h ∈ [H] and (s, a) ∈ S_h × A, the algorithm maintains the empirical count of encounters of (s, a) for each player p, along with the total empirical count across all players:

n_p(s, a) := number of times player p has visited (s, a) so far,  n(s, a) := ∑_{q ∈ [M]} n_q(s, a).   (3)
The individual and aggregate estimates of the expected immediate reward R_p(s, a) are the empirical means over the corresponding data:

R̂_p(s, a) := average of the rewards observed by player p at (s, a),  R̂(s, a) := average of the rewards observed by all players at (s, a).   (4)

Similarly, for every h ∈ [H] and (s, a, s′) ∈ S_h × A × S_{h+1}, we also define the individual and aggregate estimates of the transition probability as

P̂_p(s′ | s, a) := n_p(s, a, s′) / n_p(s, a),  P̂(s′ | s, a) := n(s, a, s′) / n(s, a),   (5)

where n_p(s, a, s′) and n(s, a, s′) count the observed transitions from (s, a) to s′ by player p and by all players, respectively.

Constructing value function estimates via optimistic value iteration. For each player p, based on these model parameter estimates, MULTI-TASK-EULER performs optimistic value iteration to compute the value function estimates for states at all layers (lines 3 to 15). For the terminal layer H + 1, V⋆_{H+1}(⊥) = 0 trivially, so nothing needs to be done. For earlier layers h ∈ [H], MULTI-TASK-EULER iteratively builds its value function estimates in a backward fashion. At the time of estimating values for layer h, the algorithm has already obtained optimal value estimates for layer h + 1. Based on the Bellman optimality equation (1), MULTI-TASK-EULER estimates (Q⋆_p(s, a))_{s ∈ S_h, a ∈ A} using the model parameter estimates and its upper and lower bound estimates of (V⋆_p(s))_{s ∈ S_{h+1}}, i.e., (V̄_p(s))_{s ∈ S_{h+1}} and (V̲_p(s))_{s ∈ S_{h+1}} (lines 5 to 12).
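A minimal sketch of the two model estimators described above, for a single (s, a) pair, assuming each player's observed rewards and next states at that pair are stored in per-player lists; the variable names and the guarding of empty counts are implementation conveniences for this example, not part of the paper's algorithm.

```python
import numpy as np

def model_estimates(rewards_per_player, next_states_per_player, num_next_states):
    """Individual and aggregate empirical estimates of the reward mean and transition
    probabilities at one (s, a) pair.

    rewards_per_player[p]     : list of rewards player p observed at (s, a).
    next_states_per_player[p] : list of next-state indices player p observed at (s, a).
    """
    M = len(rewards_per_player)
    ind_R, ind_P = [], []
    for p in range(M):
        n_p = len(rewards_per_player[p])
        ind_R.append(np.mean(rewards_per_player[p]) if n_p > 0 else 0.0)
        counts = np.bincount(np.array(next_states_per_player[p], dtype=int),
                             minlength=num_next_states)
        ind_P.append(counts / max(n_p, 1))
    # Aggregate estimates pool every player's samples: lower variance, possibly biased.
    all_rewards = [r for rs in rewards_per_player for r in rs]
    all_next = [s for ss in next_states_per_player for s in ss]
    agg_R = np.mean(all_rewards) if all_rewards else 0.0
    agg_counts = np.bincount(np.array(all_next, dtype=int), minlength=num_next_states)
    agg_P = agg_counts / max(len(all_next), 1)
    return ind_R, ind_P, agg_R, agg_P
```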
Specifically, MULTI-TASK-EULER constructs estimates of Q⋆_p(s, a) for all s ∈ S_h, a ∈ A in two different ways. First, it uses the individual model estimates of player p to construct ind-Q̄_p and ind-Q̲_p, upper and lower bound estimates of Q⋆_p (lines 8 and 9); this construction is reminiscent of EULER and STRONG-EULER [46, 36], in that if we were to use only ind-Q̄_p and ind-Q̲_p as our optimal action-value function estimates Q̄_p and Q̲_p, our algorithm would become individual STRONG-EULER. The individual value function estimates are key to establishing MULTI-TASK-EULER's fall-back guarantees, ensuring that it never performs worse than the individual STRONG-EULER baseline. Second, it uses the aggregate model estimates to construct agg-Q̄_p and agg-Q̲_p, also upper and lower bound estimates of Q⋆_p (lines 6 and 7); this construction is unique to the multi-task learning setting, and is our new algorithmic contribution.
To ensure that agg-Q̄_p and ind-Q̄_p (resp. agg-Q̲_p and ind-Q̲_p) are valid upper bounds (resp. lower bounds) of Q⋆_p, MULTI-TASK-EULER adds bonus terms ind-b_p(s, a) and agg-b_p(s, a), respectively, in the optimistic value iteration process, to account for the estimation error of the model estimates against the true models. Specifically, both bonus terms comprise three parts: a reward-uncertainty term, a term for the uncertainty in estimating the expected next-layer value, and a lower-order term (detailed below), where L(n) ≍ ln(MSAn/δ).
The bonus terms altogether ensure strong optimism [36], i.e., for any p and (s, a), Q̄_p(s, a) ≥ R_p(s, a) + (P_p V̄_p)(s, a).   (6)
In short, strong optimism is a stronger form of optimism (the weaker requirement being that for any p and (s, a), Q̄_p(s, a) ≥ Q⋆_p(s, a) and V̄_p(s) ≥ V⋆_p(s)), which allows us to use the clipping lemma (Lemma B.6 of [36]; see also Lemma 20 in Appendix C.4) to obtain sharp gap-dependent regret guarantees. The three parts in the bonus term serve different purposes towards establishing (6):

1. The first component accounts for the uncertainty in reward estimation: with probability 1 − O(δ), |R̂_p(s, a) − R_p(s, a)| ≤ b^rw(n_p(s, a), 0), and |R̂(s, a) − R_p(s, a)| ≤ b^rw(n(s, a), ǫ).
2. The second component accounts for the uncertainty in estimating (P_p V⋆_p)(s, a), with an analogous high-probability bound stated in terms of V̄_p, V̲_p, the visitation counts, and ǫ. 3. The third component accounts for the lower-order terms needed for strong optimism, again with a high-probability bound. Based on the above concentration inequalities and the definitions of the bonus terms, it can be shown inductively that, with probability 1 − O(δ), both agg-Q̄_p and ind-Q̄_p (resp. agg-Q̲_p and ind-Q̲_p) are valid upper bounds (resp. lower bounds) of Q⋆_p. Finally, observe that for any (s, a) ∈ S_h × A, Q⋆_p(s, a) has range [0, H − h + 1]. By taking intersections of all the confidence bounds of Q⋆_p it has obtained, MULTI-TASK-EULER constructs its final upper and lower bound estimates Q̄_p(s, a) and Q̲_p(s, a) for Q⋆_p(s, a), for (s, a) ∈ S_h × A (lines 11 to 12). Similar ideas of using data from multiple sources to construct confidence intervals and guide exploration have been used by [37, 43] for multi-task non-contextual and contextual bandits. Using the relationship between the optimal value V⋆_p(s) and the optimal action values {Q⋆_p(s, a) : a ∈ A}, MULTI-TASK-EULER also constructs upper and lower bound estimates V̄_p(s) and V̲_p(s) for V⋆_p(s), for s ∈ S_h (line 15).
Executing optimistic policies. At each episode k, for each player p, its optimal action-value function upper bound estimate Q̄_p induces a greedy policy π_k(p) : s ↦ argmax_{a ∈ A} Q̄_p(s, a) (line 14); the player then executes this policy in this episode to collect a new trajectory and uses it to update its individual model parameter estimates. After all players finish episode k, the algorithm also updates its aggregate model parameter estimates (lines 16 to 19) using Equations (3), (4) and (5), and continues to the next episode.
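The following sketch compresses the per-episode logic just described into code for intuition: backward optimistic value iteration with a generic bonus, followed by greedy action selection against the upper bound. It omits the paper's specific ind-/agg- bonus constructions and the intersection of the two families of confidence bounds, so it is an illustrative caricature of Algorithm 1 rather than a faithful implementation; `bonus` is a placeholder function standing in for ind-b_p or agg-b_p.

```python
import numpy as np

def optimistic_value_iteration(P_hat, R_hat, bonus, H):
    """Backward optimistic value iteration for one player.

    P_hat[h] : (S_h, A, S_{h+1}) individual or aggregate transition estimates.
    R_hat[h] : (S_h, A) reward estimates.
    bonus(h) : (S_h, A) array of exploration bonuses (placeholder for ind-b / agg-b).
    Returns per-layer upper-bound estimates (Q_up, V_up) of Q* and V*.
    """
    V_up = [None] * (H + 2)
    Q_up = [None] * (H + 2)
    V_up[H + 1] = np.zeros(1)                               # V(⊥) = 0
    for h in range(H, 0, -1):
        raw = R_hat[h] + P_hat[h] @ V_up[h + 1] + bonus(h)  # optimistic Bellman backup
        Q_up[h] = np.clip(raw, 0.0, H - h + 1)              # Q*(s,a) lies in [0, H - h + 1]
        V_up[h] = Q_up[h].max(axis=1)
    return Q_up, V_up

def greedy_policy(Q_up, H):
    """Greedy policy with respect to the optimistic upper bound Q_up."""
    return {h: Q_up[h].argmax(axis=1) for h in range(1, H + 1)}
```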
The lemma follows directly from Lemma 2; its proof can be found in the Appendix along with the proofs of the following theorems. Item 1 implies that any subpar state-action pair is suboptimal for all players. In other words, for every player p, the state-action space S × A can be partitioned into three disjoint sets: I_ǫ, Z_{p,opt}, and (I_ǫ ∪ Z_{p,opt})^C. Item 2 implies that for any subpar (s, a), its suboptimality gaps with respect to all players are within a constant of each other.
Upper bounds
With the above definitions, we are now ready to present the performance guarantees of Algorithm 1. We first present a gap-independent collective regret bound of MULTI-TASK-EULER.
Theorem 5 (Gap-independent bound). If M p M p=1 are ǫ-dissimilar, then MULTI-TASK-EULER satisfies that with probability 1 − δ, We again compare this regret upper bound with individual STRONG-EULER's gap independent regret bound. Recall that individual STRONG-EULER guarantees that with probability 1 − δ, We focus on the comparison on the leading terms, i.e., the We next present a gap-dependent upper bound on its collective regret.
that holds with probability 1 − δ. We again focus on comparing the leading terms, i.e., the terms that have polynomial dependences on the suboptimality gaps in the above two bounds. It can be seen that an improvement in the regret bound by MULTI-TASK-EULER comes from the contributions from the subpar state-action pairs: for each (s, a) ∈ I ǫ , the regret bound is reduced from p∈[M] [44] has shown that in the single-task setting, it is possible to replace (s,a)∈Zp,opt with a sharper problem-dependent complexity term that depends on the multiplicity of optimal state-action pairs. We leave improving the guarantee of Theorem 6 in a similar manner as an interesting open problem.
Key to the proofs of Theorems 5 and 6 is a new bound on the surplus [36] of the value function estimates. Our new surplus bound is a minimum of two terms: one depends on the usual state-action visitation counts of player p, and the other depends on the task dissimilarity parameter ǫ and the state-action visitation counts of all players. Detailed proofs can be found in Appendix C.
Lower bounds
To complement the above upper bounds, we now present gap-dependent and gap-independent regret lower bounds that also depend on our subpar state-action pair notion. Our lower bounds are inspired by regret lower bounds for episodic RL [36,8] and multi-task bandits [43].
Theorem 7 (Gap-independent lower bound). For any
and l, l^C ∈ N with l + l^C = SA and l ≤ SA − 4(S + HA), there exists some ǫ that satisfies the following: for any algorithm Alg, there exists an ǫ-MPERL problem instance with S states, A actions, M players and an episode length of H such that |I_{ǫ/(192H)}| ≥ l, and the players' collective regret is lower bounded accordingly. We also present a gap-dependent lower bound. Before that, we first formally define the notion of sublinear regret algorithms: for any fixed ǫ, we say that an algorithm Alg is a sublinear regret algorithm for the ǫ-MPERL problem if there exists some C > 0 (that possibly depends on the state-action space, the number of players, and ǫ) and α < 1 such that for all K and all ǫ-MPERL environments, E[Reg_Alg(K)] ≤ CK^α.
for this problem instance, any sublinear regret algorithm Alg for the ǫ-MPERL problem must satisfy a corresponding gap-dependent lower bound. Comparing the lower bounds with MULTI-TASK-EULER's regret upper bounds in Theorems 5 and 6, we see that the upper and lower bounds nearly match for any constant H. When H is large, a key difference between the upper and lower bounds is that the former are in terms of I_ǫ, while the latter are in terms of I_{Θ(ǫ/H)}. We conjecture that our upper bounds can be improved by replacing I_ǫ with I_{Θ(ǫ/H)}; our analysis uses a clipping trick similar to [36], which may be the reason for a suboptimal dependence on H. We leave closing this gap as an open question.
Related Work
Regret minimization for MDPs. Our work belongs to the literature on regret minimization for MDPs, e.g., [5,18,8,3,9,19,10,46,36,49,45,44]. In the episodic setting, [3,10,46,36,49] achieve minimax √(H²SAK) regret bounds for general stationary MDPs. Furthermore, the EULER algorithm [46] achieves adaptive problem-dependent regret guarantees when the total reward within an episode is small or when the environmental norm of the MDP is small. [36] refines EULER, proposing STRONG-EULER, which provides more fine-grained gap-dependent O(log K) regret guarantees. [45,44] show that the optimistic Q-learning algorithm [19] and its variants can also achieve gap-dependent logarithmic regret guarantees. Remarkably, [44] achieves a regret bound that improves over that of [36], in that it replaces the dependence on the number of optimal state-action pairs with the number of non-unique state-action pairs.
Transfer and lifelong learning for RL. A considerable portion of related works concerns transfer learning for RL tasks (see [40,24,50] for surveys from different angles), and many studies investigate a batch setting: given some source tasks and target tasks, transfer learning agents have access to batch data collected for the source tasks (and sometimes for the target tasks as well). In this setting, model-based approaches have been explored in e.g., [39]; theoretical guarantees for transfer of samples across tasks have been established in e.g., [25,41]. Similarly, sequential transfer has been studied under the framework of lifelong RL in e.g., [38,1,15,22]-in this setting, an agent faces a sequence of RL tasks and aims to take advantage of knowledge gained from previous tasks for better performance in future tasks; in particular, analyses on the sample complexity of transfer learning algorithms are presented in [6,29] under the assumption that an upper bound on the total number of unique (and well-separated) RL tasks is known. We note that, in contrast, we study an online setting in which no prior data are available and multiple RL tasks are learned concurrently by RL agents.
Concurrent RL. Data sharing between multiple RL agents that learn concurrently has also been investigated in the literature. For example, in [20, 35, 16, 12], a group of agents interact in parallel with identical environments. Another setting is studied in [16], in which agents solve different RL tasks (MDPs); however, similar to [6,29], it is assumed that there is a finite number of unique tasks, and different tasks are well-separated, i.e., there is a minimum gap. In this work, we assume that players face similar but not necessarily identical MDPs, and we do not assume a minimum gap.
[17] study multi-task RL with linear function approximation with representation transfer, where it is assumed that the optimal value functions of all tasks are from a low dimensional linear subspace. Our setting and results are most similar to [32] and [13].
[32] study concurrent exploration in similar MDPs with continuous states in the PAC setting; however, their PAC guarantee does not hold for target error rate arbitrarily close to zero; in contrast, our algorithm has a fall-back guarantee, in that it always has a sublinear regret. Concurrent RL from similar linear MDPs has also been recently studied in [13]: under the assumption of small heterogeneity between different MDPs (a setting very similar to ours), the provided regret guarantee involves a term that is linear in the number of episodes, whereas our algorithm in this paper always has a sublinear regret; concurrent RL under the assumption of large heterogeneity is also studied in that work, but additional contextual information is assumed to be available for the players to ensure a sublinear regret.
Other related topics and models. In many multi-agent RL models [47,31], a set of learning agents interact with a common environment and have shared global states; in particular, [48] study the setting with heterogeneous reward distributions, and provide convergence guarantees for two policy gradient-based algorithms. In contrast, in our setting, our learning agents interact with separate environments. Multi-agent bandits with similar, heterogeneous reward distributions are investigated in [37,43]; herein, we generalize their multi-task bandit problem setting to the episodic MDP setting.
Conclusion and Future Directions
In this paper, we generalize the multi-task bandit learning framework in [43] and formulate a multitask concurrent RL problem, in which tasks are similar but not necessarily identical. We provide a provably efficient model-based algorithm that takes advantage of knowledge transfer between different tasks. Our instance-dependent regret upper and lower bounds formalize the intuition that subpar state-action pairs are amenable to information sharing among tasks.
There still remain gaps between our upper and lower bounds which can be closed by either a finer analysis or a better algorithm: first, the dependence on I_ǫ in the upper bound does not match the dependence on I_{Θ(ǫ/H)} in the lower bound when H is large; second, the gap-dependent upper bound has O(H³) dependence, whereas the gap-dependent lower bound only has Ω(H²) dependence; third, the additive dependence on the number of optimal state-action pairs can potentially be removed by new algorithmic ideas [44].
Furthermore, one major obstacle in deploying our algorithm in practice is its requirement for knowledge of ǫ; an interesting avenue is to apply model selection strategies in bandits and RL to achieve adaptivity to unknown ǫ. Another interesting future direction is to consider more general parameter transfer for online RL, for example, in the context of function approximation.
[15] Francisco M Garcia and Philip S Thomas. A meta-mdp approach to exploration for lifelong reinforcement learning. arXiv preprint arXiv:1902.00843, 2019.
[16] Zhaohan Guo and Emma Brunskill. Concurrent pac rl. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29, 2015.
Proof. For the first claim, we prove a stronger statement by backward induction on h, namely, for Base case: We first prove the following auxiliary statement: for every s ∈ S h+1 and p, q ∈ [M ], Let a p = argmax a∈A Q ⋆ p (s, a) and a q = argmax a∈A Q ⋆ q (s, a). The above auxiliary statement can be easily proven by contradiction: without loss of generality, suppose that , which contradicts the fact that a q = argmax a∈A Q ⋆ q (s, a). We now return to the inductive proof, and we show that given the inductive hypothesis, for every (s, a) ∈ S h × A and p, q ∈ [M ], where the first inequality follows from Eq. (1) and the triangle inequality; the second inequality follows from Definition 1 and the triangle inequality; the third inequality follows from Hölder's inequality; and the fourth inequality uses Definition 1 and Eq. (7).
For the second claim, we note that from the first claim, we have for any p, q, s, therefore, for any p, q, s, a, Proof. For any (s, a) ∈ I ǫ , there exists some p 0 such that gap p0 (s, a) > 96Hǫ. From Lemma 2 we know that gap p (s, a) − gap p0 (s, a) ≤ 4Hǫ. Therefore, for all p, This proves the first item.
For the second item, for all p, q ∈ [M ],
B Additional Definitions Used in the Proofs
In this section, we define a few useful notations that will be used in our proofs. For state-action pair (s, a) ∈ S × A, player p ∈ [M ], episode k ∈ [K]: 1. Define n k (s, a) (resp. n k p (s, a),P k ,P k p ,R k ,R k p ) to be the value of n(s, a) (resp. n p (s, a), P,P p ,R,R p ) at the beginning of episode k of MULTI-TASK-EULER. a)) right after MULTI-TASK-EULER finishes its optimistic value iteration (line 15) at episode k. 3. Define the surplus [36] (also known as the Bellman error) of (s, a) at episode k and player p as: ; recall the definitions of gap p (s, a) and gap p,min in Section 2. We also adopt the following conventions in our proofs:
Denote by
1. As ǫ-dissimilarity with ǫ > 2H does not impose any constraints on M p M p=1 (recall Definition 1), throughout the proof, we only focus on the regime that ǫ ≤ 2H.
2. We will use π k (p) and π k p interchangeably. To avoid notational clutter, we will also sometimes slightly abuse notation and use , respectively.
C Proof of the Upper Bounds
Proof outline. This section establishes the regret guarantees of MULTI-TASK-EULER (Theorems 5 and 6). The proof follows a similar outline as STRONG-EULER's analysis [36], with important modifications tailored to the multitask setting. The proof has the following structure: 1. Subsection C.1 defines a "clean" event E that we show happens with probability 1 − δ.
When E happens, the observed samples are representative enough so that standard concentration inequalities apply. This will serve as the basis of our subsequent arguments.
2. Subsection C.2 shows that when E happens, the value function upper and lower bounds are valid; furthermore, MULTI-TASK-EULER satisfies strong optimism [36], in that all players' surpluses are always nonnegative for all state-action pairs at all time steps.
3. Subsection C.3 establishes a distribution-dependent upper bound on MULTI-TASK-EULER's surpluses when E happens, which is key to our regret theorems. In comparison with STRONG-EULER [36] in the single task setting, MULTI-TASK-EULER exploits inter-task similarity, so that its surpluses on state-action pair (s, a) for player p are further controlled by a new term that depends on the dissimilarity parameter ǫ, along with n k (s, a), the total visitation counts of (s, a) by all players.
4. Subsection C.4 uses the strong optimism property and the surplus bounds established in the previous two subsections to conclude our final gap-independent and gap-dependent regret guarantees, via the clipping lemma of [36] (see also Lemma 20).
5.
Finally, Subsection C.5 collects miscellaneous technical lemmas used in the proofs.
C.1 A clean event
Below we define a "clean" event E in which all concentration bounds used in the analysis hold, which we will show happens with high probability. Specifically, we will define E = E ind ∩ E agg ∩ E sample , where E ind , E agg , E sample are defined respectively below.
In subsequent definitions of events, we will abbreviate ∀k ∈ .
Define event E ind as: where in Equation (12), (s p i ) ′ denotes the next state player p transitions to, for the i-th episode it experiences (s, a). E ind captures the concentration behavior of each player's individual model estimates.
Proof. The proof follows a similar reasoning as the proof of e.g., [36, Proposition F.9] using Freedman's Inequality. We would like to show that each of E ind,rw , E ind,val , E ind,prob , E ind,var happens with probability 1 − δ 12 , which would give the lemma statement by a union bound. For brevity, we only show that P(E ind,var ) ≥ 1 − δ 12 , and the other probability statements follow from a similar reasoning.
Fix h ∈ [H], (s, a) ∈ S h × A, and p ∈ [M ]. We will show For every j ∈ N + , define stopping time k j as the j-th episode when (s, a) is experienced by player p, if such episode exists; otherwise, k j is defined as ∞. it suffices to show that For every k ∈ N + , Define F k−1 as the σ-field generated by all players' observations up to episode k − 1, along with all players' observations at episode k up to them taking action at step h. Define it can be seen that X k is F k -measurable, and Note that X k /H 2 ≤ 1; by [14, Corollary 1.4] applied to X k /H 2 ∞ k=1 , for any λ ≥ 0, is a nonnegative supermartingale. Applying optional sampling theorem on Y k (λ) and stopping time k j , we get E Y kj (λ)I(k j < ∞) ≤ E Y 0 (λ) = 1. As a result, for any fixed thresholds b, v ≥ 0 [?, see]Theorem 1.6]freedman1975tail, , and the above inequality can be rewritten as: for any b, v ≥ 0, A union bound over all j ∈ N + yields Equation (14).
Define event E agg as: where in Equation (20), s ′ i and p i denote the next state and the player index for the i-th time some player experiences (s, a), respectively, where within an episode, we order the experiences of the players by their indices from 1 to M . E agg captures the concentration behavior of the aggregate model estimates.
Proof. The proof follows a similar reasoning as the proof of e.g., [36, Proposition F.9] using Freedman's Inequality. We would like to show that each of E agg,rw , E agg,val , E agg,prob , E agg,var happen with probability 1 − δ 12 , which would give the lemma statement by a union bound. For brevity, we show that P(E agg,var ) ≥ 1 − δ 12 , and the other probability statements follow from a similar reasoning.
Fix h ∈ [H], (s, a) ∈ S h × A and p ∈ [M ]. It suffices to show that
var Pp i (·|s,a) [V ⋆ p ] L(n k (s, a)) (n k (s, a)) 2 + 2H 2 L(n k (s, a)) n k (s, a) For micro-episode (k, p), denote its index as l = (k − 1)M + p; it can be easily seen that the ordering of micro-episodes' indices is consistent with their lexical ordering. For every j ∈ N + , define stopping time l j ∈ N + as follows: it is the index of the j-th micro-episode when (s, a) is experienced by some player, if such micro-episode exists; and l j is defined to be ∞ otherwise. With this notation, it suffices to show: For every l ∈ N + , Define F l−1 as the σ-field generated by all players' observations up to microepisode l − 1, along with micro-episode l's corresponding player (player index ((l − 1) mod M ) + 1)'s observations up to them taking action at step h. Define , where in the above expression, to avoid notation clutter, we use k and p to denote microepisode l's episode number and corresponding player number k(l) = ⌈l/M ⌉ and p(l) = ((l − 1) mod M ) + 1, respectively.
It can be seen that X l is F l -measurable, and E X l | F is a nonnegative supermartingale. Also, note that if l j < ∞, Using the same reasoning as in the proof of Lemma 9 (and observing that lj l=1 U l ∈ [0, H 4 j]), we have that for all j ∈ N + : A union bound over all j ∈ N + implies that Equation (22) (s, a) . Also, define G k as the σ-algebra generated by all observations up to episode k. It can be readily seen that {X k } K k=1 is a martingale difference sequence adapted to filtration {G k } K k=0 . Freedman's inequality (specifically, Lemma 2 of [4]) implies that for every fixed k, with probability 1 − δ 6K , If Equation (23) happens, then by AM-GM inequality that Additionally, asn k−1 (s, a) ≥n k (s, a) − M always holds, we have In summary, for any fixed k, with probability 1 − δ 6K , ifn k (s, a) ≥ N 1 := 84M ln 6SAK 2 δ , n k (s, a) ≥ 1 2n k (s, a).
Taking a union bound over all k ∈ [K], we have P(E agg,sample ) ≥ 1 − δ 6 . It follows similarly that P(E ind,sample ) ≥ 1 − δ 6 ; the only difference in the proof is that, we need to take an extra union bound over all p ∈ [M ] -hence an additional factor of M within ln(·) in the definition of N 2 . The lemma statement follows from a union bound over these two statements.
Proof. Follows from Lemmas 9, 10, and 11, along with a union bound.
C.2 Validity of value function bounds
In this section, we show that if the clean event E happens, then for all k and p, the value function estimates Q k p , Q k p , V k p , V k p are valid upper and lower bounds of the optimal value functions Q ⋆ p , V ⋆ p (Lemma 15). As a by-product, we also give a general bound on the surplus (Lemma 14) which will be refined and used in the subsequent regret bound calculations. Before going into the proof of the above two lemmas, we need a technical lemma below (Lemma 13) that gives necessary concentration results which motivate the bonus constructions; its proof can be found in Section C.2.1.
Then, for all (s, a) ∈ S h × A: 2.
Lemma 14. If event E happens, and suppose that for episode k and step h, we have that for all and Proof. We only show Equation (30) for brevity; Equation (31) follows from an exact symmetrical reasoning.
Recall that Q a)].
• For agg-Q k p (s, a), using Lemma 13 and our assumptions on V k p and V k p over S h+1 , we have: • For H − h + 1, we have: Combining the above three establishes that a) . and here, recall that V π k p is the value function of policy π k (p) with respect to M p defined in Section 2.
Proof. The proof of this lemma extends [36, Proposition F.1] to our multitask setting.
For every k and p, we show the above holds for all layers h ∈ [H] and every (s, a) ∈ S h × A; to this end, we do backward induction on layer h.
Base case: For layer
Inductive case: By our inductive hypothesis, for layer h + 1 and every s ∈ S h+1 , We will show that Equations (32) and (33) holds holds for all (s, a) ∈ S h × A.
We first show Equation (32). First, Q π k p (s, a) ≤ Q ⋆ p (s, a) for all (s, a) ∈ S h × A is trivial.
To show Q ⋆ p (s, a) ≤ Q k p (s, a) for all (s, a) ∈ S h × A, by Lemma 14 and inductive hypothesis, we have: Likewise, we show Q π k p (s, a) ≥ Q k p (s, a) for all (s, a) ∈ S h × A, using Lemma 14 and inductive hypothesis:
This completes the proof of Equation (32) for layer h.
We now show Equation (33) for layer h. Again V π k p (s) ≤ V ⋆ p (s) for all s ∈ S h is trivial.
To show V π k p (s) ≥ V k p (s) for all s ∈ S h , observe that V π k p (s) = Q π k p (s, π k (p)(s)) ≥ Q k p (s, π k (p)(s)) = V k p (s).
This completes the induction.
C.2.1 Proof of Lemma 13
Proof of Lemma 13. Equations (24), (26), and (28) essentially follow the same reasoning as in [36]; we still include their proofs for completeness. Equations (25), (27), and (29) are new, and require a more involved analysis. Our proof also relies on a technical lemma, namely Lemma 16; we defer its statement and proof to the end of this subsection. (24) follows directly from the definition of E ind,rw . Equation (25) follows from the definition of E agg,rw , and the fact that R k (s, a) − R p (s, a) ≤ ǫ.
Equation
2. We prove Equation (26) as follows: where the first inequality is from the definition of E ind,val ; the second inequality is from Equation (34) of Lemma 16; the third inequality is from Lemma 24; the fourth inequality is from our assumption that for all for all s ′ in the support ofP k p (· | s, a).
We prove Equation (27) as follows: s, a)) n k (s, a) + HL(n k (s, a)) n k (s, a) s, a)) n k (s, a) + L(n k (s, a)) n k (s, a) · ǫH + HL(n k (s, a)) n k (s, a) + HL(n k (s, a)) n k (s, a) s, a)) n k (s, a) + HL(n k (s, a)) n k (s, a) where the first inequality is from the observation that P k (· | s, a) − P p (· | s, a) 1 ≤ ǫ H and Lemma 25; the second inequality is from the definition of E agg,val ; the third inequality is from Equation (35) of Lemma 16; the fourth inequality is from Lemma 24 and the observation that for constant c > 0, c L(n k (s,a)) n k (s,a) · ǫH ≤ ǫ + c 2 4 L(n k (s,a)) n k (s,a) by AM-GM inequality; the fifth inequality is from our assumption that for all s ′ ∈ S h+1 , for all s ′ in the support ofP k (· | s, a).
3. We prove Equation (28) as follows: where the first inequality is from the elementary fact that n i=1 a i ≤ n i=1 |a i |; the second inequality is from the definition of E ind,prob ; the third inequality is from the definition of E ind,prob and Lemma 26; the fourth inequality is by algebra and 0 ≤ ( ) for all s ′ ∈ S h+1 ; the fifth inequality is by Cauchy-Schwarz. We now prove Equation (29): ≤b str P k (· | s, a), n(s, a), V k p , V k p , ǫ , where the first inequality is triangle inequality; the second inequality is from the elementary fact that n i=1 a i ≤ n i=1 |a i |, along with P k (· | s, a) − P p (· | s, a) 1 ≤ ǫ H and Lemma 25; the third inequality is from the definition of E agg,prob ; the fourth inequality is from the definition of E agg,prob and Lemma 26; the fifth inequality is by algebra and ) for all s ′ ∈ S h+1 ; the last inequality is by Cauchy-Schwarz.
Lemma 13 relies on the following technical lemma on the concentrations of the conditional variances. Specifically, Equation (34) is well-known (see, e.g., [2,30]); Equations (35) and (36) are new, and allow for heterogeneous data aggregation in the multi-task RL setting. We still include the proof of Equation (34) here, as it helps illustrate our ideas for proving the two new inequalities. Lemma 16. If event E happens, then for any s, a, k, p, we have: Proof. 1. By the definition of E, we have H 2 var Pp(·|s,a) [V ⋆ p ]L(n k p (s, a)) n k p (s, a) + H 2 L(n k p (s, a)) n k p (s, a) ; this, when combined with Lemma 26, implies that .
Now, observe that which can be seen by applying Lemma 23 with X being the random variable that is drawn uniformly , which has expectation µ = (P k p V ⋆ p )(s, a), and setting m = (P p V ⋆ p )(s, a).
Recall that by the definition of event E, we have where the second inequality uses Lemma 27. Using the elementary fact that|A Combining Equations (37) and (38), using algebra, we get 2. We first show Equation (35). By the definition of E, we have For the first term on the left hand side, observe that for each i, By averaging over all i's and taking square root, we have which can be seen by applying Lemma 23 with X being the random variable that is drawn uniformly , which has expectation µ = (P k V ⋆ p )(s, a), and setting m = (P p V ⋆ p )(s, a).
C.3 Simplifying the surplus bounds
In this section, we show a distribution-dependent bound on the surplus terms, namely Lemma 19, which is key to establishing our regret bound. It can be seen as an extension of Proposition B.4 of [36] to our multitask setting using the MULTI-TASK-EULER algorithm, under the ǫ-dissimilarity assumption. Before we present Lemma 19 (Section C.3.1), we first show and prove two auxiliary lemmas, Lemma 17 and Lemma 18. consequently, where the first inequality is from Equations (30) and (31) for (s, a) and player p at episode k, and the second inequality is from the inductive hypothesis; the third inequality is by algebra. This completes the induction.
We now show Equation (43). By the definition of ind-b k p (s, a) and algebra, ind-b k p (s, a) H SL(n k p (s t , a t )) n k p (s t , a t ) where the second inequality uses varP k p (·|s,a) V As a consequence, using Lemma 27, Lemma 18. If E happens, we have the following statements holding for all p, k, s, a: 1. For two terms that appear in ind-b k p (s, a), they are bounded respectively as: Pp(·|s,a) + H 2 SL(n k p (s, a)) n k p (s, a) (44) varP k p (·|s,a) V k p L(n k p (s, a)) n k p (s, a) var Pp(·|s,a) V π k p L(n k p (s, a)) n k p (s, a) Pp(·|s,a) L(n k p (s, a)) n k p (s, a) 2. For two terms that appear in agg-b k p (s, a), they are bounded respectively as: Pp(·|s,a) + This implies that Pp(·|s,a) + SH 2 L(n k p (s, a)) n k p (s, a) , where the first inequality is from Equation (48), and the fact that V var Pp(·|s,a) V ⋆ p L(n k p (s, a)) n k p (s, a) Pp(·|s,a) L(n k p (s, a)) n k p (s, a) var Pp(·|s,a) V π k p L(n k p (s, a)) n k p (s, a) Pp(·|s,a) L(n k p (s, a)) n k p (s, a) where the first inequality is from Lemma 24 and the observation that when E happens, for all s ′ ∈ S h+1 ; the second inequality is from Equation (34) of Lemma 16 and Equation (44); the third inequality again uses Lemma 24 and the observation that 2. For Equation (46), using the definition of E agg,prob and AM-GM inequality, when E happens, we have for all p, k, s, a, s ′ ,P k (s ′ | s, a) P k (s ′ | s, a) + L(n k (s, a)) n k (s, a) .
This implies that Pp(·|s,a) + SH 2 L(n k p (s, a)) n k p (s, a) where the first inequality is from Equation (49) and the fact that V k p (s ′ ) − V k p (s ′ ) ∈ [0, H] for any s ′ ∈ S h+1 ; the second inequality is from the observation that P p (· | s, a) −P k (· | s, a) 1 ≤ ǫ H ; the third inequality is by algebra. var Pp(·|s,a) V ⋆ p L(n k p (s, a)) n k p (s, a) Pp(·|s,a) L(n k p (s, a)) n k p (s, a) + √ SHL(n k p (s, a)) n k p (s, a) + HǫL(n k p (s, a)) n k p (s, a) var Pp(·|s,a) V π k p L(n k p (s, a)) n k p (s, a) Pp(·|s,a) L(n k p (s, a)) n k p (s, a) where the first inequality is from Lemma 24 and the observation that when E happens, ; the second inequality uses Equation (36) of Lemma 16 and Equation (46); the third inequality is from Lemma 24 and the observation that when (s, a) . We now bound ind-b k p (s, a) and agg-b k p (s, a) respectively.
Combining the above upper bounds, and using the observation that L(n k (s,a)) all k ∈ [K], h ∈ [H] and (s, a) ∈ S h × A, the surplus of Q We bound each factor as follows: for the first factor, In conclusion, by the regret decomposition Equation (51), and Equations (54) For the (s, a)-th term in term (A), we will consider the cases of (s, a) ∈ I ǫ and (s, a) / ∈ I ǫ separately.
Case 1: (s, a) ∈ I ǫ . In this case, we have that for all p,ǧ ap p (s, a) = gap p (s,a) 4H ≥ 24ǫ. We simplify the corresponding term as follows: We now decompose the inner sum over k, In summary, combining the regret bounds of cases 1 and 2 for term (A), along with Equation (55) for
C.5 Miscellaneous lemmas
This subsection collects a few miscellaneous lemmas used throughout the upper bound proofs.
If g(u)
C log N u ξ u for some C > 0 such that ln C ln N , then n Γ f (u/4)du Cn ln N n ξ ∧ C ∆ ln N n ξ .
If g(u)
C ln N u ξ u for some C > 0 such that ln C ln N , then where the first equality is from the definition of collective regret; the first inequality is from the Construction. For a = (a 1 , . . . , a S1 ) ∈ [b + 1] S1 , we define the following ǫ-MPERL problem instance, M(a) = M p M p=1 , with S states, A actions, and an episode length of H, such that for each p ∈ [M ], M p is constructed as follows: • S 1 = [S 1 ], and p 0 is a uniform distribution over the states in S 1 .
• For each (s, a) ∈ S × A, the reward distribution r p (s, a) is a Bernoulli distribution, Ber(R p (s, a)), and we will specify R p (s, a) subsequently.
• For each state s ∈ [S 1 ], where we recall that a = (a 1 , . . . , a S1 ); furthermore, it suffices to show that, for any s ′ ∈ [S 1 ], where N K+1 (s ′ ) = a∈A n K+1 (s ′ , a); this is because it follows from Eq. (61) that where the first inequality uses Lemma 30 (the regret decomposition lemma).
Using a similar reasoning as before, and recall that∆ p s0,a0 ≤ 1 24 , we can show that and consequently, as long as K ≥ 20S 1 , Similar to the proof of Claim 1, we have the following argument.
It follows that E M n K p0 (s 0 , a 0 ) ≥
Close to Home: A History of Yale and Lyme Disease
Yale scientists played a pivotal role in the discovery of Lyme disease and are credited as the first to recognize, name, characterize, and treat the affliction. Today, Lyme disease is the most commonly reported vector-borne illness in the United States, affecting approximately 20,000 people each year, with the incidence having doubled in the past 10 years [1]. Lyme disease is the result of a bacterial infection transmitted to humans through the bite of an infected deer tick, which typically results in a skin rash at the site of attack. While most cases, when caught early, are easily treated by antibiotic therapy, delayed treatment can lead to serious systemic side effects involving the joints, heart, and central nervous system. Here we review Yale’s role in the discovery and initial characterization of Lyme disease and how those early discoveries are crucial to our current understanding of the disease.
The Yale Team
In the early fall of 1975, two mothers from Old Lyme, Connecticut, desperately sought medical help regarding the mysterious outbreak of arthritis and juvenile arthritis in their families and town. In the face of unexplainable symptoms and unsatisfying diagnoses, they reached out to the Connecticut State Department of Health and the Yale School of Medicine, sparking an investigation that would culminate in the characterization of what is now widely known as Lyme disease [2].
The initial studies carried out in Lyme, Connecticut, and two surrounding towns on the eastern bank of the Connecticut River in New London County were led by Allan C. Steere, MD, and Stephen E. Malawista, MD, from the Rheumatology section of the Yale School of Medicine, in conjunction with David R. Snydman, MD, and Francis M. Steele, PhD, from the Connecticut State Department of Health, among others. Dr. Steere, the first author of the study, was a first-year fellow in rheumatology at the time. Dr. Malawista, then Head of the Rheumatology Section at Yale, continues to pioneer Lyme disease research at Yale.
The Investigation
In December 1975, Steere and Malawista led a surveillance study [3] to investigate the cause of a sudden outbreak of rheumatoid arthritis in and around Lyme. The study focused on the three contiguous towns of Old Lyme, Lyme, and East Haddam, where 51 residents were diagnosed with juvenile arthritis or arthritis of unknown cause (39 children and 12 adults) out of a total population of 12,000. The investigation consisted of thorough physical examinations and blood work of each patient on site at Yale. Additionally, detailed patient histories were collected through interviews with each patient's local physician and family members.
While the early physical examinations and laboratory tests revealed nothing out of the ordinary, the interview aspect was surprisingly informative. Approximately 25 percent of the patients in the study reported a skin lesion with an expanding bull's-eye pattern four or more weeks preceding the onset of arthritic symptoms. The authors found this to be particularly intriguing, as the lesion matched the description of erythema chronicum migrans (ECM), or erythema migrans (EM), a lesion previously reported in Europe that was thought to be a result of an infectious agent but had never before been associated with arthritis [4].
The mysterious arthritis also emerged in interesting patterns geographically and temporally. Most of the patients lived in close proximity within the towns -several children lived on a particular road, and the arthritis afflicted several members from the same family. The patients also exclusively lived in the rural wooded areas of town, with no cases present in the town centers. Notably, there was also a unique temporal clustering to the symptoms, with the majority of onset occurring from June through September. Rheumatoid arthritis, a known autoimmune disease leading to inflammation of the joints, had never before been, nor would it have been, expected to cluster geographically or temporally in this way.
The Skin Lesion-EM
The term erythema migrans (EM) was first mentioned in a presentation at the 1909 meeting of the Swedish Dermatological Society in Stockholm by Arvid Afzelius [2]. EM, also reported as erythema chronicum migrans (ECM), was sometimes associated with a tick bite and was accompanied by nerve pain, paralysis, or meningitis. In Europe, doctors believed that EM might be caused by a bacterium, and penicillin and other antibiotics were moderately effective at treating it. This connection between ticks and EM led Steere and Malawista to hypothesize that Lyme disease might be transmitted by the bite of an arthropod such as a tick.
However, in the United States, there was little experience with EM, and in the European cases, EM never presented with arthritis. Intrigued by the EM lesion described by patients in their first study, Steere and Malawista eagerly awaited the next "high season." Indeed, during the summer of 1976, 30 new patients were identified, a survey of which strengthened the connection between the initial presentation of EM and the later development of arthritis [5]. The Yale team thus officially declared EM as the initial mark of infection and as the diagnostic hallmark of "Lyme arthritis," the initial name given for the disease by Yale investigators [6].
The Tick
While Steere and Malawista suggested the tick as the vector of Lyme arthritis as early as 1976 [3,5,6], in 1978 they showed epidemiological evidence for a tick vector by expanding their surveillance of the Lyme area across the Connecticut River [7]. They found that the incidence of Lyme arthritis was 30 times greater on the east side of the river, where Lyme is located, than it was on the west side, similar to the difference in deer and deer tick distribution in the area [8].
Scientists later confirmed that ticks are indeed the transmission vector of the infectious agent in Lyme disease. In the United States, Lyme disease is transmitted by the deer tick, Ixodes scapularis, a member of the genus Ixodes. Other related Ixodes ticks have been found in Europe and Asia. The Ixodes tick can become infected at any point of its 2-year lifespan, which consists of three distinct stages: larva, nymph, and adult [9,10]. The tick's survival depends on a feeding, or "blood meal," at each stage of its life. The larvae hatched in late summer feed on small animals such as the white-footed mouse, which can be infected but remain asymptomatic, serving as a continuous reservoir for infection. The larvae then molt into nymphs, which feed again the following spring to early summer. Transmission to humans typically occurs by ticks in this stage, as increases in outdoor activity coincide with the nymph feeding cycle. The small size of the nymph, about that of a poppy seed, allows it to go unnoticed. Furthermore, it has been shown that a tick must feed for 48 or more hours to transmit infection. In the fall, nymphs molt into adult ticks, which then feed on large animals, deer in particular. Adult ticks, which may actually mate on the deer itself, are carried by deer to the surroundings, usually leafy areas, where new larvae are hatched the following summer.
Deer thus play an important role in the tick life cycle by supplying a blood meal and potentially serving as a mating ground for adult ticks. Accordingly, the recent explosion of the United States deer population is thought to be responsible for the dramatic increase in the incidence of Lyme disease, particularly in the Northeast [11]. Efforts to decrease the prevalence of Ixodes scapularis ticks and Lyme disease through the control of deer populations have proven successful [12], and deer population control is considered one possible Lyme disease prevention strategy.
Lyme Disease: More than Arthritis
To Steere and Malawista, it soon became clear that "Lyme arthritis" was actually only a small piece of a larger puzzle. Now that the EM skin lesion was confirmed as the initial mark of infection, the Yale team made a major effort to inform and educate the area near Lyme. The Yale team also asked local healthcare providers to refer patients to them soon after infection, enabling them to further characterize the disease and its onset. As a result of these further studies, the team reported that Lyme disease can manifest in a variety of systemic ways, including those involving the nervous system [13], the heart [14], and the joints [15-19].
In 1984, the Yale School of Medicine brought together Lyme disease researchers from all over the world at the First International Conference on Lyme Disease in New Haven [10,20]. For the first time, professionals from a range of disciplines, including rheumatology, immunology, dermatology, and neurology, as well as public health officials and practicing physicians were gathered in recognition of this new complex and systemic disease. In 1985, Steere
Clinical Features
Today, Lyme disease is clinically described as either "early" or "late." Early Lyme disease initially presents itself with the characteristic bull's-eye patterned lesion, erythema migrans (EM). This lesion can last anywhere from several days to several weeks [21] and is most often accompanied by severe fatigue, myalgias, arthralgias, regional lymphadenopathies, and headaches or fever. The initial EM lesion can sometimes spread to produce smaller secondary lesions 3 to 5 weeks after the primary lesion. Patients may further develop neurologic, cardiac, and rheumatological symptoms in the early stage, the exact causes of which are still not fully understood.
One of the most common features of late Lyme disease is arthritis, particularly asymmetric oligoarticular arthritis, involving large joints such as the knees. Arthritis arises when an inflammatory response occurs in the synovial tissue between the joints and leads to painful swelling in the affected area.
Treatment
In 1977, in the journal Science [5], Steere and Malawista reported the presence of common antibodies extracted from patients experiencing an active EM lesion or active arthritis, thereby suggesting a common origin for these two clinical symptoms. While it would be several years before the infectious agent that causes Lyme disease would be isolated, the Yale team had growing evidence for the role of a bacterial infection in the disease. In 1980, Steere and Malawista determined that antibiotic treatment "shortens the duration of ECM and may prevent or attenuate subsequent arthritis" [22]. The study consisted of 113 patients presenting the EM lesion. Half of the group did not receive treatment, while the other half were treated with antibiotics. In patients who did not receive antibiotics, the EM lesion and associated symptoms resolved within a median of 10 days after the initial visit. Those patients receiving antibiotic treatment experienced significantly faster resolution of EM, with a median of duration of 4 days. Furthermore, significantly fewer patients in the antibiotic group went on to develop arthritis compared to patients in the control group. Antibiotic therapy is still the major line of treatment for Lyme disease [23].
The Infectious Agent: B. burgdorferi
In 1982, Burgdorfer and colleagues isolated the infectious agent that causes Lyme disease and that now bears his name: Borrelia burgdorferi [24]. The genus Borrelia is a member of the family Spirochaetaceae, also known as spirochetes, which are Gram-negative bacteria characterized by a wavelike body and flagella [21]. Burgdorfer and colleagues collected and dissected hundreds of Ixodes ticks from Shelter Island, New York (another location with a high prevalence of Lyme disease) and found that most of them contained spirochetes, specifically in the midgut region. They further characterized the spirochetes with dark field and electron microscopy. Finally, indirect immunofluorescence revealed that antibodies extracted from serum of Lyme disease-infected patients reacted positively with the spirochete, while serum from control patients did not, thereby confirming the link between the tick-derived spirochete and Lyme disease. In the United States, Lyme disease is primarily caused by the spirochete Borrelia burgdorferi sensu stricto. Other related genospecies of Borrelia, such as B. garinii and B. afzelii, have been identified in Europe and Asia.
Outer Surface Proteins
B. burgdorferi's persistence inside the tick and transmission to its human host are thought to be a product of altered expression of outer surface proteins (Osps). When inside the tick host, expression of OspA enables B. burgdorferi to persist in the gut. More specifically, Erol Fikrig, MD, and colleagues at Yale have found that a tick receptor protein (TROSPA) expressed in the tick gut is responsible for tight binding to OspA [25]. During a tick's blood meal, expression of OspA is decreased, leading to dissociation from the gut, and expression of OspC is increased. OspC is thought to play a role in migration of the bacterium from the tick's gut to its salivary glands [26]. Fikrig and colleagues, along with Durland Fish, PhD, from the Yale School of Public Health, have since shown that the interaction of OspC with the tick salivary protein Salp15 enhances the infectivity of B. burgdorferi in its new mammalian host [27]. Once inside the human host, B. burgdorferi induces immune responses that lead to a variety of symptoms present in the disease. Vaccines incorporating OspA, a strong antigen that induces an antibody response, have been developed but are currently off the market due to complications [28,29].
Diagnosis
Clear diagnosis of Lyme disease has been challenging. If the EM rash is present, diagnosis is more straightforward, but since not all patients present with a fully characteristic rash and sometimes do not notice it in time, diagnosis remains difficult. Serological tests that indirectly test for antibodies produced against B. burgdorferi are often used, along with somewhat less accurate PCR assays. There is some controversy over misdiagnosis of Lyme disease and even the existence of long-term chronic Lyme disease [9,21,30] that is beyond the scope of this article. However, a regimen of antibiotic therapy is typically sufficient in treating the disease at any stage, with greatest efficacy seen for patients receiving treatment soon after the tick bite and associated EM lesion.
Conclusion
The massive efforts taken by Steere and Malawista toward the investigation of the clustering of arthritis in Lyme in the late 1970s and early 1980s have led to the discovery of a complex, multifaceted disease. The results of their studies have laid the foundation for our current understanding of the role of the infectious agent, the tick as vector for infection, the EM skin lesion, and the systemic clinical symptoms of late onset. Yale investigators continue to lead the field of Lyme disease research today.
Deep model predictive flow control with limited sensor data and online learning
The control of complex systems is of critical importance in many branches of science, engineering, and industry, many of which are governed by nonlinear partial differential equations. Controlling an unsteady fluid flow is particularly important, as flow control is a key enabler for technologies in energy (e.g., wind, tidal, and combustion), transportation (e.g., planes, trains, and automobiles), security (e.g., tracking airborne contamination), and health (e.g., artificial hearts and artificial respiration). However, the high-dimensional, nonlinear, and multi-scale dynamics make real-time feedback control infeasible. Fortunately, these high-dimensional systems exhibit dominant, low-dimensional patterns of activity that can be exploited for effective control in the sense that knowledge of the entire state of a system is not required. Advances in machine learning have the potential to revolutionize flow control given its ability to extract principled, low-rank feature spaces characterizing such complex systems. We present a novel deep learning model predictive control framework that exploits low-rank features of the flow in order to achieve considerable improvements to control performance. Instead of predicting the entire fluid state, we use a recurrent neural network (RNN) to accurately predict the control relevant quantities of the system, which are then embedded into an MPC framework to construct a feedback loop. In order to lower the data requirements and to improve the prediction accuracy and thus the control performance, incoming sensor data are used to update the RNN online. The results are validated using varying fluid flow examples of increasing complexity.
Introduction
The robust and high-performance control of fluid flows presents an engineering grand challenge, with the potential to enable advanced technologies in domains as diverse as transportation, energy, security, and medicine. In many of these areas, the flows, described by the three-dimensional Navier-Stokes equations, are turbulent or exhibit chaotic dynamical behavior in the relevant regimes. As a consequence, real-time feedback control of fluid flows is particularly challenging. In model predictive control (MPC), an open-loop optimal control problem of the form

min_{u_0,...,u_{N-1}}  Σ_{i=0}^{N-1} ( ||f(y_{i+1}) − z_{ref,i+1}||² + α ||u_i||² + β ||u_i − u_{i−1}||² )   s.t.   y_{i+1} = Φ(y_i, u_i),    (1)

is solved in every time step. Here, f(y) = z is the observation of the time (and potentially space) dependent system state y that has to follow a reference trajectory z_ref, and α and β are regularization parameters penalizing the control input as well as its variation. The time-T map Φ of the system dynamics describes how the system state evolves over one time step given the current state and control input. Problem (1) is then solved repeatedly over a fixed prediction horizon N, and the first entry of the optimal control sequence is applied to the real system. As the initial condition in the next time step, the real system state is used such that a feedback behavior is achieved. Note that u_{−1} is the control input that was applied to the system in the previous time step. The scheme is visualized in Fig. 1, where the MPC controller based on the full system dynamics is shown in green.
MPC has successfully been applied to a very large number of problems. However, a major challenge is the real-time requirement, i.e., (1) has to be solved within the sample time Δt = t_{i+1} − t_i. In order to achieve this, linearizations are often used. Since even these can be too expensive to solve for large systems, we will here use a surrogate model which does not model the entire system state but only the control relevant quantities. In a flow control problem, these can be the lift and drag coefficients of a wing, for instance. Such an approach has successfully been used in combination with surrogate models based on dynamic mode decomposition [33] or clustering [31]. We thus aim at directly approximating the dynamics for the observable z = f(y) and replacing the constraint in Problem (1) by a surrogate model. Following Takens embedding theory [39], we will use delay coordinates, an approach which has been successfully applied to many systems [8]. Therefore, we define the delay-coordinate states ẑ_i = (z_{i−d}, ..., z_i) and û_i = (u_{i−d}, ..., u_i), where d is the number of delays. Given a history of states z and controls u, the reduced dynamics then yield the state at the next time instant. This allows us to replace Problem (1) by the corresponding surrogate problem (2), in which the full-state dynamics constraint is replaced by the reduced dynamics for the observable. The resulting MPC controller is visualized in Fig. 1 in orange.

Remark 1. Note that another advantage of modeling only the quantities relevant to the control part is that we depend much less strongly on the scales of the flow field (i.e., grid size and time step), as integral quantities such as body forces may evolve on their own (and possibly somewhat slower) time scale.
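To make the receding-horizon idea concrete, the following minimal sketch shows one surrogate-based MPC step in Python with SciPy. The `predict` callable stands in for the surrogate model, the bound handling via L-BFGS-B, and all names are illustrative assumptions rather than the authors' implementation; only the tracking and input-variation terms of the cost are included.

```python
import numpy as np
from scipy.optimize import minimize

def mpc_step(z_hist, u_hist, u_prev, z_ref, predict, N=5, beta=0.01, u_bounds=(-2.0, 2.0)):
    """Solve one receding-horizon problem using a surrogate predictor.

    predict(z_hist, u_hist, u_seq) -> array of N predicted observations.
    """
    def cost(u_seq):
        z_pred = predict(z_hist, u_hist, u_seq)          # surrogate rollout over the horizon
        track = np.sum((z_pred - z_ref[:len(z_pred)]) ** 2)
        du = np.diff(np.concatenate(([u_prev], u_seq)))  # penalize input variation
        return track + beta * np.sum(du ** 2)

    u0 = np.full(N, u_prev)                              # warm start with the last applied input
    res = minimize(cost, u0, method="L-BFGS-B", bounds=[u_bounds] * N)
    return res.x[0], res.x                               # apply only the first input
```

In a closed loop, only the first returned input would be applied to the plant and the optimization repeated at the next sample time, as described above.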
Related work
The main challenge in flow control-the construction of fast yet accurate models-has been addressed by many researchers in various ways.We here give a short overview of alternative methods (mostly related to the cylinder flow) and relate them to our approach.From a control-theoretical standpoint, the best way to compute a control law is via the exact model, i.e., the full Navier-Stokes equations.Using such a model in combination with an adjoint approach, a significant drag reduction could be achieved for the cylinder flow in [12] for Reynolds numbers up to 1000.However, this approach is too expensive for real-time control.To this end, several alternatives have been proposed, the most intuitive and well known being linearization around a desired operating point, cf.[18] for an overview.As a popular alternative, proper orthogonal decomposition (POD) [38] has emerged over the past decades, where the full state is projected onto a low-dimensional subspace spanned by orthogonal POD modes which are determined from snapshots of the full system.The resulting Galerkin models have successfully been used for control of the cylinder wake, see, e.g., [7,13].Balanced truncation POD models can be obtained for linear [44] or linearized systems [36].In order to ensure convergence to an optimal control input, the POD model can be updated regularly within a trust-region framework [6].Alternative approaches that are similar in spirit are moment matching [1] and linear-quadratic-Gaussian (LQG) balanced truncation [5].
The above-mentioned methods have as their main drawback that they quickly become prohibitively expensive with increasing Reynolds number.This is due to the fact that linearizations are less efficient or that the dynamics no longer live in low-dimensional subspaces that can be spanned by a few POD modes.Furthermore, all approaches require knowledge of the entire velocity (and potentially pressure) field, at least for the model construction.Both issues can be avoided when not considering the entire velocity field but only sensor data, which results in purely data-driven models or feedback laws.Several machine learning-based approaches have been presented in this context, for instance cluster-based surrogate models (cf.[31], where the drag of an airplane wing was reduced), feedback control laws constructed by genetic programming [45], or reinforcement learning controllers [35].These approaches are often significantly faster, rendering real-time control feasible.The approach presented in the following falls into this category as well.
DeepMPC: model predictive control with a deep recurrent neural network
In order to solve (2), the surrogate model for the control relevant system dynamics is required.For this purpose, we will use a deep RNN architecture which is implemented in TensorFlow [2].Once the model is trained and can predict the dynamics of z (at least over the prediction horizon), the model can be incorporated into the MPC loop.
Design of the RNN
As previously mentioned, the surrogate model is approximated using a deep neural network similar to [3].Each cell of the RNN predicts the system state for one time step.In order to capture the system dynamics using few observations only, we use the delay coordinates introduced above.Consequently, each RNN cell takes as input a sequence of past observations ẑi as well as corresponding control inputs ûi .The RNN consists of an encoder and a decoder (cf.Fig. 2a), where the decoder performs the actual prediction task and consists of N cells-one for each time step in the prediction horizon.This means that a single decoder cell computes ẑi+1 = (ẑ i , ûi ).The state information ẑi+1 is then forwarded to the next cell to compute the consecutive time step.In order to take long-term dynamics into account, an additional latent state l k+1 is computed based on past state information and forwarded from one cell to another.To properly compute this state for the first decoder cell, an encoder with M cells, whose cells only predict this latent state, is prepended to the decoder.As the encoder cell only predicts the latent state, it is a reduced version of the decoder cell which additionally contains elements for predicting the current and future dynamics, cf.Fig. 2a, b.
More precisely, the decoder cells are divided into three functional parts capturing different parts of the dynamics, i.e., long term (which is equivalent to an encoder cell) and current dynamics as well as the influence of the control inputs (see Fig. 2b).Therefore, the input (ẑ k , ûk ) of each cell k is divided into three parts As shown in Fig. 2b, the encoder and decoder consist of different smaller sub-units, represented by gray boxes.Each of the gray boxes represents a fully connected neural network.The encoder cell consists of three parts, h l,past , h l,current and h latent .In h l,past and h l,current latent variables for the last k − 2b + 1, . . ., k − b and k − b + 1, . . ., k time steps are computed, respectively.The current latent state l k+1 can be computed based on the information given by h l,past , h l,current and the latent variable of the last RNN cell l k .In a decoder cell, the future state z k+1 is additionally computed.Therefore, the latent state l k+1 is used as an input for h past , and the results of h past , h current and h future are used to calculate the predicted state z k+1 and thus, ẑk+1 .The corresponding equations can be found in Appendix A.
The RNN-based MPC problem (2) is solved using a gradient-based optimization algorithm, in our case a BFGS method. The required gradient information with respect to the control inputs can be calculated using standard backpropagation through time. This is represented by the red arrows in Fig. 2b. Since the RNN model requires temporal information from at least M + 2b time steps (M encoder cells and an input sequence of length 2b) to predict future states, there is an initialization phase in the MPC framework during which the control input is fixed to 0.
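Since the gradients are obtained by backpropagation through time, the following sketch illustrates how such gradients could be computed with TensorFlow's GradientTape; the `rnn` callable and its interface are placeholders, and the cost contains only the tracking and input-variation terms.

```python
import tensorflow as tf

def control_gradient(rnn, z_hist, u_hist, u_future, z_ref, beta=0.01):
    """Gradient of the surrogate MPC cost with respect to the future control inputs.

    `rnn` is assumed to map (z_hist, u_hist, u_future) to the predicted observations
    over the horizon; names and shapes are illustrative only.
    """
    u_var = tf.Variable(u_future, dtype=tf.float32)
    with tf.GradientTape() as tape:
        z_pred = rnn(z_hist, u_hist, u_var)      # unrolled decoder, i.e., backprop through time
        du = u_var[1:] - u_var[:-1]
        cost = tf.reduce_sum((z_pred - z_ref) ** 2) + beta * tf.reduce_sum(du ** 2)
    return cost, tape.gradient(cost, u_var)
```

The resulting gradient can then be handed to a quasi-Newton routine such as BFGS, which is the optimizer used in the experiments.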
Training of the RNN
The RNN is trained offline with time series data ((z 0 , u 0 ), . . ., (z n , u n )).For the data collection, the system is actuated with uniformly distributed random yet continuously varying inputs.In order to overcome difficulties with exploding and vanishing gradients as well as problems with the effect of nonlinearities when iterating from one time step to another, we use the three-stage approach for learning as proposed in [24] and used in [3].First, a conditional restricted Boltzmann machine is used to compute good initial parameters for the RNN according to the work by [40].In the second stage, only the model for a single time step is trained as this is faster and more stable than directly training the entire network, i.e., the model for the entire prediction horizon.In the final stage, another training phase is performed, this time for the complete RNN with N decoder cells, improving and making the predictions more robust for the system state over N time steps.Both the individual RNN cell and the entire network were trained using the ADAM optimizer [19].
Online training of the RNN
During system operation, we obtain incoming sensor data in each iteration, i.e., with a relatively high frequency. In order to improve the prediction accuracy, and thus the control performance of the model, these data are used to perform batch-wise online learning. To this end, we begin with a model which was trained in the offline phase as previously described. We then collect data over a fixed time interval such that we can update the RNN via batch-wise training using the ADAM optimizer.
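A minimal sketch of such a batch-wise online update is shown below. It assumes the surrogate is exposed as a Keras model trained on stacked input/target pairs; the buffer handling and the batch size of 500 follow the description in the experiments, while everything else (names, interfaces) is illustrative.

```python
import tensorflow as tf

def online_update(rnn, optimizer, buffer, batch_size=500,
                  loss_fn=tf.keras.losses.MeanSquaredError()):
    """One batch-wise online update from recently collected (input, target) pairs.

    `buffer` is a list of (inputs, targets) tuples gathered since the last update.
    """
    if len(buffer) < batch_size:
        return None                               # wait until enough new samples are available
    inputs, targets = zip(*buffer[-batch_size:])
    x, y = tf.stack(inputs), tf.stack(targets)
    with tf.GradientTape() as tape:
        pred = rnn(x, training=True)
        loss = loss_fn(y, pred)
    grads = tape.gradient(loss, rnn.trainable_variables)
    optimizer.apply_gradients(zip(grads, rnn.trainable_variables))
    buffer.clear()                                # start collecting the next interval of data
    return float(loss)
```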
To control the influence of the newly acquired data on the model-i.e., to avoid overfitting while yielding improved performance-it is important to select the training parameters accordingly, in particular the batch size and the learning rate.In our experiments, we have observed that the same initial learning rate and the same batch size as in the offline training phase are typically a good choice.However, the optimal choice of those parameters highly depends on the initial training data and the data collected during the control process.
Results
In order to study the performance of the proposed MPC framework, four flow control problems of increasing complexity are considered. Instead of a real physical system, we here use a numerical simulation of the full model as our plant. In all four cases, the flow around one or multiple cylinders (cf. Fig. 3) is governed by the incompressible 2D Navier-Stokes equations, with fluid entering from the left at a constant velocity y_in. The Reynolds number Re = y_in D/ν (based on the kinematic viscosity ν and the cylinder diameter D) ranges from 100 to 200, i.e., we are in the laminar regime. The full system is solved using a finite volume discretization and the open-source solver OpenFOAM, cf. [14]. The control relevant quantities are the lift and drag forces (i.e., the forces in the x_2 and x_1 directions) acting on the cylinders. These consist of both friction and pressure forces, which can be computed from the system state (or easily measured in the case of a real system). The system can be controlled by rotating the cylinders, i.e., the control variables are the angular velocities.
One cylinder
The first example is the flow around a single cylinder, cf.Fig. 3a, which was also studied in [30].At Re = 100, the uncontrolled system possesses a periodic solution, the so called von Kármán vortex street.On the cylinder surface, the fluid and the cylinder velocity are identical (no-slip condition) such that the flow can be steered by rotating the cylinder.The control relevant quantities are the forces acting on the cylinder-the lift C l and drag C d .We thus set z = (C l , C d ), and the aim is to control the cylinder such that the lift follows a given trajectory, e.g., a piecewise constant lift.
In order to create training data, a time series of the lift and the drag is computed from a time series of the full system state with a random control sequence. To avoid high input frequencies, a random rotation between u = −2 and u = 2 is selected according to a uniform distribution every 0.5 s. The intermediate control inputs are then computed using a spline interpolation on the grid of the time-T map, where Δt = 0.1 s. For the RNN training, a time series with 110,000 data points is used, which corresponds to a duration of 11,000 s. The concrete parameters for the RNN can be found in Appendix B.

Remark 2. In the first experiments, we use abundant measurement data in order to rule out this source of uncertainty. The chosen amount significantly exceeds the amount that is required for a good performance, as we will see below. Considering the uncontrolled system, the fluidic time scale is in the range of seconds, and even in the actuated situation, a much smaller amount of data is sufficient, in particular in combination with the stochastic gradient descent optimization which selects random subsets of the data set in each iteration.
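The random actuation described above could be generated as in the following sketch; the use of a cubic spline and the clipping to the admissible range are assumptions, since the exact interpolation scheme is not specified beyond "spline interpolation".

```python
import numpy as np
from scipy.interpolate import CubicSpline

def random_control_signal(duration, hold=0.5, dt=0.1, u_min=-2.0, u_max=2.0, seed=0):
    """Smooth random actuation: a new uniform value every `hold` seconds,
    interpolated with a spline onto the grid of the time-T map (dt = 0.1 s)."""
    rng = np.random.default_rng(seed)
    t_knots = np.arange(0.0, duration + hold, hold)
    u_knots = rng.uniform(u_min, u_max, size=t_knots.size)
    spline = CubicSpline(t_knots, u_knots)
    t_grid = np.arange(0.0, duration, dt)
    return t_grid, np.clip(spline(t_grid), u_min, u_max)
```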
In a first step, the quality of the RNN prediction is evaluated on the basis of an exemplary control input sequence.As one can see in Fig. 4a, the prediction is very accurate over several time steps for many combinations of observations z and control inputs u.There are only small regions where the predictions deviate stronger from the real lift and drag.
The good prediction quality enables us to use the RNN in the MPC framework, where the aim is to force the lift to +1, 0 and −1 for 20 s each. This results in a realization of (2), referred to as problem (3) below. The parameter β is set to 0.01 in order to avoid too rapid variations of the input. Furthermore, the control is bounded by the minimum and maximum control input of the training data (i.e., ±2). We solve the optimization problem (3) using a BFGS method and with a prediction horizon of length N = 5, as our experiments have shown that this is a good compromise between accuracy and computing effort. As visualized in Fig. 4b, the DeepMPC scheme leads to a good performance. Due to the periodic fluctuation of the uncontrolled system, a periodic control input is expected in order to suppress this behavior, which is what we observe.
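A sketch of the corresponding closed-loop experiment, with the piecewise-constant lift reference and the initialization phase during which the input is fixed to zero, is given below; `plant_step` and `controller` are placeholders for the CFD solver and the surrogate-MPC solve, and `n_init` is an illustrative number of initialization steps.

```python
import numpy as np

def lift_reference(t):
    """Piecewise-constant lift reference: +1, 0 and -1 for 20 s each."""
    levels, hold = [1.0, 0.0, -1.0], 20.0
    return levels[min(int(t // hold), len(levels) - 1)]

def run_closed_loop(plant_step, controller, z0, dt=0.1, T=60.0, n_init=20):
    """Feedback loop: zero input during the initialization phase, then one
    surrogate-MPC solve per time step (controller returns the next input u_k)."""
    z, u_prev, history = z0, 0.0, []
    for k in range(int(T / dt)):
        t = k * dt
        if k < n_init:                       # the RNN first needs M + 2b past steps
            u = 0.0
        else:
            u = controller(history, u_prev, lift_reference(t))
        z = plant_step(z, u)                 # e.g., one step of the full CFD model
        history.append((z, u))
        u_prev = u
    return history
```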
Fluidic pinball
In the second example, we control the flow around three cylinders in a triangular arrangement, as shown in Fig. 3b. This configuration is known as the fluidic pinball, see [9] for a detailed study. The control task is to make the lift of the three cylinders (C_l,1, C_l,2 and C_l,3) follow given trajectories by rotating the rear cylinders while the cylinder in the front is fixed. We thus want to approximate the system dynamics of the forces acting on all three cylinders, i.e., z = (C_l,1, C_d,1, C_l,2, C_d,2, C_l,3, C_d,3). Similar to the single cylinder case, the system possesses a periodic solution at Re = 100. When increasing the Reynolds number, the system dynamics become chaotic [9] and the control task is much more challenging. We thus additionally study the chaotic cases Re = 140 and Re = 200. The behavior of the observables of the uncontrolled flow for the three cases is presented in Fig. 5. Interestingly, the flow is non-symmetric in all cases. At Re = 100, for instance, the amplitudes of the oscillations of C_l,2 and C_l,3 are different, and which one is larger depends on the initial condition. As we now have two inputs and three reference trajectories, we obtain a corresponding realization of problem (2), in which the value of β is set to 0.1. For Re = 100, the prediction horizon is again N = 5. Since the dynamics are more complex for Re = 140 and Re = 200, the prediction horizon has to be larger, and we experimentally found N = 10 to be an appropriate choice. The average number of iterations as well as the number of function and gradient evaluations for the MPC optimization are shown in Table 1 for the three considered cases.
For all three Reynolds numbers, the training data are computed by simulating the system with random yet smoothly varying control inputs as before, i.e., uniformly distributed random values between u = −2 and u = 2 for each cylinder independently every 0.5 s.Due to the significantly smaller time step of the finite volume solver for the fluidic pinball, the control is interpolated on a finer grid with step size 0.005 s.Since the control input has to be fixed over one lag time due to the discrete-time mapping via the RNN, the mean over one lag time (i.e., over 20 data points) is taken for u.Time series with 150,000, 200,000 and 800,000 data points are used for Re = 100, Re = 140 and Re = 200, respectively.As already mentioned in Remark 2, the chosen amount of data exceeds the maximum that is required.We will address this below.The concrete parameters for the RNN can be found in Appendix B.
At Re = 100, where the dynamics are periodic, the control is quite effective, almost comparable to the single cylinder case, cf.Fig. 6a.In particular, the lift at the bottom cylinder is controlled quite well.The fact that the two lift coefficients cannot be controlled equally well is not surprising as the uncontrolled case is also asymmetric, cf.Fig. 5.
In comparison, the error e mean for the mildly chaotic case Re = 140 (Fig. 6b) is approximately one order of magnitude larger.The reference is still tracked, but larger deviations are observed.However, since the system is chaotic, this is to be expected.It is more difficult to obtain an accurate prediction and-more importantly-the system is more difficult to control, see also [32].
In order to improve the controller performance, we incorporate system knowledge, i.e., we exploit the symmetry along the horizontal axis. Numerical simulations suggest that this symmetry results in two metastable regions in the observation space and that the system changes only occasionally from one region to the other, analogous to the Lorenz attractor [8]. Therefore, we symmetrize (and double) the training data by adding, for each recorded sample, its mirror image with respect to the horizontal axis. This step is not necessary at Re = 100, since the collected data are already nearly symmetric. Nevertheless, the amount of training data can be doubled by exploiting the symmetry, and therefore, the simulation time to generate the training data can be reduced.
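A possible implementation of this symmetrization is sketched below; the column ordering of the observables and the exact sign conventions under the mirror symmetry are assumptions for illustration, not the authors' code.

```python
import numpy as np

def symmetrize(Z, U):
    """Double the pinball training data by mirroring across the horizontal axis.

    Assumed columns: Z = (Cl1, Cd1, Cl2, Cd2, Cl3, Cd3), U = (u1, u2) for the two
    rear cylinders. Under the mirror symmetry the lifts change sign, the drags are
    unchanged, and the two rear cylinders (with negated, swapped inputs) trade roles.
    """
    Z_m = Z.copy()
    Z_m[:, 0] = -Z[:, 0]                       # front cylinder: lift flips sign, drag unchanged
    Z_m[:, 2], Z_m[:, 3] = -Z[:, 4], Z[:, 5]   # rear cylinders swap roles ...
    Z_m[:, 4], Z_m[:, 5] = -Z[:, 2], Z[:, 3]   # ... with mirrored lift signs
    U_m = -U[:, ::-1]                          # swap and negate the angular velocities
    return np.vstack([Z, Z_m]), np.vstack([U, U_m])
```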
In Fig. 6c, the results for Re = 140 with symmetric training data are shown.In this example, the tracking error is reduced by nearly 50%.In particular, the second lift is well controlled.This indicates that it is advisable to incorporate known physical features such as symmetries in the data assimilation process.However, we still observe that the existence of two metastable regions results in a better control performance for one of the cylinders, depending on the initial condition.
For the final example, the Reynolds number is increased to Re = 200 in order to further increase the complexity of the dynamics, and symmetric data are used again. Due to the higher Reynolds number, switching between the two metastable regions occurs much more frequently, and the use of symmetric data yields less improvement. The results are presented in Fig. 6d, and we see that even though tracking is achieved, the oscillations around the desired state are larger. In Fig. 7a, the mean and the maximal error for the three Reynolds numbers are shown. Since the system dynamics become more complex with increasing Reynolds number, the tracking errors increase accordingly. In order to study the robustness of the training process as well as the influence of the amount of training data on the tracking error, five identical experiments for Re = 200 have been performed for different amounts of training data (10%, 50% and 100% of the symmetrized data points), respectively, see Fig. 7b. We observe no trend with respect to the amount of training data, in particular considering that the standard deviation is approximately 0.03 for the average and 0.15 for the maximal error. Figure 8 shows a comparison between two solutions using RNNs trained with 100% and 10% of the data, respectively. Although the solution trained with less data appears to behave less regularly, its performance is on average almost as good as the solution trained with 100% of the data. In conclusion, shorter time series already cover a sufficiently large part of the dynamics and are thus sufficient to train the model. In order to further improve the performance, the size of the RNN as well as the length of the training process would very likely have to be increased significantly, and also much smaller lag times would be required.
Online learning
Since we want to avoid a further increase in computational effort and data collection, we instead use small amounts of data sampled in the relevant parts of the observation space, i.e., close to the desired state.To this end, we perform online updates using the incoming sensor data.In our final experiment, we study how the control performance can be improved by performing batch-wise online updates of the RNN using the incoming data as described in Sect.3.3.In the feedback loop, a new data point is collected from the real system at each time step.Our strategy is to collect new data over 25 s for each update.By exploiting the symmetry, we obtain a batch size of 500 points within each interval that is used for further training of the RNN.In the right plot of Fig. 9, we compare the tracking error over several intervals, and we see that it can be decreased very efficiently within a few iterations by using online learning (see also Fig. 6a for a comparison).Besides reducing the tracking error, the control cost ||u|| 2 decreases, which further demonstrates the importance of using the correct training data.Significant improvements of both the tracking performance as well as the controller efficiency are obtained very quickly with comparably few measurements.
Conclusion and further work
We present a deep learning MPC framework for feedback control of complex systems.Our proposed sensorbased, data-driven learning architecture achieves robust control performance in a complex fluid system without recourse to the governing equations, and with access to only a few physically realizable sensors.In order to handle the real-time constraints, a surrogate model is built exclusively for control relevant and easily accessible quantities (i.e., sensor data).This way, the dimension of the RNN-based surrogate model is several orders of magnitude smaller compared to a model of the full system state.On the one hand, this enables applicability in a realistic setting since we do not rely on knowledge of the entire state.On the other hand, it allows us to address systems of higher complexity, i.e., it is a sensor-based and scalable architecture.The approach shows very good performance for high-dimensional systems of varying complexity, including chaotic behavior.To avoid prohibitively large training data sets and long training phases, an online update strategy using sensor data is applied.This way, excellent performance can be achieved for Re = 100.For future work, it will be important to further improve and robustify the online updating process, in particular for chaotic systems.Furthermore, it is of great interest to further decrease the training data requirements by designing RNN structures specifically tailored to control problems.Deep learning MPC is a critically important architecture for real-world engineering applications where only limited sensors are available to enact control authority.Therefore, in the context of real-world applications it would be of significant interest how the framework reacts to noisy sensor data.Since neural networks can in general work well with noisy data, we expect that DeepMPC gives a good noise-robustness.Furthermore, it should be investigated what happens if the Reynolds number changes slightly from the one used to train the RNN.
The equations for the encoder, resp. the long-term part of the decoder, are given by

h_{l,past} = relu(W_{l,past} x_past + b_{l,past}),
h_{l,current} = relu(W_{l,current} x_current + b_{l,current}),
l_{k+1} = h_latent = relu(W_{latent,h} (h_{l,past} • h_{l,current}) + W_{latent,l} l_k + b_latent),

where • is the element-wise multiplication of vectors and relu computes the rectified linear unit, i.e., max(0, x) for each element of x. The weights and the biases are the variables which are optimized during the training process; their dimensions are determined by N_l, the size of the latent state l_k, by N_h, the size of the hidden layers, and by N_u = b n_u. N_l and N_h can be chosen appropriately depending on the problem, cf. Appendix B for the concrete values used in our experiments. The final output z_{k+1} is computed via a linear layer with W_out ∈ R^{N_h × N_out} and b_out ∈ R^{N_out}, where N_out = n_o.
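For illustration, the latent-state update above could be implemented as the following Keras layer; the packaging as a tf.keras.layers.Layer, the layer names, and the shapes are illustrative choices rather than the authors' implementation.

```python
import tensorflow as tf

class EncoderCell(tf.keras.layers.Layer):
    """Latent-state update l_{k+1} = relu(W_latent,h (h_past * h_current) + W_latent,l l_k + b)."""
    def __init__(self, n_h, n_l):
        super().__init__()
        self.h_l_past = tf.keras.layers.Dense(n_h, activation="relu")
        self.h_l_current = tf.keras.layers.Dense(n_h, activation="relu")
        self.w_latent_h = tf.keras.layers.Dense(n_l, use_bias=False)
        self.w_latent_l = tf.keras.layers.Dense(n_l)   # this Dense carries b_latent

    def call(self, x_past, x_current, l_k):
        h_past = self.h_l_past(x_past)
        h_current = self.h_l_current(x_current)
        # element-wise product of the two hidden representations, then the latent update
        return tf.nn.relu(self.w_latent_h(h_past * h_current) + self.w_latent_l(l_k))
```

A decoder cell would additionally combine h_past, h_current and h_future and map the result through a linear output layer to z_{k+1}, as described in the text.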
B Parameter choice for the RNN
The structure of the RNN is defined by the number of neurons in the hidden layers, i.e., by N h and N l , the number of encoder cells M and the chosen delay d = 2b.We performed different experiments to find appropriate values, and in Table 2 we summarize our final choices.
Fig. 1 Structure of the control scheme, where a classical MPC controller based on a model for the full system state is shown in green and a controller using a surrogate model in orange (color figure online)

Fig. 2 a Unfolded RNN consisting of encoder (red) and decoder (yellow). b Layout of a single RNN cell. An encoder cell only consists of the blue area. A decoder cell, on the other hand, contains the entire green cell (color figure online)

Fig. 3 a Single cylinder setup. The system is controlled by setting the angular velocity u of the cylinder. b Setup for the fluidic pinball, where the forces on all cylinders are observed. The system is controlled by rotating cylinders one and two with the respective angular velocities u_1 and u_2

Fig. 7 a Mean (blue) and maximal (red) error for various Reynolds numbers with full data. b Mean and maximal error for different training data set sizes, both averaged over 5 training runs (Re = 200) (color figure online)

Fig. 8 DeepMPC reference tracking for Re = 200 and different amounts of data

Fig. 9 Re = 100 with online learning. The RNN is updated every 25 s (denoted by black lines on the left). On the right, the mean error (blue) and the control cost (red) over each interval are shown (color figure online)

Table 1 Number of iterations, function and gradient evaluations averaged over control steps for different Reynolds numbers

The inputs of each RNN cell are two time series z_{k−2b+1,...,k−b} and z_{k−b+1,...,k} of the observable with the corresponding control inputs u_{k−2b,...,k−b−1} and u_{k−b,...,k−1}, and a separate sequence of control inputs u_{k−b+1,...,k}. The input length b is thus related to the delay via d = 2b. In summary, the inputs (ẑ_k, û_k) = (z_{k−2b,...,k}, u_{k−2b,...,k}) are required.

N_x is determined by the delay d = 2b, the number of observables n_o and the number of control inputs n_u as N_x = b(n_o + n_u). For the fluidic pinball, we observe the lift and drag at the three cylinders, and we can adapt the angular velocity of the two rear cylinders. Therefore, we have n_o = 6 and n_u = 2. In order to predict the future state, the decoder consists of three additional parts. The equations are given by

h_past = relu(W_past l_{k+1} + b_past),
h_current = relu(W_current x_current + b_current),
h_future = relu(W_future u_future + b_future).
Explainable Identification of Dementia from Transcripts using Transformer Networks
Alzheimer's disease (AD) is the main cause of dementia, which is accompanied by loss of memory and may lead to severe consequences in people's everyday lives if not diagnosed on time. Very few works have exploited transformer-based networks, and despite the high accuracy achieved, little work has been done in terms of model interpretability. In addition, although Mini-Mental State Exam (MMSE) scores are inextricably linked with the identification of dementia, research works treat the task of dementia identification and the task of predicting MMSE scores as two separate tasks. In order to address these limitations, we employ several transformer-based models, with BERT achieving the highest accuracy of 87.50%. Concurrently, we propose an interpretable method to detect AD patients based on siamese networks, reaching an accuracy of up to 83.75%. Next, we introduce two multi-task learning models, where the main task refers to the identification of dementia (binary classification), while the auxiliary one corresponds to the identification of the severity of dementia (multiclass classification). Our model obtains accuracy equal to 86.25% on the detection of AD patients in the multi-task learning setting. Finally, we present some new methods to identify the linguistic patterns used by AD patients and non-AD ones, including text statistics, vocabulary uniqueness, word usage, correlations via a detailed linguistic analysis, and explainability techniques (LIME). Findings indicate significant differences in language between AD and non-AD patients.
I. INTRODUCTION
Alzheimer's disease (AD) constitutes a neurodegenerative disease characterized by a progressive cognitive decline and is the leading cause of dementia. Signs of dementia include amongst others: problems with short-term memory, keeping track of a purse or wallet, paying bills, planning and preparing meals, remembering appointments, or travelling out of the neighborhood [1]. Because of the fact that Alzheimer's dementia gets worse over time, it is important to be diagnosed early. For this reason, several research works have been introduced targeting at diagnosing dementia, which use imaging techniques [2], CSF biomarkers [3], [4], or EEG signals [5]. Due to the fact that dementia affects speech to a high degree, recently the research has moved towards dementia identification from spontaneous speech, where several shared tasks [6], [7] have been developed in order to distinguish AD from non-AD patients.
(Loukas Ilias and Dimitris Askounis are with the Decision Support Systems Laboratory, School of Electrical and Computer Engineering, National Technical University of Athens, 15780 Athens, Greece; e-mail: {lilias,askous}@epu.ntua.gr.)

Several research works have been conducted with regard to the identification of AD patients using speech and transcripts. The majority of them have employed feature extraction techniques [8]-[12] in order to train traditional Machine Learning (ML) algorithms, such as Logistic Regression, k-NN, and Random Forest. However, feature extraction constitutes a time-consuming procedure, often achieves poor classification results, and frequently demands some level of domain expertise. Recently, researchers have introduced deep learning architectures [13], [14], such as CNNs and BiLSTMs, so as to improve the classification results. Despite the success of transformer-based models in several domains, their potential has not been investigated to a high degree in the task of dementia identification from transcripts; the research works that have proposed them [15] use their outputs as features to train shallow machine learning algorithms. Concurrently, all research works except one [16] train machine learning models in order to distinguish AD patients from non-AD patients, without taking into account the severity of dementia via Mini-Mental State Exam (MMSE) scores. Motivated by this limitation, we propose two multi-task learning models minimizing the loss of both dementia identification and its severity.
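As a hedged illustration of such a multi-task setup (not the exact architecture proposed later in the paper), a shared BERT encoder with a binary AD head and a multiclass severity head could be built as follows; the number of severity bins, the loss weights, and the learning rate are placeholder choices.

```python
import tensorflow as tf
from transformers import TFAutoModel

def build_multitask_model(model_name="bert-base-uncased", n_severity_classes=4, max_len=256):
    """Shared encoder with two heads: AD vs. non-AD (main task) and
    dementia severity from binned MMSE scores (auxiliary task)."""
    encoder = TFAutoModel.from_pretrained(model_name)
    ids = tf.keras.Input((max_len,), dtype=tf.int32, name="input_ids")
    mask = tf.keras.Input((max_len,), dtype=tf.int32, name="attention_mask")
    h = encoder(input_ids=ids, attention_mask=mask).last_hidden_state[:, 0]  # [CLS] representation
    ad = tf.keras.layers.Dense(1, activation="sigmoid", name="ad")(h)
    sev = tf.keras.layers.Dense(n_severity_classes, activation="softmax", name="severity")(h)
    model = tf.keras.Model([ids, mask], [ad, sev])
    model.compile(optimizer=tf.keras.optimizers.Adam(2e-5),
                  loss={"ad": "binary_crossentropy",
                        "severity": "sparse_categorical_crossentropy"},
                  loss_weights={"ad": 1.0, "severity": 0.5})
    return model
```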
At the same time, to the best of our knowledge, the research works that have proposed deep learning models based on transformer networks have focused their interest only on improving the classification results obtained by CNNs, BiLSTMs etc. instead of exploring possible explainability techniques. Specifically, due to the fact that deep learning models are considered black boxes, it is important to propose ways of making them interpretable, since it is imperative for a clinician to be informed why the specific deep neural network classified a person as AD patient or not. To the best of our knowledge, only one work [17] has experimented with interpreting its proposed deep learning model (CNN-LSTM model) in the field of dementia detection using transcripts. In order to tackle this limitation, our contribution is twofold. First, we propose an interpretable neural network architecture. Next, we extend prior work and employ LIME [18], a model agnostic framework for interpretability, aiming to explain the predictions made by our best performing model. Concurrently, we propose an in-depth analysis of the language patterns used between AD and non-AD patients aiming to shed more light on the main differences observed in the vocabulary that may distinguish people suffering from dementia from healthy people.
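For the LIME-based explanations, a typical usage pattern on transcript classifiers looks like the sketch below; the `predict_proba` wrapper around the fitted model is an assumption and not part of the description above.

```python
from lime.lime_text import LimeTextExplainer

def explain_transcript(transcript, predict_proba, class_names=("non-AD", "AD"), num_features=10):
    """Explain one prediction of a fitted transcript classifier with LIME.

    `predict_proba` must map a list of raw texts to an (n, 2) array of class
    probabilities, e.g. a wrapper around a fine-tuned BERT model's softmax output.
    """
    explainer = LimeTextExplainer(class_names=list(class_names))
    explanation = explainer.explain_instance(transcript, predict_proba,
                                             num_features=num_features)
    return explanation.as_list()   # [(word, weight), ...] sorted by importance
```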
Our main contributions can be summarized as follows:
• We employ several transformer-based models, pretrained on biomedical and general corpora, and compare their performances.
• We propose an interpretable method based on siamese neural networks along with a co-attention mechanism, so as to detect AD patients.
• We introduce two models in a multi-task learning framework, where the first task is the identification of dementia and the second one is the detection of the MMSE score (severity of dementia). We model the MMSE detection task as a multiclass classification task instead of a regression task.
• We perform a thorough linguistic analysis regarding the differences in language between control and dementia groups.
• We employ LIME, in order to explain the predictions of our best performing model.
II. RELATED WORK
A. Feature-based
The authors in [19], [20] introduced approaches based on multimodal data (both linguistic and acoustic features) to detect AD patients (binary classification task) and predict the MMSE score (regression task). More specifically, the authors in [19] exploited dimensionality reduction techniques followed by machine learning classifiers and stated that Logistic Regression (LR) with language features was their best performing model in terms of classifying AD and non-AD patients. With regard to estimating the MMSE score, they claimed that a Random Forest classifier with language features achieves the lowest RMSE and R² scores. The combination of linguistic and acoustic features did not perform well on either task. In [20], the authors trained both shallow and deep learning models (LSTM and CNN) on a feature set consisting of acoustic features (i-vectors, x-vectors) and text features (word vectors, BERT embeddings, LIWC features, and CLAN features) to detect AD patients. They found that the top-performing classification models were the Support Vector Machine (SVM) and Random Forest classifiers trained on BERT embeddings, which both achieved an accuracy of 85.4% on the test set. Regarding the regression task, they claimed that the gradient boosting regression model using BERT embeddings outperformed all the other introduced architectures. The authors in [15] trained shallow machine learning algorithms (Logistic Regression and Support Vector Machine for detecting AD patients, and Support Vector Machine regression and a Partial Least Squares Regressor for predicting the MMSE scores) using embeddings extracted by transformer-based models, namely BERT, RoBERTa, DistilBERT, DistilRoBERTa, and BioMed-RoBERTa-base. A similar approach was followed by [21], where the authors extracted embeddings for each word of the transcript using transformer-based networks, exploited four types of pooling functions for generating a transcript-level representation, and trained a Logistic Regression classifier. Research work [22] merged acoustic (x-vectors) and linguistic features and trained a Support Vector Machine classifier. In terms of the language features, (i) a global maximum pooling, (ii) bidirectional LSTM-RNNs equipped with an attention module, and (iii) the second model augmented with part-of-speech (POS) embeddings were trained on top of a pretrained BERT model. Nasreen et al. [11] extracted two feature sets, namely disfluency and interactional features, and performed an in-depth statistical analysis in an attempt to investigate the differences between AD and non-AD subjects in terms of these features. Findings show that these two groups of people present significant differences. Then, they exploited shallow machine learning algorithms using the aforementioned feature sets to distinguish AD from non-AD patients and obtained an accuracy of 0.90 when providing both feature sets as input to the SVM classifier.
B. Deep Learning
Research works [23], [24] employed a hierarchical attention neural network to detect AD patients. More specifically, the authors in [23] evaluated their proposed model on both manual and automatic transcripts and found that a hierarchical neural network achieves an improvement in F1-score in comparison to other deep learning models. In [24], the authors tried to interpret the decisions made by the proposed model by visualizing words and sentences and performing statistical analyses. However, they were not able to explain why their model pays attention to some specific words more than others. Moreover, an explainable approach was introduced by [17]. Specifically, after proposing three deep learning architectures based on CNNs and RNNs, the authors applied visualization techniques and showed which linguistic characteristics are indicative of dementia, i.e., short answers, repeated requests for clarification, and interjections at the start of each utterance. The authors in [25] proposed a multi-task learning framework (Sinc-CLA) to predict age and MMSE scores (both considered as regression tasks) and used only speech as input for their proposed network. Concurrently, they introduced shallow networks with input i-vectors and x-vectors both in single and multi-task learning frameworks. They claimed that using x-vectors in a multi-task learning framework yields the best results in terms of the estimation of both age and MMSE scores. Ref. [26] introduced both feature-based and transformer-based methods. Regarding transformer-based models, they fine-tuned the BERT model to detect AD patients, achieving better evaluation results than the ones achieved via the feature-based methods. For estimating the MMSE score they proposed only feature-based approaches. Research work [16] is the most similar to ours. The authors proposed transformer-based models using text, audio, and images (they converted audio to images using Mel Frequency Cepstral Coefficients). Regarding text, they employed BERT and Longformer. They claimed that models using only text data outperformed all the other proposed ones. The fusion of text and audio did not achieve better results. They also introduced a multi-task learning architecture using only text as input, in order to predict the MMSE score (regression task) and detect AD patients (binary classification task). Results showed limited improvements in classification and a negative impact on regression. We extend this research work by employing more transformer-based networks with an efficient training strategy, proposing a new interpretable method to detect AD patients based on siamese networks, introducing two models in a multi-task learning framework by regarding the MMSE prediction task as a multiclass classification task, and employing explainability techniques. On the other hand, research works [27] and [28] introduced deep learning models including CNNs and LSTM neural networks with feed-forward highway layers respectively. In [27], results suggested that the utterances of the interviewer boost the classification performance. A methodology similar to [28] was proposed by [29], where the authors exploited both BERT and LSTMs with a gating mechanism and showed that the LSTM with gating mechanism outperforms the BERT model with gating mechanism. They stated that this difference may be attributable to the fact that BERT is very large in comparison to the LSTM models. Researchers in [30] introduced four approaches for detecting AD patients.
Specifically, they trained a hierarchical neural network with an attention mechanism on linguistic features. Concurrently, they proposed a Siamese Neural Network and a Convolutional Neural Network using audio waveforms. Finally, they extracted features from audio segments and trained an SVM classifier. Results showed that the combination of audio features, CNNs, and hierarchical neural network achieved the best classification results.
C. Related Work Review Findings
From the aforementioned research works, it is evident that despite the negative consequences dementia has on people's everyday life, little work has been done so far towards its identification. More specifically, most researchers introduce feature extraction approaches from audio and transcripts and train ML algorithms, such as SVM, LR, etc. Because feature extraction constitutes a time-consuming procedure and does not generalize well to new AD patients, researchers have started exploiting deep learning methods, such as CNNs and LSTMs, which obtain low performances. However, despite the fact that pretrained transformer models achieve new state-of-the-art results in several domains, including the biomedical one, their potential has been mainly used as embeddings for training shallow ML algorithms, such as SVM or LR. Concurrently, little has been done regarding the interpretability of the proposed deep learning models as well as the main differences observed in the language between AD patients and non-AD patients.
Our work is different from the research works mentioned above, since we: (a) propose several pretrained transformer-based models and compare their performances, (b) introduce the idea of siamese neural networks along with a co-attention mechanism towards the task of dementia classification, (c) convert the MMSE regression task into a multiclass classification one and explore if it helps dementia identification, (d) perform a detailed linguistic analysis to find the linguistic patterns that distinguish AD patients from non-AD ones, and (e) exploit LIME for explaining the predictions made by our best performing model.
III. DATASET
We use the ADReSS Challenge Dataset [6] for conducting our experiments. In contrast to other datasets, this dataset is matched for gender and age, so as to minimize the risk of bias in the prediction tasks. Moreover, it has been selected in such a way as to mitigate biases often overlooked in evaluations of AD detection methods, including repeated occurrences of speech from the same participant (common in longitudinal datasets) and variations in audio quality. It consists of speech recordings along with their associated transcripts and includes 78 non-AD and 78 AD subjects. In addition, the dataset includes the MMSE scores for each subject except one. We report the mean and standard deviation of the MMSE scores for the two main groups, i.e., AD patients and non-AD ones, in Table I. Each participant (PAR) was asked by the interviewer (INV) to describe the Cookie Theft picture from the Boston Diagnostic Aphasia Exam [31]. Because the transcripts are annotated using the CHAT coding system [32], we use the Python library PyLangAcq [33] for accessing the dataset. We use data (utterances) only from the PAR and conduct our experiments at the transcript level. The ADReSS Challenge dataset has been divided into a train and a test set. The train set consists of 54 AD patients and 54 non-AD ones, while the test set consists of 24 AD patients and 24 non-AD ones.
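A minimal sketch of how the transcripts could be loaded with PyLangAcq and restricted to the participant's (PAR) utterances is shown below; the file path is a placeholder and the exact keyword name ("participants") may differ between PyLangAcq versions.

```python
# Minimal sketch, assuming the CHAT transcripts are available locally; the glob path is a
# placeholder and the keyword name ("participants") may differ in older PyLangAcq versions.
import pylangacq

reader = pylangacq.read_chat("ADReSS/train/transcription/*.cha")   # placeholder path
par_utterances = reader.utterances(participants="PAR")             # keep only the participant's speech

# One raw text string per transcript can then be built by joining the PAR tokens of each file.
```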
IV. PROBLEM STATEMENT
In this section, the problem statement used in this paper is presented. More specifically, it can be divided into two problems, namely the Single-Task Learning (STL) Problem and the Multi-Task Learning (MTL) Problem, which are presented in detail in Sections IV-A and IV-B respectively.
A. Single-Task Learning Problem
Let the set of transcriptions S = {s_1, s_2, . . . , s_n} consist of a set of transcriptions belonging to the dementia group, d ⊂ S, and a set of transcriptions belonging to the control group, c ⊂ S. The task is to identify whether a transcription s_i ∈ S belongs to a person suffering from dementia, i.e., s_i ∈ d, or not, i.e., s_i ∈ c.
B. Multi-Task Learning Problem
Let the set of annotated transcriptions S = {(s_1, label_1, mmse_1), . . . , (s_n, label_n, mmse_n)} consist of a set of transcriptions belonging to the dementia group, d ⊂ S, and a set of transcriptions belonging to the control group, c ⊂ S. The tasks here are (i) to identify whether a transcription s_i ∈ S belongs to a person suffering from dementia, i.e., s_i ∈ d, or not, i.e., s_i ∈ c, and (ii) to identify the MMSE score of each person.
V. PREDICTIVE MODELS
In this section, we describe the models used for detecting AD patients. Specifically, Section V-A refers to the models employed in the single-task learning setting, whereas in Section V-B we refer to the models used for jointly learning to identify AD patients and detect the severity of dementia.
A. Single-Task Learning
A.1. Transformer-based Models: We exploit several pretrained transformer-based models, namely BERT, BioBERT, BioClinicalBERT, ConvBERT, RoBERTa, ALBERT, and XLNet. Regarding our experiments, we pass each transcription through each of these pretrained models. The output of each model is passed through a Global Average Pooling layer followed by two dense layers. The first dense layer consists of 128 units with a ReLU activation function and the second one has one unit with a sigmoid activation function to give the final output.
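The following sketch illustrates this classification head in TensorFlow/Keras with a Hugging Face BERT backbone; the maximum sequence length and the functional-API wiring are assumptions, not details taken from the paper.

```python
# A minimal sketch, assuming a TensorFlow/Keras setup with a Hugging Face BERT backbone;
# MAX_LEN is an assumed maximum sequence length.
import tensorflow as tf
from transformers import TFBertModel

MAX_LEN = 512
backbone = TFBertModel.from_pretrained("bert-base-uncased")

input_ids = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="attention_mask")

hidden = backbone(input_ids, attention_mask=attention_mask).last_hidden_state   # (batch, MAX_LEN, hidden)
pooled = tf.keras.layers.GlobalAveragePooling1D()(hidden)                       # average over the token axis
x = tf.keras.layers.Dense(128, activation="relu")(pooled)
output = tf.keras.layers.Dense(1, activation="sigmoid")(x)                      # P(dementia)

model = tf.keras.Model([input_ids, attention_mask], output)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
```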
A.2. Transformer-based Models with Co-Attention Mechanism: In this section, we present an interpretable method to differentiate AD from non-AD patients. First, we split each transcription s in the dataset into two statements of equal length (s_1 and s_2). In this way, we have to categorize a pair of statements (s_1, s_2) into the dementia or the control group. To do this, we pass s_1 and s_2 through the transformer-based models mentioned in Section V-A.1, i.e., BERT, BioBERT, BioClinicalBERT, ConvBERT, RoBERTa, ALBERT, and XLNet. These models can be considered as siamese in our experiments, since we make them share the same weights. Then, we implement a co-attention mechanism introduced by [41] and adopted in several studies, including [42], [43], over the two embeddings of the two statements (the outputs of the transformer-based models), in order to render the entire architecture interpretable.
Formally, let N and T denote the numbers of tokens of s_1 and s_2 respectively. These tokens are passed to the shared transformer-based model, yielding C = model(s_1) ∈ R^{d×N} for the first statement and S = model(s_2) ∈ R^{d×T} for the second one, where model is one of the following: BERT, BioBERT, BioClinicalBERT, ConvBERT, RoBERTa, ALBERT, or XLNet, and d denotes the hidden size of the model. We have omitted the first dimension, which corresponds to the batch size. Following the methodology proposed by [41], the affinity matrix F ∈ R^{N×T} is calculated as F = tanh(C^T W_l S), where W_l ∈ R^{d×d} is a matrix of learnable parameters. Next, this affinity matrix is considered as a feature and we learn to predict the attention maps for both statements via H_s = tanh(W_s S + (W_c C) F) and H_c = tanh(W_c C + (W_s S) F^T), where W_s, W_c ∈ R^{k×d} are matrices of learnable parameters. The attention probabilities for each word in both statements are calculated through the softmax function as a_s = softmax(w_hs^T H_s) and a_c = softmax(w_hc^T H_c), where a_s ∈ R^{1×T}, a_c ∈ R^{1×N}, and w_hs, w_hc ∈ R^{k×1} are weight parameters. Based on these attention weights, the attention vectors for each statement are obtained as the weighted sums of the statement features, i.e., ŝ = Σ_{t=1}^{T} a_s^t S^t and ĉ = Σ_{n=1}^{N} a_c^n C^n, where ŝ, ĉ ∈ R^{1×d}. Finally, the two vectors are concatenated into p = [ŝ, ĉ] ∈ R^{1×2d}, and p is passed to a dense layer with 128 units and a ReLU activation function, followed by a dense layer consisting of one unit with a sigmoid activation function.
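A possible implementation of this parallel co-attention step as a custom Keras layer is sketched below; the batch-first tensor layout (token axis before the hidden axis, matching the usual transformer output) and the attention hidden size k are assumptions. The layer would be applied to the two shared-weight encoder outputs, and the resulting vector p fed to the dense classification head described above.

```python
import tensorflow as tf

class CoAttention(tf.keras.layers.Layer):
    """Parallel co-attention over two statement encodings, following [41] (illustrative sketch)."""
    def __init__(self, d, k, **kwargs):
        super().__init__(**kwargs)
        self.W_l = self.add_weight(shape=(d, d), name="W_l")
        self.W_s = self.add_weight(shape=(k, d), name="W_s")
        self.W_c = self.add_weight(shape=(k, d), name="W_c")
        self.w_hs = self.add_weight(shape=(k,), name="w_hs")
        self.w_hc = self.add_weight(shape=(k,), name="w_hc")

    def call(self, C, S):
        # C: (batch, N, d) encoding of s_1, S: (batch, T, d) encoding of s_2 (token axis first).
        F = tf.tanh(tf.einsum("bnd,de,bte->bnt", C, self.W_l, S))            # affinity, (batch, N, T)
        WsS = tf.einsum("kd,btd->bkt", self.W_s, S)                          # (batch, k, T)
        WcC = tf.einsum("kd,bnd->bkn", self.W_c, C)                          # (batch, k, N)
        H_s = tf.tanh(WsS + tf.einsum("bkn,bnt->bkt", WcC, F))               # attention map for s_2
        H_c = tf.tanh(WcC + tf.einsum("bkt,bnt->bkn", WsS, F))               # attention map for s_1
        a_s = tf.nn.softmax(tf.einsum("k,bkt->bt", self.w_hs, H_s), axis=-1)  # (batch, T)
        a_c = tf.nn.softmax(tf.einsum("k,bkn->bn", self.w_hc, H_c), axis=-1)  # (batch, N)
        s_hat = tf.einsum("bt,btd->bd", a_s, S)                              # weighted sum of s_2 features
        c_hat = tf.einsum("bn,bnd->bd", a_c, C)                              # weighted sum of s_1 features
        return tf.concat([s_hat, c_hat], axis=-1)                            # p = [s_hat, c_hat], (batch, 2d)

# Usage (illustrative): p = CoAttention(d=768, k=128)(encoder_out_s1, encoder_out_s2)
```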
B. Multi-Task Learning
In this section we propose two architectures based on multi-task learning [44] and adopt the methodology followed by [45] & [46].
To be more precise, we employ a multi-task learning framework consisting of a primary and an auxiliary task. The identification of dementia constitutes the primary task, while the prediction of the MMSE score constitutes the auxiliary one. Our main objective is to explore whether the MMSE score helps in classifying subjects into the dementia or the control group. The introduced architectures are trained on the two tasks and updated at the same time with a joint loss, L = L_dementia + α · L_MMSE (4), where L_dementia and L_MMSE are the losses of the dementia identification and MMSE prediction tasks respectively, and α is a hyperparameter that controls the importance we place on each task. We describe below the MTL architectures developed. MTL-BERT (Multiclass): We pass each transcription through a BERT model (which constitutes our best performing STL model). The output of the BERT model is passed through two separate dense layers, so as to identify dementia and predict the MMSE score. For identifying dementia, we use a dense layer with 2 units and a softmax activation function and minimize the cross-entropy loss function. Regarding the estimation of the MMSE score, in contrast with previous research works, we convert the MMSE regression task into a multiclass classification task. More specifically, according to [28], we can create 4 groups of cognitive severity: healthy (MMSE score ≥ 25), mild dementia (MMSE score of 21-24), moderate dementia (MMSE score of 10-20), and severe dementia (MMSE score ≤ 9). Thus, for classifying transcriptions into one of these 4 groups, we use a dense layer of 4 units with a softmax activation function and minimize the cross-entropy loss function.
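A compact sketch of the MMSE bucketing of [28] and one way to realize the weighted joint objective in Keras is given below; the output-head names, the use of sparse categorical cross-entropy, and the reuse of the pooled BERT representation (`pooled`, `input_ids`, `attention_mask` from the earlier single-task sketch) are illustrative assumptions.

```python
import tensorflow as tf

def mmse_to_class(score):
    """Map a raw MMSE score to the four severity groups of [28] (used as integer labels for the 'mmse' head)."""
    if score >= 25:
        return 0  # healthy
    elif score >= 21:
        return 1  # mild dementia
    elif score >= 10:
        return 2  # moderate dementia
    return 3      # severe dementia

# Two task-specific heads on top of the shared pooled BERT representation.
dementia_out = tf.keras.layers.Dense(2, activation="softmax", name="dementia")(pooled)
mmse_out = tf.keras.layers.Dense(4, activation="softmax", name="mmse")(pooled)

mtl_model = tf.keras.Model([input_ids, attention_mask], [dementia_out, mmse_out])
alpha = 0.1  # weight of the auxiliary task, as in Section VI
mtl_model.compile(optimizer=tf.keras.optimizers.Adam(1e-6),
                  loss={"dementia": "sparse_categorical_crossentropy",
                        "mmse": "sparse_categorical_crossentropy"},
                  loss_weights={"dementia": 1.0, "mmse": alpha})  # joint loss L_dementia + alpha * L_MMSE
```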
MTL-BERT-DE (Multiclass): Similarly to [46], we pass each transcription into a BERT model. The output of the BERT model is passed through two separate BERT encoders, i.e., double encoders, which are followed by dense layers so as to identify dementia and classify the MMSE score into one of the four classes mentioned above. For identifying dementia, we use a dense layer with 2 units and a softmax activation function and minimize the cross-entropy loss function. For classifying the MMSE score, we use a dense layer with 4 units and a softmax activation function and minimize the cross-entropy loss function.
VI. EXPERIMENTS
All experiments are conducted on a single Tesla P100-PCIE-16GB GPU.
A. Single-Task Learning
Experimental Setup: Firstly, we divide the train set provided by the Challenge into a train and a validation set (65%-35%). Next, we train the proposed architectures five times and test them using the test set provided by the Challenge. Specifically, we freeze the weights of each pretrained model (BERT, BioBERT, BioClinicalBERT, ConvBERT, RoBERTa, ALBERT, and XLNet) and update the weights of the remaining layers. In this way, these pretrained models act as fixed feature extractors. We train the proposed architectures using the Adam optimizer with a learning rate of 1e-4. We apply EarlyStopping and stop training if the validation loss has stopped decreasing for 9 consecutive epochs. We also apply ReduceLROnPlateau, where we reduce the learning rate by a factor of 0.2 if the validation loss has stopped decreasing for 3 consecutive epochs. When this training procedure stops, we unfreeze the weights of the pretrained models and train the entire deep learning architectures using the Adam optimizer with a learning rate of 1e-5. We apply EarlyStopping with a patience of 3 based on the validation loss. In terms of models with a co-attention mechanism, we start training the proposed architectures using the Adam optimizer with a learning rate of 1e-3 and follow the same methodology. We also apply dropout after the co-attention mechanism with a rate of 0.4. For BERT, we have used the base-uncased model, for BioBERT we have used BioBERT v1.1 (+PubMed), for BioClinicalBERT we have used the Bio_ClinicalBERT model (https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT), for ConvBERT we have used the base model, for RoBERTa we have employed the base model, for ALBERT we have used the base-v1 model, and for XLNet we have used the base model. For these pretrained models, we have used the Transformers library [47].
Evaluation Metrics: We evaluate our results using Accuracy, Precision, Recall, F1-score, and Specificity. All these metrics have been calculated using the dementia class as the positive one.
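The two-stage schedule (frozen backbone, then full fine-tuning) could look as follows in Keras; `backbone`, `model`, `train_ds`, `val_ds` and the epoch budget are placeholders rather than details from the paper.

```python
import tensorflow as tf

stage1_callbacks = [
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=9),
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.2, patience=3),
]

# Stage 1: backbone frozen, only the task-specific head is trained.
backbone.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=100, callbacks=stage1_callbacks)

# Stage 2: backbone unfrozen, whole network fine-tuned with a smaller learning rate.
backbone.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5), loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=100,
          callbacks=[tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3)])
```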
B. Multi-Task Learning
Comparison with state-of-the-art approaches: For the primary task (AD Classification task), we compare our introduced models with BERT base [16], since this research work proposes a multitask learning model and tests its proposed approach on the ADReSS Challenge test set.
Experimental Setup: Firstly, we divide the train set provided by the Challenge into a train and a validation set (65%-35%). Next, we train the proposed architectures five times and test them using the test set provided by the Challenge. We use the Adam optimizer with a learning rate of 1e-6. We apply EarlyStopping and stop training if the validation loss has stopped decreasing for 8 consecutive epochs. To deal with the class imbalance across the MMSE categories, we apply balanced class weights to the loss function (L_MMSE). We set α of (4) equal to 0.1.
Evaluation Metrics: For the primary task (AD Classification task), we evaluate our results using Accuracy, Precision, Recall, F1-score, and Specificity. All these metrics have been calculated using the dementia class as the positive one.
For the auxiliary task (MMSE Classification task), we evaluate our results using the average weighted Precision, average weighted Recall, and average weighted F1-score.
VII. RESULTS
A. Single-Task Learning Experiments
The results of the proposed models mentioned in Section V-A are reported in Table II. Also, Table II provides a comparison of our introduced models with existing research initiatives.
Regarding our proposed transformer-based models, one can easily observe that BERT obtains the highest Recall, F1-score, and Accuracy accounting for 81.66%, 86.73%, and 87.50% respectively. Specifically, BERT outperforms the other introduced transformer-based models in Recall by 1.67-13.33%, in F1-score by 2.01-10.98%, and in Accuracy by 1.25-9.17%. BioClinicalBERT achieves the second highest Accuracy and F1-score accounting for 86.25% and 84.72% respectively. Also, BioClinicalBERT obtains the highest Precision score equal to 95.03% surpassing the other transformer-based models by 4.79-15.88%. RoBERTa achieves comparable results to BERT and BioClinicalBERT yielding an Accuracy and F1-score of 84.16% and 82.81% respectively. In addition, BioBERT and ConvBERT demonstrate slight differences in Accuracy and F1-score, with BioBERT surpassing ConvBERT in both metrics. Specifically, BioBERT surpasses ConvBERT in F1-score by 0.46% and in Accuracy by 0.84%. Moreover, we observe that ALBERT and XLNet achieve Accuracy scores equal to 78.33%, with ALBERT surpassing XLNet in F1-score by 2.70%.
Regarding our proposed transformer-based models with a coattention mechanism, they achieve lower performance than the proposed transformer-based models except for ConvBERT+Co-Attention, ALBERT+Co-Attention, and XLNet+Co-Attention. More specifically, ConvBERT+Co-Attention presents a slight surge of 0.42% in Accuracy in comparison with ConvBERT, ALBERT+Co-Attention presents an increase in Accuracy by 1.67% in comparison with ALBERT, and XLNet+Co-Attention demonstrates a slight increase of 0.42% in Accuracy in comparison with XLNet. BERT+Co-Attention attains the highest F1-score and Accuracy accounting for 83.85% and 83.75% respectively. BERT+Co-Attention outperforms the other models in terms of F1-score by 1.42-7.43%, and in terms of Accuracy by 1.25-5.00%. ConvBERT+Co-Attention and BioClinicalBERT+Co-Attention demonstrate slight differences in F1-score and Accuracy, with ConvBERT+Co-Attention surpassing BioClinicalBERT+Co-Attention in F1-score by 0.44% and in Accuracy by 0.42%. BioBERT+Co-Attention and ALBERT+Co-Attention achieve almost equal F1-score results, with BioBERT+Co-Attention attaining a higher Accuracy score than ALBERT+Co-Attention by 1.66%. RoBERTa+Co-Attention and XLNet+Co-Attention demonstrate low performances attaining an Accuracy of 79.16% and 78.75% respectively.
Overall, BERT constitutes our best performing model, since it outperforms all the other introduced models in F1-score and Accuracy. Although there are models surpassing BERT in Precision and Recall, BERT outperforms all of them in F1-score, which constitutes the harmonic mean of Precision and Recall. In addition, there are models that outperform BERT in Specificity. However, high specificity combined with low recall means that the model cannot reliably identify AD patients, and consequently AD patients are misdiagnosed as non-AD ones.
With regards to our introduced models, one can easily observe that
In comparison to the research work [16], as one can easily observe, both our introduced models attain a higher Accuracy score. To be more precise, MTL-BERT (Multiclass) outperforms BERT base [16] in Accuracy by 5.42%. In addition, MTL-BERT-DE (Multiclass) surpasses the research work [16] in Accuracy by 4.17%. These differences in performance are attributable to the fact that we adopt a different training procedure than the one adopted by [16], we consider the MMSE task as a multiclass classification task instead of a regression task, as well as to the different architectures proposed.
B.2. Auxiliary Task:
The results of the introduced models mentioned in Section V-B for the auxiliary task (MMSE Classification task) are reported in Table IV.
VIII. ANALYSIS OF THE LANGUAGE USED IN CONTROL AND DEMENTIA GROUPS
We finally perform an extensive analysis to uncover some unique characteristics, which discriminate the AD patients from the non-AD ones, and understand the predictions made by our best performing model as well as its limits.
A. Text Statistics
We first extract some statistics, namely the syllable count, the lexicon count, the difficult words, and the sentence count, using the TEXTSTAT library in Python, in order to better understand the differences in language used between the control and dementia groups. More specifically, the syllable count refers to the number of syllables, the lexicon count to the number of words, and the sentence count to the number of sentences present in the given text. With regard to the difficult words, they refer to the number of polysyllabic words (syllable count > 2) that are not included in the list of words of common usage in English [48]. After extracting these statistics per transcript, we calculate the mean and standard deviation for both the control and dementia groups. We test for statistical significance using an independent t-test for each metric between the control and dementia groups and adjust the p-values using the Benjamini-Hochberg correction [49]. As one can easily observe in Table V, the control group presents a significantly higher number of syllables, words, and difficult words than the dementia group.
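A small sketch of this analysis with textstat, SciPy and statsmodels is given below; `control_texts` and `dementia_texts` are assumed to be lists of raw transcript strings.

```python
import numpy as np
import textstat
from scipy import stats
from statsmodels.stats.multitest import multipletests

metrics = {
    "syllable_count": textstat.syllable_count,
    "lexicon_count": textstat.lexicon_count,
    "difficult_words": textstat.difficult_words,
    "sentence_count": textstat.sentence_count,
}

p_values = []
for name, fn in metrics.items():
    control_vals = [fn(t) for t in control_texts]
    dementia_vals = [fn(t) for t in dementia_texts]
    t_stat, p = stats.ttest_ind(control_vals, dementia_vals)   # independent t-test per metric
    p_values.append(p)
    print(name, np.mean(control_vals), np.std(control_vals), np.mean(dementia_vals), np.std(dementia_vals))

# Benjamini-Hochberg correction over the four tests
rejected, p_adjusted, _, _ = multipletests(p_values, method="fdr_bh")
```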
B. Vocabulary Uniqueness
In order to understand the vocabulary similarities and differences between the control and dementia groups, we adopt the methodology proposed by [50]. Formally, let P and C be the sets of unique words included in the control group and the dementia group respectively. Next, we calculate the Jaccard index, given by (5), in order to measure the similarity between the two finite sample sets: J(P, C) = |P ∩ C| / |P ∪ C| (5). More specifically, the Jaccard index is a number between 0 and 1, where 1 indicates that the two sets, namely P and C, contain the same elements, while 0 indicates that the two sets are completely different.
As observed in Table VI, the Jaccard index between the control and dementia groups is equal to 0.4049, which indicates that people with dementia tend to use a different vocabulary than those in the control group.
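The vocabulary overlap can be computed with a few lines of Python, as sketched below; simple whitespace tokenization is an assumption (the exact preprocessing is not specified here).

```python
def jaccard_index(texts_a, texts_b):
    """Jaccard index J = |A ∩ B| / |A ∪ B| between the unique-word sets of two groups of transcripts."""
    vocab_a = {w.lower() for t in texts_a for w in t.split()}
    vocab_b = {w.lower() for t in texts_b for w in t.split()}
    return len(vocab_a & vocab_b) / len(vocab_a | vocab_b)

similarity = jaccard_index(control_texts, dementia_texts)  # reported as 0.4049 in Table VI
```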
C. Word Usage
Apart from finding the vocabulary similarities and differences, it is imperative that patterns of word usage be investigated. Thus, following the methodology introduced in [50], the main objective of this section is to explore the differences between the two classes (control and dementia) with regard to the probability of using specific words more than others. Formally, let D_1 and D_2 be two documents, where D_1 includes all the transcriptions of the control group, whereas D_2 consists of the transcriptions of the dementia group. Moreover, we define S as the entire corpus consisting of D_1 and D_2. The probability of a word w_i in the document D_1 within the collection S is given by (6): P(w_i | D_1, S) = (1 − α_D) · P(w_i | D_1) + α_D · P(w_i | S). Similarly, the probability of a word w_i in the document D_2 within the collection S is given by (7): P(w_i | D_2, S) = (1 − α_D) · P(w_i | D_2) + α_D · P(w_i | S). Here we employ the Jelinek-Mercer smoothing method and consider that α_D ∈ [0, 1]. More specifically, α_D is a parameter that controls the probability assigned to words included only in one document (D_1 or D_2). In our experiments, we set α_D equal to 0.2.
Moreover, we define P(w_i | S) = s_{w_i} / |S|, where s_{w_i} denotes the number of times the word w_i appears in the collection, whereas |S| is the total number of word occurrences in the collection. Similarly, P(w_i | D_1) = d_{w_i} / |D_1|, where d_{w_i} denotes the number of times the word w_i appears in the document D_1, whereas |D_1| is the total number of word occurrences in D_1. The same methodology is adopted for calculating P(w_i | D_2).
After having calculated the two distributions, i.e., P(w_i | D_1, S) and P(w_i | D_2, S), we exploit the Kullback-Leibler (KL) divergence, in order to measure the difference between these two distributions. The KL divergence is always greater than or equal to zero and is given by (8): KL(D_1 || D_2) = Σ_i P(w_i | D_1, S) · log [ P(w_i | D_1, S) / P(w_i | D_2, S) ]. The larger it gets, the more different the two distributions are.
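A sketch of the smoothed distributions and the KL divergence, assuming α_D = 0.2 and whitespace tokenization, is given below; `control_texts` and `dementia_texts` are again assumed lists of transcript strings.

```python
import math
from collections import Counter

def word_probs(tokens_doc, tokens_all, alpha_d=0.2):
    """Jelinek-Mercer smoothed P(w | D, S) for every word of the collection vocabulary."""
    doc_counts, all_counts = Counter(tokens_doc), Counter(tokens_all)
    n_doc, n_all = len(tokens_doc), len(tokens_all)
    return {w: (1 - alpha_d) * doc_counts[w] / n_doc + alpha_d * all_counts[w] / n_all
            for w in all_counts}

def kl_divergence(p, q):
    return sum(p[w] * math.log(p[w] / q[w]) for w in p if p[w] > 0 and q[w] > 0)

tokens_d1 = [w.lower() for t in control_texts for w in t.split()]
tokens_d2 = [w.lower() for t in dementia_texts for w in t.split()]
tokens_s = tokens_d1 + tokens_d2

p1 = word_probs(tokens_d1, tokens_s)   # P(w | D1, S)
p2 = word_probs(tokens_d2, tokens_s)   # P(w | D2, S)
print(kl_divergence(p1, p2), kl_divergence(p2, p1))
```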
As one can easily observe in Table VII, the KL divergence between control and dementia groups is high indicating that these two groups present differences regarding the probability of using some words more than others. Our findings agree with the ones in [50], where the authors state that there are clear differences in terms of language use between positive (depression and self-harm) and control group, where the values of KL-divergence range from 0.18 to 0.21.
D. Linguistic Feature Analysis
Following the method introduced by [51], the main objective of this section is to shed light on which unigrams and pos-tags are most correlated with each class. To facilitate this, we compute the point-biserial correlation between each feature (unigram and pos-tag) across all the transcriptions and a binary label (0 for the control and 1 for the dementia group). Before computing the correlation, we normalize the features so that they sum up to 1 across each transcription. We use the point-biserial correlation, since it is a correlation measure between continuous and binary variables. It returns a value between -1 and 1. Since we are only interested in the strength of the correlation, we compute the absolute value, where negative correlations refer to the control group (label 0) and positive correlations refer to the dementia group (label 1). We report our findings in Table VIII, where all correlations are significant at p < 0.05, with the Benjamini-Hochberg correction [49] for multiple comparisons.
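A minimal sketch of this correlation analysis with SciPy is shown below; `feature_matrix` (transcripts × normalized unigram/pos-tag frequencies) and `labels` (0 = control, 1 = dementia) are assumed inputs.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

correlations, p_values = [], []
for j in range(feature_matrix.shape[1]):
    r, p = stats.pointbiserialr(labels, feature_matrix[:, j])
    correlations.append(abs(r))   # strength only; the sign indicates which class the feature leans towards
    p_values.append(p)

rejected, p_adjusted, _, _ = multipletests(p_values, method="fdr_bh")   # Benjamini-Hochberg correction
top_features = np.argsort(correlations)[::-1]                           # most strongly correlated first
```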
As one can easily observe, the pos-tags associated with the dementia group are the following: RB (adverbs), PRP (personal pronoun), VBD (verb in past tense), and UH (interjection). On the other hand, people in the control group tend to use VBG (verb, gerund, or present participle), DT (determiner), and NN (noun). These findings can be justified in Table IX, where we present three examples of transcripts belonging to the control group and three examples of transcripts belonging to the dementia one. More specifically, we have assigned colours to different pos-tags, so as to render the differences in the language patterns used by each group easily understandable to the reader. To be more precise, red colour indicates the VBG pos-tag, yellow refers to the DT pos-tag, fuchsia to the RB pos-tag, apricot to the PRP pos-tag, navy blue to the VBD pos-tag, and the pine green to the UH pos-tag.
We observe that people in the dementia group tend to use personal pronouns (he, she, I, them etc.) very often, since they are unable to remember the specific terms (mom, boy, etc.). This finding agrees with the research conducted by [52], where the authors state that personal pronouns present a high frequency in the speech of AD patients, since these people cannot find the target word. To be more precise, in a conversation people have to remember what they have said during the entire conversation. However, this is not possible in AD patients, who present working memory impairment and thus tend to produce empty conversational speech (use of personal pronouns). On the other hand, people in the control group tend to use more nouns instead of personal pronouns, since they are able to maintain various kinds of information.
Moreover, AD patients tend to use verbs in the past tense (were, forgot, did, started) in contrast to people who are not suffering from dementia, who use verbs in the present participle. One typical example that illustrates this difference can be seen in the fifth transcription in Table IX, i.e., "oh have you heard of that new game that they started to play after christmas ? did you". The AD patient perhaps remembers a personal story from the past that they want to narrate, instead of the task they have been assigned to conduct. Therefore, the patient is not able to stay focused on describing the picture. This finding is consistent with [53], [54], where the authors state that AD patients present difficulty in maintaining and continuing the development of a topic and thus demonstrate unexpected topic shifts. Also, this finding reveals a difference between the language used by AD patients and by agrammatic aphasics. Specifically, patients with agrammatic aphasia typically have problems using past tense inflection and instead rely on infinitive or present tense verb forms [55].
In addition, AD patients tend to use the UH (oh, yeah, well) and the RB (maybe, probably) pos-tags, since they are not certain of what they are describing due to the cognitive impairment. Concurrently, the UH pos-tag constitutes an example of empty speech. More specifically, this pos-tag is used as filler at the beginning of each utterance, since AD patients are thinking of what to say.
E. Explainability -Error Analysis
In this section, we employ LIME [18] (using 5000 samples) to explain the predictions made by our best performing model, namely BERT, and shed more light regarding the differences in language between AD and non-AD patients. More specifically, LIME generates local explanations for any machine learning classifier by introducing an interpretable model, which is trained on data generated through observing differences in the classification performance when removing tokens from the input string. Examples of explanations generated by LIME are illustrated in Figs. 1-4. More specifically, Fig. 1 illustrates two transcripts, whose ground-truth label is dementia, while our model predicts them as belonging to non-AD patients. Fig. 2 refers to transcripts with both ground-truth label and prediction corresponding to dementia. In Fig. 3, two transcripts are presented, whose prediction is control and true label is control too. Finally, Fig. 4 illustrates transcripts, which are misclassified. The ground-truth is control, whereas the prediction is dementia. Moreover, as one can observe, each token has been assigned a colour, either blue or orange. To be more precise, the blue colour indicates which tokens are indicative of the control group, whilst the orange colour indicates tokens, which are used mainly by AD patients. The more intense the colours are, the more important these tokens are towards the final classification of the transcript.
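Generating such an explanation with the LIME text explainer could look roughly as follows; wrapping the sigmoid output into a two-column probability matrix and the tokenizer/model names are assumptions.

```python
import numpy as np
from lime.lime_text import LimeTextExplainer

def predict_proba(texts):
    """Wrap the trained classifier so LIME receives one probability per class."""
    enc = tokenizer(list(texts), padding=True, truncation=True, return_tensors="tf")
    p_dementia = model.predict([enc["input_ids"], enc["attention_mask"]]).reshape(-1)
    return np.stack([1 - p_dementia, p_dementia], axis=1)   # [P(control), P(dementia)]

explainer = LimeTextExplainer(class_names=["control", "dementia"])
explanation = explainer.explain_instance(transcript, predict_proba, num_samples=5000)
explanation.show_in_notebook()
```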
As one can easily observe in Fig. 2, tokens belonging to the UH pos-tag, such as yeah and oh, are identified as important for the dementia class by our best performing model. Moreover, personal pronouns (she, they) and verbs in the past tense (got, had) are also indicative of dementia. Also, our model considers the token "here", which corresponds to the RB pos-tag, indicative of the dementia class. These findings are consistent with the ones in Section VIII-D, where we have found that PRP, VBD, UH pos-tags as well as the unigram "here" are significantly correlated with the dementia class. In addition, our model identifies the repetition of token "and" as important for the dementia class. This finding agrees with previous research works [17], where the word "and" indicates a short answer and burst of speech.
Regarding Fig. 3, one can easily observe that our model identifies tokens belonging to the VBG (putting, drying, blowing, standing, etc.), DT (the, a), and NN (cookie, action, stool, etc.) pos-tags as significant for the control class. Concurrently, consistent with the findings in Section VIII-D, the unigrams "curtain" and "window" are used mainly by non-AD patients.
With regards to Figs. 1 and 4, our model is not able to classify these transcripts correctly. One possible reason for such misclassifications has to do with the fact that these transcripts include pos-tags which are indicative of both the control and the dementia class. To be more precise, in Fig. 1, the majority of tokens in both transcripts belong to the VBG, NN, and DT pos-tags, which are correctly identified by our model as significant for the control group. Words, like "and", "him", and "well" are used in a low frequency. Similarly to Fig. 1, in Fig. 4, the majority of tokens in each transcript belong to the pos-tags which are significantly correlated with the dementia class. This can be illustrated in Fig. 4c, where we observe the usage of words, like "and", "yeah", "well" & "got".
IX. CONCLUSIONS AND FUTURE WORK
We introduced both single-task and multi-task learning models. Regarding single-task learning models, we employed several transformer-based networks and compared their performances. Results showed that BERT achieved the highest classification performance, with an accuracy of 87.50%. Concurrently, we introduced siamese networks coupled with a co-attention mechanism, which can detect AD patients with an accuracy of up to 83.75%. The multi-task learning setting consisted of two tasks, a primary and an auxiliary one. The primary task was the identification of dementia (binary classification), whereas the auxiliary task was the categorization of the severity of dementia into one of four categories (healthy, mild, moderate, or severe dementia; multiclass classification). Specifically, we proposed two multi-task learning models. Results showed that our model achieves competitive results in the MTL framework, reaching an accuracy of up to 86.25% on the detection of AD patients. Next, we performed an in-depth linguistic analysis, in order to better understand the differences in language between AD and non-AD patients. Finally, we employed LIME, in order to shed light on how our best performing model works. Findings suggest that AD patients tend to use personal pronouns, interjections, adverbs, verbs in the past tense, and the token "and" at the beginning of utterances with high frequency. On the contrary, healthy people use verbs in the present participle or gerund, nouns, as well as determiners.
One limitation of the current research work pertains to the small dataset used for conducting our experiments. However, we opted for this dataset in order to mitigate different kinds of biases that could otherwise influence the validity of the proposed approaches.
We conducted our experiments on the ADReSS Challenge dataset, which is matched for gender and age and consists of a statistically balanced, acoustically enhanced set of recordings of spontaneous speech. Therefore, the results of this study could be integrated into an application, which will predict whether a person is an AD patient and will provide at the same time the reasons for this prediction via the explainability method.
In the future, we plan to investigate multimodal deep learning models incorporating both text and audio. Specifically, we plan to propose end-to-end trainable deep neural networks, in contrast to existing research initiatives, which train multiple models separately and then use majority-voting approaches. In addition, our aim is to investigate fusion methods, in order to assign more importance to the most relevant modality and suppress the irrelevant information. Another future plan is to exploit further explainability techniques, such as anchor explanations [56].
Table IX caption: Examples of transcripts along with their labels. Red colour indicates the VBG pos-tag, yellow refers to the DT pos-tag, fuchsia to the RB pos-tag, apricot to the PRP pos-tag, navy blue to the VBD pos-tag, and pine green to the UH pos-tag.
Technical, Economical and Social Assessment of Photovoltaics in the Frame of the Net-metering Law for the Province of Salta, Argentina
Central and Northern Argentinean regions possess a high potential for the generation of solar energy. The realization of this potential is an alternative to alleviate the strong dependence on imports of fossil energy and to reduce the CO2 emissions of the country. However, the adoption of photovoltaics (PV) is still in an incipient state. It is undermined by a context of heavily subsidized electricity prices, high equipment and installation costs, and a lack of information, training and experience in handling PV technology. This paper presents a techno-economic assessment of the application of the recently enacted net-metering law for promoting renewable energies (RE) in the Province of Salta (Northwest Argentina) for the case of PV. The assessment shows under which conditions and for which types of consumers it is profitable to adopt PV in the context of the law. This analysis is supported by a participatory planning approach, in the form of a study of stakeholders' attitudes towards RE, intentions to adopt PV and their knowledge about the law. The results of this study and the economic analysis serve to provide recommendations aimed at increasing the level of PV adoption in the province.
Introduction
Energy systems planning processes play a fundamental role in the promotion and incorporation of renewable energies (RE) into national energy matrices and also in developing solutions for energy access at a local level. A wide variety of examples around the world, such as India [1], China [2] and countries from the European Union [3], show their importance.
Nowadays, this topic plays a crucial role in Argentina, since different public policies for environmental and social inclusion issues have recently been implemented, especially planning strategies, projects and laws to promote RE. The national Law 26190 to promote the use of RE sources for electric energy generation was passed in 2006. It establishes as an objective a coverage of 8% of the national electricity demand by 2016 through RE sources. The law introduces feed-in tariffs for wind, biomass, small scale hydro, tidal, geothermal and solar power for a period of 15 years [4]. However, at the end of 2014, RE represented only 1% of the total electricity supply and 1.7% of the installed generation capacity. While wind contributed 265 MW and biomass 1.150 MW, only 1 MW was provided by PV [5]. The poor results motivated a new law enacted in September 2015, the national Law 27191, which provides financial arrangements and establishes new RE penetration goals. The 8% objective for national electricity demand coming from RE should now be achieved by 2017, but there is also a new mid-term objective of 25% RE penetration by 2025 [6]. Furthermore, several provinces of the country decided to take their own measures and enacted their own laws for promoting RE.
In this paper, a technical, economical and social assessment of the photovoltaic (PV) energy supply potential for the Province of Salta in the frame of a net-metering law is presented. The province is located in the northwest of the country and has a high potential for energetic exploitation, principally of solar and biomass resources [7]. The strategy that has been followed by this province includes a RE plan, a law for RE promotion and a net-metering law. The objective of the present study is to analyze the PV potential and provide recommendations to activate this potential, while considering technical, economical and social constraints derived from a participatory process.
The paper is structured as follows: First, the rest of the Introduction Section is dedicated to the explanation of the RE promotion framework in the Province of Salta and to the state of the art of PV potential evaluation. Second, the methodology for the participatory consultation and the PV technical and economical assessment is explained. Third, the results are presented and discussed. Fourth, in the final section, conclusions are drawn and recommendations are given.
Renewable Energy Promotion Framework in the Province of Salta
In the Province of Salta, a planning process was set up to promote and encourage the use of RE sources. This process began in 2011, led by the secretary of energy of the province and with the participation of various public and private institutions [8]. The first result of this process was a RE plan for the province that appeared in 2014. This plan seeks to promote the generation and use of RE to meet the energy requirements of the inhabitants of the province, diversify the energy matrix and improve industrial competitiveness and quality of life [7]. One of the initial objectives of the plan was to establish a reliable framework to strengthen public and private investment to adopt clean energy sources. Two laws were enacted in this context: law No. 7823, "Régimen de Fomento para las Energías Renovables" (Promotion scheme for Renewable Energies), and law No. 7824, "Balance Neto. Generadores residenciales, industriales y productivos" (Net metering. Residential and industrial electricity generators). Both laws were enacted in 2014 and their corresponding regulations were completed in February 2015.
Law No. 7823 promotes the use, production, research, development and sustainable use of RE. The government gives different benefits to natural or legal persons who develop, manufacture and/or install technologies for harnessing RE. The most important include: (1) exemption from provincial taxes (up to 100% for a 10 year period); (2) tax credits for up to 70% of the value of the investment in equipment, with a five-year grace period and without interest; and (3) assistance from the provincial government in obtaining credits and with technological, economic, financial and administrative aspects.
The net-metering law addresses residential and industrial electricity generators with the intention of motivating the installation of RE generation plants. The notion of "Net Balance", in the legislation of the Province of Salta, refers to the difference between the amount of electricity consumed from the grid and the amount of generated RE energy that is fed into the grid by a certain user during a certain period of time. The law promotes the installation of RE electric generators in private homes, businesses and industries (installations of up to 30 kWp for residential users and up to 100 kWp for businesses and industries). All electricity generated beyond own needs and fed into the grid is bought by the local electricity company at a differential rate. This rate varies depending on the RE source and is in general higher than the current final energy price for consumers. The highest of the rates is the one paid for energy produced with PV, which is currently around ten times higher than the final price paid for electricity by average users. Users wishing to access the net-metering mode must have an installation meeting all technical requirements and allow for supervision by the energy distribution company. The obligations of the users include the payment of an initial connection fee and a yearly inspection fee. As an additional promotion measure, the law contemplates that all energy generated by the user will be bought at the differential rate for the first two years. The users continue paying the usual energy price for every kWh that they receive from the grid. It is only from the third year onwards that the net balance is adopted.
State of the Art of Photovoltaic Potential Evaluation
The evaluation of PV potential usually follows a top-down approach in which the theoretical, the technical and the economic PV potential are estimated consecutively [9]. The theoretical potential concerns the evaluation of solar radiation availability. The technical potential is a fraction of the theoretical one and incorporates the consideration of the energy transformation efficiency of the PV panels, inverters and further components of the installation. The economic potential is based on "soft" factors which may change over time [10]. It represents the amount of energy that can be generated by PV installations under the current or expected local cost structure, legislation and public acceptance of the technology [9].
In situ measurements of solar radiation, satellite data, GIS-based procedures, reanalysis data or a combination of these are common data sources to estimate the theoretical and technical PV energy generation potential. The choice of data source depends on the study area. In situ measurements are considered the most accurate sources and normally serve to validate the other alternatives [11]. However, their availability is limited and their access is restricted. Even European weather station networks recording solar radiation data are not dense enough to provide proper coverage [12,13], and data sets are not necessarily freely accessible to the broad public. On the other hand, global solar radiation and temperature data derived from satellite images are available for most parts of the world at hourly or higher temporal resolutions, at no cost in many cases, but their accuracy depends strongly on the algorithms used to deal with cloud coverage and to derive the variables of interest from the sensors. Algorithms such as Heliosat in its different versions [14] and the one used by the Land Surface Analysis Satellite Application Facility (LSA-SAF) have been validated in multiple locations [15][16][17][18][19], but there are accuracy problems in mountainous regions [20,21]. Reanalysis data sets provide global coverage and there are several freely available data sources (see, e.g., [22]), but these have relatively low spatial and temporal resolutions compared to data derived from satellite images. Moreover, solar radiation models integrated in GIS tools have been widely used to estimate solar radiation and PV technical potential in areas with complicated topography and in urban environments [23][24][25][26]. These rely, however, on other atmospheric variables that must be retrieved from in situ measurements, satellite images and/or reanalysis data. Examples of methodologies combining different sources have been proposed for the whole world [27], for Europe [28], for several detailed studies of cities and municipalities (e.g., [29][30][31]), and for comparing different types of technologies and incentive programs for PV (e.g., [32][33][34]).
The calculation of the PV technical potential varies widely in the scientific literature. On the one hand, there are studies where the PV yield is calculated by merely multiplying the total solar energy accumulated on a surface in a year by a certain efficiency factor (see, e.g., [35]). On the other hand, there are cases where the yield is calculated in high temporal resolution, considering effects of shadowing, ambient and roof-top temperature (see, e.g., [36][37][38]).
The economic PV potential is assessed taking into consideration time-varying factors that are decisive for the realization of projects and the adoption of PV. Regulatory mechanisms such as feed-in tariffs and net-metering laws, as well as the local cost structure (installation, capital and electricity costs), are quantitative factors that are usually used for the economic PV potential evaluation [39]. An attractive economic potential based on these factors is necessary, but does not guarantee the market success of PV [40]. Diffusion and acceptance barriers perceived by potential PV investors must also be identified and overcome. Participatory processes constitute a widely applied strategy to identify these barriers and propose alternatives to overcome them [41][42][43][44].
Concerning only the quantitative factors of the economic potential, the existing literature relies either on cost-related or on investment attractiveness figures [45]. The levelized cost of electricity (LCOE) is the most widely used cost-related indicator, but is also the target of strong criticism [46]. LCOE serves to compare the costs of different energy sources while correcting for differences in the operation and investment time horizons. This requires, however, assumptions about discount rates, which are difficult to make in a context of high uncertainty [47]. Typical investment attractiveness figures for PV are payback periods, net present value (NPV) and internal rate of return (IRR). NPV allows an intuitive assessment of individual projects, but has the same drawback as LCOE, since it depends strongly on the applied discount rates. The other two indicators do not require assumptions about discount rates, but still have their own limitations. Payback periods tend to overestimate future returns, because they do not incorporate the time value of money. IRR does consider the time value of money, but is not an appropriate indicator when evaluating projects with different time horizons and scales [45].
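For illustration, the three investment attractiveness figures can be computed for a hypothetical PV project as sketched below; all cash-flow values and the discount rate are made-up numbers for demonstration only, not data from this study.

```python
import numpy_financial as npf

investment = -10000.0                  # assumed initial installation cost (currency units)
yearly_savings = [1500.0] * 20         # assumed net yearly savings/income over a 20-year horizon
cash_flows = [investment] + yearly_savings

npv = npf.npv(rate=0.08, values=cash_flows)   # net present value at an assumed 8% discount rate
irr = npf.irr(cash_flows)                     # internal rate of return (no discount-rate assumption needed)
payback_years = next(i for i in range(1, len(cash_flows))
                     if sum(cash_flows[:i + 1]) >= 0)   # simple (undiscounted) payback period
print(f"NPV = {npv:.0f}, IRR = {irr:.1%}, payback = {payback_years} years")
```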
Methodology
The methodology is divided into four subsequent stages (see Figure 1). First, a participatory consultation is conducted with a wide range of stakeholders from the Province of Salta, Argentina. This is analyzed and serves as an input for the further stages. Second, a series of case studies to assess the application of the law are defined. These correspond to different types of consumers (households, businesses, industry and institutions connected to the grid) and levels of electricity consumption. Third, the technical potential for PV installations in the locations of the case studies is calculated. Solar global radiation and temperature time series are retrieved from the ECMWF ERA-Interim reanalysis model data. These serve as input for a PV energy generation model that delivers hourly energy generation data. This output is cumulated to monthly data to match the temporal resolution of the demand data. Fourth, an economical assessment for every case study is performed.
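The PV generation model used in the study is not detailed at this point; the sketch below shows one common simplified formulation (NOCT cell-temperature estimate plus a linear temperature coefficient) that converts hourly radiation and ambient temperature into hourly PV output, with all parameter values being illustrative assumptions.

```python
import numpy as np

def pv_power(ghi_wm2, temp_amb_c, p_peak_kw=1.0, noct_c=45.0, gamma=-0.004, losses=0.10):
    """Hourly PV output in kW from global irradiance (W/m2) and ambient temperature (deg C); illustrative only."""
    ghi = np.asarray(ghi_wm2, dtype=float)
    t_cell = np.asarray(temp_amb_c, dtype=float) + (noct_c - 20.0) * ghi / 800.0   # NOCT cell-temperature model
    p = p_peak_kw * (ghi / 1000.0) * (1.0 + gamma * (t_cell - 25.0))               # STC scaling + temperature derating
    return np.clip(p, 0.0, None) * (1.0 - losses)                                  # lump inverter/system losses

# Hourly output can then be cumulated to monthly energy (kWh) to match the demand data, e.g. with a
# pandas Series carrying a DatetimeIndex: monthly_kwh = hourly_power.resample("M").sum()
```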
Participatory Consultation
The planning process for RE in the province was defined from the beginning as dynamic [7]. In this context, it was decided to conduct a participatory consultation with an inclusive and constructive approach towards local users and/or institutions of the energy sector [48]. Two reflection and consultation activities were designed: an online population survey and an inter-agency workshop.
Survey: Renewable Energy in Salta
The aim of the survey was to collect information about the knowledge of the population of the City of Salta regarding the RE plan and the new RE laws. Additionally, the intention behind it was to analyze the possibility of improving the integration of RE at the local level.
The survey was divided into five sections with brief introductory explanations. To achieve wide dissemination, the survey was developed on an online platform [49]. It was distributed from the institutional email and Facebook page and was available for 30 days (July 2015).
Workshop: Contributions to Promote Renewable Energies in Salta: Actions, Projects and Proposals
The meeting was held in September 2015 in Salta City. The objective of this workshop was to reflect on the implementation of actions to promote RE in the province and to provide recommendations to promote RE in the local context. The intention was also to motivate deliberation and discussion by presenting to the participants the German case, a country that is considered in the international context as a leader in the global transition to a renewable energy future [50].
The workshop was structured in two parts: a discussion board of experts and a discussion in working groups. In the first part, experts presented an overview of the advances made in implementing the Renewable Energy Plan and promotion law in the Province of Salta, the effects and status of implementing RE laws in Germany, and the findings of the survey Renewable Energy in Salta. Presentations were led by members from the Secretary of Energy of the Province of Salta, the Deggendorf Institute of Technology and the Energy Planning and Land Management Group of the Argentinian Institute for non-conventional Energy Research (INENCO).
The second stage, group work, built on the progress and findings presented by the board and on the participants' own personal and institutional experiences. The discussion centered on analyzing the potential, limitations and promotion of RE in the current local context and on proposing actions to promote a wider application of RE in Salta.
Definition of Case Studies
The financial feasibility assessment of PV in the context of the net-metering law requires information beyond the mere expected output of a certain PV installation. Although the rate paid for electric energy fed into the grid by PV power plants is the same for all types of users, the expected income and savings (due to self-consumption) depend on the amount of demanded energy and the tariff that every user has to pay for it. The energy supply company of the province classifies its clients into eight different tariff classes. These are sub-divided depending on contracted energy capacity, total monthly demand and the voltage necessary to comply with the requirements of the client. The types of users range from small demands with contracted energy capacity below 10 kW (for residential purposes and small businesses) to high demands that require supply directly from the high-voltage grid (for industrial purposes). Based solely on the classification of the local utility, there are in total 22 different consumer types. Considering that users within a given demand class may also differ significantly in their electric consumption profiles, hundreds of consumer types could actually be distinguished.
To represent the diversity of consumer types, a case study approach is adopted. A survey of the energy consumption profiles of the population of the province and its clustering into consumer typologies is beyond the scope of this study. However, electric energy consumption data from 122 clients were provided by the local energy supply company. These are concentrated in the City of Salta in seven different locations (see Figure 2). The data include five years of monthly demand for electric energy from 10 single-family houses, 105 apartments from two different buildings, five commercial businesses and industries of the industrial park of the city, the largest university of the province and the justice building of the province. The five years of data are summarized in one average energy consumption year time series with monthly temporal resolution for each user. The maximum consumption in a certain month is used to match each single-family house, commercial and industrial user to one of the 22 consumption classes of the local energy supply company. In cases where more than one of these users fell into the same class, a synthetic consumption profile was generated by averaging the monthly consumption of all users in that class. In the case of the apartment buildings, the demand data were available for all apartments. This allows evaluating the adoption of PV for the whole building and also for one average apartment in it. The university and the justice building could serve as a benchmark for further public administration buildings and are therefore considered individually. At the end of this classification, 14 case studies remain. These are used for the technical and economic assessments and are presented in Table A1.
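This aggregation of the demand data into case-study profiles can be illustrated with a short, hypothetical sketch in Python/pandas. The file name, column names and class thresholds below are placeholders, not the actual schema or tariff classes of the local utility:

```python
import pandas as pd

# Hypothetical input: one row per client and month over five years,
# with columns client_id, year, month (1-12) and kwh.
demand = pd.read_csv("monthly_demand.csv")

# 1) Summarize the five years into one average consumption year per client.
avg_year = (demand.groupby(["client_id", "month"])["kwh"]
                  .mean()
                  .reset_index())

# 2) Assign each client to a tariff class using its maximum monthly consumption.
#    The two thresholds stand in for the 22 classes of the local utility.
def tariff_class(max_kwh):
    if max_kwh <= 300:
        return "small residential"
    if max_kwh <= 2000:
        return "large residential / small business"
    return "non-residential"

peak = avg_year.groupby("client_id")["kwh"].max().apply(tariff_class)
avg_year["tariff_class"] = avg_year["client_id"].map(peak)

# 3) Build one synthetic profile per class by averaging all clients in that class.
case_studies = (avg_year.groupby(["tariff_class", "month"])["kwh"]
                        .mean()
                        .unstack("month"))
print(case_studies)
```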
Technical Assessment
A top-down approach is used for calculating the technical PV potential. First, the average solar radiation on an hourly time-step basis for a year at the locations of the case studies is determined (the theoretical PV potential). Second, the solar radiation and temperature data are used as input for a PV model to calculate the hourly energy yield for every location (technical potential), assuming an installation size of one kWp.
The scarcity of ground-measured solar radiation data in Argentina is a well-known problem that obliges users to rely on other data sources [51,52]. To alleviate this, a plan to install 40 new stations able to precisely measure direct and diffuse radiation was launched in 2012 [53]. However, the network is still under construction [54] and some time will be necessary until these data become usable.
To compensate for the scarcity of data for the locations of the case studies, data from the ERA-Interim atmospheric reanalysis provided by the European Centre for Medium-Range Weather Forecasts (ECMWF) were used. Ramirez Camargo et al. [55] validate the Surface Solar Radiation Downwards (SSRD) of this data set (equivalent to global solar radiation) for the City of Salta and compare it to data obtained with statistical procedures and from satellite imagery. These authors stated that the hourly SSRD data present only a slightly worse fit to the ground-measured data compared to data obtained from processing satellite imagery, and they suggest using SSRD for studies aiming at a conservative estimation of PV potential. Although SSRD data are available from 1989 to the present [56], only data for 2013 and 2014 were downloaded. This simplifies the data processing effort without compromising the quality of the analysis. Following the conclusions of [57], two years of radiation data should suffice to represent average solar radiation patterns in the Argentinean Northwest, where the case studies are located. The SSRD data are available in three-hour time steps and account for the amount of energy on the surface, in J·m⁻², accumulated from the beginning of every day. The resampling and interpolation procedure proposed by [55] to obtain hourly instantaneous solar radiation from the SSRD data is also applied.
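A minimal sketch of this resampling step is given below. It de-accumulates the three-hourly, daily-accumulated SSRD field and interpolates to hourly values; it is a generic illustration under these assumptions, not a reproduction of the exact procedure of [55], and the input file name is a placeholder:

```python
import pandas as pd

# Hypothetical input: SSRD in J/m^2, three-hourly, accumulated from the
# beginning of each day (as delivered by the ERA-Interim reanalysis).
ssrd = pd.read_csv("ssrd_salta.csv", index_col="time", parse_dates=True)["ssrd"]

# 1) De-accumulate within each day: the difference between consecutive values
#    is the energy received during each three-hour interval. The first value
#    of a day is already an interval total, so it fills its own NaN.
interval_energy = ssrd.groupby(ssrd.index.date).diff().fillna(ssrd)

# 2) Convert interval energy (J/m^2 per 3 h) to mean irradiance (W/m^2).
irradiance_3h = interval_energy / (3 * 3600)

# 3) Interpolate to hourly values and remove negative artifacts.
irradiance_hourly = irradiance_3h.resample("1H").interpolate("linear").clip(lower=0)
```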
The instantaneous PV power output is calculated following the set of equations proposed by [36] in the adapted version adopted by [38]. Apart from the global irradiance, this equation also requires the panel efficiency, a temperature correction factor, an efficiency reduction factor due to the installation type, the area of the plant, the nominal operating temperature of the PV modules and the ambient temperature. Only the location-dependent parameters, i.e., solar radiation and temperature, cannot be retrieved from the technical documentation of PV panels or from the literature. Analogously to the procedure for obtaining the solar radiation data, the temperature at two meters above the ground for the same period of time is retrieved from ERA-Interim [56]. These values are provided in three-hour time steps and require resampling and a linear interpolation to obtain hourly values. The PV technical parameters are presented in Table A2. The panel efficiency was stated by the local vendors as the minimum achieved by the panels. Further parameters presented in Table A2 are taken from [58]. The hourly yield for 2013 and 2014 is calculated assuming that the instantaneous output remains constant every hour (i.e., solar irradiance and temperature do not change during a time step), an installation size of one kWp and additional inverter and cable losses. The average monthly PV output for one year is calculated from the two years of results in order to match the temporal resolution of the electricity demand data.
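The following sketch illustrates such a generation model. It does not reproduce the exact equation set of [36,38]; it uses a commonly applied formulation with a NOCT-based cell temperature correction, and all numerical parameters are placeholders rather than the values of Table A2. It assumes hourly irradiance and ambient temperature series such as the ones produced in the previous sketch:

```python
# Placeholder technical parameters (stand-ins for the values of Table A2).
ETA_PANEL = 0.15       # nominal module efficiency
GAMMA = 0.0045         # power temperature coefficient per K
NOCT = 45.0            # nominal operating cell temperature in degrees C
AREA_PER_KWP = 6.7     # module area per kWp in m^2
ETA_SYSTEM = 0.90      # inverter and cable losses

def pv_power_kw(irradiance_w_m2, temp_amb_c):
    """Instantaneous output of a 1 kWp installation in kW."""
    cell_temp = temp_amb_c + (NOCT - 20.0) / 800.0 * irradiance_w_m2
    eta = ETA_PANEL * (1.0 - GAMMA * (cell_temp - 25.0))
    return eta * AREA_PER_KWP * irradiance_w_m2 * ETA_SYSTEM / 1000.0

# irradiance_hourly and temperature_hourly are hourly pandas Series
# (see the SSRD sketch above for the irradiance).
hourly_kwh = pv_power_kw(irradiance_hourly, temperature_hourly)

# Output is assumed constant within each hour, so kW equals kWh per hour;
# monthly totals match the temporal resolution of the demand data.
monthly_kwh = hourly_kwh.resample("M").sum()
```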
Economic Assessment
The economic assessment includes cost-related and investment figures. The LCOE is estimated to be compared with the current price of energy and with the differential rate that is paid for energy generated by PV installations in the frame of the net-metering law. Expected income and savings (due to self-consumption), NPV and IRR are calculated to evaluate the economic feasibility of installing PV for each of the case studies. All calculations related to monetary values are performed in US dollars (USD), assuming an exchange rate of 9.728 Argentinian pesos (ARS) per USD, which is the official exchange rate at the end of November 2015. Furthermore, inflation is not considered in any of the indicators.
The LCOE is calculated following Equation (1) [59], and the NPV is calculated following Equation (2) [60]. The IRR provides the discount rate at which the NPV is equal to zero; it is therefore obtained by solving Equation (2) with the discount rate equal to the IRR and the NPV equal to 0 [60].

LCOE = \frac{\sum_{t=0}^{n} C_t / (1+i)^t}{\sum_{t=1}^{n} E_t / (1+i)^t}   (1)

where LCOE is the levelized cost of electricity generation, C_t are the expenditures in year t, E_t is the electricity generated in year t, n is the lifetime, t is the year, and i is the discount rate.

NPV = \sum_{t=0}^{n} \frac{CF_t}{(1+i)^t}   (2)

where CF_t is the cash flow in the corresponding year.
While the parameters electricity generated and lifetime can be defined straightforwardly, the expenditures and the discount rate require special attention in a context such as the Argentinian one. The electricity generated in time step t, for all t, is the sum over one year of the calculated technical PV potential, corrected by a PV module degradation rate. The assumed lifetime is 25 years and, as reported by the International Energy Agency [61], modules are usually guaranteed for this period of time at a minimum of 80% of their rated output. The commonly used PV module degradation rate of 0.5% per year [62] is also adopted here. The expenditures correspond to the total cost of the PV plant and the yearly costs, which include operation, maintenance and a certain rate necessary to cover the cost of the inverter replacement every 10 years. Additionally, the connection (once in the lifetime of the installation) and supervision costs charged by the local utility must also be included. Currently these are 68 and 34 USD, respectively [63]. The yearly costs for operation, maintenance and inverter replacement are assumed to be 1% of the cost of the plant [62]. Determining average local total installation costs is not trivial, since the Argentinian PV market is incipient and vendors are concentrated in Buenos Aires (the capital of the country, located 1600 km away from Salta City). There is no industry able to produce PV cells in the country, there are only two companies that import cells to assemble panels locally, and foreign industrial products have to deal with high import taxes. Of nearly a dozen vendors and installers (including importers of PV panels manufactured in Spain and China) found in the country, only three provide enough information to estimate the total cost of a small-scale PV installation. The costs in USD for a 5 kWp installation, including value-added taxes (10.5% for the panels and 21% for the rest of the components), are presented in Table A3. The LCOE is calculated for the different prices provided by the vendors and for an installation that costs 2800 USD/kWp. The latter is the international total average cost for residential PV systems in 2014 [64].
Establishing an appropriate discount rate is challenging, since the Argentinian economy is characterized by high uncertainty [65]. A commonly used measurement for the discount rate is the weighted average cost of capital (WACC). The WACC is typically between 6% and 12% for RE projects in OECD countries and can be between 15% and 20% for projects in Africa, where a higher risk is perceived [64]. To provide an appropriate picture of the situation for the financial figures that depend on a discount rate (LCOE and NPV), a sensitivity analysis in which the WACC is varied from 0% to 20% is included.
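A minimal sketch of the LCOE calculation of Equation (1) and of the WACC sensitivity sweep is given below, assuming the lifetime, degradation rate, yearly cost rate and one-time fees described above. The total cost and first-year yield in the example (2800 USD/kWp, roughly 1709 kWh per kWp and year) are illustrative values taken from the text, not results of this study:

```python
def lcoe(total_cost_usd, first_year_yield_kwh, discount_rate,
         lifetime=25, degradation=0.005, om_rate=0.01,
         connection_fee=68.0, supervision_fee=34.0):
    """Levelized cost of electricity following Equation (1)."""
    # Year-0 expenditure: plant plus one-time connection and supervision fees.
    discounted_costs = total_cost_usd + connection_fee + supervision_fee
    discounted_energy = 0.0
    for t in range(1, lifetime + 1):
        yearly_cost = om_rate * total_cost_usd            # O&M incl. inverter reserve
        yearly_energy = first_year_yield_kwh * (1 - degradation) ** t
        discounted_costs += yearly_cost / (1 + discount_rate) ** t
        discounted_energy += yearly_energy / (1 + discount_rate) ** t
    return discounted_costs / discounted_energy

# WACC sensitivity from 0% to 20% for a 1 kWp system.
for wacc in [i / 100 for i in range(0, 21)]:
    print(f"WACC {wacc:4.0%}: LCOE = {lcoe(2800.0, 1709.0, wacc):.3f} USD/kWh")
```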
Concerning the investment figures, the cash flows for every year during the lifetime of the PV installation are calculated as follows. First, the expenditures are calculated in the same way as for the LCOE. Second, the monthly income and savings (due to self-consumption) are estimated using the net-metering tariff (0.214 USD), the current tariffs of the local utility and its corresponding subsidies (see Table A4) and the monthly consumption profiles of the case studies. Due to the lack of hourly information on the demand, some assumptions are needed. For the residential case studies, a 50% self-consumption rate is selected. For the non-residential case studies, the monthly demand data are provided for three different demand periods: peak (6:00 p.m.-11:00 p.m.), mid-peak (11:00 p.m.-5:00 a.m.) and off-peak (5:00 a.m.-6:00 p.m.). The best temporal match between PV energy generation and demand is given for the off-peak periods; therefore, the total amount of energy demanded during these periods is the basis for calculating savings and income.
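The cash-flow construction and the resulting investment indicators can be sketched as follows. The self-consumption handling is a simplification of the assumptions above, the utility tariff and demand figures are placeholders, and the IRR is obtained with a simple bisection (which assumes a single sign change of the NPV over the searched interval) rather than with a financial library:

```python
FEED_IN_TARIFF = 0.214  # USD/kWh paid for energy fed into the grid (net-metering law)

def yearly_cash_flows(total_cost_usd, first_year_yield_kwh, creditable_demand_kwh,
                      utility_tariff_usd_kwh, self_consumption_share=0.5,
                      lifetime=25, degradation=0.005, om_rate=0.01):
    """Cash flows over the lifetime: investment, O&M, savings and feed-in income."""
    flows = [-(total_cost_usd + 68.0 + 34.0)]  # year 0: plant + connection + supervision
    for t in range(1, lifetime + 1):
        generated = first_year_yield_kwh * (1 - degradation) ** t
        self_consumed = min(self_consumption_share * generated, creditable_demand_kwh)
        fed_in = generated - self_consumed
        income = fed_in * FEED_IN_TARIFF
        savings = self_consumed * utility_tariff_usd_kwh
        flows.append(income + savings - om_rate * total_cost_usd)
    return flows

def npv(flows, rate):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=-0.5, hi=1.0, tol=1e-6):
    """Discount rate at which the NPV is zero, found by bisection."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(flows, mid) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Example: a 5 kWp system for a residential user (all figures are placeholders).
flows = yearly_cash_flows(5 * 2800.0, 5 * 1709.0, creditable_demand_kwh=3000.0,
                          utility_tariff_usd_kwh=0.05)
print(f"NPV at 10% WACC: {npv(flows, 0.10):.0f} USD, IRR: {irr(flows):.1%}")
```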
A sensitivity analysis for the income, savings, NPV and IRR figures is provided by varying the total cost and the size of the PV installation. The considered prices are those of the three local providers, proportionally adapted to other system sizes. The considered system sizes range from 1 kWp to 30 kWp for residential users and from 1 kWp to 100 kWp for non-residential users. The values of 30 kWp and 100 kWp are the installation size limits established by the net-metering law in order to participate in the incentive program for small and large electric energy consumers, respectively.
Results
The results are presented in the three thematic areas proposed in the methodology: social perception, technical potential and economic assessment.
Survey
With the survey, it was possible to learn about the vision of the population with regard to RE regulations and to explore their current local potential. In total, 324 surveys were sent and 163 were answered (response rate 50%). The socio-demographic structure of the respondents was quite homogeneous. Respondents were between 15 and 75 years old, 66% were under 44 years old, 100% had at least a high school degree, and 59% had technical/bachelor/tertiary studies.
The survey results showed low knowledge of the RE plan and the current regulations (see Figure 3). However, RE technologies are recognized, particularly solar energy equipment (solar PV panels 90%, cookers and water heaters 85%) and wind energy (80%). Eighty percent of the respondents considered that applications of solar energy should be promoted in Salta, and 55% thought the same about biomass. In addition, 93% of the people surveyed confirmed the importance of promoting RE in Salta and the remaining 7% answered "do not know/no answer" (i.e., no negative response was recorded). Among the named reasons to encourage RE are:
- Environmental protection and sustainability, using RE as clean energies.
- Climate change mitigation.
- Diversification of the energy matrix.
- Independence from fossil fuels and energy resource imports.
- Technological and regional development.
- Use of local natural resources.
- Access to energy in isolated places.
- Improving the quality of life.
With regard to the implementation of the provincial law, the following key aspects were highlighted:
1. Dissemination, information, and awareness.
2. Economic incentives, such as tax breaks, subsidies, investments, receivables, and other benefits, as well as public policies.
3. Political interest, management support, and guarantees of continuity for applications in rural education, industry and the electrical infrastructure matrix.
4. Other issues, including education (at different levels), research, technological development, and environmental issues.
Finally, there was strong interest in installing PV solar panels or another RE system to generate electricity (Figure 4). The main reasons for this interest were: economic (reduction of electrical power costs, reduction of equipment costs, and the ability to access an income from surplus energy generated), "friendly" technologies for the environment, better use of urban space, saving conventional energy sources, and clean energy diffusion. Among the reasons that "would cast doubts and reject the possibility of installing a RE system at a household" were: (1) technical aspects (amount of energy, efficiency, safety, installation, and maintenance); (2) budget (installation and maintenance costs, repayment, and investment recovery); and (3) discontinuity of policies and distrust of government. The reasons are listed in order of importance.
Workshop
The workshop constituted a space for dialogue, reflection and collective construction of knowledge and proposals. In total, 47 people attended the workshop, among them representatives of various government agencies (Departments of Government, Science and Technology, and Universities) and private organizations (Companies and NGOs).
First, presentations were given to contextualize the current situation regarding the implementation of the RE plan and laws in Salta, as well as the path taken by Germany in the energy sector. After that, the results of the general survey were presented as triggers to introduce the discussion.
Second, people were gathered in groups to discuss the current local RE situation. The agreements built in each work group helped to identify some keys to improving the integration of RE locally. All in all, the demand for a multidimensional and integrated approach to the energy theme was emphasized. Environmental, economic, socio-cultural and technical issues are intrinsically related and therefore also require comprehensive and complementary responses.
Among the identified potentials for the promotion of RE are: the existence of a provincial plan and specific laws, the availability of renewable resources and technologies, and the potential for interaction between the multiple actors already involved. On the other hand, the identified limitations are: lack of knowledge and diffusion, low profitability, restrictions on the concerned sectors, low confidence in government policies, and lack of local production capabilities and counseling.
The proposed actions were aimed at ensuring coordinated interagency work, the realization of economic incentives, and the promotion of new technologies and existing regulations. The proposed actions are presented in Table 1.
The general conclusions featured a strong interest in the topic and a stated commitment from the institutions to join in inclusive and collaborative work. Interagency and intersectoral linkages were raised as a tangible result of the workshop. The possibility of initiating concrete actions in education and dissemination of RE from all areas was prioritized.
Finally, the development of a technical and economic assessment of the PV energy supply potential for the city was found to be necessary and useful to measure the scope and the feasibility of the new net-metering law.
Coordinated inter-institutional work:
- Establish a mechanism agreed by the involved institutions to define stages of support.
- Promote regulation and control by professional organizations.
- Use energy-saving education at all levels, from individual households to institutional levels.

RE promotion (regulation and new technologies):
- Advertise the laws and technologies massively (environmental awareness).
- Show people how to handle RE technologies.
- Promote solar thermal and PV energy.
- Promote other renewable sources and their combination (biomass, small hydro, etc.).

Specific economic guidelines:
- Create economic incentives specially designed to encourage households (linked to regulation improvements).
- Promote subsidies for private households as well as small municipalities.
- Facilitate the importation of RE installation materials and supplies.
Technical Potential
Despite the conservative assumptions for the calculation of the PV potential, the PV yield per kWp for the locations of the case studies is very high. The total solar energy on a horizontal surface for most of the locations reaches 1884 kWh/(m²·a). The results for all locations vary only slightly (<1%) and are equal for most of the cases, since the locations are relatively close to each other and the resolution of the ECMWF data is relatively low (with a pixel size of 11 km × 11 km). The yearly yield per kWp for a horizontally installed PV system is around 1709 kWh. This is even higher than the PV yield reported by [62] for optimally mounted installations in southern Spain or North Africa (up to 1600 kWh/(kWp·a)). The radiation and PV yield values are also far beyond the averages of 1055 kWh/(m²·a) and 1000 kWh/(kWp·a) that can be obtained in Germany [62]. These differences could be even larger considering that the solar radiation data set used is the one with the most conservative values compared to other sources for the same location [55]. Furthermore, the potential energy generation does not correspond to that of an optimally installed PV system (an optimally installed system is expected to generate even more energy per year), and the module output degradation rate appears pessimistic when compared to empirical studies showing degradation rates of around 0.1% per year [62].
The monthly solar radiation per square meter and the PV yield per kWp, presented in Figure 5, represent well the expected output considering the typical local climatic conditions. The differences are largest during the summer (November-February), and both solar radiation and PV yield are lower in January and February than in November and December due to the rainy season. These PV yield values per kWp are used for the economic assessment.
Economic Assessment
The LCOE based on the cost information provided by the three local vendors contrasts with the LCOE calculated for the average international price (see Figure 6) and with the average LCOE calculated by [46] for a location with such a high generation potential per kWp of installed capacity (considering a WACC of 6%). The difference between the LCOE for the three local vendors and the international averages is almost 0.1 USD for low WACCs and increases up to 0.41 USD for high WACCs and the highest total costs. This indicates that the high energy generation potential is outweighed by the local cost structure for PV. In fact, the lowest of the total costs provided by the Argentinian vendors is already two times higher than the international average, and it is higher than the highest international average cost when compared against the data provided by [64].
The LCOE is lower than the differential tariff defined by the net-metering law only in a few cases, and it is several times higher than the final price for electricity paid by the users. The cases where the LCOE is lower than the differential tariff are those where the average international installation costs are assumed, and this applies only for WACCs ranging from 0% to 9%. Further values of the LCOE below the differential tariff occur only for the lowest of the local installation costs and the lowest discount rates. Furthermore, low energy prices and additional subsidies for the final consumers create a relation of 1:5 between the LCOE at a 0% discount rate and the final utility price for energy, when considering the lowest residential prices. The best relation is 1:3.5 for the highest final energy prices paid by the industry. These relations deteriorate rapidly when increasing the discount rate. They change to 1:10 and 1:7 for a WACC of 10% and continue increasing to 1:17 and 1:11 when assuming a discount rate of 20%.
The low final energy prices also have a strong repercussion on the income obtained by selling energy at the differential rate, as well as on the savings due to self-consumption expected from installing a certain PV system. For the residential case studies, income and savings become decoupled after a system size of 4 kWp (see Figure 7). This can be explained by the relatively low demand of the residential case studies, and it is favorable for increasing the total income generated by a PV system of a given size. For most of the case studies presented, the income generated by selling energy to the grid is four times higher than the savings achieved due to self-consumption per kWh. This occurs despite the differential tariff defined by the net-metering law being comparable to the feed-in tariff for energy from PV systems in Germany in 2013 [61]. In the European country, however, the feed-in tariff has been sequentially decreased to reflect the drop in total costs of PV systems, which already in 2013 were among the lowest in the world [61]. For the non-residential case studies, the decoupling requires much larger system sizes and, for some of these cases, the 100 kWp accepted in the net-metering law are not enough to cover the off-peak energy demand (see Figure 8). This situation leads to cases in which the income for the energy fed into the grid is lower than the savings, as well as to cases where the total income is much lower than in the residential case studies.

The NPV is presented in Figure 9 for the residential case studies and is consistent with the previous indicators. A positive NPV (colored green) can be found only for small-scale consumers, at low discount rates and mostly for relatively large PV system sizes. At international PV system costs, there is a wide range of combinations of system size and discount rate that deliver a positive NPV. Furthermore, for the lowest of the local PV system costs (PV1), there are at least four combinations of WACC and PV system size that produce a positive NPV. The situation is more disadvantageous for the non-residential case studies. In these case studies, only a few combinations of discount rate and system size at the international average total PV system cost generate a positive NPV. Due to the low reward for self-consumption, a PV installation would almost invariably be an unattractive investment for non-residential users.

The IRR confirms the situation presented with the NPV. The best IRRs are obtained for the residential case with the lowest demand (Residential a), the lowest total PV installation costs and the largest system sizes. The calculated IRRs for all case studies and considered system sizes show positive values that are as high as 13% when considering international average total PV installation costs. There are only a few cases where the IRR is above zero when considering the prices of local PV system vendors, and the IRR deteriorates rapidly with increasing energy demand. An example of this trend is presented in Figure 10. For the lowest residential demand, it is possible to achieve positive IRRs when considering the lowest of the total PV installation costs. For the highest residential demand (Residential e), assuming local costs, a positive IRR can only be obtained with the lowest installation costs and the maximum allowed system size. For the non-residential case studies, there is not a single combination of local costs and system sizes that generates a positive IRR.
Conclusions and Outlook
The province of Salta has launched one of the first frameworks for promoting RE in Argentina. One of the keystones of this framework is law No. 7824, which introduces the net-metering concept and allows the installation of PV power plants for households and enterprises under favorable conditions. The intention of this regulation is to enable the integration of distributed, grid-connected, small-scale RE in the province. As the participatory consultation has shown, the idea of increasing the share of RE in the energy matrix is supported not only by the policy makers but also by the population. However, only 25% of the surveyed population knew about the RE plan, only 5% knew about the net-metering law, and there were several questions and doubts about the technical and economic feasibility of RE generation projects.
Assessments intended to dissipate major concerns about the technical and economic feasibility of PV energy generation projects have been conducted. These reveal that the theoretical and technical PV generation potentials per m² and per kWp of installed capacity are very high. Nevertheless, the local total costs of installing PV systems are two times the international average. This deteriorates the financial figures for the conducted case studies. When considering the local costs, only relatively large PV systems for households with low energy demand present positive results in terms of investment possibilities. It has also been shown that the differential tariff defined by the net-metering law is barely able to cover the LCOE for PV installations at the lowest local price and with discount rates up to 1%. When looking at the same figures and comparing them to international PV installation prices, it becomes evident that, if measures are taken to decrease the local total PV system costs, there is a considerable potential for improving the financial attractiveness of projects and decreasing the LCOE. Given the high technical energy generation potential, it can be argued that bringing total PV system costs closer to the international average and providing secure financial conditions will make investments in PV systems feasible for most types of consumers.
Another factor that makes investment in PV systems unattractive in the area is the final electricity price for consumers. There is no incentive for solar energy self-consumption given the low energy prices and the additional subsidies. Any increment in final electricity prices will improve the investment indicators. Increments in electricity prices are expected for 2016. This responds to a change in political and economic policies that Argentina will adopt following the recent elections [66]. Notwithstanding, an alternative strategy to make investments in PV systems more attractive would be to extend the period during which users adopting the net-metering mode receive the differential rate for every produced kWh (currently two years). This would still not motivate self-consumption, but it would be less unwelcome to the population than incrementing the final energy prices. Such a measure can also have a large impact on the cash flow of the projects and therefore on the final attractiveness indicators.
It is expected that the results presented in this study are adequate for most parts of the province, and, in cases where site-specific technical assessments are required, the introduced methodology is suitable for studying locations throughout the province. This is supported by the following facts. First, final electricity prices are the same for the whole province and the variety of case studies presented covers most of the existing tariffs. Second, total installation costs and the technical electricity generation potential will not vary significantly for locations connected to the grid. Populations living in the mountains, where solar radiation, temperature and installation costs can change drastically, are normally not connected to the grid and therefore cannot adopt the net-metering mode. Third, the ECMWF data used for determining the PV energy generation potential have global coverage.
Finally, it is important to note that there is a positive outlook on the situation regarding RE in the province of Salta. Important steps forward can be taken by disseminating the framework that has been conceived to promote RE among the population of the province. There are viable alternatives for making financial investments in solar energy more attractive. Total PV system costs, the major limiting factor, have a large potential to be reduced. There are also additional promotion mechanisms that have not been studied here, such as tax credits for up to 70% of the investment in equipment, established by law No. 7823, which can certainly improve cash flows and therefore the investment figures.
Figure 1. Overall workflow of the proposed methodology.
Figure 2. Map of the location of the case studies, Salta City.
Figure 3. General knowledge of the population about the renewable energies (RE) plan and new laws for promoting RE in the Province of Salta.
Figure 4. Interest in installing a photovoltaic system at home.
Figure 5. Monthly insolation per square meter and energy yield per kWp of photovoltaics for the location of one of the case studies.
Figure 6. Levelized cost of electricity (LCOE) based on the total installation costs provided by local vendors and the international average costs, for discount rates between 0% and 20%.
Figure 7. Calculated income and savings for residential case studies for PV system sizes accepted in the net-metering law.
Figure 8. Calculated income and savings for non-residential case studies for PV system sizes accepted in the net-metering law.
Figure 9. Net present value (NPV) for residential users for all provided total costs, system sizes and discount rates.
Figure 10. Internal rate of return (IRR) for residential case studies with the lowest (Residential a) and highest (Residential e) energy demand and four total installation costs.
Table 1. Actions proposed in collaborative work.
Table A3. Photovoltaic installation costs for a 5 kWp installation provided by three different Argentinian vendors (prices in USD).
Table A4. Electric energy tariffs and subsidies applied to every case study, in USD. The values are taken from [63].
‘I felt the world crash down on me’: Women’s experiences being denied legal abortion in Colombia
Background: In 2006, Colombia's constitutional court overturned a complete ban on abortion, liberalizing the procedure. Despite a relatively liberal new law, women still struggle to access safe and legal abortion services. We aimed to understand why women are denied services in Colombia, and what factors determine if and how they ultimately terminate pregnancies. Methods: We recruited women denied abortion at a private facility in Bogota. Twenty-one participants completed an initial interview and eight completed a second longer interview. Two researchers documented themes and developed and applied a codebook to transcripts using ATLAS.ti. Results: Participants faced barriers, such as lack of knowledge of service availability and delayed pregnancy recognition, leading to denial. Five out of eight participants ultimately received abortions in public hospitals, due to support from partners and a robust referral system; nevertheless, they received poor care. Those who continued pregnancies endured stigmatizing events and inaccurate medical counselling at referral facilities. Several women contemplated illegal abortion though were afraid to attempt it. Conclusion: We propose the following recommendations: 1) increase awareness about availability and legality of abortion services to prevent delay and consequent denial; 2) provide counseling and referral upon denial; and 3) train providers in interpersonal quality abortion care.
In Colombia, abortion services are authorized by the federal government, with no specific gestational age limitations, except that services after 15 weeks must be performed at a high level facility. Despite legal availability, women in Colombia still face barriers to accessing safe abortion services. This paper seeks to understand why women are denied legal abortion services in Colombia, and what factors determine if and how they terminate a pregnancy after being denied services initially. We recruited 21 women immediately after they were denied services at a private facility in Bogota. These women reported delays in recognizing that they were pregnant and delays in determining where to go for legal abortion services. Those who were denied but ultimately terminated their pregnancy received support from partners and a robust referral system. Those who continued their pregnancies endured stigmatizing events and inaccurate medical counselling at referral facilities. Findings from this study indicate a need to increase awareness about abortion services to prevent delay and consequent denial, provide counselling and referral upon denial, and train providers in interpersonal quality abortion care.
Background
In Latin America and the Caribbean, nearly 10 million unintended pregnancies were estimated to have occurred in 2012, 40% of which ended in abortion [1,2]. A minimum of 10% of maternal deaths annually in the region are due to unsafe abortion, and about 760,000 women are treated for associated complications each year [3], including haemorrhage, sepsis, peritonitis, and trauma to the reproductive organs. In Colombia, one third of unsafe abortions result in complications that require medical attention, primarily heavy bleeding and incomplete abortion, and rates are even higher among women who self-induce using invasive techniques or seek help from an unqualified practitioner [4].
In 2006, Colombia's Constitutional Court overturned a complete ban on abortion, decriminalizing the procedure in cases of rape or incest, foetal anomaly incompatible with life, and endangerment of the life or health of the woman [5]. The Colombian government released guidelines for abortion provision [6], adapted from the World Health Organization (WHO) [7], soon after the law was adopted but later annulled these guidelines due to challenges to the government's authority to regulate abortion. More recent Ministry of Health technical documents now guide service providers on how to provide abortion services in the primary level, how to prevent unsafe abortion, and how to provide abortion counselling. The law does not include gestational age limits [8], but Ministry of Health protocol states that abortion services up to 15 weeks may be provided at the primary health service level, and services after 15 weeks must be performed at a higher level facility [6].
No concerted effort was undertaken to disseminate information about the change in legal status of abortion or to expand the number of providers, and by 2009, fewer than 3000 legal abortions had been reported, in contrast to an estimated 320,000 to 450,000 illegal abortions annually [9][10][11]. Approximately half of abortions in Colombia are induced using misoprostol, a low-cost medication used to induce an abortion or miscarriage, and the other half by non-misoprostol methods estimated to be evenly provided by medical doctors, other health professionals, or traditional providers [4]. Medical providers rely more on dilation and curettage (D&C) than on manual vacuum aspiration (MVA) [10], despite WHO recommendations to use MVA in the first trimester [12]. While there is little available data on abortion complications, an estimated 93,300 Colombian women sought post-abortion care in 2008 (1/3-1/4 of total procedures) [10]. Such a high proportion of total procedures requiring post-abortion care may indicate that a large proportion of abortions happen outside of the formal health system and result in women seeking additional treatment [4]; post-abortion care rates may be higher than necessary because many women may not be well informed about the normal process of an abortion using misoprostol nor what sequelae of a medication abortion require medical attention [13,14].
Even where abortion is legal, poverty, stigma, and distance from a provider prevent women from accessing safe abortion services, among other factors [2,15]. Despite a relatively liberal law in Colombia, which permits abortion free of charge in the public sector without a gestational age limit, barriers to accessing quality abortion care remain, especially later in pregnancy [16]. Previously documented barriers include: lack of referral protocols, narrow interpretation of the health exception (excluding mental health), stigma, lack of awareness about legal services, financial barriers, and delays to care [16,17]. These may lead to the denial of services for women, particularly at primary health facilities, which have gestational age limitations. As is the case in other contexts where barriers to legal abortion care exist, some women may seek services elsewhere, either at another facility, through self-induction, or with the help of an informal sector provider [18][19][20]. Recent Global Turnaway Studies in Nepal [21], South Africa [22], and Tunisia [23] have shown this to be true in other settings. However, little research has been done in Colombia to understand why women are denied legal services, whether they seek services following denial, and what factors enable them to obtain services after denial.
We aim to answer the primary question: among those denied abortion care, what delays and barriers did they face? We additionally explored the factors that enabled or prevented women from seeking safe and legal services after being denied care and whether women used or considered using informal sector abortion methods outside the formal health system after denial. This study was conducted as part of the Global Turnaway Studies; other participating countries include the United States [24,25], Bangladesh [26], South Africa [22], Nepal [21], and Tunisia [23].
Methods
In September 2013, women denied abortion due to gestational age limits at Fundación Oriéntame, a private not-for-profit clinic in Bogotá, were recruited for in-depth interviews. Oriéntame is the largest provider of abortions in the country and partners with an on-site legal advocacy group to provide information to women about legal abortion. While there is no gestational age limit in Colombia, at this time Oriéntame was unable to provide abortion past 15 weeks gestation, because they were not a secondary care facility [27]. A previous study demonstrated that 2% of women surveyed at the clinic did not receive the abortions they sought, due to advanced gestational age [19]. Trained interviewers (two nursing assistants and one psychologist) approached women after their medical visits, explained the study, obtained informed consent and screened for eligibility. Eligibility criteria included denial of abortion due to advanced gestational age and ability to speak Spanish. If eligible, interviewers conducted initial interviews, about 15 min in duration, at the time of recruitment in person at the clinic. Interviewers contacted women 2 months later for a second longer interview, about 30-45 min in duration, conducted by telephone. Interviews were conducted by telephone due to resource and time constraints. The two-month time frame allowed researchers to learn about women's experiences after denial; it was necessary to provide participants time to decide on their next course of action. Participants were compensated with a grocery store certificate worth 36,000 Colombian pesos (~$20 in 2014).
The initial interview guide included open-ended questions about the clinic visit, including reasons for denial, reasons for seeking abortion, and factors that contributed to delays in seeking services. The longer interview guide included a review of the initial abortion-seeking process and questions about the respondent's reactions to denial, actions taken following denial of care, experiences with referral and subsequent counselling, knowledge of legal and illegal abortion methods, pregnancy outcomes, and overall quality of abortion care. In this context, quality of abortion care was assessed through the perceptions of the women, including satisfaction with the services received, interpersonal care provided, presence of complications or pain related to the procedure, and whether participants would recommend the service to others.
Interviews were conducted in Spanish, recorded, transcribed, and translated to English for analysis. Data were analysed using a qualitative content analysis approach, using a consistent set of codes to organize text with similar content after data collection was completed, transcribed and translated. A priori themes were identified, based on code types and results from previous studies about abortion denial and barriers to abortion in Colombia [16,[21][22][23]. Additional codes and sub-codes were generated iteratively according to emergent themes throughout the coding process. One coder conducted analysis in Spanish, generating initial codes and documenting emerging themes. A second coder analysed data in English, generated codes and validated themes against those created by the first coder. Researchers analysed all qualitative data using Dedoose 5.0.11 (SocioCultural Research Consultants: Los Angeles, CA) and synthesised socio-demographic data using Excel. Coding and transcripts were analysed repeatedly as necessary, and referred to throughout the analysis and writing process. The entire team reviewed key themes and illustrative quotations throughout the process. A study identification number and pregnancy outcome, when available, are included in parentheses following each quotation in this manuscript. Facility names have been retracted for confidentiality. The Ethics Committee at Oriéntame and the Committee on Human Research at the University of California, San Francisco approved this study.
Summary
Twenty-three women were recruited and 21 were eligible for participation in the study. Most participants were 19-24 years old; three were 16-17 years old. Most women (15) lived with their parents; three lived independently, and three lived with their partners. Twelve out of 21 participants had been pregnant before and 11 had at least one child. Almost all were 15-20 weeks gestational age at denial, with one woman at 30 weeks.
Second interviews were conducted 2 months after recruitment with eight of the 21 participants who completed initial interviews (ID1, ID2, ID3, ID4, ID22, ID27, ID28, and ID29). The remaining 13 participants did not respond, had a non-functioning phone number, were no longer in Bogotá, or did not want to participate in a second interview for unknown reasons. Some women declined phone interviews because it was difficult to find a quiet and private space to talk.
Below we present results from initial interviews regarding delays and barriers to seeking legal abortion services (part 1). Subsequently, we present factors that enabled or inhibited participants in seeking legal abortion care after denial and their knowledge of and experiences with self-induction and illegal methods (part 2).
Part 1: Delays and barriers to seeking and accessing abortion services
Participants reported delayed recognition of pregnancy, lack of knowledge about legal abortion availability, logistical barriers, and/or need for time to decide. Six of the 21 participants said they did not realize they were pregnant until the second trimester, due to lack of pregnancy symptoms or irregular menstruation. One participant, who was 18 weeks upon denial, said, '…I haven't had my period for about 5 months. I thought the injections I was using for contraception had made my menstruation irregular. That made me think everything was normal' (ID30). Another, who sought abortion at 16 weeks, said: 'My period came normally, but I realized that I was pregnant when I started to see I was looking fat and that my belly was hard. I took a pregnancy test but it was negative…later I did a blood test and it was positive' (ID13).
Some participants did not know about the abortion law: 'I thought [abortion] was illegal, that it was denying life to a human being and no one could do it legally' (ID29). None were aware about the health exception, which includes mental health: 'I hadn't even thought of the possibility that if you were in a bad emotional state, like I was, you could find legal support for the procedure. I didn't know that. …They don't provide information about it, because of the Church and people's ideas, so many taboos' (ID1).
Logistical complications also delayed abortion seeking, including care-taking responsibilities, work, or lack of resources. One participant recalls: 'I confirmed [that I was pregnant] a month before. I didn't come in earlier because I didn't have money. …When I came they told me I was 11 weeks pregnant. I made an appointment for August 30 but I didn't come because I didn't have all the money. …by the time I came in…they told me I was at 16 weeks' (ID27). Many did not know where to seek services: 'There is a lack of information, lack of awareness, of support, of counselling. Some people may know about the clinic, but lots of people don't. So, with more information, more advertising, more use of media, people will know what to do in this case and not wait so long' (ID29).
Some participants delayed because they needed time to make a decision about the pregnancy. One explained: '…that's why I took so long. I told him that we should think about it. I searched for shelters for mothers in my situation. I thought about all these things, about school, and what I could give the baby' (ID28). Another participant needed an abortion for health reasons but still took time to make the decision: 'Of course, the decision was not easy. I got to the last week, I mean I waited a week more… After that week no hospital in Bogota would have done it. Since I was young, I was afraid of abortions' (ID22).
Lastly, several participants felt devastated when they were denied abortion services. One said immediately after: 'I am panicking…I can't see myself as a mom. I hope something can be done' (ID2). Another said: 'When they said they couldn't perform the abortion, I felt the world crash down on me' (ID27). An 18-year-old, who ultimately continued her pregnancy due to pressure from her partner and her mother, said: 'I am very sad because all of my plans have changed. I wanted to study next semester and now I have to wait six months. It's for these reasons that I didn't want a baby right now. It's difficult. I will no longer be able to be young' (ID4). Finally, a participant, whose husband left her when he learned about the pregnancy, said: 'It destabilizes many things. …I won't be able to study; that life plan will have to wait until the baby is older' (ID1).
All 21 respondents were referred to an advocacy group based in-house at the clinic, which provided legal advice for seeking abortion in the public sector. Participants were advised to present their request for abortion within the context of one or more of the circumstances sanctioned by Colombia's abortion law.
Part 2: Factors that enable or inhibit access to safe and legal abortion care
Partner involvement in decision-making
Four out of five participants who successfully terminated were no longer in a relationship with the man involved in the pregnancy when they pursued abortion. As a result, the men were either not included in the decision-making process or did not oppose abortion. One participant said: 'No, it was a passing thing. We only went out for a month. He left. …So he never found out' (ID29). Another explained: 'My partner knows and doesn't want to have it. We don't have a relationship any more. We broke up and I am not going to see him again' (ID22). The participant who terminated her pregnancy while still together with her partner explained that her partner also wanted her to have an abortion: 'He really didn't want to have it… He said, "No. We're not prepared to have children now. We're in college, we're just starting out."' (ID28).
The three participants who carried to term said they lacked support from their partners in seeking abortion. One explained: 'I asked him and he told me he didn't agree because it's not the baby's fault' (ID4). Another said: 'I had a serious argument with my partner. He told me it was my fault for spreading my legs… (ID3). One participant's relationship with her partner deteriorated after she told him about the pregnancy: 'When I told him I was pregnant, I never thought of having an abortion. …I always assumed that he was going to support me; but no. It was the moment for him to tell me, "I am seeing someone else and I don't want the responsibility of more babies. You are taking away my chances to study, to travel, by bringing so many babies into the world." These things made me sad, anguished' (ID1).
Legal support and counselling
All eight participants who completed a second interview confirmed they received legal support from the advocacy group where they were referred upon denial. Five of the eight ultimately obtained abortions at public hospitals. The remaining three participants continued their pregnancies.
Those who obtained abortions explained that the legal support and counselling was crucial to their success. After being counselled, one participant said she was able to effectively advocate for herself and navigate the complex system: I spoke with the lawyer, who explained to me the reasons for which one could have an abortion and told me to go to the [hospital]. I went there, talked to the receptionist and said it was an emergency... [the doctor] asked how many weeks I was at, and I told him that I was at 19 weeks and that I wanted an abortion. He asked me if my reason was within one of the three legal indications, and since the law contemplated psychological as well as physical health, I needed to be evaluated by a psychologist to see if he could do the procedure. The psychologist evaluated me and she was the one who approved that my mental health was at risk. (ID28) Another participant described how legal counselling empowered her to make a well-informed decision:
Stigmatizing experiences at referral facilities
Three out of eight participants who completed a second interview did not ultimately obtain abortions, despite legal counselling and support (ID1, ID3, ID4). Specific encounters with providers and another patient at the referral facility influenced them to ultimately decide against abortion. ID1 was confident in her initial decision (her husband was leaving her, she had a two-year-old child and she was overwhelmed by the idea of raising two babies alone), but she changed her mind after the doctor questioned her. She recalls: When I arrived, I thought I was sure. But when the whole process started, no. Something that happened was that I told the doctor I could feel fast heartbeats in my stomach. He felt my stomach and said the baby had tachycardia. He began to tell me about how babies can sense when they are in danger, things like that…. He told me that it was very possible that they would not be able to do the procedure because of how far along I was… He told me that he didn't recommend it but that he was going to refer me to another place where they dealt with these cases. (ID1) ID3 was also determined at first, but later became 'destabilized' after her ultrasound: Knowing that it's not going to be a happy baby or that it's not going to have a good future…I think that the best decision in that moment is to end the pregnancy. … Just imagine, after you see an ultrasound where the baby is totally formed, where you hear his heart, where you know that it's a little person that only you can feel. Obviously, that destabilizes you emotionally in an inexplicable way. No one can understand that, except the person who is in that situation. Despite all that, I tried to say no. (ID3) ID3 met with a lawyer and sought an abortion at a hospital, but finally decided against it. She explained: I saw some girl that was there for the same reason as me. She was worse off than me, because my parents supported me, despite the fights.... My parents knew about the pregnancy since I was six weeks and they never turned their back on me, never. … My decision now to continue with the pregnancy is due to the support I've had from my parents. When you hear someone who really [has no support]... you say, oh yeah, I will be okay if that person is at my side.... Something had to happen that day to make me react. I left and told my partner …It's the best decision I've made in my life, even though I know that abortion should be legal in this country. I support it. I've had an abortion before. I have been through these things, which is why I support abortion.' (ID3)
Poor interpersonal care, despite access
None of the five participants who ultimately obtained abortion suffered from medical complications; however, most experienced poor treatment and felt stigmatized. One participant explained: 'For me it was super difficult. To begin with, I'd never been to a [health facility] alone. …It was a shock. On the way …there are people who pass out fliers that say "unwanted pregnancies." Everything goes through your mind. There are ladies giving away religious icons and anti-abortion propaganda… it's an emotional shock' (ID2). Inside the hospital, she endured poor treatment from providers: …It was really hard to hear children crying nearby in the birthing rooms, to hear mothers pushing. … At around 11:00 at night, I started having strong contractions. The nurse who received me that night came in and performed a really rough examination. …Obviously they didn't approve of what I was doing and they wanted to get back at me. I was really in pain; I was screaming. They did another physical examination and that was when my water broke and then I felt the foetus being expelled... About two hours earlier a woman came in… She was two months pregnant and it was a high-risk pregnancy. She expelled it. It was super tiny and everything happened right next to me. The woman started crying because she wanted to have her baby. When the nurse picked up the little foetus to take it to pathology, the woman was crying. The nurse glanced at me and said, "Ironic, don't you see? She wants a baby and you're tossing one out." I was really hurting. The pain made me think about other things. At the moment of the expulsion, the nurses picked up the foetus. They told me it was alive and the woman [next to me] started crying. I did too. I didn't say anything to the nurse. I was feeling really bad. …The woman looked at me and cried and I felt this emotional weight. I cried, "What can I do." I was in the bed bleeding. (ID2) Some participants navigated significant bureaucratic challenges at the hospital, including inefficient referrals and unnecessary paperwork. One 19-year-old woman describes: I had to write a letter requesting authorization … explaining my reasons, and stating that I was totally sure, with photocopies of my documents. … They called on Thursday and told me to go on Friday to the office to pick up the authorization form …. I went but the form said, "Appointment for gynaecology and obstetrics." I took the form to gynaecology where they told me it was only an authorization to schedule an appointment. I went to schedule an appointment and they gave me one for ten days later. …I went at 7:30 in the morning [the next day] and explained my situation. I showed the authorization form. The department head was there and I told her everything. Super rude. She said, "But that's not the way it's done. Show me the piece of paper that says you have one of the legal causes." Everything had been sent … I already had the authorization but they wouldn't receive me. (ID2) Another participant was hospitalized for 2 days without receiving care, during which time her providers disrespected her and criticized her decision to seek an abortion: I was there but they didn't do anything. They just sent me to the psychiatrist and told me it was a crime… They really treated me bad. The whole hospital found out-everyone. …they were all talking about it. All the nurses walked by and looked at me. …They asked me why I wanted to do it, if I didn't care. I mean, I didn't have to explain…it's my decision and what business is it of theirs? 
They're strangers; I don't know them. All the doctors of all the shifts found out that I was there for two days. They even called the police and everything because they said it was illegal and that I had to make a statement to the police. … It was intense because I was feeling bad, with all the people there judging you without knowing your condition. I felt bad. (ID27) This participant went to a different hospital, where a provider told her he couldn't help her "because of his personal integrity." She returned the next day and obtained an abortion from a different provider, but reported how difficult it had been: '…they held [the foetus] in their hands and everything. I saw it and I felt bad. It really hurt. I started crying and after leaving the hospital I couldn't sleep' (ID27).
One participant, who was placed in a maternity hospital room, said that when the doctors realized she was there for an abortion rather than delivery, they treated her differently: …they gave me a bed with the other moms, like a normal patient. But then came the shock of seeing all of them with their babies and me, with an abortion. Then they started to treat me poorly. They refused to give me [pain] medication. They delayed everything… I was very sore physically and emotionally and I couldn't make them be more considerate of my situation. (ID22) As a result, she recommended provider training to prevent poor treatment for other women seeking abortion: 'I think they need to hire, or make [providers] more aware and sensitive, or carry out a medical education campaign… starting in med school. …Because they swear to protect life even if people don't want to live and they make people be born even if they don't want to' (ID22).
Despite the poor interpersonal care in public hospital settings, none of the participants expressed feelings of regret about their decision. One said: 'It hurts, but at least now I can sleep, I can be peaceful. It wasn't easy at all, but I don't regret it either' (ID27). Another said: 'When I finally managed to have the abortion, I was calm. And now I think that if I hadn't had an abortion, I would be really bad off… because I am still a young woman and sex is a physiological, mental, and sentimental need. …I am grateful for the women's movements that have fought for rights and to open our thinking. Unfortunately, there is guilt that you can't erase; it stays' (ID22).
Self-induction and illegal abortion
Many women said they considered self-inflicting pain or injury, self-inducing abortion, or visiting an illegal provider before they came to Oriéntame. One participant said: '[I thought of] poisoning myself, something, damaging my stomach somehow to see if it worked' (ID29). Another said: 'I don't know if I was to commit suicide because I am very afraid of death. But I felt desperate and thought, "If I cut my veins, maybe I can damage the baby." That way, it might not be an abortion. I would just lose it. I didn't eat to see if I would lose it…' (ID28).
Most participants said they heard about pills for self-induction and a couple attempted to obtain them. One participant said: 'You rely on information from your friends and it's a chain. Everyone goes to school or anywhere with rumours and they tell you… I was 17. I thought, "My parents don't support me. My partner is very young…" At that moment, you don't think about anything…I had information. I had access [to a friend's pharmacy]' (ID3). Another participant explained: 'We researched and, because of the gestational time, we found some pills online… They're super easy to buy. Each pill costs 15 thousand pesos. But they asked me, "How far along are you?" I said "two long months." And they said they couldn't sell them to me…' (ID2). According to one participant, self-induction was a last resort: 'There are many women who are unaware so we resort to other things. It is difficult because you don't have an open space to discuss sexuality and get counselled about these topics. They only talk about how to protect from diseases and how to use contraception… I thought that if nothing could be done, I would take the risk because I really didn't want to be a mother' (ID2).
Most participants were afraid that alternative methods would not work or would be harmful. One participant recalled: 'Of course. I thought about the possibility of Cytotec. I checked out clandestine places on the web. [City] is full of those places. I went to one, went in, and said to myself, "I could die in here." Nothing was good in that place' (ID3). Another said: '…I started researching a lot of things, online, talking to friends, without telling them I was pregnant. I just listened and learned…about the Cytotec pills they can buy. They spent a lot of money on those pills and they didn't work because the baby was still there.' (ID1). A third participant said: 'I heard about [pills] in grade school and in college too. But I couldn't. I heard that when you do that, you had to be with someone else in case something bad happens and be close to a hospital if anything happens. …I decided not to because I don't want to die' (ID28). One of two participants who were approached by illegal providers outside of a clinic explained: 'He was pulling me and I got scared. I told him, "I'm going to call the police. I am just here for an ultrasound." And he said, "You're lying. You're going to have an abortion. There's a place where they charge half as much for the same things, with a doctor"' (ID1).
Discussion
We aimed to explore the barriers women face in accessing abortion care, the factors that enable or prevent women from seeking safe and legal services after denial, and the prevalence of informal sector abortion attempts after denial.
Results confirm prior research that preventable barriers to care, such as lack of knowledge of services, logistical barriers, or delayed pregnancy recognition, delay women from seeking abortion services earlier in their pregnancies. These delays, which can carry women past 15 weeks gestation, the gestational age limit for the study clinic, lead to unnecessary denial of services and, further, make it more difficult for some women to obtain wanted abortions, particularly in cases where they must defend their decision to partners and providers at later gestation.
Our findings suggest that key factors influencing whether or not women obtain a wanted abortion following denial include: partner support, legal counselling and referral at the moment of denial, medically accurate counselling at all points-of-care, and quality interpersonal care from providers. Women who chose not to discuss the pregnancy with their partners had a more straightforward path to care than did women who had to manage partners' resistance; women whose partners were supportive of abortion were more likely to obtain care. Legal counselling from the on-site advocacy group played an essential role in enabling participants to effectively navigate a complex and bureaucratic health system, understand the law and its implications, and ultimately arrive at the next point of care prepared to advocate for themselves. Other studies show that women who are denied care without explanation or referral may be left with no option but to carry the unwanted pregnancy to term [21][22][23]26].
Partner support and robust referral programs are not necessarily sufficient to ensure access to abortion. Stigmatizing experiences at referral facilities and poor interpersonal treatment from some providers ultimately prevented some participants in this study from obtaining wanted abortions. In at least two of the three cases where participants decided to carry to term, providers manipulated patients by exposing them to the foetal heartbeat and ultrasound images, and by advising them to continue the pregnancy based on medically inaccurate information. Furthermore, those who obtained abortions following denial endured physical and psychological abuse from providers and hospital staff, possibly due to inadequate training about the law and social stigma associated with abortion. Some clinicians may be required to perform abortions despite lack of training or personal objection. Comprehensive provider training should not only cover technical skills but also interpersonal quality care techniques, which treat all women, including those who have unwanted pregnancies, with respect and empathy [2]. The WHO considers interpersonal interactions to be part of quality of care, as evidenced by their definition, which includes the following key dimensions: effectiveness, efficiency, accessibility, acceptability/patient-centeredness, equity, and safety [28]. Many women were aware of self-induction, including with misoprostol, and some were aware of informal sector providers. This is unsurprising given estimates that over 99% of abortions in Colombia are performed outside of the formal health system and over one-half of these are performed using misoprostol [18].
It is important to acknowledge the limitations of our analysis. First, because study participants were sampled from a formal-sector abortion facility, it is highly likely that their knowledge of and experience with informal sector abortion is under-representative of that of all women in Colombia, particularly rural and poor women who bypass the formal sector altogether. In addition, as anticipated with an exploratory qualitative study, these findings are not generalizable or necessarily representative of all women in Colombia. Our results do not include the experiences of young women under 18 years or of women who seek abortion outside facility-based care.
Conclusion
To our knowledge, based on a review of the literature and consultation with local experts, this is the first study to examine the experiences of women denied legal abortion in Colombia. Our findings highlight the need for: 1) public awareness campaigns about the availability and legality of abortion services in Colombia to prevent delay and consequent denial; 2) provider support and referral to patients if and when they are denied services for any reason; and 3) training on compassionate care for all providers and medical staff who encounter abortion-seeking patients. These improvements will help to ensure that women are able to obtain timely, safe, effective, and non-judgmental abortion care when needed. Similar research is needed to better understand the experiences of women denied abortion services across the country, particularly given that Oriéntame is likely the best-case scenario for abortion care in Colombia. In the long term, systematic quantitative data collection would enable research on the health and socioeconomic consequences of legal abortion, illegal abortion and childbirth in Colombia.
Availability of data and materials
The data supporting the conclusions of this article are included within the article and its supplemental files. The interview transcripts in full will not be shared for confidentiality purposes, given that the detail provided in the transcripts may reveal the identity of a participant and given the sensitive nature of the study topic.
Authors' contributions
TD made substantial contributions to the conception and design of the study, the acquisition of the data, and the drafting and revising of the manuscript. SR analyzed and interpreted the data and drafted and revised the manuscript. MM assisted in data collection, conducting interviews, and some of the analysis. CV contributed to the conception and design of the study, acquisition of the data, and revising of the manuscript. DF made substantial contributions to conception and design of the study, acquisition of the data, and drafting and revising of the manuscript. CG made substantial contributions to conception and design of the study, acquisition of the data and drafting and revising the manuscript. All authors provided final approval of the version to be published and agree to be accountable for all aspects of the work.
Ethics approval and consent to participate
The University of California, San Francisco Committee on Human Research (IRB#10-045110) granted ethical approval for this study. Recruiters obtained informed consent from all those interested in participating in semi-structured qualitative interviews. Recruiters obtained consent at recruitment and again at the time of interview.
Consent for publication
Written informed consent was obtained from all participants in the study. The informed consent document can be made available if requested. All informed consent documents included the following statement: "If information from this study is published or presented at scientific meetings, your name or other personal information will not be used." The data provided in this manuscript has been de-identified and no details on individuals are reported in the manuscript. Since we are reporting anonymous data, we believe consent for publication is not applicable in this case.
Competing interests
We have no financial interests or benefits to disclose. The authors declare that they have no competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Author details
Designing a Smart Bath Assistive Device Based on Measuring Inner Water Temperature for Bathing Temperature Monitoring.
Today, taking a bath is not only a means to keep clean, but also to reduce fatigue and stress. However, taking a bath with hot water for a long time can also be dangerous, leading to scalding or even a heart attack. To prevent these risks, several studies based on measuring bio-signals have been conducted, but due to high prices, difficulty of use, and restricted functions, these studies’ recommendations cannot be easily adopted by the public. Therefore, developing accurate methods to measure bathing temperature and bathing time should be the most direct approach to solve these problems. In this study, a smart bath assistive device based on an inner water temperature measurement function is proposed. Prior to development of the device, a bathing environment was emulated with six temperature sensors affixed to different depths to find the optimal depth for measuring bathing temperature. According to the measurement results, the device was designed in a mushroom shape with the cap part floating on the water’s surface and housing the electronic components, and temperature sensors within the stem part were immersed in the water approximately 5 cm below the surface to measure the inner water temperature. Due to the low-power consuming Advanced RISC Machine (ARM) processor and waterproof design, the device is able to float in hot water and monitor the bathing temperature variation over a long period of time. The device was compared alongside a commercial analog bathing thermometer to verify the performance of temperature measurements. In addition, a compensation algorithm was developed and programmed into the device to improve the accuracy of measurements. Processed data is transmitted by Bluetooth to a dedicated Android app for data display and storage. The final results show that the proposed device is highly accurate and stable for monitoring bathing temperature.
Introduction
Bathing not only maintains the cleanliness of the body, but also relieves fatigue, helps blood circulation and promotes metabolism [1]. For infants, taking a bath two to three times a week is recommended to prevent not yet fully developed sweat glands from clogging due to excessive sweating, which trap perspiration beneath the skin resulting in prickly heat, heat rash, or miliaria [2,3]. Also, as is widely known, taking a bath is effective for treating injuries, promoting good health, and preventing diseases [1,4]. Therefore, it is a favored option for most sick and elderly people, who cannot participate in regular physical activities, and can be easily done at home [5].
However, taking a bath at excessive water temperatures can lead to more harm than good, with unsafe water temperatures leading to shock and scalding, particularly among infants and elderly people. Because infants scald or burn easily due to their sensitive skin (i.e., infants have a 30% thinner stratum corneum and a 20% thinner epidermis) and are more easily affected by heat, the water temperature should be maintained at around 37~38 °C [6,7]. An 8-year retrospective review of patients admitted to Stoke Mandeville Hospital (Aylesbury, UK) due to burns sustained by hot baths or showers was undertaken in one study. Fifty-seven patients of all ages were identified and stratified into pediatric (<16 years) and adult groups. In the pediatric group, children were predominantly under three years of age (83%), sustaining most frequently only superficial burns (41%) over areas of less than 10% of total body surface area (72%). Also, parents' supervision was inadequate in 85% of cases [8]. For elderly people, a half-full bath at around 38~40 °C for 20~30 min is suggested [9]. When the water temperature is higher than 41 °C, relaxation and contraction of blood vessels becomes more rapid, which could aggravate angina and cardiovascular disease, as well as lead to blood pressure-related problems and pulse fluctuations. These physiological changes often result in syncope, which can prove fatal in a bath [10][11][12][13]. Several studies with methods for preventing these risks have been presented, such as: monitoring and evaluating heart rates and peripheral blood flow during bathing; a system for measuring the electrocardiogram (ECG) during bathing; and a variety of ubiquitous health monitoring systems to sense human bio-signals using sensors placed inside the bathtub when bathing at home [14][15][16]. However, these studies only focused on measuring bio-signals, which led to very complex design structures, necessitating the integration of very expensive sensors and other components made from bio-compatible materials. Difficulty of use, restricted functions, and the high cost of developing these studies' recommendations mean they cannot be easily adopted by the public. Since water temperature is the most direct cause of bathing-related accidents, developing accurate methods to measure bathing temperature and bathing time, with real-time alerts to a smartphone, should be the simplest solution to overcoming the previous studies' limitations.
Recently, common commercial bathing thermometers have been developed based on analog or digital temperature measurement techniques. Analog bathing thermometers, i.e., mercury thermometers, are the most popular approach to measuring water temperature due to their ease of use and high accuracy. However, the small scale of the thermometer can be hard to read, and the expansion variation of mercury is slow. In addition, the measured value cannot be observed and recorded expediently while the analog thermometer is in use. More importantly, mercury thermometers break easily, with broken glass and mercury's high toxicity causing great harm. Digital thermometers are based on semi-conductor technology and are more commonly used to safely and accurately measure water temperature. The maximum error value of the temperature measurement is less than 0.3 °C, and they have fast calibration and response times. Also, the digital design and display allows for easier reading and recording of measurements.
According to the principles of heat transfer, heat dissipates more quickly at the water's surface than under the surface, so different depths have different temperatures [17,18]. Also, research on absorption and attenuation of visible and near-infrared light in water that is temperature dependent has shown that water temperature dropped as depth increased [19]. Therefore, the water temperature measured by existing commercial bathing thermometers cannot determine exact bathing temperatures because they only measure the surface temperature. To overcome this deficiency, an IoT-based smart bath assistive device with an inner water temperature measurement function is proposed. Prior to development of the device, a bathing environment was emulated with a large glass water tank and six high accuracy temperature sensors affixed to different depths to find the relationship between depth and temperature variation (Section 2.2). According to the measurement results, the device was designed in a mushroom shape with the cap part floating on the water's surface, which stores the electronic components for the system control, and a stem part immersed in the water at around 5 cm below the surface, which houses a monolithic complementary metal-oxide-semiconductor (CMOS) integrated circuit (IC) integrated temperature sensor to measure inner water temperature. A low-power consuming ARM processor with high performance and a long-distance Bluetooth 4.0 module were implemented in the device for system control, data processing, and wireless communication. Because the device was manufactured with waterproof design and a large capacitance rechargeable battery, the device is able to float in hot water to monitor bathing temperature for a long duration of time. The prototype device was compared alongside a current commercial bath thermometer to verify the performance of temperature measurement. In addition, a compensation algorithm was developed and programmed into the device to improve the accuracy of measurements. The processed data is transmitted by Bluetooth to a dedicated Android application for data display and storage. Figure 1a shows the basic idea of the proposed Internet of Things (IoT)-based smart bath-assistive device, represented as a red dot that can float alone in water to monitor bathing temperature. With a high-accuracy temperature sensor, wireless communication, and alarm function, the proposed device was developed to measure the bathing temperature and to send real-time alerts to a smartphone automatically. As mentioned, the surface heat of hot water dissipates more quickly on the surface than underwater wherein different depths have different temperatures in the bathtub. The mushroom shape of the proposed device, shown in Figure 1b includes a large cap area that allows the device to float in water unassisted, and a stem part that contains the temperature sensor to be immersed in water to measure the inner water temperature.
Observing the Relationship between Water Temperature and Depth
Convection is heat transfer by mass motion of a fluid such as air or water when the heated fluid is caused to move away from the source of heat, carrying energy with it. Hot water is less dense than cold water, causing it to rise and create convection currents which transport energy. Cooler, denser water descends, and warmer water rises near the surface. As was surmised, water temperatures below the surface of the bathtub were higher than water at the surface. However, given convection dynamics, deeper bath water is colder, so it had to be ascertained prior to designing the device which depth (how far below the surface, and how high above deeper, colder bath water) is optimal for measuring inner water temperature.
Bathtub depths can vary: a European style bathtub has a depth of 45 cm, and a Japanese (or Greek) style bathtub has a depth of 55 cm. Standard bathtubs have depths between 35 and 44 cm, and a surface area of about 155 × 80 cm, which can hold between 95 and 170 L of water. According to the heat transfer rate equation for the water tank, q = hAΔT (1), where q is the heat transfer rate, h is the heat transfer coefficient, A is the surface area where the heat transfer takes place, and ΔT is the temperature difference between the water and its surroundings, a larger surface area yields a larger overall heat transfer rate. Also, natural convection occurs when there are hot and cold regions of water in the water tank, not generated by any external source and not depending on the volume of the water tank. Together, these considerations mean that heat dissipates faster from an average bathtub than from the glass water tank [20]. However, this research focuses only on finding the relationship between water temperature variation and depth. Therefore, a 60 × 30 × 45 cm glass water tank with the same depth as a standard bathtub was filled with 63 L of 40 °C water, considered one of the most comfortable bathing temperatures, to emulate the bathing environment, as shown in Figure 2a [10]. Six semiconductor-based Si7021 (Silicon Labs, USA) temperature sensors were affixed individually at 5 cm intervals, starting from the surface down to a depth of 25 cm. The selected sensors have a very small maximum margin of measurement error, about ±0.3 °C at a 1 Hz sampling rate, which contributes to measuring the water temperature accurately. Also, this sensor is a monolithic CMOS IC integrated sensor. The 3 × 3 mm dual-flat no-leads (DFN) package includes an analog-to-digital converter (ADC), signal processing, calibration data, and an inter-integrated circuit (I2C) interface, which makes it easier to connect with the microcontroller unit (MCU) for system design than dedicated water measurement sensors such as the CS225 (Campbell Scientific, USA), TPJ 10K, and NTC10K (Capetti Elettronica, Italy). In addition, because of the monolithic CMOS sensor structure's design, the sensor IC has low drift and hysteresis, and excellent long-term stability [21]. Before affixing the sensors in the water tank, all the sensors were coated with epoxy to make them waterproof, and the accuracy of the measurements was calibrated and compensated for via programming. The six sensors were connected to an 8-channel multiplexer to control the sensors and acquire the data simultaneously. Also, a TES 1300 thermometer (TES, Taiwan), capable of measuring temperatures in the range of -50 °C to +199.9 °C with an accuracy of ±0.03% rdg, was affixed at a depth of 20 cm to observe the overall temperature. After the sensor initialization, the experiment began and was conducted for 30 min at 25 °C room temperature, and data measured from all the sensors were recorded every 5 min. The experiment results are shown in Figure 2b: the water temperature increased from the surface to the 20 cm depth and then decreased dramatically at the 25 cm depth, which was lower than the surface temperature and became increasingly lower as time went by. Also, measured temperatures from 10 to 20 cm depths fluctuated significantly, and the lowest value from 5 to 20 cm depths can be observed at the 15 cm depth for the last measurement (30 min). Clearly, the water temperature at the 5 cm depth shows the most stable variation over time, more so than at other depths in the whole experiment.
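As a rough illustration of how such a six-sensor sweep could be acquired, the sketch below assumes a TCA9548A-style 8-channel I2C multiplexer at address 0x70 and a Linux host running the smbus2 package; the paper does not name the multiplexer part or the acquisition host, so those details are assumptions. The measurement command and the temperature conversion formula follow the Si7021 datasheet.

```python
# Sketch only: assumes a TCA9548A-style mux at 0x70 and a Linux host with smbus2;
# neither is specified in the paper. Si7021 command and formula per its datasheet.
import time
from smbus2 import SMBus, i2c_msg

MUX_ADDR = 0x70             # assumed multiplexer address
SI7021_ADDR = 0x40          # fixed Si7021 I2C address
MEASURE_TEMP_NO_HOLD = 0xF3

def read_temperature(bus, channel):
    """Select one multiplexer channel and return the Si7021 temperature in degC."""
    bus.write_byte(MUX_ADDR, 1 << channel)            # enable only this channel
    bus.write_byte(SI7021_ADDR, MEASURE_TEMP_NO_HOLD) # start a conversion
    time.sleep(0.02)                                  # conversion takes ~11 ms max
    msg = i2c_msg.read(SI7021_ADDR, 2)
    bus.i2c_rdwr(msg)
    msb, lsb = list(msg)
    return 175.72 * ((msb << 8) | lsb) / 65536.0 - 46.85

if __name__ == "__main__":
    with SMBus(1) as bus:
        for minute in range(0, 31, 5):                # one sweep every 5 min for 30 min
            temps = [read_temperature(bus, ch) for ch in range(6)]
            print(minute, ["%.2f" % t for t in temps])
            if minute < 30:
                time.sleep(300)
```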
Therefore, the 5 cm depth position was selected as the point from which to measure the water temperature due to its linear and stable variation, as well as for practical design considerations for the mushroom-shaped design. Considering the water flow in the bath environment, the device cannot always stay in the same place in the bathtub. The water temperatures at different locations at the 5 cm depth were therefore observed to study whether consistent temperature readings can be made when the device floats to different locations in the bathtub. As Figure 3a shows, the six sensors were distributed in different locations at the 5 cm depth to measure the water temperature for 30 min. The conditions of the bath environment were set up to match those of the previous experiment, and room temperature was also maintained at 25 °C. The experiment results show that the water temperatures measured from the six sensors were almost identical and decreased linearly over time, as seen in Figure 3b.

Figure 4 shows the block diagram of the proposed smart bath assistive device, which consists of three parts: the sensor part for measuring water temperature, the device control part with wireless communication, and the smartphone application for remote control of the device and data storage. The temperature sensor Si7021 used in the experiment of Section 2.2 was also used in the proposed device with a 1 Hz sampling rate for bathing temperature monitoring. The ARM Cortex-M4 core-based EFM32WG 32-bit microprocessor (Silicon Labs, USA) was selected as the main controller for the device's control and data processing. This MCU provides a full digital signal processing (DSP) instruction set and includes a hardware floating point unit (FPU) for faster computation performance. It also features up to 256 KB of flash memory, 32 KB of RAM, and central processing unit (CPU) speeds of up to 48 MHz, which are sufficient for implementing a compensation algorithm that processes the temperature automatically. With minimal energy consumption and intelligent peripherals, this MCU can efficiently control the device over a long lifespan. For the power supply, due to the nature of the device's use, it is not suitable to have a USB port to charge the battery. Therefore, a wireless charging circuit with wireless charging coils was designed for the device. For wireless communication between the device and smartphone, the Bluetooth module BC127 (Blue Creation, USA) was selected. This module is highly flexible, with low power consumption and a maximum data rate of 3 Mbps. In addition, an integrated antenna is ideal for easily adding high-quality audio and data communication. An Android OS-based smartphone app was developed to remotely control the proposed device, while the measured data is displayed and recorded for analysis via the app.
Device Manufacture and Assembly
Because of the proposed smart assistive bath device's mushroom-shaped design, the printed circuit boards (PCBs) were manufactured as two separate parts: the main board with a larger surface area designated to mount the MCU, Bluetooth module, and other electronic elements are housed in the cap of the device; and a smaller circular board designated for the temperature sensor is housed in the bottom stem of the device, as shown in Figure 5a,b. For the main board, the Bluetooth module was mounted on the top of the PCB and kept away from the MCU to avoid tracking signals produced by the high-frequency oscillator of the MCU. Also, the antenna of the Bluetooth module is located on the edge of the PCB to avoid radiation pattern blockage caused by other electronics components. An audio codec is implemented to give safety alarms in real time associated with excesses in bath temperature and bathing time. Also, the audio codec can allow for communication between a caregiver and a user in the bath by hands-free Bluetooth technology. The wireless charging circuit was also designed on the top of the PCB and the receiver coil was attached to the back of the main PCB for direct connection to the transmission coil for the device's docking and power transmission. Guide holes are designed on the same side of the two PCBs to connect easily to the main board. Both PCBs were manufactured with a four-layer structure, and all of the components' sizes were from the 2012 package (2.0 × 1.25 mm) and were mounted on both sides of the PCB to minimize the PCB size. Figure 6 shows the proposed smart bath assistive device assembled with manufactured mushroom-shaped shell. The shell is composed of a red cap and white stem fabricated by a 3D printer using high-impact strength acrylonitrile butadiene styrene copolymer (ABS) material. The ABS material has several advantages including high chemical resistance (making them safe to use), good electrical insulants, and, importantly, heat resistance that protects the device from deformation even when used in hot water with strong temperature variation [22]. Also, due to the ABS material being metal-free, it is able to avoid detuning effects on the antenna and unwanted signal path loss caused by the metal and metal coating housing material used to encapsulate the PCB. Figure 6a is the top view of the assembled device: two speakers for stereo sound and three buttons to control the device, all of which are waterproof. Figure 6b is the bottom view of the device: the temperature sensor was affixed to the bottom of the device and can be viewed clearly due to its thin and transparent acrylic cover. Four screws keep the cap and stem sealed together, with a rubber ring between cap and stem acting as an airtight and waterproof sealant. Figure 6c is the lateral view of the device showing its dimensions. The diameter (D) of the device is 12 cm and the total height of the device from the top to bottom of the device (HT) is 10 cm. The height of the stem part (HS) is 5 cm, which is immersed in the water allowing the temperature sensor to be 5 cm under the water.
Temperature Sensor Calibration and Wireless Communication Performance Verification
Due to the temperature sensor and Bluetooth module being encased in the waterproof shell, a temperature measurement test in a temperature and humidity chamber (T2, YMRTC Co., Ltd.) for sensor calibration and a wireless communication performance test had to be conducted. After the chamber temperature was initialized at 40 °C, the device was placed in the center of the chamber and the chamber was then turned off in order to naturally bring down the temperature to 20 °C to imitate bathing temperature variation. Data measured by the device were transmitted to a laptop by Bluetooth and instantly recorded as raw data. However, the chamber temperature displayed on the chamber interface was measured by the chamber's own sensor, which is placed at the top of the chamber. Thus, the chamber temperature indicated was not suitable as a reference for observing the temperature near the device's sensor [23]. Therefore, the K-type temperature sensor of the digital thermometer (TES-1300, TES Electrical Electronic Corp.) was attached near the device's sensor. The measured data was also recorded as raw data by the digital thermometer to act as a comparative reference with the data measured by the smart bath assistive device.
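A minimal logging script of the kind that could run on the laptop side is sketched below. It assumes the Bluetooth link is exposed as a serial (SPP) port and that the device streams one plain-text temperature value per line at 1 Hz; neither detail is specified in the text, so both are assumptions made only for illustration.

```python
# Sketch only: assumes the Bluetooth link appears as a serial (SPP) port and that
# the device emits one temperature value per line; both details are assumptions.
import csv
import time
import serial  # pyserial

PORT = "/dev/rfcomm0"  # hypothetical port name; e.g. "COM5" on Windows

with serial.Serial(PORT, 9600, timeout=2) as link, \
        open("chamber_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["unix_time", "device_temp_c"])
    end_time = time.time() + 60 * 60          # log for up to one hour
    while time.time() < end_time:
        line = link.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        try:
            temp_c = float(line)
        except ValueError:
            continue                          # skip malformed lines
        writer.writerow([time.time(), temp_c])
```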
Experiment for Comparison of the Proposed Device with a Commercial Bath Thermometer
As shown in Figure 7, the smart bath assistive device and an analog bath thermometer (Double Heart, Japan) were positioned to float in the water tank to compare their respective performances. The water tank was filled with 63 L of 40 °C water (as done in the previous experiment) in order to observe the relationship between water temperature and depth. The experiment began at 40 °C and lasted for 30 min. The data measured by the device was recorded as raw data and the temperature values of the analog bath thermometer were recorded 10 times at one-minute intervals. In addition, a digital thermometer was set at a depth of 5 cm under the surface to observe temperature variation. The measured data were used as a reference to compare with the smart bath assistive device. Room temperature was also maintained at around 26 °C for the whole experiment. Figure 8 shows the experiment results of the proposed smart bath assistive device for temperature measurements and the wireless data communication performance test. The lines with black and white dots in Figure 8a represent the temperature measured by the proposed device and digital thermometer HTC-1, respectively. The x-axis and y-axis represent the chamber temperature versus the actual measured temperature. The temperature measured by the proposed device is lower than that of the digital thermometer. Figure 8b shows the gap between the two devices' measurements starting around 0.7 °C and then gradually becoming smaller as the chamber temperature decreased. After the chamber temperature dropped to below 35 °C, the temperature difference between the two devices stabilized at around 0.5 °C until the chamber temperature reached the lowest temperature of 20 °C. As mentioned before, the device's temperature sensor was enclosed within a waterproof shell, and even though the cover of the sensor was made with a thermally sensitive material, the sensor was not able to measure the temperature directly. Therefore, the temperature measuring response speed of the proposed device is slower than that of the digital thermometer, causing the temperature measurement difference between the two devices, with the proposed device's reading always a bit lower than the digital thermometer's reading. Figure 9 shows the results of the second experiment: comparison of the water temperature measurement performance of the proposed device and the commercial device. Three temperature values were acquired by the proposed device, a commercial analog bath thermometer, and a digital thermometer, as shown in Figure 9a. Although it was shown earlier that the inner bath-water temperature is higher than the temperature at the surface, the experiment results of the three devices show that the temperature measured by the analog device is a little higher than that of the proposed device. According to the results of the first experiment, the reason for this phenomenon is assumed to be due to the device's sensor being enclosed within the stem, making the measured temperature lower than the actual temperature. In addition, the relationship between the device and the digital thermometer shows that: first, the digital thermometer's measurement is higher than those of both the device and the analog bath thermometer, proving that the water temperature at a depth of 5 cm is again higher than at the surface; and second, the gap between the device and the digital thermometer remained constant from the experiment's start to its finish, as can be clearly observed.
Therefore, the relationship between the two values was studied by Pearson correlation and Spearman rank correlation in MATLAB, as shown in Figure 9b, with coefficient values of 0.9983 and 1, respectively, which proves that the two lines are almost exactly parallel. The difference between the two devices' measurements was also close to 0.5 °C, which is similar to the results of the first experiment. To compensate for the lower temperature value measured by the smart bath assistive device, the composition of the temperatures measured by the digital thermometer and by the device was studied from the experiment results, as represented by Equations (2) and (3), respectively. The temperature value acquired from the digital thermometer was assumed to consist of the actual temperature at the 5 cm depth and the error value of the digital thermometer. However, the temperature value acquired from the device not only consisted of the same temperature at the 5 cm depth and the error value of the sensor (Si7021), but also the error value introduced by the acrylic cover, which has to be taken into consideration. Furthermore, the margin of error of the digital thermometer in the range of 30~40 °C is approximately ±0.1 °C, which is smaller than that of the device's sensor at about ±0.2 °C. For these reasons, the main cause of the difference between the two devices' measurements can be attributed to the acrylic cover and the error value of the proposed device's sensor. In addition, the water temperature at the 5 cm depth measured by the digital thermometer can be used as the standard value to develop a compensation algorithm for improving the accuracy of the device. Therefore, a compensation algorithm based on the curve fitting method with a linear fitting model was suggested as Equation (4) [24]. Each coefficient for the equation was set with 95% confidence bounds. For evaluating the goodness of fit, the following values were observed: the sum of squares due to error (SSE) = 0.006552, R-square = 0.9972, adjusted R-square = 0.9958, and root mean square error (RMSE) = 0.04047. The data processed through the compensation algorithm can be seen in Figure 10. The compensation algorithm-processed temperature value of the device and the value of the digital thermometer are both higher than the analog thermometer by about 0.3 °C, as shown in Figure 10a, and the temperature difference between the proposed device and the digital thermometer is less than 0.03 °C, as shown in Figure 10b.

Likewise, several studies have proposed a variety of design methods with complex systems to measure water temperature in baths. They have one thing in common: the temperature sensors were set at varying depths in the water to achieve high accuracy and high stability [25][26][27]. However, as this study shows, the proposed device has a small size and simple structure, and features an adaptable compensation algorithm that allows the device to measure the inner water temperature with high accuracy and stability for a long period of time. Although there are several kinds of commercial wireless water temperature measuring devices, such as ALA (Monnit, USA) and TMU (LinkThru, USA), for monitoring temperatures in water storage tanks, pools, and aquariums, these devices must be mounted on the inner wall surface of the water container and can only measure the temperature near the sensor-they cannot be moved to another place.
In addition, the accuracy of these two devices is ±1 °C, twice the error margin of the proposed device's ±0.5 °C. Furthermore, as regards convenient usage and data acquisition, these commercial devices are at a disadvantage since they use a 900 MHz wireless communication frequency, which necessitates connecting to a gateway first rather than communicating directly with a smartphone.
Results and Discussion
T_Actual_Bath = 0.09102·sin(T_Proposed_Device − π) + 0.0189·(T_Proposed_Device − 10)² + 23.83    (4)

Figure 10. Graph with developed compensation algorithm for temperature measured by the proposed device: (a) water temperatures measured by the three devices; (b) differences in value between the proposed device (with compensation algorithm) and digital thermometer.
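To make the compensation step concrete, the following Python sketch shows how a model of the same form as Equation (4) could be fitted and how the Pearson and Spearman correlations could be recomputed. The temperature pairs below are hypothetical placeholders (the original analysis was performed in MATLAB), the sine argument is read as (T − π), and the fitted coefficients will not reproduce those reported above.

```python
# Illustrative sketch of the compensation-model fitting and correlation
# analysis described above. The (device, reference) temperature pairs are
# hypothetical placeholders, not the measured experimental data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr

def compensation_model(t_device, a, b, c):
    # Same functional form as Equation (4): a*sin(T - pi) + b*(T - 10)^2 + c
    return a * np.sin(t_device - np.pi) + b * (t_device - 10.0) ** 2 + c

t_device = np.array([36.0, 36.5, 37.0, 37.5, 38.0, 38.5, 39.0])     # proposed device (°C)
t_reference = np.array([36.5, 37.0, 37.5, 38.0, 38.5, 39.0, 39.5])  # digital thermometer at 5 cm (°C)

# Correlation between the two measurement series (cf. Figure 9b).
r, _ = pearsonr(t_device, t_reference)
rho, _ = spearmanr(t_device, t_reference)

# Nonlinear least-squares fit of the compensation model.
params, _ = curve_fit(compensation_model, t_device, t_reference,
                      p0=(0.1, 0.02, 24.0), maxfev=10000)
residuals = t_reference - compensation_model(t_device, *params)
rmse = np.sqrt(np.mean(residuals ** 2))

print(f"Pearson r = {r:.4f}, Spearman rho = {rho:.4f}")
print(f"Fitted coefficients (a, b, c) = {params}")
print(f"RMSE of the fit = {rmse:.4f} °C")
```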
Conclusions
In this paper, a smart bath assistive device measuring inner water temperature for monitoring bathing temperature was developed. Unlike current approaches, which measure the water temperature at the surface of the bathwater with commercially available thermometers, the proposed device measures the bathing temperature under the water at a depth of 5 cm, because the inner water temperature is higher than at the surface, as shown by a basic experiment that observed the relationship between water temperature and depth. The device was designed in a mushroom shape consisting of a cap and a stem that store the main PCB and the sensor PCB, respectively. The entire device was manufactured with a waterproof design allowing the cap part to float in water and the stem part to be immersed in the water to measure the inner water temperature. Two kinds of experiments were conducted to test the temperature measurement performance and to compare the proposed device with a commercial bath thermometer. Based on the experiment results, a compensation algorithm was developed and programmed into the device to adjust the data measured by the smart bath assistive device. The processed temperature value was higher than that of the commercial analog bath thermometer by about 0.3 °C, which is similar to the basic experiment results. Also, the data acquired by the device exhibited greater linearity and stability than the analog thermometer, showing that the device has dependable performance in bathing temperature measurement. With a high-performance Bluetooth module, the processed data are transmitted to an Android OS-based smartphone app for display and storage. In addition, thanks to a low-power ARM architecture-based MCU for system control and data processing and two 3500 mAh rechargeable batteries, the system can work for about 30 h on one full charge.
Personalized Endoscopy in Complex Malignant Hilar Biliary Strictures
Malignant hilar biliary obstruction (HBO) represents a complex clinical condition in terms of diagnosis, surgical and medical treatment, endoscopic approach, and palliation. The main etiology of malignant HBO is hilar cholangiocarcinoma, which is considered an aggressive cancer of the biliary tract and still has a poor prognosis today. Endoscopy plays a crucial role in malignant HBO, from diagnosis to palliation. This technique allows the collection of cytological or histological samples, direct visualization of the suspect malignant tissue, and an echoendoscopic evaluation of the primary tumor and its locoregional staging. Because obstructive jaundice is the most common clinical presentation of malignant HBO, endoscopic biliary drainage, when indicated, is the preferred treatment over the percutaneous approach. Several endoscopic techniques are available today for both the diagnosis and the treatment of biliary obstruction. The choice among them can differ for each clinical scenario. In fact, a personalized endoscopic approach is mandatory in order to perform the proper procedure in each individual patient.
Introduction
The management of malignant hilar biliary obstruction (HBO) remains a medical challenge today in terms of diagnosis, treatment alternatives, and palliation options. The etiology of malignant HBO mainly includes cholangiocarcinoma originating between the cystic duct and the segmental branches of the intrahepatic bile ducts (Klatskin tumor or hilar cholangiocarcinoma) [1]. More rarely, hilar obstruction can be caused by the local extension of adjacent tumors, such as gallbladder, liver, and pancreatic cancer, or by metastasis from distant malignancies [2]. The Bismuth-Corlette classification system is used to classify hilar cholangiocarcinoma taking into account the involvement of the biliary confluence and the intrahepatic ducts. Patients may be classified into four categories-Bismuth type I when the stricture is localized in the main biliary duct and does not involve the confluence; type II when the stricture involves the main confluence; type IIIa when the stricture involves the confluence and the right sectorial confluence sparing the left one; type IIIb when the stricture involves the confluence and the left sectorial confluence sparing the right one; and type IV when the confluence and the right and left sectorial confluences are all involved [3].

Several features are assessed during cholangioscopic biliary evaluation-the presence of stricture, lesion type, mucosal features, papillary projections, ulceration, abnormal vessels, scarring, and pronounced pit pattern [14]. Moreover, the direct visualization of a suspect area allows the execution of a targeted forceps biopsy, increasing the accuracy of malignancy detection ( Figure 1B) [15]. However, cholangioscopy devices are burdened by high costs, and their widespread use is consequently limited. Echoendoscopy (EUS) is another endoscopic technique that can be used to evaluate a biliary stricture. EUS allows the visualization of the primary lesion and the locoregional staging, evaluating the infiltration of adjacent tissues, lymph nodes, and vessel involvement [16]. The diagnostic accuracy of EUS has been shown to be higher in distal biliary tract strictures than in hilar strictures [17]. EUS fine-needle aspiration (EUS-FNA) is a well-established procedure for cytological sample collection; however, its application in hilar cholangiocarcinoma is not widely performed because of technical complexity [17]. In a meta-analysis involving 957 patients, the sensitivity and specificity of EUS-FNA in biliary stricture were 80% and 97%, respectively. The sensitivity of EUS-FNA in proximal strictures was significantly lower than in distal strictures (76% versus 83%, respectively) [18]. EUS-FNA can be a useful tool in case of ERCP sample collection failure [19].
Ultrasound imaging can also be used inside the biliary tree by performing intraductal endoscopic ultrasound (IDUS). This consists of a tiny probe inserted into the biliary ducts in order to evaluate the presence of malignant stigmata on the biliary ducts' walls. In a retrospective study, IDUS showed a sensitivity and specificity in malignancy detection of 93.2% and 89.5%, respectively [20]. Other authors documented a higher sensitivity of IDUS when compared with EUS [21].
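Many of the figures quoted in this section are sensitivities and specificities. As a reminder of how these metrics are derived, the short Python sketch below computes them from a confusion matrix with purely hypothetical counts (not data from the cited studies).

```python
# Sensitivity and specificity from a confusion matrix (hypothetical counts,
# for illustration only; not data from the cited studies).
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Proportion of malignant strictures correctly identified."""
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives: int, false_positives: int) -> float:
    """Proportion of benign strictures correctly identified."""
    return true_negatives / (true_negatives + false_positives)

# Hypothetical cohort: 100 malignant and 50 benign strictures.
tp, fn = 80, 20   # malignant strictures detected / missed
tn, fp = 48, 2    # benign strictures correctly labeled / false alarms

print(f"Sensitivity = {sensitivity(tp, fn):.0%}")   # 80%
print(f"Specificity = {specificity(tn, fp):.0%}")   # 96%
```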
Recently, two other techniques became available for a direct evaluation of the biliary walls-confocal laser endomicroscopy (CLE) and optical coherence tomography (OCT). CLE uses a low-power laser to create a magnification of the mucosal layers ( Figure 2) [22].
During ERCP, a CLE probe can be advanced through the duodenoscope channel and inserted into the biliary tree. The Miami classification and the Paris inflammatory criteria have been developed in order to distinguish a malignant from an inflammatory biliary stricture [23,24]. The sensitivity and specificity of CLE have been reported to be 90% and 72%, respectively [25]. Conversely, OCT uses infrared light to provide cross-sectional images of tissue reflectance in order to obtain information on microscopical tissue architecture [26]. Volumetric laser endomicroscopy (VLE) is a newer OCT modality that allows us to obtain higher-definition in vivo cross-sectional images of the biliary wall layers [27]. OCT increases the sensitivity and accuracy of malignancy detection when compared to brush cytology alone [28]. M. Arvanitakis et al. described the role of OCT during ERCP to assess the diagnosis of a biliary stricture using two OCT criteria for malignancy-unstructured wall layers and the presence of neovascularization. They reported an increased diagnostic accuracy when standard techniques (e.g., brush cytology and forceps biopsy) are combined with OCT [28]. However, the role of OCT as a single diagnostic tool in biliary malignancy is not well defined yet, and its use and diffusion are limited due to high costs.
Endoscopy can also play a role in the evaluation of biomarkers by allowing bile sample collection. Classically, in cholangiocarcinoma, carbohydrate antigen 19-9 (CA 19-9) is considered the most accurate serum biomarker. However, a CA 19-9 level >100 U/mL has a sensitivity of 53% and a specificity of 75-90% in detecting cholangiocarcinoma [29]. Moreover, CA 19-9 can be raised in other malignancies (e.g., pancreatic cancer) and in benign conditions (e.g., cholangitis and primary sclerosing cholangitis) [30]. Some authors used bile samples collected during ERCP to perform a multi-omic analysis (both metabolomic and proteomic), obtaining a panel of lipids and proteins that can discriminate patients with bilio-pancreatic malignancy [31]. Moreover, interest in the role of extracellular vesicles (EVs) for cancer detection is rising. The EV concentration in bile collected during ERCP showed the capability to distinguish patients with malignancy from patients with benign biliary strictures with a higher level of accuracy when compared with the EV concentration in serum [32].
Indication for Biliary Drainage in Malignant HBO
The choice for biliary drainage is a complex assessment that should be made by a multidisciplinary team. The first rule to keep in mind is that it is essential to complete the radiological abdominal staging, as the placement of a device into the biliary tree can interfere with abdominal cross-sectional imaging (e.g., CT and MRI) [33]. Hence, patients can be divided into two main groups-those who are eligible for resection surgery and those requiring palliation therapy.
In resectable malignant HBO, preoperative biliary drainage (PBD) is not routinely performed. Several retrospective studies showed that PBD increases the risk of postsurgical infections without any effect on survival [34][35][36]. In a systematic review and meta-analysis including 501 patients who underwent PBD and 391 patients who did not, Celotti A. et al. showed that the two groups did not differ in terms of mortality rate but did differ in terms of morbidity, with an increased risk of infective complications in patients undergoing PBD [37]. Scheufele et al. demonstrated that PBD induces a shift of the biliary microbiome with an increase of aggressive and resistant bacteria [38]. Given this background, the indication for PBD should be made by balancing risks and benefits for each patient. In those undergoing left hepatectomy, PBD is not indicated, as it increases the mortality rate, mainly due to the occurrence of post-operative sepsis [34]. Conversely, one of the main causes of death after right hepatectomy is liver failure, and it has been shown that it is significantly more frequent in patients who did not undergo PBD [34]. This difference can be attributed to the higher volume of parenchyma loss in right hepatectomy when compared to left hepatectomy; therefore, the quantification of the future liver remnant (FLR) volume is essential to indicate whether PBD should be performed [39]. When the FLR volume is less than 30%, portal vein embolization (PVE) is required to obtain hypertrophy of the remnant liver; in this setting, PBD appears to reduce the risk of hepatic insufficiency and should definitely be performed [40]. Moreover, there is consensus that PBD is indicated in patients with cholangitis, hyperbilirubinemia-induced malnutrition, hepatic insufficiency or renal insufficiency, patients needing neo-adjuvant therapy, severely symptomatic patients, and those with delays in surgery [33].
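The indications discussed above can be summarized as a simple decision rule. The Python sketch below is a deliberately simplified illustration of that logic under the stated thresholds; the function name and inputs are ours, and it is not intended as a clinical decision tool.

```python
# Simplified illustration of the preoperative biliary drainage (PBD) logic
# discussed above. Not a validated clinical decision tool.
def pbd_indicated(planned_resection: str,
                  flr_volume_fraction: float,
                  cholangitis: bool = False,
                  hepatic_or_renal_insufficiency: bool = False,
                  neoadjuvant_therapy: bool = False,
                  delayed_surgery: bool = False) -> bool:
    # Indications that apply regardless of the planned resection.
    if cholangitis or hepatic_or_renal_insufficiency or neoadjuvant_therapy or delayed_surgery:
        return True
    if planned_resection == "left_hepatectomy":
        return False  # PBD not routinely indicated before left hepatectomy
    if planned_resection == "right_hepatectomy":
        # FLR < 30% calls for portal vein embolization, with PBD recommended.
        return flr_volume_fraction < 0.30
    return False

print(pbd_indicated("right_hepatectomy", flr_volume_fraction=0.25))  # True
print(pbd_indicated("left_hepatectomy", flr_volume_fraction=0.25))   # False
```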
The prognosis of cholangiocarcinoma is still today poor. Surgery is the only curative approach but it is feasible in just 30-40% of patients [5]. Criteria for non-resectability are distant metastases, lymph node metastases beyond the hepatoduodenal ligament, the bilateral ductal extension to the secondary (or sectorial) biliary branches, encasement or occlusion of the main portal vein (or common hepatic artery) proximal to its bifurcation, unilateral involvement of secondary (or segmental) biliary radicles with contralateral vascular involvement, lobar atrophy with the involvement of contralateral secondary (or sectorial) biliary radicles, and lobar atrophy with the involvement of contralateral portal vein or hepatic artery [41]. Biliary drainage in unresectable HBO represents the cornerstone for palliation. The aims of palliative biliary drainage are to enable chemotherapy and radiotherapy administration and improve the quality of life, relieving jaundice, pruritus, pain, and cholangitis [4].
Percentage of Liver Volume to Drain
Once the indication for biliary drainage has been established, the next step is to define how much of the liver parenchyma should be drained to relieve jaundice and reduce the risk of cholangitis.
In a retrospective study including 107 patients undergoing endoscopic stent placement for malignant HBO, Vienne A. et al. showed that draining >50% of the liver volume was a predictor of drainage effectiveness, particularly in Bismuth III stricture, and was associated with longer overall survival [42]. Interestingly, Takahashi E. et al. correlated the percentage of liver volume to drain with the patient's liver function. They concluded that effective biliary drainage is obtained when >33% of the liver volume is drained in patients with preserved liver function and >50% in those with impaired liver function [43].
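As a numerical shorthand for the thresholds reported in these two studies, one might write the following illustrative sketch (the threshold values are those cited above).

```python
# Illustrative check of the drained-liver-volume thresholds cited above:
# >33% with preserved liver function, >50% with impaired liver function.
def drainage_sufficient(drained_fraction: float, liver_function_preserved: bool) -> bool:
    threshold = 0.33 if liver_function_preserved else 0.50
    return drained_fraction > threshold

print(drainage_sufficient(0.40, liver_function_preserved=True))    # True
print(drainage_sufficient(0.40, liver_function_preserved=False))   # False
```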
A cross-sectional study (e.g., CT or MRI) before performing the biliary drainage is crucial to define which liver sector will be drained in order to avoid opacification of undrained biliary ducts, thus reducing the risk of post-procedural cholangitis [44]. Some authors suggested contrast-free cannulation to reduce the risk of post-ERCP cholangitis [45]. This technique is based on cross-sectional imaging as a guide-map to perform the cannulation of the obstructed duct, injecting the contrast medium only when the obstruction is crossed and thus opacifying only the drained sectors. Some authors described the use of other contrast media (e.g., air or CO2) to reduce the risk of infection [46]. Moreover, cross-sectional imaging allows identifying the presence of portal vein thrombosis with consequent segmental parenchymal atrophy. Drainage of an atrophic liver segment should be avoided because it has been shown that it does not improve liver function, drainage effectiveness, or survival ( Figure 3) [47]. On the contrary, it may increase the risk of cholangitis [42].
PTBD Versus ERCP for Biliary Drainage
In pre-operative HBO drainage, when indicated, the choice between ERCP and percutaneous approach is not standardized. In a meta-analysis of 15 studies, Hameed A. et al. [48] showed that the occurrence of liver failure was higher in patients undergoing percutaneous transhepatic biliary drainage (PTBD) when compared with endoscopic biliary drainage (EBD, including endoscopic nasobiliary drainage or stent placement). Moreover, the median one-year and five-year survival was higher in EBD (respectively 91% versus 73% and 46% versus 30%). The incidence of procedure-related complications, such as cholangitis and pancreatitis, was not statistically different between the two groups, however, there was a trend of fewer complications in PTBD patients. In another meta-analysis, Mahjoub A. et al. reported that the overall procedure-related complications rate (in terms of cholangitis and pancreatitis occurrence) was lower in the PTBD than in the EBD group. However, post-operative morbidity and mortality were higher in the PTBD group (26% versus 21% and 7.5% versus 3.8%, respectively) with a trend towards better outcomes in the EBD group although not statistically significant. In palliative biliary drainage, ERCP is generally preferred rather than PTBD [44]. Moreover, biliary drainage via ERCP showed lower adverse events rate and shorter hospitalization when compared with PTBD [49]. In a propensity score matching analysis, Komaya K. et al. reported that patients undergoing PTBD have lower overall survival and a higher risk for seeding metastasis when compared with ERCP [50].
The decision to perform biliary drainage via ERCP or PTBD is complex and may differ among centers depending on technical experience and local facilities. Generally, PTBD is preferred when the patient presents gastro-duodenal altered anatomy, when the bile ducts to drain are not accessible by ERCP, and when ERCP was not sufficient to achieve adequate biliary drainage. The major drawback of PTBD is the need for an external catheter, which may be a route of infection and may cause discomfort for the patient through local pain and aesthetic inconvenience, worsening the patient's quality of life ( Figure 4) [51]. PTBD and ERCP should not be considered as mutually exclusive-in case of complex HBO that is difficult to approach by ERCP, a PTBD can be performed initially, and the PTBD tube can then be used as a guide to place a stent via ERCP (Rendez-vous technique) [52].
ERCP for Biliary Drainage
Endoscopic drainage in malignant HBO is challenging and should be performed in high volume centers [44]. In a meta-analysis involving 13 studies, Keswani R. et al. showed a higher success rate and lower occurrence of adverse events when ERCPs are performed in high volume centers particularly for advanced procedures [53]. To perform biliary drainage, three options are available-plastic stent (PS), nasobiliary drainage, and self-expandable metal stent (SEMS).
Plastic stents are indicated in pre-operative drainage and when the treatment approach (curative versus palliative) has not yet been defined. The advantages of PS are their removability, which does not preclude further therapeutic approaches, and their moldability. Therefore, the caliber and the length of PS can be adapted to the individual biliary tree shape. When sufficient biliary drainage is not achieved, multiple PS can be inserted in order to increase the biliary tree patency ( Figure 5A,B).
Major drawbacks of PS are the risk of stent migration and stent occlusion [53,54]. PS migration occurs in 5-10% of cases, with distal migration more common than proximal migration [54]. In a large retrospective study, Arhan M. et al. documented a lower incidence of stent migration in malignant biliary stricture and multiple stent placement when compared respectively with benign biliary stricture and single or double PS placement [55]. Stent occlusion occurs in up to 30% of cases, and it is related to bacterial biofilm formation, biliary sludge, biliary reflux of dietary fibers, and clot formation [55,56]. Stent patency mainly depends on stent caliber-larger PS have longer patency and time of placement, and PS exchange is generally needed every 3-6 months [57]. Endoscopic nasobiliary drainage (ENBD) can be an option in pre-operative malignant HBO drainage [58]. In a meta-analysis involving 925 patients with malignant biliary obstruction, Lin H. et al. reported a lower rate of pre-operative cholangitis, post-operative fistula, and stent dysfunction in patients undergoing ENBD compared with EBD [59]. Nasobiliary tube occlusion in hilar cholangiocarcinoma has been reported to occur more rarely than in plastic stents [60]. ENBD enables us to perform a cholangiogram by injecting contrast medium through the drainage tube without any other invasive procedures, thus helping in the diagnosis and management of complications. However, the major disadvantage of ENBD is the patient's discomfort and consequently its short-term usability.
SEMS is indicated in palliative drainage. Several studies showed the superiority of SEMS in palliation of unresectable malignant HBO when compared with PS [61]. In a meta-analysis, Sawas T. et al. documented that SEMS is associated with a lower risk of short-term and long-term stent occlusion, lower incidence of therapeutic failure, lower cholangitis occurrence rate, and less need for reinterventions when compared with PS [62]. Moreover, SEMS was associated with longer overall survival when compared with PS [63]. Different kinds of SEMS are nowadays available-fully-covered SEMS (FC-SEMS), partially-covered SEMS (PC-SEMS), and uncovered SEMS (U-SEMS). In malignant HBO, the use of U-SEMS is suggested because the uncovered mesh enables drainage of the side biliary branches [64]. In a retrospective study involving 30 patients, Inoue T. et al. reported the use of FC-SEMS in malignant HBO-although technically feasible, the occurrence of hepatic abscesses in 7% of patients highlights the risk of intrahepatic bile duct occlusion with consequent septic complications [65]. Kitamura et al. described the use of PC-SEMS in malignant HBO as an alternative to U-SEMS in order to reduce the risk of tumor ingrowth and to maintain the possibility of stent removal [66]. However, data on FC-SEMS and PC-SEMS in malignant HBO are still inconsistent, and only U-SEMS is indicated in this setting for palliation ( Figure 5B,C). Major drawbacks of U-SEMS are the non-removability and the difficult management of stent obstruction [67]. U-SEMS can occlude due to biliary sludge or tumor ingrowth-in the first case, the management is similar to biliary lithiasis ( Figure 6); the second case is more challenging and may require the placement of a second SEMS or PS.
Complete Versus Incomplete Drainage
Several studies investigated whether the biliary drainage should be unilateral or bilateral [68][69][70]. On one hand, unilateral drainage raises the issue of insufficient jaundice relief and the risk of infective complications; on the other hand, bilateral stenting is burdened by a higher technical complexity. In a multicenter, prospective, randomized study, unilateral and bilateral endoscopic drainage showed similar results in terms of success rate, but unilateral stenting was associated with a higher risk for reintervention and shorter patency time [71]. In a recent meta-analysis of 21 studies involving 1292 patients with malignant HBO, Meybodi M. et al. reported that the technical success and the functional success rate were higher in unilateral drainage when compared with bilateral drainage [70]. Short-term and long-term complication rates were comparable in the two groups [70]. In another meta-analysis, bilateral stenting, considering both SEMS and PS, has been shown to be more effective in lowering hyperbilirubinemia [69]. Moreover, bilateral SEMS seemed to be associated with a lower incidence of complications, while bilateral PS had similar odds of complications when compared with unilateral drainage [69]. A major limitation of these studies is the definition of unilateral and bilateral, respectively when one or two stents are placed in the biliary tree. This seems to be a simplification considering the great complexity of malignant HBO. Therefore, in Bismuth I, the placement of one stent is enough to obtain complete liver drainage; in Bismuth II, the placement of one stent can drain up to 50% of the liver (right or left liver), while two stents will drain the whole liver; in Bismuth III and IV, even two stents may not be enough to obtain sufficient drainage (Figure 7). Thus, the concept of unilateral and bilateral should be replaced by complete drainage and incomplete drainage [64].
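The relationship sketched above between Bismuth type and the stenting needed for complete drainage can be written as a simple lookup; the wording of each entry is our paraphrase of the text, for illustration only.

```python
# Illustrative mapping from Bismuth-Corlette type to the stenting effort
# typically required for complete drainage, paraphrasing the text above.
def stenting_for_complete_drainage(bismuth_type: str) -> str:
    strategy = {
        "I":    "one stent usually achieves complete drainage",
        "II":   "one stent drains up to ~50% of the liver; two stents are needed for complete drainage",
        "IIIa": "two or more stents; complete drainage may still not be achievable",
        "IIIb": "two or more stents; complete drainage may still not be achievable",
        "IV":   "two or more stents; complete drainage may still not be achievable",
    }
    return strategy.get(bismuth_type, "unknown Bismuth type")

for b in ("I", "II", "IIIa", "IV"):
    print(f"Bismuth {b}: {stenting_for_complete_drainage(b)}")
```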
When multiple SEMSs have to be placed in the biliary tree, two different techniques are available: stent-in-stent (SIS) and side-by-side (SBS). The SIS method consists of placing the second SEMS through the first SEMS, while the SBS technique involves the parallel placement of multiple SEMS simultaneously or sequentially. The studies comparing the two techniques are scanty. On one hand, some authors documented an increased rate of adverse events (e.g., cholangitis and liver abscess) and longer stent patency in SBS when compared to SIS [72]. Conversely, other authors described longer patency in SIS than in SBS [73]. Finally, some others found no differences in complication rate, patency, and overall survival comparing the two techniques [74]. The SBS method is generally preferred because the deployment of multiple stents is technically easier than in SIS and, more importantly, because in case of stent malfunctioning (e.g., stent occlusion for tumor ingrowth) reintervention is usually possible and successful, unlike in SIS, in which retreatment may be prohibitive [75].
EUS-Guided Biliary Drainage
Recently, EUS has emerged as an option in biliary drainage (echoendoscopic biliary drainage (EUS-BD)), particularly when transpapillary ERCP drainage has failed [76]. Two different routes can be used to access the biliary tree with EUS-the intrahepatic and the extrahepatic bile duct approaches. The extrahepatic bile duct puncture is valuable for distal biliary obstruction, thus for HBO, the intrahepatic approach is needed [77]. The EUS-BD has three possible drainage routes-the transmural placement of a stent creating a novel biliodigestive anastomosis; the transpapillary antegrade technique, which involves the dilatation of the puncture site, the passage of a guidewire through the stenosis until the papilla, and the antegrade release of a stent; and the transpapillary retrograde technique (Rendez-vous technique), requiring the insertion of a guidewire through the stenosis until the papilla, the exchange of the instrument, and the placement of a stent via the papilla using the guidewire as a route [77], as well as transgastric drainage. In malignant HBO, the EUS-BD can be considered as an alternative to PTBD in case of altered upper gastrointestinal anatomy, duodenal obstruction, gastric outlet obstruction, periampullary diverticulum, distal biliary tumor infiltration, or occluded biliary metal stent [78].
In a meta-analysis, Baniya R. et al. showed no statistically significant differences in terms of technical and clinical success rate between PTBD and EUS-BD in malignant biliary obstruction with a lower incidence of moderate-severe adverse events in the EUS-BD group [79]. Among the different methods available, the hepaticogastrostomy is effective for left-sided biliary decompression [80]. The hepaticogastrostomy involves the puncture of the left intrahepatic bile duct using the transgastric EUS imaging in order to deliver a stent between the bile tree and the stomach. This technique has a technical success rate of 91-100% and a clinical success rate of 75-100% [81]. The incidence of adverse events has been reported to be 25% including stent migration, bile leaks, pneumoperitoneum, and cholangitis [81,82]. Ogura T. et al. described the "bridging method" to approach the right biliary system. This requires the puncture of the left intrahepatic biliary duct through the stomach, the advance of a guidewire through the right intrahepatic biliary duct, the delivery of a metal stent between the right and the left intrahepatic ducts, and finally, the performance of the hepaticogastrostomy [83].
The role of EUS-BD has also been described as a rescue reintervention in patients presenting metallic stent dysfunction in which an ERCP attempt had failed [84]. EUS-BD can also be combined with ERCP to provide complete drainage. When a SEMS is deployed in the left biliary tract, a concomitant EUS-hepaticoduodenostomy can be performed; conversely, when a SEMS is placed in the right hepatic duct, an EUS-hepaticogastrostomy can be used to complete the drainage [85].
Once the biliary tree has been accessed, the choice between the transmural or transpapillary drainage is not standardized. The transpapillary drainage is generally more complex compared with the transmural, in fact, it requires the antegrade placement of a guidewire, access to the papilla in the Rendez-vous method, and the dilatation of the puncture tract in the antegrade method [77]. Moreover, the transmural drainage provides easy access to the biliary tree in case of reintervention.
These different techniques for EUS-BD should not be considered exclusive, but they can be considered complementary and can be chosen to provide the best drainage for personalized treatment of every single patient.
Loco-Regional Therapies
The only curative approach of cholangiocarcinoma is surgery, however, only a small percentage of patients can be referred to surgery because of locally advanced cancer or the presence of distant metastases. In order to prolong the overall survival and to improve the quality of life in patients with unresectable hilar cholangiocarcinoma, locoregional techniques have been developed such as photodynamic therapy (PDT), radiofrequency ablation (RFA), and brachytherapy (BT). PDT is a minimally invasive procedure that has been described as a palliative approach in advanced cholangiocarcinoma [86]. PDT involves the injection of a photosensitizer followed by irradiation with a specific wavelength in order to produce selective cytotoxicity on cancer cells [87]. PDT in cholangiocarcinoma can be performed both via ERCP and percutaneous transhepatic cholangioscopy (PTCS).
During ERCP, after injecting the photosensitizer, an optical fiber is inserted into the strictured bile ducts, and a light is applied in order to obtain the oxygen free radical formation within the tumor cells [88]. In cholangiocarcinoma, PDT has been shown to reduce malignant biliary stenosis and to be an option in post-surgical recurrence [88,89]. In a randomized prospective study, Ortner M. et al. reported a longer overall survival in patients with unresectable cholangiocarcinoma undergoing PDT when compared with standard treatment [90]. Li Z. et al. documented that patients with hilar HBO undergoing stent placement and PDT (both via ERCP and PTCS) had a significantly longer median survival, improved quality of life, and no differences in post-operative adverse events occurrences when compared with patients undergoing only stents placement [91]. In a retrospective study, PDT via ERCP and PTCS were compared and showed no significant differences in terms of overall survival and median metal stent patency [92].
RFA is another local therapy used in several solid tumors, which involves the production of high temperatures inside the tumor leading to tissue necrosis and, consequently, a reduction in tumor size [93]. RFA can be performed percutaneously, intra-operatively, or endoscopically. During ERCP, intrabiliary RFA is performed by placing an RFA catheter under fluoroscopic guidance through the malignant biliary stenosis and releasing thermal energy for a standardized amount of time ( Figure 8) [94]. In unresectable cholangiocarcinoma, intrabiliary RFA is considered an option prior to SEMS placement in order to prolong stent patency and overall survival [95]. Intrabiliary RFA has also been applied as an ablative treatment for SEMS occlusion due to tumor ingrowth [96]. In a meta-analysis involving 505 patients with unresectable biliary stricture, Sofi A. et al. reported significantly longer stent patency and overall survival in patients receiving RFA when compared with those treated only with stent placement [97]. Moreover, the risk for adverse events was not higher in the RFA group except for postprocedural abdominal pain [97].
PDT and RFA in hilar HBO can have a role in prolonging stent patency and, consequently, in improving quality of life, reducing reintervention for stent occlusion, and increasing overall survival; however, randomized studies are needed to clarify their role in daily clinical practice.
Another option for locoregional treatment in hilar cholangiocarcinoma is BT. Cholangiocarcinoma has been shown to be responsive to radiotherapy, which is mainly used as an adjuvant, neoadjuvant, or palliative treatment [4]. The application of external beam radiotherapy in hilar tumors can be challenging due to the risk of damage to surrounding organs; BT thus enables the local delivery of a high radiation dose while reducing the exposure of adjacent tissues. In biliary malignancy, BT can be performed percutaneously or endoscopically. The endoscopic approach involves the placement of a nasobiliary tube (NBT) through the biliary stricture and the insertion of a BT catheter inside the NBT for the delivery of high dose radiation ( Figure 9) [98].
Conclusions
Malignant HBO is a complex scenario that needs a multidisciplinary approach from diagnosis to final treatment. Each clinical case should be considered unique and should be discussed by radiologists, oncologists, surgeons, and endoscopists in order to personalize the care pathway in terms of diagnosis, treatment, and palliation.
The diagnostic algorithm ( Figure 10) in patients with HBO should always start with a clinical and laboratory assessment, including evaluation of liver function and biochemical markers (e.g., CA 19-9 and CEA). Cross-sectional imaging (e.g., CT or MRCP) is considered mandatory and should be obtained before any interventional approach in order to perform a radiological diagnosis, staging, and evaluation of resectability. In the case of an HBO with radiological stigmata of malignancy fulfilling the criteria for resectability, the patient can be referred to surgery even without a pathological sample. Otherwise, in the case of unresectable malignant HBO, a cytological or histological sample is mandatory. It can be obtained during ERCP by performing brush cytology or forceps biopsy. However, because of the low negative predictive value of these techniques, in case of a negative report for cancer but with a high radiological suspicion, another cytological or histological sample should be obtained. In this setting, an ERCP-guided brush cytology or forceps biopsy can be reattempted, or another technique can be used (e.g., EUS-FNA, peroral cholangioscopy, OCT). Once the diagnostic and staging pathway has been completed, the therapeutic algorithm can be applied ( Figure 11). Commonly, the patient with malignant HBO presents obstructive jaundice. In this setting, whether to perform a biliary drainage and which technique to use should be carefully evaluated in each patient. In resectable HBO candidates for left hepatectomy, biliary drainage is generally not required. In candidates for right hepatectomy, biliary drainage is required as a bridge to surgery in case of an FLR <30% requiring a PVE. In this case, a PTBD, a PS, or nasobiliary drainage (NBD) can be used. In unresectable HBO, biliary drainage represents the standard of care. Nowadays, endoscopic drainage is generally preferred over the percutaneous approach; however, the technique used can vary based on local facilities and experience. When ERCP is preferred for palliation, a U-SEMS should be considered the first choice. In case of failure or technical complexity (e.g., altered anatomy), an EUS-BD can be attempted. Regardless of the technique used, the biliary drainage should be as complete as possible (>50% of liver volume should be drained) in order to reduce the risk for infection and the risk for liver failure.
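The therapeutic algorithm outlined above (Figure 11) can be condensed into the following decision sketch; the function, option names, and structure are ours and are only an illustration of the prose, not a substitute for the figure or for clinical judgment.

```python
# Condensed illustration of the therapeutic algorithm for malignant HBO
# outlined above (Figure 11). Names and structure are illustrative only.
def drainage_strategy(resectable: bool,
                      planned_resection: str = "",
                      flr_below_30_percent: bool = False,
                      ercp_feasible: bool = True) -> str:
    if resectable:
        if planned_resection == "left_hepatectomy":
            return "no preoperative biliary drainage (generally not required)"
        if planned_resection == "right_hepatectomy" and flr_below_30_percent:
            return "PVE with bridge drainage (PTBD, plastic stent, or nasobiliary drain)"
        return "no routine preoperative biliary drainage"
    # Unresectable disease: palliative drainage is the standard of care.
    if ercp_feasible:
        return "ERCP with uncovered SEMS, aiming to drain >50% of the liver volume"
    return "EUS-guided biliary drainage (or PTBD) as an alternative"

print(drainage_strategy(resectable=False))
print(drainage_strategy(resectable=True, planned_resection="right_hepatectomy",
                        flr_below_30_percent=True))
```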
The application of new locoregional techniques (e.g., PDT, RFA, BT) can be considered to control locoregional growth. However, their applicability in daily clinical practice is still not standardized, and more clinical trials are needed to assess their use. Malignant HBO is such a complex condition that a standardized endoscopic approach may be challenging. Each patient should be carefully analyzed in order to define the pros and cons of the several endoscopic procedures available. Therefore, a personalized endoscopic approach is mandatory, with the aim of treating the patient in his or her complexity and uniqueness.
Author Contributions: G.C.: Study conceptualization, data curation and analysis, review, and editing. P.F.: Study conceptualization, data curation, and analysis. A.T.: Study conceptualization, data curation, and analysis. V.B.: Study conceptualization, data curation, and analysis. F.A.: Study conceptualization, data curation, and analysis. R.L.: Study conceptualization, data curation, and analysis. V.P.: Study conceptualization, data curation, and analysis. T.S.: Study conceptualization, data curation, and analysis; methodology, writing, review, and editing. I.B.: Study conceptualization, data curation, and analysis; methodology, writing, review, and editing. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Antimicrobial Effectiveness of Innovative Photocatalysts: A Review
Waterborne pathogens represent one of the most widespread environmental concerns. Conventional disinfection methods, including chlorination and UV, pose several operational and environmental problems; namely, formation of potentially hazardous disinfection by-products (DBPs) and high energy consumption. Therefore, there is high demand for effective, low-cost disinfection treatments. Among advanced oxidation processes, the photocatalytic process, a form of green technology, is becoming increasingly attractive. A systematic review was carried out on the synthesis, characterization, toxicity, and antimicrobial performance of innovative engineered photocatalysts. In recent decades, various engineered photocatalysts have been developed to overcome the limits of conventional photocatalysts using different synthesis methods, and these are discussed together with the main parameters influencing the process behaviors. The potential environmental risks of engineered photocatalysts are also addressed, considering the toxicity effects presented in the literature.
Introduction
Waterborne pathogens, such as viruses, bacteria, and protozoa, are responsible for 3.5 billion cases of diarrhea each year and 1.8 million deaths, most of them children under 5 years of age [1,2]. Access to clean water and sanitation, listed as Goal 6 of the United Nations' Sustainable Development Goals (SDGs), is one of the main challenges of the 2030 Agenda. Proper wastewater disinfection represents the basis for the prevention of waterborne infections induced by pathogenic microbes, which can cause outbreaks and increase the disease burden, especially in developing countries, with subsequent social and economic impacts. Moreover, treated wastewater is also a reliable and attractive alternative source of water supply in developed countries (EU 2022/952 regulation). In this regard, disinfection is mandatory before wastewater discharge and reuse to kill/inactivate pathogens [3]. Contemporary conventional water disinfection technologies, including chlorination and UV, have been extensively discussed due to their various operational and environmental burdens, such as the formation of potentially hazardous disinfection by-products (DBPs), which are inevitably produced due to the reaction between disinfectants, halides, and organic matter, and high energy consumption. Previous studies have shown that the survival of various kinds of aquatic organisms, such as algae, cladocerans, polychaetes, and fish, may be threatened in water bodies receiving chlorinated WWTP effluent [3][4][5][6]. To date, approximately 800 DBPs have been discovered [7], and most of them
Preparation and Characterization of Photocatalysts
All processes regarding the formulation of efficient photocatalysts require the study of synthesis procedures and physico-chemical characterization tests, which are useful to understand the mechanism by which a photocatalyst can work under light irradiation.
According to the literature, several methods can be used to prepare engineered photocatalysts, including sol-gel, hydrothermal-based, microemulsion, and precipitation methods, as well as their combinations [19,20]. The doping of photocatalysts makes it possible to inhibit the charge-carrier recombination phenomena and enables visible light absorption. The valence band holes, or conduction band electrons, are trapped in the defect sites generated by the dopant element, inhibiting the recombination of photo-induced holes and electrons and improving the interfacial charge transfer. Promising innovative catalysts have been produced using heterostructures. Heterojunctions can have synergic effects in various oxides, such as TiO 2 , SnO 2 , SiO 2 , CeO 2 , ZnO, WO 3 and ZrO 2 , due to the injection of conduction band electrons, which decreases the recombination rate and increases electron-hole pair lifetimes. This section describes the easiest and most cost-effective preparation methods for the synthesis of various doped or heterostructured semiconductors.
Sol-Gel Method
The sol-gel method is the most widely used method for the preparation of doped and undoped photocatalysts, and it can also be used in the nanometric range.
During sol-gel synthesis, the sol is generated by the hydrolysis and then polymerization of the precursor salt (usually metal alkoxides). The polycondensation reactions and the evaporation of the solvent make it possible to induce the transition from the sol to the gel phase. The process consists of the following steps: hydrolysis and condensation, then drying and thermal decomposition of precursors [17]. Figure 2a shows a schematic picture of a sol-gel process for the preparation of photocatalysts [21]. Depending on the solvent used, the sol-gel process can be classified as aqueous or non-aqueous.
As shown in Figure 2a, a molecular precursor is dissolved in water or alcohol and converted to gel through hydrolysis/alcoholysis using heating and stirring. The gel is wet or damp; thus, it needs to be dried using appropriate methods, depending on the desired properties and application. The sol-gel method makes it possible to obtain homogeneous composites with very high purity and is applicable at an industrial scale [22]. It is possible to create thin films with a thickness of 50-500 nm or powders. Different coating methods can be used to create thin films, including dip-coating, spin-coating, spray-coating, flow-coating, capillary-coating and climbing-cover processes [23][24][25].
As a general remark, the steps in the sol-gel method can be changed to simplify the procedure and enhance the doping efficiency. In addition, sol-gel synthesis has also been adopted together with a dip-coating procedure to immobilize visible active photocatalysts on macroscopic and transparent supports, with the purpose of formulating structured photocatalysts for use in heterogeneous photocatalysis for the depollution of gaseous streams and in continuous fixed-bed photoreactors for wastewater treatment [36].
Hydrothermal Synthesis
Hydrothermal synthesis occurs in a closed vessel with controlled temperature and pressure. The temperature and pressure conditions facilitate the dissolution of the chemical reagents and the formation of the products through crystallization. This technique provides a one-step reaction route for the production of complex materials. The method is called "solvothermal" when a solvent other than water is used [37]. The synthesis of photocatalysts with this method is typically performed in steel vessels operating at high pressure (autoclaves) under controlled temperature, and the formation reaction of the nano-catalysts occurs in the liquid medium. A schematic picture of this method is shown in Figure 2b.
When the reaction mixture inside the autoclave is heated, two zones with different temperatures are created. The reactants of the mixture form a solution in the zone at higher temperature, while the saturated solution present in the lower part of the autoclave is transported to the upper section of the system due to convective motion. When the solution in the upper part of the autoclave becomes cooler and denser, it descends. Simultaneously, due to the temperature decrease, the solution exceeds the limit of solubility and precipitation begins. This technique makes it possible to directly obtain catalysts in powder form, and the crystalline degree can be tuned depending on the operating conditions. In addition, the particle size, shape, and chemical composition can be modified by changing only two parameters, the temperature of the reaction mixture and the solvent used in the synthesis, in order to reach a high pressure and, consequently, supersaturation at lower temperatures.
When this method is used for the preparation of photocatalysts, it has been shown to be very effective in incorporating dopants into the crystalline structure of TiO 2 and ZnO. Many studies have been devoted to the controlled synthesis of TiO 2 particles in particular due to their high photocatalytic activity [38].
F-doped, hollow TiO 2 microspheres were prepared by Zhou et al. [39] through a hydrothermal synthesis method, controlling the hydrolysis of TiF 4 in an autoclave made of Teflon at a reaction temperature of 180 • C.
A visible, active N-doped TiO 2 photocatalyst was prepared using triethylamine as a nitrogen source with a low-temperature hydrothermal method [40].
Hydrothermal methods have also been used to prepare photocatalysts other than TiO 2 and ZnO, forming structures with very high degrees of crystallinity. For example, Amano et al. [41] showed that bismuth tungstate (Bi 2 WO 6 ) prepared following hydrothermal synthesis possessed high photocatalytic efficiency under visible light irradiation.
The increasing interest in hydrothermal synthesis derives from its advantages, such as the high reactivity of the reactants, easy control over the solution or interface reactions, the formation of metastable and unique condensed phases, less air pollution, and low energy consumption. The nanostructured energy materials can grow directly on conductive substrates with good, solid contact that can strongly enhance the conductivity [41].
Precipitation Method
The preparation of photocatalysts through the precipitation method consists of the chemical transformation of a highly soluble metal precursor salt into a chemical compound with lower solubility (Figure 2c).
The generation of the weakly soluble compound (and then the precipitate) is usually undertaken by changing (generally by increasing) the solution pH [42,43].
The semiconductor most widely prepared with this method is ZnO. Generally, the precursor of ZnO is obtained using a direct precipitation method involving the reaction between a zinc salt and a base in an aqueous solution, which belongs to the solution phase [44]. In particular, the preparation involves the reaction of zinc salts, such as Zn(NO 3 ) 2 , Zn(CH 3 COO) 2 ·2H 2 O, ZnSO 4 , etc., with a basic solution containing, for example, NH 4 OH or NaOH [45].
To dope ZnO with metals (with the aim of shifting its absorption in the visible region), the precursor salt of the doping element can be added to the solution of the zinc precursor before inducing the precipitation with the basic solution [46,47]. The obtained precipitate is then transformed into doped ZnO photocatalysts through thermal treatment.
Microemulsion
Microemulsion is a preparation process with which it is possible to control the morphological and structural parameters of both semiconductor particles and heterostructures [48]. In detail, direct (oil-in-water) and inverse (water-in-oil) microemulsion media can be used to prepare different photocatalysts. Microemulsions are thermodynamically stable solutions containing, at the least, a polar phase (usually water), a nonpolar phase (usually oil), and a surfactant. Different microstructures can be generated, ranging from droplets of oil dispersed in a water phase (oil-in-water) over a bi-continuous "sponge" phase to water droplets dispersed in a continuous oil phase (water-in-oil) [49]. The latter can be used as nanoreactors for the preparation of nanoparticles [50]. In the case of photocatalytic materials, the first step in nanoparticle formation is the chemical reaction between the two reactants trapped in the microemulsion cores, or the reaction between the reactant and the precipitating agent. For instance, TiO 2 -based photocatalysts can be prepared through the direct reaction of titanium isopropoxide with water solubilized in water-in-oil microemulsions stabilized by the presence of a surfactant, such as Triton X-100 [51]. For ZnO-based materials, zinc nitrate has been solubilized in the aqueous phase of the microemulsion together with a precipitation agent (such as tetramethylammonium hydroxide pentahydrate) [52]. However, in most cases, a final thermal treatment is required to obtain the desired crystalline phase for the semiconductor particles.
Characterization of Photocatalysts
It is very important to collect information on the physico-chemical properties of engineered photocatalysts in order to understand the effect of the operating parameters adopted in the synthesis procedure. A wide variety of characterization methods are available, which are discussed in the literature; thus, they are only briefly described here.
To determine the photocatalyst morphology at very high magnifications, scanning electron microscopy (SEM) is used. SEM analysis makes it possible to collect information about agglomerate size and shape. Transmission electron microscopy (TEM) permits higher magnifications than SEM.
To define the distribution and specific surface area (SSA) of pores, N 2 adsorption-desorption measurements at −196 °C are required. This analytical technique is based on the physical adsorption of gaseous molecules on the catalyst surface and within its pores. Since all the semiconductors used in photocatalysis are typically mesoporous materials, the most widely used model to measure SSA is the Brunauer-Emmett-Teller (BET) model. Generally, a greater surface area is linked to an increase in photocatalytic activity [53]. The BET method is, however, extensively discussed in the literature [54].
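To make the BET analysis step concrete, the short Python sketch below fits the linearized BET equation over the usual relative-pressure window and converts the monolayer capacity into a specific surface area; the isotherm points, the 0.05-0.30 fitting range, and the N 2 cross-sectional area of 0.162 nm 2 are standard textbook assumptions rather than data taken from the cited studies.

```python
import numpy as np

def bet_surface_area(p_rel, v_ads_cm3g, cross_section_nm2=0.162):
    """Specific surface area (m^2/g) from an N2 isotherm via the linearized BET equation.

    p_rel       : relative pressures p/p0 (ideally within ~0.05-0.30)
    v_ads_cm3g  : adsorbed volume at STP per gram of sample (cm^3/g)
    """
    y = p_rel / (v_ads_cm3g * (1.0 - p_rel))        # BET transform: (p/p0) / [v (1 - p/p0)]
    slope, intercept = np.polyfit(p_rel, y, 1)       # slope = (c-1)/(v_m c), intercept = 1/(v_m c)
    v_monolayer = 1.0 / (slope + intercept)          # monolayer capacity, cm^3(STP)/g
    n_avogadro, v_molar_stp = 6.022e23, 22414.0      # molecules/mol, cm^3(STP)/mol
    return v_monolayer / v_molar_stp * n_avogadro * cross_section_nm2 * 1e-18  # m^2/g

# Hypothetical isotherm points for a mesoporous TiO2-like powder (invented values)
p = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])
v = np.array([28.0, 31.5, 34.0, 36.5, 39.0, 41.5])   # cm^3(STP)/g
print(f"SSA ≈ {bet_surface_area(p, v):.0f} m^2/g")
```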
X-ray diffraction analysis (XRD) is commonly used to identify the crystalline phase of photocatalysts through Bragg's law [55]. Additionally, the Scherrer equation makes it possible to estimate the crystallite size of photocatalysts [56].
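As an illustration of how an XRD peak translates into a crystallite-size estimate, the following minimal Python sketch applies the Scherrer equation D = Kλ/(β cos θ); the peak position, peak width, and the shape factor K = 0.9 are illustrative assumptions, not values from the references discussed here.

```python
import numpy as np

def scherrer_crystallite_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k_factor=0.9):
    """Crystallite size D = K*lambda / (beta*cos(theta)) from an XRD peak.

    two_theta_deg : peak position 2-theta in degrees
    fwhm_deg      : peak full width at half maximum in degrees
    wavelength_nm : X-ray wavelength (Cu K-alpha by default)
    k_factor      : dimensionless shape factor (commonly ~0.9)
    Returns the crystallite size in nanometres.
    """
    theta = np.radians(two_theta_deg / 2.0)   # Bragg angle theta
    beta = np.radians(fwhm_deg)               # peak broadening in radians
    return k_factor * wavelength_nm / (beta * np.cos(theta))

# Hypothetical anatase-like reflection near 2-theta = 25.3 deg with 0.45 deg FWHM
print(f"D ≈ {scherrer_crystallite_size(25.3, 0.45):.1f} nm")
```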
Another useful method is Raman spectroscopy [57], which is based on the measurement of the Raman shift. The resulting plot displays the intensity as a function of the Raman shift. The use of Raman spectroscopy for the characterization of semiconductor photocatalysts makes it possible to highlight possible contaminants on the surface of the engineered photocatalyst (such as metal or non-metal groups bonded only on the external catalyst surface) and correlate them with photocatalytic activity. Contaminants on the surface can act as recombination centers for electron/hole scavengers, inducing a worsening of the photocatalytic activity.
The most important traditional technique used for the analysis of photocatalysts is UV-visible diffuse reflectance spectroscopy (UV-Vis DRS). UV-Vis DRS can analyze the light absorption properties of different materials. UV-Vis diffuse reflectance spectrophotometers provide data that are useful for the estimation of the band gap in semiconductors [58]. To this purpose, mathematical elaborations (e.g., Tauc plots) can be used to estimate the band-gap energy.
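A minimal numerical sketch of the Tauc-plot procedure mentioned above is given below in Python: (αhν)^(1/n) is fitted linearly near the absorption edge and extrapolated to zero to estimate the band gap. The synthetic spectrum, the choice n = 1/2 (direct allowed transition), and the fitting window are assumptions made purely for illustration.

```python
import numpy as np

def tauc_band_gap(energy_eV, alpha, n=0.5, fit_window=(3.3, 3.9)):
    """Estimate the optical band gap from a Tauc plot.

    (alpha * h*nu)^(1/n) is fitted linearly over fit_window (in eV) and
    extrapolated to zero; the energy-axis intercept is taken as E_g.
    n = 0.5 corresponds to a direct allowed transition.
    """
    y = (alpha * energy_eV) ** (1.0 / n)
    mask = (energy_eV >= fit_window[0]) & (energy_eV <= fit_window[1])
    slope, intercept = np.polyfit(energy_eV[mask], y[mask], 1)
    return -intercept / slope  # x-axis intercept = band-gap energy

# Synthetic absorption edge mimicking a ~3.2 eV (TiO2-like) direct gap
E = np.linspace(2.5, 4.0, 300)
alpha = np.sqrt(np.clip(5.0e4 * (E - 3.2), 0.0, None)) / E
print(f"Estimated E_g ≈ {tauc_band_gap(E, alpha):.2f} eV")
```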
Recently, more refined techniques have been used for photocatalyst characterization. For instance, electron paramagnetic resonance (EPR) can be used to check the possible formation of reactive oxygen species under irradiation. Additionally, EPR is extremely powerful for understanding the nature of photoactive defects [59]. Together with EPR analysis, density functional theory (DFT) calculations can give detailed information about the change in electronic structure induced by the doping of semiconductors in order to understand the effect of the interaction of semiconductors with a specified light source [60].
Finally, time-resolved photo-luminescence (TRPL) can also be used to assess the evolution of the photocatalyst luminescence spectrum as a function of time [61], making it possible to analyze the charge-carrier lifetime and dynamics within a particular system.
Antimicrobial Efficiencies
Antimicrobial efficiencies depend on numerous factors, including the type and dose of catalyst, the type of microbe, the intensity of radiation, the degree of hydroxylation, the pH, the temperature, and the exposure time. Data for different applications of photocatalysts are reported in Tables 1-3. Antimicrobial efficiency effects comprise a wide range of endpoints that can be estimated on a study-by-study basis, such as bactericidal, bacteriostatic, and antiviral effects. The most common approach is to verify the decrease in the initial microbial load after the treatment as a percentage of reduction. Most engineered photocatalysts are designed to be active under visible light when not directly under solar light, and only a few cases use UV [16,68,89,101] or actinic light (max wavelength at 420 nm) [95,100]. To improve visible photocatalytic activity and to minimize the recombination phenomena in the generated electron-holes, several modifications of TiO 2 using metals, non-metals, cations, and anions have been attempted [1,[62][63][64]66,72,76,85,88], creating heterostructures (Table 1), doped photocatalysts (Table 2) and polymer nanocomposites (Table 3). An emerging field of interest is the synthesis of TiO 2 nanotubes and their coupling with cations, metal-oxides, and additional composites, leading to a higher sensitization in the visible range [70,85].
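Because the studies summarized in Tables 1-3 report disinfection either as a percentage removal or as a log reduction, the following small Python sketch (with invented CFU counts) makes the conversion between the two conventions explicit:

```python
import math

def reduction_metrics(cfu_initial, cfu_final):
    """Return (percentage reduction, log10 reduction) for a microbial load."""
    percent = 100.0 * (cfu_initial - cfu_final) / cfu_initial
    log_reduction = math.log10(cfu_initial / cfu_final) if cfu_final > 0 else math.inf
    return percent, log_reduction

# Hypothetical E. coli counts before and after a photocatalytic treatment
percent, logs = reduction_metrics(cfu_initial=3e7, cfu_final=3e2)
print(f"{percent:.4f}% removal corresponds to a {logs:.0f}-log inactivation")
```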
The use of Zn to improve the ability of TiO 2 to work under solar and UV light has been widely exploited in heterostructures and doped materials (Tables 1 and 2). As reported in Table 2, Stoyanova et al. [106] observed that, after 20 min of photocatalytic treatment, 1 g/L of TiO 2 /ZnO made it possible to achieve 100% removal of 10 5 UFC/mL E. coli in the presence of UV light. Similar results were reported by Sethi et al. [1] (Table 2) after only 10 min of treatment using visible radiation with the lowest power compared to all other selected experimental studies. Wang et al. [107] reported that ZnCl 2 /TiO 2 and Zn(Ac) 2 /TiO 2 nanoparticles were more efficient in the removal of C. albicans than E. coli and S. aureus using visible light (Table 2).
Due to its high stability, abundance, and matching band position with TiO 2 , Fe 2 O 3 is one of the surface co-catalysts used to create heterostructures for the control of electron-hole pair recombination in semiconductor-based photocatalysts [83]. TiO 2 -Fe 2 O 3 nanocomposites proved to be an efficient photocatalyst in terms of the inactivation of E. coli (99% removal) under direct natural sunlight irradiation but required a significantly greater treatment time (up to 120 min) [80] than under UV light (30 min) [68].
Furthermore, several other visible light-driven (VLD) photocatalysts, including Bi-based [73,81,96], Ag-based [64,66,67], and C-based photocatalysts, have been recently investigated. In particular, bismuth oxyhalide (BiOI), a p-type semiconductor, showed the strongest visible light absorption due to its narrow band gap (1.7-1.8 eV) [66]. Heterostructures created by coupling TiO 2 with BiOI make it possible to achieve improved visible light catalytic behaviors. However, the mismatch in the band alignment between TiO 2 and BiOI limits the interfacial charge transfer. To overcome this limit, a recent study co-decorated TiO 2 /BiOI nanoparticles with Ag nanoparticles to obtain a more efficient photocatalyst with broad light absorption and efficient charge transfer [66]. As reported in Table 1, the complete removal of 3 × 10 7 UFC/mL E. coli could be achieved after 30 min with a 16 W visible light lamp.
Molybdenum disulfide (MoS 2 ), a p-type semiconductor, has also been exploited as a co-catalyst to expand the response range of TiO 2 to visible light and improve the efficiency of photogenerated charge separation. MoS 2 /TiO 2 nanotube arrays prepared by coupling MoS 2 with the n-type semiconductor TiO 2 resulted in the formation of a p-n heterojunction between MoS 2 and TiO 2 , making it possible to achieve a sterilization effect per unit area of MoS 2 nanotubes close to that of some powder dosage photocatalysts under visible light irradiation [70] (Table 2).
Graphitic carbon nitride (g-C 3 N 4 ) has emerged as an innovative visible-light photocatalyst for environmental applications [74,75]. g-C 3 N 4 modified with AgBr [71], V-TiO 2 [62], expanded perlite [74], and graphene [86] has shown strong antibacterial capacity against E. coli cells with visible light (Table 1), but no applications have been reported with sunlight. Recently, 2D engineered photocatalysts and their composites [110], including Ag- and graphene oxide (GO)-based composites, have gained much attention due to their effective antimicrobial activity. It has been demonstrated that the interaction between GO and plasmid DNA inhibits the amplification and transformation of aphA genes. Moreover, the inhibition increases with the decreasing size of the GO [111,112]. The heterostructures created by combining g-C 3 N 4 with graphene oxide (GO/g-C 3 N 4 ) could kill 97.9% of E. coli after 120 min of visible light irradiation at a concentration of 100 µg/mL (Table 1). It has been observed that Ag nanoparticles constitute an effective interfacial bridge between binary semiconductor nanocomposites. To date, various Ag-modified ternary photocatalysts, such as Ag/QDs/BiS 3 /SnIn 4 S 8 [69], AgI/AgBr/BiOBr 0.75 I 0.25 [73], and Ag-AgX/RGOs [78], have been developed, exhibiting improved photocatalytic performance under visible light irradiation, which is mainly related to the surface plasmon resonance (SPR) and Schottky effect of metallic Ag nanoparticles. Among these studies, the photocatalytic process proposed by Liang et al. [73] achieved the best results, with the lowest concentration of photocatalysts and the highest concentration of E. coli (3 × 10 7 UFC/mL), using a 300 W visible light lamp.
Photocatalyst Dose
There is no agreement in the literature regarding the influence of photocatalyst dose on process behavior, as it strictly depends on the form (particles, nanoparticles, film) and specific characteristics of the photocatalysts. According to Li et al. [67], the dosage of catalyst influenced the photocatalytic disinfection efficiency. An increase of the inactivation level for viruses from ~4.5 log to ~6 log was observed when increasing the photocatalyst concentration from 50 mg/L to 100 mg/L; a maximum value of ~8 log was achieved at a photocatalyst concentration of 150 mg/L after 360 min visible light illumination. On the other hand, by increasing the g-C 3 N 4 concentration up to 200 mg/L, a decrease in virus MS2 inactivation to 7.5 log could be observed. This result was predictable since the addition of a large amount of photocatalysts can lead to a great decrease in light penetration. Thus, an optimum dosage for photocatalysts is critical for process optimization.
Effect of pH
In tests of the photocatalytic disinfection activity of different photocatalysts towards pathogenic bacteria under various pH conditions, the cell density did not decrease significantly under neutral-acidic pH [71,81]. The antibacterial efficiencies of g-C 3 N 4 -AgBr were similar under neutral and slightly acidic conditions of pH 5-7. The acidic condition resulted in the release of Ag + ; however, its contribution to cell disinfection was estimated to be negligible due to the low concentration [71]. On the other hand, alkaline conditions enhanced the disinfection activities of g-C 3 N 4 -AgBr, making it possible to achieve the best performances at pH 8 and pH 9. The increasing solution pH did not induce change in the zeta potentials of g-C 3 N 4 -AgBr, while the zeta potentials of E. coli became slightly more negative at high pH. As expected, the electrostatic force between bacteria and g-C 3 N 4 -AgBr was more repulsive under alkaline conditions [71].
According to Zhang et al. [74], faster viral inactivation by g-C 3 N 4 /EP-520 could be observed with decreasing pH values. At the same reaction time, about 5 log of inactivation was observed with 180 min visible light irradiation at pH 9, while 8 log of inactivation was achieved at pH 5. Reduced electrostatic repulsion between MS2 and g-C 3 N 4 produced by the acidic pH was considered responsible for the change in viral inactivation. MS2 has an isoelectric point of 3.9 and was negatively charged at all pH levels. g-C 3 N 4 shows an isoelectric point of 5.0, and its overall negative charge decreased as pH decreased from 9 to 5, facilitating MS2/g-C 3 N 4 interaction.
Effect of Temperature
Few studies have investigated the effects of temperature. However, it is well-known that, by increasing the temperature, photocatalytic reaction activity is enhanced. Accordingly, Basu et al. [2] reported that, with the increase in reaction temperature, bacterial disinfection time decreased. Nevertheless, a detailed explanation of how temperature affects photocatalytic inactivation needs to be provided [113].
Target
Most studies to date have focused on the fecal indicator bacterium (FIB) E. coli. Only a few have investigated viral inactivation with visible light-active photocatalysts in water [74] (Table 1), and one study investigated the effect on fungi inactivation [91]. For instance, bacteriophage MS2, a widely used surrogate for waterborne pathogenic viruses due to their similar size, structure, and surface properties, was selected as a model virus in the study by Zhang et al. [74]. Viruses and fungi are more resistant than bacteria to conventional disinfection methods [111], and the results of bacterial disinfection cannot be translated to viral disinfection. The mechanism of the photocatalytic inactivation of viruses is still largely unknown [67]. Considering that real water systems usually contain consortia of different bacteria (e.g., Gram-positive and Gram-negative), it would be highly recommended to investigate photocatalysts' efficiency against other bacterial systems to achieve a complete evaluation of these processes.
Effect of Water Matrix
Zhang et al. [74] investigated the effect of the water matrix on disinfection during photocatalytic inactivation of MS2, reporting that the viral inactivation efficiency in real source water was lower than that in deionized water with 240 min visible light irradiation (3.7 vs. 8 log removal). The main reason for the reduced disinfection efficiency can be ascribed to the presence of natural water constituents; e.g., natural organic matter, which can be adsorbed on photocatalysts to prevent ROS generation or to consume generated ROS, acting as scavengers.
Role of Direct Contact
Process behavior is strongly affected by the direct contact between photocatalysts and bacterial cells. However, long-range disinfection activity that did not depend on direct contact has also been reported previously [81].
Microbial inactivation can be achieved by photocatalysis-mediated reactive oxygen species (ROS), which act on the cell wall. The ROS in intimate contact with bacteria induce the peroxidation of the polyunsaturated phospholipid component of the lipid membrane and promote the disruption of cell respiration to destroy bacteria [63]. However, microorganisms with a more complex cell wall structure, such as Gram-positive bacteria, are likely more resistant to ROS.
Influence of Light
The power of the lamps used in the selected experimental studies varied from 8 W [1] to 500 W [62]. However, a higher-power light source did not correspond to better process behavior, as it represents only one of the variables influencing the microorganism inactivation.
Synthesis Methods
All the preparation methods described in Section 3 require the use of solvents and/or corrosive chemicals. For this reason, despite these methods producing engineered photocatalysts with high activities, special attention should be paid to green and environmentally friendly synthesis in order to minimize the possible negative impact on the environment due to the low sustainability involved [114]. General advantages of chemical methods are easy surface functionalization and versatility in nanomorphology formation, which make it possible to enhance their potential uses in different environments.
Among the various sustainable and green synthesis routes, electrochemical methods show the following advantages: (i) use of chemical agents commonly employed in wet-chemical synthesis routes [115]; (ii) the crystal growth rate of particles can be easily tuned using deposition potentials, current densities, or salt concentrations [116]; (iii) doping elements, such as Cu, can be easily introduced into the semiconductor lattice [117].
In the field of electrochemical methods, a very interesting green preparation method could be based on sputtering techniques, which, moreover, offer the possibility of developing photocatalysts immobilized on a macroscopic support, thus avoiding the need to separate powder photocatalysts from the treated water [118]. Generally speaking, the sputtering method presents several advantages, such as coating uniformity over large areas, good control of morphological properties in the photocatalytic films, and lack of toxic or hazardous precursors [119]. Additionally, it has been extensively reported that the sputtering method is able to produce photocatalytic films that have higher durability compared to sol-gel techniques [120]. Moreover, reactive gases, such as oxygen or air, can be introduced into the process to react with the sputtered metal atoms, resulting in the formation of a photocatalytic film (Figure 3) [121]. As is possible to observe from Table 1, several photocatalysts have been prepared using this method.
Alternative green approaches are based on mechanochemistry methods [121], such as the milling route [122]. This method is based on the use of a milling vessel loaded with the milling media (such as balls) and reactants [123]. In some cases, additional chemicals are added to the milling mixture with the aim of minimizing particle agglomeration. Finally, the milled material is recovered after a certain treatment time with a certain milling frequency [123]. The milling route can be used for the synthesis of heterostructures as an alternative to hydrothermal or solvothermal methods.
Regrowth
Regrowth tests are necessary to provide further insight into the effect of disinfection processes on microorganisms' inactivation. None of the selected experimental studies performed regrowth tests, which would be required to verify the total inactivation of target microorganisms in the photocatalysis process instead of simply suppressing their growth and reproduction abilities.
Reusability of Photocatalysts
The reusability and stability of photocatalysts play significant roles in practical applications of disinfection. Feng et al. [81] reported that the bactericidal efficiencies of BiOBr-0.5AgBr were slightly decreased with the increase in reuse cycles. According to the studies by Shanmugam et al. [62], the g-C 3 N 4 -10% V-TiO 2 hybrid photocatalyst still showed outstanding photocatalytic stability after up to five cycles of reuse. Shi et al. [65] performed recycle experiments with CuBi 2 O 4 /Bi 2 MoO 6 , observing that the FT-IR and XRD analyses displayed almost no change in the crystal phase and transmission peaks over time, demonstrating that the photocatalyst still preserved high photocatalytic bactericidal activity towards E. coli. A decrease in the inactivation property was attributed to the loss of the photocatalyst during the recovery process.
Toxicity Evaluation
Traditional animal models and assays have been historically applied to determine the potential human and ecological hazards and risks of compounds through the evaluation of various endpoints (i.e., embryo lethality, reproductive and developmental toxicity, genotoxicity, carcinogenicity, neurotoxicity, etc.) [124][125][126][127][128][129]. As reported in Table 4, few studies have focused on the ecotoxicity of engineered photocatalysts so far, probably due to the scarce availability of standardized protocols. Moreover, they have coincided with antimicrobial applications in only a few cases (in Table 4, n.a. = not available; n.e. = no effect).
Chen et al. [130] explored the aquatic toxicity of water treated with silver phosphate (Ag 3 PO 4 ) photocatalyst against Chlorella vulgaris, observing a greater stimulatory effect on the growth of algae with respect to the control (algae exposed to untreated water). ZnO@ZnS-based photocatalysts displayed negligible effects on the viability, biomass, and photosynthetic pigments of Spirulina platensis microalgae [131]. Similarly, nitrogen-doped TiO 2 showed a reduction in toxicity in terms of Vibrio fischeri and Raphidocelis subcapitata growth and Daphnia magna survival after 300 min of wastewater (contaminated with various pharmaceuticals) treatment [132].
The toxicity of hydrogen (H 2 RGOTi)- and thermal (RGOTi)-reduced graphene oxide/TiO 2 has been investigated for zebrafish embryos, showing that H 2 RGOTi could be more ecofriendly than RGOTi [133] (see also Table 4). In fact, RGOTi was able to increase mortality (LC 50 = 0.7 g/L; Table 4) and the size of the eye, yolk, and pericardium, with consequent cardiac development damage [134]. Instead, the facet-dependent monoclinic scheelite BiVO 4 (m-BiVO 4 ) weakly affected the survival and the development of zebrafish embryos [134]. Recently, biochar functionalized with titanium dioxide (TiO 2 ) was evaluated for its effects on the survival, neurotoxicity, and energy metabolism of Mytilus galloprovincialis bivalves, showing effects comparable to those observed in the controls [135]. In 2021, an in vivo toxicity study of the effects of water treated with alumina/ZnO on female pathogen-free Balb/c mice revealed high bacteria disinfection and no impact on gut health [2]. In contrast, when exposing male Wistar rats to Fe 2 O 3 nanoparticles, Abhilash et al. [136] demonstrated that heart tissue and, consequently, the cardiovascular system suffered toxic damage. In the same manner, the cerium oxide/sulfide nanoparticles in the zeolite channels displayed a toxic impact on the number of white blood cells and hemoglobin level of rats [137]. Various studies have also been conducted on mammalian cell lines, showing negligible effects most of the time (see Table 4). El Nahrawy et al. [138] showed a negative effect toward skin cell lines in laryngeal carcinoma (Hep-2) after zinc titanate (Zn 2 TiO 4 ) exposure. In the same manner, silver nanoparticle-modified titanium (Ti-nAg) did not affect human gingival fibroblasts [139], whereas Ag nanoparticles@chitosan-TiO 2 showed low toxicity toward mammalian cells [109]. The TiO 2 :Cu nanocomposite showed beneficial effects on embryonic mouse fibroblast cells, with an enhancement of about 20% in cell viability [105]. Malankowska et al. [140] compared the sensitivity of two human cell lines (lung cells (A549) and liver cells (HepG2)) and one mouse cell line (embryo fibroblast cells (BALB/3T3)) to multicomponent (silver (Ag), gold (Au), platinum (Pt), and palladium (Pd)) TiO 2 -based photocatalysts, finding that the HepG2 and A549 cells were, respectively, the most and least sensitive among all the cell lines (see Table 4). Furthermore, oxygen-doped graphitic carbon nitride microspheres (O-g-C 3 N 4 ) and hydrogen-doped zinc oxide (ZnO(H)) displayed negligible cytotoxicity towards A549 cells [141,147]. Fe/Cr-doped CeO 2 NPs showed negative effects on the aneuploid immortal keratinocyte (HaCaT) cell line [142]. The potential cytotoxic effects of Fe-doped TiO 2 on human endothelial cells (HECVs), red blood cells, hemocytes of Mytilus galloprovincialis, and mouse macrophages (RAW 247) were evaluated, showing a decrease in cell viability only for HECV [143][144][145]. Cadmium-bismuth microspheres (CdS-Bi 2 S 3 ) exhibited high cytotoxicity activity against a human colon colorectal tumor (HCT 116) cell line, even at the lowest tested concentration (0.25 g/L; see Table 4) [146].
The paucity of data concerning ecotoxicological implications reported in these few studies does not permit definitive estimates of the types and degrees of toxicity generated by the engineered photocatalysts when they are released into aquatic environments [148][149][150] and whether their interaction with biota can induce potentially adverse effects at different biological levels [151][152][153]. As a result, the toxicological risk applies not only to aquatic species but also to human beings, who could be exposed to such products through marine food webs [154,155]. Moreover, when applied to real wastewater, the process can generate dangerous intermediates from the degradation of organic contaminants. Thus, further studies are necessary to elucidate the ecotoxicity of effluents and of innovative photocatalyst nanoparticles themselves.
Photoreactor Configurations
The photocatalysts used in powder form are characterized by a large surface area and are more uniformly mixed in the solution, showing excellent bactericidal effects (Tables 1-3). Nevertheless, photoreactors designed to use photocatalyst suspensions require high energy consumption and secondary filtration to separate the nanomaterials from water [156] (Figure 4). The need to use an immobilized catalyst rather than catalyst powder in slurries has therefore been pointed out by recent studies. Indeed, different supporting materials, such as glass, ceramics, activated carbon, and polymeric materials, have been investigated [156][157][158][159][160].
Regarding polymers, the immobilization of the nanostructures in these types of materials not only makes it possible to avoid the separation step after water treatment but also reduces the problems concerning ecotoxicity and the aggregation of nanomaterials. This is the case, for instance, when poly (methyl methacrylate) (PMMA) is used as a polymer matrix in the preparation of nanocomposites (Table 3). PMMA, a common thermoplastic material, is used in many applications due to its transparency to visible light, mechanical properties, and environmental stability; it is also an economical and hydrophobic polymer suitable for contact with food and beverages. PMMA is an excellent host for functional inorganic particles; in fact, various types of metal oxide fillers have been demonstrated to further improve its properties [14].
Song et al. [65], facing the challenge of developing ternary, highly active TiO 2 -based photocatalysts with a novel structural form, good operability, and easy recyclability, created a flexible and hierarchical heterostructured Ag/BiOI/TiO 2 nanofibrous membrane.
Electric Energy Consumption
Unfortunately, fabrication of engineered photocatalysts is complicated and expensive, limiting their mass production and engineering applications. To evaluate the possibility of designing a photocatalytic system working under visible light, it is necessary to consider the treatment times, the ability to remove toxicity, and the energy consumption required for the treatment. Despite several engineered photocatalysts being able to exploit visible light, leading to complete removal of bacteria concentrations, only a few of them can achieve this result in a reasonable time. By considering a treatment time of 10 min, three photocatalysts were selected for the evaluation of electric energy consumption using the EE/O value (a scale-up parameter for the removal of 90% of a pollutant contained in 1 m 3 of polluted water, expressed in kWh in European countries), according to the following equation (Azbar et al. [161] and Vaiano et al. [162]):

EE/O = (P × t × 1000)/(V × 60 × log(C 0 /C f ))

where P is the nominal power of the light source (kW), t is the irradiation time (minutes), V is the volume solution (L), C 0 is the E. coli initial concentration (UFC/mL), and C f is the E. coli final concentration. Assuming 10 min as the treatment time, we compared the EE/O values of two photocatalysts, ZnO/TiO 2 [1] and Cu/TiO 2 [111]. The calculation of electric energy consumption showed that the use of ZnO/ [111] under simulated sunlight. However, it must be considered that the electric energy consumption is strongly dependent on both the photoreactor configuration and photocatalyst composition. In fact, different results could be obtained by using lamps with the same power but different photocatalytic systems.
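As a worked illustration of the EE/O evaluation described above, here is a minimal Python sketch; the lamp powers, treated volume, and cell counts are placeholder values and do not reproduce the experimental conditions of Refs. [1,111].

```python
import math

def ee_per_order(power_kW, time_min, volume_L, c_initial, c_final):
    """EE/O = P * t * 1000 / (V * 60 * log10(C0/Cf)), in kWh per m^3 per order of removal."""
    return power_kW * time_min * 1000.0 / (volume_L * 60.0 * math.log10(c_initial / c_final))

# Placeholder comparison: an 8 W and a 300 W lamp, 10 min treatment of 100 mL,
# both assumed to achieve a 5-log E. coli reduction
for label, p_kw in [("8 W lamp", 0.008), ("300 W lamp", 0.300)]:
    print(f"{label}: EE/O ≈ {ee_per_order(p_kw, 10, 0.1, 1e5, 1e0):.1f} kWh/m^3/order")
```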
Conclusions
In the last decade, several studies have focused on the antimicrobial properties of engineered photocatalysts. Despite the promise of these materials, several issues related to their use still remain to be addressed:
• Toxicological and ecotoxicological aspects have not been fully investigated and should be carefully assessed before planning full-scale production;
• Greening production and minimizing the use of solvents should be considered essential for large-scale application;
• Pilot-scale plant experiments are necessary to carry out a realistic cost evaluation per unit volume;
• Regrowth and reuse have to be considered for a complete assessment of behaviors.
|
2022-08-20T15:12:15.815Z
|
2022-08-01T00:00:00.000
|
{
"year": 2022,
"sha1": "2eebbf704ad3050783fbdb7be9549befe5beb39c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-4991/12/16/2831/pdf?version=1660785806",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0056353f30c869c352797b3ff68ad51752d69dea",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
119196058
|
pes2o/s2orc
|
v3-fos-license
|
FLEX+DMFT approach to the $d$-wave superconducting phase diagram of the two-dimensional Hubbard model
The dynamical mean-field theory (DMFT) combined with the fluctuation exchange (FLEX) method, namely FLEX+DMFT, is an approach for correlated electron systems to incorporate both local and non-local long-range correlations in a self-consistent manner. We formulate FLEX+DMFT in a systematic way starting from a Luttinger-Ward functional, and apply it to study the $d$-wave superconductivity in the two-dimensional repulsive Hubbard model. The critical temperature ($T_c$) curve obtained in the FLEX+DMFT exhibits a dome structure as a function of the filling, which has not been clearly observed in the FLEX approach alone. We trace back the origin of the dome to the local vertex correction from DMFT that renders a filling dependence in the FLEX self-energy. We compare the results with those of GW+DMFT, where the $T_c$-dome structure is qualitatively reproduced due to the same vertex correction effect, but a crucial difference from FLEX+DMFT is that $T_c$ is always estimated below the N\'{e}el temperature in GW+DMFT. The single-particle spectral function obtained with FLEX+DMFT exhibits a double-peak structure as a precursor of the Hubbard bands at temperature above $T_c$.
I. INTRODUCTION
Despite a long history of physics of the high-T c cuprate, 1,2 we are still some way from a full understanding of the superconductivity. There is a general consensus that the supercurrent flows on each Cu-O plane, which can be modeled by the repulsive Hubbard model on the square lattice. There are actually two essential factors here: the repulsive Hubbard interaction can give rise to a pairing interaction in the d-wave channel mediated by antiferromagnetic spin fluctuations, 3 while the very same interaction also introduces Mott's metalinsulator transition 4 that hinders the superconductivity around half-filling for strong enough interactions. Capturing these two features simultaneously still remains a theoretically challenging task. As numerical methods for treating the strongly correlated electron systems, there are the exact diagonalization and quantum Monte Carlo (QMC) methods 5 that are exact within numerical errors, but the former can only deal with limited system sizes, while the latter suffers from the sign problem.
However, we do have theoretical methods that can deal with each of the d-wave pairing and Mott's transition separately: Namely, we have on one hand the fluctuation-exchange (FLEX) approximation, 6 one of the perturbative methods for many-body physics that can describe the spin-fluctuation mediated d-wave pairing. On the other hand, we have the dynamical mean-field theory (DMFT), [7][8][9] which can describe the Mott transition. To be more precise, the FLEX describes the momentum dependence of the effective pairing interaction mediated by the antiferromagnetic spin fluctuations, which is essential for the anisotropic pairing, 6 but the method, being perturbative, cannot describe the Mott transition in the regime close to half-filling. The DMFT, although mean-field theoretic, describes Mott's insulator in terms of the (non-perturbative) correlation effect that is local (i.e., momentum-independent) but dynamical (i.e., incorporating temporal fluctuations), and becomes exact in the limit of infinite spatial dimensions of a lattice model. 7 There are many extensions of DMFT to include momentum dependence of the self-energy. [10][11][12][13][14][15][16][17][18][19][20][21][22][23][24] One is the cluster extension of DMFT, 10,11 which is employed, e.g., for explaining the pseudogap in the cuprates as a momentum-selective Mott transition. 12,13 However, in practice it is quite hard in this scheme to attain large cluster sizes and to incorporate spatially long-ranged components in the self-energy in a strongly correlated regime. It is also computationally very demanding to treat the d-wave superconducting phase, or to extend to more complicated models such as multi-orbital systems with a large cluster size retained in the cluster DMFT.
More realistically, we have alternative and numerically feasible extensions of DMFT that combine DMFT with a certain resummation technique of nonlocal self-energy diagrams, such as GW+(E)DMFT, 14,15 DΓA, 16,17 and the dual-fermion approach. [18][19][20] These schemes can treat momentum-dependent self-energies describing nonlocal long-range correlations with some selected diagrams taken into account. This has motivated us to take the FLEX+DMFT method, 24,25 where nonlocal FLEX diagrams are considered on top of DMFT local diagrams for the self-energy. We have opted for a method that evokes FLEX among other diagrammatic methods, since we are interested in the d-wave superconductivity mediated by antiferromagnetic fluctuations, which can be explicitly treated with FLEX.
In the present paper, we extend the FLEX+DMFT method to deal with the d-wave superconductivity in the two-dimensional repulsive Hubbard model, while a FLEX+DMFT has been applied to the normal phases of the Hubbard model in Ref. 24. To this end, we construct the Luttinger-Ward functional for FLEX+DMFT, where double counting of local diagrams from FLEX and DMFT parts is unambiguously subtracted. Starting from the Luttinger-Ward functional formalism guarantees the conserved nature of DMFT (as well as FLEX) retained, which is not always the case with other diagrammatic extensions of DMFT. We then apply this FLEX+DMFT to the d-wave superconductivity in the two-dimensional repulsive Hubbard model to obtain the superconducting phase diagram.
We find that the FLEX+DMFT result exhibits a T c -dome structure of the superconducting phase diagram against band filling, which has not been observed in FLEX alone. We trace the origin of the dome to the local vertex correction from DMFT that renders a filling dependence in the FLEX self-energy. To elaborate this point, we compare this with the GW+DMFT method, in which only bubble diagrams are used to extend DMFT in considering a nonlocal self-energy correction, whereas both bubbles and ladders are included in FLEX+DMFT. The GW+DMFT result also exhibits a T c -dome structure, but, unlike the FLEX+DMFT result, T c in GW+DMFT is always below the Néel temperature, i.e., the antiferromagnetic order dominates over d-wave superconductivity for the whole filling range. We have also obtained the single-particle spectral function with the FLEX+DMFT, which exhibits a double-peak structure above T c with a precursor of the Hubbard bands.
While the present scheme does not consider vertex corrections to the nonlocal ladder diagrams, unlike the dual-fermion approach which has recently been applied 26 to superconductivity in the Hubbard model, an advantage of the present method is that we define the Luttinger-Ward framework, which enables us to treat the normal self-energy and the anomalous (d-wave) self-energy on an equal footing as derivatives of the same Luttinger-Ward functional.
II. FLEX+DMFT FUNCTIONAL
Let us formulate the FLEX+DMFT method by introducing a Luttinger-Ward functional Φ, 27 which basically consists of FLEX and DMFT diagrams. However, there is a double counting of local self-energy diagrams between the two contributions, which must be subtracted. We show that the double counting term is uniquely identified if one demands the conserving nature of the formalism. Namely, we regard each of DMFT and FLEX as an approximation for the exact Luttinger-Ward functional of the dressed Green's function G to propose a new functional, in a manner similar to the GW+(E)DMFT scheme. 14,15 In DMFT, the approximate functional, Φ DMFT , is the sum of all types of the ring diagrams that only contain the local Green's function G loc . On the other hand, the approximate functional in FLEX, Φ FLEX , is the sum of specific (bubble and ladder) diagrams as shown in Fig. 1(a), which basically correspond to spin and charge fluctuations.
Then we can propose a functional in the FLEX+DMFT scheme as

\Phi_{\rm FLEX+DMFT}[G] = \Phi_{\rm DMFT}[G_{\rm loc}] + \Phi_{\rm FLEX}[G] - \Phi^{\rm local}_{\rm FLEX}[G_{\rm loc}],   (1)

where we have subtracted the local part of the FLEX functional Φ local FLEX [G loc ] with G loc = (1/N k ) Σ k G(k) (N k : number of k points) to avoid the double counting. Since both Φ DMFT and Φ FLEX are expressed as functionals of dressed Green's functions, the overlap between the two is uniquely determined as a set of diagrams in the Φ FLEX [G] that only contain local dressed Green's functions, which is nothing but Φ local FLEX [G loc ]. We then obtain the self-energy in this scheme as a functional derivative,

\Sigma(k) = \frac{\delta \Phi_{\rm FLEX+DMFT}}{\delta G(k)} = \Sigma^{\rm nonloc}_{\rm FLEX}(k) + \Sigma_{\rm imp}(i\omega_n),   (2)

\Sigma^{\rm nonloc}_{\rm FLEX}(k) \equiv \Sigma_{\rm FLEX}(k) - \Sigma^{\rm loc}_{\rm FLEX}(i\omega_n).   (3)

This way we retain the conserving nature of the approximation. 28,29 The first term in the last line of Eq. (2), Σ nonloc FLEX ,
is the difference between the FLEX self-energy constructed from the lattice Green's function G and that from the local Green's function G loc . Note that Σ nonloc FLEX contains some contributions from local parts of the self-energy, i.e., Σ nonloc FLEX,ii ≠ 0 (i: label of lattice sites). For example, Σ nonloc FLEX,ii contains a diagram of the type displayed on the left-hand side of Fig. 1(b), whereas the diagram shown on the right-hand side of Fig. 1(b) does not belong to Σ nonloc FLEX,ii . The self-consistency loop, which has to be a double loop in the present combined scheme, is depicted in Fig. 1(c): To start with, we define the DMFT mapping of a lattice model to an impurity model in such a way that Green's function of the mapped impurity model, G imp , coincides with the local Green's function for the original lattice model, G loc . The local self-energy Σ imp is calculated in the DMFT part of the self-consistency loop. The nonlocal part of the self-energy Σ nonloc FLEX is then calculated in the FLEX loop. We combine both Σ imp and Σ nonloc FLEX to obtain the full self-energy, from which we construct new (full and local) Green's functions. We update each of them (Σ imp , Σ nonloc FLEX ) alternately by using the corresponding loops until the whole loop [Fig. 1(c)] converges. The present scheme may be viewed as a new diagrammatic extension of the DMFT that incorporates vertex corrections into the (local part of the) FLEX scheme. FLEX itself, being a perturbative method, is considered to become exact in the weak-coupling limit, while DMFT becomes exact in the atomic limit. Since the FLEX+DMFT formalism here incorporates the functionals that dominate in either limit, it is expected to describe spin fluctuation effects and Mott's physics simultaneously.
III. APPLICATION TO THE 2D HUBBARD MODEL
Let us apply the FLEX+DMFT method to the repulsive Hubbard model on the square lattice, with the Hamiltonian

H = Σ_{k,σ} ε(k) c†_{k,σ} c_{k,σ} + U Σ_i n_{i,↑} n_{i,↓}.   (4)

Here c†_{k,σ} creates an electron in a Bloch state with wavevector k = (k_x, k_y) and spin σ, U is the on-site repulsion, and n_{i,σ} = c†_{i,σ} c_{i,σ} is the number operator. The two-dimensional band dispersion is given as

ε(k) = −2t (cos k_x + cos k_y) − 4t′ cos k_x cos k_y − 2t″ (cos 2k_x + cos 2k_y) − µ,   (5)

where t, t′, and t″ represent the nearest-neighbor, second-neighbor, and third-neighbor hoppings, respectively, while µ is the chemical potential. We shall compare the case with nearest-neighbor hopping only (t′/t = t″/t = 0) with the case of t′/t = −0.20, t″/t = 0.16, which are the values estimated with first-principles methods31,32 for a typical hole-doped, single-layered cuprate, HgBa2CuO4+δ with T_c ≃ 90 K. Hereafter we take |t| as the unit of energy.
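As a quick illustration, a few lines of Python (a sketch; the k-mesh size and the convention of absorbing µ into ε(k) are assumptions consistent with the form written above) evaluate this dispersion for the cuprate-like parameter set:

```python
import numpy as np

t, tp, tpp, mu = 1.0, -0.20, 0.16, 0.0   # |t| as the unit of energy; mu = 0 is a placeholder

def dispersion(kx, ky):
    """Square-lattice band with first-, second- and third-neighbor hoppings."""
    return (-2 * t * (np.cos(kx) + np.cos(ky))
            - 4 * tp * np.cos(kx) * np.cos(ky)
            - 2 * tpp * (np.cos(2 * kx) + np.cos(2 * ky))
            - mu)

k = np.linspace(-np.pi, np.pi, 64)
kx, ky = np.meshgrid(k, k, indexing="ij")
band = dispersion(kx, ky)
print("bandwidth:", band.min(), "to", band.max())
```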
In the single-band Hubbard model, the FLEX self-energy is computed as

Σ_FLEX(k) = (1/(β N_k)) Σ_q V(q) G(k − q),   (6)

with the effective interaction V(q) = U² [ (3/2) χ_s(q) + (1/2) χ_c(q) − χ_0(q) ], χ_s(q) = χ_0(q)/[1 − U χ_0(q)] and χ_c(q) = χ_0(q)/[1 + U χ_0(q)], where β is the inverse temperature, k = (ω_n, k) with ω_n the fermionic Matsubara frequency, G(k) is the Green's function, and

χ_0(q) = −(1/(β N_k)) Σ_k G(k + q) G(k)   (7)

is the irreducible susceptibility. We can calculate Σ^loc_FLEX by replacing G with G_loc in Eqs. (6) and (7).
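The irreducible susceptibility is also the quantity that controls the antiferromagnetic instability discussed later through the Stoner-like measure max_q Uχ_0(q). The sketch below shows how χ_0 at zero bosonic frequency can be evaluated; it uses the bare (rather than the dressed) Green's function, a coarse k-mesh and a finite Matsubara cutoff purely for illustration, and the chemical potential value is an arbitrary assumption, so the numbers are not those of the paper.

```python
import numpy as np

t, tp, tpp, mu, U, beta = 1.0, -0.20, 0.16, -0.8, 4.0, 20.0   # mu = -0.8: an arbitrary hole-doped choice
nk, n_iw = 16, 256
k = 2 * np.pi * np.arange(nk) / nk
kx, ky = np.meshgrid(k, k, indexing="ij")
eps = (-2 * t * (np.cos(kx) + np.cos(ky)) - 4 * tp * np.cos(kx) * np.cos(ky)
       - 2 * tpp * (np.cos(2 * kx) + np.cos(2 * ky)) - mu)
iw = 1j * np.pi / beta * (2 * np.arange(-n_iw, n_iw) + 1)      # fermionic Matsubara frequencies
G0 = 1.0 / (iw[None, None, :] - eps[:, :, None])               # bare Green's function G0(k, iw)

# chi0(q, i*nu = 0) = -(1/(beta*Nk)) sum_{k, iw} G0(k+q, iw) G0(k, iw)
chi0 = np.empty((nk, nk))
for qx in range(nk):
    for qy in range(nk):
        Gshift = np.roll(np.roll(G0, -qx, axis=0), -qy, axis=1)   # G0(k+q, iw) on the periodic grid
        chi0[qx, qy] = -np.sum(Gshift * G0).real / (beta * nk * nk)

print("max_q U*chi0(q, 0) =", U * chi0.max())   # the AF boundary criterion uses a threshold such as 0.99
```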
To obtain Σ_imp in the DMFT procedure, we have to solve the impurity problem in DMFT. Among various impurity solvers, here we adopt the modified iterative perturbation theory (modified IPT), where the original IPT is modified for systems without particle-hole symmetry.33 The method is not computationally demanding, which facilitates scanning over a wide parameter region to obtain the phase diagram, and it also enables us to approach the region with large antiferromagnetic fluctuations where the FLEX convergence critically slows down. We have confirmed for various parameter values that the continuous-time quantum Monte Carlo impurity solver34,35 implemented with the ALPS library36,37 gives similar values for the eigenvalue of Eliashberg's equation even away from half-filling.
When the Green's function is obtained, we plug it into the linearized Eliashberg equation,

λ ∆(k) = −(1/(β N_k)) Σ_{k′} V_pair(k − k′) G(k′) G(−k′) ∆(k′).   (8)

Here ∆(k) is the anomalous self-energy, which is the gap function up to the renormalization factor, and

V_pair(q) = U² [ (3/2) χ_s(q) − (1/2) χ_c(q) ] + U

is the effective pairing interaction (Fig. 2), where λ is the eigenvalue of Eliashberg's equation, with the superconducting transition identified as the temperature at which λ = 1. At this point we should comment on the consistency between the approximate functional form Φ_FLEX+DMFT and the linearized Eliashberg equation, Eq. (8). The Luttinger-Ward functional can be extended to incorporate the anomalous part, and the extended functional Φ[G, F†, F] is related to the anomalous self-energy through ∆ = δΦ/δF†, where F† is the anomalous Green's function. In this extended framework we should consider the local correction to the anomalous self-energy ∆ as ∆_FLEX+DMFT = ∆_FLEX + ∆_loc, as in Eq. (2) for the normal self-energy. Now, our interest here is the anisotropic, d-wave pairing instability in the repulsive model, for which we can ignore the local correction ∆_loc, which does not depend on momentum. The remaining term ∆_FLEX = δΦ_FLEX[G, F†, F]/δF† is the same as the right-hand side of the linearized Eliashberg equation (8) if we linearize the anomalous part.38 Then our formalism treats the normal and anomalous self-energies consistently, as functional derivatives of the same Luttinger-Ward functional Φ_FLEX+DMFT. This is an advantage of using the Luttinger-Ward functional formalism in constructing a new scheme.
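Operationally, λ is the leading eigenvalue of the linear map on the right-hand side of Eq. (8); for large grids it is found by power iteration, while a small problem can simply be diagonalized. The toy example below illustrates only that numerical step: the pairing kernel is replaced by a separable antiferromagnetic-like form peaked at q = (π, π) rather than the full FLEX interaction, and the particle-particle bubble is approximated by a BCS-like weight, so it demonstrates the procedure (and the d-wave-shaped leading eigenvector typical of such a kernel) rather than reproducing any number of the paper.

```python
import numpy as np

nk = 16
k = 2 * np.pi * np.arange(nk) / nk
kx, ky = np.meshgrid(k, k, indexing="ij")
eps = -2 * (np.cos(kx) + np.cos(ky)) + 0.8                         # toy band, shifted to mimic hole doping

# Toy ingredients (assumptions): AF-like kernel peaked at (pi, pi) and a positive pair-bubble weight
kappa2, V0, T = 0.5, 2.0, 0.1
weight = np.tanh(np.abs(eps) / (2 * T)) / (2 * np.abs(eps) + 1e-12)

def Vpair(qx, qy):
    return V0 / (kappa2 + 2 + np.cos(qx) + np.cos(qy))

# Gap-equation matrix  A[k, k'] = -(1/Nk) V(k - k') * weight(k')
KX, KY = kx.reshape(-1), ky.reshape(-1)
A = -(1.0 / (nk * nk)) * Vpair(KX[:, None] - KX[None, :], KY[:, None] - KY[None, :]) * weight.reshape(-1)[None, :]

evals, evecs = np.linalg.eig(A)
i = np.argmax(evals.real)                        # leading (most positive) eigenvalue plays the role of lambda
gap = evecs[:, i].real.reshape(nk, nk)
overlap = np.sum(gap * (np.cos(kx) - np.cos(ky)))
print("lambda =", evals[i].real, "| overlap with cos kx - cos ky:", overlap)
```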
IV. RESULTS
We show the superconducting phase diagram of the two-dimensional Hubbard model obtained in the FLEX+DMFT in Fig. 3, right panels, where the FLEX result is also displayed in the left panels for comparison. We can immediately see that T_c exhibits a dome structure in the FLEX+DMFT. This sharply contrasts with the FLEX result, where T_c has been known to increase almost monotonically toward half-filling with some rounding off.39 The presence of the T_c dome in the FLEX+DMFT and its absence in the FLEX are seen both for the simple square lattice with t′ = t″ = 0 [Fig. 3(a)] and for the case of t′ = −0.20, t″ = 0.16 [Fig. 3(b)]. For the simple square lattice (t′ = t″ = 0), we cannot approach a region very close to half-filling because the antiferromagnetic (AF) fluctuations prevent the FLEX self-consistency loop from converging. For the same reason, it is difficult to attain convergence for systems with larger U. As a measure of the AF order, we evaluated the AF phase boundaries (dashed lines in Fig. 3) determined from max_k[U χ_0(k)] (= 0.99 here),40 which is usually adopted in FLEX-type schemes to take account of the quasi-two-dimensional nature (e.g., in three-dimensional layered systems), although FLEX-type approaches are known to obey the Mermin-Wagner theorem that forbids finite-temperature AF phase transitions in an isolated two-dimensional system. The estimated AF transition temperature T_AF becomes higher than the superconducting T_c as one approaches half-filling, as shown in Fig. 3, where the color-shaded region indicates the superconducting phase with T_c > T_AF (i.e., superconductivity dominating antiferromagnetism). The result suggests that a part of the T_c dome is taken over by the AF phase in the cases of t′ = t″ = 0 [Fig. 3(a), right] and t′ = −0.20, t″ = 0.16, U = 5 [Fig. 3(b), right]. For a smaller U = 4, by contrast, we have an almost full T_c dome with T_c > T_AF for t′ = −0.20, t″ = 0.16 [Fig. 3(b), right]. This is a key result of the present work. Now let us identify the physical origin of the appearance of the T_c dome in the FLEX+DMFT. In FLEX+DMFT, the self-energy is obtained from the FLEX and DMFT self-energies as Σ_FLEX+DMFT = Σ_FLEX − (Σ^loc_FLEX − Σ_imp) [Eqs. (2) and (3)], i.e., a part of the local self-energy is replaced from that in FLEX with that in DMFT. Thus the quantity Σ^loc_FLEX − Σ_imp represents the difference in the self-energy effect between FLEX and FLEX+DMFT. We can actually take a look at Σ^loc_FLEX and Σ_imp for fillings n = 0.7 (underdoped), 0.88 (optimally doped), and 1.0 (half-filled) in Fig. 4 [the parameters are taken to be U = 4.0, β = 20, t′ = −0.20, t″ = 0.16, which corresponds to Fig. 3(b), right panel]. We first notice that the magnitude of the DMFT self-energy Σ_imp is smaller than that of the FLEX Σ^loc_FLEX, which means that the overestimation of the self-energy generally known to exist in FLEX is remedied in FLEX+DMFT by the DMFT (local) vertex corrections. More importantly, we can see that the difference, ImΣ^loc_FLEX − ImΣ_imp [Fig. 4(b)], has a clear filling dependence: it increases with doping. Since this difference is precisely the amount by which the local self-energy is corrected when FLEX is combined with DMFT, the result indicates that the reduction of the FLEX+DMFT self-energy due to the DMFT correction diminishes as one approaches half-filling. Thus T_c tends to be suppressed near half-filling as compared to that of FLEX because of the filling-dependent self-energy reduction in FLEX+DMFT. On the other hand, the pairing interaction itself, arising from spin fluctuations, becomes stronger toward half-filling due to better band nesting, as reflected in the FLEX result [Fig. 3, left panels] with T_c increasing almost monotonically toward half-filling. Therefore, FLEX+DMFT contains two factors with opposite filling dependencies, and we conclude that the T_c dome in FLEX+DMFT arises from the combined effect of the nesting and the filling-dependent self-energy reduction. In the FLEX+DMFT scheme, the self-energy reduction from DMFT takes place only in the local part, while the nonlocal self-energy is still considered to be overestimated, especially for the ladder diagrams.24 To examine this effect, we compare the present method with GW+DMFT, where only the bubble diagrams are considered for the self-energy and the pairing interaction (whereas both bubbles and ladders are included in FLEX and FLEX+DMFT). We show the GW+DMFT phase diagram, along with the GW result for comparison, in Fig. 5 for t′ = −0.20 and t″ = 0.16. We can see that, although the T_c dome structure remains in the GW+DMFT result, T_c is much reduced from the result of FLEX+DMFT. On the other hand, the AF transition temperature is much higher in GW+DMFT than in FLEX+DMFT. This makes the region of the dome where T_c > T_AF [highlighted with color shading in Fig. 5(a)] very narrow in GW+DMFT. In fact, for t′ = t″ = 0 the AF instability becomes so strong that we cannot even obtain superconducting phase boundaries in the whole region of fillings considered.
In Fig. 5(b), we display the GW local self-energy ImΣ^loc_GW as compared with the DMFT self-energy ImΣ_imp for the fillings n = 0.70, 0.88, and 1.0 with U = 4.0 and β = 50. We can see that the filling dependence is similar to that in FLEX+DMFT, in that the difference between the two self-energies increases with doping. Hence we can conclude that the existence of the T_c dome is not an artifact of FLEX+DMFT but is robust in both FLEX+DMFT and GW+DMFT, arising from the same local vertex-correction effect. The overestimation of the nonlocal self-energy thus does not affect the existence of the T_c dome itself.
The reason why the magnitude of T_c is much smaller in GW+DMFT than in FLEX+DMFT is that the ladder diagrams describing spin fluctuations are not taken into account in GW+DMFT. In this sense, GW+DMFT is closer to a mean-field theory than FLEX+DMFT, which is also reflected in the higher AF transition temperature in GW+DMFT. Concomitantly, the pairing interaction mediated by spin fluctuations is reduced, which acts to suppress the superconducting T_c in GW+DMFT relative to FLEX+DMFT. The fact that T_c is always estimated to lie below the Néel temperature in GW+DMFT suggests that GW+DMFT underestimates the spin-fluctuation effect and is not sufficient to describe the d-wave superconductivity mediated by spin fluctuations. Since the overestimated nonlocal self-energy in FLEX+DMFT is remedied in GW+DMFT, an accurate estimation of the spin-fluctuation effect is expected to lie between GW+DMFT and FLEX+DMFT. To see the strength of the local correlation, we measure the double occupancy, D = ⟨n_{i,↑} n_{i,↓}⟩. We can see in Fig. 6 that the double occupancy becomes negative in the overdoped regime in FLEX, while this unphysical behavior is improved in FLEX+DMFT. We can regard this as another manifestation of the self-energy reduction effect: FLEX overestimates the correlation effect, while this is corrected by combining it with the DMFT. A similar tendency is observed between GW and GW+DMFT, but the difference in the double occupancy is smaller. This should be because the self-energy reduction effect is smaller [see Figs. 4(a) and 5(b)]. Let us finally examine the spectral function, which is calculated via analytical continuation with the Padé approximation. The results for FLEX+DMFT, FLEX, GW, GW+DMFT, and DMFT at various fillings are shown in Fig. 7. We can see that the filling dependence is similar among FLEX, GW, and DMFT in that we have a single peak that slightly shifts and broadens as we approach half-filling. By contrast, in the methods that combine DMFT with either FLEX or GW, the spectral function acquires a stronger filling dependence, where a marked double peak is observed at half-filling. Similar double-peak structures have been reported in the dual-fermion method21 as an antiferromagnetic pseudogap, where the appearance of the double peak is consistent with the QMC result. Thus we can see that the interplay of the local and nonlocal long-range correlation effects in FLEX+DMFT and GW+DMFT gives the double peak, which can be considered a precursor of the Hubbard bands, with two peaks separated by about U while the system remains metallic. If we look more closely at the momentum-resolved spectral function A(k, ω), we observe that there is a region in k space near the Fermi energy where the spectral weight becomes slightly negative. This might not be specific to the present method, since many extensions of DMFT do not guarantee positive-definite spectral weights.20,22 Since the magnitude of the negative part is negligibly small (< 1%) in the present case and it tends to occur in the overdoped regime, it does not affect the phase diagram and the density of states [Figs. 3 and 7] in the underdoped regime.
V. SUMMARY AND DISCUSSIONS
We have employed the FLEX+DMFT approach, formulated in terms of a new Luttinger-Ward functional, to study superconductivity in correlated electron systems. This scheme is a diagrammatic extension of the DMFT, so that it can describe d-wave superconductivity arising from a k-dependent pairing interaction. The scheme, being formulated in terms of the Luttinger-Ward functional, also has the virtue that the normal and anomalous self-energies are treated on an equal footing. We have applied the FLEX+DMFT to the repulsive Hubbard model on the square lattice. We have found that FLEX+DMFT describes a T_c dome structure, whose physical origin is traced back to a combination of opposite effects: the self-energy effect introduced in the FLEX+DMFT suppresses the superconductivity more strongly toward half-filling due to the local-correlation effect, while spin fluctuations become stronger toward half-filling due to band nesting. We have also compared the FLEX+DMFT result with the GW+DMFT result, which also reproduces the dome structure. This indicates that the dome is not an artifact of the overestimated nonlocal self-energy in FLEX+DMFT.
Another observation is that there is a Pomeranchuk instability toward electronic states with broken tetragonal symmetry in the case of t′ = −0.20, t″ = 0.16 in both FLEX+DMFT and GW+DMFT. In this case, solutions with a four-fold rotationally symmetric Fermi surface become unstable, and we end up with a solution that breaks this symmetry when we start the calculation from an asymmetric initial input. While this instability is interesting in its own right, we have concentrated on the symmetric case in this study and leave the analysis of the Pomeranchuk instability to another publication.
In order to improve the scheme so as to suppress the overestimation of the nonlocal FLEX self-energy, we should consider the screening effect in the FLEX self-energy. For example, the two-particle self-consistent method41 takes account of vertex-correction effects by considering the sum rule for the susceptibility, while a similar technique is also used to reduce the overestimated spin fluctuations and their effect on the self-energy in DΓA.17 We expect these techniques to bring some improvement to the present theory.
NITRATE LEVELS AND STAGES OF GROWTH IN HYPERNODULATING MUTANTS OF LUPINUS ALBUS. II. ENZYMATIC ACTIVITY AND TRANSPORT OF N IN THE XYLEM SAP
The study of enzymatic activity and of the transport of N in the xylem sap was carried out with a view to observing the influence of different nitrate levels and growth stages of the plant in chemically treated mutants of Lupinus albus. Several stresses induce a reduction in plant growth, resulting in the accumulation of free amino acids, amides or ureides, not only in the shoot, but also in the roots and nodules. Although enzyme activity is decisive in avoiding the inhibition of nitrogenase by ammonium, little is known about the mechanism by which the xylem carries these products. However, this process may be the key to avoiding the accumulation of amino acids in the cells of infected nodules. The behaviour of the enzymes nitrate reductase (NR), phosphoenolpyruvate carboxylase (PEPC) and glutamine synthetase (GS), and of nitrogen compounds derived from fixation, such as N-α-amino, N-ureides and N-amide, was observed in the mutant genotypes. NR activity was strongly influenced by the application of nitrate, showing much higher values than in the absence of nitrate, independently of genotype; for NR, the best evaluation period was the tenth week. The L-62 genotype, characterized by nitrate resistance, clearly showed that the enzyme PEPC is inhibited by the presence of nitrate. The L-135 genotype (nod- fix-) showed extremely low GS activity, thus demonstrating that GS is an enzyme highly correlated with fixation. With regard to the best growth stage for GS, Lupinus albus should be evaluated in the seventh week.
INTRODUCTION
Stresses inducing a reduction in plant growth result in the accumulation of free amino acids, amides or ureides in the shoot, root and nodules, which may be responsible for the regulation of nodulation and nitrogenase activity through a system of regeneration (19). The same author reports that the ammonium produced by nitrogenase in symbiosis with legumes is released into the cytosol of the host cell, where it is incorporated into amino acids and amides. It is for this reason that GS/GOGAT enzyme activity is decisive in avoiding the inhibition of nitrogenase by ammonium. Although no information is available as to the mechanism by which the xylem carries amides or ureides, this process may be the key to avoiding the accumulation of amino acids in the cells of infected nodules.
On studying the reduction of nitrate in Rhizobium sp., Serrano and Chamber (20) observed that this includes dissimilatory and defective processes, besides the assimilatory reduction. Alcantar-Gonzales et al. (2) reported an increase in the reduction of nitrate in nitrate reductase (NR+) strains, and that this generally occurs with a decrease in acetylene reduction activity. In bacteroids of some B. japonicum strains with a high level of constitutive NR, no reduction of nitrate was observed in nodules, owing to this anion not having access to the bacteroid zone (12). Silsbury et al. (22) showed that nitrate reductase and the fixation of N2 work in a complementary way by supplying reductive sources of nitrogen in the plant, consequently constituting a regulatory system involving the level of soluble N in the plant. Temporary treatments with high nitrate inhibit the acetylene reduction activity without any relation to nitrate reductase activity in bacteroids of B. japonicum.
There is a report in which a partially nitrate reductase-deficient mutant of Pisum sativum (L.) was less susceptible to the influence of nitrate application on symbiotic N2 fixation than the wild type (11). In contrast, Ryan et al. (17) reported that a nitrate reductase mutant did not show improved nodulation compared to the wild type, and these results support the suggestion that the metabolism of nitrate was involved. The results of Burity et al. (7) suggest that Lupinus mutabilis mutants have a greater capability of assimilating symbiotically fixed N when a greater carbohydrate supply is available, and that the partial tolerance to nitrate demonstrated by some mutants is apparently associated with the hypernodulated phenotype. Gibson and Harper (13) described another type of pea mutant, whose nodulation demonstrated a greater tolerance to nitrate, although it had normal nitrate reductase. These observations indicate that the adverse effect of nitrate on nodulation and N2 fixation can be overcome by other mechanisms, such as a limited carbon supply to the nodule or cultivars with altered nitrate metabolism associated with hypernodulating characteristics.
Concerning PEP carboxylase, Vance and Heichel (28) propose that the reductive fermentative pathway in the cytosol of nodules involves its synthesis, and that this enzyme is also inhibited by nitrate. Streeter (25) noted the possibility that nitrate reductase activity may increase GS, and it is already known that glutamine absorbed by bacteroids controls nitrogenase activity. According to Milic et al. (16), studying the symbiosis of soybean with Bradyrhizobium japonicum, the activity of the glutamine synthetase (GS) enzyme in the plants is correlated with nitrogen fixation for the different varieties studied.
With respect to ureides, according to Atkins et al. (3), they are formed by the oxidation of the purine bases xanthine and hypoxanthine, which return as derivatives for a new synthesis of purine nucleotides. The application of allopurinol (AP), which has a structure similar to hypoxanthine, to nodulated roots results in rapid inhibition of the activity of xanthine dehydrogenase (XDH) in the nodules. While isolated bacteroids from inhibited nodules showed nitrogenase activity rates differing only slightly from the control, the direct application of AP (or xanthine) to the isolated bacteroids had no effect (4). These data indicate that the effect of AP on nitrogenase was indirect, a consequence of interference with certain processes essential for the functioning of nitrogenase that are located outside the bacteroid. In the same work it was observed that the production of H2 was inhibited after 1 or 2 hours, whilst the accumulation of purines and the inhibition of the synthesis of ureides in the nodules were detected after 1 hour. There are several pathways by which the synthesis of ureides and nitrogenase can interact: one possibility is that intermediates (purines and ureides) serve to regulate or aid the respiratory reactions that support nitrogenase, and a second is that the respiratory substrates used by the bacteroids depend on ureide synthesis for their formation. From the disappearance of the relation between the abundance of ureides and nitrogen fixation, which corresponds to the beginning of grain formation and the remobilization of these compounds at maturity in early genotypes, Aveline et al. (1) suggested that the interference of ureide synthesis derived from other products during senescence, or even the release of these compounds from some source pool, can explain the lack of correlation in later genotypes. Furthermore, research on measuring the origin of ureides at different growth stages should be carried out.
The aim of the present work was to observe enzymatic activity and the transport of N in the xylem sap during the growth of Lupinus albus cv. Multolupa (standard), two hypernodulating mutants (L-280 and L-88), one nitrate-resistant mutant (L-62) and one inefficient mutant (L-135), all inoculated with Bradyrhizobium sp. (Lupinus).
MATERIALS AND METHODS
Lupinus albus plant material was used; according to previous work by C.I.D.A. (Centro de Investigación y Desarrollo Agrario, Sevilla, Spain), the main characteristics of the cv. Multolupa mutants used in the study were: L-280 nod+ fix+; L-88 nod+ fix+; L-62 nod-, resistant to NO3-; and L-135 inefficient (nod- fix-). The planting methodology, procedure and statistical guidelines are cited in Burity et al. (part I).
The number and fresh weight of nodules were determined, and 1.5 g of nodules from each treatment were homogenized under an N2 stream at 4°C in a phosphate buffer (24). The homogenate was passed through a cheesecloth layer and the suspension was collected in tubes, which were then centrifuged at 200g for 20 min. The supernatants obtained were centrifuged at 8,000g for 20 min at 4°C to separate bacteroids from cytosol. Soluble protein was measured according to Goa (14), and samples of the cytosol suspension were analyzed for nodule glutamine synthetase (GS) (EC 6.3.1.2) and phosphoenolpyruvate carboxylase (PEPC) (EC 4.1.1.31) activities. Nodule GS activity was determined using the ADT-transferase reaction that measures the formation of γ-glutamylhydroxamate (21), while nodule PEPC activity was determined according to the method described by Briand et al. (6). Nitrate reductase (NR) activity in the nodule cytosol was assayed spectrophotometrically (540 nm) according to Sanchez and Heldt (18).
The nitrogen compounds derived from fixation determined in the sap were: N-ureides (allantoic acid and allantoin), estimated by glyoxylate hydrolysis (29); N-α-amino, using the modified method described by Matheson et al. (15), with the hydrindantin reagent prepared in accordance with Connel et al. (10); and N-amide, measured through glutamine (26), in which the amide was estimated after hydrolysis.
RESULTS AND DISCUSSION
Analysis of Table 1 shows significant differences both for the nitrate levels applied and for the evaluation periods; the highest mean nitrate reductase (NR) activity among genotypes, 9.04 µmoles of NO3·h-1·mg-1 protein at the 5 mM level, was almost double that of the treatment without nitrate. Despite there being no significant differences among genotypes, L-135 showed the highest activity (11.17) relative to the other genotypes.
Phosphoenolpyruvate carboxylase (PEPC) activity (Table 2) did not show significant differences, either between the 0 and 5 mM nitrate concentrations or between the 7- and 10-week evaluation periods. Under nitrate application, the highest activity of this enzyme, 41.40 µmoles of PEPC·h-1·mg-1 protein, was recorded, whilst the L-62, a nitrate-resistant genotype, showed the lowest value (15.55 µmoles). When nitrate was not applied, the activity of PEPC did not differ among genotypes. The behaviour of the L-62, which showed the highest PEPC activity when no nitrate was supplied, corroborates the results obtained by Vance and Heichel (28) and also resembles those of Vance and Stade (27), who observed that nitrate inhibits PEPC and reduces the formation of proteins (23).
Table 2 also shows data for the activity of glutamine synthetase (GS); there were no significant differences in the means among the nitrate levels, although within the 5 mM level the genotypes differed in GS activity, with the control (Multolupa) and the L-62 showing values of 4.49 and 4.79 µmoles GS·h-1·mg-1 protein, respectively. The L-135 genotype, with inefficient characteristics, obtained the extremely low value of 0.34. For the growth stages, the difference between the mean for the first period (7 weeks) and that for the second (10 weeks) was 265%, suggesting 7 weeks as the ideal period for the determination of GS in this crop. Among the genotypes, L-62 obtained the highest value, not differing from the hypernodulating L-88 in the first period. Streeter (25) considered the possibility of NR increasing GS; however, in the present work this did not occur, given that the genotypes showing the highest GS values in the 7-week evaluation period were not outstanding in relation to the NR activity, nor did they differ among themselves. Another example of differential behaviour was observed in relation to the determination of NR in these genotypes, for which the best period was 10 weeks, that is, later on.
The parameters N-α-amino, N-ureides and N-amide (Table 3) were studied over only one cycle, the first two having been evaluated in the seventh week and the last in the tenth week. For the mean values of N-α-amino, there were no differences among the levels of nitrate applied, and this behaviour was repeated for the other nitrogen compounds of the sap derived from fixation. Only in the case of N-ureide levels did the L-135 genotype obtain a high value when nitrate was not applied compared with the value reached at the 5 mM level. This behaviour is consistent, since for L-135, an inefficient genotype, nitrate clearly interferes with fixation: despite its inefficiency, the concentration of N-ureides was 84% higher when nitrate was not applied. As there are innumerable pathways by which the synthesis of ureides and nitrogenase can interact, according to the reports of Atkins et al. (5), this behaviour may be a form of interaction involving the intermediate pathways that aid the respiratory reactions supporting nitrogenase, or the respiratory substrates that the bacteroids use to synthesise ureides. Or, as Aveline et al. (1) observed on studying the best method for evaluating N-ureides in soybean, there is a need to investigate different growth stages of the plant in order to quantify the ureides derived from fixation.
For the N-amide data, the tenth-week evaluation showed significant differences among the genotype means, with L-135 obtaining the highest value of 25.73 µmoles/100 µl of extracted sap; the N-ureides showed similar behaviour for this genotype, which did not, however, differ statistically from the L-280 genotype, whilst L-62 showed the lowest value.
From these results it can be concluded that the nitrate reductase enzyme was strongly influenced by the application of nitrate, showing much higher values than in the absence of nitrate, irrespective of genotype, and that for NR the best evaluation period was the tenth week. Concerning PEPC, the nitrate-resistant L-62 genotype clearly demonstrated that this enzyme is inhibited by the presence of nitrate. With respect to GS, the L-135 genotype (nod- fix-) showed an extremely low value, thus demonstrating that GS is an enzyme highly correlated with fixation. In relation to the best growth stage for GS, Lupinus albus should be evaluated in the seventh week. For the nitrogen compounds derived from fixation, we suggest that a deeper study concerning the best evaluation period would be of great importance, given that each genotype demonstrates differential behaviour for the synthesis of these compounds.
Subradiant-to-Subradiant Phase Transition in the Bad Cavity Laser
We show that the onset of steady-state superradiance in a bad cavity laser is preceded by a dissipative phase transition between two distinct phases of steady-state subradiance. The transition is marked by a non-analytic behavior of the cavity output power and the mean atomic inversion, as well as a discontinuity in the variance of the collective atomic inversion. In particular, for repump rates below a critical value, the cavity output power is strongly suppressed and does not increase with the atom number, while it scales linearly with atom number above this value. Remarkably, we find that the atoms are in a macroscopic entangled steady state near the critical region with a vanishing fraction of unentangled atoms in the large atom number limit.
Introduction.-Progress in laser physics has revolutionized our day-to-day lives and the scope of experiments across the entire spectrum of scientific disciplines. At its core, the laser is a highly out-of-equilibrium system whose steady state is maintained via a balance of driving and dissipation. A typical laser model involves a collection of continuously pumped two-level atoms interacting with an electromagnetic field confined in a cavity with lossy mirrors. In particular, bad cavity lasers operate in a regime where the lifetime of the photon is short compared to the effective lifetime of the upper atomic level [1][2][3]. Over the past decade, they have garnered significant attention because in these systems the sensitivity of the laser linewidth to cavity frequency fluctuations is strongly suppressed [1,4]. Furthermore, this narrow linewidth coexists in a regime where the emission amplitudes of the atoms can constructively interfere and give rise to superradiant emission. Apart from its promising technological potential, the superradiant regime has also been shown to host a variety of many-body phenomena such as synchronization [5][6][7][8][9][10][11][12][13][14][15], collective cooling [16][17][18][19][20][21] and self-organization [16,[22][23][24][25][26].
In contrast, the regime preceding the onset of superradiance has received far less attention, partly because within the framework of mean-field theory the atoms appear to be in a trivial unpolarized product state. Prior beyond-mean-field studies have only considered this regime in passing [2,27] or for a small number of emitters [28,29], but have nevertheless demonstrated that the atoms populate collective dark states giving rise to steady-state subradiance. However, the physics in this regime and the stability of the highly correlated quantum states remain poorly understood, especially given that this regime is complementary to the well-studied and much-anticipated steady-state superradiant regime.
[FIG. 1 caption, beginning truncated] ... atoms. The atoms undergo collective decay (green) in the presence of non-collective pumping (red) and additional non-collective decay (blue). When w < γ + Γc, the phases of the spins are anti-correlated, leading to steady-state subradiant emission. The steady-state density matrix lives in a triangular state space characterized by quantum numbers J, M. Collective decay only leads to transitions within the same J manifold, whereas non-collective pumping and decay cause jumps to states in the same as well as adjacent J manifolds. (c-d) Population distribution on the Dicke ladder for N = 100 atoms, for two states that are approximately equally subradiant (cf. Eq. (3)) but on either side of the phase transition.

In this Letter, we show that the subradiant regime of a bad cavity laser is in itself a playground for a rich variety of physical phenomena. In particular, we show that the onset of superradiance is preceded by a dissipative phase transition between two distinct types of subradiance. The transition is shown to arise as a consequence of the bounded state space of the collective atomic system. The two subradiant steady states correspond to the population of different regions of this state space (see Fig. 1). The phase transition is heralded by a non-analytic change in the cavity output power and a discontinuous change in a squeezing parameter. An experimentally attractive feature is the scaling of the output power, which is strongly suppressed and does not increase with atom number N below the critical point, but instead scales linearly with N above this point. Near the critical point, we find that the atoms are in a macroscopically entangled state and that the fraction of unentangled atoms becomes vanishingly small as the number of atoms increases. From the viewpoint of dissipative spin models, this phase transition and the accompanying entanglement are striking because they arise in a model whose governing master equation contains no Hamiltonian terms but only Lindblad dissipators.
Model.-Our system consists of N atoms, each with upper and lower levels |↑⟩ and |↓⟩ respectively, and a single lossy cavity mode, as shown in Fig. 1(a). The atoms can be modeled using the language of Pauli matrices, where σ⁻_j = |↓⟩_j⟨↑|_j (σ⁺_j = |↑⟩_j⟨↓|_j) is the lowering (raising) operator for atom j and σᶻ_j = |↑⟩_j⟨↑|_j − |↓⟩_j⟨↓|_j is the population difference between the spin states. The finite lifetime of |↑⟩ causes atoms to emit photons both into free-space modes and into the cavity mode as they decay to |↓⟩. Emission into free space is characterized by a jump operator √γ σ⁻_j for each atom. Assuming that the atoms are identically coupled to the cavity mode, the emission of a cavity photon is characterized by the jump operator √Γc Ĵ⁻, where Ĵ⁻ = Σ_j σ⁻_j is the collective angular momentum lowering operator. Here, Γc = Cγ is the single-atom emission rate into the cavity, which is modified by the dimensionless cooperativity parameter C. The decay channels are balanced by an effective incoherent pumping of the individual atoms from |↓⟩ → |↑⟩, which is represented by a jump operator √w σ⁺_j for each atom. The master equation governing the spin dynamics is therefore given by

dρ/dt = Γc D[Ĵ⁻]ρ + Σ_j ( γ D[σ⁻_j] + w D[σ⁺_j] ) ρ,   (1)

where D[Ô]ρ = ÔρÔ† − (1/2){Ô†Ô, ρ}. This master equation is invariant under permutations of the atomic indices, and this symmetry results in a drastic reduction of the Liouville space for the steady-state solution from 4^N to O(N³) basis states [30,31]. Furthermore, the master equation also possesses a U(1) symmetry, which can be seen by making the transformation σ±_j → e^{±iφ} σ±_j in Eq. (1). This additional symmetry reduces the required basis states to O(N²).
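For a handful of atoms, the steady state of Eq. (1) can be obtained by brute force, without exploiting the permutation or U(1) symmetries. The sketch below (an illustrative implementation, not the authors' code; N = 4 and the chosen rates are arbitrary) builds the Liouvillian as a matrix acting on the vectorized density matrix, extracts the steady state as its null eigenvector, and evaluates ⟨Ĵ⁺Ĵ⁻⟩ and ⟨Ĵz⟩.

```python
import numpy as np

N, gamma, Gamma_c, w = 4, 0.1, 1.0, 0.05           # illustrative parameters (C = 10, weak pumping)
sm = np.array([[0, 0], [1, 0]], dtype=complex)      # sigma^- in the basis {|up>, |down>}
sp = sm.conj().T
I2 = np.eye(2, dtype=complex)

def embed(op, j):
    """Single-atom operator acting on atom j of the N-atom Hilbert space."""
    out = np.array([[1.0 + 0j]])
    for m in range(N):
        out = np.kron(out, op if m == j else I2)
    return out

sm_list = [embed(sm, j) for j in range(N)]
Jm = sum(sm_list)                                   # collective lowering operator J^-
Jz = 0.5 * sum(embed(np.diag([1.0, -1.0]).astype(complex), j) for j in range(N))

def dissipator(C):
    """Matrix of D[C] on the column-stacked density matrix: vec(A X B) = (B^T kron A) vec(X)."""
    d = C.shape[0]
    CdC = C.conj().T @ C
    return (np.kron(C.conj(), C)
            - 0.5 * np.kron(np.eye(d), CdC)
            - 0.5 * np.kron(CdC.T, np.eye(d)))

L = Gamma_c * dissipator(Jm)
for j in range(N):
    L += gamma * dissipator(sm_list[j]) + w * dissipator(embed(sp, j))

evals, evecs = np.linalg.eig(L)                     # steady state = null right eigenvector of L
rho = evecs[:, np.argmin(np.abs(evals))].reshape((2**N, 2**N), order="F")
rho = rho / np.trace(rho)
rho = (rho + rho.conj().T) / 2                      # enforce Hermiticity of the numerical steady state

print("<J+ J->:", np.trace(Jm.conj().T @ Jm @ rho).real)
print("<Jz>   :", np.trace(Jz @ rho).real)
```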
A convenient representation of these basis states uses the permutation-invariant eigenstates of the Ĵ² and Ĵz operators with respective quantum numbers J, M [32]. Here, we have introduced the collective angular momentum operators Ĵ_α = (1/2) Σ_j σ^α_j (α = x, y, z), with Ĵ² = Ĵx² + Ĵy² + Ĵz². The two quantum numbers J = 0, 1, 2, . . . , N/2 (for an even N [33]) and M = −J, . . . , J form a discrete, triangular state space for the collective atomic state in Liouville space, as shown in Fig. 1(b). While the two vertices at J = N/2, M = ±N/2 correspond to trivial product states with all spins in |↑⟩ or |↓⟩, the third vertex at J = 0, M = 0 is a highly entangled, subradiant state wherein the atoms are grouped into N/2 singlet pairs [34].
In this state space, collective emission leads to a transition with ∆M = −1 within a ladder of constant J. While the free space emission and repump of any single atom breaks permutation invariance, the cumulative effect of either of these processes occurring for all atoms preserves this symmetry. Hence, they can be viewed as transitions between different states in this state space with ∆M = −1, +1 respectively. Crucially, these processes couple adjacent J ladders and take the system away from J = N/2 which is the initial value when the atomic pseudospins are initialized in a coherent spin state. Closed form expressions for the transition probabilities [35] enable us to numerically determine the steady-state by exact diagonalization (ED) of a rate matrix [32].
Signatures of the phase transition.-For repump rates such that γ + Γc < w < NΓc, the system is in the superradiant regime, characterized by positive inversion and spin-spin correlations. We now vary w in the weak-repump regime 0 < w < γ + Γc while keeping the values of γ and Γc fixed. We choose γ/Γc = 0.1, corresponding to C = 10. We first consider the cavity output power per atom, which is proportional to ⟨Ĵ⁺Ĵ⁻⟩/N, where Ĵ⁺ = (Ĵ⁻)†. Figure 2(a) plots this quantity for different atom numbers as w is scanned across γ. With increasing system size, we observe signatures of a non-analytic change at w = γ that indicate a phase transition. We use second-order cumulant theory to obtain analytical insight into this behavior. Using an expansion in the small parameter 1/N, the O(N⁰) behavior of ⟨Ĵ⁺Ĵ⁻⟩/N is obtained analytically [Eq. (2); see [32]]: for w < γ, the leading-order solution is zero, revealing the strong suppression of the cavity output power, which does not grow with N in this regime. On the other hand, the output power grows linearly with N for w > γ. Importantly, this critical point is distinct from, and precedes, the onset of superradiance at w = γ + Γc. As a result, the collective atomic state is subradiant (with respect to emission into the cavity) in both of the phases demarcated by this point. A quantitative measure of the degree of subradiance is the per-atom reduction in the collective emission rate in units of Γc. This subradiance factor S_f is given by

S_f = ( ⟨Ĵ⁺Ĵ⁻⟩ − Σ_j ⟨σ⁺_j σ⁻_j⟩ ) / N,   (3)

where ⟨Ĵ⁺Ĵ⁻⟩ describes collective emission and includes the effects of atom-atom correlations, while the second term describes the emission from N uncorrelated atoms. The J = 0, M = 0 singlet state gives the minimum possible value of S_f = −0.5 and hence can be considered the most subradiant state. Remarkably, as shown in Fig. 2(b), we find that near the critical point S_f → −0.5 with increasing system size, indicating that the system is highly subradiant on either side of this point and occupies states with J close to zero.
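In the permutation-invariant representation introduced above, both the output power and S_f follow directly from the populations P(J, M): each |J, M⟩ contributes J(J+1) − M(M−1) to ⟨Ĵ⁺Ĵ⁻⟩, while the single-atom term is Σ_j⟨σ⁺_jσ⁻_j⟩ = N/2 + ⟨Ĵz⟩. The short sketch below (illustrative only; the example populations, concentrated in the lowest Dicke states, are assumptions) evaluates these observables.

```python
def observables(populations, N):
    """Output power per atom and subradiance factor S_f from populations over |J, M> states.

    `populations` maps (J, M) -> probability and is assumed normalized to one.
    """
    JpJm = sum(p * (J * (J + 1) - M * (M - 1)) for (J, M), p in populations.items())
    Jz = sum(p * M for (J, M), p in populations.items())
    single_atom = N / 2 + Jz                      # sum_j <sigma^+_j sigma^-_j>
    return JpJm / N, (JpJm - single_atom) / N     # <J+J->/N  and  S_f

N = 100
singlet = {(0, 0): 1.0}                           # all population in the J = 0 singlet vertex
print(observables(singlet, N))                    # -> (0.0, -0.5): maximally subradiant

mixed = {(0, 0): 0.7, (1, -1): 0.3}               # a toy mixture also populating J = 1, M = -1
print(observables(mixed, N))
```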
To understand how these two subradiant phases differ, we plot the population in the J, M states for N = 100 atoms at two points with similar values of S f (≈ −0.37) on either side of the critical point ( Fig. 1(c-d)). For w < γ, the system predominantly occupies the lowest states of each J-ladder, i.e., M = −J, whereas the value of J/N ∼ O(1) [36]. In contrast, for w > γ, the system occupies states with vanishing values of J/N whereas all allowed M values are significantly populated. In other words, as w increases, the subradiant system 'walks' up the lower boundary of the triangular state space, encounters the vertex at J = 0 and undergoes a phase transition into a qualitatively different family of subradiant states. Therefore, the phase transition arises as a result of the closed bottleneck at J = 0 that reflects the incoming population back into the J ≥ 0 space (see animation [37]).
A non-analytic change is also observed in the mean atomic inversion ⟨σᶻ₁⟩ = 2⟨Ĵz⟩/N, plotted in Fig. 2(c). We find that ⟨σᶻ₁⟩ monotonically increases with w for w < γ, while it is essentially zero (at leading order) for w > γ [32]. Further dramatic evidence for the phase transition is observed in the normalized variance of the collective inversion, (ΔĴz)²/N. Figure 2(d) plots this quantity for N = 10³, 10⁴, 10⁵ spins. Since J ≪ N/2 in the critical region, we are able to extend the exact diagonalization (ED) computation to N ∼ 10⁵ by working in a truncated state space with J_max ≤ 1250.
With increasing atom number, we find strong evidence for a discontinuous jump in this quantity at the critical point. In cumulant theory, we find that this jump in the variance is only reproduced by accounting for third-order cumulants [32]; in particular, three-atom correlations such as ⟨σ⁺₁σ⁻₂σᶻ₃⟩ cannot be factorized into lower-order expectation values. The non-analytic behavior of the inversion and the discontinuity in the variance at the critical point are reminiscent of the behavior of order parameters and susceptibilities in equilibrium phase transitions, but in this system these features manifest in a strongly out-of-equilibrium setting.
Entanglement.-The failure of simple mean-field theory to reveal subradiance motivates us to investigate the entanglement properties of the steady state in this regime, and in particular near the critical point w = γ. Since the system occupies states with J ≪ N/2 near this point, an appropriate entanglement witness is the generalized spin-squeezing parameter [38,39],

ξ² = [ (ΔĴx)² + (ΔĴy)² + (ΔĴz)² ] / (N/2),   (4)

where (ΔĴ_α)² = ⟨Ĵ_α²⟩ − ⟨Ĵ_α⟩². Physically, this parameter captures the simultaneous compression of uncertainties in the three angular momentum components and takes the minimum value ξ² = 0 for the macroscopic singlet state with J = 0, M = 0. Furthermore, ξ² also serves as an upper bound for the fraction of unentangled spins in the system [40]. Figure 3(a) plots ξ² as w is varied across γ. The discontinuity in (ΔĴz)² also manifests here as a sudden drop in ξ² near the critical point that becomes more pronounced with increasing system size. For a finite N, the minimum attainable ξ² decreases with N. As shown in Fig. 3(b), we find a power-law scaling ξ² ∝ N^{−0.34} for the minimum value obtained using ED, which is approximately reproduced by the numerical solution of third-order cumulant theory, where ξ² ∝ N^{−0.31}. This scaling indicates that the fraction of unentangled spins, for which ξ² is an upper bound, vanishes as N → ∞. Indeed, in the large-N limit, we analytically find that ξ² → 0 as w → γ. The subradiant-to-subradiant phase transition is thus characterized by macroscopic entanglement in the atomic ensemble, where O(N) atoms are entangled with other atoms.
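Because the steady state inherits the U(1) symmetry of Eq. (1), ⟨Ĵx⟩ = ⟨Ĵy⟩ = 0 and ⟨Ĵx² + Ĵy² + Ĵz²⟩ = ⟨Ĵ²⟩, so ξ² can again be computed directly from the (J, M) populations. The snippet below does this for two simple populations; note that the N/2 normalization follows the form of Eq. (4) as reconstructed above and should be checked against the original definition.

```python
def squeezing_parameter(populations, N):
    """Spin-squeezing witness from populations over |J, M> states, assuming the U(1)-symmetric case."""
    J2 = sum(p * J * (J + 1) for (J, M), p in populations.items())   # <J^2>
    Jz = sum(p * M for (J, M), p in populations.items())             # <Jz>
    # With <Jx> = <Jy> = 0, the sum of the three variances reduces to <J^2> - <Jz>^2.
    return (J2 - Jz**2) / (N / 2)

N = 100
print(squeezing_parameter({(0, 0): 1.0}, N))            # macroscopic singlet -> 0 (maximal violation)
print(squeezing_parameter({(N // 2, N // 2): 1.0}, N))  # fully inverted product state -> 1 (no entanglement witnessed)
```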
Practical considerations.-Bad cavity lasers based on Raman transitions [3] as well as on narrow-line optical transitions [43] can potentially be adapted to observe this transition. Experiments could also be based on cooperative emission from artificial atoms such as NV centers or quantum dots [41,42]. Whereas for steady-state superradiance the bad-cavity requirement is κ ≫ NΓc, with κ the cavity linewidth and NΓc the order of the collectively enhanced single-atom emission rate, future studies can explore whether this requirement can be relaxed in the subradiant regime, where there is no such enhancement. However, similar to the superradiant regime, steady-state subradiance requires the atom-cavity system to satisfy NC ≫ 1 but to operate in the less explored weak-pumping limit given by w ∼ γ ≪ NΓc. Although we have considered the stricter (but achievable [44]) condition C > 1 in this work, the non-analytic behavior of the inversion and output power is independent of C, and the critical scaling of the minimum squeezing with N will also be observable for C ≪ 1, albeit with an exponent of smaller magnitude [32]. However, for C ≪ 1, the interval γ < w < γ + Γc is very small and hence the subradiant-to-subradiant transition is immediately succeeded by the onset of superradiance. We have verified that the mean inversion, output power and minimum squeezing are robust to T2 dephasing even when 1/T2 ≳ γ [32]. While ⟨Ĵ⁺Ĵ⁻⟩ can be inferred from the cavity output power, the mean inversion and the variance (ΔĴz)² could be measured, for instance, by preparing the steady state and subsequently measuring the population statistics in one of the pseudospin states by detecting the fluorescence from a cycling transition. Alternatively, quantum non-demolition schemes could also be used to measure the latter two observables [45,46]. The quantities S_f and ξ² can be estimated by combining these three quantities. The cavity output can also be used to measure photon bunching via the second-order correlation function g⁽²⁾(0), which we find exhibits an abrupt spike at the critical point [32].
Identical coupling of the atoms to the cavity mode can be achieved by trapping the atoms at alternate antinodes [45]. Remarkably, we find that the behavior of the cavity output power and of the mean inversion in the subradiant regime remains unchanged even when the atoms are assumed to be arbitrarily distributed over a mode wavelength [32]. However, the magnitude of the minimum S_f is reduced because of the modulation by the mode function. This modulation also makes it difficult to infer S_f and ξ² from measurements of the cavity output and the fluorescence. Importantly, since ξ² as defined in Eq. (4) does not account for the cavity mode function, it is no longer a suitable entanglement witness, as the state can be highly entangled even when ξ² > 1. Future work will explore the possibility of constructing an entanglement witness that accounts for the cavity mode function.
Conclusion and outlook.-We have demonstrated that a bad cavity laser undergoes a dissipative phase transition from one subradiant phase to another before the onset of superradiance. Rather than destroying atomic correlations, single atom pumping and decay instead play a central role in generating and maintaining the entangled subradiant states we observe, which, in addition, are also robust to T 2 dephasing. Buoyed by recent experiments [47], subradiance is an exciting frontier with a variety of proposed applications such as ultrafast readouts [48], engineering of optical metamaterials [49], photon storage [50,51], quantum state transfer [52] and improved quantum metrology [53], to name but a few. In light of its robust nature, it will be interesting to explore potential applications of steady-state subradiance in quantum information processing, especially considering the features near the critical point such as a vanishing fraction of unentangled spins and an extreme sensitivity of observables to system parameters. From a fundamental perspective, it will be interesting to explore higher-spin models, since the high-dimensional bounded state space presents a greater number of vertices and edges where we may discover dissipative phase transitions that are similar in spirit to the one we have reported here.
We thank Peter Zoller, Walter Hahn, John Cooper, ...
I. PERMUTATION INVARIANT BASIS STATES
A convenient basis to represent pure states of N spin-1/2 particles consists of the joint eigenstates of the Ĵ² and Ĵz operators with respective quantum numbers J and M, such that

Ĵ² |J, M⟩ = J(J + 1) |J, M⟩,    Ĵz |J, M⟩ = M |J, M⟩.
II. RATE EQUATIONS FOR DETERMINING STEADY STATE POPULATIONS
Closed-form expressions for the transition probabilities between the PI basis states have been previously derived [3] for the various Lindblad terms constituting master equation (1) of the Main Text. We reproduce these expressions in the form of transition rates in Table I. We introduce a population vector P whose dynamical evolution is given by

dP_j/dt = Σ_k R_{jk} P_k,

where R_{jk} represent the elements of a rate matrix R. The indices j, k each take on (N + 2)²/4 values, corresponding to the total dimension of the PI U(1)-symmetric basis set. An off-diagonal element R_{jk} gives the rate of population flow from k → j and can be directly read off from Table I. The diagonal element gives the total rate of population flow out of state j and is thus given by −Σ_{k≠j} R_{kj}. The steady-state population P_s is given by the right eigenvector of R with eigenvalue 0, i.e.,

R P_s = 0.
The populations are normalized according to Σ_j P_{s,j} = 1. We note that this normalization is different from the convention adopted in Ref. [3], where the degeneracy d_J^N is explicitly factorized out of the state populations.
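The steady state of such a classical rate equation is simply the null vector of the rate matrix. The toy example below illustrates that numerical step on a generic three-state model; the off-diagonal rates are arbitrary placeholders and do not come from Table I, which lists the actual |J, M⟩ transition rates.

```python
import numpy as np

# Arbitrary off-diagonal rates for a 3-state toy model: R[j, k] = rate of flow k -> j
R = np.array([[0.0, 0.3, 0.1],
              [0.5, 0.0, 0.4],
              [0.2, 0.6, 0.0]])
R -= np.diag(R.sum(axis=0))          # diagonal = -(total outflow), so each column sums to zero

evals, evecs = np.linalg.eig(R)
P_s = np.real(evecs[:, np.argmin(np.abs(evals))])
P_s /= P_s.sum()                     # normalize: sum_j P_s,j = 1
print("steady state:", P_s, "| residual:", np.abs(R @ P_s).max())
```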
III. CUMULANT THEORY IN THE SUBRADIANT REGIME
Our starting point is the master equation of the bad cavity laser, Eq. (1) of the Main Text.

A. Absence of subradiance in mean-field theory

The mean-field equations of motion for the expectation values ⟨σ⁺₁⟩, ⟨σᶻ₁⟩ are

d⟨σ⁺₁⟩/dt = −(Γ₊/2) ⟨σ⁺₁⟩ + (N − 1)(Γc/2) ⟨σᶻ₁⟩⟨σ⁺₁⟩,
d⟨σᶻ₁⟩/dt = Γ₋ − Γ₊ ⟨σᶻ₁⟩ − 2(N − 1)Γc |⟨σ⁺₁⟩|²,

where Γ± = w ± (γ + Γc). These equations admit two steady-state solutions, given by

⟨σ⁺₁⟩ = 0,  ⟨σᶻ₁⟩ = Γ₋/Γ₊,
⟨σᶻ₁⟩ = Γ₊/[(N − 1)Γc],  |⟨σ⁺₁⟩|² = [Γ₋ − Γ₊⟨σᶻ₁⟩]/[2(N − 1)Γc].   (S9)

For the second solution to represent a physical and nonzero polarization, we require |⟨σ⁺₁⟩|² > 0. Assuming N ≫ 1, this condition implies that w₋ < w < w₊, where

w₋ ≃ γ + Γc,   w₊ ≃ NΓc.   (S10)

The two roots w₋, w₊ are respectively the lower and upper bounds for superradiant emission. In this regime, the spin polarization has a nonzero magnitude, i.e. |⟨σ⁺₁⟩| > 0, and permutation invariance implies that all the spins are polarized along the same direction, giving rise to a classically correlated spin state.
Instead, for w < w₋ or w > w₊, the physical solution is the first line of Eq. (S9), where |⟨σ⁺₁⟩| = 0, i.e. the spins are unpolarized. The steady state is therefore simply a product state of unpolarized spins. Notably, the subradiant nature of the spin state in the regime w < γ + Γc does not manifest in mean-field theory.
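A quick numerical check of this mean-field picture can be made by integrating the two equations of motion above (as reconstructed here, so the precise prefactors should be checked against the original) and scanning the repump rate: the steady-state polarization is nonzero only inside the window w₋ < w < w₊. The parameter values in the sketch are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

N, gamma, Gc = 200, 0.1, 1.0                  # Gc = Gamma_c; window is roughly (gamma+Gc, N*Gc) = (1.1, 200)

def rhs(t, y, w):
    sp_re, sp_im, sz = y                      # Re<sigma+>, Im<sigma+>, <sigma^z>
    Gp, Gm = w + gamma + Gc, w - gamma - Gc
    dsp = (-Gp / 2 + (N - 1) * Gc / 2 * sz) * (sp_re + 1j * sp_im)
    dsz = Gm - Gp * sz - 2 * (N - 1) * Gc * (sp_re**2 + sp_im**2)
    return [dsp.real, dsp.imag, dsz]

for w in [0.5, 5.0, 50.0, 400.0]:             # below, inside (twice), and above the window
    sol = solve_ivp(rhs, (0, 200), [0.01, 0.0, -1.0], args=(w,), rtol=1e-8)
    sp_re, sp_im, sz = sol.y[:, -1]
    print(f"w = {w:6.1f}:  |<sigma+>| = {np.hypot(sp_re, sp_im):.4f},  <sigma^z> = {sz:.4f}")
```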
IV. SECOND-ORDER CORRELATION FUNCTION
In this section, we study the second-order correlation function at zero time delay, which can be expressed in terms of the atomic dipoles in the bad-cavity limit as [4]

g⁽²⁾(0) = ⟨Ĵ⁺Ĵ⁺Ĵ⁻Ĵ⁻⟩ / ⟨Ĵ⁺Ĵ⁻⟩².   (S25)

Figure 1 shows g⁽²⁾(0) as w is scanned across γ for N = 10³ (red, cross), N = 10⁴ (blue, star), and N = 10⁵ (magenta, diamond) atoms. For the cooperativity value C = 10, the second-order correlation function exhibits an abrupt jump near the critical point w = γ. For w > γ, the peak observed near the critical point rapidly decays towards a limiting value of 2. In the regime w < γ, we observe that g⁽²⁾(0) increases as w decreases. A detailed study of the intensity fluctuations will be the subject of future work.
V. IMPACT OF COOPERATIVITY PARAMETER
In the Main Text, we choose a large cooperativity of C = 10 so that the interval γ < w < γ + Γ c is discernible from the superradiant regime w > γ + Γ c . Here, we show that our conclusions are valid even for lower values of C.
As Eq. (S15) and Eq. (S16) demonstrate, the leading-order expressions for the inversion and the cavity output power (which is proportional to Γc⟨Ĵ⁺Ĵ⁻⟩) are independent of C. The invariance of the latter observable is experimentally appealing because the phase transition can be detected by a macroscopic change in the cavity output power irrespective of the value of C. From Eq. (S23), the squeezing parameter is independent of C for w < γ while being proportional to 1/C when w > γ. Nevertheless, ξ² → 0 as w → γ⁺ in the large-N limit. We therefore conclude that the system in both intervals w < γ and γ < w < γ + Γc continues to exhibit two distinct forms of subradiant behavior and undergoes a phase transition at w = γ for any value of C, although the latter regime may be difficult to examine when C ≪ 1, as the critical point would be almost immediately followed by the onset of superradiance.
We now demonstrate that a critical scaling of the squeezing parameter can be observed even at smaller C values. Figure 2 depicts the minimum value of ξ² for a varying number of atoms. The data correspond to C = 0.1 (red, cross), C = 1 (blue, circle), and C = 10 (magenta, asterisk). The dashed lines are the corresponding curve fits, which reveal that the scaling with N is similar for C ≳ 1, but is noticeably reduced for smaller values of C.
VI. IMPACT OF DEPHASING
In this Section, we probe the robustness of the subradiant-to-subradiant phase transition to individual dephasing of the atomic dipoles. Dephasing can arise as a byproduct of the repump process (see Fig. 3) or due to ambient effects such as atom-atom collisions and photon scattering from the optical lattice used for trapping.
The master equation including dephasing has the form of Eq. (1) of the Main Text supplemented by a single-atom dephasing term proportional to D[σᶻ_j] for each atom, with a strength set by the rate 1/T₂.

[FIG. 3 caption] Dephasing as a byproduct of repumping. Atom j is coupled from the ground state |↓⟩_j to an auxiliary state |a⟩_j by a laser with effective Rabi frequency Ω_p. The rapid decay |a⟩_j → |↑⟩_j leads to an effective repump process from |↓⟩_j → |↑⟩_j at rate w = Ω_p² γ_p / [Γ(γ_p + γ_a)], where Γ accounts for γ_p, γ_a and a possible non-zero linewidth of the coupling laser [5]. On the other hand, the decay |a⟩_j → |↓⟩_j does not change the pseudospin state but destroys coherence between the spin states. The strength of this dephasing can be characterized by a rate 1/T₂ = Ω_p² γ_a / [4Γ(γ_p + γ_a)].

The robustness of these observables to dephasing is a major advantage that facilitates experimental observations of the phase transition and may potentially allow for more freedom in the choice of atomic transition.
VII. INHOMOGENEOUS ATOM-CAVITY COUPLING
In this Section, we investigate the situation where the atoms are not trapped at the antinodes but are spread over the cavity mode function, given by g(x) = g₀ cos(2πx/λ), where λ is the wavelength. As illustrated in Fig. 5, we divide the region −λ/2 ≤ x ≤ λ/2 into M bins and assume that the atoms are positioned at the bin centers. We suppose that the number of atoms at x_m is N_m, such that Σ_m N_m = N. After eliminating the cavity mode, the master equation for the bad cavity laser takes the form of Eq. (S28), in which the collective decay of Eq. (1) is replaced by a term with bin-dependent couplings; in this last term, the indices i_n, j_m respectively sum over atoms in bins n and m. The atom-atom coupling constant now varies depending on the bin index and is given by

Γ_{nm} = Γc cos θ_n cos θ_m,   (S29)

where θ_m = 2π(m − 1/2)/M and Γc = g₀²/κ for a cavity with decay rate κ.
The master equation (S28) is not permutation invariant and hence an exact diagonalization is no longer tractable. Instead, we use second-order cumulant theory to investigate the effect of inhomogeneous atom-cavity coupling. The relevant variables are ⟨σᶻ_{1,k}⟩, ⟨σᶻ_{1,k}σᶻ_{2,q}⟩ and ⟨σ⁺_{1,k}σ⁻_{2,q}⟩_s = ⟨σ⁺_{1,k}σ⁻_{2,q} + σ⁺_{2,q}σ⁻_{1,k}⟩/2, where k, q = 1, . . . , M. The equations of motion for these expectation values, Eq. (S30), can be solved in the steady state; from these solutions it is evident that the mean inversion remains the same even when the atoms are arbitrarily distributed over the cavity mode function. Simple algebra shows that, remarkably, even the cavity output power is independent of the distribution and is given by Eq. (2) of the Main Text (multiplied by the rate Γc).
Therefore, the phase transition can still be observed as a non-analytic behavior in either of these quantities. The subradiance factor S_f, however, depends on the atomic distribution through an average over the atomic positions (denoted by an overbar in the corresponding expression). As a result, the minimum value of S_f is −1/4 when the atoms are uniformly distributed over a wavelength, whereas it is −1/2 when the atoms are trapped at the antinodes. We have verified the validity of these analytic results by numerically solving for the steady state of Eq. (S30), taking the case of N = 10⁴ atoms uniformly distributed over either M = 25 or M = 40 bins that span the cavity wavelength; the numerical results are in excellent agreement with the above expressions.
As already mentioned in the Main Text, the squeezing parameter ξ 2 defined in Eq. (4) of the main text does not account for the cavity mode function and hence can be significantly larger than 1 even when the atoms are in a highly entangled state.
Estimation of the charge carrier localization length from Gaussian fluctuations in the magneto-thermopower of La_{0.6}Y_{0.1}Ca_{0.3}MnO_3
The magneto-thermoelectric power (TEP) $\Delta S(T,H)$ of the perovskite-type manganese oxide $La_{0.6}Y_{0.1}Ca_{0.3}MnO_3$ is found to exhibit a sharp peak at a temperature $T^{*}=170K$. By approximating the true shape of the measured magneto-TEP in the vicinity of $T^{*}$ by a linear triangle of the form $\Delta S(T,H)\simeq S_p(H)\pm B^{\pm}(H)(T^{*}-T)$, we observe that $B^{-}(H)\simeq 2B^{+}(H)$. We adopt the electron localization scenario and introduce a Ginzburg-Landau (GL) type theory which incorporates the two concurrent phase transitions, viz., the paramagnetic-ferromagnetic transition at the Curie point $T_C$ and the "metal-insulator" (M-I) transition at $T_{MI}$. The latter is characterized by the divergence of the field-dependent charge carrier localization length $\xi(T,H)$ at some characteristic field $H_0$. Calculating the average and fluctuation contributions to the total magnetization and the transport entropy related magneto-TEP $\Delta S(T,H)$ within the GL theory, we obtain a simple relationship between $T^{*}$ and the above two critical temperatures ($T_{C}$ and $T_{MI}$). The observed slope ratio $B^{-}(H)/B^{+}(H)$ is found to be governed by the competition between the electron-spin exchange $JS$ and the induced magnetic energy $M_sH_0$. The comparison of our data with the model predictions produces $T_{C}=195K$, $JS=40meV$, $M_0=0.4M_s$, $\xi_0=10\AA$, and $n_e/n_i=2/3$ for the estimates of the Curie temperature, the exchange coupling constant, the critical magnetization, the localization length, and the free-to-localized carrier number density ratio, respectively.
I. INTRODUCTION
In the doping range 0.2 < x < 0.5, these compounds are known to undergo a double phase transition from a paramagnetic (PM) insulator (I) to a ferromagnetic (FM) metal (M) state, characterized by the Curie temperature T_C and the charge carrier localization temperature T_MI, respectively [3-14]. The so-called giant magnetoresistivity (GMR) exhibits a sharp peak around T_MI, while below T_C the system acquires a spontaneous magnetization accompanied by giant magnetic entropy changes [14]. Despite a variety of theoretical scenarios attempting to describe this phenomenon, practically all of them adopt as a starting point the so-called double-exchange (DE) mechanism, which considers the exchange of electrons between neighboring Mn³⁺/Mn⁴⁺ sites with strong on-site Hund's coupling. The estimated exchange energy JS = 45 meV [11] (where S = 2 is an effective spin on a Mn site), being much less than the Fermi energy E_F in these materials (typically, E_F = 0.15 eV), favors an FM ground state. In turn, an applied magnetic field H enhances the FM order, thus reducing the spin scattering and producing the observed negative GMR. The localization scenario [13], in which Mn oxides are modelled as systems with both DE off-diagonal spin disorder and nonmagnetic diagonal disorder, predicts a divergence of the electronic localization length ξ(M) at some M-I phase transition. In terms of the spontaneous magnetization M, this means that for M < M_0 the system is in a highly resistive (insulator-like) phase, while for M > M_0 the system is in a low-resistive (metallic-like) state. Within this scenario, the Curie point T_C is defined through the spontaneous magnetization M as M(T_C, H) = 0, while the M-I transition temperature T_MI is such that M(T_MI, H) = M_0 (with M_0 being a fraction of the saturated magnetization M_s). Furthermore, the influence of magnetic fluctuations on electron-spin scattering near T_MI is expected to be rather important, for they can easily tip the subtle balance between magnetic and electronic processes in favor of either charge localization or delocalization. Besides, the observable difference between the two critical temperatures (usually attributed to the quality of a particular sample [5-8]) is ascribed to the random nonmagnetic scattering which is largely responsible for the magnitude of the observable GMR [13]. On the other hand, in view of its sensitivity to the carrier charge (and density), thermopower (TEP) measurements could complement the traditional MR data and be used as a tool for probing the field-induced delocalization of the carriers [16,17]. Besides, the magneto-TEP can be directly linked to the transport entropy change in an applied magnetic field. The recently observed giant magnetic entropy change in manganites [14] (produced by the abrupt reduction of the magnetization and attributed to an anomalous thermal expansion just at the Curie point) gives another reason to utilize the magneto-TEP data in order to obtain additional information on the underlying transport mechanisms in these materials.
In the present paper we discuss some typical results of magneto-TEP measurements on a manganite sample La_0.6Y_0.1Ca_0.3MnO_3 at an H = 1 T field over a wide temperature interval (ranging from 20 K to 300 K). By approximating the true shape of the measured magneto-TEP in the vicinity of the peak temperature T* by a linear triangle of the form ΔS(T, H) ≃ S_p(H) ± B^±(H)(T* − T), we observe that B^−(H) ≃ 2B^+(H). In an attempt to account for the observed behavior of the magneto-TEP, we adopt the main ideas of the microscopic localization theory [13] and construct a phenomenological free energy functional of Ginzburg-Landau (GL) type which describes the magnetic field and temperature behavior of the spontaneous magnetization in the presence of strong localization effects near T*. Calculating the background and fluctuation contributions to the total magnetization and the transport entropy-induced magneto-TEP ΔS(T, H) within the GL theory, we obtain a simple relationship between T* and the above two critical temperatures (T_C and T_MI). We also find that the observed slope asymmetry B^−(H)/B^+(H) is governed by a universal parameter z = JS/M_sH_0, where JS is the electron-spin exchange and M_sH_0 is the localization-related magnetic energy. By comparing our data with the model predictions, we deduce estimates for some important model parameters, such as the Curie point T_C, the localization length ξ_0, the critical magnetization M_0 ∝ H_0, the exchange energy J, and the free-to-localized carrier number density ratio n_e/n_i, all in good agreement with existing microscopic localization theories.
II. EXPERIMENTAL RESULTS
La_0.6Y_0.1Ca_0.3MnO_3 samples were prepared from stoichiometric amounts of La_2O_3, Y_2O_3, CaCO_3, and MnO_2 powders. The mixture was heated in air at 800 °C for 12 hours to achieve decarbonation. It was then pressed at room temperature under 10³ kG/cm² to obtain parallelepiped-shaped pellets. Annealing and sintering from 1350 °C down to 800 °C were performed slowly (over 2 days) to preserve the right phase stoichiometry. A small bar (length l = 10 mm, cross section S = 4 mm²) was cut from one pellet. The electrical resistivity ρ(T, H) was measured using the conventional four-probe method. To avoid Joule and Peltier effects, a dc current I = 1 mA was injected (as a one-second pulse) successively on both sides of the sample. The voltage drop V across the sample was measured with high accuracy by a KT 256 nanovoltmeter. A magnetic field H of 1 T was applied normally to the current. Fig. 1 presents the temperature dependence of the magnetoresistance (MR) Δρ(T, H) = ρ(T, H) − ρ(T, 0) for a La_0.6Y_0.1Ca_0.3MnO_3 sample at H = 1 T. As is seen, the negative MR Δρ(T, H) shows a peak (dip) at a temperature T_0 = 160 K (referred to as T_MI in what follows) where the GMR Δρ(T, H)/ρ(T, 0) reaches 40%. The thermopower (TEP) S was measured using the differential method [18]. In order to generate a heat flow, a small heater film (R = 150 Ω) was attached to one end of the sample. Two calibrated chromel-constantan thermocouples were used to measure the temperature difference between two points on the sample. The TEP S(T, H) is deduced from the relation S(T, H) = S_Au(T) − V_s(T, H)/ΔT, where S_Au(T) is the TEP of the gold wires used to measure the voltage drop V_s at the hot junctions of both thermocouples. Fig. 2 shows a typical temperature behavior of the deduced magneto-TEP ΔS(T, H) = S(T, H) − S(T, 0) for the same sample (at H = 1 T). Observe that it has an asymmetric Λ-like shape near a critical temperature T* > T_MI where it reaches its field-dependent peak (dip) value S_p(H). Approximating the shape of the observed ΔS(T, H) by an asymmetric linear triangle of the form ΔS(T, H) ≃ S_p(H) ± B^±(H)(T* − T) (Eq. 1), with positive slopes B^−(H) and B^+(H) defined for T < T* and T > T*, respectively, we find (see Fig. 2) that B^−(H) ≃ 2B^+(H) in the vicinity of T*. With all this information in mind, let us now proceed to the interpretation of the experimental results.
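As an illustration of the triangle approximation, the sketch below fits the two linear flanks of a peak in ΔS(T) and extracts the slope ratio B⁻/B⁺. The data are synthetic, and the peak temperature, slopes and noise level are made-up numbers chosen only to mimic the shape described above.

```python
import numpy as np

# Synthetic magneto-TEP data mimicking an asymmetric dip near T* (illustrative only).
rng = np.random.default_rng(0)
T_star, S_p = 170.0, -1.0          # hypothetical peak temperature (K) and peak value (arb. units)
B_minus, B_plus = 0.02, 0.01       # hypothetical slopes below/above T* (B_minus ~ 2*B_plus)

T = np.linspace(140.0, 200.0, 121)
dS = np.where(T < T_star,
              S_p + B_minus * (T_star - T),
              S_p + B_plus * (T - T_star))
dS += 0.005 * rng.standard_normal(T.size)   # measurement noise

# Fit each flank with a straight line; the fitted slopes estimate B- and B+.
below, above = T < T_star, T > T_star
b_minus = np.polyfit(T_star - T[below], dS[below], 1)[0]
b_plus = np.polyfit(T[above] - T_star, dS[above], 1)[0]
print(f"B- = {b_minus:.4f}, B+ = {b_plus:.4f}, ratio = {b_minus / b_plus:.2f}")
```

With these placeholder slopes the fitted ratio comes out close to 2, the value reported for the measured sample.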
III. DISCUSSION
A. The model

Since we are dealing with the magnetic-field-induced changes of the TEP, it is reasonable to assume that the observed behavior can be attributed to the corresponding changes of transport magnetic entropy (and thus spontaneous magnetization) in the presence of strong electron-spin exchange and localization effects near some critical temperature T*. Later on, we will establish a simple (linear) relationship between the peak temperature T* and the two critical temperatures T_C and T_MI, responsible respectively for the PM-FM and M-I phase transitions. Based on the above considerations, we can write F = F_M − F_e for the balance of magnetic (F_M) and electronic (F_e) free energies participating in the transport processes under discussion. The observed magnetization M and the magneto-TEP behavior should result from the minimization of F (as, for example, is the case in superconductors, where F measures the difference between the normal and condensate energies [15,16]). In our case, the electronic contribution reads F_e = MH_e = η²(n_e E_k + n_i V_DE) and describes a coupling of the spontaneous magnetization M = M_s η² (where η is the order parameter and M_s the saturated magnetization) with (i) an effective DE energy V_DE = −JS (where S is an effective spin on a Mn site and J the exchange coupling constant), and (ii) the electronic (localization) energy E_k(T, H) = ħ²/2mξ²(T, H) (where ξ(T, H) is the localization length and m an effective electron mass); n_i and n_e stand for the number densities of localized spins and conduction electrons, respectively. At the same time, the magnetic contribution F_M = M(H_eff − H) = M_s η²(γη² − H) includes the effects of the molecular field H_eff = γM/M_s (where γ = 3k_B T_C/2μ_B S is the characteristic magnetic field, with k_B the Boltzmann constant and μ_B the Bohr magneton) and of an applied magnetic field H. After trivial rearrangements, the above functional F can be cast into a familiar GL-type form, quadratic plus quartic in the order parameter η (Eq. 2), describing the second-order phase transition from the PM (insulator) to the FM (metal) state near T*. Here β = 2γM_s, and we used the conventional expression ξ²(T, H) = ξ_0²(H)/(1 − T/T*) for the correlation length. Besides, to account for the field-induced localization effects, we assume, following Sheng et al. [13], that ξ_0(H) = ξ_0(0)(1 − H/H_0)⁻¹.

B. Mean value of the magneto-TEP: ΔS_av(T, H)

Given our previous experience with high-T_c superconductors, we can readily present the observed magneto-TEP in a two-term contribution form [16]
ΔS(T, H) = ΔS_av(T, H) + ΔS_fl(T, H),

where the average term ΔS_av(T, H) is non-zero only below T*, while the fluctuation term ΔS_fl(T, H) contributes to the observable ΔS(T, H) both above and below T*. In what follows, we shall discuss these two contributions separately within a mean-field theory approximation for GMR materials.
As usual, the equilibrium state of such a system is determined from the minimum energy condition ∂F/∂η = 0, which yields the equilibrium order parameter η_0 for T < T* (Eq. 4). Substituting η_0 into Eq. (2) we obtain the average free energy density ΔΩ_av. In turn, the magneto-TEP ΔS(T, H) can be related to the corresponding difference of transport entropies [15-17], Δσ_av ≡ −∂ΔΩ_av/∂T, as ΔS_av(T, H) = Δσ_av(T, H)/e n_e, where e and n_e are the charge and the number density of free carriers; this fixes the mean value of the magneto-TEP below T* (Eqs. 7 and 8).

The influence of fluctuations (both Gaussian and critical) on transport properties of high-T_c superconductors (including TEP, electrical and thermal conductivity) was extensively studied and is very well documented (see, e.g., [19-25] and further references therein). In particular, it was found that the fluctuation-induced behavior may extend to temperatures more than 10 K above the critical temperature T_c. As for manganites, the fluctuation effects in these materials appear to be much less explored [26]. Nonetheless, according to the interpretation of the observed magneto-TEP adopted in the present paper, the influence of magnetic fluctuations on electron-spin scattering near T* should be rather important. It therefore seems appropriate to take a closer look at the region near T* and discuss the fluctuation contribution ΔS_fl(T, H). Recall that, according to the textbook theory of Gaussian fluctuations [27], the fluctuations of any observable (such as heat capacity, magnetization, etc.) conjugated to the order parameter η can be expressed in terms of the statistical average of the fluctuation amplitude ⟨(δη)²⟩ with δη = η − η_0. The TEP above (+) and below (−) the critical point T* then involves a coefficient A to be defined below. Expanding the free energy density functional around the mean value of the order parameter η_0, defined as a stable solution of ∂F/∂η = 0, we can explicitly calculate the Gaussian integrals. Since η_0 is given by Eq. (4) below T* and vanishes at T ≥ T*, we obtain the fluctuation contributions above and below T* (Eqs. 11 and 12). As we shall see below, for the experimental range of parameters under discussion these expressions can be linearized with good accuracy, which yields the fluctuation contribution to the magneto-TEP (Eq. 13). Furthermore, it is quite reasonable to assume that S⁻_p = S⁺_p ≡ S_p, where the magneto-TEP peak (dip) values are defined as S⁻_p = S_p,av + S⁻_p,fl and S⁺_p = S⁺_p,fl. These conditions fix the arbitrary parameter A, yielding A = −4ζ²(0)α(0)(1+z)/(3e k_B T* β n_e). This in turn leads to the following expressions for the fluctuation contributions to peaks and slopes in terms of their average counterparts (see Eqs. 7 and 8): S⁺_p,fl(H) = (2/3)S_p,av(H), S⁻_p,fl(H) = −(1/3)S_p,av(H), B⁻_fl(H) = −(1/2)B_av(H), and B⁺_fl(H) = B_av(H). Finally, the total contribution to the observable magneto-TEP takes the triangular form of Eq. (1), with peak value and slopes expressed through E⁰_k = ħ²/[2mξ_0²(0)] and z = n_i JS/M_s H_0. Notice that within our model the asymmetry of the slope ratio B⁻(H)/B⁺(H) originates from the balance between the exchange energy n_i JS and the localization-induced magnetic energy M_s H_0.
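As a quick consistency check that uses only the relations just quoted, the definitions of the peak values combined with the fluctuation-to-average relations give the same total peak on both sides of T*:

$$
S_p^{-} = S_{p,\mathrm{av}} + S_{p,\mathrm{fl}}^{-} = S_{p,\mathrm{av}} - \tfrac{1}{3}S_{p,\mathrm{av}} = \tfrac{2}{3}S_{p,\mathrm{av}},
\qquad
S_p^{+} = S_{p,\mathrm{fl}}^{+} = \tfrac{2}{3}S_{p,\mathrm{av}},
$$

so that S⁻_p = S⁺_p ≡ S_p = (2/3)S_p,av, as assumed. Below T* the average term supplies the full S_p,av and the fluctuations remove one third of it, while above T* the entire peak value is of fluctuation origin; this is the 2/3 versus 1/3 (67% and 33%) split of S_p,av referred to in the concluding summary.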
D. Magnetization and the critical temperatures
Before turning to the comparison of our theoretical findings with the experimental data, let us discuss the critical temperatures which control the magnetic (T_C) and carrier localization "metal-insulator" (T_MI) phase transitions. According to the adopted model, these two temperatures are defined through the spontaneous magnetization M = M_av + M⁻_fl as follows: M(T_C) = 0 and M(T_MI) = M_0. Here M_0 ∝ H_0 is the critical magnetization at which the zero-temperature localization length ξ_0(H) = ξ_0(0)(1 − H/H_0)⁻¹ ∝ (1 − M/M_0)⁻¹ → ∞, marking the M-I phase transition. According to Section III, the average magnetization reads M_av(T) ≡ M(η_0) = M_s η_0²(T), where M_s = n_i μ_B is the saturated magnetization and the equilibrium order parameter η_0(T) is defined by Eq. (4). For the self-consistency of our approach, we also need the fluctuation contributions to the magnetization. Following the lines of the previous Section, we obtain the fluctuation terms above and below T* and, as usual, fix the constant C by requiring M(T*) = M⁺(T*), where M⁺ = M⁺_fl is the magnetization above T*. As a result, we obtain C = −4M_s ζ²/3k_B βT*, which leads to the expression for the total magnetization below T*, with ζ, β, and η_0 defined earlier. Given the above definitions, the two critical temperatures are related to each other and to the magneto-TEP peak temperature T* within our model (Eq. 23).

Let us now compare the obtained theoretical expressions with our experimental data on La_0.6Y_0.1Ca_0.3MnO_3 (see Fig. 2). By comparing the ratios (B⁻(H)/B⁺(H))_exp and (B⁻(H)/B⁺(H))_th, we obtain z ≃ 3 for the slope asymmetry parameter, leading to JS = 3μ_B H_0. Then, using Eq. (18), B⁺_exp, T* = 170 K, and the just obtained z, we get E⁰_k/JS = 2.5(n_i/n_e), which in turn yields T_C = 195 K for the Curie temperature (this value falls into the reported range of FM transition temperatures for this class of manganites [5-8]). Using this temperature and assuming S = 2 for an effective Mn spin, we can estimate the value of the exchange energy J (via the mean-field expression for the critical field H_0 = 3k_B T_C/2Sμ_B). The result is JS = 40 meV, which agrees with other reported estimates of this parameter [11]. Besides, from Eq. (23) we immediately get a simple relationship between the two critical temperatures, T_MI/T_C = 1 − 4M_0/9M_s, which allows us to estimate the critical magnetization M_0 (related to the localization magnetic field H_0 = μ_0 M_0). Using T_MI = 160 K (deduced from the GMR data on the same sample as a peak temperature, see Fig. 1), we obtain M_0 = 0.4M_s, in good agreement with the localization theory prediction [13]. Next, with the above estimates in mind, Eq. (17) yields ξ_0 = 10 Å for the localization length [5,13] (using the free electron mass m_e for m). Finally, observing that JS ≃ k_B T_C ≃ 0.3E⁰_k, we obtain n_e/n_i = 2/3 for an estimate of the free-to-localized carrier number density ratio, which leads to the saturated magnetization M_s = n_i μ_B = (3/2)n_e μ_B. It is also worth noting that the found localization energy E⁰_k is of the order of the Fermi energy E_F, as expected for manganites [11]. To conclude with the estimates, we note that ζ(H)T*/α(H) ≃ 1, which a posteriori justifies the use of the linearized Eq. (13) in the fluctuation region |1 − T/T*| ≪ 1. As is seen in Fig. 2, this criterion is well met in our case.
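The chain of estimates in this paragraph can be reproduced with a few lines of arithmetic. The sketch below simply plugs the quoted relations into each other (T*, T_MI, S = 2 and z = 3 are the values stated in the text), so any numerical agreement is just a restatement of the formulas above.

```python
# Back-of-the-envelope reproduction of the parameter estimates quoted above.
k_B = 8.617e-5   # eV/K
mu_B = 5.788e-5  # eV/T

T_C = 195.0      # K, Curie temperature obtained in the text
T_MI = 160.0     # K, localization temperature from the GMR peak
S = 2            # effective Mn spin
z = 3            # slope-asymmetry parameter from B-(H)/B+(H)

# Mean-field critical field and exchange energy: H_0 = 3 k_B T_C / (2 S mu_B), JS = z mu_B H_0
H_0 = 3 * k_B * T_C / (2 * S * mu_B)       # in Tesla (~ 220 T)
JS = z * mu_B * H_0                        # in eV (~ 0.038 eV)
print(f"H_0 ~ {H_0:.0f} T, JS ~ {JS * 1e3:.0f} meV")

# Critical magnetization from T_MI/T_C = 1 - 4 M_0 / (9 M_s)
M0_over_Ms = 2.25 * (1 - T_MI / T_C)       # ~ 0.40
print(f"M_0/M_s ~ {M0_over_Ms:.2f}")
```

To rounding, these reproduce the JS ≈ 40 meV and M_0 ≈ 0.4M_s estimates quoted above.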
In summary, to account for the observed temperature dependence of the magneto-TEP ΔS(T, H) in La_0.6Y_0.1Ca_0.3MnO_3, which exhibits a field-dependent peak at a temperature T* (lying between the charge carrier localization temperature T_MI, where the observed negative magnetoresistivity has a minimum, and the magnetic transition temperature T_C, which marks the occurrence of the spontaneous magnetization), we adopted the ideas of the localization model and introduced a free energy functional of Ginzburg-Landau (GL) type describing the phase transition from the paramagnetic (insulator) to the ferromagnetic (metal) state near T*. Calculating both average and fluctuation contributions to the total magnetization and magneto-TEP within the GL theory, we were able to successfully fit the data and estimate some important model parameters (including the metal-insulator T_MI and magnetic T_C transition temperatures, the localization length ξ_0, the electron-spin exchange coupling constant J, and the free-to-localized carrier number density ratio n_e/n_i), all in reasonable agreement with existing microscopic theories. The Gaussian fluctuations both above and below T* are found to contribute substantially to the peak value S_p(H) ≡ ΔS(T*, H) of the observed magneto-TEP, amounting to 67% and 33%, respectively.
Dynamics and Stability of Light-Like Tachyon Condensation
Recently, Hellerman and Schnabl considered the dynamics of unstable D-branes in the background of a linear dilaton. Remarkably, they were able to construct light-like tachyon solutions which interpolate smoothly between the perturbative and nonperturbative vacua, without undergoing the wild oscillations that plague time-like solutions. In their analysis, however, the full structure of the initial value problem for the nonlocal dynamical equations was not considered. In this paper, therefore, we reexamine the nonlinear dynamics of light-like tachyon condensation using a combination of numerical and analytical techniques. We find that for the p-adic string the monotonic behaviour obtained previously relied on a special choice of initial conditions near the unstable maximum. For generic initial conditions the wild oscillations come back to haunt us. Interestingly, we find an "island of stability" in initial condition space that leads to sensible evolution at late times. For the string field theory case, on the other hand, we find that the evolution is completely stable for generic choices of initial data. This provides an explicit example of a string theoretic system that admits infinitely many initial data but is nevertheless nonperturbatively stable. Qualitatively similar dynamics are obtained in nonlocal cosmologies where the Hubble damping plays a role very analogous to the dilaton gradient.
Introduction
Nonlinear theories with infinitely many derivatives have come to play an increasingly important role in theoretical physics, attracting interest both from string theorists and cosmologists. The string theory application of such equations that has stimulated the most interest is that of understanding the dynamics of tachyon condensation in string field theory (SFT) [1,2] and also in related toy models, such as the p-adic string [3]. The dynamical process of the tachyon field rolling from the unstable maximum of its potential to the true vacuum is expected to give a time-dependent description of the decay of unstable D-brane configurations in string theory (see [4] and references therein). A long-standing puzzle has been the observation that generic tachyon solutions in flat space do not roll monotonically from the unstable maximum of the potential to the true vacuum of the theory, as would be expected on physical grounds. Rather, the tachyon field undergoes wild oscillations at late times [5], a manifestation of the Ostrogradski instability [6] that plagues higher derivative theories (see [7] and [8] for a more modern discussion). The string theoretic interpretation of this peculiar behaviour is a subtle problem and there are a number of proposed explanations in the literature (discussed in more detail below). (See [9] for further discussion of tachyon dynamics in SFT and related theories and see [10,11] for significant progress in understanding the vacuum structure of SFT.) The wild oscillations of the tachyon field arise because the higher derivative structure of SFT allows for extra (more than two) initial data in the solutions of the field equations. These extra data can be interpreted as a tower of new physical states, in addition to the usual tachyonic excitation, which contribute to the kinetic energy with indefinite sign [13]. In the case of SFT and the p-adic string, these spurious extra states have complex mass-squared and behave as an admixture of ghost and non-ghost field (we will refer to such excitations as ghost-like or "quintom" in the text). The presence of extra ghost (or ghost-like) degrees of freedom is a rather generic problem for higher derivative theories; see [8] for a detailed discussion. In any theory with negative kinetic energy the Hamiltonian is unbounded from below and the system will become arbitrarily excited at late times for generic choices of initial data. The kind of peculiar time dependence obtained for rolling tachyon solutions in flat space is quite typical of higher derivative instabilities.
Recently, Hellerman and Schnabl [14] made significant progress in studying the dynamics of brane decay by explicitly constructing tachyon solutions that roll smoothly from the perturbative vacuum to the true vacuum as a function of light-cone time, x^+. This progress relied on turning on a linear dilaton background that violates energy conservation and provides a source of friction for the tachyon dynamics. Hellerman and Schnabl considered light-like tachyon profiles in SFT, p-adic string theory and also vacuum string field theory (VSFT), finding similar dynamics in all three cases. Despite their significant result, however, the analysis of [14] does not explicitly consider the role of initial conditions near the unstable maximum. Without a complete understanding of the initial value problem it is impossible to assess the stability of such solutions. Here we point out that the solutions presented in [14] result from the choice of one particular kind of initial condition from an infinite set of possibilities. Furthermore, we will show that for the cases of the p-adic string and VSFT the monotonic behaviour obtained by Hellerman and Schnabl is not generic in the sense that it relies on the special choice of initial conditions made. For more general choices of initial conditions the wild oscillations associated with the Ostrogradski instability come back to haunt us. On the other hand, we will show that the equation obtained in [14] for SFT in a level zero truncation leads to completely stable dynamics, even when the extra initial data are taken into account! This behaviour is confirmed by fully nonlinear numerical simulations, and provides a remarkable example of an interacting nonlocal theory that admits infinitely many initial data but whose evolution is nevertheless completely stable. These surprising dynamics arise because the friction coming from the dilaton gradient efficiently damps out the oscillations of the tachyon field.
A final surprise awaits us. Further studying the p-adic and VSFT cases, we discover that despite the instability associated with generic initial conditions, the set of initial conditions leading to stable evolution is not of measure zero. This implies that even in these cases there is an "island of stability" in the initial condition space which leads to a well-behaved evolution.
Nonlocal theories motivated by SFT have recently attracted interest also from cosmologists [15]- [36] due to a wide array of novel cosmological behaviours. Of particular interest are recent efforts to construct inflationary solutions in nonlocal theories [32]- [36]. The first realisation of nonlocal inflation was in the context of p-adic inflation [32]. This model can support inflation even when the potential is naively very steep, a behaviour that was found to be a rather generic feature of nonlocal inflation in [35] (see also [33]) and was verified using numerical analysis in [36]. Moreover, nonlocal inflation is also one of the rare models that predicts large non-gaussianity in the cosmic microwave background [33,34]. Here we point out that the nonlinear dynamics of nonlocal inflation are strikingly similar to the light-like tachyon models considered by Hellerman and Schnabl. Hence we expect that our analysis should have implications also for the stability of nonlocal cosmologies.
During the course of our investigation we uncover a number of results of general interest to the study of nonlinear theories with infinitely many derivatives. For instance, we will argue that the recipe of mixing friction and constraints on the initial conditions provides a very generic prescription for constructing stable solutions in infinite order theories. We believe that this work should be useful in guiding future searches for stable solutions in SFT (and similar theories). Moreover, our analysis clarifies a number of issues concerning the mathematical structure of nonlinear infinite order differential equations. In [12] the initial value problem for linear constant coefficient equations with infinitely many derivatives was studied and a formalism was developed to exhaustively count initial data. 1 These results were generalised to the case of variable coefficient equations (such as those that arise when studying nonlocal cosmological perturbation theory) in [37]. However, the nonlinear problem presents a number of questions which could not be addressed in [12,37]. This work represents progress towards a complete understanding of nonlinear equations with infinitely many derivatives.
The organisation of this paper is as follows. In section 2 we study the nonlinear dynamics of the light-like p-adic string tachyon in a linear dilaton background, presenting non-linear analytic solutions which fully take into account the freedom in fixing initial conditions. In section 3 we study the analogous dynamics at level zero truncation in SFT. In this case analytical solutions are not available so we must turn to a numerical analysis. In section 4 we describe our formalism for solving infinite order differential equations numerically, before applying our numerical approach to study the light-like SFT tachyon with generic initial conditions in section 5. In section 6 we comment on the stability of nonlocal cosmologies. We briefly review some proposed stringy interpretations of the wild oscillations in rolling tachyon solutions in section 7. Finally, in section 8 we conclude.
Set-Up and Equation of Motion
We begin our investigation of the dynamics of light-like tachyon condensation by studying p-adic string theory [3] coupled to a linear dilaton profile. We employ the action (2.1) proposed in [14], where p is a prime number that characterises the world-sheet coordinates, α′ = m_s⁻² (with m_s the string mass), g_p is related to the open string coupling and φ is the (dimensionless) tachyon field. Although (2.1) is not meant to be construed as a realistic model of string theory, we will spend some time studying this theory because it is analytically tractable and provides an excellent playground for studying the nonlinear dynamics of infinite order theories. Following [14] we work in terms of light-cone coordinates x^± = (x⁰ ± x¹)/√2, so that the metric takes the standard light-cone form; the D − 2 transverse coordinates y will play no role in the ensuing analysis. The dilaton background is taken to be linear in the coordinates (Eq. 2.2). For a light-like field φ = φ(x^+) one obtains the equation of motion (2.3). Note that this finite difference equation can trivially be re-written as a pseudo-differential equation (2.4). This equation admits the constant solutions φ = p^{−α′V²/[2(p−1)]} and φ = 0. The former corresponds to the unstable maximum of the potential (physically the state with a space-filling brane), while the latter is the true minimum (physically the nonperturbative vacuum with no D-brane). The action (2.1) was motivated by the striking similarity Eq. (2.3) bears to the equation of motion one derives for the tachyon in VSFT (with a light-like ansatz and linear dilaton background). Equation (2.4) is also identical (up to a re-scaling of the field and space-time coordinates) to the friction-dominated equation for the inflaton dynamics in p-adic inflation [32]. Due to these similarities we expect our analysis of (2.4) to have relevance to VSFT and p-adic inflation as well.
Exact Analytic Solution
Remarkably, equation (2.3) admits an exact nonperturbative solution, Eq. (2.5), in which F(x^+) is an arbitrary smooth periodic function; it follows that F can be decomposed into the Fourier series (2.7). It is clear that the solution contains an infinite number of free parameters {a_n, b_n} which allow us to fix the state of the solution at some initial time x^+_i (we will usually set x^+_i ≡ 0 in the subsequent analysis). Hence an infinite number of initial conditions must be specified to obtain a unique solution. As we will see, the first term in (2.7) (proportional to a_0) is associated with the tachyonic excitation, while the oscillatory terms (proportional to a_n, b_n with n > 0) are associated with the presence of ghosts in the theory, since these modes contribute with indefinite sign to the total kinetic energy. We will refer to the n ≠ 0 states as ghost-like (or quintom) modes.
Let us discuss the dynamics of the solution (2.5) in more detail. For any choice of a_n and b_n, the solution rolls from the unstable maximum in the asymptotic past x^+ → −∞. For a_0 > 0 and a_n = b_n = 0 for n > 0, the field rolls monotonically from the unstable maximum to the true vacuum of the theory. This solution corresponds to turning on only the tachyon in the initial state, and in this case our solution (2.5) matches the one derived in [14] (it is also identical to the solution obtained in [32] in a somewhat different context).
On the other hand, considering a_0 = 0 and taking any of the a_n or b_n to be non-zero leads to a wildly oscillating solution which never settles to the minimum; rather, the amplitude of the oscillations grows in an unbounded manner (similarly to the solutions of (2.1) in the background of a constant dilaton [5]). This is not unexpected, since these modes of excitation are ghost-like and lead to vacuum instability. It is extremely interesting, and indeed unexpected, however, to see what occurs if we choose to turn on some admixture of ghost excitation together with the well behaved tachyon in the initial state. Let us again take a_0 > 0. In this case we can choose the origin of time such that a_0 = 1 without loss of generality. Now let us arbitrarily take some of the a_n, b_n to be non-zero, but sufficiently small in a sense which we will quantify shortly. In this case the solution again rolls away from the unstable maximum, but as it rolls down the potential the field undergoes some small oscillations, though it still settles down to the minimum at late times. Quantitatively, condition (2.8) is sufficient to ensure that φ(x^+) → 0 as x^+ → ∞. Let us now consider increasing a_n, b_n so that (2.8) is violated. In this case the ghost excitations become dominant and φ oscillates wildly at late times. Note that these initial conditions are physically reasonable because φ is at the false vacuum in the asymptotic past x^+ → −∞. In Fig. 1 we illustrate the different behaviours of the solution (2.5), which we have just discussed, for some representative choices of a_n, b_n. Let us comment on the structure of the initial value problem for (2.4), which is closely related to the (in)stability of the theory. In [12] a formalism was presented for exhaustively counting the initial data of linear, infinite order equations (see [37] for a generalisation to variable coefficient equations). One could apply this formalism to (2.4) in a perturbative expansion around φ = p^{−α′V²/[2(p−1)]}. The solution so obtained reproduces the terms in a small-u expansion of (2.5). Using the results of [12] then proves that the solution (2.5) provides a complete solution of the initial value problem near the false vacuum. By continuity, we expect that this solution is complete also nonperturbatively.
Behaviour of the Hamiltonian
Using our exact solution (2.5) it is straightforward to explicitly construct the energy density as a function of x^+ (this is not simply a constant because the dilaton gradient violates time translation invariance). For simplicity we consider V_− = V = 0, so that Φ = −V_+ x^− and V² = 0, although we do not expect our qualitative results to depend on this restriction in any crucial way. The nontrivial components of the stress tensor are given in [14]; evaluating these on the solution (2.5), where in evaluating T_{++} we have switched variables z = p^ζ in the integration, and remembering that T_{μν} transforms as a tensor, one obtains the energy density (2.13). Since (2.11) can be written in a two-term form, it is natural to associate the second term in (2.13) with the potential energy and the first term with the kinetic energy. It can be readily verified that the kinetic term has indefinite sign and that the presence of some non-zero a_n, b_n (with n > 1) in (2.7) leads to negative contributions to the kinetic energy.

Figure 2 caption: The energy density g_p² e^Φ ρ as a function of x^+ for the same values of p and a_n, b_n used in Fig. 1. The x^+ axis is measured in units of α′V_+ and, for illustration, we have set V_+ = 1 in these units.
The behaviour of the energy density as a function of x^+ is plotted in Fig. 2. In the case a_n = b_n = 0 for n > 0 (the case considered in [14]) we have monotonic behaviour of ρ(x^+). Turning on the ghost-like n > 0 modes, the oscillations of φ lead to oscillations in ρ. If the contamination of ghost modes in the initial state is sufficiently large, these oscillations cause ρ(x^+) to cross zero, and one would expect recollapse if gravity had been included. From the previous discussion of φ(x^+) one might expect that (2.8) is a sufficient condition to keep the kinetic energy positive for all x^+. However, detailed examination of (2.13) reveals that the actual condition, Eq. (2.14), is somewhat stronger. The fact that (2.14) is a stronger constraint than (2.8) means that we can construct solutions where ρ(x^+) crosses zero without prohibiting φ(x^+) from settling down to the minimum at late times.
The Island of Stability
The instability of the theory (2.4) is not surprising. This equation admits infinitely many initial conditions and hence Ostrogradski's theorem implies that the dynamics should be generically unstable. Of course it may be possible to construct well-behaved solutions by carefully choosing initial conditions; however, one would expect the set of initial conditions leading to non-pathological evolution to be a set of measure zero. This intuition is based on the observation that the Ostrogradski Hamiltonian is linear in the unstable directions. However, this intuition is incorrect in our case: the set of initial conditions leading to sensible evolution is not measure zero. Equation (2.4) allows for a certain amount of ghost-like contamination in the initial conditions (quantified by taking a_n, b_n to be nonzero) without prohibiting the tachyon from settling down to the vacuum at late times. This observation implies that there is an "island of stability" in initial condition space. This (perhaps) surprising behaviour arises because the dilaton friction can efficiently damp out the unstable growth of the ghost modes as long as they do not contaminate the initial state too strongly. It is worth emphasising that the island of stability is not a perturbative artifact. Our solution (2.5) is exact at the fully nonlinear level and, as we have argued, it provides a full solution of the initial value problem for the theory (2.4). It is tempting to argue that a theory like (2.4) could be phenomenologically viable once the initial data are suitably restricted. Such a restriction on the data could be elegantly implemented using the contour deformation prescription advocated in [12]. We would, however, like to emphasise two caveats. First, our analysis is purely classical. Quantum mechanically a solution which starts in the island of stability might be able to tunnel to an unstable configuration, perhaps through a negative tension instanton solution similar to those constructed in [44]. Second, we should emphasise that the stability of the solution (2.5) is closely tied to the light-like ansatz φ = φ(x^+). On scales small compared to the dilaton gradient one can treat Φ as a constant and (2.1) should have all the usual instabilities [8] and wild oscillations [5].
It is interesting to compare our findings to the case of closed string tachyon condensation. Purely light-like closed string tachyon solutions are stable against small perturbations of the initial conditions (in the sense that small perturbations of the initial conditions lead to qualitatively similar behaviour at late times) [45,46]. On the other hand, [47] discussed the possibility of qualitative change of the solution under finite changes of the initial conditions.
Set-Up and Equation of Motion
We now consider light-like tachyon condensation in the more realistic context of string field theory at level zero truncation. The action derived in [14] for the SFT tachyon in the linear dilaton background (given by Eq. (2.2)) involves the open string coupling constant g_o and the constant K = 4/3^{3/2}. Imposing again a light-like ansatz on the tachyon field, φ = φ(x^+), we obtain an equation of motion that can be re-written in the pseudo-differential form (3.3). This equation admits the constant solutions φ = 0 and φ = K³. The former corresponds to the unstable maximum while the latter is the true vacuum. Note that, as in the p-adic case, equation (3.3) is identical to what one would have for the time-like SFT tachyon in a de Sitter background in the limit of very large Hubble scale, where one could take □ = −∂_t² − 3H∂_t ≅ −3H∂_t. Hence, in some (limited) sense the dilaton gradient acts like an infinite source of Hubble friction.
Perturbative Analysis
Sadly, equation (3.3) does not seem to admit a simple exact analytic solution analogous to (2.5). However, we can study the dynamics of this theory analytically using perturbation theory. Near the false vacuum we expand around φ = 0 and linearize (3.3) in the perturbation δφ. The resulting linear equation admits a single growing mode, with a_0 a constant amplitude. This solution describes the usual tachyonic instability near the false vacuum.
Near the true vacuum we expand around φ = K³ and linearize (3.3) in δφ. The resulting equation belongs to the class studied in [12]; its solutions are superpositions of modes with arbitrary complex amplitudes a_n and frequencies s_n expressed through the branches W_n of the Lambert-W function, where n runs over all integer values (both positive and negative). It is easy to verify that the s_n are complex and appear in complex conjugate pairs, so that one may choose the a_n to obtain a real-valued solution. Moreover, it can be shown that Re(s_n) < 0 for all n, so that all of the modes near φ = K³ are decaying. These decaying oscillatory modes are ghost-like and (perhaps) are related to the presence of closed string excitations near the perturbative vacuum.
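The infinite tower of decaying modes can be enumerated numerically once the characteristic (generatrix) equation is known. That equation is not reproduced above, so the sketch below uses a generic transcendental relation of the form s e^{a s} = b, whose branches are given by the Lambert-W function; the numbers a and b are made-up placeholders, and only the mechanics of scanning the branches W_n is meant to carry over.

```python
import numpy as np
from scipy.special import lambertw

# Hypothetical characteristic equation: s * exp(a*s) = b  =>  s = W_n(a*b) / a.
# 'a' and 'b' are placeholder values; the actual coefficients would follow from
# the linearization of Eq. (3.3) about phi = K^3, which is not reproduced here.
a, b = 1.0, -0.5

for n in range(-3, 4):                      # a few branches of the Lambert-W function
    s_n = lambertw(a * b, k=n) / a
    residual = s_n * np.exp(a * s_n) - b    # check that s_n really solves the equation
    print(f"n={n:+d}: s_n = {s_n.real:+.4f}{s_n.imag:+.4f}j, "
          f"|residual| = {abs(residual):.1e}")
```

The output illustrates the generic structure described in the text: the roots come in complex-conjugate pairs, and one can read off the sign of the real parts branch by branch.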
We will see shortly that, quite surprisingly, this naively linearized analysis actually gives a good qualitative picture of the fully nonlinear tachyon dynamics: generic solutions roll away from the unstable maximum and undergo damped oscillations about the minimum.
The reader may find it puzzling that perturbation theory around different critical points of the potential yields different numbers of initial conditions. This kind of mismatch (which is not unique to equation (3.3)) is an artifact of the perturbation theory employed. We discuss the resolution of this mismatch in the appendix. Note that because of this mismatch a naive perturbative analysis may lead to misleading results concerning the counting of initial data in higher derivative theories.
Numerical Methods
The naive perturbative analysis employed in subsection 3.2 is, of course, not sufficient to establish the stability of generic solutions of equation (3.3). Since we are unable to obtain nonperturbative analytical solutions of (3.3) we must turn to numerical analysis. In this section we describe our numerical methods. Although our primary interest is in equation (3.3), we apply our methods also to the p-adic string equation (2.4) as a consistency check. Our approach follows closely the formalism developed in [36] to study nonlocal cosmological models. Ref. [36] improved significantly on previous efforts to solve nonlocal equations numerically by allowing the equations of motion to be solved as an initial value problem. Since the stability of the theory is intimately tied to the structure of the initial value problem, it is only in the context of this formulation that one can sensibly address the crucial issue of stability.
Partial Differential Equation Formulation
Infinite order differential equations such as (2.4) and (3.3) are not directly amenable to standard numerical analysis. In order to solve these equations on a computer it is convenient to introduce a fictitious auxiliary direction (which we call r) and re-formulate the nonlocal ordinary differential equations (ODEs) as local partial differential equations (PDEs) in the space spanned by the coordinates x^+ and r. (This is very much analogous to the "diffusion equation" formulation that has been used previously to study nonlocal cosmologies numerically [28,29,30,36].) We start by remarking that both equations of motion, (2.4) and (3.3), can be expressed in the common form (4.1), where we use the notation of Ref. [36]. For the p-adic case, ξ² = 0, α = −α′V_+ ln p and φ = exp[−α′V² ln p/2(p − 1)]ψ, while for the SFT case we have ξ² = −1/(8 ln K), α = 2α′V_+ ln K, p = 2 and φ = K³ e^{−α∂_+}ψ.
We now introduce an auxiliary variable r and define a new field Ψ(x^+, r) = e^{−rα∂_+}ψ(x^+). Differentiating Ψ(x^+, r) with respect to r shows that it satisfies the first-order PDE ∂Ψ/∂r = −α ∂_+Ψ (Eq. 4.3), with a boundary condition (Eq. 4.4) that is determined from the equation of motion (4.1) by employing the PDE (4.3).
The nonlocal system has now been formulated in a form amenable to standard numerical methods. Once we have specified the initial data Ψ(x^+_i, r) (see the next subsection) we can proceed to numerically integrate the PDE (4.3), subject to the boundary condition (4.4), on the interval 0 ≤ r ≤ 1, x^+ > x^+_i. At the end of the calculation the solution ψ(x^+) of the original nonlocal ODE (4.1) is extracted as ψ(x^+) = Ψ(x^+, 0) (Eq. 4.5).

Note that the system (4.3, 4.4) is very similar to the diffusion-like system obtained in [36]. In fact, it is identical with t replaced by x^+ and □ replaced by ∂_+. The diffusion-like system obtained in [36] was ill-posed in the sense that high frequency initial data grew faster than lower frequency data, and hence numerical errors (which can be thought of as very high frequency noise) rapidly grew to swamp the real solution. Dealing with this numerical problem was a major part of the work in [36]. The ill-posedness arises because the PDE that was solved in [36] is second order in time and first order in the auxiliary direction. In the case at hand, however, the PDE (4.3) is only first order in light-cone time, and as a result solving this PDE in the manner described is a well-posed problem. This means that our numerical method is very stable, and the solutions produced are highly robust.
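Because (4.3) is a pure advection equation in r, for the p-adic case the whole scheme effectively reduces to iterating the boundary relation from one delay interval to the next. The sketch below exploits this, assuming the p-adic boundary relation Ψ(x⁺, 1) = Ψ(x⁺, 0)^p quoted in the consistency-check subsection (equivalently ψ(x⁺ + δ) = ψ(x⁺)^p with δ = α′V₊ ln p); the field values, mode amplitudes and grid sizes are illustrative only.

```python
import numpy as np

# Marching sketch for the light-like p-adic tachyon, assuming the scheme reduces
# to the delay relation psi(x + delta) = psi(x)**p (cf. Upsilon(x,1) = p*Upsilon(x,0)).
# x^+ is measured in units of delta = alpha' * V_+ * ln(p).
p = 3
n_per_delay = 200                    # grid points per delay interval
n_intervals = 12                     # number of delay intervals to evolve
u = np.linspace(0.0, 1.0, n_per_delay, endpoint=False)

# Initial data: a whole function's worth on the first interval (the "infinitely
# many initial conditions").  psi = 1 is the false vacuum; a small constant offset
# mimics the tachyonic mode and a cosine mimics a ghost-like admixture.
eps_tachyon, eps_ghost = 1e-3, 2e-4
psi = 1.0 - eps_tachyon - eps_ghost * np.cos(2 * np.pi * u)

history = [psi.copy()]
for _ in range(n_intervals):
    psi = psi ** p                   # advance by one delay interval
    history.append(psi.copy())

print("max of psi over the last interval:", history[-1].max())  # -> 0: true vacuum
```

Flipping the relative size of the two amplitudes (eps_ghost > eps_tachyon) makes part of the initial profile exceed ψ = 1; those points are amplified by every iteration and the trajectory develops exponentially growing excursions instead of settling at ψ = 0, mimicking the unstable behaviour described in Section 2.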
Constructing Suitable Initial Data
It remains to specify the initial data Ψ(x^+_i, r), which must be consistent with the boundary condition (4.4). In general, choosing such initial data is nontrivial. In order to proceed we construct suitable initial data Ψ(x^+_i, r) perturbatively, for ψ(x^+_i) close to some value A. There is no loss of generality because our formalism allows us to fix A arbitrarily. Once we have approximately determined Ψ(x^+_i, r), this initial configuration is then evolved numerically into the nonlinear regime. Hence our solutions are indeed fully nonlinear.
For an initial field value ψ(x^+_i) close to some (arbitrary) constant A we write Ψ(x^+, r) = A + δΨ(x^+, r), where δΨ(x^+, r) ≡ e^{−rα∂_+}δψ(x^+) satisfies the PDE (4.8); to linear order in δψ the boundary condition (4.4) becomes Eq. (4.9). We can now solve for δΨ(x^+, r) by separation of variables, taking an ansatz of the form δΨ(x^+, r) = δψ(x^+)g(r) + h(r) (Eq. 4.10). Substituting (4.10) back into (4.8) we find that the functions of the auxiliary variable r are of the form g = e^{−αω²r} and h = b(e^{αω²r} − 1)/ω², while δψ satisfies a local equation (4.11). Here ω² is any solution of the characteristic equation (4.12) and b is given by Eq. (4.13). In general the characteristic equation (4.12) will have many roots ω_n². For each root ω_n² of the characteristic equation we can obtain a solution δψ_n(x^+) of equation (4.11) which, physically, can be thought of as a particle-like excitation near ψ = A. The spectrum of roots ω_n² corresponds to the spectrum of masses for these physical excitations. For each state δψ_n(x^+) there also exists a particular solution of the PDE: δΨ_n(x^+, r) = e^{αω_n²r} δψ_n(x^+) + b_n(e^{αω_n²r} − 1)/ω_n². To construct general solutions, of course, we must superpose these modes, and we are led to the general solution δΨ(x^+, r) = Σ_n [δψ_n(x^+) e^{αω_n²r} + (b_n/ω_n²)(e^{αω_n²r} − 1)] (Eq. 4.14), where it can easily be verified that the constants b_n must now satisfy a relation which replaces (4.13) when we superpose more than one mode, but are otherwise arbitrary. When ξ² = 0, as in the p-adic case, the roots are given by a single closed-form expression, while for ξ² ≠ 0, as in the SFT case, we obtain different expressions for A ≠ 0 and for A = 0; W_n again represents the branches of the Lambert-W function. For both the p-adic and SFT cases, if A is sufficiently close to the unstable maximum then there exists a real-mass mode with αω² > 0. This mode corresponds to the usual tachyon and reflects the instability of the false vacuum. (There is also a decaying mode with real mass which drops out of the spectrum very quickly.) In addition to this tachyonic state we also have an infinite tower of states with complex mass-squared. The kinetic energy associated with these states has indefinite sign [13] and hence these states are ghost-like. More precisely, each complex-mass state behaves as an admixture of a ghost field and a non-ghost field [36] (hence we use "ghost-like" rather than "ghost"). In [36] such states were described as "quintoms" and we will occasionally use this term interchangeably with ghost-like. The dependence of the mass spectrum on the initial value A is illustrated in Figs. 3 and 4. There we show the effective potential of the models under study, together with the regions where the characteristic equation has at least one real root and the regions where all the roots are complex (which can be compared to the time-like cosmological cases in Ref. [48]). We can see from the preceding discussion and from Figs. 3 and 4 that the characteristic equation, for a given point A, has at most two real roots and an infinite number of complex roots. Hence a particular solution δΨ_n(x^+, r) is, in general, complex-valued. For each complex root, its complex conjugate is also a root; hence δΨ*_n is also a particular solution of the PDE. Moreover, by suitably combining δΨ_n and δΨ*_n, we can construct real-valued solutions. With this in mind, we are now ready to specify suitable initial data for our simulations.
We are particularly interested in the evolution which follows when the field is initially in a region close to the maximum of the effective potential. It can readily be verified that an acceptable initial profile in the p-adic case, where the maximum sits at ψ = 1, is built from terms of the form a_n e^{αω²_{n,R} r} cos(αω²_{n,I} r + θ_n) (Eq. 4.19), and an initial profile for the SFT case, where the maximum is at ψ = 0, can be written analogously (Eq. 4.20). Here θ_n is an arbitrary phase, the indices R and I denote the real and imaginary parts, and ε is a small number.
In equations (4.19) and (4.20) the free coefficients a_n allow us to fix an infinite number of initial conditions. Physically, each a_n parameterizes the amount of the state with mass-squared ω_n² that is present in the initial admixture. The approximate initial functions (4.19) and (4.20) can now be numerically evolved using (4.3) and (4.4) into the fully nonlinear regime.
For simplicity, we have set the arbitrary phases θ_n to zero in our examples. We have verified that the inclusion of θ_n ≠ 0 does not qualitatively change the behaviour of our solutions. In particular, the inclusion of these phases does not change our results concerning the nonlinear stability of generic SFT solutions.
Consistency Check Using the p-adic Theory
Although our motivation for turning to numerical analysis was equation (3.3), we can also use this method as a consistency check on our previous results for the p-adic string (section 2). First, let us verify that we can reproduce the analytic solution (2.5) using the PDE formulation. To this end, we introduce another field Υ(x^+, r) related to Ψ(x^+, r) by Υ = ln Ψ. In terms of this new field the boundary condition (4.4) is linear: Υ(x^+, 1) = pΥ(x^+, 0). Now we can solve the diffusion-like equation (4.3) exactly by separation of variables, Υ = f(x^+)g(r), with the roots αω_n² of the characteristic equation determined by the boundary condition. Putting all these results together and using (4.5) leads to the same solution for φ(x^+) found in Eq. (2.5). This confirms that the PDE method is consistent with other methods of solving non-local equations of motion. Note that for the SFT case, the boundary condition (4.4) cannot be written as a linear relation, and therefore the method of separation of variables cannot be employed to obtain a solution to the PDE. This is simply a reflection of the statement we made previously that we cannot find an easy analytic solution to Eq. (3.3). As a final consistency check we compare numerical solutions of (4.3) and (4.4), using the approach of subsection 4.2, to the exact analytic solutions obtained in section 2. In Fig. 5 we plot the light-cone time evolution of the p-adic tachyon for a combination of the real field (n = 0) with an oscillatory mode, which we refer to as a quintom in line with [36]. We choose the n = 1 mode. We see that the real field helps the quintom to decay and to remain harmless for the entire length of the subsequent evolution, assuming that the amount of quintom present initially does not violate the condition (2.8) which we derived analytically above. Moreover, the numerical solution is in excellent agreement with the analytical solution (2.5), a further verification of both the interesting behaviour of that solution and of our numerical method.
Numerical Solutions
Having described in detail our numerical methods in section 4, we now wish to apply them to study the nonlinear dynamics of light-like tachyon condensation in SFT for arbitrary initial data. We proceed by numerically integrating the PDE (4.3) using the nonlinear boundary conditions (4.4), starting from initial data of the form (4.19)-(4.20) determined by perturbing about the hill-top, as described above. We stress that, although the initial data are fixed by considering a linearization of the equations of motion, the numerical solution rapidly evolves out of this linear regime and behaves in a fully nonlinear manner.
First, we verify that picking initial conditions such that the real-field tachyon is present initially leads to the well-behaved rolling solution constructed in [14]. Figure 6 presents the rolling solution produced by our numerical method, which follows from picking only the n = 0 initial condition. As can be seen, this solution has exactly the behaviour expected.
We now focus on the fate of the ghost-like excitations. In Fig. 7 we show that the quintoms decay on their own in the SFT case without need of the extra contribution from the real field. Starting the evolution with only the n = 1 quintom present, we see that this excitation first decays towards the unstable maximum, and then at late times a real field is formed which evolves towards the minimum of the effective potential performing damped oscillations (assuming, of course, that the real field does not start to roll down the unbounded side of the potential). The initial decay is in line with the expectations drawn from a perturbative analysis close to the hill-top, then the subsequent roll to the minimum is the natural consequence of the maximum being an unstable point which the real field tachyon naturally tries to roll away from.
We have explored a wide range of other initial conditions for the SFT case, and other choices of n, and the situation is always the same: any ghost-like excitation present initially decays, as we expected from a perturbative analysis, and at late times the rolling solution is reached and the field decays to the minimum of the potential. Indeed even if we fix our initial conditions near to the minimum of the potential by perturbing about a point near to the minimum (where the roots to the characteristic equation are all complex and we can only have ghost-like states present as an initial condition), the behaviour that is always found is a rapid decay to the minimum.
Ghosts, Stability and Friction
The dynamics of equation (3.3) admit infinitely many initial conditions and, by Ostrogradski's theorem, the Hamiltonian is unbounded from below. However, the dilaton gradient violates time translation invariance, so that intuition based on Hamiltonian dynamics gives a completely inaccurate picture of the actual behaviour of the tachyon. Before getting too excited, we should remember that this stability relies crucially on the ansatz φ = φ(x^+) and that on small scales (compared to the dilaton gradient) generic field profiles should display the usual unstable behaviour. With this caveat in mind, however, we believe that our findings are quite significant. Equation (3.3) provides an explicit example of an interacting nonlocal theory that admits infinitely many initial conditions but is completely stable. It is our hope that this toy example will provide hints into how to construct more realistic stable theories with infinitely many derivatives. It is worth noting that our discussion of stability here refers only to the kinds of erratic time evolution that are associated with the Ostrogradski instability. Of course, the potential for the theory (3.3) is unbounded from below and hence there is an instability associated with rolling down that unbounded direction. This instability would be present also in a local field theory with the same potential and is completely unrelated to the kinds of higher derivative instabilities that we are interested in. The unboundedness of the potential is thought to be physically associated with the closed string tachyon.
Stability of Nonlocal Cosmologies
Owing to the close similarity between the equations describing light-like tachyon condensation and the equations describing nonlocal cosmologies (such as p-adic inflation) one may wonder whether the kinds of phenomena discussed above occur also in the latter case. It has been observed previously that it is possible to obtain cosmological solutions where the tachyon settles down to the true vacuum at late times [30,36]. However, previous studies have not considered how generic such solutions are, and indeed other wildly oscillatory solutions have also been produced [36].
We have numerically studied both p-adic and SFT cosmologies for a variety of initial conditions describing a mixture of tachyon and ghost-like excitation in the initial state. Our preliminary results suggest that, at least for certain parameter choices, the set of initial conditions leading to non-pathological evolution is not measure zero. Hence, we are led to suspect that the island of stability is a fairly general property of infinite order theories in the presence of friction. This suggests that the combination of friction and constraints on the initial data provides a very general recipe for obtaining stable solutions in higher derivative theories. This observation should be helpful to guide future attempts to construct physically sensible time-dependent solutions in SFT.
Before we leave this section a few comments are in order. As discussed previously, in the cosmological context our numerical methods are much less robust than in the light-like case. Although our preliminary efforts suggest that the island of stability exists in the cosmological context, it is difficult to make conclusive statements because our finite numerical accuracy prevents us from following the evolution to arbitrarily late times. Hence, although our solutions appear to settle down to the minimum of the potential, we cannot (yet) rule out the possibility that instabilities re-appear at very late times. (Note that this problem does not afflict our light-like solutions discussed previously.) We intend to return to this issue in future work.
Are Wild Oscillations Necessarily Catastrophic?
Throughout this paper we have discussed the nonlinear dynamics of the infinite order equations that describe tachyon dynamics in string theory. Our focus has been on constructing solutions that are not afflicted by wild oscillatory (or otherwise unstable) behaviour at late times. However, we have not considered the question of whether such instabilities are necessarily catastrophic. If a field theory such as (2.1) or (3.1) were the complete picture the answer might be straightforward; however, the problem is rather more subtle in the context of string field theory. The stress tensor at a point is not a truly gauge invariant object since neither the interactions of off-shell closed string vertex operators, nor the restriction of the boundary state zero mode to particular values, preserves BRST invariance [14]. Since it is not trivial to relate the tachyon φ in the level truncation to physical observables, it is not entirely clear if the wild oscillations are problematic. Several different explanations have been proposed in the SFT literature. We briefly discuss these below.
Erler and Gross argued, using a clever choice of basis (the light-cone basis), that the full SFT is first order in a single null direction (say x + ) and nonlocal in all the remaining directions [49]. The initial value formulation in this case is somewhat more complicated than in nondegenerate second order systems (where one specifies the coordinates and velocities at t = 0). In the Erler and Gross formulation one must provide information about the field at x + = 0 and also on the surface x − = c for all x + (here x − is the other null direction orthogonal to x + ). In [49] it was argued that one can take c → −∞ and demand that the field vanish in this limit. Although this requirement is physically sensible, it is not clear if it represents an undue restriction on the free coefficients of the solution. Note also that practical computations in the light-cone basis are complicated by the re-appearance of spurious negative energy states as artifacts of the level truncation [50].
Coletti et al. [51] argued that the wild oscillations of the tachyon field can be eliminated by a nonlocal transformation of the form φ(t) = f (∂ t )T (t) + · · · which takes the cubic open string field theory action to the analogous boundary string field theory action. The generatrix f (s) in this case has both poles and zeroes in the complex s-plane and hence one might worry about changing the number of degrees of freedom in the solution (see, for example, [12]).
Finally, Kiermaier et al. [52] constructed a class of BRST-invariant closed string states for any classical solution of open string field theory. This state can be used to provide gauge-invariant observables. Using this state, Kiermaier et al. argue that the wildly oscillatory rolling tachyon solution actually describes the regular closed string physics studied in [4]. The peculiar time evolution is interpreted as arising because the regular physics of the closed string sector is being described in terms of open string degrees of freedom.
It is sometimes argued that the ghost-like modes present in level-truncated SFT and p-adic string theory are artefacts. In this case one expects that more realistic dynamics will be obtained by projecting these states out, presumably through some prescription for choosing initial data (see [12] for an elegant implementation). Our analysis of the island of stability can be thought of as elucidating the minimal constraint on the initial data necessary to obtain sensible evolution. The idea of projecting out ghost excitations from higher derivative theories using some boundary conditions is not new. Qualitatively similar prescriptions have been employed in the finite derivative case; see [53] and [54]- [56].
Conclusions
Using a combination of analytical and numerical methods we have investigated the nonperturbative stability of light-like rolling tachyon solutions in the presence of a linear dilaton background. We have uncovered some potentially surprising results. We have seen that the addition of friction can drastically soften the effects of higher derivative instabilities. In the case of the p-adic string (and also VSFT) we have found an island of stability in initial condition space. For initial conditions within this island the tachyon dynamics are non-pathological. Interestingly, the island of stability is not a set of measure zero nor is it an artifact of perturbation theory. We have found qualitatively similar behaviour in the cosmological context and have speculated that the recipe of mixing friction with some constraints on the initial data provides a general prescription for constructing sensible (particular) solutions in nonlocal theories.
In the case of SFT at level zero truncation the effect of the friction on the higher derivative instability is even more dramatic. In this case the unstable growth associated with the ghost-like modes is completely damped out by the dilaton gradient and the resulting tachyon dynamics are non-pathological for generic choices of initial conditions! This provides an invaluable example of an interacting nonlocal theory derived from string theory that is completely stable. A caveat is that this stability relies on the ansatz φ = φ(x^+) and, as we have argued, is not expected to persist in the case where φ depends on all space-time coordinates. We do not believe that this caveat should be viewed as a serious limitation on our analysis, since the significance of our results does not lie in the claim that light-like profiles are the most realistic rolling tachyon solutions. Rather, our results are important because we have uncovered a previously unexpected loop-hole in Ostrogradski's theorem.
During the course of our investigation we have developed many general techniques for studying nonlocal theories at the fully nonlinear level. It is our hope that these results/techniques will lead to the discovery of more realistic examples of stable infinite order theories.
Acknowledgments
This work was supported in part by NSERC. DJM is supported by the Centre for Theoretical Cosmology, Cambridge, NJN is supported by Deutsche Forschungsgemeinschaft, TRR33 and PR is supported by an NSERC USRA. We are grateful to T. Biswas, J. Cline, S. Hellerman, N. Kamran, M. Schnabl and R. Woodard for helpful discussions and correspondence.
APPENDIX: The Perturbative Mis-Match
In subsection 3.2 we discussed the perturbative construction of solutions of equation (3.3) about the constant solutions φ = 0, K^3. There we were presented with the curious puzzle that, taking this result seriously, one would infer different numbers of initial conditions about these two critical points. This mis-match is an artifact of the perturbation theory that was employed. To see this, we reconsider solving equation (3.2) in a linearized expansion about an arbitrary value φ = φ_0. We could simply extract the result from our analysis in subsection 4.2. However, the formalism presented here does not rely on the PDE formulation and it may be of interest to show how to perturb about an arbitrary φ_0 without introducing the auxiliary direction r.
We begin in a very general context and specialize to equation (3.3) at the end. Consider a nonlocal equation of the form F(D)φ = V'[φ] (A-1), where F(z) is an analytic function of the complex variable z that can be represented by a convergent series expansion F(z) = Σ_{n=0}^∞ a_n z^n (A-2), and D is some linear differential operator which satisfies D(const) = 0. We wish to solve (A-1) near some constant value φ = φ_0, not necessarily a solution of (A-1). Writing φ = φ_0 + δφ (A-3) and linearizing in δφ we have F(D)δφ = V''[φ_0] δφ + [V'[φ_0] − F(0)φ_0] (A-4). If φ_0 were a solution of (A-1) then the second term in the square braces would vanish. In this case one could construct the solution δφ by provisionally taking δφ to be an eigenfunction of D (that is, assuming Dδφ = −ω^2 δφ). This is precisely the approach that was adopted in [32]-[34] and [37]. However, we can easily generalize this approach to more general values of φ_0 by provisionally taking δφ to be a solution of an inhomogeneous version of this eigenvalue equation, Eq. (A-5). It is straightforward to show that the action of the pseudo-differential operator F(D) on a function satisfying (A-5) is given by (A-6). Plugging (A-6) into (A-4) we find that the solutions of (A-5) are also solutions of the fully nonlocal equation (A-4) as long as ω, b are chosen to satisfy the resulting algebraic equations. Note that this method may fail if ω^2 = 0 is a solution of (A-7). This approach is identical to the formalism employed in [36] and also in subsection 4.2. Now we specialize to equation (3.3). We take D = ∂_+, F(z) = (α'V_+ z − 1)K^{−2α'V_+ z} and V'[φ] = −φ^2/K^3. After some straightforward manipulations we find that the mode functions δφ_n take the form δφ_n(x^+) = a_n e^{s_n x^+} + b_n/s_n (A-11), where the a_n are arbitrary constants and α'V_+ s_n = 1 − W_n[(4 ln K / K) φ_0] / (2 ln K) (A-12) (where W_n denotes the branches of the Lambert-W function and n runs over all integer values). Summing over the modes and making use of equation (A-10) we have δφ(x^+) = Σ_n δφ_n = Σ_n a_n e^{s_n x^+} − φ_0 (1 − φ_0/K^3)/(1 − 2φ_0/K^3) (A-13). For φ_0 = K^3 these expressions reproduce equations (3.9) and (3.10). Let us consider the case φ_0 = 0, where a naive perturbative analysis yields only a single growing mode. Taking the limit φ_0 → 0 in equation (A-12) we find that s_0 → 1/(α'V_+) while Re(s_n) → −∞ for all n ≠ 0. Hence, the infinite tower of ghost-like modes decays very quickly near the false vacuum. In the limit φ_0 → 0 these spurious states all go to zero infinitely fast and completely drop out of the spectrum. Thus, the analysis in this appendix is consistent with equation (3.6) and the puzzle of the mis-match of initial data counting is resolved. A similar mis-match would occur if we had studied the p-adic equation (2.4) in a perturbative expansion about the constant solutions φ = 0 and φ = p^{−α'V^2/[2(p−1)]}. There one finds infinitely many solutions about the false vacuum and no non-trivial solutions at the true vacuum. Again, the mis-match is an artifact, as can be seen by examining the fully nonperturbative solution (2.5). From this solution it is clear that the extra states become nonperturbative near φ = 0.
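The branch structure of the Lambert-W function that controls Eq. (A-12) is easy to inspect numerically. The snippet below is an illustration we add here, using SciPy; the small arguments stand in for (4 ln K / K) φ_0 as φ_0 → 0 and are otherwise arbitrary placeholders. It shows that the principal branch W_0 stays close to zero while the real parts of the other branches diverge logarithmically, which is the mechanism that separates the single smooth mode from the infinite tower of spurious modes near the false vacuum.

```python
import numpy as np
from scipy.special import lambertw

# Small positive arguments standing in for (4 ln K / K) * phi0 as phi0 -> 0;
# the numbers themselves are arbitrary placeholders.
for xarg in (1e-1, 1e-3, 1e-6, 1e-9):
    w0 = lambertw(xarg, k=0)     # principal branch: stays near zero
    w1 = lambertw(xarg, k=1)     # non-principal branches: Re(W) ~ ln(xarg) -> -infinity
    wm1 = lambertw(xarg, k=-1)
    print(f"x={xarg:7.1e}  W_0={w0.real:+.3e}  "
          f"Re W_1={w1.real:+8.2f}  Re W_-1={wm1.real:+8.2f}")
```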
On the convergence of Kikuchi's natural iteration method
In this article we investigate the convergence of the natural iteration method, a numerical procedure widely employed in the statistical mechanics of lattice systems to minimize Kikuchi's cluster variational free energies. We discuss a sufficient condition for convergence, based on the coefficients of the cluster entropy expansion, which depend on the lattice geometry. We also show that such a condition is satisfied for many lattices usually studied in applications. Finally, we consider a recently proposed general method for the minimization of non-convex functionals, showing that the natural iteration method turns out to be a particular case of that method.
I. INTRODUCTION
The cluster variation method (CVM) is a powerful approximate technique for the statistical mechanics of lattice systems, which can improve the simple mean field and Bethe theories, by taking into account correlations on larger and larger distances. It was first proposed by Kikuchi in 1951 [1] as an approximate evaluation of the thermodynamic weight of the system, and since then it has been reformulated several times [2, 3,4], mainly to clarify the nature of the approximation and to simplify the way to work it out. Quite a recent formulation [4] shows that the CVM consists in a truncation of the cumulant entropy expansion. Each cumulant is associated to a cluster of sites and the truncation is justified by the expected rapid vanishing of the cumulants upon increasing the cluster size. In this way the CVM can be viewed as a hierarchy of approximations, each one defined by the set of maximal clusters retained in the cumulant expansion, usually denoted as basic clusters.
If pairs of nearest neighbor sites are chosen as basic clusters, the CVM coincides with the Bethe approximation. Generally, using larger basic clusters improves the approximation, even if the convergence of the cumulant expansion to the exact entropy has been rigorously proved just in a few cases [3,5].
Due to its relative simplicity and accuracy, the CVM is widely used in all kinds of statistical mechanical applications, to determine both thermodynamic properties [6,7,8] and phase diagrams [9,10,11,12]. The CVM results generally compare well with those of Monte Carlo simulations [10,11,13] as well as experimental ones [6,8,9,13,14,15]. Making use of suitable series of CVM approximations, it is also possible to extrapolate quite accurate estimates of critical exponents [16,17,18,19]. Recently, it has been shown that the belief propagation algorithm, an approximate method for statistical inference employed for many technologically relevant problems (image [20] and signal processing [21], decoding of error-correcting codes [21,22], machine learning [22]), is actually equivalent to the minimization of a Bethe free energy for statistical mechanical models defined on graphs [23]. This fact has opened new research areas, both in the application of the CVM as an improved approximation [23] and in the analysis of efficient minimization algorithms [24,25,26], mainly due to the fact that belief propagation sometimes fails to converge.
Let us introduce the problem from the CVM point of view. Once the approximate entropy (and hence free energy) for the chosen set of basic clusters has been obtained, one has to face the problem of minimizing a complicated non-convex functional in the basic cluster probability distributions. An algorithm for minimizing such a functional has been proposed by Kikuchi himself [27], and is known as natural iteration method (NIM). A proof of convergence of this algorithm has been given in the original paper, essentially for the Bethe approximation, which can be easily extended to the Husimi tree [28]. Nevertheless, the range of convergent cases seems to be much wider, so that the natural iteration method might be interesting also for the non conventional applications mentioned above.
In this article we analyze a sufficient condition for the convergence of the NIM. Such a condition is a requirement on the coefficients of the cluster entropy expansion (obtained from the cumulant expansion through a Möbius inversion [4]) and is shown to hold for quite a large variety of approximations that are generally used to treat thermodynamic systems. Namely, we consider: a set of "plaquette" approximations on different lattices [8,12,27,29], Kikuchi's B and C hierarchies for the square [30] and triangular [31] lattices, the cube approximation for the simple cubic lattice. As far as the latter case is concerned, we actually analyze a generic hypercube approximation on the hypercubic lattice in d dimensions, showing that the sufficient condition holds for d ≤ 3. Finally we take into account a recently proposed algorithm for the minimization of the CVM free energy [25], which allows several alternatives, depending on the possibility of upperbounding the free energy with convex (easy to be minimized) functions. We show that one of the best choices is actually equivalent to the natural iteration method.
II. THE CVM FREE ENERGY
As mentioned in the Introduction, the approximate CVM entropy can be written as a linear combination of cluster entropies [4], S = Σ_α a_α S_α, where the sum index α runs over all basic clusters and their subclusters. We shall always consider clusters in this set only. The cluster entropies are defined as usual, S_α = −Σ_{x_α} p_α(x_α) ln p_α(x_α), where p_α(x_α) denotes the probability of the configuration x_α for the cluster α, the sum runs over all possible configurations, and the Boltzmann constant k is set to 1 (entropy is measured in natural units). The coefficients can be determined recursively, starting from basic clusters down to subclusters, making use of the following property [4]: Σ_{α'⊇α} a_{α'} = 1 ∀α.
Due to the fact that a basic cluster γ never contains (by definition) another basic cluster, from the above formula we immediately get a_γ = 1 ∀γ. Here and in the following, γ denotes basic clusters. As far as the Hamiltonian is concerned, we assume that it can be written as a sum of contributions h_γ from all basic clusters, H = Σ_γ h_γ(x_γ), where of course x_γ denote basic cluster configurations. Let us decide to write the whole CVM free energy as a sum over basic clusters, splitting entropy contributions from each subcluster among all basic clusters that contain it (in equal parts). Assuming energies normalized to kT, we obtain the free energy functional of Eqs. (5) and (6), namely F[p] = Σ_γ { Σ_{x_γ} p_γ(x_γ)[h_γ(x_γ) + ln p_γ(x_γ)] + Σ_{α⊂γ} b_α Σ_{x_α} p_γ(x_α) ln p_γ(x_α) }, with p_γ(x_α) = Σ_{x_{γ\α}} p_γ(x_γ). Let us notice that we have defined new coefficients b_α ≡ a_α/c_α, where c_α denotes the number of basic clusters that contain α, and we have expressed subcluster probability distributions as marginals of basic cluster distributions, according to Eq. (6) (the sum runs over configurations x_{γ\α} of the basic cluster γ minus the subcluster α).
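As an illustration of how the a and b coefficients follow from the recursion above, the short script below evaluates them for the square-plaquette approximation on the square lattice. This is a sketch added here for concreteness, not code from the literature; the containment counts used (each site belongs to 4 plaquettes and 4 nearest-neighbor pairs, each pair to 2 plaquettes) are the standard square-lattice values.

```python
# Cluster types, ordered from the basic cluster downwards.
clusters = ["square", "pair", "site"]

# n_in[a][b]: number of clusters of type b containing a given cluster of type a
# (square-lattice counts, used here only as an illustration).
n_in = {
    "square": {},
    "pair":   {"square": 2},
    "site":   {"square": 4, "pair": 4},
}

a = {}
for c in clusters:                       # basic cluster first, then subclusters
    a[c] = 1 - sum(n * a[sup] for sup, n in n_in[c].items())

# c_alpha = number of basic clusters sharing the subcluster; b_alpha = a_alpha / c_alpha
c_basic = {"square": 1, "pair": 2, "site": 4}
b = {k: a[k] / c_basic[k] for k in clusters}

print("a:", a)   # expected: square 1, pair -1, site 1
print("b:", b)   # expected: square 1.0, pair -0.5, site 0.25
```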
III. THE NATURAL ITERATION METHOD
In the above formulation, basic cluster distributions {p γ (x γ )} are the variational parameters of the free energy (which is denoted in short by F [p]), and the thermodynamic equilibrium state can be determined by minimization with respect to these parameters with suitable normalization and compatibility constraints. By compatibility we mean of course that marginal distributions p γ (x α ) must be the same for all basic clusters γ ⊃ α. Let us notice that, for most thermodynamic applications, one usually makes some homogeneity assumption on the system, and this generally reduces the problem to only one or few different basic cluster distributions. Compatibility constraints may be still necessary to impose the required symmetry. We go on with the complete formulation, without loss of generality. The important thing is that in any case we deal with constraints that are linear in the probability distributions (compatibility), possibly with an additive constant (unit) term (normalization).
According to the Lagrange method, we transform the constrained minimum problem with respect to {p_γ(x_γ)} into a free minimum problem for an extended functional which depends on additional parameters (Lagrange multipliers). Due to linearity, the extended functional F̃ can be written in the form of Eq. (7), namely as F[p] plus a term linear in the basic cluster distributions, where {λ_γ(x_γ)} are the Lagrange multipliers. Of course, {λ_γ(x_γ)} are not all independent variables, but the internal relationships are system dependent, and we do not analyze them.
Let us only notice, for future use, that the difference between the new functional and the original one (the last term in Eq. (7)) is actually independent of the {p γ (x γ )} distributions, provided they satisfy the required constraints.
The derivatives of F̃ with respect to p_γ(x_γ) involve h_γ(x_γ), ln p_γ(x_γ), the subcluster terms b_α ln p_γ(x_α) and the Lagrange multipliers, plus an additive constant which is irrelevant and which we can absorb into the Lagrange multipliers.
Setting the above derivatives to zero resolves stationarization with respect to the probability distributions. The natural iteration method consists in rewriting such equations in a fixed-point form, Eq. (9), that is p̃_γ(x_γ) ∝ exp[−h_γ(x_γ) − Σ_{α⊂γ} b_α ln p_γ(x_α) + λ_γ(x_γ)], and then solving them by simple iteration: at every step a new estimate p̃_γ of each basic cluster probability distribution is computed from the current one. The Lagrange multipliers must be determined at each iteration, so that p̃_γ(x_γ) also satisfies the required constraints. This job can be done in different ways by a nested procedure (inner loop), for instance a Newton-Raphson method or a suitable fixed-point method [31,32]. In this paper we do not deal with the determination of Lagrange multipliers, but only focus on the convergence of the main loop, working with the free energy of Eqs. (5) and (6). Taking the logarithm of both sides of Eq. (9), we can rewrite the NIM equations in two different ways, isolating either ln p̃_γ or −h_γ. Let us replace the former into F[p̃] and the latter into F[p]. Remembering that the probability distributions satisfy the constraints, whence the last term on the right-hand side of Eq. (7) depends on the Lagrange multipliers only, we obtain Eq. (12) for the free energy difference F[p̃] − F[p]. Let us consider the inequality log ξ ≤ ξ − 1, observing that equality holds if and only if ξ = 1.
By applying this inequality to the first logarithm (the one involving basic cluster probability distributions) in Eq. (12), and taking into account that the distributions are normalized, we obtain the bound of Eq. (13), where equality holds if and only if p̃_γ(x_γ) = p_γ(x_γ) ∀γ, x_γ. The same result could be obtained by observing that the upperbounded terms actually coincide with (minus) the Kullback-Leibler distances between the probability distributions p_γ(x_γ) and p̃_γ(x_γ). If all subcluster coefficients b_α were negative, we could apply the same argument to all terms, and the upperbound would be zero. Such a situation occurs for instance in the Bethe [27] and Husimi tree [28] approximations, and the proof of convergence would be complete. In the general case we have to require a condition on the b_α coefficients. The basic idea is to "couple" smaller cluster terms with a positive coefficient to larger cluster terms with a negative coefficient, yielding a sum of "negative" Kullback-Leibler distances (some between conditional probability distributions), which can then be upperbounded by zero. The details are given in the following.
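To make the Bethe remark concrete, the following self-contained sketch (ours, with placeholder couplings) runs the NIM for the homogeneous pair approximation of an Ising model with coordination number q, where the only subcluster coefficient is b_1 = (1 − q)/q < 0 and the fixed-point update of Eq. (9) reduces, up to normalization, to p̃_2(s, s') ∝ exp[−ε(s, s')] [p_1(s) p_1(s')]^{(q−1)/q}. The script also checks at every step that the variational free energy never increases.

```python
import numpy as np

q, beta_J, beta_h = 4, 0.6, 0.05     # coordination number and reduced couplings (placeholders)
S = np.array([-1.0, 1.0])

# Pair "energies" normalized to kT, the field being shared among the q pairs of each site.
eps = np.array([[-beta_J * s * t - (beta_h / q) * (s + t) for t in S] for s in S])

def marginal(p2):
    return p2.sum(axis=1)            # equals p2.sum(axis=0) here, since eps is symmetric

def free_energy(p2):
    p1 = marginal(p2)
    return (q / 2.0) * np.sum(p2 * (eps + np.log(p2))) + (1 - q) * np.sum(p1 * np.log(p1))

# Natural iteration: p2_new ~ exp(-eps) * (p1 p1)^((q-1)/q), normalized.
p2 = np.full((2, 2), 0.25)           # start from the uniform pair distribution
f_old = free_energy(p2)
for it in range(2000):
    p1 = marginal(p2)
    w = np.exp(-eps) * np.outer(p1, p1) ** ((q - 1.0) / q)
    p2_new = w / w.sum()
    f_new = free_energy(p2_new)
    assert f_new <= f_old + 1e-12, "free energy increased"
    if np.max(np.abs(p2_new - p2)) < 1e-10:
        break
    p2, f_old = p2_new, f_new

print(f"converged after {it + 1} iterations, f = {f_new:.6f}, "
      f"magnetization = {marginal(p2) @ S:.4f}")
```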
Theorem (sufficient condition for convergence): Let {b_{α−|α+}} be a set of non-negative coefficients (allocation coefficients), defined for each pair of subclusters α−, α+ such that b_{α−} < 0, b_{α+} > 0 and α− ⊃ α+. If, for all basic clusters γ, the positive coefficients can be fully allocated to the negative ones, namely b_{α+} ≤ Σ_{α−⊃α+} b_{α−|α+} (14) and Σ_{α+⊂α−} b_{α−|α+} ≤ −b_{α−} (15), with the sums running over subclusters of the same basic cluster γ, then F[p̃] ≤ F[p] (16), with equality holding if and only if p̃_γ = p_γ for all γ (17). The meaning of Eq. (17) is that it prevents the dynamical system defined by the NIM equations from having limit cycles at constant free energy, which could occur in principle.
Proof: Let us consider the right hand side of Eq. (13) and split the sum over subclusters α ⊂ γ in two sums over subclusters α + , α − with positive or negative coefficients respectively.
Positive coefficients b_{α+} can be bounded by means of Eq. (14), while, according to Eq. (15), negative coefficients can be written as b_{α−} = −Σ_{α+⊂α−} b_{α−|α+} − d_{α−} for certain d_{α−} ≥ 0. Defining, for each α− ⊃ α+, the conditional probability distributions p_γ(x_{α−}|x_{α+}) = p_γ(x_{α−})/p_γ(x_{α+}), after some simple manipulations we obtain a sum of logarithmic terms whose coefficients are all positive. The logarithm inequality log ξ ≤ ξ − 1 can now be applied to all terms of this expression, because all coefficients are positive (or, equivalently, we get a sum of Kullback-Leibler terms), and the zero upperbound of Eq. (16) is obtained. As previously mentioned, Eq. (17) is proved by the fact that the logarithm inequality holds as an equality if and only if ξ = 1, i.e., the Kullback-Leibler distance between two probability distributions is zero if and only if the two distributions are equal.
V. SOME PARTICULAR CASES
In this section we consider some particular choices of basic clusters, that is, some particular CVM approximations for regular lattices on which several model systems are defined.
A. "Plaquette" approximations By "plaquette" approximations we mean a class of approximations in which basic clusters are of a unique type (which we denote as plaquette, for example a square on a square lattice), while subclusters with non zero coefficients are only single sites and nearest neighbor pairs.
Let us denote such clusters by 1 and 2 respectively. According to the notation introduced in Eq. (1), and remembering that basic clusters (plaquettes) have unit a-coefficient, we can write a_2 + c_2 = 1 (21). Then, we have to impose the sufficient conditions on the coefficients, Eqs. (14) and (15).
We then have to couple each site to the pairs that contain it and are contained in a given plaquette. Let us adopt the strategy of splitting the site coefficient among such pairs in equal parts, so that, b_{2|1} being the only allocation coefficient and r the number of such pairs, Eqs. (14) and (15) become b_1 ≤ r b_{2|1} and 2 b_{2|1} ≤ −b_2, respectively. The allocation coefficient may be easily eliminated, yielding the single condition 2 b_1 ≤ −r b_2 (27). It is possible to show that the r parameter also depends on c_1, c_2, q only. Let us multiply the number q of nearest-neighbor pairs sharing a site by the number c_2 of plaquettes sharing a pair. It is easy to realize that in this way we have overcounted r times the number c_1 of plaquettes sharing the given site, i.e., q c_2 = r c_1. With the above manipulation, the condition (27) can be rewritten as q(c_2 − 1) ≤ 2(c_1 − 1).
In this form we can easily verify its validity, which is done in Tab. I for a set of typical plaquette approximations. We have considered: the 2d square, triangular, and honeycomb lattices with a 4-site square [12,29], a 3-site triangle [29], and an elementary hexagon as basic cluster respectively, the simple cubic (sc) lattice with a 4-site square [29] as basic cluster, and the face-centered cubic (fcc) lattice with a 3-site triangle [29] or a 4-site tetrahedron [8,27] as basic cluster.
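A few lines of code reproduce this check. The geometric counts below (q, c_2, c_1) are our own reading of the standard lattice geometries, given here only for illustration; Tab. I remains the authoritative source for these values.

```python
# (lattice, basic cluster): (q, c2, c1) -- illustrative geometric counts.
cases = {
    ("square",       "square"):      (4, 2, 4),
    ("triangular",   "triangle"):    (6, 2, 6),
    ("honeycomb",    "hexagon"):     (3, 2, 3),
    ("simple cubic", "square"):      (6, 4, 12),
    ("fcc",          "triangle"):    (12, 4, 24),
    ("fcc",          "tetrahedron"): (12, 2, 8),
}

for (lattice, plaquette), (q, c2, c1) in cases.items():
    lhs, rhs = q * (c2 - 1), 2 * (c1 - 1)
    ok = "satisfied" if lhs <= rhs else "violated"
    print(f"{lattice:13s} {plaquette:12s}  q(c2-1)={lhs:3d}  2(c1-1)={rhs:3d}  -> {ok}")
```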
B. B and C hierarchies
The B and C hierarchies, originally proposed by Kikuchi and Brush [30], are series of approximations with increasing cluster size, suitable for 2d square [30] and triangular [31] lattices. They are interesting mainly because they converge towards the exact free energy, in spite of the fact that the cluster size increases only in one direction. This result has been proved rigorously only for the C hierarchy [3], but there is numerical evidence for both [30,31]. Such results [3] are related to the transfer matrix concept: as the Bethe approximation solves exactly an Ising-like chain, the CVM, with infinitely long 1d stripes as basic clusters (to which the B and C hierarchies tend), solves exactly a 2d lattice. Here we are interested in showing that these approximations verify the sufficient condition for convergence discussed above. Let us consider for instance the B hierarchy on the triangular lattice (a completely analogous treatment holds for the C hierarchy and/or for the square lattice). The basic clusters are shown in Fig. 1 (top row). In the following rows of Fig. 1, the subclusters of the given basic cluster having nonzero coefficients in the cluster entropy expansion (a-coefficients) are also displayed. They are divided into pair-like and site-like subclusters, in that they can be put in one-to-one correspondence with pair and site subclusters of the triangle plaquette approximation. Such an analogy is not only a pictorial one. In fact, it is possible to show (for instance making use of Eq. (1), but see also Ref. [30]) that the a-coefficients are a_2 = −1 for pair-like clusters and a_1 = 1 for site-like clusters, like for the triangle plaquette approximation. The same holds for the c-coefficients, i.e., the numbers of basic clusters sharing a given subcluster, which turn out to be c_2 = 2 and c_1 = 6 respectively, whence b_2 = −1/2 and b_1 = 1/6. Finally, from Fig. 1 one easily sees that the same "allocation" technique as for the plaquette approximation can also be used. Inside a given basic cluster, each site-like subcluster is shared by r = 2 pair-like clusters, and each pair-like cluster contains 2 site-like subclusters, whence inequality (27) is satisfied.
C. Hypercube approximation in d dimensions
Finally, let us consider the case of a hypercubic lattice in d dimensions, and let us choose a d-dimensional hypercube (d-cube) as basic cluster. Of course, the relevant cases are d = 2, 3, the former of which coincides with the square plaquette approximation mentioned above, but the interest of a general treatment will become clear later. It is possible to show, by repeated use of Eq. (1), that the clusters with nonzero coefficients are only i-cubes, for i = 0, 1, . . . , d, with a_i^(d) = (−1)^(d−i) (30). As a consequence, since each i-cube of the lattice is shared by 2^(d−i) basic d-cubes, the normalized coefficients turn out to be b_i^(d) = (−1/2)^(d−i) (31). Let us now impose the sufficient conditions, Eqs. (14) and (15). Let us notice that the positive coefficients, those which give problems for upperbounding, have the index i with the same parity as d, that is i = d − 2, d − 4, . . . . Then we can couple each i-cube with the (i + 1)-cubes that contain it and are contained in a given d-cube. As for plaquette approximations, let us split the i-cube coefficient in equal parts, so that we have a single allocation coefficient b_{i+1|i}. We still have to observe that each i-cube is shared by d − i (i + 1)-cubes contained in the same d-cube (the equivalent of the r parameter for plaquette approximations), and that each (i + 1)-cube contains 2(i + 1) different i-cubes (the equivalent of the 2 sites in a pair).
We can then rewrite Eqs. (14) and (15) as b_i ≤ (d − i) b_{i+1|i} and 2(i + 1) b_{i+1|i} ≤ −b_{i+1}, respectively. By eliminating the allocation coefficient, we obtain 2(i + 1) b_i ≤ (d − i)(−b_{i+1}), which, inserting the explicit coefficients of Eqs. (30)-(31) and taking into account that d − i is always even (as previously mentioned), becomes i + 1 ≤ d − i. Such an inequality becomes more and more difficult to satisfy as the subcluster index i increases. Therefore we have to consider the worst case, that is i = d − 2, leading to d ≤ 3. This result essentially proves the convergence for d = 3, because the d = 2 case coincides with the square plaquette approximation. Nevertheless, it is mainly interesting in that it gives us the opportunity to experiment with the natural iteration method in a case in which the sufficient condition is not verified. We have actually implemented the procedure for the simple Ising model on the d = 4 hypercubic lattice, easily finding cases in which the behavior is non-convergent (oscillating). This fact led us to conjecture that the sufficient condition might actually also be a necessary one.
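The whole chain of statements is easy to verify mechanically. The sketch below (added here as an illustration) rebuilds the a and b coefficients from the recursion, using the fact that an i-cube of the hypercubic lattice is contained in C(d−i, j−i)·2^(j−i) j-cubes and in 2^(d−i) basic d-cubes, and then tests the combined inequality 2(i+1) b_i ≤ (d−i)(−b_{i+1}) at every positive-coefficient level; it reports the condition as satisfied for d ≤ 3 and violated for d ≥ 4.

```python
from math import comb

def contained_in(d, i, j):
    """Number of j-cubes of the hypercubic lattice containing a given i-cube (i <= j)."""
    return comb(d - i, j - i) * 2 ** (j - i)

for d in range(2, 7):
    # a-coefficients from the overcounting relation: sum_{j >= i} N(i->j) a_j = 1
    a = {d: 1}
    for i in range(d - 1, -1, -1):
        a[i] = 1 - sum(contained_in(d, i, j) * a[j] for j in range(i + 1, d + 1))
    b = {i: a[i] / contained_in(d, i, d) for i in range(d + 1)}

    # sufficient condition, positive-coefficient i-cubes coupled to (i+1)-cubes
    ok = all(2 * (i + 1) * b[i] <= (d - i) * (-b[i + 1])
             for i in range(d - 2, -1, -2) if b[i] > 0)
    print(f"d={d}: a_i={[a[i] for i in range(d + 1)]}, condition "
          f"{'satisfied' if ok else 'violated'}")
```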
VI. AN EQUIVALENT FORMULATION
In a recent paper [25], a general method for the minimization of non-convex functionals has been proposed, based on the construction of suitable auxiliary functionals. Let the auxiliary functional F̄[p, p'] satisfy the following requirements: F[p'] ≤ F̄[p, p'] for every pair p, p' (36), and F̄[p, p] = F[p] (37); moreover, let ϕ(p) denote the (unique) minimizer of F̄[p, p'] with respect to p' (38). Then F[ϕ(p)] ≤ F[p] (39), with equality holding if and only if ϕ(p) = p (40). Therefore, ϕ defines an iterative method to minimize the original functional.
Proof: It is easy to obtain the inequality chain F[ϕ(p)] ≤ F̄[p, ϕ(p)] ≤ F̄[p, p] = F[p], which immediately proves Eq. (39). The first inequality is the first hypothesis on the auxiliary functional F̄, Eq. (36); the second inequality is a consequence of the definition of ϕ, Eq. (38); the equality descends from the second hypothesis on F̄, Eq. (37). In order to prove also Eq. (40), we have to show that both inequalities hold as equalities if and only if ϕ(p) = p.
As far as the former is concerned, this is a direct consequence of the hypothesis Eq. (37), while the latter is proved by the fact that F̄[p, p'] has a unique minimum, which is also the absolute minimum, with respect to p'.
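The content of the theorem is the familiar "bound-and-minimize" argument. As a toy illustration (ours, unrelated to the CVM functional), the snippet below minimizes a non-convex one-dimensional function by repeatedly minimizing a convex quadratic upper bound that touches it at the current point, and verifies numerically that the objective never increases, exactly as Eqs. (39) and (40) assert.

```python
import numpy as np

def F(x):  return 0.5 * x**2 + 3.0 * np.sin(x)   # non-convex objective (placeholder)
def dF(x): return x + 3.0 * np.cos(x)
L = 4.0   # upper bound on F'' = 1 - 3 sin(x), so the quadratic majorizes F everywhere

def phi(x):
    """Minimizer of the convex bound Fbar(x, y) = F(x) + dF(x)(y - x) + L/2 (y - x)^2."""
    return x - dF(x) / L

x = -2.0
for _ in range(1000):
    x_new = phi(x)
    assert F(x_new) <= F(x) + 1e-12   # Eq. (39): the objective never increases
    if abs(x_new - x) < 1e-10:        # Eq. (40): equality only at a fixed point
        break
    x = x_new
print(f"stationary point x* = {x:.6f}, F(x*) = {F(x):.6f}, F'(x*) = {dF(x):.2e}")
```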
Let us now consider the auxiliary functional defined by F̄[p, p'] = Σ_γ { Σ_{x_γ} p'_γ(x_γ)[h_γ(x_γ) + ln p'_γ(x_γ)] + Σ_{α⊂γ} b_α Σ_{x_α} p'_γ(x_α) ln p_γ(x_α) }, which obviously satisfies the requirement (37). Moreover, F̄[p, p'] is easily seen to be convex with respect to p'; therefore, if it has a stationary point, the latter is unique, and is a minimum. Finally, let us observe that stationarization of this functional with respect to p', with the usual linear constraints, gives rise just to the NIM equations (9), which in this way can be used to define the application ϕ. In order to show that ϕ actually performs a minimization of F, a sufficient condition is given by Eqs. (36) and (37) of the above theorem, that is, we have to upperbound the quantity F[p'] − F̄[p, p'] by zero. Going back to (the right-hand side of) Eq. (13), it easily turns out that this is exactly the same upperbound we have proved with the sufficient condition for the convergence of the NIM.
VII. CONCLUSIONS
Let us finally summarize our results. We have investigated the convergence of the natural iteration method, proposed by Kikuchi as a minimization procedure for cluster variational free energies and widely employed in applications of the CVM. We have discussed a condition on the coefficients of the cluster entropy expansion which is sufficient to prove that the free energy decreases at each iteration, ensuring the convergence of the method. Such a condition is based on the idea of pairing subcluster entropies with a positive coefficient to larger subcluster terms with a negative coefficient, yielding a set of conditional entropy terms with negative coefficients. It had already been proved by Kikuchi in the original paper [27] that negative coefficient terms give decreasing contributions to the free energy. We have also taken into account a set of common CVM approximations defined on various regular lattices, frequently encountered in applications, showing that the sufficient condition is always satisfied. In particular, we have devoted some attention to the class of hypercube approximations on the generic (d-dimensional) hypercubic lattice, showing that the sufficient condition is verified for d ≤ 3. We have also implemented the natural iteration method for d = 4 on the simple Ising model, and found that several (random as well as uniform) initial conditions give rise to non-convergent (oscillating) behavior. This fact has led us to conjecture that the sufficient condition may also be a necessary one. Finally, we have established a connection with a recently proposed method for the minimization of non-convex functionals, which can be applied to the CVM free energy [25]. Such a method is based on the existence of suitable upperbounding functionals for the functional to be minimized. In Ref. [25] several choices of upperbounding functionals are proposed and applied to simple inhomogeneous systems. We have shown that one of the upperbounding choices proposed there (indeed quite a good choice in terms of computation time) is actually equivalent to Kikuchi's natural iteration method. It turns out explicitly that the upperbounding condition implies free energy decrease, whence convergence.
TABLE I: Coefficients for different plaquette approximations. The first two columns report respectively the lattice and plaquette (basic cluster) type. The following three columns display the independent coefficients: q (coordination number), c_2, c_1 (number of plaquettes sharing a given pair, site). The last two columns verify the sufficient condition, in that q(c_2 − 1) < 2(c_1 − 1).
EEG pre-burst suppression: characterization and inverse association with preoperative cognitive function in older adults
The most common complication in older surgical patients is postoperative delirium (POD). POD is associated with preoperative cognitive impairment and longer durations of intraoperative burst suppression (BSup) – electroencephalography (EEG) with repeated periods of suppression (very low-voltage brain activity). However, BSup has modest sensitivity for predicting POD. We hypothesized that a brain state of lowered EEG power immediately precedes BSup, which we have termed “pre-burst suppression” (preBSup). Further, we hypothesized that even patients without BSup experience these preBSup transient reductions in EEG power, and that preBSup (like BSup) would be associated with preoperative cognitive function and delirium risk. Data included 83 32-channel intraoperative EEG recordings of the first hour of surgery from 2 prospective cohort studies of patients ≥age 60 scheduled for ≥2-h non-cardiac, non-neurologic surgery under general anesthesia (maintained with a potent inhaled anesthetic or a propofol infusion). Among patients with BSup, we defined preBSup as the difference in 3–35 Hz power (dB) during the 1-s preceding BSup relative to the average 3–35 Hz power of their intraoperative EEG recording. We then recorded the percentage of time that each patient spent in preBSup, including those without BSup. Next, we characterized the association between percentage of time in preBSup and (1) percentage of time in BSup, (2) preoperative cognitive function, and (3) POD incidence. The percentage of time in preBSup and BSup were correlated (Spearman’s ρ [95% CI]: 0.52 [0.34, 0.66], p < 0.001). The percentage of time in BSup, preBSup, or their combination were each inversely associated with preoperative cognitive function (β [95% CI]: −0.10 [−0.19, −0.01], p = 0.024; −0.04 [−0.06, −0.01], p = 0.009; −0.04 [−0.06, −0.01], p = 0.003, respectively). Consistent with prior literature, BSup was significantly associated with POD (odds ratio [95% CI]: 1.34 [1.01, 1.78], p = 0.043), though this association did not hold for preBSup (odds ratio [95% CI]: 1.04 [0.95, 1.14], p = 0.421). While all patients had ≥1 preBSup instance, only 20.5% of patients had ≥1 BSup instance. These exploratory findings suggest that future studies are warranted to further study the extent to which preBSup, even in the absence of BSup, can identify patients with impaired preoperative cognition and/or POD risk.
Introduction
As the number of global surgeries continues to increase beyond 300 million per year (Weiser et al., 2015), the number of surgical patients at risk for postoperative delirium will continue to rise.Postoperative delirium is a transient disturbance of mental status and attention following surgery, and is associated with extended hospital stays, increased dementia risk, and increased postoperative mortality (Deiner and Silverstein, 2009;Rengel et al., 2018).Postoperative delirium occurs at increased rates among older surgical patients, with an incidence of 12-53% in noncardiac surgical patients over age 65 (Reddy et al., 2017).As the population ages (Jordan, 2020) and increased numbers of older adults undergo surgery (Daiello et al., 2019), understanding the etiology of postoperative delirium is a key research question in geriatric perioperative medicine.
Postoperative delirium has been associated with intraoperative burst suppression (BSup), periods of electroencephalogram (EEG) recordings in which quick bursts alternate with suppressed activity (Bennett et al., 2009;Shanker et al., 2021), a pattern that is thought to reflect decreased neuronal activity due to neuronal pathology, high anesthetic dosage, or hypothermia (Brown et al., 2010).BSup has also been associated with postoperative mortality (Willingham et al., 2014) and worse neurologic outcomes (Wennervirta et al., 2009), and some data suggests that patients who demonstrate BSup are at greater risk for developing postoperative delirium than those without BSup, though causal relations between BSup and postoperative delirium have been a subject of controversy (Soehle et al., 2015;Fritz et al., 2016;Wildes et al., 2019;Evered et al., 2021;Berger et al., 2023).Thus, the extent to which BSup actually contributes to cognitive impairment vs. the extent to which it is merely a marker of latent underlying neuropathology is unclear.
Despite its association with postoperative delirium, the utility of BSup as a clinical predictor is limited by the rarity of BSup in certain patient populations.Studies have reported a BSup incidence as low as 9% among surgical patients receiving general anesthesia with propofol and remifentanil (Besch et al., 2011).Further, variation in the frequency of BSup may be driven by differences in the method of BSup measurement (Muhlhofer et al., 2017), differences in patient characteristics such as age or surgery type, or variability in the use of EEG-guided anesthetic titration among surgical centers, along with other patient or surgical factors (Whitlock et al., 2014;Wildes et al., 2019).The rarity of BSup suggests it is likely to have low sensitivity as a predictor of postoperative delirium.
Although BSup may have low sensitivity for predicting postoperative delirium, in our clinical experience, a brief period of intermediate EEG suppression often precedes BSup epochs.Moreover, we have observed similar brief periods of intermediate EEG suppression even in patients who never had actual BSup.Thus, we hypothesized that an intermediate EEG suppression pattern, which we have termed pre-burst suppression (preBSup), tends to immediately precede BSup.We also hypothesized that the brain does not instantaneously switch into and out of BSup, but instead goes through a gradual and identifiable decline in EEG power (i.e., preBSup) just prior to BSup.Finally, we hypothesized that preBSup can occur on its own (i.e., in the absence of BSup), and that preBSup (like BSup) is associated with postoperative delirium.Thus, if patients spend more time in the intermediate state of preBSup than in full BSup, then preBSup could be a more sensitive predictor of postoperative delirium than BSup itself, including among patients who do not demonstrate actual BSup.
Aside from BSup, another risk factor for postoperative delirium is preoperative cognitive impairment.Preoperative cognitive function is assessed infrequently in routine clinical practice (Berger et al., 2018;Deiner et al., 2020;Peden et al., 2021), yet other delirium-associated intraoperative EEG patterns, such as low alpha power, have also been associated with both impaired preoperative cognition (Giattino et al., 2017) and postoperative delirium risk (Gutierrez et al., 2019).Thus, we also hypothesized that both intraoperative EEG preBSup and BSup would each be associated with impaired preoperative cognitive function.To investigate these hypotheses, we determined the extent to which a preBSup pattern is associated with BSup, and the extent to which preBsup, BSup, or their combination is associated with preoperative cognitive impairment and/or postoperative delirium incidence.
Study population
In this study, we included all patients from 2 prior prospective observational cohort studies at Duke University Medical Center (Durham, NC) who underwent 32-channel intraoperative EEG recordings (Figure 1) (Giattino et al., 2017; Berger et al., 2019). Both studies were registered with clinicaltrials.gov and approved by the Duke Health Institutional Review Board. All study subjects or legally authorized representatives gave written informed consent before study participation. Both MADCO-PC and INTUIT enrolled Duke patients aged ≥60 years who were scheduled to undergo elective non-cardiac, non-neurologic surgery lasting ≥2 h with a planned postoperative overnight hospitalization. Exclusion criteria included incarceration and anticoagulant use that prohibited undergoing lumbar punctures. No exclusions were based on preoperative cognitive status; however, all enrolled participants were required to complete a cognitive test battery (described below) that required intact language function and adequate English fluency. Patient information such as demographics (age, sex, race), baseline clinical status, surgery type, and anesthesia type was obtained via surveys or chart review, as described (Berger et al., 2019). INTUIT study data were managed using REDCap electronic data capture at Duke University (Harris et al., 2009, 2019).
Cognitive testing and delirium assessment
To assess preoperative cognition, we used a well-established neurocognitive test battery (Newman et al., 2001;Mathew et al., 2013;Browndyke et al., 2017) that included the Randt Short Story Memory Test (Randt and Brown, 1983), the Modified Visual Reproduction Test from the Wechsler Memory Scale (Wechsler, 1981), the Digit Span Test from the revised version of the Wechsler Adult Intelligence Scale (WAIS-R) (Wechsler, 1981), the Digit Symbol Test from the WAIS-R (Wechsler, 1981), the Trail Making Test Part B (Reitan, 1958), and the Hopkins Verbal Learning Test (Brandt, 1991).Scores from these tests were then combined via factor analysis with oblique rotation to obtain factor scores for five cognitive domains: Randt (narrative) verbal memory, Hopkins (episodic) verbal memory, executive function, visual memory, and attention/concentration (McDonagh et al., 2010).An overall cognitive index was then obtained by averaging the scores from these cognitive domain factors.Our group has used this cognitive assessment method for >20 years, both to reduce redundancy among tests and to minimize the need for multiple comparison corrections (Newman et al., 2001;McDonagh et al., 2010;Giattino et al., 2017).
Delirium incidence was measured using the 3-Minute Confusion Assessment Method (3D-CAM) (Marcantonio et al., 2014) or the original Confusion Assessment Method (CAM) (Inouye et al., 1990).Participants were screened for delirium at baseline (before surgery) and twice daily after surgery for up to 5 days after surgery or until hospital discharge, whichever occurred first (Vasunilashorn et al., 2020).
Electroencephalogram recording
Due to funding limitations and/or COVID restrictions, 32-channel EEG recordings were performed on a consecutive set of 19 MADCO-PC patients, and on 81 INTUIT patients. A tethered EEG cap and recording system (BrainAmp MR Plus, Brain Products GmbH, Gilching, Germany) with a 32-channel custom electrode layout (Woldorff et al., 2002) were used for all MADCO-PC patients who underwent EEG recordings and for the initial 11 INTUIT study patients who underwent EEG recordings and were included in this study. To improve ease of use during surgery for subsequent INTUIT subjects who underwent EEG recordings, we switched to a wireless recording system (LiveAmp, Brain Products GmbH, Morrisville, NC, USA) using a 32-channel cap with standard international 10-10 EEG locations (Oostenveld and Praamstra, 2001).
Electrode impedances below 20 kΩ were obtained by light abrasion of the scalp locations with coarse electrode paste (Abralyte 2000, EASYCAP GmbH, Herrsching, Germany) before initiating data collection. EEG signals were recorded at a sampling rate ≥500 Hz with a 0.016-250 Hz band-pass filter. Procedure event markers, including time of induction, incision, and skin closure/end of surgery, were logged and cross-referenced with the surgical record to ensure accuracy.
Electroencephalogram preprocessing
Researchers blinded to patient cognitive and delirium status performed EEG processing in MATLAB (The MathWorks, Inc., Natick, MA, USA) using the EEGLAB toolbox (Delorme and Makeig, 2004) and custom scripts, as described (Giattino et al., 2017;Acker et al., 2021).We focused on EEG data from channel Fp1, given its clinical relevance; Fp1 is in the left medial frontal location where anesthesiologists typically place commercially available frontal EEG electrode strips to monitor brain responses to anesthesia.
Post-acquisition, the raw EEG data were band-pass filtered from 1-60 Hz to remove high-frequency noise, drift, and other artifacts. Epochs with false positives (marked "suppression" segments that were greater in amplitude than marked "burst" segments) and high amplitude artifacts (defined as >60 µV signals often reflecting large, high-frequency distortions from electrocautery or head movement) were removed (see Supplementary material for additional details). The data were downsampled to 250 Hz. Data from the standard international 10-10 EEG cap were referenced to AFz at acquisition; data from the custom tethered caps were referenced to Cz at acquisition. Thus, for subjects recorded with the custom cap, the average signal of 2 custom electrode locations slightly anterior to the standard 10-10 locations for F1 and F2 was subtracted from Fp1 to get the closest possible approximation to a 10-10 AFz reference. For our primary analyses, we used all available intraoperative EEG data from 5 min after surgical incision until 1 h later or 5 min before extubation, whichever occurred first. This approach avoided potential interactions between surgical case length and recorded time in BSup.
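The published pipeline was implemented in MATLAB/EEGLAB; purely for orientation, a rough Python analogue of the main preprocessing steps described above might look like the following sketch. The 1-60 Hz cutoffs, the 60 µV artifact threshold and the 250 Hz target rate are taken from the text; the filter order, the 1-s artifact epoching and the synthetic input are our own placeholder choices.

```python
import numpy as np
from scipy import signal

fs_in, fs_out = 500.0, 250.0          # original and target sampling rates (Hz)
rng = np.random.default_rng(0)
raw = rng.normal(0.0, 10.0, int(fs_in * 600))   # placeholder: 10 min of synthetic "Fp1" data (µV)

# 1-60 Hz band-pass (zero-phase), as described in the text; 4th order is our choice.
sos = signal.butter(4, [1.0, 60.0], btype="bandpass", fs=fs_in, output="sos")
filtered = signal.sosfiltfilt(sos, raw)

# Reject 1-s epochs containing high-amplitude artifacts (> 60 µV).
epoch = int(fs_in)
n_epochs = len(filtered) // epoch
epochs = filtered[: n_epochs * epoch].reshape(n_epochs, epoch)
keep = np.max(np.abs(epochs), axis=1) <= 60.0
clean = epochs[keep].reshape(-1)

# Downsample to 250 Hz (factor-of-2 decimation with an anti-aliasing filter).
downsampled = signal.decimate(clean, int(fs_in // fs_out), zero_phase=True)
print(f"kept {keep.sum()}/{n_epochs} epochs; {len(downsampled)} samples at {fs_out:.0f} Hz")
```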
Pre-burst suppression calculations
We hypothesized that a distinct EEG spectral power pattern would occur immediately before BSup in patients with BSup and that this pattern may also occur in patients without BSup (Figure 2A). Thus, we operationally defined preBSup as a total power-decrease threshold using data from subjects with >0 instances of BSup in their first hour of surgery. This threshold was then applied to the intraoperative EEG data of all subjects regardless of whether they had any BSup (Figures 2B, C).
To define this power-decrease threshold, we first used a modified BSup algorithm to mark instances of BSup in every subject's EEG recording (see Supplementary material) (Westover et al., 2013). Then, in each subject with >0 instances of intraoperative BSup (N = 17), we isolated the 1 s of EEG data preceding each suppression instance (i.e., preBSup). We chose a 1-s window for defining preBSup based upon visual inspection of the EEG power spectra just prior to BSup events. This 1-s window captured the decrease in power before suppression and excluded the preceding periods of average intraoperative power (Figure 3A). Next, we averaged the spectral power distribution of all these 1-s segments to create an average preBSup spectrum for that subject (Figures 3B, C). To determine the decrease in log-adjusted power (dB) associated with preBSup, we subtracted the aforementioned average preBSup spectrum from the spectral average of the subject's entire recording, excluding epochs with BSup, preBSup, or artifacts (Figure 3D). This subtraction of dB power gave power-decrease-by-frequency information for that subject. Finally, we averaged these subject-specific power-decrease data across the 17 subjects with >0 instances of BSup, resulting in the final preBSup power-decrease threshold (Figure 3D, bold line, with its 95% confidence interval shown in lighter purple). This final threshold was our working definition of preBSup, which we then used to mark preBSup epochs in all patients, regardless of whether they had any instances of BSup.
This preBSup definition specifies a relative decrease in power at each frequency rather than an absolute power decrease, which helps to account for variability in each subject's baseline EEG power. Here, we averaged the threshold values across 3-35 Hz in Figure 3D to form a generalized power decrease threshold for defining preBSup in all patients. Over the 3-35 Hz interval, the bold line indicates an average 2.32 dB drop in spectral power (equivalent to a 41% decrease in power on a linear scale) compared to that patient's average 3-35 Hz intraoperative power spectrum. Thus, 1-s epochs were marked as preBSup if their spectral power in that second decreased from their average intraoperative spectrum power by at least 2.32 dB (the power decrease threshold). If preBSup overlapped with BSup in a given second, the time in preBSup was recorded as 1 s minus the duration of BSup in that second. After marking these instances of preBSup, we recorded the percentage of each patient's case spent in preBSup and in BSup (beginning 5 min after surgical incision until 1 h later or 5 min prior to extubation, whichever occurred first).
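As a schematic recreation of this marking step (our sketch, not the study code), one can compute the 3-35 Hz power of each 1-s epoch, compare it with the 3-35 Hz power of the whole recording, and flag epochs falling at least 2.32 dB below that reference. For simplicity the sketch compares band powers rather than full spectra, and the synthetic input and Welch settings are placeholder choices.

```python
import numpy as np
from scipy import signal

fs = 250.0
rng = np.random.default_rng(1)
eeg = rng.normal(0.0, 5.0, int(fs * 3600))       # placeholder: 1 h of "intraoperative" data
eeg[int(fs * 1800): int(fs * 1815)] *= 0.3       # inject 15 s of attenuated activity

def band_power_db(x):
    """3-35 Hz power of one epoch, in dB."""
    f, pxx = signal.welch(x, fs=fs, nperseg=len(x))
    band = (f >= 3.0) & (f <= 35.0)
    return 10.0 * np.log10(pxx[band].sum())

epoch = int(fs)                                   # 1-s epochs
powers = np.array([band_power_db(eeg[i:i + epoch])
                   for i in range(0, len(eeg) - epoch + 1, epoch)])
reference = powers.mean()                         # stand-in for the recording-average spectrum
pre_bsup = powers <= reference - 2.32             # the 2.32 dB threshold from the text

print(f"{pre_bsup.mean() * 100:.1f}% of 1-s epochs flagged as preBSup")
```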
Statistical analysis
All statistical analyses were performed by a statistician who was not involved in EEG data pre-processing. We summarized our patient cohort overall and by postoperative delirium status in Table 1. Spearman's correlations were computed between the case percentages of preBSup and BSup, and Wilcoxon rank-sum tests were used to assess differences in the case percentage of preBSup among patients who did vs. did not have BSup. Then, we evaluated the association between preBSup and both preoperative cognition and postoperative delirium incidence. For the association between preBSup and preoperative cognition, we examined the relationship between preBSup and both the overall cognitive index and the 5 individual cognitive domains.
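As an illustration only (the analyses reported here were run in SAS), the two nonparametric tests just described could be sketched as follows in Python with SciPy; the per-patient arrays are hypothetical.

import numpy as np
from scipy.stats import spearmanr, ranksums

# Hypothetical per-patient summaries (percent of the first case hour).
pct_prebsup = np.array([8.6, 16.6, 11.2, 13.7, 6.4, 17.3])
pct_bsup = np.array([0.0, 2.1, 0.0, 1.3, 0.0, 3.4])

rho, p_rho = spearmanr(pct_prebsup, pct_bsup)  # correlation between the two measures

has_bsup = pct_bsup > 0  # compare preBSup time between patients with vs. without BSup
stat, p_rank = ranksums(pct_prebsup[has_bsup], pct_prebsup[~has_bsup])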
Associations between preoperative cognition and (1) BSup, (2) preBSup, and (3) the combination of BSup and preBSup were measured via linear regression analyses. The associations between BSup, preBSup, and their combination with postoperative delirium incidence were examined via Firth-corrected logistic regression models. Our independent variables for these models included case percentages of (1) preBSup, (2) BSup, and (3) combined BSup and preBSup, to evaluate potential differential effects of these EEG measures on preoperative cognition and postoperative delirium. Spearman's correlations between preoperative cognition and intraoperative medication dosages or rates of administration were used to identify possible confounders to include in the general linear models. Given the low incidence of postoperative delirium in this cohort, we could only reasonably perform univariable analyses for delirium.
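Similarly, the linear regression step can be sketched as below (Python with statsmodels, hypothetical data): the slope of cognitive index on percentage of case time in one EEG state corresponds to the β coefficients reported later. The Firth-corrected logistic regression used for delirium requires a penalized-likelihood implementation (e.g., the logistf package in R) and is not shown here.

import numpy as np
import statsmodels.api as sm

# Hypothetical data: cognitive index vs. percent of case time in one EEG state.
pct_eeg_state = np.array([1.2, 0.0, 5.4, 9.8, 16.3, 3.1, 0.7, 12.0])
cog_index = np.array([0.6, 0.9, 0.1, -0.4, -0.8, 0.3, 0.7, -0.2])

X = sm.add_constant(pct_eeg_state)          # intercept plus the EEG predictor
fit = sm.OLS(cog_index, X).fit()
beta = fit.params[1]                        # slope (beta coefficient)
ci_low, ci_high = fit.conf_int()[1]         # its 95% confidence interval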
Due to the exploratory nature of this study, analyses were not corrected for multiple comparisons. Thus, findings were considered significant when p < 0.05. All statistical analyses were performed using SAS Studio 3.81 (SAS Institute, Cary, NC, USA).
Results
Preoperative characteristics for the 83 patients who had complete EEG and delirium data are presented in Table 1 (see Figure 1 for the participant flow diagram). Consistent with prior work (Guan et al., 2022), patients who later developed postoperative delirium had lower baseline MMSE scores and fewer years of education (Table 1). We first investigated the association between preBSup and BSup to determine whether these are related neurophysiologic patterns. The percentage of time spent in preBSup correlated with the percentage of time spent in BSup (Spearman's ρ [95% CI]: 0.52 [0.34, 0.66], p < 0.001). Further, the percentage of time spent in preBSup differed significantly among patients with vs. without BSup (median [Q1, Q3]: 16.61 [13.73, 17.26] vs. 8.58 [6.44, 11.23] percentage of the first case hour, respectively; median of differences [95% CI]: 7.29 [5.26, 9.01], p < 0.001). While all patients had preBSup, only 17 patients (20.5%) had BSup.
Univariable models showed an association between overall preoperative cognitive function and the percentages of time in (1) BSup, (2) preBSup, and (3) their combination; overall cognitive index scores were significantly associated with the percentage of time in all 3 intraoperative EEG states (BSup, preBSup, and combined preBSup and BSup). In contrast, preoperative Hopkins verbal memory scores (i.e., the structured verbal memory domain) were not associated with percentage of time in any of these EEG states (see Figure 4 and Table 2 for the β [95% CI] coefficients for each cognitive domain). Preoperative visual memory was associated with percentage of time in BSup, preBSup, and the combined percentage of time in BSup or preBSup, while executive function was only associated with preBSup and the combined preBSup and BSup percentages (Figure 4 and Table 2). Preoperative attention/concentration was not associated with any of these intraoperative EEG states (Figure 4 and Table 2).
Next, we examined the possibility that the relationship between EEG findings and preoperative cognition might be confounded by differential anesthetic dosage associated with preoperative cognition, i.e., if case anesthesiologists administered lower drug doses to patients who may have appeared to have pre-existing cognitive impairment. However, individual intraoperative medication administration rates and dosage(s) did not significantly differ in association with preoperative cognitive function (Supplementary Tables 1, 2). Thus, we found no evidence that intraoperative medications were potentially confounding the relationship between preoperative cognition and intraoperative EEG, since a confounder must be associated with both independent and dependent variables in an analysis. As such, we did not include intraoperative medication administration and dosage in our cognitive models.
Next, to determine whether preoperative cognition or intraoperative EEG parameters were associated with increased postoperative delirium risk, we analyzed univariable associations between postoperative delirium risk and both (1) preoperative cognition and (2) the percentage of time spent in preBSup, BSup, or their combination. Preoperative cognition was significantly associated with postoperative delirium incidence (odds ratio [95% CI]: 0.14 [0.05, 0.39], p < 0.001). The percentage of time patients spent in BSup was associated with postoperative delirium risk (odds ratio [95% CI]: 1.34 [1.01, 1.78], p = 0.043). However, no significant relationship was found between postoperative delirium and the percentage of time spent in preBSup (odds ratio [95% CI]: 1.04 [0.95, 1.14], p = 0.421) or the percentage of total time spent in either preBSup or BSup (odds ratio [95% CI]: 1.06 [0.98, 1.15], p = 0.149). Further, we found no association between postoperative delirium and the rates of administration or dosage of intraoperative medications (Supplementary Table 3). Among patients who developed postoperative delirium (N = 12), only 4 (33.3%) had BSup. Thus, while all patients (regardless of delirium status) experienced preBSup, we hypothesized that a set percentage of case time in preBSup may be a more sensitive measure than BSup for postoperative delirium. To investigate this potential sensitivity vs. specificity trade-off for BSup and preBSup in association with postoperative delirium, we used the Youden Index (J, where J = max[sensitivity + specificity - 1]) to generate optimal cut-points for percentage of time spent in BSup and preBSup among all patients. The Youden Index identifies the cut-point at which a biomarker (e.g., preBSup) is maximally effective for predicting an outcome (e.g., postoperative delirium) (Schisterman et al., 2008). A cut-point of ≥16.3% of time in preBSup had a sensitivity of 0.50 and a specificity of 0.75 (at J = 0.25) for postoperative delirium, whereas a cut-point of ≥1.34% of time in BSup had a sensitivity of 0.25 and specificity of 0.96 (at J = 0.21), confirming this trade-off in sensitivity versus specificity for these two measures.
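A minimal sketch of the Youden-index cut-point search described above is given below (Python; the array names are hypothetical and this is not the code used in the study).

import numpy as np

def youden_cutpoint(values, outcome):
    # values: per-patient percentage of case time in preBSup (or BSup)
    # outcome: 1 if the patient developed postoperative delirium, 0 otherwise
    values = np.asarray(values, dtype=float)
    outcome = np.asarray(outcome, dtype=int)
    best_cut, best_j = None, -np.inf
    for cut in np.unique(values):
        pred = values >= cut                 # classify patients at or above the cut-point
        sens = pred[outcome == 1].mean()     # sensitivity
        spec = (~pred)[outcome == 0].mean()  # specificity
        j = sens + spec - 1.0                # Youden's J
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j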
Discussion
In this exploratory study, we defined and described an intraoperative neurophysiologic pattern that we termed pre-burst suppression (preBSup), which precedes EEG burst suppression (BSup) but can also occur in the absence of BSup. Since an extensive literature has shown an association between BSup and the risk for postoperative delirium, and between postoperative delirium and impaired preoperative cognition, we reasoned that BSup, and by extension preBSup, would be associated with impaired preoperative cognition. We found that (1) preBSup was associated with BSup, and (2) preBSup, BSup, and the combination of both were each associated with preoperative cognition.
As expected, many more of our study subjects displayed preBSup than BSup; subjects who had no BSup still had measurable preBSup, as characterized by the preBSup pattern(s) extracted from subjects with BSup. Even in the subset of patients who had BSup, the total duration of preBSup was longer than the duration of BSup. Further, the average duration of preBSup was more variable than BSup duration. Thus, there may be greater statistical power for examining relationships between preBSup (rather than BSup) and both cognitive function and delirium risk.
Despite some previous studies with negative findings (Wildes et al., 2019), prior literature largely supports an association between BSup and postoperative delirium (Soehle et al., 2015; Fritz et al., 2016, 2018). Additionally, in a mediation analysis, Fritz et al. (2020) found that BSup mediates a small portion of the relationship between preoperative cognitive impairment and postoperative delirium. We have replicated the finding that BSup is associated with both postoperative delirium and impaired preoperative cognition, and the finding that preoperative cognition is associated with postoperative delirium. This fits with the notion that BSup may be associated with postoperative delirium not because it represents the brain's response to excessive anesthetic doses, but rather because it represents the response of a vulnerable brain, already at risk for postoperative delirium, to normal anesthetic doses.
We further extended these findings by operationalizing preBSup, an EEG pattern distinct from but related to BSup. Interestingly, preBSup was associated with preoperative executive function but not with postoperative delirium, while BSup was related to postoperative delirium but not with preoperative executive function. Yet both preBSup and BSup were associated with overall preoperative cognitive function (Figure 4). There are 3 major possible reasons for these relationships. (1) This pattern of results could represent a sensitivity vs. specificity trade-off between BSup and preBSup (i.e., preBSup could have increased ability to detect true positives for delirium over BSup but also an increased rate of false positives for delirium). Indeed, using the Youden Index, preBSup had greater sensitivity but less specificity than BSup in its association with postoperative delirium. Thus, while the true association between percentage of time spent in preBSup and postoperative delirium is unclear based on data from this cohort alone, it is possible that preBSup could serve as a more sensitive tool for perioperative monitoring and prevention of postoperative delirium, though future studies would be needed to identify the safety and efficacy of preBSup for this purpose. (2) The magnitude of effect between preBSup and postoperative delirium may be more subtle than that of BSup and postoperative delirium (with odds ratios of ∼1.15 vs. 1.34, respectively), and this study may simply not have been adequately powered to detect such a small association between preBSup and postoperative delirium.
(3) Since these 2 different EEG patterns (preBSup and BSup) were associated with different cognitive phenotypes (preoperative executive function vs. postoperative delirium, respectively), different neural mechanisms may underlie BSup and preBSup, an important question for future study. More specifically, preBSup may be associated with preoperative cognitive impairments but possibly not the type, pattern, or degree that leads to postoperative delirium or later neurocognitive dysfunction, while BSup may reflect a type or severity of network dysfunction related more specifically to postoperative delirium. As an example, the change in neural activity reflected by the reduction in power in preBSup (which is not as extreme as the power reductions seen in the isoelectric suppressions of BSup) may not be sufficient to account for neural changes in postoperative delirium. Given the effect sizes seen here, a much larger study would be required to provide sufficient power to test the association between preBSup and postoperative delirium or to perform mediation analyses, as Fritz et al. (2020) did for the relationships between preoperative cognition, BSup, and postoperative delirium.
There are several limitations to this study. First, to reduce the chance of falsely labeling epochs as BSup, we utilized a smoothing protocol to produce continuous suppression markings (see Supplementary material for details on smoothing procedures). However, while this technique can reduce the chance of falsely marking epochs as BSup, it does so at the cost of potentially missing short periods of real suppression.
Second, we did not correct for multiple comparisons here, since this was an initial exploratory study of preBSup. Thus, our results should be viewed as hypothesis-generating rather than hypothesis-proving or -confirming. Third, this was a single-center study that enrolled mostly Caucasian participants, which limits the potential generalizability of the study conclusions. Fourth, the sample size was relatively small in this study, particularly for subgroups such as patients with vs. without postoperative delirium. Low power for these subgroup analyses increases the risk for type II statistical errors, which highlights a need for larger studies to investigate whether there may be brain mechanism differences that underlie differential associations between preBSup and BSup, and preoperative cognition and postoperative delirium, respectively. Fifth, we defined preBSup based on 1-s epochs prior to BSup in the 17 patients with >0 BSup events. It is unclear how the definition of preBSup would change if it was based on a larger or different group of individuals with >0 BSup events, and whether any such changes in the preBSup definition would modulate its associations with delirium and preoperative cognitive function.
Finally, this is the first study to discuss the concept of preBSup and to define it using quantitative criteria. While we used the average 3-35 Hz power decrease 1 s prior to BSup, we recognize that there are many other ways in which to define the concept of preBSup. Alternatively, for example, preBSup could be defined by quantifying the median of the power decrease before BSup, the slope or shape of the power decrease before BSup, by analyzing frequency-specific changes in power (such as 8-12 Hz alpha) prior to BSup epochs, or via other methods. Ultimately, the concept of preBSup could be defined/operationalized in a number of different ways, and future studies will be needed to determine which definition of preBSup would provide the most useful information about the neurocognitive function of individual patients.
Additional studies will be required to determine the extent of neurophysiologic differences (e.g., in brain network activity or in connectivity patterns) between preBSup and BSup, yet the results presented here suggest that preBSup may add value beyond that of BSup for identifying patients with impaired preoperative cognition and/or postoperative delirium risk. There are other areas of research in which lower EEG amplitudes [e.g., discontinuity in infants (Yuan et al., 2023)] have been observed, and the extent to which these patterns are related to BSup and preBSup may be another important area for future study. Additional work would be needed to understand the underlying neurophysiological mechanisms of preBSup and the changes in neural activity (e.g., periodic vs. aperiodic) that occur before the onset of BSup itself, especially in patients who later develop perioperative neurocognitive disorders such as delirium.
FIGURE 1 Consort diagram of participant data from the MADCO-PC and INTUIT studies.
FIGURE 2 PreBSup concept and algorithm. (A) Conceptual model of pre-burst suppression (preBSup) as an intermediate neural state between the normal anesthetized state and burst suppression (BSup). (B) BSup is characterized by periods of repeated bursts of EEG activity separated by low-amplitude isoelectric activity called suppression. (C) In this study, instances of suppression were marked (magenta), and in subjects with >0 burst suppression instances, the 1 s of EEG data preceding each suppression instance (cyan) was extracted. These data were used to create a preBSup threshold, which was then used to mark preBSup in all subjects.
FIGURE 3 Creation of a preBSup threshold. In subjects with >0 BSup instances (N = 17), we created a preBSup threshold by calculating the average 3-35 Hz EEG power (dB) decrease for epochs occurring 1 s prior to suppression events relative to "normal spectra" during the rest of the EEG recording (excluding periods of BSup or artifact). (A) In an example subject, 138 instances of BSup from their surgical recording were overlaid and aligned to suppression onset (the magenta dotted line). The cyan dotted line represents the point in time 1 s before the onset of BSup in all 138 aligned EEG traces. (B) The average of all spectrograms from the 138 EEG traces plotted in panel (A) using non-overlapping 1-s windows. (C) The same data redrawn for smoother visualization using a 1-s moving window and a 0.025-s step size. (D) The averaged power across 1-s periods before suppression (i.e., average preBSup power, shown in red) among the 17 subjects with >0 BSup instances. The average normal spectra among the 17 subjects with >0 BSup instances are shown in blue. (E) The bold purple line indicates the average power (dB) decrease by frequency of the preBSup epoch [the red line in part (D)] from normal spectral power [the blue line in part (D)] with a 95% confidence interval depicted in lighter purple. We used the average power decrease from 3-35 Hz (a 2.32 dB drop) from the 17 subjects with >0 BSup instances as our threshold to detect and mark preBSup in all subjects, independent of BSup.
FIGURE 4 Forest plots of the relationship between intraoperative EEG patterns (percentage of first hour of surgical case spent in BSup, preBSup, or their combination) and preoperative cognitive measures and postoperative delirium incidence. Each point with error bars represents the linear regression beta coefficient and 95% confidence interval from a separate statistical model, for the effects of BSup, preBSup, or their combination on continuous cognitive index and the 5 preoperative cognitive factor domains (Randt verbal memory, Hopkins verbal memory, executive function, visual memory, and attention/concentration). The odds ratios from the simple Firth-corrected logistic regression models of the effect of BSup, preBSup, and their combination on postoperative delirium incidence are shown in the bottom panel.
TABLE 1 Descriptive statistics of the study cohort.
Cognitive domains are listed in bold in the left column; EEG models for BSup, preBSup, and the combination of both are listed under each cognitive domain.
100 Years of the Ocean General Circulation
The central change in understanding of the ocean circulation during the past 100 years has been its emergence as an intensely time-dependent, effectively turbulent and wave-dominated, flow. Early technologies for making the difficult observations were adequate only to depict large-scale, quasi-steady flows. With the electronic revolution of the past 50+ years and the emergence of geophysical fluid dynamics, the strongly inhomogeneous, time-dependent nature of oceanic circulation physics finally emerged. Mesoscale (balanced) and submesoscale oceanic eddies at 100-km horizontal scales and shorter, and internal waves, are now known to be central to much of the behavior of the system. Ocean circulation is now recognized to involve both eddies and larger-scale flows, with dominant elements and their interactions varying among the classical gyres, the boundary current regions, the Southern Ocean, and the tropics.
Introduction
In the past 100 years, understanding of the general circulation of the ocean has shifted from treating it as an essentially laminar, steady-state, slow, almost geological, flow, to that of a perpetually changing fluid, best characterized as intensely turbulent with kinetic energy dominated by time-varying flows. The space scales of such changes are now known to run the gamut from 1 mm (scale at which energy dissipation takes place) to the global scale of the diameter of Earth, where the ocean is a key element of the climate system. The turbulence is a mixture of classical three-dimensional turbulence, turbulence heavily influenced by Earth rotation and stratification, and a complex summation of random waves on many time and space scales. Stratification arises from temperature and salinity distributions under high pressures and with intricate geographical boundaries and topography. The fluid is incessantly subject to forced fluctuations from exchanges of properties with the turbulent atmosphere.
Although both the ocean and atmosphere can be and are regarded as global-scale fluids, demonstrating analogous physical regimes, understanding of the ocean until relatively recently greatly lagged that of the atmosphere. As in almost all of fluid dynamics, progress in understanding has required an intimate partnership between theoretical description and observational or laboratory tests. The basic feature of the fluid dynamics of the ocean, as opposed to that of the atmosphere, has been the very great obstacles to adequate observations of the former. In contrast with the atmosphere, the ocean is nearly opaque to electromagnetic radiation, the accessible (by ships) surface is in constant and sometimes catastrophic motion, the formal memory of past states extends to thousands of years, and the analogs of weather systems are about 10% the size of those in the atmosphere, yet evolve more than an order of magnitude more slowly. The overall result has been that as observational technology evolved, so did the theoretical understanding. Only in recent years, with the advent of major advances in ocean observing technologies, has physical/dynamical oceanography ceased to be a junior partner to dynamical meteorology. Significant physical regime differences include, but are not limited to, 1) meridional continental boundaries that block the otherwise dominant zonal flows, 2) predominant heating at the surface rather than at the bottom, 3) the much larger density of seawater (a factor of 10³) and much smaller thermal expansion coefficients (a factor of less than 1/10), and 4) overall stable stratification in the ocean. These are the primary dynamical differences; many other physical differences exist too: radiation processes and moist convection have great influence on the atmosphere, and the atmosphere has no immediate analog of the role of salt in the oceans.
What follows is meant primarily as a sketch of the major elements in the evolving understanding of the general circulation of the ocean over the past 100+ years. Given the diversity of elements making up understanding of the circulation, including almost everything in the wider field of physical oceanography, readers inevitably will find much to differ with in terms of inclusions, exclusions, and interpretation. An anglophone bias definitely exists. We only touch on the progress, with the rise of the computer, in numerical representation of the ocean, as it is a subject in its own right and is not unique to physical oceanography. All science has been revolutionized.
That the chapter may be both at least partially illuminating and celebratory of how much progress has been made is our goal. In particular, our main themes concern the evolution of observational capabilities and the understanding to which they gave rise. Until comparatively recently, it was the difficulty of observing and understanding a global ocean that dominated the subject. 1
Observations and explanations before 1945
Any coherent history of physical oceanography must begin not in 1919 but in the nineteenth century, as it sets the stage for everything that followed. A complete history would begin with the earliest seafarers [see, e.g., Cartwright (2001) who described tidal science beginning in 500 BCE, Warren (1966) on early Arab knowledge of the behavior of the Somali Current, and Peterson et al. (1996) for an overview] and would extend through the rise of modern science with Galileo, Newton, Halley, and many others. Before the nineteenth century, however, oceanography remained largely a cartographic exercise. Figure 7-1 depicts the surface currents, as inferred from ships' logs, with the Franklin-Folger Gulf Stream shown prominently on the west. Any navigator, from the earliest prehistoric days, would have been very interested in such products. Emergence of a true science had to await the formulation of the Euler and Navier-Stokes equations in the eighteenth and nineteenth centuries. Not until 1948 did Stommel point out that the intense western intensification of currents, manifested on the U.S. East Coast as the Gulf Stream, was a fluid-dynamical phenomenon in need of explanation. Deacon (1971) is a professional historian's discussion of marine sciences before 1900. Mills (2009) brings the story of general circulation oceanography to about 1960. In the middle of the nineteenth century, the most basic problem facing anyone making measurements of the ocean was navigation: Where was the measurement obtained? A second serious issue lay with determining how deep the ocean was and how it varied with position. Navigation was almost wholly based upon celestial methods and the ability to make observations of sun, moon, and stars, along with the highly variable skill of the observer, including the ability to carry out the complex reduction of such measurements to a useful position. Unsurprisingly, particularly at times and places of constant cloud cover and unknown strong currents, reported positions could be many hundreds of kilometers from the correct values. One consequence could be shipwreck. 2 Water depths were only known from the rare places where a ship could pause for many hours to lower a heavy weight to the seafloor. Observers then had to compute the difference between the length of stretching rope spooled out when the bottom was hit (if detected), and the actual depth. An example of nineteenth-century North Atlantic Ocean water-depth estimates can be seen in Fig. 7-2. A real solution was not found until the invention of acoustic echo sounding in the post-World War I era.
Modern physical oceanography is usually traced to the British Challenger Expedition of 1873-75 in a much-told tale (e.g., Deacon 1971) that produced the first global-scale sketches of the distributions of temperature and salinity [for a modern analysis of their temperature data, see Roemmich et al. (2012)]. 1 Physical oceanography, as a coherent science in the nineteenth century, existed mainly in support of biological problems. A purely physical oceanographic society has never existed; most professional oceanographic organizations are inevitably dominated in numbers by the biological ocean community. In contrast, the American Meteorological Society (AMS) has sensibly avoided any responsibility for, for example, ornithology or entomology, aircraft design, or tectonics, the atmospheric analogs of biological oceanography, ocean engineering, or geology. This otherwise unmanageable field may explain why ocean physics eventually found a welcoming foster home with the AMS with the establishment of the Journal of Physical Oceanography in 1971.
One of the most remarkable achievements by the late-nineteenth-century oceanographers was the development of a purely mechanical system (nothing electrical) that permitted scientists on ships to measure profiles of temperature T at depth with precisions of order 0.01°C and salinity content S to an accuracy of about 0.05 g kg⁻¹ (Helland-Hansen and Nansen 1909, p. 27), with known depth uncertainties of a few meters over the entire water column of mean depth of about 4000 m. This remarkable instrument system, based ultimately on the reversing thermometer, the Nansen bottle, and titration chemistry, permitted the delineation of the basic three-dimensional temperature and salt distributions of the ocean. As the only way to make such measurements required going to individual locations and spending hours or days with expensive ships, global exploration took many decades. Figures 7-2 and 7-3 display the coverage that reached to at least 2000 and 3600 m over the decades, noting that the average ocean depth is close to 4000 m. [The sampling issues, including seasonal aliasing, are discussed in Wunsch (2016).] By good fortune, the large-scale structures below the very surface of T and S appeared to undergo only small changes on time scales of decades and spatial scales of thousands of kilometers, with ''noise'' superimposed at smaller scales. Measurements led to the beautiful hand-drawn property sections and charts that were the central descriptive tool.
Mechanical systems were also developed to measure currents. Ekman's current meter, one lowered from a ship and used for decades, was a purely mechanical device, with a particularly interesting method for recording flow direction (see Sandström and Helland-Hansen 1905). Velocity measurements proved much more challenging to interpret than hydrographic ones, because the flow field is dominated by rapidly changing small-scale flows and not by stable large-scale currents. Various existing reviews permit us to provide only a sketchy overview; for more details of observational history, see particularly the chapters by Warren, Reid, and Baker in Warren and Wunsch (1981), Warren (2006), the books by Sverdrup et al. (1942) and Defant (1961), and chapter 1 of Stommel (1965).
FIG. 7-2. Known depths in the North Atlantic, from Maury (1855). Lack of knowledge of water depths became a major issue with the laying of the original undersea telegraph cables (e.g., Dibner 1964). Note the hint of a Mid-Atlantic Ridge. Maury also shows a topographic cross section labeled ''Fig. A'' in this plot.
The most basic feature found almost everywhere was a combined permanent ''thermocline''/''halocline,'' a depth range typically within about 800 m of the surface over which, in a distance of several hundred meters, both the temperature and salinity changed rapidly with depth. It was also recognized that the abyssal ocean was very cold, so cold that the water could only have come from the surface near the polar regions (Warren 1981).
FIG. 7-3. Hydrographic measurements reaching at least 2000 m during (a) 1851-1900, (b) 1901-20, (c) 1921-30, (d) 1931-40, (e) 1941-50, (f) 1951-60, (g) 1961-70, and (h) 1971-80. Because the ocean average depth is about 3800 m and is far deeper in many places, these charts produce a highly optimistic view of even the one-time coverage. Note, for example, that systematic to-the-bottom measurements in the South Pacific were not obtained until 1967 (Stommel et al. 1973). Much of the history of oceanographic fashion can be inferred from these plots. [The data are from the World Ocean Atlas (https://www.nodc.noaa.gov/OC5/woa13/); see also Fig. 5 of Wunsch (2016).]
The most important early advance in ocean physics 3 was derived directly from meteorology: the development of the notion of ''geostrophy'' (a quasi-steady balance between pressure and Coriolis accelerations) from the Bergen school. 4 Bjerknes's circulation theorem as simplified by Helland-Hansen for steady circulations (see Vallis 2017) was recognized as applicable to the ocean. To this day, physical oceanographers refer to the ''thermal wind'' when using temperature and salinity to compute density and pressure, and hence the geostrophically (and hydrostatically) balanced flow field. Even with this powerful idea, understanding of the perhaps steady ocean circulation lagged behind that of the atmosphere, as oceanographers confronted an additional complication that does not exist in meteorology. By assuming hydrostatic and thermal wind balance, the horizontal geostrophic velocity field components, call them u_g and v_g (east and north), can be constructed from measurements of the water density ρ(T, S, p) (where p is hydrostatic pressure) on the basis of the thermal wind relationship [Eqs. (7-1a) and (7-1b); a standard form is given immediately below], where x, y, and z are used to represent local Cartesian coordinates on a sphere, f = 2Ω sinφ is the Coriolis parameter as a function of latitude φ, and g is the local gravity. Either of these equations [e.g., Eq. (7-1a)] can be integrated in the vertical direction (in practice as a sum over finite differences and with approximations related to density ρ) to give u_g(x, y, z) [Eq. (7-2)], and in a similar way v_g(x, y, z). The starting depth of the integration z_0 is arbitrary and can even be the sea surface; u_0 is thus simply the horizontal velocity at z_0. These equations constitute the ''dynamic method'' and were in practical oceanographic use as early as Helland-Hansen and Nansen (1909, p. 155). The constant of integration u_0, as is the equivalent v_0, is missing. In the atmosphere, the surface pressure is known, and thus u_0 and v_0 can be estimated using geostrophic balance. Various hypotheses were proposed for finding a depth z_0 at which u_0 and v_0 could be assumed to vanish (a ''level of no motion''). It is an unhappy fact that none of the hypotheses proved demonstrable, and thus oceanic flows were only known relative to unknown velocities at arbitrary depths. Estimated transports of fluid and their important properties such as heat could easily be dominated by even comparatively small, correct, values of u_0 and v_0. Physical oceanography was plagued by this seemingly trivial issue for about 70 years. It was only solved in recent years through mathematical inverse methods and by technologies such as accurate satellite altimetry and related work, taken up later. The earliest dynamical theory of direct applicability to the ocean is probably Laplace's (1775) discussion of tidally forced motions in a rotating spherical geometry, using what today we would call either the Laplace tidal or the shallow-water equations. Laplace's equations and many of their known solutions are thoroughly described in Lamb (1932), and tidal theory per se will not be pursued here (see Cartwright 1999; Wunsch 2015, chapters 6 and 7). Those same equations were exploited many years later in the remarkable solutions of Hough (1897, 1898), and by Longuet-Higgins (1964) in his own and many others' following papers. As with most of the theoretical ideas prior to the mid-twentieth century, they would come to prominence and be appreciated only far in the future. [The important ongoing developments in fluid dynamics as a whole are described by Darrigol (2005).]
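A standard form of the thermal wind and dynamic-method relations, Eqs. (7-1a), (7-1b), and (7-2), is the following (written here in the Boussinesq approximation with reference density $\rho_0$; the published version may differ in minor notational detail):

f\,\frac{\partial u_g}{\partial z} = \frac{g}{\rho_0}\,\frac{\partial \rho}{\partial y}, \qquad f\,\frac{\partial v_g}{\partial z} = -\frac{g}{\rho_0}\,\frac{\partial \rho}{\partial x}, \qquad (7\text{-}1a,b)

u_g(x,y,z) = u_0(x,y,z_0) + \frac{g}{\rho_0 f}\int_{z_0}^{z} \frac{\partial \rho}{\partial y}\,dz'. \qquad (7\text{-}2)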
Probably the first recognizable dynamical oceanographic theory arises with the celebrated paper of Ekman (1905). In another famous story (see any textbook) Ekman produced an explanation of Fridtjof Nansen's observation that free-floating icebergs tended to move at about 45° to the right of the wind (in the Northern Hemisphere). His solution, the Ekman layer, remains a cornerstone of oceanographic theory. [See Faller (2006) for discussion of its relationship to Prandtl's contemporaneous ideas about the boundary layer.] Ekman and others devoted much time and attention to developing instruments capable of making direct measurements of oceanic flow fields with depth. Much of the justification was the need to determine the missing integration constants u_0 and v_0 of the dynamic method. 3 We use ''physics'' in the conventional sense of encompassing both dynamics and all physical properties influencing the fluid ocean. 4 According to Gill (1982), the first use of the terminology was in 1916 by Napier Shaw; the expression does not appear at all in Sverdrup et al. (1942). The notion of geostrophic balance, however, appears earlier in the oceanographic literature through the work of Sandström and Helland-Hansen (1905), as inspired by the new dynamical approach to meteorology and oceanography introduced in the Bergen school by Bjerknes (Mills 2009).
These instruments were lowered on cables from a ship. Unfortunately, ships could stay in the same place for only comparatively short times (typically hours) owing to the great costs of ship time, and with navigational accuracy being wholly dependent upon sun and star sights. Early on, it was recognized that such measurements were extremely noisy, both because of ship movements and because of the possible existence of rapidly fluctuating internal waves, which were already apparent (see Nansen 1902) and which would contaminate the measurements of slowly evolving geostrophic velocities u_g and v_g.
Absent any method for direct determination of water motions over extended periods of time, and with no possibility of obtaining time series of any variable below the surface, theory tended to languish. The most notable exceptions were the remarkable measurements of Pillsbury (1891) in the Straits of Florida, who managed to keep anchored ships in the Gulf Stream for months at a time. Stommel (1965) has a readable discussion of Pillsbury's and other early measurements. These data, including the direct velocities, were used by Wüst (1924) to demonstrate the applicability of the thermal wind/dynamic method. Warren (2006), describing Wüst's methods, shows that the result was more ingenious than convincing.
Of the theoretical constructs that appeared prior to the end of World War II (WWII), the most useful were the development of internal wave theory by Stokes (1847), Rayleigh (1883), and Fjeldstad (1933), among others, and its application to the two-layer ''deadwater'' problem by Ekman (1906). Building on the work of Hough and others, the English mathematician Goldsbrough (1933) solved the Laplace tidal equations on a sphere, when subjected to mass sources, a development only exploited much later for the general circulation by Stommel (1957) and then Huang and Schmitt (1993). One might also include Rossby's ''wake stream'' theory of the Gulf Stream as a jet, although that idea has had little subsequent impact. The use of three-dimensional eddy diffusivities (''Austausch'' coefficients), as employed in one dimension by Ekman, acting similarly to molecular diffusion and friction but with far-larger values, was the focus of a number of efforts summarized by Defant (1961), following the more general discussions of fluid turbulence. 5 In the nineteenth century, controversy resulted over the question of whether the ocean was primarily wind driven or thermally forced, a slightly irrational, noisy dispute that is typical of sciences with insufficient data (Croll 1875; Carpenter 1875). Sandström (1908; see the English translation in Kuhlbrodt 2008) showed that convection in fluids where the heating lay above or at the same surface (as in the ocean) would be very weak relative to fluids heated below the level of cooling (the atmosphere). Bjerknes et al. (1933) labeled Sandström's arguments a ''theorem,'' which attracted to it some considerable later misinterpretation. Jeffreys (1925), in an influential paper, had argued that Sandström's inferences (''principles'') had little or no application to the atmosphere but were likely relevant to the oceans. There the matter rested for 50+ years. 6 The highly useful summary volume by Sverdrup et al. (1942) appeared in the midst of WWII. It remained the summary of the state of all oceanography, and not just the physical part, for several decades. Emphasis was given to water-mass volumes (basically varying temperatures and salinities), the dynamic method, and local (Cartesian coordinate) solutions to the shallow-water equations. The Ekman layer is the only recognizable element of ''dynamical oceanography'' relevant to the general circulation. In his condensed version directed specifically to meteorologists (Sverdrup 1942), Sverdrup concluded the monograph with the words: ''It is not yet possible to deal with the system atmosphere-ocean as one unit, but it is obvious that, in treating separately the circulation of the atmosphere, a thorough consideration of the interaction between the atmosphere and the oceans is necessary'' (p. 235), a statement that accurately defines much of the activity today in both atmospheric and oceanic sciences.
Post-WWII developments and the emergence of GFD
An informal sense of the activities in physical oceanography in WWII and the period immediately following, with a focus on the United Kingdom, can be found in Laughton et al. (2010). Shor (1978) is another history, focused on Scripps Institution of Oceanography, and Cullen (2005) described the Woods Hole Oceanographic Institution. Mills (2009) covered the early-twentieth-century evolution specifically of dynamical oceanography in Scandinavia, France, Canada, and Germany. Other national quasi histories probably exist for other countries, including the Soviet Union, but these are not known to us.
A simple way to gain some insight into the intellectual flavor of physical oceanography in the interval from approximately 1945 to 1990 is to skim the papers and explanatory essays in the collected Stommel papers (Hogg and Huang 1995). The more recent period, with a U.S. focus, is covered in Jochum and Murtugudde (2006). The edited volume by Warren and Wunsch (1981) gives a broad overview of how physical oceanography stood as of approximately 1980, reflecting the first fruits of the electronics revolution.
The advent of radar and its navigational offspring, such as loran and Decca, greatly reduced navigational uncertainties, at least in those regions with good coverage (the North Atlantic Ocean). This period also saw the launch of the first primitive navigational satellites (the U.S. Navy Transit system), which gave a foretaste of what was to come later.
Because of the known analogies between the equations thought to govern the dynamics of the atmosphere and ocean, a significant amount of the investigation of theoretical physical oceanographic problems was carried out by atmospheric scientists (e.g., C.-G. Rossby, J. G. Charney, and N. A. Phillips) who were fascinated by the oceans. The field of geophysical fluid dynamics (GFD) emerged, based initially on oceanic and atmospheric flows dominated by Earth's rotation and variations of the fluid density (see Fig. 7-4). Present-day GFD textbooks (e.g., Pedlosky 1987; McWilliams 2006; Cushman-Roisin and Beckers 2011; Vallis 2017) treat the two fluids in parallel. When it came to observations, however, Gill's (1982) textbook was and is a rare example of an attempt to combine both the theory and observations of atmosphere and ocean in a single treatment. Although a chapter describes the atmospheric general circulation, he sensibly omitted the corresponding chapter on the ocean general circulation. GFD might be defined as the reduction of complex geophysical fluid problems to their fundamental elements, prioritizing understanding above realism. Potential vorticity (a quasi-conserved quantity derived from the oceanic vorticity and stratification), in various approximations, emerged at this time as a fundamental unifying dynamical principle (see Stommel 1965, chapter 8). Vallis (2016) has written more generally about GFD and its applications.
FIG. 7-4. As in Fig. 7-3, but for stations reaching at least 3600 m by decade. The challenge of calculating any estimate of the heat content (mean temperature) or salinity in early decades will be apparent.
a. Steady circulations
In the United States and United Kingdom at least, WWII brought a number of mathematically adept, professionally trained scientists into close contact with the problems of the fluid ocean. Before that time, and apart from many of the people noted above, physical oceanography was largely in the hands of people (all men) who can reasonably be classified as ''natural philosophers'' in the older tradition of focusing on description, rather than physics. (In English, the very name ''oceanography,'' from the French, evokes the descriptive field ''geography'' rather than the explicitly suppressed ''oceanology'' as a parallel to ''geology.'') Seagoing physical oceanography had, until then, been primarily a supporting science for biological studies. The start of true dynamical oceanography was provided in two papers (Sverdrup 1947; Stommel 1948), by authors neither of whom would have been regarded as fluid dynamics experts. But those two papers marked the rise of GFD and the acceleration of dynamical oceanography. Sverdrup derived a theoretical relationship between the wind torque acting at the ocean surface and the vertically integrated meridional (north-south) transport of upper ocean waters (a standard form of this balance is sketched at the end of this paragraph). Stommel's (1948) paper treated a linear, homogeneous flat-bottom ocean, but succeeded in isolating the meridional derivative of the Coriolis acceleration as the essential element in producing western boundary currents like the Gulf Stream in the North Atlantic, a prototype of GFD reductionism. 7 Closely following on the Sverdrup/Stommel papers were those of Munk (1950), who effectively combined the Sverdrup and Stommel solutions, Munk and Carrier (1950), Charney (1955), Morgan (1956), and a host of others. 8 Following the lead of Munk and Carrier (1950), the Gulf Stream was explicitly recognized as a form of boundary layer, and the mathematics of singular perturbation theory was then enthusiastically applied to many idealized versions of the general circulation (Robinson 1970). Stommel, with his partner Arnold Arons, developed the so-called Stommel-Arons picture of the abyssal ocean circulation (Stommel 1957), probably the first serious attempt at the physics of the circulation below the directly wind-driven regions near the surface. A few years later, Munk (1966) produced his ''abyssal recipes'' paper that, along with the Stommel-Arons schematic, provided the framework for the next several decades of understanding of the deep ocean circulation, thought of as dynamically relatively spatially uniform. This subject will be revisited below.
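The Sverdrup relationship mentioned above is conventionally written as follows (a standard textbook form, not a quotation from this chapter):

\beta \int_{-H}^{0} v\,dz \;=\; \frac{1}{\rho_0}\,\hat{\mathbf{z}}\cdot\left(\nabla\times\boldsymbol{\tau}\right),

where $\boldsymbol{\tau}$ is the surface wind stress, $\rho_0$ a reference density, $\beta = df/dy$ the meridional gradient of the Coriolis parameter, and the left-hand side involves the vertically integrated meridional transport.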
Attempts at a theory of the thermocline that would predict the stratification and baroclinic flows forced by the surface winds started with linear perturbation methods (Stommel 1957; cf. Barcilon and Pedlosky 1967). But because the goal was explaining the basic oceanic stratification, rather than assuming it as part of the background state, the problem resulted in highly nonlinear equations (e.g., Needler 1967). Ingenious solutions to these equations were found by Robinson and Stommel (1959) and Welander (1959) using analytic similarity forms. These solutions looked sufficiently realistic to suggest that the basic physics had been appropriately determined.
[See the textbooks by Pedlosky (1996); Vallis (2017); Huang (2010); Olbers et al. (2012).] Large-scale solutions that assumed that vertical mixing of temperature and salinity in the upper ocean was a leading-order process (e.g., Robinson and Stommel 1959) were so similar to those that ignored mixing altogether (e.g., Welander 1959) that the immediate hope of deducing a vertical eddy diffusivity K_v from hydrographic measurements alone proved unavailing. The puzzle ultimately led to a decades-long effort, much of it driven by C. S. Cox and continuing today, to measure K_v directly (see Gregg 1991) and to its inference from a variety of chemical tracer observations.
b. Observations
Until about 1990, the chief observational tool for understanding the large-scale ocean circulation remained the shipboard measurement of hydrographic properties, leading to the calculation of density and the use of the dynamic method, often still employing assumed levels of no motion. Even as the technology evolved (Baker 1981) from reversing thermometers and Nansen bottles to the salinity-temperature-depth (STD) and conductivity-temperature-depth (CTD) devices, and from mechanical bathythermographs (MBTs) to expendable BTs (XBTs), the fundamental nature of the subject did not change. The major field program in this interval was the International Geophysical Year (IGY), July 1957-December 1958. The IGY Atlantic surveys were modeled on the R/V Meteor Atlantic survey of the 1920s (Wüst and Defant 1936). Notably, the Atlantic Ocean atlases of Fuglister (1960) and Worthington and Wright (1970) were based on these cruises and emerged as the basic representation of the ocean circulation.
Apart from the Atlantic Ocean, hydrographic surveys to the bottom remained extremely rare, with the so-called Scorpio sections in the mid-1960s in the South Pacific Ocean (Stommel et al. 1973), the R/V Eltanin survey of the Southern Ocean (Gordon and Molinelli 1982), and an isolated trans-Indian Ocean section (Wyrtki et al. 1971) being late exceptions. This rarity reflected a combination of the great difficulty and expense of measurements below about 1000 m, coupled with the very convenient supposition that the deep
FIG. 7-5. From Robinson and Stommel (1959).
c. High latitudes
During this long period, observations were focused on the mid- to lower latitudes, with the difficult-to-reach Southern Ocean remaining comparatively poorly observed. Theoretical work was directed at the dynamics of the Antarctic Circumpolar Current (ACC). The absence of continuous meridional barriers in the latitude range of Drake Passage did not allow the development of the western boundary currents that were crucial in the theories of Stommel and Munk. Stommel (1957) argued that the Scotia Island Arc could act as a porous meridional barrier permitting the ACC to pass through, but be deflected north to join the meridional Falkland Current along the South American continent. Gill (1968) pointed out that the zonal ACC current could also result from a balance between the surface wind stress and bottom friction, without any need of meridional boundaries. However, he considered only models with a flat bottom that produced transports far in excess of observations for any reasonable value of bottom drag coefficients. Surprisingly, both theories ignored Munk and Palmén's (1951) work, which had identified topographic form drag (the pressure forces associated with obstacles) from ocean ridges and seamounts as a key mechanism to slow down the ACC and connect it to currents to the north. Development of a theory of the Southern Ocean circulation is taken up below. The ice-covered Arctic Sea 9 was essentially unknown.
d. Tropical oceanography
Tropical oceanography was largely undeveloped until attention was directed to it by the rediscovery of the Pacific (and Atlantic) equatorial undercurrents. Buchanan (1888) had noted that buoys drogued at depth moved rapidly eastward on the equator in the Atlantic, but his results were generally forgotten (Stroup and Montgomery 1963). Theories of the steady undercurrent were almost immediately forthcoming (see Fig. 7-6), with perhaps the most important result being their extremely sensitive dependence on the vertical eddy diffusivity K_v (e.g., Charney and Spiegel 1971; Philander 1973). But the real impetus came with the recognition (see Wyrtki 1975a; Halpern 1996) that El Niño, known from colonial times as a powerful, strange, occasional event in the eastern tropical Pacific and regions of Ecuador and Peru, was in fact a phenomenon both global in scope and involving the intense interaction of atmosphere and ocean. Such physics could not be treated as a steady state.
e. Time-dependent circulation
Recognition of a very strong time dependence in the ocean dates back at least to Helland-Hansen and Nansen (1909) and is already implicit in Maury (1855). Fragmentary indications had come from the new Swallow floats (Crease 1962; Phillips 1966), and the brief direct current-meter measurements from ships had shown variability from the longest down to the shortest measurable time scales. Physical oceanographers in contact with the meteorological community were acutely aware of Starr's (1948, 1968) demonstration that atmospheric ''eddies'' to a large extent controlled the larger-scale flow fields, rather than being a passive dissipation mechanism in the sense of the Austausch coefficients of much theory. But because observational capabilities were still extremely limited, most of the contributions in the immediate postwar period tended to be primarily theoretical ones. Rossby et al. (1939) had produced a mathematical formulation of what came to be known as the ''Rossby wave,'' and in Rossby (1945) he had made explicit its hypothetical application to the ocean. As Platzman (1968) describes in detail, the physics of those waves had been known for a long time from the work of Hough (1897, 1898), who called them ''tidal motions of the second class''; Rossby's analysis produced the simplest possible waves dependent upon the variation of the Coriolis parameter, and the label has stuck. In a series of papers starting in 1964, Longuet-Higgins extended Hough's analysis on the sphere and showed clearly the relationship to the approximations based upon Rossby's beta plane. Many of the papers in Warren and Wunsch (1981) provided a more extended account of this period. Difficulties with observations vis-à-vis the emerging theories had led Stommel (see Hogg and Huang 1995, Vol. 1, p. 124) to famously assert that the theories ''had a peculiar dreamlike quality.''
FIG. 7-6. A poster drawn by H. Stommel near the beginnings of geophysical fluid dynamics [reproduced in Warren and Wunsch (1981, p. xvii) © 1980 by the Massachusetts Institute of Technology, published by the MIT Press].
9 Whether the Arctic is a sea or an ocean is not universally agreed on. Sverdrup et al. (1942) called it the ''Arctic Mediterranean Sea,'' both in acknowledgment of its being surrounded by land and because of its small size.
f. The level of no motion
The issue of the missing constant of integration when computing the thermal wind had attracted much attention over many decades, frustrating numerous oceanographers who were trying to calculate absolute flow rates. Although a number of methods had been proposed over the years [see the summary in Wunsch (1996)], none of them proved satisfactory. To a great extent, the steady ocean circulation was inferred by simply assuming that, at some depth or on some isopycnal or isotherm, both horizontal velocities, u and v, vanished, implying u_0 = v_0 = 0 there. Choice of such a ''level of no horizontal motion'' z_0(x, y), although arbitrary, did give qualitatively stable results, as long as a sufficiently deep value of z_0 was used; temporal stability was rarely ever tested. This apparent insensitivity of results (see Figs. 7-7 and 7-8) is understandable on the assumption that the magnitude of the horizontal flows diminished with depth, an inference in turn resting upon the hypotheses that flows were dominantly wind driven.
For quantitative use, however, for example in computing the meridional transport of heat or oxygen by the ocean as we mentioned above, differing choices of z_0 could lead to large differences. Ultimately Worthington (1976), in trying to balance the steady-state mass, temperature, salinity, and oxygen budgets of the North Atlantic Ocean, had come to the radical, and indefensible, conclusion that large parts of the circulation could not be geostrophically balanced by pressure gradients. (The inference was indefensible because no other term in the equations of motion is large enough to balance the inferred Coriolis force, and Newton's Laws are then violated.) The problem was eventually solved in two, initially different-appearing ways: through the methods of inverse theory (Wunsch 1977) and the introduction of the β spiral (Stommel and Schott 1977). These methods and their subsequent developments employed explicit conservation rules that are not normally part of the dynamic method (heat, salt, volume, potential vorticity, etc.). Wunsch (1996) summarizes the methods, including Needler's (1985) formal demonstration that, with perfect data in a steady state, the three components of steady velocity (u, v, and w) were fully determined by the three-dimensional density field. None of the methods was practical prior to the appearance of digital computers.
Ironically, the solution to the major weakness of the dynamic method emerged almost simultaneously with the understanding that the ocean was intensely time dependent: the meaning of the statically balanced ocean calculations was thus unclear. When accurate satellite altimetry and accurate geoids became available after 1992, it was possible to obtain useful direct measurements of the absolute pressure of the sea surface elevation [Fig. 7-8; see Fu et al. (2019)]. Both the inverse methods and the absolute measurements showed that a level of no motion did not exist. That deep velocities are generally weaker than those near the surface is, however, correct.
g. Steady-state circulations circa 1980
The physics and mathematical challenges of deducing the nature of a hypothetical, laminar steady-state ocean continue to intrigue many GFD theoreticians and modelers. The most important of such theories was instigated by Luyten et al. (1983) who, backing away from the continuous ocean represented in very complicated equations, reduced the problem to one of a finite number of layers (typically 2-3). Following Welander's (1959) model, the theory ignored mixing between layers and assumed that temperature, salinity, and potential vorticity were set at the surface in each density layer. This theory of the ''ventilated thermocline'' of the upper ocean, combined also with ideas about the effects of eddies, led to a renaissance in the theory. In the theories, the upper ocean is divided into a large region that is directly ventilated by the atmosphere and two or more special regions (the ''shadow zone'' and the unventilated ''pool''). These theoretical ideas are well covered in the textbooks already noted and are not further discussed here except to mention that the theory has since been extended to connect it to the rest of the ocean interior (which requires the addition of mixing at the base of the ventilated thermocline; Samelson and Vallis 1997) and to the tropical oceans (which alleviates the need for any mixing to explain the equatorial currents; Pedlosky 1996). Determining the extent to which these theories describe the upper ocean in the presence of intense time variability is a major subject of current activity in both theory and observation.
Theories for the deep ocean circulation lagged behind. Starting with an influential paper by Stommel (1961) that introduced a two-box model to describe the deep circulation as resulting from the density difference between the low- and high-latitude boxes, the idea gained ground that the deep circulation was driven by the density differences generated by heating and cooling, and evaporation and precipitation, at high latitudes, in contrast to the wind-driven circulation in the upper thermoclines. This deep ''thermohaline circulation,'' as it came to be called, consisted of waters sinking into the abyss in the North Atlantic and around Antarctica and rising back to the surface more or less uniformly in the rest of the ocean. Van Aken (2007) provides a good review of the theoretical progress until the end of the twentieth century. Beyond the Stommel-Arons model describing the depth-integrated deep circulation, theory focused on the overturning circulation and the associated cross-equatorial heat transport because of its relevance for climate. The approach was much less formal than in theories of the upper ocean and relied largely on box models and simple scaling arguments. Indeed the most influential descriptions of the supposed thermohaline circulation up to this time were the cartoon simplifications drawn by Gordon (1986) and Broecker (1987). These and other discussions led in turn to a heavy emphasis on the North Atlantic Ocean and its overturning in the guise of the Atlantic meridional overturning circulation (AMOC), whose role in the climate state is, however, only a portion of the global story. As described below, it is inseparable from the mechanically driven circulations.

FIG. 7-7. Wyrtki's (1975b, his Fig. 1) estimated topography of the sea surface based upon an assumed level of no horizontal motion at 1000-dbar pressure and the historical hydrographic data. The gross structure is remarkably similar to that in Fig. 7-8 (below), from a very large collection of data including absolute altimetric height measurements and the imposition of a complete physical flow model. Wyrtki's result from historical data is much noisier than the modern estimate.
A theory for the deep circulation more grounded in basic GFD has only started to emerge in the last twenty years, after the crucial role of the Southern Ocean in the global overturning circulation was fully appreciated. We review the emergence of this paradigm in the section on the Southern Ocean below. Here it suffices to say that the role of the Southern Hemisphere westerlies took center stage in the theory of the deep overturning circulation, rendering obsolete the very concept of a purely thermohaline circulation. The deep ocean is as sensitive to the winds as the upper thermoclines, and both circulations are strongly affected by the distinct patterns of heating and cooling, and of evaporation and precipitation.
Era of the time-dependent ocean
The most important advance in physical oceanography in the last 50 years, as with so many other fields, was the invention of the low-power integrated circuit, making possible both the remarkable capability of today's observational instruments, and the computers necessary to analyze and model the resulting data. This revolution began to be apparent in the early 1970s as the purely mechanical systems such as the Nansen bottle/reversing thermometer, the bathythermograph, the Ekman current meter, and so on gave way to their electronic counterparts (see, e.g., Baker 1981; Heinmiller 1983) and with the parallel capabilities of spaceborne instrumentation [e.g., Martin (2014) and Fu et al. (2019)]. True time series, both Eulerian and near-Lagrangian (employing floats), became available for the first time, spanning months and then years-capabilities unimaginable with shipborne instruments. Equally important was the revolution in navigational accuracy that built on the development of radar, loran, and other radiometric methods during WWII. The present culmination is the global positioning system (GPS). Today, a button pushed on a cellular phone or equivalent produces, with zero user skill, much higher accuracies than the ingenious, painstaking methods of celestial navigation that required years of experience to use.

FIG. 7-8. A true 20-yr-average dynamic topography that is much smoother than in Fig. 7-7. The contour interval is 10 cm, the same as in that figure, but the absolute levels cannot be compared. Because the surface geostrophic velocity depends upon the lateral derivatives of h, the noisiness of the historical compilation is apparent, mixing structures arising over years and decades with the true average. Until the advent of high-accuracy altimetric and gravity satellites, these structures could only be inferred and not measured.
Much of the ocean-bottom topography has been described, but many details remain uncertain (e.g., Wessel et al. 2010). Very small-scale topography, presumed to be of importance in oceanic boundary layer physics, remains unknown and is determinable at present only with limited and expensive shipboard multibeam surveys (see Fig. 7-9).
As new instrumentation gradually evolved from the 1970s onward (self-contained moored instruments operating successfully for months and years, neutrally buoyant floats tracked in a variety of ways, rapid chemical analysis methods, sea surface temperature pictures from the new satellites, etc.), the attention of much of the seagoing and theoretical communities turned toward the problems of understanding the newly available, if fragmentary, time series. In the background was the community's awareness of the importance of the large-scale meteorological variability known as weather, and in particular the book by Starr (1968) and the preceding papers. Some of Starr's students (e.g., Webster 1961) had already tried employing limited ocean data in meteorological analogies.
In what became known as the International Decade of Ocean Exploration (IDOE; see Lambert 2000), largely funded in the United States by the National Science Foundation and the Office of Naval Research, much of the oceanographic community focused for the first time on documenting the time variability in the hope of understanding those elements of the ocean that were not in steady state.
A convenient breakdown can be obtained from the various physically oriented IDOE elements: the Mid-Ocean Dynamics Experiment eventually became MODE (see footnote 10 and Fig. 7-10). Despite some instrumental problems (the new U.S. current meters failed after approximately a month), the ''experiment'' (see footnote 11) showed beyond doubt the existence of an intense ''mesoscale'' eddy field involving baroclinic motions related to the baroclinic radii of about 35 km and smaller, as well as barotropic motions on a much larger scale. In oceanography, the expression mesoscale describes the spatial scale that is intermediate between the large-scale ocean circulation and the internal wave field and is thus very different from its meteorological usage (which is closer to the ocean ''submesoscale''). A better descriptor is ''balanced'' or ''geostrophic'' eddies, as in the meteorological ''synoptic scale.'' [The reader is cautioned that an important fraction of the observed low-frequency oceanic motion is better characterized as a stochastic wave field-internal waves, Rossby waves, etc.-and is at least quasi-linear, with a different physics from the vortexlike behavior of the mesoscale eddies. Most of the kinetic energy in the ocean does, however, appear to be in the balanced eddies (Ferrari and Wunsch 2009).] Understanding whether the MODE area and its physics were typical of the ocean as a whole then became the focus of a large and still-continuing effort with in situ instruments, satellites, and numerical models. Following MODE and a number of field programs intended to understand 1) the distribution of eddy energy in the ocean as a whole and 2) the consequences for the general circulation of eddies, a very large effort, which continues today, has been directed at the eddy field and now extends into the submesoscale (i.e., scales between 100 m and 10 km where geostrophic balance no longer holds but rotation and stratification remain important). Exploration of the global field by moorings and floats was, and still is, a slow and painful process made doubly difficult by the short spatial coherence scales of eddies and the long measuring times required to obtain a meaningful picture. The first true (nearly) global view became possible with the flight of the high-accuracy TOPEX/Poseidon altimeter in 1992 and successor satellite missions. Although limited to measurements of the sea surface pressure (elevation), the altimetry made it obvious that eddies exist everywhere, with an enormous range in associated kinetic energy (Fig. 7-11). The spatial variation of kinetic energy by more than two orders of magnitude presents important and interesting obstacles to any simple understanding of the influence of the time-dependent components on the general circulation.

FIG. 7-10. A first sketch by H. Stommel (1969, personal communication) of what became the Mid-Ocean Dynamics Experiment, as described in a letter directed to the Massachusetts Institute of Technology Lincoln Laboratory, 11 August 1969. What he called ''System A'' was an array of 121 ocean-bottom pressure gauges; System B was a set of moored hydrophones to track what were called SOFAR floats; System C was to be the floats themselves (500-1000); System D was described as a numerical model primarily for predicting float positions so that their distribution could be modified by an attending ship; System E (not shown) was to be a moored current-meter array; System F was the suite of theoretical/dynamical studies to be carried out with the observations. The actual experiment differed in many ways from this preliminary sketch, but the concept was implemented (although not by Lincoln Laboratories).

Footnote 10: From a letter addressed to the Massachusetts Institute of Technology Lincoln Laboratory (unpublished document, 11 August 1969).

Footnote 11: Physical oceanographers rarely do ''experiments'' [the purposeful tracer work of Ledwell et al. (1993) and later is the major exception], but the label has stuck to what are more properly called field observations.
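The roughly 35-km eddy scale found by MODE is set by the first baroclinic deformation radius, which can be estimated from the stratification alone. The sketch below uses the standard WKB-type formula with an invented but ocean-like buoyancy frequency profile.

```python
import numpy as np

# Order-of-magnitude estimate of the first baroclinic Rossby radius of
# deformation, L1 ~ (1 / (pi * |f|)) * integral of N(z) dz (WKB approximation).
# The stratification profile below is invented but roughly ocean-like.

f = 1.0e-4                                   # Coriolis parameter at midlatitudes (1/s)
z = np.linspace(0.0, -4000.0, 401)           # depth (m), 10-m spacing
N = 8.0e-3 * np.exp(z / 1000.0) + 7.0e-4     # buoyancy frequency (1/s): strong near surface

dz = 10.0                                    # grid spacing (m)
L1 = np.sum(N) * dz / (np.pi * abs(f))       # crude vertical integral of N
print(f"first deformation radius ~ {L1 / 1e3:.0f} km")
```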
In association with the field programs, the first fine-resolution (grid size <100 km) numerical models of ocean circulation were developed to examine the role of mesoscale eddies in the oceanic general circulation [see the review by Holland et al. (1983)]. Although idealized, the models confirmed that the steady solutions of the ocean circulation derived over the previous decades were hydrodynamically unstable and gave rise to a rich time-dependent eddy field. Furthermore, the eddy fields interacted actively with the mean flow, substantially affecting the time-averaged circulation.
a. Observing systems
As the somewhat unpalatable truth that the ocean was constantly changing with time became evident, and as concern about how the ocean influences climate grew into a public issue, efforts were undertaken to develop observational systems capable of depicting the global, three-dimensional ocean circulation. The central effort, running from approximately 1992 to 1997, was the World Ocean Circulation Experiment (WOCE), which produced the first global datasets, models, and supporting theory. This effort and its outcomes are described in chapters in Jochum and Murtugudde (2006) and in Siedler et al. (2001, 2013). Legacies of this program and its successors include the ongoing satellite altimetry observations, satellite scatterometry and gravity measurements, the Argo float program, and continuing ship-based hydrographic and biogeochemical data acquisition. Having to grapple with a global turbulent fluid, with most of its kinetic energy in elements at 100-km spatial scales and smaller, radically changed the nature of observational oceanography. The subsequent cultural change in the science of physical oceanography requires its own history. We note only that the era of the autonomous seagoing chief scientist, in control of a single ship staffed by his own group and colleagues, came to be replaced in many instances by large, highly organized international groups, involvement of space and other government agencies, continual meetings, and corresponding bureaucratic overheads. As might be expected, for many in the traditional oceanographic community the changes were painful ones (sometimes expressed as ''we're becoming too much like meteorology'').
b. The turbulent ocean
A formal theory of turbulence had emerged in the 1930s from G. I. Taylor, a prominent practitioner of GFD. Taylor (1935) introduced the concept of homogeneous-isotropic turbulence (turbulence in the absence of any large-scale mean flow or confining boundaries), a concept that became the focus of most theoretical research. Kolmogorov (1941) showed that in three dimensions homogeneous-isotropic turbulence tends to transfer energy from large to small scales. [The book by Batchelor (1953) provides a review of these early results.] Subsequently, Kraichnan (1967) demonstrated that in two dimensions the opposite happens and energy is transferred to large scales. Charney (1971) realized that the strong rotation and stratification at the mesoscale act to suppress vertical motions and thus make ocean turbulence essentially two dimensional at those scales.
A large literature developed on both two-dimensional and mesoscale turbulence, because the inverse energy cascade raised the possibility that turbulence spontaneously generated, and interacted with, large-scale flows. The emphasis on homogeneous-isotropic turbulence, however, eliminated at the outset any large-scale flow and shifted the focus of turbulence research away from the oceanographically relevant question of how mesoscale turbulence affected the large-scale circulation. A theory of eddy-mean flow interactions was not developed for another 30 years, until the work of Bretherton (1969a,b) and meteorologists Eliassen and Palm (1961) and Andrews and McIntyre (1976).
The role of microscale (less than 10 m) turbulence in maintaining the deep stratification and ocean circulation was recognized in the 1960s and is reviewed below (e.g., Munk 1966). A full appreciation of the role of geostrophic turbulence on the ocean circulation lagged behind. Even after MODE and the subsequent field programs universally found vigorous geostrophic eddies with scales on the order of 100 km, theories of the large-scale circulation largely ignored this time dependence, primarily for want of an adequate theoretical framework for its inclusion and the lack of global measurements.
That the ocean, like the atmosphere, could be unstable in baroclinic, barotropic, and mixed forms had been recognized very early. Pedlosky (1964) specifically applied much of the atmospheric theory [Charney (1947), Eady (1949), and subsequent work] to the oceanic case. Theories of the interactions between mesoscale turbulence and the large-scale circulation did not take center stage until the 1980s in theories for the midlatitude circulation (Young and Rhines 1982) and the 1990s in studies of the Southern Ocean (Johnson and Bryden 1989; Gnanadesikan 1999; Marshall and Radko 2003).

FIG. 7-11. (a) RMS surface elevation (cm) from four years of TOPEX/Poseidon data and (b) the corresponding kinetic energy (cm² s⁻²) from altimeter measurements (Wunsch and Stammer 1998). The most striking result is the very great spatial inhomogeneity present-in contrast to atmospheric behavior.
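A feel for how readily the baroclinic instability discussed above grows comes from the classical Eady (1949) scaling, growth rate of roughly 0.31 f Λ/N for a vertical shear Λ. The numbers below are invented but typical of an open-ocean thermocline.

```python
import numpy as np

# Back-of-envelope Eady (1949) growth rate for a baroclinically unstable
# current, sigma ~ 0.31 * (f / N) * dU/dz, with invented but ocean-like numbers.

f = 1.0e-4            # Coriolis parameter (1/s)
N = 2.0e-3            # buoyancy frequency in the main thermocline (1/s)
dU, H = 0.1, 1000.0   # velocity change (m/s) over depth H (m) of the sheared layer

shear = dU / H                       # vertical shear (1/s)
sigma = 0.31 * f * shear / N         # Eady growth rate (1/s)
print(f"e-folding time ~ {1.0 / sigma / 86400.0:.0f} days")
```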
Altimetric measurements, beginning in the 1980s, showed that ocean eddies with scales slightly larger than the first deformation radius dominate the ocean eddy kinetic energy globally (Stammer 1997), but with huge spatial inhomogeneity in levels of kinetic energy and spectral distributions (Fig. 7-12), and understanding their role became a central activity, including the rationalization of the various power laws in that figure. The volume edited by Hecht and Hasumi (2008) and the textbook by Vallis (2017) review the subject up to their respective dates.
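The altimetric velocity and kinetic energy maps rest on surface geostrophy, u = -(g/f) ∂η/∂y and v = (g/f) ∂η/∂x. The sketch below applies those relations to a synthetic Gaussian eddy rather than to real gridded altimeter data.

```python
import numpy as np

# Surface geostrophic velocity and kinetic energy from a sea surface height
# (SSH) anomaly field, as done with altimetry.  The SSH field here is a
# synthetic Gaussian eddy; real maps would use gridded altimeter heights.

g, f = 9.81, 1.0e-4                       # gravity (m/s^2), Coriolis (1/s)
dx = dy = 10.0e3                          # grid spacing (m)
x = np.arange(-300e3, 300e3, dx)
y = np.arange(-300e3, 300e3, dy)
X, Y = np.meshgrid(x, y)

eta = 0.3 * np.exp(-(X**2 + Y**2) / (2 * (80e3) ** 2))   # 30-cm, 80-km eddy

deta_dy, deta_dx = np.gradient(eta, dy, dx)               # lateral derivatives of SSH
u = -(g / f) * deta_dy                                     # geostrophic u
v = (g / f) * deta_dx                                      # geostrophic v

ke = 0.5 * (u**2 + v**2)
print(f"peak geostrophic speed ~ {np.hypot(u, v).max():.2f} m/s")
print(f"mean surface kinetic energy ~ {ke.mean() * 1.0e4:.0f} cm^2/s^2")
```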
Much of the impetus in this area was prompted by the failure of climate models to reproduce the observed circulation of the Southern Ocean; their grids were too coarse to resolve turbulent eddies at the mesoscale. Because the effect of mesoscale eddy generation is to flatten density surfaces without causing any mixing across them (an aspect not previously fully recognized), Gent and McWilliams (1990) proposed a simple parameterization, which markedly improved the fidelity of climate models (Gent et al. 1995). It led the way to the development of theories of Southern Ocean circulation (Marshall and Speer 2012) and of the overturning circulation of the ocean (Gnanadesikan 1999; Wolfe and Cessi 2010; Nikurashin and Vallis 2011).
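In essence, the Gent-McWilliams closure adds an eddy-induced transport proportional to the isopycnal slope. A minimal two-dimensional (y, z) illustration of that bookkeeping follows, with an invented density field and an arbitrary transfer coefficient; it is a sketch of the idea, not of any model's implementation.

```python
import numpy as np

# Sketch of the Gent-McWilliams (1990) eddy-induced transport in a (y, z)
# slice: streamfunction psi* = kappa * s, where s is the isopycnal slope,
# with eddy-induced velocities v* = -d(psi*)/dz and w* = d(psi*)/dy.
# Density field and the transfer coefficient kappa are invented.

kappa = 1000.0                                # eddy transfer coefficient (m^2/s)
y = np.linspace(0.0, 2000e3, 201)             # northward distance (m)
z = np.linspace(-2000.0, 0.0, 101)            # depth (m)
Y, Z = np.meshgrid(y, z)

# Stratified density field: surface water is lighter toward large y,
# so isopycnals deepen northward.
rho = 1027.0 - 2.0e-3 * Z - 0.5e-6 * (Y - 1000e3) * np.exp(Z / 500.0)

drho_dz, drho_dy = np.gradient(rho, z, y)     # axis 0 is z, axis 1 is y
slope = -drho_dy / drho_dz                    # isopycnal slope s
psi_star = kappa * slope                      # eddy-induced streamfunction (m^2/s)

v_star = -np.gradient(psi_star, z, axis=0)    # eddy-induced meridional velocity
w_star = np.gradient(psi_star, y, axis=1)     # eddy-induced vertical velocity
print(f"max |isopycnal slope| ~ {np.abs(slope).max():.1e}")
print(f"max |v*| ~ {np.abs(v_star).max() * 100:.2f} cm/s")
```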
Attention has shifted more recently to the turbulence that develops at scales below approximately 10 km-the so-called submesoscales (McWilliams 2016). Sea surface temperature maps show a rich web of filaments no more than a kilometer wide (see Fig. 7-13).
Unlike mesoscale turbulence, which is characterized by eddies in geostrophic balance, submesoscale motions become progressively less balanced as the scale diminishes, as a result of a host of ageostrophic instabilities (Boccaletti et al. 2007; Capet et al. 2008; Klein et al. 2008; Thomas et al. 2013). Unlike the mesoscale regime, energy is transferred to smaller scales and exchanged with internal gravity waves, thereby providing a pathway toward energy dissipation. Both the dynamics of submesoscale turbulence and its interaction with the internal gravity wave field are topics of current research and will likely remain the focus of much theoretical and observational investigation for at least the next few decades.
c. The vertical mixing problem
Although mesoscale eddies dominate the turbulent kinetic energy of the ocean, it was another form of turbulence that was first identified as crucial to explaining the observed large-scale ocean state. Hydrographic sections showed that the ocean is stratified all the way to the bottom. Stommel and Arons (1960a,b) postulated that the stratification was maintained through diffusion of temperature and salinity from the ocean surface. However, molecular processes were too weak to diffuse significant amounts of heat and salt. Eckart (1948) had described how ''stirring'' by turbulent flows leads to enhanced ''mixing'' of tracers like temperature and salinity. Stirring is to be thought of as the tendency of turbulent flows to distort patches of scalar properties into long filaments and threads. Mixing, the ultimate removal of such scalars by molecular diffusion, would be greatly enhanced by the presence of stirring, because of the much-extended boundaries of patches along which molecular-scale derivatives could act effectively. [A pictorial cartoon can be seen in Fig. 7-14 (Welander 1955) for a two-dimensional flow. Three-dimensional flows, which can be very complex, tend to have a less effective horizontal stirring effect but do operate also in the vertical direction.] Munk (1966), in a much-celebrated paper, argued that turbulence associated with breaking internal waves on scales of 1-100 m was the most likely candidate for driving stirring and mixing of heat and salt in the abyss-geostrophic eddies drive motions along density surfaces and therefore do not generate any diapycnal mixing.

FIG. 7-12. Estimated power laws of balanced eddy wavenumber spectra (Xu and Fu 2012). From this result and that in Fig. 7-11, one might infer that the ocean has a minimum of about 14 distinct dynamical regimes plus the associated transition regions.
Because true fluid mixing occurs on spatial scales that are inaccessible to numerical models, and with the understanding that the stirring-to-mixing mechanisms control the much-larger-scale circulation patterns and properties, much effort has gone into finding ways to ''parameterize'' the unresolved scales. Among the earliest such efforts was the employment of so-called eddy or Austausch coefficients that operate mathematically like molecular diffusion but with immensely larger numerical diffusion coefficients (Defant 1961). Munk (1966) used vertical profiles of temperature and salinity and estimated that maintenance of the abyssal stratification required a vertical eddy diffusivity of 10⁻⁴ m² s⁻¹ (memorably, 1 cm² s⁻¹ in the older cgs system), a value that is 1000 times as large as the molecular diffusivity of temperature and 100 000 times as large as the molecular diffusivity of salinity.
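The order of magnitude follows from a one-dimensional advective-diffusive balance, w dT/dz = κ d²T/dz². The sketch below fits the e-folding scale of an invented abyssal profile and assumes an upwelling speed of order 1 cm per day; Munk's actual calculation was fuller (it also used radiocarbon), so this is illustrative only.

```python
import numpy as np

# Sketch of Munk's (1966) "abyssal recipes" estimate: in a vertical
# advective-diffusive balance, w dT/dz = kappa d2T/dz2, an exponential
# temperature profile has scale height h = kappa / w.  Fitting h from an
# invented abyssal profile and assuming an upwelling speed of about
# 1 cm/day gives the famous kappa ~ 1e-4 m^2/s.  Numbers are illustrative.

z = np.linspace(-4000.0, -1000.0, 31)               # abyssal depths (m)
T = 1.0 + 3.0 * np.exp((z + 1000.0) / 900.0)         # synthetic temperature profile (deg C)

# Fit the e-folding scale of (T - T_deep) by linear regression on log(T - 1).
slope, _ = np.polyfit(z, np.log(T - 1.0), 1)
h = 1.0 / slope                                      # fitted scale height (m)

w = 1.2e-2 / 86400.0                                 # assumed upwelling, ~1.2 cm/day (m/s)
kappa = w * h
print(f"scale height ~ {h:.0f} m, implied kappa ~ {kappa:.1e} m^2/s")
```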
For technical reasons, early attempts at measuring the mixing generated by breaking internal waves were confined to the upper ocean and produced eddy diffusivity values that were an order of magnitude smaller than those inferred by Munk (see Gregg 1991). This led to the notion that there was a ''missing mixing'' problem. However, the missing mixing was found when the technology was developed to measure mixing in the abyssal ocean-the focus of Munk's argument [see the reviews by Wunsch and Ferrari (2004) and Waterhouse et al. (2014)]. Estimates of the rate at which internal waves are generated and dissipated in the global ocean [Munk and Wunsch (1998) and many subsequent papers] further confirmed that there is no shortage of mixing to maintain the observed stratification. The field has now moved toward estimating the spatial patterns of turbulent mixing with dedicated observations, and more sophisticated schemes are being developed to better capture the ranges of internal waves and associated mixing known to exist in the oceans. In particular, it is now widely accepted that oceanic boundary processes, including sidewalls, and topographic features of all scales and types dominate the mixing process, rather than it being a quasi-uniform open-ocean phenomenon (see Callies and Ferrari 2018).
ENSO and other phenomena
A history of ocean circulation science would be incomplete without mention of El Niño and the coupled atmospheric circulation known as El Niño-Southern Oscillation (ENSO). What was originally regarded as primarily an oceanic phenomenon of the eastern tropical Pacific Ocean, with implications for Ecuador-Peru rainfall, came in the 1960s (Bjerknes 1969;Wyrtki 1975b) to be recognized as both a global phenomenon and as an outstanding manifestation of ocean-atmosphere coupling. As the societal impacts of ENSO became clear, a major field program [Tropical Ocean and Global Atmosphere (TOGA)] emerged. A moored observing system remains in place. Because entire books have been devoted to this phenomenon and its history of discovery (Philander 1990;Sarachik and Cane 2010;Battisti et al. 2019), no more will be said here.
The history of the past 100 years in physical oceanography has made it clear that a huge variety of phenomena, originally thought of as distinct from the general circulation, have important implications for the latter. These phenomena include ordinary surface gravity waves (which are intermediaries of the transfer of momentum and energy between ocean and atmosphere) and internal gravity waves. Great progress has occurred in the study of both of these phenomena since the beginning of the twentieth century. For the surface gravity wave field, see, for example, Komen et al. (1994).
For internal gravity waves, which are now recognized as central to oceanic mixing and numerous other processes, the most important conceptual development of the last 100 years was the proposal by Garrett and Munk (1972; reviewed by Munk 1981) that a quasi-universal, broadband spectrum existed in the oceans. Thousands of papers have been written on this subject in the intervening years, and the implications of the internal wave field, in all its generality, are still not entirely understood.
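A commonly quoted simplified form of the Garrett-Munk frequency dependence is B(ω) ∝ f/(ω√(ω² − f²)) between the inertial and buoyancy frequencies, falling roughly as ω⁻² well above f. The snippet below evaluates that form with illustrative midlatitude values and should be read as a sketch of the spectral shape, not a full GM model.

```python
import numpy as np

# Frequency dependence commonly quoted for the Garrett-Munk internal-wave
# spectrum: B(omega) ~ (2/pi) * f / (omega * sqrt(omega^2 - f^2)) between the
# inertial frequency f and the buoyancy frequency N.  Illustrative values only.

f = 1.0e-4                                  # inertial frequency (rad/s)
N = 5.0e-3                                  # buoyancy frequency (rad/s)
omega = np.geomspace(1.01 * f, N, 200)      # frequencies between f and N

B = (2.0 / np.pi) * f / (omega * np.sqrt(omega**2 - f**2))

# The form integrates to roughly 1 over the band (the crude sum below misses
# a little of the energy concentrated right at f), and falls as omega^-2.
print(f"integral of B over [f, N] ~ {np.sum(B[:-1] * np.diff(omega)):.2f}")
print(f"high-frequency slope ~ {np.polyfit(np.log(omega[-50:]), np.log(B[-50:]), 1)[0]:.1f}")
```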
Numerical models
Numerical modeling of the general ocean circulation began early in the postwar computer revolution. Notable early examples were Bryan (1963) and Veronis (1963). As computer power grew, and with the impetus from MODE and other time-dependent observations, early attempts [e.g., Holland (1978), shown in Fig. 7-15] were made to obtain resolution adequate in regional models to permit the spontaneous formation of balanced eddies in the model. Present-day ocean-only global capabilities are best seen intuitively in various animations posted on the Internet (e.g., https://www.youtube.com/watch?v=CCmTY0PKGDs), although even these complex flows still have at best a spatial resolution insufficient to resolve all important processes. A number of attempts have been made at quantitative description of the space-time complexities in wavenumber-frequency space [e.g., Wortham and Wunsch (2014)]. Ocean models, typically with grossly reduced spatial resolution, have, under the growing impetus of interest in climate change, been coupled to atmospheric models. Such coupled models (treated elsewhere in this volume) originated with one-layer ''swamp'' oceans with no dynamics. Bryan et al. (1975) pioneered the representation of more realistic ocean behavior in coupled systems.
a. The resolution problem
With the growing interest in the effects of the balanced eddy field, the question of model resolution has tended to focus strongly on the need to realistically resolve both it and the even smaller submesoscale with Rossby numbers of order 1. Note, however, that many features of the quasi-steady circulation, especially the eastern and western boundary currents, require resolution equal to or exceeding that of the eddy field. These currents are very important in meridional property transports of heat, freshwater, carbon, and so on, but parameterization of unresolved transports has not been examined. Figure 7-16 shows the Gulf Stream temperature structure in the WOCE line at 67°W for the top 500 m (Koltermann et al. 2011). The very warmest water has the highest west-to-east u velocity here, and its structures, both vertically and horizontally, are important in computing second-order products ⟨uC⟩ for any property C, including temperature and salinity. Apart from the features occurring at and below distances of about 1° of latitude, like the submesoscale, the vertical structure requires resolution of baroclinic Rossby radii of modes much higher than the first.

FIG. 7-16. Gulf Stream temperature structure in the WOCE line at 67°W for the top 500 m. (From Koltermann et al. 2011, p. 92.)
For the most part, ocean and climate modelers have sidestepped the traditional computational requirement of demonstrating numerical convergence of their codes, primarily because new physics emerges every time the resolution is increased. Experiments with regional, but very high resolution, models suggest, for example, that in the vicinity of the Gulf Stream (and other currents) latitude and longitude resolution nearing 1/50° (about 2 km) is required (Lévy et al. 2010). In the meantime, the computational load has dictated the use of lower-resolution models of unknown skill-models sometimes labeled as ''intermediate complexity'' and other euphemisms. When such under-resolved models are run for long periods, their accuracy and precision in the presence of systematic and stochastic errors must be understood.
b. State estimation/data assimilation
The meteorological community, beginning in the early 1950s (Kalnay 2002), pioneered the combination of observational data with system dynamics encompassed in the numerical equations of general circulation models [numerical weather prediction (NWP) models]. Almost all of this work was directed at the urgent problem of weather forecasting and came to be known as ''data assimilation.'' In the wider field of control theory, data assimilation is a subset of much more general problems of model-data combinations (see Brogan 1990). In particular, it is the subset directed at prediction-commonly for days to a couple of weeks.
When, in much more recent years, oceanographers did acquire near-global, four-dimensional (in space and time) datasets, the question arose as to how to make ''best estimates'' of the ocean using as much of the data as possible and the most skillful GCMs. The well-developed methods from NWP were used by some (e.g., Carton and Giese 2008) to make estimates of the changing ocean, ignoring the important point that the methods were directed at prediction and were not general best estimators. Without the urgent demands for short-range forecasts that drove the meteorological methods, the oceanographic problem was (and still is) primarily that of scientific understanding of the time-evolving system. In that context, the now-conventional data-assimilation methods were scrutinized for physical consistency, and it came to be recognized that prediction schemes, as used in so-called reanalyses, failed to satisfy basic global conservation laws (heat, freshwater, vorticity, energy, etc.). The more general methods of control theory specifically distinguish the prediction problem from the ''smoothing'' problem, whereby the results are intended to satisfy known equations of motion for all times, without unphysical jumps in state variables. Such jumps have no detrimental effect on the prediction problem. They do raise fundamental questions when used for understanding the physics of a system.

FIG. 7-15. Regional eddy-resolving ocean model solution (Holland 1978). Note the very limited geographical area.
With this recognition, some effort has gone into finding estimates of the time-evolving ocean circulation over months to decades that would be physically consistent. These early explorations (e.g., Stammer et al. 2003) generally began with the smoother problem, in which the hypothetically optimal sequential prediction algorithm (the Kalman filter) was replaced by sequential methods producing a dynamically self-consistent solution (see footnote 13). At the present time, although not yet in widespread use, multidecadal estimates of the ocean circulation employing Lagrange multipliers do exist (e.g., Fukumori et al. 2018) and are being analyzed to understand the mean and time-changing ocean. These solutions remain of coarser resolution than theory requires; the very great power of geostrophic balance governing the interior data is used to argue, as in the Stommel-Arons theory, that essentially passive boundary currents will transport the required mass and, at least to some extent, heat, salt, and so on that are necessary to reproduce the constraints of interior observations. With unresolved boundary dynamics, boundary currents will be dominated locally by dissipative processes.

Footnote 13: In a linear, or linearizable, discrete dynamical system, the optimal predictor is what is known as the Kalman filter. It is an elegant solution, and for linear systems it provides an optimal prediction if accurate covariances are used. Although its use has become a ''buzzword'' in the climate community, its computational cost is so great that in practice it is never used in weather or climate forecasting.
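The filtering-versus-smoothing distinction can be seen in a scalar toy problem: the Kalman filter is the optimal causal estimator, while a smoother (here the Rauch-Tung-Striebel form) revisits the whole record and remains consistent with the assumed dynamics at all times. This is a generic textbook construction with invented noise levels, not a description of any operational ocean state estimate.

```python
import numpy as np

# Toy filtering vs. smoothing for a scalar "ocean state" following a random
# walk, observed with noise.  Identity dynamics: x_k = x_{k-1} + process noise.

rng = np.random.default_rng(0)
nt, q, r = 200, 0.01, 0.5            # steps, process variance, observation variance
truth = np.cumsum(rng.normal(0.0, np.sqrt(q), nt))
obs = truth + rng.normal(0.0, np.sqrt(r), nt)

# Forward Kalman filter (causal; each estimate uses only past and present data).
xf, pf = np.zeros(nt), np.zeros(nt)
x, p = 0.0, 1.0
for k in range(nt):
    p = p + q                        # predict
    kgain = p / (p + r)              # update with observation k
    x = x + kgain * (obs[k] - x)
    p = (1.0 - kgain) * p
    xf[k], pf[k] = x, p

# Rauch-Tung-Striebel backward smoother (uses the whole record).
xs = xf.copy()
for k in range(nt - 2, -1, -1):
    c = pf[k] / (pf[k] + q)          # smoother gain
    xs[k] = xf[k] + c * (xs[k + 1] - xf[k])

print(f"rms filter error  : {np.sqrt(np.mean((xf - truth) ** 2)):.3f}")
print(f"rms smoother error: {np.sqrt(np.mean((xs - truth) ** 2)):.3f}")
```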
c. High-resolution models
With the increase in computer power, it has recently become possible to run high-resolution numerical simulations of ocean regions a few hundred kilometers across. Such simulations have mesh grids of a few meters and resolve most of the ocean physics down to the scale of breaking gravity waves. At this resolution the models can be configured to simulate ocean regions that are targets of field campaigns to fill the observational gaps. For example, many of the recent advances in the understanding of submesoscale dynamics have come from a careful coordination of numerical experiments and field campaigns [e.g., the Scalable Lateral Mixing and Coherent Turbulence (LatMix) process study described in Shcherbina et al. (2015)].
Ocean models have become an essential tool to interpret both the global ocean state and also its dynamics at the mesoscale and below. The challenge remains to bring all of these interacting scales into a unified picture.
The Southern Ocean
As already noticed, the Southern Ocean had long been recognized as having a physics that is distinct from the midlatitude and equatorial regions (Fig. 7-17). Without a supporting pressure gradient, ordinary Sverdrup dynamics cannot be applied to flows within the zonally unbounded latitude band of Drake Passage. Mesoscale eddies are now believed to play a central role in both the dynamics and the thermodynamics, making the Southern Ocean a turbulent fluid that cannot be understood with linear models.
The basic features of the circulation in the Southern Ocean had been identified as early as the mid-1930s (Sverdrup 1933; Deacon 1937) from hydrographic measurements collected primarily during the Challenger (1872-76) and Meteor (1925-27) Expeditions. The strong eastward flow of the Antarctic Circumpolar Current (ACC) connects each of the ocean basins and is part of a global overturning circulation, inferred by following property extrema such as an oxygen minimum or salinity maxima, consisting of deep water that spreads poleward and upward across the ACC and is balanced by an equatorward flow in lighter and denser layers, as sketched in Fig. 7-18. Deacon (1937) originally showed that this flow pattern is consistent with the wind stress acting on the sea surface. A deep return flow must develop to compensate the northward Ekman flow and close the overturning circulation. This overturning cell was named the Deacon cell by K. Bryan (Döös et al. 2008). Deacon, however, had only suggested that there ought to be a subsurface return flow, and not a deep cell extending well below the thermocline. The full cell instead appeared in the solutions computed much later by Wyrtki (1961) using observed winds and available hydrographic sections.
At first, it was assumed that the Deacon cell involved deep downwelling of actual water parcels, which is difficult to reconcile with the prevailing strong stratification because it would require very strong diapycnal flow, especially in the downwelling branch. Furthermore, the Deacon cell consisted of a closed loop with no exchange of waters with the basins to the north, again inconsistent with the circulation inferred from measured property extrema.
The problem came into focus in the 1990s as computer power increased to the point that full global simulations of the three-dimensional ocean circulation became feasible (e.g., McWilliams 1998). Model solutions produced a Deacon cell within the latitude band of the Drake Passage, but they were characterized by vertical isopycnals there, a result of resolution inadequate to produce the inevitable baroclinic instability. When the Gent and McWilliams (1990) eddy-effects parameterization was implemented in ocean models, it produced an overturning circulation that crossed density surfaces only in the surface mixed layer and that resulted in a much more realistic stratification. The parameterization, however, could not be tested easily because the resulting isopycnal slope depended on an arbitrary Austausch coefficient in the representation.
It took another decade to realize fully that the lack of lateral continental barriers in the latitude band of the Drake Passage makes the Southern Ocean dynamically similar to the midlatitude atmosphere (Gent et al. 1995; Marshall and Radko 2003, 2006). In particular, the ''vanishing'' of the Deacon cell is analogous to the atmospheric Ferrel cell, which is also cancelled by a counter-rotating eddy-driven cell. In both atmosphere and ocean a ''residual'' overturning circulation appears to exist that is composed of two counter-rotating cells stacked on top of each other. This residual circulation is the net circulation experienced by tracers in the ocean and sketched in Fig. 7-18-an estimate supported by modern observations and inverse models (Ganachaud and Wunsch 2000; Lumpkin and Speer 2007; Talley 2013). Marshall and Speer (2012) review the recent understanding of Southern Ocean dynamics.

FIG. 7-17. Intense structure in the Southern Ocean is an indicator of the strong topographic effects and the generally distinct physics of that region. The strong (ageostrophic) divergence near the equator is also conspicuous in the Atlantic and Pacific Oceans.
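The strength of the residual cell described above can be gauged from a scale estimate in the spirit of the zonally averaged theories just cited: a wind-driven (Ekman) cell of order τ/(ρ0|f|) per unit length of the ACC is partly cancelled by an opposing eddy-induced cell of order κ|s|. All numbers below are invented but of plausible magnitude.

```python
# Scale estimate of the Southern Ocean residual overturning as the small
# difference between a wind-driven (Ekman) cell and an opposing eddy-induced
# cell, in the spirit of Marshall and Radko (2003).  Numbers are invented.

rho0 = 1025.0          # reference density (kg/m^3)
f = -1.2e-4            # Coriolis parameter at ~55 deg S (1/s)
tau = 0.15             # eastward wind stress (N/m^2)
kappa = 800.0          # eddy transfer coefficient (m^2/s)
slope = 1.0e-3         # magnitude of the isopycnal slope across the ACC
Lx = 2.1e7             # circumpolar path length (m)

psi_ekman = tau / (rho0 * abs(f))   # wind-driven cell per unit length (m^2/s)
psi_eddy = kappa * slope            # opposing eddy-induced cell (m^2/s)
psi_res = psi_ekman - psi_eddy      # residual cell (m^2/s)

sv = 1.0e6                          # 1 Sverdrup = 10^6 m^3/s
print(f"Ekman cell ~ {psi_ekman * Lx / sv:.1f} Sv, eddy cell ~ {psi_eddy * Lx / sv:.1f} Sv")
print(f"residual overturning ~ {psi_res * Lx / sv:.1f} Sv")
```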
Observations had by then demonstrated that the Southern Ocean is a critical element in the global overturning circulation-blending waters from the salty Atlantic with the fresher Indian and Pacific Oceans, and producing dense waters along the shelves of Antarctica that filled the ocean abyss at rates comparable to those of the dense waters formed in the North Atlantic [see the discussion in Sloyan and Rintoul (2001)]. It was not until a basic understanding of the dynamics of the Southern Ocean had emerged, however, that the theoretical focus shifted to the interaction of the Southern Ocean with the ocean basins to the north. Biogeochemists were instrumental in bringing the question to the fore, because they realized that the high latitudes appear to exert a strong control on the CO2 concentration of the global ocean and hence the atmosphere (Knox and McElroy 1984; Sarmiento and Toggweiler 1984; Siegenthaler and Wenk 1984). Toggweiler and Samuels (1995) showed that in models the AMOC is very sensitive to the strength of the winds blowing over the Southern Ocean. Full theories of the overturning circulation that couple the Southern Ocean to the Atlantic, Indian, and Pacific Oceans are now under development [see, in particular, Gnanadesikan (1999), Wolfe and Cessi (2010), Nikurashin and Vallis (2011), and Ferrari et al. (2017)].
a. Paleophysical oceanography
The study of paleoceanography, as a subset of paleoclimate, gradually matured in the years after WWII, with most of the foundation built upon the measured isotopic ratio techniques developed during the war, in a geochemical/geological setting. Much of the most useful data, roughly corresponding to time series, came from drilling into deep-sea sediments and measuring quantities such as the chlorinity (salinity ice present at all. What the role of the ocean has been in these climatic changes and how the necessarily different ocean circulations would have influenced the isotopic ratio distributions and their interpretation became urgent problems. In practice, paleoceanographic time scales span a range from the decades before global instrumental data became available to thousands and hundreds of millions of years ago. The problems of paleoceanographic circulation inference are manifold but are mainly due to the paucity of data (deep-sea core measurements are possible only in the limited regions of major, preserved, sedimentary structures on the seafloor, and properties within cores can undergo complex in situ transformations), and little information exists about the state of the overlying atmosphere. Combined with the uncertain relationships between the ''proxies'' measurable in cores and physical variables of interest such as water temperature, making inferences about the ocean circulation becomes an exercise in stacked assumptions. In particular, absent knowledge of the wind field, computing past circulations is especially difficult. Paleophysical oceanography was reviewed by Huybers and Wunsch (2010).
Coupled models of the atmosphere and oceans, sometimes also including land and sea ice, are now being used as well (e.g., Muglia and Schmittner 2015), although lower-resolution versions exist from earlier efforts. All of the difficulties of modern coupled modeling are encountered, but these are greatly aggravated by the lack of data and the immense range of time scales encountered in both modeling and observations. Barring a breakthrough in the observational problem, these coupled models might be labeled a form of ''geopoetry'' or ''geonovel'' (following Harry Hess on seafloor spreading) and will remain so for the foreseeable future.
b. Multitudes of physics
Ocean physics encompasses a wide variety of phenomena that are not discussed here. Among the topics omitted are the important subfields devoted to sea ice, the Arctic Seas, tidal dynamics, flow and sediment interactions, the physics of the air-sea boundary, and the entire related fields of geochemistry and biology, as well as the surface and internal wave problems to which we have already alluded. All of these have some bearing on the general circulation and the deduction of its properties. The time at which a one-volume compendium (Sverdrup et al. 1942) could encompass all of oceanography vanished long ago, a measure of how far we have collectively come.
Future of the subject
Prediction of the future ocean circulation-under global climate change-is a difficult subject in its own right, and the history of such efforts is beyond our present ambitions. One can nonetheless speculate about the science itself. This review is being written at a time of very great uncertainty about the future of U.S. science, particularly that part related to the environment and climate change in general. What the next few years will bring to U.S. science is unpredictable. A particular worry concerns the necessary ongoing observations of the ocean-observations that are almost wholly government supported. In the bleakest outcome, one or more generations of expert scientists will be lost, whose experience and interest it would take decades to recover, and data gaps will exist that can never be filled. Whether the rest of the world can or will compensate for a decrease in U.S. efforts also remains an enigma.
Setting aside this possibility of a new dark age, we can consider the trajectory of physical oceanography as it has existed for the past few decades and attempt to extrapolate it into the future. That the emphasis in any science generally depends also upon the supporting societal infrastructure renders fraught any prediction.
The earlier reasonable assumptions that a few basic principles applied almost everywhere (e.g., the Sverdrup relation or uniform upwelling) have become untenable: many regionally distinct ''oceans'' exist with differing physics and expected differences in future response. These oceans differ by geography, by time scale, and by which physical elements dominate on varying space and time scales.
a. Future instrumentation and observations
It is doubtful that anyone in 1919 could have foreseen the technologies of ocean observation that had developed by the end of the twentieth century. Whether the advances of the past 100 years have been unique-as they made available the fruits of the electromagnetic and quantum revolutions-is an imponderable.
Physical oceanography in all its generality is likely to remain an observational science for the foreseeable future: even as models become more capable, ever more detail will require testing and confirmation. For the global problem-our focus here-the move away from ships as the primary platform must continue. Growth can be expected in the autonomous float, drifter, autonomous vehicle, and animal-borne technologies, preferably to the point where the whole oceanic volume is covered sufficiently rapidly that the residual space-time aliasing is tolerable or is at least quantitatively bounded.
Historically, the combination of Eulerian fluid dynamics along with near-Eulerian measurements has led to the deepest insights into basic fluid physics. The understandable growth of freely moving devices-with low production costs and easy deployments-has taken the field into the complications of Lagrangian fluid mechanics and near-Lagrangian measurements. The fluid mechanics theory is much more difficult, and the measurements inextricably mix temporal and spatial structures and statistics. Revival of serious global-scale moored measurements using more modern technology is needed. History suggests that the appearance of some entirely unforeseen new technology should be expected, but ''if'' and ''what'' are shrouded in darkness. Perhaps the underexploited methods of acoustics-for example, natural-sound tomography-or of biologically based sensors will develop into routine global-scale measurements.
For satellites, the main challenge is likely to be the maintenance and incremental improvement of the existing technologies-almost the whole accessible electromagnetic spectrum has already been explored. If entirely novel remote sensing methods can exist, they are unknown to us.
Physical oceanography is likely to remain distinct from physical/dynamical meteorology into the future. Weather forecasting, its gradual extension into seasonal time scales, and its improvements are likely to remain the focus of the atmospheric sciences. Although the ocean has a clear analog of ''weather'' in its balanced and near-balanced eddy field, detailed prediction in any particular region is likely to remain of most interest to narrow military, fishing, or shipping groups. Prediction of the basin-to-global oceanic state decades in the future will be of intense scientific and practical interest, but the long times required to test any such predictions mean that the focus of the science is likely to continue to be on understanding rather than on prediction. The infrastructure of weather and climate forecasting, largely built on major civilian government laboratories and funding resources around the world, has no counterpart in oceanography. Thus, the research flavor of the two fluids is likely to remain distinct.
b. Future theory and modeling
Much of the existing effort surrounding physical oceanography on the large scale is directed at the use and interpretation of model results. As has been true of ocean numerical modeling since its beginnings in the 1950s, much of the focus concerns the effects of inadequate resolution on the large-scale flows. Over the next decades, two possible routes to full success can be envisioned: 1) Continued growth in computer power, perhaps through further revolutionary (e.g., quantum) methods, will ultimately permit complete resolution of all space and time scales of importance. Whether that includes reaching the laminar (viscous/diffusive limit) scale and the smallest important topographic scales on a global basis is not known. The computing power and storage requirements are entirely forbidding, but the capability available today would have seemed miraculous in 1919. 2) Fully adequate parameterizations of all scales of oceanic turbulence (at least from the mesoscale through internal waves and submesoscale waves to the inertial subrange) will be developed-conceivably with machine-learning algorithms trained with observations and high-resolution numerical simulations-permitting the quantitatively accurate calculation of their influence on coarser resolved scales.
As seems likely, the physics governing the circulation and variability will be distinct, at least to a degree, at every spatial point. The intellectual challenge will then be to extract the major governing principles operating globally so that one will have come full circle from the initial global-and basin-scale theories to the full understanding of regionality-and then back to determining the universal, overall, governing principles. In a rational world, the next 100 years ought to be very interesting!
Fields with no recent legume cultivation have sufficient nitrogen-fixing rhizobia for crops of faba bean (Vicia faba L.)
(1) To assess the biological N fixation (BNF) potential of varieties of faba bean (Vicia faba L.) cropped with or without compost in an experimental field-scale rotation with no recent history of legumes, (2) to enumerate soil populations of Rhizobium leguminosarum sv. viciae (Rlv) and to genetically characterize the nodulating Rlv strains, and (3) to compare BNF with that at other sites in Britain. BNF was evaluated from 2012 to 2015 using ¹⁵N natural abundance. Treatments were either PK fertilizer or compost. Soil rhizobial populations were determined using qPCR, the symbiotic rhizobia were genotyped (16S rRNA, nodA and nodD genes), and their BNF capacity was assessed ex situ. The reliance of legumes on BNF at other British sites was estimated in a single season, and their nodulating symbionts examined. Faba bean obtained most of its N through BNF (>80%) regardless of variety or year. N-accumulation by cvs Babylon and Boxer increased with compost treatment in 2014/2015. Rhizobial populations were c. 10⁵-10⁶ Rlv cells g⁻¹ soil regardless of field or treatment. 157 Rlv microsymbionts grouped into two large nodAD clades; one mainly from V. faba, and the other from various legumes. All isolates nodulated, and some performed better than commercial inoculant strains. Faba bean can provide most of its nitrogen through BNF and leave economically valuable residual N for subsequent crops. Recent legume cropping in northern Europe is not essential for effective nodulation: rhizobia may persist in a range of farmland locations. Nevertheless, there is the potential to apply elite rhizobial strains as inoculants in some soils.
Introduction
Nodulated legumes are the largest contributors of biologically fixed nitrogen (N) to both natural terrestrial and agricultural environments and are a key component of sustainable agriculture (Cleveland et al. 1999; Jensen et al. 2012; Peoples et al. 2009; Udvardi et al. 2021; Unkovich et al. 2008). The root-nodulating symbioses that they form with a diverse range of soil bacteria, collectively termed "rhizobia" (De Meyer et al. 2011; Gyaneshwar et al. 2011; Peix et al. 2015), are capable of fixing more than 200 kg N ha⁻¹ yr⁻¹ in agricultural systems in both tropical (Alves et al. 2003) and temperate regions (Carlsson and Huss-Danell 2003; Iannetta et al. 2016; Jensen et al. 2010). These systems include legumes grown for their high-protein grains (for both human and animal consumption), as animal forage and fodder, and as "green manures", intercrops and understoreys.
If appropriately managed, legume crops can achieve high yields without applications of mineral N-containing fertiliser; in addition, they have the potential to contribute substantial amounts of fixed N to the succeeding non-legume crop (e.g. in rotations or crop sequences) or directly when used as green manures, intercrops or understoreys. While legumes are increasingly being used in tropical countries, e.g. soybean in Brazil (Alves et al. 2003), their use in northern European agriculture has greatly declined over the last 150 years (Squire et al. 2019). Since the mid-1900s, northern European crop systems have gained most of their N from synthetic fertilisers manufactured using the Haber-Bosch process, which has replaced the role of N-fixing legumes in cereal rotations and pastures. However, it is now apparent that overuse of industrially produced N-containing fertilisers can lead to eutrophication of rivers and the pollution of groundwater, and, following denitrification, can contribute substantial amounts of greenhouse gases (GHG) such as N₂O (Jensen et al. 2012). Given the likelihood of further statutory curbs on the use of N-fertilisers, and growing acceptance that locally grown pulses deliver improvements to human and animal nutrition with reduced economic and environmental costs compared to the importation of legume grain (e.g. of soybean from the Americas), the demand for N-fixing legumes in northern European cropping systems is expected to increase (Iannetta et al. 2016).
The most economically important grain legumes (pulses) in the UK and in northern Europe in general are pea (Pisum sativum L.) and faba bean (Vicia faba L.), which are cropped for their high-protein grains. Faba bean is capable of high biological N fixation (BNF) in many parts of the world, including Australia and Canada (Denton et al. 2013; Hossain et al. 2016; Unkovich et al. 2010; Van Zwieten et al. 2015), but also in Europe, where ¹⁵N-based field studies have indicated that it can fix between 73 and 335 kg shoot N ha⁻¹ yr⁻¹ (Giambalvo et al. 2012; Jensen et al. 2010; López-Bellido et al. 2006). However, there have been no studies to quantify BNF in the cooler, wetter climates of the British Isles since Sprent et al. (1977) estimated very large inputs of fixed N (600 kg ha⁻¹ yr⁻¹) from faba bean grown in East Scotland (UK) using the acetylene reduction assay. Faba bean varieties, farmers' practices and management have changed considerably over the last 40 years, and so these early BNF estimates may not reflect the position in commercial agriculture today. Even less is known about BNF by pea in the British Isles, although it is known to fix large amounts of N (albeit less than faba bean) in other parts of the world (Hossain et al. 2016; Jensen 1986, 1987; Unkovich et al. 2008, 2010).
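The ¹⁵N natural abundance approach referred to above reduces to simple arithmetic: the proportion of legume N derived from the atmosphere (%Ndfa) follows from the δ¹⁵N of the legume, of a non-fixing reference plant, and of the legume grown with fixation as its only N source (the "B value"). The sketch below uses invented example values, not data from this or any cited study.

```python
# Sketch of the 15N natural abundance calculation used in such studies:
# %Ndfa is estimated from the d15N of the legume, of a non-fixing reference
# plant, and of the legume when fully reliant on fixation (the "B value").
# All numbers below are invented examples.

def ndfa_percent(d15n_ref, d15n_legume, b_value):
    """Percent of legume N derived from atmospheric N2 fixation."""
    return 100.0 * (d15n_ref - d15n_legume) / (d15n_ref - b_value)

d15n_reference = 4.0     # per mil, non-fixing reference crop (e.g. a cereal)
d15n_faba = 0.2          # per mil, faba bean shoots
b_value = -0.5           # per mil, faba bean grown on N-free medium

pct = ndfa_percent(d15n_reference, d15n_faba, b_value)
shoot_n = 250.0          # measured shoot N, kg N per hectare
print(f"%Ndfa ~ {pct:.0f}%, fixed N ~ {pct / 100.0 * shoot_n:.0f} kg N/ha")
```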
Two critical factors need to be resolved before any programme for legume expansion can be put in place. First, in general, legumes will only nodulate and fix N if they need to, i.e. if the soil N status is sufficiently low (Peoples et al. 1991; Unkovich et al. 2008). However, northern European soils are usually already enriched in N because of the intensive use of fertilisers typical of commercial arable rotations, and so in order to induce legumes to nodulate and fix N, concentrations of soil N may need to be reduced, for example by the prior use of an N-demanding crop (Van Zwieten et al. 2015). Second, suitable bacteria need to be present in the soil. Faba bean, pea and lentil (Lens culinaris Medik.) are all nodulated by the common soil bacterium Rhizobium leguminosarum sv. viciae (Rlv) (Laguerre et al. 2003), which also nodulates several native European legumes in the genera Lathyrus and Vicia (Mutch and Young 2004). Populations of rhizobia are adversely affected by high soil N, but also by the absence of legume host crops over a prolonged period. This means that a legume crop may have reduced nodulation and N-fixation capacity (and may even fail altogether) if care is not taken to ensure that an adequately high population of suitable rhizobia is present in the soil. In studies of UK soils, Nutman and Hearne (1979) reported >1000-fold reductions in Rlv populations under continual fallow or cereal cropping, and Sorwli and Mytton (1986) suggested that BNF by faba bean might be limited by low rhizobial populations and/or ineffective Rlv genotypes. Similarly, in France, Depret et al. (2004) concluded that long-term cropping of cereals, particularly maize (Zea mays L.), resulted in a marked decrease in the diversity of Rlv. More recently, Walker and Watson (2011) have recommended that pulse crops in the UK be inoculated with Rlv as part of a strategy to overcome "yield instability" brought about by reductions in Rlv populations as a consequence of the long-term absence of legumes in many soils.
The opportunity to test the overarching hypothesis that the prolonged absence of legume cropping results in reductions in soil Rlv populations, and hence impacts negatively on yields of legume crops that are largely dependent on BNF, arose at the Centre for Sustainable Cropping (CSC). The CSC is a long-term experimental platform close to Dundee, Scotland, with no recent history of legume cropping. Furthermore, it was established with the primary aim of testing whether yields obtained by conventional arable management, comprising standard local fertiliser inputs, can be maintained or even bettered under a low-input integrated system comprising inputs of green waste compost only. Therefore, it also allowed for testing the additional hypothesis that applications of compost can help maintain (and even enhance) soil Rlv populations, and hence legume BNF and grain yields.
In this context, there were five aims to the present study:
1. To determine the contribution of BNF to several varieties of faba bean in a four-year field-scale arable rotation in soils amended with or without compost.
2. To estimate populations of Rlv in the faba bean-cropped soils using a qPCR approach based on 16S rRNA and nodD primers.
3. Considering the prolonged absence of legume cropping at the site, to determine the potential origins of the Rlv strains nodulating the faba bean by comparing them genetically with strains isolated from neighbouring domesticated and wild relatives of faba bean in the genera Vicia, Pisum, Lathyrus and Lens.
4. To assess the plant growth-promoting performance of the rhizobial isolates.
5. To assess the degree to which the N-needs of pea and faba bean crops in other parts of Britain are met by BNF, and to use these crops as additional sources of genetically and functionally diverse Rlv genotypes to compare with those isolated from the main study site and its immediate environs.
Materials and methods
Experimental design of a 4-year experimental crop sequence at the CSC incorporating faba bean

Faba bean was sampled annually from 2012 to 2015 at the CSC field-scale experimental platform in Balruddery Farm, Dundee, North-East Scotland, UK (56.48 lat, -3.13 long). The location of the CSC is indicated in Fig. 1, along with the other sites where BNF was assessed and/or rhizobia were sampled (Table S1). The fields comprising the CSC were not cropped with legumes for at least 50 years prior to the onset of the experimental rotation in 2011; they provided grass for grazing and/or cattle fodder until the late 1990s, and thereafter were part of an arable rotation mainly comprising winter cereals. The CSC experimental platform is a 42 ha contiguous block of six arable fields based on a six-year rotation of the commonly grown crops in the region: potatoes (Solanum tuberosum L.), winter wheat (Triticum aestivum L.), winter barley (Hordeum vulgare L.), winter oilseed rape (Brassica napus L.), spring-sown faba bean and spring barley. It was established as a long-term field platform in which conventional arable management is being compared with a low-input, integrated cropping system designed to maintain yields whilst enhancing biodiversity and minimizing environmental impact. More details about the CSC can be found in Hawes et al. (2018) and Freitag et al. (2018), and at https://csc.hutton.ac.uk. The soil N-status of each field is shown in Table 1 along with the non-legume crop that preceded faba bean. Monthly average temperature and precipitation during the 2012-2015 growing seasons are shown in Fig. S1. The experimental crop sequence was conducted on four of the six fields comprising the CSC; each 6-7 ha field was divided in half: one half (labelled conventional) was treated with a standard application of fertilizer currently used in east Scotland (i.e. PK, but not N), while the other half (labelled integrated) received a reduced level of PK fertilizer, but was also treated with green waste compost containing 1.35% N and 16.01% C, which was applied annually before seed sowing at a rate of 35 t ha −1 (Table 1). In each season, four to five varieties of faba bean were sown in 18 m wide strips in each field half at a sowing density of 210-320 kg ha −1 , depending on variety. Although some of the varieties changed over the four seasons owing to difficulties in obtaining sufficient quantities of seed, two of them (cvs Fuego and Pyramid) were sown in each year from 2012 to 2015.
Calculation of BNF using the 15 N natural abundance technique

Biomass and BNF measurements were undertaken at early to mid-podfill, when grain legume BNF is maximal (Jensen et al. 2010;Rose et al. 2018;Unkovich et al. 2008); in Scotland this is normally late July/early August. Faba bean plants in 0.5 m 2 sampling points (five points per strip) were harvested and the dry biomass of the entire aerial parts (stems, leaves and pods) was weighed for an estimate of above-ground biomass per hectare (Freitag et al. 2018;Hawes et al. 2018). A single whole faba bean plant with an intact nodulated root system was sampled from the edge of each sampling point (Freitag et al. 2018;Hawes et al. 2018). Adjacent non-legume dicot weeds or non-legume crop volunteers were sampled as non-N-fixing reference plants for 15 N natural abundance assays (Carlsson and Huss-Danell 2014) (listed in Table S2); at least one reference plant was sampled from each of the five sampling points, so that each strip was represented by five faba bean plants and five references. These were set aside for %N and 15 N analysis (see next section). The crop was harvested according to standard farm practice in September or October, depending on the weather. The dry weight of grain per 18 m-wide strip was converted to yield in t ha −1 (taking into account a standard 15% moisture content).
The five individual faba bean plants per strip were divided into shoots, roots and nodules, which were dried at 60 °C, weighed and milled to a fine powder. The same was done with the aerial parts of the reference plants. A sub-sample of each plant was analysed using an elemental analyser linked to a mass spectrometer to determine the %N and 15 N contents, respectively. Samples (0.5 mg) were weighed into tin capsules and analysed for the 15 N isotopic composition using an automated nitrogen-carbon elemental analyser (ANCA) coupled to a 20/20 isotope ratio mass spectrometer (both SerCon Ltd., Crewe, UK). The total N content of the faba bean plants was calculated by multiplying the dry biomass of their shoots by their %N. The proportion of faba bean N derived from atmospheric N 2 (%Ndfa) was calculated by comparing the 15 N natural abundance (expressed as δ 15 N or parts per thousand [‰] relative to the 15 N composition of atmospheric N 2 ) of the faba bean shoot N (δ 15 N legume) to the δ 15 N of the non-N 2 -fixing reference plants (which are assumed to reflect the δ 15 N of the plant-available soil N [δ 15 N soil]) using Eq. (1):

(1) %Ndfa = 100 × (δ 15 N soil − δ 15 N legume) / (δ 15 N soil − B)

where B represents the δ 15 N of faba bean shoots grown entirely reliant upon BNF for growth (Unkovich et al. 2008).

Fig. 1 A map of the British Isles showing the location of the legume nodule sampling sites (red spots) and sites where biological N fixation (BNF) was also measured (blue spots). Major cities are marked in green. The CSC is indicated by an arrow.

Generation of B-values for all seven of the faba bean varieties sown in the CSC (cvs Babylon, Ben, Boxer, Fanfare, Fuego, Pyramid and Tattoo), and for pea cv. Corus, was performed according to Unkovich et al. (2008) and Burchill et al. (2014). Plants were grown in a 1:1 (v/v) mixture of autoclaved perlite and sand in 10 L pots in an unheated glasshouse from March to June. The pots were inoculated with a liquid culture of a mixture of Rlv isolates from the CSC (see next section), were fed weekly with an N-free nutrient solution (Burchill et al. 2014), and were watered daily or as required. As recommended by Rose et al. (2018), plants were harvested at early- to mid-podfill stage to match the plants sampled in the field, and δ 15 N values were obtained from dried aerial parts as described above. B-values were also obtained for each year of the trial to ensure internal consistency in the data from the faba bean, reference plants, and B-values used to estimate %Ndfa for each season (Table S3).
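To make the calculation concrete, Eq. (1) can be written as a small helper function; the sketch below is in base R and the example δ15N and B values are purely illustrative, not measurements from this study.

```r
# %Ndfa from 15N natural abundance (Eq. 1)
#   d15N_soil   - delta-15N of the non-fixing reference plants (proxy for plant-available soil N)
#   d15N_legume - delta-15N of the faba bean shoots
#   B           - delta-15N of shoots grown entirely reliant on BNF (variety-specific B-value)
percent_ndfa <- function(d15N_soil, d15N_legume, B) {
  100 * (d15N_soil - d15N_legume) / (d15N_soil - B)
}

# Illustrative values only (not data from this study)
percent_ndfa(d15N_soil = 5.2, d15N_legume = 0.1, B = -0.5)   # ~89.5%
```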
The amount of N fixed was then calculated from estimates of %Ndfa, shoot dry biomass and N content (%N) using Eqs. (2) and (3):

(2) Legume shoot N = (%N/100) × (legume shoot dry matter)

(3) Amount of shoot N fixed = (%Ndfa/100) × (legume shoot N)

Estimating total crop N and residual N after grain harvest

Shoot-based calculations of N 2 fixation underestimate total inputs of fixed N since substantial amounts of legume N can also be associated with, or released and derived from, the nodulated roots (Liu et al. 2019;Unkovich et al. 2010). In the case of field-grown faba bean, below-ground pools of N have been reported to represent between 24 and 40% of the total plant N (Unkovich et al. 2010). Given the well-known difficulties in obtaining accurate below-ground crop N data (Liu et al. 2019;Unkovich et al. 2010), the total amounts of N 2 fixed by the faba bean at the CSC at early to mid-podfill were estimated by multiplying the shoot values calculated using Eq. (3) by a factor of 1.52 to include the N-content of the nodulated roots (Unkovich et al. 2010). Total crop N at harvest was calculated from the grain yield, and this was then used to estimate the residual N. Grain was harvested from each 18 m strip using a standard combine harvester; sub-samples of dried grain were analysed for their %N using an elemental analyser as described above, and the total grain N was calculated by multiplying the dried grain mass by %N. A mean harvest index (HI) value of 0.43 was calculated from the CSC faba bean crops in 2012 and 2013, a value which is typical for faba bean in northern Europe (Sprent et al. 1977); this was used to calculate the dry biomass of the aerial parts of the crop comprising the stover plus the grains. An estimate of the total crop dry biomass (TCB) including the underground parts (roots and nodules) was then obtained by multiplying the dry biomass of the aerial parts by 1.52 to account for the roots and nodules (Unkovich et al. 2010) using Eq. (4). The residual N (i.e. the N content of the stover plus roots) was estimated by subtracting the dry grain weight from the TCB and multiplying the resulting value by 0.6%, which was the average %N across all varieties for the residual dry matter in the plots at the CSC in 2013 (Eq. 5):

(4) Total crop dry biomass (TCB) = (dry grain weight/0.43) × 1.52

(5) Legume residue N = (TCB − dry grain weight) × 0.006
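The remaining bookkeeping in Eqs. (2)-(5) can be sketched in the same way. In the listing below, the harvest index (0.43), the root factor (1.52) and the residue N concentration (0.6%) are the values stated above, whereas the shoot biomass, %N, %Ndfa and grain yield inputs are placeholder numbers used only to show the arithmetic.

```r
# N accounting per hectare (Eqs. 2-5); input values are illustrative placeholders
shoot_dm    <- 6.0   # shoot dry matter at early-to-mid podfill (t ha-1)  -- placeholder
shoot_pct_N <- 3.0   # shoot N concentration (%)                          -- placeholder
ndfa        <- 90    # %Ndfa from Eq. (1)                                 -- placeholder
grain_dm    <- 4.0   # dry grain yield (t ha-1)                           -- placeholder

shoot_N       <- shoot_pct_N / 100 * shoot_dm    # Eq. (2): legume shoot N (t ha-1)
shoot_N_fixed <- ndfa / 100 * shoot_N            # Eq. (3): shoot N fixed (t ha-1)
total_N_fixed <- shoot_N_fixed * 1.52            # shoot value scaled by the root factor

TCB       <- (grain_dm / 0.43) * 1.52            # Eq. (4): total crop dry biomass (t ha-1)
residue_N <- (TCB - grain_dm) * 0.006            # Eq. (5): residual N in stover plus roots

# report in kg ha-1
round(c(shoot_N_fixed = shoot_N_fixed, total_N_fixed = total_N_fixed,
        residue_N = residue_N) * 1000)
```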
Assessment of faba bean nodulation
To quantify nodulation at the CSC, nodules were removed from the root systems of the plants used for obtaining δ 15 N values at mid-podfill stage; nodule dry weights were obtained by drying overnight at 70 °C, and expressed as nodule biomass per plant. However, because it is difficult to excavate an intact root system from field-grown faba bean plants, these data are only representative of the crown nodulation zone, which extends ca. 15 cm down the tap root.
Quantification of rhizobia in soils using a real-time qPCR assay

A principal aim of the study was to assess faba bean BNF at the CSC with reference to rhizobial soil populations in order to determine if the prolonged absence of legume cropping had a negative impact on these populations, and hence if it had a follow-on impact on BNF. The impact of compost application on rhizobial populations under the integrated treatment was also investigated. To this end, a relative real-time PCR method using an artificial reference "spike" (Daniell et al. 2012) and 16S rRNA and nodD gene primers (Macdonald et al. 2011) was employed to estimate Rhizobium leguminosarum 16S rRNA and Rhizobium leguminosarum sv. viciae nodD gene copy numbers in the four faba bean-cropped fields from 2012 to 2015.
Soil samples from 12 permanent GPS locations across faba bean fields were collected every March (before crop sowing) during 2012-2015 from the CSC farm platform as described by Hawes et al. (2018). In short, at each sample position, 1.5 L of soil was taken to a depth of 0.15 m using a soil auger or trowel, weighed and passed through a sieve with a mesh size of 10 mm. Soil samples were dried overnight at 70 °C to determine their moisture content. Prior to drying, small sub-samples of the fresh soil (2 mL in volume) were flash frozen in liquid N 2 and stored at -80 °C for subsequent molecular analyses.
DNA was extracted from ~0.25 g soil samples using the DNeasy PowerSoil Kit (Qiagen, Hilden, Germany) following the manufacturer's instructions, using a 2 min bead-beating step (Retsch MM300, Haan, Germany) at a frequency of 30 beats s −1 . A 194 bp fragment of the mutated 16S rRNA gene from Escherichia coli, which is routinely used as an artificial reference spike to account for DNA losses during DNA extraction from the soil (Daniell et al. 2012), was added to each soil sample at a concentration of 10 9 copies per sample prior to DNA extraction.
The wild-type calibrator controls were generated by PCR amplification of 16S rRNA and nodD gene fragments from soil DNA extracts using the primer sets F979+R1264 and F88+R443, respectively (Macdonald et al. 2011). Briefly, 1 µL of soil DNA extract was used as template for PCR in a final reaction volume of 50 µL containing: 1.25 U GoTaq® G2 DNA polymerase (Promega, Southampton, UK), 0.4 µM of each primer, 0.2 mM of each nucleotide and 1x clear GoTaq® G2 Buffer (Promega, Southampton, UK). All reactions were performed on G-Storm GS1 thermal cyclers (GRI Ltd, Braintree, UK), with the following cycling conditions: initial denaturation at 95 °C for 2 min; 35 cycles of 95 °C for 1.5 min, 58 °C for 1 min and 72 °C for 1 min; final extension of 72 °C for 15 min. PCR products were gel purified using the MinElute Gel Extraction Kit (Qiagen, Hilden, Germany), and cloned into E. coli DH5α (Invitrogen, Thermo Fisher Scientific, Waltham, Massachusetts, U.S.) using the pGEM®-T Easy Vector System (Promega, Southampton, UK). Plasmid DNA was purified with the QIAprep Spin Miniprep Kit (Qiagen, Hilden, Germany) and quantified using the Quant-iT PicoGreen dsDNA Assay Kit (Invitrogen, Thermo Fisher Scientific, Waltham, Massachusetts, U.S.). The number of copies of each gene was calculated using equations in Daniell et al. (2012).
Amplifications of the artificial spike, the 16S rRNA and nodD calibrator controls and the 16S rRNA and nodD genes were performed in triplicate using the LightCycler® 480 SYBR Green I Master Kit (Roche, Burgess Hill, UK) following the manufacturer's instructions and using the primer set corresponding to the amplified region of each reaction, i.e. Mut-F+Mut-R for the spike (Daniell et al. 2012), F979+R1264 for 16S rRNA, and F88+R443 for nodD (Macdonald et al. 2011). All amplifications were carried out under the following conditions: an initial denaturation at 95 °C for 15 min was followed by 42 cycles of 94 °C for 20 s, 58 °C for 30 s, 72 °C for 30 s and a single acquisition step at 81 °C for 5 s. Melt curve analysis was performed between 55 and 95 °C.
In each PCR run, calibration curves were included for the 16S rRNA and nodD standards, and the artificial reference spike, by diluting these standards accordingly to give a concentration range from 10 2 to 10 8 copies per reaction in a 10-fold dilution series. A reaction containing 1 µL of each of these standards was included in each PCR run in duplicate. Crossing points were estimated using the Roche Diagnostic Systems software (Burgess Hill, UK) with default settings for the second derivative method; the copy number of each gene was calculated by regression analysis, corrected for individual PCR efficiencies calculated with LinRegPCR v 2020.0 (Ruijter et al. 2009), and expressed as number of gene copies g −1 dry weight soil, assuming three copies of 16S rRNA and one copy of nodD in the genome of R. leguminosarum (Macdonald et al. 2011). Although the specificity of the 16S rRNA and nodD primers for R. leguminosarum and Rlv, respectively, was previously shown by Macdonald et al. (2011), their specificity was further confirmed by performing a BLASTn search in the NCBI database for both 16S rRNA primers. This showed that all sequences that had a match for both primers belonged to the genus Rhizobium, with R. leguminosarum the most common species. A similar search carried out with both nodD primers only produced Rhizobium species with, again, R. leguminosarum the most frequent matches, and, most importantly, all matches were in sv. viciae (Ferrando-Molina, unpublished).
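As an illustration of the quantification step, a heavily simplified sketch is given below: a standard curve is fitted to the calibrator dilution series, the crossing point of an unknown sample is converted to copies per reaction, and the result is corrected for spike recovery and scaled to dry soil. It omits the per-reaction efficiency correction from LinRegPCR, and the extract volume, dry-matter fraction and all numerical values are hypothetical placeholders rather than data or settings from this study.

```r
# Simplified qPCR quantification sketch (base R); all values are placeholders
# 1. Standard curve for the nodD calibrator: Cq versus log10(copies per reaction)
std <- data.frame(log10_copies = 2:8,
                  Cq = c(33.1, 29.8, 26.4, 23.0, 19.7, 16.3, 13.0))
fit <- lm(log10_copies ~ Cq, data = std)

# 2. Copies per reaction for an unknown soil extract
cq_sample  <- 24.5
copies_rxn <- 10^predict(fit, data.frame(Cq = cq_sample))

# 3. Correct for DNA losses using the artificial spike (added at 1e9 copies per sample)
recovery <- 4e8 / 1e9          # spike copies recovered / spike copies added (placeholder)

# 4. Scale to the whole extract and to dry soil; one nodD copy per Rlv genome
extract_vol_ul <- 100          # hypothetical elution volume (uL)
template_ul    <- 1            # template used per reaction (uL)
dry_soil_g     <- 0.25 * 0.80  # 0.25 g fresh soil at a hypothetical 80% dry matter
copies_per_g_dry <- copies_rxn * (extract_vol_ul / template_ul) / recovery / dry_soil_g
copies_per_g_dry
```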
Isolation of rhizobial symbionts from the CSC and other sites
A further aim of the study was to assess the genetic diversity of the rhizobia nodulating the faba bean at the CSC to determine their identity and hence their possible origins given the prolonged absence of legume cropping at this site. Symbionts were also assessed for their functional ability to promote legume growth in comparison with other local, Scottish, and British rhizobial isolates. Therefore, rhizobia were isolated from faba bean nodules that were sampled from the same fields as the BNF determinations in the CSC; three nodules per plant were sampled from three plants from each treatment (conventional and integrated) in each 18 m row in 2012 and 2013, and from one plant per row in 2014, 2015 and 2016. For comparison with the CSC isolates, rhizobia were also isolated from nodules on faba bean and pea crops growing in fields adjacent to the CSC, as well as from wild legumes belonging to the genera Lathyrus and Vicia that were growing in the field margins at the CSC and in its general locality in East Scotland. To place the CSC isolates into a wider geographical context, additional isolates were obtained from the soil in other agricultural and non-agricultural locations across the British Isles (Fig. 1, Table S1) by using pea, faba bean or lentil as "trap" plants. For this, surface-sterilized seeds (70% [v/v] ethanol for 1 min, followed by immersion in sodium hypochlorite (2.5% [v/v] NaClO) for 5 min, and then rinsing thrice in sterile distilled water (SDW)) were placed into autoclaved pots with a sterile vermiculite-perlite substrate plus a small quantity of the soil (100 g). The plants were watered with tap water as required. Plants were harvested after 4-6 weeks of growth, and pink healthy nodules, if present, were sampled from freshly washed roots. Five pots of uninoculated plants with just vermiculite-perlite were randomly distributed on the bench in the glasshouse or growth room, but none were nodulated and all died from N-starvation before harvesting the "inoculated" plants.
The nodules from each plant (CSC and others) were processed separately by surface sterilizing them in 70% [v/v] ethanol for 1 min, followed by immersion in sodium hypochlorite (2.5% [v/v] NaClO) for 3 min, and then rinsing thrice in SDW. The nodules were then crushed using a sterile plastic pestle, and the nodule extracts were grown on Medium 79 (Fred and Waksman 1928), otherwise known as yeast mannitol agar or YMA (Vincent 1970), with Congo Red (CR) added to make YMA-CR plates, and incubated at 28 °C for 24-48 h. Single colonies were picked off and individually streaked onto freshly prepared YMA-CR plates. Once pure isolates were obtained, a single colony from each YMA-CR plate was used to inoculate a sterile 5 ml tryptone-yeast (TY) broth (Beringer 1974). Cultures were grown at 28 °C for 24-48 h in a shaking incubator (150 rpm). Liquid cultures at log phase (with an OD 600 between 0.2 and 0.8) were used to prepare 25% [v/v] sterile glycerol stocks for long-term storage at -80 °C and for DNA extractions.
Identification of rhizobia via sequencing of their rrs (16S rRNA) and nodulation (nodA and nodD) genes

Primers and PCR thermal profiles used in this study are listed in Table S4. Each 30 µl PCR reaction mixture used 1x Colorless GoTaq® Reaction Buffer (1.5 mM MgCl 2 final concentration; Promega, Southampton, UK), 0.2 mM of each dNTP, 0.4 µM concentration of each primer, 1.25 U of GoTaq DNA polymerase (Promega, Southampton, UK) and 1 µL DNA template. All reactions were performed on G-Storm GS1 thermal cyclers (GRI Ltd, Braintree, UK). To confirm successful amplification of the correct region, PCR products were resolved by electrophoresis on agarose gels with SYBR® Safe DNA Gel Stain (Invitrogen, Thermo Fisher Scientific, Waltham, Massachusetts, U.S.) and visualised using UV-illumination (FluorChem® Imager, Alpha Innotech, San Leandro, CA, U.S.). The PCR products were then purified using Illustra ExoProStar™ 1-step (GE Healthcare US77702V, Chicago, Illinois, U.S.) or using QIAquick-spin columns (Qiagen, Hilden, Germany), according to the manufacturer's recommendations, and sequenced using an ABI3730 DNA analyser with a 36 cm x 48 capillary array (Applied Biosystems®, Thermo Fisher Scientific, Waltham, Massachusetts, U.S.). All sequencing was performed by the sequencing service at the James Hutton Institute. All the sequences were obtained with the Forward primer only; these were inspected and edited manually (trimmed) using BioEdit Sequence Alignment Editor Version 7.2 (Hall 1999) and were screened against databases using the nucleotide basic local alignment tool (BLASTN) queuing system (Altschul et al. 1997), version 2.2.28, on the NCBI website.
Phylogenetic analysis of rhizobial isolates
Evolutionary analyses were conducted in MEGA X (Kumar et al. 2018). First, sequences were aligned using Clustal Ω (Sievers et al. 2011), and the best-fitting substitution models were then selected. Next, gene trees were inferred by using the Maximum Likelihood (ML) method and Tamura 3-parameter model with 1000 bootstrap replicates. Only bootstrap values >50% are shown in the trees. All positions with less than 95% site coverage were eliminated (partial deletion option). A discrete Gamma distribution was used to model evolutionary rate differences among sites (5 categories) in the phylogeny of the nodD gene (+G, parameter = 0.2861) and the nodA gene (+G, parameter = 0.2672).
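A roughly analogous tree could be produced in R; the sketch below is not the MEGA X workflow used in the study, it substitutes a GTR+Γ model for the Tamura 3-parameter model, uses fewer bootstrap replicates, and assumes a hypothetical pre-aligned input file.

```r
# Approximate ML phylogeny sketch with ape/phangorn (not the MEGA X analysis itself)
library(ape)
library(phangorn)

# 'nodD_aligned.fasta' is a hypothetical alignment (e.g. exported from Clustal Omega)
aln <- read.phyDat("nodD_aligned.fasta", format = "fasta", type = "DNA")

dm    <- dist.ml(aln)                                       # ML distances
start <- NJ(dm)                                             # neighbour-joining starting tree
fit   <- pml(start, data = aln, k = 4)                      # 4 discrete gamma rate categories
fit   <- optim.pml(fit, model = "GTR", optGamma = TRUE, optNni = TRUE)

bs <- bootstrap.pml(fit, bs = 100, optNni = TRUE)           # bootstrap support
plotBS(fit$tree, bs, p = 50, type = "phylogram")            # show support values >50%
```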
Assessment of the plant growth-promoting (BNF) potential of rhizobial isolates on pea and faba bean

All the isolates from the CSC plus those from other faba bean and pea crops assessed for their BNF contributions (Fig. 1), together with isolates from various wild legumes, were inoculated onto pea cv. Corus, and grown in a glasshouse to assess their ability to promote the growth of a typical Rlv host in the absence of soil N. Various reference strains were included for comparative purposes, including V. faba strains from the study of Mutch and Young (2004), strains isolated from faba bean in Ethiopia (Gebre Yohannes, unpublished), and strains isolated from lentil in Wiltshire, South-West England (this study). After the trials on cv. Corus, the 13 highest-performing strains were selected for further trials on pea cv. Kareni and on faba bean cv. Vertigo in comparison with a low-performing strain plus two standard laboratory strains. Controls were uninoculated plants. The experiments were performed using 1 L pots filled with a 1:1 mixture of autoclaved vermiculite and perlite, with 3 replicate pots per strain/treatment. To prevent cross-contamination during watering, the pots were covered in plastic film with a hole for the shoot to emerge. Plants were fed weekly with an N-free nutrient solution (Burchill et al. 2014), and watered daily or as required. Plants were harvested at the flowering stage (49 d after inoculation for pea and 70 d for faba bean) and above-ground biomass was quantified as a proxy for BNF (Unkovich et al. 2008).
Sampling of faba bean and pea crops from a range of British farms
To extend the relevance of our single, detailed field study, and to explore BNF in faba bean and pea more generally across commercial crops in Britain, eleven additional sites (five for faba bean and six for pea) were selected following a survey of pulse growers conducted in collaboration with the Processors and Growers Research Organisation (PGRO) (Fig. 1). Participating growers were advised how to sample aerial parts of their faba bean and/or pea crops together with associated non-legume weeds and/or volunteers, as for the CSC (see above); these samples were sent to the James Hutton Institute together with soil samples from the same fields for rhizobial trapping trials (see earlier). The plant samples were treated as per the CSC faba bean samples, except that only their δ 15 N values were determined, which were then used to calculate %Ndfa. B-values were determined as described above for any faba bean and pea varieties for which values had not already been obtained for the CSC %Ndfa calculations.
Statistical analyses
For analysis of the CSC data, linear mixed models were fitted by the Restricted Maximum Likelihood (REML) procedure in GenStat for Windows 21st Edition (VSN International Ltd., Hemel Hempstead, U.K.) for shoot biomass and %N of the faba bean grown in the four-year rotation (2012-2015), as described previously by Freitag et al. (2018). In short, Year+Variety*Treatment effects were fitted as fixed effects. The terms Year.Variety + Year.Treatment + Year.Strip.Treatment (accounting for differences between strips in a half-field) + Year.Rep (accounting for trends down the field) + Year.Treatment.Rep (accounting for trends down each half-field) were included as random effects. As the grain yield, grain N and residue N were only measured from whole strips, the random effects of Variety, Rep and Strip were excluded in REML for these variables. As there were changes to the selection of varieties grown for field beans between years, cv. Maris Bead, which was present in only one year (2012), was excluded from the analysis. Multiple comparisons were carried out using Fisher's unprotected least significant difference test. Details are given in File S1A.
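Although the analysis itself was run in GenStat, an analogous model can be written down with the R package lme4 to make the fixed and random structure explicit; the data frame and column names below are hypothetical and this sketch is not the authors' code.

```r
# Sketch of an analogous linear mixed model (lme4); 'csc' and its columns are hypothetical
# Year, Variety, Treatment, Strip and Rep are assumed to be factors; lmer fits by REML by default
library(lme4)

m <- lmer(shoot_biomass ~ Year + Variety * Treatment +   # fixed effects
            (1 | Year:Variety) +                         # random: Year.Variety
            (1 | Year:Treatment) +                       # random: Year.Treatment
            (1 | Year:Treatment:Strip) +                 # random: Year.Strip.Treatment
            (1 | Year:Rep) +                             # random: Year.Rep
            (1 | Year:Treatment:Rep),                    # random: Year.Treatment.Rep
          data = csc)
summary(m)
```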
An analysis of variance (ANOVA) with a Bonferroni adjustment for multiple comparisons in GenStat for Windows 20th Edition (VSN International Ltd., Hemel Hempstead, U.K.) was used to assess faba bean nodule and root dry weights. Means and standard errors were calculated for each year, variety of faba bean, and treatment. However, as different varieties were used each year, the means for years were not statistically compared.
The ANOVA analysis of the qPCR data was carried out with R version 3.5.1 (R Core Team 2013) implemented in RStudio version 1.1456 (RStudio Team 2015). When the ANOVA results were significant, package multcomp version 1.4.10 (Hothorn et al. 2008) was used to carry out the post-hoc Tukey's Honest Significant Difference (Tukey's HSD) test for comparing means between fields. Additionally, packages dplyr version 0.8.3 (Wickham et al. 2019) and ggplot2 version 3.1.0 (Wickham 2016) were used for data handling and visualisation, respectively. Details are given in File S1B.
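The qPCR comparison described above can be sketched roughly as follows; the data frame, column names and the log10 transformation are hypothetical placeholders rather than the analysis script used for the study.

```r
# Sketch of the qPCR ANOVA and Tukey's HSD between fields (hypothetical data frame 'qpcr')
library(multcomp)

qpcr$Field <- factor(qpcr$Field)
fit <- aov(log10(nodD_copies_per_g) ~ Field, data = qpcr)
summary(fit)

# Post-hoc pairwise comparison of field means (Tukey's HSD via multcomp)
tuk <- glht(fit, linfct = mcp(Field = "Tukey"))
summary(tuk)
cld(tuk)   # compact letter display, as used to annotate the boxplots
```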
Results

Nitrogen fixation and nodulation by faba bean in an experimental rotation incorporating compost instead of fertilizer
Variations in crop biomass, shoot N, shoot 15 N, and grain yield reflected the seasonal variations in weather over the four years of the CSC rotation tested in the present study (2012-2015) (Fig. S1), with 2014 being exceptionally high in terms of all growth parameters, including grain yield (Figs. 2, S2; Table S5). In terms of BNF, the %Ndfa values at early to mid-podfill stage were consistently high in each season, ranging from c. 80% in 2013 to >90% in 2014 and 2015 (Table S5); the values from the integrated treatment were generally higher than those from the conventional treatment, but this was only significant in 2012, when the %Ndfa values of the conventional and integrated plants were 80.99% and 90.52%, respectively (Table S5). The BNF data closely followed the same pattern as shoot biomass and shoot N content, with the crops in 2012 and 2013 fixing less than half those in 2014 and 2015 (Figs. 2B, S2, Table S5). In 2014, by including projected values for the underground contribution using the root factor calculated by Unkovich et al. (2010), it was estimated that faba bean fixed more than 300 kg N ha -1 under both conventional and integrated management (Table S5, Fig. S2D, E). There was a significant interaction between management type and faba bean genotype on shoot fixed N and total plant fixed N in 2014 and 2015, with BNF by cvs. Babylon and Boxer significantly benefitting from the integrated treatment (Fig. S2B-E, Table S5).
At crop harvest, grain N and estimates of residual N (Fig. 2D, Table S5) followed the same patterns as grain yield; in the highest-performing year (2014) more than 100 kg N ha -1 was estimated to be left in the crop residues after the grain had been harvested (Fig. 2D). There were no significant differences between treatments in dry grain yield, grain N and residue N.
Overall, there were no significant differences between treatments in each of the three years that were analysed for nodule and root dry weights (2012, 2013 and 2014), although there were significant differences between varieties in 2012 and 2013 (Table S6).
Quantification of rhizobia in soils subsequently cropped with faba bean
Soil populations of the potential symbionts of faba bean at the CSC were assessed every March in the years 2012-2015. The concentrations of R. leguminosarum 16S rRNA and Rlv nodD gene copies (Fig. 3a) did not differ between conventional and integrated treatments within each field cropped with faba bean, but there were significant differences between years for both R. leguminosarum 16S rRNA (Fig. 3a) and Rlv nodD (Fig. 3b) gene copy numbers. The faba bean-cropped field in year 2012 had the highest concentrations of both markers (1.62 × 10 6 R. leguminosarum 16S rRNA copies g −1 dry soil; 1.93 × 10 5 Rlv nodD copies g −1 dry soil), whilst the field in 2015 had the lowest concentration of R. leguminosarum 16S rRNA (6.42 × 10 5 copies g −1 dry soil). The fields in 2013 and 2015 had the lowest concentrations of Rlv nodD (1.31 × 10 5 and 1.21 × 10 5 copies g −1 dry soil, respectively). Despite the differences observed in gene copy number of both markers, the ratio of the copy numbers of Rlv nodD per copy of R. leguminosarum 16S rRNA (Fig. 3c) was similar in the faba bean-cropped fields in 2012 and in 2013, but there was a significantly higher ratio in the faba bean-cropped field in 2014 (conventional treatment only), and most particularly in 2015, where this field had the highest proportion of Rlv nodD per copy of R. leguminosarum 16S rRNA (19.84% and 17.98% in the conventional and integrated treatments, respectively).
Genetic diversity of rhizobia isolated from faba bean in comparison to other cultivated and wild legumes
The genetic diversity of the rhizobia nodulating faba bean at the study site from 2011 to 2016 was determined, and then compared to neighbouring localities and other sites across Britain. In total, 144 rhizobial isolates were obtained from nodules on faba bean grown at the CSC, as well as from faba bean and pea crops neighbouring the CSC, from wild legumes in the CSC field margins, and from other cropped and non-cropped sites in Scotland and England (Table S1, S8). Reference strains from other parts of Britain included six faba bean strains from Yorkshire in Northern England (Mutch and Young 2004), the well-studied laboratory standard strain Rlv 3841 (Young et al. 2006), and strain rcr1045 which is commonly used as the basis of commercial faba bean and/or pea inoculants. Some non-British reference strains were also included in the analyses to provide an international context; these included seven strains isolated from faba bean in Ethiopia, and a strain from Lathyrus sativus in the USA. All the 160 strains in the study were identified as belonging to the genus Rhizobium based on sequences of their 16S rRNA (rrs) genes (Table S1, S8). Highest similarity BLAST hits suggested that they were all close to R. leguminosarum, which is the most commonly isolated symbiont of this group of legumes in Northern Europe (Ampomah and Huss-Danell 2017;De Meyer et al. 2011;Mutch and Young 2004). However, as information about the symbiotic properties of Rhizobium resides in the transferable symbiotic plasmid (pSym) which contains the nitrogen fixation (nif) and nodulation (nod) genes (Young et al. 2006), in order to better understand the genetic diversity of the isolates, two nodulation genes (nodA, nodD) were then sequenced and compared to those in the database.
The nodA and nodD phylogenies showed a high level of congruence, with an almost identical distribution of the isolates between two large and distinct clades for both genes; these were closely related to strains previously isolated from faba bean, pea, and other Vicia, Lathyrus and Lens species (Figs. 4, S3, S4, S5). Taking the nodD phylogeny as an example (Figs. 4, S3), Group I comprised mainly strains from pea, lentil and wild legumes growing in the islands of Orkney and Skye through mainland Scotland into northern (Yorkshire; Mutch and Young 2004) and southern (Wiltshire, Norfolk and Cambridgeshire) England. It included all the strains isolated from wild legumes (Lathyrus pratensis L. and Vicia cracca L.) in the CSC field margins (JHI27, JHI32, and JHI35) (Fig. S3). The only V. faba isolates in Group I were some of those sampled from the CSC in 2013 (16 strains), two CSC strains from 2015, JHI983 from a farm neighbouring the CSC, a single strain from the centre of England (JHI1147), and four of the Ethiopian strains. Group I also harboured strains from the USA (pea), Poland (pea), and Sweden (wild legumes). Group II contained most of the V. faba isolates from the CSC (sampled in the six years from 2011 to 2016, and including the 2013 isolates not clustered within Group I) and from V. faba cultivated in farms neighbouring the CSC (e.g. JHI981, JHI982 and JHI984), as well as isolates trapped from pea grown in CSC soils from 2014 (Fig. S3). Group II also included most of the faba bean isolates from other parts of Scotland and England, as well as several pea and lentil isolates, together with V. faba strains from Ethiopia, Spain, Canada, and China. All strains in Group II were closely related to the sequenced laboratory strain Rlv 3841. In addition to the two large groups, four smaller ones, Groups III-VI, were also apparent (Figs. 4, S3). Group III contained the type strain of Rlv, USDA 2370 T (isolated from V. faba in Tunisia) together with four strains from the same location in Angus, Scotland, which were isolated from nodules on pea (JHI10, JHI13, JHI1438) or V. sativa (JHI47). Group IV consisted of JHI1249 isolated from pea nodules in Orkney (Scotland) together with strains from Lathyrus sativus in the USA (JHI1084), V. faba in China and Jordan, and lentil in Bangladesh (type strains of R. bangladeshense, R. binae, and R. lentis).
Group V consisted of a single UK strain, JHI2450, isolated from pea grown in soil from Norfolk (east England), together with Swedish and Russian strains isolated from wild legumes (Ampomah and Huss-Danell, 2017). Finally, Group VI comprised only JHI2449, a single pea strain from Norfolk in southeast England.
Plant growth-promoting (BNF) potential of the rhizobial isolates on pea and faba bean

Almost all the isolates and reference strains (147) were tested in the glasshouse for their ability to promote the growth of pea (Fig. S6), a relatively "promiscuous" Rlv-nodulating species (Mutch and Young 2004). All the isolates nodulated cv. Corus, but some were particularly effective in terms of promoting shoot dry weight, e.g. many resulted in plants with shoot dry weights that were double those inoculated with the standard laboratory strains, Rlv 3841 and rcr1045. Fourteen of the isolates were selected for further trials on pea cv. Kareni (Fig. 5A) and on faba bean cv. Vertigo (Fig. 5B). The highest performing strain in terms of growth promotion on both pea varieties was JHI388; this strain, as well as other strains that promoted the highest measured growth on pea cvs Corus or Kareni, were isolated from various hosts, including faba bean (JHI388, JHI370, VF5), the rare wild species Vicia lutea (JHI42), and pea (JHI13). The five strains that promoted the most growth of faba bean cv. Vertigo were not the same as those that promoted the most growth of pea, and they were also isolated from a wide range of hosts, including V. tetrasperma (JHI24), faba bean (VF2), pea (JHI974, Rlv 3841), and Lathyrus linifolius (JHI1093). The Rlv strains that promoted the most growth of either legume host did not fall into any particular nodAD clade (Table 2).

Fig. 2 Shoot dry biomass (a), total crop N and total crop fixed N at mid pod-fill (b), dry grain yield at harvest (c), and grain N and residual N at harvest (including roots) (d) of faba bean during the 2012-2015 growing seasons at the Centre for Sustainable Cropping (CSC) farm platform with conventional (C) or integrated (I) management. Data are means ± standard error. Significant differences (p<0.05) are indicated with different letters.

Fig. 3 Quantifications of R. leguminosarum 16S rRNA marker (a) and Rlv nodD (b) given as gene copy number per g of dry soil per year in fields cropped with faba bean during the 2012-2015 growing seasons at the Centre for Sustainable Cropping (CSC) farm platform with conventional (C) or integrated (I) management. The ratio of Rlv nodD per R. leguminosarum 16S rRNA is also shown (c). Each pair of boxplots with the same letter did not show significant differences (p<0.05) between each other according to the results of Tukey's HSD. Outliers are data points more than 1.5⋅IQR (interquartile range) above the third quartile (Q3) or below the first quartile (Q1); low outliers are below Q1−1.5⋅IQR and high outliers are above Q3+1.5⋅IQR.
Estimates of the contribution of BNF (%Ndfa) to faba bean and pea crops in other locations in the British Isles

The δ 15 N of faba bean and pea grown on commercial farms from the northern Isles of Scotland (Orkney) down through to southern England were used to estimate %Ndfa at these various locations (Fig. 1). In all cases, the differences between the legumes and the associated non-legume reference plants were indicative of high contributions from BNF, with estimates ranging from 82 to 96% for faba bean and 58 to 97% for pea (Table S7).
Discussion
Faba bean can provide most of its nitrogen requirements via BNF in a northern temperate cropping system

The CSC provided a suitable platform for assessing the ability of faba bean to fix N in Northern Britain. Although several studies have been conducted in other European countries, as well as in North America and Australia (Denton et al. 2013;Giambalvo et al. 2012;Hossain et al. 2016;Jensen et al. 2010;Van Zwieten et al. 2015), the present field-scale study is the first to provide direct experimental evidence over several consecutive seasons that faba bean can fix almost all of its N-requirements under the relatively wet and cool climate of the British Isles.
The %Ndfa values in the CSC trial were generally high, demonstrating that most of the plant N-requirements were met by BNF, as has been observed in many other locations in Europe and elsewhere in the world (Denton et al. 2013;Jensen et al. 2010;Peoples et al. 2021). The differences in %Ndfa between 2012 (c. 80%) and 2013 (c. 90%) were not reflected in their total N values, which were very similar; this suggests that the 2013 crop assimilated at least 40 kg ha -1 of soil N at mid-podfill stage, which corresponds to the available N in the field at the start of the 2013 season. One possibility is that the cold spring in 2013 inhibited nodulation and BNF, so that the plants had to compensate for the lack of fixed N by utilising the available soil N until the onset of the warmer summer weather (Burchill et al. 2014). In all the other years the high %Ndfa demonstrated that uptake of available N in the soil was very low, probably less than 30 kg ha -1 . Faba bean is, however, particularly effective at fixing N when grown in soils with high levels of applied mineral N. For example, 300 kg N ha -1 will completely inhibit BNF by most legume crops, but faba bean can maintain %Ndfa levels above 40% at this fertiliser rate (Guinet et al. 2018).
The quantities of N fixed by faba bean in the present study were generally within the range previously reported in northern Europe and elsewhere, i.e. 100-250 kg N ha -1 yr -1 , although there were statistically significant variations from season to season. In 2012 and 2013, at 100 kg N ha -1 yr -1 , the BNF values were at the lower end of the range of previous estimates, but in 2014 and 2015 they were much higher, ranging from 250 to 350 kg N ha -1 yr -1 . These differences can be attributed to biomass production by the early- to mid-podfill stage (the stage at which BNF is maximal), as at high %Ndfa (>80%) BNF is essentially a function of the %N and biomass of the shoots (Unkovich et al. 2010). Biomass production was low in 2012 and 2013 owing to poor growing conditions (flooding in 2012 and a prolonged cold spring followed by a hot and dry summer in 2013), but in 2014 (and to a lesser extent in 2015) weather conditions were ideal for crop growth, and consequently plant biomass and grain yields were high. This was also reflected in the total crop N at the final harvest, which comprised the grain N as well as over 100 kg N ha -1 yr -1 left in the crop residues (including the roots); these N-residue values are within the range predicted or measured by other studies of nodulated field-grown faba bean (Denton et al. 2013;Jensen et al. 2010).
Commercial faba bean and pea crops sampled in other parts of the British Isles also had generally high dependence on BNF. None of the fields in which these crops were grown received applications of mineral N fertilisers, and so these data also support the earlier contention that faba bean and pea in the British Isles can provide most of their N-requirements through BNF. Such confirmatory data from actual farm locations are considered to be critical in assessing if experimental %Ndfa values are valid in terms of calculating global inputs of BNF (Peoples et al. 2021).
Integrated crop management can enhance BNF by some faba bean varieties in a northern temperate cropping system

A positive effect of the integrated treatment on the BNF of faba bean at the CSC was only apparent in the high-yielding years, but it was especially evident in 2014, where it resulted in an estimated additional 50-100 kg N ha −1 yr −1 being fixed, depending on variety (Fig. S2c, e). The positive effects of integrated management on BNF in 2014 and 2015 cannot be explained in terms of either rhizobial populations (both absolute and in terms of the symbiosis gene, nodD) or nodulation, as the mass of nodules per plant was not significantly increased by this treatment. This is surprising given the strong link observed between N-demand and nodule numbers/mass in other legumes, such as pea (Voisin et al. 2010). It possibly suggests that the rate of BNF per nodule was increased in the integrated field halves in 2014 and 2015, where high BNF was driven by the high demand for N of the rapidly growing plants in these years in which temperatures and precipitation were close to optimum for crop development. An increased BNF rate might have been due to the main component of the integrated treatment, the compost, raising soil pH, carbon stocks, and moisture retention, and also acting as a controlled-release fertilizer, supplying increased concentrations of the main plant growth-limiting macronutrients (P and K), and possibly micronutrients essential for the BNF process, particularly Mo. Another clue as to the mechanism behind a possible positive effect of integrated management on BNF is given by the interaction between this treatment and the faba bean varieties, with cvs Boxer and Babylon responding particularly well; this suggests that there is a genetic component underlying the ability of faba bean to benefit from the improved soil conditions.
Rhizobia populations in the CSC soils were sufficient to support faba bean BNF despite a 50-plus year absence of legume crops

Populations of R. leguminosarum in the CSC soils in the four years from 2012 to 2015 (10 5 to 10 6 g −1 soil) were similar to those obtained using most probable number (MPN) estimates in Denmark (Jensen and Sørensen 1987) and England (Hirsch 1996;Nutman and Hearne 1979), and in England using both MPN and qPCR with the same primer sets used in the present study (Macdonald et al. 2011). It should be noted that the 16S rRNA primers potentially recognize all R. leguminosarum cells in the soil, including both Rlv and clover-nodulating strains of R. leguminosarum sv. trifolii (Rlt), as well as non-symbiotic strains (Hirsch 1996), whereas data obtained using the nodD primers are more useful in specifically evaluating populations of faba bean-nodulating Rlv (Macdonald et al. 2011); these were approximately 10 5 DNA copies g −1 soil in the CSC. The Rlv population values can be considered as baseline data, as the soil samples were taken from the CSC fields in March just prior to sowing the faba bean, and except for the field which was cropped with faba bean in both 2011 and 2015, no legumes were cropped in the fields used in the experimental rotation for at least 50 years prior to its onset. This level is consistent with the findings of Boivin et al. (2020), who recorded levels of Rlv nodD DNA copy number in the range of 10 5 to 10 8 in European agricultural soils that mostly had some previous history of faba bean cultivation, but not rhizobial inoculation. The significant, albeit relatively minor, differences between years in terms of both total R. leguminosarum and Rlv populations (e.g. comparing 2012 with 2015) may be due to the locational effects of the different fields analysed. The higher Rlv/R. leguminosarum ratio in 2015 is intriguing in spite of overall lower values of both component populations in this year, and could be due to the prior cropping of this field with faba bean in 2011, as the recent presence of a specific legume host (e.g. pea) is known to facilitate the persistence of Rlv populations (Hirsch 1996;Macdonald et al. 2011). Compared to other studies that have examined the impact of prolonged absence of legume cropping on Rlv populations in uninoculated northern European soils, the population densities in the present study of CSC soils with more than 50 years without legumes are relatively high. For example, Nutman and Hearne (1979) reported 1000-fold decreases in Rlv populations in southern English soils after 14 years under cereals and negligible levels of Rlv after prolonged fallow, suggesting that, for the maintenance of Rlv in soils, cropping with non-legumes is preferable to no cropping at all, as non-legume crops may help maintain saprophytic rhizobial populations by providing C, e.g. from root exudates and plant decomposition (Hirsch 1996). Similar conclusions were reached by Jensen and Sørensen (1987) in their study of Danish soils artificially inoculated with streptomycin-resistant Rlv strains, i.e. the presence of a host legume (pea) markedly increased the survival of the inoculated strains.

Fig. 5 Effect of inoculation of rhizobial strains on the aerial dry biomass of pea cv. Kareni (a) and faba bean cv. Vertigo (b). Data from the reference strains, rcr1045 and Rlv 3841, are indicated in light green, and non-inoculated controls (NC) in orange. Data are means ± standard error.
The diversity of Rlv strains nodulating faba bean at the CSC suggests that they came from various local sources

Several rhizobial strains were isolated from the CSC and from the other sites in which BNF by faba bean and pea was estimated, as well as from other related crop and wild legumes in which BNF was not estimated. Based upon sequences of their nodAD genes, they were distributed into two distinct genotypes of Rlv: Group I, which contained isolates from wild legumes and some V. faba CSC isolates, and Group II, which contained the laboratory strain Rlv 3841 (Young et al. 2006) and most of the V. faba CSC isolates. Similar data were obtained by Laguerre et al. (2003), Mutch and Young (2004) and Tian et al. (2010), and Group II from the present study is essentially an expansion of the NodDF-2 group from Mutch and Young (2004), and is equivalent to Group B1 of Boivin et al. (2020). The NodDF-2 group was considered by Mutch and Young (2004) to be a V. faba-specific group as strains within it could readily nodulate faba bean, but those from the other nodDF groups had a much reduced capacity to nodulate this domesticated host. In contrast to V. faba, which appears to be quite selective in terms of which Rlv strains can nodulate it (Mutch and Young 2004), all the isolates in the present study, regardless of their original host, were shown to be capable of nodulating pea, and many performed better than a commercially used strain (rcr1045) on both the pea varieties tested. A nodulation test on faba bean using a sub-set of 14 strains from both nodAD Groups I and II (including some from the CSC) indicated that they could all nodulate this host. However, the highest-performing strains were not the same as those with pea, and there was no specific nodAD genotype associated with a high performance on either host in single inoculation tests, which agrees with Boivin et al. (2020) who concluded that Rlv nod genotype is unrelated to symbiotic effectiveness.
Taken together, these tests demonstrated that, in spite of reports that prolonged absence of legume cropping can result in a loss of Rlv diversity (Depret et al. 2004), the CSC soils harboured a high diversity of effective Rlv capable of nodulating faba bean and pea. The maintenance of these relatively high and quite diverse Rlv populations may be the result of various factors, including their persistence in the soil as saprophytes from legume-cropping prior to 1970 (although no record exists for the CSC sites, this cannot be excluded), but also legume weeds, including Vicia species, which were widespread in arable systems throughout the 20th century (Squire 2017), and invasion from adjacent sites (Hirsch 1996). In the case of the CSC, neighbouring farms were clearly a potential source, as strains JHI981, JHI982 and JHI984 (from Carmichael and James Hutton Institute Farms) were included in the nodAD Group II that contained most of the CSC faba bean strains, as were pea strains isolated from fields neighbouring the CSC at Balruddery Farm. On the other hand, nodAD Group I was the more diverse group (in terms of host), and contained all the strains from wild Lathyrus and Vicia species resident in the field margins, and probably within the in-field seedbank during the 20th century, so we cannot necessarily conclude that the nodAD Group I made a lesser contribution to the in-field V. faba symbionts in the CSC. Indeed, it also harboured V. faba symbionts, such as another strain from Carmichael Farm (JHI983), as well as a group of 17 CSC strains that were almost exclusively isolated from faba bean cropped in 2013; these 17 strains indicate that location can also be an important determinant of which Rlv genotypes predominate in any given field.
Conclusions
Faba bean and pea have been grown in northern Europe for millennia, and although it has long been considered that they do not require any N-fertiliser (http://www.pgro.org/; Iannetta et al. 2016;Squire et al. 2019), here we present the first comprehensive evidence that they can fix all of their N needs and that, in addition to having no requirement for fertiliser N, they may also leave a residue of 50-110 kg N ha -1 yr -1 in the soil after they are harvested. With appropriate management this residual N can be made available to the following (non-legume) crops. Based on the data from 2014 to 2015 in the present study, changes in management that could be considered to increase BNF, N-accumulation and residual N deposition by faba bean include the introduction of integrated soil cultivation and organic amendments, especially if particular varieties (e.g. Babylon and Boxer) shown to respond positively to this soil amendment are to be sown. Although concern has been raised that low Rlv populations with reduced diversity might limit faba bean BNF and yields (Sorwli and Mytton 1986), especially if the decline in pulse cropping in northern Europe is to be reversed (Iannetta et al. 2016;Squire et al. 2019), the present study suggests that as long as legume cropping has continued in some neighbouring fields, and/or if the field margins and in-field seedbank contain wild Vicia and Lathyrus species, then there will most likely be sufficient rhizobia of appropriate genotypes in most soils to support nodulation of faba bean and pea in suitable cropping areas. Nevertheless, it is also clear that not all rhizobial strains are equal, i.e. there may be potential to apply "elite" strains as inoculants for boosting BNF and grain yield, particularly on soils that have been under long periods of fallow (Nutman and Hearne 1979), and in which low Rlv populations have been estimated, e.g. using the qPCR method (Macdonald et al. 2011;this study). These elite strains, however, would need to be "tailored" to a particular pulse, as strains that are highly effective on pea are not necessarily as effective on faba bean (and vice versa), and moreover, they would need to be assessed for their ability to compete in the soil for nodulation of their target hosts (Boivin et al. 2020;Mendoza-Suárez et al. 2020). On the other hand, if elite strains can compete for nodulation, Westhoek et al. (2021) have recently demonstrated that there is good reason to believe that they can then dominate as symbionts via the host plant "conditionally sanctioning" nodules occupied by other, lesser-performing strains.
It is now increasingly clear that there is a critical need to reduce inputs of fertilizer N into agroecosystems, particularly in the developed world, and that BNF by legumes can play a crucial role in this reduction (Udvardi et al. 2021). The unique aspects of the present study are its field scale and the multiple years over which it was conducted, which together demonstrate the enormous amounts of N that legume crops can fix. Given this knowledge, the proper harnessing of BNF by legumes should now be implemented to help grow arable crops more sustainably, but also to meet commitments to reducing greenhouse gas (GHG) emissions.
Association of polycythemia vera with positive JAK2V617F mutation and myasthenia gravis: A report of two cases
Abstract

Screening for MG in patients with PV positive for the JAK2V617F mutation can help in early diagnosis and treatment, resulting in a significant reduction in morbidity and mortality.
1 | BACKGROUND

Autoimmune conditions are associated with 1.7% of myeloproliferative neoplasms. An association of myasthenia gravis (MG) with chronic myeloid leukemia has been reported, but its association with polycythemia vera (PV) has never been reported. We report two patients who had MG and PV with the JAK2V617F mutation. Both had splenomegaly but no thymoma.
Myasthenia gravis (MG) is an autoimmune disease characterized by antibodies to acetylcholine receptors at the neuromuscular junction (NMJ). The prevalence of MG in the United States is 0.02%. 1 Prevalence in Arab countries is slightly higher (0.05%-0.08%). 2 It involves the extraocular muscles initially, characterized by fluctuating muscle weakness worsening with exercise and improving with rest. MG has an established association with autoimmune thyroiditis, Graves' disease, rheumatoid arthritis, systemic lupus erythematosus (SLE), and type 1 diabetes mellitus. 1 Myeloproliferative neoplasms (MPNs) are a group of rare blood cancers due to stem-cell hyperplasia characterized by an increased peripheral blood cell count, overactive bone marrow, and proliferation of mature hematopoietic cells. 3 Chronic myeloid leukemia (CML), essential thrombocythemia (ET), polycythemia vera (PV), and myelofibrosis are designated as MPNs, with CML being positive for the BCR-ABL1 gene fusion (Philadelphia chromosome) 4 and the latter three negative. 5 The majority of BCR-ABL1-negative MPNs are sporadic; however, there are reports of familial cases from different parts of the world. 6 Paraneoplastic syndromes are clinical syndromes involving nonmetastatic systemic effects that accompany a malignant disease. Neurologic paraneoplastic syndromes are estimated to occur in fewer than 1% of patients with cancer. [7][8][9] There have been rare documentations of the association of MG with CML. 8,9 However, to the best of our knowledge, the association of MG and PV has never been reported in the literature.
| Case 1
In March 2016, a 57-year-old lady presented with a 6-month history of difficulty talking and voice change that started incidentally after an episode of shouting. She also complained of intermittent diplopia, which was more evident in looking toward the left side. There were no other associated symptoms. She did not report difficulty in swallowing, variation in the speech pattern, or difficulty in breathing. The past medical and family history was noncontributory. Her facial appearance, strained speech, and fatigue with recurrent effort suggested myasthenia gravis (MG). Electromyogram (EMG) along with a positive high titer of anti-muscle-specific kinase (MuSK) antibodies (48.5 nmol/L, normal < 0.1 nmol/L) confirmed MG. Anti-acetylcholine receptor (AChR) antibodies were negative. She was initially started on steroids and azathioprine, to which she had a good response but developed steroid-induced Cushing syndrome and multiple thoracolumbar spinal fractures. She was then shifted to tacrolimus with excellent response, and steroids were tapered. She developed congestive heart failure (CHF) with an ejection fraction of 25%, which was thought to be secondary to the tacrolimus. Tacrolimus was replaced by mycophenolate with bridge steroids for a short period. In the most recent clinic visit, she was doing well with mycophenolate. During her initial visit, she was incidentally noted to have high hemoglobin (17.8 g/dL, normal < 16.5 g/dL) with high hematocrit (64.3%, normal 35%-45%), erythrocytes (6.9 × 10 6 /µL, normal 3.8-4.8), leukocytes (17.4 × 10 3 /µL, normal 4-10) and thrombocytes (505 × 10 3 /µL, normal 150-400). A blood smear showed erythrocytosis with predominantly normochromic red cells, leukocytosis with neutrophilia, and thrombocytosis. Physical examination revealed hepatosplenomegaly, which was confirmed with an ultrasound of the abdomen showing a liver span of 19 cm and a spleen measuring 18 cm (Figure 1). She was diagnosed with polycythemia vera (PV) as per the World Health Organization diagnostic criteria, as the JAK2V617F mutation was positive. Treatment was initiated with hydroxyurea 500 milligrams twice daily and aspirin 100 milligrams daily, with follow-ups at regular intervals. The latest blood tests showed normal hemoglobin (14.8 g/dL), erythrocytes (4.5 × 10 6 /µL), leukocytes (8.2 × 10 3 /µL), and thrombocytes (292 × 10 3 /µL) stable over the last 2 years.
Case 2
A 63-year-old man with known hypertension was referred to the hematology clinic in November 2018 after he was found to have high hemoglobin (20.3 g/dL, normal < 16.5 g/dL) with high hematocrit (62.6%, normal 35%-45%), erythrocytes (7.2 × 10⁶/µL, normal 3.8-4.8), leukocytes (14.3 × 10³/µL, normal 4-10), and thrombocytes (674 × 10³/µL, normal 150-400). A blood smear showed erythrocytosis with normal indices and a packed smear appearance, neutrophilic leukocytosis, and marked thrombocytosis. Ultrasound of the abdomen showed that the liver measured 14.4 cm and the spleen measured 15.4 cm (Figure 2). He was positive for the JAK2 V617F mutation and was diagnosed with PV. Treatment was initiated with hydroxyurea 1 gram daily. Six months later, he was found to have pancytopenia during routine follow-up, with a severely reduced hemoglobin of 2.9 g/dL, a white cell count of 2.3 × 10³/µL, and a platelet count of 69 × 10³/µL. He also complained of diplopia when looking toward the left for a week. A peripheral smear showed no blasts, and repeated ultrasound showed a decrease in the size of the spleen. Hydroxyurea was stopped under the impression of drug-induced bone marrow suppression; supportive transfusions were given, and blood counts were monitored. Neurologic evaluation showed diplopia that became more pronounced on left lateral gaze. There was bilateral restriction of adduction and of vertical eye movements. Pupils were equal and responsive to light. There was no facial asymmetry, and the gag reflex was preserved. Left-sided ptosis was noted, which worsened with repeated movements of the eyelid. He developed hypophonia after counting out loud. The patient's son added that he had been having a dysconjugate gaze and generalized weakness toward the end of the day for the last 2 years, but these symptoms had been ignored. There was a high clinical suspicion of MG, and he was started on pyridostigmine 60 mg daily while observing for a clinical response. There was an improvement in left eye ptosis over the next 3 days, and he was maintained on the same dose of pyridostigmine. AChR antibodies were positive, but there was no thymoma on computed tomography (CT) of the thorax. Hydroxyurea was not restarted, and he was maintained on close follow-up to monitor blood counts. Therapeutic venesection was performed as needed. He was asymptomatic from the MG standpoint and had normal blood counts at his latest clinic visit in June 2020.
DISCUSSION
Lymphoproliferative disorders are well known to be associated with autoimmune diseases (8% prevalence). MPNs are less commonly associated with autoimmune diseases (1.7% prevalence). Dührsen et al. described the spectrum of autoimmune diseases in 346 patients with MPNs, including 76 patients with CML, 46 with idiopathic myelofibrosis (IMF), 35 with PV, 42 with unclassifiable myeloproliferative disorders, 14 with myelodysplastic syndrome, and 133 with acute myelogenous leukemia (AML). They found no instances of MG preceding or during any of the MPNs. Autoimmune diseases such as rheumatoid arthritis, ankylosing spondylitis, and multiple sclerosis were associated with CML, and pernicious anemia with IMF. They also described the spectrum of autoimmune diseases related to lymphoproliferative disorders, in which there was one case of MG associated with chronic lymphocytic leukemia (CLL). 7 Paraneoplastic phenomena in MPNs are rare. There are two case reports of patients who presented simultaneously with CML and MG. 8,9 Kumar et al. in 2007 reported the case of a 47-year-old male who presented with diplopia and was found to have leukocytosis on routine laboratory evaluation. He was diagnosed simultaneously with CML and MG. He had splenomegaly (6 cm), the BCR-ABL1 gene fusion, and positive anti-AChR antibodies. He was started on steroids and pyridostigmine for MG and imatinib 400 mg daily for CML. Re-evaluation after 12 weeks showed regression of the splenomegaly with a complete hematologic and cytogenetic response. There was resolution of the ptosis and ophthalmoplegia, and anti-AChR antibodies turned negative. 8 There are no cases of PV associated with MG reported so far. In our first case, both MG and PV presented simultaneously at the initial visit, whereas the second patient presented initially with PV, and MG manifested later, almost 6 months after initiation of treatment with hydroxyurea. There was no evidence of thymoma. Both patients were treated with hydroxyurea for their PV, but the second patient's course was complicated by pancytopenia. Patient 1 had positive anti-MuSK antibodies, whereas patient 2 had positive anti-AChR antibodies. There is no clearly defined pathophysiology in the literature regarding the association between PV and MG. The most plausible hypothesis is that of a paraneoplastic syndrome. It is postulated that the anti-AChR and anti-MuSK autoantibodies specifically target the α3 subunit of nicotinic acetylcholine receptors (nAChRs), which are found in the thymus. It has been shown that lung cancers and neuroblastomas can express the α3 subunit of nAChR and thus cause MG without thymoma. 10 Patient 1 had a simultaneous presentation of both PV and MG. In patient 2, even though MG was diagnosed 6 months after PV, there was a 2-year history of diplopia, raising the possibility of simultaneous onset. Absence of thymoma, simultaneous onset, and positive anti-AChR or anti-MuSK antibodies are features suggesting a possible paraneoplastic syndrome, in which PV expresses nAChRs.
Patients with PV who are older than 60 years and at high risk of thrombosis need cytoreductive therapy. The most common agent used for cytoreduction in PV is hydroxyurea, given its cost-effectiveness and safety profile, but there is no evidence of a benefit of hydroxyurea in patients with MG. One of our patients developed MG after 6 months of therapy with hydroxyurea; there are no other reports in the literature describing the onset of MG after hydroxyurea therapy. Other, nontraditional agents that could be used for cytoreductive therapy are busulfan and interferon-alpha. Several case reports link the onset of MG with busulfan therapy. Interferon-alpha is an emerging agent for cytoreductive therapy in PV, 11 and some studies have shown that it is useful in the treatment of MG as well. 12 A few case reports have described the onset of MG after interferon-alpha therapy. 13,14 It has been concluded that 6 months of IFN-alpha therapy appears to be safe in MG, although in patients with malignancy, IFN-alpha may cause increased autoimmunity, AChR antibody positivity, and MG. 12
CONCLUSION
Any patient with a malignancy who develops a neuromuscular syndrome should be investigated for the possibility of a paraneoplastic syndrome. Patients with JAK2 mutation-positive PV can develop MG as a paraneoplastic syndrome in the absence of thymoma. PV shows a good response to hydroxyurea therapy, and MG to steroid plus cholinergic therapy. Interferon-alpha is an emerging modality for cytoreductive therapy in PV and has also been reported to induce remission in MG.
|
2020-11-26T09:06:22.713Z
|
2020-10-01T00:00:00.000
|
{
"year": 2020,
"sha1": "71ca79515c8192dd34717ba2479287a1fa823f4c",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ccr3.3574",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2f6cc59e62a0b603c324e4570e00bce111645252",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
256199443
|
pes2o/s2orc
|
v3-fos-license
|
Population history, genetic variation, and conservation status of European white elm (Ulmus laevis Pall.) in Poland
The core populations of the European white elm (Ulmus laevis Pall.) located in Poland maintained a slightly higher level of genetic diversity compared with the peripheral populations of this species. The most severe threat to elms is the loss of natural habitat under the pressures of agriculture and forestry as well as urbanization. Reductions in European white elm populations, as well as in populations of other elm species, have also been caused by Dutch elm disease (DED). Previous studies have indicated a low level of genetic variation in Ulmus laevis Pall. However, in Poland, the genetic resources and demographic history of U. laevis populations remain poorly documented. Here, the genetic resources of U. laevis in Poland were identified and characterized. Additionally, tests were performed to identify potential bottleneck signatures and to estimate the effective population sizes of the examined populations. Polymorphism was analyzed using a set of six nuclear microsatellite markers (nSSRs) for 1672 individuals from 41 populations throughout the species range in Poland. (1) A moderate level of genetic variation was found. (2) Low genetic differentiation and a lack of population structuring were identified. (3) Evidence of reductions in population size as a consequence of severe, past bottlenecks was found. The loss of genetic diversity of U. laevis probably occurred in refugia or shortly after postglacial recolonization, and it may have been affected by past DED pandemics similar to those seen at present.
Introduction
The European white elm (Ulmus laevis Pall.) is a broadleaved tree whose natural distribution in Europe extends from central France to the Urals. In the northern part of its range, white elm reaches only the southern part of Finland; it reaches the southern end of its range in Albania and Bulgaria and grows in several isolated stands in Turkey (Jalas and Suominen 1999; Collin 2003). Across its natural range in Europe, white elm grows mainly in lowlands and only sporadically enters mountainous areas along river valleys. Generally, it is more common in eastern Europe than in western Europe (Boratyńska et al. 2015). As the species preferentially occupies lowland areas, its optimal occurrence range covers fertile and moist habitats in river valleys; it is often found in floodplains, is tolerant of moist soils and periodic flooding, and typically occurs in damp, low-lying areas and as a component of riparian forests. In Poland, white elm is one of three native species of elm: the European white elm (U. laevis Pall.), wych elm (U. glabra Huds.), and field elm (U. minor Mill.). The area occupied by elms covers 17,654 ha, i.e., 0.24% of the total forested area, and only about 1000 ha are dominated by elms (Napierała-Filipiak et al. 2014). The vast majority of elm resources are currently formed by white elm. In at least one-third of the sites, the white elms are of artificial origin, and most often the species is represented by isolated individuals or small groups (Napierała-Filipiak et al. 2016). It seldom occurs in the mountains and does not exceed foothill elevations. U. laevis can seed in and grow under the canopies of old trees, and the species often emerges from dense grass cover. Therefore, in areas covered by white elm, several generations of elm forest may coexist (Filipiak and Napierała-Filipiak 2015).
Today, the most severe threat to elms is the loss of their natural habitat under the pressures of agriculture, forestry, and urbanization. For the last hundred years, European white elm populations, like those of other elm species, have also undergone reductions caused by Dutch elm disease (Brasier 2000), which is caused by the non-native fungi Ophiostoma ulmi (Buisman) Nannf. and O. novo-ulmi Brasier and is spread by bark beetles of the genus Scolytus Geoffroy (Coleoptera, Scolytidae; Brasier 2001). U. laevis is the most resistant to infection (Brasier 2001). This disease has spread to large areas of Europe, South America, and Asia, causing very high mortality rates and contributing to the current, very dispersed distribution. Genetic changes associated with small, fragmented populations and increased isolation may limit the evolutionary potential of a species and affect its ability to adapt to new challenges related to climate change as a consequence of genetic diversity lost through random drift (Schaberg et al. 2008). Small, fragmented populations are more prone to the adverse effects of random processes such as the "founder" and "bottleneck" effects (Sork and Smouse 2006).
Recently, special attention has been given to the protection of the remaining natural or seminatural riparian forest communities. In many countries, it has become important to re-naturalize river valleys to restore their natural, economic, and recreational value, and the white elm is an extremely important element of these communities. The first step in developing a species protection strategy is to document the level and pattern of its genetic variation, and microsatellite markers of nuclear DNA are widely used in this respect (e.g., Litkowiec et al. 2018; Scotti-Saintagne et al. 2021). Previous research on the genetic structure of U. laevis in Europe using different marker systems demonstrated a low level of overall genetic variation and significant differentiation between populations, especially in peripheral populations of the species (Vakkari et al. 2009; Nielsen and Kjaer 2010; Fuentes-Utrilla et al. 2014).
Microsatellite markers have been developed and successfully used for many elm species (Whiteley et al. 2003; Collada et al. 2004; Zalapa et al. 2008). However, the degree of U. laevis genetic diversity in Poland has not been determined thus far. This study was initiated to shed light on the genetic diversity and structure of elm populations throughout their central range in Poland. We aimed to answer several questions: (1) Are the levels of genetic diversity and genetic differentiation within and among the examined populations comparable to those of other populations across the entire natural distribution? (2) Is the observed genetic diversity the result of recent population declines or of severe, past population bottlenecks? (3) Is the genetic structure of the examined populations a consequence of postglacial history?
Plant sampling
This study examined 41 U. laevis populations from the entire species range in Poland (Fig. 1). The number of specimens sampled from each population ranged from 11 to 50 individuals; a total of 1672 individuals were analyzed (Table 1). Most trees were located near watercourses and lakes in fertile or moderately fertile, moist areas. Almost half of the collected material came from nature reserves. Additionally, most of the areas from which the research material was collected were located in nature protection areas (Natura 2000).
DNA extraction, amplification, and genotyping
Total genomic DNA was isolated from approximately 20 mg of leaf tissue using an ISOLATE II PLANT DNA kit (Bioline, London, UK). Markers originally described for Ulmus species were tested for their ability to provide repeatable, high-quality results, sufficient polymorphism, and unambiguous allele binning. Finally, six polymorphic markers, Ulm19, Ulm2, Ulm3, Ulm6, Ulm9 (Whiteley et al. 2003), and UR188a (Zalapa et al. 2008), were simultaneously amplified in a multiplex reaction using Multiplex Master Mix (Qiagen, Hilden, Germany). The polymerase chain reaction (PCR) program was as follows: 3 min at 94 °C; 30 cycles of 15 s at 94 °C, 90 s at 53 °C, and 2 min at 72 °C; and a final extension of 20 min at 72 °C. The fluorescently labeled PCR products, along with a size standard (GeneScan 600 LIZ, Thermo Fisher Scientific, Waltham, Massachusetts, USA), were separated on an ABI 3500 capillary sequencer (Thermo Fisher Scientific, Waltham, Massachusetts, USA). Alleles were identified based on their size using GeneMapper software (ver. 5.0; Thermo Fisher Scientific, Waltham, Massachusetts, USA), and all variants were checked and approved manually.
Genetic diversity and differentiation
The following genetic diversity estimators were computed using FSTAT v. 2.9.3 software (Goudet 2001): the number of alleles (A), allelic richness (AR) estimated for a minimum sample size of 11 individuals, observed heterozygosity (Ho), and unbiased expected heterozygosity (He). The number of private alleles (Pa) and the effective number of alleles (Ae) were calculated using GenAlEx 6 (Peakall and Smouse 2006). An allele was declared "private" when it was detected in only one population and was absent from all others. Microsatellite markers are susceptible to genotyping errors such as null alleles (Guichoux et al. 2011), which can inflate F-statistics on account of false homozygotes in populations (e.g., Litkowiec et al. 2018). Therefore, the loci were also tested for the presence of null alleles (N0) using INEST 2.0 software (Chybicki and Burczyk 2009). The multiple-sample score test (U test; Raymond and Rousset 1995), implemented in GENEPOP ver. 4.3 (Rousset 2008), was used to test for significant deviation from Hardy-Weinberg equilibrium (HWE).
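For concreteness, a minimal Python sketch of the two heterozygosity estimators is given below; it is illustrative only (the study itself used FSTAT and GenAlEx), and the genotype data shown are hypothetical.

from collections import Counter

def heterozygosity(genotypes):
    """Observed (Ho) and unbiased expected (He) heterozygosity for one locus.

    genotypes: list of (allele1, allele2) tuples for the individuals of one
    population; missing data should be removed beforehand.
    """
    n = len(genotypes)                                # genotyped individuals
    ho = sum(a != b for a, b in genotypes) / n        # fraction of heterozygotes
    counts = Counter(a for g in genotypes for a in g)
    total = 2 * n                                     # sampled gene copies
    sum_p2 = sum((c / total) ** 2 for c in counts.values())
    he = (total / (total - 1)) * (1 - sum_p2)         # Nei's unbiased estimator
    return ho, he

# toy example with three alleles (fragment sizes in base pairs)
pop = [(150, 154), (150, 150), (154, 158), (150, 158), (154, 154)]
print(heterozygosity(pop))   # -> (0.6, ~0.711)

Allelic richness additionally involves rarefaction of every population to the smallest common sample size (here 11 individuals), which FSTAT performs internally.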
Genetic differentiation among populations was assessed using Fst values computed with FSTAT v. 2.9.3. Considering the presence of null alleles at all loci, FreeNA software was used to estimate Fst based on the Cavalli-Sforza and Edwards (1967) genetic distance using the Excluding Null Alleles correction method (ENA, Fst ENA; Chapuis and Estoup 2007). Bootstrap 95% confidence intervals (CI) for the global Fst values were calculated using 10,000 replicates over the analyzed loci. We also compared the pairwise Fst and Rst values to assess phylogeographic structure using the SpaGeDi 1.3d program (Hardy and Vekemans 2002; Hardy et al. 2003). R-statistics are analogous to F-statistics but are based on allele sizes instead of allele identities; Rst accounts for diversity resulting from genetic drift and mutation processes under a stepwise mutation model (SMM). The Rst values were compared with pRst values (permuted Rst, corresponding to Fst) obtained after permuting allele sizes within loci. The statistical significance of the alternative hypothesis Rst > pRst, which would suggest that allele-size mutations contributed to population differentiation, was estimated with a permutation test (10,000 permutations) implemented in the SpaGeDi 1.3d program (Hardy and Vekemans 2002).
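The ENA-corrected estimator used above is fairly involved, but the basic idea behind Fst can be illustrated with Nei's Gst, the proportion of total expected heterozygosity attributable to allele-frequency differences among populations. The sketch below is a simplified illustration only (equal population weights, hypothetical allele frequencies), not the estimator applied in the study.

def gst(allele_freqs):
    """Nei's Gst = (Ht - Hs) / Ht for one locus.

    allele_freqs: list of dicts, one per population, mapping allele -> frequency.
    Populations are weighted equally for simplicity.
    """
    k = len(allele_freqs)
    # Hs: mean within-population expected heterozygosity
    hs = sum(1 - sum(p ** 2 for p in pop.values()) for pop in allele_freqs) / k
    # Ht: expected heterozygosity of the pooled (mean) allele frequencies
    alleles = set().union(*allele_freqs)
    mean_p = {a: sum(pop.get(a, 0.0) for pop in allele_freqs) / k for a in alleles}
    ht = 1 - sum(p ** 2 for p in mean_p.values())
    return (ht - hs) / ht

print(gst([{"A": 0.7, "B": 0.3}, {"A": 0.4, "B": 0.6}]))   # -> ~0.091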
The genetic structure of the white elm populations was evaluated using the Bayesian clustering method implemented in STRUCTURE ver. 2.3.4 (Pritchard et al. 2000). The assumed parameter set was an admixture model with correlated allele frequencies and no prior information about the location of the analyzed populations. The Markov chain Monte Carlo (MCMC) sampling scheme was run for 200,000 iterations with a 100,000-iteration burn-in period; the K values ranged from 1 to 41, and 10 independent replications were performed for each K value. The optimal K value was estimated using StructureSelector (Li and Liu 2018), which implements Evanno's method (Evanno et al. 2005) as well as four alternative statistical measures (Puechmaille 2016). To check for the presence of isolation by distance (IBD; Rousset 1997), a Mantel correlation test (Mantel 1967) was used. The significance of the correlation between the pairwise geographic distances and the pairwise genetic distances, measured as Fst/(1 - Fst), was tested using 9,999 permutations implemented in GenAlEx 6.
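The Mantel test itself is straightforward to sketch. The following illustrative Python implementation (the study used GenAlEx; the two small matrices below are hypothetical) permutes one distance matrix to obtain a one-tailed p value for the matrix correlation.

import numpy as np

def mantel(geo, gen, n_perm=9999, rng=np.random.default_rng(0)):
    """Permutation-based Mantel correlation between two square distance matrices."""
    n = geo.shape[0]
    iu = np.triu_indices(n, k=1)                 # use upper triangle only
    r_obs = np.corrcoef(geo[iu], gen[iu])[0, 1]  # observed Pearson r
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(n)
        gen_p = gen[np.ix_(perm, perm)]          # permute rows and columns jointly
        r = np.corrcoef(geo[iu], gen_p[iu])[0, 1]
        exceed += r >= r_obs
    return r_obs, (exceed + 1) / (n_perm + 1)    # one-tailed p value

# tiny example with 4 hypothetical populations
geo = np.array([[0, 10, 20, 30], [10, 0, 12, 25], [20, 12, 0, 15], [30, 25, 15, 0]], float)
gen = np.array([[0, .01, .03, .05], [.01, 0, .02, .04], [.03, .02, 0, .02], [.05, .04, .02, 0]], float)
print(mantel(geo, gen, n_perm=999))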
Demographic history
With NeEstimator 2.01 (Do et al. 2014), the effective population size (Ne) of each population was estimated via the linkage disequilibrium method (Waples and Do 2008), assuming a random mating model and a critical allele frequency of Pcrit = 0.02. The 95% confidence intervals (CI Ne) were determined with the jackknife method described by Waples and Do (2008).
The examined Ulmus populations were tested for evidence of genetic bottlenecks using two methods. For each population, the M-ratio (Garza and Williamson 2001), defined as the ratio of k (the number of microsatellite alleles) to r (the overall range in allele size), i.e., M = k/r, and the Wilcoxon test for heterozygosity excess (Cornuet and Luikart 1996) were calculated using INEST 2.2 software (Chybicki and Burczyk 2009). This analysis was performed using the two-phase mutation (TPM) model with two parameters: the proportion of multistep mutations (pg) and the mean size of multistep mutations (δg). The parameters pg = 0.22 and δg = 0.31 were used, as recommended (Peery et al. 2012). The significance of a potential bottleneck was tested using Wilcoxon signed-rank test P values based on 1,000,000 permutations. In addition, we analyzed the demographic history of the populations from the microsatellite data using the approximate Bayesian computation (ABC) method implemented in DIYABC v. 2.0.1 (Cornuet et al. 2014). We used three different scenarios to test for changes in population size: scenario 1 is a white elm population of constant size (Ne constant from past to present); scenario 2 is a population that expanded recently, with Na denoting Ne during the expansion (Ne < Na); and scenario 3 consists of a population that is still experiencing a bottleneck, with Nb denoting Ne during the bottleneck (Ne > Nb). We pooled all populations into a single sample, and for scenario construction a total of 10,000 simulations were performed to generate the reference table; all summary statistics included in DIYABC were used (Cornuet et al. 2014). The posterior probability of each scenario was assessed using a logistic approach (Cornuet et al. 2014), and the scenario with the highest posterior probability was selected and its associated parameters determined.
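The M-ratio itself is simple to compute per locus. The sketch below follows the definition given above, expressing the allele-size range in repeat units and adding 1 to the range by the common convention (an assumption on our part; INEST handles the exact formulation internally), with a hypothetical allele list.

def m_ratio(allele_sizes, repeat_unit=2):
    """Garza-Williamson M = k / r for one microsatellite locus.

    allele_sizes: fragment lengths (bp) of the distinct alleles observed.
    r is the allele-size range expressed in repeat units; following the common
    convention we add 1 so that a locus with contiguous alleles gives M = 1.
    """
    sizes = sorted(set(allele_sizes))
    k = len(sizes)                                    # number of distinct alleles
    r = (sizes[-1] - sizes[0]) / repeat_unit + 1      # range in repeat units
    return k / r

# a locus with 'gaps' in its allele distribution yields M well below 1,
# which is the signature of a past bottleneck
print(m_ratio([150, 152, 158, 164]))   # 4 alleles spread over 8 repeat units -> 0.5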
Genetic diversity and differentiation
All nSSRs were polymorphic, and a total of only 59 alleles were detected in the studied populations, among which 14 were private alleles. The smallest number of alleles (6) was found at the Ulm6 locus, and the highest number (15) at both the Ulm3 and Ulm9 loci. Low frequencies of null alleles were found, with an average frequency of 0.014. As the frequency of null alleles did not exceed the threshold (0.2) above which null alleles can result in a significant underestimation of He (Chapuis and Estoup 2007), all loci were used in the further analyses.
In general, the studied populations were characterized by a moderate level of genetic variation (Table 2). The mean number of alleles (A) was 4.2, ranging from 3.7 in the LAM population to 5.7 in the KLE population. The effective number of alleles (Ae) was much lower, ranging from 2.1 in the LAM population to 2.6 in the WIR population, with an average value of 2.4. Because of the unequal sizes of the studied populations, allelic richness (AR) was calculated by rarefying all populations to 11 individuals (the size of the KLC population). The mean AR value was 4.0, and the populations were quite homogeneous, with AR values ranging from 3.0 in ZRB and BAB to 3.9 in KLE and PRZ. Private alleles (Pa) were found in nine populations at very low frequencies (below 10%), with an average frequency of 5%. The largest number of private alleles was detected in the BOB population (Pa = 4), and the POD and KLE populations each had two private alleles. The observed (Ho) and expected (He) heterozygosity values ranged from 0.595 (KLE) to 0.675 (BIE) and from 0.529 (LAM) to 0.585 (BIE), respectively. The mean Ho value (0.641) was higher than the mean He value (0.553), indicating an excess of heterozygotes (Fis = -0.149). Deviations of genotypic frequencies from Hardy-Weinberg equilibrium (HWE) were not statistically significant in any of the studied populations. The pairwise Fst values ranged from -0.0033 to 0.227, with an overall Fst of 0.076 (95% CI = 0.053-0.092; p < 0.001). The differentiation value was somewhat lower when null alleles were accounted for, with an Fst ENA value of 0.074 (95% CI = 0.055-0.090). This result suggests that null alleles have a negligible influence on the differentiation pattern among populations.
The global genetic differentiation based on allele size (Rst = 0.061; 95% CI = 0.046-0.124) was not significantly different from the differentiation that accounted for allele identities (pRst = 0.077, p = 0.481), indicating the absence of a geographic structure and that gene flow is high compared with the mutation rate.
Genetic structure
Grouping, and thus finding the optimal number of clusters (K), is difficult because the current genetic structure of natural populations is multifaceted and complex, reflecting the influence of demographic, environmental, and historical processes (Meirmans 2015). As the Evanno method (delta K) did not lead to a biological interpretation for our dataset, we used the alternative measures (MedMean and MaxMean) proposed by Puechmaille (2016) to find the optimal K for our populations. The Puechmaille estimators have been found to be more accurate than delta K or mean LnP(K) when populations are unevenly sampled, as in our case. The optimal number of clusters was K = 5 for all 41 U. laevis populations (Fig. 2). However, the proportions of each cluster in the gene pools of the examined populations were comparable and homogeneous, except for six populations in which the frequency of one of the five clusters was slightly higher than 0.55 (Fig. 1). A Mantel test of isolation by distance found a nonsignificant correlation between the geographic and genetic distance matrices (R = 0.002, p = 0.328).
Demographic history and effective population size
The effective population size based on linkage disequilibrium (Ne LD) varied widely among the white elm populations, ranging from 2.7 (STU) to 360.9 (LEM), with an overall harmonic mean of 16.8 (Table 3). In 11 white elm populations, Ne LD was lower than the overall harmonic mean. Most Ne confidence intervals overlapped (not surprising given the small number of loci), so they were interpreted with caution. The M-ratios (MRs) were significantly reduced relative to the mean MRs expected under mutation-drift equilibrium (Mr eq) for all examined white elm populations; this result is strong evidence of a past bottleneck. On the other hand, the Wilcoxon test for heterozygosity excess, performed under the TPM model, indicated recent population reductions in only ten of the analyzed populations (Table 3).
Genetic variation
Overall, the populations examined in our study showed a moderate level of genetic diversity. This is contrary to the premise that outcrossed, wind-pollinated, widespread temperate trees typically exhibit high levels of within-population genetic diversity and low to moderate levels of among-population genetic differentiation resulting from large populations, extensive gene flow, and phenotypic plasticity (e.g., Hamrick et al. 1992; Nybom 2004). Although the Polish populations of European white elm lie in the core of the natural range of the species, their level of genetic variation is only slightly higher than that maintained by the peripheral populations of this species. Some of the analyzed nuclear loci were previously used in studies of European white elm in other parts of Europe. Out of the set of six loci, four (Ulm2, Ulm3, Ulm9, and UR188a) were also analyzed in Danish populations (Nielsen and Kjaer 2010) and five (Ulm2, Ulm3, Ulm9, Ulm19, and UR188a) in Spanish populations (Venturas et al. 2013). Overall, many more alleles were found in Poland than in the Spanish and Danish populations. For example, the most variable loci in Poland, Ulm3 and Ulm9, each had 15 alleles, whereas the material from Denmark and Spain had only 4 and 3 alleles at the Ulm3 locus and 7 and 6 alleles at the Ulm9 locus, respectively. These data suggest that the selected nuclear loci captured more allelic variation in the Polish populations than in the peripheral ones. In our study, a low to moderate level of genetic differentiation was found among the studied U. laevis populations, both with and without adjusting for null alleles (Fst = 0.076 versus Fst ENA = 0.074, p < 0.01). Similarly, Whiteley (2004) found low differentiation among five Central and Northeastern European populations. The overall population differentiation was lower than that observed among Spanish populations (Fst = 0.155; Fuentes-Utrilla et al. 2014). These differences may have resulted from greater gene flow between the Polish elm populations, which occupy a smaller area than the Spanish populations; this gene flow may have prevented increased genetic differentiation among populations. Moreover, in this study, the global genetic differentiation estimated from allele sizes (Rst) was lower than its allele-identity analog (pRst), indicating that random genetic drift was more important than mutation in causing the observed differences among the studied U. laevis populations in Poland.
The Bayesian clustering analysis indicated that five clusters (K = 5) provided the most likely representation of the overall genetic structure of the analyzed U. laevis populations. Generally, the gene pools of the analyzed populations were genetically homogeneous, and the studied populations had comparable gene pool compositions except for a few populations in which one of the five clusters was dominant. The absence of geographic structuring of the gene pool of the Polish U. laevis populations was confirmed by the non-significant Mantel test (R = 0.002, p = 0.328). The current pattern of genetic divergence of U. laevis in Europe, including Poland, is the result of Quaternary climate change, which led to reductions in population size and the long-term isolation of populations during glacial-interglacial cycles and postglacial migration (Hewitt 2000). The observed pattern of genetic structure implies free gene exchange among populations and suggests that they probably share a common postglacial history. An investigation of the postglacial history of U. laevis in Europe using chloroplast DNA (cpDNA) markers identified three cpDNA haplotypes (A, B, and C) that are characteristic of potential glacial refugia of white elm (Whiteley 2004). Haplotype A was found at high frequency across the natural range of U. laevis from France to northwestern Russia. The other two haplotypes are very rare: haplotype B was found in southern France, while haplotype C was found in the Balkans and southwestern Russia. The presence of both haplotypes A and C in Russia indicates a core Russian glacial refugium from which current white elm populations originated by postglacial expansion. The southern distributions of haplotypes B and C could indicate additional refugia for white elm, although postglacial recolonization from these areas was probably limited (Whiteley 2004). It can be speculated that U. laevis entered the territory of Poland from the Russian or the Balkan refugium, or that Poland was reached from both refugial areas. The hypothesis that U. laevis migrated to Poland from two refugia (Russian and Balkan) is plausible because we observed heterozygosity excess in all of the studied populations, consistent with the mixing of two previously isolated populations (the "isolate-breaking" effect; Wahlund 1928). These hypotheses should be verified using cpDNA markers. Moreover, a Bayesian analysis of the population structure of U. laevis performed by Fuentes-Utrilla et al. (2014) showed differentiation between the Iberian/southwestern French populations and the Central European core distribution of this species. The pattern of genetic diversity of the Spanish populations is not consistent with that found in Central Europe, and the authors concluded that the Spanish populations of U. laevis may represent relict populations of an Iberian glacial refugium.
Demographic history
A bottleneck can negatively influence the genetic structure of natural populations because the decline in population size increases the level of inbreeding and reduces genetic diversity; thus, bottlenecks threaten the sustainability of populations in both the short and the long term. On the other hand, life history traits common to long-lived forest trees, such as long-distance pollen and seed dispersal, overlapping generations, and longevity, may protect populations against the effects of sharp population declines over the short term. In this study, the tests used to examine the bottleneck effect with microsatellites yielded interesting results: the M-ratio test suggested bottlenecks in all populations, whereas the heterozygosity excess test performed with the TPM model showed evidence of bottlenecks in only ten of the analyzed populations. Tests based on heterozygosity relative to the number of alleles are better able to identify recent, less severe bottlenecks: recently bottlenecked populations show excess heterozygosity relative to that expected from the number of alleles. In contrast, the M-ratio is smaller in a bottlenecked population than in an equilibrium population, and the M-ratio test is more powerful at detecting ancestral and extended declines than the heterozygosity excess test (Williamson-Natesan 2005). As the recovery time of the M-ratio is longer than that of the heterozygosity excess, the low M values obtained reflect older and more severe reductions in the sizes of the studied populations (Garza and Williamson 2001; Williamson-Natesan 2005). Thus, the low M-ratio values obtained for all studied U. laevis populations are likely a consequence of genetic decline during postglacial recolonization. Our results indicate that the loss of U. laevis genetic diversity probably occurred in refugia or shortly after postglacial recolonization. This assumption was supported by the DIYABC analysis, in which the scenario that best fit our data indicated a reduction in the size of the ancestral population.
Other investigations have also shown signatures of historical bottlenecks after Holocene migration in Spanish U. laevis populations (Whiteley 2004; Fuentes-Utrilla et al. 2014). Similarly, relict U. glabra populations from the Iberian Peninsula experienced historical reductions in their population sizes (Martín del Puerto et al. 2017). The genetic bottleneck phenomenon is closely related to Ne and serves as a warning sign for the conservation status of a given population (Frankham 2005; Luikart et al. 2010). The 50/500 rule proposed by Franklin (1980) has become an essential indicator in conservation genetics: Ne = 50 is considered sufficient to prevent inbreeding depression in the short term (over five generations), whereas Ne ≥ 500 is considered appropriate for securing long-term viability because such a population can maintain a balance between genetic drift and mutation, thereby retaining its evolutionary potential. The Polish U. laevis populations studied herein are characterized by low effective population sizes, with a mean Ne value of 16.8. Moreover, only eight of the forty-one (20%) examined populations exhibited Ne values over 50. Based on these criteria, the studied populations probably maintain low evolutionary potential. However, field observations of the Polish U. laevis populations indicate that they are in good condition despite their moderate level of genetic diversity and low effective population sizes.
Loss of genetic variation and Dutch elm disease
In Poland, the maximum spread of Ulmus in the Holocene started at approximately 6000 B.P. from the southeast, before the advent of the Neolithic people, and the proportion of Ulmus in the overall forest stand composition was higher than 10% (Ralska-Jasiewiczowa et al. 2003). Then, at approximately 5000 B.P., the proportion of Ulmus in the forest stands decreased rapidly to 2%. Undoubtedly, settlement activities and regional climate change contributed to reduced forest cover in areas where Ulmus occurred (Ralska-Jasiewiczowa et al. 2003). However, according to many authors, the rapid rate of the Ulmus decline corresponds better with the pathogen hypothesis, namely cyclical Dutch elm disease pandemics (Girling and Greig 1985; Ralska-Jasiewiczowa et al. 2003; Caseldine and Fyfe 2006). In Poland, the last pandemic was first recorded in Katowice in 1927 and then in northern Poland and Warsaw in 1932 and 1935, respectively. In the 1950s and 1960s, the disease was reported in all parts of the country and caused substantial losses in urban green areas, along roadsides, and in forests (Mańka 2005). Compared with other tree species with similar life histories, the U. laevis populations maintain a strongly reduced level of genetic variation and low genetic differentiation. For example, black poplar, which occupies habitats similar to those of U. laevis, still maintains a high level of genetic variation and a low level of genetic differentiation (Lewandowski and Litkowiec 2017; Wójkiewicz et al. 2019; Wójkiewicz et al. 2021).
In a species with a high level of genetic variation, the random loss of different alleles as a result of the recent pandemic should increase the level of interpopulation variation; however, we did not observe this in our study. Moreover, only a few populations displayed "private" alleles, and these occurred at very low frequencies. Thus, it appears that despite the high mortality of white elm during the last pandemic, the species lost only a small number of alleles. This may indicate that the Polish elm populations that existed before the last pandemic were homogeneous, with a low level of genetic variation similar to that expressed by the species today. Additionally, other studies have suggested that there is no evidence of genetic diversity losses in elm populations as a consequence of the last pandemic (Nielsen and Kjaer 2010; Brunet et al. 2016; Buiteveld et al. 2016). It is possible that elm populations lost most of their genetic variation during previous pandemics. As our research shows, all analyzed populations have experienced severe reductions in their Ne. It cannot be ruled out that the loss of genetic variation in white elm took place already in the refugia; therefore, more extensive research is needed involving populations from other parts of Europe.
Conclusion
The U. laevis populations examined appeared to maintain a moderate level of genetic variation and low genetic differentiation, with no evidence of genetic population structuring. Our results point to demographic processes such as reductions in population size through past bottlenecks. We speculate that the loss of genetic variation in U. laevis probably occurred in the refugia or shortly after postglacial recolonization. This hypothesis should be tested by additional investigations using cpDNA and nSSR markers. Also, much more detailed sampling of the Russian populations would be extremely valuable.
Despite the pandemic and only moderate genetic diversity, white elm individuals are still quite numerous in Poland. However, very significant reductions in the numbers of individuals have taken place in white elm populations, and most populations have low effective population sizes. In Poland, the area of forest stands dominated by elms has doubled over the last 50 years (Napierała-Filipiak et al. 2016), mostly in stands of white elm, the species least prone to the disease. U. laevis often occurs in legally protected areas, such as nature reserves, or in areas covered by the Natura 2000 network. Therefore, we currently do not see any urgent need for ex situ or in situ conservation action. We propose that the most valuable populations with high effective population sizes, such as TRB, LEM, OLR, TUM, and KAL, be considered candidates for dynamic conservation units (DCUs) in the European Information System on Forest Genetic Resources (EUFGIS; http://www.eufgis.org/) conservation network and be subjected to continuous monitoring.
|
2023-01-25T15:35:15.438Z
|
2022-09-05T00:00:00.000
|
{
"year": 2022,
"sha1": "eabdfef3ee87b048c265426f887d5573de889052",
"oa_license": "CCBY",
"oa_url": "https://annforsci.biomedcentral.com/counter/pdf/10.1186/s13595-022-01157-5",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "eabdfef3ee87b048c265426f887d5573de889052",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
}
|
84797174
|
pes2o/s2orc
|
v3-fos-license
|
Protoariciella uncinata Hartmann-Schröeder, 1962 (Polychaeta, Orbiniidae): a new record for intertidal mussel beds of the Southwestern Atlantic shore affected by sewage effluents
Resumen. The species Protoariciella uncinata (Polychaeta, Orbiniidae) is reported for the first time from the southwestern Atlantic, based on intertidal samples taken in the Brachidontes rodriguezi mussel community of the Mar del Plata area, Argentina. The species had previously been recorded from mussel beds of Chile and Peru. Information is also provided on spatial distribution, the relationship with the organic gradient produced by sewage effluents, density, and other ecological data. Keywords: Protoaricinae, first record, distribution, Atlantic Ocean. Abstract. Protoariciella uncinata (Polychaeta, Orbiniidae) is recorded for the first time from southwestern Atlantic shores, from mytilid mussel beds of Brachidontes rodriguezi of the Mar del Plata area, Argentina. The species was formerly described from mytilid beds of the Pacific coast of Chile and Peru. The work also provides information on spatial distribution, the relationship with sewage organic enrichment, density, and other ecological data.
Introduction
Mussel beds are effective refuges for many small organisms. On the Southwestern Atlantic shore, the Perna perna community in southern Brazil (Jacobi 1987) and the Brachidontes rodriguezi community in Uruguay and northern Argentina (Olivier et al. 1966, Penchaszadeh 1973, Scelzo et al. 1996) are examples of this phenomenon. In the latter community, debris and sediments accumulate among the byssal filaments (up to 19 kg m⁻² on horizontal substrates) and are colonized by polychaetes, nemerteans, and other invertebrates (Penchaszadeh 1973). These processes increase as patches become older and multilayered.
The identification of the polychaetes associated with the Brachidontes rodriguezi community was incomplete. Recently, in a study of the community structure of B. rodriguezi (Vallarino et al. 1999) developed on abrasion platforms affected by domestic sewage of Mar del Plata City (38ºS, 57ºW), a number of polychaetes were identified (Elias et al. 1999). One of them corresponds to Protoariciella uncinata Hartmann-Schröeder (1962a). This is the first record of the species in waters of the Southwestern Atlantic shore; it was formerly cited from the Pacific coasts of Peru and Chile.
Study area
The sampling area is an open coast subjected to a south-to-north littoral current, with extensive sandy beaches interrupted only by quartzitic outcrops and abrasion platforms of caliche (consolidated loess). Biogeographically, the region is a transitional temperate-cold water area between the subantarctic region (Patagonia) and the subtropical region (southern Brazil). Seawater temperature ranges between 8 and 21 ºC and salinity between 33 and 34 PSU. Semidiurnal tides vary between 0.60 and 0.90 m.
Sewage is discharged onto intertidal abrasion platforms about 6 km north of the city, where 5 stations (named A, farthest from the effluent, to E, closest to it) were randomly sampled with a 78 cm² corer at two tidal levels (4 sampling units in the upper fringe and 4 in the lower fringe). A control station (X) was sampled in the same way 9 km north of the effluent on a similar abrasion platform (Santa Elena Formation). The material examined was collected from: (1) station X (37º 50.860 S, 57º 27.315 W; 150 specimens from 8 sampling units); (2) stations A (33 specimens), B (70), and C (39), placed around the intertidal effluent (37º 55.591 S, 57º 31.701 W); and (3) sampling units (also 78 cm² corers) on vertical artificial substrates of the central beaches of Mar del Plata (Scelzo et al. 1996), 24 specimens.
Results and Discussion
The material examined fits well with the description of Hartmann-Schröeder (1962a): Protoaricinae (first two segments achaetous), branchiae from the third setiger, all thoracic notosetae crenulate capillaries, and acicular setae present in the posterior notopodia. In our material, the notosetae include thick hooks with three to five teeth, whereas the description of specimens from Chile (Hartmann-Schröeder 1962b) shows hooks with only three teeth. Other remarkable features are a pointed prostomium in many specimens and an elongate posterior end in some individuals. In the specimens from Mar del Plata, the number of setigers varies from 45 to 78 in individuals 3.5 to 14 mm long (Fig. 2).
Protoariciella uncinata has been found in the interstitial sediments that accumulate within mussel beds of Brachidontes rodriguezi. These sediments are poorly sorted, being a mixture of sand grains (fine to coarse) and shell debris. On abrasion platforms, interstitial sediments can reach up to 100 kg m⁻² (Fig. 3), forming a thick layer between the mussels and the substrate. The organic matter content of the sediments varies along a gradient from the effluent (Fig. 4).
The distribution of Protoariciella uncinata shows a negative effect of organic pollution, mean density being lower in the impacted areas (stations E to A) than at the control site (station X) (Fig. 5). The species was also found on vertical artificial substrates (breakwaters) of the central Mar del Plata beaches, in the high and middle levels of the intertidal zone (Scelzo et al. 1996).
Figure 5. Density (ind m⁻²) of Protoariciella uncinata in intertidal mussel beds of Brachidontes rodriguezi developed on abrasion platforms affected by sewage.
|
2018-12-06T18:41:27.470Z
|
2000-01-15T00:00:00.000
|
{
"year": 2000,
"sha1": "8c077d87fa0ce7514f71f912b95474c4975f5869",
"oa_license": "CCBYNC",
"oa_url": "http://www.scielo.cl/pdf/revbiolmar/v35n2/art06.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1dc6ccda9aed553b49cc874c9fa84ded9dbf67a4",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Geography"
]
}
|
231749634
|
pes2o/s2orc
|
v3-fos-license
|
A uniqueness result for the Sine-Gordon breather
In this note we prove that the sine-Gordon breather is the only quasimonochromatic breather in the context of nonlinear wave equations in $\mathbb{R}^N$.
Introduction
Breathers are time-periodic and spatially localized patterns that describe the propagation of waves. The most impressive solution of this kind is the so-called sine-Gordon breather for the 1D sine-Gordon equation $\partial_{tt} u - \partial_{xx} u = -\sin(u)$ in $\mathbb{R}\times\mathbb{R}$. It is given by the explicit formula
(1) $u^*(x,t) = 4\arctan\Big(\frac{m}{\omega}\,\frac{\sin(\omega t)}{\cosh(m x)}\Big)$,
where the parameters $m, \omega > 0$ satisfy $m^2 + \omega^2 = 1$. It is natural to ask if other real-valued breather solutions exist. We shall address this question in the broader context of more general nonlinear wave equations of the form
(2) $\partial_{tt} u - \Delta u = g(u)$ in $\mathbb{R}^N \times \mathbb{R}$,
where the space dimension $N \in \mathbb{N}$ and the nonlinearity $g : \mathbb{R} \to \mathbb{R}$ are arbitrary.
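As a quick numerical sanity check of the explicit formula (1) (this check is ours and not part of the original argument), one can verify by finite differences that $u^*$ solves the sine-Gordon equation; the frequency $\omega = 0.6$ below is an arbitrary admissible choice.

import numpy as np

omega = 0.6
m = np.sqrt(1.0 - omega**2)           # so that m**2 + omega**2 = 1

def u(x, t):                          # the sine-Gordon breather u*
    return 4.0 * np.arctan(m / omega * np.sin(omega * t) / np.cosh(m * x))

h = 1e-3                              # step for central second differences
worst = 0.0
for x0 in np.linspace(-5.0, 5.0, 21):
    for t0 in np.linspace(0.0, 2.0 * np.pi / omega, 21):
        u_tt = (u(x0, t0 + h) - 2.0 * u(x0, t0) + u(x0, t0 - h)) / h**2
        u_xx = (u(x0 + h, t0) - 2.0 * u(x0, t0) + u(x0 - h, t0)) / h**2
        worst = max(worst, abs(u_tt - u_xx + np.sin(u(x0, t0))))
print(worst)   # of order 1e-6, i.e. the PDE residual vanishes up to discretization error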
The existence of radially symmetric breather solutions for the cubic Klein-Gordon equation, g(z) = -m²z + z³ with m > 0, in three spatial dimensions was established in [13]. These real-valued solutions are only weakly localized in the sense that they satisfy u(·, t) ∈ L^q(R^N) for some q ∈ (2, ∞) but u(·, t) ∉ L²(R^N). In [10] infinitely many weakly localized breathers were found for nonlinearities Q(x)|u|^{p-2}u where Q lies in a suitable Lebesgue space and p > 2 is chosen suitably depending on Q as well as the space dimension N ≥ 2. Up to now, nothing is known about the existence of strongly localized breathers of (2) satisfying u(·, t) ∈ L²(R^N) for almost all t ∈ R and N ≥ 2; see however [11] for an existence result for semilinear curl-curl equations for N = 3. In the case N = 1, strongly localized breather solutions different from the sine-Gordon breather have been found for nonlinear wave equations with discontinuous and periodic coefficient functions s, q, see [1]. The existence of strongly localized breather solutions of (2) different from the sine-Gordon breather is not known. Still for N = 1 there are nonexistence results by Denzler [4] and Kowalczyk, Martel, Muñoz [9] dealing with small perturbations of the sine-Gordon equation respectively small odd breathers (not covering the even sine-Gordon breather). We are not aware of any other mathematically rigorous existence or nonexistence results for (2).
One of the main obstructions to the construction of localized breathers is polychromaticity. Indeed, plugging in an ansatz of the form u(x, t) = Σ_{k∈Z} u_k(x) e^{ikt} with u_k = u_{-k}, one ends up with infinitely many equations of nonlinear Helmholtz type that typically do not possess strongly localized solutions, see for instance [8, Theorem 1a]. For this reason the solutions obtained in [10, 13] are only weakly localized. On the other hand, a purely monochromatic ansatz like u(x, t) = sin(ωt)p(x) cannot be successful either provided that g is not a linear function. In view of the formula (1) for the sine-Gordon breather we investigate whether quasimonochromatic breathers exist.
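To make this obstruction explicit (the following display is a routine formal computation of ours, not a quotation from the cited works): inserting the polychromatic ansatz into (2) and comparing Fourier coefficients in t yields the coupled system
$$ -\Delta u_k - k^2 u_k = g_k(x), \qquad g_k(x) := \frac{1}{2\pi}\int_0^{2\pi} g\Big(\sum_{j\in\mathbb{Z}} u_j(x)\,e^{ijt}\Big)\,e^{-ikt}\,dt, \qquad k \in \mathbb{Z}, $$
so every mode with k ≠ 0 solves a nonlinear Helmholtz-type equation, which is the source of the weak localization mentioned above.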
We show that in one spatial dimension the sine-Gordon breather is, up to translation and dilation, the only one for (2) and that no such breathers exist in higher dimensions as long as g does not act like a linear function. In fact, to rule out L^∞-small solutions of linear wave equations, we assume that g : R → R is not a linear function near zero, i.e., that there is a nontrivial interval I ⊂ R containing 0 with the property that there is no β ∈ R such that g(z) = βz for all z ∈ I. Theorem 1. Assume N ∈ N and that g : R → R is not a linear function near zero.
(i) In the case N ≥ 2 there is no quasimonochromatic breather solution of (2).
(ii) In the case N = 1 every quasimonochromatic breather solution of (2) is, up to translation and dilation, the sine-Gordon breather u* from (1).
We stress that our result holds regardless of any smoothness assumption on g and without any kind of growth condition at 0 or infinity. Moreover, our considerations are not limited to small perturbations of u* or to small breathers in any sense. Following the proof of Theorem 1 one also finds that quasimonochromatic breathers of wave equations on any open set Ω ⊊ R^N with homogeneous Dirichlet conditions and with profile functions p ∈ C²(Ω) do not exist either (even if N = 1), provided that g is not a linear function near zero. We will comment on this fact at the end of this paper. As a consequence, we find that Rabinowitz' C²([0, 1] × R)-solutions of the 1D wave equation from [12, Theorem 1.6] are not of quasimonochromatic type. This might be true as well for the solutions from [2, 3], but here our argument does not apply in a direct way since the solutions are not known to be twice continuously differentiable up to the boundary.
For completeness we briefly comment on the linear case g(z) = βz, β ∈ R. Then the profile function p of any given quasimonochromatic breather of (2)
Proof of Theorem 1
In the following let u(x, t) = F(sin(ωt)p(x)) be a quasimonochromatic breather solution of (2) with g as in the Theorem. Plugging in this ansatz we get, for all x ∈ R^N such that p(x) ≠ 0, an identity in which z = sin(ωt)p(x) ∈ [−‖p‖_∞, +‖p‖_∞]. This and (2) imply, for x ∈ R^N and z ∈ R with |z| ≤ ‖p‖_∞, the identity (4). If F was linear on [−‖p‖_∞, +‖p‖_∞], then g would have to be linear on the nontrivial interval I := {F(z) : |z| ≤ ‖p‖_∞} as well. Since the latter is not the case by assumption, we know that z ↦ z²F″(z) does not vanish identically on that interval. Multiplying (4) with p(x) and choosing z such that z²F″(z) ≠ 0, we find that p does not change sign. Indeed, if p(x*) = 0 and R > 0 is the smallest radius such that p has a fixed sign in the open ball B_R(x*), then Hopf's Lemma [6, Lemma 3.4] implies |∇p| > 0 on ∂B_R(x*). But then (4) implies that Δp is unbounded on ∂B_R(x*), which contradicts p ∈ C²(R^N). Hence, p does not change sign, and we will without loss of generality assume that p is positive. So (4) holds for all x ∈ R^N and all z ∈ [−‖p‖_∞, ‖p‖_∞], and standard elliptic regularity theory gives p ∈ C^∞(R^N).
Differentiating (4) with respect to x_i we obtain (5). Since p² is non-constant, we infer that F satisfies an ODE of the form (6). Here, µ₂ ≠ 0 is due to the fact that F is not a linear function. Each nontrivial solution of such an ODE satisfies F′(z) ≠ 0 for almost all z ∈ [−‖p‖_∞, ‖p‖_∞]. Combining (5) and (6) we thus infer that, since µ₂ ≠ 0, we can find λ₁, λ₂ ∈ R such that the elliptic equation (7) holds. We now use (7) and the positivity of p to show that p is radially symmetric about its maximum point x₀ ∈ R^N. We concentrate on the case N ≥ 2 since the claim for N = 1 follows from the fact that x → u(x₀ + x) and x → u(x₀ − x) solve the same initial value problem. Since p vanishes at infinity, we must have λ₁ ≥ 0 and, since p does not change sign, λ₂ ≥ 0, see [14, Theorem 1]. Moreover, p attains its maximum at some point x₀ ∈ R^N with p(x₀) > 0, |∇p(x₀)| = 0, Δp(x₀) ≤ 0. This and (7) imply λ₁, µ₁ > 0 as well as µ₂ ≥ 0. So we know that (7) holds with λ₁, µ₁ > 0 and λ₂, µ₂ ≥ 0. In the case λ₂ > 0, Theorem 2 from [5] implies the radial symmetry about x₀, so we are left with the case λ₂ = 0.
So we have, for some A > 0 and m ≠ 0, an explicit expression for p, and −Δp + λ₂p = µ₂p³ can only hold for N = 1. Plugging these values into (6) and solving the ODE, we obtain F. This implies that the breather solution is given, up to translation and dilation, by u* as in (1). So we have proved the nonexistence of such breathers for N ≥ 2, i.e., claim (i), and the uniqueness statement of claim (ii).
To see that this solution formula determines the nonlinearity g, we combine (6) and (7). Plugging in z = (Aω/m) tan(y/(4κ)) for |y| < 2π|κ| we get F(z) = y, and hence g is determined on the relevant interval.
Remark 1.
(i) We explain why nonlinear quasimonochromatic breathers of (3) with profile functions p ∈ C²(Ω) do not exist on open sets Ω ⊊ R^N. The arguments presented above reveal that any such breather is given by functions F, p as in Definition 1 such that for all x ∈ Ω with p(x) ≠ 0 and all |z| ≤ ‖p‖_∞ we have an identity as in (4). Now fix z ∈ (−‖p‖_∞, ‖p‖_∞) such that z²F″(z) ≠ 0 and choose x* ∈ Ω such that p(x*) ≠ 0. Let R > 0 be largest possible such that |p| is positive in the open ball B_R(x*) ⊂ Ω. By the homogeneous Dirichlet boundary condition, we know R ≤ dist(x*, ∂Ω) < ∞ and that p vanishes on ∂B_R(x*). So the same argument as in the above proof (Hopf's Lemma) shows that |Δp| is unbounded on B_R(x*), a contradiction. As a consequence, such a profile function cannot exist and we obtain the nonexistence of quasimonochromatic breathers for (3). (ii) In our proof we did not use the assumption p(x) → 0 as |x| → ∞ when we proved that |p| is positive. As a consequence, each profile function p of a solution u(x, t) = F(sin(ωt)p(x)) of (2) has a fixed sign regardless of its behaviour at infinity. Similarly, (7) holds without this hypothesis. So we conclude that any profile function p ∈ C²(R^N) of a quasimonochromatic breather is a positive solution of (7) provided that the nonlinearity g is not a linear function on the interval {F(z) : |z| ≤ ‖p‖_∞}.
Notice also that the assumption F(0) = 0 is not used either. (iii) Our notion of a quasimonochromatic breather does not allow for the solutions u(x, t) = u*(x₁, t) (x ∈ R^N), which are localized only with respect to one spatial direction. Accordingly, our nonexistence result for N ≥ 2 is false under the weaker requirement (8) sup_{x′∈R^{N−1}} |p(x₁, x′)| → 0 as x₁ → ∞.
One may conjecture that the solutions u(x, t) = u*(x · θ, t) for θ ∈ S^{N−1} ⊂ R^N are the only quasimonochromatic breathers that are localized in some spatial direction. This open problem bears some similarity to Gibbons' conjecture or the De Giorgi conjecture about the classification of monotone solutions of the Allen-Cahn equation Δu + u = u³ in R^N, which we recast in our setting below.
|
2021-02-03T02:16:02.969Z
|
2021-02-02T00:00:00.000
|
{
"year": 2021,
"sha1": "0ec7e4927de63e88b562a94e178026ba8a61869c",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s42985-021-00084-w.pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "0ec7e4927de63e88b562a94e178026ba8a61869c",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
}
|
266373271
|
pes2o/s2orc
|
v3-fos-license
|
Oral anticoagulant prescribing among patients with cancer and atrial fibrillation in England, 2009–2019
Anticoagulation of patients with atrial fibrillation (AF) and cancer is challenging because of their high risk for stroke and bleeding. Little is known about variation in oral anticoagulant (OAC) prescribing among patients with AF with and without cancer.
INTRODUCTION
AF is common in patients with cancer. [3][4][5] This is because of the shared risk factors between AF and cancer (e.g., age and age-related comorbidities), the use of some chemotherapy agents, chest radiotherapy, and cancer-induced inflammation.6 When considering anticoagulation therapy for AF, patients with cancer have distinctive features, such as their higher risk for thrombotic events than the general population (because of increased levels of procoagulants, inflammatory cytokines, chemotherapies, and patient-related factors), which may influence stroke risk for some cancer subtypes.7,8 [11] According to 2019 guidance from the International Society on Thrombosis and Haemostasis on anticoagulation of patients with cancer with nonvalvular AF receiving chemotherapy, "individualized anticoagulation regimens after shared decision making with patients, based wherever possible on risk of stroke, bleeding, and patient values," are recommended.12 If patients are already on an anticoagulant regimen before starting chemotherapy, it is recommended that they continue anticoagulation unless there are drug-drug interactions.
Alternatively, if a patient is unable to tolerate the oral route of administration (e.g., because of nausea and vomiting), then switching from an oral anticoagulant (OAC) to parenteral anticoagulation can be considered, with resumption of oral anticoagulation as soon as possible.12 Despite current guidance, management of oral anticoagulation in patients with cancer lacks evidence-based consensus within the oncology community, which leads to significant differences in the management of these patients.13 There is evidence that, in patients with AF without cancer, OACs are underprescribed according to data from UK primary care.14 Moreover, in previous work, we found that many clinical conditions were linked to the underprescribing of OACs in patients with AF, including cancer.15 However, very little is known regarding the variation of OAC prescribing in patients with AF with and without cancer, across population strata of interest, and by cancer type.
Study design and data source
We conducted a population-based retrospective cohort study. Data were obtained from the UK Clinical Practice Research Datalink (CPRD) GOLD and Aurum databases.16,17 CPRD databases include anonymized information on patients' demographics, diagnoses, consultations, referrals to specialists, prescribing records, and laboratory tests. Data on clinical conditions and diagnoses are recorded via Read codes in CPRD GOLD and Systematised Nomenclature of Medicine Clinical Terms, Read codes, and local EMIS web codes in CPRD Aurum.16,17 Prescription data are recorded via the Gemscript product code system in the GOLD database and a dictionary of medicine and device prescribing codes in the Aurum database.16,17 Within the AF cohort, patients with a history of cancer were defined as patients with a diagnostic code for cancer before the index date of AF diagnosis. We focused on the most common cancer types diagnosed in England18 - breast, prostate, colorectal, and lung cancer - which are reportedly associated with cardiovascular disease.19 We have also included patients with hematological malignancies because they are known to increase bleeding risk.20 The control population included patients with AF without a diagnosis of cancer before the index date. AF and cancer diagnoses were primarily identified from primary care records. Cancer diagnosis by type was supplemented with HES data via relevant International Classification of Diseases, Tenth Revision codes. Diagnostic codes were independently reviewed by a consultant cardiologist (M.A.M.), and medication lists were reviewed by the first author (A.M.A.).
Characteristics of the study population
The codes used to produce the data for this study can be found at https://github.com/ammajabnour/AF-project. Patients' follow-up started from the index date of AF diagnosis and continued until the earliest date of the following: patients transferred out of the practice, last collection date for the practice, end of the first treatment episode in the case of OACs or aspirin users, death, or end of the study observation period (December 31, 2019) (Figure 1).
Primary outcome
The primary outcome was patients' exposure to OACs. Therefore, patients prescribed OACs after a diagnostic record of AF were identified as OAC users. OACs available in the United Kingdom were identified from the British National Formulary. Exposure to OACs included either vitamin K antagonists (VKAs) (warfarin, phenindione, and acenocoumarol) or non-vitamin K antagonist oral anticoagulants (NOACs) such as dabigatran, rivaroxaban, apixaban, and edoxaban. To ascertain exposure to OACs, we focused on the first continuous treatment episode of OACs, defined as continuous prescriptions of the same drug within a grace period of 30 days after the expected end of the previous prescription. A ≤30-day gap between the last day of the initial prescription and the next was assumed to be a continuous treatment episode. In the case of NOACs, the quantity issued was estimated by dividing the number of tablets prescribed by the approved number of daily doses (twice daily for dabigatran and apixaban; once a day for rivaroxaban and edoxaban). However, if the quantity of NOACs was not recorded, then it was estimated from the mean number of NOAC tablets prescribed for the same drug for that patient, or the overall mean for that drug if patient-specific data were not available. Because precise dosages of VKAs were not available, as they vary according to international normalized ratio (INR) measurements and are not consistently recorded in general practice, the median time between all previous sequential prescriptions of VKAs for each patient was used to estimate days of supply. INR measurements, if reported, were treated as an indicator for VKA exposure and therefore treated in the same way as prescriptions. Additionally, if a patient was prescribed aspirin with no OAC, an exposure status of aspirin-only was assigned. Polypharmacy was also assessed at baseline with the common definition of the concomitant use of five or more medications over 1 year before AF diagnosis.25 A history of drug use was assessed at baseline within 90 days before AF diagnosis.
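For concreteness, the episode construction described above can be sketched as follows; this is an illustrative reimplementation rather than the authors' code, and the column names (start_date, days_supply) are assumed for the example rather than taken from the CPRD data dictionary.

```python
from datetime import timedelta
import pandas as pd

GRACE = timedelta(days=30)  # grace period after the expected end of a prescription

def first_treatment_episode(rx):
    """rx: one patient's prescriptions of a single OAC, with assumed columns
    'start_date' (datetime64) and 'days_supply' (int)."""
    rx = rx.sort_values("start_date")
    episode_start = episode_end = None
    for row in rx.itertuples(index=False):
        end = row.start_date + timedelta(days=int(row.days_supply))
        if episode_start is None:
            episode_start, episode_end = row.start_date, end
        elif row.start_date <= episode_end + GRACE:
            episode_end = max(episode_end, end)  # prescription continues the episode
        else:
            break  # gap longer than 30 days: the first episode has ended
    return episode_start, episode_end

# Toy usage with made-up dates: the third prescription starts after a long gap,
# so only the first two form the first continuous treatment episode.
rx = pd.DataFrame({
    "start_date": pd.to_datetime(["2015-01-01", "2015-02-25", "2015-07-01"]),
    "days_supply": [28, 28, 28],
})
print(first_treatment_episode(rx))
```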
Statistical analysis
Categorical variables were described as counts and percentages, and continuous variables were described as means and standard deviations (SDs) or medians and interquartile ranges (IQRs). We estimated the proportion of patients prescribed OACs (warfarin or NOACs), aspirin-only, or no treatment according to their stroke and bleeding risk. We used a competing-risk model to estimate the risk of OAC prescribing (warfarin or NOACs) over time (from AF diagnosis to OAC prescribing) while considering death as a competing risk. Risk of OAC prescribing was estimated with subhazard ratios (SHRs) and 95% CIs. In addition, we performed a sensitivity analysis similar to the main model but with cancer defined as active if diagnosed only within 2 years before AF. In these models, only high-risk patients who were recommended to take OACs according to their baseline stroke risk were included (CHA2DS2-VASc score ≥2 in males or ≥3 in females), and the treatment status was defined as the first continuous treatment episode after AF diagnosis. A random-intercepts model was used to account for clustering by general practice, adjusted for age, sex, chronic conditions, IMD, smoking, and alcohol consumption status. Missing baseline BMI was imputed by an interpolation algorithm that has been used in previous studies with the CPRD.26 Missing data for categorical variables (e.g., ethnicity, smoking, and alcohol consumption) were assigned to a separate "unknown" category. Additional models with interaction terms (e.g., cancer type × patient IMD) were included to estimate the cumulative incidence of OAC prescribing with the postestimation command cif, for example, if a patient had breast cancer and was living in the most deprived quintile (IMD 5). In all analyses, p < .05 was considered statistically significant. All statistical analyses were performed with Stata version 16.
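The competing-risk estimation described above was performed in Stata; as a rough illustration of the underlying quantity, the following Python sketch computes a nonparametric cumulative incidence function for "OAC prescribed" with death as a competing risk (an Aalen-Johansen-type estimator). The input arrays are placeholders, not study data, and the sketch does not reproduce the subhazard regression model.

```python
import numpy as np

def cumulative_incidence(time, event):
    """time: follow-up time per patient; event: 0 = censored,
    1 = OAC prescribed, 2 = died before any OAC prescription.
    Returns event times and the CIF for event type 1."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    order = np.argsort(time)
    time, event = time[order], event[order]
    at_risk = len(time)
    surv = 1.0      # overall (all-cause) Kaplan-Meier survival
    cif = 0.0
    times, cif_vals = [], []
    for t in np.unique(time):
        mask = time == t
        d_oac = int(np.sum(event[mask] == 1))
        d_death = int(np.sum(event[mask] == 2))
        cif += surv * d_oac / at_risk                 # Aalen-Johansen increment
        surv *= 1.0 - (d_oac + d_death) / at_risk     # update overall survival
        at_risk -= int(mask.sum())                    # drop events and censorings at t
        times.append(t)
        cif_vals.append(cif)
    return np.array(times), np.array(cif_vals)

# Placeholder data: 6 patients, times in years.
t, c = cumulative_incidence([0.5, 1.0, 1.2, 2.0, 2.5, 3.0], [1, 2, 1, 0, 1, 0])
print(np.round(c, 3))
```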
Baseline characteristics
During the observation period of 11 years, 177,065 patients satisfied the inclusion and exclusion criteria (Figure 2), of which 20,737 patients (11.7%) had a previous history of cancer. Mean age was 78.6 years (SD 8.9 years) for patients with cancer and 74.3 years (SD 12.7 years) for patients without cancer (Table 1). In the group with cancer, 54.9% were males, and the most prevalent cancer type was prostate cancer (29.1%), followed by breast cancer (25.8%), colorectal cancer (18.7%), hematological cancer (17.3%), and lung cancer (9%). Median (IQR) follow-up was 2.0 years (0.7-4.2 years) for patients with AF without cancer and 1.5 years (0.5-3.2 years) for patients with cancer.
Overall, a history of chronic diseases was more frequent in the group with cancer than in patients without cancer; the frequency of patients with a CCI of ≥3 was 66.3% in the group with cancer compared with 28.3% in patients without cancer. Similarly, 88.3% of patients with cancer were at a high risk for stroke (CHA2DS2-VASc score ≥2 in males or ≥3 in females) compared with 77.3% of patients without cancer; 57.5% of patients with cancer were at a high risk for bleeding (HAS-BLED ≥3) compared with 49.6% of patients without cancer. Considering high bleeding risk, over 60% of patients with a history of breast or prostate cancer also received OACs, at a similar frequency to those with no history of cancer (34.3% received warfarin; 32.7% received NOACs). However, approximately two thirds of patients with a history of lung cancer at a high bleeding risk were not prescribed OACs and were either prescribed aspirin-only or no treatment; the same was true for patients with hematological cancer or colorectal cancer, but to a lesser extent.
OAC prescribing by cancer type and risk stratification
We also explored the trend in OAC prescribing over the years in patients with AF with and without cancer. From 2009 to 2012, patients with and without cancer showed similar trends in OAC prescribing. Aspirin-only was prescribed more commonly than OACs, specifically VKAs, at that time period. From 2012, prescribing of NOACs started to escalate over the years until it reached its maximum percentage in 2019, compared to VKAs or aspirin-only, among both patients with cancer and patients without cancer (Figure 3).
Risk of excess suboptimal treatment by sex, age group, and deprivation quintile
After adjustment for sociodemographic and clinical factors, the SHR of OAC prescribing was overall lower among patients with cancer compared with those without cancer: prostate cancer (SHR, 0.95; 95% CI, 0.91-0.99), breast cancer (SHR, 0.93; 95% CI, 0.89-0.98), colorectal cancer (SHR, 0.93; 95% CI, 0.88-0.99), hematological cancer (SHR, 0.70; 95% CI, 0.65-0.75), and lung cancer (SHR, 0.44; 95% CI, 0.38-0.50) (Table 3). This was demonstrated in the cumulative incidence curve, which shows the CIF of OAC prescribing over time to be highest for patients with AF without cancer (Figure 4A), followed by patients with prostate cancer, breast cancer, colorectal cancer, and then hematological cancer, followed by lung cancer with the lowest CIF. In the sensitivity analysis (cancer was defined as active if diagnosed 2 years before AF), the SHRs were overall similar to the main model in terms of patients with cancer having a lower risk of OAC prescribing compared to patients without cancer. However, the SHRs were generally lower across all cancer types (Table 1) compared to the main model. This was also demonstrated in the cumulative incidence curve (Figure 4B).
The first interaction model (Table S1) compared the cumulative incidence of OAC prescribing in patients without cancer to those with cancer when stratified by sex. The CIF curve shows a similar pattern, with the cumulative incidence of OAC prescribing being highest across males and females without cancer and lowest among males and females with lung cancer (Figure S1A,B). Among males, patients with prostate cancer had the highest cumulative incidence of OAC prescribing after patients without cancer, and among females, those with breast cancer had the highest cumulative incidence of OAC prescribing after patients without cancer. The second interaction model (Table S2) compared the cumulative incidence of OAC prescribing in patients without cancer to those with cancer when stratified by patient IMD. Across all IMD groups, patients without cancer and those with breast, prostate, and colorectal cancer had higher cumulative incidences of OAC prescribing, and hematological and lung cancer were at the lower side of the CIF curves (Figure S2A-E). Also, the difference between the incidence of OAC prescribing in patients with and without cancer was smaller among patients living in the most deprived quintile.
DISCUSSION
In this population-based cohort study of patients with cancer and AF, we found an independent association between the history of certain cancer types and underprescribing of OACs in patients with AF.
Among patients at a high risk for stroke, approximately two thirds of patients who had breast cancer, prostate cancer, or colorectal cancer were prescribed OACs at similar levels to what we observed in those without a history of cancer.However, compared with patients without cancer, underprescribing of OACs was more obvious in patients with a history of lung cancer and hematological malignancies.
Furthermore, the difference in OAC prescribing between patients without cancer and patients with cancer is lower among patients living in the most deprived quintile and among patients aged between 55 and 64 years, with the lowest prescribing rates found in elderly patients aged ≥85 years.
In the setting of cancer, there is a growing interest in the management of arterial thromboembolism and stroke prevention.7 The balance between thromboembolic and bleeding risk in AF is particularly challenging in patients with cancer. Although cancer may cause a prothrombotic state, it may also increase bleeding risk. Furthermore, the CHA2DS2-VASc and HAS-BLED risk scores have not been validated in patients with cancer. Thus, the decision to prescribe OACs for stroke prevention may be quite challenging and should not be based exclusively on the risk assessment tools used for the general population.
The first finding of the study was that patients with either breast cancer, prostate cancer, or colorectal cancer were prescribed OACs at similar levels to those with no cancer history. This may suggest that patients with these types of cancer were considered able to tolerate the risks associated with anticoagulation and to be clinically stable, similar to those without cancer. In addition, OAC prescribing seems to correlate with stroke risk in patients with cancer, because patients with breast, prostate, or colorectal cancer had the highest proportions of patients with a high risk for stroke. On the other hand, patients with lung cancer or hematological malignancies had lower proportions of high-risk patients and were the least likely to receive anticoagulation therapy. These findings are in line with the results of a single-center study that found that anticoagulant use was associated with a higher CHA2DS2-VASc score (≥2).27 Another possible explanation for the underprescribing of OACs in certain cancer types is the risk of developing bleeding events with lung cancer or hematological malignancies,5,28 which we have observed in a previous analysis.29 One of the common complications of lung cancer is bleeding in the airway causing hemoptysis; also, the risk of bleeding is increased after lung cancer resection because of the invasiveness of the procedure and the hematological changes induced by chemotherapy.28 The high bleeding risk observed with hematological malignancies can be explained by altered platelet function and numbers, deficiencies in clotting factors, circulating anticoagulants, and defects in vascular integrity, which collectively increase bleeding risk.20 Our findings are consistent with previous studies on AF in cancer in that we found that, among all patients with cancer, NOACs were prescribed at higher frequencies than warfarin. This is because NOACs might be a preferred option over warfarin in patients with cancer, as they do not require frequent INR monitoring and have fewer drug-drug or food-drug interactions.30 Despite the lack of dedicated trials, there are data on NOACs derived from post hoc analysis of randomized trials and observational studies showing that NOACs are the drugs of choice for long-term anticoagulation in cancer, with low-molecular-weight heparin preferred in the active phase of cancer (cancer diagnosed within the previous 6 months).31,32 Previous reports suggest that OAC prescribing patterns may differentially vary by age, for example, in certain age groups (55-64, 65-74, and 75-84 years). These findings are consistent with previous data from the United States that looked at OAC prescribing in patients with cancer and AF.35 These results also reflect what we observed earlier in our previous analysis, namely that socioeconomic inequalities in the prescribing of OACs in England exist, with low socioeconomic status associated with the prescription of aspirin only or no treatment compared with patients with higher socioeconomic status.15 In our previous work, we also captured ethnic inequalities in the care of patients with AF15; however, in this study, we could not exclude the possibility of ethnic inequalities in OAC prescribing between patients with and without cancer because of the small number of ethnic groups per cancer type.
This study has several limitations. First, it was based on data from primary care records supplemented with secondary care data without linkage to cancer databases, which means that the accuracy of data specific to cancer status (if the patient is cancer free or in remission) or data on cancer management may not be optimal (e.g., chemotherapy/radiotherapy data). Second, the CPRD provides prescription data but no information on dispensing or adherence. There is also the possibility of OAC prescribing in secondary care, which our analysis did not account for, especially because HES records do not provide prescription data. Therefore, it was not feasible to assess whether the low rate of OAC prescribing in patients with cancer is related to poor integrated care between primary and secondary care.
Third, our findings are dependent on accurate recording from health professionals. Lack of event recording would result in a false-negative classification of a certain event and therefore could potentially bias our findings, but the quality of CPRD data for cerebrovascular disease has been acknowledged previously. Fourth, our study was retrospective, which makes residual confounding inevitable because of a lack of data on certain factors that could be associated with underprescribing of OACs. Fifth, there is the possibility that bleeding risk was underestimated because we adopted a modified HAS-BLED score that did not include labile INR as a risk factor, because of the inconsistency of INR recording within the CPRD.
The study also has certain strengths. This is the first study to report the pattern of anticoagulation use in patients with breast cancer, prostate cancer, colorectal cancer, lung cancer, and hematological cancer with subsequent AF in a nationwide real-life cohort in England. We obtained access to linked hospitalization records via Hospital Episode Statistics (HES) Admitted Patient Care, and area location deprivation was defined via patient-level index of multiple deprivation (IMD) 2015 quintiles in England. The data were requested via application to the CPRD and approved by the Independent Scientific Advisory Committee for Medicines and Healthcare Products Regulatory Agency Database Research (protocol number 20_198R). The raw data underlying the results presented in the study are available and subject to CPRD's Research Data Governance process (https://CPRD.com; contact enquiries@cprd.com).
We conducted a nationwide, population-based, retrospective cohort study in patients with newly diagnosed AF in England. Inclusion criteria were (1) adults aged ≥18 years; (2) first-ever records of AF between January 1, 2009, and December 31, 2019; and (3) registration in a general practice in England for at least 1 year before AF diagnosis. Patients with heart valve problems before AF diagnosis were excluded. Additional exclusion criteria were applied within a lookback period of 12 months before AF diagnosis: records of irregular heartbeats or cardioversion, records of atrial flutter alone with no mention of AF, and previous use of quinidine, sotalol, amiodarone, flecainide, or propafenone. Additionally, patients were excluded if they had received oral or parenteral anticoagulants >14 days before AF diagnosis.
Extracted baseline patient characteristics included age, sex, body mass index (BMI), smoking, alcohol consumption status, ethnicity, IMD quintiles, and information relevant to the CHA2DS2-VASc21 and HAS-BLED22 scores. The CHA2DS2-VASc score consists of eight categories, with points given for each of the following: congestive heart failure, hypertension, age ≥75 years (2 points), diabetes mellitus, prior stroke or transient ischemic attack (TIA) or thromboembolism (2 points), vascular disease, age 65 to 74 years, and sex category. Patients were classified according to their stroke risk as "low risk" if the CHA2DS2-VASc score equaled 0 in males or 1 in females; "intermediate risk" if the CHA2DS2-VASc score equaled 1 in males or 2 in females; and "high risk" if the CHA2DS2-VASc score was ≥2 in males or ≥3 in females. According to European Society of Cardiology guidelines,23 OACs should be considered in intermediate-risk patients and are recommended in high-risk patients. The HAS-BLED score consists of 9 points, one for each of the following: hypertension, abnormal kidney or liver function (1 point each), stroke, history of bleeding or predisposition, labile INR, elderly (>65 years), and drugs/alcohol concomitantly (1 point each). Because INR measurements are not consistently reported in the CPRD, a modified HAS-BLED score was used, which does not include the INR element. The overall comorbidity burden of a patient was defined via the Charlson Comorbidity Index (CCI).24
Figure 1: Visualization of study design and patient follow-up. **Patients transferred out of the practice, last collection date for the practice, end of the first treatment episode in the case of OAC or aspirin users, death, or end of the study observation period (December 31, 2019). AF indicates atrial fibrillation; BMI, body mass index; CPRD, Clinical Practice Research Datalink; GP, general practitioner; OAC, oral anticoagulant.
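As a worked illustration of the scoring rules summarized above, the sketch below computes a CHA2DS2-VASc score and the modified HAS-BLED score (without the labile INR element) for a single hypothetical patient; the field names are invented for the example and do not correspond to CPRD code lists.

```python
def cha2ds2_vasc(p):
    # CHF, hypertension, diabetes, vascular disease: 1 point each; prior
    # stroke/TIA/thromboembolism: 2 points; female sex: 1 point;
    # age >= 75: 2 points, age 65-74: 1 point.
    score = (p["chf"] + p["hypertension"] + p["diabetes"] + p["vascular_disease"]
             + 2 * p["stroke_tia_te"] + int(p["sex"] == "F"))
    score += 2 if p["age"] >= 75 else (1 if p["age"] >= 65 else 0)
    return score

def modified_has_bled(p):
    # Labile INR is omitted, as in the modified score used in the study.
    return (p["hypertension"] + p["abnormal_renal"] + p["abnormal_liver"]
            + p["stroke"] + p["bleeding_history"] + int(p["age"] > 65)
            + p["antiplatelet_or_nsaid"] + p["alcohol_excess"])

patient = {"age": 78, "sex": "F", "chf": 0, "hypertension": 1, "diabetes": 1,
           "vascular_disease": 0, "stroke_tia_te": 0, "stroke": 0,
           "abnormal_renal": 0, "abnormal_liver": 0, "bleeding_history": 0,
           "antiplatelet_or_nsaid": 1, "alcohol_excess": 0}
print(cha2ds2_vasc(patient), modified_has_bled(patient))  # 5 (high stroke risk), 3 (high bleeding risk)
```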
Figure 2: Cohort selection and number of included and excluded patients. AF indicates atrial fibrillation; CPRD, Clinical Practice Research Datalink.
Table 1: Patient characteristics according to cancer status.
Table 1 abbreviations: AF, atrial fibrillation; BMI, body mass index; IMD, index of multiple deprivation; IQR, interquartile range; NA, not applicable; NDP CCB, nondihydropyridine calcium channel blocker; NSAID, nonsteroidal anti-inflammatory drug; PPI, proton pump inhibitor; SD, standard deviation; SSRI/SNRI, selective serotonin reuptake inhibitor/selective norepinephrine reuptake inhibitor; TIA, transient ischemic attack.
Table: Proportion of patients prescribed anticoagulants, aspirin only, or no treatment among patients with AF (with or without cancer), stratified by stroke and bleeding risk. Treatment status was determined according to the first continuous treatment episode after AF diagnosis.
Figure 3: Trends of prescribing over the years 2009-2019 in patients with AF with and without cancer, restricted to patients with a recommendation for OACs (patients with a high risk for stroke: CHA2DS2-VASc score ≥2 in males or ≥3 in females).
Table 3: Competing-risk analysis with subhazard ratios and 95% CIs of the risk of OAC prescribing, taking into account death as a competing risk. Abbreviations: AF, atrial fibrillation; BMI, body mass index; IMD, index of multiple deprivation; OAC, oral anticoagulant; Ref, reference; SHR, subhazard ratio.
Figure 4: Cumulative incidence curves showing the CIF of OAC prescribing over time for each cancer type. (A) CIF curve for the main competing-risk model. (B) CIF curve for the sensitivity analysis where cancer was defined as active if diagnosed within 2 years before AF. *Breast cancer in Panel A overlaps with colorectal cancer because of similar CIF values. AF indicates atrial fibrillation; CIF, cumulative incidence function; OAC, oral anticoagulant.
|
2023-12-21T06:17:42.587Z
|
2023-12-20T00:00:00.000
|
{
"year": 2023,
"sha1": "8c8fe0d3b2ab2698c24f1423068764b9dab82649",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/cncr.35152",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "ffb810976b66b146a2e089a613952a6053bce1e6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
23086911
|
pes2o/s2orc
|
v3-fos-license
|
Optimal Traffic Splitting Policy in LTE-based Heterogeneous Network
Dual Connectivity (DC) is a technique proposed to address the problem of increased handovers in heterogeneous networks. In DC, a foreground User Equipment (UE) with multiple transceivers can connect to a Macro eNodeB (MeNB) and a Small cell eNodeB (SeNB) simultaneously. In the downlink split bearer architecture of DC, a data radio bearer at the MeNB is divided into two parts; one part is forwarded to the SeNB through a non-ideal backhaul link and then to the UE, and the other part is transmitted directly by the MeNB. This may increase the total delay at the UE, since different packets of a single transmission may incur varying amounts of delay along the two paths. Since the resources in the MeNB are shared by background legacy users and foreground users, DC may increase the blocking probability of background users. Moreover, single connectivity to the small cell may increase the blocking probability of foreground users. Therefore, we aim to minimize the average delay of the system subject to a constraint on the blocking probability of background and foreground users. The optimal policy is computed and observed to contain a threshold structure. The variation of average system delay is studied for changes in different system parameters.
I. INTRODUCTION
With an upsurge in the use of smartphones and tablet devices, the mobile data traffic is proliferating. According to [1], the monthly global mobile data traffic is predicted to reach 30.6 exabytes by 2020. The deployment of Heterogeneous Networks (HetNet) comprising of small cells overlaid with ubiquitous macro cells is one of the significant approaches to meet this ever-increasing demand for mobile data traffic. Although the introduction of HetNets is beneficial in many aspects, it leads to an increase in the number of UE handovers and signaling overhead, due to the difference in the coverage areas of small and macro cells. To combat this, 3rd Generation Partnership Project (3GPP) has proposed Control-plane/User-plane split [2], [3] and the Dual Connectivity (DC) technique as a part of Long Term Evolution (LTE) Release 12 [4]. In Control-plane/User-plane split, macro cells manage the Control-plane whereas the small cells handle the User-plane. DC allows a User Equipment (UE) with multiple transceivers to simultaneously receive data from both a Macro eNodeB (known as Master eNodeB) and a Small cell eNodeB (known as Secondary eNodeB). In this paper, we study the optimal splitting policy for DC UEs.
We consider the split bearer architecture [4] of DC, which has a user plane protocol stack as depicted in Figure 1. In this architecture, only the Macro eNodeB (MeNB) has a connection with the Core Network. The MeNB manages the Control-plane of UE, whereas its User-plane can be split between the MeNB and the Small cell eNB (SeNB). The MeNB and SeNB are connected via the Xn interface, which is a non-ideal backhaul link. The data of a radio bearer for a UE arrives from the higher layers at the Packet Data Convergence Protocol (PDCP) layer of MeNB; MeNB then splits it into two parts, as shown by Radio Bearer 2 in the figure. One part is forwarded to the SeNB via the backhaul link, which then transmits to the UE and the other part is transmitted by the MeNB. The aggregation of the split bearer then takes place at the PDCP layer of the UE.
A DC-capable UE can use DC to significantly increase its throughput and improve its mobility performance [5]. However, there may be considerable delays in the reception of DC traffic at the UE because the first and the last packet corresponding to a single transmission may arrive via two different paths with widely varying delays. The legacy UEs (background UEs) can connect to MeNB only. For the UEs which are capable of DC (foreground UEs), data traffic can be received via MeNB or SeNB or both. Since the resources in the MeNB are shared by background and foreground UEs, DC may increase the blocking probability of background UEs. Single connectivity of foreground UEs with SeNB may bring down the blocking probability of background UEs, by saving the MeNB resources for background UEs. However, it again increases the blocking probability of foreground UEs, since the MeNB resources are not utilized for foreground UEs. Hence, we introduce a constraint on the weighted sum of the blocking probabilities of background and foreground UEs. Our objective is to minimize the average delay of the system subject to a constraint on the blocking probability of background and foreground UEs.
In [6], the authors propose a flow control algorithm in which SeNB periodically sends data requests to the MeNB, depending on the buffer status at SeNB. In [7], the authors propose a downlink traffic scheduling scheme for maximization of the network throughput. [8], [9] deal with maximizing the data rate of DC users in LTE and multiple-Radio Access Technology (RAT) scenario, respectively. The works [7], [9], [10] consider throughput as the system metric of interest. However, none of them consider the delay in the system, which requires attention considering the varying network conditions in the two different paths.
The authors in [10] propose a split bearer algorithm for video traffic to improve the data rate. In [11], the optimal splitting ratio for minimizing the queuing delay in the system is calculated for a single UE. The authors in [12] obtain the optimal traffic splitting over multiple Radio Access Technologies (RATs) such that maximum average delay across different RATs is minimized. They, however, do not consider user arrival and departure. Also, in [12], the authors consider the maximization of expected delays in different RATs as the optimization parameter. However, in our work, we deal with expected maximum delay as the system metric which captures the real life scenario better than that by [12].
Our contribution is twofold. First, we obtain the optimal splitting policy to minimize the average delay in the system subject to a constraint on the blocking probability. Second, we demonstrate the variation of average delay in the system as a function of load in the system and backhaul delay. To the best of our knowledge, this work is the first attempt to present an optimal splitting policy for minimizing the average delay in the system using DC enhancement.
The paper has the following organization. In Section II, we outline the system model. The problem formulation and solution methods are explained in Section III. The structure of the optimal policy along with some numerical results are described in Section IV followed by conclusion in Section V.
II. SYSTEM MODEL
We consider a macro cell with a wide coverage area and a small cell situated inside the macro cell as presented in Figure 2. Let d be the one-way latency of the backhaul link connecting the SeNB with the MeNB. SeNB uses this backhaul link to share its status information with the MeNB, and MeNB uses it to share control/data information with the SeNB. We assume MeNB and SeNB operate at different carrier frequencies. As data traffic over the Internet is bursty in nature, we consider batch arrivals with a random number of packets in a batch. The batch size G follows a discrete probability distribution α_i = P(G = i), i = 1, 2, ..., with mean batch size Ḡ. The flow controller is situated at the MeNB. It routes the incoming traffic to MeNB or SeNB appropriately.
We segregate the UEs into two categories. The legacy UEs which are present in the coverage area of the macro cell and can connect to MeNB only (e.g., u 2 in Figure 2) are categorized as background UEs. The UEs which are present in the coverage area of the small cell and capable of dual connectivity to the MeNB and SeNB (e.g.,u 1 in Figure 2) are categorized as foreground UEs. The data traffic streams for these two sets of UEs are each assumed to constitute two Poisson arrival streams with rates λ 1 and λ 2 , respectively. The service times of a packet in MeNB and SeNB are exponentially distributed with mean 1/µ m and 1/µ s , respectively. All UEs are assumed to be stationary. In LTE, each eNB is assigned a certain number of resources. We assume each fixed size packet of a batch requires one server, i.e., one resource of an eNB to get served. After all the resources are exhausted, the packets are placed in a queue at the eNB. We assume the queue size is large but finite (say, N ) for both the systems. After the packets join any of the two systems, the scheduling of packets in both the systems takes place independently of each other. Thus, MeNB and SeNB are modeled as M/M/n queuing systems with First-Come-First-Serve queuing discipline.
The background traffic can join the MeNB or get rejected. For the foreground traffic, the flow controller at the PDCP layer of MeNB needs to take an appropriate decision regarding admission and splitting of traffic between the two systems. We assume that both types of UEs are assigned equal priority while allocating resources. Henceforth, we denote the MeNB system as System M and the SeNB system as System S.
A. States
We model the system as a continuous-time stochastic process {X(t)}_{t≥0} defined on state space S. A state s ∈ S is represented as a 3-tuple (s_1, s_2, k), where s_1, s_2 represent the number of packets in the queue plus the number of packets currently in service in Systems M and S, respectively. k takes different values based on the arrival of a batch or departure of a packet. In the case of departure of a packet, k = 0. If there is a foreground batch arrival of size G = 1, 2, ..., n, then k takes values 1, 2, ..., n, respectively. If there is a background batch arrival of size G = 1, 2, ..., n, then k takes values n + 1, n + 2, ..., 2n, respectively. Since the state of the system changes only at the arrival or departure instants, there is no need to consider the state of the system at other points in time. We explain the state space with an example. For maximum batch size n = 2, k = 1, 2 represent a foreground traffic arrival of batch size 1 and 2, respectively. k = 3, 4 represent a background traffic arrival of batch size 1 and 2, respectively. Let n_1 and n_2 represent the number of resources at MeNB and SeNB, respectively. For instance, consider n_1 = 5, n_2 = 5 and queue size N = 10. Then, s_1 ≤ n_1 + N, s_2 ≤ n_2 + N. Thus, state s = (3, 6, 2) indicates that there are 3 packets in the MeNB system and 5 packets in service (n_2 = 5) plus 1 packet in the queue of the SeNB system, when a foreground traffic with batch size 2 (k = 2) has arrived.
B. Decision epochs and Actions
The decision epochs are the time instants at which the controller needs to take a decision, based on the current system state. The decision epochs are the arrival and departure instances. We denote the actions as a ∈ A, where A is the action space. At arrival epochs of the background traffic, the action is to either reject or accept the traffic in the MeNB system. At arrival epochs of foreground traffic, the controller's job is to either reject or decide the appropriate fraction of traffic to route through System M, based on the current state of the system s. The action space grows as the size of the batch increases. For instance, in case of maximum batch size n = 2, the action space is as follows:
C. Transition probabilities
At each decision epoch, the controller takes an action a ∈ A depending on the state of the system s. Depending on the state and action taken, the system moves to another state with a finite probability. Let T_{ss′}(a) denote the transition probability from state s to state s′ under the action a. Denote by ν(s_1, s_2) the sum of arrival and departure rates when the current state is s = (s_1, s_2, k):

ν(s_1, s_2) = λ_1 + λ_2 + min(s_1, n_1) µ_m + min(s_2, n_2) µ_s.   (1)

Note that ν(s_1, s_2) is independent of k.
The transition probabilities from state s = (s 1 , s 2 , k) to state s ′ under action a are given by: where ν(s 1 , s 2 ) is given by (1). Given the current state s = (s 1 , s 2 , k) and action a, the next state s ′ = (s ′ 1 , s ′ 2 , k) takes values as tabulated in Table I.
D. Cost function
Let c(s, a) denote the cost incurred when the system is in state s = (s_1, s_2, k) and action a ∈ A is taken. We define this cost as the expected delay encountered by the arriving batch of packets. Since a batch consists of many packets, the delay of a batch is the response time of the last packet of the batch. Thus, the cost function c(s, a) is the expected response time of the last packet of the arriving batch, c(s, a) = E{max{R_m(s_1), R_s(s_2)}}, where R_m(s_1) and R_s(s_2) denote the response times of the arriving batch in System M and System S, respectively. The response time of a packet is the summation of queuing delay and service time. For instance, if s = (2, 2, 2) and action a = 2 is chosen, then c(s, 2) = E{max{R_m(2), R_s(2)}}. Suppose the number of resources is n_1 = 5, n_2 = 2 and queue size N = 5. Then R_m(2) ∼ exp(1/µ_m). This is because the action a = 2 will add 1 packet in System M, which has 2 resources occupied out of 5. So this packet will be served in a time which is exponentially distributed with parameter µ_m. Also, R_s(2) = X_1 + X_2 + d, where X_1 ∼ exp(1/(2µ_s)), X_2 ∼ exp(1/µ_s), and d is the latency of the backhaul link. The action a = 2 adds 1 packet in System S with 2 packets already in service in the system. The current packet has to wait for X_1 time since all resources are occupied (n_2 = 2 and s_2 = 2); then it gets serviced in exp(1/µ_s) time.
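The expectation in the example above can be checked numerically. The following sketch estimates E{max{R_m(2), R_s(2)}} by Monte Carlo for the same state and action; the parameter values are illustrative and are not those of Table II.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_m, mu_s, d = 1.0, 2.0, 0.5      # illustrative service rates and backhaul delay
n = 1_000_000

R_m = rng.exponential(1.0 / mu_m, n)                    # served immediately in System M
R_s = (rng.exponential(1.0 / (2 * mu_s), n)             # wait for one of 2 busy servers in System S
       + rng.exponential(1.0 / mu_s, n)                 # own service time
       + d)                                             # backhaul latency
print(round(float(np.mean(np.maximum(R_m, R_s))), 3))   # estimate of E[max(R_m, R_s)]
```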
Minimization of the average delay of the system may, however, lead to blocking of both background and foreground traffic. The function b_b(s, a) (respectively b_f(s, a)) is defined as a binary indicator that is set to 1 if the background (foreground) arrival is blocked and to 0 otherwise. The parameter δ, 0 ≤ δ ≤ 1, which decides how much weight is to be assigned to the blocking probability of each traffic type, depends on the choice of the service provider. We define the blocking cost as the weighted sum of background and foreground traffic blocking, b(s, a) = δ b_b(s, a) + (1 − δ) b_f(s, a).
III. PROBLEM FORMULATION
We aim to split the foreground traffic optimally among the two available paths to minimize the average delay in the system. However, the foreground dual connectivity traffic may use up resources of both the systems M and S, and sufficient resources may not be available for background traffic, which can connect to the System M only. Hence, a constraint on the blocking probability of background single connectivity traffic may be required. However, due to sharing of resources in the MeNB between the two types of UEs, foreground UEs may be forced to move to System S or even blocked. This again increases the blocking probability of foreground traffic. Therefore, we introduce a constraint on the weighted sum of blocking probabilities of background and foreground traffic.
Thus, our objective is to minimize the average delay in the system subject to a constraint on the total blocking probability. Since the times between the decision epochs are random, this leads to the formulation of a constrained Semi-Markov Decision Problem (SMDP).
A. Formulation as Constrained Markov Decision Process (CMDP)
The average cost criterion is considered as the performance criterion in this work. Let Π be the set of stationary policies. We assume that the Markov chains associated with these policies have no two disjoint closed sets, i.e., the Markov chains are unichain. Let C(t) and B(t) be the total delay and blocking incurred up to time t (t ≥ 0), respectively. The time-averaged delay and blocking can be expressed as

C̄ = lim_{t→∞} E_π[C(t)]/t   (5)   and   B̄ = lim_{t→∞} E_π[B(t)]/t,   (6)

respectively, where E_π is the expectation operator under policy π ∈ Π. Note that the limits in (5) and (6) exist since we are considering stationary policies. Our objective is to obtain a policy that minimizes C̄ subject to a constraint (say, B_max) on B̄:

Minimize C̄ subject to B̄ ≤ B_max.   (7)
It is a constrained MDP problem with average cost and finite state and action spaces. It is widely known that a stationary randomized optimal policy [13] exists.
B. Uniformization
The SMDP problem is converted into a discrete-time MDP problem using the uniformization method [14]. We denote the expected time until the next decision epoch, if action a is chosen in state s = (s_1, s_2, k), as τ(s, a). First, choose a number τ such that 0 < τ < min_{s,a} τ(s, a); the transition probabilities and costs are then rescaled accordingly to obtain the equivalent discrete-time problem.
C. Lagrangian Approach
The constrained problem (7) can be converted into an unconstrained problem by using the Lagrangian approach [13]. Consider a Lagrange multiplier β ≥ 0 and define ĥ(s, a; β) = ĉ(s, a) + β b(s, a). The associated dynamic programming equation yields the optimal policy of the relaxed problem, which can be solved using the Value Iteration Algorithm (VIA) [15] for a fixed value of β. At a particular value β = β*, the minimum cost is obtained for the constrained problem. This value β* can be determined by using the gradient descent algorithm following [16], which updates β at the n-th iteration based on B̄_n, the blocking probability obtained using the policy π_{β_n} at iteration n. For this value of β*, the optimal policy is a mixture of two stationary policies, which can be determined by a small deviation ε of β* in both directions. This results in two policies π_{β*−ε} and π_{β*+ε} with associated average blocking probabilities B̄_{β*−ε} and B̄_{β*+ε}, respectively. Define a parameter q such that q B̄_{β*−ε} + (1 − q) B̄_{β*+ε} = B_max. The optimal policy π* of the CMDP is a randomized mixture of the two stationary policies (π_{β*−ε} and π_{β*+ε}) such that, at each decision epoch, the first policy is chosen with probability q and the second policy is chosen with probability (1 − q). Thus, the optimal policy is given by π* = q π_{β*−ε} + (1 − q) π_{β*+ε}.
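To illustrate the overall solution procedure (Lagrangian relaxation plus value iteration, with a search over β), the following sketch solves a generic finite constrained MDP given a transition tensor P, a delay cost c, and a blocking indicator b. It is a minimal sketch rather than the implementation used in the paper: bisection on β stands in for the gradient-based update, and the final randomized mixing of the two boundary policies is omitted.

```python
import numpy as np

def relative_value_iteration(P, cost, n_iter=5000, tol=1e-9):
    """Average-cost (relative) value iteration for a minimisation MDP; P has shape (S, A, S)."""
    S, A, _ = P.shape
    h = np.zeros(S)
    policy = np.zeros(S, dtype=int)
    for _ in range(n_iter):
        Q = cost + np.einsum("sap,p->sa", P, h)   # one-step lookahead values
        policy = Q.argmin(axis=1)
        h_new = Q.min(axis=1)
        h_new = h_new - h_new[0]                  # pin a reference state (relative values)
        if np.max(np.abs(h_new - h)) < tol:
            break
        h = h_new
    return policy

def average_blocking(P, policy, b):
    """Long-run average blocking under a stationary deterministic policy
    (power iteration; assumes the induced chain is ergodic)."""
    S = P.shape[0]
    P_pi = P[np.arange(S), policy, :]
    mu = np.full(S, 1.0 / S)
    for _ in range(20000):
        mu_new = mu @ P_pi
        if np.max(np.abs(mu_new - mu)) < 1e-13:
            mu = mu_new
            break
        mu = mu_new
    return float(mu @ b[np.arange(S), policy])

def solve_cmdp(P, c, b, B_max, beta_hi=100.0, iters=40):
    """Bisection on the multiplier beta: a larger beta penalizes blocking more."""
    beta_lo = 0.0
    for _ in range(iters):
        beta = 0.5 * (beta_lo + beta_hi)
        policy = relative_value_iteration(P, c + beta * b)
        if average_blocking(P, policy, b) > B_max:
            beta_lo = beta          # constraint violated: penalize blocking harder
        else:
            beta_hi = beta
    return beta, policy

# Tiny synthetic instance (2 states, 2 actions) just to show the call pattern.
P = np.array([[[0.9, 0.1], [0.5, 0.5]], [[0.2, 0.8], [0.6, 0.4]]])
c = np.array([[1.0, 0.2], [2.0, 0.5]])     # "delay" cost per state/action
b = np.array([[0.0, 1.0], [0.0, 1.0]])     # action 1 blocks the arrival
print(solve_cmdp(P, c, b, B_max=0.3))
```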
IV. NUMERICAL RESULTS AND ANALYSIS
In this section, we analyze the optimal policy obtained by solving (7). The parameters used for the computation of the optimal policy are as presented in Table II. We assume µ_s ≥ µ_m because the achievable rate in the coverage area of a small cell is typically higher than that in a macro cell [17]. Although for the computation purpose we assume a maximum batch size of two, the analysis presented in Section III holds for any general batch size. The structure of the policy and the variation of the average delay under different parameters are described in this section.
A. Optimal policy structure
In this section, the optimal policy obtained by solving the CMDP is outlined. The optimal policy for foreground traffic arrival is illustrated in Figures 3a and 3b. We observe that the optimal policy for foreground traffic arrival with batch size 1 (k = 1) has a threshold structure. When there are free servers available in System M, the packets are routed to System M. This is because routing to System S incurs an extra backhaul delay, which overrides the benefits achieved from the higher service rate of System S. When the load on System M increases, the queue in System M builds up and the packet at the end of the queue experiences a longer delay in System M. Hence, after a certain point (s 1 = 4, s 2 = 0), the controller decides to route the arrivals to System S. This is also because the resources of System M need to be reserved for the background traffic, else the blocking probability of the system will increase. If we increase the service rate of System S (µ s ), we observe that the value of this threshold decreases and more traffic is routed through System S. The extra backhaul delay is compensated by the higher service rate of System S. As the number of packets in System S (s 2 ) grows, the choice of system is switched from M to S beyond a threshold. The value of this threshold increases as s 2 increases. Again, the optimal action changes to blocking above a certain threshold. Therefore, the policy for foreground traffic arrival with batch size 1 (k = 1) follows a threshold structure that depends on the number of packets in both the systems and can be expressed as, where γ 1 (s 1 , s 2 ) and γ 2 (s 1 , s 2 ) are thresholds which depend on s 1 and s 2 .
For foreground traffic arrival with batch size 2 (k = 2) (Figure 3b), following a similar argument as in the case of k = 1, if System M has servers available (s_1 < 5, s_2 > 5), then both packets are routed to System M, as shown by circles in the figure. Then, as the load on System M increases, the queuing delay increases. Hence, after a threshold on s_1, the arrivals are routed to System S to save the resources of System M for the background traffic. When s_1 < 5, s_2 < 5, all the traffic is routed to System S because the backhaul delay is constant, irrespective of the batch size. Hence, the total delay per packet, which consists of backhaul delay (d) plus response time of the packet, decreases. Therefore, System S is preferred. When there is only one free server in System S, and there are free servers available in System M (s_2 = 5, s_1 < 6), then one packet is routed to System S, and the other packet is routed to System M, as shown by triangles in Figure 3b. The batch of two packets is split among the two systems to reduce the overall delay in the system; otherwise, the system would suffer an additional queuing delay of 1 packet in System S. Thereafter, the batch is split whenever the delay in System M is nearly the same as that in System S, as shown by the near diagonal structure of triangles in the policy. It is evident from the policy that there exists a threshold, beyond which the batch gets routed to System S. The squares in the figure show that after a threshold on s_1, the arrivals are blocked to save resources for background traffic. Thus, the optimal policy for k = 2 follows a threshold structure and can be expressed as

π*(s_1, s_2, 2) =
  1, if s_1 ≤ γ_3(s_1, s_2),
  2, if γ_3(s_1, s_2) < s_1 ≤ γ_4(s_1, s_2),
  3, if γ_4(s_1, s_2) < s_1 ≤ γ_5(s_1, s_2),
  0, if s_1 > γ_5(s_1, s_2),

where γ_3(s_1, s_2), γ_4(s_1, s_2), and γ_5(s_1, s_2) are thresholds which depend on s_1 and s_2.
The optimal policy for background traffic is illustrated in Figures 3c and 3d. The optimal policy for background traffic arrivals with batch size 1 (k = 3) is to accept the arrivals in System M and reject beyond a threshold on s_1. The optimal policy for k = 3 can be expressed as

π*(s_1, s_2, 3) =
  2, if s_1 ≤ γ_6(s_1, s_2),
  0, if s_1 > γ_6(s_1, s_2),

where γ_6(s_1, s_2) is a threshold which depends on s_1 and s_2. Similarly, the optimal policy for background traffic arrival of batch size 2 (k = 4) is to accept the arrivals in System M and reject them after a threshold on s_1. Thus, the optimal policy for k = 4 follows a threshold structure similar to (11), where after the threshold the optimum action changes from a = 3 to a = 0.
B. Parameter variation
In this section, we describe the variation of expected delay in the system with the variation of different parameters, following the optimal policy. Figures 4a, 4b and 4c illustrate the expected delay in the system for different values of λ_1, λ_2 and d, respectively. In Figure 4a, we vary the foreground arrival rate λ_1 from 0.67 to 6.67 batches/s with other parameters fixed at λ_2 = 1 batches/s and backhaul latency d = 0.5s. We observe that the expected delay increases steadily with λ_1. For low values of λ_1, the Lagrange multiplier (β*) for which the optimal policy is obtained is small. Hence, the difference between the expected delay for the CMDP problem and the corresponding unconstrained problem is small. However, as λ_1 increases, β* becomes larger and hence the rate of increase of the expected delay increases.
In Figure 4b, we keep λ 1 = 1 batches/s, d = 0.5s and vary background arrival rate λ 2 . As λ 2 increases, the expected delay in the system rises steadily. For low values of λ 2 , the optimal policy is to route to System M initially and then to System S as explained in Section IV-A. For a higher value of λ 2 , the optimal policy structure remains the same, however, the threshold on s 1 changes to a lower value. As background traffic increases, the resources of System M are saved for background traffic and more foreground traffic is routed through System S.
In Figure 4c, we keep λ 1 , λ 2 = 6.67, 1 batches/s and vary backhaul delay d. As d increases, the expected delay in the system rises. For low values of d, more foreground traffic is routed to System S reserving the System M for the background traffic. The higher service rate of System S subdues the effect of backhaul delay, and overall delay of the system is low. For high values of d, the optimal policy is similar to the policy explained in Section IV-A except for the case k = 2. For foreground traffic arrival with batch size 2 (k = 2), the region where the arriving batch is routed to both the systems is increased due to comparable delays in the two systems. The higher value of d is compensated by the higher service rate of System S. The blocking probability is constant at B max = 0.02 with variation in the parameters λ 1 , λ 2 and d.
In Figure 4d, we keep λ 1 , λ 2 = 6.67, 1 batches/s, d = 0.5s and vary the blocking probability constraint B max . As B max increases, blocking of incoming traffic is allowed more and more which leads to a drop in the delay of the system. We are unable to report all the results due to space constraints.
V. CONCLUSION
In this work, we focus on the problem of varying delays in a split bearer dual connectivity scenario. This is the first work to present an optimal splitting policy using DC enhancement for minimizing the average delay in an LTE-based HetNet subject to a constraint on the blocking probability. The problem is formulated as a constrained SMDP problem, and the optimal policy is observed to contain a threshold structure. We present numerical results which depict the variation of the system delay under different parameter variations.
|
2017-10-23T16:00:53.000Z
|
2017-10-23T00:00:00.000
|
{
"year": 2017,
"sha1": "66a03368402236f2c4f300e455cc314428bf904b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1710.11453",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a4ab8df39aa136f892507d1895fa779c875ee925",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
15771727
|
pes2o/s2orc
|
v3-fos-license
|
Gene loss, protein sequence divergence, gene dispensability, expression level, and interactivity are correlated in eukaryotic evolution.
Lineage-specific gene loss, to a large extent, accounts for the differences in gene repertoires between genomes, particularly among eukaryotes. We derived a parsimonious scenario of gene losses for eukaryotic orthologous groups (KOGs) from seven complete eukaryotic genomes. The scenario involves substantial gene loss in fungi, nematodes, and insects. Based on this evolutionary scenario and estimates of the divergence times between major eukaryotic phyla, we introduce a numerical measure, the propensity for gene loss (PGL). We explore the connection among the propensity of a gene to be lost in evolution (PGL value), protein sequence divergence, the effect of gene knockout on fitness, the number of protein-protein interactions, and expression level for the genes in KOGs. Significant correlations between PGL and each of these variables were detected. Genes that have a lower propensity to be lost in eukaryotic evolution accumulate fewer substitutions in their protein sequences and tend to be essential for the organism viability, tend to be highly expressed, and have many interaction partners. The dependence between PGL and gene dispensability and interactivity is much stronger than that for sequence evolution rate. Thus, propensity of a gene to be lost during evolution seems to be a direct reflection of its biological importance.
Lineage-specific gene loss is one of the major evolutionary processes that have been brought to light by comparative analyses of gene sets from completely sequenced genomes (Aravind et al. 2000; Moran 2002). The extent of gene loss can be dramatic, and it can occur relatively rapidly under a strong selective pressure. For example, the endosymbiotic bacterium Buchnera aphidicola has 580 genes compared with the ∼4300 genes in the genome of the closely related γ-proteobacterium Escherichia coli. Apparently, Buchnera has lost ∼86% of the genes during its adaptation to the endosymbiotic life style, to which this bacterium converted 200 to 250 million years ago (Baumann et al. 1995). Similarly, the genome of a eukaryotic intracellular parasite, the microsporidian Encephalitozoon cuniculi, contains ∼2000 genes, compared with 5500 to 6000 genes in the genomes of yeasts, which themselves probably have undergone considerable gene loss (Katinka et al. 2001). Although genomes of parasites expose the most striking cases of massive gene loss, recent reconstructions of parsimonious scenarios of evolution for prokaryotes indicated that substantial gene loss has occurred in all phylogenetic lineages (Snel et al. 2002; Mirkin et al. 2003). In prokaryotes, gene loss is one of the two major evolutionary processes, along with horizontal gene transfer (HGT), that contribute to the intensive "gene flux" that seems to have shaped the genomes of these organisms. In eukaryotes, particularly in complex multicellular organisms, the evolutionary significance of lineage-specific gene loss might be even greater because HGT between these organisms does not appear to be widespread. The likelihood that a gene is lost during evolution, which is reflected in the pattern of presence-absence of the gene in the analyzed genomes (hereinafter, phyletic pattern), appears to be an important measure of evolutionary conservation.
Sequence divergence is a measure of the evolutionary conservation of a gene that is fundamentally different from gene loss propensity. Although gene loss is the result of a complete deletion or ablation of a gene, sequence divergence occurs through point mutations, as well as small deletions and insertions, and generally does not lead to elimination of the gene. Hence, these two variables, gene loss propensity and sequence divergence (or its correlate, the evolutionary rate), seem to be complementary measures of the conservation of a gene during evolution. Sets of orthologous proteins show a broad distribution of evolutionary rates (Grishin et al. 2000; Bromham and Penny 2003; Hedges and Kumar 2003). For example, protein sequences of ubiquitins or histones in eukaryotes typically are 90%-98% identical, whereas dihydroorotases (essential enzymes of pyrimidine metabolism) are only 20% to 30% identical.
The evolutionary rate of a gene, that is, the estimated number of substitutions per position between orthologous sequences, has long been assumed to depend on the importance of the gene in question for the fitness of the organism. The "knockout rate" hypothesis predicts that the greater the effect of a gene knockout on fitness, the slower the evolutionary rate. In particular, essential genes (those for which knockout is lethal) are expected to evolve significantly slower than are nonessential ones (Wilson et al. 1977). The availability of multiple genome sequences and genome-wide data on the phenotypes of gene knockouts for model organisms, such as the yeast Saccharomyces cerevisiae (Giaever et al. 2002) and the nematode Caenorhabditis elegans (Kamath et al. 2003), enabled direct testing of these predictions. More generally, comparative analyses aimed at the identification of characteristics of genes that determine or at least strongly correlate with the evolutionary rate have become feasible. The results of the tests of the knockout rate hypothesis have been somewhat contradictory, but the studies with larger samples of genes indeed revealed a positive correlation between the evolutionary rate and the effect of a gene knockout on the fitness of the organism (Hirsh and Fraser 2001). However, it appeared rather unexpectedly that the effect was relatively minor, although statistically significant thanks to the large amounts of data analyzed; that is, only a small part of the variability of the evolutionary rate could be explained by differences in gene dispensability.
We sought to investigate the connections between (1) the two distinct measures of the evolutionary conservation of a gene, the newly introduced propensity for gene loss (PGL) and the rate of sequence evolution, and (2) the major variables that determine the functional importance of a gene, namely, the effect of gene knockout on fitness, interactivity, and expression level. For this analysis, we used the recently developed collection of clusters of eukaryotic orthologous groups (KOGs) of proteins from seven (nearly) completely sequenced eukaryotic genomes (Tatusov et al. 2003), which allowed us to construct a parsimonious scenario of gene losses along the branches of the eukaryotic phylogenetic tree. We introduce here a numerical measure for gene loss, PGL, and show a statistically significant positive correlation between PGL and the evolutionary rate of a KOG. Furthermore, both PGL and sequence divergence strongly and negatively correlate with the fitness effect of knockout, interactivity, and expression level of the respective gene. The protein sequences of genes that are rarely lost during evolution change relatively slowly; these genes tend to be essential for the survival of an organism and are highly expressed.
RESULTS
The Data Set of Conserved KOGs and Distribution of Gene Losses Over the Eukaryotic Phylogenetic Tree

The KOG database contains 5873 KOGs represented in two to seven eukaryotic genomes: the plant Arabidopsis thaliana; the animals C. elegans, Drosophila melanogaster, and Homo sapiens; the fungi S. cerevisiae and Schizosaccharomyces pombe; and the microsporidian E. cuniculi (Tatusov et al. 2003; http://www.ncbi.nlm.nih.gov/COG/new/shokog.cgi). According to the phylogeny of the eukaryotic crown group that is currently considered most likely (Hedges 2002), plants branched off first, followed by the divergence of the fungi-microsporidian and metazoan (animal) clades (Fig. 1). For the purposes of the present analysis, we chose a subset of KOGs that are represented in at least three species and could be traced back to the last common ancestor of plants, animals, and fungi. If the amount of HGT between complex eukaryotes is considered to be negligible, reconstruction of the ancestral gene set becomes straightforward: all 3140 KOGs shared by Arabidopsis and any two of the other species should be considered ancestral (KOGs consisting of only two species were not analyzed).
Given a tree topology, the most parsimonious evolutionary scenario resulting in the observed distribution of the phyletic patterns of KOGs can be reconstructed by using the evolutionary parsimony principle. For the purpose of this reconstruction, the phyletic pattern of each KOG was treated as a string of binary characters (one, the presence of the given species; zero, its absence in the given KOG). Given the implausibility of HGT between eukaryotes, the Dollo parsimony principle, under which gene loss is treated as irreversible (a gene can be lost independently in several evolutionary lineages but cannot be regained), was adopted (Farris 1977).
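The reconstruction step can be illustrated with a small sketch. The code below is not the pipeline used in this study; it is a minimal Python illustration, under the assumption of Dollo parsimony on a fixed rooted tree with the gene present at the root, of how losses can be assigned to branches from a phyletic pattern. The tree structure and species labels are simplified stand-ins.

```python
# Minimal sketch (not the authors' pipeline): map gene losses onto a fixed
# rooted species tree under Dollo parsimony. The gene is assumed present in
# the last common ancestor; a loss is assigned to every branch whose subtree
# contains no species with the gene, provided the parent lineage retained it.

# Hypothetical tree for illustration: nested (name, children) tuples roughly
# following the topology described above (plants as outgroup).
TREE = ("root", [
    ("A.thaliana", []),
    ("fungi-animals", [
        ("fungi-microsporidia", [("S.cerevisiae", []), ("S.pombe", []), ("E.cuniculi", [])]),
        ("animals", [("H.sapiens", []), ("D.melanogaster", []), ("C.elegans", [])]),
    ]),
])

def leaves(node):
    """Return the set of leaf names under a (name, children) node."""
    name, children = node
    if not children:
        return {name}
    return set().union(*(leaves(c) for c in children))

def dollo_losses(node, present):
    """Return names of branches (child nodes) to which a single loss is assigned."""
    losses = []
    for child in node[1]:
        if leaves(child) & present:
            # Gene retained somewhere below: recurse to find deeper losses.
            losses.extend(dollo_losses(child, present))
        else:
            # Whole subtree lacks the gene: one loss on this branch suffices.
            losses.append(child[0])
    return losses

# Example phyletic pattern: a KOG present in Arabidopsis, human, and C. elegans.
pattern = {"A.thaliana", "H.sapiens", "C.elegans"}
print(dollo_losses(TREE, pattern))
# -> losses on the fungi-microsporidia branch and the D. melanogaster branch
```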
In the resulting parsimonious scenario, each branch was associated with the number of gene losses such that the sum total of losses was minimal, with the exception of the plant branch and the branch leading to the common ancestor of fungi and animals: gene losses could not be assigned to these branches with the current set of genomes (Fig. 1). The evolutionary scenario includes a massive gene loss in the fungal clade, with additional loss in the microsporidian, and subsequent substantial gene loss in each of the animal lineages, particularly in the nematodes and arthropods (Fig. 1).
Propensity for Gene Loss
The simplest numerical measure for gene loss in a group of orthologs is the fraction of lineages in which a given gene has been lost. However, the one/zero scoring scheme for gene loss and preservation in different lineages does not reflect the time during which a particular gene was lost or preserved. This time can be different for different lineages, which renders the binary measure inaccurate. In our reconstruction of the parsimonious evolutionary scenario, we mapped gene losses onto the widely accepted phylogenetic tree for the analyzed lineages. The PGL for each gene (KOG) was then calculated by taking into account the tree topology and the available time estimates for each divergence point (Hedges et al. 2001; Hedges 2002; Hedges and Kumar 2003). The logic behind this calculation was as follows. Each branch of the phylogenetic tree was treated as an independent trial during which the given gene was either preserved or lost. The longer the time during which a gene could have been lost, but was not, compared with the total time available, the lower the propensity of this gene to be lost (Fig. 1; for details, see Methods).
A PGL value of zero corresponds to KOGs that are represented in all seven species. A PGL value of one, in theory, would be assigned to a gene present in the last common ancestor of the analyzed species but lost in all lineages. Such genes, for obvious reasons, cannot be detected, and in practice, PGL values can range from zero to some maximum value less than one. In the data set analyzed here, the PGL values varied from zero to 0.49, the upper limit of PGL being a function of the number of lineages included and the times since their divergence. Genes with a PGL value that was estimated as zero using the current data set of seven species (i.e., that were not lost in any of these seven species) might, in reality, have some propensity to be lost in other species. Nevertheless, the PGL values remain meaningful and internally consistent for this data set inasmuch as they are used to estimate the relative propensity for gene loss among all analyzed genes over the time elapsed since the last common ancestor of the compared species. The highest PGL value obtained here, 0.49, is the maximum only for the genes and species considered in this analysis; as additional genomes are included, greater PGL values will result.
The Dependence Between Gene Loss and Sequence Evolution Rate
The tendency of a gene to be lost and the sequence evolution rate are two variables that characterize the evolutionary conservation of the gene. A priori, these variables could be considered independent. For example, a protein potentially could evolve relatively fast due to relaxed functional constraints but have a low propensity for loss linked to an essential function. For the purposes of the present analysis, we used the mean evolutionary distance between the KOG member from Arabidopsis (the outgroup with respect to the other analyzed species; Fig. 1) and the rest of the KOG members as the measure of the sequence evolution rate characteristic of the KOG (gene) as a whole. When the PGL values for the analyzed sample of 3140 KOGs were plotted against the evolutionary rates (determined with several methods, see Methods), a clear positive correlation was observed (Table 1). The correlation coefficient (R) ranged from 0.3 to 0.4, depending on the distance measure used, whereas all correlations were statistically highly significant (P << 10⁻⁶). Thus, the assumption of independence of the two variables could be rejected with a high degree of confidence. There is a definite connection between the two facets of evolutionary conservation: the more often a gene is lost, the more substitutions it typically accumulates. However, it is equally notable that the interdependence of the two values is not overwhelmingly strong, as only 10%-15% of the variation in the sequence evolution rate can be explained by variation in PGL (and vice versa).
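As a rough illustration of this kind of comparison, the following sketch computes the correlation coefficient between PGL and an evolutionary distance and the corresponding fraction of variance explained. The values are randomly generated placeholders, not the actual KOG data.

```python
# Minimal sketch (hypothetical data): correlate PGL with a per-KOG evolutionary
# distance and report how much variance one explains in the other (R^2).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pgl = rng.uniform(0.0, 0.5, size=3140)            # placeholder PGL values
rate = 0.8 * pgl + rng.normal(0.5, 0.2, 3140)     # placeholder distances

r, p = stats.pearsonr(pgl, rate)
rho, p_rho = stats.spearmanr(pgl, rate)
print(f"Pearson R = {r:.2f} (p = {p:.1e}), Spearman rho = {rho:.2f}")
print(f"Variance explained R^2 = {r**2:.2f}")     # ~0.10-0.15 in this study
```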
Viability of Knockouts of Yeast Genes With Different Propensities for Loss
Intuitively, it appears that the propensity of a gene to be lost should strongly correlate with the effect of gene knockouts on the viability of the organism. Indeed, one would surmise that if a gene is never lost during a long span of evolution, this is because its function is essential for survival. The PGL values for those KOGs that are represented in S. cerevisiae were superimposed over the available data on the effect of gene knockout on yeast viability (Giaever et al. 2002). More than half of the genes with PGL equal to zero, that is, those that have not been lost in any of the seven lineages considered here, are essential; that is, the respective knockouts are lethal (Fig. 2). The fraction of essential genes was dramatically lower in all other PGL classes (P << 10⁻⁶ by the χ² criterion). Thus, genes with the lowest propensity for loss during evolution seem to be involved in indispensable functions to a much greater extent than are those genes that have been lost in some lineages. Although one might expect that the fraction of essential genes among those with PGL = 0 could be somewhat lower in more complex organisms due to functional redundancy among paralogs, the conservation pattern of a gene expressed numerically through PGL still could be a reasonable predictor of essential gene functions.
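A minimal sketch of the underlying test is shown below; the counts in the contingency table are hypothetical and only illustrate how the fraction of essential genes across PGL bins can be compared with a χ² test.

```python
# Minimal sketch (hypothetical counts): test whether the fraction of essential
# yeast genes differs among PGL bins with a chi-square contingency test.
from scipy.stats import chi2_contingency

# Rows: essential / nonessential knockouts; columns: four PGL bins (made-up counts).
table = [
    [520, 110,  60,  30],   # essential (knockout is lethal)
    [480, 640, 540, 420],   # nonessential (knockout is viable)
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
```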
In contrast to the strong connection between PGL and the (in)dispensability of a gene, and in agreement with the previous report (Hirsh and Fraser 2001), we found no appreciable correlation between the sequence evolution rate and dispensability. Among the genes with PGL = 0, the sequence evolution rate was slightly lower for essential genes, but the difference in rates between essential and nonessential genes was statistically significant (p < 0.05) for only one method of evolutionary rate calculation, the PAM distances (Table 2). Thus, although PGL positively, and strongly, correlates with both sequence evolution rate and dispensability, the latter two variables are not significantly correlated; that is, they appear to be (nearly) independently linked to PGL.
Propensity for Gene Loss, Substitution Rates, and Expression Levels
A highly significant negative correlation between the evolutionary rate of yeast genes and their expression level has been reported: highly expressed genes appear to evolve slowly (Pal et al. 2001). We examined the correlation between the gene expression levels in various organisms, PGL, and the sequence evolution rate. A significant negative correlation was detected between the expression level and both measures of evolutionary conservation; that is, highly expressed genes tend to evolve more slowly and to be less prone to loss in various lineages than are genes expressed at lower levels. Although the correlation coefficient varied for different measures of evolutionary distance, it was consistently greater for sequence evolution rate than for PGL (Table 1).
Number of Protein-Protein Interactions, PGL, and Substitution Rates
Genes with products that are involved in numerous protein-protein interactions tend to evolve more slowly than do those that have few interaction partners, although the magnitude of the difference varied in different studies and was not dramatic in any of them (Fraser et al. 2002; Jordan et al. 2003). We examined the correlation between PGL and sequence evolution rate, on the one hand, and the number of protein-protein interactions for the KOG members from yeast on the other hand. To this end, the data set collected in the General Repository for Interaction Datasets (GRID) database (Breitkreutz et al. 2003) was used as the source of protein-protein interaction data. We found a strong negative correlation between the number of protein-protein interactions per protein and PGL, and a weaker correlation with various measures of sequence evolution rate (Table 1). Both correlations were highly statistically significant (P < 10⁻⁶). Furthermore, when the KOGs were binned according to their PGL values, the difference in the mean number of interactions of yeast proteins between the bins appeared dramatic (Fig. 3). Thus, proteins that have many interaction partners seem to be substantially less prone to loss during evolution than are those with fewer partners, and this connection is much stronger than that between interactivity and sequence evolution rate. This is compatible with the observation that highly connected proteins in the yeast interaction network include a higher proportion of essential gene products than do proteins with fewer interactions (Jeong et al. 2001).
DISCUSSION
Sequence evolution rate is a traditional measure of the conservation during evolution of a gene. Early molecular evolutionary studies have unequivocally shown that different genes evolve at substantially different rates (Kimura 1983). However, only with the advent of genomics and other kinds of "omics", such as genome-wide analysis of gene expression and protein-protein interactions, has the opportunity presented itself to systematically explore the connections between the evolution rate and various other characteristics of genes (Wolfe and Li 2003). The results of these studies so far have been somewhat disappointing, in that a truly strong correlate of the evolution rate has not been identified. It has been shown that slow-evolving genes tend to be highly expressed (Pal et al. 2001) and encode longer proteins (Lipman et al. 2002) that tend to be involved in a somewhat greater number of protein-protein interactions than are fast-evolving gene products (Fraser et al. 2002; Jordan et al. 2003). However, establishing the significance of each of these correlations required careful examination of statistical evidence. In other words, none of these correlations is particularly strong, and none can explain much of the variation in evolution rate, although they are statistically significant thanks to the massive amounts of genomics data. Notably, the results of direct tests of Wilson's knockout rate hypothesis are in the same category: knockout of slow-evolving genes tends to have a greater effect on fitness than does knockout of fast-evolving genes, but the connection is relatively weak, to the point that some studies have failed to support its significance (Hurst and Smith 1999; Hirsh and Fraser 2001; Jordan et al. 2002; Pal et al. 2003).
These observations incite the iconoclastic idea that sequence evolution rate might not be the most biologically relevant measure of the evolutionary conservation of a gene. Here we explored an alternative, the propensity of a gene to be lost during evolution, a characteristic that obviously can be measured only through comparison of multiple complete gene sets. PGL is a much more intuitive correlate of the dispensability of a gene than is sequence evolution rate; indeed, if a gene is never lost during evolution, that is probably because it is essential for viability. However, the connection is not as trivial as it seems to be at first glance because it is based on a strong assumption, namely, the transfer of the information on the essentiality of a gene in one organism (e.g., yeast) to its ortholog in another, vastly different organism (e.g., worm). Actually, the conservation of essentiality is not guaranteed because a gene might be rendered nonessential by the evolution of redundancy, in the form of paralogs or unrelated but functionally analogous genes. This might be followed by the loss of a formerly essential gene, resulting in nonorthologous gene displacement (Koonin and Mushegian 1996).
Empirically, we observed a strong connection, but definitely not a one-to-one correspondence, between PGL and knockout viability, and a highly significant positive correlation between PGL and sequence evolution rate. In contrast, sequence evolution rate and viability are linked weakly at best. This suggests that PGL carries with it a strong biological signal, which is directly linked to the dispensability of a gene and less directly, even if indisputably, to the sequence evolution rate. By transitivity, it should be expected that the latter two variables are also correlated, but that connection is nearly lost in the statistical noise. Thus, a gene shown to be essential in a particular organism has a strong tendency to be retained and, by implication, to be essential even in phylogenetically remote lineages; the protein sequences encoded by such genes also might tend to evolve slightly slower than do those of nonessential genes.
These conclusions are supported by the detected strong correlation between PGL and the interactivity of a protein: hubs of the protein interaction network are lost during evolution much less readily than are proteins with few interaction partners, and this connection is much stronger than that between interactivity and sequence evolution rate. This is compatible with the previous reports on the connection between interactivity and dispensability (Jeong et al. 2001) and with the general notion that scale-free networks, such as the network of protein-protein interactions, are tolerant to error (random elimination of weakly connected nodes) but are highly vulnerable to attack (directed elimination of the hub; Albert et al. 2000; Barabasi 2002). Because protein-protein interaction domains generally show limited sequence conservation (whereas structure conservation is crucial), it is perhaps not unexpected that the connection between interactivity and sequence evolution rate could be detected (at best) only as a relatively weak statistical trend. Surprisingly, however, the observations reported here indicate that gene expression level more strongly correlated with sequence evolution rate than with PGL. Generally, one would expect the same trends to be seen with dispensability, interactivity, and expression level. If validated by further analysis of more robust and extensive expression data, this inversion could suggest a nontrivial connection between expression level and sequence conservation, the nature of which remains to be explored.

Figure 3. PGL and number of protein-protein interactions for yeast proteins. Yeast proteins were binned into four classes according to the PGL values for the corresponding KOGs. The average number of interactions was calculated for each class. For KOGs with multiple yeast paralogs, the sum of interactions for all paralogs was used, with the rationale that this is the natural integral measure of the interactivity of the proteins in the given KOG, under the assumption that all paralogs in a KOG have evolved via relatively recent, lineage-specific duplications.
PGL and sequence evolution rate are measures of evolutionary conservation that seem to capture substantially different aspects of evolution. PGL is a much more direct reflection of the biological dispensability of a gene, whereas sequence evolution rate depends largely on the selective constraints on protein structure and sequence; the extent of these constraints depends on the nature of the protein function. We showed here that PGL and sequence evolution rate are moderately dependent; that is, highly constrained proteins are lost during evolution significantly less often than are weakly constrained ones. With the small set of seven eukaryotic genomes analyzed here, PGL is a coarse measure, and a much more refined analysis will become feasible as the collection of sequenced eukaryotic genomes grows (with prokaryotic genomes, of which a large database is already available, this type of analysis is hampered by widespread HGT, which could be hard to distinguish from gene loss; Snel et al. 2002; Kunin and Ouzounis 2003; Mirkin et al. 2003). Combined with improved data on gene dispensability, expression, and protein interactivity, such studies should take us closer to an understanding of the prevailing trends in genome evolution.
METHODS

The KOG Data
The KOGs were constructed largely as described previously (Tatusov et al. 1997, 2001), with minor modifications (Tatusov et al. 2003), and are available at http://www.ncbi.nlm.nih.gov/COG/new/shokog.cgi and via ftp at ftp://ftp.ncbi.nih.gov/pub/COG/KOG/. If a KOG included more than one protein from one or more species (paralogs), the most conserved ortholog from the respective species was chosen. The sequences from the respective KOG were compared to each other by using the BLASTP program (Altschul et al. 1997), and for each species, the sequence that had the best cumulative score with the sequences from the other species was selected.
Other Data
The data on gene knockout effects in yeast were primarily from Giaever et al. (2002). The SGD database (http://genome-www.stanford.edu/Saccharomyces/) was used to collect the knockout viability data for each individual gene. For KOGs with multiple yeast paralogs, only the most conserved paralog, identified as described above, was considered.
The GRID database was used as the source of data on protein-protein interactions (Breitkreutz et al. 2003). All duplicate interactions were collapsed into one entry. Absence of interactions in GRID for a given gene was interpreted as zero interactions.
Average expression levels of C. elegans genes were from Hill et al. (2000). Expression levels of human genes were estimated in the following fashion: the human CDSs were used as queries in a BLASTN search against the dbEST database. Hits with >98% identity for an alignment length >400 nt, or with >95% identity for an alignment length between 100 and 400 nt, were tallied, and the number of ESTs was taken as the expression level for the respective gene. Expression levels of yeast genes were obtained from the published microarray analysis by averaging the control (no diauxic shift) data (DeRisi et al. 1997). For all three organisms, gene expression data were mapped to KOGs, and if more than one paralog was present in a KOG, the maximum expression level for the given organism was assigned to the KOG.
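A simple sketch of the tallying step is given below. It assumes BLASTN output in the standard tabular format (query, subject, percent identity, and alignment length in the first four columns); the file name is hypothetical, and the thresholds are those stated above.

```python
# Minimal sketch (hypothetical input file): tally EST hits per query CDS from
# tabular BLASTN output (outfmt 6: qseqid sseqid pident length ...) using the
# identity/length thresholds described above; the hit count serves as a proxy
# for the expression level of the gene.
from collections import Counter

def tally_est_hits(blast_tab_path):
    counts = Counter()
    with open(blast_tab_path) as fh:
        for line in fh:
            fields = line.rstrip("\n").split("\t")
            query, pident, length = fields[0], float(fields[2]), int(fields[3])
            if (pident > 98 and length > 400) or (pident > 95 and 100 <= length <= 400):
                counts[query] += 1
    return counts

# expression = tally_est_hits("human_cds_vs_dbEST.outfmt6")  # hypothetical path
```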
Divergence Times of E. cuniculi, S. cerevisiae, and S. pombe
Phylogenetic trees for the CDC28 kinase, glyceraldehyde-3-phosphate dehydrogenase (GPDH), small chain of ribonucleoside-diphosphate reductase (RDR), and triosephosphate isomerase (TIM) families were constructed by using the Mega and ProtML packages (Adachi and Hasegawa 1992; Kumar et al. 1994). The lengths of the branches connecting E. cuniculi, S. cerevisiae, and S. pombe were taken to be proportional to the divergence times for these lineages. The divergence times were calculated by using the estimates for the other eukaryotic lineages (Wang et al. 1999). The ratio of the previously estimated times since divergence to branch lengths for A. thaliana, H. sapiens, C. elegans, and D. melanogaster was used to calibrate the branches of the tree in years. An average estimate over the CDC28, GPDH, RDR, and TIM families was used as the estimate of the time of divergence of E. cuniculi, S. cerevisiae, and S. pombe.
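The calibration logic can be sketched as follows; the branch lengths and divergence times below are hypothetical placeholders, not the values used in this study.

```python
# Minimal sketch (hypothetical numbers): calibrate tree branch lengths to years
# using lineages with known divergence times, then date an uncalibrated branch.
def calibration_rate(known_times_mya, branch_lengths):
    """Average (time / branch length) ratio over the calibration lineages."""
    ratios = [t / b for t, b in zip(known_times_mya, branch_lengths)]
    return sum(ratios) / len(ratios)

# Hypothetical per-lineage values for one protein family (e.g., GPDH).
known_times = [1580, 990, 990, 990]       # divergence times in MYA (placeholders)
branch_lens = [1.20, 0.75, 0.80, 0.78]    # substitutions per site (placeholders)

rate = calibration_rate(known_times, branch_lens)
fungal_branch = 0.65                       # uncalibrated branch of interest
print(f"Estimated divergence: {rate * fungal_branch:.0f} MYA")
```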
PGL Calculations
By using the published estimates (Wang et al. 1999) and our own estimates for the divergence times of E. cuniculi, S. cerevisiae, and S. pombe, specific divergence times were assigned to each internal node (ancestral form) in the phylogenetic tree of the eukaryotic crown group (Fig. 1A). Given a phyletic distribution pattern, branches of the tree associated with gene loss (B_L) can be identified (Fig. 1B,C). Designating those branches of the tree in which the given gene was preserved B_P, we have

PGL = ΣB_L / (ΣB_P + ΣB_L)

In terms of Fig. 1, B and C, this is the ratio of the sum of the lengths of blue branches to the sum of the lengths of all colored branches. Thus, for a gene present in Arabidopsis, human, and C. elegans but lost in the Drosophila branch and the Fungi-Microsporidia branch (Fig.
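A minimal sketch of this ratio, with hypothetical branch times, is given below.

```python
# Minimal sketch of the PGL ratio above: branch lengths are in units of time,
# split into branches where the gene was preserved (B_P) and branches to which
# a loss was mapped (B_L). The values below are hypothetical.
def pgl(preserved_branch_times, lost_branch_times):
    b_p = sum(preserved_branch_times)
    b_l = sum(lost_branch_times)
    return b_l / (b_p + b_l)

# Example: a gene retained along most of the tree but lost on two branches.
print(round(pgl(preserved_branch_times=[1200, 900, 450, 300],
                lost_branch_times=[350, 100]), 3))
```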
Calculation of Evolutionary Distance Between Protein Sequences
Evolutionary distances between proteins in a KOG were calculated from multiple alignments. To obtain the P-distance, multiple alignments of protein sequences were constructed, and distances between orthologs were calculated as the proportion of different amino acids. All positions in the alignment containing a deletion or insertion in at least one of the sequences were removed prior to calculating the P-distance. P-distances were measured relative to the A. thaliana orthologs for all KOGs; their mean value was used as the distance characteristic for the given KOG. Similarly, evolutionary distances between proteins were calculated by using the PAM (Dayhoff et al. 1983) or JTT (Jones et al. 1992) substitution matrices, and the mean distance from A. thaliana to the other species was used for further analysis. The three-kingdom mean distance was calculated as the unweighted average of the mean distances among plants, animals, and fungi. JTT matrix distances were also calculated with γ-correction by using the Protdist program with the α-parameter of 1.0 (Felsenstein 1996).
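For illustration, a minimal sketch of the P-distance calculation on a toy alignment is shown below; the sequences are invented, and real alignments are of course much longer.

```python
# Minimal sketch (toy alignment): compute P-distances after removing alignment
# columns that contain a gap in any sequence, relative to the A. thaliana
# sequence, and average them as the distance characteristic of the KOG.
def ungapped_columns(seqs):
    """Keep only alignment columns without a gap in any sequence."""
    length = len(next(iter(seqs.values())))
    keep = [i for i in range(length) if all(s[i] != "-" for s in seqs.values())]
    return {sp: "".join(s[i] for i in keep) for sp, s in seqs.items()}

def p_distance(seq1, seq2):
    """Proportion of positions with different amino acids."""
    return sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)

aln = {   # invented aligned sequences; '-' marks an alignment gap
    "A.thaliana":   "MKV-ALSTGQ",
    "H.sapiens":    "MKVAALSSGQ",
    "S.cerevisiae": "MRV-GLSTGE",
}
clean = ungapped_columns(aln)
ref = clean["A.thaliana"]
dists = [p_distance(ref, s) for sp, s in clean.items() if sp != "A.thaliana"]
print(sum(dists) / len(dists))   # mean distance used for the KOG
```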
Figure 1. The phylogeny of eukaryotes and PGL calculations. (A) Estimated divergence times in millions of years ago (MYA) are shown for all internal nodes of the tree; the estimates are from Hedges et al. (2001). The number of lost genes according to the reconstructed parsimonious scenario is shown next to each branch. (B, C) Examples of PGL calculation. The presence and absence of a gene in each of the extant species is indicated by "+" and "−", respectively. Red branches are those that retained the gene; blue branches are those to which a loss was mapped. (B) The loss of the gene in the branch leading to the common ancestor of yeasts and the microsporidian is shown by a blue dot because this branch formally has zero length.
Figure 2. Distribution of essential and nonessential yeast genes among PGL classes. Yeast proteins were binned into four classes according to the PGL values for the corresponding KOGs. The number of essential (E) and nonessential (N) genes in each class is indicated. If there were multiple yeast paralogs in a KOG, the KOG was counted as essential if at least one of the paralogs was essential.
a gene found in Arabidopsis and the two yeast species (lost in the Metazoa branch and in the E. cuniculi branch, Fig.
Table 1. Correlation (R) Between the Propensity for Gene Loss, Substitution Rates, Gene Expression Level, and the Number of Protein-Protein Interactions
a Different methods for evolutionary distance (a surrogate for substitution rate) calculation are introduced in Methods. b A.t., Arabidopsis thaliana.

Table 2. Viability of Knockouts in Yeast, PGL, and Sequence Evolution Rate
a Different methods for evolutionary distance (a surrogate for substitution rate) calculation are introduced in Methods.
Winter dynamics of functional diversity and redundancy of riffle and pool macroinvertebrates after defoliation in a temperate forest stream
Headwater streams are highly heterogeneous and characterized by a sequence of riffles and pools, which are identified as distinct habitats. Higher species richness and density in riffles than in pools is considered a general pattern for macroinvertebrates. As temperate winters can last up to half a year, however, macroinvertebrate communities of riffles and pools may assemble differently under ice or snow. In particular, defoliation concentrated in autumn can largely change habitats in both riffles and pools by forming litter patches. According to the absence or presence of litter patches, there are four types of subhabitats, i.e., riffle stones, riffle litters, pool sediments, and pool litters, which are selectively colonized by macroinvertebrates. To study the spatial pattern and temporal dynamics of colonization, macroinvertebrates were surveyed in a warm temperate forest headwater stream in Northeast China during four periods: the autumn, pre-freezing, freezing, and thawing periods. Our study focused on the functional trait composition, functional diversity, and functional redundancy of macroinvertebrate communities. The colonization of macroinvertebrates was found to differ significantly among these subhabitats. Riffle stones supported higher taxonomic and functional diversities than pool sediments; litter patches supported higher total macroinvertebrate abundance and higher functional redundancy than riffle stones or pool sediments. The functional trait composition changed significantly with seasonal freeze-thaw in both riffle stones and pool sediments, but not in litter patches. The macroinvertebrate community in litter patches showed seasonal stability in taxonomic and functional diversities and in functional redundancy. Thus, this study strongly highlights that litter patches play an important role in structuring the macroinvertebrate community over winter, supporting high abundance and maintaining functional stability.
Introduction
Headwater streams are ubiquitous in river landscapes; they are important sources of biota for downstream reaches and critical sites for maintaining the ecological integrity and health of whole river networks (Clarke et al., 2008; Finn et al., 2011; Callisto et al., 2021). Understanding their biological diversity and community assembly is fundamental to monitoring and management (Clarke et al., 2008; Finn et al., 2011). Headwater streams are highly heterogeneous and characterized by a sequence of riffles and pools. Riffles and pools are identified as distinct habitats and differ significantly in physical features including flow, depth, slope, and substrate composition (Gordon et al., 1992; Merritt and Cummins, 1996; MacWilliams et al., 2006). Studies of riffles and pools have shown a general pattern for macroinvertebrate communities: species richness and density are higher in riffles than in pools (Scullion et al., 1982; Logan and Brooker, 1983; Brown and Brussock, 1991). Additionally, the composition of macroinvertebrate functional feeding groups differs between the two habitat types, with, for example, more scrapers in riffles and more collector-gatherers in pools (Cummins, 2016). However, such a pattern changes largely with latitude, season, flow regime, and environmental stress (Boulton and Lake, 1992; Carter and Fend, 2001; Bogan and Lytle, 2007; Mendes et al., 2017; Herbst et al., 2018). For example, the discharge regime has been found to largely affect macroinvertebrate community composition in snowmelt-dominated streams (Carter and Fend, 2001; Herbst et al., 2018).
Although there have been many studies from temperate streams, how biological communities of riffles and pools assemble under surface ice in temperate winter remains unclear. At high latitudes, winter is rather long and can even last half a year or more. However, most seasonal studies have a low time resolution for such a long winter. In temperate streams, particularly, leaves from riparian vegetation fall intensively in autumn and accumulate in the riffles and pools (Kagaya, 2002, 2004). As leaves are not only an important food source but can also modify habitat heterogeneity, suddenly accumulated leaves and their subsequent decomposition can influence the structure and dynamics of the macroinvertebrate community during winter (Richardson, 1992; Mendes et al., 2017; Al-Zankana et al., 2021). Leaves are unevenly distributed in riverbeds and form so-called litter patches in both riffles and pools. In riffles, litter patches easily occur at the upstream face of flow obstacles (such as stones and branches) and usually have a higher leaf mass. In pools, litter patches occur in places with low water currents and have a higher mass of wood and small litter fragments. According to the presence or absence of litter, there are four types of distinct subhabitats: riffle stones, riffle litters, pool sediments, and pool litters. The four subhabitats have distinct physical and chemical conditions and can be colonized selectively by macroinvertebrates. Such colonization depends significantly on rainfall and discharge (Buss et al., 2004).
Within a stream, the difference in macroinvertebrate communities between riffles and pools depends on environmental conditions such as flow regimes and food supply (Scullion et al., 1982; Brown and Brussock, 1991). Several studies have reported that the difference in species richness is not always significant, but that a difference in quantitative composition is more common (Mendes et al., 2017). During a long temperate winter that can be divided into pre-freezing, freezing, and thawing periods, the difference in macroinvertebrates between riffles and pools is expected to change markedly from the autumn just after defoliation toward water freezing and snow thawing the next spring.
For the assembly of macroinvertebrate communities, both the classical river continuum concept and the habitat templet theory suggest that species sorting, or environmental selection, is stronger over smaller spatial extents (Vannote et al., 1980; Townsend and Hildrew, 1994; Hamilton et al., 2020). As a local filter, environmental selection retains species with suitable functional traits, which are morphological, biochemical, physiological, structural, phenological, or behavioral features that influence performance or fitness (Nock et al., 2016). Functional diversity, a component of biodiversity, is defined as the functional trait differences between organisms present in a community, mostly including functional richness, evenness, and divergence (Mason et al., 2005, 2013). The functional diversity of a local community indicates species heterogeneity and is strongly associated with its performance under environmental change (Mason et al., 2005, 2013). Thus, trait-based assessment (i.e., functional trait composition analysis) is likely to detect sensitively the structural and functional differences between riffles and pools (Herbst et al., 2018). At the community level, functional redundancy is defined as the fraction of taxonomic diversity not expressed by functional diversity (Ricotta et al., 2016). It provides an important measure of community assembly from the viewpoint of functional traits (Ricotta et al., 2020). Although litter patches are attractive to many groups of macroinvertebrates, they tend to decrease substrate heterogeneity in both riffles and pools, and may reduce functional diversity and increase functional redundancy.
In temperate deciduous broad-leaved forests, defoliation is concentrated in autumn, macroinvertebrates immediately colonize the litter patches, and, in particular, shredders are intensively involved in and facilitate litter decomposition (Cummins et al., 1989; Kagaya, 2005, 2009). Usually, it takes months for litters to be completely decomposed (Gessner et al., 1999). Such decomposition, occurring hidden under ice or snow, may be limited by rather low water temperature. Macroinvertebrate richness and density increase with litter palatability and may peak in the middle and even late winter. Toward the next spring, snowmelt can result in increasing discharge that can change habitat stability, promoting passive or active dispersal of many species. Species colonizing the litter substrates of both riffles and pools become biota sources for downstream reaches. Therefore, macroinvertebrate assemblages experience a highly dynamic succession (Wang, 2020).
In this study, we aim to test three hypotheses that highlight differences in functional diversity and redundancy of macroinvertebrate communities between riffles and pools (Figure 1). (1) Riffles have higher habitat heterogeneity than pools and thus host more macroinvertebrate species and higher functional diversity. Since environmental selection has a greater impact on functional traits than on species themselves, higher functional diversity leads to lower functional redundancy in riffles than in pools. (2) For both riffles and pools, litter patches provide macroinvertebrates with more food but lower substrate heterogeneity. The reduced environmental selection and limited competition under low temperature will result in lower functional diversity and higher functional redundancy. (3) From the pre-freezing to the freezing period, food that increases with litter decomposition supports high richness and density of colonizing macroinvertebrates. During this period, both low temperature and high food availability reduce interspecific competition, which tends to decrease functional diversity and increase functional redundancy. From the freezing to the thawing period, however, leaf nutrition decreases with litter decomposition, the richness and density of colonizing macroinvertebrates decrease, and nutritional limitation promotes interspecific competition, resulting in an increase in functional diversity and a decrease in functional redundancy.
In the present study, we test the three hypotheses by examining macroinvertebrates in a warm temperate stream of the Songhua River, Jilin Province, Northeast China. We limited this study to a single stream so that all local communities share a common species pool and all sites have a common regional background, especially the same riparian vegetation. The resulting dataset can reduce the complex influence of multiple factors. This study provides a case study of the dynamics of stream macroinvertebrate assembly associated with defoliation in a warm temperate region.
Study area
The field investigation was conducted in a forest headwater stream of the Songhua River Basin, Northeast China, located in the Longwan Nature Reserve (126°13′55″–126°13′55″ E, 42°16′20″–42°26′57″ N). The river basin has a warm temperate and continental monsoonal climate, with a mean annual precipitation of about 700 mm. The mean annual water temperature is about 7°C, and the monthly water temperature varies from 0.5°C in January in winter to 15°C in July in summer. Winter here lasts for 5 months from November to April each year, during which approximately 70% of the surface stream water is frozen. The riparian vegetation is dominated by the tree species Acer mono, Tilia amurensis, Quercus mongolica, Ulmus pumila, and Populus davidiana. In this study, a 500 m reach consisting primarily of fast-flowing riffles and slow-flowing pools was selected (Figure 2A). From autumn to early the next spring, the riverbeds are covered with leaves or litters that form litter patches in both riffles and pools. Following the classification by Buss et al. (2004), we defined four common substrates: riffle stones, pool sediments, riffle litters, and pool litters (Figures 2B-G).
Sampling and identification of macroinvertebrates
Macroinvertebrates and litter patches were investigated across four periods: autumn (mid-October 2017), pre-freezing (early November 2017), freezing (early January 2018), and thawing (early March 2018). The sampling was performed in a 500 m reach that consists primarily of fast-flowing riffles and slow-flowing pools. In each period, four litter patches from four pools and four litter patches from four riffles were sampled. For riffle litters, the components of litter patches and macroinvertebrates were collected with a Surber sampler (30 × 30 cm, 500 µm mesh size) because patch size (area covered by litter) was less than 900 cm². Organic matter inside the sampler, except for large branches, was washed into the net. The size of the sampled litter patches was recorded. For litters and macroinvertebrates in pool patches, a square-cut cushion of sponge rubber (50 × 50 cm with an inner 20 × 20 cm opening) was placed on the litter for quantitative sampling. Organic matter inside the opening was washed into a D-frame net (500 µm mesh size). In each period, six riffles and six pools were randomly selected for collecting macroinvertebrates of riffle stones and pool sediments. Habitats for riffle stones and pool sediments were identified visually based on velocity, depth, particle size, and moss cover. One integrated sample of macroinvertebrates was collected for each riffle or pool from three microhabitats with a Surber net (30 × 30 cm, 500 µm mesh size). All macroinvertebrates were stored in 75% ethanol.
In the laboratory, all macroinvertebrates were individually picked from the detritus and other materials of each sample. All litter samples were washed through nested sieves (16 and 1 mm), and the contents of these sieves were separated into litter and macroinvertebrates. Macroinvertebrates were identified to the genus level, except for Chironomidae, which was taxonomically rich and was identified to subfamily. Identification and counting of taxa were performed under a stereoscopic microscope using monographs, publications, and other relevant literature (Morse et al., 1994; Wiggins, 1996; Thorp and Covich, 2001). All litters were classified into three categories: coarse particulate organic matter (CPOM: >16 mm), leaves (16 mm), and small woody detritus (SWD: 16-100 mm). The litters in each category were dried at 60°C for 48 h and weighed.
Measurements of environmental variables and litter patch characteristics
Environmental variables were measured synchronously at riffles and pools. Water velocity (Vel) was measured using a portable velocity analyzer. Water depth was estimated using a graduated stick. Water temperature (Temp), pH, turbidity (Turb), conductivity (Cond), and dissolved oxygen (DO) were measured using a portable water quality analyser (YSI). Substrates were quantified by visually estimating the percentage of boulders, cobbles, pebbles, gravel, and sand following the protocol established by Cummins (1962). In each period, six riffle patches and six pool patches were randomly selected for measurement. Detritus area (cm²), detritus height (cm), and water depth (cm) were recorded, and the current velocity (m/s) just above the patches was measured using a portable current meter.

Figure 1. Hypothesized variation of functional diversity and redundancy between the four subhabitats (riffle stone, pool sediment, riffle litter, and pool litter) and between the four periods (autumn, pre-freezing, freezing, and thawing periods). (A) The difference in functional redundancy between riffle stones and pool sediments; (B) the difference in functional redundancy between riffle sediments and riffle litters; (C) the temporal variation of functional redundancy from autumn to the next spring. D_S, taxonomic diversity, indicated by an open ellipse; D_F, functional diversity, shown by a filled ellipse, D_F ≤ D_S. Functional redundancy = 1 − D_F/D_S.

Figure 2. Location of the sampling reach (A); sampling reach with a sequence of riffles and pools in the autumn (B); sampling reach in the winter (C); riffle litter in the autumn (D); riffle litter in the winter (E); pool litter in the autumn (F); pool litter in the winter (G).

Figure 3. Physical environmental conditions: (A) depth, (B) velocity, (C) water temperature, (D) substrate composition in riffles and pools during the four periods; and litter composition: (E) leaf abundance, (F) CPOM abundance, (G) SWD abundance, and (H) relative abundance of each litter type in the riffle and pool litter patches during the winter. R, riffles; P, pools; AP, the autumn period; PP, the pre-freezing period; FP, the freezing period; TP, the thawing period; CPOM, coarse particulate organic matter; SWD, small woody detritus.

Functional traits were assigned to taxa based on database information published by Poff et al. (2006) and Tomanova and Usseglio-Polatera (2007). The chosen traits represent dimensions of the ecological niche of macroinvertebrates, including life history (voltinism), mobility (swimming ability), morphology (shape), and ecology (rheophily, habitat, trophic habits), and have been proved to be sensitive to environmental conditions, such as the physical environment and food resources (Cummins, 1974; Tomanova and Usseglio-Polatera, 2007). Functional trait dissimilarity between taxa was quantified using the gawdis distance with the "gawdis" function in R (de Bello et al., 2021). Simpson's diversity (D), Rao's diversity (Q), and taxon-level vulnerability (with the "uniqueness" function) were calculated in the ade package (Ricotta et al., 2016). Following the framework proposed by Ricotta et al. (2016, 2021) and Pavoine and Ricotta (2019), functional redundancy was measured as the fraction of taxonomic diversity not expressed by functional diversity. Finally, we calculated functional redundancy (FR) as FR = (D − Q)/D (Ricotta et al., 2016). The methodology for calculating functional diversity and functional redundancy is described in detail in Wang et al. (2023). All diversity analyses were performed using R v4.2.0.
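For illustration, the following minimal sketch (written in Python rather than with the R packages named above, and using hypothetical abundances and trait dissimilarities) shows how Simpson's diversity, Rao's quadratic entropy, and FR relate to one another.

```python
# Minimal sketch: Simpson's diversity D, Rao's quadratic entropy Q from a
# trait-based dissimilarity matrix scaled to [0, 1], and FR = (D - Q)/D.
# The abundances and dissimilarities below are hypothetical.
import numpy as np

abund = np.array([30.0, 12.0, 5.0, 3.0])         # densities of four taxa
p = abund / abund.sum()                          # relative abundances

d = np.array([                                   # pairwise trait dissimilarity
    [0.0, 0.4, 0.7, 0.9],
    [0.4, 0.0, 0.5, 0.8],
    [0.7, 0.5, 0.0, 0.3],
    [0.9, 0.8, 0.3, 0.0],
])

D = 1.0 - np.sum(p ** 2)                         # Simpson's diversity
Q = float(p @ d @ p)                             # Rao's quadratic entropy
FR = (D - Q) / D                                 # functional redundancy
print(f"D = {D:.3f}, Q = {Q:.3f}, FR = {FR:.3f}")
```

Because the dissimilarities lie between 0 and 1, Q can never exceed D, so FR is bounded between 0 (taxa functionally unique) and 1 (taxa fully redundant).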
Statistical analysis
Two-way analysis of variance (ANOVA) was applied to determine the differences in environmental variables between riffles and pools and among the four periods. The difference in the composition of macroinvertebrate communities was tested among the four periods and between riffle litters and pool litters, between riffle stones and riffle litters, and between pool sediments and pool litters by two-way analysis of similarities (ANOSIM) based on the Bray-Curtis dissimilarity matrix, separately. Then, SIMPER (similarity percentages-species contributions) analysis was performed to determine the species that contributed most to the differences. The differences in the relative abundance of each functional trait, species richness, density, Simpson's diversity, Rao's diversity, and functional redundancy among the four periods and between riffle litters and pool litters, between riffle stones and riffle litters, and between pool sediments and pool litters were also tested by two-way ANOVA. Where a significant ANOVA result was obtained (p < 0.05), Tukey's multiple comparison tests were conducted. Nested ANOVA was applied to determine the differences in species richness, Simpson's diversity, Rao's diversity, and functional redundancy between riffles (including riffle stones and riffle litters) and pools (including pool sediments and pool litters) in each period. The ANOVA and Tukey's tests were performed using SPSS software (version 21.0). The two-way ANOSIM and SIMPER analyses were conducted using PAST software (version 3.0) (Hammer et al., 2001).
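The Bray-Curtis dissimilarity underlying the ANOSIM and SIMPER comparisons can be sketched as follows; the abundance vectors are hypothetical.

```python
# Minimal sketch (hypothetical abundances): Bray-Curtis dissimilarity between
# two community samples, the distance on which ANOSIM and SIMPER operate.
import numpy as np

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.abs(x - y).sum() / (x + y).sum()

riffle_litter = [35, 10, 4, 0, 7]    # taxon densities in one sample (made up)
pool_litter   = [20, 14, 0, 6, 9]
print(round(bray_curtis(riffle_litter, pool_litter), 3))
```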
Redundancy analysis (RDA) was run to identify the variables that influence variation in the macroinvertebrate communities. Environmental variables included velocity, depth, substrate composition (boulders, cobbles, pebbles, gravel, and sand %), total litter abundance, and the litter components (CPOM abundance, leaf abundance, and SWD abundance). Species population densities were Hellinger transformed and environmental variables were standardized prior to RDA. The significance of the full RDA model was tested with the ANOVA function. A forward selection procedure was conducted with the ordiR2step function to select the significant variables. RDA was run in the vegan package (Legendre and Legendre, 2012). The hierarchical partitioning method was used to distinguish each single variable's contribution via the rdacca.hp package (Lai et al., 2022).
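The Hellinger transformation applied before RDA is simply the square root of the within-sample relative abundances, as in the following sketch with a hypothetical density matrix.

```python
# Minimal sketch (hypothetical matrix): Hellinger transformation of a species
# density matrix, i.e., square root of relative abundances within each sample.
import numpy as np

densities = np.array([            # rows = samples, columns = taxa (made up)
    [30, 12, 5, 3],
    [ 2, 40, 1, 7],
])
hellinger = np.sqrt(densities / densities.sum(axis=1, keepdims=True))
print(np.round(hellinger, 3))
```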
Environmental conditions
Physical environmental variables exhibited significant differences between riffles and pools (Figures 3A-D). Water flow velocity was higher in riffles, while water depth was greater in pools. Substrates in riffles were mainly composed of boulders and cobbles, whereas those in pools had a higher proportion of sand and gravel. The characteristics and composition of litter patches were also significantly different between riffles and pools. Pool litters had a larger litter area (Figures 3E-H and Supplementary Table 2) but lower litter abundance than riffle litters. Pool litters had a higher relative abundance of coarse particulate organic matter (CPOM), while riffle litters had a higher relative abundance of leaves.
Toward the winter, water temperature decreased markedly, and the stream below the ice surface was characterized by low water depth and low water velocity. Subsequently, in the thawing period, the increased temperature accelerated ice and snow melting, resulting in a significant increase in water velocity. However, there was no obvious change in substrate composition during the whole winter. Regarding litter composition, the relative abundance of leaves was highest in the autumn and then decreased gradually, while the relative abundance of CPOM was highest in the freezing period.
ANOSIM showed considerable differences in macroinvertebrate communities between riffle stones and pool sediments in each period (p < 0.05), between riffle stones and riffle litters in each period (p < 0.05), and between pool sediments and pool litters during the autumn and pre-freezing periods (p < 0.05). Both riffle stone and pool sediment communities showed significant temporal variation between periods (p < 0.05). The riffle litter community changed significantly from the autumn to the pre-freezing period and from the freezing to the thawing period (p < 0.05). However, the pool litter community changed significantly only from the autumn to the pre-freezing period (p < 0.05). SIMPER analysis showed that the density variation of Chironominae, Utaperla, Taenionema, and Tanypodinae contributed most to the spatial and temporal differences in community composition (Figure 4).
The density of macroinvertebrates, especially of collector-filterers, scrapers, and shredders, was significantly higher in riffle stones than in pool sediments (p < 0.05, Figure 5). Riffle litters supported higher macroinvertebrate density than riffle stones. Regarding functional feeding groups, predator density was significantly higher in riffle litters than in riffle stones in each period (p < 0.05); collector-gatherer and shredder densities were higher in riffle litters from the autumn to the freezing period (p < 0.05); however, collector-filterer density was higher in riffle stones than in riffle litters from the pre-freezing to the thawing period (p < 0.05). There was no significant difference in the density of any functional feeding group between pool sediments and pool litters.

Figure 4. Change in the mean density of the main groups that caused spatial and temporal differences in community composition according to similarity percentages (SIMPER) analysis. RS, riffle stones; PS, pool sediments; RL, riffle litters; PL, pool litters; AP, autumn period; PP, pre-freezing period; FP, freezing period; TP, thawing period.

Figure 5. Density of macroinvertebrates and functional feeding groups in the four types of substrates during the four periods. Light black asterisks indicate a significant difference between riffle stones and pool sediments; dark black asterisks indicate a significant difference between riffle stones and riffle litters. RS, riffle stones; PS, pool sediments; RL, riffle litters; PL, pool litters; AP, the autumn period; PP, the pre-freezing period; FP, the freezing period; TP, the thawing period.
Collector-gatherer density was highest during the freezing period in both riffle stones and pool sediments; scraper and predator densities were highest during the freezing period in riffle stones. In riffle litters, collector-filterer density was highest during the autumn, while predator density was highest in the freezing period. In pool litters, the densities of macroinvertebrates and of each functional feeding group showed no significant difference among the four periods.
RDA showed that the full model (adjusted R² = 0.120, p = 0.001) significantly explained the variation in total community structure. The significant variables, i.e., velocity, boulder percentage, and total litter abundance, explained 3.65, 4.99, and 3.38% of the total variation in the communities, respectively (Supplementary Figure 1).
Functional trait composition
There were significant differences in functional trait composition between riffle stones and pool sediments, and between riffle stones or pool sediments and their litter patches (Figure 6). The pool sediment community was characterized by a higher relative abundance of "Bi- or multivoltine", "None swim", "Burrower", and "Collector-gatherer" taxa, while the riffle stone community had a higher relative abundance of "Erosional" and "Scraper" taxa. A higher relative abundance of "Erosional", "Collector-filterer", and "Shredder" taxa occurred in riffle stones than in riffle litters. A higher relative abundance of "Depositional" and "Predator" taxa occurred in pool litters than in pool sediments.
Functional trait composition showed a clear temporal change in both riffle stones and pool sediments. In riffle stones, the relative abundance of "Semivoltine" groups decreased, but that of "Univoltine", "Weak swimming", and "Depositional and erosional" groups increased significantly from the autumn to the pre-freezing period; "Shredder" abundance decreased from the freezing to the thawing period. In pool sediments, the relative abundance of "Erosional" groups decreased, but that of "Burrower" and "Collector-gatherer" groups increased significantly from the autumn to the pre-freezing period, whereas the relative abundance of "Burrower" and "Collector-gatherer" groups decreased, but that of "Weak swimming", "Streamlined", and "Shredder" groups increased from the freezing to the thawing period. However, functional trait composition in both riffle litters and pool litters did not change significantly over the four periods.
Taxonomic and functional diversity and functional redundancy
Taxonomic and functional diversity and functional redundancy of macroinvertebrate communities showed significant differences between subhabitats (Figure 7 and Supplementary Table 4). Taxonomic richness was significantly higher in riffle stones than in pool sediments, and higher in riffle stones than in riffle litters. Simpson's diversity was significantly higher in riffle stones than in pool sediments during the autumn and freezing periods. Rao's diversity was significantly higher in riffle stones than in pool sediments, and significantly higher in riffle stones or pool sediments than in their litter patches. Functional redundancy did not differ significantly between riffle stones and pool sediments, but was significantly higher in litter patches than in both riffle stones and pool sediments.
Simpson's diversity and functional redundancy showed a significant seasonal variation in pool sediments (Figure 7), where Simpson's diversity decreased significantly during the freezing period, and functional redundancy increased significantly during the pre-freezing period.
The nested ANOVA analyses of the differences in richness, Simpson's diversity, Rao's diversity, and functional redundancy between riffles and pools showed results similar to those of the two-way ANOVA comparisons between riffle stones and pool sediments (Table 1 and Supplementary Table 4).
Winter dynamics of functional diversity and redundancy of macroinvertebrates in the riffle stones and the pool sediments
Headwater streams usually show high spatial heterogeneity even over a few meters, creating mosaics (i.e., a sequence of riffles and pools) with different environmental conditions (Frissell et al., 1986; Mazão and da Conceição, 2016) and affecting aquatic macroinvertebrate assemblages (Vinson and Hawkins, 1998; Heino et al., 2004; Clarke et al., 2008). As we expected, physical environmental characteristics and food resource availability were significantly different between riffles and pools in this study. Compared with riffles, pools were deeper, subjected to lower hydrological disturbance, contained more fine sediments, and had a higher relative abundance of CPOM but fewer leaves. Under distinct environmental selection, both taxonomic and functional diversities were higher in the stony substrates of riffles than in the sedimental substrates of pools, but functional redundancy showed little difference, partially supporting our first hypothesis.
Physical disturbance and substrate type are key environmental factors affecting macroinvertebrate communities (Brown and Brussock, 1991; Roy et al., 2003). In our stream, water velocity and substrate composition significantly affected macroinvertebrate community composition in both riffles and pools. "None swim" and "Burrower" taxa, such as Chironomidae, dominated the macroinvertebrates in pool sediments, while "Weak swim" taxa (e.g., Taenionema) dominated those in riffle stones. Food resources also constitute an environmental template for macroinvertebrate communities (Cummins and Klug, 1979; Beisel et al., 2000; Herbst et al., 2018). More shredders, which mainly feed on leaves, colonized riffle stones than pool sediments, because more leaves covered riffle stones. More scrapers also colonized riffle stones, because the abundant rocks provide more surface habitat for attached diatoms. More collector-filterers (e.g., Hydropsyche) were also found in riffle stones, because they benefit from the high water velocity that delivers suspended organic matter. As a result, riffle stones supported greater taxonomic and functional diversities of macroinvertebrates than pool sediments, which is similar to most comparative studies of stream macroinvertebrate communities between riffles and pools (Logan and Brooker, 1983; Brown and Brussock, 1991; Carter and Fend, 2001). Due to the synchronous change in both Simpson's and Rao's diversities of macroinvertebrates between riffle stones and pool sediments, there was no significant difference in functional redundancy between the two types of habitats.

(Figure caption) Richness, Simpson's, Rao's, and functional redundancy (FR) in the four types of substrates during the four periods. Light black asterisks indicate significant differences between the riffle stones and the riffle litters (p < 0.05); dark black asterisks indicate significant differences between the pool sediments and the pool litters (p < 0.05); different small letters represent significant differences in Simpson's diversity and functional redundancy among periods in the pool sediments (p < 0.05). RS, riffle stones; PS, pool sediments; RL, riffle litters; PL, pool litters; AP, the autumn period; PP, the pre-freezing period; FP, the freezing period; TP, the thawing period.

It is commonly considered that functional trait composition is temporally stable in habitats with low environmental fluctuation and changes strongly only under high environmental fluctuation (Statzner et al., 2004; Bêche et al., 2006). Freezing and thawing are two contrasting environmental processes: freezing decreases water temperature and hydrological disturbance, whereas thawing increases water temperature and hydrological disturbance. Significant variation in the functional trait composition of macroinvertebrates was found between the two periods. During the pre-freezing period, the "Weak swim" group (e.g., Taenionema) became significantly more abundant in riffle stones, indicating that this group can escape from unfavorable habitats and adapt to the considerable environmental changes caused by low temperature and freezing. The decrease in the relative abundance of the "Erosional" group but increase in the "Burrower" group in pool sediments suggests the adaptation of macroinvertebrates to low hydrological disturbance. During the thawing period, the increase of the "Weak swim" group but decrease of the "Burrower" group in pool sediments shows their adaptation to high hydrological disturbance.
Taxonomic diversity, functional diversity and functional redundancy in riffle stones showed low temporal variation, suggesting that freezing and thawing had less effect on riffle macroinvertebrates. In pool sediments, Simpson's diversity increased significantly with the increased evenness during the thawing period, and functional redundancy increased significantly during the pre-freezing period because Simpson's diversity increased more than Rao's diversity. The seasonal change in biological diversity and functional redundancy of macroinvertebrate communities in pool sediments depends on substrate stability. That is, the riverbeds of pools, which are mainly composed of sand and gravel, are unstable and sensitive to hydrological disturbance (Duan et al., 2008). In comparison, riffle stones have an inherent potential to maintain the temporal stability of the structure and function of macroinvertebrate communities.
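To make the relationship between Simpson's diversity, Rao's quadratic entropy and functional redundancy concrete, the sketch below computes the three quantities for a hypothetical community. It assumes one common formulation in which redundancy is the difference between Simpson's and Rao's diversity (FR = D − Q); the abundances and the trait dissimilarity matrix are made-up inputs, not data from this study, and the exact formulation used by the authors may differ in normalization.

```python
import numpy as np

def simpson_diversity(p):
    """Simpson's diversity D = 1 - sum(p_i^2) for relative abundances p."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return 1.0 - np.sum(p ** 2)

def rao_quadratic_entropy(p, d):
    """Rao's Q = sum_ij d_ij * p_i * p_j, with d a pairwise functional
    dissimilarity matrix scaled to [0, 1]."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    d = np.asarray(d, dtype=float)
    return float(p @ d @ p)

def functional_redundancy(p, d):
    """One common formulation: FR = D - Q (higher when species are
    abundant but functionally similar)."""
    return simpson_diversity(p) - rao_quadratic_entropy(p, d)

# Hypothetical 4-species community (e.g., a pool-sediment sample).
abundances = [40, 30, 20, 10]
# Hypothetical trait-based dissimilarities between the 4 species.
dissim = np.array([[0.0, 0.2, 0.6, 0.7],
                   [0.2, 0.0, 0.5, 0.6],
                   [0.6, 0.5, 0.0, 0.3],
                   [0.7, 0.6, 0.3, 0.0]])

D = simpson_diversity(abundances)
Q = rao_quadratic_entropy(abundances, dissim)
print(f"Simpson D = {D:.3f}, Rao Q = {Q:.3f}, redundancy FR = {D - Q:.3f}")
```

Under this formulation, the pattern described above (Simpson's diversity rising faster than Rao's diversity) mechanically produces an increase in redundancy.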
Litter patches altered functional diversity and redundancy of macroinvertebrate communities
In boreal streams, litterfall occurs mainly within a short autumn period; the litter is intercepted in riffles or deposited in pools, forming litter patches (Egglishaw, 1964; Mackay and Kalff, 1969; Kobayashi and Kagaya, 2002). Unlike pure stony and sedimental substrates, the litter patches covering them largely modify the features of the habitats, i.e., increasing habitat homogeneity by weakening hydrological disturbance while providing much more food resources (Cummins, 1974; Holomuzki and Hoyle, 1990; Dobson, 1994; Wallace et al., 1999). In our case, total litter abundance was found to significantly influence the community structure of macroinvertebrates.
In our riffles, the litter substrates attracted abundant shredders (e.g., Gammarus) by providing leaves for their feeding, as well as collector-gatherers (e.g., Ephemerella and Chironomidae) by accumulating abundant fine particulate organic matter, and abundant predators (e.g., Utaperla). Many studies have also found that litter retention largely determines the abundance of macroinvertebrates (Short et al., 1980; Dobson and Hildrew, 1992; Dobson, 1994; Wallace et al., 1999). However, not all taxa in our investigated riffles were markedly attracted by litter patches. More "Erosional" taxa (e.g., Glossosoma and Cyrnellus) and "Collector-filterer" taxa (e.g., Hydropsyche and Simulium) colonized riffle stones, because they require high water flow for living (e.g., for breathing or feeding). In general, litter patches supported lower macroinvertebrate richness but higher density than stones. Due to the higher community evenness in litter patches, Simpson's diversity did not show a significant difference between riffle litter and stony substrates. On the other hand, reduced environmental selection and limited competition promote trait clustering among species, resulting in lower functional diversity (Grime, 2006; Helmus et al., 2010). As our second hypothesis predicted, riffle litters had lower functional diversity but higher functional redundancy than riffle stones.
The area of litter patches is larger in pools than in riffles, usually covering most of the pool sediments. Due to the similar physical environmental conditions, pool litter and pool sediment communities had similar taxonomic richness, density and Simpson's diversity. However, the rich food resources reduced interspecific competition, which decreased functional diversity and finally resulted in an increase in functional redundancy, supporting our second hypothesis.
Litter patches are not a stable type of habitat; their food value and habitat features change with litter decomposition, which could affect macroinvertebrate communities. The decomposition of litter usually lasts for several months, generally in three phases: (1) leaching and initial rapid loss, (2) microbial conditioning, and (3) macroinvertebrate consumption and physical breakdown (Webster and Benfield, 1986; Gessner et al., 1999). Microbial conditioning not only accelerates the decomposition of leaf litter but also changes the palatability of litter for shredders (Cummins et al., 1989; Gessner et al., 1999). With increased litter palatability, litter can attract a greater abundance of macroinvertebrates (Cummins et al., 1989; Graça et al., 2001). In accordance with this general temporal pattern, the macroinvertebrate density in this study indeed increased from the autumn to the freezing period in riffle litters and pool litters, despite the low temperature during the freezing period. This also suggests that macroinvertebrates could be more sensitive to food resources than to temperature changes. Many boreal stream studies have also demonstrated that macroinvertebrates remain active under low temperatures and may play a larger role in litter decomposition (Irons et al., 1994; Muto et al., 2011). After that, the macroinvertebrate density decreased during the thawing period, due to a decrease in litter nutrient content and an increase in hydrological disturbance.
Under strong interspecific competition, the abundance of competitive taxa increases and their vulnerability decreases (see Ricotta et al., 2016). In our stream, Taenionema and Rhyacophila are both scrapers and overlap in their food niche. With its high competitive ability, Taenionema increased markedly in abundance from the autumn to the freezing period, and its vulnerability value significantly decreased from 0.21 to 0.14. In contrast, with a low competitive ability, Rhyacophila increased its vulnerability from 0.21 to 0.29. On the other hand, the reduction in hydrological disturbance decreased environmental stress, which increased trait clustering among species in litter patches. As a result, the functional trait composition and functional diversity of litter patch macroinvertebrates showed only slight changes from the autumn to the freezing period. During the thawing period, the reduced food resources tended to promote interspecific competition, but this was counteracted by the increased hydrological disturbance, resulting in a stable functional composition. As Simpson's diversity and functional diversity did not change significantly during the four studied periods, functional redundancy was temporally stable in the litter substrates of both riffles and pools, inconsistent with our third hypothesis.
In summary, compared with riffle stones and pool sediments, litter patches support not only a higher total macroinvertebrate abundance but also a higher functional redundancy, which supports the over-winter stability of functional trait composition and functional diversity.
Data availability statement
The original contributions presented in this study are included in the article/Supplementary material; further inquiries can be directed to the corresponding authors.
|
2023-03-08T16:06:49.507Z
|
2023-03-06T00:00:00.000
|
{
"year": 2023,
"sha1": "e28a675d5bc37c43a6e99bc7f5e9ac2a28949095",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2023.1105323/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f5d274b7a1cceecdf4eda781f35166c4b0f2fce4",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
921436
|
pes2o/s2orc
|
v3-fos-license
|
Nuclear expression and/or reduced membranous expression of β-catenin correlate with poor prognosis in colorectal carcinoma
Supplemental Digital Content is available in the text
Introduction
Colorectal cancer (CRC) is the fourth most frequent human malignancy worldwide. [1] Although the mortality of colorectal cancer has decreased by almost 35% through earlier screening and better treatment modalities, [2] CRC remains the second highest cause of cancer-related deaths. The 5-year survival rate for CRC exceeds 50%, but it is highly variable depending on the stage of the disease. [3] The molecular pathways involved in the tumorigenesis of CRC are complicated and heterogeneous. Fearon and Vogelstein [4] have reported a series of genetic alterations, including the activation of certain oncogenes and the inactivation of particular tumor suppressor genes, which are responsible for colorectal tumorigenesis.
β-catenin localizes in the membrane, cytoplasm, or nucleus and exerts different functions related to cell differentiation and proliferation. Membranous β-catenin was identified as a protein associated with E-cadherin in maintaining cell-to-cell interactions. Interestingly, the membranous expression of β-catenin exerts a restrictive effect on tumor cell movement and growth. Loss of β-catenin expression on the cell surface increases cell motility, growth, and transformation and thus promotes tumorigenesis. [5] Cytoplasmic β-catenin, which can translocate to the nucleus and activate downstream target genes relevant to cell proliferation, migration, invasion, cell cycle progression and metastasis, serves as a downstream transcriptional transactivator of int/Wingless family (Wnt) transduction signaling. [6] In the absence of Wnt ligands, pre-existing intracellular β-catenin directly connects the scaffolding proteins Axin and adenomatous polyposis coli (APC) with the serine/threonine kinases casein kinase 1 alpha (CK1α) and glycogen synthase kinase 3β (GSK3β) to form a destruction complex. [7] However, if APC is inactivated or Axin or β-catenin mutates, free β-catenin cannot be degraded and accumulates in the cytosol. Subsequently, β-catenin translocates to the nucleus as a co-factor for the T-cell factor (TCF) family of transcription factors to activate downstream Wnt target genes. [6] This aberrant Wnt/β-catenin-TCF signaling plays a key role in the development and progression of colorectal cancer. The Wnt/β-catenin pathway has been recognized to play a critical role in maintaining stem cell features by targeting genes such as Lgr5, Ascl2, and Sox9. [8][9][10] The hyperactivation of Wnt/β-catenin signaling enhances the invasive and metastatic potential of CRC cells. [11] Knockdown of β-catenin in CRC cells dampens cell proliferation and invasion. [12,13] Nuclear β-catenin expression detected by immunohistochemistry has been reported to be associated with high tumor burden and worse survival outcomes in CRC. [14][15][16][17] However, other studies did not find this association. [18,19] Variable and contradictory results were also observed regarding the correlation between reduced membranous β-catenin expression and the prognosis of patients with CRC. [17,[19][20][21] A previous analysis suggested that β-catenin overexpression in the nucleus, rather than in the cytoplasm, was associated with poor prognosis of CRC. [22] In this paper, we collected and added updated articles regarding β-catenin expression in CRC to reanalyze the prognostic value of β-catenin in the cytoplasm and/or nucleus in CRC patients. More importantly, we also extracted relevant data to analyze the prognostic significance of reduced membranous β-catenin expression in patients with CRC. The pooled results suggested that β-catenin overexpression in the nucleus or reduced β-catenin expression in the membrane was associated with worse prognosis of CRC.
Literature selection
We systematically searched the PubMed, Embase, and Web of Science databases to identify pertinent articles published prior to November 2015. The terms used in the search, in all possible combinations, were as follows: "β-catenin, beta-catenin, or CTNNB1," "Wnt/β-catenin signal pathway," "prognostic, prognosis, or survival," and "colorectal neoplasms, colorectal cancer, colorectal carcinoma, or colorectal tumor." The reference lists of all the studies were also inspected for additional available studies.
Inclusion and exclusion criteria
To obtain high-quality literature meeting the standards of this meta-analysis, studies had to fulfill the following criteria: (1) the included patients had a definite pathological diagnosis of colorectal carcinoma; (2) β-catenin expression was evaluated by immunohistochemistry in CRC tissue; (3) the correlation between β-catenin expression and CRC pathological features and overall survival (OS) or disease-free survival (DFS) was evaluated; and (4) the study was published in English. In addition, the following articles were excluded: (1) articles published in a non-English language; (2) articles from which the relevant data could not be extracted; (3) articles in which no relevant data were provided for the necessary analysis; (4) duplicated articles; (5) articles in which the cut-off score for positive immunoreactivity of nuclear β-catenin expression was higher than 30%; and (6) studies of too low quality (score < 4). The studies were evaluated and selected by 2 reviewers (XY and JS), and disagreements were settled by a third reviewer (DL). Then, the eligible articles were included for further data processing.

(Table caption) Characteristics of the studies of β-catenin expression in the cytoplasm.
(Table 3 caption) Characteristics of the studies of reduced β-catenin expression in the membrane.
The quality of the eligible studies was evaluated using the Newcastle-Ottawa scale (NOS), which was described previously. [22,23]
Statistical analysis
STATA (version 12.0, Stata Corp., College Station, TX) was utilized for this meta-analysis.

(Table 4 caption) Characteristics of the studies of nuclear β-catenin expression in the invasive front of cancer.

Odds ratios (ORs) with 95% confidence intervals (CIs) were used to evaluate the association between the different subcellular localizations of β-catenin expression and the prognosis or clinicopathological parameters. We pooled the statistical variables directly if they were reported in the articles. Otherwise, Kaplan-Meier curves were read with Engauge Digitizer to obtain the necessary data. The chi-square-based Q statistical test was used to evaluate the heterogeneity among the outcomes of the enrolled studies. [24] In addition, the I² statistic represented the proportion of total variation caused by heterogeneity, and I² > 50% indicated significant heterogeneity. According to the results of the Q statistical test, P > 0.10 indicated low heterogeneity among the results, and a fixed-effects model was selected; the random-effects model was used for studies with P < 0.05. Egger's test and Begg's test were used to examine the potential risk of publication bias. Sensitivity analysis was performed by sequential omission of individual studies to evaluate the stability of the results.
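To make the pooling procedure concrete, the minimal sketch below combines study-level hazard ratios on the log scale with inverse-variance weights, computes Cochran's Q and I², and switches between fixed- and random-effects (DerSimonian-Laird) estimates. The actual analysis was carried out in STATA, so this is only an illustration of the standard formulas; the hazard ratios and confidence intervals are made-up example values, not data from the included studies.

```python
import numpy as np
from scipy.stats import chi2

def pool_hazard_ratios(hrs, ci_lows, ci_highs):
    """Inverse-variance pooling of hazard ratios on the log scale.

    Standard errors are recovered from the 95% CIs; Cochran's Q and I^2
    quantify heterogeneity, and a DerSimonian-Laird random-effects model
    is used when the Q-test P-value is below 0.10 (fixed effects otherwise).
    """
    y = np.log(hrs)
    se = (np.log(ci_highs) - np.log(ci_lows)) / (2 * 1.96)
    w = 1.0 / se**2

    fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - fixed) ** 2)            # Cochran's Q
    df = len(y) - 1
    p_het = chi2.sf(q, df)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    # DerSimonian-Laird between-study variance for the random-effects model.
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)
    use_random = p_het < 0.10
    weights = w_re if use_random else w
    pooled = np.sum(weights * y) / np.sum(weights)
    se_pooled = np.sqrt(1.0 / np.sum(weights))
    ci = (np.exp(pooled - 1.96 * se_pooled), np.exp(pooled + 1.96 * se_pooled))
    return np.exp(pooled), ci, q, i2, p_het, use_random

# Made-up study-level hazard ratios and 95% CIs (illustrative only).
hr = np.array([1.8, 1.2, 2.1, 0.9, 1.5])
lo = np.array([1.1, 0.8, 1.3, 0.6, 1.0])
hi = np.array([2.9, 1.8, 3.4, 1.4, 2.3])
pooled, ci, q, i2, p_het, random_used = pool_hazard_ratios(hr, lo, hi)
print(f"pooled HR = {pooled:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}); "
      f"Q = {q:.2f}, I2 = {i2:.1f}%, P_het = {p_het:.3f}, "
      f"random effects: {random_used}")
```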
Quality of eligible studies
The Newcastle-Ottawa Scale (NOS) was used to assess the methodological quality of the studies. As described previously, [23] a score of 9 represented the highest quality, and a score of 5 or more was considered high quality. After quality assessment, the twenty-seven studies included in our meta-analysis were all of high quality, with scores of 5 or more.
Prognostic value of b-catenin expression in colorectal cancer
Twelve enrolled studies provided HRs and 95% CIs, directly or indirectly, for the correlation between nuclear β-catenin overexpression and 5-year OS. The pooled HR of β-catenin overexpression in the nucleus for OS was 1.50 (95% CI: 1.08-2.10; Z = 2.40; P = 0.016) (Fig. 2A), but heterogeneity did exist (I² = 79.7%, P = 0.000). The association of β-catenin overexpression in the nucleus with DFS was analyzed based on 5 studies; the pooled HR was 1.17 (95% CI: 0.77-1.77; Z = 0.73; P = 0.463) (Fig. 2B). In addition, 3 studies assessed the association of nuclear β-catenin overexpression in the invasive front of the tumor with OS; the pooled HR was 1.67 (95% CI: 0.73-3.82; Z = 1.22; P = 0.221) (Fig. 2C). Then, we evaluated the correlation between β-catenin overexpression in the cytoplasm and 5-year OS based on 5 studies, and the pooled HR was 1.00 (95% CI: 0.85-1.18; Z = 0.01; P = 0.991) (Fig. 3A). The pooled HR for the association of reduced membranous β-catenin expression with OS was 1.33 (95% CI: 1.15-1.54; Z = 3.81; P = 0.0001) based on 9 studies (Fig. 3B). The above results suggested that β-catenin overexpression in the nucleus was associated with lower OS, but not with DFS. In addition, reduced β-catenin expression in the membrane was correlated with a worse prognosis of CRC. However, as indicated by subgroup analysis (Fig. 4), a significant relationship between nuclear β-catenin overexpression and 5-year OS was shown only for an antibody sourced from the Transduction Laboratory (HR = 1.61; 95% CI: 1.04-2.47; I² = 83.4%, P = 0.000). Other factors, including study location and number of patients, altered the significant prognostic impact of nuclear β-catenin expression. We could not identify the source of heterogeneity in this study by subgroup analysis; however, we inferred that the heterogeneity may have been caused by the different clinical features of patients or other factors we could not assess.
Publication bias
We assessed publication bias by constructing funnel plots (S1A Fig, S1B Fig, S2 Fig, http://links.lww.com/MD/B436), as more than 10 studies were included in the meta-analysis. Egger's test indicated that publication bias existed when we evaluated the impact of nuclear β-catenin on 5-year OS, although Begg's test showed no significant publication bias (P = 0.064). However, Egger's test has inadequate power when the number of included studies is fewer than 20. [52] We performed a sensitivity analysis and demonstrated that the pooled HRs were not significantly influenced by omitting any single study (S1C Fig, http://links.lww.com/MD/B436).
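For readers unfamiliar with the funnel-plot asymmetry test mentioned above, the following sketch shows one common way to run Egger's regression test: the standardized effect (log HR divided by its standard error) is regressed on precision (1/SE), and a nonzero intercept suggests asymmetry. The inputs reuse the made-up hazard ratios from the pooling sketch and are not the study data.

```python
import numpy as np
import statsmodels.api as sm

def eggers_test(log_effects, ses):
    """Egger's regression test for funnel-plot asymmetry.

    Regress the standardized effect (y/SE) on precision (1/SE); the
    intercept and its two-sided P-value are returned.
    """
    std_effect = log_effects / ses
    precision = 1.0 / ses
    X = sm.add_constant(precision)          # intercept + slope
    fit = sm.OLS(std_effect, X).fit()
    return fit.params[0], fit.pvalues[0]

# Made-up example values (log HRs and their standard errors).
log_hr = np.log(np.array([1.8, 1.2, 2.1, 0.9, 1.5]))
se = np.array([0.25, 0.20, 0.24, 0.22, 0.21])
intercept, p = eggers_test(log_hr, se)
print(f"Egger intercept = {intercept:.2f}, P = {p:.3f}")
```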
Discussion
Colorectal carcinogenesis is a complicated multistage process involving multiple genetic alterations. The aberrant Wnt/β-catenin pathway has been proven to be involved in the progression of CRC. Approximately 60% to 80% of CRC development is due to the aberrant activation of the Wnt/β-catenin signaling pathway. [53]

(Table 5 caption) Meta-analysis of β-catenin in the nucleus, cytoplasm and membrane.

Wnt signaling plays a central role in both early colorectal tumorigenesis and later progression. [54] Activated Wnt/β-catenin signaling promotes EMT, migration, and invasion of CRC cells by targeting the miR-150, BOP1, CKS2, and NFL3 genes, which induces a mesenchymal-like morphological change and experimental metastasis of CRC cells. [55,56] High Wnt/β-catenin signaling is also critical in the maintenance of the stem cell niche, which leads to tumor progression and metastasis. [10] β-catenin accumulation in the nucleus or cytoplasm was identified as a marker of poor prognosis, and nuclear β-catenin was implicated as a potential target for cancer therapy. [57,58] However, there were also contradictory results suggesting that β-catenin expression in the nucleus was associated with noninvasive tumors and a more favorable outcome. [59,60] Therefore, the prognostic significance of β-catenin expression in patients with CRC remains controversial, and a systematic analysis is required to reach a reliable conclusion. In this meta-analysis, we explored the prognostic significance of the different subcellular localizations of β-catenin expression for patients with CRC. The results indicated that nuclear expression of β-catenin or decreased expression of β-catenin in the membrane was associated with lower OS. However, no significant association was observed between β-catenin overexpression in the cytoplasm and 5-year OS, which is consistent with previous results. [22] Unexpectedly, our results indicated that β-catenin overexpression in the nucleus and cytoplasm was negatively associated with differentiation grade; further study is needed to verify this conclusion.
Genetic mutations, such as mutations of APC or CTNNB1, are the main cause of the accumulation of nuclear β-catenin. [61] Previous studies reported that the mutation rate of the CTNNB1 gene in CRC ranged from 10% to 50%. [62][63][64] Therefore, the β-catenin in the nucleus could be either mutant type or wild type; however, these 2 different types of nuclear β-catenin are functionally distinct. It is necessary to separate the wild-type and mutant-type β-catenin proteins by expression staining and analyze their prognostic value. Here, we failed to distinguish whether the nuclear β-catenin was mutant type or wild type due to a lack of relevant information. We inferred that the variable outcomes of the relationship between nuclear β-catenin expression and prognosis in CRC may be caused by the combined analysis of mutant-type and wild-type β-catenin. Furthermore, such analysis may also contribute to inter-study heterogeneity.
In addition, we cannot ignore the limitations of this meta-analysis. First, heterogeneity that could affect the results of the meta-analysis does exist. The subjective evaluation of β-catenin expression, the different sources and dilutions of primary antibodies, and the different characteristics of patients in each study contributed to significant heterogeneity. However, we failed to identify the source of heterogeneity by stratified analysis. To accommodate variation across studies, the random-effects model was applied accordingly. Second, we did not include non-English studies, which might introduce potential language bias. In addition, publication bias may exist, as studies with positive results or significant outcomes are more likely to be published. Another potential source of bias might have come from the less reliable data that were extrapolated from survival curves. As inevitable limitations exist in this meta-analysis, additional large and well-designed prospective studies should be conducted.
Our study is the first to meta-analyze the association between reduced membranous expression of β-catenin and prognosis in CRC patients. The pooled data suggested that reduced expression of β-catenin in the membrane is significantly associated with poor survival in patients with CRC. In addition, nuclear β-catenin overexpression, rather than cytoplasmic β-catenin overexpression, could serve as a biomarker of poor prognosis in CRC. New approaches to therapeutically target the Wnt/β-catenin pathway need to be explored, and it is necessary to distinguish the different subcellular localizations of β-catenin to develop different therapeutic strategies.
|
2018-04-03T03:09:24.422Z
|
2016-12-01T00:00:00.000
|
{
"year": 2016,
"sha1": "7af68d95defcd9b1a28367b0cd28a721b789a5b6",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1097/md.0000000000005546",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7af68d95defcd9b1a28367b0cd28a721b789a5b6",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
16756735
|
pes2o/s2orc
|
v3-fos-license
|
MicroRNA profiling of CD3+CD56+ cytokine-induced killer cells
Studies have shown that IL-2 and IL-15 play contrasting roles during CIK cell preparation. By employing microarrays, we analyzed the miRNA expression profiles of PBMC, CIK IL-2 and CIK IL-15. Advanced bioinformatic analyses were performed to explore the key miRNAs which may regulate the cell proliferation and anti-tumor activity of CIK cells. We identified 261 differentially expressed miRNAs (DEMs) between PBMC and CIK IL-2, and 249 DEMs between PBMC and CIK IL-15. MiR-143-3p/miR-145-5p was a miRNA cluster which may positively regulate cell proliferation. In contrast, the miR-340-5p/miR-340-3p cluster may negatively regulate cell proliferation via induction of apoptosis, which may cause the decreased cell proliferation capacity of CIK IL-2. MiRNA-target interaction analysis indicated that 10 co-downregulated miRNAs may synergistically turn on the expression of a pool of tumor cytotoxic genes in CIK cells. The DEMs between CIK IL-2 and CIK IL-15 may contribute to the enhanced tumor cytotoxic capacity of CIK IL-2. Importantly, we found that repressed miR-193a-5p may regulate the expression of the inhibitory receptor KLRD1. The results of the validation assay showed that KLRD1 was upregulated in CIK cells. Our findings have provided new insights into the mechanisms of CIK cell production and tumor cytotoxic function, and shed light on their safety for clinical trials.
Amazing scientific advances have been translated into better ways to prevent, detect, diagnose and treat cancer during the past five years 1 . Nowadays, people are surviving longer after their cancer has been diagnosed due to this remarkable progress. Numerous therapeutics against cancer have shown large potential in clinical trials 1 . Notably, one group of strategies against cancer which is likely to revolutionize the treatment of certain cancers in the very near future is immunotherapy 1 . These therapeutics educate the patient's immune system to attack their cancer cells, yielding both strong and durable responses. Among these strategies, adoptive immunotherapy has shown great promise and encouraging efficacy in tumor treatment with minimal adverse events 2,3 . Cytokine-induced killer (CIK) cell-based immunotherapy is widely performed in clinical trials in China as an alternative to conventional therapies 2 . CIK cells, a subset of T lymphocytes with a natural killer T cell phenotype, have been proven to be effective against most tumors in vitro and in vivo 4 . CIK cells are generated from peripheral blood lymphocytes through time-sequential stimulation with IFN-γ, a monoclonal antibody against CD3 (OKT3) and IL-2. During this period of CIK cell preparation, OKT3 provides mitogenic signals to T lymphocytes 5 . Priming with IFN-γ activates the monocytes by providing contact-dependent (CD58/LFA-3) and soluble (IL-12) crucial signals to promote the generation of autophagy and antigen cross-presentation 6 . IL-2 is essential for T cell proliferation, survival and acquisition of cytolytic capacity in the following culture. At the end of expansion, a heterogeneous population of CD3+CD56+ CIK cells presenting potent cytotoxicity against a variety of tumor cells is obtained. However, the protocol for the preparation of CIK cells can differ for the purpose of enhancing the tumor cytotoxicity and proliferation capacity of CIK cells 7 . It has been reported that the addition of IL-6 every 2-3 days during the preparation of CIK cells could inhibit the generation of Foxp3+ Treg cells and increase the proportion of CD3+CD56+ cells 8 . In our previous study, we have shown that CIK cells stimulated with a combination of IL-2 and IL-15 exhibited enhanced proliferation capacity and cytotoxicity against lung cancer 9 . Interestingly, the results indicated that CIK cells induced with the combination of IL-2 and IL-15 could upregulate the expression levels of IFN-γ and TNF-α in mouse models. In further investigation, we found that CIK IL-2 showed greater tumor cytotoxicity than CIK IL-15, and CIK IL-15 exhibited enhanced proliferation capacity compared with CIK IL-2 10 . By advanced bioinformatic analysis of RNA-seq data from CIK IL-2 and CIK IL-15, the results indicated that genes participating in the Wnt signaling pathway and focal adhesion were upregulated in CIK IL-15, and the expression levels of genes involved in cytokine-cytokine receptor interaction were increased in CIK IL-2 10 . Although the expression profiles of important genes in CIK IL-2 and CIK IL-15 have been well revealed, the regulation of these genes by IL-2 and IL-15 is still unknown.
MicroRNAs (miRNAs), a class of highly conserved, ~20-22 nt long noncoding RNAs, are essential molecules in the post-transcriptional regulation of gene expression 11 . MiRNAs regulate gene expression negatively by targeting the 3′ untranslated region (3′UTR) or coding region of the mRNA, leading to either RNA degradation or inhibition of translation 12 . MiRNAs participate in many biological processes including cell proliferation, differentiation, apoptosis and tumorigenesis 13 . More recently, it was reported that miRNAs are involved in regulatory networks in the immune system and in the regulation of the development of immune cells 14 . However, the regulatory functions of miRNAs in CIK cell expansion and acquisition of cytotoxic capacity have not been reported yet.
In order to identify the roles of miRNAs in the regulatory network of CIK cell generation, we performed miRNA microarray analysis between PBMC (peripheral blood mononuclear cells) and CD3+CD56+ CIK cells, and investigated the changes in global miRNA expression levels. Advanced systems biology strategies were employed to comprehensively investigate the molecular mechanisms of translational modulation by miRNAs during CIK cell expansion. Our findings will provide evidence to better understand the acquisition of tumor cytotoxicity and proliferation capacity by CIK cells.
Results
Dynamic miRNA profiles between PBMC and CIK cells. We prepared CIK IL-2 and CIK IL-15 from the PBMCs of three healthy volunteers under identical conditions. Sequentially, PBMCs, CIK IL-2 and CIK IL-15 were sampled and preserved in liquid nitrogen for the following miRNA microarray analysis. The phenotype of the CIK cells was determined by flow cytometry. The results showed that the average percentages of CD3+CD56+ cells were over 98% in both CIK IL-2 and CIK IL-15 (Figure S1 and Table S1). After confirmation of the proportion of CD3+CD56+ cells, the proliferation capacity and tumor cytotoxicity of both CIK IL-2 and CIK IL-15 were measured by automatic absolute cell counting and a CCK-8-based method, respectively. The results of these two assays were previously described 10 . Then, miRNAs were isolated and purified from the preserved PBMC, CIK IL-2 and CIK IL-15 samples. After quality assessment, the miRNAs were labeled and hybridized to the Agilent V19.0 miRNA array. The array contained 2,006 human miRNAs, which allowed us to perform a deep investigation of CIK cell miRNA expression. After normalization of the raw data, we screened 261 differentially expressed miRNAs (DEMs) between PBMC and CIK IL-2, and 249 DEMs between PBMC and CIK IL-15, by the following criteria: fold change (FC) > 2 or FC < 0.5, P value < 0.05, and identical flag signals across the 3 replicates (Tables S2 and S3). Of the DEMs between PBMC and CIK IL-2, 111 and 150 miRNAs were downregulated and upregulated, respectively (Figure 1A). There were 109 downregulated and 140 upregulated miRNAs between PBMC and CIK IL-15 (Figure 1A). However, no significant DEM was identified between CIK IL-2 and CIK IL-15 by the screening criteria referred to above. By further comparison of the miRNA expression patterns of CIK IL-2 and CIK IL-15 against PBMC, we found that 130 miRNAs were co-upregulated, accounting for 86.66% and 92.85% of the total upregulated miRNAs in CIK IL-2 and CIK IL-15, respectively. In addition, 103 miRNAs were co-downregulated in both CIK cell types, accounting for 92.79% and 94.49% of the total downregulated miRNAs in CIK IL-2 and CIK IL-15, respectively (Figure 1B).
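The overlap percentages quoted above follow directly from the sizes of the DEM sets; a small sketch of the bookkeeping is shown below, using the counts taken from the text. With the actual miRNA identifier lists the same numbers would come from set intersections, which are only hinted at in the closing comment.

```python
# Counts reported in the text.
up_il2, up_il15 = 150, 140          # upregulated DEMs vs. PBMC
down_il2, down_il15 = 111, 109      # downregulated DEMs vs. PBMC
co_up, co_down = 130, 103           # shared between CIK_IL-2 and CIK_IL-15

def share(common, total):
    """Percentage of a DEM list covered by the shared subset."""
    return 100.0 * common / total

print(f"co-upregulated: {share(co_up, up_il2):.2f}% of CIK_IL-2, "
      f"{share(co_up, up_il15):.2f}% of CIK_IL-15")
print(f"co-downregulated: {share(co_down, down_il2):.2f}% of CIK_IL-2, "
      f"{share(co_down, down_il15):.2f}% of CIK_IL-15")
# With the actual ID lists, e.g.: co_up = len(set(up_il2_ids) & set(up_il15_ids))
```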
Differentially expressed miRNA chromosome clustering. Interestingly, the evidence showed that CIK IL-2 and CIK IL-15 shared over 90% of their downregulated miRNAs. Furthermore, target genes may be upregulated in response to the downregulation of the corresponding miRNA. Therefore, co-downregulated miRNAs were chosen for further analysis. In order to identify their function during CIK preparation, we aligned the co-downregulated miRNAs to the chromosomes on which they are located, based on the chromosome coordinates of each miRNA (Figure 2A). We assumed that miRNAs that are close to each other may have the same biological function 15,16 . Among the co-downregulated miRNAs, we found 17 miRNA clusters in which the distance between miRNAs is no more than 5000 bp (Figure 2A, Table 1). The heatmap and hierarchical analysis of the 17 DEM clusters are shown in Figure 2B. There were 3 miRNA clusters among the co-upregulated miRNAs. No miRNA clusters were observed among the CIK IL-2- or CIK IL-15-specific DEMs.
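A plausible way to derive such clusters from genomic coordinates is to sort the miRNAs per chromosome and group consecutive ones whose start positions lie within 5000 bp of each other. The sketch below illustrates this idea with made-up coordinates; it is not the authors' pipeline, and the coordinate values are placeholders rather than the actual annotation used in the study.

```python
from itertools import groupby

def cluster_mirnas(mirnas, max_gap=5000):
    """Group miRNAs into clusters of neighbours within max_gap bp.

    mirnas: iterable of (name, chromosome, start_position) tuples.
    Returns a list of clusters (lists of names) with >= 2 members.
    """
    clusters = []
    by_chrom = lambda m: m[1]
    for _, group in groupby(sorted(mirnas, key=lambda m: (m[1], m[2])), by_chrom):
        group = list(group)
        current = [group[0]]
        for prev, cur in zip(group, group[1:]):
            if cur[2] - prev[2] <= max_gap:
                current.append(cur)
            else:
                if len(current) > 1:
                    clusters.append([m[0] for m in current])
                current = [cur]
        if len(current) > 1:
            clusters.append([m[0] for m in current])
    return clusters

# Hypothetical coordinates for illustration only.
example = [
    ("miR-143-3p", "chr5", 149_428_918),
    ("miR-145-5p", "chr5", 149_430_646),   # ~1.7 kb away -> same cluster
    ("miR-340-5p", "chr5", 180_000_000),   # far away -> not clustered
]
print(cluster_mirnas(example))   # [['miR-143-3p', 'miR-145-5p']]
```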
Predicted target gene ontology and pathway analysis. To further characterize the function of the miRNAs we identified, the target genes of each miRNA cluster were predicted based on miRTarBase Release 4.5. The target genes of each miRNA in miRTarBase were validated by reporter assay, Western blot, microarray or pSILAC. Next, we performed gene ontology analysis to screen for co-downregulated miRNA clusters whose target genes may be involved in the cell proliferation and tumor cytotoxicity of CIK cells, employing the annotation tool of the DAVID bioinformatics database. By analyzing the significant GO terms, we found that response to cytokine stimulus and positive regulation of cell proliferation were the most significant GO terms of the target genes regulated by miR-29b-3p/miR-29c-3p (C2) and miR-143-3p/miR-145-5p (C3), respectively (Figure 2C). Importantly, induction of apoptosis was significant among the target genes of miR-340-5p/miR-340-3p (C4) (Figure 2C). We focused on cell proliferation and induction of apoptosis, which are two key biological functions of CIK cells. Functional classification analysis in 2-D view was performed in order to visualize the associations between the target genes of the C3 and C4 clusters and GO terms including cell proliferation and induction of apoptosis (Figure 2D, Figure 2E). With respect to the miR-143-3p/miR-145-5p cluster, HRAS, KRAS and NRAS, which are the most common members of the Ras subfamily, were found to be strongly correlated with the cell proliferation of CIK cells. By 2-D view analysis, two proto-oncogenes, Bcl2 and c-Myc, which are involved in promoting cell proliferation, were identified as being regulated by the miR-143-3p/miR-145-5p cluster (Figure 2D). Among the genes regulated by miR-340-5p/miR-340-3p, TNFRSF10B was shown to participate in cell death (Figure 2E). To further explore the influence of the co-downregulated miRNA clusters on the function of CIK cells, we performed pathway analysis based on the KEGG and Biocarta databases using Fisher's exact test. The results indicated that the target genes of the miR-29b-3p/miR-29c-3p, miR-143-3p/miR-145-5p and miR-23b-3p/miR-27b-3p clusters were all involved in the IL-2 receptor beta chain in T cell activation and in focal adhesion. These two pathways were significant among the target genes of each miRNA cluster (Figure 3A). By 2-D view analysis, the genes regulated by the 3 miRNA clusters were mainly involved in cell proliferation signal transduction triggered by IL-2, including kinases (JAK1, RAF1, CRKL and AKT1), signaling transducers (SOS1, HRAS, SOCS1, SOCS3 and IRS1), DNA-binding proteins (FOS and E2F1) and apoptosis suppressors (BCL2 and BCL2L1) (Figure 3B). Among the genes involved in focal adhesion, we found that components of the extracellular matrix and their receptors, including collagen, laminin and integrin, were regulated by miR-29b-3p/miR-29c-3p (Figure 3C). Additionally, the target genes regulated by the miR-143-3p/miR-145-5p and miR-23b-3p/miR-27b-3p clusters were responsible for signal transduction that may be mediated by integrin.
MiRNA-target interaction (MTI) network in the tumor cytotoxicity pathway. Alternatively, we employed the miRTar database to identify miRNA-target interactions in the cytotoxicity pathway. We input the co-downregulated miRNAs and selected natural killer cell mediated cytotoxicity and cytokine-cytokine receptor interaction, derived from the KEGG database, as our target pathways. We built the miRNA-target interaction network based on the anti-tumor factors expressed on cytotoxic lymphocytes (Figure 4A). The network analysis indicated that 10 distinct co-downregulated miRNAs were found to target cytotoxic genes, and the hierarchical analysis of their expression profiles is shown in Figure 2B. The results indicated that let-7c was a key miRNA that regulated 3 tumor toxic molecules, including Fas ligand (FasL), TNFSF10 (TRAIL) and OSM (Oncostatin M). Among the cytotoxic genes, PRF1, GZMB, TNFSF10 and FasL were shown to be targeted only by co-downregulated miRNAs. Other anti-tumor genes, including OSM, TNF-α and CD40LG, were regulated by both downregulated and upregulated miRNAs. In addition, we performed miRNA-pathway interaction analysis to establish the linkage between these 10 miRNAs and the gene pool participating in the natural killer cell mediated cytotoxicity pathway. By building the miRNA-pathway network, the results showed that 8 co-downregulated miRNAs were involved in the regulation of the natural killer cell mediated cytotoxicity pathway, and the correlations between them are shown in Figure 4B. The gene group regulated by these 8 miRNAs included receptors, signal transduction components and tumor cytotoxicity factors (Figure 4C). Among the receptors expressed on NK cells, inhibitory receptors (KIR2DL4; CD94) and activating receptors (ITGB2; KIR2DS5; NKG2C/E; NKp46) were identified, which may work as sensors to discriminate between normal cells and tumor cells.
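To illustrate how such a miRNA-target network can be assembled and queried, the sketch below builds a small bipartite graph with networkx using only interactions explicitly named in the text (let-7c targeting FasL, TNFSF10 and OSM, and miR-199a-5p/miR-199b-5p targeting GZMB, as described in the Discussion); the full edge list from miRTar is not reproduced, so this is illustrative rather than the actual network.

```python
import networkx as nx

# Build a bipartite miRNA-target graph from interactions named in the text.
# (Only a few edges are shown; the complete miRTar edge list is not here.)
edges = [
    ("let-7c", "FASLG"),        # Fas ligand
    ("let-7c", "TNFSF10"),      # TRAIL
    ("let-7c", "OSM"),          # Oncostatin M
    ("miR-199a-5p", "GZMB"),
    ("miR-199b-5p", "GZMB"),
]

G = nx.Graph()
for mirna, gene in edges:
    G.add_node(mirna, kind="miRNA")
    G.add_node(gene, kind="gene")
    G.add_edge(mirna, gene)

# Rank miRNAs by how many cytotoxic genes they target (hub detection).
mirna_degree = {n: d for n, d in G.degree() if G.nodes[n]["kind"] == "miRNA"}
for name, degree in sorted(mirna_degree.items(), key=lambda x: -x[1]):
    print(f"{name}: targets {degree} cytotoxic gene(s)")
# The graph could be exported for Cytoscape, e.g.
# nx.write_graphml(G, "mti_network.graphml")
```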
Validation of representative miRNAs and mRNAs. Next, we examined the expression profiles of the co-downregulated miRNAs referred to above by qRT-PCR across these 3 types of cells. Consistent with the miRNA array data, the results showed that all the miRNAs referred to above were significantly downregulated in both CIK IL-2 and CIK IL-15 compared to PBMC (Figure 5A). Interestingly, among these 16 miRNAs, 10 miRNAs were differentially expressed between CIK IL-2 and CIK IL-15, which had not been identified by the microarray analysis (Figure 5A). Moreover, we validated the expression levels of cytotoxic genes in CIK cells in order to demonstrate the regulatory relationship between the miRNAs of interest and the targets we suggested above. Except for OSM, the expression of all anti-tumor genes, including TNF-α, PRF1, GZMB, FasL, TNFSF10 and CD40LG, was significantly upregulated in CIK cells compared to PBMC, which was negatively correlated with the expression profiles of their potential regulators (Figure 5B, Figure 6A and Figure 6B). Among these genes, the expression of GZMB and TNFSF10 was significantly higher in CIK IL-2 than in CIK IL-15 at the protein level (Figure 6A and Figure 6B). To further explore the cell proliferation mechanism of CIK cells, the expression of important genes regulated by the miR-143-3p/miR-145-5p cluster was profiled. The results indicated that the expression of c-Myc and Bcl-2 was significantly increased in CIK IL-2 and CIK IL-15 (Figure 5C and Figure 6B). Although NRAS mRNA was found to be significantly upregulated in CIK cells, no significant difference was observed at the protein level (Figure 5C and Figure 6B). Importantly, we examined the expression of inhibitory and activating receptors to evaluate the recognition specificity of CIK cells. NKG2D, which has been reported to be upregulated in CIK cells, was taken as a positive control in the validation assay. By validation at both the mRNA and protein levels, the results showed that only KLRD1 (CD94) was significantly upregulated in CIK cells (Figure 5D and Figure 6A).
Discussion
Nowadays, immunotherapies, which have made remarkable progress, are revolutionizing the treatment of cancers. However, immunotherapy development is challenging work that needs a large number of studies to prove its effectiveness and safety. CIK is one of the adoptive immunotherapy approaches that have exhibited potent cytolytic activities against tumor cells with minimal adverse effects. The original work on CIK was reported by Schmidt-Wolf from Stanford 4 . Clinical trials of CIK-based immunotherapies are widely performed in China; however, few studies have focused on the molecular mechanism of their tumor toxic function. Herein, we performed microarray analysis to identify the miRNA expression profiles of CD3+CD56+ CIK cells for the first time, and to elucidate their proliferation and cytolytic mechanisms at the post-transcriptional regulation level. The efficiency of CIK-based immunotherapies is determined by the cell proliferation and cytotoxicity capacities against tumors 5 . To improve the effectiveness of CIK cells, cytokines including IL-1, IL-17, IL-12 and IL-15 have been used instead of IL-2 or in combination with IL-2 7 . IL-2 and IL-15 have similar biological functions in vitro, and they share receptor signaling components (IL-2/15Rβγc) with each other 17,18 . By pathway analysis, we found that the co-downregulated miRNA clusters miR-29b-3p/miR-29c-3p, miR-143-3p/miR-145-5p and miR-23b-3p/miR-27b-3p participated in the IL-2 receptor beta chain in T cell activation (Figure 3A and 3B). However, contrasting roles of IL-2 and IL-15 have been observed in lymphocyte-based immunotherapy 19 . It was reported that CIK cells generated with a combination of IL-2 and IL-15 showed greater cytotoxicity against lung cancer than CIK cells prepared with IL-2 alone 9 . Moreover, our previous comparative analysis indicated that CIK IL-15 showed enhanced cell proliferation capacity compared with CIK IL-2, whereas CIK IL-2 showed greater cytotoxic activity against tumors than CIK IL-15 in vivo 10 . Interestingly, advanced bioinformatic analysis and validation assays provided evidence to illuminate the potential mechanisms of the differential cell proliferation capacity and anti-tumor activity of CIK IL-2 and CIK IL-15.
We identified 103 co-downregulated and 130 co-upregulated miRNAs through microarray analysis. By chromosome clustering and miRNA-target interaction analysis, we focused on 16 miRNAs of interest 21 . Surprisingly, we found that miR-29b, miR-27b, let-7c and miR-28, which we included in this study, were listed within the top 60 miRNAs, which accounted for over 97% of the miRNA sequences in NK cells.
By chromosome clustering and GO analysis, we found that miR-143-3p/miR-145-5p was a miRNA cluster that may positively regulate the cell proliferation of CIK cells. Importantly, proto-oncogenes including the Ras family, Bcl-2 and c-Myc were predicted targets of this miRNA cluster. However, we found that only Bcl-2 and c-Myc were significantly upregulated during CIK generation (Figure 5C and Figure 6B). These two anti-apoptotic genes may be involved in cell proliferation during CIK preparation 22 . On the other hand, the target genes of the miRNA cluster containing miR-340-5p and miR-340-3p were identified as potentially participating in the induction of apoptosis (Figure 2C and 2E). The results of qRT-PCR showed that the expression levels of miR-340-5p and miR-340-3p were significantly higher in CIK IL-15 than in CIK IL-2, which indicated that a pool of apoptosis-promoting genes may be upregulated in CIK IL-2 compared to CIK IL-15. Furthermore, IL-15 is an anti-apoptotic cytokine which inhibits IL-2-mediated activation-induced cell death (AICD) of T cells and stimulates the survival of memory T cells, whereas IL-2 induces AICD and eliminates self-reactive T cells to maintain peripheral tolerance through upregulation of FasL and TNFSF10 19,23 . Interestingly, we found that the expression of FasL and TNFSF10 was upregulated in both CIK IL-2 and CIK IL-15. However, the expression of TNFSF10 was higher in CIK IL-2 than in CIK IL-15 (Figure 6B). Collectively, this evidence may explain our observation that CIK IL-15 showed greater proliferation potential than CIK IL-2, as previously described.
We built the regulatory network between the tumor toxic genes and the differentially expressed miRNAs which govern their expression. It showed that GZMB, which is a key anti-tumor molecule in vivo, was regulated mainly by miR-199a-5p and miR-199b-5p. Data from qRT-PCR and Western blot suggested that the expression level of GZMB was significantly increased in CIK cells. Importantly, the results implicated that the expression of GZMB was upregulated in CIK IL-2 compared to CIK IL-15, which negatively correlated with the expression profiles of miR-199a-5p and miR-199b-5p and suggested the enhanced tumor cytotoxicity of CIK IL-2. PRF1, which is known as a pore-forming protein and induces apoptosis in synergy with GZMB, was upregulated in both CIK IL-2 and CIK IL-15 24,25 . In addition to TNF-α and TNFSF10, which have been reported as anti-tumor molecules of CIK, we identified CD40LG as a tumor toxic effector in CIK cells by analyzing the interactions between the co-downregulated miRNAs and the cytokine-cytokine receptor interaction pathway [26][27][28] . The expression of CD40LG was upregulated in CIK cells. Compared to PBMC, OSM was significantly downregulated in both CIK IL-2 and CIK IL-15. Studies have reported that OSM, which was initially found to inhibit the proliferation of several tumors, now appears to promote the growth of malignant cells [29][30][31] . Therefore, the evidence obtained from the miRNA-target network and validation assays suggested the potential cytotoxicity mechanism of CIK cells. The higher expression of TNFSF10 and GZMB in CIK IL-2 may account for the greater tumor cytotoxic efficiency of CIK IL-2 compared with CIK IL-15.
Additionally, the function of natural killer cells is determined by the balance between signals triggered by activating and inhibitory receptors. CD94-NKG2A is an important inhibitory receptor system in most species, and it transduces inhibitory signals through SHP-1 and SHP-2 [32][33][34] . By co-downregulated miRNA-receptor interaction analysis, we found that KLRD1 (CD94) was significantly upregulated in CIK cells. KLRD1 is a peptide-selective receptor on NK cells, which binds the HLA-E-peptide complex and provides an inhibitory signal in the absence of its signaling partner NKG2A 35 . These results suggest that the increased expression of CD94 in CIK cells may protect HLA-E-positive cells from lysis 36 .
In conclusion, we performed microarray analysis to investigate the dynamic miRNA expression profiles during CIK cell preparation for the first time. Advanced bioinformatic analysis indicated that the miR-143-3p/miR-145-5p cluster may positively regulate cell proliferation through the upregulation of a group of proto-oncogenes. In contrast, the miR-340-5p/miR-340-3p cluster may negatively regulate cell proliferation via the induction of apoptosis, which may cause the decreased cell proliferation potential of CIK IL-2. MiRNA-target interaction analysis revealed that 10 co-downregulated miRNAs may synergistically promote the expression of a pool of tumor cytotoxic genes in CIK cells. Importantly, the upregulation of the inhibitory receptor KLRD1 in CIK cells implicated the possibility that the activation of CIK cells through activating receptors (NKG2D) could be negatively regulated by KLRD1. This evidence suggests that the presence of both activating and inhibitory receptors on CIK cells may contribute to their safety for clinical trials.
Methods
Antibodies and Cytokines. The antibodies for the CIK cell phenotype assay were purchased from BD Biosciences. The antibodies used for detecting cell surface receptors and cytotoxic factors were obtained from BioLegend, Inc. and R&D Systems. For Western blot, antibodies were purchased from EMD Millipore and Santa Cruz Biotechnology, Inc. OKT3 and the cytokines for CIK cell preparation (IFN-γ, IL-2 and IL-15) were from Miltenyi Biotec. Methods involving human peripheral blood in this study were reviewed and approved by the Bioethics Committee of Yan'an Affiliated Hospital of Kunming Medical University. The methods were carried out in accordance with the approved guidelines. Written informed consent was given by all volunteers who participated in this study.
Generation of CIK cells. The Bioethics Committee of Yan'an Affiliated Hospital of Kunming Medical University approved the investigation protocols to draw blood from healthy volunteers, after written informed consent, for the purposes of CIK cell preparation against tumors and microarray analysis. The standard protocol of CIK generation was described previously 9 . Briefly, PBMCs were isolated by standard Ficoll separation and cultured in RPMI 1640 growth medium at a density of 5 × 10^6 cells/mL. The RPMI 1640 growth medium for CIK cells contained 10% FBS, 2% L-glutamine and antibiotics. The generation of CIK cells was primed by adding 1000 U/mL IFN-γ on day 0, and 100 ng/mL anti-CD3 antibody plus 500 U/mL IL-2 or 10 ng/mL IL-15 within the following 15 days of culture. The CIK cells were propagated every 5 days with RPMI 1640 growth medium supplemented with anti-CD3 antibody and IL-2 or IL-15, respectively. The CIK cells were expanded for 15 days.
RNA isolation and labeling. Total RNA was extracted and purified using mirVana TM miRNA Isolation Kit (AM1560, Ambion, Austin, TX, US) following the manufacturer's instructions. The quality was assessed by an Agilent Bioanalyzer 2100 (Agilent technologies, Santa Clara, CA, US). MiRNA in total RNA was labeled by miRNA Complete Labeling and Hyb Kit (5190-0456, Agilent technologies, Santa Clara, CA, US) following the manufacturer's instructions.
Microarray hybridization. Labeled RNA samples were further detected on the miRNA array using Agilent's human miRNA microarray, version 19.0. Each slide was hybridized with 100 ng Cy3-labeled RNA using the miRNA Complete Labeling and Hyb Kit in a hybridization oven (G2545A, Agilent Technologies, Santa Clara, CA, US) at 55°C and 20 rpm for 20 hours, according to the manufacturer's instructions. After hybridization, slides were washed in staining dishes (121, Thermo Shandon, Waltham, MA, US) with the Gene Expression Wash Buffer Kit (5188-5327, Agilent Technologies, Santa Clara, CA, US).
Data acquisition and identification of differentially expressed miRNAs. Slides were scanned with an Agilent Microarray Scanner (G2565BA, Agilent Technologies, Santa Clara, CA, US) and Feature Extraction software 10.7 (Agilent Technologies, Santa Clara, CA, US) with default settings. Raw data were normalized by the quantile algorithm in GeneSpring software 11.0 (Agilent Technologies, Santa Clara, CA, US). Differentially expressed miRNAs were identified using the unpaired Student's t test with a P-value cutoff of 0.05 and a fold change of more than 2.0 or less than 0.5.
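The filtering rule described here (unpaired t test, P < 0.05, fold change > 2 or < 0.5) can be expressed in a few lines. The sketch below applies it to a made-up expression matrix of three PBMC and three CIK replicates; it is only an illustration of the rule, not the actual Feature Extraction/GeneSpring pipeline used in the study.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Hypothetical normalized expression matrix: rows = miRNAs,
# columns = 3 PBMC replicates followed by 3 CIK replicates.
mirna_ids = [f"miR-{i}" for i in range(200)]
expr = rng.lognormal(mean=5.0, sigma=0.5, size=(200, 6))
pbmc, cik = expr[:, :3], expr[:, 3:]

fold_change = cik.mean(axis=1) / pbmc.mean(axis=1)
_, p_values = ttest_ind(cik, pbmc, axis=1)

is_dem = (p_values < 0.05) & ((fold_change > 2.0) | (fold_change < 0.5))
for name, fc, p, dem in zip(mirna_ids, fold_change, p_values, is_dem):
    if dem:
        direction = "up" if fc > 2.0 else "down"
        print(f"{name}: FC = {fc:.2f}, P = {p:.3f} ({direction}regulated)")
print(f"{is_dem.sum()} differentially expressed miRNAs found")
```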
Target gene prediction, gene ontology and pathway analysis. MicroRNA target gene prediction for the gene ontology analysis was performed with miRTarBase Release 4.5, a public platform providing known, experimentally validated miRNA targets 37 . GO analysis was applied to analyze the main functions of the targets of the differentially expressed miRNAs according to the Gene Ontology, which is the key functional classification of NCBI 38,39 . GO analysis of the target genes was performed by employing the DAVID gene annotation tool 40,41 . Statistical analysis of GO terms was done by Fisher's exact test and the χ² test, and the false discovery rate (FDR) was calculated to correct the P-value; the smaller the FDR, the smaller the error in judging the P-value. Significant GO terms were defined as P value < 0.05 and FDR < 0.05. Likewise, pathway analysis was used to find the significant pathways of the target genes of the differentially expressed miRNAs according to KEGG and Biocarta 42 . Again, we used Fisher's exact test and the χ² test to select the significant pathways, and the threshold of significance was defined by the P-value and FDR. Significant pathways were identified by P value < 0.05 and FDR < 0.05. We visualized the associations between target genes and miRNAs/pathways using the functional classification 2-D view analysis module of the DAVID annotation tool. MiRNA-target interactions were analyzed with the miRTar web server for human (http://mirtar.mbc.nctu.edu.tw/human/). We picked the miRNA-target interactions in the biological pathways of interest and used Cytoscape for graphical representations.
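The enrichment statistics described here boil down to a 2x2 contingency test per GO term or pathway, followed by multiple-testing correction. The sketch below shows one plausible implementation with Fisher's exact test and Benjamini-Hochberg FDR; it uses made-up gene counts rather than the DAVID output, and the actual DAVID/KEGG computation may differ in details.

```python
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

def enrich(terms, n_targets, n_background):
    """Fisher's exact enrichment for each term.

    terms: dict term -> (hits_in_targets, genes_in_term).
    n_targets: number of predicted target genes tested.
    n_background: total number of annotated genes.
    """
    names, p_values, odds = [], [], []
    for term, (k, term_size) in terms.items():
        table = [[k, n_targets - k],
                 [term_size - k, n_background - n_targets - (term_size - k)]]
        o, p = fisher_exact(table, alternative="greater")
        names.append(term); p_values.append(p); odds.append(o)
    # Benjamini-Hochberg FDR across all tested terms.
    _, fdr, _, _ = multipletests(p_values, method="fdr_bh")
    return list(zip(names, odds, p_values, fdr))

# Hypothetical counts: (target genes annotated to term, term size overall).
go_terms = {
    "positive regulation of cell proliferation": (12, 300),
    "induction of apoptosis": (8, 250),
    "response to cytokine stimulus": (3, 400),
}
for name, o, p, q in enrich(go_terms, n_targets=80, n_background=20000):
    flag = "significant" if (p < 0.05 and q < 0.05) else "not significant"
    print(f"{name}: OR = {o:.1f}, P = {p:.2e}, FDR = {q:.2e} ({flag})")
```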
Quantitative reverse transcription PCR. qRT-PCR was performed on the CFX96 Touch system (BIORAD, USA). First-strand cDNA was synthesized from adjusted concentrations of RNA, and the corresponding genes were amplified using EVA Green Supermix (BIORAD, USA). All primers used for qRT-PCR were obtained from GeneCopoeia (USA).
Flow cytometry. The cells were collected by centrifugation at 2000 rpm. The cell pellets were suspended in blocking buffer. After washing with blocking buffer, the cells were stained with the corresponding mAbs for 30 min at room temperature. The monoclonal antibodies (mAbs) used were conjugated with either fluorescein isothiocyanate (FITC), phycoerythrin (PE) or phycoerythrin-cyanin 5 (PerCP). The cell surface markers FasL, NKG2D and TNFSF10 were labeled with PE-conjugated mAbs. CD40LG, CD94, KIR2DS5 and KIR2DL4 were stained with FITC-conjugated mAbs. 2B4 and NKG2C were stained with PerCP-labeled mAbs against the corresponding markers. After staining, the cells were washed twice before FACS analysis.
Western Blot. PBMCs, CIK IL-2 and CIK IL-15 were treated with cell lysis buffer, and the concentration of the extracted total proteins was measured by a Lowry-based method. The samples were analyzed on a 12% SDS-PAGE gel loaded with equal amounts of protein. The proteins were electrotransferred to a PVDF membrane at 40 V for 100 min. Next, the membrane was incubated with 5% skimmed milk in PBST for blocking overnight. The primary antibodies against TNF-α, PRF1, Bcl-2, GZMB, c-Myc, N-RAS and β-actin were added and incubated at room temperature for 4 hours. The HRP-conjugated secondary antibodies were added after washing three times with PBST. After incubation, the membranes were washed thoroughly with PBST four times, and the bands were visualized with an enhanced chemiluminescence kit (Millipore, USA).
Preschool children in Danish out-of-hours primary care: a one-year descriptive study of face-to-face consultations
Background The demand for out-of-hours (OOH) primary care has increased during the last decades, with a considerable number of contacts for young children. This study aims to describe the reasons for encounter (RFE), the most common diagnoses, the provided care, and the parental satisfaction with the general practitioner (GP)-led OOH service in a Danish population of children (0–5 years). Methods We conducted a one-year cross-sectional study based on data for 2363 randomly selected contacts concerning children from a survey on OOH primary care including 21,457 patients in Denmark. For each contact, the GPs completed an electronic pop-up questionnaire in the patient’s medical record. Questionnaire items focussed on RFE, health problem severity, diagnosis, provided care, and satisfaction. The parents subsequently received a postal questionnaire. Results The most common RFE was non-specific complaints (40%), followed by respiratory tract symptoms (23%), skin symptoms (9%), and digestive organ symptoms (8%). The most common diagnosis group was respiratory tract diseases (41%), followed by general complaints (19%) and ear diseases (16%). Prescriptions were dispensed for 27% of contacts, and about ¾ were for antibiotics. A total of 12% of contacts concerned acute otitis media; antibiotics were prescribed in 70%. A total of 38% of contacts concerned fever, and ¼ received antibiotics. A total of 7.4% were referred for further evaluation. The parental satisfaction was generally high, but 7.0% were dissatisfied. Dissatisfaction was correlated with a low prescription rate. Conclusion Respiratory tract diseases were the most common diagnoses. The GPs at the OOH primary care service referred children to hospital in 7.4% of the face-to-face consultations, and the provided care was evaluated as non-satisfying by only 7.0% of the parents. The clinical implications of the findings include room for less antibiotic prescribing to children with ear diseases and a need for research into factors related to dissatisfaction.
Background
The out-of-hours (OOH) primary care service has been used increasingly during the last decades. At the same time, the organization of the OOH service has changed in many countries: small rotation groups have become large-scale general practitioner (GP) cooperatives, and telephone triage performed by GPs or nurses has become an essential part of the healthcare system [1,2].
In the Central Denmark Region (CDR), patients in need of acute care outside office hours must call the OOH primary care service, where GPs answer the calls and perform telephone triage. The GPs can end the call by giving advice or prescribing medication (telephone consultation), triaging to a face-to-face consultation with a GP (clinic consultation or home visit), or referring the patient directly to a hospital (emergency department or paediatric department). A substantial number of the calls to the OOH service concern children aged 0-5 years, and the children often have symptoms of infectious disease [3,4]. It has been widely discussed if all children attending the OOH service should be seen by a paediatrician or by a GP, who serves as a gatekeeper to secondary care [5]. However, little is known about RFE and parental satisfaction in children seen in OOH primary care.
The objective of this study was to describe face-to-face consultations for children aged 0 to 5 years in OOH primary care, specifically the RFEs, the diagnoses recorded by the triaging and treating GPs, and the provided care in terms of dispensed prescriptions, reason for referral, and parental satisfaction with the contact.
Study design and setting
The present population-based cross-sectional study is based on data from a random sample of patient contacts to the OOH primary care service in the CDR, which is one of five Danish regions. These data were collected from June 2010 to May 2011 as part of the OOH care cohort study referred to as 'the LV-KOS study' [6,7]. The organization of OOH primary care has not been changed since data collection.
Danish GPs provide regional OOH primary care on a rotating basis. The OOH primary care service in the CDR covers a population of 1.2 million citizens, and the service consists of two call centres and 13 consultation centres located throughout the region. Opening hours are 4 pm -8 am on weekdays and all day on weekends and public holidays. The OOH registration system is fully computerised, and each contact is registered in the patient's medical record through the unique civil registration number assigned to every Danish citizen. An electronic copy of the record is subsequently sent to the patient's own GP, and the data are transmitted to the regional administration for remuneration purposes as GPs are paid according to a fee-for-services model [6].
Data collection and variables
The electronic OOH registration system provided data on patient age and gender, date and time of contact, type of contact, the GP's clinical notes, and detailed prescription information [7]. All prescriptions were automatically registered in the system using the Anatomical Therapeutic Chemical (ATC) classification System.
Additionally, a pop-up questionnaire completed by participating GPs was used to collect extra data. For one GP on duty per type of shift, the computer system randomly selected contacts (every 10th telephone consultation, every 3rd clinic consultation, and all home visits) for inclusion in the LV-KOS study (Fig. 1) [7]. In this way, all included contacts were chosen at random within their contact type. Development of the GP questionnaire involved cognitive interviews of 12 GPs to improve the face validity of the questionnaire and a pilot test, which resulted in minor changes. The questionnaire addressed issues such as severity of health problem, diagnosis, and provided care [6].
Two to three days after the contact with the OOH service, a postal questionnaire was sent to the parents of the included 2363 children; 1223 (51.8%) questionnaires were completed and returned. The questionnaire focused on their experience with the OOH service [6,7]. Parents were asked to rate the perceived severity of the health problem, the duration of symptoms, and their satisfaction with the OOH service. Only these three variables from the parental questionnaire were included in this study.
We used the International Classification of Primary Care, second edition (ICPC-2), to code RFE based on the GP's clinical notes and diagnosis [8]. The main (first mentioned) RFE was designated 'primary RFE' and others 'secondary'. Secondary RFE was only recorded if it had another code number or chapter number than the primary RFE. The RFE was written in the patient's medical record by the GPs. Coding was performed by a specially trained medical student who also received supervision from one of the authors.
Population
During the one-year study period, 644,395 contacts were registered in the OOH service in the CDR (59.5% telephone contacts, 27.6% face-to-face consultations, and 12.9% home visits). The LV-KOS study included 21,457 contacts, i.e. 3.3% of all contacts to the OOH service (Fig. 1). The inclusion and exclusion criteria have been described earlier [6,7]. Due to variations in the pop-up interval of registrations, the distribution of registered contacts in the LV-KOS is not comparable to the distribution of all contacts to the OOH service. Each patient could be included more than once during the one-year study period. Thus, the risk of including an already included patient was about 1:30 when the patient contacted the OOH service a second time [7]. For this study, we included children from 0 to 5 years of age, referred to as children in this paper. Because no objective information on the diagnosis was available for telephone contacts, these are not included in the detailed analysis in this study.
Statistics
We generated simple descriptive statistics by using IBM SPSS version 24 and the "Statistics with confidence" programme, 2nd edition, which provided 95% confidence intervals (CI); a two-sided significance level of 5% was used [9]. Because several variables were on an ordinal scale and not normally distributed, Spearman's rho rank-order correlation coefficient was used to assess the correlation between variables. The rho value ranges from −1 to +1, and zero indicates no association between two variables.
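To make the two statistics above concrete, a minimal Python sketch (not the software actually used in the study) computes a simple normal-approximation 95% CI for a proportion and Spearman's rho for two ordinal variables; the 'Statistics with confidence' programme may use a different interval method (e.g. Wilson), and the example numbers are illustrative only:

```python
import math
from scipy import stats

def proportion_ci(successes: int, n: int, confidence: float = 0.95):
    """Normal-approximation (Wald) confidence interval for a proportion."""
    p = successes / n
    z = stats.norm.ppf(1 - (1 - confidence) / 2)
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Illustrative: 175 referrals among 2363 face-to-face contacts
p, lower, upper = proportion_ci(175, 2363)

# Spearman's rho between two ordinal variables (toy vectors, not study data)
severity = [1, 2, 2, 3, 1, 3, 2]      # e.g. GP-assessed severity category
prescription = [0, 0, 1, 1, 0, 1, 1]  # e.g. prescription issued yes/no
rho, p_value = stats.spearmanr(severity, prescription)
```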
Results
Of the 21,457 contacts, 36.4% were telephone contacts, 32.4% were clinic consultations, and 31.1% were home visits [6]. A total of 4002 contacts concerned children (aged 0-5 years), and 1639 (41%) of these were telephone contacts (Fig. 1). The present study focuses on the 2363 contacts concerning children who were seen by a GP at a face-to-face consultation in the OOH clinic (n = 1875) or at a home visit (n = 488).
A completed questionnaire was returned by 1220 of the parents (51.6%). A non-respondent analysis showed only minor differences between the respondents and non-respondents. The GPs more frequently rated non-respondents not to be seriously ill (70.3%) compared to respondents (65.2%) (difference: 5.1%, CI: 1.3-8.8%).
Table 1 lists contact types, the gender and age of the included children, the GPs' assessment of health problem severity, and the parents' assessment of symptom duration, problem severity, and satisfaction with the contact. Moreover, prescription rates (ranging from 0 to 37.7%) and their association with each variable are presented.
Reason for encounter
GPs recorded RFE according to the information provided by the parents during the consultation. About half the contacts (53%) had two RFEs: a primary and a secondary. Non-specific complaints (including fever) and respiratory tract symptoms were the most common primary symptoms; these were followed by skin symptoms and symptoms from digestive organs (Table 2). More than 70% of the secondary RFEs belonged to another chapter than the primary RFE. Fever was the primary or secondary RFE in 891 contacts (37.7%, 95% CI: 35.8-39.7). Ear symptoms were present in 186 (7.9%, CI: 6.9-9.0) contacts; 119 of these had ear pain, corresponding to 5.0% (CI: 4.2-6.0) of all included contacts. In more than half the contacts (59.1%, CI: 56.4-61.9), the parents assessed the presented health problem as serious (Table 1), whereas the treating GPs assessed only 32.4% (CI: 30.5-34.3) of contacts as concerning serious health problems. The symptoms had lasted for more than 24 h in about one-third of the contacts and for less than 5 h in 27.9% of contacts (CI: 25.4-30.5) (Table 1). Symptoms of less than 5 h were more often assessed as serious or maybe serious by the GPs (39.0%) than symptoms of longer duration (30.7%; difference: 8.3%, CI: 3.5-13.1). The GPs did not state any RFE for 174 contacts (7%).
Common diagnoses
Four percent of the contacts received two diagnoses, resulting in 2452 diagnoses for 2363 contacts (Table 3). The most common diagnosis was respiratory tract disease, which was present in 979 (41.4%, CI: 39.5-43.4); 360 of these were diagnosed as upper respiratory tract infection (RTI). General complaints, including fever and unspecified viral infection, were found in 438 (18.5%, CI: 17.0-20.2) of the diagnoses. Ear disease was the third-most common diagnosis group and was found in 370 (15.7%, CI: 14.2-17.2); 291 of these were diagnosed with acute otitis media (AOM).
Provided care
About three-quarters of the parents received some general advice from the GP. In 639 (27.0%, CI: 25.3-28.9) of the contacts, children were given one (601) or two (38) prescriptions of medicine. Prescriptions were less common at home visits (16.2%) than at clinic consultations (Table 1). Problem severity was related to prescription rates when assessed by the GPs (Rho = 0.136, P < 0.001). Short duration of symptoms was associated with a low prescription rate (17%). The prescription rates in the different age groups varied between 20 and 34% without any trend (Table 1). The most common type of medication was oral antibiotics, which were prescribed 471 times (19.9%, CI: 18.4-21.6), and 33 children (1.4%) received topical antibiotics for eye infection (Table 4). In 222 contacts, children were prescribed penicillin-V (beta lactamase sensitive penicillin), corresponding to 47% of all prescribed oral antibiotics, and 216 (46%) were prescribed amoxicillin (beta lactamase resistant penicillin). No children received a prescription of cephalosporin or other newer broad-spectrum antibiotics. A total of 167 (7.1%, CI: 6.1-8.2) contacts involved prescriptions for other types of medicine than antibiotics, mostly for respiratory symptoms (n = 79). Of the 291 contacts ending with a diagnosis of AOM, 204 (70.1%) resulted in a prescription of antibiotics (Table 4), and 12% were recommended to make an appointment with their own GP.
Referral
In 175 (7.4%, CI: 6.4-8.5) face-to-face consultations or home visits, children were referred for further evaluation or admission to a nearby hospital, mainly a paediatric department. Children under one year of age were more often referred (12%) than children over one year of age (6%). The diagnoses of the referred children were mainly respiratory diseases, such as bronchiolitis, pneumonia, or asthma (n = 97), or general bad condition, and these were often combined with high fever (n = 34) ( Table 5).
In 151 cases (86.3%, CI: 80.4-90.6), the GP considered the condition of the referred child to be serious or potentially serious.
Main findings
Non-specific complaints, including fever, were the most common primary or secondary RFE (1303, 55.1%) in our random sample of children 0-5 years of age seen by a GP at the OOH primary care service. Fever alone was identified in 891 (37.7%) children. Respiratory tract disease was the most common diagnosis group (41.4%); 360 (15.2%) had upper respiratory infection, 438 (18.5%) had general complaints, and 370 (15.7%) had ear diseases. A total of 639 (27%) contacts resulted in prescriptions, and 471 (20%) were prescribed antibiotics. In total, 70.1% of children with AOM received antibiotics, and 7.4% were referred for further examination/treatment at a paediatric or emergency department. In total, 7.0% of the parents reported that they were dissatisfied with the quality of the contact with the GP-run OOH service. Two percent of the children did not receive a diagnosis after being seen by a GP.
Comparison with other studies
Use of OOH service
The original OOH organization, which was based on services provided by GPs for their own listed patients, has changed into large GP cooperatives with telephone triage, and regional clinics have become integral parts of the new model [1,10]. In both the UK and Poland, children under five years of age have been found to have about fourfold more contacts with the OOH service than adults [11,12]. Huibers et al. compared the use of OOH services in Denmark and the Netherlands. They found that Danish children had 250 contacts per 1000 inhabitants per year compared with Dutch children, who had less than 100 contacts per 1000 inhabitants per year [4].
Reason for encounter (RFE)
Only a few studies have reported RFE in children seen in OOH primary care, which makes comparison difficult [13]. A Dutch study based on a population including 20% under age five years reported that 25% of the parents contacted the OOH service with non-specific complaints; 15% were caused by respiratory problems in the child [14]. We found that non-specific complaints were the primary or secondary RFE in 1303 (55.1%) contacts, and complaints of respiratory tract symptoms were identified in 43.5%. These differences can be explained by different use of OOH primary care (telephone contacts included) and different age groups. In a Norwegian study from 2008, Welle-Nilsen et al. reported that one third of 210 OOH consultations concerned children aged 0-10 years. They found that 28% were classified with minor ailments; cough, fever, sore throat, upper RTI, and earache were the most common RFEs [15]. Another Norwegian study found that fever was the most frequent RFE in children when nurses did telephone counselling [16]. De Bont et al. reported that 31% of contacts to a large Dutch GP OOH service concerning children under age 12 years were fever related [17]. This figure corresponds largely to our finding of 37.7% in a somewhat younger population.
Diagnosis
A multinational study exploring the diagnostic scope in OOH primary care in eight European countries found respiratory problems in 14-44% of children under the age of 18 years, general and non-specific complaints in 11-24%, and ear problems in up to 13% [3]. In an OOH paediatric clinic in the US, Goodrich et al. found that 26% of children under the age of 15 years presented with upper respiratory infection and 14% with otitis media or related conditions [18]. These figures are very similar to our findings although the age group investigated in our study was younger. Kozin et al. presented US figures on otology-related diagnoses given in an emergency department setting in 2009-11. They included children aged 0-17 years who presented with an ear complaint. In total, 82% were diagnosed with suppurative or unspecified otitis media; this corresponds to 5.6% of all visits [19]. This is in line with our finding of AOM in about 12% of younger children.
Provided care
Salisbury et al. found that 32% of all OOH primary care contacts in the UK ended with a prescription [11], and we found 27% in a Danish setting. Elshout et al. reported that 36.3% (CI: 31.3-41.7) of 322 febrile children (3 months–6 years of age) seen by a GP in a face-to-face consultation in OOH primary care in the Netherlands were prescribed antibiotics [20]. We found that 25.0% (CI: 22.3-28.0) of the 891 contacts concerning children with fever ended with a prescription of antibiotics; this is significantly less, but the OOH service is more frequently used in Denmark than in the Netherlands [4]. A Norwegian study of 401 children with respiratory symptoms and/or fever found an antibiotic prescription rate of 23% [21]. A C-reactive protein value over 20 mg/L, positive findings on ear examination, use of paracetamol and no vomiting were significantly associated with antibiotic prescription.
In a population-based study of prescriptions of antibiotics during one year (2010-11) based on 644,777 OOH primary care contacts, Huibers et al. found that 25% of children (0-4 years of age) received an antibiotic prescription after a clinic consultation and 12% did after a home visit [22]. As this study was the basis for our study, the similar prescription rates (20% antibiotics and 7% other medicine) for children aged 0-5 years are not surprising.
We found that 16% of the children were diagnosed with ear-related problems and 12% with AOM; these results are in line with findings in Belgium and Spain [3]. The antibiotics prescription rate of 70% for contacts involving AOM in our study was lower than the figures reported from emergency department settings in the US, which has seen an increase from 79% in 1996 to 86% in 2004 [23].
Referrals
Giesen et al. analysed 4423 contacts to an OOH primary care service in the Netherlands for all age groups. They found that 7.1% were referred to an emergency department for further treatment [24]. In Norway Rebnord et al. found a referral rate of 7.7%. The strongest predictor for referral was affected respiration [21]. Shipman et al. found that 7.0% received a referral when contacting a cooperative OOH clinic in inner London [25]. They did not report any age stratification in the referral rates. We found a similar figure for preschool children in our study as 7.4% received a referral. However, we included only face-to-face consultations. As very few children are referred directly to hospital after a telephone contact, and face-to-face consultations accounted for 40.6% of all contacts in the CDR, the overall referral rate from OOH primary care is closer to 3% for all children.
Satisfaction
We acknowledge that the concept of satisfaction is complex and that an assessment of patients' experience of satisfaction should optimally be based on several questions; our questionnaire did include several questions covering the patients' experience of the encounter. In an earlier published paper, the issue of overall satisfaction has been addressed and found useful as it detects differences between groups of patients [26]. McKinley et al. reported in 1997 that 9.8% of the patients were dissatisfied with the OOH service when served by a GP and 17.9% when served by a deputising doctor [27]. In a study among 1139 respondents in Vejle County, which was conducted three years after the establishment of large-scale GP cooperatives in Denmark, Christensen et al. found that 13% were dissatisfied, 13% were neutral, and 74% were satisfied [1]. Our findings on parental satisfaction are in good agreement with this study as we found that 82% of the responding parents were satisfied, 11% were neutral, and 7.0% were dissatisfied. A study from Wales found that delays in response and triage times after using OOH services reduced the patient satisfaction, whereas a consultation length of over 10 min increased the satisfaction [28]. We have not identified other studies reporting on the correlation between low patient satisfaction and low prescription rates.
Strengths and limitations
The study has several strengths. The random inclusion of children during the one-year study period minimised the risk of selection bias, and the one-year inclusion ensured that no seasonal bias existed in the data. Moreover, no drop-outs were observed in the electronic registrations. Our material of 2363 contacts concerning children (0–5 years) is sufficiently large to achieve high statistical precision.
It could be a limitation that our study counted contacts and not children. However, the risk of being included in the study more than once was less than 6% for patients seen in the OOH service during the one-year study period. It is a limitation that the diagnoses given rely exclusively on the individual GP's clinical examination and evaluation of the child, in combination with the information provided by the parents. The low response rate for the postal questionnaire could have implied selection bias, and this potential risk must be considered when assessing the parental perceptions. The RFEs were based on the text stated by the GP in the medical record, and this text was subsequently ICPC-coded by a trained medical student and checked by one of the authors. This subsequent coding may have introduced a risk of misclassification as the stated text was sometimes ambiguous. The data could be considered a bit dated (2010-2011), but as no organisational changes have been made, results are expected to be valid.
Clinical implications and future research
The finding of 7.0% parental dissatisfaction is in line with other studies on patient satisfaction, but additional studies seem relevant to identify reasons for dissatisfaction and further reduce dissatisfaction. Dissatisfaction may be related to the quality of communication and care in the OOH service. The finding that some antibiotics were prescribed without a clear diagnosis points to the challenges that GPs face and the need for continuous awareness to limit unneeded antibiotic prescriptions. Whether a pediatrician or a GP should see children at OOH primary care services is not up for discussion in Denmark, because of the gatekeeping system with GPs taking care of children in daytime and outside office hours, without direct access to a pediatrician. Taking our results into account, one may deduce that a pediatrician is not needed. The diagnostic scope is most relevant for primary care and parents are satisfied with the care provided. Yet, this may be different in countries with another healthcare system, resulting in different patient expectations.
Conclusions
We studied a random sample of face-to-face consultations and home visits at the OOH primary care service for 2363 contacts concerning children under the age of six years during one year with no drop-outs. The most common RFEs were non-specific complaints (40%) and respiratory tract symptoms (23%), whereas fever was identified in 38% of the contacts. The GPs diagnosed respiratory tract disease in 41% and ear disease in 16% and made a prescription for 27% of children (20% for systemic antibiotics). In total, 7.4% of children were referred to a hospital mostly for respiratory problems. Parental satisfaction was generally high, but 7.0% of the parents were dissatisfied with the contact; this needs further exploration.
On arithmetic sums of connected sets in $\mathbb{R}^2$
We prove that for two connected sets $E,F\subset\mathbb{R}^2$ with cardinalities greater than $1$, if one of $E$ and $F$ is compact and not a line segment, then the arithmetic sum $E+F$ has non-empty interior. This improves a recent result of Banakh, Jab{\l}o\'nska and Jab{\l}o\'nski [4,Theorem 4] in dimension two by relaxing their assumption that $E$ and $F$ are both compact.
Introduction
Given finitely many sets E 1 , . . . , E n ⊂ R d , their arithmetic sum is defined as E 1 + · · · + E n := {x 1 + · · · + x n : x i ∈ E i for i = 1, . . . , n}. A fundamental question is to find suitable conditions on E 1 , . . . , E n under which their arithmetic sum has non-empty interior. There are two classical results on this question. First, if two sets E, F ⊂ R d are large in the sense of having positive Lebesgue measure (and measurable), then the (generalized) Steinhaus theorem states that E + F has non-empty interior. Second, Piccard's theorem says that the same conclusion holds when E and F are large in the sense of being of second category in R d and having the Baire property. For a detailed account on these two results, the reader is referred to the monograph of Oxtoby [10].
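As a purely illustrative aside (not part of the paper's argument), the arithmetic sum of finite point sets can be computed directly; the following short Python sketch enumerates all pairwise sums of two toy subsets of R²:

```python
from itertools import product

def arithmetic_sum(*sets):
    """Arithmetic (Minkowski) sum of finitely many finite sets of points in R^d."""
    return {tuple(sum(c) for c in zip(*pts)) for pts in product(*sets)}

# Two small finite subsets of R^2 (illustrative only)
E = {(0.0, 0.0), (1.0, 0.0), (0.5, 0.5)}
F = {(0.0, 0.0), (0.0, 1.0)}
print(arithmetic_sum(E, F))  # the set of all sums e + f with e in E, f in F
```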
There also have been many works on the above question for sets which are small in the sense of both measure and topology, which are often fractal sets. Studies in this direction also date back to a work of Steinhaus, who in [15] first observed that the arithmetic sum of the middle-third Cantor set with itself is the interval [0, 2]. Subsequent generalizations include the works of Hall [7], Newhouse [9] and Astels [2], to merely name a few. Very recently, Feng and the author [6] studied the above question for fractal sets in higher dimensional Euclidean spaces. It is also known, for instance, that if E ⊂ R 2 is a curve connecting the points (0, 0) and (1, 0), and F ⊂ R 2 is a curve connecting (0, 0) and (0, 1), then E + F + Z 2 = R 2 . Simon and Taylor [14] studied the question when the arithmetic sum of a C 2 curve and a certain class of fractal sets in R 2 has non-empty interior. They [13] also studied the dimension and measure of the arithmetic sum of a C 2 curve and a set in the plane. Very recently, Banakh, Jabłońska and Jabłoński proved the following result, which motivated the present paper.
Theorem 1.1. [3, Theorem 4] Let K 1 , . . . , K d ⊂ R d be compact connected sets. Suppose that there exist a i , b i ∈ K i for i = 1, . . . , d such that the vectors a 1 −b 1 , . . . , a d −b d are linearly independent. Then K 1 + · · · + K d has non-empty interior.
Banakh et al. proved Theorem 1.1 by making an elegant use of a result in topology on products of continua (see [3,Proposition 1]). A natural question arises to what extent the compactness assumption in Theorem 1.1 can be relaxed. In this note, we investigate this question in the case when d = 2. By using a completely different approach, we obtain the following result. Theorem 1.2. Let E, F ⊂ R 2 be connected sets with cardinalities greater than 1. If F is compact and not a line segment, then E + F has non-empty interior. Theorem 1.2 improves Theorem 1.1 when d = 2, since we allow one of the two connected sets to be non-compact. We also show that the assumptions in Theorem 1.2 can not be further relaxed. More precisely, we show that if F ⊂ R 2 is a line segment, then there exists a non-compact connected set E ⊂ R 2 not lying in a line such that E + F has empty interior. Also, we will give examples of non-compact connected sets E, F ⊂ R 2 , neither of which is contained in a line, such that E + F has empty interior. Theorem 1.2 combining with these examples gives a full answer to the above question on the compactness assumption in Theorem 1.1 in the case when d = 2.
Our strategy to prove Theorem 1.2 is as follows: first, we prove the conclusion when the complement of F has at least one bounded connected component. Then we refine this result by proving that if there is a compact set K lying in a line in R 2 such that the complement of F ∪ K has at least one bounded connected component, then E + F has non-empty interior as well. These two results are stated and proved in R d , see Lemmas 2.1-2.2. Next, we prove that when F has empty interior, there does exist a compact set K lying in a line in R 2 such that the complement of F ∪ K has at least one bounded connected component, see Proposition 3.3. Finally, Theorem 1.2 follows by combining Lemma 2.2 and Proposition 3.3.
The paper is organized as follows. In Section 2, we give some preliminary lemmas. Then we prove Theorem 1.2 in Section 3. Finally, in Section 4 we present examples to show that the assumptions in Theorem 1.2 can not be further relaxed.
Preliminary lemmas
For A ⊂ R d , let A c , A o , ∂A and Ā denote respectively the complement, interior, boundary and closure of A. We first prove a useful lemma.
Lemma 2.1. Let E ⊂ R d be a connected set with cardinality greater than 1. Let F ⊂ R d be a compact set so that F c has at least one bounded connected component. Then the arithmetic sum E + F has non-empty interior.
Proof. We assume that the interior of F is empty, otherwise there is nothing left to prove. Let U be the unbounded connected component of F c . Write V = F c \U. By the assumption that F c has at least one bounded connected component, V is a non-empty bounded open subset of R d . Set W := ∪ a,b∈E ((a + V ) ∩ (b + U )). Clearly W is open. We claim that W ≠ ∅. Since E contains two distinct points a and b, to prove this claim it suffices to show that (s + V ) ∩ U ≠ ∅ for each non-zero s ∈ R d (take s = a − b). To this end, fix a non-zero s ∈ R d and define P s : R d → R by P s (x) = ⟨x, s⟩, where ⟨·, ·⟩ is the standard inner product in R d . Since V is bounded, λ := sup x∈V P s (x) is finite. Pick x 0 ∈ V so that P s (x 0 ) > λ − ‖s‖ 2 /2, and take a small r ∈ (0, ‖s‖/2) so that B o (x 0 , r) ⊂ V , where B o (x 0 , r) stands for the open ball centered at x 0 of radius r. Then for each y ∈ R d with ‖y − x 0 ‖ < r, P s (s + y) = ‖s‖ 2 + P s (x 0 ) + ⟨y − x 0 , s⟩ > ‖s‖ 2 + (λ − ‖s‖ 2 /2) − r‖s‖ > λ, which implies that s + y ∉ V , and so s + B o (x 0 , r) ⊂ V c . Since F has empty interior, the open set s + B o (x 0 , r) cannot be contained in F , so it intersects U ; as s + B o (x 0 , r) ⊂ s + V , it follows that (s + V ) ∩ U ≠ ∅, proving the claim. Finally, we prove that W ⊂ E + F , which immediately implies that E + F has non-empty interior. Suppose this is not true, i.e., there exists x ∈ W so that x ∉ E + F . By the definition of W , there exist a, b ∈ E so that x − a ∈ V and x − b ∈ U . However, since x ∉ E + F , the connected set x − E is contained in F c = V ∪ U ; as x − E intersects both of the disjoint open sets V and U , this contradicts the connectedness of x − E. Hence W ⊂ E + F . The next lemma is a refined version of Lemma 2.1.
Suppose that there exists a compact set K ⊂ R d so that the following two properties hold: Then the arithmetic sum E + F has non-empty interior.
Lemma 2.2 is a direct consequence of the following result.
Proof. By taking a suitable rotation and translation if necessary, we may assume that , and that at least one bounded connected component of (F ∪ K) c has non-empty intersection with the half-space Let U be a bounded connected component of (F ∪ K) c so that is non-empty. Since ∂U ⊂ F ∪ K and K ⊂ ∂H, we easily see that Then h is positive and finite.
To prove that A is totally disconnected, we may assume that #A ≥ 2, since otherwise there is nothing to prove. Let u, v ∈ A with u ≠ v. In the following, we are going to construct an open set W ⊂ R d such that Since u, v ∈ A with u ≠ v are arbitrary, it will follow that A is totally disconnected.
To prove (2.5), first notice that the open set (u where ·, · is the standard inner product in R d . Then λ is finite as V is bounded. Also, it is clear that λ = sup x∈V x, a . Since a = 0, we have a, a > 0. Hence we can find x 0 ∈ V such that x 0 , a > λ − a, a .
Thus we have a + x 0 , a = x 0 , a + a, a > λ, Then we have Then by (2.8), (2.9) and the compactness of z −K, we can find finitely many points z 1 , . . . , z k ∈D so that See Figure 1 for an illustration of the definition of W , where for simplicity we assume that V is an open half disk. Below we show that the open set W satisfies (2.5).
First notice that v ∉ W since v ∉ z − V ; see (2.6). Moreover, since z 1 , . . . , z k ∈D, we have Since u ∈ z − V , it follows that u ∈ W . In the following, we show that ∂W ∩ A = ∅.
To see this, observe that by (2.4) and (2.11) we have By (2.10), (2.11) and the compactness of z −K, we see that W is disjoint from a neighborhood of z −K. In particular, this implies that Next we show that Moreover, since z 1 , . . . , z k ∈D, we see that On the other hand, since V ⊂ {(x 1 , . . . , x d ) ∈ R d : x d > 0} and z ∈ u + V , where the last inequality is by (2.11). Now (2.15)-(2.17) imply that for i ∈ {1, . . . , k}, From this (2.14) follows.
By (2.12)-(2.14) we see that Since z, z 1 , . . . , z k ∈ D, the definition of A implies that A has no intersection with the right hand side of (2.18). As a consequence, A ∩ ∂W = ∅. Hence (2.5) is proved and we finish the proof of the lemma. For an illustration of the proof, see Figure 1.
Hence by Lemma 2.3, ∩ z∈D (z − F ) c is totally disconnected. However, from the definition of D we see that E ⊂ ∩ z∈D (z − F ) c . This contradicts the assumption that E is a connected set with cardinality greater than 1. Hence we have (E + F ) • ≠ ∅, completing the proof of the lemma.
Proof of Theorem 1.2
We first state a classical result in convex analysis. The reader is referred to [11, Theorem 17.1] for a proof. Proof. The result might be well-known. However, we are not able to find a reference, so we simply include a proof. By Carathéodory's Theorem, x can be represented as a convex combination of 3 elements of S. Equivalently, x lies in a triangle with vertices in S. Since x is on the boundary of conv(S), it follows that x lies on one edge of the triangle. Let u, v ∈ S be the endpoints of this edge. Since x ∉ S, we have u ≠ v.
Let L u,v denote the straight line passing through the points u, v. We show that S lies completely on one side of L u,v . Suppose on the contrary that S does not lie on one side of L u,v . Then we can pick w 1 , w 2 ∈ S such that w 1 , w 2 lie on different sides of L u,v . Clearly, the point x lies in the interior of the quadrilateral with vertices u, v, w 1 , w 2 (see Figure 2). However, this quadrilateral is a subset of conv(S), contradicting the assumption that x ∈ ∂(conv(S)). Next we prove the following proposition, which plays a key role in the proof of Theorem 1.2. Proposition 3.3. Let F be a compact connected subset of R 2 with empty interior. Suppose that F is not lying in a line. Then there exists a compact set K ⊂ R 2 lying in a line such that (F ∪ K) c has at least one bounded connected component.
Proof. We may assume that F c has no bounded connected components, otherwise we simply take K = ∅.
Let conv(F ) denote the convex hull of F . Since F is not contained in a straight line, conv(F ) has non-empty interior and there exists a homeomorphism h : R 2 → R 2 so that h(conv(F )) is the unit closed ball centered at the origin (see e.g. [4,Exercise 8.11]).
We claim that ∂(conv(F )) is not contained in F . Suppose on the contrary that ∂(conv(F )) ⊂ F . As ∂(conv(F )) is homeomorphic to the unit circle, R 2 \∂(conv(F )) has exactly two connected components by the connectedness of F c . This implies that V 2 ⊂ F , contradicting the assumption that F has empty interior.
Pick z ∈ ∂(conv(F ))\F . By Lemma 3.2, there exist u, v ∈ F such that the straight line L u,v passes through z and F lies completely on one side of L u,v . By taking suitable rotation and translation to F , we may assume that z = (0, 0), L u,v is the x-axis, u is on the negative part of the x-axis and v the positive part of x-axis, and F lies entirely on the upper half plane.
Choose a large R > 0 such that F is contained in the closed half disc S := {(x, y) : x 2 + y 2 ≤ R 2 , y ≥ 0}. Let K be the line segment [−R, R] × {0}, which is the bottom edge of S. Below we show that (F ∪ K) c has at least one bounded connected component.
Since the origin is not contained in F , there exists a small r > 0 such that the open half disc T := {(x, y) : x 2 + y 2 < r 2 , y > 0} is contained in (F ∪ K) c . Let V be the connected component of (F ∪ K) c that contains T . Notice that S c is contained in the unbounded connected component U of (F ∪ K) c . To show that V is bounded, it is enough to show that V ≠ U (keep in mind that (F ∪ K) c has a unique unbounded connected component, due to the compactness of F ∪ K).
Suppose on the contrary that V = U . Then T ⊂ U and S c ⊂ U , so we can pick a point a ∈ T and a point b ∈ S c .
Since U is open and connected, there exists a simple curve γ ⊂ U such that γ consists of finitely many line segments and γ joins the points a, b (see e.g. [1, p. 56] for a proof). Clearly, γ must intersect the open half circle Γ := {(x, y) : x 2 + y 2 = R 2 , y > 0} at one or more than one points. As γ is a polygon, we may choose a sub-polygon γ 1 which joins a and a point c ∈ Γ such that c is the unique intersection point of γ 1 and Γ. Connect the point a and the origin by a simple polygon γ 2 ⊂ T such that γ 2 intersects γ 1 only at the point a, and γ 2 intersects K only at the origin.
Let η = γ 1 ∪ γ 2 . Then η is a simple polygon, joining the origin and the point c. Except the endpoints, other points of η are contained in U ∩ S o . Hence η ∩ F = ∅.
Write c = (c 1 , c 2 ). Let L + , L − be the vertical half lines defined by L + := {(c 1 , y) : y ≥ c 2 } and L − := {(0, y) : y ≤ 0}. Then the union η ∪ L + ∪ L − has no intersection with F , moreover its complement has two connected components, with u, v being contained in different components. This implies that F is disconnected, leading to a contradiction. See Figure 3 for an illustration of the proof. Now we combine Lemma 2.2 and Proposition 3.3 to prove Theorem 1.2.
Proof of Theorem 1.2. We can assume that F • = ∅, since otherwise there is nothing to prove. Since F is compact, connected, has cardinality greater than 1, F • = ∅ and F is not a line segment, F is not contained in a line (a compact connected subset of a line with more than one point is a line segment). Hence by Proposition 3.3 there exists a compact set K ⊂ R 2 lying in a line such that (F ∪ K) c has at least one bounded connected component, and by Lemma 2.2 the sum E + F has non-empty interior.
Some examples
We have proved our main result Theorem 1.2 in the previous section: if E, F ⊂ R 2 are connected sets with cardinalities greater than 1, and F is compact and not a line segment, then E + F has non-empty interior. In this section, we present examples to show that the assumptions in this result can not be further relaxed.
Our first example shows that there are non-compact connected sets E, F ⊂ R 2 , neither of which is contained in a line, such that E + F has empty interior. Therefore the compactness assumption for F in Theorem 1.2 can not be dropped.
The example that we will give involves a result on additive functions on R.
It is well-known that under some regularity assumptions, for instance continuity at a point or Lebesgue measurability, an additive function is necessarily linear. However, F. B. Jones [8,Theorem 5] proved the existence of discontinuous additive functions with connected graphs. Based on this result we give the following example. Moreover, since f is additive, it follows that E + F = G f and so E + F has empty interior.
We next give examples to show that the conclusion of Theorem 1.2 may fail if F ⊂ R 2 is a line segment, and E ⊂ R 2 is a connected set which is not contained in a line in R 2 . In our examples, we will take F to be a vertical line segment and E the graph of a certain function.
We first give a simple necessary and sufficient condition in terms of the oscillations of a function f : R → R for the existence of a vertical line segment L such that G f +L has empty interior.
Given a function f : R → R, the oscillation of f at a point x ∈ R is defined by ω f (x) := inf δ>0 sup{|f (y) − f (z)| : y, z ∈ (x − δ, x + δ)}. We say that f is uniformly oscillated if inf x∈R ω f (x) > 0. Clearly, f is not uniformly oscillated if f has a point of continuity. Lemma 4.2. Let f : R → R. Then there exists a vertical line segment L such that (G f + L) • = ∅ if and only if f is uniformly oscillated. Proof. In one direction, assume that inf x∈R ω f (x) > 0. Let L be a vertical line segment with length 0 < ℓ < inf x∈R ω f (x). Below we show that (G f + L) • = ∅.
By applying a suitable translation we can assume that L = {0} × [0, ℓ]. Suppose on the contrary that (G f + L) • ≠ ∅. Then in particular G f + L contains a horizontal line segment, say, [a, b] × {c} for some a, b, c ∈ R with a < b. Notice that G f + L = ∪ x∈R {x} × [f (x), f (x) + ℓ] is a disjoint union of vertical line segments. By this and our assumption that [a, b] × {c} ⊂ G f + L, we have f (x) ≤ c ≤ f (x) + ℓ for all x ∈ [a, b], and thus (4.1) c − ℓ ≤ f (x) ≤ c for all x ∈ [a, b]. Let x 0 = (a + b)/2. Then (4.1) clearly implies that ω f (x 0 ) ≤ ℓ, contradicting that inf x∈R ω f (x) > ℓ. The contradiction yields that (G f + L) • = ∅.
In the other direction, we will prove that if inf x∈R ω f (x) = 0, then (G f + L) • ≠ ∅ for any vertical line segment L. Assume that inf x∈R ω f (x) = 0. Let L be a vertical line segment with length ℓ > 0. Again we can assume that L = {0} × [0, ℓ].
Since (x, y) ∈ R is arbitrary, it follows that R ⊂ G f + L. This proves the above claim, and in particular, that (G f + L) • ≠ ∅.
According to Lemma 4.2, if f : R → R is a uniformly oscillated function with a connected graph, then letting E be G f and F be an appropriate vertical line segment, the sum E + F has empty interior. Such functions do exist, as shown in the following examples.
Example 4.3. F. B. Jones [8] proved that there are additive functions on R whose graphs are connected and dense in R 2 . Let f be such a function. Since G f is dense in R 2 , it is easy to see that inf x∈R ω f (x) = ∞. Let E = G f and let F ⊂ R 2 be a vertical line segment. Then from the proof of Lemma 4.2 we see that E + F has empty interior.
Recently, Rosen [12] proved that the set E in Example 4.3 has positive two dimensional Lebesgue measure. Hence by Fubini's theorem, E is not Lebesgue measurable. Below we give another example in which E is Borel. is the binary expansion of x. Here we adopt the convention that a n (x) = 1 for all large n if x has two different binary expansions.
Notice that f is a Borel function, and hence its graph G f is a Borel subset of R 2 . Also, it is easy to check that inf x∈[0,1] ω f (x) = 1. Moreover, Vietoris [16] proved that G f is connected.
Let E = G f and F ⊂ R 2 be a vertical line segment of length less than 1. Then we see from the proof of Lemma 4.2 that E + F has empty interior.
Mapping climate change’s impact on cholera infection risk in Bangladesh
Several studies have investigated how Vibrio cholerae infection risk changes with increased rainfall, temperature, and water pH levels for coastal Bangladesh, which experiences seasonal surges in cholera infections associated with heavy rainfall events. While coastal environmental conditions are understood to influence V. cholerae propagation within brackish waters and transmission to and within human populations, it remains unknown how changing climate regimes impact the risk for cholera infection throughout Bangladesh. To address this, we developed a random forest species distribution model to predict the occurrence probability of cholera incidence within Bangladesh for 2015 and 2050. We developed a random forest model trained on cholera incidence data and spatial environmental raster data to be predicted to environmental data for the year of training (2015) and 2050. From our model’s predictions, we generated risk maps for cholera occurrence for 2015 and 2050. Our best-fitting model predicted cholera occurrence given elevation and distance to water. Generally, we find that regions within every district in Bangladesh experience an increase in infection risk from 2015 to 2050. We also find that although cells of high risk cluster along the coastline predominantly in 2015, by 2050 high-risk areas expand from the coast inland, conglomerating around surface waters across Bangladesh, reaching all but the northwestern-most district. Mapping the geographic distribution of cholera infections given projected environmental conditions provides a valuable tool for guiding proactive public health policy tailored to areas most at risk of future disease outbreaks.
Introduction
Cholera, a waterborne bacterial disease that causes severe diarrhea and dehydration in humans, remains a significant threat to global health. Despite proposed efforts to reduce global cholera mortality by 90% by 2030 [1], researchers estimate that between 1.3 million and 4 million cholera cases occur annually, with an estimated 21,000 to 143,000 deaths [2].
The etiological agent Vibrio cholerae resides in coastal brackish water and riverine habitats and is typically seeded along coastlines [3]. Among many proposed hosts, vectors, and reservoirs of infection, zooplankton remain the largest known environmental reservoir of V. cholerae [4]. Consumption of seafood or water contaminated with an infective dose of freefloating V. cholerae or V. cholerae-harboring zooplankton causes human infections, while infection may also occur through fecal-oral transmission between human hosts. Such transmission pathways are influenced by environmental conditions in waterbodies that favor bacterial growth [5]. Changes to such waterbodies influence the epidemiology and ecology of V. cholerae by altering bacterial reproduction, transmission, and exposure risks. Climatic conditions, such as rainfall and sea surface temperature, drive epidemiological risk, with warmer, wetter environments increasing the likelihood of disease transmission and infection [6]. Specifically, increases in sea surface temperature and photosynthetic activity, which increases salinity and pH levels, have been shown to encourage bacterial growth and hence V. cholerae infection risk and endemicity in the Bay of Bengal [7].
However, future climate conditions can also promote increased infection risk in inland populations. Heavy rainfall events (e.g., El Niño and Southern Oscillation and summer monsoons) increase cholera infection risk by damaging sanitation systems and contaminating water sources with sewer spillage [5,8,9]. Surface water contaminated with brackish coastal waters may also serve as sources of infection after flooding events [10]. Cholera infection risk may also increase in periods of drought, during which reliance on scarce water sources increases the likelihood of contamination with V. cholerae, especially if human hygiene practices take place in waters used for drinking [11].
The role of the environment in shaping disease epidemiology and ecology, is not unique to cholera. Recently, researchers have found that air pollution, chemical exposures, population density, and the climate-specifically, the ambient air temperature-influence SARS-CoV-2 transmission dynamics [12][13][14]. For COVID-19 and cholera alike, curbing widespread infection, mortality, and social disruption requires characterizing the epidemiological risk, which in turn depends on how regional weather, land-use practices, and climate conditions influence disease epidemiology and ecology. Risk mapping, a method of associating risk values to explicit geographic areas, has become an effective tool for not only visualizing the spatial distribution of disease burden (i.e., risk) but also for guiding public health policy to reduce that burden [15,16].
One approach to estimating risk across a landscape is to use non-mechanistic correlative models that predict infection risk given disease incidence data (e.g., disease presence/absence) and environmental covariates. Predicting risk under future environmental and climate scenarios is essential for disease surveillance. While predictive studies cannot predict into the future with complete accuracy and are often subject to the limitations of global climate models that create future environmental variables, risk prediction remains a powerful tool in guiding proactive public health policy for areas most at risk of future disease outbreaks. Such a strategy is particularly critical in endemic areas, as pandemic strains of V. cholerae almost invariably emerge from endemic areas that seed epidemics abroad [6,17].
Several studies have sought to predict risk for cholera infection given climate and weather differences via risk-mapping [10,[17][18][19][20]. Most risk-mapping studies restrict their analyses to present climatic conditions or limit climate projections to coastal settings only. To our knowledge, no study to date integrates long-term climate projections into risk mapping, especially for inland populations of endemic countries notoriously affected by climate change. Bangladesh is one such country. Not only is it uniquely vulnerable to coastal flooding, due to its geography and population density, but recent research also finds that by 2100, regardless of the global climate model used, Bangladesh will experience an increase in exposure risk to flooding, with lower-lying regions most at risk [21]. Cholera epidemics are also frequently seeded in the Bay of Bengal and emerge with seasonality [17,22]. Given these vulnerabilities, this study seeks to quantify current and future cholera infection risk values across Bangladesh given environmental conditions. Such an analysis is critical to lessening the burden of cholera and to sustaining and redirecting regional public health strategies as needed over the medium and long term.
Materials and methods
Here we construct risk maps for cholera infection for Bangladesh under current and future climate scenarios. We identify spatial environmental variables associated with human cholera infection and cholera incidence data from a detailed country-wide serosurvey study, and employ a fitted random forest model to predict the risk of infection across Bangladesh at a fine spatial resolution [17]. Below, we characterize our analyses in greater detail.
(a) Study area
We used the administrative boundary level 0 provided by the GADM spatial database (v. 3.6) as the extent for our study area (88.01057˚W, 92.67366˚E, 20.74111˚S, 26.63407˚N) [48].
(b) Cholera occurrence data
We used a serosurvey dataset described in Azman et al. (2020) that identifies cholera prevalence within Bangladesh for 2015 for our disease presence data [17]. Of the 2930 surveyed individuals, the 639 predicted positive cases constituted our model's presence data while the predicted 2291 negative cases constituted absence data. The approximate coordinate location of each surveyed individual was also used by our model to extract values from our spatial covariates. Notably, multiple presence or background points may exist at the same coordinate location as serum samples were often taken from multiple individuals within the same household.
(c) Spatial environmental data
To develop our model, we considered 13 spatial variables known to correlate with V. cholerae occurrence and case incidence and for which data were available for 2015 and 2050 (Table 1). Given our interest in predicting risk for the entirety of Bangladesh, we restricted our variables to those with values available for each cell in the extent used. Moreover, as V. cholerae can be found in semiaquatic and seasonally aquatic settings, we excluded environmental variables describing aquatic environments only [7,23]. All raster datasets were projected to the World Geodetic System 84 (WGS 84) projection, resampled to a 0.00214˚(approximately 250-m 2 ) resolution, and cropped to the extent of our study area using the 'raster' package version 3.4-13 in R (see S1 Text) [24,25].
(d) Statistical analyses
We constructed a predictive model estimating cholera incidence in each 250-m 2 raster cell in 2015 as a function of the spatial correlates using the presence-absence algorithm of the 'randomForest' package version 4.6-14 in R [42]. Briefly, the random forest (RF) algorithm uses bootstrap aggregation and resampling to create an ensemble of weakly correlated decision trees that together classify each datapoint [43,44].
To select the best-fitting model, we performed a stepwise model selection procedure using the variable importance measures from the RF model calibrated and evaluated with all covariates included. From this model, we selected the highest contributing variable to first create a univariate RF model using 80% of each sample group (i.e., presence and absence) as training data for model calibration and the remaining 20% for model evaluation. We ran the univariate models for 1000 iterations, computing the area under the curve (AUC) statistic from the receiver operating curve (ROC) generated for each run to create a 95% confidence interval of the AUC. From here, covariates were added individually to this model if the AUC confidence interval generated for the new model over 1000 iterations indicated improved predictive ability over the univariate model. For each iteration of the RF model, we used the algorithm's default settings in R to perform a supervised classification.
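The models above were fitted with the R 'randomForest' package; as a rough, non-authoritative sketch of the same calibrate/evaluate loop, the Python (scikit-learn) code below refits a random forest over repeated 80/20 splits and summarizes test AUC with a 95% interval. The predictor names, number of trees and the handling of the split across iterations are assumptions for illustration, not details taken from the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def auc_interval(X, y, n_iter=1000, test_size=0.2, seed=0):
    """Refit a random forest over repeated stratified 80/20 splits; return the 2.5% and 97.5% AUC quantiles."""
    rng = np.random.RandomState(seed)
    aucs = []
    for _ in range(n_iter):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, stratify=y,
            random_state=rng.randint(1_000_000))
        model = RandomForestClassifier(n_estimators=500, random_state=0)
        model.fit(X_tr, y_tr)
        aucs.append(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
    return np.percentile(aucs, [2.5, 97.5])

# Illustrative use: X could hold elevation and distance-to-water values at the
# serosurvey coordinates, and y the predicted seropositive/seronegative labels.
# lower, upper = auc_interval(X, y, n_iter=1000)
```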
Once the relevant variables were identified, we ran the best-fitting RF model for 2015 1000 times, training and evaluating the model of each iteration with the same 80% sample or 20% sample of presence-absence data, respectively. With each iteration, the model fitted to the 2015 data predicted cholera occurrence probabilities for 2050 for each 250-m 2 cell. From these predictions, we constructed a mean, 2.5%-, and 97.5%-quantile rasterized map for each year by determining the mean, 2.5%-quantile, and 97.5%-quantile values for each cell within Bangladesh. Using the 'arcgisbinding' package in R, we interfaced ArcGIS Pro version 2.6.3 with R to transfer the raster maps generated in R to ArcGIS to ensure our rasters were of the appropriate resolution and extent [45,46]. All code used in the analysis is publicly available on github (github.com/sophiakruger/cholera_risk) and released under the GNU Public License v.3 [47].
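A minimal numpy sketch of turning a stack of per-iteration occurrence-probability grids into the mean and quantile maps described above (array shapes and names are illustrative; the study produced and exported its rasters in R/ArcGIS):

```python
import numpy as np

# predictions: shape (n_iterations, n_rows, n_cols), one probability grid per model run
predictions = np.random.rand(1000, 200, 300)  # placeholder data for illustration

mean_map = predictions.mean(axis=0)                   # per-cell mean occurrence probability
lower_map = np.percentile(predictions, 2.5, axis=0)   # per-cell 2.5% quantile
upper_map = np.percentile(predictions, 97.5, axis=0)  # per-cell 97.5% quantile
```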
(a) Drivers of cholera infection risk
Our random forest classification model including all predictors ("the full model") showed elevation as the most prominent predictor (S1 Table). Thus, we began our stepwise model selection from a model with elevation as the sole predictor. The random forest classification model including elevation and distance to water as predictors increased the model's predictive power compared to the full model and outperformed all other predictors that were added to the univariate model (Table 2). Model performance invariably declined when additional variables were added one at a time to the bivariate model (S2 Table). Generally, we find cholera infection risk increased with lower elevation and a shorter distance to the nearest surface water body (S1 and S2 Figs).
(b) Spatial predictions of cholera infection risk
We find that the distribution of cholera infection risk changes over time, with coastal and inland Bangladesh projected to experience an increase in cholera infection occurrence probability from 2015 to 2050 (Fig 1(A)-1(C)). Even under the most conservative estimate for 2050, we find risk increases along tributaries, running inland from the coast (Fig 1(B)). In 2015, cells with an average occurrence probability of 0.50 or greater cluster tightly along the coast of the Khulna and Barisal districts and are more widely distributed inland, though many follow the Padma River north into the district of Dhaka (Fig 2). Yet by 2050, clusters of cells with an occurrence probability of 0.50 or greater are predicted to increase inland in the districts of Khulna, Barisal, Chittagong, Rajshahi, Dhaka, and Sylhet (Fig 2). Notably, while cells with an occurrence probability of 0.50 and greater cluster around major river systems along district boundaries in 2015, by 2050 these risk clusters expand inland latitudinally (Fig 2).
Discussion
In this study, we predicted how changing climatic and land-use patterns can alter the risk for cholera infection at very fine spatial scales for the entirety of Bangladesh between the years 2015 and 2050. Using a species distribution modelling approach, we found areas with low elevation and shorter distances to surface water to be at highest risk. Areas at low elevations have greater potential for inundation from future rainfall events, which may compromise sanitation systems and increase risk for the spread of waterborne pathogens. Moreover, projected increases in coastal vulnerability to V. cholerae and more frequent heavy rainfall events will also likely increase the presence of V. cholerae in surface waters at these elevations [3,49]. Low elevation areas are also likely at greater risk for infection than those of higher elevation given human settlement patterns on low-lying arable land, along rivers and other surface water. To the extent that high population density correlates with increased risk for infection, whether through increased contact with positive cases, sanitation system strain, or under-development and poverty, these areas exhibit greater potential for human-to-human cholera spread [38,50-52]. We find that although cells of high risk (designated as having a cholera case occurrence probability of 0.50 and higher) cluster along the coastline predominantly in 2015, by 2050 high-risk areas expand from the coast to inland Bangladesh, with all but the northwestern district of Rangpur seeing increased clusters around surface water. The overall increased risk for infection in inland Bangladesh indicates that coastal vulnerability to infection translates to increased inland infection risk. This is worrying given the predicted doubling of ENSO events in the future, which will only promote V. cholerae coastal suitability and increase coastal cholera incidence [3,53].
Cholera infection risk mapping studies that restrict their analyses to Bangladesh remain limited. Previous risk-mapping studies that quantified cholera infection risk on a global scale may account for global trends in the distribution of cholera incidence and its etiological agent; however, these trends may not accurately reflect the factors shaping the distribution of infection risk at the country level. For example, Escobar et al. (2015) generated a global suitability map for cholera infection, using the environmental suitability for V. cholerae as a proxy for cholera infection risk, but restricted those predictions to the coastline, globally, leaving inland risk values for cholera-endemic countries unaccounted for [3]. Recently, Azman and colleagues attempted to fill this gap by restricting their analysis to Bangladesh, quantifying relative infection risks at the grid-cell level; however, this analysis was restricted to present environmental conditions only [17]. Notably, our study addresses both issues by expanding the spatial scope of predictions under a future climate scenario to include inland Bangladesh, where approximately 70% of the population lives [54]. Given ongoing efforts to reduce global cholera morbidity by 90% by 2030, our study offers valuable insight into projected high-risk areas in need of continued, if not additional, public health intervention measures to reduce the burden of disease in the coming decades.
Even in the presence of infrastructural and public health advances, predictive risk mapping studies for cholera infection risk will continue to be essential in reducing the disease burden. This is because such predictions characterize a baseline set of expectations about the distribution of infection risk if future conditions resemble current circumstances. Moreover, novel cholera strains are expected to continue to arise in Bengali waters, due in part to cholera biology in the environmental reservoir. For instance, while bacteriophage niche adaptation has allowed bacteriophages to prey on V. cholerae infecting zooplankton in fresh and estuary water, coevolution enables V. cholerae to resist bacteriophage predation [55,56]. Additionally, phages can facilitate the evolution of specific toxigenic V. cholerae biotypes through horizontal transfer of genes associated with virulence or enhanced environmental fitness [57]. This suggests that aquatic interactions between bacteriophages and strains of V. cholerae can not only select for more environmentally persistent strains, but also more virulent strains with the capacity to seed epidemics.
Climate change is likely to affect not only the distribution of waterborne diseases inland, but also socioeconomic conditions and infrastructural integrity. Thus, further modelling studies should seek to include covariates of the latter in combination with climatic variables to predict infection risk. Such models should also consider the potential for climate-associated human migration inland from vulnerable coastal regions to influence inland risk. As we developed our model, we initially found the distance from each grid cell to the coast of Bangladesh to be an important variable in predicting cholera infection occurrence, with closer distances experiencing higher cholera occurrence probabilities. However, the lack of coastline projections for 2050 prevented us from including that variable in our model. Therefore, the need for accurate coastline data under future climate scenarios remains to support robust predictive studies into disease occurrence. Accurate sociological data are likewise needed but are difficult to project decades into the future; remote sensing data could help fill both needs and in turn be useful in training models that consider the interplay between human hosts and their environment in shaping the risk for cholera infection.
As with our study, to generate valid risk predictions future models must also rely on robust case incidence data that reflects actual disease prevalence. Risk predictions from correlative models may also improve with added model complexity, but potentially at the expense of explanatory power. In future infection risk forecasting studies for cholera, researchers should consider the use of hierarchical spatial models or neural networks in spatial distribution modelling that have been shown to generate robust predictions in emerging infectious disease studies [58][59][60][61].
Mechanistic models of transmission are also needed. Species distribution models (SDMs), like that of this study, represent a key first step in developing such models, but may not include the effect of climate-sensitive ecological processes on model predictions [62]. Therefore, in the context of global change, modelling the spatial distribution of risk for cholera infection is best done using process-based models that will use our model's infection probabilities, consider the correlative components of our model, and incorporate the ecological mechanisms influencing the distribution of cholera and human transmission. Nevertheless, our study holds importance in providing robust inland climate-associated cholera infection risk predictions that can inform preventive Bengali public health strategies.
Heat-Induced Cytokinin Transportation and Degradation Are Associated with Reduced Panicle Cytokinin Expression and Fewer Spikelets per Panicle in Rice
Cytokinins (CTKs) regulate panicle size and mediate heat tolerance in crops. To investigate the effect of high temperature on panicle CTK expression and the role of such expression in panicle differentiation in rice, four rice varieties (Nagina22, N22; Huanghuazhan, HHZ; Liangyoupeijiu, LYPJ; and Shanyou63, SY63) were grown under normal conditions and subjected to three high temperature treatments and one control treatment in temperature-controlled greenhouses for 15 days during the early reproductive stage. The high temperature treatments significantly reduced panicle CTK abundance in heat-susceptible LYPJ, HHZ, and N22 varieties, which showed fewer spikelets per panicle in comparison with control plants. Exogenous 6-benzylaminopurine application mitigated the effect of heat injury on the number of spikelets per panicle. The high temperature treatments significantly decreased the xylem sap flow rate and CTK transportation rate, but enhanced cytokinin oxidase/dehydrogenase (CKX) activity in heat-susceptible varieties. In comparison with the heat-susceptible varieties, heat-tolerant variety SY63 showed less reduction in panicle CTK abundance, an enhanced xylem sap flow rate, an improved CTK transport rate, and stable CKX activity under the high temperature treatments. Enzymes involved in CTK synthesis (isopentenyltransferase, LONELY GUY, and cytochrome P450 monooxygenase) were inhibited by the high temperature treatments. Heat-induced changes in CTK transportation from root to shoot through xylem sap flow and panicle CTK degradation via CKX were closely associated with the effects of heat on panicle CTK abundance and panicle size. Heat-tolerant variety SY63 showed stable panicle size under the high temperature treatments because of enhanced transport of root-derived CTKs and stable panicle CKX activity. Our results provide insight into rice heat tolerance that will facilitate the development of rice varieties with tolerance to high temperature.
INTRODUCTION
The global mean surface temperature increased rapidly and considerably during the 20th century, and a further increase of 0.3-4.8 • C is predicted by the end of the 21st century (Pachauri et al., 2014). Notably, nighttime temperature has increased more rapidly than has daytime temperature (Peng et al., 2004;Elagib, 2010). High temperature extremes and warmer nights are likely to become more frequent and intense in the near future (Mika, 2013;Wang et al., 2014).
Rice plants are highly susceptible to high temperature stress, especially during the reproductive stage (Moldenhauer et al., 2001;Jagadish et al., 2015), during which high temperature may severely reduce grain yield. An increase of 1-4 • C reduced rice grain yield by 0-49%; grain yield decreased 14% for every 1 • C increase in temperature (Singh et al., 2009). Another study showed that an increase of 1 • C in nighttime temperature reduced rice grain yield by 10% (Peng et al., 2004). Heat-induced yield reduction is largely attributed to adverse effects on yield components (Jagadish et al., 2015).
During the early reproductive phase of rice, which includes the processes of panicle initiation and development, high temperature events, including relatively warm nights, reduced the number of spikelets (Wei et al., 2010;Wu et al., 2016). During the middle and late reproductive phases, during which heading and grain filling occur, high temperature events reduce the grain filling rate and grain weight in rice (Jagadish et al., 2015). Several studies have assessed the agronomic, physiological, and molecular aspects of the negative impact of heat stress on rice production, especially at the flowering and grain filling stages (Shi et al., 2013;Jagadish et al., 2015). However, understanding of the physiological aspects of the impact of high temperature on panicle differentiation and spikelet formation during the early reproductive phase is limited (Wang et al., 2015).
In higher plants, cytokinins (CTKs) regulate numerous biological processes, including shoot growth and development, differentiation, and responses to environmental stresses (Ha et al., 2012;Zwack and Rashotte, 2015). In rice, CTKs also regulate the number of spikelets (Kyozuka, 2007). Rice varieties with small panicles had correspondingly low panicle CTK abundance, but exogenous CTK application to plants gradually increased the number of spikelets (Ashikari et al., 2005;Ding et al., 2014). These results show that CTKs play crucial roles in panicle formation and spikelet differentiation in rice.
Cytokinin accumulation in the aerial organs of plants is determined by the rate of importation of root-synthesized CTKs, local CTK synthesis, and local CTK catabolism (Kudo et al., 2010). Root-derived CTKs are the main source of CTKs for aerial organs (Aloni et al., 2005), in which most CTKs are derived from the roots and transported to the shoot via xylem sap (Yong et al., 2014). In Betula pubescens, CTK abundance in xylem sap was positively correlated with the growth rate of the shoot and the number of branches (Rinne and Saarelainen, 1994). CTK synthesis involves several enzymes, but isopentenylation of adenosine phosphate by isopentenyltransferases (IPT) to produce N6-(Δ2-isopentenyl)adenosine (iP) phosphate is the key step in CTK biosynthesis (Miyawaki et al., 2006). Trans-hydroxylation of the side-chain of iP-riboside to form trans-zeatin (tZ) riboside is catalyzed by the cytochrome P450 mono-oxygenase CYP735A (Kiba et al., 2013). Additionally, LONELY GUY (LOG) proteins with phosphoribohydrolase activity are involved in the conversion of riboside 5'-monophosphate CTKs with low activity to high-activity forms such as iP (Kurakawa et al., 2007; Tokunaga et al., 2012). Degradation of CTKs is primarily catalyzed by cytokinin oxidase/dehydrogenase (CKX) (Kudo et al., 2010).
Suppressed CTK degradation and enhanced CTK biosynthesis contribute to local CTK accumulation in plant organs (Kudo et al., 2010). CTK metabolism influences panicle size in rice; relatively low CKX activity was associated with reduced panicle CTK degradation and a greater number of spikelets per panicle (Ashikari et al., 2005). And, others have reported that enhanced CTK biosynthesis is associated with large panicle size in rice. For example, expression levels of IPT and LOG, which are involved in local CTK synthesis in shoot meristem, were directly associated with the number of panicle spikelets (Kurakawa et al., 2007;Ding et al., 2014). In Arabidopsis, shoot development and growth were promoted by overexpression of CYP735A and retarded in loss-of-function CYP735A mutants (Kiba et al., 2013). However, few studies have simultaneously evaluated enzymes involved in CTK synthesis and catabolism, especially in rice, and it is not yet clear which processes and enzymes involved in CTK metabolism influence panicle CTK accumulation and panicle differentiation.
Cytokinins mediate plant responses to abiotic stresses, including heat stress (Ha et al., 2012). In rice, Arabidopsis and passion fruit, reduced CTK abundance in shoots in response to heat caused floret abortion, whereas exogenous CTK application to shoots mitigated the effects of heat injury on branches and florets (Sobol et al., 2014;Wu et al., 2016). CTK transportation via xylem sap is involved in heat stress tolerance (Ha et al., 2012;Zwack and Rashotte, 2015). In creeping bentgrass, application of zeatin riboside (ZR) to the root zone increased the abundance of shoot CTKs and alleviated heat stress injury following exposure to high soil temperature and high air temperature (Liu et al., 2002). However, Udompraset et al. (1995) found that root tZR was not involved in the development of heat tolerance in Phaseolus vulgaris. These findings show that the relationship between CTK transportation from root to shoot and heat tolerance is not well understood.
Changes in CTK metabolism are thought to be involved in adaptation by plants to various environmental stresses. Enzymes involved in CTK synthesis play roles in stress responses in several plant species. IPT, the key enzyme in the process of CTK synthesis, is involved in adaptation to heat, drought, osmotic stress, and salt stress in maize, peanut, and Arabidopsis (Vyroubalová et al., 2009; Qin et al., 2011; Skalák et al., 2016). In rice, LOG expression is altered by various abiotic stress conditions, including heat stress, cold stress, drought, and salinity (Tripathi et al., 2012). CYP735A expression in rice was altered by exposure to cold and dehydration (Maruyama et al., 2014). Regulation of CTK abundance by CKX occurs in response to heat, cold, drought, and salinity in rice, maize, tobacco, and pea, respectively (Vaseva-Gemisheva et al., 2004; Vaseva et al., 2009; Vyroubalová et al., 2009; Tripathi et al., 2012; Lubovská et al., 2014). Accumulation of CTKs as a result of impaired degradation, enhanced local biosynthesis, and/or enhanced CTK transportation from roots is favorable for adaptation to abiotic stress (Ha et al., 2012). Therefore, assessing the effects of heat on CTK transportation from root to shoot via xylem sap and the activity levels of enzymes involved in CTK metabolism, such as IPT, LOG, CYP735A, and CKX, should illuminate the mechanisms underlying phytohormonal regulation of heat tolerance in rice.
Rice varieties display wide genotypic variation in the physiological response to heat stress (Tao et al., 2009;Shi et al., 2013). In this study, we investigated genotypic variation in physiological processes associated with CTK homeostasis in rice plants exposed to high temperature, as well as the relationship between these processes and spikelet number, with the goal of revealing the mechanisms underlying hormonal regulation of panicle size under heat stress during the early reproductive phase.
Crop Husbandry
Pot experiments were conducted during the 2013 rice growing season at Huazhong Agricultural University, Wuhan, China (30°29′ N, 114°22′ E). Four varieties were used in this study: N22, HHZ, LYPJ, and SY63. The methods for crop husbandry, temperature treatments, and phytohormone determination were as described in a previous study (Wu et al., 2016).
After breaking dormancy at 50 °C for 5 days, seeds were sown in plastic seeding trays with loam soil. Four three-leaf seedlings were transplanted into a 14 L plastic pot (28.5 cm height × 30 cm top diameter × 25 cm bottom circumference) containing a mixture of 17 kg soil (loam:sand, 2:1) and 12.5 g compound fertilizer (N:P2O5:K2O, 16%:16%:16%). Seedlings were thinned to three plants per pot 8 days after transplanting, and the main tillers were tagged. A total of 1.0 g urea was topdressed per pot 10 days after transplanting. The plants were flooded with water maintained approximately 2 cm above the soil surface from sowing to maturity. Each pot was manually rotated 90° clockwise every 7 days to avoid positional effects. Pests, diseases, birds, and weeds were intensively controlled.
High Temperature Treatments
The rice plants were randomly arranged with four replications. All plants were carefully cultivated under natural ambient conditions, after which they were moved to four temperature-controlled greenhouses at the start of panicle initiation (panicle emergence was observed visually). The greenhouses were equipped with a wetting machine, an air conditioner, two ventilators, and two sensors for monitoring relative humidity (RH) and air temperature.
The four temperature treatments included a high nighttime temperature treatment (HNT) that imposed high temperature from 19.00 to 07.00 h, a high daytime temperature treatment (HDT) that imposed high temperature from 07.00 to 19.00 h, a high daytime plus nighttime temperature treatment (ADT) that imposed high temperature during the entire day, and a control (CK) treatment that imposed a favorable temperature for rice plant growth during the entire day. RH was set at 80%. The air temperature and RH in the greenhouse were controlled by a central auto-controller (Auto-Greenhouse Monitoring and Data Management System, Version 3.00, Auto, China). Air temperature and RH were recorded 5 cm above the rice canopy using a standalone sensor (HOBO, H08-003-02, Onset Computer Corporation, Bourne, MA, USA). The mean nighttime and daytime temperatures were 27.2 and 31.9 °C under the CK treatment. The mean daytime temperature was 36.1 °C under the HDT treatment (4.2 °C higher than that of the CK treatment). The mean nighttime temperature was 31.9 °C under the HNT treatment (4.7 °C higher than that of the CK treatment). The mean nighttime and daytime temperatures under the ADT treatment were 31.5 and 38.3 °C, respectively (4.3 and 6.4 °C higher than those of the CK treatment, respectively) (Figure 1).
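As an illustration only, the logged canopy temperatures could be summarized into the mean daytime (07.00-19.00 h) and nighttime values reported above roughly as follows; the file and column names are hypothetical and not part of the original workflow.

```python
# Hypothetical summary of canopy-temperature logs into mean daytime
# (07:00-19:00) and nighttime values per treatment.
import pandas as pd

log = pd.read_csv("canopy_temperature.csv", parse_dates=["timestamp"])  # assumed file
log["daytime"] = log["timestamp"].dt.hour.between(7, 18)  # hours 07:00-18:59
means = log.groupby(["treatment", "daytime"])["temp_c"].mean()
print(means)
```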
The plants were exposed to the high temperature treatments for 15 days, after which the plants were removed and grown continuously under normal conditions until maturity, at which point panicle length was approximately 1.5-2.0 cm.
Application of Exogenous Benzylaminopurine (BAP)
A solution of BAP (60 mg L−1), a synthetic CTK, was prepared by dissolving 60 mg BAP (Sigma-Aldrich, USA) in 1 mL of 1% (w/v) NaOH solution, followed by dilution to a final volume of 1 L with double-distilled H2O. Two drops of 0.01% (v/v) Tween 20 were added as a surfactant, after which the solution was mixed thoroughly. The BAP solution was sprayed onto the stems of the LYPJ (heat-sensitive) and SY63 (heat-tolerant) varieties under the ADT treatment (20 mL per plant per application). The BAP solution was applied twice: 1 day before the high temperature treatments and on the second day after the high temperature treatments.
FIGURE 1 | Dynamic temperature records for the three high temperature treatments and one control treatment.
Determination of Yield Traits
To determine the number of spikelets per panicle and grain yield, three panicles of three main tillers were harvested at maturity from three plants grown in three pots for each replication. All spikelets were threshed from the panicles manually, and then collected to determine grain yield and yield components. Grain yield and other yield components were also calculated. The number of spikelets per panicle was referred to as the number of spikelets in the main tiller.
Determination of CTKs in the Panicles, Roots, and Xylem Sap
For each replication, three young panicles of the main tillers were collected from three plants grown in three pots on the last day of exposure to the high temperature treatments. The panicles were frozen in liquid nitrogen and stored at −80 °C. Next, xylem sap was collected from the same three plants during the night (19.00-07.00 h) as described by Arai-Sanoh et al. (2010). The number of tillers per plant was counted, after which the plants were cut approximately 8 cm above the soil level. Dead leaf sheaths in the stubble were removed manually, and the first droplet of exudation was wiped away to avoid contamination. Polyethylene bags containing cotton wool (6-7 g) were attached to the cut ends and fixed with rubber bands, and root xylem sap was collected for 12 h. The increase in weight of the cotton wool was taken as the weight of the collected root exudates. The root xylem sap flow rate (mg tiller−1 h−1) was calculated as the amount of collected sap per tiller divided by 12. Finally, roots were sampled and washed to allow collection of fresh white roots. The collected roots and exudates were stored at −80 °C and used for measurements of CTK abundance.
Extraction and Determination of CTKs
Cytokinins in the roots, xylem sap, and panicles were extracted and purified according to methods reported by Xie and Zhang (2001) and Hoyerová et al. (2006) and quantified according to the methods of Chou et al. (2000) using high performance liquid chromatography (HPLC) with some minor modifications.
For CTKs in the panicles and roots, frozen samples were cut into pieces and mixed completely. Next, 1 g of tissue was ground with 8 mL of cold extraction buffer (methanol:double-distilled H 2 O:formic acid, 15:4:1). The homogenates were transferred to 10-mL centrifuge tubes, incubated at 4 • C for 12 h, and centrifuged at 12,000 × g for 20 min at 4 • C, after which supernatants were collected. The pellets were subjected to phytohormone extraction twice as described above (e.g., 8 mL cold extraction buffer and centrifugation). All supernatants were pooled and condensed to 2 mL using a freeze-dryer (ALPHA 1-4 LD plus, Marin Christ, Osterode, Germany), after which 2 mL of petroleum ether was added to extract pigments and phenolics. The extraction was repeated three times. After removing the petroleum ether containing pigments and phenolics, the lower aqueous phase was freeze dried, and 3 mL of sodium acetate (1 mol/L, pH 8.0) was used to resuspend the samples as crude extracts for determination of various CTKs.
For CTK extraction in root xylem sap, 10 mL of xylem sap was collected in a centrifuge tube by extruding the cotton wool in which the xylem sap was collected. The xylem sap was centrifuged at 12,000 × g for 20 min at 4 • C, after which 8 mL of the supernatant was collected and freeze-dried. Next, the sample was resuspended in 3 mL of sodium acetate (1 mol/L, pH 8.0). These solutions were designated as the crude extracts for determination of various CTKs.
For further CTK extraction, the crude extract (3 mL) was extracted with 1-butanol (3 mL) three times, after which the upper organic phase (1-butanol) containing CTKs was pooled together for CTK measurement. The lower aqueous phase was pooled, adjusted to pH 3.0, and extracted three times with ethyl acetate. The upper organic phase (ethyl acetate) was pooled as the mixture of IAA, GAs, and ABA.
All collected upper organic phases were pooled together for determining CTKs, IAA, GAs, and ABA. After freeze-drying, the residues were dissolved in 5 mL of methyl alcohol and purified using a C18 Sep-Pak cartridge (Waters Corporation, Milford, MA, USA). The purified samples were freeze-dried and dissolved in 0.8 mL of methyl alcohol for phytohormone determination. The elution procedure, flow rate, column temperature, and detection wavelength for HPLC analysis were optimized. Based on the optimized conditions, the abundance of CTKs was measured using HPLC equipped with a C18 column (WondaCract ODS-2 C18 column; 4.6 mm × 250 mm, 5 µm) by a multi-step linear gradient elution (45 min) at a flow rate of 1.6 mL min−1. The column temperature was maintained at 45 °C. The UV detection wavelength was 269 nm. The eluting solutions were prepared according to the methods of Chou et al. (2000) with modification and included methanol (A), double-distilled H2O (B), and 4.5% acetic acid solution (C). The following protocol was used for gradient elution: 0 min, 0% A and 100% B; 17 min, 30% A and 70% B; 18 min, 40% A and 60% B; 22 min, 40% A and 60% B; 24 min, 35% A and 65% B; 25 min, 35% A and 65% C; 35 min, 100% B; 45 min, 100% B.
The calibration standards were mixed in a CTK standard solution containing isopentenyladenine riboside-5'-monophosphate (iPMP), iP, N6-(Δ2-isopentenyl)adenosine riboside (iPA), tZ, and tZR (OlChemIm Ltd., Czechia). The calibration standards were prepared at concentrations of 5.7, 8.5, 11.4, 45.6, 91.1, and 182.3 ng mL−1 for each hormone standard in the mixed standard solution of five compounds. Calibration standard curves were repeated four times, after which a standard curve was calculated for each compound.
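Quantification against such a calibration series amounts to fitting a linear standard curve of peak area against concentration and inverting it for each sample. The sketch below illustrates this with the concentrations listed above and placeholder peak areas; it is not the authors' calculation.

```python
# Fit a linear standard curve (peak area vs. concentration) and use it to
# convert a sample's HPLC peak area into a CTK concentration (ng/mL).
import numpy as np

conc = np.array([5.7, 8.5, 11.4, 45.6, 91.1, 182.3])     # ng/mL standards
area = np.array([12.1, 17.9, 24.3, 96.0, 190.2, 385.5])  # hypothetical peak areas

slope, intercept = np.polyfit(conc, area, 1)   # area = slope * conc + intercept
sample_conc = (250.0 - intercept) / slope      # a peak area of 250 -> ng/mL
```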
The concentration of each CTK in the roots and panicles was expressed as ng g −1 based on fresh weight, whereas the concentration in xylem sap was expressed as pg mL −1 . The transport rate of each CTK via xylem sap (pg tiller −1 hr −1 ) was calculated by multiplying the xylem sap flow rate by the concentration of the corresponding CTK in xylem sap.
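The transport-rate calculation can be expressed as follows. Note that combining a mass-based flow rate (mg tiller−1 h−1) with a volume-based concentration (pg mL−1) requires converting sap mass to volume; the sketch assumes a sap density of about 1 g mL−1, which is an assumption not stated in the text.

```python
# Transport rate of a CTK via xylem sap, as defined above:
# flow rate (mg sap per tiller per hour) x CTK concentration in sap (pg/mL).
# Converting sap mass to volume assumes a density of ~1 g/mL (assumption).
def ctk_transport_rate(sap_mass_per_tiller_mg, hours, ctk_conc_pg_per_ml,
                       sap_density_g_per_ml=1.0):
    flow_rate_mg_per_h = sap_mass_per_tiller_mg / hours          # e.g. 12-h collection
    flow_rate_ml_per_h = flow_rate_mg_per_h / (sap_density_g_per_ml * 1000.0)
    return flow_rate_ml_per_h * ctk_conc_pg_per_ml               # pg tiller^-1 h^-1

# Example: 1,020 mg sap per tiller over 12 h at 150 pg/mL -> ~12.8 pg tiller^-1 h^-1
rate = ctk_transport_rate(1020, 12, 150)
```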
Enzyme Extraction and Soluble Protein Determination
According to the methods described by Zalewski et al. (2010), a 0.5-g sample of young panicles was cut into pieces, powdered with liquid nitrogen using a hand mortar, and extracted with 6 mL of TRIS-HCl buffer (0.2 M, pH 8.0, containing 1 mM phenylmethylsulfonyl fluoride and 0.3% Triton X-100).
All debris was removed by centrifugation at 12,000 × g for 15 min at 4 • C. The supernatants were collected and used for determinations of enzyme activity. The soluble protein concentration was evaluated using the method of Bradford (1976), with bovine serum albumin as the standard.
CKX Activity
The assay of CKX activity based on iP degradation was performed according to the methods of Frébort et al. (2002) with minor changes. A reaction mixture containing 0.2 mL of the enzyme extract, 0.2 mL iP (0.15 mM), 0.5 mL 2,6-dichlorophenolindophenol (0.5 mM), and 0.2 mL Tris/HCl buffer (75 mM, pH 8.5) was incubated at 37 °C for 60 min, after which the reaction was stopped by the addition of 0.3 mL trichloroacetic acid (40%). The mixture was centrifuged at 18,000 × g for 30 min. HPLC analysis was used to quantify iP by measuring absorbance at 269 nm as described previously. CKX activity (nmol mg−1 protein h−1) was defined as the amount of iP (nmol) degraded by 1 mg protein per hour under the selected reaction conditions.
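The activity definition reduces to a simple normalization of the amount of iP degraded by the protein content and incubation time, as sketched below with hypothetical example values (not measured data).

```python
# CKX activity as defined above: nmol iP degraded per mg soluble protein per hour.
def ckx_activity(ip_initial_nmol, ip_remaining_nmol, protein_mg, hours):
    return (ip_initial_nmol - ip_remaining_nmol) / (protein_mg * hours)

# e.g. 30 nmol iP at the start, 18 nmol left after a 1-h incubation with
# 0.4 mg protein -> 30 nmol mg^-1 h^-1 (illustrative numbers only)
activity = ckx_activity(30.0, 18.0, 0.4, 1.0)
```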
IPT Activity
The assay of IPT activity was performed according to the method of Takei et al. (2001a) with minor changes. The enzyme extract (0.2 mL) was incubated in 0.2 mL of the reaction mixture (1 M betaine, 20 mM triethanolamine, 50 mM KCl, 10 mM MgCl2, 1 mM dithiothreitol, 1 mg/mL bovine serum albumin, pH 8.0) with 0.2 mL adenosine monophosphate (1 mM) and 0.3 mL dimethylallylpyrophosphate (340 µM) at 25 °C for 2 h, after which the reaction was stopped by the addition of 0.2 mL acetate (10%). When optimizing the conditions for the IPT activity determination, we tested six incubation times (20, 30, 50, 60, 90, and 120 min) and found that IPT activity was highest when incubating for 2 h; no activity was detected when incubating for 20, 30, 50, or 60 min. The mixture was centrifuged at 18,000 × g for 20 min. The supernatant was subjected to HPLC, and iPMP was quantified by measuring absorbance at 269 nm as described previously. The activity of IPT (nmol mg−1 protein h−1) was defined as the amount of iPMP (nmol) produced per 1 mg protein per hour under the selected reaction conditions.
LOG Activity
The assay of LOG activity was performed according to the method of Kurakawa et al. (2007) with minor changes. The enzyme extract (0.2 mL) was incubated in 0.2 mL of the reaction mixture (50 mM Tris-HCl, 1 mM MgCl 2 , 1 mM dithiothreitol, pH 6.5) with 0.08 mL iPMP (10 mM) at 30 • C for 2 h. The reaction was terminated using 0.3 mL of cold acetone. The mixture was stored at −80 • C for 30 min and centrifuged at 18,000 × g for 20 min. The supernatant was subjected to HPLC. The content of synthetic iP was quantified by measuring absorbance at 269 nm as described previously. LOG activity (nmol mg −1 protein h −1 ) was defined as the amount of iP produced per hour per mg protein under the selected reaction conditions.
CYP735A Activity
The assay of CYP735A activity was performed according to the method of Sasaki et al. (2013) with minor changes. The enzyme extract (0.2 mL) was incubated with 0.2 mL of the reaction mixture (100 mM sodium phosphate, 10% sucrose, 3 mM triphosphopyridine nucleotide, 1 mg/mL bovine serum albumin, pH 7.5) and 0.08 mL iPMP (10 mM) at 20 °C for 2 h. The reaction was terminated by the addition of 0.2 mL of termination buffer (50 mM CHES-NaOH, 0.5 mM MgCl2, pH 10.0). The mixture was incubated with 0.01 mL of calf-intestine alkaline phosphatase (1 U/µL, Sigma) at 37 °C for 40 min and centrifuged at 18,000 × g for 20 min. The supernatant was subjected to HPLC. The content of tZR was quantified by measuring absorbance at 269 nm and comparison with the standard curve for tZR. CYP735A activity (nmol mg−1 protein h−1) was defined as the amount of tZR produced per hour per mg protein under the selected reaction conditions.
Statistical Analysis
In this study, tZ-type and iP-type CTKs were referred to as biologically active CTKs (aCTKs), including tZ, tZR, iPMP, iP, and iPA. Relative values for traits are presented in this study as a means of evaluating the responses of rice plants to high temperature treatments. A relative value was defined as the ratio of the value under a high temperature treatment to that under the control treatment for the same trait in the same variety. A relative value of less or more than 1.0 indicated a decrease or an increase, respectively, in a trait under a high temperature treatment compared with that under the control condition. The absolute values of CTK contents, xylem sap flow rates, and enzyme activities are presented in Supplementary Tables S1-S3, respectively. The absolute concentrations of CTKs in young panicles have already been presented in our previous paper (Wu et al., 2016), in which we focused on the relationship between panicle differentiation and panicle CTK contents under high temperature treatments.
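The relative-value transformation is simply the treatment-to-control ratio for a given trait and variety; the numbers below are illustrative, not values from the study.

```python
# Relative value of a trait, as defined above: treatment value / control value
# for the same trait in the same variety.
def relative_value(treatment_value, control_value):
    return treatment_value / control_value

# e.g. 138 spikelets per panicle under a high temperature treatment vs. 196
# under CK in the same variety -> ~0.70 (hypothetical example)
rel = relative_value(138, 196)
```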
The mean of the relative value across the four replicates was used for analysis of variance and determination of significant differences by the least significant difference (LSD) test at P < 0.05 using Statistix 8.0 (Analytical Software, Tallahassee, FL, USA). Regression analysis was used to estimate the relationship among the investigated traits across four varieties and three high temperature treatments (n = 12) using Sigmaplot software (version 12.5; SPSS Inc., Chicago, IL, USA).
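The trait-trait regressions across the 12 variety × treatment combinations could be computed, for instance, with a simple linear regression; the values below are placeholders, not study data.

```python
# Sketch of a regression between two relative traits across the four varieties
# x three high temperature treatments (n = 12); placeholder data only.
import numpy as np
from scipy import stats

rel_ctk_transport = np.random.rand(12)          # relative CTK transport rate
rel_spikelets = 0.5 + 0.5 * rel_ctk_transport   # placeholder relative spikelet number

res = stats.linregress(rel_ctk_transport, rel_spikelets)
print(res.rvalue, res.pvalue)
```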
Number of Spikelets per Panicle under High Temperature Treatments
The absolute numbers of spikelets per panicle in N22, HHZ, LYPJ, and SY63 were 103, 203, 196, and 127 under the control temperature treatment, respectively. As shown in Figure 2, the three high temperature treatments significantly reduced the number of spikelets per panicle in the HHZ and LYPJ varieties, with the largest reductions observed under the ADT treatment. The number of spikelets per panicle was reduced by 19.9-32.3% in the HHZ variety and 15.6-32.0% in the LYPJ variety under the three high temperature treatments. For the N22 variety, the number of spikelets per panicle was reduced significantly only under the ADT treatment. For the SY63 variety, the number of spikelets per panicle was not affected significantly by the three high temperature treatments. The SY63 variety had more spikelets per panicle in comparison with the LYPJ and HHZ varieties under the three high temperature treatments, as well as more spikelets per panicle in comparison with the N22 variety under the ADT treatment.

FIGURE 2 | Relative number of spikelets per panicle of rice varieties under high temperature treatments. Data are presented as mean ± SD (n = 4). Different letters indicate significant differences among varieties under the same temperature treatment by a least significant difference (LSD) test at P < 0.05. Asterisks indicate significant differences when comparing the absolute mean of a trait under a given high temperature treatment with that under the control by LSD test at P < 0.05.
Application of exogenous BAP increased the number of spikelets per panicle by 11% in the LYPJ variety and by 5% in the SY63 variety under the ADT treatment, in comparison with the same varieties under the ADT treatment without BAP application (Figure 2).
The HNT treatment significantly decreased the abundance of tZ+tZR, iPMP+iP+iPA, and aCTKs, with the exception of tZ+tZR in the N22 and SY63 varieties (Table 1). Similarly, the HDT and ADT treatments significantly reduced the abundance of the tested CTKs, with the exception of tZ+tZR in the SY63 variety. Generally, the largest reductions were found under the ADT treatment; on average across the four varieties, iP-type CTKs, tZ-type CTKs, and aCTKs in the panicles were reduced in abundance under the ADT treatment by 57% (17-73%), 25.3% (0-39%), and 45.8% (16-57%), respectively. In general,
the abundance of iP-type CTKs was reduced more than was the abundance of tZ-type CTKs. On average, iP-type CTKs and tZ-type CTKs in the panicles were reduced in abundance by 40.4% (15-73%) and 17.0% (0-39%), respectively, across the four varieties and three high temperature treatments (Table 1). Generally, the three high temperature treatments had no effects on panicle tZ+tZR content in the SY63 variety, which showed the highest relative abundance of panicle CTKs among the four tested varieties ( Table 1).
There was no significant reduction in the abundance of tZ+tZR, iPMP+iP+iPA, or aCTKs in the roots of the N22, HHZ, and SY63 varieties under the three high temperature treatments. However, in the LYPJ variety, the abundance of tZ+tZR, iPMP+iP+iPA, and aCTKs was reduced significantly by the HDT and ADT treatments ( Table 1).
Xylem Sap CTK Content and CTK Transport Rate under High Temperature Treatments
The absolute rates of xylem sap flow were 86, 151, 143, and 85 mg/tiller/h in N22, HHZ, LYPJ, and SY63 under control temperature treatment, respectively (Supplementary Table S2).
The HNT treatment did not substantially decrease the abundance of xylem sap CTKs in the four varieties. Similar effects were observed under the HDT treatment, but iP-type CTKs were reduced in abundance in the HHZ and LYPJ varieties. The three temperature treatments had no effects on tZ+tZR; however, the high temperature treatments decreased the abundance of iP-type CTKs in the N22, HHZ, and LYPJ varieties, as well as the abundance of aCTKs in the HHZ and LYPJ varieties. Generally, the three high temperature treatments imposed small effects on CTK concentrations in xylem sap; most of the effects of high temperature were not significant, with the exception of the significant effects of the ADT treatment on the LYPJ, HHZ, and N22 varieties ( Table 2 and Supplementary Table S2).
The transport rate of CTKs changed in a manner similar to that of xylem sap flow in the four rice varieties (Table 2 and Supplementary Table S2). The transport rates of tZ+tZR, iPMP+iP+iPA, and aCTKs were reduced significantly in the
HHZ, LYPJ, and N22 varieties in response to the three high temperature treatments, while these transport rates were increased or not affected by the treatments in the SY63 variety. The relative transport rate of CTKs in xylem sap in the SY63 variety was significantly greater than the corresponding rates in the HHZ, LYPJ, and N22 varieties. The three high temperature treatments significantly increased the activity level of CKX in the HHZ, LYPJ, and N22 varieties; however, the treatments had no substantial effects in the SY63 variety (Table 3 and Supplementary Table S3). Additionally, the relative activity level of CKX in the SY63 variety was significantly lower than that of the HHZ, LYPJ, and N22 varieties (Table 3).
Generally, the activity levels of IPT, LOG, and CYP735A were slightly or significantly reduced by HNT in the four tested rice varieties; however, the activity levels of IPT, LOG, and CYP735A were reduced significantly under the HDT and ADT treatments. The relative activity levels of IPT, LOG, and CYP735A under the HNT treatment were higher than their activity levels under the HDT and ADT treatments. Additionally, the SY63 and N22 varieties generally showed high relative activity levels of IPT, LOG, and CYP735A in comparison with those of the HHZ and LYPJ varieties ( Table 3).
Relationships of Panicle CTKs with Its Transportation via Xylem Sap
As shown in Figure 3A, the relative concentrations of tZ-type CTKs, iP-type CTKs, and aCTKs in panicles were positively and significantly correlated with their relative transport rates via xylem sap. It was also observed that concentrations of panicle CTKs were significantly and positively correlated with the xylem sap flow rate ( Figure 3B) and CTK concentrations in xylem sap ( Figure 3C). Additionally, panicle CTK concentrations showed positive correlations with root CTK concentrations, with the exception of iP-type CTKs ( Figure 3D).
Relationships between Panicle CTK Abundance and CTK Metabolism-Related Enzymes
The relative contents of tZ-type CTKs, iP-type CTKs, and aCTKs in panicles were significantly and negatively correlated with the relative CKX activity level in panicles (r = −0.81 for tZ-type, −0.68 for iP-type, and −0.75 for aCTKs) (Figure 4A). The relative contents of iP-type CTKs and aCTKs were significantly and positively correlated with the relative activities of IPT (r = 0.63 for iP-type and 0.62 for aCTKs, Figure 4B), and CYP735A (r = 0.57 for iP-type, and 0.57 for aCTKs, Figure 4D); however, tZ-type CTK abundance was poorly correlated with the activity levels of IPT, LOG, and CYP735A.
Relationships of the Number of Spikelets per Panicle with CTK Content and Metabolism-Related Enzyme Activity Levels
The relative number of spikelets per panicle was positively and significantly correlated with the relative concentrations of tZ-type CTKs (r = 0.93), iP-type CTKs (r = 0.82), and aCTKs (r = 0.87) in the panicles (Figure 5A), as well as with the relative transport rates of tZ-type CTKs (r = 0.69), iP-type CTKs (r = 0.81), and aCTKs (r = 0.75) ( Figure 5B). The relative number of spikelets per panicle was negatively and significantly correlated with the relative activity level of CKX (r = −0.69, Figure 5C). The relative activity levels of IPT, LOG, and CYP735A were poorly correlated with the relative number of spikelets per panicle ( Figure 5D).
Response of Panicle Size to High Temperature in Rice Varieties
Panicle size was reduced in response to the three high temperature treatments, especially in the HHZ, LYPJ, and N22 varieties (Figure 2). Previous studies also reported reduced panicle size in rice plants subjected to high temperature conditions (Wei et al., 2010;Wang et al., 2015). Heat-induced reduction in panicle size was associated with disruption of differentiation and induction of degradation of secondary branches and attached florets (Wang et al., 2015). The SY63 variety showed relatively stable panicle size under the three high temperature treatments. Moreover, the relative number of spikelets per SY63 panicle was significantly higher than that of the panicles of the HHZ and LYPJ varieties under the HNT, HDT, and ADT treatments (Figure 2). Therefore, the results presented in this study show that the effect of high temperature on panicle size is dependent on the variety of rice subjected to high temperature conditions. In addition, the SY63 variety has heat tolerance greater than that of the HHZ variety or LYPJ variety. Similarly, Wang et al. (2015) also reported genotypic variation in the effect of high temperature on panicle size during the early reproductive phase in rice. It is noteworthy that the N22 variety showed heat tolerance under the HNT and HDT conditions (Figure 2). However, the N22 variety suffered from heat stress injury under the ADT treatment (a combination of the HNT and HDT treatments), in contrast to the SY63 variety and similar to the heat-susceptible HHZ and LYPJ varieties (Figure 2). The increased intensity of high temperature conditions aggravates injury in rice during the reproductive stage (Jagadish et al., 2007). Therefore, our data suggest that the N22 variety can likely withstand only low intensity of heat stress, whereas the SY63 variety may be able to withstand high intensity of heat stress (a combination of the HNT and HDT treatments).
Response of Panicle CTKs to High Temperature and Relationship with Panicle Size
Panicle CTK abundance was reduced in response to high temperature during the early reproductive stage, especially in the heat-susceptible LYPJ and HHZ varieties (Table 1). This result was in agreement with previous observations that high temperature reduced the abundance of active CTKs (aCTKs) in Arabidopsis and Phalaenopsis (Chou et al., 2000; Skalák et al., 2016). Under the HNT and HDT treatments, the N22 variety, which showed a stable number of spikelets per panicle, had relatively stable/high aCTK abundance in panicles; however, the N22 variety showed large reductions in the number of spikelets per panicle and the abundance of iP-type CTKs and aCTKs under the ADT treatment (a combination of the HDT and HNT treatments), and the relative aCTK abundance of the N22 variety under the ADT treatment was similar to that of the heat-susceptible HHZ and LYPJ varieties (Table 1). In tobacco leaves, the abundance of aCTKs was reduced continuously as the duration of high temperature treatment was prolonged (Macková et al., 2013). However, in heat-tolerant variety SY63, which had a stable number of spikelets per panicle, the abundance of panicle tZ-type CTKs was not affected by the three high temperature treatments. Although panicle iP-type CTKs and aCTKs were reduced significantly in abundance by the high temperature treatments in the SY63 variety, their abundance remained relatively stable in comparison with that of the same CTKs in the other three varieties (Table 1). Heat-induced reduction in the number of spikelets per panicle was alleviated by application of exogenous BAP to heat-susceptible variety Liangyoupeijiu, which showed reduced CTK abundance under the high temperature treatments (Figure 1). In rice, exogenous BAP application increased the abundance of tZ-type CTKs (Liu et al., 2011) and the number of spikelets per panicle (Ding et al., 2014). Similarly, several passion fruit species and Arabidopsis ecotypes that showed stable abundance of aCTKs had less severe reductions in floret number in comparison with species and ecotypes that showed reduced CTK abundance in response to high temperature when exogenous CTKs were applied to mitigate the effect of heat injury on floret growth (Sobol et al., 2014). Additionally, we observed that relative panicle CTK concentrations were positively correlated with the relative number of spikelets per panicle (Figure 5A). In our study, reductions in the number of spikelets per panicle were associated
with reduced panicle CTK abundance under high temperature treatments (Figure 6). The stability of panicle CTKs may be involved in maintaining panicle size under high temperature conditions in heat-tolerant variety SY63.
This study assessed the stability of tZ-type CTKs in SY63 panicles under high temperature treatments ( Table 1). tZ-type CTKs are thought to act as messengers from root to shoot, while iP-type CTKs act as messengers from shoot to shoot (Kudo et al., 2010). Root-synthesized CTKs are transported to the aerial organs via xylem sap flow (Mader et al., 2003;Aloni et al., 2005;Yong et al., 2014). In SY63 variety, stable tZ-type CTKs in panicles may be partially attributed to stable import of root CTKs via enhanced xylem sap flow under the high temperature treatments ( Table 2).
CTK Translocation and Its Relationship with Panicle Size under High Temperature Treatments
We observed that xylem sap flow showed genotypic variation in response to high temperature. In heat-susceptible varieties, xylem sap flow was decreased significantly by exposure to high temperature, while heat-tolerant variety SY63 showed significantly increased xylem sap flow after 15 days of high temperature treatment during the early reproductive phase (Table 2). Tao et al. (2009) found that heat-tolerant rice varieties showed enhanced exudation of xylem sap during 5 weeks of high temperature treatment; however, xylem sap exudation was inhibited significantly after 2 weeks of high temperature treatment in heat-susceptible rice varieties. Xylem sap flow is closely associated with transpiration flow (Boonman et al., 2007; Kudo et al., 2010). Under high temperature stress, increased transpiration flow from the roots may be important for decreasing leaf surface temperature via evaporative cooling and for transporting root-derived messengers (such as ABA) to the shoot to relieve heat injury (Zandalinas et al., 2016). Therefore, our results suggest that heat-tolerant rice varieties may relieve heat stress via persistent enhancement of xylem sap flow in comparison with heat-sensitive varieties.
In the present study, root CTKs and xylem sap CTKs showed similar responses to high temperature treatments; their abundance was slightly reduced in comparison with that of panicle CTKs (Tables 1, 2), indicating that the effects of high temperature treatments on panicle CTKs were more severe than their effects on root CTKs. Similarly, Takei et al. (2001b) reported that root CTKs showed changes in abundance that corresponded with changes in CTKs transport from root to shoot via xylem sap. In apple trees, exposure of roots to high temperature treatment produced negligible effects on xylem sap CTKs (Tromp and Ovaa, 1994). Additionally, high air temperature produced minor effects on CTKs in roots, which were less severe than the effects of high air temperature on CTKs in the aerial organs of pea plants (Vaseva et al., 2009). These studies demonstrate that CTKs in roots and xylem sap were less sensitive than CTKs in aerial organs, such as panicles, to high temperature.
In this study, the transport rate of CTKs changed in response to high temperature in a manner similar to that of xylem sap flow (r = 0.98, P < 0.01, n = 12). The transport rate of CTKs was decreased significantly by high temperature treatments in heat-susceptible varieties (LYPJ, HHZ, and N22), which showed decreased xylem sap flow rate in comparison with that of heattolerant variety SY63. However, the transport rate of CTKs in heat-tolerant variety SY63 increased as the xylem sap flow rate increased ( Table 2). CTK transport is associated with xylem sap flow, which is closely associated with transpiration flow (Boonman et al., 2007;Kudo et al., 2010). In shade, reduced CTK transport leads to decreased CTK abundance in aerial organs (Boonman et al., 2007). In our study, the relative transport rate of CTKs was significantly correlated with relative CTK abundance in young panicles (Figure 3) and the relative number of spikelets per panicle (Figure 5). Root-derived CTKs translocated via xylem sap mediate apical shoot development (Aloni et al., 2005). Changes in xylem sap CTK abundance in response to environmental conditions regulate plant adaptation to adverse stresses (Ambler et al., 1992;Takei et al., 2001b;Alvarez et al., 2008;Kudoyarova et al., 2014). Taken together, our results suggest that transport of CTKs via xylem sap influences the number of spikelets per panicle by adjusting panicle CTK abundance under high temperature conditions (Figure 6). A relatively high CTK transport rate and relatively good panicle CTK stability likely contribute to the minimal reduction in the number of spikelets per panicle observed in heat-tolerant variety SY63 in response to high temperature.
Response of Enzymes Involved in CTK Metabolism to High Temperature and Their Relationship with Panicle Size
In this study, CTK metabolism-related enzymes CKX, IPT, LOG, and CYP735A were regulated to different extents by high temperature treatments in a manner dependent on rice variety. The activity level of CKX increased significantly in heatsusceptible varieties LYPJ, HHZ, and N22; however, heat-tolerant variety SY63 showed no substantial changes in CKX activity under any of the three tested high temperature treatments. The activity levels of LOG, IPT, and CYP735A were reduced by high temperature in all four tested varieties (Table 3). Similar results were obtained in previous studies, in which the activity level of CKX was increased in Nicotiana tabacum and Pisum sativum under high temperature conditions (Vaseva-Gemisheva et al., 2004;Lubovská et al., 2014), whereas the activity levels of IPT, CYP735A, and LOG were suppressed by various abiotic stresses such as heat, cold, and drought (Maruyama et al., 2014;Skalák et al., 2016). These results suggest that stress conditions, including heat stress, disrupt the balance between synthesis and catabolism of CTKs via regulation of the activity levels of related enzymes.
Most previous studies have separately assessed the roles of IPT (Ding et al., 2014), CYP735A (Takei et al., 2004), LOG (Kurakawa et al., 2007), and CKX (Ashikari et al., 2005) in regulating CTK abundance and panicle size in rice and Arabidopsis; however, it is not yet clear which processes or enzymes play important roles in heat tolerance. CKX activity was stable under each high temperature treatment in heat-tolerant variety SY63, which showed stable panicle CTK abundance and a consistent number of spikelets per panicle. Correlation analysis also revealed that the relative CKX activity level was significantly and negatively correlated with relative panicle CTK abundance and the relative number of spikelets per panicle (Figures 4A, 5C). In rice and Nicotiana, CKX is involved in the effect of high temperature on plant growth (Tripathi et al., 2012;Lubovská et al., 2014). Although certain, but not always significant, relationships were found between the activity levels of CTK biosynthetic enzymes (IPT and CYP735A) and panicle CTK abundance (Figure 4), none of these enzymes had an activity level that was correlated with the relative number of spikelets per panicle ( Figure 5D). However, it is noteworthy that the activity levels of IPT, LOG, and CYP735A were reduced significantly by the HDT and ADT treatments. Moreover, slight correlations were found between the activity levels of IPT and CYP735A and the concentrations of iP-type CTK and aCTK in panicles (Figures 4B,D). Therefore, the relationship between reductions in the activity levels of IPT and CYP735A and panicle size merit further investigation. Our study demonstrates that CKX activity seems to influence panicle size under high temperature treatments by regulating panicle CTK abundance (Figure 6). The stable CKX activity of heat-tolerant variety SY63 may underlie its stable panicle CTK abundance and consistent number of spikelets per panicle under high temperature conditions.
CONCLUSION
The three high temperature treatments significantly decreased the number of spikelets per panicle in heat-susceptible rice varieties (HHZ, LYPJ, and N22), whereas heat-tolerant variety SY63 showed a relatively stable number of spikelets per panicle. Application of exogenous BAP increased the number of spikelets per panicle in the LYPJ variety under the ADT treatment, indicating that CTKs are involved in panicle differentiation under high temperature conditions. The three high temperature treatments significantly decreased panicle CTK abundance, xylem sap flow rate, and the transport rate of CTKs via xylem sap flow, whereas the treatments increased the activity level of CKX in three heat-sensitive rice varieties (HHZ, LYPJ, and N22). In comparison with the heat-susceptible varieties, heat-tolerant variety SY63 showed more stable panicle CTK abundance, an enhanced xylem sap flow rate, a greater CTK transport rate, and more stable CKX activity under the three high temperature treatments. The activity levels of enzymes involved in CTK synthesis (IPT, LOG, and CYP735A) were decreased by the high temperature treatments, especially by the HDT and ADT treatments.
Generally, panicle CTK concentrations were significantly and positively correlated with the xylem sap flow rate, the transport rate of CTK via xylem sap, and the CTK concentrations in xylem sap and roots, whereas panicle CTK concentrations were negatively correlated with CKX activity and weakly but positively correlated with the activity levels of IPT and CYP735A.
According to our results, reduced panicle CTK abundance in heat-susceptible rice varieties under the high temperature treatments was associated with: (i) the reduced transport rate of xylem sap CTKs, which resulted from the reduced xylem sap flow and the decreased xylem sap CTK concentration; (ii) increased activity of CKX in young panicles, which promoted degradation of panicle CTKs; and (iii) decreased activity levels of IPT and CYP735A, despite the insignificant correlations between the relative activity levels of these enzymes and relative panicle size. Therefore, CTK transport from root to shoot and CTK degradation via CKX are the key processes that determine panicle CTK abundance and thus regulate panicle size under high temperature conditions. Generally, stable CTK concentrations under high temperature conditions limited the reduction in the number of spikelets per panicle in heat-tolerant variety SY63 in comparison with that which occurred in the heat-susceptible varieties; this effect was attributed to enhanced transport of root-derived CTKs, due mainly to strong xylem sap flow, and to impaired CTK degradation in panicles as a result of a relatively low level of panicle CKX activity. Therefore, a high transport rate of CTKs and low CKX activity are required to stabilize panicle size under heat stress (Figure 6). From the viewpoint of plant hormone metabolism, our results provide insight into the mechanisms underlying rice heat tolerance, laying a foundation for effective breeding and selection of rice varieties with high temperature tolerance and maximal grain yield.
AUTHOR CONTRIBUTIONS
CW and KC designed the experiments. CW, QL, WW, SF, and QH performed the experiments, KC, JH, LN, and SP performed parts of the experiments. CW and KC analyzed the data and wrote the manuscript. SF, JH, LN, PM, and SP revised the manuscript. All authors have read and approved the final manuscript.
EEG-based Processing and Classification Methodologies for Autism Spectrum Disorder: A Review
Abstract: Autism Spectrum Disorder is a lifelong neurodevelopmental condition which affects the social interaction, communication and behaviour of an individual. The symptoms are diverse, with different levels of severity. Recent studies have revealed that early intervention is highly effective for improving the condition. However, current ASD diagnostic criteria are subjective, which makes early diagnosis challenging, due to the unavailability of well-defined medical tests to diagnose ASD. Over the years, several objective measures utilizing abnormalities found in EEG signals and statistical analysis have been proposed. Machine learning based approaches provide more flexibility and have produced better results in ASD classification. This paper presents a survey of major EEG-based ASD classification approaches from 2010 to 2018 which adopt machine learning. The methodology is divided into four phases: EEG data collection, pre-processing, feature extraction and classification. This study explores the different techniques and tools used for pre-processing, the feature extraction and feature selection techniques, the classification models and the measures for evaluating them. We analyze the strengths and weaknesses of the techniques and tools. Further, this study summarizes the ASD classification approaches and discusses the existing challenges, limitations and future directions.
Introduction
Autism Spectrum Disorder (ASD) is a heterogeneous neurodevelopmental condition characterized by behavioural impairments in social interaction and communication, along with restricted and repetitive behaviours (APA, 2013). ASD is called a spectrum disorder as the symptoms and their severity are unique for each individual. Common symptoms include difficulty in understanding facial expressions, delayed speech and poor comprehension skills. The symptoms start to appear in early childhood within the first three years. A recent report of the Centers for Disease Control (CDC) identifies having siblings with ASD, having older parents and certain genetic conditions as general risk factors of ASD.
The motivation behind this survey is the lack of well-defined automated approaches for ASD diagnosis. In order to support studies on automated ASD classification, it is important to explore the various techniques along with the diagnostic processes. This paper explores and analyzes the techniques for EEG pre-processing, feature extraction and classification, which enable automation of the diagnostic process. Moreover, this paper identifies the existing limitations and challenges and suggests future research directions. Hence, researchers and practitioners can utilize the suggested techniques and address the limitations in the course of future research.
The methodology of ASD diagnosis is divided into four phases: (1) EEG data collection, (2) pre-processing, (3) feature extraction and (4) classification using learning models. Under EEG data collection, we discuss EEG metadata and the challenges arising from its diversity. The pre-processing phase covers different techniques for noise removal and data transformation as well as popular EEG pre-processing tools. Commonly used EEG-based features for ASD classification, feature extraction techniques and feature selection techniques are discussed under the feature extraction phase. The classification phase covers different machine learning algorithms and evaluation metrics.
Finally, the paper discusses the existing challenges, limitations and potential areas for future work.
Overview of the Current Diagnostic Criteria
The etiology of ASD is still under research and lacks a well-defined medical test for ASD diagnosis. Current diagnostic criteria are behaviour dependent, which utilizes direct observation and standardized interviews (Newschaffer et al., 2007). They are based on the presence or absence of specific behaviours. These practices are generalized as a comprehensive developmental approach, where several characteristics of a child's development are evaluated. These characteristics include different levels of functioning, the child's developmental progress, genetic, family, medical and educational histories and child's ability to apply the skills in everyday life. DSM-IV-TR (Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision), ADOS (Autism Diagnostic Observation Schedule), Autism Diagnostic Interview-Revised (ADI-R), The Diagnostic Interview for Social and Communication Disorders (DISCO) and Developmental, Dimensional and Diagnostic Interview (3di) are some techniques used for clinical diagnosis. Among them, ADOS and ADI-R are considered as the main standards (Reaven et al., 2008).
In addition to determining ASD or no-ASD, another key aspect is the autism severity rating. ADOS score is widely used for ASD severity measurement. Besides ADOS and ADI-R, several other scales including Childhood Autism Rating Scale (CARS), Gilliam Autism Rating Scale (GARS) and Autism Behaviour Checklist (ABC) also provide autism severity ratings (Gotham et al., 2009). Severity scores assist in providing specific individualized interventions rather than more general treatment plans. It would also help monitor the change in risk profiles as the child's development progresses and how the subject is responding to intervention.
Behaviour-Independent Diagnostic Practice
According to a recent CDC report, one in 59 children in the United States has been diagnosed with ASD (Baio et al., 2018). In 2010, it was calculated to be 1 in 68. Thus, it is evident that the prevalence of ASD is increasing over the years. ASD might not be a fatal disease, yet the daily activities of autistic people are extremely challenging. Even though ASD cannot be cured, the symptoms can be improved through proper individualized treatment. An early diagnosis would facilitate starting the medication, therapies and social skills training at an early age which enhances a child's response to treatment.
A significant challenge is that the current clinical diagnosis practices are subjective and, in particular, behaviour dependent. Current diagnostic procedures require input from a team of multi-disciplinary professionals. Besides, a complete profile of the child's abilities is required for an accurate diagnosis. Such comprehensive evaluations sometimes take several months or even years, delaying the diagnosis and the treatment. Also, current nosological systems and ASD severity measures work well for children above the age of three but are less accurate for children younger than two years of age.
Early diagnosis of ASD is difficult as the defining behaviours often become significant only after the first three years and routine well-baby check-ups do not contain simple, reliable measures to identify them. Early diagnosis of milder forms of ASD is even harder as the symptoms tend to overlap with several other diagnoses. Moreover, the early diagnosis needs to be re-evaluated because of rapid development in early ages and the impact of the intervention (Hollander et al., 2011). There also exists the problem of misdiagnosis (Mandell et al., 2007). The symptoms for ASD being diverse and several symptoms being overlapped with other diagnoses similar to ADHD (Mayes et al., 2012) are the major causes for the misdiagnosis.
The fact that the etiology and developmental course are getting more diverse with time makes future diagnosis even more challenging. By developing behaviour-independent diagnostic approaches which are simple, affordable and easy to implement in routine well-baby check-ups, these challenges can be resolved.
EEG as a Diagnostic Test
A behaviour-independent approach can be designed based on Electroencephalography (EEG). EEG records the electrical activity of the brain by recording the electrical impulses of different frequencies used by neurons for communications through electrodes attached to the scalp. EEG is being studied for a long time to support medical diagnosis (Niedermeyer and da Silva, 2005). The abnormalities in EEG signals have been found to be reliable biomarkers for medical conditions such as epileptic seizures (Tzallas et al., 2009) and Alzheimer's disease (Jeong, 2004). In addition to diagnosis, novel approaches to facilitate treatment plans using EEG have also been proposed (Fan et al., 2015).
Literature reveals that two different types of EEG based approaches were proposed in the past to diagnose ASD: (1) comparison method and (2) pattern recognition and classification approach (Hashemian and Pourghassem, 2014). In the first approach, EEG signal characteristics of typically developing individuals are compared with that of individuals with ASD. This paper focuses on the second approach which adopts machine learning algorithms to analyse the EEG signal and classify ASD.
Phase 1: EEG Data Collection
Recording the EEG data is the first step in the classification methodology. Our focus is not on the technical details of EEG data collection but on the metadata. The metadata of EEG datasets plays a crucial role in deciding the processes carried out in the next phases of classification. The metadata of an EEG dataset generally includes details regarding the sampling frequency, number of electrodes, electrode locations, EEG montage, recording duration, the activities in which the subjects were involved while recording the data and data types. The EEG output is a relative value. The values are generated based on a reference point. The montage provides information about the point of reference. Different EEG montages include bipolar, common electrode reference, average reference, weighted average reference and Laplacian.
The datasets used in the related studies are unique. They have diverse metadata. Different file formats of the EEG data include, but are not limited to, the BrainVision file formats (.vhdr, .vmrk, .eeg), the European data format (.edf) and the BioSemi data format. EEG signals were sampled at different frequencies of 128 Hz, 250 Hz, 256 Hz and 500 Hz. While recording the EEG signals, subjects were involved in different sets of activities such as blowing bubbles to control the subjects' attention, carrying out the ADOS assessment and keeping the subjects in a resting state.
EEG datasets with different numbers of channels and different electrode placement locations were also used. The International 10-20 system is an internationally recognized electrode placement standard. Placement of electrodes in the locations Fp1, Fp2, F7, F3, Fz, F4, F8, T3, C3, Cz, C4, T4, T5, P3, Pz, P4, T6, O1 and O2 according to the International 10-20 system is shown in Fig. 1. One major limitation is that, because of the diverse EEG datasets, each proposed approach becomes specific to its dataset. None of the studies has tested its approach over different datasets with varying metadata. Hence it is challenging to measure how well the approaches can be generalized.
Phase 2: Pre-Processing
Overview of EEG Signal Pre-Processing

Data pre-processing is a crucial step for any machine learning based approach because real-world datasets contain incomplete, noisy and inconsistent data. Poor data quality will result in poor classification. According to (Han et al., 2011), major tasks in data pre-processing include data cleaning, data integration, data transformation, data reduction and data discretization. This paper emphasizes the noise elimination techniques because of their significance in the context of classifying ASD. The noise in the EEG signal is induced by both non-physiological factors (the external environment) and physiological factors (arising from the subject being examined). Several external artefacts are discussed in (Tandle and Jog, 2015). The artefacts that depend on the subjects are of three main types: electrooculogram (EOG), electromyogram (EMG) and cardiac activity. EOG is the noise generated by eye blink and cornea movement, while EMG is the noise generated by muscle activity around the electrodes, specifically in the neck, face and scalp.
Independent Component Analysis
Independent Component Analysis (ICA) is a multivariate analysis which decomposes the original signal into a set of Independent Components (ICs). It separates the signals from different sources out of a set of mixed signals. Two important assumptions are made in ICA: (1) the signals from different sources are independent of each other and (2) the independent components have non-Gaussian distributions. Artefact removal in EEG signals using ICA is a three-step process: (1) decomposing into ICs, (2) discarding the ICs identified as artefacts and (3) recombining the remaining ICs to form an artefact-free signal (Lai et al., 2018).
Popular EEG signal processing tools including EEGLAB provide functionalities to perform ICA (Delorme and Makeig, 2004). Even though multiple ICA algorithms exist, FastICA, Infomax and JADE are being widely used (Azlan and Low, 2014). Several studies report second-order blind identification (SOBI), an ICA algorithm, as a successful technique to remove all types of artefacts from the EEG signal (Urigüen and Garcia-Zapirain, 2015). ICA has been used as a pre-processing technique for ASD classification in (Djemal et al., 2017). It has also been used in (Abdulhay et al., 2017) as a pre-processing step to detect abnormal EEG activities and neural connectivity in autistic individuals.
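As a rough illustration of this three-step process, the following is a minimal sketch using scikit-learn's FastICA; the synthetic data, the number of components and the index of the artefactual component are illustrative assumptions rather than values taken from any of the reviewed studies.

```python
# Minimal ICA-based artefact removal sketch (assumptions: synthetic data,
# component 0 has already been identified as the artefact, e.g. by inspection).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
eeg = rng.standard_normal((2560, 19))        # placeholder: 10 s of 19-channel EEG at 256 Hz

ica = FastICA(n_components=19, random_state=0)
sources = ica.fit_transform(eeg)             # step 1: decompose into independent components

sources[:, [0]] = 0.0                        # step 2: discard the artefactual component(s)

eeg_clean = ica.inverse_transform(sources)   # step 3: recombine into an artefact-free signal
print(eeg_clean.shape)
```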
Principal Component Analysis
Principal Component Analysis (PCA) converts a set of possibly correlated variables into a set of linearly uncorrelated variables using orthogonal transformation. The linearly uncorrelated variables are called the principal components. The principal components are constructed in such a way that they maximize the variance and the i th principal component is orthogonal to the (i-1) th principal component. The principle behind using PCA as a denoising technique is that the principal components with relatively higher variance compared to the effect of the noise are relatively less noisy. Denoising techniques based on PCA have been presented in (Kang and Zhizeng, 2012;Turnip and Junaidi, 2014). However, the survey done in (Urigüen and Garcia-Zapirain, 2015) reveals that recent works prefer ICA over PCA since artefacts are better modeled as independent components rather than orthogonal components.
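The denoising principle described above can be sketched as follows; the synthetic data and the number of retained components are illustrative assumptions, not recommended values.

```python
# Minimal PCA-denoising sketch: keep the leading, high-variance components and
# reconstruct the signal from them, treating the discarded components as noise.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
eeg = rng.standard_normal((2560, 19))            # placeholder multi-channel EEG

pca = PCA(n_components=10)                       # retain the 10 highest-variance components
scores = pca.fit_transform(eeg)                  # project onto the principal components
eeg_denoised = pca.inverse_transform(scores)     # reconstruct from the retained components
print(round(pca.explained_variance_ratio_.sum(), 3))
```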
Wavelet-based Analysis
Wavelet is a rapidly decaying oscillation with a zero-mean value. There are two types of wavelet transforms, the continuous wavelet transform (CWT) and the discrete wavelet transform (DWT). DWT has been frequently used for denoising signals. Denoising using DWT is a three-step process: (1) decompose, (2) discard and (3) reconstruct. Initially, the signal is filtered using a low-pass and a high-pass filter and the outputs are called approximation coefficients and detail coefficients, respectively. Signal decomposition using DWT is shown in Fig. 2.

[Table 2, listing the pre-processing techniques applied in each of the reviewed studies, appears here.]

The high-frequency band (detail coefficients) contains most of the noise, but useful information as well. The useful information needs to be preserved while removing the noise. A threshold value is chosen and the coefficients with magnitudes less than the threshold value are discarded. The signal is then reconstructed based on the new coefficients (inverse DWT). The low-pass subband is decomposed further at multiple levels for further analysis. Table 1 states the five frequency bands and the noise separated using DWT as the initial step of noise removal.
In (Kumar et al., 2008) and (Zhou and Gotman, 2004), techniques based on wavelet transformation to denoise EEG signals have been proposed. The Daubechies wavelet was used in (Bosl et al., 2018; Djemal et al., 2017) and the Coifman wavelet was used in (Ahmadlou et al., 2012a) to perform DWT. CWT was used in (Jamal et al., 2014). However, in these studies, wavelets were used for signal decomposition rather than noise removal.
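A minimal wavelet-denoising sketch with PyWavelets is shown below; the db4 wavelet, the four-level decomposition and the universal threshold are illustrative choices, not parameters prescribed by the cited studies.

```python
# Minimal DWT denoising sketch: decompose, threshold the detail coefficients,
# and reconstruct with the inverse DWT.
import numpy as np
import pywt

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 256)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(256)   # placeholder noisy signal

coeffs = pywt.wavedec(x, "db4", level=4)                # step 1: decompose (A4, D4, D3, D2, D1)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from the finest details
thr = sigma * np.sqrt(2 * np.log(len(x)))               # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]  # step 2: discard
x_denoised = pywt.waverec(coeffs, "db4")                # step 3: reconstruct
print(x_denoised.shape)
```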
Visual Inspection
Manual noise removal using visual inspection is an easy and reliable approach. However, it is hard to perform when the dataset contains long-duration signals from many subjects. Visual inspection was used in (Thapaliya et al., 2018) as a pre-processing step in classifying ASD.

Table 2 summarizes the different pre-processing techniques used to process the EEG signal. Even though the last two studies are not related to classifying ASD using machine learning algorithms, they have been included to introduce new techniques for noise removal, as noise filtering is independent of the application. The "X" symbol indicates techniques used for noise filtering and the "O" symbol indicates other pre-processing techniques used for data transformations. Even though DWT can be used for removing noise, the studies have used it primarily to decompose the signal into different frequency bands. Frequencies outside the range of the frequency bands were filtered using band-pass filters in most of the studies. Band-pass filters are simple and easy to implement. I-FAST and Makoto's pre-processing pipeline combine several techniques for EEG signal pre-processing. Apart from the techniques discussed earlier, adaptive filtering, the Fourier transform, the source component technique, multivariate regression and empirical mode decomposition have also been used for artefact removal. The source component techniques are a combination of two approaches for artefact removal based on brain electric source analysis and principal component analysis proposed in (Lins et al., 1993; Berg and Scherg, 1991).
The studies done in (Khatwani and Tiwari, 2013; Urigüen and Garcia-Zapirain, 2015; Lai et al., 2018) have presented surveys of denoising techniques. Khatwani and Tiwari (2013) have discussed denoising techniques based on PCA, ICA, wavelets and wavelet packets in their work. The effectiveness of these techniques was measured based on Mean Squared Error (MSE), signal to noise ratio (SNR) and peak signal to noise ratio (PSNR). High SNR and PSNR values and low MSE values are indicators of less noisy signals. They conclude that the wavelet-based method produces better results based on the MSE, SNR and PSNR values calculated in different studies. Besides, the work done in (Lai et al., 2018) has presented ICA- and wavelet-based analyses that use statistical analysis methods and additional artefact removal techniques.
Urigüen and Garcia-Zapirain (2015) presented a detailed survey of denoising techniques in their work. Their study explores the noise removal techniques under the following major categories: linear regression methods, EOG correction methods, filtering methods, blind source separation (BSS) methods, source decomposition methods, the combination of different algorithms and other methods. ICA and PCA were categorized under BSS methods with several other techniques. Wavelets were categorized under source decomposition methods. Methods suitable for removing specific artefact types such as ocular artefacts, muscle artefacts, cardiac artefacts and mixed artefacts were also discussed. Their study concludes that the best technique for a given scenario should be chosen considering the type of EEG signal, artefacts that are present and the signal to contaminant ratio. There is no best technique which can be applied to all scenarios.
EEG Pre-Processing Tools
Several tools with user-friendly graphical user interface (GUI) have been developed to facilitate the analysis of EEG recordings. This section summarises some of the widely used tools.
EEGLAB
EEGLAB was initially developed as a MATLAB toolbox with a GUI to process EEG data (Delorme and Makeig, 2004). New tools and plugins for EEGLAB have been continuously developed over time making it a versatile pre-processing tool. In (Delorme et al., 2011), the authors have summarized several pre-processing tools which can be integrated with the EEGLAB. Some of the tools are EEGLAB STUDY Design, SIFT (source information flow toolbox), NFT (neuroelectromagnetic forward head modelling toolbox), BCILAB (brain-computer interface LAB) and ERICA (experimental real-time interactive control and analysis). These tools are freely available with a GUI/CLI (Command Line Interface) environment.
Recent versions of EEGLAB can process EEG, magnetoencephalography (MEG) and other electrophysiological data. Some of the useful features are a user-friendly GUI, the privilege for experienced MATLAB users to interact using MATLAB scripts, ability to handle multiple data formats, effective data visualization, ICA functionality, time/frequency transforms, continuous upgrades with new tools and plugins and availability of ample tutorials.
Brainstorm
Brainstorm is an open-source application for MEG/EEG analysis (Tadel et al., 2011). This application is intended to provide user-friendly tools to the scientific community. Hence, Brainstorm provides a rich and intuitive GUI (Graphical User Interface). It is written using MATLAB scripts and Java, which makes it portable, cross-platform software (a stand-alone version for users who do not own a MATLAB license is also available). End users without any programming knowledge can use the software easily as well. Besides, advanced users have the privilege of interacting using MATLAB scripts, similarly to EEGLAB. It is well documented with ample support online. Apart from the inbuilt pre-processing pipeline, other tools such as EEGLAB can be used for pre-processing and the results can be imported. Brainstorm supports different file formats including Neuroscan (cnt, eeg, avg), Brainvision BrainAmp, EGI (raw), EEGLAB, Cartool and generic ASCII text files.
Overview of EEG Feature Extraction
After pre-processing the EEG signal, the next step is to extract features to train the learning model. Noise filtering techniques for EEG are generally independent of the application. We can use the same noise filtering techniques regardless of the considered disorder type. However, feature extraction techniques are often application specific. Depending on the features that we need to extract, the feature extraction techniques vary. In general practice, features which have a strong correlation with the target class are selected. If the root cause of ASD is known, features can be easily selected utilizing the available background knowledge. Since the etiology of ASD is yet to be discovered, the feature extraction is a trial and error approach. Even though the etiology is unknown, several studies have focused on the abnormality identification in EEG signals of autistic individuals. Such abnormalities can be used as features in the classification task.
EEG-based Features for ASD Classification
Power, Hemispheric Asymmetry and Coherence

Wang et al. (2013), have reviewed abnormal power, abnormal hemispheric asymmetry and abnormal coherence in resting state EEG. EEG power is further categorized into relative and absolute power. Relative power measures the activity in one band compared to other bands while absolute power measures the activity in one band independent of the others. Their work has summarized the variations in absolute and relative powers of different frequency bands (delta, theta, alpha, beta and gamma) of different brain regions. They have identified a U-shaped profile where high-frequency bands (beta, gamma) and low-frequency bands (delta, theta) display excessive power while middle range frequency bands (alpha) display reduced power as shown in Fig. 3.
Enhanced power in the delta and theta bands has been found in both relative and absolute powers in multiple regions. Similarly, the alpha band also shows reduced power in both relative and absolute powers. However, excess power is seen in relative beta and absolute gamma only. Their work also highlights that, according to most of the existing literature, the left hemisphere exhibits greater power than the right hemisphere in ASD patients. Separate studies report the dominance of the left hemisphere over the right hemisphere in the delta, alpha and beta powers. Finally, the presence of weaker long-range coherence patterns has also been pointed out.
Statistical Features
Standard deviation and mean are the commonly used statistical features. Statistical features were used in (Bosl et al., 2018;Cheong et al., 2015;Djemal et al., 2017;Thapaliya et al., 2018) to classify ASD.
Entropy
Entropy is one of the frequently used features in ASD classification. Entropy is a measure of the uncertainty of a random variable. If X is a discrete random variable, its entropy is calculated according to Equation 1:

H(X) = -Σ_x p(x) log2 p(x)    (1)

where p(x) is the probability mass function of X. There are many entropy-based methods such as sample entropy, Shannon entropy, multiscale entropy and modified multiscale entropy. Entropy has been used in (Bosl et al., 2018; Djemal et al., 2017; Thapaliya et al., 2018) for the diagnosis of ASD. Several EEG-based features for ASD classification including EEG rhythm, absolute and relative power, coherence, mu wave suppression, cordance and multiscale entropy have been discussed in (Hashemian and Pourghassem, 2014).

[Table 3, summarizing the feature extraction and feature selection techniques used in each of the reviewed studies, appears here.]
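As an illustration, a Shannon-entropy feature for a single channel can be estimated from a histogram of the signal values; the bin count below is an illustrative assumption.

```python
# Minimal Shannon-entropy feature sketch: estimate p(x) with a histogram and
# apply H(X) = -sum p(x) log2 p(x).
import numpy as np

def shannon_entropy(signal, bins=64):
    counts, _ = np.histogram(signal, bins=bins)
    p = counts / counts.sum()          # empirical probability mass function
    p = p[p > 0]                       # drop empty bins (0 * log 0 is taken as 0)
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(3)
channel = rng.standard_normal(2560)    # placeholder single-channel EEG segment
print(round(shannon_entropy(channel), 3))
```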
Feature Extraction Techniques
Feature extraction techniques are used to compute the selected features. However, there are techniques which are applied while pre-processing the signal to facilitate feature extraction, such as ICA, PCA, DWT and adaptive filtering. For instance, instead of calculating the standard deviation of the original signal, DWT can be applied to decompose the signal at multiple levels. Then the standard deviation can be calculated for the decomposed signals. Most of these algorithms split the original signal into multiple components and they can also be used for noise filtering. These techniques only pre-process the signal to facilitate feature extraction but do not extract any features (Lakshmi et al., 2014; Azlan and Low, 2014).

Table 3 summarizes the different techniques used for feature extraction in the related studies. Statistical feature extraction and entropy-based techniques are more common compared to other techniques. Standard deviation and mean are the common statistical features that are extracted. Among several entropy-based techniques, Shannon entropy, multiscale entropy and modified multiscale entropy have been used in the related studies. One noteworthy aspect is that, unlike pre-processing techniques, feature extraction techniques are sparsely distributed across the studies. Because of the unknown etiology, studies intend to discover new features which have strong correlations with ASD classification. Almost all the studies use a unique set of features and, as a result, a different set of feature extraction techniques was used.
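The "decompose first, then compute statistics" idea described above can be sketched as follows; the wavelet, decomposition level, channel count and chosen statistics are illustrative assumptions and do not correspond to any particular study.

```python
# Minimal per-band feature extraction sketch: decompose each channel with a
# 4-level db4 DWT and collect the mean and standard deviation of every sub-band.
import numpy as np
import pywt

rng = np.random.default_rng(4)
eeg = rng.standard_normal((2560, 19))                        # placeholder (samples, channels)

features = []
for ch in range(eeg.shape[1]):
    for band in pywt.wavedec(eeg[:, ch], "db4", level=4):    # A4, D4, D3, D2, D1
        features.extend([band.mean(), band.std()])
features = np.asarray(features)                              # 19 channels x 5 bands x 2 stats = 190
print(features.shape)
```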
Feature Selection Techniques
After the feature extraction phase, many features will often be available. For example, suppose the EEG dataset contains data from 128 channels and, after decomposing the signal into five frequency bands, standard deviation, mean and entropy were calculated. At the end of the process, 1920 features (128 channels x 5 frequency bands x 3 features) would be generated. Training a model with 1920 features requires a larger number of training samples. However, in many of the previous studies, fewer than 100 samples were available. In addition, irrelevant features will negatively impact the classification. One challenge after feature extraction is to select the best features which contribute to the classification process. Feature selection reduces overfitting, improves accuracy and reduces training time. Some of the commonly used feature selection techniques are correlation-based feature selection (CFS), analysis of variance (ANOVA), PCA and the training with input selection and testing (TWIST) algorithm.
The different feature selection techniques used in related studies are summarized in Table 3. Here, RQA denotes Recurrence Quantitative Analysis and DFA indicates Detrended Fluctuation Analysis. ANOVA has been used in several related works by the same authors. The feature selection techniques that were used are also unique to the different studies. However, there is often no principled reason behind the choice; it is usually based on which technique produces the best results.
There are no best features, best feature extraction or feature selection techniques. Often, it is a trial and error approach. Besides, since the etiology of ASD is unknown, there is a high possibility for discovering new features with a strong correlation to ASD classification. The best approach is to try different combinations of feature sets and techniques and select the one which produces the best results.
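As an example of ANOVA-based selection, one of the techniques listed above, the following sketch uses scikit-learn's SelectKBest with an F-test; the synthetic feature matrix, the labels and the value of k are illustrative assumptions.

```python
# Minimal feature-selection sketch: keep the k features with the highest ANOVA F-scores.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(5)
X = rng.standard_normal((60, 1920))        # 60 subjects x 1920 extracted features (synthetic)
y = rng.integers(0, 2, size=60)            # 0 = typically developing, 1 = ASD (synthetic labels)

selector = SelectKBest(score_func=f_classif, k=20)
X_selected = selector.fit_transform(X, y)  # retain the 20 most discriminative features
print(X_selected.shape, int(selector.get_support().sum()))
```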
Introduction to Classification
The selected features from the feature extraction phase are fed as input to the fourth phase, which is the final phase in diagnosing ASD. In this section, we have summarized different machine learning algorithms which have been used frequently in the context of ASD classification and different techniques to evaluate the correctness of the trained model.
For the classification task, the dataset is divided into two mutually exclusive sets, one for training the model and the other one to test the model. Any machine learning based classifier functions in the following manner. Initially, a classification model is built based on the training data. Then its correctness is measured by applying the model on the test set. If the obtained accuracy is not satisfactory, the model will be retrained and retested. It is impossible to universally define an algorithm as the best fit for a specific problem. Finding a suitable algorithm is an empirical task.
In this section, our intention is not to provide an in-depth understanding of the learning algorithms but to give an abstract idea about the algorithms, their pros and cons and their applications in the context of ASD.
Support Vector Machine
The idea of support vector machine was introduced in the 1990s by Boser, Guyon and Vapnik. The original SVM is a supervised, non-probabilistic, binary classifier. It can classify only linearly separable data. Using the idea of kernels, SVM can classify data which are not linearly separable by mapping them to a higher dimensional space (Burges, 1998). SVM classifies the data points by constructing a hyperplane that separates the data points of available target classes as shown in Fig. 4.
Some of the advantages of using SVM are the ability to handle high dimensionality (>10^6), efficient memory usage and versatility (due to the ability to apply new kernels). If the number of features is greater than the number of training samples, performance tends to be poor.
Besides, SVM does not offer a direct probabilistic interpretation. Yet, the distance from the hyperplane can be used as an indirect measure of the probability. SVM was used in (Bosl et al., 2018;Jamal et al., 2014;Thapaliya et al., 2018) to classify ASD.
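A minimal SVM classification sketch on synthetic features is shown below; the RBF kernel and the regularization parameter are illustrative choices, and the decision_function output illustrates the "distance from the hyperplane" mentioned above.

```python
# Minimal SVM sketch: fit an RBF-kernel SVM, report test accuracy and the
# signed distances to the separating hyperplane.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X = rng.standard_normal((60, 20))          # 60 subjects x 20 selected features (synthetic)
y = rng.integers(0, 2, size=60)            # 0 = typically developing, 1 = ASD (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
print("distances to hyperplane:", clf.decision_function(X_te)[:3].round(2))
```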
Logistic Regression
Logistic regression has been used in the field of statistics since the 19th century. In machine learning, logistic regression is a popular algorithm for binary classification problems, such as classifying ASD and no-ASD (Dreiseitl and Ohno-Machado, 2002). When the model is trained, values for the weights and bias are learned. The core function used is a sigmoid function. The output value will be in the range of 0 to 1. By setting a threshold value T0, output values above T0 are classified into one class and output values below T0 are classified into the other class. In this context, the two classes are ASD and no-ASD. Logistic regression is simple, easy to implement and does not require extreme computational power. The authors of (Thapaliya et al., 2018; Grossi et al., 2017) have used logistic regression to diagnose ASD.
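The sigmoid-plus-threshold behaviour described above can be sketched as follows; the threshold T0 = 0.5 and the synthetic data are illustrative assumptions.

```python
# Minimal logistic-regression sketch: predict_proba returns the sigmoid output
# in [0, 1], which is then thresholded at T0.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
X = rng.standard_normal((60, 20))
y = rng.integers(0, 2, size=60)            # 0 = no-ASD, 1 = ASD (synthetic labels)

clf = LogisticRegression(max_iter=1000).fit(X, y)
p_asd = clf.predict_proba(X)[:, 1]         # probability of the ASD class
labels = (p_asd > 0.5).astype(int)         # threshold T0 = 0.5
print(labels[:10], p_asd[:3].round(3))
```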
Naïve Bayes
Naïve Bayes classifier is considered as the gold standard against which other algorithms are compared. It is based on the Bayes' theorem and considered naïve because of its class conditional independence assumption (Rish, 2001). Even though the assumption does not hold in many real-world problems, it produces reasonable, satisfactory results. Unlike SVM, it can predict the probability for a given sample to belong to a specific target class. Naïve Bayes classifier requires relatively less amount of training data and it is scalable, simple, easy to implement and fast. Among the proposed ASD classification approaches, Naïve Bayes has been used in (Thapaliya et al., 2018;Grossi et al., 2017).
Random Forest
Random forest is an ensemble algorithm which builds multiple models and combines the results of each model to generate the overall result (Liaw and Wiener, 2002). It creates a collection of decision trees from randomized subsets of the training data and during classification, results from each decision tree are combined and a result is generated. Building several models increase the accuracy of the result by reducing the effect of noise and other biases. However, many decision trees will slow down the algorithm. In (Bosl et al., 2018;Grossi et al., 2017) random forest technique has been used to classify ASD.
K-Nearest Neighbour (KNN)
Classification algorithms can be divided into lazy learners and eager learners. Lazy learners simply store the training data and do not build any models. They wait until a sample is provided for the classification. Eager learners construct a classification model using the training data and use the model for classification. Lazy learners are relatively slow during prediction. KNN is a lazy learning algorithm. Given a data sample, it would find K number of nearest neighbours from the training set and target class of the given sample will be decided based on the most common target class of the neighbours (Peterson, 2009). Among the proposed machine learning based ASD diagnosis approaches, KNN was used in (Bosl et al., 2018;Grossi et al., 2017).
Neural Networks
A single node in a neural network (Haykin, 2009) imitates a neuron in the human nervous system. They consist of an input layer, one or more hidden layers and an output layer. Each layer consists of one or more nodes. A model of a neural network is shown in Fig. 5. Each node is a computational unit which calculates the weighted sum of inputs from the previous layer.
In order to add non-linearity, activation functions are introduced into the nodes. The weighted sums are fed as parameters to the activation functions. The activation function decides the output of a node. Some of the common activation functions are ReLU (Rectified Linear Unit), sigmoid and linear functions. Given a sufficient number of training samples, neural networks can model most complex relationships. However, they require a considerably large amount of training data for learning. The majority of the proposed approaches use a neural network. Some of them are (Thapaliya et al., 2018; Ahmadlou et al., 2012a; Cheong et al., 2015) and (Djemal et al., 2017).
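A minimal sketch of such a network, using a multilayer perceptron with ReLU activations, is shown below; the hidden-layer sizes and the synthetic data are illustrative assumptions.

```python
# Minimal neural-network sketch: two hidden layers with ReLU activations.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(8)
X = rng.standard_normal((60, 20))
y = rng.integers(0, 2, size=60)            # 0 = typically developing, 1 = ASD (synthetic labels)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), activation="relu",
                    max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```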
The different algorithms used for classification in the related studies are summarized in Table 4. As the table illustrates, the neural network has been most frequently used for classification. Next to neural networks, SVM is the most common algorithm. Compared to other techniques, discriminant analysis, sequential minimal optimization and the k-contractive map have been seldom used. However, we cannot define one algorithm as the best since it depends on several factors. ASD classification being a medical application, the interpretability of the decision is important. Algorithms such as decision trees generate classification models with better interpretability.

Models generated by algorithms such as SVM and neural networks are black boxes which are difficult to interpret. However, they can model complex relationships, unlike simpler methods such as decision trees and Naïve Bayes. Further, if sufficient data is not available, neural networks will not produce satisfactory results since they require a large amount of data to train the model. Similarly, not all algorithms can handle noisy data. It is standard practice to start with simpler models and, if the results are not satisfactory, then move on to more complex models to avoid overfitting. If many samples are available, choosing neural networks has a high probability of producing more accurate results.
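The practice of trying several models can be sketched as follows, using 10-fold cross-validation (discussed in the evaluation section below); the data and hyperparameters are synthetic, illustrative assumptions.

```python
# Minimal model-comparison sketch: evaluate several classifiers with 10-fold CV.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(9)
X = rng.standard_normal((60, 20))
y = rng.integers(0, 2, size=60)            # 0 = typically developing, 1 = ASD (synthetic labels)

models = {
    "Naive Bayes": GaussianNB(),
    "KNN (k=5)": KNeighborsClassifier(n_neighbors=5),
    "Random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM (RBF)": SVC(kernel="rbf"),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)
    print(f"{name}: mean accuracy = {scores.mean():.2f}")
```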
Evaluation Techniques
Evaluating the learning model is an essential step in any classification task. Choosing evaluation techniques and evaluation procedures which are not suitable can lead to biased and misleading results. Two popular evaluation techniques are the holdout method and the cross-validation method.
Holdout Method
It is widely known as the training-testing approach. In the holdout method, the dataset is randomly partitioned into a training set and a test set which are mutually exclusive. The rule of thumb is to allocate two-thirds of the data for training and one-third for testing. Random subsampling is a variation of the holdout method in which several iterations of training-testing are carried out and the overall accuracy is obtained by combining the accuracies of the individual iterations.
One drawback of this approach is that when there is not enough data, the produced accuracy values are not reliable. Besides, if the same training set is used for several iterations, there is a high tendency for overfitting, where the model classifies the training set well but performs poorly when classifying new samples.
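A minimal sketch of the holdout method with random subsampling is shown below, following the two-thirds/one-third rule of thumb; the data, classifier and number of repetitions are illustrative assumptions.

```python
# Minimal holdout sketch: repeat a random 2/3-1/3 split and average the accuracies.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(10)
X = rng.standard_normal((60, 20))
y = rng.integers(0, 2, size=60)            # 0 = typically developing, 1 = ASD (synthetic labels)

accuracies = []
for seed in range(10):                     # random subsampling: repeat the holdout split
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=seed)
    accuracies.append(GaussianNB().fit(X_tr, y_tr).score(X_te, y_te))
print("mean holdout accuracy:", round(float(np.mean(accuracies)), 2))
```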
Cross-Validation
Cross-validation is very useful when only a limited number of data samples are available. In k-fold cross-validation, the dataset is divided into k partitions of approximately equal size. In each iteration, one partition is used for testing and all others are used for training. The overall accuracy is the number of correctly classified samples from all the iterations divided by the total number of samples. 10-fold cross-validation and leave-one-out cross-validation (only one sample is used for testing in each iteration) are commonly used k-fold cross-validation approaches.

[Table 4, summarizing the classification algorithms and evaluation techniques used in each of the reviewed studies, appears here.]

Evaluation techniques used in the related works are also summarized in Table 4. Most recent studies carried out after 2017 have used cross-validation, while the holdout method had been popular among the initial studies. Since the number of samples in the dataset is often limited in most of the studies, using cross-validation would produce more reliable results. Further, compared to the holdout method, a larger fraction of the dataset can be used for training.

ASD Classification Approaches

Thapaliya et al. (2018) aim to identify ASD using a combination of EEG and eye movement data. They have also compared different machine learning classifiers. EEG data were recorded from 128 channels at a sampling rate of 500 Hz while subjects were watching videos. Among the data collected from 52 participants, data of 34 participants were used in the study. Since the scope is limited to EEG, eye movement metrics are not discussed in detail. In the pre-processing stage, Makoto's pre-processing pipeline was used paired with visual inspection. For feature extraction, mean, standard deviation and entropy values were used. Fig. 6 shows the workflow of the classification process using EEG data.
The results were obtained after running the tests 200 times, except for DNN due to its computational intensity. Here, the ratio between the training and test sets was 80:20. Based on the results of 10x2 cross-validation, 100% accuracy was achieved for the combined dataset using the Naïve Bayes and Logistic Regression classifiers. Using only the eye movement data, Logistic Regression and DNN achieved 100% accuracy.
A data-driven approach is followed by Bosl et al. (2018) to classify ASD subjects, as shown in Fig. 7. Unlike most of the other studies, EEG data collected from 188 participants were used. These include 89 Low-Risk Controls (LRC) (3 of whom were diagnosed with ASD) and 99 children at High Risk for Autism (HRA) (32 of whom were diagnosed with ASD). In addition, the participants were between 3 and 36 months of age and were scheduled for several visits in that period. During the collection period, bubbles were blown to control the child's behaviour. EEG data from either 64 or 128 channels were recorded, but only the channels in the International 10-20 system were used for the analysis.
They have extracted features using Sample Entropy, DFA and Recurrence Quantitative Analysis (RQA). For each channel, nine features were generated: sample entropy, detrended fluctuation analysis, entropy derived from the recurrence plot, maximum line length, mean line length, recurrence rate, determinism, laminarity and trapping time. The features of interest were filtered using feature ranking methods (Recursive Feature Selection).
For the classification of ASD or no-ASD, only the data from ASD and LRC subjects were used for training with a leave-one-out cross-validation scheme. The HRA subjects (test set) were classified using data from the ASD and LRC subjects as the training set. SVM was used for classification. The distance from the hyperplane which is used as the decision boundary in SVM is used to calculate a severity score in the range of 1-10. Classification using SVM achieved 100% accuracy in distinguishing ASD subjects from the LRC subjects. However, when classifying HRA subjects, the classifier's accuracy decreased, as it was challenging for SVM to classify HRA subjects who were placed close to the decision boundary. Another prominent feature of this study is that severity scores were calculated, and they had a strong correlation with the actual severity scores.

Multi-Scale Ranked Organizing Map coupled with the Implicit Function as Squashing Time algorithm (MS-ROM/I-FAST) is an Artificial Neural Network based system with the capability to extract valuable features from EEG. Notably, it does not require any preliminary pre-processing. The algorithm was able to distinguish Mild Cognitive Impairment and/or Alzheimer's Disease with an accuracy of 94%-98%. The work done in (Grossi et al., 2017) has tried to measure its effectiveness in identifying autistic people. Their work involves 25 participants, 15 ASD (13 males and 2 females; 7-14 years of age; mean 10.4) and 10 typically developing (4 males and 6 females; 7-12 years of age; mean 9.2) individuals.
The collected data were resting state EEG obtained while the participants were opening and closing their eyes. Data were collected for 3 minutes at a sampling rate of 256Hz based on the International 10-20 system. The structure of I-FAST is demonstrated in Fig. 8. It consists of 3 phases: squashing phase, noise elimination phase and classification phase. In normal practice, noise filtering is followed by feature extraction.
However, the I-FAST algorithm transforms the EEG channels into feature vectors first using MSE and MS-ROM in the unsupervised squashing phase. Then in the noise elimination phase, irrelevant features are considered as noise and are filtered. The outputs of the MS-ROM are fed into the TWIST algorithm (Buscema et al., 2013) to select the best features.
Finally, with the help of machine learning algorithms, the classification phase classifies the data. A novel algorithm, MS-ROM, based on the Self Organizing Map (SOM) neural network is presented. It consists of three steps: sampling, projection and ranking. In the sampling phase, EEG signals are sampled many times at different scales and, using SOM, the generated subsamples are projected onto a two-dimensional grid. In the ranking phase, the generated grids are ranked based on cell frequency. Seven learning algorithms have been used for the classification process: sine net neural networks (Sn), logistic regression (LR), sequential minimal optimization (SMO), K-NN, K-Contractive Map (K-CM), Naïve Bayes and Random Forest. This approach was able to produce 100% accuracy consistently with the training-testing protocol (11 ASD and 6 control subjects for training and the rest for testing), and with the leave-one-out protocol the best results were produced by Random Forest with an accuracy of 92.8%, followed by K-Contractive Map and k-Nearest Neighbours with an accuracy of 87.3%.
A Computer Aided Diagnosis (CAD) system for ASD diagnosis using DWT, Shannon entropy and Artificial Neural Network (ANN) was proposed in (Djemal et al., 2017). EEG data were recorded from 19 subjects, 9 autistic subjects (six males and three females) between 10 and 16 years of age and 10 typically developing males between 9 and 16 years of age. Data were recorded in a relaxing state from 16 channels based on the international 10-20 acquisition system, sampled at 256 Hz and filtered using a band-pass filter. To remove ocular artefacts ICA was applied to the channels located close to the eyes (Fp1, Fp2, F7 and F8). Next, the signals were filtered using an elliptic band-pass filter and segmented into 10 minutes long segments. For better feature extraction, the EEG signal was decomposed into approximation and detail coefficients using DWT. A four-level DWT decomposition with Daubechies-four (db4) wavelet was used and the first four detail coefficients (D1, D2, D3 and D4) and the approximation coefficient (A4) were calculated. Then five statistical features (mean, standard deviation, variance, skewness and kurtosis) and four entropy features (log energy, threshold entropy, Renyi entropy and Shannon entropy) were extracted from all the DWT coefficients and the original EEG signal as demonstrated in Fig. 9. Two-layer Artificial Neural Network (ANN) was used for classification. Using 10-fold cross-validation, accuracy, sensitivity and specificity were measured.
The classification was carried out in several stages. In stage one, statistical features and entropy features were used separately as inputs to the ANN, keeping the segment length fixed. After identifying standard deviation and Shannon entropy as the best features, further optimizations were carried out in the next stages. Tests were carried out to find the optimum segment length and frequency band (wavelet coefficient). Results obtained using overlapping and non-overlapping segments were also analysed. The best segment length was found to be 50 sec. Similarly, the detail coefficients D1, D2, D3 and D4 produced the best accuracy of 98.9%. The test results for overlapping and non-overlapping segments revealed that 60 sec long segments with half-segment overlapping produce the best accuracy of 99.7%. The results conclude that the best approach for the CAD system is to extract standard deviation and Shannon entropy from the detail coefficients using 60 sec long half-overlapping segments.

Cheong et al. (2015), have proposed a classification technique based on DWT. The EEG dataset used in this research was recorded during stimulation of three tastes (salty, sour and sweet). Data were recorded from 30 ASD subjects between 3 and 10 years of age based on the International 10-20 system at a sampling rate of 500 Hz. They were identified with 3 levels of autism: 5 subjects with mild autism, 11 subjects with moderate autism and 14 subjects with severe autism. Only the channels related to taste sensing (C3, C4 and Cz) were selected for analysis. Fig. 10 shows the process.
Noise filtering was performed using a voltage threshold method, and a band-pass filter with a pass band of 0.4 Hz to 60 Hz was applied. In the feature extraction phase, DWT was applied using db4 as the mother wavelet. Standard deviations of the alpha frequency band (8 Hz-16 Hz) of the three channels for the three different tastes were calculated and used as inputs to the classification phase. A two-layer ANN was used for classification. Through trial and error, a data division of 65% for training, 10% for testing and 15% for validation was found to produce the best results, with an accuracy of 92.3% and a mean squared error of 0.0362. One significant feature of this methodology is the usage of a validation set. Other related studies only used training and test sets. When we adjust the model continuously based on the results obtained by evaluating the model on the test set, most likely we would end up overfitting the model to the test set. By using a validation set, the model can be evaluated for overfitting to the test set.
The authors of (Jamal et al., 2014) analyzed the functional connectivity of the brain using phase synchronization to find a reliable biomarker for diagnosing ASD. Studies suggest that inactivation of brain circuitry associated with face processing might be the cause for the challenges faced by autistic children to understand facial expressions. Hence, the connectivity of the brain was explored in order to find differences between ASD and normal children during face perception. Data were collected from 24 subjects, 12 children with ASD between 6 and 13 years of age (average = 10.2) and 12 typically developing children between 6 and 13 years of age (average = 9.7) while performing face perception tasks. Data were obtained from 128 channels at a sampling rate of 250 Hz and filtered within the range of 0.5 Hz to 50 Hz using a band-pass filter. Fig. 11 shows the methodology proposed in the study. Continuous Wavelet Transform (CWT) was applied and phase synchronized states (synchrostates) were obtained.
Since obtaining synchrostates is a long procedure, we have omitted the details. The brain connectivity graph was built where the EEG electrodes are the nodes and the synchronization values between them are the weights of the edges. Modularity, transitivity, characteristic path length, global efficiency, radius and diameter of the brain connectivity graph were selected as features for the classification task. These six features were calculated corresponding to the three facial stimuli (fear, happy and neutral) with minimum and maximum occurring states. Thus 36 features were obtained in total. Fisher's discriminant ratio was used for feature ranking. Nine different subsets of the features were created and used for classification separately. Discriminant analysis and SVM with a polynomial kernel were used for classification. When using all the min and max state features for all three stimuli, and all the max features for all three stimuli, classification using SVM with a second-order kernel produced the best accuracy of 94.7% with a sensitivity of 85.7% and a specificity of 100%.

Ahmadlou et al. (2012a), have proposed an approach which uses Fuzzy Synchronization Likelihood (Fuzzy SL). This approach analyses the functional connectivity of the brain of normal and autistic children using Fuzzy SL and diagnoses ASD based on that. An abstract workflow of their approach is demonstrated in Fig. 12. EEG data were collected from 18 subjects, 9 autistic children between 7 and 13 years of age (average = 10.8) and 9 typically developing children between 7 and 13 years of age (average = 11.1), according to the International 10-20 system at a sampling rate of 256 Hz.
Applying a Butterworth filter, the EEG was filtered within the range of 1-60 Hz and, using the wavelet transform, the signal was divided into 5 frequency bands: gamma, beta, alpha, theta and delta. The electrode locations were categorized into 7 regions: prefrontal, frontal, right temporal, left [...] subjects for testing) 100 times and obtaining the average, an accuracy of 95.5% was obtained with a variance of 1.2%. Since the number of subjects involved in the study is low, using cross-validation would have increased the reliability of the results and allowed more data to be used for training. In addition to the classification, the authors also claim that the measured regional Fuzzy SLs can be used in neurofeedback treatment as well.
Another study by the same authors to diagnose ASD using improved visibility graph (VG) fractality is presented in (Ahmadlou et al., 2012b). The power of scale-freeness of VG (PSVG) and improved PSVG were evaluated in their study for effectiveness in classifying ASD. Visibility graphs convert a fractal time series to a scale-free graph characterized by P(k) = k^(-r), where P is the probability distribution of the edges, k is the degree of the nodes and r is the power of scale-freeness. A scale-free graph is a graph whose degree distribution follows a power law. PSVG is the value of the slope when log2[P(k)] is plotted against log2[k]. The same data used in their previous study (Ahmadlou et al., 2010) was used for this study, and the same methodology as in the previous study was followed up to the wavelet decomposition. The details of their previous study will be discussed later in this section. The classification methodology is presented in Fig. 13.
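As a rough illustration of how a PSVG value could be obtained, the sketch below builds a natural visibility graph and fits the slope of the log-log degree distribution; the improved PSVG variant of the paper is not reproduced here, and the input series is a random placeholder.

import numpy as np

def visibility_degrees(x):
    """Node degrees of the natural visibility graph of a time series (O(n^2) construction)."""
    n = len(x)
    deg = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            # i and j "see" each other if every intermediate sample lies below the line i-j.
            visible = all(
                x[k] < x[j] + (x[i] - x[j]) * (j - k) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                deg[i] += 1
                deg[j] += 1
    return deg

def psvg(x):
    deg = visibility_degrees(x)
    ks, counts = np.unique(deg[deg > 0], return_counts=True)
    p = counts / counts.sum()
    # P(k) ~ k^(-r): r is minus the slope of log2 P(k) against log2 k.
    slope, _ = np.polyfit(np.log2(ks), np.log2(p), 1)
    return -slope

r = psvg(np.random.randn(512))   # PSVG of one wavelet sub-band segment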
PSVG and improved PSVG values were calculated for all 5 sub-bands. Using ANOVA, features with p-values less than 0.01 were selected as inputs to the EPNN. PSVG computed for the beta band and improved PSVG computed for the beta and alpha bands were selected. About 80% of the data were selected for training and 20% were used for testing. The classification was performed 200 times. Classification based on improved PSVG achieved an average accuracy of 95.5% with 1.7% variance, while classification based on PSVG achieved an average accuracy of 84.2% with 1.8% variance.
The diagnosis approach proposed in (Bosl et al., 2011) is one of the initial attempts which utilized analysis of EEG data to produce a biomarker for children at high risk for ASD. The goal of their study was to demonstrate that mMSE (modified multiscale entropy) can be used as a biomarker to distinguish typically developing children from children at high risk for ASD. The children with an older sibling diagnosed with ASD were categorized as high risk for ASD. The workflow of the approach is presented in Fig. 14.
Their study included 79 participants, among which 46 were at high risk for ASD and 33 were controls. Similar to the other studies, the control subjects were defined on the basis that they have a typically developing older sibling and no family history of neurodevelopmental disorders. The participants were between 6 and 24 months of age. From some participants, data were collected multiple times at different ages. Those data were considered as independent datasets; hence, even though there were only 79 participants, a total of 143 sets of data were included in the study. EEG data were collected using a 64-channel Sensor Net System while blowing bubbles. Signals were band-pass filtered at 0.1 to 100.0 Hz and sampled at a rate of 250 Hz. Out of the 2-minute-long recordings, only 20-second-long continuous segments were used for the analysis. As the first step for calculating the mMSE values, coarse-grained series from scales 1 to 20 were computed for each channel. Then the entropy values were calculated using modified sample entropy (mSE). The entropy values calculated using mSE are more robust to noise and consistent with short time series. Finally, for each channel, mMSE is defined as the series of mSE values of the coarse-grained series from scales 1 to 20. SVM, K-NN and Naïve Bayes algorithms were used for classification. The models were evaluated using 10-fold cross-validation. Unlike the other studies, boys and girls were classified separately as well as in a unified complete set. Moreover, classification was performed separately for different age groups, at 6, 9, 12, 18 and 24 months of age. For the dataset combining both boys and girls, K-NN achieved the maximum accuracy of 90% for the 9 and 18 months age groups. For the boys, SVM produced 100% accuracy for the 9 months age group, and for the girls, SVM produced the maximum accuracy of 80% for the 6 months age group.

Ahmadlou et al. (2010) have proposed a methodology based on fractality and a wavelet-chaos-neural network for the diagnosis of ASD, as illustrated in Fig. 15. They introduced the idea of using Fractal Dimensions (FDs) as features. FD is a non-integer dimension which shows the degree of complexity and self-similarity of a signal. Eyes-closed EEG data were collected from 17 subjects, 9 ASD children (6 to 13 years old) and 8 typically developing children (7 to 13 years old). The International 10-20 standard was used for electrode placement and data were recorded from 19 channels at a sampling rate of 256 Hz. This dataset was used by the authors in (Ahmadlou et al., 2012b) as well. Applying bandpass filters, the signals were filtered within 0-60 Hz, and using wavelet decomposition the gamma, beta, alpha, theta and delta bands were obtained. After preprocessing the signal, Higuchi's Fractal Dimension (HFD) and Katz's Fractal Dimension (KFD) algorithms were used for FD computation of the EEG signals. Statistically significant FDs with a p-value less than 0.01 were selected using ANOVA. Three features were obtained and were used for classification using a two-layer Radial Basis Function Neural Network (RBFNN). 82% of the data were used for training and 18% were used for testing. The classification was performed 100 times using random subsampling. The RBFNN produced results with 90% average accuracy and 0.15% variance.
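For illustration, here is a simple, generic implementation of Higuchi's fractal dimension, one of the two FD measures named above; k_max is a tunable parameter not specified in the text, and the input series is a placeholder.

import numpy as np

def higuchi_fd(x, k_max=10):
    """Higuchi's fractal dimension of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    mean_lengths = []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # Normalized curve length of the sub-series starting at m with step k.
            lm = np.sum(np.abs(np.diff(x[idx]))) * (n - 1) / (((len(idx) - 1) * k) * k)
            lengths.append(lm)
        mean_lengths.append(np.mean(lengths))
    # L(k) ~ k^(-FD): the FD is minus the slope of log L(k) versus log k.
    slope, _ = np.polyfit(np.log(np.arange(1, k_max + 1)), np.log(mean_lengths), 1)
    return -slope

fd = higuchi_fd(np.random.randn(1024))   # e.g. applied per channel and per sub-band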
The considered ASD diagnostic approaches were selected from recent studies, published between 2010 and 2018, that have applied machine learning approaches for ASD classification. We have explored the details of the EEG datasets, the techniques used, the methodology followed, significant aspects and the results of each related study. We have compared the pre-processing, feature extraction and classification techniques used by each of the studies in identifying ASD subjects. Thus, researchers and practitioners can use this survey to understand the useful and effective techniques.
Small Training Sets
The datasets of most of the related studies contain data from fewer than 36 subjects. Although it is challenging to acquire EEG data from autistic subjects, from a statistical point of view results obtained from such small samples are biased and less reliable. Thus, there is a limitation in building solid relationships using the available small datasets.
Less Real-World Practice
Most of the proposed approaches have not been tested practically in real-world applications. Many unexpected issues may arise when deploying an automated system in clinical practice.
Unavailability of a Benchmark Dataset
Although several ASD classification models have been proposed, there is a lack of a standard measure to compare them. If a globally accessible benchmark dataset with an adequate amount of data existed, the models could be applied to it and the best model selected.
Dataset-Specific Classification Models
Each of the proposed models was trained and tested on specific EEG data. For instance, the equipment and infrastructure used to record data, electrode locations, number of channels, sampling rate, and activities done by each of the subjects during data collection are specific to a given study. The models were not tested on multiple EEG datasets with different properties. Thus, the effectiveness of those models in classifying other EEG data with varying metadata cannot be fully assessed.
Limited access to data
In general, it is challenging to acquire and access personal health records or medical data due to ethical issues, health care policies and regulations. Thus, access to real data is limited in ASD diagnosis research.
Difficulty in Classifying Mild Forms of ASD
The severity of ASD varies from person to person. Many studies have reported difficulties in diagnosing milder forms of ASD compared to severe cases. When the predicted results are close to the decision boundary separating ASD and no-ASD, it is challenging to draw conclusions with an acceptable level of confidence.
Unknown Etiology
A clear understanding of the relationship between the connectivity of neurons in different regions of the brain and ASD is yet to be discovered. Thus, it is challenging to design a classification framework, and researchers are forced to follow empirical, trial-and-error approaches to overcome this barrier. If the etiology were clear, better features could be extracted and optimal classification models could be built.
ASD being a Spectrum Disorder
Unlike most typical disorders, identifying ASD or no-ASD is not entirely sufficient, because ASD represents a combination of neurodevelopmental conditions including high-functioning autism, Asperger's syndrome, pervasive developmental disorder and Rett syndrome. The type and severity of symptoms vary from person to person. Hence, in addition to classifying ASD, the learning model should estimate the severity and, if possible, the specific type of disorder.
Future Research Directions Predicting Severity Scores
The majority of the studies were aimed at classifying ASD, but generating severity scores similar to ADOS was explored by only a few. Developing an approach which could predict the severity of ASD and, if possible, identify the specific type of ASD would facilitate more individualized treatment.
Building a Generic Decision Support System
Another possible research direction is designing a generic decision support system which supports EEG data with different characteristics (differences in devices used for data collection, data types, sampling rate) and with a simple, user-friendly GUI to facilitate non-technical users. It can easily be deployed for real-world testing and if successful, can be adopted for general use.
Real-World Deployment of the Models
It is important to deploy an ASD diagnosis system in real-world clinical practice. It can be used in parallel with the manual diagnosis process, and the reliability and correctness of the system can be verified.
Optimization Techniques
After achieving the goal of real-world deployment of the models, different measures to optimize performance, resource utilization and accuracy can be explored.
Integrating Different Types of Data
Along with EEG, a model can be built integrating different data sources including eye movement, Functional Magnetic Resonance Imaging (fMRI) and thermal imaging. Combining EEG and eye movement data has already been proven to be an effective measure to classify ASD. A model based on different data sources will be more flexible, robust, reliable and accurate.
Study Importance for the Future Researchers
Researchers who are involved in EEG-based ASD classification can utilize this study to obtain a detailed understanding of the evolution of the proposed classification approaches over the past decade. Moreover, this study helps to identify the techniques and features that have already been used and their effectiveness. Further, for clinical practitioners who are interested in developing a decision support system to diagnose ASD and utilizing it for clinical diagnosis, this study will be helpful to select the optimum approach based on the expected accuracy, available resources and complexity of the methodology.
Conclusion
ASD is a lifelong neurodevelopmental condition that requires early intervention. This paper has explored related studies of ASD diagnostic approaches, discussed the applicability of the techniques, and identified the limitations in current clinical diagnostic practices and the need for a behaviour-independent diagnostic approach. Studies reveal that the prevalence of ASD is increasing every year. By identifying the shortcomings in current ASD diagnostic criteria, we have emphasized the need for behaviour-independent diagnostic approaches to facilitate early intervention. Dividing the classification methodology into four phases, this paper has discussed EEG data collection, pre-processing, feature extraction and classification. We have summarized different techniques, their strengths and weaknesses.
Concluding that one technique is the best for a given phase is impossible, because each technique has its own advantages and disadvantages. The suitable technique for an approach needs to be chosen based on its requirements. However, some techniques generally produce satisfactory, though not necessarily optimal, results. For instance, the noise filtering technique SOBI is widely used to remove noise from EEG signals. Similarly, given sufficient data to train, a neural network can classify the subjects with reasonable accuracy.
Further, we have discussed the diagnostic approaches proposed after 2010, providing the workflow of each methodology and its significant aspects. Even though most of the related studies have achieved accuracies close to 100%, only a few studies have calculated severity scores similar to ADOS. Additionally, a combination of psychophysiological data such as EEG, fMRI, eye movement data and thermal images can be considered to diagnose ASD. Further, we have presented the identified limitations, challenges and future research directions of ASD classification. Thus, researchers and practitioners can use this survey to facilitate their work.
Funding Information
This research is funded by the Senate Research Committee Grant SRC/LT/2019/18, University of Moratuwa, Sri Lanka.
Genetic structure and diversity among individuals of Copaifera langsdorffii Desf. from Mato Grosso, Brazilian Amazon, using ISSR markers
The Amazon is the largest tropical forest in the world and is home to around 20% of all the biodiversity on the planet. Among the species present in the Amazon is Copaifera langsdorffii, exploited mainly for the extraction of oil-resin and wood, often in incorrect ways, which can cause the loss of genetic variability. The aim of this study was to evaluate the genetic structure and diversity among individuals of C. langsdorffii located in Mato Grosso, Brazil, using ISSR markers. We sampled leaves from 27 adult individuals of C. langsdorffii, whose total genomic DNA was extracted. A total of 12 ISSR primers were used for the molecular characterization of the individuals. A grouping analysis was performed using the unweighted pair group method, along with Bayesian analysis, and the genetic diversity was characterized. The genetic diversity among and within the groups was demonstrated by the AMOVA. As a result, 106 fragments were amplified and 98.11% were polymorphic. The polymorphic information content of each primer ranged from 0.45 to 0.81. The dendrogram showed the formation of 4 distinct groups. The greatest genetic variability is found within the groups and not between them. The percentage of polymorphism, the genetic dissimilarity values and the genetic diversity indexes indicate that there is high genetic variability among Copaifera langsdorffii individuals, suggesting that the ISSR primers were efficient in detecting polymorphism in this species, that the individuals have potential to compose programs aimed at the preservation of the species, and that they could be integrated into germplasm banks.
Introduction
With approximately 6.7 million km², the Amazon Rainforest is considered the largest tropical forest in the world, and 60% of its extension is in Brazilian territory (Ferreira et al., 2010). As such, it is home to about 20% of all biodiversity on the planet. Among the species present in the Amazon, Copaifera langsdorffii Desf. (copaíba) stands out and is widely distributed in Brazil (Lorenzi, 1992; Reis et al., 2016). It is exploited mainly for the extraction of its resinous oil, which is used in popular medicine as an anti-inflammatory and bactericide (Lisboa et al., 2018), and in industry: in pharmacology, for drug development; in cosmetics, for the production of fixatives for fragrances, cosmetics and soaps; and in the production of varnishes and solvents (Veiga & Pinto, 2002). Copaiba oil also stands out as a raw material for the manufacture of soaps by small family businesses, fostering regional trade (Sousa et al., 2016), and its wood is used in the production of plywood (Lisboa et al., 2018).
Inadequate management of C. langsdorffii, as well as forest fragmentation, influence the genetic composition of populations. This also causes a decrease in the number of individuals, which in the long term can lead to an increase in inbreeding, a reduction in genetic variability and consequently the loss of the adaptive capacity of the species .
Intraspecific genetic variability is fundamental for the persistence of species in nature, therefore knowing how much genetic variation exists and how it is distributed geographically in each species is necessary in order to characterize its conservation status (Santos et al., 2010). Thus, many studies have used genetic markers as a tool to map the variability and genetic distribution of species.
According to Turchetto-Zolet et al. (2017), a genetic marker is any visible character or phenotype that is somehow analyzable, by which alleles in individual loci segregate in a Mendelian manner. DNA molecular markers are effective tools for revealing the presence of genetic polymorphism, and are widely used in genetic studies of plant populations (Borém & Caixeta, 2016; Cordeiro et al., 2020). Among them, those based on the polymerase chain reaction (PCR) stand out, since they can be applied to non-model species and can be classified according to the type of allelic inheritance into dominant and codominant markers (Turchetto-Zolet et al., 2017); dominant markers do not distinguish between dominant homozygotes and heterozygotes (Zietjiewicz et al., 1994; Costa et al., 2015). Among the dominant markers based on PCR are ISSR (Inter Simple Sequence Repeat) markers, widely used in studies related to genetic characterization due to their low cost and high reproducibility (Ng & Tan, 2015). Polymorphisms between individuals are identified in electrophoretic analyses by the presence or absence of amplicons.
Thus, the objective herein was to evaluate the diversity and genetic structure among individuals of Copaifera langsdorffii Desf. from Mato Grosso in the Brazilian Amazon, using ISSR markers.
Sampling
We sampled the leaves of 27 adult individuals of C. langsdorffii found in the location known as Pista do Cabeça (S 10° 23' 22", W 56° 24' 27"), in the municipality of Alta Floresta, located in the north of the state of Mato Grosso (Figure 1), whose climate, according to Alvares, Stape, Sentelhas, Gonçalves, and Sparovek (2013), is classified as Am type (tropical humid or subhumid). The average temperature is 24°C and precipitation is from 2800 to 3100 mm. The collection points were geo-referenced with the aid of a GPS (Global Positioning System). The individuals were collected at points where there was already evidence of the existence of the species in question, with the help of a local resident. They were later grouped into four sample subunits according to their geographic proximity. Sample subunits: I (AF1, AF6, AF7 and AF8); II (AF2, AF3, AF4, AF5 and AF9); III (AF10, AF11, AF12, AF13, AF14, AF15, AF16, AF17, AF18 and AF19) and IV (AF20, AF21, AF22, AF23, AF24, AF25, AF26 and AF27) [Figure 1D]. Sample subunit IV is further away from the others, being 36 km from III, 41 km from I and 44 km from II. The closest subunits are I and III (7 km).
DNA extraction and quantification
The laboratory procedures were performed at the Laboratory of Genetics and Molecular Biology of the University of Mato Grosso Carlos Alberto Reyes Maldonado (UNEMAT) in Alta Floresta, Mato Grosso. Total genomic DNA was extracted from approximately 300 mg of leaves from each sample, following the CTAB (cetyltrimethylammonium bromide) protocol described by Doyle and Doyle (1987). The evaluation of the quality of the extracted DNA, as well as the quantification, was performed using electrophoresis in 0.8% (m/V) agarose gel stained with ethidium bromide (0.6 µg/µL) for 20 minutes. After quantification, the extracted DNA samples were diluted in autoclaved distilled water and standardized to a concentration of approximately 10 ng/µL.
Statistical analysis
The matrix of presence (1) and absence (0) of the amplicons was obtained from visual evaluation of the most defined fragments for each primer in the 27 subjects studied. Based on the matrix, the genetic similarities between the individuals of C.
langsdorffii were determined using the Jaccard coefficient, and a grouping analysis was performed using the unweighted pair group method with arithmetic mean (UPGMA); the cutoff point was defined according to the methodology proposed by Mojena (1977). The bootstrap reliability index was also estimated based on 1000 repetitions, as well as the cophenetic correlation coefficient (r). These analyses were performed using the GENES program (Cruz, 2016). The Structure program (Pritchard et al., 2000), based on Bayesian analysis, was used to infer the structure of the population, which indicated distinct genetic groups (K) and assigned individuals to these groups. In all, 20 runs were performed for each K value (K = 4), with 200,000 initial iterations (burn-in) and 500,000 Markov chain Monte Carlo (MCMC) simulations. The criteria described by Pritchard and Wen (2004) and Evanno et al. (2005) were used to define the most likely K in relation to those proposed. To characterize the genetic variability between the genetic groups constituted by the Bayesian analysis, the genetic diversity of Nei (He) (Nei, 1978), the Shannon diversity index (I) (Lewontin, 1972) and the percentage of polymorphic loci (%P) were calculated from the analysis of the binary matrix of presence and absence, using the program POPGENE 1.32 (Yeh et al., 2000). Genetic diversity among and within groups was demonstrated using AMOVA (analysis of molecular variance), according to Excoffier et al. (1992), with the aid of the Arlequin 3.01 program (Excoffier et al., 2007).
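As an illustrative sketch of the clustering step just described (Jaccard dissimilarity on the binary presence/absence matrix followed by UPGMA, i.e. average linkage), assuming SciPy; the band matrix below is a random placeholder, not the study's data.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, cophenet, fcluster

bands = np.random.randint(0, 2, size=(27, 106))     # 27 individuals x 106 ISSR fragments
d = pdist(bands.astype(bool), metric='jaccard')     # pairwise Jaccard dissimilarities
tree = linkage(d, method='average')                 # UPGMA dendrogram
ccc, _ = cophenet(tree, d)                          # cophenetic correlation coefficient
groups = fcluster(tree, t=4, criterion='maxclust')  # e.g. cut the tree into 4 groups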
Results
The extracted DNA showed high quality. The 12 primers that were used amplified 106 fragments, of which 98.11% were polymorphic. The number of amplified fragments ranged from 6 (UBC-828 and UBC-873) to 16 (UBC-810), with an average of 8.83 fragments per primer (Table 1). The values of genetic dissimilarity observed among individuals ranged from 0.24 to 0.69. The least genetically dissimilar individuals were AF20 and AF21, and AF24 and AF25, both pairs with 0.24, and all belonged to sample subunit IV.
The most dissimilar individuals were AF5 and AF27, with 0.69, belonging to sample subunits II and IV respectively; as these subunits are among the most geographically distant from each other, this may reflect parentage from other regions. Among the combinations, 41% are within the range 0.41-0.50 (Figure 2). The mean dissimilarity found was 0.49.
Figure 2. Distribution of genetic dissimilarity between pairs of individuals of Copaifera langsdorffii.
Among the clustering methods tested, the UPGMA presented the highest cophenetic correlation coefficient (CCC) (0.728) and the lowest stress (10.39) and distortion (1.08). The genetic dissimilarity dendrogram based on the ISSR data was generated by the UPGMA method from the dissimilarity matrix, forming 4 groups (GI, GII, GIII, GIV) (Figure 3). Bayesian analysis demonstrated the existence of 2 distinct groups (k = 2), named A and B (Figure 4). The analysis of molecular variance (AMOVA), also based on the two groups obtained by Bayesian analysis, indicated that the greatest genetic variability is within each group (85.84% of the total variance) and not between them. The group genetic differentiation value (FST) was 0.14162, with 1023 random permutations, indicating that, between the groups, the variation is approximately 14% (p < 0.000).
Discussion
The primers used herein were effective in the detection of genetic polymorphism of C. langsdorffii and there is genetic diversity among the individuals sampled. The high percentage of polymorphism (99.80%) found in this study is similar to that found by Dúcar, Rewers, Jedrzejczyk, Mártonfi, and Sliwinska (2018), who evaluated the genetic diversity of eight species of Lotus sp. (Fabaceae), as well as that observed by Bagheri, Abbasi, Mahmoodi, Roofigar, and Blattner (2020) (97.60%), who studied the genetic variability of Astragalus subrecognitus (Fabaceae), which demonstrates the effectiveness of ISSR markers in the detection of polymorphism in species of the Fabaceae family.
The polymorphic information content (PIC) demonstrated that 11 of the 12 primers used in this study can be considered very informative for C. langsdorffii, since they presented PIC values above 0.5. Only the primer UBC-816 showed a value between 0.25 and 0.50, which makes it moderately informative. For Botstein, White, and Davis (1980), molecular markers that present PIC values below 0.25 are considered poorly informative, whereas those with values between 0.25 and 0.50 are classified as moderately informative and those above 0.50 as very informative.
The mean dissimilarity found was similar to that found by Brito et al. (2016) for the species Varronia curassavica, which, according to the authors, indicates a high genetic diversity among many pairs of individuals. Dissimilarity data indicate that there is no evidence of genetically identical individuals, and these therefore present potential for the composition of germplasm banks. The CCC value obtained in the UPGMA is considered satisfactory according to Rohlf (1970), since it is above 0.70, and indicates a good adjustment between the dissimilarity matrix and the cophenetic matrix.
Considering the results obtained by UPGMA and Bayesian analysis, the placement of individuals from different sample subunits in the same genetic grouping can be explained by the fact that their main dispersers are birds (Rabello, Ramos, & Hasui, 2010), and this type of dispersal allows seeds to travel over long distances. According to Trolliet, Forget, Doucet, Gillet, and Hambuckers (2017) and Oliveira et al. (2020), this plant-frugivore relationship has a fundamental role in the forest structure and may be one of the main mechanisms of dispersal of plant species.
High genetic diversity in Copaifera langsdorffii was also observed by Martins, Santos, Gaiotto, Moreno, and Kageyama (2008) who studied populations in Pontal do Paranapanema, in the state of São Paulo, using the analysis of microsatellite markers, as well as by Sebbenn et al. (2010), who evaluated a population of C. langsdorffii in the municipality of São Jose do Rio Preto, State of São Paulo. Indicating that even with the pressure exerted by exploration and deforestation, individuals with high genetic diversity can still be found that demonstrate the capacity to be used in the conservation of the species.
The Shannon index (I) resembled that found by Guerra, Gómez, Gutierrez, and Hahn (2018) (between 0.36 and 0.39), who evaluated genetic diversity in Adesmia bijuga Phil. using ISSR markers. Group B has the highest genetic diversity, as well as the highest values for the Shannon and Nei indices, which may be associated with the geographical distance between the individuals, since this group is basically composed of the individuals sampled in subunits III and IV. AMOVA indicated that the greatest genetic variability is within each group, which allows a greater number of combinations between individuals and, according to Demartelaere et al. (2020), is important for determining possible adaptations in the face of environmental changes. Nybom (2004) found, while analyzing studies performed with dominant markers, that long-lived, allogamous and late-successional plants presented greater genetic variability within populations, which is in accordance with the characteristics and life history of Copaifera langsdorffii.
Given the scenario of forest degradation in Brazil, the genetic diversity found among individuals of Copaifera langsdorffii corroborates the importance of forest preservation and conservation, since, according to Fonseca et al. (2021), the area deforested in the 2021 deforestation calendar (August 2020 to July 2021) was 10,476 km², 57% larger than that recorded in the previous year. The possible creation of ecological corridors between the fragments, to connect them or bring them closer together, would make gene flow more viable. Martins et al. (2008) state that the connectivity between fragments facilitates the maintenance of genetic diversity and allows the movement of fauna, enabling the dispersal of seeds of zoochoric species (Trolliet et al., 2017; Oliveira et al., 2020).
Conclusion
The results herein indicate that the evaluated individuals of C. langsdorffii have high genetic diversity and thus have potential to compose programs aimed at the preservation and conservation of the species, and may be integrated into germplasm banks. They also evidence the efficiency of the DNA extraction and amplification method using the ISSR markers described in this work, confirming the possibility of applying these to genetic studies of other populations of the species.
Hybrid Fusion Approach for Alzheimer’s Disease Progression Employing IHS and Wavelet Transform
Abstract — Image fusion has become a commonly utilized technology for boosting the medical information in brain images. Magnetic resonance imaging (MRI) depicts the morphology of the brain tissue, it has great spatial resolution but lacks functional information. Positron emission tomography (PET) displays the brain with great function but low spatial resolution. Hence, a fusion of the two imaging techniques will help the neurologist to accurately identify Alzheimer's disease progression. In this paper, a new fusion method that combines two transformation approaches, triangular intensity-hue-saturation (IHS) and discrete wavelet transform (DWT), is introduced. DWT is applied to the intensity component of the PET image and the smoothed version of the MRI image. Wavelet coefficients are fused using a specific fusion rule for the low and high-frequency bands. Inverse DWT is applied to obtain a new intensity component, and the gray version is subtracted from the new intensity. The fused image is obtained by applying the inverse triangular IHS. For evaluation, quantitative measurement and statistical analysis are determined. The proposed method achieved discrepancy, average gradient, mutual information, and overall fusion performance of 7.0529, 5.3879, 0.6550, and 1.6651 respectively. The final results reveal that the proposed method achieved the highest performance compared with existing methods.
Doaa Y. Hussein, Mostafa Y. Makkey, and Shimaa A. Abdelrahman

I. INTRODUCTION

Image fusion is an approach that combines information from two imaging techniques into a single fused image [1]. In medical applications, it provides a very promising diagnostic tool for a variety of diseases. Medical images come in different forms, and each has a particular use. High-resolution anatomical information images are produced by magnetic resonance imaging (MRI) and computed tomography (CT). Functional imaging techniques are available, such as positron emission tomography (PET), but this technique has fewer anatomical details and low resolution.
To create an image that is more informational and better suited for diagnosis, information from two forms was combined by image fusion [2]. For Alzheimer's disease, MRI and PET are two powerful imaging techniques that provide complementary information about the brain. PET images can tell information about brain function, while MRI images show information about the internal structural shape of the brain.
IHS and a retina-inspired model (RIM) were integrated to improve the functional and spatial information content [3]. Images were decomposed using the non-subsampled contourlet transform (NSCT), and the resultant two images were combined using different fusion rules in [4]. This method employed a maximal energy rule to combine low-frequency band coefficients and a maximal variance rule to combine high-frequency band coefficients. Features were extracted from PET and MRI images using a convolutional neural network [5], and the resultant weights were employed to construct a fused image. An advanced wavelet transform-based method was introduced in [6] that employed morphological processing with PCA. Discrete wavelet transform (DWT) based methods were presented to obtain the fused image in [7][8].
Existing fusion techniques [2][3][4][5][6][7][8] are studied in this paper, including pixel average, the IHS cylindrical model, Brovey, DWT, and the à-trous wavelet transform. The study reveals that some of these methods provide a high spatial intensity fused image but reduce the correlation between the original image and the fused one. Additionally, the fused image loses some important spectral color information and has an inaccurate color representation, artifacts, and noise. Hence, a hybrid method employing IHS and wavelet transform is proposed in this paper to improve the functional and spatial information content. IHS introduces a high spatial intensity and DWT minimizes the spectral distortion of the resultant image. The proposed method successfully preserves the original functional information with no spatial distortion compared with the existing methods. Statistical analysis and quantitative measurement of the fused image using mutual information, discrepancy, average gradient, and overall fusion performance are utilized for results evaluation.
The rest of this paper is organized as follows. Section II describes the IHS triangular model. DWT and the fusion rules will be introduced in Section III. Section IV illustrates the utilized dataset to apply and evaluate the proposed method. Section V describes the methodology of the proposed hybrid IHS and DWT fusion approach. Section VI presents the results and evaluations. Finally, Section VII concludes this paper.
II. IHS TRIANGULAR MODEL
The IHS triangular model [9][10][11][12] is a color space transformation that converts a red-green-blue (RGB) image into an IHS image, as shown in Fig. 1. The PET image contains the intensity and the color information (hue and saturation). Hence, the IHS model is employed in the proposed method to separate the intensity information from the color information. This separation allows for the manipulation of the intensity channel independently of the color channels, which can be useful in image fusion. The intensity, hue, and saturation components and the inverse transformation of these components can be calculated as in (1)-(16), [2], [9].
In these equations, R_C, G_C, and B_C are the three color components red, green, and blue respectively, I_C is the intensity component, H_C is the hue component, and S_C is the saturation component; the range of I_C, H_C, and S_C is from 0 to 1. The hue and saturation are computed case by case, depending on whether the red component (R_C < G_C and R_C < B_C), the green component (G_C < R_C and G_C < B_C), or the blue component (B_C < R_C and B_C < G_C) has the minimum value. The inverse IHS transform, which recovers R_C, G_C, and B_C from I_C, H_C, and S_C, is calculated with the same three cases, as in (9)-(16).

III. DWT AND THE FUSION RULES

The DWT-based image fusion approach [8] fuses the MRI image and the intensity component of the PET image. Fusion of the DWT coefficients is obtained by applying certain image fusion rules, including the maximum, minimum, average, and weighted average rules. These rules determine which coefficients to retain in the new intensity image based on their magnitudes. All of these fusion rules were studied, and the final results reveal that the maximum and weighted average rules are the most appropriate ones to apply in the proposed method. Prioritizing the detail coefficients with the highest absolute value is applied at each transformation scale. This is followed by a local morphological procedure, which confirms the chosen pixels through a filling and cleaning operation, as shown in Fig. 2. This operation either fills or eliminates isolated pixels locally to enhance the uniformity of coefficient selection, thereby minimizing distortion in the new intensity image. For our purpose, the shaded pixel is taken from the MRI image, and the white pixel is taken from the intensity of the PET image. The maximum level of DWT decomposition, denoted as L_Decom, is contingent on the size of the input image, and can be expressed as in (17), [8].
L_Decom = log2( min(M, N) / min(m_o, n_o) )     (17)

where the dimensions of the input image are represented by M and N, while m_o and n_o denote the dimensions of the image transformed by DWT at the highest scale. The term 'min' selects the smallest value.
IV. DATASET
In this paper, the utilized dataset consists of 24 color PET images and 24 high-resolution MRI brain images that are registered together; all images are downloaded from the Harvard University website [10]. This dataset is divided into four categories: normal coronal, normal sagittal, normal transaxial, and Alzheimer's disease images. PET images are resized to 256 × 256 pixels to maintain uniform conditions of three RGB bands based on metabolic processes in the brain, while MRI images are high-resolution grayscale images. Fig. 3 displays a sample of the utilized dataset. The dataset is divided into four groups: dataset 1 for normal axial, dataset 2 for normal coronal, dataset 3 for normal sagittal, and dataset 4 for Alzheimer's disease brain images.
V. METHODOLOGY
The proposed approach is derived by implementing a DWT on the intensity component of the PET image and the refined (smoothed) version of the MRI image to acquire the wavelet coefficients. These coefficients are fused using a distinct fusion rule for the low and high-frequency bands. An inverse DWT is then performed, and the result is enhanced by subtracting the gray MRI version from the new intensity image. This step helps to substantially improve the spectral color information. Ultimately, the final image is produced after applying the inverse triangular IHS model to the new intensity component along with the hue and saturation components of the PET image. The main steps of the proposed method are shown in Fig. 4.
A. Preprocessing
Accurate fusion enhances the identification of the progression of Alzheimer's disease. The primary region of interest in MRI and PET images is the medial temporal lobe, which contains the hippocampus and the entorhinal cortex. Therefore, a preprocessing step is proposed to remove the outer framework (the bones and layers surrounding the brain), as shown in Fig. 5.
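A plausible implementation of the preprocessing steps listed in the caption of Fig. 5, assuming SciPy; the binarization threshold, the number of morphological iterations, and the Gaussian sigma are illustrative choices, not values from the paper.

import numpy as np
from scipy import ndimage

def preprocess(mri, pet_rgb, threshold=0.1, sigma=1.0):
    pet_gray = pet_rgb.mean(axis=2)                    # collapse the RGB PET image
    mask = pet_gray > threshold                        # 1) binarize the PET image
    mask = ndimage.binary_fill_holes(mask)             # 2) fill holes to obtain a mask
    mask = ndimage.binary_opening(mask, iterations=2)  # 3) morphological clean-up
    segmented = mri * mask                             # 4) keep brain-only MRI pixels
    return ndimage.gaussian_filter(segmented, sigma)   # 5) smoothed MRI

# Placeholder, registered 256x256 inputs.
mri = np.random.rand(256, 256)
pet = np.random.rand(256, 256, 3)
smoothed_mri = preprocess(mri, pet)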
B. Hybrid Fusion
A hybrid fusion method is proposed by combining IHS and DWT. DWT is applied to the preprocessed MRI image to obtain the low and high-frequency bands. On the other side, the resized PET image is converted from the RGB model to the IHS triangular model to get the three main components I, H, and S individually. The intensity component is also passed through the wavelet transform to obtain its low and high-frequency bands. For the band combinations from MRI and PET, a weighted average fusion rule is applied to the low-frequency band, as illustrated in (18), [8]:

CF = a1 · C_Intensity + a2 · C_NEW MRI     (18)

where CF represents the fused coefficients, and C_Intensity and C_NEW MRI are the low-frequency bands from the input images. The effect of the parameters a1 and a2 on the dataset has been studied. The results of the study reveal that, if a large weight is given to the MRI image, more spatial resolution will be preserved in the new intensity image.
On the other hand, if a large weight is given to the intensity of the PET image, more spectral color information is obtained. Hence, two approximately equal weights are assigned to both images. Additionally, these values are more significant in Alzheimer's disease images than in normal brain images. The maximum selection rule is applied to the high-frequency band to evaluate the best result, and an inverse discrete wavelet transform is applied to obtain the new intensity image. After that, the inverse IHS triangular model is applied to the new intensity image and the hue and saturation components of the PET image. For evaluation, two criteria, statistical and visual analysis, are utilized to quantitatively measure the fusion performance. The proposed method is compared with existing methods including pixel average, the IHS cylindrical model, Brovey, DWT, and the à-trous wavelet transform, as shown in Fig. 7. It is obvious that the proposed hybrid method has the least distorted color information and clear spatial details compared to the existing fusion techniques. For statistical analysis, metrics including the average gradient, discrepancy, mutual information, and overall fusion performance [11] are determined.
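Before turning to the evaluation metrics, the following is a condensed sketch of the hybrid fusion step under the stated assumptions: a single-level db4 DWT, a weighted average of the approximation bands, and a maximum-absolute-value rule for the detail bands; the input arrays are placeholders, and a1 and a2 stand for the roughly equal weights discussed above.

import numpy as np
import pywt

def fuse_intensity(mri, pet_intensity, a1=0.5, a2=0.5, wavelet='db4'):
    """Fuse the smoothed MRI with the PET intensity component in the wavelet domain."""
    cA_m, (cH_m, cV_m, cD_m) = pywt.dwt2(mri, wavelet)
    cA_p, (cH_p, cV_p, cD_p) = pywt.dwt2(pet_intensity, wavelet)
    cA_f = a1 * cA_p + a2 * cA_m                    # weighted-average rule (low band)
    fuse_max = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)  # max rule (high bands)
    details = (fuse_max(cH_m, cH_p), fuse_max(cV_m, cV_p), fuse_max(cD_m, cD_p))
    return pywt.idwt2((cA_f, details), wavelet)     # new intensity image

mri = np.random.rand(256, 256)                       # smoothed, preprocessed MRI (placeholder)
pet_i = np.random.rand(256, 256)                     # intensity component of the PET image
new_intensity = fuse_intensity(mri, pet_i)
# The new intensity then replaces I in the inverse triangular IHS, together with the hue
# and saturation of the PET image, to produce the fused color image.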
A. Discrepancy
Discrepancy is an essential metric that can be used to assess the quality of fused images produced by image fusion algorithms. The discrepancy calculates the difference in pixel values between the original images and the resultant fused image, as in (19), [3]:

D_i = (1/N) Σ | F(x, y) − O(x, y) |     (19)

where D_i is the discrepancy for the i-th color component (i = R_C, G_C, or B_C), N refers to the total number of pixels in the input images, F refers to the pixel values of the fused image, and O represents the pixel values of the original images (PET or MRI). A lower discrepancy value indicates a better quality of the fused image; it means that the degree of similarity between the fused image and the input images is large.
B. Average Gradient
The average gradient indicates the quality of the fused image. It is calculated as the mean of the gradient magnitudes of the fused image. A higher average gradient value indicates sharper edges and better preservation of the spatial details in the fused image. The gradient magnitude can be computed using the gradient components in the x and y directions (G_X and G_Y) as in (20)-(26), [3], [11].
Here, AG_i refers to the average gradient of the fused image, G_X is the average gradient in the x direction, and G_Y is the average gradient in the y direction. G_X and G_Y are calculated using the Sobel operator as in (21)-(26).
F(x, y) refers to the pixel value at position (x, y) in the fused image.
C. Mutual Information
Mutual information evaluates the quality of fused images, where it can evaluate the information that two images exchange with one another, such as PET and MRI images. A higher mutual information value indicates a better fusion result, as it means that the fused image contains more information from both original images, as in (27)-(29), [3].
Where MI (F, MRI) is the mutual information between fused image F and MRI.
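The three per-image metrics described above can be computed along the following lines; this is a sketch under assumptions (a Sobel-based gradient and a 64-bin joint histogram for the MI), so the exact constants and normalizations of equations (19)-(29) may differ slightly.

import numpy as np
from scipy import ndimage

def discrepancy(fused, original):
    """Mean absolute pixel difference between fused and original images."""
    return np.mean(np.abs(fused.astype(float) - original.astype(float)))

def average_gradient(img):
    """Mean Sobel gradient magnitude of an image."""
    gx = ndimage.sobel(img.astype(float), axis=1)
    gy = ndimage.sobel(img.astype(float), axis=0)
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def mutual_information(a, b, bins=64):
    """Histogram-based mutual information between two images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = hist / hist.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return np.sum(p_ab[nz] * np.log2(p_ab[nz] / (p_a @ p_b)[nz]))

# e.g. MI(F, MRI) + MI(F, PET intensity), the per-band discrepancies, and the average
# gradient of F together feed the overall fusion performance score.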
D. Overall Image Fusion Performance
The overall performance is measured based on the discrepancy D_i and the average gradient AG_i. If the fusion technique produces a small overall performance value (OP), then the fused image has greater overall fusion quality. It can be described as in (30), [3]. A comparison between the proposed fusion method and the existing methods employing the four different datasets is summarized in Table I to Table IV.
Fig. 5. The main steps of the proposed preprocessing. As a first step, MRI and PET images are resized to 256×256 pixels. The preprocessing then includes: 1) converting the PET image into a binary image; 2) filling the holes of the PET binary image to obtain a mask; 3) applying morphological operations to clean up the mask; 4) multiplying the mask by the MRI image to obtain the segmented MRI with the original pixel values; and 5) applying the Gaussian filter to obtain the smoothed MRI, as shown in Fig. 6.
Here, MI(F, O) is the mutual information between images F and O, P(F, O) is the joint probability distribution of the pixel intensities in images F and O, P(F) is the marginal probability distribution of the pixel intensities in image F, and P(O) is the marginal probability distribution of the pixel intensities in image O. To calculate the MI between the fused image (F) and the (PET, MRI) images, the MI values for both pairs (F, PET) and (F, MRI) are computed as in (28) and (29).
TABLE I. THE FUSION METHODS FOR ALZHEIMER'S DISEASE DATASET 1
TABLE II. THE FUSION METHODS FOR CORONAL NORMAL BRAIN DATASET 2
TABLE III. THE FUSION METHODS FOR AXIAL NORMAL BRAIN DATASET 3
TABLE IV. THE FUSION METHODS FOR SAGITTAL NORMAL BRAIN DATASET 4

It is obvious from the results in Table I to Table IV that the proposed method successfully fused the MRI and PET images, achieving the lowest mean D_i, the highest mean AG_i, the lowest OP, and the highest mean MI.
Analysis of Modifiable, Non-Modifiable, and Physiological Risk Factors of Non-Communicable Diseases in Indonesia: Evidence from the 2018 Indonesian Basic Health Research
Purpose: Indonesia is facing an increasing occurrence of non-communicable diseases (NCDs) every year. We assessed the modifiable, non-modifiable, and physiological risk factors of NCDs among the Indonesian population. Methods: Secondary data were analyzed from the 2018 Indonesian basic health research (RISKESDAS). The national survey included participants aged 15–54 years and obtained 514,351 responses. Linear systematic two-stage sampling was conducted by RISKESDAS. Furthermore, chi-square tests and binary logistic regression were utilized to explore the determinants of NCDs with a significance level of 95%. Results: We found that almost 10% of respondents in Indonesia had NCDs. We observed that depression had the highest odds of contributing to NCDs (aOR: 2.343; 95% CI: 2.235–2.456), followed by other factors such as no education (aOR: 1.049; 95% CI: 1.007–1.092), passive smoking (aOR: 0.910; 95% CI: 0.878–0.942), fatty food (aOR: 1.050; 95% CI: 1.029–1.073), burnt food (aOR: 1.033; 95% CI: 1.005–1.062), food with preservatives (aOR: 1.038; 95% CI: 1.002–1.075), seasoned food (aOR: 1.057; 95% CI: 1.030–1.084), soft drinks (aOR: 1.112; 95% CI: 1.057–1.169), living in an urban area (aOR: 1.143; 95% CI: 1.119–1.168), living in central Indonesia (aOR: 1.243; 95% CI: 1.187–1.302), being female (aOR: 1.235; 95% CI: 1.177–1.25), and being obese (aOR: 1.787; 95% CI: 1.686–1.893). Conversely, people in Indonesia who undertook vigorous activity (aOR: 0.892; 95% CI: 0.864–0.921), had employment (aOR: 0.814; 95% CI: 0.796–0.834), had access to improved sources of drinking water (aOR: 0.910; 95% CI: 0.878–0.942), and were aged 35–44 years (aOR: 0.457; 95% CI: 0.446–0.467) were less likely to develop NCDs. Conclusion: Modifiable, non-modifiable, and physiological risk factors have a significant influence on NCDs in Indonesia. This finding can be valuable information for the Indonesian Government to arrange cross-collaboration between the government, healthcare workers, and society through advocacy, partnership, health promotion, early detection, and management of NCDs.
Introduction
Non-communicable diseases (NCDs) are one of the biggest health challenges in the 21st century 1 and have become a global concern in both developing and developed countries. 2 NCDs are responsible for 41 million of 57 million deaths
Data Sources & Samples
Samples were taken from the 2018 RISKESDAS, which consisted of 30,000 census blocks (CBs) in urban and rural areas of Susenas, with 300,000 household responses. 18 The total survey included 565,592 individual observations obtained by linear systematic two-stage sampling. The sampling was done in two stages: 1) implicit stratification of all CBs from the 2010 population census based on welfare strata, after which twenty-five percent of the total CBs were systematically determined with the probability proportional to size (PPS) method to get 30,000 CBs; 2) a total of 10 households were chosen based on the highest level of education of the head of household to maintain the representativeness of the diversity of household characteristics. 18 In determining the instrument components, RISKESDAS refers to the Sustainable Development Goals, the National Mid-Term Development Plan, the Strategic Plan, Minimum Service Standards, the Community Health Development Index, the Healthy Indonesia Program - Family Approach, and the Healthy Living Community Movement developed by the Government of Indonesia. The primary health indicators measured in RISKESDAS 2018 include morbidity (non-communicable diseases and infectious diseases), disability, injury, environmental health (hygiene, sanitation, latrines, water and housing), knowledge of and attitudes towards HIV, health behaviour (seeking treatment, tobacco use, drinking alcohol, physical activity, risky food consumption behaviour), various aspects of health services (access and health coverage) and nutritional status, as well as dental and oral health status. The questionnaire was tested for validity by RISKESDAS before being used. 19 We included respondents aged 15-54 years and excluded missing responses from this study. Furthermore, the proportion of each region was calculated by weighting the observations based on the number of provinces in Indonesia. Out of 565,592 individual observations, approximately 51,241 observations were excluded from this study due to missing values for one or more variables. Finally, a total of 514,351 (245,234 males and 269,117 females) were recorded in this study.
Variables
In this study, we adopted and modified the framework of NCDs risk factors from the National Action Plan of Prevention and Handling of NCDs, Ministry of Health of the Republic of Indonesia. 9 The framework consisted of modifiable risk factors (behavioral and environmental), non-modifiable risk factors, and physiological risk factors ( Figure 1). Furthermore, we constructed independent variables consisting of: behavioral -modifiable risk factors (education level, smoking, level of activity, and the consumption of sweet food, fatty or fried food, burnt food, food containing preservatives, seasoned food, instant food, soft drinks, and energy drinks); environmental-modifiable risk factors (working status, residence, regional, time spent at a health facility, source of drinking water, and depression); nonmodifiable risk factors (age and gender); and physiological risk factors (body mass index).
Behavioral - modifiable risk factors: Level of education was categorized into high education, secondary education, primary education, and no education. 20 Smoking was organized into no, passive smoker, and active smoker. Level of activity was categorized into gentle, moderate, and vigorous. 21 The consumption of sweet food, fatty or fried food, burnt food, food containing preservatives, seasoned food, instant food, soft drinks, and energy drinks was classified into < 3 times/week and ≥ 3 times/week. 22 Environmental - modifiable risk factors: Working status and residence were classified into yes/no and rural/urban respectively. Regional areas were classified based on time differences in Indonesia, namely Eastern Indonesia, Central Indonesia, and Western Indonesia. 23 Time spent to health facility was organized into fast (≤ 8 minutes) and slow (> 8 minutes). 24 Source of drinking water was classified into bottled water, improved (refill water, tap water, retail/purchased tap water, drilled/pump well, protected dug well, and protected spring), and not improved (unprotected dug wells, unprotected springs, rainwater storage, and surface water (rivers/lakes/irrigation)). We introduced the bottled water category because the prevalence of bottled water usage has increased in Indonesia. 25 Additionally, depression was categorized into no and yes based on a questionnaire developed by RISKESDAS, with the mean as the cut-off point.
Physiological risk factors: Body mass index (BMI) was classified into underweight (< 18.5 kg/m²), normal (18.5 to 24.9 kg/m²), overweight (25 to 29.9 kg/m²), and obese (≥ 30 kg/m²). 27 The dependent variable in this study was NCDs. It was composed of seven diseases: stroke, cardiovascular disease, diabetes mellitus, cancer, hypertension, renal disease, and asthma. All the NCDs were declared and diagnosed by a medical doctor and recorded in the patients' medical records. The seven selected diseases were identified as the most common diseases in Indonesia. 28 If a respondent had one or more of the seven diseases, the NCD variable was categorized as yes; otherwise, it was categorized as no.
Data Analysis
This study was written using the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement as a guideline, and all of the methodologies were carried out to conform to the appropriate standards and regulations. 29 STATA version 16.1 was used to carry out the data analysis. We used univariate analysis to present the weighted percentages of the independent and dependent variables. We also used the chi-square test to investigate the association between each variable and NCDs. The factors associated with NCDs in Indonesia were examined using binary logistic regression. The adjusted odds ratio (aOR) is presented in the study, along with a 95% confidence interval (CI) and a significance level of 0.05. Since a nationwide survey was used, we performed STATA's "svy" survey commands to account for clustering effects and the sample weights arising from the multi-stage cluster random sampling used in the data collection.
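As a rough, non-survey-aware Python analogue of this analysis, the sketch below fits a weighted binary logistic regression and converts the coefficients into adjusted odds ratios with 95% CIs; unlike STATA's "svy" commands it does not adjust the standard errors for the cluster design, and the data frame, column names, and weights are placeholders.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Placeholder survey extract: outcome, a few covariates, and a sampling weight per respondent.
n = 5000
df = pd.DataFrame({
    "ncd": np.random.binomial(1, 0.1, n),
    "depression": np.random.binomial(1, 0.1, n),
    "urban": np.random.binomial(1, 0.5, n),
    "female": np.random.binomial(1, 0.5, n),
    "weight": np.random.uniform(0.5, 2.0, n),
})

model = smf.glm("ncd ~ depression + urban + female", data=df,
                family=sm.families.Binomial(),
                freq_weights=np.asarray(df["weight"])).fit()

aor = np.exp(model.params)        # adjusted odds ratios
ci = np.exp(model.conf_int())     # 95% confidence intervals on the OR scale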
Results
National survey data representing Indonesia are presented in this study. We presented risk factors of NCDs into behavioral -modifiable risk factors, environmental -modifiable risk factors, non-modifiable risk factors, and physiological risk factors.
Behavioral -Modifiable Risk Factors
Of the total of 514,351 respondents, it was found that almost half had a primary level of education. The majority of the respondents were found to be non-smokers (62.6%), and more than 50% of the activity level results were in the moderate category. The results for food and beverage consumption revealed that burnt food (86.17%), food with preservatives (89.14%), instant food (71.06%), soft drinks (94.27%), and energy drinks (95.70%) were consumed ≤ 3 times per week. Additionally, sweet food (60.31%), fatty or fried food (64.57%), and seasoned food (84.69%) were consumed ≥ 3 times per week (Table 1).
Environment -Modifiable Risk Factors
We found that the majority (73.15%) were employed, and more than half of the total respondents lived in urban areas. The survey showed that the majority of participants were from Western Indonesia (79.17%). The majority of time spent at health facilities was categorized as slow (92.55%), the majority of drinking water sources were categorized as improved, and 11.65% consumed bottled water. Additionally, the majority of respondents were found to be not depressed (Table 1).
Non-Modifiable Risk Factors
From the survey, respondents were evenly distributed across the 15-54 year age range, with each age group accounting for more than 20%. As many as 50.1% were male and the rest were female (Table 1).
The results of the bivariate analysis regarding the determinants of non-communicable diseases in Indonesia are presented in Table 2. It is known that age, gender, education level, working status, residence, region, time spent at a health facility, source of drinking water, smoking, level of activity, BMI, depression, and the consumption of sweet food, fatty or fried food, food containing preservatives, instant food, soft drinks, and energy drinks have a correlation with NCDs in Indonesia (p < 0.05). Additionally, burnt food and seasoned food have no significant correlation with NCDs in Indonesia (p > 0.05).
Discussion
In this study, we found that behavioral (modifiable) risk factors, environmental (modifiable) risk factors, non-modifiable risk factors, and physiological risk factors contribute significantly to NCDs in Indonesia. These risk factors can increase the likelihood of developing NCDs and can be characterized in a variety of ways. 31 Thus, management of NCD risk factors should begin with an understanding of the characteristics of each risk factor.
Behavioral (Modifiable) Risk Factors
In this study, respondents with no education were more likely to contract NCDs compared with those with a higher education level. It is important to note that education level is a predictor of the incidence of NCDs. This finding is in line with previous studies. 32,33 Society needs basic information on the maintenance of a healthy lifestyle to prevent NCDs. 34 It is assumed that people with a higher education level can receive and understand information regarding a healthy lifestyle and implement it in their daily lives. 35 By optimizing education levels, correct and easily understood information can be widely distributed to decrease the prevalence of NCDs. 35,36 However, in Indonesia, we are challenged by cultural and belief barriers that sometimes contradict information regarding NCDs. Thus, for further strategic programs, healthcare workers, such as nurses, need to collaborate with local authorities to share information regarding the management, control, and prevention of NCDs.
Another behavioral -modifiable risk factor is smoking. We found that passive smokers are more likely to contract NCDs. This is supported by previous studies. 37,38 Passive smoking is more dangerous than active smoking because it can increase the risk of respiratory illnesses, including asthma, bronchitis, and pneumonia. [39][40][41][42] Exposure to tobacco smoke will contribute to a change in metabolism, gene mutation, and deoxyribonucleic acid (DNA) damage that will affect the seriousness of NCDs, such as cancer. [42][43][44] The most challenging task is changing the beliefs and behavior of smokers who believe that smoking cigarettes does not cause NCDs. Thus, a comprehensive approach is needed to solve this problem.
In this study, respondents with vigorous activity levels had a lower likelihood of contracting NCDs in Indonesia. Increased physical activity can have a positive impact on the body's metabolic system and can reduce metabolic syndrome in patients with NCDs. [45][46][47] Physical activity releases dopamine and glucoregulatory hormones, which reduce stress. [48][49][50] Thus, a regular program of increased activity levels should be promoted and initiated at the government, company, and societal levels.
We found that fatty food, burnt food, food containing preservatives, seasoned food, and soft drinks consumed more than three times per week contributed significantly to NCDs. Fatty food increases blood cholesterol levels and causes hypertension and cardiovascular diseases; 51 burnt food is a predictor of cancer and a source of derived carcinogens; 52,53 and preservatives, seasoned food, and soft drinks affect metabolism, gene mutation, and bone mineral density in ways that lead to NCDs. [53][54][55][56] However, a European study found that seasoning food with herbs can prevent NCDs. 50 Clear information regarding healthy food and drinks is important and should be delivered to communities through health education programs. Additionally, collaborations between healthcare workers and the government on the distribution of risky food should be initiated.
Environmental (Modifiable) Risk Factors
In this study, respondents who were employed were less likely to have NCDs than those who were not employed. This is because people who work have more activities, which can prevent a sedentary lifestyle. 57 In addition, people who have jobs and earn money can better regulate their lifestyle by eating healthy foods, such as fruits and vegetables. Conversely, people who do not work are at risk of experiencing stress because they do not have an income. The combination of stress and a sedentary lifestyle can trigger the emergence of NCDs such as stroke and diabetes mellitus. 58,59

People living in urban areas were more likely to have NCDs than people living in rural areas. Previous studies have shown that urban societies with greater access to processed, high-calorie, high-fat, and salty foods can easily develop NCDs. 60 Also, urban societies that are supported by easy access to public facilities can develop sedentary lifestyles, which can lead to lower levels of physical activity. 61 Socioeconomic status among the urban population can shape people's behavior toward an unhealthy lifestyle. Lack of physical activity and exercise, low intake of nutritious foods such as fruits and vegetables, consumption of foods high in fat, calories, and sodium, alcoholic beverages, and smoking habits can significantly increase the incidence of NCDs.

The results of the study also indicated that people who live in Central and Western Indonesia were more likely to have NCDs compared with those in Eastern Indonesia. Central and Western Indonesia are largely urban, where lifestyles are often sedentary, in contrast to Eastern Indonesia, which is predominantly rural. People in rural areas tend to adhere to a simpler culture with a natural environment and informal social life.
Respondents who consumed water from improved sources were less likely to have NCDs compared with those who consumed bottled water. Health problems can occur because chemical substances that can harm the body have been found in some bottled water. A previous study in Malawi found that out of 12 samples of bottled water, 10 brands did not comply with the Malawi Standards (MS) 699 (2004) turbidity standard (1 Nephelometric Turbidity Unit (NTU)), and the pH of one brand was below the MS 699 (2004) minimum of 6.50. 62 In addition, all brands had bottle-labeling errors and discrepancies in chemical composition. Another study found microplastics in bottled water, a contamination that partially came from the packaging and/or the bottling process itself. 63,64 Contaminated bottled water can trigger cancer because microplastics can enter and be absorbed into the body. Repeated use of drinking water bottles should also be avoided because plastic bottles are intended for single use only.
Our study found that people who have depression are more likely to have NCDs compared with those who do not have depression. Depression can be caused by physical or psychological factors. Prolonged depression can be disabling and can interfere with health. Previous research has shown that depression can be a precursor to disease and a double burden for an NCD sufferer. 65 Depression and NCDs are interrelated and can reinforce each other. This is in accordance with previous research stating that someone who has an NCD requires adjustment to their condition; living with pain, disability, and social and economic problems is what triggers depression. 66,67 Another study also stated that depression was twice as common in patients with NCDs and is prevalent in 40% of diabetic patients, 37% of cancer patients, 38% of hypertension patients, and 39% of stroke patients. 68 Indirectly, depression can increase the morbidity and mortality of NCDs. 69
Non-Modifiable Risk Factors
This study found that respondents aged 35-44 years were less likely to have NCDs than respondents aged 45-54 years. This is in line with a previous study conducted in Southern India, which reported a correlation between NCDs and increasing age. 70,71 As people get older, they are exposed to risk factors for longer periods, difficulties can arise, and NCD clinical syndromes develop. 72 In this study, the prevalence of NCDs similarly increased with age, with the frequency of NCDs increasing most in a relatively young age group (30-49 years). These findings point to the population's early development of NCDs, which must be addressed.
Females were more likely to have NCDs compared with males. This finding is related to biological risk factors, including being overweight or obese, as well as elevated blood pressure, glucose, and cholesterol, which are seen in a higher percentage of women in older age groups. 73 In developing countries such as Indonesia, the prevalence of selected behavioral and clinical risk factors is higher among women than men. 74 Treatment-seeking behavior is slightly better among men than women. 75 The government should take the vulnerability of women into account when designing and implementing programs to prevent and control NCDs.
Physiological Risk Factors
The physiological risk factor in this study was measured using BMI. We found that obese respondents were more susceptible to NCDs in Indonesia. Obesity, in particular, affects nearly every facet of health, from reproductive and pulmonary function to cognition and mood. Obesity raises the risk of a variety of severe and fatal diseases, including diabetes, heart disease, and some malignancies. 76,77 Thus, weight-loss management is important for a healthy life. Obesity prevention, starting at a young age and continuing throughout one's life, has the potential to greatly enhance individual and public health, as well as reduce the chance of suffering from NCDs.
Strength and Limitations
The study presents national data regarding risk factors for NCDs. It covered respondents from younger to older ages and presents Indonesian population data that represent the characteristics of the respondents. Additionally, this study uses a framework developed by the Ministry of Health of the Republic of Indonesia, which has been adapted to the characteristics of the Indonesian population. Thus, these findings can be used as a reference and source of information for the Indonesian government to determine policies for the management and control of NCDs. However, the data presented by RISKESDAS are still limited and do not provide data related to beliefs and culture that could be used for the proper handling of NCDs with a local approach. A gender and urban-rural disparity analysis would provide rich additional information, as in a previous study in Kerala, India. 78
Conclusion
The Indonesian government should pay attention to the prevention and handling of NCDs. Modifiable, non-modifiable, and physiological risk factors are the most significant determinants of NCDs in Indonesia and need to be addressed. The utilization of healthcare and cross-collaboration among healthcare workers, government, and society should be supported through advocacy, partnerships, health promotion, early detection, and management of NCDs. However, such programs should take local culture, beliefs, and regional differences into consideration. Healthcare workers, especially nurses, should collaborate with local public authorities to educate the target population and optimize the screening, control, management, and treatment of NCDs. Furthermore, the results of this study provide essential information for further policies and interventions to promote PPPM in relation to NCDs.
Perspectives on the quantum Zeno paradox
As of October 2006, there were approximately 535 citations to the seminal 1977 paper of Misra and Sudarshan that pointed out the quantum Zeno paradox (more often called the quantum Zeno effect). In simple terms, the quantum Zeno effect refers to a slowing down of the evolution of a quantum state in the limit that the state is observed continuously. There has been much disagreement as to how the quantum Zeno effect should be defined and as to whether it is really a paradox, requiring new physics, or merely a consequence of "ordinary" quantum mechanics. The experiment of Itano, Heinzen, Bollinger, and Wineland, published in 1990, has been cited around 347 times and seems to be the one most often called a demonstration of the quantum Zeno effect. Given that there is disagreement as to what the quantum Zeno effect is, there naturally is disagreement as to whether that experiment demonstrated the quantum Zeno effect. Some differing perspectives regarding the quantum Zeno effect and what would constitute an experimental demonstration are discussed.
Introduction
A recent entry in Wikipedia, an Internet-based encyclopedia, defines the quantum Zeno effect as follows: The quantum Zeno effect is a quantum mechanical phenomenon first described by George Sudarshan and Baidyanaith Misra of the University of Texas in 1977. It describes the situation that an unstable particle, if observed continuously, will never decay. This occurs because every measurement causes the wavefunction to "collapse" to a pure eigenstate of the measurement basis [1].
This definition is close to the original language of Misra and Sudarshan [2], but is not sufficiently general to describe the many situations that are considered to be examples of the quantum Zeno effect. It is true that the quantum Zeno effect describes the situation in which the decay of a particle can be prevented by observations on a sufficiently short time scale. However, the quantum Zeno effect is much more general, since it describes the situation in which the time evolution of any quantum system can be slowed by sufficiently frequent "observations." The references to observations and to wavefunction collapse tend to raise unnecessary questions related to the interpretation of quantum mechanics. Actually, all that is required is that some interaction with an external system disturb the unitary evolution of the quantum system in a way that is effectively like a projection operator. Finally, the word "never" describes a limiting case. A slowing of the time evolution, as opposed to a complete freezing, is generally regarded as a demonstration of the quantum Zeno effect.
The Misra and Sudarshan Paper
The 1977 article "The Zeno's paradox in quantum theory" by Misra and Sudarshan [2] studied the evolution of a quantum system subjected to frequent ideal measurements. They showed that, in the limit of infinitely frequent measurements, a quantum system would remain in its initial state. Applied to the case of an unstable particle whose trajectory is observed in a bubble chamber or film emulsion, this result seemed to imply that such a particle would not decay, in contradiction to experiment. In this case, the resolution to the apparent paradox lies in the fact that the interactions between the particle and its environment that lead to the observed track are not sufficiently frequent to modify the particle's lifetime.
The time distribution of literature citations to Misra and Sudarshan [2] is shown in Fig. 1. The total number of citations listed in the Web of Science database in October 2006 was 535. The graph shows a relatively low but steady number of citations per year for about a decade, followed by a large increase that continues for over a decade, possibly peaking about 25 years after the original publication date. The great increase in the rate of citations in recent years is partially due to the increased interest in quantum information processing, where the quantum Zeno effect may find practical applications.
Simple derivation of the quantum Zeno effect
The quantum Zeno effect can be derived in an elementary way by considering the short-time behavior of the state vector [4]. (The treatment of Misra and Sudarshan [2] is more general since it involves the density matrix.) Let |φ⟩ be the state vector at time t = 0. If H is the Hamiltonian, in units where ℏ = 1, then the state vector at time t is e^{−iHt}|φ⟩, and the survival probability is

S(t) ≡ |⟨φ|e^{−iHt}|φ⟩|².

If t is small enough, it should be possible to make a power series expansion:

e^{−iHt}|φ⟩ ≈ (1 − iHt − H²t²/2)|φ⟩,

so that the survival probability is

S(t) ≈ 1 − (ΔH)²t²,    (1)

where

(ΔH)² ≡ ⟨φ|H²|φ⟩ − ⟨φ|H|φ⟩².    (2)

Many quantum systems have states whose survival probability appears on ordinary time scales to be a decreasing exponential in time. This is inconsistent with the quadratic time dependence of Eq. (1) and implies that in such cases Eq. (1) holds only for very short times. Consider the survival probability S(T), where the interval [0, T] is interrupted by n measurements at times T/n, 2T/n, . . . , T. Ideally, these measurements are instantaneous projections and the initial state |φ⟩ is an eigenstate of the measurement operator. In that case, the survival probability after each interval of length T/n is

S(T/n) ≈ 1 − (ΔH)²(T/n)²,    (3)

so that the survival probability at time T is

S(T) ≈ [1 − (ΔH)²(T/n)²]^n,    (4)

which approaches 1 as n → ∞.
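To make the limit in Eq. (4) concrete, the following minimal Python sketch (added here for illustration; the value (ΔH)T = 1 is an arbitrary assumption, and the quadratic approximation requires (ΔH)T/n to be small) evaluates the survival probability for an increasing number of equally spaced measurements:

import numpy as np

dH_T = 1.0  # assumed value of (Delta H) * T, in units where hbar = 1

for n in [2, 4, 8, 16, 32, 64, 128]:
    # Eq. (4): survival probability with n equally spaced projective measurements;
    # valid when (Delta H) * T / n << 1.
    s = (1.0 - (dH_T / n) ** 2) ** n
    print(f"n = {n:4d}   S(T) ≈ {s:.4f}")

The survival probability rises monotonically toward 1 as n grows, which is the quantum Zeno effect in its simplest form.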
It is important to note that at this level there should be nothing controversial or problematic about the existence of the quantum Zeno effect. The quantum Zeno effect should be observed as long as the physical system can be made to display the behavior shown in Eq. (4). For a given system, it may be difficult or impossible to make measurements quickly enough for the quadratic time dependence of the survival probability to be observed, so that, as a practical matter, the quantum Zeno effect cannot be observed. It should be noted that the semantic arguments over terms such as "measurement" or "observation" can be avoided if we accept that a "measurement" is an operation that interrupts the unitary time evolution governed by H in such a way as to yield Eq. (4) as a good approximation. That is, the "measurement" should effectively act as a projection operator. According to this view, it is not necessary that the "measurements" be recorded by a macroscopic apparatus or that they be instantaneous.
The IHBW Experiment
The experiment of Itano, Heinzen, Bollinger, and Wineland (IHBW) [3] was based on a proposal of Cook [5] for observing the quantum Zeno effect in a three-level atom (see Fig. 2). Levels 1 and 2 are stable on the time scale of the experiment. Level 3 decays to level 1 with the emission of a photon. In the experiment of IHBW, levels 1 and 2 were two of the hyperfine sublevels of the ground ²S₁/₂ state of the Be⁺ ion. Level 3 was a sublevel of the ²P₃/₂ excited state that decayed only to level 1.
The experiment was carried out with a sample of about 5000 Be⁺ ions confined by electric and magnetic fields in a Penning trap. The steps in the experiment were as follows: (i) The ions were prepared in level 1 by optical pumping with the laser beam. (ii) A resonant radio frequency (RF) magnetic field was applied for the interval required to drive the ions to level 2. (iii) During the time that the RF pulse was applied, a variable number n of equally spaced short laser pulses was applied to the ions (see Fig. 3). (iv) The laser (resonant with the 1-to-3 transition) was turned on, and the induced fluorescence was recorded.
The intensity of the laser-induced fluorescence at the end of the experiment was proportional to the population of level 1. If there are no optical pulses during the long RF pulse, the population of level 2 as a function of the time t that the RF pulse is applied is

P₂(t) = sin²(Ωt/2),    (5)

where Ω is proportional to the amplitude of the RF field. If the duration of the RF pulse is chosen to be T = π/Ω (a pi-pulse), then all of the population is transferred from level 1 to level 2. If n equally-spaced laser pulses of negligible duration are applied during the RF pi-pulse, the population of level 2 at time T is

P₂(T) = [1 − cos^n(π/n)]/2,    (6)

which approaches 0 as n goes to infinity. Figure 4 compares the data to theory. The solid bars represent the transition probability as a function of n according to the simplified calculation of Eq. (6). The bars with horizontal stripes represent the data. The bars with diagonal stripes represent a calculation that takes into account the finite duration of the laser pulses and optical pumping effects. The data are in reasonably good agreement with the simplified calculation and in better agreement with the improved calculation. The decrease in P₂(T) as n increases demonstrates the quantum Zeno effect.
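Eq. (6) is easy to evaluate numerically. The following short Python sketch (an illustrative calculation added here, not the authors' analysis, which also accounted for finite pulse durations and optical pumping) prints the ideal transition probability for a range of pulse numbers of the kind used in such an experiment:

import numpy as np

for n in [1, 2, 4, 8, 16, 32, 64]:
    # Eq. (6): ideal level-2 population after an RF pi-pulse interrupted by
    # n equally spaced, effectively instantaneous measurement pulses.
    p2 = 0.5 * (1.0 - np.cos(np.pi / n) ** n)
    print(f"n = {n:3d}   P2(T) = {p2:.4f}")

For n = 1 the full population transfer (P₂ = 1) is recovered, while for n = 64 the transition probability drops to a few percent, illustrating the strong inhibition of the induced transition.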
A variation of the experiment was carried out by initializing the ion in level 2 and then applying the RF field and the laser pulses. In this case, the transition from level 2 to level 1 was inhibited as n increased. This is another example of the quantum Zeno effect. In this case, the inhibition of the transition is accompanied by the absence of laser-induced fluorescence.
Recently, the quantum Zeno effect was observed for an unstable quantum system by Fischer et al [6]. The quantum Zeno effect for induced transitions and for unstable systems are not fundamentally different, since they both follow from the general arguments of Misra and Sudarshan [2], but it has been difficult to observe in the latter case, because of the short times over which the decay is nonexponential. Fischer et al were able to create an artificial system (atoms tunneling from a standing-wave light field) in which the interactions could be controlled so as to observe the desired effects.
Responses to the IHBW Experiment
As can be seen by the history of citations (Fig. 1), the publication of the IHBW experiment [3] generated considerable interest. Initially, some of the responses were critical in one way or another. Some (e. g., Ref. [7]) objected to the use of the term "wavefunction collapse" in describing the experiment. The authors responded that the concept of wavefunction collapse was not essential, and that any interpretation of quantum mechanics that yielded the same prediction of the experimental results should be regarded as valid [8]. Some objected to the fact that photons were not actually observed during the intermediate "measurements," in the sense of having the scattered photons registered by a detector, so that the experiment did not actually demonstrate the quantum Zeno effect [7,9,10,11]. However, the results are predicted to be the same whether or not the intermediate measurements are made. It is enough that the measurements could have been made. As long as the laser interactions act effectively as projection operators, so that the algebra of Eqs.
(1)-(4) is followed, the experiment should be regarded as a demonstration of the quantum Zeno effect. It should be noted that none of the criticisms were directed at the execution of the experiment itself, only at the interpretation. For the most part, the citations to Ref. [3] simply accept it as a demonstration of the quantum Zeno effect. In fact, it is cited in quantum mechanics textbooks [12,13,14,15] and popular science books [16,17,18].
Distinctions
While Misra and Sudarshan originally used the term "quantum Zeno paradox," as did Peres [4] and others, the more recent work usually uses the term "quantum Zeno effect," perhaps because the effect no longer seems paradoxical. Some authors distinguish between the quantum Zeno paradox and the quantum Zeno effect, but they do so in differing ways. Pascazio and Namiki [19] call the situation in which the frequency of measurements is finite and the evolution is slowed the quantum Zeno effect, and the limiting case in which the frequency of measurements is infinite and the evolution is frozen the quantum Zeno paradox. Block and Berman [20] call the inhibition of spontaneous decay the quantum Zeno paradox and the inhibition of induced transitions (as in the IHBW experiment) the quantum Zeno effect. In Ref. [21], Home and Whitaker reserve the term quantum Zeno paradox for a negative-result experiment involving observations with a macroscopic apparatus. This definition of the quantum Zeno paradox seems to exclude most, if not all, feasible experiments. In this context, the IHBW experiment is not regarded as an example of the quantum Zeno paradox because a local interaction is present between the laser field and the atoms, and also because the electromagnetic field, containing zero or a few scattered photons, is not regarded as a macroscopic observation apparatus. They regard the type of experiments where the time evolution of a quantum system is affected by a direct interaction, for example with an external field, as examples of the quantum Zeno effect. However, in a later publication [22] the same authors treat the terms quantum Zeno paradox and quantum Zeno effect as synonymous and restrict both to nonlocal negative-result experiments involving a macroscopic observation apparatus. Experiments that do not meet these criteria would not be examples of either the quantum Zeno paradox or the quantum Zeno effect, according to their later definition.
Extensions
Several variations on the general theme of quantum Zeno effects have been described. Soon after the IHBW experiment was carried out, Peres and Ron [23] showed that a partial quantum Zeno effect results if the measurements are too weak to completely destroy the coherence of the state of the measured system. A modification of the IHBW experiment was proposed in which the measurement laser pulses are weakened. Jordan et al [24] showed that a related effect, damped oscillations of the state populations, can occur if the duration of the experiment is extended, while weak measurements are made. Some, including Kofman and Kurizki [25] and Facchi et al [26] have shown that the decay of an unstable quantum system can be accelerated by frequent observations. This is called the quantum anti-Zeno effect or the inverse quantum Zeno effect. As is the case for the quantum Zeno effect, the observations must take place before the decay becomes exponential. Unlike the quantum Zeno effect, which follows from rather general arguments, e. g. Eqs.
(1)-(4), the possibility of observing a quantum anti-Zeno effect depends on the details of the system. The experiment of Fischer et al [6] demonstrated the quantum anti-Zeno effect as well as the quantum Zeno effect. An interesting generalization of the quantum Zeno effect is the concept of quantum Zeno dynamics [27,28]. Frequent measurements can confine the evolution of a quantum system to a subspace of the Hilbert space rather than simply to the initial state. Compared to the ordinary quantum Zeno effect, the difference is that the measurements distinguish not between the initial state and all other states but between a subspace and the rest of the Hilbert space. This form of quantum Zeno effect may find application in quantum information processing.
Applications
As already noted, the recent increase in the rate of citations to the articles of Misra and Sudarshan [2] and IHBW [3] is partially related to increased interest in quantum information processing. In this context, there have been various proposals to use the quantum Zeno effect to preserve quantum systems from decoherence.
Beige et al [29] have proposed an arrangement of atoms inside an optical cavity capable of carrying out quantum logic operations with low error rates within a decoherence-free subspace of the Hilbert space. States outside the decoherence-free subspace are coupled strongly to the environment. The quantum Zeno effect then leads to effective confinement of the system to the decoherence-free subspace, which is an example of a quantum Zeno subspace.
Franson et al [30] have proposed use of the quantum Zeno effect to suppress errors in a linear optics implementation of quantum computation. In this implementation, the presence of two photons in the same mode indicates an error. The presence of a strong two-photon absorber in an optical fiber takes the role of the "observer" and suppresses the errors. Other proposed applications of the quantum Zeno effect to error prevention in quantum computation are discussed in Refs. [31,32,33,34].
Quantum "bang-bang" control and related dynamical decoupling techniques [35,36] utilize frequent, pulsed interactions to effectively prevent decoherence of a quantum system by confining the dynamics to a subspace. This is not exactly the quantum Zeno effect, since the interactions are unitary, but the results are mathematically similar to those for the quantum Zeno effect.
Dhar et al [37] have discussed the "super-Zeno effect," which preserves a state (or more generally, keeps a quantum system within a subspace of the Hilbert space) with a set of pulsed interactions unequally spaced in time. The timing of these interactions can be arranged so as to be more efficient than can be done with the same number of equally spaced interactions (ordinary quantum Zeno effect). Also, it should be noted that the pulsed interactions are unitary kicks, as in the so-called "bang-bang control" [35], and not observations in the usual sense.
Conclusion
The 1977 publication of Misra and Sudarshan stimulated a great deal of theoretical and experimental work that has enhanced our understanding of the time development of quantum systems, such as the short-time nonexponential decay of unstable quantum systems. The results of the IHBW experiment, published in 1990, were a clear confirmation of the existence of the quantum Zeno effect for the case of the inhibition of an induced transition. Interest in the quantum Zeno effect continues to be high, partially due to the possibility of practical applications in quantum information processing.
Cell Envelope of Corynebacteria: Structure and Influence on Pathogenicity
To date the genus Corynebacterium comprises 88 species. More than half of these are connected to human and animal infections, with the most prominent member of the pathogenic species being Corynebacterium diphtheriae, which is also the type species of the genus. Corynebacterium species are characterized by a complex cell wall architecture: the plasma membrane of these bacteria is followed by a peptidoglycan layer, which itself is covalently linked to a polymer of arabinogalactan. Bound to this, an outer layer of mycolic acids is found which is functionally equivalent to the outer membrane of Gram-negative bacteria. As final layer, free polysaccharides, glycolipids, and proteins are found. The composition of the different substructures of the corynebacterial cell envelope and their influence on pathogenicity are discussed in this paper.
The Genus Corynebacterium
The genus Corynebacterium belongs to the class Actinobacteria (high G+C Gram-positive bacteria) and comprises a collection of morphologically similar, irregular- or club-shaped, nonsporulating, (micro)aerobic microorganisms [1,2]. To date, 88 species have been taxonomically classified [3]. More than half of these, that is, 53 species, are occasional or rare causes of infections, with the most prominent member of the pathogenic species being Corynebacterium diphtheriae, which is also the type species of the whole genus. Several pathogenic species are considered to be part of the human skin flora, for example, Corynebacterium amycolatum or Corynebacterium jeikeium; others are considered zoonotic agents, for example, Corynebacterium pseudotuberculosis, Corynebacterium ulcerans, or Corynebacterium xerosis [3]. Biotechnologically important species used for the industrial production of nucleotides and amino acids are Corynebacterium ammoniagenes, Corynebacterium efficiens, and Corynebacterium glutamicum. C. glutamicum in particular dominates the field of white (bacterial) biotechnology, with an annual production of two million tons of L-glutamate and 1.8 million tons of L-lysine [4] and increasing application as a platform organism for the industrial production of various metabolites [5].
General Cell Envelope Architecture
Almost all Corynebacterium species are characterized by a complex cell wall architecture: the plasma membrane of these bacteria is covered by a peptidoglycan layer, which itself is covalently linked to arabinogalactan, an additional heteropolysaccharide meshwork. Bound to this, an outer layer of mycolic acids is found which is functionally equivalent to the outer membrane of Gram-negative bacteria. As top layer, outer surface material composed of free polysaccharides, glycolipids, and proteins (including S-layer proteins, pili, and other surface proteins) is found (reviewed in [29,30]; see also Figure 1). The corynebacterial cell envelope has been investigated by different optical techniques. Electron microscopy of thin sections after freeze substitution revealed a layered cell envelope organization, comprising a plasma membrane, a thick electron-dense layer, an electron-transparent layer, and a thin outer layer [31][32][33]. This picture resembles the typical mycobacterial cell envelope appearance. The electron-dense layer is traditionally interpreted as peptidoglycan, the electron-transparent layer as the mycolic acid layer. When comparing the thicknesses determined from the electron microscopic pictures of corynebacteria and mycobacteria, the question arose why their electron-transparent layers have a similar thickness despite the fact that mycobacterial mycolic acids are about three times as long as corynebacterial ones [33]. A solution to this problem was indicated by cryoelectron tomography studies. These revealed a typical outer membrane with a bilayer structure in Mycobacterium smegmatis, Mycobacterium bovis, and C. glutamicum. The thickness of the mycobacterial outer layers was smaller than expected and, based on this observation, an alternative model for mycolic acid distribution was proposed, implicating a folding of mycolic acids [34,35].
The general structure and composition of the corynebacterial cell envelope were earlier reviewed by [29,30] in respect to biochemical and genetic properties especially in C. glutamicum. The aim of this paper will be the presentation of cell envelope properties with a broader focus on different corynebacteria and the importance of cell envelope components for pathogen host interaction.
Cytoplasmic Membrane and Fatty Acid Synthesis
The cytoplasmic or plasma membrane is the main diffusion barrier of cells and separates the cytoplasm from the environment. As in other bacteria, the corynebacterial plasma membrane is mainly composed of phospholipids, assembled into a lipid bilayer, which additionally contains other polar lipids besides a great variety of proteins crucial for transport processes and the bioenergetics of the cell. The main phospholipid found in C. glutamicum is phosphatidylglycerol, followed by diphosphatidylglycerol, phosphatidylinositol, and minor amounts of phosphatidylinositol dimannosides (PIM₂) [33,[36][37][38]. Fatty acids dominating in the plasma membrane are the saturated palmitic acid (16:0) and the unsaturated octadecenoic acid (18:1) [39][40][41][42]. However, as in other bacteria, the fatty acid composition may change significantly depending on environmental conditions such as low or high temperature [43] or the carbon source available [42].
Fatty Acid Synthesis.
Fatty acids are synthesized by successive cycles of multistep reactions [44]. Two distinct types of fatty acid synthases (FASs) are distinguished based on their general composition: the FAS-II type, characteristically found in bacteria, contains the minimum seven functional domains necessary for fatty acid synthesis organized in separate polypeptides, whereas the FAS-I type, typically found in eukaryotes, is comprised of a large multifunctional protein complex. Interestingly, and as an exception to the rule, FAS-I proteins are found in members of the Corynebacterineae. C. glutamicum and C. efficiens even contain two FAS-I-type complexes, FAS-IA and FAS-IB, which were functionally characterized in detail for C. glutamicum [42]. FAS-IA is essential in C. glutamicum, while FAS-IB is not; however, FAS-IB-devoid mutant strains exhibit an altered pattern of fatty and mycolic acids, showing that FAS-IB is active and necessary to generate the typical wild-type fatty acid profile [42]. The regulatory mechanism allowing adaptation of FAS activity to environmental stimuli (see above) is unknown.
C. glutamicum does not only have genes coding for two FAS-I but also genes coding for FAS-II [42], which are involved in elongation of mycolic acid chains in mycobacteria. Since C. glutamicum does not contain elongated mycolic acids and FAS-II is absent in other corynebacteria such as C. diphtheriae, it was speculated that these proteins might play a minor physiological role or even not be functional in C. glutamicum [42].
Lipomannan and Lipoarabinomannan.
As in other pro- and eukaryotes, the corynebacterial lipid bilayer of the cytoplasmic membrane is not symmetric. In corynebacteria, the proposed reason for asymmetry is an insertion of glycoconjugates in the outer sheet of the cytoplasmic membrane [29]. Lipomannan (LM) and lipoarabinomannan (LAM) derivatives were found in different Corynebacterium species [33]. They might be inserted into the plasma membrane via covalently bound palmitic acid or octadecenoic acid molecules, besides their appearance in the outer surface material of corynebacteria. The distribution of LM- and LAM-like substances seems to be species-specific: in C. glutamicum LM-like molecules are dominating, in C. xerosis and C. amycolatum LAM-like substances were preferentially found, while a C. diphtheriae strain showed an almost equal distribution of LM and LAM derivatives [33]. An excellent review dealing with the synthesis of PIM, LM, and LAM derivatives was published recently [45]. Interestingly, LAM is not only inserted into the plasma membrane but is also located at the surface of C. diphtheriae (see the following) and facilitates binding to epithelial cells [46]. The role of lipoarabinomannan with respect to initiation of immune responses was addressed in C. glutamicum recently [47]. Characterization of a C. glutamicum strain devoid of the (1 → 2) arabinofuranosyltransferase AftE revealed that AftE is involved in the synthesis of the arabinans of LAM. Absence of AftE leads to a hypermannosylated variant of LAM, designated hLM. Both LAM and hLM were able to modulate the initiation of immune responses by interacting with TLR2. As shown by a number of in vitro assays, arabinose branching of lipoarabinomannan impacts T-helper-cell differentiation, and LAM as well as hLM activate dendritic cells via TLR2. Interestingly, alterations of lipoarabinomannan seem to be discriminated by TLR2, and signal pathway induction by hLM was shown to be broader. In accordance with this observation, hLM was shown to be a stronger inducer of immune responses in mice.
The Cell Wall Heteropolysaccharide Meshwork
In contrast to, for example, Escherichia coli or Bacillus subtilis, the cell wall skeleton of corynebacteria and related taxa is not exclusively composed of peptidoglycan, but, in addition, a layer of arabinogalactan is covalently bound to the peptidoglycan, which itself is linked to mycolic acids (for review see [29,33,48]).
Peptidoglycan.
As in other bacteria, the glycan part of the murein sacculus is composed of alternating β-1,4-linked N-acetylglucosamine and N-acetylmuramic acid units, which form the glycan backbone of the macromolecule. Cross-linking between different glycan polymers occurs via peptide side chains attached to the carboxyl group of muramic acid via peptide bonds. Corynebacterial peptidoglycan is directly cross-linked, as shown for C. bovis, Corynebacterium pseudodiphtheriticum, C. pseudotuberculosis, C. striatum, C. ulcerans, and C. xerosis [49]. Interpeptide bridges as found in other Gram-positives are absent. In summary, the peptidoglycan of Corynebacterium sensu stricto is of the A1γ type [49]. In C. diphtheriae, the major peptide units found are the tetrapeptide L-Ala-D-Glu-meso-DAP-D-Ala and the tripeptide L-Ala-D-Glu-meso-DAP [50]. Interestingly, only a portion of the peptide side chains are cross-linked via D-Ala-meso-DAP bridges, while the others are supposed to be connected by DAP-DAP bridges [29]. Due to the similar peptidoglycan structure, as well as the homology and synteny of genes involved in cell wall synthesis, peptidoglycan synthesis is assumed to be similar to that in E. coli [34] (for a topical review on E. coli cell wall synthesis, see [51]) and can be separated into three distinct parts. First, the building blocks of peptidoglycan have to be synthesized in the cytoplasm. For this purpose, UDP-N-acetylglucosamine is synthesized and partially converted to UDP-N-acetylmuramic acid by the murA and murB gene products [30,52]. Next, UDP-N-acetylmuramyl-pentapeptide is formed, a process in which the murE, murF, murD, and murC gene products are involved.
The second step of building block synthesis is located at the cytoplasmic membrane and involves transfer of a phospho-N-acetylmuramyl-pentapeptide to polyprenol phosphate catalyzed by the mraY gene product and resulting in lipid I. As in many other bacteria, undecaprenol (C₅₅) might be the polyprenol used, since at least in C. glutamicum this compound is used for polyprenyl monophosphomannose synthesis [53]. Next, N-acetylglucosamine is transferred from UDP-N-acetylglucosamine to lipid I. This is catalyzed by murG and yields lipid II or N-acetylglucosamine-β-(1,4)-N-acetylmuramyl(pentapeptide)-pyrophosphoryl-polyprenol. The generated lipid intermediate mediates the transport of the hydrophilic disaccharide pentapeptide precursor from the cytoplasm across the hydrophobic plasma membrane to the peptidoglycan layer.
As third step, disaccharide pentapeptide precursors are integrated into the growing peptidoglycan meshwork by transglycosylation and transpeptidation reactions catalyzed by penicillin binding proteins [54]. Typically, several copies of these can be found in different Corynebacterium species such as C. diphtheriae [52] and C. glutamicum [55] (for review, see [30]). For C. glutamicum five out of nine putative penicillin binding proteins were shown to be functional in peptidoglycan synthesis [55].
A strict coordination of the described steps of cell wall synthesis is crucial for survival of bacteria, since otherwise the cells would be prone to disruption due to the high internal turgor pressure. The regulation of this process and cell division has been reviewed for C. glutamicum [54] and other Actinobacteria [56] recently.
Linker Unit.
As shown in mycobacteria, arabinogalactan is covalently bound to the murein sacculus via a polysaccharide linker unit making up phosphodiester bonds to about 10% of the muramic acid residues of the peptidoglycan [57]. The mycobacterial linker unit consists of galactose, rhamnose, and N-acetylglucosamine linked as Galf -(1 → 4)-Rhap-(1 → 3)-GlcNAc via a 1-O-phosphoryl bond of GlcNAc to the 6-OH position of muramic acid [58]. Studies of C. diphtheriae revealed a very similar linkage profile of arabinogalactan to that of Mycobacterium tuberculosis [30,33].
Whereas C. glutamicum arabinogalactan consists exclusively of arabinose and galactose, C. diphtheriae arabinogalactan contains significant amounts of mannose, and C. amycolatum and C. xerosis arabinogalactans are characterized by additional glucose content [33]. In any case, arabinogalactan provides a covalent connection not only to peptidoglycan but also to the outer membrane layer.
Composition and Biosynthesis.
A second permeability barrier equivalent to the outer membrane of Gram-negative bacteria is a key feature of the CMN group (Corynebacterium, Mycobacterium, and Nocardia) of Actinobacteria. The functionality of this barrier is critically influenced by its mycolic acid content [63]. The inner half of the corynebacterial mycolic acid layer is mainly formed by mycolic acids esterified to the 5-OH group of the penultimate (1 → 2)linked or ultimate Araf residue of arabinogalactan, while in the outer sheet, trehalose and glycerol esterified mycolic acids are predominating. Additionally, minor amounts of free mycolic acids are found. A recent biochemical disclosure of the outer membrane of C. glutamicum showed that the lipids composing the mycomembrane consist almost exclusively of mycolic acids derivatives, whereas only minor amounts, if any, of phospholipids and lipomannans were detected [64].
In corynebacteria, fatty acids with around 30 carbon atoms (corynomycolates), in nocardia with about 50 carbon atoms (nocardomycolates), and in mycobacteria with about 70 to 90 carbon atoms (eumycolates) are found [29]. In contrast to the linear fatty acids of the phospholipids, mycolic acids are α-branched β-hydroxy fatty acids, requiring carboxylation and condensation of two fatty acids for their synthesis [65]. The enzymes involved in these steps were identified by mutation analyses in C. glutamicum, and two carboxylases were identified to be essential for mycolic (and fatty) acid synthesis, AccD2 and AccD3 [66]. These are conserved in Corynebacterineae and provide the crucial carboxylated intermediate for condensation of the mero chain and the α-branch [67]. Further proteins involved in mycolic acid synthesis in C. glutamicum and related organisms are AccD1, involved in malonyl-CoA synthesis, Pks, a ketoacyl synthase involved in fatty acid elongation, and FadD, a fatty acid acyl-AMP ligase ([65,67]; for a recent review see [30]).
Three pathways for trehalose synthesis were described in C. glutamicum: the OtsA-OtsB pathway synthesizing trehalose from UDP-glucose and glucose-6-phosphate, the TreY-TreZ pathway using malto-oligosaccharides or α-1,4-glucans as substrate, and the TreS pathway using maltose as an educt for trehalose synthesis [68,69]. It is suggested that the transfer of mycolic acid moieties to trehalose occurs outside the cytoplasm [70], since in the absence of internally synthesized trehalose, externally added trehalose, glucose, maltose, or maltotriose can be used as substrates for mycolic acid modification, resulting in the corresponding di- and monocorynomycolates [70].
The production of arabinogalactan-linked mycolates, trehalosylmonocorynomycolates, and trehalosyldicorynomycolates indicates the presence of mycolyltransferases. In fact, proteins similar to the mycobacterial antigen 85 showing mycoloyltransferase activity, also designated fibronectinbinding protein, were identified in corynebacteria. The first member was the PS1 protein from C. glutamicum [71]. Later it was shown that six of these proteins are present in this species, five in C. efficiens, and four in C. diphtheriae [29,72,73]. The enzymes are fully redundant in C. glutamicum in respect to mycoloyl moiety transfer to trehalose and partially redundant with respect to transfer of arabinogalactan [29,72,74]. The proteins associated with mycolic acid transport across the plasma membrane were characterized in M. smegmatis and C. glutamicum recently [75]. In C. glutamicum, four mmpL genes encode large membrane proteins associated with mycolate metabolism and transport, which have partially redundant function [75].
Besides the plasma membrane lipids, the outer membrane fatty acid composition also has to be adapted to different temperatures, a crucial process for function of the mycolic acid layer as a diffusion barrier [76]. In fact, one stressinduced protein, designated ElrF, was identified, which is conserved in Corynebacterineae and plays a role in the regulation of outer membrane lipid composition in response to heat stress [77].
Corynomycolates and Pathogenicity.
Almost all Corynebacterium and Mycobacterium species are characterized by a complex cell wall architecture comprising an outer layer of mycolic acids, which is functionally equivalent to the outer membrane of Gram-negative bacteria, not only in respect to its physiological role as a permeability barrier, but also as an important component of host-pathogen interaction. It has long been known that constituents of the mycolic acid layer may be immunostimulatory and may affect macrophage function. These effects are best investigated in M. tuberculosis [78], where trehalose dimycolate, also designated as cord factor, inhibits fusion events inside the host macrophage and at the same time contributes to macrophage activation. However, only limited data for corynebacteria are available. Investigations of C. pseudotuberculosis (formerly Corynebacterium ovis) indicate a lethal effect of outer membrane lipids on caprine and murine macrophages. Lipid extracts of C. pseudotuberculosis had negative effects on glycolytic activity, viability, and membrane integrity [79]. When macrophages were infected with C. pseudotuberculosis, uptake of the bacteria and lysosome fusion were functional, but the bacteria survived internalization. However, the macrophages were destroyed [80]. For C. glutamicum, priming and activation of murine macrophages by trehalose dimycolates were reported [81].
Top Layer
Cell surface molecules extracted from different corynebacteria consist of over 90% carbohydrates and of a minor portion (less than 10%) of proteins. Analyses of surface saccharides of C. diphtheriae strains indicated the presence of sugars such as N-acetylglucosamine, N-acetylgalactosamine, galactose, mannose, and sialic acid [82]. In C. amycolatum and C. xerosis, a neutral glucan consisting mainly of glucose with an apparent mass of 110 kDa was found, in addition to arabinomannans consisting of arabinose and mannose in a 1 : 1 ratio of 13 and 1.7 kDa [29,33]. Additionally, the above-mentioned lipoarabinomannan and lipomannan were detected in the outer layer, besides trehalose dicorynomycolate, trehalose monocorynomycolate, and phospholipids. Interestingly, the same lipid composition was found for whole bacteria, indicating that all classes of lipid molecules were exposed on the corynebacterial cell surface [33], in contrast to the mycobacterial situation [83]. Also, a more distinct composition of plasma membrane and mycomembrane was reported for C. glutamicum [64].
While Puech and coworkers could only detect a few bands on coomassie-stained SDS polyacrylamide gels for different corynebacteria, proteome analyses supported the idea that a plethora of proteins is exposed on the surface of corynebacteria. Corresponding studies carried out for C. diphtheriae [84], C. efficiens [85], C. glutamicum [64,85,86], C. jeikeium [87], and C. pseudotuberculosis [88] revealed a significant number of proteins for every single species. Often, these proteins are uncharacterized; some clearly have functions for nutrient uptake and growth, while others are involved in host pathogen interactions.
Corynebacterial Porins.
Since the permeability barrier of the mycolic acid layer most likely hinders nutrient uptake by plasma membrane transporters, the outer membrane has to be selectively permeabilized to allow growth. For this purpose, porin proteins are inserted into the membrane. In general these are functionally equivalent to Gram-negative porins but have a completely different structure, multimers of α-helical subunits instead of the Gram-negative trimeric β-barrels.
Corynebacterial porins are best investigated in C. glutamicum, where several different channel-forming proteins were identified, namely, PorA, PorB, PorC, and PorH [89][90][91][92]. Homologs of these were found in Corynebacterium callunae, C. efficiens [93], and C. diphtheriae [94]. Despite the fact that C. amycolatum does not have corynemycolic acids and contains only small amounts of extractable lipids [33,95,96], a channel-forming protein was also isolated from this pathogenic species [97]. Reconstitution experiments with purified PorA and PorH from C. glutamicum and homology studies indicated that the major cell wall channels of C. callunae, C. diphtheriae, C. efficiens, and C. glutamicum are formed of two porins, one of the PorA and one of the PorH type [98]. Furthermore, for C. glutamicum PorA and PorH, an O-mycolation was shown, an extremely unusual modification [99].
Comparative studies of C. glutamicum wild type and a PorA-lacking mutant strain revealed a drastically decreased susceptibility of the mutant towards ampicillin, kanamycin, streptomycin, tetracycline, and gentamicin [92]. Corresponding studies with pathogenic corynebacteria are missing; however, based on the structural and functional similarities depicted above, differences in the porin repertoire might be one reason for different antibiotic susceptibility observed for the different species.
S-Layer Proteins.
In some Corynebacterium strains, the cell surface is covered with a crystalline surface layer composed of a single protein species, which is anchored in the corynebacterial outer membrane [100]. In C. glutamicum, the S-layer protein PS2 has an apparent molecular weight of 63 kDa and is anchored in the mycomembrane by a C-terminal hydrophobic domain [101]. Electron and atomic force microscopy applications revealed a highly ordered hexagonal S-layer [31,[101][102][103][104] and a certain degree of variability in different C. glutamicum isolates [103]. The PS2-encoding cspB gene is located on a genomic island [103], a situation often found for virulence genes; however, no distinct phenotype was found for S-layer mutant strains.
Sialidase.
Sialidases, also designated as neuraminidases, are glycosyl hydrolases that catalyze the removal of terminal sialic acid residues from a variety of glycoconjugates of the host surface [105]. The sugar is subsequently metabolized or used to decorate the surface of the pathogen. In fact, C. diphtheriae exposes sialic acids on its outer surface [82]. Sialidase activity was first identified in a crude preparation of diphtheria toxin [106], and sialidase production and composition of cell surface carbohydrates of C. diphtheriae seem to be directly depending on iron concentration in the medium [107][108][109]. A putative exosialidase, designated NanH, was identified in C. diphtheriae. Biochemical studies revealed trans-sialylation activity; however, it is still unclear if NanH is involved in sialic acids decoration or not [107].
Pili.
Proteinaceous protrusions such as fimbriae and pili are pivotal players for the attachment of bacteria to abiotic and biotic surfaces. In corynebacteria, pili were first reported in Corynebacterium renale [110]. Later detailed molecular biological analyses were carried out in C. diphtheriae (for review, see [111]). C. diphtheriae type strain NCTC13129 produces three distinct pilus structures, SpaA-, SpaD-, and SpaH-type pili, which are polymerized by specific class C sortases [112] and covalently linked to cell surface by sortase F [113]. Each type of pilus is composed of the name-giving shaft proteins, SpaA, SpaD, and SpaH, and minor pili subunits, that is, SpaB, SpaC, SpaE, SpaF, SpaG, and SpaI. For SpaB and SpaC, a function in cell line specificity of pathogen attachment was shown [114]. Ott and coworkers observed that different C. diphtheriae isolates are characterized by a different pili repertoire [115] which is characterized by different biophysical properties [116]. Interestingly, pili formation and adhesion rate are not strictly coupled processes and bald strains are also able to attach to host cells [115], indicating the presence of adhesion factors besides pili.
In vitro experiments with protein-coated latex beads indicated that DIP0733 contributes to invasion and induction of apoptosis in HEp-2 cells [118].
NlpC/P60 Proteins. Pathogen factors responsible for adhesion are at least partially characterized; however, the molecular background of invasion remains unclear. Based on a comprehensive analysis of proteins secreted by C. diphtheriae [84], Ott and coworkers [119] started to characterize the surface-associated protein DIP1281, annotated as an invasion-associated protein. DIP1281 is a member of the NlpC/P60 family, a large superfamily of several diverse groups of proteins [120], including putative proteases and probably invasion-associated proteins. They are found in bacteria, bacteriophages, RNA viruses, and eukaryotes, and various members are highly conserved among nonpathogenic and pathogenic corynebacteria such as C. diphtheriae, C. efficiens, C. glutamicum, and C. jeikeium. DIP1281 mutant cells completely lacked the ability to adhere to host cells and consequently to invade these. Based on proteome analyses and fluorescence and atomic force microscopy, it was concluded that DIP1281 is a pleiotropic effector of the C. diphtheriae outer surface rather than a specific virulence factor [119], an idea that was supported by results obtained for corresponding mutants of the nonpathogenic C. glutamicum R strain [121]. Nevertheless, proteins of this family seem to play a major role in corynebacterial cell surface organization and cell separation. Furthermore, the results obtained with C. diphtheriae mutant strains [119] support the idea that the corynebacterial cell envelope components are important determinants of host-pathogen interactions.
Concluding Remarks
Corynebacteria show a fascinatingly complex cell wall synthesis and architecture (see also Figure 1). Enormous progress has been made in the characterization of fatty acids, sugar moieties, and cell wall polysaccharides; however, the protein content of the top envelope layer has been only partly characterized, despite its probable importance for intercellular communication and host-pathogen interaction, as a target for therapy, and for biotechnological production. Furthermore, porin and S-layer proteins may have interesting biotechnological applications with respect to the surface display of proteins and the setup of nanostructures [100,122].
|
2016-05-16T12:30:45.199Z
|
2013-01-21T00:00:00.000
|
{
"year": 2013,
"sha1": "697cd203eb7effa372a4a19c76a79a84eba02ea2",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/archive/2013/935736.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "697cd203eb7effa372a4a19c76a79a84eba02ea2",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
119117778
|
pes2o/s2orc
|
v3-fos-license
|
Astrophysical Constraints on Dark Matter
Astrophysics gives evidence for the existence of Dark Matter and puts constraints on its nature. The Cold Dark Matter model, combined with a cosmological constant, has become the "standard" cosmology. There are indications that "Cold" Dark Matter could be "warmer" than initially discussed. This paper reviews the main information on the Cold/Warm nature of Dark Matter.
Introduction
Warm Dark Matter (WDM) has become a hot topic and the debate is not settled. The Meudon Workshop 2011 ran in parallel with our CYGNUS Directional Direct Detection workshop and was followed in July by the 2011 Cosmology Colloque in Paris. Presentations from those two workshops are available online.
Due to the restricted number of pages, I skip the review of the astrophysical evidence for the existence of Dark Matter (DM). At all scales (galaxy, cluster and Universe) there are observations which point to the existence of a large quantity of DM. A small proportion of dark (i.e., non-luminous) matter may be baryonic: astronomical bodies such as black holes, massive compact halo objects, or cold molecular hydrogen, ... DM, however, is currently defined in cosmological parameter fits as non-baryonic and cold. Candidates include neutrinos and other hypothetical entities such as axions or supersymmetric particles, ... Non-baryonic DM is classified as Hot Dark Matter (HDM), Warm Dark Matter (WDM), or Cold Dark Matter (CDM), depending on the velocity of the particles at decoupling. In the CDM hypothesis, the DM particles are massive and have small velocities. The most studied candidate is the supersymmetric neutralino of the MSSM (Minimal Supersymmetric Standard Model).
Hot DM particles (e.g., light neutrinos) would have velocities close to the speed of light. The DM velocity has consequences for large-scale structure (LSS) formation. The CDM model yields "bottom-up" hierarchical formation of structures in the universe, while HDM would induce preferentially "top-down" formation.
In the nineties, the development of N-body simulations of large-scale structure led to a preference for CDM models over HDM.
2 N-body simulations of LSS
S. von Hoerner (1960, 1963) pioneered the work on N-body simulations in the early sixties. Many simulations use only CDM, and thus include only the gravitational force. Incorporating baryons into the simulations dramatically increases their complexity, and in the past radical simplifications of the underlying physics were made. Only in the last decade have simulations tried to capture the processes that occur during galaxy formation. N-body simulations of cosmological structures with CDM have shown that the radial profiles of the mass density and velocity dispersion of DM haloes follow rather simple universal functional forms that are largely independent of halo properties such as mass, environment, and formation history. For the density profile, there have been many discussions about the value of the logarithmic slope of its central cusp, whether it is -1 (as, for example, in Hernquist (1990) and Navarro et al. (1997)) or -1.5 as in Moore et al. (1998).
Results from more recent N-body simulations actually suggest the lack of a definite inner slope: the density profile of the now better resolved DM haloes continues to flatten with decreasing radius (e.g., Navarro et al. 2004; Merritt et al. 2005, 2006; Graham et al. 2006). Functional forms such as the Einasto (1969) or the Prugniel and Simien (1997) profiles, motivated by the Sersic profile for the surface brightness of galaxies (Sersic 1968), provide a more accurate fit to the more recent simulations.
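To make the difference concrete, here is a minimal sketch (my own illustration, not taken from this paper) of the logarithmic density slopes of the two families of profiles: the NFW form, whose inner slope tends to -1, versus an Einasto form, whose slope keeps flattening toward zero at small radii. The shape parameter alpha = 0.17 is only a typical illustrative value.

```python
import numpy as np

def nfw_log_slope(r, r_s=1.0):
    """Logarithmic slope d ln(rho)/d ln(r) of an NFW profile,
    rho(r) proportional to 1 / [(r/r_s) * (1 + r/r_s)**2]."""
    x = r / r_s
    return -(1.0 + 2.0 * x / (1.0 + x))

def einasto_log_slope(r, r_s=1.0, alpha=0.17):
    """Logarithmic slope of an Einasto profile,
    rho(r) proportional to exp(-(2/alpha) * ((r/r_s)**alpha - 1));
    the slope tends to 0 (a flat core) as r -> 0."""
    x = r / r_s
    return -2.0 * x**alpha

for r in np.logspace(-3, 0, 4):  # radii in units of r_s
    print(f"r/r_s = {r:7.3f}   NFW slope = {nfw_log_slope(r):6.3f}"
          f"   Einasto slope = {einasto_log_slope(r):6.3f}")
```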
Another parameter that has been used is the radial profile of the pseudo-phase-space density, ρ/σ^3, where σ is either the total velocity dispersion or the velocity dispersion in the radial direction. It seems well approximated by a power law (e.g., Taylor and Navarro 2001; Ascasibar et al. 2004; Dehnen and McLaughlin 2005; Hoffman et al. 2007; Stadel et al. 2009).
Comparison of N-body simulations with observations
At the end of the millennium, increasingly precise N-body simulations of DM structures revealed some problems with a pure CDM model at small scales: the predicted number of galactic satellites was not observed (cf. e.g., Klypin et al., 1999) and a cusp/core controversy in galactic centers developed. This led some to invoke WDM to explain the discrepancy. However, newly observed faint galactic satellites and other explanations for the observed galactic cores could allow the CDM model to survive. In the meantime, measurements of the size of mini-voids in the local Universe and of HI velocity functions and widths have increased the importance of the so-called "overabundance problem" in pure ΛCDM simulations. But the controversy is still on...
It has been shown (e.g., Mashchenko, Couchman, and Wadsley, 2006) that stellar feedback can solve this difference by removing cusps. Numerical simulations with random bulk motions of gas (driven for example by supernova explosions in star-forming galaxies) can flatten the central dark matter cusp on relatively short timescales (∼10^8 years). Once removed, the cusp cannot be reintroduced during the subsequent mergers involved in the build-up of larger galaxies. As a consequence, in the present Universe both small and large galaxies would have flat dark matter core density profiles, in agreement with observations. Romano-Diaz et al. (2008) proposed that baryons can also erase DM cusps in cosmological galactic halos. They find a different evolution between the pure DM (PDM) and baryon+DM (BDM) models within the inner few tens of kpc. The PDM model forms an R^-1 cusp as expected, while the DM in the BDM model forms a larger isothermal R^-2 cusp instead. The isothermal cusp is stable until z ∼ 1, when it gradually levels off. This leveling proceeds from the inside out, and the final density slope is shallower than -1 within the central 3 kpc (i.e., the expected size of the R^-1 cusp), tending to a flat core within ∼ 2 kpc. This effect cannot be explained by the finite resolution of the code, nor is it related to the energy feedback from stellar evolution or to angular momentum transfer from the bar. Instead it can be associated with the action of DM+baryon subhalos heating up the cusp region via dynamical friction and forcing the DM in the cusp to flow out and cool down.
Number of galactic satellites
The recent discovery of many new DM-dominated satellites of the Milky Way in the Sloan Digital Sky Survey (e.g., Belokurov et al., 2010) has reduced the importance of the missing satellite issue. Maccio and Fontanot (2010) and Polisensky and Ricotti (2011) have derived lower limits of a few keV on the DM particle mass from the number of Milky Way satellites, since the predicted number of satellites decreases with decreasing mass of the DM particle. Assuming that the number of predicted satellites exceeds or equals the number of observed satellites of the Milky Way, Polisensky and Ricotti derive a lower limit on the DM particle mass of 13.3 keV (95% CL) for a sterile neutrino produced by the Dodelson and Widrow mechanism, 8.9 keV for the Shi and Fuller mechanism, 3.0 keV for the Higgs decay mechanism, and 2.3 keV for a thermal DM particle.
These lower limits are comparable to constraints on the WDM mass from Lyman-α forest modeling (Narayanan et al. 2000; Viel et al. 2005, 2008; Boyarsky et al. 2009a), high-z quasar luminosity functions (Song and Lee 2009), X-ray observations of the unresolved cosmic X-ray background, and DM halos from dwarf galaxy to cluster scales (cf. e.g. Boyanovsky, de Vega, Sanchez 2008; de Vega and Sanchez 2009 for reviews).
HI velocity functions
In their very recent paper, Papastergis, Martin, Giovanelli and Haynes (2011) present results from 40% of the ongoing wide-area, extragalactic HI-line Arecibo Legacy Fast ALFA (ALFALFA) survey. They measure the space density of HI-bearing galaxies as a function of their observed velocity width (uncorrected for inclination) down to velocities of 20 km/s and confirm previous indications (Zavala et al., 2009; Gottloeber, Hoffmann and Yepes, 2009; Trujillo-Gomez et al., 2010) of a substantial discrepancy at low widths between the observed distribution and the ΛCDM simulations.
There is an overabundance of model galaxies by a factor of ∼ 10 compared to observed dwarf galaxies with circular velocity V_circ < 50 km/s. This is a serious problem for the ΛCDM model: galaxies with these small circular velocities cannot be affected much by the normal physical processes (e.g., supernova feedback or reionization of the Universe) proposed for the solution of the satellite problem at V_circ < 30 km/s. The difference in abundance is a factor of about 8 at v = 50 km/s (which corresponds to the resolution limit of the Zavala et al. (2009) simulation), and implies a difference of a factor of ∼ 100 when extrapolated to the ALFALFA low-width limit (v = 20 km/s).
Papastergis, Martin, Giovanelli and Haynes (2011) also examine several solutions to the discrepancy: (i) a 1 keV WDM scenario, and (ii) the possibility that HI disks in low-mass galaxies are usually not extended enough to probe the full amplitude of the galactic rotation curve. In the latter case, they infer a relationship between the measured HI rotational velocity of a galaxy and the mass of its host CDM halo, which should be checked, as it would provide an important test of the validity of the established CDM model.

Tikhonov and Klypin (2010) have studied the luminosity function, peculiar velocities and sizes of voids in the Local Volume within a distance of 4-8 Mpc. The predictions of the standard cosmological ΛCDM model give a factor of 10 more dwarf haloes than the observed number of dwarf galaxies. The theoretical void function matches the observations remarkably well only for haloes with circular velocities V_c larger than 40-45 km/s. For haloes with circular velocities < 35 km/s, there are too many small haloes in the ΛCDM model, resulting in voids that are too small compared with observations. The problem is that many of the observed dwarf galaxies have HI rotational velocities below 25 km/s, which strictly contradicts the ΛCDM predictions. This is related to the "overabundance problem", and could be solved by the same assumptions about keV WDM or about HI disks in low-mass galaxies.
Conclusion on the problems with ΛCDM?
ΛCDM N-body simulations fit a wealth of data impressively well. The discrepancies to date concern the "overabundance problem", which could, however, be due to the inability of HI to trace the maximum halo rotational velocity of low-mass systems. So CDM is not dead yet!
The effect of WDM compared to CDM on structure formation is to remove power from small scales, due to the large thermal velocities of the particles. Future lensing projects like EUCLID can provide measurements of the WDM mass for masses < 2.5 keV, since the cosmic shear power spectrum depends on the DM mass, while the departure from the CDM power spectrum is no longer detectable above roughly 2.5 keV. In order to fully exploit future observations, models should be able to predict the non-linear matter power spectrum at the level of 1 per cent or better for scales corresponding to comoving wavenumbers 0.1 < k < 10 h Mpc^-1. However, baryonic and other astrophysical effects (stellar, supernova, AGN feedbacks, ...) can have large impacts on the measured power spectrum at small scales. This has been verified by different groups of N-body simulations: e.g., Gottloeber et al. (2010), the CLUES project; Guillet, Teyssier and Colombi (2010).
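As a rough illustration of how WDM removes small-scale power, the sketch below evaluates a WDM-to-CDM transfer-function fit of the kind popularized by Viel et al. (2005); the functional form and its numerical coefficients are quoted here only approximately and should be treated as an assumption of this sketch, not as a result of the present paper.

```python
import numpy as np

def wdm_transfer(k, m_wdm_kev, omega_wdm=0.25, h=0.7, nu=1.12):
    """Approximate WDM/CDM transfer function T(k) = [1 + (alpha*k)^(2*nu)]^(-5/nu),
    with k in h/Mpc; coefficients follow a Viel et al. (2005)-style fit (approximate)."""
    alpha = 0.049 * m_wdm_kev**(-1.11) * (omega_wdm / 0.25)**0.11 * (h / 0.7)**1.22  # Mpc/h
    return (1.0 + (alpha * k)**(2.0 * nu))**(-5.0 / nu)

ks = np.logspace(-1, 1, 5)  # 0.1 < k < 10 h/Mpc, the range quoted in the text
for m in (1.0, 2.5):        # WDM particle masses in keV
    suppression = wdm_transfer(ks, m)**2  # P_WDM / P_CDM
    print(f"m_WDM = {m} keV :", np.round(suppression, 3))
```

With these illustrative coefficients, the suppression for a 1 keV particle becomes substantial near the upper end of the quoted wavenumber range, while for a 2.5 keV particle it is much weaker at the same scales.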
Importance of Baryonic physics in N-body simulations
The importance of baryonic physics in N-body simulations and weak lensing was long ignored. White (2004) and Zhan and Knox (2004) calculated the effects of cooling and intra-cluster gas on the lensing power spectrum. These components each have an effect of a few percent on the lensing power spectrum at l around 3000, but with opposite signs. Jing et al. (2006) were the first to include in an N-body gas simulation the physical processes of radiative cooling and star formation, supernova feedback, outflows by galactic winds, and a sub-resolution multiphase model for the interstellar medium.
More recently, Sales et al. (2010) studied the properties of simulated high-redshift galaxies using cosmological N-body gas dynamical runs from the OverWhelmingly Large Simulations (OWLS) project. The different feedback models they use result in large variations in the abundance and structural properties of bright galaxies at z = 2. The OWLS simulations have also been used by van Daalen, Schaye, Booth, and Dalla Vecchia (2011) to study the distribution of power over different mass components, the back-reaction of the baryons on the CDM, and the evolution of the dominant effects on the matter power spectrum. Single baryonic processes are capable of changing the power spectrum by up to several tens of per cent. The simulation that includes AGN feedback predicts a decrease in power relative to a dark-matter-only simulation ranging, at z = 0, from 1 per cent at k ∼ 0.3 h Mpc^-1 to 10 per cent at k ∼ 1 h Mpc^-1 and to 30 per cent at k ∼ 10 h Mpc^-1. They confirm that baryons, and particularly AGN feedback, cannot be ignored in theoretical power spectra for k > 0.3 h Mpc^-1. It is necessary to improve our understanding of feedback processes in galaxy formation.

5 Candidate DM: the sterile neutrino
The need for sterile neutrinos
In the early times of the Standard Model of particle physics, neutrinos were thought to be massless and the different lepton numbers were believed to be conserved. This was a reason for not introducing right-handed neutrinos. However, the observation of neutrino oscillations in experiments with solar, atmospheric, accelerator and reactor neutrinos requires the addition of new particles to the Standard Model. Hence the interest in "sterile" neutrinos, which are right-handed and have very weak (if any) interactions besides gravity... Sterile neutrinos can be cold or warm DM depending on the models and parameters. Shaposhnikov, Boyarsky, and their collaborators have presented many different models of sterile neutrinos. A relatively recent review of astrophysical and cosmological constraints on some models can be found in Boyarsky, Ruchayskiy and Shaposhnikov (2009). The conclusion is that "Realistic Sterile Neutrino Dark Matter with KeV Mass does not Contradict Cosmological Bounds", in agreement with the many astrophysical and laboratory constraints on the WDM mass (and neutrino mixing angles), which were first thoroughly investigated by Abazajian et al. (2001, 2006) with the then-existing data.
Has a sterile neutrino of 5 KeV been found?
Since 2006, several groups have searched for decaying DM (cf. e.g., the many papers of Boyarsky et al., 2006 and later) and set constraints on sterile neutrino model parameters. Loewenstein and Kusenko (2010) report the presence of a narrow emission feature with energy 2.51 ± 0.07 (0.11) keV and flux [3.53 ± 1.95 (2.77)] × 10^-6 photons cm^-2 s^-1 at 68% (90%) confidence in the Chandra X-ray Observatory spectrum of the ultra-faint dwarf spheroidal galaxy Willman 1. Interpreting this signal as an emission line from sterile neutrino radiative decay (the decay photon carries half the neutrino rest mass), the feature is consistent with a sterile neutrino mass of 5.0 ± 0.2 keV. But this signal is too weak and would need confirmation before a claim of discovery can be made.
Conclusion
Since Cygnus is a Directional Direct Detection workshop, news about a keV DM candidate discovery can be a bit unsettling. It is important to keep an open eye on currently discussed candidates since they can have consequences for our projects. While a Universe with only HDM can be excluded, it is not possible to date to rule out either CDM or WDM. There are still many questions: Can we trust present N-body simulations? They are impressive, but halos from the simulations are not galaxies. Are all baryonic and other astrophysical effects well taken into account? The Willman 1 feature is not convincing, so whether DM is Cold or Warm and in the keV range is still not settled. Furthermore, even if WDM in the keV range existed, it is not excluded that more massive CDM would also be present. Finally, there are (many?) other particle candidates than keV sterile neutrinos. Lin et al. (2001) have, for example, proposed non-thermally produced decaying DM, which could reconcile CDM and WDM. Some phenomenology has been presented by Bi et al. (2010). Considering the long timescales of Direct DM Detection efforts, I imagine non-thermally produced decaying DM could be an alternative welcomed by this community...
Acknowledgements:
Many thanks to Frederic Mayet, without whom I would not have reviewed the subject in Aussois. His gentle pressure has given birth to this written version... I am also grateful to Daniel Santos who has kept, for so many years, the steady direction of TPCs for DM, and has continuously shared with me the progress of his team. The workshop has allowed me to meet the new generation of enthusiastic DM research people and I enjoyed the Aussois environment. The content of this talk has been enriched by discussions with Zhang XinMin, Qin Bo, Shan HuanYuan, Bi XiaoJun and Zhan Hu.
|
2011-10-13T14:03:14.000Z
|
2011-10-03T00:00:00.000
|
{
"year": 2012,
"sha1": "b2eafdcdef45b7a9510c4286bddf7beba893e8f8",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1110.0298",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b2eafdcdef45b7a9510c4286bddf7beba893e8f8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
55128322
|
pes2o/s2orc
|
v3-fos-license
|
Measuring heavy metal content in bone using portable X-ray fluorescence
The ability of inorganic-based analytical chemistry techniques to quantify trace amounts of heavy metals in skeletal remains has been integral for understanding health and social status in human populations. Low detection limits and the sensitivity of inductively coupled plasma-mass spectrometry (ICP-MS) and other techniques to most elements on the periodic table are ideally suited for the quantification of lead (Pb) and other heavy metals in bone. However, the time required for sample preparation and analysis, expense, destructive analytical process, and availability of instrumentation often limit researchers’ ability to utilise these techniques for archaeological applications. This paper explores the use of portable X-ray fluorescence (XRF) instrumentation for heavy metal analysis of bone as an alternative to more traditional analytical techniques. XRF has been shown to be an extremely useful tool for archaeologists seeking to conduct quantitative analyses of cultural materials such as obsidian and metals. However, little research has been undertaken to assess the usefulness of portable XRF for measuring heavy metals found in low concentrations in archaeological bone. This paper compares data derived from ICP-MS and portable XRF analyses of bone. Results demonstrate that XRF analyses of bone are problematic due to diagenesis and variability of Pb content in bone.
Introduction
Childhood and adult exposure to lead (Pb) in colonial American populations has historically been attributed to the prominence of lead-glazed pottery in households (Hume, 1969). Other sources include the use of pewter vessels and spoons, consumption of contaminated wines and distilled spirits, and the handling and making of bullets. Lead is a toxic poison, and ingestion by children can result in mental retardation and elevated mortality rates (Needleman, 1992), along with a variety of serious pathological conditions in adults including bowel impactions and neurological impairment. Because human bone accumulates lead throughout an individual's lifetime, the amount of lead in bone can typically account for 70-90% of an individual's lifetime lead burden (Bellis et al., 2006). In order to better understand the extent of lead poisoning in Chesapeake region historic period populations, Smithsonian researchers have been working to establish a database for American populations from early Colonial times to the near-present that includes the trace metal content of Pb and other heavy metals (e.g. As and Hg) in human bone. Because wealthier families had significantly greater access to lead-glazed pottery and pewter, as well as potential pharmaceuticals containing mercury and arsenic, the quantification of heavy metals in human remains has the added benefit of providing a window into the health and economic status of these populations (Aufderheide et al., 1981, 1988). Traditionally, researchers have used various methods for the quantification of lead in bone, including instrumental neutron activation analysis (INAA), X-ray fluorescence (XRF), atomic absorption spectroscopy (AAS), and inductively coupled plasma-mass spectrometry (ICP-MS) (Aras et al., 1999; Bellis et al., 2006; Farnum et al., 1995; Hoppin et al., 1995). Low detection limits and the sensitivity of ICP-MS to most elements on the periodic table make this technique ideally suited for the quantification of Pb and other heavy metals in bone (Speakman et al., 2005, 2007). However, the time required for sample preparation and analysis, along with instrumentation expense and availability, generally limits the use of this technique for archaeological applications. In addition, destructive sampling of human remains is oftentimes not possible due to restrictions imposed by museums and/or cultural groups. In an attempt to identify an alternative, inexpensive, and non-invasive analytical approach, we examined portable XRF (PXRF) for heavy metal analysis of bone. XRF has been shown to be an extremely useful tool for archaeologists conducting quantitative analyses of cultural materials such as obsidian and metals. However, little research has been undertaken to optimise portable XRF for the measurement of heavy metals in bone and/or to assess data generated from such endeavors. This study focuses on the calibration of portable XRF instrumentation for the quantification of lead in bone, followed by the comparison of XRF data to ICP-MS measurements for the same 25 individuals.
X-ray fluorescence description
For XRF analyses, each bone was analysed twice: the first analysis examined an unmodified, visibly clean surface (i.e. the unburred surface); the second tested a surface that had been prepared using a silicon carbide abrading tool (i.e. the burred surface). XRF analyses were conducted using a Bruker Tracer III-V handheld spectrometer. This instrument is equipped with a rhodium tube and a Si-PIN detector with a resolution of ca. 170 eV FWHM for 5.9 keV X-rays (at 1000 counts per second) over an area of 7 mm^2. All analyses were conducted at 40 keV and 15 µA, using a 0.127 mm copper filter in the X-ray path for a 100 second live-time count.
XRF data were then imported into Elva-x Regression (Elvatech LTD, Kiev, Ukraine) for quantification. Peak intensities for Pb were calculated as ratios to the Compton peak of rhodium, and converted to parts per million (ppm) using a quadratic model derived from the analysis of 9 matrix-matched standards (see below).
Matrix-matched standards were made by spiking powdered bone meal with ICP-MS Pb solutions of known concentration. The solution was then allowed to evaporate and the resulting powder was mechanically homogenised using an agate mill. Aliquots of the powder were then pressed into pellets. A second aliquot of each standard was analysed by ICP-MS using the methods described in the following section to verify actual concentrations. Observed versus expected values in the bone standards were determined to be in agreement. In total, nine standards were produced with target concentrations of 1, 25, 50, 100, 250, 500, 1000, 2500, and 5000 ppm.
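The quadratic calibration step described above can be prototyped in a few lines: the known Pb concentrations of the nine standards are regressed against the Pb/Compton(Rh) intensity ratios, and the fitted polynomial is then applied to an unknown. The intensity ratios and the unknown measurement below are hypothetical placeholders, not measured values from this study.

```python
import numpy as np

# Target Pb concentrations (ppm) of the nine matrix-matched standards (from the text)
known_ppm = np.array([1, 25, 50, 100, 250, 500, 1000, 2500, 5000], dtype=float)

# Hypothetical Pb/Compton(Rh) peak-intensity ratios measured for those standards
ratios = np.array([0.002, 0.05, 0.10, 0.20, 0.49, 0.95, 1.85, 4.30, 8.10])

# Quadratic calibration model: ppm = a*ratio^2 + b*ratio + c
a, b, c = np.polyfit(ratios, known_ppm, deg=2)
print(f"calibration: ppm = {a:.2f}*r^2 + {b:.2f}*r + {c:.2f}")

# Apply the calibration to an unknown bone measurement (hypothetical ratio)
unknown_ratio = 0.75
pb_ppm = a * unknown_ratio**2 + b * unknown_ratio + c
print(f"estimated Pb concentration: {pb_ppm:.1f} ppm")
```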
Inductively coupled plasma-mass spectrometry description
For analysis by ICP-MS, the exterior surfaces of each bone sample were first prepared using a silicon carbide abrading tool and then powdered using an agate mortar and pestle. Following initial sample preparation, 50 mg of the powdered samples were weighed into trace-metal-free 15 mL polypropylene centrifuge tubes. Samples were digested using 4 mL concentrated HNO3 and 1 mL concentrated H2O2. Tubes containing the bone and acid were sonicated in a water bath at 70°C until the bone was fully digested. Quality control samples of NIST SRM 1486 (bone meal) were similarly prepared. A 100 mg aliquot of digestate from each sample was weighed into clean 15 mL centrifuge tubes. The digestate was then topped off to 10 g with 2% HNO3 (Little et al., 2004). Standard solutions and blanks were similarly prepared.
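A sketch of the gravimetric dilution bookkeeping implied by this preparation is given below; the digest mass and the measured solution concentration are hypothetical placeholders inserted only to show how a solution reading is traced back to a concentration in the bone.

```python
# Back-calculation of Pb in bone from the measured solution concentration,
# following the gravimetric preparation described above.

bone_mass_g       = 0.050   # 50 mg bone powder digested
digest_mass_g     = 6.5     # total mass of the digest (hypothetical; would be weighed)
aliquot_mass_g    = 0.100   # 100 mg aliquot of digestate
final_mass_g      = 10.0    # aliquot topped off to 10 g with 2% HNO3

measured_ug_per_g = 0.05    # Pb in the diluted solution (hypothetical ICP-TOF-MS reading)

# Mass of Pb in the final solution, traced back through the dilution steps
pb_in_final_ug  = measured_ug_per_g * final_mass_g
pb_in_digest_ug = pb_in_final_ug * (digest_mass_g / aliquot_mass_g)
pb_in_bone_ppm  = pb_in_digest_ug / bone_mass_g   # ug Pb per g bone = ppm

print(f"Pb in bone: {pb_in_bone_ppm:.0f} ppm")
```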
The instrumentation used in this analysis was a GBC Optimass 9500 (GBC Scientific Equipment, Braeside, Australia) inductively coupled plasma-time-of-flight-mass spectrometer (ICP-TOF-MS). The digested samples, quality controls, blanks and standards were introduced into the instrument via a peristaltic pump, where an argon gas plasma capable of sustaining temperatures between 8000 and 10,000 K is used to ionise the injected sample. The resulting ions then pass through a three-stage interface (1 sample and 2 skimmer cones) designed to enable the transition of the ions from atmospheric pressure to the vacuum chamber of the ICP-TOF-MS system. A voltage is then applied to push the ions past a Smartgate RT and through a reflectron before reaching the detector.
Results and Discussion
Data generated by portable XRF for the unburred bone surfaces demonstrated that Pb values were on average 2-3 times higher than for analyses of the burred bone (Figure 1, Table 1), suggesting that burring the exterior of the bone removes a large amount of diagenetic Pb from the surface of the bone (Figure 2). However, in six samples (266-267 and 287-290), the Pb content of the unburred bone was equal to or higher than that of the burred samples, suggesting that considerable variability exists in Pb content across the surface of the bone. Examination of these samples showed the bones to be highly mineralised and extremely difficult to abrade. It is likely that sample preparation methods for mineralised bone need to be adjusted to remove outside surfaces for a more accurate representation of lifelong Pb accumulation. Additionally, it is important that researchers mechanically remove the outer layers of bone before attempting to compare XRF data with data obtained by ICP-MS.
Overall, Pb measured by XRF was notably different from Pb measured by ICP-TOF-MS in the archaeological bone samples. Data generated by XRF of abraded surfaces were consistent with ICP-TOF-MS data for some samples (Table 1). In general, however, XRF data were inconsistent with ICP-MS data at both high and low Pb concentrations, likely a result of the heterogeneity inherent in the bone matrix and differences between what is essentially a surface analysis (XRF) and a bulk analysis (ICP-MS). Nevertheless, we were able to demonstrate the presence/absence of Pb in both burred and unburred bone in low, moderate, and high amounts.
Conclusions
We were unable to use portable XRF to generate data for burred samples that were consistently comparable to ICP-TOF-MS data. Much of this disparity results from several factors, including variation in Pb content across the bone (Grupe, 1988) and differences between the two analytical techniques (XRF is a surface analysis, solution ICP-TOF-MS is a bulk analysis). Although portable XRF is ideally suited for rapid and completely non-destructive analyses, it is clear that surface contamination on the bone is a fundamental issue, one that cannot be overcome without proper sample preparation. However, when used as a preliminary qualitative tool, portable XRF can be useful in the selection of human bone samples for future heavy metal quantification in a laboratory setting.
Figure 1. Graph showing the comparison of ppm lead measured by X-ray fluorescence and by inductively coupled plasma-time-of-flight-mass spectrometry for both burred and unburred bone. Sample 256 is not shown due to the disproportionately high lead content in this sample.
Figure 2. Spectra showing the difference in X-ray fluorescence intensity of lead peaks for burred and unburred bone for sample 256. Lead concentrations for the unburred and burred samples are 414 and 20 ppm, respectively.
|
2018-12-12T06:55:47.034Z
|
2014-05-28T00:00:00.000
|
{
"year": 2014,
"sha1": "d7299b31d9c4db01b3cbbbee37094ccc43a078a0",
"oa_license": "CCBYNC",
"oa_url": "https://www.pagepress.org/journals/index.php/arc/article/download/arc.2014.5257/4186",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "d7299b31d9c4db01b3cbbbee37094ccc43a078a0",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
119218825
|
pes2o/s2orc
|
v3-fos-license
|
Conductance properties of rough quantum wires with colored surface disorder
Effects of correlated disorder on wave localization have attracted considerable interest. Motivated by the importance of studies of quantum transport in rough nanowires, here we examine how colored surface roughness impacts the conductance of two-dimensional quantum waveguides, using direct scattering calculations based on the reaction matrix approach. The computational results are analyzed in connection with a theoretical relation between the localization length and the structure factor of correlated disorder. We also examine and discuss several cases that have not been treated theoretically or are beyond the validity regime of available theories. Results indicate that conductance properties of quantum wires are controllable via colored surface disorder.
I. INTRODUCTION
Ever since Anderson's model of electron transport in disordered crystals [1], wave localization in disordered media has attracted great interest due to its universality. For example, two recent experiments directly observed matter-wave localization in disordered optical potentials using Bose-Einstein condensates [2,3]. One of the best-known results from Anderson's model is that in one-dimensional (1D) disordered systems the electron wavefunction is always exponentially localized, and hence does not contribute to conductance, for any given strength of disorder. Note, however, that this seminal result is based on the strong assumption that the disorder is of the white-noise type. If the disorder is colored due to long-range correlations, then a mobility edge may occur in one-dimensional systems as well [4].
Quantum transport in nanowires is of great interest due to their potential applications in nanotechnology. In addition to the possibility of ballistic electron transport, quantum nanowires are found to show many other important properties. In particular, silicon nanowires can have better electronic response time [5] as well as desirable thermoelectric properties [6]. It is hence important to ask how the nature of surface disorder of quantum wires, modeled by quantum waveguides in this study, affects their conductance properties.
Remarkably, if the surface scattering contribution is weak, then it is possible to map the conduction problem of a long two-dimensional (2D) rough waveguide to that of a 1D Anderson model of localization, with the disorder potential determined by the surface roughness [7,8]. Initially this mapping was established for one-mode scattering but later it was generalized for any number of modes in the transverse direction [9]. As such, a quantum wire with white-noise surface disorder will have zero conductance if the localization length is much smaller than the wire length. However, in reality the surface disorder of a rough quantum wire always contains correlations. As a result it becomes interesting and necessary to understand the conductance properties in rough quantum wires with their surface disorder modeled by colored noise. This has motivated several pioneering theoretical studies [4,7,8,9,10,11]. Under certain approximations the theoretical studies predicted localization-delocalization transitions of electrons in 2D waveguides with colored surface disorder. Some theoretical details were tested by examining the eigenstates of a closed system with rough boundaries [12]. Moreover, the predicted mobility edge due to colored disorder was recently confirmed in a microwave experiment [13].
Using a reaction matrix formalism for direct scattering calculations, here we computationally study the conductance properties of rough quantum wires with colored surface disorder.
The motivation is threefold. First, though the dependence of the localization length upon the correlation function of surface roughness is now available from theory, how the more measurable quantity, namely, the conductance of the waveguide, depends on colored surface roughness has not been directly examined. This issue can be quite complicated when the localization length becomes comparable to the waveguide length. Second, computationally speaking it is possible to consider any kind of colored surface disorder, thus realizing interesting circumstances that are not readily testable in today's experiments. Indeed, in our computational study we can create rather arbitrary structures in the surface disorder correlation function and then examine the associated conductance properties. Third, direct computational studies allow us to predict some interesting conductance properties that have not been treated theoretically or go beyond the validity regime of available theories [4,7,8,9,14]. For example, we shall study the conductance properties for very strong surface roughness, for rough bended waveguides, and for scattering energies that are close to a shifted threshold value for transmission. The long-term goal of our computational efforts would be to explore the usefulness of colored surface disorder in controlling the conductance properties.
This paper is organized as follows. In Sec. II we describe the scattering model of a quantum wire with colored surface disorder. Therein we shall also briefly introduce the methodology we adopt for the scattering calculations. In Sec. III we present detailed conductance results in a variety of one-mode scattering cases and discuss these results in connection with theory. Concluding remarks are made in Sec. IV.

We treat quantum wires as a long 2D waveguide, as illustrated in Fig. 1(b). The scattering coordinate is denoted x and the transverse coordinate is denoted y. The width of the waveguide is denoted w and the length is denoted L. In all the calculations L = 100w and w is set to unity; that is, we scale all lengths by the waveguide width. The upper and lower boundaries of the waveguide are described by y = P(x) and y = Q(x). The case in Fig. 1(b) represents a situation where the upper boundary is a straight line (P(x) = 1) and the lower boundary is rough. As in our other studies of rough waveguides [15,16], we form a rough waveguide boundary in three steps. First, we divide a rectangular waveguide into M pieces of equal length L/M. Second, the end of each piece is shifted in y randomly, with the random y-displacement, denoted η, satisfying a Gaussian distribution. Third, we use spline interpolation to combine those sharp edges to generate a smooth curve η(x) for either the upper or the lower waveguide boundary. For the sake of clarity, Fig. 1(a) depicts this procedure with the number of random shifts being as small as M = 4. In all our calculations below we set M = 100. In Fig. 2(a) we show one realization of the surface roughness function η(x).
The function η(x) may be characterized by its ensemble-averaged mean and by its self-correlation function C_η(x − x′) (normalized so that C_η(0) = 1), where σ is the variance of η(x). In the limit of white-noise roughness, C_η(x − x′) collapses to a delta-like peak at x = x′. One tends to characterize the strength of the surface roughness by the variance σ defined above. However, in practice it is better to use the maximal absolute value of η(x), denoted |η_max|, to characterize the roughness strength. This is because for strong roughness with a given variance, there is a possibility that some of the random displacements become too large, such that the waveguide may be completely blocked. Recognizing this issue, we first set a value of |η_max| and then, after having generated a roughness function η(x) based on spline interpolation, rescale η(x) such that its maximal absolute value is given by |η_max|.
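A minimal sketch of this three-step construction (Gaussian shifts at M points, spline interpolation, rescaling to a prescribed |η_max|) might look as follows; the parameter values mirror those quoted above, but the implementation details are my own and are not taken from Refs. [15,16].

```python
import numpy as np
from scipy.interpolate import CubicSpline

def rough_boundary(L=100.0, M=100, eta_max=0.2, n_grid=2000, seed=0):
    """Generate a smooth surface roughness function eta(x) on [0, L]:
    Gaussian random y-shifts at M+1 equally spaced points, joined by a cubic
    spline and rescaled so that max|eta| = eta_max (all lengths in units of w)."""
    rng = np.random.default_rng(seed)
    x_knots = np.linspace(0.0, L, M + 1)
    eta_knots = rng.normal(0.0, 1.0, size=M + 1)   # random displacements
    spline = CubicSpline(x_knots, eta_knots)
    x = np.linspace(0.0, L, n_grid)
    eta = spline(x)
    eta *= eta_max / np.max(np.abs(eta))           # rescale to the prescribed strength
    return x, eta

x, eta = rough_boundary()
print(f"max|eta| = {np.max(np.abs(eta)):.3f} w,  sigma = {np.std(eta):.4f} w")
```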
The roughness function η(x) obtained above does not have any peculiar features. There are a number of ways to introduce some structure into the correlation function C_η(x − x′). In Ref. [17] a filtering-function method was proposed to produce a power-law decay of the correlation function. Here we adopt the approach used in Ref. [18], which is based on the convolution theorem of Fourier transformations. In particular, the discrete autocorrelation function of η(x) is defined over the lags m = −N + 1, · · · , −1, 0, 1, · · · , N − 1, where N is the total number of grid points along x, with a normalization constant c chosen such that C_η(0) = 1 [19]. In Fig. 2(b) we show the autocorrelation function for the surface roughness function depicted in Fig. 2(a). The autocorrelation drops from its peak value to near zero on a scale of R_c ∼ 0.7w, which is much smaller than the waveguide length.
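The normalized discrete autocorrelation can be estimated directly from a sampled profile; the following sketch uses a smoothed-noise stand-in for η(x), and the estimator form is my reading of the definition referred to above.

```python
import numpy as np

def autocorrelation(eta):
    """Discrete autocorrelation C(m) of a (mean-subtracted) roughness profile,
    normalized so that C(0) = 1, for non-negative lags m = 0, 1, ..., N-1."""
    eta = np.asarray(eta, dtype=float)
    eta = eta - eta.mean()
    N = eta.size
    c = np.array([np.sum(eta[: N - m] * eta[m:]) for m in range(N)])
    return c / c[0]

# Stand-in roughness profile: smoothed white noise on a grid of length L = 100 w
rng = np.random.default_rng(1)
n, L = 2000, 100.0
dx = L / n
eta = np.convolve(rng.normal(size=n), np.ones(15) / 15, mode="same")

C = autocorrelation(eta)
below = np.flatnonzero(C < np.exp(-1.0))
R_c = dx * below[0] if below.size else np.inf   # crude correlation-radius estimate
print(f"correlation radius R_c ~ {R_c:.2f} w")
```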
As will be made clear in what follows, it is important to consider the Fourier transform of C_η(x), i.e., the autocorrelation function in Fourier space. This important quantity is denoted χ_η(k), where k is the wavevector conjugate to x. Using the fast Fourier transform of C_η(x), χ_η(k) can be evaluated numerically, with the value of k fixed by the index m of the discrete transform; χ_η(k) is a real function due to the evenness of C_η(x). The real function χ_η(k) is called below the structure factor of the surface roughness. Figure 2(c) shows the structure factor χ_η(k) obtained from the correlation function shown in Fig. 2(b).

Additional correlations in the surface disorder can now be generated by modulating the structure factor χ_η(k). Because the structure factor χ_η(k) for a single realization is equivalent to the square of the Fourier transform of η(x), we may imprint interesting structures onto χ_η(k) by convoluting η(x) with some filtering function. Consider the function ρ′(x) = sin(ax)/ax with a > 0. Its Fourier transform is a step function of |k| [9], with a height π/a and the step edge located at |k| = a. Consider then a combination of n such functions, ρ(x), where A_n, a^r_n, and a^l_n are predefined parameters; the Fourier transform of ρ(x) is then π/A_n if a^r_n > |k| > a^l_n or a^r_n < |k| < a^l_n, and zero otherwise. If we now define the new roughness function η̃(x) as the convolution of η(x) with ρ(x) [20], then according to the convolution theorem the structure factor χ_η̃(k) of η̃(x) is proportional to the product of χ_η(k) and χ_ρ(k). As such, the structure of χ_ρ(k) is directly imprinted on χ_η̃(k). That is, computationally speaking, arbitrary modulation can be imposed on the structure factor by filtering out the unwanted components and magnifying other desired structure components. Below we apply this simple technique to create different kinds of surface roughness correlation windows and then examine the conductance properties. In Fig. 2(d) we show one example of ρ(x). Its Fourier transform amplitude, as shown in Fig. 2(e), displays two windows. As shown in Fig. 2(f), this double-window structure is passed to χ_η̃(k) due to Eq. (6). Finally, in Fig. 2(g) we show the surface roughness function η̃(x), which obviously contains more correlations than the old surface roughness function η(x) shown in Fig. 2(a).
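The structure factor and the window-filtering trick can be prototyped as below. The explicit sinc-difference form of ρ(x) is my reconstruction of a filtering function consistent with the stated Fourier transform (a flat window between a_l and a_r), and the smoothed-noise profile is only a stand-in for η(x).

```python
import numpy as np

def structure_factor(eta, dx):
    """Structure factor chi(k) ~ |FFT(eta)|^2 (equivalently the FFT of the
    autocorrelation), returned together with the wavevector grid."""
    eta = eta - eta.mean()
    chi = np.abs(np.fft.rfft(eta))**2
    k = 2.0 * np.pi * np.fft.rfftfreq(eta.size, d=dx)
    return k, chi / chi.max()

def window_filter(x, a_l, a_r):
    """rho(x) built from sin(a x)/x terms; its Fourier transform is (up to a
    constant) flat for a_l < |k| < a_r and zero elsewhere. This explicit form
    is my reconstruction of the filtering function."""
    xs = np.where(np.abs(x) < 1e-12, 1e-12, x)   # avoid division by zero
    return (np.sin(a_r * xs) - np.sin(a_l * xs)) / xs

# Stand-in roughness profile (smoothed white noise) on a grid of length L = 100 w
L, n = 100.0, 4096
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
rng = np.random.default_rng(2)
eta = np.convolve(rng.normal(size=n), np.ones(25) / 25, mode="same")

# Imprint a correlation window on eta by convolving with rho(x)
rho = window_filter(x, a_l=0.6 * np.pi, a_r=0.8 * np.pi)
eta_tilde = np.convolve(eta, rho, mode="same")

k, chi = structure_factor(eta_tilde, dx)
peak_k = k[np.argmax(chi)]
print(f"structure factor of eta_tilde peaks near k ~ {peak_k / np.pi:.2f} pi/w")
```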
B. Reaction matrix and scattering matrix
Here we briefly describe how we calculate the electron conductance of a rough 2D waveguide as described above. The Hamiltonian for the quantum transport problem is H = −(ħ^2/2m*)(∂^2/∂x^2 + ∂^2/∂y^2) + V_c(x, y), where m* is the electron effective mass and V_c(x, y) represents a hard-wall confinement potential. That is, V_c(x, y) is zero for Q(x) < y < P(x) and becomes infinite otherwise.
In our early work [15,16] we formulated such a waveguide scattering problem in detail in terms of the so-called reaction matrix method. In the reaction matrix method we first expand the scattering state in the scattering region (region I, gray area in Fig. 1(b)) in terms of a complete set of basis states. The basis states are obtained by transforming the rough waveguide into a rectangular one, at the expense of a transformed Hamiltonian with extra surface-dependent terms. The solutions in the leads (region II, Fig. 1(b)) are superpositions of incoming and outgoing plane waves in the left and right leads, respectively [Eq. (8)]. Here n is the index for the modes in the transverse direction, and the wavevector k_n is given by k_n = [2m*E/ħ^2 − (nπ/w)^2]^(1/2) [Eq. (9)], where E is the initial electron energy. The scattering coefficients A_n, B_n, C_n and D_n in Eq. (8) are determined by the scattering matrix S, which relates the outgoing states to the incoming states and whose submatrices r and r′ denote the reflection matrices and t and t′ the transmission matrices. In the case of one-mode scattering (n = 1) considered below, k_1 will simply be denoted k, with 0 < kw/π < √3. The S matrix is obtained from the so-called R-matrix of the reaction matrix method, where m is the maximal number of propagating modes, I_2m is a 2m×2m unit matrix, and K is a 2m×2m diagonal matrix with diagonal elements determined by the wavevector associated with each scattering channel [15,16]. Once the S matrix is obtained from the R matrix, the conductance is calculated as G = G_0 Trace(tt′), where G_0 = e^2/(2h) is the conductance quantum. Note that in our calculations we include about 10 evanescent modes, though we focus on the energy regime where only one mode in the y-direction admits propagation along x.
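Given the transmission block t of the S matrix, the conductance follows from a one-line trace. The sketch below uses the standard Landauer combination Tr(t t†), which is how I read the trace formula quoted above; the 2×2 matrix is an arbitrary placeholder, not output of the reaction-matrix code.

```python
import numpy as np

def conductance(t, G0=1.0):
    """Landauer-type conductance G = G0 * Tr(t t^dagger) from the transmission
    block t of the scattering matrix (in units of the conductance quantum G0)."""
    t = np.asarray(t, dtype=complex)
    return G0 * np.real(np.trace(t @ t.conj().T))

# Placeholder transmission matrix for a two-mode example (not from the paper)
t = np.array([[0.70 + 0.10j, 0.05 - 0.02j],
              [0.03 + 0.01j, 0.40 + 0.30j]])
print(f"G / G0 = {conductance(t):.3f}")
```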
As to the number of basis states we use in describing the transformed rectangular waveguide, we use 1000 basis states for the x degree of freedom and 4 basis states for the y degree of freedom. Such a large number of basis states is needed for a good description of the scattering wave function inside the waveguide, and this number should not be confused with the number of propagating or evanescent modes. Good convergence is obtained in our calculations. Note also that, due to the large number of basis states used in the scattering direction, the Fourier transform techniques developed in Ref. [15] are especially helpful.
III. EFFECTS OF COLORED SURFACE DISORDER ON CONDUCTANCE
With the mapping between the scattering problem in a 2D waveguide and the 1D Anderson model [7,8], early theoretical work [7,9] established that the localization length L_loc of the 2D waveguide problem is controlled by the structure factor of the surface roughness evaluated at twice the scattering wavevector [Eq. (12)], where χ(2k) is either the structure factor χ_η(2k) or the new structure factor χ_η̃(2k) obtained after the convolution procedure. If L_loc > L, a transmitting state is expected, and if L_loc << L, then the electron can only make an exponentially small contribution to the conductance.
As such, one expects transmitting states when the structure factor χ(2k) is essentially zero; and negligible conductance if χ(2k) is significant and if σ is not too small. This suggests that the conductance properties can be manipulated by realizing different surface roughness functions.
Equation (12) is obtained under a weak electron scattering approximation (the Born approximation). As such, the theoretical result of Eq. (12) may not be valid if σ is not small compared with w, or if the scattering electron is close to the threshold value of channel opening. Another assumption in the theory is that L_loc should be much greater than R_c, the radius of the surface correlation function C_η(x).
A. Straight Rough Waveguides
However, in our computational studies we will examine some interesting cases that are evidently beyond the validity regime of the theory. For example, the strength of the surface disorder may not be small and the scattering energy may be placed in the vicinity of a shifted channel opening energy.
In Fig. 3(a) we show conductance results averaged over three realizations of a rough waveguide, with a flat upper boundary P(x) = 1 and a rough lower boundary Q(x) = η(x). The strength of the surface disorder is characterized by |η_max| = 0.2w. Due to our procedure of generating surfaces with a fixed |η_max| = 0.2w, the variance σ of the surface function varies slightly from realization to realization; for the three realizations used in Fig. 3(a), σ = 0.0779w, 0.0802w and 0.0773w. As is clear from Fig. 3(a), there exists a threshold k ∼ 0.6π/w beyond which the system becomes transmitting (this threshold will be explained below). In the transmission regime the conductance shows a systematic trend of increase as the wavevector k increases. The inset of Fig. 3(a) shows χ_η(2k), one key term in Eq. (12). The characteristic magnitude of χ_η(2k) for the shown regime of k is ∼ 0.3. Using Eq. (12), one obtains that the localization length L_loc is comparable to L = 100w. This prediction is hence consistent with our computational results, which demonstrate considerable transmission.
Next we exploit the convolution technique described above to form new rough surfaces described by η̃(x). In particular, the inset of Fig. 3(b) shows two sample cases with distinctively different surface structure factors. In one case (dotted line) χ_η̃(2k) has significant values in the interval 0.67 < kw/π < 0.8. Indeed, in that regime the value of χ_η̃(2k) is many times larger than the mean value of χ_η(2k) in the case of Fig. 3(a). In the other case (solid line) χ_η̃(2k) is large only in the regime 0.75 < kw/π < 0.9. For these regimes, the theory predicts the localization length to be much smaller than the waveguide length and hence vanishing conductance. This is indeed what we observe in our computational study. As shown in Fig. 3(b), both the dotted and the solid conductance curves display a sharp dip in a regime that matches the main profile of χ_η̃(2k).
In addition, similar to what is observed in Fig. 3(a), Fig. 3(b) also displays a transmission threshold. Take the dotted line in Fig. 3(b) as an example. For kw/π < 0.55, there is no transmission at all, even though χ_η̃(2k) in that regime is essentially zero. This suggests that the threshold behavior is unrelated to surface roughness details. Rather, it can be considered a non-perturbative result that is not captured by Eq. (12). To qualitatively explain the observed threshold, we note that due to the relatively strong surface roughness, the effective width of the waveguide decreases and, as a result, the effective mode opening energy increases [15]. For |η_max| = 0.2w, we estimate that the effective width of the waveguide is given by w_eff = w − |η_max| = 0.8w. Hence, the corrected mode opening energy E is now given by (ħ^2/2m*)(π/0.8w)^2. Using Eq. (9), this estimate gives that, regardless of the surface roughness details, the threshold k value for transmission is ∼ 0.75π/w, which is close to what is observed in Fig. 3. Such an explanation is further confirmed below. This also demonstrates that the maximal value of |η(x)| is an important quantity for characterizing the surface roughness strength. Of course, the exact dependence of the effective waveguide width upon η_max is beyond the scope of this work [21].
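This threshold estimate can be reproduced with the free dispersion of Eq. (9): raising the mode-opening energy to that of a guide of width w_eff = w − |η_max| shifts the one-mode threshold to k_th = (π/w)·sqrt((w/w_eff)^2 − 1), which gives 0.75π/w for |η_max| = 0.2w. A quick numerical check (my own arithmetic, consistent with the value quoted above):

```python
import numpy as np

def threshold_k(eta_max, w=1.0):
    """One-mode transmission threshold in a rough waveguide, estimated from the
    effective width w_eff = w - |eta_max|: k_th = (pi/w) * sqrt((w/w_eff)^2 - 1)."""
    w_eff = w - eta_max
    return (np.pi / w) * np.sqrt((w / w_eff) ** 2 - 1.0)

for eta_max in (0.01, 0.1, 0.2, 0.3):
    print(f"|eta_max| = {eta_max:.2f} w  ->  k_th = {threshold_k(eta_max) / np.pi:.3f} pi/w")
```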
The results in Fig. 3 show that even when the surface roughness is strong enough to significantly shift the threshold energy for transmission, the surface structure factor may still be well imprinted on the conductance curve. Moreover, the resultant windows of the conductance curves in Fig. 3 are seen to match the location of the structure factor peak.
Nevertheless, one wonders how such an agreement might change if we tune the strength of the surface roughness.
To that end we examine in Fig. 4 four scattering cases with increasing roughness strength: |η_max| = 0.01w and σ = 0.0046w in Fig. 4(a) (representing a case with quite weak surface roughness), |η_max| = 0.1w and σ = 0.0400w in Fig. 4(b), |η_max| = 0.2w and σ = 0.0779w in Fig. 4(c), and |η_max| = 0.3w and σ = 0.1255w in Fig. 4(d) (representing a case with very strong surface roughness). The main profile of the structure factor is also shifted closer to the threshold regime observed in Fig. 3. For the case with |η_max| = 0.3w, the theory based on the weak-roughness assumption is not expected to hold. Indeed, in Fig. 4(d) the transmission threshold is shifted further toward the high-energy regime as compared with those seen in Fig. 3 or in the other panels of Fig. 4. Nevertheless, we still observe a clear window of almost zero conductance, but now with its location also significantly shifted as compared with the profile of χ_η̃(2k). The case of |η_max| = 0.2w in Fig. 4(c) is somewhat similar to the dotted line in Fig. 3(b), consistent with the fact that they have the same roughness strength. However, because here the location of the peak of χ_η̃(2k) is close to the threshold k value, the zero-conductance window is also near this threshold: the conductance curve rises when k exceeds the threshold and then quickly drops to zero again. For the case of |η_max| = 0.1w, the zero-conductance window shown in Fig. 4(b) is narrower than those in Fig. 4(c) and Fig. 4(d), consistent with our intuition. Somewhat surprising is the case shown in Fig. 4(a), where the roughness strength is weak and the energy threshold for transmission is almost unaffected. But still, a narrow window of very small conductance is clearly seen in Fig. 4(a). This result is unexpected, because if one applies Eq. (12) directly, one would predict that no such conductance window should occur for |η_max| = 0.01w. Further, the conductance window in Fig. 4(a) is shifted considerably to the left of the profile of χ_η̃(2k) (inset of Fig. 4(a)). Similar results are obtained in other realizations of the surface roughness function η(x) that have a similar profile of the structure factor [22]. Such a remarkable deviation from the theory, we believe, is due to a breakdown of the Born approximation in deriving Eq. (12). Indeed, the conductance window for |η_max| = 0.01w is located in a regime of very low scattering energy, and is hence not describable by a theory based on the Born approximation. Certainly, it should be of considerable interest to experimentally study the conductance windows in these cases of weak surface roughness.
To further confirm that the conductance windows observed here are due to the colored surface disorder, we note that if we consider a surface function as that shown in the inset of Fig. 3(a), then all the conductance windows shown here indeed disappear and the results will become similar to what is shown in Fig. 3(a).
B. Bended Rough Waveguides
In Fig. 5 we examine the conductance properties of a bended rough waveguide (Fig. 5(b) and Fig. 5(c)) as compared with those of a straight rough waveguide (Fig. 5(a)). In all three cases shown in Fig. 5, the upper boundary is given by P(x) = 1, and the lower boundary is a parabolic curve plus random fluctuations, i.e., Q(x) = 4a(x − L/2)^2/L^2 + η̃(x), with a = 0 in Fig. 5(a), a = 0.5 in Fig. 5(b), and a = 1.0 in Fig. 5(c). As to the structure factor of η̃(x), it is assumed to be of a double-window form, as shown in the inset of Fig. 5(a), with |η_max| the same as in Fig. 3. In the case of a straight rough waveguide, this double-window structure factor creates an analogous double-window structure in the conductance curve (Fig. 5(a)), with its location matching the profile of the structure factor. Interestingly, as we introduce a curvature in the lower boundary in Fig. 5(b), the double-window structure survives but shifts considerably toward higher k values. In Fig. 5(c), the curvature of the rough waveguide further increases, the transmission threshold value of k also increases (as expected), and the fingerprints of the double-window structure factor can still be seen in the conductance curve. We have also checked that if we create three windows in the structure factor, then three windows in the conductance curves can be induced as well, with their locations controllable by tuning the curvature of the bended waveguide.
Finally we consider waveguides with both upper and lower boundaries being rough. Interestingly, in this case a more sophisticated theory [14] shows that the scattering can be regarded as the scattering in a smooth waveguide plus an additional effective potential. The theoretical electron mean free path, calculated using a Green function averaged over different surfaces, is shown to be contributed by different terms, due to different mechanisms called amplitude scattering, gradient scattering, and square-gradient scattering [14]. The importance of these terms depends on whether the upper and lower boundaries are symmetric, uncorrelated, or anti-symmetric. Motivated by this interesting prediction, here we show in Fig. 6 three computational results, for symmetric (Fig. 6(a)), uncorrelated (Fig. 6(b)), and anti-symmetric boundaries (Fig. 6(c)), all with the same roughness strength as in Fig. 3.
For the symmetric case, the effective waveguide width is not affected by the roughness.
By contrast, for the anti-symmetric case of Fig. 6(c), the effective waveguide width is the most affected. These two simple observations explain why the threshold k value for transmission is the smallest in Fig. 6(a) and the largest in Fig. 6(c). Even more noteworthy is how the structure factor of the surface roughness generates a conductance window in these three cases. In particular, the window of the conductance drop in the symmetric case (Fig. 6(a)) is narrower than that seen in Fig. 6(b) and Fig. 6(c). Moreover, the conductance window in the anti-symmetric case is the widest one, and is much shifted toward high k values as compared with the structure factor. This large mismatch between the conductance window and the peak location of χ_η̃(2k) hence clearly reflects an effect of the correlation between the two rough boundaries. Though our results cannot be easily explained by the theoretical result of Eq. (12), they are consistent with the theoretical prediction in Ref. [14] that, among the three cases of symmetric, uncorrelated and anti-symmetric waveguides, the electron mean free path in anti-symmetric waveguides should be the shortest.
IV. DISCUSSION AND CONCLUSION
In this computational study we have focused on how the structure of surface roughness impacts the conductance properties of electrons propagating in a quantum wire modeled by a 2D waveguide. Our conductance results are computed directly with a reaction matrix approach. An early theoretical result is hence confirmed by the detailed behavior of the conductance, a quantity that should be measurable in experiments. In addition, our results for symmetric, uncorrelated, and anti-symmetric rough waveguides are consistent with a very recent theory [14].
Unlike in the bulk case, for quantum wires of limited length the sensitive dependence of the localization length upon the structure factor of surface roughness can be easily manifested in conductance properties. Our direct scattering calculations show that this is true, even for those interesting cases that are beyond the domain of today's theory or have not been treated theoretically. We conclude that conductance properties are easily controllable by engineering the surface roughness of quantum wires.
Though we have focused on the transport behavior of electrons, we believe that our methodology might also be useful for studies of other types of wave propagation in disordered systems. In particular, there is now a keen interest in understanding phonon transport in rough quantum wires. Recent computational work [23] and experimental work [24] showed the importance of surface disorder in the heat transport of thin silicon nanowires with a radius of w = 22 nm. It was also demonstrated experimentally that surface roughness can be used to dramatically suppress heat conductivity [6] and hence enhance the thermoelectric efficiency of thin silicon nanowires with a radius of about w = 50 nm. Our computational tools, together with guidance from the theory [4,7,8,9,14], might help answer some important questions regarding phonon transport in rough nanowires. Indeed, we conjecture that it should be possible to design colored surfaces that create conductance windows for phonons but not for electrons. If this is indeed realized, then the electron conductance is not much affected while the phonon conductance is greatly reduced. This would be of great importance for thermoelectric applications.
Finally, we note that spin accumulation in quantum waveguides with rough boundaries was recently studied in Ref. [16]. It should be interesting to see how colored surface disorder might have some useful impact on spin accumulation effects or spin transport.
VR systems in dental education

Sir, we read with great interest the letter from B. Dunphy proposing replacement of conventional teaching aids during the coronavirus pandemic.1 In various countries importance is being given to implementing the use of 3D virtual reality (VR) systems in health sciences. Here, a student utilises a digital system and VR glasses to monitor a patient and perform clinical examination procedures in a realistic virtual setting while being monitored by the teacher from a main cabin.2 VR teaching gives students the advantage of learning through trial and error without physically harming a patient.

Alternatively, in some universities in Latin American and European countries, it is common to pair haptic simulators with VR systems in stomatology. This consists of the use of technological equipment that reliably imitates the sensation of touch that the operator may experience when in contact with real objects, without coming into physical contact with them. In this way, haptic simulators are being applied in the fields of endodontics, restorative dentistry and dental prostheses, among others.

We believe that it is important to implement such haptic simulator systems as an alternative in all dental faculties, to enable students to develop skills in the clinical field while complying with social distancing measures throughout the duration of the COVID-19 pandemic.

C. F. Cayo, L. A. Cervantes, R. Agramonte, Universidad Inca Garcilaso de la Vega, Perú

There are several methods available to an orthodontist to alter the anchorage balance, of which the extraction pattern is one. Anchorage loss results from unwanted tooth movements.3 It is possible that during this period unwanted tooth movement and space loss occur, which may compromise the final orthodontic result or lead to extended treatment times. Without routine dental appointments taking place, general dental practitioners are not able to carry out orthodontic assessments and subsequently refer patients for orthodontic treatment. Timely orthodontic referrals are essential for the management of patients that require interceptive treatment, treatment with functional appliances, and those with impacted teeth or pathology, eg root resorption.4 It is also possible that during this time patients that may have been eligible for treatment on the NHS turn 18 years of age, which means they no longer qualify for treatment. It is essential that we are aware of these possible consequences and consider strategies to manage them when practice resumes.

J. L. Jopson, L. C. Kneafsey, P. Fowler, Bristol, UK

Psychosomatic problems

Sir, the high transmissibility of the coronavirus and other contributing factors may cause psychological problems, including anxiety, depression, and stress. Patients who experience dental problems, especially acute pulpitis, oral haemorrhage, and dental and maxillofacial trauma, during the pandemic may also suffer tremendous psychosomatic problems. Furthermore, isolation at home for a long period of time, suspension of dental services and the high risk of dental treatment due to aerosolised respiratory secretions and close doctor-patient contact may exacerbate existing mental conditions and produce new oral psychosomatic disorders such as temporomandibular disorders (TMD), burning mouth syndrome (BMS), dental anxiety and other oral complaints.

Online psychological counselling services have been widely established in mainland China which provide free cognitive behavioural therapy (CBT) for depression, anxiety and insomnia for dental patients who suffer from psychosomatic problems. CBT has been proven effective for the treatment of psychiatric disorders, and has begun to be applied for psychosomatic problems in dental patients. The prevalence of TMD in a community sample was almost 17.5% and the incidence was even higher during the worldwide epidemic. Studies reported that CBT was more effective than no treatment.1 Although CBTs were mainly conducted by psychologists, those conducted by trained dental hygienists were also found to be effective in reducing TMD pain and pain-related interference.

BMS is characterised by a burning sensation of the oral mucosa, with a prevalence of 3.7-.9%, and is frequently associated with stressful life events, anxiety, and depressive disorders.2 Various methods including psychological and pharmacological approaches have been applied for BMS, with either long sessions of CBT or short durations of treatment improving the pain severity and discomfort of patients. Approximately 10-12% of the adult population suffer from dental anxiety.3 A significant reduction in subjective anxiety was achieved by patients with CBT when compared to those who received no treatment or anaesthesia/sedation. This study suggests more attention needs to be paid to patients with psychosomatic problems caused by acute dental pain and other urgent conditions; accessibility to online consulting service systems should be further strengthened and improved, particularly for confirmed cases who are in self-quarantine.

Redeployment DFT survey

Sir, we conducted a voluntary survey amongst DFTs to discover the factors that would influence their transition into redeployment, their perceived needs, and their current skillset. We received over 72
Lifts for Voronoi Cells of Lattices
Many polytopes arising in polyhedral combinatorics are linear projections of higher-dimensional polytopes with significantly fewer facets. Such lifts may yield compressed representations of polytopes, which are typically used to construct small-size linear programs. Motivated by algorithmic implications for the closest vector problem, we study lifts of Voronoi cells of lattices. We construct an explicit d-dimensional lattice such that every lift of the respective Voronoi cell has 2^{Ω(d/log d)} facets. On the positive side, we show that Voronoi cells of d-dimensional root lattices and their dual lattices have lifts with O(d) and O(d log d) facets, respectively. We obtain similar results for spectrahedral lifts.
In this work, we study to which extent this phenomenon also applies to Voronoi cells of lattices. Here, a lattice is the image of Z^d under a linear map. We say that a lattice is k-dimensional if k is the dimension of its linear hull. The Voronoi cell VC(Λ) of a lattice Λ ⊆ R^d is the set of all points in lin(Λ) for which the origin is among the closest lattice points, i.e.,

VC(Λ) := {x ∈ lin(Λ) : ‖x‖ ≤ ‖x − z‖ for all z ∈ Λ},

where lin(·) denotes the linear hull and ‖·‖ denotes the Euclidean norm. The lattice translates z + VC(Λ), z ∈ Λ, induce a facet-to-facet tiling of lin(Λ), so that in particular Voronoi cells of lattices are what is commonly called space tiles, see Figure 1. Moreover, it is known that VC(Λ) is a centrally symmetric polytope with up to 2(2^d − 1) facets. We refer to [22, Ch. 32] for background on translative tilings of space.
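As a concrete illustration of this definition, the sketch below computes the Voronoi cell of a two-dimensional lattice numerically by intersecting the halfspaces ⟨x, z⟩ ≤ ‖z‖²/2 over nearby lattice vectors z. The enumeration cutoff `search` is a heuristic for small examples and is not part of the paper's arguments.

```python
import numpy as np
from scipy.spatial import HalfspaceIntersection, ConvexHull

def voronoi_cell_vertices(basis, search=3):
    """Vertices of VC(Lambda) for a full-rank 2D lattice with basis matrix `basis`
    (columns are basis vectors). Intersects the halfspaces <x, z> <= |z|^2 / 2
    over lattice vectors z with coefficients in [-search, search]."""
    halfspaces = []
    for a in range(-search, search + 1):
        for b in range(-search, search + 1):
            if a == 0 and b == 0:
                continue
            z = basis @ np.array([a, b], dtype=float)
            # scipy expects rows [n, -c] encoding n.x - c <= 0, i.e. n.x <= c
            halfspaces.append(np.append(z, -0.5 * z @ z))
    hs = HalfspaceIntersection(np.array(halfspaces), interior_point=np.zeros(2))
    verts = hs.intersections
    return verts[ConvexHull(verts).vertices]        # ordered around the cell

# hexagonal lattice: the Voronoi cell should come out as a regular hexagon
B = np.array([[1.0, 0.5],
              [0.0, np.sqrt(3) / 2]])
print(voronoi_cell_vertices(B))
```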
It is tempting to believe that the rich structure of Voronoi cells of lattices allows one to construct polytopes with significantly fewer than 2(2^d − 1) facets that linearly project onto VC(Λ). In fact, this is true for several examples: a lattice whose Voronoi cell has the largest possible number of facets is the dual root lattice A_d* (see Section 3.1 for a definition). However, its Voronoi cell is a permutahedron and admits a lift with only O(d log d) facets [17], see Section 3.1. More generally, if the Voronoi cell of a d-dimensional lattice is a zonotope, then it has O(d^2) generators and hence has a lift with O(d^2) facets. We discuss this result in detail in Section 3.2.

Figure 1: A lattice in R^2 together with its Voronoi cell and the corresponding tiling of the plane via its lattice translates.
The lattice A_d* also belongs to the prominent class of root lattices and their duals. By their algebraic and geometric properties, these lattices are prime examples in various contexts: for example, they play a crucial role in Coxeter's classification of reflection groups (cf. [9, Ch. 4]), and they yield the densest sphere packings and thinnest sphere coverings in small dimensions (see [9] or [39]).
As one part of our work, we show that Voronoi cells of such lattices generally admit small lifts. In what follows, for a polytope P we write xc(P) for the minimum number of facets of any polytope that can be linearly projected onto P. This number is called the extension complexity of P. This raises the question whether Voronoi cells of other lattices also have a small extension complexity, say, polynomial in their dimension. One of the main motivations for representing a polytope P as the projection of another polytope Q is that a linear optimization problem over P can be reduced to one over Q. If Q has a small number of facets, then the latter task can be expressed as a linear program with a small number of inequalities, also known as an extended formulation.
Thus, given a lattice Λ ⊆ R^d whose Voronoi cell has a small extension complexity, we may phrase any linear optimization problem over VC(Λ) as a small-size linear program. Such a representation may have several algorithmic consequences for the closest vector problem. In this problem, one is given Λ and a point x ∈ R^d and is asked to determine a lattice point that is closest to x, i.e., a point in cl(x, Λ). Note that z ∈ cl(x, Λ) if and only if x − z ∈ VC(Λ). Thus, a small extension complexity of VC(Λ) would yield a small-size linear program to test whether a lattice point is the closest lattice vector to x. However, in view of the fact that the closest vector problem is NP-hard [42] and the belief that NP ≠ coNP, we do not expect efficient algorithms that, for general lattices (given in form of a basis), decide whether a point is the closest lattice vector to x. Another sequence of algorithmic implications arises from the algorithm of Micciancio & Voulgaris [33], which also motivated other recent work on compact representations of Voronoi cells, such as [24], see also [25, § 3.7].
We remark that the mere existence of small size extended formulations of Voronoi cells may not be immediately applicable, since finding such representations as well as verifying that they indeed yield the Voronoi cell of a given lattice might be hard. Thus, polynomial bounds on the extension complexities of Voronoi cells of general lattices would not contradict hardness assumptions in complexity theory. In fact, we initially considered the possibility of such bounds.
However, as our main result we explicitly construct lattices whose Voronoi cells have extension complexity close to the trivial upper bound 2(2^d − 1).
Lower bounds on extension complexities have been established for various prominent polytopes in recent years. Of particular note are results for cut polytopes [15, 26, 6], matching polytopes [38], and certain stable set polytopes [19]. Lower bounds for other polytopes Q are typically obtained by showing that a face F of Q affinely projects onto one of the polytopes P from above and using the simple fact xc(P) ≤ xc(F) ≤ xc(Q). Unfortunately, it seems difficult to construct lattices for which this approach can be directly applied to the Voronoi cell. However, we will exploit the lesser known fact that xc(Q) = xc(Q•) holds for every polytope Q with the origin in its interior, where Q• is the dual polytope of Q. In fact, we will describe a way to obtain many 0/1-polytopes as projections of faces of dual polytopes of Voronoi cells of lattices. As an example, for every n-node graph G we can construct a lattice Λ of dimension at most n + 1 such that the stable set polytope of G is a projection of a face of VC(Λ)•. Theorem 2 then follows from a construction of Göös, Jain & Watson [19] of stable set polytopes with high extension complexity.
Another prominent way of representing polytopes is via linear projections of feasible regions of semidefinite programs, i.e., spectrahedra. We will discuss how our approach also yields a version of Theorem 2 for such semidefinite lifts with a slightly weaker but still superpolynomial bound.
Outline. In Section 2, we provide a brief introduction to lifts of polytopes and lattices, focusing on tools and properties that are essential for our arguments in the following sections. In Section 3, we derive upper bounds on the extension complexity of Voronoi cells for some selected classes of lattices, such as root lattices and their duals, zonotopal lattices, and a class of lattices that do not admit a compact representation in the sense of [24]. The proof of Theorem 2 is given in Section 4, and in Section 5, we briefly introduce semidefinite lifts and present a version of Theorem 2 with a superpolynomial bound on the semidefinite extension complexity. We close our paper with a discussion of open problems in Section 6.

Lemma 3. For every face F of a polytope P, we have xc(F) ≤ xc(P).
Proof. If P is the image of a polyhedron Q with k facets under a linear map τ, then F is the image of τ^{-1}(F) ∩ Q, which is a face of Q and hence has at most k facets.
For the next fact we need the notion of a slack matrix of a polytope. To this end, we consider a polytope P = conv{v_1, ..., v_n} = {x : ⟨a_i, x⟩ ≤ b_i for all i ∈ [m]}, where [m] := {1, ..., m} and ⟨·, ·⟩ denotes the standard Euclidean scalar product. Corresponding to these two descriptions of P, we define the slack matrix S = (S_{i,j}) ∈ R^{m×n}_{≥0} via S_{i,j} = b_i − ⟨a_i, v_j⟩. Yannakakis [46] showed that the extension complexity xc(P) of P equals the nonnegative rank of S, which is the smallest number r such that S = FV, where F ∈ R^{m×r}_{≥0} and V ∈ R^{r×n}_{≥0}, and which is denoted by r_+(S). For a polytope P containing the origin 0 in its relative interior, the dual polytope of P is defined as P• := {y ∈ lin(P) : ⟨x, y⟩ ≤ 1 for all x ∈ P}. It is a basic fact that P• is again a polytope with the origin in its relative interior, lin(P•) = lin(P), and (P•)• = P. Moreover, it is easy to see that if S is a slack matrix of P induced by v_1, ..., v_n and w_1, ..., w_m, then its transpose S^T is a slack matrix of P•. Since r_+(S) = r_+(S^T) we obtain the following fact.
Lemma 4. For every polytope P ⊆ R^d that contains the origin in its relative interior, we have xc(P) = xc(P•).
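A small numerical illustration of the slack-matrix viewpoint: for a regular hexagon with the origin in its interior, the code below builds S from the vertex and facet descriptions and shows that it is entrywise nonnegative, with zeros exactly at vertex-facet incidences; a suitably rescaled transpose plays the same role for the dual polygon. The example is purely illustrative and not taken from the paper.

```python
import numpy as np

def slack_matrix(A, b, V):
    """S[i, j] = b[i] - <A[i], V[j]> for P = conv(V) = {x : Ax <= b}."""
    return b[:, None] - A @ V.T

# regular hexagon centered at the origin (so both P and its dual are defined)
angles = np.arange(6) * np.pi / 3
V = np.column_stack([np.cos(angles), np.sin(angles)])         # vertices as rows
A = np.column_stack([np.cos(angles + np.pi / 6), np.sin(angles + np.pi / 6)])
b = np.full(6, np.cos(np.pi / 6))                             # facets <a_i, x> <= b_i

S = slack_matrix(A, b, V)
print(np.round(S, 3))   # nonnegative; zero exactly where vertex j lies on facet i
# The dual polygon has vertices a_i / b_i and facets <v_j, y> <= 1, and its slack
# matrix is (up to positive scaling) the transpose of S, consistent with
# xc(P) = xc(P dual) via r_+(S) = r_+(S^T).
```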
The next statement shows that the extension complexity behaves well under Cartesian products, Minkowski sums and intersections.

Lemma 5. For all polytopes P and Q we have (i) xc(P × Q) ≤ xc(P) + xc(Q), (ii) xc(P + Q) ≤ xc(P) + xc(Q), and (iii) xc(P ∩ Q) ≤ xc(P) + xc(Q).
Proof. (i): If P′ linearly projects onto P and Q′ onto Q, then P′ × Q′ linearly projects onto P × Q. Moreover, the number of facets of P′ × Q′ is equal to the sum of the numbers of facets of P′ and Q′.
(ii): The polytope P × Q linearly projects onto P + Q via (p, q) → p + q for (p, q) ∈ P × Q, and hence the claim follows from (i).
(iii): If P = π(P′) and Q = τ(Q′) hold for some polyhedra P′, Q′ and linear maps π, τ, then P ∩ Q is the image of L := {(p, q) ∈ P′ × Q′ : π(p) = τ(q)} under the map (p, q) → π(p). Moreover, the number of facets of L is at most the number of facets of P′ × Q′, which, again, is equal to the sum of the numbers of facets of P′ and Q′.
The next fact is a very useful result following from a work of Balas [3] deriving a description of the convex hull of the union of certain polytopes. The proof of the version presented here can be found in [44, Prop. 3.1.1].

Lemma 6. Let P_1, ..., P_k be polytopes, then

We mentioned already that some lattices have a permutahedron as their Voronoi cell. These polytopes arise from a single vector by permuting its coordinates in all possible ways and taking their convex hull. Let us denote the set of all bijective maps on [d] by S_d. For a permutation π ∈ S_d and a vector v ∈ R^d, let π(v) be the vector that arises from v via permuting its entries according to π.
Lattices and Voronoi cells
Most basic notions regarding lattices and their Voronoi cells have already been introduced in Section 1. In this section, we provide some further definitions and results that we use to obtain bounds on the extension complexity of Voronoi cells of lattices.
We call two lattices Λ, Γ ⊆ R^d isomorphic if there exists an orthogonal matrix Q ∈ R^{d×d} such that QΛ = Γ. Note that VC(Γ) = Q VC(Λ) and therefore the extension complexities of their Voronoi cells coincide.
In some parts, we will consider the dual lattice of a lattice Λ ⊆ R^d, which is defined as Λ* = {x ∈ lin(Λ) : ⟨x, y⟩ ∈ Z for all y ∈ Λ}.
Note that for every two lattices Λ, Γ, their product Λ × Γ is also a lattice.The following lemma shows that the Cartesian product behaves well with respect to Voronoi cells or duals of lattices.
Lemma 8. For any two lattices Λ ⊆ R^d and Γ ⊆ R^{d′} we have VC(Λ × Γ) = VC(Λ) × VC(Γ) and (Λ × Γ)* = Λ* × Γ*.

The proof is straightforward from the definitions and is left as an exercise. A main ingredient for proving Theorem 2 is to consider the dual polytope VC(Λ)• of VC(Λ). Recall that we have xc(VC(Λ)) = xc(VC(Λ)•) by Lemma 4. The following two observations are crucial for our arguments.

Lemma 9. For every lattice Λ we have

Proof. In view of the identities the claim follows from (1).
with equality if and only if z ∈ cl(p, Λ) \ {0}. Note that the above inequality is equivalent to ⟨p, 2z/‖z‖²⟩ ≤ 1. Thus, due to Lemma 9 we see that F := {y ∈ VC(Λ)• : ⟨p, y⟩ = 1} is a face of VC(Λ)•. This establishes the claim.

Lattices with small extension complexity

In this section, we provide bounds on the extension complexities of Voronoi cells of some prominent lattices.
Root lattices and their duals
We start with Voronoi cells of root lattices and their duals. An irreducible root lattice is a lattice Λ for which there exists a finite set S of vectors of squared length equal to 1 or 2, such that Λ = { Σ_{b∈S} α_b b : α_b ∈ Z for all b ∈ S }. We say that a lattice is a (general) root lattice if it is isomorphic to a lattice obtained by iteratively taking Cartesian products with irreducible root lattices. A well-known theorem related to the classification of reflection groups states that, besides the lattice Z^d of integers, up to isomorphism the irreducible root lattices split into the two infinite classes A_d = {x ∈ Z^{d+1} : x(1) + ... + x(d + 1) = 0} and D_d = {x ∈ Z^d : x(1) + ... + x(d) ∈ 2Z}, and the three exceptional lattices E_8, E_7 = {x ∈ E_8 : ⟨x, e_7 + e_8⟩ = 0} and E_6 = {x ∈ E_8 : ⟨x, e_6 + e_8⟩ = 0}.
Here and in the following, we denote by e_i the i-th standard Euclidean unit vector and by 1 the all-one vector in the corresponding space. Moreover, the dual lattices of the two infinite classes A_d and D_d admit explicit descriptions (with parameters 0 ≤ i ≤ d and j = d + 1 − i in the case of A_d*). In the literature the dual of D_d is usually scaled by a factor of 2 in order to get an integral lattice, which is often more convenient to investigate; this scaling has no effect on the extension complexity of its Voronoi cell. We refer to Conway & Sloane [9, Ch. 4 & Ch. 21] and Martinet [31, Ch. 4] for proofs, original references and background information on root lattices. Details on Voronoi cells and Delaunay polytopes of root lattices can be found in Moody & Patera [34], which together with the two aforementioned monographs are our main sources of information. Given a lattice Λ ⊆ R^d we write |Λ| = min{‖z‖ : z ∈ Λ \ {0}} for the length of a shortest non-trivial vector in Λ. A minimal vector of Λ is any vector z ∈ Λ with ‖z‖ = |Λ|, and a facet vector of Λ is any vector w ∈ Λ such that the constraint ⟨x, w⟩ ≤ ½‖w‖² defines a facet of the Voronoi cell VC(Λ). For convenience, we write S(Λ) and F(Λ) for the set of minimal vectors and facet vectors, respectively. In general, one has the inclusion S(Λ) ⊆ F(Λ), which however is usually strict. Root lattices are now neatly characterized by the property that every facet vector is at the same time a minimal vector, that is, the equality S(Λ) = F(Λ) holds (see Rajan & Shende [36]).
Since the sets of minimal vectors of the irreducible root lattices are well understood, this allows us to describe their Voronoi cells as well. For the sake of the asymptotic study of the extension complexity of their Voronoi cells, it suffices to understand the two infinite families A_d and D_d, and their duals A_d* and D_d*. In the sequel, we provide bounds on the extension complexities of the Voronoi cells of these lattices. To achieve these bounds, we sometimes use a characterization of the facet vectors and in other cases a characterization of the vertices of the Voronoi cell. For the sake of easy reference, we describe the vertices and facet vectors in all cases. Due to Lemma 5 and Lemma 8, these bounds directly imply Theorem 1. Moreover, the bound in Theorem 1 is asymptotically tight since the Voronoi cell of A_d* is a permutahedron, see Lemma 13.
Voronoi cell of A_d
The Voronoi cell of the root lattice A_d is given as follows. Proof. Using the description of the vertices of VC(D_d) stated above, we obtain the following: the dual of the Voronoi cell is the intersection of a hypercube and a crosspolytope.
Voronoi cell of A_d*
The Voronoi cell of the dual of the root lattice A_d is given as follows. The characterization of the vertices can be found in [9, Ch. 21, Sec. 3F], and the fact that the facet vectors are exactly the vertices of VC(A_d) is explained in detail in the unpublished monograph [10, Ch. 3.5].
Lemma 13. xc(VC(A_d*)) = Θ(d log d).
Proof. Using the description of the vertices of VC(A_d*) stated before, we obtain that VC(A_d*) is an affine linear transformation of the standard permutahedron P_d. The claim follows, since Goemans [17] showed that the extension complexity of P_d is in Θ(d log d).
Voronoi cell of D d
As explained before, we consider the integral rescaling of the dual lattice. Note that all the bounds stated in Lemmas 11, 12, and 14 are asymptotically tight, since the extension complexity of a polytope grows at least linearly with its dimension (cf. [13, Eq. 2 & Prop. 5.2]).
Zonotopal lattices
A zonotope Z ⊆ R^d is the Minkowski sum of finitely many line segments, that is, there are vectors a_1, b_1, ..., a_m, b_m ∈ R^d such that Z = [a_1, b_1] + ... + [a_m, b_m]. The non-zero vectors z_i = b_i − a_i are usually called the generators of the zonotope, and clearly, Z is an affine projection of the m-dimensional cube [−1, 1]^m via e_i → z_i, for 1 ≤ i ≤ m, and a suitable translation. Regarding the extension complexity of a zonotope Z, the bound xc(Z) ≤ 2m thus immediately follows from the definition.
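The projection just described is easy to make explicit. The sketch below maps the vertices of the cube [−1, 1]^m through the generator matrix, so the cube (with only 2m facets) serves as a lift of the zonotope; the generators chosen here are arbitrary illustrative values, not data from the paper.

```python
import numpy as np
from itertools import product

def zonotope_points_from_cube(generators, translate=None):
    """View Z = translate + sum_i [-1, 1] * z_i as the linear image of [-1, 1]^m.
    Every vertex of Z is the image of a cube vertex, so the m-cube (2m facets)
    linearly projects onto Z."""
    Zmat = np.asarray(generators, dtype=float)     # shape (m, d): rows are generators
    m, d = Zmat.shape
    t = np.zeros(d) if translate is None else np.asarray(translate, dtype=float)
    cube_vertices = np.array(list(product([-1.0, 1.0], repeat=m)))   # 2^m points
    return cube_vertices @ Zmat + t                # their images cover all vertices of Z

# three generators in the plane -> a hexagonal zonotope, lifted by a 3-cube (6 facets)
gens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(zonotope_points_from_cube(gens))
```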
A lattice Λ ⊆ R^d is said to be zonotopal if its Voronoi cell is a zonotope. Every lattice of dimension at most three is zonotopal, but from dimension four on there exist non-zonotopal lattices. For instance, the Voronoi cell of the root lattice D_4 is the non-zonotopal 24-cell. Examples of classes of zonotopal lattices are Z^d, the root lattice A_d, its dual lattice A_d*, lattices of Voronoi's first kind, and the tensor product A_d ⊗ A_d. Zonotopal space tiles have been extensively studied over the years, mostly due to their combinatorial connections to regular matroids, hyperplane arrangements, and totally unimodular matrices. For a detailed account on zonotopal lattices and pointers to the original works containing the previous statements we refer to [32, Sect. 2].
The tiling constraint on a zonotope that arises as the Voronoi cell of a lattice allows it to have at most quadratically many generators in terms of its dimension. In particular, these polytopes admit lifts with quadratically many facets.

Proof. It suffices to argue that the Voronoi cell is generated by at most d(d+1)/2 line segments. Indeed, each line segment L satisfies xc(L) = 2 and hence the statement follows using Lemma 6.
Erdahl [11, Sect. 5] proved that the generators of a space-tiling zonotope correspond to the normal vectors of a certain dicing. A dicing in R^d is an arrangement of hyperplanes consisting of r ≥ d families of infinitely many equally spaced hyperplanes such that: (1) there are d families whose corresponding normal vectors are linearly independent, and (2) every vertex of the arrangement is contained in a hyperplane of each family.
By [11, Thm. 3.3], every dicing is affinely equivalent to a dicing whose set of hyperplane normal vectors — one normal vector for each of the r families — consists of the columns of a totally unimodular d × r matrix. By construction, this totally unimodular matrix is such that for any two of its columns v, w, we have v ≠ ±w and v, w ≠ 0. A classical result, that is often attributed to Heller [23] but already appears in Korkine & Zolotarev [28], yields that every such totally unimodular d × r matrix has at most r ≤ d(d+1)/2 columns. Thus, the zonotopal Voronoi cell VC(Λ) is generated by at most d(d+1)/2 line segments.
Alternatively, the fact that zonotopal Voronoi cells in R^d are generated by at most d(d+1)/2 line segments also follows from Voronoi's reduction theory. The Delaunay subdivisions of zonotopal lattices correspond to certain polyhedral cones (Voronoi's L-types) in the cone S^d_{≥0} of positive semidefinite d × d matrices that are generated by rank-one matrices. Since S^d_{≥0} has dimension d(d+1)/2, Carathéodory's theorem yields the bound. We refer the reader to Erdahl [11, Sect. 7] for an intuitive description and references to the original works.
Lattices defined by simple congruences
For any a ∈ N, we consider the lattice Λ_d(a). The case a = d/2 played a special role in [24, Thm. 2] for the determination of lattices that do not have a basis that admits a compact representation of the Voronoi cell. To this end, the authors determined the set F(Λ_d(d/2)) of facet vectors explicitly (there are exponentially many of them). However, their proof can be extended to general a to give a description of the facet vectors F(Λ_d(a)) that is precise enough to allow drawing conclusions towards small extended formulations.
Lemma 16. For all a ∈ N, the set of facet vectors of Λ_d(a) is contained in V_{±1} ∪ V_{±a} ∪ ⋃ V_{k,ℓ}, where the last union is over all k ∈ [d − 1] and ℓ ∈ {⌊ak/d⌋, ⌈ak/d⌉}, and the sets V_{±1}, V_{±a} and V_{k,ℓ} are defined as follows. Combining these bounds and applying Lemma 6 and Lemma 4, we obtain the desired bound.
Lower bounds on the extension complexity of Voronoi cells
The aim of this section is to prove Theorem 2. Inspired by Kannan's proof [27, Sec. 6] of the NP-hardness of the closest vector problem, for every 0/1-polytope P we are able to construct a lattice such that a face of its dual Voronoi cell projects onto P. To obtain a lattice of small dimension, P needs to fulfill some extra condition.
Lemma 18. Let H ⊆ R^k be an affine subspace such that all vectors in X := {0, 1}^k ∩ H have the same norm. There is a lattice Λ with dim(Λ) ≤ dim(H) + 1 such that conv(X) is a linear projection of a face of VC(Λ)•.
Proof. Let α ≥ 0 be such that ‖x‖ = α for all x ∈ X. We may assume that H is nonempty and that α > 0; otherwise conv(X) is empty or consists of a single point, in which case the claim is trivial. Let h ∈ H and let L be the linear subspace such that H = L + h. Consider the following lattice, for which we claim that (4) holds. The claim then follows from Lemma 10. First note that U ⊆ Λ. Moreover, we have ‖p − 0‖ = α and for each x ∈ X we have ‖p − (x, −α)‖ = ‖x‖ = α. Thus, in order to establish (4) it remains to show that every lattice point z = (z′, z′′) ∈ Λ \ U satisfies α < ‖p − z‖. Equivalently, we have to show that every such point satisfies (5), with equality only if z′ ∈ {0, 1}^k. However, in the latter case we would have z′ ∈ {0, 1}^k ∩ H = X and hence z ∈ U, a contradiction. Thus, we obtain (5). Finally, if z′′ = −2α, then f(z) = ‖z′‖² + α² and ⟨1, z′⟩ = 2α² > 0, implying z′ = 0 and hence (5) holds.
While the previous lemma appears quite restrictive, the next lemma shows that we may apply it to a large class of 0/1-polytopes.
for all x ∈ X. There is a lattice Λ of dimension at most k + 1 such that conv(X) is the linear projection of a face of VC(Λ)•. Proof. Consider the set X′ and observe that projecting X′ onto the first k coordinates yields the set X. Moreover, notice that every vector in X′ consists of exactly k + m ones. In other words, the norm of every vector in X′ is √(k + m) and hence we may apply Lemma 18 to obtain a lattice Λ with dimension at most k + 1 such that conv(X′) is the linear projection of a face F of VC(Λ)•. Since conv(X) is a linear projection of conv(X′), we see that conv(X) is also a linear projection of F.
Proof of Theorem 2. We use a result of Göös, Jain & Watson [19] that yields a family of n-node graphs G such that the stable set polytope P_G of G satisfies xc(P_G) = 2^{Ω(n/log n)}. Let X ⊆ {0, 1}^n denote the set of characteristic vectors of stable sets in G. Notice that conv(X) = P_G, and the claim follows since d = O(n).
Open questions
We conclude our investigations of the extension complexity of Voronoi cells of lattices with a collection of some open problems that naturally arise from our studies and which we find interesting to pursue in future research.
In view of Theorem 2 a natural question is whether the logarithmic term in the lower bound 2^{Ω(d/log d)} on the extension complexity of certain Voronoi cells can be removed: Question 21. Does there exist a family of d-dimensional lattices Λ such that xc(VC(Λ)) = 2^{Ω(d)}?
We remark that our bound relies on a lower bound by Göös, Jain & Watson [19] on extension complexities of stable set polytopes, which meet the criteria of Lemma 19. It is known that there exist d-dimensional 0/1-polytopes with extension complexity 2^{Ω(d)}, see [37]. However, no explicit construction of such polytopes is known and so it is unclear how to transform such polytopes in order to apply Lemma 18 efficiently.
Comparing the superpolynomial bound in Theorem 2 with the polynomial upper bounds for certain classes of lattices in Section 3, the question arises what we can expect the extension complexity of the Voronoi cell of a generic lattice to be. Of course, this requires a suitable notion of a random lattice. Our question refers to interesting examples such as Siegel's measure [41] or uniform distributions over integer lattices of a fixed determinant, see Goldstein & Mayer [18].
In Theorem 2 we have shown that exactly describing a Voronoi cell of a lattice may require superpolynomial-size extended formulations. It would be interesting to understand how this situation changes if we allow approximations instead of exact descriptions, in particular in view of various results on the complexity of the approximate closest vector problem, see, e.g., Aharonov & Regev [1]. To this end, for α ≥ 1 we say that a polytope Q is an α-approximation of a polytope P if P ⊆ Q ⊆ αP.
Question 23. What can be said about extension complexities of α-approximations of Voronoi cells of lattices?
We have seen in Theorem 1 that not only the root lattices but also their dual lattices have polynomial extension complexity. Is that a general phenomenon? Question 24. Given a d-dimensional lattice Λ, is there a polynomial relationship between xc(VC(Λ)) and xc(VC(Λ*))?
Given that in view of Theorem 15 zonotopal lattices admit lifts with quadratically many facets, and the fact that the closest vector problem on such lattices can be solved in polynomial time (see [32]), one might expect that small-sized lifts of the corresponding Voronoi cells can actually be constructed explicitly.
Question 25. Given a basis of a d-dimensional zonotopal lattice Λ, is it possible to construct an explicit lift of VC(Λ) with polynomially many facets in polynomial time?
Note that our arguments leading to Theorem 15 are not constructive.
Theorem 1. For every d-dimensional lattice Λ that is a root lattice or the dual of a root lattice, we have xc(VC(Λ)) = O(d log d).
1}^d. We refer to [9, Ch. 21, Sect. 3E] for the characterization of the facet vectors and the inner description of the Voronoi cell, which is therein denoted by the symbols β(d, d/2), for d even, and ½ δ(d, (d − 1)/2), for d odd.

Lemma 14. xc(VC(D_d*)) = O(d).

Proof. Using the above description of the facet vectors, we obtain that VC(D_d*) = [−1, 1]^d ∩ (d/2) · conv{±e_1, ..., ±e_d}. Hence, the Voronoi cell of D_d* is the intersection of a hypercube and a crosspolytope. As in the case of the root lattice D_d, the stated bound follows by Lemma 5 and Equation (2).
A Case of Degos Disease Complicated By Constrictive Pericarditis In Remote Phase
Background: Degos disease, also known as malignant atrophic papulosis, is characterized by cutaneous manifestations due to chronic thrombo-obliterative vasculopathy. There have been reports of rare late-onset Degos disease complicated by constrictive pericarditis (CP). We report a case of CP caused by Degos disease that developed 20 years after diagnosis. Case presentation: A 62-year-old woman who had been taking aspirin for 20 years for Degos disease was hospitalized for worsening heart failure. The patient was diagnosed with CP and underwent pericardiectomy. Pathological findings suggested the involvement of Degos disease. The postoperative course was uneventful, and her heart failure and Degos disease did not worsen. Conclusions: This report suggests that Degos disease can cause long-term CP. Aspirin effectively inhibited the progression of Degos disease, and surgical treatment is necessary when heart failure due to CP is refractory to treatment.
Background
Degos disease, also known as malignant atrophic papulosis, is rare. To date, approximately 200 cases have been reported. Degos disease is characterized by cutaneous signs, such as central porcelain-white atrophic papules with an erythematous telangiectatic rim, caused by chronic thrombo-obliterative vasculopathy [1,2]. There have been few reports of constrictive pericarditis (CP) caused by Degos disease [3,4]. We performed a surgical intervention for CP caused by Degos disease, presenting with treatment-refractory heart failure. To the best of our knowledge, this is the first report of CP caused by Degos disease that developed 20 years after diagnosis.
Case Presentation
A 67-year-old woman was admitted to our cardiology department for dyspnea. Her medical history included hypertension, atrial fibrillation, diabetes mellitus, and Degos disease, and she had been taking low-dose aspirin for 20 years.
At the time of diagnosis, she exhibited cutaneous signs, and a histopathological examination displayed perivascular lymphocytic infiltration with distinct mucin deposition. These lesions were associated with Degos disease [1,2]. No systemic symptoms were observed. Three years ago, gastrointestinal endoscopy revealed a small intestinal lesion, which was suspected to be a systemic manifestation of Degos disease. On admission, her blood pressure was 110/62 mmHg, and her heart rate was 99 beats/min with atrial fibrillation. Physical examination revealed liver enlargement, jugular vein distension with Kussmaul's sign, and limb edema. Chest radiography revealed bilateral pleural effusion and calcification of the pericardium. Computed tomography revealed bilateral pleural effusion and pericardial effusion with marked calcification of the pericardium (Fig. 1).
Cardiac catheterization revealed equal right and left ventricular end-diastolic pressures and square root signs (Fig. 2). No coronary artery stenosis was observed. Echocardiography revealed pericardial thickening, pericardial effusion, ventricular septal paradoxical motion, septal bounce, and a normal left ventricular ejection fraction. The cutaneous signs were similar to those observed 20 years ago.
Despite optimal medical treatment, her heart failure did not improve, and the patient became catecholamine-dependent. Therefore, surgical pericardiectomy was performed.
During the operation, the pericardium was markedly thickened and calcified. The pericardium was incised, and 200 ml of bloody fluid was suctioned. Inside the pericardial sac, there were adhesions with some calcification (Fig. 3A), partly infiltrating the myocardium (Fig. 3B). The thickened pericardium was then thoroughly resected.
The central venous pressure decreased from 30 to 16 mm Hg, and the cardiac diastolic capacity improved.
Histopathological examination of the pericardium revealed a high degree of fibrosis, vitrification, and calcification of the pericardium. Lymphocytic infiltration was observed around the pericardial vessels (Fig. 4A, B).
The postoperative course was uneventful. The patient was extubated on day 1, discharged from the intensive care unit on day 2, and discharged on day 18. After surgery, the patient received aspirin, furosemide, spironolactone, bisoprolol, and perindopril erbumine treatment for 4 years. Her heart failure has not worsened.
Discussion And Conclusions
Pierce et al. reported a case of chronic pleuritis and pericarditis in a 32-year-old woman with Degos disease [3]. In that report, the patient developed heart failure and required surgical treatment.
Histopathological examination revealed a calcified and fibrotic epicardium, similar to our case, but there was no proliferative vasculitis of Degos disease. In our case, lymphocytic infiltration was present around the pericardial vessels, indicating the involvement of Degos disease.
According to Theodoridis et al., systemic signs were present in 29% of patients with Degos disease. Organ involvement began within the first 7 years of disease, and the mean survival time from the development of systemic disease was 0.9 years [2]. However, our patient, who took aspirin, did not develop systemic disease until 17 years after diagnosis. Since the current study is a case report, a general conclusion cannot be made. However, Yukiiri et al. reported CP caused by previously untreated Degos disease, which was managed medically with aspirin, dipyridamole, and furosemide [4]. Therefore, aspirin was found to effectively inhibit the progression of Degos disease. In summary, Degos disease can cause long-term CP. Aspirin effectively inhibits the progression of Degos disease, and surgical treatment is necessary when heart failure due to CP is refractory to treatment.

Abbreviations: CP, constrictive pericarditis

Declarations

Ethics approval and consent to participate: Ethics approval was not required for the retrospective analysis of this clinical case.
Consent for publication: Written consent was obtained from the patient.
Availability of data and materials: Not applicable.
Competing interests: The authors declare that they have no competing interests.
Funding: None
Authors' contributions: YK wrote the original draft. TK was in charge of writing, reviewing, and editing the manuscript. KM supervised the study. All authors read and approved the final manuscript.
The Research and Control Measures of the Influence on the Complicated tie-line and the Bridge under the Shield Tunnel
The analysis and calculation of the influence of shield tunnels crossing existing railways and bridges, considering the time-space effect, is of great significance for the risk control of shield tunnels. Taking the long-distance bus station–production road station section of Jinan rail transit line R2 as an example, where the tunnels are driven by the shield method and pass beneath the Beibei Bridge of the Beijing-Jiaozuo Liaison Line and the Beijing-Shanghai link line, a three-dimensional finite element model is established in MIDAS/GTS to study the influence of shield tunnelling on the railway and the bridge piles. The positional relationship of the shield tunnels, railways and bridge piles and the shield tunnelling distance are simulated, and the stress and deformation of the railway subgrade and pile foundations caused by shield tunnelling are analyzed. The resulting deformation is within the allowable deformation control values for the railway and the bridge. Without any reinforcement measures, by monitoring the deformation of the subgrade and bridge, feeding back the monitoring information in time and adjusting the shield tunnelling parameters promptly, the shield can safely cross under the railway and bridge. The results provide a reference for similar projects.
Introduction
The construction of underground rail transit systems is an important means of solving urban traffic problems. The subway is favored for its advantages such as large capacity, high speed and low pollution. However, the development of the subway is inevitably constrained by existing urban structures. Common problems in shield tunnel construction include passing under railways [8-10] and bridge pile foundations [1-4]; taking reasonable control measures [5-7] to reduce the adverse effects of a new shield tunnel on existing railways and bridge pile foundations is of great significance for ensuring the construction safety of the new tunnel and the operational safety of the existing railway and bridge. Although much research has been done on subway tunnels crossing railways and bridge pile foundations at home and abroad, the geological conditions of underground engineering and the complexity and diversity of structural forms mean that existing results cannot fully reflect the influence of shield tunnelling on railways and bridges. In this paper, using the MIDAS/GTS NX numerical calculation software and taking a section of Jinan rail transit line R2 as an example, the numerical simulation method is used to study the crossing beneath the existing railways, namely the Beibei Bridge of the Beijing-Jiaozuo Liaison Line and the Beijing-Shanghai third and fourth lines. The stress and deformation laws of the bridge pile foundations are analyzed, and control measures are put forward, which can serve as a reference for similar projects in the future.
Ambient environment
The long-distance bus station–production road station section of Jinan rail transit line R2 passes beneath the subgrade of the Beijing-Shanghai third and fourth lines and the Beibei Bridge of the Beijing-Jiaozuo Liaison Line. The crossing node is located in the subgrade section of the main line from Beijing-Shanghai North Garden Station to Jinan Station (4‰ limiting gradient). The minimum net distance between the right tunnel and the bridge pile is 5.51 m. The left tunnel is 6.95 m from pier 1#, about 15.14 m above the tunnel vault.
The Beibei Bridge of the Beijing-Jiaozuo Liaison Line is 902 m in length. The design load of the bridge is the medium live load standard. The top of each abutment is flat, and the supporting cushions of the piers and abutments are filled with crown cap concrete.
The plan and cross-section diagrams of the new shield tunnels and the bridge piles are shown in Figs. 1, 2 and 3.
Control standard for railway and bridge piles
The determination of the railway settlement control value should not only meet the requirements of railway bearing capacity but also ensure the safety of train operation, according to the Railway Subgrade Design Code (TB 10001-2016). For this section, the allowable displacement of the subgrade where the tunnels pass under the Beijing-Shanghai railway is 15 mm. According to the Basic Code for Railway Bridge and Culvert Design (TB 10002-2017), the allowable settlement of a single pier of the Beibei Bridge of the Beijing-Jiaozuo Liaison Line is 15 mm, and the allowable differential settlement between adjacent piers and abutments is 5 mm.
Model building
(1) In this numerical simulation, the following assumptions are adopted:
1) The surrounding rock mass is a homogeneous, isotropic continuous medium and is assumed to be an ideal elastic-plastic material.
2) The stress and deformation of the tunnel are calculated in three dimensions.
3) When simulating the initial stress field, tectonic stress is not taken into account; only the influence of gravity (self-weight) stress is considered.
4) The segments are simulated as a homogeneous elastic ring, and a stiffness reduction coefficient of 90% is adopted to account for segment joint assembly and bolted connections.
(2) The lateral boundaries of the model are about 5 tunnel diameters from the tunnel structure, and the bottom boundary is about 3 diameters below the tunnel invert; the model size is X × Y × Z = length × width × height = 100 m × 100 m × 35 m. The calculation model is shown in Fig. 3 and Fig. 4.
According to the Design Code for Railway Subgrade (TB 10001-2005), rail and train loads are simplified as plane loads applied to the railway subgrade. The surface load applied in the model is 770 kN/m².
Calculation parameters
The physical and mechanical parameters of the surrounding rock strata, pile foundations, pile caps, piers and shield segments used in the simulation are shown in Table 1 and Table 2.
Analysis on the influence of Shield Tunnel Construction on Railway subgrade
Due to the disturbance of the subgrade and the underlying strata caused by tunnel excavation, the soil above the shield tunnels deforms within a certain range. When the left shield tunnel is driven through, the maximum vertical settlement of the stratum is -8.43 mm and the maximum vertical settlement of the Beijing-Shanghai Railway subgrade is -3.67 mm. When the right tunnel is driven through, the maximum vertical settlement of the stratum is -3.67 mm, the maximum vertical settlement of the Beijing-Shanghai Railway subgrade is -9.85 mm, and the maximum vertical settlement of the Beijing-Shanghai third and fourth lines is -6.95 mm. The difference between the vertical displacement of the railway subgrade and that of the stratum is due to their different stiffnesses. The computed vertical displacement contours of the subgrade and strata of the Beijing-Shanghai Railway and the third and fourth lines are shown in Figs. 5-8.
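The settlement values above come from the 3D MIDAS/GTS model. Purely as an order-of-magnitude cross-check, the transverse surface settlement trough over a single shield tunnel can also be estimated with the classical Peck (Gaussian) formula, as in the sketch below; the diameter, axis depth, volume-loss ratio and trough-width factor are assumed typical values, not parameters of this project.

```python
import numpy as np

def peck_settlement(x, depth_to_axis, diameter, volume_loss=0.01, trough_k=0.5):
    """Transverse surface settlement trough above a single tunnel (Peck's
    empirical Gaussian formula). Geometric inputs in metres; `volume_loss`
    (ground loss ratio) and `trough_k` are assumed typical values."""
    i = trough_k * depth_to_axis                          # trough width parameter
    v_loss = volume_loss * np.pi * diameter ** 2 / 4.0    # lost ground per metre of tunnel
    s_max = v_loss / (np.sqrt(2.0 * np.pi) * i)
    return s_max * np.exp(-x ** 2 / (2.0 * i ** 2))

x = np.linspace(-30.0, 30.0, 121)                  # offset from the tunnel centreline [m]
s = peck_settlement(x, depth_to_axis=18.0, diameter=6.4, volume_loss=0.005)
print(f"max settlement ~ {1000.0 * s.max():.1f} mm")   # compare against the 15 mm limit
```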
Analysis of influence of Shield Tunnel Construction on Bridge pile
The vertical and horizontal displacements, axial forces, bending moments, and differential settlements of the 1# and 2# piers and abutments are shown in Figs. 9-14. From Figs. 9-14, the influence of shield tunnel construction on the deformation of the pile foundations is mainly as follows: 1) From Fig. 9, the maximum vertical displacement of the 1# pier-abutment is -2.00 mm and that of the 2# pier-abutment is -1.85 mm. The settlement rate of the 1# pier-abutment pile foundation first increases, then decreases gradually, and finally tends to stabilize, while the 2# pier-abutment pile foundation shows the opposite trend, mainly for the following reason: during the right-line shield construction the tunnel passes nearest to the pier-abutment pile foundation, while the presence of the already completed left tunnel weakens the influence of the right-tunnel construction on the pile foundation.
2) From Fig. 10, the maximum differential settlement between the 1# and 2# piers and abutments appears when the left-line shield has advanced 100 m; the 1# pier and abutment settle first and their settlement is completed first.
3) From Fig. 11 and Fig. 12, the maximum horizontal displacement of 1# is -2.00 mm and that of 2# is 2.33 mm; both displace away from the shield tunnel. This displacement of the piers and abutments is mainly due to the squeezing of the soil between the 1# and 2# piles by the shield tunnelling.
4) From Fig. 13, the axial force and bending moment of 1# are largest after the right-line shield tunnel has broken through: the maximum axial force is -2045 kN at the third measuring position from the pile top, and the maximum bending moment is -33 kN·m.
Temporal relationship between hepatic steatosis and fasting blood glucose elevation: a longitudinal analysis from China and UK
Background The link between nonalcoholic fatty liver disease and type 2 diabetes has not been fully established. We investigated the temporal relationship between nonalcoholic fatty liver disease (NAFLD) and type 2 diabetes (T2D), quantitatively assessed the impact, and evaluated the related mediation effect. Methods This study involved participants from the China Multi-Ethnic Cohort Study and the UK Biobank. We performed cross-lagged path analysis to compare the relative magnitude of the effects between NAFLD and T2D using two-period biochemical data. Hepatic steatosis and fasting blood glucose elevation (FBG) represented NAFLD and T2D respectively. We fitted two separate Cox proportional-hazards models to evaluate the influence of hepatic steatosis on T2D. Furthermore, we applied the difference method to assess mediation effects. Results In cross-lagged path analyses, the path coefficients from baseline hepatic steatosis to first repeat FBG (βCMEC = 0.068, βUK−Biobank = 0.033) were significantly greater than the path coefficients from baseline FBG to first repeat hepatic steatosis (βCMEC = 0.027, βUK−Biobank = -0.01). Individuals with hepatic steatosis have a risk of T2D that is roughly three times higher than those without the condition (HR = 3.478 [3.314, 3.650]). Hepatic steatosis mediated approximately 69.514% of the total effect between obesity and follow-up T2D. Conclusions Our findings contribute to determining the sequential relationship between NAFLD and T2D in the causal pathway, highlighting that the dominant pathway in the relationship between these two early stages of diseases was the one from hepatic steatosis to fasting blood glucose elevation. Individuals having NAFLD face a significantly increased risk of T2D and require long-term monitoring of their glucose status as well. Supplementary Information The online version contains supplementary material available at 10.1186/s12889-024-19177-3.
Introduction
Nonalcoholic fatty liver disease (NAFLD) has emerged as a major public health issue worldwide. NAFLD is considered the leading cause of progression to more serious liver diseases [1,2]. Notably, its global prevalence is around 25.24% [3] and still increasing [4], closely corresponding to the growing epidemics of obesity and type 2 diabetes (T2D) [5]. Some studies reported that 51.34% of patients with NAFLD also have obesity [3] and 22.51% of them concurrently suffer from T2D [6]. Consequently, numerous studies have focused on understanding the connections underlying this phenomenon to gain insight into the pathogenic mechanisms and treatments of NAFLD and T2D [7][8][9].
Despite the fact that obesity has been identified as the shared etiology of NAFLD and T2D [7,8], the link between these two diseases remains complex and the subject of ongoing debate. NAFLD was long thought to be a part of metabolic syndrome [10] and a long-term complication of T2D [9]. T2D could raise the risk of NAFLD developing into cirrhosis and even hepatocellular carcinoma [11]. One of the main mechanistic hypotheses supporting this view is that long-term exposure to high glucose levels could result in glucotoxicity. Lipotoxicity associated with insulin resistance in adipocytes and glucotoxicity have adverse effects on the growth of NAFLD [12]. Besides, the Framingham Heart Study provided epidemiological evidence that participants with confirmed T2D had a higher risk of NAFLD [13]. On the contrary, a growing viewpoint suggests that NAFLD is a multisystemic disease [14] and could precede T2D [15]. That is, NAFLD may also operate as a risk factor and a predictor of T2D [16]. The potential mechanism of this point is that ectopic fat accumulation in the liver, leading to increased hepatic glucose output, may raise the risk of T2D [7]. Several longitudinal studies suggested that participants with NAFLD at baseline had a higher incidence of T2D, although the results showed substantial variation across studies, with a rise in the incidence of up to 5.5-fold [13,17,18]. Additionally, evidence from Mendelian randomization studies has suggested a two-way relationship between NAFLD and T2D [19,20].
And yet, most published research was unable to directly compare effect sizes in both directions, leaving the classical chicken-and-egg problem unresolved: which is the starting point, NAFLD or T2D [21,22]? Or which is the dominant path between them? For longitudinal investigations involving multiple periods of biochemical data, the use of cross-lagged path analysis would probably be a more effective choice to answer the above questions. Cross-lagged path analysis could simultaneously assess the magnitude of two-way effects between interrelated variables in the same population by leveraging and analyzing multi-period continuous variables of biomarkers instead of binary disease outcomes [21]. In this model, the effect evaluation is standardized, allowing for a direct comparison of the magnitude of the effect in both directions. This scientific model has been successfully applied in multi-phase clinical trials and large population-based studies [23].
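A minimal sketch of such a two-wave cross-lagged analysis, fitted as two standardized regressions, is given below. Variable names and the toy data are placeholders; the actual study additionally adjusted for covariates and used the two-period biochemical measurements described later.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def zscore(s):
    return (s - s.mean()) / s.std(ddof=0)

def cross_lagged(df):
    """Two-wave cross-lagged panel model on standardized variables:
       steatosis_t1 ~ steatosis_t0 + fbg_t0        (beta1: FBG -> steatosis)
       fbg_t1       ~ fbg_t0       + steatosis_t0  (beta2: steatosis -> FBG)
    Comparing beta1 and beta2 indicates which direction dominates."""
    z = df.apply(zscore)
    m1 = sm.OLS(z["steatosis_t1"], sm.add_constant(z[["steatosis_t0", "fbg_t0"]])).fit()
    m2 = sm.OLS(z["fbg_t1"], sm.add_constant(z[["fbg_t0", "steatosis_t0"]])).fit()
    return m1.params["fbg_t0"], m2.params["steatosis_t0"]

# toy data in which steatosis drives later FBG more strongly than the reverse
rng = np.random.default_rng(1)
n = 5000
st0 = rng.normal(size=n)
fbg0 = 0.3 * st0 + rng.normal(size=n)
st1 = 0.6 * st0 + 0.03 * fbg0 + rng.normal(size=n)
fbg1 = 0.6 * fbg0 + 0.07 * st0 + rng.normal(size=n)
df = pd.DataFrame({"steatosis_t0": st0, "fbg_t0": fbg0,
                   "steatosis_t1": st1, "fbg_t1": fbg1})
print(cross_lagged(df))   # the steatosis -> FBG path should come out larger
```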
In this study, we included participants from both the China Multi-Ethnic Cohort (CMEC) and the UK Biobank in our analysis to account for the ethnic and geographical heterogeneity of NAFLD [16,17]. We proposed to use cross-lagged path analysis to dissect the temporal relationship between NAFLD and T2D. Because this approach requires continuous variables, we used hepatic steatosis and fasting blood glucose (FBG) elevation as early-stage indicators of NAFLD and T2D, respectively. Once the temporal sequence had been established, we further quantitatively assessed the impact of the antecedent disease on the subsequent disease using a Cox proportional-hazards model to achieve a comprehensive understanding of their severity and to guide public health policy and clinical practice. Additionally, we constructed a mediation model for obesity (represented by body fat percentage), hepatic steatosis, and T2D to examine the mediation effects.
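The mediation step can likewise be sketched with the difference method: fit the Cox model for T2D on the exposure with and without the mediator and compare the exposure coefficients. The column names below are placeholders and the real analysis included further covariates.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def proportion_mediated(df):
    """Difference-method estimate of the share of the obesity -> T2D effect that
    runs through hepatic steatosis: compare the exposure log-HR with and
    without the mediator in the Cox model."""
    total = CoxPHFitter().fit(df[["time", "t2d", "body_fat_pct"]],
                              duration_col="time", event_col="t2d")
    direct = CoxPHFitter().fit(df[["time", "t2d", "body_fat_pct", "steatosis"]],
                               duration_col="time", event_col="t2d")
    b_total = total.params_["body_fat_pct"]
    b_direct = direct.params_["body_fat_pct"]
    return (b_total - b_direct) / b_total

# toy data: body fat raises steatosis risk, and steatosis raises the T2D hazard
rng = np.random.default_rng(2)
n = 4000
bfp = rng.normal(30.0, 6.0, n)
steat = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-(bfp - 32.0) / 3.0))).astype(int)
hazard = 0.02 * np.exp(0.03 * (bfp - 30.0) + 0.8 * steat)
time = rng.exponential(1.0 / hazard)
df = pd.DataFrame({"time": np.minimum(time, 10.0), "t2d": (time < 10.0).astype(int),
                   "body_fat_pct": bfp, "steatosis": steat})
print(f"proportion mediated ~ {proportion_mediated(df):.2f}")
```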
Study population
Launched in 2017, the China Multi-Ethnic Cohort Study has recruited 98,631 participants between the ages of 30 and 79 from southwest China. The CMEC collected multi-dimensional information on chronic diseases at baseline through face-to-face interviews with electronic questionnaires from the trained interviewers, physical examinations, and clinical laboratory testing. From 2020 to 2021, the CMEC selected 10% of the baseline study participants to complete the first repeated survey by stratifying by region and conducting random cluster sampling with communities or villages as sampling units. The data collection of this repeated survey was the same as the baseline survey. Based on the comparison of baseline characteristics, the follow-up population is well-representative (see more details in Appendix Table S1). Further information on the specifics of the CMEC study has been provided elsewhere [24].
Established in 2006, the UK Biobank has enrolled 502,392 participants between the ages of 40 and 79 from the United Kingdom.The UK Biobank collected comprehensive data from participants.Furthermore, participants were also regularly followed up to obtain health and disease-related data.More information about data collection has already been presented previously [25].Since serum biochemical analyses were only performed at the baseline and the first repeat surveys, we limited the study sample to people from these two time periods.
In the current study, we primarily concentrated on individuals who had complete basic personal information and biochemical data at baseline and first repeat surveys.As body mass index (BMI) is closely linked to the target diseases and extreme BMI values raised our concerns about data accuracy for participants, we only included participants with a reasonable BMI (range from 14 to 45).Additionally, individuals taking insulin or other medications for diabetes, those with liver-related diseases (e.g., chronic hepatitis, liver cirrhosis), and individuals with a history of cancer were excluded from the study population.Finally, this study involved 7,668 subjects from the CMEC and 11,876 subjects from the UK Biobank. Figure S1 depicts the flow of participant selection.The strategies used in this research complied with the STROBE statement.
Anthropometric information and biological samples
In the CMEC, trained doctors collected physical data at each site using uniform devices.We further calculated BMI and waist-to-hip ratio (WHR) using classic formulas.In addition, blood samples for testing were collected in the morning after participants had fasted for at least 12 h.These samples were then transported under cold chain to centralized regional laboratories for analysis.Plasma glucose and lipid profiles were evaluated using an AU5800 automated chemistry analyzer provided by Beckman Coulter Commercial Enterprise.Additionally, glycated hemoglobin (HbA1c) levels were measured with an MQ-6000 glycated hemoglobin analyzer from Shanghai Medconn Biotechnology Corporation.More details of the measurements of the blood parameters have been described in published articles [26].Local CDCs were in charge of field QC, which included checking devices, ensuring study protocols and randomly selecting participants for re-examination.
In the UK Biobank, blood biochemistry biomarkers such as FBG, and triglycerides, were measured by Beckman Coulter AU5800 at baseline and the first repeat surveys.Proton density fat fraction (PDFF) was measured by magnetic resonance imaging at the second and third repeat surveys.Body fat percentage (BFP) was assessed by impedance measurement at each survey and we used this indicator to assess obesity.More details of study protocols have been described in published articles [25].
Diagnostic criteria
In this study, we chose FLI and PDFF to diagnose hepatic steatosis. FLI, a frequently used surrogate marker in numerous studies, is a non-invasive and validated tool [27]. The variables used in the calculation of FLI are triglycerides, BMI, gamma-glutamyl transferase and waist circumference [28]. Hepatic steatosis was identified as FLI ≥ 60. PDFF is widely acknowledged as a reliable indicator for estimating liver fat content [29,30]. When using PDFF for analysis, we defined hepatic steatosis as a PDFF value exceeding 5.6% [29].
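The FLI itself is computed from the four variables listed above via the widely used Bedogni et al. logistic formula [28]. The paper does not reproduce the formula, so the coefficients and unit conventions below (triglycerides in mg/dL, GGT in U/L, waist in cm) are quoted from the published index and should be treated as an illustration rather than the authors' exact implementation; a minimal Python sketch applying the study's cut-offs follows.

import math

def fatty_liver_index(triglycerides_mg_dl, bmi, ggt_u_l, waist_cm):
    """Fatty liver index (FLI) on a 0-100 scale (Bedogni et al. formulation)."""
    z = (0.953 * math.log(triglycerides_mg_dl)
         + 0.139 * bmi
         + 0.718 * math.log(ggt_u_l)
         + 0.053 * waist_cm
         - 15.745)
    return 100.0 * math.exp(z) / (1.0 + math.exp(z))

def hepatic_steatosis(fli=None, pdff_percent=None):
    """Apply the cut-offs used in this study: FLI >= 60 or PDFF > 5.6%."""
    if fli is not None:
        return fli >= 60.0
    if pdff_percent is not None:
        return pdff_percent > 5.6
    raise ValueError("need FLI or PDFF")

# Example with a hypothetical participant (made-up values).
fli = fatty_liver_index(triglycerides_mg_dl=180, bmi=29.5, ggt_u_l=45, waist_cm=98)
print(round(fli, 1), hepatic_steatosis(fli=fli))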
In addition to using FBG as the marker of diabetes, we also identified T2D through self-reported history, FBG ≥ 7.0 mmol/L, or hemoglobin A1c ≥ 6.5% on physical examination.
We detected other chronic diseases based on personal disease history with the diagnostic record, or the results of physical examination and serum biochemical analyses.The latter criteria were: (1) for hypertension: systolic blood pressure ≥ 130 mmHg, or diastolic blood pressure ≥ 80 mmHg; (2) for dyslipidemia: TC ≥ 6.22 mmol/L, LDL ≥ 4.14 mmol/L, TG ≥ 2.26 mmol/L or HDL ≤ 1.04 mmol/L.
Questionnaire survey and covariates selection
Personal information was obtained through a comprehensive electronic questionnaire. Based on the information mentioned above and the existing literature [13,31], we selected sex, age, WHR, ethnic group, occupation (CMEC) or deprivation index (UK Biobank), education, cigarette smoking, alcohol status, dietary score, and non-sedentary physical activity (METs-h/day) as covariates for the subsequent analyses. Because BMI is part of the FLI formula, we used WHR instead of BMI to assess obesity. Dietary score was calculated as the Dietary Approaches to Stop Hypertension (DASH) score [32] in CMEC and the healthy diet pattern score [33] in UK Biobank. Table S2 describes the covariates in detail.
Statistical analysis
At the start of this study, we described the population characteristics at baseline and the first repeat survey for both cohorts.We used the median [interquartile range] for continuous variables and numbers (percentages) for categorical variables as statistical descriptive indicators.
We conducted cross-lagged path analysis to examine the directional link between FLI and FBG, which were measured repeatedly at two time points. First, we performed regression residual analysis to adjust the FLI and FBG indices at baseline and the first repeat assessment for the previously mentioned confounders, and used Z-transformation to standardize the residuals. We then applied structural equation modeling to perform the cross-lagged path analyses. The temporal sequence was determined by comparing the path coefficients β1 (baseline FLI to subsequent FBG) and β2 (baseline FBG to subsequent FLI). Fisher's Z test was used to examine the difference between β1 and β2. We selected the comparative fit index (CFI) and the standardized root mean square residual (SRMR) to measure the goodness of fit of the models; CFI ≥ 0.95 and SRMR ≤ 0.08 indicated that a model fitted the sample data well. We also conducted stratified analyses to examine potential effect heterogeneity among predefined subpopulations defined by sex, age (60 years as the cut-off value), ethnic group, hypertension status, and hyperlipidemia status. Additionally, a sensitivity analysis was performed in which we applied the cross-lagged path analysis to the population that had neither NAFLD nor T2D at baseline.
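The two-wave procedure just described (residualize on the covariates, z-standardize, estimate both lagged paths) can be sketched as follows. The study itself fitted a structural equation model in R; the Python/statsmodels code below is only a stand-in that approximates the path coefficients with two lagged regressions on synthetic data, and the covariate list and column names are placeholders rather than the study's variables.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({"sex": rng.integers(0, 2, n),
                   "age": rng.normal(55, 8, n),
                   "whr": rng.normal(0.90, 0.07, n)})
df["fli_t1"] = np.clip(rng.normal(40, 20, n), 1, None)
df["fbg_t1"] = 4.9 + 0.005 * df["fli_t1"] + rng.normal(0, 0.5, n)
df["fbg_t2"] = 0.4 * df["fbg_t1"] + 0.008 * df["fli_t1"] + rng.normal(0, 0.5, n)
df["fli_t2"] = 0.6 * df["fli_t1"] + 2.0 * df["fbg_t1"] + rng.normal(0, 10, n)

covariates = ["sex", "age", "whr"]   # stand-ins for the full covariate list

def std_residuals(data, outcome):
    """Residualize an index on the covariates, then z-standardize the residuals."""
    resid = smf.ols(f"{outcome} ~ " + " + ".join(covariates), data=data).fit().resid
    return (resid - resid.mean()) / resid.std(ddof=0)

for var in ["fli_t1", "fbg_t1", "fli_t2", "fbg_t2"]:
    df["z_" + var] = std_residuals(df, var)

# Cross-lagged paths estimated as two lagged regressions on the standardized
# residuals (a stand-in for the SEM fit; the paper fixes the T2 residual
# correlation at zero and reports CFI/SRMR, which are omitted here).
m1 = smf.ols("z_fbg_t2 ~ z_fbg_t1 + z_fli_t1", data=df).fit()  # beta1: T1 FLI -> T2 FBG
m2 = smf.ols("z_fli_t2 ~ z_fli_t1 + z_fbg_t1", data=df).fit()  # beta2: T1 FBG -> T2 FLI
b1, b2 = m1.params["z_fli_t1"], m2.params["z_fbg_t1"]

# Crude comparison of the two paths; the paper uses Fisher's Z test instead.
z = (b1 - b2) / np.sqrt(m1.bse["z_fli_t1"] ** 2 + m2.bse["z_fbg_t1"] ** 2)
print(f"beta1 = {b1:.3f}, beta2 = {b2:.3f}, z = {z:.2f}")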
After determining the main direction of effects between the two conditions, we focused on the magnitude of the impact of the antecedent disease on the subsequent disease by fitting Cox proportional-hazards models adjusted for the confounders mentioned earlier. We then constructed a regression-based mediation model to investigate the mediation effects among obesity, hepatic steatosis and T2D. We fitted two separate Cox proportional-hazards models to assess the exposure-outcome and exposure-mediator-outcome associations, adjusted for the previously mentioned confounders. We calculated the total, direct and indirect effects and the proportion of mediation using the difference method [34], and used the bootstrap method to compute 95% confidence intervals (CI). The proportion of mediation was calculated as (ln(HR Tot) - ln(HR DE)) / ln(HR Tot). Because only the UK Biobank had long-term follow-up outcomes and BFP data, the survival and mediation analyses were limited to this cohort. Figures S2 and S3 show the flow of participant selection for these analyses.
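A minimal sketch of this difference-method mediation step is given below. The study's analyses were run in R with full covariate adjustment and bootstrap confidence intervals; the Python/lifelines code here uses synthetic data, omits the covariates and the bootstrap, and only illustrates how the total-effect HR, direct-effect HR and proportion of mediation are obtained.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000
bfp = rng.normal(30, 8, n)                   # exposure: body fat percentage at T1
fli = 20 + 1.2 * bfp + rng.normal(0, 10, n)  # mediator: FLI at T2
hazard = np.exp(0.02 * bfp + 0.015 * fli)
time = rng.exponential(50 / hazard)
event = (time < 14).astype(int)              # administrative censoring at 14 years
df = pd.DataFrame({"bfp": bfp, "fli": fli,
                   "time": np.minimum(time, 14), "event": event})

# Total effect: exposure -> outcome, without the mediator.
total = CoxPHFitter().fit(df[["bfp", "time", "event"]],
                          duration_col="time", event_col="event")
# Direct effect: exposure -> outcome, adjusting for the mediator.
direct = CoxPHFitter().fit(df[["bfp", "fli", "time", "event"]],
                           duration_col="time", event_col="event")

ln_hr_tot = total.params_["bfp"]
ln_hr_de = direct.params_["bfp"]
prop_mediated = (ln_hr_tot - ln_hr_de) / ln_hr_tot  # difference method, as in the paper
print(f"HR_tot={np.exp(ln_hr_tot):.3f}, HR_de={np.exp(ln_hr_de):.3f}, "
      f"proportion mediated={prop_mediated:.1%}")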
Furthermore, we conducted several validation analyses to demonstrate the reliability of FLI.We used PDFF instead of FLI for analyses and compared their results.The lack of concurrent PDFF and FBG data limited our capacity to conduct cross-lagged path analyses with PDFF.Hence, we only replicated the survival and mediation analyses using PDFF, following the same procedures as before.All models satisfied the proportional hazards assumption (Figure S4 -S6) and had no interaction between exposure and mediator (Table S3 and S4).
For all of the above analyses, we used multiple imputation to handle missing data. The dataset was imputed five times independently, each complete dataset was analyzed separately to generate five estimates, and the estimates were then pooled using Rubin's rules to produce the final result. We performed all statistical analyses in R 4.1.0. For convenience, the baseline, first repeat and second repeat assessments are referred to as T1, T2 and T3 in what follows.
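Rubin's rules combine the five per-imputation estimates by averaging the point estimates and adding the between-imputation variance to the average within-imputation variance. The short sketch below illustrates the pooling arithmetic only (the study performed imputation and pooling in R); the example numbers are invented.

import numpy as np

def rubins_rules(estimates, variances):
    """Pool point estimates and their variances from m imputed datasets."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    q_bar = estimates.mean()                 # pooled point estimate
    within = variances.mean()                # average within-imputation variance
    between = estimates.var(ddof=1)          # between-imputation variance
    total = within + (1 + 1 / m) * between   # Rubin's total variance
    return q_bar, np.sqrt(total)

# Example with five per-imputation log-HR estimates and their squared SEs.
est = [0.091, 0.088, 0.095, 0.090, 0.093]
var = [0.0004, 0.0005, 0.0004, 0.0005, 0.0004]
q, se = rubins_rules(est, var)
print(f"pooled estimate = {q:.3f}, pooled SE = {se:.3f}")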
Participant characteristics
Table 1 summarizes the characteristics of our study population at T1 and T2 in CMEC and UK Biobank. In CMEC, we included 7,668 subjects with a median age at T1 of 50.00 [43.00, 58.00] years, 63.11% of whom were female, and a median follow-up between T1 and T2 of 1.98 [1.78, 2.16] years. The median FBG and FLI at T1 were 5.08 [4.71, 5.43] mmol/L and 22.33 [9.32, 47.30], respectively. Compared with T1, the numbers of smokers and drinkers and the level of non-sedentary physical activity were lower at T2, while FLI was higher. In UK Biobank, we included 11,876 subjects with a median age at T1 of 58.00 [52.00, 63.00] years, 48.15% of whom were female, and a median follow-up between T1 and T2 of 4.43 [2.11, 6.12] years. The median FBG and FLI at T1 were 4.88 [4.55, 5.23] mmol/L and 41.63 [18.19, 70.63], respectively. Compared with T1, the numbers of smokers and drinkers were lower at T2, while FLI was higher. In comparison with CMEC, the UK Biobank population had a lower proportion of females, older age, higher BMI and FLI, a greater proportion of highly educated people and regular drinkers, and less non-sedentary physical activity.
Cross-lagged path analysis
Figure 1 depicts the results of the cross-lagged path analyses. In CMEC, the coefficient of the path from T1 FLI to T2 FBG (β1 = 0.068, P < 0.001) was approximately 2.5-fold the coefficient of the path from T1 FBG to T2 FLI (β2 = 0.027, P = 0.001). The difference between the two path coefficients (β1 and β2) was statistically significant (Z = 3.047, P = 0.002). The variance explained (R²) was 0.44 for T2 FLI and 0.20 for T2 FBG. This model fitted well, with a CFI of 0.983 and an SRMR of 0.021. In UK Biobank, the coefficient of the path from T1 FLI to T2 FBG (β1 = 0.033, P < 0.001) was approximately 3-fold the coefficient of the path from T1 FBG to T2 FLI (β2 = -0.01, P = 0.127). The difference between the two path coefficients was statistically significant (Z = 3.771, P < 0.001). The variance explained (R²) was 0.50 for T2 FLI and 0.14 for T2 FBG. This model also fitted well, with a CFI of 0.995 and an SRMR of 0.012. The results from both cohorts indicated that the dominant path was the one from T1 FLI to T2 FBG.
Figure 2 displays the results of the cross-lagged path analyses in the predefined stratification subpopulations. These results were generally consistent with those in the whole population, although the effects were weak in some subgroups. Notably, the difference between the two path coefficients in the hypertension group was not statistically significant in either cohort. Table S5 (see the appendix for details) shows the cross-lagged path analysis results in the restricted population (CMEC: n = 6,327; UK Biobank: n = 8,335). Consistent with the previous results, the path from T1 FLI to T2 FBG remained dominant, with statistically significant differences between the two path coefficients in both cohorts (Z CMEC = 2.674, P = 0.008; Z UK Biobank = 2.941, P = 0.003).
Mediation analysis
We further excluded individuals who had T2D before T1, leaving 11,627 participants from the UK Biobank for the mediation analyses. During a median follow-up of 14.195 [13.134, 14.745] years, 478 (4.11%) participants developed T2D and 417 participants died after the baseline survey. The total follow-up was 159,075.6 person-years. Figure 3 presents the mediation model for T1 BFP, T2 FLI or T3 PDFF, and follow-up T2D. For FLI, the total effect from the exposure-outcome Cox proportional-hazards model was a hazard ratio (HR) of 1.097 (95% CI 1.074, 1.120). The indirect-effect HR (1.066 [1.051, 1.081]) was greater than the direct-effect HR (1.029 [1.004, 1.054]), and the proportion of mediation was estimated to be 69.514%. For PDFF, the total-effect HR was 1.102 [1.085, 1.120], the indirect-effect HR was 1.016 [1.013, 1.019], and the direct-effect HR was 1.085 [1.068, 1.103]; the proportion of mediation was estimated to be 15.975%. All of these findings indicated that hepatic steatosis appears to be a significant mediator of the link between BFP and T2D.
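The reported proportions of mediation can be approximately reproduced from the hazard ratios above with the difference-method formula given in the Methods; the small discrepancies from 69.514% and 15.975% come only from rounding of the published HRs.

import math

def proportion_mediated(hr_total, hr_direct):
    """Difference method: (ln HR_tot - ln HR_de) / ln HR_tot."""
    return (math.log(hr_total) - math.log(hr_direct)) / math.log(hr_total)

print(f"FLI:  {proportion_mediated(1.097, 1.029):.1%}")   # ~69%, vs 69.514% reported
print(f"PDFF: {proportion_mediated(1.102, 1.085):.1%}")   # ~16%, vs 15.975% reported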
Discussion
In two longitudinal cohort studies with multiple periods of biochemical data, we examined the temporal relationship between hepatic steatosis and FBG elevation. Our cross-lagged path analyses suggest that the development of hepatic steatosis precedes the development of FBG elevation, with both cohorts showing a consistent relationship. The stratified analyses suggest that the association between hepatic steatosis and FBG elevation disappeared in the hypertension group. Additionally, the survival analyses suggest that individuals with hepatic steatosis have a significantly higher risk of T2D. Further, our mediation analyses suggest that hepatic steatosis mediated part of the total effect between obesity and follow-up T2D.
The current study confirms that the dominant path was the one from T1 FLI to T2 FBG. Several previous observational studies have found a significant link between NAFLD and a high risk of T2D [17,35]. Yamazaki et al. also demonstrated that improvements in NAFLD could reduce the incidence of T2D. While these studies support our findings, they only focused on the unidirectional effects of NAFLD on T2D by using binary disease variables and calculating adjusted hazard ratios or adjusted odds ratios. Meanwhile, some studies have suggested NAFLD is a complication of T2D; evidence from cross-sectional studies [6] indicates a high prevalence of NAFLD among patients with T2D. Ma et al. [13] conducted a parallel analysis of the link between baseline fatty liver and incident T2D, as well as baseline fasting plasma glucose and incident fatty liver, based on around 1,000 participants from the Framingham Heart Study cohort; this study indicated a two-way relationship between liver fat and T2D. Notably, several studies using Mendelian randomization analysis have also indicated a two-way relationship between them. Liu et al. suggested that NAFLD could be divided into two subtypes: "nature" NAFLD increases the risk of type 1-like diabetes, whereas T2D increases the risk of "nurture" NAFLD [19]. De Silva et al. implied that insulin resistance (IR) promotes the emergence of NAFLD and that NAFLD is associated with an increased risk of developing T2D [20]. The findings of these bi-directional studies cannot be directly compared across directions because different indicators were used to assess NAFLD and T2D. The stratified cross-lagged path analysis in the hypertension group implies that there is likely to be a potential mediator related to the cause of hypertension in the NAFLD-T2D pathway. Previous evidence indicated that NAFLD might be a cause of hypertension [36]. Meanwhile, it has been demonstrated that hypertension and diabetes are bad companions that often coexist, with shared pathological mechanisms [37], such as the nitric oxide pathway of IR and its subsequent stimulatory effects on sympathetic excitation, smooth muscle growth, and sodium and fluid retention. It is reasonable to speculate that a common cause of hypertension and diabetes may be the mediator in the FLI-FBG path; controlling hypertension, as a descendant of that mediator, could block the causal path from T1 FLI to T2 FBG.
The survival analyses show that, in the European population, individuals with hepatic steatosis have a risk of T2D roughly three times higher than those without the condition. Results from prior studies have varied widely, with risks ranging from a 33% increase [38] to a 5.5-fold increase [39]; differences in these results may be attributable to variation in geographical region and in the duration of follow-up. Moreover, the findings for FLI are consistent with the PDFF findings, supporting the reliability of FLI. Various potential mechanisms have been proposed to explain the complex pathogenesis of how NAFLD leads to T2D [18,22]. In patients with NAFLD, ectopic fat accumulation in the liver may increase hepatic glucose output, which could adversely affect glucose metabolism. Additionally, molecules associated with liver inflammation that are secreted by the liver, such as angiopoietin-like proteins, are risk factors for T2D. Further research indicates that fatty liver exhibits distinct endocrine functions compared with healthy liver tissue: a fatty liver can differentially express and secrete various proteins (hepatokines) into the circulation, such as Fetuin-A, ANGPTL3, FGF21, Selenoprotein P, Fetuin-B, and Follistatin, which can adversely affect the development and progression of T2D [40]. Increased levels of total serum bile acids, diacylglycerols, and ceramides are also potential risk factors.
The results of the mediation analyses reveal a significant and critical role of hepatic steatosis in the association between obesity and T2D, highlighting the pathophysiological and metabolic mechanisms underlying the obesity-NAFLD-T2D pathway. Few studies have previously investigated this mediation effect in this complex link, so comparable data are scarce. Obesity may cause NAFLD through two main mechanisms: the primary cause of liver injury [41] is impaired suppression of lipolysis and increased release of free fatty acids (FFAs) in obesity, and an increase in de novo lipogenesis within hepatocytes also promotes hepatic steatosis [9]. The potential mechanisms by which NAFLD causes T2D have been discussed above.
The prevalence of obesity and NAFLD is expected to increase globally, and these trends are anticipated to adversely affect the prevalence of T2D. Our study suggests that identifying and managing hepatic steatosis may be an important preventive strategy for reducing the risk of developing T2D, particularly in the initial phases of the condition. This result reinforces the causal relationship between NAFLD and T2D [22], highlighting the importance of hepatic steatosis as a potential contributor to the development of T2D. High-risk groups should be prioritized for the prevention of T2D, and the management of obesity remains the priority. Future analyses should consider the heterogeneity in obesity and T2D [42], and aim to provide new insights into the impact of metabolically unhealthy obesity on the risk of T2D.
Strengths and limitations
The present study has several strengths. Unlike previous studies that focused primarily on Asian populations, we used two large cohorts from China and the UK to explore the relationship between liver fat and T2D in different ethnic groups, which provides new population-based evidence and expands on existing information. Further, we analyzed multi-period biochemical data using cross-lagged path analysis, a more effective approach for detecting the temporal sequence between inter-related variables. To our knowledge, this is the first study to compare the relative magnitude of the effects between hepatic steatosis and fasting blood glucose elevation. Notably, previous studies have rarely estimated mediation effects to measure the contribution of liver fat to the obesity-T2D pathway; by performing mediation analyses, our study provides population-based insights into the complete mechanistic pathway.
We acknowledge that the current study had several limitations. First, we only included participants who completed both the baseline and the first repeat survey, so some selection bias may exist and our study population cannot represent the entire population. Additionally, our study excluded individuals taking insulin or other medications for diabetes, so the conclusions cannot be generalized to patients with advanced diabetes. Second, because concurrent data for PDFF and FBG were lacking, PDFF could not be used in the cross-lagged analyses. We were constrained to perform these analyses using FLI, which is a practical indicator in large general-population cohorts; however, FLI is not the best estimate of hepatic steatosis. Moreover, as FLI includes BMI in its composition, the mediation effect may be overestimated. Third, in the cross-lagged path analyses, we could not distinguish the type of diabetes. The participants we included were all over the age of 30 years, among whom the incidence of type 1 diabetes (T1D) is low, at 0.69 per 100,000 person-years in China and lower in the UK. Thus, we anticipated that the number of T1D patients in this study would be very small and that the failure to distinguish between types of diabetes would not introduce significant bias. Finally, in assessing mediation effects, we did not consider heterogeneity in obesity and T2D; the current mediation analysis is exploratory and will need to be investigated further in the future.
Conclusion
Our current study provides new population-based evidence that, in the early stages of these two diseases, hepatic steatosis may have a greater impact on T2D than the reverse. Our findings also show that individuals with NAFLD face a significantly increased risk of T2D. Our exploration of the specific relationships between obesity, hepatic steatosis, and T2D helps to further elucidate the pathogenesis of NAFLD and to identify the population at high risk for T2D. As a supplement to aggressive obesity control, targeting hepatic steatosis may be an alternative strategy for preventing T2D. Meanwhile, we recommend paying closer attention to the glycemic profile of patients with NAFLD.
Fig. 1
Fig. 1 Cross-lagged path analysis of the FLI with FBG. β1 indicates the coefficient of the path from T1 FLI to T2 FBG; β2 indicates the coefficient of the path from T1 FBG to T2 FLI. The conditional correlation coefficient between T2 FBG and T2 FLI was set to zero. The covariates adjusted for in the model include sex, age, WHR, ethnic group, occupation (CMEC) or deprivation index (UK Biobank), education, cigarette smoking, alcohol status, dietary score, and non-sedentary physical activity. Abbreviations: FLI: the fatty liver index; FBG: fasting blood glucose
Fig. 2
Fig. 2 Cross-lagged path analysis of FLI with FBG in subgroups. Subgroups include male / female, old / young, whites / others, hypertension / normotensive, and hyperlipidemia / ortholiposis. The X-axis represents the magnitude of the path coefficients in the two directions: to the right are the path coefficients from T1 FLI to T2 FBG, and to the left are those from T1 FBG to T2 FLI. The length of each bar indicates the absolute value of the coefficient, with the specific coefficient value displayed within the bar. a Numbers in brackets show the number of participants in each group. § indicates P < 0.05. The covariates adjusted for in the model include sex, age, WHR, ethnic group, occupation (CMEC) or deprivation index (UK Biobank), education, cigarette smoking, alcohol status, dietary score, and non-sedentary physical activity. Abbreviations: FLI: the fatty liver index; FBG: fasting blood glucose
Fig. 3
Abbreviations: FLI: the fatty liver index; PDFF: proton density fat fraction; HR: hazard ratio; CI: confidence interval. a The covariates adjusted for in the three models include sex, age, WHR, ethnic group, deprivation index, education, cigarette smoking, alcohol status, dietary score, and non-sedentary physical activity. b This result is interpreted as the effect of a 1 SD change in FLI or PDFF on the outcome.
Table 1
Characteristics of participants at T1 and T2 from the CMEC and the UK Biobank. a Median [interquartile range] or counts (proportion); due to the presence of missing data, proportions may not add up to 1. b Because CMEC had a large proportion of people working in agriculture and animal husbandry, the non-sedentary physical activity values were higher than in the UK Biobank. c Drinking less than three times a month was considered occasional drinking, and drinking more than once a week was considered regular drinking. d Dietary score refers to the DASH score in CMEC and the healthy diet pattern score in UK Biobank.
Table 2
Cox-proportional hazard model for T1 FLI or T3 PDFF and follow-up type 2 diabetes a
The SUMOylation of Human Cytomegalovirus Capsid Assembly Protein Precursor (UL80.5) Affects Its Interaction with Major Capsid Protein (UL86) and Viral Replication
Human Cytomegalovirus Capsid Assembly Protein Precursor (pAP, UL80.5) plays a key role in capsid assembly by forming an internal protein scaffold with Major Capsid Protein (MCP, UL86) and other capsid subunits. In this study, we revealed UL80.5 as a novel SUMOylated viral protein. We confirmed that UL80.5 interacted with the SUMO E2 conjugating enzyme UBC9 (58-93 aa) and could be covalently modified by SUMO1/SUMO2/SUMO3 proteins. Lysine 371, located within a ψKxE consensus motif in the UL80.5 carboxy terminus, was the major SUMOylation site. Interestingly, the SUMOylation of UL80.5 restrained its interaction with UL86 but had no effect on the translocation of UL86 into the nucleus. Furthermore, we showed that removal of the lysine 371 SUMOylation site of UL80.5 inhibited viral replication. In conclusion, our data demonstrate that SUMOylation plays an important role in regulating UL80.5 functions and viral replication.
Introduction
Human cytomegalovirus (HCMV), a member of the herpesvirus family, is a widespread pathogen that affects 70-90% of the general population and can establish lifelong latent infection [1]. Although HCMV infection is generally asymptomatic or mild in immunecompetent hosts, it can be life-threatening and cause severe disease complications in immune-compromised hosts [2]. Indeed, HCMV infection is a leading viral cause of congenital abnormalities, intellectual disabilities, and cerebral palsy in newborns [3,4].
HCMV exhibits a characteristic temporal cascade of gene expression with immediateearly (IE), early (E), and late (L) phases [5,6]. UL80.5 as a late-phase protein plays a crucial role in HCMV capsid assembly [7]. Capsid assembly of HCMV is initiated by UL80.5 forming a complex with MCP (UL86) and UL80 in the cytoplasm [8,9]. UL80.5 has two important binding domains, including the CCD domain where UL80.5 interacts with UL86 and the ACD domain where UL80.5 interacts with itself or UL80 [10,11]. By forming a complex, UL80.5 provides the nuclear localization sequences that UL86 lacks and translocates it into the nucleus [12,13]. Once inside the nucleus, UL80.5 further associates with itself and UL80, causing UL86 protomers to coalesce with themselves and other capsid subunits to form a capsid scaffold [14]. This internal scaffold then interacts with the triplex that formed between MnCP (UL85) and MnCP-bp (UL46), leading to the formation of procapsid. Although the scaffold plays a central role in capsid assembly, no scaffolding protein is found within the mature capsid or the virion. Finally, UL80.5 and UL80 are cleaved at the M site and eliminated to make room for viral genome [15,16]. In fact, only a small fraction of the assembled procapsids accomplish the elimination of scaffolding structures and are filled with viral genomic DNA, which becomes infectious C-capsids [17].
sub-cloned into pCMV-Myc vectors. The constructs that contained the sequences that encoded different UL80.5 (K-R) mutants were generated by a Fast Mutagenesis kit (Transgen Biotech, Beijing, China) using pCMV-HA-UL80.5 as the template with primers that contained appropriate nucleotide change (Table 1). There were a total of ten lysine residues that individually mutated to arginine which were separately called K41R, K163R, K175R, K178R, K205R, K208R, K242R, K296R, K355R, and K371R. The construct that contained the sequences encoding the UBC9 (C93S) mutant was also generated by a Fast Mutagenesis kit using pRK11-FLAG-UBC9 as the template.
Y2H Analysis
Protein interactions were analyzed using GAL4 fusion proteins in a yeast two-hybrid system. Saccharomyces cerevisiae strain AH109 and control vectors pGADT7, pGADT7-T, pGBKT7-p53, and pGBKT7-Lam were purchased from Clontech. The AH109 yeast strain was transformed with the bait plasmid pGBK-UL80.5 (in fusion with GAL4-BD) and subsequently transformed with selected expression clones pGAD-UBC9 and UBC9 deletion mutant series (in fusion with GAL4-AD). Positive clones were selected on a synthetic dropout medium that lacked four nutrients, including tryptophan, leucine, adenine, and histidine (QDO), and were tested for β-galactosidase activity.
The denatured polypeptides were separated by electrophoresis in 4% to 10% polyacrylamide gradient gels which contained SDS (SDS-PAGE) and transferred electrically to nitrocellulose membranes. The detection of tagged proteins was performed using incubation with anti-Myc, anti-HA, or anti-Flag antibodies and IgG antibody conjugated with horseradish peroxidase as the secondary reagent to visualize bound antibodies. The membranes were subsequently stained with a chemiluminescent substrate with the aid of ECL Western blotting detection reagent kits (GE Healthcare, Chicago, IL, USA) and quantitated using a phosphorimager.
Lentivirus Production and Infection
We used the 3xFlag sequence to replace the GFP sequence in the pLenti CMV GFP Puro vector (Addgene, 658-5) and added some restriction enzyme cutting sites (XbaI-EcoRV-BstBI-BamHI) before the 3xFlag tag. Then, the pLenti vector encoding the 3xFlag-SUMO1 were transfected into HEK293T cells together with psPAX2 and pMD2.G with a ratio of 4:3:1. Culture supernatants were harvested 36 h and 60 h after transfection. U251 cells were infected with supernatants that contained lentiviral particles in the presence of 4 µg/mL polybrene (Merck). After 48 h of culture, stable transduced cells were selected with 2 µg/mL puromycin (Merck).
Virus Yield Assay
HFF cells seeded in 6-well plates were infected with HCMV at a multiplicity of infection (MOI) of 0.02 or 0.01 in a 200 µL DMEM inoculum and replaced with DMEM supplemented with 10% fetal bovine serum after 2 h of incubation with cells. To determine the level of viral growth, the cells and medium were harvested at desired days postinfection and stored at −80 • C until all samples were collected. The titers of the viral stocks were determined by plaque assay or quantitative real-time PCR (qPCR).
HFF cells were infected with the appropriate virus (WT, UL80.5(K371R)-Zeo, and UL80.5WT-Zeo) at an MOI of 0.02, and titers of viral stocks were determined by plaque assay. HFF cells in 24-well plates were infected with stock preparations serial 10-fold dilutions (from 10 −1 to 10 −6 ). After 2 h of incubation, warmed DMEM+ 1% agarose were added into these plates. Infected cells were monitored by fluorescence microscopy for the expression of the GFP marker, and the virus titer was calculated as the average number of foci expressing GFP (i.e., plaques) per well (in triplicate) multiplied by the dilution factor (i.e., PFU per ml). HFF cells were infected with the appropriate virus (UL80.5(K371R)-Zeo and UL80.5WT-Zeo) at an MOI of 0.01, and the total DNA of the viral stocks for examination was extracted using the tissue DNA Kit (Omega, Norcross, GA, USA). The levels of viral DNA (UL83 gene) were then determined by qPCR and normalized to the cellular β-actin gene copies.
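The titer arithmetic described above (mean GFP-positive foci per well multiplied by the dilution factor, and viral DNA expressed as UL83 copies per copy of β-actin) can be written out as a short calculation. The plaque counts, dilution and the optional inoculum-volume term below are illustrative assumptions, not values from this study.

import statistics

def titer_pfu_per_ml(plaque_counts, dilution_factor, inoculum_ml=1.0):
    """Plaque-assay titer: mean plaques per well x dilution factor / inoculum volume.

    The paper states titer = mean GFP+ foci per well x dilution factor; the
    inoculum_ml term is included here only for generality and defaults to 1 mL.
    """
    return statistics.mean(plaque_counts) * dilution_factor / inoculum_ml

def normalized_viral_dna(ul83_copies, beta_actin_copies):
    """qPCR readout expressed as UL83 copies per copy of cellular beta-actin."""
    return ul83_copies / beta_actin_copies

# Hypothetical triplicate counts at the 10^-4 dilution.
print(titer_pfu_per_ml([23, 19, 21], dilution_factor=1e4))   # ~2.1e5 PFU/mL
print(normalized_viral_dna(ul83_copies=4.2e6, beta_actin_copies=1.5e5))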
UL80.5 Interacts and Co-Localizes with UBC9
To identify the cellular interaction partners of the UL80.5 protein, we performed the yeast two-hybrid system to screen the cDNA library created from human brain using UL80.5 as the bait. After sequencing, one clone that UL80.5 specifically interacted with showed the highest homolog to UBC9, which is an E2 conjugating enzyme in the SUMOylation process. To confirm this interaction in human cells, we performed Co-IP experiments in 293T cells and found that UL80.5 interacted with UBC9 in vivo ( Figure 1A). Further, the results of the confocal assay indicated that UL80.5 strongly co-localized with UBC9 in the cell nucleus ( Figure 1B). Though UL80.5 interacted and co-localized with UBC9, whether it got involved in the SUMOylation cascade and what role it played remained unknown. Therefore, we wanted to identify the region of UBC9 where it interacted with UL80.5. The E2 enzyme UBC9 is highly conserved from yeast to human and has a conserved 158-residue Though UL80.5 interacted and co-localized with UBC9, whether it got involved in the SUMOylation cascade and what role it played remained unknown. Therefore, we wanted to identify the region of UBC9 where it interacted with UL80.5. The E2 enzyme UBC9 is highly conserved from yeast to human and has a conserved 158-residue αββββββααα motif named as ubc superfold. As shown in Figure 1C, the truncated protein UBC9-C3(1-58) and UBC9-N1(93-158), which lacked β4-β6 regions, did not interact with UL80.5; additionally, the truncated proteins UBC9-C2(57-158), N2(1-94), and N3(1-102), together with fulllength UBC9 and the mutant protein UBC9 (C93S), showed interaction with UL80.5 and all possess β4-β6 regions; and the control Y2H experiments showed that none of the truncated UBC9 proteins activated transcription by themselves. The results demonstrated that UBC9 interacted with UL80.5 mainly through the β4-β6 regions ( Figure 1D). Former research reported that the interface between UBC9 and the SUMO-E1 is region α1 and the common binding domain for UBC9-E3/substrate interaction is the region below the α1 helix [35]. As the binding domain between UBC9 and UL80.5 is located in the β4-β6 region, we assume that UL80.5 possibly acts as an E3/substrate in SUMOylation cascade.
SUMOylation Site of UL80.5 Is K371
Next, we sought to identify the SUMO1 acceptor sites of UL80.5. Since SUMOylation often occurs at lysine residues of substrates within a ΨKxE/D motif, a prediction analysis with the SUMOsp 2.0 SUMOylation Site Prediction program (http://SUMOsp.biocuckoo.org/index.php, accessed on 1 January 2023) was performed and identified two putative SUMOylation sites: K371 and K242. However, the SUMOylation sites could also be other lysine residues, as reported previously for the HCMV UL44 protein [34,36]. To test whether one or more of these lysine residues could be SUMO1 acceptor sites, each of the 10 lysine residues was conservatively and individually mutated to arginine. These lysine residues are located in several important domains of UL80.5 (Figure 3B), such as the ACD/CCD domains and the nuclear localization signals (NLS1 and NLS2). The lysine mutants were then individually tested to determine whether they could be modified by SUMO1 in 293T cells. The results showed that the K371R mutation blocked the SUMOylation of UL80.5, as the bands corresponding to SUMOylated UL80.5 (70 kDa, 90 kDa, and 110 kDa) were not detected (Figure 3A, lane 2), while the other K/R substitutions showed modification patterns identical to wild-type UL80.5 (Figure 3A, lane 1 and lanes 3-11). To further confirm that SUMOylation was responsible for the observed migration change of UL80.5, we compared HEK293T cells transfected with HA-UL80.5/K371R/K163R in the presence or absence of UBC9 or SUMO1, and a Co-IP assay was performed with an HA antibody. In the absence of SUMO1 and UBC9, no SUMO1-conjugated UL80.5 was immunoprecipitated and detected (Figure 3C, lane 5); with SUMO1 present, SUMO1-conjugated UL80.5 was immunoprecipitated (Figure 3C, lane 1); with SUMO1 and UBC9 present, more SUMO1-conjugated UL80.5 was immunoprecipitated (Figure 3C, lane 2); with SUMO1 and UBC9 present, the same amount of SUMO1-conjugated K163R was immunoprecipitated (Figure 3C, lane 4); and nearly no SUMO1-conjugated K371R was immunoprecipitated (Figure 3C, lane 3). These results suggested that UL80.5 possesses lysine residue K371 as its major SUMO1 acceptor site.
In a separate set of experiments, SUMO-conjugated forms of UL80.5 (70 kDa and 90 kDa) were immunoprecipitated and detected in the presence of SUMO1/2/3 (Figure 2A, lanes 2, 4, 6) and were further increased in the presence of SUMO1/2/3 plus UBC9 (Figure 2A, lanes 3, 5, 7), which indicates that UL80.5 can be SUMOylated by SUMO1/2/3 proteins. Moreover, SUMO2/3-conjugated UL80.5 (Figure 2A, lanes 4-7) was detected at much lower levels than SUMO1-conjugated UL80.5 (Figure 2A, lanes 2 and 3). These data suggested that UL80.5 is SUMOylated by SUMO1/SUMO2/SUMO3 via UBC9 in 293T cells and that the SUMO1 modification is the strongest. Furthermore, the results of the confocal assay indicated that UL80.5 strongly co-localized with SUMO1 in the cell nucleus (Figure 2B). Notably, UL80.5 and SUMO1 were mainly distributed as spots in the cell nucleus, and the co-localization was also mainly observed in those spots.
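The ΨKxE/D consensus scan mentioned above can be illustrated with a simple regular-expression search, where Ψ stands for a large hydrophobic residue. This is only a sketch of the motif logic, not a reimplementation of the SUMOsp 2.0 predictor, and the example sequence is an invented fragment rather than the real UL80.5 sequence.

import re

# Psi-K-x-E/D: large hydrophobic residue, lysine, any residue, acidic residue.
CONSENSUS = re.compile(r"[AVILMFWP]K.[ED]")

def sumo_consensus_sites(protein_seq):
    """Return (1-based position of the lysine, matched motif) for each hit."""
    return [(m.start() + 2, m.group()) for m in CONSENSUS.finditer(protein_seq)]

# Made-up fragment containing one consensus motif (VKSE), for illustration only.
example = "MAASTRQPLVKSEGGLDERK"
print(sumo_consensus_sites(example))   # [(11, 'VKSE')]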
SUMOylation of UL80.5 Restrains Its Interaction with UL86
Since the major SUMOylation site of UL80.5, lysine K371, lies in the CCD domain (the UL86-binding domain), we wanted to determine whether this SUMOylation affected the interaction with UL86 (MCP). Competitive Co-IP experiments in 293T cells were performed by co-transfection of HA-UL80.5/K371R and Myc-UL86 in the presence or absence of UBC9 or UBC9(C93S), a mutant in which the E2 enzyme activity is abolished. The results showed that UL86 was co-precipitated with UL80.5 (Figure 4A, lane 1); the amount of co-precipitated UL86 decreased with the overexpression of UBC9 (Figure 4A, lane 2 vs. lane 1); the amount of co-precipitated UL86 stayed the same with the overexpression of UBC9(C93S) (Figure 4A, lane 4 vs. lane 1); and the K371R mutation blocked the SUMOylation of UL80.5 and restored the amount of co-precipitated UL86 (Figure 4A, lane 3 vs. lane 2). Next, we explored whether SUMO1, as an enhancer of the SUMOylation system, affected the interaction of UL80.5 with UL86. The results of the Co-IP assay in 293T cells demonstrated that SUMO1 overexpression inhibited the binding of UL80.5 to UL86 (Figure 4B, lane 2 vs. lane 3), as did UBC9 overexpression (Figure 4B, lane 1 vs. lane 3). In addition, UBC9 had a stronger inhibitory effect than SUMO1, as the amount of co-precipitated UL86 was much lower. In conclusion, these results demonstrated that the SUMOylation of UL80.5 restrains its interaction with UL86.
SUMOylation of UL80.5 Had No Effect on Translocating UL86 into Cell Nucleus
UL80.5 interacts with UL86 and provides the NLS that UL86 lacks, translocating it into the cell nucleus in the initial assembly process [12]. Since the SUMOylation of UL80.5 inhibits the interaction between UL80.5 and UL86, we wanted to determine whether SUMOylation affected the ability of UL80.5 to translocate UL86. The confocal assay data showed that when UL80.5 or K371R was expressed individually, it was mainly distributed as spots in the cell nucleus (Figure 5A(e-l)); when UL86 was expressed individually, it was distributed in the cytoplasm (Figure 5A(a-d)); when UL80.5 was co-expressed with UL86, UL86 was translocated into the cell nucleus and strong co-localization in spots was observed (Figure 5A(m-p)); and the SUMOylation-site mutant K371R, although blocked in SUMOylation, showed the same strong spot-like co-localization (Figure 5A(q-t)). Furthermore, the role of SUMO1 as an enhancer of UL80.5 SUMOylation was evaluated in stable cell lines in which U251 cells were infected with a lentivirus expressing flag-SUMO1 (Lentivirus-flag-SUMO1) and selected with puromycin. Indirect immunofluorescence data showed that pLenti-flag-SUMO1 cells stably expressed flag-SUMO1 (Figure 5B). UL80.5 and UL86 were co-expressed in pLenti-flag-SUMO1 or pLenti-CT U251 cells, and the same strong spot-like co-localization of UL80.5 and UL86 was observed in the confocal assay (Figure 5C). These experiments demonstrated that the SUMOylation of UL80.5 affects neither its localization in the cell nucleus nor its ability to translocate UL86 into the nucleus.
Mutation of UL80.5 371 lysine Attenuated Progeny Production of HCMV Infection in HFFs
We further examined whether the SUMOylation of UL80.5 played a role in viral replication. Since other HCMV proteins (e.g., IE1 and IE2) can be SUMOylated and influence viral replication, instead of over-expressing SUMO1, we used mutant viruses which allowed us to directly dissect the SUMO effects on UL80.5. To make an HCMV virus containing UL80.5-K371R single-site mutation, Zeocin cassette was introduced following the UL80.5 loci on the Towne-BAC for the recombination selection purpose ( Figure 6A). Multi-step viral growth curve experiments were performed using wild type (WT), UL80.5(K371R)-Zeo mutant and UL80.5WT-Zeo viruses in HFF cells and examined by plaque assay. The inoculum titers are presented in Figure 6B as virus yields on day zero and show that similar amounts of the virus were used. The UL80.5(K371R)-Zeo mutant showed about 2-fold-decreased virus yields at 5 dpi compared to those of wt or UL80.5WT-Zeo viruses, and the trend extended to about 10-fold at 9 dpi ( Figure 6B). A similar trend was found in viral replication examined by QPCR for viral DNA synthesis as the UL80.5(K371R)-Zeo mutant showed a decreased amount of viral DNA compared with the UL80.5WT-Zeo virus ( Figure 6C). The amount of viral DNA was represented as copies of the viral gene UL83 per copy of the cellular β-action gene. Together, these results suggested that the removal of 371 lysine SUMOylation site of UL80.5 inhibited viral production. Consequently, the SUMOylation of UL80.5 had a positive effect on HCMV replication.
Discussion
UL80.5 is an essential gene of HCMV and plays a key role in viral capsid assembly [7,37]. To better understand the biological function of UL80.5, we performed a yeast two-hybrid screen, and the most frequently isolated UL80.5-interacting protein was the SUMO conjugating enzyme UBC9. In this study, we first reported that UL80.5 is a novel target of SUMOylation and could be effectively conjugated with SUMO1, SUMO2, and SUMO3. The conjugation levels of UL80.5 by SUMO1, SUMO2, and SUMO3 proteins were not the same as SUMO1 modification was the strongest. Moreover, several additional higher molecular weight UL80.5 species were detected, suggesting that UL80.5 could be multi-SUMOylated. In mammalian cells, some substrates of SUMOylation were capable of being modified by either SUMO1 or SUMO2/3, and other substrates showed a clear preference for a particular SUMO type [38,39]. Until recently, there were no obvious differences in protein functional types that were modified by SUMO1 versus SUMO2/3. Therefore, we choose SUMO1 modification as a model for our research.
Full length UL80.5 contains 10 lysine residues. A prediction sequence analysis identified the most putative SUMOylation sites, including K371 (middle-possibility) and K242 (low-possibility). We confirmed that the major SUMOylation site of UL80.5 is K371 in the CCD domain. The CCD domain is highly conserved and essential for the interaction between UL80.5 and UL86. Our study demonstrates that the SUMOylation of UL80.5 inhibited its interaction with UL86. The interaction between UL80.5 and UL86 is essential for the process of capsid assembly and formation. First, UL86 needs to be translocated into the nucleus by its interaction with UL80.5; next, the formation of the procapsid shell needs the interaction between UL86 and UL80.5 to coalesce capsid subunits; finally, UL80.5 and UL80 need to cleave themselves through the M site and break their interaction with UL86 to make room for viral DNA. Confocal assay results demonstrated that a lack of SUMOylation of UL80.5 also translocated UL86 into the cell nucleus, which indicated that the SUMOylation did not affect the first step. In addition, we also found UL80.5 co-localized with SUMO1 as spots in the nucleus. It is possible that the SUMO modification of UL80.5 mainly occurred in the nucleus and created no effect on translocating UL86 from the cell cytoplasm into the nucleus.
Past research about SUMOylated viral structural proteins have reported that SUMOylation could have negative or positive impacts on viral assembly and reproduction. For example, the SUMOylation of HIV p6 protein has a negative impact and correlates with reduced viral reproduction through an unknown mechanism [40]. However, for the M1 protein of the influenza A virus (IAV), the SUMOylation of M1 plays a critical role and facilitates viral assembly, and viruses carrying SUMO-deficient M1 produce a lower viral titer [41]. There are two possible theories of positive or negative impacts that exist for the SUMOylation of UL80.5. To confirm whether the SUMOylation of UL80.5 has positive or negative impact, we performed viral growth curve experiments and found out that the SUMOylation site mutant virus (K371R mutant virus) showed restrained viral replication. We hypothesized that the SUMOylation of UL80.5 inhibited the interaction between UL80.5 and UL86, which may promote the removal of UL80.5 from the procapsid to make room for viral DNA (the final step), leading to the completion of capsid assembly and viral multiplication. Nevertheless, there is no direct evidence regarding how the SUMOylation of UL80.5 affected capsid assembly; therefore, this assumption needs more evidence based on future studies.
CD4+ T cells reactive to enteric bacterial antigens in spontaneously colitic C3H/HeJBir mice: increased T helper cell type 1 response and ability to transfer disease.
C3H/HeJBir mice are a new substrain that spontaneously develop colitis early in life. This study was done to determine the T cell reactivity of C3H/HeJBir mice to candidate antigens that might be involved in their disease. C3H/HeJBir CD4+ T cells were strongly reactive to antigens of the enteric bacterial flora, but not to epithelial or food antigens. The stimulatory material in the enteric bacteria was trypsin sensitive and restricted by class II major histocompatibility complex molecules, but did not have the properties of a superantigen. The precursor frequency of interleukin (IL)-2-producing, bacterial-reactive CD4+ T cells in colitic mice was 1 out of 2,000 compared to 1 out of 20,000-25,000 in noncolitic control mice. These T cells produced predominantly IL-2 and interferon gamma, consistent with a T helper type 1 cell response, and were present at 3-4 wk, the age of onset of the colitis. Adoptive transfer of bacterial-antigen-activated CD4+ T cells from colitic C3H/HeJBir but not from control C3H/HeJ mice into C3H/HeSnJ scid/scid recipients induced colitis. These data represent a direct demonstration that T cells reactive with conventional antigens of the enteric bacterial flora can mediate chronic inflammatory bowel disease.
T he inflammatory bowel diseases (IBD) 1 , encompassing Crohn's disease and ulcerative colitis, are complex chronic inflammatory diseases of the intestine whose etiology and pathogenesis remain unknown. There are multiple etiologic theories, one of which is that a dysregulated CD4 T cell response to the abundant antigens in the lumen may be responsible (1,2). This hypothesis is based on theoretical grounds and there is only limited supporting data in humans as yet (3,4). However, support for this hypothesis has come from the results of studies done in a number of recently developed experimental models of IBD, some of which have been the unexpected result of gene deletions by selective gene targeting (5)(6)(7)(8)(9). In a number of such models, CD4 ϩ T cells have been found to mediate colitis, and most commonly this has involved an exaggerated Th1 response manifested by excessive IFN-␥ production in the lesions (10)(11)(12). The localization of inflammatory disease to the colon of mice that have global deficiencies of an immune molecule suggests that the bacterial flora is the major immune stimulant leading to chronic intestinal inflammation. Indeed, in some models, animals that are raised germ-free no longer develop colitis (5,13), and in others rederivation with a defined flora (6,12) or antibiotic treatment (14) ameliorates the disease. Moreover, reconstitution of intestinal bacteria into germ-free animals can restore intestinal inflammation (15). However, it has remained unclear how the bacterial flora generates chronic intestinal inflammation.
Humans with IBD do not have absolute deficiencies of the immune molecules whose deletion in mice has resulted in colitis. For this reason, we have derived and studied a new strain of mice which develop colitis spontaneously, namely the C3H/HeJBir strain (16). C3H/HeJBir mice develop a predominantly right-sided colitis early in life that largely resolves by 3 mo of age. Previous studies on the immunopathogenesis of disease in this mouse strain have found that C3H/HeJBir mice, but not mice of the parenteral C3H/HeJ strain, have high titer serum IgG antibodies to a selected subset of antigens of the enteric bacte-
Preparation of Antigens
Enteric Bacterial Antigens. C3H/HeJ or C3H/HeJBir mice were killed and their cecums were removed. The cecums were opened and placed in 1 ml of PBS. The cecal bacteria were expelled by mixing with a vortex, and residual cecal tissue was removed. After addition of DNAse (10 g/ml), 1 ml of this bacterial suspension was added to 1 ml of glass beads. The cells were disrupted at 5,000 revolutions per minute in a Mini-Bead Beater (BioSpec Products, Bartlesville, OK) for 3 min and then iced. The glass beads and unlysed cells were removed by centrifuging at 5,000 g for 5 min. The lysates were filter sterilized by a 0.2 micron syringe filter. Cultured bacteria were processed in a similar manner. In most experiments, cecal bacteria were obtained from normal C3H/HeJ mice.
For some experiments, as indicated in text, the lysates of cecal bacteria were digested with trypsin. The trypsin was added into cecal bacterial protein preparation at a 1:50 (wt/wt) ratio and was incubated for 24 h. The mixture was then dialyzed in a 12,000-kD tube against PBS for 24 h, then used to pulse APCs for use in the T cell cultures.
Enteric Bacterial Isolates. Multiple bacterial strains were obtained from three different institutions: Escherichia coli and Eubacterium species were purchased from ATCC; Bacteroides vulgatus was provided by Dr. T. Ohkusa (Tokyo Medical and Dental University, Tokyo, Japan); and Proteus mirabilis was isolated from a C3H/ HeJBir mouse by Dr. K. Waites (University of Alabama, Birmingham, AL). All of these strains were either previously well characterized (ATCC) or were identified by standard biochemical tests.
Epithelial Cell Proteins and Food Antigens. Protein extracts were made from a murine C3H/He-derived intestinal epithelial cell line, Mode-K, which was provided by Dr. Dominique Kaiserlian (Institut Pasteur, Lyon, France) or from the MCA-38 murine colon adenocarcinoma cell line, which was the gift of Dr. Barbara Barna (Cleveland Clinic, Cleveland, OH). The cells were washed three times with PBS, and suspended in 1 ml of 5 mM MgCl 2 with 2 mM PMSF and 10 mM Tris-Cl, and then lysed by freezing (in dry ice/ethanol) and thawing three times. The lysate generated by this treatment was centrifuged at 16,000 g for 30 min. The supernatant was filter sterilized and immediately used.
Murine chow (Agway Pro Lab RMH 1000, Agway, Inc., Syracuse, NY) was prepared in PBS with homogenization in the same manner as the cecal bacteria.
Isolation of CD4 ϩ T Cells and APCs
C3H/HeJBir or C3H/HeJ mouse spleen and mesenteric lymph nodes were removed and placed into cell suspensions by straining through a small mesh sieve. After two washes, the cells were passed through a nylon wool column as previously described (18). The column-passed cells were washed twice and treated with anti-CD8 antibody (TIB 211; ATCC) supernatant (1 ml supernatant/10 7 cells) for 30 min on ice. After washing three times, magnetic beads coated with anti-rat IgG were added to the cells (BioMag, Cambridge, MA) and incubated for 30 min on ice. After passing through a magnet, CD4 ϩ T cells were collected and reconstituted at 4 ϫ 10 6 cells per ml in complete RPMI media containing 10% FCS for use in cell culture.
For APCs, spleen cells from syngeneic mice were prepared and treated with appropriate concentration of antigens as indicated at 2 ϫ 10 7 cells/5 ml in a 15-ml tube overnight at 37 Њ C. After washing twice, the cells were reconstituted at 4 ϫ 10 6 cells per ml in complete media containing RPMI 1640 10% FCS, 2 mM l -glutamine, 0.05 mM 2-ME, 100 U/ml penicillin, and 10 g/ ml streptomycin for use in cell culture. These APCs were irradiated with 3,000 rads before being added to T cell cultures.
Assay of Antigen-specific Proliferation of T Cells
Spleen and mesenteric lymph node (MLN) CD4 ϩ T cells were isolated as described above and 4 ϫ 10 5 cells/well were incubated in triplicate in the presence of 4 ϫ 10 5 antigen-pulsed APCs/well in wells of a 96-well flat-bottomed tissue culture plate (Falcon 3072, Lincoln Park, NJ) at 37 Њ C in 5% CO 2 humidified air. After different times of incubation as indicated in the text, 0.5 Ci of [ 3 H]-thymidine (New England Nuclear, Boston, MA) was added to each culture in the last 18 h of the incubation period. The cells were harvested on glass fiber filters on a PHD cell harvester (Cambridge Technology, Inc., Watertown, MA), washed with distilled water, and dried. Proliferation was assessed as the amount of incorporation of 3 H-thymidine into cell DNA, as measured by beta scintillation counting (Beckman Instruments, Palo Alto, CA) of the harvested samples and were expressed as cpm Ϯ SD.
Cytokine Assays
Spleen and MLN CD4 ϩ T cells were cultured in the presence of APCs pretreated with antigens as described above in complete media. The culture supernatants were collected at different times and pooled together for assay. The supernatants collected after 24 h of culture were used for IL-2 assay and the supernatants collected at 72 h of culture were used for IL-3, IL-4, and IFN-␥ assays. IL-2 was assessed by using the IL-2-dependent cell line HT-2AB (19), IL-3 by using the IL-3-dependent cell line FDC-1 (20), IL-4 by using the IL-4-dependent cell line CT-4S (21), and IFN-␥ by using the IFN-␥ -dependent cell line WEHI 279 (22) as previously described (23).
Precursor Frequency Analysis by Limiting Dilution
A modification of a previously described method (24) was used to determine the frequency of IL-2-producing, cecal bacterial antigen-specific cells. In brief, CD4+ T cells were isolated from spleen and MLN of C3H/HeJ, C3H/HeJBir or C57BL/6 mice, and graded numbers of CD4+ cells were plated on 4 × 10⁴ syngeneic, irradiated (3,000 rad) and cecal bacterial antigen-pulsed spleen cells in U-bottomed 96-well plates (Costar, Cambridge, MA) in a total volume of 50 µl. 24 wells were plated for each cell concentration; 12 control wells (no responder cells) were set up in each experimental plate. After incubation at 37°C for 5 d, the plates were irradiated (3,000 rad), 50 µl of HT-2 AB IL-2-dependent cells were added to each well, and incubation was continued for another 24 h with [3H]TdR added to each well for the last 6 h of culture. Individual microcultures were scored as positive if their incorporation of [3H]thymidine was >2 SD above the incorporation of control wells. Regression curves were constructed by plotting the number of responder cells per well versus the log of the percentage of negative wells. Frequencies were calculated on the regression curve by interpolating the number of responder cells required to give 37% negative cultures (corresponding to one precursor per well according to Poisson statistics).
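The 37% criterion follows from the Poisson single-hit model: the expected fraction of negative wells at a dose of n responder cells is F0(n) = exp(-f*n), so ln F0 is linear in n with slope -f, and F0 = e**-1 ≈ 0.37 corresponds on average to one precursor per well. The sketch below estimates f from invented negative-well counts; it mirrors the regression-and-interpolation procedure described above but is not the authors' code.

import numpy as np

# Cells plated per well and number of negative wells out of 24; synthetic values
# chosen to resemble a frequency of roughly 1 precursor per 2,000-2,500 cells.
cells_per_well = np.array([500, 1000, 2000, 4000, 8000])
negative_wells = np.array([19, 15, 9, 3, 1])
frac_negative = negative_wells / 24

# Poisson single-hit model: ln F0 = -f * n, so the slope of a zero-intercept
# fit of ln(fraction negative) against cells per well estimates -f.
slope = np.sum(cells_per_well * np.log(frac_negative)) / np.sum(cells_per_well ** 2)
frequency = -slope
print(f"precursor frequency ~ 1 / {1 / frequency:,.0f} CD4+ T cells")

# Equivalent interpolation used in the paper: the cell dose at which the expected
# fraction of negative wells is e**-1 (37%) corresponds to one precursor per well.
n37 = 1.0 / frequency
print(f"cells per well at 37% negative ~ {n37:,.0f}")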
Cell Transfer
For the fresh CD4+ T cell transfer group, 2 × 10^6 freshly prepared CD4+ T cells from spleen and MLN of C3H/HeJ or C3H/HeJBir mice were transferred intravenously into C3H/HeSnJ scid/scid recipients. Other CD4+ T cells isolated from spleen and MLN of C3H/HeJ or C3H/HeJBir mice were cultured with cecal bacterial antigen-pulsed and irradiated C3H/HeJ splenic cells in complete medium at 37°C for 4 d in 5% CO2 in air before transfer as above. 3 mo later, the recipients were killed, and the cecum and proximal, medial, and distal portions of colon were fixed in formalin. Fixed tissues were embedded in paraffin, and sections were stained with hematoxylin and eosin for histologic examination. All slides were read by an experienced pathologist (A. Lazenby) without knowledge of their origin.
Statistics
The results were expressed as the mean ± SD. The significance of the difference in means was determined by Student's t test.
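As a concrete illustration of this comparison, a two-sample Student's t test on two small groups of cpm values (Python; scipy is assumed to be available, and the numbers are invented):

```python
from statistics import mean, stdev
from scipy import stats

group_a = [48210, 51530, 46990]   # e.g., antigen-stimulated (illustrative)
group_b = [5120, 4480, 6030]      # e.g., control (illustrative)

print(f"A: {mean(group_a):.0f} +/- {stdev(group_a):.0f} cpm")
print(f"B: {mean(group_b):.0f} +/- {stdev(group_b):.0f} cpm")
t, p = stats.ttest_ind(group_a, group_b)   # classical (equal-variance) t test
print(f"t = {t:.2f}, P = {p:.4f}")
```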
Results
C3H/HeJBir CD4+ T Cells Respond to Enteric Bacterial but not Food or Epithelial Cell Antigens. The focus of disease in C3H/HeJBir mice is the cecum and right colon, which contain a variety of antigens from bacteria, food, and shed epithelial cells. We tested whether C3H/HeJBir CD4+ T cells respond to antigens from any of these sources. Extracts of murine chow were used as food antigens, and an extract of the Mode-K intestinal epithelial cell line was used as a source of epithelial cell antigens. As shown in Fig. 1, the C3H/HeJBir CD4+ T cells did not respond to stimulation with food antigens or epithelial cell antigens, but did respond strongly to stimulation by APCs pulsed with lysates of cecal bacteria (CBA). When various doses of food or epithelial cell antigens (from 0.2 to 1,000 μg/ml) were used in the stimulation, C3H/HeJBir T cells still did not respond (data not shown). C3H/HeJBir T cells did not respond either to epithelial cell antigens derived from MCA-38 cells or from primary isolates of C3H/HeJBir small intestinal or colon epithelial cells (data not shown). C3H/HeJBir T cell reactivity was found to bulk cultures of cecal bacteria, although the level of stimulation was generally lower than to freshly obtained cecal bacteria (data not shown). Control T cells from normal C3H/HeJ mice did not respond at all to food or epithelial cell antigens (data not shown) and only weakly to CBA (see below).
Responses of C3H/HeJBir CD4+ T Cells to Cecal Bacterial Antigens. In further experiments, purified CD4+ T cells from spleen and MLN of C3H/HeJBir mice or from normal, noncolitic C3H/HeJ mice were stimulated with APCs previously pulsed with lysates of enteric bacteria. C3H/HeJBir CD4+ T cells proliferated strongly to APCs pulsed with lysates of cecal bacteria, whereas spleen and MLN CD4+ T cells of normal C3H/HeJ mice had a low proliferative response to the same APCs (Fig. 2 A). Similarly, a high level of IL-3 was produced by C3H/HeJBir CD4+ T cells stimulated with cecal bacterial antigens, whereas control C3H/HeJ CD4+ T cells produced low levels of IL-3 when cultured with cecal bacterial antigens (Fig. 2 B). There was no difference in the CD4+ T cell response whether the cecal bacterial antigens were derived from the colitic C3H/HeJBir or from the normal C3H/HeJ strain (data not shown); thus, in the following experiments the cecal bacterial antigens were derived from normal C3H/HeJ mice.
To determine whether the C3H/HeJBir CD4+ T cell response was dose dependent and whether the lower C3H/HeJ CD4+ T cell response was due to a difference in the dose response, a broad range of cecal bacterial antigen doses, from 0.2 to 1,000 μg/ml, was used to pulse APCs that were then cocultured with CD4+ T cells. CD4+ T cells of C3H/HeJBir mice had a proliferative response to cecal bacterial antigens starting from a dose of 20 μg/ml, and the response increased with increasing cecal bacterial antigen concentrations (Fig. 3 A). The CD4+ T cells of C3H/HeJ mice had low proliferative responses to all tested concentrations of cecal bacterial antigens (Fig. 3 B). When the kinetics of proliferation of CD4+ T cells to cecal bacterial antigens was studied by pulsing with [3H]thymidine at different times of culture from day 3 to day 5, the optimal response of the C3H/HeJBir CD4+ T cells to cecal bacterial antigens was at day 5 of culture; CD4+ T cells of normal C3H/HeJ mice had low proliferative responses at each time point (data not shown).
H-2 Restriction of the CD4+ T Cell Response to Cecal Bacterial Antigens. To determine whether the C3H/HeJBir CD4+ T cell response to cecal bacterial antigens was due to a mitogen or to MHC-T cell receptor signaling as in a conventional antigen-specific T cell response, monoclonal antibodies to I-A^k and I-A^b MHC class II molecules were added to the cultures of CD4+ T cells plus cecal bacterial antigen-pulsed APCs (Fig. 4). The addition of the relevant anti-H-2^k antibody to cultures blocked the T cell proliferative response, whereas the addition of the irrelevant control anti-H-2^b antibody had no significant effect (Fig. 4). These data plus the kinetics of the response are not compatible with a mitogenic effect but instead indicate that the C3H/HeJBir CD4+ T cell response to the cecal bacteria is due either to conventional antigen or possibly to a superantigen.
The Cecal Bacterial Lysate Component that Stimulates C3H/HeJBir CD4+ T Cells Is a Protein but Not a Superantigen. To further characterize the stimulatory component, lysates of cecal bacteria were treated with trypsin and then dialyzed, were dialyzed but not trypsin-treated, or were stored in a tube for an equivalent amount of time. Trypsin treatment significantly decreased the ability of cecal bacterial lysates to stimulate C3H/HeJBir CD4+ T cells, whereas the cecal bacterial lysates that were dialyzed only or were stored still strongly stimulated C3H/HeJBir CD4+ T cells to proliferate (Fig. 5). These results indicate that the stimulatory material in cecal bacteria is a trypsin-sensitive protein.
Many superantigens are derived from enteric bacteria and have been implicated in various immune-mediated diseases (25). To test whether the protein in cecal bacterial lysates that stimulated C3H/HeJBir CD4+ T cells was a superantigen, APCs were pulsed in different ways either with cecal bacterial antigens or with Staphylococcal enterotoxin B (SEB), a known superantigen, before their addition to C3H/HeJBir CD4+ T cells. When APCs were pulsed with SEB for 30 min and then cultured with C3H/HeJBir CD4+ T cells, the T cells responded strongly (Fig. 6). However, when APCs were pulsed with cecal bacterial lysates for 30 min and then added to C3H/HeJBir CD4+ T cells, there was no significant response. When APCs were fixed with paraformaldehyde and then pulsed with SEB overnight, CD4+ T cells still responded well. In contrast, when fixed APCs were pulsed with cecal bacterial lysates overnight, they failed to stimulate a significant C3H/HeJBir T cell response (Fig. 6). These results indicate that the proteins in the cecal bacterial preparation that stimulated C3H/HeJBir CD4+ T cells are not superantigens but rather conventional protein antigens.

Frequency of IL-2-producing CD4+ T Cells Specific to Cecal Bacterial Antigen. To further confirm that the stimulation of C3H/HeJBir CD4+ T cells by cecal bacterial lysates was not due to a superantigen, the precursor frequency of IL-2-producing C3H/HeJBir CD4+ T cells specific for cecal bacterial antigens was determined and compared to that of normal C3H/HeJ mice and C57BL/6J mice. The precursor frequency of IL-2-producing CD4+ T cells specific for cecal bacterial antigens in C3H/HeJBir mice was 1 in 2,000 CD4+ T cells, whereas that in normal C3H/HeJ or C57BL/6J mice was 10 times lower, i.e., 1 in 21,250 in C3H/HeJ and 1 in 25,000 in C57BL/6 mice.
In Vivo Kinetics of the C3H/HeJBir CD4+ T Cell Response to Cecal Bacterial Antigens. Because colitis in C3H/HeJBir mice peaks at 4-8 wk of age (16), the time course of the CD4+ T cell response to enteric bacteria was investigated. Spleen and MLN CD4+ T cells of C3H/HeJBir mice from 2 wk to 6 mo of age were stimulated with APCs pulsed with lysates of cecal bacteria. The C3H/HeJBir CD4+ T cells responded to cecal bacterial antigens beginning at 3 wk of age, and this response increased with age up to 12 wk, when it plateaued (Fig. 7). These data were derived using a mixture of MLN and spleen CD4+ T cells. When MLN and spleen CD4+ T cells were isolated and tested separately for reactivity to enteric bacterial antigens at 4, 5, 11, and 14 wk, the proliferative response of the MLN CD4+ T cells was greater than that of spleen at each time point (data not shown).
Production of High Levels of IL-2 and IFN-γ but Low Levels of IL-4 by C3H/HeJBir CD4+ T Cells Exposed to Cecal Bacterial Antigens.
To define the CD4+ T cell subset response, the cytokines produced upon stimulation by cecal bacterial antigen were measured. As shown in Table 1, CD4+ T cells from colitic C3H/HeJBir mice produced substantial amounts of IL-2 and IFN-γ upon stimulation with cecal bacterial antigens, whereas CD4+ T cells from normal, noncolitic C3H/HeJ mice produced only minimal amounts of cytokines. Of particular interest is that, in contrast to IL-2 and IFN-γ, CD4+ T cells from C3H/HeJBir colitic mice produced only low levels of IL-4, indicating a Th1-predominant response to enteric bacterial antigens.
Phenotype of C3H/HeJBir CD4+ T Cells Stimulated by Cecal Bacterial Antigens.
The surface phenotype of the C3H/HeJBir CD4+ T cells activated by cecal bacterial antigens was determined by flow cytometry. CD4+ T cells freshly isolated or stimulated with cecal bacterial antigens for 4 d were stained with antibodies to CD45RB, CD44, CD69, IL-2R, L-selectin, and integrin β7, and with antibodies to TCR Vβ families. Fresh CD4+ T cells expressed low levels of CD44, CD69, and IL-2R, and high levels of L-selectin. Upon stimulation with cecal bacterial antigens, the expression increased for CD44 (from 65 to 91%), CD69 (from 5.3 to 69%), and IL-2R (from 3.5 to 77%), whereas L-selectin expression decreased (from 83 to 59%). Approximately 85% of fresh CD4+ T cells expressed integrin β7.

C3H/HeJBir CD4+ T Cell Reactivity Toward Different Enteric Bacterial Species. The reactivity of CD4+ T cells from C3H/HeJBir mice to specific, cultured bacterial strains was assessed by their proliferative response and IL-3 production. CD4+ T cells from C3H/HeJBir mice proliferated significantly when cultured in the presence of E. coli, Bacteroides species, and Eubacteria species antigen-pulsed APCs (Fig. 8), and also had a high level of IL-3 production (data not shown), whereas CD4+ T cells of C3H/HeJ control mice had low responses to the same antigen-pulsed APCs when measured by either proliferation (Fig. 8) or IL-3 production (data not shown). Notably, the proliferative response of C3H/HeJBir CD4+ T cells to cultured bacterial antigens of these strains was significantly lower than that observed to freshly obtained cecal bacteria. There was no difference in CD4+ T cell proliferative responses to polyclonal Con A stimulation between C3H/HeJBir and C3H/HeJ mice (data not shown). When CD4+ T cell cytokine production was measured upon stimulation with APCs pulsed with lysates of E. coli, Bacteroides species, or Eubacteria species, C3H/HeJBir CD4+ T cells produced a high level of IL-2 and IFN-γ but a low level of IL-4 (Table 1 and data not shown), again indicating a Th1-predominant response.
Transfer of Colitis by Bacterial Antigen-activated CD4+ T Cells into C3H/HeSnJ scid/scid Recipients. To determine whether this CD4+ T cell response is a cause or a secondary effect of the colitis, CD4+ T cells isolated from spleen and MLN of normal C3H/HeJ mice or of colitic C3H/HeJBir mice were transferred separately into groups of four C3H/HeSnJ scid/scid recipients. 3 mo later, the recipients were killed and the histopathology of the cecum and the proximal, medial, and distal portions of colon was examined. These freshly isolated CD4+ T cells did not transfer disease, although both reconstituted gut lymphoid tissue histologically. However, when C3H/HeJBir CD4+ T cells were activated with cecal bacterial antigens in vitro for 4 d and then transferred to scid/scid mice, three out of four recipients developed colitis (Fig. 9). The lesions in the recipients were focal, similar to those that occur in the donor C3H/HeJBir mice. In contrast, none of the four recipients that received C3H/HeJ CD4+ T cells that were activated with cecal bacterial antigens in vitro for 4 d developed colitis. Lastly, transfer of C3H/HeJBir CD4+ T cells activated in vitro with the polyclonal activator, monoclonal anti-CD3, to four scid/scid mice did not result in colitis in any of the four recipients.
Discussion
C3H/HeJBir mice represent a new strain that develops colitis spontaneously (16). The cecum and right colon are most affected; no inflammation is seen in the small intestine or other organs. Disease occurs early in life at 3-4 wk (a time corresponding to bacterial colonization), peaks 2-4 wk later, and heals by 10-12 wk. This strain was generated by a program of selective breeding to capture a trait, namely perianal ulceration and colitis, that had been occurring sporadically in the C3H/HeJ strain at the Jackson Laboratory. Subsequently it has been found that C3H/HeJ mice are quite susceptible in a number of models of chronic intestinal inflammation, and there is preliminary evidence that there are a number of susceptibility genes present in the parental C3H/HeJ strain that makes this strain prone to chronic intestinal inflammation (26).
C3H/HeJBir mice have demonstrated surprisingly selective antibody reactivity to antigens of the bacterial flora in a previous study (17). In these studies, Western blot analysis was performed using sera from C3H/HeJBir mice to detect possible stimulatory antigens of epithelial cells, food, or enteric bacteria. Although no reactivity was found to the former two, a number of trypsin-sensitive bacterial antigens were detected by serum antibody. This antibody response was highly selective when one compared the number of proteins present to those detected by colitic sera. For the most abundant anaerobic bacterial species, only one to three proteins were recognized out of the thousands present.
These data indicate that even in inflamed mucosa the responsiveness of the immune system toward enteric bacterial antigens is restricted. The mechanism of this selectivity is unknown.
Our studies were initiated to determine the reactivity of T cells to epithelial, food, or bacterial antigens. Again, no reactivity was found to either epithelial or food antigens, whereas there was striking reactivity to enteric bacterial antigens. These studies used the strategy of isolating fresh cecal bacteria from C3H/HeJ donors, because these are the organisms present at the site of disease, because not all organisms of the enteric flora can be cultured, and because culture conditions can change gene and thus antigen expression. This may explain why T cell responses to lysates of bulk anaerobic cultures of cecal bacteria were generally lower than those obtained with freshly isolated bacteria. The kinetics of the in vitro response to these cecal bacterial antigens, the dose response, and the inhibition by antibodies to class II MHC all indicate an antigenic rather than a mitogenic effect. Increased T cell responses could be detected to multiple specific enteric bacterial strains, including both anaerobes and aerobes such as E. coli; however, the level of proliferative response was consistently lower than that found with the crude, freshly isolated cecal bacterial antigen preparation. Although the specific bacterial antigens stimulating T cells were not identified here, C3H/HeJBir T cells appear to be responding to many of the same antigens as do their B cells, because solid-phase absorption of cecal bacterial antigens with C3H/HeJBir serum antibody before pulsing APCs for the T cell cultures substantially reduced the subsequent proliferative response (data not shown). In a previous study (17), serum antibody from C3H/HeJBir mice detected an increasing number of bacterial antigens on Western blots over time, compatible with epitope spreading. The same may occur in the CD4+ T cell compartment, because there is a progressive increase in the T cell proliferative responses over the first 12 wk of life (Fig. 7). Lastly, certain bacterial lipids are known to be presented to T cells via CD1 molecules (27); however, the stimulatory material in cecal bacteria was trypsin-sensitive, indicating that it was a protein or glycoprotein and not lipid.
Superantigens derived from enteric bacteria have been implicated in various immune-mediated diseases (25). Thus, the possibility that the cecal protein stimulating C3H/HeJBir T cells might be a superantigen was explored. The cecal bacterial extracts were directly compared to a known superantigen, SEB, by using short antigen pulses of APCs or by pulsing paraformaldehyde-fixed APCs. Under both conditions, SEB was quite stimulatory, whereas the cecal bacterial extract did not stimulate. In addition, we measured the precursor frequency of CBA-reactive T cells in C3H/HeJBir mice and compared it to that of the C3H/HeJ parent strain and the C57BL/6 strain. The frequency of antigen-reactive CD4 T cells producing IL-2 when stimulated with CBA-pulsed APCs was not consistent with a superantigen, but was in a range consistent with a response to conventional antigen. There was a 10-fold increase in C3H/HeJBir T cells compared to those from either C3H/HeJ or C57BL/6 mice. The latter both had high frequencies compared to the T cell response of naive mice toward a conventional exogenous protein antigen, consistent with low-level in vivo priming of T cells by enteric bacteria.
To assess the phenotypic subset of the CD4+ T cells reactive to cecal bacterial antigen, the cytokines produced by CD4 T cells from either C3H/HeJBir or C3H/HeJ mice stimulated in vitro with antigen-pulsed APCs were measured. The production of IL-2 and IFN-γ by C3H/HeJBir CD4+ T cells was 10-20-fold higher than that of the noncolitic parental strain CD4+ T cells. Although there was a significant increase in IL-4 as well, the overall pattern was predominantly Th1. The cytokine pattern when E. coli antigens were used was also predominantly Th1. Interestingly, although the cytokine response of C3H/HeJBir CD4+ T cells to cecal bacterial and E. coli antigens was comparable, the CD4+ T cell proliferative response was substantially lower to E. coli than to cecal bacteria. We postulate that this might be due to an antigen dosage effect in vivo, in that E. coli is a minor constituent of the bacterial flora. Thus, the Th1 subset is implicated in the C3H/HeJBir mice just as it has been in a number of other models (10-12).

Figure 9. Histopathology of the colon of C3H/HeSnJ scid/scid mice that had been injected with CD4+ T cells from either C3H/HeJBir or C3H/HeJ mice 3 mo before. Before the transfer, CD4+ T cells from both sources were activated for 4 d with cecal bacterial antigen-pulsed APCs. Three months after transfer, the colon of the recipient of the activated C3H/HeJ T cells (left panel) shows normal mucosa overlying a lymphoid follicle, whereas the colon of the recipient of activated C3H/HeJBir T cells (right panel) shows inflammation and a focal ulcer.
The question remained as to the significance of the T cell reactivity to these bacterial antigens, because it seemed possible that the CD4+ T cell responses might be a result rather than a cause of colitis. Thus the kinetics of appearance of this T cell reactivity was defined. Increased T cell reactivity to cecal bacterial antigens could be demonstrated as early as 3-4 wk of age, which is the time of onset of colitis in these mice. In addition, adoptive transfer studies showed that when C3H/HeJBir CD4+ T cells were activated with CBA-pulsed APCs in vitro and then transferred into C3H/HeSnJ scid/scid recipients, three out of four developed colitis. In contrast, none of the four recipients of similarly treated CD4+ T cells from the C3H/HeJ strain developed disease, nor did any of the recipients of anti-CD3-activated C3H/HeJBir CD4+ T cells, indicating that the induction of disease requires CBA-specific activation of C3H/HeJBir CD4+ T cells and that nonspecific activation is not sufficient. The activation by antigen may disturb the balance between effector and regulator cells, allowing the former to expand more rapidly in vivo. Notably, the lesions in the scid/scid recipients were focal, as are the lesions in the C3H/HeJBir donors. These data demonstrate that the CD4+ T cells reactive to enteric bacterial antigens in C3H/HeJBir mice are pathogenic. Although this possibility has been postulated for a number of years as a pathogenic mechanism of inflammatory bowel disease (1), this is a direct demonstration that it does occur.
In recent years a number of transgenic or knockout mice, collectively termed induced mutants, have developed colitis in the absence of any further manipulation. These strains represent a small minority of the induced mutants of immune-related genes that have been generated, arguing that they affect pathways critical to the maintenance of normal intestinal homeostasis. The histopathology varies among these strains, but often they have hyperproliferative crypts and rectal prolapse, neither of which is a feature of the spontaneously colitic C3H/HeJBir mouse. The genetic background of the inbred strain on which the mutation is placed is a key determinant of the disease and, as mentioned above, the C3H/HeJ strain seems highly susceptible in a number of these models. Abnormal mucosal CD4+ T cell reactivity has been implicated in a number of models, particularly abnormal Th1 responses manifested by excessive IFN-γ in the lesions. The restriction of the inflammation to the colon in these mutants implicates the enteric bacterial flora as the stimulant of the inflammation, and indeed, in several models, germ-free animals do not get disease (5, 13) and in one, the disease is considerably lessened by specific pathogen-free conditions (6, 11). The mechanism by which the enteric bacterial flora stimulates disease in such models is unknown. The results of this study suggest that one possible mechanism is CD4+ T cell reactivity to conventional antigens of the commensal bacteria, similar to what has been found in C3H/HeJBir mice; but it remains possible that the lesions in these models are due to other bacterial effects such as mitogens, superantigens, or other immunomodulatory molecules.
Although the bacterial flora drives the development of the mucosal immune system and to a lesser extent the systemic immune system, little is known about how the normal response to enteric bacterial antigens is regulated. Experiments in which germ-free mice have been monocontaminated with specific bacterial strains indicate that there is a wide spectrum in the resulting response, with some commensal enteric bacteria failing to elicit any immune response, even when injected parenterally (28,29). This result has given rise to the concept of an "autochthonous flora", i.e., bacterial strains that do not stimulate the immune system due to an extended coevolution with the host (30). Recently, T cells isolated from the intestinal lamina propria have been found to be tolerant to an individual's own enteric bacteria, but reactive to the enteric bacteria of other individuals (31,32). This relative unresponsiveness to the abundant antigens of the intestinal flora may be related to the limited T cell receptor repertoire found in intestinal intraepithelial T cells as compared to that found in peripheral blood, lymph node, or spleen (33)(34)(35)(36). The TCR repertoire of mouse lamina propria lymphocytes is unknown. Although relative hyporesponsiveness to enteric bacterial antigens appears to exist, the mechanisms that initiate and maintain such relative hyporesponsiveness remain unknown. Some insight into these mechanisms will probably result from studies of models such as C3H/HeJBir mice, in which these mechanisms appear to be impaired and in which IBD results.
Baric structures on triangulated categories and coherent sheaves
We introduce the notion of a "baric structure" on a triangulated category, as an abstraction of S. Morel's weight truncation formalism for mixed ℓ-adic sheaves. We study these structures on the derived category D_G(X) of G-equivariant coherent sheaves on a G-scheme X. Our main result shows how to endow this derived category with a family of nontrivial baric structures when G acts on X with finitely many orbits. We also describe a general construction for producing a new t-structure on a triangulated category equipped with given t- and baric structures, and we prove that the staggered t-structures on D_G(X) introduced by the first author arise in this way.
Introduction
Let Z be a variety over a finite field. The triangulated category of ℓ-adic sheaves on Z has a full subcategory D^b_m(Z) of "mixed sheaves," defined in terms of eigenvalues of the Frobenius morphism. The existence and good formal properties of this category are among the most important consequences of Deligne's proof of the Weil conjectures. It plays a major role in the theory of perverse sheaves and their applications in representation theory. An important part of the formalism of mixed sheaves is a certain filtration of D^b_m(Z) by full subcategories {D^b_m(Z)_{≤w}}_{w∈Z}, known as the weight filtration.
Let us now turn our attention to the world of equivariant coherent sheaves. Let X be a scheme (say, of finite type over a field), and let G be an affine group scheme acting on X with finitely many orbits. In [A], the first author introduced a class of t-structures, called staggered t-structures, on the bounded derived category D b G (X) of G-equivariant coherent sheaves on X. These t-structures depend on the choice of a certain kind of filtration of the abelian category of equivariant coherent sheaves. These filtrations, known as s-structures, bear an at least superficial resemblance to the weight filtration of D b m (Z). The main goal of this paper is to try to make this resemblance into a precise statement, and to thereby place these two kinds of structures in a unified setting. We do this by introducing the notion of a baric structure on a triangulated category. The usual weight filtration on D b m (Z) is not a baric structure, but a modified version of it due to S. Morel [M] is. (Indeed, the definition of a baric structure is largely motivated by Morel's results.) An s-structure is not a baric structure either: for one thing, it is a filtration of an abelian category, not of a triangulated category.
We show in this paper how to construct baric structures on D^b_G(X) using an s-structure on X. We also exhibit several other examples of baric structures that have appeared in the literature.
The second goal of the paper is to recast the construction in [A] as an instance of an abstract operation that can be done on any triangulated category. Specifically, given a triangulated category with "compatible" t- and baric structures, we outline a procedure, which we call staggering, for producing a new t-structure. Note that in [A], "staggered" was simply a name assigned to certain specific t-structures by definition, whereas in this paper, "to stagger" is a verb. We prove that these two uses of the word are consistent: that is, that the t-structures of [A] arise by staggering the standard t-structure on D^b_G(X) with respect to a suitable baric structure. (The staggering operation can also be applied to the weight baric structure on D^b_m(Z), as well as to other baric structures. This yields a new t-structure that has not previously been studied.) An outline of the paper is as follows. We begin in Section 2 by giving the definition of a baric structure and of the staggering operation. In Section 3, we give examples of baric structures, including Morel's version of the weight filtration. Next, in Section 4, we begin the study of baric structures on derived categories of equivariant coherent sheaves, especially those that behave well with respect to the geometry of the underlying scheme.
The next three sections are devoted to the relationship between baric structures and s-structures. First, in Section 5, we review relevant definitions and results from [A]. Section 6 contains the main result of the paper, showing how s-structures on the abelian category of coherent sheaves give rise to baric structures on the derived category. In Section 7, we briefly consider the reverse problem, that of producing s-structures from baric structures.
Finally, in Section 8, we study staggered t-structures associated to the baric structures produced in Section 6. Specifically, we prove that their hearts are finite-length categories, and we give a description of their simple objects. This was done in some cases in [A], but remarkably, the machinery of baric structures allows us to remove the assumptions that were imposed in loc. cit.
We conclude by mentioning an application of the machinery developed in this paper. The language of baric structures allows one to define a notion of "purity," similar to the one for ℓ-adic mixed constructible sheaves. In a subsequent paper [AT], the authors prove that every simple staggered sheaf is pure, and that every pure object in the derived category is a direct sum of shifts of simple staggered sheaves. These results are analogous to the well-known Purity and Decomposition Theorems for ℓ-adic mixed perverse sheaves.
Baric structures
In this section we introduce baric structures on triangulated categories (Definition 2.1), and the operation of staggering a t-structure with respect to a baric structure (Definition 2.8). Staggering produces, out of a t-structure (D ≤0 , D ≥0 ) on a triangulated category D, a new pair of orthogonal subcategories ( s D ≤0 , s D ≥0 ). Our main result is a criterion which guarantees that ( s D ≤0 , s D ≥0 ) is itself a t-structure (Theorem 2.11).
2.1. Baric structures.
Definition 2.1. Let D be a triangulated category. A baric structure on D is a pair of collections of thick subcategories ({D_{≤w}}, {D_{≥w}})_{w∈Z} satisfying the following axioms: (1) D_{≤w} ⊂ D_{≤w+1} and D_{≥w} ⊃ D_{≥w+1} for all w.
(2) Hom(A, B) = 0 whenever A ∈ D_{≤w} and B ∈ D_{≥w+1}.
(3) For any object X ∈ D and any w ∈ Z, there is a distinguished triangle A → X → B → with A ∈ D_{≤w} and B ∈ D_{≥w+1}.
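For orientation, and exactly parallel to the truncation formalism for t-structures in [BBD], the axioms above yield truncation triangles and adjunction isomorphisms of the following shape. This is a restatement of the definition rather than a new result; the Hom-vanishing in axiom (2) and the fact that the subcategories are thick (hence shift-stable) are what make the isomorphisms work.

```latex
\[
  \beta_{\leq w}X \longrightarrow X \longrightarrow \beta_{\geq w+1}X
  \xrightarrow{\ [1]\ },
  \qquad
  \beta_{\leq w}X \in \mathcal{D}_{\leq w},\quad
  \beta_{\geq w+1}X \in \mathcal{D}_{\geq w+1},
\]
\[
  \operatorname{Hom}(A,X) \cong \operatorname{Hom}(A,\beta_{\leq w}X)
  \ \ \text{for } A \in \mathcal{D}_{\leq w},
  \qquad
  \operatorname{Hom}(X,B) \cong \operatorname{Hom}(\beta_{\geq w+1}X,B)
  \ \ \text{for } B \in \mathcal{D}_{\geq w+1}.
\]
```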
This definition is at least superficially very similar to that of a t-structure, and in fact arguments identical to those given in [BBD, §1.3] yield the following basic properties of baric structures.
Definition 2.4. Let D be a triangulated category equipped with a baric structure ({D_{≤w}}, {D_{≥w}})_{w∈Z}. We will use the following terminology: (1) The adjoints β_{≤w} and β_{≥w} to the inclusions D_{≤w} ↪ D and D_{≥w} ↪ D are called baric truncation functors.
(2) The baric structure is bounded if for each object A ∈ D, there exist integers v, w such that A ∈ D_{≥v} ∩ D_{≤w}. (3) It is nondegenerate if there is no nonzero object belonging to all D_{≤w} or to all D_{≥w}. Note that a bounded baric structure is automatically nondegenerate. (4) Let D′ be another triangulated category, and suppose it is equipped with a baric structure ({D′_{≤w}}, {D′_{≥w}}). A functor of triangulated categories F : D → D′ is said to be left baryexact if F(D_{≥w}) ⊂ D′_{≥w} for all w ∈ Z, and right baryexact if F(D_{≤w}) ⊂ D′_{≤w} for all w ∈ Z. Let us also record the following definitions, though we will not use them until later in the paper.
Definition 2.5. Let D be a triangulated category equipped with a baric structure ({D ≤w }, {D ≥w }) w∈Z .
(1) Suppose D is equipped with an involutive antiequivalence D : D → D. The baric structure is self-dual if D(D ≤w ) = D ≥−w .
(2) Suppose D has the structure of a tensor category, with tensor product ⊗.
The baric structure is multiplicative with respect to ⊗ if for any A ∈ D_{≤v} and B ∈ D_{≤w}, we have A ⊗ B ∈ D_{≤v+w}. (3) Suppose D has an internal Hom functor Hom. The baric structure is multiplicative with respect to Hom if for any A ∈ D_{≤v} and B ∈ D_{≥w}, we have Hom(A, B) ∈ D_{≥w−v}. Note that whenever we have an adjunction between ⊗ and Hom, the multiplicativity conditions are equivalent.
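To indicate why the two multiplicativity conditions coincide in the presence of a ⊗/Hom adjunction, here is a sketch of one direction; it assumes that each D_{≥u} is the right orthogonal of D_{≤u−1}, in the spirit of Lemma 2.9(3) below.

```latex
% A in D_{<= v}, B in D_{>= w}; we check Hom(A,B) lies in D_{>= w-v}.
% For any test object C in D_{<= w-v-1}, tensor-multiplicativity gives
% C \otimes A \in D_{<= w-1}, so by the adjunction
\[
  \operatorname{Hom}\bigl(C,\mathcal{H}om(A,B)\bigr)
  \;\cong\; \operatorname{Hom}(C\otimes A,\,B) \;=\; 0 ,
\]
% and, under the orthogonality assumption, this vanishing for all such C
% forces \mathcal{H}om(A,B) \in D_{>= w-v}.  The reverse implication is similar.
```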
2.2. Staggering. Below, if D is equipped with a t-structure (D^{≤0}, D^{≥0}), we write C = D^{≤0} ∩ D^{≥0} for its heart, and we denote the associated truncation functors by τ_{≤n} and τ_{≥n}. The nth cohomology functor associated to the t-structure is denoted h^n : D → C.
Definition 2.6. Let D be a triangulated category equipped with both a t-structure and a baric structure. These structures are said to be compatible if τ ≤n and τ ≥n are right baryexact, and β ≤w and β ≥w are left t-exact.
Remark 2.7. Of course there is a dual notion of compatibility, but it does not seem to arise as often.
Definition 2.8. Let D be a triangulated category equipped with compatible t- and baric structures. Define two full subcategories of D as follows:
^sD^{≤0} = {A ∈ D | h^k(A) ∈ D_{≤−k} for all k ∈ Z},   ^sD^{≥0} = {A ∈ D | h^k(A) ∈ D_{≥−k} for all k ∈ Z}.
Assume that the pair (^sD^{≤0}, ^sD^{≥0}) constitutes a t-structure. It is called the staggered t-structure, or the t-structure obtained by staggering the original t-structure with respect to the given baric structure.
As usual, we let ^sD^{≤n} = ^sD^{≤0}[−n] and ^sD^{≥n} = ^sD^{≥0}[−n].
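With the definition above, the simplest objects to test are shifts of objects of the heart; the following small computation, using only the definition of h^k, may help fix the conventions.

```latex
\[
  \text{For } A \in \mathcal{C}:\qquad
  h^{k}(A[-n]) =
  \begin{cases} A, & k = n,\\ 0, & k \neq n, \end{cases}
  \qquad\text{so}\qquad
  A[-n] \in {}^{s}\mathcal{D}^{\leq 0} \iff A \in \mathcal{D}_{\leq -n},
  \quad
  A[-n] \in {}^{s}\mathcal{D}^{\geq 0} \iff A \in \mathcal{D}_{\geq -n}.
\]
```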
Lemma 2.9. Let D be a triangulated category equipped with compatible t- and baric structures. Assume the t-structure is nondegenerate.
(1) A ∈ D_{≤w} if and only if h^k(A) ∈ D_{≤w} for all k.
(2) A ∈ D_{≥w} if and only if h^k(A) ∈ D_{≥w} for all k.
(3) We have A ∈ D_{≤w} if and only if Hom(A, B) = 0 for all B ∈ D_{≥w+1}, and A ∈ D_{≥w} if and only if Hom(B, A) = 0 for all B ∈ D_{≤w−1}.
(4) D_{≤w} ∩ C is a Serre subcategory of C, and D_{≥w} ∩ C is stable under extensions.
(5) s D ≤0 and s D ≥0 are stable under extensions.
Proof. Recall (e.g. [V, Proposition 4.4.6]) that we have a spectral sequence, referred to below as (2.1), computing Hom(A, B) from the groups Hom(h^a(A), h^b(B)[·]) for a, b ∈ Z. Using it, one sees that if h^k(A) ∈ D_{≤w} for all k, then Hom(A, B) = 0 for all B ∈ D_{≥w+1}, and hence that A ∈ D_{≤w}.
(4) Suppose we have a short exact sequence 0 → A → B → C → 0 in C. If A and C are in D_{≤w}, then B must be as well, since D_{≤w} is stable under extensions. Conversely, suppose B ∈ D_{≤w}. Assume that C ∉ D_{≤w}, and consider the distinguished triangle β_{≤w}C → C → β_{≥w+1}C →. By left t-exactness of the baric truncation functors, we have an exact sequence 0 → h^0(β_{≤w}C) → C → h^0(β_{≥w+1}C). We must have h^0(β_{≥w+1}C) ≠ 0: otherwise, we would have C ≅ h^0(β_{≤w}C) ∈ D_{≤w}. On the other hand, the composite B → C → β_{≥w+1}C vanishes because B ∈ D_{≤w}, and since B → C is an epimorphism in C, the map C → h^0(β_{≥w+1}C) must be zero; the exact sequence then gives C ≅ h^0(β_{≤w}C) ∈ D_{≤w} after all, a contradiction. Hence C ∈ D_{≤w}, i.e., β_{≥w+1}C = 0. Next, from the distinguished triangle β_{≥w+1}A → β_{≥w+1}B → β_{≥w+1}C → and the vanishing of its second term, we conclude that β_{≥w+1}A = 0 and β_{≥w+1}C = 0. Thus, A and C are in D_{≤w}, as desired.
That D ≥w ∩ C is stable under extensions follows immediately from the fact that D ≥w is stable under extensions.
(5) Let A → B → C → be a distinguished triangle with A ∈ ^sD^{≤0} and C ∈ ^sD^{≤0}, and consider, for each k, the exact sequence h^k(A) → h^k(B) → h^k(C). Since h^k(A) and h^k(C) lie in D_{≤−k} ∩ C, which is a Serre subcategory of C by part (4), we get h^k(B) ∈ D_{≤−k} for all k, so B ∈ ^sD^{≤0}. A similar argument applies to ^sD^{≥0}.

Proposition 2.10. Let D be a triangulated category equipped with compatible t- and baric structures. Assume the t-structure is nondegenerate.
(1) Hom(A, B) = 0 for all A ∈ ^sD^{≤0} and B ∈ ^sD^{≥1}.
(2) A ∈ ^sD^{≤0} if and only if Hom(A, B) = 0 for all B ∈ ^sD^{≥1}, and B ∈ ^sD^{≥1} if and only if Hom(A, B) = 0 for all A ∈ ^sD^{≤0}.
(3) ^sD^{≤−1} ⊂ ^sD^{≤0} and ^sD^{≥1} ⊂ ^sD^{≥0}.
(4) If the baric structure is also nondegenerate, there is no nonzero object belonging to all ^sD^{≤n} or to all ^sD^{≥n}.
(5) If the t- and baric structures are bounded, then for any A ∈ D, there are integers n, m such that A ∈ ^sD^{≥n} ∩ ^sD^{≤m}.
Proof. (1) It follows from the spectral sequence (2.1) that Hom(A, B) = 0.
(2) Suppose Hom(A, B) = 0 for all B ∈ ^sD^{≥1}, and suppose that for some k, h^k(A) ∉ D_{≤−k}. Then β_{≥−k+1}τ_{≥k}A is a nonzero object of ^sD^{≥1}, and the natural morphism A → β_{≥−k+1}τ_{≥k}A is nonzero. This contradicts the assumption that Hom(A, B) = 0 for all B ∈ ^sD^{≥1}, so we must have h^k(A) ∈ D_{≤−k} for all k, and hence A ∈ ^sD^{≤0}.
On the other hand, if Hom(A, B) = 0 for all A ∈ ^sD^{≤0}, a similar argument involving the morphism τ_{≤−k}β_{≤k}B → B shows that B ∈ ^sD^{≥1}. (3) This is immediate from the definitions.
On the other hand, if Hom(A, B) = 0 for all A ∈ s D ≤0 , a similar argument involving the morphism τ ≤−k β ≤k B → B shows that B ∈ s D ≥1 . ( (4) Suppose A ∈ s D ≤n for all n. Then h k (A) ∈ D ≤n−k for all n and all k. The nondegeneracy of the baric structure implies that h k (A) = 0; then, the nondegeneracy of the t-structure implies that A = 0. Next, suppose A ∈ s D ≥n for all n, and assume A = 0. Choose some w such that β ≤w A = 0, and then choose some k such that τ ≤k β ≤w A = 0. By right baryexactness of τ ≤k , we know that τ ≤k β ≤w A ∈ D ≤w , so we obtain a sequence of isomorphisms In particular, the natural map τ ≤k β ≤w A → A is nonzero. But clearly τ ≤k β ≤w A ∈ s D ≤k+w , so A / ∈ s D ≥k+w+1 , a contradiction. (5) This follows from Lemma 2.9(6).
We will not prove in general that ( s D ≤0 , s D ≥0 ) is a t-structure.
Theorem 2.11. Let D be a triangulated category endowed with compatible bounded, nondegenerate t- and baric structures. Suppose we have a function µ : D → N with the following properties: (1) µ(X) = 0 if and only if X = 0.
(2) If X ≠ 0 and n is the smallest integer such that h^n(X) ≠ 0, then µ(τ_{≥n+1}β_{≤−n}X) < µ(X).
Then (^sD^{≤0}, ^sD^{≥0}) is a t-structure on D.
Proof. It will be convenient to use the "∗" operation on triangulated categories (cf. [BBD, §1.3.9]): given two classes of objects A, B ⊂ D, we denote by A ∗ B the class of all objects X ∈ D such that there exists a distinguished triangle A → X → B → with A ∈ A and B ∈ B. In view of the preceding proposition, the present theorem will be proved once we show that every object of D belongs to ^sD^{≤0} ∗ ^sD^{≥1}. We proceed by induction on µ(X). If µ(X) = 0, then X = 0, and there is nothing to prove. Otherwise, let n be the smallest integer such that h^n(X) ≠ 0. Let A_1 = τ_{≤n}β_{≤−n}X, X′ = τ_{≥n+1}β_{≤−n}X, and B_1 = β_{≥−n+1}X. It follows from the right baryexactness of τ_{≤n} that A_1 ∈ ^sD^{≤0}, and, similarly, it follows from the left t-exactness of β_{≥−n+1} that B_1 ∈ ^sD^{≥1}. Recall [BBD, Proposition 1.3.10] that the "∗" operation is associative. By construction, we have X ∈ {A_1} ∗ {X′} ∗ {B_1}. Since µ(X′) < µ(X) by assumption, we know that X′ ∈ ^sD^{≤0} ∗ ^sD^{≥1}, and hence X ∈ ^sD^{≤0} ∗ ^sD^{≤0} ∗ ^sD^{≥1} ∗ ^sD^{≥1}. Since ^sD^{≤0} and ^sD^{≥1} are stable under extensions, we have ^sD^{≤0} ∗ ^sD^{≤0} = ^sD^{≤0} and ^sD^{≥1} ∗ ^sD^{≥1} = ^sD^{≥1}, so X ∈ ^sD^{≤0} ∗ ^sD^{≥1}, as desired.
Examples
In this section, we exhibit several examples of baric structures occurring "in nature." In the first one, the staggering operation of Definition 2.8 is a new approach to a known t-structure. In two others, this operation gives what appears to be a previously unknown t-structure. The main example of this paper-baric structures on derived categories of coherent sheaves-will be discussed in the next section.
3.1. Perverse sheaves. Let X be a topologically stratified space (as in [GM]), with all strata of even real dimension. (This example can be easily modified to relax that condition, or to treat stratified varieties over a field instead.) Let D = D^b_c(X) be the bounded derived category of sheaves of complex vector spaces that are constructible with respect to the given stratification. For any w ∈ Z, let X_w be the union of all strata of dimension at most 2w. (Thus, X_w = ∅ if w < 0.) This is a closed subspace of X. Let i_w : X_w → X be the inclusion map, and let j_w : X ∖ X_w → X be the inclusion of the open complement. Let D_{≤w} be the full subcategory consisting of complexes whose support is contained in X_w, and let D_{≥w+1} be the full subcategory of complexes F such that i_w^! F = 0. If F ∈ D_{≤w} and G ∈ D_{≥w+1}, then F ≅ i_{w*}i_w^{-1}F, and Hom(F, G) ≅ Hom(i_w^{-1}F, i_w^! G) = 0. Moreover, for any F, the distinguished triangle i_{w*}i_w^! F → F → j_{w*}j_w^{-1}F → is one whose first term lies in D_{≤w} and whose last term lies in D_{≥w+1}. Thus, ({D_{≤w}}, {D_{≥w}})_{w∈Z} is a baric structure on D^b_c(X), with baric truncation functors β_{≤w} = i_{w*}i_w^! and β_{≥w+1} = j_{w*}j_w^{-1}. It is easy to see that this baric structure is compatible with the standard t-structure on D. If F is supported on X_w, it is obvious that any truncation of it is as well, so D_{≤w} is stable under τ_{≤n} and τ_{≥n}. On the other hand, it is clear from the formulas above that β_{≤w} and β_{≥w} are both left t-exact.
In the associated staggered t-structure (^sD^{≤0}, ^sD^{≥0}), we have F ∈ ^sD^{≤0} if and only if h^k(F) ∈ D_{≤−k} for all k, or, in other words, if and only if dim supp h^k(F) ≤ −2k for all k. The staggered t-structure in this case is none other than the perverse t-structure of middle perversity.
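Unwinding the definitions in this example (with X_w the union of strata of real dimension at most 2w, as above), the staggered support condition is exactly the middle-perversity one:

```latex
\[
  h^{k}(\mathcal{F}) \in \mathcal{D}_{\leq -k}
  \;\iff\; \operatorname{supp} h^{k}(\mathcal{F}) \subseteq X_{-k}
  \;\iff\; \dim_{\mathbb{R}} \operatorname{supp} h^{k}(\mathcal{F}) \leq -2k ,
\]
% which is the usual support condition for the middle-perversity t-structure
% on a space stratified by even-dimensional strata.
```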
3.2. Quasi-exceptional sets. Let D be a triangulated category. A set of objects {∇ w } w∈N in D indexed by nonnegative integers is called a quasi-exceptional set if the following conditions hold: (1) If v < w, then Hom(∇ v , ∇ w [k]) = 0 for all k ∈ Z.
(2) For any w ∈ N, Hom(∇_w, ∇_w[k]) = 0 if k < 0, and End(∇_w) is a division ring. For w ∈ N, let D_{≤w} be the full triangulated subcategory of D generated by ∇_0, . . . , ∇_w, and for an integer w < 0, let D_{≤w} be the full triangulated subcategory containing only zero objects. (Here, we are following the notation of [B1], but this will turn out to be consistent with our notation for baric structures as well.) A quasi-exceptional set is dualizable if there is another collection of objects {∆_w}_{w∈N} such that: (3) If v < w, then Hom(∆_w, ∇_v[k]) = 0 for all k ∈ Z. (4) For any w ∈ N, we have ∆_w ≅ ∇_w mod D_{≤w−1}. The last condition means that ∆_w and ∇_w give rise to isomorphic objects in the quotient category D_{≤w}/D_{≤w−1}.
Next, let D ≥w be the full triangulated subcategory generated by the objects {∇ k | k ≥ w}. If A ∈ D ≤w and B ∈ D ≥w+1 , then Axiom (1) above implies that Hom(A, B) = 0. In addition, by [B1,Lemma 4(e)], each inclusion D ≤w → D ≤w+1 admits a right adjoint ι w . By a straightforward argument, these functors can be used to construct distinguished triangles as in Definition 2.1(3). Thus, ({D ≤w }, {D ≥w }) w∈Z is a baric structure on D. It is nondegenerate and bounded by construction.
A key result of [B1] is the construction of a bounded, nondegenerate t-structure (D^{≤0}, D^{≥0}) associated to a dualizable quasi-exceptional set. This t-structure is defined as follows (see [B1, Proposition 1]): D^{≤0} = ⟨∆_w[d] | w ∈ N, d ≥ 0⟩ and D^{≥0} = ⟨∇_w[d] | w ∈ N, d ≤ 0⟩. Here, the notation ⟨S⟩ stands for the smallest strictly full subcategory of D that is stable under extensions and contains all objects in the set S.
We claim that this t-structure and the baric structure defined above are compatible. It follows from Axiom (1) above that each baric truncation of an object ∇_v[d] is either ∇_v[d] itself or zero; this calculation shows that the baric truncation functors preserve D^{≥0}. On the other hand, Axiom (3) implies that τ_{≤0}∇_w is contained in the subcategory generated by ∆_0, . . . , ∆_w, and that subcategory coincides with D_{≤w} by Axiom (4). Thus, τ_{≤0} preserves D_{≤w}, so τ_{≥0} does as well.
3.3. Weight truncation for ℓ-adic mixed constructible sheaves. Let X be a scheme of finite type over a finite field F_q, and let ℓ be a fixed prime number distinct from the characteristic of F_q. Let D = D^b_m(X, Q_ℓ) be the bounded derived category of mixed constructible Q_ℓ-sheaves on X. Let ^p h^n denote the nth cohomology functor with respect to the perverse t-structure of middle perversity on D.
Morel [M] shows that the full subcategories of complexes of weights ≤ w and of weights ≥ w constitute a baric structure ({D_{≤w}}, {D_{≥w}})_{w∈Z} on D. Since all objects in the heart of the perverse t-structure have finite length, we may attach a nonnegative integer µ(F) to each complex F by the formula µ(F) = Σ_n lg(^p h^n(F)), where lg denotes length. Moreover, by [M, Proposition 4.1.3], the baric truncation functors are t-exact for the perverse t-structure. This implies that µ satisfies the assumptions of Theorem 2.11, so the perverse t-structure on D^b_m(X, Q_ℓ) can be staggered with respect to Morel's baric structure to obtain a new t-structure. The authors are not aware of any previous appearance of this "staggered-perverse" t-structure on ℓ-adic mixed constructible sheaves.
3.4. Diagonal complexes. We conclude with an example, due to T. Ekedahl [E], of a t-structure that closely resembles a staggered t-structure, although it does not in general arise by staggering with respect to a baric structure. (The authors thank N. Ramachandran for pointing out this work to them.) Let D be a triangulated category with a bounded, nondegenerate t-structure (D^{≤0}, D^{≥0}), and as usual, let C = D^{≤0} ∩ D^{≥0} denote its heart. If C is equipped with a radical filtration {C_{≤w}}_{w∈Z}, Ekedahl shows that certain full subcategories of D, defined in terms of this filtration, constitute a bounded, nondegenerate t-structure on D. This is called the diagonal t-structure, and the objects in its heart are called diagonal complexes.
Ekedahl's defining formulas are, of course, strongly reminiscent of those in Definition 2.8. Let us comment briefly on the relationship between the two constructions. Given a radical filtration, one could hope to define a baric structure by setting D_{≤w} = {A ∈ D | h^k(A) ∈ C_{≤w} for all k ∈ Z}. However, the construction of a baric truncation functor turns out to require a stronger Hom-vanishing condition between C_{≤w} and C_{≥w+1} than that stated above: one needs something like Lemma 2.9(3). Conversely, given a baric structure, one could hope to define a radical filtration by setting C_{≤w} = D_{≤w} ∩ C. This also fails, because a baric structure imposes no higher Hom-vanishing conditions on the right-orthogonal of C_{≤w}.
Baric Structures on Coherent Sheaves, I
In this section, we will investigate baric structures on derived categories of coherent sheaves. Let X be a scheme of finite type over a noetherian base scheme, and let G be an affine group scheme over the same base, acting on X. We adopt the convention that all statements about subschemes are to be understood in the G-invariant sense. Thus, "open subscheme" will always mean "G-stable open subscheme," and "irreducible" will mean "not a union of two proper G-stable closed subschemes." This convention will remain in effect for the remainder of the paper.
Let C_G(X) and Q_G(X) denote the categories of G-equivariant coherent and quasicoherent sheaves, respectively, on X. One of the headaches of the subject is the need to work with three closely related triangulated categories, which we denote as follows: (1) D^b_G(X) is the bounded derived category of C_G(X). (2) D^−_G(X) is the bounded-above derived category of C_G(X).
(3) D^+_G(X) is the full subcategory of the bounded-below derived category of Q_G(X) consisting of objects with coherent cohomology sheaves. D^b_G(X) will be the focus of our attention, but it will be necessary to work with D^−_G(X) and D^+_G(X) as well, simply because most operations on sheaves take values in one of those categories, even when acting on bounded complexes.
Definition 4.1. A baric structure on X is a baric structure on D^b_G(X) that is compatible with the standard t-structure.

Remark 4.2. Implicit in this definition are some finiteness conditions; e.g., it is conceivable that there are interesting baric structures on D^+_G(X) that take advantage of the fact that the functors β_{≤w} can take bounded complexes to unbounded complexes. Nevertheless, this is the definition we will work with.

Inspired by parts (1) and (2) of Lemma 2.9, we define the following subcategories of D^−_G(X) and D^+_G(X): for ? ∈ {−, +} and w ∈ Z, set D^?_G(X)_{≤w} = {F ∈ D^?_G(X) | h^k(F) ∈ D^b_G(X)_{≤w} for all k} and D^?_G(X)_{≥w} = {F ∈ D^?_G(X) | h^k(F) ∈ D^b_G(X)_{≥w} for all k}.
It is unknown whether these categories constitute parts of baric structures on D^−_G(X) or on D^+_G(X). Nevertheless, they will be useful in the sequel, in part because they admit the alternate characterization given in the lemma below. If Y is another scheme endowed with a baric structure, we will, by a minor abuse of terminology, call a functor D^−_G(X) → D^−_G(Y) or D^+_G(X) → D^+_G(Y) left or right baryexact if it preserves the corresponding subcategories defined above.

4.1. HLR baric structures. We do not wish to work with arbitrary baric structures on D^b_G(X); rather, we want them to be well-behaved in relation to the scheme structure on X. We have already imposed the condition that the baric structure be compatible with the standard t-structure. We may also ask that it give rise to baric structures on subschemes, in the following sense.
Definition 4.4. Suppose X is equipped with a baric structure, and let κ : Y ↪ X be a locally closed subscheme. A baric structure on Y is said to be induced by the one on X if Lκ^* is right baryexact and Rκ^! is left baryexact.
The class of "HLR (hereditary, local, and rigid) baric structures," defined below, is particularly well-behaved. For instance, every locally closed subscheme of a scheme with an HLR baric structure admits a unique induced baric structure. (See Theorem 4.10.) The remainder of Section 4 is devoted to establishing various properties of HLR baric structures, and the main result of the paper, Theorem 6.4, is a statement about a class of nontrivial HLR baric structures.
Definition 4.5. A baric structure on X is said to be hereditary if every closed subscheme admits an induced baric structure. A hereditary baric structure on X is said to be local if every open subscheme admits an induced baric structure that is also hereditary.
Next, a hereditary baric structure on X is rigid if for every sequence of closed subschemes Z ↪ Z_1 ↪ X where Z_1 is a nilpotent thickening of Z (i.e., Z_1 has the same underlying topological space as Z), the induced baric structures on Z and Z_1 are related as follows: writing t : Z ↪ Z_1 for the inclusion, D^b_G(Z_1)_{≤w} is the smallest thick subcategory containing t_*(D^b_G(Z)_{≤w}), and D^b_G(Z_1)_{≥w} is the smallest thick subcategory containing t_*(D^b_G(Z)_{≥w}). Finally, a baric structure that is hereditary, local, and rigid is called an HLR baric structure.
It turns out that the "local" and "rigid" conditions on an HLR baric structure are redundant: Theorem 4.6. Every hereditary baric structure is HLR.
This theorem will be proved in Section 4.3. We first require a couple of preliminary lemmas about induced baric structures, proved below. Following that, in Section 4.2, we will establish a number of useful properties of HLR baric structures.
Lemma 4.7. Let ({D^b_G(X)_{≤w}}, {D^b_G(X)_{≥w}})_{w∈Z} be a baric structure on X, and let i : Z ↪ X be a closed subscheme. If Z admits an induced baric structure, it is given by
(4.3)  D^b_G(Z)_{≤w} = {F ∈ D^b_G(Z) | i_*F ∈ D^b_G(X)_{≤w}},  D^b_G(Z)_{≥w} = {F ∈ D^b_G(Z) | i_*F ∈ D^b_G(X)_{≥w}}.
Conversely, if the categories (4.3) constitute a baric structure on Z, then that baric structure is induced from the one on X.
If an open subscheme j : U ↪ X admits an induced baric structure, it is given by
(4.4)  D^b_G(U)_{≤w} = {F ∈ D^b_G(U) | F ≅ j^*F_1 for some F_1 ∈ D^b_G(X)_{≤w}},  D^b_G(U)_{≥w} = {F ∈ D^b_G(U) | F ≅ j^*F_1 for some F_1 ∈ D^b_G(X)_{≥w}}.
Conversely, if the categories (4.4) constitute a baric structure on U, then that baric structure is induced from the one on X.
, so the first and last terms above must be the baric truncations of i * F : Next, assume the categories (4.3) constitute a baric structure on Z. We will show that this baric structure is induced from the one on X.
Thus, Li * is right baryexact, and Ri ! is left baryexact, as desired.
We turn now to open subschemes.
w∈Z is an induced baric structure on an open subscheme j : U ֒→ X. In view of the equalities (4.1), the definition of "induced" implies that j * : Finally, assume the categories (4.4) constitute a baric structure on U . We must show that this baric structure is induced. Clearly, j * is baryexact as a functor of bounded derived categories D b G (X) → D b G (U ). Since j * is also exact, it commutes with truncation and cohomology functors, and it takes D b Lemma 4.8. Let j : U ֒→ X be the inclusion of an open subscheme, and let i : Z ֒→ X be the inclusion of a closed subscheme. Assume that U and Z are equipped with baric structures induced from one on X. Then: (1), (2), and (3) hold by definition.
(4) We saw in the proof of Lemma 4.7 that as a functor of bounded derived w∈Z be a hereditary baric structure on X, and let i : Z ֒→ X be the inclusion of a closed subscheme. The induced baric structure on Z is also hereditary.
Proof. Let κ : Y ↪ Z be a closed subscheme of Z. We must show that Y admits a baric structure induced from the one on Z. In fact, we claim that the baric structure on Y induced from the one on X (via i ∘ κ : Y ↪ X) has the desired property.
such that Hom(Lκ^*F, G) ≠ 0. Then Hom(F, κ_*G) ≠ 0 and, because i_* is faithful, Hom(i_*F, i_*κ_*G) ≠ 0. But this is impossible according to Lemma 4.8. Thus, Lκ^* is right baryexact and Rκ^! is left baryexact, so the baric structure on Y induced from the one on X is also induced from the one on Z. The induced baric structure on Z is therefore hereditary.
4.2. Properties of HLR baric structures. In this section, we prove three useful results about HLR baric structures. First, we prove that the HLR property is inherited by induced baric structures on subschemes. Next, we prove an additional rigidity property for nilpotent thickenings of closed subschemes. Finally, we prove a "gluing theorem" that states that an HLR baric structure is determined by the baric structures it induces on a closed subscheme and the complementary open subscheme. It should be noted that the proofs of these results depend on Theorem 4.6.
Theorem 4.10. Suppose X is endowed with an HLR baric structure. Every locally closed subscheme κ : Y ↪ X admits a unique induced baric structure. Moreover, this baric structure is also HLR.
Proof. We have already seen the uniqueness of the induced baric structure in the case of open or closed subschemes, in Lemma 4.7. For a general locally closed subscheme, let us factor the inclusion map κ : Y → X as a closed imbedding i : Y ֒→ U followed by an open imbedding j : U ֒→ X. Then U acquires a unique induced hereditary baric structure from the baric structure on X, and it in turn induces a unique baric structure on its closed subscheme Y . This baric structure is also induced from the one on X: clearly, Lκ * = Li * • j * is right baryexact, and To show that this is the unique baric structure on Y induced from the one on X, we must show that the baryexactness assumptions on Lκ * and Rκ ! imply the same conditions on Li * and Ri ! . (It then follows that any baric structure induced from the one on X is actually induced from the one on U .) Suppose F ∈ D − G (U ) ≤w , and consider a distinguished triangle of the form On the other hand, suppose that F ∈ D + G (U ) ≥w , and consider a distinguished triangle of the form . But we also have Rκ ! F 1 ∼ = Ri ! β ≥w τ ≤k F , and from the chain of isomorphisms We now conclude that any baric structure on Y induced from the one on X is also induced from the one on U , and is therefore uniquely determined.
To show that the induced baric structure on a locally closed subscheme is HLR, it suffices, by Theorem 4.6, to show that it is hereditary. In the case of a closed subscheme, this was done in Lemma 4.9, and in the case of an open subscheme, there is nothing to prove: this property is part of the definition of "local." The assertion then follows for a general locally closed subscheme, since, by construction, the induced baric structure on such a subscheme is obtained by first passing to an open subscheme, and then to a closed subscheme of that.
Next, we turn to nilpotent thickenings of a closed subscheme.
Proposition 4.11. Suppose X is endowed with an HLR baric structure, and let Z t ֒→ Z 1 ֒→ X be a sequence of closed subschemes of X with the same underlying topological space. Then: (2) is entirely analogous and will be omitted.
Finally, we prove a "gluing theorem" for HLR baric structures.
Theorem 4.12. Suppose X is endowed with an HLR baric structure. Let i : Z ֒→ X be a closed subscheme of X, and let j : U ֒→ X be its open complement. Endow U and Z with the baric structures induced from that on X. Then we have
In particular, there is a unique HLR baric structure on X which induces the given baric structures on U and on Z.
We have an exact sequence
lim_{→, Z_1} Hom(i_{Z_1 *} Li^*_{Z_1} F, G) → Hom(F, G) → Hom(j^*F, j^*G),
where the limit runs over nilpotent thickenings Z_1 of Z. (See, for instance, [B2, Proposition 2 and Lemma 3(a)] for an explanation of this exact sequence.) We have j^*F ∈ D^b_G(U)_{≤w} and j^*G ∈ D^b_G(U)_{≥w+1}, and by Lemma 4.8, we have i_{Z_1 *} Li^*_{Z_1} F ∈ D^−_G(X)_{≤w}, so the first and third terms vanish. We conclude that Hom(F, G) also vanishes. The argument for D^b_G(X)_{≥w} is similar.

4.3. Proof of Theorem 4.6. In this section, we will prove that hereditary baric structures are automatically also local and rigid. We begin with a result about baric truncation functors with respect to a hereditary baric structure. If X is endowed with a hereditary baric structure, and F ∈ D^b_G(X) is actually supported on some closed subscheme i : Z ↪ X, then the baric truncations of F are obtained by taking baric truncations in the induced baric structure on Z, and then pushing them forward by i_*. In other words, hereditary baric structures have the property that baric truncation functors preserve support. More precisely:

Proposition 4.13. Suppose X is endowed with a hereditary baric structure. (1) If F ∈ D^b_G(X) has set-theoretic support on a closed set Z ⊂ X, then so do β_{≤w}F and β_{≥w}F.
(2) If a morphism u : F → G in D^b_G(X) has set-theoretic support on Z, in the sense that u|_{X∖Z} = 0, then so do β_{≤w}(u) and β_{≥w}(u).
Proof. If F is set-theoretically supported on Z, then there is a subscheme i : Z_1 ↪ X of X, whose underlying closed set is Z, such that F ≅ i_*F′ for some F′ ∈ D^b_G(Z_1). Applying i_* to the baric truncation triangle of F′ in the induced baric structure on Z_1, we must have i_*β_{≤w}F′ ≅ β_{≤w}F and i_*β_{≥w+1}F′ ≅ β_{≥w+1}F. In particular these objects are set-theoretically supported on Z, proving the first assertion.
To prove the second assertion, consider the exact sequence where i Z ′ : Z ′ ֒→ X ranges over all closed subscheme structures on Z. By assumption, u ∈ Hom(F , G) vanishes upon restriction to X Z, so we see from the exact sequence above that it must factor through i It follows that β ≤w (u) and β ≥w (u) factor through β ≤w τ ≤n i Z ′ * Ri ! Z ′ G and β ≥w τ ≤n i Z ′ * Ri ! Z ′ G, respectively. These objects have set-theoretic support on Z by the first part of the proposition, so β ≤w (u) and β ≥w (u) have set-theoretic support on Z as well, as desired.
We may use this fact to prove the following: Theorem 4.14. Every hereditary baric structure is local.
We will prove this theorem over the course of the following three propositions. Recall from Lemma 4.7 that in a local baric structure, the induced baric structures on open subschemes necessarily have the form given in the proposition below.
w∈Z be a hereditary baric structure on X, and let U be an open subscheme of X. For any w ∈ Z, define full subcategories of D b G (U ) as follows: , we may find for every morphism u : F → G an object G 2 ∈ D b G (X) and a diagram F 1 → G 2 ← G 1 such that (G 2 ← G 1 )| U is an isomorphism, and the composition coincides with u. We claim that the diagram β ≤w F 1 → β ≤w G 2 ← β ≤w G 1 has the same property. In that case, the cone on the composition F 1 ∼ = β ≤w F 1 → β ≤w G 2 belongs to D b G (X) ≤w , which shows that the cone on u : F → G belongs to D b G (U ) ≤w . To prove the claim, note that the cone on the map G 1 → G 2 is set-theoretically supported on the closed set X − U , and since the baric structure w∈Z is hereditary, the same must be true for the cone on β ≤w G 1 → β ≤w G 2 ; in particular, the restriction of the latter map to U is an isomorphism.
We have shown that the D b G (U ) ≤w ⊂ D b G (U ) is a triangulated subcategory. To show that it is thick we have to show that it is also closed under summands -i.e.
, we may find a triangle F 1 → H → G 1 → whose restriction to U is isomorphic to the triangle
F → F ⊕ G → G →
In particular, the map G 1 → F 1 [1] is set-theoretically supported on X − U , so by Proposition 4.13 the same must be true of β ≤w G 1 → β ≤w F 1 . From the diagram whose rows and columns are distinguished triangles, we see that β ≥w+1 G 1 → β ≥w+1 F 1 is an isomorphism. But since this morphism has set-theoretic support on X − U , the objects β ≥w+1 F 1 and β ≥w+1 G 1 must have set-theoretic support on X − U , which implies there are isomorphisms β ≤w F 1 | U ∼ = F and β ≤w G 1 | U ∼ = G. Thus F and G belong to D b G (U ) ≤w . A similar proof shows that the subcategories D b G (U ) ≥w are thick.
, and i : Z ֒→ X runs over all subscheme structures on X U . The first term above vanishes automatically, and each of the terms Hom(i * Li * F 1 , G 1 [1]) vanishes because, by Lemma 4.8, i * Li * F 1 ∈ D − G (X) ≤w . Thus, Hom(F , G) = 0 and Proof. Using Lemma 4.7 and the previous proposition, we know that the baric It remains only to show that this baric structure is hereditary. Let i : Y ֒→ U be a closed subscheme of U . By Lemma 4.7, we must prove that the following categories constitute a baric structure on Y : Let Y be the closure of Y in X, and let i 1 : Y ֒→ X be the inclusion map, so that we have a commutative square of inclusions By definition, the hereditary baric structure on X induces a baric structure on Y . This baric structure is itself hereditary, by Lemma 4.9. Thus, by the previous proposition, the baric structure on Y induces one on its open subscheme Y . This is given by be such that there exists a map i 1 * F ′ 2 → F 1 which is an isomorphism over U . Then i 1 * β ≤w F ′ 2 → F 1 is also an isomorphism over U , and F 2 := β ≤w F ′ 2 has the property that F 2 | Y ∼ = F and Let us finally show that hereditary baric structures are rigid.
Let Z be a subscheme of X and let Z 1 be a nilpotent thickening of Z in X, and write t for inclusion of Z into Z 1 . If F is a bounded chain complex of coherent sheaves on Z 1 , then we may find a filtration of F by subcomplexes F k whose subquotients are scheme-theoretically supported on Z. Thus in D b G (Z 1 ) we may find a sequence of objects and maps Then we may apply β ≤w to the sequence to obtain 0 = β ≤w F 0 → β ≤w F 1 → · · · → β ≤w F n = F and distinguished triangles It follows from Lemma 4.7 that the object β ≤w t * G k is isomorphic to t * β ≤w G k . Thus, F is in the thick closure of the image of D b G (Z) ≤w under t * . A similar proof gives the same result for D b G (Z 1 ) ≥w . This completes the proof of Theorem 4.6.
Background on s-structures and Staggered Sheaves
In this section, we review the t-structures on derived categories of equivariant coherent sheaves that were introduced in [A]. (They were called "staggered tstructures" in loc. cit.; in Section 8, we will prove that they usually arise by the staggering construction of Definition 2.8.) These t-structures depend on two auxiliary data: an s-structure, and a perversity function. After fixing notation, we briefly recall some facts about these objects, and we then describe the t-structures themselves. We will also prove a few useful lemmas about these objects.
As before, let X be a scheme of finite type over a noetherian base scheme, acted on by an affine group scheme G over the same base. We adopt the additional assumptions that the base scheme admits a dualizing complex in the sense of [H, Chap. V], and that the category C G (X) has enough locally free objects. It follows (see [B2,Proposition 1]) that X admits an equivariant dualizing complex. Fix one, and denote it ω X ∈ D b G (X). Next, let D = RHom(·, ω X ) denote the equivariant Serre-Grothendieck duality functor. Let X gen denote the set of generic points of G-invariant subschemes of X, and for any x ∈ X gen , we denote by Gx the smallest G-stable closed subset of X. (We do not usually regard Gx as having a fixed subscheme structure.) For any point x ∈ X gen and any closed subscheme structure i : V ). Let cod Gx be the unique integer such that h cod Gx (Ri ! ω X | V ) = 0. This number is independent of the choice of closed subscheme structure i : Z ֒→ X and of open subscheme V ⊂ Z. If X is, say, an equidimensional scheme of finite type over a field, ω X may be normalized so that cod Gx is the ordinary (Krull) codimension of Gx.
An s-structure on the scheme X is a pair of collections of full subcategories ({C G (X) ≤w }, {C G (X) ≥w }) w∈Z of C G (X) satisfying a list of ten axioms, called (S1)-(S10) in [A]. We will not review all the axioms here, but we do recall some of the key properties of s-structures: • Each C G (X) ≤w is a Serre subcategory, and each C G (X) ≥w is closed under extensions and subobjects. • C G (X) ≥w is the right orthogonal to C G (X) ≤w−1 .
• Each sheaf F contains a unique maximal subsheaf in C G (X) ≤w , denoted σ ≤w F . The quotient σ ≥w+1 F ∼ = F /σ ≤w F is the largest quotient of F in C G (X) ≥w+1 . • An s-structure on X induces s-structures on all locally closed subschemes of X. Assume henceforth that X is equipped with a fixed s-structure. Given a point x ∈ X gen and a closed subscheme structure i : Z ֒→ X on Gx, choose an open subscheme V ⊂ Z such that Ri ! ω X | V is concentrated in degree cod Gx. There is a unique integer, called the altitude of Gx and denoted alt Gx, such that Again, alt Gx is independent of the choice of i and V .
The staggered codimension of Gx is defined by scod Gx = alt Gx + cod Gx.
A (staggered ) perversity function is a function p : X gen → Z such that Given a perversity p : X gen → Z, the functionp : X gen → Z given bȳ is also a perversity function, known as the dual perversity. Given a staggered perversity function p, we define a full subcategory of D − G (X) by for any x ∈ X gen , any closed subscheme structure i : Z ֒→ X on Gx, and any k ∈ Z, there is a dense open and a full subcategory of D + G (X) by ). The t-structure associated in [A] to the given s-structure and to a perversity p is the pair . The remainder of the section will be spent establishing a number of useful lemmas about these objects. Let q : X gen → Z be a function such that Given such a function, let for any closed subscheme i : One may either regard this definition as a condition only on reduced closed subschemes of the form Gx, or as a condition on all possible closed subscheme structures on the various closed sets Gx. These two interpretations are equivalent by [A,Proposition 4.1], however, so there is no ambiguity in the definition. The first viewpoint is more convenient for checking explicit examples, but the second is sometimes more useful in proofs.
Lemma 5.1. Let x ∈ X gen , and let i : Z ֒→ X be a closed subscheme structure on Gx. For any sheaf F ∈ q C G (X) ≤w and any r ≥ 0, there is a dense open subscheme Proof. The proof of this lemma follows that of [A,Lemma 8.2] nearly verbatim. By the definition of q C G (X) ≤w , we know that there is a dense open subset . Then X ′ is a dense open subset of X, and i : Z ′ ֒→ X is a closed subscheme of X ′ . It clearly suffices to prove the lemma in the case where X and Z are replaced by X ′ and Z ′ . We therefore henceforth assume, without loss of generality, that i * F ∈ C G (Z) ≤w+q(x) . We now proceed by induction on r. For r = 0, the lemma is trivial: we have i * F ∈ C G (Z) ≤w+q(x) by assumption. Now, suppose r > 0. According to Axiom (S10) in the definition of an s-structure [A], there is an open subscheme V ′ ⊂ Z such that for any open set U ⊂ X with U ∩ Z ⊂ V ′ , we have Ext r (F | U , i * G| U ) = 0 for all G ∈ C G (Z) ≥w+q(x)+1 . (In fact, Axiom (S10) guarantees this vanishing for all G in a slightly larger category, denotedC G (Z) ≥w+q(x)+1 , but we will not require that additional information.) Equivalently, for any open , and then from the distinguished triangle Since τ ≤−r τ ≥−r Li * F ∼ = h −r (Li * F )[r], the sequence above can be rewritten as The first term above vanishes. Note that Thus, by the inductive assumption, the cohomology sheaves of τ ≥−(r−1) Li * F have the property that for each k, there is a dense open subscheme . This property is precisely the hypothesis of [A,Lemma 8.1], which then tells us that there is a dense open subscheme V ′′ ⊂ Z such that the last term in the exact sequence above vanishes whenever V ⊂ V ′′ . In particular, let us take V = V ′ ∩ V ′′ . The middle term above then clearly vanishes. Since , as desired.
Lemma 5.2. q C G (X) ≤w is a Serre subcategory of C G (X).
Proof. Suppose we have a short exact sequence 0 → F ′ → F → F ′′ → 0 in C G (X). Given x ∈ X gen and a closed subscheme structure i : Z ֒→ X on Gx, consider the exact sequence Suppose F ′ and F ′′ are in q C G (X) ≤w . Then there are dense open subschemes Next, let p be a staggered perversity function. The following alternate characterization of p D − G (X) ≤0 will be useful. Remark 5.4. Note the similarity between the right-hand side of this equation and the definition of s D ≤0 of definition 2.8.
Proof. Throughout the proof, x will denote a point of X gen , and i : Z ֒→ X will denote a closed subscheme structure on Gx.
First, suppose F is concentrated in a single degree with respect to the standard t-structure, say in degree n, and that h n (F ) ∈ p C G (X) ≤−n . If k > n, then of course G (X) and h k (F ) ∈ p C G (X) ≤−k for all k, it follows that F ∈ p D − G (X) ≤0 by the preceding paragraph and a standard induction argument on the number of nonzero cohomology sheaves of F . Finally, suppose that F ∈ D − G (X) and that h k (F ) ∈ p C G (X) ≤−k for all k. For any k ∈ Z, τ ≥k F is in D b G (X), so we already know that τ ≥k F ∈ p D − G (X) ≤0 . But consideration of the distinguished triangle Now, we will prove by downward induction on k that h k (F ) ∈ p C G (X) ≤−k and that τ ≤k−1 F ∈ p D − G (X) ≤0 for all k. These statements hold trivially if k > a. Suppose we know that h k+1 (F ) ∈ p C G (X) ≤−k−1 and τ ≤k F ∈ p D − G (X) ≤0 . By the preceding paragraph, we know that h k Assume r ≤ k − 1 (otherwise, the middle term above vanishes). By Lemma 5.1, Replacing V by a smaller open subscheme if necessary, we may also assume that In the course of the preceding proof, we have also established the following statement.
Corollary 5.5. The category p D − G (X) ≤0 is stable under all standard truncation functions τ ≤k and τ ≥k .
Baric Structures on Coherent Sheaves, II
In this section, we achieve the main goal of the paper: the construction of a class of baric structures on derived categories of equivariant coherent sheaves. These baric structures depend on a function on X gen that plays a role analogous to that played by a staggered perversity in Section 5.
Definition 6.1. Suppose G acts on X with finitely many orbits. For each orbit C ⊂ X, let I C ⊂ O X denote the ideal sheaf corresponding to the reduced closed subscheme structure on C ⊂ X. An s-structure on X is said to be recessed if for each C, I C /I 2 C ∈ C G (X) ≤−1 . For the remainder of the paper, we assume that G acts on X with finitely many orbits, and that X is endowed with a recessed s-structure. (See Remarks 6.10 and 8.3, however.) The assumption that the s-structure is recessed is a mild one: "most" of the s-structures appearing in [T] are recessed, as is the one used in [AS].
Note that I C /I 2 C is always at least in C G (X) ≤0 , since it is a subquotient of O X ∈ C G (X) ≤0 . In addition, since the coherent pullback functor to a locally closed subscheme is right s-exact, it follows that the restriction of a recessed s-structure to any locally closed subscheme is also recessed.
Remark 6.2. It is certainly possible to define the notion of "recessed s-structure" in a way that does not assume finiteness of the number of orbits. (One simply imposes a condition on the ideal sheaf of Gx for every x ∈ X gen , not just for every orbit closure.) However, it seems likely that when there are infinitely many orbits, there are no recessed s-structures.
Given a function q : X gen → Z satisfying (5.1), define a new function q̄ : X gen → Z by q̄(x) = alt Gx − q(x). Note that when G acts on X with finitely many orbits, a function q : X gen → Z satisfying (5.1) may be regarded as a Z-valued function on the set of orbits. It will sometimes be convenient to adopt this point of view, and, given an orbit C ⊂ X, we sometimes write q(C) = q(x C ), where x C ∈ X gen is any generic point of C.
Lemma 6.3. Let G ∈ C G (X), and let j : U ֒→ X be an open subscheme. Suppose F 1 ⊂ G| U is such that F 1 ∈ q C G (U ) ≤w . Then there exists a subsheaf F ⊂ G such that F | U ∼ = F 1 and F ∈ q C G (X) ≤w .
Proof. If U is closed (i.e., if U is a connected component of X), then j * F 1 is naturally a subsheaf of G, so we simply take F ∼ = j * F 1 . Otherwise, let C be an open orbit in U U , and let V be the open subscheme U ∪ C. By induction on the number of orbits in U U , it suffices to find F ⊂ G| V such that F ∈ q C G (V ) ≤w and F | U ∼ = F . Let κ : C ֒→ V be the inclusion map, and let I C be the ideal (2) We proceed by noetherian induction: assume the statement is known if X is replaced by a proper closed subscheme, or if X is retained and Z is replaced by a proper closed subscheme. Suppose F ∈ q D − G (X) ≤w . We show by downward induction on k that h k (Li * F ) ∈ q C G (Z) ≤w . For large k, h k (Li * RF ) = 0, so this holds trivially. Now, assume that h r (Li * F ) ∈ q C G (Z) ≤w for all r > k, and consider the distinguished triangle τ ≤k Li * F → Li * F → τ ≥k+1 Li * F →. Then τ ≥k+1 Li * F is an object of q D b G (Z) ≤w , so for any x ∈ Z gen and any closed subscheme structure κ : Y ֒→ Z on Gx, we know that Lκ * τ ≥k+1 Li * F ∈ q D − G (Y ) ≤w . Consider the exact sequence If Y is a proper closed subscheme of Z, then we have assumed inductively that L(κ • i) * F ∈ q D − G (Y ) ≤w , and in that case, the last term in the sequence above belongs to q C G (Y ) ≤w as well. By Lemma 5.2, the middle term as well, and the existence of the desired open subscheme V ⊂ Y follows.
On the other hand, if Y = Z, and κ is the identity map, then Lemma 5.1 gives us a dense open subscheme G (Z) ≤w and q D − G (X) ≤w are defined by conditions on their cohomology sheaves, the first statement follows from the fact that i * is an exact functor taking q C G (Z) ≤w to q C G (X) ≤w . The second statement follows by duality.
Proposition 6.7. If F ∈ q D − G (X) ≤w and G ∈ q D + G (X) ≥w+1 , then Hom(F , G) = 0. Proof. We proceed by noetherian induction: assume the theorem is known for all proper closed subschemes of X. Let a and b be such that G ∈ D + G (X) ≥a and F ∈ D − G (X) ≤b . Since Hom(F , G) ∼ = Hom(τ ≥a F , G), we may replace F by τ ≥a F and assume that F ∈ q D b G (X) ≤w . Next, let G ′ ∈ D − G (X) ≤−w−1 be such that DG ′ ∼ = G. For a sufficiently small integer c, we will have D(τ ≤c G ′ ) ∈ D + G (X) ≥b+1 . From this, it follows that Hom(F , G) ∼ = Hom(F , D(τ ≥c+1 G ′ )). Replacing G by D(τ ≥c+1 G ′ ), we may assume that G ∈ q D b G (X) ≥w . With F and G both in D b G (X), induction on the number of cohomology sheaves allows us to reduce to the case where both F and G ′ := DG are concentrated in a single degree. By shifting both objects simultaneously, we may assume without loss of generality that F ∈ C G (X). Let x be a generic point of X. There is an open subscheme U ⊂ X containing x such that G ′ | U ∈ C G (U ) ≤alt Gx−q(x)−w−1 . By [A, Remark 3.2 and Lemmas 6.1-6.2], we may replace U by a smaller open subscheme containing x such that G| U is concentrated in a single degree, say d, and such that G[d]| U ∈ C G (U ) ≥q(x)+w+1 . If d > 0, then clearly Hom(F | U , G| U ) = 0. Otherwise, we invoke [A,Axiom (S9)] to replace U by a smaller open subscheme such that Hom(F | U , G| U ) = 0. Let Z be the complementary closed subspace to U , and consider the exact sequence where i Z ′ : Z ′ ֒→ X ranges over all closed subscheme structures on Z. We have just seen that the last term vanishes. Since Li * , the first term vanishes by induction. So Hom(F , G) = 0, as desired.
Proof. Once again, we proceed by noetherian induction, and assume the result is known for all proper closed subschemes of X. Now, assume first that F is a sheaf. Let C ⊂ X be an open (and possibly nonreduced) orbit, and let i : C ֒→ X be the inclusion of its closure. By Lemma 6.3, there exists a subsheaf F 1 ⊂ F such that F 1 ∈ q C G (X) ≤w and F 1 | C ∼ = σ ≤w+q(C) (F | C ). Next, form a short exact sequence 0 → F 1 → F → G → 0.
Let b = cod C. Then i * Ri ! DG ∈ D + G (X) ≥b , and, by [A,Lemma 6.1], we know that i * Ri ! DG| C ∼ = DG| C is concentrated in degree b. Furthermore, [A,Proposition 6.8] tells us that DG[b]| C ∈ C G (C) ≤alt C−q(C)−w−1 . (If C is reduced, these assertions about DG| C are immediate from the fact that D is an exact functor, but in general, we must invoke [A, Lemma 6.1 and Proposition 6.8].) Now, we use Lemma 6.3 again to find a subsheaf G 1 ⊂ h b (i * Ri ! DG) such that G 1 ∈qC G (X) ≤−w−1 and and then complete it to a distinguished triangle Here, G ′ is necessarily supported on the complement of C. Let F 2 = D(G 1 [−b]), and let H = DG ′ , so we have a distinguished triangle G (X) ≥0 as well. Note also that F 2 ∈ q D b G (X) ≥w+1 , and that , and H is supported on a proper closed subscheme, we conclude that F ∈ q D b G (X) ≤w * q D b G (X) ≥w+1 , as desired. The last statement of the proposition holds by noetherian induction as well, since F 1 , H, and F 2 all lie in D b G (X) ≥0 by construction. The result also follows for any object of D b G (X) that is concentrated in a single degree. Finally, for general objects F ∈ D b G (X), we proceed by induction on the number of nonzero cohomology sheaves. Let a ∈ Z be such that τ ≤a F and τ ≥a+1 F are both nonzero. Then, they both have fewer nonzero cohomology sheaves than F , and we assume inductively that there exist distinguished triangles By Proposition 6.7, this composition is 0, so we see from the exact sequence G (X) by completing this diagram as follows, using the 9-lemma [BBD, Proposition 1.1.11]: are stable under shift and extensions, we see that F ′ ∈ q D b G (X) ≤w and F ′′ ∈ q D b G (X) ≥w+1 , as desired. Moreover, if F lies in D b G (X) ≥0 , then so do τ ≤a F and τ ≥a+1 F , and hence, by induction, the objects F ′ 1 , F ′′ 1 , F ′ 2 , and F ′′ 2 all lie in D b G (X) ≥0 as well. It then follows that F ′ are F ′′ are in D b G (X) ≥0 , as desired. Proof of Theorem 6.4. Lemma 6.5 and Propositions 6.7 and 6.8 together state that all the axioms for a baric structure hold. Moreover, the last part of Proposition 6.8 tells us that the baric truncation functors are left t-exact (with respect to the standard t-structure), and it is obvious from the definition of q D b G (X) ≤w that it is preserved by the truncation functors τ ≤n and τ ≥n . Thus, the baric struc- w∈Z is compatible with the standard t-structure. Next, for any closed subscheme i : Z ֒→ X, Lemma 6.6 tells us that Li * is right baryexact and that Ri ! is left baryexact. Thus, this baric structure is hereditary, and hence HLR by Theorem 4.6.
It remains to prove that the baric structure is bounded (and therefore nondegenerate). Every sheaf in C G (X) belongs to some C G (X) ≤n , and hence to some q C G (X) ≤w (simply take w to be the maximum value of n − q(x)). Since an object F ∈ D b G (X) has finitely many nonzero cohomology sheaves, we can clearly find a w such that all its cohomology sheaves belong to q C G (X) ≤w , so that F ∈ q D b G (X) ≤w . The same reasoning yields an integer v such that DF ∈qD b G (X) ≤−v , and hence F ∈ q D b G (X) ≥v . Thus, the baric structure is bounded and nondegenerate.
We can now verify that the notation q D + G (X) ≥w is consistent with the notation of Section 4. Corollary 6.9. We have We have already observed that the definition ofqD − G (X) ≤−w is consistent with the notation of Section 4, so by Lemma 4.3, for F ∈ D − G (X), we have F ∈ q D − G (X) ≤−w if and only if Hom(F , G) = 0 for all G ∈qD b G (X) ≥−w+1 . Applying D, we have F ∈ q D + G (X) ≥w if and only if Hom(DF , DG) = 0 for all G ∈ q D b G (X) ≤w−1 , or, equivalently, if Hom(G, F ) = 0 for all G ∈ q D b G (X) ≤w−1 . The corollary follows by another application of Lemma 4.3.
Remark 6.10. The proof of Lemma 6.3 depends in an essential way on the assumption of finitely many orbits and a recessed s-structure, but no other arguments given in this section do. (The role of the orbit closure C in the proof of Proposition 6.8 could instead have been played by Gx for some generic point x.) By imposing additional conditions that permit us to evade Lemma 6.3, we can find a version of Theorem 6.4 that holds in much greater generality.
Specifically, assume that the function q : X gen → Z is monotone: that is, if x ∈ Gy, then q(x) ≥ q(y). Suppose we have a coherent sheaf G ∈ C G (X), an open subscheme j : U ֒→ X, and a subsheaf F 1 ⊂ G| U with F 1 ∈ q C G (U ) ≤w . By replacing U by a smaller open subscheme, we may assume that F 1 ∈ C G (U ) ≤q(x)+w , where x is a generic point of U . Then F 1 is a subsheaf of σ ≤q(x)+w G| U , and standard arguments show that there is a subsheaf F ⊂ σ ≤q(x)+w G supported on U such that F | U ∼ = F 1 . The monotonicity assumption then implies that F ∈ q C G (X) ≤w . This reasoning can be substituted for invocations of Lemma 6.3 for q C G (X) ≤w . Similarly, if q is comonotone, meaning thatq is monotone, then the reasoning above can replace invocations of Lemma 6.3 for the categoryqC G (X) ≤w . The proof of Theorem 6.4 uses Lemma 6.3 in both these ways.
We thus obtain the following result: suppose X is a scheme satisfying the assumptions of Section 5, equipped with an s-structure. In particular, we do not assume that G acts with finitely many orbits, or that the s-structure is recessed. If q : X gen → Z is both monotone and comonotone, then the collection of subcategories ({ q D b G (X) ≤w }, { q D b G (X) ≥w }) w∈Z is a bounded, nondegenerate HLR baric structure on X.
Multiplicative Baric Structures and s-structures
In this section, we study the relationship between multiplicative baric structures on the triangulated category D b G (X) and s-structures on the abelian category C G (X). The authors had originally hoped that, under appropriate conditions, the two notions would be equivalent, and that the developments in Sections 5 and 6 could be simplified by replacing the latter concept with the former. In other words, the hope was that there would be a one-to-one correspondence between multiplicative HLR baric structures and s-structures on a G-scheme X.
This turns out to be not quite correct. Rather, we prove here that there is a one-to-one correspondence between multiplicative baric structures and a certain class of pre-s-structures, including all s-structures. (A pre-s-structure is a collection of subcategories of C G (X) satisfying the first six of the ten axioms for an s-structure in [A].) It would be interesting to look for an additional axiom on multiplicative baric structures that is satisfied precisely by those baric structures corresponding to s-structures, but we have not pursued this here.
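Before stating the comparison, it may help to recall the shape of the axioms involved. In the formulation used throughout this paper (restated here only as a convenience; the definitive statement is the definition given earlier), a baric structure on D b G (X) is a family of pairs of thick subcategories ({D b G (X) ≤w }, {D b G (X) ≥w }) w∈Z such that, for every w,
$$
\begin{aligned}
& D^b_G(X)_{\leq w} \subseteq D^b_G(X)_{\leq w+1}, \qquad D^b_G(X)_{\geq w} \supseteq D^b_G(X)_{\geq w+1};\\
& \operatorname{Hom}(\mathcal{F},\mathcal{G}) = 0 \quad \text{whenever } \mathcal{F} \in D^b_G(X)_{\leq w} \text{ and } \mathcal{G} \in D^b_G(X)_{\geq w+1};\\
& \text{every } \mathcal{F} \in D^b_G(X) \text{ sits in a distinguished triangle } \mathcal{F}' \to \mathcal{F} \to \mathcal{F}'' \to \mathcal{F}'[1]\\
& \qquad \text{with } \mathcal{F}' \in D^b_G(X)_{\leq w} \text{ and } \mathcal{F}'' \in D^b_G(X)_{\geq w+1}.
\end{aligned}
$$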
We say that a baric structure ({D b G (X) ≤w }, {D b G (X) ≥w }) w∈Z is multiplicative if either of the following two equivalent conditions holds: (1) If F ∈ D b G (X) ≤w and G ∈ D b G (X) ≤v , then F ⊗ L G ∈ D − G (X) ≤w+v . (2) If F ∈ D b G (X) ≤w and G ∈ D b G (X) ≥v , then RHom(F , G) ∈ D + G (X) ≥v−w . Theorem 7.1. Suppose ({D b G (X) ≤w }, {D b G (X) ≥w }) w∈Z is a multiplicative baric structure on X. Then the categories C G (X) ≤w = C G (X) ∩ D b G (X) ≤w , C G (X) ≥w = {F ∈ C G (X) | Hom(G, F ) = 0 for all G ∈ C G (X) ≤w−1 } constitute a pre-s-structure on X.
Conversely, given an s-structure ({C G (X) ≤w }, {C G (X) ≥w }) w∈Z on a scheme X with finitely many G-orbits, the categories G (X) ≤w−1 } constitute a multiplicative baric structure on X.
Proof. Suppose first that ({D b G (X) ≤w }, {D b G (X) ≥w }) w∈Z is a multiplicative baric structure on X. To show that the categories above constitute a pre-s-structure, we must verify axioms (S1)-(S6) from [A]. (The reader is referred to [A] for the statements of these axioms.) Axioms (S2) and (S3) are clear from the definitions, and axiom (S1) follows from the fact that ({D b G (X) ≤w }, {D b G (X) ≥w }) w∈Z is compatible with the standard t-structure.
Let us prove Axiom (S4). Let F be an object of C G (X). Since F is noetherian and C G (X) ≤w is a Serre subcategory, there is a largest subobject F ′ ⊂ F belonging to C G (X) ≤w . Then F /F ′ must belong to C G (X) ≥w+1 : otherwise, there is a nonzero map G → F /F ′ with G ∈ C G (X) ≤w , whose image I ≠ 0 belongs to C G (X) ≤w ; but then the inverse image of I in F lies in C G (X) ≤w and strictly contains the maximal subobject F ′ , a contradiction.
Axiom (S5) follows from the fact that the baric structure on D b G (X) is bounded, and Axiom (S6) follows from the multiplicativity of the baric structure and the fact that for F , G ∈ C G (X), we have F ⊗ G ∼ = h 0 (F ⊗ L G). Now, suppose we are given an s-structure ({C G (X) ≤w }, {C G (X) ≥w }) w∈Z . Let 0 denote the constant function X gen → Z of value 0. We claim that 0 C G (X) ≤w = C G (X) ≤w . It is clear from the definition that C G (X) ≤w ⊂ 0 C G (X) ≤w . Conversely, if x ∈ X gen is a generic point of the support of an object F / ∈ C G (X) ≤w , it follows from the gluing theorem for s-structures [A,Theorem 5.3] that there is no open subscheme V ⊂ Gx such that the restriction of F to V lies in C G (V ) ≤w , so F / ∈ 0 C G (X) ≤w . Since 0 C G (X) ≤w = C G (X) ≤w , we see that the categories ({D b G (X) ≤w }, {D b G (X) ≥w }) w∈Z defined in the statement of the theorem coincide with the baric structure constructed in Theorem 6.4 by taking q = 0. The fact that this baric structure is multiplicative is a consequence of Proposition 7.2 below.
Proposition 7.2. Let X be a scheme with finitely many G-orbits, and let p, q : X gen → Z be functions satisfying (5.1). Suppose F ∈ p D − G (X) ≤w .
object is an (a priori possibly nonreduced) orbit closure. Since X is assumed to consist of finitely many G-orbits, it suffices to show that the support of a simple object is irreducible. Let κ : X ′ ֒→ X be the scheme-theoretic support of F ; that is, F ∼ = κ * F ′ , and the restriction of F ′ to any open subscheme of X ′ is nonzero. Assume X ′ is reducible; let i : Z ֒→ X ′ and i ′ : Z ′ → X ′ be proper closed subschemes such that Z ∪ Z ′ = X ′ . Let U = Z (Z ∩ Z ′ ) and U ′ = Z ′ (Z ∩ Z ′ ). Clearly, U and U ′ are disjoint open subschemes of X ′ . Let V = U ∪U ′ . The natural morphism i * Ri ! F ′ | V → F ′ | V is the inclusion of the direct summand of F | V supported on U . In particular, the above morphism is neither 0 nor an isomorphism. But it is also the restriction to V of the natural morphism q h 0 (i * Ri ! F ′ ) → F ′ , so this latter is also neither 0 nor an isomorphism. Therefore, F ′ is not simple, and hence neither is F .
Epigenetic regulatory mechanism of ADAMTS12 expression in osteoarthritis
Background Osteoarthritis (OA) is a degenerative joint disease with lacking effective prevention targets. A disintegrin and metalloproteinase with thrombospondin motifs 12 (ADAMTS12) is a member of the ADAMTS family and is upregulated in OA pathologic tissues with no fully understood molecular mechanisms. Methods The anterior cruciate ligament transection (ACL-T) method was used to establish rat OA models, and interleukin-1 beta (IL-1β) was administered to induce rat chondrocyte inflammation. Cartilage damage was analyzed via hematoxylin-eosin, Periodic Acid-Schiff, safranin O-fast green, Osteoarthritis Research Society International score, and micro-computed tomography assays. Chondrocyte apoptosis was detected by flow cytometry and TdT dUTP nick-end labeling. Signal transducer and activator of transcription 1 (STAT1), ADAMTS12, and methyltransferase-like 3 (METTL3) levels were detected by immunohistochemistry, quantitative polymerase chain reaction (qPCR), western blot, or immunofluorescence assay. The binding ability was confirmed by chromatin immunoprecipitation-qPCR, electromobility shift assay, dual-luciferase reporter, or RNA immunoprecipitation (RIP) assay. The methylation level of STAT1 was analyzed by MeRIP-qPCR assay. STAT1 stability was investigated by actinomycin D assay. Results The STAT1 and ADAMTS12 expressions were significantly increased in the human and rat samples of cartilage injury, as well as in IL-1β-treated rat chondrocytes. STAT1 is bound to the promoter region of ADAMTS12 to activate its transcription. METTL3/ Insulin-like growth factor 2 mRNA-binding protein 2 (IGF2BP2) mediated N6-methyladenosine modification of STAT1 promoted STAT1 mRNA stability, resulting in increased expression. ADAMTS12 expression was reduced and the IL-1β-induced inflammatory chondrocyte injury was attenuated by silencing METTL3. Additionally, knocking down METTL3 in ACL-T-produced OA rats reduced ADAMTS12 expression in their cartilage tissues, thereby alleviating cartilage damage. Conclusion METTL3/IGF2BP2 axis increases STAT1 stability and expression to promote OA progression by up-regulating ADAMTS12 expression. Supplementary Information The online version contains supplementary material available at 10.1186/s10020-023-00661-2.
Introduction
Osteoarthritis (OA) is a main cause of disability and is a degenerative joint disease that affects > 242 million individuals worldwide (Ghouri and Conaghan 2021). The prevalence of OA is increasing due to the ageing population and some risk factors such as obesity and inflammation (Ghouri and Conaghan 2021). Chondrocyte dysfunction leads to chondrocyte extracellular matrix degradation and osteoarthritis (Ramasamy et al. 2021). Hence, attenuating cartilage/chondrocyte damage might help improve OA. Some molecular biomarkers have been regarded as potential therapeutic targets for OA, such as insulin-like growth factor 1, transforming growth factor-β, and a disintegrin and metalloproteinase with thrombospondin motifs 5 (ADAMTS5) (Wen et al. 2021;Yoo et al. 2022;Jiang et al. 2021). However, more biomarkers should be studied to further understand the pathological mechanisms of OA.
Proteins belonging to the ADAMTS family are expressed in the cartilage and are associated with joint health and diseases, including OA (Yang et al. 2017). Among them, ADAMTS4, ADAMTS5, and ADAMTS7 have been reported to act as potential targets for OA treatment Verma and Dalal 2011;Zhang et al. 2015), and ADAMTS12 is an important target for cancer, diabetes mellitus, and stroke treatment (Li et al. 2020a;Tastemur et al. 2021;Witten et al. 2020). Researchers have revealed that ADAMTS12 is required for the inflammatory response (Moncada-Pazos et al. 2018) and is associated with cartilage oligomeric matrix protein degradation (Luan et al. 2008), suggesting the role of ADAMTS12 as a promising target for OA treatment.
Signal transducer and activator of transcription 1 (STAT1) is a nuclear transcription factor that regulates genes associated with cell survival and the inflammatory response (Butturini et al. 2020). STAT1 is involved in infection, immunity, and inflammation (Mogensen 2018; Benedetti et al. 2021). Studies have reported that STAT1 expression is increased in inflammatory arthritis and in the lipopolysaccharide (LPS)-induced OA model, and that STAT1 suppression attenuates LPS-induced inflammation and pyroptosis in chondrocytes (Walker et al. 2006; Jin et al. 2021). By analyzing the promoter region sequence of ADAMTS12, we suspected that the abnormal elevation of ADAMTS12 in OA might be related to the transcriptional activity of STAT1 on this gene.
N6-methyladenosine (m6A) modification regulates the stability of gene expression to drive OA progression. Methyltransferase-like 3 (METTL3) is an m6A writer that regulates m6A levels by acting as the major methyltransferase and is involved in OA progression by regulating extracellular matrix (ECM) degradation, the inflammatory response, and chondrocyte damage (Sang et al. 2021; Liu et al. 2019a). Insulin-like growth factor 2 mRNA-binding protein 2 (IGF2BP2) acts as an m6A reader that recognizes m6A and stabilizes m6A-modified mRNAs (Bi et al. 2019) and is suggested to be involved in bone function and the inflammatory response (Liu et al. 2018; Wang et al. 2021). Previous studies revealed that m6A modification of STAT1 mRNA is mediated by METTL3 (Liu et al. 2019b), but the mechanism of STAT1 m6A modification in OA remains unknown. Thus, this study aimed to explore the involvement of the m6A writing and reading process mediated by METTL3 and IGF2BP2 in the regulation of STAT1 expression, and thereby of ADAMTS12, in OA progression.
This study established rat and cell OA models to clarify the molecular mechanism of ADAMTS12 overexpression in OA and demonstrated the effect of METTL3/ IGF2BP2-mediated m6A modification on the STAT1/ ADAMTS12 regulatory axis. This study may propose a novel understanding of OA pathogenesis and provide new targets for OA treatment.
Patient sample collection
OA cartilage specimens with visible lesions were obtained from five patients who were diagnosed with severe hip OA and had total hip replacement surgery (Age: 71.8 ± 3.1; Female: 3; Male: 2). Specimens of normal cartilage were obtained from five patients who had total hip replacement surgery due to a fresh traumatic femoral neck fracture (Age: 73.2 ± 6.1; Female: 3; Male: 2). No hip disease had been diagnosed in the medical history of patients with a traumatic femoral neck fracture, and macroscopic examination confirmed their intact and smooth cartilage tissues. All surgeries were performed by the same team of orthopedists. This study was approved by the Medical Ethics Committee of Hunan Provincial People's Hospital (the first affiliated hospital of Hunan Normal University), [2022] Scientific Research Ethics Review NO: [120], and the patients were between 65 and 85 years of age with informed consent. Relative experiments were repeated 5 times in patients.
Animal experiments
The Sprague-Dawley male rats (12 weeks old, 300-350 g) obtained from Jiangsu Aniphe Biolaboratory Inc. (Nanjing, China) were acclimated for 1 week at 23℃ ± 1℃ with a 12-h light/dark cycles. The rats were randomly divided into 3 groups (n = 5/group): sham, anterior cruciate ligament transection (ACL-T), and ACL-T + S-Adenosylhomocysteine (SAH). ACL transection surgery was performed in the ACL-T group to induce the OA model following a previous study (Ma et al. 2020). Rats in ACL-T + SAH group underwent ACL-T transection surgery and were injected with 10 mg/kg of METTL3 inhibitor SAH (SAH; MedChemExpress, Monmouth Junction, NJ, USA) dissolved in normal saline into the right knee joint. Rats were euthanized with 5% isoflurane and cervical dislocation after 4 weeks. The cartilage tissues were collected and used for related analyses. This study was approved by Institutional Animal Care and Use Committee of Hubei Provincial Academy of Preventive Medicine with grant No.202120223.
Histopathological analysis
The cartilage damage was investigated by Periodic Acid-Schiff (PAS), hematoxylin-eosin (HE), and safranin O-fast green assays referring to previous studies (Wu et al. 2015;(Li et al. 2020b;Chang et al. 2021;Liao et al. 2021). In brief, cartilage tissues were fixed in 4% paraformaldehyde (Beyotime, Shanghai, China), and then decalcified by immersion in a decalcification solution for softening, decalcified in 10% disodium ethylenediaminetetraacetic acid (EDTA; Solarbio) until complete demineralization, embedded in paraffin, and cut into 3-µm tissue sections, followed by staining with PAS (Beyotime), HE (Beyotime), and safranin O-fast green (Solarbio) following the manufacturer's instructions. The stained sections were observed using a 200 × magnification microscope (Olympus, Tokyo, Japan). The cartilage structure damage was investigated using the Osteoarthritis Research Society International (OARSI) score referring to a previous report (Pritzker et al. 2006).
Micro-computed tomography (CT) assay
The cartilage tissues were fixed in 4% paraformaldehyde and then scanned by micro-CT for morphological evaluation of the cartilage damage.
Chondrocyte isolation and treatment
Primary chondrocytes were isolated from knee articular cartilage tissues of rats using trypsin and collagenase II following a previous report (Zhang et al. 2022). Chondrocytes were cultured in DMEM/F12 medium (Gibco, Grand Island, NY, USA) containing 10% fetal bovine serum (Gibco) and 1% penicillin/streptomycin (Gibco) at 37 °C in 5% CO 2 . Interleukin-1 beta (IL-1β) was used to mimic the inflammatory environment in cultured chondrocytes. Chondrocytes were incubated with 10 ng/mL of IL-1β (MedChemExpress) to induce the OA cellular model as previously reported (Zhang et al. 2022). The corresponding experiments were repeated 3 times in rat primary chondrocytes.
Chromatin immunoprecipitation (ChIP)-qPCR
The lysed cartilages or chondrocytes were fixed with 1% formaldehyde and then quenched with glycine. The chromatin lysates were obtained by ultrasonic with Sonifier (Branson, Missouri, MO, USA) to obtain DNA fragments of 200-1000 bp. ChIP assay was performed using a SimpleChIP kit (Cell Signaling, Danvers, MA, USA) following the manufacturer's instructions. The antibodies anti-STAT1 (LS-B591, 1:100 dilution, LifeSpan Bio-Sciences, Seattle, WA, USA) or IgG were used for DNA sample immunoprecipitation. ADAMTS12 DNA level was detected by qPCR.
Dual-luciferase reporter assay
The WT or MUT sequences of the ADAMTS12 promoter were inserted into psiCheck2 vectors (Promega) to generate the reporter vectors, which were then co-transfected with the pcDNA3.1 empty vector (OE-NC) or the STAT1 overexpression vector (OE-STAT1) in chondrocytes. Luciferase activity was detected using a dual-luciferase reporter assay system (Promega) after 48 h. The experiments were repeated 3 times in 293T cells.
RNA immunoprecipitation (RIP)
The RIP assay was performed using the Magna RIP kit (Sigma-Aldrich) following the manufacturer's protocols. Cartilage samples from patients or rats, as well as chondrocytes, were lysed and incubated overnight with magnetic beads conjugated with anti-IGF2BP2 (11601-1-AP, 1:100 dilution, ProteinTech) or IgG (ab205718, 1:200 dilution, Abcam). The immunoprecipitated mSTAT1 was examined and expressed as % of input (cell lysates).
MeRIP-qPCR
The mRNAs from cartilage samples or chondrocytes were incubated with anti-m6A-conjugated magnetic beads with a Magna MeRIP™ m6A kit (Sigma-Aldrich) following the manufacturer's protocols. The m6A-modified STAT1 was immunoprecipitated and detected by qPCR.
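The ChIP-, RIP-, and MeRIP-qPCR measurements above are all expressed relative to the input lysate. Purely as an illustration of that calculation, the sketch below applies the standard percent-of-input formula to raw Ct values; the 1% input fraction and all Ct numbers are hypothetical placeholders rather than data from this study.

```python
import math

def percent_input(ct_ip: float, ct_input: float, input_fraction: float = 0.01) -> float:
    """Standard percent-of-input for IP-qPCR (ChIP/RIP/MeRIP).

    The input Ct is first adjusted for the fraction of lysate saved as input
    (e.g. 1% input -> subtract log2(100)), then compared with the IP Ct.
    """
    ct_input_adjusted = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adjusted - ct_ip)

# Hypothetical Ct values for a specific-antibody IP versus an IgG control
ip_enrichment = percent_input(ct_ip=26.5, ct_input=24.0)    # e.g. anti-m6A or anti-IGF2BP2
igg_enrichment = percent_input(ct_ip=31.0, ct_input=24.0)   # IgG background
print(f"% input (IP):  {ip_enrichment:.3f}")
print(f"% input (IgG): {igg_enrichment:.3f}")
print(f"fold enrichment over IgG: {ip_enrichment / igg_enrichment:.1f}")
```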
Apoptotic ratio by flow cytometry analysis
Cell samples were collected from each group, including those floating in the medium. The cell samples to be tested were obtained after cleaning, centrifugation, and Binding Buffer resuspension. Annexin V-FITC at 5 µL was added and incubated at room temperature for 15 min in the dark, followed by 5 µL of propidium iodide, before testing. Flow cytometry (CytoFLEX, BECKMAN) was used for detection.
Apoptosis observed by TdT-mediated dUTP nick-end labeling (TUNEL)
Cells were fixed with 4% paraformaldehyde for 20 min, rinsed in phosphate-buffered saline (PBS), and treated with ethanol/acetic acid (2:1) at 20 °C for 5 min. Then, cells were washed with PBS and permeabilized at room temperature for 15 min with 0.2% Triton X-100 diluted in 0.1% sodium citrate (w/v). Then, cells were immersed for 30 min in TUNEL buffer: 30 mM Tris-HCl buffer (pH 7.2), 140 mM sodium cacodylate, 1 mM cobalt chloride, and 0.3% Triton X-100. After incubating for 2 h at 37 °C in the TUNEL reaction mixture (Roche Diagnostics), the cells were washed with PBS, incubated at room temperature in the dark with Cy3-conjugated streptavidin (1:500; Jackson ImmunoResearch Laboratories), and then counterstained with DAPI (1:2,000).
Statistical analysis
Data are presented as mean ± standard deviation from three independent experiments. Differences were analyzed by Student's t-test or by analysis of variance (ANOVA) followed by the least significant difference (LSD) test using the Statistical Package for the Social Sciences version 17.0 (SPSS, Chicago, IL, USA). Differences were considered significant at P < 0.05.
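The comparisons described above were carried out in SPSS. Purely as an illustration of the same workflow, the sketch below runs an independent-samples t-test and a one-way ANOVA in Python; the group values are hypothetical placeholders, and the pairwise follow-up shown is an uncorrected t-test in the spirit of the LSD procedure, not the SPSS implementation itself.

```python
from scipy import stats

# Hypothetical relative expression values (three independent experiments per group)
control = [1.00, 0.95, 1.05]
il1b = [2.10, 2.35, 2.20]
il1b_sh = [1.30, 1.25, 1.40]  # e.g. IL-1beta + shRNA group (placeholder numbers)

# Two-group comparison: independent-samples t-test
t_stat, p_two_group = stats.ttest_ind(control, il1b)

# Three-group comparison: one-way ANOVA, then an uncorrected pairwise t-test (LSD-style)
f_stat, p_anova = stats.f_oneway(control, il1b, il1b_sh)
p_pairwise = stats.ttest_ind(il1b, il1b_sh).pvalue if p_anova < 0.05 else None

print(f"t-test p = {p_two_group:.4f}; ANOVA p = {p_anova:.4f}; pairwise p = {p_pairwise}")
```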
The up-regulated ADAMTS12 in OA is related to the transcriptional activation of STAT1
Obvious histological changes were observed in the cartilage of patients with OA (Fig. 1A). The safranin O-fast green and PAS staining suggested a significant reduction in the content of cartilage. The protein expression levels of ADAMTS12 and STAT1 measured by the IHC staining in cartilaginous tissue of pathological changes were significantly increased compared with the non-pathological ones (Fig. 1B). Further, STAT1 was confirmed to bind to the promoter region of ADAMTS12 through the ChIP-qPCR assay in cartilage samples (Fig. 1C). Additionally, STAT1 over-expression increased the transcriptional activity of ADAMTS12 by binding motifs of 5'-GCGTTCTTGGAAACGCAGA-3' through the EMSA and dual-luciferase reporter analysis (Fig. 1D-E).
Next, STAT1 binding to the ADAMTS12 promoter region was found in a model of IL-1β -induced chondrocyte inflammation ( Fig. 2A-B). Knocking down STAT1 by shRNA in the presence of IL-1β ( Figure S1) resulted in a significant decrease in the mRNA and protein levels of ADAMTS12 and STAT1 (Fig. 2C-D). Meanwhile, the apoptosis of chondrocytes was significantly reduced when shRNA interfered with STAT1 ( Fig. 2E-F).
These results indicate that STAT1 promotes ADAMTS12 expression through transcriptional regulation under the inflammatory environment, leading to degradation of the chondrocyte extracellular matrix.
(From the Fig. 1 legend: binding of STAT1 on the ADAMTS12 promoter in human cartilage tissues by ChIP-qPCR and EMSA analyses, n = 5; luciferase reporter assay for binding of STAT1 on the ADAMTS12 promoter in 293T cells, n = 3; *P < 0.05, **P < 0.01.)

In particular, chondrocyte inflammation is induced by exposing chondrocytes to a culture medium containing LPS or IL-1β. However, LPS could not increase the mRNA or protein expression levels of ADAMTS12, while IL-1β could (Fig. 3). Therefore, IL-1β was chosen as the inducer of chondrocyte inflammation throughout the study.
METTL3 mediates m6A modification of STAT1 in OA
The RNA methylation level of STAT1 was analyzed in cartilage tissues to clarify why STAT1 is up-regulated in OA and thereby affects ADAMTS12 expression. Rat OA models established by ACL-T, with significant cartilage pathological changes and increased ADAMTS12 expression, are presented (Figure S2). The result revealed a noticeably higher methylation level in the cartilage tissues of patients with OA than in normal cases (Fig. 4A). The expression of the methyltransferase METTL3, an m6A writer, was significantly enhanced in cartilage tissues of patients with OA compared to the normal group (Fig. 4B-D), consistent with that in OA rat models (Fig. 4E-H). Additionally, the m6A modification degree of STAT1 was detected and was enhanced under IL-1β stimulation in the chondrocyte inflammation model, whereas METTL3 silencing by shRNA (Figure S3) attenuated this effect (Fig. 4I). Similarly, the mRNA and protein levels of STAT1 showed the same changes as its m6A modification levels in each group (Fig. 4J-K). These results indicate that METTL3 up-regulates STAT1 expression by enhancing its m6A modification.

(From the Fig. 2 legend: rat chondrocytes were transfected with NC or shSTAT1, followed by IL-1β incubation, and grouped as control, IL-1β, NC + IL-1β, or shSTAT1 + IL-1β. (A) ChIP-qPCR analysis of STAT1 binding on the ADAMTS12 promoter in rat chondrocytes stimulated with or without IL-1β, n = 3. (B) Binding of STAT1 on the ADAMTS12 promoter in rat chondrocytes by EMSA, n = 3. (C-D) RT-qPCR and western blot assays for STAT1 and ADAMTS12 levels in rat chondrocytes, n = 3. (E) TUNEL staining of rat chondrocytes, n = 3. (F) Apoptosis of rat chondrocytes detected by flow cytometry, n = 3. *P < 0.05; **P < 0.01.)
IGF2BP2 mediates mSTAT1 stabilization and promotes its expression in OA
M6A-modified RNAs may be degraded or stabilized, thereby showing different translation levels. IGF2BP2 is known as an m6A reading protein; it is an RNA-binding protein that recognizes methylation modifications on mRNA and promotes RNA stabilization. RIP-qPCR assays in cartilage tissues from both patients and rat models showed that IGF2BP2 could bind mSTAT1, with stronger binding in patients with OA and in rats of the OA model (Fig. 5A-B). Similarly, the binding between IGF2BP2 and mSTAT1, as well as the mRNA expression level of mSTAT1, in inflammatory chondrocytes was enhanced after IL-1β treatment (expressed as fold change) (Fig. 5C), which suggests that IGF2BP2 may specifically recognize and bind to m6A-modified mSTAT1, thereby up-regulating its expression.
We performed RNA stability assays in chondrocytes over-expressing METTL3, with or without simultaneous knockdown of IGF2BP2, to determine whether the up-regulation of mSTAT1 was due to the elevation of m6A modification. The results revealed a greater abundance of mSTAT1 by qPCR detection in the METTL3 over-expressing group. However, the abundance of mSTAT1 significantly decreased when shRNA interfered with IGF2BP2 expression (Fig. 5D). These cell samples were tested before and after IL-1β induction to evaluate the effects of the simulated inflammatory environment on m6A modification and mSTAT1 expression levels. Additionally, the expression levels of STAT1 and ADAMTS12, which were up-regulated in IL-1β-treated chondrocytes, were further augmented by METTL3 over-expression, and this effect was reversed by IGF2BP2 silencing (Fig. 5E-F). Concurrently, the increased apoptosis of chondrocytes induced by METTL3 up-regulation was reversed by IGF2BP2 interference (Fig. 5G-H). These results indicate that IGF2BP2 recognizes METTL3-mediated m6A modification of STAT1 mRNA and stabilizes it, thereby up-regulating STAT1 and ADAMTS12 expression and aggravating chondrocyte injury.
METTL3 promotes chondrocyte injury by up-regulating ADAMTS12 in vitro and in vivo
In vivo and in vitro experiments were conducted to investigate whether METTL3 could regulate ADAMTS12 expression by affecting STAT1 in OA. Relative mRNA and protein levels of METTL3, STAT1, and ADAMTS12 were increased after IL-1β stimulation, and this increase was reversed by reducing METTL3 through shRNA. Transfection with the ADAMTS12 over-expression plasmid increased ADAMTS12, but it did not change METTL3 levels (Fig. 6A-C). Additionally, the apoptosis in IL-1β-treated chondrocytes was decreased by METTL3 silencing, which could be reversed by ADAMTS12 over-expression (Fig. 6D-E). These data support our belief that the effects of METTL3 on chondrocyte injury are mediated by ADAMTS12.

Subsequently, we analyzed the effects of the METTL3/ADAMTS12 axis in ACL-T-induced OA rat models. The rats were divided into the sham, ACL-T, and ACL-T + SAH (a specific small-molecule inhibitor of METTL3) groups. The results of HE, PAS, and safranin O-fast green staining revealed that the cartilage damage and fibrosis induced by ACL-T were mitigated by the METTL3 inhibitor SAH (Fig. 7A). Cartilage damage assessed by micro-CT and the OARSI score was also decreased by SAH (Fig. 7B and C). Additionally, both mRNA and protein levels of STAT1, METTL3, and ADAMTS12 in cartilage tissues were increased in the ACL-T group compared to the sham group but significantly decreased when SAH was administered (Fig. 7D-F). These results suggest that SAH administration can significantly improve the symptoms of OA and inhibit the expression of STAT1 and ADAMTS12 in vivo. Additionally, we did not detect over-enrichment of STAT1 in the ADAMTS12 promoter region in the cartilage tissue of ACL-T + SAH rats compared with the sham group. Although ACL-T could significantly increase the enrichment of STAT1, the enrichment degree decreased to a level not significantly different from the sham group after SAH administration (Fig. 7G). STAT1 functioned as a transcription factor and could promote ADAMTS12 transcription in the OA model group by regulating its promoter region. The addition of SAH reduced the m6A modification of STAT1, thereby relieving the effects of STAT1 on ADAMTS12 transcription. Similarly, IGF2BP2 binding to STAT1 mRNA was enhanced in the ACL-T group but, in the cartilage tissue of rats administered SAH, was not significantly different from that in the sham group (Fig. 7H). In conclusion, the mechanism by which METTL3 inhibition can significantly alleviate OA is related to its participation in regulating the stability of mSTAT1, thereby affecting the expression level of ADAMTS12.

(Fig. 7 legend: METTL3 regulates osteoarthritis progression in OA rats via mediating ADAMTS12. OA rats were induced by the ACL-T method and then treated with the METTL3 inhibitor SAH. Rats were grouped as sham, ACL-T, and ACL-T + SAH. (A) HE, PAS, and safranin O-fast green staining of rat cartilage tissues in each group, n = 5. (B and C) OARSI score and micro-CT assays in each group, n = 5. (D-F) RT-qPCR, western blot, and IHC assays for METTL3, ADAMTS12, and STAT1 levels in rat cartilage tissues in each group, n = 5. (G) Binding of STAT1 to the ADAMTS12 promoter region in rat cartilage tissues, tested by ChIP assay, n = 5. (H) Binding of STAT1 mRNA to IGF2BP2 in rat cartilage tissues, tested by RIP assay, n = 5. *P < 0.05; **P < 0.01; ***P < 0.001.)
Discussion
OA is a common joint disease associated with the loss of articular cartilage and affects > 10% of people over the age of 60 years (Panikkar et al. 2021). It reduces patients' quality of life and increases morbidity. Hence, we aimed to explore new targets to improve OA. By establishing animal and cellular models of OA, this study first revealed that STAT1 can transcriptionally activate ADAMTS12 in OA. Moreover, we confirmed that METTL3 was responsible for the majority of m6A deposition on STAT1, and that IGF2BP2 further bound the m6A-modified regions of STAT1 to increase its stability. STAT1 translocated to the nucleus in the presence of IL-1β to promote ADAMTS12 transcription and expression, which contributed to OA development (Fig. 8). Our research indicates new therapeutic targets for OA.
ADAMTS12 is a multifunctional metalloproteinase with important roles in inflammation (Wei et al. 2014). ADAMTS12 was significantly upregulated in the pathological tissues of patients with OA and was associated with ECM degradation and chondrocyte destruction (Luan et al. 2008;Ji et al. 2016;Li et al. 2022;Perez-Garcia et al. 2019;Liu et al. 2006), suggesting that ADAMTS12 may be a key target in OA treatment. This study revealed that IL-1β significantly stimulated ADAMTS12 up-regulation in chondrocytes, whereas LPS did not. We speculate that this is because LPS is a bacterial lipopolysaccharide, which has a good induction effect on inflammation caused by exogenous infections, while OA is a sterile inflammatory disease. The differential effects of IL-1β and LPS on ADAMTS12 expression reflect the disease specificity in which ADAMTS12 may be involved.
STAT1 is an inflammation-related transcriptional activator involved in infection and inflammatory diseases, including OA and coronavirus disease 2019 (Butturini et al. 2020; Rincon-Arevalo et al. 2022). Furthermore, STAT1 was reported to contribute to chondrocyte inflammation and damage (Xu and Xu 2021), indicating its role in OA development. Moreover, STAT1 was reported to function as a transcriptional activator that stimulates the expression of certain genes by binding to their promoters, such as sphingosine 1-phosphate receptor 1 and ankyrin repeat-, SH3 domain-, and proline-rich region-containing protein 2 (Xin et al. 2020; Turnquist et al. 2014). Our study revealed for the first time that STAT1 can act as an important upstream transcriptional activator that up-regulates ADAMTS12 expression, and that STAT1 could thereby regulate OA progression by targeting ADAMTS12.

(Fig. 8: graphic abstract of the regulation mechanism in this study.)
High amounts of m6A-methylated mRNAs were present in IL-1β-treated chondrocytes (Liu et al. 2019a). METTL3 is well recognized as one of the most important m6A "writers", and the "reader" IGF2BP2 is responsible for identifying methylated transcripts mediated by METTL3 (Li et al. 2019). The current study demonstrated that METTL3 acts as a "writer" to increase the methylation level of STAT1 mRNA, which is subsequently recognized by the "reader" IGF2BP2 and stabilized in chondrocytes where inflammatory cytokines are present, ultimately leading to elevated expression levels. This mechanism explains why STAT1 is up-regulated in pathological tissues of OA from the perspective of epigenetic regulation by m6A modification and indicates the importance of m6A in OA development. Here, we successfully reversed chondrocyte inflammatory injury by silencing METTL3, indicating the potential of targeting METTL3 in OA therapy, which is consistent with a previous study (Liu et al. 2019a).
Conclusion
In conclusion, ADAMTS12 is an important target of the METTL3/IGF2BP2/STAT1 axis in regulating OA progression. Mechanistically, METTL3 increased STAT1 stability in an IGF2BP2-dependent manner to upregulate ADAMTS12 transcription, thereby promoting OA progression. Our study might provide new preventive strategies for OA treatment by focusing on METTL3/IGF2BP2-mediated methylation, STAT1, and ADAMTS12.
Rethinking the bioavailability and cellular transport properties of S-adenosylmethionine
S-adenosylmethionine (SAM) is a versatile metabolite that participates in a wide range of reactions such as methylation and transsulfuration. These capabilities allow SAM to influence cellular processes such as gene expression and redox balancing. The importance of SAM is highlighted by its widespread usage as an over-the-counter nutrient supplement and as an experimental reagent in molecular biology. The bioavailability and cellular transport properties of SAM, however, are often overlooked under these contexts, putting limits on SAM's therapeutic potential and complicating the interpretation of experimental results. In this article, we examined the chemical stability and cellular permeability of SAM, proposed a schematic for indirect SAM transport across the mammalian plasma membrane, and lastly discussed the implications arising from such transport schematic.
INTRODUCTION
As one of the rare sulfonium metabolites present in eukaryotic cells, S-adenosylmethionine (SAM) is remarkably versatile. Owing to the electron-deficient sulfur inside SAM, the covalent bonds between the sulfur and its neighboring groups are uniquely susceptible to nucleophilic attack. This property allows the transfer of these neighboring groups onto other molecules, and enables SAM to participate in a diverse set of chemical reactions including methylation, transsulfuration, and aminopropylation, amongst others [1]. Cells employ these reactions to perform tasks such as protein and nucleic acid methylation, sulfur amino acid metabolism and polyamine synthesis, and, in turn, to orchestrate gene expression, redox status and other crucial processes. Given the ubiquity of these SAM-dependent processes, alteration of intracellular SAM availability could thus elicit profound impacts on cellular biology, physiology, and ultimately disease.
The importance of SAM is also reflected by the widespread public interest in using it as an over-the-counter nutrient supplement with purported effects as a therapeutic agent to ameliorate conditions such as liver diseases and depression. At the same time, SAM is also frequently utilized in tissue culture experiments as a reagent to elucidate the functions of methionine metabolism and putative SAM-related processes. It is worth noting, however, that both the cellular availability and the mode of action of supplemented SAM are unclear. This could have important implications for its usage both as a therapeutic drug and as a reagent in molecular biology experiments. Here we discuss this issue in detail.
SAM AVAILABILITY AND LIVER DISEASES
Multiple studies have observed decreased hepatic SAM biosynthesis in different forms of chronic liver injury [2]. This SAM deficiency often co-occurs with impaired hepatic methionine metabolism and reduced activity of the SAM synthase (MAT) [2], which condenses methionine and adenosine triphosphate (ATP) to form SAM. On the other hand, SAM supplementation in animal models has been demonstrated to alleviate alcohol-induced liver damage and to improve survival in drug-induced hepatotoxicity and liver injuries [3]. In addition, studies in humans have suggested that the ingestion of SAM is generally well tolerated with an excellent safety profile [4]. These results have prompted interest in using SAM as a therapeutic agent for human patients with liver injuries. Meta-analyses of early small-scale randomized clinical trials have found significant reductions in mortality [2]. However, studies have also suggested that orally administered SAM has rather poor bioavailability, with the area under the plasma concentration-time curve (AUC) ranging from 0.58% to 1.04% of the AUC of intravenously administered SAM [5]. For additional context, the SAM concentration in various organ tissues has a reported range of 3.5-9 nmol/100 mg tissue [6], and the plasma SAM concentration has been reported in the range of 50-150 nmol/L [7,8]. Studies performed in human volunteers have found no significant increase in blood SAM concentration when an oral dose of 10 mg/kg was administered [9]. Experiments involving radioactively labeled SAM have suggested that the carbon, hydrogen, and sulfur of SAM can be effectively incorporated into the body even with oral administration [4]. However, it is unclear whether such incorporation occurred through intact SAM or through its degradation products. With SAM's mode of action remaining poorly understood and its oral bioavailability low, its therapeutic potential is difficult to assess.
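To put these figures in perspective, the short Python sketch below converts the reported AUC ratio into a rough estimate of how much of an oral dose reaches the circulation intact; the 70 kg body weight and the assumption that the AUC ratio can be read directly as an absolute bioavailability are illustrative choices of ours, not values taken from the cited studies.

# Back-of-the-envelope estimate of systemically available SAM after an oral dose.
# Assumption (ours, not from the cited studies): the reported oral/IV AUC ratio
# approximates the absolute bioavailability F.
auc_ratio_low, auc_ratio_high = 0.0058, 0.0104   # 0.58%-1.04% of the IV AUC [5]
oral_dose_mg_per_kg = 10.0                       # oral dose used in the volunteer study [9]
body_weight_kg = 70.0                            # hypothetical adult

dose_mg = oral_dose_mg_per_kg * body_weight_kg
print(f"Oral dose: {dose_mg:.0f} mg")
print(f"Estimated amount reaching circulation intact: "
      f"{dose_mg * auc_ratio_low:.1f}-{dose_mg * auc_ratio_high:.1f} mg")

Under these assumptions, only a few milligrams of a roughly 700 mg oral dose would reach the circulation as intact SAM, which is consistent with the absence of a measurable rise in blood SAM after the 10 mg/kg dose.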
SAM AS A REAGENT IN TISSUE CULTURE EXPERIMENTS
As the immediate catabolic product of the essential amino acid methionine, SAM sits at the nexus of nutrient-sensing networks, and it exerts influence over important cellular decisions such as proliferation, differentiation, and autophagy in response to the dynamic nutrient environment [10]. Consequently, the interplay of SAM and these processes has been heavily investigated in a host of studies [11]. Common to many of these studies, the initial identification of SAM-related genes is often followed up by complementation or rescue experiments with direct SAM supplementation to further test the causal link between SAM availability and the observed phenotypes. In these experiments, SAM is often added directly to the cell culture medium for the duration of the experiment, which can range from overnight to a couple of days, as in the case of cell proliferation. The cellular availability of such forms of SAM supplementation, however, is rarely discussed and could have implications for interpreting the experimental results.
SAM AVAILABILITY IN ANIMAL EXPERIMENTS
In contrast to tissue culture experiments, where the components of the culture medium are to some extent well defined, animal experiments commonly involve diets with less well-defined chemical compositions, compounded by interactions between microbiota, food, and the host. These additional factors might all influence the availability of a supplemented nutrient to the animal under study. For example, the enzyme L-methioninase, which degrades methionine, is ubiquitous in fungi and bacteria [12]. Given that SAM is the immediate metabolic product of methionine, it is likely that SAM availability and metabolism could also be influenced by the methionine metabolism of the host microbiota. Besides potentially acting through methionine, the microbiota could also influence the host's SAM metabolism through B-vitamin production and choline consumption [13]. Some of the added complexity from host-microbiota interactions can be studied through manipulations of metabolic genes that control the utilization of a given nutrient. In most cases, however, such isolation remains challenging, and the impact of host-microbiota interactions in general remains poorly understood. Thus, this article will primarily focus on SAM transport and availability at the cellular level.
TRANSPORT PROPERTIES OF SAM
Synthesized from methionine and ATP, SAM is highly polar (Figure 1). This polar nature presents challenges for its passive diffusion across biological membranes. Cellular uptake studies of SAM in hepatocytes have revealed a low level of cellular accumulation, with a ratio of intracellular to extracellular SAM concentration of 0.19 μM : 1 μM at equilibrium [14]. Additional studies performed in an intestinal epithelial model have reported that the apparent permeability coefficient of SAM (0.6 × 10⁻⁶ to 0.7 × 10⁻⁶ cm/s) is much lower than typical values for passive diffusion [14], and suggested that the main mode of SAM transport is paracellular transport, by which molecules primarily travel through the tight junction. These findings are echoed by the likely absence of SAM transporters in the plasma membrane of mammalian cells. Past studies have successfully identified SAM transporters in the plasma membrane of yeast (SAM3) [15] and in the inner membranes of human mitochondria (SAMC) [16]. However, the lack of a mammalian SAM3 orthologue and the mitochondrial localization of SAMC [16] suggest the lack of a dedicated SAM transporter in the mammalian plasma membrane and further complicate the uptake of SAM from the extracellular environment. These findings might lead one to reason that direct SAM supplementation should have little to no influence on cellular biology, since SAM cannot be efficiently absorbed by cells, yet successes have been observed in clinical trials and in laboratory experiments. The paradox, however, could potentially be resolved if SAM enters the cell not in its intact form but in the form of its breakdown products, which then reassemble back into SAM inside the cell.
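To give a sense of scale for how slow such permeation is, the Python sketch below applies the standard apparent-permeability flux relation J = P_app × ΔC to the reported coefficient; the 1 μM concentration gradient and the per-cm² framing are illustrative assumptions of ours, not measurements from the cited study.

# Rough estimate of transmembrane SAM flux from the reported apparent
# permeability coefficient (intestinal epithelial model, [14]).
P_app_cm_per_s = 0.7e-6            # upper end of the reported range
gradient_uM = 1.0                  # assumed 1 uM outside-inside concentration gradient
gradient_mol_per_cm3 = gradient_uM * 1e-6 / 1000.0   # 1 uM = 1e-9 mol/cm^3

flux = P_app_cm_per_s * gradient_mol_per_cm3          # J = P_app * dC, in mol cm^-2 s^-1
per_hour_pmol = flux * 3600 * 1e12                    # pmol per cm^2 per hour

print(f"Flux J = {flux:.1e} mol cm^-2 s^-1")
print(f"~{per_hour_pmol:.1f} pmol of SAM per cm^2 per hour at a 1 uM gradient")

At that rate, even a full day of passive permeation would move only on the order of tens of picomoles per square centimeter, which helps explain why paracellular transport was inferred to dominate in the epithelial model.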
CHEMICAL STABILITY OF SAM UNDER PHYSIOLOGICAL CONDITIONS
Common cell culture media are usually buffered to a slightly basic pH. A solution environment with such a pH and a temperature of 37°C has been demonstrated to be detrimental to SAM's covalent stability. Liquid chromatographic studies suggested that SAM is markedly unstable at pH 7.5 and can rapidly degrade into 5'-methylthioadenosine (MTA) through a non-enzymatic cleavage reaction [17] (Figure 1). Efforts toward making more stable salts of SAM, such as the widely used sulfate and p-toluenesulfonate double salts, have yielded improved dry-state stability and greatly extended its shelf-life [18]. The in-solution stability, however, remains mostly unaddressed. The half-time of MTA formation from in-solution SAM has been reported to range from 16 to 42 hours [1]. This range of half-times overlaps with the length of many rescue experiments in tissue culture systems and could have major implications for intracellular SAM availability over the course of these experiments.
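Assuming simple first-order decay, the Python sketch below estimates how much intact SAM would remain in the medium at two time points chosen by us to represent a typical overnight and a two-day treatment; the half-lives come from the range quoted above, while the incubation times are illustrative.

import math

# First-order (exponential) decay of in-solution SAM into MTA.
# Half-lives of 16-42 h are the range cited in the text [1];
# the 24 h and 48 h incubation times are illustrative choices.
half_lives_h = (16.0, 42.0)
timepoints_h = (24.0, 48.0)

for t_half in half_lives_h:
    k = math.log(2) / t_half              # first-order rate constant (1/h)
    for t in timepoints_h:
        remaining = math.exp(-k * t)      # fraction of intact SAM left at time t
        print(f"t1/2 = {t_half:4.0f} h, t = {t:2.0f} h: "
              f"{remaining * 100:5.1f}% SAM remaining")

With the shorter reported half-life, roughly two-thirds of the added SAM would already have converted to MTA within a single day, so the effective exposure in a multi-day experiment may be dominated by MTA rather than by intact SAM.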
PERMEABILITY OF 5'-METHYLTHIOADENOSINE (MTA) AND THE METHIONINE SALVAGE PATHWAY
Unlike SAM, MTA can readily cross the plasma membrane of mammalian cells [19,20]. With a chemical structure similar to that of an adenosine nucleoside (Figure 1), MTA can enter the cell through the nonspecific nucleoside transport system [19]. Additionally, kinetics studies have suggested that MTA might also enter the cell through passive diffusion, which could account for over 50% of its influx in certain cases [19]. This opens the possibility for cells to utilize extracellular MTA from degraded extracellular SAM.
Normally produced as a byproduct during polyamine synthesis, MTA is often recycled into methionine through a series of enzymatic reactions collectively known as the methionine salvage pathway. In this pathway, MTA's methylthio group and the carbon backbone of its ribose are retained and eventually transformed into methionine [21]. The methionine generated through this process can then be combined with cellular ATP to replenish intracellular SAM. Crucially, it has been observed that the methionine salvage pathway can be co-regulated with certain SAM demanding processes, such as polyamine synthesis in yeast [21], potentially to help maintain SAM availability. Such observations suggest the possibility of an indirect transport mechanism through the sulfur and the methyl group of extracellular SAM using MTA as the carrier (Figure 2). In fact, studies have demonstrated that the passive diffusion of MTA alone can support a methionine salvaging capacity of at least 50 μM, and the supplementation of MTA can support the short-term growth of lymphoblasts on a methionine depleted cell culture medium [22].
This indirect SAM transport schematic, capable of explaining the rescue effect of extracellular SAM in tissue culture systems, however, has its limitations in restoring the decreased intracellular SAM levels observed in liver disease models. In many of these models, the SAM synthase, MAT, is often found defective or transcriptionally inhibited [23]. Consequently, the salvaged methionine from MTA might not be able to restore the lowered intracellular SAM availability. However, a study has reported that MTA can mimic SAM's inhibition on TNF-α expression in certain systems [24], which might partially underlie SAM's anti-inflammatory and hepatoprotective effects, thus providing a viable alternative mode of action for this schematic.
Together, given the covalent instability and membrane-impermeable nature of SAM, and MTA's capability of crossing the cell membrane, participating in methionine salvaging, and eliciting anti-inflammatory effects, it is possible that the observed effects of extracellularly supplemented SAM in tissue culture experiments and in clinical usage are achieved through the methionine and SAM salvaged from MTA, or through MTA itself.
IMPLICATIONS OF THE INDIRECT SAM TRANSPORT SCHEMATIC
If SAM supplementation exerts its effects through MTA, several implications could arise. First, in addition to providing methionine and SAM to the cell, this schematic could also lead to an increase in MTA levels that are normally not present during normal SAM metabolism. MTA itself can influence various cellular processes. For example, studies have reported that MTA possesses inhibitory effects over histone methylation and it was also reported that MTA could inhibit the activity of S-adenosylhomocysteine (SAH) hydrolase [1]. The latter could lead to the buildup of SAH, which is a potent inhibitor of many methyltransferases and is closely intertwined with one-carbon metabolism through the methionine cycle [25]. Indeed, cellular toxicity of MTA has been observed in lymphoblasts when the concentration of supplemented MTA exceeds 50 μM [22]. Second, the methionine salvage pathway underpinning this indirect SAM transport schematic is highly delicate. With at least five enzymatic reactions in between MTA and methionine [21], defects of any one of the enzymes could render the pathway broken. For instance, the methylthioadenosine phosphorylase (MTAP), which catalyzes the first step in MTA's conversion into methionine, is commonly found defective in many cancer cell lines [25]. Such defects could complicate the result interpretation of experiments involving SAM supplementation. Third, due to the presence of the methionine salvage step, supplementation of SAM could also lead to increased intracellular methionine availability. This presents a challenge when one wants to parse the other roles of methionine, such as protein synthesis and nutrient sensing, from its role in methyl group donation via SAM. Last, on the therapeutic side, considerable amounts of effort have been directed towards the development of SAM salts/analogues with longer shelf-life while leaving the aspects of cellular permeability largely unaddressed. Such approaches, on the contrary, could potentially further limit the therapeutic efficacy of SAM, due to less degradation of SAM into MTA for methionine salvaging and SAM synthesis inside the cell.
FUTURE DIRECTIONS
Given the caveats of extracellular SAM supplementation and the implications of the indirect SAM transport schematic, one might want to consider the following points when designing experiments involving SAM supplementation or when improving SAM as a therapeutic drug. First, the genetic background of the experimental system should be examined, especially regarding genes involved in the methionine salvage pathway. Second, one might consider performing MTA supplementation in parallel to SAM supplementation to further strengthen the causal link between SAM and the observed phenotype. Third, genetic manipulation or pharmacological modulation of SAM-utilizing enzymes might help discern the contribution of SAM from that of methionine toward a given phenotype. Last, efforts to develop better SAM-based therapeutics should also address SAM's impermeability. More effort could be directed toward developing SAM mimics that are more permeable to cell membranes or toward exploring MTA's therapeutic potential.
ACKNOWLEDGMENTS
J.W.L. acknowledges grants from the American Cancer Society, and the National Cancer Institute of the National Institutes of Health (R01 CA193256). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. J.W.L thanks Minkui Luo for helpful ideas surrounding SAM stability.
CONFLICT OF INTEREST
J.W.L. advises Nanocare Technologies, Restoration Foodworks, and Raphael Pharmaceuticals. Y.S. declares no competing interest.

Figure 2 (indirect SAM transport schematic). The polar nature of SAM and the likely absence of SAM transporters in the plasma membrane present challenges for the direct transport of SAM across the plasma membrane and limit its bioavailability. Under physiological conditions, however, SAM readily degrades into MTA. Unlike SAM, MTA can cross the cell membrane through both the nonspecific nucleoside transport system and passive diffusion. Once inside the cell, MTA can participate in the methionine salvage pathway to regenerate methionine, which can be utilized by the SAM synthase, methionine adenosyltransferase (MAT), to replenish intracellular SAM. When viewed as a whole, MTA effectively brings the sulfur and activated methyl group of extracellular SAM into the cell, overcomes the challenges of direct SAM transport across the plasma membrane, and enables the utilization of those functional groups for reactions such as methylation and transsulfuration.
Pulmonary sclerosing hemangioma in a 21-year-old male with metastatic hereditary non-polyposis colorectal cancer: Report of a case
Background Pulmonary sclerosing hemangioma (SH) is a rare tumor of the lung predominantly affecting Asian women in their fifth decade of life. SH is thought to evolve from primitive respiratory epithelium and mostly shows benign biological behavior; however, cases of lymph node metastases, local recurrence and multiple lesions have been described. Case Presentation We report the case of a 21-year-old Caucasian male with a history of locally advanced and metastatic rectal carcinoma (UICC IV; pT4, pN1, M1(hep)) that was eventually identified as having hereditary non-polyposis colorectal cancer (HNPCC, Lynch syndrome). After neoadjuvant chemotherapy followed by low anterior resection, adjuvant chemotherapy and metachronous partial hepatectomy, he was admitted for treatment of newly diagnosed bilateral pulmonary metastases. Thoracic computed tomography showed a homogenous, sharply marked nodule in the left lower lobe. We decided in favor of atypical resection followed by systematic lymphadenectomy. Histopathological analysis revealed the diagnosis of SH. Conclusions Cases have been published with familial adenomatous polyposis (FAP) and simultaneous SH. FAP, Gardner syndrome and Li-Fraumeni syndrome, however, had been ruled out in the present case. To the best of our knowledge, this is the first report describing SH associated with Lynch syndrome.
Background
Sclerosing hemangioma of the lung (SH), alternatively characterized as alveolar pneumocytoma, was first described by Liebow and Hubbel in 1956 [1] and represents a rare and, in the majority of cases, benign neoplasm of the lung. It predominantly affects females in their fifth decade of life [2,3] and is more common in Asian women. Although several theories have been proposed for its histogenesis and the term implies an endothelial derivation, an origin from immature respiratory epithelium is currently accepted [3][4][5][6][7]. Symptoms such as atypical thoracic pain, cough, hemoptysis and dyspnea might occur due to tumor enlargement and compromising of surrounding tissue [3]. However, in most patients, SH is detected incidentally during routine chest radiographic examination because it is generally asymptomatic [2,8]. Although SH is thought to be benign, cases of lymph node metastases, local recurrence and multiple lesions have been reported [2,[9][10][11] suggesting that the progression to an overtly malignant phenotype might be possible. Lymph node metastases, however, do not seem to have an impact on long-term survival [12]. Altogether, little is known about the associated risk factors, prognosis and natural course of SH, and little clinical data exists from western countries.
Only a few cases have been reported affecting young patients. There are two recent reports describing middle-aged female patients suffering from familial adenomatous polyposis (FAP) and simultaneous SH that suggest a common tumorigenesis and report SH as part of the clinical phenotype of FAP [13,14]. Many hereditary syndromes associated with colorectal cancer (CRC) can have extracolonic manifestations. However, to the best of our knowledge, we present the first case of a patient with the diagnosis of SH and a history of Lynch syndrome.
Case Presentation
We first diagnosed a 21-year-old Caucasian male suffering from CRC in January of 2009. The patient complained of having recurrent rectal bleeding for three months. He was otherwise a healthy non-smoker and in good condition appropriate for his age. His medical history was uneventful. Evaluation of family history revealed five relatives afflicted with malignant tumors at a young age. Among them were his mother, who died at the age of thirty-five from endometrial cancer, and the mother's brother, who passed away at the age of forty from CRC. The patient did not report significant weight loss, fever or night sweats. Physical examination was unremarkable. Carcinoembryonic antigen (CEA) and carbohydrate antigen 19-9 (CA 19-9) were within normal range. Clinical staging diagnostics revealed a partially stenosing rectal adenocarcinoma (uT4, uN+) but no potentially metastatic lesions in the liver or lung at that time. There was no clinical evidence of FAP or Gardner syndrome. Li-Fraumeni syndrome was subsequently ruled out by sequencing of multiple TP53-exons (3-9) after PCR amplification of genomic DNA.
With respect to the locally advanced tumor growth, the patient underwent neoadjuvant 5-fluorouracil-based chemoradiotherapy (5-fluorouracil/folinic acid, 50.4 Gy) followed by low anterior resection including total mesorectal excision in the spring of 2009. Intraoperative sonography of the liver showed a small lesion in segment VII, but, due to the locally advanced tumor stage (pT4, pN2 (6/9), uM1 (hep), V1, L1, G2, R0), we decided in favor of non-simultaneous resection of the hepatic lesion [15]. According to the revised Bethesda guidelines [16], microsatellite instability (MSI) testing was performed by DNA isolation and subsequent PCR amplification from tissue of the primary rectal carcinoma, resulting in the detection of significant instability in microsatellites BAT25, BAT26, D17S250 and D2S123. This finding corresponded to a high level of MSI (MSI-H). Moreover, sequencing of the proto-oncogenes KRAS and BRAF showed no mutation (wild-type). This raised a strong suspicion of Lynch syndrome, particularly with regard to the patient's family history, his age and the fulfillment of the Amsterdam criteria [17,18]. MSI in CRC of patients under the age of forty is estimated to be due to an underlying germline mutation in 85.7% of cases, a probability that is further elevated by the presence of wild-type BRAF. The latter can be used to distinguish sporadic MSI CRC from MSI tumors that arise in the setting of Lynch syndrome [19]. Subsequently, the patient underwent human genetic counseling followed by testing for germline mutations in mismatch repair (MMR) genes by sequencing of their cDNA derived from PAX-RNA and total RNA isolated from short-term lymphocyte culture. A mutation was thereby detected in the MMR gene PMS2 (exon 11). Altogether, the diagnosis of Lynch syndrome was made.
Early restaging was performed during intermittent FOLFOX chemotherapy, and the patient was found to have hepatic (Figure 1a) and pulmonary lesions suspicious for metastases. Thoracic computed tomography showed a well-circumscribed 6 mm lesion in the left lower lobe of the lung (Figure 1b) with homogeneous contrast media enhancement, as well as two smaller lesions in the right upper lobe. There were neither signs of infiltration of the adjacent tissue nor signs of pathologically enlarged lymph nodes. We decided to first perform a partial hepatectomy (segment VII), which confirmed hepatic spread of the tumor. In light of the patient's young age, his early recovery and his good general state of health, we proceeded to remove the left-sided pulmonary lesion four weeks later. He therefore underwent atypical resection of the left lower lobe through a left anterolateral thoracotomy followed by a systematic mediastinal and hilar lymphadenectomy [20]. The patient's postoperative course remained uncomplicated and he again recovered well. Gross examination of the specimen showed a well-circumscribed solid pulmonary tumor, 7 mm in diameter. Histological evaluation revealed a mixed papillary, hemorrhagic and sclerotic growth pattern of cuboidal surface cells and polygonal stromal cells. Cuboidal surface cells were immunopositive for thyroid transcription factor-1 (TTF-1), epithelial membrane antigen (EMA) and pan-cytokeratin, whereas polygonal stromal cells were immunopositive for neuron-specific enolase (NSE) and S-100 protein as well as EMA. These findings are consistent with a sclerosing hemangioma of the lung (Figure 2). The Ki-67 index was less than 5%. Neither significant MSI (evaluated by PCR amplification) nor loss of expression of the MMR proteins MLH1, MSH2, MSH6 and PMS2 (determined by immunohistochemistry) was detected in the pulmonary SH. Moreover, all lymph nodes sampled were free of metastases.
By thoracic computed tomography, the pulmonary lesions in the right upper lobe remained unchanged after 3 months. According to interdisciplinary tumor board recommendations and oncological guidelines [21][22][23] we decided not to suggest further chemotherapy or restorative proctocolectomy but to perform careful aftercare with monitoring of the pulmonary lesions at close intervals as well as attentive follow-up via abdominal ultrasound and colonoscopy.
Furthermore, the patient's family members were referred to cancer genetics specialists for counseling and were recommended germline mutation analysis. During regular follow-up visits, CEA and CA 19-9 were within normal range. Colonoscopy and diagnostic imaging of the liver and lungs were unremarkable; in particular, the pulmonary lesions of the right upper lobe were no longer identifiable.
Discussion
Pulmonary SH is a rare and mostly benign neoplasm of the lung. Histologically, SH is essentially characterized by two epithelial cell types: cuboidal surface cells, which resemble type II pneumocytes, and polygonal stromal cells (round cells) with bland nuclei and pale cytoplasm, which are thought to stem from primitive respiratory epithelium [4,5]. These two cell types form four histological patterns: papillary, which often appears to be the predominant type, as well as epithelioid, sclerotic and hemorrhagic configurations, which are also found in some cases, as in the present one (Figure 2, [24]). A predominant papillary growth pattern can make it difficult to differentiate SH from a carcinoma that also exhibits a papillary pattern. Metastatic papillary thyroid carcinoma, mesothelioma and bronchioloalveolar carcinoma have to be carefully considered [11]. In this respect, however, decreased Ki-67 labeling and low p53 expression could help to differentiate SH from papillary thyroid carcinoma [2]. The cuboidal surface cells of SH are typically immunopositive for thyroid transcription factor-1 (TTF-1), epithelial membrane antigen (EMA), surfactant protein B (SP-B), low molecular weight cytokeratin (CK-L) as well as carcinoembryonic antigen (CEA), and negative for neuroendocrine markers, whereas polygonal stromal cells (round cells) are positive for vimentin and TTF-1 and weakly positive for several neuroendocrine markers [4,7,25]. Mitotic figures are rarely identified [2]. In the present case, the patient's lesion comprised mixed papillary growth patterns consisting of superficial layers of cuboidal cells that were immunopositive for TTF-1 and EMA, as well as stromal cells positive for TTF-1 expression, and some also for neuroendocrine markers such as neuron-specific enolase (NSE) and S-100 protein. Thus, the histological and immunohistochemical diagnosis of SH was made, and a very low Ki-67 index of less than 5% indicated a biologically non-active tumor [26].
In most patients, SH is detected during routine chest radiographic examination [2,8]. Therefore, the actual prevalence of SH is not known due to the relatively asymptomatic nature of the disease. SH is usually diagnosed as a single asymptomatic nodule in the periphery of the lung [2,8], often affecting the lower lobe [27,28]. Radiologically, it mostly presents as a well-circumscribed lesion with marked contrast media enhancement. Calcification might be detected in the minority of cases. A lucent zone around SH, the "air meniscus sign", first described in 1978 [29], is a typical radiological feature representing trapped air around the lesion. Additionally, other reports of air spaces surrounding SH have been published [30]. However, other diagnoses must be considered, including carcinoids, hamartoma, hemangioma, malignant teratoma, arterio-venous malformations and inflammatory lesions. In the present case, chest radiography was normal, but thoracic computed tomography revealed a small but well-defined lesion of the left lower lobe with homogeneous contrast enhancement (Figure 1b). No typical lucent zone was found at the periphery of the lump, and no regional lymph node enlargement was present. Due to the history of metastatic CRC, however, a pulmonary spread of rectal cancer was the most probable diagnosis, so surgical resection of the lesion was performed.
During surgical intervention, we found early-stage SH. Wedge resection in previous cases of early-stage SH was associated with excellent long-term survival and therefore should be the treatment of choice if an exact pre- or intraoperative diagnosis is possible [3,31]. Otherwise, especially in cases of uncertain intraoperative frozen section examinations and given the uncertainty of growth, biological behavior, local recurrence and metastatic spread, the optimal therapeutic approach remains undefined. In these cases, atypical or anatomic resection with systematic lymphadenectomy is suggested [31]. Because of our patient's distinctive history, we oriented our therapy toward a strong suspicion of a pulmonary metastasis of CRC and elected to pursue a thorough surgical approach with atypical resection followed by regional lymphadenectomy [20].
Only a few cases of SH have been reported in young patients, among them a 10-year-old, an 18-year-old and a 19-year-old Asian female, as well as a 22-year-old male who presented with lymph node metastases implying a more malignant case of SH [12,32]. The latter might be consistent with the monoclonality of cells within SH, which has been described before and which suggests a neoplastic growth pattern of the lesion [33]. With respect to synchronous colorectal neoplasms, female patients suffering from FAP and simultaneous SH have been described [13,14]. In these cases, patients did not have any extracolonic manifestations of FAP and did not suffer from CRC until they presented with SH. To the best of our knowledge, this is the first report of SH associated with Lynch syndrome.
Autosomal-dominant Lynch syndrome (HNPCC) is a rare genetic disease (OMIM #609310) that usually shows right-sided predominance of CRC at a young age and is often caused by mutations of MMR genes [34]. Although they occur less frequently than CRC, there is a high prevalence of synchronous or metachronous extracolonic manifestations, especially endometrial cancer, which caused the death of our patient's mother. Other extracolonic manifestations include gastric, genitourinary, ovarian, small bowel, brain and sebaceous tumors [34,35]. Only one case of Muir-Torre syndrome, a variant of Lynch syndrome with additional skin lesions, has been reported in association with non-small cell lung cancer [36]. However, there are no reports of benign lung tumors as an extracolonic manifestation of Lynch syndrome.
In our patient, MSI testing of the SH and immunohistochemistry for MLH1, MSH2, MSH6 and PMS2 did not reveal MSI or loss of MMR expression in the pulmonary nodule. On the one hand, we would have judged SH an extracolonic manifestation of Lynch syndrome in this specific patient had the SH featured MSI and loss of MMR expression. On the other hand, one might anticipate that high-grade MSI and loss of MMR expression through homozygosity of a mutated PMS2 should then have led to a more malignant growth pattern of the SH. Pulmonary SH, as in the present case (Ki-67 index <5%), is a mostly benign and heterogeneous tumor composed of different cell types and exhibits various histological patterns [33]. Nevertheless, heterozygosity of PMS2 in the present case, as shown by cDNA sequencing, might still be causally associated with the development of this exceedingly rare tumor. Although a sporadic coincidence of SH and Lynch syndrome could not be ruled out in our patient, one might raise the suspicion of a common etiology being responsible for the exceptional concurrence of these two extremely infrequent events in a young male Caucasian.
Conclusions
We present the first case of pulmonary SH in a young Caucasian male and in a patient suffering from Lynch syndrome. It might be speculated that the SH did not just incidentally co-occur with the patient's CRC. From this unlikely concurrence we assume that the underlying Lynch syndrome might have contributed to the development of the patient's SH, and we hypothesize a common cause for these rare events. However, SH cannot be termed an extracolonic manifestation of Lynch syndrome, since it showed clearly benign behavior and did not exhibit MSI or loss of MMR expression, in keeping with the heterozygosity of PMS2.
Consent
Written informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.
Disassembly of interchromatin granule clusters alters the coordination of transcription and pre-mRNA splicing
To examine the involvement of interchromatin granule clusters (IGCs) in transcription and pre-mRNA splicing in mammalian cell nuclei, the serine-arginine (SR) protein kinase cdc2-like kinase (Clk)/STY was used as a tool to manipulate IGC integrity in vivo. Both immunofluorescence and transmission electron microscopy analyses of cells overexpressing Clk/STY indicate that IGC components are completely redistributed to a diffuse nuclear localization, leaving no residual structure. Conversely, overexpression of a catalytically inactive mutant, Clk/STY(K190R), causes retention of hypophosphorylated SR proteins in nuclear speckles. Our data suggest that the protein–protein interactions responsible for the clustering of interchromatin granules are disrupted when SR proteins are hyperphosphorylated and stabilized when SR proteins are hypophosphorylated. Interestingly, cells without intact IGCs continue to synthesize nascent transcripts. However, both the accumulation of splicing factors at sites of pre-mRNA synthesis as well as pre-mRNA splicing are dramatically reduced, demonstrating that IGC disassembly perturbs coordination between transcription and pre-mRNA splicing in mammalian cell nuclei.
Introduction
Pre-mRNA splicing factors localize by immunofluorescence microscopy to 20-50 irregularly shaped nuclear speckles set against a diffuse distribution within the nucleoplasm of mammalian cells (for reviews see Spector, 1993; Lamond and Earnshaw, 1998). In addition, in some cell types they also localize to Cajal bodies (Spector et al., 1992; Gall, 2000). By transmission electron microscopy (TEM), the speckled immunofluorescence localization corresponds to interchromatin granule clusters (IGCs) and perichromatin fibrils (PFs; for review see Fakan and Puvion, 1980; Spector, 1993). IGCs are composed of particles measuring 20-25 nm in diameter, and they contain numerous factors that are involved in RNA synthesis and processing. IGC constituents include, but are not limited to, small nuclear ribonucleoprotein particles (snRNPs), arginine-serine-rich (SR) splicing factors, and the hyperphosphorylated form of the large subunit of RNA polymerase II (Bregman et al., 1995). The majority of the protein constituents of IGCs have now been identified (Mintz et al., 1999, and unpublished results), making it possible to better address the biological function of these nuclear domains.
Although splicing factors localize to IGCs, pre-mRNA synthesis does not occur within these structures, but at PFs on the IGC periphery or at some distance away from the IGCs (Fakan, 1994; Cmarko et al., 1999). One recent report indicated overlap between sites of bromo-uridine incorporation and nuclear speckles (Wei et al., 1999); however, a significant proportion of the overlap likely corresponds to PFs (transcription sites) on the periphery of speckles. The majority of nucleotide incorporation and immunocytochemistry studies have shown that IGCs are not likely to be sites of transcription because they do not contain DNA, and they are not labeled by 3H-uridine incorporation (Turner and Franchi, 1987; Spector, 1990; Fakan, 1994). Cmarko et al. (1999) performed extensive analysis of bromo-UTP incorporation at the TEM level and did not detect transcription in IGCs. Furthermore, inhibition of RNA polymerase II transcription with α-amanitin causes splicing factor recruitment to cease, and speckles become larger and more rounded (Carmo-Fonseca et al., 1992; Spector et al., 1993; Misteli et al., 1997). Therefore, enrichment of splicing factors in IGCs may be due to the fact that they are sites of complex formation and/or modification of splicing factors, or sites of splicing factor storage (Huang et al., 1994; Spector et al., 1993). In support of these possibilities, experiments in living cells revealed that hyperphosphorylation of splicing factor SF2/ASF on its RS-rich domain releases it from the IGCs for recruitment to active genes (Misteli et al., 1997). However, despite this advance, neither the structural organization nor the precise biological function of the IGCs is known.
It is presently unclear why mammalian cells contain IGCs and whether their position in the nucleus reflects a spatial positioning that is essential for function. Time-lapse observations of nuclear speckles in living cells have shown that the position of IGCs is maintained over many hours (Misteli et al., 1997; Kruhlak et al., 2000), suggesting that they reside in predetermined locations. Such positioning may be a result of granules clustering upon a specific structural framework or around specific chromosomal regions. To investigate this possibility, we used overexpression of murine Clk/STY1 as a method to completely disassemble IGCs in vivo. Ultrastructural analysis of such cells indicates that SR proteins are redistributed throughout the nucleus in small clusters, and no specific underlying structural IGC scaffold was revealed. Nascent transcripts are produced in cells without IGCs, but accumulation of splicing factors originating from an entirely nucleoplasmic pool onto pre-mRNA is significantly reduced, and spliced mRNA is markedly reduced or undetectable.
Overexpression of Clk/STY causes redistribution of nuclear speckle components
A-431 cells were transiently transfected with murine Clk/STY1 or the catalytically inactive mutant Clk/STY1(K190R) to assess the extent of disassembly of various nuclear speckle components. Transient overexpression resulted in a population of transfected cells with variable levels of Clk/STY expression and extent of nuclear speckle disassembly. Our goal was to examine pre-mRNA synthesis and processing in cells in which nuclear speckles are no longer intact. We examined cells in which overexpression of green fluorescent protein (GFP)-Clk/STY (Fig. 1 A) induces a complete redistribution of splicing factors, such as SC35 (Fig. 1 B). Multiple SR protein family members, such as those recognized by monoclonal antibody 3C5 (Turner and Franchi, 1987), responded to hyperphosphorylation in the same manner (unpublished data). However, overexpression of GFP-Clk/STY(K190R), which is not able to phosphorylate SR proteins (Fig. 1 C), did not induce nuclear speckle disassembly of SC35 (Fig. 1 D), indicating that the release of SR proteins is dependent upon kinase activity. Next, we looked at the response of proteins that localize to nuclear speckles but are not members of the SR protein family. B" is a component of U2 snRNP that is involved in pre-mRNA splicing (Habets et al., 1986). When GFP-Clk/STY was transiently overexpressed in A-431 cells (Fig. 2 A), similar to SR proteins, B" redistributed from its typical nuclear speckle localization to a diffuse nuclear localization (Fig. 2 B). Our ongoing identification and characterization of the protein constituents of IGCs has given us the ability to sort IGC components based upon predicted functions (Mintz et al., 1999, and additional unpublished data). Among the proteins identified in our purified IGC fraction is pinin (unpublished result), which was reported previously to link intermediate filaments to the submembrane plaque of desmosomes (Ouyang and Sugrue, 1996; Ouyang et al., 1997). In contrast, pinin has also been reported to be a strictly nuclear protein that localizes in nuclear speckles (Brandner et al., 1997). Although pinin may localize to both desmosomes and nuclear speckles (Ouyang, 1999), the possibility that pinin is potentially a structural protein in the IGCs was tested by examining the response of endogenous pinin to hyperphosphorylation of SR proteins. However, cells transiently overexpressing GFP-Clk/STY (Fig. 2 C) exhibited a complete redistribution of pinin (Fig. 2 D). Several reports have indicated that populations of actin (Nakayasu and Ueda, 1984) and lamin A (Jagatheesan et al., 1999) are associated with snRNPs or nuclear speckles, respectively. Although these two proteins are very good candidates for structural IGC proteins, they also redistributed in response to Clk/STY overexpression in the same manner as splicing factors (unpublished data). It is unlikely that this redistribution is due to direct hyperphosphorylation of actin or lamins, since Clk/STY specificity is linked to phosphorylation of serines in the RS region of SR proteins (Colwill et al., 1996a,b; Nayler et al., 1997) and these proteins lack RS regions. Therefore, although actin and lamins may play some role in IGCs, they do not appear to serve as constituents of an underlying scaffold of these nuclear structures. Next, we considered that RNA, rather than protein, might provide a structural framework in IGCs. A stable population of polyadenylated (polyA+) RNA resides in nuclear speckles (Huang et al., 1994).
The function of this RNA is not clear, but we reasoned that if it were a structural IGC component, it would not respond to release of SR proteins. However, similar to what was observed for IGC proteins, upon overexpression of FLAG-Clk/STY in A-431 cells (Fig. 3 A), the stable polyA+ RNA became diffusely localized throughout the nucleus (Fig. 3 B). This effect was dependent on hyperphosphorylation, because upon overexpression of FLAG-Clk/STY(K190R) (Fig. 3 C), the polyA+ RNA maintained its typical nuclear speckle localization (Fig. 3 D). We also tested IGC constituents that were recently identified by mass spectrometry analysis. KIAA0111 encodes translation initiation factor eIF4Aiii (Weinstein et al., 1997; Li et al., 1999; Holzmann et al., 2000). KIAA0801 encodes a protein of unknown function; both proteins contain DEAD/H box RNA helicase motifs. KIAA0536 is the human homologue of PRP4, a serine/threonine protein kinase in fission yeast (Alahari et al., 1993; Kojima et al., 2001). GFP fusion constructs were made for each of these cDNA clones, and cells were transiently cotransfected with the respective fusion construct plus pTetON and FLAG-Clk/STY plasmids. After allowing time for accumulation of GFP-tagged protein in nuclear speckles, expression of FLAG-Clk/STY was induced with doxycycline. In FLAG-Clk/STY-transfected cells, each of these proteins exhibited a diffuse nuclear localization, whereas in neighboring cells not overexpressing FLAG-Clk/STY they maintained a nuclear speckle localization (unpublished data). Whereas each constituent of nuclear speckles examined here redistributed upon overexpression of Clk/STY, nucleolar organization, as shown by ANA-N staining (Fig. 2, E and F), and chromatin organization (unpublished data) were not altered by overexpression of Clk/STY.
Cells overexpressing Clk/STY lack intact interchromatin granule clusters
Since all nuclear speckle components examined became diffusely distributed upon overexpression of Clk/STY, we were interested to determine if this redistribution would reveal a specific underlying IGC scaffold. Cells overexpressing Clk/STY and observed by immunofluorescence to have a completely diffuse distribution of splicing factor SC35 were processed for TEM. At least one neighboring untransfected cell with intact IGCs was examined in the same thin sections, serving as a control. The results shown in Fig. 4 are representative of observations from 12 Clk/STY-transfected cells. Untransfected A-431 nuclei contained large IGCs distributed throughout the nucleoplasm, and immunogold labeling for SR proteins was found in IGCs, as well as in small clusters in the surrounding nucleoplasm (Fig. 4, A and B). In cells transfected with Clk/STY, there were no intact IGCs, and immunogold labeling for SR proteins was dispersed throughout the nucleoplasm in small clusters that resemble PFs (Fig. 4, C and D). In addition, we did not detect a specific IGC substructure, nor did we observe empty nuclear regions where IGCs would previously have been located.
Transcription is unaffected in cells lacking intact IGCs
Next, we evaluated the functional implications of complete IGC disassembly, namely the effects on transcription and pre-mRNA splicing in situ. A-431 cells transiently transfected with FLAG-Clk/STY were gently permeabilized with digitonin, and transcription buffer containing bromo-uridine-triphosphate (bromo-UTP) was added to the cells for 5 min at 37°C. The cells were then processed for triple-label immunolocalization of FLAG-Clk/STY, bromo-UTP, and SR proteins (Fig. 5). Cells overexpressing FLAG-Clk/STY (Fig. 5 A) exhibited a completely diffuse localization of SR proteins, as confirmed by staining with 3C5 (Fig. 5 C). Interestingly, such cells remained transcriptionally competent (Fig. 5 B), and the extent of bromo-UTP incorporation was comparable to that in surrounding untransfected cells.

Figure 6. SC35 accumulation at transcription sites is dramatically reduced in cells without intact nuclear speckles. β-globin transcripts are detected as a single locus in the nucleus of A-431 cells stably transfected with β-globin genomic DNA (A). In these stable cell lines, SC35 accumulates at the β-globin transcription site as expected (arrows, A and B). Transiently expressed GFP-Clk/STY initially localizes to nuclear speckles (D), and in addition, accumulates at transcription sites (arrows, C and D). In cells expressing GFP-Clk/STY and with completely disassembled nuclear speckles (E), SC35 does not accumulate at the β-globin transcription site (arrow, F and G). However, in an adjacent cell that is not overexpressing GFP-Clk/STY, SC35 does accumulate at the transcription site (arrowhead, F and G). Bars, 5 μm.
Splicing factors do not accumulate at transcription sites and splicing is significantly reduced in cells without intact IGCs
A-431 cell lines stably expressing β-globin genomic DNA were generated to assess splicing capacity in cells without intact IGCs. The site of β-globin pre-mRNA synthesis was detected by RNA FISH as a single dot in each interphase nucleus (Fig. 6 A), and splicing factor SC35 was colocalized at this transcription site (Fig. 6 B), as expected from previous studies (Jiménez-García and Spector, 1993; Xing et al., 1993; Huang and Spector, 1996). Interestingly, GFP-Clk/STY was recruited to the transcription site in cells that had not yet undergone nuclear speckle disassembly (Fig. 6, C and D). The β-globin pre-mRNA transcription site (nascent transcripts) in nuclei overexpressing Clk/STY and exhibiting complete nuclear speckle disassembly was of comparable size and intensity to loci in untransfected nuclei (Fig. 6 G), confirming that transcription is not affected by Clk/STY overexpression. However, the loci in 11 of 12 cells scored without intact IGCs exhibited a largely reduced accumulation of SC35 compared with loci in untransfected nuclei (Fig. 6 F; compare regions at arrow and arrowhead). In addition, using a nonphosphoepitope antibody that recognizes the U2 snRNP B" protein, we also observed reduced accumulation of U2 snRNP at this transcription site in cells without intact IGCs (unpublished data).
Although splicing factors did not accumulate to significant levels at transcription sites in cells without intact IGCs, we directly examined the ability of β-globin pre-mRNA to be spliced at the transcription site. We used an oligonucleotide probe to specifically detect removal of intron 2 from β-globin pre-mRNA in vivo by fluorescence in situ hybridization. A probe designed to target the splice junction of exons 2/3 hybridizes to spliced β-globin mRNA in all untransfected cells scored (Fig. 7 B, arrowheads; 79/79 cells). Nearly all cells that overexpressed GFP-Clk/STY but had not yet undergone nuclear speckle disassembly also exhibited a hybridization signal with the splice junction probe (Fig. 7 B, arrow; 50 of 52 cells exhibited a hybridization signal). However, there was no hybridization signal with the splice junction probe in cells that lacked IGCs due to overexpression of GFP-Clk/STY, indicating that splicing was inhibited (Fig. 7 D; 26/26 cells), by comparison with an adjacent untransfected cell which gave a hybridization signal (Fig. 7 D, arrowhead). A probe designed to target β-globin intron 2 hybridized to β-globin pre-mRNA both in untransfected cells and in all cells transfected with GFP-Clk/STY that exhibited complete nuclear speckle disassembly (Fig. 7, E and F; 16/16 cells). Similar results were observed using a stable cell line expressing a β-tropomyosin minigene construct (unpublished data). Because we detected splicing only in cells having intact nuclear speckles, we conclude that splicing factors originating from an entirely diffuse nucleoplasmic pool (hyperphosphorylated) are not competent to perform pre-mRNA splicing in vivo.

Figure 7. Splicing is markedly reduced to absent in cells without intact nuclear speckles. A-431 cells stably expressing β-globin genomic DNA were transiently transfected with GFP-Clk/STY (A, C, and E). In situ hybridization was performed using oligonucleotide probes to the splice junction of exons 2/3 (B and D) or to intron 2 (F). GFP-Clk/STY initially localizes in nuclear speckles and splicing of β-globin pre-mRNA is unaffected (B, arrow) by comparison with splicing in an untransfected nucleus (B, arrowhead). However, in cells with completely disassembled nuclear speckles (C, cell on the left), there is no hybridization signal in any focal plane (D, cell on the left), whereas splicing is detected in a neighboring untransfected cell (D, arrowhead). A probe that targets β-globin intron 2 hybridizes to β-globin pre-mRNA in untransfected cells (F, arrowhead) as well as cells expressing GFP-Clk/STY and exhibiting no intact nuclear speckles (F, arrow). Bar, 5 μm.
Catalytically inactive mutant Clk/STY(K190R) traps splicing factors in nuclear speckles
Since hyperphosphorylation of SR proteins is required for release of splicing factors from nuclear speckles (Misteli et al., 1997), we reasoned that we may interfere with splicing factor release by overexpression of mutant Clk/ STY(K190R). A-431 cells were transfected with GFP-Clk/ STY(K190R) and the effects of overexpression were analyzed in living cells. Time-lapse images of a cell overexpressing GFP-Clk/STY(K190R) are shown in Fig. 8. A sequence of 200 images (350 ms exposures) was taken every 30 min, beginning 6 h after transfection when a GFP-Clk/ STY(K190R) signal was first detectable in the nuclear speckles of the cell shown. At 6.0 h (Fig. 8 A), the speckle morphology and peripheral movement was comparable to that shown previously for GFP-SF2/ASF (Misteli et al., 1997). However, in contrast to time-lapse observations of speckles in control cells, the cells transfected with Clk/STY(K190R) began to exhibit bright GFP-Clk/STY(K190R) foci on the periphery of speckles. In the example shown in Fig. 8, at 6.0 h after transfection, two speckles had bright foci (Fig. 8 A, arrows; video 1). By 6.5 h, each speckle had developed at least one bright focus (Fig. 8 B; video 2). Multiple foci appeared on all speckles by 7.0 h, often in pairs that maintained close proximity (Fig. 8 C, arrows; video 3). Finally, at 7.5 h, the speckles had multiple foci, and they no longer exhibited the typical peripheral movement. Instead, the speckles were almost completely immobilized (Fig. 8 D, arrow; video 4). Identical foci were observed in cells transfected with FLAG-Clk/STY(K190R) (unpublished data), as well as in fixed cells (see below), ruling out the possibility that the foci were merely aggregates of the GFP fusion or a result of phototoxic effects of live-cell imaging. Furthermore, immunoelectron microscopy analysis showed that foci contain interchromatin granules (unpublished data).
Similar to what we saw with overexpression of wild-type Clk/STY, transient overexpression of Clk/STY(K190R) resulted in a population of transfected cells expressing different amounts of kinase, except that the mutant kinase showed various stages of foci formation on the periphery of nuclear speckles. Some nuclei (without nuclear speckle foci) showed a complete colocalization of splicing factors and Clk/STY(K190R), as shown in Fig. 1, C and D. We speculate that this variation in phenotype is due to the timing of DNA entry into each cell and the lower level of mutant kinase expression, and that formation of foci might require expression of Clk/STY(K190R) to reach a certain threshold to override the activity of endogenous Clk/STY isoforms, as well as other putative SR protein kinases. We examined the focal accumulations at the periphery of nuclear speckles in cells that we interpret as being representative of the early stages of foci formation (equivalent to the stages shown in Fig. 8, A and B). Bromo-UTP incorporation indicated that global transcription is not altered in cells with this phenotype (unpublished data). RNA FISH using a probe against β-globin intron 2 verified that β-globin transcripts are being synthesized; however, the splice junction probe hybridizes in only 60% of the cells scored, suggesting that splicing is somewhat less efficient when nuclear speckles are partially immobilized (unpublished data).
Immunofluorescence localization with nonphosphoepitope antibodies against B" (Fig. 9, A-C), SF2/ASF (Fig. 9, D-I), and m 3 G (unpublished data) verified that these splicing factors precisely colocalized with GFP-Clk/STY(K190R) in the foci. We interpret these foci as regions of the speckles in which splicing factors accumulate because they are in a state of reduced phosphorylation and therefore cannot be released from the speckles. To confirm this hypothesis, we transfected A-431 cells with GFP-Clk/STY(K190R) and performed immunofluorescence using anti-SC35 antibody, which recognizes a phosphoepitope on SC35, and mAb104, which recognizes phosphoepitopes on a family of SR proteins (Roth et al., 1990). There was a dramatic reduction of immunolabeling with these phosphoepitope antibodies in the focal accumulations of GFP-Clk/STY(K190R), as noted by complete absence of labeling in these regions with mAb104 (Fig. 10, A-C) and with anti-SC35 (Fig. 10, D-I). This result confirms that unphosphorylated or hypophosphorylated proteins are present in the foci and might be unable to leave the speckles due to a lack of or reduced levels of phosphorylation.
Discussion
Observations of nuclear speckles in living cells have shown that they are highly dynamic nuclear domains (Misteli et al., 1997;Eils et al., 2000). Although there is a continuous exchange of the protein constituents of speckles over time (Kruhlak et al., 2000;Phair and Misteli, 2000), each speckle maintains its position in the nucleus, suggesting that some static component tethers the speckles to a particular location within the nucleoplasm. To investigate this possibility, we used Clk/STY overexpression as a tool for modulating the integrity of speckles in vivo. We reasoned that the release of SR proteins from speckles might allow us to reveal underlying structural elements, such as a network of filaments or SR protein receptors, hypothesizing that a specific structural component of nuclear speckles would maintain its localization while splicing factors would be released. However, all proteins examined, including some predicted to serve structural roles, like nuclear speckle populations of lamin A (Jagatheesan et al., 1999) and snRNP-associated actin (Nakayasu and Ueda, 1984), redistributed upon overexpression of Clk/STY. Furthermore, ultrastructural analysis of cells without intact nuclear speckles did not reveal empty regions or areas in the nucleoplasm that appeared to be remnants of a previously intact IGC. We conclude from this study that IGCs are likely to be maintained by protein-protein interactions, including RS domain-RS domain interactions among members of the SR protein family of pre-mRNA-splicing factors, rather than by attachment to an IGC-specific framework.
Although it does not appear that IGC components, such as lamin A or actin, form filaments that provide a scaffold for clustering of interchromatin granules, it is possible that individual interchromatin granules may require these structural proteins as monomers or very short multimers in order to assemble a large number of protein and RNA components into particles. Alternatively, G-actin or lamin A monomers may be recruited to transcription sites as members of interchromatin granules. Once at transcription sites, they may multimerize into short filaments that may act as a scaffold for the assembly/disassembly of the transcription RNA processing complex. In support of this possibility, β-actin and actin-related proteins are components of the mammalian SWI/SNF-like BAF (Brg-associated factor) complex, and binding of the BAF complex to the nuclear matrix in vitro is enhanced by phosphatidylinositol (4,5)-bisphosphate (PtdIns[4,5]P2), a lipid that regulates actin-binding proteins (Zhao et al., 1988). Furthermore, PtdIns(4,5)P2 and multiple phosphatidylinositol phosphate kinase (PIPK) isoforms have been localized to nuclear speckles in vivo by antibody labeling (Boronenkov et al., 1998). Recently, Percipalle et al. (2001) have shown that actin becomes associated with a Balbiani ring mRNA via a heterogeneous nuclear ribonucleoprotein (hrp36) at the site of transcription. Future studies will directly address the organization of individual interchromatin granules and the possible role of structural proteins in their assembly/disassembly.
We examined the effect of Clk/STY hyperphosphorylation on the release of a large number of protein constituents of IGCs, including many that do not contain the RS domain essential for phosphorylation by Clk/STY. Our finding that all proteins redistributed, regardless of the presence of an RS domain, is consistent with the proposal that transcription and RNA-processing factors may exist in the nucleus in a unitary particle called a transcriptosome (Gall et al., 1999), or alternatively, in multiple smaller complexes. However, it is currently unclear if such a particle is held together simply by protein-protein interactions or in concert with other potential mechanisms. In this regard, it is particularly intriguing that upon overexpression of Clk/STY, the stable population of poly(A)+ RNA that is present in IGCs (Huang et al., 1994) becomes diffusely distributed throughout the nucleoplasm, whereas nascent transcripts at specific transcription sites do not redistribute. This finding raises the possibility that the stable population of poly(A)+ RNA that is localized to IGCs may have a role in maintaining the organization of pre-mRNA-processing factors at these nuclear domains. These stable RNAs may represent the core organizing unit of individual interchromatin granules and the binding site for RNA-processing proteins. Studies are currently underway to purify and characterize these RNA molecules.
The release of splicing factors from IGCs by hyperphosphorylation makes these factors available for recruitment to sites of transcription and splicing (Misteli et al., 1997). Since not all interchromatin granules dissociate at once, regulatory mechanisms must influence the steady-state level of interchromatin granules within these structures as well as the rate of release of complexes into the nucleoplasmic pool. In this study, we tested whether an entirely nucleoplasmic pool of splicing factors, that presumably would be fully accessible to transcription sites, was sufficient for both recruitment and function. Interestingly, we found that disruption of IGCs did not affect synthesis of pre-mRNA on either a global or a specific level. However, IGC disassembly largely prevented accumulation of splicing factors on nascent transcripts at the site of transcription, and in doing so significantly reduced or abolished pre-mRNA splicing. Interestingly, in a previous study we have shown that microinjection of antisense oligonucleotides or antibodies to pre-mRNA splicing factors resulted in the rounding up of nuclear speckles and an inhibition of both transcription and pre-mRNA splicing (O'Keefe et al., 1994). In this study we show that this coordination can be disrupted by the break-up of nuclear speckles, suggesting that this nuclear structure plays some role in coupling these two processes.
We cannot completely exclude the possibility that reduction in splicing in vivo could be due to inactivation of splicing factors via their hyperphosphorylation rather than the loss of IGCs. For example, a recent in vitro study showed that Clk/STY directly affects the activity of SR proteins, and altering the phosphorylation state of these proteins either by hyper-or hypophosphorylation resulted in inhibition of splicing activity (Prasad et al., 1999). Overexpression of Clk/STY in vivo has also been shown to affect splice site selection on a reporter transcript (Duncan et al., 1997), although correlation with the extent of IGC disassembly was not reported. However, our data demonstrate that upon hyperphosphorylation of SR proteins, all components tested, including those that do not contain RS domains, were redistributed. Both an SR protein (SC35, Fig. 6 F) and a non-SR protein U2-B" (unpublished data) failed to accumulate at transcription sites, and there was a marked reduction in spliced product. It is therefore likely that in vivo the organization of IGCs is fundamentally linked to the phosphorylation state of SR proteins and hence to the ability of the processing machinery to perform pre-mRNA splicing. This possibility is further supported by previous studies showing that phosphorylation of SR proteins was linked to their release from IGCs and subsequent recruitment to transcription sites (Misteli et al., 1997). While pre-mRNA splicing can occur in vitro in the absence of intact IGCs, it is conceivable that nuclear extracts used for such experiments contain individual interchromatin granules that may be altered upon SR protein hyper-or hypophosphorylation, leading to decreased splicing activity.
The present study implicates IGCs in the assembly/maturation of the RNA-processing machinery into splicing-competent complexes or particles that must be maintained during transit to active genes for efficient targeting and function. Perhaps certain components of these complexes are responsible for recognizing newly synthesized pre-mRNA and/or stabilizing the association of the splicing machinery on pre-mRNA. Our finding that GFP-Clk/STY was recruited to transcription sites is consistent with the possibility that, in addition to regulating release of splicing factors from IGCs, it might also regulate/remodel splicing factor interactions during alternative or constitutive splicing.
Assuming that Clk/STY is responsible for the phosphorylation events that lead to release of splicing factors from nuclear speckles, a mutant kinase that lacks SR protein kinase activity would be expected to inhibit the release of splicing factors in vivo. As predicted, overexpression of GFP-Clk/STY(K190R) causes peripheral regions of nuclear speckles to become immobilized. Splicing factors accumulate in foci, and this ultimately leads to a reduction in splicing activity. Perhaps the mutant kinase is interacting with its substrates, but because it is unable to phosphorylate them, the result is sequestration of hypophosphorylated splicing factor complexes in foci. Examination of these regions in fixed cells confirmed that SR proteins and snRNPs are present in the foci, and that there is a depletion of phosphorylated splicing factors in the regions that become immobilized. Morphological changes of nuclear speckles in living cells overexpressing GFP-Clk/STY(K190R) and phosphoepitope depletion in foci of such nuclear speckles strongly support the idea that phosphorylation of SR proteins by Clk/STY is one of the key events that results in recruitment of splicing factor complexes from nuclear speckles to sites of transcription. Furthermore, alterations in the phosphorylation state of SR proteins are highly correlated with IGC structural reorganization and demonstrate an important link between structure and function in the mammalian cell nucleus.
Materials and methods
cDNA constructs
PCR was used to generate restriction sites at the start codon of murine Clk/STY1 and Clk/STY1(K190R) cDNAs for convenient subcloning into pEGFP-C3 (CLONTECH Laboratories, Inc.). Inducible Clk/STY1 overexpression was achieved by using a tetracycline-responsive element FLAG-pUHD-104B (Tsukamoto et al., 2000). KIAA cDNA clones were obtained from Kazusa DNA Research Institute (Chiba, Japan) and subcloned in frame into pEGFP-C vectors.
Cell culture and transfection
A-431 cells were grown in DME containing high glucose (GIBCO BRL/Life Technologies) supplemented with penicillin-streptomycin and 10% fetal bovine serum (Hyclone). Cells were seeded onto acid-washed coverslips in 35-mm petri dishes containing 2 ml DME, and attached cells were transiently transfected with 2 µg total DNA using FUGENE (Roche) according to the manufacturer's instructions. FLAG-Clk/STY was cotransfected with pTetON (CLONTECH Laboratories, Inc.), and expression of kinase was induced by addition of doxycycline (2.0 µg/ml). GFP-KIAA fusion constructs were cotransfected with FLAG-Clk/STY + pTetOn 24 h before fixation, and doxycycline was added 12-14 h before fixation. In all other experiments, cells were processed for immunofluorescence localization of nuclear speckle proteins 14-16 h after transfection.
TEM analysis
A-431 cells seeded onto gridded coverslips and transfected with GFP-Clk/STY were fixed for 15 min in 2% formaldehyde/0.5% glutaraldehyde in PBS (pH 7.4). Cells were rinsed in buffer A (PBS + 0.5% goat serum + 0.3 M glycine), then permeabilized for 20 min in PBS + 2% saponin. Anti-SC35 antibody was applied (1:1,000) in buffer B (PBS + 0.5% goat serum + 0.3 M glycine + 0.5% saponin) for 1 h at room temperature. Cells were rinsed in buffer B and Texas red GAM-IgG1 was applied (1:1,000) in buffer B. Cells were rinsed in buffer B, mounted, and sealed with rubber cement. The position of cells expressing GFP-Clk/STY and exhibiting completely disassembled SC35 nuclear speckles was documented and used later to relocate the cells for thin sectioning. Coverslips were processed as described (Huang et al., 1994); briefly, embedded cells were thin sectioned (100 nm), stained by the EDTA regressive method (Bernhard, 1969) and labeled with 3C5 antibody followed by 5 nm colloidal gold-conjugated secondary antibody. Sections were examined using a Hitachi H-7000 TEM operated at 75 kilovolts.
Live cell microscopy
Attached cells were transfected with GFP-Clk/STY using FUGENE as described above. The cells were transferred 4 h after transfection to an FCS2 live-cell chamber (Bioptechs) mounted onto the stage of an Olympus IX70 inverted fluorescence microscope (Olympus) and kept at 37°C in L-15 medium containing 10% FBS and without phenol red. Time-lapse images acquired with a 100× 1.4 NA heated objective lens were captured with a Peltier-cooled IMAGO CCD camera using an SVGA interline chip (1,280 × 1,024) with a pixel size of 6.7 × 6.7 µm (Till Photonics) as soon as nuclear expression was initially detected (at ~6.0 h). For GFP-Clk/STY(K190R), a sequence of 200 exposures (350 ms each) was recorded every 30 min.
Online supplemental material
Videos corresponding to Fig. 8 are presented. Image sequences were acquired using TillVision software (Till Photonics) and animated using QuickTime software. For each video a sequence of 200 images (350 ms each) was taken. Video speed is five times faster than real time. Video 1 shows nuclear speckle dynamics in a cell overexpressing GFP-Clk/STY(K190R) at 6.0 h posttransfection. GFP-Clk/STY localizes to nuclear speckles and exhibits rapid movement in and out of the speckles. Focal accumulations of GFP-Clk/STY are seen on several speckles. Video 2 shows the same cell at 6.5 h posttransfection. Although all of the nuclear speckles exhibit foci at this stage, nuclear speckle dynamics outside of the foci are largely unaffected. Video 3 shows the same cell at 7.0 h posttransfection. As multiple foci form on each nuclear speckle, they appear paired and larger regions of the speckles become immobilized. Video 4 shows the same cell at 7.5 h posttransfection. The nuclear speckles are almost completely immobilized. Videos are available at http://www.jcb.org/cgi/content/full/jcb.200107017/DC1.
Neutrino emission characteristics and detection opportunities based on three-dimensional supernova simulations
The neutrino emission characteristics of the first full-scale three-dimensional supernova simulations with sophisticated three-flavor neutrino transport for three models with masses 11.2, 20 and 27 M_sun are evaluated in detail. All the studied progenitors show the expected hydrodynamical instabilities in the form of large-scale convective overturn. In addition, the recently identified LESA phenomenon (lepton-number emission self-sustained asymmetry) is generic for all our cases. Pronounced SASI (standing accretion-shock instability) activity appears in the 20 and 27 M_sun cases, partly in the form of a spiral mode, inducing large but direction and flavor-dependent modulations of neutrino emission. These modulations can be clearly identified in the existing IceCube and future Hyper-Kamiokande detectors, depending on distance and detector location relative to the main SASI sloshing direction.
I. INTRODUCTION
The neutrino signal of the next nearby core-collapse supernova (SN) will be measured in many detectors that will register tens to hundreds of events, assuming a fiducial distance in the galaxy of 10 kpc [1]. The largest statistics will be provided by Super-Kamiokande [2,3] with roughly 10 4 and IceCube [4][5][6] with roughly 10 6 events, the latter without event-by-event energy information. In the context of neutrino oscillation physics, additional large detectors are in different phases of planning, notably JUNO [7], a 20 kt liquid scintillator detector, Hyper-Kamiokande [8], a megaton water Cherenkov detector, and a 30 kt liquid-argon time-projection chamber [9,10]. The main problem, of course, is that galactic SNe are rare, perhaps one every few decades [11][12][13][14][15][16][17][18][19][20]. Clearly we should prepare well for such a once-in-a-lifetime opportunity and should understand in advance what could be learnt from such an observation.
The low-statistics neutrino signal of SN 1987A has confirmed the general picture of stellar core collapse, but was too sparse to extract much astrophysical detail [21]. On the other hand, it has provided many useful particle-physics lessons, notably on the possible energy loss in new forms of radiation such as axions [22,23]. A future observation will refine such arguments, but the real benefit of high statistics may be detailed astrophysical information on the physics of core collapse [24][25][26][27][28][29][30]. Thirty years after the formulation of the neutrino-driven delayed-explosion paradigm by Bethe and Wilson [31,32], we still cannot be sure that their theory is not missing some important ingredient [33].
In the course of the present research project we have recently discovered the LESA phenomenon ("lepton-number emission self-sustained asymmetry") [55]. The deleptonization (νe minus ν̄e) flux during the accretion phase develops a pronounced dipole pattern, i.e., the lepton-number flux emerges predominantly in one hemisphere. We have identified a feed-back loop as the likely cause of this effect. Its elements are asymmetric accretion caused by shock-wave deformation and asymmetric neutrino heating behind the shock front causing the shock-front deformation. It is not yet clear if LESA is a benign curiosity of multi-dimensional SN physics or an important player in the overall core-collapse phenomenology, perhaps in conjunction with neutrino flavor conversion. Either way, its discovery certainly shows that in multi-dimensional SN models there is room for hitherto unsuspected new phenomena.
The various hydrodynamical instabilities appearing in 3D core collapse during the phase of a standing accretion shock imply that the neutrino signal expected from the next nearby SN can show fast modulations and depends on observer location relative to the main direction of SASI sloshing and relative to the LESA dipole direction.
The main purpose of our paper is to explore these issues based on our current portfolio of 3D core-collapse models with full-scale three-flavor neutrino transport. The progenitor masses are 11.2, 20 and 27 M_sun; all of them show the LESA phenomenon and the two heavier models show pronounced SASI activity.
The present paper expands on our earlier Physical Review Letter [28] where we have reported the appearance of signal modulations by SASI that are detectable in IceCube and the future Hyper-Kamiokande detector. In the context of 2D models, this point had been made earlier [25]. On the other hand, it had also been shown that convective overturn alone produces signal modulations that can be detected only if the SN is very close [26]. Therefore, detectable signal modulations are typically tied to the appearance of SASI.
A vigorous debate among SN modelers had revolved around the question if SASI indeed appears in 3D models or if its growth would be suppressed by large-scale convective overturn [49,56,57]. Meanwhile SASI activity in 3D SN models with different neutrino treatments was found by several authors [50,58,59], but such a convergence of qualitative numerical conclusions, of course, leaves open the question of what actually happens in nature. The appearance of the SASI is driven by progenitor-dependent conditions, which determine the growth rates of the SASI and convective instability in the postshock accretion layer [53]. A neutrino observation of SASI modulations would be a unique smoking gun to prove its very existence in real-life core-collapse events.
When we studied more closely how the neutrino emission characteristics depend on observer direction we noticed a pronounced asymmetry in the lepton-number flux, whereas the overall neutrino luminosity is nearly spherically symmetric except for the SASI modulations [55]. A detailed study of the various elements of this puzzling LESA phenomenon, however, has yielded support for its possibly physical origin. In particular, we believe that we have identified the feedback loop driving this new neutrino-hydrodynamical instability. Nevertheless, we cannot exclude that LESA is a numerical artifact and the final verdict depends on LESA being reproduced by 3D models with true multi-D transport (for discussions of multi-D transport effects in 3D, see Ref. [60], and in 2D, see Ref. [61]).
A directional dependence of this sort is not immediately obvious in the usual visualization of multi-D hydrodynamical simulations. Extracting the neutrino signal characteristics as a function of observer direction requires a significant amount of dedicated post-processing. In this sense, our study is also meant to encourage other SN modelers to show this sort of information which is important for neutrino signal detection and studies of flavor conversion. Our procedure for an efficient extraction of this information may be useful for other authors as well.
Of course, our discussion pertains exclusively to the SN accretion phase where hydrodynamical instabilities are a key element and which must ultimately lead to the explosion. For the initial collapse and bounce phase, perhaps up to about 100 ms after bounce, spherical symmetry remains a good approximation in discussing the neutrino emission. The neutrino signal during this early phase is surprisingly independent of model details [62,63]. Likewise, after the explosion has taken off, the subsequent phase of proto-neutron star cooling is again governed by spherically symmetric emission. These three phases should be seen as distinct episodes, testing very different aspects of hydrodynamics as well as nuclear and particle physics.
Our paper begins in Sec. II with a summary of the main features of our SN models. In Sec. III we discuss the features of the neutrino signal from our three SN progenitors and the features of the LESA phenomenon in the presence of SASI. In Sec. IV we review the role of neutrino oscillations, while in Sec. V we focus on the detection perspectives of the signal modulation in IceCube and Hyper-Kamiokande. Discussion and conclusions will be presented in Sec. VI.
II. NUMERICAL SUPERNOVA MODELS
Our SN simulations were performed with the neutrino-hydrodynamics code Prometheus-Vertex. This SN simulation tool combines the hydrodynamics solver Prometheus with the neutrino transport module Vertex (see Refs. [55,58] for more details and additional references). It includes a "ray-by-ray-plus" (RbR+), fully velocity and energy-dependent neutrino transport module based on a variable Eddington-factor technique that solves iteratively the neutrino energy, momentum, and Boltzmann equations [64,65]. We employ state-of-the-art neutrino interaction rates [39,65] and relativistic gravity and redshift corrections [64,66]. The RbR+ description assumes the neutrino momentum distribution to be axisymmetric around the radial direction everywhere, implying that the neutrino fluxes are radial.
We have performed 3D simulations for the evolution of the 11.2 and 27 M_sun progenitors of Woosley, Heger and Weaver [67] and the 20 M_sun model of Woosley and Heger [68], using the high-density equation of state (EoS) of Lattimer and Swesty [69] with a nuclear incompressibility of K = 220 MeV. They were previously employed for 2D simulations [38][39][40][70]. Seed perturbations for aspherical instabilities were imposed by hand 10 ms after core bounce by introducing random density perturbations of 0.1% on the entire computational grid. None of these models led to successful explosions during the simulation period of about 350 ms for the 11.2 and 20 M_sun models and 550 ms for the 27 M_sun case, although explosions were obtained in the corresponding 2D simulations with the same microphysics. The postbounce hydrodynamics of the 27 M_sun model, in particular the prominent presence of SASI sloshing and spiral modes, was described in a previous paper [58], while more details on the hydrodynamics of the 11.2 and 20 M_sun SN progenitors have been provided in our LESA paper [55].
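As a minimal illustration of this seeding step (a sketch only; the actual grid layout and random-number handling in Prometheus-Vertex are not specified here, and the grid dimensions below are placeholders), cell-wise random density perturbations at the 0.1% level could be applied as follows:

```python
import numpy as np

def seed_perturbations(rho, amplitude=1e-3, seed=42):
    """Apply zero-mean random density perturbations of relative size
    `amplitude` (0.1% as quoted in the text) to every cell of the grid."""
    rng = np.random.default_rng(seed)
    return rho * (1.0 + amplitude * (2.0 * rng.random(rho.shape) - 1.0))

# toy 3D grid (n_r, n_theta, n_phi); shape and values are placeholders
rho = np.ones((400, 88, 176))
rho_perturbed = seed_perturbations(rho)
print(np.abs(rho_perturbed / rho - 1.0).max())   # <= 1e-3
```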
The 20 and 27 M_sun models both show periods of strong SASI activity. In the latter case, which was simulated until 550 ms post bounce (p.b.), a second SASI episode occurs after a period clearly dominated by convective overturn. On the other hand, the 11.2 M_sun model does not exhibit any clear evidence of SASI motions but develops the typical signatures of postshock convective overturn in the neutrino-heating layer.
We will usually show neutrino flux characteristics as they would be seen by a distant observer located at chosen angular coordinates in the coordinate system of the SN simulation. For any angular position, the neutrino luminosity reaching the observer is given by the superposition of the projected fluxes emitted under different angles, as described in Appendix A. Therefore, the observable neutrino fluxes are weighted hemispheric averages performed such as to include flux projection effects in the observer direction. The hemispheric averages, as expected, show smaller time variations than specific angular rays.
As a benchmark example, we show in Fig. 1 the luminosity for νe, ν̄e and νx = νµ, ντ, ν̄µ or ν̄τ as a function of time, as seen by a distant observer with angular coordinates close to the plane of the SASI spiral mode. Large-amplitude, near-sinusoidal modulations of the neutrino signal occur in the interval 120-260 ms as imprinted by SASI. For 260-410 ms a convective phase occurs, followed by another SASI episode on a different plane with respect to the previous one. SASI modulations have a similar amplitude for νe and ν̄e, while they are somewhat smaller for νx. Figure 2 shows the properties of our 27 M_sun simulation, averaged over all directions, to mimic an equivalent spherically symmetric case. Of course, this average does not correspond to what any individual observer would measure. The modulations here have very small amplitude, i.e., convection and SASI activity do not strongly modulate the overall neutrino emission parameters; the modulations in various directions essentially cancel out. The hierarchy of fluxes and average energies as well as the shape parameter correspond to what is expected. It is noteworthy, however, that the average ν̄e and ν̄x energies become very similar after around 220 ms, at the end of the first SASI episode, when the shock wave has considerably expanded. This feature has been seen in previous simulations [30,37] too, and reflects the temperature increase in the settling, growing accretion layer on the proto-neutron star core. This accretion layer radiates mainly νe and ν̄e and downgrades the νx escaping from deeper layers in energy space [71]. The pronounced luminosity drop at ∼250 ms occurs because of the infall of the Si/SiO shell interface leading to strong shock expansion and therefore to a dramatic decrease of the mass accretion rate [58].
Although the difference between 2D and 3D models is not the subject of our work, in Fig. 2 we show the corresponding 2D spectral parameters averaged over all directions. The 3D and 2D integrated quantities are very similar up to 300 ms, when the 2D model explodes. Figure 3 shows the luminosity of the νe, ν̄e and νx species for our 20 M_sun simulation, averaged over all directions to mimic an equivalent spherically symmetric case, for comparison with the top panel of Fig. 2. The hierarchy among the luminosities of different flavors as well as their behavior as a function of time is similar for both the 20 and 27 M_sun progenitors. However, the luminosities of νe and ν̄e are slightly higher for the 20 M_sun simulation. Despite the average over all directions, the integrated luminosities show residual sinusoidal modulations for t ≥ 160 ms (i.e., during the SASI episode) with an amplitude larger than for the 27 M_sun simulation (see Fig. 2, top panel) because the SASI activity is stronger for this SN model.
A. 11.2 M_sun progenitor
We now turn to a detailed discussion of the direction- and time-dependent features of the observable neutrino signal emitted by our 3D models. Beginning with the 11.2 M_sun progenitor, Fig. 4 shows the luminosity evolution, L, relative to the time-dependent average ⟨L⟩ over all directions, separately for νe, ν̄e and νx. This model does not show any SASI activity, but only small-amplitude, fast time variations caused by large-scale convective overturn. However, after some 150 ms, the νe and ν̄e luminosities develop a quasi-stationary dipole pattern, representing the LESA effect discussed in our earlier paper [55].
The two observer directions shown in Fig. 4 (blue and magenta lines) are chosen on opposite sides of the SN along the LESA axis. The black curve represents a typical orthogonal direction, i.e., it is on the "LESA equator." The observer directions remain fixed in time whereas the LESA dipole direction slowly drifts, so in this sense these directions are not always exactly along the LESA axis or equator, respectively.
As discussed in our LESA paper [55], the sum of all flavor luminosities is almost independent of direction and the ν̄e and νe dipoles point in opposite directions. However, in a realistic detector, we measure only the ν̄e signal by inverse beta decay. Ignoring flavor oscillations, the measurable L_ν̄e could therefore differ by as much as 30% during the accretion phase, depending on the observer location, affecting the implied overall neutrino luminosity.
Of course, what is really measured in a detector depends on flavor conversion, which likely is a large effect. Since the ν̄x fluxes show a much weaker directional modulation, the real uncertainty between the measurement and the true 4π-equivalent flux will be less dramatic.
In order to quantify the directional dependence of the neutrino signal, Fig. 5 shows the neutrino flux properties (luminosity, mean energy and the shape parameter α) for the three species along the same three directions chosen in Fig. 4, i.e., "Magenta," "Black" and "Blue" directions respectively named by the curves of the same colors shown in Fig. 4. We recall that the flux characteristics pertain to observers in those directions, i.e., they involve hemispheric averaging with appropriate flux projections. The small-amplitude "vibrations" of these parameters are caused by accretion variations associated with convective overturn.
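The three quantities shown here (luminosity, mean energy and shape parameter α) are exactly what is needed to evaluate the Gamma-distribution parameterization of the spectra that is used later (see Appendix B). The following sketch, with placeholder values rather than simulation output and assuming the commonly used form f(E) ∝ E^α exp[−(α+1)E/⟨E⟩], shows one way to evaluate and sanity-check such a spectrum:

```python
import numpy as np
from scipy.special import gamma as Gamma

def gamma_spectrum(E, E_mean, alpha):
    """Normalized Gamma-distribution spectrum with mean energy E_mean and
    shape (pinching) parameter alpha:
    f(E) ~ E**alpha * exp(-(alpha+1)*E/E_mean), with integral f(E) dE = 1."""
    norm = (alpha + 1.0)**(alpha + 1.0) / (E_mean * Gamma(alpha + 1.0))
    return norm * (E / E_mean)**alpha * np.exp(-(alpha + 1.0) * E / E_mean)

# placeholder values roughly in the range discussed for the accretion phase
E = np.linspace(0.1, 80.0, 800)                  # neutrino energy grid [MeV]
f = gamma_spectrum(E, E_mean=15.0, alpha=3.0)
print("normalization:", np.trapz(f, E))          # ~1
print("mean energy  :", np.trapz(E * f, E))      # ~15 MeV
```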
The hierarchy among flavor-dependent luminosities along the three directions is slightly different. In particular, L_νe > L_ν̄e along the "Magenta" direction, the two are almost comparable along the "Black" direction, and L_νe < L_ν̄e along the "Blue" direction, while the remaining neutrino flux properties exhibit the same hierarchy independently of the observer direction. There is no SASI activity in this model.
B. 20 and 27 M_sun progenitors
In contrast to the 11.2 M_sun case, the 20 and 27 M_sun progenitors show large-amplitude modulation of the neutrino signal due to SASI spiral modes, which cause accretion variations and corresponding fluctuations of the neutrino emission. The LESA phenomenon also occurs for these progenitors. Even though LESA persists during the phases of violent sloshing of the shock-wave radius, it is somewhat masked during the SASI episodes, as explained in our LESA paper [55]. We focus first on the 27 M_sun progenitor to facilitate comparison with the previous discussions of this model [28,58]. Figure 7 shows the luminosity evolution, relative to the directional average, for the three flavors in analogy to Fig. 4. However, here we do not use the LESA axis and locate the observers in directions where the SASI amplitude is particularly large during the first SASI episode (light blue and violet lines) and a third direction where it is small (black). The SASI-implied modulations, on the other hand, are such that L_νe and L_ν̄e vary in phase with each other (see Fig. 8). The SASI variation of the neutrino signal is up to 15% for νe, even larger for ν̄e, and still around 5% for νx.
While both SASI and convection can lead to large-scale shock deformations, SASI is distinguished by a characteristic quasi-periodic oscillatory nature. As discussed in Ref. [58], the SASI sloshing axis initially wanders and then stabilizes as the sloshing of the shock further grows in amplitude and violence. When SASI starts to grow vigorously, predominantly sloshing occurs, whereas later a transition to a spiral mode takes place, associated with a circular motion of the maximum shock radius. Both SASI and convective regimes are easily recognized in Fig. 7. For 120-260 ms, SASI sloshing and spiral modes occur, for 260-410 ms convection dominates, and then a second SASI episode takes place up to the end of our simulation (cf. Figs. 1, 2 and 6 of Ref. [58]).
The plane where spiral motions develop remains relatively stable until the maximum amplitude is reached and SASI dies down. During the first SASI spiral phase, the plane where it develops is roughly perpendicular to the vector n = (−0.35, 0.93, 0.11) in the SN simulation grid, i.e., there is no alignment with the axis of the spherical polar grid [58]. The second SASI phase develops in a plane different from the first one. Therefore, the three fixed directions shown in Fig. 7 are no longer optimal relative to a maximum SASI effect.
In Fig. 8 the luminosity evolution, relative to the directional average, is shown as a function of time for the three flavors, along the direction plotted in violet in Fig. 7. The LESA phenomenon, while somewhat masked during the SASI episodes, clearly appears during the convective phase between the SASI episodes, in the form of a hemispheric asymmetry between the νe and ν̄e luminosities. The relative LESA amplitude of ν̄e is opposite in sign to that of νe, and the amplitude for νx is smaller, with its sign correlated with that of ν̄e. The SASI modulations, on the other hand, have the same sign for all flavors, but a smaller amplitude for νx.
In order to discuss the directional dependence of the SASI modulation of the neutrino signal, Fig. 6 shows the neutrino flux properties (luminosity, mean energy and shape parameter α) for the three flavors along the same three directions, respectively corresponding to the violet, black, and light blue curves in Fig. 7 and named by color.

[FIG. 5: Evolution of neutrino flux properties for the 11.2 M_sun progenitor as seen from a distant observer. For νe, ν̄e and νx we show the luminosity, average energy and shape parameter α. The "Magenta" and "Blue" directions are opposite along the LESA axis, corresponding to the magenta and blue curves in Fig. 4, whereas the "Black" direction is on the LESA equator (black in Fig. 4).]
Although the neutrino flux properties are similar for all directions, the modulation of the signal along the "Black" and "Light Blue" directions is much less pronounced during the first SASI episode than the modulation along the "Violet" direction. Figure 9 shows sky maps of the relative luminosity of νe for t = 217, 225 and 230 ms, i.e., corresponding to subsequent SASI maximum and minimum signal amplitudes. Comparing the three snapshots, there is again a total variation of ∼20% for the different angular positions. Looking at the hottest and coldest spot in the three time slices, it is clear how the SASI sloshing motions proceed.
We repeat the same analysis as before for the 20 M_sun progenitor. Figure 10, in analogy to Fig. 7, shows the relative luminosity. This progenitor exhibits only one SASI episode, for t ≥ 160 ms, lasting for a longer time than for the 27 M_sun progenitor. The SASI-implied modulations are again such that L_νe and L_ν̄e vary in phase with each other, as clearly visible in Fig. 11. Traces of LESA appear in the hemispheric asymmetry between the νe and ν̄e luminosity, especially before SASI sets in (t < 160 ms), when the relative variation of ν̄e is opposite in sign to that of νe. We find maximum fluctuations of the signal of 17% and minimum of 7%. Figure 12, similar to Fig. 9, shows sky maps of the relative ν̄e luminosity for three snapshots (t = 186, 193 and 200 ms) corresponding to the SASI maximum and minimum signal amplitudes. As for the 27 M_sun SN progenitor, a total variation of ∼20% for the different angular positions occurs. The SASI spiral mode develops in a plane perpendicular to n = (−0.56, −0.81, −0.20) in the SN simulation grid, i.e., in a different plane than in the 27 M_sun case, as evident from a comparison of Figs. 9 and 12. In fact the SASI plane is randomly selected and bears no relation to the numerical grid in the case of a non-rotating model.
C. The LESA phenomenon in the presence of SASI
The LESA phenomenon is characterized by a maximum of the νe emission coincident with a minimum of the ν̄e emission (i.e., the amplitude of the νe emission is anti-correlated with the amplitude of the ν̄e emission), whereas SASI is responsible for correlating the amplitude variations of the νe and ν̄e signals (see Figs. 7, 8, 10, and 11). In order to investigate in greater detail the LESA phenomenon in the presence of SASI for our two heavier progenitors, we consider the LESA dipole, i.e., the dipole component of the lepton-number flux (νe minus ν̄e), following the definition adopted in Sec. 3.1 of Ref. [55] (see also their Fig. 3), and the SASI dipole, i.e., the dipole component of the neutrino energy flux of all flavors (νe + ν̄e + 4 νx). Note that we choose the total neutrino energy flux to define the SASI dipole instead of the total number flux because SASI modulates the total energy flow, including the mean neutrino energies [28]. Note that the LESA dipole is different from zero even during the SASI episodes [55]. This means that the LESA mechanism is active during SASI episodes, even if the LESA dipolar behavior is not clearly visible in the neutrino signal in Figs. 7, 8, 10, and 11 because it is masked by strong SASI modulations.
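To make these dipole definitions concrete, the sketch below (not part of the simulation pipeline; the angular grid, the 3/(4π) normalization convention and the toy flux are assumptions) extracts the monopole and the l = 1 dipole vector of a quantity given on a latitude-longitude grid:

```python
import numpy as np

def monopole_and_dipole(flux, theta, phi):
    """Given flux(theta, phi) on a regular angular grid, return the
    monopole (4*pi average) and a Cartesian dipole vector obtained by
    projecting onto the l=1 angular functions.  With the 3/(4*pi)
    normalization, flux = A*(1 + n.d) returns the vector A*d as dipole."""
    TH, PH = np.meshgrid(theta, phi, indexing="ij")
    dOmega = np.sin(TH) * (theta[1] - theta[0]) * (phi[1] - phi[0])
    nx, ny, nz = np.sin(TH)*np.cos(PH), np.sin(TH)*np.sin(PH), np.cos(TH)
    monopole = np.sum(flux * dOmega) / (4.0 * np.pi)
    dipole = 3.0 / (4.0 * np.pi) * np.array(
        [np.sum(flux * n * dOmega) for n in (nx, ny, nz)])
    return monopole, dipole

# toy example: lepton-number flux with a dipole along +z
theta = np.linspace(0.005, np.pi - 0.005, 90)
phi = np.linspace(0.0, 2*np.pi, 180, endpoint=False)
TH, _ = np.meshgrid(theta, phi, indexing="ij")
flux = 1.0 + 0.5*np.cos(TH)                  # monopole 1, dipole 0.5 along z
m, d = monopole_and_dipole(flux, theta, phi)
print(m, d)                                  # ~1.0 and ~[0, 0, 0.5]
```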
The "SASI dipole" of the 27 M_sun progenitor does not completely vanish between SASI episodes because large-scale convection also causes an overall emission dipole. During the first SASI episode it increases strongly, reaching its maximum around 200 ms, where it is about 16% of its monopole (total energy flux, i.e., the sum of the luminosities as plotted on the top panel of Fig. 2). During the second SASI phase it is at most 10% of its monopole. On the other hand, the LESA dipole quickly grows up to 150 ms. The LESA dipole is almost two times the monopole at about 500 ms. The SASI activity of the 20 M_sun model is more pronounced compared to the 27 M_sun simulation and the SASI dipole is correspondingly larger. However, in relative terms it also reaches a maximum of 16% of the monopole strength at 180 ms and then decreases. On the other hand, the ratio between the LESA dipole and monopole reaches its maximum of about 1.4 at 280 ms. It is interesting to note that the general trend as a function of time is the same for the LESA and SASI dipoles, but it is strongly progenitor dependent. In particular the dipole grows during the SASI activity for the 20 M_sun SN progenitor, while it is on average stationary during SASI for the 27 M_sun SN progenitor. Figure 14 shows the track of the LESA dipole in gray and the SASI dipole in blue hues during the SASI episodes for the 27 M_sun (left) and the 20 M_sun (right) SN progenitors, in order to investigate a possible correlation between the LESA dipole and the plane of the SASI sloshing and spiral modes. While for the 27 M_sun case the SASI spiraling drives the LESA dipole to wander in the SASI plane, this does not happen for the 20 M_sun SN progenitor. The neutrino SASI dipole trajectory closely reproduces the shock-deformation trajectory shown in Fig. 8 of Ref. [55].
The LESA dipole direction as well as the SASI dipoles are progenitor dependent (see also Fig. 3 of Ref. [55]), and none of them are correlated with the numerical grid of the simulation. The mutual interaction between SASI and LESA seems to be strongly progenitor dependent, although from this preliminary analysis it is clear that these are two separate phenomena. We here refrain from drawing any firm conclusion on the interaction between LESA and SASI since a much deeper understanding of the LESA phenomenon and its origin is required, and hydrodynamical simulations of more SN progenitors are needed to better understand the coexistence of the two phenomena.
IV. FLAVOR OSCILLATIONS
Neutrino transport in SN models is treated in the weak-interaction basis of flavors. In our three-species treatment, we use νe, ν̄e and νx, neglecting weak-magnetism effects that distinguish between neutral-current scattering of νµ (ντ) and ν̄µ (ν̄τ). We also ignore the possible presence of muons that would allow charged-current processes for νµ and ν̄µ in the deep interior of the proto-neutron star. Most importantly, we ignore flavor conversion caused by flavor mixing. The justification for this simplification is the strong matter effect that effectively "de-mixes" neutrinos, i.e., the propagation eigenstates essentially coincide with the weak-interaction eigenstates [72]. However, as neutrinos stream away from the SN core, the matter effect decreases and eventually flavor conversion becomes important. What is measured in a detector crucially depends on neutrino flavor oscillations along the way.
In the simplest traditional picture, the slowly-varying matter profile provides for adiabatic flavor conversion, the so-called Mikheev-Smirnov-Wolfenstein (MSW) effect [72,73]. In particular, the recent measurement of the third mixing angle, sin²(2Θ₁₃) = 0.095 ± 0.010 [74], being fairly large, implies that the entire three-flavor conversion process would indeed be adiabatic [75]. For the normal neutrino mass hierarchy (NH), the ν̄e survival probability is p̄_NH = cos²Θ₁₂ ∼ 0.70, whereas for the inverted ordering (IH) it is p̄_IH = 0 [75]. Therefore, a detector measuring ν̄e by inverse beta decay (IBD) will see in NH a superposition of roughly 70% of the original ν̄e flux spectrum with 30% of the ν̄x flux spectrum, whereas in IH it will detect the original ν̄x flux spectrum at the source.
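As a worked illustration of these survival probabilities (a sketch using only the numbers quoted above, with placeholder source fluxes; it is not a full oscillation treatment), the ν̄e flux arriving at a detector under purely adiabatic MSW conversion is a simple two-component mixture of the unoscillated source fluxes:

```python
# Adiabatic MSW mixing of the nu_e-bar flux (illustration only):
# F_obs = pbar * F_nuebar + (1 - pbar) * F_nux, with pbar the
# nu_e-bar survival probability quoted in the text.
import numpy as np

PBAR_NH = 0.70   # ~cos^2(Theta_12), normal ordering
PBAR_IH = 0.0    # inverted ordering: complete flavor swap

def observed_nuebar_flux(F_nuebar, F_nux, pbar):
    """Energy-dependent nu_e-bar flux at the detector for survival
    probability pbar; F_nuebar and F_nux are placeholder source fluxes."""
    return pbar * np.asarray(F_nuebar) + (1.0 - pbar) * np.asarray(F_nux)

# placeholder source fluxes on some energy grid:
F_nuebar = np.array([1.0, 2.0, 1.5])
F_nux    = np.array([0.8, 1.6, 1.8])
print(observed_nuebar_flux(F_nuebar, F_nux, PBAR_NH))  # NH: 70/30 mixture
print(observed_nuebar_flux(F_nuebar, F_nux, PBAR_IH))  # IH: pure nu_x
```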
This simple prediction can get strongly modified by two effects. The density profile can be noisy and show significant stochastic fluctuations [51,[76][77][78][79] that can modify the adiabatic conversion [80][81][82][83][84][85]. Such effects would be especially expected in the turbulent medium behind the shock wave, i.e., the relevance pertains in particular to neutrino propagation after the explosion has set in and the shock wave travels outward. However, we are here concerned with the standing-shock phase and flavor conversion outside of the shock-wave radius.
Of greater importance is then the impact of neutrino-neutrino refraction, which can lead to self-induced flavor conversion, usually at a smaller radius than the MSW effect [86]. It can put the MSW result effectively upside down and can lead to novel spectral features (spectral splits) [87][88][89][90][91][92][93]. On the other hand, self-induced flavor conversion can be suppressed by the "multi-angle matter effect" [94] and this may be typical in many cases, re-instating the traditional scenario [95][96][97]. However, what exactly happens when self-induced conversion is not suppressed remains poorly understood because of a number of complications that have only recently been appreciated [98][99][100][101][102][103][104][105][106]. In addition, the direction-dependent neutrino flux properties and especially the LESA phenomenon throw in additional uncertainties that have not been studied yet.
In this situation we can but state that the ν̄e flux arriving at the detector will be some superposition, possibly depending on energy, of the original ν̄e and ν̄x flux spectra. We therefore consider two extreme cases. One is that the detector measures the original ν̄e flux, the other assumes a complete flavor swap and the detector measures what was the ν̄x flux at the source.
A. Detector Models
Detecting the SASI-imprinted modulations in the high-statistics neutrino signal of the next galactic SN would go a long way in studying SN hydrodynamics. What are the opportunities for such a detection?
In the largest operating detector, IceCube, and the future Hyper-Kamiokande, neutrinos are primarily detected by IBD, ν̄e + p → n + e⁺, through the Cherenkov radiation of the final-state positron. We will ignore the small additional contribution from elastic scattering on electrons. The signature for fast time variations is limited by random fluctuations (shot noise) of the detected neutrino time sequence.
In IceCube [4], usually at most one Cherenkov photon from a given positron is detected, i.e., every measured photon signals the arrival time of a neutrino and in this sense provides superior signal statistics. In rare cases, two or more photons from a single neutrino are detected, depending on neutrino energy, allowing one to extract interesting spectral information from time-correlated photons [5], but this intriguing effect is not of direct interest here. The instantaneous signal count rate caused by IBD in a single optical module (OM) is [4]

r_IBD = n_p V_eff^γ ∫ dE_ν Φ_ν̄e(E_ν) ∫ dE_e N_γ(E_e) σ'(E_e, E_ν), (1)

where Φ_ν̄e(E_ν) is the ν̄e number flux per unit energy arriving at the detector, n_p = 6.18 × 10²² cm⁻³ is the number density of protons in ice (density 0.924 g cm⁻³), E_e is the final-state positron energy, V_eff^γ = 0.163 × 10⁶ cm³ the average effective volume for a single photon detection, N_γ(E_e) = 178 E_e/MeV is the energy-dependent number of Cherenkov photons, and σ'(E_e, E_ν) = dσ(E_e, E_ν)/dE_e is the IBD cross section, differential with respect to the positron energy.
We correct the positron energy, E_e → E_e + 1 MeV, because gamma rays from positron annihilation and neutron capture produce additional recorded energy [4]. Moreover, the IceCube rate from IBD is about 94% of the total, so we apply a correction factor

r = r_IBD/0.94 (2)

to account approximately for all channels. Every OM shows a background rate of around 540 Hz, including correlated events. Introducing an artificial dead time of t_dead = 250 µs after every hit reduces the background to a single rate of about 286 Hz at the cost of about 13% dead time. More specifically, the signal reduction by this dead-time effect is 0.87/(1 + r t_dead). Therefore, the overall SN signal rate is

R = N_OM r × 0.87/(1 + r t_dead), (3)

where N_OM = 5160 is the number of OMs in IceCube.
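The chain of steps above (per-OM IBD rate, the 94% channel correction, and the dead-time factor) can be put together as in the following sketch. The Gamma-like flux normalization and the simplified low-energy IBD cross section are placeholders standing in for the simulation output and the detailed cross section of Ref. [107], so the printed rate is illustrative only.

```python
import numpy as np

N_P_ICE  = 6.18e22    # protons per cm^3 of ice
V_EFF    = 0.163e6    # cm^3, effective volume per detected photon
N_OM     = 5160       # number of optical modules
T_DEAD   = 250e-6     # s, artificial dead time per hit
DELTA_NP = 1.293      # MeV, neutron-proton mass difference
M_E      = 0.511      # MeV, electron mass

def sigma_ibd_placeholder(E_nu):
    """Rough low-energy IBD cross section in cm^2 (placeholder for the
    detailed cross section of Ref. [107])."""
    E_e = np.clip(E_nu - DELTA_NP, 0.0, None)            # positron energy
    p_e = np.sqrt(np.clip(E_e**2 - M_E**2, 0.0, None))   # positron momentum
    return 9.5e-44 * E_e * p_e

def single_om_rate(E_nu, flux):
    """r_IBD per OM [Hz] for a nu_e-bar flux spectrum `flux`
    [cm^-2 s^-1 MeV^-1] given on the energy grid E_nu [MeV]."""
    E_rec = np.clip(E_nu - DELTA_NP, 0.0, None) + 1.0     # +1 MeV for gammas
    n_gamma = 178.0 * E_rec                               # Cherenkov photons
    integrand = flux * n_gamma * sigma_ibd_placeholder(E_nu)
    return N_P_ICE * V_EFF * np.trapz(integrand, E_nu)

def icecube_rate(E_nu, flux):
    """Total IceCube signal rate [Hz]: 94% IBD fraction and dead-time
    suppression applied as in the text."""
    r = single_om_rate(E_nu, flux) / 0.94
    return N_OM * r * 0.87 / (1.0 + r * T_DEAD)

# placeholder flux: Gamma-like spectrum with arbitrary normalization
E = np.linspace(1.0, 60.0, 300)
flux = 1e8 * (E / 15.0)**3 * np.exp(-4.0 * E / 15.0)      # cm^-2 s^-1 MeV^-1
print("IceCube rate ~ %.0f Hz" % icecube_rate(E, flux))
```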
In previous studies of the IceCube potential for detecting fast signal variations [25,26], these various corrections had not been included. Moreover, a simple approximate expression for the IBD cross section was used. As in our companion Physical Review Letter [28], we here use the IBD cross section provided in Ref. [107], which includes recoil, the neutron-proton mass difference, the positron mass, and nucleon form factors. If the ν̄e spectrum is described by a Gamma distribution (see Appendix B), the final-state positrons also follow such a distribution with good approximation. In Appendix B we give analytic approximation formulas for the spectral parameters of the detected positrons in terms of those of the primary ν̄e.
For the example of our 27 M_sun model, Fig. 2 (bottom panel) shows the single-OM IceCube rate r as defined in Eq. (2) without the dead-time effect. We show r for ν̄e, ignoring flavor oscillations, and also for ν̄x under the assumption of a full flavor swap ν̄e ↔ ν̄x. The maximum rate is around 170 Hz, somewhat larger than half of the background rate, so that r t_dead ∼ 0.04. In this case, dead-time effects reduce the overall signal to about 84% of the raw rate.
Incorporating dead time, the average single-OM background rate is 286 Hz. After multiplying with N_OM = 5160 we find an overall background rate of

R_bkgd = 1.48 × 10³ ms⁻¹. (4)

For a SN at 10 kpc, this is about twice a typical signal rate. The detectability of fast time variations is limited by random signal fluctuations (shot noise), which originate from both the signal itself and fluctuations of the background rate.
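For orientation, a back-of-the-envelope comparison of the SASI modulation with the shot noise per millisecond bin might look as follows; the signal rate and the modulation depth are assumed round numbers in the range discussed in the text, not values from a specific model:

```python
import numpy as np

R_BKGD = 1.48e3        # background counts per ms, Eq. (4)
R_SIG  = 0.5 * R_BKGD  # assumed signal rate at 10 kpc (~half the background)
MOD    = 0.10          # assumed ~10% SASI modulation of the signal
DT     = 1.0           # bin width in ms

counts = (R_SIG + R_BKGD) * DT      # expected counts per bin
shot   = np.sqrt(counts)            # Poisson fluctuation per bin
amp    = MOD * R_SIG * DT           # modulation amplitude per bin
print("per-bin signal-to-noise ~ %.1f" % (amp / shot))
# The per-bin significance is modest; the quasi-periodic ~80 Hz modulation
# becomes much more significant when many bins and cycles are combined,
# e.g. in a power spectrum (cf. Ref. [28]).
```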
As for Hyper-Kamiokande [8], a next-generation megaton water Cherenkov detector, we focus on the number of IBD events, expecting a correction of a few percent due to the other neglected channels. The expected rate is

R_HK = N_p ∫ dE_ν Φ_ν̄e(E_ν) σ(E_ν), (5)

where σ(E_ν) is the total IBD cross section and N_p = 4.96 × 10³⁴ is the number of protons for a 0.74 Mton Cherenkov detector [8]. The advantage relative to IceCube is that such a detector is essentially background free and, as a plus, will provide event-by-event energy information. We found that, although the expected rate as a function of time is almost three times lower than the IceCube rate [28], the Hyper-Kamiokande rate shows the same modulation of the signal with slightly lower amplitude. We also noticed (results not shown here) that convolving the expected signal rate with powers of the energy enhances the amplitude of the sinusoidal modulations. Figure 15 shows the expected IceCube and Hyper-Kamiokande rates for the 27, 20 and 11.2 M_sun SN progenitors, respectively, from left to right, at a distance of 10 kpc and for the ν̄e and ν̄x (assuming full swap by flavor conversion) fluxes. For the progenitors where SASI develops (27 and 20 M_sun progenitors), we show the signal as seen by a distant observer close to the SASI spiral plane where the signal modulations are large. The IceCube rate was defined in Eq. (3) and the Hyper-Kamiokande one in Eq. (5).
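The quoted proton number follows directly from the fiducial water mass; as a quick cross-check (a sketch using standard constants and the 0.74 Mton figure from the text):

```python
# Free protons (hydrogen nuclei) in 0.74 Mton of water:
AVOGADRO   = 6.022e23          # per mol
M_WATER    = 18.015            # g/mol
MASS_GRAMS = 0.74e12           # 0.74 Mton = 0.74e6 t = 0.74e12 g
n_protons  = 2 * MASS_GRAMS / M_WATER * AVOGADRO
print("N_p ~ %.2e" % n_protons)   # ~4.9e34, matching the value in the text
```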
B. Detection Perspectives
The relative amplitude of the SASI modulations is similar in the ν̄e and ν̄x channels, although the ν̄x rate is always lower than the ν̄e one. The origin of this effect is that, although the luminosities show different amplitudes of SASI modulation (Fig. 1), the ν̄x spectrum is less pinched than the ν̄e one (see the α values for ν̄e and νx in Fig. 2). As discussed in our earlier paper [28], in spite of shot noise, an observer located along an optimal direction will be able to detect SASI modulations out to a distance of 20 kpc (cf. Fig. 1 of Ref. [28]). Note that IceCube and Hyper-Kamiokande offer complementary information since IceCube will be more suitable for a SN at small distances where the shot noise is smaller. Hyper-Kamiokande will be more useful at larger distances because it is background free and the shot noise is dominated by fluctuations of the signal itself [28].
As shown in Fig. 7 (black line), SASI does not have the same intensity along all directions and can be very weak. In order to characterize where, on average, the modulations of the neutrino signal due to SASI are stronger, and therefore where an observer has more chances to detect them, we define for each angular position of the observer the standard deviation of the IceCube rate [28],

σ = [ (1/Δt) ∫_Δt dt (R(t) − ⟨R(t)⟩)² ]^(1/2), (6)

where ⟨R(t)⟩ is the time-dependent average over all directions and Δt is the considered time window. Figure 16 (upper panel) shows a sky map of σ/σ_max, with σ_max the maximum of σ during the first SASI window of the 27 M_sun SN progenitor, i.e., integrating over the time interval 120-250 ms. It is clear that the SASI modulations are on average stronger, and will be experimentally observable, for an extended region close to the SASI spiral plane, roughly corresponding to 60% of possible observer locations. The regions of strong SASI modulations visible in Fig. 16 correspond to the hottest and coldest regions in Fig. 9. Of course, for several SASI episodes or a strong drift of the main plane, some part of the SASI activity may become visible in a larger fraction of all observer directions at different times. Figure 15 (middle panel) shows the IceCube and Hyper-Kamiokande rates for the 20 M_sun SN progenitor along one of the directions where the SASI modulation of the neutrino signal is strong. Only one SASI phase occurs for this progenitor and SASI is somewhat stronger than for the 27 M_sun SN progenitor. Indeed, the detection rate for this progenitor is slightly higher than for the 27 M_sun case. Figure 16 (bottom panel) shows the sky map of σ/σ_max for the SASI episode 150-330 ms of the 20 M_sun star. Comparing the two panels of Fig. 16, we see that, as already pointed out in Sec. III B, SASI develops for the 20 M_sun progenitor in a different plane than for the 27 M_sun case. Therefore, the optimal observer directions to detect SASI effects are almost perpendicular in the two models.
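A possible way to build such a σ sky map from direction-dependent rate time series is sketched below; the array shapes, the toy rate data and the exact normalization are assumptions, and only the rms-over-the-SASI-window idea follows the definition above:

```python
import numpy as np

def sasi_sigma_map(rates, t, t_min, t_max):
    """rates: array of shape (n_directions, n_times) with the detection
    rate seen from each observer direction; t: time grid in seconds.
    Returns the rms deviation of each direction's rate from the
    all-direction average, over the window [t_min, t_max], as sigma/sigma_max."""
    window = (t >= t_min) & (t <= t_max)
    r = rates[:, window]
    r_avg = r.mean(axis=0)                      # time-dependent 4pi average
    sigma = np.sqrt(((r - r_avg) ** 2).mean(axis=1))
    return sigma / sigma.max()

# toy data: 200 directions, an 80 Hz SASI-like modulation whose amplitude
# depends on direction (strongest in an assumed "SASI plane")
t = np.linspace(0.0, 0.5, 5000)                              # s
amp = np.abs(np.cos(np.linspace(0.0, np.pi, 200)))           # direction weight
rates = 1000.0 * (1.0 + 0.1 * np.outer(amp, np.sin(2*np.pi*80.0*t)))
print(sasi_sigma_map(rates, t, 0.12, 0.25)[:5])
```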
In our earlier paper [28], we have considered the power spectrum of the IceCube rate for all three studied SN progenitors. One finds a strong peak at f ≈ 80 Hz for the 27 and 20 M_sun cases, corresponding to the typical SASI frequency. This frequency equals the one that describes large-amplitude fluctuations of the low-spherical-harmonics SASI amplitude vector in Fig. 2 (top right panel) of Ref. [58]. It is also basically understood from analytic and numerical studies of the linear growth regime of the SASI, and it is roughly the inverse of the advection timescale plus the sound travel timescale (see Eq. 2 of Ref. [28]). Both of these timescales depend only on the shock radius and the neutron-star radius [28,53].
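As an illustration of how such a spectral peak would be identified, the following sketch applies a simple power-spectrum estimate to a synthetic rate time series; the 80 Hz modulation, its 10% amplitude and the Poisson sampling are assumptions, not simulation output:

```python
import numpy as np

def rate_power_spectrum(rate, dt):
    """One-sided power spectrum of a detection-rate time series with the
    mean removed; dt is the sampling interval in seconds."""
    r = rate - rate.mean()
    power = np.abs(np.fft.rfft(r))**2
    freq = np.fft.rfftfreq(len(r), d=dt)
    return freq, power

# synthetic signal: steady rate plus an assumed 80 Hz SASI modulation
# and Poisson shot noise, sampled in 1-ms bins
rng = np.random.default_rng(0)
dt = 1e-3
t = np.arange(0.0, 0.5, dt)
mean_rate = 700.0 * (1.0 + 0.1*np.sin(2*np.pi*80.0*t))      # counts per bin
counts = rng.poisson(mean_rate)
freq, power = rate_power_spectrum(counts.astype(float), dt)
print("peak at ~%.0f Hz" % freq[np.argmax(power[1:]) + 1])
```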
VI. DISCUSSION AND SUMMARY
The first 3D full-scale hydrodynamical SN simulations with sophisticated neutrino transport are now available for three SN progenitors with masses of 11.2, 20 and 27 M_sun. In a series of papers, we have explored the neutrino emission properties of these models, in particular the dependence on observer direction and the time variability of the signal, and the opportunities to measure them in large-scale detectors such as IceCube and the future Hyper-Kamiokande.
The first important point was made in our companion Physical Review Letter [28] where we emphasized the appearance of pronounced SASI activity in our two heavier progenitors. The question if SASI indeed appears in 3D models or if it would be suppressed by convective overturn had been debated among SN modelers, but a consensus seems to be appearing that SASI is not generically suppressed in 3D. Of course, the appearance of SASI needs confirmation also by future 3D simulations that yield successful explosions (none of our 3D models has led to an explosion so far), and numerical simulations might still be different from what happens in real stars. Detecting SASI in the neutrino signal of the next nearby SN would go a long way in testing our hydrodynamical understanding of stellar core collapse. With IceCube and the future Hyper-Kamiokande, a galactic SN offers a realistic opportunity for such a detection at any distance up to 20 kpc, but the signal amplitude strongly depends on the observer direction relative to the main SASI plane of motion.
The main point of our present paper is to provide more details about the neutrino signals of these models and their directional dependence. We stress that observer-related quantities are weighted hemispheric averages with appropriate flux-projection effects as considered here and outlined in our Appendix A. We have also provided, in Appendix B, simple analytic approximation formulas, based on the IBD cross sections of Ref. [107], that allow one to obtain detection rates based on the parameters of an assumed Gamma distribution for the neutrino spectra. In order to translate 3D model output into detection rates, SN modelers would have to provide flavor-dependent luminosities as well as first and second energy moments that are based on such observer-related hemispheric averaging.
In our other companion paper [55] we have reported a new spherical-symmetry breaking effect in the form of LESA. The emission of νe and ν̄e, during the accretion phase, builds up a distinct dipole pattern such that deleptonization happens predominantly in one hemisphere. Therefore, the relative number fluxes of νe and ν̄e show a strong angular variation. It has not yet been explored what this means in the context of flavor oscillations with neutrino-neutrino refractive effects.
The direction of the LESA dipole and the plane of SASI sloshing and spiral modes are apparently not related-these are different effects that can coexist. In particular, while we find LESA for all three studied progenitors, SASI occurs only for the heavier ones. Any influence of SASI on the LESA dipole orientation seems to depend on the relative LESA and SASI dipole orientations, both randomly established for each progenitor. LESA survives phases of violent SASI activity, even though it may be somewhat masked by the latter. Further analysis on the LESA phenomenon and hydrodynamical simulations for more SN progenitors are needed to properly disentangle the two effects.
During the standing-shock accretion-powered phase of neutrino emission, several new effects develop in 3D in contrast to the traditional spherically-symmetric picture. This phase offers a rich variety of new hydrodynamical and neutrino-hydrodynamical phenomenology that has only begun to be explored. The theory of neutrino flavor conversion with neutrino-neutrino refraction needs to be further developed to understand its role during this phase. A future high-statistics observation by IceCube and Hyper-Kamiokande will provide opportunities to test such effects, and in particular the appearance of SASI modes.
Given the neutrino emission characteristics at the SN from a 3D simulation, we need to calculate the flux measurable by a distant observer, closely following Ref. [51]. Given a coordinate system in which the simulation has been performed (see Fig. 17), the observer is located at a large distance D ≫ R in an arbitrary direction Ω = (Θ, Φ). Here R is the radius of a sphere near the SN where the neutrino intensities are specified by the output of the code. We have chosen R = 500 km so that it is not necessary to apply coordinate transformations and redshift effects between the fluid frame and the distant observer. All quantities depend on time t, which we never show explicitly, and we neglect retardation effects between neutrinos emitted from different regions of the emitting surface.

[FIG. 17: The neutrino intensity is defined in terms of the location R of the emitting surface element dA and the angle ω = (θ, φ) of emission relative to the direction R. The observer is located at a distance D ≫ R in an arbitrary direction Ω = (Θ, Φ), and γ is the angle between the location of the radiating surface element and the direction of the observer.]
We assume that the neutrino intensity I(R, ω) is given in terms of the location R on the emitting surface. The angle ω = (θ, φ) describes the angular emission characteristic relative to the direction R on the surface. While the intensity is usually defined as the local spectral energy density of the neutrino radiation field for a given direction of motion ω times the speed of light, we here take it to be integrated over energy or over a specific energy bin. It is trivial to go back to spectral quantities (differential with regard to neutrino energy).
In order to obtain the energy flux at the location of the observer we have to integrate over solid angles dΩ over the surface of the source as seen by the observer and add up the flux contributions emitted by each surface element in the direction of the observer. A given area dA on the emitting surface has the transverse cross section, as seen by the observer, of cos γ dA, where γ is the angle between R (location of the surface element) and the direction of the observer (see Fig. 17), so that dΩ = cos γ dA/D². The observable flux is therefore

F_Ω = (1/D²) ∫ dA cos γ I(R, ω_Ω),

where ω_Ω is the emission direction toward the observer and F_Ω is the energy flux at distance D in the direction Ω. If the observer interprets this flux as originating from a spherically symmetric source, the measured flux corresponds to the 4π-equivalent luminosity of 4πD² times this expression, or

L_Ω = 4πD² F_Ω = 4π ∫ dA cos γ I(R, ω_Ω),

where the surface integral is over the part of the surface that is visible to the observer.

Our 3D hydrodynamical simulations are based on the ray-by-ray scheme [64], where in each angular zone one solves a 1D neutrino transport problem so that, within such a zone, the emission is axially symmetric and depends only on the zenith angle, θ, relative to the radial direction. In principle, I(R, θ) can be extracted from the numerical results, but this would require a vast amount of post-processing of huge data files. Instead, we fall back on a simple approximation where the directional distribution at each point of the radiating surface can be described by the diffusion approximation for a radial flux [51,108],

I(R, θ) = a + b cos θ,

with the coefficients a and b determined by E, the neutrino energy density, and F, the neutrino energy flux. In order to determine a and b we refer to the definitions of E and F in terms of angular integrals of the intensity I,

E = (2π/c) ∫ d cos θ I(R, θ) and F = 2π ∫ d cos θ cos θ I(R, θ),

where c is the speed of light, so that a = cE/4π and b = 3F/4π. To express both coefficients by the same quantity we assume F = f c E, so that I(R, θ) = (f⁻¹ + 3 cos θ) F(R)/4π. The value of f is determined by the requirement that for F = const. on the sphere, after integrating over the entire surface, one obtains the luminosity L = 4πR²F, which yields f = 1/2 and thus

I(R, θ) = (2 + 3 cos θ) F(R)/4π.   (A6)

Note that since I ≥ 0, Eq. (A6) is strictly valid only for cos θ ≥ −2/3, which includes inward-going radiation for cos θ < 0. Equation (A6) reproduces the limb-darkening effect; see Ref. [108] for more details. With this result, we finally obtain

L_Ω = ∫ dA cos γ (2 + 3 cos γ) F(R),

where we have inserted θ = γ because the zenith angle of local radiation emission that points in the direction of the observer is identical with γ (see Fig. 17). The value of γ at a surface point R depends, of course, on the observer direction Ω.
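To make the projection concrete, the following minimal numerical sketch (not part of the original analysis) integrates the limb-darkened intensity of Eq. (A6) over the hemisphere visible to the observer for a flux map F(R) given on a latitude-longitude grid. The grid resolution, the constant flux value, and the function name are illustrative assumptions; for a constant flux the routine should recover L = 4πR²F up to discretization error.

```python
import numpy as np

def observer_luminosity(F, theta_grid, phi_grid, R, obs_dir):
    """4pi-equivalent luminosity L_Omega for an observer in direction obs_dir,
    using the limb-darkened intensity I = (2 + 3 cos gamma) F / (4 pi)."""
    th, ph = np.meshgrid(theta_grid, phi_grid, indexing="ij")
    # outward normals of the surface elements on the emitting sphere
    n = np.stack([np.sin(th) * np.cos(ph),
                  np.sin(th) * np.sin(ph),
                  np.cos(th)], axis=-1)
    cos_gamma = n @ obs_dir                       # angle between element and observer
    dA = R**2 * np.sin(th) * np.gradient(theta_grid)[:, None] * np.gradient(phi_grid)[None, :]
    visible = cos_gamma > 0.0                     # only the hemisphere facing the observer
    # L_Omega = integral over the visible surface of cos(gamma) (2 + 3 cos(gamma)) F
    return np.sum((cos_gamma * (2.0 + 3.0 * cos_gamma) * F * dA)[visible])

# toy check: a constant flux map should give L close to 4 pi R^2 F for any observer
R = 5.0e7                                         # 500 km in cm
F0 = 1.0e39                                       # erg cm^-2 s^-1, hypothetical value
theta = np.linspace(1e-3, np.pi - 1e-3, 200)
phi = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
F = np.full((theta.size, phi.size), F0)
L = observer_luminosity(F, theta, phi, R, np.array([0.0, 0.0, 1.0]))
print(L / (4.0 * np.pi * R**2 * F0))              # ~1 up to discretization error
```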
In conclusion, the only information from the numerical SN model that we actually use is the radial, neutrino-energy-dependent flux F(R, ε_ν) on a given surface. From here, we perform projections along the direction of the observer for the energy-dependent fluxes before computing the observable L, ⟨ε_ν⟩, and ⟨ε_ν²⟩ for the neutrino spectral information in the form of an assumed Gamma distribution (see Appendix B).
Appendix B: Neutrino Spectra and Inverse Beta Cross Section
The quasi-thermal neutrino spectra produced at the SN can be well approximated in terms of a Gamma distribution, which has the normalized form [71,109]

f(ε) = [(α + 1)^(α+1) / (Γ(α + 1) A^(α+1))] ε^α exp[−(α + 1) ε/A],

where Γ is the Gamma function, A an energy scale, and α a shape parameter with α = 2 corresponding to a Maxwell-Boltzmann distribution. The spectra are usually "pinched," meaning that usually α > 2. For the moments of the distribution we use the notation ε_n ≡ ⟨ε^n⟩ = ∫ dε ε^n f(ε). The first two moments are

ε_1 = ⟨ε⟩ = A and ε_2 = ⟨ε²⟩ = [(α + 2)/(α + 1)] A².

This implies that the shape parameter is given in terms of the first two moments as

α = (2⟨ε⟩² − ⟨ε²⟩) / (⟨ε²⟩ − ⟨ε⟩²).   (B4)

The rms width of the Gamma distribution is ε_rms = (⟨ε²⟩ − ⟨ε⟩²)^(1/2) = A/√(α + 1). From the numerical data we extract the energy moments ε_1 and ε_2 and determine A and α accordingly.
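As a small illustration (assuming the parametrization above, in which A equals the mean energy), the sketch below evaluates the normalized Gamma spectrum and recovers (A, α) from numerically computed first and second moments via Eq. (B4); the test values A = 13 MeV and α = 3 are only examples.

```python
import numpy as np
from scipy.special import gamma as gamma_fn

def gamma_spectrum(eps, A, alpha):
    """Normalized Gamma ("alpha-fit") spectrum with mean energy A and shape alpha."""
    norm = (alpha + 1.0) ** (alpha + 1.0) / (gamma_fn(alpha + 1.0) * A ** (alpha + 1.0))
    return norm * eps ** alpha * np.exp(-(alpha + 1.0) * eps / A)

def params_from_moments(e1, e2):
    """Recover (A, alpha) from the first two energy moments <eps> and <eps^2>, Eq. (B4)."""
    alpha = (2.0 * e1**2 - e2) / (e2 - e1**2)
    return e1, alpha

# round-trip check with example values A = 13 MeV, alpha = 3
eps = np.linspace(0.0, 150.0, 30001)
f = gamma_spectrum(eps, 13.0, 3.0)
e1 = np.trapz(eps * f, eps)
e2 = np.trapz(eps**2 * f, eps)
print(np.trapz(f, eps), params_from_moments(e1, e2))   # ~1.0 and ~(13.0, 3.0)
```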
The main detection process is inverse beta decay (IBD), ν̄_e p → n e⁺, where the final-state positron shows up by its Cherenkov radiation. Therefore, the primary ν̄_e spectrum must be translated to the corresponding e⁺ spectrum. If we take positrons to be massless and ignore the proton-neutron mass difference as well as recoil effects, the cross section is

σ_naive = [G_F² cos²θ_C (1 + 3C_A²)/π] ε_ν²,

where G_F = 1.166 × 10⁻⁵ GeV⁻² is the Fermi constant, cos θ_C = 0.9746 ± 0.0008 the cosine of the Cabibbo angle, and C_A = −1.270 ± 0.003 the axial-vector coupling constant. With this simple ε_ν² scaling, the positron spectrum would also follow a Gamma distribution with average energy A_e = A_ν (3 + α_ν)/(2 + α_ν) and α_e = α_ν + 2. An example with A_ν = 13 MeV and α_ν = 3 is shown in Fig. 18.

A realistic IBD cross section requires taking into account recoil effects, the neutron-proton mass difference, the positron mass, and nucleon form factors. We use the results of Ref. [107] to derive the positron distribution, shown as a thick blue line in Fig. 18. We compare it with a Gamma distribution (dashed red line) with the same ⟨ε⟩ and ⟨ε²⟩ and find A_e = 17.86 MeV and α_e = 4.76. We show a normalized spectrum here; the average cross section is approximately 0.74 of the naive result. We conclude that the positron spectrum is also well approximated by a Gamma distribution. (Of course, the positron spectrum strictly begins only at ε_e = m_e, but the energy range below a few MeV is irrelevant in practice.)

What remains is to express the positron Gamma-distribution parameters in terms of those of the primary ν̄_e spectrum. We have derived analytic approximation functions for this transformation, expressing all energies in MeV. Typically, these approximation formulas are good to much better than 1% in our range of interest.
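For orientation only, a rough sketch of how an assumed Gamma spectrum translates into an IBD event rate with the naive ε² cross section quoted above is given below. The luminosity, spectral parameters, distance, and target size are illustrative placeholders; thresholds, detector efficiency, recoil corrections, and the realistic cross section of Ref. [107] are all ignored.

```python
import numpy as np

MeV = 1.602e-6          # erg
G_F = 1.166e-5 / 1.0e6  # Fermi constant in MeV^-2 (converted from GeV^-2)
HBARC = 197.327e-13     # MeV cm
# naive IBD cross section: sigma = SIGMA0 * eps^2, in cm^2 with eps in MeV
SIGMA0 = G_F**2 * 0.9746**2 * (1.0 + 3.0 * 1.270**2) / np.pi * HBARC**2

def ibd_rate(L_nu, A_nu, alpha_nu, D_cm, n_protons):
    """Very rough IBD event rate (events/s) for an assumed nu_e-bar Gamma spectrum."""
    eps = np.linspace(0.1, 200.0, 20000)                        # MeV
    f = eps**alpha_nu * np.exp(-(alpha_nu + 1.0) * eps / A_nu)
    f /= np.trapz(f, eps)                                       # normalized spectrum
    number_flux = L_nu / (4.0 * np.pi * D_cm**2 * A_nu * MeV)   # nu_e-bar cm^-2 s^-1
    sigma_avg = np.trapz(SIGMA0 * eps**2 * f, eps)              # spectrum-averaged sigma
    return number_flux * sigma_avg * n_protons

# e.g. ~32 kt of water (~2.1e33 free protons) at 10 kpc, placeholder luminosity
print(ibd_rate(L_nu=4.0e52, A_nu=13.0, alpha_nu=3.0,
               D_cm=10.0 * 3.086e21, n_protons=2.14e33))
```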
Bayesian zero-inflated spatio-temporal modelling of scrub typhus data in Korea, 2010-2014
Scrub typhus, a bacterial, febrile disease commonly occurring in the autumn, can easily be cured if diagnosed early. However, it can develop serious complications and even lead to death. For this reason, it is an important issue to find the risk factors and thus be able to prevent outbreaks. We analyzed the monthly scrub typhus data over the entire area of South Korea from 2010 through 2014. A 2-stage hierarchical framework was considered since the weather data used as covariates and the scrub typhus data have different spatial resolutions. At the first stage, we obtained administrative-level estimates for the weather data using a spatial model; at the second, we applied a Bayesian zero-inflated spatio-temporal model since the scrub typhus data include excess zero counts. We found that the zero-inflated model considering the spatio-temporal interaction terms improves fitting and prediction performance. This study found that low humidity and a high proportion of elderly people are significantly associated with scrub typhus incidence.
Introduction
Scrub typhus is an acute febrile disease spread by the bites of the larvae of trombiculid mites infected with Orientia tsutsugamushi, a bacterium similar to Rickettsia (Ogawa et al., 2002). The infection usually occurs in autumn when there is a high chance of contact with these chigger larvae (Cracco et al., 2000). It is commonly distributed in the Asia-Pacific area (Figure 1) within the so-called tsutsugamushi triangle region (McCrumb et al., 1957). In South Korea, within this triangle, an average of 8,329 patients per year were diagnosed with scrub typhus from 2010 to 2016.
Patients with scrub typhus have symptoms such as fever, headache, fatigue, swollen lymph nodes and muscle pain. They are easily cured by antibiotics (tetracycline or chloramphenicol) when these are administered at an early stage; however, patients who are not treated appropriately can develop complications that can lead to death, such as pneumonia, encephalitis, and multi-organ failure. Finding the risk factors for scrub typhus is important as this would contribute to the prevention of outbreaks of the disease.
Previous studies suggest that meteorological factors and the proportion of elderly people influence the number of scrub typhus cases (Ogawa et al., 2002; Kuo et al., 2011; Tsai et al., 2013; Li et al., 2014). Kuo et al. (2011) focused on the spatial distribution of scrub typhus in 350 administrative districts of Taiwan using the Spearman rank correlation coefficient. They showed that scrub typhus occurs more often when temperature increases and when rainfall, the normalized difference vegetation index, the proportion of farmers and the proportion of dry land decrease. Li et al. (2014) investigated the association between meteorological factors and the monthly scrub typhus incidence in Guangzhou, China for the period 2006-2012 through negative binomial regression. They found that temperature had a positive association and humidity a negative one, i.e. fewer infections with lower temperature and higher humidity. However, they only considered the temporal resolution, and their results might be applicable only in countries with similar weather patterns. Ogawa et al. (2002) analyzed the clinical characteristics of scrub typhus in Japan using a questionnaire approach involving healthcare workers in 1998, showing that females and people over 51 years old had a higher chance of acquiring scrub typhus.
In recent years, a few studies have examined the spatial or spatio-temporal distribution of scrub typhus. Kuo et al. (2011) conducted spatial clustering of scrub typhus incidence using Moran's I (Li et al., 2007) and LISA (Anselin, 1995). Wardrop et al. (2013) conducted a spatial analysis using a Poisson regression model with weather covariates. Wu et al. (2016) explored the spatio-temporal patterns of scrub typhus incidence to detect hotspots using clustering methods, while Noh et al. (2013) analyzed the scrub typhus incidence dataset in Korea by considering the spatio-temporal dependency structures within a Bayesian framework. However, neither Wu et al. (2016) nor Noh et al. (2013) considered the possible risk factors, which are particularly important with reference to policy decision-making. In addition, although Noh et al. (2013) considered space and time, they only took into account a single spatial dependency structure and a single temporal dependency structure over the entire domain investigated. A more versatile approach is needed as the spatial and temporal patterns of scrub typhus incidence could vary across space and time. For example, the spatial distribution of scrub typhus incidence this year might not be the same as the one in the past. In such cases, it is important that the interaction of space and time be considered in the statistical modelling in order to avoid distorted, or even wrong, results.
In this paper, we discuss the analysis of monthly scrub typhus incidence data for all administrative districts of South Korea, while also considering the complicated spatio-temporal dependency structures. To the best of our knowledge, this is the first study to adopt a spatio-temporal zero-inflated model for scrub typhus data.
Materials and Methods
We used meteorological and socioeconomic factors as covariates and proposed a Bayesian hierarchical model that builds flexible spatio-temporal structures by combining prior knowledge with the data at hand. We examined whether such a space-time interaction structure should be adopted in analyzing the data along with the overall spatial and temporal dependency structures. In South Korea, most of the scrub typhus incidence is concentrated in the south-western regions of the country and in the autumn season because of harvest and increased outdoor activities. Taking the whole country into account, most of the monthly incidence data had zero counts, which leads to over-dispersion. Therefore, we used a zero-inflated Poisson (ZIP) distribution (Lambert, 1992) to account for such data distribution characteristics. Since the meteorological data used as covariates are gathered from monitoring stations and the scrub typhus incidence data are collected based on administrative area, they have different spatial data resolutions, which is often called spatial misalignment (Gotway and Young, 2002). To overcome this problem, we applied a 2-stage framework (Choi et al., 2009). At the first stage, we obtained weather estimates for all administrative districts through a spatial weather model, to be used as inputs for the next stage. At the second stage, we applied the spatio-temporal ZIP model to the scrub typhus data to investigate the associations between the meteorological factors and scrub typhus incidence. Finally, the performance of the proposed model was compared to that of competing models.
Study region and data
We used monthly datasets in South Korea from 2010 to 2014 covering 251 administrative districts and 60 months. The basic characteristics of all variables are shown in Table 1. The monthly scrub typhus dataset obtained from the Korea Centers for Disease Control and Prevention (http://is.cdc.go.kr/dstat) contains the number of patients diagnosed with scrub typhus per month in each administrative area. The proportion of zero counts in this dataset is about 73%, showing that our data are highly zero-inflated. For this reason, we summarized the incidence data with and without zero counts in Table 1. The daily precipitation, temperature, and humidity datasets were obtained from the Korea Meteorological Administration (http://data.kma.go.kr). Precipitation and temperature data were collected from 487 monitoring stations (Figure 2A), and humidity from 95 monitoring stations (Figure 2B). Monthly averaged values were used for the analysis. Because the number of scrub typhus cases is related to the population and the proportion of elderly people (age 65 and over), we also considered these factors as an offset and a covariate. These datasets were obtained from the Korean Statistical Information Service (http://kosis.kr). In the Korean Government system, the total population dataset is collected monthly, while the elderly population dataset is collected on an annual basis. Thus, the monthly variation of the proportion of elderly people had to be inferred.
Statistical modelling
We proposed a 2-stage hierarchical framework to overcome the different spatial data resolutions. At the first stage, we predicted weather values for all administrative districts using a spatial model in which projected coordinates of longitude, latitude and weather data are covariates. At the second stage, we fitted a Bayesian spatio-temporal zero-inflated model to the incidence data, using the predicted weather values and the proportion of elderly people as covariates. The detailed framework is shown in Figure 3.
Stage one: spatial modelling for meteorological data. We assumed the following spatial model for each weather variable:

W(s,t) = Z(s,t)ᵀ γ + Ψ(s,t),   (Eq. 1)

where W(s,t) is the observed weather value at monitoring station s and time t, and Z(s,t) the vector of covariates with the corresponding coefficient vector γ. The vector Ψ with elements Ψ(s,t) explains the spatial effects and measurement error with the covariance matrix Σ, for which a Matern spatial covariance structure (Banerjee et al., 2014) provided the best prediction performance. Based on an exploratory data analysis, we used projected coordinates as covariates for precipitation data. Additional covariates included precipitation for temperature data, and temperature as well as precipitation for humidity data. The parameters were assumed to follow non-informative prior distributions to make the most use of the data at hand and to avoid bias. We obtained the predicted weather values at each time and location of interest, i.e. a kriging (Banerjee et al., 2014) approach. Here, we estimated the true weather values at about 1,000 locations for each time point (Figure 2C). The estimated weather value at administrative area i in month t was obtained by averaging the estimates of the true weather values within area i and was used as input for the second stage.

Stage two: zero-inflated spatio-temporal modelling for scrub typhus incidence data. The incidence of scrub typhus for administrative area i and month t, y_it, follows a zero-inflated Poisson distribution:
P(y_it = 0) = p_it + (1 − p_it) exp(−N_it θ_it),
P(y_it = k) = (1 − p_it) (N_it θ_it)^k exp(−N_it θ_it)/k!,  k = 1, 2, …,   (Eq. 2)

where p_it is the probability of structural zeros and N_it the population. The logit(p_it) is a linear combination of precipitation W_1it, temperature W_2it, humidity W_3it, and the proportion of elderly people X_it,

logit(p_it) = α_0 + α_1 W_1it + α_2 W_2it + α_3 W_3it + α_4 X_it.

The corresponding coefficients α_j, j = 0, 1, …, 4 indicate the effects of the covariates. The log relative risk log(θ_it) was modelled with fixed effects and space-time random effects:

log(θ_it) = β_0 + β_1 W_1it + β_2 W_2it + β_3 W_3it + β_4 X_it + u_i + v_i + l_t + k_t + ϕ_it,

where the random effects u_i ~ N(0, σ_u²) and l_t ~ N(0, σ_l²) are the spatially and temporally unstructured terms, respectively. The spatially correlated random effect v_i follows a conditional autoregressive (CAR) model (Besag, 1974). Generally speaking, a CAR model is constructed based on neighbourhood information. The mean of a specific area is defined as the weighted average of its neighbours, and the variance is inversely proportional to the number of neighbours. The temporally correlated random term k_t follows a first-order autoregressive AR(1) process (Yule, 1921). Knorr-Held (2000) proposed four different types of the spatio-temporal interaction term ϕ_it, and its covariance can be expressed as Σ_ϕ = Σ_S ⊗ Σ_T using the Kronecker product, where the matrices Σ_S and Σ_T indicate the covariance matrices of space and time, respectively. In our data, we found that the temporal pattern for each area was not identical to that of other areas and that the spatial pattern varied from year to year, supporting the use of a space-time interaction term. We used non-informative priors for the parameters: Normal(0, 10⁵) for the coefficients β_j and α_j, j = 0, 1, …, 4, and Uniform(0, 100) for the standard deviations σ_u, σ_l, σ_v, σ_k, and σ_ϕ.
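As a rough illustration of the likelihood implied by Eq. 2 and the two linear predictors above (the actual fits in this study were run in WinBUGS), the following sketch evaluates the zero-inflated Poisson log-likelihood for given coefficients and random effects; the array shapes and the function name are assumptions made for the example.

```python
import numpy as np
from scipy.special import gammaln

def zip_loglik(y, N, W, X, alpha, beta, random_effects):
    """Log-likelihood of the zero-inflated Poisson model sketched above.

    y, N : observed counts and population offsets, shape (n_area, n_month)
    W    : weather covariates, shape (3, n_area, n_month)
    X    : proportion of elderly people, shape (n_area, n_month)
    alpha, beta : coefficient vectors (intercept + 4 covariates)
    random_effects : u_i + v_i + l_t + k_t + phi_it, shape (n_area, n_month)
    """
    covs = np.stack([np.ones_like(X), W[0], W[1], W[2], X])      # design "cube"
    logit_p = np.tensordot(alpha, covs, axes=1)
    p = 1.0 / (1.0 + np.exp(-logit_p))                           # P(structural zero)
    log_theta = np.tensordot(beta, covs, axes=1) + random_effects
    mu = N * np.exp(log_theta)                                   # Poisson mean
    log_pois = y * np.log(mu) - mu - gammaln(y + 1.0)            # Poisson log-pmf
    ll = np.where(y == 0,
                  np.log(p + (1.0 - p) * np.exp(-mu)),           # mixture at zero
                  np.log1p(-p) + log_pois)
    return ll.sum()
```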
The WinBUGS statistical package (http://www.mrcbsu.cam.ac.uk/software/bugs) was used. Two chains with different initial values were run to check sample convergence. Every 50th sample was extracted as a posterior sample. After the burn-in, 2,500 samples from each chain, for a total of 5,000 samples, were used for parameter estimation. We checked convergence using trace plots, the Gelman-Rubin statistic (Gelman et al., 1992), and autocorrelation plots. All outcome figures in this paper were produced with the open-source program R (https://www.r-project.org).
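For reference, a minimal sketch of the Gelman-Rubin diagnostic used here (the potential scale reduction factor for a single parameter monitored in several chains) could look as follows; the two simulated chains are placeholders, not output from the actual model.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for one parameter.

    chains : array of shape (m, n) with m chains of n post-burn-in draws.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()          # mean within-chain variance
    B = n * chain_means.var(ddof=1)                # between-chain variance
    var_hat = (n - 1) / n * W + B / n              # pooled posterior variance estimate
    return np.sqrt(var_hat / W)

# e.g. two well-mixed chains should give R-hat close to 1
rng = np.random.default_rng(0)
print(gelman_rubin(rng.normal(size=(2, 2500))))
```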
We additionally considered seven competing models. All models (models 1-8) are listed in the Appendix. Models 1 to 4 are Poisson models, and models 5 to 8 are ZIP models. Models 1 and 5 only consider covariates. Models 2 and 6 additionally contain spatially and temporally uncorrelated terms. Spatially and temporally correlated random terms were added in models 3 and 7. Finally, in models 4 and 8, a spatio-temporal interaction term was additionally considered. We investigated the performance of the proposed model (model 8) and of the other competing models (models 1-7).
Weather data results
To examine the prediction performance of the proposed spatial model, we compared the values observed at the monitoring stations and the predicted values for the administrative district in which each station is located. We chose three administrative areas that contain weather monitoring stations: Inje-gun in Gangwon Province, Youngdong-gun in Chungcheong Province and Mungyeong-si in Gyeongsangbuk Province. Figure 4 shows that most points are located close to the line y = x, indicating that the predicted weather values are similar to the observations from the stations. We also found that the observed and predicted values for the other areas were similar. Thus, the proposed spatial weather model fits the data well.

Table 2 summarizes the model performances. The MSPE values of models 4 and 8 decreased to about one-sixtieth and one-eightieth of those of models 1 and 5, respectively. The DIC values also decreased dramatically for the spatio-temporal models compared with the non-spatio-temporal ones. Moreover, the space-time interaction terms not only provide smaller DIC values but also smaller MSPE values. Overall, the Poisson models (models 1-4) have larger MSPE and DIC values than the ZIP models (models 5-8). As more complicated spatio-temporal structures are included in a Poisson model, its performance becomes more similar to the performance of the ZIP model with the corresponding spatio-temporal dependency structure. Therefore, space-time random components explain the over-dispersion in Poisson regression models. Since model 8 performs best of the eight models in terms of MSPE and DIC values, the ZIP model with the space-time interaction term was deemed more suitable for our data than the other models.
Scrub typhus data results
We also compared the empirical probability of zero counts from the real data with the estimated probability from the models. Around 73% of the incidence had zero values. In Table 2, the estimate of the probability of zero counts was 0.157 in the simple Poisson model (model 1), but 0.739 in model 8, which is almost the same as the observed probability of zero. Therefore, using a spatio-temporal ZIP model significantly improves the ability to capture zero-inflation. The other models (models 2-7) had similar values because the space-time random terms explain most of the zero-inflation.
The parameter estimates of the best model, model 8, are shown in Table 3. Only the coefficients of humidity and proportion of elderly people are statistically significant since the 95% credible intervals did not contain zero. The regression coefficient of humidity was negative and that of proportion of elderly people positive. The estimated coefficients of precipitation and temperature were positive and negative, respectively, but not statistically significant.
We compared the observed values with the predicted values from model 8. In Figure 5, the observed incidence of scrub typhus and the predicted values fall close to the regression line y = 0.94x + 0.15. Figure 6 presents the time-series plots of the observed data and predicted values for two selected areas: Gwanak-gu in Seoul City and Ulju-gun in Ulsan City, which have the highest incidence within Seoul and within South Korea, respectively. Figure 7 presents the observed and predicted maps of the incidence in October 2013 and October 2014. These comparisons show that the predicted values were similar to the observed data.
Discussion
Investigating the relationship between weather factors and scrub typhus led to the result that humidity is a significant risk factor, but a negative one: we found that the number of scrub typhus cases increases as humidity decreases. This negative association can be explained by the fact that the autumn season is relatively dry and the incidence is mostly centred at that time. This result is in line with the negative correlation between relative humidity and scrub typhus incidence shown by Li et al. (2014) and Wu et al. (2016). Since both these studies and our own made use of the scrub typhus incidence during all seasons, the effect of humidity on the disease might be different if the incidence data were restricted to the autumn season.
In addition, we showed that the higher the proportion of elderly people is, the more scrub typhus occurs, which is supported by Ogawa et al. (2002). Since scrub typhus commonly occurs in farmland and farm workers are mostly aged over 60 in South Korea, this result seems to be reasonable. Also, there is a high chance that older people have a less vigorous immune system and therefore are more at risk of scrub typhus infection than young people.
A negative binomial zero-inflated spatio-temporal model could be considered as an alternative for our data; however, since it has larger DIC and MSPE values (DIC = 25,553 and MSPE = 6.44) than the proposed Poisson zero-inflated spatio-temporal model, the latter is preferable in terms of model performance.
Since most of the hotspots are in rural areas, interventions specified for those areas can effectively prevent scrub typhus. A high proportion of the residents in rural areas are senior citizens who are likely to lack information on scrub typhus. Therefore, a key approach would be to provide education to all residents in the endemic areas before peak season. As an example, Koryung County, South Korea, effectively prevented the disease by educating its residents, especially the elderly. People who had experienced scrub typhus were invited as guest speakers and as soon as the first case of the disease occurred, information went out. In addition, the government of Koryung County distributed tick repellent and protective clothing to the residents. In doing so, the incidence of scrub typhus in Koryung County decreased compared to previous years. The prevention policies should especially be focused on the autumn season due to ensuing harvest and increased outdoor activities.
All models in this study adopted Bayesian methods. In spite of a high computational cost, they have advantages over frequentist methods. Unlike confidence intervals in frequentist inference, which are difficult to interpret, credible intervals in Bayesian inference are straightforward and easy to interpret. Especially in spatial modelling, the Bayesian framework makes understanding based on hierarchical models highly intuitive. Combining prior knowledge with real-world data is another benefit of Bayesian inference. Here, careful selection of appropriate priors is required, and we used non-informative priors. To understand how the prior distributions influence the results, we conducted a sensitivity analysis using inverse gamma distributions for the variances. These prior distributions provided very similar results.
We had a minor spatial-support (misalignment) problem in using the weather data as covariates in this study. As a solution, we used a two-stage model that can provide estimates for locations without monitoring stations from a relatively small number of observations. Owing to this strength, the statistical analysis could be conducted with complete covariates to identify the significant risk factors, and based on these results the disease can be prevented and dealt with more effectively. Several further tasks remain to be done. First, the model for weather data in the first stage is limited to a spatial model in this study; adopting a spatio-temporal model for the meteorological data might improve the predictive performance, and combining weather observations with predicted values from numerical models might enhance it further. Second, we expect to be able to analyze the data using sex- and age-adjusted individual patient data in the future, but we were unable to obtain this information for the people diagnosed with scrub typhus in this study. Third, because scrub typhus occurs mostly in the autumn, analyzing only autumn data but on a daily basis might help identify detailed trends. In addition, conducting a spatio-temporal clustering might be helpful in deriving interventions for each season and could lead to a simulation study to investigate the effects of the interventions.
Conclusions
This study is the first attempt to use a Bayesian spatio-temporal ZIP model to investigate the association between the incidence of scrub typhus in Korea and the weather and the proportion of people older than 65 years. Our spatio-temporal model dramatically improved model performance, which supports the use of spatio-temporal models for data with spatio-temporal dynamics. Given that many epidemiological data contain spatial and temporal dependencies, our model could serve as a template for the use of spatio-temporal models with epidemiological data.
Puerarin pre-conditioning on the expression levels of CK-MB, cTnI and inflammatory factors in patients undergoing cardiac valve replacement
The effect of puerarin preconditioning on the expression levels of nuclear factor κB (NF-κB), interleukin 6 (IL-6), interleukin 8 (IL-8), troponin I (cTnI), and creatine kinase isoenzyme MB (CK-MB) in the neutrophils of patients undergoing cardiac valve replacement under cardiopulmonary bypass (CPB) was evaluated. We enrolled 50 patients scheduled for cardiac valve replacement and randomly assigned them to either a puerarin or a control group. Puerarin was dissolved in 10 ml of normal saline before CPB and administered by intravenous infusion to patients in the puerarin group; the control group was administered an equivalent amount of saline. We used flow cytometry to determine the expression levels of NF-κB, IL-6 and IL-8 in neutrophils and an auto-chemistry analyzer to determine the serum levels of cTnI and CK-MB before anesthesia induction (T0), 30 min after aortic declamping (T1), 4 h after aortic declamping (T2), and 8 h after aortic declamping (T3). We found that the mean serum cTnI and CK-MB levels of the puerarin group tended to decrease with time. The positive rates of NF-κB, IL-6 and IL-8 at the different time-points were lower in patients of the puerarin group than in those of the control group, and the differences at T3 were statistically significant. The clinical manifestations of patients in the puerarin group after the operation were better than those in the control group (P<0.05). We found that the expression levels of NF-κB, IL-6 and IL-8 were positively correlated with the levels of CK-MB and cTnI (P<0.05). Puerarin preconditioning can reduce NF-κB activation and the overexpression of IL-6 and IL-8 in neutrophils, and it inhibits the release of the myocardial enzymes cTnI and CK-MB, reflecting myocardial cell protection. Puerarin thus seems to improve the safety and efficacy of valve replacement operations.
Introduction
Ischemia-reperfusion injury refers to the phenomenon whereby tissue injury is aggravated when blood perfusion is restored after an ischemic event; it usually affects myocardial tissues after heart surgery, percutaneous transluminal angioplasty or thrombolytic therapy. Myocardial ischemia-reperfusion injury is related to the overexpression of certain cytokines and adhesion molecules in local tissues (1). The expression of the inflammatory factor nuclear factor κB (NF-κB) is increased at the beginning of myocardial ischemia-reperfusion (2), together with interleukin 6 (IL-6) and IL-8 (3,4). For a safe valve replacement operation, it is critical to protect the myocardium while carrying out the open-heart surgery under cardiopulmonary bypass (CPB). Clinically, myocardial ischemic preconditioning, post-conditioning and medical treatments can all relieve the reperfusion injury (5). Puerarin is an isoflavone compound extracted from the kudzu root; its protective effects on cardiovascular vessels have been confirmed through various pharmacological actions (6). This study was designed to evaluate the effects of puerarin preconditioning on the expression levels of creatine kinase isoenzyme MB (CK-MB), troponin I (cTnI) and inflammatory factors in patients undergoing cardiac valve replacement.
Patients and methods
General data. We enrolled 50 patients undergoing cardiac valve replacement surgery with CPB in the Department of Cardiovascular Surgery from March 2017 to September 2017, and randomly separated them into a puerarin (n=25) and a control group (n=25). All patients had ASA II or III classification levels. We excluded patients with serious primary diseases of important organs and patients with surgical contraindications. The patients' baseline age, sex, ASA levels and other general characteristics were comparable (P>0.05).
The present study was approved by the Ethics Committee of The Second Affiliated Hospital of Zhengzhou University (Zhengzhou, China) and signed informed consents were obtained from all participants.
Research methods. After entering the operating room, the patients underwent routine anesthesia induction and maintenance. Prior to CPB establishment the patients in the puerarin group were administered an intravenous injection of 4 mg/kg of puerarin (Zhejiang Zhenyuan Pharmaceutical, Zhejiang, China), while the patients in the control group were administered an injection of an equivalent volume of normal saline. The central venous pressure (CVP) and other life signs were closely monitored.
Test index and methods. Radial artery blood samples were extracted before anesthesia induction (T0), 30 min after aortic declamping (T1), 4 h after aortic declamping (T2) and 8 h after aortic declamping (T3). The samples were centrifuged at 3,200 x g for 10 min, the supernatants extracted and kept at -80˚C until further use. A 7600 auto-chemistry analyzer (Hitachi, Ltd., Tokyo, Japan) was used to determine the levels of CK-MB and cTnI in serum. A CyFlow Cube 8 flow cytometry (Sysmex Europe GmbH, Norderstedt, Germany) was used to determine the expression levels of inflammatory factors NF-κB, IL-6 and IL-8 in neutrophils. We recorded the following monitoring indexes during and after the operations: Aorta clamping time, CPB time, electric defibrillation time, and contractility score 24 h after operation, assisted respiration time after operation and ICU hospitalization time.
Statistical analysis. We used the IBM SPSS 19.0 statistical software (IBM Corp., Armonk, NY, USA) for data analysis. Measurement data are expressed as the mean ± standard deviation (SD). We analyzed comparisons between two groups using the t-test, and comparisons among multiple groups using analysis of variance (ANOVA) followed by the least significant difference (LSD) test. Correlations were established using Pearson's correlation analysis. P<0.05 was considered to indicate a statistically significant difference.
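For illustration only, the comparisons described above (two-group t-test, ANOVA, and Pearson correlation) could be reproduced along the following lines; the arrays are placeholder values, not the study measurements, and the group layout is an assumption made for the example.

```python
import numpy as np
from scipy import stats

# placeholder measurements for two groups of 25 patients each
rng = np.random.default_rng(1)
puerarin = rng.normal(10.0, 2.0, 25)     # e.g. a marker level at one time-point, puerarin group
control = rng.normal(12.0, 2.0, 25)      # same marker, control group
third = rng.normal(11.0, 2.0, 25)        # hypothetical third group for the ANOVA example

t_stat, p_two_groups = stats.ttest_ind(puerarin, control)         # two-group comparison
f_stat, p_anova = stats.f_oneway(puerarin, control, third)        # comparison across groups
r, p_corr = stats.pearsonr(puerarin, 5.0 + 0.3 * puerarin + rng.normal(0.0, 1.0, 25))  # correlation

print(p_two_groups < 0.05, p_anova < 0.05, round(r, 2))
```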
Results
The differences revealed by the comparison of general data of patients in the two groups before the operation were not statistically significant (P>0.05) ( Table I).
Comparison of indexes between the two groups before and after the operation. We found that patients in the puerarin group had shorter aorta clamping, CPB, electric defibrillation, and operation times than patients in the control group, but the differences were not significant (P>0.05). Compared with the control group, the mean contractility score 24 h after the operation was significantly improved in the puerarin group, and the assisted respiration and ICU hospitalization times were significantly shorter (P<0.05) (Tables II and III).
Comparison of serum CK-MB and cTnI levels between the two groups. We found the CK-MB and the cTnI levels of patients in the two groups increased significantly at T1. With time, the serum myocardial injury markers in patients of the puerarin group decreased gradually, and the levels at T2 and T3 were significantly lower than that at T1. The control group levels were significantly higher than those of the puerarin group after T1 (P<0.05) (Table IV).
Mean expression levels of inflammatory factors NF-κB, IL-6 and IL-8 of the two groups. The positive expression levels of NF-κB, IL-6 and IL-8 in neutrophils in both groups increased over time after T0. However, the values in the control group were increased to a significantly higher degree than those in the puerarin group. The expression levels of NF-κB, IL-6 and IL-8 at different time-points in the patients of the puerarin group were lower than those in patients of the control group, and the difference was statistically significant at T3 (P<0.05). These results suggest that puerarin preconditioning can inhibit the release of inflammatory factors IL-6 and IL-8 during the process of ischemia-reperfusion injury ( Table V).
Analysis of the correlation between inflammatory factors and the myocardial injury markers.
Pearson correlation analysis results showed that inflammatory factors NF-κB, IL-6 and IL-8 in the serum of all patients undergoing cardiac valve replacement were positively correlated with CK-MB and cTnI (P<0.05) (Table VI).
Discussion
Ischemia-reperfusion injury occurs often in the process of CPB during cardiac surgery. It can lead to heart failure and sudden death in serious cases (7). Clinically, myocardial cell injury is graded by monitoring the perioperative dynamic changes in serum CK-MB and cTnI after the CPB procedure (8). The mechanisms of myocardial ischemia-reperfusion injury, caused by aorta clamping and declamping during the cardiac valve replacement operation under CPB, are related to the activation, adhesion, accumulation and release of inflammatory mediators in neutrophils. The expression of NF-κB, as a transcriptional regulator of inflammatory genes, is increased (9,10). The generation of IL-6 occurs as an acute reaction, and it can stimulate the expression of inducible nitric oxide synthase, increase the level of myocardial cGMP and reduce the level of myocardial cAMP (11)(12)(13)(14). IL-8 is the most powerful chemotactic factor of neutrophils. It can strengthen chemotactic activity and stimulate the release of large amounts of inflammatory mediators, while inhibiting apoptosis and prolonging inflammation (15,16). Research on ischemia/reperfusion injury protection includes the application of cardiac arrest techniques, cardiac cryogenics, ischemic preconditioning and pharmacological preconditioning (17,18). Puerarin exerts various effects in cardiovascular diseases: it can dilate coronary arteries, relax vessels, improve ischemic myocardium metabolism, slow down the heart rate, and reduce myocardial ischemia (19). Animal experiments have shown that puerarin reduces myocardial ischemia-reperfusion injury (20). This study evaluated the effects of puerarin preconditioning on acute myocardial ischemia-reperfusion injury due to CPB during cardiac valve replacement. Our findings confirmed that the surgical procedure increases the levels of CK-MB and cTnI in patients, as seen by the increased levels in both groups at T1. We showed that these levels decreased over time in the puerarin group. The levels of NF-κB, IL-6 and IL-8 were lower in the puerarin group than in the control group (with a significant difference at T3). Collectively, this suggests that puerarin preconditioning can inhibit the release of the inflammatory factors IL-6 and IL-8 in the process of ischemia-reperfusion injury. Our results also show that after puerarin preconditioning, the clinical markers of patients after the operation were better than the same markers in the control group, suggesting that puerarin can protect myocardial cells and promote their recovery after the operation. The reduction in the levels of inflammatory cytokines in serum was consistent with the reduction in the levels of the myocardial injury markers. Moreover, our correlation analysis revealed that the inflammatory cytokines NF-κB, IL-6 and IL-8 in patients undergoing cardiac valve replacement were positively correlated with CK-MB and cTnI (P<0.05).
Our results indicate that puerarin preconditioning before cardiac valve replacement can relieve myocardial ischemiareperfusion injury by effectively inhibiting the expression of inflammatory factors and reducing the release of myocardial enzymes.
In conclusion, puerarin preconditioning can reduce the NF-κB activation and overexpression of IL-6 and IL-8 in neutrophils, and it can inhibit the release of myocardial enzymes cTnI and CK-MB, suggesting myocardial protective effects. Further studies with puerarin are warranted given its potential clinical application value.
The Transcription Factor StuA Regulates the Glyoxylate Cycle in the Dermatophyte Trichophyton rubrum under Carbon Starvation
Trichophyton rubrum is the primary causative agent of dermatophytosis worldwide. This fungus colonizes keratinized tissues and uses keratin as a nutritional source during infection. In T. rubrum–host interactions, sensing a hostile environment triggers the adaptation of its metabolic machinery to ensure its survival. The glyoxylate cycle has emerged as an alternative metabolic pathway when glucose availability is limited; this enables the conversion of simple carbon compounds into glucose via gluconeogenesis. In this study, we investigated the impact of stuA deletion on the response of glyoxylate cycle enzymes during fungal growth under varying culture conditions in conjunction with post-transcriptional regulation through alternative splicing of the genes encoding these enzymes. We revealed that the ΔstuA mutant downregulated the malate synthase and isocitrate lyase genes in a keratin-containing medium or when co-cultured with human keratinocytes. Alternative splicing of an isocitrate lyase gene yielded a new isoform. Enzymatic activity assays showed specific instances where isocitrate lyase and malate synthase activities were affected in the mutant strain compared to the wild type strain. Taken together, our results indicate a relevant balance in transcriptional regulation that has distinct effects on the enzymatic activities of malate synthase and isocitrate lyase.
Introduction
Efficient nutrient assimilation by pathogenic fungi during infection is crucial for survival [1,2]. The dermatophyte Trichophyton rubrum infects and degrades keratinized tissues, such as nails and skin, by breaking down proteins into free amino acids and peptides [3][4][5]. Fungi assimilate these products as carbon sources via membrane transporters [6,7]. Upon infection, the fungus adapts to the host milieu using molecular mechanisms that facilitate its metabolic flexibility, colonization, and invasion [8]. Orchestrated mechanisms of gene modulation and interactions with transcription factors govern the balance of metabolic reprogramming. This balance contributes to fungal fitness and pathogenicity and represents an attractive target for prospecting antifungal drugs [6,9].

During metabolic adaptation in fungal pathogenesis, the glyoxylate cycle is pivotal for the use of alternative carbon sources. Under glucose deprivation, fungal pathogens undergo a metabolic transition toward the glyoxylate cycle. This adaptation enables them to assimilate two-carbon compounds [10]. Fungal cells mobilize fatty acids to generate acetyl-CoA, which activates the glyoxylate cycle. In this metabolic cycle, isocitrate lyase converts isocitrate into glyoxylate, and malate synthase converts glyoxylate into malate. Both enzymes are exclusive to this cycle [11,12]. Malate is then driven toward oxaloacetate production and continually reacts with acetyl-CoA to maintain the cycle. Succinate is shuttled to the tricarboxylic acid (TCA) cycle in mitochondria, where it is metabolized into cycle intermediates to generate oxaloacetate. Oxaloacetate molecules transported to the cytosol trigger gluconeogenesis and reestablish the glucose supply in fungal cells [1].

Transcription factors (TFs) play a pivotal role in signaling pathways by orchestrating mechanisms that either activate or suppress molecular responses based on the specific biological context to which the fungus is exposed [13][14][15]. The APSES family of transcription factors, which belongs to the basic helix-loop-helix (bHLH) class and includes Asm1p, Phd1p, Sok2p, Efg1p, and StuA, is unique to fungi and plays an essential role in regulating a wide range of processes, including fungal growth, virulence, pathogenicity, and metabolism [16][17][18]. In Aspergillus fumigatus, StuA is critical for morphogenesis and the biosynthesis of secondary metabolites [19,20]. Additionally, studies involving null mutants of stuA in the dermatophyte Arthroderma benhamiae have demonstrated its involvement in keratin degradation and sexual development [21]. Our recent studies demonstrated the role of StuA in several aspects involved in the virulence of T. rubrum [22]. In a previous RNA sequencing analysis [23] of transcripts generated by the ∆stuA strain during growth on glucose or keratin, the impact of StuA deletion on central carbon metabolism was evident, reducing the transcript levels of the genes encoding the glyoxylate cycle enzymes [23]. This result raised the hypothesis that StuA might also regulate the enzymatic activity of glyoxylate cycle enzymes during fungal-host interaction [24][25][26].

Post-transcriptional regulation through alternative splicing (AS) allows the production of various protein isoforms in response to physiological requirements and environmental cues, often serving as a driver of phenotypic diversity within the eukaryotic cell proteome [27][28][29]. Intron retention (IR) is one of the most common AS events in fungi [30,31], and it may be relevant in the regulatory mechanisms of fungal physiology, adaptation to fungal niches, pathogenicity, and drug resistance [13,32,33].

In this study, we hypothesized that the transcription factor StuA plays a significant role in regulating essential enzymes of the glyoxylate cycle, depending on the carbon source or in an infection-like scenario. We also investigated the possibility of post-transcriptional regulation through IR events in the transcripts of the malate synthase and isocitrate lyase genes. Post-transcriptional regulation is a crucial mechanism that facilitates fungal adaptation, particularly under glucose-depleted conditions. Our findings suggested that StuA regulates the transcription of the main enzymes of the glyoxylate cycle. We also showed that modulation of the isoforms generated by the AS of an isocitrate lyase gene depended on the culture conditions.
Reannotation of Isocitrate Lyase as a Single Gene (OR643895)
By sequencing the DNA and cDNA of the exonic regions and their flanking regions of TERG_11637, TERG_11638, and TERG_11639, we concluded that they comprise a single gene, identified as OR643895 (Figures S1 and S2). Multiple alignments of the OR643895 nucleotide sequence with the isocitrate lyase-coding genes and protein sequences from various dermatophytes showed a homology of more than 95% with the isocitrate lyase-coding genes of Trichophyton tonsurans, Trichophyton verrucosum, Trichophyton equinum, Arthroderma benhamiae, and Microsporum canis. Therefore, T. rubrum has only two genes encoding isocitrate lyase: TERG_01271 and OR643895.
The ∆stuA Mutant Reduces the Transcription of Isocitrate Lyase and Malate Synthase Genes
The wild type (WT) strain exhibited an approximately 10-fold upregulation of genes encoding isocitrate lyase during fungal growth in keratin compared to growth in glucose. The protein StuA exerted distinct regulatory effects on the modulation of isocitrate lyase during the cultivation of the ∆stuA strain in glucose. While OR643895 transcripts were downregulated at 24 and 48 h, TERG_01271 transcripts were derepressed at 24 h and stayed at the same level in transcript abundance at 48 h compared to the WT (control) strain. Transcript levels of TERG_01281 also exhibited a decline in the ∆stuA strain at both time points.

Fungal growth in keratin resulted in reductions in the transcript levels of isocitrate lyase and malate synthase gene isoforms in the ∆stuA strain across all evaluated time points (Figure 1).

We conducted transcriptional analysis of genes encoding isocitrate lyase and malate synthase in an infection-like scenario in human keratinocytes. Our results revealed different levels of isocitrate lyase (OR643895 and TERG_01271) transcripts during co-culture with the WT strain. We observed an overexpression of OR643895 at 24 and 48 h, whereas TERG_01271 exhibited lower transcript levels during the same period. The gene encoding malate synthase (TERG_01281) was upregulated at 24 and 48 h. During co-culture with the ∆stuA strain, we observed a reduced expression of OR643895 and TERG_01281 only at 48 h post-infection. However, TERG_01271 exhibited decreased transcript levels at 24 and 48 h (Figure 2).
Alternative Splicing Assay
In a previously generated RNA sequencing dataset of the ∆stuA mutant grown in glucose and keratin, published by our research group [23], we observed AS events in TERG_01271. Our in silico analysis of the protein sequence and conserved motifs in this gene revealed that AS generated mRNAs with premature stop codons that disrupted protein translation. The retention of intron 2 (IR-2) resulted in a putative protein with 231 amino acid residues lacking conserved domains (Figure 3).
We identified IR-2 events in TERG_01271 during the growth of the WT and ∆stuA strains in glucose and keratin, as well as during co-culture with human keratinocytes. We noted increased expression of the TERG_01271 isoform transcripts with IR-2 in the mutant strain compared to the WT strain when grown in glucose (Figure 4A). However, IR-2 transcript levels decreased in the mutant when both strains were cultured in a keratin medium (Figure 4B). During co-culture, IR-2 transcript levels exhibited a distinct profile depending on the duration of the interaction between the fungus and the host. In the WT strain, we observed an increased number of IR-2 transcripts at 24 h, followed by a significant decrease in the expression of this alternative isoform at 48 h when co-cultured with human keratinocytes (Figure 4C). However, for the mutant ∆stuA, we noted that the IR-2 transcripts were initially repressed at 24 h but induced at 48 h of co-culture (Figure 4D).
The Enzymatic Activities of Isocitrate Lyase and Malate Synthase during Fungal Growth in a Medium Supplemented with Glucose or Keratin
The enzymatic activity of isocitrate lyase was notably higher in glucose and keratin in the ∆stuA mutant after 24 h compared to the WT control. However, after 48 h, we noted a significant increase in enzymatic activity in the WT strain under both growth conditions, whereas the mutant strain grown on keratin exhibited a substantial reduction in activity. At 96 h, we observed a distinct pattern of enzymatic activity during the growth of the mutant depending on the nutritional carbon source (Figure 5).

Regarding the enzymatic activity of malate synthase, we observed a substantial decrease in the ∆stuA strain compared to the control after 96 h of growth in glucose. However, during keratin growth, we observed higher enzymatic activity in the mutant at 24 h, with no statistically significant differences in enzymatic activity observed between the mutant and WT during the remaining time intervals (Figure 5).
Table 1 summarizes our main findings in an attempt to correlate the read count numbers obtained in our previously published RNA sequencing dataset [23] with the expression analysis of genes involved in the glyoxylate cycle and their respective enzyme activities in the WT and ∆stuA strains.
Discussion
Under glucose-deprived conditions, fungal metabolism relies on carbon acquisition from alternative nutritional sources. One potential approach involves the activation of the glyoxylate cycle. Given its absence in humans, this pathway is an attractive and promising target for antifungal development, primarily because of its exclusive non-human enzymes, isocitrate lyase and malate synthase [1]. These enzymes become active in the dermatophyte T. rubrum when grown on keratin or when exposed to cytotoxic drugs in an infection-like scenario involving human keratinocytes [6,[34][35][36][37][38]. For the first time, we present evidence of the transcriptional modulation of essential enzyme-coding genes within the glyoxylate cycle by StuA. Through DNA sequencing, we concluded that the three exonic regions of the genome (TERG_11637, TERG_11638, and TERG_11639), previously annotated as separate entities, were part of the same gene, which has now been identified as OR643895. Furthermore, our in silico analysis showed the presence of a StuA consensus-binding site in the TERG_01271 promoter region, implying possible direct transcriptional regulation of this gene by StuA in T. rubrum. Also, the StuA consensus-binding site is conserved in the promoter region of the TERG_01271 homolog in other dermatophytes (Figure S3).
The Expression of Glyoxylate Cycle Genes Depends on StuA during Fungal Growth in Keratin
The regulation of the expression of glyoxylate cycle genes relies on the transcriptional control exerted by transcription factors.Here, we show that when considering fungal growth in keratin, the absence of StuA significantly reduces the transcription levels of both isocitrate lyase and malate synthase genes (Figure 1).Notwithstanding, our results presented a distinct transcriptional regulation pattern in glucose cultures for isocitrate lyase-coding genes, suggesting that glucose might alter the StuA-mediated regulation of glyoxylate-coding genes.However, we detected a tendency of the upregulation of TERG_01271 in the ∆stuA strain (for 24 and 48 h), which suggests that StuA, in wild type conditions, might have a role in repressing a relevant enzyme of the glyoxylate cycle (Figure 1).In this sense, StuA deletion promotes a cascade of stress response events that affect central carbon metabolism.A high-throughput transcriptomic analysis suggested that the ∆stuA strain tended to upregulate specific glutamate metabolism genes in keratin cultures [23].Considering the repression of essential genes for the glyoxylate cycle in the ∆stuA strain, we infer that StuA is necessary for T. rubrum survival adaptation in glucose-depleted conditions.
Isocitrate Lyase and Malate Synthase Genes Are Upregulated during Co-Culture with Human Keratinocytes, but the Absence of StuA Impairs Their Expression
We observed the overexpression of OR643895, whereas the opposite occurred with TERG_01271 (Figure 2) in the WT strain co-culture, suggesting a dual transcriptional regulation pattern for isocitrate lyase-coding genes.However, in the ∆stuA strain cocultured with human keratinocytes, the expression of the isocitrate lyase and malate synthase-coding genes was repressed, mainly after 48 h of interaction.The differences observed in the transcriptional modulation of genes encoding isocitrate lyase led us to propose that another level of transcriptional regulation drives the expression of TERG_01271 or OR643895.
Fungi trigger the glyoxylate cycle upon contact with macrophages [39][40][41][42].This is a remarkable pathogenic strategy for determining how metabolic flexibility contributes to the virulence of fungal pathogens.In T. rubrum, the overexpression of genes encoding malate synthase and isocitrate lyase has been observed during a dual RNA sequencing analysis of T. rubrum co-cultured with human keratinocytes [36].Here, we propose that StuA plays a significant role in activating the glyoxylate cycle in human keratinocytes, suggesting that this regulatory protein might be a potential target for impairing the virulence of T. rubrum.
Post-Transcriptional Regulation of TERG_01271 by Alternative Splicing
We observed through in silico analysis that IR events in TERG_01271 resulted in the introduction of a premature stop codon, forming a potentially putative non-functional truncated protein (Figure 3).In addition, the carbon source may influence the transcript levels of the IR isoform in T. rubrum via a mechanism mediated by StuA (Figure 4A,B).Our results showed that StuA represses both the TERG_01271 conventional and IR isoforms in glucose cultures but acts as an activator of identical isoforms in keratin cultures.This is reasonable considering that glyoxylate cycle genes are activated in glucose-deprived environments.Remarkably, both the carbon source and StuA may influence the AS of the glyoxylate gene in T. rubrum.Furthermore, the presence of mature conventional transcripts did not impair the presence of the IR isoforms in T. rubrum, as reported previously [13].
The balance between mutual and non-exclusive splicing isoform abundance was also observed in co-culture assays, where a challenge with keratinocytes elicited a distinct pattern for the TERG_01271 IR isoform.Although we detected IR isoforms in both WT and ∆stuA strain cultures (control), contact with keratinocytes triggered differences in the abundance of AS isoforms (Figure 4C,D), in contrast to the single repression pattern generated in co-cultures of conventional splicing isoforms.As the co-culture mimics an infection-like scenario, we hypothesized that the differences between conventional splicing and AS isoforms agree with the biological requirements imposed on T. rubrum to fight host defense strategies.
Isocitrate Lyase and Malate Synthase Activities Are Independently Regulated during T. rubrum Culture in Glucose or Keratin
The absence of StuA compromised isocitrate lyase activity in specific instances.We observed a higher enzymatic activity in the absence of StuA after the first 24 h of culture, regardless of the carbon source.Conversely, the mutant strain presented lower enzymatic activity after 48 h and showed distinct enzymatic activity at 96 h, depending on the carbon source.We hypothesized that throughout fungal growth (48 and 96 h), the absence of this transcription factor, which is associated with AS events, would significantly reduce the enzymatic activity of isocitrate lyase.We also observed a statistically significant increase in malate synthase enzymatic activity after 24 h of ∆stuA culture in keratin, followed by a reduction in activity after 96 h in glucose (Figure 5).Malate synthase is responsible for converting acetyl-CoA and glyoxylate into malate.Malate synthase activity was not affected by the reduced isocitrate lyase activity in certain instances.Therefore, even with a reduction in isocitrate lyase activity, which consequently reduces glyoxylate production, the glyoxylate generated under the conditions evaluated in this study was sufficient to stimulate malate synthase activity.
From the data summarized in Table 1, we observed compensatory activity in the expression of TERG_01271 and OR643895 in the ∆stuA strain, which may result in few changes in isocitrate lyase enzymatic activity.Fluctuations in read counts in the mutant strain at 24, 48, and 96 h did not correlate with significant changes in isocitrate lyase activity.We observed a similar trend in TERG_01281 expression.During growth on keratin, ∆stuA exhibits lower read counts compared to WT.However, this strain has shown significant isocitrate lyase activity in some instances.Additionally, the reduction in isocitrate lyase gene read counts in ∆stuA from 48 to 96 h did not significantly affect enzymatic activity.
It is well known that post-translational modifications can influence the catalytic potential of enzymes, including phosphorylation [43].In Saccharomyces cerevisiae [44] and Paracoccidioides brasiliensis [45], phosphorylation has been shown to reduce isocitrate lyase activity, leading to enzyme inactivation.By analogy, we hypothesize that a similar phenomenon occurs in T. rubrum.However, further research is necessary to fully understand the post-translational mechanisms that govern isocitrate lyase activity in T. rubrum.
Fungal Strains and Culture Conditions
The T. rubrum strain CBS118892 (Westerdijk Fungal Biodiversity Institute, Utrecht, The Netherlands) was used as a reference (WT).We also used the previously constructed null mutant strain, ∆stuA [22].The strains were grown on a solid malt extract agar medium (2% glucose, 2% malt extract, 0.1% peptone, 2% agar, pH 5.7) at 28 • C for 20 days.Next, to prepare a conidial suspension, we flooded the plates with a 0.9% sterile NaCl solution and filtered them through fiberglass to remove hyphal fragments.Conidial concentration was estimated using a Neubauer chamber.Then, we inoculated approximately 1 × 10 6 conidia of each strain into 100 mL of Sabouraud dextrose broth and incubated the cultures at 28 • C for 96 h in an orbital shaker with agitation (120 rpm).
The resulting mycelia were transferred into 100 mL of a minimal medium [46] containing 70 mM sodium nitrate (Sigma-Aldrich, St. Louis, MO, USA), 50 mM glucose (Sigma-Aldrich, St. Louis, MO, USA), or 0.5% bovine keratin (m/v).We incubated the cultures for 24, 48, and 96 h at 28 • C with constant agitation (120 rpm).Subsequently, we filtered the biological material from three independent replicates of glucose or keratin cultures at each time point and stored it at −80 • C until RNA extraction.
Co-Culture of Fungal Strains and Human Keratinocytes
The immortalized HaCaT human keratinocyte cell line (Cell Lines Service GmbH, Eppelheim, Germany) was cultured in an RPMI-1640 medium (Sigma-Aldrich, St. Louis, MO, USA) supplemented with 10% fetal bovine serum at 37 • C in a humidified atmosphere containing 5% CO 2 .We added penicillin (100 U/mL) and streptomycin (100 µg/mL) to prevent culture medium contamination.The co-culture assays of the fungal WT or ∆stuA strains with HaCaT keratinocytes were performed as previously described [13].We used uninfected keratinocytes and WT or ∆stuA conidia as the controls.The assay was performed in triplicate.
RNA Extraction and cDNA Synthesis
Total RNA was extracted using an Illustra RNAspin Mini Isolation Kit (GE Healthcare, Chicago, IL, USA), according to the manufacturer's instructions.For fungal cell wall disruption in the co-culture, the samples were treated with a solution of lysing enzymes from Trichoderma harzianum, as previously described [36].RNA concentration and purity were assessed using a NanoDrop ND-100 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA).
Total RNA was treated with DNase I (Sigma-Aldrich, St. Louis, MO, USA) to prevent genomic DNA contamination.Subsequently, cDNA synthesis was performed using the Platus Transcriber RNase H-cDNA First Strand Kit (Sinapse Inc., Miami, FL, USA), according to the manufacturer's instructions.To assess the quality of the obtained cDNAs, we conducted a PCR reaction using oligonucleotides to amplify a region of the constitutive β-tubulin gene, followed by analysis on an agarose electrophoresis gel.We suspended the cDNAs in 70 ng/µL dilutions for a reverse-transcription quantitative polymerase chain reaction (RT-qPCR).
Genomic DNA and cDNA Sequencing of Isocitrate Lyase
Previous RNA sequencing performed by our group showed aligned reads in the intergenic regions of TERG_11637, TERG_11638, and TERG_11639, all of which were annotated as encoding isocitrate lyases.The web server Augustus (https://bioinf.unigreifswald.de/augustus/.Accessed in April 2023) was used [47] to predict genes from this supercontig region.The predicted gene sequence was aligned against the Ensembl Fungi database using BLAST tools to verify the homology among the genes of other species.The coordinates of the exons and introns were obtained by aligning the isocitrate lyase sequences from dermatophytes in the database against the predicted gene sequence.
For DNA sequencing, we designed specific primers flanking the supercontig regions of these three genes and used genomic DNA and cDNA samples (Table S1).We purified the PCR products using the Wizard ® SV Gel and PCR Clean-UP System Protocol (Promega Corporation, Madison, WI, USA), according to the manufacturer's instructions.A Nan-oDrop ND-100 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA) was used to assess the purity and integrity of PCR products before sequencing.
DNA sequencing was performed using a BigDye Terminator v3.1 Cycle Sequencing Kit (Applied Biosystems, Waltham, MA, USA), according to the manufacturer's instructions, with the Sanger methodology in an ABI 3500xL Genetic Analyzer (Thermo Fisher Scientific, Waltham, MA, USA).Sequencing Analysis Software v.5.4 was used to analyze the quality of the rendered sequences.The gDNA and cDNA sequences were assembled and analyzed using DNASTAR SeqMan Ultra software (https://www.dnastar.com.Assessed in August 2023).The obtained nucleotide sequence corresponds to a single isocitrate lyase and is available in GenBank under accession number OR643895.
Alternative Splicing Analyses
Sequencing reads from a previous RNA sequencing analysis available at the Gene Expression Omnibus under accession numbers GSE163357 and GSE134406 [23] were mapped to the T. rubrum reference genome using the STAR aligner [48].To identify AS events, we processed the aligned reads using the ASpli package in R software version 4.3.1 [49].Differential expression was analyzed using the DESeq2 Bioconductor package [50].The Benjamini-Hochberg-adjusted p-value was set to 0.05, with a Log 2 Fold Change of ±1.5 to identify the abundance of significantly modulated levels of transcripts [23].
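As an illustration of the filtering step described above, the short Python sketch below applies the same thresholds (a Benjamini-Hochberg-adjusted p-value below 0.05 and an absolute Log2 Fold Change of at least 1.5) to a results table. The file name and column names are assumptions chosen to mirror common DESeq2 export conventions; they do not reproduce the authors' actual pipeline.

```python
import pandas as pd

# Hypothetical DESeq2-style results exported to CSV; column names are assumed:
# 'transcript', 'log2FoldChange', 'padj' (Benjamini-Hochberg-adjusted p-values).
results = pd.read_csv("deseq2_results.csv")

# Keep transcripts passing both thresholds used in the text:
# adjusted p-value < 0.05 and |log2 fold change| >= 1.5.
significant = results[
    (results["padj"] < 0.05) & (results["log2FoldChange"].abs() >= 1.5)
]
print(significant.sort_values("padj").head())
```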
We used in silico tools to identify the isoforms, reading frames, conserved sites, and domains of the isocitrate lyase encoded by TERG_01271 during AS events with IR and conventional splicing mRNA processing.The ExPAsy Translate Tool [51] was used to identify the translated protein sequences of the analyzed transcripts.We searched for protein domains in virtual databases such as Ensembl Fungi [52], Interpro [53], and PANTHER [54,55].We drew a graphical representation of each isoform using Illustrator for Biological Sequences software (IBS 1.0) [56].
RT-qPCR Analyses
We used a QuantStudio 3 Real-Time PCR System (Applied Biosystems, Waltham, MA, USA) with the primers listed in Supplementary Table S1 for transcript quantification.For TERG_01271, which exhibited both conventional and AS events, we designed primers flanking only exon-exon junctions for traditional splicing analysis and primers within the intronic region for IR events.The concentration of each primer was standardized for reaction efficiencies between 90% and 110%.Reactions were prepared using Power SYBR™ Green PCR Master Mix (Applied Biosystems, Waltham, MA, USA) with ROX dye as a fluorescent normalizer [57].We used the 2 −∆∆Ct method [58] for relative expression analysis, considering the T. rubrum gene gapdh as an endogenous control.Relative expression normalization of conventional splicing and IR events in the WT and mutant strains grown in glucose or keratin and co-cultured with human keratinocytes was performed as described previously [13].The results are presented as the mean relative expression values from three independent replicates with standard deviations.
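For readers unfamiliar with the 2^−∆∆Ct calculation, the following minimal Python sketch shows the arithmetic behind the relative expression values reported here. The Ct numbers are invented; using gapdh as the endogenous control and the WT sample as calibrator follows the description above.

```python
def relative_expression(ct_target_sample, ct_gapdh_sample,
                        ct_target_calibrator, ct_gapdh_calibrator):
    """Relative expression by the 2^-ddCt method: the target gene is normalized
    to gapdh, and the sample is compared to a calibrator such as the WT control."""
    d_ct_sample = ct_target_sample - ct_gapdh_sample
    d_ct_calibrator = ct_target_calibrator - ct_gapdh_calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)

# Invented Ct values for illustration only.
fold_change = relative_expression(ct_target_sample=24.1, ct_gapdh_sample=18.0,
                                  ct_target_calibrator=26.3, ct_gapdh_calibrator=18.2)
print(f"relative expression: {fold_change:.2f}")  # 4.00 in this example
```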
Enzymatic Activity Assays
We used the macerated mycelium of WT and ∆stuA strains to assess enzymatic activity.Approximately 0.75 g of macerated mycelium was mixed with 500 µL of a Tris-HCl buffer (50 mM Tris-HCl, 2 mM MgCl 2 , 2 mM dithiothreitol, pH 8.0).The samples were vortexed and centrifuged for 30 min at 1.268× g at 4 • C [9].The supernatant (protein extract) was collected and stored at −80 • C until enzymatic assays for isocitrate lyase and malate synthase were performed.Proteins were quantified using the Bradford reagent (Sigma-Aldrich, St. Louis, MO, USA), and concentrations were determined using a standard curve of serial dilutions of Bovine Serum Albumin (BSA) (Sigma-Aldrich, St. Louis, MO, USA).
The enzymatic activities of isocitrate lyase and malate synthase were represented as units per milligram (U/mg) of total protein extract.Three biological replicates were used in each experiment.
Statistical Analysis
We used an unpaired t-test to statistically analyze the transcript quantifications and enzymatic assay results.Statistical significance was determined using the Holm-Sidak method with p < 0.05.Significance is represented in the graphs as * p < 0.05, ** p < 0.01, *** p < 0.001, and **** p < 0.0001.GraphPad Prism software v.6 (GraphPad Software, San Diego, CA, USA) [61] was used for statistical analysis and graph design.
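As a rough sketch of the statistical procedure described above (unpaired t-tests with Holm-Sidak correction across a family of comparisons), the Python code below uses SciPy and statsmodels. The replicate values are invented, and the actual analysis in this study was run in GraphPad Prism.

```python
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

# Invented relative-expression values for three biological replicates per group.
wt_groups  = [[1.00, 1.10, 0.95], [0.80, 1.05, 1.20]]
mut_groups = [[2.30, 2.60, 2.10], [0.40, 0.50, 0.45]]

# One unpaired t-test per gene/condition comparison.
p_values = [ttest_ind(wt, mut).pvalue for wt, mut in zip(wt_groups, mut_groups)]

# Holm-Sidak correction across the family of comparisons, alpha = 0.05.
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm-sidak")
for p_raw, p_adj, sig in zip(p_values, p_adjusted, reject):
    print(f"p = {p_raw:.4f}, adjusted p = {p_adj:.4f}, significant = {sig}")
```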
Conclusions
In summary, our results provide new insights into the annotation of isocitrate lyase genes.This is the first study to report the association of the transcription factor StuA with the transcriptional regulation of genes involved in the glyoxylate cycle in conjunction with AS events.The absence of StuA impaired the expression of genes encoding isocitrate lyase and malate synthase during growth in different carbon sources and co-culture with human keratinocytes.Therefore, this transcription factor can directly regulate TERG_01271 and indirectly regulate the OR643895 and malate synthase (TERG_01281) genes.We also revealed a balance between conventional and AS in the post-transcriptional regulation of TERG_01271.Finally, we demonstrated the impairment of isocitrate lyase activity in the mutant strain under certain conditions.However, the enzymatic activity of malate synthase was not entirely affected during fungi growth in different carbon sources.
Figure 1. Relative expression analysis of isocitrate lyase (OR643895 and TERG_01271) and malate synthase (TERG_01281) transcripts in wild type (WT) and ∆stuA mutant strains during growth on glucose or keratin. The WT strain served as the control. Statistical significance was determined using an unpaired Student's t-test with Holm-Sidak correction for multiple comparisons. * p < 0.05, *** p < 0.001, and **** p < 0.0001.
Figure 2. Relative expression analysis of isocitrate lyase (OR643895 and TERG_01271) and malate synthase (TERG_01281) transcripts in wild type (WT) and ∆stuA mutant strains during co-culture with HaCaT keratinocytes. The WT and ∆stuA strains without keratinocytes were used as the controls in their respective co-culture assays. Statistical significance was determined using an unpaired Student's t-test with Holm-Sidak correction for multiple comparisons. * p < 0.05, ** p < 0.01, *** p < 0.001, and **** p < 0.0001.
Figure 3. Schematic representation of TERG_01271 with conventional splicing and protein translation featuring the ICL/PEPM domains (Isocitrate Lyase-like/Penta-EF-hand Protein Motif). The ICL domain is responsible for the catalytic activity of isocitrate lyase, and the PEPM domain is associated with calcium-binding activity. An alternative splicing event with intron 2 retention results in an mRNA with premature stop codons and the formation of a putative truncated protein in which both domains are lost.
Figure 4. Relative expression analysis of TERG_01271 transcripts with IR-2 in the WT (control) and ∆stuA strains. Expression patterns are shown during growth in glucose (A) and keratin (B) media, as well as during co-culture with human keratinocytes for the WT (C) and ∆stuA (D) strains. The WT and ∆stuA strains without keratinocytes were used as controls in their respective co-culture assays. Statistical significance was determined using an unpaired Student's t-test with Holm-Sidak correction for multiple comparisons. *** p < 0.001, **** p < 0.0001.
Author Contributions: M.F.P. designed, conducted the experiments, and wrote the manuscript. L.M.-S. supported the enzymatic activities assay, DNA sequencing, and contributed to the final version of the manuscript. P.R.S. performed bioinformatic analysis. V.M.O. performed fungi cultures and supported RNA and protein extraction. N.M.M.-R. and A.R. supervised the experiments' design and realization and edited the manuscript. All authors reviewed the manuscript and approved the submitted version. All authors have read and agreed to the published version of the manuscript.
Funding: This work was supported by grants from the Brazilian Agencies: São Paulo Research Foundation-FAPESP (proc. No. 2019/22596-9, postdoctoral scholarship Nos. 2021/10359-2 to MP and 2021/10255-2 to LM-S); the National Council for Scientific and Technological Development-CNPq (grant Nos. 307871/2021-5 and 307876/2021-7); the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)-Finance Code 001; and the Fundação de Apoio ao Ensino, Pesquisa e Assistência-FAEPA.
Informed Consent Statement: Not applicable.
Table 1. Read counts, gene expression, and enzymatic activity of genes from the glyoxylate cycle in ∆stuA and WT strains of T. rubrum grown on glucose or keratin.
The Validity of Using Analogue Patients in Practitioner–Patient Communication Research: Systematic Review and Meta-Analysis
When studying the patient perspective on communication, some studies rely on analogue patients (patients and healthy subjects) who rate videotaped medical consultations while putting themselves in the shoes of the video-patient. To describe the rationales, methodology, and outcomes of studies using video-vignette designs in which videotaped medical consultations are watched and judged by analogue patients. Pubmed, Embase, Psychinfo and CINAHL databases were systematically searched up to February 2012. Data was extracted on: study characteristics and quality, design, rationales, internal and external validity, limitations and analogue patients’ perceptions of studied communication. A meta-analysis was conducted on the distribution of analogue patients’ evaluations of communication. Thirty-four studies were included, comprising both scripted and clinical studies, of average-to-superior quality. Studies provided unspecific, ethical as well as methodological rationales for conducting video-vignette studies with analogue patients. Scripted studies provided the most specific methodological rationales and tried the most to increase and test internal validity (e.g. by performing manipulation checks) and external validity (e.g. by determining identification with video-patient). Analogue patients’ perceptions of communication largely overlap with clinical patients’ perceptions. The meta-analysis revealed that analogue patients’ evaluations of practitioners’ communication are not subject to ceiling effects. Analogue patients’ evaluations of communication equaled clinical patients’ perceptions, while overcoming ceiling effects. This implies that analogue patients can be included as proxies for clinical patients in studies on communication, taken some described precautions into account. Insights from this review may ease decisions about including analogue patients in video-vignette studies, improve the quality of these studies and increase knowledge on communication from the patient perspective. Electronic supplementary material The online version of this article (doi:10.1007/s11606-012-2111-8) contains supplementary material, which is available to authorized users.
as such. With regard to external validity, the question arises whether results are generalizable to CPs and clinical care, i.e. are APs able to adopt a video-patient's perspective?
To summarize, we lack an understanding of the rationales for conducting video-vignette studies with APs; how both internal and external validity are increased and tested; how APs' perceptions of communication correspond to CPs' perceptions; and whether APs' evaluations of communication overcome ceiling effects. An overview of these elements will provide more insight into when and how APs can be used in future studies. Therefore, a systematic review is conducted with the following research questions:
1. What are the rationales for conducting clinical and scripted video-vignette studies on medical communication with APs?
2. What have video-vignette studies done to increase and test their internal and external validity?
3. How do APs perceive (affective, instrumental and general) communication elements?
4. Do APs' evaluations of communication overcome ceiling effects?
Identification of Studies
Pubmed, Embase, Psycinfo and CINAHL were searched in February 2012. Searches were not restricted to any parameter and focused on two central concepts: 'analogue patients' and 'video' (see the Online Appendix Supplementary data for search strategies used). Studies were eligible for inclusion if they were about (verbal/nonverbal) communication between physicians/ nurses and patients and: i) used video-vignette designs; ii) included APs (>18 years): healthy subjects, untrained or trained only for this study; patients not judging their own doctor/nurse; standardized patients viewing a videotaped consultation they took part in; and iii) used APs' perceptions of physician's/nurse's communication as outcome measures (e.g., preferences, recall). Studies were excluded if: i) observers were trainers, research assistants, trained/ experienced coders, examiners, medical students or faculty members; ii) APs' comments did not include a quality judgment.
Data
The following data were extracted from each study and summarized in Table 1: study characteristics and quality, design, rationales for conducting video-vignette studies with APs, attempts to increase and test internal and external validity, limitations, and APs' perceptions of the studied communication elements.
Quality of studies was assessed 16 by applying the Research Appraisal Checklist (RAC). 17 The RAC consists of 51 items covering the quality of title, abstract, introduction, methodology, data analysis, discussion, and style/form. Each item is scored on a 1-6 scale, so total scores can vary between 0 and 306 points with three quality categories: i) Below Average (0-103 points), ii) Average (103-204 points), iii) Superior (205-306 points).
Meta-Analysis to Determine Ceiling Effects
To determine whether APs' evaluations of communication (e.g. satisfaction, preferences) overcome ceiling effects, a random-effects multivariate meta-regression analysis 18 was performed using the statistical package MLWIN 2.02. 19 The following quantitative data was abstracted for each evaluation: M, SD, range. For each study the number of participants, videos viewed per participant and available videos was abstracted. For each evaluation, using various scales, the mean score was transformed to a 0-100 score 20 using two formulas; for scales starting at 1: ((mean-1)/ (range-1))x100, for scales starting at 0: ((mean/range))x100. Authors were contacted to provide relevant data not presented in the articles.
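The two rescaling formulas can be expressed compactly in code. The sketch below is a minimal illustration (the function name and example values are ours, not from the review): for a scale starting at 1 it reproduces ((mean − 1)/(range − 1)) × 100, and for a scale starting at 0 it reproduces (mean/range) × 100, where 'range' denotes the scale maximum.

```python
def rescale_to_0_100(mean_score, scale_min, scale_max):
    """Transform a mean evaluation score from an arbitrary rating scale to 0-100."""
    if scale_min == 1:
        return (mean_score - 1.0) / (scale_max - 1.0) * 100.0
    return mean_score / scale_max * 100.0

# Example: a mean satisfaction of 4.0 on a 1-6 scale maps to 60 on the 0-100 scale.
print(rescale_to_0_100(4.0, scale_min=1, scale_max=6))
```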
RESULTS
The 2950 references initially found were reviewed on title/abstract (and if necessary on full-text) to determine whether they: a) were about communication, b) used a video-vignette design, c) included APs. A random 10% of the articles were independently checked on these criteria by two authors (LV and JB); interrater agreement exceeded 95%. Thirty-four articles met these criteria and a forward and backward reference search was performed. Four hundred and fifty-two new articles were reviewed in the aforementioned manner, resulting in 32 additional articles. These 66 articles were explored full-text on the final criteria: a) a focus on doctor/nurse-patient communication, b) inclusion of APs who viewed videos and judged the communication. Thirty-four articles met all criteria. Their references were hand-searched.
DISCUSSION
This systematic review focused on the rationales, methodology and outcomes of medical video-vignette studies with APs. Scripted studies provided more specific rationales for using video-vignette designs with APs than clinical studies and directed more efforts at increasing/testing internal and external validity. APs' perceptions of communication overlapped generally with CPs' perceptions. Meanwhile, their evaluations overcame ceiling effects. These results have interesting methodological, theoretical and practical relevance.
Scripted studies paid the most attention to increasing the designs' methodological soundness. Specific methodological rationales for conducting video-vignette studies with APs were provided, such as the opportunity to study communication systematically. This fills a gap in clinical care studies, in which only correlations, but no causality between communication and outcomes can be determined. 58,59 Unfortunately, some scripted studies included container-concepts of communication (e.g., patient-centeredness). When positive effects are found, it remains unclear which specific element(s) of communication influenced outcomes. 15,58 Additionally, as argued, when videos are watched by multiple APs, the reliability of assessments increases. 60,61 Another argument for including APs was that their evaluations can overcome ceiling effects. APs' evaluations were indeed not high; on average 54.28 on a 0-100 scale. By comparison, a meta-analysis of CPs' satisfaction ratings showed an average score of 80.00 (0-100 scale). 20 Moreover, a recent study compared CPs' satisfaction scores with those of APs viewing these videotaped consultations. Mean score (1-6 scale) for CPs was 5.8, while for APs it was 4.0 (p<0.001). 62 APs' ratings thus seem to overcome this limitation of CPs' evaluations. 4,5 Accordingly, these and other methodological rationales provide strong foundations for conducting video-vignette studies with APs.
To achieve internal validity, APs reflected on manipulations in scripted consultations. Unexpectedly, 'experts' (doctors/researchers) were not often asked to comment on manipulations, although they may have insight into the manipulations' (theoretical) success. Moreover, little information was provided on how exactly scripts were created, i.e. it often remained unclear what input researchers used to develop scripts and at what point(s) the scripts were validated.
Focusing on external validity, some studies argued that APs' perceptions overlap with CPs' perceptions. However, none of these studies determined whether APs watching videotaped consultations and CPs in these consultations overlapped on outcome measures. As stated earlier, such a study has recently been performed. 62 In this study-taking into account CPs' skewed satisfaction scores-APs' and CPs' evaluations were correlated. Additionally, a meta-analysis in psychology 63 showed that lay people can make reliable judgments for (non)verbal communication based on brief (clinical and scripted) videotaped interactions.
Theoretical evidence supporting the external validity of APs can be found in simulation theory and is supported by neuro-cognitive studies on empathy. According to simulation theory, we infer other persons' mental states by matching their states with resonant states of one's own mental state. 64 Neuro-cognitive studies show that the brain's mirror neurons fire when a particular action is carried out or observed. 65 They form the basis for empathy, [66][67][68][69] as they are involved in experiencing and observing emotions in others 70 and allow people to adopt another person's perspective. 71 Indeed, some oncological scripted studies included survivors alongside healthy participants. Their perceptions overlapped, indicating that healthy people can put themselves in the shoes of (cancer) patients. 72 However, the methodological and theoretical rationales and advantages of using APs as proxies for CPs are relevant only when APs' perceptions of communication are applicable in clinical practice, which is mainly supported by our results. APs' perceptions of communication overlap mostly with those of CPs. A few-seemingly-contradictory findings were found. APs disliked information-exchange during bad news conversations, while CPs mostly valued this behavior. However, CPs often report receiving too much information during these conversations. [73][74][75][76][77][78] Besides, while most studies point to the positive effects of patientcenteredness, a study with APs 51 and review on CPs 12 found that for purely physical complaints, a patient-centered style may be suboptimal.
Despite these promising results, various aspects should be taken into account when interpreting APs' perceptions for clinical practice. First, in one study APs' perceptions were unrelated to CPs' satisfaction scores. The considerable age difference (students versus seniors) may be responsible for this finding, as age influences communication preferences. [79][80][81] Future studies should take background characteristics influencing preferences-e.g. gender, 81,82 education 83,84 -into account. Consequently, students should not be included as APs merely for convenience. Second, the diversity in APs' evaluations should be kept in mind. The long-term doctor-patient relationship possibly influencing CPs' evaluations cannot be captured by studies using APs. Thus, as video-vignette studies make it possible to disentangle the effect of various communication elements, these elements should afterwards be tested in clinical care.
Limitations
This review has its limitations. First, the literature is inconsistent in the terms used for "analogue patients". To include all relevant articles, both forward and backward reference searches on possible relevant articles were performed and included studies' references were hand-searched. Future studies should use the term "analogue patients" consistently. Second, we excluded trained observers, but included lay people trained for this specific study. As studies may have used inconsistent labels, we screened for detailed information on observers. Despite these precautions taken, inadequately indexed and little cited relevant studies may have been missed, as we used a top-down search strategy.
CONCLUSION AND FUTURE STUDIES
Scripted video-vignette studies increased their methodological soundness by providing specific rationales for conducting video-vignette studies with APs and increasing (internal and external) validity. In keeping with simulation theory and neuro-cognitive studies, APs' perceptions of communication overlapped largely with CPs' perceptions, while overcoming ceiling effects. However, it may be necessary to match participants on variables such as age and gender. Moreover, the effect of a long-term doctor-patient relationship on evaluations cannot be studied with APs. This leads to the conclusion that, taking these precautions into account, APs can provide knowledge on the patient perspective on communication.
Future scripted studies may benefit from the described elements to increase their methodological strength and provide more information about the process of ensuring validity. From this review we cannot conclude which communication elements (and outcome measures) can best be studied with APs. Ambady and Rosenthal 63 suggested that communication with an affective component is fastest recognized because of its evolutionary importance. 85,86 Future studies could investigate differences between various types of APs. Research could build further on the aforementioned work, 62 comparing CPs' perceptions with those from APs watching these consultations, taking into account differences in rating dispersion and focusing on background characteristics. This will raise the level of future studies in this promising research field, aimed at systematically unraveling the patient perspective on communication.
The Usual Suspects 2019: of Chips, Droplets, Synthesis, and Artificial Cells
Synthetic biology aims to understand fundamental biological processes in more detail than possible for actual living cells. Synthetic biology can combat decomposition and build-up of artificial experimental models under precisely controlled and defined environmental and biochemical conditions. Microfluidic systems can provide the tools to improve and refine existing synthetic systems because they allow control and manipulation of liquids on a micro- and nanoscale. In addition, chip-based approaches are predisposed for synthetic biology applications since they present an opportune technological toolkit capable of fully automated high throughput and content screening under low reagent consumption. This review critically highlights the latest updates in microfluidic cell-free and cell-based protein synthesis as well as the progress on chip-based artificial cells. Even though progress is slow for microfluidic synthetic biology, microfluidic systems are valuable tools for synthetic biology and may one day help to give answers to long asked questions of fundamental cell biology and life itself.
Introduction
Investigation on function, structure, and dynamics of cells at the molecular level will be the key to understand fundamental uncertainties regarding the definition and the origin of life. Synthetic biology is an emerging discipline that attempts to synthesize, re-engineer, and manipulate biological systems under very controlled conditions to better understand nature [1]. This interdisciplinary field aims to design and synthesize unnatural (bio)chemical structures using bottom-up or top-down approaches, on a genomic, proteomic, or cellular level, and to re-composite and manipulate consisting biological systems [2]. Essentially, synthetic biology is based on well-characterized and functional DNA building blocks, which assemble into newly designed biosystems [3]. From the engineering point of view, classical engineering development cycles can be used to facilitate synthetic biology processes based on four steps that comprise the design, the construction, the testing, and the analysis of an artificial system in relation to the functional and structural properties of the natural system [4][5][6]. These key
Cell-Based Protein Synthesis-A Ménage à Trois of Droplets, Digital Microfluidics, and Cells
Synthetic biology approaches provide an important tool for on-demand control of gene expression mechanisms in cellular organisms. Applications allowing the user to engineer cellular pathways are not only vital for optimizing biotechnological processes but also for understanding physiologic and pathologic mechanisms in cell biology. The addition of microfluidic devices to the equation enables the user to (i) guide cellular microenvironments using automated feedback algorithms, (ii) entrap cells in droplets to sort for the most valuable strains, and (iii) conduct sophisticated experiments, for instance, to fine-tune inducer concentrations or to observe prokaryotic and eukaryotic cells in a dynamically changing microenvironment [18].
Dynamic changes of the environment can be generated by fluctuating lactose supply to lac-operon-controlled Escherichia coli (E. coli). To monitor such changes in single cells and their progeny, Kaiser et al. [19] combined a dual-input Mother Machine Chip (DIMM, shown in Figure 1A) with a Mother Machine Software Analyzer (MoMA) algorithm. The application of the DIMM microfluidic chip offers several advantages over common in vitro cultivation, namely (i) employing dynamic changes in substrate type and concentration, (ii) observing the gene regulatory response in each single cell, and (iii) tracking gene expression changes over time. By utilizing the powerful MoMA software, capable of segmenting and tracking cells in phase-contrast images, the researchers identified several fascinating novel features of lac operon induction in E. coli. Nonetheless, even though this setup provides a powerful tool for dynamic gene regulation studies, its capabilities were demonstrated with a widely used standard host organism and promoter, therefore presenting a mere proof-of-principle study. Applying this knowledge to test the stochastic properties of synthetically engineered inducible promoters would pose an important next step in the development of such microfluidics for synthetic biology. A good example of such a system applied to mammalian cells was recently published by Postiglione et al. [18], proving how cultivation of synthetically engineered mammalian cells can be combined with control engineering for automated adjustment of inducer concentration and, thus, protein expression. Inserting not just microorganisms but mammalian cells within such a microfluidic setup provides multiple exciting possibilities for cybergenetics to improve both biotechnological production as well as understanding pathways in cellular development and differentiation. Even though the microfluidic device is not new and has been initially developed by Kolnik et al. [20], it enables shear-free cultivation of mammalian cells with automated cell loading and medium exchange. The device is housed in a setup that optically determines the accumulation of expressed fluorescent reporter molecules. Further, it automatically adjusts the expression to a reference level by varying inducer concentration using two syringes by a feedback-loop control. The system was not only successfully applied to Chinese hamster ovary cells, the standard mammalian workhorse for producing recombinant proteins, but also in mouse embryonic stem cells. The opportunity to control gene expression in complicated mammalian cells promises enthralling new opportunities for fundamental cell biology as well as thrilling future insight into human medicine.
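To make the feedback-loop idea concrete, here is a minimal, purely illustrative sketch of a proportional controller of the kind described above: reporter fluorescence is compared with a reference level, and the inducer concentration delivered to the chamber is nudged accordingly. The function, gain, and units are assumptions for illustration and do not reproduce the published control algorithm.

```python
def update_inducer(inducer, fluorescence, setpoint, gain=0.01,
                   inducer_min=0.0, inducer_max=1.0):
    """One iteration of a proportional feedback step (arbitrary units).

    If the reporter signal is below the reference level, more inducer is
    delivered; if it overshoots, the inducer concentration is lowered.
    """
    error = setpoint - fluorescence
    new_inducer = inducer + gain * error
    return max(inducer_min, min(inducer_max, new_inducer))  # clamp to the deliverable range

# Example loop: in the real setup the fluorescence values would come from the microscope read-out.
inducer = 0.1
for fluorescence in [20.0, 35.0, 48.0, 55.0]:   # invented measurements
    inducer = update_inducer(inducer, fluorescence, setpoint=50.0)
    print(f"measured {fluorescence:.0f} -> next inducer concentration {inducer:.3f}")
```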
Alternative to the merits outlined above, microfluidic devices can also be used for active expression screening and cell sorting. In droplet generators individual synthetically engineered cells are encapsulated in droplets by utilizing the laminar flow properties of fluid handling at the microscale. Laminar flows of an immiscible fluid perpendicular to the droplet-generating channel lead to the formation of hydrophilic droplets within a hydrophobic carrier fluid. By addition of cells to the hydrophilic fluid, single cells can be encapsulated in these droplets and later on sorted by specific properties such as fluorescent gene expression. Microfabricated fluorescence-activated cell sorter (µFACS) systems with inline droplet generators offer similar advantages as FACS; however, they eliminate the need for expensive equipment and minimize the probability of channel clogging. One recent example of employing droplet microfluidics for cell sorting has been shown by Yu et al. for plant protoplasts, as shown in Figure 1B [21].
Protoplast fluorescence was detected on-chip by coupling a laser-based optical detection setup with electrodes generating a dielectric force dependent on fluorescent readout. If a droplet containing a protoplast positive for either chlorophyll or yellow fluorescent protein (YFP) passes the optical detection unit, the droplet is steered into the positive channel by activating the electrodes whereas negative droplets are excluded by fluid resistance. The microdevice shows high success rates, as all microdroplets collected in the positive channel contain YFP-expressing protoplasts, and the negative channel mainly features empty droplets or droplets containing ruptured or wild type protoplasts. By adding this microfluidic sorting unit to the experimental palette, possibilities for synthetic engineering of plants are significantly enhanced. Identifying successfully transfected protoplasts prior to strenuous tissue culture undoubtedly can decrease time and cost of experiments, which is quite usable for today's scientists for obvious reasons. As cheap and nice as these systems may be, droplet generators are often limited to hydrophilic proteins since hydrophobic expression products (e.g., oils for biofuel production) are soluble in the carrier oil. To overcome this downside, Siltanen et al. [22] engineered a platform enabling microfluidic cell sorting and subsequent printing of droplets onto a microwell array. First, droplet-encapsulated yeast colonies were sorted based on similar optical density using dielectrophoretic cell sorting as described above. Subsequently, isogenic colonies were printed onto a microarray consisting of dielectrophoretic traps placed below nanoliter-sized wells. After substrate addition, the hydrophobic carrier oil was aspirated and replaced by humidified air to solve the carrier issues. Finally, successful staining of hydrophobic expression product was enabled by encapsulating the yeast colonies in a hydrogel mesh. Nonetheless, the system would greatly benefit from additional microfluidic upgrades to enable cell culture within the same device prior to as well as after sorting, on-line quantitative fluorescent detection, and carrier replacement.
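The sorting decision itself reduces to simple threshold logic. The hedged sketch below illustrates the idea: droplets whose fluorescence exceeds a set threshold trigger the sorting electrodes and are steered into the positive channel, while all others follow the default path. The threshold, intensities, and function name are invented for illustration and are not taken from the cited devices.

```python
def route_droplet(intensity, threshold=500.0):
    """Return the target channel for a droplet; 'positive' implies energizing the sorting electrodes."""
    return "positive" if intensity > threshold else "negative"

# Invented YFP intensities (arbitrary units) for a stream of droplets.
intensities = [120.0, 830.5, 45.2, 610.0]
print([route_droplet(i) for i in intensities])   # ['negative', 'positive', 'negative', 'positive']
```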
As an alternative to pressure-driven fluid flow systems, droplets can also be generated and manipulated using digital microfluidics (DMF). In contrast to traditional microfluidics, DMF utilizes alternating currents on an electrode array for moving fluid in the microdevice. Shortly, the liquid is moved on an open-plane device through manipulation of the droplet's surface tension by electrowetting. For a more detailed description, the reader is referred to an excellent recent review by Jebrail et al. [23]. Digital microfluidics provides several advantages over traditional pump-based systems, as it eliminates the need for bulky lab equipment and allows precise control over the droplet movements including droplet fusion and separation. To demonstrate the applicability of such systems for synthetic biology, Husser et al. [24] recently developed the first automated induction microfluidics system (AIMS; see Figure 2A). This integrative approach offered several advantages, such as (i) automation of bacterial cell culture induction and handling, (ii) reducing the risk of cross-contamination, and (iii) simultaneous screening of multiple cultures. The AIMS featured, amongst others, a cell culture mixing chamber, an absorbance measurement spot, as well as incubation areas for multiple samples. In the experimental setup, a mother droplet with cells was dispensed into the culture area, where it was mixed by alternating vertical and horizontal currents. Upon experimental initiation, the mother droplet was moved to the absorbance measurement spot, and the experiment was automatically initiated if the optical density (OD) exceeded a certain threshold. The AIMS featured two operation modes, (1) automated monitoring of fluorescent protein expression using varied inducer concentrations or (2) screening of expressed enzyme activity using fluorescent reaction products. Even though the freedom of fluid manipulation is undisputable, the biggest downside for the system is that the devices need to still be transferred to a plate reader for fluorescent detection and read-out and, therefore, still lacks some vital parameters for full automation of synthetic biology on a single device. Lastly, droplet and digital microfluidics can be combined by adding a DMF manipulation layer to a classic microfluidic droplet-generating channel microstructure, thus creating an integrated and multi-layered droplet-digital microfluidic (I2DM) system (see Figure 2B). This technology, recently published by Ahmadi et al. [25], relies on pressure-driven microfluidic droplet generation with subsequent digital microfluidic on-demand droplet manipulation. First, single cells are encapsulated in droplets using the pressure-driven droplet part of the device and subsequently merged and mixed with a droplet of inducer fluid on the DMF. Secondly, the droplet containing a single cell is transferred to the incubation region of the device using fluid flow, where it is incubated for 24 h to allow protein expression. Finally, the droplet is then analyzed for cell density using absorbance and sorted through an n-array cell sorting channel. Similar to the previously mentioned devices, this microfluidic setup holds great promise for microfluidic analysis of synthetically engineered cells. However, significant work still needs to be conducted to integrate cell cultivation and allow higher throughput and on-line measurement methods. 
Overall, microfluidics proves to be an important tool for synthetic biologists in manipulating cellular systems. The devices offer great advantages such as automated feedback control [18,19], sorting of engineered cells based on protein yield [21,22,25], and on-line detection of cell growth [25,26]. Digital microfluidic devices additionally allow for novel fluid handling operations and process automation. However, despite these many advances and advantages, the technology is still in its infancy. In the future, automated cell cultivation, protein expression, and detection will give rise to an emerging field holding numerous promises.
Microfluidic Devices for Cell-Free Protein Expression
In recent years, the emergence of cell-free synthetic biology has opened opportunities for studying complex cellular activities in vitro in the absence of heterogenous living cells. This powerful technology allows biological networks to be engineered in a more controllable and less complex experimental setup, which allows rapid prototyping of newly designed gene circuits before implementing them in living cells [27,28].
Cell-free protein synthesis (CFPS) systems offer many advantages over cell-based systems, including high protein yield, the generation of soluble and functional proteins without inhibition of regulatory pathways, as well as the possibility of using mRNA fragments directly without any need for cloning [29,30]. Additionally, many proteins are unstable and proteolytically sensitive, which makes a cellular microenvironment a rather harsh environment [31,32]. However, CFPS has been explored for synthetic biology, allowing engineering of biomolecular systems with cell-like behaviors and construction of artificial cell-like structures such as attachment and integration of plasmid-DNA within a hydrogel matrix by chemical manipulation [33][34][35].
Jiao et al. recently developed a clay-based hydrogel system for CFPS by using microfluidic droplet technology to circumvent sophisticated chemical manipulations and to preserve the high protein production of plasmids (see Figure 3A). In this system, electrostatic interactions were involved in both the preparation of the clay hydrogel beads (microgels) and the binding of plasmids to the clay microgels. The microfluidic clay microgel system created compartmentalized microenvironments capable of high-yield and repeated protein syntheses, indicated by a six-fold higher enhanced green fluorescent protein (eGFP) production and a 3.5-fold higher expression rate than traditional solution-phase systems [36]. Dynamics of mRNA and protein synthesis are key parameters that are needed to optimize the performance of gene circuits. Thus, real-time monitoring of transcription-translation (TX-TL) dynamics is crucial to acquire information on newly synthesized mRNAs and proteins before implementing them in living cells or in artificial cells. Wang et al. used a microfluidic PDMS device for generating cell-sized single-emulsion droplets by encapsulating a mammalian CFPS reaction and a locked nucleic acid (LNA) probe to investigate the dynamics of mRNA and protein expression. Microfluidic-generated water-oil droplets provide an effective method to reduce sample volume to the picolitre range, compared to the bulk reaction volume of microliters, and they offer the possibility for investigation and characterization of gene circuits in the context of live and artificial cells [37].
Nevertheless, an open question in cellular communication is the nature of many cellular cascades and how networks of genes interact to form "oscillations" [38][39][40]. The concentrations of mRNAs and proteins increase and decrease rhythmically with a well-defined temporal period in cells. The oscillations of mRNA and protein concentrations are often caused by transcriptional/translational feedback loops, a mechanism that is referred to as a genetic oscillator [41]. These genetic oscillators can be seen, for example, in cell cycles, circadian rhythms, and inflammatory responses [42]. If the activity of one gene in a feedback loop increases, it activates other genes in the circuit that ultimately inhibit it [43,44]. To extend the lifetime of these transcriptional reactions, microfluidic platforms are ideal, since TX-TL components can be replenished, creating an open system wherein the transcription and translation rates are sustained in a steady state. Yelleswarapu et al. were able to characterize a two-component oscillator with an activator-repressor motif that utilized the native transcription machinery of E. coli. The behavior of two individual oscillators as well as the behavior of a coupled network were experimentally investigated in an E. coli-based TX-TL system operating under steady-state conditions in a pneumatically actuated bi-layer microfluidic device [45]. Since cell-free protein approaches are not restricted by physical barriers, biochemical reactions can be controlled by external fields such as light [46], magnetic fields [47], and electrochemical transduction [48]. In principle, electric field (E-field) manipulation could be a more rapid and specific method and can be combined with microelectronics. To study these effects in more detail, Efrat et al. designed a PDMS device equipped with gold electrodes for trapping ribosomes, RNA polymerases, nascent RNA, and proteins in an electric field (see Figure 3B) to induce protein synthesis oscillations by on/off switching of the electric field. The combination of an E-field with compartmentalized cell-free expression created a simple, non-invasive approach for controlling synthetic biological systems with a bioelectronic interface [49]. Apart from that, pulsed electric fields can also be used to deform the interface between an aqueous and an oil phase; it has been demonstrated that droplets containing a cell-free transcription-translation system executing protein synthesis can be generated by such an electric field-driven droplet generator in a timely and programmable manner [50].
Further, the capacity of micro- and nanofabrication in terms of multiplexing and automation combined with CFPS aligns well with the needs of systems biology for high-throughput and fast characterization of cellular functions. Whereas traditional cell-based protein expression requires multiple days of effort, cell-free protein synthesis shortens the expression time, as it only requires mixing template DNA with macromolecules and incubation for approximately 2 h [51,52]. The combination of droplet microfluidics interfaced with electrospray ionization-mass spectrometry (ESI-MS) provides an efficient, label-free, high-throughput screening for pharmaceutical biocatalyst applications such as enzyme library screening (see Figure 4A). Industry especially needs novel analytical methods that are more general, less compound-specific, and faster to develop. In a recent paper, throughput was improved to 3 Hz with a wide range of droplet sizes (10-50 nL), demonstrated by using two different transaminase libraries. Droplet-MS showed a significantly faster rate compared to the liquid chromatography-mass spectrometry (LC-MS) method with a 100% match on hit variants, and it showed the capability to perform transcription-translation inside the droplets followed by direct analysis of the reaction mixture by MS. The success of cell-free synthesis in nanoliter droplets suggested great potential for accelerating the testing of DNA libraries from 3-4 weeks to 24 h with significant cost savings [53]. Nonetheless, commercial application of microdroplet technology is still rare and largely confined to specialized equipment in academic laboratories; although off-the-shelf droplet generators can be purchased from manufacturing companies (e.g., Dolomite, Micronit, Darwin Microfluidics), they remain rather expensive (two- to three-digit € prices per piece) [54]. Commercial challenges, chip manufacturing, and costs can be read about elsewhere [55][56][57].
Advances in microfluidics combined with the integration of cell-free protein synthesis can also be exploited for therapeutic or diagnostic purposes. Since proteins for point-of-care applications require a certain purity, there remains a need to integrate protein synthesis and protein purification on a microfluidic chip in order to obtain the desired recombinant proteins with a simple operation. Xiao et al. integrated two functional units, a protein synthesis unit and a protein purification unit, into a microfluidic chip for production of a recombinant protein (see Figure 4B) [58]. The first channel was filled with template DNA-modified agarose beads to form a cell-free protein synthesis unit, and the second channel was filled with nickel ion-modified agarose beads (Ni-nitrilotriacetic acid (NTA)) as a protein purification unit. The mixed reaction solution passed through the protein purification unit, where the target protein was captured by the Ni-NTA beads. Pure protein was obtained after washing and elution buffers were introduced to remove non-specifically bound material. This device shows the potential to produce single-dose recombinant protein drugs on demand. For the detection of cell-free DNA (cfDNA) in plasma samples of healthy donors and cancer patients, Campos et al. developed a novel microfluidic solid-phase extraction device (µSPE) consisting of a micromachined plastic chip (see Figure 4C) [59]. The chip contained arrays of pillars that were activated with UV/O3 to generate surface-confined -COOH functional groups for the selective extraction of cfDNA. One advantage of this chip was the scalability of the target load by tuning the bed size and/or reducing the pillar size to increase the recovery of cfDNA owing to reduced diffusion distances. This polymer-based device can be fabricated in a single molding process, negating the need for additional supports and keeping the device and assay costs low for quantification of cfDNA in clinical samples.
Microfluidics and Artificial Cells
Synthesis of artificial cells disentangled from their complex environments constitutes one of the most important aspects of bottom-up synthetic biology. Bottom-up approaches strive to construct artificial living systems by using non-living matter as initial building blocks. Functionality is achieved by the reconstitution of functional modules from both natural and artificial origins. Through addition of various components, the desired complexity can be built up in a sequential manner, eventually resulting in a truly synthetic living cell [17]. Although living systems feature a high intrinsic complexity, Yewdall et al. [60] recently defined five common hallmarks shared among all of them: compartmentalization, growth and division, information processing, energy transduction, and adaptability. Synthetic biology is trying to address these hallmarks, and, with cell-sized compartments representing the most basic unit of a synthetic cell, compartmentalization has become an important topic of investigation over the last years. Cell-sized giant unilamellar vesicles (GUVs) in particular have gained increasing interest because of their natural building blocks as well as their broad applicability as microreactors, biosensors, drug delivery systems, and artificial cells [61][62][63]. Unfortunately, the need for precise control over critical aspects such as vesicle size, architecture, compartment number, interconnectivity, and functionalization is not met by standard methods such as electroformation and film hydration. With its ability for precision, high throughput, and controlled fluid handling, as already outlined in the first two sections of this review, microfluidics provides a powerful toolkit for addressing these complex requirements [64,65]. Using droplet microfluidics, Elani et al. [66] were able to generate complex hybrid cellular bionic systems by functionalizing GUVs with functional modules of biological origin. Within the microfluidic device, E. coli and several eukaryotic cell lines could be successfully integrated into vesicles. This modification yielded a functional synergy between the encapsulated cell and the vesicle host. While the external architecture was able to efficiently shield the cell from its toxic surroundings, the cell acted as an organelle-like module by conferring the artificial cell with its cellular biochemistry. The coupling of cellular and non-cellular pathways was demonstrated by devising a three-step biochemical pathway ultimately resulting in a fluorescent read-out. Overall, the PDMS-based microfluidic device enabled formation of artificial cells with high throughput and control over vesicle size, biomolecular content, and cell number. In a follow-up study, Trantidou et al. [67] displayed the potential applicability of these artificial cells as biosensors by incorporating E. coli genetically equipped with a GFP-coupled lldPRD promoter into GUVs to monitor lactate in the external environment of the artificial cell, with a linear measurement range up to 5 mM, in real time. To circumvent problems associated with the longevity and stability of GUVs, Weiss et al. [68] developed a microfluidic device for the generation of droplet-stabilized GUVs (see Figure 5A). This PDMS-based device enabled sequential loading of transmembrane and cytoskeletal proteins via pico-injection technology as well as subsequent removal of the droplet shell, releasing functional self-supporting protocells into an aqueous, and thus physiologically relevant, phase.
Exposed to various substrates, protocells that were functionally equipped with integrins displayed distinct differences in their spreading behaviors, thus validating the proteins' biological functionalities. Upon integration of ATP synthase into the droplet-stabilized GUVs and subsequent exposure to an acidic environment, a total amount of 5 nM ATP could be measured within the released aqueous content of the vesicles. This indicated a functional reconstitution of the enzyme within the stabilized GUVs as well as a correct orientation of at least some of the enzymes within the membrane. Overall, the microfluidic palette was expanded with a powerful tool for the bottom-up assembly of complex synthetic cells, successfully addressing several individual hallmarks simultaneously. In a recent publication, Deshpande et al. [69] (see Figure 5B) presented a novel microfluidic device capable of controllably dividing liposomes with high symmetry and low leakage. Within this device, cell-sized liposomes were generated via octanol-assisted liposome assembly and subsequently flowed against a wedge-shaped splitter, resulting in two liposomes with a size of 6 µm. Octanol-assisted liposome assembly has previously been shown to enable fast maturation times of a few minutes; it also has excellent encapsulation efficiency coupled with the high-throughput production of biologically relevant liposomes in the size range of 5-20 µm [70]. Despite the limitation that this device may not be suitable for multicomponent vesicles, it nonetheless may provide a powerful tool for addressing growth and division cycles of artificial cells. Since not only the generation of synthetic cell-like vesicle models but also their handling is a critical aspect in synthetic biology, Yandrapalli et al. [71] integrated a series of micro-structured posts to create a sophisticated PDMS-based device capable of handling up to 23,000 GUVs at once (see Figure 5C). While adjusting the height of the device enables trapping of differently sized subpopulations, it further tunes the assembly of GUVs within different layers in 3D, enabling artificial cell-to-cell interaction studies based on ligand-binding interactions. In addition, this design allows for a precise and fast solution exchange. With only 2 µL, the complete solution around the vesicles can be exchanged, rendering this design a useful tool when working with samples such as nanoparticles, drugs, or proteins. Overall, this chip can be applied for high-throughput experiments capable of delivering statistically robust data sets. Once again, microfluidics is a powerful tool in bottom-up synthetic biology and in the creation of artificial cell-like constructs; however, the question remains whether living cells encapsulated within artificial shells are truly artificial cells made by bottom-up approaches.
Conclusions and Outlook
Commercial gene synthesis and gene construction approaches have become a highly competitive field, where customer demands, including fulfillment time and accuracy, have steadily driven continuous technology improvement. Presently and going forward, there will be tighter correlation and inter-dependency between the scale and cost of DNA construction and the need for cycles of iteration to accelerate the growing understanding of the underlying complexity and genetic design parameters [72]. One important question to address in synthetic biology is how to increase the predictability of designed artificial systems such as novel gene circuits and enzyme libraries. Answering this question will have wide-reaching consequences for the field but will require a shift in how synthetic biology is carried out in academia. The developments outlined here leave no doubt that microfluidics will increase the scope for complexity in the field of bottom-up synthetic biology; however, it has to be noted that up to now, the generation of an artificial cell satisfying all the hallmarks of life is far from being realized. As shown in Figure 6, publications driven by microfluidics and by synthetic biology have been continuously increasing in recent years, with thousands of papers and reviews, in contrast to microfluidic synthetic biology publications. Synthetic biology-on-a-chip is a very small community, yet it has been constantly growing over the last 10 years. Publication output has reached a plateau phase since 2016, indicating that, aside from droplet generators that have obviously become state-of-the-art to create natural and synthetic vesicles on micro- and nanoscales, the combination of synthetic biology and chips is hard work because of the required tight control over experimental procedures. Hopefully, 2020 is a better year, with more microfluidic devices used not only as droplet machines but also as valuable tools for cell-based synthetic biology and the creation of artificial cells. Research in this field may one day give answers to long-asked questions of fundamental cell biology and life itself.
Figure 6. Publication outputs of synthetic biology, microfluidics, and microfluidic synthetic biology over the last ten years, expressed as the number of total publications (through PubMed searches using the keywords "microfluidic", "synthetic biology", and "microfluidic synthetic biology").
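As a rough illustration of how publication counts like those in Figure 6 can be compiled, the sketch below queries the public NCBI E-utilities "esearch" endpoint for yearly PubMed hit counts per keyword. The endpoint and its parameters are standard, but the keyword list, year range, and date handling are illustrative assumptions rather than a reproduction of the exact search behind the figure.

```python
import requests

# Minimal sketch: yearly PubMed hit counts for a keyword, in the spirit of the
# publication-count comparison shown in Figure 6. The E-utilities "esearch"
# endpoint is public; the keywords and years below are illustrative assumptions.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def yearly_counts(term, years):
    counts = {}
    for year in years:
        params = {
            "db": "pubmed",
            "term": term,
            "datetype": "pdat",   # filter by publication date
            "mindate": str(year),
            "maxdate": str(year),
            "rettype": "count",
            "retmode": "json",
        }
        reply = requests.get(ESEARCH, params=params, timeout=30).json()
        counts[year] = int(reply["esearchresult"]["count"])
    return counts

if __name__ == "__main__":
    for keyword in ("microfluidic", "synthetic biology", "microfluidic synthetic biology"):
        print(keyword, yearly_counts(keyword, range(2009, 2020)))
```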
Tourism and rural development: policy analysis and lessons from the LEADER method
Goal and objectives of the dissertation
Goal
The main objective of this PhD thesis is to provide a sectoral perspective of the implementation of the EU Rural Development Programs (RDP) and to have a better understanding of the relationship between tourism and rural development through the implementation of the LEADER method.
Objectives
This thesis brings an operational, tactical and sectoral assessment of the implementation of EU RDP. In order to meet the main goal, the following objectives were set:
Methodology
This thesis used an integrated approach in which quantitative methods (statistical analysis of the socio-economic impacts resulting from the implementation of EU RDP) are combined with qualitative methodologies (interviews and questionnaires addressed to rural development managers, entrepreneurs, public sector representatives, tourism specialists and scholars). In relation to the methodological issues of the empirical focus of the doctoral thesis, the study was structured into three parts. In the first part, in order to assess the efficiency of investments in the tourism sector through initiatives and programs based on the LEADER approach, a categorisation of over 4,200 tourism projects from the 1990s to the present was carried out. The information was transcribed and managed in Microsoft Excel, and the maps included were developed using QGIS software.
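A minimal sketch of the categorisation and diachronic-reading step is given below. It assumes a flat export of the project database with hypothetical column names (description, investment_eur, programming_period) and a keyword-based coding scheme invented for illustration; the thesis' actual coding categories and spreadsheet structure are not reproduced here.

```python
import pandas as pd

# Hypothetical export of the project database: one row per funded project.
projects = pd.read_csv("tourism_projects.csv")

# Illustrative keyword-based coding scheme (not the thesis' actual categories).
categories = {
    "accommodation": ["hotel", "rural house", "hostel", "camping"],
    "restaurants":   ["restaurant", "gastronomy"],
    "activities":    ["route", "trail", "museum", "adventure"],
    "promotion":     ["marketing", "fair", "website", "branding"],
}

def classify(description):
    text = str(description).lower()
    for category, keywords in categories.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "other"

projects["category"] = projects["description"].apply(classify)

# Diachronic reading: total investment per programming period and category.
summary = (projects
           .groupby(["programming_period", "category"])["investment_eur"]
           .sum()
           .unstack(fill_value=0))
print(summary)
```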
In a second part, a structured survey with open-ended questions was chosen as the method to explore the challenges regarding pull factors of destinations located in rural areas. It addressed quantitative and qualitative pull factors in relation to improving the competitiveness of rural tourism destinations. The survey was sent to key stakeholders (N=118): a) researchers; b) entrepreneurs and representatives of rural tourism associations; c) rural development managers and d) public administration tourism staff.
Finally, in the third part, a qualitative approach was also used. Semi-structured interviews (N=9) were conducted to understand the key issues in rural development governance and its influence in the development of tourism activities. For this purpose, managers and Local Action Group (LAG) staff were interviewed. Processing of the information collected was done using qualitative analysis software Atlas.ti.
Results
The categorisation and study of tourism projects has made it possible to understand the role of tourism in the distribution of the investment made through the EU RDP. The oversupply of accommodation is one of the main dysfunctions observed, and the design and structuring of destination products is an aspect that also needs to be reviewed. A reading of the tourism projects developed over the last 25 years calls the application of the LEADER method into question: there are few tourism packages developed at the local level that promote public-private partnerships and articulate resources of a similar type at the supramunicipal level.
Similarly, the participation of the population and the integration of agricultural activities into the configuration of destination products should be strengthened (Esparcia, 2000). The shift in policy orientation has left many farmers unable to adapt to the new scenario. Very few venture to design, plan and market tourism products linked to their agricultural, livestock or agri-food activities. The lack of integration of tourism with agricultural activities in the cases analysed reveals the gap between rural development and agriculture (Hernández, 2008).
Theoretical conclusions
Regarding the theoretical conclusions, the study carried out on the evolution of the Common Agricultural Policy (CAP) has given rise to a greater understanding of the inclusion of tourism as a driving force for rural development (Barke, 2004; Haven-Tang & Jones, 2012). Similarly, this analysis has helped to explain the proposed EU RDP methodologies and also the synergies that could be derived from applying the LEADER method (Chevalier & Dedeire, 2014).
The fact that the processes related to the design, management and forms of organisation of EU Regulations and Guidelines for rural development can be transferred to the academic realm also represents a remarkable contribution. Given the orientation of the doctoral thesis, an in-depth knowledge of the assessment processes is particularly important. The conclusions obtained as a result of the analysis of the methods for evaluating tourism have opened new lines of research in the field of study.
Practical application of the dissertation
The diachronic study of the evolution of the CAP has allowed a better understanding of the conflicts between farmers and LAG stakeholders. This circumstance confirms the need for a greater cohesion of policies aimed at the diversification of agriculture.
The findings also highlight that most of the weaknesses and opportunities identified in rural destinations are linked to governance. Public-private partnership, coordination on both horizontal and vertical level, participation of the population and the integration of agricultural activities should be strengthened. In this respect, it is appropriate to emphasise the importance of the rural development organizations established within the LEADER approach. DMOs should steer towards similar structures to those proposed by the EU via the LEADER. A tool for management, planning and stimulus such as the LAG, at least in theory, precisely addresses the weaknesses identified in this doctoral thesis. In practice, these structures lack legitimacy and have insufficient rallying power for the public authorities to relegate leadership in the intervention of tourism policy (Pulido & Cardenas, 2011).
Abstract of Chapter 1
The first chapter is the introduction of the doctoral thesis. The justification of the research and the relevance of the project are discussed.
The research framework, objectives, methods and case study are then presented.
Abstract of Chapter 2
In order to guide the theoretical foundations of the doctoral thesis, Chapter 2 examines the origins and evolution of agricultural policies that have favoured the development of tourism activities in rural areas. Similarly, a diachronic study of the CAP was carried out and the associated decoupling of agricultural subsidies in favour of new approaches to rural development.
Abstract of Chapter 3
Chapter 3 analyses the methodologies proposed for evaluating EU RDP and tourism projects. The literature review reveals that the mandatory evaluation systems used pay insufficient attention to analysing the impact of these programmes on the territory and few studies have assessed the impact of tourism generated by RDP.
Abstract of Chapter 4
The theoretical framework was complemented with a holistic analysis of the pull and push factors affecting the competitiveness of rural tourism destinations. This analysis has also been useful to clarify the functionality of the RDP and their influence on the different elements that make up the internal and external tourist systems.
Abstract of Chapter 5
In this chapter, tourism projects through EU RDP from LEADER I (1991-1994) to RDP 2007-2013 were analysed. From a pragmatic perspective, a diachronic reading of the volume of investment in tourism and a classification of tourism projects developed were carried out. These data were applied to the case study of Valencia and Castilla-La Mancha (Spain).
Abstract of Chapter 6
In Chapter 6, a more detailed description of the factors influencing the location of tourist activities in rural areas was conducted. More specifically, this was done from the perspective of supply and with an exploratory purpose, so weaknesses and areas for improvement in the key elements were identified in rural destinations by means of a tourism-related stakeholders survey also applied to the case study of Valencia and Castilla-La Mancha (Spain).
Abstract of Chapter 7
Chapter 7 focused on clarifying the causes for dysfunctions in LAG management and the implementation of the LEADER method. Tourism planning processes, the degree of decentralisation and participation in decision-making, the criteria established for project eligibility, social responsibility, transparency or communication were some of the most important aspects analysed in this chapter.
Abstract of Chapter 8
In Chapter 8, several examples of good practices applied to tourism projects were presented by way of preliminary findings. Paradigmatic examples of structuring, design and implementation of the LEADER method were described in order to foster a demonstration effect.
Abstract of Chapter 9
This chapter reviews the key findings of the doctoral thesis. The main theoretical, practical and methodological contributions to the research field are also discussed in this chapter. Likewise, the limitations encountered in the development of this thesis are presented, as well as some guidelines for future lines of research.
References
Barke, M. (2004). Rural tourism in Spain. International Journal of Tourism Research, 6(3), 137-149.
Chevalier, P., & Dedeire, M. (2014). Application du programme LEADER selon les principes de base du développement local. Économie rurale.
Esparcia, J. (2000). The LEADER programme and the rise of rural development in Spain. Sociologia Ruralis, 40(2), 200-207.
Haven-Tang, C., & Jones, E. (2012). Local leadership for rural tourism development: A case study of Adventa, Monmouthshire, UK. Tourism Management Perspectives, 4, 28-35.
Hernández, M. (2008). Balance de las políticas de desarrollo rural en la Comunidad Valenciana (1991-2006). Investigaciones Geográficas, 45, 93-119.
Pulido, J. I., & Cardenas, P. J. (2011). El turismo rural en España: orientaciones estratégicas para una tipología aún en desarrollo. Boletín de la Asociación de Geógrafos Españoles, (56), 155-17.
The Impacts of Complex Social, Environmental, and Behavioral Factors on Obesity
Obesity is a prominent global concern, which is correlated with several chronic diseases and associated mortalities. Social determinants and environmental factors play an important role in the adoption of certain behaviors that cause obesity and related health issues. This makes obesity a complex public health issue dependent on several physiological, pathobiological, and psychological phenomena. Here we aimed to review the complex interrelationship between the social determinants, behavioral factors, and obesity. The literature search was carried out in PubMed, Web of Science, and Embase databases using keywords of “obesity” and/or “multimorbidity” and/or “chronic diseases” along with “social factors”, “social determinants”, “social determinants of health”, “cultural factors”, and “Environmental factors”. We found the importance of school-based programs in prevention of obesity through behavioral modification. Educational programs and incentives and their impact on obesity and diabetes at the community level were demonstrated. Social factors and health behaviors significantly predicted body mass index (BMI) with gender-specific variations. Furthermore, psychological, emotional, and social experiences of the individuals with obesity had a drastic effect on their mental and physical health. It is apparent that the social factors influence the relations between BMI and weight-related behaviors and outcomes. To understand the mechanisms behind obesity, both quantitative and qualitative methods should be applied in order to examine the overt as well as cognitive aspects of the complex relationships described here.
Introduction
Obesity has reached epidemic levels in developed as well as developing countries and is known to have a significant impact on both physical and emotional health [4,5]. High sugar intake, increased portion size, insufficient physical activity, and more screen time have contributed to increased obesity [6]. Social determinants and environmental factors could play an important role in the adoption of certain behaviors leading to obesity and related health issues [7,8]. This review aimed to explore the complex interrelationships between the social environment, social determinants, and behavioral factors and their role in the development of obesity and its adverse health outcomes. Furthermore, we described a few examples of different public health interventions and their impact on behavioral modifications.
Materials and Methods
The literature search was carried out using keywords and Medical Subject Headings (MeSH) terms in the English language. The search was executed on the PubMed, Web of Science, and Embase databases. The terms used were "obesity" and/or "multimorbidity" and/or "chronic diseases" along with "social factors", "social determinants", "social determinants of health", "cultural factors", and "environmental factors". Phrases such as "obesity and social determinants of health" were also searched. Boolean operators (AND, OR) were used to make different combinations of search terms. An attempt was made to select these keywords in accordance with the population, exposure, outcome (PEO) framework, but most of the searches were based only on the exposure (e.g., social determinants, environment) and the outcome of interest. The articles concerned with the impacts of social and environmental factors on obesogenic behavior (such as types and availability of diet, physical activity opportunities, and socioeconomic status) were also reviewed. The articles encompassing clinical and pathobiological aspects of obesity were excluded from the literature review.
The screening of articles was carried out by reviewing their abstracts. Attention was given to the background, objectives, and methods of the abstracts before moving forward to review the complete article. Article titles representing pharmaceutical, biochemical, pathobiological, and occasionally clinical aspects of the issue were not considered in the literature review. The articles with titles containing "national survey", "cross-sectional", "population-based", "social factors", "environment", and "cultural factors" were considered for further review. Reference lists of the selected articles were also examined to identify relevant papers. The primary search resulted in 748,210 items, which were reduced to 44 articles after applying different combinations of keywords, phrases, and filters. Different types of research designs (e.g., exploratory, descriptive, explanatory) were considered for review.
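A minimal sketch of how the Boolean keyword combinations described above could be assembled is shown below. The grouping into outcome and exposure blocks follows the PEO idea, but the exact query strings, filters, and database syntax used by the authors are not reproduced; the output is simply a set of candidate queries that could be pasted into PubMed, Web of Science, or Embase.

```python
from itertools import product

# Illustrative outcome and exposure blocks drawn from the keywords listed above.
outcome_terms  = ["obesity", "multimorbidity", "chronic diseases"]
exposure_terms = ["social factors", "social determinants",
                  "social determinants of health", "cultural factors",
                  "environmental factors"]

# Pairwise AND combinations of one outcome term with one exposure term.
queries = [f'("{outcome}") AND ("{exposure}")'
           for outcome, exposure in product(outcome_terms, exposure_terms)]

# A broader OR-combined query, useful as a starting point before filtering.
queries.append("(" + " OR ".join(f'"{t}"' for t in outcome_terms) + ") AND ("
               + " OR ".join(f'"{t}"' for t in exposure_terms) + ")")

for query in queries:
    print(query)
```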
School-Based Interventions
The article by Veugelers and Fitzgerald evaluated the effectiveness of school-based programs in the prevention of obesity among children [10]. This is important as obesity and certain health behaviors during childhood could lead to adulthood obesity and related complications. The authors compared the effectiveness of school-based programs targeting obesity and overweight, diet quality, and physical activity and compared the results with the conditions without such programs. The aim of the authors in this study was to examine the relationship between school-based programs and their role in obesity prevention. The primary objectives were to assess the diet, physical activity, and obesity among schools with 1) general nutritional programs, 2) a coordinated program known as "Annapolis Valley Health Promoting Schools" (AVHPSP) that targeted eating practices and physical activity, and 3) no nutritional programs. The authors evaluated these programs and studied their effectiveness by assessing behavioral outcomes. The results of this study showed that the students attending schools with the coordinated program (AVHPSP) had better rates in terms of overweight and obesity, diet quality, fruit and vegetable consumption, fat consumption, physical activity, and sedentary activities. Statistically significant favorable results were obtained for the students in the areas of overweight and obesity, diet quality, and fruit and vegetable consumption in AVHPSP-implemented schools compared to the students of schools without a nutritional program. The results between schools with and without nutritional programs were not much different, except that there were slightly lower rates of overweight and obesity among the students who attended schools with a nutritional program, though this difference was not statistically significant. This study described how interventions can play an important role in younger populations. A study based on the European School Fruit and Vegetable Scheme, with a sample size of 702 and an age range of 7-10 years (third-fourth grades), showed that parental modeling and peer influence had a significant positive impact on fruit and vegetable intake while verbal directives had a negative impact [11]. Furthermore, preference and knowledge about different types of fruits and vegetables had a significant positive impact on their intake.
Socio-economic, Behavioral, and Environmental Influences
Ludwig et al. evaluated the effect of a social experiment using incentives on obesity and diabetes among the participants [12]. The authors implemented this study based on the theory that neighborhood situations such as poverty and racial disparities may increase the risk of obesity and diabetes among deprived individuals. Using an experimental approach, they attempted to explain the changes in circumstances and their role in selected outcomes. The participants from high-poverty areas were randomly assigned to three groups. One group received counseling and low-poverty vouchers to support relocation to a census area with a low poverty rate. The second group was given a standard voucher without any counseling. The third or control group received no intervention or extra incentives. The authors found that after a long-term follow-up, the intervention group who received low-poverty vouchers had a lower prevalence of obesity and lower glycated hemoglobin levels (diabetes) compared to the controls. No significant differences were identified between the standard-voucher group and the controls. The individuals who spent more time in the low-poverty area had positive changes in their diabetes and obesity outcomes when the results were examined using a dose-response model. Such a result was also supported by an Iranian population-based cross-sectional study in which higher socioeconomic status was associated with a lower rate of obesity [13]. Furthermore, as evident from the results of two different longitudinal and cohort studies, socioeconomic disadvantages and deprived neighborhoods increased the risk of multimorbidity (two or more chronic health problems) and higher body mass index (BMI) among participants, respectively [14,15]. In a systematic review, deprived neighborhoods were found to have limited access to supermarkets. Moreover, access to takeaway outlets had a relationship with increased body weight while the opposite was seen in areas with better access to supermarkets [8]. In the latter situation (good access to supermarkets), the low obesity rates could be attributed to the access to fresh fruits, vegetables, and healthier food options usually available at these supermarkets.
Ball et al. in their study aimed to examine the role of health behaviors in explaining the relationships between social factors and obesity [16]. The authors used a sample of 8667 adults who participated in the 1995 Australian National Health and Nutrition Survey, which collected data pertaining to health factors, including objectively measured height and weight, health behaviors, and social factors such as family status, employment status, housing situation, and migration status. The authors selected behaviors such as physical activity, alcohol use, diet, and weight-control efforts, and examined their impacts on social-group differences in obesity. In this study, a model of these components (i.e., social factors based on employment, housing, migration, and family unit, behaviors, and BMI) was tested. The authors used a non-experimental, descriptive study design for which the data had previously been collected through a cross-sectional survey. The authors found that in the adjusted analytical model, the social factors significantly predicted BMI with gender-specific variations. Furthermore, behavioral and social factors interacted with each other, depending on gender, to predict BMI. The men living in the lower levels of housing (rental properties or houses with a fewer number of bedrooms) and family status (familial marriages) had higher BMI. The women with part-time jobs and lower occupation status (unemployed or receiving a pension or benefits) had higher BMI compared with the women with full-time jobs or high-level occupations (including managerial or professional positions).
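To make the idea of an adjusted model with gender-specific effects more concrete, the sketch below fits an ordinary least squares regression of BMI on social factors, behaviors, and their interactions with sex using the statsmodels formula interface. The variable names, data file, and model formula are hypothetical placeholders, not the specification actually used by Ball et al.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey extract with BMI, social factors, and health behaviors.
survey = pd.read_csv("national_health_survey.csv")

# Illustrative adjusted model: the C(sex)*(...) interaction terms let the effect
# of each social factor on BMI differ between men and women, mirroring the
# gender-specific variations reported in the study.
model = smf.ols(
    "bmi ~ C(sex) * (C(employment) + C(housing) + C(family_status) + C(migration))"
    " + physical_activity + alcohol_use + diet_score + age",
    data=survey,
).fit()

print(model.summary())
```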
Psychological and Social Influences (Qualitative)
The study by Rand et al. explored the psychological, emotional, and social experiences of individuals with obesity [17]. To understand these experiences, the authors used a qualitative approach in which four levels of the social-ecological model (SEM) derived from the mental well-being of obese individuals were employed. At the individual level, they examined food as a coping mechanism and source of emotional distress. At the interpersonal level, participants experienced blame and shame by family and friends regarding their body weight, plus a lack of support from health professionals. At the organizational level, the participants experienced insufficient mental health support in obesity management programs. Finally, at the community level, obese individuals faced negative mental well-being impacts of the social stigma of obesity. The authors used a methodology that included interactions between the researchers and participants, implemented qualitative semi-structured interviews, and used transcripts from 19 obese participants and 16 health professionals. Furthermore, two frameworks were applied to collect relevant data for this study (i.e., the WHO domains of mental well-being and the SEM levels, which were aligned according to the identified mental well-being themes). The results showed that at the "individual level", about half of the respondents considered food as a coping mechanism and a source of emotional distress having a negative influence on mental well-being. At the "interpersonal level", the expression of blame and shame by family, friends, and healthcare providers had a detrimental impact on psychological and mental health as well as on losing body weight. At the "organization level", the participants felt the need for judgment-free support programs to address their psychological issues pertaining to obesity. At the "community level", the individuals reported experiencing the stigma of obesity in verbal and non-verbal manners.
Summary
Social, cultural, environmental, and behavioral factors are interconnected in influencing obesity and its underlying processes. Along with a general or occult comprehension of obesity and related outcomes, understanding the psychological aspects of why society or individuals adopt certain characteristics such as obesity is equally crucial. Furthermore, examination of the psychological effects of obesity could be useful in formulating preventive programs. It is important to understand individuals' perceptions of what constitutes a healthy diet and their impression of what the health status is among children, adolescents, and adults. This could be coupled with different surveillance approaches (e.g., ongoing school-based programs to monitor body weight, physical activity, and nutritional habits, workplace wellness programs, and hospital-based registries) to further explore obesity and its risk factors. Studies have indicated that sedentary lifestyles, especially long sitting hours at work, sleep hygiene, and environmental factors such as air and weather pollution act under the concept of the epidemiological triad (host, agent, and environment) to influence obesity (Figure 1) [18,19]. Sleep duration has been known to be associated with weight-related changes; for example, a review study indicated that sleeping less could be associated with a higher risk of obesity among children and young adults. The lack of sleep can be due to host-related as well as environmental factors [20].
Conclusion
The present study reviewed the impacts of some of the social, environmental, and economic factors on obesity and related items. It is apparent that the social factors influence the relation between BMI and weight-related behaviors and outcomes. To understand the mechanisms behind obesity, both quantitative and qualitative methods are essential to examine the overt as well as cognitive aspects of the complex relationships between obesity and its risk factors. While health promotion activities targeting diet, physical activity, and general awareness are important, interventions incorporating psychological support are also critical. In addition, interventions targeting younger populations and incorporating multidimensional approaches seem to have a better effect on positive and long-term behavioral changes.
Ethical Approval
Not applicable.
Competing Interests
Authors declare that they have no competing interests.
Figure 1. Epidemiological Triad Example of Obesity and Related Factors.
Evaluation of the Risk of Clostridium difficile Infection Using a Serum Bile Acid Profile
Since intestinal secondary bile acids (BAs) prevent Clostridium difficile infection (CDI), the serum BA profile may be a convenient biomarker for CDI susceptibility in human subjects. To verify this hypothesis, we investigated blood samples from 71 patients of the Division of Gastroenterology and Hepatology at the time of admission (prior to antibiotic use and CDI onset). Twelve patients developed CDI during hospitalization, and the other 59 patients did not. The serum unconjugated deoxycholic acid (DCA)/[DCA + unconjugated cholic acid (CA)] ratio on admission was significantly lower in patients who developed CDI than in patients who did not develop CDI (p < 0.01) and in 46 healthy controls (p < 0.0001). Another unconjugated secondary BA ratio, 3β-hydroxy (3βOH)-BAs/(3βOH + 3αOH-BAs), was also significantly lower in patients who developed CDI than in healthy controls (p < 0.05) but was not significantly different between patients who developed and patients who did not develop CDI. A receiver operating characteristic (ROC) curve determined a cut-off point of DCA/(DCA + CA) < 0.349 that optimally discriminated on admission the high-risk patients who would develop CDI (sensitivity 91.7% and specificity 64.4%). In conclusion, a decreased serum DCA/(DCA + CA) ratio on admission strongly correlated with CDI onset during hospitalization in patients with gastrointestinal and hepatobiliary diseases. Serum BA composition could be a helpful biomarker for predicting susceptibility to CDI.
Introduction
Clostridium difficile infection (CDI) is one of the most common hospital and antibioticassociated infections [1]. CDI causes various clinical symptoms, including diarrhea with colitis, abdominal pain, and fever [2]. The prevalence of CDI has been increasing worldwide, and refractory or life-threatening severe CDI is also reported in Western countries [3]. Therefore, in addition to the prevention and treatment of CDI, it would be helpful to develop a method for screening high-risk hospitalized patients.
Bile acids (BAs) are the end products of cholesterol metabolism and perform many chemical, physiological, and pathophysiological functions. BAs are synthesized in the liver, and cholic acid (CA) and chenodeoxycholic acid (CDCA) conjugated with glycine or taurine are secreted into bile as primary BAs (Figure 1). In the intestine, microbial bile salt hydrolase deconjugates amino acids to form free CA and CDCA, and then BA 7α-dehydroxylating bacteria convert CA and CDCA to the secondary BAs deoxycholic acid (DCA) and lithocholic acid (LCA), respectively. Many reports show the relationship between BAs and CDI. For example, the secondary BAs DCA and LCA inhibit Clostridium difficile growth in vitro [4,5] and in vivo [6][7][8][9], and secondary BAs in stool are reduced in patients with CDI [10,11].
Clostridium scindens, one of the BA 7α-dehydroxylating intestinal bacteria, converts the primary BAs to secondary BAs, and is positively correlated with the resistance to CDI [12,13]. Dehydroxylation at the 7α-position is encoded by multi-step bile acid-inducible (bai) genes in a single bai operon [14,15]. Fecal baiCD gene abundance represents the amount of BA 7α-dehydroxylating bacteria and was significantly higher in CDI-negative stools than in CDI samples [16]. Furthermore, these 7α-dehydroxylating gut bacteria synthesize not only secondary BAs but also tryptophan-derived antibiotics and inhibit the growth of Clostridium difficile [17]. These results suggest that we may predict the risk of CDI by BA analysis of hospitalized patients.
Figure 1. Metabolism of primary bile acids (BAs) conjugated with amino acid (glycine or taurine). Microbial bile salt hydrolase deconjugates the amino acid to form free cholic acid (CA) and chenodeoxycholic acid (CDCA). The free (deconjugated) CA and CDCA are then metabolized to deoxycholic acid (DCA) and lithocholic acid (LCA), respectively, by multi-step 7α-dehydroxylation. Hydroxyl groups at the 3α, 7α, and 12α positions can be converted to carbonyl groups by 3α-, 7α-, and 12α-hydroxysteroid dehydrogenases, respectively. In addition, the carbonyl groups at the 3, 7, and 12 positions can be converted to hydroxyl groups at the 3β, 7β, and 12β positions by the reverse reactions of 3β-, 7β-, and 12β-hydroxysteroid dehydrogenases, respectively. G/T-, glycine or taurine conjugated.
Fecal BA composition is a potential biomarker for the prediction of CDI. Allegretti et al. [10] reported that the ratio of unconjugated fecal DCA to glycoursodeoxycholic acid (GUDCA) was a predictor of CDI recurrence. However, we recently showed that the unconjugated fecal DCA/(DCA + CA) ratio was the best predictor of fecal proportion of Clostridium subcluster XIVa that includes Clostridium scindens [18]. In addition, the unconjugated serum DCA/(DCA + CA) ratio is also a possible marker for the fecal C. subcluster XIVa fraction.
In hospitals, serum is easier to obtain from patients than stool. Therefore, we tried to predict the risk of CDI in patients with gastrointestinal or hepatobiliary diseases by analyzing serum BA composition. If the results confirm our hypothesis, the DCA/(DCA + CA) ratio could be an aid in predicting susceptibility to CDI.
Baseline Characteristics of the Patients Enrolled in This Study
We enrolled 71 patients who had been admitted to the Gastroenterology and Hepatology division due to high inflammatory responses in blood tests. Twelve patients developed CDI during hospitalization and the other 59 patients did not. The baseline characteristics of the patients are shown in Table 1. Of the 71 patients, 34 had hepato-biliary-pancreatic diseases, 21 had gastrointestinal diseases other than inflammatory bowel diseases (IBD), 5 had IBD, and the other 11 had pneumonia or pyelonephritis. None had taken antibiotics on admission. After admission to the hospital, 66 out of 71 patients were administered intravenous antibiotics. In addition, about half of the enrolled patients had a history of regular use of proton pump inhibitors (PPIs).
Serum BA Composition in CDI Patients
A total of 10 conjugated and 20 unconjugated (free) BAs were quantified and compared among the different groups. As shown in Table 2, CA, 3-dehydro-CA, and glycochenodeoxycholic acid (GCDCA) proportions in patients with CDI were significantly higher than in those without CDI. Compared to healthy controls, the proportion of total unconjugated BAs decreased significantly and total glycine conjugated BAs increased significantly in patients with CDI. However, we observed the same tendencies in patients without CDI.
Serum BA Transformation Markers in CDI Patients
To estimate the effects of BA transformation, we calculated the product/(product + substrate) ratio for the specific reactions that may be related to the inhibition of Clostridium difficile growth (Figure 1). We calculated BA deconjugation by free/total primary BAs, 7α-dehydroxylation of BAs by DCA/(DCA + CA) or LCA/(LCA + CDCA), and the epimerization of 3α-hydroxy-BAs (3αOH-BAs) to 3β-hydroxy-BAs (3βOH-BAs) by 3βOH-BAs/(3βOH + 3αOH-BAs). As shown in Figure 2, DCA/(DCA + CA) on admission (prior to antibiotic use and CDI onset) was significantly lower in patients who developed CDI during hospitalization than in patients who did not develop CDI (p < 0.01) and healthy controls (p < 0.0001). Serum DCA levels were not necessarily decreased in patients with CDI (Table 2) because they are affected by the total amount of BAs in the colon and the rate of BA absorption from the colon. However, the product/(product + substrate) ratio represents the conversion rate from CA to DCA, which is not easily affected by conditions other than enzyme activity. Although 3βOH-BAs/(3βOH + 3αOH-BAs) in patients with CDI was not significantly different from that in patients without CDI, it was significantly lower than that in healthy controls (p < 0.05). Free/total primary BAs and LCA/(LCA + CDCA) were not significantly different among the groups.
Figure 2. Serum BA markers in patients who were admitted to the Gastroenterology and Hepatology division due to high inflammatory responses in blood tests. Serum samples were obtained on admission (prior to antibiotic use and CDI onset). CDI (+), patients who developed CDI during hospitalization (n = 12); CDI (−), patients who did not develop CDI (n = 59); Controls, healthy controls (n = 46). Each column and error bar represents the mean and SEM. According to the Tukey-Kramer test, * p < 0.05, ** p < 0.01, and *** p < 0.0001 were significantly different. ns, not significant.
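As a rough illustration of how these transformation markers can be derived from a measured unconjugated serum BA profile, the short sketch below computes the four product/(product + substrate) ratios described above; the example concentrations are hypothetical and the function names are ours, not part of the original analysis pipeline.

```python
# Minimal sketch (assumed example values, in nmol/L) of the serum BA
# transformation markers used in this study. Each marker is a
# product/(product + substrate) ratio computed from unconjugated BAs only.

def ratio(product, substrate):
    """Return product/(product + substrate), or None if both are zero."""
    total = product + substrate
    return product / total if total > 0 else None

# Hypothetical unconjugated (free) serum concentrations for one patient.
ba = {
    "CA": 120.0, "DCA": 45.0,                # 7alpha-dehydroxylation of CA
    "CDCA": 150.0, "LCA": 8.0,               # 7alpha-dehydroxylation of CDCA
    "free_primary": 278.0,                   # free CA + CDCA
    "total_primary": 1450.0,                 # conjugated + unconjugated primary BAs
    "BAs_3beta": 30.0, "BAs_3alpha": 400.0,  # 3beta- vs 3alpha-hydroxy BAs
}

markers = {
    "DCA/(DCA + CA)": ratio(ba["DCA"], ba["CA"]),
    "LCA/(LCA + CDCA)": ratio(ba["LCA"], ba["CDCA"]),
    "free/total primary BAs": ba["free_primary"] / ba["total_primary"],
    "3bOH/(3bOH + 3aOH) BAs": ratio(ba["BAs_3beta"], ba["BAs_3alpha"]),
}

for name, value in markers.items():
    print(f"{name}: {value:.3f}")
```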
Comparison of Serum BA Markers Among the Underlying Diseases
On admission, the BA ratios were compared among the three patient groups: hepato-biliary-pancreatic diseases, gastrointestinal diseases (except for IBD), and IBD (Figure 3). The free/total primary BAs ratio was significantly lower in hepato-biliary-pancreatic diseases than in gastrointestinal diseases (p < 0.05). The DCA/(DCA + CA) ratio was significantly lower in IBD than in gastrointestinal diseases (p < 0.01). The LCA/(LCA + CDCA) and 3βOH-BAs/(3βOH + 3αOH-BAs) ratios were not significantly different among the groups.
Figure 3. Comparison of serum BA markers among patients with different underlying diseases. Serum samples were obtained on admission (prior to antibiotic use and CDI onset). Biliary, patients with hepato-biliary-pancreatic diseases (n = 34); Intestinal, patients with gastrointestinal diseases (n = 21); IBD, patients with inflammatory bowel diseases (n = 5). Each column and error bar represents the mean and SEM. According to the Tukey-Kramer test, * p < 0.05 and ** p < 0.01 were significantly different; ns, not significant.
Effects of Treatment with Antibiotics on Serum BA Markers
After hospitalization, 61 out of 71 patients were administered intravenous antibiotics. Six patients developed CDI during or after treatment, and the other 55 did not. Serum BA markers in the paired sera obtained before and after antibiotics were analyzed. As shown in Figure 4, the DCA/(DCA + CA) and 3βOH-BAs/(3βOH + 3αOH-BAs) ratios were significantly decreased by treatment with antibiotics (p < 0.0001). However, the free/total primary BAs and LCA/(LCA + CDCA) ratios did not change significantly after using antibiotics.
Figure 4. Effects of treatment with antibiotics on serum BA markers. Serum samples were obtained on admission (prior to antibiotic use and CDI onset) and after intravenous administration of antibiotics (n = 61). Before, before using antibiotics; After, after using antibiotics. The mean value for each group is indicated by the columns. According to a paired t-test, * p < 0.0001 was significantly different; ns, not significant.
Effects of the Use of PPIs on Serum BA Markers
There were no significant differences in free/total primary BAs or the DCA/(DCA + CA), LCA/(LCA + CDCA), or 3βOH-BAs/(3βOH + 3αOH-BAs) ratios between patients who did and did not take PPIs (including a new potassium-competitive acid blocker, vonoprazan) (Figure 5).
Figure 5. Effects of proton pump inhibitor use on serum BA markers. Serum samples were obtained on admission (prior to antibiotic use and CDI onset). PPI (+), patients using proton pump inhibitors (n = 33); PPI (−), patients not using proton pump inhibitors (n = 36). Each column and error bar represents the mean and SEM. Statistical significance was tested by the Mann-Whitney test; ns, not significant.
The Receiver Operating Characteristic (ROC) Analyses for the Prediction of CDI Development by Serum BA Markers
We calculated the sensitivity and specificity of each BA marker to predict CDI development using 71 patients by ROC analyses (Figure 6). The areas under the curve (AUC) and the 95% confidence intervals of free/total primary BAs, DCA/(DCA + CA), LCA/(LCA + CDCA), and 3βOH-BAs/(3βOH + 3αOH-BAs) ratios were 0.6045 (0.4648-0.7443) (NS), 0.7571 (0.6217-0.8924) (p < 0.01), 0.5240 (0.3333-0.7147) (NS), and 0.6681 (0.5248-0.8113) (p = 0.068), respectively. Since DCA/(DCA + CA) had the largest AUC, this ratio appears to be the optimal biomarker for the prediction of CDI. The cut-off value of DCA/(DCA + CA) was <0.349 for discriminating the high-risk patients with CDI on admission (prior to antibiotic use and CDI onset). At this value, the sensitivity was 91.67%, the specificity was 66.10%, and the likelihood ratio was 2.704.
Figure 6. ROC analyses for the prediction of CDI development by serum BA markers. Sensitivity = true positive number/(true positive number + false negative number); specificity = true negative number/(true negative number + false positive number). The minimum distance from the upper left corner (0, 1) was considered the optimal cut-off value. The cut-off value of DCA/(DCA + CA) was <0.349 for discriminating the high-risk patients with CDI before treatment with antibiotics. At this value, the sensitivity was 91.67%, the specificity was 66.10%, and the likelihood ratio was 2.704.
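The minimum-distance-to-(0, 1) rule used to select this cut-off can be reproduced conceptually with a few lines of code. The sketch below is a generic illustration applied to a candidate marker where low values predict CDI; the outcome labels and marker values are synthetic, not the study data.

```python
import numpy as np

def roc_optimal_cutoff(marker, outcome):
    """Sweep candidate thresholds of the form 'marker < t predicts CDI' and
    return the cut-off minimising the distance to the ideal point (0, 1)
    on the ROC plane, together with its sensitivity and specificity."""
    best = None
    for t in np.unique(marker):
        pred = marker < t                       # low ratio predicts CDI
        tp = np.sum(pred & (outcome == 1))
        fn = np.sum(~pred & (outcome == 1))
        tn = np.sum(~pred & (outcome == 0))
        fp = np.sum(pred & (outcome == 0))
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        dist = np.hypot(1 - spec, 1 - sens)     # distance from (FPR, TPR) to (0, 1)
        if best is None or dist < best[0]:
            best = (dist, t, sens, spec)
    return best[1:]

# Synthetic example: DCA/(DCA + CA) ratios and CDI outcomes (1 = developed CDI).
rng = np.random.default_rng(0)
cdi = np.r_[np.ones(12, dtype=int), np.zeros(59, dtype=int)]
dca_ratio = np.where(cdi == 1, rng.beta(2, 6, cdi.size), rng.beta(4, 4, cdi.size))

cutoff, sens, spec = roc_optimal_cutoff(dca_ratio, cdi)
print(f"cut-off < {cutoff:.3f}, sensitivity {sens:.1%}, specificity {spec:.1%}")
```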
Discussion
Our results demonstrated that the blood proportion of secondary BAs was a suitable biomarker to identify patients at high risk of developing CDI during hospital admission. Many reports show the relationship between BAs and CDI, but only a single limited study [10] has utilized BA composition as a surrogate marker for predicting susceptibility to CDI. The unconjugated fecal DCA/GUDCA ratio was reported to be a predictor of the recurrence of CDI. However, GUDCA is not a substrate for DCA, and the concentration of GUDCA is affected by many factors, including conversion from CDCA to UDCA in the colon, absorption from the colon, glycine conjugation in the liver, deconjugation in the intestine, and the possibility of administration in patients with hepatobiliary diseases. In contrast, as a predictor of CDI, we used DCA/(DCA + CA), which is a product/(product + substrate) ratio, to calculate dehydroxylating activity at the 7α-position of CA. Furthermore, we determined the ratio not in the stool, but in the serum. Our previous data showed that the fecal proportion of Clostridium subcluster XIVa correlated better with the DCA/(DCA + CA) in the feces than in the serum [18]. However, a patient's serum is more easily obtainable than stool in hospital. When predicting intestinal 7α-dehydroxylating activity using the serum BA profile, it is essential to use only deconjugated DCA and CA for the calculation. Only deconjugated CA is transformed to DCA by intestinal bacteria, and almost all deconjugated CA and DCA absorbed from the intestine are re-conjugated with glycine or taurine in the liver. Therefore, among serum BAs, only the deconjugated BAs directly reflect the activity of secondary BA production in the intestine.
In addition to DCA/(DCA + CA), we also calculated LCA/(LCA + CDCA), 3βOH-BAs/(3βOH + 3αOH-BAs), and free/total primary BAs in this study. LCA/(LCA + CDCA) should also reflect the 7α-dehydroxylating activity of the intestinal bacteria but showed different results from DCA/(DCA + CA) (Figures 2-4). In healthy subjects, the LCA/(LCA + CDCA) ratios were much smaller than the DCA/(DCA + CA) ratios in serum but not in feces [18], which suggests that LCA is less readily absorbed from the intestine than other BAs. Therefore, DCA/(DCA + CA) is a better serum marker for 7α-dehydroxylation than LCA/(LCA + CDCA).
Serum 3βOH-BAs/(3βOH + 3αOH-BAs) represents epimerization activity from 3αOH to 3βOH. In our previous study [18], this ratio was also positively correlated with the fecal fraction of Clostridium subcluster XIVa. Although it is not clear if this bacterial subcluster epimerizes the hydroxyl group at the C-3 position, the change in 3βOH-BAs/(3βOH + 3αOH-BAs) was associated with the change in DCA/(DCA + CA) (Figures 2-4). On the other hand, free/total primary BAs may be a surrogate marker for small intestinal bacterial overgrowth (SIBO). In bile, almost all BAs are conjugated with amino acid and deconjugated by bile salt hydrolases of various genera in the gut microbiota, including Bacteroides, Bifidobacterium, Clostridium, Enterococcus, and Lactobacillus [19]. Since nearly 95% of BAs are reabsorbed from the small intestine [14], and most primary BAs originate from the small rather than the large intestine, patients with SIBO may have an increased serum deconjugated (free) primary BA fraction.
At the time of admission (prior to antibiotic use and CDI onset), we measured the above BA markers in patients who were admitted to the Gastroenterology and Hepatology division due to high inflammatory responses in blood tests. Our results demonstrated that DCA/(DCA + CA) on admission was significantly lower in patients who developed CDI during hospitalization than in patients who did not develop CDI and healthy controls (Figure 2). In addition, ROC analyses showed that DCA/(DCA + CA) had the largest AUC, indicating that this ratio is the optimal biomarker for the prediction of CDI development (Figure 6). The cut-off value of DCA/(DCA + CA) on admission for discriminating patients at high risk of developing CDI was <0.349. At this value, the sensitivity and specificity were 91.67% and 66.10%, respectively. Therefore, patients with a DCA/(DCA + CA) of less than 0.349 at the time of admission should be monitored for CDI development during hospitalization.
In many cases of high inflammatory responses in blood tests, antibiotics are used after hospitalization. In fact, 66 out of 71 patients were treated with antibiotics after hospitalization (Table 1). The antibiotics caused dysbiosis with decreased DCA/(DCA + CA) and 3βOH-BAs/(3βOH + 3αOH-BAs) (Figure 4). However, our results suggest that most patients who developed CDI were already in dysbiosis on admission (Figure 2). In particular, IBD patients had the lowest DCA/(DCA + CA) ratio on admission (Figure 3). We have already reported that DCA/(DCA + CA) in feces and serum is decreased in IBD patients, regardless of disease activity [18]. Although the mechanism of dysbiosis in IBD patients is not fully understood, IBD is considered a significant risk factor for CDI development. While most patients developed CDI after the use of antibiotics, some IBD patients developed CDI without antibiotics. These IBD patients were hospitalized due to worsening of their primary disease, but we cannot exclude the possibility that they had already developed CDI at the time of admission. However, in any case, patients with low DCA/(DCA + CA) on admission remain a risk group for CDI. A recent study by Berkell et al. [20] showed that patients developing CDI already exhibited distinct microbiota and significantly lower diversity before antibiotic treatment, suggesting the possibility of a predictive microbiota-based diagnosis of CDI. Our results indicate that not only microbiota-based diagnostics but also serum BA composition, such as DCA/(DCA + CA), could be convenient predictive markers for CDI.
In addition to the impacts of antibiotics and IBD, we examined the effects of PPIs on the BA markers. Previous reports showed that the use of PPIs altered the composition of the gut microbiota significantly, more than the use of antibiotics or other drugs [21,22]. As a result, PPI users have an increased risk of CDI. However, our data showed that none of the BA markers was significantly different between patients with and without PPI treatment (Figure 5). Therefore, the use of PPIs does not appear to be the primary cause of CDI development in our patients.
There are several limitations to this study. First, as the sample size of CDI patients was small, a multicenter study with a large sample size is needed to validate the results. Second, since most of the patients enrolled in this study had gastrointestinal or hepatobiliary diseases, further studies are needed using patients other than those with digestive disorders. Third, multivariate analysis with other "classic" CDI risk factors would be essential to validate the utility of our new biomarker.
In conclusion, decreased serum DCA/(DCA + CA) on admission in patients admitted to the Gastroenterology and Hepatology division due to high inflammatory responses in blood tests exhibits a strong correlation with a high risk of CDI development during hospitalization. Thus, serum BA profile, especially decreased serum DCA/(DCA + CA), could be a convenient surrogate marker for the prediction of CDI development.
Sample Collection
Fasting blood samples were collected from patients at the time of admission (prior to antibiotic use and CDI onset). Fasting blood samples were also obtained from 46 healthy volunteers (35 males and 11 females, aged 47.6 ± 8.2 years). Sera were stored at −20 °C until analysis.
Diagnosis of CDI
During hospitalization, 12 patients developed frequent diarrhea and were diagnosed with CDI according to a flow chart by Czepiel et al. [2]. Fecal Clostridium difficile-specific glutamate dehydrogenase (GDH) and toxins (CD toxins) were determined by GE test immunochromato-CD GDH/TOX "NISSUI" (Nissui Pharmaceutical Co., LTD., Tokyo, Japan) and used for the CDI screening test.
Serum BA Analyses
Serum BA compositions were measured by HPLC-MS/MS as described by Murakami et al. [18]. Briefly, a mixture of internal standards was added to 20 µL of serum and diluted with 2 mL of 0.5 M potassium phosphate buffer (pH 7.4). BAs were extracted with Bond Elut C18 cartridges and analyzed by an HPLC-MS/MS system.
Statistical Analysis
Data are reported as the mean ± SEM. The statistical significance of differences among the three groups was evaluated by the Tukey-Kramer test, the difference between two groups by the Mann-Whitney test, and the difference before and after treatment by a paired t-test. Categorical variables were analyzed using Fisher's exact test. ROC curves were used to analyze the values of free/total primary BAs, DCA/(DCA + CA), LCA/(LCA + CDCA), and 3βOH-BAs/(3βOH + 3αOH-BAs) in the prediction of CDI. The minimum distance from the upper left corner (0, 1) was considered the optimal cut-off value. Sensitivity was calculated as true positive number/(true positive number + false negative number), and specificity as true negative number/(true negative number + false positive number). For all analyses, significance was accepted at the level of p < 0.05. All statistical analyses were conducted using Prism (ver. 9.2.0) software (GraphPad Software, San Diego, CA, USA).
Funding: This work was supported in part by a Kakenhi grant (18K07920) from the Japan Society for the Promotion of Science.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki, and the Ethics Committee of Tokyo Medical University Ibaraki Medical Center approved the experimental protocol (#IR1818) for studies involving humans.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
|
2022-04-10T15:15:02.600Z
|
2022-04-01T00:00:00.000
|
{
"year": 2022,
"sha1": "20e41def25cb7efaca39d77f4db780b18588949f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "79c85de61da6aa420bebff2a98d4f2d44ea5bf25",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
}
|
255854945
|
pes2o/s2orc
|
v3-fos-license
|
Challenges of DHS and MIS to capture the entire pattern of malaria parasite risk and intervention effects in countries with different ecological zones: the case of Cameroon
In 2011, the demographic and health survey (DHS) in Cameroon was combined with the multiple indicator cluster survey. Malaria parasitological data were collected, but the survey period did not overlap with the high malaria transmission season. A malaria indicator survey (MIS) was also conducted during the same year, within the malaria peak transmission season. This study compares estimates of the geographical distribution of malaria parasite risk and of the effects of interventions obtained from the DHS and MIS survey data. Bayesian geostatistical models were applied to DHS and MIS data to obtain georeferenced estimates of the malaria parasite prevalence and to assess the effects of interventions. Climatic predictors were retrieved from satellite sources. Geostatistical variable selection was used to identify the most important climatic predictors and indicators of malaria interventions. The overall observed malaria parasite risk among children was 33 and 30% in the DHS and MIS data, respectively. Both datasets identified the Normalized Difference Vegetation Index and the altitude as important predictors of the geographical distribution of the disease. However, the MIS selected additional climatic factors as important disease predictors. The magnitude of the estimated malaria parasite risk at national level was similar in both surveys. Nevertheless, the DHS estimated lower risk in the North and coastal areas. The MIS did not find any important intervention effects, although the DHS revealed that the proportion of the population with access to an insecticide-treated net in their household was statistically important. An important negative relationship between malaria parasitaemia and socioeconomic factors, such as the level of the mother's education, the place of residence and the household welfare, was captured by both surveys. Timing of the malaria survey influences estimates of the geographical distribution of disease risk, especially in settings with seasonal transmission. In countries with different ecological zones and thus different seasonal patterns, a single survey may not be able to identify all high-risk areas. A continuous MIS or a combination of MIS, health information system data and data from sentinel sites may be able to capture the disease risk distribution in space across different seasons.
Background
Malaria is an endemic disease and a public health issue in Cameroon. It is a major cause of morbidity and mortality among children less than 5 years old. In 2014, the morbidity of malaria was 30% in children and 18% in adults [1,2]. Conscious of this situation, the government has considered the fight against malaria to be a national priority and part of the health strategic plan [3]. In 2002, the National Malaria Control Programme (NMCP) was created under the coordination of the Ministry of Public Health, with the aim of improving the quality of strategic actions and raising resources. During the last 10 years, huge investments have been made by donors, the international community and the government to develop strategies and tools for reducing the burden of malaria in the country. According to the national malaria strategic plan of 2014-2018 [4], the NMCP is implementing interventions to sustain and scale up malaria control. Those interventions include the distribution of insecticide-treated nets (ITNs) to populations at risk and of sulfadoxine-pyrimethamine to pregnant women, parasitological confirmation of suspected malaria cases (microscopy or rapid diagnostic test), and treatment of uncomplicated malaria cases with artemisinin-based combination therapy (ACT). Until 2011, the NMCP had distributed ITNs only to vulnerable groups. In 2012, the distribution policy changed and more than eight million long-lasting insecticide nets (LLINs) were given to populations at risk [5,6]. Before the LLIN mass campaign distribution, two representative surveys were carried out by the National Institute of Statistics: a demographic and health survey (DHS) combined with a multiple indicator cluster survey (MICS), and a malaria indicator survey (MIS).
The DHS was the first national malaria survey to collect prevalence data across the country; however, for logistic reasons, data were collected outside the malaria high transmission season. The NMCP and partners therefore decided to conduct the MIS during the second and most important rainy season (September-October), when the highest peak of malaria transmission occurs, in order to assess the ability of the DHS to estimate the malaria burden in the country [7]. Hence, the objective of this study is to assess the influence of the survey period on the detection of the risk pattern by comparing estimates of the malaria parasite risk and the effects of interventions obtained from both surveys. The analysis was carried out using Bayesian geostatistical logistic regression models similar to the ones that have been used for spatial analyses of other DHS and MIS data, such as those of Angola, Senegal, Nigeria, Burkina Faso, Uganda and Sudan [8-13].
Country profile
Cameroon is a Central African country, bordered by Nigeria to the west, Chad to the north, the Central African Republic to the east, and Congo, Gabon and Equatorial Guinea to the south. The country is decentralized and organized into 10 regions, 58 divisions and 360 communal areas. English and French are the official languages. Yaoundé is the political capital and Douala is the economic capital. The total surface area of the country is 475,650 km², the population is around 22 million inhabitants [14,15] and the human development index was 0.512 in 2015 [16]. The percentage of the population living in urban areas is 49%. Children under 5 years old represent 17% of the population [3,17]. Despite the presence of natural resources such as oil, gas, iron and gold, and climatic conditions favourable for agriculture, the national income per inhabitant is still low (<$2000 per year), with important disparities between urban and rural areas [18]. The country has different geographic and ecological zones that generate six epidemiological facets of malaria transmission [19-21], corresponding to different ecological systems: the dry Sahelian zone in the Far North region and the Sudano-Guinean zone in the North region, where the malaria transmission period lasts 4-6 months; the highlands of the Adamawa and West regions, with a malaria transmission period of 7-12 months; the equatorial forests, which include the Centre, East and part of the South regions, where transmission is stable; and the Atlantic coastal zone, covering the Littoral and parts of the South and South-West regions, where malaria is perennial with seasonal variations. Malaria transmission in the northern part of Cameroon is characterized by a seasonal pattern linked to the rainy season, which covers the period from August to October. As in many African countries, Plasmodium falciparum is the predominant species and was responsible for more than 95% of confirmed infection cases in this study [22,23].
DHS-MICS 2011 survey
DHS are nationally representative household-based surveys, commonly carried out by the National Institute of Statistics and ICF International in Africa and elsewhere, collecting socioeconomic, demographic, disease and intervention-related data. MICS is another standardized household survey, carried out by UNICEF, compiling health-related data on children and women. The DHS and MICS were carried out jointly in Cameroon during January-August 2011. A sample of 15,050 households living in 580 clusters was selected using a two-stage sampling approach; 291 clusters were in urban zones (Fig. 1). Blood samples were taken in 50% of the households inside each surveyed cluster, and 5515 children under 5 years old were tested by a rapid diagnostic test (SD BIOLINE Malaria Antigen Pf/Pan) [3].
MIS 2011 survey
The MIS was carried out between September and November 2011, during the malaria high transmission season, 1 month after the increase of rains in the country. The MIS was conducted on 6040 households within 257 clusters randomly selected out of the 580 clusters of the DHS 2011 (Fig. 1). The sample size was determined using the same calculations as the DHS; however, it was based on the proportion of children aged 0-59 months using ITNs, whereas the DHS considered the proportions of a range of indicators. Malaria screening was performed in 4939 children under 5 years old living in the selected households, with the approval of the adult in charge, using a rapid diagnostic test (First Malaria Response Antigen) [24].
Environmental and climate factors
Environmental and climate predictors were extracted from satellite sources (Table A.1 in Additional file 1). In particular, data were compiled on land surface temperature during the day and night (LSTD, LSTN), the Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), land cover and permanent water bodies, obtained from the Moderate Resolution Imaging Spectroradiometer (MODIS) Terra satellite. Rainfall estimates (RFE) and altitude were retrieved from the FEWS (Famine Early Warning Systems Network) and SRTM (Shuttle Radar Topographic Mission) websites, respectively [25,26]. Climatic proxies with weekly and biweekly temporal resolution were averaged over the 1-year period prior to the survey.
Socio-economic factors
Socio-economic data were included in both, the DHS and the MIS surveys. Two socio-economic proxies were used, education of women in reproductive age and household asset index. The education level was treated as a categorical variable with three levels (primary, secondary and university). The household asset index was included in the database and used in categorical form, grouped into quintiles corresponding to the poorest, poor, middle, rich and richest segments of the population. Rural and urban area information was available in the database for the observed survey locations and it was extracted from the GRUMP (or Global Rural and Urban Mapping Project) database at the locations of predictions [27].
Interventions
To capture the effects of interventions at national level, output indicators were generated using data available in the DHS and MIS, according to the household survey indicators tool for malaria control developed by Roll Back Malaria and partners. In particular, the following coverage indicators of use and access to ITN interventions were created: (a) proportion of children under 5 years old who slept under an ITN the previous night; (b) proportion of households in the cluster with at least one ITN; (c) proportion of households in the cluster with at least one ITN for every two people; (d) proportion of the population with access to an ITN within their household. Furthermore, a health system performance indicator was calculated to measure the proportion of children under 5 years old with fever in the last 2 weeks who sought treatment at a hospital and were tested and treated with the recommended ACT [28].
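To make the construction of these cluster-level indicators concrete, the sketch below computes indicators (b) and (d) from a toy household listing. The miniature dataset and variable names are illustrative only and do not reflect the actual DHS/MIS data structure; the access indicator follows the common convention that each ITN is assumed to protect up to two household members.

```python
from collections import defaultdict

# Sketch of two cluster-level ITN coverage indicators, computed from a toy
# household table (one record per household).
# Fields: cluster id, number of ITNs owned, de facto household members.
households = [
    {"cluster": 1, "itn": 0, "members": 6},
    {"cluster": 1, "itn": 2, "members": 5},
    {"cluster": 1, "itn": 1, "members": 3},
    {"cluster": 2, "itn": 3, "members": 4},
    {"cluster": 2, "itn": 0, "members": 7},
]

by_cluster = defaultdict(list)
for hh in households:
    by_cluster[hh["cluster"]].append(hh)

for cluster, hhs in sorted(by_cluster.items()):
    # (b) proportion of households with at least one ITN
    prop_any_itn = sum(h["itn"] >= 1 for h in hhs) / len(hhs)
    # (d) proportion of the population with access to an ITN within their
    # household, assuming one ITN covers at most two people
    pop = sum(h["members"] for h in hhs)
    with_access = sum(min(2 * h["itn"], h["members"]) for h in hhs)
    prop_access = with_access / pop
    print(f"cluster {cluster}: >=1 ITN {prop_any_itn:.2f}, ITN access {prop_access:.2f}")
```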
Bayesian geostatistical modelling
Bayesian geostatistical binomial models fitted on cluster level aggregated malaria data were used to estimate parasitaemia risk at high spatial resolution based on climatic predictors (Model 1). Climatic variables were categorized into groups with cut-offs defined from quintiles and exploratory analysis. Geostatistical variable selection was carried out to identify the most important climatic and environmental predictors, including their best fitting functional form [29,30]. For each predictor, a categorical indicator was introduced with values 0, 1 and 2 corresponding to exclusion of the predictor from the model or inclusion in linear or categorical form, respectively. It was assumed that the indicator arose from a multinomial distribution with probabilities defining the variable-specific exclusion/inclusion probabilities (in linear/categorical forms) in the model (Additional file 2). A threshold of 50% was considered for the probability of inclusion (i.e. posterior inclusion probability) into the predictive geostatistical model. In the final model, the effect of a predictor was considered to be statistically important if the 95% Bayesian credible interval (BCI) of the coefficient did not include one on the odds ratio scale. Validation of Model 1 was performed to assess the model's predictive performance. In particular, the sample was divided into a training set, which included 80% of the data and was used for model fitting, and a test set consisting of the remaining data. Model validation compared the observed parasitaemia at the locations of the test set with the model-based predicted risk by calculating the mean error. The model's predictive performance was also evaluated by calculating the proportion of test locations correctly predicted within the 95% BCI. Bayesian kriging was applied using Model 1 to predict the parasitaemia risk over a gridded surface of 117,192 cells and obtain pixel-level risk estimates at 2 × 2 km² resolution [31,32].
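As a rough, self-contained illustration of the spatial layer of such a model, the sketch below builds an exponential spatial covariance and uses simple kriging to interpolate a logit-scale spatial random effect from observed clusters to prediction pixels, then maps the linear predictor back to a risk. The parameter values, coordinates and intercept are made up; in the actual analysis these quantities are sampled jointly with the regression coefficients by MCMC, as described above.

```python
import numpy as np

def exp_cov(dist, sill, decay):
    """Exponential covariance: sill * exp(-decay * distance)."""
    return sill * np.exp(-decay * dist)

def krige_spatial_effect(coords_obs, w_obs, coords_new, sill, decay, nugget=1e-6):
    """Predict a zero-mean spatial random effect at new locations by
    conditioning a Gaussian process on its values at observed clusters."""
    d_oo = np.linalg.norm(coords_obs[:, None, :] - coords_obs[None, :, :], axis=-1)
    d_no = np.linalg.norm(coords_new[:, None, :] - coords_obs[None, :, :], axis=-1)
    K_oo = exp_cov(d_oo, sill, decay) + nugget * np.eye(len(coords_obs))
    K_no = exp_cov(d_no, sill, decay)
    return K_no @ np.linalg.solve(K_oo, w_obs)

# Toy example: 5 observed clusters with a fitted spatial effect, 3 new pixels.
rng = np.random.default_rng(1)
coords_obs = rng.uniform(0, 100, size=(5, 2))   # coordinates in km
w_obs = rng.normal(0, 0.5, size=5)              # logit-scale spatial residuals
coords_new = rng.uniform(0, 100, size=(3, 2))

w_new = krige_spatial_effect(coords_obs, w_obs, coords_new, sill=0.5, decay=0.05)
linear_pred = -0.4 + w_new                      # assumed intercept + spatial effect (no covariates here)
risk = 1.0 / (1.0 + np.exp(-linear_pred))       # inverse logit -> predicted parasitaemia risk
print(np.round(risk, 3))
```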
Geostatistical variable selection was also applied to select the most important coverage indicators of malaria interventions. A Bayesian geostatistical Bernoulli model was fitted on the parasitaemia status of each child to estimate the effect of selected malaria interventions (Model 2) after adjusting for potential confounding effects of the climatic factors used in Model 1 and of the socioeconomic factors. The same methodology was employed separately on the DHS and the MIS data. Model fit and prediction were conducted in R [33] and OpenBUGS version 3.2.3 (Imperial College and Medical Research Council, London, UK) [34,35]. Convergence of parameters was assessed by the Geweke statistic and by visually inspecting the traceplots [36]. Computations were performed in the parallel scientific computing (sciCORE) platform of Basel University. Different maps were produced by ESRI's ArcGIS version 10.2.1 for Desktop (http://www.esri.com/).
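The variable-selection and convergence steps can likewise be illustrated with a few lines operating on MCMC output. The sketch below assumes a vector of posterior samples of the inclusion indicator for one predictor and of one regression coefficient, and uses a simplified Geweke-style comparison of early and late chain segments (the full diagnostic uses spectral density estimates of the variances); all sample values are synthetic.

```python
import numpy as np

def inclusion_probabilities(indicator_samples):
    """Posterior probabilities of exclusion (0), linear (1) or categorical (2)
    inclusion for one predictor, from MCMC samples of its indicator."""
    samples = np.asarray(indicator_samples)
    return {k: float(np.mean(samples == k)) for k in (0, 1, 2)}

def geweke_z(chain, first=0.1, last=0.5):
    """Simplified Geweke statistic comparing the means of the first 10% and
    last 50% of a chain (sample variances instead of spectral densities)."""
    chain = np.asarray(chain, dtype=float)
    a = chain[: int(first * len(chain))]
    b = chain[int((1 - last) * len(chain)):]
    return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

# Toy MCMC output for one predictor's indicator and one regression coefficient.
rng = np.random.default_rng(2)
indicator = rng.choice([0, 1, 2], size=4000, p=[0.30, 0.15, 0.55])
beta = rng.normal(-0.8, 0.2, size=4000)

probs = inclusion_probabilities(indicator)
print("P(include) =", round(probs[1] + probs[2], 3))   # compare against the 50% threshold
print("Geweke z =", round(geweke_z(beta), 2))          # |z| < 2 suggests convergence
```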
DHS results
The observed parasitaemia risk in children under 5 years old was 30% at national level, 37% in rural and 20% in urban areas. In urbanized cities such as Yaoundé and Douala, the malaria parasite risk was among the lowest in the country, i.e. 12 and 13%, respectively. Five percent of the population had access to an ITN within their household and 21% of children slept under an ITN during the night preceding the survey. The percentage of children under 5 years old with fever in the last 2 weeks who were treated with ACT was 6%. The proportion of children from the poorest and poor quintiles was 63% and the proportion of mothers with at least primary education was 80% (Table 1).
The geostatistical variable selection identified NDVI and altitude (in categorical form) as the most important predictors of parasitaemia risk, using the cluster level model (Model 1). The proportion of the population with access to an ITN in their household and the proportion of children under 5 years old with fever in the last 2 weeks who sought treatment at a hospital and were tested and treated with ACT were selected for Model 2 (Table 2).
Posterior estimates of the model's parameters are shown in Table 3. The climatic, cluster level model confirmed known relations between malaria parasite risk and climatic predictors, i.e. a positive association with NDVI and a negative relation with altitude. A malaria parasite risk map was generated using the climatic predictors identified from the DHS data (Fig. 2). The individual level model (Model 2 in Table 3) shows that children in urban areas or those living in households with a higher socioeconomic level were less affected by malaria. Children of mothers with a high educational level and children aged below 12 months had a low malaria parasite risk. The proportion of the population with access to an ITN in their household captured a statistically important effect on parasitaemia risk.
MIS results
The national observed malaria parasite risk was 33%, with substantial disparities between rural (43%) and urban (19%) areas. The North-West region and the towns of Yaoundé and Douala registered low prevalences of 10, 6 and 16%, respectively. According to the survey data, 9% of the population had access to an ITN within their household and 15% of households possessed one ITN per two persons. The percentage of children under 5 years old with fever in the last 2 weeks who received ACT was 12%. The percentage of children from the poorest and poor quintiles was 68% and the proportion of mothers with at least primary education was 76% (Table 1).
The geostatistical variable selection applied to the cluster level model (Model 1) identified NDVI, the categorical forms of EVI and of distance to water, the presence of forest and altitude as the most important predictors of parasitaemia risk. The individual level model estimated high posterior inclusion probabilities for the following ITN coverage proxies: the proportion of the population with access to an ITN in their household, the proportion of children who slept under an ITN the previous night and the proportion of households with one ITN per two persons. However, the pairwise correlations between the above ITN indicators ranged from 0.6 to 0.8; therefore, the indicator included in the final model (Model 2) was the last ITN coverage measure, which had the highest inclusion probability (Table 2).
Parameter estimates for Models 1 and 2 are shown in Table 4. The cluster level predictive model indicated that malaria parasite risk was positively related to NDVI, EVI and the presence of forest, and negatively associated with altitude. A malaria parasite risk map was drawn with the climatic predictors selected by the MIS (Fig. 3). The individual level model (Model 2 in Table 4) showed that children in rural areas, as well as those living in households with lower socioeconomic status, were more vulnerable to parasitaemia risk. Children aged below 12 months had a low risk. The educational level of the mother was not statistically associated with malaria parasite risk. ITN coverage was statistically important and had a negative effect on malaria parasite risk.
The proportion of test locations falling into the BCIs of the predictive posterior distributions, with probability coverage varying from 50 to 95%, was comparable for both surveys (Model 1), but the accuracy of the estimates was higher for the DHS data, as shown by the smaller BCI width (Fig. 4).
Discussion
This study is the first to assess the influence of survey season on the estimates of the geographical distribution of malaria parasite risk and of the effects of interventions, using data collected by DHS and MIS carried out at the same locations and year, but at different malaria transmission seasons. The analysis employed Bayesian geostatistical models because this study was interested in comparing the estimates of the risk pattern across the country rather than at the observed locations.
The DHS collects a large number of indicators on diverse sectors, and huge logistics are involved to guarantee the coverage of all clusters, in particular those in rural areas with difficult access. Moreover, the planning of a DHS usually avoids the rainy season in Africa because of road degradation, which challenges survey implementation. The constraints described above often have an impact on the schedule and duration of a DHS. The DHS and MIS surveys in Cameroon provide a unique opportunity to assess the effect of season on malaria survey-based estimates.
Both surveys showed a low level of parasitaemia risk (under 5%) in the West and Adamawa highlands. These areas are suitable for elimination interventions. Both datasets also indicated that the parasitaemia risk in the East region was the highest in the country, above 50%. This high risk level is explained by the important forest coverage, the predominance of rural areas and the low educational level of the population.
The DHS data did not identify a cluster of high malaria parasite risk in the North and Far-North regions, as estimated by the MIS. However, evidence from the upsurge of malaria cases that overstrains the capacity of the health system during the rainy season and the high malaria mortality risk among children in the northern part of the country does not support the DHS finding [37,38]. The fact that the DHS did not coincide with the seasonal malaria transmission in the northern regions may explain the underestimation of malaria parasite risk in that area.
Table 2 Posterior inclusion probabilities (%) of the climatic predictors and intervention coverage indicators obtained by the geostatistical variable selection applied to DHS and MIS data
Furthermore, the DHS could not capture a malaria cluster in the coastal part of the country, which forms the estuary of the biggest rivers that pour into the Atlantic Ocean. During the long rainy season that begins in August, some areas are flooded and large ponds of stagnant water are created [39-42]. The high transmission occurs within the rainy season, which is characterized by an increase of the mosquito population.
Table 3 Estimates (posterior median and 95% BCI) of the geostatistical model parameters based on the cluster level climatic (Model 1) and the individual level model (Model 2), DHS 2011
a Children less than 6 months were not surveyed [43][44][45].
Altitude and NDVI were identified as important predictors in the cluster level models of both surveys. The presence of forest, EVI and distance to water bodies were found to be important in modelling the MIS data. As is well known, altitude has a negative effect on malaria parasite risk. The effect of distance to water was not linear, and households located more than 70 m away from water bodies are at higher risk of malaria compared to those close to them, for a number of reasons including the wind direction and the availability of human hosts [46]. The rainy season has an influence on vegetation and on human activities, such as farming, which exposes people to mosquito bites, and that could be the reason for the positive association between EVI, NDVI, the presence of forest and the parasitaemia risk [47,48].
The analysis of the MIS data showed that the proportion of households with one ITN per two persons was statistically important, with a negative effect indicating that household coverage had an influence on malaria parasite risk among children [49]. According to the DHS, the ITN coverage indicator with a statistically important and protective effect was the proportion of the population with access to an ITN. The use of ACT among children under 5 years old with fever in the last 2 weeks before the survey was positively associated with the malaria parasite risk but not statistically important. Similar results regarding ACT have been obtained from the MIS in Uganda and in Burkina Faso [11,12].
The disease risk resembles the pattern of socioeconomic inequalities in the country. In both surveys, the place of residence had an important effect and was negatively associated with malaria parasite risk. The DHS data showed that only the effect of the least poor category of the wealth index was statistically important compared to the poorest baseline category; however, the MIS data estimated statistically important effects in all socio-economic categories. The educational level of mothers had a protective effect, which was however statistically important only for the DHS. These results suggest that during the high malaria transmission season, the quality of the household environment is more important than the mother's education. Obviously, children from wealthy households can benefit from additional vector control tools, such as appropriate malaria treatment, ITNs, spray products and a sanitized neighbourhood. Wanzirah et al. and Tusting et al. have also shown that high house quality reduces the entry of mosquito vectors and, therefore, lessens the risk of infection [50,51].
A gradient of malaria parasite risk was associated with age and, as expected, the gender effect was not statistically important. Younger children were at lower risk than older ones, which may be a consequence of the passive immunity conferred by mothers [52].
The high residual spatial correlation estimated by the models, especially those fitted to the MIS data, indicates the presence of unmeasured spatially structured factors that influence the geographic distribution of the parasitaemia risk. It is likely that the climatic proxies considered in the model, such as day and night LST or NDVI and EVI, were not able to capture the entire ground climatic conditions. Similar analyses of other MIS data estimated relatively high residual spatial correlation, particularly in recent surveys in which climatic factors are confounded with malaria interventions [10,12,30]. The BCI widths of the parameters estimated with the DHS were narrower than those of the MIS, most likely due to the smaller number of survey clusters in the latter [53,54].
Both the DHS and the MIS used RDTs. RDTs can remain positive for a few weeks after malaria treatment. Therefore, our estimates of parasitaemia risk may be slightly higher than those based on diagnosis by microscopy [55-57].
The DHS and MIS are based on a two-stage cluster sampling design. In the first stage, the number of clusters selected at regional level is proportional to the population. This design oversamples clusters in places with high population density and can select fewer clusters over larger regions with small populations (e.g. the East region), where the disease may vary more compared to the urban areas and big cities such as Yaoundé and Douala. Therefore, the DHS/MIS survey design may provide lower precision of the estimates in rural areas.
Since 2011, Cameroon has implemented two mass campaigns of LLINs, introduced preventive treatment of children against malaria in the North region and built two large dams in the East and South regions. There is currently a DHS ongoing in Cameroon and the results of this study will serve as a baseline to assess the changes in malaria risk as a result of disease interventions, climatic effects and environmental modifications [58,59].
Conclusion
Timing of the malaria survey influences estimates of the geographical distribution of the disease risk, especially in settings with seasonal transmission. The DHS and MIS in Cameroon provide information about the geographical distribution of malaria parasite risk and the effects of interventions in a country where different ecosystems coexist. In countries where malaria transmission is affected by seasonality, a single survey may not be able to identify all high-risk areas. A continuous MIS, similar to the one running for example in Senegal, or a combination of MIS, health information system data and data from sentinel sites may be able to capture the disease distribution in space across different seasons. In countries with no seasonal variation in malaria transmission, however, a single survey may be sufficient.
Can Online MBA Programs Allow Professional Working Mothers to Balance Work, Family, and Career Progression? A Case Study in China
Career progression is a general concern of professional working mothers in China. The purpose of this paper is to report a qualitative study of Chinese professional working mothers that explored perceptions of online Master of Business Administration (MBA) programmes as a tool for career progression for working mothers balancing work and family in China. The objective was to examine existing work-family and career progression conflicts, the perceived usefulness of an online MBA in balancing work-family demands and career aspirations, and the perceived ease of use of e-learning. Using Davis's (1989) technology acceptance model (TAM), the research drew on in-depth interviews with 10 female part-time MBA students from a university in Wuhan. The interviews were transcribed and the data analysed through coding. The findings showed that conflicts arose where demanding work schedules competed with family obligations, studies, and caring for children and the elderly. Online MBA programmes were viewed as a viable tool for balancing work, family and study, given their flexible time management capabilities. However, consideration must be given to students' motivation, the lack of networking and face-to-face interaction, and quality concerns. The research findings emphasise the pragmatic need to re-align higher education policy and practice to position higher education e-learning as a trusted education delivery channel in China. By shedding light on the prevailing work-family conflict experienced by women seeking career advancement, this study suggests developing better gender-supporting policies and innovative e-learning practices to champion online MBA programmes for this target niche.
Introduction
Career progression is a general concern of professional working mothers in China.
Extensive research relates work-family conflict to the under-representation of women in top management (Blair-Loy, 2003). Still to be addressed, however, is how women can realise individual agency in developing and achieving career progression (Broadbridge & Simpson, 2011). One potential route to more senior management positions for women is the pursuit of management education through a Master of Business Administration (MBA) degree (Finney, 1996;Simpson, 2000).
An MBA is considered an important pre-requisite for both men and women who aspire to senior positions (Finney, 1996); an effective tool against gender discrimination (Leeming & Baruch, 1998); and a passport to fast-track career and senior managerial roles (Baruch & Peiperl, 1999). Although the benefits of an MBA for career progression are well-documented in the literature from a non-Chinese perspective, little empirical evidence exists in the context of China's economy, especially for the career development of women.
Conversely, these MBA students already lead complex lives as employees, employers, spouses, mothers, community volunteers, and caregivers for elderly and ill relatives (Crosby, 1991). An alternative to the emotional rigours of trying to balance work, family, and studies is to opt for online education. Many researchers have suggested that online education (also referred to as e-learning) is a viable option for busy and working mothers (Home, 1998), offering learning opportunities 'anywhere' and 'anytime'. However, these changing patterns in education have been found to induce anxiety and uncertainty in users (e.g., Ong & Lai, 2006), and many conceptual boundaries (e.g., technological, pedagogical, social, economic) have yet to be fully understood or explored (Rossiter & Crock, 2006). In particular, the conceptual boundaries of the work-family conflict, career aspirations, and the overall impact on online MBA acceptance for women in China remain relatively unexplored.
Additionally, online education is still a new concept for many people in China, and attitudes towards its adoption have not been fully studied (Duan et al, 2010). A better understanding of online education adoption intentions in China, particularly those of professional working mothers, would enable providers to offer courses that are more likely to be utilised by future online learners. This research aims to examine online MBA programme adoption intentions in China from a technology acceptance perspective.
To achieve the research objectives, Davis's (1989) technology acceptance model (TAM) was employed as one of the most popular theoretical frameworks for predicting system acceptance of technology. In this study, the TAM model is used to investigate the perceived usefulness and ease of use of online MBAs in a qualitative case study by interviewing 10 female professional part-time MBA students.
This case study shows that female MBA students' expectations may present challenges for educational practitioners, e-learning developers, and policy makers intending to exploit the flexibility of online MBAs in China. This research makes a number of contributions to both theory and practice by applying the TAM model to a qualitative case study in the context of China. The authors propose that a number of key issues need to be considered carefully when promoting and supporting e-learning and the use of online MBAs for career progression. Furthermore, a range of interventions is needed to re-align educational policy and practice with prevailing labour market requirements.
This paper provides background on work-family conflict and career progression, followed by a literature review of the use of MBAs for career progression and of online MBAs. A theoretical explanation of the TAM theory and a description of the methodology precede the synthesised findings, which lead into a discussion of the results, their implications, and suggestions for future practice and research, concluding the study.
Work-family conflict and career progression
In China, women account for up to 38% of the full-time workforce and are overrepresented in manufacturing, services, and public sector industries such as health, education, and social welfare (Cooke, 2012). However, their presence at managerial levels reflects a gender gap in terms of career progression (Cao, 2001) and salary increments (Shu & Bian, 2003). Prior research relates the failure of women to progress to senior management to work-family conflict (Broadbridge & Simpson, 2011;Blair-Loy, 2003). Greenhaus & Beutell (1985, p. 77) define work-family conflict as: "a form of inter-role conflict in which the role pressures from the work and family domains are mutually incompatible in some respect".
The ideal worker norm has long been associated with men, who are expected to devote more time to work. Women who try to fit into this norm and serve as primary caregiver may find it difficult to balance the demands of both an ideal worker and an ideal caregiver (Williams, 2005). Women's work and childbearing lifecycle patterns are diametrically opposed to the senior management career lifecycle, where the intensive workload and commitment necessary to succeed coincide with peak child-rearing years (Drew & Murtagh, 2005). This difficulty then becomes the primary source of women's disadvantage in the corporate world and explains their "concentration in low paid, part-time employment and their absence at the most senior levels of management" (Doherty, 2004) (p. 433).
The one-child policy in China has meant that the only-child generation is more precious than previous generations of children, and parents, particularly those in the middle class, compete against each other in bringing up their progeny (Xiao & Cooke, 2012). Children are expected to start their serious education well before the official schooling age of six. Therefore, despite being able to afford paid childcare, middle-class mothers may be under pressure to channel their energy into developing their only child instead of their own career (Xiao & Cooke, 2012).
Although it is customary for Chinese women to receive childcare support from their live-in parents, caring for elderly parents, especially if ill or particularly old, is an additional family demand. Managing elder care has been shown to be more complex than managing childcare because it involves the coordination of many social services (Friedman & Galinsky, 1992). Consequently, even the strong support typical of Chinese families and social groups does not appear sufficient to alleviate women's childcare and domestic burdens, nor does it allow them to overcome cultural barriers to their career aspirations, resulting in a failure to achieve top positions in management (Ng et al, 2002).
MBA for career progression
Obtaining a good job in China still depends upon obtaining advanced education and using one's guanxi (network) to persuade a personnel manager to offer the position (Granrose, 2005). Guanxi is a complex web of social connections and mutual obligations used to exchange favours and conduct business in Chinese society (Park and Luo, 2001). In Hong Kong, higher levels of education have been associated with higher income and more prestigious careers (Cheng & Yuen, 2012). Similarly, several studies have found significant improvements in the career progression of managers after completing an MBA course (Association of MBAs, 1992). Although some (e.g., Mintzberg, 2004) have argued that MBAs do little to develop the interpersonal skills required for effective management and leadership, Simpson (2000) found that an MBA increased confidence and credibility, and it has been credited with providing information collection and analysis, quantitative analysis, technology management, entrepreneurial, and action skills (Boyatzis & Case, 1989). Nonetheless, an MBA serves as an effective means of acquiring managerial competencies and enhancing career prospects (Finney, 1996).
An MBA can therefore be considered an important component of a Chinese professional women's career progression. However, the inter-role conflicts experienced by women working a double or even triple shift, including career, children, ageing parents, and study, necessitates flexible learning delivery channels, such as online education and an online MBA.
The technology acceptance model (TAM)
This study employs Davis's (1989) technology acceptance model (TAM) as the theoretical grounding for exploring factors influencing the perception of online MBA programmes. TAM, adapted from the theory of reasoned action (TRA) (Fishbein & Ajzen, 1975), has been used as the theoretical basis for many empirical studies of user technology acceptance (Davis, 1989;Ong & Lai, 2006;Teo et al, 2012;Venkatesh & Morris, 2000). This model is perhaps the most promising direction for attempts to overcome the problem of underutilised systems. Unfortunately, there is little evidence of TAM being applied to professional working mothers in the context of China.
The TAM model is comprised of two prominent variables, perceived usefulness and perceived ease of use. Perceived usefulness has been defined as an indicator of the extent to which a person believes that using a particular technology will enhance his or her performance and therefore represents an individual's extrinsic motivation to use a technology (Davis, 1989). A significant body of prior research has shown that perceived usefulness has a positive effect on behavioural intention to use (Davis et al, 1989;Venkatesh & Morris, 2000). Conversely, perceived ease of use refers to the degree to which a person believes that the use of a particular technology will be free of effort and is therefore an indicator of an individual's intrinsic motivation to use a technology (Davis, 1989). Venkatesh & Morris (2000) found that a low evaluation of perceived ease of use caused an increase in the salience of such a perception in determining perceived usefulness and user acceptance decisions. In TAM, beliefs that a technology is useful and easy to use influence the users' attitudes toward the technology and thereby their decision to adopt the technology. The need to balance work-family commitments and career aspirations would likely position online MBA programmes as a useful alternative to the traditional classroom education delivery method and is expected to influence the decision to pursue an online MBA.
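As a rough, purely illustrative sketch of how the two TAM constructs are typically combined, the snippet below maps a hypothetical respondent's perceived usefulness and perceived ease of use scores onto an adoption-intention score; the Likert-style scales, weights and the ease-of-use-to-usefulness spillover term are illustrative assumptions, not estimates from this study.

from dataclasses import dataclass

@dataclass
class TamResponse:
    perceived_usefulness: float   # e.g. mean of Likert items, scale 1-7
    perceived_ease_of_use: float  # e.g. mean of Likert items, scale 1-7

def adoption_intention(r: TamResponse, w_pu: float = 0.6, w_peou: float = 0.4) -> float:
    # In TAM, ease of use also feeds into usefulness, so add a small spillover term
    # before combining both beliefs into a single intention score.
    pu_adjusted = r.perceived_usefulness + 0.2 * r.perceived_ease_of_use
    return w_pu * pu_adjusted + w_peou * r.perceived_ease_of_use

print(adoption_intention(TamResponse(perceived_usefulness=6.0, perceived_ease_of_use=5.0)))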
E-learning and online MBA programmes
The greatest attraction of online MBA programmes is the convenience and flexibility of the delivery channel. Online education enables adults with full-time jobs to attend classes without having to leave their current jobs (Lorenzo, 2004). This allows students to maintain employment and other family responsibilities while being able to conveniently continue with their education with a flexible schedule and low travel costs and enables students to interact with teachers and students from around the world (Hung et al, 2010).
Studies have found student satisfaction and perceived usefulness to be key factors in explaining learners' behavioural intention to use e-learning (Liaw, 2008). Lee, Yoon, & Lee (2009) presented the significant influence of instructor characteristics and teaching materials on the perceived usefulness of e-learning, while perceived usefulness and enjoyment were predictors of the intention to use e-learning. On the other hand, Visser, Plomp, & Kuiper (1999) showed that the student's characteristics and motivation to learn predicted participation in e-learning. Furthermore, research has shown that perceived networking, instructor interaction with students, quality, and active discussion (Swan et al, 2000, Jiang & Ting, 1998) have a significant impact on perceptions of online education.
Research questions
This study aims to answer the following research questions (RQs):
RQ1-What are Chinese professional women's experiences with work-family conflict and career aspirations?
RQ2-What is the perceived usefulness of an online MBA programme in terms of balancing work-family demands and career aspirations?
RQ3 -What is the perception of e-learning in terms of ease of use?
Research method
Although TAM has been the subject of investigation of a large number of studies, many such studies are limited in several respects, such as the strictly positivist quantitative perspective of research focusing on the adoption of technologies as such (Davis, 1989). This study helps to address this limitation in the literature by providing an in-depth qualitative study. This decision is in accordance with recommendations of proponents of the case study approach, such as Yin (1994). Given the complexity of the work-family and career progression conflict, the authors found it necessary to record the informants' experiences and thoughts regarding online MBA programmes instead of using structured questionnaires, which would have risked omitting critical information.
Data collection
In-depth face-to-face interviews were conducted in the spring of 2012 with 10 part-time MBA students from a university in Wuhan, China. Demographic data including age, marital status, and education and career background; patterns of career progression; and availability of home and child support were first collected using semi-structured interviews. Next, the interviewer asked three open-ended questions concerning the informant's perception of the experiences of work-family conflict and career progression; the usefulness of online MBA programmes in helping to balance work-family and career progression; and the ease of use of e-learning technology.
The snowball sampling method was used to recruit female participants. This method relies on referrals from the initial subjects to generate additional subjects.
Interview invitations with criteria (working mothers) were sent to part-time MBA students via the class monitor. The initial participants were then encouraged to bring along fellow classmates with similar backgrounds. This method was suitable and effective for recruiting the appropriate target group because student working mothers knew other student working mothers.
The average age of the informants was 28 years old. Their backgrounds included automobile (2 participants), electrical (3 participants), software applications (2 participants), human resources (1 participant), and retail (2 participants). Interview questions were given to participants prior to the interview, and participants were guaranteed the confidentiality of their information to ensure they spoke freely. The interviews were audiotaped, and notes were taken to ensure accurate recording of the responses and the interviewer's overall impressions. To protect the participants' identities, pseudonyms are used in this report. All interviews were conducted in English and generally lasted approximately one hour each. The data were then sorted into a database manually by one of the authors and checked by the other author.
Data analysis
Theoretical thematic data analysis was adopted to analyse the case data, following Braun & Clarke's (2006) qualitative data analysis model. First, the researchers read through the transcripts and jotted down comments, notes, thoughts, and observations in the margins. Next, the researchers went over the marginal notes to summarise key issues and to section and categorise the data. Code labels were assigned to each section using the interviewee's words or the researchers' own words.
The preliminary codes were examined for overlaps and redundancy. Eliminating redundant codes and collapsing similar codes enabled the codes constructed in the early stage to be narrowed down into broader themes. The new list of code words was then used to examine the texts to check whether these codes recorded common themes and recurring patterns. The different data sets were continuously read and analysed to refine the categories and to make sure that no text sections were overlooked.
During the analysis phase, the researcher continuously linked the recurring themes to TAM as the theoretical lens of this study. TAM was used both to organise the categories and as an analytical tool to form an in-depth view of the conceptual meanings of the category under the framework. The themes fell into two main categories: those related to work-family conflict and three sub-categories related to attitudes towards online MBA acceptance. The different components of the TAM model (i.e., perceived usefulness and ease of use) were used as "containers" for arranging data themes (Barab, Evans, & Baek, 2003).
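A minimal sketch of this "containers" step, with hypothetical code labels: each coded excerpt is assigned to one of the TAM-derived categories so that recurring themes can be collected and compared.

from collections import defaultdict

# Hypothetical coded excerpts as (code_label, tam_category) pairs
coded_excerpts = [
    ("flexible study time", "perceived_usefulness"),
    ("abundant information", "perceived_usefulness"),
    ("no face-to-face contact", "perceived_usefulness"),
    ("easy navigation", "perceived_ease_of_use"),
    ("childcare pressure", "work_family_conflict"),
]

containers = defaultdict(list)
for code, category in coded_excerpts:
    containers[category].append(code)

for category, codes in containers.items():
    print(category, "->", codes)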
As the interviewers gathered more data and the coding continued, it was found that no new themes were being identified and no new issues arose. Therefore, as suggested by Strauss and Corbin (1990), this study had reached its saturation point with 10 interviews.
To ensure reliability and internal validity, both researchers were involved in the analysis. Where necessary, the informants were contacted for clarification or additional information. For this reason, 2 follow-up telephone interviews were conducted. In addition, peer debriefing was employed. A professional peer who was not directly involved in the data collection but was familiar with the socio-cultural elements of qualitative case study analysis was invited to comment on the findings as they emerged and to check for misinterpretations and researcher bias.

When I observe how my colleagues, co-workers, and classmates cope with their children and work, I think it's very likely that they ask for help from their parents. If they don't, they will have to hire someone to help take care of their children, and that's expensive. And there would still be the worry that she [helper] might not treat my kids like her own. Parents are a great help because they will treat the children like their own, just like their children when they were young.
MBA
Many of the informants believed that an MBA was a ticket to top management positions and higher salaries. The following was stated:

Some people like me have 8 or 10 years of experience. Eventually, I figured that if I were to enrol in this MBA programme, it would fulfil a large requirement for my promotion…. I have a career development plan and I know I must study more to prepare for this position. Therefore, I believe that after this MBA course, we will have good opportunities…. for career development, we should be ready with experience, and with this programme, we will be ready for promotion. -Jessica, 36, Program manager

Salary-wise, you will get at least a 50% raise, which corresponds to 1.5 times your usual pay. This is the first raise, which is very dramatic. I believe that if I keep improving myself, opportunities will present themselves, followed by salary increases. I think that if I instigate the changes myself, good things will come to me. -Veronica, 25, Program manager
Perceived usefulness of online MBA programmes
Overall, the women had positive perceptions of online MBA programmes, centred on the belief that such programmes were useful in integrating the role of student with those of work and family. Several themes emerged from the analysis of the interview data and are described below.
Flexibility
Many of the women reported flexibility as the most useful aspect of online MBA programmes. Given the need to juggle a full-time job, family, and school, being able to learn at their convenience in terms of time and learning space favourably influenced perception towards online MBA programmes. Examples of relevant comments were as follows.
I think it's good for us because with e-learning, we can study at home. However, when the children at home ask you to play with them or check their homework, you should comply with their requests and needs, that's the way it is in China. So, when that happens, I cannot study.
However, in so many cases, we can use this [e-learning] in an efficient way. -Jessica, 36, Program manager

It has huge benefits, which are more about flexibility and saving time (than anything else), and if you have some urgent issues, you can just press "pause" and continue later. That's great! -
Abundant information
Given that e-learning systems contain large repositories of information, the informants believed that they could access more information online than in the limited time spent in classroom lectures. One of the women stated:

Being online, you can access more information than in the classroom because on the internet, you can see information on many topics on the same page, whereas in traditional classroom learning, we can only see what the teacher or classmates around you contribute, so that's the difference. -Veronica, 25, Program manager
Perceived limitations of online MBA programmes
Although the women were generally positive about the usefulness of online MBA programmes, they also noted some limitations of online learning. The informants cited perceived challenges with self-study motivation, a lack of face-to-face networking, and limited interaction with instructors and fellow students. These themes are described below:
Motivation
Some of the informants felt that e-learning required more motivation and self-discipline to allocate time for learning and to sit through an online class compared to classroom learning. One of the women stated:
Lack of face-to-face interaction
Most informants reported that a lack of face-to-face interaction was a major drawback. Some believed that online learning failed to allow them to interact with their instructors and classmates, as in traditional classroom settings. They argued that Chinese students are accustomed to discussing, learning, and working in teams and feel the need to interact closely with the teacher.
Another concern brought up by the informants was the lack of emotional connection between the instructor and student as well as among students. The women stated:

In classroom learning, we can have face-to-face conversations with our teacher, we can discuss with our classmates, and we can conduct firm case studies, but with e-learning, the efficiency can be based on internet quality. Sometimes lines drop or the teacher cannot be seen face-to-face, or when you see him on the screen of your computer, he is in some session and cannot be in direct contact with you. Thus, mutual feelings are missing, and your understanding and input can be minimised. This is different from the traditional approach. Moreover, sometimes we need to have someone to talk with, and e-learning cannot always offer that opportunity. Under such circumstances, my focus will be worse than in the classroom. -Veronica, 25, Program manager
Actually, I don't like online learning because if we talk face-to-face, I can see you. That way, I can understand how you feel, and you might be able to notice whether I understand what you are saying, and that's good. However, with online learning, you just follow the teacher's lessons, the teachers can't see you and you can't see the teachers, so there are no interactions. So, I like face-to-face instruction much more than virtual instruction. -Daisy, 30, Sales manager
Perceived networking
Another concern noted was the lack of networking in e-learning. The informants argued that one of the main attractions of an MBA programme was the ability to use this platform to network and share ideas with other professionals. They stated:

Before I wanted to be an MBA student, I wanted to make friends who are also earning their MBA, as they have valuable opinions and ideas and we might be able to work together to do something. I wanted to communicate with others. It's very important! However, in an online classroom, you don't have that chance. Most people who go back to school after working also want to meet people whom they can talk to on a regular basis and share interests with. In contrast, students who are college graduates want higher education and may not have this desire. However, when you have some working experience, you want others to share their experience with you, which is very important! -Terry, 31, Electrical engineer
Women's perceptions of the ease of use of e-learning
This study found very little hesitation regarding the ease of use of e-learning.
The informants generally perceived online MBA programmes to be easy to use.
However, the issue of easy navigation and access to lessons affected this belief. One informant stated:

As long as we can easily find the lessons, e-learning is easy to rely on and can be put to good use. -Jessica, 36, Program manager
Opinions of the quality of online MBA programmes
Another issue found to affect the perception of online MBA programmes was concern over the assessment and credibility of online education. This is illustrated below:
Assessment
Some of the informants perceived online learning as incapable of verifying that the registered e-learner was in fact the actual candidate being examined. For this reason, the informants questioned the reliability and trustworthiness of online education. They stated:

I think many people don't have enough time, so they get another person to learn for them, which is a disadvantage. For example, if I don't have time and I need to pass my exam, I can call somebody to learn for me (laughs). For instance, we have an e-learning programme in our company, and my boss gets me to do the e-learning activities for him (laughs). -Anna, 25, Electronics engineer
Some of the informants were concerned about how teachers could assess online students' understanding or how online students could self-assess their learning progress. One woman stated:

In the context of e-learning, I think it's a little hard to check the results. In the classroom, the teacher will ask questions and students will answer. Then, the teacher will know how the students are doing. For me, it's hard to assess the student's level after the completion of training.
Credibility of online MBA programmes
Another theme that surfaced was the issue of credibility and the reputation of online MBAs, especially for employers. The following was stated:

You know, in China, fraud is common. Even the course, certificate, or degree delivered to you could be falsified. Online learning is invisible. You cannot touch it, so some people will think it's not credible and that you may not be qualified for a job or worthy of a position with it. -Rebecca, 33, Project manager
I think people trust famous brands; they are the ones with some internationally qualified certification. In this university, our school has its MBA education certified from the system and I think even e-learning platform resources can get the same tool. If they got…, people will judge me as somebody who graduated from a famous school, from a very high platform. If people think or feel that you graduated from a good education agency, they judge your qualifications differently. -Veronica, 25, Program manager
Discussion
The findings of this study present several important issues to be discussed. The following section discusses these results as per the research objectives:
RQ1 -What are Chinese professional women's experiences with work-family conflict and career aspirations?
The findings show that balancing the time allotted for different roles was the predominant cause of fatigue and stress. Work and caring for children and the elderly left little time for study or relaxation. As a result, working mothers risk minimising their social responsibilities when stretching their roles, which in turn affects their prospects of progressing their careers. These findings support Ng et al's (2002) finding that marital and familial roles impact women's progression up the organisational ladder. Nonetheless, undertaking an MBA programme may help these women develop skills to progress their careers.
RQ2 -What is the perceived usefulness of an online MBA programme in terms of balancing work-family demands and career aspirations?
Online MBA programmes are vital in allowing women to balance work-family demands with career aspirations, providing the necessary flexibility in terms of time and location to enable them to combine demanding work schedules with family life.
Indeed, the informants believed that e-learning is a viable tool owing to its flexibility, information availability, and time effectiveness. These findings support Home's (1998) research on online education and work-family balance. However, one finding contradicts Romero (2011), who argues that distance learners have poorer time management because they lack the time structuring that face-to-face students experience, and that e-learning therefore does not help reconcile the conflict between work, family, and studies.
Nevertheless, concerns were raised by the majority of informants in this study over the lack of face-to-face interaction and networking in e-learning, similar to other studies (McGorry, 2002). The dissatisfaction arose because, in the special case of China, both men and women have to develop a network of guanxi contacts and obligations in order to develop future career opportunities, as explained by Granrose (2007). The guanxi (network) differs from other networks in that these social ties have a long-term perspective, are slower to develop and dissolve, and involve a deeper sense of obligation and reciprocal loyalty than is usually present in non-Chinese individuals' concepts of network ties. In addition, China's collective society encourages teamwork and togetherness. There appears to be a misconception that e-learning cannot facilitate student-teacher collaboration or teamwork (Intel, 1997). Furthermore, some informants cited concerns regarding self-motivation. McCall (2002) found that self-motivated learners are more likely to succeed in online learning settings.
RQ3 -What is the perception of e-learning in terms of ease of use?
Contrary to the initial expectation of this study that perceived ease of use would shape attitudes towards, and acceptance of, online MBA programmes, ease of use was not a salient concern for the informants. In this respect, the findings differ from related studies (e.g., Ong & Lai, 2006).
The informants expressed general comfort and confidence with computer usage and the ability to engage in e-learning. This could be because the informants were already experienced computer users due to the demands of their daily work, and some were familiar with e-learning from job training. Additionally, many of the informants were from engineering and technology backgrounds with computer knowledge from either work or previous higher education. These findings support prior studies that have shown computing experience to be a strong predictor of attitudes toward computers, computer usage (Whitley, 1997), and subsequent adoption.
Perceived quality of online MBA
Investing in online MBA education is believed to bring valuable returns for future careers. Some informants raised concerns over the quality of online MBA programmes related to the authenticity of the platform for testing students' progress.
In other words, how do we ensure that the registered student is the same student being tested? This is particularly of interest in the Chinese case, perhaps due to its long history of examinations, where many pre-defined standards have traditionally been set by educational agencies using strict measures to examine individuals' progress.
Research and practice implications
This study has contributed to our understanding of the implications of online MBA programmes for Chinese professional working mothers in a number of different ways. It has supported the findings of other researchers (e.g., Home, 1998) that online education is a valuable vehicle for career development when balancing work and family. The findings of this study provide an interesting direction for future research regarding the development and delivery of online MBA programmes and remedies for resolving the work-family and study conflict.
Implications for China's business schools
Leading business schools around the world are increasingly providing women-focused centres. Chinese business schools would benefit from catching up with this trend to lure more women into MBA programmes. Chinese and international business schools should consider marketing to this audience the convenience of online MBA programmes in balancing work-family demands with career aspirations, for which more research is warranted.
This study's findings emphasise the importance of building reputable online learning brands to counteract the fears surrounding quality issues in China. Chinese business schools should consider seeking e-learning accreditation and certifications to further enhance the perceived trust of e-learning as benchmarked by famous and reliable online higher education brands, such as Harvard and MIT.
Implications for online MBA agencies and e-learning practitioners
Due to the lack of perceived networking in online education, higher education institutions in China will need to effectively communicate the versatile and interactive possibilities of online MBAs via marketing campaigns, including web chats, forums, webinars (web seminars), and video conferences. More research is needed to address the ways in which professionals can effectively network and maintain contacts online.
Business schools may want to appeal to the perceived usefulness and more supportive environment of collaborative uses of the medium as a way to attract more women to their programmes (Ong & Lai, 2006).
Additional research is also needed to investigate ways in which e-learning lessons can simulate the real-world working environment and labour market requirements with a minimal gap. Online education researchers could consider incorporating lessons on business soft skills, which increase confidence and develop networking skills that can add value to women's managerial careers. However, e-learning agencies need to address the issue of assessing e-learners against learning outcomes, given the strong history of examinations and fear of fraud in China. It would be interesting to explore whether these factors influence the perceptions of employers when recruiting or promoting online MBA graduates. Future research could also examine the estimated returns of online MBAs versus classroom-taught MBAs in terms of career and salary increments for both men and women. Similar studies could also compare different national contexts.
Further consideration needs to be paid to mitigating the lack of motivation to engage in online learning, which is important given the already constrained time of a working mother. More research is needed to explore various learning styles, considering that Chinese students, as members of a collective society, tend to prefer group-oriented learning styles.
Furthermore, e-learning designers need to be creative regarding interactive multimedia functions such as animations if the programmes are to remain viable.
The participants also highlighted the importance of easy to use e-learning systems. E-learning developers will need to consider user-friendly e-learning environments, such as simple navigation, a robust e-learning infrastructure, and an attractive graphical user interface (GUI). In addition, the availability of support services, such as help desk facilities and troubleshooting, go-to manuals, and 'frequently asked questions' (FAQs) are highly recommended. This would not only help save time when finding lessons across the platform but would also offer motivation to sit through the lessons.
Implications for professional working mothers
Although online MBA programmes permit greater flexibility in terms of learning time and space, which potentially allows Chinese professional working mothers to fulfil multiple roles, the traditional time constraint arising from the uneven division of domestic responsibilities and caring remains. This finding supports previous recommendations that it is necessary to seek wider systems of social support for balancing work-family demands and career aspirations, including having supportive managers and support networks outside the workplace, flexible work hours, access to challenging assignments, and influential decision makers and having clearly defined requirements for advancement and career paths (Ibeh et al, 2008).
Limitations
This study has the following limitation. While the use of the snowball sampling method proved most suitable for this study, it introduced bias, as the method reduces the likelihood that the sample will represent a good cross-section of the population. This shortcoming was manifested in this research by the predominance of students from science backgrounds. Thus, these findings may not necessarily be applicable to women across other professional fields, such as the social sciences or arts.
Conclusions
The findings of this study confirm that work-family conflict is a significant problem for many female career aspirants. Many women are faced with the demands of performing multiple roles, such as employee/employer, wife, mother, daughter to elderly parents, and student. Overall, the findings indicate a positive attitude toward online MBA programmes as a viable tool for facilitating work-family balance and career aspirations and as a rich source of information. However, students' motivation for online learning, the lack of networking provision, and the perceived lack of quality assessment require consideration.
Although e-learning is already widely used in work settings and continuing education in China, it still lags behind in higher education, which is traditionally a conservative institution. Thus, China's societal culture has not yet fully accepted e-learning as an alternative to in-classroom education, in contrast to its popularity in other parts of the world with established higher education e-learning, such as the United Kingdom (the Open University), Spain (the Universidad Nacional de Educación a Distancia), or the Korean National Open University. It should be noted, however, that the transition from traditional classroom learning to e-learning cannot occur instantaneously and requires time for users to adjust (Arbaugh, 2004). This issue might be perceived differently by the cyber generation, who have never lived in a world without cyber technology, a question that needs further research. This paper is intended to serve as a timely reminder to Chinese business schools and other key stakeholders of the need to revisit discussions of gender-supporting policies and innovative e-learning delivery methods tailored to the needs of professional working mothers in higher education, a group constantly overlooked by policymakers. In opening these discussions, higher education institutions and companies will be addressing the pressing issue of the continuing under-representation of women in business schools and managerial positions. This paper provides a fertile field for scholars to develop future theories and advance research and knowledge.
Is TKA femoral implant stability improved by pressure applied cement? a comparison of 2 cementing techniques
Background The majority of knee endoprostheses are cemented. In an earlier study the effects of different cementing techniques on cement penetration were evaluated using a Sawbone model. In this study we used a human cadaver model to study the effect of different cementing techniques on relative motion between the implant and the femoral shaft component under dynamic loading. Methods Two different cementing techniques were tested in a group of 15 pairs of human fresh frozen legs. In one group a conventional cementation technique was used and, in the other group, cementation was done using a pressurizing technique. Under dynamic loading simulating real-life conditions, relative motion at the bone-implant interface was studied at 20 degrees and 50 degrees of flexion. Results In both scenarios, the anterior relative motion was significantly increased by pressure application; the same was true distally at higher loads. No significant difference could be measured posteriorly at 20°. At 50° flexion, however, pressurization reduced the posterior relative motion significantly at each load level. Conclusion The use of the pressurizer does not improve the overall fixation compared to an adequate manual cement application. The effect depends on the load and flexion angle and varies in magnitude between the interface zones.
Background
Osteoarthritis is the most common joint disease worldwide, affecting 344 million people [1]. The knee is the most frequently involved joint and accounts for 50% of cases [2]. Total knee arthroplasty is one of the most successful interventions for restoring knee joint function and reducing pain after nonsurgical treatment options have been exhausted and the patient's quality of life has been permanently impaired [3]. In 2020 there were 111,365 primary knee replacements and 13,767 revisions registered in the German Arthroplasty Registry [4]. Cementation remains the gold standard in knee arthroplasty [5], even though numerous publications have reported comparable survival and functional outcome for both cemented and cementless fixation methods [3,6,7]. This trend is also confirmed in reports from several prosthesis registries, in which cementation was used in 68% to 94% of their cases [4,8,9]. Cementing technique has been the subject of scientific research for many years because it affects the crucial interface between prosthesis and bone and is intended to enable long term survival [10]. There are many factors influencing good cementation results including the type of cement, viscosity, volume used, mixing procedures, temperature, humidity, jet lavage, timing, speed and force during impaction of the components and handling of the cement [11][12][13][14]. Although loosening of the femoral components accounts for only 4.6% of revisions [4], the continuously increasing number of implantations and revisions, in addition to earlier reports of significantly higher loosening rates of modern high-flex prostheses, has justified detailed investigations of methods for optimizing prosthesis fixation [15][16][17][18]. Also, recent publications have reported a higher incidence of radiolucent lines accompanying a new prosthesis design when compared to its predecessor, although the clinical and biomechanical significance of this is currently unclear [19,20].
Studies have indicated that applying cement to both the bone and the implant prior to implantation is advantageous in TKA [13,14,21,22]. Also, several authors have demonstrated that using a cement gun is advantageous [14,23,24]. In a publication from 2019, Schwarze et al. were able to demonstrate in a Sawbone® model of knee arthroplasty that cement application with a pressurizer creates a more homogeneous cement coating and adequate cement penetration [12], thereby confirming positive results from prior studies [23][24][25]. However, the Sawbone® model used only partially reflects the in-vivo scenario with respect to the cement-bone interface. The aim of the current study was to further investigate the effects of pressurized application of cement to human femoral cadaver specimens, in particular the effects on implant stability and relative motion at the implant/bone interface during loading.
General
This study was performed in accordance with the Declaration of Helsinki and approved by the local Ethics Committee (Ethikkommission der Medizinischen Fakultät Heidelberg, reference S-351/2018). The tissue samples were obtained from Science Care (Phoenix, AZ, USA), which is accredited by the American Association of Tissue Banks. All donors and/or their legal guardian(s) provided informed consent prior to sample acquisition.
In 15 pairs of fresh frozen human legs the Attune total knee replacement system (DePuy Synthes, Warsaw, IN, USA) was implanted by a surgeon experienced in the surgical technique. Preservation of biomechanical properties prior to the experimental period was ensured by frozen storage [26]. In a randomized manner, two different cementation techniques (Groups A and B) were used for the implantation of the femoral component of the Attune system.
Group A consisted of specimens in which the articular surfaces of the femoral components and of the femoral condyles all had conventional cement application using a cement gun. Both surfaces were covered with cement, as this has been shown to provide the best results [13]. Also, the cement gun has been shown to provide superior cementation of the bone to finger packing [14,23,24].
Group B had the cement applied to the distal femur with a pressurizing nozzle added to the cement gun. The cement was applied to the femoral component in the conventional manner as in group A. (Fig. 1).
More details on the cementation technique are provided below.
The right and left sides of the 15 leg pairs were randomly allocated to group A or B by means of a computer-generated list (Randlist 1.2; Datinf GmbH, Tübingen, Germany). The donors had a mean age of 68.3 ± 11.5 years, a mean height of 174.4 ± 10.9 cm, a mean weight of 75.1 ± 16.4 kg, and a mean body mass index of 24.6 ± 4.7 kg/m².
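The per-pair allocation can be sketched as a reproducible random assignment of one side to each technique, with the contralateral side receiving the other; the snippet below is only an illustration of the idea and is not the Randlist software used in the study.

import random

random.seed(2018)  # fixed seed so the allocation list can be reproduced

allocation = {}
for pair_id in range(1, 16):  # 15 donor leg pairs
    left_group = random.choice(["A", "B"])            # A: conventional, B: pressurized
    right_group = "B" if left_group == "A" else "A"   # contralateral side gets the other technique
    allocation[pair_id] = {"left": left_group, "right": right_group}

print(allocation[1])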
The bone mineral density (BMD) was assessed for both groups to improve the comparability. Franck et al. showed a high correlation between standard dual-energy absorptiometry (DXA) at the hip and various locations such as the extremities [27]. Therefore, we measured bone mineral density using DXA with standard hip parameters (Hologic QDR-2000, Marlborough, Massachusetts, USA). For all 30 knee joints, native radiographs in anterior-posterior (a.p.) and lateral projections were obtained to exclude bone pathology that would preclude a knee prosthesis and to determine prosthesis size using TraumaCad software (Voyant Health, Ltd., Brainlab AG, Munich, Germany). The same prosthesis size was planned and implanted on the right and left side of each leg pair. The following prosthesis sizes were used: 5 × size 5, 3 × size 6, 3 × size 7, 4 × size 8. Postoperative radiographs were performed to verify the implantation result and to exclude intraoperative fractures.
Cementing procedure
Prior to surgery, the human legs were thawed to room temperature. To standardize the experimental conditions and the surgical steps, all adjustments and resection measurements were documented and repeated on the contralateral side. Bone stock preparation and implantations were performed according to the prosthesis manufacturer's surgical instructions. The entire prosthesis was implanted and the femur and tibia were subsequently separated for testing. Prior to cementation, the cancellous bone was cleaned of lipid deposits, blood and bone debris using the OptiLavage system (Zimmer Biomet Holdings, Warsaw, Indiana, USA) and superficially dried with a compress until immediate cement application. The implantation of the femoral components for both Groups A and B was performed with a vacuum mixed high viscosity bone cement (Optipac 40 Refobacin Bone Cement R, Zimmer Biomet Holdings, Warsaw, Indiana, USA). The cement was applied early (in other words directly after the waiting phase) using cement timing for vacuum mixed cement at a room temperature of 21.2 ± 0.2 °C. We applied the bone cement to the non-articulating surface of the femoral components in Groups A and B 80 s after starting the mixing process. In the next step for Group A, the cement was applied to the prepared bone at 110 s using the above described cementing technique. In Group B, a cement gun with cement cartridge was also used, but a pressurizer nozzle was attached to the conventional nozzle to apply the cement to the bone in a no-touch technique (no manual manipulation of the cement after application) at 110 s ( Fig. 2).
In group A, a homogeneous and uniform layer of PMMA cement was applied to the femoral bone stock with a cement gun (Optigun, Zimmer Biomet, Warsaw, Indiana, USA) medial and lateral from anterior to posterior for complete coverage. In addition, a homogeneous and uniform layer of cement was applied to the entire inner surface of the femoral knee component medially, laterally, and anteriorly transversely with the cement gun. On both the component and the bone, manual modelling with clean medical gloves was performed to ensure even coverage. In group B the cement gun was modified to deliver cement at an increased pressure by the attachment of a pressurizing nozzle with 23-degree angled tip. The cement was applied to the femoral surface under pressure, and the amount of cement was standardized to assess the influence of the cementing technique. An identical amount of cement was used in Groups A and B. The cement was applied to the femoral component in the same way in both groups [12].
The impaction of the femoral component was performed 140 s after start of mixing. The femoral component was impacted until the edges of the cement pockets were in contact with the distal bony resection surface. Excess cement was removed, and the trial liner insert was placed on the previously implanted tibial component. Subsequently, the leg was placed in extension position at 240 s after start of mixing, where the cement was allowed to harden.
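The timing of the cementing steps, keyed to the start of mixing, can be summarised as a simple schedule; the values below are taken from the description above, and the code is merely an illustrative way of laying them out.

cement_schedule_s = {
    80: "apply cement to the non-articulating surface of the femoral component (groups A and B)",
    110: "apply cement to the prepared bone (cement gun in group A, gun plus pressurizer nozzle in group B)",
    140: "impact the femoral component until the cement pocket edges contact the distal resection surface",
    240: "place the leg in extension and allow the cement to harden",
}

for t, step in sorted(cement_schedule_s.items()):
    print(f"t = {t:3d} s after start of mixing: {step}")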
Load simulation and determination of relative motion
After the cementing procedure, the tibia and femur were separated, and the soft tissues removed. Afterwards, the specimens were cast in a mold using synthetic resin (Rencast FC 53, Huntsman Advanced Materials GmbH, Germany), in order to secure the specimens in the material testing machine. For the assessment of implant stability, an incremental dynamic load was applied at 1 Hz for the axial force with simultaneous extension-flexion between 20° and 50°, as had been done in a prior study [28]. The load maxima occurred at the time of extension and flexion, respectively [29]. A force representing daily stair climbing [30][31][32] was applied using a servohydraulic testing machine (MTS 858 Mini Bionix II, MTS Systems Corporation, Eden Prairie, USA) (Fig. 3). A preload of 200 N was applied before cyclic loading was started with the four load levels 1200 N, 1500 N, 1800 N, and 2100 N. The maximum load level corresponded to the force exerted on the knee of a person with a body weight of 75 kg during stair climbing [32]. The selected body weight for the load simulation corresponded to the average donor body weight. Optical markers were placed on the bone and the adjacent implanted component as shown in Fig. 3. The determination of the three-dimensional relative motion between the femoral component and bone was performed using an optical, camera-based system (PONTOS-GOM - Gesellschaft für Optische Messtechnik mbH, Braunschweig, Germany). Figure 3 shows the implant and bone markers (A: anterior, B: distal, C: posterior) of the three analyzed zones. The system is calibrated to a measurement volume of 250 × 200 mm². The markers on the object to be measured were located in the center of this defined volume. Each of the markers was detected in greyscale by a stereo camera system, and a 3D point triangulation was done to calculate the 3D marker position and displacement vector in the defined coordinate system. The relative motion was calculated from the corresponding implant and bone markers. The two cameras of the stereo system each operate with a resolution of 2448 × 2050 pixels and a measuring accuracy of 1 µm according to the manufacturer's specifications [33]. However, the measuring accuracy of an optical measuring system depends strongly on the environmental conditions. Under laboratory conditions, we achieved a measuring accuracy of ± 4.9 µm for the test setup used. All measurements were done at the medial side. The calculated results of the resultant maximum relative motion were normalized to the right femur for maximum extension and flexion (20°, 50°).
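The relative-motion computation can be sketched as follows: given the 3D positions of a bone marker and the adjacent implant marker in a reference frame and at peak load, the relative motion is the change in their difference vector. The coordinates below are made up for illustration, and the function is a simplification of the optical analysis, not the PONTOS software itself.

import numpy as np

def relative_motion_um(bone_ref, implant_ref, bone_load, implant_load):
    # Resultant relative motion (in µm) between implant and bone markers:
    # the change of the implant-to-bone difference vector between the
    # reference frame and the loaded frame (coordinates given in mm).
    d_ref = np.asarray(implant_ref) - np.asarray(bone_ref)
    d_load = np.asarray(implant_load) - np.asarray(bone_load)
    return float(np.linalg.norm(d_load - d_ref)) * 1000.0

# Illustrative marker coordinates in mm (not measured data)
bone_ref, implant_ref = (0.0, 0.0, 0.0), (5.0, 0.0, 0.0)
bone_load, implant_load = (0.010, 0.002, 0.0), (5.050, 0.020, 0.005)
print(round(relative_motion_um(bone_ref, implant_ref, bone_load, implant_load), 1))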
Statistical analysis
Prior to the start of the experimental study, a sample size calculation was performed using G*Power 3.1 (University of Kiel, Germany) [34] based on the data reported by Schwarze et al. [12]; the calculated sample sizes differed (7, 9, 11). The data were evaluated descriptively using the arithmetic mean, standard deviation, minimum and maximum. Prior to analysis, the normal distribution of the data was evaluated using a Shapiro-Wilk test and the homogeneity of variance was verified using the Levene test. We conducted a two-tailed t-test for independent samples to assess effects between both groups on the parameters BMD and relative motion within each load level, flexion angle and fixation zone. All data were analyzed using SPSS 25 (IBM, Armonk, NY, USA) with a significance level of p < 0.05.
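A minimal sketch of this test sequence (normality check, homogeneity of variance, then an independent two-sample t-test), using SciPy and simulated relative-motion values rather than the study data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(50, 10, 14)  # simulated relative motion (µm), conventional cementing
group_b = rng.normal(65, 12, 14)  # simulated relative motion (µm), pressurized cementing

# Normality (Shapiro-Wilk) and homogeneity of variance (Levene)
print(stats.shapiro(group_a).pvalue, stats.shapiro(group_b).pvalue)
print(stats.levene(group_a, group_b).pvalue)

# Two-tailed t-test for independent samples, alpha = 0.05
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")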
Results
Fifteen fresh frozen pairs were acquired to carry out the experiments. During the radiographic evaluation, one pair was excluded due to a bone lesion and a subsequent fracture during experimentation. The remaining 14 pairs were evaluated.
Bone density
Testing the density differences between the two test groups using the Shapiro-Wilk test resulted in a p-value of 0.4. Thus, a normal distribution of the difference in bone density in both groups was confirmed. The following paired t-test showed no significant difference in bone density between the two groups (t(14) = -0.449, p = 0.66, d = 0.12).
Relative motion
The resulting (XYZ) maximum relative motion between implant and bone was analyzed for all four load levels with a total of 4000 cycles. The points in time of the maximum extension-flexion values (20° and 50°) with simultaneous maximum axial load were analyzed. For the femoral components, the evaluation was further divided into anterior, distal and posterior regions.
Femur 20º
The check for normal distribution with the Shapiro-Wilk test yielded (α = 0.05) normally distributed data. Therefore, a t-test for dependent samples was used. The test showed the following values for the load levels examined (Table 1).
Femur 50º
The check for normal distribution with the Shapiro-Wilk test yielded (α = 0.05) normally distributed data. Therefore, a t-test for dependent samples was used. The test showed the following values for the load levels examined (Table 2).
Discussion
Although femoral component loosening is a relatively uncommon occurrence and constitutes only 4.6% of all total knee arthroplasty (TKA) revisions, it is a relevant complication based on the constantly increasing number of implantations and revisions [4, 12]. Studies have shown that cementation technique has a significant impact on femoral component loosening [14, 22]. Our results with human fresh-frozen specimens and cement pressurization show that at 20 degrees and 50 degrees of flexion, with incrementally increasing load, there is an increase in the relative motion between implant and bone for both cementation techniques. In addition, the increase in relative motion both anteriorly and distally with higher loads is significantly higher in the pressurized cementation group compared to non-pressurized cementation. In contrast, posteriorly at 50 degrees of flexion, the relative motion between implant and bone is significantly reduced with cement pressurization compared to non-pressurized cementation. Our measured values of relative micromotion between the bone and the components fell between 7 and 46 µm, which are comparable to values recorded in prior studies [35-37].

Table 1 Anterior, distal and posterior relative motion at 20° flexion of the femur depending on the load. The maximum anterior relative motion at 20° flexion occurred at 2100 N for both groups: 53.7 µm for the group without a nozzle and 130.7 µm for the group with nozzle. The maximum distal relative motion at 20° flexion occurred at 2100 N for both groups: 48.8 µm for the group without a nozzle and 64.3 µm for the group with nozzle. The maximum posterior relative motion at 20° flexion occurred at 2100 N for both groups: 56.6 µm for the group without a nozzle and 54.3 µm for the group with nozzle.
There have been numerous prior in-vitro and invivo studies of the primary stability of hip and knee arthroplasties, using radiostereometric analyses (RSA) and optical measurements as utilized in our current study [37][38][39][40][41][42]. These have shown that many factors influence primary stability, including implant design, bone density, surgical technique and cement penetration [36,43,44].
Pressure application during cementation has been shown to be effective in increasing cement penetration [12, 23-25]. In 2019, Schwarze et al. published data on three different cementation techniques in a Sawbone® model [12] and showed that pressure application improved cement penetration in all zones of the Knee Society Scoring System (KSSS). Although the Sawbone® model had a bone structure similar to human cancellous bone [45], it differed from physiological bone in other properties and therefore may not reflect cement penetration in the clinical setting. This study demonstrates that the cementation technique can significantly influence the degree of relative motion at the bone/femoral component interface under differing loading conditions. We found a significant reduction of relative motion posteriorly only at 50 degrees of flexion at all load levels with pressurization, which we attribute to increased stability resulting from the increased compression load at this degree of flexion. All other values of relative motion at both 20 and 50 degrees of flexion showed decreased relative motion in the non-pressurized samples, mostly to a significant degree. These latter findings were unexpected, and we have no good explanation. This may be related to differing elastic and plastic deformation in different areas of the bone-implant construct. Less movement in the posterior region may result in more pronounced movement in the other regions. Further investigations would be helpful in this regard. Improved distribution of bone cement using pressure application [12] can significantly affect force transmission at the cement-host bone interface [46]. In a finite element analysis, Schultze et al. described the influence of cement thickness and prosthesis positioning, with the highest von Mises stresses anteriorly [46]. Our results showed that using a pressurizer only achieved a significant reduction of the relative motion between the implant and bone in the posterior region. The very narrow posterior intercondylar space precludes accurately controlled manual cement application into cancellous bone during the surgical procedure. Our results suggest that pressurized clinical application might be helpful for improved cementation in the posterior region only. The clinical significance of our documented differences in relative motion is unclear.

Table 2 Anterior, distal and posterior relative motion at 50° flexion of the femur depending on the load. The maximum anterior relative motion at 50° flexion occurred at 2100 N for both groups: 78.1 µm for the group without a nozzle and 150.9 µm for the group with nozzle. The maximum distal relative motion at 50° flexion occurred at 2100 N for both groups: 55.6 µm for the group without a nozzle and 81.3 µm for the group with nozzle. The maximum posterior relative motion at 50° flexion occurred at 2100 N for both groups: 79.
We cannot determine any clear association between our results and the occurrence of radiolucent lines noted radiographically. Hoskins et al. reported the majority of radiolucent lines distally (34.5%) and anteriorly (6.9%) while Staats et al. described them as being predominantly posteriorly located (12%) [19,20]. The authors therefore do not see any noticeable association with the results of the current study.
Limitations
Although our experimental set-up mimicked the clinical situation as much as possible, the physiological effect of the surrounding soft tissues, differences in bone density and occurrence of bleeding could not be reproduced, limiting the extrapolation of our results to the clinical scenario.
Only two flexion angles were tested, unlike the physiological state which has a much greater range of motion.
The Attune knee replacement system was the only system tested and the results may vary with other systems.
Our incrementally increasing loads for 1000 cycles represent the immediate postoperative period only and do not reflect micromotion that could occur over the long term postoperatively.
Conclusions
Pressure application of bone cement changes the relative motion at the implant-bone interface in all areas. The change varied with the degree of loading and the joint flexion angle and differed in the anterior, distal and posterior bone/component interface zones (see Table 2). Our results suggest that the use of the pressurizer did not improve the overall fixation compared to an adequate application using a cement gun, with the possible exception of the posterior zone. The posterior region was the only area that displayed a significant reduction of micromotion with pressurized cement application during flexion. Therefore, we suggest that application of cement with a pressurizer may be advantageous in this region only, where the narrow intercondylar space makes satisfactory manual application or use of a cement gun without a pressurizer difficult. An improved cementation technique may further decrease the component loosening seen clinically. Additional studies are suggested to investigate this further.
|
2023-01-22T06:16:03.536Z
|
2023-01-21T00:00:00.000
|
{
"year": 2023,
"sha1": "430cded3ef26781869f349abf6ff5b4670b6ba88",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "ac2ee54d3e652cf01c6fd7f222882b06074144ef",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
244748207
|
pes2o/s2orc
|
v3-fos-license
|
Factors Related to Work Stress among Health Office Employees during Covid-19 Pandemic Faktor yang Berhubungan dengan Stres Kerja pada Pekerja Dinas Kesehatan di Masa Pandemi COVID-19
Introduction: Studies of work stress usually focus on the industrial sector. However, workers in other sectors, such as government organizations, also have the potential to experience stress because of their jobs. During the COVID-19 (Coronavirus Disease) pandemic, the Health Office is one of the main stakeholders in handling and preventing COVID-19. The COVID-19 pandemic can cause work stress due to unachieved health programs and large demands to develop programs related to the pandemic. This study analyzed the relationship between individual factors, work factors, and factors outside of work and the level of work stress among Health Office employees. This study is expected to be able to analyze work stress and its determinants as early as possible. Methods: This study was a cross-sectional study using the Depression, Anxiety, Stress Scale 42 (DASS-42) and the NIOSH (National Institute for Occupational Safety and Health) Generic Job Stress Questionnaire. It was conducted at the Public Health Office Bogor Regency in April 2021 and used a total sampling method on employees of the Public Health Office Bogor Regency, with a total of 135 respondents. Data analysis in this study was performed using multiple logistic regression. Results: 86.67% of respondents did not experience work stress, 5.93% of respondents had mild work stress, and 7.41% of respondents experienced moderate work stress. Moreover, workload (p = 0.0001) and social support (p = 0.011) had a significant relationship with increased work stress. Conclusion: Workload was the most dominant variable affecting work stress; workers with a high subjective workload were 33.63 times more likely to experience work stress than workers with an appropriate workload. Prevention of occupational stress can be done by adjusting workloads and building a good social environment between colleagues.
INTRODUCTION
There are several kinds of stress that can be caused by the Coronavirus Disease-19 (COVID-19) pandemic: academic stress experienced by students, work stress experienced by workers, and stress in the family (Lutfida, 2020). In fact, work stress is not a new phenomenon, yet it still threatens the health and well-being of workers. Work stress can be defined as harmful physical and emotional responses arising from incompatibility between the work and the abilities, resources, and needs of the worker (Lady, Susihono and Muslihati, 2017). The Health and Safety Executive survey in 2017-2018 showed that work-related stress and depression reached 595,000 cases, with 1,800 cases in every 100,000 workers (Health & Safety Executive, 2018).
Studies related to occupational stress usually focus on the industrial sector. Meanwhile, workers in other sectors, such as government organizations, may also experience work-related stress (Reppi, Sumampouw and Lestari, 2020). In fact, job activities in government organizations that rely heavily on cognitive abilities make the work more diverse and increase work stress (Jundillah et al., 2017). A study of work stress in government organizations, especially in Health Offices, showed that workers could experience different levels of work stress. Sorongan, Suoth and Boky (2018) found that 27.7% of workers at the Manado Health Office experienced mild work stress, while the other 72.3% experienced moderate work stress. Unfortunately, that study did not explore the factors that cause stress among Health Office employees more comprehensively.
As one of the main government stakeholders in handling and preventing COVID-19, the Health Office demands hard and rapid work from all of its employees during this pandemic era (Akbar, 2020). Furthermore, the pandemic has forced several changes in health programs to adjust to the pandemic situation. How a country deals with the COVID-19 pandemic also places a heavy share of the burden on workers (Sinclair et al., 2020). This kind of situation increases job demands, which can cause burnout symptoms such as boredom and work stress (Nugroho, 2021). This study is, therefore, expected to analyze work stress and its determinants among Health Office workers, as one type of government organization, as early as possible. Thus, appropriate and effective prevention strategies can be carried out to overcome work stress, especially during the COVID-19 pandemic. Work stress that is successfully managed is expected to increase worker productivity in providing public services in the health sector and to help workers realize the health programs that have been planned in order to improve the health status of the population under the jurisdiction of the Health Office.
METHODS
This research was a cross-sectional quantitative study conducted at the Public Health Office Bogor Regency during April 2021 using two standard instruments, namely the Depression, Anxiety, Stress Scale 42 (DASS-42) to measure work stress levels and the National Institute for Occupational Safety and Health (NIOSH) Generic Job Stress Questionnaire to analyze the determinants of work stress. This study used a total sampling method from an accessible population of 145 employees. After distributing the instruments, the number of samples obtained was 135 employees. The conditions that made some employees unable to fill out the instruments were job transfers, illness, unpaid leave, and unwillingness to become respondents. However, the number of samples obtained already met the minimum sample size, which was calculated with a hypothesis test for two population proportions based on the proportions of work stress among workers with inappropriate and appropriate workloads reported in a previous study. The result of this calculation showed that the minimum sample size was 57 respondents, so the obtained sample could be generalized.
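The minimum sample size for a hypothesis test of two population proportions can be sketched as below. The proportions passed in the example call are hypothetical placeholders, not the values from the cited previous study, and the study itself reports a calculated minimum of 57 respondents.

```python
import math
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Minimum sample size per group for a two-sided test of two proportions."""
    z_a = norm.ppf(1 - alpha / 2)          # critical value for the significance level
    z_b = norm.ppf(power)                  # critical value for the desired power
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Hypothetical proportions of work stress (inappropriate vs. appropriate workload)
print(n_per_group(0.40, 0.15))
```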
The two instruments used in this study had been tested for validity and reliability. The DASS-42 showed r values > r table (0.361) for all statements in the Pearson Product Moment Test and had an alpha coefficient of 0.879, so it could be declared valid (Sedana, 2018). Meanwhile, the NIOSH Generic Job Stress Questionnaire was considered valid and reliable because it had been validated by several researchers for use in various types of jobs, with a validity value of 0.68-0.91 and a short rating scale with a reliability of 0.53 in Indonesia (Hasanah, Rahayuwati and Yudianto, 2020). This study used several variables that can be measured by the NIOSH Generic Job Stress Questionnaire, namely age, sex, work period, personality type, physical environment, career development, role conflict, interpersonal conflict, workload, social support, and non-work activities.
Data in this study consisted of primary data and secondary data.Primary data were obtained through online questionnaires filled out by the employees and interview, while secondary data were obtained from the database of the Public Health Office Bogor Regency.Primary data that had been collected were analyzed to determine the correlation between independent variables and work stress as well as the most dominant variables affecting it.Data analysis in this study was performed using multiple logistic regression.The scoring results of the DASS-42 instruments categorized stress into 5 categories, which were no work stress, mild work stress, moderate work stress, severe work stress, and very severe work stress.However, in the correlation analysis between factors and work stress in this study, these categories were divided into two groups, which were no work stress and work stress.Thus, the results of the study showed what factors were related to work stress without considering the level.
Before entering the multiple logistic regression, a chi-square test was carried out to analyze the independent correlation between the independent variables and work stress. The chi-square test was also performed as a selection of predictor variables for the multiple logistic regression analysis. Only variables with a p value < 0.20 were included in the multiple logistic regression model. This study has Ethical Clearance No. 2021.01.1.0591 from the Ethical Committee of the Medical Faculty of Udayana University.
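A minimal sketch of the screening-and-modelling workflow described above (chi-square pre-selection at p < 0.20, then multiple logistic regression), using pandas, SciPy, and statsmodels instead of SPSS; the data frame layout and column names are illustrative assumptions.

```python
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

def screen_and_fit(df, outcome="stress", candidates=None):
    """Chi-square pre-selection (p < 0.20), then multiple logistic regression."""
    candidates = candidates or [
        "age_group", "sex", "work_period", "personality", "physical_env",
        "career_dev", "role_conflict", "interpersonal_conflict", "workload",
        "non_work_activities", "social_support",
    ]
    selected = []
    for var in candidates:
        # Chi-square test of independence between each candidate and work stress
        table = pd.crosstab(df[var], df[outcome])
        _, p, _, _ = chi2_contingency(table)
        if p < 0.20:                      # entry criterion for the multivariable model
            selected.append(var)
    formula = f"{outcome} ~ " + " + ".join(f"C({v})" for v in selected)
    model = smf.logit(formula, data=df).fit(disp=False)  # multiple logistic regression
    return selected, model

# Usage with a hypothetical respondent-level data frame `df`:
# selected, model = screen_and_fit(df)
# print(model.summary())
```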
Work Stress
Work stress levels were categorized into five categories according to the score obtained on the DASS-42 instrument: no work stress (score 0-14), mild work stress (score 15-18), moderate work stress (score 19-25), severe work stress (score 26-33), and very severe work stress (score > 33). The results showed that no employee experienced severe or very severe work stress: 117 respondents (86.67%) did not experience work stress, 8 respondents (5.93%) had mild work stress, and 10 respondents (7.41%) experienced moderate work stress.
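The scoring bands above translate directly into a small helper function; only the thresholds come from the text, while the function name and labels are illustrative.

```python
def dass42_stress_category(score: int) -> str:
    """Map a DASS-42 stress score to the categories used in this study."""
    if score <= 14:
        return "no work stress"
    elif score <= 18:
        return "mild"
    elif score <= 25:
        return "moderate"
    elif score <= 33:
        return "severe"
    else:
        return "very severe"

# e.g. dass42_stress_category(17) -> "mild"
```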
Work Stress Complaints during the Pandemic
This study compared the level of stress complaints felt by workers during the pandemic with that before the pandemic. Data were obtained from three additional questions on the instrument to determine the subjective feelings of workers related to stress complaints. The results showed that 33% of workers admitted that the pandemic made them feel stressed when doing work, 19% of workers claimed to have experienced complaints before the pandemic, and 25% of workers admitted that their complaints became worse when doing work during the pandemic. However, this study did not specifically analyze the correlation between work stress and the COVID-19 pandemic.
Individual Factors of Respondents
This study used several variables to determine the individual factors of workers, which were age, sex, work period, and personality type.
Age
In this study, the age of the respondents was divided into two categories based on Hurlock's theory of early and middle adulthood, namely ≤ 40 years old (early adulthood) and > 40 years old (middle adulthood). Early adulthood is described as a time of seeking stability that is full of problems (Peristianto, 2021). Based on the results obtained, the majority of respondents were aged > 40 years old (60.74%), while the rest were ≤ 40 years old (39.26%), with an average age of 42 years old.
Sex
Two-thirds of the respondents in this study were female workers; the number of male workers was thus smaller than that of female workers.
Work Period
Work period in this study was categorized into 2 groups by considering the data distribution.Since the respondents had an average of 13 years work period, then the work period in this study was categorized into > 10 years work period and ≤ 10 years work period.The results showed that the majority of respondents had been working for > 10 years (58.52%).
Personality Type
A personality type is a person's pattern of behavior. In this study, personality types were divided into two types, namely type A and type B. Individuals with personality type A are known to be more aggressive and ambitious than individuals with personality type B, who tend to be more relaxed (Astuti, 2018). Personality types were categorized based on the mean score because the data were normally distributed. The results showed that more than half of the respondents had a type A personality (51.11%).
Work Factors of Respondents
Work factors that were used in this study were the physical environment, career development, role conflict, interpersonal conflict, and workload.
Physical Environment
The physical environment variable refers to the workers' perception of the physical conditions in which they work. In this study, data on this variable were not normally distributed, so they were categorized into poor and good based on the median score. The results showed that 69 respondents (51.11%) considered their work environment good, while the other 66 respondents (48.89%) considered their work environment poor.
Career Development
Career development refers to the opportunities that employees have to develop their careers in the next few years. Data on this variable were normally distributed, so they were categorized based on the mean score. The results showed that the majority of respondents (50.37%) did not have good career development.
Role Conflict
Role conflict is a conflict that appears when respondents try to fit several roles all at once.In this study, this variable was categorized into high and low based on the median score because the data were not normally distributed.More than a half of the respondents (51.85%) felt high level of role conflict while doing their job.
Interpersonal Conflict
Interpersonal conflict is the respondent's conflict with his/her colleagues due to personal dislike.Since the data on this variable were not normally distributed, the data were categorized into high or low based on the median value.The results showed that the majority of respondents (56.30%) had a high level of interpersonal conflict at work.
Workload
In this study, workload refers to subjective feelings related to the job demands and responsibilities of the respondents. Data on this variable were not normally distributed, so they were divided into a high and an appropriate amount of workload based on the median score. Most of the respondents (52.59%) felt they had a high amount of workload, while the rest (47.41%) felt they had an appropriate amount of workload.
Factors Outside of the Respondents' Work
There were two variables analyzed on factors outside of work, which were non-work activities and social support.
Non-work Activities
Non-work activities refer to the activities that are carried out by the respondents outside of their working hours and are unrelated to their job. These activities could be another job, educational activities, responsibility for taking care of other family members, house chores, voluntary organizations, and religious activities. In this study, non-work activities were divided into high and low based on the mean score because the data were normally distributed. The results showed that more than half of the respondents (65.93%) had a high level of activities outside of work.
Social Support
In this study social support is defined as emotional support, appraisal support, informational support, and instrumental support obtained by the respondents from their social networks.Data on this variable were divided into good and poor based on the median score because they were not normally distributed.The results showed that most of the respondents (54.81%) had received good social support from their social networks, while the rest (45.19%) did not receive it.
The Correlation between Individual Factors and Work Stress
The results of this study showed that the proportion of work stress was greater among respondents aged over 40 years old, male workers, respondents with over a 10-year work period, and respondents with personality type A. However, none of these variables had a significant independent correlation with work stress based on the results of the chi-square test (p > 0.05). Out of these individual factors, age and personality type were eligible to be included in the multiple logistic regression analysis (p < 0.20).
The Correlation between Work Factors and Work Stress
Regarding the work factors, the proportion of work stress was greater among respondents who worked in poor physical environment condition, respondents with poor career development, respondents with high level of role conflict and interpersonal conflict, and respondents with high subjective workload.The results of chi square test showed that only workload that had significant independent correlation with work stress (p < 0.05).Workload was also the only variable in this work factor category that was eligible to be included into the multiple logistic regression analysis (p < 0.20).
The Correlation between Factors Outside of Work and Work Stress
The proportion of work stress was found to be greater among respondents with low non-work activities and respondents with poor social support. The results of the chi-square test showed that neither of the two variables had a significant independent relationship with work stress (p > 0.05). Of the factors outside of work, social support was eligible to be included in the multiple logistic regression analysis (p < 0.20).
Determinants Related to Work Stress
Based on the final model of the multiple logistic regression analysis, workload and social support had a significant correlation with work stress (p < 0.05). The results also showed that workload was the most dominant variable affecting work stress, with an adjusted OR = 33.63 and p value = 0.001. This means that respondents with a high subjective workload were 33.63 times more likely to experience work stress than those with an appropriate workload.
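Adjusted odds ratios such as the 33.63 reported here are obtained by exponentiating the logistic regression coefficients, as the short illustration below shows; the `model` object refers to the hypothetical fitted result from the earlier sketch.

```python
import numpy as np

# Exponentiating a logistic regression coefficient gives the adjusted odds ratio.
beta_workload = np.log(33.63)   # coefficient that corresponds to OR = 33.63
print(np.exp(beta_workload))    # -> 33.63

# With the fitted statsmodels result from the earlier sketch:
# odds_ratios = np.exp(model.params)   # adjusted ORs per predictor
# ci = np.exp(model.conf_int())        # 95% confidence intervals on the OR scale
```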
Work Stress
Work stress at the Health Office was stratified from no work stress to moderate work stress.Workers who did not experience work stress were the largest proportion with 86.67% of respondents.However, not experiencing work stress does not mean that respondents did not have complaints related to work stress at all.When viewed from the average score of work stress, it can be seen that the majority of respondents still had complaints related to work stress even though they had not reached the level that can be considered as work stress.
Complaints of work stress that arise among employees must be of concern to managers, so they can be addressed before they develop into work stress at a higher level. Prevention can be carried out with either individual or organizational approaches (Ihsan, Ariffin and Dewi, 2018). Overall, these two approaches implement Manuaba's principle of fitting the job to the worker (Anggrianti, Kurniawan and Widjasena, 2017). This principle must be applied so that workers are not burdened physically or psychologically when doing their job.
Work stress needs to be prevented so that it does not interfere with worker productivity (Saleh, Russeng and Tadjuddin, 2020). Thus, the programs that have been planned by the Public Health Office Bogor Regency can be fully realized. Given that the Public Health Office is a government health stakeholder, the realization of the established programs is expected to improve the health status of the community.
Work Stress Complaints during Pandemic
The COVID-19 pandemic can affect both physical and mental health.Nowadays, COVID-19 pandemic is considered as a new source of stress in community all over the world.This pandemic situation can cause almost all types of mental disorders from mild to severe, and even can cause xenophobia and suicide (Riyadi et al., 2020).Among workers, pandemic could cause a lot of conflict and social inequity between organizations and workers.This would increase stress among workers and affect their psychological well-being (Damayanti and Mursid, 2021).
The results of this study showed an increase in the proportion of respondents who felt stress due to the pandemic compared to the respondents who have already experienced stress before the pandemic.A quarter of respondents also admitted that stress complaints were getting worse during the pandemic.However, this study did not analyze a direct correlation between work stress and the pandemic.Further research is expected to be able to analyze the correlation of these two.Thus, it will show whether the pandemic has a real impact on work stress or not.
Age
The respondents' ages ranged from 21 to 58 years, with most respondents aged over 40 years old. There was no significant correlation between age and the respondents' work stress. This result is supported by a previous study by Damayanti and Nawawinetu (2019), which stated that work stress was not related to age because it was more affected by the workers' mental workload. Age could be related to work stress due to the complexity of the problems faced by older workers (Gupita Bayuwega, Wahyuni and Kurniawan, 2016). Based on data from the Public Health Office, workers already held positions with their respective duties and responsibilities. Instead of age, job demands were more influenced by the position held by the workers. The study of Klaiber et al. (2021) also mentioned that the frequency of COVID-19 stressors or the perceived severity of stressors did not differ significantly between age groups. Therefore, both young and old workers share the same risk of experiencing work stress, especially during the COVID-19 pandemic.
Sex
The majority of respondents were female workers. In this study, no correlation was found between sex and work stress. A previous study also stated that the correlation between sex and work stress has not been confirmed, because both female and male workers actually have the same potential to be exposed to psychosocial hazards at the workplace (Padkapayeva et al., 2018).
In fact, women could suffer from stress more quickly because of their higher prolactin levels compared with men (Putri, 2020). However, based on observations, female and male workers did not experience different treatment in terms of rights and obligations related to work. Every worker has the same opportunity to carry out their job to full capacity without being limited by gender. As civil servants, the rights and obligations of workers have been regulated in the constitution on Civil Servants. In general, these rights and obligations are not affected by the workers' gender (Nurlitasari, 2017). Thus, there was no discrimination between women and men that could cause work stress, especially for female workers.
Work Period
Speaking of work period, this study did not find any significant correlation between the respondent's work period and work stress.This result supports a previous study which indicated that even though workers have worked for a long period of time, work stress could be avoided when the organizational environment supports all work processes so that feelings of satisfaction and comfort arise at work (Marshanty, Wardani and Sari, 2019).
Workers with a shorter work period were considered to have greater possibility for experiencing work stress.Lack of experience when facing various situations at their job could make workers feel more stressed at work.Work period can indeed determine the position held by workers.Workers in higher positions tend to have higher mental workloads than workers in lower positions (Mohamedkheir et al., 2016).However, based on the data of the Public Health Office, the position of workers was not only influenced by their long work period, but also by their education level.Therefore, there are workers with a short work period who have a higher position than workers with a longer working period.The results of observation also indicated that both new and old workers had already known what their job was and work accordance to their positions.New workers who tended to feel more stressed also no longer learned about their work problems, so they were not at risk of experiencing work stress (Juninda, 2019).
Personality Type
The majority of respondents were individuals with personality type A. Individuals with this kind of personality are known to be more aggressive and more ambitious than individuals with personality type B, who are known to be more relaxed (Astuti, 2018). In this study, personality type was found not to have a significant correlation with work stress. This result is consistent with a previous study that also showed no correlation between personality type and work stress, because there was no significant difference between the proportion of workers with personality type A and the proportion of workers with personality type B experiencing work stress (Saraswati, 2017).
A person with personality type A can experience a higher level of stress when facing stressful situations (Nuzulawati, 2016). This happens because individuals with personality type A are result-oriented and do not know when to relax (Purwanti and Nurhayati, 2017). Based on interviews with respondents, it can be seen that workers' duties and responsibilities were assigned without considering their personality type. Respondents also admitted that they did not really understand their personality and just did their job without thinking about their personality type. Therefore, the stress experienced by workers tends not to be influenced by their personality type.
Physical Environment
In general, respondents felt comfortable with their physical work environment.In this study, there was also no correlation found between physical work environment and work stress.Jundillah et al. (2017) who conducted a study on the causes of work stress among nurses in Konawe Kepulauan also showed that there was no correlation between the two because workers had already adapted to the conditions of the physical environment and work climate at their work place.
A good physical environment could actually increase positive mental strength (Monday and Sunday, 2020). Workers of the Public Health Office mostly worked indoors. Therefore, their working process was unlikely to be affected by a bad temperature or an uncomfortable physical environment. Based on observations, workers who felt uncomfortable could adjust their physical environment to a condition they considered comfortable. Thus, they did not feel disturbed while doing their job.
Career Development
Most of respondents felt that they had poor career development.However, career development was found to be not significantly correlated with work stress.This result is supported by a previous study which stated that career development and work stress were not correlated because most of the workers were satisfied with the salary given (Purnama, Wahyuni and Ekawati, 2019).
The career development of civil servants is influenced by government policies, leadership attitudes, experience, level of education, and training (Usup, 2017). This has also been regulated in several constitutions (Pasiak, 2020). Employees of the Public Health Office have equal opportunities to develop their careers according to the constitution on Civil Servants. The only workers who do not have this kind of opportunity are those who are about to retire, because they are already at the final stage of their career development.
Even though career development has no correlation with work stress, workers should be given the opportunity to develop themselves and get promotions according to applicable regulations to prevent the possibilities of work stress (Dafinci, Meiliani and Kananlua, 2020).
Role Conflict
High role conflict was perceived by most of the respondents.In this study, there was no significant correlation between role conflict and work stress.A previous study also stated that there was no correlation between the two because workers had an education system that made them understand about their job at hand (Saraswati, 2017).
Role conflict is one of the job demand dimensions that can become a stressor at work (Lestari and Zamralita, 2018). Role conflict can occur due to two contradictory orders received at the same time (Rifai, 2019). This can happen because of inconsistent bureaucratic control mechanisms when workers have multiple jobs (Juwita and Arintika, 2018). Based on data from the Public Health Office, it is known that workers were divided into several sections, each of which had specific duties and functions. This could serve as a control for role conflict that might arise from different tasks at the same time. Efforts to control role conflict as early as possible could prevent stress among workers (Rifai, 2019).
Interpersonal Conflict
Respondents who felt a high level of interpersonal conflict at work outnumbered respondents who felt the opposite. In this study, the interpersonal conflict felt by respondents was found not to be significantly correlated with work stress. This result is consistent with the result of Benua, Lengkong and Pandowo (2019), whose study was conducted among PT. Pegadaian Kanwil V Manado employees.
Interpersonal conflict can be categorized as a task conflict, that is, differences of opinion among workers regarding work procedures, task responsibility, and resource delegation (Singh and Choudhary, 2018). Interpersonal conflict management can involve the organizational structure as one of its important aspects. Within the organizational structure, the unit leader is expected to be able to manage interpersonal conflicts among workers as early as possible before they develop into much worse conflicts. Conflicts are relatively easier to handle when fewer people are involved (Sudarmanto et al., 2021).
Workload
Workload was found to be significantly correlated with work stress, even being the most dominant variables affecting it.The result of this study is similar to a previous study which showed workload as one of the highest ranking stressors which could lead to higher levels of stress (Jiang et al., 2019).
Based on observations and discussion with employees, the Public Health Office has an important and active role for implementing local government programs in dealing with the COVID-19 pandemic.The majority of workers at the Public Health Office were directly involved in handling the COVID-19 pandemic.Almost every work sector at the Public Health Office was affected by the COVID-19 pandemic.This situation demands hard work and quick work which could be additional workload for employees in the Public Health Office as a government institution at the health sector (Akbar, 2020).Additional workload has proven to be a dominant factor causing work stress (Shrivastava, 2020).Evidently, 33% of respondents admitted that the pandemic has made them feel stressed when doing their job.
Although pandemic can cause an additional workload, there is also a possibility that over workload has occurred long before pandemic.In this study, workload refers to the subjective workload, namely workers' perception of their workload.This kind of perception could be the cause of work stress.Puspitasari's study on Air Traffic Control officers showed that the higher the perception of workload, the higher the level of work stress that can arise (Puspitasari and Kustanti, 2020).For this reason, it is necessary to adjust the workload with workers' ability to prevent the work stress (Shogunle, 2020).
Because this study used subjective workload assessment, every respondent, either directly or indirectly involved in handling the COVID-19 pandemic, could have some perception that they had high level of workload.An objective workload analysis is therefore needed so that the workloads given are truly in accordance with the workers' ability and not only depend on the subjective feelings of the workers (Wardanis, 2018).
Non-work Activities
Most of the respondents had a high level of non-work activities. However, the results showed that non-work activities were not significantly correlated with work stress. This result is consistent with a study by Lady, Susihono and Muslihati (2017), which stated that non-work activities were not correlated with work stress because the respondents' non-work activities in that study were not at a level that could affect work stress. An imbalance between personal life and work can cause higher stress (Jaharuddin and Zainol, 2019). The inability to balance these two things will make workers experience higher stress and depression (Piromon and Charoenarpornwattana, 2016). Based on the interviews, it can be seen that respondents were able to manage their time between non-work activities and the job demands that had to be completed, even though they had many activities outside of work. Good time management is a straightforward and low-cost method to minimize stressors and prevent workers from stress (Ravari et al., 2020).
Social Support
Another determinant of work stress found in this study was social support. This result is in line with a study on nurses in a Grade 3 Inpatient Room of Hospital X, which also showed a significant correlation between social support and work stress after other variables were controlled (Hamzens and Sofwati, 2017). Based on interviews with the employees, social support was not only received from colleagues but also came from the people closest to the employees, such as friends, partners, and family. This kind of emotional support can help workers release tension and reduce conflict. Lack of social support was associated with high stress (Lambert et al., 2017). On the other hand, good social support from a familiar figure is believed to improve health status and reduce stress (Karina and Sodik, 2018).
Determinants Related to Work Stress
Workload and social support, which are aspects related to work stress, should be a concern of managers at the Public Health Office.High subjective workload has been shown to further increase work stress as well as poor social support.Efforts to adjust work capacity and workload as well as the development of a positive social work environment need to be carried out to implement the occupational safety and health at the workplace.Although they are considered to have a low frequency of work accidents and relatively small risk, offices such as Health Offices must also implement occupational safety and health (Nugroho, 2019).The implementation of occupational safety and health at the office is expected to improve employee performance (Suparman, 2017), especially in government offices where the employees are in charge of providing public services to the community.
CONCLUSION
Workload and social support were significantly correlated with work stress. Workload was the most dominant variable affecting work stress. Prevention of work stress could be done by adjusting work capacity and workload as well as developing a positive social work environment. Future research is expected to assess work stress through medical diagnosis and to further examine its relationship with the COVID-19 pandemic.
Figure 2 .
Figure 2. Stress Complaints Before and During the Pandemic
Table 1 .
The Distribution of Individual Factors of Respondents at the Public Health Office Bogor Regency in 2021
Table 2 .
The Distribution of Work Factors of Respondents at the Public Health Office Bogor Regency in 2021
Table 3 .
The Distribution of Factors Outside of Work of Respondents at the Public Health Office Bogor Regency in 2021
Table 4 .
The Correlation between Respondents' Individual Factors and Work Stress at the Public Health Office Bogor Regency in 2021
Table 5 .
The Correlation between Respondents' Work Factors and Work Stress at the Public Health Office Bogor Regency in 2021
Table 6 .
The Correlation between Factors Outside of Work of Respondentsand Work Stress at the Bogor Regency Health Office in 2021
Table 7 .
Analysis of Determinants Related to Work Stress at the Public Health Office Bogor Regency in 2021
|
2021-12-01T16:25:09.806Z
|
2021-11-26T00:00:00.000
|
{
"year": 2021,
"sha1": "c9299fffe55354cd7150b2da59733805371ac678",
"oa_license": "CCBYNCSA",
"oa_url": "https://e-journal.unair.ac.id/IJOSH/article/download/28589/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "547d3b381304b1898febb44e21fdb4e3c46c2c1b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
233634904
|
pes2o/s2orc
|
v3-fos-license
|
Refinements and Generalizations of Some Fractional Integral Inequalities via Strongly Convex Functions
Department of Mathematics, COMSATS University Islamabad, Attock Campus, Attock, Pakistan Department of Business Administration, Gyeongsang National University, Jinju 52828, Republic of Korea Department of Refrigeration and Air Conditioning Engineering, Chonnam National University, Yeosu 59626, Republic of Korea School of Mathematics and Statistics, Northeast Normal University, Changchun 130000, China
Introduction
Let f: I ⟶ R be a convex function defined on an interval I ⊂ R and x, y ∈ I, where x < y.
Then, the following inequality (1) holds. The above inequality is well known as the Hadamard inequality. This inequality provides lower and upper estimates for the integral average of a convex function. Since its appearance in the literature, it has drawn the attention of many mathematicians and is one of the most extensively studied results for convex functions. In [1, 2], Sarikaya et al. studied it via Riemann-Liouville fractional integrals of convex functions. After these versions of the Hadamard inequality, many researchers were motivated to produce fractional inequalities using different types of fractional integrals. Also, many new classes of functions have been introduced in the establishment of fractional Hadamard inequalities; for details, we refer the readers to [3-11].
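For reference, inequality (1) is the classical Hermite-Hadamard inequality, which in the notation of the preceding paragraph reads

$$ f\!\left(\frac{x+y}{2}\right) \le \frac{1}{y-x}\int_{x}^{y} f(t)\,dt \le \frac{f(x)+f(y)}{2}. $$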
Fractional calculus studies the integrals and derivatives of arbitrary order, real or complex. Its history begins at the end of the seventeenth century, when G. W. Leibniz and the Marquis de l'Hospital introduced it for the first time in 1695 by discussing the differentiation of functions of order 1/2. It has since experienced rapid growth over a short span of time. For example, Lagrange, Laplace, Lacroix, Fourier, Abel, Liouville, Riemann, Green, Holmgren, Grunwald, Letnikov, Sonin, Laurent, Nekrassov, Krug, and Weyl made major contributions to establish a solid foundation of fractional calculus (see [12-14] and references therein). Fractional integral and derivative operators are the key factors in the development of fractional calculus. Recently, generalizations [15-17], extensions [18-20], and applications [21-23] of fractional operators have been made by many researchers in mathematics, fluid mechanics [24-26], biological population models [27], and numerical methods [28].
Our aim in this paper is to utilize generalized Riemann-Liouville fractional integrals with a monotonically increasing function. The Hadamard inequality is studied for these integral operators of strongly convex functions, and also, by using some integral identities, error bounds are established. Next, we give the definition of a strongly convex function, introduced by Polyak [29] (see also [30]). Definition 1. Let (X, ‖.‖) be a normed space and D be a convex subset of X. A function f: D ⊂ X ⟶ R is called strongly convex with modulus C ≥ 0 if inequality (2) holds for all x, y ∈ D ⊆ X and t ∈ [0, 1]. For C = 0, (2) gives the definition of a convex function.
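In its standard form, the strong convexity condition (2) for a modulus $C \ge 0$ reads

$$ f\bigl(tx+(1-t)y\bigr) \le t f(x) + (1-t) f(y) - C\,t(1-t)\,\lVert x-y\rVert^{2}, \qquad x,y\in D,\ t\in[0,1]. $$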
In the following, we give the definition of Riemann-Liouville fractional integrals.
The left-sided and right-sided Riemann-Liouville fractional integrals of a function f of order μ, where R(μ) > 0, are defined as in (3) and (4). The fractional versions of the Hadamard inequality for Riemann-Liouville fractional integrals are given in the following theorems.
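The left- and right-sided Riemann-Liouville fractional integrals of order $\mu$ referred to in (3) and (4) have the standard form (we write $I^{\mu}_{a^{+}}$ and $I^{\mu}_{b^{-}}$; the operator symbols are our own notational choice):

$$ \bigl(I^{\mu}_{a^{+}}f\bigr)(x)=\frac{1}{\Gamma(\mu)}\int_{a}^{x}(x-t)^{\mu-1}f(t)\,dt,\quad x>a, $$

$$ \bigl(I^{\mu}_{b^{-}}f\bigr)(x)=\frac{1}{\Gamma(\mu)}\int_{x}^{b}(t-x)^{\mu-1}f(t)\,dt,\quad x<b. $$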
Theorem 1 (see [1]). Let f: [a, b] ⟶ R be a positive function with 0 ≤ a < b and f ∈ L1[a, b]. If f is a convex function on [a, b], then the following fractional integral inequalities hold, with α > 0.
Theorem 7 (see [35]). Let f: , then the following inequalities for k-fractional integrals hold.

Theorem 8 (see [36]). Let f: [a, b] ⟶ R be a positive function with 0 ≤ a < b. If f is a convex function on [a, b], then the following inequalities for k-fractional integrals hold.

Theorem 9 (see [35]). Let f: , then the following inequality for k-fractional integrals holds.

In the following, we give the definition of the generalized Riemann-Liouville fractional integrals defined by a monotonically increasing function.

Definition 4 (see [37]). Let f: [a, b] ⟶ R be an integrable function. Also, let ψ be an increasing and positive function on (a, b], having a continuous derivative ψ′ on (a, b). The left-sided and right-sided fractional integrals of a function f with respect to another function ψ on [a, b] of order μ, where R(μ) > 0, are defined by (17) and (18). If ψ is the identity function, then (17) and (18) coincide with (3) and (4).
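The ψ-dependent integrals referred to as (17) and (18) are commonly written as follows (the symbol $I^{\mu;\psi}_{a^{+}}$ for the left-sided operator is our own notational choice):

$$ \bigl(I^{\mu;\psi}_{a^{+}}f\bigr)(x)=\frac{1}{\Gamma(\mu)}\int_{a}^{x}\psi'(t)\bigl(\psi(x)-\psi(t)\bigr)^{\mu-1}f(t)\,dt, $$

$$ \bigl(I^{\mu;\psi}_{b^{-}}f\bigr)(x)=\frac{1}{\Gamma(\mu)}\int_{x}^{b}\psi'(t)\bigl(\psi(t)-\psi(x)\bigr)^{\mu-1}f(t)\,dt. $$

With $\psi(x)=x$ these reduce to the Riemann-Liouville integrals above.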
The k-analogues of the generalized Riemann-Liouville fractional integrals are defined as follows: Definition 5 (see [38]). Let f: [a, b] ⟶ R be an integrable function. Also, let ψ be an increasing and positive function on
In Section 2, we establish Hadamard inequalities for generalized Riemann-Liouville fractional integrals of strongly convex functions. The particular cases are given as consequences of these inequalities, which are connected with already published results. In Section 3, by using two integral identities for generalized fractional integrals, the error bounds of the fractional Hadamard inequalities are established. The findings of this paper are connected with results that are explicitly proved in [1, 2, 31, 35, 36, 40-44].
Main Results
Also, suppose that f is a strongly convex function on [a, b] with modulus C ≥ 0 and that ψ is an increasing and positive monotone function on (a, b], having a continuous derivative ψ′(x) on (a, b). Then, for k > 0, the following k-fractional integral inequalities hold, with α > 0. Proof.
Corollary 1.
Under the assumption of Theorem 10 with k = 1 in (21), the following inequality holds: Corollary 2. Under the assumption of Theorem 10 with ψ as the identity function in (21), the following inequality holds: (a, b). Then, for k > 0, the following k-fractional integral inequalities hold, with α > 0.
Corollary 3.
Under the assumption of Theorem 11 with C = 0 in (33), the following inequality holds:
Corollary 4.
Under the assumption of Theorem 11 with k = 1 in (33), the following inequality holds: Corollary 5. Under the assumption of Theorem 11 with ψ as the identity function in (33), the following inequality holds:
Error Bounds of Hadamard Inequalities for Strongly Convex Functions
In this section, we provide the error bounds of the fractional Hadamard inequalities using generalized Riemann-Liouville fractional integrals via strongly convex functions. The estimates here are further refined compared with those already established for convex functions. The following lemma is useful to prove the next result.
Proof. From Lemma 1 and the strong convexity of |f′|, we have... It can be noted that... Therefore, (47) implies..., from which, after a little computation, one can get (46). □ Remark 3. Under the assumption of Theorem 12, one can get the following results: (i) If k = 1 and ψ is the identity function in (46), then Theorem 6 is obtained. (ii) If C = 0 and ψ is the identity function in (46), then Theorem 9 is obtained.
Corollary 6.
Under the assumption of Theorem 12 with C = 0 in (46), the following inequality holds: Corollary 7. Under the assumption of Theorem 12 with k = 1 in (46), the following inequality holds:
Corollary 8. Under the assumption of Theorem 12 with ψ as the identity function in (46), the following inequality holds: We now derive a new fractional integral identity for the fractional integrals (19) and (20). (a, b). Then, for k > 0, the following identity holds, with α > 0.
Corollary 9.
Under the assumption of Lemma 2 with k = 1 in (54), the following identity holds: (60) Using the above lemma, we give the following error bounds of the k-fractional Hadamard inequality.
Theorem 13. Let f: I ⟶ R be a differentiable mapping on (a, b) with a < b. Also, suppose that |f′|^q is a strongly convex function on [a, b] with modulus C ≥ 0 for q ≥ 1, and that ψ is an increasing and positive monotone function on (a, b], having a continuous derivative ψ′(x) on (a, b). Then, for k > 0, the following k-fractional integral inequalities hold, with α > 0.
Proof. From Lemma 2 and the strong convexity of |f′|, for q = 1 we have... Now, for q > 1, we proceed as follows. From Lemma 2 and using the power mean inequality, we get... The strong convexity of |f′|^q then gives..., which after a little computation gives the required result. □ Remark 5. Under the assumption of Theorem 13, one can get the following results: (i) If C = 0 and ψ is the identity function in (61), then the inequality (Theorem 3.1) stated in [36] is obtained.
|
2021-05-05T00:07:58.062Z
|
2021-03-27T00:00:00.000
|
{
"year": 2021,
"sha1": "b08dfc730538cddca36649fb0be151d4f8e207c6",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/mpe/2021/6667226.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "756c8072fdcdfa4622e2e090d2ddbc23ecd08117",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
270993274
|
pes2o/s2orc
|
v3-fos-license
|
Effect of flavonol from chamomile ( Matricaria recutita ) flavonoids on memory disorders and determination of oxidative stress in Alzheimer's rats
Alzheimer's disease is one of the most common neurodegenerative diseases characterized by beta-amyloid plaques and neurofibrillary tangles. Alzheimer's is associated with various cellular changes including oxidative stress, neuronal inflammation, and mitochondrial disorders, ultimately leading to neuronal death. Flavonols found in the chamomile plant ( Matricaria recutita ) exert beneficial effects on brain disorders like Alzheimer's disease owing to their antioxidant properties. In this study, the flavonoids from the methanolic extract of chamomile ( M. recutita ) were isolated and purified using column chromatography and TLC methods. Flavonols from the flavonoid compounds were then extracted, separated, and identified using spectroscopic methods such as 1H-NMR, 13C-NMR, Mass, and IR. 56 adult male rats were divided into 7 groups, including control (vehicle 1, solvent of flavonol, and solvent of streptozotocin drug), Alzheimer's, and flavonoid doses of 120, 250, and 400 mg/kg. Diabetes was induced by a single intraperitoneal injection of streptozotocin at a dose of 60 mg/kg, and flavonols were administered for 15 days. Memory and learning were assessed using the shuttle box device. Data analysis was conducted using SPSS 22 software, ANOVA, and Tukey tests, with significance set at p ≤ 0.05. The results indicated that doses of 250 and 400 mg/kg of flavonol extracts from chamomile caused significant changes, compared to the control group, ultimately improving avoidance memory in rats. Additionally, oxidative stress parameters were significantly reduced in the Alzheimer's groups treated with chamomile flavonol. Plant flavonols demonstrated the ability to restore spatial memory function and normalize oxidative stress parameters in streptozotocin-treated groups.
INTRODUCTION
Learning and memory are fundamental functions of the central nervous system, representing the processes through which animals interact with their environment. Memory encompasses the encoding, storage, and retrieval of learned information (Josselyn and Tonegawa, 2020). Alzheimer's disease, a neurodegenerative condition associated with aging, is characterized by various cognitive impairments, including memory deficits (Tamagno et al., 2021), speech impairments (Teleanu et al., 2022), visual-spatial impairments (Cammisuli, 2024), and sensory and motor deficits (Brewer et al., 2020). This disease arises from the accumulation and increased levels of beta-amyloid protein, leading to the formation of brain plaques and the degeneration of nerve cells in the neocortex and other brain regions (Pfundstein et al., 2022). Free radicals, generated as reactive forms of oxygen, contribute to the destruction of brain tissue and the disruption of brain neurotransmitter function. Antioxidants are essential for neutralizing free radicals in the brain. Oxidative stress, a hallmark frequently observed in Alzheimer's disease and related dementias, is often overlooked or considered a consequence of the disease's main histopathological features. Notably, oxidative stress is directly or indirectly associated with each of Alzheimer's disease's common features, and signs of oxidative stress are evident from the earliest stages of the disease (Mayne et al., 2020). Clinical studies indicate that oxidative stress plays a significant role in the pathophysiology of dementia (Teleanu et al., 2022) and can contribute to the development of the disease by disrupting the balance between free radicals and the antioxidant system (Dufour et al., 2022). Oxygen free radicals have the potential to damage proteins, nucleic acids, and lipid membranes, thereby disrupting cellular function (Teleanu et al., 2022; Butterfield and Mattson, 2020).
Brain tissue contains a significant amount of unsaturated fatty acids, making it highly susceptible to attack by free radicals. Lipid peroxidation, considered a destructive form of oxidative damage in neurons, compromises membrane integrity and generates neurotoxic secondary products (Pohl and Lin, 2018). The balance of oxidative stress in biological systems is determined by the interplay between free radical production and antioxidant mechanisms, including enzymes like superoxide dismutase, glutathione peroxidase, and catalase, as well as molecules such as glutathione and ascorbate (Teleanu et al., 2022). For instance, increased levels of malondialdehyde serve as a marker for lipid oxidation (Teleanu et al., 2022; El Joumaa and Borjac, 2022).
Memory and learning disorders can be induced in animal models by intraperitoneal injection of streptozotocin, leading to diabetes. Streptozotocin at 40 mg/kg significantly raises glucose levels in rats compared to the normal group. The mechanism involves the destruction of beta cell membranes, DNA fragmentation, and inhibition of enzymes like glucokinase, ultimately leading to increased blood glucose and diabetes (Furman, 2021). Diabetes induces free radical production and oxidative stress, resulting in lipid, protein, and DNA oxidation, ultimately damaging brain cells and causing memory and learning impairments.

Beta-amyloid accumulation is central to Alzheimer's disease pathology, with oxidative stress playing a significant role. Beta-amyloid peptides directly and indirectly induce oxidative stress by acting as enzymes to produce hydrogen peroxide and free radicals, which in turn trigger neuronal inflammation (Grimm and Eckert, 2017; Ansari, 2023). Neuronal inflammation has been extensively studied in Alzheimer's disease pathogenesis, with increased microglial and astrocytic activity, along with elevated cytokine levels, directly associated with aging plaques in Alzheimer's patients. Despite microglia's phagocytic abilities, the presence of inflammatory cytokines and extracellular matrix proteins hinders beta-amyloid clearance (Jung et al., 2022).

Recent research suggests promising effects of herbal medicines with antioxidant properties in treating or preventing brain diseases such as memory impairments (Namazi, 2022), strokes (Hong et al., 2023), and various other conditions. These effects are attributed not only to specific ingredients but also, predominantly, to their antioxidant properties.

Chamomile, a flowering plant found across Europe, Asia, and Africa, possesses antioxidant properties. Flavonols, which are polyphenolic compounds abundant in chamomile, exhibit potent antioxidant effects. Chamomile contains various biologically active substances, including volatile oil and flavonols, with flavonols representing the highest percentage among these compounds (El Joumaa and Borjac, 2022). Polyphenols, including flavonols, have been recognized as neuroprotective agents due to their ability to modulate cellular processes such as the formation of neuronal tangles and beta-amyloid plaques. Epidemiological and experimental studies have suggested that a diet rich in flavonols improves cognitive function and protects against neuronal degeneration in humans. Quercetin, the primary compound in the flavonoid subgroup, constitutes 60 to 75% of flavonols and is found in foods such as onions, leeks, broccoli, apples, and chamomile (Hwang et al., 2018).
This study aimed to investigate the therapeutic effects of flavonols on memory and learning disorders and to assess oxidative stress in male rats with Alzheimer's disease.
MATERIALS AND METHODS
In this study, 56 white male Wistar rats weighing 230-250 g (obtained from the Pasteur Institute, Marand Serum Company) were used. All animals were housed in groups of 8 rats per cage at a temperature ranging from 21 to 23°C.

Throughout the 6-week experimental period, the rats had ad libitum access to both food and water. The adult male rats were then divided into 7 groups, comprising a control group, vehicle 1 (solvent of flavonol), vehicle 2 (solvent of streptozotocin), an Alzheimer's group, and three flavonol treatment groups at doses of 120, 250, and 400 mg/kg.
Preparation of chamomile hydroalcoholic extract
First, chamomile flowers were collected from different areas of the Marand region and their identity was confirmed by botanists. The plant material was dried and used for preparation of the hydroalcoholic extract in 70% ethanol by maceration for 48 h in two shifts. The dry extract was prepared using a rotary evaporator at low temperature (El Joumaa and Borjac, 2022).
Determination of flavonoid compounds
To measure the flavonoid compounds, 0.5 ml of 2% aluminum chloride and 3 ml of 5% potassium acetate were added to 0.5 ml of each extract solution (0.01 g in 10 ml of 60% methanol). After 40 min, the absorbance of the samples was read against distilled water at 415 nm (Liang et al., 2022).
Determination of flavonol compounds
To measure the flavonol compounds, 0.5 ml of 2% aluminum chloride and 3 ml of 5% sodium acetate were added to 0.5 ml of each extract solution (0.01 g in 10 ml of 60% methanol). After 2.5 h, the absorbance of the samples was read against distilled water at 440 nm. In parallel, different dilutions of the standard were prepared and a standard curve was constructed. The absorbance of each sample was compared with the standard curve, and the flavonol content of each extract was calculated in mg per gram of dry extract (Hwang et al., 2018).
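To illustrate the last step, the sketch below fits a linear calibration curve to hypothetical standard readings and converts a sample absorbance into mg of flavonol per gram of dry extract. The standard concentrations, absorbance values, and extract mass are placeholders for illustration only, not data from this study.

```python
import numpy as np

# Hypothetical calibration data: standard concentrations (mg/ml) vs. absorbance at 440 nm
std_conc = np.array([0.01, 0.02, 0.04, 0.08, 0.16])   # assumed standard concentrations
std_abs = np.array([0.11, 0.21, 0.40, 0.79, 1.55])    # assumed absorbance readings

# Fit a straight line A = slope * C + intercept through the standards
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def flavonol_mg_per_g(sample_abs, extract_mass_g=0.01, volume_ml=10.0):
    """Convert a sample absorbance to mg flavonol per g of dry extract."""
    conc_mg_per_ml = (sample_abs - intercept) / slope   # back-calculate from the curve
    total_mg = conc_mg_per_ml * volume_ml               # mg in the assayed solution
    return total_mg / extract_mass_g                    # normalize to dry-extract mass

print(round(flavonol_mg_per_g(0.78), 1))  # e.g. absorbance 0.78 -> mg/g estimate
```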
HPLC analysis
All standards and hydroalcoholic extracts of chamomile were analyzed on an Agilent 1200 HPLC system (Agilent Technologies, Santa Clara, CA) using a C-18 column. The mobile phase consisted of acetonitrile and water as an isocratic solvent (30:70 v/v) maintained at a flow rate of 1 ml/min, with an injection volume of 5 μl and a run time of 8 min. Data were collected at 335 nm (λmax for the majority of the flavonol glucosides).
Mass spectrometric analysis of flavonol
Electrospray ionization tandem mass spectrometry was used to identify apigenin and its derivatives in the aqueous and methanolic extracts. In brief, chamomile fractions were dissolved in 50% methanol and introduced onto a Quattro Ultima triple quadrupole mass spectrometer (Micromass, Inc., Beverly, MA) at a rate of 50 μl/min and analyzed using electrospray ionization in both negative- and positive-ion modes; apigenin and its derivatives were identified using both full and product scans. The capillary and cone voltages were set at 3.5 kV and 50 V, respectively. The desolvation and cone temperatures were set at 250°C and 120°C, respectively. The nitrogen gas flow rates for desolvation and the cone were 600 L/h and 80 L/h, respectively. Collision-induced dissociation was obtained using argon gas (Reis et al., 2020).
Experimental design
The chamomile extract, rich in flavonoids, was diluted with double-distilled sterile water and administered intraperitoneally daily for 2 weeks, starting seven days after streptozotocin injection. Streptozotocin (Sigma, USA) at a dosage of 60 mg/kg dissolved in sterile normal saline was administered intraperitoneally to induce Alzheimer's disease in the rats. One week post-injection, the animals' fasting blood sugar (FBS) levels were measured to confirm diabetes induction, and only diabetic animals exhibiting FBS levels higher than 250 mg/dL proceeded to the subsequent stages.

Over the following days, characteristic signs of polyphagia, polydipsia, diuresis, and weight loss gradually manifested in the rats. Weight loss was observed in all rats by the end of the experiments. The animals' weights were recorded both before and during the experiments. Additionally, FBS levels were measured using the glucose oxidase enzyme method (Biochemical Company, Tehran) in addition to glucometry. Memory and learning abilities were assessed using the shuttle box device. Following the final passive avoidance test, the rats were anesthetized with chloroform and decapitated with a guillotine, and the heads were placed on an ice board.
The hippocampus was isolated from the brain and promptly stored in a freezer at -80°C. Tissue homogenate was prepared using a mechanical homogenizer and centrifugation at 3000 rpm for 10 min at 4°C, with the supernatant solution separated from the bottom sediment and used for biochemical analysis (Alahmady, 2024).
Evaluation of antioxidant potential with DPPH method
In this method, DPPH (1,1-diphenyl-2-picrylhydrazyl) was employed as a reagent to measure stable radical compounds. Initially, 50 ml of extract at concentrations of 10, 15, 20, 25, 30, 40, 50, 60, 70, and 80 mg/ml in methanol were added to 5 ml of 0.004% DPPH solution in methanol. After 30 minutes, the optical absorbance of the samples was measured at 517 nm against the blank. The percentage inhibition of DPPH free radicals was calculated using the formula I (%) = 100 × (A_control - A_sample) / A_control. Subsequently, the concentration of the extract that exhibited 50% radical inhibition (IC50) was determined from the graph. It is noteworthy that a lower IC50 indicates greater antioxidant power, i.e., stronger inhibition of free radicals. In this experiment, butylated hydroxytoluene (BHT) was used as a synthetic antioxidant positive control, and all experiments were performed in triplicate (Alahmady, 2024).
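The snippet below illustrates this calculation: it converts control and sample absorbances into percent inhibition and estimates the IC50 by linear interpolation. The absorbance values are invented for illustration only and are not the measurements reported in this study.

```python
import numpy as np

def inhibition_percent(a_control, a_sample):
    """I(%) = 100 * (A_control - A_sample) / A_control."""
    return 100.0 * (a_control - a_sample) / a_control

# Hypothetical readings at 517 nm for increasing extract concentrations (mg/ml)
concentrations = np.array([10, 15, 20, 25, 30, 40, 50, 60, 70, 80], dtype=float)
a_control = 0.90                                    # assumed control absorbance
a_samples = np.array([0.81, 0.74, 0.66, 0.59, 0.52, 0.41, 0.33, 0.27, 0.22, 0.18])

inhibition = inhibition_percent(a_control, a_samples)

# IC50: concentration giving 50 % inhibition, found by interpolating the curve
ic50 = np.interp(50.0, inhibition, concentrations)
print(f"IC50 ~ {ic50:.1f} mg/ml")
```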
Statistical analysis
The obtained data were statistically analyzed in SPSS software (version 29). One-way analysis of variance was used to compare the effects of different doses of each sample with the corresponding group, and two-way analysis of variance was used to investigate the interaction effects between drugs. The results are presented as mean ± standard error. When differences were significant, Tukey's post-hoc test was used to compare the experimental groups. P < 0.05 was considered a significant difference between the groups. Graphs were drawn using Excel 2021 software.
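For readers who prefer an open-source route, a minimal sketch of the same analysis (one-way ANOVA followed by Tukey's post-hoc test) in Python is shown below; the group values are random placeholders, not the experimental measurements.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Placeholder step-through latencies (s) for four of the groups, n = 8 rats each
groups = {
    "control": rng.normal(700, 80, 8),
    "alzheimer": rng.normal(250, 80, 8),
    "flavonol_250": rng.normal(520, 80, 8),
    "flavonol_400": rng.normal(640, 80, 8),
}

# One-way ANOVA across the groups
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's HSD post-hoc comparison between all pairs of groups
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```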
Streptozotocin administration
The FBS levels in the streptozotocin (60 mg/kg i.p.) groups during the first and second weeks exhibited a significant difference (p < 0.005) compared to the control and vehicle groups (Figure 1A). Specifically, the results indicated that the FBS levels in the control and vehicle groups remained low, approximately 150 mg/dL. However, in the streptozotocin-treated groups, FBS levels increased to 250 mg/dL, confirming the induction of diabetes in the rats.
Prior to the experiment, FBS levels of all rats were measured using an Accu-Chek glucometer by obtaining a blood drop from the tail. FBS levels were monitored up to 3 days after diabetes induction and then at the end of each week following an 8-hour fasting period. Additionally, to ensure accuracy, serum glucose measurement was repeated using the enzyme-colorimetric method with glucose oxidase-peroxidase on an RA-1000 auto analyzer (Technicon, USA) with a Pars Azmoun (Iran) kit. Data were analyzed using one-way analysis of variance (ANOVA) followed by Tukey's post hoc test.
Additionally, the results of the passive avoidance test, specifically STL1 (short-term memory) and STL2 (long-term memory), in the control and vehicle groups demonstrated significant changes compared to the streptozotocin group. Each histogram represents the mean ± standard deviation (mean ± SD) of STLs, with 8 rats in each group, and the observed changes are significant at p < 0.01 compared to the control group (Figure 1b). The findings of this study indicate that diabetes leads to impairment in learning and memory processes, as evidenced by a significant decrease in the average duration of STL1 and STL2 in the diabetic group compared to the control group, reflecting diminished performance and reduced learning ability in diabetic rats. The performance defects observed in diabetic rats treated with streptozotocin may be attributed to alterations in glucose levels or to neurotoxic hyperglycemia affecting the action of the acetylcholine neurotransmitter.
Flavonol administration
According to the statistical analysis depicted in Figure 2, the impact of the different doses of flavonol, as well as of vehicle 1 (aqueous solvent), vehicle 2 (ethanol solvent of the flavonol), and vehicle 3 (streptozotocin solvent), did not show significant differences compared to the control group. This indicates that the vehicles themselves were insufficient to increase the duration of STL1 and STL2, while only flavonol demonstrated the ability to enhance both short-term and long-term memory. The control group served primarily for comparison with the other experimental groups to evaluate the effect of chamomile flavonol. Notably, the control group exhibited elevated values of both STL1 and STL2 (Figures 1 and 2). Regarding the statistical findings concerning the impact of flavonol at a dosage of 120 mg/kg on the diabetic group, this dosage did not significantly alter short-term or long-term memory performance compared to the diabetic group. Flavonol at 120 mg/kg therefore did not effectively improve memory and learning disorders in the diabetic group.

On the other hand, statistical analysis of the effects of flavonol at a dosage of 250 mg/kg on the diabetic group revealed significant improvements (P ≤ 0.01) in both STL1 and STL2 performance compared to the diabetic group. This indicates that the detrimental effects of hyperglycemia on cognitive performance, memory, and learning were significantly mitigated in the groups treated with flavonol at 250 mg/kg, leading to improved memory and learning compared to the diabetic group. Furthermore, the statistical results for the diabetic group treated with flavonol at a dosage of 400 mg/kg demonstrated a significant increase (P ≤ 0.001) in both short-term and long-term memory performance compared to the diabetic group. Importantly, the clinical symptoms related to memory and learning disorders were alleviated in this group, indicating a substantial improvement in memory and learning outcomes (Figure 2a and b).
Standardization of ethanol chamomile extract
The amounts of phenolic, flavonol, and flavonoid compounds in the chamomile extract were 26.5, 78.4, and 47.6 mg per gram of dry extract, respectively. Regarding the antioxidant activity of the chamomile plant, the IC50 value for the radical scavenging activity of the chamomile extract is shown in Table 1.
DISCUSSION
Alzheimer's disease is a neurodegenerative condition characterized by the degeneration of various neural regions, notably the cerebral cortex, particularly impacting cholinergic neurons in areas such as the hippocampus and frontal cortex (Grimm and Eckert, 2017). This disease manifests with memory and learning impairments, including deficits in spatial memory, short-term memory, and long-term memory (Petersen, 2019). In this study, the impact of the active compounds found in the chamomile hydroalcoholic extract, specifically flavonols, was investigated with respect to memory and learning disorders in streptozotocin-induced diabetic rats, commonly employed as an experimental model for Alzheimer's disease induction (Furman, 2021).

Streptozotocin is known to exert its effects both centrally and peripherally (de Oliveira et al., 2021), and it was initially utilized to induce an experimental model of Alzheimer's disease in rodents, resulting in memory and learning disturbances within a two-week period (Qi et al., 2021), a finding consistent with the results obtained in this study.
Research in this area suggests that intraperitoneal administration of streptozotocin disrupts brain insulin receptor function, leading to impaired glucose utilization, mitochondrial dysfunction, reduced ATP production, and ultimately dysregulation of energy metabolism, mirroring early Alzheimer's disease pathology (Saleh et al., 2020). These actions contribute to disturbances in cellular membrane activity, promoting amyloidogenic processes and hyperphosphorylation of tau protein, key hallmarks observed in Alzheimer's disease pathology.

As indicated by the aforementioned research, low doses of streptozotocin disrupt and damage signaling pathways associated with brain insulin receptors, akin to type 2 diabetes mellitus. Conversely, high doses of streptozotocin impair the structural integrity of beta cells within the pancreatic islets of Langerhans, resulting in decreased insulin production and the onset of type 1 diabetes mellitus (Ureliano et al., 2022; de Oliveira et al., 2021). Additionally, streptozotocin induces the generation of free radicals, nitric oxide, and hydrogen peroxide in neurons, contributing to structural, neurochemical, and behavioral alterations reminiscent of those observed in Alzheimer's disease (de Oliveira et al., 2021).

The results of the passive avoidance test in the streptozotocin group demonstrate a significant decrease in STL1 and STL2 compared to the control and vehicle groups. Figure 1b illustrates that diabetes leads to the impairment of learning and memory processes, as evidenced by the significant reduction in short-term and long-term memory duration in this group compared to the control group. Cognitive deficits observed in diabetic animals may arise from alterations in glucose levels or neurotoxic hyperglycemia affecting cholinergic neurons and acetylcholine neurotransmitter secretion.

Beta-amyloid accumulation is a crucial factor in the pathogenesis of Alzheimer's disease. While oxidative stress is known to play a significant role in the disease's development, its occurrence is widespread in Alzheimer's pathology. Beta-amyloid peptides directly and indirectly induce oxidative stress; enzymatically, beta-amyloid can catalyze iron reduction, leading to hydrogen peroxide and free radical production. Moreover, beta-amyloid binding to mitochondrial proteins triggers free radical generation and neuronal inflammation, exacerbating oxidative stress. Consequently, oxidative stress contributes to cell membrane degradation, DNA damage, protein oxidation, and ultimately apoptosis, which is the primary mechanism of neuronal death in Alzheimer's disease.

Laboratory analyses and pharmacological tests have revealed that chamomile flowers contain terpenoid compounds in essential oils such as azulene, chamazulene, and bisabolene oxide. Additionally, the flowers contain flavonoid compounds like apigenin, kaempferol, chrysin, luteolin, quercetin, coumarins, and mucilaginous substances (El Joumaa and Borjac, 2022). Chamomile extract has demonstrated neuronal protective effects in cerebral ischemia, as well as protection against oxidative stress induced by aluminum fluoride (El Joumaa and Borjac, 2022).

Statistical analysis in Figure 2 indicates that plant flavonol at doses of 250 and 400 mg/kg significantly alters short-term and long-term memory performance in the treated diabetic groups compared to the diabetic group alone. The effects of the chamomile extract mitigate the deleterious effects of hyperglycemia, leading to improved memory and learning outcomes in these groups relative to the diabetic group. To definitively ascertain the effects of flavonol on memory and learning disorders, histological examinations of various brain regions, especially the hippocampus, and subsequent clinical trials are warranted.
Conclusion
The findings of this study demonstrate that chamomile extract enhances both short- and long-term memory in rats, an effect attributed to the presence of its bioactive compounds.
Figure 1a. Changes of FBS in the first and second weeks of the experiments after streptozotocin injection compared to control and vehicle rats (P < 0.005)***. (b) The results of the passive avoidance test, STL1 (short-term memory) and STL2 (long-term memory), for the control, vehicle and streptozotocin groups; each histogram shows the mean ± standard deviation (mean ± SD) of the STL times. The number of rats in each group is 8 and the changes are significant (p < 0.01) compared to the control group.
Figure 2. The effect of flavonol in the control, diabetic and vehicle groups. (a) The results of the passive avoidance test, STL1 (short-term memory); (b) the results of the passive avoidance test, long-term memory (STL2). Data are mean ± SEM, n = 8 animals in each group. ***P ≤ 0.001, **P ≤ 0.01 in comparison with the diabetic group; cut-off time = 900 s.
Table 1. Antioxidant activity of flavonol from the chamomile plant with butylated hydroxytoluene as a positive control in the DPPH method.
Special Theory for Superluminal Particle
The OPERA collaboration reported evidence for muonic neutrinos travelling faster than light in vacuum. In this paper, an extended relativity theory is proposed. We think all particles can be divided into three kinds: the first kind of particle has velocity in the range $0\leq v<c$, e.g. electrons, atoms, molecules and so on ($c$ is the light velocity, i.e., the limit velocity of the first kind of particle). The second kind of particle has velocity in the range $0\leq v<c_{m1}$, e.g. the photon ($c_{m1}$ is the limit velocity of the second kind of particle). The third kind of particle has velocity in the range $c\leq v<c_{m2}$, e.g. tachyons and muonic neutrinos ($c_{m2}$ is the limit velocity of the third kind of particle). The first kind of particle is described by special relativity. With the extended relativity theory, we can describe the second and third kinds of particles, analyze the OPERA experimental results, and calculate the muonic neutrino mass.
Introduction
Recently the OPERA collaboration reported evidence for muonic neutrinos travelling slightly faster than light in vacuum [1]. The CERN Neutrinos to Gran Sasso (CNGS) beam consists of ν_µ, with small contaminations of ν̄_µ (2.1%) and of ν_e or ν̄_e (together less than 1%). At the average neutrino energy of 17 GeV, the relative difference of the velocity of the muon neutrinos v with respect to light quoted by OPERA is: (v − c)/c = (2.48 ± 0.28(stat) ± 0.30(sys)) × 10⁻⁵. Velocity measurements of muon neutrinos were also reported for muon neutrino beams produced at Fermilab. Dealing with energies peaked at 3 GeV, the MINOS Collaboration [2] found in 2007 that (v − c)/c = (5.1 ± 2.9) × 10⁻⁵. The measurement results above are seemingly in conflict with special relativity in 4 dimensions, and a number of possible explanations for Lorentz violation exist in the literature [3][4][5][6][7][8][9][10][11][12]. The neutrino velocity is higher than the velocity of light, but we think there is a limit velocity in nature. In this paper, we propose an extended relativity theory, which is based on two postulates: 1. All particles can be divided into three kinds: the first kind has velocity in the range 0 ≤ v < c, e.g. electrons, atoms, molecules and so on, and the light velocity c is their limit velocity. The second kind has velocity in the range 0 ≤ v < c_m1 (c < c_m1), e.g. the photon, and the velocity c_m1 is its limit velocity. The third kind has velocity in the range c ≤ v < c_m2, e.g. muonic neutrinos and tachyons. The velocity c_m2 is their limit velocity.
2. The first kind of particle is described by Einstein's special relativity.
In this paper, we shall study the second and third kinds of particles with the extended relativity theory. With the extended relativity theory, we give new results about the photon and calculate the limit velocities c_m1 and c_m2. In addition, we analyze the OPERA experimental data and calculate the muonic neutrino mass.
2. The space-time transformation and mass-energy relation for the second kind of particle (0 ≤ v < c_m1)

For the first kind of particle, the velocity is in the range 0 ≤ v < c, e.g. electrons, atoms, molecules and so on, and such particles can be described by special relativity. In 1905, Einstein gave the space-time transformation and mass-energy relation based on his two postulates, i.e., the invariant principle of the light velocity and the relativity principle. The space-time transformation is

x′ = (x − ut)/√(1 − u²/c²),  y′ = y,  z′ = z,  t′ = (t − ux/c²)/√(1 − u²/c²).  (1)

A number of experiments revealed that electromagnetic waves are able to travel at a group velocity faster than c. These phenomena have been observed in dispersive media [13,14], in electronic circuits [15], and in evanescent wave cases [16,17]. About 40 years ago, O.M.P. Bilaniuk, V.K. Deshpande and E.S.G. Sudarshan studied the space-time relation for superluminal reference frames within the framework of special relativity [18,19]. They assumed that the space-time and velocity transformations of special relativity are suitable for superluminal reference frames. They obtained the result that the proper length L_0 and proper time T_0 must be imaginary so that the measured quantities, such as the length L and time T, are real. We think there are superluminal photons, but their velocity cannot be infinite. So, we think there is a limit velocity c_m1 for the superluminal photon, and we give two postulates for the second kind of particle (the photon) as follows: 1. The Principle of Relativity: All laws of nature are the same in all inertial reference frames.
2. The Invariant Principle of the Limit Velocity: There is a limit velocity c_m1, and c_m1 is invariant in all inertial reference frames.
From the two postulates, we can obtain the space-time transformation and the velocity transformation for the second kind of particle (0 ≤ v < c_m1). When we replace c with c_m1, we obtain the new space-time transformation from the Lorentz transformation:

x′ = (x − ut)/√(1 − u²/c_m1²),  y′ = y,  z′ = z,  t′ = (t − ux/c_m1²)/√(1 − u²/c_m1²),

where x, y, z, t are the space-time coordinates in the Σ frame, x′, y′, z′, t′ are the space-time coordinates in the Σ′ frame, and u is the relative velocity between the Σ and Σ′ frames, which move along the x and x′ axes. The velocity transformation is

u′_x = (u_x − u)/(1 − u_x u/c_m1²),  u′_y = u_y √(1 − u²/c_m1²)/(1 − u_x u/c_m1²),  u′_z = u_z √(1 − u²/c_m1²)/(1 − u_x u/c_m1²),

where u_x, u_y, u_z and v = √(u_x² + u_y² + u_z²) (0 ≤ v < c_m1) are the particle velocity components and velocity amplitude in the Σ frame, and u′_x, u′_y, u′_z and v′ = √(u′_x² + u′_y² + u′_z²) are the particle velocity components and velocity amplitude in the Σ′ frame. Now we can discuss the problem of the speed of light. For two inertial reference frames Σ and Σ′, the Σ′ frame is a rest frame for light, i.e., the relative velocity of the two frames is equal to c. At the time t = 0, a beam of light is emitted from the origin O. When u_x = c, we obtain from the velocity transformation (Eq. (8)) u′_x = 0; when u_x = −c, we have u′_x = −2c/(1 + c²/c_m1²). This shows that the invariant principle of the light velocity is violated for the second kind of particle in an inertial reference frame moving at the light velocity, but the limit velocity c_m1 is invariant.
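A quick numerical check of this invariance claim is sketched below: with the modified velocity-addition rule (the Lorentz rule with c replaced by c_m1), composing any boost with the limit velocity returns the limit velocity itself, while c is not preserved. The numerical value chosen for c_m1 is arbitrary and for illustration only.

```python
C = 299_792_458.0          # speed of light, m/s
C_M1 = 1.5 * C             # assumed limit velocity, for illustration only

def add_velocity(u_x, boost, limit=C_M1):
    """x-component of the velocity-addition rule with c replaced by the limit velocity."""
    return (u_x - boost) / (1.0 - u_x * boost / limit**2)

# A frame co-moving with light (boost = c):
print(add_velocity(C, C))            # light moving with the frame -> 0, so c is NOT invariant
print(add_velocity(-C, C))           # counter-propagating light -> -2c/(1 + c^2/c_m1^2)

# The limit velocity itself is invariant under the same rule:
print(add_velocity(C_M1, C))         # -> c_m1
print(add_velocity(C_M1, 0.3 * C))   # -> c_m1 for any boost below the limit
```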
For the second kind of particle, the mass m, momentum p and energy E of a particle of rest mass m_0 and velocity v are

m = m_0/√(1 − v²/c_m1²),  (11)
p = m v,  (12)
E = m c_m1²,  (13)

and the dispersion relation is

E² = p² c_m1² + m_0² c_m1⁴.  (14)

In the following, we shall study the nature of the photon and obtain some new results.
From Eqs. (11)–(14), we obtain the photon mass, momentum and energy at the light velocity, where m_0 is the photon rest mass and ν_0 is the photon rest frequency; we find that the photon has a rest mass and a rest frequency. When v = c, we obtain the photon energy at the light velocity, where m_c and ν_c are the photon mass and frequency at the light velocity c. From Eq. (19), we can obtain the photon rest mass m_0 and the light-velocity mass m_c. From Eqs. (17) and (18), we obtain Eq. (22), which gives the relation between a photon's frequency and its velocity, and we find that the moving-photon frequency ν_v is larger than its rest frequency ν_0.
When v = c, we obtain the photon frequency ν_c at the light velocity c (Eq. (23)). From Eqs. (22) and (23), we obtain the ratio between the photon frequency at an arbitrary velocity v and the frequency at the light velocity c. Eq. (26) then gives the relation between the photon wavelength and its velocity v. When v = 0, the light wavelength is λ = 0, i.e., the photon has no wavelength when it is at rest, but it has a rest mass m_0 and a rest frequency ν_0.
From Eqs. (17) and (19), we obtain Eqs. (27) and (28). From Eqs. (27) and (28), we can calculate the limit velocity c_m1 if we can measure the photon mass m_v (m_c) when its frequency is ν_v (ν_c) at an arbitrary velocity v (at the light velocity c).
All photons possess a finite mass, and the physical implications have been discussed in many theoretical and experimental works [20,21,22]. In Refs. [21,22], the experiments were made with lasers, and they determined the range of the photon mass to be 10⁻⁶ eV < m_ν < 10⁻⁴ eV. We know the laser frequency is in the range of 8.9 × 10¹³ Hz to 9.23 × 10¹⁴ Hz. From Eq. (28), we can estimate the range in which the limit velocity c_m1 lies. If we take the middle experimental values, i.e., m_c = 10⁻⁵ eV and ν_c = 5.5 × 10¹⁴ Hz, we obtain the corresponding limit velocity. In the experiment of Ref. [23], E. Fomalont et al. observed light of frequency ν = 43 GHz passing near the solar limb; the photon mass upper limit is 3.5 × 10⁻¹¹ MeV. From this we can estimate the lower limit of the limit velocity c_m1. In Ref. [24], the experiment measured a superluminal velocity of 310c. In Refs. [25,26], the experiments measured signal velocities of 4.7c for microwaves and 1.7c for single photons.
3. The space-time transformation and mass-velocity relation for the third kind of particle (c ≤ v < c_m2)

In the following, we will give the space-time relation in two inertial reference frames Σ and Σ′ for the third kind of particle, whose velocity is in the range c ≤ v < c_m2. We think the muonic neutrinos and tachyons travel faster than light, but their velocity cannot be infinite. So, we can assume there is a limit velocity c_m2 for the third kind of particle, and we also give two postulates: 1. The Principle of Relativity: All laws of nature are the same in all inertial reference frames.
2. The Invariant Principle of the Limit Velocity: There is a limit velocity c_m2 in nature, and c_m2 is invariant in all inertial reference frames.
From the two postulates, we can obtain the new space-time transformation and velocity transformation for the third kind of particle. When we replace c with c_m2, we obtain the new transformation relation from the Lorentz transformation, Eq. (1).

Figure 1: Σ is the laboratory system, Σ′ is the mass-center system for the two particles m_1 and m_2, and the relative velocity of the two inertial frames is v.
where x, y, z, t are the space-time coordinates in the Σ frame, x′, y′, z′, t′ are the space-time coordinates in the Σ′ frame, c_m2 is the limit velocity, and u is the relative velocity between the Σ and Σ′ frames, which move along the x and x′ axes. The velocity transformation takes the same form as for the second kind of particle, with c_m1 replaced by c_m2, where u_x, u_y, u_z and v = √(u_x² + u_y² + u_z²) (c ≤ v < c_m2) are the particle velocity components and velocity in the Σ frame, and u′_x, u′_y, u′_z and v′ are the particle velocity components and velocity in the Σ′ frame.
In the following, we will give the new relation of particle mass m with its velocity v.
We can consider the collision between two identical particles. It is shown in Figure 1.
Σ is the laboratory system, and Σ′ is the mass-center system of the two particles m_1 and m_2. In the Σ system, the velocities of the two particles m_1 and m_2 are v_1 and v_2 (c ≤ v_2 < v_1 < c_m2), directed along the x (x′) axis, and they are v′ and −v′ in the Σ′ system. After the collision, the velocities of the two particles are both v (c ≤ v < c_m2) in the Σ system. Momentum is conserved in this process (equation (34)). According to equation (34), we obtain equations (35) and (36); from equations (35) and (36), we get equation (37).
From equation (34), we can obtain equation (38), where v = √(u_x² + u_y² + u_z²) and v′ = √(u′_x² + u′_y² + u′_z²). For the particle m_1, equation (38) becomes equation (39); for the particle m_2, equation (38) becomes equation (40). By substituting equations (39) and (40) into (37), we get equation (41), where m(c) is the particle mass when its velocity is the light velocity c.
For an arbitrary velocity v (c ≤ v < c_m2), we have equation (42) and hence equation (43), where m_c = m(c). Equation (43) is the relation between the tachyon mass m and its velocity v. From Eqs. (11)–(14), we can obtain the energy, momentum and dispersion relation of the third kind of particle by replacing c with c_m2 and replacing m_0 with m_c (Eqs. (44)–(46)). By substituting Eq. (43) into (44), we obtain the mass-energy relation.
From Eqs. (43) and (47), we can analyze the OPERA muonic neutrino experiment and estimate the velocity limit c_m2 for the third kind of particle. We can also calculate the muonic neutrino mass.
The Lorentz group and extended Lorentz group
For the first kind of particle, we can obtain the invariant interval ds from the invariant principle of the light velocity,

ds² = c²dt² − dx² − dy² − dz²,

with ds² = ds′². The Lorentz transformation is a linear transformation in 4-dimensional space-time which preserves this interval,

x′_µ = a_µν x_ν,  (51)

where µ, ν = 0, 1, 2, 3 and x_0 = ct. Eq. (51) can also be written in matrix form, and the Lorentz transformation a_µν satisfies an orthogonality relation; the aggregate of the orthogonal transformations a_µν constitutes a group, which is the Lorentz group.
For v in the x-direction, the special Lorentz transformation takes the familiar boost form. For the second and third kinds of particles, we can also obtain the invariant intervals ds from the invariant principle of the limit velocity,

ds² = c_mi² dt² − dx² − dy² − dz²,

where c_mi (i = 1, 2) is the limit velocity, and c_m1, c_m2 are the limit velocities of the second and third kinds of particles. The corresponding transformation (Eq. (58)) can also be written in matrix form, and the transformation b_µν satisfies an orthogonality relation. The aggregate of the orthogonal transformations b_µν constitutes a group, which is the extended Lorentz group.
For v in the x-direction, the special extended Lorentz transformation takes the same boost form with c replaced by the limit velocity c_m.
The relativistic dynamics for the second and third kinds of particles
For the first kind of particle (0 ≤ v < c), the 4-force is defined in the usual way: the "ordinary" force K and the fourth component follow from the covariant equation for a particle, and we define the force F accordingly (Eqs. (66)–(68)). From Eqs. (66)–(68), we obtain the relativistic dynamics equations for a particle, Eqs. (70) and (71). For the second kind of particle (0 ≤ v < c_m1), the relativistic dynamics equations are again Eqs. (70) and (71), but some physical quantities should be modified: the 4-momentum and 4-force are obtained by replacing c with c_m1. For the third kind of particle (c ≤ v < c_m2), the relativistic dynamics equations are also Eqs. (70) and (71), and the corresponding quantities are likewise modified: the 4-momentum and 4-force are obtained by replacing c with c_m2.

6. The quantum wave equation for the second and third kinds of particles

For the first kind of particle (0 ≤ v < c), we express E and p as operators; we can then obtain the quantum wave equation of a spin-0 particle from Eq. (6), and the quantum wave equation of a spin-1/2 particle, where α and β are matrices, σ are the Pauli matrices, and I is the 2 × 2 unit matrix.
For the second kind of particle (0 ≤ v < c_m1), we can obtain the quantum wave equation of a spin-0 particle from Eq. (14), and the corresponding quantum wave equation of a spin-1/2 particle. For the third kind of particle (c ≤ v < c_m2), we can obtain the quantum wave equation of a spin-0 particle from Eq. (46), and the corresponding quantum wave equation of a spin-1/2 particle. With Eq. (44), we obtain Table 1, where the first column is the muonic neutrino energy, the second and third columns are the minimum and maximum velocities corresponding to the different energies, the fourth column is the mass corresponding to the different energies, and the final column is the muonic neutrino mass at the light velocity c. From Table 1, we find that the muonic neutrino mass m(c) is in the range (1.161 ∼ 1.527) × 10⁻¹⁸ GeV s²/m², and that the muonic neutrino mass increases when its energy increases.
Conclusion
In this paper, an extended relativity theory is proposed. We think all particles can be divided into three kinds: the first kind of particle has velocity in the range 0 ≤ v < c, e.g. electrons, atoms, molecules and so on. The second kind of particle has velocity in the range 0 ≤ v < c_m1, e.g. the photon. The third kind of particle has velocity in the range c ≤ v < c_m2, e.g. tachyons and muonic neutrinos. The first kind of particle is described by special relativity. With the extended relativity theory, we can describe the second and third kinds of particles and obtain some new results. In addition, we analyze the OPERA experimental data and calculate the muonic neutrino mass.
The Rapunzel syndrome: a hairy tale
Background Trichobezoars are a rare medical condition, often requiring a surgical approach and commonly associated with an underlying psychiatric disorder. The Rapunzel syndrome is a rare variant of trichobezoar in the stomach extending from the stomach into the small intestine causing a bowel obstruction. Case presentation In this case report, the clinical presentation, diagnostic approach, and surgical removal of a large-size bezoar (Rapunzel syndrome) in a young and otherwise healthy female is described. Different surgical strategies are discussed. Psychiatric exploration gives an insight on development of trichophagia ultimately leading to the forming of the trichobezoar. Conclusions This brief report sheds light on the importance of the collective mind of a multidisciplinary team preventing a potentially fatal outcome.
Background
A bezoar is a concrement of indigestible human components or vegetable fibers that accumulate over time in the gastrointestinal tract. The most common type of bezoar in humans is the trichobezoar, which is mostly made of hair. However, bezoars can also form from any indigestible material. Various case reports describe the occurrence and diagnostic as well as surgical management of these peculiar surprises.
Trichobezoars, on the contrary to other bezoars, are not associated with alterations in the gastrointestinal motility, but with underlying psychiatric disorders. They are most commonly presented in young female adults [1]. The development of trichobezoars is a salient complication of trichophagia, an obsessive-compulsive behavior characterized by eating hair [2]. Trichophagia is thought to be in most cases preceded by trichotillomania, an irresistible urge to pull one's own hair [3]. However, other underlying or associated psychiatric diseases involve post-traumatic stress disorder (PTSD), for example as a result of childhood neglect or abuse, as well as affective disorders [4,5]. While diagnostic and surgical procedures of trichobezoars are well described in the literature, psychiatric literature on the etiology of trichobezoars remains anecdotal and unsystematic [6].
The here reported rare and unusual form of a trichobezoar extending into the small intestine is colloquially called "Rapunzel syndrome".
Case presentation
We are reporting about a previously healthy 21-yearold female with an ileus due to a large-size bezoar in the stomach and small bowel after a history of eating her own hair for several years.
The woman presented to the emergency department with a history of unspecific abdominal pain and vomiting after food or water intake. Furthermore, similar but less severe symptoms had been reported for a few years. According to the mother, a habit of eating hair had been observed by family members. A mild anemia had previously been treated with intravenous iron supplementation.
Diagnostic GIT-endoscopy was pending. Apart from that, medical history was unremarkable, and the patient never had abdominal surgery before. The patient presented with normal weight and shoulder long hair. Abdominal exam showed reduced bowel sounds with otherwise normal findings. No abdominal tenderness was noted. Ultrasound showed pendulum peristalsis in the small bowel, a greatly enlarged stomach and a non-vascularized obstructing mass in the lower abdomen (Fig. 1). Plain radiograph of the abdomen showed multiple air fluid levels with distended small intestinal bowel loops (Fig. 1). Laboratory work-up revealed a leukocytosis (19.8 G/L, ref 2.6-7.8) and a hyperregenerative microcytic and hypochromic anemia (95 g/L, ref 115-148). C-reactive protein (CRP)-levels, complete metabolic panel, as well as liver and pancreatic enzymes, were normal. A urine sample was contaminated and therefore non-conclusive.
Because of young age of the patient, computed tomography (CT) scan was discussed considering exposure to radiation versus direct explorative laparoscopic surgery with a high risk for laparotomy. It was decided to perform a CT scan nevertheless due to the ileus-like picture of the abdomen and the repetitive vomiting to evaluate extent of the likely needed operation. CT scan confirmed the suspected diagnosis of a mechanical-caused ileus due to a large mass in the small intestines in the left lower abdomen. Furthermore, there was a large mass seen distending the whole stomach (Figs. 2, 3).
Conclusively, a mechanical ileus due to a bezoar in the small intestine and a bezoar in the stomach was diagnosed. Additionally, extensive collateral circulation with portacaval shunting was present, most likely due to compression of the portal vein ( Fig. 2).
A virtual CT-reconstruction of the findings initially showed a possible tapering tail reaching from the stomach downwards (Fig. 3). This finding was consistent with subsequent intraoperative findings (Fig. 4).
A primary upper median laparotomy with gastrotomy and ileotomy was performed and a 29*19*10 cm trichobezoar was removed from the stomach (Fig. 4) and a smaller 14*4*4 cm trichobezoar was removed from the small intestine (Fig. 5), each in one piece. The trichobezoar removed from the stomach showed a tapering tail extending into the small bowel and was a perfect cast of the stomach, pylorus and duodenal bulb (Fig. 4).
Trichophagia was diagnosed by the in-house psychiatric staff. In psychiatric exploration, the patient reported having memories of "playing with her own hair" since the age of five after observing her mother showing similar behavior. The patient reported increase in pulling hair with subjective stress level. While the patient initially stored the pulled out hair in the nightstand, swallowing started after family members became aware of it. While the patient described herself as socially rather isolated and with only a few friends, there were no obvious signs of psychopathology. Especially denied were mood disorders, anxiety or/and a history of abuse or neglect, commonly reported in patients with trichophagia [7]. However, the patient reported suffering from severe sleep disorders, which she attributed to nausea and stomach cramps during the night.
The patient further described hair pulling and swallowing as possessing stress-relieving qualities, albeit being performed mostly secretly at home and not in public. Previous attempts at stopping the compulsive acts have failed. The patient furthermore described hair pulling and swallowing as happening somehow "out of her own conscious perception". Even though the patient realized that there was something "out of order" with appetite and digestion, she did not consciously attribute the symptoms to trichophagia.

Fig. 1 Ultrasound of the lower abdomen shows enlarged small bowel (4.5 cm diameter) consistent with an ileus. An obstructing conglomerate tumor is visible (left side). Plain radiograph shows multiple air fluid levels as a sign for ileus (right side).
The patient was discharged 5 days after admission after a clinically good recovery but against surgeon's recommendation due to still highly elevated CRP-levels (202 mg/L, ref < 5). Due to the extent of trichophagia and lack of insight into severity of the disease, a specialized in-patient psychiatric clinic to treat the obsessive-compulsive disorder was recommended but rejected by the patient. An out-patient appointment was organized. In a follow-up psychiatric evaluation, the patient stated that she has continued eating hair after hospital discharge. The patient perceived eating hair as "hard to control" and "happening rather unconsciously", followed by frustration and a feeling of failure.
A check-up with the family doctor showed good wound healing, decreasing CRP-level and increasing hemoglobin levels.
Discussion
Foreign material in the gastrointestinal tract can lead to Bezoars. These concrements occur mainly in the stomach. Bezoars composed of hair or hair-like fibers are called trichobezoars. They are associated with the obsessive-compulsive disorder trichotillomania (pulling out one's own hair) and trichophagia (eating hair), however, there is anecdotal evidence of other (comorbid) underlying psychiatric diseases such as affective disorders as well as severe neglect or (sexual) abuse [7,8]. According to estimations, only 1% of patients with trichophagia develop a trichobezoar [9,10]. Trichobezoars form when hair in the stomach escapes the peristaltic propulsion of the stomach due to its slippery surface and accumulates in the folds of the gastric mucosa. In rare cases, a higher amount accumulates. As a result, the gastric peristaltic is forming the mass into a ball and ultimately into a perfect cast of the stomach, usually as one single solid mass [11].
In the literature, it is assumed that the usual location of these trichobezoars is associated with the holdup at the pylorus, the motion of the stomach and, finally, the entangling of new hair into the mass. Stomach mucus covers the trichobezoar and gives it a shiny look; gastric acid denatures the hair's protein and gives the bezoar its dark color. Due to decomposition and fermentation of the hair, the patients might have a putrid breath and sometimes present with halitosis [12,13].
Rapunzel syndrome is a rare form of a trichobezoar with no consistent definition in literature. Various definitions are described, for example a gastric trichobezoar with a tail extending to the ileocecal junction [14,15]. Furthermore, some authors describe it as a simple gastric trichobezoar with a tail which may lead to the jejunum or further and some define it as any size which causes intestinal obstruction [14].
Patients usually stay asymptomatic for many years. Symptoms start developing as the trichobezoar increases in size up to the point of obstruction. The most common symptoms are therefore abdominal pain, nausea and vomiting, intestinal obstruction, and peritonitis. Indirect signs, caused by malabsorption, are iron deficiency with consecutive microcytic and hypochromic anemia, vitamin B12 deficiency with consecutive megaloblastic anemia, fatigue, protein-losing enteropathy, and weight loss. A large obstructing or eroding bezoar may cause complications such as gastric ulceration, obstructive jaundice and acute pancreatitis [16]. When a bezoar is suspected, the focus in examination should be on trichotillomania and trichophagia as well as ingestion of items such as dolls/wigs or pet hair. Further clues are a refractory halitosis and patchy alopecia. The gold standard for diagnosis is upper gastrointestinal endoscopy for visualizing as well as possible sample taking and, when confirmed, initiation of therapeutic options.
The treatment of a bezoar focuses on surgical removal of the mass. Prevention of recurrence may only be reached by addressing the underlying psychiatric illness. The removal of the mass depends on its consistency, size and localization: the right approach might be via endoscopy or surgery. The endoscopic approach might be effective for phytobezoars or lactobezoars as they are usually smaller in size. Specialized bezotomes and bezotriptors (medical devices to pulverize bezoars either mechanically or acoustically) are used to fragment solid trichobezoars [17]. Trichobezoars, particularly large ones (> 20 cm), and the Rapunzel syndrome are less likely to be removed via endoscopy and usually require surgery due to their extension [9]. Surgical removal is done by gastrotomy and enterotomy. Surgery is indicated when the size of the bezoar causes perforation, hemorrhage, or an ileus. The surgical access depends on the trichobezoar's size: an upper midline laparotomy with gastrotomy, as performed in our patient, or a minimally invasive laparoscopic approach for smaller to moderate-size bezoars [14]. Multiple other methods like extracorporeal shock wave lithotripsy, administration of enzymes to the stomach (pancreatic lipase, cellulase), and medications (metoclopramide, acetylcysteine) demonstrate heterogeneous treatment success [17,18].
Recurrence is reported after the initial removal of bezoars. Therefore, a long-term psychiatric follow-up is advised [19]. However, the patient's motivation to engage in psychiatric/psychological treatment (e.g., cognitive-behavioral therapy to reduce obsessive-compulsive behavior) is a prerequisite and is essential for preventing recurrence of a trichobezoar. In that case, the long-term prognosis is favorable [16].
Conclusions
The Rapunzel syndrome as presented in this case is a rare variant of trichobezoar in the stomach extending from the stomach into the small intestine and/or causing a bowel obstruction. While small trichobezoars may be removed by an endoscopic approach (fragmentation, lavage, enzymatic therapy, or combinations), larger trichobezoars/the Rapunzel syndrome usually needs a surgical removal.
Trichobezoar as an entity should be considered in the differential diagnosis of abdominal pain and non-tender abdominal mass in young patients. A thorough assessment of psychiatric history is mandatory to address the underlying disease to prevent recurrence.
PTSD: Post-traumatic stress disorder; GIT: Gastrointestinal tract; CRP: C-reactive protein; CT: Computed tomography
Developing an AI/ML Tool to Detect Whether a System / Firewall / Router / Network Is Compromised
Computer networks are the target of several kinds of attacks every hour and every day, and these attacks have evolved into significant threats. New attacks and trends keep appearing, targeting every open port available on the network. Several tools are designed for this purpose, such as network mapping and vulnerability scanning tools. Recently, machine learning (ML) has become a widely used technique for feeding the Intrusion Detection System (IDS) to detect malicious network traffic. The detection efficiency of ML models relies on the quality of the dataset used to train the model. This work proposes a detection framework with an ML model for feeding an IDS to detect network traffic anomalies. The detection model uses a dataset constructed from malicious and normal traffic. A significant challenge of this work is the set of extracted features used to train the ML model on various attacks so that it can distinguish whether traffic is anomalous or normal. The network traffic part of the ISOT-CID dataset is used for training the ML model. We added some significant column features and verified that these features support the ML model in the training phase. The network traffic part of ISOT-CID contains two types of features: the first are extracted from the network traffic flow, and the others are computed over specific time intervals. We also present a novel column feature added to the dataset and show that it increases detection quality. This feature depends on the deviation of packet payload lengths in the traffic flow. The results and experiments produced by this work are significant and encourage us and others to expand the work in the future. "Artificial intelligence is the tool of making decisions that would require perception if done by a human."
Introduction
An Intrusion Detection System (IDS) is a software application that detects network intrusions using various machine learning algorithms. An IDS monitors a network or system for malicious activity and protects a computer network from unauthorized access by users, possibly including insiders. The intrusion detection learning task is to build a predictive model (i.e. a classifier) capable of distinguishing between 'bad connections' (intrusions/attacks) and 'good (normal) connections'.
In network attacks, the attacker must know the active addresses, the network topology, and the available services. Network scanners can identify open ports on a system, whether TCP or UDP ports, where shared services are bound to specific ports, and an attacker could send packets to every port. TCP fingerprinting exploits the characteristic ways in which systems reply to malformed packets: different vendors' TCP/IP stacks answer differently to illegal packets. So, the attacker can determine the operating system by sending numerous combinations of illegal packet options, initiating a connection with an RST packet, or combining other odd and illegal TCP code bits. The attacker could thus learn whether a machine is running Linux, Windows, or any other operating system.
The most significant contribution of this work is the new feature that we added. We believe this new feature supports the ML model in the training process. We call it the deviation ("rambling") feature. Most machine learning models learn from the deviations of instance values, and values that lie closer together can support the classification process more directly. From our knowledge of network traffic, flows contain many different packet sizes across various types of content. Network protocols have packet-size limits defined by industrial vendors such as Xerox (Ethernet V2), Intel, etc., most of them in the range of 64 to 1518 bytes. Suppose we capture a group of packets that have the same destination IP address in a time interval. Let the payload of the packet at a specific time T be V_i, and let X_i be the mean of these values V (i = 0, 1, 2, ..., n); the deviation feature R is then calculated for each flow instance over the interval (t, dt).
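The exact formula for R is not reproduced in the extracted text above, so the sketch below simply assumes it is the standard deviation of the payload lengths around their mean within the interval; the flow data are placeholders.

```python
from statistics import mean, pstdev

def deviation_feature(payload_lengths):
    """Deviation of packet payload lengths within one flow interval.

    Assumes R is the (population) standard deviation around the mean payload
    length; the exact formula used in the original work is not reproduced here.
    """
    if not payload_lengths:
        return 0.0
    return pstdev(payload_lengths, mu=mean(payload_lengths))

# Example: payload lengths (bytes) of packets sharing a destination IP in one interval
flow = [64, 1518, 1460, 64, 512, 1460]
print(round(deviation_feature(flow), 2))
```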
Figure 2. Formulas for computing the in-frequency and out-frequency features.
The proposed dataset is extracted from network traffic over different periods and contains the frame time, source MAC, destination MAC, source IP, source port, destination IP, destination port, IP length, IP header length, TCP header length, frame length, offset, TCP segment, TCP acknowledgment, in-frequency number, and out-frequency number. These attributes of a network flow can indicate whether its packets are anomalous or normal. The formulas shown in Fig. 2 are used to calculate the in-frequency number and the out-frequency number.
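Since the Fig. 2 formulas are not reproduced in this text, the following sketch assumes the in-frequency and out-frequency of a host are simply the counts of packets it receives and sends within the capture interval; the packet tuples are placeholders.

```python
from collections import Counter

def frequency_features(packets):
    """Count per-IP in-frequency (packets received) and out-frequency (packets sent).

    `packets` is an iterable of (src_ip, dst_ip) tuples captured in one interval.
    This is an assumed interpretation of the Fig. 2 formulas, not the original ones.
    """
    out_freq = Counter(src for src, _ in packets)
    in_freq = Counter(dst for _, dst in packets)
    return in_freq, out_freq

packets = [("10.0.0.5", "10.0.0.9"), ("10.0.0.5", "10.0.0.9"), ("10.0.0.9", "10.0.0.5")]
in_freq, out_freq = frequency_features(packets)
print(in_freq["10.0.0.9"], out_freq["10.0.0.5"])  # 2 packets in, 2 packets out
```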
Dataset preparation stage
Understanding the dataset: cloud computing networks face security threats just as traditional computing networks do, with some additional differences. Because of the several protocols, services, and technologies involved, such as virtual infrastructures, these additional security threats related to the cloud infrastructure appear at different data-formatting levels. In such an environment, providing protection should consider all data traffic, both insider and outsider. The remaining challenge in completing this job is building an ML model that trains the IDS to capture anomalies at these various levels of data abstraction. Furthermore, extracting features from these several data sources requires suitable tools to pass the gathered raw data to the trained ML model. The extraction tools should gather recent data instances from several resources in real time. Moreover, ISOT-CID is fundamentally raw data that has not been converted, altered, or manipulated; it is prepared and structured for securing the cloud community, as described in the Ph.D. thesis of Aldribi et al. In this work, we consider only the network traffic part of the dataset.
Labeling the dataset
Labeling the dataset is a significant step for training the ML algorithm to classify new traffic as malicious or normal. After calculating the attributes in Table 2 using the Java program described in the previous section, we extended the program to label each instance's class as Normal or Malicious based on its source and destination IP addresses.
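The exact labeling rule is cut off in the text above; the sketch below assumes the common convention of marking a flow as Malicious when its source or destination IP appears in a list of known attacker IPs (the IPs shown are invented), and as Normal otherwise.

```python
# Hypothetical attacker IPs; ISOT-CID documents its own list, which is not reproduced here
ATTACKER_IPS = {"203.0.113.7", "198.51.100.23"}

def label_instance(src_ip, dst_ip, attacker_ips=ATTACKER_IPS):
    """Label a flow instance as 'Malicious' if either endpoint is a known attacker IP."""
    return "Malicious" if src_ip in attacker_ips or dst_ip in attacker_ips else "Normal"

print(label_instance("10.0.0.5", "203.0.113.7"))  # Malicious
print(label_instance("10.0.0.5", "10.0.0.9"))     # Normal
```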
Contributions
AAls and AAld both contributed to the design of the proposed system. AAls implemented and coded the system, carried out the testing, and obtained the results. As supervisor, AAld supported and guided AAls during her MSc degree with ideas and knowledge. Both authors read and approved the manuscript.
Figure 3. Description of the extracted flow features:
S_MAC - Source MAC address in the data link frame
D_MAC - Destination MAC address in the data link frame
S_IP - Source IP of the data packet
S_PT - Source port of the data packet
D_IP - Destination IP of the data packet
D_PT - Destination port of the data packet
IP_LEN - Length of the IP packet
IP_HLEN - IP header length of the IP packet
TCP_HLEN - TCP header length
IP_OFFS - Offset of the IP packet
TCP_SEQ - Data position (sequence number) of the TCP segment
TCP_ACK - Amount of data received (acknowledged)
Supercontinuum generation by nanosecond dual-pumping near the two zero-dispersion wavelengths of a photonic crystal fiber
Supercontinuum generation by dual-wavelength nanosecond pumping in the vicinity of both zero-dispersion wavelengths of a photonic crystal fiber (PCF) is experimentally demonstrated. It is shown in particular that two pumps at 1535 nm and 767 nm simultaneously pumping near the two zero-dispersion wavelengths of a specially designed PCF yield a combined visible and infrared supercontinuum spectrum spanning from 0.55 μm to 1.9 μm. We discuss the generation mechanisms underlying the continuum formation in terms of modulation instability and cascaded Raman generation.
Introduction
The study of supercontinuum (SC) generation in photonic crystal or microstructured fibers continues to be an area of active research, motivated by new applications requiring high intensity light sources over extended wavelength ranges [1][2][3][4]. Although many previous reports on SC generation have used fibers with only one zero-dispersion wavelength (ZDW), studies of SC generation in photonic crystal fiber (PCF) presenting two ZDWs have been the subject of increasing interest [5][6][7][8][9][10][11]. For instance, enhanced SC bandwidth with improved flatness has been demonstrated in PCFs by generating two dispersive waves in the short- and long-wavelength sides of the SC spectrum [7,10]. In addition, it has recently been shown that multiwavelength pumping also allows for significant enhancement of the SC bandwidth [12][13][14][15][16]. In this paper, we extend these studies of dual wavelength SC generation by reporting experiments where we have performed dual-wavelength nanosecond pumping, simultaneously exciting broadband SC generation about each ZDW of a specially designed PCF. We use a microchip nanosecond Q-switched laser source at 1535 nm and periodically poled lithium niobate for efficient frequency doubling at 767 nm, and a PCF with ZDWs at 1540 nm and 863 nm. The two pumps at 1535 nm and 767 nm appear to generate visible and infrared supercontinua independently, and the combined spectrum spans from 0.55 μm to 1.9 μm. By tailoring the relative power of the two pumps, the spectral content of the spectrum can be significantly varied, and this setup represents a particularly simple and compact means of generating broadband spectra in the visible and infrared spectral regions.
Experimental setup
The experimental setup is shown schematically in Fig. 1. As a pump source, we use a Cobolt Tango™ eye safe laser. This pulsed laser emits at 1535 nm with a repetition rate of 3.3 kHz and an average power of 14 mW, pulse length of 3 ns and peak power around 1.4 kW. The pump is frequency doubled to 767.5 nm using a 1 cm long periodically poled Lithium niobate (PPLN) crystal with a lens of 50 mm focal length. Owing to the high peak power of the pump laser, other spectral components are also generated in the PPLN crystal such as the fourth harmonic at 384 nm and a green component at 511 nm resulting from sum frequency generation (SFG) between the fundamental and the second harmonic [18]. The mean power of the second harmonic component has been estimated at 1.5 mW. Simultaneous coupling of the visible and infrared pumps was achieved for SC generation by using an aspheric lens with infrared coating with an effective focal length of 4.5 mm in order to minimize coupling loss due to chromatic aberrations. The white-light optical mode at the end of the fiber is also shown in Fig. 1, together with the visible part of the SC spectrum. Fig. 2(a) shows the cross-section (SEM image) of the PCF used. The fiber geometry consists of four rings of holes with different diameters. The first two rings are based on a triangular lattice and the smaller hole diameter is 800 nm. The two external rings consist of 12 larger elliptical holes that isolate the fiber core from the coating to lower the confinement losses. From this SEM image we simulated the fundamental optical mode at the pump wavelength propagating in the fiber using COMSOL© software. The fiber cross-section was imported from the SEM image and the optical mode was solved with the RF module. Fig. 2(b) shows that the fundamental mode is confined inside the first two rings, leading to a small effective mode area A_eff = 5 μm^2 and therefore to a high nonlinear coefficient of γ = 2πn_2/(λA_eff) = 20.5 W^-1 km^-1, with n_2 = 2.5 × 10^-20 m^2 W^-1. The group velocity dispersion (GVD) curve is also plotted in Fig. 2(c). We clearly see the existence of two zero-dispersion wavelengths located respectively at 863 nm and 1540 nm. The two superposed red lines in the figure show the position of two pump wavelengths used for SC generation. Note that the infrared pump at 1535 nm is very close to the second ZDW whereas the visible pump falls slightly in the normal dispersion regime below the first ZDW. It is important to stress here that this situation is significantly different from previous studies of SC generation with two ZDW PCF where the pump wavelengths were always very far from the ZDWs. The second, third and fourth order dispersion coefficients calculated at the pump wavelength of 1535 nm are β_2 = 1.207 ps^2 km^-1, β_3 = −0.315 ps^3 km^-1 and β_4 = 0.002 ps^4 km^-1, respectively. Note that the dispersion slope at 1535 nm is negative and thus leads to a novel regime for infrared SC generation [2,10,17,19].
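As a quick numerical check of the quoted nonlinear coefficient, the short Python sketch below evaluates γ = 2πn_2/(λA_eff) with the values given above.

```python
import math

# Reproduce the quoted nonlinear coefficient gamma = 2*pi*n2 / (lambda * A_eff)
# using the values given in the text.
n2 = 2.5e-20          # nonlinear index, m^2/W
lam = 1535e-9         # pump wavelength, m
A_eff = 5e-12         # effective mode area, m^2 (5 um^2)

gamma = 2 * math.pi * n2 / (lam * A_eff)        # W^-1 m^-1
print(f"gamma = {gamma * 1e3:.1f} W^-1 km^-1")  # ~20.5 W^-1 km^-1
```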
Experimental results
We first investigated the general variation in the spectral properties observed as the relative pump-SH power was varied. Fig. 3 shows a false color representation of the measured output spectra for variable coupling efficiency of both infrared and visible pumps. This is achieved simply by translating the aspherical lens along the optical z-axis. Unfortunately, it was not possible to measure the coupling efficiency and the powers launched in the PCF. An estimation was however possible from the spectra and is given in Figs. 4 and 6, that show some output spectra of the color map indicated by horizontal dashed lines in Fig. 3. As seen in Fig. 4, the infrared pump near the second ZDW undergoes spectral broadening through modulation instability (MI), which is manifested, in the Fourier domain, by the clear generation of two sidebands symmetrically located around the pump, as indicated by the arrows in Fig. 4. Their positions in the SC spectrum are predicted by the following phase-matching relation: β_2 Ω^2 + (β_4/12) Ω^4 + 2γP = 0, where P is the infrared peak power and Ω = ω_MI − ω_P is the frequency shift between the pump and the instability bands. A theoretical model in good agreement with measurements was already presented in detail in Ref. [17]. Stimulated Raman scattering (SRS) also plays a central role in the infrared SC generation and manifests itself through the appearance of a Stokes band which is frequency down-shifted by approximately 13.2 THz from the pump. Fig. 4 shows the generation of a first Raman order denoted S1, strongly seeded by the MI process, and one can also notice the appearance, for sufficiently large pump power, of a second Raman order S2 at 1792 nm. Such a Raman cascade can be clearly observed because the Stokes bands fall within the normal dispersion regime and do not undergo spectral broadening or soliton self-frequency shift dynamics, as is generally the case when pumping near the first ZDW [19]. Unlike the MI process, the Raman gain is anti-symmetric and spectral components of the anti-Stokes side, generated by four-wave mixing, have in principle a lower amplitude than those of the Stokes side. However, the observation of anti-Stokes Raman components (AS1, AS2), as shown in Fig. 4, is due to the strong coupling between SRS and parametric gain [17].
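For illustration, the phase-matching relation can be solved numerically for the sideband detuning. In the Python sketch below the dispersion magnitudes follow the text, while the sign of β_2 at 1535 nm (anomalous regime) and the launched peak power P are assumptions of ours, not measured values.

```python
import math

# Solve the MI phase-matching relation  beta2*W^2 + (beta4/12)*W^4 + 2*gamma*P = 0
# for the detuning W = omega_MI - omega_P.  Setting X = W^2 turns it into the
# quadratic (beta4/12)*X^2 + beta2*X + 2*gamma*P = 0.
def mi_sidebands(beta2, beta4, gamma, P):
    a, b, c = beta4 / 12.0, beta2, 2.0 * gamma * P
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return []                                   # no phase-matched sidebands
    shifts = []
    for X in ((-b + math.sqrt(disc)) / (2 * a), (-b - math.sqrt(disc)) / (2 * a)):
        if X > 0:
            shifts.append(math.sqrt(X))             # detuning in rad/s
    return shifts

beta2 = -1.207e-27   # s^2/m  (1.207 ps^2/km, sign assumed negative: anomalous GVD)
beta4 = 0.002e-51    # s^4/m  (0.002 ps^4/km)
gamma = 20.5e-3      # W^-1 m^-1
P = 20.0             # W, assumed launched peak power (illustrative only)
for W in mi_sidebands(beta2, beta4, gamma, P):
    print(f"sideband detuning ~ {W / (2 * math.pi) / 1e12:.1f} THz")
```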
The spectral broadening of all anti-Stokes Raman order can be attributed to the fact that they fall in the anomalous dispersion regime and therefore are modulationally unstable.
When we move the injection lens away along the z-axis, the coupling efficiency for the infrared pump decreases whereas it increases for the visible pump, which is, in turn, spectrally broadened. Fig. 6 shows that the visible pump at 767.5 nm generates frequency-down-shifted Stokes Raman components through a cascading process in the normal dispersion regime of the PCF. An additional spectrum with a higher resolution actually shows that the Raman cascade exhibits four orders, as shown in Fig. 5.
As the fourth Raman order falls in the anomalous dispersion regime at 874 nm just above the first ZDW of the PCF, SC dynamics then takes place through MI, solitons and dispersive wave generation towards shorter wavelengths, as shown in the spectra plotted in Fig. 6. For higher visible and infrared pump powers, the spectrum in red in Fig. 6 shows that the two independently generated visible and infrared SC spectra merge together to give rise to a wide SC spectrum. The XPM contribution in the SC generation can however be seen in the color map in Fig. 3 when both the visible and IR spectra exhibit a homogeneous spectral broadening towards longer wavelengths. The IR spectrum shows in particular a sharp transition from the second order S2 to a broad and smooth spectrum due to the effect of XPM with the visible part of the spectrum. Moreover, this also leads to the smoothing of the discrete Raman cascade in the SC visible part and of the MI sidebands, as can be seen in Fig. 6.
Finally, a wide supercontinuum is generated in the two-ZDW PCF for a particular coupling of both components into the PCF. The SC spans from 550 nm to more than 1800 nm in Fig. 7, which are the lower and upper limits of our spectrum analyzer. Note that the blue and red sides of the SC spectrum appear with a small amplitude because of the low quantum efficiency of the OSA detector at those wavelengths. The SC short-wavelength side is well below 550 nm given that the SC is white light, and the red side should probably reach more than 2 μm. The insets also show that nearly white-light SC generation is obtained in the fundamental optical mode. The total SC output mean power was measured to be 1.5 mW, and the SC flatness is relatively poor (about 20 dB). However, these could easily be improved by separating the two laser beams using a dichroic filter and by matching their beam waists, as done in Ref. [13]. The use of a fibered-PPLN waveguide should also greatly simplify and improve the SC power and flatness. (Fig. 4: output spectra plotted from Fig. 3; the vertical axis is power spectrum, 10 dB/div.)
Conclusion and discussion
In this paper, we experimentally investigated supercontinuum generation by dual nanosecond pumping near the two widely spaced zero-dispersion wavelengths of a highly nonlinear photonic crystal fiber. This was achieved by using a compact microchip laser at 1535 nm, frequency doubled by a PPLN crystal to 767.5 nm. These two wavelengths nearly match the two zero-dispersion wavelengths and generate visible and infrared supercontinua that merge at high pump powers. The SC shows a spectral extent from 550 nm to more than 1900 nm. Finally, the pumping scheme that we demonstrated in this work could be advantageously used to pump small-core PCFs with a ZDW near 770 nm, close to the SHG wavelength, as an alternative to bulky and costly Ti:Sapphire femtosecond lasers, for applications such as those that commonly require incandescent light sources [20].
Sequentially Cohen-Macaulay Edge Ideals
Let G be a simple undirected graph on n vertices, and let I(G) \subseteq R = k[x_1,...,x_n] denote its associated edge ideal. We show that all chordal graphs G are sequentially Cohen-Macaulay; our proof depends upon showing that the Alexander dual of I(G) is componentwise linear. Our result complements Faridi's theorem that the facet ideal of a simplicial tree is sequentially Cohen-Macaulay and implies Herzog, Hibi, and Zheng's theorem that a chordal graph is Cohen-Macaulay if and only if its edge ideal is unmixed. We also characterize the sequentially Cohen-Macaulay cycles and produce some examples of nonchordal sequentially Cohen-Macaulay graphs.
Introduction
Let G be a simple graph on n vertices (so G has no loops or multiple edges between two vertices). Denote the vertex and edge sets of G by V G and E G respectively. We associate to G the quadratic squarefree monomial ideal I(G) ⊆ R = k[x 1 , . . . , x n ], with k a field, where I(G) = ({x i x j | {x i , x j } ∈ E G }). The ideal I(G) is called the edge ideal of G.
The primary focus of this paper is edge ideals of chordal graphs. A graph G is chordal if every cycle of length n > 3 has a chord. Here, if {x 1 , x 2 }, . . . , {x n , x 1 } are the n edges of a cycle of length n, we say the cycle has a chord in G if there exists two vertices x i , x j in the cycle such that {x i , x j } is also an edge of G, but {x i , x j } is not an edge of the cycle.
We say that a graph G is Cohen-Macaulay if R/I(G) is Cohen-Macaulay. As Herzog, Hibi, and Zheng point out, classifying all the Cohen-Macaulay graphs is probably not tractable right now; this problem is as difficult as classifying all Cohen-Macaulay simplicial complexes [11]. However, Herzog, Hibi, and Zheng proved in [11] that when G is a chordal graph, then G is Cohen-Macaulay (over any field) if and only if I(G) is unmixed.
The property of being sequentially Cohen-Macaulay, a condition weaker than being Cohen-Macaulay, was introduced by Stanley [15] in connection with the theory of nonpure shellability.
We say that a graph G is sequentially Cohen-Macaulay (over k) if R/I(G) is sequentially Cohen-Macaulay. We can expand upon Herzog, Hibi, and Zheng's result by using this weakening of the Cohen-Macaulay condition. Our main result is the following theorem (which is independent of char(k)). Theorem 1.2 (Theorem 3.2). All chordal graphs are sequentially Cohen-Macaulay.
Thus even chordal graphs whose edge ideals are not unmixed still satisfy a good algebraic property. Theorem 3.2 also generalizes the one-dimensional case of work of Faridi on simplicial forests [4].
Our paper is organized as follows. In the next section, we gather some results from the literature on Alexander duality and on chordal graphs. In Section 3, we prove Theorem 3.2. We consider some nonchordal graphs in Section 4, classifying the sequentially Cohen-Macaulay cycles and investigating some properties of graphs containing n-cycles for n > 3. We also give a sufficient condition for a graph to fail to be sequentially Cohen-Macaulay.
Required ingredients
Throughout this paper G will denote a simple graph on n vertices with vertex set V G and edge set E G . Associated to G is the edge ideal I(G) = ({x i x j | {x i , x j } ∈ E G }) ⊆ R = k[x 1 , . . . , x n ]. The complete graph on n vertices, denoted K n , is the graph with edge set E G = {{x i , x j } | 1 ≤ i < j ≤ n}, i.e., the graph with the property that there is an edge between every pair of vertices. If x is a vertex of G, we shall write N(x) to denote the neighbors of x, that is, those vertices that share an edge with x. We shall be primarily interested in the case that G is a chordal graph. Chordal graphs have the following property: Lemma 2.1. [16, Lemma 6.7.12] Let G be a chordal graph, and let K be a complete subgraph of G. If K ≠ G, then there is a vertex x ∉ V K such that the subgraph induced by the neighbor set N(x) of x is a complete subgraph. This also forces the subgraph induced on N(x) ∪ {x} to be a complete subgraph.
A vertex cover of a graph G is a subset A of V G such that every edge of G is incident to at least one vertex of A. Note that we never need to include an isolated vertex in a vertex cover. For example, if we have a graph on three vertices x 1 , x 2 , and x 3 , and {x 1 , x 2 } is the only edge, then {x 1 } and {x 2 } are both vertex covers. The vertex covers of a graph G are related to the Alexander dual of I(G). Definition 2.2. Let I be a squarefree monomial ideal. The squarefree Alexander dual of I = (x 1,1 · · · x 1,s 1 , . . . , x t,1 · · · x t,st ) is the ideal I ∨ = (x 1,1 , . . . , x 1,s 1 ) ∩ · · · ∩ (x t,1 , . . . , x t,st ). A simple exercise will then verify: Lemma 2.3. Let G be a simple graph with edge ideal I(G). Then I(G) ∨ = ⋂ {x i ,x j }∈E G (x i , x j ), and the minimal generators of I(G) ∨ correspond to minimal vertex covers.
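As an illustrative aside (not part of the original text), Lemma 2.3 can be checked on small graphs by brute force; the Python sketch below lists the minimal vertex covers of the 5-cycle and hence the minimal generators of I(G) ∨.

```python
from itertools import combinations

# Sketch: enumerate minimal vertex covers of a small graph; by Lemma 2.3 they
# give the minimal generators of the Alexander dual I(G)^v of the edge ideal.
def is_cover(cover, edges):
    return all(e & cover for e in edges)

def minimal_vertex_covers(vertices, edges):
    covers = []
    for r in range(len(vertices) + 1):            # increasing size guarantees minimality
        for subset in combinations(vertices, r):
            s = set(subset)
            if is_cover(s, edges) and not any(c <= s for c in covers):
                covers.append(s)
    return covers

# Example: the 5-cycle on x1..x5.
vertices = [1, 2, 3, 4, 5]
edges = [frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]]
for c in minimal_vertex_covers(vertices, edges):
    print("generator of I(G)^v:", "*".join(f"x{i}" for i in sorted(c)))
```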
Associated to any homogeneous ideal I of R is a minimal free graded resolution where R(−j) denotes the R-module obtained by shifting the degrees of R by j. The number β i,j (I) is the ij-th graded Betti number of I and equals the number of minimal generators of degree j in the i-th syzygy module.
Definition 2.4. Suppose I is a homogeneous ideal of R whose generators all have degree d. Then I has a linear resolution if for all i ≥ 0, β i,j (I) = 0 for all j = i + d.
For a homogeneous ideal I, we write (I d ) to denote the ideal generated by all degree d elements of I. Note that (I d ) is different from I d , the vector space of all degree d elements of I. Herzog and Hibi introduced the following definition in [9]. Definition 2.5. A homogeneous ideal I is componentwise linear if (I d ) has a linear resolution for all d. For a squarefree monomial ideal I, we write I [d] to denote the ideal generated by the squarefree monomials of degree d in I. Theorem 2.6 ([9]). A squarefree monomial ideal I is componentwise linear if and only if I [d] has a linear resolution for all d. One can use linear quotients to determine if an ideal has a linear resolution.
Definition 2.7. Let I be a monomial ideal of R. We say that I has linear quotients if for some ordering u 1 , . . . , u m of the minimal generators of I with deg u 1 ≤ deg u 2 ≤ · · · ≤ deg u m and all i > 1, (u 1 , . . . , u i−1 ) : (u i ) is generated by a subset of {x 1 , . . . , x n }.
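For illustration (ours, not part of the original text), the linear-quotients condition of Definition 2.7 can be tested mechanically for squarefree monomial ideals; the Python sketch below encodes monomials as sets of variable indices and assumes the generators are already listed in a degree-increasing order.

```python
# For squarefree monomials u_j, the colon ideal (u_1,...,u_{i-1}) : (u_i) is
# generated by the monomials u_j \ u_i (set difference); it is generated by
# variables iff every u_j \ u_i is divisible by some u_k \ u_i of size one.
def has_linear_quotients(gens):
    """gens: squarefree monomials as frozensets of variable indices, in the chosen
    order (assumed weakly increasing in degree)."""
    for i in range(1, len(gens)):
        ui = gens[i]
        for j in range(i):
            diff_j = gens[j] - ui
            if not any(len(gens[k] - ui) == 1 and (gens[k] - ui) <= diff_j
                       for k in range(i)):
                return False
    return True

# Descending lex order of the degree-3 squarefree monomials in 4 variables
# (the dual of K_4): linear quotients hold.
k4_dual = [frozenset(s) for s in [{1, 2, 3}, {1, 2, 4}, {1, 3, 4}, {2, 3, 4}]]
print(has_linear_quotients(k4_dual))              # True

# The two alternating covers of a 4-cycle form a complete intersection: no order works.
c4_dual = [frozenset({1, 3}), frozenset({2, 4})]
print(has_linear_quotients(c4_dual))              # False
```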
We then require [4, Lemma 5.2]: Lemma 2.8. If I = (u 1 , . . . , u m ) is a monomial ideal of R = k[x 1 , . . . , x n ] that has linear quotients, and all the u i have the same degree, then I has a linear resolution.
We end this section by applying these ideas to edge ideals. Lemma 2.9. If I(G) is the edge ideal of a graph G, then I(G) ∨ [d] = ({x i 1 · · · x i d | {x i 1 , . . . , x i d } is a vertex cover of G of size d}). Proof. Since I(G) ∨ is generated by the minimal vertex covers, any squarefree monomial of degree d in I(G) ∨ corresponds to a set of d vertices which contains a minimal vertex cover, and thus, the d vertices also form a vertex cover of G. Lemma 2.10. If G = K n is a complete graph, then I(G) ∨ is componentwise linear. Proof. We show that for each d, I(G) ∨ [d] has linear quotients and hence a linear resolution, which means that I(G) ∨ is componentwise linear by Theorem 2.6.
The minimal vertex covers of K n are all subsets of V Kn of size n − 1. Hence, by Lemma 2.9, I(K n ) ∨ [d] = (0) if d < n − 1 or d > n. When d = n, I(K n ) ∨ [d] = (x 1 x 2 · · · x n ) is a principal ideal. These cases trivially have linear quotients. It thus suffices to show that I(K n ) ∨ [n−1] has linear quotients. Note that I(K n ) ∨ is minimally generated by all squarefree monomials of degree n − 1, and hence I(K n ) ∨ = I(K n ) ∨ [n−1] . Now I(K n ) ∨ is a squarefree Veronese ideal and thus has a linear resolution [10]. Hence I(K n ) ∨ [n−1] has linear quotients if one orders the monomials in descending lexicographic order.
Remark 2.11. A statement more general than Lemma 2.10 is true. Let j ≤ n, and let R = k[x 1 , . . . , x n ]. We can consider ideals whose components are all possible ideals generated by j of the n variables: I = ⋂ 1≤i 1 <···<i j ≤n (x i 1 , . . . , x i j ). We can view these ideals as the Alexander duals of either the Stanley-Reisner ideal of a simplicial complex with all possible (j − 2)-faces but no (j − 1)-faces or as the facet ideal of a simplicial complex with all possible (j − 1)-faces as its facets. The ideal I is minimally generated by all squarefree monomials of degree n − j + 1, and hence it is a squarefree Veronese ideal. Thus I has a linear resolution and is therefore componentwise linear.
For our last lemma we show that to determine if I(G) ∨ is componentwise linear, we may reduce to the case in which the graph G has no isolated vertices.
Lemma 2.12. Let G be a graph with no isolated vertices, and let H be the graph obtained from G by adding isolated vertices. If I(G) ∨ is componentwise linear, then I(H) ∨ is also componentwise linear. Proof. Note that the edge ideals of G and H have the same minimal generators, though they live in different rings. Thus I(G) ∨ and I(H) ∨ have the same minimal generators. By [7, Lemma 2.9], since I(G) ∨ is componentwise linear, I(H) ∨ is also.
Main theorem
In this section we prove the main result of this paper. Our proof hinges on the following result of Herzog and Hibi [9] that links the notions of componentwise linearity and sequential Cohen-Macaulayness. Theorem 3.1 ([9]). Let I ⊆ R be a squarefree monomial ideal. Then R/I is sequentially Cohen-Macaulay if and only if I ∨ is componentwise linear. We have arrived at our main result. Theorem 3.2. All chordal graphs are sequentially Cohen-Macaulay. Proof. Let G be a chordal graph. By Theorem 3.1 it suffices to show that I(G) ∨ is componentwise linear. To show I(G) ∨ is componentwise linear, we have based our proof on Faridi's proof of [4, Theorem 5.4] that the squarefree part of the facet ideal of a simplicial forest has linear quotients in each degree. By Theorem 2.6, we need to show that I(G) ∨ [d] has a linear resolution for each d. By Lemma 2.8, it suffices to show that I(G) ∨ [d] has linear quotients for each d. We induct on the number of vertices in the chordal graph. By Lemma 2.12, we may assume that G has no isolated vertices. Thus the first case to consider is when we have a graph G on two vertices connected by an edge. In this case G = K 2 , so I(G) ∨ [d] has linear quotients for each d by Lemma 2.10.
Suppose now that G is a chordal graph on n ≥ 3 vertices that has no isolated vertices (so G has at least two edges). If G = K n , then we are done by Lemma 2.10. So, we may assume that G is not complete. By Lemma 2.1 there is a vertex x ∈ V G such that the induced subgraph on {x}∪N(x) is a complete graph. (For example, take K to be any edge of G, and then x will be some vertex not incident to that edge.) Write N(x) = {y 1 , . . . , y t }.
Observe that G\{x} and G\(N(x) ∪ {x}) must be chordal. Note that it is possible that G\(N(x) ∪ {x}) is an isolated vertex (or vertices); in this case, its edge ideal is the zero ideal.
Now by Lemma 2.9, I(G) ∨ [d] is generated by the squarefree monomials that correspond to the vertex covers of G of size d. Note that any vertex cover {x i 1 , . . . , x i d } of G must cover the complete subgraph K t+1 formed by {x, y 1 , . . . , y t }. So each vertex cover must contain at least t vertices of {x, y 1 , . . . , y t }. If . (In the case when this subgraph is an isolated vertex, since there are no edges, the empty set is a vertex cover, as is any subset of vertices.) Let . . , y t }] be their respective edge ideals. From the above discussion, it follows that . . . , yA a , xB 1 , . . . , xB b ), with y = y 1 · · · y t , has linear quotients with respect to this order of the generators.
First note that because B 1 corresponds to a vertex cover of G\{x}, B 1 is divisible by at least t − 1 of {y 1 , . . . , y t }. (To see this, note that B 1 covers the complete graph K t formed by the y i s.) So there exists at most one y ℓ such that y ℓ ∤ B 1 .
Now suppose there exist monomials m and p and a j such that We can assume that mxB 1 and pyA j are squarefree. There are two cases to consider. Case 1. If y|B 1 , then B 1 = yB ′ 1 = y 1 · · · y t B ′ 1 . Since B 1 corresponds to a vertex cover of G\{x}, B ′ 1 corresponds to a vertex cover of size . Note that if a variable z|m, then z must be a variable of the ring R 2 ; otherwise mxB 1 would not be squarefree. So, for any variable z such that z|m, , and hence zyB ′ 1 = zB 1 ∈ (yA 1 , . . . , yA a ). Thus zxB 1 ∈ (yA 1 , . . . , yA a ) for any z that divides m.
Remark 3.3. The proof of Theorem 3.2 shows that chordal graphs are sequentially
Cohen-Macaulay regardless of the characteristic of k because the linear quotients property is independent of k. Faridi [5] showed that if I is any monomial ideal that is sequentially Cohen-Macaulay, then the polarization of I, a squarefree monomial ideal associated to I, is also sequentially Cohen-Macaulay. Thus, if I is any monomial ideal whose polarization is the edge ideal of a chordal graph, I must be sequentially Cohen-Macaulay.
Recall that a graph G is a forest if it has no cycles. A forest, therefore, is an example of a chordal graph, so we get: Corollary 3.4. If G is a forest, then G is sequentially Cohen-Macaulay.
Remark 3.5. In [4], Faridi proved that if I(∆) is the facet ideal of simplicial forest ∆, then R/I(∆) is sequentially Cohen-Macaulay. When the simplicial forest has dimension 1, then I(∆) is simply the edge ideal of a forest. So, our result can be viewed as a partial generalization of Faridi's result.
We close by describing how our Theorem 3.2 implies Herzog, Hibi, and Zheng's result characterizing Cohen-Macaulay chordal graphs. We begin with a lemma. Lemma 3.6. Let I be a squarefree monomial ideal. Then R/I is Cohen-Macaulay if and only if R/I is sequentially Cohen-Macaulay and I is unmixed. Proof. When R/I is Cohen-Macaulay, the result is obvious, so assume that R/I is sequentially Cohen-Macaulay and that I is unmixed. Let I ∨ be the Alexander dual of I. Then by Theorem 3.1, I ∨ is componentwise linear since R/I is sequentially Cohen-Macaulay. Moreover, since I is unmixed, I ∨ is generated in a single degree, meaning that I ∨ actually has a linear resolution. By [3, Theorem 3], R/I ∨∨ = R/I is Cohen-Macaulay.
Corollary 3.7. A chordal graph is Cohen-Macaulay if and only if its edge ideal is unmixed.
Proof. All chordal graphs are sequentially Cohen-Macaulay, so the corollary is an immediate consequence of Lemma 3.6.
Sequential Cohen-Macaulayness and nonchordal graphs
In the previous section we showed that if G is a chordal graph, then R/I(G) is sequentially Cohen-Macaulay. We now explore the situation in which G is not chordal. As we show, R/I(G) may or may not be sequentially Cohen-Macaulay.
We begin with a classification of the sequentially Cohen-Macaulay n-cycles. Villarreal shows in [16, Corollary 6.3.6] that the only Cohen-Macaulay cycles have three or five vertices. We prove that these are the only sequentially Cohen-Macaulay cycles as well. Note that this does not follow immediately from Villarreal's result because cycles need not be unmixed (in fact, Exercise 6.2.15 of [16] implies an n-cycle is unmixed if and only if n = 3, 4, 5, 7). Proposition 4.1. Let G be the n-cycle. Then G is sequentially Cohen-Macaulay if and only if n = 3 or n = 5. Proof. Since a 3-cycle is chordal, the result for n = 3 follows from Theorem 3.2, and the Cohen-Macaulayness is easy to see. When n = 5, the cycle is Cohen-Macaulay by [16, Corollary 6.3.6], and hence it is sequentially Cohen-Macaulay. Now suppose n = 2r for r ≥ 2. We have 2r edges to cover, and each vertex is incident to exactly two edges. Therefore the minimum cardinality of a vertex cover is r, and {x 1 , x 3 , . . . , x 2r−1 } (odd indices) and {x 2 , x 4 , . . . , x 2r } (even indices) are the two minimal vertex covers of that size. Thus (I(G) ∨ r ) = (x 1 x 3 · · · x 2r−1 , x 2 x 4 · · · x 2r ), which is a complete intersection of monomials of degree r ≥ 2, and therefore it does not have a linear resolution. Hence I(G) ∨ is not componentwise linear, and G is not sequentially Cohen-Macaulay. Suppose next that n = 2r + 1 for some r ≥ 3. A minimal vertex cover of G consists of alternating vertices plus one additional vertex, since alternating vertices leave a single edge uncovered; hence the lowest degree in which I(G) ∨ is generated is degree r + 1. Therefore there are 2r + 1 minimal generators of degree r + 1, one for each edge that gets double-covered when we add an adjacent vertex. Let J = (I(G) ∨ r+1 ). We show that J does not have a linear resolution. This implies that I(G) ∨ is not componentwise linear, and hence G is not sequentially Cohen-Macaulay.
To compute the Betti numbers of J, we use simplicial homology. Define a squarefree vector to be a vector with its entries in {0, 1}. Let M be a monomial ideal, and for a vector b ∈ N n let K b (M) = {squarefree vectors τ | x b−τ ∈ M}. This is the upper Koszul simplicial complex of M, defined, for example, in [13]. We can compute the N n -graded Betti numbers of M with the relation β i,b (M) = dim k H̃ i−1 (K b (M); k) from [13, Theorem 1.34]. Summing over all squarefree b with degree j gives β i,j (M).
We show that β 2,2r+1 (J) ≠ 0, which proves that J does not have a linear resolution when r ≥ 3. There is a single squarefree vector corresponding to degree 2r + 1, b = (1, . . . , 1), which is associated to the monomial m = x 1 · · · x 2r+1 . We have a chain complex C 2 (K b (J)) → C 1 (K b (J)) → C 0 (K b (J)) with boundary maps ∂ 2 and ∂ 1 . Below, we shall use the following notation: If (i 1 , . . . , i n ) is a vector with entries in {0, 1} corresponding to a face in our simplicial complex, we shall often write the face as [x j 1 , . . . , x jp ], where the j t are exactly the nonzero entries of (i 1 , . . . , i n ). For example, the face (1, 0, 0, 1, 0, 1) is written as [x 1 , x 4 , x 6 ].
Note that the basis of C s (K b (J)) consists of the s-dimensional faces [x i 0 , . . . , x i s ] of K b (J). All the faces with which we work have dimension at most two; we orient the faces so that if i 0 < i 1 < i 2 , we traverse [x i 0 , x i 1 ] and [x i 1 , x i 2 ] in the positive direction and [x i 0 , x i 2 ] in the negative direction. Similarly, we direct edges so that going from x i 0 to x i 1 is in the positive direction.
Initially, suppose that 2r + 1 > 7; we handle the case 2r + 1 = 7 separately. We claim first that m/x 1 x 4 x 7 ∉ J. If it were, then there would be a minimal vertex cover m ′ that divided it. But then x 2 x 3 x 5 x 6 x 8 x 2r+1 divides m ′ since x 1 , x 4 , and x 7 are missing, and m ′ is a cover. If 2r + 1 > 9, then to cover the remaining 2r − 9 edges not covered, we need at least r − 4 vertices. This means that deg m ′ ≥ 6 + r − 4 = r + 2, but all the minimal vertex covers in J have degree r + 1 since J = (I(G) ∨ r+1 ). Also, when 2r + 1 = 9, the minimal generators of J have degree five, and x 2 x 3 x 5 x 6 x 8 x 2r+1 is a minimal vertex cover of degree six and hence is not divisible by an element of J. Thus in either case, m/x 1 x 4 x 7 ∉ J.
Next we show that m/x 1 x 4 , m/x 4 x 7 , and m/x 1 x 7 are in J. To prove this, we need to show that a minimal vertex cover divides each of these monomials. In the first case, use x 2 x 3 x 5 x 7 · · · x 2r+1 ; in the second, x 2 x 3 x 5 x 6 x 8 x 10 · · · x 2r works; and in the last, use Thus f is in the kernel of ∂ 1 , and β 2,2r+1 (J) ≠ 0, so J does not have a linear resolution. When 2r + 1 = 7, we need a slightly different argument. One can compute that in this case, the Alexander dual of I(G) is and it has minimal graded free resolution Because of the second syzygy in degree seven, I(G) ∨ = (I(G) ∨ 4 ) does not have a linear resolution. Therefore G is not sequentially Cohen-Macaulay.
Remark 4.2. Proposition 4.1 is independent of the characteristic of k. Note that if k has prime characteristic, the graded Betti numbers of R/J are either the same as in characteristic zero, or they go up since the behavior is the same for the dimensions of the homology groups we computed. The dimensions of the homology groups in characteristic p > 0 are either the same as in characteristic zero, or they may increase if there is a p-torsion part introduced. See, for example, the latter part of the discussion of Universal Coefficients in [14,Chapter 9]. Thus we have β 2,2r+1 (J) > 0 for r > 2 over all k.
The case of a 5-cycle shows that the converse of Theorem 3.2 is false. There are many nonchordal sequentially Cohen-Macaulay graphs. We present two simple examples here to demonstrate that small changes in a graph that is not sequentially Cohen-Macaulay can give a graph with the property. For further investigation of this idea, see [6].
It is easy to check that I(H) ∨ is componentwise linear since it has a single generator in degree two and regularity three. Hence H is sequentially Cohen-Macaulay.
Example 4.4. For a slightly more complicated example, suppose that G is a 6-cycle, and we obtain the graph H by adding a seventh vertex and connecting it to two adjacent vertices of G. Thus I(H) = (x 1 x 2 , x 2 x 3 , x 3 x 4 , x 4 x 5 , x 5 x 6 , x 1 x 6 , x 1 x 7 , x 6 x 7 ), and . One can check in Macaulay 2 that I(H) ∨ is componentwise linear, so H is sequentially Cohen-Macaulay. We remark that tests in Macaulay 2 suggest that adding a triangle in this way to a cycle that is not sequentially Cohen-Macaulay may always produce a sequentially Cohen-Macaulay graph.
We round out this paper with a sufficient condition for a graph to fail to be sequentially Cohen-Macaulay. This condition makes use of another characterization of sequential Cohen-Macaulayness of quotients by monomial ideals due to Duval [2].
Recall that an element F ∈ ∆, where ∆ is a simplicial complex, is called a face of ∆. The dimension of a face F is dim F = |F | − 1. The dimension of ∆ is then dim ∆ = max F ∈∆ {dim F }. We write ∆ i to denote the subcomplex of ∆ whose maximal faces (the facets) are all the faces of ∆ of dimension i. We also need the following definition [15].
The complement of a simple graph G, denoted G c , is the graph with the same vertex set as G, but with edge set E G c = {{x i , x j } | {x i , x j } ∉ E G }, and the clique-complex (sometimes called the flag complex) of a simple graph H, denoted ∆(H), is the simplicial complex whose faces are the subsets of vertices on which the induced subgraph of H is a clique.
Theorem 4.7. Let G be a simple graph. Let H 2 be the set of isolated vertices of G c , and set H 1 = G c \H 2 (so G c is the disjoint union of H 1 and H 2 ). If #E H 1 − #V H 1 + 1 < 0, then I(G) is not sequentially Cohen-Macaulay.
Proof. Since I(G) is a squarefree monomial ideal, I(G) also corresponds to a simplicial complex via the Stanley-Reisner correspondence. In particular, I(G) = I ∆(G c ) where ∆(G c ) is the clique-complex associated to G c . Let ∆(G c ) 1 denote the pure 1-dimensional subcomplex of ∆(G c ). Now ∆(G c ) 1 is simply the 1-skeleton of G c , i.e., it is a graph. Specifically, ∆(G c ) 1 = H 1 . Since H 1 is a graph, the f -vector of H 1 is f (H 1 ) = (1, #V H 1 , #E H 1 ).
Using the relation between the f -vectors and h-vectors as given on page 58 of Stanley's book [15], we have h(H 1 ) = (1, #V H 1 − 2, #E H 1 − #V H 1 + 1). If #E H 1 − #V H 1 + 1 < 0, then h(H 1 ) has negative values. So R/I ∆(G c ) 1 is not Cohen-Macaulay by [15, Corollary 3.2] because the h-vector of a Cohen-Macaulay Stanley-Reisner ring must contain only nonnegative values (in fact, must be an O-sequence). Thus, by Theorem 4.5, I(G) = I ∆(G c ) is not sequentially Cohen-Macaulay.
Example 4.8. The above result gives an alternative justification for why the 4-cycle is not sequentially Cohen-Macaulay. Since G c = {{x 1 , x 3 }, {x 2 , x 4 }}, the graph G c has two edges, but 4 vertices, so I(G) cannot be sequentially Cohen-Macaulay since 2 − 4 + 1 < 0.
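As a small illustrative check (ours, not the paper's), the h-vector computation behind Theorem 4.7 can be reproduced directly for the 4-cycle of Example 4.8.

```python
# Check the h-vector criterion of Theorem 4.7 on the 4-cycle of Example 4.8.
# For a graph H1 the f-vector is (1, #V, #E) and, as in the text,
# h(H1) = (1, #V - 2, #E - #V + 1).
def h_vector(num_vertices, num_edges):
    return (1, num_vertices - 2, num_edges - num_vertices + 1)

# Complement of the 4-cycle: 4 vertices and the two edges {x1,x3}, {x2,x4}.
print(h_vector(4, 2))   # (1, 2, -1): the negative entry rules out sequential CM-ness
```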
Effect of Trainee Level on Surgical Time and Postoperative Complications of Anterior Cruciate Ligament Reconstruction
Purpose: The objective of this study was to investigate the association between trainee level and surgical time and postoperative complications of anterior cruciate ligament reconstruction (ACLR). Methods: A retrospective chart review of patients who underwent ACLR at an academic orthopaedic ambulatory surgery center collected demographic and clinical information, including the number of trainees present and trainee level. Unadjusted and adjusted regression analyses assessed the association between trainee number and level with surgical time (time from skin incision to closure) and postoperative complications. Results: Of 799 patients in this study operated on by one of five academic sports surgeons, 87% had at least one trainee involved. The average surgical time overall was 93 ± 21 minutes and by trainee level was 99.7 minutes (junior residents), 88.5 minutes (senior residents), 96.6 minutes (fellows), and 95.6 minutes (no trainees). Trainee level was significantly associated with surgical time (P = 0.0008), with increased surgical time in cases involving fellows (P = 0.0011). Fifteen complications (1.9%) were observed within 90 days of surgery. No notable risk factors of postoperative complications were identified. Conclusion: Resident trainee level does not have a notable effect on surgical time or postoperative complications for ACLR at an ambulatory surgery center, although cases involving fellows had longer surgical times. Trainee level was not associated with risk of postoperative complications.
R esident and fellow training is a fundamental part of medical education, 1 but its effect on patient care and efficiency is increasingly under scrutiny. 2 Studies have shown that teaching residents increases surgical time in certain obstetrics/gynecology, 3 general surgery, 4 and otolaryngology procedures. 5 Investigations of orthopaedic procedures have focused primarily on total joint arthroplasty, with evidence showing variable effects on surgical time and no change in postoperative complications or patient-reported outcome measures. 6,7 There is little research to date on the effect that trainee learning curves might have in different surgical settings.
As expectations for surgeon efficiency increase, especially in the ambulatory surgery center (ASC) environment, 8 it is important to understand how the learning curve of trainees effects surgical intervention and patient outcomes. The purpose of this study was to test the hypothesis that trainee level is associated with surgical time and postoperative complications of anterior cruciate ligament (ACL) reconstruction, one of the most common orthopaedic, ASC procedures. We hypothesize that there will be notable differences in mean surgical time across trainee levels.
Study Participants
With approval from the institutional review board, a retrospective chart review of patients who underwent ACL reconstruction over a 3-year period (June 1, 2015, through June 1, 2018) at a freestanding, academic orthopaedic ASC, by one of five members of an academic department of orthopaedic surgery with fellowship training in sports medicine was conducted. Patients undergoing ACL reconstruction with concomitant meniscal surgery, cartilage débridement/chondroplasty, and loose body removal were included, but those undergoing other ligament surgery, osteotomy, or a articular cartilage restoration procedure were excluded. Because this study was designed to focus on surgical time and short-term complications, patients had to have documented follow-up in the chart of at least 90 days. Of the patients initially eligible for inclusion, 83.7% (878/1049) had at least 90 days of follow-up. A total of 799 patients met criteria and were included in the cohort. Participants were grouped based on the type of surgery: isolated ACL reconstruction; ACL reconstruction and meniscal repair with or without additional procedures such as meniscal débridement, chondroplasty, or loose body removal; ACL reconstruction and meniscal débridement with or without additional procedures such as chondroplasty or loose body removal; and ACL reconstruction with chondroplasty or loose body removal. Revision ACL surgeries were coded with a separate variable indicating that the procedure was a revision.
Data Acquisition
Data were collected through automated extraction of the electronic medical record, in collaboration with the university perioperative systems team and Clinical Investigation Data Exploration Repository, as described in a previous study. 9 All surgeries were conducted at one of our sites, an outpatient surgery center. Queries were run on all ACL reconstructions during the study period, and a chart review was used to check for database accuracy and to finalize missing data. Patient and surgeon data, including attending surgeon, patient age, anesthesia type, length of surgery, and surgical time, were automatically extracted. Demographic information, including age, sex, and body mass index (BMI), were collected manually from the electronic medical record. Information about postoperative complications, including infection, DVT/PE, wound dehiscence/hematoma evacuation, arthrofibrosis, and graft failure, was collected from Epic medical records (Verona). Records were reviewed through the date of last follow-up for each patient, and the length of follow-up from the date of surgery was recorded.
Trainee Level
We reviewed the surgical report for each patient and collected the total number of trainees present (including medical students, residents, and fellows). The names of all trainees present were also obtained from the surgical note; trainee names were subsequently converted to postgraduate years (PGYs) by consulting the university's residency class rosters. Trainee level was categorized as no trainee, junior residents (PGY1-3), senior residents (PGY4-5), or fellow. In cases where multiple trainees were present, trainee level was assigned according to the most senior trainee in the surgery. The database did not include data on whether advanced practice providers were involved in any of the surgeries because they are typically not used in this setting. Procedures with only medical students were classified as no trainee.
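As a hedged illustration (not from the study's code), the category assignment just described can be sketched in Python; the function name and category labels are ours.

```python
def classify_case(pgys, has_fellow=False):
    """Assign the trainee-level category for one case from the trainees present.

    pgys: postgraduate years of resident trainees (ints); medical students are
    excluded, so an empty list with no fellow counts as 'no trainee'.
    The most senior trainee present determines the category.
    """
    if has_fellow:
        return "fellow"
    if not pgys:
        return "no trainee"
    return "senior resident (PGY4-5)" if max(pgys) >= 4 else "junior resident (PGY1-3)"

print(classify_case([2, 5]))                 # -> senior resident (PGY4-5)
print(classify_case([], has_fellow=True))    # -> fellow
```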
Surgical Outcomes
Outcomes of interest included surgical time and postoperative complications. Surgical time was defined as the time from skin incision to closure and was documented in the database. Postoperative complications, including infection, DVT/PE, wound dehiscence/hematoma evacuation, arthrofibrosis requiring surgical débridement, and graft failure, were obtained from a manual chart review. Complications were categorized into three time frames: 0 to 30 days, 31 to 90 days, or more than 90 days postoperatively. The maximal length of follow-up for the cohort was up to 4 years after the initial surgery.
Statistical Analysis
We conducted unadjusted and adjusted regression analyses to investigate the association of trainee number and trainee level with surgical time and postoperative complications. Descriptive statistics were used for demographic data, and one-way analysis of variance (continuous variables) and the chi square or Fisher exact test (categorical variables) were used to assess any patient demographic differences between trainee levels. Unadjusted bivariable analysis was used to compare surgical times across trainee levels.
Multivariable Analysis
Linear regression analysis was conducted with 10 independent variables to determine the significance and effect size of each variable. These variables included age, sex (male and female), BMI, history of diabetes, smoking history, procedure category, whether the procedure was a revision surgery, attending surgeon, number of trainees, and trainee level. Age, BMI, and total number of trainees were continuous variables; all other variables were categorical. Our outcome of interest was surgical time. All factors included in analysis were identified a priori. General linear models with parameter estimates and effect size (partial eta squared) were created for surgical time.
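For illustration only, a minimal Python sketch of such an adjusted linear model is shown below (the study's analysis was conducted in SAS); the data file and column names are hypothetical placeholders, not the study's actual data layout.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical sketch of the adjusted linear model for surgical time.
df = pd.read_csv("aclr_cases.csv")   # placeholder file with one row per case

model = smf.ols(
    "surgical_time ~ age + C(sex) + bmi + C(diabetes) + C(smoking)"
    " + C(procedure_category) + C(revision) + C(attending_surgeon)"
    " + n_trainees + C(trainee_level)",
    data=df,
).fit()
print(model.summary())
```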
Logistic regression analysis was conducted to determine which variables were markedly associated with the presence or absence of a complication. The same 10 independent variables described earlier were included in the logistic regression models. Models were created for all complications, and all complications that occurred less than 90 days postoperatively. Logistic models were also created for each individual complication type: infection, DVT/PE, wound dehiscence/hematoma evacuation, arthrofibrosis, and graft failure. A significance level of 0.1 was required for model entry, and a significance of 0.05 was needed to remain in the model. The Firth penalized score procedure was used to control for quasi-complete separation in rare event analysis for all logistic models. [9][10][11] Penalized odds ratios and 95% confidence intervals are reported. All statistical tests were conducted with a significance threshold of α = 0.05, and effect sizes were estimated with partial eta squared. Analysis was conducted with SAS (Cary).
Competing Interests
There were no financial, institutional, or general competing interests.
Results
There were 799 patients included in this study; most of these patients (87%) had at least one trainee involved in their surgery. The mean age for the cohort was 25.8 years (SD = 12.0), and 51.2% of patients were men. The mean BMI was 25.7 (SD = 5.0). The mean length of follow-up was 238 (SD = 123, range: 90 to 1072) days after surgery. Of surgeries that included a trainee, 78.9% had one trainee, 8.7% had two trainees, and 1% had three trainees. No significant patient demographic differences were observed across trainee levels (Table 1); however, trainee level was associated with surgical time (P < 0.0001), attending surgeon (P < 0.0001), and the total number of trainees involved in the case (P < 0.0001).
The mean surgical time was 93 minutes (SD = 21 minutes). Eighty-five complications (10.6%) were found in the cohort (Table 2), 15 (1.9%) of which occurred less than 90 days after surgery. Longer term complications included arthrofibrosis and graft failure, with some complications occurring years after the initial surgery.
Trainee Level and Surgical Time
Unadjusted bivariable analysis showed a significant difference in mean surgical time across trainee levels (P < 0.0001) (Figure 1), with senior residents having the shortest mean surgical time. The mean surgical time across trainee levels was 95.6 minutes with no trainees, 99.7 minutes with junior residents, 88.5 minutes with senior residents, and 96.6 minutes with fellows.
Trainee Level and Postoperative Complications
Short-term complications were rare in our cohort, with 1.9% of patients having complications less than 90 days after surgery. Trainee level was not an independent risk factor of short-term complications; the only notable risk factor of short-term complications was concurrent meniscal repair (OR = 4.6, 95% CI 1.3 to 16.3).
The overall global complication rate for our cohort was 10.6%, including longer term complications such as arthrofibrosis and graft failure. Trainee level was not a notable risk factor of global complications. Multivariable logistic regression modeling of risk factors of all postoperative complications did not show any significant risk factors (Table 4). Logistic regression models for each individual postoperative complication showed that trainee level was not a notable risk factor. Patient characteristics found to be notable risk factors of individual postoperative complications were age for DVT/PE, BMI for infection, and female sex for arthrofibrosis (Table 5).
Discussion
Trainee level is associated with surgical time, as surgeries took longer when fellows were involved, but not postoperative complications for ACL reconstructions. Differences in mean surgical time across trainee levels are likely attributable to different trainee levels operating with varying surgeons and participating in different procedure types. The lack of variance in surgical time between different levels of residents is consistent with a previous study on total knee arthroplasty. 6 The factors that have a notable effect on mean surgical time included procedure type, revision surgeries, patient age, patient BMI, and attending surgeon. We found that procedure type and attending surgeon have the biggest effect on differences in surgical time. Differences in surgical time based on the attending surgeon could be attributed to the surgical technique, overall experience, case difficulty, and the relative amount of time spent teaching intraoperatively.
Operating as a surgical trainee is an essential step in the path to becoming an orthopaedic surgeon; however, few studies have assessed the effect of trainees on surgical time and postoperative complications in orthopaedics. Prior research has focused primarily on the effect of trainees on surgical time in nonorthopaedic specialties. [3][4][5] However, studies have not explored how trainee level could affect surgical time or postoperative complications. Previous studies on factors affecting surgical time suggest that increased surgeon experience, team familiarity, and surgical volume could lead to shorter operating times. [12][13][14][15] We also found that trainee level is not associated with an increased risk of postoperative complications. The most common short-term complications in our cohort included infection and wound dehiscence; neither of these complications was associated with trainee level or the number of trainees present in the procedure. These findings add to the growing body of evidence refuting the "July effect," at least in the operating room. The July effect describes an increase in complications and infections when medical trainees transition between years. [16][17][18] Trainee level was not a global risk factor of complications, nor for any of the individual complication types studied.
It is important to emphasize that this study was designed to assess the effect of trainee level on surgical time and short-term complications, rather than outcomes, and therefore, our findings are suggestive rather than definitive, particularly in light of an average follow-up of 283 days. Nevertheless, our findings that patient-specific risk factors of complications, such as increased BMI and infection risk 19 or increased age and DVT risk, 20 are supported by previous studies. We also found an increased risk of arthrofibrosis among women, confirming two previous studies. 21,22 Our follow-up was admittedly short for assessing graft failure, which was associated with younger male patients with lower BMI in our current analysis. The association between increased age and decreased graft failure is consistent with previous studies. 23 However, there is currently contradictory evidence regarding the effect of patient sex and BMI on the risk of ACL graft failure. [23][24][25][26] More studies are needed to determine whether our findings hold up with longer follow-up or reflect the limitations and biases of our methodology.
Limitations
Limitations of our investigation include study generalizability because we examined ACL reconstructions conducted at a single academic ASC. Most of our patients were young, healthy, and undergoing an elective procedure. As a result, our findings may not be broadly generalizable to patients with multiple comorbidities or to patients undergoing other surgeries. Most trainees in our study were senior residents (PGY4 or higher), which limited our ability to identify the effect that more junior trainees may have on surgical time or postoperative complications. We did not include other patient-specific and surgery-specific variables, such as graft choice, that could have affected surgical time. Additional research on factors that could affect the difference in surgical time, such as autonomy given to trainees, amount of intraoperative teaching, and surgeon experience, is needed.
Conclusion
Our analysis of ACL reconstructions conducted in an academic ASC showed that resident trainee level is not markedly associated with increased surgical time or rates of postoperative complications. The presence of a fellow was associated with increased surgical time; however, other factors (including patient BMI, patient age, additional meniscal procedures, and attending surgeon) had a larger effect on surgical time. These findings suggest that this procedure is not negatively affected by medical education. Whether this holds up for other similar outpatient orthopaedic procedures in this setting is speculative and deserves investigation.
Extracranial Germ Cell Tumors in Children: Ten Years of Experience in Three Children’s Medical Centers in Shanghai
Simple Summary A few large series of cases of extracranial germ cell tumors (GCTs) are reported in Asian children. We present a retrospective analysis of large-scale data on pediatric extracranial GCTs from multiple centers in China. The pathological subtypes and primary sites of tumors showed a preference for occurrence at different ages. For example, sacrococcygeal tumors mostly occur in infants, while mediastinal tumors are common in adolescents. In the context of treatment strategies including various platinum-based chemotherapy regimens, the 5-year overall survival and event-free survival rates were 94.13% and 82.33%, respectively. There is no difference in overall survival rate among different chemotherapy regimens for children. The independent clinical factors associated with poor prognosis were a primary tumor located in the mediastinum and alpha-fetoprotein levels greater than 10,000 ng/mL. In conclusion, the incidence rate and clinical features of extracranial GCTs in children in China are similar to those reported in Europe and the United States. The age distribution of various pathological types and primary sites reflects the tumor cell origin from mismigrated primordial germ cells (PGCs). For malignant germ cell tumors with a primary site in the mediastinum, more effective treatment regimens should be explored. Abstract Objective: The aim was to describe the clinical features of extracranial germ cell tumors (GCTs) in pediatrics and study the clinical risk factors related to survival for malignant germ cell tumors (MGCTs) in order to optimize therapeutic options. Methods: The clinical data of children with extracranial GCTs in three children's medical centers in Shanghai were retrospectively analyzed. Results: In total, 1007 cases of extracranial GCTs diagnosed between 2010 and 2019 were included in this study, including teratomas (TERs) 706 (70.11%) and MGCTs 301 (29.89%). There were more than twice as many TER cases as MGCT cases. Approximately 50% of children with GCTs were <3 years old (43.39% for TERs, 67.13% for MGCTs). GCTs in children of different ages show differences in tumor anatomical locations and pathological subtypes. The 5-year event-free survival (EFS) and overall survival (OS) of all patients with MGCTs were 82.33% (95% CI, 77.32%, 86.62%) and 94.13% (95% CI, 90.02%, 96.69%), respectively. The multivariate Cox regression analysis identified a primary site in the mediastinum and alpha fetoprotein (AFP) levels ≥10,000 ng/mL as independent adverse prognostic factors (p < 0.0001, χ2 = 23.6638; p = 0.0225, χ2 = 5.2072). There were no significant differences in OS among children receiving various chemotherapy regimens, such as the BEP, PEB, JEB and other regimens (VBP/VIP and AVCP/IEV) (p > 0.05). Conclusions: The clinical features of GCTs in Chinese pediatrics are similar to those reported in children in Europe and America. The age distribution of pathological types and primary sites in GCTs reflects the developmental origin of type I and type II GCTs transformed from mismigrated primordial germ cells (PGCs). Optimizing the current platinum-based chemotherapy regimens and exploring the treatment strategies for MGCTs of the mediastinum are future research directions.
Introduction
Extracranial germ cell tumors (GCTs) are rare pediatric cancers that account for approximately 3.5% of all tumors in children under the age of 15 years [1]. However, in adolescents, this proportion increases to 14% [1]. GCTs are a heterogeneous group in age, primary sites, histological features, and prognosis [2]. GCTs arise from primordial germ cells (PGCs) and their derivatives, which evolve into relatively benign teratomas (TERs) and various malignant germ cell tumors (MGCTs) in different environments provided by distinct mismigration pathways [3][4][5], and ultimately there are two common pathological types in pediatrics: type I and type II [6]. Since the 1970s, the survival of patients with MGCTs has dramatically improved with the introduction of cisplatin-based chemotherapy regimens, thereby forming a treatment strategy that combines surgery and chemotherapy [7,8]. There are still some incurable patients despite the application of various platinum-based chemotherapy protocols. More consensus among different international groups on the relative importance of tumor site, tumor stage, age of onset, and elevation in tumor marker levels is needed. The European and American databases and multicenter research in South Africa have reported the clinical prognostic factors [9][10][11]. However, the multicenter data on MGCTs for children in Asia are rarely presented.
We summarized the clinical data of children with extracranial GCTs in three large children's medical centers in Shanghai in the past decade. These children come from all over the country and can represent the general situation of pediatric GCTs in China. We described the overview of pediatric GCTs and the distribution of onset age, tumor location, and pathological types and analyzed clinical factors related to survival, to reveal the relationship between clinical age distribution characteristics and tumor pathological origin, as well as the adverse prognostic factors of GCTs under the current treatment status.
The Diagnosis and Staging of Patients
The clinical data of patients who were enrolled in Shanghai Children's Medical Center, Shanghai Children's Hospital, and Children's Hospital of Fudan University from January 2010 to December 2019 and diagnosed with extracranial germ cell tumors were reviewed, excluding children with uncertain diagnoses and incomplete records. An imaging examination was performed before the operation, and serum AFP and human chorionic gonadotropin (HCG) levels were measured before treatment. The COG criteria were used for staging at diagnosis [12,13]. In all the centers, the diagnosis of GCTs was mainly based on pathological confirmation, and a few cases were diagnosed according to elevated serum AFP levels and imaging (8 patients with sacrococcygeal tumors). Age at diagnosis, sex, tumor markers, pathology, imaging, and treatment method were analyzed.
Treatment, Monitoring, and Follow-Up
Gonadal primary tumors always required complete surgical organ ablation. Testicular tumors were resected via the inguinal route. Omental and peritoneal lesions were removed with ovarian tumors. Giant MGCTs in the sacrococcygeal region were treated first with neoadjuvant chemotherapy to reduce the tumor size and then completely removed together with the coccyx.
For MGCTs, the AFP levels were monitored in each chemotherapy cycle during treatment, every two months in the first year after treatment, and every three months in the second and third years after treatment. Children with TERs were followed up with imaging every 3 months for 3 years.
Statistical Methods
Statistical analyses were performed and figures were developed using R (version 4.1.2). Overall survival (OS) was defined as the time from diagnosis until death or last patient contact. Event-free survival (EFS) was defined as the time from diagnosis until disease progression, second malignant neoplasm, death, or last patient contact, whichever occurred first. Univariate and multivariate Cox regression analyses were conducted to identify prognostic factors. Survival curves were constructed using the Kaplan-Meier method, and log-rank tests were used to analyze differences in OS among the chemotherapy groups. All tests were two-sided, and p < 0.05 was considered statistically significant.
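For readers who want to reproduce this kind of analysis, the sketch below illustrates the Kaplan-Meier, log-rank, and multivariate Cox workflow in Python using the lifelines package. The column names (time_years, event, regimen, mediastinal_primary, afp_over_10000, stage_iv) and the input file are hypothetical placeholders; the original analysis was carried out in R.

```python
# Minimal sketch of the survival workflow (Kaplan-Meier, log-rank, Cox),
# assuming a cohort table with hypothetical column names.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("mgct_cohort.csv")  # hypothetical file: one row per patient

# Kaplan-Meier estimate of overall survival for the whole cohort
kmf = KaplanMeierFitter()
kmf.fit(df["time_years"], event_observed=df["event"], label="All MGCTs")
print(kmf.survival_function_at_times([5.0]))  # 5-year OS estimate

# Log-rank test comparing two chemotherapy regimens
jeb = df[df["regimen"] == "JEB"]
peb = df[df["regimen"] == "PEB"]
res = logrank_test(jeb["time_years"], peb["time_years"],
                   event_observed_A=jeb["event"], event_observed_B=peb["event"])
print("log-rank p =", res.p_value)

# Multivariate Cox regression with candidate prognostic factors
cph = CoxPHFitter()
cph.fit(df[["time_years", "event", "mediastinal_primary", "afp_over_10000", "stage_iv"]],
        duration_col="time_years", event_col="event")
cph.print_summary()
```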
Ethical Approval
The study was approved by the Ethics Review Committee of Shanghai Children's Hospital, Shanghai Jiao Tong University (protocol code 2023R069-E01, approved on 7 July 2023).
Clinical Characteristics and Age Distribution by Pathological Type and Anatomical Location
In the past decade, there were 1007 newly diagnosed extracranial GCTs in children in the three hospitals, including 706 TERs (70.11%) and 301 MGCTs (29.89%). For TERs, the median age at diagnosis was 68.4 months, and the number of TERs in females was 2.8 times that in males. The largest number of TER cases occurred in infancy, while a second peak emerged at 9-10 years of age (Figure 1A). The most common primary site of TERs was the ovary (39.86%, 281), followed by the sacrococcygeal region (20.53%, 145), testis (14.46%, 102), abdomen (mainly the retroperitoneum, with a few cases originating from abdominal organs: 13.05%, 92), thorax (mainly the mediastinum: 7.94%, 56), pelvic cavity (pelvic areas other than ovarian and typical sacrococcygeal tumors: 2.83%, 20), and others (nasopharynx, neck, hip, femur, and unknown: 1.42%, 10). The pathological type in most patients was mature teratoma (MT, 86.77%), and 12.88% of patients had immature teratoma (IT).
For the 301 MGCTs, the median age at diagnosis was 41 months, and the case numbers were comparable in males and females. The peak incidence was under 3 years old, followed by a steep increase starting at puberty (Figure 1B). The testicles, sacrococcygeal region, and ovaries were the main primary sites of MGCTs, accounting for 37.21%, 19.93%, and 16.61% of tumors, respectively. Yolk sac tumors (YST) and mixed germ cell tumors were the common pathological types of MGCTs, accounting for 63.46% and 30.56%, respectively. The characteristics of the GCTs are summarized in Table 1.
The distribution of anatomical locations and histopathological subentities varied according to age at diagnosis, and age was divided into several groups (Figure 2). Whether for teratomas or MGCTs, approximately 50% of cases occurred before the age of 3 years. In terms of the primary site of extragonadal GCTs, sacrococcygeal, pelvic, and retroperitoneal tumors mostly occurred before the age of 3 years, while mediastinal tumors mainly emerged after puberty. Pathological types such as YST, TERs, dysgerminoma/seminoma, and non-seminoma also exhibited age specificity (Figure 2).
Table 1. The characteristics of GCTs.
Tumor Markers in MGCTs
Among the cases undergoing survival analysis, 246 MGCTs were tested for AFP at the initial diagnosis, of which 29 had documented values of "greater than 3000 ng/mL", making it impossible to determine accurately whether their levels exceeded 10,000 ng/mL. Therefore, the AFP analysis was performed on 217 patients. The Cochran-Mantel-Haenszel test was used to explore the association between AFP levels and the clinical features of MGCTs. High AFP levels (>10,000 ng/mL) were common in children with YST and malignant mixed germ cell tumors and were closely associated with stage IV tumors. The AFP levels in patients with dysgerminoma were all normal (Table 2).
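The Cochran-Mantel-Haenszel approach mentioned above can be illustrated with the stratified 2x2 table machinery in statsmodels. The sketch below is only an illustration: the counts are placeholders, not the study data, and the stratification variable is assumed to be histology.

```python
# Sketch of a Cochran-Mantel-Haenszel-style test: association between high AFP
# (>10,000 ng/mL) and stage IV disease, stratified by a third variable (e.g., histology).
# The counts below are illustrative placeholders, not the study data.
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

# One 2x2 table per stratum: rows = AFP high / AFP normal, cols = stage IV / stage I-III
tables = [
    np.array([[12, 30], [3, 55]]),   # stratum 1 (e.g., YST) -- placeholder counts
    np.array([[8, 20], [2, 40]]),    # stratum 2 (e.g., mixed GCT) -- placeholder counts
]

st = StratifiedTable(tables)
result = st.test_null_odds()          # CMH test of a common odds ratio equal to 1
print("CMH statistic:", result.statistic, "p-value:", result.pvalue)
print("Pooled odds ratio:", st.oddsratio_pooled)
```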
Survival and Prognostic Indicators of MGCTs
A total of 279 children with complete follow-up notes were included in the survival analysis. The follow-up time ranged from 1.33 to 13.24 years, and the mean duration of follow-up was 5.3 ± 2.1 years. For the whole cohort of MGCT patients, the 5-year EFS and OS were 82.33% (95% CI, 77.32-86.62%) and 94.13% (95% CI, 90.02-96.69%), respectively. According to the univariate analysis, children with tumors originating in the gonads had significantly better survival than those with tumors originating in extragonadal regions (EFS 91.38% vs. 73.78%, OS 97.70% vs. 88.31%, p < 0.01); the survival of children with mediastinal tumors was lower than that of children with nonthoracic primary tumors (EFS 41.26% vs. 85.69%, OS 63.07% vs. 95.11%, p < 0.01). There was a significant difference in survival between children with complete and partial tumor resection (EFS 88.38% vs. 39.93%, OS 95.59% vs. 76.56%, p < 0.01). The EFS and OS for patients with stage I, II, III, and IV MGCTs showed a decreasing trend, with EFS rates of 95.27%, 93.06%, 73.63%, and 70.43% (p < 0.01) and OS rates of 100%, 93.31%, 87.69%, and 84.64% (p = 0.04), respectively. There was no significant difference in OS between metastatic and nonmetastatic patients, nor between patients with different pathological types (YST, mixed germ cell tumors, and other pathological subtypes). There were no significant differences in either EFS or OS between genders, between patients aged ≥11 years and <11 years, or between patients undergoing tumor resection before and after chemotherapy (Supplementary Materials, Table S1). The multivariate Cox regression analysis identified a primary tumor in the mediastinum and an AFP level > 10,000 ng/mL as independent adverse prognostic factors (χ² = 13.4262, p < 0.01; χ² = 5.2766, p = 0.0216).
Chemotherapy for MGCTs
Children with stage I gonadal MGCTs whose postoperative AFP level decreased as expected were managed with observation; the other children with MGCTs received various chemotherapy regimens, including BEP (20 patients), PEB (19 patients), JEB (97 patients), and other regimens (AVCP/IEV, VBP/VIP, or VAC/PVB protocols; 99 patients). Three additional chemotherapy courses were administered after the normalization of tumor markers, giving a total of n + 3 cycles, except for patients receiving the JEB protocol, in which patients with stage I-II disease underwent 4 cycles and patients with stage III-IV disease underwent 6 cycles. There was no significant difference in OS among the children receiving the various chemotherapy regimens (Supplementary Materials, Table S1).
Discussion
GCTs are rare neoplasms with heterogeneous clinical features. There are few reports on the characteristics of and risk factors for GCTs in Asian children. We present large-scale data on pediatric extracranial GCTs in China, demonstrating distinct clinical patterns that most likely reflect biological differences.
Prognosis-Related Factors
Platinum-based chemotherapy has significantly improved the outcomes of children with MGCTs. Despite good outcomes overall, the likelihood of a cure for certain sites and histologic conditions is less than 50% [14]. The treatment of patients at a high risk of recurrence still needs to be improved. Several studies have explored the clinical factors conferring a survival disadvantage on pediatric MGCT patients, including an AFP level > 10,000 ng/mL, primary extragonadal tumors, an age greater than 11 years, clinical stage IV, and residual disease after surgery [15][16][17]. We obtained similar conclusions, except for age older than 11 years. This may be because there are fewer children over 14 years old in this data set, as the maximum age for admission to children's hospitals in Shanghai is usually 14. In the univariate analysis covering the various types of MGCTs in this group, there was no difference in OS between metastatic and non-metastatic cases, perhaps because, under the current treatment strategy, even children with metastasis, such as those with MGCTs originating from the sacrococcygeal region with lung metastasis, do not have poor survival. There was also no difference in OS between patients with stage IV and patients with stage I-III overall; rather, the prognosis of stage IV patients was significantly worse than that of stage I-II patients but not stage III patients, because children with local infiltration (stage III), for example mediastinal MGCTs with pleural infiltration, also have a poor prognosis (Supplementary Table S1). Comparing survival between metastatic and nonmetastatic cases is meaningful only within a homogeneous disease group.
Multivariate Cox regression analysis, which considers the impact of multiple clinical features on survival, identified an AFP level > 10,000 ng/mL (discussed below) and a primary tumor of the mediastinum as independent adverse prognostic factors, despite prompt cisplatin-based chemotherapy followed by aggressive thoracic surgery. Mediastinal MGCTs confer a survival disadvantage, which is similar to previous reports [16]. A primary mediastinal tumor is a poor prognostic factor in the IGCCC prognostic system for the diagnosis and treatment of adult GCTs [18,19]. In children, there is no consensus or guideline that includes a mediastinal primary site as an adverse prognostic factor for MGCTs, and more attention is given to adolescent patients. This may be because most primary mediastinal MGCTs occur in adolescents and young adults [20], so few cases of mediastinal MGCTs have been specifically studied in pediatric clinical practice [21,22]. According to our research and the literature, a primary mediastinal tumor should be regarded as an important adverse prognostic factor when planning treatment.
AFP
Clinically, a rise in AFP above the age-related normal level is considered elevation. AFP levels are elevated in most children with MGCTs. Our research shows that high AFP levels are related to YST and mixed germ cell tumors and to high tumor stage, and that an AFP level > 10,000 ng/mL is an independent adverse prognostic factor. Previous studies on elevated AFP levels as a poor prognostic factor are not consistent [12,23]. Data from the Children's Oncology Group (COG) and the Children's Cancer and Leukemia Group have shown high AFP levels to be a poor prognostic factor [10]. The International Germ Cell Cancer Collaborative Group (IGCCCG) classification has identified an AFP level > 10,000 ng/mL as one of the factors determining poor prognosis [24]. As a marker of GCTs, AFP is an important indicator for diagnosis, tumor burden assessment, and monitoring for recurrence during follow-up. Children with MGCTs show elevated serum AFP levels that are usually more than 3.5 times the normal upper limit [25]. In our MGCT data, almost all abnormal AFP values were more than 5 times the upper limit of the institution's normal range, and a value 5 times higher than the upper limit of normal is usually an indicator of tumor recurrence [26].
Platinum-Based Chemotherapy Regimen
The treatment strategy for children with MGCTs combines surgery and chemotherapy and is derived from the adult BEP chemotherapy protocol [27]. Subsequently, the PEB protocol, an improved regimen for children, was developed by the COG through a series of clinical trials. Considering the side effects of cisplatin in children, the JEB scheme, in which cisplatin is replaced by carboplatin, has been adopted in the UK and optimized through the clinical trials GCII [21] and GCIII [22]. Other chemotherapy schemes include the VAC or VAC/PVB protocols in the USA [28,29], the VBP/VIP protocols in France [30], and the AVCP/IEV protocol in China. All the above chemotherapy regimens have achieved good survival in their respective applications. We compared the PEB, JEB, and other chemotherapy regimens and found no significant difference in OS, consistent with the retrospective comparison of the PEB and JEB protocols by the International Federation of Germ Cell Tumors [31]. A prospective study on which protocol is better (cisplatin or carboplatin) is being conducted by the COG (NCT03067181). The treatment of our patients included various platinum-based chemotherapy regimens and showed high survival rates overall. However, there were significant differences in survival among subgroups. The survival rate of children with stage IV disease was significantly lower than that of children with stage II disease, and it was even lower in children with mediastinal MGCTs. Further research should focus on reducing the intensity of chemotherapy for low-risk patients and exploring more effective chemotherapy for high-risk patients. The compressed PEB protocol was developed for the treatment of stage II ovarian MGCTs to reduce short-term and long-term side effects and improve quality of life [32,33]. Moreover, it is necessary to explore more effective chemotherapy regimens for patients with clinical risk factors, especially for those with tumors located in the mediastinum.
Age Distribution of GCTs Reflects Pathological Origin
Overall, the age of onset was mostly within 3 years, with another peak at puberty, which is similar to previous reports [34]. The incidence, primary site, and pathological types were age-related, as reported in the literature [35]. YST and mixed germ cell tumors, which may include YST components, accounted for more than 80% of the pathological types of MGCTs and occurred mainly in prepubertal children, the vast majority under 3 years of age, while all other histological entities, such as germinoma/dysgerminoma and choriocarcinoma, were mainly seen in children over 10 years old. These results are also similar to those in the European and American multicenter databases [36].
The age distribution of the primary sites and pathological subtypes of GCTs is prominent in our study and is illustrated in Figure 2. The anatomical sites of the primary tumors from the bottom to the top of the body (the sacrococcygeal region, pelvic cavity, retroperitoneum, and mediastinum) correspond to a preferred age of onset that likewise runs from younger to older children (newborns, infants, young children, and adolescents), precisely reflecting the origin of the tumor cells, which develop from PGCs arrested in their migration along the midline of the body during human embryonic development [6]. The type I entities, YST and TERs, mainly occur before the age of 6 years, while the type II entities, dysgerminoma/seminoma and non-seminomatous tumors (NSTs), are more common after puberty, consistent with the new broad classification of GCTs [3]. At a certain time during embryonic development, mismigrated PGCs settle at particular anatomical locations along their migration path and encounter diverse niches, thereby transforming into different GCTs [37]. The distinct age-specific patterns by anatomical site and pathological subclass of GCTs mirror this complex, development-related aetiology.
Asian children with GCTs exhibit clinical features similar to those of European and American children. However, in terms of the most common primary site of GCTs, our data show that testicular cases are the most common among MGCTs, while ovarian cases are the most common among TERs; ovarian cases are also the most common among GCTs overall (32.97%), even after removing dermoid cysts from the TERs. This differs from the view, based on data from Europe and America, that the sacrococcygeal region is the most common site. Perhaps there is a difference between Asian and European or American populations [38]. The distinct age preferences of the primary sites and pathological subtypes may be attributed to the unique pathological origin of GCT tumor cells. Under current treatment strategies, the survival of MGCTs originating from the mediastinum is still poor.
Conclusions
The clinical characteristics of extracranial GCTs in Asian children are similar to those reported in Europe and America. The introduction of platinum-based chemotherapy regimens has greatly improved the survival rate of children with GCTs. However, some children still have a poor prognosis. Serum AFP greater than 10,000 ng/mL and MGCTs originating from the mediastinum are independent poor prognostic factors; therefore, multicenter prospective collaborative research is needed to explore more effective treatment strategies for these patients.
This data set lacks cases over the age of 14, so the results cannot reflect the true status of children with GCTs in this age group. Because this is a retrospective study, some results, such as the similar survival of children treated with the various chemotherapy schemes, need to be confirmed in a prospective cohort study.
Figure 1. Frequency of occurrence at different ages. (A) The early peak of TERs occurred within one year of age, namely the infancy and neonatal period; another peak was during school age. (B) The first peak of MGCTs emerged before the age of three, with a steep increase at puberty.
Figure 2. The distribution of GCTs in various pediatric age groups by primary site and pathological subtype. Generally, children are divided into several age groups: newborns (0-1 M), infancy (1 M-1 Y), toddler age (1-3 Y), preschool age (3-6 Y), school age (6-10 Y), puberty (10-14 Y), and teenage (14-18 Y). Extragonadal GCTs originating from the sacrococcygeal region, pelvic cavity, vagina, abdomen, and mediastinum, mirroring the migration trajectory of PGCs during embryonic development, corresponded to ages at diagnosis ranging in order from newborn to puberty. YST and TERs were dominant before the age of 6 years, whereas dysgerminoma/seminoma and some rare pathological types such as choriocarcinoma and embryonal carcinoma mainly appeared after puberty. Gonadal GCTs usually occurred in children after 1 year of age; testicular GCTs mainly occurred from infancy to preschool age, with a single tumor component of YST or TER, and ovarian tumors mostly occurred in children from toddler to school age, with dysgerminoma, mixed germ cell tumors, or TERs.
Table 1. The characteristics of GCTs.
Table 2. Correlation of AFP in MGCTs with pathology and staging.
|
2023-11-17T16:32:47.987Z
|
2023-11-01T00:00:00.000
|
{
"year": 2023,
"sha1": "c0a76743e800a32ef632ff51c638073ca1fdc5f8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/15/22/5412/pdf?version=1699964763",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "89694659c30a6c6e0b6c0b3c6ec99fdd21518037",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
220424791
|
pes2o/s2orc
|
v3-fos-license
|
Gravitational-wave Signature of a First-order Quantum Chromodynamics Phase Transition in Core-Collapse Supernovae
A first-order quantum chromodynamics (QCD) phase transition (PT) may take place in the protocompact star (PCS) produced by a core-collapse supernova (CCSN). In this work, we study the consequences of such a PT in a non-rotating CCSN with axisymmetric hydrodynamic simulations. We find that the PT leads to the collapse of the PCS and results in a loud burst of gravitational waves (GWs). The amplitude of this GW burst is $\sim30$ times larger than the post-bounce GW signal normally found for non-rotating CCSN. It shows a broad peak at high frequencies ($\sim2500-4000$ Hz) in the spectrum, has a duration of $\lesssim5 {\rm ms}$, and carries $\sim3$ orders of magnitude more energy than the other episodes. Also, the peak frequency of the PCS oscillation increases dramatically after the PT-induced collapse. In addition to a second neutrino burst, the GW signal, if detected by the ground-based GW detectors, is decisive evidence of the first-order QCD PT inside CCSNe and provides key information about the structure and dynamics of the PCS.
INTRODUCTION
Quarks are confined in hadrons such as protons and neutrons at low temperatures and densities. Nonetheless, free quarks should exist in the early universe when the temperature is extremely high (k_B T ≳ 150 MeV) [1]. They may also exist in the cold and superdense interior of compact stars with a density above the nuclear saturation density (ρ_sat ≈ 2.6 × 10^14 g cm^-3) [see, e.g., [2][3][4]]. Moreover, a first-order quantum chromodynamics (QCD) phase transition (PT), i.e., a hadron-quark PT, may take place in the protocompact star (PCS) produced by a core-collapse supernova (CCSN) [5][6][7] or a binary neutron-star merger [8,9]. Such a PT can result in a more compact PCS and even the collapse of the PCS to a black hole (BH). For CCSNe, this can provide an additional energy source for the explosion [7] and leads to interesting observational consequences, such as a second neutrino burst [5] and the production of rare r-process elements [10].
A galactic CCSN is a yet-undiscovered candidate gravitational-wave (GW) source for ground-based GW detectors [11]. The GWs from a CCSN, in combination with the neutrino and electromagnetic signals, will boost our understanding of the CCSN explosion mechanism [12]. Sophisticated multi-dimensional simulations have predicted the GW signals emitted by rotating CCSNe, the oscillations of the proto-neutron star (PNS), and the standing accretion shock instability [see, e.g., [13][14][15][16][17]]. In the meantime, the collapse of a neutron star (NS) to a quark star has been studied in [18][19][20], and GW emission is also found in this scenario. However, in these studies the collapse is triggered artificially by using different equations of state (EOS) for the construction of the NS and for the hydrodynamic simulation. It is unclear whether the PT-induced collapse of the PCS in CCSNe can leave an imprint in the GW signal. In this Letter, we demonstrate the effects of a first-order QCD PT on the GW signal from a non-rotating CCSN, with two-dimensional simulations and a simplified hybrid EOS including hadrons and quarks.
Equation of state
To study a first-order QCD PT in CCSNe, we use a hybrid EOS from Refs. [5,21,22]. This EOS employs the STOS EOS [23] for the hadronic phase and the MIT bag model EOS [2] for the quark phase, with the Gibbs construction for the mixed phase [24]. The bag constant has a value of B = (165 MeV)^4. The Gibbs construction enforces global charge neutrality; therefore, it allows different charge fractions for the hadronic and quark portions in the mixed phase. This leads to a smooth transition from the hadronic phase to the mixed phase (at around ρ_sat for the composition of a CCSN core), and the pressure increases continuously. A pure quark phase is realized at densities higher than ∼3.5 ρ_sat, which is similar to the values of the more sophisticated hybrid EOSs [7,9]. More information about the hybrid EOS can be found in Refs. [5,21,23].
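For rough orientation, the MIT bag model in its simplest form (massless, non-interacting quarks) relates pressure and energy density through the bag constant alone. The sketch below evaluates this idealized relation; it is only an illustration, not the full hybrid EOS with the Gibbs-constructed mixed phase used in the paper, and the example energy density is an arbitrary value.

```python
# Idealized MIT bag-model relation for massless, non-interacting quarks:
#   P = (epsilon - 4B) / 3,
# evaluated purely for orientation; the paper's hybrid EOS additionally
# includes the STOS hadronic phase and a Gibbs-constructed mixed phase.
HBARC_MEV_FM = 197.327          # hbar*c in MeV fm

def bag_pressure(epsilon_mev_fm3, bag_mev=165.0):
    """Pressure (MeV/fm^3) for a given energy density (MeV/fm^3)."""
    B = (bag_mev / HBARC_MEV_FM) ** 3 * bag_mev   # (165 MeV)^4 converted to MeV/fm^3
    return (epsilon_mev_fm3 - 4.0 * B) / 3.0

print(bag_pressure(500.0))       # example energy density of 500 MeV/fm^3
```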
For this hybrid EOS, there exists a stable branch of the hot third family of compact stars [4] with a pure quark core (see Fig. 2 in [25]) for matter properties similar to those of the CCSN core (entropy ≈ 3 k_B/baryon and lepton number fraction ∼0.4). The maximum mass of the third family (∼1.50 M_⊙) is larger than that of the second family, whose core is in the mixed phase. As we will see, this unique property is important for the dynamics of the PCS.
FLASH simulation
We carry out CCSN simulations in two dimensions with the assumption of axisymmetry, using the FLASH code [26] with an "M1" scheme for the neutrino transport [27]. We take the 12-M_⊙, solar-metallicity, presupernova progenitor s12 from [28] as the initial conditions. To apply general-relativistic approximations, gravity is calculated with the Case A formulation of [29]. Unlike previous simulations with the FLASH code, we include the lapse function in the Euler equations to mimic the time-dilation effect in general relativity (see the modified Euler equations and some numerical tests in [30], also see [31]). This is found to affect the GW frequency significantly after the PT-induced collapse. A cylindrical grid with adaptive mesh refinement is used. It extends out to 2 × 10^9 cm in radius and ±2 × 10^9 cm along the cylindrical axis, with a finest resolution of 150 m. We extract the plus GW strain h_+ from our Newtonian simulation using the standard quadrupole formula [32].
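For orientation, quadrupole extraction in post-processing amounts to scaling the second time derivative of the mass quadrupole moment by G/(c^4 D). The sketch below shows this step in Python for a generic I_zz(t) time series; the file name and column layout are hypothetical, and the geometric prefactor used here (the common 3/2 factor for an axisymmetric source viewed from the equator) is one convention, not necessarily the exact normalization of the paper.

```python
# Post-processing sketch: convert the (trace-free) mass quadrupole moment I_zz(t)
# from an axisymmetric simulation into the plus polarization h_+(t) for an
# equatorial observer at distance D, via one common form of the quadrupole formula
#   h_+ = (3/2) (G / (c^4 D)) d^2 I_zz / dt^2 .
import numpy as np

G = 6.674e-8          # cgs
C = 2.998e10          # cm/s
KPC = 3.086e21        # cm

t, izz = np.loadtxt("quadrupole_zz.dat", unpack=True)   # s, g cm^2 (hypothetical file)

d_izz = np.gradient(izz, t)
dd_izz = np.gradient(d_izz, t)              # second time derivative

D = 10.0 * KPC
h_plus = 1.5 * G / (C**4 * D) * dd_izz

print("max |h_+| =", np.abs(h_plus).max())
```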
Dynamics
To show the consequences of the PT, we run two simulations with the same settings, one using the hybrid EOS and the other using the STOS EOS. The resulting dynamics are shown in the upper panel of Fig. 1. The iron core of the s12 model collapses to above ρ_sat and bounces at t_b ≈ 151 ms for both EOSs. At t_b, the core of the hybrid EOS has already entered the mixed phase with a central quark mass fraction X_q = 18%. However, because the hybrid EOS transitions smoothly from the pure hadronic phase to the mixed phase, the PCS remains in the mixed phase with a low X_q shortly after t_b. The bounce shock turns into an accretion shock, stalls at ∼150 km at t_b + 50 ms, and begins receding inward.
During the accretion phase, the central density ρ_c of the PCS with the hybrid EOS is always larger than that of the PNS with the STOS EOS, and X_q continuously increases. The mass of the PCS grows and reaches the maximum of the second family for the hybrid EOS at ∼t_b + 286 ms. The PCS becomes unstable against gravity and experiences a second dynamical collapse. The central density ρ_c grows to 1.5 × 10^15 g cm^-3 (∼6 ρ_sat) and the PCS core enters the pure quark phase (X_q = 1).
The pure quark core bounces in less than 1 ms at t_2b as the PCS enters the new stable branch of the third family, and ρ_c drops to ∼5 ρ_sat. This bounce shock expands quickly and explodes the outer envelope. At the end of the simulation, the mean shock radius extends out to ∼1500 km with an explosion energy of ∼2.0 × 10^50 erg. The PT-induced collapse is associated with a second neutrino burst with more electron antineutrinos than electron neutrinos (lower panel of Fig. 1), which is consistent with the results of the spherically symmetric simulation in [5].
Gravitational waves
In Fig. 2 we show the GW waveforms h + (t) up to 400 ms after the first bounce (∼ t 2b + 113 ms) extracted from both simulations. We assume that the distance from the source is 10 kpc. The signal from t b to ∼ t b + 50 ms comes from the prompt convection behind the stalling accretion shock. It is followed by an episode of continuous emission from the oscillations of the PCSs [33]. There is no qualitative difference between the two waveforms until t 2b and the cumulative emitted GW energies are quantitatively similar (bottom of Fig. 2). In accord with the more compact PCS, the peak GW frequency for the hybrid EOS is always higher than that for the STOS EOS.
Around t_2b, the PT-induced collapse results in a burst of GW emission with a much larger amplitude than those of the earlier episodes. In Fig. 2, the 10 ms window around t_2b is stretched in time to show this burst clearly; it is associated with the PT-induced collapse and bounce. The maximum amplitude of h_+ reaches 10^-20, which is ∼30 times larger than those of the other episodes. The energy carried by this burst is ∼4.6 × 10^-7 M_⊙ c^2, which is ∼3 orders of magnitude more than the GW energy of the other episodes (and also than that of the signal for the STOS EOS). Our numerical test shows that this GW burst results from asphericities developed between t_b and t_2b [30]. After this burst, the amplitude damps quickly to the same level as before t_2b. This part of the signal should come from the oscillations of the PCS with a pure quark core.
A time-dependent spectrum (or spectrogram) is useful for understanding the emission mechanisms of GWs, as well as for designing efficient detection strategies. Figure 3 shows the spectrogram of the GW signal extracted from the simulation using the hybrid EOS. We use a Kaiser window with a width of 25 ms for the short-time Fourier transform, except around t_2b where a width of 10 ms is used. Before t_2b, the spectral evolution is similar to that of the STOS EOS (see [30]). The GW peak frequency is continuously increasing, in accord with the evolution of the Brunt-Väisälä frequencies f_BV (Eq. (3) in [30]) at densities between 10^11 and 10^12 g cm^-3 (blue band in Fig. 3), which is approximately the PCS surface [34]. Around t_2b, the GW burst has a much higher frequency (∼2500-4000 Hz). This is related to the change of the dominant GW emission region from ∼10-20 km to ∼5-10 km (see [30]), which is inside the quasi-static core during the second collapse and bounce. During this time, f_BV peaks at 2900 Hz near a radius of 10 km (ρ ≈ 8 × 10^13 g cm^-3) and is closer to the observed GW frequency.
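The spectrogram construction described above is a standard short-time Fourier transform; the sketch below reproduces the idea in Python with scipy, using a Kaiser window of the quoted 25 ms width. The Kaiser shape parameter, the sampling rate inferred from the time axis, and the file name are assumptions for illustration.

```python
# Sketch of the time-frequency analysis: short-time Fourier transform of h_+(t)
# with a 25 ms Kaiser window (the shape parameter beta is an assumed value).
import numpy as np
from scipy import signal

t, h_plus = np.loadtxt("hplus_10kpc.dat", unpack=True)   # hypothetical waveform file
fs = 1.0 / np.median(np.diff(t))                          # sampling rate in Hz

nperseg = int(0.025 * fs)                                  # 25 ms window
f, tt, Zxx = signal.stft(h_plus, fs=fs, window=("kaiser", 12.0),
                         nperseg=nperseg, noverlap=nperseg // 2)

power = np.abs(Zxx) ** 2
peak_track = f[np.argmax(power, axis=0)]                   # peak GW frequency vs. time
print(peak_track[:10])
```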
Shortly after t_2b, the GW peak frequency drops back to ∼1000 Hz and continues to increase afterwards, albeit at a much faster rate. We find that f_BV has a much larger spread inside the PCS, and the GW spectral evolution does not match the track of f_BV. After t_b + 300 ms, the peak frequency of the dominant GW emission is closer to f_BV at densities ∼5 × 10^12 g cm^-3. Nevertheless, due to the much larger ρ_c (≳4 times) and compactness of the PCS for the hybrid EOS (Fig. 1), the peak GW frequency is 2-3 times higher than that for the STOS EOS. The GW spectral evolution after t_2b contains information about the structure and evolution of the PCS with a pure quark core, from which one may infer the properties of the quark EOS (e.g., the bag constant).

Figure 4 (caption): Characteristic strains of the GW signals from t_b to t_b + 400 ms with the STOS (blue) and hybrid (orange) EOSs. Also shown are the h_char(f) of the GW signal for the hybrid EOS in the time intervals from t_b to t_b + 280 ms (green) and between t_2b − 3 ms and t_2b + 7 ms (red). The black line is the sensitivity spectrum of Advanced LIGO [39].
Detection prospect
To estimate the detectability of the GW signals, we calculate the dimensionless characteristic GW strain (h char ) [35] assuming a distance of 10 kpc, and compare it with the sensitivity of Advanced LIGO in Fig. 4. Below ∼ 1000 Hz, h char are quantitatively similar for the hybrid and STOS EOSs. At higher frequencies, h char for the hybrid EOS shows a broad peak between ∼ 2500−4000 Hz, which is also above the detector's sensitivity curve. This part is mainly contributed by the burst associated with the PT-induced collapse, seen from the comparison between the entire h char and that between t 2b − 3ms and t 2b + 7ms.
We calculate the single-detector signal-to-noise ratio (SNR) of the GW waveforms assuming the optimal orientation using Eq. (1.1) in [35]. If a confident detection requires an SNR of 8, then for the hybrid EOS, inclusion (exclusion) of the burst yields a detection radius of 22 (12) kpc. The detectability of the burst is not significantly better because the current detectors are optimized for GW signals at ∼ 10 − 1000 Hz. The amplitude of h char and detector noise increase together by a factor of 10 from 100 Hz to 3000 Hz (Fig. 4). Future-generation detectors, such as the Einstein Telescope [36] and the Cosmic Explorer [37], may consider improving the sensitivity at several kHz if such signals are targeted (also for BH forming CCSNe [38]).
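For reference, the optimal single-detector SNR used in such estimates follows the standard matched-filter expression SNR² = 4 ∫ |h̃(f)|² / S_n(f) df. The sketch below evaluates it numerically for a sampled waveform and a tabulated one-sided noise power spectral density; the file names and the simple FFT normalization are assumptions, and this is an order-of-magnitude illustration rather than the exact prescription of Ref. [35].

```python
# Sketch: optimal matched-filter SNR of a sampled waveform h_+(t) against a
# detector noise power spectral density S_n(f) (one-sided), via
#   SNR^2 = 4 * integral_0^inf |h~(f)|^2 / S_n(f) df .
import numpy as np

t, h = np.loadtxt("hplus_10kpc.dat", unpack=True)         # hypothetical waveform
f_psd, sn = np.loadtxt("aligo_psd.dat", unpack=True)      # hypothetical PSD table [1/Hz]

dt = np.median(np.diff(t))
freqs = np.fft.rfftfreq(len(h), d=dt)
h_tilde = np.fft.rfft(h) * dt                              # continuous-FT normalization

sn_interp = np.interp(freqs, f_psd, sn, left=np.inf, right=np.inf)
integrand = 4.0 * np.abs(h_tilde) ** 2 / sn_interp
snr = np.sqrt(np.trapz(integrand, freqs))
print("optimal SNR =", snr)
```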
In this work, the GW waveforms h_+(t) are extracted from 2D simulations with the assumption of axisymmetry. In various studies [e.g., 16,17,40], the amplitude of h_+ in 3D simulations can be 5-10 times smaller than in their 2D counterparts, which lowers the expectation for GW detection. Future study is needed to explore 3D effects on the PT-induced collapse and observables. We expect that the burst associated with the PT-induced collapse would still be present in 3D simulations, but with smaller amplitudes.

DISCUSSION

We present here a specific case in which the PT-induced collapse results in a bounce shock that successfully explodes the mantle. However, for other progenitors [41] or other hybrid EOSs [5], the star may fail to explode and collapse into a black hole (BH) in two scenarios. First, the PCS at the onset of the PT-induced collapse may exceed the maximum mass that the hybrid EOS permits and directly collapse into a BH. In this case, the GW (and neutrino) burst reported here will be absent. Nevertheless, the existence of free quarks in the PCS might be inferred from the shortening of the BH formation time [6,42], though this is subject to the uncertainties of the pure hadronic EOS.
In the second scenario, the second bounce shock is launched but the PCS still collapses into a BH at a later time. In this case, the burst of GWs and neutrinos associated with the PT-induced collapse will be present, followed by the shut-off of both signals at BH formation. This is an interesting possibility to be explored. Moreover, in both cases of BH formation, if the iron core is rapidly rotating, the inclusion of PT can produce different BH ring-down signals compared to those for a hadronic EOS due to the different free-fall time of the PCS, which is found in binary NS-merger simulations [8].
CONCLUSIONS
In this Letter, we demonstrate the effects of a first-order QCD PT on the GW signals from a non-rotating CCSN. We find that the PT results in the collapse of the PCS at ρ_c ∼ 3.5 ρ_sat, and the core radiates a loud GW burst in ≲5 ms. The amplitude of this burst reaches h_+ = 10^-20 assuming a source distance of 10 kpc and is larger by a factor of ∼30 than the other episodes of GW emission (and generally than those using a hadronic EOS). The spectrum of this burst shows a broad peak at ∼2500-4000 Hz, which is higher than that generally found for CCSNe without the PT-induced collapse. The peak GW frequency following this burst is also much higher (>1 kHz) than that for the hadronic EOS due to the large compactness of the PCS with a pure quark core. Therefore, the PT inside a CCSN can be inferred from the GW detection. However, the louder burst is not necessarily easier to detect because of the increasing noise level at high frequencies for current ground-based GW detectors. Nevertheless, the loud, high-frequency burst of GW radiation over a short period of time may be a prime target for future searches for coherent wave burst signals [43].
The hybrid EOS transitions from the hadronic phase to the mixed phase at a low density (∼ρ_sat). Ref. [7] simulated CCSNe in spherical symmetry using a more physical and complex hybrid EOS (DD2F-SF, transition density ∼2.4 ρ_sat), and the dynamics of the second collapse are similar to our results. Therefore, we expect that the properties of the GW burst (i.e., the frequency and amplitude) associated with the PT-induced collapse should still be present with a more physical EOS, such as DD2F-SF [7,9] or a Chiral Mean Field model [8], which are consistent with the maximum NS mass measurement [44]. A natural extension is to employ such EOSs in multi-dimensional CCSN simulations. Moreover, progenitor dependence, such as the initial mass and rotation, should be studied to acquire a more comprehensive picture of the effects of a first-order QCD PT on the GWs from CCSNe. In particular, we expect that if the iron core is rapidly rotating before collapse, the GW burst associated with the PT-induced collapse will be much louder, which may allow the detection of sources farther away.

Supplemental Material

The FLASH code solves the Newtonian Euler equations. To mimic the deeper gravitational well in general relativity (GR), we use the effective gravitational potential with the Case A formalism of [29], which has been routinely tested in core-collapse supernova (CCSN) simulations [27,42,45].
In this work, we extend this by including the GR time-dilation effect directly in the Euler equations. In the modified Euler equations, ρ, v, P, and τ are the rest-mass density, velocity, pressure, and kinetic plus internal energy density of the fluid; Φ is the effective GR gravitational potential; and α = exp(Φ) is the lapse function. The lapse function is included in the fluxes and source terms. The additional source term in the momentum equation, αP/c^2 ∇Φ, is verified by the derivation of the GR hydrodynamic equations with the metric g_µν = diag(−α^2, 1, r^2, r^2 sin^2 θ) in spherical symmetry [46]. This source term is important for maintaining mechanical equilibrium in the hydrostatic regions. The energy equation is derived through the same procedure and has no additional source term. The consistency of the equations can be checked with a polytropic equation of state, in which the pressure and specific internal energy are analytic functions of the rest-mass density. To test the performance of the GR approximations, we simulated the oscillations of a compact star constructed using the Case A potential and the hybrid EOS. We chose an initial central density of 1.5 × 10^15 g cm^-3 to mimic the protocompact star (PCS) after the second bounce in our CCSN simulations. For reference, we also performed a simulation with the GR1D code using a fully relativistic TOV star with the same central density as the initial conditions. The initial conditions are different for the FLASH and GR1D simulations because the fully relativistic TOV solution is not a stable configuration for the FLASH simulations (also see Appendix A of [27]). The results of the central density evolution for 5 ms are shown in Fig. S1. The small-amplitude oscillations originate from the imperfect mapping of the initial conditions onto the computational grids of the hydrodynamic simulations. This mapping leads to slightly different equilibrium states (central densities) in different codes. The frequencies of the PCS oscillations are ∼2500 Hz, 4500 Hz, and 3300 Hz for the simulations using GR1D, FLASH, and FLASH with the inclusion of the lapse function, respectively. This test shows that our implementation of the lapse function can approximate the GR time-dilation effect to some extent.
For the CCSN simulations in the main text, we find that the lapse function reduces the gravitational-wave (GW) frequency significantly, especially after the second collapse when the PCS is extremely compact. For example, the frequency of the burst around the second collapse is ∼ 2500 − 4000 Hz (∼ 4000 − 5000 Hz) for the simulation with (without) the lapse function. We expect full GR simulations will further reduce the GW frequency.
Test of potential numerical artifacts
We perform a test simulation to evaluate the contribution of potential numerical artifacts from the computational domain to the GW signal during the PT-induced collapse and bounce. The test simulation starts from a spherically symmetric compact star constructed using the STOS EOS, while the hybrid EOS is used for the 2D hydrodynamic simulation. The sudden reduction of pressure results in the collapse of the compact star. We compare the results of this test simulation to those of the PT-induced collapse in the CCSN simulation in Fig. S2. In the test simulation, the amplitude of the GW strain remains less than 0.2 × 10^-21 until 2 ms after bounce, which is at least an order of magnitude smaller than that of the burst in the CCSN simulation. This indicates that the GW burst in the CCSN simulation results from the asphericity already developed during the episode between the first bounce and the second collapse, not from artifacts of the dynamical collapse itself.
Resolution dependence
We perform a set of simulations with different resolutions for the episode of the second collapse and bounce in the main text. The simulations start from ∼10 ms before the second collapse, and the finest resolutions are 300 m, 150 m, and 75 m, respectively. We find that the PCS structure after the second bounce agrees well for the different resolutions. In Fig. S3, we plot the GW waveform and spectrum for the loud burst around the second collapse. Although the GW signals do not match exactly in phase, the amplitude and frequency of the burst agree quantitatively well for the different resolutions. Because the high-resolution simulations have less numerical dissipation, the amplitude of h_+ damps more slowly in the high-resolution runs (75 m and 150 m) than in the low-resolution run (300 m) after t_2b + 2 ms.
Spatial distribution of GW strain

Figure S4 shows the spatial distribution of the tangential velocity (v_θ, left half) and GW strain (right half) at 2 ms before (left) and 0.5 ms after (right) the second bounce. Here we assume the distance from the source is D = 10 kpc. The results are from the simulation with a finest resolution of 150 m. The distribution of v_θ roughly shows the convective regions inside the PCS, which change from ∼10-20 km to ∼5-10 km. The distribution of h_+ is correlated with that of v_θ, which suggests a connection between the GW emission and convective motions inside the PCS. The contribution to h_+ from the regions outside 15 km is generally less than ∼10% during the burst around the second collapse.
Spectrogram for the STOS EOS

Figure S5 shows the spectrogram of the GW signal extracted from the simulation using the STOS EOS. This spectrogram is similar to those of CCSN simulations using a hadronic EOS in the literature, e.g., [38]. The green bands show the time-dependent Brunt-Väisälä frequency (f_BV) at densities between 10^11 and 10^12 g cm^-3, where f_BV is independent of the specific position. We estimate f_BV by the Newtonian Brunt-Väisälä frequency multiplied by the lapse function; here c_s is the speed of sound and the other symbols are the same as in Eq. 1. The evolution of the peak GW frequency roughly follows that of f_BV for the 400 ms after bounce.
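The Brunt-Väisälä expression itself is not reproduced in the text above. As a hedged illustration, the sketch below evaluates one common Newtonian convention for N² from angle-averaged radial profiles and applies a lapse factor; the exact expression used in the paper (its Eq. (3)/Eq. 1) may differ in detail, and the profile file, column layout, and the normalization of the potential by c² are assumptions.

```python
# Sketch: estimate the Brunt-Vaisala frequency f_BV from angle-averaged radial
# profiles, using one common Newtonian convention,
#   N^2 = (dPhi/dr) * [ (1/(Gamma1*P)) dP/dr - (1/rho) drho/dr ],
# multiplied by a lapse alpha = exp(Phi/c^2) (the paper writes alpha = exp(Phi),
# i.e. with the potential already normalized; that normalization is assumed here).
import numpy as np

r, rho, p, gamma1, phi = np.loadtxt("pcs_profile.dat", unpack=True)  # cgs, hypothetical

c = 2.998e10
dphi_dr = np.gradient(phi, r)
dp_dr   = np.gradient(p, r)
drho_dr = np.gradient(rho, r)

n2 = dphi_dr * (dp_dr / (gamma1 * p) - drho_dr / rho)
alpha = np.exp(phi / c**2)
f_bv = alpha * np.sign(n2) * np.sqrt(np.abs(n2)) / (2.0 * np.pi)   # Hz, signed

# e.g. f_BV in the density window 1e11-1e12 g/cm^3 quoted in the text
mask = (rho > 1e11) & (rho < 1e12)
print(f_bv[mask])
```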
|
2020-07-10T01:01:01.320Z
|
2020-07-09T00:00:00.000
|
{
"year": 2020,
"sha1": "8083d2cdb8954234fa867d22518ff2af8c94b6df",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PhysRevLett.125.051102",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "d1afa51316b846b01b648958eadc738aa33102c1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
}
|
248592517
|
pes2o/s2orc
|
v3-fos-license
|
Bronchiolar adenoma with unusual presentation: Two case reports
BACKGROUND The clinicopathological features, immunohistochemical characteristics, and genetic mutation profiles of two unusual cases of distal bronchiolar adenoma are retrospectively analyzed, and the relevant literature is reviewed. CASE SUMMARY Case 1 was a 63-year-old female patient who had a mixed ground-glass nodule composed of cells with bland morphology, visible cilia, and bilayer structures in focal areas. Immunohistochemical staining for P63 and cytokeratin (CK)5/6 revealed the lack of a continuous bilayer structure in most areas, and no mutations were found in the epidermal growth factor receptor, anaplastic lymphoma kinase, ROS1, Kirsten rat sarcoma, PIK3CA, BRAF, human epidermal growth factor receptor-2 (HER2), RET, and neuroblastoma RAS genes. Case 2 was a 58-year-old female patient who presented with a solid nodule in which most cells were medium sized with pale, homogeneous nuclear chromatin; local cells showed atypia, and cilia were found locally. Immunohistochemical staining for P63 and CK5/6 showed no expression of these proteins in the areas with bland cell morphology, whereas the atypical cells showed a bilayer structure. The same nine genes as above were analyzed, and a HER2 gene mutation was identified. CONCLUSION Some unresolved questions remain to be answered to determine whether such a lesion is a benign adenoma or part of the process of malignant transformation from a benign adenoma of the bronchial epithelium. Furthermore, whether lesions with atypical bilayer structures are similar to atypical hyperplastic lesions of the breast remains to be elucidated. Moreover, clarity is needed on whether these lesions can be called atypical bronchiolar adenoma and whether they are precursors of invasive lesions. Future studies should examine the diagnostic significance of HER2 gene mutation as a prognostic indicator.
INTRODUCTION
Bronchiolar adenoma (BA) clinically presents as a benign or potentially malignant tumor. It is thought to originate from the bronchiolar epithelium and shows a spectrum of cell differentiation in a bilayer arrangement of luminal epithelial cells and basal cells. BA is expected to gain more widespread recognition in the 2021 edition of the World Health Organization classification of thoracic tumors [1,2]. The histological variants of BA can be distinguished as the "classic" ciliated muconodular papillary tumor (CMPT) (proximal type) and "non-classic" CMPT (distal type). The histological features of CMPT include a bilayer structure composed of a continuous basal cell layer and a luminal cell layer (comprising varying proportions of mucinous cells, ciliated cells, Clara cells, and/or type II alveolar epithelial cells) [3,4]. BA often exhibits only focal or no papillary architecture and contains variable numbers of ciliated and mucinous cells, with some lesions entirely lacking one or both of these components [3,4]. A recent study revealed the involvement of potential gene mutations that may be responsible for the neoplastic nature of BA [5]. Mutations in the anaplastic lymphoma kinase (ALK), Kirsten rat sarcoma (KRAS), BRAF, AKT1, and epidermal growth factor receptor (EGFR) genes have been identified in BA, and these genes are considered driver oncogenes that eventually lead to the development of neoplasms [6][7][8][9]. Meanwhile, in a recent study on BA, Chang et al [2] identified BRAF V600E mutations (38%), EGFR exon 19 deletions (10%), EGFR exon 20 insertions (10%), KRAS mutations (24%), and HRAS mutations (5%), thus supporting a truly neoplastic process in BA.
Cases of single-or double-layer bronchial adenoma with atypical bronchiolitis are rare. Here, we report two cases with BA confirmed by imaging, morphology examination, immunohistochemical characteristics, and genetic tests.
Chief complaints
Case 1: A 63-year-old female patient was found to have pulmonary nodules on examination at a local hospital in September 2020.
Case 2:
A 58-year-old female patient underwent chest computed tomography (CT) examination at our hospital on January 19, 2021 and was identified as having nodules in the right upper lobe of the lung.
History of present illness
Case 1: Upon examination at a local hospital in September 2020, the patient was found to have pulmonary nodules; she did not report having cough or expectoration, chest pain, chest tightness, or other symptoms. No further specific diagnosis was made or treatment advised. Since the discovery of the nodules, the patient has been lucid and mentally healthy with normal diet and sleep. The laboratory reports for urine and stool were normal, and there were no significant changes in weight.
Case 2:
The patient underwent chest CT examination at our hospital on January 19, 2021 and was identified as having nodules in the right upper lobe of the lung. Except for occasional cough and phlegm, she showed no other signs or symptoms.
History of past illness
The patients had no significant previous medical history.
Personal and family history
The patients had no relevant personal or family history.
Physical examination
Case 1: After admission to the hospital, the patient's temperature was 36.6 ℃, heart rate was 58 bpm, respiratory rate was 16 breaths per minute, and blood pressure was 112/59 mmHg.
Case 2:
The patient's temperature was 36.9 ℃, heart rate was 67 bpm, respiratory rate was 16 breaths per minute, and blood pressure was 120/67 mmHg.
In both cases, chest examination found that the trachea was midline, the thorax was not deformed, breath sounds of both lungs were slightly coarse, and no obvious dry or moist rales were heard.
Case 2:
The biochemical indicators showed the following results: CEA was 3.22 ng/mL, CA125 was 9 U/mL, NSE was 11.21 ng/mL, CK19 was 1.87 ng/mL, SCC was 0.7 ng/mL, and pro-GRP was 27.65 pg/mL, all of which were normal.
Imaging examinations
Case 1: On December 7, 2020, the findings of thoracic enhanced CT performed at our hospital revealed bronchitis, right lower pulmonary bullae, and subpleural nodules and pleural traction in the lower lobe of the right lung of the patient ( Figure 1A).
Case 2:
The patient underwent chest CT examination at our hospital on January 19, 2021 and was identified as having nodules in the right upper lobe of the lung ( Figure 1B).
Surgical findings
Case 1: A small subpleural nodule was found in the lower lobe of the right lung. The nodule was approximately in diameter and did not involve the visceral pleura. A wedge-shaped resection of the nodule was performed.
Case 2:
After performing preoperative puncture and locating the right upper lobe nodule, a solid nodule with a diameter of 0.7 cm was palpated around the lobe. The nodule of the right upper lobe was excised by a wedge-shaped incision.
Gross pathological examination
Case 1: A piece of grayish red lung tissue was removed by wedge resection; the tissue measured 9 cm × 3.5 cm × 2 cm. The pleura was grayish red and smooth; a grayish white nodule was found by multisection incision. The nodule measured 0.6 cm × 0.5 cm × 0.3 cm. The texture of the nodule was similar to that of normal salivary glands. It showed a clear boundary attached to the surrounding normal lung tissue, which was away from the anastomosis line, and the remaining section was grayish red and soft.
Case 2:
Upon gross pathological examination, we identified a piece of grayish red lung tissue measuring 10 cm × 4 cm × 2 cm. A partial incision was made by the surgeon. The pleura was grayish red and smooth; a grayish white nodule was later found upon incision. The nodule measured 0.6 cm × 0.5 cm × 0.5 cm. The texture of the nodule was similar to that of normal salivary glands. The nodule showed clear boundaries and was attached to the pleura 2 cm away from the anastomosis line. The remaining section was grayish red and soft.
Microscopic pathological examination and immunohistochemistry findings
Surgical specimens were fixed with 4% neutral buffered formaldehyde solution (18-24 h) and embedded in paraffin; sections (4 μm thick) were subjected to hematoxylin-eosin staining [10] and immunohistochemistry analyses.

Figure 2 (caption, partial): B: At low magnification (100 ×, frozen section), the boundary of the tumor was relatively clear, and there were air cavities; C and D: Observations at high magnification (200 ×, frozen section) revealed that the tumor cells were mainly arranged in a monolayer structure, with an apparent bilayer structure locally. Morphologically, the cells were medium sized, the nuclear chromatin was pale and homogeneous, and local cilia were seen (red arrow); E: At low magnification (100 ×), the pulmonary lobular artery and bronchioles were in close relationship (arrow), and peripheral stromal lymphocytes were infiltrated in a focal pattern (triangle); F and G: Observations at medium to high magnification (200 × and 400 ×, respectively) revealed that tumor cells were arranged as papillary and mural structures, with mild cell morphology, visible cilia (arrows), bilayer structures (triangles), aggregation of phagocytes in the alveolar cavity (circle, F), and a fibrous non-cancerous stroma (circle, G).
Immunohistochemical staining: Immunohistochemical analyses were performed on paraffin-embedded sections using primary antibodies against the following proteins: P40, P63, P53, thyroid transcription factor 1, CK5/6, CD34, Ki-67, and collagen IV. All primary antibodies were purchased from Fuzhou Maixin Biotechnology Co., Ltd. (Fuzhou, China). Immunohistochemistry was performed according to the manufacturer's instructions. Phosphate-buffered saline (PBS) was used as a negative control. Staining was performed using the Roche Benchmark XT system (Shanghai).
Genetic testing: Mutations in the EGFR, ALK, ROS1, KRAS, PIK3CA, BRAF, human epidermal growth factor receptor-2 (HER2), REarranged during transfection, and neuroblastoma RAS genes were detected using the ADX Arms and the Amoydx FFPE DNA/RNA Tissue Kit (Xiamen Ade Biomedical Technology Co., Ltd.). All experimental procedures were performed strictly according to the manufacturer's instructions.
Case 1: At low magnification (100 ×), the tumor boundary was relatively clear, and air cavities were present. The pulmonary lobular artery and bronchioles were observed, and the peripheral stromal lymphocytes were localized (Figures 2A, 2B and 2E). At high magnification (200 × and 400 ×), most tumor cells were arranged in a monolayer structure, and the local part appeared as a bilayer structure. Morphologically, the cells were observed to be of medium size (the size of the nucleus and normal phagocytic nuclei was equivalent in the alveolar space); the nuclear chromatin was pale and homogeneous, and local cilia were seen (Figures 2C-G). Thyroid transcription factor 1 (TTF1) was expressed in bronchioles and the peripheral alveolar epithelium, with the only difference being in the intensity of expression. The results of P40, P63, and CK5/6 staining were the same, and staining was positive only in the bilayer structure of the tumor (Figure 3).
Case 2:
At low magnification (100 ×), most cells appeared with moderate density, focal hyperplasia, and stroma with focal lymphocytic infiltration; at high magnification (200 × and 400 ×), the tumor cells were arranged as an acinar structure and accessory wall structure; most cells were observed to be medium sized, the nuclear chromatin was pale and homogeneous, and cilia were seen. Focally, nuclei were enlarged and atypical (Figure 4). TTF-1 was positive; the results for P63 and CK5/6 staining were the same, and only basal cells were seen in the hyperplasia area. CD34 was present in the alveolar structure, and the Ki-67 index was low (Figure 5).
Genetic testing
Genetic tests were performed using the patients' DNA samples to check for mutations in EGFR, ALK, ROS1, KRAS, PIK3CA, BRAF, HER2, RET, and NRAS genes. No gene mutations were detected in case 1, while HER2 gene mutation was detected in case 2.
FINAL DIAGNOSIS
Based on the histological characteristics and results of immunohistochemical staining, the two patients were diagnosed as having BA with unusual presentation.
TREATMENT
Complete wedge resection was performed at the Thoracic Surgery Department of Liaocheng People's Hospital.
OUTCOME AND FOLLOW-UP
After surgical resection, neither patient received radiotherapy or chemotherapy. At the time of writing this report, which is 11 and 12 mo postoperatively for the two patients, respectively, both of them have recovered well without signs of disease.
DISCUSSION
In 2018, BA was proposed by Chang et al [2] as a new type of lung tumor, defined as a group of pulmonary tumors that could be benign or have a potential for malignant transformation depending on the epithelial cell composition of the bronchiolar anatomy. These include classic CMPT and non-classic CMPT, which differ according to histological aspects. BAs can be further divided into proximal (similar to proximal bronchioles) and distal (similar to respiratory bronchioles) types based on the histomorphology (comparing histological features of different grades of bronchial epithelial cells and their similarity with the bronchioles) and immunohistochemical characteristics. Proximal-type BAs comprise numerous prominent mucinous cells and are well defined with ciliated cells and intact basal layer cells that are arranged in a papillary or flattened pattern. Conversely, the distal form usually shows a flattened pattern and comprises few mucinous cells, cubic cells, and/or ciliated cells. Although there is some overlap between the characteristics of the two types, some lesions may lack one or both of these components. Zheng et al [4] reported that mucinous and papillary components are usually present throughout classic CMPTs but may be absent in their "non-classic" counterparts. Furthermore, Shao et al [3] also found mixed-type BAs with monolayered lesions [2,4,11].
In this study, two very rare cases of BAs comprising mucinous cells are reported. The cell arrangement observed showed a flattened pattern, indicating the distal type of BA. Although the tumor cells formed an adenoid or papillary structure, a ciliary structure could be seen locally in the lumen cells.
[Figure 4 caption (leading panels truncated): ...the tumor cells were found to be mainly arranged in a monolayer with locally visible cilia (arrows); some nuclei appeared enlarged and atypical (star); C: at low magnification (100 ×), most cells appeared with moderate density (star), focal hyperplasia, and stroma with focal lymphocytic infiltration; D and E: at medium to high magnification (200 × and 400 ×, respectively), the tumor cells were arranged as an acinar structure and accessory wall structure; most cells showed no atypia, and cilia (arrow) were seen; some nuclei were enlarged and atypical (circle).]
Many studies have reported that the ciliary structure in lumen cells can distinguish this type of tumor from an adenocarcinoma, which is an important characteristic to help differentiate between the two tumor types [12]. However, in the two current cases, not every lumen cell had cilia, and the basal cells could not be easily observed, thus causing some difficulties in diagnosis, particularly when the specimen was frozen. Therefore, interpretations should be made considering both atypia of cells and their arrangement. In our two cases, most cells were loosely arranged, the morphology of glandular epithelial cells was not atypical, and the cytoplasm of local cells was transparent. Few intranuclear inclusion bodies were seen under a high-power microscope; this finding, together with a marginally increased nucleoplasmic ratio, suggested that the lesion was benign.
In the second case, atypical cells and the absence of a bilayer structure throughout the lesion complicated the diagnosis. However, these lesions were different from adenocarcinoma in situ (AIS) and invasive adenocarcinoma. The tumor cells of AIS comprise type II alveolar epithelial cells and/or Clara cells, which grow along the original alveolar wall without destroying the alveolar structure. In AIS, ciliated columnar cells or mucinous cells are rarely present, and cell atypia is more pronounced than that in BA. The boundary of invasive adenocarcinoma is not discernible, the alveolar structure is destroyed, and the growth is rapid. In addition, a micropapillary structure can be seen in the lumen, necrosis is visible, cell atypia is evident, and mitotic figures are widely observed [2][3][4].
Wang et al [13] considered BA to be a kind of tumor associated with bronchioles, and bronchiole involvement can be found in almost all BAs. Upon careful observation, we also found the tumor to have expanded from the bronchioles to the surrounding alveolar walls. Meanwhile, we also observed the pulmonary lobular artery and bronchioles in local areas in these two cases; this formed a relatively robust basis for our diagnosis.
In typical morphologic cases, the double-layer structure is obvious, and ciliary cells and mucous cells are clearly recognizable on the lumen surface, eliminating the need for immunohistochemical examination. However, in our two cases, it was difficult to judge whether the basal cells were present, thus warranting immunohistochemical staining to visualize the tissue structure and cell type. In case 1, P40, P63, and CK5/6 were detected only in local areas, whereas in case 2, P63 and CK5/6 were expressed only in atypical cells, confounding our diagnosis. Many reports have indicated that the double-layer structure is essential in the diagnosis of BA; however, based on our understanding of the current cases and a review of the related literature, we refer to these two lesions as monolayer BA lesions [14,15]. The presence of cellular atypia and the lack of the basal cell layer in monolayer BA lesions suggest their potential to transform into malignant tumors. These findings may reflect the continuous malignant transformation process of benign adenomas of the bronchial epithelium. Further large-scale studies of similar cases are required to investigate whether monolayer BA lesions are accompanied by atypical bronchiolar epithelial hyperplasia, whether they are precancerous lesions similar to atypical hyperplasia of the breast, and whether they will eventually become AIS or even invasive adenocarcinoma [15].
Although the distal type of bronchial adenoma typically has cilia and can be found to extend along normal bronchioles, these characteristics are not easy to observe on intraoperative frozen sections [6,16,17]. Differentiating bronchial adenoma from carcinoma requires immunohistochemistry-assisted diagnosis, which is not currently performed during the operation. Therefore, making a differential diagnosis of bronchial adenoma and carcinoma on intraoperative frozen sections is difficult and challenging.
Although some studies have reported that ill-defined peripheral opacity and pseudocavities of a ground-glass lung nodule on CT differentiate BA from AIS or minimally invasive adenocarcinoma [18], these aspects are not absolute. Thus, they provide some hints, but more comprehensive findings are required for differentiation of these lesions.
Kamata et al [19] identified cancer-driving gene mutations in CMPT, supporting the notion that these lesions are neoplastic rather than reactive or metaplastic. Unlike previous studies that primarily focused on EGFR and BRAF genes[5-9,20], we evaluated nine genes associated with susceptibility to BAs. Case 1 was negative for mutations in all genes. In case 2, HER2 gene mutation was found. Given the small number of samples in this case report, the significance of HER2 gene mutation needs to be further studied in a larger number of samples.
|
2022-05-10T15:26:20.300Z
|
2022-05-16T00:00:00.000
|
{
"year": 2022,
"sha1": "a55a86c0d0b9cfeeeee0b15a12221d167a15ce81",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.12998/wjcc.v10.i14.4541",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e9f68dc82b03fbc723216c9ae070a528ff33b124",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
257550035
|
pes2o/s2orc
|
v3-fos-license
|
Primary Pancreatic Undifferentiated Pleomorphic Sarcoma
ABSTRACT Primary pancreatic sarcomas are rare malignancies with an incidence of 0.1%. This case report is of a 48-year-old man who presented with this condition. The patient's treatment plan consisted of distal pancreatectomy and splenectomy with intraoperative immunohistochemistry and adjuvant chemotherapy. To correctly identify and treat undifferentiated pleomorphic sarcoma, a stepwise strategy involving cross-sectional imaging and extensive histopathology analysis is necessary.
INTRODUCTION
Abdominal pain is a common presenting symptom of pancreatic malignancy related to neurovisceral irritation. 1 Primary pancreatic sarcomas with an incidence of 0.1% often present with abdominal pain. The term "sarcoma" refers to a diverse group of malignancies that originate in the soft tissues and bones. We report a case of high-grade, undifferentiated pleomorphic sarcoma (UPS) of the pancreas.
CASE REPORT
A 48-year-old man presented to the emergency department with nonradiating, left lower quadrant abdominal pain for 2 days. The pain was aggravated by ingestion of food and accompanied by low-grade fever, malaise, loose stool, and 10 lbs. weight loss over 3 months. He denied nausea, emesis, melena, jaundice, recent trauma, infections, travel, alcohol, or drug or non-steroidal anti-inflammatory drug use. His family history was notable for prostate and colon cancer in his father. He was found to have tachycardia, but no hypotension or fever. His weight was 77 kg (169 lb 12.8 oz), and body mass index was 22.4 kg/m 2 . Physical examination revealed tenderness in the left lower quadrant without a palpable mass. Laboratory examination revealed the patient to present with neutrophilic leukocytosis (14.3 thou/mcL); microcytic, hypochromic anemia (hemoglobin 11.1 gm/dL, hematocrit 35.2%, mean corpuscular volume 24.3 pg); and elevated HbA1C (7.1%). The patient's liver profile revealed elevated alkaline phosphatase (245 U/L), and elevated alanine transaminase (58 U/L). CA 19-9 was normal. An abdominal computed tomography (CT) scan with intravenous contrast revealed a 6.8 cm complex cystic pancreatic mass suspicious for malignancy with mild pancreatic duct dilatation seen within the tail of the pancreas (Figure 1). Abdominal magnetic resonance imaging (MRI) showed a 9.2 cm macrolobulated mass involving the pancreas without local extension or metastasis. On MRI, the pancreatic tail was identified to have upstream atrophy with ductal dilatation, which continued to the ventral pancreatic level. Positron emission tomography-CT (PET-CT) showed a large fluorodeoxyglucose-avid centrally necrotic pancreatic mass without metastasis. Subsequent endoscopic ultrasound revealed a round mass identified in the pancreatic body. The mass was hypoechoic and heterogeneous and not completely cystic but not completely solid in nature. The mass measured 65 by 49 mm in maximal cross-sectional diameter with well-defined endosonographic borders. An intact interface was seen between the mass and the celiac trunk suggesting a lack of invasion. The endosonographic appearance of parenchyma and the upstream pancreatic duct indicated duct dilation and parenchymal atrophy. The pancreatic duct measured 2.4 mm in the head, 4.4 mm in the body, and 3.2 mm in the tail. Fine-needle biopsy of the mass was performed, which demonstrated malignant spindle cells. Immunohistochemistry (IHC) performed on this specimen was unable to characterize the neoplasm's lineage, being negative for antibodies directed against CKAE1AE3, CD163, synaptophysin, CD117, CD34, CD31, SMA, S-100, and SOX-10. In a patient with an intrapancreatic mass of this morphology without involvement of the retroperitoneum or lymph nodes, the differential diagnosis remained broad but included sarcomatoid carcinoma or sarcoma.
He underwent distal pancreatectomy and splenectomy (Figures 2 and 3). Intraoperative frozen section showed a high-grade, predominantly spindled but focally epithelioid malignancy (Figure 4). Subsequent IHC was negative for multiple epithelial markers, including epithelial membrane antigen, broad-spectrum keratin cocktails (AE1/AE3 and OSCAR), and high molecular weight keratin (34betaE12). This combination of clinical, histomorphologic, and immunophenotypic features yielded a diagnosis of undifferentiated pleomorphic sarcoma. The mass measured 10.5 cm in the greatest dimension, and surgical margins were negative. He underwent adjuvant chemotherapy with 4 cycles of doxorubicin/ifosfamide. Surveillance with chest/abdominal/pelvic CT every 3 months was recommended. His most recent follow-up imaging obtained 15 months after therapy completion showed no pancreatic ductal dilatation, normal pancreatic parenchyma, and no cystic or arterially enhancing pancreatic lesions. Moreover, there was no evidence of recurrent or metastatic disease in the abdomen. The patient had an unremarkable clinical examination at follow-up and remained under a surveillance protocol for 4 additional months. The entire length of follow-up was 24 months from the time he presented to the emergency department until the present time.
DISCUSSION
Primary UPS is an exceptionally rare subtype of pancreatic sarcoma. 2 Pancreatic sarcomas are aggressive and are often associated with poor prognosis. 3 The first case of pancreatic UPS was reported in 1986 and had a good outcome with only distal pancreatectomy and splenectomy, without radiotherapy or chemotherapy. 4 Ambe et al and Feather et al described cases of pancreatic sarcoma, noting that they are more commonly seen in younger individuals and more frequently involve the pancreatic tail and body. 3,5 Similarly, our case involved a relatively young individual with the tumor localized to the pancreatic body. Pancreatic sarcomas are usually large in size and present with abdominal pain, nausea, vomiting, and weight loss. These features were present in our case. 2 An English-language literature search revealed 20 cases of UPS with a clinical presentation very similar to that of our patient (Table 1). Of the 20 cases that we found in the English literature, 11 presented primarily with abdominal pain and 4 presented with abdominal pain as a symptom accompanying other complaints or diagnoses.
There are 2 main hypotheses regarding the genesis of primary UPS of the pancreas. The first hypothesis is that these tumors are not a distinct entity, but rather a common morphologic pattern that many neoplasms share. As tumors become more undifferentiated, this shared morphologic pattern perhaps implies a common development pathway of malignancy. Under this hypothesis, UPS can arise from sarcomas and carcinomas. The second hypothesis is that UPS results from malignant transformation of mesenchymal stem cells that do not show differentiation markers at the outset.
UPS can mimic many entities and is essentially a diagnosis of exclusion. 2 UPS may clinically mimic any mass-forming lesions of the pancreas such as autoimmune pancreatitis, tuberculosis, sarcoidosis, and other malignancies. Likewise, they can histomorphologically mimic many high-grade carcinomas or sarcomas based on which features predominate in a given tumor. By definition, they do not diffusely express any characteristic immunophenotype, so IHC serves mainly to exclude other entities which enter the clinical or morphologic differential. For example, in our case, IHC chiefly served to exclude sarcomatoid carcinoma, gastrointestinal stromal tumor, neuroendocrine carcinoma, angiosarcoma, leiomyosarcoma, and melanoma. Genetic alterations are common in pancreatic carcinomas/sarcomas, and molecular testing may reveal amplification of SAS, MDM2, CDK4, DDIT3, and HMGIC. 6 Mutations of the genes TP53, RB1, and CDKN2A also play a role in UPS growth. 2 Multimodal treatment with chemotherapy, radiofrequency ablation, and surgery improves the prognosis and survival of patients with primary pancreatic UPS. 7 The chemotherapy regimen in our case was recommended based on histology and currently existing data on adjuvant chemotherapy in soft-tissue sarcoma. Primary pancreatic sarcomas are not common, and given the heterogeneity of primary sarcoma location, decision making regarding systemic therapy is largely driven by histology. UPS is one of the most common soft-tissue sarcomas and is thus routinely included in clinical trials assessing the utility of treatment. Prospective studies assessing the role of adjuvant chemotherapy using anthracycline with ifosfamide (EORTC 62931) have failed to show an overall survival advantage, and thus it is not considered a standard of care for all patients. 8 However, the National Comprehensive Cancer Network guidelines recommend that adjuvant chemotherapy may be considered in some cases. 9 Reassessment of data from EORTC 62931 for those considered to have high-risk disease or a 10-year predicted probability of overall survival below 51% based on the Sarculator risk stratification nomogram noted improvement in risk of recurrence (hazard ratio [HR] 0.46) and risk of death (HR 0.46). 10 Using the Sarculator nomogram for our patient, he was deemed to have a very high risk of recurrence (estimated 7-year overall survival of 39% and 7-year disease-free survival of 14%). Therefore, we recommended anthracycline-based chemotherapy. Most of the cases in Table 1 describe middle-aged men with pleomorphic malignancies in the body and/or tail of the pancreas who did well with surgery. Despite the reported similarity of the cases, an evidence base to inform management of this malignancy is lacking because of its rareness. A stepwise approach with cross-sectional imaging and biopsy is important to appropriately diagnose and manage UPS.
|
2023-03-24T15:12:17.416Z
|
2023-03-01T00:00:00.000
|
{
"year": 2023,
"sha1": "28f7684f05c44ccbf6b9140496b6e068dba24289",
"oa_license": "CCBYSA",
"oa_url": "https://bovine-ojs-tamu.tdl.org/bovine/article/download/1750/7690",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8077e8e4e1919f947359cbb1d0ce59146a045930",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
}
|
253872324
|
pes2o/s2orc
|
v3-fos-license
|
Damage Detection in a Rotor Dynamic System by Monitoring Nonlinear Vibrations and Antiresonances of Higher Orders
Since rotor systems are very sensitive and vulnerable to transverse cracks, early detection of damage is of paramount importance and essential for rotating machinery. Therefore, one of the main issues is to identify robust characteristics of the rotor vibration response that can be directly attributed to the presence of a transverse crack in a rotating shaft, preferably while the crack is still small, in order to avoid catastrophic failures of rotating machines. This study investigates the potential links between the nonlinear vibrations, the locations of higher-order antiresonances, and structural modifications due to the presence of a breathing crack in rotor systems. Using the proposed numerical results on the evolution of the nonlinear responses of a cracked rotor system, it was observed that a robust diagnosis of the presence of slight damage can be performed by tracking nonlinear vibrational measurements, with particular attention to the antiresonance behavior of higher orders. These observations can easily serve as target observations for the monitoring system and for identifying the positions of damage at an early stage.
Introduction
One of the most challenging problems in the vibration monitoring of rotating machines is to promote effective tools and robust indicators to detect damage at an early stage. Nowadays, rotating machines are designed to operate at increasingly higher efficiencies and under increasingly challenging operating conditions for structural system integrity. As a result, many rotating systems contain rotor shafts that are potentially highly susceptible to transverse fatigue cracks. Certain catastrophic failures of rotating machines are due to the presence of growing cracks in the rotor shaft. This primary accidental risk, which can lead to rotor system downtime and prohibitive economic costs, can be avoided by deploying the early detection of propagating cracks. Many efforts have been made in this field of structural health monitoring to better understand the impact of the presence of a transverse crack on the dynamic behavior of rotating systems, leading to the development of advanced detection of defects to ensure the safety of rotating machines during continuous operation and under potentially fluctuating loading conditions. Generally, three different approaches are attempted to detect the presence of a crack in a rotating structure and possibly identify the position and size of the crack. The first approach is based on monitoring the natural frequencies of the rotor system because the presence of a crack in the rotating shaft reduces the rigidity of the rotor shaft, and therefore, causes a decrease in the natural frequencies of the original uncracked rotor system [1][2][3]. This approach has the advantage of being very easy to apply. It is often effective for detecting large faults but has the disadvantage of being incapable of detecting small ones. Consequently, this approach is very limited for the predictive maintenance of rotating systems in a real environment. The second approach focuses on changes in the linear measurements of the frequency response functions (FRF) and certain associated indicators, such as the motion of antiresonances [4][5][6][7] and the coupling of the vibration measurements of a rotating cracked shaft [8][9][10][11]. It can be noted, however, that some difficulties may arise when seeking a robust identification of damage based on the use of linear analysis: indeed, the crack might open and close alternately during experimental tests [12] due to the real and complex environments of rotating machines during operational conditions. While the presence of an open crack leads only to a loss of physical stiffness, which allows the system to be studied as linear, the development of breathing cracks not only reduces the stiffness of the structure but also tends to make an otherwise linear structure non-linear, because of the evolution of the structure's stiffness characteristics associated with the open and closed states. Therefore, considering the fact that the presence of damage can induce more complicated behavior, an alternative approach is to take into account the influence of an active transverse crack in the response of a rotor model by investigating the nonlinear structural vibration responses. As mentioned previously, the concept of an active crack is represented by the opening and closing of the crack (also called the crack breathing phenomenon [13][14][15][16][17]).
Thus, many works based on the tracking of nonlinear dynamic characteristics, such as the presence of n× harmonic components or changes in the rotor orbit shape with the occurrence of multi-loops, have been developed and tested by many researchers [18][19][20][21][22][23][24][25][26]. It is now generally recognized that the use of nonlinear analysis allows the efficient and robust detection of medium to large defects due to the fact that nonlinear dynamic characteristics are significantly influenced by the presence of a transverse breathing crack in the rotating shaft. Since it is not possible to give an exhaustive list of all the work of interest carried out on structural health monitoring and damage detection, the reader can refer to the various state-of-the-art papers that have been proposed in the past for crack detection based on the measurement of linear and nonlinear vibrations [27][28][29][30][31][32][33][34].
More recently, some studies have shown that the presence of uncertainties might cause classical linear and nonlinear approaches to be less robust for detecting small cracks [35][36][37][38][39][40][41]. In order to overcome this difficulty, recent works have suggested looking at the zeros of the higher-order frequency response function (HOFRF), i.e., the antiresonance frequencies of HOFRF, for the detection and location of small cracks in beam systems [42,43]. It can be noted that the use of antiresonances from linear measurements has rarely been considered in structural health monitoring [4][5][6][7], whereas the antiresonance frequencies of measured FRFs provide useful information on the dynamic properties of a mechanical structure [44] and can be used for robust model updating [45,46]. For linear systems, it is well known that the magnitude of the FRF is characterized by the resonance-antiresonance pattern in the frequency domain and the antiresonance behavior due to the relationship between the driving and measurement points of the mechanical system. Indeed, the resonance frequencies can be defined as global quantities whose values do not depend on the given driving point and chosen measurement point. On the contrary, the antiresonance frequencies are typical of each FRF and depend on the location of both the driving force and the measurement point. One of the most well-known results regarding resonance-antiresonance behavior is that the resonances and antiresonances alternate continuously only for FRFs where the excitation points and the measurement points coincide. In this context, this study proposes a preliminary analysis of the supplementary information that can be obtained from the antiresonances of the higher-order responses for structures with local nonlinearities. More specifically, it considers the potential of using the locations of these antiresonances for the detection and identification of damage in rotating machines. Indeed, when considering the nonlinear response of a rotor system with a breathing crack, antiresonances of higher orders could become an attractive alternative in structural damage assessments. Due to the fact that antiresonances are very sensitive to small structural changes, the use of antiresonances of nth orders (with n > 1) seems to potentially meet several requirements for early damage detection in rotating machines. In this paper, the aim is to achieve this objective by proposing a complete numerical study to examine the nonlinear responses and the tracking of the antiresonance frequencies of nth orders for different configurations of cracked rotors. This work continues from two previous works [42,43] that addressed the problem of crack detection in a pipeline beam by using the non-linear vibrations and the antiresonances of HOFRF. Here, we propose an extension of this approach to the problem of structural health monitoring for rotating machines.
This paper is organized as follows: A brief reminder of the general description and characteristics of the cracked rotor system under study is first presented. The second section presents the general formulation used to predict the dynamic responses of the cracked rotor system. Then, the most relevant results on the crack's effect on the vibrational response of the rotor system are discussed. More specifically, the impact of the presence of a breathing crack on the vibrational response of the cracked rotor is analyzed, focusing more particularly on the appearance of harmonic components. The limitations of such detection by tracking nonlinear responses are also discussed in the context of robust preventive fault detection (i.e., structural health monitoring in the case of a rotor system with a small crack) in order to situate the objective and the originality of the study proposed. Finally, an efficient damage detection methodology based on the antiresonance frequencies of higher orders is presented, and its robustness for crack localization, even when small levels of damage are encountered, is demonstrated by numerical examples.
Description of the Cracked Rotor System
This section briefly presents the modeling of the cracked element based on the notion of stiffness reduction and the breathing mechanism. The rotor system under study is also described. For the interested reader, the complete modeling of the rotor system was explained previously in [40], and the cracking model used in this study was proposed and discussed previously in [16].
The mechanical system under study consists of a two-bearing flexible cracked rotor, as shown in Figure 1. The system is composed of a shaft with a circular cross-section, 0.5 m in length and 0.01 m in diameter, with two rigid circular discs located at the middle of the shaft and at a quarter of the shaft length from the left side, respectively. The rotor is supported by two flexible supports, one at each extremity. It is excited by an out-of-balance force on both discs. The rotor shaft is discretized with 20 Euler beam elements, where each node has four degrees of freedom (dof). The values of the rotor parameters are given in Table 1. Considering that the presence of one transverse crack induces local flexibility due to strain energy concentration in the vicinity of the tip of the crack under load [13,14], it can be assumed that the reduction in the second moment of area ∆I at the location of the crack is given by
∆I/I_0 = [(R/l)(1 − ν²)F(µ)] / [1 + (R/l)(1 − ν²)F(µ)],   (1)
where I_0 is the second moment of area of the cross-section of the healthy rotor. R, l, and ν are the beam radius, element length of the section, and Poisson ratio, respectively. F(µ) defines a nonlinear compliance function which can be obtained from a series of experiments with chordal cracks [13,14]. µ = h/R is the non-dimensional crack depth and h defines the depth of the crack, as illustrated in Figure 1. It then follows that the local flexibility due to the presence of one crack leads to an additional amount of stiffness, denoted by K_crack, at the crack. The complete expressions of the stiffness matrix K_crack are given in [16]. In addition, due to the rotation of the rotor system, it can be assumed that the crack will open and close once per revolution. The function describing this periodic opening and closing of the crack, called "breathing", can be approximated by a cosine function g(t) [16], by assuming that the gravity force is much greater than the imbalance force (i.e., the cracked rotor rotates under the load of its own weight). This results in the expression of a simple crack breathing mechanism
g(t) = (1 + cos(ωt))/2,   (2)
where ω defines the rotational speed of the rotor. For g(t) = 1, the crack is fully open, and for g(t) = 0, there is no effect due to the crack on the dynamic behavior of the rotor system (i.e., the crack is totally closed, and the global cracked rotor stiffness is equal to the stiffness of the healthy rotor). Finally, the equations of the rotor with a breathing crack can be written as
Mẍ + Dẋ + (K − g(t)K_c)x = f + q,   (3)
where x, ẋ, and ẍ are the displacement, velocity, and acceleration vectors. K and M are the stiffness and mass matrices of the complete uncracked rotor. f and q are the gravitational force and the imbalance, respectively. The matrix D combines the effects of the shaft's internal damping and gyroscopic moments. We have D = C + ωG, for which the damping matrix C is taken as a classical Rayleigh damping for the shaft (i.e., C = αM_s + βK_s, where M_s and K_s are the mass and stiffness matrices for the rotor shaft, and (α, β) are constants of proportionality). For the rest of the study, the values of these two proportional damping coefficients (α, β) are estimated by considering that the two reference vibration modes with a damping ratio of 0.5% (for the healthy rotor at rest) are associated with the first and second forward modes. It should be noted that the global stiffness matrix K_c of the rotor system due to the presence of the crack, situated at the ith beam location on the rotor shaft, is obtained by placing K_crack at the dofs of the cracked element,
K_c = diag(0, · · · , 0, K_crack, 0, · · · , 0),   (4)
where K_crack defines the stiffness matrix of the crack element and 0 defines the 8 × 8 null matrix.
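To make the structure of the breathing stiffness term in Equations (2)-(4) concrete, the short Python/NumPy sketch below assembles a global crack stiffness matrix K_c by embedding an 8 × 8 crack-element matrix at the dofs of the cracked element, and then evaluates K − g(t)K_c over one revolution. This is only an illustrative sketch: the element index, the placeholder matrices and the rotational speed are assumptions chosen for demonstration and are not the rotor parameters of Table 1.

```python
import numpy as np

def breathing(t, omega):
    """Breathing function: g = 1 -> crack fully open, g = 0 -> crack fully closed."""
    return 0.5 * (1.0 + np.cos(omega * t))      # assumed cosine convention

def global_crack_stiffness(n_dof, crack_dofs, K_crack):
    """Embed the 8 x 8 crack-element stiffness matrix at the dofs of the cracked element."""
    K_c = np.zeros((n_dof, n_dof))
    K_c[np.ix_(crack_dofs, crack_dofs)] = K_crack
    return K_c

# --- illustrative use (placeholder matrices, not the Table 1 rotor) ----------
n_elem, dof_per_node = 20, 4
n_dof = (n_elem + 1) * dof_per_node             # 21 nodes x 4 dof per node
i_crack = 7                                     # index of the cracked element (assumed)
crack_dofs = np.arange(i_crack * dof_per_node, (i_crack + 2) * dof_per_node)

K = 1.0e6 * np.eye(n_dof)                       # placeholder healthy stiffness matrix
K_crack = 1.0e4 * np.eye(2 * dof_per_node)      # placeholder local stiffness loss
K_c = global_crack_stiffness(n_dof, crack_dofs, K_crack)

omega = 2.0 * np.pi * 25.0                      # rotational speed [rad/s]
for t in np.linspace(0.0, 2.0 * np.pi / omega, 5):
    g = breathing(t, omega)
    K_t = K - g * K_c                           # instantaneous stiffness of the cracked rotor
    print(f"g(t) = {g:.2f}, min diagonal stiffness = {K_t.diagonal().min():.3e}")
```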
Dynamic Responses of the Cracked Rotor System
Equation (3) has a time-dependent contribution (i.e., the term g(t)K_c x) due to the fact that the crack breathes when the system rotates. The steady-state periodic responses of the cracked rotor system can therefore be approximated by a truncated Fourier series of order m with a fundamental frequency f = ω/(2π), where ω corresponds to the rotational speed of the system,
x(t) = A_0 + Σ_{k=1}^{m} [A_k cos(kωt) + B_k sin(kωt)],
where A_0, A_k, and B_k (with k = 1, · · · , m) define the unknown coefficients of the finite Fourier series that allow approximating the nonlinear response of the cracked rotor. The resolution of such a dynamic system can be achieved via the well-known harmonic balance method (HBM) [40]. It should be recalled that the gravitational and unbalance forces are exactly defined by finite Fourier series with only constant components and first-order periodic components in the frequency domain, respectively. The periodic solution x(t) can then be determined by solving a set of (2m + 1) × n linear equations (where n is the number of dof) in the frequency domain, in which Γ defines the contribution of both the gravitational force and the imbalance, and the matrix Λ_c corresponds to the parametric terms due to the presence of the breathing crack.
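As an illustration of the harmonic balance procedure, the following self-contained Python sketch applies the same idea to a single-dof oscillator with a breathing stiffness: the residual of the equation of motion is projected onto the truncated Fourier basis by numerical quadrature, yielding a small linear system in the coefficients A_0, A_k, B_k. All parameter values are assumptions chosen only for demonstration; the paper solves the corresponding (2m + 1) × n system for the full finite-element rotor model.

```python
import numpy as np

# Single-dof illustration of the harmonic balance method (HBM) for a breathing stiffness.
m, c, k = 1.0, 5.0, 1.0e4        # mass, damping, stiffness (assumed values)
kc = 2.0e3                        # stiffness loss when the crack is fully open
omega = 40.0                      # rotational speed [rad/s]
f0, f1 = 9.81 * m, 50.0           # constant ("gravity") and 1x ("unbalance") forcing
m_h = 4                           # number of retained harmonics

T = 2.0 * np.pi / omega
t = np.linspace(0.0, T, 512, endpoint=False)
g = 0.5 * (1.0 + np.cos(omega * t))          # breathing function (assumed convention)
force = f0 + f1 * np.cos(omega * t)

def basis(j):
    """Fourier basis function j and its first two time derivatives."""
    if j == 0:
        return np.ones_like(t), np.zeros_like(t), np.zeros_like(t)
    n = (j + 1) // 2
    if j % 2 == 1:                            # cos(n w t)
        return (np.cos(n * omega * t),
                -n * omega * np.sin(n * omega * t),
                -(n * omega) ** 2 * np.cos(n * omega * t))
    return (np.sin(n * omega * t),            # sin(n w t)
            n * omega * np.cos(n * omega * t),
            -(n * omega) ** 2 * np.sin(n * omega * t))

nb = 2 * m_h + 1                              # number of unknown Fourier coefficients
L = np.zeros((nb, nb))
b = np.zeros(nb)
for i in range(nb):
    phi_i, _, _ = basis(i)
    b[i] = np.mean(phi_i * force)
    for j in range(nb):
        phi_j, dphi_j, ddphi_j = basis(j)
        # equation-of-motion operator applied to basis function j
        r_j = m * ddphi_j + c * dphi_j + (k - g * kc) * phi_j
        L[i, j] = np.mean(phi_i * r_j)        # Galerkin projection by quadrature

coeffs = np.linalg.solve(L, b)                # [A0, A1, B1, A2, B2, ...]
amps = [abs(coeffs[0])] + [np.hypot(coeffs[2 * n - 1], coeffs[2 * n]) for n in range(1, m_h + 1)]
print("harmonic amplitudes |X0|, |X1|, ..., |Xm|:", np.round(amps, 6))
```

Running this sketch shows that, as soon as the breathing term is non-zero, the 2× and higher coefficients take non-zero values, which is the nonlinear signature exploited in the remainder of the paper.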
Analysis of the Crack Effect on the Vibrational Response of the Rotor System
The main objective of this section is to analyze the impact of the presence of a breathing crack on the vibrational response of the cracked rotor, focusing more particularly on the appearance of n× harmonic components (with n > 1) and also on the evolution of the antiresonances of higher orders for crack position detection. These analyses were validated by performing a numerical study on the effects of the two parameters of the crack (i.e., the crack depth µ and the crack location L_crack) on the higher-order vibrational responses.
First of all, a brief discussion and summary are given of the classical well-known results on the appearance of harmonic components due to the presence of a breathing crack. Secondly, a discussion is presented on the possibility of identifying the position of a crack, even of small size, by considering the antiresonance behavior of higher orders for the vibration responses of the cracked rotor.
Nonlinear Vibration and Appearance of Harmonic Components
Considering the previous equations provided in Section 3, it is obvious that the vibrational response of the rotor system will be composed of only the constant and first harmonic components (i.e., A_0, A_1, and B_1) in the absence of a breathing crack. Indeed, if the rotor is healthy, we have Λ_c = 0, and thus the excitation comes from the gravitational and unbalance forces (see Equation (7)). Conversely, due to the presence of a breathing crack (i.e., Λ_c ≠ 0), the vibrational response of the cracked rotor will be composed of not only the constant term and the first harmonic components, but also of the components of higher orders (i.e., A_k and B_k with k > 1). Moreover, by considering the expression of Λ_c, it appears that the presence of a breathing crack leads to a direct mutual dependence between the static deflection A_0 and the contributions of the first order (i.e., A_1), leading indirectly to a contribution of the static term on the higher orders due to the expression of Λ_c (i.e., see the direct interactions between the coefficients (A_{k−1}, B_{k−1}), (A_k, B_k), and (A_{k+1}, B_{k+1}) for k > 1 in Equation (9)). In practice, this results in the fact that the presence of a breathing crack is characterized by the appearance of super-harmonics of the jth order (with j > 1), leading to amplitude peaks at rotation speeds equal to approximately 1/(j − n) (with n = 0, . . . , j − 1) of the critical speeds in the dynamic response of the cracked rotor [16,22].
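In a condition-monitoring context, the j× harmonic components discussed above are typically extracted from measured steady-state signals by synchronous demodulation at integer multiples of the rotation frequency. The short Python sketch below illustrates this on a synthetic signal; the signal composition, amplitudes and noise level are invented for illustration only and are not results of the present study.

```python
import numpy as np

def order_amplitudes(x, fs, f_rot, max_order=4):
    """Amplitudes of the 1x ... max_order x harmonic components of a steady-state
    signal, obtained by synchronous (lock-in) demodulation at multiples of the
    rotation frequency.
    x     : sampled vibration signal
    fs    : sampling frequency [Hz]
    f_rot : rotation frequency [Hz] (assumed known, e.g. from a tachometer)
    """
    t = np.arange(len(x)) / fs
    amps = []
    for n in range(1, max_order + 1):
        a = 2.0 * np.mean(x * np.cos(2.0 * np.pi * n * f_rot * t))
        b = 2.0 * np.mean(x * np.sin(2.0 * np.pi * n * f_rot * t))
        amps.append(np.hypot(a, b))
    return np.array(amps)

# --- illustrative use on a synthetic signal (invented amplitudes) -------------
fs, f_rot = 5000.0, 25.0
t = np.arange(0.0, 2.0, 1.0 / fs)
x = (1.00 * np.sin(2 * np.pi * f_rot * t)        # 1x: unbalance response
     + 0.08 * np.sin(2 * np.pi * 2 * f_rot * t)  # 2x: typical breathing-crack signature
     + 0.03 * np.sin(2 * np.pi * 3 * f_rot * t)  # 3x
     + 0.01 * np.random.randn(t.size))           # measurement noise
print(order_amplitudes(x, fs, f_rot))             # approximately [1.0, 0.08, 0.03, ~0]
```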
To briefly illustrate these well-known analyses of fault detection by examining the non-linear vibration signature, three case studies are proposed with the crack parameters specified in Table 2. These studies correspond to a case with a deep crack (i.e., case 1), and the other two cases consider a small crack located at different positions on the rotor shaft (i.e., cases 2 and 3). Figures 2-4 show the non-linear response (i.e., the global response and the harmonic components 1×, 2×, 3×, and 4×) for the cracked rotor in each case. It is clear that the presence of a breathing crack on the rotor results in the appearance of harmonics of the jth order with amplitude peaks when passing 1/j of the critical speeds (see more specifically the marks (1, 4, 5), (2, 6, 7), and (3, 8, 9) for the 4×, 3×, and 2× harmonic components, respectively). To better understand the proposed results and discussion, Table 3 gives the critical speeds of the cracked rotor system in the frequency range [0; 300] Hz. Moreover, the marks (10, 11, 12) also suggest the presence of resonance peaks for each j× harmonic component at the main critical speeds. By comparing the results obtained for the three case studies, it can also be concluded that: • If the size of the crack is large, the j× harmonic responses contribute significantly to the overall dynamic response of the rotor system, resulting in the presence of peaks when passing through 1/j of the critical speeds with amplitudes greater than the first-order response (see, for example, the marks (2, 3, 8) in Figure 2). However, in the practical context of a rotor system with a small crack, the presence of the crack is not apparent when inspecting the overall non-linear response alone, and it is necessary to specifically examine the responses of the j× harmonic contributions (see Figures 3 and 4). Thus, the evolution of the j× harmonic components offers a positive and useful way to diagnose the presence and potential propagation of cracks in a practical way in both preventive and predictive maintenance. • Although, due to the presence of one crack, variations in the frequencies and critical speeds of a rotor system exist from a theoretical point of view, these changes are often too small to be considered as reliable indicators for the early detection of defects. This is highlighted in Tables 3 and 4, which give the evolution of the critical speeds and the natural frequencies at rest in the case of a healthy rotor and the three cracked rotor configurations proposed in the present study. It should be noted that the calculations of the natural frequencies at rest are given for an open transverse crack, as illustrated in Figure 1. It has also been shown in previous studies [3,40,47] that the detection and identification of small defects in the presence of uncertainties (uncertainties in the vibration measurements or uncertainties in the physical model used) are often not possible if the analysis criteria are based exclusively on these evolutions of natural frequencies or critical speeds. • By comparing the vibration responses of several case studies for rotor systems with small cracks positioned at different locations (i.e., cases 2 and 3 for the present study), it appears that the evolutions of the first-order response (for a chosen vertical or horizontal location on the rotor) over the frequency range are very close.
More precisely, the positions of the antiresonances are identical, and the evolution of the vibratory amplitudes between the resonance and antiresonance peaks is similar (see, for example, Figures 3 and 4). It was also verified that this statement is valid for all vertical and horizontal amplitudes over the rotor system. Indeed, the only modification between these various cases comes from the position of the crack, which does not greatly modify the 1× vibrational response of the rotor system because the latter is mainly governed by the imbalance in the case of a rotor system with a small crack. On the contrary, the evolutions of the higher-order responses and the values of the antiresonance frequencies are dependent on the position of the crack (see Figures 3 and 4 for the 2×, 3× and 4× harmonic responses). Consequently, a nondestructive detection technique based on the tracking of antiresonances should be a reliable and effective way of monitoring and identifying the locations of cracks in rotor systems when ensuring their structural health.
Crack Location Based on the Antiresonances of Higher Order Vibrational Responses
Based on the analysis carried out previously, the following section deals with a robust approach to meet the challenges of predictive maintenance and thus promote reliable monitoring of the condition of rotating systems by identifying the presence of a small crack. The main objective is to be able to identify one crack at an early stage and consequently to prevent equipment failures that lead to costly downtime and repairs. The methodology proposed is based on using the antiresonances of higher-order responses for small crack detection and for identifying the positions of cracks in rotor systems.
Firstly, a concise discussion on the linear equations of higher orders in the frequency domain is proposed. Through Equations (4), (6), and (9), it appears that the parametric terms due to the breathing behavior of the crack induce an additional internal force only at the crack location for all the harmonic components of the rotor response. As previously stated, the gravitational force is defined by a Fourier series with only constant components, and the imbalance is exactly characterized by a Fourier series with periodic components of the first order in the frequency domain. Therefore, it follows that the higher-order Fourier coefficients A_k and B_k (with k > 1) are directly excited only via the parametric terms contained in the matrix Λ_c due to the presence of the breathing crack. More precisely, it follows that the linear equations for the unknown coefficients A_k and B_k (with k > 1) can be rewritten in a form (Equations (10) and (11)) in which the right-hand-side excitation is non-zero only at the dofs of the cracked element, i.e., δ_i = 1 for the dofs of the ith element (at the location of the crack in the rotor shaft) and δ_i = 0 otherwise. Thus, the right side of Equation (10) clearly indicates that the excitation contribution for the kth harmonic components is due only to the presence of the breathing crack, and so the excitation forces for the higher orders are located only at the crack location. It is also necessary to note that these excitations depend indirectly on the contributions of the gravitational and unbalance forces through the calculations of the lower orders and the interconnections between orders via the matrix Λ_c (see Equations (6) and (9)).
These observations lead us to assume that the antiresonance behavior of higher-order responses could be very effective for detecting and identifying cracks in rotor systems. In the following, the numerical results associated with the three cases previously discussed in Section 4.1 are analyzed to consider the possibility of using the placement of the antiresonance frequencies of higher orders to identify the crack location. Figures 5-7 illustrate the vertical displacements for all the rotor shaft positions for the 2nd and 3rd orders by considering the three cases of cracked rotor discussed previously. The contour lines indicate the isolines of the vertical amplitudes for the 2nd and 3rd orders. To facilitate the reader's comprehension, twenty contour lines are uniformly distributed between the minimum and maximum values of the amplitudes set in logarithmic scale. This has the advantage of providing a clear visualization of the evolution of the low-level amplitudes as a function of the position on the rotor shaft and the rotational speed, and thus highlights the evolution of the antiresonance frequencies of higher orders along the rotor. Consequently, the high amplitude peaks that correspond to the fundamental critical speeds and to 1/(j − n) (with n = 0, . . . , j − 1, j > 1) of the critical speeds are indicated by the red contour lines, and the antiresonance frequencies are indicated by the blue contour lines. In addition, the two dotted red horizontal lines indicate the rotor element on which the crack is located. The red circle corresponds to the identification carried out for the crack location based on the minimum of the antiresonance frequencies (note that this criterion for detecting the damage location has already been mentioned for the locations of small cracks on beam systems [42,43] and will be explained in the following for application to rotor systems). In addition, the red cross corresponds to the identification of the crack location based on the minimum of the second antiresonance frequencies (if this identification is carried out successfully). In the following discussion, the notion of resonance frequency will generally be preferred to the exact notion of critical speed (i.e., the undamped natural frequency of the rotor system due to imbalance), in order not to make the discussion too complex and to facilitate the understanding of the analysis of the results.
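The location criterion just described, namely selecting, for each measurement position along the shaft, the first antiresonance of the 2× (or 3×) harmonic response and then retaining the position for which this frequency is minimum, can be sketched in a few lines of Python. The synthetic amplitude maps used below simply assume that the antiresonance frequency grows with the distance to the cracked node; the sketch only illustrates the selection rule and does not reproduce Figures 5-7.

```python
import numpy as np

def locate_crack_from_antiresonances(amp_2x, speeds):
    """For each measurement position along the shaft, find the first antiresonance
    (local minimum) of the 2x harmonic amplitude versus rotation speed, then return
    the position whose first antiresonance frequency is the lowest."""
    first_antires = np.full(amp_2x.shape[0], np.nan)
    for i, a in enumerate(amp_2x):
        # indices of interior local minima of the amplitude curve
        idx = np.where((a[1:-1] < a[:-2]) & (a[1:-1] < a[2:]))[0] + 1
        if idx.size:
            first_antires[i] = speeds[idx[0]]
    return int(np.nanargmin(first_antires)), first_antires

# --- illustrative use with synthetic data (not the results of Figures 5-7) ---
speeds = np.linspace(10.0, 150.0, 300)         # rotation speeds [Hz]
positions = np.arange(21)                      # nodes along the shaft (illustrative)
crack_node = 7                                 # assumed cracked node
# toy model: the antiresonance frequency grows with the distance to the crack
antires_true = 40.0 + 2.5 * np.abs(positions - crack_node)
amp_2x = np.array([np.abs((speeds - f_a) * (speeds - 120.0)) + 1.0 for f_a in antires_true])

node, antires = locate_crack_from_antiresonances(amp_2x, speeds)
print("estimated crack node:", node)            # -> 7
```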
Based on the numerical results and additional studies which have been conducted by the author (but for which the results are not shown for the sake of conciseness), several conclusions can be proposed regarding the behavior of the antiresonances and their use for detecting and identifying a breathing crack in rotating machines: • The antiresonance frequencies of higher orders can be defined as local characteristics of the system that depend on both the driving point and the measurement point. This observation is of primary importance because the driving point is non-null only at the crack position for the higher orders, since the excitation undergone by the rotor for the n× harmonic components (with n > 1) is due to the breathing behavior of the crack (for reference, see the expressions of Equations (10) and (11)). • On the contrary, not surprisingly, the resonance frequencies and super-harmonic resonances (i.e., the fundamental critical speeds and 1/(j − n) of the critical speeds with n = 0, . . . , j − 1, and j > 1) are global quantities whose values do not depend on the measurement point. • The minimum value of the antiresonance frequencies of orders 2 and 3 is located at the crack location on the rotor system. It should be noted that this result is also valid for the nth orders with n > 3, which are not shown in this study for the sake of conciseness. • Focusing on the vicinity of the antiresonance frequencies of orders 2 and 3, increasing the distance between the crack location (i.e., the excitation points for the 2nd and 3rd orders, and the shaft element with reduced stiffness) and the measurement point leads to higher antiresonance frequency values. • The resonance-antiresonance pattern is quite similar in both the vertical and horizontal directions. This can be explained by the fact that the only difference between the vertical and horizontal responses corresponds to the small dissymmetry in stiffness due to the breathing crack mechanism and the structural properties of the rotor supports of the rotor system under study. It should be noted that this commentary is based on the complete results of the different cases treated, which have been analyzed by the author but are not illustrated by figures in the study for the sake of brevity. • The resonance-antiresonance pattern is quite similar for cases where the crack size is different but the crack position is the same. It should be noted that this commentary is based on additional studies conducted by the author but not illustrated in this study. More precisely, the size of the crack only leads to a local stiffness reduction on the cracked element of the rotor system: increasing the crack size decreases the local stiffness, inducing a small decrease in the resonance and antiresonance frequencies of orders 2 and 3 and an overall increase in the amplitudes. In other words, there is a very slight shift to the left of the resonance-antiresonance patterns shown in Figures 6 and 7 when the crack size increases. This result is very interesting because it allows us to conclude that the resonance-antiresonance pattern of higher orders is more sensitive to the crack position than to the crack size. • Whatever the size of the crack, monitoring the antiresonance frequencies of higher orders appears to be a robust indicator of the damage location.
• Contrary to the classical analysis of linear systems [44], the resonance-antiresonance behavior of the 2× and 3× harmonic responses where the excitation point and the measurement point coincide is not trivial. Indeed, there is no evidence of alternating resonances and antiresonances associated with the fundamental critical speeds or with 1/(j − n) (with n = 0, . . . , j − 1, j > 1) of the critical speeds.
Conclusions
The present work studied the primary characteristics of the nonlinear responses resulting from the introduction of a transverse breathing crack into a rotor system. Although the 2× and 3× harmonic components of the system's response can serve as target observations for the monitoring system, this study also demonstrated that the antiresonance behavior of higher orders can be considered as one common and robust indicator to detect the presence of damage and the location of a small crack. This paper highlighted that using and tracking the nonlinear signature has many advantages for structural health monitoring in rotordynamics and for detecting and identifying damage at an early stage during operational conditions. In addition, this work points to several interesting perspectives for future research, as described in the following: • It was observed through this numerical study that the location of the crack corresponds to the minimum values of the antiresonance frequencies of the nth order (with n > 1). However, no formal proof was obtained on the subject in the present paper. To the author's knowledge, there is no theoretical study in the field of rotordynamics on the sensitivity of the antiresonance frequencies of the nth order to structural changes. Additionally, no theoretical study has yet demonstrated that the damage location can be effectively found by tracking the antiresonances of the nth order. It would therefore be interesting to conduct theoretical work in this direction in order to generalize the idea. It can also be pointed out that there is very little research aimed at extending the general higher-order nonlinear analysis of FRF to the detection and damage assessment of general structural systems with breathing cracks [48,49]. Solid theoretical work on the influences of breathing cracks on the higher-order responses of nonlinear structural systems is of interest due to the strong potential for applications of HOFRF (higher-order vibrational responses) to damage identification and assessment in real structural systems (rotating machines).
• It would be interesting to conduct experiments to validate the relevance of the approach proposed to detect crack locations by identifying the positioning of the antiresonance frequencies of higher orders. In a more practical context, where there are only a limited number of sensors and therefore only a limited number of measurement points on the rotor, the question arises as to how to optimally position these sensors to locate the damage. If increasing the distance between the location of the crack and the measurement point leads to an increase in the value of the higher-order antiresonance frequencies, then this behavior seems to favor the possibility of robust defect detection in a practical case (for example, by successive adaptation of the placement of the sensors). This deserves to be validated by extensive experimental studies. • Many faults that reduce the lifetime of rotating machinery exist, and they significantly affect the dynamics of rotor systems. It would be interesting to better understand the possibility of damage detection for rotor systems with features such as misalignment, bows, and asymmetric shafts. These faults can also generate nonlinear responses, so the resonance-antiresonance pattern of higher orders should therefore be used with caution to avoid irrelevant damage detection on complex industrial rotating machinery. • Even if multi-crack detection in the case of beam-like structures using the antiresonance loci of HOFRFs has been previously investigated by Chomette [43], this problem remains completely open and deserves further study for robust and reliable detection of multiple small cracks in rotating systems.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Conflicts of Interest:
The author declares no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
|
2022-11-25T16:33:24.545Z
|
2022-11-22T00:00:00.000
|
{
"year": 2022,
"sha1": "b19efd4a43e7e6d7c83870e6d9ce03f96a3c75f5",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/12/23/11904/pdf?version=1669617366",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5c0f20492bc7483fa8b4e23639377d119773fe39",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
}
|
244527268
|
pes2o/s2orc
|
v3-fos-license
|
High-resolution ALMA observations of V4046 Sgr: a circumbinary disc with a thin ring
The nearby V4046 Sgr spectroscopic binary hosts a gas-rich disc known for its wide cavity and dusty ring. We present high resolution ($\sim$20 mas or 1.4 au) ALMA observations of the 1.3mm continuum of V4046 Sgr which, combined with SPHERE--IRDIS polarised images and a well-sampled spectral energy distribution (SED), allow us to propose a physical model using radiative transfer (RT) predictions. The ALMA data reveal a thin ring at a radius of 13.15$\pm$0.42 au (Ring13), with a radial width of 2.46$\pm$0.56 au. Ring13 is surrounded by a $\sim$10 au-wide gap, and it is flanked by a mm-bright outer ring (Ring24) with a sharp inner edge at 24 au. Between 25 and $\sim$35 au the brightness of Ring24 is relatively flat and then breaks into a steep tail that reaches out to $\sim$60 au. In addition, central emission is detected close to the star which we interpret as a tight circumbinary ring made of dust grains with a lower size limit of 0.8 mm at 1.1 au. In order to reproduce the SED, the model also requires an inner ring at $\sim$5 au (Ring5) composed mainly of small dust grains, hiding under the IRDIS coronagraph, and surrounding the inner circumbinary disc. The surprisingly thin Ring13 is nonetheless roughly 10 times wider than its expected vertical extent. The strong near-far disc asymmetry at 1.65 $\mu$m points at a very forward-scattering phase function and requires grain radii of no less than 0.4 $\mu$m.
INTRODUCTION
Recent observations of young circumstellar discs have transformed the current knowledge on planet formation. Among the main recent findings of resolved observations of discs is the discovery of substructures in the form of gaps, rings, cavities, and spirals, to name a few (see Andrews 2020, and references therein). However, the focus of resolved imaging with the Atacama Large Millimeter/submillimeter Array (ALMA) or with the current generation of high-contrast cameras has mainly been towards the brighter sources (e.g., Garufi et al. 2017). It is to address this bias that the "Discs Around T Tauri Stars with SPHERE" (DARTTS-S) programme collected differential polarization imaging (DPI) data with the Spectro-Polarimeter High-contrast Exoplanet REsearch (SPHERE; Beuzit et al. 2019) for a total of 29 solar-type stars (Avenhaus et al. 2018; Garufi et al. 2020). The sample is not biased towards exceptionally bright and large discs. The DARTTS observations revealed diverse structures and morphologies in the scattering surface of these discs. This article on V4046 Sagittarii (Sgr) is the first instalment of a companion programme, the DARTTS survey with ALMA (DARTTS-A), which will present millimetre observations of nine protoplanetary discs previously imaged in polarised scattered light in DARTTS-S.
A parametric model that simultaneously fits the NIR scattered light and millimetre continuum emission links the observations to the structures in the underlying dust and gas density distributions, and sheds light on the complex processes that shaped them. In this paper, we present new ALMA observations of the continuum emission at 1.3 mm of V4046 Sgr with unprecedented angular resolution in this source, which reveal an inner ring in the disc. We also reproduce these observations with a 3D parametric model that fits all available data, including the polarized scattered light SPHERE image, the spectral energy distribution (SED), as well as the new high definition 1.3 mm continuum map. Section 2 describes the available observations that we aim to model. Section 3 describes the structural parameters of our model. A description of our parameter space exploration can be found in Appendix A. The results of our modelling are presented in Section 4, including a discussion of the main findings. Finally, in Section 5 we present our conclusions.
ALMA
New ALMA observations of V4046 Sgr were obtained in 2017 as part of the Cycle 5 program 2017.1.01167.S (PI: S. Perez). The observations acquired simultaneously the 1.3 mm continuum and the J = 2–1 line of ¹²CO (i.e. with a band 6 211-275 GHz correlator setup). A log of the observations is shown in Table 1. The ALMA array was in its C43-8/9 configuration, with baselines ranging from 92 to 13894 m, which translate into a synthesized beam of 0.062″ × 0.055″ in natural weights. Here we focus on the continuum observations only. Francis & van der Marel (2020) included a subset of this dataset, corresponding to the C43-8 configuration, as part of a large sample of transition discs, with a focus on the statistical properties of the inner discs.
Image synthesis of the ALMA continuum was performed with the uvmem package (Casassus et al. 2006; Cárcamo et al. 2018), which fits a non-parametric model image to the data by comparing the observed and model visibilities, V°_k and V^m_k, using a least-squares figure of merit L = χ² + λ S, with χ² = Σ_k ω_k |V°_k − V^m_k|², where ω_k are the visibility weights and λ is a dimensionless parameter that controls the relative importance of the regularization term S.
The chosen regularization term for this case was the standard image entropy, S = Σ_i I_i ln(I_i / M), where M is the default pixel intensity value and is set to 10⁻³ times the theoretical noise of the dirty map (as inferred from the visibility weights ω_k). Here we set λ = 0.01. Similar applications of uvmem in the context of protoplanetary discs can be found, for example, in Casassus et al. (2013, 2018, 2019a), Pérez et al. (2019) and in Pérez et al. (2020) using long baseline data. An advantage of uvmem compared to more traditional imaging strategies, such as provided by the tclean task in CASA, is that the effective angular resolution of the model image is ∼3 times finer than the natural-weights clean beam (Cárcamo et al. 2018), giving us an approximate uvmem resolution of ∼0.021″ × 0.018″. This angular resolution is comparable to uniform or super-uniform weights in tclean, but it preserves the natural-weights point-source sensitivity. The RMS noise of the uvmem image is 26 µJy per resolution beam, at a frequency of 237 GHz.
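For illustration, the figure of merit described above reduces to a few lines of Python. This is a minimal sketch, not the actual uvmem implementation: the dense Fourier operator standing in for the degridded FFT, and all data arrays, are toy placeholders.

```python
import numpy as np

def model_visibilities(image, fourier_op):
    """Toy stand-in: uvmem obtains model visibilities from a degridded
    FFT of the model image; here a precomputed linear operator maps
    image pixels to visibilities."""
    return fourier_op @ image.ravel()

def figure_of_merit(image, v_obs, weights, fourier_op, lam, M=1e-3):
    """L = chi^2 + lambda * S, with an image-entropy regularizer.

    chi^2 = sum_k w_k |V_obs,k - V_mod,k|^2
    S     = sum_i I_i * ln(I_i / M), M being a default pixel intensity.
    """
    v_mod = model_visibilities(image, fourier_op)
    chi2 = np.sum(weights * np.abs(v_obs - v_mod) ** 2)
    I = np.clip(image.ravel(), 1e-30, None)  # entropy requires I > 0
    S = np.sum(I * np.log(I / M))
    return chi2 + lam * S

# Example with random placeholder data
rng = np.random.default_rng(0)
npix, nvis = 16 * 16, 200
A = rng.normal(size=(nvis, npix)) + 1j * rng.normal(size=(nvis, npix))
img = rng.uniform(0.0, 1.0, size=(16, 16))
vobs = A @ img.ravel() + 0.1 * rng.normal(size=nvis)
w = np.ones(nvis)
print(figure_of_merit(img, vobs, w, A, lam=0.01))
```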
The resulting uvmem image shown in the top right panel of Fig. 1 reveals new substructure of the disc, in the form of two rings of large dust grains with a broad gap between them, i.e. Ring13 at ∼13 au and Ring24 starting at ∼24 au (see below for a precise estimation). The wide and bright Ring24 reaches its peak intensity at ∼30 au, continues relatively flat and then breaks at ∼35 au into a steeper tail. While this is the first observation of Ring13, Ruíz-Rodríguez et al. (2019) anticipated its existence, as their ALMA continuum image showed a distinct excess between ∼10 and 17 au.
Ring13 is surprisingly narrow and seems to be off-centred relative to the Gaia stellar position, at the origin of coordinates in Fig. 1. We determined the centre of the ring and its orientation using the MPolarMaps package described in Casassus et al. (2021), which minimises the dispersion of the intensity radial profiles between two given radii and returns the optimal values for the disc position angle (PA), inclination, and disc centre. In an initial optimization we focus on the orientation of Ring13, and choose a radial range from 6 to 18 au, which covers Ring13 but excludes Ring24. The resulting disc orientation is set at a PA of 257.31±0.03 deg (east of north), with an inclination of 147.04±0.02 deg, and the optimal ring centre is at Δα = −4 ± 0.02 mas, Δδ = 13 ± 0.05 mas relative to the Gaia position of the stars. The centre of the cavity and the nominal stellar position are coincident within the pointing accuracy of ALMA, which is ∼5 mas for the signal-to-noise ratio of our image (calculated using the ALMA Technical Handbook).
In a second optimization of the disc orientation, this time aiming for Ring24 with a radial domain from 20 au to 70 au (fully including Ring24 and excluding Ring13), we obtained a PA of 256.86±0.02 deg, with an inclination of 146.08±0.01 deg and a centre at Δα = 2 ± 0.02 mas, Δδ = 12 ± 0.02 mas relative to the stars. Both Ring13 and Ring24 thus share a very similar orientation and centre, given the errors and the pointing accuracy. However, there are some hints of a somewhat different orientation in the azimuthal profiles of the ring radii, which may nonetheless be accounted for by the joint effect of all these small differences in disc orientation. This is summarised in Fig. 2a, which characterises the radial position and width of Ring13 as a function of azimuth by using radial Gaussian fits at each azimuth. The radial centroids for both disc orientations overlap within the errors, but there is a systematic trend in the difference between the two. It may be that this small difference reflects a finite intrinsic eccentricity of one or both rings. Deeper imaging is required to progress on this question. On average, we obtained a radial FWHM for Ring13 of 2.83±0.50 au, and a stellocentric radius of 13.15±0.42 au (see Fig. 2b). As the uvmem model image has an approximate beam of ∼0.021″ × 0.018″ (or ∼1.4 au at 71.48 pc), Ring13 is resolved. After subtraction of the uvmem beam, the ring width is ∼2.46±0.56 au.
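The per-azimuth characterisation can be illustrated with the following sketch, which assumes a polar-deprojected intensity map is already in hand; the MPolarMaps internals (deprojection and orientation optimisation) are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(r, amp, r0, sigma):
    return amp * np.exp(-0.5 * ((r - r0) / sigma) ** 2)

def ring_centroids(polar_map, radii):
    """Fit a radial Gaussian at each azimuth of a polar-deprojected
    intensity map (shape: n_azimuth x n_radius); return centroid radii
    and FWHMs. Assumes the radial range covers only the ring of interest."""
    centroids, fwhms = [], []
    for profile in polar_map:
        p0 = [profile.max(), radii[np.argmax(profile)], 1.0]
        try:
            popt, _ = curve_fit(gauss, radii, profile, p0=p0)
            centroids.append(popt[1])
            fwhms.append(2.355 * abs(popt[2]))
        except RuntimeError:
            centroids.append(np.nan)
            fwhms.append(np.nan)
    return np.array(centroids), np.array(fwhms)

# Synthetic test: a 13.15 au ring with a 2.8 au FWHM
radii = np.linspace(6.0, 18.0, 120)
n_az = 90
polar = np.array([gauss(radii, 1.0, 13.15, 2.8 / 2.355) for _ in range(n_az)])
r0, w = ring_centroids(polar, radii)
print(np.nanmean(r0), np.nanmean(w))  # ~13.15, ~2.8
```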
Interestingly, the ALMA image also detects 1.3 mm continuum emission near the stellar positions (see the inset in Fig. 1). Since this central emission is larger than the angular resolution, it probably stems from thermal emission from large dust grains rather than directly from the stars. The peak intensity of this dust structure is at an estimated distance of only 0.012″ ± 0.002″, or ∼0.85±0.14 au, from the binary system and, approximating its shape as a Gaussian ellipse, it has a mean FWHM of ∼1.43 au.
VLT/SPHERE-IRDIS
V4046 Sgr was observed in DPI mode with SPHERE-IRDIS on March 13, 2016 (see Avenhaus et al. 2018, for details). Here we use a new reduction of the H-band data produced with the IRDAP pipeline (van Holstein et al. 2020), which can separate stellar and instrumental polarization. The polarized signal is consistent with the previous image in Avenhaus et al. (2018). The degree of linear polarization of the central and unresolved signal in V4046 Sgr is only 0.13%, with a systematic uncertainty of 0.05% due to time-varying atmospheric conditions during the exposures. The angle of polarization is aligned with the disc major axis, as expected given that the target has an extinction of A_V = 0.0 (McJunkin et al. 2016) and the entire polarization is dominated by circumstellar rather than interstellar material.
The scattered light image in the top left panel of Fig. 1 also shows a double ring structure in the micron-sized dust distribution. The observed morphology presents an inner cavity ∼10 au in radius and two rings located at 14.10±0.01 au, coincident with Ring13, and 24.62±0.08 au, coincident with the inner wall of Ring24, with a small gap between them at ∼20 au (Ruíz-Rodríguez et al. 2019). Two other important features present in the image are the near-far brightness asymmetry, and the shadows projected on the disc by the close binary system as the stars eclipse each other, discovered by D'Orazi et al. (2019).
The binary phase reported by D'Orazi et al. (2019) for the scattered light observation corresponds to a PA of 265 deg, east of north. Using this measurement, the binary phase at the time of the ALMA observations corresponds to a PA of ∼80 deg.
Spectral energy distribution
The observed SED was collected from data in the literature (Helou & Walker 1988; Hutchinson et al. 1990; Jensen & Mathieu 1997; Høg et al. 2000; Kharchenko 2001; Cutri et al. 2003; Murakami et al. 2007; Ofek 2008; Ishihara et al. 2010; Cutri et al. 2012), available online. We also used archival Spitzer IRS spectroscopic data available in the CASSIS database (Lebouteiller et al. 2015). The data are displayed in Fig. 3 along with the resulting SED of the radiative transfer model presented in the next section (more on the resulting SED in Sec. 4). The SED exhibits the dip near 10 µm characteristic of transition discs, as described by Rosenfeld et al. (2013). Jensen & Mathieu (1997) concluded that these data matched those of an extended circumbinary disc truncated at ∼0.2 au, as the interior would also be expected to be cleared by dynamical effects of the central binary (Artymowicz & Lubow 1994).
PARAMETRIC RADIATIVE TRANSFER MODEL
The multi-frequency data can be interpreted in terms of a physical structure using radiative transfer predictions, for which we used the RADMC-3D package (version 2.0; Dullemond et al. 2012). The general framework of the parametric model that we developed is similar to that in Casassus et al. (2018) for DoAr 44, and the initial model values were inspired by those in Rosenfeld et al. (2013), Ruíz-Rodríguez et al. (2019) and Qi et al. (2019). A high-resolution radiative transfer model that reproduces multi-frequency imaging and the SED is a solution to a highly degenerate problem, so a full parameter exploration requires a level of computation and time that exceeds our capabilities and the scope of this paper. Consequently, our approach was to find through trial and error a set of values for the parameters that closely fit the available data, and then to improve this fit by implementing one-dimensional least-squares optimizations for some key parameters (more on this in Sec. 4). The final structure of the parametric model is summarised in Fig. 4, where we show in the top panel the surface density profiles for the gas and dust grain populations, and in the lower panel the respective aspect ratio profiles.
General setup
The stars were modelled using two Kurucz photosphere models (Kurucz 1979; Castelli et al. 1997), with T_eff,1 = 4350 K, R*,1 = 1.064 R⊙, M*,1 = 0.90 M⊙ and T_eff,2 = 4060 K, R*,2 = 1.033 R⊙, M*,2 = 0.85 M⊙, respectively, and with an accretion rate of log(Ṁ/(M⊙ yr⁻¹)) = −9.3 in both cases to include the excess UV due to stellar accretion (Donati et al. 2011). The stars were placed at a mutual separation of 0.041 au, so that their centre of mass coincides with the origin of the model grid. Reproducing the radial and vertical structure of the V4046 Sgr disc turned out to be challenging. We built the model in terms of the gas distribution, and with two main dust populations: large grains with radii from 0.3 µm to 10 mm that are vertically settled and dominate the total dust mass, and a population of smaller grains with radii ranging from 0.3 to 1.5 µm that are uniformly mixed with the gas and reach higher regions above the mid-plane.
We take a three-dimensional model in a cylindrical reference frame with coordinates (r, φ, z). The inner radius of the model grid was set to 0.1 au, and the outer radius to 100 au, which is large enough for the dust disc to be undetectable. We set the values of the inclination and disc position angle to the same as obtained from the ALMA observation in Section 2, such that the model has an inclination of i = 147.04 deg and a PA of 257.31 deg.
Radial structure: gas & small dust grains
Given the cylindrical coordinates (r, φ, z), the gas density distribution follows ρ_gas(r, z) = [Σ_gas(r) / (√(2π) H(r))] exp(−z² / 2H(r)²), where H(r) is the scale height profile and Σ_gas(r) is the gas surface density profile. Although both ALMA and SPHERE-IRDIS images display two-ringed morphologies, we propose a three-ringed structure plus an inner disc to reproduce the observations. This choice is motivated by the major improvement it brings to the fit of the SED and the polarised image (more on this in Sec. 4 and Appendix A2). We separate the gas disc into four individual regions: an inner disc with a power-law profile and three rings named Ring5, Ring13, and Ring24, as they are located at 5, 13, and 24 au respectively. The combined gas surface density profile is then the sum of these four components, Σ_gas(r) = Σ_inner(r) + Σ_R5(r) + Σ_R13(r) + Σ_R24(r). First, the inner disc model follows a power-law function defined by Σ_inner(r) = Σ_c (r / r_c)^(−γ), where r_c is a characteristic radius and γ is the surface density power-law index. We used r_c = 16 au, Σ_c = 1.3 × 10⁻⁴ g cm⁻² and a fixed γ = 1, as it is a typical value for discs (Andrews et al. 2009, 2010). The gas in our model extends from R_in = 0.2 au outwards, consistent with the inner edge radius inferred from the SED data (see Appendix A2).
Thirdly, for Ring24 we used the same power-law as for the inner disc, scaled by an empirically obtained factor f(r) and by w(r), a function that allows us to model a smoother inner edge of the outer ring: Σ_R24(r) = Σ_inner(r) w(r) f(r), with f(r) = 1.0 × 10⁵ for r > 18 au and zero for smaller radii, and where R_in and R_peak respectively mark the inner edge and the location of maximum density of the outer ring. We used R_in,gas = 18 au and R_peak,gas = 26.4 au. Finally, the total dust-to-gas mass ratio is taken to be ζ = 0.047 (as in Rosenfeld et al. 2013). The small dust grains are assumed to only make up a fraction f_sd = 1% of the total dust mass. As small dust is typically tightly coupled to the gas dynamics, its density profile is expected to follow the gas density. The density of small dust can then be calculated as ρ_sd(r, z) = f_sd ζ ρ_gas(r, z).
Radial structure: large dust grains
Since the large dust grains are less coupled to the gas, their distribution has some important differences that require a special parameterisation, such as a larger inner cavity, a larger gap between Ring13 and Ring24, and a break in the outer ring. We only included a low density of large grains within Ring5, just underneath the detection limit of the ALMA observation, as it does not show any visible signature. The surface density profile of the large dust grains is then defined by the sum of its three components, Σ_ld(r) = Σ_R5,ld(r) + Σ_R13,ld(r) + Σ_R24,ld(r). For Ring5 and Ring13, we chose Gaussian profiles parameterized with centroid radii R_R5,ld = 5.2 au and R_R13,ld = 13.22 au, ring widths of σ_R5,ld = 0.1 au and σ_R13,ld = 0.85 au, and normalizations Σ₀,R5,ld = 1.3×10⁻⁴ g cm⁻² and Σ₀,R13,ld = 2.3 g cm⁻². For Ring24 we used a similar profile as for the gas (a power-law function). The surface density for large dust grains in the outer ring is thus given by Σ_R24,ld(r) = Σ_inner(r) w(r) f_ld(r), where for the smoothing factor w(r) we used R_in,ld = 24.2 au and R_peak,ld = R_peak,gas, resulting in an inner wall of Ring24 at larger radii for the large dust but a peak at the same location as that of the small dust. In an effort to recreate the break seen in the outer ring, we used a piecewise-constant scaling factor, with f_ld = 1.8 × 10⁵ for 24.6 < r < 27.9 au, 8.4 × 10⁴ for 27.9 < r < 35.3 au, and 7.1 × 10⁵ for 35.3 < r < 64 au. The final density of large dust can then be calculated as ρ_ld(r, z) = [Σ_ld(r) / (√(2π) H_ld(r))] exp(−z² / 2H_ld(r)²). Following the observation, this profile is truncated at 63 au.
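A minimal sketch of the large-grain surface density profile follows, using the parameter values quoted above; the smoothing function w(r) here is a simplified linear stand-in, as its exact functional form is not reproduced in this text.

```python
import numpy as np

def sigma_powerlaw(r, sigma_c=1.3e-4, r_c=16.0, gamma=1.0):
    """Inner-disc power law: Sigma_c * (r / r_c)**(-gamma)."""
    return sigma_c * (r / r_c) ** (-gamma)

def gaussian_ring(r, sigma0, r0, width):
    """Gaussian ring of normalisation sigma0, centroid r0, width sigma."""
    return sigma0 * np.exp(-0.5 * ((r - r0) / width) ** 2)

def w_smooth(r, r_in, r_peak):
    """Placeholder smooth inner edge: 0 at r_in, rising to 1 at r_peak."""
    return np.clip((r - r_in) / (r_peak - r_in), 0.0, 1.0)

def f_ld(r):
    """Piecewise factor recreating the Ring24 break (values as quoted)."""
    return np.select(
        [(r >= 24.6) & (r < 27.9),
         (r >= 27.9) & (r < 35.3),
         (r >= 35.3) & (r < 64.0)],
        [1.8e5, 8.4e4, 7.1e5], default=0.0)

r = np.linspace(0.2, 100.0, 2000)
sigma_ld = (gaussian_ring(r, 1.3e-4, 5.2, 0.1)      # Ring5, large grains
            + gaussian_ring(r, 2.3, 13.22, 0.85)    # Ring13
            + sigma_powerlaw(r) * w_smooth(r, 24.2, 26.4) * f_ld(r))  # Ring24
sigma_ld[r > 63.0] = 0.0                             # truncation at 63 au
```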
Reproducing the central emission
For the purpose of reproducing the central emission in the ALMA image, we introduce a third, special dust population of larger grains with a distribution tightly confined to the stellar vicinity. This dust is distributed only in a very close-in Gaussian ring parameterized by a radius R = 1.1 au, width σ = 0.4 au and normalization Σ₀ = 58.8 g cm⁻², and it is composed of grains with radii ranging from 0.8 to 10 mm, the same upper limit as the large dust population but depleted of small grains. With this distinctive size range we avoid creating a NIR excess in the SED, and we can be consistent with previous mass estimates (see Appendix A1 and Sec. 4).
Vertical structure
The parametric scale height profiles for the gas and for each dust population are H_i(r) = φ_i H₀ (r/r₀)^ψ, where H₀ is the scale height at r = r₀, ψ is the flaring index and φ_i is a scaling factor (in the range 0−1) that mimics dust settling. In hydrostatic equilibrium, dust diffusion and settling are expected to balance each other, leading to a settling factor of φ = √(δ_d / (δ_d + St)) (Dubrulle et al. 1995). Here, δ_d is a dimensionless parameter that informs about the level of diffusivity (it is typically assumed to be similar to the level of turbulence for the particle sizes regarded here; Youdin & Lithwick 2007). The Stokes number, St = (π/2) a ρ_mat / Σ_gas, summarises the dynamical behaviour of a particle of radius a in a given environment (where ρ_mat represents a dust particle's material density). By definition, the gas has no settling, and the settling of the small dust grains is negligible, so that φ_gas = φ_sd = 1. As we do not know the level of diffusivity in the disc (this will be a topic of interest in Section 4), nor the exact value of Σ_gas, we infer the scaling factor for large dust, φ_ld, from the width of Ring13. In the radial profile this ring is observed to be two to three times wider in the gas-tracing NIR than in the fluxes received from larger grains by ALMA. We assume that the same ratio holds in the vertical direction (due to the settling of larger grains towards the mid-plane of the disc), leading to φ_ld = 0.4. This is analogous to assuming equal radial and vertical turbulent diffusion. The resulting vertical structure is summarised in Fig. 4, which shows the aspect ratio profile h(r) = H(r)/r.
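The settling prescription reduces to a few lines; in the sketch below the flaring index ψ is an assumed value, since only H₀ at r₀ = 18 au is quoted in the text.

```python
import numpy as np

def scale_height(r, h0=0.89, r0=18.0, psi=1.2, phi=1.0):
    """H_i(r) = phi_i * H0 * (r / r0)**psi. Only H0 at r0 = 18 au is
    quoted in the text; the flaring index psi here is assumed."""
    return phi * h0 * (r / r0) ** psi

def settling_factor(delta_d, St):
    """phi = sqrt(delta_d / (delta_d + St)) (Dubrulle et al. 1995)."""
    return np.sqrt(delta_d / (delta_d + St))

# A NIR-to-mm width ratio of ~2.5 corresponds to phi_ld ~ 0.4,
# e.g. for delta_d ~ 0.019 and St ~ 0.1:
print(settling_factor(0.019, 0.1))     # ~0.4
print(scale_height(13.15, phi=0.4))    # settled large-grain H near Ring13
```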
Image synthesis and SED computation
To compute the simulated images we calculated the dust opacities using the bhmie code provided in the RADMC-3D package. The two main populations are taken to be composed of a mix of 60% silicate, 20% graphite and 20% ice. In contrast, for the special population of larger grains, as the ring is extremely close to the binary system, the dust will not bear any ice, so we used a composition mix of 70% silicate and 30% graphite.
For the reproduction of the ALMA observation, we create an image using ray-tracing preceded by a Monte Carlo run, which gives the simulated image at 1250 µm, the rest wavelength of the real observation. This image is then convolved with a Gaussian kernel matching the uvmem beam of 0.021″ × 0.018″ to synthesize the final image displayed in Fig. 1. The SED is computed in a similar manner by taking the spectra at 200 wavelengths between 0.1 and 2000 µm.
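The beam convolution step can be illustrated as follows, assuming astropy for the Gaussian kernel; the pixel scale and beam position angle are placeholder values, not quantities taken from the reduction.

```python
import numpy as np
from astropy.convolution import Gaussian2DKernel, convolve_fft

# Convolve a model image with the uvmem beam (0.021" x 0.018" FWHM),
# assuming a pixel scale of 3 mas; the beam PA is a placeholder.
pix_mas = 3.0
sig_maj = 21.0 / 2.355 / pix_mas   # FWHM -> sigma, in pixels
sig_min = 18.0 / 2.355 / pix_mas
kernel = Gaussian2DKernel(x_stddev=sig_min, y_stddev=sig_maj, theta=0.0)

model_image = np.zeros((256, 256))
model_image[128, 128] = 1.0        # toy input: a point source
convolved = convolve_fft(model_image, kernel)
```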
In order to reproduce the H-band image, we take a different approach. As the observed asymmetry between the near side and the far side of the disc in the DPI image is suggestive of a strongly forward-scattering phase function, we used much larger grains than typically used in the RT modelling of such NIR data (e.g., Casassus et al. 2018). For the computation of this particular image, we implemented a different grain size distribution, where we centred a Gaussian at a = 0.4 µm with σ_a = 0.12 µm (i.e. smeared out by 30%), and distributed the dust over 20 bins within the range a ± σ_a. This distribution applies only to the generation of the NIR image and not to the ALMA image or the SED. To produce this image we performed a linear combination of the two orthogonal linear polarizations Q and U, following Avenhaus et al. (2017), which gives an unbiased estimate of the polarized intensity image. The simulated DPI image at 1.65 µm in Fig. 1 was obtained with the scattering matrix calculated by the makeopac.py script provided in the RADMC-3D package.
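A sketch of the azimuthal-Stokes combination follows; note that sign conventions for Q_phi differ between authors, so the signs below are one common choice rather than necessarily the one used in this reduction.

```python
import numpy as np

def q_phi(Q, U, x0, y0):
    """Azimuthal Stokes Q_phi from Q and U maps (cf. Avenhaus et al.
    2017). Follows Q_phi = -Q cos(2*phi) - U sin(2*phi), with phi the
    azimuth measured around the star at pixel (x0, y0); other works
    flip the overall sign."""
    ny, nx = Q.shape
    y, x = np.mgrid[0:ny, 0:nx]
    phi = np.arctan2(x - x0, y - y0)
    return -Q * np.cos(2 * phi) - U * np.sin(2 * phi)

# Toy usage with random maps
rng = np.random.default_rng(2)
Q, U = rng.normal(size=(64, 64)), rng.normal(size=(64, 64))
print(q_phi(Q, U, 32.0, 32.0).shape)
```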
Finally, as mentioned before, the model presented here gives a solution to a highly degenerate problem, and the RT calculation for each set of parameters is very intensive in computational power and time, so an MCMC optimization or similar methods of parameter exploration are infeasible to carry out. Nevertheless, as a way to improve the fit and obtain a rough measure of the accuracy of our model, we made one-dimensional explorations of the parameter space and found uncertainties for some relevant parameters that will be useful for the discussion (see Appendix A1). We estimate uncertainties for the scale height at r₀ = 18 au of H₀ = 0.89 ± 0.01 au, and for the width of the gas in Ring13 of w_g = 5.30 ± 0.27 au.
MODEL RESULTS AND DISCUSSION
The observed SED reveals a small near-infrared (NIR) excess of 0.9±3.7 % (Francis & van der Marel 2020); this low emission would primarily emanate from micron-sized dust grains at the hot inner dust wall of a low-mass inner disc. On the other hand, the mm-bright central emission suggests a massive inner dust ring. The faint near-IR excess contrasts with the bright mm emission, and could point to a lack of micron-sized dust in the central ring, either due to efficient dust growth or because the ring has an inner radius well beyond the sublimation radius. In order to reproduce the low NIR and the simultaneously bright central mm emission, we extended the radius of the inner cavity to significantly exceed both the sublimation radius and the zone that is expected to be cleared by dynamical binary-disc interaction (see Appendix A2). A possible explanation for this wider cavity is the presence of an additional truncation effect, such as an unseen companion planet in the very inner part of the disc (Francis & van der Marel 2020). At the same time, the central emission is well reproduced in the ALMA image with the inclusion of a Gaussian ring at 1.1 au. This inner ring produces a low near-IR excess only if it consists of dust grains larger than ∼0.3 mm (see Appendix A1). We implement 0.8 mm instead as the lower limit for the dust population that composes this feature, given that this predicts a dust mass of 0.012 M⊕, which is close to that obtained by Francis & van der Marel (2020) (0.013±0.002 M⊕). They converted mm-flux into mass using the standard opacity value of κ = 10 cm² g⁻¹ at 1000 GHz with an opacity power-law index of β = 1.0, while our mass estimates are extracted from the RT model. The decision to include Ring5 in the model relies on the fact that the SED needs a thin dust ring, made mainly of small dust grains, at a radius of ∼5 au to obtain a proper fit between 6 and 300 µm (see Appendix A2). The introduced Ring5 is not visible in the simulated image at 1.65 µm, where it hides under the artificial coronagraph, nor in the 1.3 mm continuum simulated image, as its predicted peak intensity is around two times the noise in the ALMA image (∼1×10⁻⁷ Jy beam⁻¹). This gives us an upper limit for the total millimetre-sized dust mass present in Ring5 of ∼2×10⁻⁵ M⊕. The depletion of large dust grains in Ring5 is consistent with efficient dust trapping in Ring13. A zone of radially increasing gas pressure can entirely filter out large dust grains from the inner regions, while smaller grains are able to overcome this barrier due to their strong frictional coupling to the gas accretion flow (studied in the context of planetary gaps; Rice et al. 2006; Zhu et al. 2012; Weber et al. 2018). Still, the strong depletion of large particles demands that dust growth within Ring5 is extremely inefficient or limited by fragmentation. Otherwise, the small grains present should coagulate to sizes detectable in the ALMA observation in Fig. 1 (Drążkowska et al. 2019).
As the radial profiles obtained from the simulated images of the model closely resemble those deduced from the observations (Fig. 5), we can assume that the model provides a plausible approximation of the disc structure, including the dimensions of Ring13. The FWHM of the gas and micron-sized dust in Ring13 in the RT model corresponds to w_g = 5.30 ± 0.27 au, with a radius of 14.9 au and a scale height FWHM of 0.63 au. Meanwhile, the millimetre-sized dust in Ring13 has a FWHM of 2.00 au, a radius of 13.22 au, and a scale height FWHM of 0.25 au (∼2.355 × H_ld(R₁₃)). The total dust mass of Ring13 would be about 0.7 M⊕. The model predictions for the millimetre-sized dust population in Ring13 are close to the measurements, with only a 1% difference in the centroid location of the ring, and a 19% difference in the width estimation. Given that the scale height FWHM of the large grains in Ring13 is 0.25 au, and that the width of Ring13 from the ALMA observations is 2.46±0.56 au (see Sec. 2.1), we conclude that the large dust ring is 10.0±1.6 times more extended radially than vertically. Looking at the rest of the disc, our model reproduces the observations of Ring24, with peak intensity at ∼30 au and the break at ∼36 au. The whole disc contains a total dust mass of ∼48 M⊕.
Even though the radial spread of large dust grains in Ring13 appears to be quite thin, its width in comparison to the underlying gas profile speaks for the presence of considerable turbulent diffusion. Following a ring analysis similar to that in Dullemond et al. (2018), we find that the ratio between the dimensionless diffusion parameter, δ_d, and the dimensionless Stokes number, St (which parameterizes the dynamical behaviour of a grain), is roughly δ_d/St ≈ 0.1. The observed signal is expected to be dominated by grains of size a ≈ 0.02 cm. The RT model, together with the dust-to-gas ratio of 0.05, prescribes a gas density of Σ_g ≈ 0.5 g cm⁻² at the location of Ring13. With these values, the relevant Stokes number is approximately St ≈ 0.1. This yields an estimate for the level of diffusivity of δ_d ≈ 0.01. It further provides a value for the level of turbulent viscosity in Ring13, α_turb ≈ 0.01, assuming the level of turbulence to be equal to the level of diffusion (Youdin & Lithwick 2007). We note that an observation of molecular line broadening has found no evidence for turbulent contributions, suggesting α_turb < 0.01 (Flaherty et al. 2020). The value inferred from our model is just within this limit. By our definition of the gas surface density profile, the value inferred for the level of turbulence is linearly proportional to the local dust-to-gas ratio. Lower values than the chosen ratio of 0.05 would therefore lead to an equally lower level of turbulence in this assessment. While the exact value of α_turb is not well constrained, a certain level of turbulence is required to explain the radial spread of the resolved Ring13.
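These back-of-the-envelope numbers can be reproduced as follows; the grain material density is an assumed value, as it is not quoted in the text.

```python
import numpy as np

# Order-of-magnitude numbers from the ring-width analysis
# (cf. Dullemond et al. 2018). rho_mat is an assumption.
a = 0.02          # grain size [cm]
rho_mat = 1.6     # grain material density [g cm^-3] (assumed)
sigma_gas = 0.5   # gas surface density at Ring13 [g cm^-2]

St = np.pi * a * rho_mat / (2.0 * sigma_gas)     # Epstein drag, midplane
delta_d = 0.1 * St                               # from delta_d / St ~ 0.1
print(f"St ~ {St:.2f}, delta_d ~ {delta_d:.3f}")  # St ~ 0.1, delta_d ~ 0.01
```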
The visible asymmetry in the SPHERE observations is reproduced using relatively large grains, ∼0.4 µm, as smaller grains did not result in such strong forward scattering. As Stolker et al. (2016) state, the strong forward scattering present in the observation may indicate that the dust grains at the disc surface are relatively large, suggesting that the disc is depleted of very small grains. Alternatively, it may suggest that grains are not spherical, as assumed in the calculation of opacities with Mie theory.
Another interesting feature of the simulated 1.65 µm image is that the model accurately shows the shadows described by D'Orazi et al. (2019) that are present in the SPHERE-IRDIS image. In contrast, there are no hints of radio decrements along Ring13 or in Ring24, in either the ALMA observations or in the simulated 1.3 mm continuum image, that would match the shadows. As noted by Casassus et al. (2019b), the diffusion of thermal radiation from the disc smooths out the decrements seen in scattered light, and in this case it is likely that the disc cooling time-scale is much longer than that of the variations in the illumination pattern.
The general observed structure may point to the existence of planet-disc interactions within this system, where giant planets deplete their orbits of gas and dust material. A possible planetary configuration in this scenario is the presence of two giant planets in the disc: one planet between the central binary and Ring13, and one planet between Ring13 and Ring24. As Ruíz-Rodríguez et al. (2019) suggest, the putative planet between Ring13 and Ring24 may be a giant planet with a mass within the range of 0.3−1.5 M_Jup. This idea is supported by a dedicated study (Weber et al. submitted) which qualitatively reproduces the observations of this system with a hydrodynamical simulation including several giant planets.
The expected age of >20 Myr of V4046 Sgr suggests that its gas-rich disc is unconventionally old in comparison to typical circumstellar examples (e.g. Fedele et al. 2010; Williams & Cieza 2011). The dispersal of such gas discs is typically assumed to be set by photoevaporation (Alexander et al. 2006; Gorti & Hollenbach 2009). While the dynamical origin of the disc's longevity is not the subject of the present study, we note that its occurrence around such a close binary might not be coincidental. Alexander (2012) predicted that disc lifetimes should show a sharp increase around binaries separated by 0.3 − 1.0 au. It remains to be seen whether a trend towards longer disc lifetimes in compact multiple-star systems (as recently proposed by Ronco et al. 2021) turns out to be prevalent.
CONCLUSIONS
We present new ALMA 1.3 mm continuum imaging of V4046 Sgr, a well-known circumbinary disc, at unprecedented resolution (∼0.021″ × 0.018″), in which new features become visible. Together with the analysis of a SPHERE-IRDIS polarized image and a well-sampled SED, we aim to reproduce the observations with radiative transfer modelling, looking for a way to explain the data in terms of a physical model. The key conclusions of this analysis are as follows.
(i) The central emission in the ALMA image suggests the existence of an inner ring of dust grains larger than 0.8 mm. Our interpretation agrees with the mass estimation of this feature made by Francis & van der Marel (2020), with a mass of 0.012 M⊕.
(ii) Our parametric model, which accounts for the SED of the system, predicts the presence of an inner ring at ∼5 au, mainly consisting of small dust grains. This additional ring lies under the coronagraph of the scattered light image and is too faint to be detected by the ALMA observation. The depletion of large dust in this ring is consistent with efficient dust trapping at larger radii, as can be expected in Ring13.
(iii) The narrow ring in the 1.3 mm continuum has a radius of 13.15±0.42 au and an estimated width of 2.46±0.56 au. The location of this ring is coincident with the inner ring observed in the scattered light image. From our RT modelling we predict that this ring contains around 0.7 M⊕ of millimetre-sized grains. Using the parametric model scale-height FWHM for the large grains (0.25 au at 13.15 au), we find that the ring width is roughly 10 times its estimated height.
(iv) The 1.3 mm outer ring, which starts at ∼24 au and has its peak intensity at ∼30 au, presents a visible break in the surface brightness at ∼36 au.
(v) While we cannot obtain an exact value for α_turb, the resolved radial width of Ring13 speaks for the presence of a considerable level of turbulent viscosity.
(vi) We interpret the asymmetry observed with SPHERE-IRDIS at 1.65 µm as due to strong forward-scattering, which implies that the dust population is depleted of grains smaller than ∼0.4 µm.
A1 Parameter space partial exploration
As a way to compute a measure of goodness of fit and to try to quantify the model uncertainties, we performed a partial exploration of the parameter space.
We explore χ² space in the vicinity of the model values (obtained by trial and error, see Sec. 3) for the scale height H₀ at r₀, the width of the gas component of Ring13, w_g13, and around the lower limit of the grain sizes in the central blob. The parameter space exploration is shown in Fig. A1. Our model fits the SED and two images, so the total χ² value for a given model is the sum of the χ² values of each of the three fits.
For H₀ and w_g13, we find that the values of the parameters in our model are at a minimum of each one-dimensional χ² space; the uncertainties (up and down) are those that correspond to χ² = χ²_min + 1, where χ²_min is the local minimum value. We approximated the vicinity of the local minimum in these 1D cuts with a quadratic fit.
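A minimal sketch of this procedure, using a parabola fit to a synthetic one-dimensional χ² cut:

```python
import numpy as np

def delta_chi2_uncertainty(param_values, chi2_values):
    """Fit a parabola to a 1D chi^2 cut and return the best value and
    the symmetric uncertainty where chi^2 = chi2_min + 1."""
    a, b, c = np.polyfit(param_values, chi2_values, 2)
    p_best = -b / (2 * a)
    # chi2(p) = chi2_min + a * (p - p_best)**2  =>  delta_p = 1 / sqrt(a)
    return p_best, 1.0 / np.sqrt(a)

# Toy example: a chi^2 cut with minimum at H0 = 0.89 au
h0 = np.linspace(0.85, 0.93, 9)
chi2 = 50.0 + ((h0 - 0.89) / 0.01) ** 2
print(delta_chi2_uncertainty(h0, chi2))  # (0.89, 0.01)
```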
For the lower grain size limit, we found that, although our chosen value seems to minimise χ², the shape of the curve is more suggestive of a boundary condition: there is a threshold around 300 µm below which the SED starts to deviate strongly from the data, i.e. the point at which the predicted NIR excess becomes significantly larger than the observed one.
A2 Comparison between different models
A compact inner hole in the dust distribution around a binary can typically be produced by sublimation of solids or by dynamical clearing by the central stars. The sublimation radius of the system is expected to be at ∼0.05 au (R_sub = 0.07 √(L*/L⊙) au; Francis & van der Marel 2020), and the edge of the zone cleared by dynamical interactions between the near-circular binary and the circumbinary disc is estimated to be at 0.085 au (R ≈ 2.08 a, with a the binary separation; Artymowicz & Lubow 1994). On the other hand, for V4046 Sgr, Jensen & Mathieu (1997) needed to implement a cavity out to 0.2 au to fit the SED flux around the silicate feature at 10 µm. We find that the latter value significantly improves the SED fit when compared to a disc extending inwards to the radius predicted by dynamical clearing (see Fig. A2); therefore, the inner radius of the disc in our best-fitting model lies at 0.2 au.
The final structure of our proposed disc also includes additional features that are necessary to fit both the SED and the images. These features are a cut-off of the inner density accumulation of large dust grains outside of 1.1 au (to fit the central emission in the ALMA image), and the thin Gaussian ring made mainly of small dust grains at 5.2 au (Ring5, demanded by the mid-infrared excess in the SED). The first one is visible in the ALMA image, so we are forced to include it, while Ring5 is not detected in the ALMA continuum and was included because its contribution to the 10 µm flux is required by the SED. Fig. A2 works as a summary, as it shows a comparison between our best-fitting model and five models with changes to their structures: one without Ring5, one without the inner large-dust disc, one without both of those structures, another with an inner edge of the disc at 0.085 au, and a last one with an inner edge of the disc at 0.05 au. All these models diverge from the SED data somewhere in the 6-300 µm range.
Figure A2. Comparison between the SEDs of the best-fitting model versus five other models with differences in the dust structure. The dotted lines represent the models with different inner radius, while the dashed lines represent the models without Ring5 or the inner large-dust disc.
Prevalence and correlates of food insecurity among U.S. college students: a multi-institutional study
Background College students may be vulnerable to food insecurity due to limited financial resources, decreased buying power of federal aid, and rising costs of tuition, housing, and food. This study assessed the prevalence of food insecurity and its sociodemographic, health, academic, and food pantry correlates among first-year college students in the United States. Methods A cross-sectional study was conducted among first-year students (n = 855) across eight U.S. universities. Food security status was assessed using the U.S. Department of Agriculture Adult Food Security Survey Module. Cohen’s Perceived Stress Scale, Pittsburgh Sleep Quality Index, and Eating Attitudes Test-26 were used to assess perceived stress, sleep quality, and disordered eating behaviors, respectively. Participants self-reported their grade point average (GPA) and completed questions related to meal plan enrollment and utilization of on-campus food pantries. Results Of participating students, 19% were food-insecure, and an additional 25.3% were at risk of food insecurity. Students who identified as a racial minority, lived off-campus, received a Pell grant, reported a parental education of high school or less, and did not participate in a meal plan were more likely to be food-insecure. Multivariate logistic regression models adjusted for sociodemographic characteristics and meal plan enrollment indicated that food-insecure students had significantly higher odds of poor sleep quality (OR = 2.32, 95% CI: 1.43–3.76), high stress (OR = 4.65, 95% CI: 2.66–8.11), disordered eating behaviors (OR = 2.49, 95% CI: 1.20–4.90), and a GPA < 3.0 (OR = 1.91, 95% CI: 1.19–3.07) compared to food-secure students. Finally, while half of the students (56.4%) with an on-campus pantry were aware of its existence, only 22.2% of food-insecure students endorsed utilizing the pantry for food acquisition. Conclusions Food insecurity among first-year college students is highly prevalent and has implications for academic performance and health outcomes. Higher education institutions should screen for food insecurity and implement policy and programmatic initiatives to promote a healthier college experience. Campus food pantries may be useful as short-term relief; however, their limited use by students suggests the need for additional solutions with a rights-based approach to food insecurity. Trial Registration Retrospectively registered on ClinicalTrials.gov, NCT02941497.
Background
Today nearly 70% of high school graduates directly transition to post-secondary education in pursuit of a college degree [1]. Despite this ostensibly accessible system of higher education, the cost of attending college greatly exceeds the financial means of most students [2]. Major cuts in state support for public colleges have precipitated a rise in the price of attending a public college, a rise that has outpaced growth in median income [2,3]. Federal support through student aid and tax credits has done little to compensate [2] and, although financing through student loans is nearly ubiquitous, students are not always able to secure adequate support through loans, or deliberately choose not to out of fear of accruing excess debt [4]. Thus, transitioning to college might be more difficult than many college students anticipated [5]. The increased financial burden that students encounter may impact their spending priorities. Students often have to prioritize their available budget for rent, tuition, and utilities, while using the remaining insufficient balance for food, which increases their risk of food insecurity [6]. While there is a consensus that pursuing a university degree is an important determinant of social capital and health [7], experiences with food insecurity undermine the socioeconomic agenda of post-secondary education.
Food insecurity is defined as the limited or uncertain access to nutritionally adequate, safe, and acceptable foods that can be obtained in socially acceptable ways [8]. Experiences with food insecurity can refer to running out of food and being unable to afford more; having anxiety about affording meals, or eating a poor-quality diet as a result of limited financial ability [8]. The United States Department of Agriculture (USDA) classifies individuals on a continuum with respect to food security status. Those with high food security do not experience any issues stemming from consistent access to adequate food items. Marginally food-secure individuals experience anxiety over food sufficiency but are still able to maintain access to desired foods. Individuals with low food security experience reduced quality, variety, and desirability of their dietary choices but with little or no indication of a reduction in food intake. Finally, individuals who experience very low food security demonstrate multiple indications of disrupted eating patterns and reduced food intake [8].
First-year college students are uniquely susceptible to food insecurity as they are in a period of transition into their new-found autonomy [5], while also learning how to cope with an environment away from home [5]. Many of these students experience considerable difficulty in managing a variety of tasks that they are unaccustomed to, including managing their finances [9]. Added to this challenge is the diminished social support resulting from prolonged emotional and physical separation from their family and friends [10], the effects of which may jeopardize normal eating patterns. First-year college students may also have poor nutrition knowledge, limited earning potential, and lack of budgeting skills and resources required for healthy food preparation [11][12][13]. Additionally, they may experience higher rates of weight gain and poor eating behaviors compared to older students [14]. For these reasons, the first year of college has been described as a 'critical developmental window' for preventing weight gain [15], a weight gain that is paradoxically associated with food insecurity [16].
An increasing number of studies have drawn attention to the high rates of food insecurity experiences on college campuses in the United States [17]. In a recent systematic review [17], the average student food insecurity rate in the U.S. was found to be 32.9%, ranging from 14.1% at an urban university in Alabama [18] to 59.0% at a rural university in Oregon [19]. The pervasiveness of campus-based food pantries is also a potential indicator that food insecurity is a salient problem at post-secondary institutions [20]. Across studies, post-secondary students who report food insecurity are more likely to identify as a racial minority [21], be financially independent, have an annual income < $15,000, live off-campus with roommates [19], receive a Pell grant [21], be employed while in school [19], and have low self-efficacy for cooking nutritious meals [18] and low financial and food literacy skills [18,22].
Even if student food insecurity is only experienced during the time required to earn a degree, limited access to nutritious foods can precipitate poor health behaviors and increased risks of chronic disease over time. Compared with food-secure students, food-insecure students eat fewer fruits, vegetables, and legumes [23], consume more processed meals in order to afford enough food [13], have lower odds of consuming breakfast and home-cooked meals [24], and are less physically active [25]. Consequently, prolonged exposure to food insecurity may contribute to the development of obesity [16] and associated co-morbidities such as hypertension, diabetes, and cardiovascular diseases [26,27]. Food insecurity also appears to be related to poor mental health and academic performance. Indeed, it has been posited that food-insecure students endorse increased rates of depression and anxiety [24,28], decreased ability to concentrate [29], and lower grade point averages than their counterparts [29]. Thus, food insecurity can lead to sub-optimal health and lower academic achievement, undermining the goals of tertiary education.
The extent to which first-year college students are at risk of food insecurity remains to be characterized, as research related to food insecurity among this population is currently limited [24,28,30]. Unlike the present study that included students from eight geographically diverse institutions and utilized on-site anthropometric and survey assessments, previous studies were limited to small samples from a single institution and reliance on self-reported data collection methods. The present study also provides a rare glimpse of the use and awareness of campus-based food pantries, one of the fastest growing movements to combat food insecurity on university campuses.
To address these gaps in the literature, the present study aimed to: (i) identify and describe the prevalence of food insecurity, (ii) assess the awareness and use of campus food pantries, and (iii) examine the differences in health, academic, and sociodemographic characteristics by food security status of first-year college students from eight U.S. universities. Our overall research question was: is food insecurity related to health and academic outcomes in U.S. first-year college students? We hypothesized that food-insecure students would have poorer mental and physical health outcomes, and lower academic performance, compared to food-secure students. Findings from this project will support the development of evidence-based campus initiatives and policies to address student hunger and financial challenges.
Study design
Data were acquired during the project development phase of a USDA-funded, multi-state, prospective health promotion study, Get FRUVED. Participants included first-year students (n = 855) from eight U.S. universities (University of Florida, University of Maine, University of Tennessee, Auburn University, South Dakota State University, Kansas State University, Syracuse University, and West Virginia University). These universities were members of an established multi-state research team (NC1193). Assessments were conducted at each university during fall 2015 and late spring 2016 academic semesters by trained research assistants. To reflect on food insecurity experienced during the students' first year of college, data from the second assessment point were utilized for this investigation. The University of Tennessee Institutional Review Board reviewed and provided ethical approval for all study activities at West Virginia University, South Dakota State University, University of Maine, Syracuse University and the University of Tennessee. The Institutional Review Boards at the University of Florida, Auburn University, and Kansas State University reviewed and approved the study for their respective campuses. Participants provided written informed consent prior to completing the assessment procedures.
Participant recruitment and enrollment
Recruitment of first-year students occurred by campus-wide announcements and advertising through e-mails, orientation events, social media, and campus informational booths. To be eligible, participants had to report eating less than 2 cups of fruits and/or less than 3 cups of vegetables per day, as measured by the National Cancer Institute's screener [31], and having at least one additional risk factor for weight gain during the college years. The risk factors included any of the following: have a body mass index (BMI) ≥ 25 kg/m², be a first-generation college student, have a parent who is overweight or obese, identify as a racial minority, or be of a low-income background [32]. These eligibility criteria were selected in accordance with the objectives of the larger study, which was to improve fruit and vegetable intake and other health behaviors among college students. After providing consent, participants completed on-site anthropometric measurements and surveys administered through a secure web-based format.
Food insecurity
The prevalence of food insecurity over the last 12 months was assessed using the 10-item validated USDA Adult Food Security Survey Module (AFSSM) [33]. The AFSSM measures several conditions and behaviors that are characteristic of food insecurity, including anxiety over food supply, reduced quality and quantity of food consumed, and meal skipping due to lack of financial resources to obtain food. According to the Guide to Measuring Food Security [34], the number of affirmative responses was summed to obtain a raw score ranging from 0 to 10. Students were then designated to one of four food security categories: high food security (i.e., no food access problems, defined as a raw food security score of 0), marginal food security (i.e., anxiety over food supply, defined as a raw score of 1–2), low food security (i.e., reduced diet quality and variety, defined as a raw score of 3–5), or very low food security (i.e., multiple indications of disrupted eating patterns and reduced food intake, defined as a raw score of 6–10). For analysis, food security status was dichotomized into food-secure (high food security or marginal food security status) and food-insecure (low food security or very low food security status) in accordance with the U.S. Department of Agriculture (USDA) Economic Research Service (ERS) [8].
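For illustration, the scoring and dichotomization described above reduce to a simple mapping; this is a sketch of the USDA convention as summarized here, not code from the study.

```python
def afssm_category(raw_score):
    """Map a USDA AFSSM raw score (0-10 affirmative responses) to the
    four food security categories used in this study."""
    if raw_score == 0:
        return "high food security"
    if 1 <= raw_score <= 2:
        return "marginal food security"
    if 3 <= raw_score <= 5:
        return "low food security"
    return "very low food security"      # raw scores 6-10

def is_food_insecure(raw_score):
    """Dichotomization used for analysis (USDA ERS convention)."""
    return raw_score >= 3

print(afssm_category(4), is_food_insecure(4))  # low food security, True
```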
Anthropometry
Anthropometric measurements (weight, height, and waist circumference) for study participants were conducted by trained research assistants using a standardized protocol and calibrated equipment. Participants were weighed on a digital scale (Tanita Scale SECA 874) to the nearest 0.1 kg while wearing minimal clothing. Standing height was measured using a portable stadiometer (SECA 213) to the nearest 0.1 cm. BMI was calculated by dividing weight in kilograms by the square of height in meters (kg/m²). Waist circumference was measured at the midpoint between the lowest palpable rib and the top of the iliac crest and was recorded to the nearest 0.1 cm. Height, weight, and waist circumference measurements were taken twice, and measurements within a pre-specified margin of error were averaged.
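The BMI computation and the averaging of duplicate measurements can be sketched as follows; the tolerance value is a placeholder, as the pre-specified margin of error is not quoted in the text.

```python
def bmi(weight_kg, height_cm):
    """Body mass index: weight (kg) divided by height (m) squared."""
    h_m = height_cm / 100.0
    return weight_kg / h_m ** 2

def average_if_close(m1, m2, tol):
    """Average duplicate measurements when they agree within a
    pre-specified margin of error (tol, same units as m1 and m2)."""
    if abs(m1 - m2) <= tol:
        return (m1 + m2) / 2.0
    raise ValueError("measurements disagree; re-measure")

print(round(bmi(average_if_close(70.0, 70.2, 0.5), 172.0), 1))  # 23.7
```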
Sleep quality
Sleep quality was measured using the 19-item Pittsburgh Sleep Quality Index (PSQI) [35], a reliable and valid questionnaire designed to assess sleep quality over the past month [35,36]. The PSQI yields a total score ranging from 0 to 21 with higher scores indicating worse sleep quality. A total score greater than 5 indicates a "poor" sleeper [35].
Perceived stress
Perceived stress was measured using the 14-item Cohen's Perceived Stress Scale (PSS) [37]. The PSS measures the degree to which situations experienced during the past month are perceived as stressful. Each PSS item yields a score that ranges from 0 to 4, with 4 indicating the highest perception of stress. These item scores were summed to yield a total score ranging from 0 to 56 with higher scores indicating higher stress. Based on previous studies [38,39], a stress score of 28 or higher was classified as high stress.
Disordered eating
Disordered eating behaviors were measured using the Eating Attitudes Test-26 (EAT-26) [40], which assesses symptoms characteristic of eating disorders. Survey item scores were summed to yield a total score ranging from 0 to 78. A score of 20 or higher indicates problematic eating behaviors and high risk of disordered eating [41]. The EAT-26 is a reliable and valid instrument that correlates with clinical and psychometric variables [40,42].
Food pantry use and awareness
Students were asked to report whether a campus-based food pantry existed on their campus. Awareness of the food pantry was assessed by calculating the number of students affirming the existence of a food pantry on their campus when one was operating at the time of the assessment. Those affirming that their school had a food pantry were then asked whether they utilized the pantry to obtain food. Finally, the preference for the pantry location was assessed. The three response options included 'in the center of the campus', 'in the center of the campus and hidden', and 'on the outskirts of campus with bus access'.
Sociodemographic characteristics
Data on participants' age, sex, race/ethnicity, meal plan, parental education, place of residence, employment, university, and Pell grant status (need-based federal financial aid) were collected. Age was assessed using nine categorical options, which were then grouped into two levels (i.e., 18 years or 19 years and older) due to skewness. Place of residence was assessed with five categorical options, which were then grouped into the 'On-campus' and 'Off-campus' levels. Participants were asked whether they were enrolled in a meal plan or received a Pell grant, with responses available as 'yes' or 'no'. Mother's and father's education were assessed using five response options, which were then coded as 'some college or higher' and 'high school or less'. Participants also identified their race using seven response options, with respondents asked to select all that apply. Another question asked for self-identified ethnicity (i.e., 'Are you Hispanic or Latino?'), and the available options were 'yes', 'no', and 'I don't know/not sure'. These were then coded as one race and ethnicity variable with four levels: 'Non-Hispanic white', 'Non-Hispanic black', 'Hispanic/Latino', and 'Other/multi-racial'. Finally, GPA was reported in 0.5-point ranges from < 2.5 to 3.5-4.0.
Statistical analysis
Descriptive statistics were used to describe the prevalence of food insecurity and participants' characteristics. The chi-square test of independence was used to determine the bivariate associations of food insecurity and sociodemographic variables. Whenever the number in any cell was < 5 in a 2 × 2 contingency table, Fisher's exact test was used. The difference between food-secure and food-insecure students on health-related parameters was analyzed using independent t-tests for data that passed the normality test and Mann-Whitney U tests otherwise. To model the association between health and academic outcomes (i.e., BMI, perceived stress, disordered eating behaviors, sleep quality, and self-reported GPA) and food security status, multiple logistic regression was used. These models were adjusted for variables found to be significant in the bivariate analyses (i.e., Pell grant status, parental education, place of residence, and meal plan status) and variables known to affect outcome measures (age, sex, university, and employment status) based on previous literature [6,19,43,44]. Results from these regression models were reported as odds ratios and 95% confidence intervals. All analyses were conducted using IBM SPSS Statistics for Windows, version 24 (Armonk, NY). Statistical significance was determined at P < 0.05.
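A sketch of one such adjusted model follows, using statsmodels with synthetic data; the column names are assumptions about how the study variables might be coded, not the actual dataset (the study itself used SPSS).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic binary-coded data standing in for the study variables
rng = np.random.default_rng(1)
n = 855
df = pd.DataFrame({
    "poor_sleep": rng.integers(0, 2, n),      # PSQI > 5
    "food_insecure": rng.integers(0, 2, n),   # AFSSM raw score >= 3
    "age19plus": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
    "pell_grant": rng.integers(0, 2, n),
    "meal_plan": rng.integers(0, 2, n),
    "off_campus": rng.integers(0, 2, n),
    "employed": rng.integers(0, 2, n),
})

model = smf.logit(
    "poor_sleep ~ food_insecure + age19plus + female + pell_grant"
    " + meal_plan + off_campus + employed", data=df).fit(disp=0)
odds_ratios = np.exp(model.params)   # adjusted odds ratios
ci = np.exp(model.conf_int())        # 95% confidence intervals
print(odds_ratios["food_insecure"], ci.loc["food_insecure"].values)
```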
Participant eligibility and sample size
A total of 5426 students completed eligibility surveys from all eight universities. Of these, 85.3% (n = 4630) were enrolled in one of the eight universities and were at least 18 years old. Among the 4630 students, 86.5% (n = 4007) had less than optimal fruit and vegetable consumption (< 2 cups of fruit/d and/or < 3 cups of vegetable/d), 24.3% (n = 1127) had a BMI ≥ 25 kg/m², 17.6% (n = 814) self-identified as a first-generation college student, 35.7% (n = 1651) had an overweight or obese parent, 27.4% (n = 1269) self-identified as a racial minority, and 0.8% (n = 35) were from a low-income background. These criteria resulted in 2757 students being eligible to enroll in the study.
Across the eight campuses, 1149 (41.7%) of eligible students chose to enroll in the study and completed a baseline assessment in the fall of 2015. Of these, 860 (74.8%) completed the second assessment during late spring 2016 which was utilized for this investigation. Participants who did not provide a full response to the ten USDA AFSSM questions were excluded from analyses (n = 5), leaving data from 855 students as the study sample of this investigation.
Descriptive statistics of the student sample by food security status and associations between food security status and sociodemographic characteristics are presented in Table 1. Using bivariate analysis, food security status was significantly associated with race/ethnicity (p < 0.001), Pell grant status (p < 0.001), meal plan status (p = 0.001), place of residence (p = 0.001), and mother's and father's education (p < 0.001). Specifically, the proportion of students who identified as Black or Hispanic/Latino was greater among food-insecure than food-secure students, and a greater proportion of food-insecure students reported having a parent with a high school degree or less. Findings also indicated that students residing off-campus, receiving a Pell grant, or not enrolled in a meal plan were significantly more likely to be food-insecure than their counterparts. Of note, meal plan enrollment was significantly associated with place of residence (p < 0.001). A higher proportion of students participating in a meal plan resided on-campus compared to their counterparts (92.5% versus 7.5%).
Prevalence of food insecurity
Responses to the AFSSM indicated that 692 (81.0%) students were food-secure with 476 (55.7%) having high food security and 216 (25.3%) with marginal food security. The remaining 163 (19%) students were classified as food-insecure, consisting of 103 (12.0%) with low food security and 60 (7.0%) with very low food security ( Table 2). The highest prevalence of food insecurity (low + very low food security) was observed among students attending the University of Tennessee (25.0%) while the lowest was for West Virginia University (7.1%).
Health correlates of food insecurity
Significant associations were noted when comparing food-insecure and food-secure students on health variables (Table 3). Accordingly, food-insecure students had significantly higher perceived stress (p < 0.001), disordered eating behaviors (p = 0.001), and poorer sleep quality compared to food-secure students (p < 0.001). There were no significant differences between food-insecure and food-secure students with respect to BMI and waist circumference.
Multivariate logistic regression analyses controlling for age, sex, race/ethnicity, parental education, meal plan enrollment, employment status, place of residence, and Pell grant status (Table 4) indicated that food insecurity remained associated with higher odds of high perceived stress, poor sleep quality, and disordered eating behaviors. The association of food insecurity with being overweight was not statistically significant.
Academic correlates of food insecurity
Findings revealed that food security status was significantly associated with self-reported GPA (p = 0.001) ( Table 3). A significantly higher proportion of food-secure students had a GPA in the 3.50-4.00 category (53.3% versus 38.9%), while a higher proportion of food-insecure students had a GPA in the 2.50-2.99 and < 2.50 categories compared to food-secure students (20.8% versus 13.4% and 8.2% versus 4.4%, respectively) ( Table 3). When controlling for sociodemographic characteristics (Table 4), food-insecure students had almost twice the odds of having a GPA < 3.00 compared to food-secure students (OR = 1.91, 95% CI: 1.19-3.07).
Food pantry use and awareness
To assess the students' knowledge of the food pantry as a food assistance resource on their campus, analysis of actual versus reported food pantry availability was conducted. Among the eight universities, only three had campus food pantries in operation at the time of the assessment: University of Florida, University of Maine, and Syracuse University. While most University of Florida students were aware of the existing campus food pantry (85.6%, n = 209), only a third of students attending Syracuse University (29.5%, n = 38) and the University of Maine (28.7%, n = 37) reported the existence of an on-campus food pantry.
Utilization of the food pantry was also assessed among students reporting the existence of campus food pantries in these three universities (n = 284). Results indicated that only 7.7% utilized the pantry for food acquisition (Table 5). Food pantry utilization was also significantly associated with food security status (p < 0.001). While a higher proportion of food-insecure students used the food pantry compared to food-secure students (22.2% versus 4.1%), most food-insecure students (77.8%) did not utilize the pantry for food acquisition. Lastly, most of the students
Discussion
This survey of 855 first-year students from eight U.S. universities indicated that towards the end of their first year of college, 19% were food-insecure and 7.1% reported severe food insecurity. An additional 25.3% of first-year students experienced anxiety about food shortage. Food-insecure students reported higher perceived stress, a greater prevalence of disordered eating behaviors, and poorer sleep quality compared to food-secure students, a finding that remained significant after controlling for sociodemographic correlates of food insecurity. Food security status was also associated with race/ethnicity, place of residence, Pell grant status, parental education, GPA, meal plan enrollment, and food pantry use.
The prevalence of food insecurity in the current study is markedly lower than prevalence estimates reported in previous studies of college students [19,24,28,45]. Of two studies specific to first-year college students, Bruening et al. [24] found a prevalence of 32% while Darling et al. [28] reported a prevalence of 28%. It is worth noting that not only are the sample sizes considerably smaller than that of the present study, but each is representative of a single institution. Heterogeneity in food security prevalence at the institutional or regional level may partly explain the discrepancy. Furthermore, the availability and extent of support available to prevent food insecurity among students may differ widely between schools. Another factor may be the influence of self-selection bias. As a sub-study of the larger Get FRUVED project, the present investigation was limited to students who volunteered for a multi-year study tied to health and wellness and attended a follow-up at the end of their first year in college.
Findings from this study shed light on the multifaceted impact food insecurity may have on college students' physical and mental health. Students who experienced food insecurity during their first year of college were four times more likely to have high perceived stress and two times more likely to have poor sleep quality compared to food-secure students. These findings are in line with previous results in the scientific literature. Studies among college students have linked food insecurity to poor mental health and high rates of anxiety [28] and perceived stress [25,28]. Similarly, in a longitudinal study, Heflin and colleagues [46] reported that food insecurity might be a causal or contributing factor for depression among women. With respect to sleep quality, although the association between food insecurity and sleep has not yet been examined among college students, a study of food insecurity and sleep among men and women reported similar findings [47]. Food-insecure men and women were more likely to report sleep complaints compared to their food-secure counterparts [47]. Thus, students experiencing food insecurity may frequently experience other hardships related to physical and mental health [28]. Food insecurity can further influence students' health by eliciting disordered eating behaviors. Consistent with a previous study among first-year college students [28], results from this study suggest that students who have experienced food insecurity had higher odds of disordered eating behaviors than their food-secure counterparts. However, it is worth highlighting the possible overlap between disordered eating indices and compensatory behaviors stemming directly from food insecurity. For example, routine abstinence from eating when hungry could be indicative of disordered eating or simply a food-insecure individual's coping strategy to prolong food supplies. Other studies have shown that food-insecure individuals adopt a 'feast or famine' cycle determined by food availability [48] wherein food intake is intentionally limited as resources diminish followed by overeating when food is more available [49]. Although such behaviors may not represent 'traditional' disordered eating, previous work suggests that food insecurity may precipitate binge eating behaviors in children [50]. Regardless of the underlying cause, the increased odds of disordered eating behaviors among food-insecure students indicate heightened eating-related psychological stress and possible deviations from healthy eating patterns.
Finally, while no difference was found in BMI by food security status, the observed health risks associated with food insecurity may lead to weight gain and associated co-morbidities over time [51][52][53][54].
Our results indicate that the burdens of food insecurity may translate to academic challenges. Food-insecure students were approximately two times more likely to have a GPA < 3.00 compared to food-secure students. This finding is similar to previous evaluations of GPA among food-insecure college students [29,45]. Morris et al. [45] noted a significant association between food insecurity and GPA in which students in the highest GPA range (≥ 3.00) were more food-secure than students with lower GPAs. Psychological aspects of food insecurity include fatigue, anxiety, sleep deprivation, and physical weakness [55,56], which may impair the ability to concentrate during class. Previous work has shown that student energy and ability to concentrate worsens as the food insecurity score increases [57]. Thus, the development of support systems to address food insecurity may be an additional approach for schools interested in enhancing students' academic experience. Nevertheless, self-reported GPA does not provide the full picture when examining students' success in college. Future research should consider incorporating additional metrics of academic success such as retention and on-time graduation rates.
This investigation provides insight into the relationship between food security status and students' characteristics. Significant associations were identified between food insecurity and race/ethnicity, parental education, Pell grant status, place of residence, and meal plan enrollment. Students who identified as Black or Hispanic/Latino and had a low parental education were at increased risk of food insecurity, which is consistent with national data from the general population [41] as well as findings from a large study among college students [45]. Although living off-campus and not being enrolled in a meal plan were each associated with food insecurity, these two variables are highly related as meal plan enrollment is generally required among students residing on-campus but not for those off-campus. This observation is substantiated by a significant association between meal plan enrollment and place of residence among our sample. Access to affordable food off-campus may be more limited than through campus dining halls. Food-insecure students also reported that the lack of reliable transportation hindered food access [6]. Hence, living and eating off-campus may challenge students' financial management skills more than living on-campus with a meal plan. Collectively, these characteristics can provide a framework for the development of interventions and support systems targeted to those most at risk of food insecurity.
College students who experience financial hardships or an inability to afford food may seek aid from a few available resources. The United States Department of Education distributes the Federal Pell grant, a need-based award available to low-income students for up to 12 semesters. In the present study, students receiving Pell grant awards were more likely to be food-insecure. The implications of this finding may challenge the adequacy of the buying power of Pell grants currently available for students in financial need. While the cost of tuition reached an average of $9970 in the 2017-2018 academic year [58], the maximum Pell grant awarded in 2017-2018 was $5920 [59]. In addition to the Pell grant program, the Supplemental Nutrition Assistance Program (SNAP) provides a safety net for food-insecure individuals; however, its eligibility criteria are very restrictive for university students, who generally must meet an exemption such as working at least 20 h per week, caring for dependents without access to child care, or participating in work-study programs. Lastly, meal plan enrollment alone does not appear to promote food security, as approximately 70% of food-insecure students reported having a meal plan. The term 'meal plan' traditionally encompasses a range of plans offered by the school, each based on the extent of access provided to the student. While some plans allow for unlimited access throughout the week, others are limited to one meal per day and even no meals on weekends. Clearly these limited plans would not guarantee food security, and the all-you-can-eat policy at most campus dining halls may even perpetuate the feast-famine eating cycle previously associated with binge eating and weight gain [50,54]. Thus, even students who are enrolled in a meal plan or receive federal financial help may still be vulnerable to food insecurity.
In the wake of cuts in federal and state funding and heightened food insecurity, campus food pantries have been the fastest growing form of emergency relief. Despite the recent increase in the number of food pantries [20], descriptions of students' use of this resource are limited. In the present study, only 7.7% of the student population utilized the food pantry, a finding that is comparable to our previous results of students at the University of Florida [21]. Many students refuse to use an on-campus food pantry because of the stigma attached to its use or the sense that the food pantry is not intended for them [21], as needing it may imply a personal failure. Access barriers such as limited hours, regulated frequency of use, and lack of knowledge about the logistics of its use have also been reported by students [60]. Nonetheless, while the best-funded U.S. approaches to household food insecurity are charitable food-assistance programs, food pantries cannot end hunger or provide a nutritious food supply [61]. Donated food is often unappealing and limited in key nutrients [60]. In fact, food pantry users prefer and need fresh produce, dairy products, eggs, and meat over the canned food provided in emergency food systems [62]. Collectively, to make the college experience more equitable for students, research and upstream solutions to student food poverty should go beyond the boundaries of need-based food pantries, to a broader food system, with a "rights-based approach to food security" [63].
The results of this study should be interpreted with consideration of its limitations. Sampling bias stemming from the study design may have influenced the overall food insecurity prevalence. Thus, it is important to consider when interpreting these findings that the study population is restricted to students who met the eligibility criteria for the Get FRUVED project. Nevertheless, although the prevalence of food insecurity may have been lower than in other studies of first-year college students [24,28,30], the relationship between food insecurity and sociodemographic, health, and academic parameters is similar to other reports in the literature [24,28,29]. The cross-sectional design of this study only permitted examining associations rather than establishing potential causation between food insecurity and health and academic parameters. Longitudinal and intervention studies that elucidate the mechanisms by which food security can improve health and educational outcomes are needed. Despite the anonymity of the survey, the food security questionnaire items are prone to recall and social desirability biases related to self-report and the social stigma associated with food insecurity [21,64], which may limit the validity of the results. Additionally, the food security survey items reference the past 12 months. Given that data collection occurred at the end of the spring semester (April 2016), a portion of that 12-month window included time prior to students' enrollment in college. However, consistent with other studies [24,30], we believe that capturing the experience of first-year college students is of utmost importance, as attending a university is a period when food insecurity may become an issue for those experiencing financial constraints and social pressures in their new-found autonomy [5]. Finally, although we used the USDA AFSSM to assess food insecurity in our sample, the psychometric properties of this survey among college students have not been evaluated.
Conclusion
This study provides insight into the relatively obscure area of food insecurity among first-year college students and builds upon the scant literature currently available. Findings identify important sociodemographic correlates of food insecurity, affirm observations from single universities about student hunger, and indicate that the prevalence of food insecurity is high. Our data support previous limited evidence that food-insecure students are at increased risk of adverse health and academic outcomes, the effects of which may impact student retention and health behaviors beyond the college years. If this is indeed the case, the impact would not be limited to the individual, presumably carrying over to the school, state, and national level. Our results substantiate the need for screening for food insecurity among college students and the development of evidence-based support modalities to address food insecurity. Both short-term and long-term approaches can provide an untapped opportunity to mitigate the consequences of food insecurity. These may include indexing Pell grants to tuition inflation, expanding work-study opportunities, providing full meal plan subsidies, hosting on-campus farmers' markets, expansion of the Supplemental Nutrition Assistance Program outreach, and providing university support for financial and food literacy training. Finally, this study underscores several areas in need of development to progress food security research among college students. Specifically, future prospective studies should examine the effect of food insecurity on college student retention, graduation, and health outcomes over time. Additionally, with respect to intervention work, future studies should seek to evaluate strategies aimed at addressing student food insecurity. Such progress is essential for accurately depicting the consequences of food insecurity and ultimately going beyond food security to realizing food rights.
Electric Field Suppressed Turbulence and Reduced Viscosity of Paraffin Based Crude Oil Sample
Flows through pipes, such as crude oil through pipelines, are the most common and important method of transportation of fluids. To enhance the flow output along the pipeline requires reducing viscosity and suppressing turbulence simultaneously and effectively. Unfortunately, no method is currently available to accomplish both goals simultaneously. Here we show that Electrorheology provides a universal method and efficient solution, which was confirmed by SANS and DOE pipeline testing. When a strong electric field is applied along the flow direction in a small section of pipeline, the field polarizes and aggregates the particles suspended inside the base liquid into short chains along the flow direction. Such aggregation breaks the rotational symmetry and makes the fluid viscosity anisotropic. In the directions perpendicular to the flow, the viscosity is substantially increased, effectively suppressing the turbulence. Along the flow direction, the viscosity is significantly reduced; thus the flow along the pipeline is enhanced. Recent laboratory experimental tests and field tests with a crude oil pipeline fully confirm the theoretical results. The technology consumes very little energy and will be very useful for both off-shore and on-shore crude oil production and transportation. Here we report results for three paraffin-based crude oil samples, HAW, KHU, and NAPD, from Saudi Aramco.
Introduction
Pipelines are the foundation of our liquid energy supply. Crude oil has traditionally been collected by pipelines from inland production areas. Crude oil also arrives in the U.S. by marine tankers, often moving for the final leg of that trip from a U.S. port to a refinery by pipeline. The United States consumes millions of gallons of crude oil every day. Pipelines are especially important because they permit the movement of large quantities of crude oil and product with little or no disruption to communities. Pipelines also move crude oil produced far offshore in coastal waters. Currently, hydrocarbons remain the leading energy source. As conventional light crude oil becomes less and less available, more and more heavy crude oil and off-shore crude oil are needed, and the high viscosity of these oils becomes a critical issue. Not only does heavy crude oil have a high viscosity; off-shore crude oil also has a very high viscosity because the deep-water temperature is very low, around 1.5-1.6 °C. The high viscosity makes the pressure required to pump crude oil via pipeline very high and creates many difficulties in oil extraction, too. The importance of this issue, reducing the crude oil viscosity, attracted attention more than 30 years ago. However, the current dominant methods remain heating and dilution of crude oil with gasoline or diesel. The heating method is slow and energy consuming and raises concerns about its environmental impact. Moreover, for off-shore crude oil, it is very difficult to utilize the heating or dilution methods. Based on the concepts of Electrorheology (ER), a new micro-nanotechnology to reduce the viscosity of crude oils by a strong electric field was proposed [1][2][3][4].
Compared to the heating method, this technology consumes much less energy and is very fast and, therefore, much more efficient. Since then, the technology has developed very quickly [5][6][7][8] and has been verified by experimental tests and computer simulations [9][10][11][12][13]. In this paper, we report our finding that the AOT technology (Apply Oil Technology) significantly reduces the viscosity of Saudi Aramco crude oil and increases the flow rate.
Dr. Tao's Viscosity Theory
Crude oil is a mixture of many different molecules. Gasoline, kerosene, and diesel, the liquids made of small hydrocarbon molecules, have very low viscosity. If we treat the remaining large molecules, such as paraffin particles and asphalt particles, as particles suspended in a low-viscosity base liquid made of gasoline, kerosene, and diesel, crude oil is a liquid suspension. These suspended particles are typically of nanoscale. The theory of liquid suspensions thus provides the physics basis for our new method to reduce the viscosity of crude oil. Einstein first studied a dilute liquid suspension of non-interacting uniform spheres in a base liquid of viscosity η₀ and found the effective viscosity η as follows [14-16]:
η = η₀(1 + 2.5φ), (1)
where the small parameter φ is the volume fraction of the suspended particles. Following Einstein's work, Krieger and Dougherty introduced the intrinsic viscosity [η] for particles of different shapes and generalized the result to all volume fractions [17]:
η = η₀(1 − φ/φ_m)^(−[η]φ_m), (2)
where φ_m is the maximum volume fraction allowed for packing the suspended particles. When φ is unchanged, the most widely used method to reduce the viscosity η is to reduce η₀, for example by raising the temperature. On the other hand, Equation (2) suggests another method: if we change the rheology of the suspension to increase the value of φ_m and lower the intrinsic viscosity [η], we will reduce the viscosity η. The physics is clear: the effective viscosity depends on how much freedom the suspended particles have in the suspension. A high φ_m and a low [η] mean high freedom for the suspended particles, which leads to lower dissipation of energy and lower viscosity [18]. The following three mechanisms contribute to the viscosity reduction [18,19]:
(1) Aggregate the nanoscale particles into short chains with their shapes streamlined along the flow direction.
(2) Increase the polydispersity to increase φ_m.
(3) Increase the average size of the suspended particles.
Our technology is illustrated in Figure 1. The crude oil flows from left to right along a pipe. Initially the nanoscale particles are randomly distributed and the viscosity is high (left side of the tube). When the oil passes a strong local electric field (two parallel metal meshes), the suspended particles are polarized by the electric field. The induced dipolar interaction forces the nanoscale particles to aggregate into micrometer-size short chains once they have passed the electric field. These chains have high polydispersity and large size. Most important, they are of streamlined shape with low [η] along the flow direction, because the electric field is parallel to the flow direction.
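A short numerical sketch of Equations (1) and (2) illustrates how raising φ_m and lowering [η] at a fixed volume fraction reduces the effective viscosity; the parameter values below are illustrative only and are not measured properties of the samples.

```python
def einstein(eta0, phi):
    """Einstein's dilute-limit result, Eq. (1): eta = eta0 * (1 + 2.5 * phi)."""
    return eta0 * (1.0 + 2.5 * phi)

def krieger_dougherty(eta0, phi, phi_m, intrinsic_visc=2.5):
    """Krieger-Dougherty relation, Eq. (2): eta = eta0 * (1 - phi/phi_m)^(-[eta]*phi_m)."""
    return eta0 * (1.0 - phi / phi_m) ** (-intrinsic_visc * phi_m)

eta0, phi = 5e-3, 0.30   # base-liquid viscosity (Pa*s) and particle volume fraction: illustrative
before = krieger_dougherty(eta0, phi, phi_m=0.55, intrinsic_visc=2.5)  # randomly dispersed particles
after = krieger_dougherty(eta0, phi, phi_m=0.70, intrinsic_visc=2.0)   # streamlined chains: higher phi_m, lower [eta]
print(f"effective viscosity drops from {before:.4f} to {after:.4f} Pa*s")
```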
It is also important to note that after the formation of short chains along the field direction, the situation is similar to the flow of a nematic liquid crystal with its molecular alignment parallel to the flow direction: the rotational symmetry is broken, making the viscosity of the crude oil anisotropic. Along the field direction, the viscosity is significantly reduced, while the viscosity along the directions perpendicular to the field is actually increased [20]. This fact is very important and very useful, as it not only improves the flow along the field direction but also suppresses the turbulence inside the pipeline.
Application of Dr. Tao's Viscosity Theory to Reducing Viscosity
It is clear from the above background that aggregating the nanoscale particles into short chains with their shapes streamlined along the flow direction will reduce the effective viscosity while φ remains the same. At the same time, it is important to note that after the formation of short chains along the field direction, the alignment parallel to the flow direction breaks the rotational symmetry, making the viscosity of the crude oil anisotropic. Along the field direction, the viscosity is significantly reduced, while the viscosity along the directions perpendicular to the field is actually increased. For most suspensions, this aggregation can be realized with either electric or magnetic fields. We assume that the particles have a dielectric constant ε_p different from the dielectric constant of the base liquid, ε_f.
In an electric field, the particles are thus polarized along the field direction. The induced dipole moment is estimated by p = ε_f[(ε_p − ε_f)/(ε_p + 2ε_f)]a³E_loc, where a is the particle radius and E_loc is the local electric field, which should be close to the external field in dilute cases. The dipolar interaction between two induced electric dipoles is U = p²(1 − 3cos²θ)/(ε_f r³), where r is the distance between the two dipoles and θ is the angle between the line joining them and the electric field. If this interaction is stronger than the thermal energy, the two dipoles will aggregate and align in the field direction. This requires λ = p²n/(ε_f k_B T) ≥ 1, where n is the particle number density, k_B is the Boltzmann constant, and T is the absolute temperature.
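As a symbolic sketch of this aggregation condition, one can solve λ = 1 for the field strength at which dipolar coupling just balances thermal motion; the symbol names, the Gaussian-units form of the dipole moment, and the use of sympy are our assumptions, not the paper's own derivation.

```python
import sympy as sp

E, eps_f, eps_p, a, n, kB, T = sp.symbols("E epsilon_f epsilon_p a n k_B T", positive=True)

beta = (eps_p - eps_f) / (eps_p + 2 * eps_f)   # Clausius-Mossotti factor
p = eps_f * beta * a**3 * E                    # induced dipole moment (Gaussian units, assumed form)
lam = p**2 * n / (eps_f * kB * T)              # dipolar energy scale relative to thermal energy

# Critical field where lambda = 1: below this, Brownian motion prevents aggregation.
print(sp.solve(sp.Eq(lam, 1), E))
```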
From this condition we derive a critical field E_c. If the applied electric field is weaker than E_c, thermal Brownian motion prevents the particles from aggregating together, so the applied electric field must not be lower than E_c. There is also a required pulse duration time: if the duration of the electric field is much shorter than this time, the particles do not have enough time to aggregate together [2]. Once the electric field is turned off, the aggregated chains persist for a hysteresis time before they disassemble. Our lab device is outlined in Figure 2a; it is placed in an environment chamber (Figure 2b). The chamber provides the desired, stable temperature for our tests. The crude oil sample is loaded in a cylindrical container at the top (Figure 2a), which serves as the reservoir. Underneath the reservoir there are three meshes serving as electrodes. The electrodes are connected to a low-amperage, high-voltage power supply. Using a gravity feed, the crude oil flows through the three electrodes into a long capillary tube. A beaker on a microbalance collects the crude oil below the capillary tube. The microbalance is connected to a computer, which automatically records the oil mass in the beaker as a function of time with LabVIEW software. Using this configuration, we can accurately determine the untreated flow rate. When the power supply is turned on, a strong electric field is produced along the flow direction of the crude oil, forcing the suspended particles inside to aggregate anisotropically into streamlined short chains along the flow direction (Figure 1).
In this way, the effective viscosity of the crude oil along the flow direction is reduced, while no heating, drag-reducing agents, or diluents are used. Because the Reynolds number is low, the crude oil flow inside the capillary tube is laminar, and the capillary tube serves as a viscometer: from the flow rate, we can mathematically and precisely determine the viscosity. In this experimental setup, the pressure gradient due to gravity remains constant; therefore, the flow rate increases as the viscosity is reduced. Usually, we first measure the flow rate without the electric field applied and obtain the viscosity of the untreated oil. Following the baseline test, we turn on the electric field, measure the new flow rate, and obtain the viscosity of the electric-field-treated crude oil. By adjusting the electric field strength, we can reach the optimal state to reduce the crude oil viscosity. The viscosity reduction and resultant flow rate improvements should significantly improve tariff revenue when this technology is employed commercially. Nuclear methods are widely used in soft condensed matter studies [21][22][23][24]. We also ran small-angle neutron scattering tests at NIST on the crude oil samples.
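The flow-rate-to-viscosity conversion for a gravity-fed capillary can be sketched with the Hagen-Poiseuille relation for laminar flow; the tube dimensions, oil density, and gravity head below are placeholders, not the dimensions of our apparatus.

```python
import math

def capillary_viscosity(mass_flow_rate, density, radius, length, delta_p):
    """Hagen-Poiseuille: eta = pi * R^4 * dP / (8 * L * Q), with Q the volumetric flow rate."""
    Q = mass_flow_rate / density                      # m^3/s, from the microbalance mass-vs-time slope
    return math.pi * radius**4 * delta_p / (8.0 * length * Q)

# Illustrative numbers only: 1 mm tube radius, 0.5 m tube length, 0.3 m gravity head of oil.
rho = 870.0                                           # kg/m^3
dP = rho * 9.81 * 0.3                                 # Pa, pressure drop from the gravity head
eta = capillary_viscosity(mass_flow_rate=2.0e-4, density=rho, radius=1.0e-3, length=0.5, delta_p=dP)
print(f"apparent viscosity ~ {eta * 1e3:.1f} mPa*s")
```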
Results
We conducted the tests at three different temperatures, 27 °C, 48 °C, and 66 °C, for these three samples to cover the various conditions of oil pipelines in Saudi Arabia. The results show that the AOT technology can significantly and effectively reduce the viscosity of all three samples.
Test Results for NAPD Crude Oil Sample
At 27 °C, with an electric field of 6176 V/cm, the flow rate of the NAPD sample was increased by 108.4% and the viscosity was reduced by 52.03% (Figure 3). The typical test results are shown in Figure 3. The crude oil was tested inside the environment chamber (Figure 2b) for temperature stability. Initially, no electric field was applied and the oil was allowed to flow through the capillary tube; the slope of the curve is the flow rate. Afterwards, the electric field was applied and the curve's slope jumped, indicating that the oil flow rate increased significantly and the viscosity was reduced effectively. From the flow rates, we calculated the viscosities. The test results are summarized in Table 1 and plotted in Figure 4.
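Because the gravity-driven pressure gradient is fixed and the flow is laminar (η ∝ 1/Q at fixed ΔP), the fractional viscosity reduction follows directly from the flow-rate increase; the short sketch below (function name ours) checks this relation against the reported numbers.

```python
def viscosity_reduction_pct(flow_increase_pct):
    """Viscosity reduction implied by a flow-rate increase at a fixed pressure gradient (Poiseuille flow)."""
    ratio = 1.0 + flow_increase_pct / 100.0   # Q_treated / Q_untreated
    return (1.0 - 1.0 / ratio) * 100.0

print(viscosity_reduction_pct(108.4))   # ~52.0%, matching the 52.03% reported for NAPD at 27 C
print(viscosity_reduction_pct(36.5))    # ~26.7%, close to the 26.8% reported for KHU at 66 C
```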
Test Results for KHU Crude Oil Sample
The AOT technology can significantly reduce the viscosity of the KHU crude oil and increase the flow rate; Figure 5 shows the typical test results. It is clear that after an electric field is applied, the curve slope jumps up, indicating that the viscosity is reduced significantly. From the flow rates, we calculated the viscosities. The test results are summarized in Table 2 and plotted in Figure 6.
Test Results for HAW Crude Oil Sample
HAW is a very light crude oil, and its original viscosity is quite small. However, the AOT technology can also significantly reduce the viscosity of the HAW crude oil and increase the flow rate. The test results are summarized in Table 3 and plotted in Figure 7. The applied electric field for each of these tests is listed in the three tables. The electric current used during the tests was about 200 µA, indicating that the water content inside the crude oil is moderate. In summary, the AOT viscosity reduction technology significantly reduces the viscosity of the NAPD, KHU, and HAW crude oil samples.
Conclusion
The test results fully confirm the theoretical analysis. On the other hand, in comparison with the theoretical prediction, there is still some room for improvement. This implies that in our electric field treatment, some particles were not aggregated into short chains. If we need to reduce the viscosity further, it is necessary to find an optimal range of electric field strength and an optimal application time so that almost all particles inside the crude oil aggregate into short chains. Alternatively, with new technology, we can make the electric field even stronger for a better effect [25,26].
Naturally, there is a question: how long can such reduced viscosity last after the electric-field treatment? Since the viscosity reduction is the result of short chains aggregated along the flow direction, the viscosity will return to its original value once the aggregated chains are completely disassembled. To answer this question, we conducted a number of tests. They show a low viscosity after the short chains are aligned along the flow direction, and we have found that the reduced viscosity persists for more than 12 hours. These results are understandable: the aggregated short chains make the suspension a viscoelastic fluid, and it is well known that disassembling such viscoelastic chains is very slow. On the other hand, if we deliberately shake the crude oil violently, the chains will be broken and the viscosity will return to its original value.
Figure 1: As the crude oil flow passes a strong local electric field from left to right, the suspended particles (brown dots) aggregate along the field direction after the mesh to which high voltage is applied, and the viscosity along the flow direction is reduced.
Figure 2: Device to test the crude oil samples. (2a): The gravity-fed crude oil flows through the three electrodes into a long capillary tube, which is used to measure the viscosity. (2b): The environment chamber.
Figure 3: At 27 °C, with an electric field of 6176 V/cm, the flow rate of the NAPD sample was increased by 108.4% and the viscosity was reduced by 52.03%.
Figure 4: The AOT test results for the NAPD crude oil sample.
Figure 5: At 66 °C, with an electric field of 6040 V/cm, the flow rate of the KHU sample was increased by 36.5% and the viscosity was reduced by 26.8%.
Figure 6: The AOT test results for the KHU crude oil sample.
Figure 7: The AOT test results for the HAW crude oil sample.
Table 1: Test results for the NAPD crude oil sample.
Table 2: Test results for the KHU crude oil sample.
Table 3: Test results for the HAW crude oil sample.
A Meta-analysis of University STEM Summer Bridge Program Effectiveness
University science, technology, engineering, and math (STEM) summer bridge programs provide incoming STEM university students additional course work and preparation before they begin their studies. These programs are designed to reduce attrition and increase the diversity of students pursuing STEM majors and STEM career paths. A meta-analysis of 16 STEM summer bridge programs was conducted. Results showed that program participation had a medium-sized effect on first-year overall grade point average (d = 0.34) and first-year university retention (Odds Ratio [OR] = 1.747). Although this meta-analytic research reflects a limited amount of available quantitative academic data on summer STEM bridge programs, this study nonetheless provides important quantitative inroads into much-needed research on programs’ objective effectiveness. These results articulate the importance of thoughtful experimental design and how further research might guide STEM bridge program development to increase the success and retention of matriculating STEM students.
Despite the need for data and analyses informing the effectiveness of summer bridge programs, limited empirical research is available, and much of this research is in the form of highly descriptive accounts, qualitative results, and literature reviews (Sablan, 2014;Kitchen et al., 2018) rather than systematic and quantitative evaluations of bridge program success (Gullatt and Jan, 2003). To our knowledge, no meta-analysis has been conducted on STEM bridge programs. In the current paper, we examine the objective academic impact of STEM bridge program participation to reinforce and extend other informative work such as Ashley et al.'s (2017) systematic review of STEM bridge programs' goals, student characteristics, research designs, and program success. We limit our analysis to academic outcomes associated with STEM retention and grade point average (GPA). Although we acknowledge that an array of outcomes is important and interesting to examine (e.g., motivation, STEM interest, self-efficacy), our relatively narrow focus is mostly a function of the outcomes currently examined in primary research studies. Increasing our understanding of the effectiveness of STEM bridge programs can provide insight into where future program directors might implement or improve features within their own programs to make them more effective. We also discuss ideas for future research on STEM bridge programs. For the purposes of this paper, we include the biological sciences (except majors specific to applied health science), physical sciences, mathematics, and computer science as "STEM" majors. For clarity of focus, and because primary research in these areas is limited, we have excluded consideration of social sciences such as psychology and anthropology.
STEM Bridge Programs
Increasing retention and diversity in STEM degree programs through program interventions may be an effective method to increase the number of STEM workers in the United States (President's Council of Advisors on Science and Technology, 2012). More specifically, program interventions may bolster the success of students in terms of STEM retention by supplementing high school experiences and exposing students to resources at colleges and universities designed to support student success (Zuo et al., 2018). In particular, an academically challenging high school experience, especially in math and science (National Academy of Sciences, 2010), is beneficial for STEM students to succeed in college (Benbow and Arjmand, 1990). Students from underrepresented minority groups are more likely to miss out on academically challenging high school experiences, because high schools in low-socioeconomic status areas, where students from these backgrounds are often overrepresented (Estrada et al., 2016), are less likely to offer math classes higher than algebra II, to have laboratory STEM activities and equipment, and to employ teachers well qualified to teach STEM classes (Campbell et al., 2002;Peske and Haycock, 2006). As such, increasing retention and diversity in STEM requires augmenting student understanding of academically challenging content and providing meaningful support before students enter college.
University STEM bridge programs are on-campus STEM interventions designed to increase STEM enrollment and retention (Wilson et al., 2012). STEM bridge programs provide intensive instruction in one or more STEM topics (Tsui, 2007) and expose students to realistic college expectations for STEM course work (Kezar, 2000). Bridge programs also often expose students to, and engage them in, other resources available at universities, such as tutoring, access to research opportunities, intensive advising, and mentorship programs (Maton et al., 2009). These resources and activities have multiple institutional goals, including improving the high school-to-college transition; providing a supportive campus community and climate; teaching students the importance and value of using college resources; and supporting students' diverse backgrounds, needs, and perspectives (Wheatland, 2001). In addition to these institutional goals for students, common STEM-specific bridge program goals address student skills, attitudes, and their approach to work, including raising students' confidence in their academic ability, developing problem-solving skills, increasing STEM career awareness and intentions, and augmenting math preparation (Yelamarthi and Mawasha, 2008).
STEM Content Instruction. STEM bridge programs offer course work in one or more STEM topics, though whether this is introductory-level, remedial, or more advanced STEM course work varies by program (Ashley et al., 2017). Many bridge programs have the explicit goal of filling knowledge gaps and combating the "weeding out" experience in introductory-level gateway courses (Massey, 1992), because first-year experiences in STEM critically inform students' decisions about whether to remain in or leave their STEM majors (Gainen and Willemsen, 1995).
Tutoring. Many STEM bridge programs offer individual or group tutoring sessions. Going beyond in-class instruction, tutors can answer questions and correct student mistakes in understanding, and they can otherwise provide further in-depth explanation to increase student comprehension (Dioso-Henson, 2012). Required tutoring may be beneficial to students even when they do not request it, because students often underestimate how much academic help will benefit their performance (Hodges and White, 2001).
Research Opportunities. Undergraduate research experience in STEM can involve working in applied or academic settings and with some combination of researchers, graduate and postdoctoral students, and faculty. These experiences allow students to identify, conceptualize, and execute various forms of correlational and experimental designs, as well as collect and analyze data, addressing basic science questions or real-world problems (Eagan et al., 2010). Research experiences may offer underrepresented minority students exposure to applied STEM subjects for the first time (Bauer and Bennett, 2003;Moore, 2006). The motivational, knowledge-based, and skill-based effects of obtaining research experience are significant and have been linked to greater STEM major retention (Gregerman et al., 1998), higher graduate school entrance rates, and enhanced pursuit of a STEM career (Zydney et al., 2002).
Campus Orientation. Bridge programs provide exposure to the campus as well as information on campus resources, which may foster students' sense of belonging to the university (i.e., the extent to which a student feels accepted at and fits into a college environment and major; Ostrove and Long, 2007). Campus orientation may be particularly important for first-generation college students, many of whom may need to be introduced to college not only academically, but also on informational, social, emotional, and cultural levels (McKenna and Lewis, 1986). Similarly, providing information about student organizations that may be relevant to underrepresented minority students may further promote a sense of belonging due to shared experiences, cultures, and networking opportunities (Torres, 2000).
Faculty Mentoring. Mentorship is the process by which senior professionals support and advise less-experienced students or employees on their career plans (Hill et al., 1989). Students from all backgrounds have cited poor support from STEM faculty as a major reason for leaving STEM (Seymour and Hewitt, 1997), and lack of meaningful connections with STEM professors was a major theme in a qualitative analysis of STEM student attrition (Hong and Shull, 2010). This may be due in part to a common STEM classroom culture of professors expecting most students to struggle and a certain number of students to fail (Luppino and Sander, 2015). STEM bridge programs have the potential to create an environment designed to build closer relationships with professors, who then provide social and instructional support to participants (Ashley et al., 2017;Cooper et al., 2018). In turn, such positive faculty interactions can increase student science identity and STEM graduate degree intentions (Aikens et al., 2017), and STEM retention, GPA, and self-efficacy (Christe, 2013).
Peer Mentoring and Tutoring. Many STEM bridge programs provide peer tutoring and mentoring. With peer tutoring, students receive tutoring from and give tutoring to their fellow students (Goodlad and Hirst, 1989). Outcomes of peer tutoring, such as retention of course material, often compare favorably with faculty tutoring (Moust and Schmidt, 1994). Peer mentoring can provide more immediate mentorship availability and accessibility than faculty mentorship, as well as rapport with, social connections to, and role modeling from people who have been on a similar academic journey (Budny et al., 2010). Peer mentoring in bridge programs can help incoming students develop social support networks, think critically, make informed academic choices (Brawer, 1996), and earn higher grades (Rodger and Tremblay, 2003). STEM applications in the practical setting (e.g., Stanich et al., 2018) suggest that peer mentors themselves benefit from mentoring in that they learn STEM material through teaching, given that teaching others is a form of active learning.
Bridge Program Elements to Support Underrepresented Minority Students. Many modern STEM bridge programs seek to increase the social capital and support networks of underrepresented minority students (Arendale and Lee, 2018) and create greater diversity in STEM, understanding that students from these groups are more likely to face greater barriers to college and STEM fields, both socially (Stolle-McAllister, 2011) and academically (Wilson, 2000). Stereotype threat, or the psychosocial anxiety individuals may experience when they are concerned they will be judged based on the negative stereotypes about a group with which they identify (Steele and Aronson, 1995), may be especially salient: perceptions of stereotype threat by underrepresented minority STEM students have been linked to increased attrition to non-STEM majors (Beasley and Fischer, 2012). Bridge programs may be useful in addressing stereotype threat, because they can provide opportunities to gain STEM-related mastery experiences (Hernandez et al., 2013), which research has shown predicts STEM self-efficacy (e.g., Honicke and Broadbent, 2016;Dorfman and Fortus, 2019). Programs that offer diverse peer mentors may also be impactful, because diversity across peer mentoring in multiple STEM fields predicts higher diversity and successful graduation rates for underrepresented minority STEM students (Fox et al., 2009). Finally, bridge programs that address student cultures, such as by helping them identify prosocial connections with STEM topics and the impact they could make on their larger communities, may help students successfully integrate within the bridge program and the university (Estrada et al., 2016).
Prior Research on STEM Bridge Programs
A wide range of student outcomes, both STEM-specific and more general, are evaluated within and across STEM bridge programs. Because we were most interested in relatively objective outcomes related to student performance, and due to the limits of the primary studies in this area, we did not consider attitudinal outcomes such as science motivation, science interest, and bridge program satisfaction. Rather, we focused on outcomes such as STEM major retention (e.g., Smith, 2017), STEM graduation rates (e.g., Kopec and Blair, 2014), math assessment scores (e.g., Ami, 2001), and class-specific GPAs (e.g., chemistry; Graham et al., 2016). Other outcomes considered in STEM bridge program research are general (non-STEM specific) academic outcomes, including time to graduate (e.g., Whalin et al., 2017), overall GPA (e.g., Graham et al., 2013), and university retention (e.g., Wischusen et al., 2011). Although more distal outcomes, such as STEM retention and STEM graduation rates, may be most important in evaluating whether programs are meeting the ultimate goal of increasing STEM participation in the workforce, from our review, these are among the least common outcomes reported in published work. Further, research on STEM bridge programs does not generally conform to standard experimental design requirements that augment internal validity (e.g., the random assignment of students to control vs. experimental conditions; Estrada et al., 2016) or even quasi-experimental designs comparing bridge intervention and control conditions without random assignment. Each program also has unique implementation issues, as well as a unique profile of student and institutional characteristics, further complicating a quantitative review.
As a result, few studies reported STEM-specific outcomes usable for meta-analytic purposes. Using a power analysis accounting for high levels of heterogeneity to detect a small effect size (d = 0.20), we estimated that at least 11 effects would be necessary to conduct a meta-analysis to exceed a statistical power of at least 0.70 (and 13 to exceed 0.80) to detect the meta-analytic mean (see Borenstein et al., 2011). As a result, we examine first-year GPA and university first-year retention, which were the only outcomes we considered that met the minimum threshold of 11 effects. We also limit our studies to those that report results on these outcomes for a comparable control group.
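A rough sketch of this kind of power calculation is shown below; the within-study variance formula assumes two equal-sized groups, and heterogeneity is parameterized as a ratio of between-study to within-study variance, so these are our simplifying assumptions and the exact numbers depend on them.

```python
import numpy as np
from scipy.stats import norm

def re_meta_power(k, n_per_group, delta=0.20, tau2_ratio=1.0, alpha=0.05):
    """Approximate power of a random-effects meta-analysis of k two-group studies
    (n_per_group per arm) to detect a summary standardized mean difference delta.
    tau2_ratio is the assumed between-study variance as a multiple of the typical
    within-study variance (larger values = more heterogeneity)."""
    v = 2.0 / n_per_group + delta**2 / (4.0 * n_per_group)   # variance of d in one study
    v_summary = v * (1.0 + tau2_ratio) / k                   # variance of the pooled effect
    z = delta / np.sqrt(v_summary)
    z_crit = norm.ppf(1.0 - alpha / 2.0)
    return norm.cdf(z - z_crit) + norm.cdf(-z - z_crit)

# e.g., 11 studies with ~75 participants per arm under high heterogeneity:
print(round(re_meta_power(k=11, n_per_group=75, delta=0.20, tau2_ratio=1.0), 2))
```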
First-Year Overall GPA. STEM research has linked students' early overall GPA with STEM retention (Cromley et al., 2016). For example, in an analysis of ∼1200 STEM first-generation college students, Dika and D'Amico (2016) found that first-year GPA predicted STEM retention after three semesters. In a sample of 1925 college students, first-semester GPA was a moderate predictor of whether students ultimately received a STEM degree (Crisp et al., 2009). In a study of 137 freshman engineering students, higher first-year GPA predicted whether students would be retained in engineering into their second year in the program (Burtner, 2004). Based on the previously discussed aspects of bridge programs, we expect bridge participation to positively impact first-year GPA.
Hypothesis 1: Bridge program participants will outperform control group participants on first-year university GPA.
First-Year University Retention. Although we did not find enough studies that reported first-year STEM retention rates to use in our meta-analysis, first-year university retention may be worth exploring as a criterion of program success. For example, ∼20% of a nationally representative sample of college students entering a 4-year institution as STEM majors in 2003 dropped out of college rather than switching to a non-STEM major (Chen, 2013). To the extent that a STEM bridge program can increase university retention, the program may be providing a net positive impact to students, even if they leave STEM.
Hypothesis 2: Bridge program participants will outperform control group participants on first-year university retention.
We also explored publication bias using publication type as a moderator, meaning we explored whether publication type affected the strength of the relationship between bridge program participation and student outcomes. We compared published peer-reviewed articles with unpublished dissertations and conference papers. In our literature search, we discovered that many conference papers on bridge programs were program descriptions with very few data reported; consequently, we expected that unpublished outlets-like conference proceedings and dissertations-would include smaller effects than peer-reviewed papers, which would be more likely to include significant and larger effects.
Hypothesis 3a: Effect sizes reported in studies of bridge programs published in peer-reviewed journals will tend to be larger for first-year overall GPA than the effects published in dissertations and conference papers.
Hypothesis 3b: Studies of bridge programs published in peer-reviewed journals will tend to report greater first-year university retention than those published in dissertations and conference papers.
Search Strategy
Using the PsycINFO, Academic Search Complete, Medline, and ERIC academic databases, we searched for articles with titles, subjects, abstracts, or keywords containing 1) "science," "technology," "engineering," "biology," "chemistry," "physics," "math," "mathematics," "calculus"; 2) "college," "university," "students," "higher education"; 3) "summer," "bridge"; and 4) "retention," "attrition," "GPA," "grades," "academic performance." We excluded from our searches "elementary school" and "middle school," as we were only interested in the high school-to-university transition. We also reviewed the programs referenced by Ashley et al.'s (2017) review of STEM summer bridge programs when they were not otherwise captured by our search process. Finally, we identified and contacted 17 researchers associated with a STEM bridge program that met the other inclusion requirements but for which we could not find quantitative academic data or data for a control group and requested unpublished data. After reading study abstracts, we identified 114 articles for further analysis based on our inclusion criteria.
Inclusion Criteria
Two research (H.P. & B.M.) assistants independently read the identified articles to determine whether they met the study's inclusion criteria. B.C.B. made the final determination about whether articles met the inclusion criteria in cases of discrepancy between the research assistants. Articles were examined for further coding if the program 1) took place in the summer, on-campus, before students' first year of university; 2) covered at least one STEM topic (non-STEM topics in addition to STEM topics were permissible); 3) reported at least one objective academic outcome (such as GPA or retention); and 4) reported results of a control group that was more narrowly defined than just the rest of the university (e.g., non-underrepresented minority STEM majors or STEM majors with weak academic backgrounds).
Many bridge programs failed to meet our inclusion criteria, often because they did not report results from a similar control group. This is in line with Kulik et al. (1983), who found in their meta-analysis of college programs for high-risk students that only 60 (less than 12%) of the 504 articles the authors identified met their inclusion criteria, with a substantial portion failing to provide results for control groups or lacking appropriate control groups. Other studies excluded from this analysis included those that reported only nonquantitative subjective academic outcomes, such as qualitative data gained from conducting focus groups with participants, self-reported survey data such as perceived knowledge gained in a STEM topic or greater reported interest in a STEM topic, and measures of students' satisfaction with the bridge program.
Additionally, we intentionally excluded the Meyerhoff Scholars Program (Maton et al., 2012) from our analysis. The Meyerhoff program is a comprehensive STEM program that far surpasses an intervention with a STEM bridge program as its primary element (providing intensive, ongoing support for participants throughout all 4 years of university). Although we are limited by the information provided by other publications, no other bridge program in the primary studies included here describes a comprehensive program for ongoing student support, and thus we felt that the Meyerhoff program was qualitatively different. Notably, the Meyerhoff program (see Maton et al., 2012) is extremely successful, and including it in our meta-analysis would only strengthen the findings regarding STEM bridge program effectiveness.
Coding Procedures
We identified 25 studies that met the inclusion criteria. For each qualifying article, we recorded the quantitative outcome(s). After coding all reported outcomes, we determined that only first-year university GPA and first-year university retention met our requirement of having 11 or more effect sizes to have a power of more than 0.70 (see Borenstein et al., 2011). The most common general academic outcomes found in the literature search that did not meet our minimum number of studies were 2-year and 3-year university retention (three studies each). STEM-specific outcomes were 1-year and 3-year STEM retention (six studies each). In total, 16 studies comprising 25 samples were used in the meta-analysis. Two research assistants independently coded the sample size of the bridge and control groups; the overall first-year GPA of each group, the first-year university retention rate of each group, or both; and whether the study was published in a peer-reviewed journal. In cases of discrepancy between the two research assistants (which occurred in three out of 16 cases), B.C.B. made the final determination on the appropriate coding.
We also coded several program characteristics that research suggests may be important, although we are limited by the depth and description each publication or report provided. We only counted programs as including an element if it was explicitly stated in the publication, but it is conceivable that programs contained elements not described therein. Eleven of the 16 programs incorporated some sort of tutoring arrangement through peers or the university's tutoring center, whether this was during the bridge program, after the school year began, or both. Ten programs described some sort of faculty or industry professional mentoring arrangement during the summer or afterward, though programs varied in whether these relationships were mandatory or optional. Nine programs provided students with research opportunities during the summer or afterward. Although we are limited in our analysis of these moderators due to the small number of studies that examine them, we provide a summary of individual program characteristics in Table 1 for the interested reader.
The control groups used in the research studies included in this analysis are also described in Table 1. Five programs used all other STEM or engineering students as a control, seven programs used some sort of matched sample based on high school preparation, standardized tests scores and/or demographic background, three used more specific STEM demographic groups (two of underrepresented minority STEM students and one of female STEM students), and one program used all other students enrolled in precalculus. In four of these programs, students paid some amount to attend. In the remaining 12 programs, the program covered all costs (and in some cases provided stipends).
Missing Data
For studies that reported first-year overall GPA but did not report the SD of the GPA for the samples (seven of 12 studies), we imputed the SD using a weighted average of the square root of the variances reported in the other studies in the analysis (SD = 0.73 for program participants and SD = 0.60 for the control group).
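For concreteness, a minimal R sketch of this imputation step follows; the data frame, column names, and the use of group sample sizes as weights are illustrative assumptions, since the exact weighting scheme is not spelled out above.

```r
# Hypothetical study-level data for the studies that did report an SD.
reported <- data.frame(
  sd_bridge  = c(0.70, 0.75, 0.78, 0.69, 0.71),   # illustrative values only
  sd_control = c(0.58, 0.62, 0.61, 0.59, 0.60),
  n_bridge   = c(45, 120, 80, 30, 95),
  n_control  = c(150, 400, 220, 90, 300)
)

# Weighted average of the reported SDs (here weighted by sample size),
# used to fill in the missing SDs for studies reporting only mean GPA.
imputed_sd_bridge  <- with(reported, weighted.mean(sd_bridge,  n_bridge))
imputed_sd_control <- with(reported, weighted.mean(sd_control, n_control))
```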
Analyses
All meta-analyses were between-group comparisons using random-effects models, which tend to provide more accurate results compared with their fixed-effects counterparts when study effects are heterogeneous (National Research Council, 1992;Hunter and Schmidt, 2000). Heterogeneity is a reasonable assumption in the current meta-analysis, given the wide variety of bridge programs. For the first-year overall GPA outcome, the meta-analyzed Cohen's d was calculated with a random-effects model as the standardized mean difference in bridge participants' GPA compared with the control group's GPA (i.e., positive d values indicate higher average GPA for the bridge group). For first-year university retention, the log-odds ratio of participant versus control retention was calculated as the odds that a bridge student would be retained compared with a control group student on a logarithmic scale. The log-odds ratio creates greater symmetry of the distribution of the outcome measures and centers it on 0 (Sterne et al., 2001), which makes the data more amenable to analyses. We then converted the log-odds ratio to a standard odds ratio for easier interpretability of the practical significance of findings. All analyses were conducted in R statistical software using the metafor package, a frequently used statistical package to fit fixed-, mixed-, and random-effects models to meta-analyses (Viechtbauer, 2010).
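To make the two models concrete, the sketch below shows how such analyses are typically set up with the metafor package cited above; the data frames and column names are hypothetical rather than the authors' code, and note that metafor's "SMD" measure applies a small-sample correction (Hedges' g) to the standardized mean difference.

```r
library(metafor)

# --- First-year GPA: standardized mean difference under a random-effects model ---
gpa <- escalc(measure = "SMD",
              m1i = gpa_bridge,  sd1i = sd_bridge,  n1i = n_bridge,
              m2i = gpa_control, sd2i = sd_control, n2i = n_control,
              data = gpa_studies)                    # hypothetical data frame
gpa_re <- rma(yi, vi, data = gpa, method = "REML")   # random-effects model
summary(gpa_re)

# --- First-year retention: log-odds ratio, back-transformed to an odds ratio ---
ret <- escalc(measure = "OR",
              ai = retained_bridge,  bi = lost_bridge,
              ci = retained_control, di = lost_control,
              data = retention_studies)              # hypothetical data frame
ret_re <- rma(yi, vi, data = ret, method = "REML")
predict(ret_re, transf = exp)   # pooled odds ratio and CI on the natural scale
```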
RESULTS
The 16 studies in this analysis yielded 25 different samples. Five studies were dissertations, six were conference papers, and the remaining five were published articles. Cumulatively, there were 4057 bridge program students and 26,516 control group students in this analysis. The median sample size of bridge participants was 75 (M = 122, SD = 167, interquartile range [IQR] = 30-101), and the median size of the control group was 168 (M = 967, SD = 2,051, IQR = 86-261). Many of these programs were at large public universities that had many more students deemed to be comparable to bridge program participants than the relatively few students who participated in the bridge program.
Of these studies, there were 13 first-year overall GPA effects and 19 first-year university retention effects (because several studies provided separate results for different years or iterations of their bridge program, and some provided both GPA and retention data for a single sample). Table 1 shows other descriptive information of program elements. Table 2 shows descriptive information about each study and effect sizes used in the meta-analysis. The names of the programs and universities are listed in the table, rather than the citation, similar to the approach taken by other review articles of this nature (e.g., Estrada et al., 2016;Ashley et al., 2017). Tables 3 and 4 provide the results of all the analyses described.
First-Year Overall GPA
The main effect of bridge program participation on GPA was statistically and practically significant (d = 0.34), supporting hypothesis 1. An effect size (Cohen's d; Cohen, 1988) of between 0.20 and 0.40 is frequently set as a benchmark for whether a program has made a practical impact (Lee and Munk, 2008). Bridge students generally had higher first-year overall GPAs than control group students. Qualifying these effects, as expected, there was large heterogeneity in the sample; Q_E(11) = 437.82, p < 0.0001, τ = 0.28. A retrospective power analysis using this effect size (d = 0.34) found that this analysis was appropriately powered (P = 0.99) to detect differences of this magnitude (Harrer et al., 2019). We also examined the studies in a direct manner for publication bias to address hypothesis 3a. We found that, on average, journal articles were marginally more likely to report larger positive effects for GPA outcomes than those published in conference papers and dissertations (journal M = 0.62, other publications M = 0.26; p = 0.08, 95% CI = −0.04, 0.78); however, of the studies that reported GPA, only three were published in peer-reviewed journals, meaning interpretability of this result is limited.
First-Year Retention
For first-year university retention, we examined the log-odds ratio using a random-effects model. Odds ratios compare the differences in probabilities of an event happening (in this case, first-year retention) between two groups (e.g., bridge students and control group students). The model was significant, with an odds ratio of 1.747 (p < 0.0001, 95% CI = 1.35, 2.56, CrI = 0.86, 3.57) in favor of a retained student being in the bridge group, supporting hypothesis 2. In other words, the mean odds ratio would predict that bridge students are 64% more likely (i.e., the odds ratio divided by one plus the odds ratio) to be retained than control group students. To provide further context that the odds ratio does not account for, the first-year retention base rates in these studies were moderately high (the weighted average retention rate across both groups was 76.1%), but many of the bridge groups were relatively small (the median size was 75 students), meaning that some caution should be used in extrapolating these findings. As with first-year GPA, there was evidence of heterogeneity in these studies; Q_E(18) = 39.32, p = 0.002, τ = 0.31.
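As a quick arithmetic check of the conversion described above (not an additional analysis), the reported odds ratio reproduces the 64% figure:

```r
or <- 1.747
or / (1 + or)   # ~ 0.64, matching the percentage reported above
```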
We also examined these studies for evidence of publication bias, addressing hypothesis 3b. We found that journals were marginally more likely to report positive outcomes than studies published in conference papers and dissertations (p = 0.09, 95% CI = −0.06, 0.88). However, of the studies that reported retention, only four were published in peer-reviewed journals, limiting our ability to find evidence of upward bias.
DISCUSSION
We examined the overall effectiveness of university STEM bridge programs, operationalized as participants' first-year overall GPA and first-year university retention. We found a medium-sized effect of bridge program participation on first-year overall GPA compared with a control group, as well as greater first-year retention relative to control group students. The fact that bridge program participation impacted students' retention, which college retention models generally regard as the result of academic performance (Tinto, 1999), provides evidence of a longer-term impact of the bridge program beyond just increasing GPAs. One caveat, however, is that we cannot isolate the effect of bridge program participation on student GPA and retention, because the studies included in this meta-analysis did not systematically control for student motivation, self-efficacy, interest in science, or other variables that might influence performance through random assignment. That is, there is likely selection bias associated with the quasi-experimental approaches used in the studies included in this meta-analysis, and students who participate in bridge programs may differ from those who do not in some important ways that we cannot control.
We also examined publication bias and found that findings in peer-reviewed journal articles tended to include more positive outcomes and larger effects (GPA and first-year retention) compared with conference papers and dissertations (marginally significant). This trend aligns with findings such as those in O'Boyle et al.'s (2017) management review, which found that published studies reported a ratio of supported to unsupported hypotheses that was more than twice as high as that in dissertations, presumably because peer-reviewed publications are more likely to report significant results. However, we were limited in the implications of our findings by the small number of studies published in peer-reviewed journals, meaning that further exploration of the extent of publication bias in STEM bridge program research is necessary.
Finally, we have provided descriptive information on bridge programs to give researchers and practitioners a general overview of elements of past STEM bridge programs, although it is possible that some programs used elements the authors did not describe in the publications. We found that more than half of the programs in this meta-analysis provided students with course tutoring, mentoring arrangements, and research opportunities, although the combination of services provided varied by program, as did the timing when students were offered these services (i.e., during or after the summer bridge program). These findings suggest that many, if not most, STEM bridge programs attempt to incorporate some of the elements research would suggest are most influential for STEM academic success and retention. In all three cases (tutoring, mentoring, and research opportunities), the number of programs that did not include these elements was too low to reasonably use in a quantitative analysis.
Limitations
This meta-analysis provides empirical meta-analytic summaries across all available studies meeting our inclusion criteria. We made every attempt to be comprehensive, and we can say with some confidence that the wide array of bridge program studies we meta-analyzed are representative of what is available in the literature. It was clear that the heterogeneity of STEM bridge programs and the range of outcomes they report, as well as the relatively underspecified methodologies that many studies employ, limit the ability of the current meta-analysis to yield generalizable conclusions about the effect of any future particular bridge program intervention. Given the tension between program heterogeneity and our desire to report the available evidence, our meta-analysis included only a relatively small subset of studies that met reasonable standards for research design. With a larger sample size, we would be able to test our hypotheses and examine publication bias with increased confidence (Simonsohn et al., 2014). Additional research in this area would also potentially broaden the array of outcomes beyond those examined here.
Implications for Practitioners and Program Administrators
Evaluating the effectiveness of bridge programs is a complex task. To have the strongest design, bridge program administrators should strive to ensure both internal validity (the confidence with which one can say that the results obtained from participation are the true result of the intervention) and external validity (the applicability of the bridge program in being able to provide generalizable conclusions that other bridge program directors may be able to draw upon; Gay and Airasian, 2000). In the following sections, we expand on our recommendations for program administrators and researchers examining the effectiveness of bridge programs.
Tracking Additional Outcomes. Exploring other research questions beyond those in this meta-analysis would require tracking students beyond the yearlong time frame we report here, as well as ideally tracking STEM-specific outcomes. However, some universities do not require (and in some cases do not allow) students to declare majors until a certain point in their college careers (often at the end of the second year), which warrants additional consideration in terms of exploring how to operationalize early STEM retention and STEM performance. One option for program administrators is to collect data about students' current major intentions upon matriculation and compare their intentions against their formally declared majors later in their academic careers. This approach would offer program administrators a way to account for the possibility of students' intentions changing between accepting a university's admission offer and beginning a bridge program, providing a more accurate accounting of the effect of a bridge program on retention. Tracking student engagement with the university, faculty, and peers during and after a bridge program might also allow researchers to better understand bridge students' experiences at a university, how their experiences differ from those of nonparticipants, and how bridge participation might impact student engagement (Brewer, 2019).
We also note that the majority of studies included in this meta-analysis were conducted at relatively larger, PhD-granting institutions (see Tables 1 and 2). Thus there is an opportunity to better study the effectiveness of bridge programs with a broader array of institution types. There may be some barriers to this endeavor, however. Two-year colleges may not offer specialized academic tracks, and student retention through a bachelor's degree would be difficult to track. However, these institutions could examine the effectiveness of bridge programs on STEM course work and GPA, as well as declared major if students transfer to 4-year institutions. As other researchers have discussed (e.g., the review of Latinx STEM transfer interventions by Martin et al., 2018), 2-year institutions might coordinate with 4-year institutions to track transfer student success through the bachelor's degree. This research could be particularly valuable in understanding whether bridge programs decrease transfer shock, which is when transferring students' academic performance declines at their new 4-year institutions relative to their 2-year institution performance (Hills, 1965). Transfer students in STEM majors may experience greater transfer shock than transfer students in other majors (Lakin and Elliott, 2016), highlighting the importance of a continued focus on bridging academic STEM preparation gaps. Interventions such as mandatory learning communities for transfer students might reduce attrition when students transfer to a 4-year college or university (e.g., Scott et al., 2017). Moreover, students transferring from 2-year colleges into STEM classes and majors at 4-year institutions may also face unfavorable stereotypes by both faculty and peers about the ability and success of transfer students in STEM courses (Reyes, 2011). Despite these barriers, transfer students from 2-year colleges tend to be more committed to a specific major and career path than first-year university students (Aulck and West, 2017). Bridge programs at 4-year institutions might also be designed to better support the needs of transfer students.
Bridge programs at smaller, 4-year liberal arts institutions could also be better studied. Students at these institutions do not tend to declare majors until later in their college careers, making STEM retention hard to gauge. Although traditional liberal arts colleges tend to not offer professional, vocational, or applied majors (including STEM majors such as engineering; Roche, 2010), they do tend to produce a greater percentage of graduates who eventually receive doctoral degrees in STEM fields than the percentage of graduates from larger universities (Cech, 1999). A liberal arts bridge program might be especially beneficial for students from underrepresented minority groups and students with weaker academic backgrounds, as liberal arts colleges may be able to offer STEM students a strong science and math foundation and educational environment (through smaller class sizes; Wolniak et al., 2004), although potentially at the expense of extensive research opportunities. In sum, examining the effectiveness of bridge programs for supporting success at 2-year and smaller 4-year institutions is a much-needed area of future research.
Mixed-Methods Analyses. Mixed-methods research uses one or more studies to both qualitatively and quantitatively explore the same underlying phenomenon (Leech and Onwuegbuzie, 2009). Qualitative research can enrich researchers' understanding of the impact of an intervention and uncover contextual factors that might influence student outcomes beyond just the direct effect of participating in the bridge program (Miller et al., 2020). In the context of STEM bridge programs, qualitative research on variables such as sense of belonging to one's major and science, math, or engineering identity might be able to supplement and enrich quantitative analyses such as this meta-analysis. Although constructs related to STEM attitudes such as career aspirations can be assessed quantitatively (e.g., Beier et al., 2018), qualitative data (e.g., gathered through focus groups, survey responses, qualitative analyses of interviews) can enrich our understanding of these constructs. Qualitative research can also be incorporated into the findings of existing quantitative analyses (e.g., quantitative bridge program evaluation) to capture changes in bridge program students' experiences and to assess whether program participation had a differential impact on students of different backgrounds (e.g., underrepresented minority students; see Tomasko et al., 2016).
STEM bridge program goals vary between individual programs (see Ashley et al., 2017), and research benefits when researchers precisely define their hypotheses in the context of the program's goals. For instance, researchers analyzing the impact of a bridge program goal to produce more STEM graduates should consider whether they also want to study the career intentions of these graduates, and whether these students intend to or ultimately enter a STEM field. They should also decide how to measure these goals. For instance, a program that is ultimately interested in determining whether participation was effective at increasing STEM interest (e.g., Thompson and Consi, 2008) might measure STEM career intentions, identity as a scientist, or sense of belonging to a STEM community, which might all be better predictors of students' attitudes and intentions than STEM GPA or graduation major.
Research Design Considerations.
A full review of quasi-experimental designs useful in educational environments is beyond the scope of this paper (although see Campbell and Stanley, 1967). Nonetheless, we offer some ideas most relevant to our review. First, although randomized experimental designs are generally the "gold standard" for experimental research (Rogers and Révész, 2019), students usually opt into bridge programs, making random assignment impossible and selection bias likely. Therefore, it is important to consider the factors that could impact students' self-selection into a program. For example, the cost to attend the program might play a major role in influencing students' decisions to participate. Students who feel reasonably prepared for STEM course work might be less willing to pay for a summer program, but they might have participated if the program were free or provided a stipend. If this assumption is true, the academic impact of STEM bridge programs might be understated, because bridge students would likely be initially weaker in STEM preparation than control students. There may also be group differences in student self-efficacy, STEM interest, or other psychological characteristics, depending on whether programs are free, offer stipends, or are fee based.
To determine program effectiveness while controlling for self-selection, matched sampling attempts to overcome the confounding that may occur when initial group differences are not controlled for (Campbell and Stanley, 1967) and provides an approach that is close to true experimental randomization (Stuart and Rubin, 2008), which might offer the most confidence in making conclusions about the effect of the program on student outcomes. Many studies we reviewed in our literature search compared results with a matched sample of similar students based on some operationalization of STEM preparedness, such as standardized test scores or high school performance (e.g., Gilmer, 2007;Bradford et al., 2019). Other studies used nonbridge underrepresented minority STEM students, or in the broadest cases, all other STEM students, as control group students (e.g., Kopec and Blair, 2014). Matched sampling analyses can be improved by using covariates that are not affected by a student participating in the bridge program (e.g., students' demographic backgrounds, high school preparation) to build propensity scores, which attempt to match students on these covariates and reduce bias produced by confounding variables (Powell et al., 2020).
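A minimal R sketch of the propensity-score idea follows; the data frame and variable names are hypothetical, the covariates are only examples of pre-treatment variables, and the matching step (commented out) assumes the MatchIt package rather than any code used in the primary studies.

```r
# Estimate each student's probability of participating in the bridge program
# from covariates measured before the program (hypothetical variable names).
ps_model <- glm(bridge ~ hs_gpa + sat_math + first_gen + urm,
                family = binomial, data = students)
students$pscore <- predict(ps_model, type = "response")

# A common next step is nearest-neighbor matching on the propensity score,
# e.g. with the MatchIt package (sketch only):
# library(MatchIt)
# m <- matchit(bridge ~ hs_gpa + sat_math + first_gen + urm,
#              data = students, method = "nearest", distance = "glm")
# matched <- match.data(m)
```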
Researchers might also consider increasing the internal validity of their studies by providing control group students a different treatment than the bridge program intervention (Campbell and Stanley, 1967), such as access to different classes or resources, rather than no-treatment controls. This approach would permit researchers to examine the effectiveness of different elements of the bridge program rather than the program in its entirety. Another way to increase internal validity would be to use multiple means of assessing constructs (i.e., using academic, psychological, and other STEM constructs such as career intentions) in both the treatment and control conditions pre- and post-intervention (i.e., a pretest-posttest control group design), which is considered one of the most robust approaches for quasi-experimental designs (Campbell and Stanley, 1967). Finally, time-series designs, in which data are collected at multiple time points pre- and post-intervention in order to see the impact of the intervention beyond underlying group trends (Grimshaw et al., 2000), can increase the strength of conclusions drawn about the impact of bridge program participation. Moreover, because bridge programs may be unable to increase sample sizes regardless of the outcome of any power analysis, it is important to make and report post hoc calculations to understand whether studies are powered adequately to detect expected effects.
Future Directions
Progress in bridge program research and evaluation can identify the effectiveness of a program, allowing comparisons of results against one another (in meta-analyses, within institution over time, or otherwise) and ensuring that researchers will have enough statistical power to detect significant and material effects of bridge program participation wherever those effects exist. University-specific gateway courses and class performance may be more straightforward for administrators to track, but these outcomes are among the least generalizable to other universities, which have different professors, class syllabi, and student populations. Although a discussion of classroom-level teaching practices is beyond the scope of this paper, incorporating the science of learning to design the most effective instruction methods to cover difficult STEM course work over a brief summer session is critical (National Academies of Sciences, Engineering, and Medicine, 2018). Future meta-analyses or institutional partnerships that allow for multilevel analyses across institutions could code for this natural variability (e.g., various classroom instruction styles, class syllabi, or other student characteristics) if institutions make this information available. Extending the outcomes examined in this research to include attitudes (e.g., STEM identity, belongingness, career aspirations) as well as performance outcomes would be valuable. Large-scale comparative studies could also be designed to identify which elements within the bridge program affect which outcomes.
Similarly, reporting objective academic results as well as those of a control group for relevant STEM outcomes (e.g., STEM major retention, final STEM GPA) would allow many more studies to be used in future meta-analyses, providing more robust findings on program effectiveness. If a program does not have an easily accessible reference group to serve as a control, program administrators could compare the effect of participation with a group of STEM students as similar as possible to bridge participants by coding for and incorporating pre-existing differences, such as high school GPA, incoming Advanced Placement credit in STEM classes, and quantitative standardized ACT or Scholastic Aptitude Test (SAT) test scores, in both within-study analyses and meta-analyses.
Future research could also explore underrepresented minority-focused STEM bridge programs, which comprise ∼50% of STEM bridge programs (Ashley et al., 2017). Examining whether these programs are more effective for underrepresented minority STEM students compared with more general STEM bridge programs would be valuable. Further research could also examine content differences between these two types of bridge programs. For instance, programs focused on underrepresented minority students might offer informational and social resources targeted toward the needs of this specific group of students. This line of research is especially important given the importance of inclusive STEM instruction. More generally, all STEM bridge programs should strive to define diverse students' learning outcomes using a strengths-based, or asset-based, pedagogical approach rather than one focused on students' perceived deficits (Johnson, 2019). Understanding that student-centered interventions (such as bridge programs) alone have not been enough to equalize STEM retention rates across groups, higher education researchers have identified increased institutional support as also necessary to build a culture of inclusive diversity and support the success of students who have been historically excluded from science based on their racial and ethnic backgrounds (termed "persons excluded because of their ethnicity or race," or PEERs; Asai, 2020).
Researchers should also attend to the definition of STEM relative to underrepresentation. Women major in the biological and health sciences at a significantly greater rate than they do other STEM majors (Dika and D'Amico, 2016). Similarly, students from underrepresented minority groups and female students have the highest graduation rates in biological and health fields (Lewis et al., 2009). As a result, the study of "PEMC" (physical sciences rather than any sciences, and computer science specifically instead of broader technology studies) may become the highest priority in interventions to ensure access across gender and race (Dika and D'Amico, 2016). Correspondingly, STEM bridge programs might also shift to more narrowly define their targeted STEM students. Future research on differing academic performance and rates of attrition by STEM subfield (especially regarding whether engineering and non-engineering STEM students have different intervention needs) may be useful, and many STEM bridge programs are specific to engineering students (e.g., Allen, 2001;Gleason et al., 2010). Engineering, which encompasses how scientific and engineering principles are combined and applied to solve problems (Kieran and O'Neill, 2009), is the STEM field with the most underrepresentation for both female and underrepresented minority students (Dika and D'Amico, 2016). It is distinct from other STEM majors (e.g., natural sciences) based on the extent to which students' quantitative skills and confidence in quantitative ability predict academic success (Veenstra et al., 2009). To increase student diversity, more STEM bridge programs might be designed around the predictors of success of engineering students in the future.
Finally, more research is required on student progression to graduate-level education in STEM and STEM careers as an outcome. Many researchers and policy makers discuss the importance of producing more STEM researchers and professionals, who often require education beyond a bachelor's degree. However, STEM bridge programs rarely track graduate school enrollment rates (Ashley et al., 2017), and virtually none that we know of track STEM careers. Providing early opportunities for research experience may inherently make students more competitive for graduate programs and STEM careers. The inclusion of exposure to STEM as an applied practice, the learning acquired from gaining STEM research experience as part of bridge programs, and students' consequent STEM decisions should also be explored.
CONCLUSION
STEM bridge programs serve an important goal of increasing STEM major retention, particularly for students who have faced barriers to successful STEM degree completion. However, despite the expense of these programs, the field has lacked systematic analysis of program effectiveness, as well as any consensus on criteria for success. To our knowledge, this is the first systematic quantitative review of the effectiveness of STEM bridge programs. We found that STEM bridge programs positively affected first-year student retention and performance. However, we were constrained in our analysis due to the limited outcomes many of the primary studies reported. Further research in this area would benefit from researchers and bridge program administrators continuing to examine a broad array of student outcomes and improving their study designs. We hope that this meta-analysis will serve others as a useful foundation for future research on STEM bridge programs.
Auxin efflux by PIN-FORMED proteins is activated by two different protein kinases, D6 PROTEIN KINASE and PINOID
The development and morphology of vascular plants is critically determined by synthesis and proper distribution of the phytohormone auxin. The directed cell-to-cell distribution of auxin is achieved through a system of auxin influx and efflux transporters. PIN-FORMED (PIN) proteins are proposed auxin efflux transporters, and auxin fluxes can seemingly be predicted based on the—in many cells—asymmetric plasma membrane distribution of PINs. Here, we show in a heterologous Xenopus oocyte system as well as in Arabidopsis thaliana inflorescence stems that PIN-mediated auxin transport is directly activated by D6 PROTEIN KINASE (D6PK) and PINOID (PID)/WAG kinases of the Arabidopsis AGCVIII kinase family. At the same time, we reveal that D6PKs and PID have differential phosphosite preferences. Our study suggests that PIN activation by protein kinases is a crucial component of auxin transport control that must be taken into account to understand auxin distribution within the plant. DOI: http://dx.doi.org/10.7554/eLife.02860.001
Introduction
The synthesis and proper distribution of the hormone auxin within the growing plant body is essential for basically all differentiation processes throughout plant development as well as for the plant's tropic responses. As such, proper plant development and morphology strictly require the directed cell-to-cell transport of auxin, which is achieved by a system of auxin influx and efflux transporters (Teale et al., 2006). AUXIN RESISTANT1 (AUX1)/LIKE-AUX1 (LAX) proteins are auxin influx transporters, and PIN-FORMED (PIN) proteins, which have been proposed to act in concert with ABC transporters, are the proposed auxin efflux transporters (Galweiler et al., 1998;Friml et al., 2002;Noh et al., 2003;Geisler et al., 2005;Bainbridge et al., 2008;Peret et al., 2012). The directed transport of auxin throughout the plant is critically determined by the asymmetric plasma membrane distribution of PINs found in many cells, and plant developmental processes have been successfully modeled based on the knowledge of PIN distribution and PIN protein behavior (Jonsson et al., 2006;Smith et al., 2006;Wisniewska et al., 2006;Blakeslee et al., 2007;Grieneisen et al., 2007).
We have previously identified and studied Arabidopsis protein kinases of the AGCVIII family designated D6 PROTEIN KINASE (D6PK) (Zourelidou et al., 2009). The D6PK family is comprised of four functionally redundant members, namely D6PK, D6PK-LIKE1 (D6PKL1), D6PKL2 and D6PKL3. Although D6PKs are devoid of any sequence features indicative for an association of these protein kinases with the plasma membrane, D6PKs colocalize with PIN proteins at the basal (rootward) plasma membrane in cells of the root cortex and stele, the hypocotyl and main inflorescence stem as well as the shoot apical meristem (Zourelidou et al., 2009;Barbosa et al., 2014). D6PKs phosphorylate PIN proteins in vitro and PIN phosphorylation is reduced in d6pk mutants in vivo without affecting PIN distribution or strongly affecting PIN abundance (Zourelidou et al., 2009;Willige et al., 2013;Barbosa et al., 2014). Just as the PINs, D6PK constitutively cycles intracellularly between endosomal compartments and the plasma membrane but both, PINs and D6PK, traffic via distinct intracellular routes and seemingly encounter each other only at the basal plasma membrane (Barbosa et al., 2014). Since PIN phosphorylation, as assessed by evaluating overall PIN1 and PIN3 phosphorylation levels, rapidly reacts to the presence and absence of D6PK at the plasma membrane, we postulated that D6PKs directly activate auxin transport by PIN phosphorylation (Willige et al., 2013;Barbosa et al., 2014). This hypothesis has, however, never been tested.
Another subfamily of AGCVIII kinases comprises the proteins PINOID (PID), WAG1, and WAG2 (Christensen et al., 2000;Benjamins et al., 2001;Santner and Watson, 2006;Galvan-Ampudia and Offringa, 2007). Phosphorylation of PINs by PID/WAGs has previously been proposed to control PIN polarity (Friml et al., 2004;Michniewicz et al., 2007;Dhonukshe et al., 2010;Huang et al., 2010). PID/WAGs phosphorylate PINs at three highly conserved phosphosites, designated S1-S3 (Huang et al., 2010). Modulating PIN phosphorylation either by PID or WAG overexpression or by introducing phosphorylation-mimicking mutants in PIN1 seemingly results in a basal-to-apical shift in PIN polar distribution (Michniewicz et al., 2007;Dhonukshe et al., 2010;Huang et al., 2010). The proposed loss of PIN phosphorylation in the pid mutant has been used to explain the phenotypic similarity between pin1 and pid mutants: pin1 mutants, on the one side, have a pin-formed inflorescence because they are devoid of the central auxin efflux protein required for shoot meristem differentiation (Galweiler et al., 1998); pid mutants, on the other side, are deficient in PIN1 phosphorylation, which seemingly prevents the essential basal-to-apical polarity switch required to redirect auxin fluxes during differentiation at the shoot meristem (Friml et al., 2004).
eLife digest
In plants, a hormone called auxin controls the growth of the stems and roots. This chemical is transported from cell to cell, and its flow through the plant is redirected continuously as the plant is developing. Auxin is pumped out of cells by proteins in the cell membrane called 'auxin efflux carriers'. These proteins are usually found on one side of each cell and this is what gives the direction to auxin transport. Zourelidou, Absmanner et al. now report that being positioned on the correct side of a plant cell is not enough to enable an efflux carrier to do its job; it must also be turned on by kinases before it can pump auxin out of cells. Kinases are enzymes that add phosphate groups to specific sites on other proteins, and plants without certain kinases are unable to transport auxin.
When Zourelidou, Absmanner et al. produced the efflux carrier and a plant kinase (which turns the efflux carrier on) in immature egg cells from frogs, auxin was rapidly pumped out of the cells. However, cells that contained the efflux carrier but not the kinase could not transport the hormone. Importantly, egg cells from frogs do not normally transport auxin, but these cells are commonly used in experiments because they are large, which makes them easier to work with in the lab.
One of at least two kinases must tag a number of sites on the efflux carrier to ensure that it is switched on. It was already known that some of these sites are involved in making sure that the efflux carrier is located on the correct side of the cell. Zourelidou, Absmanner et al. also found that auxin itself encourages the addition of phosphate groups onto the efflux carrier.
Though it was thought that knowing where the auxin transporters are was enough to explain the direction of auxin transport in plants, it is now clear that activation by the kinases needs to be taken into account too. And since these kinases may activate the transporters to different extents, identifying how these proteins are controlled, for example by auxin itself, will be the next challenge in the field.
The PID/WAG-mediated repolarization of PIN proteins is also important for phototropic responses (Ding et al., 2011). During phototropic bending of the hypocotyl, the polarity of the relevant PIN3 protein changes upon light exposure and this polarity switch is required for auxin redistribution in the hypocotyl and for efficient phototropism. This PIN3 polarity change requires the activity of PID/WAG protein kinases and it has been proposed that PID/WAG-dependent PIN3 phosphorylations directly control this process (Ding et al., 2011). We showed previously that D6PKs also play a critical role in this process: d6pk mutants are strongly impaired in phototropic hypocotyl bending and the inability of d6pk mutants to efficiently transport auxin from the cotyledons to the hypocotyl may be responsible for this tropism defect (Willige et al., 2013). Importantly, the light-induced and PID/WAG-dependent PIN3 polarity changes required for hypocotyl bending can still take place in the absence of D6PKs, suggesting that the function of PID/WAGs in auxin transport and phototropism can be uncoupled from that of the D6PKs and that both kinases may control PINs independently and differentially (Willige et al., 2013). While the differential biological function of D6PK and PID/WAGs in the context of phototropism may be explained by the two kinases being active in different tissues or during different stages of the phototropism response, there is also evidence that the two kinases have differential biochemical activities. While the overexpression of PID and WAG kinases results in a basal-to-apical PIN shift, the overexpression of D6PKs does not affect PIN distribution (Zourelidou et al., 2009;Dhonukshe et al., 2010). Inversely, the loss of PID function results in strong differentiation defects of the primary inflorescence, which are not apparent in the d6pk mutants. Thus, there is evidence for a differential biochemical activity of D6PKs and PID/WAGs but the molecular basis of this differential activity remains to be determined.
The auxin efflux activity of PINs has previously been demonstrated by passive loading of yeast, plant, or mammalian cells with radiolabeled auxin (Petrasek et al., 2006;Wisniewska et al., 2006;Mravec et al., 2008;Yang and Murphy, 2009). In these experiments, the auxin efflux activity of PINs was deduced from the reduced amount of radiolabeled auxin that accumulated in cells (over-)expressing certain PIN proteins in comparison to control samples. Because these experiments used passive loading of auxin, it is unclear if the differences in intracellular auxin accumulation observed in these experiments are truly a result of differences in auxin efflux or a consequence of differences in auxin uptake. In other studies, auxin efflux was shown based on differences in auxin retention after passive loading and subsequent transfer to auxin-free medium, thereby reversing the electrochemical gradient. In these studies, background transport activities could not be ruled out and differences became apparent only at endpoint steady-state levels. To date, there has been no report of a heterologous expression system that allows measuring auxin export directly in the linear phase.
Here, we report the results from direct auxin efflux experiments with radiolabeled auxin (indole-3-acetic acid, IAA) injected into Xenopus oocytes. We find that PINs are unable to promote auxin efflux in this system unless PINs become activated by specific protein kinases of the Arabidopsis AGCVIII family. We map the phosphosites of these kinases in the PINs and further show that phosphorylation of conserved phosphosites is required for the efficient activation of PIN1 and PIN3. Our study strongly suggests that the activation of PIN-mediated auxin efflux by protein kinases is a crucial component of auxin transport control that must be taken into account to understand auxin distribution within the plant.
D6PK is required for basipetal auxin transport in inflorescence stems
In Arabidopsis thaliana, the four AGCVIII kinases of the D6PK subfamily D6PK, D6PK-LIKE1 (D6PKL1), D6PKL2 and D6PKL3 redundantly control auxin transport-dependent growth (Zourelidou et al., 2009;Willige et al., 2013). Mutants with defects in multiple D6PK genes such as d6pk d6pkl1 (d6pk01) double and d6pk d6pkl1 d6pkl2 (d6pk012) triple mutants are severely impaired in several developmental processes including tropic responses (d6pk01 and d6pk012) and lateral root differentiation (d6pk012) (Zourelidou et al., 2009;Willige et al., 2013). In inflorescence stems, auxin is transported primarily in a basipetal (rootward) direction (Teale et al., 2006). To understand the contribution of the individual D6PK genes to auxin transport in inflorescence stems, we measured basipetal auxin transport in primary inflorescence stems of a selected set of d6pk single, double and triple mutants that represented a previously established phenotypic series (Zourelidou et al., 2009;Willige et al., 2013). In these experiments, we noted a decrease in auxin transport in mutants of increasing mutant complexity (Figure 1). While auxin transport defects were comparatively subtle in d6pk single mutants, basipetal auxin transport was as strongly impaired in the d6pk012 triple mutant as in mutants of PIN1, a major PIN protein in this tissue (Figure 1). Furthermore, we found that D6PKs are coexpressed with PINs in stems (Figure 1-figure supplement 1) and that both D6PK and PIN1 localize to the basal plasma membrane in cells where auxin levels are high as suggested by the auxin response reporter DR5:GFP (Figure 1-figure supplement 2). Based on these observations, we concluded that D6PKs have an essential role in auxin transport regulation in inflorescence stems.
D6PK activates PIN-mediated auxin efflux in a Xenopus oocyte system
Since auxin transport is impaired in d6pk mutant inflorescence stems and since we had previously accumulated evidence that D6PK directly phosphorylates PINs (Zourelidou et al., 2009;Willige et al., 2013;Barbosa et al., 2014), we hypothesized that D6PK may directly activate auxin transport by PIN phosphorylation in vivo. To test this hypothesis, we established a heterologous test system for measuring auxin efflux using Xenopus laevis oocytes. In this assay, in vitro transcribed cRNAs for the proteins under investigation were injected into the oocytes 4 days prior to the experiment to allow for protein synthesis. At the beginning of the experiment, radiolabeled IAA was injected and the amount of residual radiolabel was measured in the oocytes after incubation for up to 30 min. PIN as well as D6PK protein accumulated at the plasma membrane also in oocytes as shown by immunoblots for PINs and confocal microscopy for D6PK (Figure 2A,B). An inherent feature of this assay system was the gradual loss of the injected radiolabeled IAA from the oocytes over time (in the absence of exogenous proteins), which we attributed to the leakiness of the plasma membrane for IAA (Figure 2C-F). Interestingly, when we tested PIN1 or PIN3 alone, we did not observe any measurable auxin efflux that differed from the background, suggesting that the PINs are inactive auxin transporters in the oocyte system. However, when we co-expressed D6PK with the PINs we observed a significant and kinase activity-dependent activation of auxin efflux. This activation correlated with the appearance of high molecular weight bands for PIN1 and PIN3 that appeared in anti-PIN immunoblots only in the presence of the active D6PK kinase (Figure 2B). In line with an activation of PINs by D6PK through direct PIN phosphorylation, a kinase-dead variant of D6PK could not activate auxin efflux in this system (Figure 2D,F). In summary, these experiments showed that D6PK is an activator of PIN-mediated auxin efflux in the oocyte expression system.
Figure 1. Basipetal auxin transport is impaired in d6pk and pin1 mutants. (A) Basipetal auxin transport measured in inflorescence stems of 5-week-old Arabidopsis plants. Segment numbers refer to the 5 mm stem segments dissected from the primary inflorescence stem where segment 1 is the 5 mm segment closest to the radiolabeled auxin. The 5 mm segment directly in contact with the radiolabeled auxin is not included. Mutant nomenclature: d6pk0, d6pk-1; d6pk1, d6pkl1-1; d6pk01, d6pk-1 d6pkl1; d6pk012, d6pk-1 d6pkl1 d6pkl2-2. A linear mixed-effects model analysis (fixed factor) revealed statistically significant differences (p<0.01) in the transport rates between the wild type and all mutant genotypes, between the d6pk single mutants and the higher order d6pk mutants as well as between the d6pk01 double mutant and the d6pk012 triple mutant. d6pk012 and pin1 are not significantly different (p=0.43). (B) Amount of radiolabeled auxin found in all segments of the plants shown in (A). An ANOVA revealed highly significant differences between groups (p<0.001). An all-pairwise post hoc analysis (Holm-Sidak) allowed the assignment of three significance levels indicated by letters (p≤0.05 between levels). DOI: 10.7554/eLife.02860.003
D6PK phosphorylates PINs at specific phosphosites
Using mass spectrometry, we next identified D6PK-dependent phosphosites in the PINs after in vitro phosphorylation of the cytoplasmic loops (CL) of PIN1, PIN2, PIN3 and PIN4. These analyses resulted in the identification of two novel serine residues as conserved PIN phosphosites, S4 and S5, as well as three further serine phosphosites, S1-S3, that had previously been identified as phosphosites of the PID/WAG kinases (Figure 3A;Dhonukshe et al., 2010;Huang et al., 2010). Whereas S1, S2 and S3 are conserved in all four PINs tested, S4 and S5 are not conserved in PIN2 where the corresponding protein sequence motifs are divergent when compared to PIN1, PIN3, PIN4 and PIN7 and when compared to the strong conservation of the S1-S3 phosphosites in all PINs including PIN2 (Figure 3A). Furthermore, S5 was not conserved in PIN1 but aligned with a strongly conserved region of PIN1. At the position of S5, PIN1 had an aspartic acid (D; D215) and we speculated that D215 might be a natural phosphomimic variant of the S5 site (Figure 3A).
We next tested the identity and relevance of S4 and S5 in in vitro phosphorylation experiments using synthetic peptides as well as recombinant PIN CL fragments as substrates (Figure 3B). In the experiments with the synthetic peptides, we could confirm the identity and phosphorylation of the novel S4 and S5 phosphosites using mutant peptides as negative controls where the respective S had been replaced by an alanine (A) (Figure 3B). Since S5 from PIN3, PIN4 and PIN7 corresponded to D215 in PIN1 and since D215 was embedded in an otherwise highly conserved part of the protein, we were also interested in testing whether a serine (S) in a PIN1 D215S variant could be phosphorylated by D6PK. Indeed, while a synthetic peptide comprising PIN1 D215 could not be phosphorylated by D6PK in vitro, the D215S peptide variant was efficiently phosphorylated, indicating that, although the respective S5 phosphosite was not conserved, the sequence conservation in this region was sufficient for phosphorylation by D6PK. This was suggestive of an overall structural conservation of this PIN1 protein domain (Figure 3A). In contrast, PIN2-specific peptides corresponding to the S4 or S5 phosphosites could not be phosphorylated by D6PK despite the fact that their sequences also contained serine residues. Phosphorylation of the corresponding peptides also failed when an asparagine (N) at the respective position was replaced by a serine (Figure 3B). Thus, S4 and S5 are novel PIN protein phosphosites that are differentially conserved in the five plasma membrane-resident PIN proteins with a role in promoting auxin efflux.
When we examined the contribution of the individual phosphosites to PIN1 phosphorylation in the context of the PIN1 cytoplasmic loop (CL) fragment, we found that PIN1 CL phosphorylation by D6PK was already strongly reduced (40% of wild type levels) in a mutant variant where only PIN1 S4 was replaced by an alanine (S4A; Figure 4A). In turn, mutations of the phosphosites PIN1 S1, PIN1 S2 or PIN1 S3 alone impaired phosphorylation by D6PK to a lesser extent (ca. 80%) and only mutation of all three sites in PIN1 S1A S2A S3A led to a clear reduction of PIN1 CL phosphorylation (58%; Figure 4A). Finally, the mutation of all four PIN1 phosphosites under investigation in PIN1 S1A S2A S3A S4A abolished phosphorylation by D6PK almost completely (2.7%; Figure 4A). Based on these analyses, we concluded that S4 is a major phosphosite for D6PK in PIN1.
Since PIN1 S1, S2, and S3 had previously been identified as phosphorylation targets of PID, we also examined and quantitatively compared the effects of the phosphosite mutations with those of D6PK. In the case of PID, the phosphorylation of PIN1 CL by PID was not altered in the PIN1 S4A mutant when compared to the wild type (100%; 40% for D6PK) but already strongly affected by the PIN1 S1A mutation (61%; ca. 80% for D6PK) and even more by PIN1 S1A S2A S3A (18%; 58% for D6PK; Figure 4B). Thus, D6PK and PID have an overlapping but also differential preference for specific phosphosites in PIN1. When we examined the effects of S4 and S5 site mutations in the context of PIN3, we detected a similar phosphosite preference. Whereas a PIN3 CL S4A S5A variant was still efficiently phosphorylated by PID, its phosphorylation by D6PK was severely impaired (28%; Figure 4C,D). Thus, mutations of the five phosphosites have differential effects in the case of D6PK or PID.
The S4 and S5 phosphosites are required for PIN activation by D6PK
We next evaluated the importance of S1-S3 and S4 for PIN1-and D6PK-dependent auxin efflux in oocytes. For this purpose, we calculated the transport rates of PIN1 and the S to A mutants as described in Figure 5-figure supplement 1. In line with the proposed important role of S4 for PIN1 phosphorylation, we found that a PIN1 S4A mutant was already significantly impaired in auxin efflux activation by D6PK in the auxin efflux experiments ( Figure 5A,B). At the same time, the requirement of PIN1 S1, S2, and S3 for D6PK activation was not obvious with a PIN1 S1A S2A S3A mutant but became apparent in the presence of the S4A mutation where the PIN1 activation defect of the S4A mutation was further enhanced in the presence of mutations of the other three sites ( Figure 5A). We thus concluded that PIN1 S4 is an important site for D6PK-dependent PIN1 activation but that all four phosphosites are required for full PIN1 activation. Also, in line with the results obtained in the in vitro phosphorylation experiments, we found that a PIN3 S4A S5A variant showed reduced responsiveness to D6PK when compared to wild type PIN3 providing further support for the importance of the S4 and S5 phosphosites for PIN activation by D6PK ( Figure 5C).
Since S5 corresponded to an aspartic acid residue in PIN1 (D215) and because we could demonstrate that a peptide with a D215S replacement was efficiently phosphorylated by D6PK (Figure 3), we speculated that D215 might be a natural phosphomimic variant of the S5 phosphosite. We reasoned that PIN1 D215S might show a differential behavior in the auxin efflux experiments in the absence and presence of D6PK because the D215S mutant variant could show a stronger dependency on kinase activation. However, we found that the auxin efflux (activation) of the wild type PIN1 protein was indistinguishable from the behavior of the PIN1 D215S mutant in these oocyte experiments ( Figure 5D). We therefore rejected this hypothesis.
Figure 5. D6PK activates auxin transport through phosphorylation of specific serine residues. (A) Results of quantitative analyses from oocyte auxin efflux assays with D6PK and wild type or mutant PIN1. The averages of at least three independent measurements are shown after normalization to the mock control. Student's t test: *p=0.022; **p=0.005; ***p<0.001; n.s., not significant. (B) Anti-PIN1 immunoblots of microsomal membrane (MF) and cytoplasmic fractions (CF) of the corresponding oocytes used in (A). (C) PIN3 S4 S5 are required for full activation by D6PK. Results of quantitative analyses from oocyte auxin transport assays with D6PK and wild type PIN3 or the PIN3 S4A S5A mutant. The averages of at least three independent biological replicates are shown after normalization to the mock control. Student's t test *, p=0.016; n.s., not significant. (D) PIN1 D215 does not contribute to the auxin transport activity of PIN1. Results of oocyte auxin efflux assays with wild type and mutant PIN1 together with YFP:D6PK (D6PK) as specified. Each data point represents the mean and standard error of measurements from at least 10 oocytes. DOI: 10.7554/eLife.02860.011
To examine the biological significance of S4 and S5 for PIN function, we introduced wild type and mutant transgenes for the expression of PIN1 and PIN3 under the control of their respective promoters into pin1 and pin3 pin4 pin7 (pin347) mutants, respectively (Figure 6). In support of an important but not exclusive role of S4 phosphorylation for PIN1 function, we detected only a partial rescue of the auxin transport defect in inflorescence stems of pin1 mutants transformed with PIN1 S4A compared to a full rescue with the wild type PIN1. While the mutant and the wild type transgene were able to complement the PID-dependent inflorescence differentiation defect of the pin1 mutant (Figure 6C,D), D6PK-dependent basipetal auxin transport in the stem was compromised (Figure 6A,B). Since the mutation of the S4 phosphosite may potentially interfere with the polar distribution or the intracellular transport of the constantly trafficking PIN1 protein, we analyzed the polar distribution of PIN1 S4A and its sensitivity to the trafficking inhibitor Brefeldin A (BFA) (Figure 6-figure supplement 1). Since PIN1 S4A showed an identical behavior to wild type PIN1 in these experiments, we concluded that changes in PIN1 polarity, PIN1 trafficking or PIN1 abundance at the plasma membrane may not be causal for the observed differences in basipetal auxin transport. We also evaluated the effects of PIN3 phosphosite mutations using the ability of PIN3 transgenes to complement the strong phototropism defect of the pin3 pin4 pin7 (pin347) triple mutant (Figure 6E,F; Willige et al., 2013). When we measured the ability of wild type PIN3 and mutant PIN3 S4A S5A to complement the pin347 mutant when expressed from a PIN3 promoter fragment, we found that the phototropism defect of the pin347 mutant was only partially complemented by the PIN3 S4A S5A transgene while it was fully complemented by wild type PIN3 (Figure 6E). This finding was in line with the hypothesis that D6PK-dependent PIN3 S4 and S5 phosphorylations are required for efficient basipetal auxin transport in the hypocotyls of dark-grown seedlings, which is a prerequisite for efficient hypocotyl bending.
Consistent with the predominant role of the PID phosphosite phosphorylation at S1-S3, we found that the mutation of the PIN3 S1-S3 phosphosites as well as mutation of all five PIN3 phosphosites, S1-S5, fully impaired the ability of the PIN3 transgene to complement the pin347 mutation (Figure 6F). This finding can be explained by the essential role of PID-dependent PIN3 polarity changes in the hypocotyl that take place after light exposure and that are required for phototropic hypocotyl bending. As we had previously shown, the PID-dependent PIN3 polarity change after phototropic stimulation is a distinct process that is independent from the regulation of basipetal auxin transport in the dark-grown seedling (Ding et al., 2011; Willige et al., 2013). In summary, this experiment supported the conclusion that the novel phosphosites, PIN3 S4 and S5, are required for full PIN3 activity and that their mutation most likely interferes primarily with basipetal auxin transport in the hypocotyls of dark-grown seedlings.
Figure 6. PIN1 S4 and PIN3 S4 S5 are required for full pin mutant complementation. (A) Basipetal auxin transport measured in inflorescence stems of 5-week-old Arabidopsis plants. Segment numbers refer to the 5 mm stem segments dissected from the inflorescence stem where segment 1 is the 5 mm segment closest to the radiolabeled auxin. The 5 mm segment directly in contact with the radiolabeled auxin was discarded. The values represent the mean and standard error of six biological replicates, except pin1 and NPA-treated wild type (n = 2). A linear mixed-effects model analysis (fixed factor) revealed statistically significant differences (p<0.05) in the transport rates between the plant lines complemented with the PIN1 S4A construct and the other genotypes as indicated by the significance levels in (B). (B) Amount of radiolabeled auxin found in all segments of the plants shown in (A). An ANOVA revealed highly significant differences between groups (p<0.001). An all-pairwise post hoc analysis (Holm-Sidak) allowed the assignment of three significance levels indicated by letters (p≤0.036 between levels). (C) Phenotypes of 5-week-old pin1 mutants complemented with a transgenic construct expressing wild type PIN1 and PIN1 S4A under control of a PIN1 promoter fragment. Scale bar = 10 cm. (D) PIN1 immunoblot detects comparable PIN1 protein levels between the wild type and PIN1 transgenic lines. (E) and (F) Analysis for the rescue of phototropic hypocotyl bending defects of a pin3 pin4 pin7 mutant carrying wild type and mutant transgenes for the expression of wild type and mutant PIN3 under control of a PIN3 promoter fragment. Seedlings were exposed for 6 hr (E) or 20 hr (F) to unilateral white light before quantification. To assess differences between genotypes a Kruskal-Wallis ANOVA on ranks was performed. The differences in the median values among the different genotypic groups were highly significant (p<0.001). Different letters indicate different significance levels (p<0.01) calculated by an all-pairwise multiple comparison (Dunn's Method). DOI: 10.7554/eLife.02860.013 The following figure supplements are available for figure 6.
S4 and S5 phosphorylation is strongly dependent on D6PK in vivo Next, we were interested in examining the phosphorylation at PIN1 S4 and PIN3 S4 and S5 in vivo and to examine the phosphorylation at these sites in the presence and absence of D6PKs. To this end, we employed selected reaction monitoring (SRM), a mass spectrometry technique that allows detection and quantification of specific peptides and their phosphorylated variants in total protein preparations (Picotti and Aebersold, 2012). In these experiments, we detected a strong reduction in the in vivo abundance of the PIN1 S4 as well as PIN3 S4 phosphorylations that increased with increasing d6pk mutant complexity ( Figure 7A,B). This decrease in S4 phosphorylation could not be explained by changes in the overall abundance of PIN proteins as shown by quantitative SRM analyses of the unphosphorylated PIN1 and PIN3 S4 peptides and analyses of internal control peptides ( Figure 7A,B). Furthermore, introducing a D6PK transgene expressing D6PK under control of a D6PK promoter fragment rescued the PIN1 and PIN3 S4 phosphorylation defects (Figure 7-figure supplements 1 and 2). We also examined phosphorylation at PIN3 S5 using the same methodology and observed that the abundance of phosphorylation at these sites was as strongly reduced in the d6pk012 triple mutant as observed for the S4 site. Again, the phosphorylation defect could not be explained by changes in PIN3 abundance and was rescued by a D6PK transgene as described above (Figure 7-figure supplement 3). Most importantly, the observed decreases in PIN1 and PIN3 phosphorylation were in good agreement with the reductions in auxin transport that we had detected in the same tissue of d6pk mutants (Figure 1). We therefore concluded that D6PKs are the major kinases targeting PIN1 S4, PIN3 S4, and PIN3 S5 in Arabidopsis inflorescence stems and that the reduced phosphorylation at these sites may be causal for the reduced auxin transport of d6pk mutants in this tissue.
PID/WAG kinases also activate PINs
D6PKs belong to a larger family of AGCVIII kinases in Arabidopsis (Galvan-Ampudia and Offringa, 2007). Besides D6PKs and the already introduced PID/WAGs, other AGCVIII kinases such as the phototropin blue light receptors phot1 and phot2 as well as UNICORN (UCN) have known biological functions (Inoue et al., 2008; Enugutti et al., 2012). We were interested in testing the ability of these protein kinases to activate PIN-mediated auxin efflux and examined PID, WAG2 as well as phot1 and UCN together with PIN1 in the oocyte auxin transport assay (Figure 8). Interestingly, PID and WAG2 but not phot1 or UCN were able to activate PIN1-mediated auxin efflux (Figure 8A,B, Figure 8-figure supplement 1). We thus concluded that PID and WAG2 have a role in PIN activation besides their previously reported role in the control of PIN polarity (Friml et al., 2004; Dhonukshe et al., 2010).
We then examined whether the differential phosphosite preferences of D6PK and PID as observed in the in vitro phosphorylation experiments ( Figure 2C) would also translate into differential defects in the oocyte auxin transport assay. Indeed, we found, in agreement with the in vitro data, that the PIN1 S1A S2A S3A mutant was less efficiently activated by PID than by D6PK ( Figure 8C). Inversely, the PIN1 S4A mutation that strongly affected activation by D6PK did not significantly affect activation by PID. Again, mutation of all four PIN1 phosphosites, PIN1 S1A-S4A, resulted in the strongest impairment of PIN1 activation by PID ( Figure 8C).
We also used SRM analyses to examine the effects of the loss of PID as well as WAG1 and WAG2 function on the phosphorylation of PIN1 S4 (Figure 8-figure supplement 2) and PIN3 S4 (Figure 8-figure supplement 3). However, in contrast to the strong defects in PIN S4 phosphorylation that we observed in the d6pk mutants, neither pid nor wag1 wag2 mutants showed a clear reduction in PIN phosphorylation at the S4 phosphosite, suggesting that PID and WAG1/WAG2 do not contribute to PIN S4 phosphorylation in this tissue. We also aimed to conduct the complementary SRM analysis experiment of the PIN1 and PIN3 S1, S2, and S3 phosphosites but, for technical reasons, had to restrict our efforts to SRM measurements of PIN1 S1 (Figure 8-figure supplement 4) and PIN3 S1 (Figure 8-figure supplement 5): Whereas the peptides comprising the S3 phosphosites of PIN1 and PIN3 were unsuitable for chemical peptide synthesis as predicted based on their primary amino acid sequence, we repeatedly failed to obtain synthetic peptides for the PIN1 and PIN3 S2 phosphosites. Our analysis of PIN1 and PIN3 S1 phosphorylations, however, showed that the phosphorylation at the S1 phosphosites was not affected when comparing the d6pk012 mutant with the d6pk012 mutant expressing a complementing D6PK transgene, suggesting that D6PK does not contribute to the phosphorylation of S1 in vivo (Figure 8-figure supplements 4 and 5).
Since our phosphosite analyses indicated that D6PK and PID share their PIN targets but display differential preferences for these phosphosites, we analyzed the functional redundancy of these two kinases in promoter swap experiments by expressing them under the control of the genes' promoter fragments in the d6pk012 and the pid mutant backgrounds, respectively. These experiments demonstrated that D6PK and PID cannot functionally replace each other when expressed from the promoter of the respective other gene (Figure 9). Whereas the expression of PID from a PID promoter fragment was sufficient to complement the phenotypes of a pid mutant, the expression of D6PK under control of the PID promoter fragment failed to complement pid (Figure 9A). Inversely, D6PK but not PID expression from a D6PK promoter fragment was sufficient to complement the d6pk012 mutant (Figure 9B). In summary, these genetic experiments supported our conclusion that D6PK and PID/WAGs are functionally divergent, and these findings and conclusions are in line with previous observations on the differential effects of these two kinases in PIN polarity control. These differential phosphosite preferences, as detected in in vitro as well as in vivo phosphosite analyses, may be the basis of the distinct roles of the two kinases in the control of PIN polarity and plant growth control.
Figure 8 (legend, in part). The averages of at least three independent measurements, calculated as described in Figure 5-figure supplement 1, are shown after normalization to the mock control. In (A), a one-way ANOVA revealed highly significant differences between groups (p<0.001) and a post hoc analysis (Holm-Sidak) indicated that the D6PK and PID values were significantly different from control oocytes (***p<0.001). In (C), a Student's t-test was performed: *p<0.027; ***p<0.001; n.s., not significant. (B) Immunoblots of total protein extracts prepared from oocytes expressing PIN1 and different AGC kinases. The presence and activation (phot1 only) of the non-effective kinases in the membrane (MF) and cytoplasmic fraction (CF) was confirmed with anti-phot1, anti-phot1-pS851 (for phot1 activation) and anti-UCN. DOI: 10.7554/eLife.02860.019 The following figure supplements are available for figure 8.
Auxin promotes PIN phosphorylation
Since auxin had previously been shown to regulate auxin transport at the level of PIN transcription and PIN endocytosis control, we were also interested in examining the role of auxin on PIN phosphorylation. In these analyses, we detected concentration-, time- and D6PK-dependent increases in the phosphorylation of PIN1 S4, PIN3 S4 and PIN3 S5 already 15 min after auxin application (Figure 10A,B, Figure 10-figure supplements 1-4). While these increases were clearly observed in the wild type, only marginal increases in PIN phosphorylation at these sites were observed in the phosphorylation-deficient d6pk012 mutant. At the same time, phosphorylation at the preferential PID target site S1 was neither strongly impaired in d6pk012 mutants when compared to a d6pk012 mutant expressing a complementing D6PK transgene nor clearly induced by auxin (Figure 8-figure supplements 4 and 5). Furthermore, in agreement with an auxin-dependent control of PIN phosphorylation at S4 and S5, we detected increased phosphorylation at S4 and S5 in the auxin-overproducing yucca mutant (Figure 10A,B, Figure 10-figure supplements 1-4; Zhao et al., 2001). Although the analyses of the control peptides showed that there is also an overall increase in PIN abundance in yucca, the relative increases in phosphosite phosphorylations exceeded the increases in overall PIN abundance, suggesting that PIN phosphorylation is activated in this mutant when compared to the wild type.
We had previously reported that D6PK is a plasma membrane-associated protein that cycles between the plasma membrane and the cytoplasm or intracellular compartments (Zourelidou et al., 2009; Willige et al., 2013; Barbosa et al., 2014). This cycling is highly sensitive to the trafficking inhibitor Brefeldin A (BFA), and in selected BFA-treatment conditions D6PK can be depleted from the plasma membrane without significantly affecting the plasma membrane abundance of PIN1 (Figure 11; Barbosa et al., 2014). The differential BFA sensitivity of D6PK and PIN allowed us to test the contribution of plasma membrane-resident D6PK to PIN phosphorylation. For this purpose, we generated a phosphosite-specific antibody for PIN1 S4 that efficiently detected S4-phosphorylated PIN1 at the plasma membrane but failed to detect PIN1 S4A (Figure 11-figure supplement 1). Importantly, we found that PIN1 S4 phosphorylation was strongly decreased already minutes after BFA treatment, when D6PK had become dissociated from the plasma membrane (Figure 11). Thus, PIN1 S4 phosphorylation depended on the presence of D6PK or other BFA-sensitive protein kinases at the plasma membrane.
Discussion
In this study, we examined the functional roles of the D6PK protein kinases in PIN phosphorylation and auxin transport activation. We showed that d6pk mutants are impaired in basipetal auxin transport in inflorescence stems and postulated that PINs may be directly activated by D6PKs. This hypothesis was supported by the facts that D6PK colocalized with the basally localized PIN1 and PIN3 proteins in various cell types, that D6PKs phosphorylated PINs in vitro and that D6PKs influenced PIN1 and PIN3 phosphorylation in vivo (Zourelidou et al., 2009; Willige et al., 2013; Barbosa et al., 2014). Here, we tested this hypothetical functional relationship by examining PIN1 or PIN3 activity and auxin transport at various levels. We showed that basipetal auxin transport was reduced in inflorescence stems of d6pk mutants and that PIN-mediated auxin efflux was activated by D6PK in Xenopus oocytes. Furthermore, we could rule out that the decreases in auxin transport as measured in inflorescence stems are the consequence of changes in PIN abundance, as demonstrated using confocal imaging, immunoblotting, and SRM analyses of PIN proteins. We furthermore demonstrated that D6PK-dependent PIN activation was dependent on specific phosphosites in PIN1 and PIN3. Taken together, all our findings support the conclusion that D6PKs are major regulators of PIN-mediated auxin transport in inflorescence stems. Since d6pk mutants have a number of phenotypes such as gravitropism defects in the root, negative gravitropism defects in the hypocotyl, phototropism defects in the hypocotyl as well as defects in lateral root initiation (Zourelidou et al., 2009; Willige et al., 2013; Barbosa et al., 2014), we are tempted to speculate that these other d6pk mutant phenotypes are also the consequence of reduced auxin transport activity of PINs in the absence of the PIN-activating D6PK kinases.
The detailed analysis of the S1-S5 phosphosites in in vitro phosphorylation experiments and in oocyte auxin transport experiments revealed that PIN1 S4 as well as PIN3 S4 and S5 are major target sites for D6PK. This conclusion found support also in the analysis of the in vivo phosphorylation levels at these sites since phosphorylation at S4 and S5 was strongly reduced in the d6pk012 mutant. Interestingly, the preferential D6PK phosphosites S4 and S5 are not conserved in PIN2 and it is striking that the respective domains in PIN2 carry small insertions when compared to the other PIN proteins. Thus, the activation of PINs by phosphorylation may be regulated by the presence and abundance of activating kinases such as D6PK but also by the availability and conservation of phosphosites in their PIN targets.
Besides S4 and S5, PINs must have other phosphosites that are targeted by D6PK since auxin transport defects in pin mutants expressing the respective PIN S4A and S5A mutant variants are partially complemented and not as severe as those observed in the d6pk012 loss-of-function mutants. Besides phosphorylations at S1, which are not affected in d6pk012 mutants, S2 and S3 would be other possible target sites since their mutation further impairs D6PK-dependent PIN phosphorylation in vitro and auxin transport in the oocyte system. In this respect, it is unfortunate that we were unable, for technical reasons, to measure phosphorylation at S2 and S3 in the d6pk and pid mutants.
Our study also addressed the functional role of PID and the PID-related WAG1/WAG2 kinases in the control of auxin transport. While we found that PID and WAG2 activate PIN-mediated auxin efflux in Xenopus oocytes, we showed at the same time that PID has different phosphosite preferences when compared to D6PK. We observed these phosphosite preferences when analyzing PIN phosphorylation at S1-S5 in in vitro phosphorylation experiments, auxin transport in oocytes, and PIN phosphorylation by quantitative mass spectrometry in the d6pk012, pid and wag1 wag2 mutants. Whereas D6PK appeared to have a preference for the S4 and S5 sites in PIN1 and PIN3, PID preferentially phosphorylated the previously identified S1-S3 phosphosites. S1-S3 are highly related to each other and also highly conserved among all five plasma membrane-resident PIN auxin efflux carriers including PIN2.
Figure 11. PIN1 pS4 is dependent on D6PK presence at the plasma membrane. Representative confocal images of root stele cells after immunostaining highlighting (arrowheads) the presence of YFP:D6PK (D6PK), S4-phosphorylated PIN1 (PIN1 pS4) and PIN1 at the plasma membrane before, and the absence of D6PK and PIN1 pS4 after, BFA treatment. Note that unphosphorylated PIN1 can still readily be detected in a polarized manner after S4-phosphorylation was efficiently removed. DOI: 10.7554/eLife.02860.031 The following figure supplements are available for figure 11.
Although D6PK had a preference for S4 and S5 phosphorylation, our in vitro phosphorylation experiments as well as the auxin transport experiments in oocytes further suggested that the phosphorylation of S1-S3 also contributes to full PIN phosphorylation and activation by D6PK. The respective inverse observations were made with PID. Whereas PID phosphorylation and activation of PIN1 was strongly impaired when S1-S3 were mutated, full impairment of phosphorylation and activation could only be achieved after additional S4 mutation. In this respect, we consider the complementation experiments of the pin1 mutant with the wild type PIN1 and the mutant PIN1 S4A transgenes particularly insightful. Here, we found that basipetal auxin transport in the inflorescence stem was partially impaired when PIN1 S4 was mutated whereas the strong inflorescence differentiation phenotype of the pin1 mutant was rescued. The partial complementation of the pin1 auxin transport defect indicates that PIN1 S4 is not the only phosphosite required for D6PK-dependent PIN1 activation and basipetal auxin transport. As such, this result is in agreement with the results of our in vitro phosphorylation and oocyte auxin transport experiments, which showed that D6PK can also activate PIN1 through S1-S3 phosphorylation. On the other hand, the rescue of the differentiation defect can be explained because the PIN1 S4A protein still contained the preferential phosphorylation sites for PID. As shown in the in vitro phosphorylation experiment as well as in the oocyte auxin transport experiment, the PIN1 S4A mutant variant is neither strongly impaired in its phosphorylation by PID nor in its activation by PID. Thus, the essential phosphosites required for PID-dependent PIN activation and PIN polarity changes are retained in PIN1 S4A. Therefore, the selective functionality of this mutant PIN1 in the context of inflorescence development indirectly supports the findings of our other analyses.
Along the same lines, we also studied the ability of a PIN3 S4A S5A transgene to rescue the strong phototropism defect of the pin347 triple mutant. We had previously shown that d6pk d6pkl1 double mutants as well as pin347 triple mutants are severely compromised in phototropic hypocotyl bending (Willige et al., 2013). We had shown that this phenotype could be explained by a strong defect in basipetal auxin transport and the apparent accumulation of auxin in the cotyledons of darkgrown seedlings, which, in turn, correlated with the absence of an auxin maximum in the bending zone of the hypocotyl (Willige et al., 2013). At the same time, PID-dependent PIN3 polarity changes could still take place in the d6pk012 mutant indicating that PID can function independently from D6PK on PIN3. Our observation that the PIN3 S4A S5A transgene could only partially rescue the pin347 triple mutant phenotype supports the notion that phosphorylation at these sites is important for PIN3 activation but suggests further that other phosphosites, such as S1-S3, may also be targeted by D6PK. This partial inactivation of an S4A S5A mutated PIN3 as observed in planta is in agreement with the partial impairment of PIN3 S4A S5A phosphorylation in in vitro phosphorylation experiments with D6PK as well as the fact that there is still a residual activation when PIN3 S4A S5A is activated with D6PK in the oocyte system. With regard to the relevance of S1-S3 for PIN3 function, we found that mutation of PIN3 S1-S3 or S1-S5 rendered this PIN3 non-functional when introduced as a transgene into pin347. Since these mutant PIN3 variants would be expected to be impaired in D6PK-dependent basipetal auxin transport as well as in PID-dependent PIN3 polarity changes, it is difficult based on the present depth of analysis to judge whether the non-functionality of PIN3 S1A-S3A or PIN3 S1A-S5A is primarily caused by a defect in basipetal auxin transport, a defect in changing PIN3 polarity or a combination of both.
In our experiments, PIN phosphorylation led to a direct activation of auxin efflux in oocytes. The analysis of auxin transport in Arabidopsis inflorescence stems suggested that this might indeed be the primary function of this modification since auxin transport was strongly impaired in d6pk loss-of-function mutants while PIN abundance at the plasma membrane was not altered. This observation does, however, not rule out that PIN phosphorylation has other regulatory roles, such as the control of PIN polarity by PID- or WAG-dependent phosphorylation (Friml et al., 2004; Dhonukshe et al., 2010). Changes in PIN polarity as they are observed after PID or WAG2 overexpression are not observed after D6PK overexpression (Zourelidou et al., 2009; Dhonukshe et al., 2010; Barbosa et al., 2014). The differential effect of D6PK and PID/WAGs on PIN may have its molecular basis in the differential phosphosite preferences of the two kinases. Common to both kinases seems, however, the fact that their phosphorylation activity is antagonistically regulated by phosphatases. While the phenotypic effects of PID can be antagonized by PP2A phosphatases (Michniewicz et al., 2007), removal of D6PK from the plasma membrane through BFA treatment resulted in an almost immediate decrease in PIN1 phosphorylation. Thus, it can be speculated that D6PK-dependent PIN phosphorylation is also antagonized by phosphatases, the identities of which remain to be determined.
Our data also suggest that PIN1 and PIN3 phosphorylation is not only controlled by the presence of D6PK at the plasma membrane but also by auxin itself. Using quantitative SRM analyses, we could show that PIN1 S4 as well as PIN3 S4 and S5 phosphorylation increases in response to auxin treatment in the wild type. In the d6pk012 mutant, the loss of phosphorylation at these preferential D6PK phosphosites could not be compensated by auxin application suggesting that these auxin-dependent phosphorylations are D6PK-dependent and may be mediated either directly by D6PK or by D6PK acting as an indirect auxiliary factor. Although we observed a minor increase in PIN phosphorylation at the D6PK phosphosites in the d6pk012 triple mutant, these increases were comparatively minor and may be attributed to phosphorylation through D6PKL3, which is still expressed in the d6pk012 mutant. Alternatively, they may be attributed to the activity of other PIN-regulatory kinases such as the PID/ WAGs or other as yet uncharacterized protein kinases. Theoretically, it could be envisioned that the auxin-dependent increases in PIN phosphorylation are the consequence of the previously reported inhibitory effects of auxin on PIN endocytosis (Paciorek et al., 2005). In this case, PIN phosphorylating kinases would encounter their PIN targets simply for a longer period of time thereby increasing the chances for phosphorylation. Unraveling the identity of the underlying auxin-sensory mechanism and its molecular details will be an interesting avenue for future investigations (Dharmasiri et al., 2005;Parry et al., 2009;Robert et al., 2010).
We recently reported that auxin treatment led to a transient dissociation of D6PK from the plasma membrane in root cells. There, this auxin response correlates with a slight decrease in PIN1 phosphorylation as judged by immunoblots (Barbosa et al., 2014). In contrast, we report here that auxin promotes PIN phosphorylation in inflorescence stems as determined by quantitative mass spectrometry of PIN1 S4, PIN3 S4 and PIN3 S5. It is at present difficult for us to reconcile these two apparently contrasting observations. We can therefore only argue that PIN phosphorylation is controlled by different auxin-dependent regulatory mechanisms in different tissues.
In summary, our study provides evidence that PIN-mediated auxin efflux requires activation by PIN phosphorylating kinases such as D6PK and PID/WAGs. Several of our findings point at a differential biochemical activity of these two AGCVIII kinase representatives on PINs that may explain their differential effects in controlling PIN polarity, auxin transport, and plant growth. The differential PIN-dependent distribution of auxin within the plant is of pivotal importance for the regulation of a multitude of processes in plant growth and development. It is our view that the activation of PINs by D6PKs and PID/WAGs is a crucial component of the control of auxin transport that must be taken into account to understand auxin transport within the plant and to ultimately understand plant growth.
Cloning procedures
Primer sequences for cloning, insert amplification and site-directed mutagenesis are listed in Supplementary file 1A.
For in vitro transcription prior to protein translation in oocytes, PIN1 and PIN3 were inserted into the expression vector pOO2 (Broer, 2010). To this end, the genes were amplified from cDNA templates and cloned as blunt-ended Phusion polymerase-amplified (Biozym, Hessisch Oldendorf, Germany) PCR fragments into the SmaI or EcoRV site of pOO2. The S to A mutations were introduced by PCR-based site-directed mutagenesis using primers carrying mutations for the respective S to A replacements. YFP:D6PK and kinase-dead YFP:D6PKin were amplified in a similar manner as described for PIN1 and PIN3 from previously described vector templates (Zourelidou et al., 2009) and inserted into the EcoRV site of pOO2. A PCR fragment of the PID coding sequence was first inserted into pJET1.2 (Fisher Scientific, Schwerte, Germany) and from there transferred as an XhoI/XbaI fragment into pOO2. A PCR fragment of the WAG2 CDS was cloned directly into pOO2 after XbaI/NcoI digestion. The phot1 CDS was cloned as a BamHI/PstI-digested PCR fragment into the corresponding sites of pOO2.
Constructs for the expression and purification of glutathione-S-transferase (GST)-tagged PIN and D6PK were previously described (Zourelidou et al., 2009;Huang et al., 2010;Willige et al., 2013). GST:PID was obtained by inserting a Gateway-compatible PCR-fragment obtained from a PID cDNA with the primers PID-GW-FW and PID-GW-RV into pDONR201 before transferring the PID insert to pDEST15 (Life Technologies, Carlsbad, CA).
Genomic PIN1 constructs were prepared by insertion of a 3558 bp SalI/NotI-digested PCR fragment including the PIN1 open reading frame and terminator into pGREEN0229. Subsequently, a 2081 bp PIN1 promoter fragment was inserted upstream from PIN1 as a KpnI/SalI-digested PCR fragment. Mutations for the S4A replacement were introduced by site-directed mutagenesis (Sawano and Miyawaki, 2000). The constructs were transformed into heterozygous PIN1/pin1 (SALK_047613) plants by Agrobacterium-mediated transformation, and pin1 homozygous lines carrying the PIN1 transgenes were isolated from the progeny. Plants expressing comparable levels of the PIN1 protein were identified by immunoblotting.
Constructs for the expression of PIN3 under the control of its own promoter were obtained by amplifying a fragment spanning the region from 1776 bp upstream of the PIN3 translation start site to 621 bp downstream of the PIN3 stop codon with the primers PIN3g-ApaI-FW and PIN3g-NotI-RV and inserted into the KpnI and NotI sites of pGREEN0229. The S1A through S5A mutations were introduced into the wild type construct by PCR-based site-directed mutagenesis using the primers listed in Supplementary file 1A. The constructs were transformed by Agrobacterium-mediated transformation into pin3 pin4 pin7 mutants (Willige et al., 2013) and phototropism experiments were performed on T2 progeny seedlings segregating for the PIN3 transgenes in the pin3 pin4 pin7 mutant background. Assuming that 25% of the segregating population represent non-transgenic pin3 pin4 pin7 segregants, the 25% of the seedlings of the analysed population (n >50 for pin3 pin4 pin7 PIN3 S4A S5A; n >25 for pin3 pin4 pin7 PIN3 S4A S5A and pin3 pin4 pin7 PIN3 S1A S2A S3A S4A S5A) with the lowest hypocotyl angle were excluded from the analysis. The T2 progeny of at least three independent transgenic lines was analysed for each transgene, and in each case the three lines with the strongest phenotypic suppression were chosen for the graphic representations and statistical analyses. The variance between the individual transgenic populations was analysed with a Kruskal-Wallis ANOVA on ranks (Kruskal and Wallis, 1952).
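As an illustration only, the seedling filtering and the rank-based group comparison described above could be scripted roughly as in the following sketch (Python; the input file, column names and the fixed 25% exclusion fraction are assumptions made for illustration, not part of the original analysis pipeline):

```python
# Sketch: exclude the presumed non-transgenic quarter of each segregating
# population (lowest hypocotyl bending angles), then compare genotypes with
# a Kruskal-Wallis ANOVA on ranks.
import pandas as pd
from scipy.stats import kruskal

# hypothetical input: one row per seedling with columns "line" and "angle"
df = pd.read_csv("hypocotyl_angles.csv")

def drop_presumed_segregants(group, fraction=0.25):
    # assume the lowest-bending 25% of each population are non-transgenic
    # pin3 pin4 pin7 segregants and exclude them from the analysis
    cutoff = group["angle"].quantile(fraction)
    return group[group["angle"] >= cutoff]

filtered = df.groupby("line", group_keys=False).apply(drop_presumed_segregants)

samples = [g["angle"].to_numpy() for _, g in filtered.groupby("line")]
h_stat, p_value = kruskal(*samples)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.3g}")
```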
Arabidopsis auxin transport assays
To measure auxin transport in Arabidopsis inflorescence stems, 25-mm stem sections were cut above the rosette of 5-week-old plants and placed, in inverted orientation, into 30 µl auxin transport buffer containing 500 pM IAA, 1% (wt/vol) sucrose, 5 mM 2-(N-morpholino)ethanesulfonic acid (MES), pH 5.5, with or without 100 µM 1-N-naphthylphthalamic acid (NPA). At the beginning of the transport experiment, the stem segments were transferred to 30 µl auxin transport buffer containing 417 nM (11 kBq) [3H]-IAA (American Radiolabeled Chemicals, St. Louis, MO). After 2 hr, 5-mm segments were dissected from the inflorescence stem, the lowermost 5-mm segment was discarded, and the remaining segments were macerated overnight in 3 ml QuickSafe A (Zinsser Analytic, Frankfurt, Germany). [3H]-IAA was quantified using a liquid scintillation analyzer (Tri Carb 2100TR; Perkin-Elmer). The results presented are the average and standard deviation of at least four biological replicate measurements in the case of wild type, d6pk mutants, pin1 PIN1 and pin1 PIN1 S4A, and at least two biological replicates in the case of pin1 or the NPA-treated wild type. The experiments were repeated with comparable results, and the result of one representative experiment is shown. Where relevant, auxin transport measurements were compared using a linear mixed-effects model analysis (fixed factors) in the R software package.
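The comparison itself was run in R; purely as an illustrative sketch of an equivalent analysis, a linear mixed-effects model with genotype as fixed factor and the individual stem (biological replicate) as grouping factor could be fitted in Python with statsmodels. The file and column names are hypothetical:

```python
# Sketch: compare [3H]-IAA counts between genotypes across stem segments,
# treating the individual stem (biological replicate) as the grouping unit.
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical long-format table with columns "genotype", "replicate"
# (individual stem) and "cpm" ([3H]-IAA counts per segment)
data = pd.read_csv("stem_transport.csv")

model = smf.mixedlm("cpm ~ genotype", data, groups=data["replicate"])
result = model.fit()
print(result.summary())   # fixed-effect estimates per genotype
```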
Oocyte auxin efflux assays
Xenopus laevis oocyte collection was performed as previously described and cRNA injection was carried out the day after surgery (Kottra et al., 2009). cRNA was synthesized using the mMessage Machine SP6 Kit (Life Technologies, Carlsbad, CA) and cRNA concentration was adjusted to 300 ng/µl for PIN and 150 ng/µl for protein kinase, respectively. Oocytes were injected with ∼50 nl of a 1:1 mixture of cRNAs for PIN and protein kinase. If only PIN or protein kinase cRNA was injected, the cRNA was mixed 1:1 with water (mock control). Following injection, oocytes were incubated in Barth's solution containing 88 mM NaCl, 1 mM KCl, 0.8 mM MgSO4, 0.4 mM CaCl2, 0.3 mM Ca(NO3)2, 2.4 mM NaHCO3, 10 mM HEPES (pH 7.4) supplemented with 50 µl gentamycin at 16°C for 4 days to allow for protein synthesis. An outside medium buffer at pH 7.4 was chosen to prevent passive rediffusion of IAA into the oocytes, which would take place at acidic pH. At the beginning of the oocyte experiment, 10 oocytes were injected for each time point with 50 nl of a 1:5 dilution (in Barth's solution) of [3H]-IAA (25 Ci/mmol; 1 mCi/ml; ARC, St. Louis, MO) to reach an intracellular oocyte concentration of ∼1 µM [3H]-IAA based on an estimated oocyte volume of 400 nl (Broer, 2010). After [3H]-IAA injection, oocytes were placed in ice-cold Barth's solution for 10 min to allow substrate diffusion and closure of the injection spot. Subsequently, oocytes were washed and transferred to Barth's solution at 21°C to allow for auxin efflux. To stop auxin efflux, oocytes were washed twice and lysed individually in 100 µl 10% SDS (wt/vol) at selected time points, and the residual amount of [3H]-IAA in each oocyte was determined by liquid scintillation counting. At least 10 oocytes were measured per time point, and mock as well as other negative controls were performed with the same oocyte batch to account for differences between batches. The relative transport rates of an experiment were determined by linear regression as shown in Figure 5-figure supplement 1. Transport rates of different biological replicates (i.e. oocytes collected from different female donors) were averaged and are presented as mean and standard error of at least three biological replicates. Comparability in protein expression between the respective wild type and mutant protein variants and between the experiments was confirmed using immunoblots or confocal laser scanning microscopy with an Axiovert 200 M microscope equipped with an LSM 510 META laser scanning unit (Zeiss, Jena, Germany).
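For illustration, the rate estimation described above (linear regression of the residual [3H]-IAA signal over time, normalized to the mock control) might look like the following sketch; the numbers are placeholders for illustration, not measured values:

```python
# Sketch: estimate relative efflux rates from the residual [3H]-IAA signal
# remaining in the oocytes over time, then normalize to the mock control.
import numpy as np
from scipy.stats import linregress

# placeholder numbers, for illustration only: mean residual signal
# (arbitrary units) at each sampling time point (min)
time = np.array([0, 10, 20, 30, 40])
residual = {
    "mock":        np.array([100.0, 97.0, 95.0, 93.0, 91.0]),
    "PIN1":        np.array([100.0, 95.0, 90.0, 86.0, 82.0]),
    "PIN1 + D6PK": np.array([100.0, 88.0, 77.0, 68.0, 59.0]),
}

# efflux rate = negative slope of the residual signal over time
rates = {name: -linregress(time, counts).slope for name, counts in residual.items()}

# express each rate relative to the mock control, as in the figures
relative = {name: rate / rates["mock"] for name, rate in rates.items()}
print(relative)
```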
Phosphorylation experiments with recombinant PIN cytoplasmic loop substrates were performed using 0.2 µg GST:D6PK or GST:PID and 0.5 µg GST:PIN substrate in a reaction buffer containing 25 mM Tris pH 7.5, 5 mM MgCl2, 0.2 mM EDTA, 1 × cOmplete protease inhibitor cocktail (Roche, Penzberg, Germany), and 0.5 µCi [γ-32P]ATP (370 MBq, specific activity 185 TBq; Hartmann Analytic, Braunschweig, Germany). Reactions were incubated for 1 hr at 30°C and separated on 10% SDS-PAGE. Gels were dried using a vacuum drier and exposed to X-ray film. Band intensities were quantified using MultiGauge v.3.0 and normalized to the band intensities of the wild type.
Phosphorylation experiments with recombinant PIN cytoplasmic loop substrates for mass spectrometric analysis were performed at 30°C for 1 hr in a non-radioactive reaction buffer containing 25 mM Tris pH 7.5, 5 mM MgCl 2 , 0.2 mM EDTA, 1 × cOmplete protease inhibitor cocktail (Roche, Penzberg, Germany), 0.15 mM ATP, 1 × PhosSTOP phosphatase inhibitor cocktail (Roche, Penzberg, Germany) with 5 µg purified recombinant D6PK and 5 µg purified recombinant PIN cytosolic loop. For subsequent mass spectrometric analyses, the reactions were separated on a 10% SDS-PAGE and stained with Coomassie Brilliant Blue. PIN bands were cut from the gel, destained with two washes of H 2 O and two washes of 50% acetonitrile/50 mM NH 4 HCO 3 pH 8 at 37°C. The bands were then sliced into small pieces (1 mm 2 ) and transferred to a low binding microcentrifuge tube. The gel pieces were then covered in a solution with 50 mM dithiothreitol (DTT), 50 mM NH 4 HCO 3 and incubated for 1 hr at 60°C. After cooling to room temperature, the solution was replaced by 100 mM iodoacetamide in 50 mM NH 4 HCO 3 and incubated for at least 1 hr in the dark. Subsequently, the gel pieces were washed three times by vortexing for 10 min in 50 mM NH 4 HCO 3 , pH 8. Following removal of the wash solution, the gel pieces were dried in a SpeedVac concentrator for 30 min and then incubated overnight in 10 µl Bovine Sequencing Grade Trypsin (Roche, Penzberg, Germany) dissolved in 50 mM NH 4 HCO 3, 1 mM CaCl 2 . The trypsin solution was subsequently removed and transferred to a low binding tube. 10 µl of trifluoroacetic acid (TFA; 5% wt/vol H 2 O) were then added to the gel pieces and after sonication for 1 min the supernatant was transferred to the tube containing the previous liquid. The same procedure was repeated by adding 10 µl 15% acetonitrile/1% TFA to the gel pieces and combining the liquid with the previous supernatants. Mass spectrometry was performed using an nLC-LTQ-Orbitrap tandem mass spectrometer at Biqualys (Wageningen, The Netherlands), and the data were analysed using the Bioworks software (Thermo Fisher Scientific, Ulm, Germany).
Protein alignment
PIN protein alignments were performed using the ClustalW alignment option of the Geneious (Biomatters, Auckland, New Zealand) software package.
Phototropism experiments
Seedlings were grown in the dark at 22°C on vertically oriented half-strength Murashige and Skoog (MS) agar (0.8%) plates for 3 to 4 days. Agravitropically growing seedlings were reoriented toward the gravity vector in safe green light 2 to 4 hr before the experiment. The seedlings were then transferred to GroBank growth chambers (CLF Plant Climatics, Wertingen, Germany) and illuminated with unilateral white light (100 µmol m −2 s −1 ). Plates were subsequently scanned and hypocotyl bending was measured for each seedling using the NIH ImageJ software.
Live imaging of DR5:GFP or fluorescent protein-tagged proteins was performed as previously described (Barbosa et al., 2014).
Protein pellets were resuspended in 6 M urea, 2 M thiourea, pH 8. Protein disulfide bridges were reduced by adding DTT and free cysteine residues were subsequently alkylated using iodoacetamide. 150 µg protein was then digested using sequencing grade trypsin (Promega) and desalted over C18 tips. Phosphopeptides were enriched over titanium dioxide, and the eluted phosphopeptides as well as the flow-through after peptide binding to titanium dioxide were kept for analysis. Synthetic peptides with fully 13C- and 15N-labeled C-terminal K or R were synthesized (Thermo Fisher Scientific, Ulm, Germany; Supplementary file 1C) and spiked into the tryptic peptide mixture at concentrations ranging from 40 to 250 fmol depending on peptide ionization properties.
Tryptic peptide mixtures including heavy standard peptides were then analysed by SRM using nanoflow HPLC (Easy nLC, Thermo Scientific, Ulm, Germany) coupled to a triple quadrupole mass spectrometer as mass analyser (TSQ Quantum Discovery Max, Thermo Scientific, Ulm, Germany). Peptides were eluted from a 75 µm analytical column (Easy Columns, Thermo Scientific, Ulm, Germany) on a linear gradient running from 10% to 30% acetonitrile in 60 min and were ionized by electrospray directly into the mass spectrometer. Specifically, phosphorylated and non-phosphorylated peptides were selected as targets of analysis after optimization of ionization conditions using the standard peptides. Visible transitions were selected from acquired mass spectra of the synthetic standard peptides. A list of transitions used for each (phospho)peptide sequence is available as Supplementary file 1C. The quadrupole Q1 was set as a mass filter for the respective parent ion, while Q3 was set to monitor specific fragment ions. For each peptide, at least three fragment ions were used. Mass width for Q1 and Q3 was 0.7 Da, scan time 5 ms.
Data analysis involving merging of fragment ion information to a parent ion sum of intensities and calculation of peak areas was done using the Software Pinpoint v.1.0 (Thermo Scientific, Ulm, Germany). For quantitative analysis of peptide abundance, ion intensity sums of the measured transitions were used and averaged between up to three biological replicates. Ion intensity sums of spiked-in heavy peptide were used to normalize for sample-to-sample variation.
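A minimal sketch of this quantification step (summing transition intensities per parent ion, normalizing the endogenous signal to the spiked-in heavy standard and averaging replicates) is given below; the input table and column names are assumptions made for illustration, not an export format of the instrument software:

```python
# Sketch: merge transition intensities to a parent-ion sum, normalize the
# endogenous ("light") signal to the heavy-labeled standard, and average
# across biological replicates.
import pandas as pd

# hypothetical long-format table: one row per transition measurement with
# columns "peptide", "label" ("light"/"heavy"), "replicate", "intensity"
srm = pd.read_csv("srm_transitions.csv")

# parent-ion intensity = sum of the intensities of its monitored transitions
summed = (
    srm.groupby(["peptide", "label", "replicate"])["intensity"]
       .sum()
       .unstack("label")
)

# normalize the endogenous signal to the spiked-in heavy standard
summed["light_to_heavy"] = summed["light"] / summed["heavy"]

# average the normalized abundance per peptide over the biological replicates
abundance = summed.groupby("peptide")["light_to_heavy"].agg(["mean", "sem"])
print(abundance)
```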
Sonic Hedgehog Carried by Microparticles Corrects Angiotensin II-Induced Hypertension and Endothelial Dysfunction in Mice
Microparticles are small fragments of the plasma membrane generated after cell stimulation. We recently showed that Sonic hedgehog (Shh) is present in microparticles generated from activated/apoptotic human T lymphocytes and corrects endothelial injury through nitric oxide (NO) release. This study investigates whether microparticles bearing Shh correct angiotensin II-induced hypertension and endothelial dysfunction in mice. Male Swiss mice were implanted with osmotic minipumps delivering angiotensin II (0.5 mg/kg/day) or NaCl (0.9%). Systolic blood pressure and heart rate were measured daily for 21 days. After 7 days of minipump implantation, mice received i.v. injections of microparticles (10 µg/ml) or i.p. injections of the Shh receptor antagonist cyclopamine (10 mg/kg every 2 days) for one week. Angiotensin II induced a significant rise in systolic blood pressure without affecting heart rate. Microparticles reversed angiotensin II-induced hypertension, and cyclopamine prevented the effects of microparticles. Microparticles completely corrected the impairment of acetylcholine- and flow-induced relaxation in vessels from angiotensin II-infused mice. The improvement of endothelial function induced by microparticles was completely prevented by cyclopamine treatment. Moreover, microparticles alone did not modify NO and O2·− production in aorta, but significantly increased NO and reduced O2·− production in aorta from angiotensin II-treated mice, and these effects were blocked by cyclopamine. Altogether, these results show that microparticles bearing Shh correct angiotensin II-induced hypertension and endothelial dysfunction in aorta through a mechanism associated with Shh-induced NO production and reduction of oxidative stress. These microparticles may represent a new therapeutic approach in cardiovascular diseases associated with decreased NO production.
Introduction
Angiotensin II (Ang II), the principal effector peptide of the renin-angiotensin system, plays a major role in the initiation and progression of vascular diseases, such as hypertension, in part through reactive oxygen species [1]. The Ang II-induced increase in reactive oxygen species, in particular superoxide (O2·−), leads to decreased bioavailability of nitric oxide (NO), which impairs endothelium-dependent vasodilatation and promotes vasoconstriction. Ang II-induced increases in blood pressure, vascular O2·− levels, and endothelial dysfunction are improved either upon blockade of the system and/or upon prevention of oxidative stress leading to an increase of NO bioavailability [2].
Microparticles (MPs) are small fragments generated from the plasma membrane after cell stimulation. The composition of MPs and the messages they transport (proteins, mRNA or miRNA) can differ depending on their origin [3]. MPs can be engineered to over-express different proteins by driving the synthesis of the relevant protein in MP-producing cells [4]. We have demonstrated that MPs released by apoptotic/stimulated human T lymphocytes harbor the morphogen Sonic hedgehog (Shh) and improve endothelial function in the mouse aorta by increasing NO release [5]. Also, endothelial dysfunction in mouse coronary artery after ischemia/reperfusion can be prevented by treatment with Shh-carrying MPs [5]. Moreover, MPs expressing Shh favor in vitro angiogenesis [6] and the recovery of hindlimb flow after peripheral ischemia through the activation of endothelial NO synthase and the increase of NO release and pro-angiogenic factor production [7]. The present study further aims to investigate whether MPs bearing Shh may correct Ang II-induced hypertension and endothelial dysfunction in mice.
MP production
The human lymphoid CEM T cell line (ATCC, Manassas, VA) was used for MP production. Cells were seeded at 10⁶ cells/ml and cultured in serum-free X-VIVO 15 medium (Cambrex, Walkersville, MD). MPs were produced as described previously [8]. Briefly, CEM cells were treated with phytohemagglutinin (5 µg/ml; Sigma-Aldrich, St. Louis, MO) for 72 h, then with phorbol-12-myristate-13-acetate (20 ng/ml, Sigma-Aldrich) and actinomycin D (0.5 µg/ml, Sigma-Aldrich) for 24 h [8]. A supernatant was obtained by centrifugation at 750 g for 15 min, then at 1500 g for 5 min, to remove cells and large debris, respectively. MPs from the supernatant were washed after three centrifugation steps (45 min at 14,000 g) and recovered in 400 µl NaCl (0.9% w/v). Washing medium from the last supernatant was used as control. The amount of MPs was determined by measuring MP-associated proteins using the method of Bradford, with BSA (Sigma-Aldrich) as the standard [5].
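For orientation, the Bradford quantification amounts to reading sample concentrations off a linear BSA standard curve; a minimal sketch (with placeholder absorbance values, not measured data) could look like this:

```python
# Sketch: estimate MP-associated protein concentration from a BSA standard
# curve (Bradford assay) using a simple linear fit.
import numpy as np

# placeholder values for illustration only (not measured data)
bsa_conc = np.array([0.0, 0.125, 0.25, 0.5, 1.0])    # BSA standards, mg/ml
bsa_a595 = np.array([0.00, 0.11, 0.22, 0.43, 0.85])  # absorbance at 595 nm

# linear standard curve: A595 = slope * concentration + intercept
slope, intercept = np.polyfit(bsa_conc, bsa_a595, 1)

def mp_protein_conc(a595, dilution_factor=1.0):
    """Back-calculate MP protein concentration (mg/ml) from absorbance."""
    return (a595 - intercept) / slope * dilution_factor

print(mp_protein_conc(0.35))  # concentration of an MP sample with A595 = 0.35
```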
Ethics statement
The procedure followed in the care and euthanasia of the study animals was in accordance with the Guide for the Care and Use of Laboratory animals published by US National Institutes of Health (NIH Publication No. 85-23, revised 1996) and was approved by the Ethical Committee for Animal Research of Angers University.
Animals
Six groups of male Swiss mice (6-8 week old) were used: (i) group treated with infusion of saline by osmotic pump for 2 weeks, (ii) group receiving Ang II (Sigma-Aldrich, 0.5 mg/kg/ day) infusion by osmotic pump for 2 weeks, (iii) group receiving saline by osmotic pump for 2 weeks and i.v. injection of MPs (10 µg/ml of blood) every two days over the last week, (iv) group receiving Ang II by osmotic pump for 2 weeks and i.v. injection of MPs every two days over the last week, (v) group receiving i.p. injection of cyclopamine (Biomol International, Plymouth Meeting, PA, 10 mg/kg) every two days over the last week, and (vi) group receiving Ang II infusion by osmotic pump for 2 weeks and i.v. injection of MPs every two days over the last week, and i.p. injection of cyclopamine. All experiments were conducted in mice housed in a temperature-controlled animal facility with a 12-hour light/dark cycle and free access to tap water and rodent chow.
Ang II Infusion
Ang II at a dose of 0.5 mg/kg/day was delivered over 2 weeks via unprimed osmotic minipumps (Model 2004, Alzet Osmotic Pumps, Cupertino, CA) that were subcutaneously implanted into the back of the mice. For control experiments, mice were treated with saline delivered via osmotic minipumps. Animals were anesthetized with 2.5% isofluorane in 1.5 l/min O2 for the duration of the surgical implantation procedure. Buprenorphine (1 mg/kg) was administered by s.c. injection immediately prior to surgery.
Blood pressure measurements
Non-invasive blood pressure was measured by the tail-cuff method (Letica, Barcelona, Spain). Briefly, all animals were trained every day over a period of one week to get accustomed to the device. Measurements were performed over one week prior to pump implantation and for 14 days after surgery. A total of 10 consecutive readings of systolic pressure and heart rate were recorded daily and averaged.
Arterial preparations and mounting
Mice were euthanized via CO2 asphyxiation, and the thoracic aorta and the proximal segment of the small bowel were removed, pinned in a dissecting dish and cleaned of fat and connective tissue.
Segments of aorta (2 mm in length) were mounted on myographs filled with physiological salt solution (PSS). Aortic rings were stretched to a passive wall tension of 1 g. The PSS was continuously kept at 37°C and gassed with 95% O2 and 5% CO2 at pH 7.4. Isometric tension was recorded and collected by a force transducer. Cumulative acetylcholine (ACh, 1 nM–10 µM) concentration-response curves were obtained after pre-contraction of the artery with U46619 (80% of the maximal contractile response).
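Concentration-response curves of this kind are commonly summarized by their maximal response (Emax) and half-maximal effective concentration (EC50); a hedged sketch of such a fit with a four-parameter logistic function is shown below (placeholder values, not data from this study):

```python
# Sketch: fit a four-parameter logistic (Hill) model to a cumulative ACh
# concentration-relaxation curve to estimate Emax and EC50.
import numpy as np
from scipy.optimize import curve_fit

def hill(log_conc, bottom, top, log_ec50, slope):
    # relaxation as a function of log10[ACh]
    return bottom + (top - bottom) / (1 + 10 ** ((log_ec50 - log_conc) * slope))

# placeholder values for illustration only (not measured data)
log_ach = np.log10([1e-9, 1e-8, 1e-7, 1e-6, 1e-5])    # mol/l
relaxation = np.array([5.0, 20.0, 55.0, 80.0, 85.0])   # % of pre-contraction

popt, _ = curve_fit(hill, log_ach, relaxation, p0=[0.0, 100.0, -7.0, 1.0])
bottom, top, log_ec50, hill_slope = popt
print(f"Emax ~ {top:.0f} %, EC50 ~ {10 ** log_ec50:.2e} M")
```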
Second-order branches of mouse superior mesenteric arteries were mounted in an arteriograph. Briefly, dissected arteries were mounted on two glass cannulas in the arteriograph chamber and attached with nylon ties. Arteries were bathed in PSS (pH 7.4; PO2 160 mm Hg, PCO2 37 mm Hg). Pressure was then set at 75 mm Hg. The presence of functional endothelium was assessed by the ability of ACh (10 µM) to induce more than 50% relaxation of vessels pre-contracted with U46619. To obtain active pressure versus diameter curves, diameter changes were measured at each step when intraluminal pressure was increased from 10 to 125 mm Hg.
Statistical Analysis
The results are expressed as means ± SEM. Comparisons among different groups were made by one-way ANOVA followed by Bonferroni post hoc test. P < 0.05 was considered to be statistically significant.
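As a minimal sketch of the workflow described above (one-way ANOVA followed by Bonferroni-corrected pairwise comparisons), assuming placeholder group values rather than the actual measurements:

```python
# Sketch: one-way ANOVA across treatment groups followed by Bonferroni-
# corrected pairwise t-tests.
from itertools import combinations
from scipy.stats import f_oneway, ttest_ind
from statsmodels.stats.multitest import multipletests

# placeholder values for illustration only (not measured data)
groups = {
    "saline":              [118, 121, 119, 122],
    "Ang II":              [152, 158, 149, 155],
    "Ang II + MP":         [124, 127, 121, 126],
    "Ang II + MP + cyclo": [150, 154, 148, 153],
}

f_stat, p_global = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_global:.3g}")

# pairwise comparisons, Bonferroni correction of the raw p-values
pairs = list(combinations(groups, 2))
raw_p = [ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
reject, p_adj, _, _ = multipletests(raw_p, method="bonferroni")
for (a, b), p in zip(pairs, p_adj):
    print(f"{a} vs {b}: adjusted p = {p:.3g}")
```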
Effects of MPs on systolic blood pressure and heart rate
Systolic blood pressure was stable throughout the duration of the experiment in control mice infused with saline and in those treated either with MPs alone or with cyclopamine (Figure 1A). Infusion of mice with Ang II resulted in a significant rise in blood pressure that was stable during its infusion (Figure 1B). In another set of experiments, when the hypertension induced by Ang II had stabilized, i.v. injection of MPs decreased systolic blood pressure back to the values of control animals. This effect of MPs lasted until the end of the experimental procedure. Interestingly, cyclopamine completely prevented the ability of MPs to reverse the increase in blood pressure induced by Ang II infusion.
None of these treatments was associated with significant changes in heart rate values throughout the experiments (Figure 1C).
MPs improve endothelial dysfunction induced by Ang II infusion
The ACh-induced relaxation was not significantly different in aorta taken either from control or MP-treated mice (Figure 2A). The endothelium-dependent relaxation to ACh was significantly impaired in aorta taken from mice injected with Ang II compared with those from mice injected with vehicle ( Figure 2B). The decrease in maximal response was not associated with changes of the sensitivity to the agonist. The endothelial dysfunction induced by Ang II treatment was entirely reversed after administration of MPs showing that MPs may preserve endothelial integrity and functionality in hypertension-induced endothelial injury ( Figure 2B). Interestingly, cyclopamine completely prevented the ability of MPs to correct endothelial dysfunction in vessels from Ang II-treated mice ( Figure 2C).
To evaluate whether MPs affect smooth muscle function, concentration-response curves to sodium nitroprusside were performed in aorta. The relaxation to sodium nitroprusside was not significantly different in vessels from the four groups of mice (not shown).
5-HT produced a concentration-dependent increase in tension in vessels of saline-treated animals with functional endothelium. MPs did not affect this response when used alone. As expected, infusion of angiotensin II induced hyperreactivity ( Table 1) which was not affected by MP treatment ( Table 1).
MPs prevent the decrease in NO production and the oxidative stress induced by Ang II
In aorta, MPs modified neither NO nor O2·− production in comparison to saline-treated mice. By contrast, although the reduction of NO production in aorta taken from Ang II-treated mice did not reach statistical significance, Ang II increased O2·− production (Figure 2D and 2E). Interestingly, MP treatment significantly enhanced NO production and reduced O2·− production in Ang II-treated mice. After blockade of the Shh pathway by cyclopamine, the MP effects on NO production and oxidative stress were abolished (Figure 2D and 2E).
Impaired flow-induced dilation by Ang II infusion is improved by MPs in small mesenteric arteries
In small mesenteric arteries (SMAs), Ang II infusion impaired flow-induced dilation when compared with vessels taken from saline-treated mice (Figure 3). MPs alone slightly reduced flow-induced dilation, but they partially restored the dilation attenuated by Ang II infusion (Figure 3).
Discussion
We report that MPs bearing Shh completely correct Ang II-induced hypertension without affecting heart rate, via a pathway sensitive to the Shh inhibitor cyclopamine. The beneficial effect of MPs was associated with improved endothelial responses to both ACh and flow in conductance and resistance arteries, respectively. However, MP treatment did not reverse the increased reactivity of the aorta to the vasoconstrictor agent 5-HT. Of particular interest, these effects of MPs were not due to a change in the sensitivity of smooth muscle cells to NO but rather to both an increase in NO production and a decrease in oxidative stress. Altogether, these results underscore the potent effect of MPs as an antihypertensive agent acting through an increased bioavailability of endothelial NO in conductance and resistance arteries.
Our strategy was to use MPs as pharmacological tools to reduce deleterious signaling in the vascular wall. For this purpose, the effect of MPs harboring Shh was assessed. This type of MPs improves endothelial function in the mouse aorta by increasing both eNOS expression and activity via PI3-kinase and Akt pathways and by reducing reactive oxygen species in human endothelial cells [5]. Also, endothelial dysfunction in mouse coronary artery after ischemia/reperfusion can be prevented by treatment with Shh-carrying MPs [5]. Moreover, MPs expressing Shh favor in vitro angiogenesis [6] and the recovery of hindlimb flow after peripheral ischemia through the activation of endothelial NO synthase and the increase of NO release and pro-angiogenic factor production [7]. Increased angiogenesis by MPs expressing Shh might participate in their ability to reduce vascular resistance and therefore vascular remodeling in Ang II-induced hypertension. However, in the present study, MPs expressing Shh+ slightly but significantly attenuated the ACh response in small mesenteric arteries (Figure 3) but did not affect the ACh response in the aorta under the same experimental conditions (Figure 2A). The differences between these results and those previously described [5] might be due to the duration of the treatment (one week versus 24 h, in the present study and in [5], respectively) and/or the vascular bed studied. In addition, MPs expressing Shh+ may harbor other proteins besides Shh, as well as mRNA and miRNA, suggesting that the slight attenuation of flow-induced dilation might not be induced by Shh+ itself but by other MP components. We cannot distinguish among these possibilities. Nevertheless, it is clearly shown that Shh+ MP treatment corrected the endothelial dysfunction of small mesenteric arteries in response to flow. Recently, we have shown that MPs carrying Shh protect endothelial cells against apoptosis by a dual mechanism. On the one hand, MPs expressing Shh carry active antioxidant enzymes, catalase and isoforms of the superoxide dismutase, and on the other hand, they have the ability to increase the expression of manganese-superoxide dismutase in endothelial cells, through both an internalization process and a cyclopamine-sensitive mechanism [9]. All of these effects of MPs expressing Shh probably explain their ability to completely abrogate Ang II-induced hypertension and endothelial dysfunction in these mice. Indeed, the reduced vasodilation in response either to ACh or to flow in both conductance and resistance arteries was completely corrected upon MP treatment. Furthermore, these effects were associated with the ability of MPs to correct both the reduced NO production and the increased O2·− production in the vessel wall. It should be noted that both NO and O2·− production are variable, but not the relaxation induced by ACh. In this respect, in Ang II-induced hypertension, the relaxation to ACh involves other factors than NO, including reactive oxygen species from monoamine oxidases [10], NADPH oxidases and mitochondria, and cyclo-oxygenase-derived metabolites [11]. Thus, NO and O2·− production were not variable in aorta taken from control animals, in which the other endothelial factors mentioned above are not produced. It is therefore not surprising to observe such apparent discrepancies.
Nevertheless, the conclusion of the present manuscript still holds, inasmuch as the correction of endothelial function with respect to the changes in these two radicals participates in the antihypertensive effect of Shh+ MPs.
Few studies have described the role of the Shh pathway in hypertension. It has been shown that although Shh is upregulated in retinas exposed to ocular hypertension, and both exogenous and endogenous Shh have neuroprotective effects on damaged retinal ganglion cells, they did not affect intraocular pressure [12]. Also, in a model of obesity-associated hypertension, targeting adipocytes in mice fed a high-fat diet with the human heme oxygenase-1 gene decreased adiposity and hypertension, and this was accompanied by increased Shh expression in adipocytes [13]. In the present study, since all effects of MPs were prevented by cyclopamine, one can advance the hypothesis that they act through a mechanism sensitive to blockade of Shh. MP treatment was, however, not able to reverse the hyperreactivity to 5-HT observed in Ang II-induced hypertensive animals. It is known that Ang II induces cyclo-oxygenase (COX)-2 expression and prostanoid production in vascular cell types such as endothelial cells, vascular smooth muscle cells, and adventitial fibroblasts, as well as in whole vessels. Oxidative stress has also been suggested to induce COX activity or up-regulate COX-2 expression, and this is particularly increased in hypertension. Recently, an excess of reactive oxygen species from NADPH oxidase and/or mitochondria and the increased vascular COX-2/TP receptor axis were shown to act in concert to induce vascular dysfunction, including increased vascular reactivity, and hypertension in the same experimental model [11]. Since MPs harboring Shh decrease oxidative stress [9] but are not able to counteract hyperreactivity to 5-HT in aorta from Ang II-induced hypertensive mice, it is plausible to hypothesize that MPs are ineffective against the hyperreactivity associated with COX-2/TP receptor activation. Further studies are needed to sort out the underlying mechanisms.
In conclusion, these findings suggest that Shh-positive MPs could represent a potent tool for stimulating NO release and reducing oxidative stress in the vessel wall to completely reverse Ang II-induced hypertension, and they extend the use of such MPs to the treatment of disease states associated with endothelial dysfunction in addition to those associated with impaired angiogenesis.
School-related experience and performance with inflammatory bowel disease: results from a cross-sectional survey in 675 children and their parents
Objective We describe school performance and experience in children with inflammatory bowel disease (IBD) across Germany and Austria. Predictors of compromised performance and satisfaction were evaluated to identify subgroups of increased risk. Design This cross-sectional analysis was based on a postal survey in children aged 10–15 with Crohn’s disease, ulcerative colitis or unclassified IBD and their families. Multivariate regression analysis was used to assess influential factors on parental satisfaction with school, attending advanced secondary education (ASE), having good marks and having to repeat a class. Satisfaction was assessed based on the Child Healthcare–Satisfaction, Utilisation and Needs instrument (possible range 1.00–5.00). Results Of 1367 families contacted, 675 participated in the study (49.4%). Sixty-eight participants (10.2%) had repeated a year, 312 (46.2%) attended ASE. The median school satisfaction score was 2.67 (IQR 2.00–3.33). High socioeconomic status (SES) and region within Germany were predictive for ASE (OR high SES 8.2, 95% CI 4.7 to 14.2). SES, female sex and region of residence predicted good marks. Grade retention was associated with an active disease course (OR 2.7, 95% CI 1.4 to 5.3) and prolonged periods off school due to IBD (OR 3.9, 95% CI 1.8 to 8.6). Conclusions A severe disease course impacted on the risk of grade retention, but not on type of school attended and school marks. Low satisfaction of parents of chronically ill children with the school situation underlines the need for a more interdisciplinary approach in health services and health services research in young people.
Introduction
Growing up with chronic disease poses many challenges to all concerned: patients, parents, care givers and the social environment. 1 This is also true for the inflammatory bowel diseases (IBD), which comprise Crohn's disease (CD), ulcerative colitis (UC) and colitis unclassified. 2 These diseases are typically first diagnosed in young adults, but may occur at any age, including childhood. 3 They are characterised by a chronically relapsing course with a large variety of potential complications. 4 In children, the disease may interfere substantially with physical and psychosocial development. 5 6 This can also imply difficulties in school, potentially compromising overall educational achievement and employability. [7][8][9] Evidence of relevantly impaired educational outcomes of adolescents with IBD as compared to the general population is not consistent. [10][11][12][13] In general, socioeconomic or psychological characteristics seem to have more impact on schooling success than the somatic condition as such, at least unless the disease course is severe. 8 14 In a recent analysis from the Swiss IBD cohort, coping of affected children was described as excellent, including school performance. 15 Still, high levels of dissatisfaction were reported in terms of disease-related communication with school staff and appropriate health and toilet facilities. 12 16 17 School absenteeism is another frequently reported problem in chronic disease of childhood. 9 18 Overall, there are few quantitative data from larger patient groups to help target or prevent perceived problems in those most likely to be concerned. We recently performed a large survey in children with IBD to assess the current situation of medical care in this group. 19 20 Preliminary analysis showed high satisfaction of parents relating to competence and communication of physicians (manuscript in preparation). In contrast, the situation at school received rather low scores. This is remarkable, as low patient satisfaction ratings are, in general, uncommon. 21 The main focus of the current analysis was explorative, aiming at describing the experience with the situation in school from the perspective of children with IBD and their parents. In addition, we examined determinants of school performance as indicated by, on the positive side, receiving advanced secondary education (ASE) and receiving good marks, and, on the negative side, grade retention.

Summary box
What is already known about this subject?
► Crohn's disease and ulcerative colitis are increasingly diagnosed in childhood and early adulthood.
► These inflammatory bowel diseases (IBD) run a chronic course with potentially severe impact on quality of life.
► There is concern that inflammatory bowel disease (IBD) may impact on education and job perspectives of affected children.
What are the new findings?
► Parents showed low satisfaction with the disease-related situation at school, in particular in students with poor marks.
► Children with severe disease course and longer periods of absence had a markedly increased risk of grade retention.
► There was no association between disease-related factors and type of school attended or marks received.
How might it impact on clinical practice in the foreseeable future?
► More attention needs to be paid to training of teachers and communication about the disease at school, in particular where students are lower performers. Also, solutions need to be discussed and implemented to help children catch up with school in periods of absence due to IBD.
Methods
Design and setting, patient recruitment
A mailed survey was performed on the quality of care in paediatric IBD. 20 22 Eligibility criteria included a physician-confirmed diagnosis of CD, UC or colitis unclassified and age between 10 and 15 years. Recruitment was via the German-language paediatric IBD registry CEDATA-GPGE, 23 24 the associated Saxonian Epidemiological Paediatric IBD registry, 25 IBD-related research mailing lists and the national patient organisation.
Data collection
Parents and patients received multimodular postal questionnaires. Validated assessment instruments were included wherever possible. 26 School-related information included type of school and current class attended, prolonged periods away from school due to IBD (ever missing 2 weeks or more at a time: never/very rarely, occasionally, frequently), skipping or repeating a class, most recent term marks in Math and German, and the general impression whether the disease leads to compromised school performance (no, minor/rarely, marked/frequently). Marks in Germany range from 6 to 1, with 5 and 6 corresponding to a fail mark and 1 and 2 to a good mark.
In Germany, secondary education starts following 4 years of elementary school and commonly entails the decision for a specific type of secondary school, depending on school performance, interest and preferences. Depending on the school type, we classified secondary education as basic (expected school leaving after 10 years of education), intermediate (expected educational trail of 10 years overall, ± additional non-academic job-qualifying training) and advanced secondary education (ASE; grammar school, Gymnasium, with education expected to lead to a university-qualifying degree). Any others (eg, Waldorf, integrative schools) were classified as 'other'.
Parents judged the child's physical and mental development in comparison to other children of similar age (normal, ahead, behind). From quality of life measures, we present results for the school-related items (getting along at school, having missed time at school recently). 27 28 Also, we asked the children whether they disclosed the disease at school.
Satisfaction with the situation at school was available as a subdomain of the internationally validated generic questionnaire on Child Healthcare-Satisfaction, Utilisation and Needs (CHC-SUN). 29 30 Items relate to (1) the knowledge of teachers about the child's disease, (2) the teachers' attentiveness to the disease and (3) the health facilities available at school. For graphical display, we simplified the original five-point answer scales to no, moderate and high satisfaction; summary scores used the original 1-5 scale. The CHC-SUN school service subscore was calculated by estimating the mean value of all questions answered.
School-related free text comments are cited relating to the following questions: important life events (parents only), what else is considered important in care, good experiences, bad experiences, anything else you'd like to tell us (children and parents).
Sociodemographic variables
Information was collected on region of residence within Germany or Austria (seven regions based on neighbouring states), migration background (both parents born abroad OR German not the primary language spoken at home), family type (single parent households, both parents or other) and number of siblings. Parental socioeconomic status (SES) was assessed based on level of education, occupation and income of the parents and was used in three categories based on quintiles (high: highest 20%, medium: middle 60%, low: lowest 20%). 31 32

Disease-specific variables, health status and comorbidity
Type of disease included CD, UC and IBD not specified (IBD_NS) based on the questionnaire information. IBD_NS comprised IBD with unclear or inconsistent information, as well as unclassified IBD. Age at onset was classified as very early diagnosis (age 0 to <6), early (age 6 to 9) and adolescence (>10). For current health status, general information on current IBD-related well-being was used as reported by parents (global question, five categories collapsed to good, moderate and poor). Disease course during the preceding year (no relapse, one relapse, several relapses/persistent symptoms), having had resecting surgery and current and previous medical therapy were available to indicate severity of disease (parent information).
Statistical analysis
Description and exploration
Baseline description of the sample shows absolute numbers and percentages for categorical or categorised variables, stratified by age group for sociodemographic variables and by disease type for health-related variables.
In addition, school-related experience is reported in a descriptive way.
Regression analyses
Attending ASE, having repeated a class and recent marks in German and Math were used as indicators of school performance and thus represented outcomes in regression analysis, in addition to satisfaction with school as a measure of experience.
To model ASE and having repeated a class (grade retention), logistic regression was used. The median grade of Math and German was analysed as an ordinal response variable with adjacent-category logistic models. 33 Determinants of satisfaction measured by the CHC-SUN subscore were examined in linear regression analysis. All variables considered relevant as potential determinants of school performance (tables 1-4) underwent bivariate analysis. Variables were then selected into multivariate models based on Akaike's information criterion (AIC). For every step, changes in estimators and SEs were monitored to possibly detect overfitting. We present all regression results of the bivariate analysis, the fully adjusted models and the selected parsimonious models, with coefficients transformed to ORs for the logistic and adjacent-category models and 95% CIs for all estimates. For the adjacent-category models, the ORs correspond to an improvement of the median grade by 1. ORs <0.5 or >2.0 are considered strong associations, and ORs with a 95% CI excluding 1.0 are considered statistically significant.
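The analyses themselves were run in R (see below); purely as an illustration of this modelling step, the hedged Python sketch that follows fits a logistic regression for a binary outcome such as grade retention and converts the coefficients into odds ratios with 95% CIs. The data file and all column names are hypothetical and not taken from the study.

```python
# Hedged sketch (not the authors' R code): logistic regression for a binary
# school outcome (e.g. grade retention), reported as odds ratios with 95% CIs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis data set; column names are illustrative only.
df = pd.read_csv("survey.csv")  # e.g. columns: retention, ses, missed_school, active_course

model = smf.logit(
    "retention ~ C(ses, Treatment('medium')) + C(missed_school) + C(active_course)",
    data=df,
).fit()

# Exponentiated coefficients are odds ratios; exponentiated CI bounds give 95% CIs.
ors = np.exp(model.params)
ci = np.exp(model.conf_int())
print(pd.concat([ors.rename("OR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```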
All comparisons were considered exploratory. Regression analyses were programmed in R V.3.

Results

... were currently well, and 275 (41.2%) had been in stable remission over the preceding 12 months.
General school-related experience
A large majority of children reported they had generally gotten along well at school recently (38.8% reasonably well, 39.0% very well, overall n=555). Thirty-four (5.2%) children did not get on well, and 97 reported intermediate experience. Two hundred and eighty-five (42.2%) of the parents thought the child's situation at school was not affected by the IBD; a similar proportion (288, 42.7%) reported occasional problems due to IBD, and 98 (14.5%) thought the school situation was substantially compromised due to the child's IBD. Ninety-five children (14.1%) had not disclosed at school that they had the disease, 315 (46.7%) kept some secrecy (eg, restricted circle of persons or only certain aspects disclosed) and 260 (38.5%) were completely open about their IBD in school.
Asked about the current relative developmental stage of the child, 248 (36.7%) of the parents reported they perceived the child as physically lagging behind other children the same age. For the time before starting school, this proportion was reported as 15.9% (107 children). In contrast, there was no preschool-current time difference in the proportion of parents perceiving the children as below or behind in their mental development (same as other children of similar age: current: 511 (75.7%); preschool: 488 (72.3%)).
Indicators of school performance and trajectory
Time missed from school
Within the preceding 2 weeks, 49 (7.3%) had missed 50% or more of classes and 130 (19.3%) had missed school for less than 50% of the classes. Repeated prolonged periods off school at any time during the course of the disease were reported for 100 children (14.8%). Slightly more than a quarter (187, 27.7%) had never had this problem.
Type of school attended
There were 33 children still at elementary school. Three hundred and twelve children (46.2%) attended ASE, 43 (6.4%) basic secondary and 204 (30.2%) intermediate schools. Attending ASE was more likely with high parental SES, with an excessively high OR of 8.2 (95% CI 4.7 to 14.2), and less likely when parental SES was low (reference medium SES, table 1). Also, children from single parent families less often had ASE, as compared with those with both parents at home (OR 0.5, 95% CI 0.3 to 1.0). A migration background was more common in children in ASE, but this group was very small. Disease-related variables contributed to model fit, but none of them was associated with the outcome (table 1).
School marks, being a good student
Poor marks were rare: 21 and 4 of 675 children had failed Math and German, respectively, in the last term.
Grade retention
Grade retention was reported in 84 children (12.4%). Seven had skipped a class and were subsequently combined with those with a regular trajectory for analysis. There was a clear association with periods off school: of 100 students who had experienced prolonged periods off school, 28 (28.0%) had repeated a year (no or very rare periods: 9.1%) (OR 3.9, 95% CI 2.0 to 7.5). In the multivariate analysis, retention risk increased with low parental SES, time missed at school and an active disease course during the preceding year (table 3).
Satisfaction with the situation at school
All items of the school services domain of the CHC-SUN showed low satisfaction in around 40%, and the lowest proportion of high satisfaction for health-related facilities (figure 1). The summary score could be calculated for 630 persons. There was a clear trend of better scores with better school marks (figure 2). This was confirmed in the multivariate analysis (table 4). Also, satisfaction was higher in Eastern states as compared with West German regions. It was lower when the child was a single child, and when current health status was moderate or poor, as compared with good. (Due to the inclusion criteria, all children with early disease onset had long-standing disease. These variables were therefore combined to a single combination variable for multivariate analysis.)
Anecdotal information, free text comments
All parental free text statements alluding directly to IBD at school are quoted in figure 3. In contrast to parents, many children used the occasion for last-page comments. These often related to how they liked (or did not like) the questionnaire. All school-related children's comments from this last question are listed in figure 4.
Discussion
In this paper, we present data on school experience and performance from a large survey in German-speaking children with IBD. The most striking finding was the low satisfaction of parents with the situation at school. In contrast, of various indicators of school performance and trajectory, only grade retention was determined by disease-related variables. Around 15% of the children with IBD had repeatedly been off school in the past for prolonged periods of time due to the disease. For a similar proportion of children, parents felt the disease had a severe impact on school performance. Receiving ASE and getting good marks seemed to be unaffected by the disease, even if the course had been severe or many classes were missed.

Patient satisfaction scores are notoriously difficult to interpret due to the complex interplay between patient expectations and experience, or patient and provider characteristics, further confounded by a high variation in the way satisfaction is conceptualised and assessed. 21 35 36 A common problem in satisfaction surveys is ceiling effects, in that high levels are commonly encountered even where quality of care is known to be deficient. 37 Thus, the strikingly low school service satisfaction ratings, in particular in comparison with other domains of the instrument we used, are reason for concern. Stratification for various subgroups which might be particularly susceptible for lower quality school services did not render helpful results, as neither type of school nor severity of the disease showed convincing associations. Rather, it was the parents of children with poor marks who were particularly dissatisfied, while the parents of children with good marks were most satisfied. This may underline an impression that, as shown for health services research in general, positive attitudes towards an organisation or provider will also lead to better satisfaction ratings, irrespective of other quality indicators.

Comparable data on school satisfaction in chronically ill children are scarce, and none had been available for IBD until recently. The parent-reported instrument we used has since been modified for self-report in German-speaking youth, based on a mixed sample of 182 chronically ill adolescents including 28 patients with gastrointestinal (GI) disease. 38 In this study, 36% of patients in the GI group reported unmet needs relating to school or work, which was more than for any of the other disease groups (diabetes, multiple sclerosis, arthritis, skin disease, pulmonary conditions). With respect to the CHC-SUN, as in our survey, the school services subscore was lowest as compared with other domains in all patient groups, but in particular so for GI diseases. In fact, the mean score of 2.3 out of a maximum possible of 5 reported for this group was even lower than what was measured in our analysis. Thus, it seems that low satisfaction with school services is prevalent in chronically ill children, and in particular so in children with GI diseases. It was not asked what exactly parents associate when asked about, for example, facilities at school, but there is reason to assume that the importance of clean and easy to reach toilets for persons with IBD may take effect in this point. 12 39 Inappropriate provision and maintenance of lavatories in public schools in Germany has been the subject of public discussion for several years.
A prominent online newspaper received, in 2015, 3000 parent letters relating to school services, of which a third reported their children avoided using toilets at school. 40 It seems plausible that children with bowel disease would be even more sensitive to this issue. Similarly, embarrassment and privacy issues originating from insufficient knowledge of teachers are a particular concern. This is in concordance with the relatively low proportion of children who were completely open about the disease with teachers and fellow students (38.5%). In our survey, 15% kept the disease completely secret. Thus, there are specific challenges of suffering from bowel disease at school, reflected by the high proportion of reported unmet needs found by other investigators, even though this did not directly impact on educational success as measured by marks and ASE attendance, that is, expected type of school leaving degree.
Grade retention was the only indicator of school performance found to be associated with disease characteristics, hinting at a causal effect of having severe IBD. Most indicators of disease severity, as well as disease duration, increased the risk of having to repeat a class. As an example, more than a quarter of those with a recent chronic active disease course had repeated a year. This was also related to prolonged periods off school. The absolute frequency is difficult to interpret. Retention policies vary by type of school and state, with state-specific proportions of students having to repeat a year ranging from 1.3% in Berlin and Hamburg to 3.9% in Bavaria. 41 Reliable cumulative rates are not available, but may well be higher than what we observed in children with less severe IBD in our study.
There is debate both nationally and in the international literature on the usefulness of having to repeat a class. 42 43 In the context of long periods of missed time at school, getting more time may actually be considered beneficial by many children and their parents, in particular as more than a third of the parents we surveyed reported their child with IBD to lag behind other children of similar age with respect to their physical development. Problems of self-esteem and social interaction deriving from delayed growth and puberty have been reported in the literature. 44 45 We did not assess how the children felt about the retention. In any case, the mere fact of having to repeat a class is insufficient to judge whether this constitutes a problem to those concerned and whether it could have been avoided by appropriate support measures. We are currently performing a survey in children with IBD to find out more about any procedures in place to help students catch up with time missed from school, as well as their preferences.
For other indicators of school performance, specifically advanced schooling, beside state, parental SES was the most important influential factor of those available for evaluation in our survey. This association has been known and is generally interpreted as indicating social inequality within the German educational system. 46 Also, the differences by state are not specific to children with IBD, but reflect a known north-south gradient within Germany. 41 From the perspective of caring for IBD, it is reassuring that disease-related factors did not seem to have an impact.
It may seem unusual in our context of a clearly quantitative survey to narrate free text statements of patients. Of course, these constitute anecdotal evidence, expressed by a low number of single persons. The quotes cannot be considered in any way representative, and inference with respect to the relevance of the problems alluded to is not possible. They do, however, illustrate a few situations individual patients or parents considered particularly noteworthy and may help empathising with the special situation of being a child or adolescent (or his/her parent) with a chronic disease at school when planning and implementing further research, including interventions to improve the situation.
Our survey is strong in that it provides quantitative school related data from a large sample of young persons with IBD, using, wherever possible, validated instruments to collect information from both parents and children, with a focus on patient experience. There are a number of limitations. The most obvious derives from the fact that age appropriate schooling was one of several outcome indicators of quality of care in children, in a survey restricted to affected children. There were no healthy controls, so the actual burden afflicted by having the disease could not be quantified in absolute numbers. Moreover, while we appreciated the fact that good healthcare in chronically ill children is not just about physical health and vertical growth, we were in the end unprepared for the high importance parents placed on school-related issues. The endpoints chosen may insufficiently reflect how the school careers are affected by IBD, and we have now set up a study looking into support measures in more detail.
Generalisability may also be an issue. We have used a broad recruitment strategy in order to catch a wide spectrum of patients with IBD from different health contexts. Still, most patients were recruited via the paediatric specialist registry and the national patient organisation, both of which do not represent random samples of children with IBD. Rather, in combination with the response rate of around 50%, it is expected that we have surveyed parents with a particularly strong interest in the disease, better than average medical care and physician attachment, higher SES and higher compliance. This was also reflected by very low retention rates in patients with quiescent disease.
A specific challenge in examining school-related factors is the very diverse German educational system. Education is organised at state level and has seen a number of major asynchronous structural changes over the last years, compromising comparability. 46 47 Several factors could not be conclusively examined due to low numbers. This concerned, for example, residency in Austria, but also several vulnerable person groups, such as those with a migration background. The unexpected positive association of migration background with ASE may be due to selection effects, in that underascertainment of non-German families was over-proportional in lower SES families.
Clinical studies typically use disease activity indices to describe disease severity, for which several items were not available to us. 48 49 Recent disease course and parent global assessment of the child's health may have been insufficiently sensitive to capture this aspect. We were in this survey generally focused on the patient perspective on the situation of care, thus dealing with perceptions. The degree to which what parents and children report would be substantiated by 'objective' measures remains questionable. However, in the end, it is the children who suffer the disease and have to live with the consequences, so even where findings remain vague and unexplained, there is reason for concern when a specific area of daily life is reported to represent a problem.
Conclusions
Our study shows, in accordance with the limited data in the literature, a high degree of dissatisfaction with health-related school services in parents of children with IBD. School trajectories in children with IBD take longer due to grade retention, but from our survey there is no indication that they will, in the end, result in poorer prospects on the job market, as school marks and type of school attended were not compromised. While school facilities and teachers' knowledge and empathy seem the most obvious problems, more research is needed to identify promising targets and interventions for improvement. Our results may form a helpful basis in this respect.

Acknowledgements The authors thank the national patient organisation (DCCV e.V.) for facilitating patient recruitment, and all children and their parents for sharing their experience by participating in this study.
Contributors MF contributed to literature search, refinement of study questions, statistical analyses, first draft of the manuscript. AS contributed to statistics and non-standard multivariate analyses. SK and MC were consulting on the pediatric content of the study, pretesting of questionnaire, patient recruitment, review of manuscript. AB and MWL provided the pediatric expertise and contributed to the questionnaire development, patient recruitment and review of manuscript. JP was responsible for the coordination of survey and data collection, development and editing of questionnaires and data entry and review of manuscript. AT prepared the study concept and design; did the literature search, funding proposal, development of questionnaires; supervision of survey, statistical analysis and writing of manuscript. All authors contributed to and approved of the final manuscript.
Funding This work was supported by a grant from the German Ministry of Education and Science (BMBF, Ripi Study, grant number FK 01GY1139).
Competing interests None declared.
Patient consent Not required.
Ethics approval Ethical approval was granted (Bremen University Ethics Committee, date of approval 12 March, 2013). Informed consent was secured after written information from parents or guardians, and the children.
Provenance and peer review Not commissioned; externally peer reviewed.
Data sharing statement Data sharing is not possible due to patient consent excluding third party use of data. There are no additional unpublished data available.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/
Editorial: Biopsychosocial complexity research
COPYRIGHT © 2023 Schubert, Sulis, De La Torre-Luque and Schiepek. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Advancements in research over the past century have clearly helped people suffering from psychiatric disorders. However, many of these advancements have been based on a mechanistic-reductionistic approach consisting of traditional research designs and methods (e.g., randomized controlled trials, standardized questionnaires) that often fail to capture the full extent of human complexity. This may at least in part explain the inconsistencies, e.g., in conventional stress research, and the poor generalizability from laboratory to real life, naturalistic contexts (1,2).
Emerging evidence suggests that a paradigmatic shift to the biopsychosocial model of life (3) employing a biosemiotic-systemic approach (4) may accelerate progress in areas where a mechanistic-reductionistic approach has not been successful. To this end, general systems theory (5) can be used as a framework for biosemiotic-systemic thinking. It proposes that humans are deeply embedded in their environment and affected by the continuous influx of stimuli provided by nature, nurture and culture. Moreover, this human-environment entity is hierarchically structured, consisting of various vertically stratified levels of systems such as molecular, cellular, tissues, organs, person, relationship, family, population and ecosystem (6) (Figure 1).
From a systemic standpoint, dynamic complexity research suggests that environmental adaptation is connected with order transitions from lower to higher complexity levels. This transition is associated with new distinctive qualities and relationships, e.g., the psyche emerges from neuronal activity. In turn, these higher levels are superordinate to lower levels and set the boundary conditions for them (3,5,7). Such intersystemic activity features top-down/bottom-up regulatory circuits that are flexible enough to maintain the hierarchy's systemic integrity in the presence of stress. However, when stress is too great, this equilibrium is disturbed, leading to dysfunctional and disordered activity affecting all levels simultaneously in a contextually dependent manner (8).
In addition, from a biosemiotic standpoint, life evolves through continuous production, exchange and interpretation of signs at all levels of the biopsychosocial hierarchy (9). On the person level, the interpretation of a sign, e.g., a social stressor, is always connected to the whole biography of the person, including conscious and unconscious as well as objective and subjective factors. According to biosemiotics it is the subjective meaning a sign has for the interpreting person that determines how the person as a whole responds to that sign and whether the process of interpretation, i.e., the assignment of meaning, is good or bad for health (4,9).
From these introductory remarks, it should be clear that simply paying attention to diversity in biopsychosocial research and merely combining biological, psychological and social data sets cannot do justice to the complexity of human existence (10). But how can research be conducted such that it is able to match the complexities of the biopsychosocial model of life? We asked this question about two years ago as part of a Frontiers Research Topic and received five thematically relevant papers, of which four are theoretical and one empirical.
Sturmberg's paper introduces the field of complex pattern formation in disease and how it can be used to improve patient management. It then offers various perspectives supporting the philosophical/theoretical proposition of the complex-adaptive nature of health, i.e., Ashby's law of requisite variety, multiple sufficient causes, network physiology, inflammatory regulation, top-down causation in complex adaptive systems. The article also presents an outlook on how a new paradigmatic view of dynamic complex-adaptive states could alter health system practice and research to become more suitable with regard to the person as a whole.
Two papers in this Research Topic (Sulis and Trofimova) propose a continuum from temperament to mental illness and formal ways to analyze it. Both papers argue that a mathematics (or physics) based upon timeless, fixed structures and symmetries cannot express the complexity of organisms in which transience, emergence, generativity and contextuality abound. Sulis views the continuum as a landscape of transient dynamical phases, generalizing ideas of dynamic systems theory through the concept of a generating process.
Trofimova approaches the continuum from the perspective of functional constructivism, outlining universal features in the construction of behavior, from which is derived the neurochemical framework Functional Ensemble of Temperament (FET). A spectral approach to classification of temperament traits and symptoms of psychopathology is presented in the FET, based on neurochemical biomarkers. Moreover, Trofimova suggests using the concept of Specialized Extended Phenotype (SEP) to highlight the mechanisms of multi-level reinforcement of psychobehavioral diversity. Biopsychosocial complexity, therefore, could be partitioned as types of context and SEP functional "bubbles" using the same 12 categories as 12 neurochemical FET components.
The paper by Lunansky et al. follows a network approach to psychiatry where psychopathology emerges from causally interacting symptoms. It presents three studies (two simulation studies, one with empirical data) dealing with a formal system of interacting psychiatric symptoms targeted by biopsychosocial risk and protective factors to influence resilience. The studies applied two novel network resilience metrics, the Expected Symptom Activity (ESA), indicating how many symptoms are active or inactive, and the Symptom Activity Stability (SAS), indicating how stable the symptom activity patterns are.
Finally, the paper that used an empirical approach to biopsychosocial complexity is from Seizer et al. and re-evaluates an "integrative single-case study." In this study on a 25-year-old healthy woman, a dynamic complexity measure was applied to biopsychosocial time series data covering a study period of 126 12-h intervals under "life as it is lived" conditions. It was shown that the about-weekly pattern in the subject's cellular immune complexity (indicated by neopterin) was an expression of a whole-person adaptation toward the emotionally meaningful in-depth interviews during the 2-month period. This study supports the notion that integrating time and meaning in research methodology gives access to the full richness of a person's complex biopsychosocial reality.
Taken together, the contributions of this Research Topic show that considering complexity in biopsychosocial research should allow psychiatry to explore new horizons. This, however, will require a fundamental epistemological shift toward a biosemiotic-systemic paradigm in medicine (1-3).
Author contributions
CS and WS wrote the manuscript. All authors provided important intellectual contributions, read, and approved the final version.
Towards sequence-based prediction of mutation-induced stability changes in unseen non-homologous proteins
Background Reliable prediction of stability changes induced by a single amino acid substitution is an important aspect of computational protein design. Several machine learning methods capable of predicting stability changes from the protein sequence alone have been introduced. Prediction performance of these methods is evaluated on mutations unseen during training. Nevertheless, different mutations of the same protein, and even the same residue, as encountered during training are commonly used for evaluation. We argue that a faithful evaluation can be achieved only when a method is tested on previously unseen proteins with low sequence similarity to the training set. Results We provided experimental evidence of the limitations of the evaluation commonly used for assessing the prediction performance. Furthermore, we demonstrated that the prediction of stability changes in previously unseen non-homologous proteins is a challenging task for currently available methods. To improve the prediction performance of our previously proposed method, we identified features which led to over-fitting and further extended the model with new features. The new method employs Evolutionary And Structural Encodings with Amino Acid parameters (EASE-AA). Evaluated with an independent test set of more than 600 mutations, EASE-AA yielded a Matthews correlation coefficient of 0.36 and was able to classify correctly 66% of the stabilising and 74% of the destabilising mutations. For real-value prediction, EASE-AA achieved the correlation of predicted and experimentally measured stability changes of 0.51. Conclusions Commonly adopted evaluation with mutations in the same protein, and even the same residue, randomly divided between the training and test sets lead to an overestimation of prediction performance. Therefore, stability changes prediction methods should be evaluated only on mutations in previously unseen non-homologous proteins. Under such an evaluation, EASE-AA predicts stability changes more reliably than currently available methods. Electronic supplementary material The online version of this article (doi:10.1186/1471-2164-15-S1-S4) contains supplementary material, which is available to authorized users.
Background
Even a single amino acid substitution, a mutation, in a protein sequence may result in significant changes in protein stability, structure, and therefore in protein function as well [1]. Hence, accurate prediction of stability changes in protein variants is a crucially important task in computational protein design. Moreover, the ability to predict stability changes may help us understand the relationship between protein mutations and inherited diseases.
As more experimental data about stability changes became available in the ProTherm database [2], machine learning methods for predicting stability changes emerged. Broadly, they can be categorised as structure-based and sequence-based methods. Structure-based methods [3][4][5][6][7][8] require protein three-dimensional structure on the input which can be limiting if the experimentally solved structure is not available. Thus, with the immense amounts of data coming from the genome sequencing projects, the sequence-based methods are valuable tools for studying protein variants. In this work, we focused our attention on the sequence-based methods.
Traditionally, sequence-based methods make their predictions based on the amino acid identities of the mutation site and several neighbouring residues [9][10][11][12]. Alternatively, the mutation site and its neighbouring residues can be encoded with a set of amino acid properties to account for physicochemical differences among amino acids [13,14]. In our recent work [15], we proposed a method that combines amino acid identities of the mutation site neighbourhood with evolutionary and predicted structural features.
All of the studies referenced above were able to report a high cross-validation accuracy between 77% and 86% (Matthews correlation coefficient between 0.39 and 0.65) when classifying mutations as stabilising or destabilising [9][10][11][12][13][14][15]. Regarding the real-value prediction of stability changes, the correlation of the predicted and experimentally measured stability changes reached a correlation coefficient of 0.62 to 0.83 [9][10][11]15]. Nevertheless, an assessment study [16] indicated that the prediction performance of these methods on an independent test set is considerably lower than stated in the original studies.
There might be several aspects to why currently available methods did not perform well in the independent assessment. For example, as shown in [10], when the data set used for training and evaluation did not contain multiple records for measurements of the same mutation at different experimental conditions, sensitivity (accuracy on positive examples) of the proposed method decreased from 71% to 28%. When the evaluation was further restricted to only proteins with low sequence similarity to the training set, sensitivity reached only 15%. These findings [10,16] suggest that currently available methods may suffer from over-fitting on the mutations and proteins that they experienced during training. However, the over-fitting problem is not apparent from the performance results reported in the original studies. This may mean that the evaluation scheme needs to be revisited.
Commonly, stability changes prediction performance is evaluated using cross-validation which randomly divides all data set examples into k folds where k−1 folds are used for training and one fold for testing. This is repeated k times, each time with a different test fold. Typically, a stability changes data set consists of 1,000 to 3,000 examples describing various mutations in up to 90 different proteins. Upon randomly dividing examples of such a data set into k folds, different mutations of the same protein, and even the same residue, can be found among several folds. This means that even though a prediction method is tested on mutations unseen during training, different mutations of the same protein, and even the same residue, can be found in both training and test folds. This introduces bias if a method is designed using a data set in which correlation among different mutations of the same protein exists. For instance, the data set compiled in this study contains 1,914 unique mutations in 74 different non-homologous proteins (960 different residues). In 68 proteins which have more than one mutation record available, 78% of mutations agree with the prevailing sign of stability changes for the given protein. This number rises to 82% when we analyse mutations in each residue position with more than one mutation record available. Because of this correlation in the available data, stability changes prediction methods should be evaluated solely on mutations in previously unseen non-homologous proteins.
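As an illustration of how such a prevailing-sign statistic can be obtained, the hedged Python/pandas sketch below computes, for proteins with more than one mutation record, the fraction of mutations whose sign of stability change agrees with the protein's majority sign. The file name and column names are hypothetical.

```python
# Hedged sketch: agreement of each mutation with the prevailing sign of
# stability change within its protein. Column names are hypothetical.
import pandas as pd

df = pd.read_csv("s1914.csv")          # hypothetical columns: protein, mutation, ddG
df["destab"] = df["ddG"] < 0

# Keep only proteins with more than one mutation record.
multi = df.groupby("protein").filter(lambda g: len(g) > 1)

# For each mutation: does its sign agree with the majority sign in its protein?
agree = multi.groupby("protein")["destab"].transform(lambda s: s == s.mode()[0])
print(f"agreement with prevailing sign: {agree.mean():.0%}")
```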
In this study, we provided experimental evidence of the limitations of the evaluation commonly used for assessing the prediction performance. Next, we proposed an evaluation scheme that can detect over-fitting on mutations in residues and proteins encountered during training. To achieve this, the evaluation is done solely on previously unseen proteins with sequence similarity below 25%. Finally, to improve the prediction performance of our previously proposed method [15], we identified features which led to over-fitting and further extended the model with new features. The new method bases its predictions on Evolutionary And Structural Encodings with Amino Acid parameters (EASE-AA). We compared EASE-AA with currently available methods for both classification and real-value prediction of stability changes. Our results show that EASE-AA increases prediction performance on unseen non-homologous proteins.
Methods
Stability changes prediction can be viewed as a machine learning classification problem if we are only interested in the direction of the stability change: stabilising (an increase in the free energy of unfolding) or destabilising (a decrease in the free energy of unfolding). If we are concerned with the real-value prediction, it is a regression problem. In this study, we proposed a method referred to as EASE-AA: Evolutionary And Structural Encodings with Amino Acid parameters. EASE-AA encompasses two models: one trained for classification and one for regression.
Predictive features for EASE-AA
For machine learning prediction of stability changes, each mutation needs to be encoded with a number of predictive features. We combined evolutionary and predicted structural features with physical amino acid parameters to design EASE-AA.
Evolutionary features
Some residues in a protein sequence are more conserved within the family of related proteins than others. Notably, functionally important sites tend to be conserved. This has been previously exploited for the prediction of deleterious mutations [17][18][19][20][21][22][23]. We introduced a range of evolutionary features for the prediction of stability changes in our recent work [15]. There, the best performing model included two evolutionary features: SIFT score (S) and mutation likelihood (M).
SIFT [20] predicts whether a mutation affects the function of a protein. It is calculated from a scaled probability matrix of possible amino acid substitutions generated from a multiple sequence alignment of related sequences. SIFT scores range from 0 to 1 where scores below 0.05 are predicted as deleterious mutations. We ran SIFT using the Swiss-Prot and TrEMBL databases with sequences more than 90% identical to the query removed.
The feature mutation likelihood (M) expresses the probability of the introduced amino acid to appear in the multiple sequence alignment of related proteins. To calculate this feature, three iterations of PSI-BLAST [24] in default configuration were used to search the NCBI nonredundant database. Then, M was extracted from the last position specific scoring matrix (PSSM). We divided M by 10 for normalisation so that most values fell within the range of −1 and 1.
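As a rough illustration of this feature, the hedged sketch below extracts M from a PSI-BLAST position-specific scoring matrix that is assumed to have been parsed into a NumPy array beforehand; the column ordering and the parsing step are assumptions, not details given in the text.

```python
# Hedged sketch: the mutation-likelihood feature M from a PSI-BLAST PSSM.
# Assumes the PSSM was parsed into an (L, 20) array whose columns follow the
# usual PSI-BLAST amino acid order; this parsing step is an assumption.
import numpy as np

AA_ORDER = "ARNDCQEGHILKMFPSTWYV"  # typical PSI-BLAST PSSM column order

def mutation_likelihood(pssm: np.ndarray, position: int, introduced_aa: str) -> float:
    """PSSM score of the introduced amino acid at the mutated position, scaled by 10."""
    col = AA_ORDER.index(introduced_aa)
    return pssm[position, col] / 10.0   # division by 10 keeps most values in [-1, 1]
```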
Structural features
It has been shown previously that stability changes prediction can be guided by observing structural properties describing the secondary structure and accessible surface area of the mutated residue [25]. However, structural information is not available in the case of sequencebased prediction of stability changes. Nevertheless, in our recent work [15], we found that predicted structural features can supplement the missing structural information. There, the best performing model included features secondary structure type (SS) and accessible surface area (ASA) for classification and real-value prediction, respectively. We included both features in EASE-AA and further extended the model with predicted disorder probability (D).
We used the multi-step neural network method SPINE-X [26] for the prediction of the secondary structure type and accessible surface area of each mutation site. For the prediction of the disorder probabilities, the neural network method SPINE-D [27] was used. Since feature SS describes the mutation site as either a-helix, b-sheet, or coil, it was represented in three binary inputs (1 was used to determine the secondary structure type present, 0 otherwise). Unlike in our previous work where feature ASA encoded mutation site as buried or exposed, we included the real value of the predicted accessible surface area in EASE-AA.
Amino acid parameters
Different sets of physical parameters for encoding the substituted and neighbouring amino acids have been introduced for the prediction of stability changes [4,5,13,14]. Recently, calculating the difference in physical parameters between the introduced and deleted amino acids was proposed [8]. We followed this methodology and applied it to seven representative parameters including hydrophobicity, volume, polarisability, isoelectric point, helix probability, sheet probability, and a steric parameter (graph shape index). These parameters were first introduced in [28] and later applied to the prediction of secondary structure [26]. We used the scaled values of the seven parameters from [29]. We refer to the predictive feature encompassing the differences of seven physical parameters for the introduced and deleted amino acids as amino acid parameters (AAP).
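A minimal sketch of the AAP encoding is given below; the numeric values in the table are placeholders only (the method uses the scaled parameter values of [29], which are not reproduced here).

```python
# Hedged sketch: AAP as a 7-dimensional difference vector between the
# introduced and the deleted amino acid. Parameter values are placeholders.
import numpy as np

# {amino acid: (hydrophobicity, volume, polarisability, isoelectric point,
#               helix prob., sheet prob., steric parameter)} -- placeholder numbers
AA_PARAMS = {
    "A": (-0.17, -1.00, -0.73, 0.10, 0.42, -0.32, -0.73),
    "V": ( 0.54, -0.28, -0.19, 0.09, -0.12, 0.81, 0.32),
    # ... remaining 18 amino acids omitted for brevity
}

def aap_difference(deleted: str, introduced: str) -> np.ndarray:
    """Difference (introduced - deleted) of the seven physical parameters."""
    return np.asarray(AA_PARAMS[introduced]) - np.asarray(AA_PARAMS[deleted])
```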
Final set of predictive features
The final set of predictive features for EASE-AA was composed of the following features: S (1 real-value input), M (1 real-value input), SS (3 binary inputs), ASA (1 real-value input), D (1 real-value input), AAP (7 real-value inputs). Compared to our previous work [15], EASE-AA extends the predictive model with the disorder probability (D) and seven amino acid parameters (AAP). Moreover, we excluded 6×20 binary inputs describing the three and three amino acid neighbours to the left and right from the mutation site. Also, EASE-AA does not include 20 inputs encoding the identities of the deleted and introduced amino acids. This approach resulted in an overall reduction of the number of input attributes from 145 to only 14. Hence, EASE-AA is presumably more robust against over-fitting.
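Putting the pieces together, a hedged sketch of the 14-dimensional input vector could look as follows; all upstream feature values (SIFT score, PSSM-derived M, SPINE-X, SPINE-D and the AAP differences) are assumed to have been computed separately.

```python
# Hedged sketch: assemble the 14 EASE-AA inputs for one mutation. All feature
# values are assumed to be precomputed; this function only concatenates them.
import numpy as np

SS_TYPES = ("H", "E", "C")  # alpha-helix, beta-sheet, coil

def ease_aa_features(sift: float, m: float, ss: str, asa: float,
                     disorder: float, aap_diff: np.ndarray) -> np.ndarray:
    ss_onehot = [1.0 if ss == t else 0.0 for t in SS_TYPES]          # 3 binary inputs
    x = np.concatenate(([sift, m], ss_onehot, [asa, disorder], aap_diff))
    assert x.shape == (14,)   # 1 + 1 + 3 + 1 + 1 + 7 inputs
    return x
```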
Support vector machines
Support vector machines (SVM) [30] are machine learning algorithms which can approximate non-linear functions by mapping the inputs to a high-dimensional feature space using a kernel function and then, solving a linear problem by finding a maximum margin separating hyperplane. We adopted the radial basis kernel function because it has been shown to perform well for predicting stability changes [10]. To implement our method with SVM, we used the LIBSVM library [31].
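EASE-AA is built on LIBSVM; since scikit-learn's SVC and SVR wrap the same library, the hedged sketch below shows the two corresponding radial-basis-function models. The parameter values shown are placeholders, not the values selected for EASE-AA; they would be chosen by the grid search described in the next paragraph.

```python
# Hedged sketch: RBF-kernel SVMs for the two EASE-AA tasks, via scikit-learn's
# wrappers around LIBSVM. Parameter values below are placeholders only.
from sklearn.svm import SVC, SVR

clf = SVC(kernel="rbf", C=8.0, gamma=2**-7, class_weight={1: 2.0})   # classification
reg = SVR(kernel="rbf", C=8.0, gamma=2**-7, epsilon=2**-4)           # real-value prediction
# clf.fit(X_train, y_sign); reg.fit(X_train, y_ddg)
```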
The regularisation parameter C and the radial basis kernel width parameter g need to be chosen to optimise SVM performance. In the case of real-value prediction, another parameter (ε), determining the error neglected during training, is required. For classification, a parameter setting the weight (w) of the penalty for training error on positive examples should be set if the number of positive and negative examples in the data set is unbalanced. For each experiment, we optimised these parameters by running a grid search using 10-fold cross-validation on the training set so that the highest Matthews correlation coefficient (MCC) and lowest root mean square error (RMSE) were reached for classification and real-value prediction, respectively. In the grid search, we considered all possible combinations of C ∈ {2^−5, 2^−3, ..., 2^15}, g ∈ {2^−15, 2^−13, ..., 2^1}, and w ∈ {1, 1.5, 2, 2.5, 3} for classification, and C ∈ {2^−1, 2^0, ..., 2^6}, g ∈ {2^−15, 2^−14, ..., 2^0}, and ε ∈ {2^−8, 2^−7, ..., 2^−1} for real-value prediction. The range values for C, g, and ε were taken from the LIBSVM grid search [31] and extended to suit all methods assessed in this study. We also considered using a data-driven approach for optimising the kernel width parameter (g) [32], however, for the relatively small size of our data set, the grid search was a sufficient solution.
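A hedged scikit-learn approximation of the classification grid search is sketched below; the positive-class weight w is expressed through the class_weight parameter, and the scoring function is the Matthews correlation coefficient described in the following paragraph.

```python
# Hedged sketch: grid search over C, gamma and the positive-class weight w,
# maximising the MCC in 10-fold cross-validation on the training set.
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import matthews_corrcoef, make_scorer

param_grid = {
    "C": [2.0**e for e in range(-5, 17, 2)],              # 2^-5, 2^-3, ..., 2^15
    "gamma": [2.0**e for e in range(-15, 3, 2)],          # 2^-15, 2^-13, ..., 2^1
    "class_weight": [{1: w} for w in (1, 1.5, 2, 2.5, 3)],  # weight w on positives
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid,
                      scoring=make_scorer(matthews_corrcoef), cv=10)
# search.fit(X_train, y_train); best model available in search.best_estimator_
```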
As mentioned above, we decided to optimise the SVM performance in terms of MCC in the case of classification. MCC is a measure of prediction performance that provides more relevant information than classification accuracy in cases when the data set is severely biased against one class of examples. Since destabilising (negative) mutations prevail in the available experimental data (74% in our data set), optimising on MCC allowed us to achieve a more balanced performance in terms of correctly predicted both stabilising and destabilising mutations.
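For reference, the MCC used as the optimisation criterion is the standard confusion-matrix definition, with TP, TN, FP and FN denoting true/false positives and negatives:

\[
\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}
\]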
Data sets
We compiled a data set of free energy stability changes from the ProTherm database [2] (February 2013). There, a stability change is defined as the difference in the unfolding free energy: ΔΔG_u [kcal mol⁻¹] = ΔG_u(mutant) − ΔG_u(wild-type). Hence, for the classification problem, we defined stabilising mutations (ΔΔG_u ≥ 0) to be the positive examples and destabilising mutations (ΔΔG_u < 0) to be the negative examples.
We extracted 3,329 mutations with listed stability changes and cross-checked all the sources where the measurements came from. We found that incorrect values (mostly the sign of ΔΔG_u) had been entered from at least 18 sources. We corrected stability changes for all relevant (>230) mutations in the extracted data set. Next, we removed all duplicate entries of the same amino acid substitutions (different concentrations of chemicals, stability changes of the protein intermediate state, etc.). If several measurements of the same mutation under the same experimental conditions were present, we averaged the stability changes and kept only a single entry. If several measurements of the same mutation under different experimental conditions were present, we kept only the measurement closest to the physiological pH 7. We removed the other entries because we believe that there is not enough data to appropriately model stability changes of the same mutation under different experimental conditions. Moreover, stability changes of mutations differing only in temperature and pH were highly correlated in the extracted data set.
Finally, we identified 74 clusters of homologous sequences with more than 25% sequence similarity using BLASTCLUST [33]. If there were several measurements of the same amino acid substitution within a single cluster, we kept only the measurement closest to the physiological pH 7. This process yielded a non-redundant data set of 1,914 mutations in 95 different proteins grouped into 74 non-homologous clusters. We refer to this data set as S1914. The data set is available in Additional file 1.
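The filtering pipeline described in the two preceding paragraphs can be sketched with pandas as follows. The column names (cluster, protein, position, wt_aa, mut_aa, ddG, pH, temperature) and the toy rows are illustrative assumptions, not the actual ProTherm export.

```python
import pandas as pd

# Toy stand-in for the extracted ProTherm records (hypothetical columns)
df = pd.DataFrame({
    "cluster":     ["c1", "c1", "c1", "c2"],
    "protein":     ["P1", "P1", "P1", "P2"],
    "position":    [42, 42, 42, 7],
    "wt_aa":       ["A", "A", "A", "G"],
    "mut_aa":      ["V", "V", "V", "D"],
    "ddG":         [-1.2, -1.0, -0.4, 0.3],
    "pH":          [7.0, 7.0, 5.5, 6.8],
    "temperature": [25.0, 25.0, 25.0, 25.0],
})
mut_key = ["cluster", "protein", "position", "wt_aa", "mut_aa"]

# 1. Average repeated measurements of the same mutation under identical conditions
df = df.groupby(mut_key + ["pH", "temperature"], as_index=False)["ddG"].mean()

# 2. Same mutation under different conditions: keep only the entry closest to pH 7
df["ph_dist"] = (df["pH"] - 7.0).abs()
df = df.sort_values("ph_dist").drop_duplicates(subset=mut_key, keep="first")

# 3. Within each homology cluster (BLASTCLUST, >25% similarity), keep a single
#    measurement per amino acid substitution, again the one closest to pH 7
df = df.sort_values("ph_dist").drop_duplicates(
    subset=["cluster", "position", "wt_aa", "mut_aa"], keep="first")
print(df)
```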
Experiments and different evaluation schemes
Three different evaluation schemes were compared in this study: unseen-mutation, unseen-residue, and unseen-protein evaluation. The most commonly used evaluation of sequence-based stability changes prediction methods is on unseen mutations. There, mutations are randomly divided into training and test sets (or into cross-validation folds). This means that different mutations in the same protein, and even in the same residue, can be used for training and testing. Because of the correlation in the available data sets, the most important drawback of the unseen-mutation evaluation is that even methods which over-fit on residue positions and proteins from the training set can achieve high prediction performance on the test set (or in cross-validation).
The unseen-residue evaluation guarantees that all mutations in the same residue position of a protein (or its homologue) exist either in the training or the test set (or in distinct folds for cross-validation). Hence, methods which over-fit on mutations in residue positions from the training set are unlikely to achieve good prediction performance on the test set (or in cross-validation). The unseen-residue evaluation has been previously adopted for the design of a three-state prediction method I-Mutant3.0 [34].
Finally, the strictest assessment we considered was the unseen-protein evaluation. In this case, all mutations in the same protein and its homologues were used exclusively for either training or testing. Thus, if a prediction method cannot generalise well for mutations in previously unseen non-homologous proteins, it is unlikely to achieve a good performance under this evaluation.
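In practice, the three schemes differ only in how mutations are grouped before splitting. The sketch below expresses this with scikit-learn's group-aware splitters; the record fields and toy data are assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical mutation records (protein, homology cluster, mutated position)
muts = [
    {"protein": "P1", "cluster": "c1", "position": 10},
    {"protein": "P1", "cluster": "c1", "position": 12},
    {"protein": "P2", "cluster": "c2", "position": 5},
    {"protein": "P3", "cluster": "c2", "position": 7},
    {"protein": "P4", "cluster": "c3", "position": 3},
    {"protein": "P4", "cluster": "c3", "position": 3},
]

def group_labels(muts, scheme):
    if scheme == "unseen-mutation":
        return np.arange(len(muts))                               # every mutation on its own
    if scheme == "unseen-residue":
        return [f"{m['protein']}:{m['position']}" for m in muts]  # same residue stays together
    if scheme == "unseen-protein":
        return [m["cluster"] for m in muts]                       # homologues stay together
    raise ValueError(scheme)

splitter = GroupShuffleSplit(n_splits=1, test_size=1 / 3, random_state=0)
groups = group_labels(muts, "unseen-protein")
train_idx, test_idx = next(splitter.split(muts, groups=groups))
print(train_idx, test_idx)
```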
Training set, test set, and cross-validation folds
To achieve an unbiased evaluation, we split the S1914 data set randomly into training and independent test sets with a ratio of 2 : 1. We repeated this process 10 times producing 10 different training/test splits. Each training set was further divided into 10 cross-validation folds. The ratio of positive and negative examples in the 10 folds and in the independent test set was kept close to that of the original data set. Cross-validation using the 10 folds was employed to optimise the performance of the evaluated methods.
Then, each method was trained on the whole training set and tested on the examples in the independent test set. The whole process was repeated 10 times, utilising the 10 different training/test splits. Finally, the results of the 10 experiments were averaged.
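A sketch of this protocol (ten repetitions of a stratified 2:1 training/test split, with 10-fold cross-validation inside each training set for parameter tuning) is given below for the unseen-mutation variant. The data and the small parameter grid are placeholders; for the other two schemes, the group-aware splitters from the previous sketch would replace the stratified ones.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import make_scorer, matthews_corrcoef
from sklearn.model_selection import (GridSearchCV, StratifiedKFold,
                                     StratifiedShuffleSplit)

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 14))            # placeholder features
y = (rng.random(300) < 0.26).astype(int)  # ~26% stabilising, as in S1914

mcc = make_scorer(matthews_corrcoef)
outer = StratifiedShuffleSplit(n_splits=10, test_size=1 / 3, random_state=0)  # 2:1 splits
scores = []
for train_idx, test_idx in outer.split(X, y):
    inner = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)        # tuning folds
    model = GridSearchCV(SVC(kernel="rbf"),
                         {"C": [1.0, 8.0, 64.0], "gamma": [2.0 ** -7, 2.0 ** -3]},
                         scoring=mcc, cv=inner).fit(X[train_idx], y[train_idx])
    scores.append(matthews_corrcoef(y[test_idx], model.predict(X[test_idx])))
print(f"mean MCC over 10 repetitions: {np.mean(scores):.2f}")
```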
We compared unseen-mutation, unseen-residue, and unseen-protein evaluation schemes in this study. Hence, splitting into the training and independent test sets as well as to the cross-validation folds was executed according to one of these three evaluation schemes for different experiments.
Comparison with currently available methods
We compared the prediction performance of our new method (EASE-AA) with our previously proposed method [15] which also employs evolutionary and structural encodings (thus, we refer to it as EASE). To further show how prediction performance varies when different evaluation schemes are employed, we evaluated another two sequence-based methods: I-Mutant2.0 [9] and MUpro [10]. These two methods had also been included in an independent assessment study [16]. We did not compare with I-Mutant3.0 [34] because it predicts stability changes into three states (stabilising, destabilising, and neutral).
To be able to assess I-Mutant2.0 and MUpro under different evaluation schemes, we implemented the two methods according to their description in the original publications. Therefore, rather than performing a comparison with the actual methods, we performed a comparison with the set of predictive features proposed for I-Mutant2.0 and MUpro. This approach allowed us to achieve a fair comparison of all four methods by optimising the SVM parameters and re-training the SVM models for every experiment on the same training set.
I-Mutant2.0 bases its prediction on the occurrence frequencies of the sequential neighbourhood, hence, we refer to our implementation as SEQ-FREQ. MUpro uses amino acid identities of neighbouring residues, thus, we refer to our implementation of this method as SEQ-NEIGHB.
Evaluation measures
The prediction performance in the classification task was assessed in terms of Matthews correlation coefficient (MCC), classification accuracy (Q_2), sensitivity (Se), specificity (Sp), positive predictive value (PPV), and negative predictive value (NPV):

MCC = (TP × TN − FP × FN) / √[(TP + FP)(TP + FN)(TN + FP)(TN + FN)]

Q_2 = (TP + TN) / (TP + TN + FP + FN)

Se = TP / (TP + FN),  Sp = TN / (TN + FP)

PPV = TP / (TP + FP),  NPV = TN / (TN + FN)

where TP, TN, FP, and FN refer to the number of true positives, true negatives, false positives, and false negatives, respectively. Furthermore, we assessed the classification performance by plotting the receiver operating characteristic (ROC) curve and calculating the area under the ROC curve (AUC). A ROC curve plots the true positive rate (sensitivity) as a function of the false positive rate (100 − specificity) at different prediction thresholds.
For real-value prediction, performance was assessed in terms of Pearson correlation coefficient (r) and root mean square error (RMSE):

r = Σ_i (e_i − mean(e))(p_i − mean(p)) / [ √(Σ_i (e_i − mean(e))^2) · √(Σ_i (p_i − mean(p))^2) ]

RMSE = √[ (1/n) Σ_i (p_i − e_i)^2 ]

where e_i and p_i denote the experimentally measured and predicted stability changes of the i-th of n mutations.
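For reference, the sketch below computes these measures directly from confusion-matrix counts and from paired predicted/experimental values; it is a generic illustration of the standard definitions, not code from the EASE-AA pipeline.

```python
import numpy as np

def classification_metrics(tp, tn, fp, fn):
    """Standard measures computed from confusion-matrix counts."""
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    total = tp + tn + fp + fn
    return {
        "MCC": mcc,
        "Q2": 100.0 * (tp + tn) / total,
        "Se": 100.0 * tp / (tp + fn),
        "Sp": 100.0 * tn / (tn + fp),
        "PPV": 100.0 * tp / (tp + fp),
        "NPV": 100.0 * tn / (tn + fn),
    }

def regression_metrics(experimental, predicted):
    e, p = np.asarray(experimental), np.asarray(predicted)
    r = np.corrcoef(e, p)[0, 1]               # Pearson correlation coefficient
    rmse = np.sqrt(np.mean((p - e) ** 2))     # root mean square error
    return {"r": r, "RMSE": rmse}

print(classification_metrics(tp=50, tn=150, fp=20, fn=30))
print(regression_metrics([-1.2, 0.3, -0.5], [-0.9, 0.1, -0.7]))
```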
Results
We compared the prediction performance of the two methods from the literature, I-Mutant2.0 [9] and MUpro [10] (we refer to our implementations of these methods as SEQ-FREQ and SEQ-NEIGHB, respectively), our previously proposed method [15] (denoted as EASE here), and the method designed in this study (EASE-AA). We evaluated both classification and real-value prediction employing the S1914 data set. To achieve a fair comparison of the four methods, each method was re-trained and had the SVM parameters optimised (utilising a cross-validation on the training set) for every experiment.
Comparison of different evaluation schemes
Commonly, stability changes prediction methods are evaluated using a cross-validation where different mutations of the same protein can be randomly distributed across different folds. We believe that this approach leads to a considerable overestimation of the prediction performance for proteins with low sequence similarity to the training set. To illustrate this in an experiment, we divided our data set into training and independent test sets in three different ways following the unseen-mutation, unseen-residue, and unseen-protein evaluation schemes (Methods). In the unseen-mutation evaluation, different mutations are randomly distributed between the training and test sets, whereas the unseen-residue (unseen-protein) evaluation guarantees that all mutations in the same residue position (same protein) exist either in the training or the test set. Also, we performed a 10-fold cross-validation on the training set for each training/test split. In this case, the 10 folds were created by randomly dividing all mutations. This means that the cross-validation was performed in an unseen-mutation evaluation fashion regardless of the evaluation scheme used for the independent test.

Table 1 compares the cross-validation and independent test classification performance of the four assessed methods using the three different evaluation schemes. In cross-validation, EASE yielded the highest Matthews correlation coefficient (MCC) of 0.45. EASE-AA achieved an MCC of 0.43, while it was 0.41 and 0.33 for SEQ-NEIGHB and SEQ-FREQ, respectively. The area under the ROC curve (AUC) ranged from 0.75 to 0.81 for the four methods.
For the independent test, we used three different evaluation schemes: unseen-mutation, unseen-residue, and unseen-protein. The unseen-mutation evaluation resulted only in a marginally lower performance compared to the cross-validation results (an MCC and AUC decrease of up to 0.05 and 0.01, respectively). However, if the unseen-residue or unseen-protein evaluation was employed, the performance of all four methods decreased considerably when compared to the cross-validation results. The largest decline was for SEQ-NEIGHB. In this case, the MCC decreased by 0.27 (from 0.41 to 0.14) for both unseen-residue and unseen-protein evaluations. Our new method (EASE-AA) experienced the smallest decrease in prediction performance. EASE-AA's MCC declined by 0.09 and 0.08 (from 0.43 to 0.34 and 0.35) for predictions on unseen residues and unseen proteins, respectively.
The receiver operating characteristic (ROC) curves in Figure 1 compare the true positive rate of EASE and EASE-AA as a function of the false positive rate for the unseen-mutation and unseen-protein evaluation. We were interested in studying the decrease in the independent test performance between the two evaluation schemes. While in the case of EASE-AA, the area under the ROC curve (AUC) declined only by 0.02 for the unseen-protein evaluation, EASE yielded an AUC decrease of 0.11. The ROC curves of EASE and EASE-AA for the unseen-residue evaluation were similar to those for the unseen-protein evaluation (not shown in the figure). The results from the real-value prediction experiment showed the same trend in the relative comparison of the four methods under the three different evaluation schemes (Table 2). Prediction performance decreased when comparing the results from the unseen-mutation with the unseen-residue or unseen-protein evaluation. The smallest decrease in prediction performance was yielded by EASE-AA. Also, EASE-AA was the best performing method in predicting real-value stability changes in previously unseen residues and unseen proteins.
Training and evaluation on previously unseen non-homologous proteins
We discovered that the unseen-mutation evaluation leads to overestimating the prediction performance for previously unseen residues as well as for previously unseen proteins (Tables 1 and 2). Interestingly, the prediction performance on unseen residues was similar to that on unseen proteins. Therefore, we employed the unseen-protein evaluation to further analyse the prediction performance of the four methods.
One of the reasons for the suboptimal performance in predicting unseen proteins could be that we optimised the four methods employing the unseen-mutation cross-validation (different mutations of the same protein can appear in different folds). To optimise the compared methods more appropriately to predict stability changes in unseen proteins, we split the training set into 10 folds so that none of the folds shared homologous sequences (unseen-protein cross-validation).

Table 3 summarises the cross-validation and independent test results from the classification experiment employing the unseen-protein evaluation. For cross-validation, the highest Matthews correlation coefficient (MCC) of 0.37 was achieved by our new method (EASE-AA). This result represents a relative improvement of 48% (an absolute improvement of 0.12) to the second best method (SEQ-FREQ). When we evaluated the four methods on the independent test set, the prediction performance decreased for all methods only marginally. EASE-AA, the best performing method, reached an MCC of 0.36 with a relative improvement of 50% (an absolute improvement of 0.12) compared to the second best method (SEQ-FREQ).
Positive (negative) predictive value (PPV, NPV) refers to the proportion of mutations predicted as stabilising (destabilising) that are truly stabilising (destabilising). EASE-AA yielded PPV and NPV of 46.85% and 85.85%, respectively. These results represent absolute improvements of 9.52 and 2.13 percentage points when compared to SEQ-FREQ. The respective improvements compared to EASE were 5.19 and 6.17 percentage points.
The ROC curves for this experiment are shown in Figure 2. We estimated the statistical significance of EASE-AA's improvements in the MCC and AUC over the 10 replications of independent testing using a Student's t-test. The null hypothesis stated that there was no statistical difference in the MCC (AUC) for EASE-AA and each of the three compared methods. The p-values associated with this null hypothesis were all less than 0.0005. The results from the real-value prediction experiment employing the unseen-protein evaluation are summarised in Table 4. As in the case of classification, EASE-AA performed the best, yielding a correlation coefficient (r) of 0.51 and root mean square error (RMSE) of 1.48. These results represent relative improvements of 24% for r (an absolute improvement of 0.10) and 5% for RMSE (an absolute improvement of 0.08) compared to the second best method (EASE).
Comparing the results when the unseen-mutation cross-validation (Tables 1 and 2) and the unseen-protein cross-validation (Tables 3 and 4) were used for model optimisation, there does not seem to be a considerable difference in the unseen-protein independent test performance. The only exception was SEQ-FREQ, which seemed to benefit from the appropriate model optimisation. SEQ-FREQ's correlation coefficients increased by 0.06 (MCC) and 0.03 (r) for classification and real-value prediction, respectively.
Prediction performance for different types of mutations
EASE-AA outperformed the other three methods (EASE, SEQ-FREQ, and SEQ-NEIGHB) in predicting stability changes in unseen proteins. We were interested in how this improvement varied for different types of mutations. We investigated how accurate (in terms of MCC) each of the four compared methods was in predicting mutations in residues of different secondary structure types (α-helix, β-sheet, and coil) and accessible surface area assignments (exposed and buried). Residues were defined as exposed if at least 25% of their surface was accessible to the solvent and as buried otherwise. Furthermore, we explored the accuracy of predicting mutations inducing 'small' (ΔΔG_u ∈ [−1, 1] kcal mol^−1) and 'large' (|ΔΔG_u| > 1 kcal mol^−1) stability changes.

Figure 3 shows the Matthews correlation coefficient (MCC) of the four compared methods as a function of the different types of mutations that we investigated. Regarding different secondary structure types, EASE-AA achieved an MCC of 0.37, 0.43, and 0.27 for the helical, sheet, and coil residues, respectively. The largest relative improvement to the second best method (SEQ-FREQ) of 80% (an absolute improvement of 0.12) was achieved for coil residues. Interestingly, coil residues were the most difficult to predict for all four methods. For helical and sheet residues, our new method yielded relative improvements of 37% and 39%, respectively (absolute improvements of 0.10 and 0.12). All four methods were able to predict buried mutations more reliably than the exposed ones. The MCC values achieved by EASE-AA for the exposed and buried residues were 0.27 and 0.40, respectively. The respective relative (absolute) improvements to the second best method (SEQ-FREQ) were 59% (0.10) and 38% (0.11). Regarding the performance for mutations with different magnitudes of stability changes, all methods yielded a better performance for mutations causing 'large' stability changes. For this category, EASE-AA achieved an MCC of 0.39, while it was 0.27 for the category of 'small' stability changes. The relative (absolute) improvements for the 'small' and 'large' categories were 69% (0.11) and 34% (0.10), respectively.
Overall, EASE-AA achieved improvements in every category included in the comparison. Moreover, since the absolute improvements were quite balanced among the different types of mutations (ranging from 0.10 to 0.12), EASE-AA yielded higher relative improvements for mutation types which appeared to be more difficult to predict for all of the four compared methods (coils, exposed residues, and 'small' stability changes).
Predictive features and the improvements yielded by EASE-AA
We found that EASE-AA consistently outperformed our previous work (EASE) when predicting mutations in unseen proteins. Hence, we were interested in how each design step of EASE-AA contributed towards the final improvement. Figure 4 compares the cross-validation performance after each of these design steps. First, we extended EASE with two predicted structural features, accessible surface area (ASA) and disorder probability (D). However, the improvement in the cross-validation performance was only marginal. Next, the seven physical amino acid parameters (AAP) were added. The inclusion of AAP yielded a relative improvement of 24% (an absolute improvement of 0.06) in terms of MCC. Finally, we suspected that the 140 input attributes encoding the deleted, introduced, and neighbouring amino acids implemented in EASE may have been leading to over-fitting on residue positions encountered during training. After excluding these 140 inputs (EASE-AA), there was a relative improvement of 19% (an absolute improvement of 0.06) in terms of MCC.
It has been shown previously that structural features [25] and amino acid parameters [13] can be used for the prediction of stability changes. To the best of our knowledge, evolutionary features have been used only in our previous work [15]. Therefore, we studied the relationship between the evolutionary information and experimentally measured stability changes. We plotted the median of stability changes in the S1914 data set as a function of the PSSM scores for the mutation likelihood (the same as feature M) and conservation likelihood (C) (Figure 5). This plot reveals that as the median of stability changes increases, so does the value of M, whereas the value of C decreases. Hence, the relationship shown in Figure 5 demonstrates that there is a higher number of destabilising mutations when the mutation likelihood is low and residue conservation is high. On the contrary, stabilising mutations tend to prevail for mutations which are common in the family of related proteins.
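The summary behind Figure 5 amounts to grouping mutations by their PSSM score and taking the median stability change; a minimal pandas sketch is given below with hypothetical column names and toy values.

```python
import pandas as pd

# Hypothetical columns: 'M' is the PSSM mutation-likelihood score, 'ddG' the
# experimentally measured stability change of each mutation.
data = pd.DataFrame({
    "M":   [-4, -4, -2, -2, 0, 0, 2, 2],
    "ddG": [-2.1, -1.6, -1.1, -0.8, -0.4, 0.1, 0.3, 0.6],
})
medians = data.groupby("M")["ddG"].median()
print(medians)  # the median ΔΔG_u tends to increase with the mutation likelihood M
```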
Discussion
Our main interest was to assess the prediction of stability changes in previously unseen non-homologous proteins. We found that while high prediction performance can be achieved when different mutations of the same protein and residue positions are randomly divided for training and evaluation, it is challenging to predict stability changes in previously unseen proteins. Therefore, our results provide experimental evidence that the commonly adopted unseen-mutation evaluation leads to an overestimation of the prediction performance. To address the prediction of stability changes in unseen proteins, we extended our previous work [15] and proposed a new method (EASE-AA) which was able to outperform the other three methods in our comparison (Figure 2). For classification, EASE-AA achieved a Matthews correlation coefficient (MCC) of 0.36 (Table 3). For real-value prediction, Pearson correlation coefficient (r) reached the value of 0.51 (Table 4). Although such a performance may seem relatively low, these results represent relative improvements to the second best method of 50% (MCC) and 24% (r). We believe that one of the limiting factors in yielding more reliable predictions is the scarcity of stabilising mutations and distinct non-homologous proteins available for training. Moreover, as noted elsewhere [5], the variety of available experimental data is quite unbalanced (for instance, 26% of amino acid substitutions were to alanine in our data set).

Figure 3. Prediction performance of the four methods for different types of mutations. Matthews correlation coefficient (MCC) of SEQ-NEIGHB, SEQ-FREQ, EASE, and EASE-AA is shown as a function of the secondary structure type of the mutated residue, accessible surface area of the mutated residue (threshold of 25% for an exposed residue), and magnitude of the stability change. These are unseen-protein independent test results.
Comparing the three different evaluation schemes, all four methods achieved a considerably higher prediction performance when the unseen-mutation evaluation was used (Table 1). This could be attributed to the correlation that exists among different mutations of the same residue in the available experimental data. Because this correlation cannot be exploited when evaluation is done solely on residues unseen during training, prediction performance of all four methods decreased considerably upon employing the unseen-residue evaluation. The unseen-protein evaluation further guarantees that all mutations of the same protein are used either for training or evaluation. Performance of all four methods changed only marginally when comparing the results from the unseen-residue and unseen-protein evaluation. This is most likely because of the absence of 'protein-wide' predictive features in the four compared methods. Hence, the unseen-residue evaluation was just as challenging as the one on unseen proteins.
When comparing performance of EASE-AA with our previously proposed method [15], the reasons for the improvements are twofold. Firstly, we excluded features encoding the identities of the deleted, introduced, and neighbouring amino acids because they led to over-fitting on residues and proteins encountered during training ( Figure 4). Secondly, we incorporated the differences in seven representative physical parameters for the deleted and introduced amino acids (feature AAP). For instance, the difference in the physical parameter encoding the volume of an amino acid can suggest if the mutation may induce strain in the protein structure due to the large size of the introduced residue. Similarly, a change in the hydrophobicity can suggest an introduction of disturbing interactions in the hydrophobic core of the protein.
Our new method adopts the evolutionary predictive features proposed in our previous work [15]. Actually, the observation that functionally important sites tend to be evolutionarily conserved has been previously exploited for the prediction of deleterious mutations [20]. However, there are other reasons than the location of functional sites for the existence of conserved regions. For example, conserved regions play an important part in stabilising the structure of a protein [35]. We demonstrated that the relationship between evolutionary predictive features derived from PSSM and experimentally measured stability changes from our data set agrees with these general assumptions about sequence conservation (Figure 5).

Figure 5. Relationship between evolutionary conservation and stability changes. The median of experimentally measured stability changes in the S1914 data set is shown as a function of the PSSM scores defining mutation and conservation likelihood. The plot reveals that there is a higher number of destabilising mutations when the mutation likelihood is low and residue conservation is high, while stabilising mutations tend to prevail for substitutions which are common in the family of related proteins.
It seems that the most difficult mutations to predict are either located in coil and exposed residues or those which cause only small stability changes (between −1 and 1 kcal mol^−1). Prediction performance of all four methods in these three categories was lower than for any other category of different types of mutations that we investigated (Figure 3). These findings are in agreement with the results reported in a study about a neural network structure-based method [3]. Also, it has been shown previously that different interactions govern stability changes in exposed and buried residues [36]. Regarding the prediction of 'small' stability changes, it is naturally harder to differentiate among subtle changes. Moreover, experimental data are affected by measurement error, which can be as large as ±0.48 kcal mol^−1 [37]. Hence, the strict classification of the 'small' stability changes as stabilising or destabilising can be misleading [34,13].
Overall, our new method, EASE-AA, achieved improvements in all categories of different types of mutations that we investigated. Moreover, EASE-AA yielded higher relative improvements for the types of mutations which were the most challenging to predict for all four compared methods. These results demonstrate the robustness of the performance of our new method in predicting stability changes in previously unseen non-homologous proteins.
Conclusions
In this work, we demonstrated how performance varies depending on the evaluation scheme employed. This is most likely because the machine learning methods are prone to over-fitting on mutations in residues and proteins encountered during training. When the evaluation on previously unseen non-homologous proteins was used, currently available methods could not reliably predict stability changes. To address this problem, we designed a new method which is based on Evolutionary And Structural Encodings with Amino Acid parameters (EASE-AA). Compared to our previous work [15], features leading to over-fitting were removed and the model was extended with differences in seven physical amino acid parameters.
EASE-AA achieved a Matthews correlation coefficient of 0.36 and was able to classify correctly 66% of the stabilising and 74% of the destabilising mutations. For real-value prediction, EASE-AA achieved a correlation between predicted and experimentally measured stability changes of 0.51. Even though this performance may seem relatively low, EASE-AA predicts stability changes in unseen proteins more accurately than the other three methods in our comparison. This further highlights another important finding of this study that the prediction performance of currently available methods is often overestimated due to randomly dividing different mutations of the same protein, and even the same residue, for training and evaluation.
A pilot study of cognitive training with and without transcranial direct current stimulation to improve cognition in older persons with HIV-related cognitive impairment
Background In spite of treatment advances, HIV infection is associated with cognitive deficits. This is even more important as many persons with HIV infection age and experience age-related cognitive impairments. Both computer-based cognitive training and transcranial direct current stimulation (tDCS) have shown promise as interventions to improve cognitive function. In this study, we investigate the acceptability and efficacy of cognitive training with and without tDCS in older persons with HIV. Patients and methods In this single-blind randomized study, participants were 14 individuals of whom 11 completed study procedures (mean age =51.5 years; nine men and two women) with HIV-related mild neurocognitive disorder. Participants completed a battery of neuropsychological and self-report measures and then six 20-minute cognitive training sessions while receiving either active or sham anodal tDCS over the left dorsolateral prefrontal cortex. After training, participants completed the same measures. Success of the blind and participant reactions were assessed during a final interview. Assessments were completed by an assessor blind to treatment assignment. Pre- and post-training changes were evaluated via analysis of covariance yielding estimates of effect size. Results All participants believed that they had been assigned to active treatment; nine of the 11 believed that the intervention had improved their cognitive functioning. Both participants who felt the intervention was ineffective were assigned to the sham condition. None of the planned tested interactions of time with treatment was significant, but 12 of 13 favored tDCS (P=0.08). All participants indicated that they would participate in similar studies in the future. Conclusion Results show that both cognitive training via computer game playing and tDCS were well accepted by older persons with HIV infection. Results are suggestive that tDCS may improve cognitive function in persons with HIV infection. Further study of tDCS as an intervention for HIV-related cognitive dysfunction is warranted.
Introduction
In spite of advances in the treatment of HIV infection through the development of combination antiretroviral treatments, individuals with HIV infection, even those with nondetectable viral loads, continue to develop HIV-related cognitive deficits. 1 These cognitive deficits are significant in light of their impact on patients' functional status, [2][3][4] medication adherence, [4][5][6] and quality of life. [7][8][9][10] Cognitive dysfunction may have an even greater impact on those aging with HIV, who face both HIV- and age-related cognitive changes. 11,12 Although this problem is significant, treatments for HIV-related cognitive deficits are limited. Drug studies have shown that stimulant medications may improve some symptoms of HIV-related cognitive impairment, 13 but their usefulness is limited by their abuse potential and side effects. 14,15 Other investigators have argued for the utility of computer-delivered cognitive training interventions, 16 but the software is not always affordable for indigent patients or those on limited budgets. Further, many programs developed specifically for cognitive training do not have high levels of inherent interest, reducing their uptake outside of compensated research studies.
An alternative strategy to expensive commercial software may be more readily available computer games. 17 Developers of cognitive training software programs have often tried to include game elements in their software 18 to enhance the inherent interest of the training programs, but a vast number of readily available games have already demonstrated their commercial viability. This type of viability stems from the games having high levels of intrinsic interest, play that engages the user, and online social communities. Games such as first-person shooters (in which the participant uses some form of gun to target enemies) have been shown to improve attention and reaction time, 18 but their acceptability to the user is limited by the violent nature of their content, which at times includes simulations of violence and gore. 18,19 An alternative to first-person shooters would be a car racing game, which also requires attention and cognitive speed but may be more generally acceptable to users. One study showed that a purpose-built car racing game improved cognitive function in older persons. 20 Others have also argued for the effectiveness of commercially available games in improving and sustaining cognitive function in older adults. 17,[21][22][23][24] Commercial games are successful precisely because of their ability to engage and sustain users' interest. A number of games, for example, have millions of engaged users actively involved in online communities. Games may involve team play and some have worldwide tournaments with thousands of users. One group has shown that a commercially available game requiring psychomotor speed was more likely to result in long-term use by users compared to a commercial cognitive training computer program. 21,22 Computer games have been shown to have sustained cognitive training effects that may transfer to other domains 25 including cognitive control in older adults. 20

In addition to cognitive training, many studies have shown that transcranial direct current stimulation (tDCS) can enhance cognitive function. tDCS involves the application of very small currents (1-2 mA) using a regulated direct current source, usually via sponge electrodes placed over relevant portions of the brain. Studies of tDCS have shown enhancement of specific aspects of cognitive function, including verbal problem solving, 26 working memory, [27][28][29] and learning in various contexts including in a computer-based threat detection simulation 30 and object location memory in the elderly. 31 In addition to effects on cognition, many studies have also shown that tDCS may be an effective adjunct treatment for depression, with individuals receiving both antidepressants and tDCS showing greater improvements than those receiving antidepressant medications alone. 32 Although the precise mechanism by which tDCS produces effects on cognition and mood is unclear, one possible mechanism is particularly relevant to the treatment of individuals with HIV infection. Direct current stimulation of neurons in the motor cortex has been shown to stimulate the activity of brain-derived neurotrophic factor (BDNF) in that area, 33 and it is possible that similar effects occur elsewhere in the brain. BDNF is affected by HIV infection 34,35 and is related to both cognitive decline in older persons 36 and depression. 37 An intervention that could stimulate its production in older persons with HIV might therefore be an important therapeutic option.
To date, no readily identifiable study has evaluated the usefulness of commercial games as a cognitive training strategy among persons with HIV infection nor has any study evaluated their effect combined with tDCS. The purposes of this study were to evaluate the feasibility and acceptability of game-based cognitive training intervention in older persons with HIV infection and to evaluate the acceptability and efficacy of the cognitive training intervention with and without active tDCS. We believed that individuals would find the computer game interesting and that those receiving active tDCS would show improvements in psychomotor speed. As the number of participants is small, analyses focus primarily on description of outcomes and effect size estimation rather than parametric hypothesis testing. We present preliminary data here from our first 11 participants.
Patients and methods
Game
In this study, we chose to use an off-the-shelf computer game that is inexpensive (free to download on some versions of Windows™), is of an appropriate level of difficulty, and is widely popular, thus demonstrating its acceptability to potential users. GT Racing 2 (Gameloft SE, Paris, France) requires that individuals steer a simulated car over courses that include city streets, racetracks, and outside courses. Each course requires that the user achieve a basic level of proficiency before moving on to the next level. The initial difficulty level allows the game to provide considerable steering and braking assistance to the person playing, guaranteeing their ability to play the game and achieve at least some success. The game is visually attractive and provides a variety of courses that stimulate user interest. All participants were able to navigate successfully at least the first four courses of the game over six training sessions.
tDCS
Participants
Participants were individuals treated for HIV infection who reported cognitive difficulties and evidenced objective cognitive impairment in two neuropsychological domains. Exclusion criteria included factors that might expose individuals to increased risks if they participated in tDCS, such as history of seizures or bipolar disorder (there have been some reports of mania in studies of tDCS for depression 32,38 ). Individuals using an extensive list of psychotropic medications were excluded, as drugs in these classes have been shown to modify the effects of tDCS. 39 These included medications with serotonergic (many antidepressants), dopaminergic (stimulant medications, antipsychotics), or gamma-aminobutyric acid activity (benzodiazepines). Left-handed participants were excluded as well. Participants were also asked about game-playing experience; none reported substantial personal computer (PC) or console gaming experience, although one participant indicated that he played games on his phone occasionally.
Procedures
Recruitment and eligibility determination
Individuals were initially recruited from participants in a previous study during which they completed a battery of cognitive measures, allowing us to identify persons likely to meet entrance criteria. Individuals were also recruited from several local organizations providing services to individuals treated for HIV infection. Participants were first screened by telephone for the presence of subjective cognitive impairments using questions developed by the European AIDS Clinical Society 40 as well as for medication use, ability to bring laboratory results, and interest in participating in a study of cognitive training and tDCS. All participants were required to be in active treatment for HIV and stable on their current regimen of antiretroviral medications for 1 month. Persons who met initial inclusion and exclusion criteria were scheduled for an in-person visit to determine final eligibility.
At the in-person visit, participants completed a brief battery of neuropsychological measures selected to assess domains commonly affected in persons with HIV infection. These included attention and working memory using the Digit Span subtest of the Wechsler Adult Intelligence Scale, 4th edition, or WAIS-IV, 41 which includes assessments of digit span forward, backward, and a number and letter sequencing task. Measures also assessed executive function and mental flexibility with the Trail Making Test, Parts A and B, 42 verbal learning and memory with the Hopkins Verbal Learning Test -Revised or HVLT-R, 43 and psychomotor speed with the Grooved Pegboard Test. 44 Individuals were considered eligible if their performance in two of the domains was one standard deviation (SD) or more below the mean according to normative data. Participants were also required to report subjective cognitive difficulties in at least one of the following areas: 1) memory, 2) cognitive slowing, or 3) problems in attending. 40 They thus fulfilled Frascati criteria 45 for mild neurocognitive disorder. All participants were currently in active treatment for HIV, and routine monitoring of treatment response and immune status is part of their care. We required that participants furnish recent laboratory results at study entry, as well as bring all their medications to the eligibility visit, so that we could examine pill bottles and verify that they met protocol eligibility requirements.
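The objective part of this eligibility rule (performance at least 1 SD below the normative mean in two or more domains, plus at least one subjective complaint) can be written as a simple check. The normative means and SDs below are hypothetical placeholders, not the published norms for these instruments.

```python
# Hypothetical normative (mean, SD) pairs; for timed tests, higher raw scores are worse.
NORMS = {"digit_span": (10.0, 3.0), "trails_b_sec": (75.0, 25.0),
         "hvlt_total": (26.0, 4.5), "pegboard_dom_sec": (70.0, 10.0)}
TIMED = {"trails_b_sec", "pegboard_dom_sec"}

def domain_z(domain, raw):
    mean, sd = NORMS[domain]
    z = (raw - mean) / sd
    return -z if domain in TIMED else z   # flip sign so that lower z = worse performance

def eligible(raw_scores, subjective_complaints):
    impaired = sum(domain_z(d, v) <= -1.0 for d, v in raw_scores.items())
    return impaired >= 2 and len(subjective_complaints) >= 1

print(eligible({"digit_span": 6, "trails_b_sec": 120,
                "hvlt_total": 20, "pegboard_dom_sec": 95},
               ["memory", "cognitive slowing"]))   # -> True
```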
Individuals who met entry criteria then completed the Patient's Assessment of Own Functioning (PAOF), 46,47 a more extensive measure of self-reported cognitive difficulties across language, perception, and memory previously used in studies of HIV-related cognitive impairment, 47 as well as the Center for Epidemiological Studies Depression (CESD) scale, 48 a self-report measure of depressive symptoms. Participants completed these measures using automated computer-assisted self-interview (ACASI) software that only required touching a computer's screen to record responses. Participants were compensated with US $40 for the first and last sessions and $20 for each training visit. After completing baseline procedures, participants were scheduled for the first training visit.
Computer-based cognitive training
At the first visit, individuals were assigned to treatment condition via a predetermined computer-generated schedule with randomized blocks of four. Individuals were oriented to tDCS procedures and the computer game controller (a standard Xbox game controller connected via USB interface to a PC running the Windows® 10 operating system). The investigator sat at another desk behind and to the participant's left so that the tDCS device and the investigator recording performance were not visible during training. The investigator controlled the computer and the game via a wireless mouse. For all individuals, the tDCS anode was placed over the left dorsolateral prefrontal cortex and the cathode over the right supraorbital area, with locations determined according to the 10-20 placement system. 49 Electrodes were 5×5 cm sponges (Soterix EASY-Pads; Soterix Medical, New York, NY, USA). They were moistened with ~6 cc of sterile saline and held in place with an elastic band. Current was supplied using a Chattanooga Ionto iontophoresis device (DJO International, Surrey, England) with flat carbon electrodes inserted into the dual riveted sponges to improve the uniformity of current density.
Participants were informed that they might feel nothing or minor itching or burning at the onset of tDCS and that the feeling might continue or go away during the training session. This procedure has previously been successfully used to blind research participants to active vs sham tDCS. 32 Participants were encouraged to attend to the computer screen as the game and tDCS were initiated. For individuals assigned to active treatment, tDCS was begun and continued for 20 minutes at a current of 1.5 mA. For individuals assigned to sham tDCS, the tDCS device was turned on and the current allowed to ramp up to 1.5 mA over a period of 30 seconds. The device was then turned off out of sight of the participant.
Participants were allowed to work through the game at their own pace, subject to its restrictions. For example, in order to progress through the game, participants had to finish a race in first, second, or third place or finish a course in a specified time prior to accessing the next course. Participants completed at least five trials of each course before progressing. At the conclusion of each training session, participants responded to three questions via ACASI that asked how they would rate their mental abilities, mood, and level of discomfort during that session. All participants completed six training sessions over the next 2 weeks, with most sessions completed with 1-day intervening between sessions (eg, Monday, Wednesday, and Friday).
All activities were completed within a 3-week interval. After the final training session, participants again completed the PAOF, CESD, and the neuropsychological battery administered by an assessor blind to treatment condition. The assessor also completed a final interview during which participants were asked to which group they had been assigned in order to assess the effectiveness of the blinding procedure. They were asked whether they believed the computer training and tDCS were helpful and whether they would participate in the future in a study of tDCS.
Human subjects approval and trial registration
This protocol was approved by the Institutional Review Board of Nova Southeastern University (protocol number 12031424F) and was registered on ClinicalTrials.gov (NCT02647645). All participants provided oral consent for screening procedures and written informed consent prior to randomization and treatment procedures.
Analyses
Data analyses were completed in several steps. Given the small number of participants, it was possible to inspect the data for extreme values, but inspection was supplemented by obtaining frequencies and descriptive statistics. Effects of covariates were evaluated in correlation analyses. Treatment effects were evaluated in analysis of covariance (ANCOVA) models yielding estimates of effect size and plots of estimated marginal means corrected for covariates. Our primary outcome measure was effect size estimates as we anticipated that the current sample size would not provide sufficient power to detect statistical significance.
Treatment effects were also evaluated through inspection of plots derived from ANCOVA models.
As depression can affect self-report of symptoms in persons with HIV infection 2 as well as cognition, 2 we assessed the impact of changes in depression on the observed interaction between treatment group and time in post hoc analyses. The change in CESD score over time was calculated and used as a covariate in a subset of analyses to evaluate the effect of changes in depressive symptoms on cognitive measures and self-report of symptoms.
All analyses presented here were completed using the Statistical Package for the Social Sciences, 23rd edition (IBM/SPSS, Armonk, NY, USA).
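The authors ran these models in SPSS; purely as an illustration, a roughly equivalent ANCOVA with a partial-eta-squared effect size (and its conversion to Cohen's d) could be set up in Python with statsmodels as below. The column names, covariate set, and synthetic data are hypothetical placeholders, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data frame: one row per participant
rng = np.random.default_rng(0)
n = 11
df = pd.DataFrame({
    "group":     ["tDCS", "sham"] * 5 + ["tDCS"],
    "gender":    ["M"] * 9 + ["F"] * 2,
    "age":       rng.normal(51.5, 4.7, n),
    "education": rng.integers(6, 16, n).astype(float),
    "cd4":       rng.normal(600, 150, n),          # stand-in for immune status
    "pre":       rng.normal(25, 5, n),             # baseline score on some measure
})
df["post"] = df["pre"] + (df["group"] == "tDCS") * 1.5 + rng.normal(0, 2, n)

model = smf.ols("post ~ C(group) + pre + age + C(gender) + education + cd4",
                data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)

ss_group = anova.loc["C(group)", "sum_sq"]
ss_error = anova.loc["Residual", "sum_sq"]
eta_p2 = ss_group / (ss_group + ss_error)        # partial eta squared for the group effect
cohens_d = 2 * np.sqrt(eta_p2 / (1 - eta_p2))    # two-group conversion to Cohen's d
print(round(eta_p2, 3), round(cohens_d, 2))
```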
Results
Demographic and educational data for each participant are presented in Table 1. We enrolled 14 individuals, 11 of whom completed all study procedures. Two participants completed the baseline study visit but withdrew prior to randomization as they lived some distance from the study site and felt driving to our research office three times a week for 2 weeks was impractical. Another participant completed baseline assessment and was randomized, but after several training visits was hospitalized for an unrelated health issue and could not complete study procedures in the 3-week period specified in our protocol. The average age of participants was 51.5 years (SD = 4.71), and they had completed a wide range of years of education (6-15 years, mean = 11.18 years, SD = 2.27). Two of the participants were women and two were White, so that the majority of participants in this study were African American men.
Baseline and follow-up means and SDs for tests and the two self-report measures are presented in Table 2. The possible relations of covariates to cognitive variables were explored via parametric (Pearson) and nonparametric (Spearman) correlations. As correlations of age, gender, education, race, and immune status with cognitive variables were often substantial and judged to be potentially meaningful, we included them in ANCOVA models assessing differences in performance before and after training with or without tDCS. Although the number of covariates is substantial, especially in light of the small overall sample size, all showed relations to other variables and might reasonably be expected to be confounders of any evaluation of treatment effect. The complete table of nonparametric correlations is included as a data supplement to this paper (Table S1).
As we were primarily interested in exploration of preliminary results via graphing and effect sizes, repeated measures ANCOVA models for baseline and follow-up were created, with a specific focus on the extent to which the interactions between time and treatment condition might represent an effect of tDCS on cognitive outcomes. Examples of covariate-corrected baseline and follow-up changes for each group are presented in Figures 1-3. Figure 1 (higher scores reflect better performance) presents results for the HVLT total learning score, suggesting that persons in the tDCS group may have improved relatively more over the baseline assessment than did those in the sham group. Figure 2 (higher scores reflect worse performance) shows results for the Grooved Pegboard dominant hand time; in this instance, after taking covariates into account those in the sham group performed more poorly at the second assessment compared to those in the tDCS group. Figure 3 (lower scores indicate fewer complaints) shows changes in the PAOF total score over assessments. While the sham group reported modestly greater overall problems at the follow-up assessment, the figure suggests a substantial decrease in complaints for the tDCS group.
Effect sizes for the interactions of treatment group by time are presented in Table 3. Effect sizes are presented as partial eta squared and converted to the more widely used Cohen's d. Effect sizes range from moderate to large when interpreted based on the guidelines suggested by Cohen. 50 The average of all effect sizes for cognitive measures (not including Trails A and the effects corrected for change in depression) was 1.28. When the negative effect for Trails A is included, the average is 0.99. Of the 13 planned estimates of treatment effect size, 12 were in the positive direction, suggesting a positive effect of tDCS (P=0.08). Table 3 also includes effect size estimates for several cognitive measures we believed might be sensitive to changes in depression and the PAOF total score. In these models, change in depressive symptoms was included as a covariate. The inclusion of this variable reduced the effect size for the group by time interaction for the PAOF but increased it for cognitive measures.
Success of blind and participant reactions
In order to evaluate how successful the blinding procedure was, an interviewer blind to participants' treatment assignments asked them to which treatment group they believed they had been assigned. All participants indicated that they believed they had been assigned to the active tDCS group. We also asked them whether they believed the intervention had been helpful to them. Nine of the 11 participants stated they felt the intervention had been helpful to them (including several assigned to the sham condition) while two participants stated they were not sure or believed it had not been helpful. Both were assigned to sham treatment. All participants stated they would participate in a similar study in the future. Both men and women indicated that they enjoyed the car racing game, with several participants inquiring about how they could obtain the game in order to continue playing it.
Discussion
The purpose of this study was to explore the acceptability and potential efficacy of computer-delivered cognitive training using commercial gaming software with or without tDCS in persons with HIV infection. Given our small sample size, our analyses evaluated the treatment effects by assessing effect sizes and inspecting graphs of covariate-corrected baseline and follow-up performance. Cognitive testing before and after training suggests the presence of a positive effect of tDCS on learning, memory, and motor speed compared to cognitive training alone. Of the 13 effect sizes presented in Table 3, 12 were positive in showing an advantage for the active tDCS group. These findings are illustrated in the figures, which show either relatively greater improvement (Figure 1) or lack of decline (Figure 2) over assessments in those receiving tDCS. In addition, objective findings are mirrored in participants' self-report of cognitive difficulties (Figure 3). Observed changes in performance on working memory tasks such as Digit Span Backward and Sequencing are consistent with other studies that have found improvement in working memory after left dorsolateral prefrontal cortex tDCS stimulation, 27 including a study in patients with Parkinson's disease. 51 However, it should be acknowledged that positive treatment effects have not been obtained in all studies. 52

Although we did not specifically recruit participants who might be suffering from depression, mean CESD scores for both groups were in a range consistent with clinically significant disturbance of mood. It is thus noteworthy that the interaction of group by time for this measure, while not statistically significant, was small to medium based on Cohen's interpretive guidelines. 50 As tDCS has been successfully used as an adjunctive treatment for depression, 53,54 this finding is also consistent with previous literature on tDCS in other patient groups as well as results of a small trial in persons with HIV infection. 55

Strengths of this study include the success of the single-blind procedure, as all participants indicated they believed they had been in the active treatment group. All baseline and outcome data were collected either by an assessor blind to participants' treatment assignment or by way of ACASI, again reducing the likelihood of experimenter bias in these results. We collected information about participants' subjective experience of the interventions as well; their comments supported objective test results. Our participants were in many respects typical of those who might be expected to benefit from cognitive interventions based on their age, education, and cognitive deficits.
Limitations of this study include the small sample size and single-blind design. The sample size reflects that this is a pilot study targeted at determining whether further study is warranted and whether the interventions would be acceptable to older persons with HIV infection. Lack of a true double-blind design raises the concern about bias induced by the investigators. We did a number of things to decrease the likelihood of experimenter effects, including positioning the experimenter and the tDCS device out of sight of the participant during stimulation, providing neutral information about the likelihood of experiencing physical sensations from stimulation, and collecting rating scale data via ACASI with the investigator out of the room. Follow-up cognitive testing and interviews were completed by an assessor blind to treatment condition. While these procedures reduced the likelihood of experimenter bias, they cannot eliminate it. Participants included small numbers of women and Whites, creating another potential source of bias. As the purpose of this study was to assess the acceptability and possible efficacy of training with and without tDCS, we did not include a no-treatment control condition. Thus, we cannot evaluate the effect of computer training by itself, as all participants received the same cognitive training intervention. As noted, our sample size is quite small and we controlled for a number of covariates. Since the covariates were all factors that might reasonably be related to performance on outcome measures, such as age, gender, education, and immune status, and thus confound any evaluation of the effect of treatment, we believe this was an appropriate strategy. However, it must be acknowledged that our assessment is based on a small sample.

An anomalous finding was the change in Trails A performance, in which the performance of the active tDCS group actually declined while that of the sham group improved. Other investigators have evaluated the effect of tDCS on Trail Making Test performance. Fagerlund et al 56 assessed the effect of anodal stimulation over M1 (somewhat posterior to the site of stimulation in this study) and found no effect on Trail Making Test performance. In a study of tDCS for depression, Brunoni et al 57 showed a modest differential in Trails A improvement over time favoring sham treatment (sham improved 8.1 seconds while the active group improved 3.2 seconds). We can speculate that this finding may simply be a random outcome, especially in light of otherwise consistent results favoring tDCS.
Results of this pilot study thus provide suggestive evidence for the efficacy of tDCS combined with computer-delivered cognitive training in improving cognitive function in persons with HIV-related cognitive deficits. Although our small sample size limited the power of this study to detect statistically significant treatment effects, most effect sizes were moderate to large, and all but one were in the direction of a positive effect for tDCS. Objective findings are reflected in participants' own estimation of the effectiveness of the interventions, although it should be noted that some individuals who received sham tDCS also believed they had benefited from the intervention. This may reflect a nonspecific effect of simply participating in an intervention study or a positive effect of the cognitive training intervention. As we did not include a no-treatment control condition, this possibility cannot be evaluated.
Given the importance of cognitive deficits for affected persons' functional status and quality of life as well as lack of effective alternative treatments, these results have potential clinical significance. Due to the limited scope of this study, we did not include measures of functional status so that it is not possible to know whether the observed changes in cognitive tasks had an impact on other outcomes directly related to everyday functioning, such as self-care, medication adherence, or driving. Future research should focus on assessing not only laboratory outcome measures but also outcomes with clearer real-world significance, such as instrumental activities of daily living and medication adherence.
Disclosure
The authors report no conflicts of interest in this work.
Thermal Timescale Mass Transfer Rates in Intermediate-Mass X-ray Binaries
Thermal timescale mass transfer generally occurs in close binaries where the donor star is more massive than the accreting star. The mass transfer rates are usually estimated in terms of the Kelvin-Helmholtz timescale of the donor star, but recent investigations indicate that this method may overestimate the real mass transfer rates in accreting white dwarf or neutron star binary systems. We have systematically investigated the thermal-timescale mass transfer processes in intermediate-mass X-ray binaries by calculating binary evolution sequences with various initial donor masses and orbital periods. From the calculated results we find that on average the mass transfer rates are lower than traditional estimates by a factor of $\sim 4$.
Introduction
X-ray binaries with neutron star accretors are traditionally divided into two groups based on the masses of the donor stars. One is low-mass X-ray binaries (LMXBs) with donor stars less massive than ∼ 1.5M⊙; the other is high-mass X-ray binaries (HMXBs) with donor masses exceeding ∼ 10.0M⊙. In LMXBs mass is exchanged through Roche-lobe overflow (RLOF), while in HMXBs the accretor is likely to be fed by the stellar wind-induced mass loss of the companion. Systems with donor masses between 1.5 and 10.0M⊙ are called intermediate-mass X-ray binaries (IMXBs). Few IMXBs have been discovered in the Galaxy. The reason is that, on one hand, mass transfer via RLOF is thought to be rapid and unstable due to the large mass ratio, leading to the formation of a common envelope; on the other hand, the donor stars in this mass range are unable to generate strong winds to power bright X-ray emission from the neutron star (van den Heuvel 1975).
Recent investigations of IMXB evolution have led to important realizations about the stability of super-Eddington mass transfer, and suggest that many, or perhaps most, of the current LMXBs descended from IMXBs. Studies of the evolution of the LMXB Cyg X-2 (King & Ritter 1999; Podsiadlowski & Rappaport 2000; Kolb et al. 2000) indicate that the mass of the donor star in this system must have been substantially larger (∼ 3.5M⊙) than its current value (∼ 0.6M⊙), implying that intermediate-mass systems can survive the high mass transfer phase by ejecting most of the transferred mass. The calculations by Tauris, van den Heuvel, & Savonije (2000) have shown that some IMXBs may survive a highly super-Eddington mass transfer phase on a (sub)thermal timescale without spiraling in, provided the convective envelope of the donor star is not too deep. These systems provide a new formation channel for binary millisecond pulsars with heavy CO white dwarfs and relatively short orbital periods (3−50 days). Davies & Hansen (1998) have independently suggested that IMXBs may be the progenitors of recycled pulsars in globular clusters. These works emphasize the necessity of accurately defining the evolutionary paths of IMXBs, and motivate a systematic analysis of binary systems undergoing thermal timescale mass transfer.
Since the donor star in an IMXB is more massive than the accretor, the Roche-lobe (RL) radius of the donor will shrink during the mass transfer. At the same time the donor star will either grow or shrink in response to the mass loss. The stability of the mass transfer depends on the radius-mass exponents for the donor and its RL (Soberman, Phinney, & van den Heuvel 1997), $\xi_2 = (\partial \ln R_2/\partial \ln M_2)$ and $\xi_{\rm L} = (\partial \ln R_{\rm L}/\partial \ln M_2)$, where $\xi_2$ describes the adiabatic or thermal response of the donor star to mass loss ($M_2$ and $R_2$ are the mass and radius of the donor star, and $R_{\rm L}$ is its Roche-lobe radius). In general $R_{\rm L}$ decreases ($\xi_{\rm L} > 0$) when material is transferred from a relatively heavy donor to a light accretor, and vice versa. Donor stars with radiative envelopes usually shrink ($\xi_2 > 0$) in response to mass loss, while donor stars with a deep convective envelope expand rapidly ($\xi_2 < 0$) in response to mass loss. The relative sizes of these exponents determine whether the mass transfer proceeds on a dynamical or a thermal timescale. If $\xi_2 > \xi_{\rm L}$ the mass transfer is dynamically stable, occurring on either the nuclear or the thermal timescale. If $\xi_2 < \xi_{\rm L}$ the Roche-lobe radius shrinks more rapidly than the adiabatic radius of the donor, and the mass transfer proceeds on a dynamical timescale, leading to a common envelope and a spiral-in phase. The final product could be either a Thorne-Żytkow object or a short-period binary if the envelope is ejected (e.g. Tauris, van den Heuvel, & Savonije 2000; Podsiadlowski et al. 2002). Podsiadlowski et al. (2002) made a survey of X-ray binary sequences with donor masses ranging from 0.6 to 7M⊙. These authors found that the actual mass transfer rates through RLOF sometimes deviate from the values given by the traditional formula for thermal timescale mass transfer,
$$\dot{M}_{\rm th} \simeq \frac{M_2^{\rm i} - M_1^{\rm i}}{\tau_{\rm KH}}, \quad (1)$$
where $M_1^{\rm i}$ and $M_2^{\rm i}$ are the initial masses of the accretor and the donor respectively, and $\tau_{\rm KH}$ is the Kelvin-Helmholtz timescale of the donor,
$$\tau_{\rm KH} = \frac{G M_2^2}{R_2 L_2},$$
where $G$ is the gravitational constant and $L_2$ is the luminosity of the donor star. The work done by Langer et al. (2000) on the evolution of white dwarf binaries also indicates that Eq. (1) could overestimate the mean mass transfer rates by a factor of a few.
However, due to its simplicity, Eq. (1) has been widely used in population synthesis investigations (e.g. Hurley et al. 2002; Belczynski et al. 2007) of thermal timescale mass transfer in close binaries. The aim of this paper is to present a modified empirical formula to estimate the mean thermal timescale mass transfer rates onto neutron stars, by calculating the evolution of IMXBs systematically. The results may be helpful to future investigations involving mass transfer processes in IMXBs. We describe the stellar evolution code and the binary model used in this study in §2. The calculated results and fitting formulae for the mass transfer rates are presented in §3. We conclude in §4.
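To make the traditional estimate concrete, the short Python sketch below evaluates Eq. (1) for a hypothetical 3.0 M⊙ donor paired with a 1.4 M⊙ neutron star; the donor radius and luminosity are illustrative placeholder values for a star near the end of its main sequence, not numbers taken from this paper.

```python
# Traditional thermal-timescale mass-transfer estimate, Eq. (1):
#   Mdot_th ~ (M2_i - M1_i) / tau_KH,  with  tau_KH = G * M2^2 / (R2 * L2).
# The donor radius and luminosity below are hypothetical placeholders.

G     = 6.674e-8   # gravitational constant [cgs]
M_SUN = 1.989e33   # solar mass [g]
R_SUN = 6.957e10   # solar radius [cm]
L_SUN = 3.828e33   # solar luminosity [erg/s]
YR    = 3.156e7    # one year [s]

def kelvin_helmholtz_time(m2, r2, l2):
    """Kelvin-Helmholtz timescale [yr] for a donor of mass m2 [Msun],
    radius r2 [Rsun] and luminosity l2 [Lsun]."""
    tau = G * (m2 * M_SUN) ** 2 / ((r2 * R_SUN) * (l2 * L_SUN))
    return tau / YR

def mdot_thermal(m1_i, m2_i, r2, l2):
    """Eq. (1): thermal-timescale mass transfer rate [Msun/yr]."""
    return (m2_i - m1_i) / kelvin_helmholtz_time(m2_i, r2, l2)

# 1.4 Msun neutron star with a 3.0 Msun donor near the end of its main
# sequence (illustrative R2 = 5 Rsun, L2 = 150 Lsun).
print(kelvin_helmholtz_time(3.0, 5.0, 150.0))  # ~ 4e5 yr
print(mdot_thermal(1.4, 3.0, 5.0, 150.0))      # ~ 4e-6 Msun/yr
```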
Binary calculations
We have followed the evolution of binary systems containing a neutron star and an intermediate-mass secondary star using an updated version of the evolution code developed by Eggleton (1971; see also Pols et al. 1995). The opacities in the code are from Rogers & Iglesias (1992), and from Alexander & Ferguson (1994) for temperatures below $10^{3.8}$ K. We assume a mixing length parameter of α = 2, and set the convective overshooting parameter to be 0.2. The metallicity of the secondary is taken to be Z = 0.02 and 0.001, with corresponding helium abundances of 0.28 and 0.242, respectively. Each system is set to start with a neutron star of mass $M_1$ = 1.4M⊙ and a secondary of mass $M_2$ from 1.6 to 4.0M⊙; systems with donor masses higher than ∼ 4.0M⊙ always experience dynamically unstable mass transfer (Tauris, van den Heuvel, & Savonije 2000; Podsiadlowski et al. 2002). The effective RL radius of the secondary is calculated with the formula of Eggleton (1983),
$$\frac{R_{\rm L}}{a} = \frac{0.49\,q^{2/3}}{0.6\,q^{2/3} + \ln(1 + q^{1/3})},$$
where $a$ is the orbital separation and $q = M_2/M_1$ is the mass ratio. The mass transfer rate from the donor star via RLOF is calculated with the prescription of Eggleton (1971), in which the rate is regulated by a parameter RMT that is adjusted automatically in the code, usually taken to be 500.
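As a numerical illustration of the stability criterion discussed in the Introduction, the sketch below implements Eggleton's (1983) Roche-lobe formula and estimates the Roche-lobe radius-mass exponent ξ_L by finite differences. The conservative-transfer assumption (total mass and orbital angular momentum held fixed) is ours, adopted only to keep the example short; the calculations in this paper are highly non-conservative.

```python
import numpy as np

def roche_lobe_radius(q, a=1.0):
    """Eggleton (1983): effective Roche-lobe radius of the donor,
    with q = M_donor / M_accretor and orbital separation a."""
    q13 = q ** (1.0 / 3.0)
    return a * 0.49 * q13 ** 2 / (0.6 * q13 ** 2 + np.log(1.0 + q13))

def xi_L_conservative(m2, m1, dm=1e-4):
    """Numerical xi_L = dln(R_L)/dln(M2), assuming conservative mass
    transfer (M1 + M2 and the orbital angular momentum held fixed).
    This is an illustrative assumption, not the non-conservative
    prescription used in the actual binary calculations."""
    def rl(m_donor):
        m_acc = (m2 + m1) - m_donor       # conservative: total mass fixed
        a = 1.0 / (m_donor * m_acc) ** 2  # constant J: a ~ (M1*M2)^-2
        return roche_lobe_radius(m_donor / m_acc, a)
    lo, hi = m2 * (1 - dm), m2 * (1 + dm)
    return (np.log(rl(hi)) - np.log(rl(lo))) / (np.log(hi) - np.log(lo))

# A 3.0 Msun donor with a 1.4 Msun neutron star: xi_L is large and
# positive, so stable mass transfer requires a donor that contracts
# strongly (xi_2 > xi_L) in response to mass loss.
print(xi_L_conservative(3.0, 1.4))
```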
The mass loss of the secondary via its stellar wind is calculated according to the empirical formula given by Nieuwenhuijzen & de Jager (1990),
$$\dot{M}_{\rm wind} = 9.6\times 10^{-15}\,\left(\frac{R_2}{R_\odot}\right)^{0.81}\left(\frac{L_2}{L_\odot}\right)^{1.24}\left(\frac{M_2}{M_\odot}\right)^{0.16}\; M_\odot\,{\rm yr^{-1}}.$$
To follow the details of the mass transfer processes, we also include losses of orbital angular momentum due to mass loss, magnetic braking, and gravitational-wave radiation, although the last process is not important in this analysis. For magnetic braking we use the standard angular momentum loss prescription suggested by Rappaport, Verbunt, & Joss (1983). The Eddington luminosity of the neutron star is
$$L_{\rm Edd} = \frac{4\pi G M_1 m_{\rm p} c}{\sigma_{\rm T}},$$
where $m_{\rm p}$ and $\sigma_{\rm T}$ are the proton mass and the Thomson scattering cross section, respectively. We limit the maximum accretion rate of the neutron star to the Eddington rate, and let the excess mass be lost from the system with the specific orbital angular momentum of the neutron star. The orbital separation then changes according to the standard equation for non-conservative mass transfer (e.g. Soberman, Phinney, & van den Heuvel 1997), in which $M = M_1 + M_2$ is the total mass, $J$ the orbital angular momentum, and $\dot{J}_{\rm MB}$ the rate of orbital angular momentum loss by magnetic braking.
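For orientation, the snippet below evaluates the Eddington luminosity of a 1.4 M⊙ neutron star and converts it into an approximate Eddington accretion rate. The conversion Ṁ_Edd ≈ L_Edd R_NS / (G M_NS) and the assumed 10 km neutron-star radius are illustrative assumptions of ours; the paper itself only states that accretion is capped at the Eddington rate.

```python
import math

# Physical constants (cgs)
G       = 6.674e-8    # gravitational constant
C       = 2.998e10    # speed of light [cm/s]
M_P     = 1.673e-24   # proton mass [g]
SIGMA_T = 6.652e-25   # Thomson cross section [cm^2]
M_SUN   = 1.989e33    # solar mass [g]
YR      = 3.156e7     # seconds per year

def eddington_luminosity(m_ns):
    """L_Edd = 4 pi G M m_p c / sigma_T for an accretor of mass m_ns [Msun]."""
    return 4.0 * math.pi * G * (m_ns * M_SUN) * M_P * C / SIGMA_T

def eddington_mdot(m_ns, r_ns_km=10.0):
    """Approximate Eddington accretion rate [Msun/yr], assuming the
    accretion luminosity is G*M*Mdot/R with an assumed NS radius."""
    mdot = eddington_luminosity(m_ns) * (r_ns_km * 1e5) / (G * m_ns * M_SUN)
    return mdot * YR / M_SUN

print(f"{eddington_luminosity(1.4):.2e} erg/s")   # ~ 1.8e38 erg/s
print(f"{eddington_mdot(1.4):.2e} Msun/yr")       # ~ 1.5e-8 Msun/yr
```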
Results
We have calculated a large number of evolutionary sequences for IMXBs with various initial donor masses and orbital periods, so that mass transfer starts when the donor star is on the early or late main sequence (cases a1 and a2), in the Hertzsprung gap (cases b1, b2, and b3), or on the giant branch (cases c1, c2, and c3). In Fig. 2 the system contains a neutron star and a companion with an initial mass of 3.0M⊙, which starts filling its RL roughly at the end of its central hydrogen burning. In Fig. 3 the donor has an initial mass of 3.6M⊙ and starts filling its RL right after its helium ignition. In the figures we show the evolution of the mass transfer rate, the orbital period, the donor mass, and the neutron star mass with time. In Fig. 2 the mass transfer rate first rises rapidly to ∼ $10^{-5.5}$ M⊙ yr$^{-1}$, then declines to a few $10^{-8}$ M⊙ yr$^{-1}$ after ∼ 9 Myr, and stays around this value for ∼ 1 Myr. During the initial rapid mass transfer phase the donor mass decreases from 3 M⊙ to < 1 M⊙, but most of the transferred mass is lost from the system; efficient accretion by the neutron star occurs only during the later part of the mass transfer phase. The orbital period first decreases to around 1.2 days, and then increases to ∼ 30 days at the end of mass transfer. The mass transfer shown in Fig. 3 is more rapid due to the more massive and more evolved donor star, lasting around 0.1 Myr. About 2.6 M⊙ of mass is transferred from the donor star during this phase, most of which is lost from the system, and the neutron star mass hardly changes.
In our work the initiation time ($t_{\rm i}$) and termination time ($t_{\rm f}$) of the thermal timescale mass transfer are taken to be the times when the mass transfer rate first exceeds, and then declines below, the Eddington limit of the neutron star. The mean mass transfer rate $\dot{M}_{\rm mean}$ is calculated from
$$\dot{M}_{\rm mean} = \frac{M_2^{\rm i} - M_2^{\rm f}}{t_{\rm f} - t_{\rm i}},$$
where $M_2^{\rm i}$ and $M_2^{\rm f}$ are the donor masses at $t = t_{\rm i}$ and $t_{\rm f}$, respectively (stellar wind mass loss is negligible). In Tables 1 and 2 we list the calculated values of $t_{\rm i}$, $t_{\rm f}$, $M_2^{\rm i}$, $M_2^{\rm f}$, $\dot{M}_{\rm mean}$ and the maximum mass transfer rate $\dot{M}_{\rm max}$ for evolutions with Z = 0.02 and 0.001, respectively.
For comparison, we also list the expected values of $\dot{M}_{\rm th}$ calculated with Eq. (1). Figure 4 shows the calculated mean mass transfer rates as a function of $(M_2^{\rm i} - M_1^{\rm i})/\tau_{\rm KH}$. Linear fits to $\dot{M}_{\rm mean}$ can be obtained for Z = 0.02 and for Z = 0.001, respectively.
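The comparison between $\dot{M}_{\rm mean}$ and the Eq. (1) estimate reduces to simple arithmetic, as the sketch below shows for one hypothetical sequence. The donor masses and phase duration are invented for illustration (they are not taken from Tables 1 and 2), and the Eq. (1) value reuses the illustrative numbers from the sketch in the Introduction; the resulting ratio merely shows how an overestimate by a factor of a few would appear.

```python
# Mean mass-transfer rate over the super-Eddington phase, compared with the
# traditional Eq. (1) estimate. All numerical inputs are hypothetical.

def mdot_mean(m2_i, m2_f, t_i, t_f):
    """Mean transfer rate [Msun/yr] between the onset (t_i) and end (t_f)
    of the super-Eddington phase [yr], from the donor masses [Msun]."""
    return (m2_i - m2_f) / (t_f - t_i)

# Hypothetical sequence: the donor shrinks from 3.0 to 0.9 Msun in 2 Myr.
rate_mean = mdot_mean(3.0, 0.9, 3.50e8, 3.52e8)

# Hypothetical Eq. (1) estimate for the same illustrative donor parameters
# used in the Introduction sketch (R2 = 5 Rsun, L2 = 150 Lsun).
rate_th = 4.2e-6

print(f"mean rate    = {rate_mean:.2e} Msun/yr")
print(f"Eq. (1) rate = {rate_th:.2e} Msun/yr")
print(f"ratio        = {rate_th / rate_mean:.1f}")   # a factor of a few
```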
Summary and discussion
Our numerical calculations show that stable, super-Eddington, thermal timescale mass transfer occurs in IMXB systems with donor masses between 1.6 and 3.6M⊙, in both the Z = 0.02 and Z = 0.001 cases. We find that on average the traditional expression (Eq. [1]) overestimates the thermal timescale mass transfer rates by a factor of ∼ 4.
The results are obviously subject to various uncertainties in the treatment of mass transfer processes in binary evolution. One of the issues is the mass and angular momentum loss during mass transfer. We have used an Eddington-limited accretion rate for the neutron star. Recent observations of quite a few binary millisecond radio pulsars constrain the pulsar masses to ∼ 1.35 M⊙ (Bassa et al. 2006, and references therein), suggesting that almost all of the transferred mass may be lost rather than accreted by the neutron star during the IMXB and LMXB phases. If this lost mass carries the specific orbital angular momentum of the neutron star, the orbital shrinking during thermal timescale mass transfer would be slower than we have calculated, as can be seen from Eq. (7) by setting f = 0. The effect on mass transfer is most significant for binaries with mildly super-Eddington mass transfer rates: for example, we find that the mean mass transfer rate is decreased by a factor of ∼ 5 if the donor mass is ∼ 1.5 − 2.0 M⊙ and Z = 0.02. We note that in white dwarf binary evolution it has also been realized that strong mass loss from the accretor can stabilize the mass transfer even for a relatively high mass ratio, avoiding the formation of a common envelope (Hachisu, Kato, & Nomoto 1996; Li & van den Heuvel 1997; Langer et al. 2000). If, however, part of the lost mass forms a circumbinary disk rather than leaving the system (Soberman, Phinney, & van den Heuvel 1997), the disk would extract orbital angular momentum from the binary through tidal torques, enhancing the mass transfer rates and leading to more rapid orbital shrinkage (Spruit & Taam 2001).
It should also be noted that, during the evolution of an IMXB, the strong X-ray radiation from the accretor and the accretion disk could illuminate the donor star, causing it to expand and to drive a strong stellar wind (Podsiadlowski 1991; Hameury et al. 1993; Phillips & Podsiadlowski 2002), which would also lead to a higher mass transfer rate and a shorter duration of mass transfer. As there is no generally accepted theory of this irradiation effect, we have not included it in our calculations.