id: 116978580 | source: pes2o/s2orc | version: v3-fos-license
Measuring the solar atmosphere
The new CRISP filter at the Swedish 1-m Solar Telescope provides opportunities for observing the solar atmosphere with unprecedented spatial resolution and cadence. In order to benefit from the high quality of observational data from this instrument, we have developed methods for calibrating and restoring polarized Stokes images, obtained at optical and near-infrared wavelengths, taking into account field-of-view variations of the filter properties. In order to facilitate velocity measurements, a time series from a 3D hydrodynamical granulation simulation is used to compute quiet Sun spectral line profiles at different heliocentric angles. The synthetic line profiles, with their convective blueshifts, can be used as absolute references for line-of-sight velocities. Observations of the Ca II 8542 Å line are used to study magnetic fields in chromospheric fibrils. The line wings show the granulation pattern at mid-photospheric heights whereas the overlying chromosphere is seen in the core of the line. Using full Stokes data, we have attempted to observationally verify the alignment of chromospheric fibrils with the magnetic field. Our results suggest that in most cases fibrils are aligned along the magnetic field direction, but we also find examples where this is not the case. Detailed interpretation of Stokes data from spectral lines formed in the chromosphere can be made using non-LTE inversion codes. For the first time, we use a realistic 3D MHD chromospheric simulation of the quiet Sun to assess how well NLTE inversions recover physical quantities from spectropolarimetric observations of Ca II 8542 Å. We demonstrate that inversions provide realistic estimates of depth-averaged quantities in the chromosphere, although high spectral resolution and high sensitivity are needed to measure quiet Sun chromospheric magnetic fields.
List of Papers
This thesis is based on the following publications:
I Solar velocity references from 3D HD photospheric models
de la Cruz Rodríguez J., Kiselman D., Carlsson M., 2010, submitted to A&A
II Non-LTE inversions from a 3D MHD chromospheric model
de la Cruz Rodríguez J., Socas-Navarro H., Carlsson M., Leenaarts J., 2010, to be submitted to A&A
Introduction
The solar atmosphere constitutes a remarkably complex astrophysical laboratory that continuously performs experiments for us to observe. One reason for trying to understand this complicated environment is to establish with high precision simple but fundamental properties of the Sun. An important example is its chemical composition, which can be put in context with our understanding of the astrophysical processes in the interior of the Sun and other stars, in the Galaxy, and in the early universe. To accomplish this, we need to take into account the dynamics of the solar photosphere as well as the physical processes involved in the formation of the spectral lines from which chemical abundance ratios are determined. This represents a major challenge, and our confidence in the results relies heavily on the accuracy of measurements made with modern solar telescopes. A second reason for studying the solar atmosphere, and one that is even more relevant in the present thesis, is to understand the mechanisms that generate the observed fine structure, dynamics, and magnetic field and to carry over that understanding to other astrophysical plasmas, including other stellar atmospheres. Dramatic progress in this field has in recent years been made in part by improved theoretical simulations and in part by new solar telescopes, equipped with powerful instrumentation, on the ground and in space. Both simulations and observations clearly demonstrate that much of the dynamics occurs at very small spatial scales. Obtaining accurate quantitative information that will allow us to confirm or refute new models requires pushing existing telescopes to their diffraction limit and designing future telescopes with improved spatial resolution, better signal-to-noise and equipped with multiple instruments to simultaneously diagnose different layers of the solar atmosphere. In addition, sophisticated post-processing methods are needed to enhance the fidelity of the observed data, by developing accurate methods for calibration and for removing contamination from stray light due to limitations set by the Earth's atmosphere and optical imperfections in the telescope or its instrumentation. This thesis deals with the challenges of accurate measurements of quantities relevant to small-scale dynamics, based on observations with a major solar telescope: the Swedish 1-m Solar Telescope (SST) on La Palma.
Our observational data are from two distinct, but physically connected, layers of the solar atmosphere: the photosphere and the chromosphere. The dynamics and morphology of these two atmospheric layers are completely different. To a large extent, these differences can be attributed to magnetic fields: whereas the gas pressure falls off exponentially with height and is roughly 10⁵ times smaller in the chromosphere than in the photosphere, the magnetic field strength falls off much more slowly. The relative importance of forces associated with gas pressure and magnetic field can be estimated from the ratio of gas pressure (P_g) to magnetic pressure (P_B), the plasma-β parameter, defined by β = P_g/P_B = 2μ₀P_g/B². In the photosphere, β is much larger than unity everywhere, except in sunspots and other (mostly small-scale) concentrations of magnetic field. The photosphere is dominated by a convective energy flux, peaking just below the visible surface. Key questions today are to understand how this energy flux is maintained within magnetic structures, and a major challenge is to identify and measure the velocity signatures of any convective flows present. These signatures are both weak and small-scale, and their identification relies on whether we can establish an absolute reference for measured Doppler velocities on the Sun. The first part of the present thesis deals with this problem. The second part deals with observations of the chromosphere. Here, magnetic fields are much weaker than in the strongest magnetic structures seen in the photosphere, but the gas pressure is even lower. In the upper chromosphere, β < 1 and magnetic forces are therefore dominant. This work aims at investigating the potential for diagnosing the weak chromospheric magnetic fields using Stokes polarimetry and sophisticated inversion techniques.
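As a rough numerical illustration of how β separates these two regimes, the sketch below evaluates β = 2μ₀P_g/B² for order-of-magnitude pressures and field strengths; the numbers are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [SI units]

def plasma_beta(p_gas, b_field):
    """Ratio of gas pressure to magnetic pressure, beta = P_g / (B^2 / 2 mu_0)."""
    return 2.0 * MU0 * p_gas / b_field**2

# Illustrative order-of-magnitude values (SI units: Pa and Tesla).
cases = {
    "photosphere, quiet Sun (B ~ 50 G)": (1e4, 50e-4),
    "photosphere, sunspot umbra (B ~ 3 kG)": (1e4, 0.3),
    "upper chromosphere (B ~ 10 G)": (1e-1, 10e-4),
}

for name, (p, b) in cases.items():
    print(f"{name}: beta = {plasma_beta(p, b):.2g}")
```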
To set the context, we proceed with an overview of some photospheric and chromospheric topics.
The Photosphere
The visible surface of the Sun corresponds to the photosphere, a thin layer of about 500 km located on top of the convection zone, where the plasma changes from completely opaque to transparent (Stix 2002).
Granulation
Outside active regions with strong magnetic fields, the photosphere is dominated by a dynamic pattern of bright granules surrounded by dark intergranular lanes (see Fig. 1.1). The flow in a granule resembles that of a fountain: hot plasma moves upwards inside the granule and then flows out towards the edge, where the cooler plasma merges with material from neighbouring granules. Gravity and pressure increase at the edge of the granules, accelerating the fluid downwards (Stein & Nordlund 1998). Regular granules have a typical size of the order of 1 Mm and a characteristic lifetime of 6 minutes (Bahng & Schwarzschild 1961).
Convective motions leave strong fingerprints on any line formed in the photosphere. An important diagnostic is the C-shaped bisector obtained from spatially-averaged line profiles. This effect is produced by the statistical average of bright blueshifted profiles from granules with dark redshifted profiles originating in the intergranular lanes (Dravins et al. 1981). This intensity weighted average is blueshifted as upflows are more heavily weighted by being brighter and covering a larger area than the narrower intergranular lanes. This shift is commonly known as the convective blueshift.
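A bisector can be computed numerically by locating, at each intensity level between the line core and the continuum, the midpoint between the blue and red wings of the profile. The sketch below does this for a synthetic, slightly asymmetric absorption line; the line shape and numbers are invented purely for illustration.

```python
import numpy as np

def bisector(wav, intensity, n_levels=5):
    """Return (intensity level, bisector wavelength) pairs of an absorption line.

    At each intensity level between the line core and the continuum, the
    bisector is the midpoint of the blue- and red-wing wavelengths where
    the profile crosses that level.
    """
    i_core = intensity.argmin()
    levels = np.linspace(intensity[i_core], intensity.max(), n_levels + 2)[1:-1]
    out = []
    for lev in levels:
        # interpolate wavelength as a function of intensity on each wing
        blue = np.interp(lev, intensity[:i_core + 1][::-1], wav[:i_core + 1][::-1])
        red = np.interp(lev, intensity[i_core:], wav[i_core:])
        out.append((lev, 0.5 * (blue + red)))
    return out

# Illustrative asymmetric line: two Gaussian dips of different widths, mimicking
# the blueshifted granular and redshifted intergranular contributions.
wav = np.linspace(-0.3, 0.3, 400)          # Angstrom from nominal line centre
prof = (1.0 - 0.5 * np.exp(-((wav + 0.01) / 0.05) ** 2)
            - 0.2 * np.exp(-((wav - 0.03) / 0.10) ** 2))
for level, pos in bisector(wav, prof):
    print(f"I = {level:.2f}  bisector at {pos * 1e3:+.1f} mA")
```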
The thermodynamic history of fluid elements rising through the photosphere is described in detail by Cheung et al. (2007). The temperature decrease of a fluid element that moves up through the convection zone is mostly produced by adiabatic expansion. As the fluid reaches the photosphere, the opacity drops and the fluid rapidly loses entropy by radiative cooling. At this point, the fluid cell is overshooting into the stably stratified photosphere, and it still interacts with its surroundings because it is not completely transparent to radiation. Fig. 1.2 shows the trajectories described by tracer fluid elements that enter the photosphere. The colour coding indicates the sign of Q_rad, the heat exchange with the surroundings: dark for radiative losses (Q_rad < 0) and light grey where the fluid elements are being heated (Q_rad > 0). Interestingly, there is no direct correlation between the heat exchange and the temperature variation of the fluid elements, which is determined by a balance between adiabatic expansion and (non-adiabatic) heat exchange with the surroundings.
In recent years, efforts to obtain refined estimates of solar abundances have given rise to some controversies (see e.g. Asplund et al. 2000). This debate has stimulated improvements in the treatment of radiative energy transfer in 3D hydrodynamic simulations, in particular for the mid and upper photosphere, where spectral lines are formed. These controversies have also stimulated the development of improved non-LTE calculations of the spectral lines used for abundance determinations (Shchukina & Trujillo Bueno 2001). As a result, 3D MHD simulations of more complex solar atmospheric dynamics involving magnetic fields can now be made with a better treatment of energy transfer than just a few years ago.
Sunspots
The structure and dynamics of sunspots remain some of the most controversial topics in the solar photosphere. Sunspots are strong magnetic field concentrations that appear in the atmosphere, with typical sizes of 12 000 km. As a first approximation, we can assume that the magnetic field acts on the gas in the form of a magnetic pressure, P_B = B²/(2μ₀). Horizontal force balance then dictates that the sum of the gas pressure and magnetic pressure must be the same inside the sunspot as outside, P_s + P_B = P_qs, where P_s is the gas pressure inside the spot and P_qs is the gas pressure in the surrounding (non-magnetic) quiet Sun. An immediate consequence is that the gas pressure inside the spot must be lower than outside, and therefore the atmosphere is more transparent inside sunspots, allowing observers to see deeper layers of the atmosphere than in quiet Sun observations. This is generally known as the Wilson depression (∼ 500 km), discovered by Wilson & Maskelyne (1774). At the same time, the H⁻ opacity decreases with temperature, and since sunspots are cooler than their environment the opacity decreases even further. Sunspots are darker than the surrounding granulation because convection is suppressed by the strong magnetic field; sunspots are thus cooler as a consequence of inefficient heat transport. During the past decade, the scientific debate has focused on the dynamics and structure of the penumbra and on explaining why the penumbra is as bright as it is, about 75% of the surrounding quiet-Sun intensity. Recent advances in instrumentation have unveiled very fine structure in sunspots, especially in the penumbra. Fig. 1.3 illustrates in great detail the fine structure of the penumbra of a sunspot. The blown-up section shows dark-cored penumbral filaments (Scharmer et al. 2002) and the inner umbra. The following theoretical frameworks (Scharmer 2008) have become popular because they can partially reproduce the features observed in sunspots, in spite of representing different physical concepts.
1. The uncombed penumbra model (Solanki & Montavon 1993) postulates the existence of discrete flux tubes with a homogeneous internal magnetic field that changes discontinuously at the tube boundary. These nearly horizontal flux tubes are embedded in a more vertical magnetic field. This model was able to reproduce the strongly asymmetric Stokes V profiles observed on the limb side of the penumbra in observations off disk centre (Sanchez Almeida & Lites 1992).
2. Siphon flow models (Meyer & Schmidt 1968; Montesinos & Thomas 1997) are based on the idea that a difference in field strength between the two footpoints of a flux tube leads to a difference in gas pressure, driving a plasma flow towards the footpoint with the higher field strength (and thus lower gas pressure). Evershed flows are assumed to be steady flows between two footpoints with different magnetic field strengths (e.g., Westendorp Plaza et al. 1997). However, these models do not explain the mechanism that produces such a field-strength difference between the footpoints.
3. Convection and downward pumping of magnetic flux are ingredients added to the siphon models. As siphon models represent a stationary solution, time variations are explained by mechanisms external to the penumbra. In this context, moving penumbral grains are assumed to be produced by a moving convective pattern on the bright side of the penumbra. Thomas et al. (2002) and Weiss et al. (2004) attribute the submergence of the flux tubes at the outer boundary of the penumbra to downward pumping produced by convective motions. Furthermore, they attribute the whole filamentary structure of the penumbra to the same downward-pumping mechanism, explaining the structure inside the sunspot by mechanisms that take place outside it.
4. The convective gap model proposed by Scharmer & Spruit (2006). In this framework, penumbral filaments are generated by convection in radially aligned, nearly field-free gaps. The strong field gradients needed to reproduce the asymmetric Stokes V profiles reported by Sanchez Almeida & Lites (1992) are assumed to be produced by the topology of the nearly field-free gaps combined with line-of-sight gradients in the flow velocity. The Evershed flow is explained as the horizontal component of this convection.
Oscillations in the solar atmosphere
Solar oscillations were discovered by Leighton et al. (1962) with a simple observational technique: two simultaneous images were recorded in the blue and the red wings of a spectral line, respectively, and then subtracted. The resulting image contained intensity variations produced by the Doppler shift of the line. Kahn (1961) proposed that the oscillations are sound waves trapped in the solar atmosphere. Towards the solar interior, the temperature and the sound speed increase, so a downward-propagating wave is refracted until it starts to propagate upwards again. A similar process occurs above the photosphere, where the waves are refracted back into the inner atmosphere. Observations contain an overlap of hundreds of modes of oscillation that effectively reach different depths. The oscillations in the photosphere typically have a 5-minute period and an amplitude around 1 km s⁻¹ (Stix 2002).

During the past 15 years, combined efforts from observational and computational approaches have led to a better understanding of chromospheric dynamics. 3D simulations of solar-like stars including a chromosphere and corona are now computationally affordable (Leenaarts et al. 2007; Hansteen et al. 2007; Carlsson et al. 2010).
On the observational side, a new generation of Fabry-Pérot interferometers (FPI), for example IBIS at the Dunn Solar Telescope (DST) and CRISP at the Swedish 1-m Solar Telescope (SST), have provided evidence of very fine structure in the chromosphere.
The chromospheric landscape
The definition of chromospheric fine structure has evolved as new discoveries were made. It is widely accepted that the chromosphere includes the characteristic grass-like topology usually seen in Hα images (Rutten 2006), but it is not clear where the boundaries of the chromosphere are. Fig. 1.5 shows three Ca II 8542 Å images acquired with SST/CRISP. This strong spectral line shows photospheric granulation in the wings and chromospheric fibrilar features in the core. Some chromospheric features are:
• Straws are bright features seen in the core of chromospheric lines (Rutten 2007). They are rooted in facular regions in the photosphere and are much brighter than their surroundings in the chromosphere, showing hedge-like shapes in filtergrams. Fig. 1.5 shows a close view of straws. In the upper panel, photospheric faculae become brighter than their surroundings at mid-photospheric heights. Fibrils seem to originate in these straws, as shown in the lower panel of Fig. 1.5.
• Fibrils, also known as mottles, are elongated dark features that in H I 6563 Å (hereafter Hα) form a grass-like canopy covering internetwork cells at any heliocentric angle. They are also present in Ca II images, but there they only appear around network patches, as shown in Fig. 1.5. They are very dynamic and show transversal motions on time scales of 2 seconds. Hansteen et al. (2006) and Rouppe van der Voort et al. proposed that dynamic fibrils are driven by magneto-acoustic shocks that leak into the chromosphere along magnetic field lines.
• Spicules: limb images of the chromosphere are dominated by spicules. De Pontieu et al. provided a detailed description of spicules and proposed physical mechanisms that could drive them. Spicules appear as thin, long, highly dynamic features, usually reaching heights around 5000 km (see Fig. 1.4). Their width ranges from 700 km down to the diffraction limit of current telescopes (∼ 100 km). Spicules are classified into type I and type II. Type I spicules move up and down on time scales of 3-7 minutes and some of them show transversal motions. Those that do not move transversally show accelerations and trajectories similar to those of dynamic fibrils, suggesting that they are also driven by magneto-acoustic shocks. Type II spicules are very dynamic, show apparent speeds between 50-150 km s⁻¹, and disappear on time scales of 5-20 s. The mechanism driving type II spicules is not well understood, although their rapid disappearance suggests that strong heating could be ionizing the Ca II atoms. Recently, Rouppe van der Voort et al. (2009) found on-disk counterparts of spicules, which produce a clear signature in the blue wing of the H I 6563 and Ca II 8542 lines.
• Filaments are dark, cold clouds of material that, judging from their temperature, belong to the chromosphere. They have typical lengths of 200 000 km and thicknesses of 5000 km (Stix 2002). Towards the limb, filaments are seen as prominences hanging above the chromosphere up to heights of 50 000 km. The only known mechanism that can sustain such cold and dense material is an electromagnetic force. Photospheric observations show that filaments predominantly lie along magnetic neutral lines. Present observational efforts aim at measuring the magnetic field in filaments.
Chromospheric heating
One outstanding question about the chromosphere relates to its energy budget: why is the outer atmosphere of the Sun hotter than the photospheric surface? Semi-empirical, one-dimensional models of the quiet Sun require the average temperature to increase with height above the photosphere in order to reproduce the enhanced chromospheric emission (VAL3, Vernazza et al. 1981). However, observations show that the chromosphere is vigorously active and strongly inhomogeneous. Carlsson & Stein (1995) demonstrated that enhanced emission can be produced by shocks without increasing the mean gas temperature. At present, a connection between waves and chromospheric heating appears widely accepted (Fossum & Carlsson 2005; Cauzzi et al. 2007; Wedemeyer-Böhm et al. 2007; Vecchio et al. 2009); however, the exact role of these waves is still under debate.
Magnetic field configuration
The relatively organized and elongated fibrils seen in chromospheric lines (see Fig. 1.5) suggest the presence of magnetic fields. However, if magnetic fields dominate the chromosphere with β ≪ 1, they should be almost force-free, leading to a relatively smooth magnetic field configuration. The complex fine structure must then be related to the thermodynamics of the plasma (Judge 2006).
The features described in §1.2.1 and their connection with the photosphere have been contextualized by Wedemeyer-Böhm et al. (2009) in the cartoon shown in Fig. 1.6. Magnetic field lines form a canopy where β ∼ 1. Below the magnetic canopy, acoustic waves originating in the photosphere are dissipated, producing short-lived bright features seen in the wings of Ca II images. In this cartoon, fibrils and spicules form the magnetic canopy, which originates from network patches in the photosphere and extends over internetwork regions in the chromosphere.
We investigate the relation between fibrils and magnetic fields in Paper III, looking for observational evidence of their alignment.
Velocity references on solar observations
This chapter describes the inherent difficulties in measuring absolute line-of-sight (LOS) velocities from spectroscopic observations. To illustrate the importance of this problem, we recall the discussion in Chapter 1, where we summarized the main theoretical frameworks that have been proposed to explain the structure and dynamics of sunspots. A key difference between flux-tube models and the field-free gap model is that the latter explains penumbral filaments as convective intrusions where the magnetic field is weak enough not to suppress convection. Thus, observational evidence of convective motions inside penumbral filaments would be key to explaining the origin of their filamentary structure and to choosing among existing models. These measurements are very hard to make because the overshooting convection is expected to be weak in the upper part of the filament. At the same time, the presence of the Evershed flow and the small scales involved make it very hard to establish the existence of overturning convection inside penumbral filaments. For these reasons, a very accurate velocity calibration is needed to make it possible to distinguish between the upflowing and downflowing components of any convection.
The calibration problem
The fundamental question that is addressed here is: what defines the local frame of rest on the Sun? An observer placed on the Sun would measure LOS velocities using the Doppler relation
v_LOS = c (λ − λ₀)/λ₀,   (2.1)
where λ is the observed wavelength, λ₀ is the reference wavelength (usually the laboratory wavelength of the line of interest) and c is the speed of light. However, ground-based observations are also affected by the rotation of the Earth (v_⊕,rot), the radial component of the Earth's orbital motion (v_⊕,orbit), the rotation of the Sun (v_⊙,rot) and the gravitational redshift (v_grav), so the relation is in reality more complex,
c (λ − λ₀)/λ₀ = v_LOS + v_⊕,rot + v_⊕,orbit + v_⊙,rot + v_grav.   (2.2)
Furthermore, the precision of the atomic data limits the accuracy of the conversion from wavelength to velocity, regardless of how accurate the instrument is. This calibration issue becomes more severe when observational practicalities make it difficult to compensate for the last three terms of Eq. 2.2. The gravitational redshift has a theoretical constant value of 633 m s⁻¹ for an observer at the surface of the Earth (Cacciani et al. 2006). In addition, current instruments for solar observations seldom use laboratory light sources as wavelength references. The obvious solution of finding a λ₀ on the Sun itself is confounded by the convective line shifts discussed in Sect. 1.1.1. The magnitude of these shifts is different for different lines.
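A minimal sketch of this bookkeeping is given below: Eq. 2.1 converts an observed line position into an apparent velocity, from which the known contributions of Eq. 2.2 and a model-based convective blueshift are subtracted. All numerical values (wavelengths and velocity corrections) are illustrative assumptions, not calibration values from the thesis.

```python
C = 299_792.458  # speed of light [km/s]

def los_velocity(lambda_obs, lambda_ref):
    """Doppler velocity (km/s) of an observed line position relative to a reference."""
    return C * (lambda_obs - lambda_ref) / lambda_ref

# Illustrative numbers only (not from the thesis):
lambda0 = 6301.501          # adopted laboratory wavelength [Angstrom]
lambda_obs = 6301.520       # measured line-core position [Angstrom]

v_apparent = los_velocity(lambda_obs, lambda0)

# Known contributions to subtract (all illustrative, in km/s):
v_earth_rot = 0.30          # rotation of the Earth projected on the line of sight
v_earth_orbit = -0.15       # radial component of the Earth's orbital motion
v_sun_rot = 0.10            # solar rotation at the observed position
v_grav = 0.633              # gravitational redshift seen from Earth
v_convective = -0.25        # convective blueshift of this line from a 3D HD model

v_true = v_apparent - (v_earth_rot + v_earth_orbit + v_sun_rot + v_grav + v_convective)
print(f"apparent: {v_apparent:+.3f} km/s  ->  calibrated: {v_true:+.3f} km/s")
```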
Below, we summarize some of the methods that have been used to define a velocity reference for spectroscopic observations.
1. Convective motions in sunspot umbrae are suppressed by the presence of strong magnetic fields. Thus, it is common to assume that the umbra is at rest, defining a reference for line-of-sight velocities (e.g., Beckers 1977; Ortiz et al. 2010). However, this assumption usually does not hold higher up in the chromosphere, where umbral flashes associated with shocks (Socas-Navarro et al. 2000a) produce strong blueshifts. This approach also carries the risk of being affected by spurious line shifts produced by molecular blends that only form in the umbra, because it is colder than the surrounding granulation. Eq. 2.1 can be used to convert the wavelength scale to a velocity scale, but in this case λ₀ is the central wavelength of the spectral line measured in the umbra of a sunspot.
2. Telluric lines are sometimes present in the observed spectral range. These lines are formed in the Earth's atmosphere and therefore define a very accurate laboratory frame of rest that can be converted to the solar frame using ephemeris constants, the time of the observations and the solar rotation (Eq. 2.2). The conversion from wavelength to velocity is then calculated using the laboratory wavelength of the line of interest. Martinez Pillet et al. (1997) and Bellot Rubio et al. (2008) used this approach to calibrate their observations.
3. A spectral atlas can be used to calibrate observations, as the effects of the rotation and translation of the Earth have usually been compensated for. Langangen et al. (2007) used the atlas acquired with the Fourier Transform Spectrometer at the McMath-Pierce Telescope (hereafter FTS atlas; Brault & Neckel 1987) to calibrate some of their observations. This atlas was acquired at solar disk centre, so its usability is limited to disk-centre observations.
4. Numerical models can be used to compute the convective shift of a line and use it as a reference for velocities. The advantage of this approach is that the convective shift is measured relative to the assumed laboratory wavelength of the line, so it is insensitive to uncertainties in the atomic data.
Calibration data from hydrodynamic granulation models
In Paper I, we extend the calibration method employed by Langangen et al. (2007) for the C I 5380 line. Snapshots from a 3D hydrodynamical simulation are used to compute synthetic profiles assuming Local Thermodynamic Equilibrium (LTE) (see Fig. 2.1). The convective shift of the spatially-averaged profile is measured from spectra computed with the numerical simulation. Our calculations are performed for eleven selected lines of interest for solar observers (listed in Table 2.1), over a range of heliocentric angles (distances from solar disc centre). The synthetic line profiles are provided in digital form to the community. This method assumes that 3D models can reproduce the correlation between brightness and Doppler shift of granulation in a statistical sense (see Fig. 2.2). The elemental abundance is used as a free parameter to achieve the best possible agreement between the synthetic profiles and the FTS atlas. The estimated parameters should not be regarded as abundances, as they also compensate for uncertainties in the atomic data, the LTE approximation used in the radiative transfer calculations, and other errors. The accuracy of our results is inferred from experiments carried out using the 3D models. From this and from observational tests, we estimate the results to have an accuracy of 50 m s⁻¹ at solar disk centre. Our results have been analyzed using bisectors. The bisector of a spectral line indicates the centre of the profile as a function of intensity. Fig. 2.2 illustrates our results for the Fe I 5576.09 Å line. The line bisectors are shown along with the profiles at different heliocentric angles.
In addition to providing calibration data, Paper I discusses the variations of the bisectors with disk position (µ). This limb effect is found to be mainly caused by the 3D structure of the granulation, while the changing intensity-velocity correlation with height plays a minor role.
Chromospheric diagnostics
It was mentioned in Chapter 1 that chromospheric observations are more difficult than photospheric ones, especially when polarimetric measurements are involved. Spectral lines that are sensitive to the chromospheric range are usually very broad and only the core, where fewer photons are emitted, shows chromospheric features (Cauzzi et al. 2008), as illustrated in Fig. 3.1, where the granulation present in the wings smoothly changes into a chromospheric landscape close to the core of the line. The lack of light, in combination with a broad profile and magnetic fields that are weaker than in the photosphere, conspires to reduce the amplitude of the Stokes Q, U and V profiles.
The obvious solution to this problem would be to increase the exposure time of the observations. However, the chromosphere is vigorously dynamic and long integration times usually translate into image smearing. The evolution time scale in the chromosphere can be estimated using an estimate of the Alfvén speed, v_A ≈ 10⁵ m s⁻¹ (for B = 100 G at z = 1000 km; see page 83 of Priest 1982). In the case of the SST, the diffraction limit at 854.2 nm is 0.″18, which corresponds to about 130 km on the surface of the Sun. Thus, the chromosphere cannot be assumed to be static for times longer than about 1.3 seconds (see van Noort & Rouppe van der Voort 2006). Furthermore, this time scale also limits the spectral coverage of FPI observations, as only one wavelength can be observed at a time. It is normally assumed that the Sun does not change during a full scan of the line; if the spectrum is sampled using a large number of wavelength points, this assumption may not hold. Therefore, observing the chromosphere involves a trade-off between sensitivity, cadence and wavelength coverage.
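The two numbers quoted above follow from simple arithmetic, reproduced in the sketch below: the diffraction limit λ/D converted to a distance on the Sun, divided by the Alfvén speed, gives the time over which the chromosphere can be considered static. The aperture value and the conversion factor of about 725 km per arcsecond are assumptions of this sketch.

```python
RAD_TO_ARCSEC = 206265.0
KM_PER_ARCSEC = 725.0        # approximate image scale at solar disc centre (assumed)

wavelength = 854.2e-9        # observing wavelength [m]
aperture = 0.98              # m; SST clear aperture, value assumed in this sketch

theta = wavelength / aperture                 # diffraction limit ~ lambda/D [rad]
arcsec = theta * RAD_TO_ARCSEC
km_on_sun = arcsec * KM_PER_ARCSEC

v_alfven = 1e5               # m/s, order-of-magnitude value quoted in the text
t_smear = km_on_sun * 1e3 / v_alfven          # time to cross one resolution element

print(f"diffraction limit: {arcsec:.2f} arcsec (~{km_on_sun:.0f} km on the Sun)")
print(f"chromospheric smearing time scale: {t_smear:.1f} s")
```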
Detectability of magnetic fields in the chromosphere
During the past decade, the lines of the Ca II infrared triplet have been extensively used to diagnose the chromosphere (see Langangen et al. 2008; Leenaarts et al. 2009; Cauzzi et al. 2009, and references therein), sometimes including polarization. Observational papers have usually studied cases with relatively strong magnetic fields (Socas-Navarro et al. 2000a; Pietarila et al. 2007a; Judge et al. 2010; de la Cruz Rodríguez et al. 2010), whereas theoretical approaches have been restricted to 1D models (Pietarila et al. 2007b; Manso Sainz & Trujillo Bueno 2010). In Paper II, a snapshot from a realistic simulation of the chromosphere is used for the first time to compute synthetic full Stokes spectra. We use a simplified Ca II model atom that consists of 5 bound levels plus the ionization continuum, as illustrated in Fig. 3.2. The populations of the atom are computed in non-LTE, evaluating the 3D radiation field as in Leenaarts et al. (2009).
This study is partially motivated by the ongoing debate on the instrumentation requirements for observing chromospheric polarization in the quiet Sun using the Ca II infrared triplet lines. We study the combined effect of spectral resolution and noise on our simulated observations of the chromosphere (a minimal numerical sketch of this degradation step is given below), considering the following:
• All the polarization is due to the Zeeman effect. We neglect the Hanle effect, which depolarizes or polarizes the light depending on the scattering geometry and changes the ratio between Q and U (Manso Sainz & Trujillo Bueno 2010).
• The cores of our synthetic profiles are unrealistically narrow, probably because the model is missing small-scale random motions. Conclusions based on these profiles would underestimate the effect of noise and overestimate the effect of instrumental degradation. Thus, we use microturbulence to broaden our profiles to the same width as observed in spatially-resolved profiles. In Fig. 3.3 the spatially-averaged spectra from the 3D simulation, with and without microturbulence, are compared with the FTS atlas (see Brault & Neckel 1987).
• Instrumental degradation is described by a Gaussian point spread function that operates on the spectra. Additive random noise following a Gaussian distribution is introduced after the convolution with the instrumental profile.
Full Stokes monochromatic images computed from the 3D simulation in the Ca II 8542 Å line are shown in Fig. 3.4. The images show many sharp features that are partially produced by Doppler shifts of the line. As the chromospheric core of the synthetic spectra is unrealistically narrow and strong, intensity variations from Doppler shifts are stronger than in reality.
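The degradation step can be sketched as follows: convolve each Stokes spectrum with a Gaussian spectral PSF of a given FWHM and add Gaussian noise of a given rms. The profile, amplitudes and noise level below are invented for illustration and are not the values used in Paper II.

```python
import numpy as np

def degrade(wav, stokes, fwhm, noise_rms, seed=0):
    """Convolve a spectrum with a Gaussian instrumental profile and add noise.

    wav       : evenly spaced wavelength grid
    stokes    : spectrum sampled on wav (e.g. Stokes V / I_c)
    fwhm      : FWHM of the Gaussian spectral PSF, same units as wav
    noise_rms : standard deviation of the additive Gaussian noise
    """
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    kernel = np.exp(-0.5 * ((wav - wav.mean()) / sigma) ** 2)
    kernel /= kernel.sum()
    smeared = np.convolve(stokes, kernel, mode="same")
    rng = np.random.default_rng(seed)
    return smeared + rng.normal(0.0, noise_rms, wav.size)

# Toy antisymmetric Stokes V profile (illustrative amplitude of 1e-3 I_c).
wav = np.linspace(-0.6, 0.6, 241)                       # Angstrom from line centre
v = 1e-3 * (wav / 0.1) * np.exp(-0.5 * (wav / 0.1) ** 2)
v_obs = degrade(wav, v, fwhm=0.1, noise_rms=1e-3)
print(f"peak signal {abs(v).max():.1e}, noise rms 1e-3 -> S/N ~ {abs(v).max() / 1e-3:.1f}")
```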
Our results suggest that current FPI instruments are not sufficiently sensitive to detect circular polarization in the quiet Sun chromosphere using the Ca II 8542 Å line.
Non-LTE inversions from a 3D MHD simulation
Inversion codes have been extensively used to infer physical quantities from spectrometric and spectropolarimetric data. Inversions involve least-squares fits of the parameters of an atmospheric model in order to reproduce the observed profiles (a toy example of such a fitting loop is sketched below). However, it is hard to quantify how accurately inversions can recover chromospheric information, given the assumptions commonly made in the radiative transfer calculations:
• The populations of the atomic levels are computed in non-LTE assuming plane-parallel geometry.
• Optionally, the computation of the populations can be accelerated by neglecting the effect of the velocity field on the emerging intensity. Fewer azimuthal angles are then needed to evaluate the radiation field, and under these conditions the line profile becomes symmetric so that only one half needs to be computed.
• The fitted model is assumed to be in hydrostatic equilibrium to impose consistency between temperature and density.
In Paper II, we use the synthetic observations described in Section 3.1, without microturbulence, to test the Non-LTE Inversion Code based on the Lorien Engine (NICOLE) (Socas-Navarro et al. 2010). The results of the inversion are compared with the quantities from the 3D simulation. The inversions provide a good estimate of the chromospheric average line-of-sight velocity and magnetic field. 3D non-LTE effects could be affecting the temperature, which shows less contrast than in the original model. Fig. 3.5 shows two examples of fitted profiles from different pixels. The left panel corresponds to a good fit of the line, whereas the right panel shows a poor fit to the observed profiles. These failures give rise to the inversion noise that is mentioned in Paper II.
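The toy sketch below only illustrates the least-squares fitting loop that an inversion is built around: two free parameters of a trivial Gaussian "model" are fitted to a noisy synthetic profile with scipy. It does not represent NICOLE's model atmosphere, node parameterization or radiative transfer.

```python
import numpy as np
from scipy.optimize import least_squares

wav = np.linspace(-0.4, 0.4, 161)   # Angstrom from line centre

def model(params, wav):
    """Toy 'forward synthesis': a Gaussian absorption line with free shift and depth."""
    shift, depth = params
    return 1.0 - depth * np.exp(-0.5 * ((wav - shift) / 0.08) ** 2)

# Synthetic 'observation' with noise (true shift +0.02 A, true depth 0.6).
rng = np.random.default_rng(1)
obs = model([0.02, 0.6], wav) + rng.normal(0.0, 0.01, wav.size)

# Least-squares fit of the two free parameters to the noisy profile.
fit = least_squares(lambda p: model(p, wav) - obs, x0=[0.0, 0.5])
print("fitted shift and depth:", fit.x)
```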
Magnetic fields in chromospheric fibrils
It is widely assumed that fibrils outline chromospheric magnetic fields. Fibrils usually appear around facular regions in Ca II 8542 filtergrams (Rutten 2007), supporting the connection between fibrils and magnetic fields. Our goal is to find direct observational evidence of the alignment between magnetic fields and fibrils. We use two full Stokes datasets, acquired at different telescopes with instruments of different types, to measure the orientation of the magnetic field in superpenumbral fibrils. The first dataset was acquired with SPINOR (Socas-Navarro et al. 2006) at the Dunn Solar Telescope (DST), a slit-based instrument that allows a large wavelength coverage at a spatial resolution of 0.″6. The second dataset was acquired with SST/CRISP at very high cadence, achieving a spatial resolution of 0.″18 but with a limited spectral coverage (see Fig. 3.6). In these datasets, the Stokes Q and U spectra are integrated along the length of the fibrils in order to improve the S/N ratio. The azimuthal direction of the magnetic field (χ) is calculated from the ratio between Stokes Q and U.
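In the Zeeman-only picture, the azimuth of the transverse field follows from the integrated linear-polarization signals as χ = ½ arctan(U/Q), defined modulo 180° (and subject to the usual ambiguities). The sketch below applies this relation to invented, fibril-integrated Q and U values; it is a simplification of the analysis in Paper III, not its actual pipeline.

```python
import numpy as np

def field_azimuth(q, u):
    """Magnetic-field azimuth (degrees, modulo 180) from integrated Stokes Q and U."""
    chi = 0.5 * np.degrees(np.arctan2(u, q))
    return chi % 180.0

# Illustrative wavelength-integrated Q and U along two fibrils (units of I_c).
print(field_azimuth(q=2.0e-4, u=1.5e-4))    # fibril 1
print(field_azimuth(q=-1.0e-4, u=3.0e-4))   # fibril 2
```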
Our measurements suggest that fibrils are mostly oriented along the magnetic field direction; however, we find evidence of misalignment in some cases. This is surprising, interesting and hard to explain. Judge (2006) proposed that if β < 1 in the chromosphere, then the magnetic fields should be almost force-free and show smooth spatial variations. The fine structure seen in chromospheric observations should then primarily be produced by the thermodynamic properties of the gas. Our results could be compatible with this scenario.
Data collection and processing
Some of the techniques that are described in this chapter are partially covered in Paper IV. However, the backscatter problem (see §4.3) and the telescope polarization model ( §4.4) have not yet been described in separate publications, but are planned to appear in a forthcoming paper (de la Cruz Rodríguez et al. 2011).
The SST and CRISP
The data presented in Paper III and Paper IV were acquired with the Swedish 1-m Solar Telescope (SST) (Scharmer et al. 2003), located on the island of La Palma. Our narrow-band data are acquired with the CRisp Imaging Spectropolarimeter (CRISP, Scharmer 2006), which is based on a Fabry-Pérot interferometer that allows narrow-band observations at very high spatial resolution and cadence, providing spectral information at the same time. Atmospheric turbulence is compensated for with adaptive optics (AO) in order to improve the image quality. CRISP is mounted in the red beam of the SST (see Fig. 4.1).
The light that has been corrected by the AO system passes through the chopper and the pre-filter. Part of the light is reflected to the wideband camera. The other part is modulated with liquid crystals, producing linear combinations of the four Stokes parameters. Afterwards, the light beam passes through CRISP. The p and s polarizations are separated by a beam splitter into two beams that are recorded with separate cameras.
Flat-fielding the data
Science data taken with a CCD camera can be corrected for pixel-to-pixel inhomogeneities in the response of the camera. Normally, if the CCD is illuminated with a flat and homogeneous light source, intensity variations are mostly produced by pixel-to-pixel sensitivity variations, dirt and fringes. Thus, flat-field calibration images can be acquired to correct for these intensity variations. A particular problem arises from the presence of time-dependent telescope polarization in the data. Normally, the flat-field images are taken at a different time than the science data, so the amount of polarization introduced by the telescope can differ significantly between science and flat-field data. In principle this would not be a problem if our cameras could detect the Stokes parameters directly. Instead, four linear combinations of Stokes I, Q, U and V are acquired. If seeing were not present, the demodulation of the data could be carried out directly. However, in our case image restoration needs to be performed in order to remove residual effects of seeing not fully compensated for by the AO. As the image reconstruction is done with modulated data, artifacts appear if the flats and the science data are not taken at similar times.
Additionally, flat-fielding narrow-band images is more complicated than flat-fielding wideband images. In the case of CRISP, inhomogeneities on the surfaces of the FPI etalons produce field-dependent wavelength shifts of the transmission profile of the instrument. These are called cavity errors because they correspond to variations in the FPI cavities that define the transmitted wavelength. The combination of cavity errors and the presence of a spectral line produces field-dependent intensity variations introduced purely by the slope of the spectral line. At the same time, variations in the reflectivity of the etalons across the field-of-view translate into small variations of the width of the instrumental profile, and therefore also into overall transmission variations. This effect is much smaller than the intensity fluctuations produced by cavity errors. Fig. 4.2 illustrates these two effects on the Fe I 6302 Å line.
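The cavity-error effect can be pictured as each pixel sampling the spectral line with a transmission profile shifted by its local cavity error, which changes the recorded flat-field intensity wherever the line has a slope. The sketch below demonstrates this with an invented line shape, transmission profile and cavity-error map; it is not the flat-field model actually fitted in Paper IV.

```python
import numpy as np

wav = np.linspace(-1.0, 1.0, 401)                       # Angstrom around line centre
line = 1.0 - 0.7 * np.exp(-0.5 * (wav / 0.12) ** 2)     # illustrative solar line

def transmission(wav, centre, fwhm=0.06):
    """Idealised (Gaussian) FPI transmission profile tuned to 'centre'."""
    sigma = fwhm / 2.355
    t = np.exp(-0.5 * ((wav - centre) / sigma) ** 2)
    return t / t.sum()

tuning = -0.10                                          # instrument tuned to the blue wing
cavity_error = np.random.default_rng(2).normal(0.0, 0.02, (4, 4))  # A, per 'pixel'

# Flat-field intensity recorded by each pixel: the line sampled with the
# locally shifted transmission profile. The spread mimics the pattern in Fig. 4.2.
flat = np.array([[np.sum(line * transmission(wav, tuning + dc))
                  for dc in row] for row in cavity_error])
print(flat.round(4))
```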
In Paper IV we address the flat-fielding problems produced by time varying telescope polarization and by cavity/reflectivity errors. We propose a method to flat-field polarimetric data affected by telescope polarization. A numerical scheme is used to model and remove the fingerprints of the spectral line from our flat-field data.
The backscatter problem
The content of this section only applies to observations carried out at infrared wavelengths, and it was motivated by our first observations in Ca II 8542 Å. The methods described here were developed together with Michiel van Noort and they are the main result of my first campaign with CRISP in 2008.
The problem
The CRISP acquisition system includes three back-illuminated CCD cameras (Sarnoff) that can record 35 frames per second. The quantum efficiency of these cameras decreases towards long wavelengths, and above 700 nm the CCD becomes semi-transparent, letting part of the light pass through. Furthermore, the images show a circuit-like pattern that cannot be removed by traditional dark-field and flat-field corrections, as illustrated in Fig. 4.3. Examination of pinhole data and flat-fielded data shows that:
1. There is a diffuse additive stray-light contribution over the whole image.
2. The stray-light contribution is much smaller in the circuit pattern and the gain appears to be enhanced there.
The model
These observations can be explained by a semi-transparent CCD with a diffusive medium behind it, combined with an electronic circuit located right behind the CCD that is partially reflecting and therefore also less transparent to the scattered light. Fig. 4.4 represents the simplified structure of the camera. Under normal conditions an image recorded with a CCD, I_o, can be described in terms of the dark field D, the gain factor G_f and the real image I_r:
I_o = D + G_f I_r.   (4.1)
In the infrared, however, we need a more complicated model,
I_o = D + f G_f I_r + G_b [((1 − f) G_f I_r G_b) ∗ P],   (4.2)
where f represents the overall fraction of light absorbed by the CCD and G_b is the gain for light illuminating the CCD from the back, which accounts for the electronic circuit pattern. In the following, we refer to G_b as the back-gain. P is a point spread function (PSF) that describes the scattering properties of the diffusive screen. In the backscatter term we assume that (1 − f) G_f I_r G_b is transmitted to the diffusive medium, where it is scattered; a fraction of the scattered light returns to the CCD, passing again through the circuit. The cartoon in Fig. 4.4 shows a schematic representation of the structure of the camera and the path followed by the light beam.
The numerical approach
In order to obtain the real intensity I_r, the PSF P, the back-gain G_b and the front-gain G_f must be known. The transparency factor f is assumed to be smooth, because the properties of the diffusive screen appear to be homogeneous across the field-of-view. This allows us to absorb f into the front- and back-gain factors, so that Eq. 4.2 becomes
I_o = D + G_f I_r + G_b [(G_b G_f I_r) ∗ P].   (4.3)
This problem is linear and invertible, but a direct inversion would be expensive given the dimensions of the problem, so we solve it iteratively. We define
Ĵ ≡ (G_b G_f I_r) ∗ P.
In the first iteration, we initialize Ĵ by assuming that the smearing caused by P is so large that the convolved term can be approximated by the average value of the observed image multiplied by the back-gain. The back-gain G_b is assumed to be 1 for every pixel in the first iteration. Furthermore, we assume that the product G_f I_r can be estimated from Eq. 4.1 by ignoring back-scattering, i.e., G_f I_r ≈ I_o − D.
These values are of the same order of magnitude as the final solution, and therefore correspond to a reasonable choice of initialization.
The initial guess of the PSF P is an angular average obtained from a pinhole image. The small diameter of the pinhole only allows an accurate estimate of the central part of the PSF, so the wings of the initial guess are extrapolated using a power law. The estimate of G_f I_r is then
G_f I_r ≈ I_o − D − G_b Ĵ,
which can be used to compute a new estimate of Ĵ. This procedure is iterated until G_f I_r and I_o are consistent. The resulting Ĵ is also used to improve our estimate of G_b, by applying the same procedure to calibration images in which parts of the field are physically masked (I_r = 0), as in Fig. 4.5. In the masked parts, where I_r ≡ 0, Eq. 4.3 reduces to I_o − D = G_b Ĵ, so the back-gain can be computed directly:
G_b = (I_o − D) / Ĵ.   (4.8)
Thus, every pixel must have been covered by the mask at least once in a calibration image in order to allow the calculation of the back-gain. With the new G_b we can recompute a new estimate of G_f I_r.
We now need to specify a measure that describes how well the data are fitted by the estimate of P and the corresponding back-gain G_b(P). Since both G_f I_r and G_b are computed based on self-consistency, we only need to fit the parameters of the PSF iteratively. We assume that the PSF is circularly symmetric and apply corrections to the PSF at node points placed along the radius.
We use Brent's Method described by Press et al. (2002) to minimize our fitness function. This algorithm does not require the computation of derivatives with respect to the free parameters of the problem. When the opaque bars block a region of the CCD, an estimate of the back gain can be calculated for a given PSF according to Eq. 4.8. In our calibration data, the four bars of width L are displaced 0.5 L from one image to the next (see Fig. 4.5). This overlapping provides two different measurements of the back gain on each region of the CCD. However, as the location of the bars changes on each image, the scattered light contribution is different for each of these measurements of the back gain. Our fitness function minimizes the difference between these two measurements of the back gain.
Having thus obtained the PSF P and the back-gain G_b, the front-gain G_f is determined by recording conventional flats and assuming that I_r is constant, using Eq. 4.9,
G_f I_r = I_o − D − G_b [(G_b G_f I_r) ∗ P].   (4.9)
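The fixed-point iteration described above can be written compactly: given the current estimate of G_f I_r, compute Ĵ = (G_b G_f I_r) ∗ P and update G_f I_r = I_o − D − G_b Ĵ, repeating until the correction stabilises. The sketch below runs that loop on synthetic arrays with an invented Gaussian PSF; it is a schematic of the scheme, not the MOMFBD implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def descatter(i_obs, dark, g_back, psf_sigma, n_iter=20):
    """Iteratively remove back-scattered light: I_o = D + Gf*Ir + Gb*[(Gb*Gf*Ir) (x) P]."""
    gf_ir = i_obs - dark                                     # first guess: no back-scatter
    for _ in range(n_iter):
        j_hat = gaussian_filter(g_back * gf_ir, psf_sigma)   # (Gb * Gf * Ir) convolved with P
        gf_ir = i_obs - dark - g_back * j_hat                # updated front-illuminated term
    return gf_ir

# Synthetic test: forward-model an image with back-scatter, then recover it.
rng = np.random.default_rng(3)
truth = 1000.0 + 50.0 * rng.standard_normal((64, 64))        # Gf*Ir, arbitrary counts
dark = 10.0 * np.ones_like(truth)
g_back = 0.3 * np.ones_like(truth)                           # uniform back-gain, assumed
i_obs = dark + truth + g_back * gaussian_filter(g_back * truth, 5.0)

recovered = descatter(i_obs, dark, g_back, psf_sigma=5.0)
print("max error after correction:", np.abs(recovered - truth).max())
```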
Results
The numerical scheme described in §4.3 produces the back-gain (G_b) and an approximate PSF (P) that describe the scattering problem. The results are illustrated in Fig. 4.6. The back-gain image shows a background with vertical dark areas in the extended gaps where the circuit pattern is not present. Those hollows probably indicate that the PSF is not perfect, an expected result given the assumptions imposed on its shape: it is constant over the entire field of view and only radial variations are allowed. The PSF has extended wings that complicate the convergence of the solution, given the nature of our calibration data: the spacing between the bars in the horizontal direction limits how well such extended wings can be constrained. The flat-fielding method described in the previous sections has been implemented in the image reconstruction code MOMFBD (van Noort et al. 2005) and is only used when a back-gain and a PSF are provided. Eq. 4.9 is reordered to correct the science data.
Since G_f I_r appears on both sides of the equation, a few iterations are needed to estimate the term G_b[(G_b G_f I_r) ∗ P], which is computationally expensive when thousands of images are flat-fielded. In Fig. 4.3 we show a frame that has been flat-fielded in the traditional way (left panel) and the same image flat-fielded according to Eq. 4.9. The method described in the present work removes the electronic circuit pattern from the images. The assumptions imposed on the PSF allow us to estimate the back-gain only with limited accuracy, as is evident from Fig. 4.7. The remaining uncertainties in the PSF and the back-gain are likely to affect the contrast of the corrected images.
Telescope polarization model at 854.2 nm
The turret of the SST contains optical elements that polarize the incoming light. Selbing (2005) studied the polarizing properties of the telescope at 630.2 nm and proposed a theoretical model to characterize its temporal variation. Calibration images have been taken using a 1-m polarizer mounted at the entrance lens of the SST (see Fig. 4.8). The polarizer rotates 360° in steps of 5° and several frames are acquired at each polarizer angle. Data were acquired during the whole day. These data were used to determine the parameters of the model proposed by Selbing (2005) at 854.2 nm.
Telescope model
Each polarizing optical element of the telescope is represented by a Mueller matrix. The Mueller matrices of mirrors are denoted M and have two free parameters. Assuming a wave that propagates along the z axis and oscillates in the x-y plane, the parameters are the de-attenuation term (R) between the x and y components of the electromagnetic wave and the phase retardance (δ) produced by the mirror (see Selbing 2005). This form of M assumes that Q is perpendicular to the plane of incidence. In the more general case, M is combined with rotation matrices that transform to a coordinate frame rotated by an angle α.
L represents the entrance lens. Its matrix has the form of a composite of random retarders, so it modulates the light without (de)polarizing it.
The reference for Q is aligned with the 1m linear polarizer axis and the values of the matrix are measured in that frame.
The model is built using the Mueller matrix of each polarizing element in the telescope (Eq. 4.10), as a function of the azimuth (ϕ) and elevation (θ ) angles of the Sun at the time of the observation. We have included the conversion factor from degrees to radians in the matrices, thus all the angles are given in degrees.
The free parameters entering the telescope model are:
• c₀, c₁, c₂, c₃ and c₄, the parameters of the entrance lens;
• c₅ and c₆, the de-attenuation and phase difference of the azimuth mirror;
• c₇ and c₈, the de-attenuation and phase difference of the elevation mirror;
• c₉ and c₁₀, the de-attenuation and phase difference of the Schupmann mirror;
• c₁₁, the angle of the field mirror.
We used a Levenberg-Marquardt algorithm (see Press et al. 2002) to fit the 12 parameters (c) of the model to the calibration data. The orthogonality of the 1-m polarizer states is maximal every 45°, so only four polarizer angles are used in our fitting routine: 0°, 45°, 90° and 135°. The quality of the fit does not improve substantially by including data from other angles. The derived parameters of the Mueller matrix of the telescope are given in Table 4.1. Unfortunately, we have not been able to analyze the errors due to time constraints.
The quality of the calibration data is limited by the quality of the 1-m linear polarizer. The problem is posed in such a way that we cannot measure the extinction ratio of the polarizer and the parameters of the lens at the same time. In our case, the polarizer has a significant leak of unpolarized light at 854.2 nm. Using small samples of the sheets used to construct the 1-m polarizer, we have estimated its extinction ratio to be approximately 0.4, and the parameters of the lens have been determined assuming that value. Fig. 4.9 shows the time dependence of the Mueller matrix of the telescope over a whole day. The largest changes occur when the Sun is close to the zenith, because of the rapid movement of the telescope. The linear-polarization reference is defined by the first mirror after the lens; however, this is not very useful in practice because the turret introduces an image rotation that varies during the day. Instead, we use solar north as the reference for positive Q by applying an extra rotation to M_tel. The rotation is produced by reflections inside the turret and by the daily variation of the angle between the first mirror after the entrance lens and solar north. This angle (β) is computed by the telescope software every 30 seconds. Eq. 4.11 transforms the reference of Q and U to the solar north-south axis.
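Applying such a reference-frame change amounts to rotating Q and U by twice the frame angle, pixel by pixel. A minimal sketch, assuming the rotation angle has already been derived from the telescope pointing, is given below.

```python
import numpy as np

def rotate_qu(q, u, angle_deg):
    """Rotate the linear-polarization reference frame by angle_deg (Q and U mix as 2*angle)."""
    a = np.radians(2.0 * angle_deg)
    q_new = q * np.cos(a) + u * np.sin(a)
    u_new = -q * np.sin(a) + u * np.cos(a)
    return q_new, u_new

# Illustrative values: per-pixel Q and U after telescope demodulation,
# rotated so that +Q points towards solar north (rotation angle assumed known).
q, u = 1.0e-3, -0.5e-3
print(rotate_qu(q, u, angle_deg=27.0))
```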
The only remaining question is the location of solar north in our science images. At this point, Stokes Q and U are referred to the solar north-south axis, but there is no coupling between the polarization calibration and the image orientation. The angle between solar north and the horizontal direction on the optical table is a function of the azimuth ϕ, the elevation θ, the table constant TC and the tilt angle β between the first mirror in the telescope and solar north. The table constant depends on the orientation of the optical table and is +48° for the current setup. Figs. 4.10 and 4.11 show monochromatic Stokes I, Q, U and V images acquired in Ca II 8542 Å. The dataset has been restored using the de-scattering scheme from §4.3 and the telescope model presented in the current section. This dataset is used in Paper III to measure the alignment between fibrils and the magnetic field.
Summary of papers
Paper I: Solar velocity references from 3D HD photospheric models The aim of this paper is to help observers to accurately calibrate line-of-sight velocities. We use a 3D hydrodynamic simulation of the solar photosphere to compute spatially-averaged spectra that can be used as absolute velocity references. The line profiles are computed at different heliocentric angles, from disk center towards the limb.
Our synthetic profiles are compared with observational data, and several experiments are carried out to estimate the accuracy of our method, which has an estimated error of approximately ±50 m s⁻¹ at disk centre. Our tests suggest that the variation of the bisectors towards the limb is mostly produced by the 3D topology of the photosphere.
In Paper I, I carried out all the calculations and prepared all figures. The collaborators contributed to the scientific discussion and assisted in the writing.
Paper II: Non-LTE inversions from a 3D MHD chromospheric model
In Paper II, we create synthetic full-Stokes observations in Ca II 8542 Å from a snapshot of a realistic 3D simulation of the solar atmosphere. These observations are used to estimate the amplitude of the Stokes profiles in quiet Sun. We discuss the effect that spectral degradation and noise have on our observations and discuss possible requirements for future instrumentation.
In the second part of the paper, we use our synthetic observations to test our non-LTE inversion code. The fitted model is compared with the quantities from the 3D snapshot. We conclude that the inversion code is able to estimate the average chromospheric values of the magnetic field and the line-of-sight velocity. 3D non-LTE effects seem to affect the fitted temperature, which in general shows less contrast than the original model.
My contribution to Paper II was to compute the full-Stokes simulated observations using the population densities provided by J. Leenaarts, and to carry out the non-LTE inversions of the data. For that purpose I wrote an improved parallel version of NICOLE using MPI and a master-slave scheme. I prepared the main structure of the text in the paper and created all the figures.
Paper III: Are solar chromospheric fibrils tracing the magnetic field?
The aim of this letter is to obtain observational evidence that confirms the alignment between chromospheric fibrils and the magnetic field. We use two datasets acquired with SST/CRISP and DST/SPINOR to measure the orientation of the magnetic field along chromospheric fibrils. We find that many fibrils are aligned with the magnetic field; however, in both datasets there is evidence of misalignment in some cases.
For this paper I provided a restored dataset from CRISP, compensated for telescope polarization and with a calibrated reference for Stokes Q and U. My co-author prepared the SPINOR data and contributed to the scientific discussion and the writing.
Paper IV: Stokes imaging polarimetry using image restoration at the Swedish 1-m Solar Telescope II: A calibration strategy for Fabry-Pérot based instruments
The image restoration step that is applied to our data decouples the one-to-one relation between the pixels of the CCD. When image reconstruction is applied to data showing sharp spatial variations produced by the instrumentation, artifacts can appear. We propose a flat-fielding scheme for polarimetric data acquired with CRISP. We discuss the effect of the polarization introduced by the telescope and the optical setup on our flat-field data. In order to correct for spurious intensity fluctuations from cavity errors and reflectivity errors, we use a numerical framework that allows us to model the spectral line on each pixel of the CCD.
My contribution to this paper was to implement the numerical scheme that is used to model the flat fields and remove the intensity fluctuations produced by cavity errors and reflectivity errors. I contributed to the scientific discussion and prepared some of the figures in the paper.
Acknowledgements
I gratefully acknowledge the Institute for Solar Physics of the Royal Swedish Academy of Sciences and the USO-SP Graduate School for Solar Physics for giving me a position in science in a prosperous academic environment. I especially value the scientific discussions with my collaborators Hector Socas-Navarro, Michiel van Noort and Roald Schnerr from whom I learned so much. I have benefited from the assistance and advice provided by my supervisors Dan Kiselman, Göran Scharmer and Mats Carlsson. I also appreciate the help from Mats Löfdahl who kindly commented on the manuscript of my thesis.
My thanks to Tiago Pereira and to the Institute of Theoretical Astrophysics in Oslo for providing the 3D simulations used in my research. I especially enjoyed being in contact with Luc Rouppe van der Voort, who shared observational data and also his knowledge with me. I also acknowledge Pit Sütterlin and Rolf Kever, who have been of great help at the SST on La Palma. Here in Stockholm it has been great to share the office (and pubs) with Vasco Henriques.
I gratefully acknowledge the financial support of the European Commission during the first three years of my PhD. Many thanks to my institute for extending my financial support during the last months of my research.
Human mass balance, pharmacokinetics and metabolism of rovatirelin and identification of its metabolic enzymes in vitro.
Abstract
The mass balance, pharmacokinetics and metabolism of rovatirelin were characterised in healthy male subjects after a single oral dose of [14C]rovatirelin. [14C]Rovatirelin was steadily absorbed, and the peak concentrations of radioactivity and rovatirelin were observed in plasma at 5–6 h after administration. The AUCinf of radioactivity was 4.9-fold greater than that of rovatirelin. Rovatirelin and its metabolite (thiazoylalanyl)methylpyrrolidine (TAMP) circulated in plasma as the major components. The total radioactivity recovered in urine and faeces was 89.0% of the administered dose. The principal route of elimination was excretion into faeces (50.1% of the dose), and urinary excretion was the secondary route (36.8%). Rovatirelin was extensively metabolised to 20 metabolites, and TAMP was identified as the major metabolite in plasma and excreta among its metabolites. To identify the metabolic enzymes responsible for TAMP formation, the in vitro activity was determined in human liver microsomes. The enzymatic activity depended on NADPH, and it was inhibited by ketoconazole. Furthermore, recombinant human cytochrome P450 (CYP) 3A4 and CYP3A5 displayed enzymatic activity in the assay. Therefore, CYP3A4/5 are the most important enzymes responsible for TAMP formation.
Introduction
Thyrotropin-releasing hormone (TRH) was originally isolated from the hypothalamus (Guillemin, 1978; Schally, 1978), and TRH is distributed throughout the central nervous system (Morley, 1979). TRH binds to the G protein-coupled TRH receptor (TRHR) (Yamada et al., 1993). The endocrine functions of TRH include controlling the levels of thyrotropin (thyroid-stimulating hormone, TSH) and prolactin. In addition to these endocrine functions, TRH also possesses neuropharmacological functions, including cerebral nerve activation (stimulation of motor function), effects on spinal function (stimulation of spinal motor neurons) and effects on the central nervous system (antidepressant activity) (Daimon et al., 2013; Khomane et al., 2011). Spinocerebellar degeneration (SCD) is a degenerative disease caused by the slow degeneration of the brainstem and cerebellum, and its major symptom is ataxia. TRH has been clinically investigated for the treatment of SCD (Sobue et al., 1983), and TRH agents, such as protirelin (synthetic TRH) and taltirelin (a TRH analogue), are currently prescribed to patients with SCD in Japan. However, protirelin is administered via intravenous or intramuscular injection because of its short plasma half-life (4-5 min), poor metabolic stability and low bioavailability (Bassiri & Utiger, 1973; Griffiths, 1976; Khomane et al., 2011; Kinoshita et al., 1998). These disadvantages are not as severe for taltirelin, and the drug is approved for twice-daily oral administration (Kinoshita et al., 1998).
Rovatirelin (1-[N-[(4S,5S)-(5-methyl-2-oxooxazolidin-4-yl)carbonyl]-3-(thiazol-4-yl)-L-alanyl]-(2R)-2-methylpyrrolidine) hydrate has also been tested in clinical studies as an oral TRH agent for the treatment of SCD. In phase I studies in healthy adult males, rovatirelin exhibited linear pharmacokinetics over a single-ascending-dose range (0.1 to 10 mg) and a benign safety profile supportive of once-daily oral administration (Shimizu et al., 2018). Based on the results of phase II and III studies evaluating the efficacy and safety of rovatirelin in SCD patients, a daily dose of 1.6 to 3.2 mg has been considered the dosage level intended for clinical use as once-daily oral administration (Shimizu et al., 2018). Rovatirelin exhibited greater affinity for TRHR (Ki = 702 nmol/L) than taltirelin (Ki = 3880 nmol/L) in receptor binding studies using the membrane fraction of human TRHR-expressing cells, and rovatirelin increased noradrenaline levels in the medial prefrontal cortex and locomotor activity in rats compared with the effects of taltirelin (Ijiro et al., 2015). Pre-clinical pharmacokinetic studies in rats and dogs indicated that rovatirelin was absorbed rapidly, with absolute bioavailabilities of 7.3% in rats and 41.3% in dogs after oral administration, and orally administered [14C]rovatirelin was predominantly excreted in faeces (Kobayashi et al., 2019). In addition, the ability of rovatirelin to cross the blood-brain barrier was relatively high in rats. Furthermore, rovatirelin was stable in rat plasma and brain homogenates. In vitro metabolite profile studies illustrated that TAMP was the major metabolite of rovatirelin in human hepatocytes (Kobayashi et al., 2019).
The objectives of this study were to (1) characterise the pharmacokinetic profiles of rovatirelin after a single oral dose of [14C]rovatirelin in humans, (2) investigate the pharmacokinetic fate of rovatirelin in humans, (3) evaluate the amounts of predominant rovatirelin metabolites formed in humans and (4) identify the enzymes responsible for producing a major metabolite of rovatirelin.
Study design
This study (Kissei protocol number: KPS1105) was an open-label, single-oral-dose study involving six healthy Caucasian male subjects aged 36-53 years (mean, 45 years) with body weights of 65.2-93.1 kg (mean, 78.53 kg) and body mass indices (BMI) of 23.1-27.9 kg/m2 (mean, 25.53 kg/m2). The clinical phase of this study was conducted at Covance Clinical Research Unit (CCRU, Leeds, UK) in accordance with good clinical practice guidelines and ethical principles originating from the Declaration of Helsinki, International Conference on Harmonisation (ICH) guidelines and applicable laws and regulations. Before starting the study, the protocol and consent form were reviewed and approved by the Research Ethics Committee (REC). The study commenced after receiving a Clinical Trials Authorisation from the Medicines and Healthcare products Regulatory Agency and REC approval. Permission to perform the study was also granted by the Administration of Radioactive Substances Advisory Committee.
For this study, healthy Caucasian men between 35 and 55 years of age and with a BMI between 18.5 and 30 kg/m2 were eligible for enrolment; the key inclusion and exclusion criteria are described in Supplementary Table 1. Eligible subjects were in good health as determined by a medical history review, physical examination, 12-lead electrocardiogram (ECG) and clinical laboratory evaluations.
The subjects were administered [14C]rovatirelin oral solution (50 mL) in the fasted state on the morning of day 1. The dosing vessel was then rinsed twice (2 × 100 mL), and both rinses were ingested by the subject, resulting in a total fluid intake at dosing of 250 mL. Each subject participated in only one study period and remained resident in the CCRU from day −1 (the day before dosing) until at least day 11 (240 h after administration), with the possibility of extension to in-house visits on days 11-15 (336 h after administration) and outpatient visits on days 21 (±1 day) and 28 (±1 day) at the CCRU. On day 11, subjects were discharged from the CCRU if no radioactivity was detected in two consecutive plasma samples, <1% of the total radioactive dose was recovered in excreta (urine and faeces) in two consecutive 24-h collection periods and there were no medical reasons for a longer stay. All subjects completed the study, but one subject vomited approximately 50% of his dose within 6 h after administration. His data have been excluded from the calculation of the mean and standard deviation and from the ranges quoted in the results. All subjects were given a high-fibre diet whilst resident in the unit, and meals were provided as appropriate at all other times. Grapefruit, grapefruit juice, Seville oranges, Seville orange marmalade, other products containing grapefruit/Seville oranges, poppy seeds and food containing poppy seeds were not permitted during the 7 days prior to dosing until final discharge from the study, and caffeine-containing foods/beverages were not permitted from 48 h prior to dosing until final discharge from the study. Alcohol and smoking were not permitted on this study. Subjects were also requested not to undertake vigorous exercise from 7 days prior to dosing until final discharge from the CCRU.
Safety assessment
Safety was assessed throughout the study via clinical laboratory evaluations, body weight measurements, physical examinations, 12-lead ECG, vital sign assessments and monitoring for adverse events.
Plasma, urinary and faecal samples for metabolite profiling were selected after confirming the total radioactivity in plasma, urine and faeces, respectively.
Analysis of total radioactivity in samples
Aliquots of plasma (up to 800 µL) and urine (up to 1 mL, weighed aliquots) were added directly to Emulsifier-Safe scintillation fluid prior to liquid scintillation counting. Aliquots of blood (400 µL) samples and faecal homogenate samples (0.2 to 0.5 g) were combusted using a Packard sample oxidiser System 307 (PerkinElmer Life and Analytical Sciences). The resulting 14CO2 was trapped in Carbo-sorb E and then mixed with Permafluor E+. The efficiency of oxidation was determined via combustion of quality control standards. All of the samples in the scintillation fluid were counted using a TriCarb scintillation counter (2100TR, 2900TR or 3100TR, PerkinElmer Life and Analytical Sciences).
Determination of rovatirelin and TAMP in plasma and urine
Rovatirelin and TAMP in plasma (100 µL) or urine (50 µL) were extracted using an OASIS HLB µElution Plate (Waters Corporation, Milford, MA, USA). The concentrations of rovatirelin and TAMP were determined by liquid chromatography with tandem mass spectrometry (LC-MS/MS) with electrospray ionisation in the positive ion mode, which consisted of an API5000 (for plasma) or API4000 (for urine) instrument (AB Sciex Pte. Ltd., Framingham, MA, USA) and Acquity HPLC systems (Waters Corporation). Chromatographic separation was achieved using a Hypersil GOLD PFP column (100 mm × 3 mm i.d., Thermo Fisher Scientific Inc., Waltham, MA, USA) maintained at 50 °C (for plasma) or 40 °C (for urine) by employing two solvents as the mobile phase, namely solvent A (10 mmol/L ammonium formate containing 0.2% formic acid) and solvent B (acetonitrile containing 0.2% formic acid).
Pharmacokinetic analysis
Pharmacokinetic parameters were calculated from the individual subject data by non-compartmental methods using WinNonlin Enterprise Version 5.2 (Pharsight Corporation, Mountain View, CA, USA). The area under the concentration-time curve (AUC) was calculated by the linear trapezoidal method, and the AUC from time zero to infinity (AUCinf) was estimated via extrapolation of the terminal phase. The elimination half-life (t1/2) was calculated from the slope of the linear regression of the terminal portion of the log-transformed plasma concentration-time curve (kz). The maximum concentration (Cmax) and time at the maximum concentration (tmax) were used as observed values. Oral clearance (CL/F) was calculated by dividing the dose by AUCinf. The volume of distribution during the terminal phase (Vz/F) was calculated by dividing CL/F by kz.
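As an illustration only, the non-compartmental calculations described above can be sketched in Python; the routine below is not the WinNonlin implementation used in the study, and the example profile, dose and number of terminal points are hypothetical.

import numpy as np

def nca_parameters(t, c, dose, n_terminal=3):
    """Basic non-compartmental estimates from a plasma concentration-time profile.
    t: sampling times (h), c: concentrations (ng/mL), dose: administered dose (ng).
    n_terminal: number of terminal points used for the log-linear regression."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    cmax = c.max()
    tmax = t[c.argmax()]
    # AUC(0-tlast) by the linear trapezoidal rule
    auc_last = np.trapz(c, t)
    # terminal slope (kz) from log-linear regression of the last points
    slope, _ = np.polyfit(t[-n_terminal:], np.log(c[-n_terminal:]), 1)
    kz = -slope
    t_half = np.log(2) / kz
    # extrapolate to infinity and derive CL/F and Vz/F
    auc_inf = auc_last + c[-1] / kz
    cl_f = dose / auc_inf
    vz_f = cl_f / kz
    return dict(Cmax=cmax, tmax=tmax, AUCinf=auc_inf,
                t_half=t_half, CL_F=cl_f, Vz_F=vz_f)

# hypothetical example profile (values are illustrative, not study data)
print(nca_parameters([1, 3, 5, 8, 12, 24, 36],
                     [2.1, 5.6, 7.9, 6.8, 4.9, 2.0, 1.1],
                     dose=3.2e6))   # 3.2 mg expressed in ng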
The concentrations of total radioactivity in excreta (urine and faeces) were multiplied by the respective weights to obtain the amounts recovered in each collection interval, which are expressed in total terms (Ae) and as percentages of the administered radioactivity dose. The amounts recovered in each collection interval of faeces and urine were separately added to obtain the cumulative recovery in each of these excreta. Renal clearance (CLr) was calculated by dividing urinary Ae by AUC.
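A corresponding sketch for the excretion data, again with purely hypothetical numbers, accumulates the amount recovered per collection interval and derives renal clearance as the urinary Ae divided by the plasma AUC.

import numpy as np

# hypothetical per-interval urine data: concentration (ng/mL) times collected volume (mL)
urine_conc   = np.array([120.0, 80.0, 35.0, 10.0])      # ng/mL per interval
urine_volume = np.array([450.0, 600.0, 550.0, 500.0])   # mL per interval

ae_per_interval = urine_conc * urine_volume              # amount excreted per interval (ng)
ae_cumulative = np.cumsum(ae_per_interval)               # cumulative recovery (ng)

dose = 3.2e6                                             # ng, illustrative
pct_of_dose = 100.0 * ae_cumulative / dose               # expressed as % of administered dose

auc = 150.0                                              # ng*h/mL, hypothetical plasma AUC
cl_r = ae_cumulative[-1] / auc                           # renal clearance (mL/h)
print(pct_of_dose, cl_r)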
Sample preparation for metabolite profiling
Plasma samples at 3, 4, 5, 6, 8 and 12 h from the five subjects were pooled for each time point. In addition, plasma samples were pooled by subject using an established AUC pooling technique (Hamilton et al., 1981). The time-points included in the AUC pools for each subject were dependent on the level of radioactivity present. AUC pooled samples were prepared from plasma samples at 0.5-12, 2-12 or 3-72 h. The samples were mixed with 4 volumes of acetonitrile and centrifuged at 4 °C for 10 min to separate the supernatant. The residue was further extracted with 4 volumes of acetonitrile/methanol (1:1, v/v) in the same manner as the first extraction. The extracts were evaporated to near-dryness under a stream of nitrogen gas at room temperature. The residue was reconstituted in 10 mmol/L ammonium acetate pH 9.0/acetonitrile (97:3, v/v) (800 µL) and centrifuged for HPLC analysis. The final extract was analysed by liquid scintillation counting to calculate the recovery of radioactivity. The extraction recoveries ranged from 66.3 to 79.0% (mean, 74.2%).
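The AUC pooling step can be illustrated as follows, assuming the commonly described form of the Hamilton et al. (1981) approach in which the aliquot volume taken from each time point is proportional to the trapezoidal time weight of that sample; the sampling times and the total pool volume below are placeholders, not the values used in the study.

import numpy as np

def auc_pool_volumes(times, total_volume):
    """Aliquot volume per time point, proportional to trapezoidal time weights,
    so that assaying the pooled sample approximates the AUC-weighted mean concentration."""
    t = np.asarray(times, float)
    # weight of each sample = half the span between its neighbouring time points
    w = np.empty_like(t)
    w[0]    = (t[1] - t[0]) / 2.0
    w[-1]   = (t[-1] - t[-2]) / 2.0
    w[1:-1] = (t[2:] - t[:-2]) / 2.0
    return total_volume * w / w.sum()

# e.g. a 0.5-12 h pool built from the nominal sampling times (hypothetical total volume, µL)
print(auc_pool_volumes([0.5, 2, 3, 4, 5, 6, 8, 12], total_volume=800.0))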
Urine samples from each subject were pooled across time-points so that the pool included >90% of the total radioactivity excreted via urine. The pooled samples were prepared by combining equal sample weights of each constituent. A portion (2 mL) of each urine pool was centrifuged for HPLC analysis. To quantify recovery from urine samples, the radioactivity of the supernatant was measured by liquid scintillation counting. The extraction recoveries ranged from 96.9 to 102.3% (mean, 100.6%).
Faecal samples from each subject were pooled across time-points so that the pool included >95% of the total radioactivity excreted faecally. The pooled samples were prepared by combining equal weights of the aqueous homogenates of each constituent. A sample (2 g) of each faecal homogenate pool was extracted twice with 4 volumes of acetonitrile/methanol (1:1 v/v) in the preparation, and samples for HPLC analysis were prepared as described previously for the plasma samples. To quantify recovery from faecal samples, the radioactivity of the final extract was measured by liquid scintillation counting. The extraction recoveries ranged from 93.2 to 108.1% (mean, 100.7%).
HPLC analysis of metabolites in plasma, urine and faeces
Chromatography was performed using an Agilent 1100 HPLC system (Agilent Technologies, Santa Clara, CA, USA). Separation was achieved using a Luna C18 column (250 mm × 4.6 mm i.d., 5 µm, Phenomenex, Belmont, CA, USA) maintained at 40 °C by employing two solvents as the mobile phase: solvent A (10 mmol/L ammonium acetate, pH 9.0) and solvent B (acetonitrile). A linear gradient at a flow rate of 1.0 mL/min was set as follows: 3% B (0 min) → 3% B (5 min) → 18% B (35 min) → 70% B (45 min) → 70% B (50 min) → 3% B (50.1 min) → 3% B (60 min). The column eluate was introduced to the UV detector set at a wavelength of 210 nm and fractionated every 10 s into 96-well microplates (Deep-Well LumaPlate, PerkinElmer Life and Analytical Sciences) using an HTC PAL fraction collector (BLCA124, CTC Analytics AG, Zwingen, Switzerland). After evaporation of HPLC solvents from the microplates, the radioactivity of the residues was determined using a microplate scintillation and luminescence counter (TopCount NXT, PerkinElmer Life and Analytical Sciences). Data were analysed using Laura software v3.4 (Lablogic Systems Ltd., Sheffield, UK). The metabolites were identified by comparing retention times between the radioactive peaks of samples and UV peaks of authentic standards on the HPLC connected to the UV detector. Identification of the metabolites was further conducted by comparing retention times between the ion peaks of samples and those of authentic standards on an LC-MS system equipped with multiple-stage mass spectrometry (MSn).
The column recoveries of radioactivity were 89.7% for plasma, 85.8% for urine and 100% for faeces. Therefore, no notable losses of radioactivity were observed across the analytical system.
The radiolabelled components in each chromatogram (greater than 3-fold background) were evaluated to determine retention times and percentage peak area values as percentages of the total radioactivity in the chromatogram. The peaks not associated with the metabolite reference standards were assigned the prefix HM (human metabolite) followed by a number based on the retention time. The amounts of rovatirelin and its metabolites in urine and faeces were expressed as percentages of the dose administered.
Identification of rovatirelin and its metabolites using LC-MS (MSn)
Analytical samples prepared from plasma, urine and faeces were analysed using LC-MS (MSn), as they contained the peaks of interest. Chromatography was performed using an Accela HPLC system (Thermo Fisher Scientific). The HPLC conditions were the same as described in the previous section. The HPLC eluent was split post-column in a 1:10 ratio between a mass spectrometer and a Beta-RAM radioactivity detector (model 4, Lablogic Systems Ltd.). MS analysis was performed using an LTQ Orbitrap hybrid mass spectrometer (Thermo Fisher Scientific). The analytical conditions of the mass spectrometer were as follows: atmospheric pressure ionisation interface, electrospray ionisation; acquisition, full scans in the positive ion mode; scan range, 100-750 m/z (MSn data dependent); source voltage, 4.2 kV; and capillary temperature, 310 °C. Metabolites in the analytical samples were identified by comparing the retention time, protonated molecule and fragment ion peaks with those of the authentic standards. Compound-related molecular ions were identified by comparing the retention times on the radiochromatogram. In all cases, the 12C ion was the major species in the molecular ion cluster. For accurate mass confirmation of a suspected metabolite, the measured accurate mass had to be within 5 ppm of the theoretical value to confirm the empirical formula. The fragmentation given was based on the 12C ion unless stated otherwise.
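The 5 ppm acceptance criterion corresponds to the usual relative mass-error calculation; a minimal check with illustrative (not measured) m/z values is shown below.

def mass_error_ppm(measured_mz, theoretical_mz):
    """Relative mass error in parts per million."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

# purely illustrative values near the protonated molecule of rovatirelin (m/z 367)
print(abs(mass_error_ppm(367.1452, 367.1436)) <= 5.0)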
The concentration of TAMP in the filtrate was determined using an HPLC system with UV detection, which consisted of a Hitachi L-7000 series instrument (Hitachi High-Technologies Corporation, Tokyo, Japan). Chromatographic separation was achieved using an L-column ODS column (250 mm × 4.6 mm i.d., 5 µm, Chemicals Evaluation and Research Institute Japan, Tokyo, Japan) maintained at 40 °C by employing two solvents as the mobile phase, namely solvent A (20 mmol/L potassium phosphate, pH 7.5) and solvent B (acetonitrile). A linear gradient at a flow rate of 1.0 mL/min was set as follows: 10% B (0 min) → 10% B (5 min) → 30% B (20 min) → 60% B (20.1 min) → 60% B (25 min) → 10% B (25.1 min) → 10% B (35 min). The wavelength of UV detection was set at 220 nm. The calibration curve of TAMP was linear in the concentration range 0.5-50 µmol/L. The assay method was validated for selectivity, linearity, accuracy, precision and stability after extraction prior to the analysis.
Safety assessment
A single oral dose of [14C]rovatirelin (3.2 mg) was considered to be moderately well tolerated in the healthy male subjects. All adverse events were mild or moderate in severity and transient in nature. No severe or serious adverse events were reported, and no subject was withdrawn from the study because of adverse events. There were no safety concerns based on clinical laboratory results, vital signs, 12-lead ECG, body weight changes or physical examinations.
Pharmacokinetics of radioactivity, rovatirelin and TAMP
The concentration-time profiles and pharmacokinetic parameters of radioactivity, rovatirelin and TAMP are presented in Figure 1 and Table 1, respectively. The plasma concentration of rovatirelin reached Cmax (7.92 ng/mL) at a tmax of 5.02 h. After reaching Cmax, the plasma concentrations appeared to decline in a biphasic manner with a terminal t1/2 of 14.9 h. The percentages of Cmax, AUC0-8h and AUC0-36h of rovatirelin relative to radioactivity were 73.0, 72.7 and 43.8%, respectively. The plasma concentration of TAMP reached Cmax (0.976 ng/mL) slowly, at a tmax of 8.00 h, compared with radioactivity and rovatirelin. After reaching Cmax, the plasma concentrations of TAMP declined in a multiphasic manner with a secondary peak at 48 h after administration. The percentages of Cmax, AUC0-8h and AUC0-36h of TAMP relative to radioactivity were 9.0, 6.3 and 10.6%, respectively. The maximum levels of radioactivity in plasma and blood were reached at a similar time as that for rovatirelin, with a tmax of 6.00 h after administration. A similar multiphasic decline as noted for TAMP was also observed in the profiles of plasma and blood radioactivity, with a secondary peak at 48 h after administration. The concentrations of radioactivity were lower in blood than in plasma (blood/plasma ratio: 0.713-0.771 between 3 and 12 h after administration).
Excretion and recovery of radioactivity, rovatirelin and TAMP
The cumulative excretion of radioactivity in urine and faeces up to 240 h after administration was 36.8 and 50.1% of the administered dose, respectively (Table 2 and Figure 2). The sum of urinary and faecal excretion was 89.0% of the administered dose. The mean total amounts excreted in urine represented 15.6 and 15.9% of the administered dose for rovatirelin and TAMP, respectively, and CLr was calculated as 5.80 L/h for rovatirelin and 9.45 L/h for TAMP. At early times, up to 12 h after administration, the urinary excretion of rovatirelin was abundant (71% relative to total radioactivity in urine). (Note to Table 2: the periods of excretion of radioactivity, rovatirelin and TAMP were 240, 96 and 120 h after administration, respectively; geometric mean (CV%) data are presented; fe, percentage of the administered dose; CLr, renal clearance; NA, not applicable.)
Metabolite profiles in plasma, urine and faeces
The representative chromatographic profiles of radiolabelled components in plasma, urine and faeces are shown in Figure 3, and the quantification of these components is summarised in Table 3. In time-point plasma samples, unchanged rovatirelin was the predominant component in the profiles at each time point (>70% of the total peak area), and the major plasma metabolite was TAMP (>6% of the total peak area). Two other minor components of radioactivity (HM2 and TA) were detected in the chromatograms of the time-point pooled samples. Regarding the AUC plasma samples, unchanged rovatirelin was the major component (74.21%). TAMP was the most abundant metabolite, accounting for 17.20% of the total drug-related exposure. HM2, which accounted for 2.92% of the total peak area, was detected in the AUC samples of all but one subject. The metabolite profile of the AUC pooled sample from one subject also exhibited two further components of interest (TA and HM10), which accounted for 3.58 and 1.19% of the total peak area, respectively.
As observed for plasma, unchanged rovatirelin and TAMP were the principal components in urine (Figure 3). In addition, TA, rovatirelin-acid, HM1, HM2 and HM10 were observed in the analysed urine samples; however, they were minor components. Expressed as percentages of the total administered radioactivity (Table 3), unchanged rovatirelin and TAMP accounted for 18.07 and 16.68% of the dose, respectively. Of the five minor metabolites observed in urine, each accounted for 0.5% or less of the total administered dose.
The majority of radioactivity excreted via faeces was unchanged rovatirelin (Figure 3). Among the metabolites, the peak of TAMP was generally the most abundant. Expressed as percentages of the total administered radioactivity (Table 3), rovatirelin and TAMP accounted for 43.29 and 3.20% of the dose, respectively. Based on the mean values, all other metabolites each accounted for less than 1% of the total administered dose.
LC-MS/MS analysis of plasma, urine and faeces
Rovatirelin (m/z 367) and TAMP (m/z 240) were confirmed in plasma, urinary and faecal samples, and representative full-scan and MSn data were obtained for rovatirelin and TAMP. Representative mass spectra for the rovatirelin reference standard and rovatirelin in faeces are shown in Figure 4, and those for the TAMP reference standard and TAMP in faeces are presented in Figure 5. Because of the low levels of radioactivity in the samples, only unchanged rovatirelin and TAMP could be confirmed in the samples from each matrix by LC-MS/MS analysis. The response of all other components was too low to permit the detection of rovatirelin-related ions, which would have enabled structural elucidation.
Identification of enzymes for TAMP and unknown metabolite formation in vitro
When incubations of human liver S9 fortified with NAD and of human liver microsomes (HLM) fortified with an NADPH-regenerating system were compared, TAMP was only formed from rovatirelin in the presence of NADPH and HLM. In addition, an unknown metabolite was also detected in the reaction mixture (Figure 7). Conversely, TAMP was not formed in human S9 fortified with NAD.
To identify the metabolic enzymes responsible for formation of TAMP and the unknown metabolite, chemical inhibitors of CYP and ALDH were added to the reaction mixtures of HLM fortified with an NADPH-regenerating system (Figures 6 and 7). TAMP formation was inhibited by 1-aminobenzotriazole (broad-spectrum CYP inhibitor), and formation of the unknown metabolite was also inhibited by the compound (data not shown). In addition, diethyldithiocarbamate (CYP2E1/ALDH inhibitor), ketoconazole (CYP3A4/5 inhibitor) and disulphiram (ALDH inhibitor) inhibited TAMP formation in HLM. Formation of the unknown metabolite was also inhibited by ketoconazole, but not by diethyldithiocarbamate and disulphiram (Figure 7).
In the assay with recombinant human CYPs, TAMP formation was only observed in the presence of CYP3A4 and CYP3A5, and its formation in the presence of CYP3A4 was 3-fold higher than that in the presence of CYP3A5 (Figure 8). The formation of the unknown metabolite was also detected in the presence of CYP3A4 and CYP3A5, and the ratio of the unknown metabolite to TAMP was greater than that in HLM (Figure 7).
Discussion
The pharmacokinetic properties of rovatirelin in humans were investigated in healthy male subjects after a single oral dose of [14C]rovatirelin (3.2 mg), and the major metabolic enzyme of rovatirelin in humans was identified by in vitro studies. These studies are generally useful in understanding the fate of drug candidates in humans, drug-drug interaction (DDI) potential and the relevance of the animal species used for preclinical toxicity and pharmacodynamic studies (Isin et al., 2012; Penner et al., 2009, 2012; Walker et al., 2009).
After a single oral dose of [14C]rovatirelin, radioactivity was steadily absorbed, reaching Cmax at 6.00 h after administration and declining thereafter with a t1/2 of 32.1 h. Radioactivity was mainly recovered in faeces (50.1% of the dose) and to a smaller extent in urine (36.8% of the dose). The recovery of total radioactivity (89.0% of the dose) was considered satisfactory compared with the findings in the human mass balance studies conducted to date (Bruderer et al., 2012; Nijenhuis et al., 2016; Roffey et al., 2007). The principal route of elimination was faecal excretion, in line with the findings in non-clinical studies (Kobayashi et al., 2019). Urinary excretion was considered moderate, suggesting that no less than 36.8% of the dose of rovatirelin would be absorbed from the gastrointestinal tract. In addition, the metabolites in faeces accounted for 7.07% of the dose; thus, >43% of the dose (urinary excretion + metabolites in faeces) may be absorbed in total. Thus, the extent of absorption in humans is considered larger than that in rats (27.3%), and the bioavailability in humans may be higher than that in rats (7.3%) (Kobayashi et al., 2019).
The highest concentration of rovatirelin in plasma was observed at 5.02 h after administration, which was almost the same as that of total radioactivity in plasma, and the concentration decreased with a t1/2 of 14.9 h (Table 1 and Figure 1). Conversely, rovatirelin was rapidly absorbed after oral administration to rats and dogs (tmax: 0.4-0.6 h after administration in rats, 1.0 h after administration in dogs), and t1/2 was 3.3-7.7 h in rats and 2.7 h in dogs (Kobayashi et al., 2019). These findings illustrate that rovatirelin is likely to be steadily absorbed and slowly eliminated from plasma in humans. Similar to the blood-to-plasma ratio (Rb: 0.815-0.828) in an in vitro study (Kobayashi et al., 2019), Rb in this mass balance study was as low as 0.713-0.771, exhibiting a limited degree of partitioning of drug-related material into blood cells in vivo. The urinary excretion of rovatirelin up to 12 h after administration was 71% relative to the total radioactivity in urine, revealing that rovatirelin was mainly excreted in urine at early times after administration (Figure 2).
TAMP was assumed to be the major metabolite in humans based on an in vitro study using human hepatocytes (Kobayashi et al., 2019). Thus, the TAMP concentration in plasma and urine was determined in this human mass balance study. The plasma concentration of TAMP peaked at 8.00 h after administration, and the cumulative excretion of TAMP in urine approached its maximum at approximately 72 h after administration (Figure 2), suggesting that the formation of TAMP is slow in humans. A secondary peak was observed in the plasma concentration-time curve of TAMP. Although the secondary peak was observed in the pharmacokinetic profiles of the majority of individual subjects, there was large variability in the plasma concentration of TAMP from 24 to 48 h after administration. In other clinical pharmacokinetic data for TAMP, the secondary peak was not observed (Kissei Pharmaceutical Co. Ltd., data on file). These results suggest that the secondary peak is more likely due to variability in the pharmacokinetic samples. CLr of TAMP (9.45 L/h) was larger than that of rovatirelin (5.80 L/h), illustrating that TAMP tends to be more readily excreted in urine than rovatirelin.
In the metabolite profiling of rovatirelin, the major components of radioactivity in plasma, urine and faeces were unchanged rovatirelin and TAMP. Although unchanged rovatirelin was the predominant component in plasma, TAMP was identified as a major circulating metabolite according to the criteria in the ICH guideline M3(R2) (ICH, 2009). In the ICH guideline, a metabolite is considered major if its exposure in human plasma is >10% of the total drug-related material, and further preclinical assessments are then required. The exposure of TAMP was 17.20% of the total drug-related material, but TAMP was found to display no affinity for human TRHR and extremely low affinity for other pharmacological receptors and ion channels (Kissei Pharmaceutical Co. Ltd., data on file). The absence of pharmacological potency of TAMP and its lower exposure relative to that of rovatirelin suggest that TAMP should not be expected to contribute to the pharmacological effects of rovatirelin in vivo. In addition, TAMP was also present in animals and was adequately qualified in non-clinical safety studies (Kissei Pharmaceutical Co. Ltd., data on file). As minor metabolites, TA, HM2 and HM10 were detected in plasma and excreta. Although the chemical structures of HM2 and HM10 could not be deduced by LC-MS analyses, comparison of the HPLC retention times in this study and in the non-clinical study in rats suggests that HM2 and HM10 correspond to metabolites RPM1 and RPM3, respectively, observed in rats (Kobayashi et al., 2019). Thus, the minor metabolites found in human plasma (TA, HM2 and HM10) appear to be present in rat plasma, and it is considered that no human-specific metabolite exists among the circulating metabolites. Among the metabolites, TAMP was the most notable circulating metabolite and the major metabolite in excreta (approximately 20% of the dose), demonstrating that the major metabolic pathway of rovatirelin in vivo is the formation of TAMP from rovatirelin (Figure 9). Thus, we conducted in vitro studies to identify the major metabolic enzymes responsible for TAMP formation in humans. Although the amount of TAMP in excreta was moderate in this human mass balance study, the in vitro studies required high protein and substrate concentrations for TAMP formation, which is thought to be due to a low turnover rate in vitro.
From the results of in vitro studies using human liver S9 and microsomal fractions, TAMP was not expected to be directly formed from rovatirelin by the enzymes present in these fractions, such as ALDH, aminopeptidase and esterase in the liver. Furthermore, rovatirelin was resistant to pyroglutamate aminopeptidase in the rat brain homogenate and rat plasma (Kobayashi et al., 2019). These findings indicated that TAMP is not formed from rovatirelin by hydrolysis as the first metabolic reaction. In HLM, formation of TAMP and the unknown metabolite was simultaneously observed depending on the presence of NADPH, and the formation of both compounds was inhibited by a typical CYP3A4/5 inhibitor (ketoconazole). An ALDH inhibitor (disulphiram) and its active metabolite (diethyldithiocarbamate) also inhibited the formation of TAMP in HLM, but not that of the unknown metabolite. In addition, TAMP was formed by recombinant human CYP3A4 and 3A5, but not by recombinant human CYP2E1, and only recombinant human CYP3A4 and 3A5 significantly formed the unknown metabolite. These results indicated that formation of TAMP and the unknown metabolite depends on CYP3A4/5 activity. Furthermore, it is reasonable to consider that TAMP may be generated depending on the formation of the unknown metabolite and that CYP3A4/5 activity could play an important role in the formation of the unknown metabolite from rovatirelin in addition to the possible direct formation of TAMP. Additionally, ALDH inhibitors affected the formation of TAMP from the unknown metabolite, suggesting that ALDH activity contributes to the formation of TAMP from the unknown metabolite. As ALDH is known to have NAD-independent esterase activity and is widely distributed in liver cell fractions (Marchitti et al., 2008;Sidhu & Blair, 1975;Yoshida et al., 1998), there is a possibility of hydrolysing the amide bond that the unknown metabolite may possess.
The metabolism of rovatirelin in humans is summarised in Figure 9. Although there are several metabolic routes, the major metabolic pathway of rovatirelin is conversion to TAMP. CYP3A4/5 activity is believed to be greatly involved as a rate-determining factor for TAMP formation based on the results of in vitro studies, and the mean fractional metabolism is estimated to be 46% (urinary and faecal excretion of TAMP / [total excretion − faecal excretion of unchanged rovatirelin]) from the metabolite profiling in this human mass balance study. The DDI guidelines recommend that if the enzyme is responsible for >25% of the drug's elimination based on the in vitro phenotyping studies and human pharmacokinetic data, clinical DDI studies should be conducted (EMEA guideline, 2012; FDA draft guidance, 2017; MHLW guideline, 2018). Thus, it is necessary to evaluate DDIs for TAMP formation in vivo.
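Using the excreta profiling values quoted earlier (Table 3), the fractional-metabolism estimate can be reproduced approximately; the exact inputs used by the authors may differ slightly, so the value returned below is indicative only.

# percentages of the administered dose from the metabolite profiling quoted in the text
tamp_urine, tamp_faeces = 16.68, 3.20      # TAMP excreted in urine and faeces
total_excretion = 89.0                      # total radioactivity recovered
rovatirelin_faeces = 43.29                  # unchanged rovatirelin in faeces

fm = (tamp_urine + tamp_faeces) / (total_excretion - rovatirelin_faeces)
print(round(100 * fm, 1))   # roughly 43-46% depending on the exact inputs chosen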
Conclusion
After a single oral dose of [14C]rovatirelin in healthy male subjects, rovatirelin was steadily absorbed, and rovatirelin and TAMP circulated in plasma as the major components. The total radioactivity recovered in urine and faeces was 89.0% of the administered dose. The principal route of elimination was faecal excretion (50.1% of the dose), and urinary excretion was moderate (36.8%). Urinary excretion of rovatirelin and TAMP accounted for 15.6 and 15.9% of the total dose administered, respectively. Although rovatirelin was extensively metabolised in healthy subjects, with 20 metabolites observed across plasma, urine and faeces, TAMP was identified as the major metabolite of rovatirelin. The major metabolic pathway of rovatirelin is considered the conversion of rovatirelin to TAMP, and it appears that several metabolic enzymes are involved in this pathway. The most important enzymes responsible for the pathway are believed to be CYP3A4/5.
Figure 9. Summary of rovatirelin metabolism in humans. P: plasma; U: urine; F: faeces; *: the unknown metabolite observed in vitro, which was thought to be formed by CYP3A4/5, may be an intermediate in this metabolic pathway.
Assessing nutrient salts and trace metals distributions in the coastal water of Jeddah, Red Sea
In this study, eighteen sites along the Jeddah coastal area, Red Sea, were assessed for water quality status based on nutrients, metals, chlorophyll-a (Chl-a) and physical variables during 2018 and 2019. The investigated parameters of the Water Quality Index (WQI) are temperature, pH, salinity, dissolved oxygen (DO), DO saturation, oxidizable organic matter (OOM), suspended particulate matter (SPM), Chl-a, ammonium, nitrite, nitrate, total nitrogen, reactive phosphate, total phosphorus, silicate, Zn, Fe, Mn, Cu, Cd, Pb, and Ni. The results revealed that the pH values were slightly alkaline, with a range of 7.85–8.20. The results of the other parameters were as follows: salinity (36.95–42.61 PSU), DO (5.22–6.67 mg/L), OOM (0.40–1.23 mg/L), SPM (12.39–21.5 mg/L), Chl-a (0.10–0.83 µg/L). The ranges of nutrients (μM) were 0.07–0.22, 0.45–1.47, 9.62–18.64, 23.31–57.65, 0.05–0.15, 0.55–2.78 and 2.54–5.51 for NH4/N, NO2/N, NO3/N, TN, PO4/P, TP and SiO4/Si, respectively. Cluster analysis was used to classify the studied stations. Five clusters were found, indicating the value of performing cluster analysis in the water quality assessment process to confirm the durability and consistency of the data discovered in the current application.
Introduction
The Red Sea is about 1930 km long and 280 km wide. Its surface area is about 437,000 square kilometres, with an average depth of 490 m. The Red Sea is famous for its marine life and corals; it hosts more than 1000 invertebrate species and about 200 species of soft and hard corals (Hassan et al., 2002; Fahmy et al., 2016). Seawater quality, together with its safety and accessibility, is a major concern throughout the world. Water contaminated with excess chemicals poses a high risk to health. In recent times, large amounts of contaminants and waste materials have been discharged into the marine environment worldwide.
The disposal of trace-metal waste has many obvious effects on the marine environment, such as increased levels of contaminants in water, sediments and organisms and reduced productivity (Shriadah et al., 2004), thereby increasing human exposure to many harmful environmental problems. The economic impacts on the fishing and tourist trades have been related to the degree of deterioration of environmental conditions. Because of their toxic effects and bioaccumulation, metals represent a significant and serious class of contaminants associated with human activities in marine environments (Abohassan, 2013). Owing to increasing anthropogenic and agricultural activities and the associated nitrogen and phosphorus inputs, nutrient contaminants (nitrate, phosphate and silicate) have adverse effects on human health through the widespread problem of eutrophication (De Jonge et al., 2002). Nitrogen and phosphorus are also generated from human and industrial wastes (Palaniappan et al., 2012).
Signs of eutrophication in surface waters include an abnormal increase in primary production rates. This results in a significant increase in the growth of vascular plants, the occurrence of algal blooms, and a severe decrease in dissolved oxygen concentrations in aquatic systems, which affects the aquatic environment (Palaniappan et al., 2012; Bhagowati and Ahamad, 2018). The effects of anthropogenic activities on the marine conditions of the shoreline have been investigated at some Jeddah sites (Badr et al., 2009; Abu-Zied et al., 2013; Abu-Zied et al., 2016; Al-Mur et al., 2017; Al-Mur, 2019a,b). The main objectives of this study are to monitor and assess the physical and chemical characteristics as well as the trace-element contamination and eutrophication status of seawater along the Red Sea coast at Jeddah City, Saudi Arabia.
Material and methods
The study area is located in the middle region of the Red Sea, between longitudes 38°90′00″ and 39°10′00″E and latitudes 21°20′00″ and 22°00′00″N (Fig. 1). This area is characterized by rare rainfall and extreme evaporation, with a tropical to subtropical climate (Chen et al., 2016). North to north-northwest winds usually prevail throughout the year (Omar, 2013).
Sampling and chemical analysis
Samples were collected from different sites such as Sharm Obhur, Jeddah Sea Port, Salman Bay, Southern Corniche, Downtown, and the Al-Khumrah industrial area. Seawater samples were collected from eighteen sites; at each station and during each sampling period, seawater was collected in duplicate, stored in 0.5-litre polyethylene containers in an icebox and analysed in the laboratory. Salinity, temperature, pH and dissolved oxygen (DO) were determined at each site using a CTD (YSI-6000). Oxidizable organic matter (OOM) was determined using KMnO4 (Calberg, 1972).
Total suspended matter (TSM) was separated from the seawater samples by filtration through a 0.45 µm GF/C filter paper. The filters containing the trapped particles were then washed and dried at 65 °C for 48 h until the weight was constant. The difference in weight before and after filtration is expressed in mg/L (IOC, 1983). Water samples for nutrient salts were filtered after collection using GF/C filters. The samples remained frozen until analysis by colorimetric techniques (Grasshoff, 1976). According to APHA (1998), nitrate (NO3/N) was analysed using a reduction column and colour reagent (sulphanilamide and N-(1-naphthyl)-ethylenediamine dihydrochloride). Ammonia in water (NH4+/N) was fixed in the field without filtration (indophenol blue colorimetric method). Total nitrogen (TN) and total phosphorus (TP) were analysed in the non-filtered water samples by the method described by Valderrama (1981). Nutrient salts were analysed with a double-beam UV/V spectrophotometer (Shimadzu UV-150-02). Chlorophyll-a in the water samples was extracted with 90% acetone and analysed spectrophotometrically, and its concentration was calculated using the SCOR-UNESCO equation (Jeffrey and Humphrey, 1975). Heavy-metal analysis in seawater was performed by the method of Martin (1972), whereby trace metals were extracted and analysed. A water sample (750 ml) was filtered through a 0.45 µm membrane filter, and the pH of the sample was adjusted to 4-5 with dilute HCl (Boniforti et al., 1984; Martin, 1972). Trace metals were extracted using APDC-MIBK solvent extraction to chelate the metals: ammonium pyrrolidine dithiocarbamate (APDC) was used to complex the metals, which were then extracted into methyl isobutyl ketone (MIBK). Heavy-metal contaminants (Zn, Fe, Mn, Cu, Cd, Pb and Ni) in the final extracts were determined using inductively coupled plasma mass spectrometry (ICP-MS) (PerkinElmer NexION 300D).
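For illustration, the trichromatic equation for chlorophyll-a in 90% acetone extracts attributed to Jeffrey and Humphrey (1975) is often written as below; the coefficients are quoted from the general literature rather than from the cited report itself, and the absorbances, extract volume and filtered volume are hypothetical.

def chl_a_ug_per_l(a664, a647, a630, extract_ml, filtered_l, path_cm=1.0):
    """Chlorophyll-a (ug/L) from blank-corrected 90% acetone extract absorbances,
    using the widely quoted Jeffrey & Humphrey (1975) trichromatic coefficients."""
    chl_in_extract = 11.85 * a664 - 1.54 * a647 - 0.08 * a630   # ug/mL per cm of path
    return chl_in_extract * extract_ml / (filtered_l * path_cm)

# hypothetical absorbances for a 10 mL extract obtained from 1 L of filtered seawater
print(chl_a_ug_per_l(0.035, 0.012, 0.008, extract_ml=10.0, filtered_l=1.0))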
NASS-5 reference material (from NRC Canada) was used as a quality control sample. The glassware used was cleaned with detergent and soaked in 10% HNO3. A batch of synthetic seawater samples was analysed in eight replicate measurements. The detection limits were 0.055, 0.010, 0.010, 0.009, 0.04, 0.01 and 0.02 µg L−1 for Fe, Zn, Mn, Pb, Cu, Cd and Ni, respectively. The precision of the data is expressed as the coefficient of variation (CV); a CV of about 10% was determined from three replicate analyses of one sample. To assess accuracy, natural seawater (750 ml) was spiked with NASS-5 reference material. The spike recovery values were 92%, 93%, 95%, 90% and 94% for Cu, Pb, Mn, Fe and Zn, respectively. The analytical data for the metals are presented in Table 1.
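The quality-control figures quoted above (spike recovery and coefficient of variation) follow the standard definitions; a minimal sketch with hypothetical replicate data is given below.

import numpy as np

def spike_recovery_pct(measured_spiked, measured_unspiked, amount_added):
    # recovery of the added spike, expressed as a percentage
    return 100.0 * (measured_spiked - measured_unspiked) / amount_added

def cv_pct(replicates):
    # coefficient of variation (relative standard deviation) in percent
    x = np.asarray(replicates, float)
    return 100.0 * x.std(ddof=1) / x.mean()

# hypothetical Cu data (ug/L): triplicate analysis and a spiked natural seawater sample
print(cv_pct([1.62, 1.71, 1.68]), spike_recovery_pct(3.45, 1.67, 1.93))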
Statistical analyses
Statistical and correlation analyses of the samples were carried out using SPSS version 23 to explore the relationships between hydrographic parameters, nutrient salts and trace metals in surface seawater. The ranges and average concentrations of the variables were calculated over the study period for the 18 sampling sites at fixed statistical significance levels (p = 0.05 and p = 0.01).
Data structures have been explored in various studies using multivariate statistical techniques such as cluster analysis (CA) and principal component analysis (PCA) (Vialle et al., 2011). Here, the nearest-neighbour clustering routine in SPSS-23 was applied to analyse the data structure. Correlation and cluster analyses were applied for quantitative analysis of 22 variables (temperature, pH, salinity, dissolved oxygen (DO), DO saturation, oxidizable organic matter (OOM), suspended particulate matter (SPM), Chl-a, ammonium, nitrite, nitrate, total nitrogen, reactive phosphate, total phosphorus, silicate, and heavy-metal concentrations).
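As an illustration of the clustering step (which was performed in SPSS), a nearest-neighbour (single-linkage) hierarchical clustering of a standardized station-by-variable matrix could be sketched in Python as follows; the random data, array shapes and five-cluster cut are placeholders rather than the settings used in the study.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# hypothetical matrix: 18 stations x 22 water-quality variables
rng = np.random.default_rng(0)
data = rng.normal(size=(18, 22))
z = (data - data.mean(axis=0)) / data.std(axis=0)    # z-score standardization per variable

# single linkage = "nearest neighbour" agglomeration on Euclidean distances
tree = linkage(pdist(z, metric="euclidean"), method="single")
labels = fcluster(tree, t=5, criterion="maxclust")   # cut the tree into five clusters
print(labels)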
The TRIX index was calculated using the formula applied by Vollenweider et al. (1998) and Peng (2015).
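The TRIX value is usually computed from chlorophyll-a, the absolute deviation of oxygen saturation from 100%, dissolved inorganic nitrogen and total phosphorus; a sketch of a commonly cited form of the Vollenweider et al. (1998) formula is given below. The scaling constants and unit conventions are taken from the general literature and may differ from the exact formulation used here, so the input values in the example are purely hypothetical.

import math

def trix(chl_a_ug_l, oxygen_saturation_pct, din_ug_l, tp_ug_l):
    """TRIX trophic index in its commonly cited form:
    TRIX = [log10(Chl-a * |100 - %O2 sat| * DIN * TP) + 1.5] / 1.2"""
    a_d_o = abs(100.0 - oxygen_saturation_pct)          # oxygen saturation deviation (%)
    product = chl_a_ug_l * a_d_o * din_ug_l * tp_ug_l
    return (math.log10(product) + 1.5) / 1.2

# hypothetical station: Chl-a 0.4 ug/L, 95% O2 saturation, DIN 200 ug/L, TP 30 ug/L;
# the result depends strongly on the unit conventions adopted for DIN and TP
print(round(trix(0.4, 95.0, 200.0, 30.0), 2))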
Physicochemical characteristics
Water temperature has a vital role in the metabolism of aquatic ecosystems; when the water temperature rises, the toxicity of heavy metals becomes more dangerous for aquatic life. The spatial and temporal variations of surface water temperature (°C) are shown in Fig. 2. Salinity changes arise from the mixing of fresh and marine water, and the coastal water salinity of Jeddah can be affected mostly by wastewater from the surrounding urbanized area. The spatiotemporal surface salinities are presented in Fig. 2; the salinity values showed limited variation, with an average of 40.60 ± 1.77 PSU. The minimum salinity (around 37.00 PSU) was found at St. 6 and the maximum (42.00 PSU) at stations 11 and 17. The high salinities can be attributed to the high temperatures and the absence of river discharge into the Red Sea.
The pH of seawater is an important parameter for biological activities in the marine environment, reflecting the state of pollution and productivity. The pH of natural seawater is around 8.2. The regional and temporal distributions of the pH values are shown in Fig. 2. Spatial changes in pH across the studied sites were negligible during 2018 and 2019, and no clear trends in the magnitude of the pH readings were observed at any site (Fig. 2). The pH ranged between 7.96 and 8.20 (8.07 ± 0.07) during 2018 and between 7.85 and 8.16 (8.04 ± 0.09) during 2019 (Table 1).
An increase in phytoplankton abundance and photosynthesis results in higher pH values and an increase in the dissolved oxygen concentration (Sawidis and Bellos, 2005; Abu-Zied et al., 2013). This was confirmed by the positive relationship between the pH of the water and dissolved oxygen (r = 0.515, n = 18, p < 0.005), and the two parameters can be used together as a good indicator of the production level.
Dissolved oxygen (DO) is one of the most important variables in the marine environment; it helps to characterize different water masses and largely reflects the balance between in situ production and consumption. In the Jeddah coastal area, the absolute DO values varied between 5.22 mg/l at St. 6 during 2018 and 6.67 mg/l at St. 14 during 2019. The regional distribution of DO showed irregular behaviour (Fig. 2). The average concentrations during 2018 and 2019 were 6.01 ± 0.40 and 6.31 ± 0.27 mg/l, respectively (Table 1).
Temporally, the DO content revealed only slight variations. The high values in surface water can be attributed to the exchange of O2 from the air to the surface water as well as to algal photosynthesis (Abu-Zied et al., 2013). On the other hand, the lower DO values of surface water could be related to marine life respiration, biochemical reactions and organic matter decomposition.
Oxidizable organic matter (OOM) may cause eutrophication in the water, where a sudden bloom of algae occurs due to the presence of high amounts of limiting nutrients. The degradation of organic wastes by aerobic microbes produces carbon dioxide as a by-product. Carbonic acid is then produced as the CO2 dissolves into the water, increasing the acidity of the water; the environment becomes unsuitable for the growth of some fishes because the resulting low pH also adversely affects the solubility of heavy metals. An acidic environment favours the growth of some organisms that are dangerous to humans and that are able to live in such conditions at pH < 7. In the present work, OOM fluctuated between a minimum of 0.40 mg/l at St. 2 and a maximum of 1.23 mg/l at St. 14 during 2019 (Fig. 2). The average concentrations during 2018 and 2019 were 0.88 ± 0.21 mg/l and 0.91 ± 0.27 mg/l, respectively. The results for total suspended matter (TSM) are shown in Table 1. It ranged between 12.39 mg/l at St. 1 during 2018 and 21.50 mg/l at St. 12 during 2019, with averages of 16.33 ± 3.12 and 17.24 ± 3.15 mg/l during 2018 and 2019, respectively. The TSM concentrations reported in this work were very close to those of previous studies on the Red Sea coast, where TSM ranged from 1.08 to 38.11 mg/l, while they were lower than the concentrations reported by Heba et al. (2004), which ranged from 10 to 440 mg/l.
Chlorophyll-a (Chl-a) is one of the main pigments that can be used to measure phytoplankton biomass (Carlson, 1977). In general, high concentrations of Chl-a occur in waters containing high levels of nutrients originating from urban runoff, fertilizers and sewage treatment plants. In the present work, Chl-a varied between 0.10 µg/l (St. 1) and 0.78 µg/l (St. 17), with an average of 0.42 ± 0.21 µg/l, during 2018, and between 0.15 µg/l (St. 2) and 0.83 µg/l (St. 17), with an average of 0.45 ± 0.21 µg/l, during 2019 (Table 1). These values are lower than those reported for the Red Sea coastal area of Egypt, where Chl-a ranged from 0.24 to 2.46 µg/l (Guerguess et al., 2009), but they are similar to those reported by Eladawy (2017), with Chl-a values of 0.1-1.0 mg L−1.
Nutrient salts
Ammonium is the main nitrogenous product resulting from the microbial decomposition of nitrogen-containing organic material. It is also an important excretion product of invertebrates and vertebrates. Ammonia is one of the inorganic forms of nitrogen and is preferentially absorbed by aquatic plants (Faragallah et al., 2009). A high ammonium concentration could lead to high or even higher phytoplankton productivity if the cells utilize NH4+/N rather than NO3−/N (Dugdale et al., 2007). Reactive phosphate (PO4/P) showed low concentrations in the present study, with absolute values between 0.05 and 0.15 µM and an average of 0.10 ± 0.02 µM during the study period (Table 1 and Fig. 3). The high concentrations of total phosphorus (TP) in the seawater of the middle part of the study area (St. 5-10) are important indicators of the pollution status of the coastal system, because they reflect the anthropogenic impact of the recreational use of some beaches, intensive human activities, and the inputs of waste, sewage and recycled effluents to the coastal waters. The TP concentrations during the study period were low compared with the previous work of Fahmy (2003), who reported that the spatiotemporal distribution pattern of TP revealed large variability along the Red Sea coast of Egypt, with TP ranging from 0.49 to 2.13 µM.
Silicate (SiO4) is one of the major constituents of seawater. The heterogeneous distribution of silicate in the study area indicates that its source is not allochthonous (from the drains); rather, its main source is autochthonous, through the production of biogenic silica by diatoms, which could preserve the silicate distribution (Verschuren et al., 1998). The concentration ranges and spatiotemporal average values of SiO4 are presented in Table 1. These values are within the ranges and averages recorded by Madkour and Dar (2007). Generally, the lowest SiO4 values, and those of the other nutrient salts, were observed in areas with low population density and little pollution from the nearby land, which is dominated by a hot, arid climate.
Distribution of trace metals
To determine the status and severity of dissolved elements in a specific region, the concentration of dissolved metals at each site is compared with the minimum-risk concentrations of these elements given in the water quality standards (WQC, 1972). Concentrations of 50 mg/L for Fe, 20 mg/L for Mn, 10 mg/L for Cu, Pb and Cd, 7 mg/L for Ni, 20 mg/L for Zn and 2 mg/L for Mn represent a minimal risk of deleterious effects.
Iron and manganese
The spatiotemporal variations of dissolved iron and manganese contents along the coastal region of study are shown in Table 1 and Fig. 5 (average concentrations 56 ± 0.56 µg L−1 and 2.46 ± 1.16 µg L−1). The contents of dissolved iron and manganese (Figs. 4 and 5) showed wide variability; the differences between the highest and lowest concentrations of both iron and manganese were approximately two- to five-fold.
In general, the dissolved iron and manganese values in the study region of the Red Sea were greater (by about fifteen and four times, respectively) than those recorded in a previous study (Martin and Whitfield, 1983). They were, however, lower than the minimum-risk concentrations of the water quality criteria, 50 and 20 mg/L for iron and manganese, respectively (WQC, 1972; Table 4). Weak negative correlations were recorded between salinity and both iron and manganese (r = −0.456 and −0.286, respectively; Table 3), indicating a decrease in iron and manganese with increasing salinity (Hariri and Abu-Zied, 2018). The lower concentrations of manganese and iron at higher salinity may be explained by the oxidation of Fe2+ to iron hydroxides.
Copper and zinc
The spatiotemporal distribution of dissolved copper in the study area is shown in Fig. 4. During the period of study, the dissolved copper concentration fluctuated between a minimum of 0.92 µg L−1 at station 13 during 2018 and a maximum of 5.38 µg L−1 at station 4 during 2019. The annual average concentrations of dissolved copper were 1.67 ± 0.60 and 3.15 ± 1.14 µg L−1 in 2018 and 2019, respectively (Table 1). Generally, there is a strong influence of biological factors on the bioaccumulation of the metals. Zinc occurs naturally and is considered one of the known contaminants in the residues of agricultural and food waste, pesticides, and anti-corrosion paints (Badr et al., 2009; Mortuza and Al-Misned, 2017).
In the present study, phytoplankton plays an important role in the distribution of dissolved metals via absorption. Significant negative correlations were found between Chl-a and both copper and zinc (r = −0.519 and −0.602, respectively), consistent with the role of Fe in scavenging the other metals. On the other hand, the solubility of copper and zinc is greatly affected by the solubility of Fe. The dissolved copper and zinc contents of the coastal area were lower than the minimum-risk concentrations of the water quality criteria, 10 and 20 mg/L for copper and zinc, respectively (WQC, 1972; Table 4).
Lead, cadmium, and nickel
The spatiotemporal variations of dissolved lead, cadmium, and nickel concentrations along the coastal area of study are shown in Table 1. The levels of dissolved nickel showed slight variability, with a low of 1.15 µg L−1 at St. 11 and a high of 2.14 µg L−1 at St. 8 during 2018 (Fig. 5). The average concentration of nickel ranged from 1.47 ± 0.35 µg L−1 to 1.60 ± 0.21 µg L−1 during the study period, and the difference between the highest and lowest values was approximately two-fold. Generally, the levels of dissolved nickel are lower than the minimal risk level of the water quality criteria of 7 mg/L for nickel (WQC, 1972; Table 4). The mean concentrations of iron in the two seasons exhibited only slight differences, with no obvious seasonal variation, whereas the concentrations of dissolved manganese, copper, lead and cadmium generally increased during 2019. The concentrations of zinc and nickel showed no obvious temporal distribution patterns, which may indicate that the main sources of the dissolved metals reflect a combination of several interacting factors (Figs. 4 and 5).
N/P ratio
The N and P that are readily bioavailable for phytoplankton growth occur mainly as dissolved inorganic nitrogen (DIN) and PO4-P. The average DIN and PO4-P data during the period of study (2018-2019) show that the N/P ratios varied from 110 at St. 2 to 204 at St. 9 (Table 2), with an irregular distribution: values fluctuated between 110 and 143 at 4 stations and between 143 and 176 at 11 stations, whereas the highest N/P values were observed at 3 stations. The N/P ratios in the present study are considerably higher than the optimal assimilatory ratio of N/P = 16:1 reported by Redfield et al. (1963).
According to previous studies, nitrogen and phosphorus are the limiting factors for the growth of marine algae. Phosphorus is the limiting factor for marine algal growth when the N/P ratio is greater than six, whereas nitrogen is the limiting factor when the ratio is less than 4.5 (Chiaudani and Vighi, 1978); N/P ratios between 4.5 and 6 are considered optimal for nutrient assimilation. The present data reveal extreme variation of the N/P ratio along the Red Sea coast of Jeddah, particularly in areas exposed to land-based runoff.
Trophic index (TRIX)
A water quality index is an important tool for summarizing complex water quality data. The ecological risks of nitrogen and phosphorus were assessed using the trophic index (TRIX), calculated according to Vollenweider et al. (1998). The multimetric TRIX is another widely accepted method for evaluating coastal eutrophication. It combines the major driver and outcome variables of eutrophication, including environmental disturbance, biological response, and stress response (Primpas and Karydis, 2011).
The TRIX has been chosen as an assessment reference for coastal eutrophication in this study. It is a linear combination of several variables related to primary production (Chl-a and O2), DIN (NO2− + NO3− + NH4+), and PO4 (Melaku et al., 2003). A five-category scale was defined for water quality status (Moncheva et al., 2002): excellent (oligotrophic), very good (low trophic level, TRIX < 4), good (moderate trophic level, 4 < TRIX < 5), fair (high trophic level, 5 < TRIX < 6), and poor (very high trophic level, TRIX > 6), the last revealing high nutrient levels, low transparency, and recurrent hypoxia/anoxia in bottom waters (Peng, 2015). The TRIX model expresses the trophic state and water quality for a wide range of constituents, especially nitrogen and phosphorus, which cause eutrophication when present in high concentrations from the surrounding areas. The model has been applied to many water bodies and coastal marine waters in Europe, for example the Adriatic Sea, the Tyrrhenian Sea, the Black Sea, the eastern Mediterranean Sea and the Caspian Sea (Tugrul et al., 2011). The eutrophication status of the seawater of the selected coastal area was assessed accordingly. The trophic index ranged from 1.48 at St. 1 to 2.54 at St. 16 during 2018, showing only slight variation from the values calculated during 2019, which ranged between 1.70 at St. 1 and 2.57 at St. 17. Generally, a low trophic index (TRIX < 4) was recorded in the study area (Table 2 and Fig. 6), and the eutrophication range indicated no ecological risk based on the water quality status investigated during 2018-2019.
Table 4. Dissolved trace-metal concentrations (µg L−1) in the Jeddah coastal waters, Red Sea, compared with those reported in the literature.
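As an illustration of how such an index is obtained, the sketch below follows the standard Vollenweider et al. (1998) formulation, TRIX = [log10(Chl-a × aD%O × DIN × P) + 1.5] / 1.2, where aD%O is the absolute deviation of oxygen saturation from 100%. The constants (1.5 and 1.2), the use of PO4-P as the phosphorus term, and the example input values are assumptions made for illustration and may differ from the exact parameterization used in this study.

```python
import math

def trix(chl_a, oxygen_saturation_pct, din, p):
    """Trophic index (TRIX) after Vollenweider et al. (1998).

    chl_a                 : chlorophyll-a (mg m^-3)
    oxygen_saturation_pct : dissolved oxygen as % of saturation
    din                   : dissolved inorganic nitrogen, NO2 + NO3 + NH4 (mg m^-3)
    p                     : phosphorus term, here taken as PO4-P (mg m^-3)
    """
    a_dev_o = abs(100.0 - oxygen_saturation_pct)   # deviation from full saturation
    return (math.log10(chl_a * a_dev_o * din * p) + 1.5) / 1.2

# Hypothetical station: low-nutrient, well-oxygenated water.
value = trix(chl_a=0.4, oxygen_saturation_pct=103.0, din=15.0, p=3.0)
label = "no ecological risk (TRIX < 4)" if value < 4 else "elevated trophic level"
print(f"TRIX = {value:.2f} -> {label}")
```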
Factor analysis
In general, the variables, including nutrient salts, metals, and environmental hydrographic parameters, were classified into two groups according to their correlation with the biological response (Chl-a). The first group is located in the right part of the plot (Fig. 7) and includes salinity, SiO4, SPM, NO3, TN, TP, pH, DOM, NH4, PO4, Fe and Chl-a. The second group lies in the left part of the plot and includes DO, NO2, Cu, Zn and Mn.
Cluster analysis (CA)
In recent years, multivariate statistical methods such as cluster analysis (CA) and principal component analysis (PCA) have been applied effectively in the assessment of surface water quality, the evaluation of spatiotemporal variations in coastal seawater, and the identification of pollution sources (Pekey et al., 2004; Ouyang et al., 2006; Abu-Zied et al., 2013). CA is one of the most common multivariate techniques; it classifies the data into groups according to their (dis)similarity and is used here to assess surface water quality (Shrestha and Kazama, 2007; Tokatli et al., 2014).
CA was applied to the seawater quality data set for the period of study, using 22 variables and 18 different sites. This demonstrated the benefit of multivariate statistical techniques for assessing and interpreting large water quality data sets, identifying sources of pollution, and obtaining a better understanding of water quality. The measured variables are classified so that similar variables fall together in one group; the resulting clusters should show high internal homogeneity (within groups) and clear separation (between groups). Hierarchical clustering is the most common approach: it shows the similarity relationships between samples, or the interconnections among variables, and is usually illustrated in the form of a tree diagram (dendrogram).
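A minimal sketch of such a hierarchical classification is given below, clustering the 18 stations on their standardized variables with Ward linkage. The linkage method, the standardization step, and the placeholder station and variable names are illustrative assumptions, since the text does not specify them.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

# Hypothetical table: 18 stations (rows) x 22 water-quality variables (columns).
rng = np.random.default_rng(0)
stations = [f"St.{i}" for i in range(1, 19)]
variables = [f"var{j}" for j in range(1, 23)]      # e.g. Temp, pH, DO, NO3, ...
data = pd.DataFrame(rng.normal(size=(18, 22)), index=stations, columns=variables)

# Standardize so variables with large units do not dominate the distance measure.
z = (data - data.mean()) / data.std()

# Ward linkage on Euclidean distances between stations; cut the tree into 5 groups.
link = linkage(z.values, method="ward")
groups = fcluster(link, t=5, criterion="maxclust")
print(dict(zip(stations, groups)))

# Tree diagram (dendrogram) of the stations, analogous to Fig. 9.
dendrogram(link, labels=stations)
plt.tight_layout()
plt.show()
```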
In this study, cluster analysis was applied to the physical and chemical data, together with the Chl-a data, of the Jeddah coastal area, Red Sea. The dendrogram shown in Fig. 8 groups the analyzed variables into five major associations. The first level of aggregation, on the right side of the dendrogram, links DO with the metals Zn, Ni, Cd, and Pb as well as NO2. The second level is established by the pairs Chl-a-TN, SPM-TP, DOM-NO3 and salinity-SiO4, which join the first level. The third association, on the left side of the dendrogram, forms a cluster from the pair Fe-Cu together with PO4 and NH4. Finally, the next stage joins this third association with the pair Temp.-Mn and with pH, which together form the fourth cluster.
CA was also used to classify the selected stations of the Jeddah coastal area, Red Sea. The dendrogram of stations showed five distinct groups (Fig. 9). The first group, on the left side of the dendrogram, includes stations located in the northern part (St.'s 1, 2 and 3), where the changes in water quality are mainly due to Salman Bay. The second group includes three stations (St.'s 5, 6 and 7), also located in the northern part, which are affected by the drainage wastewater from Sharm Obhur. The third group, also on the left side of the dendrogram, includes St.'s 4, 8, 9 and 10, in the middle and northern parts of the study area, where water quality is mainly affected by residential sources of pollutants from Jeddah port. The fourth group, on the right side of the dendrogram, covers St.'s 11, 12, 13 and 14, situated in the southern part of the study area; the water quality of this region is affected mainly by human activity along the Southern Corniche. The fifth group includes stations located in the southern part of the study area (St.'s 15, 16, 17 and 18). The differences between groups may thus indicate differences in pollution sources.
Correlation matrix
Pearson correlation analysis quantifies the co-variability of two parameters. A correlation analysis was carried out to reveal the relationships between nutrient salts, metals and hydrographic parameters (Table 3), and several significant correlations were found among the studied parameters. High positive correlations were found between chlorophyll-a (Chl-a) and each of NH4, NO3, TN, TP and SiO4 (r = 0.494, 0.592, 0.866, 0.863 and 0.572, respectively, at P ≤ 0.01 or 0.05, N = 18). Suspended particulate matter (SPM) showed positive and significant correlations with Chl-a, NO3, TN, TP and SiO4 (r = 0.706, 0.618, 0.663, 0.803 and 0.778, respectively, at P ≤ 0.01 or 0.05, N = 18).
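The correlation matrix itself is straightforward to reproduce; a minimal sketch, with hypothetical column names and randomly generated stand-in data rather than the measured values, is:

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical measurements at the 18 stations (N = 18).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "Chl-a": rng.gamma(2.0, 0.5, 18),
    "NO3":   rng.gamma(2.0, 2.0, 18),
    "TN":    rng.gamma(2.0, 5.0, 18),
    "TP":    rng.gamma(2.0, 0.3, 18),
    "SiO4":  rng.gamma(2.0, 1.0, 18),
    "SPM":   rng.gamma(2.0, 4.0, 18),
})

# Pearson correlation matrix (the analogue of Table 3).
print(df.corr(method="pearson").round(3))

# Significance of a single pair, e.g. Chl-a vs TN.
r, p_value = stats.pearsonr(df["Chl-a"], df["TN"])
print(f"r = {r:.3f}, p = {p_value:.3f} (N = {len(df)})")
```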
Conclusions
Chlorophyll-a showed significant positive correlations with NH4 and SiO4 (r = 0.494 and 0.572, at P ≤ 0.05) and with NO3, TN and TP (r = 0.592, 0.866 and 0.863, at P ≤ 0.01, respectively). Dissolved nitrogen, reactive phosphate and silicate concentrations were relatively low, allowing the Jeddah Red Sea coastal water to be classified as oligotrophic to mesotrophic. The average percentage contributions of the different N forms to DIN indicate that NO3-N is the most abundant form, representing 91.37% of the total dissolved inorganic nitrogen (17.36 µM), while NO2-N and NH4-N are the least abundant constituents in the studied seawater samples. The mean concentrations of dissolved Fe, Mn, Zn, Cu, Pb, Cd and Ni in the coastal area of Jeddah, Red Sea, were within the ranges of the minimal risk concentrations, indicating that the seawater of the Jeddah Red Sea coast is not polluted by these metals.
Declaration of Competing Interest
I declare that this manuscript is original, has not been published before and is not currently being considered for publication elsewhere. I wish to confirm that there are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its outcome.
Public transport use among the urban and rural elderly in China: Effects of personal, attitudinal, household, social-environment and built-environment factors
Public transport brings significant benefits to the aging society by providing essential mobility to the elderly. However, few studies have investigated the factors that impact public transport use among the urban or rural elderly. This study explored the effects of personal, attitudinal, household, social environment, and built environment factors on the public transport trips of the elderly. The research data was collected from 274 urban and rural neighborhoods of Zhongshan, a medium-sized Chinese city. The negative binomial regression models suggest that, all else being equal, living in a neighborhood with a high level of public transport service, abundant green space along walking routes connecting home and bus-stops, or a relatively balanced structure of age or income is strongly connected to more public transport trips of the elderly. The results also indicate that a strong preference for public transport is significantly related to the public transport use among the elderly. These findings facilitate our understanding of the correlates of public transport use while providing insights into achieving an effective design of policies to encourage public transport use among the elderly in China.
Introduction
It is fundamental to provide a basic level of mobility to the general population, especially to the transportation disadvantaged (Giuliano, 2005). The transportation disadvantaged, including the elderly, young, disabled, and low-income, etc., are commonly considered to be those who are either unwilling or unable to drive or who do not have access to a car (Duvarci, Yigitcanlar, Alver, & Mizokami, 2010; Giuliano, 2005). The mobility of the elderly is highly associated with their accessibility to public transport, which ensures their participation in activities at greater distances from their home (Chen, 2010; Hess, 2009; Wretstrand, Svensson, Fristedt, & Falkmer, 2009). Previous research also reveals that public transport use may promote physical activity and health condition through higher levels of transportation-related walking (Day et al., 2014; Musselwhite, Holland, & Walker, 2015; Voss et al., 2016). Encouraging public transport use among the elderly is a crucial component of the efforts to improve mobility and quality of life (Zhang, Yang, Li, Liu, & Li, 2014). China has the world's largest ageing population. By 2015, the population of the elderly (aged 60 or over) in China had reached 222 million, accounting for 16.1% of the national population of China, and this percentage is expected to rise to 25% by 2030 (Ge et al., 2017). With the pressure of the ageing trend, it becomes a challenge to provide a public transport service that is safe, reliable, accessible and affordable to the elderly (Banister & Bowling, 2004).
In the past three decades, the level of motorization in both urban and rural areas of China has grown rapidly while the modal share of public transport keeps shrinking (Wu, Liu, Xu, Wei, & Zhang, 2017). In 2005, the State Council of the People's Republic of China launched a "Public Transport Priority Strategy" initiative aiming to promote the development of varied public transport. However, policies that encourage public transport use in the ageing population are scarce, as little is known about the barriers to and facilitators of public transport use among the elderly.
This study makes an important contribution to the literature. With data collected from Zhongshan, a medium-sized Chinese city, the factors that impact public transport use of the urban and rural elderly are investigated in an effort to better understand the correlations of personal, attitudinal, household, and neighborhood-level social and built environment attributes. Firstly, the study generated five categories of attributes: personal, attitudinal, household, neighborhood-level built environment, and neighborhood-level social environment. Then, negative binomial regression models were applied to examine how public transport trips of the elderly are related to the social and built environment attributes, together with personal, attitudinal, and household attributes. The public transport trips in the present study are all-purpose bus trips. The elderly population in this study comprises adults aged 60 and over, in line with the definition of the elderly population from the Law of the People's Republic of China on Protection of the Rights and Interests of the Elderly. The findings will provide insights for transportation and public health agencies, practitioners, and researchers into the effective design of interventions for health promotion of the urban elderly.
Literature review
The public health and urban planning fields have jointly contributed to revealing how environmental features are related to the elderly's public transport use in the Western context (Hess, 2012; Kim, 2011).
The environmental correlates of public transport use are categorized as the built environment and the social environment.The built environment is characterized as the human-made environment where we live and work (Roof & Oleru, 2008;Srinivasan, O'Fallon, & Dearry, 2003).The social environment is constituted by the people who we interact with and the culture that we live in (Barnett & Casper, 2001).With regard to built environment features, walking trips from an origin or destination to the nearest bus-stop is a barrier for older adults to ride a bus (Hess, 2012).Both perceived and actual walking dis-tance to bus-stops demonstrates significant influence on the ridership of the elderly (Broome, McKenna, Fleming, & Worrall, 2009;Chen, 2010;Hess, 2009Hess, , 2012)).Public transport service, e.g., degree of the service articulation and the level of service, is another significant predictor of public transport use among the elderly (Barnes, Winters, Ste-Marie, McKay, & Ashe, 2016;Burkhardt, 2003;Haselwandter et al., 2015;Hess, 2012).Additional built environment features that showing significant effects could be categorized as: (1) land-use density, e.g., population density and residential density (Hess, 2012;Ryan, Wretstrand, & Schmidt, 2015); (2) street network design, e.g., number of street intersections (Hess, 2012); (3) destination accessibility, e.g., distance to the nearest clinic or hospital (Chen, 2010); (4) aesthetic and safety of pedestrian environment, e.g., greenery, neighborhood crime, and pedestrian infrastructure (Aceves-González, Cook, & May, 2015;Hess, 2012).Social environment factors featuring social norm and social support are significantly correlated to the elderly' judgment and actual use of public transport (Bamberg, Hunecke, & Blobaum, 2007).
The environmental attributes employed in public transport-related studies were typically derived (Siu et al., 2012) by: (1) surveying individuals' perceptions of the social or built environment (Rodríguez, Evenson, Diez Roux, & Brines, 2009); (2) aggregating neighborhood measures from secondary data source, such as Census or Traffic Analysis Zone (Ding, Wang, Yang, Liu, & Lin, 2016); (3) measuring these characteristics within a certain distance of the individuals' residences (Cerin et al., 2013), e.g., by buffer radii (ranging from 100 m to 1 km); or (4) quantifying the built environment attributes objectively at high resolution or used cluster analysis to identify different urban forms (Riva, Gauvin, Apparicio, & Brodeur, 2009).In Ewing and Cervero's research (2010), the built environment variables that influence travel behavior were named with words beginning with D as "five Ds" from five aspects: density, design, distance to transit, destination accessibility, and diversity.
It is worth mentioning that the majority of social and built environment-travel behavior studies were conducted in Western contexts, and their findings are not necessarily translatable to the Chinese context (Van Cauwenberg et al., 2011). Currently, China is facing rapid and intense urbanization, which is also dramatically transforming the urban landscape of Chinese cities (Zegras, 2010). Meanwhile, China is undergoing drastic motorization, with vehicle ownership increasing from 32 per 1000 people to 118 between 2007 and 2016 (Zegras, 2010; Zhang et al., 2017). The joint pressure of urbanization and motorization poses both tough challenges and great opportunities for China's land use and transportation systems (Sun, Zhang, Xue, & Zhang, 2017; Wu, Ma, Long, Zhou, & Zhang, 2016). In the past few years, researchers have begun to investigate the associations of the built and social environment with physical activity and travel behavior of the elderly using data collected from China (Day, 2016; Ku, McKenna, & Fox, 2007; Ying, Ning, & Xin, 2015; Zhang, Li, Ding, Zhao, & Huang, 2016; Zhu, Chi, & Sun, 2016). For example, in Nanjing, China, older commuters are less likely to access rail transit by public bicycle (Ji et al., 2017), and among the Taiwanese elderly, the degree of willingness to make medical trips by bus is associated with socioeconomic and health status, hospital or clinic accessibility, and bus service (Chen, 2010). However, few of these studies have explored the correlates of public transport use among the urban and rural elderly in China. Revealing the factors that influence public transport use among the elderly is an indispensable step towards interventions to promote physical activity. For this reason, this study serves to extend the body of literature.
Data and methods
Study area
As stated in a previous study (Zhang, Yang, et al., 2014), the Zhongshan Metropolitan Area was chosen to examine the public transport use of the elderly in the Chinese context. Zhongshan is a medium-sized prefecture-level city in Guangdong Province of southern China (Figure 1). In the three largest coastal urban agglomerations with the most competitive economies in China, there are about 20 medium-sized cities with urbanization and motorization levels, as well as urban transport characteristics, similar to those of Zhongshan (Zhang, Yang, et al., 2014). Thus, the research findings in Zhongshan might be typical of and informative for this type of city. Since 2010, there have been two major policies pertaining to public transport use among the elderly in Zhongshan. The first is to extend the bus lines to connect the rural areas where the public transport service had been unsatisfactory, especially for the elderly with lower mobility. This policy was initiated in 2010 with the implementation of the Zhongshan Transportation Development Planning (2010 to 2020). The second policy is to provide a free bus card to the elderly to encourage public transport use, especially among the elderly with low income. Commencing in 2010, this policy at first covered only registered permanent residents. Since 2014, the policy has also benefited temporary residents without household registration in Zhongshan.
Data collection
The frequency of the elderly's public transport trips was derived from the Zhongshan Household Travel Survey (ZHTS) in 2010 (Zhongshan Municipal Bureau of Urban Planning, 2010). Selected by stratified random sampling covering the whole of Zhongshan City, the sample of the elderly over 60 comprised 4784 respondents (2905 male and 1879 female) from 274 urban and rural neighborhoods, a sampling rate of 2.0% (Zhang, Li, Liu, & Wu, 2018). The ZHTS provided the self-reported data of one-day public transport trips, together with personal, attitudinal, and household data.
The following data for the characterization of built environment attributes comes from Zhongshan Municipal Bureau of Urban Planning (Zhang, Li, Liu, & Li, 2014): (1) traffic analysis zones' boundaries-proxy for neighborhood boundaries; (2) land use in 2010 with five major types of land use (residential land, commercial and service facilities, industrial and manufacturing, green space, and other types); (3) neighborhood population in 2010; (4) road networks; (5) bus stops; and (6) political boundaries, such as city and zone boundaries.All the data were then integrated into ArcGIS for further analysis.
Characterization of socioeconomic and attitudinal attributes
The personal-level socioeconomic data include sex and age. The household-level socioeconomic data include household size, income level, and ownership of bikes, electric-bikes, motorcycles, and cars. The individual attitudinal data cover the most favored travel mode for daily trips among walking, bike, electric-bike, public transport, motorcycle, and car.
Characterization of social and built environment attributes
The social and built environment attributes were characterized on the basis of neighborhoods, which are defined to be spatially equivalent to the traffic analysis zones.As designed to be homogeneous with respect to socio-demographic characteristics and living conditions (Martinez, Viegas, & Silva, 2009), traffic analysis zones share boundaries with administrative divisions in most cases.According to the administrative divisions of Zhongshan Metropolitan Area, a total of 274 traffic analysis zones (neighborhoods) were selected in this study.
With respect to previous literature, the social environment is reflected by segments of the population with different socio-demographic characteristics, e.g., age structure, average income, and average education level (Bamberg et al., 2007). A hypothesis is that different social environments are related to differences in the elderly's judgment and actual use of public transport. The neighborhood-level data of the ZHTS provide some support for this hypothesis. In neighborhoods with low, medium or high average income, the daily public transport trips among the elderly are 0.18, 0.26, and 0.35, respectively. Thus, residents in low-income neighborhoods demonstrate a weaker social norm of riding a bus for everyday trips. Similarly, in neighborhoods with a smaller proportion of elderly population than the average (13.6%), the elderly make 0.30 public transport trips per day, which is 17.5% higher than their counterparts. In this study, we defined two types of attributes to characterize the social environment of the neighborhood where the elderly live, i.e., the proportion of elderly population and the proportions of high-, medium- or low-income households.
Ewing and Cervero suggested (2010) that each D variable of "five Ds" built environment contains a number of attributes that are commonly used in built environment-travel behavior research.Considering the best available data, we identified five neighborhood-level built environment attributes according to the five Ds in this study, which are dwelling units density (density), intersection density (design), land-use mixture (diversity), bus-stop density (distance to transit), and commercial accessibility (destination accessibility).Additionally, we also employed the sixth attribute "the percentage of green space" to represent the aesthetic factor which also influences the public transport use among the elderly (Van Cauwenberg, et al., 2011).
The dwelling unit density, intersection density, bus-stop density, and the percentage of green space are self-explanatory. The land-use diversity was calculated with respect to the context of Zhongshan, as applicable; it represents the degree to which different land uses in a neighborhood are mixed. We calculated the land-use diversity by the Entropy Index (EI) (Kockelman, 1997), wherein 0 indicates single-use environments and 1 stands for an equal share of the different land uses in area coverage. EI is defined by

EI = −Σ_i [P_i ln(P_i)] / ln(n),   (1)

where n = number of unique land uses, n ≥ 1, and P_i = percentage of land use i's coverage over total land use coverage (Zhang, Li, Yang, Liu, & Li, 2013). The measure of commercial accessibility describes the ease of access to commercial attractions. The neighborhood-level commercial accessibility is defined by the area covered by commercial facilities within a one-kilometer distance from the centroid of a neighborhood. This measure relies on data from the Zhongshan Household Travel Survey (ZHTS) in 2010. The travel diary of the ZHTS shows that a travel distance of one kilometer covers 70% of the urban elderly's home-based shopping trips and is a commonly accepted distance for the elderly. For each neighborhood, we obtained the commercial accessibility in two steps. In step 1, we defined the centroid of each neighborhood as the origin, applied the acceptable travel distance of one kilometer as a buffer along the main roads from the origin, and formed an enclosed area from the endpoints of these distances in ArcGIS. In step 2, we collected the area covered by commercial facilities within the enclosed area in ArcGIS and divided it by the population of the neighborhood to obtain the commercial accessibility.
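A short sketch of the entropy-index calculation of Eq. (1) is shown below; the land-use shares in the example are hypothetical and the handling of empty categories is an implementation choice rather than something specified in the text.

```python
import math

def entropy_index(shares):
    """Land-use mixture EI = -sum(P_i * ln(P_i)) / ln(n), in [0, 1].

    shares : area shares of the land-use types present in a neighborhood
             (zero shares are dropped; the remainder is renormalized).
    """
    shares = [s for s in shares if s > 0]
    total = sum(shares)
    p = [s / total for s in shares]
    n = len(p)
    if n <= 1:
        return 0.0  # single-use neighborhood
    return -sum(pi * math.log(pi) for pi in p) / math.log(n)

# Hypothetical neighborhood: residential 40%, commercial 25%, industrial 15%,
# green space 10%, other 10%.
print(round(entropy_index([0.40, 0.25, 0.15, 0.10, 0.10]), 3))
```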
Model specification
In built environment-travel behavior research, Poisson regression and negative binomial regression are extensively used for non-negative count dependent variables (Jang, 2005; Lewsey & Thomson, 2004).
Therefore, we tested the data to choose the proper model between Poisson regression and negative binomial regression. The Poisson process assumes that the conditional variance of the distribution of the elderly's daily public transport trips is equal to the expected value (Long, 1997; Long & Freese, 2006). However, in this study this assumption could not be met by the dependent variable, as the conditional variance differs from the expected value. Therefore, we preferred negative binomial regression over Poisson regression. The percentages of elderly making 0, 1, 2, or 3 and more public transport trips are 86.2%, 3.1%, 9.1%, and 1.6%, respectively. Thus, the count of the elderly's public transport trips has more zero observations than non-zero ones, indicating possible over-dispersion. We then tested whether a zero-inflated negative binomial regression is more suitable than a standard one. We employed the Vuong model selection test and the results strongly favored a standard negative binomial regression over a zero-inflated one.
Therefore, this study chose a negative binomial regression model to analyze the impact of individual, attitudinal, household, and built environment attributes on the frequency of the elderly's public transport trips. We checked for multicollinearity among all the independent variables by calculating the variance inflation factor (VIF). All the VIFs are smaller than 10, indicating a low degree of multicollinearity. The basic negative binomial regression model takes the standard log-linear form

ln E[Nr_frequency] = β0 + β1 GENDER + β2 AGE + β3 PROWALK + β4 PROBIKE + β5 PROEBIKE + β6 PROBUS + β7 PROMOTOR + β8 PROCAR + β9 HHSIZE_1 + β10 HHSIZE_2 + β11 HIGHINC + β12 MEDINC + β13 BUSDIST + β14 BIKES + β15 EBIKES + β16 MOTORS + β17 CARS,   (2)

where Nr_frequency is the frequency (times/day) of the elderly's public transport trips; GENDER denotes whether the respondent is male or female; AGE is the respondent's age in years; PROWALK, PROBIKE, PROEBIKE, PROBUS, PROMOTOR or PROCAR indicates whether the respondent favors walking, bicycle, e-bike, public transport, motorcycle, or car over other travel modes in daily travel; HHSIZE_1 and HHSIZE_2 are dummies for household sizes of one and two (with a reference category of more than two); HIGHINC and MEDINC are dummies for household total annual income above 60,000 Chinese Yuan (Renminbi) (RMB, 6.3 Renminbi ≈ 1 US Dollar) and 20,000-60,000 RMB (with a reference category of 0-20,000 RMB); BUSDIST represents the distance from the respondent's home to the nearest bus-stop; and BIKES, EBIKES, MOTORS, and CARS stand for the number of bicycles, electric bicycles, motorcycles and private cars in the household, respectively. Along with the basic model presented above, regression of the dependent variable proceeded in an expanded model with three social environment attributes and six built environment attributes added as independent variables. The three social environment attributes are P_ELDERLY, P_HIGHINC, and P_MEDINC, which denote the proportions of the elderly population, the high-income households, and the medium-income households (with a reference category of low-income households) in the neighborhood where the elderly live. The six built environment attributes are DWELLING, INTERSECTION, MIXTURE, COMMERCIAL, BUSSTOP, and GREENSPACE, which represent the dwelling unit density, intersection density, land-use mixture, commercial accessibility, bus-stop density, and the percentage of green space in the neighborhood where the elderly live.
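Such a model can be estimated with standard count-regression tools. The sketch below uses statsmodels on a small set of the covariates named above; the simulated data frame, the reduced covariate list, and the dispersion of the toy counts are illustrative assumptions, and the Vuong comparison against a zero-inflated specification mentioned above is omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical survey extract: daily bus trips plus a few covariates.
rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "trips":   rng.negative_binomial(1, 0.6, n),  # over-dispersed stand-in for Nr_frequency
    "GENDER":  rng.integers(0, 2, n),             # 1 = male
    "AGE":     rng.integers(60, 90, n),
    "PROBUS":  rng.integers(0, 2, n),             # prefers public transport
    "BUSDIST": rng.uniform(0.1, 2.0, n),          # km to nearest bus stop
    "BUSSTOP": rng.uniform(0.5, 6.0, n),          # bus-stop density in neighborhood
})

X = sm.add_constant(df[["GENDER", "AGE", "PROBUS", "BUSDIST", "BUSSTOP"]])
y = df["trips"]

# Multicollinearity check: VIF < 10 for every regressor.
vifs = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
print(dict(zip(X.columns, np.round(vifs, 2))))

# Negative binomial regression (log link).
model = sm.NegativeBinomial(y, X).fit(disp=False)
print(model.summary())

# Interpretation: a coefficient b maps to a (exp(b) - 1) * 100 % change in trips;
# e.g. b = 1.354 for PROBUS would mean exp(1.354) - 1 = 2.87 times more trips.
print((np.exp(model.params) - 1).round(3))
```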
Descriptive statistics
Descriptive statistics provide a general view of the dependent and independent variables (Table 1). The average age of the respondents is 67.05 years. One-fourth of the elderly choose walking or public transport as their favorite travel mode. Nearly 20% of the respondents live alone while over one-third live with a partner. Nearly two-thirds of the respondents live in medium-to-high-income households.
The average distance from home to the nearest bus-stop is 0.5 km. The household ownership of motorcycles and bicycles averaged 0.76 and 0.61, much higher than that of e-bikes or cars. The standard deviation values of dwelling unit density, intersection density, and percentage of green space among all land uses were larger than their mean values, implying substantial variations in land-use density, road network design, and aesthetics among neighborhoods in Zhongshan.
Negative binomial regression analysis
The results of the negative binomial regressions demonstrated how differently the personal, attitudinal, household, social environment, and built environment attributes were associated with public transport use among the elderly (Table 2). The two hypotheses of the paper are: 1) attitudinal factors are significantly correlated with the daily public transport trips of the elderly, and 2) social and built environment factors are significantly correlated with the daily public transport trips of the elderly. At the personal level, both age and gender are significant at 90% confidence. Being male or younger was related to more public transport trips. At the attitudinal level, all six attributes showed significant associations. Those who preferred public transport to other modes would make 2.87 times (= exp(1.354) − 1) more public transport trips than their counterparts. The elderly who favored other modes over public transport would make 39.69% to 67.92% fewer public transport trips.
At the household level, attributes related to household size and the distance between home and the nearest bus-stop were significant at 95% confidence. Those who live with a partner would make 26.46% more public transport trips than those who live in a household with three or more members.
As expected, living an extra kilometer away from the nearest bus-stop was related to a 55.19% decrease in the frequency of public transport trips.
At the social environment level, all three variables demonstrated significant correlations. The elderly who live in a neighborhood with the largest proportion of elderly population tend to make 23.85% fewer public transport trips than their counterparts in a neighborhood with the smallest proportion. The more middle-to-high-income households in the neighborhood, the more frequently the elderly use public transport. Compared with the elderly in low-income neighborhoods, those who reside in high-income or medium-income neighborhoods would make 47.12% and 39.59% more public transport trips, respectively.
At the built environment level, three variables representing land-use diversity, public transport service, and aesthetics were statistically significant at 90% confidence. The elderly who live in the most mixed-developed environment would make 37.71% fewer public transport trips than those in the least mixed-developed environment. As expected, denser bus-stops in a neighborhood are related to more public transport use among the elderly in Zhongshan. Compared with the elderly living in the neighborhood with the fewest bus-stops, the elderly would make 62.86% more public transport trips if they live in the environment with the densest ones. Similarly, with an increase of 20% in green space land use relative to the average, the public transport trips by the elderly would increase by 5.69%.
The directions of the effects for the individual and household attributes persisted across both basic and expanded models, and the coefficients showed slight to moderate variation. The LR chi2 and the log likelihood of each model represent the overall goodness of fit. The changes in pseudo-R2, LR chi2 and log likelihood in the expanded models implied that the social and built environment variables contributed to strengthening the explanatory power and enhanced the predictability of the models.
Discussion and policy implications
Personal, attitudinal, and household attributes related to gender, age, attitudes towards public transport, and household size are significantly related to the elderly's public transport use. Specifically, being male or younger-old, favoring public transport over other modes, or living with a partner is associated with more public transport trips. Male or younger elderly are more physically active in using public transport than their female or older counterparts, which is an important reason for the higher frequency of public transport trips. The finding on the correlation with a strong preference for public transport is consistent with previous literature, including the Theory of Planned Behavior (Godin & Kok, 1996). This implies the potential effects of disseminating an active lifestyle involving positive attitudes towards public transport among the elderly (Ding, Lin, & Liu, 2014). The correlations of the social and built environment attributes with the elderly's public transport use in Zhongshan yield some interesting findings. The social environment attributes relating to the degree of ageing and the affluence of the neighborhood were significantly associated with public transport use among the elderly. With regard to the proportion of elderly population in the neighborhood, the elderly make more public transport trips if they live in a neighborhood with a smaller older population. The neighborhood-level aggregate data also provide some support: for the elderly living in a younger or an older neighborhood, i.e., where the proportion of elderly population was below or above the Zhongshan average, the frequency of everyday public transport trips was 0.29 and 0.24, respectively. In terms of the average income of the neighborhood, the elderly in neighborhoods with high or medium income use public transport more frequently than their counterparts in low-income ones. The research data indicate that the elderly residing in a high-income neighborhood make nearly twice as many public transport trips (0.35 trips/day) as the elderly in a low-income neighborhood (0.18 trips/day). Thus, for neighborhoods with a low average income these data demonstrate a weaker social norm of riding a bus for everyday trips. The reasons for this finding remain unclear owing to the limitations of the available data at the current stage. Previous studies showed that it may be connected to social norms regarding active travel behaviors (Ball, Jeffery, Abbott, McNaughton, & Crawford, 2010). One assumption is that in a social environment with a younger population or medium-to-high average income, residents may more willingly engage in pro-environmental behavior and choose eco-friendly travel modes (Doran & Larsen, 2015). Such social norms may then regulate the behavior of the elderly and influence their travel mode choice. Although the underlying reasons need to be explored in future research, the results show the potential to promote public transport use among the elderly by balancing neighborhood age structure and enhancing neighborhood income level. It is worth mentioning that the current policy of providing a free bus card to the elderly is partly consistent with the possible effects of the social environment factor, the average income in a neighborhood. The main purpose of the policy was to encourage low-income elderly to choose public transport without a financial barrier. Since the policy was initiated in 2010, the same year in which the research data were collected, it was hard to conclude whether it is effective to
promote public transport use among the elderly in low-income neighborhoods. Therefore, the real effectiveness of the "free bus card" policy remains to be examined in future studies. According to the research findings, it may be more effective to couple the current free bus card policy with the policy of enhancing neighborhood income level.
The built environment attributes featuring public transport service, land-use diversity, and aesthetics showed significant correlation to the elderly's public transport trips, albeit to varied degrees.
• Living close to a bus-stop is strongly related to more public transport trips by the elderly. This is consistent with previous literature showing that a shorter walking distance between home and bus-stops enhances the attractiveness of public transport and increases public transport use by the elderly (Chen, 2010; Hess, 2012).
• In environments with more bus-stops, the elderly tend to increase their public transport use. This is in accordance with our expectation that denser bus-stops may provide more bus lines and available destinations, which improves the public transport service for the elderly (Burkhardt, 2003). The local policy of extending bus lines to rural areas, implemented in 2010, is in line with the effects of bus-stop density and of the distance from home to the nearest bus-stop. Generally, the level of public transport service in the rural areas of Zhongshan is lower than in the urban areas: bus-stops in rural areas are scarce, and the distance between home and the nearest bus-stop is sometimes twice that in urban areas. Therefore, extending and increasing bus lines will not only increase the bus-stop density but also shorten the distance from home to the nearest bus-stop in rural areas. Consequently, the rural elderly may be attracted to public transport from other travel modes and their everyday public transport trips are expected to rise.
• In neighborhoods with mixed land-use patterns, the propensity of the elderly to use public transport is lower. This may be because mixed development increases the likelihood of short-to-medium distance trips instead of long-distance ones, in which case walking and cycling may serve as better mode choices than public transport.
• With more green space between home and bus-stops, the elderly tend to make more public transport trips. In Zhongshan, the average walking distance from home to the nearest bus-stop is 500 meters, equal to eight to ten minutes of walking. Abundant greenery along the walking route not only brings aesthetic enjoyment but also provides shelter and resting places, which improves the walking environment for the elderly (Aceves-González, et al., 2015). In the first quartile of neighborhoods in terms of the percentage of green space, the elderly make 0.34 public transport trips per day, whereas in the last quartile they make only 0.25 trips per day.
Based on the discussion above, policies involving attitudes and the social and built environment may be effective in increasing public transport use among the elderly. The findings in this study provide insights for transportation and public health agencies, practitioners, and researchers into four possible interventions, as below.
• Disseminate an active lifestyle involving positive attitudes towards public transport. We recommend diversified initiatives, e.g., public transport campaigns and specialized websites, given that it is hard to change attitudes among the elderly instantly.
• Develop neighborhoods with relatively well-balanced structures of age and income, which help to avoid the over-aggregation of the elderly or low-income population.
• Optimize the location of bus-stops. This includes enhancing bus-stop densities in areas adjacent to the elderly's homes and shortening the distance between home and the nearest bus-stop.
• Provide abundant greenery, especially along major walking routes connecting the residential areas and the bus-stops.
Strengths and limitations
This study has several strengths and limitations. In terms of the strengths, firstly, the study focused on the elderly population and provided informative policy implications for the ageing society. Secondly, the study revealed the personal, attitudinal, household, social environment, and built environment correlates of the elderly's public transport use in a developing country under the context of rapid urbanization and motorization. It thus helps to promote comparative studies between different contexts.
In terms of the limitations, firstly, the dependent outcome, public transport trips, is based on self-reports, which may not capture all domains of this activity owing to the subjectivity of self-reporting. However, self-reporting is one of the most commonly used methods in transport research, and it remains the primary source for assessing public transport use in large-scale studies like this one. Secondly, cross-sectional data were used in this study. For this reason, a full evaluation of causal inferences about the effects of different factors on public transport use requires further longitudinal and multilevel analyses over time.
Conclusion
This study makes an important contribution to the existing literature by investigating the correlates of public transport use of the elderly in Zhongshan, a medium-sized Chinese city. The research findings suggest that a strong preference for public transport is substantially related to more public transport trips of the elderly in Zhongshan. In terms of the social environment, the elderly tend to use public transport more frequently if residing in a neighborhood with a smaller elderly population and a medium-to-high average income. With respect to the built environment, all else being equal, living in a neighborhood with easy access to public transport, a high level of public transport service, and sufficient green space is associated with more public transport trips of the elderly. The findings support the notion of developing public transport-friendly communities and are informative for creating public health interventions that help to promote public transport use together with physical activity among the elderly in Zhongshan. Possible interventions include: (1) disseminate an active lifestyle involving positive attitudes towards public transport; (2) develop neighborhoods with a relatively balanced structure of age and income; (3) optimize bus-stop locations with higher stop densities and shorter distances between home and the nearest bus-stop; and (4) provide abundant green space, especially along major walking routes connecting home and the bus-stops.
Figure 1 :
Figure 1: Study area Note: The upper side of the figure shows the city boundary, district (town) boundary, and the neighborhood boundary of Zhongshan; while the bottom half shows the location of Zhongshan in Guangdong Province.
Table 1 :
Descriptive statistics of dependent and independent variables (sample size = 4784). Note: S.D. = standard deviation; Min. = minimum; Max. = maximum.
Table 2 :
Negative binomial regressions of the frequency of the elderly's public transport trips. Note: *** denotes significance at p < 0.01, ** at p < 0.05, and * at p < 0.1. Blank cells mean the variable was not included in that model. Obs = observations; prob = probability.
Caliber based spectral gap optimization of order parameters (SGOOP) for sampling complex molecular systems
In modern day simulations of many-body systems much of the computational complexity is shifted to the identification of slowly changing molecular order parameters called collective variables (CV) or reaction coordinates. A vast array of enhanced sampling methods are based on the identification and biasing of these low-dimensional order parameters, whose fluctuations are important in driving rare events of interest. Here we describe a new algorithm for finding optimal low-dimensional collective variables for use in enhanced sampling biasing methods like umbrella sampling, metadynamics and related methods, when limited prior static and dynamic information is known about the system, and a much larger set of candidate CVs is specified. The algorithm involves estimating the best combination of these candidate CVs, as quantified by a maximum path entropy estimate of the spectral gap for dynamics viewed as a function of that CV. Through multiple practical examples, we show how this post-processing procedure can lead to optimization of the CV and several orders of magnitude improvement in the convergence of the free energy calculated through metadynamics, essentially giving the ability to extract useful information even from unsuccessful metadynamics runs.
I. INTRODUCTION
With the advent of increasingly accurate force-fields and powerful computers, Molecular Dynamics (MD) simulations have become an ubiquitous tool for studying the static and dynamic properties of systems across disciplines. However, most realistic systems of interest are characterized by deep, multiple free energy basins separated by high barriers. The timescales associated with escaping such barriers can be formidably high compared to what is accessible with straightforward MD even with the most powerful computing resources. Thus in order to accurately characterize such landscapes with atomistic simulations, a large number of enhanced sampling schemes have become popular, starting with the pioneering works of Torrie, Valleau, Bennett and others 1-11 . Many of these schemes involve probing the probability distribution along selected low-dimensional collective variables (CVs), either through a static pre-existing bias or through a bias constructed on-the-fly, that enhances the sampling of hard to access but important regions in the configuration space.
The quality, reliability, and usefulness of the sampled distribution is in the end deeply dependent on the quality of the chosen CV. Specifically, one key assumption inherent in several enhanced sampling methods is that of time-scale separation 12 : for efficient and accurate sampling, the chosen CV should encode all the relevant slow dynamics in the system, and any dynamics not captured by the CV should be relatively fast. For most practical applications, there are a large number of possible CVs that could be chosen, and it is not at all obvious how to construct the best low-dimensional CV or CVs for biasing from these various possible options. Success in enhanced sampling simulations has traditionally relied on an apt use of physical intuition to construct such low dimensional CVs. Identification of good low dimensional CVs is in fact useful not just for enhanced sampling simulations such as umbrella sampling and metadynamics but also for distributed computing techniques like Markov State Models (MSM) 13 , allowing one to significantly improve the quality and reliability of the constructed kinetic models. Last but not the least, having an optimal low dimensional CV can also help in the building of Brownian dynamics type models 14,15 . Indeed, given the importance of this problem, a range of methods has been proposed to solve it [16][17][18][19][20][21][22][23] .
In this communication, we report a new and computationally efficient algorithm for designing good low-dimensional slow CVs. We suggest that the best CV is one with the maximum separation of timescales between visible slow and hidden fast processes 12,24 , or the maximum spectral gap. The method is named spectral gap optimization of order parameters (SGOOP). Note that in this work we henceforth refer to the best CV in the singular, without loss of generality in the treatment. The notion of such a timescale separation is at the core not just of enhanced sampling methods but also of coarse-graining, multiscale and projection operator methods [25][26][27] .
Our algorithm involves learning the best linear or nonlinear combination of given candidate CVs, as quantified by a maximum path entropy 28 estimate of the spectral gap for the dynamics of that CV. The input to the algorithm is any available information about the static and dynamic properties of the system, accumulated through (i) a biased simulation performed along a sub-optimal trial CV, possibly (but not necessarily) complemented by (ii) short bursts of unbiased MD runs, or (iii) by knowledge of experimental observables. Any type of biased simulation could be used in (i), as long as it allows projecting the stationary probability density estimate on generic CVs without having to repeat the simulation. Metadynamics 29 provides this functionality in a straightforward manner and hence it is our method of choice here. Given this information we use the principle of Maximum Caliber 28,30 to set up an unbiased master equation for the dynamics of various trial CVs. Through a simple post-processing optimization procedure we then find the CV with the maximal spectral gap of the associated transfer matrix. For instance, this optimization can be performed through a simulated annealing approach that maximizes the spectral gap by performing a robust global search in the space of trial CVs.
Through three practical examples, we show how our post-processing procedure can lead to better choices of CVs, and to several orders of magnitude improvement in the convergence of the free energy calculated through the popular enhanced sampling technique metadynamics. Furthermore, the algorithm is generally applicable irrespective of the number of stable basins. Our algorithm essentially provides the much needed ability to extract useful information about relevant CVs even from unsuccessful metadynamics runs. In addition to use in free energy sampling methods, the optimized CV can then also be used in other methods that provide kinetic rate constants 33,34 . We expect this algorithm to be of widespread use in designing CVs for biasing during enhanced sampling simulations, making the process significantly more automatic and far less reliant on human intuition.
II. THEORY
Let us consider a molecular system with N atoms at temperature T. We assume there exists a large number d of available order parameters, with 1 ≪ d ≪ N, collectively referred to as {Θ}, such that the dynamics in this d-dimensional space is Markovian. These could be inter-molecular distances 16 , torsional angles, solvation states, nucleus size/shape 35 , bond order parameters 36 , etc. The identification of such order parameters poses another complicated problem, but as routinely done in other methods aimed at optimizing CVs 13,16,22 , we assume such order parameters are a priori known.
There are several available biasing techniques that can sample the probability distribution of the space {Θ}, and even calculate the rate constants for escape from stable states in this space 33 . All of these techniques are feasible only for a very small number of CVs, much smaller than d (typically one to three). These are the order parameters whose fluctuations are deemed to be most important for the system or process being studied, and by building a fixed or time-dependent bias on these CVs, one should be able to determine the true unbiased probability distribution of the full space {Θ}. But how does one decide what is an optimal low-dimensional subset or combination of the available order parameters? This dimensionality reduction is of central importance to methods such as umbrella sampling, metadynamics and others; the answer decides the speed of convergence of the biased simulation, or whether it will ever converge within practically useful simulation times.
The key idea in the current work is to perform enhanced sampling (e.g. metadynamics) with a choice of trial CVs, complemented by information gathered from short bursts of unbiased MD simulations and experimental observables when available, to iteratively improve the CVs. The maximum Caliber framework 28,30,37,38, which is a dynamical generalization of the hugely popular maximum entropy framework 39, provides a method for accomplishing this. We start by choosing a trial CV given by f{Θ}, where f maps the space {Θ} onto a lower dimensional space. The space along this trial CV f{Θ} is then discretized in grids labeled n. This CV could be multi-dimensional, with n then indexing the multidimensional grids. Let p_n(t) denote the instantaneous probability of the system being found in grid n. For the sake of clarity, we assume that f is a linear combination of the order parameters {Θ}. The dynamics of p_n(t) is governed by a master equation,

dp_n(t)/dt = \sum_m [ \omega_{mn} p_m(t) - \omega_{nm} p_n(t) ],   (1)

where ω_{nm} is the rate of transition from grid n to m per unit time. The matrix Ω_{nm} is the entirety of all these rates. If the dynamics of f{Θ} is Markovian, then the matrix k of transition probabilities is given for small ∆t by

k_{nm} = ω_{nm}∆t  (n ≠ m),   k_{nn} = 1 − \sum_{m≠n} ω_{nm}∆t.   (2)

The spectral gap obtained from k should not depend on the value of ∆t used in Eq. 2. This provides a self-consistency check of whether or not the CV so generated is Markovian. In the maximum Caliber approach one uses all available stationary state and dynamical information to construct probabilities of micropaths. Instead of defining the entropy as a function of microstate probabilities as in information theory and statistical thermodynamics 39, one now defines an entropy S as a functional of the probabilities of micropaths, which is essentially a path integral. For the Markovian process of Eq. 1 40:

S = − \sum_{a,b} p_a k_{ab} \ln k_{ab},   (3)

where p_a is the stationary probability of grid a. Path ensemble averages of time-dependent quantities A_{ab} can now be calculated as follows 28,30, where the subscripts a, b denote grid indices:

⟨A⟩ = \sum_{a,b} p_a k_{ab} A_{ab}.   (4)

The path entropy of Eq. 3, incremented by quantities accounting for constraints placed by our knowledge of observables {⟨A^i_{ab}⟩} and some other constraints such as detailed balance, is collectively called Caliber 28,30. Maximizing the Caliber is then equivalent to being least noncommittal about missing dynamic and static information, with the end result being that one obtains a relation between the grid-to-grid rates and the stationary probabilities as follows:

ω_{ab} ∝ \sqrt{p_b / p_a} \, \exp\Big( \sum_i ρ_i A^i_{ab} \Big)   (a ≠ b).   (5)

Here i runs over the number of available dynamical pieces of information, and ρ_i is the Lagrange multiplier for the associated constraint. As a special case, consider when the only observable at hand is the mean number of transitions in observation interval ∆t over the entire grid 30 along a trial CV. In this case, the above equation takes a particularly simple and useful form:

k_{ab} = ω_{ab}∆t ∝ \sqrt{p_b / p_a}   (a ≠ b).   (6)

Our method then involves calculating for various trial CVs the spectral gap of the transition probability matrix k, which for a ≠ b is k_{ab} = ω_{ab}∆t and satisfies the normalization Σ_b k_{ab} = 1. Let {λ} denote the set of eigenvalues of k, with λ_0 ≡ 1 > λ_1 ≥ λ_2 .... The spectral gap is then defined as λ_s − λ_{s+1}, where s is the number of barriers apparent from the free energy estimate projected on the CV at hand that are higher than a user-defined threshold (typically ≳ k_B T). Estimating the Lagrange multiplier is computationally expensive, so a first estimate for maximizing the spectral gap is performed using Eq. 6, where the Lagrange multiplier ρ need not be computed. Also note that in the limit of small ∆t, the matrix k will be diagonally dominated 41, and to estimate the spectral gap one needs only an accurate estimate of relative local free energies.
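To make the construction concrete, the following minimal Python sketch builds the transition matrix of Eq. 6 on a 1-d grid from a free energy estimate and returns the spectral gap. It assumes nearest-neighbour moves only (the small-∆t, diagonally dominated limit) and uses an arbitrary small prefactor in place of the Lagrange multiplier, since only relative spectral gaps between trial CVs are compared; it is an illustration, not the reference implementation of the method.

import numpy as np

def spectral_gap(free_energy, kT=1.0, n_barriers=1, prefactor=1e-3):
    # Stationary probabilities on the grid from the projected free energy estimate
    p = np.exp(-np.asarray(free_energy, float) / kT)
    p /= p.sum()
    n = len(p)
    k = np.zeros((n, n))
    for a in range(n):
        # Max Cal rates of Eq. 6, k_ab ~ sqrt(p_b / p_a), nearest-neighbour grids only
        for b in (a - 1, a + 1):
            if 0 <= b < n:
                k[a, b] = prefactor * np.sqrt(p[b] / p[a])
        k[a, a] = 1.0 - k[a].sum()              # rows of k sum to one
    lam = np.sort(np.linalg.eigvals(k).real)[::-1]
    return lam[n_barriers] - lam[n_barriers + 1]   # spectral gap lambda_s - lambda_{s+1}

# Example: a symmetric double well with a ~10 k_B T barrier, one barrier so s = 1
x = np.linspace(-2.0, 2.0, 101)
F = 10.0 * (x**2 - 1.0)**2
print(spectral_gap(F, kT=1.0, n_barriers=1))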
More static or dynamical information 42-47 simply introduces additional Lagrange multipliers and can be treated through Eq. 5. This can be done if the intention is to calculate an accurate kinetic model with correct estimates of the dominant eigenvalues and not just the spectral gap.
We are now in a position to describe the actual algorithm. It comprises the following two steps in a sequential manner, and can be improved by iterating.
1. Perform metadynamics along a trial CV f = c 1 Θ 1 + c 2 Θ 2 + ... + c d Θ d to get a crude estimate of the stationary density.
2. As post-processing, perform optimization in the space of mixing coefficients {c 1 , c 2 ...c d } to identify the CV with the maximal spectral gap. The reweighting functionality 29 of metadynamics allows projection of free energy estimates on different CVs with minimal computational effort, and is used to calculate the k matrix through Eq. 6. We elaborate on the optimization procedure details in the next section (Illustrative Examples).
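A schematic Python sketch of this post-processing step is given below. It assumes two hypothetical arrays saved from the biased run: samples (an N×d array of order-parameter values {Θ} along the trajectory) and weights (the corresponding metadynamics reweighting factors from ref. 29), and it reuses the nearest-neighbour Max Cal construction of Eq. 6. This is an illustration of the procedure, not the reference code of the method.

import numpy as np

rng = np.random.default_rng(1)

def gap_from_probs(p, n_barriers=1, eps=1e-3):
    # Spectral gap of the Max Cal transition matrix built from stationary probabilities p
    n = len(p)
    k = np.zeros((n, n))
    for a in range(n):
        for b in (a - 1, a + 1):
            if 0 <= b < n:
                k[a, b] = eps * np.sqrt(p[b] / p[a])
        k[a, a] = 1.0 - k[a].sum()
    lam = np.sort(np.linalg.eigvals(k).real)[::-1]
    return lam[n_barriers] - lam[n_barriers + 1]

def cv_gap(c, samples, weights, bins=40, n_barriers=1):
    # Project the reweighted stationary density onto the trial CV f = c . Theta
    f = samples @ c
    p, _ = np.histogram(f, bins=bins, weights=weights)
    p = np.where(p > 0, p, p[p > 0].min())      # guard against empty bins
    return gap_from_probs(p / p.sum(), n_barriers)

def optimize_cv(samples, weights, n_steps=2000, step=0.1, T0=0.05):
    d = samples.shape[1]
    c = np.ones(d) / np.sqrt(d)                 # initial trial CV: equal mixing coefficients
    g = cv_gap(c, samples, weights)
    best_c, best_g = c.copy(), g
    for i in range(n_steps):                    # Metropolis moves in {c} with a cooling schedule
        T = max(T0 * (1.0 - i / n_steps), 1e-4)
        cand = c + step * rng.normal(size=d)
        cand /= np.linalg.norm(cand)            # keep the Euclidean norm of {c} equal to 1
        g_new = cv_gap(cand, samples, weights)
        if g_new > g or rng.random() < np.exp((g_new - g) / T):
            c, g = cand, g_new
        if g > best_g:
            best_c, best_g = c.copy(), g
    return best_c, best_g

# usage (hypothetical inputs): best_c, best_gap = optimize_cv(samples, weights)

The number of bins, the Metropolis step size and the cooling schedule are tunable choices in this sketch.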
The optimization procedure gives the best CV as the one with the highest spectral gap, given the information at hand. As in any maximum entropy framework 39, the better the quality of this information, the more accurate will be the spectral gap. But even with very poor quality information, as we show in the examples, the algorithm still leads to significant improvements in the CV. Furthermore, whether or not the CV is Markovian can also be checked by repeating step 2 for different time intervals ∆t of observation and determining if the spectral gap is independent of the value of ∆t.

III. ILLUSTRATIVE EXAMPLES

A. Model 2-d potential

Our first example is a model 2-d potential with two stable states separated by a saddle point (Fig. 1 (a)). To sample this landscape at temperature k_B T = 0.1, we perform metadynamics with path CVs, a class of widely used CVs that can capture non-local and non-linear fluctuations (see 32 for details). Path CVs require specification of a series of milestones between two points in configuration space, where the milestones can be described in terms of generic order parameters. Fluctuations in the system can then be enhanced in the directions along and perpendicular to these milestones, leading to efficient exploration of the space. In Fig. 1 (a) we show the 2-d potential along with several possible path CVs imposed on it. We first perform a short trial metadynamics run biasing the y-coordinate. By post-processing this run, we generate the spectral gaps for various paths using Eq. 6 (Fig. 1 (b)). By comparing Fig. 1 (a) against Fig. 1 (b), it is clear that the path with maximum spectral gap is the minimum energy pathway passing through the saddle point. While in this case the result could also have been obtained through Nudged Elastic Band type calculations 48, the point is to use this example to develop intuition for the method. Also note that moving from the best path to others a bit distant from it does not lead to much change in the spectral gap. This is consistent with the observation that in several enhanced sampling methods such as metadynamics or umbrella sampling 2,6,7, the CV need not be precisely the true reaction coordinate, as long as it has a sufficient overlap with it 32,49.
In the Supplemental Information (SI), we provide a similar analysis on another 2-d model potential but with 3 states. The conclusions are similar.
B. 5-residue peptide
Now we move to a more complex system, which has also been considered as a test case for new enhanced sampling methods 50 in order to establish their usefulness. This is the 5-residue peptide Ace−Ala_3−Nme in vacuum (see Fig. 2 (a)), where there are six possibly relevant dihedral torsion angles. Here we ask the question: what is the best possible 1-d linear combination of these six dihedrals that we could bias and still maximally enhance the sampling of the full conformational space? In this problem, for periodicity related numerical reasons, we bias a reference cosine defined by cos(θ − θ_0), where θ is one of the six dihedral angles, and θ_0 is some reference value whose optimal choice we do not know a priori. Through our algorithm we then seek to identify: (a) The best choice of mixing coefficients {c} to use in the trial CV f = c_1 Φ'_1 + c_2 Ψ'_1 + c_3 Φ'_2 + c_4 Ψ'_2 + c_5 Φ'_3 + c_6 Ψ'_3, where we keep the Euclidean norm of {c} equal to 1, and for any angle θ the prime denotes the transformation θ → 0.5 + cos(θ − θ_0).
(b) The best choice of θ_0, kept the same for all 6 dihedrals.
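For concreteness, a small sketch of how the transformed dihedrals enter such a trial CV is shown below; the angle values in the usage line are arbitrary.

import numpy as np

def trial_cv(dihedrals, c, theta0):
    # f = sum_i c_i * (0.5 + cos(theta_i - theta0)), with {c} normalised to unit Euclidean norm
    c = np.asarray(c, float)
    c = c / np.linalg.norm(c)
    primed = 0.5 + np.cos(np.asarray(dihedrals, float) - theta0)
    return float(np.dot(c, primed))

# six dihedrals (Phi1, Psi1, Phi2, Psi2, Phi3, Psi3) in radians, theta0 = 0.75 rad
print(trial_cv([0.3, -1.2, 2.0, 0.5, -2.7, 1.1], np.ones(6), 0.75))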
We start with the trial CV where all members of {c} are the same, subject to the Euclidean norm of {c} being 1, and an arbitrary choice of θ_0 = 0.75 radians is taken. A short metadynamics run is performed biasing this trial CV. See the supplemental information (SI) for details of the metadynamics and MD parameters 53, and Fig. 3 (a) for the metadynamics trajectory used for spectral gap optimization. Based on the free energy estimate generated from this run, a simulated annealing procedure is performed in the space {c} for various θ_0 values. Starting from the spectral gap estimated using Eq. 6 for the trial CV, this involves executing Metropolis moves in the {c} space in an attempt to find the global maximum of the spectral gap. In Fig. 2 (b-d) respectively, we show how the spectral gap is increased by the simulated annealing procedure, and the corresponding best estimate of {c, θ_0}. The algorithm suggests a minimal role of the angles Ψ_1, Ψ_2, Ψ_3, as can be seen through their relatively low weights 50 (Fig. 2 (c)). The spectrum of eigenvalues for dynamics projected on the trial (magenta) and optimized (blue) CVs, along with the respective spectral gaps, is provided in Fig. 2 (d). Fig. 3 (a-b) shows the metadynamics trajectories for the three dihedral angles Φ_1, Φ_2, Φ_3 with the trial and the optimized CVs respectively. A very pronounced improvement in the quality of sampling can be seen. Fig. 4 (a-c) shows the rate of convergence of the error of the estimated free energy 29 with respect to reference values from other approaches 50, through metadynamics runs performed with each of the trial and optimized CVs respectively. The error metric is the same as in 50,52, and is calculated for all points within 25 kJ of the global minimum in the respective 1-d free energy. The behavior is robust with respect to the choice of this threshold value. As can be seen, the optimized CV, even though it was obtained on the basis of a very poorly converged and short (20 ns) metadynamics run, leads to several orders of magnitude improvement in the rate at which the free energies converge. Interestingly, iterating the algorithm with the improved 1-d CV did not lead to much improvement in the sampling, reflecting that the optimized coefficients {c} are close to the best that can be achieved with a 1-d CV for this problem.
IV. CONCLUSIONS
To conclude, we have introduced a new approach named SGOOP (spectral gap optimization of order parameters) for improving the choice of low-dimensional CVs for biasing in enhanced sampling in complex systems. This is accomplished through the use of maximum Caliber based spectral gap estimates. The algorithm is iterative in spirit, and attempts to learn how to improve CVs based on available stationary and dynamic data. We also provide several proof-of-concept practical examples to establish the potential usefulness of the method. For model 2-d potentials the algorithm was shown to yield the minimum energy pathway. For a small peptide, we found very significant improvement in determining the best 1-d collective variable from six possible functions with no ad hoc or intuition based tuning. Future work will use this algorithm to treat a range of problems, especially involving protein-ligand unbinding. For instance, the displacement of water molecules and protein flexibility are often slowly varying order parameters in unbinding 34,49,54,55, but do we really need to bias one or both of these for the purpose of sampling? Another issue to be considered in future work is whether we can use these optimized CVs to obtain reliable dynamical information from metadynamics 23,33, including the very important off-rate for ligand unbinding 49,56.
One central limitation of this algorithm is having to specify possibly a large number of order parameters that may be important. But for many physical problems one does have a sense of which order parameters could be at work, and this is where we expect this algorithm to be of tremendous use. Another obvious limitation is with systems devoid of a time scale separation 57 -for example, in glassy systems where there is an effectively continuous spectrum of eigenvalues with no discernible time scale separation. However, the dynamics of many complex and real-world molecular systems does thankfully show a time scale separation between few relevant slow modes and remaining fast ones 58 , and we expect our algorithm to be of help in unraveling the thermodynamics and dynamics in such systems.
WISE/NEOWISE Multi-Epoch Imaging of the Potentially Geminid-related Asteroids: (3200) Phaethon, 2005 UD and 1999 YC
We present space-based thermal infrared observations of the presumably Geminid-associated asteroids (3200) Phaethon, 2005 UD and 1999 YC using WISE/NEOWISE. The images were taken at the four wavelength bands 3.4$\mu$m (W1), 4.6$\mu$m (W2), 12$\mu$m (W3), and 22$\mu$m (W4). We find no evidence of lasting mass-loss from the asteroids over the decadal multi-epoch datasets. We set an upper limit to the mass-loss rate in dust of Q<2 kg s$^{-1}$ for Phaethon and <0.1 kg s$^{-1}$ for both 2005 UD and 1999 YC, respectively, with little dependency over the observed heliocentric distances of R=1.0$-$2.3 au. For Phaethon, even if the maximum mass-loss was sustained over the 1000(s) yr dynamical age of the Geminid stream, it is more than two orders of magnitude too small to supply the reported stream mass (1e13$-$1e14 kg). The Phaethon-associated dust trail (Geminid stream) is not detected at R=2.3 au, corresponding to an upper limit on the optical depth of $\tau$<7e-9. Additionally, no co-moving asteroids with radii r<650 m were found. The DESTINY+ dust analyzer would be capable of detecting several of the 10$\mu$m-sized interplanetary dust particles when at far distances (>50,000 km) from Phaethon. From 2005 UD, if the mass-loss rate lasted over the 10,000 yr dynamical age of the Daytime Sextantid meteoroid stream, the mass of the stream would be ~1e10 kg. The 1999 YC images showed neither the related dust trail ($\tau$<2e-8) nor co-moving objects with radii r<170 m at R=1.6 au. Estimated physical parameters from these limits do not explain the production mechanism of the Geminid meteoroid stream. Lastly, to explore the origin of the Geminids, we discuss the implications of our data in relation to the possibly sodium (Na)-driven perihelion activity of Phaethon.
INTRODUCTION
The near-Earth asteroid (NEA) (3200) Phaethon (hereafter just Phaethon) appears to be dynamically associated with the Geminid meteoroid stream 1 (Whipple 1983; Gustafson 1989; Williams & Wu 1993). The semimajor axis a=1.271 au, eccentricity e=0.890, and inclination i=22°.3 (NASA JPL HORIZONS 2) correspond to a Tisserand parameter T_J = 4.5, which is distinctly above those of cometary orbits (T_J ≲ 3.08) and is classified as a typical asteroidal orbit (Jewitt & Hsieh 2022). The Geminid meteoroid stream consists of near millimeter-scale or larger solid particles (by radar, Blaauw 2017), up to 10s of centimeters, as measured by lunar impact flashes (Yanagisawa et al. 2008, 2021; Szalay et al. 2018; Madiedo et al. 2019). The remarkable orbital feature is the small perihelion distance, q = 0.14 au, where Phaethon is repeatedly exposed to intense thermal processing at a peak sub-solar temperature ≳ 1000 K.
Spectroscopic measurements of the sodium (Na) content of the Geminid meteors have been utilized for studying the thermal processes on Phaethon. The obtained spectra are at optical absolute magnitudes of +4 or brighter, corresponding to meteoroid sizes ≳ 1 mm (e.g. Lindblad 1987). Sodium is a relatively volatile metal element in meteoroids in most meteor showers, and is commonly detected as the neutral Na-D emission lines (Na i-doublet) at the wavelength λ∼5890 Å (Ceplecha et al. 1998). A notable feature of the Geminid meteors, however, is the extreme variety in their Na content, from depletion of the Na abundance (∼7% of the solar value) (Kasuga et al. 2005) to near solar-like values (e.g. Harvey 1973; Borovička 2001; Trigo-Rodríguez et al. 2003). Similar trends are reported from the measured intensity ratios of neutral metallic atom emission lines, as well as undetected Na i (Borovička 2001; Borovička et al. 2005, 2019). Kasuga et al. (2006) compiled Na in the Geminids over the preceding decade to investigate perihelion-dependent thermal effects on meteoroid streams. The effect is predicted to alter the metal abundances from their intrinsic values in their parents, especially for temperature-sensitive elements such as Na in alkaline silicates. However, as a result, the thermal desorption of Na is unlikely to occur for the Geminid stream even at q = 0.14 au. This is because the corresponding meteoroid temperature (characterized for a large, compact, blackbody-like particle) does not reach the sublimation temperature of alkaline silicates (Kasuga et al. 2006) (see also Springmann et al. 2019). The mm-scale or larger Geminid meteoroids take timescales of 10^4-10^5 yr to lose Na (Čapek & Borovička 2009), 1-2 orders of magnitude longer than the stream age τ_s ∼ 1000(s) yr (Williams & Wu 1993; Ryabova 1999) (reviewed in Vaubaillon et al. 2019). Therefore the Na loss observed in the Geminid meteors must have originated from the thermal processing of the parent, Phaethon (see a review, Kasuga & Jewitt 2019).
The parent body Phaethon has a diameter D e ≈ 5-6 km (Tedesco et al. 2004;Hanuš et al. 2016;Taylor et al. 2019;Dunham et al. 2019;Masiero et al. 2019;Devogèle et al. 2020) 3 and an optically blue (B-type) reflection spectrum (e.g. Bus & Binzel 2002). While most images appear as a point source without any coma (Chamberlin et al. 1996;Hsieh & Jewitt 2005;Wiegert et al. 2008), recurrent dust-releasing activity has been optically observed only at the perihelion passage (Jewitt & Li 2010;Jewitt 2012;Jewitt et al. 2013;Hui & Li 2017). This is interpreted as a thermal-induced activity. The proposed mechanisms are thermal breakdown (Jewitt & Li 2010;Jewitt 2012), mass-shedding by thermal deformation (Nakano & Hirabayashi 2020), gas-dragged (or -gradient) force by sublimation of sodium (Masiero et al. 2021), and electrostatic repulsive forces (Jewitt 2012;Jewitt et al. 2015) which might be caused by accumulated sodium ions (Kimura et al. 2022). Constraints on mechanisms are provided both by the size of ejected particles and the mass-loss rates. The released particles are of micron size (measured from the short tail), too small compared to the millimeter∼centimeter-sized Geminids. Such small particles would not be retained in the stream due to radiation pressure (Reviewed in Jewitt et al. 2015;Jewitt & Hsieh 2022). Larger particles (sizes > 10µm) must be in the Geminid stream (trail) as detected by the DIRBE/COBE thermal infrared survey from space (Arendt 2014). Furthermore, even if the measured perihelion mass-loss rates ∼3 kg s −1 (Jewitt & Li 2010) sustained in steady-state over the entire orbit, it is orders of magnitude too small to supply the Geminid stream mass M s ≈ 10 13 kg -10 14 kg (Hughes & McBride 1989;Blaauw 2017) (cf. 10 15 kg if cometary origin, Ryabova 2017) within its τ s ∼ 1000(s) years of dynamical age. The observed particle size and mass-loss rate from Phaethon are not consistent with the Geminid stream, leading to the open question that how the stream was produced.
A non-negligible possibility is that the Geminids are the products of a catastrophic event (e.g. collision, disruption). Phaethon has a hypothetical breakup event which might have occurred > 10^4 yr ago. This would have produced the fragmental kilometer-sized NEAs (155140) 2005 UD and (225416) 1999 YC (hereafter 2005 UD and 1999 YC), together named the "Phaethon-Geminid Complex (PGC)" (Ohtsuka et al. 2006, 2008). The optical color of 2005 UD is blue (B-type), consistent with Phaethon (Jewitt & Hsieh 2006), but the dissimilarity in the concave-shaped near-infrared spectrum makes a direct association debatable (Kareta et al. 2021; MacLennan et al. 2022). The color of 1999 YC in the visual band distinctly exhibits a neutral (C-type) slope (Kasuga & Jewitt 2008). The PGC-asteroids show some physical similarity, but their dynamical association is unclear (see a review, Kasuga & Jewitt 2019). Relevantly, Phaethon is proposed to have a precursor origin in the main belt. The dynamical lifetime of 26 Myr can be linked back to (2) Pallas (de León et al. 2010) with a ∼ 2% chance (Todorović 2018) or to the inner main-belt asteroids (e.g. (329) Svea or (142) Polana) with a ∼ 18% chance. The links are of very low probability and remain uncertain, but they imply that Phaethon itself could be a disrupted fragment from one of these sources. To summarize, the Geminid stream and Phaethon are both likely to have formed stochastically rather than in steady state; however, the nature of the events is not yet clarified.
In this paper we present a space-based thermal infrared study of the Geminid-related NEAs Phaethon, 2005 UD and 1999 YC using the NASA spacecraft Wide-field Infrared Survey Explorer (WISE; Wright et al. 2010) and the successive project Near-Earth Object WISE (NEOWISE; Mainzer et al. 2011, 2014). Long wavelength observations are aimed at larger particles potentially containing much more mass. The multiple observing epochs contain data from four infrared wavelength bands: 3.4 µm (W 1), 4.6 µm (W 2), 12 µm (W 3), and 22 µm (W 4), providing opportunities to constrain the dust environments around Phaethon, 2005 UD and 1999 YC over the last decade. Each band is suited to investigating different physical characteristics of the PGC-asteroids, such as the dust production rate (W 1), the gas production rate (W 2), and dust trails and potentially co-moving objects (W 3 and W 4). Another motivation is to find an activity mechanism far from perihelion, which would be different from thermally related production. This study provides groundwork for JAXA's DESTINY+ mission, which plans a flyby at a distance of 500 km from Phaethon in 2028, with a potential extended mission to fly by 2005 UD (Ozaki et al. 2022).
WISE/NEOWISE DATA
The near- and mid-infrared data were taken by WISE (Wright et al. 2010) and NEOWISE (Mainzer et al. 2011, 2014). WISE is equipped with a 40 cm diameter telescope and was launched on 2009 December 14 into a Sun-synchronous polar orbit. The full sky survey was conducted using four infrared wavelength bands centered at 3.4 µm (W 1), 4.6 µm (W 2), 12 µm (W 3), and 22 µm (W 4). It continued until 2010 September 29, when the cryogen was exhausted. WISE then continued surveying in a post-cryogenic two-band survey mode until 2011 February 1, when it was put into hibernation. On 2013 December 13, NEOWISE resumed the two-band survey (W 1 and W 2), which is currently ongoing as a reactivation mission (NEOWISE-R).
The spacecraft employs two-types of focal plane array detectors, the HgCdTe (Teledyne Imaging Sensors) in the shorter wavelength bands (W 1 and W 2) and the Si:As BIB (DRS Sensors & Targeting Systems) in the longer wavelength bands (W 3 and W 4) (Wright et al. 2010). Both detectors have 1024 × 1024 pixels, each 2 ′′ .76 pix −1 in the W 1, W 2 and W 3 bands and 5 ′′ .5 pix −1 (2×2 binned) in the W 4 band. The corresponding field of view (FOV) is 47 ′ ×47 ′ . With the dichroic beam splitters, consecutive frames were simultaneously imaged through the same FOV in each band every 11 second. The exposure time was 7.7 seconds in the W1 and W2 bands and 8.8 seconds in the W 3 and W 4 bands, respectively (Wright et al. 2010). The data of WISE (W 1-W 4) and NEOWISE (W 1 and W 2) are available at the NASA/IPAC Infrared Science Archive (IRSA 4 ). The instrumental, photometric, and astrometric calibrations are detailed in Cutri et al. (2012Cutri et al. ( , 2015. We used the IRSA search tools (GATOR 5 and WISE Image Service 6 ) to retrieve all detections of the three PGC asteroids: Phaethon, 2005 UD and1999 YC. In the GATOR, the queried sources are WISE All-Sky Single Exposure (L1b) (WISE Team 2020a), WISE 3-Band Cryo Single Exposure (L1b) (WISE Team 2020b), WISE Post-Cryo Single Exposure (L1b) (WISE Team 2020c), and NEOWISE-R Single Exposure (L1b) (NEOWISE Team 2020). The detections were double-checked with the WISE Image Service, and compared with the list from the Minor Planet Center (MPC) database 7 . Here, we applied the screening process to the data described in Masiero et al. (2011);Hung et al. (2022). We based our selection criteria on high Signal-to-Noise Ratio (SNR) (> 5) in at least one band, moon angular separation > 30 degrees, and angular distance from the nominal boundaries of the South Atlantic Anomaly (SAA) 10 degrees. We selected observations having flags of cc_flags = 0 which secures no contamination produced by known artifacts, e.g. latent images or diffraction spikes . We used the profile-fit photometric quality flag of ph_qual = A, B, C or U which corresponds to SNR ≥ 10, 3 < SNR < 10, 2 < SNR < 3 and SNR ≤ 2 (= 2σ upper limit on magnitude, see also § 3.1), respectively. The highest frame quality score of qual_frame=10 is used.
The WISE /NEOWISE data reduction pipeline conducts initial nonlinearity and saturation correction for data brighter than the threshold of W 1 = 8.1 mag, W 2 = 6.7 mag, W 3 = 3.8 mag, and W 4 = −0.4 mag, however further analysis shows that additional corrections are needed (Cutri et al. 2012(Cutri et al. , 2015 8 . For Phaethon taken on MJD 58104.2148 (UT 2017-Dec-17), the measured magnitudes listed in the catalog, W 1=7.745 ± 0.014 mag and W 2=2.894 ± 0.003 mag, are in the saturated regime. We corrected their saturation biases using the relations derived for those of specific catalog magnitudes 9 (cf. Masiero et al. 2019), W 1(out) = W 1(in) + (0.037 ± 0.039) mag, W 2(out) = W 2(in) + (1.330 ± 0.221) mag. (1) We substitute W 1(in)=7.745±0.014 mag and W 2(in)=2.894±0.003 mag into Equation (1), and we obtain the corrected magnitude W 1(out)=7.782±0.041 mag and W 2(out)=4.224±0.221 mag, respectively. We visually checked the image data to remove star contamination that might appear in the shorter bands (W 1 and W 2) even if they do not appear in the longer wavelength bands (W 3 and W 4). This is because the stellar thermal signatures ( 2000 K) have their blackbody peaks in the shorter wavelengths and have flat decreasing spectra, representing W 1 SNR > W 2 SNR. Likewise, as for the two-band detections of NEOWISE, we removed the contamination by requiring W 1 − W 2 > 1 mag. The principle is more detailed in Masiero et al. (2018Masiero et al. ( , 2020. We actually find this issue in 1999 YC taken by WISE on MJD 55213.7089 (UT 2010-Jan-17), requiring us to drop its shorter bands. Finally we again visually conducted spot check on all of the selected data to ensure no corruptions in the images. The image corrupted by artifacts and noises are avoided. We found five observation epochs for Phaethon, three for 2005 UD and two for 1999 YC. The observation log is shown in Table 1. The orbital information is summarized in Table 2.
Next, we inspected the full-width half maximum (FWHM) of asteroid images to find if the surface brightness profiles can be used to set their activity limits . We here show an example of Phaethon's point-spread function (PSF) taken in the W 3 band on MJD 55203.2564 (UT 2010-Jan-07). The FWHM of Phaethon is θ F =7 ′′ .8, beyond the diffraction limit of the WISE instrument θ d =6 ′′ .4 (=1.03 λ/D in radians, where λ=12µm is the wavelength band center and D=40 cm is the telescope diameter). For comparison, the FWHM of nearby field stars is ∼6 ′′ .6 (medium value of seven field stars). The angular diameter of Phaethon (a diameter D e ∼ 5 km at ∆ ∼ 2.08 au) subtends θ Ph =0 ′′ .0031, which is negligibly small. The derived composite width of Gaussian is 6 ′′ .4 (= (θ 2 d + θ 2 Ph ) 0.5 ), which is not sufficient to resolve the Phaethon image. The same circumstance is found for the other data of interest. Therefore, the surface brightness profiles are not expected to give useful limits to their activity levels.
In this study, we use the effective spherical diameters (D_e) and geometric visible albedos (p_v) determined from WISE/NEOWISE (Table 3). Many observing epochs cover multiple viewing geometries, producing refined data sets in terms of uniformity and calibration. For Phaethon, for example, five epochs of data yield D_e = 4.6^{+0.2}_{-0.3} km from Thermophysical Model (TPM) fits. The accuracy is validated by comparing the estimated sizes of 23 main-belt asteroids (whose surface temperature variation is stable) from NEOWISE with those from occultations/spacecraft as the ground-truth sizes. The size is consistent within 2σ of the other TPM result, D_e = 5.1±0.2 km (Hanuš et al. 2016, 2018), while it is not consistent with D_e ∼5.5 km showing a muffin-top shape (equatorial bulge of 6.3 km) as observed by Arecibo radar during the apparition in 2017 December (Taylor et al. 2019). The shape model or observed orbital positions (pre- or post-q) may require some offsets in the size estimations and thermophysical behavior (MacLennan & Emery 2021; MacLennan et al. 2022).
Here, we describe the justification for applying the WISE /NEOWISE-derived size with its spherical assumption. The thermal survey and radar data have a huge difference in the resolution, for example 140 km pix −1 (NEOWISE) vs. 0.075 km pix −1 (Arecibo) at the Phaethon's closest distance (∆=0.069 au). The infrared data do not resolve a 5-6 km asteroidal shape, indicating that the assumption of spherical body is reasonable for the usage of WISE /NEOWISE alone. The post-q data tend to estimate larger sizes than those of pre-q data due to significant bias suffered from extreme heating at its perihelion passage (MacLennan et al. 2022). In such a case, the pre-q data use a stable variation of surface temperature presumed to be plausible for obtaining the statistically bestfitted size D e =4.8±0.2 km ( Table 4 in MacLennan et al. 2022). This is consistent within 1σ of our WISE /NEOWISE-derived size. On the other hand, the km-scale asteroids with a few epochs of data find that the TPM-fitted size is consistent within 0.1 km of the other model ). Thus we apply the unified uncertainty in Phaethon size, i.e. D e =4.6±0.3 km for simplicity of use. The photometry and calibration methodology are given by the IRSA website 10 (Cutri et al. 2012(Cutri et al. , 2015. Magnitudes are measured based on the median FWHM of the PSF of the asteroid at the each band. The aperture radius is set as 1.25 × FWHM, where the FWHM adopts 6 ′′ , 6 ′′ , 6 ′′ , and 12 ′′ in bands W1 through W4, respectively 11 . When the Profile-fitting Photometry (WPRO) or Aperture Photometry System (WAPP) measure the flux with the SNR ≤ 2, magnitude uncertainty is expressed as "NULL" and the calibrated magnitude is interpreted as the 2σ upper limit 12 . Those results are listed without uncertainties in Table 1.
The measured magnitudes are converted into the flux density (Jy) using the published zero points and the color corrections (Wright et al. 2010). The derived flux density can be converted to SI units (W m −2 µm −1 ) using the relative system response curves (RSRs) and the zero magnitude attributes (Table 1, Jarrett et al. 2011). The measured instrumental source brightness (in digital numbers) is calibrated by referring an instrumental zero point magnitude 13 (Cutri et al. 2012(Cutri et al. , 2015. Hereafter we apply the procedure as appropriate.
Reflected Sunlight Removal
The asteroid signals from WISE/NEOWISE comprise a blend of reflected sunlight and thermal emission. The reflected component is significant in the shorter bands (W 1 and W 2) for km-sized NEAs (Nugent et al. 2015). Here we remove the reflected sunlight to extract the thermal emission alone.
The flux density of reflected sunlight from an asteroid, F_ν^Ref (Jy), is calculated correcting the Sun-observer-object distance for each frame, using the equation

F_ν^Ref = π B_ν(T_⊙) (R_⊙ / R_h)^2 × p_v Φ_vis(α) × (D_e / 2)^2 / ∆^2,

where B_ν is the Planck function (Jy sr^−1) at the Solar temperature T_⊙ = 5778 K, R_⊙ = 6.957 ×10^10 cm is the Solar radius, R_h is the heliocentric distance (cm), and A = q·p_v is the Bond albedo (the scattered fraction of the incident solar flux), where q is the phase integral (a measure of the angular dependence of the scattered emission) and p_v is the visible geometric albedo. Based on the shape of the phase curve in the H-G model of Bowell et al. (1989), we adopt q=0.3926 from q=0.29+0.684G, where G=0.15 is the phase slope parameter. D_e is the asteroid diameter (cm), Φ_vis(α) is the phase function (Equation (A4) of Bowell et al. 1989) in which α is the phase angle (in degrees), and ∆ is the WISE-centric distance (cm). Using the measured values of D_e and p_v from WISE/NEOWISE (Table 3), we obtained F_ν^Ref to remove from the measured flux. The extracted thermal flux density is summarized in Table 4.
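As an illustration of this correction, the sketch below combines the Planck function with the two-parameter H-G phase-function approximation of Bowell et al. (1989); the function names, the way the pieces are combined, and the example numbers are ours and only indicate the structure of the calculation, not a transcription of the exact expression used.

import numpy as np

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23   # SI constants
AU_CM = 1.495978707e13                                   # 1 au in cm

def planck_jy_sr(lam_m, T):
    # Planck function B_nu in Jy sr^-1 at wavelength lam_m (m) and temperature T (K)
    nu = C / lam_m
    return 1e26 * 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def phase_function(alpha_deg, G=0.15):
    # Two-parameter H-G phase function approximation (Bowell et al. 1989)
    t = np.tan(np.radians(alpha_deg) / 2.0)
    return (1.0 - G) * np.exp(-3.33 * t**0.63) + G * np.exp(-1.87 * t**1.22)

def reflected_flux_jy(lam_m, p_v, D_e_cm, R_h_au, delta_au, alpha_deg):
    # Solar flux density at the asteroid, scattered by a sphere of diameter D_e
    solar = np.pi * planck_jy_sr(lam_m, 5778.0) * (6.957e10 / (R_h_au * AU_CM))**2
    return solar * p_v * phase_function(alpha_deg) * (D_e_cm / 2.0)**2 / (delta_au * AU_CM)**2

# e.g. Phaethon-like values at W1: D_e = 4.6 km, p_v ~ 0.1, R_h = 2.3 au, Delta = 2.1 au, alpha = 25 deg
print(reflected_flux_jy(3.4e-6, 0.1, 4.6e5, 2.3, 2.1, 25.0))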
Temperature
In order to estimate an effective temperature, T eff (K), we use the thermal flux densities at the W 3 (12 µm) and W 4 (22 µm) from WISE. The shorter wavelength bands (W 1 and W 2) are avoided to prevent any contamination from remaining blended sources. The ratio of flux density, W 3/W 4, corresponds to the ratio calculated from a blackbody having T eff . As a reference, an equilibrium blackbody temperature (T bb ) of a sphere, isothermal object at the same heliocentric distance (R h ) is given. The results are shown in Table 5.
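Numerically, this amounts to inverting the 12/22 µm Planck ratio for temperature, e.g. as in the sketch below; the example flux ratio is illustrative, and T_bb assumes zero albedo and unit emissivity.

import numpy as np

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def b_nu(lam_m, T):
    # Planck function up to constant factors, which cancel in the ratio
    nu = C / lam_m
    return nu**3 / np.expm1(H * nu / (KB * T))

def t_eff_from_ratio(w3_over_w4, lam3=12e-6, lam4=22e-6, T_lo=50.0, T_hi=2000.0):
    # Bisection: the 12/22 micron flux ratio increases monotonically with temperature
    for _ in range(60):
        T = 0.5 * (T_lo + T_hi)
        if b_nu(lam3, T) / b_nu(lam4, T) < w3_over_w4:
            T_lo = T
        else:
            T_hi = T
    return 0.5 * (T_lo + T_hi)

def t_bb(R_h_au):
    return 278.0 / np.sqrt(R_h_au)     # isothermal blackbody temperature (K)

# e.g. a W3/W4 flux-density ratio of ~0.3 at R_h = 2.3 au gives T_eff close to T_bb
print(t_eff_from_ratio(0.3), t_bb(2.3))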
The derived effective temperature is close to the blackbody temperature, T_eff ≈ 0.98×T_bb. This suggests that Phaethon and 1999 YC can be approximately treated as spherical, blackbody-like objects in the WISE/NEOWISE data. This interpretation differs from the previous highly resolved thermal imaging of Phaethon. During its close Earth approach in 2017, the VLT obtained an excess effective temperature (T_eff ≈ 1.14×T_bb), implying that Phaethon (D_e ∼5 km) has non-spherical, non-blackbody features in detail. The difference in T_eff could be caused by the difference in the observed phase angle. While WISE observed Phaethon at α=25° (Table 1), the VLT was at α=66°. The latter would see both the hot dayside and the cold nightside. For NEAs, the night-side flux could be important at high phase angle (Harris 1998; Mommert et al. 2018), although this interpretation comes from classical thermal models with uncertain parameters (e.g. a beaming parameter, η; see § 3.4). Regarding this, 2005 UD (MJD 58383: 2018-Sep-22) and 1999 YC (MJD 57739: 2016-Dec-16), taken at α > 75° by NEOWISE (Table 1), would also include their night-side fluxes in the measurement. However, our method is independent of those models, and the WISE/NEOWISE resolution is limited. Thus we assume 2005 UD and the other NEOWISE data also have blackbody-like temperatures (Table 5).
Thermal Flux Density
The thermal flux density from an asteroid, F_ν (Jy), is calculated using Equation (1) of Mainzer et al. (2011), where ε = 0.9 is the infrared emissivity, a typical value for silicate powders from laboratory measurements (Hovis & Callahan 1966; Lebofsky et al. 1986), and B_ν(T(θ, φ)) is the Planck function (Jy sr^−1) for the surface temperature T(θ, φ) (K), in which θ (in degrees) is the angle from the sub-observer point to a point on the asteroid such that θ is equal to the phase angle α (in degrees) at the sub-solar point, and φ (in degrees) is an angle from the sub-observer point such that φ = 0 at the sub-solar point. We again adopt the same values of D_e (cm) and ∆ (cm) as in Section 3.2 (Tables 1 and 3). The surface temperature, T(θ, φ), is calculated using Equation (2) of Mainzer et al. (2011), where T_ss is the sub-solar temperature (K) derived from T_ss = χ^{1/4}·T_eff. The nondimensional parameter χ (1 ≤ χ ≤ 4) represents the ratio of the absorbing surface area to the surface area that emits the absorbed heat energy (Jewitt & Kalas 1998). For instance, χ=1 for a subsolar patch on a non-rotating asteroid and χ=4 for a spherical, isothermal asteroid in which the Sun's heat is absorbed on πr^2 and radiated from 4πr^2 (Li & Jewitt 2015). We adopt χ = 2, corresponding to thermal radiation from the observable hemisphere of the asteroid. The measured and modeled flux densities of Phaethon, 2005 UD and 1999 YC are shown in Figures 1, 2, and 3, respectively. We add a short note on the assumption of zero night-side emission. The classical thermal models generally use the nonphysical beaming parameter (η) for the sub-solar temperature (T_ss) to adjust the angular temperature distribution (anisotropy) of the thermal emission, as seen in the Standard Thermal Model (STM), the Fast Rotating Model (FRM) (reviewed in Lebofsky & Spencer 1989), and the near-Earth asteroid thermal model (NEATM) (Harris 1998). The common issue in those models is the strong dependence on η, which is not an intrinsic property of the temperature distribution over the observable surface of the asteroid. It also varies with the observing geometry (α, R_h and ∆), as well as with the physical properties (e.g. thermal inertia) of the asteroid (see §3.4, Wright et al. 2018) (see also Masiero et al. 2019). Thus we avoid the ambiguity of η.
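A NEATM-like numerical sketch of this hemispheric model is given below; it assumes T(θ, φ) = T_ss (cos θ cos φ)^{1/4} on the sunlit side with zero night-side emission and T_ss = χ^{1/4} T_eff, and is intended only to illustrate the calculation, not to reproduce the exact integration scheme used here.

import numpy as np

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23
AU_M = 1.495978707e11

def planck_jy_sr(lam_m, T):
    if T <= 0.0:
        return 0.0
    x = H * (C / lam_m) / (KB * T)
    if x > 700.0:                      # avoid overflow for very cold surface elements
        return 0.0
    return 1e26 * 2.0 * H * (C / lam_m)**3 / C**2 / np.expm1(x)

def thermal_flux_jy(lam_m, T_eff, D_e_m, delta_au, alpha_deg, chi=2.0, eps=0.9, n=181):
    T_ss = chi**0.25 * T_eff                         # sub-solar temperature
    alpha = np.radians(alpha_deg)
    theta = np.linspace(-np.pi / 2, np.pi / 2, n)    # longitude from the sub-solar meridian
    phi = np.linspace(-np.pi / 2, np.pi / 2, n)      # latitude
    dth, dph = theta[1] - theta[0], phi[1] - phi[0]
    total = 0.0
    for th in theta:                                 # sunlit longitudes only
        vis = np.cos(th - alpha)                     # projection toward the observer
        if vis <= 0.0:
            continue
        for ph in phi:
            T = T_ss * max(np.cos(th) * np.cos(ph), 0.0)**0.25
            total += planck_jy_sr(lam_m, T) * np.cos(ph)**2 * vis * dth * dph
    return eps * (D_e_m / 2.0)**2 / (delta_au * AU_M)**2 * total

# e.g. W3 flux of a Phaethon-like body: T_eff = 180 K, D_e = 4.6 km, Delta = 2.1 au, alpha = 25 deg
print(thermal_flux_jy(12e-6, 180.0, 4600.0, 2.1, 25.0))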
RESULTS
Our images of Phaethon, 2005 UD and 1999 YC each look like point sources and show no apparent evidence of coma activity. When both the reflected sunlight and the thermal emission from the asteroids are removed (§ 3.2 and 3.4), residuals remain in some data. For active comets, the excess signals in the W 1 and W 2 bands of WISE/NEOWISE can be used to estimate the dust and gas (CO and CO_2) production rates, respectively (Bauer et al. 2008, 2011, 2015, 2021; Mainzer et al. 2014). The longer wavelength bands (W 3 and W 4) are capable of examining the dust trails and potentially detecting co-moving objects (cf. Reach et al. 2000, 2007; Jewitt et al. 2019). Here, we assess the significance of the excess signals and attempt to set empirical upper limits to the production rates, dust trails and co-moving objects.
Dust Production Rate: W 1
We look into the dust production rate with the W 1 (3.4 µm) data following the methodology in Bauer et al. (2008, 2011, 2021). The reflected dust signal is typically considered to dominate in the band for all but the nearest NEOs. Even when comets sublimate their abundant icy volatiles, dust comprises ≳70% of the total signal (Bockelée-Morvan et al. 1995; Reach et al. 2013) (summarized in Bauer et al. 2015). Assuming the excess flux density is produced by coma brightness, the mass of dust particles scattering in the coma, M_d (kg), is given by (Equations (3) and (5) of Bauer et al. 2008)

M_d = (4/3) a_d ρ_d × π (D_e/2)^2 × (F_total − F_calc) / F_calc,

where a_d = 1.7 µm (1/2 of the W 1 wavelength, Bauer et al. 2011) is the radius of the dust particle contributing to coma brightness, ρ_d = 2000 kg m^−3 is the assumed common bulk density of dust and asteroid (cf. Phaethon, Hanuš et al. 2018), D_e (m) is the diameter of the asteroid, F_total (Jy) is the measured total flux density from the asteroid, and F_calc (Jy) is the calculated flux density from the asteroid (= reflected sunlight + thermal emission). The dust production rate, Q_dust (kg s^−1), is estimated using M_d (kg), the velocity of dust, v_d (km s^−1), and the projected size of the dust coma, R_d (km) (≈ ρ in the Afρ method, A'Hearn et al. 1984). The traveling time for the produced dust from the asteroid is t_d (second) = R_d/v_d, and using Equation (6) of Bauer et al. (2008), we obtain the relation

Q_dust = M_d / t_d = M_d v_d / R_d,

where R_d (km) is the size of the subtended area using an aperture radius of 7″.5 at the WISE-centric distance ∆ (au), and v_d is adopted as the escape velocity from the PGC asteroids, i.e. v_d ∼ 3 m s^−1 for Phaethon and v_d ∼ 1 m s^−1 for 2005 UD and 1999 YC, respectively. The obtained mass-loss rates are shown in Table 6. We find no strong variation in the production rate at different epochs, and no dependency on the location of the object in its orbit at the time of observation. The upper limit to the mass-loss rate for Phaethon is Q_dust ≲ 2 kg s^−1, and those of 2005 UD and 1999 YC are both Q_dust ≲ 0.1 kg s^−1, respectively. The WISE/NEOWISE-derived rate limits are 1-2 orders of magnitude larger than the optically measured limits at the R_c-band of 0.001-0.01 kg s^−1 (Hsieh & Jewitt 2005; Jewitt & Hsieh 2006; Kasuga & Jewitt 2008). Meanwhile, our Q_dust is ∼10 times lower than the limit of ≲14 kg s^−1 measured at the mid-infrared band (λ=10.7 µm) from Phaethon.
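The arithmetic behind these estimates can be sketched as follows; the flux excess in the example is a placeholder, not a measured value.

import numpy as np

def dust_mass_kg(F_total, F_calc, D_e_m, a_d=1.7e-6, rho_d=2000.0):
    # Total dust cross-section scaled from the asteroid cross-section by the flux excess
    C_ast = np.pi * (D_e_m / 2.0)**2
    C_dust = C_ast * (F_total - F_calc) / F_calc
    return (4.0 / 3.0) * a_d * rho_d * C_dust       # optically thin grains of radius a_d

def q_dust_kg_s(M_d, R_d_km, v_d_m_s):
    # Q = M_d / t_d, with t_d = R_d / v_d the crossing time of the photometric aperture
    return M_d / (R_d_km * 1e3 / v_d_m_s)

# 7.5 arcsec aperture at Delta = 1.9 au -> R_d in km; v_d ~ 3 m/s (Phaethon escape speed)
R_d = 7.5 * 725.27 * 1.9
print(q_dust_kg_s(dust_mass_kg(1.05e-3, 1.00e-3, 4600.0), R_d, 3.0))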
The estimated limiting mass-loss rates of Phaethon can be compared with the total mass of the Geminid meteoroid stream. The Geminid stream mass, M_s ≈ 10^13 kg - 10^14 kg (Blaauw 2017), is thought to have been produced from Phaethon over the last τ_s ∼ 1000(s) years (e.g. Williams & Wu 1993). The steady-state mass-loss rate needed, dM_s/dt ∼ M_s/τ_s ∼ 320-3200 kg s^−1, is comparable to those of active Jupiter family comets (JFC). This requires at least 10-100 times larger rates than the observed rates in the near- and mid-infrared bands (3.4 µm and 10.7 µm). The leading conclusion is that the production of the Geminids occurred episodically, such as in a catastrophic breakup of a precursor body, not in steady disintegration at the observed rates (cf. Jewitt et al. 2019). Likewise, to estimate the mass-loss rate (Q_dust) from 2005 UD, we estimate the mass of the Daytime Sextantid meteoroid stream. The stream age is calculated to be >10^4 yr, comparable to the last lowest perihelion passage (≈ 2×10^4 yr ago). Assuming that the meteoroids have been released from 2005 UD over the age of 10,000 yr, the maximum mass-loss rate Q_dust ≲ 0.1 kg s^−1 gives a total mass of the Daytime Sextantid stream of ∼ 10^10 kg. Note that if 2005 UD shared a precursor body with Phaethon, the resulting breakup event could provide more mass in the stream. To investigate the possibility of a breakup-induced stream formation, longer wavelength observations of the parental asteroids to detect larger particles, potentially containing much more mass, are worthwhile to continue (Kasuga & Jewitt 2019). Further observations of the Geminids and the Daytime Sextantids and connection to a stream model will be helpful to understand the stream formation processes.

Gas (CO and CO_2) Production Rate: W 2

Excess flux density in the W 2 (4.6 µm) band of active comets is attributable to gas emission lines, in particular the CO_2 ν_3 vibrational fundamental band (4.26 µm) and the CO v(1−0) rovibrational fundamental band (4.67 µm) (Pittichová et al. 2008; Ootsubo et al. 2012; Bauer et al. 2011, 2015, 2021; Rosser et al. 2018). These volatiles have low sublimation temperatures of 20-100 K, and are presumed to be preserved in frozen ices or trapped as gases in the nuclei (Prialnik et al. 2004; Bouziani & Jewitt 2022).
This gas emission is unlikely to be present in the case of the PGC-asteroids because of their thermophysical and dynamical properties (Jewitt et al. 2018). The surface temperatures at perihelia are T_ss^PGC ≳ 800 K, too hot for ice to survive. The largest plausible thermal diffusivity, κ ∼ 10^−6 m^2 s^−1, is appropriate for rock (a compact dielectric solid). The diurnal thermal skin depth, d_s, is estimated by ∼ √(κ P_rot), where P_rot is the rotational period. Setting κ = 10^−6 m^2 s^−1 and P_rot = 3.6-5.2 hr for the PGC asteroids (Table 2, Kasuga & Jewitt 2019) gives d_s ∼ 0.14 m at most. The perihelion temperature at the thermal skin depth is > 200 K (= T_bb^PGC/e), far above the sublimation temperatures of CO_2 and CO. The heat conduction timescale corresponding to the equatorial radius of Phaethon (r ∼ 3 km) is ∼0.3 Myr (≈ r^2/κ), about two orders of magnitude shorter than its dynamical lifetime of 26 Myr (de León et al. 2010). The core temperatures of the PGC asteroids are thus sufficiently heated, at T_core ≳ 280 K (Equation (4) of Jewitt & Hsieh 2006). Dynamical simulations find that Phaethon on its present orbit could lose all internal ice (e.g. H_2O) over a very short timescale of 5-6 Myr (Yu et al. 2019). Therefore, the PGC-asteroids are highly unlikely to be reservoirs for the icy species (CO_2 and CO), which are not expected to survive on the surfaces or in the interiors. We thus do not discuss this topic further.
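These skin-depth and conduction-timescale estimates follow directly from κ and P_rot, e.g.:

import numpy as np

kappa = 1.0e-6                       # thermal diffusivity of rock (m^2 s^-1)
P_rot = 5.2 * 3600.0                 # rotation period (s); 3.6-5.2 hr for the PGC asteroids
d_s = np.sqrt(kappa * P_rot)         # diurnal thermal skin depth, ~0.14 m at most
r = 3.0e3                            # Phaethon equatorial radius (m)
t_cond_myr = r**2 / kappa / 3.156e13 # heat-conduction timescale in Myr (~0.3 Myr)
print(d_s, t_cond_myr)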
Dust Trail: W 3 and W 4
We search for dust trails that presumably formed from the mass-loss of the PGC-asteroids. Such trails consist of larger, slow-moving dust particles that would be insensitive to solar radiation pressure. These particles would follow the parent bodies' heliocentric orbits and stay close to their orbital planes (reviewed in Jewitt et al. 2015). Previous thermal observations (DIRBE/COBE) managed to detect the surface brightness of Phaethon's dust trail, consisting of larger particles with sizes >10 µm (Arendt 2014). For all of the small bodies observed by WISE/NEOWISE, both the W 3- and W 4-bands are dominated by thermal emission. Since the PGC-asteroids have blackbody-like temperatures (Table 5), Wien's displacement law (λ_max·T_eff = 2898 µm K gives λ_max = 10.7-15.8 µm) indicates that the W 3-band (12 µm) is the most effective wavelength for larger particles. Thus we primarily study the W 3 data of Phaethon and 1999 YC, and the W 4 data are used as a supplementary comparison. Unfortunately WISE did not observe 2005 UD during the cryogenic mission, so we do not have W 3 or W 4 data for this object.
During the survey, the apparent motion of the asteroids with respect to the WISE location had rates up to approximately 28″ hr^−1 in right ascension (RA) and −32″ hr^−1 in declination (DEC) 14. The corresponding drift across the frame is ∼0.1″ during each exposure. These rates are more than an order of magnitude smaller than the pixel scale (2″.76 pix^−1), meaning that smearing within an image is negligibly small. For this analysis, we do not include data in which the asteroid is taken close to the edge of the frame, where the trail location is expected to be out of the FOV. We also avoid images in which the trail's expected direction is contaminated by stars. The detectability of a trail is improved for small out-of-plane angles, δ_⊕, given by the angle between the observer and target orbital planes (Jewitt et al. 2018). The δ_⊕ = 3.8° for Phaethon indicates it should be highly sensitive to a dust trail close to the plane, while δ_⊕ = −23° for 1999 YC is slightly worse.
The W 3 image was corrected by subtracting a bias constructed from a median-combination of applied images (Table 1). The orientation of each image was rotated to bring the direction of the position angle (PA) of north to the top and east to the left, and shifted to align the images using fifth-order polynomial interpolation. The images were then combined into a single summed image. The resulting summed image of Phaethon has a FWHM of 7 ′′ .8, and is shown in Figure (4). Likewise, the summed image of 1999 YC with a FWHM of 7 ′′ .1 is shown in Figure (5). No asteroid-associated dust trails, which are expected to be parallel to the " − V " vector, are apparent in either Figure. In general, highly-resolved observations of asteroidal dust trails have clarified their morphological properties. For example, the trail widths of active main-belt asteroids are commonly very narrow, near the bodies, and represent rather recently ejected debris. The FWHM of trail widths of 300-600 km (< 1 ′′ ) are typically very narrow, as measured from 133P/Elst-Pizarro and 311P/PanSTARRS (P/2013 P5) ( §3.1, Jewitt et al. 2018). Small ejection velocities of ∼ 1-2 m s −1 in the perpendicular direction to the orbital plane, comparable to the gravitational escape speed (V esc ), are measured for most active asteroids .
However, this expectation is unlikely to hold for Phaethon. The width of the Phaethon dust trail (Geminid stream) is expected to be wider than those of asteroidal trails. Optical observations of the Geminid meteor shower estimate the shower duration time. The shape of zenithal hourly rate (ZHR) of the Geminids is asymmetric at the peak time with its minimum FWHM of 1 • in the solar longitude, corresponding to the duration time 24 hr (Uchiyama 2010). Geometrically, the Earth (v ⊕ =30 km s −1 ) cuts through the stream at the crossing angle of 62 • on the projected ecliptic plane (Mikiya Sato, private communication). Thus we find the Geminid stream width has a FWHM ∼ 2.3×10 6 km (= 30 km s −1 × 24 hr × 3600 s hr −1 × sin(62 • )). For 1999 YC, neither its activity nor trail has been observed yet (Kasuga & Jewitt 2008). Assuming that 1999 YC had a trail, and the width FWHM would be at most V esc ·P orb /4, where P orb is the orbital period and 4 is because a particle nearly spends 1/4 orbit rising from the orbital plane to the peak height (David Jewitt, private communication). Substituting V esc ∼1 m s −1 and P orb =1.7 yr, the trail width FWHM would be expected to be about 1.3×10 4 km. Such a trail could be imaged by WISE, if it existed and was bright enough, as the WISE images have a resolution of 2,400 km pix −1 at the observed ∆=1.22 au to 1999 YC.
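The two width estimates amount to the following arithmetic:

import numpy as np

# Geminid stream width from the shower duration and the Earth-crossing geometry
v_earth_km_s = 30.0
duration_s = 24.0 * 3600.0                       # FWHM of the ZHR profile
geminid_fwhm_km = v_earth_km_s * duration_s * np.sin(np.radians(62.0))   # ~2.3e6 km

# hypothetical 1999 YC trail width: V_esc * P_orb / 4
v_esc_km_s = 1.0e-3
P_orb_s = 1.7 * 3.156e7
yc_fwhm_km = v_esc_km_s * P_orb_s / 4.0          # ~1.3e4 km
print(geminid_fwhm_km, yc_fwhm_km)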
To measure surface brightness profiles perpendicular to the expected trail direction ("−V"), the summed images of Phaethon and 1999 YC were each rotated to set the projected orbit direction to the horizontal. Both the W 3 and W 4 images are used for comparison. We average ±1 pix (2″.76 for W 3 or 5″.5 for W 4) left and right of the object at distances of 50-215″ from the asteroids. The profiles for Phaethon and 1999 YC are shown in Figures (6) and (7), respectively. The green region corresponds to the approximate width of the trail. For Phaethon (Figure 6), a trail would be detected as a symmetric excess at x = 0″ and ∼1530″ wide if FWHM ∼ 2.3×10^6 km. Likewise for 1999 YC (Figure 7), the calculated trail width FWHM ∼ 1.3×10^4 km corresponds to a symmetric green region centered at x = 0″ and ∼16″ wide. No excess is evident in the profiles shown, neither in the W 3-band nor in the W 4-band, measured at distinctive distance combinations of vertical cuts. We find no hint of an orbit-aligned trail of any thickness.
Here, we add a short note on the non-detection of the Phaethon trail by WISE. This result is contrary to DIRBE/COBE, and is probably caused by the observing geometry. WISE measured Phaethon and its vicinity at a solar elongation ε ∼91° and R=2.32 au. On the other hand, DIRBE/COBE detected the surface brightness of the trail at ε ∼64° and R=1.01 au (Arendt 2014), when it was much closer to the Sun, and thus much warmer and less dispersed. WISE was at a rather disadvantageous observing point for trail thermal emission. Despite this, the background in the WISE images can set a uniformly applicable limit for constraining signals, referring to the morphological evidence from the asteroids.
We set a practical upper limit to the surface brightness of the potential dust trails. The W 3-band (12 µm) is inspected because it has the most efficient SNR. We sampled 70 pix (193″) along the trail direction from each asteroid. The profile was averaged along the rows over the width FWHM of the trail. The two consecutive highest counts were selected and averaged between the distances for placing a statistical limit. For the Phaethon trail, we used the angular distances 33″ ≲ θ_od ≲ 36″, which correspond to distances of 5.0-5.4×10^4 km from Phaethon. For the 1999 YC trail, we used the angular distances 69″ ≲ θ_od ≲ 72″, which correspond to distances of 6.1-6.4×10^4 km from 1999 YC. The measured counts (in digital numbers) were converted to flux density (in Jy, see § 3.1). The ratio of the measured flux density scattered by the dust particles in the trail, I_d (Jy), to the measured flux density scattered by the asteroid (nucleus) cross section, I_n (Jy), corresponds to the ratio of the dust cross section in the trail, C_d (km^2), to the asteroid (nucleus) cross section, C_n (km^2), and is expressed as

I_d / I_n = C_d / C_n.

The optical depth was obtained from τ = C_d/s^2, where s (km) = 725.27·∆ is the linear distance (corresponding to 1 arcsecond) at the WISE-centric distance ∆ (in au) of the asteroid. The results are summarized in Table 7. The upper limit (> 3σ) to the optical depth of the dust trail is τ < 7×10^−9 for Phaethon and < 2×10^−8 for 1999 YC, respectively. These are broadly comparable to the optical depths of cometary trails (10^−10-10^−8) measured by thermal infrared surveys (Sykes & Walker 1992; Reach et al. 2000, 2007). These results are commonly obtained at far distances (> 10^4 km) from the asteroids or nuclei. At much closer distances of 150-200 km from Phaethon, on the other hand, the corresponding optical depth limit is three orders of magnitude higher, τ < 6×10^−6. This leaves open the possibility that larger dust particles (>10 µm) are present near the parent body.
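For illustration, the optical-depth limit follows from the measured flux ratio as in the sketch below; the example ratio is a placeholder of the right order of magnitude.

def trail_optical_depth(flux_ratio, D_e_km, delta_au):
    # tau = C_d / s^2, with C_d = C_n * (I_d / I_n) and s = 725.27 * Delta km per arcsec
    C_n = 3.141592653589793 * (D_e_km / 2.0)**2    # asteroid cross-section (km^2)
    s = 725.27 * delta_au                          # km subtended by one arcsecond
    return C_n * flux_ratio / s**2

# e.g. a 3-sigma flux ratio I_d/I_n ~ 1e-3 for Phaethon (D_e = 4.6 km) at Delta = 2.08 au
print(trail_optical_depth(1.0e-3, 4.6, 2.08))      # ~7e-9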
We note that optical depths measured at similar wavelengths can be directly compared because of the similar particle-size dependence of the radiating efficiency. While thermal observations (λ > 10 µm) of cometary dust trails have yielded optical depths, those of asteroidal dust trails (bands) (e.g. Nesvorný et al. 2006a,b; Espy Kehoe et al. 2015) and bolides (Borovička et al. 2020) are mostly undetermined. More samples would be helpful for understanding the optical depths of asteroidal dust trails.
Size Limits for Co-moving Objects: W 3
We searched for possible companions around the PGC-asteroids. Again, the W 3 images (WISE) were used. As in Figures 4 and 5, no co-moving objects are apparent in our data. Kilometer-sized asteroids would be immediately obvious, while smaller bodies could linger and might just escape detection. Here we set limits to the brightness of possible co-moving point sources. We put an aperture down in blank regions and measured flux counts and their uncertainty in digital numbers. The aperture area was set to 100 pix^2 (10 pix×10 pix), comparable to those of the PGC asteroids. The projected angular distance was set at ∼28″ (10 pix), 56″ (20 pix) and 84″ (30 pix) in the North direction from each asteroid (Figures 4 and 5). The corresponding shortest distance is 4.2×10^4 km from Phaethon and 2.4×10^4 km from 1999 YC, respectively, which is far outside their Hill radii (≲ 60 km ≈ 0.04″). Since no distance dependency is found in the measured counts, we used the averaged value to obtain the flux density, I_e (in Jy). In order to place limits on size, we scale from Phaethon and 1999 YC (see I_n in Table 7, respectively) by assuming that the flux density is proportional only to the cross-sectional area of the radiating body. The derived flux density ratio, I_e/I_n, is 5.8±0.7×10^−2 for Phaethon and 2.8±0.4×10^−2 for 1999 YC, respectively. The limiting radius of possible companions, r_e (m), is given by r_e = 1000 (D_e/2) (I_e/I_n)^{0.5}. Substituting D_e (Table 3) and I_e/I_n, we obtain 3σ upper limits to the radius of possible co-moving objects of r_e < 650 m in Figure 4 (Phaethon) and r_e < 170 m in Figure 5 (1999 YC), respectively. Objects larger than these size limits would have been apparent in the blank areas of the W 3 images, but none were detected.
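The radius limit is a direct cross-section scaling, e.g.:

def companion_radius_limit_m(D_e_km, flux_ratio_3sigma):
    # r_e = 1000 * (D_e / 2) * (I_e / I_n)^0.5, with D_e in km and r_e in metres
    return 1000.0 * (D_e_km / 2.0) * flux_ratio_3sigma**0.5

# Phaethon: D_e = 4.6 km and a 3-sigma upper value of I_e/I_n ~ 7.9e-2 give r_e ~ 650 m
print(companion_radius_limit_m(4.6, 7.9e-2))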
DESTINY + Update
JAXA's DESTINY+ mission plans a Phaethon flyby at a relative speed of 36 km s^−1 either in 2028 January or 2030 November, with a closest approach distance of about 500 km (Ozaki et al. 2022). The DESTINY+ dust analyzer (DDA) has two sensor heads with a total sensitive area A_DDA = 0.035 m^2 for collecting interplanetary dust particles (IPDs) with radii ≳10 µm (particle mass ≳10^−11 kg) (Krüger et al. 2019). However, HST optical observations reported that the DDA coverage is insufficient for sampling the Geminid meteoroids during the Phaethon flyby (Jewitt et al. 2018). Here we provide an update based on the thermal observations conducted at two distinctive distances from Phaethon. One measurement was at a short distance of 150-200 km from Phaethon and another was at far distances of 50,000-54,000 km based on this WISE study. We follow the procedure described in Jewitt et al. (2018).
We examined the DESTINY+ main unit and the DDA, respectively. Here, we describe the main unit. DESTINY+ is a cuboid shape with a longitudinal cross section A_{D+} < 1.7 m^2 (Naoya Ozaki, private communication) 15. The total cross section of all the particles intercepted in this area is C = τ A_{D+}. This is equal to the maximum cross section of a single, spherical particle having radius a < (τ A_{D+}/π)^{1/2}. The corresponding particle mass is M = (4/3)π ρ_d a^3, where the assumed bulk density is ρ_d = 2000 kg m^−3. An upper limit to the number density of dust particles, N_1 (m^−3), is placed using either the measured optical depth (τ), as expressed by N_1 = τ/(π a^2 L), or the measured mass-loss rate (Q_dust), as expressed by Equation (3) of Jewitt et al. (2018), where a is the maximum radius of dust particles, L is the path length along the line of sight, and U is the speed of dust particles released from Phaethon. We apply L=500 km for the closest distance of DESTINY+ with the measured values of τ < 6×10^−6 and Q_dust ≲ 14 kg s^−1 16. Meanwhile, we also apply L=50,000 km, τ < 7×10^−9 (Table 7) and the 3σ upper limit Q_dust ≲ 1 kg s^−1 when Phaethon was at R_h=1 au (MJD 58104.2148 in Table 6) from this WISE study. U ∼ 3 m s^−1 is commonly used for the escape velocity of Phaethon. The average separation between particles, l_a (m), is given by N_1^{−1/3}. The expected number of dust particles encountered at the distance L from Phaethon, N_enc, is estimated by

N_enc = N_1 A_{D+} V_{D+} T_int,

where V_{D+} is the flyby speed of DESTINY+ and T_int is its characteristic interaction time. We adopt V_{D+} = 36 km s^−1 and T_int ∼ 2L/V_{D+} to derive N_enc. For the DDA case, A_{D+} above is replaced with A_DDA. Results are shown in Table 8. During the closest approach to Phaethon (L∼500 km), DESTINY+ is likely to intercept a single millimeter-scale dust particle. The DDA would encounter two particles of 500 µm in size, though these are beyond its sensitivity or scientific scope. At far distances from Phaethon (L∼50,000 km), a single particle of 200 µm in size may hit DESTINY+. The DDA would also be able to collect six particles of 10 µm scale with N_1 ∼ 10^−6 m^−3. The derived DDA detectability is consistent with the results examined by the Helios spacecraft measurements and the simulations (IMEX) for 13 cometary trails. Spatial densities of ∼10 µm-sized particles in cometary trails of 10^−8-10^−7 m^−3 are detectable with an in-situ instrument (Krüger et al. 2020).
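The flyby estimates based on the optical-depth limits can be reproduced approximately as in the sketch below (the Q_dust-based branch, Equation (3) of Jewitt et al. 2018, is omitted here, and the outputs are illustrative rather than the tabulated values).

import numpy as np

def flyby_dust_estimate(tau, L_m, area_m2, v_flyby=3.6e4):
    a = np.sqrt(tau * area_m2 / np.pi)       # largest single grain consistent with tau over the area
    N1 = tau / (np.pi * a**2 * L_m)          # number density of such grains (m^-3)
    l_a = N1**(-1.0 / 3.0)                   # mean inter-particle separation (m)
    T_int = 2.0 * L_m / v_flyby              # characteristic interaction time (s)
    N_enc = N1 * area_m2 * v_flyby * T_int   # expected number of intercepted grains
    return a, N1, l_a, N_enc

# closest approach: L ~ 500 km, tau < 6e-6, spacecraft cross-section < 1.7 m^2
print(flyby_dust_estimate(6.0e-6, 5.0e5, 1.7))
# far from Phaethon: L ~ 50,000 km, tau < 7e-9, DDA sensitive area 0.035 m^2
print(flyby_dust_estimate(7.0e-9, 5.0e7, 0.035))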
This approach is also applicable to the case of the Geminid stream, with one caveat. Observations of the Geminids find that the meteoroid sizes range from nearly a millimeter up to tens of centimeters (e.g. Blaauw 2017; Yanagisawa et al. 2008, 2021), and even smaller particles (< 1 mm) could be present in the stream. However, much smaller Geminids with a ≲ 10 µm are swept out of the stream by radiation pressure (Jewitt & Li 2010; Moorhead 2021) and/or by the Poynting-Robertson effect (Ryabova 2012). Such smaller Geminids are probably absent from the stream (see also § 3.3 in Jewitt et al. 2018). Thus we conclude that the DESTINY+ main unit may intercept a modest number of Geminid meteoroids in the trail. The DDA is capable of detecting small dust particles; however, these are unlikely to be identified as Geminids.
Na Sublimation-driven Mass Production
The essential puzzle for the Geminid stream is its production process from Phaethon. Any measured limiting mass-loss rates for Phaethon are too small to supply the mass of the Geminid stream (see § 4.1). The rates are nearly steady and low over almost the entire orbit. Only at perihelion is the optical brightness observed to increase suddenly, and the mass-loss rate rises sharply (by about a factor of 300) for a short time (STEREO; Jewitt & Hsieh 2022). At perihelion, some quasi-episodic dust ejection could have occurred. Here, we focus on the perihelion activity of Phaethon to find a possible mechanism for producing the Geminid meteoroid stream.
The optically observed recurrent perihelion activity is presumed to be caused by thermal breakdown (fracture and/or desiccation) of rocks; the release of micron-sized particles at ~3 kg s^-1, slow ejection speeds of ~3 m s^-1, and a short tail with a mass of ~3x10^5 kg have been reported (Jewitt & Li 2010; Jewitt 2012; Jewitt et al. 2013; Hui & Li 2017). Rotational mass-shedding (Nakano & Hirabayashi 2020) and electrostatic forces (Jewitt 2012; Jewitt et al. 2015; Kimura et al. 2022) have also been proposed as probable mechanisms. But the observed (and modeled) mass loss is tiny and lasted only ~1 day around perihelion (cf. Hui & Li 2017). Even if it occurred at every perihelion return over the dynamical age τ_s ~ 1000(s) yr, the total ejected mass is about 2x10^8 kg (with a factor of a few uncertainty from the dynamical age). This is more than four to five orders of magnitude too small to be consistent with the stream mass M_s ≈ 10^13-10^14 kg (Hughes & McBride 1989; Blaauw 2017), or with what would be expected if Phaethon were a comet (10^13-10^15 kg; Ryabova 2017). Moreover, micron-sized particles cannot stay in the stream owing to their sensitivity to radiation pressure (Jewitt & Li 2010; Moorhead 2021) and/or the Poynting-Robertson effect (Ryabova 2012). Larger particles, such as the near mm- to 10s of cm-sized Geminids in the stream, could be launched simultaneously at perihelion, but useful data (at longer wavelengths) and a mechanism for such launching have been lacking. As for the ejection speeds, another consideration is required. A dynamical study requires a catastrophic mass-production event to launch larger particles at high ejection speeds of ~1 km s^-1, proposing comet-like, volatile sublimation-driven activity in Phaethon at the smallest q = 0.126 au about 2,000 yr ago, during a one-time perihelion return (Ryabova 2016, 2018, 2022). This is inferred from the broad width of the stream at R_h = 1 au, modeled from visually observed activity of the Geminid meteor shower (Ryabova & Rendtel 2018). High-speed ejection is also suggested by observations of the fine structure of orbits within the Geminid stream (Jiří Borovička, private communication). The fatal problem with the proposed comet-like activity is that Phaethon is unlikely to contain icy volatiles (e.g., H2O, CO, and CO2; see § 4.2) that could build up the strong gas pressure needed to launch the dust. Phaethon was unlikely to contain ice 2,000 yr ago, even if it originated from the (icy) inner or outer main-belt asteroids (de León et al. 2010). Sublimation-driven ice-loss models find that NEAs originating from the inner and outer main belt would lose their ice long before reaching the near-Earth region (Schörghofer et al. 2020). Ice can possibly be preserved only in the polar regions of asteroids having small axial tilts (obliquity) of ≤ 25°, or in the interiors of asteroids with D_e ≥ 10 km (Schörghofer & Hsieh 2018). Such a situation is improbable for Phaethon given its small size (D_e ≈ 5-6 km) and high obliquity of >> 25° (a pole orientation of λ_e = +85°±13° and β_e = -20°±10° in ecliptic coordinates; Ansdell et al. 2014).
[Footnote 16: We assessed a linearly interpolated τ value between L = 200 km and L = 50,000 km. With the interpolation, we find τ < 5.96x10^-6 at L = 500 km, which differs by < 0.7% from the measured τ at L = 200 km and is negligibly small. We therefore adopt τ < 6x10^-6 as the practical value at L = 500 km.]
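For orientation, the ~2x10^8 kg shortfall quoted above can be reproduced with a short budget calculation. The sketch below assumes ~1 day of activity per return, as stated above, and Phaethon's ~1.43 yr orbital period (an assumed value here) to convert the dynamical age into a number of perihelion passages.

```python
# Cumulative mass from the observed perihelion activity, assuming ~1 day of dust
# release per return and an assumed ~1.43 yr orbital period for Phaethon.
SECONDS_PER_DAY = 86400.0
ORBITAL_PERIOD_YR = 1.43
DYNAMICAL_AGE_YR = 1000.0

n_returns = DYNAMICAL_AGE_YR / ORBITAL_PERIOD_YR      # ~700 perihelion passages
mass_thermal = 3.0 * SECONDS_PER_DAY * n_returns      # ~3 kg/s sustained for 1 day per return

print(f"{mass_thermal:.1e} kg")   # ~2e8 kg, versus a stream mass of 1e13-1e14 kg
```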
Furthermore, both the northern and southern hemispheres have experienced intense solar heating at the smallest q = 0.126 au owing to pole-shift cycles (Hanuš et al. 2016). As such, Phaethon is highly unlikely to contain the icy volatiles that could produce the Geminids.
Recently, Masiero et al. (2021) proposed that sublimation of sodium (Na) may drive the perihelion mass-loss activity of Phaethon. Sodium is, or used to be, contained in Phaethon, and thermal desorption occurs under intense solar heating, as observed in the Geminids (Kasuga & Jewitt 2019). The near-Sun activity of Phaethon is attributable to emission in the neutral Na D lines (and Fe I lines), as reported by the STEREO coronagraphic observations (Hui 2022). Here, we examine Na sublimation-driven mass production to test its consistency with the formation process of the Geminid meteoroid stream.
We focus on the subsurface of Phaethon at a depth d ~ 0.05 m. There, a maximum sodium sublimation pressure (≲ 1 Pa) and temperatures of T_d = 580-770 K are modeled for several different rotational phases at perihelion (Figure 1, right; Masiero et al. 2021). Sublimation of pure sodium is used in the model as a bounding case, which is not an unrealistic assumption. Sodium is a trace species in rocks and would initially be contained in a silicate mineral phase (e.g. feldspar). However, the pure phase can be produced by segregation from the host minerals under severely heated (Čapek & Borovička 2009) or irradiated (Russell & Sanders 1994) conditions such as those experienced by Phaethon when it approaches very near the Sun (see § 2 of Masiero et al. 2021).
We estimate the maximum size of dust particles that can be dragged out by the sodium sublimation-driven gas. The critical radius of dust particles, a_c, is related to the saturation partial pressure of sodium sublimation over the condensed state, P_sat(T_d) (in Pa) (Equation (2) of Masiero et al. 2021), and is given by a_c = 9 C_D P_sat(T_d) / (8 π G ρ_d^2 D_e) (cf. Jewitt 2002), where C_D ~ 1 is the dimensionless drag coefficient, G = 6.67x10^-11 m^3 kg^-1 s^-2 is the gravitational constant, and the same values of ρ_d = 2000 kg m^-3 and D_e = 4.6 km (for Phaethon) are adopted. With T_d = 580-770 K, we obtain P_sat(T_d) = 0.003-0.6 Pa. The corresponding critical radius is a_c ~ 0.09-17 cm, comparable to the optically measured sizes of Geminid meteoroids from lunar impacts (e.g. ~20 cm; Yanagisawa et al. 2008, 2021). For comparison, setting ρ_d = 3000 kg m^-3, as estimated from observations of Geminid meteors (Babadzhanov & Kokhirova 2009), we find a_c ~ 0.04-7 cm, rather consistent with the measured Geminid sizes (cf. Blaauw 2017; Yanagisawa et al. 2008, 2021). For particles near the rotational equator, a_c would be larger because rotation assists ejection through the enhanced angular velocity (Jewitt 2002); we therefore regard our estimates as useful lower limits. The thermal speed of the sodium gas, V_gas, is given by V_gas = [8 k_B T_d / (π µ m_H)]^{1/2} (Equation (10)), where k_B = 1.38x10^-23 J K^-1 is the Boltzmann constant, µ = 22.99 is the molecular weight of sodium, m_H = 1.67x10^-27 kg is the mass of the hydrogen atom, and T_d (in K) is the temperature at a depth of 0.05 m below the surface of Phaethon. Substituting T_d = 580-770 K, we find V_gas(T_d) = 730-840 m s^-1. This is orders of magnitude faster than the other proposed production processes (e.g. thermal breakdown) and comparable to the required speeds of ~1 km s^-1 described by Ryabova (2016). Next, we estimate the total mass of dust particles ejected by the sodium sublimation-driven activity, for comparison with the Geminid stream mass. The total specific sublimation mass-loss rate of sodium at depth d ~ 0.05 m, (dm/dt)_d in kg m^-2 s^-1, is obtained by integrating over temperatures 580 ≤ T_d ≤ 770 K, and a dust-to-gas (sodium) mass ratio of ~1 is applied (Oppenheimer 1980). Adopting P_sat(T_d) (Masiero et al. 2021) and Equation (11), we find (dm/dt)_d ~ 0.03 kg m^-2 s^-1. The mass-loss rate from the entire subsurface of Phaethon (~ π D_e^2) then corresponds to ~2.0x10^6 kg s^-1. If Na-driven perihelion activity likewise lasted for 1 day each orbital return, the total mass over the dynamical age τ_s ~ 1000(s) yr is 1.2x10^14 kg (with a factor of a few uncertainty from the dynamical age). This is consistent with the Geminid stream mass M_s ≈ 10^13-10^14 kg (Hughes & McBride 1989; Blaauw 2017), and is also comparable to that expected for a hypothetical cometary Phaethon (M_s ≈ 10^13-10^15 kg; Ryabova 2017). We note that the calculated Na-driven perihelion mass-loss rate is about five to six orders of magnitude larger than the mass-loss rates measured for Phaethon at optical, near- and mid-infrared wavelengths (§ 4.1). For example, STEREO optical data give perihelion mass-loss rates from ~0.01 kg s^-1 (= 10^3 kg per day) (Hui 2022) up to ~3 kg s^-1 (Jewitt & Hsieh 2022). These measured values are associated with 1-10 µm-sized particles, whereas the calculated Na-driven perihelion mass-loss rate presumably relates to the near mm- to 10s of cm-sized particles, which contain far more mass.
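A minimal numerical check of these expressions is sketched below, using the constants and Phaethon parameters adopted above. The functional forms for a_c and V_gas are the drag-balance and mean-thermal-speed relations reconstructed in the text (they reproduce the quoted 0.09-17 cm and 730-840 m s^-1 ranges), and the ~1.43 yr orbital period used for the cumulative mass is an assumed value.

```python
import math

G = 6.67e-11     # gravitational constant [m^3 kg^-1 s^-2]
K_B = 1.38e-23   # Boltzmann constant [J K^-1]
M_H = 1.67e-27   # mass of the hydrogen atom [kg]
MU_NA = 22.99    # molecular weight of sodium

def critical_radius(p_sat, rho_d=2000.0, d_e=4.6e3, c_d=1.0):
    """Largest grain liftable by gas drag (~C_D * pi * a^2 * P_sat) against the
    surface gravity of a homogeneous sphere of diameter d_e and density rho_d."""
    return 9.0 * c_d * p_sat / (8.0 * math.pi * G * rho_d**2 * d_e)

def v_gas(t_d):
    """Mean thermal speed of sodium atoms at subsurface temperature T_d [K]."""
    return math.sqrt(8.0 * K_B * t_d / (math.pi * MU_NA * M_H))

print(critical_radius(0.003), critical_radius(0.6))   # ~9e-4 m to ~0.17 m
print(v_gas(580.0), v_gas(770.0))                     # ~730 and ~840 m/s

# Integrated Na-driven production: ~2e6 kg/s sustained for 1 day per return,
# over ~1000 yr / 1.43 yr ~ 700 perihelion passages
mass_na = 2.0e6 * 86400.0 * (1000.0 / 1.43)
print(f"{mass_na:.1e} kg")                            # ~1.2e14 kg
```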
We again propose longer-wavelength observations aimed at detecting these larger particles, especially during the perihelion activity of Phaethon.
We also note that there should be a mechanism that replenishes the sodium-depleted subsurface with a sodium-rich layer every return, but it remains unknown (Jewitt & Hsieh 2022). A single perihelion passage evidently depletes the sodium abundance down to a depth d ≲ 0.05 m (Figure 2, middle left; Masiero et al. 2021). How, then, has sodium been resupplied to the subsurface over the stream lifetime τ_s ~ 1000(s) yr? As a hypothesis, combinations of the suggested activity triggers are conceivable. Thermal breakdown would be concentrated in a thin surface layer with a thickness similar to the diurnal thermal skin depth, d_s ~ 0.1 m (Jewitt & Li 2010), comparable within a factor of two to the depth in the sodium-driven model. The material in this thin layer, accessible to the thermal wave, would disintegrate and peel off, renewing the surface with fresh, sodium-bearing material. Still, unknown parameters remain (e.g. the lifetime of the thin surface layer), and a combination of activity mechanisms may be needed to advance our understanding. Future work will investigate a scenario in which Phaethon is most likely an asteroid, with some of the mass production attributable to a past breakup, while sodium plays a contributory role in comet-like activity at perihelion.
SUMMARY
We present WISE/NEOWISE observations of the potentially Geminid-associated asteroids Phaethon, 2005 UD and 1999 YC. The multi-epoch thermal infrared data were obtained in the wavelength bands 3.4 µm (W1), 4.6 µm (W2), 12 µm (W3) and 22 µm (W4). The asteroids appear point-like in all of the image data. We use the data to set limits on the presence of dust attributable to the asteroids. The decade-long survey data give the following results.
1. No evidence of lasting mass loss was found in the Phaethon images. The maximum dust production rate is Q_dust ≲ 2 kg s^-1, with no strong dependence on heliocentric distance over R_h = 1.0-2.3 au.
2. The measured limiting dust production rates are orders of magnitude too small to supply the mass of the Geminid meteoroid stream within its 1000(s) yr dynamical age. If Phaethon is the source of the Geminids, the stream mass is likely to have been produced episodically, not in steady state.
3. No dust trail is detected around Phaethon when at R_h = 2.3 au. The corresponding upper limit to the optical depth is τ < 7x10^-9.
4. No co-moving objects were detected. The limiting radius of a possible source is r_e < 650 m at a distance of 42,000 km from Phaethon.
5. When DESTINY+ passes at a distance of 50,000 km from Phaethon, several 10 µm-scale particles would be captured by the dust analyzer, though they most likely will not be identified as Geminids. During the flyby phase (at 500 km distance), two particles of ~500 µm in size would be encountered by the instrument.
6. The 2005 UD images show no extended emission from ongoing mass loss. The upper limit to the dust production rate is Q_dust ≲ 0.1 kg s^-1 at R_h = 1.0-1.4 au. If dust production were sustained over the ~10,000 yr dynamical age of the Daytime Sextantid meteoroid stream, the resulting stream mass would be ~10^10 kg. If the stream were instead produced by a catastrophic event, its mass could be larger.
7. The 1999 YC data show no coma at R_h = 1.0-1.6 au. The maximum mass-loss rate is Q_dust ≲ 0.1 kg s^-1.
8. No dust trail associated with 1999 YC was found at R_h = 1.6 au, corresponding to an upper limit to the optical depth of τ < 2x10^-8.
9. No associated fragments were discovered; the limiting radius of a possible co-moving object is r_e < 170 m at a distance of 24,000 km from 1999 YC.
10. Sodium sublimation-driven perihelion activity of Phaethon is expected to eject near mm- to 10s of cm-sized dust particles at speeds of 730-840 m s^-1 through gas drag. A mass-loss rate of ~2.0x10^6 kg s^-1, if allowed to last for 1 day every return, would deliver about 1.2x10^14 kg in 1000(s) yr. These processes are compatible with the structure of the Geminid meteoroid stream. For this mechanism to be plausible, though, a sustainable mechanism to resupply the sodium-depleted subsurface with sodium-rich minerals is required.
We are grateful to David Jewitt for productive discussions. TK thanks Mikiya Sato, Chie Tsuchiya, Sunao Hasegawa, Naoya Ozaki, Galina Ryabova, Yung Kipreos, Peter Brown, Jiří Borovička, Björn J. R. Davidsson, Hideyo Kawakita, and Jun-ichi Watanabe for their support. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This publication also makes use of data products from the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE), which is a joint project of the Jet Propulsion Laboratory/California Institute of Technology and the University of Arizona; NEOWISE is funded by the National Aeronautics and Space Administration. This publication uses data obtained from the NASA Planetary Data System (PDS). This research has made use of data and services provided by the International Astronomical Union's Minor Planet Center. This research has made use of the NASA/IPAC Infrared Science Archive, which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration. We would like to address special thanks to the anonymous reviewer and to Maria Womack, the scientific editor. Finally, we express our deep gratitude to Althea Moorhead, Margaret Campbell-Brown and the LOC members of Meteoroids 2022 (held virtually) for providing the opportunity to enhance this study.
Note: Cases where no measurement was available from NEOWISE are indicated by "-". Entries without uncertainties indicate 2σ upper limits extracted using the WISE photometric pipeline.
Facilities: WISE, NEOWISE
Table notes (observation log): (a) Modified Julian Date of the mid-point of the observation; (d) heliocentric distance; (e) WISE-centric distance; (f) phase angle; (g)-(j) magnitudes at W1, W2, W3 and W4.
Table notes (orbital data, obtained from the NASA JPL Small-Body Database, Epoch 2459600.5: 2022-Jan-21): (a) semimajor axis; (c) inclination; (d) perihelion distance; (e) argument of perihelion; (f) longitude of ascending node; (g) aphelion distance; (h) orbital period; (i) Tisserand parameter with respect to Jupiter, with T_J > 3.08 for asteroids and T_J < 3.08 for comets given a < a_J = 5.2 au; see also the other suggested comet-asteroid threshold of T_J = 3.05 (Tancredi 2014).
Table notes (dust production rates): the 2σ upper limit is denoted by "<" and a negative value is expressed as "n/a"; (a) Modified Julian Date of the mid-point of the observation; (c) dust production rate from Equation (6); (⋆) Phaethon's closest approach date to the Earth (2017-Dec-17); (⋄) contaminated by a star.
Table notes (trail photometry, from the W3 band): (a) measured flux density scattered by the asteroid's cross-sectional area; (b) measured flux density scattered by the dust particles in the trail; (c) cross section of the asteroid calculated from Table 3; (d) cross section of dust particles in the trail (Equation 7); (e) optical depth.
Table notes (encounter estimates): (a) distance from Phaethon; (b) radius of a dust particle; (c) mass of a dust particle.
Figure caption (same as Figure 6, but for 1999 YC): the W3-band profile is at a distance of 160" (≈140,000 km) from 1999 YC and the W4-band profile at 215" (≈190,000 km).
Moving knowledge into action for more effective practice, programmes and policy: protocol for a research programme on integrated knowledge translation
Background Health research is conducted with the expectation that it advances knowledge and eventually translates into improved health systems and population health. However, research findings are often caught in the know-do gap: they are not acted upon in a timely way or not applied at all. Integrated knowledge translation (IKT) is advanced as a way to increase the relevance, applicability and impact of research. With IKT, knowledge users work with researchers throughout the research process, starting with identification of the research question. Knowledge users represent those who would be able to use research results to inform their decisions (e.g. clinicians, managers, policy makers, patients/families and others). Stakeholders are increasingly interested in the idea that IKT generates greater and faster societal impact. Stakeholders are all those who are interested in the use of research results but may not necessarily use them for their own decision-making (e.g. governments, funders, researchers, health system managers and policy makers, patients and clinicians). Although IKT is broadly accepted, the actual research supporting it is limited and there is uncertainty about how best to conduct and support IKT. This paper presents a protocol for a programme of research testing the assumption that engaging the users of research in phases of its production leads to (a) greater appreciation of and capacity to use research; (b) the production of more relevant, useful and applicable research that results in greater impact; and (c) conditions under which it is more likely that research results will influence policy, managerial and clinical decision-making. Methods The research programme will adopt an interdisciplinary, international, cross-sector approach, using multiple and mixed methods to reflect the complex and social nature of research partnerships. We will use ongoing and future natural IKT experiments as multiple cases to study IKT in depth, and we will take advantage of the team’s existing relationships with provincial, national and international organizations. Case studies will be retrospective and prospective, and the 7-year grant period will enable longitudinal studies. The initiation of partnerships, funding processes, the research lifecycle and then outcomes/impacts post project will be studied in real time. These living laboratories will also allow testing of strategies to improve the efficiency and effectiveness of the IKT approach. Discussion This is the first interdisciplinary, systematic and programmatic research study on IKT. The research will provide scientific evidence on how to reliably and validly measure collaborative research partnerships and their impacts. The proposed research will build the science base for IKT, assess its relationship with research use and identify best practices and appropriate conditions for conducting IKT to achieve the greatest impact. It will also train and mentor the next generation of IKT researchers.
Background
Health research is conducted with the expectation that it advances knowledge and eventually translates into improved health systems and population health. However, research findings are often caught in the know-do gap: they are not acted upon in a timely way or not applied at all. The failure to put research findings into action is therefore a major societal issue and contributes to the estimated $200B (USD) of wasted research funding because the full potential of research was not realized [1].
The magnitude of the know-do gap has stimulated governments and research funders around the globe to recognize the importance of the active translation of research into action [2]. Where historically the problem of research underutilization was considered simply a dissemination failure (knowledge users unaware of research), some now suggest this gap results from knowledge production failures (not producing research addressing knowledge user problems).
A widely recognized and accepted tenet of knowledge translation is the integration of knowledge users throughout the research process. Integrated knowledge translation (IKT) is advanced as a way to increase the relevance, applicability and impact of results [3,4]. It shares common principles with many collaborative research approaches: co-production of knowledge, participatory research, linkage and exchange, Mode 2 knowledge production, engaged scholarship and community-based participatory research [5][6][7][8][9][10].
This approach proposes researcher/knowledge user collaboration as a key step in achieving societal impact and a way for society to speak to science. IKT shifts from a paradigm where the researcher is expert to one where researchers and knowledge users are both experts bringing complementary knowledge and skills to the team. They collaborate on issue-driven research with the expectation the research will generate implementable solutions to long-standing problems [11]. With IKT, the knowledge users work with researchers throughout the research process, starting with identification of the research question-they are actively engaged in the governance, priority setting and conduct of the research. Knowledge users represent all those who would be able to use research results to inform their decisions (clinicians, managers, policy makers, patients/families and others). Increasingly, stakeholders (governments, funders, researchers, health system managers and policy makers, patients and clinicians) are showing interest in the idea that IKT generates greater and faster societal impact. Stakeholders include all those with an interest in the issue or research, some of whom (knowledge users) are in a position to make direct use of the research in decision-making while other stakeholders are not but nevertheless want the issues and problems addressed.
Research funders have also been considering how to increase the impact of the research that they fund and their role in knowledge translation [2,[12][13][14]. 'Integrated knowledge translation' is a Canadian research funder innovation, initially advanced by the Canadian Health Services Research Foundation [6] and referred to as Knowledge Exchange in the late 1990s/early 2000s. The concept was adopted and refined at the Canadian Institutes of Health Research, which coined the term integrated knowledge translation in the first decade of the 2000s [15]. To promote the concept of partnered research, these organizations created funding opportunities that required collaboration between researchers and knowledge users. -25]. Interest in this concept has also been demonstrated recently by the publication of papers and commentaries on the topic in at least one journal [3,[26][27][28][29][30][31].
Although IKT is broadly accepted, the actual research supporting it is limited, and there is uncertainty about how best to conduct and support IKT. A limited number of scoping, realist and other reviews suggest there is value in researchers and knowledge users working collaboratively, and further reviews are underway [32][33][34][35]. Emerging scholarship focusing on participatory action research [8,36], the UK CLAHRCs [37][38][39][40][41][42] and CIHR's evaluation of its IKT programmes [43,44] is beginning to support the claims that IKT may lead to increased knowledge user capacity to use research; produce research that is more useful to knowledge users; increase the use of research in practice, health systems and policy decisions; and improve patient and health system outcomes. Studies describing how research partnerships work are appearing [45][46][47]. There is some evidence to suggest that in these collaborative research partnerships, researchers are the ones who benefit more, by learning about the knowledge users' context [11,48]. Other studies reveal that engagement of knowledge users can influence researchers' approaches to research and the review of grants [49,50]. However, the evidence is not yet conclusive on the impacts of IKT. At least one survey study failed to find an association between researcher/knowledge user engagement and research utility [51], suggesting that the factors determining effective IKT have yet to be clearly identified. Knowledge of IKT among researchers varies [52], and there is limited evidence about how researchers and knowledge users should go about collaborating. Despite the slim evidence base, stakeholder enthusiasm for IKT continues to grow. The expectation of enhanced impacts from IKT has seldom been critically assessed, nor has the research partnering process been systematically studied. In response, Gagliardi and colleagues have recently suggested a research agenda for IKT [53].
The proposed research will build the science base for IKT, determine its effectiveness at increasing research use and identify best practices and appropriate conditions for conducting IKT to achieve the greatest impact on research use. The goals, objectives and outputs of the 7-year research programme are described in Table 1.
Conceptual framework guiding the research programme
Table 1 Research programme goals, objectives and outputs. Goal 1: advance understanding of the concept of IKT from the perspectives of knowledge users, researchers, funders and universities. Objectives: 1. Describe researcher and knowledge user partnerships and the conditions under which these partnerships succeed or fail. 2. Identify research funding mechanisms designed to support IKT and explore their effect on knowledge user engagement in research. 3. Identify how university (dis)incentives influence researcher involvement in IKT. Potential outputs: The knowledge generated will be immediately relevant to four groups: knowledge users and their organizations needing more relevant research, researchers wanting to do IKT, universities wanting to encourage IKT by faculty and funders wishing to make informed decisions about their policies and investments in support of IKT. The cumulative knowledge generated will fundamentally enhance our understanding of how and why researcher/knowledge user collaborations work and provide information on how to maximize the use of IKT as a strategy to address the underutilization of research.
This research programme is informed by four main conceptual frameworks: (a) the Rycroft-Malone et al. framework for collaborative research (FCR) [38,42], (b) the Research Impact Continuum (RIC) translational framework [54], (c) the Knowledge to Action framework (KTA) for implementation [55,56] and (d) the Gifford model of leadership [57,58].
The FCR identifies nine domains influencing knowledge use and impact stemming from researcher/knowledge user collaborations: knowledge and knowledge production, facilitation, patient and public involvement, knowledge sharing and exchange, geography, actors/ agents, temporality, architecture of the knowledge user organization and its processes and context (the interconnecting and supporting relationships between all these domains). The RIC distinguishes between research and the practice of translation, highlights the role of research in translation, including IKT, and focuses attention on research impact. Indicators of success/impact guided by the RIC framework [54] include advances in knowledge (e.g. discoveries, publications), capacity enhancement (e.g. new HQP, trainees, researchers, knowledge users with IKT skills), health system and policy impacts (e.g. use of programme findings in decision-making). The KTA framework highlights the interplay between knowledge creation and application and identifies key components required for planned action. The Gifford leadership framework specifies the leadership and management behaviors that positively influence knowledge translation, including relation-oriented behaviors (supporting, developing, recognising others), change-oriented behaviors (visioning, providing direction, building coalitions) and task-oriented behaviors (clarifying roles, monitoring and procuring resources).
Methods/design
The approach to this Canadian Institutes of Health Research 7-year foundation grant is interdisciplinary and cross-sectoral, using multiple and mixed methods that best reflect the complex and social nature of research partnerships. Knowledge users are full members of the research programme and individual project teams and will continue to be actively involved in every step of the research process. To allow the research programme to be more inclusive than those listed on this proposal, the programme is organized as a researcher/knowledge user network known as the Integrated Knowledge Translation Research Network: Doing Research with the People Who Use it (https://iktrn.ohri.ca). The network has been specifically designed to include researchers interested in IKT, including early career, mid-career and senior researchers (referred to as IKT experts, currently n = 40); IKT trainees (currently n = 16); knowledge user experts from research funding agencies, charities, health services, health authorities and other organizations (currently n = 31); and a methods resource group comprising knowledge translation and implementation science experts (currently n = 11). When feasible, we will use an IKT approach within the projects to expand the team's experiential knowledge of IKT mechanisms. All projects are guided by the programme goals and objectives. Table 2 presents the programme work streams along with their objectives, rationale, research questions, level of partnerships and outputs.
Several knowledge syntheses are proposed to increase understanding of the concept of IKT (projects 1a-b), of how IKT works and with what impact (projects 1c-d), and to identify tools to evaluate the partnering process (project 1e). A novel aspect of the research is that three initial multiple-case studies (projects 2a-c) anchor the programme during the first half of the grant. The case studies are both retrospective and prospective and will provide data and knowledge on how IKT works, its impact and the degree of engagement required to optimise impact. The case studies will provide insight into IKT partnerships at two levels: (a) inter-organizational: partnerships between BORN (Better Outcomes Registry and Network) Ontario and hospitals providing maternity care (project 2b); (b) regional: partnerships between the Deakin University Centre for Quality and Patient Safety Research and health services in the State of Victoria, Australia (project 2a), and a university and regional health authority partnership (UNBC and Northern Health) (project 2c).
The case studies will be complemented with other projects focusing on other aspects of IKT and other programme objectives. For example, project 3a is intended to capture the network members' and organizations' experiential knowledge about working in an IKT way while project 3b is about network members reflecting on the field and identifying where the science should focus. Several studies focus on funder programmes to promote research undertaken using an IKT approach (projects 4a-d). Other studies focus on the perspective of an organization that becomes the partner in an IKT project (projects 5a-b) or the perspective of the researcher or university (projects 6a-b). Project 7 is designed to develop and test an IKT questionnaire. Finally, projects 8a-d are about IKT tools and developing training modules for researchers and knowledge users. We anticipate that research questions generated from projects will subsequently be embedded into future case studies as this will be an efficient way to study these topics without having to launch new stand-alone studies.
More case studies will be added as the grant proceeds. Several knowledge user partners are already identifying opportunities to study IKT 'in the field', and the project structure enables timely incorporation of these opportunities into the programme. In years 2-3 of the grant, future case studies will be initiated. Initial criteria for selecting new projects will include addressing knowledge gaps identified in the programme's ongoing studies, prioritization of future studies by knowledge users and the Advisory Committee, and feasibility. Towards year 4, intervention studies will be launched to test theory-based strategies to improve research partnering and build health organization capacity for research partnering and research use. Finally, a meta-synthesis of findings from all projects will be completed to discern patterns and differences between different knowledge user groups (patients, clinicians, managers, policy makers), organizations (healthcare delivery institutions, health authorities, ministries of health, health research funders) and contexts, and to develop materials to facilitate IKT and uptake of the findings. Team members are very interested in executives/managers, who have great potential to activate organizational change for research-informed decision-making but are understudied.
Table 2 (excerpt). Case studies. Objective: to amass evidence on the process of IKT and its impact. Rationale: given that an experimental study design will never be feasible or practical to use to determine the effectiveness of integrated knowledge translation or to understand how it works [...]. Rationale: given that IKT is a relatively recent phenomenon and that little knowledge has been codified about how to do IKT, we believe much can be learned from those using this approach; we will be eliciting case stories from network members (researchers and knowledge users) about their experiences working in an IKT way. To focus discussion and research in the field, we will be generating a number of concept papers that identify areas in need of greater conceptualization or research. Objective: to understand IKT and its impact by studying funded studies that required the use of an IKT approach. Rationale: important sources of data on integrated KT are studies funded through IKT funding opportunities that require knowledge user partnerships. Much can be learned from these studies about how IKT was operationalized, how the research was conducted, the experiences of researchers and knowledge users working in partnership and the results of the study. These IKT studies will also be used to examine effectiveness by identifying and analyzing their effects and impacts. Given that many of our knowledge user partners are research funders, we intend to exploit opportunities to identify funded IKT studies so that we may study them.
Training
Objective 2 of goal 4 is about creating a training environment for IKT research and supervising and mentoring graduate students and postdoctoral trainees and colleagues. To achieve this objective, the programme has an innovative and bold plan. We have incorporated funding to support one postdoc, two PhD and two master's students a year. This will produce five to seven master's, two to three PhDs and three to four postdocs over the life of the grant with expertise in the science and art of IKT. The CIHR KT evaluation [44] revealed that IKT projects are more likely to develop more highly qualified personnel per $100k grant than a grant of the same value in the open competition ( [15], Table 2 p6). Given the value of producing the next generation of KT scholars, we have also included student/trainee stipends to facilitate the involvement of students in as many of our projects and professional networks as possible. Over the life of the grant, this represents 35-40 studentships. We will also develop short internet training modules on various aspects of IKT for researchers and knowledge users.
The programme will also fund one to two researcher/knowledge user internships per year (eight over the course of the grant). These will be for graduate students and postdoctoral trainees to spend 3 months sharing their research expertise with one of our knowledge user organization partners while they learn about policy making in the real world. This is an efficient way for trainees to learn about policy making while at the same time exposing the organizations to researchers in training. The internship programme will be modeled after CIHR's Science-Policy Fellowships developed by IDG when he was at CIHR (https://www.canada.ca/en/health-canada/services/science-research/career-resources/fellowship-programs/science-policy-fellowships-program.html, Accessed 22 Dec 2017). Interns will be assigned both an academic and a policy maker mentor.
Governance and strategies to reduce risk to the research programme
A governance structure is in place to ensure an enduring focus on excellence, flexibility and ability to capitalize on emerging opportunities and help the programme remain on track. An Executive Committee, chaired by a Program Leader (Scientific Director-IDG), will be responsible for day-to-day operations. It will include the Deputy Scientific Director (AK), two researchers, two knowledge users, one trainee and one research associate. Sub-committees responsible for science (IKT theory, methods and measures), impact (network performance monitoring) and training will provide leadership in these areas. The impact committee will convene an impact workshop with the project leaders to produce a logic model or theory of change for the network and determine how to collect data to test it. An international Advisory Committee (AC) comprising knowledge users and IKT experts will provide guidance on all aspects of the programme, annually review the performance of programme projects and suggest strategies to reduce risk of bias in study design, data analysis and interpretation. Terms of reference for all committees will be finalized in collaboration with members and reviewed annually. The research programme team will use a collaborative decision-making approach.
We have designed the programme so that no single project carries all of its intellectual weight, which would put the programme at risk should that project fail. The breadth, nature and number of projects is one risk-mitigating strategy: the whole is greater than the sum of its parts. Other strategies to ensure success, including a formalized process for prioritization, peer review, optimization of quality and monitoring of project progress, will be developed to ensure that only the best, most relevant projects are advanced. Each project will be required to have a written proposal, which will be reviewed by the Advisory Committee in terms of its relevance to the programme's objectives; potential to generate new knowledge; study design and methods; potential to achieve the intended outcomes/impact; and resources required.
Monthly team teleconferences and one annual face-to-face meeting will maintain team cohesiveness and momentum and facilitate knowledge sharing. Team meetings, along with the annual review of projects by the Advisory Committee, will identify challenges faced by projects and marshal the collective wisdom of the team and/or Advisory Committee to overcome them. The diversity of the team and the richness of its content, scientific and knowledge user expertise will be a considerable asset for finding mitigating strategies.
Research programme limitations
The most significant limitation relates to the initial use of observational and quasi-experimental study designs. Given the focus on research partnerships, we expect that researchers and knowledge users will not be sympathetic or agreeable to experimental study designs that would require being randomized to use an IKT approach. However, to maximize overall scientific rigor, the research programme will rely on mixed methods and triangulation of findings and strive to select the most rigorous study designs for individual projects. For example, the use of both retrospective and prospective case studies is preferable to using only retrospective case studies. Another example is that we will study the influence of funded IKT studies by comparing the resulting impacts with the impacts of curiosity-driven research (essentially a non-randomized control group). We also anticipate that rigor will be increased by including projects that involve different types of knowledge users (e.g. patients, indigenous groups, clinicians, health services decision-makers, funders, etc.) and examine different levels of partnership (e.g. project, health authority, etc.). These settings will allow us to describe dominant patterns across varied arrangements, thereby enhancing the generalisability of the work. During the course of the 7-year lifespan of the programme, we also expect to build on the lessons learned from the first wave of studies and to propose and conduct more rigorous and methodologically innovative projects in subsequent waves. We also anticipate that in future prospective case studies, we will be able to introduce and evaluate interventions to improve partnering.
Knowledge translation
Our KT strategies consist of two approaches: IKT and end-of-project KT/knowledge mobilization. In keeping with the focus on integrated knowledge translation, the Integrated Knowledge Translation Research Network will use an IKT approach in all of its studies to ensure the projects address the issues of concern of our knowledge user partners and hopefully produce useful findings that can be acted upon by our and other knowledge users.
Our end-of-project KT activities will be guided by the CIHR's Guide to Knowledge Translation Planning [69]. For academic audiences, we will produce peer-reviewed journal articles for relevant journals. For knowledge user and stakeholder audiences, we will use a number of strategies to disseminate our work. To facilitate dissemination, we will create a website for the network that will house all the tools and products we produce. We will create a web blog that will serve as a vehicle for early dissemination of findings, engaging the public and cross-fertilizing our ideas with each other and with scientists in other areas. We will use social media (e.g. Twitter) to create a presence for the IKT Research Network. We will also use a newsletter to inform audiences about our activities and to disseminate findings.
The suite of training materials, tools and sessions described above will be available online to help researchers and knowledge users build their capacity to engage in IKT. Another Network KT strategy for disseminating findings and capacity building in IKT will be the hosting of a bi-annual symposium on the State of the Art and Science of IKT. The symposium may occur around an annual meeting of the Canadian Association of Health Services and Policy Research (CAHSPR), KT Canada's annual general meeting or another conference. The purpose of these symposia will be to create a forum for knowledge users and researchers to share their experiences with partnering, present findings from the latest research on how best to undertake collaborative research, explore opportunities for working together/ network development and offer skill-building seminars and workshops on doing IKT, strategies for effective research partnerships and maintaining relationships.
We also intend to host events similar to the CIHR Best Brains Exchanges [70] with the National Alliance for Provincial Health Research Organizations (NAPHRO), the Health Charities Coalition of Canada, health sector organizations and the Canadian research Tri-Councils (Canadian Institutes of Health Research, Social Sciences and Humanities Research Council, National Science and Engineering Research Council) around the research programme findings. These exchanges will bring together researchers and policy makers/administrators in a relaxed and confidential environment to discuss IKT research and its policy implications. We will include trainees in these events so they can learn how they work, how to host them and to make connections with policy makers, health system managers and funders.
Discussion
We have proposed the first interdisciplinary, systematic and programmatic research endeavor and network focusing on IKT. The research programme was developed and will be executed with knowledge user organization executives, managers, policy makers, clinicians and patients. We will ground the programme in knowledge generated through systematic, scoping and realist reviews. Taking advantage of our pre-existing productive relationships with provincial, national and international organizations, we will use ongoing and future natural IKT experiments as multiple case studies in order to study IKT in depth. Case studies will be retrospective and prospective as the 7-year grant timeline will enable us to undertake prospective longitudinal studies of IKT. We will study, in real time, the initiation of partnerships, funding processes, the research lifecycle and then outcomes/impact post project. In the latter years of the programme, we anticipate that these living laboratories [71] will also facilitate testing of strategies to improve the efficiency and effectiveness of the IKT approach. The research will also provide scientific evidence on how to reliably and validly measure collaborative research partnerships and their impacts. Built into the programme is a vibrant training and mentoring environment for trainees and researchers interested in the science of IKT and its application.
By conducting a meta-synthesis of multiple case studies and other strategic studies undertaken during the early years of the programme, we will be able to demonstrate how IKT works, under which circumstances and with which knowledge user groups. We will determine what IKT can and cannot do and learn how researchers and knowledge users develop and maintain research partnerships. When available, we will assess the impact of IKT on health system and patient outcomes. We will also ascertain how to promote IKT among knowledge users/knowledge user organizations and researchers. Significant potential and timely opportunities exist for improving how IKT is practiced and supported. By better understanding IKT, developing instruments to measure it and its impact, and designing effective strategies that support IKT, we will be positioned to improve knowledge translation and more thoroughly reap the benefits of research.
‘Meet me at the ribary’ – Acceptability of spelling variants in free-text answers to listening comprehension prompts
When listening comprehension is tested as a free-text production task, a challenge for scoring the answers is the resulting wide range of spelling variants. When judging whether a variant is acceptable or not, human raters perform a complex holistic decision. In this paper, we present a corpus study in which we analyze human acceptability decisions in a high stakes test for German. We show that for human experts, spelling variants are harder to score consistently than other answer variants. Furthermore, we examine how the decision can be operationalized using features that could be applied by an automatic scoring system. We show that simple measures like edit distance and phonetic similarity between a given answer and the target answer can model the human acceptability decisions with the same inter-annotator agreement as humans, and discuss implications of the remaining inconsistencies.
Introduction
Imagine a listening comprehension task where a student listens to two people scheduling a meeting at the library. The student is then supposed to answer the question 'Where do they want to meet?' and writes 'ribary' instead of 'library'. Is this answer acceptable or not?
The answer to this question is not an easy one. Human experts perform a complex holistic decision in such a case, primarily based on whether they assume that the learner understood the right answer (see Section 2). The aim of this paper is to get a deeper understanding of which factors influence the acceptability of a spelling variant and, ultimately, of how to model this decision automatically. We thereby aim at a transparent model whose features make it possible to explain under which conditions the system accepts a variant and under which it does not. To this end, we conduct a corpus study based on real learner answers and human ratings in a high stakes test of German as a foreign language and explore different operationalizations of spelling variant acceptability. We show that our classifier does not yet reach an adjudicated gold standard, but that the human decisions can be approximated up to the same level as human-human agreement. Finally, we discuss possible reasons for and implications of the remaining inconsistencies.
The remainder of the paper is structured as follows: In Section 2, we give some background about listening comprehension tasks and the role of orthography. In Section 3, we introduce the data set and in Section 4, we analyze the distribution of spelling variants and the human acceptability decisions. Section 5 examines different features that could be used to operationalize the holistic human acceptability decisions.
Background
In many high stakes language tests, listening comprehension is tested with a free-text production task (e.g. DALF 1 for French, Goethe Certificate 2 and TestDaF 3 for German, Cambridge Certificate 4 for English). This means that the test takers have to listen to an audio prompt and formulate an answer in their own words. This gives rise to variance in the answers, e.g. synonyms or different syntactic or orthographic variants (Horbach and Zesch, 2019), which makes the automatic scoring of such answers a challenging NLP task.
While variance at the level of wording or syntax is a topic extensively covered both by short-answer scoring in general (Ziai et al., 2012) as well as by computational semantic similarity (Bär et al., 2012), the implications of orthographic variance are an understudied topic in automatic scoring. In e.g. reading comprehension tasks, where test takers can often copy material from the prompt, spelling errors are usually ignored (Horbach et al., 2017). In listening comprehension tasks, however, the assessment of orthographic variants (e.g. ribary or librarie for library) plays a much more central role, as we will briefly outline.
Receptive skills like listening comprehension can only be measured indirectly, i.e. by inferring comprehension from the performance in a derived task (Buck, 2001, p. 99), e.g. multiple-choice or true/false questions or free-text production tasks. All these tasks require skills that go beyond pure listening comprehension (Rost and Candlin, 2014, p. 183ff), e.g. reading comprehension for answering multiple-choice items and writing skills for free-text answers. Test designers have to carefully decide whether such a skill is considered to be relevant for the construct to be tested or not. In the context of academic listening, for example, note-taking is an important skill and therefore considered to be construct-relevant (Kecker, 2015). Orthography, in contrast, is considered a construct-irrelevant skill for the task and should thus be ignored for scoring. This means that if the test-taker had the right answer in mind without being able to express it in an orthographically correct way, the answer should be marked as correct (see e.g. Ryan (2009), Harding et al. (2011)). The crucial difficulty hereby is that the spelling of the word interferes with the assessment whether the test-taker had the right answer in mind. If the test-taker, for example, just produces some vague encoding of the relevant phonetic string, this likewise leads to a spelling variant of the correct answer but it should be marked as incorrect.
Hence, the acceptability of a spelling variant is based on a complex holistic decision that an automatic scoring system is not straightforwardly able to make in the same way. Nevertheless, an operationalization has to be found which leads to ratings that match the human ratings as closely as possible. Furthermore, in a high stakes test it is crucial that the decisions of the automatic scoring system are transparent and understandable to human experts.
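To make such an operationalization concrete, the sketch below shows how simple surface features for an answer/target pair could be computed and later fed to a classifier. The feature set, the crude phonetic normalization and the example pairs are illustrative assumptions only; they are not necessarily the features or resources used later in this paper, which targets German and proper phonetic similarity measures.

```python
def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def phonetic_key(word: str) -> str:
    """Very crude phonetic normalization (illustrative only): lowercase,
    merge a few similar-sounding graphemes, collapse doubled letters."""
    word = word.lower()
    for src, tgt in (("ph", "f"), ("th", "t"), ("ck", "k"), ("ie", "i"),
                     ("y", "i"), ("v", "f"), ("z", "s")):
        word = word.replace(src, tgt)
    collapsed = []
    for ch in word:
        if not collapsed or collapsed[-1] != ch:
            collapsed.append(ch)
    return "".join(collapsed)

def acceptability_features(answer: str, target: str) -> dict:
    dist = edit_distance(answer.lower(), target.lower())
    return {
        "edit_distance": dist,
        "normalized_edit_distance": dist / max(len(answer), len(target), 1),
        "same_phonetic_key": phonetic_key(answer) == phonetic_key(target),
    }

print(acceptability_features("ribary", "library"))
print(acceptability_features("librarie", "library"))
```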
Data Set
In this paper, we experiment with data from the digital TestDaF. It is a high stakes test designed for students planning to apply to study at a German university. It assesses test-takers' language abilities at the TestDaF levels 3, 4 or 5, corresponding to the CEFR levels B2 to C1. The listening comprehension section consists of seven different task types, including selected-response item formats like multiple-choice questions, as well as three constructed-response tasks where test-takers are asked to write short answers, between single words and a few sentences in length. In this paper, we concentrate on the task that elicits very short answers of a maximum of five words per prompt. This task is particularly suitable for studying the role of spelling variants because other sources of variation are limited compared to longer textual answers.
In this task, test-takers listen to a pre-recorded conversation between two or three native speakers in a situation typical for everyday student life, e.g. a conversation between a student and a professor. Test-takers are presented a table, form or chart related to the content of the listening text with five blanks that are to be filled while listening to the input text. See Figure 1 for an example. While testtakers can type in a maximum of five words per blank, all blanks can be answered correctly with one or two words.
For the analyses in this paper, we extracted all answers from 17 different prompts, where each prompt corresponds to one blank in the task described above. Table 1, column FULL, shows some basic statistics of the extracted data. Each answer had been manually rated by human experts for whether it was acceptable or not.
Human Ratings of Spelling Variants
In the following, we will focus on spelling variants in the data set.

Figure 1: Example of a listening comprehension task in the digital TestDaF that elicits short free-text answers. Target answers are given in blue. We see a timetable for a job fair with the days as columns and morning and afternoon activities (Was?) and the corresponding locations (Wo?) as rows. The upper left gap, e.g., prompts the test-taker to complete the entry Presentation about the topic "a career in ___" with the target answer being public service.
Distribution of Spelling Variants
Two annotators labeled all answer types with a category that describes in which way the answer deviates from the target answer. For a subset of about 500 answer types, we compute the agreement of our two raters on the binary decision whether the answer is a spelling variant or some other variant. Other variants include, for example, grammatical deviations (e.g. singular/plural), synonyms (Speicherstick 'memory stick' for USB-Stick), or answers that are incomplete (Raum for Raum 5), unintelligible (OS) or referring to something different (Kaffee 'coffee' for Workshops). Inter-annotator agreement is Cohen's κ = .78, which shows that even for humans, distinguishing spelling variants from other variants, especially grammatical variants, is not trivial.
The two annotators then discussed those cases where they disagreed and decided on a final gold label. For the analyses in this paper, we extracted all answers gold-labeled as spelling variants, including real-word errors. Note that answers which differ from the target answer only with regard to capitalization, hyphenation or splitting a compound in two parts are not part of this set because they are always acceptable. Table 1, column SPELL, shows some statistics of the spelling variant sample. In total, about 16% of the different answer variants are attributable to spelling, showing that they account for a nonnegligible amount of variance in the data.
The distribution of spelling errors follows a Zipf distribution, i.e. most of the spelling variants in our data set occur only once while a few can be found several times. In other words, different test-takers make different kinds of errors; hence it is not possible to foresee all cases beforehand and include them in the rating guidelines or to hard-code them in an automatic scoring system.
The left panel of Figure 2 shows the number of different spelling variants per prompt. One can see that some prompts seem to be more prone to spelling errors than others, with some prompts triggering more than 30 different variants and others only triggering two. We found that there are more spelling variants in prompts with longer target answers than with shorter ones (Pearson correlation r =.58). As one can see in the right panel of Figure 2, the acceptance rate of spelling errors according to the human gold standard varies quite a lot. While for some prompts, most of the variants are accepted, for others, most are rejected. In total, 48% of the spelling variant types are accepted.
Manual Acceptability Decisions
Test-takers' responses were rated by human experts in a dichotomous format as either accepted or rejected. Inconsistencies were adjudicated by an additional annotator. Some examples are shown in Table 2. Human raters also need clear criteria to ensure that they mark according to the same standard (Weir, 1993). To achieve this, they were provided with rating guidelines, rater training sessions and standardization meetings.
The rating guidelines consist of general parts, for example that common abbreviations are accepted in an answer, and item-specific parts that contain samples of correct and incorrect answers as well as a description of what is in general expected of a correct response for this item. For example, the guidelines for the target answer USB-Stick include the following:
• USB Stik is an accepted spelling variant, but USB Tick and USP Stick are not
• Speicherstik ('memory stick') is an accepted synonym with an accepted spelling error (stik instead of stick)
• USB Gerät ('USB device') is not accepted because it is too general
• USB alone similarly does not contain enough information

We compute the inter-annotator agreement of the human experts for the acceptability decision on the same subset as for the annotation of whether something is a spelling variant. We observe that spelling variants are substantially harder for humans to judge than other answer variants, with a κ of .60 for spelling variants as opposed to .83 for all other items (see Table 3). Such scoring inconsistencies by human raters, despite regular training, annotation guidelines and thorough pre-testing, are in line with Buck (2001).
Operationalizing Acceptability Decisions
In the following, we will analyze criteria for the acceptability ratings of spelling variants which could be used by an automatic system. We base our analyses on the set of different spelling variant types. Throughout, we use the adjudicated labels as the gold standard.
Surface Distance to Target
The manual scoring guidelines do not prescribe how many errors per word are allowed in order for the answer to count as correct. However, in our sample we can see that the Levenshtein distance between a given answer and the target answer is correlated with the acceptability of the answer. This is detailed in Table 4, column SURFACE. However, despite a trend that words with higher Levenshtein distances are less likely to be accepted, we do not see a threshold above which all answers are rejected or below which all are accepted. Most frequently, we find a Levenshtein distance of 2 between the given and the target answer. Recall that answers which differ from the target answer only with regard to letter case, hyphenation or splitting a compound in two parts are not included in our spelling variant data set because these deviations by themselves are always acceptable. However, an inspection of the included spelling variants showed that many answers mix capitalization or word-splitting errors with other error types like letter substitutions, e.g. text entworft for Textentwurf. The Levenshtein distance currently does not take into account that, e.g., a capitalization error by itself is not as problematic as the substitution of a different letter. This may blur the actual influence of the Levenshtein distance. Therefore, we standardize the given answers and the target answers by lowercasing, removing hyphens and whitespace, and then re-compute the Levenshtein distance.
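A minimal sketch (ours; the function names are illustrative and not taken from the paper) of this standardization step and the character-level Levenshtein computation:

```python
# Sketch of standardization + character-level Levenshtein distance.
# Function names are illustrative, not from the paper.

def standardize(answer: str) -> str:
    """Lowercase and remove hyphens and whitespace before comparison."""
    return "".join(ch for ch in answer.lower() if ch not in "- \t")

def levenshtein(a: str, b: str) -> int:
    """Plain edit distance (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Example: mixed capitalization/whitespace error plus one substitution
given, target = "text entworft", "Textentwurf"
print(levenshtein(given, target))                            # raw distance
print(levenshtein(standardize(given), standardize(target)))  # standardized distance
```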
We can see that a clear majority of standardized answers only has a Levenshtein distance of 1 to the target answer (see Table 4, column STANDARDIZED). Furthermore, there is a clearer trend that the majority of answers with a distance of 1 is accepted while most answers with a higher distance are rejected. Still, an automatic classifier that accepts all answers with a Levenshtein distance of 1 and rejects all other answers would have an accuracy of only 71%. This is clearly above the majority-class baseline of 52% (achieved if all spelling variants are classified as rejected) but far from sufficient for use in practice.
Influence of Keyboard
There are spelling deviations which are intuitively recognized as typos, e.g. Öffentlivchendienst for öffentlichen Dienst. A typo implies that the test-taker actually knew the word, so that it should be marked as correct. As a proxy for whether a spelling variant is actually a typo, we can check whether the substitution or insertion of an erroneous character pertains to a key adjacent to the target key.
Hence, our operationalization of what counts as a typo is as follows: if a standardized answer contains exactly one substitution or one insertion of a character which is adjacent to the target key on a keyboard with QWERTZ, QWERTY, or AZERTY layout, we consider this answer as 'probably only containing a typo'. Using this method, we identified 18 unique typos in the analyzed sample. In 13 of these answers, there are additional deviations in terms of capitalization or the use of whitespace. The human experts scored (only) 12 of the 18 answers as correct, which shows that a spelling variant that is likely a typo is not automatically accepted. The human experts reported that since they cannot know on which type of keyboard a test-taker wrote the answer, they do not explicitly treat (potential) typos differently from other types of errors.
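The heuristic can be sketched as follows (our illustration; the adjacency table below is a partial QWERTZ layout, not the full set of layouts used for the analysis):

```python
# Flag an answer as "probably only a typo" if, after standardization, it
# differs from the target by exactly one substitution or insertion of a key
# adjacent to the intended key. Partial, illustrative QWERTZ layout only.

QWERTZ_ROWS = ["qwertzuiop", "asdfghjkl", "yxcvbnm"]

def adjacent_keys(ch: str) -> set:
    """Keys horizontally or diagonally neighbouring ch on the sketch layout."""
    neighbours = set()
    for r, row in enumerate(QWERTZ_ROWS):
        if ch in row:
            i = row.index(ch)
            for rr in (r - 1, r, r + 1):
                if 0 <= rr < len(QWERTZ_ROWS):
                    for ii in (i - 1, i, i + 1):
                        if 0 <= ii < len(QWERTZ_ROWS[rr]):
                            neighbours.add(QWERTZ_ROWS[rr][ii])
            neighbours.discard(ch)
    return neighbours

def probably_typo(answer: str, target: str) -> bool:
    """True if answer and target (both standardized/lowercased) differ by
    exactly one adjacent-key substitution or insertion."""
    if len(answer) == len(target):                      # candidate substitution
        diffs = [(a, t) for a, t in zip(answer, target) if a != t]
        return len(diffs) == 1 and diffs[0][0] in adjacent_keys(diffs[0][1])
    if len(answer) == len(target) + 1:                  # candidate insertion
        for i in range(len(answer)):
            if answer[:i] + answer[i + 1:] == target:
                # the stray key should neighbour one of the surrounding target keys
                context = target[max(i - 1, 0):i + 1]
                return any(answer[i] in adjacent_keys(c) for c in context)
    return False

print(probably_typo("öffentlivchen", "öffentlichen"))  # 'v' neighbours 'c' -> True
```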
Phonetic Similarity
In German orthography, most sounds can be represented in more than one way, using different characters or character combinations. For example, a long [a:] can be spelled as <a> (Tal 'valley'), <ah> (Zahl 'number') or <aa> (Saal 'hall'). This means that there can be answers which differ from the target answer in terms of spelling but which are nevertheless pronounced in the same or a very similar way. As with similarity on the surface level, we can determine similarity on the pronunciation level by computing the Levenshtein distance between a given answer and the target answer on the phoneme level. We obtained the phoneme representation of each answer from the web service G2P of the Bavarian Archive of Speech Signals (BAS) (Reichel, 2012; Reichel and Kisler, 2014; https://clarin.phonetik.uni-muenchen.de/BASWebServices/interface/Grapheme2Phoneme). As one can see from the column PHONEMES in Table 4, most answers with the same pronunciation as the target answer are accepted (85%), but not all. On the other hand, most answers with quite a different pronunciation are rejected, but again there are exceptions. This shows that phonetic similarity alone is not a decisive factor either.
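A sketch of the phoneme-level comparison, assuming the answers have already been transcribed into lists of phoneme symbols (e.g. by the BAS G2P service); the SAMPA-style transcriptions below are rough illustrations, not actual G2P output:

```python
# Edit distance generalized to arbitrary token sequences, here phoneme lists.

def seq_levenshtein(a, b):
    """Edit distance over token sequences (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ta in enumerate(a, start=1):
        curr = [i]
        for j, tb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ta != tb)))
        prev = curr
    return prev[-1]

# "Saal" and a hypothetical misspelling "Sahl" share the pronunciation [za:l],
# so their phoneme distance is 0 even though their character distance is not.
print(seq_levenshtein(["z", "a:", "l"], ["z", "a:", "l"]))   # 0
print(seq_levenshtein(["z", "a:", "l"], ["t", "a:", "l"]))   # 1: Saal vs Tal
```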
Similarity to Other Words
In our data sample, we manually identified a total of 34 spelling variants that resulted in other existing words (real-word errors). Most of them occurred only once, resulting in 27 unique variants. Hence, 11% of all spelling variant types are real-word errors. Not all prompts trigger real-word errors to the same degree. For 8 out of the 17 prompts, no real-word error could be found while one of the prompts triggered eight different real-word error types.
Most of the real-word errors are rejected by the human raters, but not all of them: 3 out of the 27 real-word error types were accepted. What is noteworthy is that all of the accepted real-word errors have a Levenshtein distance (on the character level) to the target word of 1. In contrast, the rejected real-word errors have a mean Levenshtein distance of 3.6. Hence, a factor influencing the acceptability of the real-word error seems to be surface similarity. However, among the rejected answers, there are also four real-word errors with a Levenshtein distance of 1 to the target answer, which shows that there are more complex mechanisms at work. Human experts reported that one factor influencing their decision is whether the meaning of the real-word error would be somewhat plausible yet still incorrect in the context of the given task, and therefore would be confusing in a real-life setting. In contrast, if an answer is far-fetched or consists of a word that is very infrequent, human raters would assume that the error was indeed only an orthographic error and the learner actually meant to write the correct word.
To illustrate this, Table 5 shows some example answers and their acceptability. Most target answers are compound words and the real-word spelling errors mostly pertain to only one part of the word. As a consequence, the error results in a grammatically well-formed answer but often in a non-lexicalized word. In some cases, the meaning of the new compound is far off the meaning of the target answer, e.g. Workshop and Wokshop (in English, the corresponding words are workshop and wok shop, i.e. the compound that is a result of the spelling error would have to be written as two words, which is not the case in German). In other cases, the meanings are somewhat close and could lead to a misunderstanding in real communication, e.g. Textentwurf ('text draft') and Testentwurf ('test draft'). It remains to be seen with a larger sample of accepted real-word errors how well this can be operationalized by an automatic scoring system.
Combination of Features
While all of the criteria presented above play a role in the acceptability decision, we could see that none of these factors alone suffices to differentiate between accepted and rejected answers. In the next step, we examine whether a combination of the features can be used to approximate the human acceptability decisions. We aim for a model that yields interpretable results so that one can identify under which conditions a spelling variant is accepted or rejected. In order to do so, we train different decision trees on the whole set of spelling error types and the adjudicated gold labels using the R package rpart (Therneau and Atkinson, 2019). We then apply the trees to a test set of 127 spelling variant types from 5 new prompts, i.e. a new set of learner and target answers. We use classification accuracy as the evaluation metric.
In addition, we apply the trees to the training data set itself in order to get an estimate of how consistently the data can be modeled, i.e. whether the features suffice to tell accepted and rejected answers apart or whether there are answers with the same combination of features but different human judgments. The results are shown in Table 6.
Baselines. If all instances are classified as rejected, this majority-class baseline reaches an accuracy of 52% on the training set. In the test set, the classes are evenly distributed, i.e. the baseline is 50%. Using character edit distance alone as the classification criterion, as discussed in Section 5.1, the accuracy rises to 71% on the training set and 73% on the test set.
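For concreteness, the two baselines can be expressed in a few lines; the (distance, gold label) pairs below are invented toy values, not the actual TestDaF data:

```python
# Toy illustration of the two baselines; gold is 1 for "accepted".
data = [(1, 1), (1, 1), (2, 0), (1, 0), (3, 0), (4, 0), (2, 0), (1, 1)]

def accuracy(preds, golds):
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

golds = [g for _, g in data]
majority = int(sum(golds) >= len(golds) / 2)             # majority-class label
print("majority baseline:", accuracy([majority] * len(golds), golds))

# Edit-distance rule: accept iff the standardized Levenshtein distance is 1
preds = [int(d == 1) for d, _ in data]
print("distance-1 rule:  ", accuracy(preds, golds))
```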
Simple Trees. First, we build a decision tree with default configuration using the features and their operationalizations described in the previous sections:
• edit distance on the character level between the standardized given answer and the standardized target answer, i.e. ignoring letter case, hyphens and whitespace (std_levenshtein, numeric)
• edit distance on the phoneme level (phon_levenshtein, numeric)
• whether the word is a real-word error (realw, binary)
• whether the word probably only contains a typo (probably_typo, binary)

This tree is grown with default parameters, which in particular means that it is automatically pruned, i.e. not grown to full depth. For a predictive model, this is necessary in order to prevent overfitting on the training data. The resulting tree is shown in Figure 3. In prose, the tree accepts a spelling variant if the edit distance on the character level is < 2, it is not a real-word error, and the edit distance on the phoneme level is < 4. The nodes show how many data points fall into the respective class and how many of them are categorized correctly when applied to the training data. In total, the tree reaches an accuracy of 74.2% on the training set and 70.9% on the independent test set. For the test set, this is worse than using character edit distance alone.
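The paper fits these trees with the R package rpart; the following is an illustrative re-creation in Python with scikit-learn, using an invented toy feature table (the feature names follow the list above, and the depth/leaf-size limits only approximate rpart's automatic pruning):

```python
# Illustrative scikit-learn stand-in for the rpart setup described above.
# The feature rows are invented toy examples, not the actual TestDaF data.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

train = pd.DataFrame(
    [   # std_levenshtein, phon_levenshtein, realw, probably_typo, accepted
        (1, 0, 0, 1, 1),
        (1, 1, 0, 0, 1),
        (2, 3, 0, 0, 0),
        (1, 0, 1, 0, 0),
        (4, 5, 0, 0, 0),
        (1, 2, 0, 0, 1),
    ],
    columns=["std_levenshtein", "phon_levenshtein", "realw", "probably_typo", "accepted"],
)

features = ["std_levenshtein", "phon_levenshtein", "realw", "probably_typo"]
# max_depth / min_samples_leaf stand in for rpart's automatic pruning;
# rpart's cost-complexity pruning is not reproduced exactly.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=2, random_state=0)
tree.fit(train[features], train["accepted"])

print(export_text(tree, feature_names=features))
print("training accuracy:", tree.score(train[features], train["accepted"]))
```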
In order to find out whether the features actually suffice to model the data that the tree was trained on, we next grow the tree to full depth. The resulting tree has a depth of 8 (compared to the depth of 3 in Figure 3) but still only reaches an accuracy of 76.2% on the training data. This means that there are answers with the same feature set but different acceptability decisions (see discussion in Section 5.6). As one would expect due to overfitting, the full-depth tree performs worse on the test set than the pruned tree.
Advanced Trees. One potential limitation of the current feature set is that our version of edit distance is not sensitive to word length. Therefore, we normalize the character edit distance by the number of characters in the target word and also allow transpositions of characters to count as one edit (norm_std_damerau_lev). The other three features remain the same. The default pruned tree based on this adapted feature set has a depth of 5 and an accuracy of 75.4% on the training set, which is very similar to the result of the simple tree. See Figure 4 for an illustration of the advanced tree.
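A sketch (ours; function names are illustrative) of the normalized, transposition-aware edit distance underlying this feature:

```python
# Damerau-Levenshtein-style distance (the common "optimal string alignment"
# variant, which counts an adjacent transposition as one edit), divided by
# the length of the standardized target word.

def osa_distance(a: str, b: str) -> int:
    """Edit distance with insertions, deletions, substitutions and
    adjacent transpositions each counting as one edit."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

def norm_std_damerau_lev(answer: str, target: str) -> float:
    """Distance between standardized strings, normalized by target length."""
    std = lambda s: "".join(s.lower().split()).replace("-", "")
    return osa_distance(std(answer), std(target)) / max(len(std(target)), 1)

# A transposed letter pair in a long word yields a small normalized distance.
print(norm_std_damerau_lev("Textenwturf", "Textentwurf"))
```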
However, on the test set, the tree produces much better results than the simple tree with an accuracy of 84.3%. That the result for this tree on the test set is even better than that on the training set indicates that the tree's rules for accepting an answer are indeed transferable to new data sets. In fact, some of the rules even fit the test data better than the training data. For example, 45.6% of the training data and 46.5% of the test data fall into the rightmost leaf node in Figure 4. The answers that fall into this node are predicted to be accepted. In the training data, this decision is correct in 73% of the cases, whereas in the test data, the decision is correct even for 85%.
If we grow the advanced tree to full depth (= depth of 14), the overall accuracy on the training set rises notably, but only to 85.1%. Hence, it still does not reach the adjudicated gold standard but the result is comparable to the human-human agreement of 83%. As we will discuss shortly, the fact that we do not reach 100% accuracy even with this full-grown tree shows that more or different features are needed to tell accepted and rejected answers apart. Since this tree overfits the data, its performance on the test set is much worse than that of the pruned tree, hence it is not suitable for predicting new data points.
Discussion
We observe that our features do not suffice to perfectly model the acceptability decisions of human raters according to an adjudicated gold standard. There are conflicting cases which cannot be resolved on the basis of the features we currently examine. Some examples are given in Table 7. Differences between the accepted and not-accepted cases are subtle, and human experts often argue in terms of whether an answer looked "too far off" without being able to specify a general rule supporting their decision. Additional features might be able to distinguish between those cases. However, it may also mean that the human ratings are not fully consistent, which is in line with our observed inter-annotator agreement. In fact, the accuracy of the overfitted tree (85%) is very similar to the human-human agreement on the same data (83%), which we discussed in Section 4.2; hence, we may not expect a system to ever go significantly beyond this value. Therefore, basing the acceptability decision on objectively measurable features instead of individual holistic decisions of human raters could be a way to arrive at more consistent and more explainable results, especially in a high stakes test.
Conclusion and Future Work
We presented an analysis of the rating of spelling variants in a listening comprehension task from the TestDaF test. We found that spelling variants are more challenging to score for human experts than other types of variants. Furthermore, we examined how the acceptability of spelling variants can be operationalized through objectively measurable features such as surface and phonetic edit distance.
Properties of cyanobacterial UV-absorbing pigments suggest their evolution was driven by optimizing photon dissipation rather than photoprotection
An ancient repertoire of UV-absorbing pigments, which survive today in the phylogenetically oldest extant photosynthetic organisms, the cyanobacteria, points to a direction in the evolutionary adaptation of the pigments and their associated biota: from largely UV-C-absorbing pigments in the Archean to pigments covering ever more of the longer-wavelength UV and visible in the Phanerozoic. Such a scenario implies selection for photon dissipation rather than photoprotection over the evolutionary history of life. This is consistent with the thermodynamic dissipation theory of the origin and evolution of life, which suggests that the most important hallmark of biological evolution has been the covering of Earth's surface with organic pigment molecules and water to absorb and dissipate ever more completely the prevailing surface solar spectrum. In this article we compare a set of photophysical, photochemical, biosynthetic and other germane properties of the two dominant classes of cyanobacterial UV-absorbing pigments, the mycosporine-like amino acids (MAAs) and the scytonemins. Pigment wavelengths of maximum absorption correspond with the time dependence of the prevailing Earth-surface solar spectrum, and we proffer this as evidence for the selection of photon dissipation rather than photoprotection over the history of life on Earth.
Introduction
Once the subject of mystical and metaphysical interpretations, the explanation of life on Earth has slowly gained a physical-chemical grounding in biochemistry and non-equilibrium thermodynamics. Founded on Boltzmann's nineteenth century insights into thermodynamics, then further elaborated by twentieth century scientists, notably by Ilya Prigogine, nonequilibrium thermodynamics attempts to explain the phenomenon of life as "dissipative structuring"; an out of equilibrium organization of matter in space and time under an impressed external potential for the explicit purpose of producing entropy (Prigogine, 1967;Glansdorff and Prigogine, 1971).
Using the formalism of Prigogine's "Classical Irreversible Thermodynamics" in the nonlinear regime, Michaelian (2009;2013;2016) has proposed a theory for life's origin and evolution as microscopic self-organized dissipative structuring of organic pigment molecules and the dispersal of these over the entire surface of the Earth as a response to the impressed high-energy (UV-C to visible) solar photon spectrum prevailing at Earth's surface. All physicochemical structuring associated with the pigments, such as the photosynthetic organisms primarily, and heterotrophic organisms secondarily, can be regarded as agents for the synthesis, proliferation and distribution of the pigments. The theory suggests that it is the thermodynamic imperative of increasing the entropy production of Earth in its solar environment that is behind the vitality of living matter as seen in its ability to proliferate, diversify, and evolve.
The theory explains satisfactorily, for example, why the three major classes of photosynthetic pigments (chlorophylls, carotenoids and phycobilins) of phototrophic organisms dissipate most of the absorbed photonic energy into heat (a process known as non-photochemical quenching, NPQ) while funneling only a minute fraction into productive photochemistry (Horton et al., 1996; Ruban et al., 2007; Staleva et al., 2015; Gupta et al., 2015). Moreover, these organisms often contain a vast array of other organic pigments in addition to the photosynthetic ones, whose absorption spectra extend outside of the photosynthetically active radiation (PAR) in the visible, and into the UV-A, UV-B, and UV-C regions, hence allowing full coverage of the past and present incident surface solar spectra (Michaelian, 2012; Michaelian and Simeonov, 2015). In the biological literature these phenomena have been explained primarily through invoking the conventional wisdom of photoprotection (Demmig-Adams and Adams, 1992). Photoprotective roles have especially been attributed to UV-absorbing biological pigments (e.g., mycosporine-like amino acids and scytonemins in cyanobacteria and algae; flavonoids and anthocyanins in plants, etc.) since they do not seem to contribute to photosynthesis at all (Moisan and Mitchell, 2001). These theories usually trace the photoprotective role of UV pigments back to the beginnings of cellular life in the early Archean, when UV radiation was a far more important component of the surface solar spectrum than it is today (Sagan, 1973; Mulkidjanian and Junge, 1997; Garcia-Pichel, 1998; Cockell and Knowland, 1999; Mulkidjanian et al., 2003). UV screening ostensibly conferred on pigment-containing organisms Darwinian advantages for survival in the harsh Archean environment of intense UV radiation.
However, from a thermodynamic viewpoint, the UV is the region of the solar spectrum with the greatest possible entropy production potential per photon dissipated. Therefore, under the high-UV ambient conditions of our primitive planet, non-equilibrium thermodynamic principles of increasing the entropy production of Earth in its solar environment were probably the motive force for the development of these microscopic dissipative structures in the form of efficient UV-dissipating organic compounds (Michaelian, 2011; 2013), rather than metaphysical forces giving rise to a hypothetical "will to survive" of the individual cell. The evidence for this inextricable link between UV light and nascent life has been reinforced by verifications of the biogenicity of ∼3.5 Ga old euphotic stromatolitic formations (Walter et al., 1980; Awramik et al., 1983; Schopf and Packer, 1987; Schopf, 1993; Schopf et al., 2002; Tice and Lowe, 2004; Van Kranendonk et al., 2008) of evidently photosynthetically active, yet UV-C-bathed, microbial colonies of cyanobacteria-like organisms (Westall et al., 2006; Westall, 2009). If confirmed, an approximately 3.7 Ga old putative stromatolite from an Eoarchean shallow marine environment (Nutman et al., 2016) would make this fossil the oldest evidence for complex life thriving on Earth under intense UV light.
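To make this statement concrete, consider a rough, idealized estimate (our illustration, not a calculation taken from the cited works) of the entropy produced when a single solar photon of energy $h\nu$, emitted from the solar photosphere at temperature $T_{\odot}$, is absorbed and its energy ultimately re-emitted as heat at the Earth-surface temperature $T_{\mathrm{surf}}$:

$$\Delta S \;\approx\; \frac{h\nu}{T_{\mathrm{surf}}} - \frac{h\nu}{T_{\odot}} \;=\; \frac{hc}{\lambda}\left(\frac{1}{T_{\mathrm{surf}}} - \frac{1}{T_{\odot}}\right).$$

With $T_{\odot} \approx 5800$ K and $T_{\mathrm{surf}} \approx 300$ K, $\Delta S$ scales essentially as $1/\lambda$, so a UV-C photon at $\lambda \approx 250$ nm yields roughly twice the entropy of a green photon at $\lambda \approx 500$ nm when fully dissipated; this is the sense in which the UV region carries the greatest entropy production potential per photon.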
In this paper we discuss how the two major classes of cyanobacterial UV-absorbing pigments, the mycosporine-like amino acids and the scytonemins, whose occurrence in organisms today is regarded as a relic from Archean times (Garcia-Pichel, 1998), perfectly match the type of microscopic dissipative structure which non-equilibrium thermodynamic principles would predict for the Archean Earth system.
In the following section we separately detail a set of germane properties of both pigment classes, taken from the literature, whereas in the third section we demonstrate how these properties are consistent with several postulates made in a previous work (Michaelian and Simeonov, 2015). Also, in the third section, we compare the pigments' optical properties with our previous construct of the most probable Earth surface solar spectrum as a function of time (see Michaelian and Simeonov, 2015).
Based on these comparisons we evaluate the relative antiquity of both pigment classes and show how the observed facts support the thermodynamic dissipation theory of the origin and evolution of life (Michaelian, 2009; 2016).

Mycosporine-like amino acids (MAAs)

MAAs and a related group of organic compounds called mycosporines represent a large family of colorless, low-molecular-weight (< 400 u), water-soluble, usually intracellular secondary metabolites widespread in the biological world (Dunlap and Chalker, 1986; Carreto et al., 1990; Rosic and Dove, 2011). The exact number of compounds within this family is yet to be determined, since they have only relatively recently been discovered (for a historical overview, see Schick and Dunlap, 2002, and Řezanka et al., 2004), and novel molecular species are constantly being uncovered. Thus far, however, their number is around 40 (Wada et al., 2015). The name "mycosporine" reflects the fact that these compounds were originally isolated and chemically identified from mycelia of sporulating fungi, where it was thought they played a role in light-induced sporulation (Leach, 1965; Trione and Leach, 1969).
Physicochemical properties
Chemically, both MAAs and mycosporines are alicyclic compounds (see Fig. 1) sharing a central 5-hydroxy-5-hydroxymethyl-2-methoxycyclohex-2-ene ring with an amino compound substituted at the third C-atom and either an oxo or an imino functionality at the first C-atom (Favre-Bonvin et al., 1976; Ito and Hirata, 1977; Arpin et al., 1979; Karentz, 2001). While most authors do not make a clear chemical distinction between the two groups, several authors (for example: Bandaranayake, 1998; Schick and Dunlap, 2002; Moliné et al., 2014; Miyamoto et al., 2014), when using the term mycosporines, refer only to those molecular species with a central amino-cyclohexenone chromophore (also called oxo-mycosporines), and, when using the term MAAs, refer only to molecules with a central amino-cyclohexenimine chromophore (also called imino-mycosporines). The N-substitution on C-3 with different amino acids or amino alcohols is what gives the diversity of molecular structures within both groups (Korbee et al., 2006; Sinha et al., 2007). Within the MAA group, the most common amino acid at the C-3 position is glycine, whereas MAAs also have a second amino acid, amino alcohol or an enaminone system attached at the C-1 position (D'Agostino et al., 2016).
This unique molecular structuring and bonding begets their unique spectral properties. MAAs are considered to be among the strongest UV-A/UV-B-absorbing substances in nature (Schmid et al., 2000), with wavelength absorption maxima (λmax) in the 310-362 nm interval and molar attenuation coefficients (ε) between 28100 and 50000 M⁻¹ cm⁻¹ (Dunlap and Schick, 1998; Carreto et al., 2005; Gao and Garcia-Pichel, 2011). Their absorption spectra are characterized by a single sharp λmax with a bandwidth of approximately 20 nm, only about 2-3 nm apart from the λmax of structurally similar MAAs (see Fig. 2), which makes it very difficult to distinguish these compounds based solely on their absorption spectra (Karentz, 1994; Carroll and Shick, 1996).
The values of λmax and ε depend on the degree of derivatization of the central ring and the nature of the nitrogenous side groups (in particular the presence of additional ...). Smaller, mono-substituted mycosporines (typically oxo-mycosporines) have their λmax values shifted to shorter wavelengths in the UV-B and usually have lower ε values, whereas MAAs (imino-mycosporines) are normally bi-substituted, with higher ε values and λmax values shifted to longer wavelengths in the UV-A (Portwich and Garcia-Pichel, 2003).
For example, the direct metabolic precursor of all mycosporines, 4-deoxygadusol (Fig. 1), which has the minimal level of derivatization, has λmax = 268 nm at acidic pH and λmax = 294 nm at basic pH; mycosporine-glycine (Fig. 1), the simplest oxo-mycosporine and direct precursor of all other mycosporines and MAAs, has λmax = 310 nm, whereas palythine (Fig. 1), an imino-mycosporine, has its λmax shifted to a still longer wavelength. The observed red-shift in λmax is a consequence of the degree of resonance delocalization inside the molecules: the more efficient the electron delocalization (i.e. the stronger the π-conjugation character), the lower the energy requirement for an electronic transition and consequently the higher the λmax and ε values (Wada et al., 2015).
From a thermodynamic perspective, the fate of the electronic excitation energy is also a very relevant aspect of the absorption event, since it is directly linked to the amount of entropy produced by the dissipative microscopic structure (i.e. the polyatomic molecule). Non-radiative, vibrational relaxation pathways of the excited state lead to more efficient energy dissipation and higher entropy production when compared to the fluorescent or phosphorescent radiative decay channels (Würfel and Ruppel, 1985; Michaelian, 2011; 2016). In this respect MAAs prove to be very efficient dissipative structures, although all studies hitherto have discussed these thermodynamically relevant characteristics only from the standpoint of photostability and UV photoprotection. Aiming to fully describe their photophysical and photochemical properties and to expand the evidence on the assigned UV-photoprotective role, Conde and co-workers (Conde et al., 2000; 2004) made several in vitro studies of the excited-state properties and photostability of various MAAs in aqueous solution (see Table 1). The results showed picosecond excited-state lifetimes, very low fluorescence quantum yields (e.g., φ_F(porphyra-334) = 0.0016), very low triplet formation quantum yields (e.g., φ_T(porphyra-334) < 0.05), and very low photodegradation quantum yields (e.g., φ_R(porphyra-334) = 2-4 × 10⁻⁴) for all of the MAAs studied. These results are consistent with a very fast internal conversion (IC) process as the main deactivation pathway of the excited states, which was directly quantified by photoacoustic calorimetry experiments showing that ∼97% of the absorbed photonic energy is promptly dissipated into the surrounding medium as heat (Conde et al., 2004).
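A back-of-the-envelope consistency check (ours, using the porphyra-334 quantum yields quoted above) shows that internal conversion must indeed dominate:

$$\phi_{IC} \;\approx\; 1 - \phi_F - \phi_T - \phi_R \;\gtrsim\; 1 - 0.0016 - 0.05 - 4\times 10^{-4} \;\approx\; 0.95,$$

in agreement with the ≈97% prompt heat release measured by photoacoustic calorimetry (the inequality arises because the quoted φ_T is an upper bound).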
A computational study by Sampedro (2011), using the CASPT2//CASSCF protocol (Olivucci, 2005) and employing palythine as a model compound, confirmed these findings. The study indicates that the fast IC processes connecting the S2/S1 and S1/S0 states are due to the presence of two energetically accessible conical intersection points that can be reached by small geometrical changes in the atomic coordinates. It is now well established that conical intersections (a.k.a. molecular funnels or diabolic points) play a very important role in fast, non-radiative de-excitation transitions from excited electronic states to the ground electronic state of molecules, particularly in many fundamental biological molecules, such as DNA/RNA, amino acids and peptides (Schermann, 2008). They enable effective coupling of the electronic degrees of freedom of the molecule to its phonon degrees of freedom, thereby facilitating radiationless decay by vibrational cooling to the ground state (in the process converting the absorbed high-frequency UV photon into many low-frequency infrared photons), which could make them examples of microscopic dissipative structuring of material in response to the impressed photon potential (Michaelian, 2016).
Ecological distribution
MAAs and mycosporines are cosmopolitan substances in "optical" habitats -planktonic, benthic and terrestrial; with the largest concentrations detected in environments exposed to high levels of solar irradiance (Castenholz and Garcia-Pichel, 2012 and references therein). They are now known to be the most common type of UV-absorbing natural substances, especially among aquatic organisms (Rastogi et al., 2010).
While mycosporines have been reported only in the kingdom Fungi (mycosporine-glycine and mycosporine-taurine are exceptions), MAAs are more extensively distributed among taxonomically diverse organisms (Karsten, 2008). These include: cyanobacteria; heterotrophic bacteria; dinoflagellates, diatoms and other protists; red algae; green algae; and various marine animals, especially corals and their associated biota (for a database on the distribution of MAAs, see Sinha et al., 2007). They seem to be completely absent from terrestrial plants and animals, but are regularly found in terrestrial cyanobacteria and terrestrial algae (Karsten et al., 2007).
An interesting discovery by Ingalls et al. (2010) reveals that MAAs represent a considerable portion of the organic matter bound to diatom frustules, accounting for 3-27% of the total carbon and 2-18% of total nitrogen content of the frustules. Previously established views held that MAAs have mainly an intracellular location in these organisms.
Biosynthesis
The cyclohexenone core of MAAs is derived from intermediates of two fundamental anabolic pathways: the shikimate pathway and the pentose phosphate pathway. These basic biochemical pathways lie at the heart of carbon metabolism and are shared by all three domains of life; the shikimate pathway links carbohydrate catabolism to the biosynthesis of the aromatic amino acids and other aromatic biomolecules, and similarly the pentose phosphate pathway uses glycolysis for the synthesis of pentose sugars, the nucleotide building blocks (Cohen, 2014). Thus, they are considered to have an ancient evolutionary origin, possibly even dating back to prebiotic times (Richards et al., 2006; Keller et al., 2014).
As mentioned in the previous section, a very interesting trait of MAAs is that they are extremely prevalent natural compounds produced by a variety of taxonomically very distant organisms, from simple bacteria to algae and animals. A natural question arises: how can such evolutionarily distant organisms share the same MAA-encoding genes?
Several lines of evidence now suggest that the progenitor of the enzymatic machinery for MAA biosynthesis was probably a cyanobacterium or the cyanobacterial ancestor, while endosymbiotic events and prokaryote-to-eukaryote lateral gene transfer events during evolution account for their prevalence among all other biological taxa (Rozema et al.).
Function: traditional view vs. thermodynamic view
Since their discovery in the 1960s, authors have struggled to assign a single specific physiological function to MAAs, although from the beginning a UV-photoprotective role seemed most conspicuous, largely because of their unique UV-dissipating traits and the fact that their production is stimulated by UV-B. Later, this theory faced serious challenges, for example the failure to find a correlation between intracellular MAA accumulation and UV resistance in certain coral zooxanthellae (Kinzie, 1993), the phytoplankton Phaeocystis antarctica (Karentz and Spero, 1995), the dinoflagellate Prorocentrum micans (Lesser, 1996), certain cyanobacterial strains (Quesada and Vincent, 1997), and the red alga Palmaria palmata (Karsten et al., 2003), etc.
As a response, many researchers in the field came up with their own suggestions for MAA physiological roles, sometimes very different from the sunscreen role, such as osmotic regulation, antioxidant activity, nitrogen storage, accessory pigments, protection from desiccation, protection from thermal stress, and reproductive functions in fungi and marine invertebrates; all of which have also been challenged or discredited (for reviews of the different theories of MAA functions and the challenges they face, see, e.g., Korbee et al.).

From a traditional biological standpoint this apparent lack of a clear defining physiological function for these pigments looks extremely perplexing, especially when taking into account the extraordinary prevalence of these compounds in nature. Darwinian theory in its strictest traditional formulation, with evolution through natural selection operating only at the level of the individual, categorically dismisses this kind of phenomenon, in which an organism wastefully spends free energy and resources on the synthesis of metabolically expensive, nitrogen-containing compounds with no vital physiological function commensurate with their ubiquity and hence no, or little, benefit for its survival and reproduction. According to Darwinian theory, such a biosynthetic pathway, with little or no direct utility to the organism, should have been suppressed or completely eliminated through natural selection. However, exactly the opposite has happened in the course of evolution; MAA biosynthetic genes have not only survived but have undergone extensive dissemination across numerous taxa through horizontal gene transfer. The failure of Darwinian theory to find a niche for MAAs in its classical "struggle for survival" paradigm is a result of it not being soundly grounded in thermodynamics and the universal physical laws (for a discussion on this topic, see Michaelian, 2011; 2016). From the perspective of non-equilibrium thermodynamics, a metaphysical "will to survive" does not exist, making the search for a particular physiological function of MAAs pointless. But MAAs do have a function, and it is a thermodynamic function of energy dissipation or, more generally, entropy production. This thermodynamic function can be readily inferred from their physicochemical properties related to photon dissipation described above. MAAs can be regarded as typical examples of microscopic dissipative structuring of matter for the sole purpose of entropy production through highly efficient dissipation of high-frequency UV photons into heat (Michaelian, 2016). This is the reason for their "coming into being" and tendency to proliferate over the surface of the Earth, as it is the fundamental reason for the origin and evolution of life on Earth, and, in fact, the reason for the ubiquity of organic pigments in the neighborhood of stars throughout the universe (Michaelian and Simeonov, 2017; Michaelian, 2016).
This biological irreversible process of photon dissipation that MAAs and other bio-pigments perform then couples to a secondary abiotic irreversible process of water evaporation from surfaces through the heat it releases into its aquatic milieu (Michaelian, 2012). Evidence exists that the profusion of life and chromophoric dissolved organic matter (CDOM) in the sea-surface microlayer (SML) causes significant heating of the ocean surface, fomenting evaporation (Morel, 1988). CDOM is the fraction of dissolved organic matter in water (DOM) that interacts with solar radiation (Nelson and Siegel, 2013). Light energy absorption by CDOM at the surface of the ocean usually exceeds that of phytoplankton pigments; 54 ± 15% of the total light absorption at 440 nm and > 70% of the total light absorption at 300 nm is due to CDOM. Whitehead and Vernet (2000) also concluded that free-floating MAAs contributed up to 10% of the UV absorption of the total DOM pool at 330 nm during the L. polyedrum bloom. This exudation of pigments by organisms into their environment would also seem to have little Darwinian advantage.
All of the evidence presented suffices to conclude, with some certainty, that MAAs join in function most of the other bio-pigments in nature, which act as catalysts for the dissipation of photons into heat at Earth's surface and the coupling of this heat to other abiotic entropy-producing processes, such as the water cycle, hurricanes, and water and wind currents.
Scytonemins
In 1849, Swiss botanist Carl Nägeli noted yellowish-brown cyanobacterial sheath coloration (Nägeli, 1849), and in 1877 coined the name "scytonemin" for the color-producing pigment (Nägeli and Schwenderer, 1877). Although occasionally mentioned in scientific papers during the twentieth century, scytonemin was not isolated until 1991 when Garcia-Pichel and Castenholz (1991) first made a more extensive study of the compound. Proteau et al. (1993) elucidated the chemical structure of scytonemin, which proved to be a completely novel indolic-phenolic dimeric structure unique among all hitherto known natural organic substances. The carbon skeleton of this novel eight-ring homodimeric molecule was given the trivial name "the scytoneman skeleton" (Proteau et al., 1993). Already in 1994, another scytoneman-type molecule was isolated from the cyanobacterium Nostoc commune, and termed "nostodione A" (Kobayashi et al., 1994). Thus far, four additional substances with a scytoneman-type molecular structure, or a structure derived from it, have been isolated from cyanobacteria: dimethoxyscytonemin, tetramethoxyscytonemin, scytonine (Bultel-Poncé et al., 2004) and scytonemin-imine (Grant and Louda, 2013); for which, in this review, we use the colloquial terms "scytonemins" or "scytoneman pigments".
Physicochemical properties
Scytonemin (Fig. 3), the representative and most common member of this as yet poorly explored family of aromatic indole alkaloids, is a relatively small molecule (544 u) built from two identical condensation products of tryptophanyl- and tyrosyl-derived subunits linked through a carbon-carbon bond (Proteau et al., 1993).
Depending on the redox conditions it can exist in two inter-convertible forms: a predominant oxidized yellowish-brown form, which is insoluble in water and only fairly soluble in organic solvents such as pyridine and tetrahydrofuran, and a reduced form (Fig. 3) with a bright red color that is slightly more soluble in organic solvents (Garcia-Pichel and Castenholz, 1991; Proteau et al., 1993). Dimethoxy- and tetramethoxyscytonemin are methoxylated derivatives of the parent scytoneman skeleton, which can also be seen in scytonemin-3a-imine (a.k.a. scytonemin-imine), where the C-3a atom of scytonemin has been appended with a 2-imino-propyl radical (Grant and Louda, 2013).
Only the structure of scytonine deviates substantially from the dimeric scytoneman skeleton, due to the loss of one para-substituted phenol group and ring openings of both cyclopentenones where successive methoxylation and secondary cyclizations take place (Bultel-Poncé et al., 2004).
A full in-depth photophysical and photochemical characterization of the scytonemins has yet to be attained; thus far only their elementary spectroscopic properties are known. Scytonemin absorbs very strongly and broadly across the UV-C, UV-B, UV-A, violet and blue spectral regions (see Fig. 2).
The mucilaginous extracellular sheath (matrix) consists of heteroglycans, peptides, proteins, DNA and different secondary metabolites (Pereira et al., 2009), and scytonemins are usually deposited in its outer layers, giving the sheath its distinctive dark yellow to brown color (Ehling-Schulz et al., 1997; Ehling-Schulz and Scherer, 1999). Up to 5% of the dry weight of cultured scytonemin-synthesizing cyanobacteria is due to the pigment, but in natural assemblages this value can be even higher (Karsten et al., 1998). Curiously, scytonemin concentrations two to six times higher than those of chlorophyll a have been reported in cyanobacterial cryptobiotic soil crusts in the Oman Desert.
Biosynthesis
The biochemistry and genetics of cyanobacterial scytonemin biosynthesis have been extensively investigated by Soule and co-workers. It was already noted by Proteau et al. (1993), the discoverers of the scytonemin structure, that the scytoneman molecular scaffold is actually a condensation product of the aromatic amino acids tryptophan and tyrosine. Michaelian (2011) and Michaelian and Simeonov (2015) have hypothesized that these were the first amino acids to enter into a photon-dissipation-driven association with nucleic acids in the prebiotic world, a scenario backed by their high conservation inside the DNA-binding sites of photolyase enzymes (Kim et al., 1992; Weber, 2005), a phylogenetically ancient family of enzymes common to all three domains of life (Selby and Sancar, 2006) and even found in viruses (Srinivasan et al., 2001). Not only do these amino acids absorb in the UV themselves (Michaelian and Simeonov, 2015 and references therein), but they also serve as biosynthetic precursors for most known aromatic UV-absorbing bio-pigments, including: anthocyanins, flavonoids and polyphenols in plants, melanins in heterotrophic organisms, scytonemins in cyanobacteria, etc. (Knaggs, 2003).
Eight of the genes that make up the 18-gene scytonemin biosynthesis cluster code for shikimate pathway enzymes for the biosynthesis of tryptophan and tyrosine, while the functions of the rest remain unresolved but are suspected to be involved in the coupling of the tryptophan- and tyrosine-derived precursors for the formation of the scytoneman skeleton.
Function: traditional view vs. thermodynamic view
As with the MAAs, the Darwinian point of view can only describe scytonemin as an efficient protective biomolecule designed to filter out supposedly damaging high-frequency UV radiation while at the same time allowing the transmittance of wavelengths necessary for photosynthesis (Ekebergh et al., 2015).
Within the framework of this traditional "struggle for survival" viewpoint, the majority of authors define scytonemins as an adaptive mechanism of extremophilic cyanobacteria that colonize harsh, inhospitable habitats experiencing high doses of UV insolation. Among the evidence for the attributed photoprotective role is the discovery that up to 90% of incident UV photons are prevented from entering sheathed, scytonemin-producing cyanobacterial cells, thus accomplishing a significant reduction in chlorophyll a photobleaching and maintaining photosynthetic efficiency (Garcia-Pichel and Castenholz, 1991; Garcia-Pichel et al., 1992). Other authors, in addition to the sunscreen role, ascribe supplementary defensive roles to scytonemin, such as protection from oxidative, osmotic, heat and desiccation stress (Dillon et al., 2002; Matsui et al., 2012).
Although it is beyond doubt that the efficient UV absorption and dissipation properties of the scytoneman pigments provide, to some extent, a beneficial effect for the survival of sheathed cyanobacterial cells, the stance that this is the primary reason for the biological production of these pigments may be erroneous. Here are a few examples of serious challenges and inconsistencies that the photoprotection paradigm faces:

2. Inability to explain the production of the strongly UV-C/UV-B-absorbing methoxyscytonemins and scytonine, in spite of the absence of UV-C wavelengths and the low intensity of UV-B in today's surface solar spectrum. The question is raised by Varnali and Edwards (2010): "The realization that scytonemin is the parent molecule of perhaps a whole family of related molecules is important in that an analytical challenge is generated to detect these family members in admixture and in the presence of each other naturally, and also the question is raised about the role of these molecules in the survival strategy processes involving scytonemin; what subtle changes to the radiation absorption process require molecular modification of what apparently is already a highly successful radiation protectant, especially when the molecular syntheses are accomplished in energy-poor situations?"

3. Inability to explain why many species of cyanobacteria do not synthesize scytonemins or MAAs but nevertheless successfully cope with UV-induced cellular damage by employing only metabolic repair mechanisms (Quesada and Vincent, 1997; Castenholz and Garcia-Pichel, 2000).
Soule et al. (2007) developed a scytonemin-less mutant of the cyanobacterium Nostoc punctiforme, which proved to have a growth rate indistinguishable from that of the wild type after both were subjected to UV-A irradiation. The conclusion of the authors was that other photoprotective mechanisms can fully accommodate the absence of scytonemin in the mutant.
In addition, very efficient absorption and dissipation of high-energy photons is not a prerequisite for photoprotection, but it is for dynamical dissipative structuring of material under an external generalized chemical potential. Nature has a simpler way of creating photoprotective molecules, if this was really the intention, by making them either highly reflective or transparent to UV wavelengths (Michaelian, 2016). These problems and paradoxes, generated when trying to explain scytonemins from within the Darwinian photoprotection paradigm, can be resolved by employing established non-equilibrium thermodynamic principles. In this context, we will first address the questions raised by Grant and Louda (2013) and Varnali and Edwards (2010), and then, based on all the evidence presented, we will assign a thermodynamic dissipative role to scytonemins.
The seemingly paradoxical absorption spectra of scytonemin-imine, the methoxyscytonemins and scytonine, which extend outside of the photoprotectively relevant part of the spectrum, make sense only when these bio-pigments are understood as microscopic dissipative structures obeying non-equilibrium thermodynamic directives related to increasing the global solar photon dissipation rate (Michaelian, 2013; Michaelian and Simeonov, 2015; Michaelian, 2016). Under these directives, one of the several ways to increase the global solar photon dissipation rate is by evolving (inventing) novel molecular structures (pigments) that cover ever more completely the prevailing surface solar spectrum (see Michaelian and Simeonov, 2015). This is precisely what is observed in the absorption spectra of the different scytoneman pigments. The strong visible absorption peaks of scytonemin-imine at 437 nm (violet) and 564 nm (green/yellow), of tetramethoxyscytonemin at 562 nm (green/yellow), and of dimethoxyscytonemin at 422 nm (violet), and the strong near-UV-C/UV-B absorption peaks of scytonine (270 nm) and dimethoxyscytonemin (316 nm), lie exactly where the photosynthetic pigments do not peak in absorption (see, for example, Rowan, 1989). It is because of this rich assortment of diverse pigment molecules with complementary absorption bands that cyanobacterial biofilms, mats and soil crusts in nature tend to have high absorptivities, low albedos and appear almost black in color (Ustin et al., 2009).
This fact leads us to an important conclusion about the thermodynamic function of the scytoneman pigments. We believe that it is most reasonable to consider the photon-dissipation role of scytonemins as the terrestrial analogue of the function that MAAs perform in the open aquatic environment. This assertion may be justified by their hydrophobic character and their inextricable connection to the extracellular polymeric substances (EPSs) of the cyanobacterial sheaths. Ekebergh et al. (2015) have shown that scytonemins have the greatest photostability in vivo, where they are embedded in their natural extracellular matrix milieu. These extracellular polymeric substances may therefore be playing the role of providing the dissipative medium required to disperse the excess vibrational energy after photon excitation of the pigment, bringing the system rapidly to the ground state.
In wet terrestrial regions of the planet, the thermodynamic role of photon dissipation coupled to the water cycle is performed mainly by the plant cover, but in arid and semiarid lands, where vegetation is severely restricted, this function is allotted to microscopic assemblages of cyanobacteria, heterotrophic bacteria, algae and fungi known as biological soil crusts or biocrusts (Evans and Johansen, 1999;Belnap and Lange, 2001). It is theorized that these types of microbial communities represented life's pioneering on dry land and were the dominant ecosystem on the continents up until the advent of land plants and animals in the Early Devonian (Beraldi-Campesi et al., 2014).
Michaelian (2013) postulated: "The most important thermodynamic work performed by life today is the dissipation of the solar photon flux into heat through organic pigments in water. From this thermodynamic perspective, biological evolution is thus just the dispersal of organic pigments and water throughout Earth's surface... On Earth, organic molecules are found only in association with water. As described above, this is most likely related to the efficiency of organic pigments dissipating solar photons using the high frequency water vibrational modes to facilitate their de-excitation. Without water they are poor photon dissipaters and easily destroyed by photochemical reactions. This is probably the primordial reason for the association of life with water." In the context of this citation, we emphasize the fact that cyanobacteria isolated from dry regions display very high capacity to excrete large amounts of EPSs (Huang et al., 1998;Hu et al., 2003;Roeselers et al., 2007;Rossi et al., 2012), which are the main constituent of the biofilm matrix and together with microbial filaments play a key structural role in forming the biocrusts (Mager and Thomas, 2010;Karunakaran et al., 2011). The unique hydrophilic/hydrophobic nature of the EPSs enables highly efficient water capture and water storage within the biocrust by allowing the creation of moistened microenvironments where water dynamics is intricately regulated (Colica et al., 2014 and references therein). Hence, crust-covered soils are very hygroscopic and always exhibit higher water content compared to bare neighboring surfaces (Rossi and Phillips, 2015). This phenomenon is exactly what we have postulated earlier, life's fundamental role of "dispersing organic pigments and water over Earth's entire surface" (Michaelian, 2013).
A very conspicuous analogy between these terrestrial macroscopic and microscopic photon-dissipating biological "carpets" can be drawn. In the same manner as ecological succession of plant coverage leads to old climax forests with higher pigment content and lower albedos (Pokorny et al., 2010; Maes et al., 2011), ecological succession in biocrusts leads to an increase in the biomass of the late-stage scytonemin-producing cyanobacteria, and consequently to accumulation of scytonemins in the matrix, an effect macroscopically observed as a darkening of the biocrusted soil (i.e. a decrease in albedo) (Couradeau et al., 2016). During dry periods in deserts, when water availability is very limited, the heat generated from scytonemin's photon dissipation is expected to go predominantly into sensible heat of the biocrusts instead of into the latent heat of vaporization of water, and this is exactly what Couradeau et al. (2016) found when they measured a ∼10 °C higher temperature of biocrust-covered, dark soils in comparison to bare soils.
Discussion
In a previous work (Michaelian and Simeonov, 2015) we posited five basic tendencies that organic pigment evolution on Earth would have followed: (1) increases in the photon absorption cross section with respect to the pigment physical size, (2) decreases in the electronic excited state lifetimes of the pigments, (3) quenching of the radiative de-excitation channels (e.g., fluorescence), (4) greater coverage of the surface solar spectrum, and (5) pigment proliferation and dispersion over an ever greater surface area of Earth.
To examine whether these five tendencies are satisfied with the evolutionary invention of MAAs and scytonemins we compare their properties to those of the aromatic amino acids (AAAs) (see Table 1).
Our reason for choosing the AAAs is twofold: (1) they are considered to be among the earliest chromophoric organic molecules used by life, with a prebiotic origin (a subject discussed earlier in the text, and in Michaelian (2011) and Michaelian and Simeonov (2015) in greater detail), and (2) since both MAAs and scytonemins are derived from intermediates of the shikimate pathway for AAA biosynthesis, they most likely appeared later in evolution than the AAAs, probably when the biosynthetic machinery for the synthesis of the AAAs was already robust; an event that most likely long predated 3.4 Ga, considering that Busch et al. (2016) demonstrated that the ancestral tryptophan synthase of the last universal common ancestor (LUCA) was already a highly sophisticated enzyme at 3.4 Ga. This reasoning is also corroborated by the previously mentioned (see Sect. 2.2.4) identification of Raman spectral biosignatures of scytonemin in ∼ 3.5 Ga old relict fossilized sedimentary geological specimens (Edwards et al., 2007; Pullan et al., 2008).
Based on all the data presented and discussed in Sect. 2, we can state with a high degree of certainty that the fourth and fifth requirements are satisfied with the evolutionary inventions of MAAs and scytonemins.
For the first to third requirements, in addition to the previously discussed material, we offer the data presented in Table 1. The data have been extracted from the available literature and, as of 2017, are exhaustive. All of the compounds listed are representative members of their respective chemical groups. Gadusol is used instead of the more relevant compound 4-deoxygadusol because of the lack of available data on 4-deoxygadusol and because of their chemical relatedness and similar spectroscopic properties (Losantos et al., 2015a, 2015b). The λmax and ε values of gadusol in water are pH-dependent: 268 nm at pH < 7 and 296 nm at pH ≥ 7 (Losantos et al., 2015a), and in Table 1 we use the values for acidic pH, bearing in mind that the Archean seawater was probably slightly acidic, with pH ∼ 6.5 (Holland, 2003). All absorption peaks and attenuation coefficients below about 220 nm, we believe, are due to ionization of the molecules, a process which could destroy them. Photon dissipation at these very short wavelengths does not proceed through a conical intersection, and this is why they are omitted from Table 1 and Fig. 5.
It is our hope that future experiments and studies into the nature and properties of these bio-pigments will help complete the data missing from the table. However, even with the limited data available and presented in this article, a trend compatible with our conjecture is evident. Another conjecture made in Michaelian and Simeonov (2015) states that light in the wavelength region from approximately 280 to 310 nm has never reached the surface of the Earth during its entire geologic history, because during the Hadean and Archean eons these wavelengths were probably absorbed by atmospheric aldehydes (formaldehyde and acetaldehyde) (Sagan, 1973), and from the end of the Archean onwards the gradual accumulation of oxygen and stratospheric ozone was responsible for their attenuation (Matsumi and Kawasaki, 2003; Stanley, 2008). In this earlier paper we also demonstrated how numerous fundamental molecules of life, common to all three domains of life, have strong absorbances across the UV-C, UV-B and UV-A regions except in this interval, which we used as an argument in favor of the thermodynamic dissipation theory of the origin and evolution of life (Michaelian, 2009; Michaelian, 2012).
Here we demonstrate how this "rule" can also be applied to the cyanobacterial UV-absorbing pigments scytonemins and MAAs, which can be considered evolutionary successors to the primordial pigments of life, specifically the AAAs. From the information presented in Table 1, Fig. 5, and Sect. 2, it is evident that none of the compounds discovered so far, from either pigment group, has a strong absorption peak inside this wavelength interval, which is consistent with our conjecture.
The absorption spectrum of scytonemin alone (Fig. 4) has a very interesting shape which seems to adhere perfectly to this pattern. Although it is continuous from ∼ 220 to ∼ 700 nm, there is a dip in the ∼ 275 to ∼ 325 nm interval, and two large maxima at ∼ 250 nm and ∼ 380 nm. This is exactly the kind of shape that would be expected if the selective force for the evolution of this pigment was our proposed Archean surface solar spectrum (Michaelian and Simeonov, 2015). Combining this crucial point with the previously discussed facts on scytonemin, it is tempting to speculate that this pigment had a key role in photon dissipation during the Archean, being capable of dissipating almost the entire Archean surface solar spectrum. The evolutionary invention of scytonemin's derivatives, as well as the mycosporines, the MAAs and still many other extinct and extant biological pigments, most likely resulted from the necessity to complement scytonemin's absorption with pigments that absorbed wavelengths reaching Earth's surface but were poorly absorbed by scytonemin itself. This kind of complementary spectral relationship between scytonemin and the MAAs has been well documented by several authors (e.g., Ehling-Schulz and Scherer, 1999; Ferroni et al., 2010; Castenholz and Garcia-Pichel, 2012) and is illustrated in Fig. 2.
Conclusions
The available data on the ubiquity of pigments covering the region from the UV-C to the infrared, many of them exuded into the environment by the organisms that produce them, make it increasingly difficult to assign to them a protective or antenna role within the Darwinian paradigm of the optimization of photosynthesis for the benefit of the organism. We believe that sense can only be made of this by shifting the paradigm from one of "photoprotection" of the organism to one of thermodynamic optimization of photon dissipation.
A number of contemporary pigment lines, most notably the scytonemins and the mycosporine-like amino acids, appear to harbor relics of ancient biosynthetic production routes based on the most ancient of the amino acids, the aromatics. The aromatic amino acids have known affinities to their RNA anticodons (Majerfeld and Yarus, 2005; Yarus et al., 2009) and were perhaps the first antenna pigments for photon dissipation in the UV-C at the beginnings of life (Michaelian, 2011).
These pigment lines absorb and dissipate rapidly in the UV-C as well as in the UV-B, UV-A and the visible. Some of these pigments are exuded into the environment, which excludes the possibility of assigning them a role in photoprotection. Their strong absorption and dissipation in regions outside the photosynthetically active radiation (PAR) has been perplexing from the perspective of the Darwinian paradigm, since these pigments appear to have little utility to the organisms themselves. In fact, they absorb exactly where the photosynthetic pigments do not (and where water does not) and appear to provide complete coverage of the Archean to present-day Earth surface solar spectra.
It should be emphasized that our current knowledge of the diversity of cyanobacterial, algal and plant pigments, and of the thermodynamic function they perform, is incomplete. For example, there are several indications of an even richer diversity of UV-absorbing pigments in cyanobacteria than those hitherto characterized and classified into the two groups, mycosporines and scytonemins. The chemical structure and other elemental properties of one of these poorly investigated pigments, named gloeocapsin, have yet to be determined, but initial results suggest that it is chemically unrelated to either the MAAs or the scytonemins (Storme et al., 2015). Still other chemically distinct UV-absorbing cyanobacterial pigments, with a unique pterin structure, have been reported elsewhere (Matsunaga et al., 1993; Lifshits et al., 2016). The wavelengths of maximum absorption of these two ill-defined groups of cyanobacterial pigments are listed in Table 1 and are plotted in Fig. 5. As with the mycosporines and the scytonemins, their absorption properties are consistent with the optimization of dissipation of the prevailing photon spectrum at Earth's surface.
Taken as a whole, these data seem to indicate that, rather than photosynthesis being optimized under a Darwinian "survival of the fittest" paradigm, the origin and evolution of life is driven by photon dissipation, with the net effect of covering Earth's entire surface with pigments and water, reducing the albedo and the black-body temperature at which Earth radiates into space. It is our hope that this article will incite further investigation into the proposition that photon dissipation efficacy has been the fundamental driver of biological evolution on Earth.
Rebuilding insight into the pathophysiology of Alzheimer's disease through new blood-brain barrier models
The blood-brain barrier is a unique function of the microvasculature in the brain parenchyma that maintains homeostasis in the central nervous system. Blood-brain barrier breakdown is a common pathology in various neurological diseases, such as Alzheimer's disease, stroke, multiple sclerosis, and Parkinson's disease. Traditionally, it has been considered a consequence of neuroinflammation or neurodegeneration, but recent advanced imaging techniques and detailed studies in animal models show that blood-brain barrier breakdown occurs early in the disease process and may precede neuronal loss. Thus, the blood-brain barrier is attractive as a potential therapeutic target for neurological diseases that lack effective therapeutics. To elucidate the molecular mechanisms underlying blood-brain barrier breakdown and translate them into therapeutic strategies for neurological diseases, there is a growing demand for experimental models of human origin that allow for functional assessments. Recently, several human induced pluripotent stem cell-derived blood-brain barrier models have been established and various in vitro blood-brain barrier models using microdevices have been proposed. Especially in the Alzheimer's disease field, human evidence for blood-brain barrier dysfunction has been demonstrated, and human induced pluripotent stem cell-derived blood-brain barrier models have suggested putative molecular mechanisms of the pathological blood-brain barrier. In this review, we summarize recent evidence of blood-brain barrier dysfunction in Alzheimer's disease from pathological analyses, imaging studies, animal models, and stem cell sources. Additionally, we discuss potential future directions for blood-brain barrier research.
The cerebral vasculature connects the otherwise isolated CNS with the peripheral milieu. However, the brain microvasculature is not simply a "pipe" in the brain and has unique characteristics. The blood-brain barrier (BBB) is the specific function of the microvasculature in the brain parenchyma and contributes to maintaining CNS homeostasis. The BBB is composed mainly of brain microvascular endothelial cells (BMECs), pericytes sandwiched between the basement membrane of the BMECs and the basement membrane of the parenchyma, and the end-feet of astrocytes (Figure 1). These cells are major players in the neurovascular unit (NVU), the system responsible for coordinating the peripheral environment and the CNS (Castro Dias et al., 2019). The choroid plexus also has a barrier function, known as the blood-cerebrospinal fluid barrier, in which epithelial cells instead of endothelial cells form the barrier (Castro Dias et al., 2019). The blood-cerebrospinal fluid barrier is another important transvascular route to the brain, but the present review will focus primarily on the BBB. Within the components of the NVU, BMECs play an essential role in providing BBB functions. BMECs establish continuous tight junctions and exhibit remarkably diminished pinocytotic activity relative to the microvascular endothelial cells found in peripheral organs. These complex tight junctions inhibit the paracellular diffusion of water-soluble molecules into the CNS. BMECs also express specific transporters and efflux pumps on their surface, which take up essential nutrients from peripheral blood or export harmful molecules from the CNS. Furthermore, the restricted expression levels of endothelial adhesion molecules on the surface of BMECs strictly control immune cell trafficking into the CNS (Marchetti and Engelhardt, 2020). Under pathological conditions, the level of expression of endothelial adhesion molecules, such as intercellular adhesion molecule-1 (ICAM-1) and vascular cell adhesion molecule-1 (VCAM-1), is upregulated compared with that under physiological conditions and is crucial for immune cell infiltration into the CNS (Marchetti and Engelhardt, 2020).
In dementia, regardless of the underlying disease etiology, dysfunction of the BBB is observed as a common phenomenon, especially in the late stage (Skillback et al., 2017). Recent advances in imaging have suggested that BBB abnormalities are observed from the early stage of the disease (Yuan et al., 2023); however, the detailed pathophysiology remains unclear. Elucidation of the molecular mechanisms underlying disease-specific or disease-common BBB dysfunction could lead to the development of novel therapeutic strategies directly targeting the BBB. This could be particularly important for neurological disorders for which effective treatments have been lacking thus far. To this end, investigation of the details of the mechanisms of BBB dysfunction in each disease is warranted.
In this review, we focus on Alzheimer's disease (AD), the most prevalent cause of dementia, which is known to be associated with BBB dysfunction. First, we summarize the evidence of BBB changes in AD from humans, mainly from autopsy samples and imaging or biomarker studies. We then describe the molecular mechanisms of BBB dysfunction that have been elucidated by studies using animal models. Although studies in mice have elucidated many detailed molecular mechanisms, species differences cannot be ignored when considering therapeutic applications in humans. Moreover, due to the complexity of BBB functions maintained by various cell types, an experimental model is required that can be developed step-by-step from simple studies on a cell-by-cell basis to more complex systems. Therefore, human induced pluripotent stem cell (hiPSC)-derived BBB models have emerged as new tools in BBB research to address these challenges. Each experimental model has advantages and disadvantages, summarized in Table 1, and researchers must choose appropriate models or methods according to the study. We also discuss the current status and future directions associated with iPSC-derived BBB models.
Search Strategy
We searched all publications in the PubMed database using the keywords Alzheimer's disease, blood-brain barrier, and human induced pluripotent stem cells on 1 September 2023. All publication years were included in the search.
Evidence for Blood-Brain Barrier Alterations in Alzheimer's Disease in Humans
AD is the most prevalent cause of dementia in older adults. Amyloid plaques, consisting of extracellular deposits of polymerized amyloid-β (Aβ), and neurofibrillary tangles, formed by abnormal accumulation of phosphorylated tau in neurons, are pathological hallmarks of the disease (Hyman et al., 2012). From a pathophysiological perspective, there are three major putative alterations of BBB function related to the disease: (1) increased paracellular transfer of blood-borne molecules due to disruption of tight junctions, (2) accumulation of abnormal proteins due to decreased or increased transporter function, and (3) increased infiltration of, or interaction with, peripheral immune cells, resulting in neuroinflammation. First, we review whether these BBB alterations are actually observed in the brains of AD patients (Figure 1). Pathological changes in the BBB have been confirmed in numerous human autopsy cases. Plasma proteins such as fibrin, immunoglobulin G (IgG), and prothrombin, which are not present in the CNS under physiological conditions, were found to leak around blood vessels in the brains of AD patients (Hultman et al., 2013; Halliday et al., 2016; McAleese et al., 2019; Kirabali et al., 2020). Perivascular fibrin correlates with interruptions of tight junction proteins, such as ZO-1 (Fiala et al., 2002). An ultrastructural study showed a substantial presence of pinocytotic vesicles and cytoplasmic inclusions in the endothelial cells in the hippocampus of AD patients (Baloyannis and Baloyannis, 2012). An ELISA using lysates of multiple brain regions found decreased cortical tight junctional molecules such as claudin-5 and occludin in AD patients compared with normally aging individuals, and the expression levels were negatively correlated with tau protein accumulation (Yamazaki et al., 2019; Liu et al., 2020). Immunostaining highlighted the relationship between vascular amyloid depositions and disrupted tight junctions (Carrano et al., 2012), suggesting a possible relevance of BBB dysfunction to the process of plaque accumulation.
Specific transporters in the endothelium are involved in Aβ transportation. P-glycoprotein (P-gp) and low-density lipoprotein receptor-related protein 1 (LRP-1) are expressed on the luminal (blood-facing) side and the abluminal (parenchyma-facing) side of endothelial cells, respectively, and are related to Aβ efflux (Zhang et al., 2022). By contrast, the receptor for advanced glycation end-products (RAGE) is expressed on the luminal side of brain endothelial cells and mediates transcytosis of plasma-derived Aβ. In AD, alterations of these specific transporters have been observed. Several immunostaining studies performed on autopsied brain samples from AD patients have shown decreased expression of P-gp and LRP-1 but increased expression of RAGE in microvessels (Deane et al., 2004; Vogelgesang et al., 2004; Jeynes and Provias, 2008; Chiu et al., 2015; Halliday et al., 2016; Bourassa et al., 2019). However, a proteomics study using isolated BMECs from the hippocampus and parietal lobe did not observe any significant difference in the protein levels of P-gp and LRP-1 between AD patients and control individuals (Storelli et al., 2021). The potential discrepancy between these findings might arise from differences in vascular zonation. The isolated BMECs in the latter study comprised all types of endothelial cells in the brain parenchyma; however, recent single-cell RNA sequencing studies have shown that transcriptomes differ significantly depending on vascular zonation (Garcia et al., 2022). The expression of specific transporters in AD patients may therefore be altered in a selective population of vascular endothelial cells.
Neuroinflammation also plays a pivotal role in disease progression in AD.
In AD, neuroinflammation is often discussed mainly in the context of brain-resident microglial abnormalities (Webers et al., 2020), but peripheral immune cells infiltrating the CNS also induce neuroinflammation. The BBB is the gatekeeper for peripheral immune cells and actively contributes to immune cell trafficking into the CNS in other neurological diseases (Marchetti and Engelhardt, 2020). Additionally, in AD, there is some evidence in humans of an altered BBB that contributes to immune cell trafficking into the CNS, as described in the following. Isolated microvessels from AD patients highly expressed proinflammatory cytokines, indicating that BMECs contribute to neuroinflammation in AD (Grammas and Ovase, 2001). Furthermore, increased numbers of macrophages, neutrophils, natural killer cells, T cells, and B cells infiltrate the vessel wall or perivascular space in brain areas such as the hippocampus and frontal cortex that are typically affected in AD (Fiala et al., 2002; Hultman et al., 2013; Gate et al., 2020; Liu et al., 2023).
Other cellular components of the BBB are also altered in AD. Pericyte coverage decreases prominently with progression of the Braak stage of AD and is inversely proportional to plasma protein leakage (Sengillo et al., 2013; Halliday et al., 2016; Kirabali et al., 2020). An ultrastructural study showed an increased number of pinocytotic vesicles and abnormal mitochondria in pericytes (Baloyannis and Baloyannis, 2012). Astrocytes are also structurally and functionally related to AD, and reactive astrocytes, which highly express glial fibrillary acidic protein, are increased in the AD brain (Serrano-Pozo et al., 2013; Gomez-Arboledas et al., 2018; Kirabali et al., 2020). The end-feet of perivascular astrocytes swell (Higuchi et al., 1987); however, the coverage of small capillaries by astrocytes remains intact (Kirabali et al., 2020). Aquaporin 4 is a water channel highly expressed in the end-feet of astrocytes, and its perivascular staining was decreased in small vessels in the frontal cortex of AD patients (Zeppenfeld et al., 2017), suggesting that perivascular astrocytes are also altered even when they remain attached.
Most evidence for BBB dysfunction in humans is derived from histological studies of end-stage AD patients. Because the functions of the BBB are maintained by various types of cells in the CNS, the loss and dysfunction of each cell due to neurodegeneration consequently leads to secondary BBB breakdown, and autopsy samples cannot exclude secondary postmortem effects. In addition, pathological aggregates such as Aβ and tau protein themselves also evoke vascular injury and inflammation (Wang et al., 2021). Therefore, it is difficult to determine from histology whether BBB dysfunction is a causative factor in the onset and progression of the disease or merely a consequence of neurodegeneration. The advanced imaging technologies and biomarkers described in the next section may provide a clue.
Evidence for Blood-Brain Barrier Dysfunction in Early-Stage Alzheimer's Disease: Does Blood-Brain Barrier Dysfunction Precede Neurodegeneration?
Biomarkers in cerebrospinal fluid (CSF) have been used to detect BBB dysfunction in living individuals. The Qalb index, which is the ratio of CSF to blood albumin and an indicator of how much serum albumin is leaking into the CNS, has been a conventional biomarker of BBB leakage (Skillback et al., 2017). AD patients have increased Qalb (Skillback et al., 2017), which correlates with the progression of cognitive impairment (Bowman et al., 2007). A systematic review showed that Qalb levels are not only correlated with aging and a diagnosis of AD but are also significantly affected by vascular dementia (Farrall and Wardlaw, 2009). Platelet-derived growth factor receptor beta (PDGFRβ) is a major marker of pericytes, and the soluble form of PDGFRβ in CSF is known to correlate with pericyte damage (Montagne et al., 2015). The elevation of soluble PDGFRβ in the CSF seems to be more sensitive from a very early stage of AD (Miners et al., 2019; Montagne et al., 2020). Although there are some other possible biomarkers for detecting BBB breakdown (Kapural et al., 2002; Marchi et al., 2003; Gubern-Merida et al., 2022; Table 2), these biomarkers are indirect evidence of BBB damage and could be upregulated by other conditions. Reliable blood markers that are easier to obtain and less expensive than traditional gadolinium-enhanced magnetic resonance imaging (MRI) are required in the clinical setting. Advanced imaging techniques have allowed us to detect BBB dysfunction at an early stage of the disease, i.e., in patients with clinically diagnosed mild cognitive impairment (MCI) or before detectable accumulation of pathological proteins. First, dynamic contrast-enhanced MRI using a 3T scanner and a 1 kDa gadolinium-based contrast agent has shown that MCI patients without evidence of hippocampal atrophy have increased BBB permeability in the hippocampus (Montagne et al., 2015). BBB permeability in the hippocampus and parahippocampal gyrus of MCI patients was elevated even when a patient had no evidence of increased CSF Aβ and phosphorylated tau levels, suggesting that BBB leakage is likely to be directly related to cognitive function and precedes pathological protein accumulation (Nation et al., 2019). Water permeability in the brain can be estimated by a new MRI technique, in which arterial blood in the cervical spine region is labeled by spin echo, and extravasated water in the brain is estimated by subtracting the labeled blood volume in the superior sagittal sinus from the total labeled arterial blood (Lin et al., 2021). Water permeability in the brain of MCI patients was elevated even when the Qalb index was not yet elevated, suggesting that BBB dysfunction occurs earlier than previously thought. Although whether Qalb is elevated in the MCI phase remains controversial (Montagne et al., 2015; Lin et al., 2021), the discrepancy may be due to insufficient sensitivity of Qalb, and advanced imaging techniques may resolve this inconsistency. Additionally, positron emission tomography studies using specific tracers are also useful for detecting the BBB function of each transporter in living patients. Significantly decreased activity of P-gp in the cortices and hippocampus of mild to moderate AD patients was detected (Deo et al., 2014). Thus, BBB dysfunction may not be merely a secondary injury at the end stage but may begin upstream in AD pathophysiology. Moreover, a comprehensive single-cell RNA sequencing analysis of the AD brain revealed that 30 of the top 45 genes reported to be associated with AD risk in genome-wide association studies are expressed in brain
endothelial cells (Yang et al., 2022), suggesting that endothelial alteration is critical.
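The two quantitative measures introduced above can be written out explicitly. As a minimal sketch with illustrative input values (not patient data), the Qalb index is simply the CSF-to-serum albumin quotient, and the spin-labeling MRI approach estimates extravasated water by subtracting the labeled blood remaining in the superior sagittal sinus from the total labeled arterial blood:

```python
def q_alb(csf_albumin_mg_per_l: float, serum_albumin_g_per_l: float) -> float:
    """CSF/serum albumin quotient (Qalb), returned in the usual x10^-3 units.

    Because CSF albumin is entered in mg/L and serum albumin in g/L,
    the plain quotient of the two numbers is already Qalb x 10^3.
    The example values below are illustrative, not patient data.
    """
    return csf_albumin_mg_per_l / serum_albumin_g_per_l


def extravasated_water_fraction(total_labeled_arterial: float,
                                labeled_in_sagittal_sinus: float) -> float:
    """Fraction of spin-labeled arterial water that left the vasculature,
    estimated by the subtraction described for the MRI technique above.
    This captures only the conceptual core of the published method."""
    return (total_labeled_arterial - labeled_in_sagittal_sinus) / total_labeled_arterial


# Hypothetical numbers for illustration only:
print(q_alb(csf_albumin_mg_per_l=250.0, serum_albumin_g_per_l=42.0))        # ~6.0 (x10^-3)
print(extravasated_water_fraction(total_labeled_arterial=100.0,
                                  labeled_in_sagittal_sinus=88.0))          # 0.12
```

In practice Qalb reference ranges are age-dependent, and the published water-permeability technique involves full kinetic modeling of the labeled signal rather than a single subtraction; the sketch is only meant to make the definitions concrete.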
While advanced imaging techniques can reveal BBB dysfunction, their use to explore the underlying mechanism remains difficult. In other words, they can estimate BBB function to some extent, e.g., quantitative measurements of permeability and specific uptake of substrates. However, they are not yet sufficient to investigate detailed molecular mechanisms, such as whether increased permeability is due to permeation via paracellular or transcellular pathways or whether decreased transporter function is due to decreased protein expression or altered localization. Therefore, animal models are widely used to study the pathophysiology of BBB dysfunction in AD.
Blood-Brain Barrier Breakdown in Alzheimer's Disease Model Mice and Underlying Molecular Mechanisms
In this section, we summarize studies in vitro and in vivo, mainly using rodent models, which have contributed greatly to elucidating the detailed mechanisms of BBB dysfunction in AD pathophysiology. BBB abnormalities in AD are described along with each of the three major BBB functions.
Increased paracellular transfer due to disruption of tight junctions causes entry of blood molecules
Apolipoprotein E (APOE), a protein involved in the metabolism and transport of lipids such as cholesterol, is secreted mainly by astrocytes in the CNS. APOE has the genetic polymorphisms ε2, ε3, and ε4. APOE3 is the most common allele, and APOE4 is known as the most prominent risk factor for AD, while APOE2 is known to act suppressively against disease onset (Wisniewski and Drummond, 2020). APOE has been studied extensively to understand the pathogenesis of AD, and this section focuses on the effect of APOE on BBB function, especially barrier tightness. Interestingly, in addition to AD, APOE4 is also a common risk factor for many diseases associated with BBB dysfunction, including cerebral amyloid angiopathy, Lewy body dementia, multiple sclerosis, vascular dementia, ischemic stroke, and traumatic brain injury; however, in age-related macular degeneration, in which excessive angiogenesis causes pathological neovascularization, the APOE4 allele works protectively (Safieh et al., 2019). Thus, APOE might play a critical role in various diseases by suppressing essential angiogenesis or inhibiting the physiological functions of the microvasculature. Human APOE4 knock-in mice provide evidence of BBB breakdown before neurodegeneration, as described in the following. BBB breakdown, detected by decreased tight junction proteins and leakage of 40 kDa dextran and IgG, was already seen in 2-week-old APOE4 mice (Bell et al., 2012), when neuronal density, expression levels of pre- or postsynaptic proteins, and cortical neural activity were still normal. The same investigators analyzed the transcriptomes, phosphorylation sites, and proteomes of cells composing the NVU from young and middle-aged human APOE3 or APOE4 knock-in mice (Barisano et al., 2022). Endothelial cells derived from 2-3-month-old APOE4 mice, in which BBB breakdown already appears, expressed upregulated transcripts related to junctions and transporters, which is likely to be a compensatory cell response to stabilize the dysfunctional BBB. However, at 7 months, APOE4 mice showed decreased or abnormally phosphorylated proteins related to cell junctions and to the clathrin-mediated transport controlling endocytosis of transporters at the cell membrane. Subsequently, at 9-12 months, the compensatory transcript expression seen in the young mice disappeared and was replaced by increased expression of transcripts that promote intracellular trafficking and enzymatic BBB disruption. In APOE4 mice, this loss of compensatory transcript expression over time was also seen in pericytes, astrocytes, and microglia. Such pathway analysis is useful for elucidating the mechanism of BBB dysfunction. Immunohistochemical studies of autopsied human brains revealed that the presence of the ε4 allele is associated with increased fibrin or Aβ deposition in the vessel wall, plasma protein leakage into the brain parenchyma, and decreased pericyte coverage (Hultman et al., 2013; Halliday et al., 2016; Liu et al., 2020). Contrast-enhanced MRI detects higher vascular permeability in the hippocampus of APOE4 carriers than in noncarriers, even when they are not suffering from dementia (Montagne et al., 2020; Moon et al., 2021). How does APOE contribute to BBB dysfunction? Mice selectively expressing human APOE2, APOE3, or APOE4 in astrocytes on an APOE-null background showed a leaky BBB only in the APOE4 case (Bell et al., 2012). Interestingly, the APOE-null mice also showed a leaky barrier, detected by vital multiphoton microscopy through a cranial window above the parietal lobe. This
finding suggests that APOE2 and APOE3, but not APOE4, secreted from astrocytes maintain BBB function. Secreted APOE binds to LRP-1 and inhibits the activation of the proinflammatory cytokine cyclophilin A (CypA), which promotes the production of matrix metalloproteinase-9, an enzyme that disrupts basement membranes and reduces tight junction proteins. The ability to inhibit CypA is lower for APOE4 than for the other isoforms, despite the higher affinity of APOE4 for LRP-1 (Cooper et al., 2021), suggesting that this pathway is responsible for BBB dysfunction in APOE4 carriers. Although LRP-1 is highly expressed in pericytes and astrocytes in the NVU, the CypA pathway via LRP-1 is also inhibited in endothelial cells (Nikolakopoulou et al., 2021). The expression levels of CypA and matrix metalloproteinase-9 were much higher in the CSF, endothelial cells, and pericytes of AD patients carrying APOE4 than they were in healthy individuals and in AD patients with APOE3 (Halliday et al., 2016; Montagne et al., 2020). Interestingly, the CypA pathway in APOE4 carriers is activated even when the individuals are cognitively normal (Montagne et al., 2020), indicating that the cascade of BBB breakdown has already started in the prodromal stage of AD. APOE is also present in the peripheral blood, and the next question is whether peripheral APOE also affects the BBB. In postoperative liver transplant patients, analysis of APOE alleles in blood and cerebrospinal fluid showed that more than 90% of recipient blood APOE was replaced by donor alleles, while the cerebrospinal fluid APOE remained of the recipient allele (Linton et al., 1991). Furthermore, transgenic mice in which human APOE3 or APOE4 was specifically expressed in the liver on an APOE-knockout background showed that human APOE was present only in the periphery (Liu et al., 2022). These data suggest that the BBB separates APOE in the CNS from that in the peripheral blood. Expression of human APOE4 in the liver reduced endothelial tight junctions, thinned and fragmented the basement membrane, caused serum protein leakage, and impaired cognitive function (Liu et al., 2022). The investigators suggested that these phenomena are due to changes in APOE isoform-dependent humoral factors in the periphery that affect the BBB, rather than to APOE itself. Peripheral human APOE3 expression improved cognitive function and reduced amyloid deposition (Liu et al., 2022). Furthermore, serum from young mice expressing peripheral human APOE3 alleviated the failing BBB function of aged mice (Liu et al., 2022), but it is not clear whether peripheral APOE3 expression has the same function in maintaining the BBB as human APOE3 expressed in the CNS (Bell et al., 2012).
APOE is expected to be further investigated to help elucidate how the BBB fails in neurodegenerative diseases.
Accumulation of abnormal proteins due to decreased or increased transporter function
Both P-gp and LRP-1 expressed on the surface of endothelial cells play major roles in the clearance of Aβ from the interstitial fluid of the brain (Storck et al., 2016, 2018a; Chai et al., 2020). What molecular pathways are involved in the excretion of Aβ by these transporters? When Aβ binds to LRP-1 and alters its intracellular domain in mouse and human BMEC lines, phosphatidylinositol-binding clathrin assembly protein attaches to the complex and induces clathrin-mediated endocytosis, which further directs the complex into the endosome (Zhao et al., 2015). Inhibition of both LRP-1 and P-gp has no synergistic effect compared with inhibition of either alone, indicating that these molecules act through the same pathway for Aβ excretion (Storck et al., 2018b). It is noteworthy that mutation of phosphatidylinositol-binding clathrin assembly protein has been reported as a genetic risk factor for AD (Bellenguez et al., 2022), and this pathway seems to play an essential role in Aβ transportation.
RAGE is known to promote Aβ accumulation by acting as an influx transporter, but it also contributes to many pathological pathways in AD, such as neuroinflammation and oxidative stress (Juranek et al., 2015). Other specific receptors and transporters of Aβ, such as ATP-binding cassette (ABC)C1, ABCG2, and ABCG4, have also been implicated in Aβ efflux and may be therapeutic targets (Xiong et al., 2009; Krohn et al., 2011; Dodacki et al., 2017).
Increased immune cell infiltration results in neuroinflammation
Neuroinflammation is widely thought to be a secondary event in AD pathophysiology; however, there is accumulating evidence that it plays a strong aggravating role. Plasma proteins that leak across the BBB, such as fibrinogen, strongly activate microglia, resulting in neurodegeneration, as demonstrated in a mouse study using microglia-specific deletion of the fibrinogen-binding motif gene (Mendiola et al., 2023). In addition, because the BBB controls not only the permeation of soluble molecules but also the infiltration of inflammatory cells, the recruitment of peripheral immune cells by the pathological BBB can also exacerbate neuroinflammation. Indeed, immune cells accumulate in the brains of mouse models of AD (Browne et al., 2013; Wang et al., 2015; Zenaro et al., 2015) as well as of AD patients. The Aβ peptide induces lymphocyte function-associated antigen 1 to adopt a high-affinity state, resulting in increased neutrophil trafficking into the brain parenchyma, and deleting or blocking lymphocyte function-associated antigen 1 reduces neutrophil infiltration and the accumulation of Aβ and phosphorylated tau protein, thereby restoring cognitive status (Zenaro et al., 2015). To study the effect of peripheral immunity on AD, Aβ-specific CD4+ T cells were generated and injected into the periphery of amyloid precursor protein (APP)/PS1 mice (Browne et al., 2013). The injected Th1 cells, but not Th2 or Th17 cells, infiltrated the CNS and promoted microglial activation, Aβ deposition, and abnormal cognitive behavior. By contrast, increased Th17 cells in the peripheral blood of AD patients have been reported (Oberstein et al., 2018), and an adoptive transfer study showed that Th17 cells contribute to disease progression in the same APP/PS1 mice. This discrepancy might arise from differences in the T cell culture environment or in the markers used to identify cell populations. These findings indicate that peripheral immune cells contribute to neuroinflammation. Moreover, sera from aged mice activated microglia and impaired neural progenitor cell activity in young mice, and deletion of the brain endothelial- and epithelial-specific gene for VCAM-1 in young mice reduced the neuroinflammatory effect of sera from aged mice (Yousef et al., 2019). Whether the neuroinflammation is induced by infiltrating immune cells via activated endothelial cells or whether activated endothelial cells themselves activate microglia remains unclear; either way, the findings suggest that BMEC activation induces neuroinflammation.
Limitations of Animal Models and Advantages of Human Induced Pluripotent Stem Cell-Derived Models In Vitro
With the exception of some nonhuman primate species, animals, at least rodents, do not spontaneously develop AD pathology, so heterologous gene transfer is necessary to reproduce AD-like pathology in rodents (Neff, 2019). Most rodent models of AD involve transfected human genes derived from autosomal dominant familial AD, such as APP and presenilin 1/2 (PSEN1/2). However, the familial form of AD represents only a small part of the AD population, and the majority of sporadic AD develops through a complex mix of multiple genetic and environmental factors. Moreover, although transgenic mice recapitulate some specific aspects of AD pathogenesis, they cannot yet fully reproduce human AD pathology, and it is important to understand the characteristics and limitations of each model and to use it accordingly (de Sousa et al., 2023). It should also be noted that APOE structure and function differ between mice and humans, so human APOE is required for transfection (Balu et al., 2019). From a BBB perspective, mouse and human BMECs also differ in specific transporters and endothelial adhesion molecules (Lecuyer et al., 2017; Song et al., 2020). Therefore, models of human origin in vitro, in which function can be analyzed, are needed for further study of AD. Advanced stem cell technology has recently allowed us to induce various types of cells from hiPSCs derived from the somatic cells of patients or healthy individuals. There are several major advantages of hiPSC-derived models compared with animal models and primary or immortalized cell lines. (1) Human cells that would otherwise be difficult to obtain from healthy individuals or patients can be produced in vitro. For example, brain tissue is difficult to obtain from individuals who do not require a biopsy for diagnosis or treatment.
(2) Using cells with all of a patient's genetic information makes it possible to study the characteristics of diseases, such as AD, in which multiple genetic factors are probably involved (Bellenguez et al., 2022). Conversely, combining gene editing techniques allows the effects of a target gene to be compared in an isogenic background.
(3) Through the differentiation of each component cell type carrying the patient's genetic information, the effects of disease genes on each cell type can be analyzed individually. Additionally, it is possible to study the interactions between cell types in the disease by creating an autologous disease model that combines multiple cell types from patient-derived hiPSCs, or by incorporating patient-derived cells one at a time into models derived from healthy individuals. (4) Patient-derived cells can be used to study the effects of therapeutic agents that differ between species or in diseases unique to humans.
Regarding each cell type comprising the BBB, several iPSC-derived pericytes (Stebbins et al., 2019; Jeske et al., 2020) and astrocytes (Perriot et al., 2018; Leventoux et al., 2020) have been established, and in particular, several models for BMECs, the main actor with barrier properties, have been created (Lippmann et al., 2012; Hollmann et al., 2017; Qian et al., 2017; Praca et al., 2019; Nishihara et al., 2020; Lu et al., 2021). Of course, like other methods, the hiPSC-derived model of the BBB is not ideal and currently has some limitations that should be noted. iPSC-derived cells mimic only a subset of target cell properties and do not currently reproduce all of their functions.
It is important to confirm in advance whether the model expresses the functions required for the intended experiment, such as (1) tightness of the paracellular diffusion barrier, (2) low nonspecific transcellular transport or expression of specific transporters and receptors, and (3) expression of adhesion molecules that control immune cell trafficking. Studies using stem cell technology in the field of AD are also accumulating. BMEC-like cells derived from hiPSCs of a familial AD patient with a PSEN1 mutation showed decreased transendothelial electrical resistance, which reflects the ability to inhibit ion diffusion, and showed some transcriptomic distinctions compared to isogenic hiPSC-derived BMEC-like cells (Oikari et al., 2020). These studies indicate that AD patient-derived BMEC-like cells themselves have a leaky barrier in this genetic background; however, the detailed mechanism of this phenotype remains elusive. Other investigators have also analyzed BBB function in BMEC-like cells derived from hiPSCs with PSEN mutations (Katt et al., 2019; Raut et al., 2021). They found that hiPSC-derived BMEC-like cells showed lower transendothelial electrical resistance, higher small-tracer permeability, and lower transporter and efflux activities, but these studies only compared single, nonisogenic clones, making it difficult to determine whether the phenotype is due to the mutation itself. Interestingly, a recent study using iPSC-derived BBB organoids from patients with AD revealed not only the abovementioned three aspects of barrier function but also the impact of the BBB on Aβ accumulation (Lin et al., 2018). Using the CRISPR/Cas9 system, the investigators generated hiPSCs carrying APOE3 or APOE4 from an isogenic hiPSC line and differentiated them into BMEC-like cells, pericytes, and astrocytes. These three components of the BBB were then cultured together in hydrogel and spontaneously assembled to form a vascular-like network within the gel. The APOE4-BBB, when cultured with conditioned medium from neurons derived from an APP-duplication hiPSC line, which produces high levels of Aβ, had increased immunoreactivity for Aβ compared with the APOE3-BBB, suggesting that APOE4 expression may promote Aβ accumulation in this system. To determine which cells were responsible for Aβ accumulation, the investigators converted each component of the APOE3-BBB model to APOE4-carrying cells one by one. Ultimately, they found that APOE4 carriers had upregulated APOE expression in pericytes through alteration of a transcription factor, which induced Aβ accumulation. Such organoid models of the BBB are attractive for studying cell-cell interactions. However, the biggest challenge is to evaluate the function of the BBB as a window connecting the peripheral milieu and the CNS. Evidence that the peripheral milieu impacts AD pathology has accumulated as follows. Sera derived from elderly mice impaired synaptic plasticity, proliferation of neural stem cells, and cognitive functions in young mice (Yousef et al., 2019). Additionally, in humans, plasma levels of systemic inflammatory markers are negatively correlated with brain volume and cognitive function (Walker et al., 2017), and plasma exchange significantly reduced cognitive decline in patients with moderate AD (Boada et al., 2020). Peripheral immunity is correlated with inflammation in the CNS, and more BBB models are needed to study such a BBB-mediated link between the periphery and the CNS.
Future Directions for Models of the Blood-Brain Barrier, Including Human Induced Pluripotent Stem Cell-Derived Models in Vitro
Traditionally, many studies have used 2D models of the BBB in vitro based on Transwell filters, in which BMECs are plated on the apical side of the filter to form a monolayer, and other cell types such as astrocytes and pericytes can be plated in the lower chamber for coculture (Wu et al., 2021). This type of 2D model is very simple and allows direct analysis of barrier properties and of the impact of the peripheral milieu on the CNS, but it has some limitations that cannot be ignored. First, there is no influence of blood flow, which affects endothelial cell characteristics and immune cell-BBB interactions. To overcome this limitation and observe cell-cell interactions under physiological flow conditions, several flow models have been established (Coisne et al., 2013; Reinitz et al., 2015), but culturing multiple cell types in these models is difficult. Recent BBB modeling with organ-on-a-chip devices uses 3D designs that mimic vascular structures, generating a flow that represents blood flow and applies shear stress to the endothelial layer (Wu et al., 2021). The model consists of two distinct cell culture compartments; the first is an open conduit where endothelial cells are seeded to form a monolayer, while the second, separated by a porous membrane, is seeded with pericytes and astrocytes. The second limitation of 2D models of the BBB in vitro, which cannot be overcome by the organ-on-a-chip model, is that these models do not mimic the direct cell-cell adhesion that occurs in vivo. Recently, a model has been developed to reproduce blood flow in the presence of cell-cell adhesion, in which cells are mixed with fibrinogen and thrombin and seeded to form a self-organizing vascular structure (Campisi et al., 2018). The key difference from previous organoid models is that the endothelium forms a perfusable vascular structure, so that flow can actually be introduced into the vessel and the permeability of a tracer can be measured. Challenges remain, such as the reproducibility of vessel diameters and cell assembly, the extracellular matrix being different from that in vivo, and control of fluid velocity. Moreover, neurons have not yet been cocultured in such a BBB model. In the future, coculturing neurons on the abluminal side of such a model would be helpful to study whether the peripheral milieu truly affects neurodegeneration across the BBB. A third limitation, which is present not only in the 2D models but in all BBB models, concerns the hiPSC-derived BMEC-like cells themselves. Well-established methods exist to generate hiPSC-derived BMEC-like cells that exhibit strong diffusion barrier properties and express BBB-specific transporters and efflux pumps, facilitating the study of molecular transport mechanisms and drug delivery in the brain; however, the previously described cells have mixed phenotypes of endothelium and epithelium and lack a comprehensive ensemble of the endothelial adhesion molecules required for immune cell interactions with the BBB (Lippmann et al., 2020; Nishihara et al., 2020). A new differentiation approach using endothelial precursors to generate BMEC-like cells with high endothelial identity, good barrier function, and robust expression of endothelial cell surface adhesion molecules is necessary for studying immune cell adhesion (Nishihara et al., 2020, 2021; Matsuo et al., 2023). Such a model would be particularly useful for studying the interactions between immune cells and the BBB endothelium. This model has been shown to phenocopy both the disrupted barrier and the accelerated immune cell
infiltration across the BBB that occur in vivo in multiple sclerosis, and to be useful for confirming the efficacy of therapeutic interventions (Nishihara et al., 2022). Once again, it is important to emphasize that there is still no perfect BBB model, and researchers must be familiar with the strengths and limitations of each model and adapt it to their disease or functional studies (Table 1). The further development and combination of in vitro models and methods to differentiate hiPSCs will open new avenues for research and treatment of neurodegenerative diseases.
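For the 2D Transwell configuration described at the start of this section, barrier tightness is commonly quantified by the apparent permeability coefficient of a tracer, P_app = (dQ/dt) / (A · C0), where dQ/dt is the rate of tracer accumulation in the receiver chamber, A is the filter area, and C0 is the initial donor concentration. The minimal sketch below illustrates that calculation; the tracer, insert area, and sampled amounts are hypothetical placeholders, not data from any of the cited studies.

```python
import numpy as np

def apparent_permeability(receiver_amounts_pmol, time_points_s,
                          filter_area_cm2, donor_conc_uM):
    """Apparent permeability P_app = (dQ/dt) / (A * C0), returned in cm/s.

    receiver_amounts_pmol: cumulative tracer amount in the receiver chamber (pmol)
    time_points_s:         corresponding sampling times (s)
    filter_area_cm2:       Transwell filter area (cm^2)
    donor_conc_uM:         initial donor concentration (uM)
    """
    # Linear fit of cumulative amount vs. time gives dQ/dt in pmol/s.
    slope_pmol_per_s = np.polyfit(time_points_s, receiver_amounts_pmol, 1)[0]
    c0_pmol_per_cm3 = donor_conc_uM * 1000.0  # 1 uM = 1000 pmol per cm^3
    return slope_pmol_per_s / (filter_area_cm2 * c0_pmol_per_cm3)

# Hypothetical tracer assay on a 1.12 cm^2 insert with a 10 uM donor solution:
p_app = apparent_permeability(
    receiver_amounts_pmol=[0.0, 20.0, 41.0, 60.0],
    time_points_s=[0.0, 900.0, 1800.0, 2700.0],
    filter_area_cm2=1.12,
    donor_conc_uM=10.0,
)
print(f"P_app = {p_app:.2e} cm/s")  # ~2e-6 cm/s for these placeholder numbers
```

The same quantity can be extracted from perfusable microfluidic or self-assembled vessel models, provided the exposed vessel surface area and the donor concentration are known; lower P_app values indicate a tighter barrier.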
Conclusions
As more studies accumulate, treating BBB dysfunction is becoming a reality in murine models of various diseases. In experimental autoimmune encephalomyelitis, an animal model of multiple sclerosis, BBB sealing by ectopic expression of a tight junction molecule improved clinical scores in the chronic phase (Pfeiffer et al., 2011). Administration of specific surrogates activating Wnt signaling, which is essential for the barrier genesis of BMECs, reversed BBB leakage in a mouse model of ischemic stroke (Ding et al., 2023).
Inhibiting immune cell trafficking into the CNS by targeting the interaction of very late antigen-4 with VCAM-1 on BMECs reduces infarct volume and postischemic neuroinflammation (Liesz et al., 2011). BBB dysfunction is a pathological signature not only of AD but also of many other neurological diseases and appears to lie upstream in their pathophysiology. BBB therapy promises to shed light on neurodegenerative diseases for which there have been no effective treatments.
Figure 1 | Human evidence for blood-brain barrier alterations in Alzheimer's disease. Under physiological conditions (right), endothelial cells are interconnected by tight junction proteins with minimal vesicle presence and express suppressed adhesion molecules such as ICAM-1 and VCAM-1 on their surface. Pericytes are embedded in the vascular basement membrane. The postcapillary venule has a perivascular space between the vascular and parenchymal basement membranes. The amyloid peptides are cleared by P-gp and LRP-1 towards the vascular side, while RAGE transports amyloid from the blood to the abluminal (parenchymal) side. Under pathological conditions in Alzheimer's disease (left), degraded tight junction proteins lead to diffusion of plasma solute molecules. Upregulated adhesion molecules such as ICAM-1 and VCAM-1 may lead to an influx of immune cells into the central nervous system. Although soluble ICAM-1 and VCAM-1 are upregulated in the serum of Alzheimer's disease patients, it is not clear whether their origin is BMECs or other cells. Expression of P-gp and LRP-1 is decreased, and the protein level of RAGE is upregulated in patients (whether these proteins are polarized to a particular side of BMECs is unclear). Endothelial cells and pericytes are vacuolated, and pericytes are detached from the vasculature. Immune cells are observed in the perivascular space. The end-feet of astrocytes are swollen where the expression of AQP4 is decreased. Created with BioRender.com. AQP4: aquaporin 4; BM: basement membrane; BMECs: brain microvascular endothelial cells; ICAM-1: intercellular adhesion molecule-1; LRP-1: low-density lipoprotein receptor-related protein 1; P-gp: P-glycoprotein; TJ: tight junctions; VCAM-1: vascular cell adhesion molecule-1.
Table 1 | Pros and cons of the different experimental models and studies. Recoverable entries include: extremely high TEER value; relatively good properties of BBB-specific transporters; the models do not mimic all functions of the target cells, so the functions required for the intended experiment must be confirmed; appropriate devices or models must be built to analyze the targeted functionality. BBB: blood-brain barrier; BMEC: brain microvascular endothelial cell; hiPSCs: human induced pluripotent stem cells; TEER: transendothelial electrical resistance.
Microscopic and Microchemical Characterization of Leaves and Stems of Acmella bellidioides
Acmella bellidioides (Asteraceae), commonly known as
Application of the same common names to multiple species and the morphological similarities among different plants are some of the main factors that promote adulteration of plant material, compromising therapeutic efficacy and increasing the risk of intoxication [6]. To address this problem, microscopy analyses of the botanicals can be used for accurate identification and authentication of plant materials [7][8][9][10]. Correct species identification is crucial for the safety and efficacy of botanicals [6].
Athayde and coauthors [11] studied and compared the anatomical characteristics of ten species in the arnica-do-campo complex, such as Calea uniflora Less., Chaptalia nutans and Lychnophora ericoides Mart., all belonging to the Asteraceae family. These authors stated that several species could be designated as "arnica," making identification difficult. Ramachandran and Radhakrishnan [12] observed that Acmella species have similar morphology and evaluated ten "arnica" species microscopically. However, no anatomical study has so far been carried out for A. bellidioides. To bridge this gap, the present work aimed to provide detailed microscopic and microchemical analyses of the leaves and stems of A. bellidioides to support authentication and quality control of this herb.
Botanical material
Leaves and stems of A. bellidioides were collected in April 2018 from the State University of Ponta Grossa campus (latitude 25º 5' 11" S; longitude 50º 9' 39" W) in Paraná, Brazil. The plant material was identified by the taxonomist O. S. Ribas and the herbarium specimen (MBM 191067) was deposited in the Curitiba Botanical Garden herbarium in Paraná, Brazil. Access to the botanical material was authorized by the National System for the Management of Genetic Heritage and Associated Traditional Knowledge (CGEN/SISGEN - AF2FB38).
Microscopical analyses
Leaves and stems of A. bellidioides were fixed in FAA 70 (formaldehyde 37%, glacial acetic acid, and alcohol 70%) [13] and then preserved in 70% alcohol [14]. Hand-cut cross-sections were prepared with a razor blade and double-stained with basic fuchsin and astra blue [15]. Semi-permanent slides were prepared with glycerin 50% and sealed with colorless nail polish [16]. To analyze the epidermal surfaces, small sections of the leaves were washed and then treated with hypochlorite solution (5%) until translucent. The materials were then washed with distilled water and neutralized in an acetic acid solution (5%). The sections were rewashed in distilled water, stained and mounted as described above [16]. The slides were analyzed and imaged using an Olympus CX31 optical microscope equipped with a C7070 digital camera at the UEPG pharmacognosy laboratory.
Scanning electron microscopy (SEM) was performed on fragments of the fixed leaves and stems. The tissues were gradually dehydrated in ethanol solutions of increasing concentrations, dried in a Balzers CPD 030 critical point dryer (BAL-TEC AG, Balzers, Liechtenstein) and then coated with gold using an SC7620 Quorum sputter coater. Electron micrographs were taken using a Mira 3 field-emission SEM (Tescan, Brno-Kohoutovice, Czech Republic). During the SEM procedure, energy dispersive X-ray spectroscopy (EDS) was performed to verify the elemental chemical composition of the crystals. This analysis was performed on randomly selected crystals, with cells devoid of crystals as controls, at an acceleration voltage of 15 kV using an EDS detector attached to the SEM [17]. These analyses were performed at the Multiuser Laboratory Complex (C-Labmu) at the State University of Ponta Grossa.
Three types of trichomes are observed: i) 2-3-celled simple non-glandular trichomes with rough cuticle surfaces (Figure 1E, F); ii) about 6-celled uniseriate non-glandular trichomes with an obtuse apex and smooth cuticle (Figure 1G); and iii) peltate glandular trichomes (Figure 1I). The glandular trichomes are only found on the abaxial leaf surface. Simple and multicellular non-glandular trichomes have also been identified in nine other species of Acmella [21]. However, the peltate glandular trichome has not previously been described for any Acmella species. This feature is therefore a good anatomical marker for the identification of A. bellidioides.

In cross-section, the leaf is dorsiventral with a unilayered epidermis covered by a striated cuticle (Figure 2A, B). Dorsiventral leaves were also previously observed in nine species of Acmella [12]. The mesophyll consists of a 2-layered palisade and about 5-layered spongy parenchyma. Secretory ducts were observed in the palisade parenchyma near the minor collateral vascular bundles. The midrib, in cross-section, has a biconvex shape (Figure 2C), with a slight prominence on the adaxial surface. Subjacent to the epidermis, 1-2 layers of angular collenchyma are observed (Figure 2D); however, more layers can be seen in the prominence of the adaxial surface. Simple non-glandular trichomes are also observed in the midrib (Figure 2C, E).

The vascular system is represented by three collateral vascular bundles in an open arc (Figure 2C), with the central one having the largest diameter. In contrast, only a central collateral vascular bundle was described for Acmella species in the study of Ramachandran and Radhakrishnan [12]. Secretory ducts (Figure 2F) containing essential oil are observed close to the vascular bundles. Druses of calcium oxalate are dispersed throughout the ground parenchyma (Figure 2G).

Calcium oxalate crystals are common in plants and can be formed in any organ or tissue. They occur in different morphotypes, such as druses, prisms, sand crystals, styloids, and raphides, each with several different shapes and sizes [22]. The occurrence and morphotypes of crystals in plant tissues are useful in species identification [7,17,23,24]. EDS spectra of the crystals in A. bellidioides showed prominent peaks of calcium, carbon, and oxygen (Figure 3), suggesting that these crystals are composed of calcium oxalate.
The stem is circular in cross-section (Figure 4A). The epidermis is unilayered and covered externally by a slightly thick cuticle (Figure 4C, D). Non-glandular trichomes are observed (Figure 4F, G). Beneath the epidermis, 2-3 layers of laminar collenchyma are present (Figure 4C, D, E, H). The cortex is formed by layers of angular collenchyma (Figure 4C, E) filled with phenolic compounds (Figure 6E). About 17 secretory cavities are arranged in a circle in the stem cortex. The vascular cylinder has phloem on the outside and xylem on the inside (Figure 4I, J, K). A layer of amyliferous endodermis surrounds the vascular system, which is represented by a collateral vascular cylinder. Perivascular fibers are found attached to the phloem (Figure 4I, L). The pith comprises thin-walled cells and contains numerous crystals and secretory ducts (Figure 4M). Ramachandran and Radhakrishnan [12] also found secretory ducts in nine species of Acmella. Druses and prisms of calcium oxalate are observed (Figures 4M, N, O and 5). According to the literature, druses are absent in A. paniculata and A. uliginosa var. pentamera, while styloid crystals were found in Acmella tetralobata (Reshmi & Rajalakshmi) and A. vazhachalensis [12]. The prismatic crystals observed in this study can therefore help differentiate A. bellidioides from other Acmella species.

Based on the histochemical tests, lipophilic compounds were found in the cuticle (Figure 6A) and in the secretory ducts (Figure 6D) of the leaves and stems, and phenolic compounds were found in the leaf mesophyll (Figure 6B) and in the cortex and vascular bundles of the stems (Figure 6E). The palisade parenchyma has a greater amount of phenolic compounds than the spongy parenchyma. Lignified structures (Figure 6G) were detected in the fibers and xylem of the stems. Aggregated or isolated starch grains were visualized in the leaves (Figure 6C) and stems (Figure 6F).
CONCLUSION
Microscopy and histochemical analyses of plant tissues play a vital role in the taxonomy and characterization of morphologically similar plant species and semi-processed botanical raw materials. The present study provides a detailed anatomical report of Acmella bellidioides illustrated with light and scanning electron micrographs. Noteworthy anatomical characteristics observed in this study are hypostomatic leaves, anomocytic stomata, peltate glandular trichomes, a midrib with three collateral vascular bundles in an open arc, and prismatic crystals in the leaves and stems. These findings support the identification and differentiation of A. bellidioides from other species of the arnica-do-campo complex and form a basis for future studies of other taxa in the genus Acmella.
The Mechanisms of Type 2 Diabetes-Related White Matter Intensities: A Review
The continually increasing number of patients with type 2 diabetes is a worldwide health problem, and the incidence of microvascular complications is closely related to type 2 diabetes. Structural brain abnormalities are considered an important pathway through which type 2 diabetes causes brain diseases. In fact, there is considerable evidence that type 2 diabetes is associated with an increased risk of structural brain abnormalities such as lacunar infarcts (LIs), white matter hyperintensities (WMHs), and brain atrophy. WMHs are a common cerebral small-vessel disease in elderly adults, and it is characterized histologically by demyelination, loss of oligodendrocytes, and vacuolization as a result of small-vessel ischemia in the white matter. An increasing number of studies have found that diabetes is closely related to WMHs. However, the exact mechanism by which type 2 diabetes causes WMHs is not fully understood. This article reviews the mechanisms of type 2 diabetes-related WMHs to better understand the disease and provide help for better clinical treatment.
INTRODUCTION
With the increasing aging of the population in China, the incidence of type 2 diabetes among the elderly is also increasing yearly. Accordingly, the metabolic disorders and vascular diseases caused by type 2 diabetes are receiving increasing attention. Considerable evidence shows that type 2 diabetes is closely related to cerebral small-vessel diseases, which are a key factor in the development of white matter hyperintensities (WMHs).
WMHs are a silent brain injury that appears around the cerebral ventricles and/or deep subcortical white matter that are seen as high-intensity lesions in T2-weighted and fluid-attenuated inversion recovery (FLAIR) images and as isointense or hypointense on T1-weighted images on magnetic resonance imaging (1). Currently, age and hypertension are considered to be consistent risk factors for WMHs. Studies have shown that diabetic subjects are more prone to having more and larger WMHs than non-diabetic subjects (2)(3)(4)(5). A recent review by Del Bene et al. more firmly emphasized the relationship between type 2 diabetes and both the presence and severity of WMHs, although the underlying mechanism is still not fully understood (6).
In the present systematic review, we discuss the possible mechanisms for WMH-related type 2 diabetes to provide new ideas for the prevention and treatment of the condition.
The Possible Mechanism for Type 2 Diabetes-Related WMHs
The prevalence of WMHs increases from about 5% in people aged 50 years to nearly 100% in people aged 90 years (7). Although WMHs are gradually attracting more attention, their pathogenesis is still poorly understood. Numerous studies have suggested that WMHs originate from chronic hypoperfusion, with multiple additional factors involved in their pathogenesis, such as impaired cerebral blood flow autoregulation, venous collagenosis, impaired cerebrovascular reactivity, and blood-brain barrier disruption.
At present, many new advances have been made to better understand the mechanism of WMH-related type 2 diabetes. These involve inflammatory response, oxidative stress, endothelial dysfunction, and other aspects (8, 9) (Figure 1). In the future, pathogenesis-targeted treatment may bring new hope to prevent and delay the occurrence and development of diabetic WMHs; hence, it is very important to understand these mechanisms in detail.
1. WMHs are believed to be associated with decreased local cerebral blood perfusion, impaired capillary permeability, and impaired blood-brain barrier (BBB) (10). The precise mechanisms whereby WMHs progress in patients with type 2 diabetes are unclear. However, as WMHs reflect vascular damage, small vessel abnormalities associated with DM could contribute to the formation of WMHs (Figure 1). The associations of DM and continuous measures of hyperglycemia with WMHs can be explained by several mutually non-exclusive mechanisms.
2. Regarding the possible mechanism of WMH pathogenesis, pathology studies have shown that vascular integrity declines first, followed by an increase in BBB permeability, which is mainly caused by endothelial dysfunction (11). Generally, in diabetes, endothelial nitric oxide synthase (eNOS) activity and nitric oxide (NO) production are reduced, resulting in endothelial cell dysfunction and impaired vasodilatation (8). Specifically, hyperglycemia causes cell damage by promoting advanced glycation end products (AGE), activating protein kinase C (PKC), and activating the polyol pathway. Activation of the polyol pathway consumes nicotinamide adenine dinucleotide phosphate (NADPH), which reduces eNOS activity and NO production, causing endothelial cell dysfunction (8). Patients with type 2 diabetes experience some pathologic conditions, such as long-term high blood glucose and multi-substance metabolic disturbance, which damage the blood vessel endothelium in the long term. There are reported associations between soluble intercellular adhesion molecule-1 (sICAM-1), a marker of vascular endothelial damage, and progression of WMHs in patients with type 2 diabetes (12). When damaged, endothelial cells release pro-coagulant molecules such as VWF, PAI-1, and thromboxane A2 and express on their surfaces tissue factor (TF) and adhesion molecules such as P-selectin, E-selectin, vascular adhesion molecular-1 (VCAM-1), and intercellular adhesion molecule-1 (ICAM-1), which mediate the interaction between neutrophils and platelets with the endothelium. Therefore, endothelial dysfunction can promote both pro-inflammatory and pro-coagulant states. A recent study provided novel evidence that hyperglycemia is associated with carbonyl stress in the disruption of the brain endothelial cell barrier dysfunction, a process that is correlated with elevated occludin methylglyoxal glycation, decreased glyoxalase II activity, and reduced GSH-dependent cellular methylglyoxal elimination (13). In conclusion, a variety of pathways and factors contribute to the dysfunction of brain microvascular endothelial cells in type 2 diabetic patients, thereby promoting the occurrence of WMHs. Therefore, the protection of vascular endothelial function may prevent the appearance of WMHs in diabetic patients.
3. Atherosclerosis due to diabetes mellitus is the pathological basis of diabetes mellitus combined with cerebrovascular disease. Insulin deficiency in the body of diabetic patients causes glucose to convert into a large amount of fat, which is broken down into triglycerides and free fatty acids. As a result, cholesterol increases, leading to hyperlipidemia and accelerating arteriosclerosis in diabetic patients. In addition, a previous study has shown that hyperglycemia-induced polyol pathway hyperactivity may play an important part in the development of diabetic atherosclerosis (8). Additionally, diabetic patients contain more hyaline substance in their blood vessels than nondiabetic patients, and their blood vessel walls are thickened, their lumens are narrowed, and even their blood vessels are occluded. This may be due to the increased production of growth factors such as vascular endothelial growth factor (VEGF) and fibroblast growth factor (FGF), which can stimulate the remodeling of blood vessel walls, resulting in a thickening of the basement membrane, which favors local deposition of proteins and lipids and promotes sclerosis and impaired vasodilation. Furthermore, chronic hyperglycemia acts on cerebral small blood vessels, leading to local blood flow regulation and metabolic disorders, in turn leading to decreased blood flow and ischemic injury of deep penetrating branch arteries. Atherosclerosis is the main cause of chronic cerebral ischemia. Severe atherosclerosis can lead to decreased blood flow in the distal blood supply and cause diffuse cerebral insufficiency and demyelination of the white matter. Besides, the result of vascular damage caused by this chronic disease is glial hyperplasia of the blood vessel walls around the ventricle, leading to demyelination of the white matter. In addition, diabetes is a well-known risk factor for cerebrovascular disease, as mentioned earlier, and it is related to glucose toxicity, abnormalities in cerebral insulin homeostasis, and microvascular abnormalities (4). 4. The oxidative stress in the body of diabetic patients is greater than that of non-diabetic patients, and the active oxidative products are increased accordingly. Consequently, the function of antioxidant system is weakened, and then microvascular diseases are more likely to occur. First, high glucose concentration increases oxidative stress by overproduction of the superoxide radical in the mitochondria (14), and this oxidative stress further impairs endothelial function (15,16). However, oscillating glucose levels induce more oxidative stress than the high glucose concentration itself (17,18). Aldose reductase catalyzes glucose to generate more fructose and sorbitol, which are deposited in the peripheral nerves in large quantities, further increasing the osmotic pressure in nerve cells and making them more prone to edema, degeneration, and necrosis. Oxidative stress caused by ROS overproduction plays a key role in the activation of other pathogenic pathways involved in diabetic complications, including elevated polyol pathway activity, non-enzymatic glycation, and PKC levels, which in turn lead to the development of microvascular complications (19)(20)(21). Hyperglycemia promotes the formation of ROS, which interacts with both deoxyribonucleic acid (DNA) and proteins, causing cellular damage, especially targeting mitochondrial DNA. The hyperglycemia and metabolic disturbance can increase the level of oxidative stress to further impair endothelia. 5. 
Increased systemic and cerebrovascular inflammation is one of the major pathophysiological features of type 2 diabetes and its cerebral vascular complications. The main mechanisms of hyperglycemia causing inflammatory response include NF-κB-dependent production of inflammatory cytokines, TLR expression, and increased oxidative stress (22). Some inflammatory proteins and cytokines increase the risk of WMHs by causing endothelial dysfunction. Among them, the activation of the classical complement system accelerates the hardening of arterioles, endothelial dysfunction, and other mechanisms in the cerebral microvessels, and finally causes pathological changes to cerebral microvessels and facilitates formation of WMHs. 6. A previous finding showed that the presence of WMHsrelated type 2 diabetes is associated with elevated homocysteine concentration and insulin resistance (IR) (23). The mechanism by which hyperhomocysteinemia leads to WMH may be as follows: elevated levels of homocysteine in the blood induce oxidative damage to vascular endothelial cells and inhibit endothelial production of nitric oxide, which is a strong vasodilator. Moreover, hyperhomocysteinemia also promotes the growth of vascular smooth muscle cells, enhances platelet adhesion ability, and is associated with elevated levels of prothrombotic factors such as β-thromboglobulin and tissue plasminogen activator. In conclusion, higher homocysteine levels are associated with WMHs by damaging the endothelial function (24,25).
It is known that IR is one of the pathogenic factors for type 2 diabetes. Although the specific mechanism of WMH with IR remains to be clarified, several mechanisms could be explained based on previous studies. First, IR is linked to WMHs by increased homocysteine levels. An animal study showed that insulin is involved in the regulation of plasma homocysteine concentrations by affecting the hepatic transsulfuration pathway, which is involved in the catabolism of homocysteine (26). Another animal study also indicated that IR is associated with elevated homocysteine concentrations and changes in two key enzymes in homocysteine metabolism, which subsequently lead to hyperhomocysteinemia (27). Second, IR can accelerate the development and progression of atherosclerosis by acting on its risk factors such as hypertension, hyperlipidemia, and obesity. Third, IR in association with lower cerebral perfusion in the frontal and temporal regions has been proposed (28). In addition, a study revealed that IR at the blood-brain barrier reduces the amount of glucose that can reach the brain, resulting in neuronal injury (29). In conclusion, it is possible that interactions among the incidence of WMHs, hyperhomocysteinemia, and IR are reinforced through mechanisms related to endothelial dysfunction.
7. Studies examining WMHs-related type 2 diabetes patients have a brain structural abnormalities basis. Because of the inconsistency between the WMHs and type 2 diabetes in previous studies, a study based on the automated segmentation method to offer precise, objective, and reproducible volumetric measurements of cerebral tissues in large numbers of patients, showed that type 2 diabetes was associated with a smaller volume of gray matter, larger lateral ventricle volume, and larger white matter lesion volume (30). These findings correspond to those of another study that reported that diabetic patients had greater WMH volume and brain tissue loss compared to nondiabetic patients (31). Some other studies have indicated that cognitive dysfunction was related to the reduced hippocampal volume (32,33). In response to this finding, Milne et al. found that asymmetric hippocampal atrophy contributes to cognitive impairment in the incidence of WMHs with type 2 diabetes (34). De Bresser et al. compared the brain volumes of 55 diabetic patients with control participants over 4 years and found a greater increase in lateral ventricular volume over time in patients with type 2 diabetes (35). All of these reports suggest that the influence of diabetes on brain function is associated with WMHs, accompanied by damage to cognitive function. However, how diabetes causes changes in the brain structure is still speculative. Degenerative changes in cerebral small vessels may primarily contribute to this aspect, with other factors such as brain microvascular endothelial dysfunction, cerebrovascular inflammation, oxidative stress, and atherosclerosis caused by diabetes mellitus also playing a role.
8. Altered insulin signaling may be another contributory factor in WMHs (36). An animal experiment revealed that reduced brain insulin signaling in mouse models of diabetes increased tau phosphorylation and amyloid beta peptide levels, both of which are associated with WMHs and cognitive decline (37).

9. A study indicated that glucose variability is associated with the volume of WMHs (38). However, the hemoglobin A1c (HbA1c) level is not related to WMH volumes (35). It has been hypothesized that the glycated albumin to HbA1c ratio (GA/HbA1c) is associated with WMH volumes in elderly diabetic patients. The GA/HbA1c ratio is calculated by dividing GA by HbA1c, and it can represent glucose variability with high sensitivity (39). The mechanism by which glucose fluctuation causes WMHs is another question that has interested researchers. Glucose variability has been shown to induce oxidative stress and endothelial dysfunction, and acute glucose fluctuations are a greater trigger of oxidative stress than sustained hyperglycemia (40). As mentioned earlier, dysfunction of brain microvascular endothelial cells in type 2 diabetic patients promotes the occurrence of WMHs; hyperglycemia increases the level of oxidative stress to further impair the endothelium (41).
In summary, type 2 diabetes is an independent risk factor for WMHs. It can affect the white matter by affecting the regulation of cerebral blood flow and endothelial function, aggravating atherosclerosis, reducing the number of oligodendrocytes and other mechanisms, and finally leading to high signal intensity in the white matter. Type 2 diabetes causes WMHs through multiple pathways and mechanisms, which may interact with and promote each other. Therefore, active treatment of type 2 diabetes has certain clinical significance in the prevention of WMH. The interventions targeting the above mechanism are promising as they can reduce the risk of WMHs developing in patients with type 2 diabetes. However, the drivers of WMHs in patients with type 2 diabetes need further understanding and investigation.
CONCLUSION
It is difficult to reverse WMHs in patients with type 2 diabetes; hence, blood glucose must be strictly controlled to prevent the occurrence and development of the disease. In future, targeted treatment for the pathogenesis of diabetes-related WMHs may bring new hope for preventing and delaying the occurrence and development of WMHs.
AUTHOR CONTRIBUTIONS
RL and GN designed the project. JS and BX wrote the paper. XZ revised the manuscript. ZH and ZL performed the literature search. All authors discussed and commented on the manuscript.
FUNDING
This work was supported by the National Natural Science Foundation of China (81900739).
Consensus Driven Learning
As the complexity of our neural network models grows, so too do the data and computation requirements for successful training. One proposed solution to this problem is training on a distributed network of computational devices, thus distributing the computational and data storage loads. This strategy has already seen some adoption by the likes of Google and other companies. In this paper we propose a new method of distributed, decentralized learning that allows a network of computation nodes to coordinate their training using asynchronous updates over an unreliable network while only having access to a local dataset. This is achieved by taking inspiration from Distributed Averaging Consensus algorithms to coordinate the various nodes. Sharing the internal model instead of the training data allows the original raw data to remain with the computation node. The asynchronous nature and lack of centralized coordination allows this paradigm to function with limited communication requirements. We demonstrate our method on the MNIST, Fashion MNIST, and CIFAR10 datasets. We show that our coordination method allows models to be learned on highly biased datasets, and in the presence of intermittent communication failure.
Introduction
With the increase in popularity of Machine Learning (ML), and as it is applied to more and more applications, the models being trained to solve these problems grow more complex, require more data to train on, and demand more computational power to process that data. This growth, coupled with growing concerns about data security, has made distributed learning methods more appealing, as they allow an array of compute nodes to work on the same problem simultaneously, reinforcing the learning done by other nodes. This creates an environment with significantly more scalability compared to fully centralized methods. An increase in the number of compute nodes allows for an increase in the amount of data being processed at once, as each node maintains its own local dataset. Some early work in distributed learning introduced methods such as HOGWILD! [1] and DOGWILD! [2]. These methods involve many different compute nodes training a model, occasionally overwriting the weights in a central, master model and occasionally overwriting their own weights with the ones on the central model in the case of HOGWILD!, or overwriting and copying each other as is the case in DOGWILD!. As Google began to explore this problem, they developed a series of asynchronous learning techniques, including their Asynchronous Advantage Actor Critic (A3C) method [3]. In these methods, compute nodes accumulate updates, periodically push them to a global model, then copy that model and begin accumulating more updates. In 2019, they proposed an overarching framework for this sort of approach, which they refer to as Federated Learning (FL). In this framework, models are sent to remote compute nodes, trained on local data on those nodes, and the updates are then sent back to the master node, which aggregates the data from the various compute nodes [4]. These methods provide advantages in scalability and rely only on local data by sharing the current iteration of the model in training rather than the raw data. One of the major attractions of such methods is that they address contemporary concerns about data security [4].
While these methods are well suited to the large-scale ML applications that are common these days, we are particularly interested in their use with mobile teams of robots. With the exception of DOGWILD!, all the above methods rely on a central parameter server that tracks the current "truth" of the model. Maintaining a central model allows for good testing and verification of the model during training. This is undesirable in the case of a mobile robotic team, as it makes the individual robots dependent on a communication channel to the central server to get the latest version of the complete model, which puts severe restrictions on where the robots can move, especially if communication infrastructure does not exist, say in a disaster zone after an earthquake or hurricane. In addition, these methods, including the only decentralized one, DOGWILD!, also require robust, high-bandwidth communication, which is not always available in field robotics.
In this paper, we propose an algorithm we call a Consensus Learning (CL) algorithm. While the previously discussed algorithms perform well in industrial and laboratory conditions where high-fidelity communication between compute nodes is a reasonable assumption, CL is designed to work with minimal communication requirements. This algorithm is inspired by Distributed Averaging Consensus (DAC) algorithms, a class of algorithms run over a network of nodes, each with a value. The average of these values is calculated in a distributed manner by sharing individual estimates of the average repeatedly until they converge to a consensus. Much of the earlier work in DAC algorithms focused on averaging constant values. Of particular interest to us were methods using asynchronous updates [5]. More recent work has focused on time-varying signals in continuous time [6][7], but much of this work requires synchronized, continuous updates. Using similar methods, our algorithm reaches a consensus on the model being trained without any centralized coordination. We implement our algorithm without the extreme bandwidth requirements of methods like DOGWILD!, and even show that it is robust to communication failures.
In this paper, we give an overview of our proposed method, and outline its theoretical advantages and limitations. We then present an example case using this method to train a model on the MNIST dataset. We perform several variations to the training setup to demonstrate the strengths and limitations of our method. We show that allowing the nodes to communicate allows them to achieve better performance overall than when they do not, and that this method is robust to various communication failures. We also show that this method allows the model to learn well, even when local datasets are extremely biased. We then repeat all of these experiments on the Fashion MNIST and CIFAR10 datasets, and compare the results from all three sets of experiments.
Consensus Learning Algorithms
CL algorithms run on a network of compute nodes with some ability to communicate with each other. Each compute node uses a standard training method with local data to train, and a consensus algorithm to coordinate the training across the full network.
Our proposed algorithm is inspired by the DAC algorithm presented in [5]. In that algorithm, a node sends a copy of its estimate to another node; the receiving node takes a weighted average of its estimate and the received estimate, then sends the resulting deltas back to the originating node. The originating node then subtracts the deltas from its estimate, completing the update. This results in the two estimates moving towards each other, while the mean of the two values is preserved. If a node is currently in the middle of an update, that node rejects any additional update requests. Our algorithm is based on this method; however, rather than ignoring incoming updates while busy, a node stores them in a buffer, and local updates to the values are performed using traditional training methodology. A minimal sketch of the pairwise exchange is given below.
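To make the exchange concrete, here is a minimal Python sketch of the pairwise averaging step for scalar estimates; the class and method names and the mixing weight alpha are illustrative assumptions, not the notation of [5].

class Node:
    def __init__(self, value):
        self.value = value  # current local estimate of the average

    def handle_request(self, received, alpha=0.5):
        """Receiving node: blend the received estimate into its own and
        return the change (the 'deltas') so the sender can compensate."""
        blended = (1 - alpha) * self.value + alpha * received
        delta = blended - self.value      # how much this node moved
        self.value = blended
        return delta

    def handle_reply(self, delta):
        """Originating node: subtract the deltas, so the pair mean is preserved."""
        self.value -= delta


a, b = Node(10.0), Node(2.0)
delta = b.handle_request(a.value)  # a sends its estimate to b
a.handle_reply(delta)              # b replies with the deltas
print(a.value, b.value)            # both estimates are now 6.0; the pair mean is unchanged

In this toy run the two estimates meet at their average, which is exactly the behaviour exploited by the consensus step of our algorithm.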
Our algorithm assumes an ML problem set up as follows:
• a model y = φ(x | p), where y are the labels, x are the inputs, and p are the parameters;
• a loss function λ = L(x, y);
• a training function p_{k+1} = trainOnBatch(p_k, x, y);
• a training dataset with a function that returns a batch of data to train on, x, y = getBatch();
• the parameters p initialized to some value p_{i0}; this value can be different for each node.
We also require a communications structure to facilitate the sending of messages between nodes. We assume this structure has the following capabilities for a given node i:
• C_{it}, a set of all nodes to whom a communication channel exists at time t;
• U_i, a buffer holding the messages received by node i;
• m, j = recvNext(U_i), a function to retrieve the next message in U_i and its source node, or nothing if the buffer is empty.
Because our algorithm uses a similar strategy as the DAC algorithm in [5] to update the weights across pairs of nodes, there are two types of messages being sent between nodes. To signal the type of data a message contains, it is prepended with either a WEIGHTS flag, indicating that the message contains a node's current weights, or a DELTAS flag, indicating that it contains an update to be applied to the recipient's internal weights.
Our CL algorithm is broken into two distinct phases: the local learning phase, where the node's internal model is updated based on local training data, and the asynchronous update phase, where the nodes update each other to move towards a consensus on the model. The local learning phase performs localized training on N_i batches of data, then moves on to the asynchronous update phase. This phase starts by sending the node's current weights to M_i random nodes in C_{it}. The node then processes all the messages in its communication buffer U_i and returns to the local learning phase. Each node alternates through these phases until the learning task is deemed complete. Algorithm 1 outlines this process in pseudo code, and a Python sketch of the same loop is given below.
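As an illustration of the two phases, the following Python sketch runs one node's loop, treating the weights as a NumPy-like array that supports elementwise arithmetic; the helpers getBatch, trainOnBatch and send, the inbox list, and the mixing weight alpha are stand-ins and assumptions rather than parts of Algorithm 1.

import random

def run_node(p, getBatch, trainOnBatch, peers, inbox, send,
             epochs=100, N_i=10, M_i=1, alpha=0.5):
    for _ in range(epochs):
        # Local learning phase: ordinary training on N_i local batches.
        for _ in range(N_i):
            x, y = getBatch()
            p = trainOnBatch(p, x, y)

        # Asynchronous update phase.
        # 1) Push the current weights to M_i random reachable peers (C_it).
        for peer in random.sample(peers, min(M_i, len(peers))):
            send(peer, ("WEIGHTS", p.copy()))

        # 2) Drain the message buffer U_i.
        while inbox:
            sender, (flag, payload) = inbox.pop(0)
            if flag == "WEIGHTS":
                # Blend the received weights into ours and reply with the deltas.
                blended = (1 - alpha) * p + alpha * payload
                send(sender, ("DELTAS", blended - p))
                p = blended
            elif flag == "DELTAS":
                # Complementary update: subtract the deltas so the pair mean is kept.
                p = p - payload
    return p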
Experimental Results
We performed several experiments to demonstrate the utility of this algorithm and to explore its strengths and limitations, by training a Deep Neural Network (DNN) on the MNIST dataset, as it is a common toy case that demonstrates how our method works quite nicely. In the first set of experiments, the number of updates that are sent out each iteration is varied. In the second set, we trained the DNN on locally biased datasets and varied the level of bias. In the last set of experiments, we explored a case where communication between nodes is unreliable by dropping packets at a varying rate. In all sets of experiments, we compare the results to a control where the DNN is trained on the complete dataset using traditional, non-decentralized methods; we refer to this case as the monolithic model. Because MNIST is widely considered to be a toy problem, we also performed these same tests on the Fashion MNIST dataset [8] and the CIFAR10 dataset [9] to show that our method works when confronted with more complex problems as well.
For the MNIST and Fashion MNIST datasets, we trained a DNN that is a Multi-Layered Perceptron (MLP). This DNN had a single hidden layer with 72 neurons using Rectified Linear Unit (ReLU) activation, and an output layer with 10 neurons using SoftMax activation. For the CIFAR10 dataset, we trained a Convolutional Neural Network (CNN) with two convolutional layers identifying 16 and 32 features with 3x3 pixel windows, ReLU activation, and max pooling after each layer (this architecture was derived from the TensorFlow examples [10]). The network was then flattened, and a 128-neuron dense layer was applied with ReLU activation and dropout. Finally, a 10-neuron dense layer was used as the output layer with SoftMax activation. In all the experiments, the DNN was trained using a Sparse Categorical Cross-Entropy cost function along with an Adam optimizer [11]. Each compute node also has a copy of the same validation and test sets so the results are comparable. Keras sketches of both architectures are given below.
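The sketches below show the two architectures in Keras as we read them from the description above; settings not stated in the text, such as the dropout rate and the input shapes, are assumptions.

import tensorflow as tf

def build_mlp():
    # MLP for MNIST / Fashion MNIST: one 72-neuron ReLU hidden layer, 10-way SoftMax output.
    return tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(72, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

def build_cnn():
    # CNN for CIFAR10: two 3x3 convolutional layers (16 and 32 features) with max pooling,
    # then a 128-neuron dense layer with dropout and a 10-way SoftMax output.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, (3, 3), activation="relu", input_shape=(32, 32, 3)),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),  # dropout rate assumed
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

model = build_mlp()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])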
Data Sharing
In this experiment, the DNN described above was trained on the MNIST dataset distributed over 8 nodes, with each node having a local dataset that is 1/8th of the full dataset. We vary the number of times a copy of the weights is sent out each cycle (M_i). We run a control test where M_i = 0 and compare it to cases where M_i = 1, 2, and 5. We also compare these results to the monolithic model results. Figure 1 shows the metrics taken during these tests. In the case where M_i = 0, the nodes seemed to learn on their local datasets for a short time, then quickly began to memorize this dataset (as indicated by the upward trend in the validation loss signal that begins around epoch 20). When we examine the actual weights, we find that each node learns a distinct model (which is expected given randomized weight initialization). In this case, these models were not able to achieve the same level of performance as the monolithic model.

When we allow the nodes to update each other, the models are able to achieve similar performance to the monolithic model. However, it took significantly longer to achieve this performance. In addition, when examining the weights of the final model, we found that the nodes all learned more or less the same model, as we expect from employing the DAC algorithm. This indicates that the sharing of the model in our proposed algorithm allows relevant information in the local datasets to make its way to the other nodes, resulting in a model that performs more like one trained on the global dataset.

The last key observation of this experiment is that, when M_i > 0, the particular value of M_i did not seem to dramatically affect the performance of the algorithm. This means that we can limit the number of updates to reduce the amount of network traffic necessary to run this algorithm. On unreliable networks, though, multiple updates might be necessary to increase the probability of an update making it to its destination, as discussed later.
Training on Locally Biased Data
To explore the ability of this method to effectively share information between nodes, we considered a case where nodes are given very biased local data while the global dataset remains unbiased. To create these biased datasets, we first sorted the MNIST dataset by label. Given a mix rate 0 ≤ r_m ≤ 100, we created 10 datasets where, for dataset i = 0...9, we took r_m% of the data points labeled i and distributed them among the other 9 datasets, leaving (100 − r_m)% of the data with label i to be added to dataset i. This resulted in 10 datasets with a variable level of bias regulated by r_m; a sketch of this splitting procedure is given after this paragraph. We set up 10 nodes to train on these datasets using the proposed CL algorithm, and ran this test for mix rates at intervals of 10% from 10% to 90%. Figure 2 shows the results of these tests. Not shown in this figure are the results when using a mix rate of 0%; in this case, the learning failed, giving an accuracy of around 10%, which is what we would expect of a model trained on a dataset consisting of only one label. However, with a mixing rate as low as 10%, the nodes were able to learn to recognize inputs with labels not prominent in their own local set. This does come at a price, however: the learning on highly biased datasets takes longer to converge than on better-mixed datasets.
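The following Python sketch illustrates the biased-split construction; since the text does not specify how the moved points are allocated among the other nine datasets, the round-robin allocation (and the function name) is an assumption.

import numpy as np

def biased_split(x, y, mix_rate, n_labels=10, seed=0):
    """Split (x, y) into n_labels datasets; dataset i keeps (100 - mix_rate)% of the
    points labeled i and spreads the remaining mix_rate% over the other datasets."""
    rng = np.random.default_rng(seed)
    datasets = [[] for _ in range(n_labels)]
    for i in range(n_labels):
        idx = rng.permutation(np.flatnonzero(y == i))
        n_move = int(len(idx) * mix_rate / 100.0)
        datasets[i].extend(idx[n_move:])                  # (100 - r_m)% stays with node i
        others = [j for j in range(n_labels) if j != i]
        for k, sample in enumerate(idx[:n_move]):         # r_m% goes to the other nodes
            datasets[others[k % len(others)]].append(sample)
    return [(x[np.array(d, dtype=int)], y[np.array(d, dtype=int)]) for d in datasets]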
Unreliable Communication
In this last experiment, we considered cases where communication is not reliable and packets are dropped. As shown in the first set of experiments, increasing M_i does not seem to improve the performance of the algorithm under ideal conditions. However, when we consider the possibility of dropped packets, increasing M_i could be used to increase the probability that a packet will make it through. What is more concerning to us is whether this algorithm is still valid if the first packet makes it to its destination but the reply packet gets dropped. We ran the same scenario as in the first set of experiments, but instead of varying M_i, we vary the probability that a reply will get dropped. We use dropout rates of 0%, 25%, 50%, 75%, and 100%. The results of these experiments are displayed in Figure 3. As we can see, dropping packets did not seem to greatly affect the performance of the algorithm. This, combined with the results from the first experiment, shows that this algorithm is fairly robust to intermittent communication failures. In fact, sending the reply message does not seem to be necessary, at least for these experiments. However, we chose to keep it as an optional part of our algorithm out of fidelity to the DAC algorithm that inspired it.
Additional Experiments
We also performed the previously mentioned experiments on two additional datasets, the Fashion MNIST dataset and the CIFAR10 dataset. Table 1 shows the metrics for the results of these experiments as well as the MNIST results. For each dataset, the resulting neural networks were run on a test dataset consisting of data points that the networks did not see during training or validation, and the same test set was used for each experiment. The resulting metrics are averaged over all the nodes that were used during training. We can see in these new datasets the same patterns that were apparent in the MNIST experiments. In all cases, when M_i = 0, the decentralized consensus method trained networks that perform poorly compared to when M_i > 0, where the trained networks were able to achieve results comparable to the network produced by the monolithic method given enough time (100 epochs in all cases). Further, increasing M_i and adjusting the dropout rate of return packets did not seem to have a noticeable effect on the convergence and performance of the algorithm in any of our test cases. Finally, when the data was divided into highly biased subsets, the decentralized consensus method was still able to produce a network that is able to recognize categories not prevalent in a given node's local dataset. The overall performance increased with the mixing rate in all cases.
Conclusion
In this paper we propose a Consensus Learning paradigm and an algorithm that implements it. This algorithm takes inspiration from a Distributed Averaging Consensus method to reach a consensus on the model being trained, while using traditional training algorithms to train the model on local data. This algorithm does so without significant requirements on the communication network the nodes are operating on, and is robust to intermittent failure in that network. We demonstrate this algorithm by using it to train a Deep Neural Network model on the MNIST Dataset of handwritten numerals, as well as the Fashion MNIST dataset, and the CIFAR10 dataset. We demonstrate the robustness of this algorithm to failures in the communication network, as well as showing its capacity to overcome bias in local datasets in all three cases.
Broader Impacts
This work does not present any foreseeable societal consequence.
Integrable Conformal Field Theory in Four Dimensions and Fourth-Rank Geometry
We consider the conformal properties of geometries described by higher-rank line elements. A crucial role is played by the conformal Killing equation (CKE). We introduce the concept of null-flat spaces in which the line element can be written as ${ds}^r=r!d\zeta_1\cdots d\zeta_r$. We then show that, for null-flat spaces, the critical dimension, for which the CKE has infinitely many solutions, is equal to the rank of the metric. Therefore, in order to construct an integrable conformal field theory in 4 dimensions we need to rely on fourth-rank geometry. We consider the simple model ${\cal L}={1\over 4} G^{\mu\nu\lambda\rho}\partial_\mu\phi\partial_\nu\phi\partial_\lambda\phi \partial_\rho\phi$ and show that it is an integrable conformal model in 4 dimensions. Furthermore, the associated symmetry group is ${Vir}^4$.
Introduction
It is expected that at very high energies physical processes become scale invariant. In fact, in such a regime all masses involved in any physical process are small in comparison with the energies, and they can be put equal to zero. There is, therefore, no fundamental mass setting the scale of energies, and therefore all physical processes must be scale invariant.

The above speculation is confirmed by several experiments, such as deep inelastic scattering, which reveal that, at very high energies, physical phenomena become scale invariant. Therefore, any field theory attempting to provide a unified description of interactions must, at high energies, exhibit this behaviour.
It can furthermore be shown, from a mathematical point of view, that scale invariance implies conformal invariance.
Conformal field theories can be constructed in any dimension but only for d = 2 do they exhibit a radically different behaviour. Chief among their special properties is the fact that there exists an infinite number of conserved quantities making the theory an exactly solvable, or integrable, theory. The associated symmetry group becomes infinite-dimensional and, after a convenient parametrisation of its generators in terms of Fourier components, is the familiar Virasoro algebra. To be more precise, the group is $Vir \oplus Vir$, with one Virasoro algebra associated to each null space-time direction. Furthermore, two-dimensional conformal integrable field theories hold several other properties which are missing when they are formulated in higher-dimensional space-times. Chief among them are those properties, such as renormalisability, which make their quantum field theoretical version a mathematically consistent model. All the previous facts gave rise to the success of string theories in recent years; cf. ref. 1 for further details.
Let us analyse the situation more closely. In a conformal field theory the symmetry generators are the conformal Killing vectors. They are solutions of the conformal Killing equation

$\nabla_\mu \xi_\nu + \nabla_\nu \xi_\mu = \frac{2}{d}\, g_{\mu\nu}\, \nabla_\lambda \xi^\lambda .$ (1.1)

Only for two-dimensional spaces are the solutions infinitely many, giving rise to an infinite-dimensional symmetry group. A closer analysis of the conformal Killing equation shows that this critical dimension is closely related to the rank of the metric. In fact, since the metric is a second-rank tensor, two terms containing derivatives of the Killing vectors will appear in the conformal Killing equation. After contraction with the metric a factor of 2 is contributed, which leads to the critical dimension d = 2. Therefore, the critical dimension for which the theory exhibits the critical behaviour is equal to the rank of the metric. However, 2 is quite different from 4, the accepted dimension of space-time. Therefore, the ideal situation would be to have a field theory with the previous properties holding in four dimensions: scale invariance, the appearance of an infinite-dimensional symmetry group and the hope for a mathematically consistent quantum field theory. Many attempts have been made in order to reach this purpose. However, all existing proposals are not exempt from criticism and no one can claim success. All of them have so far met with considerable difficulties and, in spite of the tremendous amount of work done on the subject, there is still not a generally accepted integrable conformal field theory in four dimensions.
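For the reader's convenience, the following standard computation, written here in the null coordinates used later for null-flat spaces (the presentation is ours), shows explicitly why the two-dimensional solutions are infinitely many.

% In two dimensions take null coordinates x^\pm with ds^2 = 2\,dx^+ dx^-,
% so the only non-vanishing metric component is g_{+-} = 1, and (1.1) reads
% \partial_\mu\xi_\nu + \partial_\nu\xi_\mu = g_{\mu\nu}\,\partial_\lambda\xi^\lambda:
\begin{align*}
  (++):&\quad 2\,\partial_+ \xi_+ = 0 \;\;\Rightarrow\;\; \xi^- = \xi^-(x^-),\\
  (--):&\quad 2\,\partial_- \xi_- = 0 \;\;\Rightarrow\;\; \xi^+ = \xi^+(x^+),\\
  (+-):&\quad \partial_+ \xi_- + \partial_- \xi_+ = \partial_+\xi^+ + \partial_-\xi^- ,
  \quad\text{identically satisfied.}
\end{align*}
% Hence \xi^+ and \xi^- are two arbitrary functions, one per null direction:
% infinitely many conformal Killing vectors, organised into Vir \oplus Vir.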
In order to obtain an integrable conformal field theory two ingredients are necessary.
The first one is the scale invariance and the second is the existence of an infinite number of conserved quantities. As mentioned above, scale invariance implies conformal invariance. However, the concept of conformality strongly depends on the metric properties of the base space. In order to clarify this point let us consider some general properties of metric spaces.
Our approach is based on the use of line elements of the form

$ds^r = G_{\mu_1\cdots\mu_r}\,dx^{\mu_1}\cdots dx^{\mu_r} .$ (1.2)

Here the natural object is the rth-rank metric $G_{\mu_1\cdots\mu_r}$. With it we can define generalised inner products, norms and angles. The inner product of two generic vectors $A^\mu$ and $B^\mu$ is given by

$(A^p, B^q) = G_{\mu_1\cdots\mu_p\nu_1\cdots\nu_q}\,A^{\mu_1}\cdots A^{\mu_p}\,B^{\nu_1}\cdots B^{\nu_q} ,$ (1.3)

where, obviously, p + q = r. Next we can define the norm of the vector $A^\mu$ by

$\|A\| = \left(G_{\mu_1\cdots\mu_r}\,A^{\mu_1}\cdots A^{\mu_r}\right)^{1/r} ,$ (1.4)

and the same definition holds for $B^\mu$. Next we define the generalised angles

$\cos_{(p,q)}\theta = \frac{(A^p, B^q)}{\|A\|^p\,\|B\|^q} .$ (1.5)

In the second-rank case the above formulae coincide with the usual definitions. These formulae seem to be the natural generalisations to higher-rank geometries of the concepts of inner product, norm and angle in Riemannian geometry. It can then be shown that the generalised angles are invariant under the scale transformation

$G_{\mu_1\cdots\mu_r} \to \lambda(x)\,G_{\mu_1\cdots\mu_r} .$ (1.6)

Therefore, it is reasonable to call this kind of transformations conformal. This is the concept of conformality we use in our approach. The next step is to determine how conformal transformations of the metric $G_{\mu_1\cdots\mu_r}$ can be obtained. For this let us consider the transformations

$x^\mu \to x^\mu + \xi^\mu(x) ,$ (1.7)

with $\xi^\mu$ an infinitesimal function. At first order in $\xi^\mu$ the metric is changed by its Lie derivative along ξ,

$\delta G_{\mu_1\cdots\mu_r} = \xi^\lambda\partial_\lambda G_{\mu_1\cdots\mu_r} + G_{\lambda\mu_2\cdots\mu_r}\,\partial_{\mu_1}\xi^\lambda + \cdots + G_{\mu_1\cdots\mu_{r-1}\lambda}\,\partial_{\mu_r}\xi^\lambda .$ (1.8)

A conformal transformation will be induced on the metric if the previous variation is proportional to the metric,

$\delta G_{\mu_1\cdots\mu_r} = \alpha\, G_{\mu_1\cdots\mu_r} .$ (1.9)

We then obtain a generalised conformal Killing equation, eq. (1.10), in which the value of α is fixed by taking the trace of the equation; the trace brings in G, the determinant of $G_{\mu_1\cdots\mu_r}$. Before continuing the analysis of this equation let us turn our attention to the curvature properties of differentiable manifolds. Curvature properties are described by the curvature tensor constructed in terms of a connection $\Gamma^\lambda_{\mu\nu}$. The metric and the connection are, in general, independent objects. They can be related through a metricity condition. The natural metricity condition is

$\partial_\lambda G_{\mu_1\cdots\mu_r} - \Gamma^\sigma_{\lambda\mu_1}G_{\sigma\mu_2\cdots\mu_r} - \cdots - \Gamma^\sigma_{\lambda\mu_r}G_{\mu_1\cdots\mu_{r-1}\sigma} = 0 .$ (1.12)

The number of unknowns for a symmetric connection $\Gamma^\lambda_{\mu\nu}$ is $\frac{1}{2}d^2(d+1)$, while the number of equations is

$d\,\frac{(d+r-1)!}{r!\,(d-1)!} .$ (1.13)

This number is always greater than $\frac{1}{2}d^2(d+1)$. Therefore the system is overdetermined and some differentio-algebraic conditions must be satisfied by the metric. Since, in general, such restrictions will not be satisfied by a generic metric, one must deal with the connection and the metric as independent objects. Therefore, for physical applications, the connection and the metric must be considered as independent fields.
The exception is r = 2, Riemannian geometry, in which the number of unknowns and the number of equations are the same. Since (1.12) is an algebraic linear system, the solution is unique and is given by the familiar Christoffel symbols of the second kind.
A metricity condition can be imposed consistently only if the number of independent components of the metric is less than that naively implied by (1.13). The maximum acceptable number of independent components is $\frac{1}{2}d(d+1)$. This can be achieved, for instance, for null-flat spaces, for which the line element is given by

$ds^r = r!\,d\zeta_1\cdots d\zeta_r .$ (1.14)

Spaces described by this line element have r null directions. The only non-null component of the metric is $G_{12\cdots r} = 1$, together with the components obtained from it by symmetry. In this case eq. (1.12) has the unique solution $\Gamma^\lambda_{\mu\nu} = 0$; therefore these spaces are flat. That is the reason to call them null-flat spaces.
A simple counting of equations and unknowns shows that the situation for eq. (1.10) is similar to that for eq. (1.12). Therefore a consistent solution will exist only for certain kinds of metrics. A particular case is for null-flat metrics. In this case one can prove the following result: Theorem. The critical dimension, for which the conformal Killing equation has infinitely many solutions, is equal to the rank of the metric.
In this case one can furthermore prove that the symmetry group is $Vir^r$.
Therefore, if we want to construct an integrable conformal field theory in four dimensions we must rely on fourth-rank geometry. In this case the conformal Killing equation must appear as the condition for the existence of conserved quantities.
Let us now mimic the introductory remarks for a fourth-rank metric. Conformal field theories can be constructed in any dimension but only for d = 4 do they exhibit a radically different behaviour. Chief among their special properties is the fact that the symmetry group becomes infinite-dimensional. The group, after a convenient parametrisation of its generators in terms of Fourier components, is nothing more than the familiar Virasoro algebra. To be more precise, the group is $Vir \oplus Vir \oplus Vir \oplus Vir$, with one Virasoro algebra for each null space-time direction. The fact that the symmetry group is infinite-dimensional implies that there is an infinite number of conserved quantities, making the theory an exactly solvable, or integrable, theory. Furthermore, four-dimensional field theories hold several other properties which are missing when they are formulated in other dimensions. Chief among them are those properties which make their quantum field theoretical version a mathematically consistent model.
Let us analyse the situation more closely. In a conformal field theory the symmetry generators are the conformal Killing vectors. They are solutions of the conformal Killing equation (1.16). Only for four-dimensional spaces are the solutions infinitely many, giving rise to an infinite-dimensional symmetry group. A closer analysis of the conformal Killing equation shows that this critical dimension is closely related to the rank of the metric. In fact, since the metric is a fourth-rank tensor, four terms containing derivatives of the Killing vectors will appear in the conformal Killing equation. After contraction with the metric a factor of 4 is contributed, which leads to the critical dimension d = 4. Therefore, the critical dimension for which the theory exhibits the critical behaviour is equal to the rank of the metric.
Comparison of eq. (1.16) with eq. (1.1) illustrates the comments at the introduction concerning the appearance of a number of terms equal to the rank of the metric.
The simple Lagrangian

${\cal L} = \frac{1}{4}\,G^{\mu\nu\lambda\rho}\,\phi_\mu\phi_\nu\phi_\lambda\phi_\rho ,$ (1.17)

where $\phi_\mu = \partial_\mu\phi$ and φ is a scalar field, exhibits all the properties we are looking for: it is conformally invariant and integrable. Furthermore, the Lagrangian (1.17) is renormalisable, by power counting, in four dimensions.
The work is organised as follows: In Section 2 we start by considering the metric properties of differentiable manifolds. In Section 3 we consider the curvature properties of differentiable manifolds and introduce the concept of null-flat spaces. Section 4 is dedicated to the conformal Killing equation in null-flat spaces. In Section 5 we introduce the fundamentals of conformal field theory. Section 6 reviews the results on conformal field theory for second-rank, Riemannian, geometry. Section 7 presents the results on conformal field theory for fourth-rank geometry. Section 8 is dedicated to the conclusions.
To our regret, due to the nature of this approach, we must bore the reader by exhibiting some standard and well known results in order to illustrate where higher-rank geometry departs from the standard one.
Metric Properties of Differentiable Manifolds
The metric properties of a differentiable manifold are related to the way in which one measures distances. Let us recall the fundamental definitions concerning the metric properties of a manifold. Here we take recourse to the classical argumentation by Riemann [2]. Let M be a d-dimensional differentiable manifold, and let $x^\mu$, μ = 1, ···, d, be local coordinates. The infinitesimal element of distance ds is a function of the coordinates x and their differentials dx, ds = f(x, dx), which is homogeneous of first degree in the differentials, f(x, λ dx) = λ f(x, dx) for λ > 0 (condition (2.2a)), and is positive definite (condition (2.2b1)). Condition (2.2b1) was written, so to say, in a time in which line elements were thought to be positive definite. With the arrival of General Relativity one got used to working with line elements of undefined signature. Condition (2.2b1) was there to assure that the distance measured in one direction is the same one measures in the opposite direction. Therefore, condition (2.2b1) can be replaced by the weaker condition f(x, −dx) = f(x, dx) (condition (2.2b2)), and the above conditions can be summarised into the single condition f(x, λ dx) = |λ| f(x, dx). Of course the possibilities are infinitely many. Let us restrict our considerations to monomial functions. Then we will have

$ds^r = G_{\mu_1\cdots\mu_r}\,dx^{\mu_1}\cdots dx^{\mu_r} .$

Condition (2.2a) is satisfied by construction. In order to satisfy condition (2.2b2), r must be an even number. The simplest choice is r = 2, which corresponds to Riemannian geometry. The coefficients $g_{\mu\nu}$ are the components of the covariant metric tensor. If g, the determinant of the metric, is non-zero, the inverse metric is given by (2.6) and satisfies $g^{\mu\nu}g_{\nu\lambda} = \delta^\mu_\lambda$.

The next possibility is r = 4. In this case the line element is given by $ds^4 = G_{\mu\nu\lambda\rho}\,dx^\mu dx^\nu dx^\lambda dx^\rho$. The determinant of the metric $G_{\mu\nu\lambda\rho}$ can be written in terms of the usual completely antisymmetric Levi-Civita symbols. If G ≠ 0, the inverse metric exists and satisfies the corresponding contraction relations. That this is true can be verified by hand in the two-dimensional case and with computer algebraic manipulation for 3 and 4 dimensions [3].

All the previous results can be generalised to an arbitrary even r. In the generic case the line element is given by

$ds^r = G_{\mu_1\cdots\mu_r}\,dx^{\mu_1}\cdots dx^{\mu_r} .$ (2.12)

The determinant of the metric $G_{\mu_1\cdots\mu_r}$ can again be written in terms of the completely antisymmetric Levi-Civita symbols, and if G ≠ 0 the inverse metric exists and satisfies the corresponding contraction relations. It is clear that higher-rank geometries are observationally excluded at the scale of distances of our daily life. However, a Riemannian behaviour can be obtained for separable spaces. A space is said to be separable if the metric decomposes as

$G_{\mu_1\nu_1\cdots\mu_s\nu_s} = g_{(\mu_1\nu_1}\cdots g_{\mu_s\nu_s)} .$ (2.16)

In this case the line element reduces to a quadratic form and therefore all the results obtained for a generic $G_{\mu_1\cdots\mu_r}$ reduce to those for Riemannian geometry.
Inner Products and Angles
Let us consider the inner products of two generic vectors A^µ and B^µ, obtained by saturating the metric with p factors of A and q factors of B, where, obviously, p + q = r. Next we can define the norm of the vector A^µ by saturating the metric with A alone, and from these two ingredients generalised angles between A and B. In the second-rank case the above formulae coincide with the usual definitions. These formulae seem to be the natural generalisations to higher-rank geometries of the concepts of inner product, norm and angle in Riemannian geometry. Furthermore, it must be observed that for higher-rank geometries we can consider inner products of more than two vectors. For our purposes it is enough to restrict our considerations to two vectors.
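Since the explicit definitions are not displayed above, the following is a hedged reconstruction of what they presumably look like, obtained by following the second-rank pattern; the contraction pattern, the 1/r power in the norm and the notation cos_{(p,q)} are assumptions rather than quotations from the original:

\[
(A^{p}B^{q}) \;=\; G_{\mu_1\cdots\mu_p\,\nu_1\cdots\nu_q}\,A^{\mu_1}\cdots A^{\mu_p}\,B^{\nu_1}\cdots B^{\nu_q},
\qquad p+q=r,
\]
\[
\lVert A\rVert \;=\; \bigl(G_{\mu_1\cdots\mu_r}\,A^{\mu_1}\cdots A^{\mu_r}\bigr)^{1/r},
\qquad
\cos_{(p,q)}\theta(A,B) \;=\; \frac{(A^{p}B^{q})}{\lVert A\rVert^{\,p}\,\lVert B\rVert^{\,q}} .
\]

For r = 2 and p = q = 1 these reduce to the familiar Riemannian inner product, norm and angle; moreover, the ratio defining the angle is unchanged when the metric is rescaled by a positive function of the coordinates, which is precisely the scale invariance invoked in the next paragraph.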
Let us now observe that under a rescaling of the metric by an arbitrary positive function of the coordinates, the generalised angles remain unchanged: they are scale invariant. This is a good reason to call such transformations conformal, since they preserve the (generalised) angles.
The Conformal Killing Equation
Let us next analyse how one can obtain conformal symmetries of the metric. Let us consider the infinitesimal coordinate transformation x^µ → x^µ + ξ^µ(x), with ξ^µ an infinitesimal function of the coordinates. Under this transformation the metric is changed, at first order in ξ, by its Lie derivative along ξ. In order for this variation to induce a conformal transformation of the metric, it must be proportional to the metric itself, with some proportionality factor α. One arrives then at the conformal Killing equation (2.24). (The value of α has been fixed by taking the trace of this equation.) This equation is written completely in terms of the metric, since there is no Christoffel symbol associated to it. The equation is furthermore overdetermined: in fact, the number of derivatives ∂_ν ξ^µ is much smaller than the number of equations (2.24). Therefore, solutions will exist only for certain classes of metrics. A solution can be obtained for null-flat spaces; cf. Section 3.1.
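A hedged reconstruction of the equations just referred to, written only to illustrate the structure described in the text (the Lie-derivative form, the overall sign convention and the trace factor r/d are inferred from the later remark that this factor disappears when r = d, and from the usual second-rank case), is:

\[
\delta G_{\mu_1\cdots\mu_r}
= -\Bigl(\xi^{\lambda}\partial_{\lambda}G_{\mu_1\cdots\mu_r}
+ G_{\lambda\mu_2\cdots\mu_r}\,\partial_{\mu_1}\xi^{\lambda}
+ \cdots
+ G_{\mu_1\cdots\mu_{r-1}\lambda}\,\partial_{\mu_r}\xi^{\lambda}\Bigr),
\]
\[
\xi^{\lambda}\partial_{\lambda}G_{\mu_1\cdots\mu_r}
+ G_{\lambda\mu_2\cdots\mu_r}\,\partial_{\mu_1}\xi^{\lambda}
+ \cdots
+ G_{\mu_1\cdots\mu_{r-1}\lambda}\,\partial_{\mu_r}\xi^{\lambda}
= \frac{r}{d}\,\bigl(\partial_{\lambda}\xi^{\lambda}\bigr)\,G_{\mu_1\cdots\mu_r}.
\]

For r = 2 this reduces to the ordinary conformal Killing equation for g_{µν}, and for a null-flat space with r = d the factor r/d reduces to unity, as noted in Section 4.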
Curvature Properties of Differentiable Manifolds
Curvature properties are based on affine properties, which in turn are related to how one moves from one point to a nearby one. These properties are mathematically described by the connection Γ^λ_{µν}. In terms of the connection one can define the Riemann tensor in the usual way. Up to now the connection Γ^λ_{µν} and the metric G_{µνλρ} are unrelated. They can be related through a metricity condition. In Riemannian geometry this metricity condition reads ∇_λ g_{µν} = 0 (3.2). The number of unknowns for a symmetric Γ and the number of equations (3.2) are the same, viz. ½ d²(d + 1). Therefore, since this is an algebraic linear system, the solution is unique and is given by the familiar Christoffel symbols of the second kind,

Γ^λ_{µν} = ½ g^{λρ} (∂_µ g_{ρν} + ∂_ν g_{ρµ} − ∂_ρ g_{µν}) .

In the case of a higher-rank metric a condition analogous to (3.2) would read ∇_λ G_{µ1···µr} = 0 (3.4). However, in this case the number of unknowns Γ^λ_{µν} is, as before, ½ d²(d + 1), while the number of equations, given by (3.5), is larger. Therefore the system is overdetermined and some differential-algebraic conditions must be satisfied by the metric. Since, in general, such restrictions will not be satisfied by a generic metric, one must deal with Γ^λ_{µν} and G_{µ1···µr} as independent objects. A metricity condition can be imposed consistently only if the number of independent components of the metric is smaller than that naively implied by (3.5). The maximum acceptable number of independent components is ½ d(d + 1). This can be achieved, for instance, if the metric is that of a null-flat space.
Null-Flat Spaces
As we will see in the next section, there is a close connection between the dimension of the manifold and the rank of the metric. For a second-rank geometry in a two-dimensional manifold, the metric of a flat space can always be brought to the forms (3.6a) and (3.6b) for Minkowskian and Euclidean signatures, respectively. However, both of them can be brought to the simple null form (3.7), using real null coordinates in the Minkowskian case and complex coordinates in the Euclidean case. Therefore the canonical form (3.7) is independent of the signature of the underlying space. Since for higher-rank geometries there is no concept of flatness, eq. (3.7) seems to be a good definition to be generalised.
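The canonical forms themselves are not reproduced above; presumably they are the standard two-dimensional ones, sketched here with an assumed normalisation and an assumed choice of coordinates:

\[
ds^2 = (dx^1)^2 - (dx^2)^2 \quad\text{(Minkowskian)},\qquad
ds^2 = (dx^1)^2 + (dx^2)^2 \quad\text{(Euclidean)},
\]
\[
ds^2 = 2\,d\zeta^1\,d\zeta^2,\qquad
\zeta^{1,2} = \tfrac{1}{\sqrt{2}}\bigl(x^1 \pm x^2\bigr)\ \text{(real, Minkowskian)},\qquad
\zeta^{1,2} = \tfrac{1}{\sqrt{2}}\bigl(x^1 \pm i\,x^2\bigr)\ \text{(complex, Euclidean)}.
\]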
The concept of null-flat space is defined only for spaces in which the dimension and the rank of the metric coincide. Then the line element is given by

ds^r = r! dζ^1 ··· dζ^r .    (3.9)

(In higher-rank geometry not all flat spaces are null.) It is clear that each coordinate ζ^µ is associated to a null direction of the manifold. This is the reason to call these spaces null. The only non-vanishing component of the metric is G_{12···r} = 1, together with its symmetric permutations. One can now easily verify that in this case the metricity condition for higher-rank metrics, eq. (3.4), has the unique solution Γ^λ_{µν} = 0. Therefore these spaces are flat. This is the reason to call these spaces flat.
The Conformal Killing Equation in Null-Flat Spaces
Now we come to what we consider to be our most important result. Let us consider the conformal Killing equation (4.1) in a null-flat space. (The r/d factor has disappeared since in a null-flat space r = d.) Then we can establish the following:

Theorem. The critical dimension, for which the conformal Killing equation has infinitely many solutions, is equal to the rank of the metric.
The proof is quite simple. Let us observe that at most two indices can be equal. The number of equations in which two indices are equal is d(d − 1); they are equivalent to ∂_µ ξ^ν = 0 for µ ≠ ν. The solution is ξ^µ = ξ^µ(ζ^µ) (no summation), while the equation in which all indices are different is identically satisfied. Therefore, the components of the conformal Killing vectors are arbitrary functions of the single coordinate along the associated direction, and the solutions are infinitely many. Let us now define the operators generated by these vector fields, one set for each null direction (no summation). One can then easily verify their commutation relations. Therefore the symmetry group is the direct product of r copies of the group which, after Fourier parametrisation, we recognise as the Virasoro group. The symmetry group is therefore Vir^r.
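The operators and commutators referred to in the proof are not displayed above; a hedged reconstruction, standard for reparametrisations of a single coordinate (the symbol L^{(µ)}_f and the sign conventions are choices made here for illustration), is:

\[
L^{(\mu)}_{f} = f(\zeta^{\mu})\,\partial_{\mu}\quad(\text{no summation}),\qquad
\bigl[L^{(\mu)}_{f},\,L^{(\nu)}_{g}\bigr]=0\ \ (\mu\neq\nu),\qquad
\bigl[L^{(\mu)}_{f},\,L^{(\mu)}_{g}\bigr]=L^{(\mu)}_{f g' - g f'} .
\]

Expanding f in Fourier (or monomial) modes, f_n(ζ) ∝ ζ^{n+1}, each set {L^{(µ)}_n} closes into the Witt algebra, i.e. the centreless Virasoro algebra, so that classically the symmetry algebra is a direct sum of r commuting copies of it.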
Integrable Conformal Field Theories
Now we state the fundamentals for the construction of an integrable conformal field theory.
Let us start by making some elementary considerations about field theory. We will restrict our considerations to generic fields φ^A, A = 1, ···, n, described by a Lagrangian L(φ^A, ∂_µ φ^A), for which we introduce the generalised canonical momentum conjugate to the field derivatives. The energy-momentum tensor (5.4) is constructed from this momentum and satisfies the continuity equation (5.5). The first comment relevant to our work is in order here. The definition (5.4) of the energy-momentum tensor guarantees, through (5.5), its conservation on-shell. This definition is independent of the existence of a metric or any other background field. This is what we need in the next stages, where we are going to become independent of the usual second-rank metric.
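The canonical definitions alluded to here are presumably the standard ones; they are written out below as a sketch consistent with the surrounding text (no equation numbers are attached, since the original numbering is not reproduced):

\[
\pi^{\mu}_{A} = \frac{\partial\mathcal{L}}{\partial(\partial_{\mu}\phi^{A})},\qquad
H^{\mu}{}_{\nu} = \pi^{\mu}_{A}\,\partial_{\nu}\phi^{A} - \delta^{\mu}_{\nu}\,\mathcal{L},\qquad
\partial_{\mu}H^{\mu}{}_{\nu} = 0\ \ \text{(on-shell)}.
\]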
Let us now make some considerations about conformal field theory. The main properties that a conformal theory must have are: C1. Translational invariance, which implies that the energy-momentum tensor H^µ_ν is conserved, i.e., eq. (5.5). C2. Invariance under scale transformations, which implies the existence of the dilaton current; this current can be constructed explicitly.4 The conservation of D^µ implies that H^µ_ν is traceless, where the conservation of H^µ_ν, eq. (5.5), has been used. Now we look for the possibility of constructing further conserved quantities. We concentrate on quantities built by contracting the energy-momentum tensor with a vector field ξ; their conservation then requires the condition (5.9) on H^µ_ν and ξ. In order to obtain more information from this equation we need to introduce a further geometrical object allowing us to raise and lower indices.
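A hedged reconstruction of the quantities involved, following the standard construction (the identification of the last condition with eq. (5.9) rests on the later references to that equation, and the explicit forms are assumptions rather than quotations):

\[
D^{\mu} = H^{\mu}{}_{\nu}\,x^{\nu},\qquad
\partial_{\mu}D^{\mu} = H^{\mu}{}_{\mu}\ \Rightarrow\ \text{conservation of }D^{\mu}\ \Longleftrightarrow\ H^{\mu}{}_{\mu}=0,
\]
\[
J^{\mu}_{(\xi)} = H^{\mu}{}_{\nu}\,\xi^{\nu},\qquad
\partial_{\mu}J^{\mu}_{(\xi)} = H^{\mu}{}_{\nu}\,\partial_{\mu}\xi^{\nu}
\ \Rightarrow\ H^{\mu}{}_{\nu}\,\partial_{\mu}\xi^{\nu}=0\quad\text{(eq. (5.9))}.
\]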
In the next sections we apply the above condition to second- and fourth-rank geometries.
Scale Invariant Field Theory in Second-Rank Geometry
Now we consider the properties of scale invariant field theories in second-rank geometry.
In order to obtain the consequences of scale invariance in second-rank geometry we consider a constant flat metric g_µν. Then we define

H^{µν} = H^µ_λ g^{λν} .    (6.1)

If (6.1) happens to be symmetric, then eq. (5.9) can be written in the symmetrised form H^{µν} (∂_µ ξ_ν + ∂_ν ξ_µ) = 0. Furthermore, if the energy-momentum tensor is traceless, as required by scale invariance, the most general solution to this condition (6.3) is

∂_µ ξ_ν + ∂_ν ξ_µ = (2/d) (∂_λ ξ^λ) g_µν ,    (6.4)

i.e., the ξ's are conformal Killing vectors for the metric g_µν. As shown in Section 4, the critical dimension for this equation is d = 2, i.e., the solutions to (6.4) are infinitely many. For the metric (3.7) the solutions to eq. (6.4) are

ξ^1 = f(ζ^1) ,    (6.5a)
ξ^2 = g(ζ^2) ,    (6.5b)

where f and g are arbitrary functions. Now we define the corresponding generators (6.6); these quantities satisfy the commutation relations (6.7). Relations (6.7) are essentially the algebra of two-dimensional diffeomorphisms. After conveniently parametrising them in terms of Fourier components one gets the familiar Virasoro algebra.
To be more precise, there is one Virasoro algebra for each null direction. The next step is to find a conformal field theory for which the conserved quantities are determined by eq. (6.4). The simplest example is the free scalar Lagrangian (6.8), where φ_µ = ∂_µ φ. This simple Lagrangian has many important properties: it is conformally invariant, it possesses infinitely many conserved quantities, and it is renormalisable, by power counting, for d = 2. The energy-momentum tensor is given by (6.9) and its contravariant form by (6.10). In this case the contravariant ∂φ coincides with the momentum, such that this expression becomes symmetric; it is furthermore traceless. Therefore we will have an infinite set of conserved quantities for d = 2.
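The expressions (6.8)–(6.10) are not displayed above; the standard free-scalar expressions they presumably correspond to are sketched here (the overall factors and the placement of the √g are assumptions):

\[
\mathcal{L} = \tfrac{1}{2}\,g^{\mu\nu}\,\phi_{\mu}\,\phi_{\nu}\,\sqrt{g},\qquad \phi_{\mu}=\partial_{\mu}\phi,
\]
\[
H^{\mu}{}_{\nu} = \Bigl(\phi^{\mu}\phi_{\nu} - \tfrac{1}{2}\,\delta^{\mu}_{\nu}\,\phi^{\lambda}\phi_{\lambda}\Bigr)\sqrt{g},\qquad
H^{\mu\nu} = \Bigl(\phi^{\mu}\phi^{\nu} - \tfrac{1}{2}\,g^{\mu\nu}\,\phi^{\lambda}\phi_{\lambda}\Bigr)\sqrt{g},
\]

which is symmetric by construction and traceless precisely for d = 2, so that eq. (6.4) generates an infinite set of conserved currents of the form H^{µν} ξ_ν.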
The above properties of the CKE in d = 2 are at the origin of the great success of string theories. In fact, string theories have all the good properties one would like for a consistent quantum field theory of the fundamental interactions. All these reasons gave hope for strings to be the theory of everything. However, string theories are interesting only when formulated in 2 dimensions, which is quite different from 4, the accepted dimension of space-time. Therefore, the ideal situation would be to have a field theory formulated in 4 dimensions and exhibiting the good properties of string theories. This is the problem to which we turn our attention now.
Integrable Conformal Field Theory in Four Dimensions
As mentioned previously, in order to obtain an integrable scale invariant model, two ingredients are necessary: scale invariance and an infinite number of conserved quantities.
In order to obtain an infinite number of conserved quantities we need that they be generated by an equation admitting infinitely many solutions.
In 4 dimensions an infinite number of solutions can be obtained from the conformal Killing equation for fourth-rank geometry in null-flat spaces, as shown in Section 4. Now we need to establish the equivalence between the condition (5.9) and the fourth-rank conformal Killing equation (7.1). In the fourth-rank case, however, the operation of raising and lowering indices is not well defined, and therefore the operations involved in (6.1)-(6.2) do not exist. In fact, this procedure works properly for second-rank metrics because only for them are the operations of raising and lowering indices well defined (the tangent and cotangent bundles are diffeomorphic). This is not a real problem, since the only thing we must require is that condition (5.9) gives rise to the conformal Killing equation (7.1). This can be done for a simple Lagrangian which is the natural generalisation of (6.8) to fourth-rank geometry.
Let us consider the Lagrangian (7.2). This simple Lagrangian exhibits the properties we are interested in: it is conformally invariant, it possesses infinitely many conserved quantities, and it is renormalisable, by power counting, for d = 4. The generalised momenta are given by

π^µ = G^{µβγδ} φ_β φ_γ φ_δ G^{1/4} .    (7.3)

The energy-momentum tensor is defined as in (5.4). Condition (5.9) then becomes a polynomial identity in the field derivatives. Since the ξ's do not depend on the ∂φ's, what must vanish is the completely symmetric coefficient with respect to the ∂φ's. The result is the conformal Killing equation (7.1). Therefore we will have an infinite-dimensional symmetry group for d = 4, and we have succeeded in implementing conformal invariance for d = 4. We have seen, furthermore, that the rank of the metric is essential to implement conformal invariance in higher dimensions. It must also be observed that the appearance of conformal behaviour at some critical dimension is a geometrical property of the base space and is therefore model independent. Hence any attempt at implementing conformal invariance in four dimensions by relying only on the second-rank metric is condemned to fail. The next step is, of course, to construct a more realistic model along lines similar, for example, to the Polyakov string.
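The Lagrangian itself is not displayed above; a form consistent with the quoted momenta (7.3) — written here as a hedged reconstruction, with the overall factor 1/4 inferred from differentiation — is:

\[
\mathcal{L} = \tfrac{1}{4}\,G^{\mu\nu\lambda\rho}\,\phi_{\mu}\,\phi_{\nu}\,\phi_{\lambda}\,\phi_{\rho}\;G^{1/4},
\qquad \phi_{\mu}=\partial_{\mu}\phi,
\qquad
\pi^{\mu} = \frac{\partial\mathcal{L}}{\partial\phi_{\mu}} = G^{\mu\beta\gamma\delta}\,\phi_{\beta}\,\phi_{\gamma}\,\phi_{\delta}\;G^{1/4}.
\]

Inserting the corresponding H^µ_ν into the condition H^µ_ν ∂_µ ξ^ν = 0 and demanding that the completely symmetric coefficient of the ∂φ's vanish then reproduces the fourth-rank conformal Killing equation (7.1), as stated in the text.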
Comments
That the symmetry group for an integrable conformal model in 4 dimensions should be Vir^4 was advanced by Fradkin and Linetsky.5 While Vir and Vir^2 are clearly related to Riemannian geometry, a similar geometrical concept was lacking for Vir^4. This missing geometrical link is provided by fourth-rank geometry. According to our previous results, the conformal Killing equation for a fourth-rank metric exhibits the desired behaviour. Our problem is therefore reduced to constructing a field theory in which the conformal Killing equation plays this central role.
Conclusions
We have seen that integrable conformal field theories can be constructed in 4 dimensions if one relies on fourth-rank geometry. Furthermore, all desirable properties of a quantum field theory are present when using fourth-rank geometry, viz., renormalisability (by power counting), integrability, etc. There seems to be a close connection between the dimension and the rank of the geometry. When they coincide, as happens for null-flat spaces, all the good properties show up. Why this is so is a question still waiting for an answer.
Our future plan of work is to develop further, possibly more realistic, models with the previous properties, which will perhaps provide clues to answer the question raised in the above paragraph.
Binding of non-canonical peptidoglycan controls Vibrio cholerae broad spectrum racemase activity
Introduction
Amino acids exist as both L and D enantiomers, with the L-form being the most predominant [1]. While L-amino acids (LAA) are the building blocks of proteins in all kingdoms of life [2], the presence of D-amino acids (DAA) is usually linked to the existence of dedicated amino acid racemases, which are able to interconvert L- to D-amino acids and vice versa [3]. The most commonly studied DAA racemases are the Ala-racemase (Ala-R) and the Glu-racemase (Glu-R), implicated in the synthesis of D-Ala and D-Glu, main components of the bacterial peptidoglycan, also called the murein sacculus [4,5]. The bacterial peptidoglycan is an indispensable net-like subcellular structure formed by groups of sugars (N-acetyl-glucosamine-N-acetyl-muramic acid) cross-linked by short peptide chains that include both LAAs and DAAs [6]. The archetypical peptide stem structure is L-alanine, D-glutamic acid, a dibasic amino acid (typically meso-diaminopimelic acid or L-lysine), D-alanine, and D-alanine. Therefore, both cytoplasmic racemases, Ala-R and Glu-R, as well as their reaction products (DAAs), are fundamental for maintaining the murein sacculus structure and, thus, bacterial fitness [6].
Bacteria can have several copies of both these racemases [7]. For example, Vibrio cholerae presents two non-functionally-redundant Ala-R enzymes, one of which is primarily related to peptidoglycan biosynthesis. Interestingly, V. cholerae encodes an additional multispecific amino acid racemase, named BsrV for broad-spectrum racemase of Vibrio, which produces non-canonical DAAs (NCDAAs), i.e., DAAs that are different from those usually present in the cell wall [8,9]. BsrV is an Ala-R homolog. Like Ala-R, BsrV uses pyridoxal 5′-phosphate (PLP) as a cofactor and can efficiently racemize Ala. However, BsrV produces nine additional DAAs, including the non-β-branched aliphatic amino acids (Leu, Met, Ser, Cys, Gln and Asn) and the positively charged amino acids (His, Lys and Arg) [9].
In V. cholerae, expression of BsrV is regulated by the stress sigma factor RpoS in response to high population density and nutrient exhaustion (i.e., stationary growth phase) [8]. In this context, BsrV is expressed and its activity drives the production and release of millimolar concentrations of NCDAAs into the extracellular medium. NCDAAs can then be used as substrates by certain cell wall synthetic enzymes to induce chemical changes in the peptidoglycan composition [8,10,11]. It has been demonstrated that such cell wall chemical editing by NCDAAs down-regulates peptidoglycan synthesis to enable cell wall adaptation to stationary phase conditions [8,10,11].
In addition to being regulators of peptidoglycan synthesis and integrity [8,10,11], NCDAAs have also been reported to be involved in diverse cellular processes such as catabolism [12], biofilm formation [13], bacteria-bacteria interactions [14], microbiome biodiversity [15], and modulation of host immune cells and the immune cell response [16]. As with their L-enantiomeric counterparts, the physiological role of NCDAAs depends both on the particular bacterium and on the chemical properties of the NCDAA produced [7,17]. Nonetheless, high levels of DAAs are usually detrimental to most bacterial species. Thus, it has been hypothesized that NCDAA-producing bacteria should be equipped with a regulatory mechanism to tolerate the toxic effects of these molecules [18].
In a previous study, we reported the three-dimensional structure of BsrV and defined a molecular fingerprint of conserved residues that characterizes the family of broad-spectrum racemases [9]. Compared to Ala-R, BsrV's capacity to accommodate amino acid substrates other than Ala capitalizes on its broader entry channel and active site. Using a BsrV-His variant, we observed that the hexahistidine affinity tag was fully stabilized in BsrV's entry channel. Based on these data, we proposed that BsrV might interact with cell wall muropeptides. Using in vitro biochemical and structural analyses, we demonstrated that peptidoglycan peptide moieties bind and inhibit BsrV activity. Interestingly, edited muropeptides containing NCDAAs (produced by BsrV) showed stronger binding and inhibitory properties compared to those ending in D-Ala (canonical muropeptides), suggesting that BsrV activity is controlled via a negative feedback loop by the degree of NCDAA cell wall editing.
Microbiology
All V. cholerae strains used in this study were derived from the sequenced El Tor clinical isolate N16961 [23] and were grown in Luria broth (LB) medium with 1% NaCl. Strains, plasmids and primers, growth conditions, construction of mutant bacterial strains, and standard molecular biology techniques are described below.
Protein expression and purification
The V. cholerae and A. hydrophila genes encoding BsrV, Bsr Ah, Alr Ah, and a tag-less BsrV were cloned into pET28b (Novagen) for expression in E. coli BL21 (DE3) cells [24]. Expression was induced (at OD600 = 0.4) with 1 mM IPTG for 3 h. Cell pellets were resuspended in 50 mM Tris-HCl pH 7.2, 150 mM NaCl, 10% glycerol, and Complete Protease Inhibitor Cocktail Tablets (Roche), and lysed with 3 passes through a French press. Proteins were purified from cleared lysates (30 min, 50,000 rpm) on Ni-NTA agarose columns (Qiagen) and eluted with a discontinuous imidazole gradient. Pure proteins were visualized by SDS-PAGE electrophoretic protein separation [25]. Tag-less BsrV was obtained from its His-tagged derivative (see Table S2), which presents a Tobacco etch virus (TEV) protease cleavage site preceding the His-tag and was likewise cloned in pET28b (Novagen). TEV protease (Sigma) digestion was performed at 30 °C for 6 h in 25 mM Tris-HCl pH 8.0, 150-500 mM NaCl, 14 mM β-mercaptoethanol.
Peptidoglycan analysis
Peptidoglycan sacculi were prepared by pelleting 500 mL of bacterial cells. Cell pellets were resuspended into a small volume of medium and slowly dropped into an equal volume of boiling 10% (w/v) SDS. The sacculi were ultracentrifuged for 15 min at 100,000 rpm (TLA-110 Beckman rotor; Optima Max ultracentrifuge, Beckman), and the pellets were washed 3 times by repeated cycles of centrifugation and resuspension in water. The pellet from the last washing was resuspended in 300 µL of 50 mM sodium phosphate buffer pH 4.5 and subjected to overnight digestion with 30 µg/mL muramidase (cellosyl, Hoechst) at 37 °C. Muramidase digestion was stopped by incubation in a boiling water bath (5 min) [26]. The supernatants were mixed with 150 µL of 0.5 M sodium borate pH 9.5 and subjected to reduction of muramic acid residues into muramitol by sodium borohydride treatment (10 mg/mL final concentration, 30 min at RT). Samples were adjusted to pH 3.5 with phosphoric acid. HPLC analyses of muropeptides were performed on an Aeris peptide reverse-phase column (250 × 4.6 mm; 3.6 µm particle size) (Phenomenex, USA) and detected by absorbance at 204 nm, using a linear gradient of phosphate buffer/methanol. Muropeptides were quantified from their integrated areas using concentration standards as described [26]. The identity of individual D-Met- and D-Arg-muropeptides was established by MALDI-TOF (Autoflex, Bruker Daltonics).

Interaction between protein and muramidase-digested sacculi

1 mg of each His-tagged protein (BsrV, AlrV and AmpC) was immobilized on 1 mL of Ni-NTA resin (Qiagen) in 100 mM sodium phosphate buffer pH 7.5. Each protein was incubated with exactly equal amounts of muramidase-treated sacculi (120 µg of muropeptides, 1 mL) for 10 min at 25 °C, after which the eluted fraction was collected on a gravity-flow column. The column was then washed with 1 mL of 100 mM sodium phosphate buffer pH 7.5, and this fraction was also collected. Both fractions were quantified by HPLC analysis and the resulting areas were subtracted from the original input of muropeptides assayed, giving the percentage of muropeptides interacting with the proteins.
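The percentage of muropeptides retained by each immobilized protein is obtained by simple peak-area arithmetic from the HPLC runs. A minimal sketch of that calculation is given below; the function name and the example numbers are illustrative placeholders, not values from the study.

```python
def percent_retained(input_area, flowthrough_area, wash_area):
    """Estimate the percentage of muropeptides retained on an immobilized
    protein, given integrated HPLC peak areas (arbitrary units) for the
    original input, the flow-through fraction and the wash fraction."""
    recovered = flowthrough_area + wash_area          # material that did not bind
    bound = max(input_area - recovered, 0.0)          # remainder is taken as bound
    return 100.0 * bound / input_area

# Hypothetical peak areas, for illustration only
print(percent_retained(input_area=1000.0, flowthrough_area=350.0, wash_area=100.0))  # 55.0
```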
Racemase activity assays
For activity assays, L- and D-amino acids (LAAs and DAAs) were characterized in vitro by Marfey's reagent (FDAA) derivatization followed by HPLC, and by a D-amino acid oxidase (DAAO) assay, performed as described [9]. The product from a racemization reaction was derivatized with L-FDAA (1-fluoro-2,4-dinitrophenyl-5-L-alanine amide, Marfey's reagent, Thermo Scientific). First, an equal volume of 0.5 M NaHCO3 was added to the racemization reaction; then, 6 µL of this reaction was reacted with FDAA (10 mg/mL in acetone) at 80 °C for 3 min. The reaction was stopped with 2 N HCl and the samples were filtered. The products were separated by HPLC with a linear gradient of triethylamine phosphate/acetonitrile on an Aeris peptide column (250 × 4.6 mm; 3.6 µm particle size) (Phenomenex, USA) and detected by absorbance at 340 nm. To determine the inhibitory effect of the sacculus on BsrV's activity, 35 µg of sacculi were incubated for 5 min at 37 °C with BsrV and 4 mM L-Ala in 50 mM Tris-HCl pH 8. The product was revealed with DAAO [21]. The DAAO reaction was carried out by coupling 10 µL of the extract into 150 µL of a reaction containing: 100 mM sodium phosphate buffer pH 7, Trigonopsis variabilis D-amino acid oxidase (DAAO) (Komarova et al., 2012) 3.6 U/mL, horseradish peroxidase 1 U/mL, o-phenylenediamine (OPD) 2 mg/mL and FAD 3 mg/mL. This two-step assay permits the quantification of H2O2 (DAAO produces α-ketoacid, NH3 and H2O2 from DAAs). Horseradish peroxidase uses the H2O2 to oxidize OPD, leading to the production of 2,3-diaminophenazine. The reaction was incubated for 2 h at 37 °C and inactivated with 3 M HCl, giving a colorimetric product that can be measured at 492 nm. To determine the inhibitory effect of muropeptides on BsrV's activity, 0.1 mM of M4 (GlcNAc-MurNAc-Ala-Glu-DAP(diaminopimelate)-Ala), M3M (GlcNAc-MurNAc-Ala-Glu-DAP-Met), M3R (GlcNAc-MurNAc-Ala-Glu-DAP-Arg) and D-cycloserine were incubated with BsrV and 4 mM L-Ala for 5 min at 37 °C (1/40 ratio) in 50 mM bicarbonate buffer pH 9. In the case of the tripeptide and D-Ala-D-Ala, equal concentrations (amino acid vs. tripeptide/dipeptide) were used. The product was revealed with Marfey's reagent as described above.
Structural determination
Crystallization of tag-less BsrV was performed as previously described for the His-tagged proteins [9]. Briefly, a high-throughput NanoDrop ExtY robot (Innovadyne Technologies Inc.), the commercial Qiagen screens The JCSG+ Suite and The PACT Suite, and the Hampton Research screens Index, Crystal Screen and Crystal Screen 2 were used to obtain crystals by the sitting-drop vapor-diffusion method. The best crystals were obtained with 0.1 M Bis-Tris propane pH 7.5, 0.2 M sodium iodide, and 24% (w/v) PEG 3350. X-ray data collection was performed on the X06SA beamline at the SLS synchrotron-radiation facility in Villigen, Switzerland. Data sets were collected using a PILATUS 6M detector and were processed using XDS [27] and scaled using SCALA [28] from the CCP4 suite [29]. The structure was solved by the molecular replacement method with MOLREP [30] from the CCP4 suite, using the His-tagged version of BsrV (PDB code 4BEU) as the initial model. Refinement was performed with PHENIX [31] and modeling with Coot [32]. The stereochemistry of the models was verified using MolProbity [33]. A summary of the data collection and refinement statistics is given in Table 1.
Docking and molecular dynamics simulations
Standard MD simulations were run using the CUDA version of the sander module in the AMBER 12 suite of programs [34]. The resulting systems were simulated under the same conditions up to a total time of 10 ns, during which system coordinates were collected every 2 ps for further structural and energetic analysis. Binding energy evaluation and decomposition were achieved with the MM-ISMSA scoring function [35].
Statistical analysis
GraphPad Prism software (GraphPad Software Inc., San Diego, CA, www.graphpad.com) was used for all statistical analyses. To determine the significance of the data displayed in Fig. 3, an unpaired t-test was performed. P-values smaller than 0.05 were considered statistically significant, with the following ranking: p < 0.05 (*); p < 0.001 (***).
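As a minimal sketch of the significance test described here, the same unpaired t-test can be reproduced with SciPy; the replicate values below are placeholders for illustration only, not data from the study.

```python
from scipy import stats

# Hypothetical replicate measurements (arbitrary units): D-Ala produced by BsrV
# without muropeptide (control) and in the presence of an inhibitory muropeptide.
control   = [21.3, 19.8, 22.1, 20.5]
treatment = [ 8.9,  7.4,  9.6,  8.1]

t_stat, p_value = stats.ttest_ind(control, treatment)   # unpaired two-sample t-test
stars = "***" if p_value < 0.001 else "*" if p_value < 0.05 else "n.s."
print(f"t = {t_stat:.2f}, p = {p_value:.3g} ({stars})")
```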
Data availability
The atomic coordinates and structure factors for tag-less BsrV (PDB 7AGZ) have been deposited in the Protein Data Bank, Research Collaboratory for Structural Bioinformatics, Rutgers University, New Brunswick, New Jersey, USA (http://www.rcsb.org/). The rest of the data are contained within this manuscript.
BsrV C-terminal His-tag interacts with the enzyme catalytic channel
The crystal structures of His-tagged constructs of BsrV and the broad-spectrum racemase (Bsr) from Aeromonas hydrophila (Bsr Ah) have been reported earlier [9]. Both enzymes showed a remarkable facility to crystallize [9]. After exhaustive structural analysis of BsrV and Bsr Ah crystals, we realized that the C-terminal His-tag added to the racemases for purification purposes was tethering the dimers (Fig. S1). This effect was caused by the interaction of the C-terminal His-tags belonging to one dimer with the active site of adjacent dimers, thus increasing crystal contacts. Strikingly, multiple interprotein interactions were observed between Bsr's His-tags and several residues of its catalytic channel (Fig. 1). The interactions were particularly numerous for Bsr Ah, whose His-tag extended through the entry site, almost reaching the PLP in the catalytic site (Fig. 1, Fig. S1). In order to validate this, a tag-less BsrV protein was crystallized and its structure was solved at atomic resolution (1.52 Å, Table 1). As expected, superimposition of the His-tagged and tag-less BsrV structures revealed their strong similarity (RMSD: 0.350 Å), which is in concordance with their matching biochemical performance (Fig. S2) [8]. Besides minor structural changes (N207, Y208, Y246 and D300), resulting in a small decrease of around 1 Å in the entry site's aperture, no significant differences were observed in the conformation of the catalytic site (Fig. S2). The fact that Bsrs display an unusually large active site [9] capable of binding oligopeptides (the His-tag), together with their periplasmic localization, made us hypothesize that the stem peptides of the peptidoglycan (muropeptides) might be a more physiological interacting partner of BsrV.
BsrV binds to cell wall muropeptides
To test this hypothesis, we compared the capacity of BsrV and AlrV (the Ala-R from V. cholerae) to bind muropeptides. We isolated muropeptides from the V. cholerae ΔbsrV strain, given that digestion of the peptidoglycan of this mutant renders a homogeneous pool of canonical muropeptides and does not present any NCDAA-modified muropeptides (Fig. S3). BsrV retained 50-60% of the V. cholerae isolated canonical muropeptides, compared to AlrV, which retained ~25%. The latter binding appeared to be nonspecific, since similar retention was observed using a control protein (AmpC-His) that does not bind peptidoglycan (Fig. 2A). Remarkably, BsrV muropeptide binding increased to 60-75% when challenged with D-Arg/D-Met muropeptides (Fig. 2A, Fig. S3), suggesting a certain specificity of BsrV for NCDAA-edited peptidoglycan.
To assess the potential fitting of muropeptides bound to BsrV's active site, we generated docking models for several muropeptides, using as template the conformation exhibited by the His-tag that is stabilized within BsrV's active site in our crystal structure. We then performed molecular dynamics (MD) simulations of the BsrV and Alr Ec catalytic pockets occupied by canonical (disaccharide-tetrapeptide, GlcNAc-MurNAc-Ala-Glu-DAP-Ala; M4) and non-canonical (disaccharide-tetrapeptide with terminal D-Met or D-Arg instead of D-Ala; M3M or M3R, respectively) muropeptides (Fig. S3 and movies S1, S2 and S3). These analyses suggested that muropeptides can interact with BsrV's catalytic channel in a manner analogous to that observed in the crystal structures of His-tagged BsrV and Bsr Ah. The docked complexes show many putative strong polar and hydrophobic stacking interactions between residues of the BsrV active site and all the peptide stem residues. It is noteworthy that the sugar rings (NAG, NAM) of the muropeptides can also establish polar interactions with the loops of BsrV that shape the entry of its active site cavity (Fig. 2B,C). In contrast, docking/MD analyses suggested that muropeptides would encounter numerous steric clashes along the Alr Ah active cavity, including the entry site, rendering this interaction very unlikely (Fig. S3). MD simulations also suggest that non-canonical muropeptides (M3M and M3R) bind to the active site of BsrV more strongly than canonical muropeptides, establishing numerous hydrogen bonds and strong salt bridges (Fig. 2C) and resulting in a more stable conformation of M3M and M3R in the BsrV active site than of M4 (movies S1, S2 and S3). This is also consistent with the results from the binding assays (Fig. 2B,C, Fig. S3) and with BsrV's previously reported selective racemization of some non-canonical substrates (e.g., Met and Arg residues over Ala) [8].
BsrV is inhibited by NCDAA-modified muropeptides
Since amino acids in muropeptides do not exhibit free amino groups linked to chiral carbons, bound peptides seemed likely to function as non-racemizable competitive inhibitors. To test this possibility, we performed in vitro assays of BsrV's capability to racemize L-Ala in the presence or absence of different muropeptides (Fig. 3A). All monomeric muropeptides assayed caused a reduction (from ~20% to ~65%) in D-Ala production compared to control reactions without muropeptide. In addition, non-canonical muropeptides showed a higher degree of inhibition (~65% reduction in D-Ala production) than the canonical M4 (~20% reduction), while D-cycloserine and short peptides (dipeptide and tripeptide) in general inhibited the least (Fig. 3A). This result is likely due to the reduced number of potential interactions the small peptides can form compared to longer peptides, differentially affecting their stabilization within the active site cavity (Fig. 2B). Also, cross-linked dimers (D44) did not cause any detectable inhibition, suggesting that linear peptides are needed to compete for the active site entry. Collectively, these analyses suggest that the production of NCDAA-modified muropeptides, as a result of BsrV's racemization of LAAs, might also modulate its activity in vivo. To further explore this possibility, we assessed whether undigested V. cholerae sacculi also exhibit inhibitory properties (Fig. 3B). When BsrV was incubated with canonical and NCDAA-modified sacculi, we observed a significant reduction in BsrV-dependent D-Ala production in the presence of NCDAA-free sacculi (ΔbsrV peptidoglycan), which further decreased when sacculi containing NCDAAs (9% of total muropeptides) were used instead (Fig. S4) [10]. In contrast, AlrV's activity was not reduced by the presence of any type of sacculi, consistent with the results from the docking modeling and affinity assays (Fig. 3B).
BsrV binds polymeric peptidoglycan
Molecular models based on NMR studies suggest that peptidoglycan forms a right-handed helical structure with the peptide stems projecting out at 120° intervals [19]. To better understand the interaction between peptidoglycan and BsrV, we ran docking simulations of peptidoglycan fragments with the BsrV molecule (Fig. 4). Notably, the distance between the active sites of a BsrV dimer and their relative rotation fit well with the peptidoglycan fragment structure reported by Meroueh et al. (Fig. 4) [19]. In fact, in this model, the peptide stems of the peptidoglycan fragment are perfectly accommodated and stabilized within each of the active sites of the dimer. This precise molecular fit between the BsrV structure and two stem peptides radiating from the same strand (separated by one helix turn) lends additional support to the idea that BsrV activity may be regulated by its binding to macromolecular peptidoglycan. Furthermore, our data suggest that the extent of such regulation will vary according to the NCDAA content of the peptidoglycan, and thus inhibition will be maximal during the stationary phase, when NCDAA incorporation into the cell wall is completed [8,10,11].
Discussion
We observed that BsrV can bind to muropeptides and intact peptidoglycan, particularly those containing NCDAAs. Our modeling shows that the two active sites of a BsrV dimer can be simultaneously occupied by the peptide moieties of a single peptidoglycan strand, separated by one helical turn. Analyses of enzyme activity, coupled with our modeling assays, suggest that muropeptides can occupy BsrV's (but not AlrV's) catalytic site and thereby serve as competitive inhibitors. Thus, our findings raise the possibility that the production of NCDAAs by BsrV and related periplasmic broad-spectrum racemases is down-regulated when peptidoglycan contains sufficient levels of NCDAAs. Such downregulation might reflect global levels of peptidoglycan modification in the cell. Alternatively, it might also function to fine-tune the spatial allocation of NCDAAs by promoting their equal distribution throughout the peptidoglycan.
Negative feedback control of BsrV activity by non-canonically modified peptidoglycan seems reasonable given that NCDAAs reduce peptidoglycan synthesis [8,10,11] and that excessive concentrations of NCDAAs can be lethal [18]. According to our model, BsrV is synthesized under early stationary phase conditions in an RpoS-dependent manner [8]. Following the enzyme's translocation to the periplasm, production of NCDAAs from the corresponding L-forms ensues (Fig. 4). Since NCDAA incorporation into peptidoglycan appears to be constrained to active murein biosynthetic sites [20], local concentrations of NCDAA-modified muropeptides are likely to become very high, promoting their binding to, and inhibition of, BsrV. Inactivation of BsrV by NCDAA-modified muropeptides reduces local production of NCDAAs, establishing a negative feedback loop in which the products of BsrV (Fig. 4), NCDAAs, once incorporated into peptidoglycan, function as competitive inhibitors of BsrV, preventing over-production of NCDAAs that might ultimately be deleterious [18]. Moreover, in addition to the effect on V. cholerae, fine-tuning BsrV's activity may also have implications for the physiology of nearby organisms, as NCDAAs are known to impact a number of distinct cellular processes such as catabolism [12], biofilm formation [13], bacteria-bacteria interactions [14], microbiome biodiversity [15], and modulation of host immune cells and the immune cell response [16].
The ability of BsrV to interact with peptides introduces a number of intriguing additional possibilities for the regulation of broad-spectrum racemases. For example, short linear non-ribosomal peptides (NRPs), such as some secreted peptides involved in bacterial communication [21], might also interact with BsrV, either as regulators or as substrates. Given the impact of NCDAAs on a variety of cellular processes [14,15,22], bacteria may have evolved diverse ways to control their production and to regulate their spatiotemporal allocation.

(A) In exponential growth phase, V. cholerae does not express BsrV and, consequently, its peptidoglycan is composed only of canonical muropeptides. In the transition to stationary phase, V. cholerae expresses BsrV, an RpoS-dependent, periplasmic, multispecific amino acid racemase [8]. BsrV produces high (millimolar) concentrations of NCDAAs that accumulate in the periplasmic space and also pass to the extracellular medium [10]. (B) NCDAAs are incorporated into peptidoglycan in stationary phase cells. Peptidoglycan containing such modifications is a more potent inhibitor of BsrV than is unmodified peptidoglycan; thus, a negative feedback loop is generated to control BsrV's activity. (C) Ultimately, high levels of peptidoglycan modification may turn off the majority of BsrV, thereby preventing hypermodification and excessive accumulation of free NCDAAs. A detail of Fig. 4 is shown. CS1 and CS2: catalytic sites. (D) BsrV expression shuts down when growth resumes, preventing further production and incorporation of NCDAAs into peptidoglycan. (E) Molecular docking of the NMR peptidoglycan structure [19] with the BsrV dimer. Peptidoglycan is drawn in sticks with glycan chains colored in orange and peptide stems colored in magenta. BsrV active sites interact with the peptide moieties of the sacculus.

Collectively, our results open the door to using "à la carte" synthetic peptides as a tool to modulate DAA production by Bsr enzymes. Thus, the effect of DAAs on bacterial fitness and in biotechnology could be modulated by the use of diverse peptides that, in turn, would control Bsr activity.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
A Universal Live Cell Barcoding-Platform for Multiplexed Human Single Cell Analysis
Single-cell barcoding enables the combined processing and acquisition of multiple individual samples as one. This maximizes assay efficiency and eliminates technical variability in both sample preparation and analysis. Remaining challenges are the barcoding of live, unprocessed cells to increase downstream assay performance combined with the flexibility of the approach towards a broad range of cell types. To that end, we developed a novel antibody-based platform that allows the robust barcoding of live human cells for mass cytometry (CyTOF). By targeting both the MHC class I complex (beta-2-microglobulin) and a broadly expressed sodium-potassium ATPase-subunit (CD298) with platinum-conjugated antibodies, human immune cells, stem cells as well as tumor cells could be multiplexed in the same single-cell assay. In addition, we present a novel palladium-based covalent viability reagent compatible with this barcoding strategy. Altogether, this platform enables mass cytometry-based, live-cell barcoding across a multitude of human sample types and provides a scheme for multiplexed barcoding of human single-cell assays in general.
Results
MHC-I and sodium-potassium ATPase-subunits are broadly expressed across multiple human cell types. To facilitate robust barcoding of live human cells of different origin, we first identified cell surface proteins which were reported to be broadly expressed across different immune cell subsets, various organs21 and cancer cell lines22,23. Further requirements were high epitope abundance as well as the availability of an antibody probe for robust detection of the target. Based on these criteria, we conjugated antibodies against beta-2-microglobulin (b2m), as part of the MHC class I complex, as well as antibodies against the beta-3 subunit of the Na+/K+-ATPase (CD298) to heavy-metal isotopes for their use in mass cytometry (Fig. 1A). Next, we tested their expression on various cell populations, including immune cell subsets found in whole blood (Fig. 1B,C, see Table S1), as well as various cancer and non-immune cell lines such as leukemic (U937, Ramos, HEL, Jurkat, REH and THP-1), embryonic or stem cell-derived (293T, H9 human embryonic stem cells (hESCs) and NTERA) and carcinoma cell lines (A549, NCI-H460, HCT 116 and HeLa; Fig. 1D). As expected, b2m was robustly expressed on all major immune cell subsets found in human whole blood. Granulocytes displayed slightly lower but still considerable expression of b2m. Most cancer cell lines also expressed b2m, with the exception of low or absent expression levels on embryonic/stem cell lines and intermediate levels on a subset of leukemia cell lines (Fig. 1D, left).
CD298 was found to be expressed at intermediate levels on all identified immune cell subsets and at intermediate to high levels on the analyzed cell lines, including those which showed low to negative b2m staining (Fig. 1C,D; middle). Dying and/or dead cells might decrease surface expression of these molecules to varying extents (Fig. S1). Given these expression patterns, we reasoned that the combined staining of both molecules with antibodies conjugated to the same reporter isotopes would not only increase the number of available epitopes, resulting in augmented staining intensity, but also increase its robustness and facilitate barcoding of a broad range of human cell types.

(A) The two surface proteins b2m, as part of the MHC-I complex, and CD298, a subunit of the sodium-potassium pump, were selected as potential targets for live cell barcoding. (B) Human whole blood was subjected to red blood cell (RBC) lysis and subsequently stained with heavy-metal isotope-conjugated antibodies for mass cytometry (Table S1). Cells were pre-gated on live, single, CD45+ and CD235ab− events. The main immune lineages were then identified via the indicated gating scheme. (C) Expression of b2m (light blue, left), CD298 (magenta, middle) and combined expression values from b2m and CD298 using the same reporter isotope (dark blue, right) on immune populations as in B. (D) Expression of b2m (light blue, left), CD298 (magenta, middle) and combined expression values from b2m and CD298 using the same reporter isotope (dark blue, right) on various cancer cell lines. Shown is one representative experiment out of two.

In summary, while the wide variety of cell types tested exhibited variability in their expression of b2m and CD298, there was no instance where the abundance of both was low at the same time, thus making their combination ideal for molecular barcoding.
Monoisotopic cisplatin-conjugated antibodies are robust across analysis conditions. To maximize the compatibility of this approach with existing mass cytometry assays, heavy-metal isotopes which are not part of, and which do not interfere with, the typically employed lanthanide range are desirable for barcoding. One possibility is the conjugation of EDTA-chelated palladium to antibodies18, which provided a first extension of the lanthanide range for antibody labeling. However, given the mass response curve of current mass cytometers, palladium isotopes suffer from a low detection sensitivity. Alternatively, cisplatin-based conjugation of platinum to antibodies has been reported to possess superior signal intensities compared to EDTA-chelated palladium antibodies20. Consequently, we conjugated anti-b2m and anti-CD298 antibodies to four different platinum isotopes (194Pt, 195Pt, 196Pt and 198Pt) via direct, covalent binding of cisplatin (Fig. 2A). Platinum-conjugated antibodies retained their initial specificity, as shown by staining a mixture of CD45-positive (Jurkat) and CD45-negative (HeLa) cells with anti-CD45 antibodies. We did not detect any unspecific binding of platinum-conjugated anti-CD45 antibodies to CD45-negative HeLa cells (Fig. 2B), as had occasionally been observed with EDTA-chelated Pd antibodies (data not shown).
To assess the binding stability of cisplatin-antibodies to their target cells, cisplatin-antibody stained cells were fixed with 1.6% PFA and permeabilized using methanol (MeOH) before acquisition (Fig. 2C). Measured platinum-signal intensities from MeOH-permeabilized and non-permeabilized samples were comparable, indicating stable binding of platinum-antibodies to their target epitopes which is not majorly disrupted by standard cell processing methods. Further, we assessed the storage stability of platinum-antibody stained samples at 4 °C in PFA-based intercalation solution (Fig. 2D). We observed a slight decrease in signal intensity after prolonged sample storage, similar to what has been reported for lanthanide-conjugated antibodies24. Importantly, even after 21 days of storage at 4 °C, positive and negative populations could still be readily resolved. Lastly, to compare the sensitivity of the different platinum channels on our mass cytometer (CyTOF2), we stained samples with the same antibody clones conjugated to different platinum isotopes (Fig. 2E). Measured signal intensities across the four platinum channels were found to be very similar, indicating that these channels have equivalent potential for use in mass cytometry applications, including barcoding. As such, monoisotopic cisplatin antibody conjugates provide a practical means for live-cell barcoding and do not interfere with lanthanide-based target protein measurements.
Combining b2m and CD298 for live-cell barcoding of heterogeneous human samples. Having identified suitable target epitopes and conjugation procedures, we combined these approaches to demonstrate their applicability to live cell barcoding. In order to extend the number of possible barcode combinations, we used indium isotope-conjugated (113In and 115In) anti-b2m and anti-CD298 antibodies in addition to the four cisplatin-conjugated antibodies described above. Employing a 6-choose-3 scheme enables the barcoding of up to 20 samples with clear doublet-identifying ability17. PBMCs from healthy donors were distributed into 20 individual samples and barcoded by staining the individual samples with combined anti-b2m and anti-CD298 in three out of the six available isotopes (Fig. 3A). Barcoded cells were combined, surface-stained, fixed and MeOH-permeabilized before acquisition on a CyTOF2 mass cytometer. Barcode-positive and -negative populations were readily distinguishable using both cisplatin- and chelator-based conjugation methods (Fig. 3B). The combined sample was then loaded into existing MATLAB-based debarcoding software17 to automatically assign cells back to their initial identity (Fig. 3C). Up to 95% of input cells were assigned to their respective samples using appropriate debarcoding parameters (cutoff: 0.1; Mahalanobis distance: 30). Debarcoded samples displayed the expected staining patterns according to their assigned barcode (Fig. 3D,E). Next, we tested the reliability and feasibility of barcoding different sample types, including non-immune cell subsets, into one reaction vessel. We prepared individual samples from human PBMCs or human epithelial HeLa cells, performed live cell barcoding, combined all samples into one vessel and subsequently stained for CD45 surface expression (Fig. 3F,G). To test the overall performance after acquisition and debarcoding, we assessed the expression of CD45, a marker expected to be ubiquitously expressed by PBMCs but not by HeLa cells. Virtually all cells (>99.5%) assigned to PBMC samples were found to express CD45 and, vice versa, HeLa samples were not found to contain CD45+ cells (Fig. 3G). Together these data demonstrate that the combination of anti-b2m and anti-CD298 antibodies enables highly multiplexed, doublet-free live-cell barcoding of a variety of human cell types.
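A minimal sketch of the combinatorial scheme described here — six barcoding channels (two indium, four platinum), each sample positive in exactly three of them — can be written down directly; the channel list and sample ordering below are illustrative only.

```python
from itertools import combinations

# Six live-cell barcoding channels: two indium and four platinum isotopes,
# each carried by a combined anti-b2m / anti-CD298 antibody pair.
channels = ["113In", "115In", "194Pt", "195Pt", "196Pt", "198Pt"]

# 6-choose-3 scheme: every sample is stained in exactly three channels, so any
# event positive in fewer or more than three channels (e.g. a doublet spanning
# two barcodes) can be rejected during debarcoding.
codes = list(combinations(channels, 3))
print(len(codes))                       # 20 unique sample barcodes
for sample_id, code in enumerate(codes, start=1):
    print(f"sample {sample_id:2d}: " + " + ".join(code))
```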
A novel covalent palladium-based viability stain. Platinum, via cisplatin staining, is often used as a viability probe in mass cytometry because of its preferential accumulation in membrane-compromised cells and its covalent binding properties25. As our proposed barcoding strategy (Fig. 3) makes use of all currently available purified platinum isotopes for antibody labeling, we examined potential alternatives for covalent viability probes. Given its structural similarity to cisplatin, we investigated the use of dichloro-(ethylenediamine)palladium(II) (termed 'DCED-palladium') to identify membrane-compromised (dead) cells (Fig. 4A). Mixtures of live and heat-killed PBMCs were stained simultaneously with cisplatin and DCED-palladium for 5 min at RT (Fig. 4A). Dead cells were readily identifiable by their increased binding of cisplatin compared to live cells (Fig. 4B). Further, while cisplatin-low cells showed negative to low signal for DCED-palladium, we found cisplatin-high cells to have strong signal in the indicated palladium channels, following the naturally occurring abundance of the different isotopes (Fig. 4B). This correlation between cisplatin and DCED-palladium binding is further confirmed by directly analyzing platinum and palladium signal in a co-stained mixture of live and heat-killed cells (Fig. 4C). Again, dead cells were found to be positive for cisplatin as well as DCED-palladium.
Lastly, we tested whether the correlation between platinum-positive and palladium-positive frequencies holds true in various biological sample types. We routinely co-stained multiple samples, including PBMCs, digested tumor- and healthy-tissue samples and cancer cell lines, with both viability probes. These samples spanned a wide range of dead-cell frequencies and, furthermore, contained samples which were permeabilized for intracellular staining steps with either MeOH-based protocols or commercially available transcription factor staining kits. Importantly, DCED-palladium provided highly comparable dead-cell identification across all sample types as well as across the full range of dead-cell frequencies (Fig. 4D), thus indicating that it can be readily used on its own to replace cisplatin as a viability probe and thus allow the full use of platinum-conjugated antibodies in mass cytometry experiments (Fig. 4E).
Live cell barcoding enables the interrogation of tumor-immune composition and function.
Lastly, we employed the above-described methodology (Figs 3 and 4) to perform live cell barcoding of an exemplary, clinically relevant sample: a tissue biopsy of a lung carcinoma containing a heterogeneous mixture of immune and non-immune cells (Fig. 5A). First, the tissue sample was digested to prepare a single-cell suspension. We independently assessed the impact of enzymatic digestion on cellular b2m and CD298 surface expression and found that, under these conditions, cells retained sufficiently high surface expression for robust live cell barcoding (Fig. S2). Next, the sample was divided and activated for varying time periods (1-24 h) with different stimuli, including interferon-γ (IFN-γ), lipopolysaccharide (LPS) or phorbol 12-myristate 13-acetate (PMA) and ionomycin, resulting in a total of 20 different conditions. Differentially stimulated samples were live-cell barcoded, combined and stained with heavy-metal conjugated antibodies (Table S1). After acquisition and debarcoding, multiple immune cell subsets but also non-hematologic cells could be identified and directly interrogated for their expression of immunologically important surface markers (Fig. 5B). Additionally, we automatically identified different cell lineages via the SPADE algorithm26 (see also Fig. S3), giving an immediate overview of stimulation-induced changes in surface protein expression on immune as well as tumor cells. Fold changes of protein expression induced by the indicated stimuli were overlaid on selected parts of the tree structure (Fig. 5C).
Using our methodology, we confirmed the well-described upregulation of the activation-associated proteins HLA-DR and CD69 by PMA/ionomycin on various T cell subsets, as well as LPS-induced augmented expression of the co-stimulatory protein CD86 on antigen-presenting cells (APCs). Importantly, in addition to the changes detected in immune cells, interesting observations could be made on tumor cells simultaneously, including the well-described IFN-γ-induced upregulation of HLA class I expression27, downregulation of the anti-phagocytic protein CD47 (ref. 28) and upregulation of the programmed cell death receptor CD95, also termed FasR (ref. 29), on these cells (Fig. 5C). Together, this demonstrates the utility of the platform presented here to interrogate heterogeneous samples, such as primary human tumor and immune cell mixtures, to achieve a more integrated view of potential cellular interactions in a setting where limiting technical variability is crucial.
Discussion
Cellular barcoding approaches have been proposed for a variety of technologies, including single-cell sequencing30, antibody/sequencing hybrid technologies31, genetic barcoding for cell lineage tracing32, fluorescent barcoding for flow cytometry33 and, more recently, heavy-metal isotope-based barcoding in mass cytometry17-19,34.
In contrast to many previous methods, our approach does not require samples to be fixed or permeabilized prior to barcoding and surface staining, and it is thus especially suited for scenarios in which target molecules or downstream assays are fixation- or detergent-sensitive, or in cases in which this is unknown. Further, the above-described method is independent of CD45 expression, which has been proposed for live cell barcoding of PBMCs before18,19. This now allows barcoding of cells of non-immune origin, including tumor cells or stem cell populations. Additionally, within leukocytes, CD45 expression is variable and informative for cell lineage identification. Our CD45-independent barcoding therefore circumvents elaborate methods involving pre-barcode, non-saturating surface staining of CD45 (ref. 35) and enables easy incorporation of CD45 into the analysis panel.
Instead of CD45, we here made use of the broad expression of b2m, as part of the MHC-I complex, and of the CD298 subunit of the sodium-potassium ATPase. Both proteins have been shown to be broadly expressed across different types of samples, and biclonal barcoding using these two targets provides increased signal intensities and robustness against potential downregulation, e.g. on tumor cells36. Further, using this combination allowed us to confidently assign up to 95% of cells to individual samples, which is similar to or exceeds previously reported frequencies17.
Homologous protein complexes of the human MHC-I and sodium-potassium ATPases are also expressed by many commonly used model organisms, including mice and rats, and thus this method should be readily transferable to research questions in other species. Additionally, staining with a combination of anti-human b2m and CD298 antibodies could also serve to robustly identify human cells in various xenograft models23.
As aforementioned, ideal elemental isotopes for barcoding are outside of the lanthanide range typically reserved for the analysis panel. Therefore, we here employed four different platinum isotopes, 194Pt, 195Pt, 196Pt and 198Pt. Besides these, 190Pt and 192Pt occur naturally but with very low abundance and were therefore not available in sufficient enrichment grades; however, they could become obtainable in the future. Platinum isotopes were conjugated to antibodies via direct binding of cisplatin, which has been found to covalently react with cysteine residues37. Thus, conjugation does not require additional chelating agents and chelating reactions, making the labeling reaction economical, rapid and uncomplicated. Antibodies are labelled analogously to familiar protocols and no additional equipment is needed. Furthermore, the platinum-based barcoding ensures that there is no interference with the lanthanide range through isotopic impurity or oxide formation, thus minimizing interaction with the analytical channels for single cell quantification. At the same time, it should be noted that for individuals who have received platinum-based therapeutic agents (e.g., carboplatin or cisplatin), the cellular background of platinum should be assessed beforehand.
Since cisplatin is often employed as a live/dead probe in mass cytometry, alternative approaches have to be considered when using all four commonly available platinum isotopes for antibody-based staining. The use of a rhodium (103Rh) intercalator to label dead cells has been reported before 38 ; however, due to its non-covalent binding, signal intensity is lost in downstream fixation/permeabilization and wash steps, making its application less robust. Additionally, amine- and thiol-reactive derivatives of EDTA or 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) chelators have been used with non-biological metals to identify dead cells 39,40 . However, these reagents are expensive, chemically unstable, and require pre-formulation with the metal of interest. Instead, the DCED-palladium presented here provides straightforward identification of dead cells in mass cytometry experiments. DCED-palladium is inexpensive, easy to use and store, and provides good separation between live and dead cell populations. Similar to cisplatin, it is compatible with downstream sample permeabilization and washing steps. DCED-palladium therefore provides an attractive alternative to cisplatin as a viability reagent, even in scenarios where cisplatin-conjugated antibodies are not used for barcoding but instead to extend the analysis panel beyond the lanthanide range 20 . If needed, monoisotopic DCED-palladium could be synthesized in a relatively easy procedure 41 . Importantly, as with many reagents used in mass cytometry, DCED-palladium should be handled with care so that oral and dermal contact as well as inhalation are prevented.

Using only indium- and platinum-isotope-conjugated antibodies and employing the 6-choose-3 scheme described here, up to 20 samples can be barcoded and processed as one composite sample. However, it is easily possible to increase the maximum number of samples by including anti-CD298 and anti-b2m antibodies conjugated to other heavy metals, e.g. 89Y or 209Bi. Extending to such an 8-choose-4 scheme, up to 70 samples could be barcoded using this approach. As b2m and CD298 are expressed at high copy numbers per cell, the maximum number of multiplexing channels is virtually unlimited if one is willing to incorporate additional reporter metals.
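For illustration, the multiplexing capacity of such k-choose-n schemes follows directly from the binomial coefficient. The short Python sketch below (channel names are purely hypothetical and not taken from the study) enumerates the possible barcode combinations for the 6-choose-3 and 8-choose-4 configurations:

```python
from itertools import combinations
from math import comb

# Six barcoding channels: anti-b2m and anti-CD298 conjugates
# (channel names below are illustrative placeholders only).
channels_6 = ["b2m-194Pt", "b2m-195Pt", "b2m-196Pt",
              "CD298-198Pt", "CD298-113In", "CD298-115In"]

# 6-choose-3: every sample is stained with exactly 3 of the 6 channels,
# so any two valid barcodes differ in at least two channels and doublets
# (events carrying more than 3 positive channels) can be flagged.
codes_6 = list(combinations(channels_6, 3))
print(len(codes_6), comb(6, 3))   # 20 20

# Extending to 8 channels (e.g. adding 89Y- and 209Bi-conjugates) with a
# 4-positive scheme raises the capacity to C(8, 4) = 70 samples.
print(comb(8, 4))                 # 70
```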
As we have shown, the live cell barcoding platform described here is especially useful for the analysis of samples containing heterogeneous populations, such as mixtures of tumor cells and tumor-infiltrating leukocytes. Given the recent success of immunotherapies using checkpoint inhibitors or CAR T cells, a huge number of clinical studies is currently underway, and mass cytometry is poised to be employed to identify outcome-associated cellular signatures, follow disease progression and predict therapeutic responses 42 . Especially in such settings with large sample numbers, in which the reduction of technical variability is critical, our approach should provide significant benefit, and we thus hope that this study will contribute to the further implementation of mass cytometry in clinical research.
Methods
Samples. All human samples were obtained and experimental procedures were carried out in accordance with the guidelines of the Stanford Institutional Review Board (IRB). Specifically, fresh whole human blood in heparin collection tubes or leukoreduction system (LSR) chamber contents (Terumo BCT) were obtained from the Stanford Blood Center. Tissue specimens were obtained from the Stanford Tissue Bank. These samples are generally byproducts of blood donation procedures and normal surgical pathology workflows, the collection of which is governed independently by the Stanford IRB. Written informed consent was obtained from all donors. The experimental procedures/protocols in combination with the samples used in this study were deemed human subject research exempt under Stanford IRB protocol #42195 which was reviewed by Stanford IRB panel IRB-98.
PBMCs were isolated via Ficoll (GE Healthcare) density gradient centrifugation and cryopreserved in fetal bovine serum (FBS, Omega Scientific) supplemented with 10% DMSO (Sigma). After freezing, samples were stored in liquid nitrogen. For tissue, single-cell suspensions were prepared using the MACS tumor dissociation kit (Miltenyi Biotec) according to the supplier's recommendations. For experiments, cells were thawed by dropwise addition of pre-warmed RPMI-1640 (Life Technologies) supplemented with 10% FBS and 1x L-glutamine (Thermo Fisher) and washed twice. Whole blood was subjected to RBC lysis using 1x RBC lysis buffer (BioLegend), following the manufacturer's instructions. Cell numbers were determined using an automated cell counter (TC20, Bio-Rad).
Heavy-metal conjugation of antibodies. Conjugation of anti-human antibodies to heavy-metal isotopes of the lanthanide series and indium was conducted using the MaxPar X8 antibody-labelling kit (Fluidigm) following the manufacturer's recommendations. Where available, pre-conjugated antibodies were obtained from Fluidigm. Cisplatin conjugation of antibodies was based on a previous report 20 . In short, a 1 mM stock solution of isotopically enriched cisplatin in DMSO (custom order, Fluidigm) was pre-conditioned for 48 h at 37 °C [ref. 43 ] and afterwards stored at -20 °C. Antibody buffer exchange was performed by adding 100 μg anti-human b2m (clone 2M2) or anti-CD298 (clone LNH-94, both BioLegend) to a 50 kDa MWCO microfilter (Millipore) and centrifuging for 10 min at 12,000 g at RT, followed by a second wash with R buffer (Fluidigm). Antibodies were then reduced with 4 mM TCEP (Thermo Fisher) for 30 min at 37 °C and washed two times with C buffer (Fluidigm). 20 μl of the 1 mM monoisotopic cisplatin solution (equivalent to 20 nmol) was diluted in 1 ml of C buffer and added to the antibody in a 1.6 ml tube for 1 h at 37 °C for conjugation. Cisplatin-conjugated antibodies were then washed four times with 400 μl W buffer (Fluidigm), and the antibody was collected by two centrifugations (2 min, 1,000 g, RT) with 50 μl of W buffer with an inverted column into a new 1.6 ml collection tube. Protein content was assessed by NanoDrop (Thermo Fisher) measurement, antibody stabilization buffer (Candor Bioscience) was added to a final volume of at least 50 v/v %, and antibodies were stored at 4 °C.
Live cell barcoding. Individual samples of up to 3 × 10⁶ cells each were stained with combinations of platinum- or indium-conjugated anti-b2m (1 µg/ml) and anti-CD298 (2 µg/ml) in cell staining medium (CSM: PBS with 0.5% BSA and 0.02% sodium azide (all Sigma)) for 30 min at RT. As described previously 17 , a 6-choose-3 scheme was used to ensure doublet identification and removal. Cells were washed twice with CSM and combined into a single reaction vessel for downstream staining and acquisition.
Viability staining. DCED-palladium or cisplatin (both Sigma) was resuspended in DMSO, pre-conditioned for 48 h at 37 °C and stored at -20 °C. Pre-dilutions were made in PBS and stored at 4 °C for up to two weeks. Viability staining was performed by resuspending the sample in 1 ml of PBS and adding DCED-palladium to a final concentration of 500 nM, followed by incubation for 5 min at RT and washing with CSM.
Staining procedure. All surface staining was performed in CSM for 30 min at RT. Where indicated, cells were fixed with 1.6% PFA in PBS for 10 min at RT and permeabilized with MeOH for 10 min on ice. Before acquisition, samples were resuspended in intercalation solution (1.6% PFA in PBS, 0.02% saponin (Sigma) and 0.5 μM iridium intercalator or 0.5 μM rhodium intercalator (both Fluidigm)) for 1 h at RT or overnight at 4 °C.
Data acquisition. Before acquisition, samples were washed once in CSM and twice in ddH2O. All samples were filtered through a cell strainer (Falcon) and resuspended at 1 × 10⁶ cells/mL in ddH2O supplemented with 1x EQ four element calibration beads (Fluidigm) and finally acquired on a CyTOF2 mass cytometer (Fluidigm). Barcoded samples were acquired using the Super Sampler injection system (Victorian Airship).
Data pre-processing and debarcoding. Acquired samples were bead-normalized using MATLAB-based software as described previously 15 . Normalized data were then uploaded onto the Cytobank analysis platform 44 to perform initial gating on single, live cells based on their DNA content, their event_length parameter and the live/dead probe. Data were transformed with an inverse hyperbolic sine (arcsinh) transformation with a cofactor of 5. Where applicable, barcoded data were debarcoded using MATLAB-based software with the reported parameters 17 . Debarcoding parameters were adjusted to match the barcoding separation of the respective experiment.
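As a minimal illustration of the transformation step (the raw values below are invented placeholders, not data from the study), the arcsinh scaling with a cofactor of 5 can be applied as follows:

```python
import numpy as np

def arcsinh_transform(counts, cofactor=5.0):
    """Inverse hyperbolic sine transform commonly used for mass cytometry data."""
    return np.arcsinh(np.asarray(counts, dtype=float) / cofactor)

# Example: raw dual counts for one hypothetical marker channel
raw = [0, 1, 10, 100, 1000]
print(arcsinh_transform(raw))
```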
Data analysis and figure preparation. Basic gating and expression analysis were performed using the Cytobank environment. SPADE representations were created in Cytobank using the available implementation. Further downstream analysis was performed within the R software package 45 . Figures were prepared with BioRender software and Illustrator (Adobe).
Split-thickness skin graft as a treatment for voluminous vaginal fluid discharge after surgery due to vesico-intestino-vaginal fistulation: A case report and review of the literature
Highlights:
• Repeated pelvic surgery in irradiated tissue increases the risk for vaginal rupture.
• We present a rare case with heavy secretion from the ruptured vagina.
• Split skin grafting was used as an unusual treatment for this complication.
Introduction
Vaginal cuff dehiscence after hysterectomy is a rare but potentially morbid complication which can lead to vaginal evisceration with serious consequences, including peritonitis, necrosis, and sepsis. Treatment to replace the prolapsed structures and reconstruct the vaginal vault is required (Nezhat et al., 2018). In this report, we describe a rare case of vaginal dehiscence with vesico-intestino-vaginal fistulation, troublesome amount of fluid discharge and an unusual yet successful treatment.
Case
A 45-year-old patient, gravida II para II, was diagnosed with cervical cancer FIGO IIIA in 2008. Treatment included cisplatinum-based chemotherapy and 60 Gy external radiotherapy combined with brachytherapy. At follow-up three months after completed treatment, a residual tumor was detected and verified by biopsy. The patient accepted surgical treatment including radical hysterectomy and abdominoperineal rectal excision, saving the urinary bladder, which was performed in 2009. Histopathological examination showed a radically removed tumor with a smallest resection margin of 6 mm, and the surgery was considered curative.
Three months after surgery, the patient suffered from urinary leakage, both with and without a urinary catheter. Cystoscopy showed explicit irritation, interpreted as a side effect of the radiotherapy. Two months later, repeated cystoscopy showed a vesico-vaginal fistula, and another month later an intestino-vaginal fistula, indicating a complex vesico-intestino-vaginal fistulation. Extended surgery, including cystectomy, resection of the small intestine involved in the intestino-vaginal fistula, and construction of a urinary conduit, was conducted. The vaginal cuff could not be closed due to extensive fibrosis. The omentum was too thin to fill out the cavity. A muscular vertical rectus abdominis myocutaneous flap (VRAM) without the fasciocutaneous component or a gracilis flap could have been options during this operation but were unfortunately not considered. Postoperatively, the patient reported up to 1 liter of fluid secretion from the vagina. Urinary leakage was excluded, and the fluid was interpreted as secretion from the radiation-affected pelvic cavity (shown in Fig. 1), including lymphatic secretion through the open vaginal cuff. Given the hostile abdomen encountered at the previous operation, both the surgeons and the patient were hesitant about abdominal treatment with an omental flap and/or a myocutaneous flap. After thorough discussion and 2 years of expectant management, and in close collaboration between colo-rectal and plastic-reconstructive surgeons, the decision was made to transplant a split-thickness skin graft (STSG) onto the peritonealised small intestines above the disrupted vaginal cuff. To facilitate graft healing to the peritonealised small intestines, vacuum-assisted closure was placed inside the vaginal cavity and left for 4 days (Fig. 2). Graft take was good and the patient was successively mobilized.
During the following period, when several antibiotic treatments and irrigations were applied, the secretion diminished little by little. After one year, fluid secretion was less than 50 ml per day. Due to shrinkage of the STSG, the enormously large vaginal cavity and the large recesses on both sides had diminished in size.
Due to peripheral radiation effects, the patient's introitus had transformed into a rigid, constantly open ring that was painful when sitting and during hygiene and made sexual intercourse impossible. Fat grafting had previously been reported to have positive effects on radiation-damaged tissue (Coleman, 2006; Fukuba et al., 2020) and was therefore applied to the labia, vaginal walls, and perineum 9 years after the abdominal surgery. After injection of 200 cc of tumescence solution (Klein, 1988), adipose tissue was harvested with a blunt 4 ml cannula from the inside of the right thigh by means of manual liposuction. After centrifugation at 1800 rpm for 3 min, the harvested tissue stratified into a bottom layer of liquid, an oily top layer, and a middle layer consisting of intact adipocytes. The latter amounted to 20 ml and was injected dropwise with a Coleman-I7 cannula in a fan-like pattern in several layers into the radiation-damaged tissue of the vulva and vaginal wall. This procedure was repeated after 11 months with a favorable outcome. Elasticity improved markedly, the labia minora closed up in the midline, and the skin quality was softer and less prone to irritation. Today the patient is in good health without relapse and working half-time.
Discussion
Vaginal rupture and evisceration is a rare complication after hysterectomy and refers to the ejection of intraperitoneal content through the ruptured vaginal cuff. It typically starts with vaginal cuff dehiscence, defined as a separation of the anterior and posterior edges of the vaginal cuff (Nezhat et al., 2018).
Risk factors for vaginal dehiscence with evisceration include surgical factors such as the mode of incision and closure of the cuff, as well as patient-related factors affecting tissue quality and wound healing, including previous irradiation, vaginal atrophy, obesity, tobacco use, diabetes, and long-term use of corticosteroids or immunosuppressants. Chronic conditions such as asthma, chronic obstructive pulmonary disease, gastroesophageal reflux disease and chronic constipation may also increase the risk of evisceration, due to a chronically increased intra-abdominal pressure (Nezhat et al., 2018).
Vaginal fistulation has a tremendous impact on the patient's quality of life. Fistulas may occur as a consequence of advanced-stage disease, radiotherapy, trauma, infection, and especially as a complication of extended surgery after previous irradiation (Narayanan et al., 2009). It was reported that in 15 of 20 patients, the fistula occurred as a complication after radiotherapy alone or in combination with recurrent disease (Narayanan et al., 2009).
Vesico-vaginal and entero-vaginal fistulas are the most common types of fistulas seen in patients with gynecological malignancies. Early recognition and treatment of vaginal dehiscence with evisceration is critical. The closure of the vaginal defect can be challenging due to shrinkage, fibrosis and poor vascularization of the vagina and there is no consensus regarding the optimal approach (Narayanan et al., 2009). The crucial components are inspection of the bowel and mesentery, in case of evisceration, lavage of the peritoneal cavity, and closure of the vagina with reconstruction of the vaginal vault.
In the case reported here, no involvement of the urinary bladder was suspected at the time of diagnosis of the recurrence, and thus only a posterior exenteration was performed. Based on the high dose of radiotherapy the patient had received, it could in retrospect be argued that a total exenteration including cystectomy should have been performed instead to prevent complications.
There are different methods for vaginal reconstruction. In our case, the determining factor for the method of choice was the continuous discharge of a considerable amount of fluid. Since STSGs are meshed, fluid is drained through the holes instead of gathering under the graft, ensuring good contact between the STSG and the underlying tissue; a necessity for graft healing. VAC treatment, as used in our case, is a promising method for facilitating the healing of skin grafts, especially in vaginal reconstructions, where evacuation of fluid is a crucial part of the healing process (Hallberg and Holmström, 2003).
STSG is probably the simplest method for vulvo-vaginal reconstructions, both in patients with vaginal atresia and in patients with gynecological malignancies (de la Garza et al., 2009). An STSG is very thin, and graft take is usually good even on recipient beds with impaired vascularity. One feature of STSGs is later contraction, which in our case was an advantage, helping to shrink the large post-radiation vaginal cavity, but which in other cases could possibly result in stenosis and contraction of the neovagina (de la Garza et al., 2009; Hallberg and Holmström, 2003).
Other options for vaginal reconstruction are myocutaneous flaps (e.g. gracilis myocutaneous flaps, VRAM flaps, pudendal thigh flaps), a bowel segment (Abd El-Aziz, 2006), an omental J-flap with/without biological mesh, or full-thickness skin grafts (FTSG). A myocutaneous flap requires complex surgery in an area with impaired healing after previous surgery and radiation. It would have filled the void but would have been unlikely to solve the problem of fluid secretion. Interposition of a bowel segment implies complex surgery and tends to produce secretion itself. In addition, the patient was reluctant to undergo another abdominal operation. Taking all factors into consideration, none of the methods mentioned above was a realistic alternative in this case. FTSGs resist contracture and give a better aesthetic result than STSGs, but the supply of donor sites that can be closed directly is limited.
Conclusion
The extensive complication involving both urinary and intestinal fistulation described in our case is very rare and probably occurred as a consequence of repeated pelvic surgery in irradiated tissue. Surgical closure of the vaginal defect was not possible due to extensive fibrosis, and the heavy secretion from the cavity impeded delayed closure by secondary healing. Taking all relevant factors into consideration, STSG in combination with VAC offered a technically simple and safe solution.
Even though STSG is often utilized for vulvo-vaginal reconstructions, we have not found any report describing STSG in combination with VAC treatment as a successful treatment in patients with vaginal rupture.
Informed consent
Informed consent has been obtained from the patient.
Author contributions
All authors made substantial contribution to all of the following: (1) conception and design of the work, (2) drafting the work or revising it critically for important intellectual content, (3) final approval of the version to be published, and (4) agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Purification and Characterization of Antibodies in Single-Chain Format against the E6 Oncoprotein of Human Papillomavirus Type 16
In Human Papillomaviruses- (HPV-) associated carcinogenesis, continuous expression of the E6 oncoprotein supports its value as a potential target for the development of diagnostics and therapeutics for HPV cancer. We previously reported that the I7 single-chain antibody fragment (scFv) specific for HPV16 E6, expressed as an intrabody by retroviral system, could inhibit significantly the growth of cervical cancer cells in vitro and was even able to reduce tumor development in experimental HPV-related cancer models. Nevertheless, for the development of therapeutic tools to be employed in humans, it is important to achieve maximum safety guarantee, which can be provided by the protein format. In the current study, two anti-16E6 scFvs derived from I7 were expressed in E. coli and purified in soluble form by affinity chromatography. Specificity, sensitivity and stability in physiologic environment of the purified scFvs were demonstrated by binding studies using recombinant 16E6 as an antigen. The scFvs functionality was confirmed by immunofluorescence in cervical cancer cells, where the scFvs were able to recognize the nuclear E6. Furthermore, an antiproliferative activity of the scFvI7nuc delivered in protein format to HPV16-positive cell lines was observed. Our results demonstrate that functional anti-16E6 scFvs can be produced in E. coli, suggesting that such purified antibodies could be used in the diagnosis and treatment of HPV-induced malignancies.
Introduction
In the immune system, each antibody recognizes a specific target antigen through the antibody-antigen binding site, formed by three structurally hypervariable loops, called complementarity-determining regions (CDRs), in the light (VL) and heavy (VH) chain variable regions of an immunoglobulin (Ig).
Antibodies for biomedical purposes can be produced in different formats thanks to recombinant DNA technology [1]. One of the most used formats is the single-chain variable antibody fragment (scFv), comprising the Ig VH and VL domains linked by a flexible polypeptide linker. An scFv retains the specificity towards its target and has increased tissue penetrance, faster blood plasma clearance, and lower immunogenicity with respect to the parental antibody [2,3]. The ease of manipulation and expression on a large scale make the scFv an ideal antibody format for diagnostic and therapeutic applications, even in the oncologic field [4].
A number of human cancers are etiologically related to persistent infection with high-risk Human Papillomaviruses (HR HPVs). Worldwide, approximately 600,000 cases of HPV-related diseases occur per year, which still represents a serious public health problem. Among these, cervical cancer (CC) is predominant and represents the fourth most common cancer in women [5]. HPV16 and HPV18 are the most oncogenic HPV genotypes, causing about 60% and 15% of the CC cases, respectively [6,7]. Nevertheless, HPV-associated head and neck cancers have increased considerably over the past two decades, especially in developed countries [8,9].
Three different prophylactic vaccines are now available to prevent infections from 2 to 7 HR HPV genotypes [10,11]. In spite of their prevention efficacy and long-term protection, a real reduction of CC incidence will not occur before some decades [12]. In fact, HPV vaccines do not affect preexisting infections and do not prevent malignant progression. Also, a total coverage is difficult to achieve especially in developing countries where HPV is endemic. Therefore, development of therapeutic strategies for the treatment of HPV-associated tumor lesions is still a priority.
HPV carcinogenesis depends on the expression of two viral oncoproteins, E6 and E7, acting synergistically to immortalize and transform the infected cells [13,14].
In view of the p53 critical role in cell cycle and maintenance of host genome integrity, the E6-mediated p53 degradation is a crucial step in cancer development [15]. Accordingly, transgenic mice expressing E6 in the skin develop malignant skin tumors, whereas the E7 expression induces primarily benign hyperplasia [16].
Recently, we selected the scFvI7, specific for the HPV16 E6 oncoprotein (16E6), from the high-diversity murine naive Single Pot Library of Intracellular Antibodies (SPLINT) by Intracellular Antibody Capture Technology (IACT) [17]. The I7 sequence was provided with a tripartite nuclear localization signal (NLS) and expressed as I7nuc intracellular antibody (intrabody) in HPV16-positive cells. Such expression reduced cell proliferation while inducing apoptosis. This effect can be ascribed to p53 rescue. A remarkable hindering effect on tumor onset due to I7nuc expression was observed also in two preclinical mouse models for HPV-associated tumors [18].
We are currently evaluating the antitumor activity of I7nuc in a therapeutic setting for experimental HPV tumors. However, in view of either a safe therapeutic application in humans or a possible use in early diagnosis of HPV infection, characterization of the anti-16E6 scFv in protein format could be extremely valuable.
In this study, we report expression, purification, and characterization of the anti-16E6 scFvI7 and scFvI7nuc. Due to the selection method and previous experiments [18], we know that I7 and I7nuc are specific and can bind to the 16E6 when expressed in eukaryotic environment. We do not know whether the same binding capacity is retained by the scFv produced in prokaryotes as a recombinant protein. Therefore, E6 binding was evaluated by different approaches in parallel with scFv stability, which represents a key feature for both diagnostic and therapeutic applications. With this perspective, the antiproliferative activity of scFvs delivered to HPV-positive cells was also analyzed.
Construction of I7 and I7nuc pQE30 Plasmids and Bacterial Transformation.
The anti-16E6 scFv sequences were cloned into the pQE30 prokaryotic vector by different procedures. The I7 sequences were PCR-amplified (95 °C for 1 min, 50 °C for 1 min, and 74 °C for 1 min; 35 cycles) from the eukaryotic vector selected by IACT, using the anti-E6Dir and anti-E6Rev primer pair to introduce the BamHI and HindIII restriction sites, and were cloned into the pGEMT vector, yielding the I7pGEMT plasmid, which was then BamHI/HindIII digested. The purified BamHI/HindIII fragment was cloned into pQE30, yielding the I7QE plasmid.
The primer sequences (restriction sites underlined in the original publication) are as follows:
anti-E6Dir: 5′-GCGCGGATCCGATATTGTGATGACCCAGTC-3′
anti-E6Rev: 5′-GCGCAAGCTTGCGGCCGCAGTACTATCCAGGCCCAG-3′
The I7nuc sequences were PCR-amplified as above from the I7nucpGEMT plasmid previously described [18], using the sense primer anti-E6Dir reported above and the I7nucSacRev antisense primer to introduce the BamHI and SacI restriction sites:
I7nucSacRev: 5′-GCAGGTCGACCATATGGGAGAGCTCCCAA-3′ (SacI restriction site)
The purified BamHI/SacI fragment was cloned into pQE30, yielding the I7nucQE plasmid. The PCR products were gel-purified with the GFX PCR DNA and Gel Band Purification Kit (GE Healthcare, Buckinghamshire, UK).
For transformation, highly competent E. coli NEB Turbo (Biolabs, Ontario, CA) cells were used, and the positive clones were identified by PCR and enzymatic restriction. The chosen clone was then checked by sequence analysis.
Both I7QE and I7nucQE contain a MGRS 6xHis tag at their N-terminus for purification by Ni-NTA affinity chromatography (Ni-NTA resin, QIAGEN, Hilden, Germany).
Expression, Purification, and Refolding of scFv Proteins.
E. coli cells transformed with the pQE30 vectors were grown overnight (ON) in Luria-Bertani (LB) broth in the presence of 2% glucose and ampicillin (100 μg/ml) at 37 °C with shaking. The ON culture (25 ml) was used to inoculate 500 ml of LB broth and grown until OD600 reached 0.6. The culture was grown for a further 4 h after addition of 2 mM isopropyl β-D-thiogalactopyranoside (IPTG, bioWORLD, USA). Bacterial cells were harvested by centrifugation at 6000 rpm in a Sorvall GSA rotor for 20 min at room temperature (RT) and lysed in 25 ml of lysis buffer (100 mM NaH2PO4, 300 mM NaCl, 10 mM Tris-HCl, 5% glucose, and 1 mM DTT, pH 6.3) containing 6 M guanidine-HCl (EuroClone, Italy). The bacterial suspension was stirred for 30 min at RT, followed by intermittent ultrasonication on ice for 20 min (Vibra Cell, Sonics & Materials Inc., Newton, USA). Afterwards, the lysate was centrifuged at 10000 rpm in a Sorvall SS34 rotor for 30 min at 4 °C, and the supernatant was used for affinity chromatography purification on Ni-NTA resin according to the QIAexpressionist handbook (QIAGEN). In brief, 1 ml of prewashed resin was added to the supernatant in a 50 ml Falcon tube and incubated at RT for 30 min while mixing. The protein-resin mixture was recovered by centrifugation at 1500 rpm for 5 min and washed repeatedly with 250 ml of denaturing buffer (100 mM NaH2PO4, 10 mM Tris-HCl, and 8 M urea) at pH 6.3 until the OD280 of the wash flow-through was 0.002. The resin was then packed into a 1 ml polypropylene column and the captured scFv was eluted with denaturing buffer at pH 5.9 (4 fractions) and pH 4.5 (4 fractions, 1 column volume each). To evaluate scFv purity, an aliquot of each fraction mixed with SDS loading buffer (25 mM Tris-HCl pH 6.8, 5% β-mercaptoethanol, 2% SDS, 50% glycerol) was analyzed by 12% SDS-PAGE and Coomassie Brilliant Blue (CBB) staining.
The scFv-containing fractions were pooled and subjected to stepwise dialysis in Slide-A-Lyzer cassettes (10 kDa MWCO, Thermo Fisher Scientific, Rockford, IL) for 24 h at 4 °C against 500 ml of renaturing buffer (rB: 50 mM NaCl, 25 mM Tris-HCl, 0.25 M glucose, and 10% glycerol, pH 7.0) containing decreasing concentrations of urea (from 6 M to 0). The I7 and I7nuc protein concentrations were determined using the Bradford assay (Bio-Rad, Ontario, CA) with bovine serum albumin (BSA, Biolabs) as a protein standard.
ELISA Microtiter Binding Assay.
A microtiter 96-well plate (Nunc MaxiSorp, Thermo Fisher Scientific) was coated with 350 ng/well of recombinant 16E6 [19] or BSA in 50 mM carbonate/bicarbonate buffer, pH 9.4 (Thermo Fisher Scientific). The plate was saturated with 200 μl/well of 2% non-fat dry milk (NFDM, Bio-Rad) in PBS at 37 °C for 2 h and incubated for 1 h at 37 °C with 100 μl/well of scFvI7 or scFvI7nuc in PBS (2.5 μg/ml). An in-house anti-16E6 mouse polyclonal IgG, 1:1000 in 2% NFDM-PBS, was used as a positive control [19]. The wells were rinsed with PBS 3 times and incubated with 50 μl/well of mouse anti-V5 tag monoclonal antibody (mAb, Life Technologies, USA), 1:2500 in 2% NFDM-PBS, for 1 h at 37 °C. After repeated washes, wells were incubated with 100 μl of anti-mouse horseradish peroxidase- (HRP-) conjugated polyclonal IgG (Sigma-Aldrich, USA), 1:20000 in 2% NFDM-PBS, for 1 h at 37 °C. After a final wash, color was developed using the TMB substrate kit (Vector Laboratories, Burlingame, CA). The reaction was stopped after 2 h with 100 μl of 2 M H2SO4, and the OD450 was determined in a microtiter plate reader (iMark, Bio-Rad).
Slot Blotting.
Twofold serial dilutions in TBS of recombinant 16E6 [19] or BSA, in the range of 62.5-500 ng, were spotted onto a 0.2 μm Protran nitrocellulose membrane (Whatman, Dassel, Germany) assembled in the Bio-Dot SF microfiltration apparatus (GE Healthcare Life Sciences, Piscataway, NJ) according to the manufacturer's instructions. After extensive washing with TBS, the membrane was blocked in 5% NFDM-TBS, divided into strips, and then incubated ON at 4 °C with scFvI7 or scFvI7nuc in TBS (10 or 20 μg/ml). The in-house anti-16E6 mouse polyclonal IgG, 1:500 in 3% NFDM-TBS, was used as a positive control [19]. After washing 3 times in TBS, the membrane was incubated with the mouse anti-V5 tag mAb, 1:500 in TBS, followed by anti-mouse HRP-conjugated polyclonal IgG, 1:10000 in 3% NFDM-TBS. After extensive washing with TBS, the 16E6-scFv complexes were detected by incubation with ECL substrate solution and the BioSpectrum Imaging System using the UVP software.
For double detection of E6, consecutive incubations with rabbit anti-16E6 polyclonal IgG followed by Alexa-Fluor 594 goat anti-rabbit IgG and with purified scFv followed by mouse anti-V5 mAb and GAM-FITC were performed in the same well. Preimmune rabbit serum was used 1 : 300 as a negative control. All the antibodies were diluted in 1% NFDM-PBS. RedDot62 dye (Biotium, Inc., Hayward, USA) 1 : 400 in PBS was utilized as a nuclear marker. All samples were examined using a confocal microscope Leica TCS SP5 and processed with LAS AF version 1.6.3 software (Leica Microsystems). To prevent cross emission spectra, specific lasers (488 nm, 546 nm, and 633 nm) were activated in sequential mode to acquire the images.
Delivery of scFvs and Analysis of Cell Proliferation.
For scFv delivery, SiHa cells (5 × 10⁵) seeded in 35 mm dishes and grown in complete medium up to 50-60% confluence were washed and treated with 500 μl of DMEM containing the purified scFv at a final concentration of 2.5 μg/ml. Cells incubated with 500 μl of DMEM containing the same volume of rB served as controls. After 6 hours, scFv entry was detected by immunofluorescence using the mouse anti-V5 tag mAb as described above and visualized using a FLoid Cell Imaging Station microscope (Thermo Fisher Scientific, USA).
To analyze the effect of antibody delivery, SiHa cells were seeded (10,000 cells/well) in a 96-well plate (Corning Inc., USA). After 24 hours, cells were washed and the medium was replaced with 50 μl of DMEM containing the scFvs (2.5 μg/ml in rB) or rB alone as a control. To exclude any possible interference by impurities contained in the antibody preparation, SiHa cells were also incubated with 50 μl of the same DMEM/rB solution previously scFv-depleted by protein A incubation. After 6 hours at 37 °C, the different solutions were replaced with complete DMEM. Cell viability was determined 48 hours after delivery by MTS assay (CellTiter 96 AQueous One Solution Cell Proliferation Assay, Promega, Madison, USA) and densitometric analysis according to the manufacturer's instructions. Alternatively, 48 h after scFv delivery, cell survival was analyzed by colony forming assay (CFA) as previously described [18].
Statistical Analysis.
Data were expressed as the mean ± standard deviation (SD). Statistical analysis was performed using Student's t-test for unpaired data, and p values > 0.05 were considered not significant.
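As a minimal sketch of the comparison described above (the readings below are invented placeholders, not the study's measurements), an unpaired Student's t-test can be computed as follows:

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical viability readings (arbitrary units) for control and treated wells
control = np.array([1.02, 0.97, 1.05, 0.99])
treated = np.array([0.58, 0.66, 0.71, 0.62])

t_stat, p_value = ttest_ind(control, treated)  # unpaired, two-sided by default
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("significant" if p_value <= 0.05 else "not significant")
```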
Design, Cloning, and Prokaryotic Expression of Anti-16E6 scFvs.
The I7 sequence was amplified with specific primers from the plasmid originally selected by IACT and cloned, via the pGEMT plasmid, into the BamHI/HindIII restriction sites of the pQE30 vector, yielding the I7QE plasmid.
The I7nuc sequence was amplified with specific primers from the I7nucpGEMT plasmid previously described [18] and cloned into the pQE30 BamHI/SacI restriction sites, yielding I7nucQE. The plasmids obtained were verified by sequencing.
The scFvI7 and scFvI7nuc proteins expressed from the plasmids described are schematically represented in Figure 1(a).
The expression of scFvI7 and scFvI7nuc in E. coli was optimized on a small scale through a time-course study under 2 mM IPTG induction. The scFv product present in the bacterial pellet was evaluated by SDS-PAGE before and after IPTG induction. Although a faint band corresponding to the scFvI7 (29 kDa) and scFvI7nuc (34 kDa) molecular weights was already present before IPTG induction, the proteins accumulated rapidly and reached maximum expression at 4 h, which was the time chosen for scFv production on a larger scale (data not shown).
Different purification procedures were preliminarily tested to obtain the scFvs in a soluble form and with a high degree of purity. Neither protein could be purified using a native protocol because of contaminants present in the soluble fractions. Conversely, a high degree of purity was achieved using a denaturing protocol as described in Materials and Methods. Elution was first attempted at pH 5.9 and then continued at pH 4.5. For both scFvs, elution was virtually absent at pH 5.9 and abundant at pH 4.5, which allows for a high degree of 6xHis-tagged protein purification (QIAexpressionist). Figure 1(b) shows the results obtained during scFvI7nuc purification. The purified scFvs were subjected to refolding by stepwise dialysis in decreasing concentrations of urea and analyzed by 12% SDS-PAGE and CBB staining (Figure 1(c)). For both scFvs, the total yield ranged from 1 to 4 mg/L of bacterial culture.
Determination of Specificity and Sensitivity of scFvI7 and scFvI7nuc towards 16E6.
The specificity of scFvI7 and scFvI7nuc was analyzed by ELISA using the recombinant 16E6. To perform the assay, scFvI7 and scFvI7nuc at 2.5 μg/ml were added to the wells coated with 16E6 or BSA. Both scFvs bound to their antigen specifically, even though the scFvI7 signal revealed in ELISA was weaker. Interestingly, we observed similar binding profiles for scFvI7nuc and an anti-16E6 mouse polyclonal IgG (Figure 2(a)).
To analyze the sensitivity of the anti-16E6 scFvs by slot blot analysis, the recombinant 16E6 and BSA were applied in serial dilutions to a nitrocellulose membrane and probed with two identical concentrations of purified scFvI7 or scFvI7nuc. ScFvI7 at a concentration of 10 μg/ml was able to detect as little as 125 ng of E6 per slot, showing sensitivity similar to that of the polyclonal antibody, while no signal was observed in the BSA slots (Figure 2(b)). Similar results were obtained with scFvI7nuc (data not shown).
Determination of scFv Stability In Vitro.
An adequate resistance to physiologic temperature is essential for the successful employment of the scFvs. Therefore, we investigated the thermal resistance of the scFvs by analyzing in ELISA the binding to their antigen after different incubation times at 37 °C in the presence of 0.2% HSA to mimic a physiologic environment. As shown in Figure 3(a), while the scFvI7 binding activity to E6 remained constant over the time of analysis (24 h), the scFvI7nuc binding activity decreased to 33% after only 8 h of incubation. The different degradation profiles of the two proteins were confirmed by Western blotting, where incubation of the scFvs with HSA at 37 °C caused a gradual but sharp decrease in the amount of scFvI7nuc with respect to scFvI7, which was clearly reduced only after 1 week (Figure 3(b)). The reason for such a difference in stability between the two scFvs is not clear, as they differ from each other only in the presence of the myc-tag and tripartite NLS in scFvI7nuc. However, the presence of these peptide sequences raises the pI from 9.45 to 9.83, suggesting that the increase in repulsive forces due to positively charged amino acids could alter the conformational stability of the protein [20]. Our findings are in agreement with a paper that analyzed the involvement of the NLS in protein stability in vitro, showing that a highly basic NLS can severely compromise scFv expression from the host parental vector [21].
Analysis by Immunofluorescence and Confocal Microscopy of scFvI7 and scFvI7nuc Binding to E6 in HPV16-Positive SiHa Cells.
The ability of the purified scFvs to recognize the endogenous 16E6 was analyzed in the human HPV16-positive SiHa cells by immunofluorescence and confocal microscopy.
The purified scFvI7nuc was able to detect endogenous E6 expressed in the nucleus of SiHa cells (Figure 4(a)). As a positive control, we used an anti-16E6 rabbit polyclonal IgG [19]. Surprisingly, confocal microscopy analysis showed that while the polyclonal Ab (red) could detect E6 in both the nucleus and cytoplasm of SiHa cells, the purified scFvI7nuc (green) was able to detect mainly a nuclear form of E6. ScFvI7 behaved similarly to scFvI7nuc (data not shown). To exclude the possibility that the observed subcellular distribution could depend on differential E6 expression in unsynchronized cell cultures, we analyzed the E6 localization using the polyclonal Ab and each scFv in turn in the same well. We could observe overlapping of red and green staining (yellow) only in the cells showing particularly high intranuclear E6 levels, while in most cells scFvI7nuc decorated E6 in the nucleus (Figure 4(b)). In this regard, we can speculate that our scFvs recognize an E6 epitope that is exposed only by the intranuclear E6, while the polyclonal Ab, reasonably recognizing several epitopes, would be able to detect all the E6 forms. Our findings are consistent with the observation that the HR HPV E6 proteins exist in monomeric and oligomeric forms, which expose different conformational epitopes. Interestingly, other authors showed a diffuse distribution of E6 homo- or hetero-oligomers in HPV-transformed cells, while the monomeric oncoprotein was present mainly in the cell nucleus [22].
It is known that splicing events regulate the translation of full-length and truncated forms of E6 in HR HPV-infected cells. Such isoforms retain different functions [23] and show different intracellular distributions, the full-length form being expressed preferentially in the cell nucleus [24,25]. However, regardless of the nature of the nuclear E6 recognized by the scFvs, it could represent the biologically active protein, since the I7nuc intrabody expression, by perturbing the E6 interactions with cellular targets, resulted in decreased proliferation and survival of HPV16-positive cells [18].
Cellular Uptake of scFvI7nuc Protein.
We have recently shown that intracellular I7nuc expression hampers the development of HPV16-positive tumors in animal models [18].
In view of potential therapeutic applications, the study of the impact of anti-E6 antibodies delivery to SiHa cells is of particular interest.
It is well known that antibodies are poorly transported across cell membranes, and different delivery strategies are being investigated, with particular regard to the safety necessary for clinical use. Such strategies include vehiculation by liposomes, nanoparticles, and extracellular vesicles [26], as well as enhancement of the isoelectric point (pI) of a protein molecule by fusion to cationic peptides known as cell-penetrating peptides or by chemical derivatization of surface carboxyl groups, generating primary amino groups [27]. In particular, protein cationization has been proven to be a simple and effective method to deliver functional antibodies into cells [28]. Since our scFvs have an intrinsically high pI, we explored the possibility of delivering them directly into SiHa cells. After 6 hours of incubation with the antibody solution or control solution, scFv entry was checked by immunofluorescence. As shown in Figure 5(b), scFvI7nuc uptake was efficient, as virtually all the cells were fluorescent. However, different from the intranuclear distribution of I7nuc expressed as an intrabody [18], the localization of the endocytosed scFvI7nuc was mainly cytoplasmic and only faintly nuclear. Although we do not know whether the limited nuclear localization observed is ascribable to the experimental conditions or to the documented instability of scFvI7nuc at 37 °C, this result demonstrates that the endocytosed antibody retains the ability to enter the nucleus.
In contrast to scFvI7nuc, scFvI7 entered the cells only scarcely (data not shown). We believe this could be ascribed to the different pIs of the two proteins, suggesting that even a slight difference in this parameter might affect the interaction with the cell membrane, thus precluding adsorptive-mediated endocytosis of the molecule.
To study the effect of scFvI7nuc on cell viability, SiHa cells were observed at different times after delivery. Interestingly, after ON incubation, scFvI7nuc-receiving cells appeared to be suffering and were fewer in number than the control cells (data not shown). To confirm this observation and investigate the potential antiproliferative effect of scFvI7nuc, we analyzed the impact of antibody administration on the proliferation and survival of HPV16-positive cells by MTS assay. At 48 h after uptake, we observed a sharp decrease, in the range of 30-60%, of formazan conversion in cells treated with scFvI7nuc with respect to the control cells. Instead, no reduction of cell viability was observed after treatment with either rB or the scFv-depleted rB solution in DMEM. This finding confirms that I7nuc in protein format maintains a biological activity similar to that associated with I7nuc intrabody expression [18]. The antiproliferative effect was also confirmed by CFA performed 48 h after scFvI7nuc administration, which showed a 72% reduction in colony number in treated cells (67 ± 14 colonies per 1,000 plated cells) with respect to untreated SiHa cells (240 ± 10 colonies per 1,000 plated cells).
Conclusions
The aim of this study was to produce and characterize E6-specific scFvs as potential tools for the therapy of HPV16-associated lesions.
We reported successful cloning, expression, and purification of single-chain antibody fragments directed towards 16E6. We demonstrated the ability of the purified antibody fragments to detect 16E6 in different assays. First, we used ELISA and slot blotting to show the interaction of scFvI7 and scFvI7nuc with the recombinant 16E6. Next, we confirmed by immunofluorescence the ability of both scFvs to bind to E6 in immortalized HPV16-positive cells. Furthermore, we checked for stability of the scFv molecules as an essential feature for future applications. Lastly, we verified the ability of scFvI7nuc protein to hamper viability and proliferation of HPV16-positive cells in vitro.
Our results demonstrate that the purified scFvs retain specificity, sensitivity, and sufficient stability for detection of the E6 oncoprotein in vivo and could pave the way for the development of novel, safe therapeutic tools specifically targeting the HPV16 oncoprotein.

Figure 5: Internalization of the scFvI7nuc protein in SiHa cells. SiHa cells were incubated with DMEM containing rB with (I7nuc) or without (rB) the scFvI7nuc. After 6 hours of treatment, the internalized antibody was visualized using the mouse anti-V5 tag mAb followed by a fluorescein-conjugated secondary antibody. Microscopy images (a) and immunofluorescence images (b) show the scFvI7nuc internalization into SiHa cells. Images were captured using a FLoid Cell Imaging Station microscope (Thermo Fisher Scientific, USA). Magnification: 600x.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Sparse Aperture Masking of Massive Stars
We present the earliest results of our NACO/VLT sparse aperture masking (SAM) campaign to search for binarity in a sample of 60 O-type stars. We detect Delta Ks<5 mag companions for 20-25% of our targets with separations in the range 30-100 mas (typically, 40 - 200 A.U.). Most of these companions were unknown, shedding thus new light on the multiplicity properties of massive stars in a separation and brightness regime that has been difficult to explore so far. Adding detections from other techniques (spectroscopy, interferometry, speckle, lucky imaging, AO), the fraction of O stars with at least one companion is 85% (51/60 targets). This is the largest multiplicity fraction ever found.
Introduction
With masses ≳ 16 M⊙, massive stars of spectral type O are among the brightest and most luminous stars in galaxies. One of their most striking properties is their high multiplicity rate (for recent reviews, see Sana & Evans 2011, and Gies, this volume): above 40% for visual systems (Turner et al. 2008; Mason et al. 2009) and up to ∼60% for spectroscopic binaries (Mason et al. 2009; Sana & Evans 2011). In nearby clusters, at least 75% of the massive stars are part of binary or higher-multiplicity systems (Sana et al. 2008, 2009). The multiplicity fraction and the distribution of the binary parameters (periods, mass ratios, eccentricities) are among the few observable quantities that can help constrain the formation and early dynamical evolution of these objects (see e.g. Kratter et al., this volume) and can potentially discriminate between the various scenarios (see e.g. Zinnecker & Yorke 2007, for a review). However, the census of the properties of the massive star population remains incomplete.
Early results
In March 2011, we observed a sample of 60 O stars with K < 7.5 mag with the SAM mode of NACO, providing almost bias-free detection up to a flux contrast of 100 in the range 30-200 mas. Under the adopted configuration (512×512 windowing), the NACO field of view extends over 6"×6" and provides simultaneous, AO-corrected imaging of the surroundings of the targets. Our preliminary results are:
- Multiplicity fraction: 20-25% of our targets have a very close companion detected by SAM (Fig. 1). Most of these detections are new. This fraction increases to over 50% if we include the wider pairs seen in the NACO field of view. Adding the results from other high-angular resolution imaging techniques (speckle, lucky imaging, AO) and from spectroscopy, only nine stars have no companion at all (among which two are known runaways).
- Separation distribution: Fig. 2 (left panel) shows the cumulative number distribution of the measured separations in the range 30-6,000 mas for ∆mag < 5. The distribution is clearly double-peaked, with an overabundance of pairs between 30 and 100 mas and between 1" and 6".
- Brightness ratio distribution: In these two separation ranges, the ∆mag distributions show different properties (Fig. 2, right panel), as confirmed by a Kolmogorov-Smirnov test at the 0.01 significance level. The wide pairs are dominated by fainter (most likely lower mass) companions, while the distribution is almost uniform for the very close pairs. This suggests that the two populations have different natures.

Figure 2 caption: Left panel: the separation distribution shows two preferred ranges, from 30 to 100 mas and from 1" to 6". Right panel: cumulative distribution function (CDF) of the magnitude differences in the two preferred separation ranges. Only pairs with ∆mag < 5 are considered to limit the impact of the observational biases.
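As a minimal illustration of the statistical comparison described above (the ∆mag values below are invented placeholders, not the survey measurements), a two-sample Kolmogorov-Smirnov test can be run as follows:

```python
import numpy as np
from scipy.stats import ks_2samp

# Placeholder magnitude differences (mag) for the two separation regimes;
# these arrays stand in for the measured Delta-Ks values of close
# (30-100 mas) and wide (1"-6") pairs.
dmag_close = np.array([0.4, 1.1, 1.8, 2.5, 3.2, 4.0, 4.6])
dmag_wide  = np.array([2.9, 3.4, 3.8, 4.1, 4.3, 4.6, 4.9])

stat, p_value = ks_2samp(dmag_close, dmag_wide)
# Reject the hypothesis of a common parent distribution if p < 0.01,
# mirroring the 0.01 significance level quoted in the text.
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.4f}")
```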
Perspectives and conclusions
The SAM mode at NACO/VLT offers a new observing window to study massive binaries, allowing us to probe efficiently the short angular separation/high contrast regime. Future work involves (i) converting observational parameters into physical quantities, and (ii) investigating whether observational biases can explain the lack of companions in the 0.1"-1.0" separation range.
The mining of toxin-like polypeptides from EST database by single residue distribution analysis
Background: Novel high-throughput sequencing technologies require permanent development of bioinformatics data processing methods. Among them, rapid and reliable identification of encoded proteins plays a pivotal role. To search for particular protein families, amino acid sequence motifs suitable for selective screening of nucleotide sequence databases may be used. In this work, we suggest a novel method for simplified representation of protein amino acid sequences, named Single Residue Distribution Analysis, which is applicable both for homology search and database screening.
Results: Using the procedure developed, a search for amino acid sequence motifs in sea anemone polypeptides was performed, and 14 different motifs with broad and low specificity were discriminated. The adequacy of the motifs for mining toxin-like sequences was confirmed by their ability to identify 100% of toxin-like anemone polypeptides in the reference polypeptide database. The employment of the novel motifs for the search of polypeptide toxins in the Anemonia viridis EST dataset allowed us to identify 89 putative toxin precursors. The translated and modified ESTs were scanned using a special algorithm. In addition to direct comparison with the motifs developed, putative signal peptides were predicted and homology with known structures was examined.
Conclusions: The suggested method may be used to retrieve structures of interest from EST databases using simple amino acid sequence motifs as templates. The efficiency of the procedure for directed search of polypeptides is higher than that of most currently used methods. Analysis of 39939 ESTs of the sea anemone Anemonia viridis resulted in identification of five protein precursors of earlier described toxins, discovery of 43 novel polypeptide toxins, and prediction of 39 putative polypeptide toxin sequences. In addition, two precursors of novel peptides presumably displaying neuronal function were disclosed.
Background
Expressed sequence tag (EST) analysis is widely used in molecular biology. This analysis comprises the transcriptome of a given tissue at a given time. These data are deposited in a specialized resource at the National Center for Biotechnology Information (NCBI) -dbEST [1]. The EST databases are used to address different problems [2][3][4][5][6].
The EST database analysis requires the development of novel methods and software for data processing. The standard procedure includes processing of the biological material, production of clones, construction of libraries, and data analysis, from grouping in contigs to gene annotation and microarray design [7]. Special program modules facilitating different stages of analysis, such as those for preprocessing of data [8][9][10] and software for combining sequences into contigs and annotating them, have been developed [11][12][13]. To improve the quality of initial data processing, the results of different scanning methods can be combined: homology search with a nucleotide consensus sequence, homology search with deduced protein sequences, and the use of reference databases of known organisms [14][15][16][17].
The general bioinformatics strategy for database analysis remains the same: the variety of diverse crude sequences, combined by cluster analysis into contigs, is subjected to alignment search tools and functional classification by gene ontologies. This gives good results, although it is not always optimal. Earlier, analysis of the EST database from spider venom glands showed [18] that the conventional approach, including the preprocessing of the original data and formation of contigs, decreased the efficiency of identification of rare polypeptide toxins. The recommended search procedure of scanning translated sequences against characteristic toxin structural motifs proved more effective. Another alternative consists in the use of search queries created from the alignment of known protein families for database screening. Thus, 83 new peptides were found, which were not earlier discovered in the EST databases of different aphid species [19]. A family of new proteins from corals with a Cys-rich beta-defensin motif was identified as well [20].
Identification of short polypeptides in EST datasets is especially challenging since they may be aligned only with highly homologous proteins. They are synthesized as precursors, which are consequently processed into mature polypeptides. The enzymes involved in maturation recognize specific regulatory amino acid motifs, which help to identify precursor proteins in EST databases [18,19,21].
Polypeptide toxins from natural venoms are of considerable scientific and practical interest. They may be used for designing drugs of new generation [22]. Venom of a single spider contains hundreds of polypeptides of similar three-dimensional structure but divergent biological activity. In toxins, the mature peptide domain is highly variable, while the signal peptide and the propeptide domain are conserved [23,24]. The specificity of action on different cellular receptors depends on the unique combination of variable amino acid residues in the toxin molecule. Using a common scaffold, venomous animals actively change amino acid residues in the spatial loops of toxins thus adjusting the structure of a novel toxin molecule to novel receptor types. This array of polypeptide toxins in venoms is called a natural combinatorial library [25][26][27].
Homologous polypeptides in a combinatorial library may differ by point mutations or deletions of single amino acid residues. During contig formation, such mutations may be treated as sequencing errors and be ignored. Our method is devoid of this limitation. Instead of annotating the whole EST dataset and searching for all possible homologous sequences, we suggest treating the databank as a "black box" from which the necessary information can be recovered. The criterion for selecting the necessary sequences in each particular case depends on the aim of the research and the structural characteristics of the proteins of interest.
To query the EST database and to search for structural homology, we suggest using single residue distribution analysis (SRDA), earlier developed for the classification of spider toxins [28]. In this work, we demonstrate the simplicity and efficacy of SRDA for identifying polypeptide toxins in the EST database of the sea anemone Anemonia viridis.
SRDA
In many proteins, the positions of certain (key) amino acid residues in the polypeptide chain are conserved. The arrangement of these residues may be described by a polypeptide pattern, in which the key residues are separated by numbers corresponding to the number of non-conserved amino acids between them (see Figure 1).
For successful analysis, the choice of the key amino acid is of crucial importance. In polypeptide toxins, the structure-forming cysteine residues play this role; for other proteins, other residues, e.g. lysine, may be just as important (see Figure 1). Sometimes it is necessary to determine the distribution of specific residues not over the whole protein sequence but only over the most conserved or otherwise interesting fragments. It is advisable to start key residue mining in training data sets of limited size. Several amino acids in the polypeptide sequence may be selected for pattern construction; however, in this case the polypeptide pattern becomes more complicated, and if more than three key amino acid residues are chosen, analysis of their arrangement becomes too complex. It is also necessary to know the positions of breaks in the amino acid sequences corresponding to stop codons in protein-coding genes. Figure 1 clearly demonstrates that the distribution of Cys residues in the sequence analyzed by SRDA("C") differs considerably from that of SRDA("C."), which takes termination symbols into account. For scanning the A. viridis EST database, the positions of termination codons were always taken into consideration.
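To make the conversion step concrete, the following minimal Python sketch collapses an amino acid sequence into an SRDA pattern for a chosen set of key symbols. The toy sequence and the boundary convention (leading and trailing residue counts included in the pattern) are our own assumptions; the paper only specifies the counts between key residues and implements the procedure in VBA (additional file 2).

```python
def srda(sequence, keys="C."):
    """Collapse a sequence into an SRDA pattern: key symbols are kept verbatim,
    runs of all other residues are replaced by their length."""
    key_set = set(keys)
    parts, run = [], 0
    for residue in sequence:
        if residue in key_set:
            parts += [str(run), residue]
            run = 0
        else:
            run += 1
    parts.append(str(run))
    return "".join(parts)

# Hypothetical toy sequence; '.' denotes a translation stop.
seq = "MKAICSTLLCGA.KQCW"
print(srda(seq, "C"))    # pattern built from Cys residues only
print(srda(seq, "C."))   # pattern built from Cys residues and stop symbols
```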
The flowchart of the analysis is presented in Figure 2. The EST database sequences were translated in six frames prior to the search, whereupon the deduced amino acid sequences were converted into polypeptide patterns. The SRDA procedure with key cysteine residues and termination codons was used. The converted database, which contained only identifiers and the six associated simplified structure variants (polypeptide patterns), formed the basis for retrieval of novel polypeptide toxins.
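The six-frame translation step can be sketched as follows, assuming Biopython is available; each translated frame would then be passed through the srda() helper sketched above to build the converted database of identifiers and six simplified patterns.

```python
from Bio.Seq import Seq  # Biopython assumed available

def six_frame_translations(nucleotide):
    """Translate an EST read in all six reading frames.
    Stop codons ('*' in Biopython) are remapped to '.' as in the SRDA patterns."""
    seq = Seq(nucleotide)
    frames = []
    for strand in (seq, seq.reverse_complement()):
        for offset in range(3):
            sub = strand[offset:]
            sub = sub[: len(sub) - len(sub) % 3]   # keep whole codons only
            frames.append(str(sub.translate()).replace("*", "."))
    return frames

# Each converted-database record would then be:
#   (clone_identifier, [srda(frame, "C.") for frame in six_frame_translations(read)])
```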
To search for sequences of interest, a correctly formulated query is necessary. The queries, also in pattern format (the screening lines in Figure 2), were based on the amino acid sequences of anemone toxins after analysis of the homology between their simplified structures.
At subsequent stages, amino acid sequences satisfying each query were selected from the converted database. Using the identifiers, the corresponding clones and open reading frames in the original EST database were located. As a result, a set of amino acid sequences was formed. Identical sequences, namely those with identical mature peptide domains irrespective of variations in the signal peptide and propeptide regions, were excluded from the analysis. To identify the mature peptide domain, an earlier developed algorithm was used [21,29]. Anemone toxins are secreted polypeptides; therefore only sequences with signal peptides were selected. Signal peptide cleavage sites were detected using both neural networks and Hidden Markov Models trained on eukaryotes, with the online tool SignalP http://www.cbs.dtu.dk/services/SignalP [30]. To ensure that the identified structures were new, a homology search in the non-redundant protein sequence database with blastp and PSI-BLAST http://blast.ncbi.nlm.nih.gov/Blast was carried out [31].
Data for analyses
To search for toxin structures, the EST database created for the Mediterranean anemone A. viridis was used [32].
The original data containing 39939 ESTs were obtained from the NCBI server and converted into table format for Microsoft Excel.
To formulate queries, amino acid sequences of anemone toxins were retrieved from the NCBI database. As of February 1, 2010, 231 amino acid sequences had been deposited in the database. All precursor sequences were converted into the mature toxin forms; identical and hypothetical sequences were excluded from the analysis. Anemone toxin sequences deduced from A. viridis databases were also excluded. The final number of toxin sequences was 104.
The reference database for evaluating the developed algorithms and queries was formed from amino acid sequences deposited in the NCBI database. To retrieve toxin sequences, the query "toxin" was used. The search was restricted to the Animal Kingdom. As a result, 10903 sequences were retrieved.
Figure 1. Conversion of an amino acid sequence into a polypeptide pattern using different key residues. SRDA("C") - conversion by the key Cys residues marked by arrows above the original sequence; the number of amino acids separating adjacent cysteine residues is also indicated. SRDA("C.") takes into account the location of Cys residues and the translational termination symbols denoted by points in the amino acid sequence. SRDA("K.") - conversion by the key Lys residues designated by asterisks and the termination symbols.
Computation
The EST database analysis was performed on a personal computer running Windows XP with MS Office 2003 installed. Sequences in FASTA format were imported into the MS Excel editor with the security level set to allow macro execution (see additional file 1). Translation, SRDA and homology searches in the converted database were carried out using special functions written in the VBA language for use in MS Excel (see additional file 2). Multiple alignments of toxin sequences were carried out with the MegAlign program (DNASTAR Inc.).
Anemone toxin motifs
The development of appropriate queries is the most important part of the analysis. Their tolerance determines the accuracy of EST database screening and, ultimately, the number of retrieved sequences. The 104 retrieved sequences of mature anemone toxins were subjected to SRDA using a number of key amino acid residues. The best results, as expected, were obtained with structure patterns based on key cysteine residues. Enrichment in cysteine residues is a characteristic feature of many natural toxins, which makes it possible to use cysteine as the key amino acid residue in data conversion.
Toxins are small compact molecules whose structure is stabilized by several disulphide bonds. The spatial structures of anemone toxins are divergent, which is reflected in their primary structure features. We chose cysteine as the key residue for SRDA conversion, and all 104 anemone toxin sequences were processed. More than a dozen screening lines encompassing the whole diversity of anemone toxins were derived from the converted data (see additional file 1). Since amino acid sequence patterns were analyzed, the obtained motifs reflect only the distribution of the key cysteine residues and the positions of termination signals (see Table 1). The total number of motifs would have been higher if special substitution symbols had not been used.
Since the operator "Like" was employed for mining toxin sequences in the database, the following substitution symbols were used to generalize the screening lines: ? - any single symbol; # - any single digit (0-9); * - a gap in the search line of zero or more symbols.
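Outside of Excel, such a screening line could be applied by translating the "Like"-style wildcards into a regular expression, as in the sketch below; the example motif is invented for illustration and is not one of the motifs of Table 1.

```python
import re

def like_to_regex(screening_line):
    """Translate a VBA 'Like'-style screening line into a regular expression:
    '?' -> any single character, '#' -> any single digit, '*' -> any run of symbols."""
    out = []
    for ch in screening_line:
        if ch == "?":
            out.append(".")
        elif ch == "#":
            out.append(r"\d")
        elif ch == "*":
            out.append(".*")
        else:
            out.append(re.escape(ch))
    return "".join(out)

motif = "C6C#C*"                            # hypothetical screening line
pattern = re.compile(like_to_regex(motif))
print(bool(pattern.fullmatch("C6C12C4")))   # matches: digit and trailing run allowed
print(bool(pattern.fullmatch("C7C12C4")))   # does not match: '7' is not the literal '6'
```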
Since the final goal of query motif development was to retrieve the maximum number of sequences from the database, we did not try to create universal motifs with broad specificity. Instead, many motifs were developed, each ensuring a specific distribution of key residues in the patterns. The first four motifs encompass the largest number of known sea anemone toxins and are the most discriminative. For motifs 5-9, we tried to achieve high identification capacity, while motifs 10-13 were made degenerate and partially overlapped the earlier developed motifs.
Among anemone toxins there are large cysteine-free molecules exhibiting strong cytolytic activity. These toxins, named cytolysins, comprise a heterogeneous group of membrane-active molecules subdivided into several groups on the basis of primary structure homology and similarity of physical and chemical properties [33]. For these molecules, the pattern motifs turned out to be too simple (0 and 14 in Table 1) and inadequate for the analysis. For identification of such possible structures in the databank, a novel motif K was generated; it combined two search parameters: the presence of not more than 2 cysteine residues in SRDA("C.") and not less than 6 lysine residues in SRDA("K.").
To check the potential of the developed pattern motifs, the efficiency of retrieval of toxin-like sequences from the reference animal toxin database was determined. Since amino acid sequences of anemone toxins were used as queries, we expected that all anemone toxins would be identified. Owing to the syntax of the reference database, the termination symbols in the motifs were eliminated prior to the analysis. Table 2 shows the total number of identified sequences, the number of toxins from anemones and other coelenterates, as well as the number of toxins from other groups of animals.
With the full set of 13 motifs, we were unable to identify 154 of the 374 anemone toxin sequences available in the reference database; 108 of these belonged to predicted structures or sequence fragments, and the remaining 46 sequences were cytotoxins (motif K).
As shown in Table 2, motif specificity varies considerably, as was already noted during motif construction. For instance, only motifs 1 and 2 proved specific to anemone toxins. Motifs 3 and 4, initially expected to be specific to sea anemone toxins, were also found in toxins of other animals, mainly nematodes and snakes. Although motif 8 was the rarest, it was found in a spider toxin, a conotoxin and an anemone toxin, and therefore it also could not be considered specific.
Data retrieval from EST database
To decrease the number of false positive results during screening of the converted database, limitations on the search parameters were imposed. Identity to the screening line was sought only within long fragments starting from the beginning of the pattern, or after a termination symbol, and ending with another termination symbol (see Figure 3). If a fragment did not end with a termination symbol, it was rejected as only partially identified. The screening analysis was performed on each fragment separately; thus a pattern motif had to match completely within the extent of the analyzed fragment. This approach considerably decreased the number of false positive results, since it excluded hits on sequences containing internal stop codons (an example of a false hit is given in Figure 3).
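One possible implementation of this fragment-wise matching is sketched below; the pattern strings are invented for illustration.

```python
import re

def fragment_hits(converted_pattern, motif_regex):
    """Match a motif only against whole fragments of a converted pattern.
    Fragments are the runs between termination symbols ('.'); the trailing
    piece without a terminal '.' is rejected as partially identified."""
    fragments = converted_pattern.split(".")
    complete = fragments[:-1]            # the last piece lacks a terminal stop
    return [f for f in complete if re.fullmatch(motif_regex, f)]

# Toy pattern with one complete fragment and one unterminated trailing fragment.
pattern = "4C6C2C8.3C6C2C"
print(fragment_hits(pattern, r"\d+C6C2C\d+"))   # only the terminated fragment is kept
```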
Each query run against the converted databank typically yielded dozens of sequences from the EST database (see Table 3). As an exception, more than 5000 hits were found for the most degenerate motif 13. Almost all of them matched sequences in wrong reading frames. This phenomenon was also observed for some other motifs. The false sequences obtained in this way were eliminated at the stage of signal peptide identification. In this way it was shown that all sequences retrieved with motifs 6, 7, 8, 10 and 12, and most of those retrieved with motif 13, were false.
In the deduced amino acid sequences, the mature peptide chain was determined using a maturation algorithm [21,29], and repeated mature sequences were discarded. Finally, 89 unique secreted sequences possessing homology to anemone polypeptide toxins were discovered in the A. viridis database (see Table 3). Duplicated clones were not numerous; the two most abundant sequences, revealed with motifs 3 and K, were repeated in the database 103 and 58 times, respectively. Detailed information on the correspondence of the deduced polypeptides to the EST nucleotide sequences is given in additional file 3. At the next processing stage, the deduced polypeptides were compared with the protein databank, resulting in the identification of 7 known toxins.
Using motif 1, we derived four full-length precursors (see Figure 4), three of which completely coincided with earlier described toxins, namely the sodium channel blockers neurotoxin 2, toxin 2-1 and neurotoxin 8. The fourth polypeptide, named neurotoxin 1-1, had only two substitutions compared to the earlier described neurotoxin 1.
The precursor of the BDS-1 toxin, which interacts with the rapidly inactivating Kv 3.4 channel [39], and 12 homologues of it were discovered in the database with motif 2 (see precursor sequences in Figure 5). All members of the structural family were numbered from 3 to 14. The most abundant among them was the BDS-1 precursor (15 sequences in the EST database). The remaining, less represented sequences comprised homologues, which formed the anemone polypeptide toxin combinatorial library.
Table 3 legend: "EST retrieved" is the total number of sequences found in the database by the pattern search; "SignalP approved" is the number of non-redundant (Nr) mature sequences retaining a signal peptide for secretion; "BlastP approved" is the number of sequences shown by blastp and PSI-BLAST to have identity to anemone toxins; the number of 100% homologous structures is given in the last column. * including truncated and long variants.
Another known potassium channel blocker, kaliseptin [38], was not found in the library; however, 11 similar polypeptides (avtx-1 - avtx-11) were identified using motif 3 as a query (see Figure 6). This group displays the lowest similarity to known toxins (see additional file 3); it is therefore possible that these polypeptides do not act on potassium channels but exhibit some other, still unknown, functions. The protein precursor avtx-1 is the most abundant of all the structures discovered; we found 103 identical sequences, which suggests a high expression level and functional significance of the encoded polypeptide.
The Kunitz-type polypeptides were retrieved using motif 4 (see Figure 7). The Kunitz-type scaffold is found not only in inhibitors of proteolytic enzymes but in toxins as well, for example in kalicludines. Some other polypeptides with antifungal and antimicrobial activities, and some showing analgesic properties, adopt the same scaffold [5,38,42,43]. In this group, the most represented sequences corresponded to the earlier described kalicludine-3 and to a new polypeptide, kalicludine-4 (AsKC4). Another, less abundant, sequence, AsKC1a, had an additional residue at the C-terminus compared to kalicludine-1. Conversely, a novel homologue of the known proteinase inhibitor 5 II, named proteinase inhibitor 5 III, which was C-terminally truncated by three amino acid residues, was discovered in the database. Other members of the family, owing to their high homology to kalicludines, were designated AsKC4-AsKC16. Neurotoxins 3, 7, 9 and 10, reported earlier in anemones [37,42], correspond to pattern structural motifs 6, 7 and 8, but the relevant sequences were not found in the EST database. Several polypeptides were retrieved with motif 5. Two novel structures, Gig 4 and Gig 5, showed high sequence homology to gigantoxin I from another sea anemone species, Stichodactyla gigantea [44] (see Figure 8). Gigantoxin I is a weak paralytic toxin capable of binding to the EGF receptor. However, the sequence alignment presented in Figure 8 shows that the A. viridis polypeptides may exhibit different functions. This follows from non-conserved substitutions in the polypeptide chain (V to E, S to E, and QM to KK), which considerably change the charge of the molecule. It has been suggested that the generation of toxins with novel functions is accompanied by replacement of functionally important amino acid residues while the structural fold of the molecule is preserved (this is illustrated by the sequences in Figure 8).
Two interesting toxin precursors, AV-1 and AV-2, were discovered with motif 9 (see Figure 9). The several polypeptides encoded in each single precursor displayed homology to the Am-1 toxins from the sea anemone Antheopsis maculata [45]. During maturation, the precursor protein Am-1 is cleaved at sites of limited proteolysis, leading to the production of six active components. In the newly discovered sequences, the number of generated active polypeptides is only four; however, the specific amino acid residues involved in proteolytic cleavage of the precursor are identical. For the anemone A. viridis, such a complex polypeptide toxin precursor structure had not been described before this work.
Thirty-nine sequences were retrieved from the EST database using motifs 11, 13 and K. All of them are presented in additional file 4. A homology search with the blastp algorithm failed to reveal related sequences; however, these structures possess correct signal peptides providing effective secretion. For some sequences, the sites of limited proteolysis and the location of the mature peptide domain may be predicted using earlier developed procedures [21,29]. The sequences identified with motifs 11 and 13 were named toxin-like, although their function remains unknown. The group of short sequences contains only two structural families; the other sequences are singletons (additional file 4, panel A). A homology search showed that two sequences, Tox-like av-1 and 5, matched earlier predicted structures. Polypeptides Tox-like av-4, 5 and 6 occurred repeatedly in the EST database (see additional file 3).
We also discovered long cysteine-containing sequences named Tox-like av-9 - Tox-like av-16 (additional file 4, panel B). Their structural peculiarities include a long propeptide fragment following the signal peptide, which is enriched in negatively charged amino acid residues, and numerous arginine and lysine residues in the mature peptide chain. We assume that the propeptide can stabilize the precursor structure by compensating the excess positive charge of the mature peptide and prevents premature proteolytic degradation, as was demonstrated for precursors of antimicrobial peptides [46,47]. The presence of a large number of positively charged amino acid residues points to possible cytotoxic functions of these peptides.
Several other cysteine-free cytotoxins enriched in lysine residues, the so-called cytolysin-like sequences, were retrieved from the EST database with motif K (additional file 4 panel C). These sequences were repetitive in the database and formed a homologous family (additional file 3). We suppose that natural venom contains truncated variants of these sequences and suggest that two C-terminal fragments of about 40 residues in length represent the putative mature polypeptides.
With motif K, 12 short sequences were retrieved from the database. All of them, except one, grouped into four homologous families. Since their functions remain obscure, they were called 'hypothetical peptides' (additional file 4, panel D). In addition, using motif K we discovered two closely related sequences identified as precursors of neuronal peptides (Figure 10). During limited proteolysis, each of them produces five small peptides presumably displaying neuronal activity. Figure 10 shows two examples of known neuropeptide precursors found in anemones, polyps and jellyfish belonging to the LWamide family, which share the common C-terminal sequence Gly-Leu-Trp-NH2, or to the RFamide family, sharing the C-terminal sequence Gly-Arg-Phe-NH2 [48,49]. These neuropeptides induce contractions of the anemone body wall muscles [50], and in the control of metamorphosis in planula larvae of H. echinata, LWamides and RFamides act antagonistically [51].
There is no sequence similarity between the precursor proteins presented; however, the limited proteolysis motif between the generated neuropeptides is similar, and almost all of them retain a C-terminal amidation signal. Localizing the N-terminal amino acid residue is problematic; we therefore suggest that the active neuropeptides consist of 4-6 amino acid residues. The peptides produced during maturation end with the sequence Arg-Pro-NH2 and were therefore called RPamide neuropeptides.
To summarize, the novel polypeptide sequences deduced from the A. viridis EST database were assembled into several families whose members differ by point mutations. This is a common feature of venomous animals, which produce a variety of toxins affecting different targets on the basis of a limited number of sequence patterns. Traditional sequence processing algorithms treat minor sequence variants as erroneous, but it cannot be ruled out that these structures are in fact correct. Subsequent proteomic research is necessary to test either possibility.
The efficiency of the method developed: a comparative study
The efficiency of SRDA compared with grouping nucleotide sequences into contigs was demonstrated earlier for the EST database of venomous spider glands [18]. When substantial data on amino acid sequences of homologous proteins are absent, a blast search fails to reveal homology with known proteins. This means that a good consensus sequence, and the entire contig, may be excluded from consideration. This is exemplified by the data presented in additional file 3, where for some sequences no homology was revealed.
It is more reasonable to compare the efficiency of mining polypeptide sequences by SRDA with that of other methods that also operate with amino acid sequence patterns, such as Pfam or GO [52,53]. This check was done using the set of amino acid sequences of the predicted peptides. Eighty-nine sequences in FASTA format were submitted to the UFO web server [54]. In comparison with SRDA and blastp, the assignment of sequences to protein families by UFO was less successful. The results of the search are given for each analyzed sequence in additional file 3, together with the blastp data.
A similar approach was applied for retrieval of polypeptides from the rodent EST database using the conserved Cys pattern of the transforming growth factor-β (TGFβ) family [55]. A special Motifer search tool with a flexible query interface was used. Similarly to our algorithm, Motifer operates with sequences translated in several reading frames and takes the termination signals into consideration. One of the weak points of the program was its low database scanning speed.
Figure 8. Comparison of sequences retrieved using motif 5 with the gigantoxin-1 precursor (Q76CA1). Mature polypeptides are shown in black; signal peptides and propeptide domains are in light brown; amino acids that differ from the sequence of gigantoxin-1 are given in red.
SRDA simplifies both the database itself and the search queries, thereby considerably simplifying the comparison algorithm and increasing the analysis rate. Thus, searching 12 queries against the reference database of 10489 sequences on a standard desktop PC required 30 s. We suggest that the simplicity and high speed of the analysis make SRDA attractive not only for the study of polypeptide toxins but for other polypeptides as well.
Since some procedures in the analysis are tedious and labor-intensive, it may be useful to combine SRDA with other advanced techniques, for example those based on Hidden Markov Models. Such a consolidated algorithm would combine the best features of each component and provide a precise and fast technology for EST processing.
Conclusions
The SRDA of the A. viridis EST database showed that this method is effective for rapid retrieval of sequences from the bulk of bioinformatics data. The correct formulation of the query plays the crucial role in the outcome of database screening and requires a small additional study. The key residues, whose arrangement we wish to fix in the polypeptide pattern, should be selected on the basis of their structural or functional significance. The introduction of termination signals considerably decreases the number of false positive results.
Using the procedure developed, we identified both new sequences and sequences showing high homology to already described toxins. For two known toxins, the precursor structures were determined. All retrieved sequences formed families of homologous peptides that differ by single or multiple amino acid substitutions, providing additional evidence for the combinatorial principle of natural venom formation. In addition to 23 earlier reported polypeptide toxins in sea anemone A. viridis, we discovered 43 novel sequences. Besides toxins, we also found short peptides with regulatory neuronal function, whose role is still to be investigated, and several groups of toxin-like polypeptides.
Simplification of the queries and of the database itself reduces the time of analysis compared with methods based on searching complete amino acid sequences. The procedure developed may be used for scanning newly generated databases or as a complement to traditionally used approaches. It is suitable not only for the retrieval of polypeptide toxins but for finding any type of amino acid sequence once its structural motifs have been established.
Additional material
Additional file 1: Supplementary Excel table of the reduced databank used in the analysis (read only). Use the Save As command and allow macro execution to access the SRDA and complementary functions in this example.
Additional file 2: Supplementary listing of the VBA module, with a general function description and a how-to-use section. Start MS Excel and set the macro security level to medium. Open any existing file or create a new one (allow macro execution in the file). Change the file type of Add_file 2 SRDA_processing.bas.txt to SRDA_processing.bas and import all functions in a batch via the Microsoft Visual Basic editor (File/Import file command). Type "= function name(" in the necessary cell and press the "fx" button located to the left of the cell input line. The argument(s) required by the function should be entered in the window that opens. Then copy the formula to other cells.
FUNCTIONS DEFINITION
- Function ShortDo(seq, excpt) - the main function, producing the converted sequence, where: seq - String variable holding the amino acid sequence processed by SRDA; excpt - String variable equal to the key residues (a combination of single-letter coded amino acid(s), with or without the termination symbol ".") as a solid word.
- Function Translate(seq, frame) - converts a nucleotide sequence into an amino acid sequence in the appropriate frame, where: seq - String variable holding the sequence for translation; frame - Integer variable defining the translation frame, with acceptable values 1, 2, 3, -1, -2, -3 or 0 (with frame = 0 only the reverse complement nucleotide sequence is created).
- Function SignalFrom(seq, limitMet, frame, format) - a function for prediction of an acceptable Met residue starting a signal peptide, where: seq - String variable holding the amino acid sequence for processing; limitMet - Integer variable defining the search range (from the beginning) for the Met residue; frame - Integer variable equal to the frame used earlier in translation (frame range 1-6), needed to calculate the position of the first nucleotide of the possible signal peptide; format - Integer variable defining the output style: 0 - the function returns the position of the first nucleotide; 1 - the position of the first Met in the signal peptide; 2 - the position of the last nucleotide in the predicted signal peptide; 3 - the position of the last amino acid in the predicted signal peptide; any other digit - the best score calculated for the signal peptide.
- Function TrimSeq(seq, start, finish) - a function for partial sequence presentation, where: seq - String variable holding the nucleotide or amino acid sequence; start - Integer variable defining the first nucleotide (amino acid); finish - Integer variable defining the last nucleotide (amino acid).
- Function MatureChain(seq, start, frame, format) - a function for sequence termination search, where: seq - String variable holding the amino acid sequence; start - Integer variable defining the start position for the termination symbol search; frame - Integer variable equal to the frame used earlier in translation (frame range 1-6), needed to calculate the position of the last nucleotide in the termination codon; format - Integer variable defining the output style: 0 - the function returns the position of the last nucleotide in the gene; 1 - the position of the termination symbol; any other digit - the polypeptide sequence from start to the detected terminus.
- Function Frame6Check(pattern, seq1, seq2, seq3, seq4, seq5, seq6) - prints the frame number in which the analyzed sequence(s) match the query, where: pattern - String variable holding any text for matching; seq1-seq6 - String variables holding the amino acid sequences (or converted sequences) translated in reading frames 1 to 6.
Figure 10. Amino acid sequence alignment of precursors of LWamide neuropeptides (Q16998), the N-terminal fragment of Antho-RFamide (Q01133) and the novel RPamide neuropeptides retrieved with motif K. Active neuropeptides are shown in green; identical enzymatic processing sites in the precursors are given in red.
Deep-learning-assisted reconfigurable metasurface antenna for real-time holographic beam steering
We propose a metasurface antenna capable of real-time holographic beam steering. An array of reconfigurable dipoles can generate on-demand far-field radiation patterns through the specific encoding of meta-atomic states, i.e., the configuration of each dipole. Suitable states for the generation of the desired patterns can be identified by iteration, but this is very slow and must be repeated for each far-field pattern. Here, we present a deep-learning-based method for the control of a metasurface antenna with point dipole elements whose state is varied through the dipole polarizability. Instead of iteration, we adopt a deep learning algorithm that combines an autoencoder with an electromagnetic scattering equation to determine the states required for a target far-field pattern in real time. The scattering equation from the Born approximation is used as the decoder in training the neural network, and an analytic Green's function calculation is used to check the validity of the Born approximation. Our learning-based algorithm requires a computing time of within 200 microseconds to determine the meta-atomic states, thus enabling the real-time operation of a holographic antenna.
Introduction
Beam steering without requiring the mechanical movement of antenna elements is an important component of electromagnetic (EM) wave applications. To achieve this, a phased array controls the phase of each antenna element and shapes the wavefront to steer the beam direction, which leads to high power consumption and requires complex electronics. Recently, reconfigurable metasurfaces have received interest as promising candidates for low-cost beam-steering antennas [1][2][3][4][5][6][7][8][9]. As a two-dimensional array of unit elements with variable states, a reconfigurable metasurface can modulate the wavefront via the collective scattering of elements and achieve beam steering by varying the state of the unit elements. On-demand beam steering requires an algorithm that can determine the corresponding state of the unit elements based on a far-field map. For general holographic far-field maps, only numerical solutions are permitted [10][11], which are usually found using iterative optimization methods such as genetic algorithms (GAs) [12][13] or particle swarm methods [14] and the Gerchberg-Saxton (GS) algorithm [15]. However, iterative methods take a long time to arrive at the solution and must be performed for each far-field map. Deep learning is a potent replacement method for reconfigurable metasurfaces [16][17][18][19][20][21][22] that has been used for the creation of holograms with experimental verification [17], the beamforming of antennas [18][19], and the adaptive invisible cloak [22].
In the present study, we present a deep-learning-based algorithm for real-time holographic beam steering using a reconfigurable metasurface. We adopt an autoencoder neural network [17,[23][24] in which the encoder generates the element states from a far-field map, while the physics-assisted decoder directly solves the scattering equation to obtain a far-field map from the generated states instead of undergoing neural network training. For simplicity, we assume the unit elements to be point dipoles and control their reconfigurable states by varying their polarizability. To ensure the stable training of the neural network, we employ a higher-order Born approximation [25][26] of the scattering equation and test its accuracy by comparing it with analytic Green's function calculations [27][28][29][30][31]. We investigated the effect of multiple scattering and its dependence on the dipole polarizability, as well as the quality of the far-field pattern generated, by varying the Born approximation order up to 3. Our autoencoder model is trained using handwritten digit data from the Modified National Institute of Standards and Technology (MNIST) database [32] in order to minimize the difference between the input MNIST image and the decoder-generated far-field map. We test our model with various far-field maps, including multi-directional beams and holographic maps. Our neural network generates meta-unit states that can produce high-fidelity holographic far-field patterns. Additionally, our trained neural network was able to generate untrained target images, such as a focused multi-beam, indicating that it acts as an inverse operation of the electromagnetic scattering equation.
For a metasurface with 900 unit elements, our model determines the required states within 200 μs. As a result, our learning-based algorithm enables the real-time operation of a reconfigurable metasurface for a holographic antenna.
Reconfigurable metasurface
a. Dipole approximation and electromagnetic scattering
We model a reconfigurable metasurface as a two-dimensional array of dipoles, with complex polarizability $\alpha_n$ for the n-th dipole. In reconfiguring the unit elements, we can vary both the amplitude and the phase of $\alpha_n$. Here, for simplicity, we fix the phase at either 0 or $\pi$ and vary the amplitude up to a maximum of $\alpha_{\max}$. In other words, we consider only real polarizabilities $\alpha_n = s_n \alpha_{\max}$ with $-1 \le s_n \le 1$. Later, we will also consider the binary case where $\alpha_n = \pm\alpha_{\max}$. Our metasurface consists of 30 × 30 dipoles rectangularly arranged and spaced 0.2 wavelengths apart in each direction. We assume that the n-th dipole ($n = 1, 2, 3, \ldots, N = 900$) with polarizability $\alpha_n$ located at $\mathbf{r}_n$ has the induced polarization $\mathbf{p}_n = \alpha_n \mathbf{E}(\mathbf{r}_n)$. The total field $\mathbf{E}(\mathbf{r}_i)$ at $\mathbf{r}_i$ is the sum of the incident and dipole-scattered fields [21]:
$$\mathbf{E}(\mathbf{r}_i) = \mathbf{E}_{\rm inc}(\mathbf{r}_i) + \sum_{j \ne i} \overleftrightarrow{G}(\mathbf{r}_i, \mathbf{r}_j)\, \mathbf{p}_j , \qquad (1)$$
where $\overleftrightarrow{G}(\mathbf{r}_i, \mathbf{r}_j)$ denotes the dyadic Green's function of free space. In theory, combining equation (1) with $\mathbf{p}_n = \alpha_n \mathbf{E}(\mathbf{r}_n)$, the induced polarization of each dipole can be calculated directly from the matrix equation
$$\sum_j \left[ \delta_{ij} - \alpha_i \overleftrightarrow{G}(\mathbf{r}_i, \mathbf{r}_j) \right] \mathbf{p}_j = \alpha_i \mathbf{E}_{\rm inc}(\mathbf{r}_i) , \qquad (2)$$
where $\delta_{ij}$ is a Kronecker delta and the self-term $\overleftrightarrow{G}(\mathbf{r}_i, \mathbf{r}_i)$ is excluded. However, in practice, the inverse process often becomes unstable when training the neural network and thus should be avoided. Instead, we use a recursive Born approximation for the training of the neural network and later check its accuracy by comparing its results for the generated dipole states with those derived from the Green's function. The Born approximation up to the third order is used to determine the far-field map for a given dipole polarizability set $\alpha_n$, $n = 1, \ldots, N$ [25]:
$$\mathbf{E}^{(1)}(\mathbf{r}_i) = \mathbf{E}_{\rm inc}(\mathbf{r}_i), \qquad \mathbf{E}^{(m+1)}(\mathbf{r}_i) = \mathbf{E}_{\rm inc}(\mathbf{r}_i) + \sum_{j \ne i} \overleftrightarrow{G}(\mathbf{r}_i, \mathbf{r}_j)\, \alpha_j \mathbf{E}^{(m)}(\mathbf{r}_j) , \qquad (3)$$
and the far-field map is then obtained from the scattered field at the detector positions,
$$\mathbf{E}_{\rm sca}(\mathbf{r}) = \sum_{j} \overleftrightarrow{G}(\mathbf{r}, \mathbf{r}_j)\, \alpha_j \mathbf{E}^{(3)}(\mathbf{r}_j) . \qquad (4)$$
Figure 1 presents a schematic diagram of our metasurface antenna. The numerical values of the polarizability are chosen in the range $10^{-7}\,\mathrm{m}^3 < |\alpha/\varepsilon_0| \le 10^{-5}\,\mathrm{m}^3$, which is valid for our desired frequency range and unit size [33][34][35][36][37], but it should be mentioned that our approach is universal and can be used in various systems with different scales. In training the neural network, an incident wave is generated by a 2.6 GHz dipole feed antenna located one wavelength away from the center of the metasurface, which is taken as the origin of the coordinates. This fixes the electric field $\mathbf{E}_{\rm inc}(\mathbf{r})$ and the Green's function $\overleftrightarrow{G}(\mathbf{r}, \mathbf{r}')$ except for the dipole vector of the feed antenna. We assume that the feed antenna is designed not to interfere with the scattered fields at the detection position, so that the far-field map is generated by the scattered fields. We train the neural network to determine the states of the dipoles $[s_1, s_2, \ldots, s_N]$ that generate a far-field map that agrees with the input map.
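A minimal numerical sketch of this forward model is given below. It uses a scalar Green's function and omits physical prefactors for brevity, whereas the paper uses the full dyadic Green's function, so it illustrates the structure of the recursive Born series rather than reproducing exact field values.

```python
import numpy as np

def born_far_field(positions, alphas, e_inc, k, directions, order=3):
    """Far-field intensity of a point-dipole array via a recursive Born series.
    positions: (N, 3) dipole coordinates, alphas: (N,) polarizabilities,
    e_inc: (N,) incident field at the dipoles, directions: (M, 3) unit vectors."""
    # scalar free-space Green's function between dipoles (self-term excluded)
    dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    np.fill_diagonal(dist, 1.0)                        # placeholder to avoid 0/0
    green = np.exp(1j * k * dist) / (4 * np.pi * dist)
    np.fill_diagonal(green, 0.0)                       # exclude the self-term

    # recursive Born series: E^(m+1) = E_inc + G @ (alpha * E^(m))
    e_tot = e_inc.astype(complex)
    for _ in range(order - 1):
        e_tot = e_inc + green @ (alphas * e_tot)

    p = alphas * e_tot                                   # induced dipole moments
    phase = np.exp(-1j * k * directions @ positions.T)   # far-field propagation factors
    return np.abs(phase @ p) ** 2                        # far-field intensity map
```

With order=1 the loop body is never executed and the result reduces to single scattering of the incident field; higher orders add multiple-scattering corrections between neighbouring dipoles.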
b. Neural network architecture
We adopt a deep autoencoder neural network consisting of an encoder and a decoder. The encoder generates the dipole states from an input holographic far-field map, and the decoder calculates the far-field map from the generated dipole states. The autoencoder is trained to minimize the error between the input and calculated far-field maps. During training, the encoder generates the optimal dipole states $[s_1, s_2, \ldots, s_N]$, which in turn generate the on-demand far field.
Figure 2 shows a schematic diagram of the network architecture. We consider each input image to be a u-v far-field map, which is a projection of the spherical surface map onto the x-y plane. We use MNIST handwritten digit images and resize them to a pixel size of (64, 64). The u-v far-field map is only defined in the unit circle $u^2 + v^2 \le 1$, and we set the pixel values outside this unit circle to 0. We normalize the sum of the image pixel values to 1 because we are concerned with the directivity of the antenna.
We employ the Residual Network (ResNet) [38] architecture as our encoder model. The encoder consists of one input convolution layer, three ResNet blocks, and one fully connected layer. Each ResNet block consists of three convolution layers with 64 channels. We use the leaky ReLU [39] function as the activation function for the convolution layers. The output dimension of the fully connected layer is (1, 900), which is the same as the number of metasurface unit elements and thus the number of dipoles. For the encoder output to represent the dipole states $s_n$, we use the hyperbolic tangent activation function after the fully connected layer so that the values of the output vector components remain between -1 and 1.
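One plausible PyTorch realisation of this encoder is sketched below (the paper's own implementation is in Julia). Details not stated in the text, such as the leaky-ReLU slope and the absence of any down-sampling between blocks, are our own assumptions.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Three 64-channel convolutions with a skip connection and leaky ReLU."""
    def __init__(self, ch=64):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):
        return self.act(x + self.convs(x))

class Encoder(nn.Module):
    """Maps a (1, 64, 64) far-field map to 900 dipole states in [-1, 1]."""
    def __init__(self, n_dipoles=900):
        super().__init__()
        self.stem = nn.Conv2d(1, 64, 3, padding=1)
        self.blocks = nn.Sequential(ResBlock(), ResBlock(), ResBlock())
        self.head = nn.Linear(64 * 64 * 64, n_dipoles)

    def forward(self, img):
        x = self.blocks(self.stem(img))
        return torch.tanh(self.head(x.flatten(1)))   # states bounded to [-1, 1]

# states = Encoder()(torch.rand(1, 1, 64, 64))   # -> tensor of shape (1, 900)
```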
We do not apply a neural network architecture to the decoder but rather solve the forward scattering equation (4) with the dipole polarizabilities given by the states $s_n$. We place a set of detectors sufficiently far away from the origin and evaluate the intensity of the scattered electric field $\mathbf{E}_{\rm sca}(\mathbf{r})$. The training loss is the mean squared error (MSE) between the calculated intensity map and the input image,
$$L = \sum_{u,v} \left[\, |\mathbf{E}_{\rm sca}(u,v)|^2 - \hat{I}(u,v) \,\right]^2 , \qquad (5)$$
where $\hat{I}(u,v)$ is the pixel value of the input image at index $(u, v)$. Because the loss function L contains the absolute square of the complex-valued scattered fields and is thus not holomorphic in E, care is needed in the backpropagation process. Instead of dealing with the derivative with respect to the complex quantity, we treat the real and imaginary parts as independent quantities and take their derivatives separately. For example, the derivative of the loss function with respect to the state variable $s_n$ is
$$\frac{\partial L}{\partial s_n} = \sum_{u,v} \left[ \frac{\partial L}{\partial\, \mathrm{Re}\,\mathbf{E}_{\rm sca}} \frac{\partial\, \mathrm{Re}\,\mathbf{E}_{\rm sca}}{\partial s_n} + \frac{\partial L}{\partial\, \mathrm{Im}\,\mathbf{E}_{\rm sca}} \frac{\partial\, \mathrm{Im}\,\mathbf{E}_{\rm sca}}{\partial s_n} \right] .$$
We use the Julia programming language and the Zygote package to calculate the derivatives of the complex numbers by considering them as pairs of real numbers. We trained our proposed neural network structure and compared its meta-unit solutions with those obtained from other optimization techniques, such as the GA and the modified GS algorithm of [15]. We used the MNIST dataset, splitting it into training, validation, and test sets comprising 50,000, 10,000, and 10,000 samples, respectively. The third-order Born approximation was used for training, with the Adam [40] optimizer, an initial learning rate of 1.0 × 10⁻⁴, and a batch size of 128. We also used early stopping, and the training converged after 30 epochs, taking about 35 minutes on an Nvidia A6000 GPU. For the GA optimization, we used a population size of 200, an elitism rate of 0.2, and a mutation rate of 0.1. We performed the GA and GS calculations with the scattering equation (3), using an Nvidia A6000 GPU. In the GS algorithm, we selected the meta-unit states with the lowest MSE loss after iterating the process until one of the previously visited binary states reappeared.
Results and Discussion
Figure 3 presents four test examples using the scattering equation with the third-order Born approximation and a maximum polarizability of $\alpha_{\max}/\varepsilon_0 = 1.0 \times 10^{-6}\,\mathrm{m}^3$. Although our autoencoder model is trained using MNIST digit data only, we test the model using non-digit-type far-field maps, including single- and multi-directional beaming and holographic images (Figure 3a). Figure 3b displays the generated dipole states and the far-field maps predicted by the trained autoencoder. In comparison with the MNIST images, the single-beam, multi-beam, and letter "E" images exhibit equally good recovery, suggesting that the encoder is not overfitted to MNIST-type far-field maps but has been trained for more general inverse operations. The resolution of the far-field maps is constrained by diffraction, since we utilized a metasurface of small size (6 wavelengths × 6 wavelengths). Nonetheless, Figure 3b demonstrates good recovery.
In real applications, reconfiguring a metasurface by continuously varying the states is difficult to achieve. Thus, we also consider the binary truncation of the states generated by the model, by taking the sign function of the states and evaluating the resulting far-field map (Figure 4). Despite the truncation, the predicted far-field maps are in reasonable agreement with the original input images. Though the holographic "E" image has side-lobe errors due to the diffraction caused by truncation, the binary approximation of our model outperforms the iterative GA in terms of directional beam steering. Figure 4b presents the binary states and far-field maps generated by the GA. The GA searches for the optimal binary states by minimizing the MSE loss in (5). For directional beam steering focusing on multiple spots, the GA easily falls into local minima, missing certain spots, when only the MSE loss function is used without additional regularizations. Figure 4c describes the results of the modified GS algorithm. Note the distributional similarity of the metasurface unit pattern between ours and the GS algorithm. Further, our neural network spends an average of 185 μs to generate 900 meta-units, while the GA requires an average of 232 sec, and the GS algorithm requires an average of 1.25 sec.
We also checked the validity of the Born approximation by increasing the iteration order and the maximum polarizability (Figure 5). For the $\alpha_{\max}/\varepsilon_0 = 1.0 \times 10^{-7}\,\mathrm{m}^3$ case, the effect of multiple scattering by nearby dipoles is negligible and thus the neural network can be trained with only the first-order Born approximation. However, for the larger $\alpha_{\max}/\varepsilon_0 = 1.0 \times 10^{-5}\,\mathrm{m}^3$, the first-order Born approximation generates a large side-lobe (Figure 5c), which tends to disappear as the Born approximation order increases. This is also reflected in the reduction of the MSE loss during training.
While our proposed algorithm is currently limited to a theoretical prototype, it can be applied to real-world metasurface antennas. To implement our algorithm, one would need to extract the polarizability of the metasurface unit [33][34][35][36][37], which can be done through various methods, such as calculating the far field scattered from the designed metasurface unit in experiments or numerical simulations and optimizing the polarizability to generate that field. Once we have the polarizabilities for all possible states, we can establish the scattering equation for the antenna structure. By training the neural network encoder to be an inverse operation of the scattering equation, we can obtain the proper solution for the desired far field to be generated. Therefore, although our current focus is on the theoretical prototype, our proposed algorithm has practical applications in the design and optimization of metasurface antennas.
Conclusion
We proposed a deep-learning-based method for the control of a reconfigurable metasurface antenna. We modeled the metasurface as a collection of dipoles with states of varying polarizability and used a deep autoencoder neural network, combined with a scattering equation and the Born approximation, to generate on-demand far-field maps. Our proposed autoencoder exhibited high accuracy and a much faster speed compared with the conventional GA approach and the GS algorithm. This would allow for the real-time operation of a reconfigurable metasurface antenna for beam steering.
Because our model simplifies the reconfigurable metasurface elements as dipoles with varying states, the realistic application of our model requires further consideration. In a real device, the finite size effect of unit elements should be considered, which goes beyond the dipole approximation. We employed dipole polarizability with varying amplitude and the phase fixed to 0 or $\pi$, but this was not essential. We could have kept the amplitude fixed and varied the phase, or varied both. When building a device element that represents a dipole with a variable state, it is preferable to employ a discrete state, such as a binary one. We demonstrated that the binary truncation of our continuous autoencoder model still produced a reasonable performance. The accuracy of our approach could be further improved if we employed a discrete state in our autoencoder model from the beginning, without truncation afterward. This can be achieved by modifying the normalization procedure, which will be considered in future research. Our work can easily be extended to more general metasurface antennas, phased arrays, and other far-field imaging applications.
Fig. 1 .
Fig. 1. (a) Schematic of the proposed metasurface antenna. The red and blue double arrows represent unit dipoles with states of 1 and -1, respectively. The metasurface unit scatterers reflect and interfere with the source wave. (b) We measure the far-field intensity for a 30 × 30 metasurface antenna array. The metasurface modulates the dipole antenna source beam and generates an on-demand far-field map.
Fig. 2 .
Fig. 2. Schematic diagram of the architecture of our autoencoder neural network. The encoder generates the meta-unit states, and the decoder reproduces the on-demand far field. We use the ResNet structure for our encoder and a scattering equation with the Born approximation for our decoder.
Figures and table
Fig. 3 .
Fig. 3. Target far-field map, the state pattern generated by the neural network, and the far-field map calculated from the analytic Green's function. (a) On-demand far-field maps: single-beam, multi-beam, MNIST, and "E" images. (b) Meta-unit states and the generated far-field pattern when the meta-units have continuous states.
Fig. 4 .
Fig. 4. Comparison of the results from the proposed neural network and the genetic algorithm when trained with the MSE loss from the far-field maps. (a) Binary meta-unit states of -1 or 1 generated by the proposed neural network. (b) Meta-unit states and the far-field pattern generated by the genetic algorithm. (c) Meta-unit states and the far-field pattern generated by the GS algorithm.
Fig. 5 .
Fig. 5. Loss function of the trained autoencoder and the far field calculated using the analytic Green's function method, according to the magnitude of the polarizability and the order of the Born approximation used in the decoder. (a) Loss function of the neural network. (b) Target far-field map. (c) Generated far-field map. Note that for $\alpha_{\max}/\varepsilon_0 = 1.0 \times 10^{-5}\,\mathrm{m}^3$, the first-order Born approximation generated strong side lobes compared to the second- and third-order approximations.
Table 1 .
Calculation time for the proposed methods. Except for the genetic algorithm, the calculation time is averaged over the MNIST dataset; the genetic algorithm calculation time is averaged over the cases of Fig. 4.
DDS-Based Flexible UWB Pulse Generator Using Anti-Nyquist Sampling Theorem
A real all-digital and all-coherent arbitrary ultra-wideband pulse generator is presented. The generator is a multi-core & single-channel (MCSC) direct digital synthesizer (DDS) which consists of 16 sub-cores and a high-speed digital-to-analog converter, the AD9739. The ultra-wideband pulses are generated according to the proposed anti-Nyquist sampling theorem; their spectra alias between the first and second Nyquist zones. By purposely aliasing the spectrum, the output bandwidth can be increased greatly. All the parameters, including pulse width, bandwidth, amplitude, pulse type, pulse repetition frequency and modulation, are user-controlled on the fly. In order to test the performance, a monopole pulse, a monocycle pulse and two 4th-order Gaussian pulses were generated. The monopole pulse, whose pulse rate can reach 2.5 GHz, has a 10% pulse width of 450 ps, a 200 mV peak amplitude and a -10 dB bandwidth of 2.15 GHz.
Introduction
The ultra-wideband (UWB) technique is widely used in the communication and radar fields. A key issue of the UWB technique is the generation of the UWB pulse. Conventional analog generators use fiber Bragg gratings (FBG) [1] and step-recovery diodes (SRD) [2] to generate UWB pulses. Conventional digital sources utilize complementary metal-oxide-semiconductor (CMOS) integrated circuits to generate glitches [3,4,5,6]. High-speed DAC-based direct waveform synthesis (DWS) was first presented in [7] for UWB applications. Similar to [7], this letter aims to provide a much more flexible and easily implementable UWB pulse source for radar and communication applications based on a direct digital synthesizer (DDS). The key innovation includes two parts. Firstly, the letter designs a multi-core & single-channel DDS (MCSC-DDS). This MCSC-DDS contains 16 DDS sub-cores and a high-speed digital-to-analog converter (DAC). It consists of off-the-shelf commercial components. The MCSC-DDS can generate arbitrary UWB pulses compared with other generators. Some waveform distortions caused by the non-ideal characteristics of the circuit can be compensated by the MCSC-DDS. Secondly, different from [7], the MCSC-DDS does not employ a high sample rate like [7] but uses the proposed anti-Nyquist sampling theorem to increase the output bandwidth and generate various UWB pulses. That is, the frequencies in the first and second Nyquist zones are allowed to overlap each other. As a result, the output bandwidth will be more than 1/2 of the sample rate, or even equal to the sample rate. By using this method, we can increase the output bandwidth without changing any hardware or increasing the sample rate. However, the anti-Nyquist rule can only be applied to UWB pulse generation.
Design of the MCSC-DDS
Several simple forms of MCSC-DDS architectures have been presented in the literature [8,9,10]. However, the theory of the MCSC-DDS has not been expounded in detail. Consider a sampled discrete-time sine wave
$$x(n) = a \sin\!\left( \frac{2\pi K n}{2^{N}} + \varphi \right), \qquad (1)$$
where $K$ is the frequency tuning word, $N$ is the width of the phase accumulator, $a$ is the amplitude and $\varphi$ is the phase offset. Splitting the sample index into $D$ interleaved subgroups, $n = mD + i$ with $i = 0, 1, \ldots, D-1$, gives
$$x(mD+i) = a \sin\!\left( \frac{2\pi K (mD+i)}{2^{N}} + \varphi \right) \qquad (2)$$
$$= a \sin\!\left( \frac{2\pi (DK) m}{2^{N}} + \frac{2\pi K i}{2^{N}} + \varphi \right), \qquad (3)$$
so that subgroup $i$ is itself a DDS sequence running at the reduced rate $f_s/D$:
$$x_i(m) = a \sin\!\left( \frac{2\pi (DK) m}{2^{N}} + \varphi_i \right), \qquad \varphi_i = \frac{2\pi K i}{2^{N}} + \varphi . \qquad (4)$$
Equation (4) represents the basic principle of the MCSC-DDS. In Eq. (4), for each subgroup i the sample rate is $f_s/D$, so it can be implemented by a low-speed DDS sub-core operating at $f_s/D$. The exact value of D is determined by the combination of FPGA and DAC. In general, D is relatively small when a high-speed FPGA is used; on the other hand, D is relatively large when a high-speed DAC is used. Here we set D = 16, so 16 DDS sub-cores are needed to implement Eq. (4). For DDS sub-core i, the sub-core frequency tuning word is 16K, the phase control word is $\varphi$, the amplitude is a and the phase offset is iK. A prototype MCSC-DDS was designed; the block diagram is shown in Fig. 1. In Fig. 1, the MCSC-DDS is RAM-based. The output waveform is determined by the content of the RAM and the corresponding parameters in Eq. (4), so this MCSC-DDS is capable of generating arbitrary waveforms. The RAM content is the same for all DDS sub-cores. All the DDS sub-cores run in parallel in the FPGA. The output of each DDS sub-core is delivered to the DAC through a high-speed multiplexer, in sequence, at the speed of fs. In general, the multiplexer is embedded in the DAC chip.
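The decomposition can be checked numerically with the sketch below: D low-rate phase accumulators with tuning word D*K and phase offsets i*K, interleaved by a multiplexer, reproduce the sine that a single full-rate DDS would generate. The word sizes and tuning word used are arbitrary illustration values.

```python
import numpy as np

def mcsc_matches_direct(k=123456789, phase=0.3, amp=1.0, n_bits=32, d=16, n_samples=4096):
    """Verify Eq. (4): D sub-cores at f_s/D, tuning word D*K and phase offset i*K,
    interleaved back to the full rate, equal the directly synthesized full-rate sine."""
    m = np.arange(n_samples // d)
    sub_streams = []
    for i in range(d):
        acc = (d * k * m + i * k) % (1 << n_bits)          # sub-core i phase accumulator
        sub_streams.append(amp * np.sin(2 * np.pi * acc / (1 << n_bits) + phase))
    full_rate = np.stack(sub_streams, axis=1).reshape(-1)  # multiplexer interleaving

    n = np.arange(n_samples)
    direct = amp * np.sin(2 * np.pi * ((k * n) % (1 << n_bits)) / (1 << n_bits) + phase)
    return np.allclose(full_rate, direct)

print(mcsc_matches_direct())   # True: the interleaved sub-cores reproduce the full-rate DDS
```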
Anti-Nyquist sampling theorem
In this letter, the high-performance, high-frequency 14-bit DAC AD9739 [12] is selected for the MCSC-DDS. Its sample rate $f_s$ is up to 2.5 GHz. The AD9739 uses the quad-switch architecture shown in Fig. 2. In this architecture, a constant glitch at $f_s/2$ is created in the process. For UWB applications, the MCSC-DDS must do its best to increase the output bandwidth. If the MCSC-DDS cannot increase the sample rate, it would appear that the maximum output bandwidth is no more than 1.25 GHz according to the Nyquist sampling theorem. But this is not the practice in some UWB applications. As mentioned above, the DAC creates a glitch at $f_s/2$, which implies that the output bandwidth can extend beyond $f_s/2$ for UWB applications. But the total sample rate of the MCSC-DDS is $f_s$. So the Nyquist rule is violated with respect to the MCSC-DDS. Therefore, a method called the anti-Nyquist sampling theorem is presented for UWB applications. The MCSC-DDS purposely uses the aliasing phenomenon to increase the output bandwidth by violating the Nyquist sampling theorem. That is, the frequencies in the first and second Nyquist zones are allowed to be distorted by the overlapping of frequency components above and below $f_s/2$ in the original signal. Using this method, the output bandwidth will be more than 1/2 of the sampling frequency, or even equal to the sampling frequency. In other words, the output bandwidth is increased without changing the sample rate of the MCSC-DDS. This is illustrated in Fig. 3, where $X'(f)$ is the aliased spectrum and the shaded part is the overlapping frequency components. The output bandwidth BW therefore satisfies
$$\frac{f_s}{2} < \mathrm{BW} \le f_s .$$
This means that we can obtain a wider output bandwidth with a lower sample rate by using the anti-Nyquist sampling theorem. This is one of the most important reasons for us to use the DDS technique to generate UWB pulses. From Fig. 4 we can see that, by controlling the amplitude a, the code s(n), the sample rate fs and the operating mode of the DAC [7], the MCSC-DDS can generate various UWB pulses of different bandwidths and modulations.
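The effect can be illustrated numerically: a sub-nanosecond pulse synthesized at f_s = 2.5 GS/s has an underlying spectrum whose -10 dB edge lies well above f_s/2, so part of its energy folds between the first and second Nyquist zones. The sketch below uses a Gaussian monocycle with an arbitrarily chosen width parameter, not the exact waveforms programmed into the generator.

```python
import numpy as np

fs = 2.5e9                      # MCSC-DDS sample rate
over = 8                        # oversampling factor for the "continuous" reference
tau = 150e-12                   # hypothetical monocycle width parameter

def monocycle(t):
    return -t / tau * np.exp(-0.5 * (t / tau) ** 2)

# near-continuous reference: its -10 dB bandwidth exceeds fs/2
t_hi = (np.arange(2048) - 1024) / (over * fs)
spec_hi = np.abs(np.fft.rfft(monocycle(t_hi)))
f_hi = np.fft.rfftfreq(len(t_hi), 1 / (over * fs))
bw_true = f_hi[20 * np.log10(spec_hi / spec_hi.max()) >= -10].max()

# fs-sampled version: the energy above fs/2 folds back into the first Nyquist zone
t_lo = (np.arange(256) - 128) / fs
spec_lo = np.abs(np.fft.rfft(monocycle(t_lo)))
f_lo = np.fft.rfftfreq(len(t_lo), 1 / fs)
bw_folded = f_lo[20 * np.log10(spec_lo / spec_lo.max()) >= -10].max()

print(f"-10 dB bandwidth of the underlying pulse: {bw_true / 1e9:.2f} GHz")
print(f"Nyquist limit fs/2: {fs / 2 / 1e9:.2f} GHz, folded -10 dB edge: {bw_folded / 1e9:.2f} GHz")
```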
Measurement setup and results
The FPGA is a Xilinx Virtex-5. The sample rate of the DAC is 2.5 GHz. The MCSC-DDS output is directly connected to the sampling equipment, a LeCroy WaveMaster. By controlling the RAM in the FPGA and the other parameters, the DDS can generate nearly arbitrary UWB pulses. In this letter, two doublets, a monocycle pulse and a monopole pulse are generated. The MCSC-DDS settings [7] for these waveforms are depicted in Fig. 4. The measured waveforms are shown in Fig. 5, in both the frequency and time domains. From Fig. 5 we can see that their spectra alias between the first and second Nyquist zones. The monopole pulse has a 10% pulse width (PW) of 450 ps, a peak-to-peak amplitude (PPA) of 0.20 V and a -10 dB bandwidth (BW) of 2.15 GHz. Its spectrum aliases between the first and second Nyquist zones. The monocycle pulse has a 10% PW of 0.71 ns, a PPA of 0.31 V and a -10 dB BW of 2.35 GHz. Its spectrum also aliases between the first and second Nyquist zones, and the BW is about equal to 94% of the sample rate. The first 4th-order Gaussian pulse (doublet) has a PW of 0.99 ns, a PPA of 0.47 V and a -10 dB BW of 1.76 GHz, which is equal to 70.4% of the sample rate. The second doublet has a PW of 1.07 ns, a PPA of 0.25 V and a -10 dB BW of 1.67 GHz. Fig. 6 and Fig. 7 show the variable modulation capabilities of on-off keying (OOK), pulse position modulation (PPM) and bi-phase modulation at the same time. The maximum pulse rates of the monopole and monocycle are 2.5 GS/s and 1.25 GS/s, respectively. These results demonstrate the high flexibility of the generator and the efficiency of the proposed anti-Nyquist rule. All measured parameters are listed and compared with the performance of prior works [1][2][3][4][5][6] in Table 1.
Conclusion
A DDS-based, truly all-digital, flexible UWB pulse generator is presented in this letter. The generator consists of off-the-shelf commercial components. Using the proposed anti-Nyquist method, a monopole pulse, a monocycle pulse and two 4th-order Gaussian pulses, whose spectra alias across the Nyquist zones, are generated with the MCSC-DDS. Their -10 dB BW is more than 1/2 of the sample rate. In order to demonstrate the flexibility, their parameters, such as modulation and waveform type, are set to be different from each other. With these features, the DDS-based UWB pulse generator can offer more potential applications compared with prior work. It is expected to be useful for some radar and communication systems.
Single-dose prednisolone alters endocrine and haematologic responses and exercise performance in men
The aim of this study was to investigate the effect of a single dose of prednisolone on (A) high-intensity interval cycling performance and (B) post-exercise metabolic, hormonal and haematological responses. Nine young men participated in this double-blind, randomised, cross-over study. The participants completed exercise sessions (4 × 4 min cycling bouts at 90–95% of peak heart rate), 12 h after ingesting prednisolone (20 mg) or placebo. Work load was adjusted to maintain the same relative heart rate between the sessions. Exercise performance was measured as total work performed. Blood samples were taken at rest, immediately post exercise and up to 3 h post exercise. Prednisolone ingestion decreased total work performed by 5% (P < 0.05). Baseline blood glucose was elevated following prednisolone compared to placebo (P < 0.001). Three hours post exercise, blood glucose in the prednisolone trial was reduced to a level equivalent to the baseline concentration in the placebo trial (P > 0.05). Prednisolone suppressed the increase in blood lactate immediately post exercise (P < 0.05). Total white blood cell count was elevated at all time-points with prednisolone (P < 0.01). Androgens and sex hormone-binding globulin were elevated immediately after exercise, irrespective of prednisolone or placebo. In contrast, prednisolone significantly reduced the ratio of testosterone/luteinizing hormone (P < 0.01). Acute prednisolone treatment impairs high-intensity interval cycling performance and alters metabolic and haematological parameters in healthy young men. Exercise may be an effective tool to minimise the effect of prednisolone on blood glucose levels.
Introduction

Glucocorticoids (GC) are widely prescribed to treat acute and chronic illnesses and diseases (3,4,5). As much as 17% of the population are prescribed GC on an annual basis, indicating the substantial prevalence of GC administration (6).
GC are often used by athletes for their proposed ergogenic effects, which has resulted in the World Anti-Doping Agency banning their use during competition. Several recent reviews suggest that short-term (>2 weeks) GC treatment likely has an ergogenic effect on exercise capacity and performance (7,8). However, the effects of acute (single dose) GC administration are less clear, largely due to limited research. As such, it is important to determine the biological effects of acute GC treatment, not only for the potential effects on performance but also for the health implications associated with GC treatment.
GC administration at supraphysiological doses can result in severe systemic side effects. In severe cases of prolonged GC treatment, this can result in iatrogenic Cushing's syndrome with skin thinning and bruising, reduced bone density, accumulation of central adiposity, skeletal muscle wasting and hyperglycaemia (9). The withdrawal of exogenous GC or minimisation of dose and duration of therapy are recommended to minimise GC-induced side effects (10). However, this is not always possible and as such additional pharmaceutical drugs are often prescribed to treat GC-related symptoms, which may lead to the development of further unwanted side effects (10). Emerging research from animal studies has demonstrated the potential for exercise to be a viable alternative for managing certain GC-induced side effects, including skeletal muscle atrophy and insulin resistance (11,12,13). Yet, there is little evidence in humans regarding the potential benefits of exercise to treat and manage the cardiometabolic side effects of GC treatment.
In men, chronic or long-term GC treatment is associated with a reduction in circulating testosterone concentrations attributed to the inhibition of the hypothalamic-pituitary-testicular (HPT) axis (14). Exercise is not thought to interfere with the diurnal rhythm of testosterone production (15), and there may be a short-term effect of exercise training to reduce metabolic clearance of testosterone (16). However, how acute administration of GC may modulate sex hormone responses to exercise in men is unclear.
Therefore, the primary aims of this study were (A) to determine whether a single dose of prednisolone can influence exercise performance during high-intensity interval cycling and (B) to investigate the effects of acute prednisolone administration on metabolic, haematological and hormonal outcomes and determine the response to high-intensity exercise.
Methods
This study is part of a larger study and as such a detailed description of the methodology is provided elsewhere (17,18). In short, nine healthy, recreationally active males (age: 27.8 ± 1.7 years; BMI: 24.4 ± 0.8; fasting blood glucose: 4.7 ± 0.2 mmol/L; mean ± s.e.m.) participated in this double-blind, randomised controlled, cross-over study. The participants orally ingested either prednisolone (20 mg) or placebo (Avicel), 12 h prior to undergoing an acute exercise session. The second session (either prednisolone or placebo) was completed after a washout period of at least 1 week. The high-intensity interval exercise (HIIE) consisted of 4 × 4 min cycling bouts at 90-95% of peak heart rate, interspersed with 2-min recovery periods at 50-60% of peak heart rate. We, and others, have previously reported that this type of exercise elicits favourable improvements in glycaemic control, compared to moderate-intensity exercise (19,20). The workload for the second session was continuously modified throughout the exercise to elicit the same heart rate achieved during the first session. Informed consent was obtained from each participant and the study was approved by the Victoria University Human Research Ethics Committee.
Blood samples were taken prior to exercise, immediately after exercise, and 30 min, 1 h, 2 h and 3 h post exercise. Blood glucose and lactate concentrations were analysed using a YSI 2300 STAT Plus (Glucose & Lactate Analyser, Australia). White blood cell (WBC) counts, red blood cell (RBC) counts, haemoglobin and haematocrit concentrations were analysed with a Sysmex KX-21N (Kobe, Japan). Serum testosterone and dihydrotestosterone (DHT) were measured by liquid chromatography mass spectrometry using a Xevo TQ-S (Waters Corporation, Milford, MA, USA) with prior liquid-liquid extraction using methyl tert-butyl ether. The lowest limit of detection of testosterone and DHT is 0.08 nmol/L. Serum oestradiol (E2) was analysed by liquid chromatography mass spectrometry following derivatisation with Dansyl Chloride. The lowest limit of detection is 5 pmol/L. The between-run imprecision for LCMS analysis of testosterone, DHT and E2 over the analytical range is 5-10%. Serum sex hormone-binding globulin (SHBG) was analysed on an Immulite 2000 (Siemens Healthcare Diagnostics Inc.) and luteinizing hormone (LH) on an Architect Analyser (Abbott Diagnostics).
Statistical analysis
Data were checked for normality and analysed using Predictive Analytics Software (PASW, SPSS Inc.). Comparison of means was examined using a two-way (Treatment × Time) repeated-measures ANOVA. For all significant interaction and main effects, a priori comparisons of means (baseline vs all post-exercise time-points; placebo vs glucocorticoid for all time-points) were conducted using Fisher's least significant difference test (P < 0.05). All data are reported as mean ± standard error of the mean (s.e.m.) and all statistical analyses were conducted at the 95% level of significance (P ≤ 0.05). Trends were reported when P values were greater than 0.05 and less than 0.1. Effect sizes (ESs) were calculated using Cohen's d equation.
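As a minimal sketch of the effect-size calculation mentioned above (the function, variable names and numbers are ours, not from the paper, and this is the common pooled-standard-deviation form of Cohen's d; the paper does not state which variant it used):

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d between two groups using the pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Illustrative (made-up) total-work values in kJ for placebo and prednisolone trials
placebo = [215, 220, 210, 225, 218, 212, 221, 217, 214]
prednisolone = [205, 211, 200, 214, 208, 203, 210, 206, 204]
print(round(cohens_d(placebo, prednisolone), 2))
```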
Exercise performance
Prednisolone did not significantly affect the work performed in the first bout of HIIE. In contrast, prednisolone significantly decreased the work performed in the final three HIIE bouts, and the difference became greater with each subsequent set (ES: 0.22-0.41, Fig. 1). Overall, the total work performed during the exercise session was significantly lower with prednisolone compared to placebo (prednisolone: 206 kJ; placebo: 217 kJ, P < 0.05). As expected, HR was not significantly different between each set, with and without prednisolone, as each set was matched according to HR (all P > 0.05, ES: 0.03-0.14). However, the mean HR for the entire session was slightly higher (1.5 bpm, P < 0.05, ES: 0.1) following prednisolone (Fig. 1).
Blood glucose
Prednisolone ingestion caused a significant increase in blood glucose concentration at baseline, immediately following exercise and 30 min, 1 and 2 h after exercise, compared to placebo (ES: 0.55-2.19, Fig. 2A). Blood glucose concentration in the placebo trial was not significantly altered from baseline throughout the session. Three hours following exercise, blood glucose levels in the prednisolone session were equivalent to the glucose concentration in the placebo session (P > 0.05).
Blood lactate
The acute administration of prednisolone did not alter baseline blood lactate concentrations. The HIIE caused an increase in blood lactate concentration with both treatments; however, the increase in lactate was reduced by 14% with prednisolone compared to the placebo (P < 0.05, ES: 0.55) (Fig. 2B). Blood lactate returned to baseline levels after 1 h of recovery in the prednisolone session, but remained elevated above baseline for up to 3 h post-exercise in the placebo session (Fig. 2B).
Haematology
Due to machine technical difficulties which resulted in missing data, results for haematological variables are reported for seven participants only.
In comparison to baseline, whole blood WBC concentration was significantly increased immediately following exercise, 2 and 3 h after exercise, for both treatments (Fig. 3A). Compared to placebo, prednisolone caused a significant elevation in whole blood WBC counts at all time-points (P < 0.01, ES: 1.18-2.01).
The HIIE caused a significant increase in whole blood RBC, haemoglobin and haematocrit immediately post exercise for both treatments (P < 0.05, ES: 1.2-2.23) (Fig. 3B, C and D). With both treatments combined, whole blood RBC count decreased below baseline levels at 1 h post exercise (P < 0.05). Similarly, haemoglobin and haematocrit concentrations were decreased below baseline levels for up to 1 h after exercise (P < 0.05), returning to baseline levels 2 and 3 h into recovery. Prednisolone had no significant effect on whole blood RBC count, haemoglobin or haematocrit concentrations.
Figure 1. Mean work output (Watts) (solid bars) and heart rate (HR) (striped bars) during the high-intensity cycling sets. Δ is expressed as a percentage difference between treatments. **P < 0.01, *P < 0.05, prednisolone significantly different from placebo.
Sex hormones
Serum concentrations of testosterone, DHT and SHBG increased immediately after exercise with both placebo and prednisolone treatments combined (Fig. 4A, B and E, P < 0.05). DHT and SHBG concentrations decreased below baseline levels 1 h following exercise, before returning to pre-exercise levels 3 h post exercise ( Fig. 4B and E).
Similarly, testosterone decreased below baseline levels 1 h into recovery and remained reduced at 3 h post exercise (Fig. 4A). E2 was significantly elevated above baseline concentrations 3 h after exercise, irrespective of treatment (Fig. 4C, ES: 0.46-0.82).
A trend was observed for higher LH at all time-points with prednisolone; however, the effect was not significant (P = 0.087, ES: 0.17-1.24) (Fig. 4D). A significant treatment effect was observed in the ratio of testosterone/LH, with prednisolone causing a reduction in the ratio of testosterone/LH at all time-points in comparison to placebo (Fig. 5A, P < 0.01, ES: 0.56-1.3). Prednisolone did not significantly alter the testosterone/E2 ratio (Fig. 5B).
Discussion
The major findings of this study are (A) a single dose of prednisolone decreases work capacity during HIIE at ∼95% of peak HR in healthy young men; (B) prednisolone significantly elevates fasting blood glucose concentrations which is restored to baseline 3 h after HIIE; (C) exercise acutely affects WBC counts and sex hormones which persists up to 3 h after exercise, while prednisolone reduces the ratio of testosterone/LH.
There is a lack of evidence regarding the effect of acute (single dose) GC treatment during HIIE, despite shortterm GC treatment indicating an ergogenic effect (7,8). We report a significant reduction in the work completed, at the same relative intensity during HIIE, with prior prednisolone ingestion. This finding corresponds with a previous study which reported that acute prednisolone ingestion (20 mg) increased VO 2 during steady state cycling at 60% of VO 2max , indicating an increased energy demand during submaximal exercise, which may indicate reduced exercise efficiency (21). In contrast, HIIE following an acute 4 mg dose of dexamethasone appears to have no effect on VO 2 or HR (22). Similarly, single-dose prednisolone (20 mg) has no effect on time to exhaustion at intensities between 70 and 85% of VO 2max (23,24). Together, the current evidence indicates that acute prednisolone reduces exercise capacity, especially at higher intensities, when the metabolic demand is high. The mechanisms by which GC reduces exercise capacity are not clear, but may be related to the acute effect of GC on skeletal muscle protein signalling including aberrant anabolic and insulin signalling proteins (18), and/or impaired skeletal muscle microvascular blood flow (25), both known mediators of exercise capacity. The discrepancy between our findings and others that have reported improved exercise performance after short-term GC is equally unclear (26,27,28). It could be speculated that a single dose of GC elicits a perturbation in wholebody homeostasis, including impaired glycaemic control as indicated by elevated fasting glucose and insulin. In contrast, longer duration GC treatment (albeit still short term) may promote compensatory mechanisms that are able to control this initial homeostatic insult, which may reflect why some studies have reported normal fasting glucose and insulin after short-term GC treatment (28,29). Further research is warranted to explore the effect of acute prednisolone on exercise capacity and performance during other exercises and sports and to investigate the potential mechanisms. Chronic GC treatment is known to induce side effects including hyperglycaemia and the development of diabetes (30,31). However, the effects of a single-dose GC on fasting and post-exercise blood glucose levels in healthy individuals are not clear, and some reported that prednisolone (20 mg) causes an increase in blood glucose concentrations at baseline (23,24,32), while others reported that it has no effect on blood glucose (21,22,33).
The results from the current study confirm that a single dose of prednisolone significantly increases blood glucose levels.
Exercise is known to improve insulin sensitivity and glucose regulation in healthy individuals and people who live with chronic conditions, including insulin resistance and type 2 diabetes (34,35). We report that prednisoloneinduced hyperglycaemia was restored to baseline levels 3 h after a single session of HIIE. It is possible that the restoration of glucose levels after exercise may, in part, be due to the degradation of the biological activity of a single dose of prednisolone over the course of time. However, previous studies have also reported restoration of glucose levels when cycling exercise (80-85% of VO 2max ) is performed as little as 2-3 h after prednisolone (20 mg) ingestion (23). These findings highlight the important role exercise can play in improving glycaemic control during GC therapy. The effect of exercise on blood glucose may depend on the dosage of GC administered. For instance, both 1 and 4 mg doses of dexamethasone increase blood glucose levels at baseline, however, normalisation of blood glucose with exercise at 90% of VO 2max occurs with only the 1 mg dose (32). It is also possible that the normalisation of glucose levels is intensity dependant. Exercise at 70-75% of VO 2max has previously been reported to have limited effect on blood glucose response following prednisolone treatment (20 mg) (24). The current findings provide new evidence that acute HIIE may help to minimise the elevation in blood glucose concentration that occurs following GC treatment. However, further research is required to confirm these findings with respect to timing of exercise following ingestion of GC and identifying the mechanisms involved (i.e. improved glucose uptake by skeletal muscle, a reduction in hepatic glucose output or both).
To determine the influence of prednisolone on other exercise-mediated metabolic responses, we measured blood lactate and changes in blood haematology. Blood lactate significantly increased from resting levels in response to HIIE with both treatments. However, the lactate response was supressed in the prednisolone trial compared to the placebo, possibly a reflection of the significant reduction in work performed during the exercise session. This finding is in contrast to previous studies which reported that blood lactate was not altered with prednisolone (21,23,24) and dexamethasone treatment (22,32). Furthermore, none of these studies reported a change in exercise performance, which supports the hypothesis that lactate was likely reduced in the current study due to the reduction in work completed. It is also possible that the use of a long duration, HIIE protocol in the current study may, in part, explain the discrepancy in findings, given that lactate metabolism is crucial in sustaining intense exercise (36). GC therapy is used as an immunosuppressive agent to treat autoimmune diseases. However, little research has been conducted to explore the immune response following GC administration and exercise. We report that acute prednisolone ingestion causes an elevation in total WBC count at baseline (12 h after ingestion), and throughout the 3-h recovery period following HIIE. Similarly, the short-term inhalation of fluticasone results in an increase in total WBC and neutrophils at baseline, with a further increase following high-intensity exercise (37). However, another study reported that lymphocyte concentration was reduced in a time-dependent manner following prednisone ingestion (38), suggesting that specific WBC subtypes may respond differently to exercise and GC ingestion, warranting further investigation.
Changes in RBC count, haemoglobin and haematocrit are reported to influence exercise performance predominantly through the contribution of oxygen to working muscles (39). Therefore, we investigated whether the decreased exercise capacity with prednisolone ingestion would coincide with changes in these variables. Although the RBC count, haemoglobin and haematocrit were altered following HIIE, there was no treatment effect of prednisolone suggesting alternative pathways are likely for the impairment of exercise capacity.
Given the potential of androgen-glucocorticoid interactions and the role of hormonal regulation for exercise performance and adaptation (40,41), we explored the effects of single-dose GC treatment on circulating sex hormone concentrations before and after a single session of HIIE. We report that HIIE increased circulating androgens, namely testosterone and DHT, as well as SHBG concentrations. While DHT and SHBG returned to baseline levels during recovery, testosterone concentrations remain below baseline, suggesting a biphasic response of testosterone to HIIE. E2 concentrations were stable during exercise and increased during recovery. The ratio of testosterone/LH is reduced in men after taking prednisolone consistent with evidence of testicular, or more specifically Leydig cell, dysfunction (42,43). Prednisolone reduced the ratio of testosterone/LH across all time-points consistent with an effect to impair testicular Leydig cell function superimposed on the effect of exercise. This occurred in the absence of any evidence of altered aromatase activity as the ratio of testosterone/ E2 was unchanged (44). This acute effect of prednisolone on Leydig cell function is noteworthy. Chronic GC therapy has been associated with lower circulating testosterone concentrations without elevation of LH, attributed to suppression of central components of the HPT axis (14,45,46). Our findings suggest that impairment of Leydig cell function may also play a role in the action of GC on the HPT axis. Further studies are warranted to ascertain the effects of chronic GC administration on Leydig cell function, and whether GC use impacts on multiple levels of the HPT axis.
Testosterone treatment in younger and older men has an anabolic effect including increased muscle strength and performance (40,41). The acute effect of exercise to increase testosterone, DHT and SHBG could reflect an element of haemoconcentration following exercise. The reduction in testosterone post exercise but not DHT or SHBG would be consistent with downregulation of the HPT axis in the setting of fatigue. Further studies would be needed to establish whether chronic GC administration and more sustained periods of exercise might jointly impact on the function of the HPT axis to the detriment of exercise capacity.
This study has several potential limitations; first, there is a relatively small sample size. This study was conducted as part of a larger, invasive study, and as such, recruitment was difficult (18). This study was adequately powered to compare changes in blood glucose from baseline to 3 h post exercise, between placebo and prednisolone, P < 0.05, effect size of 2.65; n = 9; power of 99%, which was the main outcome of the project. We also identified a significant difference in watts between placebo and prednisolone; however, a post hoc power calculation (G*Power 3.1.9.2; two-tailed dependent t-test, alpha = 0.05) demonstrated that it was underpowered (effect size of 0.41; n = 9; power of 23%). As such, future studies should include a larger sample size to account for an adequate power. Second, due to the metabolic differences between males and females, only males were tested in this study. As such, and acknowledging the differences in reproductive physiology between males and females, the results may not be applicable to females. It will be important to conduct further research in females to identify whether GC treatment has a different effect. Furthermore, the participants in this study were young and healthy, and as such these findings are delimited to this specific population with further research require to explore the effects of GC ingestion and exercise in other populations. Finally, the participants were given access to water ad libitum, it is possible that some of the changes in the haematological outcomes may be due to alterations in plasma volume.
In conclusion, a single dose of prednisolone decreases work performed during high-intensity interval cycling that suggests that acute GC administration does not act as an ergogenic aid, and in fact may reduce exercise performance. Prednisolone also increases basal blood glucose concentrations, impairs Leydig cell function and increases WBC count. Importantly, acute HIIE restores euglycaemia which may indicate HIIE as a potential treatment strategy for counteracting GC-induced hyperglycaemia. Whether HIIE training can reduce the serious metabolic side effects of chronic GC administration warrants further investigation.
Declaration of interest
The authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported.
Funding
This research did not receive any specific grant from any funding agency in the public, commercial or not-for-profit sector.
Tractable Fully Bayesian Inference via Convex Optimization and Optimal Transport Theory
We consider the problem of transforming samples from one continuous source distribution into samples from another target distribution. We demonstrate with optimal transport theory that when the source distribution can be easily sampled from and the target distribution is log-concave, this can be tractably solved with convex optimization. We show that a special case of this, when the source is the prior and the target is the posterior, is Bayesian inference. Here, we can tractably calculate the normalization constant and draw posterior i.i.d. samples. Remarkably, our Bayesian tractability criterion is simply log concavity of the prior and likelihood: the same criterion for tractable calculation of the maximum a posteriori point estimate. With simulated data, we demonstrate how we can attain the Bayes risk in simulations. With physiologic data, we demonstrate improvements over point estimation in intensive care unit outcome prediction and electroencephalography-based sleep staging.
I. INTRODUCTION
Reasoning about data in the presence of noise or incomplete knowledge is fundamental to fields as diverse as science, engineering, medicine, and finance. One natural and principled way to reason with uncertainty is Bayesian inference, where a prior distribution $P_X$ about a random variable $X$ is combined with a likelihood model $P_{Y|X=x}$ and a measurement $y$ to obtain an a posteriori distribution, specified by Bayes' rule:

$p(x|y) = \dfrac{p(y|x)\,p(x)}{\beta_y}, \qquad \beta_y \triangleq \int p(y|x)\,p(x)\,dx. \quad (1)$

The posterior completely represents our knowledge about $X$. In many machine learning architectures, a single point estimate on $X$ is obtained, such as the maximum a posteriori (MAP) point estimate given by

$\hat x_{\mathrm{MAP}} = \arg\max_x\ p(y|x)\,p(x). \quad (2)$

Note that the MAP estimate does not require the calculation of $\beta_y$; moreover, (2) is tractable and can be solved with general-purpose convex optimization solvers when the prior and likelihood are log-concave. Such distributions are very natural, used across many different domains [1], and include most commonly used models (exponential family priors, generalized linear model likelihoods, etc.). However, as shown in Fig. 1, the MAP estimate can be ambiguous: the same MAP estimate can pertain to one posterior with less variance (less information gain) than another. Decision-making using only a point estimate, without taking into account variability, can have adverse effects in many different situations, including system design, fault tolerance, and classification [2].
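As a small illustration of the point about (2) (this is our own example; the prior, likelihood, data and variable names are not from the paper), the MAP estimate under a Gaussian prior and a logistic-regression likelihood is the minimizer of a convex negative log-posterior and can be found with an off-the-shelf solver, with no need for the normalization constant:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
Y = rng.normal(size=(50, 2))                                              # regressors
c = (rng.random(50) < 1 / (1 + np.exp(-Y @ [1.5, -2.0]))).astype(float)  # binary labels
sigma2 = 100.0                                                            # Gaussian prior variance

def neg_log_posterior(x):
    logits = Y @ x
    # log-likelihood of a logistic model plus the log of an isotropic Gaussian prior
    loglik = np.sum(c * logits - np.log1p(np.exp(logits)))
    logprior = -0.5 * np.dot(x, x) / sigma2
    return -(loglik + logprior)

x_map = minimize(neg_log_posterior, x0=np.zeros(2), method="BFGS").x
print(x_map)   # MAP point estimate; beta_y is never computed
```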
In order to perform optimal decision-making, one must calculate conditional expectations of interest for minimizing expected loss and attaining the Bayes risk [3]:

$\mathbb{E}\!\left[f(X)\,\middle|\,Y=y\right] = \int f(x)\,p(x|y)\,dx. \quad (3)$

Information gain quantities, such as

$D\!\left(P_{X|Y=y}\,\middle\|\,P_X\right) = \int p(x|y)\,\log\frac{p(x|y)}{p(x)}\,dx, \quad (4)$

quantify how much a measurement teaches us about $X$ and are relevant for building graphical models to succinctly represent conditional independences in data. Outside of specialized, problem-specific Monte Carlo methods, uncertainty quantification can be tractably and accurately performed when $\dim X$ is small (e.g. credibility intervals) or $\dim Y$ is large (e.g. Laplace's method using Gaussian approximations [9], [10]). Here, we will develop a tractable, general-purpose framework for uncertainty quantification in Bayesian inference within the context of log-concavity.

Fig. 1. Two posterior distributions with the same MAP estimate but different variances. Decision-making using point estimates but without uncertainty quantification can significantly reduce performance as compared to using the full posterior.
A. Related work
The full exploitation of Bayesian inference has been developed in a wide range of problems. Hierarchical Bayesian modeling incorporates priors on the hyper-parameters of the prior distribution to perform high-order statistical modeling and inference [11], [12], [13]. This improves the prediction of a Bayesian model that often depends on the prior distributions. When X is a random process obeying a state-space model (e.g. Markov chain), particle filtering and sequential importance sampling incorporate the dynamics of X [14], [15] to sequentially update posteriors. These sequential Monte Carlo methods recursively compute the relevant probability distributions using ensembles of "particles" and their weights. Nonparametric Bayesian methods, attracting increasing interest, [16], [17], [18], [19], make fewer assumptions on the distributions of interest and allow their parametric complexity to increase with the amount of data acquired.
For the remainder of this manuscript, we will consider the canonical Bayesian inference problem in (1) where X ⊂ R d , P X (x) has a density p(x) with respect to the Lebesgue measure, and the likelihood model is specified in density form as p(y|x).
For a given likelihood p(y|x), in order to obviate the integration in calculating β y in (1), conjugate priors allow for a closed-form expression for the posterior. However, these are very limiting cases and the conjugate prior is often selected only for computational purposes, even if it does not actually represent prior knowledge about X.
Markov chain Monte Carlo (MCMC) methods [20], [21], [22], [23], [24] have enabled widespread attempts at fully Bayesian inference by representing a probability distribution as a set of samples and iterating them through a Markov chain whose invariant distribution is the posterior. Despite the wide adoption, MCMC has a few drawbacks: (a) the convergence rates and mixing times of the Markov chains are generally unknown, thus leading to practical shortcomings like "burn in" periods of discarded samples; (b) the samples generated are from a Markov chain and thus necessarily correlated -lowering effective sample sizes and propagating errors throughout estimates as in (3); (c) many different highly specialized, problem-specific variants of MCMC exist, all of which have their own benefits and challenges [20].
Efficient approximation methods, such as variational Bayes [25], [26], [27] and expectation propagation (EP) [28], [29], [30], have been developed. These offer a complementary alternative to sampling methods and have allowed Bayesian techniques to be used in large-scale applications. Variational methods yield deterministic approximations to the posterior distribution. A particular form that has been used with great success is the factorized one [31]. However, these methods are based upon approximations and thus there is no guarantee that the iterations of the approximation methods will converge, or provide exact results in the limit.
B. Optimal transport framework
Recently, El Moselhy et al. proposed a method to construct a map that pushed forward the prior measure to the posterior measure, casting Bayesian inference as an optimal transport problem [32]. Namely, the constructed map transforms a random variable distributed according to the prior into another random variable distributed according to the posterior. This approach is conceptually different from previous methods, including sampling and approximation methods. Optimal transport theory has a long and deep history dating back to Monge [33], Kantorovich [34], and most recently Villani [35]. This study of measure-preserving maps has its roots and applications in vast areas, spanning resource allocation, dynamical systems, optimal control, and fluid mechanics.
Monge initially stated the "earth movers" problem as finding the cheapest way to move a pile of sand in a specific location and shape, to a different location and shape while maintaining the same volume. While this captures the mass-preserving and optimality criterion of the map, it is worth explaining the effect of the map a little more by way of an example. We can relate the optimal map construction to finding the optimal placement of "pegs" in a Galton board, to transform one distribution of balls into another as shown in Fig. 2. Conceptually, one could imagine searching for a unique beam placement resulting in a different output distribution. For example, Fig. 2 (a) shows the trivial schematic diagram of a map that pushes forward a uniform distribution P to another uniform Q, while Fig. 2 (b) pushes forward a uniform P to a Gaussian distribution Q.
C. Our contribution
The use of an optimal transport map for Bayesian inference was proposed in [32] by minimizing the variance of an operator, but it was a non-convex problem -even for the case of log-concavity of prior and likelihood. In [36], we consider the optimal transport map viewpoint established in [32], but we replace variance minimization with an equivalent approach based upon KL divergence minimization. Remarkably, for the case of log-concave priors and likelihoods, we showed this KL divergence minimization is a convex optimization problem and thus tractable.
The rest of the paper is outlined as follows. In Section II, we first consider a more general problem: transforming samples from one continuous source distribution into samples from another target distribution. We demonstrate with optimal transport theory that when the source distribution can be easily sampled from and the target distribution is log-concave, a KL divergence minimization procedure yields samples from the target distribution with convex optimization. We develop an empirical and truncation approach to computationally approximate the convex problem. We exploit the sub-exponential tail property of log-concave distributions to prove consistency of the proposed scheme in the remainder of this section. In Section III, we demonstrate that fully Bayesian inference (e.g. where the source is the prior and the target is the posterior) is a special case. Remarkably, we show that the tractability criterion becomes log-concavity of the prior and likelihood in $x$: the same criterion for obtaining tractable MAP estimates in (2). This implies that general-purpose frameworks for Bayesian point estimation with convex optimization can be improved to fully Bayesian inference, still with convex optimization. Section IV demonstrates how we can attain the Bayes risk in simulations. With physiologic data, we demonstrate improvements over point estimation in intensive care unit (ICU) outcome prediction and in sleep staging based upon electroencephalography (EEG) recordings. We conclude with a discussion in Section V.
II. GENERAL PUSH-FORWARD THEOREM
Before going into the details, we present a general theorem of an optimal map construction to transform a distribution $P$ to another distribution $Q$. We also informally go through an illustrative example. This general push-forward theorem will be used in the Bayesian inference framework in the next section.

Fig. 3. There are multiple maps $S^*$ that push forward $P$ to $Q$: $\frac{1}{2}x$ and $\frac{1}{2}(2-x)$. An arbitrary map $S$ will always push forward some $\tilde P$ to the target $Q$, but the $\tilde P$ does not need to be the $P$ we started from.
A. Problem setup
We provide a problem setting with notations and definitions relevant to the development of the push-forward theorem, where the latent variable lives in a continuum. For a set $X \subset \mathbb{R}^d$, where $d$ is a positive integer, define the space of all probability measures on $X$ as $\mathcal{P}(X)$. Then the push-forward map is defined as follows.
Definition II.1 (Push-forward). Given $P \in \mathcal{P}(X)$ and $Q \in \mathcal{P}(X)$, we say that the map $S : X \to X$ pushes forward $P$ to $Q$ (denoted as $S\#P = Q$) if a random variable $X$ with distribution $P$ results in $Z = S(X)$ having distribution $Q$.
We say that $S : X \to X$ is a diffeomorphism on $X$ if $S$ is invertible and both $S$ and $S^{-1}$ are differentiable. Denote the set of all diffeomorphisms on $X$ as $\mathcal{D}$. With this, we have the following lemma from standard probability:

Lemma II.2. Consider a diffeomorphism $S \in \mathcal{D}$ and $P, Q \in \mathcal{P}(X)$ that both have densities $p, q$ with respect to the Lebesgue measure. Then $S\#P = Q$ if and only if

$p(x) = q(S(x))\,\bigl|\det J_S(x)\bigr|, \quad (5)$

where $J_S(x)$ is the Jacobian matrix of the map $S$ at $x$.
Throughout this section it is worth discussing a simple example. Example 1: let $P$ be the uniform distribution on $[0, 2]$, and consider building the transformation $Z = S(X)$ so that $Z \sim Q$ where $Q$ is uniform on $[0, 1]$. The desired transformation is represented by the solid lines in Fig. 3. Note that clearly the maps $S^*(x) = \frac{1}{2}x$ and $S^*(x) = \frac{1}{2}(2 - x)$ transform $P$ to $Q$ and thus satisfy the Jacobian equation (5). One is increasing, the other is decreasing in $x$. Note that there are multiple maps that push $P$ to $Q$. An arbitrary map $S$, e.g., $S(x) = \frac{1}{3}x$, does not necessarily push $P$ to $Q$, but rather pushes some $\tilde P \neq P$ to $Q$, namely, $\tilde P = \mathrm{unif}[0, 3]$. This transformation is represented by the dotted line in Fig. 3.
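A minimal numerical check of Example 1 (our own sketch; the sample size is arbitrary) confirms that $S^*(x) = x/2$ pushes the uniform source on $[0, 2]$ to the uniform target on $[0, 1]$, while $S(x) = x/3$ does not:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 2.0, size=100_000)   # samples from P = unif[0, 2]

z_good = x / 2.0   # S*(x) = x/2 : lands on [0, 1], i.e. the target Q
z_bad = x / 3.0    # S(x)  = x/3 : lands on [0, 2/3]; it is unif[0, 3] that x/3 pushes to Q

print(z_good.min().round(3), z_good.max().round(3))   # ~0.0, ~1.0
print(z_bad.min().round(3), z_bad.max().round(3))     # ~0.0, ~0.667
```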
Given a fixed density $q$ and an arbitrary diffeomorphism $S$, the corresponding Jacobian equation for the induced density $\tilde p_S$ is given by

$\tilde p_S(x) = q(S(x))\,\bigl|\det J_S(x)\bigr|. \quad (6)$

We denote it by $\tilde p_S$ to make clear that given $q$ and $S$, the associated $\tilde P$, which $S$ pushes to $Q$, is functionally dependent upon $S$ through the right-hand side of (6). Note that the left-hand side of (5) involves the true $p$ induced by the optimal $S^*$, whereas the left-hand side of (6) involves the $\tilde p_S$ induced by an arbitrary $S$. Next, we propose an algorithm to find an optimal nonlinear map that satisfies the Jacobian equation in (5) by searching over all possible maps $S$ that push forward some $\tilde P_S$ to $Q$ given by (6). Finding a map $S^*$ that pushes $P$ to $Q$ is equivalent to finding a map $S$ for which the "distance" between $\tilde P_S$ and $P$ is zero. Using KL divergence, this becomes:

$D\!\left(P \,\middle\|\, \tilde P_S\right) = -h(P) - \mathbb{E}_P\!\left[\log \tilde p_S(X)\right],$

where $h(P)$ is the Shannon differential entropy of $P$ [6], which is fixed with respect to $S$. For the remainder of the manuscript, we assume the following:

Assumption 1. $\mathbb{E}_P\!\left[\,\bigl|\log p(X)\bigr|\,\right] < \infty$, i.e. $\log p \in L^1(X, \sigma(X), P)$.

From Jensen's inequality, Assumption 1 implies $|h(P)| < \infty$ and so we have:

(Problem A) $\qquad \max_{S \in \mathcal{D}}\ \mathbb{E}_P\!\left[\log \tilde p_S(X)\right].$

We note the following:

Lemma II.3. $S$ is optimal for Problem A if and only if $S$ pushes $P$ to $Q$.
Proof: By the definition of the Jacobian equation induced by $S$ in (6), $S$ pushes some $\tilde P_S$ to $Q$. Assume $\tilde P_S = P$. Then $D(P\|\tilde P_S) = 0$, and from the non-negativity of KL divergence, we have that $S$ solves Problem A. Now, assume $S$ is optimal for Problem A. Note that the KL divergence $D(P\|\tilde P_S)$ is zero if and only if $P = \tilde P_S$, and thus $S$ pushes $\tilde P_S \equiv P$ to $Q$.

Now we note from Example 1 that in general there can be more than one optimal solution, some of which satisfy $\det(J_S(x)) > 0$, and others for which $\det(J_S(x)) < 0$. Both maps are equally "as good". However, from an optimization viewpoint, the search space is so "rich" that this leads to non-convexity in an optimization problem.
B. A Convex Problem in Infinite Dimensions
We here consider restricting our search to orientation-preserving diffeomorphisms, i.e., ones with positive definite Jacobian:

$\mathcal{D}_+ \triangleq \left\{ S \in \mathcal{D} \;:\; J_S(x) \succ 0 \ \text{for all } x \in X \right\}.$

This eliminates possible maps such as $S^*(x) = \frac{1}{2}(2 - x)$ in Example 1. As such, for any $S \in \mathcal{D}_+$, the Jacobian equation becomes:

$\tilde p_S(x) = q(S(x))\,\det J_S(x). \quad (10)$

This gives rise to the following problem, which simply involves a restriction of the feasible set in Problem A to orientation-preserving maps:

(Problem B) $\qquad \max_{S \in \mathcal{D}_+}\ \mathbb{E}_P\!\left[\log q(S(X)) + \log\det J_S(X)\right].$

With this, we can state the following theorem:

Theorem II.4. Problem B has an optimal solution $S^*$, which is also optimal for Problem A, and thus pushes $P$ to $Q$.
Proof: Since $P$ and $Q$ have densities $p$ and $q$ with respect to the Lebesgue measure, we consider the Monge-Kantorovich problem with Euclidean distance cost [35]:

$\min_{S \,:\, S\#P = Q}\ \mathbb{E}_P\!\left[\lVert X - S(X)\rVert_2^2\right].$

Key properties of its optimal solution $S^*$ include: (i) $S^* \in \mathcal{D}$, and (ii) $S^*(x) = \nabla h(x)$ where $h$ is a strictly convex function (which implies that $J_{S^*}(x) \succ 0$ for all $x \in X$). Thus $S^*$ lies in the feasible set of Problem B. But from the Monge-Kantorovich problem, $S^*$ pushes $P$ to $Q$ and is thus optimal for Problem A. Since the feasible set of Problem A contains the feasible set of Problem B, $S^*$ is optimal for Problem B.

In Example 1, both $\frac{1}{2}x$ and $\frac{1}{2}(2 - x)$ push $P$ to $Q$, but $\frac{1}{2}x$ is increasing and the other is decreasing. Problem B finds a map $S$ whose Jacobian matrix $J_S$ is positive definite (e.g., $S(x) = \frac{1}{2}x$). Remarkably, from Theorem II.4, the restriction to monotonicity suffers no loss in optimality. Moreover, for many problems of interest, it guarantees that finding the optimal solution is tractable:

Theorem II.5. If $q$ is log-concave, then Problem B is a convex optimization problem and its optimal solution is unique.

Proof: Since $\log\det(\cdot)$ is strictly concave over the space of positive definite matrices and since $\log q(\cdot)$ is concave, we have that $\mathbb{E}_P[\log \tilde p_S(X)]$ is a sum of strictly concave and concave functions, which itself is strictly concave in $S$. Thus an optimal solution $S^*$, which exists from Theorem II.4, is unique.
We make the following remark as it relates to other methods involving KL divergence minimization:

Remark 1. Variational Bayes [26] and EP [28] methods are also based on the minimization of a KL divergence. However, the KL divergence minimization is of a reverse kind, is not over a space of maps, and is thus conceptually different. Moreover, these methods build upon deterministic approximations to the posterior, do not guarantee exactness, and in general are non-convex.
Although problem B is convex for log-concave q, note that the feasible set is an infinite-dimensional space of functions, and the objective function involves an expectation (e.g. a d-dimensional integral) with respect to P . Both of these require further effort to be implemented in real computational settings.
C. A Convex Problem in Finite Dimensions via Truncated Basis
Problem B involves maximizing an expectation with respect to $P$ over $\mathcal{D}_+$, an infinite-dimensional space of functions. We here consider representing any $S \in \mathcal{D}_+$ in terms of its (truncated) orthogonal basis representation with respect to $P$. We consider maps of the form:

$S(x) = \sum_{j \in \mathcal{J}} w_j\, \phi^{(j)}(x), \quad (11)$

where $\phi^{(j)}(x) \in \mathbb{R}$ are $d$-variate bases, $w_j \in \mathbb{R}^d$ are basis coefficients, and $\mathcal{J}$ is a set of all possible indices $j$.
One natural way to do this, if $X \subset \mathbb{R}$, is to perform a polynomial chaos expansion (PCE) of the nonlinear optimal map [37], [38], meaning that we select $(\phi^{(j)} : j \ge 1)$ so that they are orthogonal with respect to $P$:

$\mathbb{E}_P\!\left[\phi^{(i)}(X)\,\phi^{(j)}(X)\right] = \delta_{ij}.$

For example, if $X = [-1, 1]$ and $P$ is uniformly distributed, then the $\phi^{(j)}(x)$ are the Legendre polynomials, and if $X = \mathbb{R}$ and $P$ is Gaussian, then the $\phi^{(j)}(x)$ are the Hermite polynomials [37].
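As a brief sketch of how such a $P$-orthonormal basis is built in practice (our own example for a standard Gaussian $P$, using the probabilists' Hermite polynomials normalized by $\sqrt{j!}$), the orthonormality can be verified by Monte Carlo:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

def hermite_basis(x, K):
    """First K probabilists' Hermite polynomials, normalized to be
    orthonormal with respect to the standard Gaussian measure."""
    out = np.empty((K, len(x)))
    for j in range(K):
        coeffs = np.zeros(j + 1)
        coeffs[j] = 1.0
        out[j] = hermeval(x, coeffs) / np.sqrt(factorial(j))
    return out

x = np.random.default_rng(0).standard_normal(200_000)
Phi = hermite_basis(x, 4)
print(np.round(Phi @ Phi.T / len(x), 2))   # approximately the identity matrix
```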
Remark 2. In principle, any basis of polynomials, for which the truncated expansion of functions is dense in the space of all functions on X, suffices. Using the PCE where orthogonality is measured with respect to the prior, means that computing conditional expectations and other calculations can be done only with linear algebra.
When $|\mathcal{J}| = K$, we represent a nonlinear map as

$S_W(x) = W\,\Phi(x), \qquad W \in \mathbb{R}^{d \times K},$

where each $d$-variate basis function $\phi^{(j)}(x)$ can be represented using tensor products as

$\phi^{(j)}(x) = \prod_{a=1}^{d} \psi_{j_a}(x_a),$

where $\psi_{j_a}$ is a univariate polynomial of order $j_a$ and $j \mapsto (j_1, \ldots, j_d)$ enumerates the multi-indices, and thus the polynomial bases are collected as

$\Phi(x) = \left[\phi^{(1)}(x), \ldots, \phi^{(K)}(x)\right]^T.$

Since $J_m(x) = W\,J_\Phi(x)$ need not be positive definite, we use the Euclidean projection, or equivalently the proximal operator of the indicator function of $\mathcal{D}_+$ [39]:

$\Pi_{\mathcal{D}_+}[m] \triangleq \arg\min_{S \in \mathcal{D}_+}\ \mathbb{E}_P\!\left[\lVert S(X) - m(X)\rVert_2^2\right].$

By defining

$\mathcal{D}_+^K \triangleq \left\{ \Pi_{\mathcal{D}_+}\!\left[W\Phi\right] \;:\; W \in \mathbb{R}^{d \times K} \right\},$

we can define the following optimization problem:

(Problem C(K)) $\qquad \max_{S \in \mathcal{D}_+^K}\ \mathbb{E}_P\!\left[\log q(S(X)) + \log\det J_S(X)\right].$

We now show that $\mathcal{D}_+^K \subset \mathcal{D}_+$ is dense in $\mathcal{D}_+$:

Theorem II.6. If $q$ is log-concave, then Problem C(K) is a finite-dimensional convex optimization problem with a unique optimal solution, which we denote as $S_K$. Moreover:

$\lim_{K \to \infty} D\!\left(P \,\middle\|\, \tilde P_{S_K}\right) = 0.$

Proof: Define $S_\infty$ to be the unique solution to Problem B. As such, the random variable $Z = S_\infty(X)$ is drawn according to $Q$, which is log-concave. It is well known that any log-concave random variable satisfies the sub-exponential tail property:

$P\!\left(\lVert Z\rVert_2 > t\right) \le C\, e^{-ct} \quad \text{for some constants } C, c > 0.$

This is a sufficient condition [38, Theorem 3.7] for the tensor products of the generalized polynomial chaos $\Phi_1(x), \Phi_2(x), \ldots$ to be dense in $L^2(X, \sigma(X), P)$, implying:

$\lim_{K \to \infty}\ \mathbb{E}_P\!\left[\lVert S_\infty(X) - W_K\,\Phi(X)\rVert_2^2\right] = 0 \quad (19)$

for appropriately chosen coefficients $W_K$. Now define $\tilde S_K \triangleq \Pi_{\mathcal{D}_+}\!\left[W_K\Phi\right]$ (21) and note that

$\mathbb{E}_P\!\left[\lVert \tilde S_K(X) - S_\infty(X)\rVert_2^2\right] = \mathbb{E}_P\!\left[\lVert \Pi_{\mathcal{D}_+}[W_K\Phi](X) - \Pi_{\mathcal{D}_+}[S_\infty](X)\rVert_2^2\right] \quad (22)$
$\le \mathbb{E}_P\!\left[\lVert W_K\Phi(X) - S_\infty(X)\rVert_2^2\right] \quad (23)$
$\to 0, \quad (24)$

where (22) follows because $S_\infty(X) = \Pi_{\mathcal{D}_+}[S_\infty](X)$ (since $S_\infty \in \mathcal{D}_+$) and from (21); (23) follows from the firm non-expansive property of the proximal operator [39]; and (24) follows from (19). Thus from (24), we have that $\tilde S_K(X) \to_{L^2} S_\infty(X)$. Note that $\log q(S_\infty(X)) + \log\det J_{S_\infty}(X) = \log p(X)$, which from Assumption 1 lies in $L^1(X, \sigma(X), P)$. In addition, $\log q(\cdot)$ is concave (and thus continuous), and $\log\det(\cdot)$ is concave (and thus continuous) over the space of positive definite matrices. Since $L^2$ convergence implies $L^1$ convergence, we have:

$\mathbb{E}_P\!\left[\bigl|\log \tilde p_{\tilde S_K}(X) - \log p(X)\bigr|\right] \to 0.$

Since $L^1$ convergence implies convergence in distribution, we have that $D\!\left(P\,\middle\|\,\tilde P_{\tilde S_K}\right) \to 0$. But since $\tilde S_K \in \mathcal{D}_+^K$ and $S_K$ is the optimal solution to Problem C(K), we have that $D\!\left(P\,\middle\|\,\tilde P_{S_K}\right) \le D\!\left(P\,\middle\|\,\tilde P_{\tilde S_K}\right) \to 0$.
D. Stochastic Convex Optimization in Finite Dimensions
Note that the expectation in Problem C(K) is given as

$\mathbb{E}_P\!\left[\log q(S(X)) + \log\det J_S(X)\right],$

which cannot in general be evaluated in closed form for any $S$. But since this is an expectation with respect to $P$, we can define a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ pertaining to i.i.d. samples $(X_1, X_2, \ldots)$ drawn from $P$. We define $P_n$ as the empirical distribution on $(X_1, \ldots, X_n)$ and then consider the empirical expectation:

$\frac{1}{n}\sum_{i=1}^{n}\left[\log q(S(X_i)) + \log\det J_S(X_i)\right].$
This gives rise to a stochastic optimization problem:

(Problem D(K,n)) $\qquad \max_{S \in \mathcal{D}_+^K}\ \frac{1}{n}\sum_{i=1}^{n}\left[\log q(S(X_i)) + \log\det J_S(X_i)\right].$

Since this problem only involves $S(x)$ and $J_S(x)$ evaluated at $(X_1, \ldots, X_n)$, we can solve this with a multi-step procedure. First, solve the finite-dimensional convex program

$W^*_{K,n} = \arg\max_{W \in \mathbb{R}^{d\times K}\,:\, W J_\Phi(X_i) \succ 0,\ i=1,\ldots,n}\ \frac{1}{n}\sum_{i=1}^{n}\left[\log q\!\left(W\Phi(X_i)\right) + \log\det\!\left(W J_\Phi(X_i)\right)\right]. \quad (25)$

Given $W^*_{K,n}$, we employ the proximal operator,

$S^*_{K,n} \triangleq \Pi_{\mathcal{D}_+}\!\left[W^*_{K,n}\,\Phi\right], \quad (26)$

so that $S^*_{K,n} \in \mathcal{D}_+^K$.

Remark 3. Note that Problem D(K,n) is only tractable if generating i.i.d. samples from $P$ is tractable. Luckily, $P$ in many situations is a well-defined distribution that can be sampled from easily, such as Gaussian, exponential family, uniform, or sum of Gaussians. More generally, if $p(x)$ is log-concave, then we can do as follows: simply first solve an auxiliary Problem D(K,n) with $\tilde P$ = uniform or $\tilde P = \mathcal{N}(0, \Sigma)$, or any other distribution we can easily sample from. Then define $\tilde Q = P$. By implementing Problem D(K,n), we will generate a map $\tilde S$ that can transform i.i.d. samples from $\tilde P$, which is easy to sample from, into i.i.d. samples from $P$. From here, we can move forward and implement Problem D(K,n) to generate a map $S$ to push $P$ to $Q$.
This optimization problem can be implemented with convex optimization software such as CVX in MATLAB, CVXPY in Python, etc. Here, we implemented the algorithms with CVX [40] in MATLAB.
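As a hedged sketch of what the finite-sample problem (25) looks like in code (a one-dimensional toy with a standard Gaussian source $P$, an orthonormal Hermite basis, and a shifted-Gaussian target $q$; the basis, target, sample size and variable names are our own choices and not the authors' implementation, which used CVX in MATLAB), CVXPY can express the objective directly because log-concavity of $q$ keeps the problem a disciplined convex program:

```python
import cvxpy as cp
import numpy as np
from numpy.polynomial.hermite_e import hermeval, hermeder
from math import factorial

rng = np.random.default_rng(0)
n, K = 400, 6
x = rng.standard_normal(n)                      # i.i.d. samples from P = N(0, 1)

# Orthonormal (probabilists') Hermite basis and its derivative at the samples
Phi = np.array([hermeval(x, np.eye(K)[j]) / np.sqrt(factorial(j)) for j in range(K)])
dPhi = np.array([hermeval(x, hermeder(np.eye(K)[j])) / np.sqrt(factorial(j)) for j in range(K)])

w = cp.Variable(K)                              # d = 1, so W is a row vector
S = w @ Phi                                     # S(x_i) = w Phi(x_i)
dS = w @ dPhi                                   # J_S(x_i) = w Phi'(x_i), required > 0

log_q = -0.5 * cp.square(S - 2.0)               # toy log-concave target: N(2, 1)
objective = cp.Maximize(cp.sum(log_q + cp.log(dS)) / n)
prob = cp.Problem(objective, [dS >= 1e-6])      # positivity only at the sample points, as in (25)
prob.solve()
print(w.value)                                  # coefficients of the approximate transport map
```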
Theorem II.7. The map $S^*_{K,n}$ as defined in (26) is the unique optimal solution of Problem D(K,n) and, moreover,

$\lim_{K\to\infty}\,\lim_{n\to\infty}\, D\!\left(P \,\middle\|\, \tilde P_{S^*_{K,n}}\right) = 0 \quad P\text{-a.s.} \quad (27)$

Proof: The uniqueness of the optimal solution follows from strict concavity of the problem. That $S^*_{K,n}$ is the optimal solution follows because $S^*_{K,n}(X_i) = W^*_{K,n}\Phi(X_i)$ and $J_{S^*_{K,n}}(X_i) = W^*_{K,n} J_\Phi(X_i) \succ 0$ for $i = 1, \ldots, n$, because the positive definite constraint was imposed on $W^*_{K,n}$ in the feasible set of the problem in (25). Thus $S^*_{K,n}$ corresponds to an optimal solution of the optimization problem defined in (25), which is a relaxation to Problem D(K,n): they have the same objective but the feasible set of the problem in (25) contains the feasible set in D(K,n). Since $S^*_{K,n}$ solves the relaxation, it solves the original problem.

Now, we can prove (27). Note that for any $W, W' \in \mathbb{R}^{d\times K}$,

$\mathbb{E}_P\!\left[\lVert \Pi_{\mathcal{D}_+}[W\Phi](X) - \Pi_{\mathcal{D}_+}[W'\Phi](X)\rVert_2^2\right] \le \mathbb{E}_P\!\left[\lVert W\Phi(X) - W'\Phi(X)\rVert_2^2\right] \quad (28)$
$= \lVert W - W'\rVert_F^2, \quad (29)$

where (28) follows from the firm non-expansive property of the proximal operator [39]; and (29) follows from the fact that $\Phi$ is a $P$-orthonormal matrix. Now since $W, W' \in \mathbb{R}^{d\times K}$, from (29) we have that the parametrization $W \mapsto \Pi_{\mathcal{D}_+}[W\Phi]$ is continuous, and thus $\mathcal{D}_+^K$ is locally compact. Thus we can exploit the strict concavity of Problem D(K,n) over a locally compact constraint set and Assumption 1 to conclude [41, Theorem 3.1] that $\mathbb{P}\!\left(\lim_{n\to\infty} S^*_{K,n} = S^*_K\right) = 1$. With this, we have that $\lim_{n\to\infty} D\!\left(P\,\middle\|\,\tilde P_{S^*_{K,n}}\right) = D\!\left(P\,\middle\|\,\tilde P_{S^*_K}\right)$ $P$-a.s.
By taking an outer limit in K, we complete the proof.
III. OPTIMAL TRANSPORT AND BAYESIAN INFERENCE
In this section, we apply the general push-forward theorem to the context of Bayesian inference. We demonstrate that for a large class of prior and likelihood families, an optimal map can be efficiently constructed through convex optimization.
We assume that X ∈ R d is drawn a priori according to P X , which has a density p X (x) with respect to Lebesgue measure. We define the likelihood function in density form as p Y |X (y|x). Having observed Y = y, the posterior distribution P X|Y =y has a density given by p X|Y =y (x|y), determined by Bayes' rule in (1). In general, the calculation of β y is intractable, especially when d is in high dimension or p X (x) is not a conjugate prior for p Y |X (y|x). Several methods have been developed to bypass its calculation or approximate different functions of the posterior. Typical approaches to perform this are Monte Carlo methods [24]. MCMC methods are families where samples are drawn from a Markov chain, whose invariant distribution is that of the posterior. As described in Section I, one problem of these methods is that because samples are drawn from a Markov chain, they are necessarily statistically dependent; so the law of averages kicks in more slowly. Also, efficient MCMC methods are typically tailored to the specifics of the prior and likelihood and as such, lack generality. In addition, many natural situations require the calculation of β y , such as information gain calculations mentioned in (4).
A. Problem Setup
To make use of the general push-forward theorem, we set $P = P_X$ as the prior distribution and $Q = P_{X|Y=y}$ as the posterior distribution. Then we can find a diffeomorphism $S^*_y$ for which $S^*_y \# P = Q$, or equivalently $S^*_y \# P_X = P_{X|Y=y}$. In this Bayesian inference framework, we denote the optimal map $S^*$ as $S^*_y$ with a subscript $y$, since the map depends on each observation $y$. The Jacobian equation for orientation-preserving maps (10) within the context of Bayes' rule (1) becomes

$p_X(x) = \frac{p_{Y|X}\!\left(y\,\middle|\,S_y(x)\right)\, p_X\!\left(S_y(x)\right)}{\beta_y}\,\det J_{S_y}(x). \quad (30)$

Next, we (a) exchange $p_X(x)$ in the left-hand side of (30) with $\beta_y$ in the right-hand side in order to put all the $x$-dependent terms on the right-hand side, and (b) take the logarithm and use an arbitrary map $S_y(x)$, to define the operator $T : \mathcal{D}_+ \times X \to \mathbb{R}$ as in [32]:

$T(S_y, x) \triangleq \log p_{Y|X}\!\left(y\,\middle|\,S_y(x)\right) + \log p_X\!\left(S_y(x)\right) + \log\det J_{S_y}(x) - \log p_X(x). \quad (31)$

Using (31), we can now state (30) equivalently as follows.
Lemma III.1. A diffeomorphism $S^*_y$ satisfies $S^*_y \# P_X = P_{X|Y=y}$ if and only if $T(S^*_y, x) \equiv \log\beta_y$ for all $x \in X$. (32)
Note that this encodes a variational principle. The left-hand side of (32) is allowed to vary with $x$. But for $S^*_y$, at any $x$, $T(S^*_y, x)$ takes on the same value. Thus this suggests a particular problem formulation [32]:

$S^*_y = \arg\min_{S_y \in \mathcal{D}_+}\ \operatorname{Var}_{P_X}\!\left[T(S_y, X)\right]. \quad (33)$
Remark 4.
Attempting to solve this problem is in general computationally intractable. If we approximate a diffeomorphism decision variable S y using a truncated basis expansion (e.g. PCE), the above problem is still non-convex for log-concave priors and likelihoods. This follows because under log concavity, T (S, X) is concave in S, but quadratic functions of differences of concave functions are in general non-convex.
Our insight is to abandon the approach espoused in [32] and instead focus on a subset of common problems with log-concave structure, using an alternative KL divergence based criterion. This leads to a computationally tractable algorithm via convex optimization.
B. A convex problem
We now show that for many natural priors and likelihoods we can efficiently find a diffeomorphism $S^*_y$, for which $S^*_y \# P_X = P_{X|Y=y}$, using the alternative optimality criterion described in Section II. Consider any other diffeomorphism $S_y(x)$ that induces some $\tilde P_X$ whose density is denoted as $\tilde p_X(x)$. From the modified Jacobian equation (30):

$\tilde p_X(x) = \frac{p_{Y|X}\!\left(y\,\middle|\,S_y(x)\right)\, p_X\!\left(S_y(x)\right)}{\beta_y}\,\det J_{S_y}(x). \quad (34)$

By careful inspection of (34) and (31), we then have that

$\log \tilde p_X(x) = T(S_y, x) + \log p_X(x) - \log\beta_y. \quad (35)$

So if a diffeomorphism $S^*_y$ satisfies $S^*_y \# P_X = P_{X|Y=y}$, then $p_X(x) = \tilde p_X(x)$, which means

$T(S^*_y, x) = \log\beta_y \quad \text{for all } x \in X. \quad (36)$

Thus we find a map to minimize the KL divergence between $P_X$ and $\tilde P_X$, which is given by

$D\!\left(P_X \,\middle\|\, \tilde P_X\right) = \mathbb{E}_{P_X}\!\left[\log p_X(X) - \log \tilde p_X(X)\right] = \log\beta_y - \mathbb{E}_{P_X}\!\left[T(S_y, X)\right].$

This suggests the following optimization problem, equivalent to Problem B in Section II for $P = P_X$ and $Q = P_{X|Y=y}$:

$S^*_y = \arg\max_{S_y \in \mathcal{D}_+}\ \mathbb{E}_{P_X}\!\left[T(S_y, X)\right]. \quad (37)$

Also note that once we have solved for $S^*_y$ in (37), we can in addition obtain $\beta_y$ by evaluation of the $T$ operator using any $x \in X$ in (32). This is fundamentally the optimization problem we aim to solve for Bayesian inference. By phrasing the inference problem as an optimal transport problem along with the natural assumption of log-concavity of the prior and likelihood, we can create a computationally efficient method to carry out Bayesian inference via convex optimization.
C. Implementation
Applying the PCE in (11) to approximate the function $S(x)$, we re-define $T(S, x)$ in (31) as

$\hat T(W, x) \triangleq \log p_{Y|X}\!\left(y\,\middle|\,W\Phi(x)\right) + \log p_X\!\left(W\Phi(x)\right) + \log\det\!\left(W J_\Phi(x)\right) - \log p_X(x). \quad (38)$

By truncating the PCE and approximating the expectation by a weighted sum of i.i.d. samples, we finally arrive at the computationally tractable convex inference problem for Bayesian inference:

$W^* = \arg\max_{W \in \mathbb{R}^{d\times K}\,:\, W J_\Phi(X_i) \succ 0,\ i=1,\ldots,N}\ \frac{1}{N}\sum_{i=1}^{N} \hat T(W, X_i), \quad (39)$

where $X_1, X_2, \ldots, X_N$ are i.i.d. samples drawn from $P_X$.
Lemma III.2. If p X (x) is log-concave and p Y |X (y|x) is log-concave in x, then p X|Y =y (x|y) is log-concave and the problem (39) is a convex optimization problem.
Proof: That $p_{X|Y=y}(x|y)$ is log-concave in $x$ is trivial: it follows directly from the assumption that $p_X(x)$ and $p_{Y|X}(y|x)$ are log-concave in $x$, along with Bayes' rule (1). As for showing that $\hat T(W, x)$ is concave in $W$, this follows from (i) the assumption that $p_X(x)$ and $p_{Y|X}(y|x)$ are log-concave in $x$; and (ii) that concavity is preserved under affine transformations ($W \mapsto W\Phi(x)$, $W \mapsto W J_\Phi(x)$) [42]. As for the feasible set, a set of matrices satisfying an affine positive definite constraint is convex [42].
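The practical payoff of (32) and (39) is that, once the coefficient matrix is solved for, posterior samples are obtained by pushing prior samples through the fitted map and the normalization constant follows from a single evaluation of the $T$ operator. The following minimal sketch shows only the plumbing: W_star, the basis, and the Gaussian model below are made-up stand-ins rather than the solution of (39), so T_hat here is not actually constant.

```python
import numpy as np

# Stand-ins: an affine 1-D map x -> 1 + 0.5x with basis [1, x], a N(0,1) prior and a toy likelihood.
W_star = np.array([[1.0, 0.5]])                                   # d x K with d = 1, K = 2
phi = lambda x: np.array([1.0, float(x)])                         # basis evaluations Phi(x)
jac_phi = lambda x: np.array([[0.0], [1.0]])                      # K x d basis Jacobian
log_prior = lambda z: float(-0.5 * np.sum(np.square(z)))          # log of a N(0, 1) prior
log_lik = lambda z: float(-0.5 * np.sum(np.square(z - 0.8)))      # toy Gaussian log-likelihood

def posterior_sample(x_prior):
    """Transform a prior draw into a (map-induced) posterior draw."""
    return W_star @ phi(x_prior)

def T_hat(x):
    """Evaluate the operator of (31)/(38); at the true optimum it is constant and equals log(beta_y)."""
    z = W_star @ phi(x)
    return log_lik(z) + log_prior(z) + np.linalg.slogdet(W_star @ jac_phi(x))[1] - log_prior(x)

print(posterior_sample(0.3), T_hat(0.3))
```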
IV. RESULTS
This section demonstrates how the proposed method was applied in several examples. Firstly, we tested the accuracy of the proposed method using a conjugate distribution where the closed-form expression of the posterior is known. Next, we considered several Bayesian inference problems using simulated and real data. Table I summarizes the basic settings of these problems in terms of observations, unknown parameters, likelihoods, and priors. As described, we considered Gaussian likelihoods with a sparse (Lapacian or exponential) prior; and logistic regression likelihoods with a Gaussian prior. We considered many fully Bayesian scenarios, including sampling from the posterior, estimating Bayesian credible regions, and risk minimization. We compare performance to point estimation counterparts by comparing expected losses and by comparing receiver operating characteristic (ROC) curves. For the sake of brevity, we denote the densities, p X (x), p Y |X (y|x), and p X|Y =y (x|y) as p(x), p(y|x), and p(x|y), respectively.
A. Conjugate distribution: comparison to closed-form
We demonstrate how accurately the proposed algorithm can construct a map using a conjugate distribution, where the posterior has the same form as the prior and can be expressed in closed form. In this example, we chose a Poisson likelihood function (where $Y = \{0, 1, \ldots\}$) and its corresponding conjugate prior, a Gamma distribution (where $X = [0, \infty)$).
When we have a Poisson likelihood expressed by $p(y|x) = x^y e^{-x}/y!$ and its conjugate Gamma prior expressed by $p(x) = \frac{1}{\Gamma(a)\, b^a}\, x^{a-1} e^{-x/b}$, then the posterior, $p(x|y)$, is again a Gamma distribution, given by

$p(x|y) = \frac{1}{\Gamma(a+y)\,\left(\frac{b}{b+1}\right)^{a+y}}\; x^{a+y-1}\, e^{-x(b+1)/b}. \quad (40)$

That is, the hyper-parameters in the Gamma prior, $a$ and $b$, are changed to $a + y$ and $b/(b+1)$ in the Gamma posterior. In this simulation, we set $a = 2$ and $b = 0.5$, and specified an observation $y = 1$. We then designed multiple maps to push forward the Gamma prior to the Gamma posterior by changing the number of i.i.d. samples, $N$, drawn from the Gamma prior. We chose the maximum order of the polynomial in (11) to be 5, resulting in $K = 6$. We then tested the accuracy of the constructed optimal map. The solid line in Fig. 4 plots the variance of $\hat T(W, x)$ on a log scale. For the optimal parameter $W^*$, $\hat T(W^*, x)$ is almost constant over $x$, so the variance is close to zero. We also computed the KL divergence between the original prior, $P_X$, and the map-dependent prior, $\tilde P_{S^*}$, estimated using the designed map, shown by the dotted line. As shown, the accuracy increases in both cases as we increase the number of i.i.d. samples used for the construction.
B. Bayesian Credible Region
In statistics, point estimates give a single value that serves as the best estimate of an unknown parameter. For example, we can calculate the mean of samples, or estimate the value that maximizes the likelihood or the posterior of the unknown parameter. However, point estimates have several drawbacks, a major one being that they do not provide any measure of uncertainty for the estimate. In many applications it is important to know how reliable our estimate is, and thus it is desirable to calculate an interval estimate (called a Bayesian credible region), within which we believe the unknown population parameter lies with high probability.
The Bayesian credible region is not uniquely defined, and there are several ways to define it: choosing the narrowest region including the mode, choosing a central region where there exists an equal mass in each tail, choosing a highest-probability region, all the points outside of which have a lower probability density, etc. However, it is generally tricky to obtain these credible regions in high dimensions, even though we can compute the posterior distribution. In this paper, we introduce two approaches to compute credible regions using the designed optimal map. First, in order to compute the (1-α) credible region of the posterior where 0 ≤ α ≤ 1, we obtain a region of the prior within which (1-α) of the prior probability mass is contained. We call this region the region with (1-α) confidence. The region of the prior with (1-α) confidence is generally easy to obtain; for instance, for a d-dimensional Gaussian prior with zero mean and variance σ^2, we choose the region with (1-α) confidence as the d-sphere with radius r_α, where r_α satisfies P(||X||_2^2 ≤ r_α^2) = 1 − α. Since ||X||_2^2 ∼ σ^2 χ^2_d, where χ^2_d represents the chi-squared distribution with d degrees of freedom, we can compute r_α^2/σ^2 as the value above which χ^2_d has probability mass α. Then, it is straightforward to compute the (1-α) credible region of the posterior distribution by transforming this d-sphere through the designed optimal map.
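As a small worked example of this construction (our own sketch, with illustrative parameter values), the radius r_α can be obtained from the chi-squared quantile and checked by sampling from the prior:

```python
# Sketch: radius of the d-sphere containing (1 - alpha) of an isotropic Gaussian prior,
# using ||X||^2 / sigma^2 ~ chi-squared with d degrees of freedom.
import numpy as np
from scipy import stats

d, sigma2, alpha = 2, 100.0, 0.05           # illustrative values (2-d prior, variance 100)
r2 = sigma2 * stats.chi2.ppf(1.0 - alpha, df=d)   # r_alpha^2
print("r_alpha =", np.sqrt(r2))

# Empirical check: the fraction of prior samples falling inside should be ~0.95
X = np.random.default_rng(0).normal(scale=np.sqrt(sigma2), size=(100_000, d))
print(np.mean((X ** 2).sum(axis=1) <= r2))
```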
To help illustrate this approach, we go through an example as follows. Suppose that we have a binary-class dataset as shown in Fig. 7, where a red plus sign represents samples from one class, and a blue circle samples from the other. We denote the samples (or regressors) in the 2-dimensional space by y_i = [y_{i,1}, y_{i,2}]^T, the unknown parameters of the logistic regression by x = [x_1, x_2]^T, and the class label corresponding to the ith sample y_i by c_i ∈ {0, 1}. Then, the logistic regression likelihood function is given by p(y|x) = Π_i p(c_i, y_i|x), where the logistic regression model is p(c = 1, y|x) = e^{x^T y}/(1 + e^{x^T y}) for c = 1, and p(c = 0, y|x) = 1 − p(c = 1, y|x) for c = 0. We model a prior on X using a Gaussian distribution with zero mean and variance 100 to regularize the problem. Although both the prior and likelihood functions are log-concave over X, there is no closed-form expression for the posterior distribution; thus, we constructed an optimal map to transform the prior to the posterior.
The big circle in Fig. 6 (a) illustrates the region with 95 % confidence for the Gaussian prior in 2-d space. This region was obtained using the method described above. Fig. 6 (b) illustrates the Bayesian credible regions obtained by transforming the region with 95 % confidence of the prior in (a) through the designed optimal map. The 95 % credible regions in Fig. 7 (c) and (d) were also obtained in the same manner.
Secondly, we describe another approach to obtain Bayesian credible regions using i.i.d. samples drawn from the posterior. Once we design the optimal map, it is straightforward to generate i.i.d. samples from the posterior by transforming i.i.d. samples drawn from the prior. For example, the scatter plots in Fig. 6 (a) and (b) show 2000 i.i.d. samples drawn from the prior and the posterior, respectively; the samples in (b) were generated by transforming the samples in (a) through the optimal map. We can then find the credible interval for each parameter, within which the (1-α) portion of the samples is contained. The solid vertical and horizontal lines in Fig. 6 (b) represent 95 % central regions, outside of which 5 % of the total samples lie (50 samples in each tail).
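A minimal sketch of this sample-based construction (our own illustration; Z is a placeholder for posterior samples produced by the map) is:

```python
# Sketch: central (1 - alpha) credible interval for each coordinate from posterior samples.
import numpy as np

def central_credible_intervals(Z, alpha=0.05):
    """Z: (n_samples, d) array of posterior samples; returns a (d, 2) array of intervals."""
    lo = np.percentile(Z, 100 * alpha / 2, axis=0)
    hi = np.percentile(Z, 100 * (1 - alpha / 2), axis=0)
    return np.stack([lo, hi], axis=1)

# Example with 2000 dummy 2-d samples standing in for map-transformed posterior samples
Z = np.random.default_rng(1).normal(size=(2000, 2))
print(central_credible_intervals(Z))
```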
Next, using the proposed method we obtained Bayesian credible regions for the two binary-class datasets shown in Fig. 7 (a) and (b), respectively. The dataset in Fig. 7 (a) includes 100 samples from each class, and the dataset in Fig. 7 (b) has 2 samples from each class. Red plus signs represent samples belonging to one class, and blue circles to the other. The MAP estimates of x for both datasets are the same, as illustrated by the dots at (−3.9, 3.9) in Fig. 7 (c) and (d). Although the two datasets differ greatly in sample size and in how the samples are distributed, MAP estimation provides an identical inference of the unknown parameters.
In addition to the point estimate, we obtained the Bayesian credible region as a measure of confidence, as illustrated by the red contours in Fig. 7 (c) and (d). Although we obtained identical MAP estimates from both datasets, we obtained very different credible regions. The dataset in Fig. 7 (a) with 200 samples yields a much smaller credible region than the dataset in Fig. 7 (b) with 4 samples, meaning that we are more confident in the estimate obtained using the dataset in Fig. 7 (a). There is more variability in the direction of quadrant II in both datasets, and of course much more variability for the dataset in Fig. 7 (b). There is less variability in the direction of quadrant IV, since parameters in quadrant IV would switch the classification result.
C. Bayes Risk
So far we have demonstrated how we can compute the posterior probability of an unknown parameter and use it to quantify the degree of uncertainty. In other situations, we perform Bayesian inference for the purpose of taking some action. What action is taken can depend on the estimate of some unknown parameter x ∈ X, our estimate of an unknown label c ∈ C of an observation, etc. Here, we demonstrate how an action can be improved when using the computed posterior as compared to when using point estimates (e.g. MAP).
Suppose that we take some action a ∈ A based on an unobserved parameter x (or a label c) given some observation y ∈ Y. As a general method to measure the performance of this action, we define a loss function l(a, x) (or l(a, c)), which quantifies the quality of the action a as it relates to the true outcome x (or c). It is well known that the procedure to minimize expected loss, attaining the Bayes risk, uses the posterior distribution as follows: a*(y) = arg min_{a ∈ A} ∫_X p(x|y) l(a, x) dx. (41)
Equation (41) tells us how to devise an optimal action, but it is generally not easy to solve because it is difficult to perform computations over the posterior distribution p(x|y). In special cases, there exist corresponding optimal actions for particular loss functions: the MAP estimate for the 0-1 loss, the posterior mean for the squared loss, and the posterior median for the absolute loss. These are special Bayesian estimates for certain loss functions where point estimates provide the optimal action, but for an arbitrary loss function we need to solve the optimization problem in (41), which is usually challenging. We address these challenges using our Bayesian inference method.
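In practice, once posterior samples are available from the designed map, (41) can be approximated by a Monte Carlo average over those samples. The sketch below (ours, with stand-in samples and a finite candidate action set) also checks the textbook fact quoted above that the absolute loss yields the posterior median:

```python
# Sketch: approximate the Bayes action a*(y) = argmin_a E[l(a, X) | Y = y]
# by a Monte Carlo average over posterior samples Z_i.
import numpy as np

def bayes_action(candidate_actions, posterior_samples, loss):
    """Return the candidate action minimizing the sample-average loss."""
    risks = [np.mean([loss(a, z) for z in posterior_samples]) for a in candidate_actions]
    return candidate_actions[int(np.argmin(risks))]

Z = np.random.default_rng(2).gamma(3.0, 0.5, size=5000)   # stand-in posterior samples
actions = np.linspace(0.0, 5.0, 501)
a_star = bayes_action(actions, Z, lambda a, x: abs(a - x))
print(a_star, np.median(Z))                                # should roughly agree
```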
For binary decision problems considered in this paper, we also used the receiver operating characteristic (ROC) curve to visualize the performance across different balances of type-I and type-II errors.
In the following subsections, we demonstrate how our Bayesian inference method can be used to help devise an optimal action given observations using simulation and real data.
D. Simulation
Firstly, we applied this framework to design an optimal action in the context of sparse signal representations using simulated data. Suppose that our observation y ∈ R^m is given as a noisy measurement of a linear forward model. The standard form of this problem is y = MX + e, where X ∈ R^d is the vector of parameters that are sparse, M ∈ R^{m×d} is the matrix of the linear forward model, and e ∈ R^m is noise. This simple model appears in many guises: sparse signal separation, where M is a mixing matrix [43], [44], and sparse signal representation using overcomplete dictionaries, where M is a matrix whose columns represent a possible basis [45], to name a few. In this example, we set m = 3 and d = 3. To impose the sparsity model on X, we assumed that the parameters X are endowed with a Laplace prior, p(x) ∝ exp(−||x||_1/b), and that the noise e is Gaussian with zero mean, e ∼ N(0, Σ_e). We randomly generated the forward model M ∈ R^{3×3} from a Gaussian distribution, and set the variance parameters for the prior and the measurement noise to b = 1/√2 and Σ_e = 0.1I, respectively. Given this setting, we aimed to decide whether each component of the true parameter x was greater in magnitude than a preset threshold τ > 0, based on the computed posterior and on the MAP estimate, respectively. To achieve this, we designed a loss function l(a, x) = Σ_{j=1}^{3} l(a_j, x_j), where l(a_j, x_j) was 1 if we made an incorrect decision, that is, a_j = 1 and |x_j| < τ, or a_j = 0 and |x_j| > τ; otherwise, if the decision was correct, the loss was zero. Here, τ was chosen to contain 95 % of the prior density's mass.
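With this separable 0-1 loss, minimizing a Monte Carlo estimate of (41) reduces to a simple componentwise rule: declare a_j = 1 exactly when the estimated posterior probability that |x_j| exceeds τ is above one half. A hedged sketch of ours (Z stands in for posterior samples from the map; the MAP counterpart is the plug-in rule) is:

```python
# Sketch: Bayes vs. MAP decisions for the componentwise 0-1 threshold loss above.
import numpy as np

def bayes_threshold_decision(Z, tau):
    """Z: (n, d) posterior samples. Set a_j = 1 iff P(|x_j| > tau | y) > 1/2."""
    p_exceed = np.mean(np.abs(Z) > tau, axis=0)
    return (p_exceed > 0.5).astype(int)

def map_threshold_decision(x_map, tau):
    """Plug-in rule using only the MAP point estimate."""
    return (np.abs(x_map) > tau).astype(int)

# Illustration with stand-in Laplace-distributed samples (b = 1/sqrt(2)) in 3-d
Z = np.random.default_rng(3).laplace(scale=1 / np.sqrt(2), size=(5000, 3))
print(bayes_threshold_decision(Z, tau=2.0), map_threshold_decision(np.zeros(3), tau=2.0))
```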
For each simulation, we randomly generated a new M, X, and e. To devise a Bayesian optimal decision, we first approximated the expected loss in (41) as in (3), using posterior samples Z_i drawn from p(x|y) via the designed optimal map. Then we found the action a minimizing the computed expected loss for each simulation. For the MAP decision, we simply performed a*_MAP = arg min_{a ∈ A} l(a, x_MAP), plugging the MAP point estimate into the loss.

Secondly, we applied this framework to the context of logistic regression using the simulated binary-class data y described in the third column of Table I. Logistic regression has been widely used in many applications, since it is usually simple and fast to fit a model to data and the resulting model is easy to interpret. In many applications, it is useful to compute the posterior distribution of the parameters of the logistic regression model. However, there is no conjugate prior, so posterior calculations can be challenging. Although the posterior is often approximated as Gaussian, it can deviate considerably from the true posterior due to asymmetry, as shown in Fig. 7 (d). In this example, we instead computed the posterior distribution for logistic regression using our proposed Bayesian method.
We demonstrated that the computed posterior helped make better decisions than an MAP point estimate in a binary decision problem. We first computed the conditional probability of a label given the observation, p(c|y), and then obtained ROC curves by comparing the computed p(c|y) against varying decision thresholds. The conditional probability is given by p(c|y) = ∫_X p(c|x, y) p(x|y) dx, where p(c|x, y) has the form of a sigmoid function. For the Bayes decision, p(c|y) is approximated by (1/n) Σ_{i=1}^{n} p(c|Z_i, y) using samples Z_i drawn from p(x|y). For the MAP decision, p(c|y) is approximated by p(c|x_MAP, y). We used 10 samples (5 in each class) to compute the posterior and its MAP estimate, and then tested the Bayes and MAP decisions on 100 new samples (50 in each class) generated from the same distribution, plotted in Fig. 8 (a). A Gaussian prior with zero mean and unit variance was used. The ROC curves in Fig. 8 (b) show that the Bayes decision achieved better performance than the MAP decision.
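The class-probability estimates used for these ROC curves can be written compactly; the following sketch (ours, with placeholder arrays) contrasts the posterior-averaged predictor with the MAP plug-in predictor and computes ROC points by sweeping a threshold:

```python
# Sketch: Bayes vs. MAP class probabilities for logistic regression, and ROC points.
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def p_class1_bayes(Y, Z):
    """Y: (m, d) test regressors; Z: (n, d) posterior samples. (1/n) sum_i sigmoid(y^T Z_i)."""
    return sigmoid(Y @ Z.T).mean(axis=1)

def p_class1_map(Y, x_map):
    """Plug-in predictor using the single MAP estimate."""
    return sigmoid(Y @ x_map)

def roc_points(scores, labels, thresholds):
    pts = []
    for t in thresholds:
        pred = scores >= t
        tpr = np.mean(pred[labels == 1])   # true-positive rate
        fpr = np.mean(pred[labels == 0])   # false-positive rate
        pts.append((fpr, tpr))
    return pts

# Tiny demo with random stand-ins for data, labels, and posterior samples
rng = np.random.default_rng(4)
Y, labels = rng.normal(size=(100, 2)), rng.integers(0, 2, size=100)
Z, x_map = rng.normal(size=(500, 2)), np.zeros(2)
print(roc_points(p_class1_bayes(Y, Z), labels, thresholds=np.linspace(0, 1, 5)))
```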
E. Real data
Here, we applied the proposed Bayesian inference framework to real data sets such as EEG recordings for sleep study [46] and physiological measurements from ICU patients.
Firstly, we demonstrate how we estimated the Fourier magnitude representation of EEG recordings for sleep scoring from a Bayesian perspective, and then how we used this representation to improve a sleep monitoring system. Existing methods for sleep scoring analyze the relationship between the activity of frequency bands and sleep stages [47]. They therefore rely on the power spectrum estimate of the EEG recording, but largely ignore how reliable this estimate is. From a statistical viewpoint, the Fourier representation of a signal can be interpreted as the maximum likelihood (ML) estimate under Gaussian noise with zero mean, and it tends to produce many small weights across all frequencies. However, a sleep EEG signal has special time-frequency structure, i.e., its power spectrum tends to concentrate on a certain band or sub-bands depending on the sleep stage, so the standard Fourier-transform-based approach may not always yield an optimal representation for sleep staging. To address this issue, we put a sparse prior on the Fourier magnitude spectrum. By applying an appropriate prior on the average magnitude spectrum, spectral analysis from the Bayesian perspective provides not only a better representation of the power spectrum, but also additional information on our uncertainty about these estimates. This allows for an opportunity to design a better automatic sleep scoring system.
For the Bayesian spectrum analysis we used PhysioNet [48] sleep EEG recordings, which are provided together with hypnograms (graphs representing the sleep stages as a function of time) annotated by sleep experts; we used these hypnograms as the ground truth for the decision test. We divided the full-band EEG signal into 8 sub-bands to characterize the sleep EEG signal in terms of the power in each sub-band. The set of all frequency components in the ith sub-band is denoted F_i for i = 1, 2, ..., 8. The sets F_1, F_2, and F_3 covered the delta (0.5-4 Hz), theta (4-8 Hz), and alpha (8-12 Hz) bands, F_4 and F_5 together covered the beta band (12-35 Hz), and F_6, F_7, and F_8 covered the high-frequency band (35-50 Hz). The sampling frequency was 100 Hz. The problem settings are described in Table I. We then computed the posterior distribution of the averaged magnitudes in each sub-band. Suppose that y_k represents the kth Fourier component of the noisy EEG recording, contaminated by additive Gaussian noise with zero mean and variance σ^2, and x_k represents the Fourier magnitude of the original EEG signal before the noise was added. That is, y_k is a complex number and x_k is a non-negative real number. At the kth frequency bin, the likelihood function for the magnitude spectrum, x_k, is given by a Gaussian distribution with mean |y_k| and variance σ^2, unless the noise is too high [49]. As the prior density, we used an exponential distribution to impose both sparsity and nonnegativity on the averaged spectrum magnitudes. The input was the set of all Fourier components in the 8 sub-bands, denoted y, and the output was the vector of averaged magnitudes in the 8 sub-bands, denoted x = [x_1, · · · , x_8]^T. Assuming independence across the frequency components, the posterior distribution of x given y in (44) is, for x_i ≥ 0, proportional to the product of the Gaussian likelihood terms over all frequency components and the exponential prior. A closed-form representation of the posterior in (44) does not exist; we applied our approach to estimate the relevant posterior quantities. Fig. 9 (a) illustrates examples of the conditional expectation E[X|Y = y], with its 95 % credible interval, for 4 different sleep stages. We next used these posteriors to build decision rules for automatic sleep scoring. Suppose that c represents one of the 4 sleep stages that we need to determine. Our goal is to design an optimal decision rule that minimizes the expected loss in terms of the posterior distribution of c given y, which is given by p(c|y) = ∫_X p(c|x) p(x|y) dx. Based on the characteristics of each sleep stage described in [50], we built a simple model for p(c|x), shown in Table II. We denote the awake, light, deep, and rapid-eye-movement (REM) sleep stages as W, L, D, and R for brevity, respectively. The loss was 3 between W and D; 2 between W and L and between W and R; and 1 between the other stage pairs.
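A compact sketch of the resulting decision rule (ours; the stage-conditional model p(c|x) from Table II is replaced by a placeholder function, and Z stands in for posterior samples of the sub-band magnitudes) is:

```python
# Sketch: expected-loss-minimizing sleep stage from posterior samples of sub-band magnitudes.
import numpy as np

STAGES = ["W", "L", "D", "R"]
LOSS = np.array([[0, 2, 3, 2],   # rows: chosen stage; columns: true stage (W, L, D, R)
                 [2, 0, 1, 1],
                 [3, 1, 0, 1],
                 [2, 1, 1, 0]])

def bayes_sleep_stage(Z, p_stage_given_x):
    """Z: (n, 8) posterior samples; p_stage_given_x(z) returns a length-4 probability vector."""
    p_c = np.mean([p_stage_given_x(z) for z in Z], axis=0)   # p(c|y) by Monte Carlo
    expected_loss = LOSS @ p_c                               # one expected loss per action
    return STAGES[int(np.argmin(expected_loss))]

# Trivial demo with a uniform placeholder for p(c|x)
Z = np.random.default_rng(5).exponential(size=(1000, 8))
print(bayes_sleep_stage(Z, lambda z: np.full(4, 0.25)))
```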
To evaluate our method, we computed the posterior distributions for 200 non-overlapping sliding windows. The sleep stages manually annotated by the sleep experts were provided for every 30 seconds of the EEG recordings [51]. We used only the first 5 seconds of data in each window to make the problem more challenging. We then designed an optimal action a given the observation y for each window to minimize the expected loss in terms of the posterior distribution. Fig. 9 (b) illustrates the histograms of the losses for the Bayes and MAP decisions over the 200 temporal windows. As illustrated, the Bayes decision rule incurred smaller losses than the MAP decision rule.
Next, we applied our framework to develop a risk-prediction system that predicts the survival rates or measures the severity of disease of ICU patients based on physiological measurements. The development of such a system is helpful for clinical decision making, for standardizing research, and for comparing the efficacy of medication or the quality of patient care across ICUs. We used real physiological measurements of ICU patients together with their survival outcomes from PhysioNet [48]. Since the outcome c was binary, we designed the optimal action a for prediction based on the Bayesian logistic regression discussed in the previous subsection. The problem settings are described in the third column of Table I. Using the ICU data, we computed ROC curves for the Bayes and MAP decisions. Firstly, 10 physiological measurements, such as blood urea nitrogen, Glasgow coma score, heart rate, and urine output, for 20 subjects were provided with their survival outcomes; for more details of the physiological measurements, refer to PhysioNet [48]. The survival/non-survival outcome c of each patient was assigned the value 1 or 0. Fig. 10 (a) illustrates, as an example, the scatter plot of two of the 10 input variables for 100 subjects: blood urea nitrogen and Glasgow coma score. Fig. 10 (a) shows significant class overlap for these two features (other feature pairs show similar overlap), suggesting a challenging classification task.
Given the measurements y and labels c for the 20 subjects, we computed the posterior p(x|y) of the unknown parameter x of the logistic regression model and estimated the x_MAP that maximizes p(x|y). Then, physiological measurements for another 100 subjects were provided without their labels. We computed the ROC curves in the same manner as described for the simulation case. Fig. 10 (b) illustrates the ROC curves for the Bayes and MAP decision rules obtained using the 100 subjects. As shown, the Bayes decision provided significantly improved prediction performance over the MAP decision.
V. DISCUSSION
We have proposed an efficient Bayesian inference method based on finding an optimal map that transforms samples from the prior distribution into samples from the posterior distribution. Although El Moselhy et al. [32] proposed the original optimal-maps perspective, their formulation, in terms of minimizing a variance, is in general non-convex and thus computationally intractable. In this setting, we considered an alternative approach, based upon KL divergence minimization, that returns the same optimal solutions. We have also shown consistency results when using finite-dimensional approximations that can be implemented computationally. We have shown that, for the class of log-concave priors and likelihoods, this results in a finite-dimensional convex optimization problem. We emphasize that the class of log-concave distributions is quite large and widely used in various applications [1], and that this is the same convexity condition required for Bayesian point (MAP) estimation. As such, we have shown that from the perspective of convexity, we can "get something for nothing" by going from point estimation to fully Bayesian estimation. Through the optimal map, we demonstrated the ability to perform computations, with multi-dimensional parameters, involving the full posterior, including: constructing Bayesian credible regions, attaining the Bayes risk, drawing i.i.d. samples from the posterior, and generating ROC curves.
Other applications outside of Bayesian inference might benefit from this approach. In Section II, we demonstrated a more general result about transforming samples from P to Q whenever Q is log-concave and P can be easily sampled from. Outside the scope of Bayesian inference (where P is the prior and Q is the posterior), this ability may have applications including, but not limited to, data compression [6] and message-point feedback information theory [52].
Although we have established the convexity of these schemes, further work can be done in developing parallelized optimization algorithms of the kind that modern large-scale machine architectures routinely use for Bayesian point estimation. Characterizing the fundamental limits of the sample complexity of this approach to Bayesian inference would help guide how these architectures may be soundly implemented. Optimizing architectures for hardware implementation, and understanding performance-energy-complexity tradeoffs, will further allow for wider exploration of these methods within the context of emerging applications, such as wearables [53], [54] and the internet-of-things [55].
Impact of chronic systolic heart failure on lung structure–function relationships in large airways
Abstract Heart failure (HF) is often associated with pulmonary congestion, reduced lung function, abnormal gas exchange, and dyspnea. We tested whether pulmonary congestion is associated with expanded vascular beds or an actual increase in extravascular lung water (EVLW) and how airway caliber is affected in stable HF. Subsequently we assessed the influence of an inhaled short acting beta agonist (SABA). Thirty‐one HF (7F; age, 62 ± 11 years; ht. 175 ± 9 cm; wt. 91 ± 17 kg; LVEF, 28 ± 15%) and 29 controls (11F; age; 56 ± 11 years; ht. 174 ± 8 cm; wt. 77 ± 14 kg) completed the study. Subjects performed PFTs and a chest computed tomography (CT) scan before and after SABA. CT measures of attenuation, skew, and kurtosis were obtained from areas of lung tissue to assess EVLW. Airway luminal areas and wall thicknesses were also measured. CT tissue density suggested increased EVLW in HF without differences in the ratio of airway wall thickness to luminal area or luminal area to TLC (skew: 2.85 ± 1.08 vs. 2.11 ± 0.79, P < 0.01; Kurtosis: 15.5 ± 9.5 vs. 9.3 ± 5.5 P < 0.01; control vs. HF). PFTs were decreased in HF at baseline (% predicted FVC:101 ± 15% vs. 83 ± 18%, P < 0.01;FEV1:103 ± 15% vs. 82 ± 19%, P < 0.01;FEF25–75: 118 ± 36% vs. 86 ± 36%, P < 0.01; control vs. HF). Airway luminal areas, but not CT measures, were correlated with PFTs at baseline. The SABA cleared EVLW and decreased airway wall thickness but did not change luminal area. Patients with HF had evidence of increased EVLW, but not an expanded bronchial circulation. Airway caliber was maintained relative to controls, despite reductions in lung volume and flow rates. SABA improved lung function, primarily by reducing EVLW.
Introduction
The heart and lungs are intimately linked, with the disease pathophysiology of one organ system often influencing the other. In heart failure (HF) altered cardiac hemodynamics lead to increased pressure within the lung vasculature contributing to bronchial circulation engorgement and/or a rise in extravascular lung water (EVLW) via increased hydrostatic forces. Concurrently cardiomegaly in HF patients competes with the lungs for space within the thoracic cavity. Reflex-mediated changes (e.g., due to cardiac stretch), biochemical modulators (e.g., natriuretic peptides, angiotensin II), and alterations in receptors (e.g., beta receptors) all potentially impact airway and vascular function of the lungs. HF patients demonstrate both restrictive and obstructive changes in lung function, but there is substantial heterogeneity in the impact of HF on the pulmonary system (Gehlbach and Geppert 2004). This loss of pulmonary function generally manifests as a decrease in forced vital capacity (FVC) and forced expiratory volume in 1 sec (FEV 1 ) as well as other measures of maximal expiratory flow (Ravenscraft et al. 1993;Dimopoulou et al. 1998). These changes in lung function have in turn been shown to contribute to abnormal gas exchange responses, both at rest and during exercise, and subsequently contribute to the symptoms of dyspnea and exercise intolerance, common in this patient population (Johnson 2000).
While previous work has documented specific functional changes to the lungs of HF patients, including decreases in lung volumes and air flows, little work has been done to understand the factors associated with pulmonary congestion and how this in turn alters lung structure and function, especially in more stable disease (Snyder et al. 2006b;Olson et al. 2007). While an increase in EVLW may occur and be causative, swelling of the bronchial circulation may also occur leading to thickening of the airway walls and narrowing of the airway lumen causing reduced function (Cabanes et al. 1989;Ceridon et al. 2009). Previous work has suggested a relationship between the airway luminal area of large to medium-sized bronchioles and FEV 1 in healthy individuals; however, the relationship between structural changes and function in the HF population remains unclear (Coxson et al. 2008).
An important modulator of airway caliber and lung fluid regulation in HF patients is the beta 2 adrenergic receptors (ADRB2). These receptors can be desensitized with chronic elevations in catecholamines or chronic use of a long-acting ADRB2 agonist as seen in asthmatics, though it remains less clear if this occurs in HF (Eaton et al. 2004;Gehlbach and Geppert 2004). In a previous study from our laboratory, we demonstrated that HF patients may have chronic overstimulation of the sympathetic nervous system with evidence for decreased ADRB2 density resulting in altered airway function and reduced lung fluid clearance (Snyder et al. 2006a,b). Stimulating these receptors may lead to improved lymphatic dilation and EVLW clearance as well as a mild influence on airway tone.
The purpose of this study was to determine if HF in stable patients is associated with evidence of chronic congestion and how this in turn might influence airway structure and function in stable HF patients. In addition, we sought to determine if an acutely inhaled ADRB2 agonist would reduce "congestion" and subsequently improve the structure-function relationships. We hypothesized that HF patients would demonstrate swollen airway walls, increased EVLW, or both leading to decreased airway luminal area as compared to healthy controls, particularly in airway generations further from the trachea, leading to the reduction in lung function commonly observed in this population. We also hypothesized acute inhalation of an ADRB2 agonist would reduce swelling of the airway wall as well as EVLW and thus increase airway luminal size and subsequently improve airway function.
Participants
Seventy-one subjects were recruited for the study; however, the full dataset was not obtained on 11 subjects due to scanner availability. Thirty-one subjects with a history of HF and 29 age-and sex-matched controls completed all portions of the study. Heart failure subjects had greater than a 1 year history of disease, left ventricular ejection (LVEF) fraction <40%, New York Heart Association (NYHA) functional class of I, II, or III, and a body mass index (BMI) less than 36 kg/m 2 . Subjects were chosen with a range of clinical severity and either nonischemic or ischemic etiology to observe a spectrum of disease. Control subjects had no history of cardiovascular or pulmonary disease and were current nonsmokers with no or minimal smoking history (<15 pack years). The study protocol was approved by Mayo Clinic institutional review board and all subjects provided written informed consent prior to participation.
Overview of experimental procedures
Experimental procedures were conducted on a single visit day. A complete blood count was assessed to rule out anemia. Spirometric measurements including FVC, FEV 1 , peak expiratory flow (PEF), mean forced expiratory flow between 25% and 75% of the FVC (FEF 25-75 ) were assessed according to standard techniques (Miller et al. 2005). Total lung capacity (TLC) was calculated using helium dilution, and a thoracic CT scan was obtained (see below details) (Brown et al. 1998). Albuterol was administered through a nebulizer at a dilution of 2.5 mg per 3 mL of saline over a 12-15 min period. Following albuterol administration, pulmonary function, TLC, and a second CT scan were obtained 45-60 min after nebulization.
CT scanning, tissue and air volumes, airway segmentation All CT scans were performed on the same scanner (GE LiteSpeed Spiral CT Scanner, GE Healthcare) and obtained using 2.5-mm-thick slices with 1.2 mm overlap and reconstructed to 1.25-mm-thick slices with a 0.6 mm overlap. An initial scout scan was performed to ensure capture of the entire lung volume. The location of the scanner table, field of view, and number images taken were recorded. A mark was made on the subject to ensure alignment between pre and postalbuterol scans. Subjects were instructed to hold their breath at TLC during all scans.
Computed tomography quantitative analysis was carried out using MATLAB (Mathworks, Inc, Natick, MA) software. The lung tissue was automatically segmented from the surrounding tissue using built-in algorithms. Pixels outside the range of −1000 to 0 HU, as well as the airways, were excluded from analysis. Values for the mean, skewness, and kurtosis of the distributions were calculated from the segmented areas (Best et al. 2003). Finally, the ratio of pixels greater than −500 HU was calculated as an index of pulmonary congestion (Kato et al. 1996).
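For illustration, the histogram indices described above can be computed along the following lines (a hedged Python re-expression of the analysis, which was actually performed in MATLAB; the kurtosis convention, excess versus Pearson, is an assumption):

```python
# Sketch: congestion indices from the attenuation values of segmented lung voxels.
import numpy as np
from scipy import stats

def ct_congestion_indices(hu_values):
    """hu_values: 1-D array of Hounsfield units from segmented lung tissue (airways removed)."""
    lung = hu_values[(hu_values >= -1000) & (hu_values <= 0)]   # keep -1000..0 HU
    return {
        "mean_HU": float(np.mean(lung)),
        "skew": float(stats.skew(lung)),
        "kurtosis": float(stats.kurtosis(lung, fisher=False)),  # Pearson convention assumed
        "fraction_above_-500HU": float(np.mean(lung > -500)),
    }

# Example with synthetic attenuation values
print(ct_congestion_indices(np.random.default_rng(0).normal(-800, 80, size=100_000)))
```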
Scans were also submitted to image analysis software (Vida Diagnostics, Coralville, IA) for automated analysis. First, the software quantified the volumes of tissue (Vtis) and air (Vair) within the lungs. The fraction of each voxel that represents air and tissue was calculated from the linear attenuation of CT density from −1000 Hounsfield units (HU) to +55 HU (Hoffman et al. 2006). The average fraction of air and tissue of each voxel is then multiplied by the total volume of the lung to calculate Vair and Vtis, respectively.
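A rough sketch of this partitioning (ours, assuming a simple linear interpolation between pure air at −1000 HU and pure tissue at +55 HU; the commercial software's exact calibration may differ) is:

```python
# Sketch: air and tissue volumes from voxel attenuation under a linear-mixing assumption.
import numpy as np

def air_tissue_volumes(hu_values, voxel_volume_ml):
    hu = np.clip(hu_values, -1000, 55)
    frac_tissue = (hu + 1000.0) / 1055.0        # 0 at -1000 HU (air), 1 at +55 HU (tissue)
    frac_air = 1.0 - frac_tissue
    v_total = hu.size * voxel_volume_ml         # total segmented lung volume
    return frac_air.mean() * v_total, frac_tissue.mean() * v_total   # (Vair, Vtis)

print(air_tissue_volumes(np.random.default_rng(1).normal(-800, 80, size=100_000), 0.001))
```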
The software used a novel segmentation algorithm that has been described elsewhere to allow segmentation further down the airway tree without "segmentation leak" (Tschirren et al. 2005). The software then calculated the area of the airway and thickness of the airway walls for each generation. The algorithm successfully segmented at least 6 generations from the trachea in all subjects and thus additional generations were dropped from analysis. Where specified, the airway areas were normalized to the subject's TLC, and the airway wall thickness was normalized to the airway area at the same generation to account for differences in body size and to show the relative change with respect to lung volume changes.
Statistical analysis
All statistical analysis was carried out using SPSS (SPSS, Chicago, IL). The independent sample t-test was used to compare subject characteristics, pulmonary function, and CT-derived data between HF and healthy populations. The paired sample t-test was used to compare pre to postalbuterol data. A Bonferroni correction was applied for statistical tests across generations. Linear regression and the Pearson correlation coefficient was used to assess the relationship between pulmonary function and lung structure data. All results are expressed as mean AE SD unless otherwise stated. The acceptable level of type I error was defined as P < 0.05.
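As an illustration of the group comparisons across airway generations with the Bonferroni correction (our own sketch with placeholder data, not the study's SPSS workflow):

```python
# Sketch: independent-sample t-tests per generation with a Bonferroni-adjusted threshold.
import numpy as np
from scipy import stats

def compare_generations(hf, control, alpha=0.05):
    """hf, control: (n_subjects, 6) arrays of per-generation measurements."""
    n_tests = hf.shape[1]
    results = []
    for g in range(n_tests):
        res = stats.ttest_ind(hf[:, g], control[:, g])
        results.append((g + 1, res.pvalue, res.pvalue < alpha / n_tests))  # gen, p, significant?
    return results

rng = np.random.default_rng(2)
print(compare_generations(rng.normal(2.0, 0.3, (31, 6)), rng.normal(2.1, 0.3, (29, 6))))
```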
Subject demographics
Seventy-one subjects were recruited for the study. A postalbuterol CT scan could not be obtained in 11 (3 control, 8 HF) subjects due to scanner availability. A complete dataset was obtained for 31 HF patients and 29 age-matched controls. Subject demographics are shown in Table 1. The two groups were well matched for age and height; however, average weight, BMI, and body surface area (BSA) were higher in the HF group compared to the control subjects (P < 0.01). HF patients included those with nonischemic and ischemic etiologies with a range of clinical severities.
Quantitative CT indices of congestion
The distribution of CT attenuation in the lung is shown in Figure 1. Both groups had a similar mean, but the HF group had a heavier right-side tail. Thus, the distribution was wider and had a greater percentage of voxels greater than −500 HU in the HF group. Quantitative CT indices are shown in Table 2. The skew, kurtosis, and percentage of voxels greater than −500 HU were significantly different between the HF and control groups (P < 0.01), suggesting greater levels of EVLW.
Airway wall thickness
Airway wall thickness decreased by approximately 10% in each successive generation, but there was not a difference between the groups at any generation (healthy, generations 1-6, mm: 2.53, 2.16, 1.71, 1.42, 1.22; HF, generations 1-6, mm: 2.62, 2.21, 2.03, 1.81, 1.48, 1.28; P > 0.05). The airway wall thickness as a ratio of the area of that generation increased with increasing generation (Fig. 2). There was no statistical difference between the groups at any generation (P > 0.05).
Pulmonary function
Pulmonary function characteristics are shown in Figure 4 for both groups. Percent predicted FVC, FEV 1 , forced expiratory flow 25-75% (FEF 25-75 ), and peak expiratory flow (PEF) were significantly lower (P < 0.01) in the HF group, while the FEV 1 /FVC ratio was not significantly different.
Structure-function relationships
Linear regression was performed to assess the relationship between airway area and CT quantitative indices and lung function. There were no correlations between CT tissue histogram parameters and pulmonary function at baseline. Pearson correlation coefficients for airway area versus lung function are shown in Table 3. In the control group, generation 2 was significantly correlated with FEV 1 , FVC, FEF 25-75 , and PEF, and PEF was correlated with all generations except generation 3. In the HF group, generations 1, 2, and 4 were significantly correlated with all measures, and PEF was correlated with all generations.

Figure 3. Airway luminal area normalized to TLC for the control (solid diamond) and heart failure (closed square) population at baseline for airway generations 1 (trachea) to 6. Error bars are standard deviation. *P < 0.05 HF mean versus control mean.

Effects of an ADRB2 agonist on lung structure and function

After ADRB2 agonist administration, the CT attenuation distributions for both groups shifted significantly to the left and were narrower relative to baseline (Fig. 1). The mean, skew, kurtosis, full width half max (FWHM), and percentage of voxels greater than −500 HU changed significantly, with similar changes between the groups (P < 0.01). Linear regression for baseline values versus percent change of mean attenuation, skew, kurtosis, FWHM, and percentage of voxels greater than −500 HU showed a statistically significant correlation for all values in both groups, suggesting greater clearance in those with more fluid at baseline (Table 4). Agonist administration increased the absolute size of the wall and decreased the fraction of the wall relative to the area significantly (P < 0.05) in all generations for the HF group and generations 3 through 6 for the control group (Fig. 5A). There was no statistical difference between the groups after agonist administration. After albuterol administration, neither the absolute (not shown) nor the normalized airway areas changed for either group or between groups (Fig. 5B). After albuterol administration (Fig. 5C), FEV 1 , FEV 1 /FVC, and FEF 25-75 improved significantly for the HF group (P < 0.05), and FEV 1 and FEF 25-75 improved for the control group (P < 0.05). The HF group improved more than the control group for FEV 1 and FEF 25-75 (P < 0.05). There was no difference between groups after albuterol for FVC, PEF, and TLC. There were no statistically significant correlations between changes in pulmonary function and CT quantitative indices or airway areas.
Discussion
In this study, we examined measures of lung congestion, airway structure and the relationship between airway structure and lung function in HF patients and in control subjects. In addition, we investigated if an acutely inhaled ADRB2 agonist would help clear lung fluid and as a result modulate airway structure-function relationships. We found evidence for increased EVLW in the HF population; however, no evidence was observed of an engorged or swollen bronchial circulation as inferred by CT quantification of airway walls. In addition, we found relatively well preserved airway luminal areas in the first six generations of airways despite significantly reduced lung function. The inhaled ADRB2 agonist shifted the lung density histogram away from water in both HF and control, had no impact on airway wall thickness or airway luminal areas, and improved lung function. Despite the parallel changes in EVLW and lung function there was no clear relationship between measures of air flow or lung volumes and EVLW clearance. While we were not able to detect a relationship, the increased fluid and decreased air flows and volumes suggests that increased lung fluid may be important in stable HF.
Extravascular lung water
Extravascular lung water (EVLW) is known to alter lung mechanics and could contribute to the loss of function in HF, but it is difficult to quantitatively measure in vivo (Grossman et al. 1980; Esbenshade et al. 1982). Quantitative CT indices, such as the skew and kurtosis of the CT attenuation distribution, have previously been used to study interstitial lung disease and pulmonary edema in HF. In patients with idiopathic pulmonary fibrosis, disease progression is associated with increased CT attenuation mean, decreased skew, and decreased kurtosis, with statistically significant correlations of mean, kurtosis, and skew with FVC and FEV 1 (Hartley et al. 1994). In a previous study of HF patients, histogram analysis showed a decreased CT attenuation mean, and the percent of pixels greater than −500 HU increased for subjects with severe pulmonary congestion (Kato et al. 1996). In this study, we found evidence for more fluid in the lung tissue in HF patients compared to controls despite our patients being stable and optimally managed (Table 2). The CT quantitative indices measure both EVLW and small blood vessels; however, it has been shown that small blood vessel volume is similar in HF patients versus controls, suggesting that these indices are measuring primarily EVLW (Agostoni et al. 2006). Increased EVLW may decrease lung compliance in HF, as has been found in previous studies (Frank et al. 1957). Our laboratory has previously found increased elastic work of breathing during exercise, but not at low levels of ventilation, in stable HF patients, suggesting decreased lung compliance (Cross et al. 2012). While we found a mild increase in EVLW, we did not find a clear relationship between the quantitative CT indices of EVLW and measures of maximal air flow or lung volumes. One explanation may be that the lung may be able to tolerate a mild level of extravascular fluid accumulation without a direct impact on lung function; however, preventing further increases in EVLW accumulation may be important in preventing decompensation. EVLW tends to flow toward the lymph system and avoid gas exchange areas, and the lymphatics in general may be upregulated in the HF population due to a rapid shallow breathing pattern and a rise in circulating catecholamines.
Airway wall thickness and luminal area
Numerous studies have attempted to understand the factors that contribute to loss of pulmonary function in HF. Heart size has been found to be a significant factor accounting for loss of lung volume (Olson et al. 2007). We hypothesized that airway structure would also change with the development of HF and increased EVLW. Specifically, we hypothesized that HF (associated with increased pulmonary wedge pressure), by leading to vascular engorgement of the bronchial circulation within the bronchiole walls, would cause thickening of the airway walls and a subsequent decrease in luminal areas. Previous work has suggested that the bronchial circulation is contained within the airway wall, and that its modulation can improve exercise capacity in individuals with HF (Cabanes et al. 1989; King et al. 2002). Interestingly, we found that HF patients maintained airway wall thickness and luminal areas similar to control subjects, at least through six airway generations from the trachea (Figs 2 and 3). The lack of change in the wall thickness suggests that the bronchial circulation is not expanded in this population. Previous studies in humans and animals after rapid fluid loading have shown increased wall thicknesses and decreased luminal areas, especially in smaller airways (Michel et al. 1986, 1987; Brown et al. 1995; King et al. 2002). However, one study found changes in respiratory bronchioles and bronchioles, but not in bronchi, after rapid fluid loading in dogs, and a study in humans found changes in airway wall thickness and luminal areas in only some large airways after fluid loading of healthy subjects (Michel et al. 1986; Ceridon et al. 2010). In this study, we found relatively mild levels of edema, which may not be enough to cause significant swelling of the airway walls or narrowing of the airway luminal area. Additionally, only generations 0 (trachea) through 6 (mid-sized bronchi) were examined, which are relatively large, primarily cartilaginous airways whose walls may be stiff enough to resist engorgement with increased vascular volumes or compression of the airways from increased interstitial fluid (Voets and van Helvoort 2013).
Airway structure and lung function
Pulmonary function exhibited both restrictive and obstructive changes in HF patients in this study (Fig. 4). Previous studies have shown approximately 20% reductions in FVC, FEV 1 , and FEF 25-75 with the development of HF (Snyder et al. 2006b;Lizak et al. 2009). However, in both the control and HF groups, the airway luminal area in at least one generation was significantly related to each of the four spirometry variables, FEV 1 , FVC, FEF , and PEF at baseline. It is possible that the airway generations most responsible for modulating lung function with disease and after ADRB2 agonist administration are located below the resolution of CT scanning (beyond generation 6); this is discussed in more depth below (see Limitations). Furthermore, the location of the equal pressure point, or the point of airway narrowing or collapse during a maximal expiration, is dependent on the resistance and luminal area of the airways leading to limitations of flow through that segment independent of driving pressure (Mead et al. 1967;Voets and van Helvoort 2013). This study suggests that it is necessary to quantify smaller airways to properly characterize the relationship between airway structure and function.
Effect of an ADRB2 agonist
Albuterol, an ADRB2 agonist, has been shown to bronchodilate and to clear EVLW through alveolar and lymphatic mechanisms (Lauweryns and Baert 1977; Giembycz and Raeburn 1991; Eaton et al. 2004). Lymphatic fluid clearance has been shown to increase rapidly after pharmacological stimulation and in response to exercise in animal models (Coates et al. 1984; Frank et al. 2000). Albuterol effectively cleared fluid from both populations and cleared more fluid in those subjects with more fluid at baseline (Table 4). There was an effect of decreasing the size of the airway wall relative to the luminal area, but no effect on the size of the airways for the generations that could be observed using CT imaging (Fig. 6A). Air flows improved, but not lung volumes, in both groups (Fig. 6C). This improvement in FEF suggests that the effects of albuterol may be on smaller airways than those observed here. We did not find statistically significant correlations between changes in airway structure and lung function from before to after ADRB2 administration. However, these findings are similar to those found in a study involving fluid loading of healthy subjects. Fluid loading decreased pulmonary function, and changes were noted in airway luminal area and wall thickness; however, relationships were not found between the changes in airway structure measurements of these large airways and lung function measurements (Ceridon et al. 2010). While ADRB2 agonists have historically been considered contraindicated in HF, inhaled agonists have not been shown to increase dysrhythmias (Maak et al. 2011). Therefore, an ADRB2 agonist may be an effective method of improving lung function and clearing lung fluid in volume-overloaded HF patients.
Limitations
There are four major limitations of this study. First, the difference in weight between the two groups may have affected the differences in PFTs. However, one study showed that increased body weight is associated with increased spirometric parameters, suggesting that the differences observed are not a result of body weight differences (Omori et al. 2014). Second, all CT measurements were taken at TLC, which may not be representative of functional lung volumes. A previous study in individuals with asthma suggests that lung inflation to TLC can affect the area of the large airways measured here (Brown et al. 2001). However, to limit radiation only one CT image was taken per treatment condition. We chose to measure at TLC to maximize the size of the airways and the ability of the software to properly segment the airway structures (Brown et al. 2001). Third, the resolution of the CT scanning may not be sufficient to detect changes at smaller airway generations. At generation 6, the diameter of the airway is approximately 2 pixels, making the measurement of airway area and luminal area susceptible to partial volume effects. Nevertheless, this study has characterized the more proximal airways that tend to be cartilaginous, which are still an important determinant of maximal air flow (Coxson et al. 2008). Finally, the sensitivity of the automatic airway segmentation algorithm may not properly segment all possible airways. While the automatic segmentation algorithm has been well validated on healthy individuals, it may occasionally miss airway segments or, alternatively, segment nonairway structures, leading to large standard deviations in the measurement of airway area and wall thickness. In order to account for these potential occurrences, we have been careful to visually validate the automatic segmentation to remove significant deviations from structures of interest.
Conclusion
The heart and lungs are intimately linked and during the development and progression of the HF syndrome the pulmonary system undergoes significant changes that in turn contribute to the pathophysiology of the disease through alterations in lung function, breathing pattern, respiratory gas exchange, and ultimately symptomatology. This study examined the structure-function relationships of the pulmonary system in HF patients and the influence of an acutely inhaled ADRB2 agonist; known to dilate airways and stimulate extravascular lung fluid clearance. We determined that EVLW was increased in clinically stable HF patients. However, airway wall thicknesses and airway luminal areas were maintained relative to healthy controls in the large airways studied, despite significant reductions in pulmonary function. An acutely nebulized ADRB2 agonist caused significant clearance of EVLW, but did not change airway wall thickness or luminal area in the large generations, suggesting the importance of lung fluid in stable HF patients, and the possibility of ADRB2 agonists as a treatment in improving lung air flows and volumes and clearing EVLW.
Management of very late peritoneal metastasis of hepatocellular carcinoma 10 years after liver transplantation: Lessons from two cases
Recurrence of hepatocellular carcinoma (HCC) 10 years after liver transplantation (LT) is very rare. Here, we present two cases of peritoneal metastasis of HCC that occurred 10 and 12 years after LT. A 77-year-old male who had undergone deceased-donor LT 10 years earlier showed slow progressive elevation of tumor marker levels over 6 months. Close observation with frequent imaging studies and monthly tumor marker analyses revealed a solitary peritoneal seeding mass. Imaging studies revealed that the mass was highly likely to be metastatic HCC. After excision of the mass, all tumor markers returned to the normal range. Over past 10 months, the patient has received everolimus monotherapy and half-dose sorafenib, and has shown no evidence of HCC recurrence. In the second case, marginally elevated tumor marker levels were detected in a 65-year-old male who had undergone living-donor LT 12 years earlier. After observation for 3 months, follow-up studies revealed a peritoneal seeding mass. Thorough imaging studies revealed that the mass was highly likely to be metastatic HCC. Two mass lesions were excised, and the patient was administered low-dose calcineruin inhibitor, sirolimus, and full-dose sorafenib. Subsequently, the tumor marker levels increased again and growth of new peritoneal seeding nodules was observed; therefore, sorafenib was stopped after 2 years of administration. During 6 years since HCC recurrence diagnosis, the patient has experienced slowly growing tumors, but has been doing well. For very late peritoneal metastasis of HCC, the therapeutic modalities include surgical resection if possible, everolimus monotherapy, and long-term use of sorafenib.
Liver transplantation (LT) is an established treatment
for patients with liver cirrhosis and/or hepatocellular carcinoma (HCC). Patient selection according to institutional eligibility criteria contributes to the reduction of HCC recurrence after LT, but a considerable number of LT recipient deaths are still associated with HCC recurrence. [1][2][3][4][5] HCC recurrence usually happens during the first few years after LT, although very late recurrence has been reported sporadically. Because of the rarity of very late HCC recurrence, occurring at least 10 years after transplantation, 6,7 early detection is difficult and a therapeutic strategy for such tumors has not yet been established.
Currently, there are no clear recommendations or consensuses for the management of recurrent HCC, particularly peritoneal metastasis. Here, we describe two cases of peritoneal metastasis of HCC that occurred 10 and 12 years after LT and discuss a therapeutic strategy for such patients.

Case 1

After slowly rising tumor marker levels were detected (Fig. 2), the patient was closely observed by bimonthly computed tomography (CT) analyses of the chest and abdomen-pelvis, as well as monthly tumor marker tests. After observation for 6 months, a single 2 cm-sized mass was found around the transverse colon (Fig. 1C and 1D). The lesion was visible on the CT scan taken 2 months previously, but we had missed it at that time (Fig. 1C). There was only slight growth of the mass during the 2-month period. A positron emission tomography (PET) scan using [18F]-fluorodeoxyglucose revealed that the mass showed hypermetabolic uptake (Fig. 3). During this work-up, serum levels of AFP and PIVKA-II were gradually elevated. These findings suggested that the mass was likely to be metastatic HCC.
Open laparotomy was performed, and the mass was excised with tumor-negative resection margins. A pathological analysis confirmed that the mass was metastatic HCC. After excision, the patient's tumor marker levels rapidly returned to normal ranges (Fig. 2). Considering his age of 77 years, the patient was prescribed everolimus monotherapy and half-dose sorafenib therapy (200 mg twice per day). Over the past 10 months, he has been doing well and has not shown any serious adverse side-effects or signs of HCC recurrence. In this patient, tumor marker testing is highly diagnostic and the current surveillance protocol comprises tumor marker tests every 2 months and CT scans every 6 months.
Case 2
A 53-year-old male underwent living-donor LT for hepatitis B virus-associated liver cirrhosis and HCC ( Fig. 4A and 4B). Before LT, he did not receive any HCC treatment. The resected liver had a single 1.5 cm-sized HCC without microvascular invasion, and therefore met the Milan criteria. The pretransplant serum AFP level was 16 ng/ml. was not examined at that time. A subsequent CT scan of the abdomen-pelvis was performed after detection of the slow rise in the AFP level and identified a 4 cm-sized mass at the pelvis (Fig. 4D). A PET scan using [ 18 F]-fluorodeoxyglucose revealed that the mass showed hypermetabolic uptake (Fig. 6). At this time, the AFP level was gradually elevated but was still within the normal range (Fig. 5A). These findings suggested that the mass was likely to be metastatic HCC.
Open laparotomy was performed, and two masses were excised with equivocal tumor-negative resection margins ( Fig. 7). A pathological analysis confirmed that the masses were metastatic HCCs. After excision, the patient's AFP level dropped rapidly (Fig. 5A); however, 6 months later, it increased again, although it was still within the normal range. Follow-up CT and PET scans revealed multiple seeding nodules at the pelvis (Fig. 8). The patient underwent treatment with low-dose calcineurin inhibitor, sirolimus, and full-dose sorafenib, and displayed no serious adverse side-effects. Growth of the peritoneal seeding nodules was visualized on follow-up CT scans, and sorafenib therapy was stopped after 2 years of administration.
The patient was switched to everolimus monotherapy because of its Korean National Health Insurance coverage for LT recipients. During the 6 years since HCC recurrence was diagnosed, the patient has shown very slowly growing tumors, alongside elevated AFP levels (Fig. 5B and Fig. 9), but has been doing well without significant deterioration of his quality of life. Recently, he has been hospitalized twice due to deterioration of his general condition. We expect that supportive care will prolong his life further.
DISCUSSION
Although the majority of HCC recurrences happen during the first few years after LT, a small number of patients display very late recurrence, sometimes as late as 10 years after transplantation. Advanced HCC beyond the Milan criteria is often associated with early HCC recurrence; however, like the two cases described here, very late recurrence may first manifest only as a slow rise in the levels of tumor markers. 11 Compared with those that recur early, post-transplant HCCs showing very late recurrence may have less aggressive tumor biology and may be more responsive to locoregional treatments. However, we emphasize that post-transplant HCC recurrence itself is strong evidence of aggressive tumor biology. Thus, the therapeutic strategy for very late HCC recurrence is similar to that for early recurrence. Surgical resection of metastatic lesions is the most effective therapy for recurrent HCC in LT recipients. We previously reported a beneficial effect of surgical metastasectomy of metachronous pulmonary and adrenal metastases from HCCs on patient survival. 12,13 There are only a small number of studies supporting resection of tumors arising from peritoneal seeding of HCC, and resection of peritoneal metastases should only be considered in patients whose primary liver neoplasm is under control and who have no metastases in other organs. [14][15][16] Since there was no evidence of tumor recurrence other than localized peritoneal metastasis in the two cases described here, we decided to perform peritoneal metastasectomy in both patients.
In conclusion, very late peritoneal metastasis of HCC happens sporadically. We suggest that the therapeutic approach for this condition includes local control through surgical resection when possible, everolimus monotherapy, and long-term use of sorafenib.
Polarons in perovskite solar cells: effects on photovoltaic performance and stability
Organic–inorganic hybrid perovskites manifest unique photophysical properties in terms of their long carrier lifetime, low recombination rate, and high defect tolerance, enabling them to be promising candidates in optoelectronic devices. However, such advanced properties are unexpected in perovskite materials with moderate charge mobility. Recent investigations have revealed that these appealing properties were endowed due to the formation of large polarons in the perovskite crystals, resulting from the coupling of photogenerated carriers and a polarized crystal lattice, which largely affected the carrier-transport dynamics and structural stability of perovskite solar cells (PSCs). In this review, first the crystal structure of the perovskite lattice and the formation mechanism of polarons are elucidated. Then, the modulation of polaron states in PSCs, including large polaron stabilization, polaron-facilitated charge transport, hot-carrier solar cells, and polaron-related stability issues such as polaron-induced metastable defects, polaronic strain, and photostriction are systematically investigated. Finally, the prospect of further understanding and manipulating polaron-related phenomena, working toward highly efficient and stable PSCs, is suggested.
Introduction
Emerging organic lead-halide perovskites have captured worldwide research attention since their first inclusion in solid-state photovoltaics in 2012 [1], and recent rapid achievements have established perovskite solar cells (PSCs) as one of the most appealing candidates for the future energy supply because of their high efficiency and relatively low production cost. Studies at the forefront of PSC research have largely been conducted from a materials-science and optical point of view: they have regulated perovskite crystallization, designed charge-transport materials, passivated interfacial and bulk defect states [2][3][4], and managed light propagation and distribution within multilayered devices [5,6] to improve power-conversion efficiency and long-term durability, resulting in efficiencies exceeding 25% with remarkable operational lifetimes [7]. However, in-depth understanding of the photophysical properties of perovskite materials, such as photocharge generation and transport dynamics, has lagged far behind, restricting the further development of PSCs toward efficiencies approaching the Shockley-Queisser limit and the prolonged stability that would fulfill commercial requirements [8,9].
The achievement of superior power-conversion efficiency in PSCs is mainly attributed to the unique optoelectronic properties of perovskite materials, such as long diffusion length (>1 µm) [10] and charge-carrier lifetime (⩾1 µs) [11], low trap defect density (10 9 -10 10 cm −3 ) [12] and exciton-binding energy (2-26 meV) [13], and high defect tolerance [14]. However, these properties are unexpected for materials like perovskites, which have a modest charge-carrier mobility in the range of ∼100 cm 2 V −1 s −1 [15]. Several mechanisms have been established to explain the origin of these attractive photophysical properties in perovskite materials [16]. Notably, perovskites are considered a type of ferroelectric material because of the rotational dipolar organic cations, and spontaneous polarization of the perovskite lattice from the disordered phase to the ordered phase can occur under a critical temperature [17,18]. The as-formed built-in electric field from lattice polarization can promote the spatial separation of electrons and holes and screen their Coulomb interaction in the ferroelectric domains, leading to suppressed charge recombination and enhanced carrier lifetime [19,20]. The presence of a ferroelectric phase in perovskite materials has recently been confirmed by direct experimental evidence [21]. In addition, owing to the constituent heavy atoms (i.e. Pb, I), perovskite materials manifest strong spin-orbit coupling [22], which can facilitate spin splitting in the band edges and largely affect their photophysical properties [23,24]. In particular, Rashba spin splitting allowed for the formation of spin-forbidden recombination channels or an indirect bandgap in perovskite materials [25], resulting in a long charge-carrier lifetime and slow carrier recombination [24,26]. However, the above mechanisms cannot explain the moderate mobility in perovskite materials.
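As a quick consistency check on the numbers quoted above, the diffusion length can be related to the mobility and carrier lifetime through the Einstein relation, L_D = sqrt(µ k_B T τ / e). The short Python sketch below evaluates this relation for illustrative values drawn from the ranges cited in this paragraph (µ ≈ 100 cm² V⁻¹ s⁻¹, τ ≈ 1 µs at room temperature); these are representative order-of-magnitude inputs, not data from any specific study.

```python
# Consistency check: carrier diffusion length from mobility and lifetime via
# the Einstein relation, L_D = sqrt(D * tau) with D = mu * k_B * T / e.
# The parameter values are illustrative, taken from the ranges quoted in the
# text, not measurements of any particular sample.

k_B = 1.380649e-23   # Boltzmann constant, J/K
e = 1.602176634e-19  # elementary charge, C

def diffusion_length_um(mu_cm2_per_Vs, tau_s, T=300.0):
    """Return the carrier diffusion length in micrometres."""
    mu = mu_cm2_per_Vs * 1e-4        # cm^2 V^-1 s^-1 -> m^2 V^-1 s^-1
    D = mu * k_B * T / e             # diffusion coefficient, m^2/s
    return (D * tau_s) ** 0.5 * 1e6  # diffusion length, µm

# A modest mobility (~100 cm^2/Vs) combined with a ~1 µs lifetime already
# yields a diffusion length well above 1 µm, consistent with the values above.
print(f"L_D ≈ {diffusion_length_um(100, 1e-6):.1f} µm")
```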
Recently, the coexistence of moderate mobility and low recombination rates in perovskite materials has been rationalized via the concept of polarons, which refers to electron-phonon coupling [27][28][29][30]. It has been realized that the molecular rotation of the organic cations and the vibration of the soft inorganic sublattice largely affect the charge-carrier transport dynamics. The strong electron-phonon interaction within polarons generates pseudo-free dressed carriers, which can screen the carriers from defect states to prevent nonradiative recombination and trapping, leading to a prolonged charge-carrier lifetime and diffusion length [30][31][32]. Moreover, polarons were also found in conjugated organic semiconductors such as the charge-transport materials used in PSCs, where they can promote charge extraction from the device and hence improve device performance [33,34]. In addition, the formation of polarons has been employed to explain some stability issues in PSCs owing to the generated polaronic gradients and metastable defects, which strongly affect the phase stability and charge transport [35,36]. Therefore, the presence of polarons in PSCs has a great influence on device performance and photostability, which merits further investigation, including understanding their formation mechanism and developing means of polaron manipulation, working toward high-performance PSCs [37,38].
In this review, the fundamental basics of the crystal structure and lattice vibration of perovskite materials are briefly introduced, and the formation mechanism of polarons within perovskites and their effect on charge-transport dynamics are then elucidated. Based on that, the presence of polarons in PSCs and their effect on device performance and stability are systematically reviewed. At the end, prospects on further understanding and modulating polarons in PSCs are proposed.
Crystal structure and lattice vibration of perovskite
Perovskite is a class of crystalline material with a common chemical formula of ABX 3 , as shown in figure 1(a), in which the As are monovalent cations (i.e. formamidinium (FA + ), methylammonium (MA + ), Cs + ), B bivalent metals (i.e. Pb 2+ , Sn 2+ ), and X halide ions (i.e. I − , Br − , Cl − ), respectively. The crystal lattice can be viewed as two interpenetrating sublattices of corner-sharing octahedral BX − 3 and A + cations, in which the BX − 3 framework dominates the electronic configuration of perovskite materials and the cations determine the structural deformation. Specifically, the valence and conduction band of perovskite materials originate from the electronic configuration of the inorganic lead-halide framework, enabling the properties to be comparable to inorganic semiconductors. Moreover, the mechanical properties of perovskite materials are determined by the connectivity of B-X bonds and their interaction with cations, leading to a relatively low Young's modulus and soft nature [39], which is similar to organic semiconductors. Therefore, perovskite materials manifest a crystalline solid and liquid-like behavior [32]. Particularly, the halides can move perpendicularly to the lead-lead axis with a strong anharmonic shape [40][41][42] (figure 1(b)), and the organic cations (e.g. MA + , FA + ) can tumble within the soft BX − 3 cage because of their charge anisotropy (figure 1(c)) [43], largely affecting the local polarization and charge-carrier transport [44]. The structural variation of the crystal lattice can be characterized by Raman or near infrared spectra in a specific frequency region, as shown in figure 1(d) [45].
Polaron size in perovskite
Polarons are defined as a type of charged quasiparticle that is likely to form in polarizable materials because of the coupling of excess charges (i.e. electrons or holes) with ionic vibrations [46]. Specifically, in a perovskite lattice, an injected charge carrier can induce reorientation of the polar A-cations and structural vibration of the BX3− sublattice to minimize the Gibbs free energy of the local lattice, and hence form a polarization cloud that accompanies the carrier as it propagates. According to the spatial size of the polarization cloud, polarons can be classified into small and large polarons, with polaron radii of approximately a single lattice constant and multiple unit cells, respectively [35].

[Figure 1 caption, continued: reproduced from [43], with permission from Springer Nature. (d) Charge-injection-induced structural variation as calculated from the near infrared spectra. Adapted with permission from [45]. Copyright (2019) American Chemical Society.]
Small polarons
Small polarons are short-range carrier-phonon interactions, and their formation in perovskite materials is attributed to the cation rearrangement with dipoles toward Pb atoms and system equilibrium with lattice distortion to maintain the dipole direction [47]. Thus, the polaron-binding energy is determined by the volumetric strain of the inorganic sublattice and rotational degrees of freedom of the A-cations [47], which increase as the lattice structure is distorted [48]. Small polarons transport within perovskite crystals in an incoherent motion via phonon hopping, in which the carrier is partially delocalized by thermally driven atomic distortion. Thereby, the mobility of small polarons is relatively low with typical values smaller than 1 cm 2 V −1 s −1 , which increases as the temperature increases owing to the heat-accelerated lattice distortion [46]. Small polarons are prone to accumulate charges within the perovskite lattice and induce deep-level traps within the bandgap. Moreover, the presence of point defects in the perovskite lattice can further facilitate the formation of small polarons, which largely reduce carrier mobility and promote charge-defect recombination, leading to detrimental effects on the device performance of the corresponding PSCs [49].
Large polarons
Large polarons in perovskite are characterized by a low binding energy (<k_BT) and a large effective mass, such that the charge carrier can overcome the binding energy and move coherently across multiple lattice units. The reorientation of the cations and the oscillation of the inorganic framework within large polarons tend to localize the carrier but also facilitate its transport. In contrast to small polarons, the mobility of large polarons is significantly higher, with values on the order of 10^3 cm^2 V^-1 s^-1, and it decreases as the temperature increases because of thermally enhanced carrier scattering [46]. The strength of electron-phonon coupling within perovskite materials can be described by the dimensionless Fröhlich coupling constant α [50]:

α = (e² / (ħc)) (1/ε_∞ − 1/ε_0) √( m_e* c² / (2 ħ ω_LO) ),

where e is the elementary charge, ħ = h/2π with h the Planck constant, c is the speed of light, m_e* is the effective mass, ω_LO is the angular frequency of the coupled longitudinal-optical (LO) phonon, and ε_∞ and ε_0 are the optical and static dielectric constants, respectively. The values of α were calculated to lie in the range of 1.1-2.7 for perovskite materials, indicating intermediate-to-large polarons [23,29,51].
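To make the Fröhlich expression concrete, the Python sketch below evaluates α for a set of MAPbI₃-like parameters (effective mass, LO-phonon frequency, optical and static dielectric constants) of the kind commonly used in polaron-mobility estimates. The specific input values are illustrative assumptions rather than numbers taken from references [23, 29, 51], but the result falls within the 1.1-2.7 range quoted above.

```python
# Illustrative evaluation of the dimensionless Froehlich coupling constant.
# Written here in SI units, which adds a 1/(4*pi*eps_vac) factor relative to
# the Gaussian-units expression given in the text. The MAPbI3-like inputs
# below are assumed, literature-style values, not data from the cited works.
import math

e = 1.602176634e-19         # elementary charge, C
hbar = 1.054571817e-34      # reduced Planck constant, J s
c = 2.99792458e8            # speed of light, m/s
eps_vac = 8.8541878128e-12  # vacuum permittivity, F/m
m_e = 9.1093837015e-31      # electron rest mass, kg

def froehlich_alpha(m_eff_rel, f_LO_THz, eps_inf, eps_static):
    """Froehlich alpha for a single effective longitudinal-optical mode."""
    m_eff = m_eff_rel * m_e
    omega_LO = 2 * math.pi * f_LO_THz * 1e12      # angular frequency, rad/s
    prefactor = e**2 / (4 * math.pi * eps_vac * hbar * c)
    return (prefactor * (1 / eps_inf - 1 / eps_static)
            * math.sqrt(m_eff * c**2 / (2 * hbar * omega_LO)))

# Assumed inputs: m* ~ 0.12 m_e, f_LO ~ 2.25 THz, eps_inf ~ 4.5, eps_0 ~ 24.1.
alpha = froehlich_alpha(0.12, 2.25, 4.5, 24.1)
print(f"alpha ≈ {alpha:.2f}")  # ~2.4, inside the 1.1-2.7 range cited above
```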
Evidence of polaron formation in perovskite
The formation of polarons in perovskite materials was first confirmed in a MAPbBr3 single crystal by employing time-resolved optical Kerr effect (TR-OKE) spectroscopy [28]. As shown in figure 2(a), the MAPbBr3 exhibited broadband and featureless TR-OKE responses when using nonresonant pump photon energy (1.85 eV), similar to a typical liquid, indicating structural flexibility of the inorganic PbBr3− sublattice. Upon preresonant excitation (photon energy approaching the bandgap), energy-dependent slow responses (>1 ps) were clearly observed (figure 2(a)) because of the coupling of nuclear motion to electronic transitions; these were assigned to Pb-Br-Pb bending within the inorganic sublattice, as confirmed by the low-frequency (<100 cm−1) responses in the corresponding Fourier spectra (figure 2(b)). In contrast, these features disappeared upon above-bandgap (2.30 eV) excitation, indicating the interaction of nuclei and photogenerated carriers (the formation of polarons) [28]. The perovskite lattice structure in the neutral and electron-polaron states was further calculated using density functional theory to study the structural dynamics upon polaron formation [52]. In the polaron state, the A-cations were reoriented with dipoles toward the Pb, and the octahedral PbX3− geometry was also deformed to minimize the Gibbs free energy of the local lattice. As illustrated in figure 2(d), the vibration of the PbX3− sublattice involves skeletal and in-plane bending and stretching along the z-axis of the X-Pb-X bonds, and the most significant structural variations were the elongation of the Pb-X bond lengths and the tilting of the X-Pb-X angles [52]. The vibration of the inorganic sublattice was further confirmed experimentally by time-domain Raman spectroscopy [52] and a terahertz-electromagnetic probe [53].
In order to investigate the effect of different cations on polaron formation, the TR-OKE transient spectra of perovskites with different cations (i.e. CsPbBr3, MAPbBr3, FAPbBr3) were compared, as detailed in figure 2(c). Specifically, CsPbBr3 manifested (1) instantaneous (∼70 fs) and (2) ultrafast (∼140 fs) responses, which were ascribed to the polarization-induced inertial reorientation of the inorganic sublattice. Different from CsPbBr3, two additional long-time responses appeared in MAPbBr3 and FAPbBr3, which were correlated to (3) local-interaction-induced rotational motion (<1 ps) and (4) diffusive rotation (1-2 ps) of the liquid-like molecular cations [32]. Therefore, polaron formation in perovskites depends predominantly on the photocarrier-induced deformation of the inorganic PbBr3− lattice, irrespective of the type of cation, although the cations do modify the formation time in Br-based perovskites [28]. For lead-iodide perovskites (e.g. CsPbI3, FAPbI3, MAPbI3), by contrast, the polaron-formation time is insensitive to cation motion and similar values were measured across compositions [54], while it should be noted that the type of cation affects the polaron mobility because of the interaction of the cation dipole within the polaron [29].
Moreover, perovskite films exhibited high values (∼10^3) of the low-frequency dielectric constant in the dark, which increased linearly with light intensity (figure 3(a)), resulting in a giant dielectric constant (∼10^6) under 1 sun irradiation [55]. This dramatic enhancement of the dielectric response was attributed to photocarrier-induced structural fluctuation of the perovskite lattice, and the enlarged dielectric constant can effectively screen the Coulomb attraction between electrons and holes and facilitate charge transport. Meanwhile, similar to other inorganic semiconductors, the mobility of perovskite materials presents a strong temperature dependence and follows a power law µ ∼ T^(-3/2) (figure 3(b)) [56], which indicates that carrier scattering is dominated mainly by temperature-induced lattice vibrations rather than by defect scattering. These experimental observations are direct hints of the correlation between phonon modes and carrier dynamics [55,56].
In addition, the presence of large polarons can be inferred from measurements of the effective mass and charge mobility. For instance, the effective mass can be estimated from the mobility and the mean-free path through a simple kinetic relation of the form

m* ≈ e²λ² / (3 k_B T µ²),

where m* is the effective mass, e is the elementary charge, λ is the mean-free path, k_B is the Boltzmann constant, T is the absolute temperature, and µ is the carrier mobility. Roughly estimated in this way, the room-temperature effective mass in perovskites comes out tens to hundreds of times (10-300 m_e) heavier than the single-particle band mass (0.1 m_e), suggesting the presence of large polarons [30]. However, such an estimate is contradicted by some numerical evidence as well as by actual experimental measurements of the band mass in perovskites [57][58][59], and the evidence for polaron formation obtained through band masses is still under debate [60]. Indeed, some polaron behaviors have been clearly observed in perovskite materials and can be rationally explained by a classical polaron model. However, prototypical perovskites show unusually large lattice displacements at room temperature because of their relatively low phonon energies compared with typical inorganic crystalline materials (e.g. Si, GaAs), so that not all photophysical properties of perovskites can be interpreted with a standard polaron model [38]. For instance, some recent theoretical work found that mobilities obtained within the Fröhlich polaron model disagree with several experimental findings, including their magnitudes and temperature dependencies [38,61,62]. To explain the unusual charge-carrier transport and light-absorption properties of perovskite materials, the concept of dynamic disorder was introduced, which considers the coupled charge and lattice fluctuations [63].
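A minimal numerical sketch of this kind of estimate is given below, assuming the Drude-type kinetic relation m* ≈ e²λ²/(3 k_B T µ²) (obtained by combining µ = eτ/m*, λ = v_th τ and a thermal velocity v_th = √(3k_BT/m*)); the mobility and mean-free-path inputs are illustrative placeholders rather than values from the cited measurements.

```python
# Rough kinetic estimate of the apparent (polaron) mass from the measured
# mobility mu and an assumed mean-free path lam:
#   mu = e * lam / sqrt(3 * k_B * T * m*)  =>  m* = e^2 lam^2 / (3 k_B T mu^2)
# The input values below are illustrative, not taken from any specific study.
k_B = 1.380649e-23      # Boltzmann constant, J/K
e = 1.602176634e-19     # elementary charge, C
m_e = 9.1093837015e-31  # electron rest mass, kg

def apparent_mass(mu_cm2_per_Vs, mfp_nm, T=300.0):
    """Return the kinetic effective-mass estimate in units of m_e."""
    mu = mu_cm2_per_Vs * 1e-4  # cm^2 V^-1 s^-1 -> m^2 V^-1 s^-1
    lam = mfp_nm * 1e-9        # nm -> m
    m_star = (e * lam) ** 2 / (3 * k_B * T * mu ** 2)
    return m_star / m_e

# A moderate mobility (~10 cm^2/Vs) with a ~10 nm mean-free path gives an
# apparent mass of a few hundred m_e, far above a typical band mass (~0.1 m_e).
print(f"m* ≈ {apparent_mass(10, 10):.0f} m_e")
```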
Polaron-associated carrier dynamics in perovskites
The charge-carrier dynamics in perovskite materials can be analyzed by terahertz time-domain spectroscopy [28,29,64]. As demonstrated in figure 4(a), photogenerated carriers undergo sequential hot-carrier cooling and polaron formation upon above-bandgap excitation, which slows the rise of the photoconductivity compared with resonant excitation [54]. As noted in figure 4(b), the timescale of the photoconductivity rise was around ∼1.5 ps, which indicates that it is governed by carrier thermalization rather than by photocarrier generation, which occurs within hundreds of femtoseconds [65,66].
Moreover, the rise of photoconductivity is independent of temperature upon on-gap excitation, suggesting temperature-independent polaron-formation time (τ pol ). Polaron formation under on-gap excitation is ascribed to thermally driven LO phonons at Debye temperatures, which is below 140 K for tetragonal FAPbI 3 and MAPbI 3 [51,67]. The interaction strength between the carrier and LO phonon is mainly determined by the inorganic Pb-I sublattice. In contrast, the carrier-cooling time (τ cool ) is strongly dependent on excitation energy and temperature upon above-bandgap energy excitation. Particularly, the cooling time increases as the excitation energy increases and the temperature decreases. Moreover, the cooling rates depend on the types of cation from which the high energy optical phonons are generated, following an order of Cs < MA < FA in lead-iodide perovskites [54].
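As an illustration of how such rise times are typically extracted, the sketch below fits a single-exponential rise, Δσ(t) = A(1 − exp(−t/τ)), to a synthetic photoconductivity transient. The generated data and the ~1.5 ps time constant are assumptions chosen to mimic the behavior described above, not the measurements reported in [54].

```python
# Illustrative extraction of a photoconductivity rise time by fitting a
# single-exponential rise to a synthetic THz transient. The noisy data below
# mimic a ~1.5 ps rise; they are not real measurements.
import numpy as np
from scipy.optimize import curve_fit

def rise(t, amplitude, tau):
    """Single-exponential photoconductivity rise after excitation at t = 0."""
    return amplitude * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(0)
t_ps = np.linspace(0.0, 10.0, 200)          # pump-probe delay, ps
true_tau = 1.5                              # assumed rise time, ps
signal = rise(t_ps, 1.0, true_tau) + 0.02 * rng.normal(size=t_ps.size)

popt, pcov = curve_fit(rise, t_ps, signal, p0=[1.0, 1.0])
amplitude_fit, tau_fit = popt
print(f"fitted rise time ≈ {tau_fit:.2f} ps")  # close to the assumed 1.5 ps
```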
Polaron modulation for high-performance PSCs
The power-conversion efficiency of a solar cell is intimately dependent on a multistep charge-carrier dynamic process, including light harvesting, photocarrier generation, carrier transport, and collection at the respective electrodes, in which the loss of above-bandgap photon energy and the nonradiative recombination of oppositely charged carriers (electrons and holes) are the main energy losses restricting solar-cell efficiency. As mentioned above, the formation of large polarons in a perovskite layer can lead to giant dielectric constants and can screen the attractive Coulomb interaction between electrons and holes, resulting in a low recombination rate [55]. Moreover, polarons can slow hot-carrier (HC) relaxation, opening the possibility of extracting high-energy carriers from perovskite materials [69][70][71]. In addition, large polarons have also been observed in conjugated organic materials, which can be used to facilitate charge extraction in PSCs as grain-boundary passivators [34] or as a charge-transport layer [33]. Therefore, the modulation of polarons in perovskites and in organic charge-transport materials is expected to reduce energy losses in PSCs.
Stabilization of large polarons in perovskite layers
It has been established from theoretical predictions and experimental observations that both small and large polarons can be generated in perovskite materials [16]. However, the presence of small polarons in perovskite materials is believed to cause deep-level traps and structural distortion, which is detrimental to the performance of the corresponding PSCs [72]. In contrast, large polarons contribute beneficially to prolonged carrier lifetime and diffusion length, which are expected to promote the efficiency of PSCs [16,46]. Given the experimental evidence, large polarons are more likely to be observed in perovskite single crystals than in perovskite thin films [53], in FA- and MA-based perovskites than in Cs-based perovskites [32], and in Br-based perovskites than in I-based perovskites. Therefore, tuning the chemical composition is a desirable route to stabilizing large polarons in PSCs.
Compositional engineering
As it has been established that the band configurations of perovskite materials are majorly dominated by the inorganic Pb-X sublattice, the A cations have a negligible impact on their optoelectronic properties [4,44]. Recent investigations have found that the A cations, such as their length, dipole moment, etc, can largely affect the formation of polarons [30,54]. For instance, small polarons are prone to form in MAPbI 3 and the magnitude of polaron-binding energy is dependent on the type of cations. Specifically, the binding energies have been calculated to be 1.3 and 0.6 eV for electron and hole polarons in MAPbI 3 , respectively, which are reduced to 0.9 and 0.3 eV in CsPbI 3 because of the smaller dipole moment of Cs + compared to MA + . The polaron-binding energy can be further reduced when employing FA + as A-site cations, which is attributed to the restricted rotation of large FA cations within the lattice [47,73]. Thus, the polaron-binding energy of iodide-based perovskites with different cations follows an order of MA > Cs > FA because of the restricted cationic reorientation in Cs-and FA-based perovskites [73], and the corresponding hole-spin density distribution was calculated, as seen in figures 5(a)-(c), respectively. Therefore, alloying FA and Cs into MA-based perovskites allows the electron and hole polaron-binding energy to be minimized [74], which is ascribed to the reduced lattice symmetry and suppressed cationic reorientation (figure 5(d)), and perovskites with mixed cations exhibit high resistance against structure distortion particular to Pb-I bonds in the octahedral framework [75]. Moreover, Br-based perovskites present tighter polaron binding than that of I-based perovskites, and the substitution of Pb by Sn can further reduce the polaron-binding energy [73].
Polaron mobility is another tunable parameter in perovskite materials via compositional engineering, as it is known that polaron mobility is correlated with carrier scattering time, which is determined by carrier-lattice interaction. The substitution of Pb or I by lighter Sn or Br, respectively, in the inorganic cage can improve the carrier mobility of perovskites due to the increased phonon vibrational frequencies [76,77]. For instance, when the perovskite composition of (FAPbI 3 ) 0.85 (MAPbBr 3 ) 0.15 (denoted as FA0.85) was changed to (FAPbI 3 ) 0.95 (MAPbI 3 ) 0.05 (denoted as FA0.95), the polaron mobility can be enhanced by 30% from 8-15 to 10-18 cm 2 V −1 s −1 [78]. As was analyzed from the terahertz spectrum, the co-substitution of the A-and X-site of FAPbI 3 by MA + and Br − , respectively, can reduce carrier scattering time because of the reduced structural distortion and cation dipole-polaron coupling in FA0.95 compared to that in FA0.85, contributing to the increase of polaron mobility and retarded carrier-phonon cooling in FA0.95. Moreover, the polaron mobility of inorganic CsPbI 3 was measured to be 270 ± 44 cm 2 V −1 s −1 , which is around one magnitude higher than that of organic hybrid perovskite in terms of MAPbI 3 and FAPbI 3 (25-75 cm 2 V −1 s −1 ) [51,79]. Therefore, the increased polaron mobility of perovskites based on Cs-and alloyed cations is indicative of the effect of polarons on charge-transport dynamics, and rational composition engineering can modulate the polaron dynamics in resultant perovskites, resulting in outstanding power-conversion efficiency of PSCs.
Molecular passivation
Large polarons can be introduced into perovskite films by molecular passivation. For example, aromatic molecules, e.g. porphyrin and monoamine Cu porphyrin (CuP), are prone to self-assemble into supramolecules upon thermal treatment (figure 6(a)). The strong interaction between the amine groups and the central metals of the adjacent molecules can affect the dipole direction, leading to the formation of homogeneously large polarons in the formed supramolecules (CuP-S2), which was confirmed by electron paramagnetic resonance and Raman spectroscopy. When the porphyrin molecules were doped into the perovskite, CuP-S2 with a gradient electric field formed at the grain boundaries, as illustrated in figures 6(b) and (c), which can act as a continuous pathway for hole extraction across the perovskite grain boundaries and effectively suppress nonradiative recombination. Consequently, a champion efficiency of up to 24.2% was achieved for devices based on porphyrin-modified perovskites, as shown in figure 6(d), and the T80 lifetime was extended to 3000 h under constant light illumination at an environmental temperature of 65 °C (figure 6(e)) [34].
Polarons in hole transport layer
Polarons can be generated in organic materials with periodic structures, such as the conjugated polymers poly(3-hexylthiophene-2,5-diyl) (P3HT) [80] and poly(triarylamine) (PTAA) [33], and small molecules such as spiro-OMeTAD [81,82] and porphyrin [34], if the charge carriers are strongly coupled to the vibrational modes of the organic components. Parameters such as the crystallinity, chain length, and molecular weight of the organics determine their interchain interaction with the charge [80]. For example, increasing the conjugation chain length of P3HT can significantly enhance polaron delocalization and hence improve its charge mobility [80]. In the case of PTAA, doping of high-molecular-weight PTAA can enhance polaron delocalization. In particular, doping can partially oxidize the polymer and thereby affect the polarity of the adjacent chains. Moreover, increasing the molecular weight can restrict the vibration of the PTAA chains. Accordingly, the combined effect of delocalized polarons and improved charge mobility can significantly promote charge transport through the PTAA layer. In addition, increasing the molecular weight can dramatically boost the thermal stability of the PTAA. As a result, a 17% efficiency was obtained for solar modules (43 cm²) based on high-molecular-weight PTAA, and 90% of the initial performance was maintained after 800 h of aging at 85 °C [33].

[Figure 7 caption: energy diagram and working mechanism of hot-carrier solar cells, which consist of a hot-carrier absorber, energetically narrow selective contacts, and electrodes; adapted from [70] with permission from the Royal Society of Chemistry.]
Toward HC solar cells
One of the major thermodynamic limits on energy conversion in a solar cell is the loss of above-bandgap (hot) photon energy, which occurs on a femtosecond timescale in most photovoltaic materials [8]. As illustrated in figure 7, the concept of an HC solar cell has been proposed to reduce this loss by extracting HCs immediately before they cool. In particular, the hot electrons and holes generated in the photoactive layer, with temperatures higher than that of the lattice (e.g. T_e > T_L, T_h > T_L), are extracted through energy-selective contacts. The extracted HCs can retain their high-energy states and increase the extractable chemical potential from the ∆µ of a conventional device to the larger value of an HC device, as illustrated in figure 7. Therefore, HC solar cells enable the absorption of a wide range of photon energies and increase the open-circuit voltage. A theoretical efficiency of up to 67% can be achieved in single-junction HC solar cells, approaching the limit of infinite tandem cells [65,71].
Perovskite materials manifest an ultralong HC lifetime and slow charge-recombination process, which can be utilized in the realization of HC solar cells [29,65,83]. To reach truly high performance HC PSCs, the HCs should reach a charge transport layer before their cooling, so the perovskite layer should be electrically thin and optically thick. In this regard, the crystal quality and chemical composition of perovskite thin films should be intensively engineered to retard the HC relaxation. For example, Cs-based perovskite (i.e. CsPbI 3 ) exhibited a slower cooling time than that of FA-and MA-based perovskites (i.e. FAPbI 3 , MAPbI 3 ), which is attributed to the reduced HC-phonon coupling in Cs-based perovskite [84] and the lowest cooling rates [54]. The doping of alkali ions (e.g. K + , Cs + , Rb + ) into (MAFA)Pb(BrI) 3 can markedly prolong the HC lifetime to above 10 ps with a diffusion length of over 100 nm because of the facilitated lattice strain relaxation and passivation of vacancy defects [85]. Moreover, the control of point defects (e.g. introducing interstitial iodide defects and eliminating vacancies defects) within the MAPbI 3 lattice can largely retard the hot-electron cooling, which was attributed to the reduced band degeneracy and weakened HC-phonon interaction [86]. In addition, HC lifetime is found to be proportional to light intensity, so a suitable concentrator or light management can be developed to enhance the carrier density within the perovskite layer to prolong the lifetime [66]. However, the exact origin and underlying mechanism (e.g. phonon bottleneck [66], auger heating [87], large polaron [32], etc) for such a long HC lifetime in perovskite materials are not well understood or are still under debate [4]. Furthermore, the design of charge-transport materials, which should present a narrow band pass with energy levels to selectively extract HCs without interrupting cold carriers, is another challenge to the realization of truly HC solar cells [69][70][71].
Effect of polarons on device stability of PSCs
Carrier-phonon coupling within perovskite crystals is accompanied with cation reorientation, lattice deformation, etc, which can create some metastable defects, polaronic strain, and photostriction in PSCs, largely undermining their photostability.
Polaron-induced metastable defects
As seen in figure 8(a), PSCs suffer from a fast photocurrent decay upon light illumination, which recovers when the devices rest in the dark and which seriously undermines their operational stability [35]. Such light-induced performance degradation and recovery in PSCs was attributed to polaron-induced metastable defects in the perovskite layers, as illustrated in figure 8(b). As evidenced by the increase of the dielectric constant (figure 8(c)) and of the Raman scattering in the region of 135-210 cm−1, the cation (i.e. MA+) vibration within the crystal lattice was slowed down or even frozen after light illumination, which was ascribed to the coupling of photogenerated carriers to the local lattice. Polaron-induced distortions [88] of the symmetric structure and local field fluctuations can act as deep-level metastable defects, as confirmed by the absorption increase in the near-infrared wavelength region (figure 8(d)), leading to a fast photocurrent decay in PSCs; the long-term slow degradation process is attributed to the slow mobility and accumulation of small polarons [35]. Therefore, light-induced performance decay can be recovered by resting the devices in the dark or can be significantly suppressed by reducing the activation energy for structural vibration. For instance, switching between day and night operation or operating at low temperature (e.g. 0 °C) can extend the operational stability of PSCs.
Photostriction
Owing to the phonon-lattice coupling, perovskite materials demonstrate strong light-matter interaction. In the case of photoexcited MAPbI 3 , the generation of charge carriers can weaken the binding between the amine group of organic cations and the iodide from the inorganic framework because of the reduced electron density of I as the charge transfers from the valence band maximum of Pb 6s-I 5p to the conduction band minimum of Pb 6p [89]. The weakened interaction promotes the rotational degree of freedom of MA and straightens the Pb-I-Pb bond, leading to a giant photostriction, as illustrated in figures 9(a) and (b) [89]. It was realized that the straightening of the Pb-I-Pb bond can reduce the migration energy barrier of the water molecules at the crystal surface and promote water adsorption (figures 9(c) and (d)). The reason was ascribed to the enhanced electron density of Pb after photostriction, which can easily be attacked by the O of moisture molecules, leading to increased moisture sensitivity [90]. Moreover, photostriction is a self-accelerated process because the straightening of Pb-I-Pb bonds can improve photon absorption and further speed up photostriction, leading to inferior stability of PSCs in an ambient environment [90,91].
However, there is still debate about the photostrictive effect in perovskite materials since both lattice contraction and expansion were experimentally observed under low-and high-intensity light illumination, respectively [92,93]. For instance, a lattice-contraction effect was observed in MAPbBr 3 , and its correlation with cation rotation was confirmed by Raman measurements. Particularly, the Raman peak of the MAPbBr 3 was blueshifted from 317.5 to 328.2 cm −1 upon light illumination, which corresponds to the MA + rotation along the C-N axis, indicating a direct correlation between the cation rotation and photostriction due to the strong electron-phonon coupling [94].
Polaronic strain
Light-induced phase segregation is likely to happen in hybrid perovskites with mixed halides (e.g. Br/I), largely undermining the photovoltaic performance of the resultant PSCs [95]. The formation of strongly coupled polarons can promote phase separation in mixed-halide perovskites [36,72,96]. In particular, halide ions such as I− and Br− are randomly distributed in the as-deposited mixed MAPb(I0.1Br0.9)3 perovskite films, as demonstrated in figure 10(a). Upon photoexcitation, electron-hole pairs with a small binding energy (0.03 eV) are generated, which quickly separate into free carriers (i.e. electrons and holes). The polar cations (i.e. MA+) reorient to localize the carriers because of the strong carrier-phonon interaction, leading to the formation of polarons with a diameter of 8 nm and a binding energy of 0.08 eV [96]. The deformation of the local lattice can generate polaronic strain [97], which can increase with exposure time and further reduce the activation energy for ion migration [98,99]. Polaronic strain formed primarily in low-bandgap (high-I-concentration) regions under low-intensity light irradiation, which causes strain gradients and stabilizes the polaron within I-rich regions, segregating the mixed perovskites into I-rich and Br-rich domains (figure 10(b)). Such phase separation can be reversed, as the strain is released, by relaxing the perovskite film in the dark for a certain time [96]. In contrast, upon high-intensity light illumination, the polaronic strain field from Br-rich domains overlaps with that from I-rich regions and the strain gradients disappear, leading to the homogeneous remixing of halides with eliminated phase separation, as shown in figure 10(b) [72]. Recently, polaronic strain in perovskites has been experimentally visualized by real-time characterization of crystal structural variations [97].
To prevent phase segregation in hybrid perovskites, strategies should be dedicated to reducing the carrier-phonon interaction. For example, substituting the organic cations (MA+, FA+) with Cs+ [96] or doping Cs+ into mixed I/Br perovskite films [74,100] can effectively mitigate halide ion segregation and boost the photostability of mixed-halide perovskites. The improved stability was attributed to the lower polarity of Cs+, which significantly reduces carrier-phonon coupling.
Conclusions and perspectives
In summary, the specific ionic and soft nature of perovskite crystal structures enable long-range coupling between photogenerated carriers and the polarized crystal lattice in organic-inorganic hybrid perovskites. The presence of large polarons is effective in protecting the photogenerated carriers from scattering with the trap states, resulting in prolonged carrier lifetime and reduced nonradiative recombination in perovskite materials, and hence high performance of the resultant PSCs. However, the strong coupling between the carriers and ionic lattice can parasitically create metastable defects, photostriction, and polaronic strain, leading to inferior photostability of PSCs. Pioneering investigations have proven that rational engineering of the chemical composition and crystal quality of perovskite materials is effective in modulating the polaron state in the perovskite layer, leading to improved efficiency and stability of PSCs.
However, the formation mechanism of polarons in perovskite materials and their effect on carrier dynamics are still not solidly understood. Prospects for further understanding the role of polarons in the realization of highly efficient and stable PSCs should focus on the following aspects. First, a fundamental understanding of the influence of defect states, chemical composition, crystal quality, and external parameters such as light illumination, heat, and electric field on polaron formation and transport needs to be developed with the assistance of computational simulations, machine learning, advanced characterization, and rational experimental design. Moreover, directly correlating the parameters of polarons (e.g. binding energy, mobility, coherence length, lifetime) with photovoltaic metrics and the long-term stability of the corresponding PSCs could provide great insight for further modulation of polaron formation, working toward highly efficient and stable PSCs. For instance, a thorough understanding of the origin of the extended HC lifetime could facilitate the design of perovskite materials or nanostructures for the realization of truly high-performance HC solar cells. In addition, the correlation between polaron transport and local strain in perovskite lattices needs to be explored, and other types of polarons, such as ferroelectric polarons and exciton polarons, require further theoretical and experimental investigation.
Data availability statement
No new data were created or analyzed in this study.
The ability of oriental magpies (Pica serica) to solve baited multiple-string problems
Background Baited multiple-string problems are commonly used in avian laboratory studies to evaluate complex cognition. Several bird species possess the ability to pull a string to obtain food. Methods We initially tested and trained 11 magpies to determine whether the oriental magpie (Pica serica) possesses the ability to solve baited multiple-string problems. Eight of the birds obtained the bait by pulling, and were selected for formal multiple-string tasks in the second stage. Second-stage tests were divided into seven tasks based on string configurations. Results Only two magpies were able to solve two tasks: one solved the task of parallel strings, and the other solved the task of slanted strings with the bait farther from the middle point between the two strings and selected the short string in the task of long-short strings. When faced with more difficult tasks (i.e., the task of slanted strings with the bait closer to the middle point between the two strings, the task with two crossing strings, and the task of continuity and discontinuity), the birds initially observed the tasks and chose instead to adopt simpler strategies based on the proximity principle, side-bias strategies and trial-and-error learning. Our results indicate that the oriental magpie had a partial understanding of the principle of multiple-string problems but adopted simpler strategies.
INTRODUCTION
Decades of studies have shown that complex cognitive abilities are not unique to primates and other large mammals; birds also possess a similar learning capacity (Emery & Clayton, 2004a). The anatomy of bird brains differs greatly from that of mammals (e.g., the forebrain of birds does not have a layered structure) (Medina & Reiner, 2000; Zorina & Obozova, 2011). Large-brained corvids reportedly possess forebrain neuron counts equal to or greater than those of primates with much larger brains. The large numbers of neurons concentrated at high densities in the forebrain may substantially contribute to the neural basis of avian intelligence (Olkowicz et al., 2016). Corvids and parrots have consistently demonstrated more sophisticated qualitative and quantitative intellectual skills than other birds (Emery, 2004, 2006; Emery & Clayton, 2004b), and are similar to primates in some aspects of social ecology, neurobiology, and life history (Emery, 2006; Seed, Emery & Clayton, 2009).
Most animals, including birds, are limited in their ability to operate tools, so multiple-string problems are used to test complex cognitive abilities (Vince, 1961; Bagotskaya, Smirnova & Zorina, 2012). The 1970s saw a shift toward studying developmental and sensorimotor aspects of cognition under the influence of Piaget (Jacobs & Osvath, 2015). In such tests, food (the bait) is tied to one end of a string and the animal can gain access to the food only by pulling the string (Heinrich, 1995). The debate on the cognition underlying string-pulling tasks is ongoing. It remains unclear whether and to what extent cognitive understanding contributes to successful performance on string-pulling problems. Various multiple-string tasks can test different cognitive mechanisms. When an individual subject is faced with the task of slanted strings with the bait closer to the middle point between the two strings, the task with two crossing strings, or the task with a right-angled turn on the longer baited string, the probability of a wrong choice is relatively high if the subject adopts a proximity or side-bias strategy, but if the subject adopts a trial-and-error strategy the probability of a wrong choice is greatly reduced (Jacobs & Osvath, 2015; Hofmann, Cheke & Clayton, 2016; Manrique et al., 2017).
Many studies have reported on the abilities of mammals and birds to solve multiplestring tasks. Mammalian studies have focused on non-human primates such as infant chimpanzees (Pan troglodytes) and cottontop tamarins (Saguinus oedipus) (Hauser, Kralik & Bottomahan, 1999;Hauser et al., 2002;Spinozzi & Potí, 1993) or carnivores such as domestic cats (Felis catus) and dogs (Canis lupus familiaris) (Riemer et al., 2014;Osthaus, Lea & Slater, 2005;Whitt et al., 2009). Avian research has primarily focused on certain birds in the family Corvidae in the order Passeriformes, or on birds in the order Psittaciformes (Werdenich & Huber, 2006). There are relatively few studies of this kind for other animals.
In the study of bird string-pulling tasks, strings are usually oriented either in a horizontal or a vertical fashion. A horizontal string can be reeled in with a single pull, whereas a vertical string requires better coordination and multiple-step motor planning with coherent movements (Werdenich & Huber, 2006;Jacobs & Osvath, 2015). A planar arrangement for multiple-string problems may reduce the need for animals to coordinate their movements under the aforementioned conditions. These approaches allow for testing various types of multiple-string tasks without the need for additional suspended items to hold the strings (Bagotskaya, Smirnova & Zorina, 2012;Manrique et al., 2017).
The majority of multiple-string task cognition experiments in Corvidae have been conducted with members of the genus Corvus. However, a few cognitive studies have reported on Pica species and indicate that the magpie (Pica pica) can remember the location of stored items (Clayton, 1998) and recognize itself in a mirror (Prior, Schwarz & Güntürkün, 2008). The black-billed magpie (Pica hudsonia) shows a superior ability to learn abstract concepts, like other jays (e.g., Garrulus glandarius) (Magnotti et al., 2016). Few researchers have studied string-pulling tasks in more distant relatives of the genus Corvus, such as western scrub-jays (Aphelocoma californica; Hofmann, Cheke & Clayton, 2016) and green jays (Cyanocorax yncas; Manrique et al., 2017), and there are no reports on multiple-string task cognition in Pica birds. The oriental magpie is a medium-sized corvid, widely distributed throughout China, that was revised from a subspecies (Pica pica serica) to a species (Pica serica) based on DNA sequencing results (Song et al., 2018). Our study is the first of its kind to determine whether the oriental magpie has the ability to solve multiple-string problems and obtain food. The aim of this study was to investigate what strategies are used by the oriental magpie when confronted with various multiple-string problems. While many works on the cognition of string-pulling tasks have focused on the genus Corvus, little is known about the abilities of other corvids in these tasks. Assessing the performance of a more distantly related genus, Pica, will be informative as to the distribution of complex cognition across the Corvidae.
Animals and installation of the experimental device
Eleven rescued oriental magpies (Pica serica; six males and five females, all 1 to 2 years old) from the Beijing Wildlife Rescue Center were used in this study (Table 1), which was conducted from October 2016 to August 2017. The sex of the birds was identified according to our previous report (Wang et al., 2019) and individuals were marked with colored leg rings to assist in their identification. The birds free-ranged in a 600 × 400 × 460-cm indoor aviary at Beijing Forestry University (Beijing, China) for one and a half months prior to the start of experimentation. Fresh food and water were freely available, and their diet consisted of insects, shrimps, vegetables, fruits, nuts, seeds, omnivorous bird compound feed, poultry compound vitamins and an occasional feeding of beef. The composition of the food remained unchanged during the 24 h before onset of the experiment, with the exception of insects. The food supply was withheld from the subjects while the experiment was being conducted and food (the bait) could be obtained only by pulling the string.
The experiment was conducted in an 80 × 80 × 50-cm cage (Fig. 1), which was placed in an adjacent indoor aviary. One end of each string was connected to the cage so that it could not be swallowed by the birds. Both the string and the bait were outside of the cage so that the birds could see the bait but could not obtain it directly. The two strings connected to the cage were each 0.1 cm in diameter, the distance between the two connecting ends tying the strings to the cage wall was 13 cm, and the lengths of the strings and the manner of placement varied among the experimental groups. Larvae of Zophobas atratus were used as bait, and the bait was connected to one of the strings; the other string was not baited (Fig. 1). The entire experiment was recorded using a digital video camera (Eos M3 (WH); Canon, Tokyo, Japan) placed in front of the strings (Fig. 1). Each bird spent an hour per day in the experimental cage for 30 days prior to the start of the experiment to become familiar with the experimental environment and to provide for its comfort during subsequent experimentation. Individual birds were placed inside the cage at the onset of the experiment, and when the bird first pulled the string it was regarded as consent to proceed and the experiment began. Magpies could not leave the experimental cage until the end of the test. The subjects could view the operations when the baits were updated, and they had 30 s to observe the operation in advance. Once the experimenter left the lab, the subjects made their choice and the test was timed. The experimenter left the room prior to the start of the experiment to avoid influencing the feeding behavior. The recorded video files were analyzed after the conclusion of the experiment.
Experimental protocols
Subjects were transferred to the test cage to begin the experiments (Fig. 1). The experiments were conducted in two stages: the first stage, that is, the pre-testing, learning and training stage and the second stage, that is, the formal string-pulling task stage, which was applied to subjects that passed the first stage of the experimentation.
The first stage: pre-testing, learning and training of oriental magpies for the string-pulling task

In the pre-testing phase, oriental magpies that had not been exposed to the string-pulling task were given a pulling task without learning or training to determine whether they would pull the string spontaneously to get bait. The end of each string was tied to bait and there were no empty string ends. The baits were placed outside the cage so that the bird could only get food by pulling the string (Fig. 2C). Each bird was tested four times in 1 day. In order to reduce the influence of frequent appearances of experimenters on the magpies, four strings, each with a bait, were put in front of the magpie at the same time, and the pre-testing was conducted only once for each subject, lasting for 20 min. Once a bird pulled the string and reached the bait, it was determined that the bird had the potential to pull strings to get bait. Individual birds that did not know how to pull a string and those that did know how to pull a string to obtain the baits were paired and placed in two adjacent cages (Fig. 2C) in which they could see, but not access, the string and bait tied to each other's cage. Birds that did not pull a string to obtain the bait were given 20 min to observe the behavior of the other group to see if they would acquire this ability. During the 20 min observation of string-pulling behavior, the birds involved in the learning faced four strings connected to baits at the same time, as in the pre-testing process. The completion of four string-pulls was regarded as successful learning for an individual bird. The strings in this learning phase were all connected with bait, with no empty ends, and this process allowed more birds to participate in subsequent experiments.

[Fig. 1 caption, continued: (E) points at which the strings were tied to one side of the bottom of the cage (i.e., the string ends near the bird); (F) video camera. The strings were located outside of the cage, and the two connecting points (13 cm apart) between the strings and one side of the bottom of the cage were located on both sides of the midpoint of the cage. The camera was placed directly in front of the two strings, and the lens was aimed at the midpoint of the bottom of the cage and covered the entire cage.]
Training refers to the string-pulling exercise for individuals that did not pass pre-testing and did not succeed in the subsequent learning exercises. The training phase was divided into two steps: first, the bait was attached to the bottom edge of the cage so that the oriental magpies could eat the bait directly (Fig. 2A); the bait was then attached to the end of the string and was gradually pulled away from the bird at the bottom edge of the cage (Figs. 2B and 2C). In this second step, the oriental magpies could not reach the bait directly, and this was done to observe whether they could get food by pulling the string. Training was only conducted once. In the training stage, the magpies faced four strings at the same time in the three configurations shown in Figs. 2A, 2B and 2C, and they had 20 min to observe and obtain the bait in each configuration. All trainees passed the first step, and the oriental magpies that completed the second training step were selected for further experiments. All of the strings in the training phase were connected to bait.

The second stage: tests of understanding for multiple-string problems by oriental magpies

The second stage of experimentation consisted of seven tasks with different multiple-string problems (referred to as T1-T7; Fig. 3). There were 30 trials for each task, and each trial was recorded and immediately ended when a magpie pulled a string within 15 min, regardless of whether the pulled string had bait at the end, which was considered success or failure, respectively. We stipulated that the first string touched by a magpie was the string chosen by that individual. Each magpie could only move on to the next multiple-string problem after finishing the previously tested task, which usually took at least 3 days. The order of testing proceeded from simple to complex, as shown in Fig. 3. The difficulty of each multiple-string problem was determined by whether the two strings crossed, changed direction, or were broken. The position of the bait was randomized to avoid memory effects. In order to avoid fatigue-related errors, each bird performed only one of the seven tasks per day, with no more than 10 trials conducted daily. The two strings were placed on the ground outside of one side of the cage for all seven tasks (Figs. 1 and 3). One end of each string was tied onto either side of the midpoint of the bottom of the cage, 13 cm apart. The seven tasks (T1-T7) were structured as follows: T1, the task of parallel strings, in which two 25-cm long parallel strings were set perpendicular to the bottom of the cage.
The free end of one string was connected to a bait and the free end of the other string was empty; T2, the task of slanted strings, with two 25-cm long parallel strings forming a 45° angle to the bottom of the cage and the bait closer to the middle point between the two strings; T3, the task of slanted strings with the bait farther from the middle point between the two strings, similar to T2 but with the free end of the baited string located near the center of the cage; T4, the task of long-short strings, in which two parallel strings (one 30-cm long and the other 15-cm long) were placed perpendicular to the bottom of the cage and each of the two free ends was connected to a bait; T5, the task with two crossing strings, in which two 25-cm long strings intersected at 90°, with the free end of one string connected to a bait and the other being empty; T6, the task with a right-angled turn on the longer baited string, in which two parallel strings (one 20-cm long and the other 40-cm long) were placed perpendicular to the bottom of the cage.
The free end of the 20-cm string was empty, and the free end of the 40-cm string was connected to a bait, but the longer string made a right-angled turn relative to the other string at 25 cm from the bottom of the cage; and T7, the task of continuity and discontinuity, in which two 25-cm long parallel strings were placed perpendicular to the bottom of the cage and each of the two free ends was connected to a bait; one of the two strings was disconnected at 5 cm, 15 cm from the cage. The tasks from T1 to T7 in this study were arranged in order from easy to difficult. Parallel strings (T1) test means-end understanding (goal directedness); slanted (T2 and T3), long-short (T4), crossed (T5) and turning (T6) strings test the proximity principle; and disconnected strings (T7) test whether the subjects understand the continuous/discontinuous nature of strings (Hofmann, Cheke & Clayton, 2016).
Data analysis
Data were tested using the two-tailed binomial test (SPSS 23.0, Chicago, IL, USA), with significance thresholds of p < 0.05, p < 0.01, and p < 0.001 (Tables 2 and 3). The lateral bias index (LBI) was used to analyze the direction and intensity of the side bias to the strings shown by individual oriental magpies (Damerose & Hopkins, 2002). It was computed as the difference between the number of right-sided selections (R) and left-sided selections (L) divided by the sum of the two selections, that is, LBI = (R − L)/(R + L). The LBI score ranged from −1.0 to 1.0; when the score was less than 0, the left side was preferred, otherwise the right side was preferred. The absolute value of the LBI score (referred to as ABS-LBI) indicates the side-bias intensity of the tested individuals.
Notes to Table 2: Significant differences from random choice (two-tailed binomial test): * p < 0.05; ** p < 0.01; *** p < 0.001. T1-T7, Task 1 through Task 7; P1-P8, oriental magpies P1-P8; C, number of correct trials; "-", tests failed because the individual bird was unwilling to participate in the task. Results shown are the numbers of correct trials out of 30, except in one case of 26 trials (in brackets).
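As a minimal illustration of these two statistics, the sketch below computes the accuracy p-value and the LBI for one bird; the trial counts are invented for illustration, and scipy's binomtest stands in for the SPSS procedure used in the paper.

```python
from scipy.stats import binomtest

# Hypothetical counts for one bird over 30 trials (illustrative only).
correct, total = 27, 30      # trials in which the baited string was pulled
right, left = 22, 8          # choices of the right- vs. left-hand string

# Two-tailed binomial test of the accuracy rate against chance (p = 0.5).
res = binomtest(correct, total, p=0.5, alternative='two-sided')
print(f"AR = {correct / total:.1%}, p = {res.pvalue:.4f}")

# Lateral bias index: LBI = (R - L) / (R + L), ranging from -1.0 to 1.0;
# negative values indicate a left-side preference, and ABS-LBI gives the
# intensity of the bias.
lbi = (right - left) / (right + left)
print(f"LBI = {lbi:+.2f}, ABS-LBI = {abs(lbi):.2f}")
```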
Ethical approval
The rearing of birds strictly complied with the requirements of "Animal Feeding Standards of the Beijing Wildlife Rescue Center." Animal handling during testing was conducted in accordance with the requirements of the "Animal Ethics Committee of Beijing Forestry University." All applicable international, national and institutional guidelines for the care and use of animals were followed. Under Chinese law, no specific approval was required for this noninvasive study. This article does not contain any studies with human participants.
Pre-testing and training success of the individual birds
Only one of the 11 oriental magpies familiar with the experimental environment (P6) could pull a string spontaneously to get bait without signs of stress or neophobia (T1; Fig. 3; Table 1). P6 attempted to pull the string and retrieve the bait the first time it faced the pulling task, taking only 37 s from the start of the experiment. Seven individuals (P1-P5, P7 and P8) passed the pre-testing after learning and training. Four birds (P2-P5) passed during the learning phase and three (P1, P7 and P8) passed with training after previous learning failure. One oriental magpie (P3) accidentally pulled a string with one of its claws during the first stage of the training and found that it could get bait by pulling; this individual was classified as having learned to solve the string-pulling task. Eight individuals (P1-P8) were subjected to the second stage of trials to solve the tasks shown in Fig. 3. The other three birds did not properly participate in the experiment at any point and thus are not listed in Table 1.
Notes to Table 3: L, side bias for the left strings during 30 trials, except in one case of 26 trials for P8; LBI, lateral bias index, which ranges from −1.0 to 1.0 (if the LBI is less than 0, the left side is preferred; otherwise the right side is preferred); p, p-value. Significant differences in two-tailed binomial test: * p < 0.05; ** p < 0.01; *** p < 0.001. T1-T7, Task 1 through Task 7; P1-P8, oriental magpies P1 through P8. "-", tests failed because the individual bird was unwilling to participate in or did not complete the task.
Tasks with parallel strings
P5 successfully completed the parallel strings task (T1; Fig. 3) (Accuracy Rate (AR) = 90.0%, significant differences of two-tailed binomial test: p < 0.001; Table 2). P2, P3 and P6 were among the individuals that could not solve the parallel task but showed a significant side bias for the left side of the two strings (p < 0.001; Table 3).
Tasks with slanted strings
None of the eight birds were able to solve the task of slanted strings with the bait closer to the middle point between the two strings (T2; Fig. 3), and P5 and P8 showed significantly low accuracy (P5: AR = 30.0%, p = 0.043; P8: AR = 13.3%, p < 0.001; Table 2). In addition, P2 (p = 0.005; Table 3), P3 and P4 (p < 0.001) showed a significant side bias for the left side of the two strings. P6 successfully completed the task (AR = 86.7%, p < 0.001) of slanted strings with the bait closer to the middle of the two strings (T3; Fig. 3), but P1, P3, P4 and P7 were reluctant to participate in the task (Table 2). P2 showed a significant side bias for the left string during the test (p = 0.005; Table 3).
Task with long-short strings
P2, P6 and P8 were the only subjects willing to try to solve the task of long-short strings (T4; Fig. 3). P6 preferred the short string (AR = 73.3%, p = 0.016), while P2 (AR = 56.7%; p = 0.585) and P8 (AR = 69.2%; p = 0.076) had no significant preference for the long or short string. P6 also had a significant side bias toward the left string (p = 0.016; Table 3).
Task with two crossing strings
None of the eight birds could solve the task with two crossing strings (T5; Fig. 3) and the accuracy rates of P1, P5 and P8 were relatively low (P1 and P8: AR = 33.3%, p = 0.099; P5: AR = 30%, p = 0.043; Table 2). P2 (p = 0.005; Table 3) and P4 (p < 0.001) showed significant side bias for the left string and P3 and P7 were biased toward the right string during the test (p < 0.001).
Tasks with turning and continuity strings
In the task with a right-angled turn on the longer baited string (T6; Fig. 3) and the task of continuity and discontinuity (T7; Fig. 3), all subjects attempted the two tasks, except P6 who was unwilling to attempt T7. The birds showed no ability to solve either task (Table 2). There was a significant left-sided bias among P3, P4, P5 and P8 during T6, and P2, P3, P4, P5 and P8 during T7 (p < 0.001; Table 3), whereas there was a right-side bias by P6 (p < 0.001) and P7 (p = 0.016) during T6 and P7 during T7 (p < 0.001).
DISCUSSION
Our study revealed one female oriental magpie that spontaneously passed the pre-testing, and seven birds (four males and three females) that learned to obtain food by pulling a string after a period of learning and/or training (Table 1) during the first stage of the experiment. These eight birds participated in the second stage of the experiment with the seven baited multiple-string problems (T1-T7; Fig. 3), of which only P5 solved the parallel task (T1; Table 2) and P6 solved one of the two tasks of slanted strings (T3) without learning. Some of our results were similar to those reported for the western scrub-jay (Aphelocoma californica) (Hofmann, Cheke & Clayton, 2016), the common raven (Corvus corax), and the hooded crow (C. cornix) (Bagotskaya, Smirnova & Zorina, 2012; Obozova et al., 2014). The number of trials conducted differed among studies: the raven and western scrub-jay were tested 32 and 50 times per task, respectively, and the hooded crow 30-32 times per task in one report and 20 times per task in another (Bagotskaya, Smirnova & Zorina, 2012; Obozova et al., 2014; Hofmann, Cheke & Clayton, 2016). We therefore only compared the binomial test results of our magpies with those of other birds in the family Corvidae that also performed horizontal string-pulling tasks. Three of four common ravens solved tasks like our T1 and all four solved tasks like T3; three of four hooded crows solved tasks like T1 and T3; and four of five western scrub-jays solved the task like T1 and all five solved the task like T3 (Bagotskaya, Smirnova & Zorina, 2012; Hofmann, Cheke & Clayton, 2016). In these three species, the proportion of individuals succeeding in tasks like our T1 and T3 was thus higher, whereas only two of our magpies solved these two tasks (i.e., P5 solved T1 and P6 solved T3). Some individual ravens (one of four), hooded crows (two of four), and western scrub-jays (one of five) solved the task corresponding to our T2, whereas none of our magpies could solve T2, and two magpies (P5 and P8) showed significant use of the proximity principle. Our magpies were similar to the hooded crows in that they could not solve the crossing-strings task (T5) and showed significant use of the proximity principle (two of eight crows across the two studies), and one magpie (P6) likewise showed significant use of the proximity principle (Bagotskaya, Smirnova & Zorina, 2012; Obozova et al., 2014).
Hooded crows could solve tasks like both our T6 (four of eight crows) and our T7 (continuity strings; six of eight crows), but only one of our magpies (P1) could solve the turning task (T6), through trial-and-error learning, and none of them could solve the continuity task (Bagotskaya, Smirnova & Zorina, 2012; Obozova et al., 2014). In a task like our T6, one hooded crow showed significant use of the proximity principle, but we did not find that our magpies used the proximity principle in the task of turning strings (Bagotskaya, Smirnova & Zorina, 2012). Subjects in our study could not solve one of the slanted-strings tasks (T2; Fig. 3; Table 2) or the crossing-strings task (T5; Fig. 3), as the common raven could, nor the task of continuity and discontinuity (T7), as the hooded crow did (Bagotskaya, Smirnova & Zorina, 2012; Obozova et al., 2014). The eight magpies in this study solved fewer multiple-string tasks than the Corvus birds (only three out of eight magpies solved any multiple-string problems), despite their close evolutionary relationship (Ericson et al., 2005). When faced with tasks like T2 and T5, some individuals (e.g., P5 and P8) showed significant selection errors (p = 0.043 and p < 0.001, respectively; Table 2). P8 did not employ a significant proximity-principle strategy in the long-short task (T4). The same bird thus seemed to adopt different selection strategies when faced with different tasks.
Among the three oriental magpies (P2, P6 and P8) participating in the task of long-short strings (T4; Fig. 3), one subject (P6) showed an obvious preference for the short-string side (p = 0.016; Table 2). In the study of western scrub-jays, none of the five birds showed a preference in the first 50 trials, but all of them preferred the shorter string in the last 50 trials, which differs from the results of our study (Hofmann, Cheke & Clayton, 2016). In other studies, the bait was kept at a fixed distance from the cage wall while the strings had different lengths (i.e., one of the strings was curved). The birds were unable to solve this string problem, although they exhibited behavior consistent with the principle of proximity. These phenomena might be due to the fact that the bird recognized that it could get access to the bait only by pulling the string nearer to the bait (i.e., the proximity principle), rather than truly understanding the combined structure and relationship between the string and bait. This reaction may be explained by a perceptual-motor feedback loop rather than a comprehension of the means-end relation of string and reward (Hofmann, Cheke & Clayton, 2016).
After excluding the individuals using a side-bias strategy, we analyzed the change in accuracy over trials for each magpie in each task and found that P1 and P5 showed trial-and-error learning in solving multiple-string problems (T1 and T6). P1 used preference and trial-and-error learning strategies during the first 3 days of the task with a right-angled turn on the longer baited string (T6; correct five times in 14 trials, p = 0.424; Fig. 3). This was followed by an increase in its string-pulling AR over the next two days, with 13 of 16 trials being successful (AR = 81.3%, p = 0.021). The AR of P5 in T1 increased with time (77.8%, 87.5%, 100%, and 100% on days 1-4, respectively). The AR of P1 in T1 was less stable, but it increased markedly on the last day (60%, 50%, and 80% on days 1-3, respectively). The AR of P6 was also unstable: it increased over days 1-2 of the T1 test (60%, 80%, and 50% on days 1-3, respectively), but P6 showed a strong left-side bias in the third day's trials (the left string was selected in 10 trials). We found no other magpies that increased their accuracy rate through trial-and-error learning. This result indicates that, by trial-and-error learning, oriental magpies might be able to solve certain multiple-string tasks. However, the oriental magpie's solution to the task may not be based on an understanding of the relationship between the strings and bait but rather on an accumulation of learning and experience (Seibt & Wickler, 2006; Taylor, Knaebe & Gray, 2012). The subjects may understand that the string is a means to reach a goal, but not understand the mechanism of connectedness (Jacobs & Osvath, 2015).
In our study, the bait appeared randomly at the end of the left and/or right string. P5 and P6 solved one and two multiple-string problems, respectively (Table 2). The other six birds, which did not completely solve any task, behaved like the common ravens reported by Heinrich (1995) and showed various simple strategies such as side-bias strategies (i.e., choosing only one side, regardless of whether that string was connected to a bait), trial-and-error learning, the proximity principle, and random selection. Most of the magpies ultimately did not acquire the ability to solve any of the tasks despite showing some trial-and-error learning behavior after choosing the wrong string, which contrasts with results from string-pulling experiments on the kea parrot (Nestor notabilis) (Werdenich & Huber, 2006). P1 was the exception and completed the right-angle turn task (T6) through trial-and-error learning.
Although our magpies made their choices after a certain period of observation, they still could not solve most of the multiple-string problems. Only once the bait was displaced could they tell whether the chosen string was correct. This further illustrates that the magpies' pulling behavior was based on a perceptual-motor feedback loop rather than on understanding the means-end relation of string and reward. The correct choice should have been visually obvious without having to pull the string first, and the magpies should have noticed an incorrect string choice and adapted their strategy. Hofmann, Cheke & Clayton (2016) considered this awareness of errors and adjustment of strategies to be precursors of solving the physical problem, providing a basis for the development of causal understanding. However, each bird was tested only 30 times on each task, limiting our ability to determine whether each magpie could eventually solve the given tasks through learning or other strategies. Nevertheless, one magpie showed the ability to solve a task by changing strategies and through trial-and-error learning in the right-angle string task (T6; Fig. 3), suggesting that certain individuals of this species can learn to solve multi-string problems. Further study is required to determine whether the oriental magpie can understand the causal mechanism behind multi-string tasks.
CONCLUSIONS
Oriental magpies used learning and training to understand that pulling strings gave them access to bait. As a result, two magpies solved several tasks despite having had no exposure to multiple-string tasks before the onset of the experiment (Table 2). In addition, one magpie solved the turning-string task (T6) through trial-and-error learning. However, the birds were not able to solve more complex tasks, such as the two crossing strings (T5; Fig. 3) and the task of continuity and discontinuity (T7). When faced with problems they could not solve, different individuals showed different strategies, such as the proximity principle, side-bias strategies, random selection, and trial-and-error learning. This suggests that the overall cognitive ability of the oriental magpies used in this study is poorer than that of the larger birds in the family Corvidae, especially Corvus species. Crows may have evolved superior intelligence owing to their complex and changeable environment (Seed, Emery & Clayton, 2009). The oriental magpie is no exception, but our birds may have failed to solve more problems because they lacked experience with multi-string tasks (Taylor et al., 2010). It should be noted that only eight magpies were tested in this study, so the conclusions may be limited.
ADDITIONAL INFORMATION AND DECLARATIONS
Funding
This work was financially supported by the Research Fund for General Survey of Terrestrial Wildlife Resources (2015HXFWBHQ-SJL-01), Beijing, China. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Propagation Mechanism of Deep-Water Impulse Waves Generated by Landslides in V-Shaped River Channels of Mountain Valleys: Physical Model of Regular Rigid Block
Landslide-induced impulse waves in alpine valleys are a significant risk to large-scale dam and reservoir engineering projects in the surrounding area. In this study, a 1:200-scale physical model of landslide-induced impulse waves in a V-shaped river channel was established, and 18 groups of tests were conducted to evaluate the influence of different parameters, such as the volume and shape of the landslide body, water entry velocity, and water depth of the reservoir. Based on the test results, a dimensionless formula was established for the first wave height of impulse waves caused by a deep-water landslide in a V-shaped channel. An energy conversion law was determined for the impact of landslide-induced impulse waves on the reservoir bank. Finally, a distribution law was obtained for the initial maximum pressure caused by landslide-induced impulse waves along the water depth on the opposite bank. The theoretical predictions of the dimensionless formula showed good agreement with the experimental results, and the energy conversion rate of the landslide-induced impulse waves initially increased and then decreased with an increasing Froude number. The maximum dynamic water pressure showed a triangular distribution with increasing water depth below the surface of the still water body. The impact pressure of the impulse waves on the slope on the opposite bank increased with the water entry velocity. This study provides a scientific basis for the risk prevention and control of landslide-induced impulse waves in river channels feeding into reservoirs.
Introduction
High-velocity landslides in alpine areas and gorges produce huge impulse waves that feature short generation and propagation times, fast velocities, and a wide disaster range. Around major hydropower projects, they can not only wash away hydraulic structures and block river channels but may also cause serious accidents such as dam failure. In 1963, a landslide at Vajont Dam in Italy caused a huge wave with a height of up to 175 m, which destroyed a 70 m high concrete dam, washed away the town of Longarone and five nearby villages, and claimed nearly 2000 lives. In 1985, a landslide in Xintan, Zigui, near China's Three Gorges Dam, induced impulse waves that climbed up to a height of 54 m. The impulse waves overturned four fishing boats 2 km upstream and spread 42 km upstream and downstream, causing more than ten deaths. Therefore, research on landslide-induced impulse waves in alpine areas can provide a crucial reference for early warning and risk prevention of disasters at high dams and large reservoirs.
Various methods have been used in the research and analysis of landslide-induced impulse waves, e.g., analytical solutions [1][2][3][4], numerical simulations [5][6][7][8], physical models [9][10][11][12][13][14], and field data analysis [15, 16]. Because the location and time at which a landslide enters the water cannot be accurately predicted in advance, obtaining original data on landslide-induced impulse waves is difficult. Thus, physical models are used to simulate landslide-induced impulse waves and obtain characteristic parameters such as the first wave height, propagation process, and impact pressure. This approach is widely used to research landslide-induced impulse waves in alpine areas.
Previous models of landslide-induced impulse waves have considered the influence of the water entry velocity, volume, angle, and density of the landslide body. However, these models mostly used rectangular or trapezoidal channels to represent rivers. Few studies have considered the propagation process and impact pressure distribution of landslide-induced impulse waves in V-shaped river channels, which are common in alpine areas. The slopes of the banks on both sides of a V-shaped channel significantly affect the formation and propagation of landslide-induced impulse waves. Compared with rivers in plains and flat lands, impulse waves in V-shaped river channels in alpine areas have a more obvious disaster chain. The narrow river channel means that the landslide body can easily block it upon entry, which would greatly increase the overall water depth and may even form a dam that threatens the downstream area. In addition, the opposite bank confines the huge kinetic energy generated by the landslide-induced impulse waves and does not let it dissipate. This increases the impact pressure on the banks and dam bodies, which affects the stability and safety of the facilities. Even worse, the impulse waves may strike the opposite bank directly, which would damage infrastructure along the riverfront and threaten the safety of people and property.
In this study, a 1:200-scale physical model was established to reveal the characteristics of deep-water landslide-induced waves in V-shaped river channels. Experiments were performed to consider the influence of the volume, shape, and velocity of the landslide body and the water depth of the reservoir. A dimensionless formula was derived for the first wave height in the V-shaped channel. An energy conversion law was obtained for the impulse waves, and a distribution law was obtained for the initial maximum dynamic water pressure on the opposite bank along the water depth. This study provides a scientific reference for the risk prevention and control of landslide-induced impulse waves in the alpine areas of major hydropower projects.
Materials and Methods
A 1:200-scale physical model was established based on the gravity similarity criterion and Froude similarity criterion, as shown in Figure 1. A high-definition camera was installed on one side of the tank to capture the landslide body as it slid into the water and the resulting impulse waves. Wave height gauges N.1, N.2, and N.3 were set in the channel at a spacing of 1.2 m. Wave height gauge N.4 was installed on the opposite bank to record the rise of the impulse waves. The wave height gauges measured the wave height at a frequency of 50 Hz and a resolution of 0.1 mm. Eight water pressure sensors (P1-P8) were arranged on the opposite bank to measure the impact pressure of the impulse waves at a collection frequency of 50 Hz and a resolution of 0.01 kPa. The pressure sensor P1, which was 73 cm away from the bottom of the river channel, was arranged at the central axis of the opposite bank. The positions of the other sensors are shown in Figure 1.
In the experimental model, the inclination of the sliding bed was 34°, the inclination of the opposite bank was 43°, and the fall height of the center of mass of the block was 1.8 m. The water depth of the V-shaped channel was between 0.86 and 1.26 m. According to their geotechnical characteristics, landslides are divided into two types: loose earth landslides and rock landslides. This experiment studied rock landslides [17][18][19]. As shown in Figure 2, the landslide body was represented by concrete blocks of different shapes that had a density of 2300 kg/m³, length of 0.45-0.55 m, width of 0.3-0.4 m, and thickness of 0.1-0.27 m. The width of the experimental slot was 0.66 m. All landslide bodies were narrower than the width of the slot, and the generated impulse waves propagated in the lateral direction. The velocity of the landslide body was controlled by a sliding control device. The sliding velocity was measured by a Hall velocity sensor connected to the landslide body at a resolution of 0.01 m/s.
An orthogonal design was adopted for the experiments with the physical model. The maximum wave amplitude, which corresponded to the maximum water entry velocity, was assumed to be the first wave amplitude. After the landslide-induced impulse waves propagated, the distance between the first two crests was measured and taken as the wavelength. Table 1 presents the experimental design and results. The test parameters included the length (l), width (w), and thickness (s) of the landslide body, as well as the still water depth (h0), sliding bed inclination angle (α), and water entry velocity of the landslide body (u). The results were represented by the Froude number (F), first wave amplitude (a), and wave celerity (c).
Results and Discussion
3.1. Generation and Propagation Characteristics
The velocity at which a landslide body enters the water is an important factor that affects the generation and propagation of impulse waves. However, the difficulty of measuring the water entry velocity has limited the number of related studies. Fritz et al. [10] and Heller et al. [20] used particle image velocimetry and a laser distance sensor to track and measure the vector field of the near-field impulse wave velocity. To analyze the impact velocity of the landslide, they used the calculation formula presented by Körner [21]:
u = √(2 g Δz (1 − f cot α)),   (1)
where u is the water entry velocity of the center of mass of the landslide body (m/s), g is the acceleration of gravity (m/s²), Δz is the height of the original center of mass above the still water level (m), f is the friction coefficient, and α is the inclination angle of the sliding bed (°).
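As a small worked example, the sketch below evaluates Eq. (1) for model-scale numbers; the friction coefficient is an invented value, and the Körner form written above is the standard frictional free-slide relation and should be treated as an assumption about reference [21] rather than a quotation of it.

```python
import math

def entry_velocity(delta_z, f, alpha_deg, g=9.8):
    """Water entry velocity of the slide's centre of mass, u (m/s),
    from the assumed Koerner form u = sqrt(2*g*dz*(1 - f*cot(alpha)))."""
    alpha = math.radians(alpha_deg)
    return math.sqrt(2.0 * g * delta_z * (1.0 - f / math.tan(alpha)))

# Fall height and bed inclination are taken from Section 2 of this paper;
# the friction coefficient f = 0.2 is purely illustrative.  Note that in the
# experiments the entry velocity was actually imposed by the sliding control
# device rather than by free sliding.
print(f"u ~ {entry_velocity(delta_z=1.8, f=0.2, alpha_deg=34):.2f} m/s")
```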
Equation (1) ignores the effects of air resistance, underwater friction, and water resistance, so the water entry velocity u increases with an increase in any of the other terms. The sliding velocity control device was used to increase the sliding velocity of the landslide body before entering the water to 0.67, 1.16, and 1.67 m/s. The changes in the sliding velocity and acceleration as the landslide body entered the water were obtained. As shown in Figure 3, the landslide body initially showed an approximately linear increase in velocity, followed by a slow acceleration or deceleration, and then a rapid deceleration to a final stop. These three stages corresponded to the landslide body sliding in air, sliding in water, and bottoming out, respectively. The duration of the second stage (i.e., slow acceleration or deceleration) differed with the water depth. Figure 4 shows the three stages of the sliding acceleration for each sliding velocity. Because the sliding bed and landslide body were controlled by a mechanical transmission device, the initial velocity of the landslide body was provided by the control device. Before the landslide body reached the still water surface, the reduction in acceleration was mainly caused by the friction between the sliding bed surface and the landslide body. After the block entered the water, the acceleration was affected by the friction from the sliding bed surface and the viscous resistance of the water, so it became negative. The resistance reached its maximum when the front end of the landslide body reached the bottom of the river, which caused the landslide body to quickly decelerate and stop.
Based on the results of the physical model, the generation and propagation of landslide-induced impulse waves in a V-shaped river channel can be summarized into three stages. In the first stage, the landslide body entered the water body quickly; the front edge of the landslide body hit and displaced part of the water body. A small part of the displaced water body jumped from the water surface to form a water tongue, and most of the water body near the water entry point was compressed and increased in height, which formed the first wave. In the second stage, the landslide body slid into the water body and transferred energy continuously. It displaced more of the water body, which caused the leading edge of the rising wave to move forward, while the trailing edge moved, because of gravity, toward the impact crater that was generated by the landslide body when it struck the water body. The water tongue formed by the impact then began to splash back into the water. In the third stage, the landslide body stopped moving and completed the process of transferring energy to the water. Then, the water converted the kinetic energy to form impulse waves that spread to the opposite bank. To reveal the influence of the shape and water entry velocity of the landslide body on the propagation of impulse waves, the monitoring data of key wave height gauges were selected, and 18 groups of time history curves for the amplitudes of the landslide-induced impulse waves were obtained, as shown in Figure 5. The impulse waves featured the formation of an advancing wave train dominated by waves with positive amplitudes (i.e., wave crests). The first or second wave crests had the largest amplitude, followed by oscillating waves of smaller amplitude. The largest trough occurred before the highest crest. In general, the different shapes of the landslide bodies generated similar impulse waves on the opposite bank, but the amplitude and period differed.
Figure 4: Changes in acceleration before and after the landslide body entered the water at three sliding velocities.
The wave type was determined by the Froude number F, which describes the relative velocity at which the landslide body struck the water body, and by the relative thickness of the landslide body (S = s/h0). F was less than (4 − 7.5S) in all 18 groups of tests, so the impulse waves were all weakly nonlinear oscillatory waves. The wave amplitude and velocity are important characteristics of landslide-induced impulse waves. Previous studies have shown that the geometric size of the landslide body, the water depth, and the water entry velocity are the controlling factors of the first wave amplitude. In this study, nonlinear regression analysis was used to identify the correlation between the geometric dimensions of the landslide body, the water depth in front of the slope, and the water entry velocity of the landslide body. A dimensionless formula was obtained for the first wave amplitude in a V-shaped river channel (Equation (2)), where u/√(gh0) is the relative water entry velocity, l/h0 is the relative length, w/h0 is the relative width, and s/h0 is the relative thickness of the landslide body. Figure 6 compares the predicted first wave amplitude via Equation (2) with the test results. A correlation coefficient of 0.82 was obtained.
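The sketch below shows how a dimensionless power law of the kind used for Equation (2) can be fitted by least squares in log space; the test rows are invented stand-ins for Table 1, and the fitted exponents are whatever the synthetic data give, not the coefficients of the paper's Equation (2).

```python
import numpy as np

# Hypothetical rows: u (m/s), l, w, s, h0 (m), first wave amplitude a (m).
rows = np.array([
    [0.67, 0.45, 0.30, 0.10, 0.86, 0.012],
    [1.16, 0.50, 0.35, 0.20, 1.10, 0.031],
    [1.67, 0.55, 0.40, 0.27, 1.26, 0.058],
    [1.16, 0.45, 0.30, 0.27, 0.86, 0.040],
    [1.67, 0.50, 0.40, 0.10, 1.10, 0.025],
    [0.67, 0.55, 0.35, 0.20, 1.26, 0.015],
])
g = 9.8
u, l, w, s, h0, a = rows.T

# Dimensionless groups: Froude number and relative length/width/thickness.
F = u / np.sqrt(g * h0)
X = np.column_stack([np.log(F), np.log(l / h0), np.log(w / h0),
                     np.log(s / h0), np.ones_like(F)])
y = np.log(a / h0)

# Least-squares fit of  a/h0 = K * F^p1 * (l/h0)^p2 * (w/h0)^p3 * (s/h0)^p4.
(p1, p2, p3, p4, logK), *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"a/h0 = {np.exp(logK):.3f} * F^{p1:.2f} * (l/h0)^{p2:.2f}"
      f" * (w/h0)^{p3:.2f} * (s/h0)^{p4:.2f}")
```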
The wave velocity describes the propagation distance of a waveform per unit time, and it is an important parameter for calculating the propagation of landslide-induced impulse waves. The following empirical formula for nonlinear waves is usually used for prediction.
Figure 7 compares the prediction results of Equation (3) with the measured test results. The correlation coefficient was 0.551, with a maximum error of 16.8% and an average error of 9.1%. The error can be attributed to the frame rate of the camera. Although the correlation was low, the values were numerically close. Therefore, the theoretical formula for wave celerity can be used to predict the celerity of the impulse wave [22].
Dimensionless functions are commonly used in regression analysis for prediction. Similar to Equation (2), a series of dimensionless groups raised to power exponents are multiplied together, and the groups are usually specific values of the same controlling factors. Different scholars have used different controlling factors for regression analysis. Noda [3] proposed calculating the maximum impulse wave height from a linear relationship with the Froude number. Fritz et al. [1] proposed calculating the maximum impulse wave height from the Froude number and the relative thickness. Ataie-Ashtiani and Nik-khah [9] proposed calculating the maximum impulse wave height by considering factors such as the dimensionless sliding volume, Froude number, inclination angle of the sliding surface, and underwater sliding time. Zweifel et al. [23] proposed another method based on the Froude number, relative thickness, and relative mass. The power exponents of each variable in the equations proposed by the above researchers varied substantially; in particular, the power exponent of the Froude number was between 0.2 and 1.4. A larger power exponent for the Froude number indicates a greater influence of the relative velocity on the wave height, and vice versa. For V-shaped channels, the wave height of landslide-induced waves increased with the water entry velocity of the landslide body. The dimensionless formula proposed for the first wave amplitude in the V-shaped channel (i.e., Equation (2)) confirmed this relationship, which reflects its applicability and rationality.
Energy Characteristics.
The main characteristics of landslide-induced impulse waves are closely related to the law of energy transfer of the landslide mass. The time history curves of the wave heights collected by the wave height gauges were used to calculate the wave energy according to Ataie-Ashtiani and Nik-khah [9], where Ew is the wave energy per unit width (kg m/s²), Epot is the wave potential energy per unit width (kg m/s²), ρw is the density of water (1000 kg/m³), g is the acceleration of gravity (9.8 m/s²), c is the wave velocity (m/s), η is the water surface elevation relative to the still water level (m), and t is the time (s). The energy of the landslide body can be calculated using the Watts [24] formula, where Es is the energy of the landslide body per unit width (kg m/s²), ρs is the density of the landslide body (kg/m³), u is the water entry velocity of the landslide body (m/s), and A is the cross-sectional area of the landslide body (m²). The wave energy conversion rate of the impulse waves is expressed in terms of Ew(0), the wave energy in the near-field area of impulse wave generation [25]. Figure 8 shows the changes in the energy conversion rate of the landslide-induced impulse waves at different Froude numbers according to the measurements of the wave height gauges. The energy conversion rate initially increased and then decreased as the Froude number F increased.
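The equations referenced above are not reproduced here, so the sketch below uses commonly cited stand-ins: potential wave energy per unit width Epot = 0.5·ρw·g·c·∫η²dt with Ew ≈ 2·Epot, slide energy Es = 0.5·ρs·A·u², and a conversion rate Ew(0)/Es. These specific forms are assumptions made for illustration, not quotations of [9], [24] or [25].

```python
import numpy as np

def wave_energy_per_width(eta, dt, c, rho_w=1000.0, g=9.8):
    # Assumed form: E_pot = 0.5*rho_w*g*c * integral(eta^2 dt), E_w ~ 2*E_pot.
    e_pot = 0.5 * rho_w * g * c * float(np.sum(eta**2) * dt)
    return 2.0 * e_pot

def slide_energy_per_width(rho_s, u, area):
    # Assumed form: kinetic energy per unit width, E_s = 0.5*rho_s*A*u^2.
    return 0.5 * rho_s * area * u**2

# Hypothetical 50 Hz gauge record near the impact zone (not measured data).
t = np.arange(0.0, 10.0, 0.02)
eta = 0.03 * np.exp(-0.3 * t) * np.sin(4.0 * t)   # decaying oscillatory train

E_w0 = wave_energy_per_width(eta, dt=0.02, c=3.3)             # near-field wave energy
E_s = slide_energy_per_width(rho_s=2300.0, u=1.67, area=0.55 * 0.27)
print(f"energy conversion rate E_w(0)/E_s ~ {E_w0 / E_s:.3f}")
```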
Figure 9 shows the changes in the wave energy according to the position of the wave height gauges for landslide bodies with different shapes. Tests 11, 15, and 16 all had the same water entry velocity and corresponded to landslide bodies with the shapes B4, T1, and H1, respectively. T1 had the greatest wave energy, followed by H1 and then B4. Similarly, T1 showed the greatest decay of the wave energy during propagation, followed by H1 and then B4. Therefore, landslide bodies with different shapes followed a similar law of energy propagation; however, the attenuation of the wave energy during the propagation process differed.
3.4. Impact Pressure Characteristics
The formation and propagation of landslide-induced waves in V-shaped river channels in alpine valleys are significantly affected by the slopes of the banks on both sides. The slope on the opposite bank constrains the huge kinetic energy generated by the impulse waves induced by the entry of the landslide body, so it cannot dissipate in time. This produces higher impulse waves that propagate to the opposite bank, which poses a huge threat to the safety of the local infrastructure and residents. Studying the impact pressure of landslide-induced waves on the banks of river channels is of great significance for preventing and controlling disasters.
In the physical model, eight pressure sensors were installed on the opposite bank. Impulse waves propagated to the opposite bank, and the impact pressure was measured by the sensors. Figure 10 shows the time history curve of the hydrodynamic pressure measured by sensors P1-P3 in test 13, which used landslide body T1. The waves reflected, superimposed, and oscillated after they propagated to the opposite bank. The hydrodynamic pressure curves had the same wave amplitude, and the maximum value was affected by the water depth of the channel and the water entry velocity of the landslide body. The influence of the shape of the landslide body, the water depth of the river channel, and the water entry velocity of the landslide body on the impact pressure distribution was investigated: three typical shapes of landslide bodies were selected (B4, T1, and H1), the water depth was set to 0.86, 1.10, and 1.26 m, and the water entry velocity was set to 0.67, 1.16, and 1.67 m/s. Figure 11 shows the impact pressure distributions generated by landslide-induced waves. The impact pressure distributions generated by different shapes of landslide bodies generally had the same pattern. The main difference among the landslide bodies was in the frontal area that entered the water. For a given water entry velocity, a larger frontal area increased the impact pressure. The increase in the water entry velocity reduced the time for the landslide body to reach the bottom, which increased the amplitude of the waves that caused the impact pressure. At different water depths, the distributions of the maximum impact pressure on the opposite bank and the maximum amplitude of the impulse waves were generally the same. However, with increasing water depth, the impact pressure distribution fluctuated at lower depths, indicating the complexity of the energy transfer when a landslide struck deep water.
The results indicate that the maximum dynamic water pressure is caused by the initial wave and that it follows an approximately triangular distribution along the water depth. The potential energy of the landslide body is instantaneously converted into kinetic energy after it enters the water, which applies a huge impact pressure on the water body. This pressure acts on the water body along the entire depth and generates impulse waves at the surface of the water body. The impulse waves propagate to the opposite bank as the potential energy is converted to kinetic energy. Because of the narrow dimensions of the V-shaped channel, the energy cannot dissipate in time, which increases the pressure transmitted to the opposite bank.
Conclusion
A physical model was used to perform 18 tests under different conditions to characterize the propagation of landslide-induced waves and the impact pressure distribution in V-shaped river channels in alpine valleys that contain major hydropower projects. The main conclusions are as follows:
(1) The impulse waveforms were all weakly nonlinear oscillatory waves. The waves generally formed an advancing wave train with positive amplitudes (wave crests) followed by oscillating waves with smaller amplitudes.
(2) The results of the nonlinear regression analysis were used to propose a dimensionless equation for the amplitude of the first wave induced by a landslide in a V-shaped river channel. The prediction results obtained using the proposed equation showed a strong correlation with the observed results of the physical model, at a correlation coefficient of 0.82.
(3) An energy transformation law was obtained for landslide-induced impulse waves in a V-shaped river channel. The wave energy conversion rate initially increases and then decreases with an increasing Froude number. The shape of the landslide body does not significantly affect the energy propagation law but significantly affects the degree of wave energy attenuation.
(4) The distribution of the maximum dynamic water pressure caused by landslide-induced waves in a V-shaped river channel on the opposite bank was obtained. The distribution is approximately triangular along the water depth; the pressure reaches a maximum value near the surface of the still water body and gradually decreases from the surface downward.
Data Availability
Figure 1: Schematic diagram of the physical model.
Figure 3: Changes in velocity before and after the landslide body entered the water at three sliding velocities.
Figure 5: Time history curves of 18 groups of wave amplitudes measured by wave height gauge no. 1.
Figure 6: Predicted and experimental values of the first wave amplitude.
Figure 7: Calculated and experimental values of the wave celerity.
Figure 8: Energy conversion rate at different Froude numbers.
Figure 9: Wave energy of different shapes for the landslide body measured by different wave height gauges.
Figure 10: Time history curves of the dynamic water pressure with landslide body T1 at measurement points P1-P3.
Table 1: Experimental design and test results.
There are no $\sigma$-finite absolutely continuous invariant measures for multicritical circle maps
It is well-known that every multicritical circle map without periodic orbits admits a unique invariant Borel probability measure which is purely singular with respect to Lebesgue measure. Can such a map leave invariant an infinite, $\sigma$-finite invariant measure which is absolutely continuous with respect to Lebesgue measure? In this paper, using an old criterion due to Katznelson, we show that the answer to this question is no.
Introduction
In this paper we study certain ergodic-theoretic properties of multicritical circle maps: orientation-preserving homeomorphisms of the circle that are reasonably smooth and have a finite number of critical points, all of which are non-flat of power-law type.
It is well-known that a multicritical circle map f : S 1 → S 1 without periodic points is minimal and uniquely ergodic. Its unique invariant Borel probability measure turns out to be singular with respect to Lebesgue measure λ on S 1 (see Section 2 for precise references). At least in principle, this fact does not rule out the possibility that f leaves invariant an infinite, σ-finite measure which is absolutely continuous with respect to Lebesgue measure. If such a measure µ exists, and we denote by ψ = dµ/dλ its Radon-Nikodym derivative with respect to Lebesgue, then ψ is a Borel function such that 0 < ψ < ∞ Lebesgue-a.e., and we have the cocycle identity
ψ(x) = ψ(f (x)) · Df (x) for Lebesgue a.e. x ∈ S 1 .   (1)
One can ask more generally: When does a minimal C 1 homeomorphism of the circle admit an infinite σ-finite invariant measure which is absolutely continuous with respect to Lebesgue measure? As it turns out, there are indeed examples of C ∞ diffeomorphisms of the circle with this property, as shown by Katznelson in [10]. However, as we will prove below, there are no such examples in the realm of multicritical circle maps. Our main theorem can thus be stated as follows.
Theorem A. If f : S 1 → S 1 is a C 3 multicritical circle map without periodic points, then f admits no σ-finite invariant measure which is absolutely continuous with respect to Lebesgue measure.
The proof of this result (to be given in Section 4) will comprise two separate arguments. The first argument will prove the statement for almost all irrational rotation numbers only: a certain subset of the set of rotation numbers of bounded type will be excluded.
The second argument will prove the statement for all bounded type rotation numbers. In both cases, the Schwarzian derivative of f is used in a fundamental way, which is why we restrict our attention to C 3 dynamics. However, it is quite possible that the statement of Theorem A holds true under less regularity (perhaps C 2+α smoothness is enough).
Brief summary. Here is how the paper is organized. In the preliminary Section 2, we present the basic facts about multicritical circle maps and recall the fundamental tools: the real bounds, the cross-ratio inequality, Koebe's distortion principle, Yoccoz's inequality. In Section 3, we establish a criterion for non-existence of σ-finite absolutely continuous invariant measures. Since this is a slight generalization of [10, Th. 1], we call it the Katznelson criterion. In Section 4, we use Katznelson's criterion to prove two particular versions of Theorem A, namely Theorem 4.1 and Theorem 4.3. The former deals with all unbounded type rotation numbers and most bounded type ones, and its proof uses Yoccoz's inequality. The latter deals exclusively with bounded type rotation numbers, and its proof depends on a negative Schwarzian property of first return maps whose proof is given in Appendix A. Combining Theorems 4.1 and 4.3, we immediately deduce Theorem A.
Preliminaries
The non-wandering set Ω(f ) of a circle homeomorphism f without periodic points can be either the whole circle, in which case we say that f is minimal, or else a Cantor set. In the latter case, we say that f is a Denjoy counterexample, or that Ω(f ) is an exceptional minimal set. In both cases, the rotation number of f is necessarily irrational.
In his classical article [1], Denjoy constructed circle diffeomorphisms (of class C 1+α for some α > 0) having an arbitrary irrational rotation number and possessing an exceptional minimal set. For any such diffeomorphism f , even when its minimal set has zero Lebesgue measure, it is easy to construct an f -invariant σ-finite measure which is absolutely continuous with respect to Lebesgue. Indeed, it is enough to consider Lebesgue measure on any interval I in the complement of Ω(f ), and then spread this measure by f to the whole orbit of I, namely the collection {f n (I) : n ∈ Z}. This produces an f -invariant σ-finite measure (definitely not finite), which is absolutely continuous since f , being smooth, preserves sets of zero Lebesgue measure.
One might be tempted to think that such σ-finite, absolutely continuous invariant measures can only be constructed when the diffeomorphism f has a wandering interval (such as I above), but in [10] Katznelson constructed minimal C ∞ diffeomorphisms (with very special rotation numbers) which do admit such invariant measures.
In the context of circle maps with critical points, we recall that Hall was able to construct in [8] (see also [13]) C ∞ circle homeomorphisms which are Denjoy counterexamples. Hence the same construction explained above can be performed here in order to produce invariant measures which are σ-finite and absolutely continuous with respect to Lebesgue. We remark that the critical points of maps studied in both [8] and [13] satisfy some flatness condition.
The main result of our paper, namely Theorem A, states that there are no such examples amongst smooth circle homeomorphisms whose critical points satisfy the following nonflatness condition.
Definition 2.1. A multicritical circle map is an orientation-preserving C 3 circle homeomorphism f having N ≥ 1 critical points, all of which are non-flat.
Being a homeomorphism, a multicritical circle map f has a well defined rotation number. We will focus on the case when this number is irrational, which is equivalent to saying that f has no periodic orbits. In particular, f is uniquely ergodic: it preserves a unique Borel probability measure µ. Furthermore, we have the following fundamental result due to J.-C. Yoccoz [15].
Theorem 2.2. Let f be a multicritical circle map with irrational rotation number ρ. Then f is topologically conjugate to the rigid rotation R ρ , i.e., there exists a homeomorphism h : S 1 → S 1 such that h ◦ f = R ρ ◦ h. Therefore, the unique f -invariant probability measure µ is just the push-forward of the Lebesgue measure under h −1 , that is, µ(A) = λ(h(A)) for any Borel set A, where λ denotes the normalized Lebesgue measure on the unit circle (recall that the conjugacy h is unique up to post-composition with rotations, so the measure µ is well-defined).
Note, in particular, that µ has no atoms and gives positive measure to any non-empty open set. However, as already mentioned in the introduction, µ is never absolutely continuous with respect to Lebesgue. More precisely:
Theorem 2.3. Let f be a multicritical circle map with irrational rotation number. Then its unique invariant probability measure is purely singular with respect to Lebesgue measure.
This theorem was proved by Khanin in the late eighties, by means of a certain thermodynamic formalism [11, Theorem 4] (see also [7, Proposition 1]). We would like to point out that Theorem 2.3 is a straightforward consequence of our main result, namely Theorem A, as it follows from the simple observation that either µ is absolutely continuous with respect to Lebesgue, or else it is singular. Otherwise we would have a decomposition µ = ν 1 + ν 2 , where ν 1 is absolutely continuous, ν 2 is singular and both are non-zero. Since f preserves sets of zero Lebesgue measure, both ν 1 and ν 2 would be f -invariant, contradicting the unique ergodicity of f . Since by Theorem A f admits no invariant measure which is absolutely continuous (neither finite nor σ-finite), Theorem 2.3 follows. For more on the ergodic theory of multicritical circle maps, see [4].
2.1. The real bounds. As is well known, any irrational number ρ ∈ (0, 1) has an infinite continued fraction expansion, say ρ = [a 0 , a 1 , a 2 , · · · ]. The coefficients a n are called the partial quotients of ρ. Truncating this expansion at level n − 1, we obtain a sequence of irreducible fractions p n /q n = [a 0 , a 1 , · · · , a n−1 ], which are called the convergents of the irrational ρ. The sequence of denominators q n , which we call the return times, satisfies q 0 = 1, q 1 = a 0 , q n+1 = a n q n + q n−1 for n ≥ 1. Now let f be a circle homeomorphism with rotation number ρ(f ) = ρ. For any given x ∈ S 1 we construct a nested sequence of partitions of the circle {P n (x)} n∈N as follows: for each non-negative integer n, let I n (x) be the interval with endpoints x and f qn (x) containing f q n+2 (x), namely, I n (x) = [x, f qn (x)] and I n+1 (x) = [f q n+1 (x), x]. We write I j n (x) = f j (I n (x)) for all j and n. It is well known that, for each n ≥ 0, the collection of intervals P n (x) = {I i n : 0 ≤ i ≤ q n+1 − 1} ∪ {I j n+1 : 0 ≤ j ≤ q n − 1} is a partition of the circle modulo endpoints (see for instance [3, Lemma 2.4]), called the n-th dynamical partition associated to x. The intervals of the form I i n are called long, whereas those of the form I j n+1 are called short. The following fundamental result was obtained by Herman and Świątek in the late eighties [9, 14].
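As a quick numerical illustration of the return-time recursion q n+1 = a n q n + q n−1 (not part of the paper), the following sketch computes the convergents p n /q n from a list of partial quotients; for the golden-mean rotation number all partial quotients equal 1 and the q n are the Fibonacci numbers.

```python
from fractions import Fraction

def convergents(partial_quotients):
    """Convergents p_n/q_n of [a_0, a_1, ...], using the recursion
    q_0 = 1, q_1 = a_0, q_{n+1} = a_n*q_n + q_{n-1} (and likewise for p_n)."""
    p_prev, q_prev = 1, 0          # formal values one step before p_0/q_0 = 0/1
    p, q = 0, 1
    out = []
    for a in partial_quotients:
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        out.append(Fraction(p, q))
    return out

# Golden-mean rotation number: partial quotients all equal to 1.
print(convergents([1] * 8))   # 1, 1/2, 2/3, 3/5, 5/8, 8/13, 13/21, 21/34
```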
Theorem 2.4 (Real bounds). Given N ≥ 1 in N and d > 1 there exists a universal constant C = C(N, d) > 1 with the following property: for any given multicritical circle map f with irrational rotation number, and with at most N critical points whose criticalities are bounded by d, there exists n 0 = n 0 (f ) ∈ N such that for each critical point c of f , for all n ≥ n 0 , and for every pair I, J of adjacent atoms of P n (c) we have C −1 |J| ≤ |I| ≤ C |J|, where |I| denotes the Euclidean length of an interval I.
A detailed proof of Theorem 2.4 can also be found in [2,3]. In what follows, two positive real numbers α and β are said to be comparable modulo f (or simply comparable) if there exists a constant K > 1, depending only on f , such that K −1 β ≤ α ≤ Kβ. This relation is denoted α ≍ β. Therefore, Theorem 2.4 states that |I| ≍ |J| for any two adjacent atoms I and J of a dynamical partition associated to a critical point of f .
2.2. Some geometric tools. We finish Section 2 by reviewing some classical tools from one-dimensional dynamics that will be used throughout the text. Given two intervals M ⊂ T ⊂ S 1 , with M compactly contained in T (written M ⋐ T ), we denote by L and R the two connected components of T \ M. We define the space of M inside T as the smallest of the ratios |L|/|M| and |R|/|M|. If the space is τ > 0, we say that T contains a τ -scaled neighbourhood of M.
Koebe distortion principle. There exists a constant C 0 , depending only on f , with the following property. If T is an interval such that f k | T is a diffeomorphism onto its image, for some k ∈ N, and if ∑ k−1 j=0 |f j (T )| ≤ ℓ, then for each interval M ⊂ T for which f k (T ) contains a τ -scaled neighbourhood of f k (M ), the distortion of f k on M is bounded by a constant depending only on ℓ, τ and C 0 . A proof of the Koebe distortion principle can be found in [12, Section IV.3, Theorem 3.1]. The cross-ratio of a pair of intervals M ⋐ T and the cross-ratio distortion of a homeomorphism f : S 1 → S 1 on such a pair are defined in the usual way, and the cross-ratio distortion satisfies a chain rule under composition. Given a family of intervals F on S 1 and a positive integer m, we say that F has multiplicity of intersection at most m if each x ∈ S 1 belongs to at most m elements of F.
Cross-Ratio Inequality. Given a multicritical circle map f : S 1 → S 1 , there exists a constant C > 1, depending only on f , such that the following holds: if M i ⋐ T i ⊂ S 1 , where i runs through some finite set of indices I, are intervals on the circle such that the family {T i : i ∈ I} has multiplicity of intersection at most m, then the product of the cross-ratio distortions of f on the pairs (M i , T i ) admits an upper bound depending only on C and m. The Cross-Ratio Inequality was obtained by Świątek in [14] (see also [3, Theorem B]). A sketch of the proof can be found in [2, page 5589]. We remark that similar estimates were used before by Yoccoz [15], on his way to proving Theorem 2.2 (see [12, Chapter IV] for this and much more). Now recall that, for a given C 3 map f , the Schwarzian derivative of f is the differential operator defined at every regular point x of f by Sf (x) = f ′′′ (x)/f ′ (x) − (3/2) (f ′′ (x)/f ′ (x)) 2 . We recall now the definition of an almost parabolic map, as given in [6, Section 4.1, page 354]. Definition 2.6. An almost parabolic map is a negative-Schwarzian C 3 diffeomorphism φ : J 1 ∪ · · · ∪ J ℓ → J 2 ∪ · · · ∪ J ℓ+1 , where J 1 , . . . , J ℓ+1 are consecutive intervals on the circle (or on the line). The positive integer ℓ is called the length of φ, and the positive real number σ is called the width of φ. The fundamental geometric control on almost parabolic maps is given by the following result.
Lemma 2.7 (Yoccoz's lemma). Let φ : J 1 ∪ · · · ∪ J ℓ → J 2 ∪ · · · ∪ J ℓ+1 be an almost parabolic map with length ℓ and width σ. There exists a constant C σ > 1 (depending on σ but not on ℓ) such that, for all k = 1, 2, . . . , ℓ, we have
C σ −1 |I| / min{k, ℓ + 1 − k} 2 ≤ |J k | ≤ C σ |I| / min{k, ℓ + 1 − k} 2 ,
where I = J 1 ∪ · · · ∪ J ℓ is the domain of φ. For a proof of Lemma 2.7 see [6, Appendix B, page 386]. In order to be able to apply Yoccoz's lemma, we will need the following result.
Lemma 2.8. For any given multicritical circle map f there exists n 0 = n 0 (f ) ∈ N such that, for any given critical point c of f and for any n ≥ n 0 , we have Sf j (x) < 0 for all j ∈ {1, · · · , q n+1 } and for every point x ∈ I n (c) that is a regular point of f j .
Likewise, we have Sf j (x) < 0 for all j ∈ {1, · · · , q n } and for every point x ∈ I n+1 (c) that is a regular point of f j .
The Katznelson criterion
As stated in the introduction, the proof of Theorem A will consist of two separate arguments. The first argument (see §4.1 below) deals with all irrational rotation numbers except those numbers (of bounded type) whose partial quotients are bounded by a certain constant B that depends only on the real bounds (Theorem 2.4). The second argument (see §4.2 below) takes care of the bounded type case. The arguments presented in both proofs exploit different aspects of the geometry of multicritical circle maps: the first uses the real bounds and Yoccoz's lemma, whereas the second uses only the real bounds.
Despite these differences, both parts of the proof will be based on a criterion for nonexistence of σ-finite measures which is a slightly generalized version of a criterion given by Katznelson [10, Th. 1.1]. Consider the following standing hypothesis on the geometry of the dynamical partitions P n (c 0 ) of a C 1 minimal homeomorphism f : S 1 → S 1 with respect to a given point c 0 ∈ S 1 .
Standing Hypothesis. There exist a sequence N ∋ n_k → ∞ of "good levels" and constants 1 < b_0 < b_1 and 0 < θ < 1 such that the following holds. For each ∆ ∈ P_{n_k}(c_0), the collection A_∆ = {J ∈ P_{n_k+1}(c_0) : J ⊂ ∆} can be decomposed as a disjoint union A_∆ = A_∆^1 ∪ A_∆^2 ∪ A_∆^3 satisfying four conditions (i)-(iv), the last of which reads:
(iv) The sub-collections A_∆^1 and A_∆^2 have the same number of elements.
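Conditions (i)-(iii) are invoked repeatedly in the proofs that follow; the formulation below is a sketch consistent with that usage and should be read as a guide rather than as the verbatim hypothesis:
(i) the elements of A_∆^1 and A_∆^2 can be paired off so that, for each pair (J′, J″) ∈ A_∆^1 × A_∆^2, we have |J′| ≥ b_0 |J″|;
(ii) for each such pair there exists k ∈ N such that f^k maps J′ diffeomorphically onto J″ with Df^k(x) ≥ b_1^{-1} for all x ∈ J′ (equivalently, Df^{-k}(y) ≤ b_1 for all y ∈ J″);
(iii) the union Ω = Ω_1 ∪ Ω_2 of all atoms belonging to A_∆^1 ∪ A_∆^2 satisfies λ(Ω) ≥ θ|∆|.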
Theorem 3.1. Let f: S^1 → S^1 be a C^1 minimal homeomorphism satisfying the above standing hypothesis. Then f does not admit a σ-finite invariant measure which is absolutely continuous with respect to Lebesgue measure.
Proof. Assume by contradiction that there exists a σ-finite measure µ which is invariant under f and is absolutely continuous with respect to Lebesgue measure. Let ψ = dµ/dλ be the corresponding Radon-Nikodym derivative. This is a Borel measurable function which is positive and finite Lebesgue a.e., and it satisfies the cocycle identity (1). By an easy induction, that cocycle identity can be written more generally as ψ = (ψ ∘ f^k)·Df^k, Lebesgue almost everywhere, for every k ≥ 1 (3). Fix a small number 0 < δ < 1; we will need it small enough that 1 + δ < b_0. For c > 0 let E_c = {x ∈ S^1 : c ≤ ψ(x) ≤ (1 + δ)c}; since ψ is positive and finite Lebesgue a.e., we have λ(E_c) > 0 for some choice of c. We choose such c and from now on write E = E_c. By the Lebesgue density theorem, λ-a.e. x ∈ E is such that the density of E at x is 1. Hence for each ε > 0 we can find a good level n_k ∈ N and an atom ∆ ∈ P_{n_k}(c_0) such that λ(∆ \ E) ≤ ε|∆| (5). We will show that the assumption at the start of this proof contradicts our standing hypothesis on f if we take ε sufficiently small. How small ε has to be will be determined in the course of the argument to follow. Let A_∆ and A_∆^i, i = 1, 2, 3, be as defined before, and for i = 1, 2 let Ω_i = ∪_{J ∈ A_∆^i} J. Then (iii) in our standing hypothesis tells us that Ω = Ω_1 ∪ Ω_2 satisfies λ(Ω) ≥ θ|∆|.
Combining this with (5), we get λ(Ω \ E) ≤ ε|∆| ≤ εθ^{-1} λ(Ω), so that λ(E ∩ Ω) ≥ (1 − εθ^{-1}) λ(Ω), provided ε is so small that εθ^{-1} < 1. Note that our standing hypothesis also tells us that b_0 λ(Ω_2) ≤ λ(Ω_1) ≤ b_1 λ(Ω_2) (6). These inequalities imply that λ(Ω_1) and λ(Ω_2) are each comparable to λ(Ω), with constants depending only on b_0 and b_1. (Note that nothing is said about the sub-collection A_∆^3: it plays no role in the arguments to come.)
Using (5) and the first inequality in (6), we get a lower bound for the relative measure of E inside Ω_1, and this lower bound will be positive (in fact close to one) provided ε is sufficiently small. Similarly, using (5) and the second inequality in (6), we deduce analogous estimates for Ω_1 and Ω_2 in terms of a quantity η = η(ε) which controls the proportion of each not covered by E. Note that η → 0 when ε → 0. Now, since both Ω_1 and Ω_2 are disjoint unions of atoms in P_{n_k+1}(c_0), it follows from (9) that there exist atoms J_1 ∈ A_∆^1 and J_2 ∈ A_∆^2 such that λ(J_1 \ E) ≤ η|J_1| and λ(J_2 \ E) ≤ η|J_2| (10). Let k ∈ N be such that f^k maps J_1 diffeomorphically onto J_2, and let us estimate the Lebesgue measure of f^{-k}(J_2 \ E). By (ii) in our standing hypothesis and the chain rule we have Df^{-k}(y) ≤ b_1 for all y ∈ J_2. Since, by (10), J_2 \ E has relative measure at most η in J_2, it follows from (10) and (11) that λ(f^{-k}(J_2 \ E)) ≤ b_1 η |J_2| (12). Set J*_1 = (J_1 ∩ E) \ f^{-k}(J_2 \ E), which by the above has positive measure, indeed measure close to |J_1|. But now observe that the equality ψ = (ψ ∘ f^k)·Df^k holds Lebesgue almost everywhere: this is simply the cocycle identity (3). Since for every x ∈ J*_1 we have both x ∈ E and f^k(x) ∈ E, it follows from this equality and the definition of E that, for Lebesgue a.e. x ∈ J*_1, the derivative Df^k(x) lies between (1 + δ)^{-1} and 1 + δ.
Combining (12) and (13) and cancelling out |J_2| from both sides of the resulting inequality, we deduce at last the inequality (14). But since (1 + δ)^{-1} b_0 > 1, the inequality (14) is clearly violated if η is sufficiently small, which is certainly the case if we choose ε sufficiently small. We have reached the desired contradiction, and the proof is complete.
Remark 3.2. A close inspection of the proof shows that we do not need the full strength of the standing hypothesis. All we need is that, given any interval I on the circle, we can find inside it two disjoint intervals J ′ , J ′′ , both comparable in size with I, with |J ′ | greater than |J ′′ | by a definite factor, and an iterate of f mapping J ′ onto J ′′ with bounded distortion.
Proof of Theorem A
We are now ready for the two major steps in the proof of Theorem A.
First step.
Theorem 4.1. If f is a multicritical circle map with at most N critical points whose criticalities are bounded by d, and if the rotation number of f is irrational and its partial quotients a_n satisfy lim sup a_n ≥ B, then f does not admit an invariant σ-finite measure which is absolutely continuous with respect to Lebesgue measure.
In the proof of Theorem 4.1, we will make extensive use of the following fact, which is an immediate consequence of [2, Lemma 4.2, page 5600].
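In the form in which it will be used below (see the discussion of critical spots in the next paragraph), this fact can be summarized as follows; we state it as a sketch, the precise constant being the one furnished by [2].
Lemma 4.2 (sketch). There exists a constant C > 1, depending only on f, such that for all sufficiently large n the following holds: if an interval of the form f^{q_n + k q_{n+1}}(I_{n+1}(c_0)), with 0 ≤ k ≤ a_{n+1} − 1, contains a critical point of f^{q_{n+1}}, then for every 0 ≤ i ≤ q_{n+1} we have
\[
\bigl|f^{i}\bigl(f^{\,q_n + k q_{n+1}}(I_{n+1}(c_0))\bigr)\bigr| \;\ge\; C^{-1}\,\bigl|f^{i}(I_n(c_0))\bigr| ,
\]
where I_n(c_0) is the atom of P_n(c_0) containing the interval in question.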
Following the terminology of [2], an interval such as f^{q_n + k q_{n+1}}(I_{n+1}(c_0)) appearing in the statement above, containing some critical point of f^{q_{n+1}}, is called a critical spot. Thus, Lemma 4.2 is saying that every critical spot is large, i.e., is comparable to the atom of P_n(c_0) in which it is contained, and the same happens to all its images up to time i = q_{n+1}.
Proof of Theorem 4.1. By Theorem 3.1, it suffices to show that an f as in the statement satisfies the standing hypothesis previously formulated, provided lim sup a n is sufficiently large. This will be proved with the help of the real bounds (Theorem 2.4), Yoccoz's inequality (Lemma 2.7) and Lemma 4.2 above.
Let c_0 be a critical point of f and consider the associated dynamical partitions P_n(c_0) for n ≥ n_0(f), where n_0(f) is as in Theorem 2.4. We are also assuming that such n is large enough that the iterates f^{q_n} and f^{q_{n+1}} have negative Schwarzian derivative at all points of I_{n+1}(c_0) (respectively I_n(c_0)) where their derivatives do not vanish (this is possible by Lemma 2.8). We will only consider in the proof long atoms of P_n(c_0), the proof for the short ones being the same. Moreover, we will first decompose the collection {J ∈ P_{n+1}(c_0) : J ⊂ I_n(c_0)}, and then we will spread this decomposition by iterating with f. So let ∆ = I_n(c_0), and consider the following consecutive atoms of P_{n+1}(c_0) inside ∆: ∆_0 = f^{q_n}(I_{n+1}(c_0)) and ∆_j = f^{j q_{n+1}}(∆_0) for j = 1, 2, . . . , a_{n+1} − 1; note that ∆_j = f^{q_{n+1}}(∆_{j−1}) for all 1 ≤ j ≤ a_{n+1} − 1. Some of these intervals may be critical spots (which are always comparable in size with |∆|, by Lemma 4.2). We look at the bridges between such critical spots, and pick the longest one. More precisely, let 0 ≤ j_1 ≤ j_2 ≤ a_{n+1} − 1 with j_2 − j_1 maximal with the property that φ = f^{q_{n+1}}|_{∆_{j_1} ∪ ··· ∪ ∆_{j_2}} is a diffeomorphism onto its image. Let T_n = ∆_{j_1} ∪ ··· ∪ ∆_{j_2}, R_n = ∆_{j_1}, L_n = ∆_{j_2} and M_n = T_n \ (L_n ∪ R_n) = ∆_{j_1+1} ∪ ··· ∪ ∆_{j_2−1}. Note that φ|_{M_n} is an almost parabolic map (see Definition 2.6) with length ℓ = j_2 − j_1 − 1, and note that ℓ ≥ a_{n+1}/(N + 1), where N is the number of critical points of f. Let us write J_1 = ∆_{j_1+1}, J_2 = ∆_{j_1+2}, . . . , J_ℓ = ∆_{j_1+ℓ} = ∆_{j_2−1}. From the real bounds (Theorem 2.4), we have |J_1| ≍ |∆| ≍ |J_ℓ|, with beau comparability constants. Therefore, by Yoccoz's inequality (Lemma 2.7), there exists a constant C_0 > 1, depending only on f, such that, for all 1 ≤ j ≤ ℓ, the two-sided estimate (15) holds, bounding |J_j| above and below by constant multiples of |∆|/[min{j, ℓ + 1 − j}]^2. Now we claim that there exists a constant τ > 0 (depending only on f) such that f^i(T_n) contains a τ-scaled neighbourhood of f^i(M_n) for all i ∈ {0, ··· , q_{n+1}}. Indeed, again by combining Theorem 2.4 with Lemma 4.2 we obtain the claim for both i = 0 and i = q_{n+1}. By the Cross-Ratio Inequality (note that the intervals T_n, f(T_n), ..., f^{q_{n+1}−1}(T_n) are pairwise disjoint), we deduce the claim for any i ∈ {1, ··· , q_{n+1} − 1}. With this at hand, and since f^i|_{T_n} is a diffeomorphism for any i ∈ {0, ..., q_{n+1}}, we can apply the Koebe distortion principle (Lemma 2.5) in order to obtain a constant K = K(f) > 1 such that f^i|_{M_n} has distortion bounded by K for each i ∈ {0, ··· , q_{n+1}}. Let us now define B = 2(N + 1)⌈√(2KC_0)⌉ + 1. We are assuming from now on that n is one of infinitely many natural numbers such that a_{n+1} ≥ B. Let m be the smallest natural number such that K C_0^2 m^{−2} ≤ 1/2; in other words, let m = ⌈√(2KC_0)⌉. Since a_{n+1} ≥ B, we have ℓ ≥ a_{n+1}/(N + 1) > 2m, so that in particular min{m, ℓ + 1 − m} = m. Thus, setting J′ = J_1 and J″ = φ^{m−1}(J′) = J_m, it follows from (15) that C_0^{−2} m^{−2} |J′| ≤ |J″| ≤ C_0^{2} m^{−2} |J′| ≤ (2K)^{−1} |J′| (16). We are now ready to define the desired decomposition of A_∆, the collection of all atoms of P_{n+1}(c_0) that are contained in ∆ = I_n(c_0): we let A_∆^1 = {J′}, A_∆^2 = {J″}, and we let A_∆^3 consist of all the remaining atoms.
We claim that this decomposition satisfies all conditions (i)-(iv) in the standing hypothesis. From (16), we have |J′| ≥ 2|J″|, so (i) is satisfied with b_0 = 2. By the mean value theorem, there exists ξ ∈ J′ such that Dφ^{m−1}(ξ) = |J″|/|J′| ≥ C_0^{−2} m^{−2}, where we have again used (16). By the Koebe distortion principle, there exists C_1 > 1 (depending only on f) such that Dφ^{m−1}(x) ≥ C_1^{−1} Dφ^{m−1}(ξ) for all x ∈ J′. Combining these facts we deduce that Dφ^{m−1}(x) ≥ (C_0^2 C_1 m^2)^{−1}, and so (ii) is certainly satisfied if we take k = q_{n+1}(m − 1) and b_1 = K C_0^2 C_1 m^2. For Ω = J′ ∪ J″, we now have, using (15), the simple bound λ(Ω) = |J′| + |J″| ≥ |J′| ≥ C_0^{−1} |∆|. This shows that (iii) is satisfied if we choose θ = C_0^{−1} < 1. Finally, condition (iv) is trivially satisfied because both A_∆^1 and A_∆^2 have a single element. Now we spread the previous decomposition along the whole family of long intervals of P_n(c_0). More precisely, for each i ∈ {1, ..., q_{n+1} − 1} we define a decomposition of A_∆, the collection of all atoms of P_{n+1}(c_0) that are contained in ∆ = f^i(I_n(c_0)), as follows: we let A_∆^1 = {f^i(J′)}, A_∆^2 = {f^i(J″)}, and we let A_∆^3 consist of all the remaining atoms. Again, we claim that this decomposition satisfies all conditions (i)-(iv) in the standing hypothesis. Indeed, for each i ∈ {1, ..., q_{n+1} − 1} let x′_i ∈ J′ and x″_i ∈ J″ be given by the mean value theorem, so that Df^i(x′_i) = |f^i(J′)|/|J′| and Df^i(x″_i) = |f^i(J″)|/|J″|. By bounded distortion and (16) we obtain K^{−1} C_0^{−2} m^{−2} ≤ |f^i(J″)|/|f^i(J′)| ≤ K C_0^{2} m^{−2} ≤ 1/2, so that in particular |f^i(J′)| ≥ 2|f^i(J″)| and (i) holds with b_0 = 2 as before. Therefore, just as before, (ii) is again satisfied with k = q_{n+1}(m−1) and b_1 = K C_0^2 C_1 m^2 = K C_0^2 C_1 ⌈√(2KC_0)⌉^2. By Lemma 4.2, the i-th iterate of a critical spot contained in I_n(c_0) is comparable to f^i(I_n(c_0)) for all i ∈ {0, 1, ..., q_{n+1}}, and then, by Theorem 2.4, the interval f^i(J′) is comparable to f^i(I_n(c_0)) as well, which implies (iii). Again, condition (iv) is trivially satisfied. Summarizing, we have shown that, for infinitely many values of n, the partitions P_n(c_0) satisfy conditions (i) through (iv) of the standing hypothesis. Therefore, by Theorem 3.1, f does not admit a σ-finite invariant measure which is absolutely continuous with respect to Lebesgue measure. This finishes the proof.
Second step.
We now move to the bounded type case. Here our goal will be to prove the following result.
Theorem 4.3. If f is a multicritical circle map whose rotation number is irrational of bounded type, then f does not admit a σ-finite invariant measure which is absolutely continuous with respect to Lebesgue measure.
In the proof of Theorem 4.3 we will make use of the following two auxiliary results.
Proposition 4.4. Given a multicritical circle map f with an irrational rotation number of bounded type, there exist constants C 0 > 1 and 0 < λ 0 < λ 1 < 1 with the following property.
For each x ∈ S^1, each n, k ≥ 0 and every pair of atoms I ∈ P_n(x) and J ∈ P_{n+k}(x) with J ⊆ I, the ratio |J|/|I| admits two-sided exponential bounds in k, with constants C_0, λ_0, λ_1 (see the sketch below).
Proposition 4.5. Given a multicritical circle map f with an irrational rotation number of bounded type, there exists n_0 = n_0(f) ∈ N such that for all x_0 ∈ S^1 and all n ≥ n_0 we have Sf^{q_{n+1}}(x) < 0 for all x ∈ I_n(x_0) which are regular points of f^{q_{n+1}}. Likewise, we have Sf^{q_n}(x) < 0 for all x ∈ I_{n+1}(x_0) which are regular points of f^{q_n}.
We postpone the proof of both Proposition 4.4 and Proposition 4.5 until Appendix A. We emphasize that the statement of Proposition 4.4 is false for unbounded combinatorics. On the other hand, Proposition 4.5 is most likely true for any irrational rotation number (however, this more general fact will not be needed in this paper).
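In the proof of Lemma 4.6 below, Proposition 4.4 is invoked to produce constants 0 < λ_0 < λ_1 < 1 and C_0 > 1 controlling |J|/|I| (estimates (17) and (18) there); a formulation consistent with that usage, stated here as a guide rather than verbatim, is
\[
C_0^{-1}\,\lambda_0^{\,k} \;\le\; \frac{|J|}{|I|} \;\le\; C_0\,\lambda_1^{\,k},
\qquad J \in P_{n+k}(x),\; J \subseteq I \in P_n(x).
\]
In words: for maps of bounded type, passing from one level of the dynamical partition to the next contracts lengths at a definite exponential rate, uniformly over all base points of the circle; it is precisely this uniformity over all base points (not just critical points) that fails for unbounded combinatorics.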
Our proof of Theorem 4.3 will be based on the following lemma. Recall that we are fixing our attention on a critical point c of f. Below, we use the following notation: for all i ≥ 0, let c_{-i} = f^{-i}(c); we write accordingly I_n(c_{-i}) = f^{-i}(I_n(c)) for all n ≥ 0 and all i ≥ 0.
Lemma 4.6. There exist constants K > 1 and 0 < θ < 1 such that the following holds for all n sufficiently large and each 0 ≤ i < q_n. There exist subintervals ∆′_{i,n} and ∆″_{i,n} satisfying the properties (i)-(iv) listed below.
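The subintervals in question are the ones constructed at the end of the proof (∆′_{i,n} = M and ∆″_{i,n} = f^{q_n}(M)); the list below records the four properties in the form in which they are verified there and then used in the proof of Theorem 4.3, and should be read as a sketch rather than as the verbatim statement:
(i) ∆′_{i,n} and ∆″_{i,n} are disjoint subintervals of I_n(c_{-i}) ∪ I_{n+1}(c_{-i});
(ii) |∆′_{i,n}| ≥ d_0 |∆″_{i,n}| for a constant d_0 > 1 independent of i and n;
(iii) |∆′_{i,n}| ≥ θ |I_n(c_{-i}) ∪ I_{n+1}(c_{-i})|;
(iv) f^{q_n} maps ∆′_{i,n} diffeomorphically onto ∆″_{i,n} with distortion bounded by K.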
Proof. We assume from the start that n is so large that f^{q_n}|_{I_{n+1}(c_{-i})} has negative Schwarzian derivative for all 0 ≤ i < q_n. This is possible by Proposition 4.5. Note that each c_{-i} for 0 ≤ i < q_{n+1} is a critical point of f^{q_n}. In what follows, we keep n and 0 ≤ i < q_n fixed.
Note that for all even k ≥ 0 we have I_{n+k+1}(c_{-i}) ⊆ I_{n+1}(c_{-i}). By Proposition 4.4, there exist constants 0 < λ_0 < λ_1 < 1 and C_0 > 1 for which the corresponding estimates (17) and (18) hold. Let us write I = I_{n+k+1}(c_{-i}) and J = f^{q_n}(I); these are obviously disjoint intervals (see Figure 1), and they are both atoms of P_{n+k}(c_{-i}). Combining (17) with (18), we deduce that there exists a constant C_1 > 1 (independent of n and k) for which the estimate (19) holds. Note that f^{q_n}|_I : I → J has at most N critical points, and has negative Schwarzian derivative at all regular points. Note also that, by choosing k sufficiently large, we can make |J| definitely smaller than |I|. The meaning of "definitely smaller", and thus how large k has to be, will be clear in a moment. For p ≥ 0, let us denote the number of atoms of P_{n+k+p}(c_{-i}) inside I (or J) by a = a(n, k, p). Then we have 2^p ≤ a ≤ (A + 1)^p, where A = sup a_n < ∞ is the least upper bound on the partial quotients of the rotation number of f. Choose p = p(N) smallest with the property that 2^p > 3N + 2. Since f^{q_n}|_I has at most N critical points, and since a > 3N + 2, it follows from the pigeonhole principle that there exist 3 consecutive atoms of P_{n+k+p}(c_{-i}) inside I, say L, M, R, such that the open interval T = int(L ∪ M ∪ R) contains no critical point of f^{q_n}. Hence f^{q_n}|_T : T → f^{q_n}(T) is a diffeomorphism with negative Schwarzian derivative. Applying Koebe's non-linearity principle, we obtain the non-linearity bound (20) for f^{q_n}|_M, in which τ denotes the space of M inside T, namely τ = min{|L|, |R|}/|M|. From the real bounds, we know that τ ≥ C_2, for some constant C_2 > 0. Using this fact in (20) and integrating the resulting inequality, we deduce the distortion bound (21) for f^{q_n}|_M. Now, applying once again Proposition 4.4 (note that we are using the bounded type hypothesis!), it follows that there exists a constant C_3 > 1 depending on A for which the estimate (22) holds, as well as (23). Putting together (19), (22) and (23), we deduce (24). Likewise, putting together (17), (19) and (24), we get (25). Now let us choose k ≥ 1 smallest with the property (26), where d_0 = min_{i,n} d(i, n) > 1. Such k exists (and is independent of n) because λ_1 < 1.
To finish the proof, we define ∆′_{i,n} = M and ∆″_{i,n} = f^{q_n}(M). These, we claim, are the intervals satisfying properties (i)-(iv) in the statement. Indeed, property (i) is clear. Property (iv) follows directly from (21) if we take K = e^{2/C_2}. Property (ii) follows from inequalities (24) and (26). Finally, property (iii) follows from (25), provided we choose θ accordingly. The proof is complete.
Proof of Theorem 4.3. The proof will be based on the generalized Katznelson criterion given by Theorem 3.1. Our argument combines Lemma 4.6 with the Cross-Ratio Inequality.
It is enough to show that f satisfies the standing hypothesis stated prior to Theorem 3.1 concerning the sequence of dynamical partitions P n (c) for some choice of critical point c. For this purpose, as we have seen in the proof of that theorem (see also Remark 3.2), it suffices to prove the following statement.
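The statement we have in mind parallels Remark 3.2 and the properties furnished by Lemma 4.6; we record it in the following form, as a sketch whose constants are the ones produced in the course of the proof.
Claim. There exist constants K > 1, d_0 > 1 and κ > 0 such that, for every sufficiently large n and every atom I ∈ P_n(c), one can find subintervals ∆′, ∆″ ⊂ I with the following properties: (a) some iterate of f maps ∆′ diffeomorphically onto ∆″ with distortion bounded by K, and |∆′| ≥ d_0 |∆″|; (b) |∆′| ≥ κ|I| and |∆″| ≥ κ|I|, i.e. both intervals are comparable in size with I; (c) ∆′ and ∆″ are disjoint.
By Remark 3.2, such a claim is exactly what is needed to run the argument of Theorem 3.1.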
The comparability constants and bounds implicit in this statement depend only on the real bounds for f and the bound on the combinatorics. To simplify the notation a bit, let us write J k = I k (c) ∪ I k+1 (c) for all k ≥ 0. In order to prove the claim, we proceed through the following steps.
(i) We may assume that I is a long atom of P n (c), say I = f q n+1 −i (I n (c)), where 1 ≤ i ≤ q n+1 − 1. If I happens to be a short atom, all we have to do is recall that every short atom of P n (c) is a long atom of P n+1 (c). (ii) The interval T = f q n+1 (I n (c)) contains the interval J n+4 in its interior, with definite space on both sides (see Figure 2). To see why this is true, first note that, by the real bounds, the interval J n+4 is comparable to |I n (c)|, i.e., |J n+4 | ≍ |I n (c)|. Consider the following two atoms of P n+1 (c), which also lie inside T : L * = f q n+1 (I n+2 ) ⊂ I n+1 (c) and R * = f qn+q n+1 (I n+1 (c)) ⊂ I n (c) .
Both these intervals share an endpoint with T (one on the left, the other on the right). By simple combinatorics, we see that J_{n+4} ⊂ T is disjoint from both L* and R*. But by the real bounds, we have |L*| ≍ |I_{n+1}(c)| and |R*| ≍ |I_n(c)|. If we denote by L and R the two connected components of T \ J_{n+4}, then one of them contains L* and the other contains R*. For definiteness, we assume that L ⊇ L* and R ⊇ R*. Hence we have |L| ≍ |I_{n+1}(c)| ≍ |T| and |R| ≍ |I_n(c)| ≍ |T|. (iii) In particular, (ii) tells us that the cross-ratio [J_{n+4}, f^{q_{n+1}}(I_n(c))] is bounded away from 0 and ∞. (iv) Now look at the interval f^{-i}(J_{n+4}). Observe that f^{-i}(J_{n+4}) = I_{n+4}(c_{-i}) ∪ I_{n+5}(c_{-i}) (in the notation introduced prior to Lemma 4.6). Hence we can apply Lemma 4.6 (with n replaced by n + 4) and deduce that there exist intervals ∆′, ∆″ ⊂ f^{-i}(J_{n+4}) satisfying properties (i)-(iv) of that lemma. In particular, we have the comparability (27) of |∆′| and |∆″| with |f^{-i}(J_{n+4})|. (v) The intervals ∆′ and ∆″ already satisfy properties (a) and (c) in the claim. Therefore, all we have to do is to verify that (b) holds as well. For this, it suffices to show that the intervals f^{-i}(J_{n+4}) and I = f^{q_{n+1}}(I_n(c_{-i})) have comparable lengths. Let L_i = f^{q_{n+1}-i}(I_{n+2}(c)) and R_i = f^{q_n+q_{n+1}-i}(I_{n+1}(c)) be the images of L* and R* under f^{-i}; since these are both atoms of P_{n+1}(c) contained in the same atom I ∈ P_n(c), we deduce from the real bounds that |L_i| ≍ |I| ≍ |R_i|. By the cross-ratio inequality, the cross-ratio distortion CrD(f^i; f^{-i}(J_{n+4}), I) is bounded above. Combining this fact with (iii), we deduce that the cross-ratio [f^{-i}(J_{n+4}), I] is bounded below. Since the two lateral intervals L_i, R_i ⊂ I and the total interval I have comparable lengths, it follows that the middle interval f^{-i}(J_{n+4}) ⊂ I also has length comparable to |I|. Together with (27), this shows at last that |∆′| ≍ |I| ≍ |∆″|.
This completes the proof of our claim. And as we had already observed, the claim implies that f satisfies the hypotheses of Theorem 3.1. Therefore it satisfies the conclusion as well: f does not admit a σ-finite absolutely continuous invariant measure. This finishes the proof of Theorem 4.3.
4.3. The punchline. Our main theorem, namely Theorem A, is now an immediate consequence of steps 1 and 2, or more precisely, of Theorems 4.1 and 4.3.
Appendix A. The negative Schwarzian property for bounded combinatorics Our goal in this appendix is to provide a proof of both Proposition 4.4 and Proposition 4.5.
A.1. Bounded geometry. Let f be a C 3 multicritical circle map (as in Definition 2.1) with irrational rotation number. We say that f has bounded geometry at x ∈ S 1 if there exists K > 1 such that for all n ∈ N and for every pair I, J of adjacent atoms of P n (x) we have K −1 |I| ≤ |J| ≤ K |I| . Following [5, Section 1.4], we consider the set A = A(f ) = {x ∈ S 1 : f has bounded geometry at x} .
In other words, x ∈ A if |I| ≍ |J| for any two adjacent atoms I and J of the dynamical partition associated to x at any level n. As explained in [5, Section 1.4], the set A is f-invariant. Moreover, as follows from the classical real bounds of Herman and Świątek (Theorem 2.4), all critical points of f belong to A. Being f-invariant and non-empty, the set A is dense in the unit circle. However, even in the case of maps with a single critical point, A can be rather small. Indeed, the following is [5, Theorem D].
Theorem A.1. There exists a full Lebesgue measure set R ⊂ (0, 1) of irrational numbers with the following property: let f be a C 3 critical circle map with a single (non-flat) critical point and rotation number ρ ∈ R. Then the set A(f ) is meagre (in the sense of Baire) and it has zero µ-measure (where µ denotes the unique f -invariant probability measure).
By contrast, if f has bounded combinatorics, then the set A(f ) is the whole circle (as a consequence, the full Lebesgue measure set R ⊂ (0, 1) given by Theorem A.1 contains no bounded type numbers). Let us be more precise.
Theorem A.2. For any given multicritical circle map f with bounded combinatorics there exists a constant C > 1, depending only on f, such that for any given point x ∈ S^1, for all n ∈ N, and for every pair I, J of adjacent atoms of P_n(x) we have C^{-1}|I| ≤ |J| ≤ C|I|. We remark that, precisely because f has bounded combinatorics, Proposition 4.4 follows at once from Theorem A.2. As explained in [5, Section 1.3], Theorem A.2 follows from a result of Herman [9], which states that f is quasisymmetrically conjugate to the corresponding rigid rotation. For the sake of completeness (and because it is going to be crucial in Section A.2 below), we would like to end Section A.1 by providing a different proof of Theorem A.2, without using Herman's result. With this purpose, we state first the following immediate consequence of Theorem 2.4, which only holds for bounded combinatorics (if ρ(f) = [a_0, a_1, ...] with sup_{n∈N}{a_n} ≤ B, we say that f has combinatorics bounded by B).
Corollary A.3. Given B > 1, N ≥ 1 in N and d > 1 there exists C = C(B, N, d) > 1 with the following property: for any given multicritical circle map f with combinatorics bounded by B, and with at most N critical points whose criticalities are bounded by d, there exists n 0 = n 0 (f ) ∈ N such that for each critical point c of f , for all n ≥ n 0 and for every pair of intervals I ∈ P n (c) and J ∈ P n+1 (c) satisfying J ⊆ I, we have that |I| ≤ C |J|.
The next result we will prove states that any two intersecting atoms belonging to the same level n of the dynamical partitions associated to a critical and a regular point respectively, are comparable. Both its statement and its proof are essentially borrowed from [2, Lemma 4.1, page 5599].
Lemma A.4. Let f be a multicritical circle map with bounded combinatorics. Let c be a critical point of f , and let x 0 be any point in the circle. If ∆ ∈ P n (c) and ∆ ′ ∈ P n (x 0 ) are two atoms such that ∆ ∩ ∆ ′ = Ø, then |∆| ≍ |∆ ′ |.
Note that Theorem A.2 follows at once by combining Theorem 2.4 with Lemma A.4 (we remark that Lemma A.4 will also be used in the proof of Proposition A.7 below). During the proof of Lemma A.4 we will use the following fact, which is [2, Lemma 3.3, page 5593].
Lemma A.5. There exists a constant C > 1, depending only on f , such that for all n ≥ 0 and all x ∈ S 1 we have: Proof of Lemma A.4. There are three cases to consider, according to the types of atoms we have: long/long, long/short, and short/short. More precisely, we have the following three cases.
A.2. The negative Schwarzian property. The remainder of this appendix is devoted to establish the following two facts.
Proposition A.6 (The C^1 bounds). For any given multicritical circle map f with bounded combinatorics there exists a constant K = K(f) > 1 such that the following holds. For any given x_0 ∈ S^1 and n ∈ N let I_n = I_n(x_0). Then Df^k(x) ≤ K |f^k(I_n)| / |I_n| for all x ∈ I_n and all k ∈ {0, 1, ..., q_{n+1}}. Moreover, ‖f^{q_{n+1}}‖_{C^1(I_n)} ≤ K. Likewise, if I_{n+1} = I_{n+1}(x_0), then Df^k(x) ≤ K |f^k(I_{n+1})| / |I_{n+1}| for all x ∈ I_{n+1} and all k ∈ {0, 1, ..., q_n}, and moreover ‖f^{q_n}‖_{C^1(I_{n+1})} ≤ K.
Proposition A.7 (The negative Schwarzian property). For any given multicritical circle map f with bounded combinatorics there exists n_0 = n_0(f) ∈ N such that for all x_0 ∈ S^1 and all n ≥ n_0 we have Sf^{q_{n+1}}(x) < 0 for all x ∈ I_n(x_0) which are regular points of f^{q_{n+1}}.
Likewise, we have
Sf^{q_n}(x) < 0 for all x ∈ I_{n+1}(x_0) which are regular points of f^{q_n}.
Note that Proposition A.7 immediately implies Proposition 4.5. We remark that both Proposition A.6 and Proposition A.7 are well known in the case when x 0 is a critical point of f , in which case they hold true for any irrational rotation number: see [6, Appendix A] for the case of critical circle maps with a single critical point, and see [3,Sections 3 and 4] for the case of multicritical circle maps. Our goal in this appendix is to generalize both results to the case when x 0 is a regular point of a multicritical circle map with bounded combinatorics. In the proof of Proposition A.6 we adapt the exposition in [3, pages 849-851], whereas in the proof of Proposition A.7 we adapt the exposition in [6, pages 380-381].
POTENTIAL OF ESSENTIAL OILS FOR THE CONTROL OF BROWN EYE SPOT IN COFFEE PLANTS
The objectives of this work were to assess the in vitro effect of essential oils extracted from cinnamon, citronella, lemon grass, India clove, tea tree, thyme, neem and eucalyptus on the conidia germination and on mycelial growth of Cercospora coffeicola, and their efficacy to control the brown eye spot in coffee seedlings (cultivars Catucaí 2SL, Catuaí IAC 62 and Mundo Novo 379/19) in a greenhouse, as well as their effects on the initial germination and infection events by scanning electron microscopy. All essential oils promoted the inhibition of conidia germination with increasing concentrations. India clove, cinnamon, neem, thyme and lemon grass oils inhibited the mycelial growth of C. coffeicola. The cinnamon and citronella oils were the most promising for brown eye spot control in all cultivars. In scanning electron microscopy, the cinnamon and citronella oils reduced germination and mycelial development of C. coffeicola in vivo, eight and 16 hours after inoculation, promoting, in some cases, the leakage of the cellular content. Essential oils of cinnamon and citronella reduced the incidence and severity of brown eye spot, in addition to presenting direct toxicity to the pathogen.
INTRODUCTION
The brown eye spot, caused by Cercospora coffeicola Berk & Cooke, is one of the most important diseases of the coffee plant (Coffea arabica L.), causing yield losses of up to 30%. The pathogen infects plantlets in the nursery as well as plants in the production stage, affecting leaves and fruits (Zambolim et al., 2005). The disease usually causes intense defoliation, predisposing the fruits to infection by other pathogens. Affected fruits have their maturation accelerated, which causes them to drop before harvest and, consequently, depreciates beverage quality (Pozza, 2008).
The brown eye spot is conventionally controlled using protective and systemic fungicides (Zambolim et al., 2005; Abrahão et al., 2009). However, the use of these products can pose health and environmental risks and can also lead to the development of resistant strains of the pathogens (Santos et al., 2006). On the other hand, the use of essential oils extracted from medicinal plants is a promising strategy, since they present antimicrobial properties (Schwan-Estrada & Stangarlin, 2003) and have proved promising for the alternative control of some plant diseases (Guiraldo et al., 2004). Fiori et al. (2000) reported the fungitoxic activity of essential oils extracted from lemongrass (Cymbopogon citratus (A.D.) Stapf) and eucalyptus (Corimbia citriodora Hook) on the germination and mycelial growth of Didymella bryoniae (Fuckel). Ranasingue et al. (2002) observed the fungitoxic activity of India clove (Syzygium aromaticum (Linne) Merril) and cinnamon (Cinnamomum zeylanicum Blume.) oils in the in vitro control of C. musae, Lasiodiplodia theobromae (Pat.) Griff & Maubl. and Fusarium proliferatum (Matsuhima) Nirenberg.
Some researchers have observed that, besides the in vitro activity, essential oils have potential for the control of some diseases. Medice et al. (2007) and Pereira et al. (2008) observed that thyme essential oil reduced the severity of soybean rust and of brown eye spot in coffee plants, respectively, in greenhouse and field experiments; Carneiro (2003) reported similar results. This work evaluated the in vitro effect of eight medicinal plant essential oils on the conidia germination and mycelial growth of C. coffeicola, their efficacy in the control of brown eye spot in three coffee plant cultivars, and their effects on the initial germination and infection events of the pathogen using scanning electron microscopy.
MATERIAL AND METHODS
The trials were conducted at the Federal University of Lavras (Lavras, Minas Gerais State, Brazil) from March to December 2008.
To obtain the C. coffeicola inoculum, naturally infected coffee leaves were collected in the field and kept in a humid chamber for three days. The conidia produced were then removed with a soft-bristle brush moistened in distilled water. The suspension obtained was filtered through gauze, and its concentration was determined in a hemocytometer and adjusted to 1.5 × 10⁴ conidia mL⁻¹. This concentration was used in all experiments.
To evaluate the toxicity on C. coffeicola germination, the essential oils extracted from tea tree (Melaleuca alternifolia Cheel), cinnamon (Cinnamomum zeylanicum Breym), lemongrass (Cymbopogon citratus Staph), citronella, India clove (Sizygium aromaticum L.), eucalyptus (Corymbia citriodora Hook), neem and thyme (Thymus vulgaris L.) were tested at concentrations of 0, 250, 500, 1000, 1500 and 2000 μL L⁻¹ in distilled and sterilized water. Powdered milk, at 10 g L⁻¹ in distilled and sterilized water, was added as a natural emulsifier to the 2000 μL L⁻¹ preparation, from which the other dilutions were obtained. In order to isolate the effect of the powdered milk, a treatment composed only of this substance, at 2000 μL L⁻¹, was added to the experiment.
Petri dishes of 6 cm diameter were used with 2% (w/v) agar-water medium (AW). The treatments were added to the medium once its temperature had fallen to 40 °C, before it was poured into the dishes, so that the final concentrations reached the pre-established values. After solidification of the medium, 500 μL of the conidia suspension of the pathogen were deposited on its surface and spread with a Drigalsky spatula. The dishes were then incubated at 25 °C, with a 12-hour photoperiod, for 24 hours. The experiment was conducted in a completely randomized design, with two dishes for each treatment, each one divided into four quadrants, in which 30 conidia per quadrant were evaluated, giving a total of eight repetitions. After incubation, germination was stopped by the addition of four drops of lactoglycerol solution, and the conidia germination percentage was assessed under a light microscope.
To evaluate the toxicity of the essential oils on the mycelial growth of C. coffeicola, the same treatments evaluated in the germination test were used, but only at the concentration of 1000 μL L⁻¹. Petri dishes of 11 cm diameter were used with 2% (w/v) potato dextrose agar medium (PDA). This medium was prepared and poured into the dishes according to the methodology described for the previous experiment. In the center of each dish, a 6 mm-diameter disk of medium containing young (four-day-old) C. coffeicola mycelium was placed. The cultures were then incubated at 25 °C with a 12-hour photoperiod, remaining under those conditions until the end of the evaluations. The experiment was conducted in a completely randomized design, with eight repetitions, each plot consisting of one dish. Colony diameters were evaluated every four days, until the mycelium of the control treatment occupied the whole surface of the medium. The mycelial growth speed index (MGSI) was then calculated by an adaptation of the Maguire (1962) formula.
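A common adaptation of Maguire's (1962) germination speed index to mycelial growth sums the increments in colony diameter divided by the number of days elapsed at each evaluation. The sketch below illustrates that computation; the function name, the example values and the exact form of the adaptation used here are assumptions for illustration only.

```python
def mgsi(diameters_mm, days):
    """Mycelial growth speed index, adapted from Maguire (1962).

    diameters_mm: colony diameters at each evaluation (same length as days)
    days: days elapsed since inoculation at each evaluation
    Returns the sum of diameter increments weighted by the evaluation day,
    i.e. MGSI = sum((D_i - D_{i-1}) / N_i), with D_0 = 0 at day 0.
    """
    index = 0.0
    previous = 0.0
    for diameter, day in zip(diameters_mm, days):
        index += (diameter - previous) / day
        previous = diameter
    return index

# Example: a control colony evaluated every 4 days
print(mgsi([18.0, 35.0, 52.0, 70.0], [4, 8, 12, 16]))
```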
With the objective of evaluating the efficiency of the essential oils in the control of brown eye spot, three coffee plant cultivars susceptible to C. coffeicola were chosen: Mundo Novo 379/19, Catuaí IAC 62 and Catucaí 2SL. Six-month-old seedlings were acquired from the Experimental Station of EPAMIG - South Minas Research Center (Lavras, Minas Gerais State, Brazil) - and transplanted to seven-liter pots containing a substrate composed of soil, bovine manure and sand at a 2:1:1 proportion. The plants were kept in a greenhouse during the whole experimental period, where they were periodically irrigated and fertilized according to the recommendations (Ribeiro et al., 1999).
At nine months of age, the coffee plants were sprayed to runoff with the essential oils of tea tree, cinnamon, lemongrass, citronella, India clove, eucalyptus, neem and thyme at a concentration of 1000 μL L⁻¹, powdered milk at 10 g L⁻¹, acibenzolar-S-methyl (ASM) (Bion®) at 200 mg L⁻¹ (used as the resistance-induction standard), and distilled water (control), using a manual sprayer. After 30 days, these treatments were repeated. Seven days after the first application, the plants were inoculated with the C. coffeicola conidia suspension and then kept in a humid chamber for 14 hours. The experiment was conducted in a randomized block design with three repetitions, each plot consisting of six plants. Five evaluations of brown eye spot were made, starting on the 21st day after inoculation and at 14-day intervals, according to the diagrammatic scale of Kushalappa & Chaves (1980). The areas under the incidence progress curve (AUIPC) and under the severity progress curve (AUSPC) of brown eye spot were then calculated according to Shaner & Finney (1977).
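The areas under the disease progress curves of Shaner & Finney (1977) are obtained by trapezoidal integration of the assessments over time; applied to incidence values this gives the AUIPC, and applied to severity values the AUSPC. The short sketch below is an illustrative implementation only (function and variable names are assumptions):

```python
def audpc(values, times):
    """Area under the disease progress curve (Shaner & Finney, 1977).

    values: disease incidence or severity at each evaluation time
    times: evaluation times (e.g. days after inoculation)
    Trapezoidal rule: mean of consecutive assessments times the interval.
    """
    area = 0.0
    for i in range(len(values) - 1):
        area += (values[i] + values[i + 1]) / 2.0 * (times[i + 1] - times[i])
    return area

# Five severity assessments (%) made every 14 days from day 21 after inoculation
severity = [2.0, 4.5, 7.0, 9.5, 12.0]
days = [21, 35, 49, 63, 77]
print(audpc(severity, days))  # AUSPC in %-days
```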
To evaluate the initial germination and infection events of C. coffeicola, nine-month-old coffee seedlings of the cultivars Mundo Novo 379/19, Catuaí IAC 62 and Catucaí 2SL were used, cultivated following the previously described methodology. The coffee plants were sprayed to runoff with the two most promising essential oils from the previous experiment, namely cinnamon and citronella at a concentration of 1000 μL L⁻¹. A control sprayed only with distilled and sterilized water was added to the experiment. Two days later, eight leaves of the third pair from each treatment were collected, washed in distilled and sterilized water and placed in plastic trays previously disinfested and prepared according to Magnani et al. (2007). On the abaxial surface of each leaf, six circles of 1.0 cm in diameter were drawn with a permanent marker pen. In the center of each circle, one 25 μL drop of the C. coffeicola conidia suspension was deposited. The trays with the leaves were covered with transparent plastic film and kept in a growth chamber at 25 °C with a 12-hour photoperiod until the end of the experiment. Samples for observation under a LEO EVO 40 scanning electron microscope were collected at four, eight, 16 and 48 hours after inoculation, using circular cuts (5 mm in diameter) made with a scalpel within each previously demarcated circle. The preparation and observation of the samples were carried out according to Bossola & Russell (1998).
The statistical analyses were conducted using the Sisvar v. 4.5 statistical software. Qualitative means were grouped using the Scott-Knott test (p < 0.05), and regression curves were fitted for the quantitative factors. The conidia germination percentage data were transformed to √(x + 1).
RESULTS AND DISCUSSION
The germination of the conidia presented quadratic behavior as the essential oil concentrations increased (Figure 1).The cinnamon, citronella, lemongrass and thyme oils totally inhibited the germination of the conidia starting from 1000 μL L -1 , while India clove and tea tree oils totally inhibited the conidia germination starting from 1500 and 2000 μL L -1 , respectively.Even at the highest concentration used, the eucalyptus and neem oils did not totally inhibit the germination of the C. coffeicola conidia.
The essential oils of cinnamon and citronella totally inhibited the mycelial growth of C. coffeicola, however, without differing from neem oil, which presented a reduction of 95.13% in the mycelial growth speed index (MGSI) (Figure 2).Thyme and lemongrass essential oils reduced MGSI in 56.14% and 13.78%, respectively, while the other oils did not differ from the control and the powdered milk.
The substances present in the essential oils, when in contact with the microorganisms, affect the integrity of the cell membranes, causing the spilling out of their contents (Piper et al., 2001).This fact has been observed by Medice et al. (2007) in soybean sprayed with essential oil of thyme and inoculated with P. pachyrhizi, and Pereira et al. (2008) in coffee plants treated with essential oil of thyme and inoculated with C. coffeicola and also by Cox et al. (2000) in Saccharomyces sp.exposed to the essential oil of tea tree.
In the experiment conducted in the greenhouse, no phytotoxicity symptom was observed due to the application of the essential oils.Significant interaction was not observed in the area under the incidence progress curve (AUIPC) of brown eye spot for the cultivars and products or substances.However, it was significant for the area under the severity progress curve (AUSPC).The cultivars Catuaí IAC 62 and Catucaí 2SL presented smaller AUIPC in relation to Mundo Novo 379/19 (Figure 3A).The acibenzolar-S-methyl standard treatment presented the highest reduction of AUIPC, 27.5%, followed by citronella and cinnamon oils, with reductions of 12.0% and 10.0%, respectively (Figure 3B).The other treatments did not differ among themselves and in relation to the controls.
Regarding the severity of the disease, it was verified that the ASM treatment reduced the AUSPC in Catucaí 2SL by 64.94% in relation to the control, followed by the citronella oil, with a reduction of 43.08% (Figure 3C). Eucalyptus, cinnamon and India clove oils reduced the AUSPC by 21.08%, 21.05% and 10.80%, respectively. In Catuaí IAC 62, the ASM treatment reduced the AUSPC by 58.28%, followed by the cinnamon oil, with a reduction of 38.25% (Figure 3D). In the same cultivar, lemongrass, tea tree, thyme, eucalyptus, citronella and neem oils reduced the AUSPC by 27.68%, 20.36%, 18.39%, 18.03%, 16.95% and 15.43%, respectively. In Mundo Novo 379/19, the ASM treatment reduced the AUSPC by 55.91%, followed by the citronella and cinnamon oils, with reductions of 29.69% and 25.02%, respectively (Figure 3E). The other treatments did not differ from the control. Several authors have confirmed the efficiency of acibenzolar-S-methyl and of some essential oils in the control of plant diseases. Pereira et al. (2008) obtained reductions of up to 35.0% and 16.1% in the area under the lesion number progress curve (AULNPC) in coffee plants sprayed with acibenzolar-S-methyl and essential oil of thyme, respectively. Possibly, the partial control of the disease obtained in this work by the application of the essential oils of cinnamon and citronella was due to the presence of compounds such as cinnamaldehyde and eugenol in the cinnamon oil and geraniol and citronellal in the citronella oil; according to Montes-Belmont & Carvajal (1998), these are the components with the strongest antimicrobial properties present in these oils.
In the images generated by the scanning electron microscope, it could be observed in all cultivars that germination of the C. coffeicola conidia began four hours after inoculation; however, more marked differences were observed from eight hours after inoculation onwards (Figure 4). In the control leaves of the three cultivars, the conidia were at an advanced germination stage, with little mycelial development on their surface (Figure 4 C, F and I). In contrast, in the observations made eight hours after inoculation on the leaves treated with the cinnamon and citronella oils, the conidia presented few and small germ tubes, or had not germinated at all. At 16 hours after inoculation, the control leaves of the three cultivars presented well-germinated conidia, with emission of a large number of germ tubes and well-developed mycelia. The same was not observed in the leaves sprayed with cinnamon and citronella oils in any of the cultivars, in which the conidia had germinated poorly, with low germ-tube emission and poorly developed mycelia. In some conidia, leakage of the cellular content could be observed, indicated by their plasmolysis. Medice et al. (2007) treated soybean leaves with essential oil of thyme, 3000 μL L⁻¹, and inoculated them with P. pachyrhyzi urediniospores seven days later. By means of scanning electron microscopy observations, the authors verified a reduction in the size of the uredia and in the number of urediniospores, part of which became plasmolyzed. Pereira et al. (2008) made similar observations in coffee leaves, as discussed below. It can be suggested that the control of brown eye spot in the coffee plant cultivars was mainly due to the direct effect of the essential oils on the pathogen; however, complementary studies should be conducted. In the specific case of the coffee plant, it was verified that the essential oils of cinnamon and citronella were the most promising for the control of brown eye spot. Therefore, they can be used for the control of brown eye spot in seedlings and in fields under organic coffee production, in which pesticides are not used, combined with other strategies to reduce the use of chemical products, which, over time, can cause irreversible damage to human health and the environment.
Figure 2 - Mycelial growth speed index (MGSI) of Cercospora coffeicola submitted to essential oils extracted from India clove (CL), cinnamon (CN), neem (NE), thyme (TH), lemongrass (LG), eucalyptus (EU), tea tree (TT) and citronella (CI) at a concentration of 1000 μL L⁻¹, powdered milk (PM) 10 g L⁻¹ and distilled water (WA). Averages followed by the same letter do not differ among themselves by the Scott-Knott test (p < 0.05).
CONCLUSIONS
The essential oils of cinnamon, citronella, lemongrass, India clove, tea tree, thyme, eucalyptus and neem inhibited the germination of C. coffeicola conidia in vitro with increasing concentrations. The essential oils promoted partial control of brown eye spot in coffee plants in the greenhouse. The oils of cinnamon and citronella were the most promising for the control of the disease in the Catucaí 2SL, Catuaí IAC 62 and Mundo Novo 379/19 coffee plant cultivars.
The oils of cinnamon and citronella reduced the germination and the mycelial development of C. coffeicola, promoting, in some cases, the spilling out of the cellular content.
Pereira et al. (2008) observed a similar effect in coffee leaves treated with the thyme oil, 500 μL L⁻¹, and inoculated with C. coffeicola. According to Amaral & Bara (2005), essential oils possibly act on the cell wall of the fungi, causing the leakage of the cellular content. Such an effect was also observed later by Rasooli et al. (2006), using transmission electron microscopy, where the essential oils of Thymus eriocalyx (Ronniger) Jalas and T. x-porlock promoted severe damage to the walls, membranes and cellular organelles of A. niger spores. According to the authors, exposure of the mycelia to the essential oils of T. eriocalyx and T. x-porlock induced morphological alterations in the hyphae, rupture of the plasma membrane and mitochondrial destruction.
Figure 4 -Scanning electron micrographs of coffee leaves from coffee plant cultivars Mundo Novo 379/19 (A, B and C), Catuaí IAC 62 (D, E and F) and Catucaí 2SL (G, H and I) eight hours after inoculation with Cercospora coffeicola.Plants sprayed with essential oils of cinnamon 1000 μL L -1 (A, D and G) and citronella 1000 μL L -1 (B, E and H) presenting conidia at the initial germination stage and, distilled and sterilized water (C, F and I) presenting conidia at an advanced germination stage.
The Effect of Soy Protein Isolate, Starch and Salt on Quality of Ready-to-Eat Restructured Beef Products
This experiment explored the effects of different additions of soy protein isolate, starch and salt on the quality characteristics of ready-to-eat restructured beef products. Ground beef was used as the experimental material, and products prepared with different levels of soy protein isolate, starch and salt were evaluated, after conditioning and restructuring, for thawing loss, yield, bond strength, texture and other quality characteristics. The results show that the quality indicators of the products improved as the amount of soy protein isolate increased, but when the amount exceeded 2% the products showed a bean flavor and white streaks, so the final addition of soy protein isolate should not exceed 2%. Similarly, the amount of starch added should not exceed 2%. With increasing salt addition the product's indicators also improved, but when the salt addition exceeded 1.5% the product was too salty, so the final optimum amount of salt was 1.5%.
Introduction
Due to the improvement in quality of life and the accelerated pace of life, people are paying more and more attention to the convenience, nutrition and safety of meat products while the demand for them increases [1]. At the same time, a large amount of scrap meat and minced meat produced during meat processing is not used effectively, which pollutes the environment, wastes a large amount of animal protein and causes large economic losses. If these materials are restructured, with certain nutrients added, costs can be saved and meat utilization increased, while safe, healthy and nutritious meat products are produced. Ready-to-eat restructured beef products are made from ground beef, scrap meat and deboned beef, with appropriate binders and seasonings or nutrients added, and processed by curing, tumbling, molding and similar operations; they are transported, stored and sold, packaged or in bulk, under frozen or refrigerated conditions, and consumers can eat them directly or after simple heat treatment [2]. Ready-to-eat restructured beef combines the processing technology of conditioned beef products and of restructured beef products, and has the product characteristics of both. It also avoids the tedium caused by long-term consumption of a single variety of meat product, satisfies people's growing consumption needs and opens up new outlets for production.
Soy protein isolate (SPI), starch and salt have a very significant effect on the improvement of meat quality and water-holding capacity. SPI and starch have strong water absorption capacity and gel-forming properties, and the interaction between the active groups of soy protein isolate and muscle proteins forms a more stable gel network structure that retains more moisture, thereby improving the water retention of the meat product. Salt acts on the meat protein system, increasing the amount of extracted myofibrillar protein, promoting cross-linking between protein polypeptide chains and enhancing the interactions between proteins, which form a stable three-dimensional network structure, thereby improving texture characteristics such as the hardness and chewiness of meat products.
The main purpose of this experiment was to investigate the effects of different added amounts of SPI, starch and salt on the quality characteristics of conditioned, restructured beef products. For the determination of water activity, after thawing for about 20 minutes, about 2.0 g of the thawed restructured meat was cut into small pieces and spread in a dedicated water activity measuring dish (covering at least the bottom of the dish); the dish was slid into the sample cell of the instrument, the sample cell cover was tightened and the power turned on, and once the reading was stable the water activity of the sample was read directly from the display.
Texture Profile Analysis, TPA Test
The method of Pietrasik et al. was followed with appropriate modifications. After roasting, the texture of the samples was determined directly, with eight parallel samples per treatment. The texture analyser was set to compression mode with the following parameters: pre-test speed 5 mm/s, test speed and post-test speed 2 mm/s, using the P/50 probe (5 cm diameter). The main parameters recorded were hardness, springiness, chewiness and cohesiveness, with hardness and chewiness expressed in grams (g) [9].
Determination of Thawing Loss and Yield
Thawing loss (TL) was determined according to the method of Serrano et al. with appropriate modifications. The frozen meat samples were cut into 3 cm × 3 cm × 2 cm pieces and weighed (m₁), thawed at 20 °C for 15 min until completely thawed, blotted with filter paper to remove surface moisture, and weighed again (m₂) [10].
Thawing loss was calculated as TL (%) = (m₁ − m₂)/m₁ × 100, where m₁ is the mass before thawing and m₂ is the mass after thawing.
The yield was determined according to the method of Gök et al. with appropriate modifications [11]. The sliced restructured beef samples were roasted in an oven and weighed before roasting (W₁) and after roasting (W₂); the samples were kept the same size, and approximately the same number of samples was used in each test.
The yield was calculated according to equation (3): Yield (%) = W₂/W₁ × 100, where W₁ is the mass before roasting and W₂ is the mass after roasting.
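For clarity, the two percentage calculations above can be written as a short script. The formulas follow directly from the definitions of m₁, m₂, W₁ and W₂; the function names and the sample values are illustrative assumptions only.

```python
def thawing_loss(m1, m2):
    """Thawing loss (%) from mass before (m1) and after (m2) thawing."""
    return (m1 - m2) / m1 * 100.0

def roasting_yield(w1, w2):
    """Yield (%) from mass before (w1) and after (w2) roasting."""
    return w2 / w1 * 100.0

# Illustrative values for one restructured beef sample
print(thawing_loss(25.0, 24.3))    # about 2.8 % thawing loss
print(roasting_yield(50.0, 41.6))  # about 83.2 % yield
```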
Sensory Evaluation
Sensory evaluation followed the method of Geli et al., with appropriate modifications, to score the cooked meat. Ten postgraduate students in food science were invited to make up the assessment panel, and the test was run as a double-blind evaluation [12]. The assessment covered product color, odor, status (appearance), taste and overall acceptability; each indicator was scored from a maximum of 9 points to a minimum of 1 point, and the scores were used to rank the samples.
For color, a score of 9 corresponds to a reddish-brown, glossy, appetizing product, and 1 to a dark red, dull product with poor appeal; for odor, 9 corresponds to a prominent meaty aroma and 1 to a very faint or absent one; for status, 9 corresponds to an intact piece of meat of uniform thickness, tightly bound and fully cooked, and 1 to a loose, inelastic piece with raw patches; for taste, 9 corresponds to tender, chewy meat with a pleasant aftertaste and 1 to a hard, woody texture with little aftertaste; for overall acceptability, 9 corresponds to a product well liked by consumers and 1 to a product that consumers find difficult to accept.
Statistical Analysis
Each treatment was repeated three times and the results are expressed as mean ± standard deviation. Statistical analysis was performed with the Linear Models procedure in Statistix 8.1 software, with differences considered significant at P < 0.05; means were compared using the Tukey HSD procedure, and figures were plotted with Sigmaplot 12.0 software.
It can be seen from Figure 1 that, with increasing SPI level, the thawing loss of the product gradually decreased and the yield significantly increased (P < 0.05). The thawing loss and yield of the product are closely related to the water retention of the meat, which indicates that with increasing SPI the water-holding capacity of the product was gradually enhanced (P < 0.05). This may be because SPI has strong water absorption capacity and gel properties: the interaction between the active groups of the SPI and the muscle proteins forms a more stable gel network structure that retains more moisture [13], thereby improving the water retention of the meat product.
Effects of SPI, Starch and Salt on Thawing Loss and Yield of the Ready-to-eat Restructured Beef Products
It can be seen from Figure 1 that the amount of starch added had a significant effect on the thawing loss and yield of the product (P < 0.05): with increasing starch addition, the thawing loss decreased significantly and the yield increased markedly. On the one hand this may be due to the starch granules swelling and absorbing water, and on the other hand to the water absorption, swelling and gelatinization of the starch during heating. Since the starch gelatinization temperature is higher than the denaturation temperature of the muscle proteins, by the time the starch gelatinizes the muscle proteins have essentially been denatured and have formed a three-dimensional network structure. At this point the gelatinized starch granules take up the moisture that is not tightly bound in the network structure, and this water is fixed by the starch granules and not lost on heating, so the water-holding capacity is improved and water loss is reduced. At the same time, when heated, the starch granules can also absorb the fat that melts into liquid, thereby reducing the loss of fat and increasing the yield [14]. The effect of different salt additions on the thawing loss and yield of the product is shown in Figure 2. The addition of different concentrations of salt had a significant effect on the thawing loss and yield of the product (P < 0.05). With increasing salt addition, the thawing loss of the product gradually decreased, from about 2.9% at an addition of 0 g/100 g to about 2.1% at 2.5 g/100 g, and the yield increased from about 78.5% at 0 g/100 g to about 83.2% at 2.5 g/100 g; thus the thawing loss was significantly reduced and the yield significantly improved. This may be because, as the amount of salt added increases, the water retention of the product gradually increases and more moisture is locked in during processing, which reduces the thawing loss and increases the yield [15]. It can be seen from Figure 3 that, with increasing amounts of soy protein isolate, the bond strength of the product gradually increased (P < 0.05). This may be due to the gelling properties of soy protein isolate, which cross-links with the meat proteins to form a more stable three-dimensional gel network structure, increasing the adhesion of the meat pieces; in addition, the mixture of soy protein isolate and water has a certain viscosity, and the higher the concentration of soy protein isolate, the greater the viscosity of the mixture; the mixture adheres to the surface of the meat and acts as a binder, which enhances the bonding strength [16].
Effects of SPI, Starch and Salt on the Bonding Strength and Water Activity of the Ready-to-eat Restructured Beef Products
It can be seen from Figure 3 that the amount of starch added had a significant effect on the bond strength and water activity of the product (P < 0.05): as the amount of starch increased, the bond strength increased significantly and the water activity decreased significantly. This may be due to the swelling of the starch granules, which form a viscous colloid that covers the surface of the meat and increases its stickiness. It can be seen from Figure 4 that the different salt additions also had a significant influence on the bond strength and water activity of the product (P < 0.05): as the amount of salt increased, the bond strength increased and the water activity decreased significantly. This may be because the added salt causes the myofibrils to swell: a large number of chloride ions bind to the myofibrils, and sodium ions form an ion cloud around the myofilaments, wrapping them. When actin swells, myosin is released from the myofibrillar protein, forming a viscous exudate on the surface of the meat that fixes the free water, thereby enhancing the adhesion and water-holding capacity of the meat [17]. At the same time, the electrostatic repulsion caused by the negative charges increases and the ionic strength of the meat increases; therefore the amount of myofibrillar protein dissolved in the meat product increases and the emulsifying capacity improves, forming a better and tighter three-dimensional network structure in the system, so that the bonding strength of the product is improved, the binding of water is strengthened and the water retention of the meat product is increased [18]. It can be seen from Figure 5 that the amount of SPI had a significant effect on the T₂ relaxation time distribution (P < 0.05). After fitting of the LF-NMR attenuation curves, the T₂ relaxation times were mainly distributed in three peaks, which represent the three states in which water exists: bound water (T₂b), immobilized (non-flowable) water (T₂₁) and free water (T₂₂) [19]. Compared with fresh meat, as the amount of SPI increased, the peaks representing the three different states of water gradually shifted to the left, indicating that the relaxation times became shorter and the mobility of the water molecules weakened; the binding between water molecules and meat proteins was enhanced, and the water-holding capacity of the meat products increased [20]. Compared with fresh meat, the area of the water relaxation peak for each state in the experimental groups with soy protein isolate was significantly decreased (P < 0.05), and the decrease in the peak area of the immobilized water was the most obvious. Note: means followed by different letters (A-E) in the same column differ significantly (P < 0.05); means followed by the same letter do not differ (P > 0.05).
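The three T₂ populations described above are typically obtained by fitting the LF-NMR decay signal with a sum of exponentials (or with an inverse-Laplace routine); the sketch below illustrates a simple tri-exponential fit using SciPy. The synthetic data, initial guesses and variable names are illustrative assumptions and do not reproduce the authors' actual fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def tri_exp(t, a2b, t2b, a21, t21, a22, t22):
    """Sum of three exponential decays: bound, immobilized and free water."""
    return (a2b * np.exp(-t / t2b) +
            a21 * np.exp(-t / t21) +
            a22 * np.exp(-t / t22))

# Synthetic CPMG echo decay (time in ms) for illustration
t = np.linspace(0.1, 1000, 2000)
signal = tri_exp(t, 5, 1.5, 80, 45, 15, 300) + np.random.normal(0, 0.2, t.size)

p0 = [5, 1, 80, 50, 15, 250]  # initial guesses for amplitudes and T2 values (ms)
params, _ = curve_fit(tri_exp, t, signal, p0=p0, maxfev=10000)
amps, t2s = params[0::2], params[1::2]
areas = 100 * amps / amps.sum()  # relative peak areas P2b, P21, P22 (%)
print("T2 values (ms):", t2s)
print("Relative areas (%):", areas)
```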
Effects of SPI, Starch and Salt on Water Distribution of the Ready-to-Eat Restructured Beef Products
The effect of the amount of SPI added on the T2 relaxation times of the product is shown in Table 1. Different SPI additions had a significant effect on the T2 relaxation times (P < 0.05): compared with fresh meat, T2b and T21 decreased significantly in every group containing soy protein isolate.
Moreover, as the amount of SPI increased, the relaxation times of bound water and immobilized water shortened significantly (P < 0.05), indicating that these two water populations became more and more tightly associated with the proteins. This may be explained by the emulsifying properties of SPI: as a surfactant, SPI lowers the surface tension between water and oil and between water and air, so a more stable emulsion forms readily. With increasing SPI, the emulsifying capacity of the system and the water absorption of the soy protein isolate are both strengthened, the bound and immobilized water bind more tightly to the meat proteins, and the relaxation times become shorter [21]. In contrast, the relaxation time of the free water lengthened significantly as more soy protein isolate was added. This may be because this fraction is extracellular free water; after SPI is added it becomes enclosed by soy protein isolate particles and more loosely associated with the meat proteins, which prolongs its relaxation time. Note: within a column, values sharing a letter (A-E) do not differ significantly (P > 0.05); values with different letters differ significantly (P < 0.05).
The effect of different SPI additions on the percentage of the T2 relaxation peak areas is shown in Table 2. Compared with fresh meat, the relaxation peak area of the free (mobile) water was significantly reduced in every experimental group (P < 0.05). This may be due to the salting or the other excipients, which raise the osmotic pressure inside the cells and affect the distribution of hydrogen protons. In addition, with increasing SPI the peak area percentages of bound water and immobilized water increased significantly (P < 0.05) while that of free water decreased significantly, suggesting that extracellular free water is gradually converted into bound and immobilized water as SPI is added. The water retention of a meat product is mainly determined by the immobilized water held between the muscle membranes; the more immobilized water there is, the better the water retention. Therefore, the water retention of the product improved gradually as the amount of SPI increased.
The effect of different starch additions on the T2 relaxation time distribution of the product is shown in Figure 6. Compared with fresh meat, the relaxation peaks of the bound water and the immobilized water in every experimental group moved in the fast-relaxation direction, showing that the binding of these two water populations to the meat proteins was enhanced. As the starch addition increased, the bound-water and immobilized-water peaks shifted further to the left, indicating an ever stronger association between the water molecules and the meat proteins. The figure also shows that, compared with fresh meat, the relaxation peak areas of all water populations decreased, most markedly for the immobilized water. Note: within a column, values sharing a letter (A-D) do not differ significantly (P > 0.05); values with different letters differ significantly (P < 0.05).
The effect of different starch additions on the T2 relaxation times of the product is shown in Table 3. Starch addition had a significant effect on the T2 relaxation times. Compared with fresh meat, T2b and T21 decreased in every starch-containing group, and the relaxation times of bound and immobilized water shortened significantly with increasing starch addition (P < 0.05), showing that these two water populations became more and more tightly bound to the meat proteins. This may be due to the fat-emulsifying action of the starch and the increased adhesion of the meat product: with increasing starch, the emulsifying capacity of the system and the water absorption of the starch are strengthened, the bound and immobilized water bind more tightly to the meat proteins, and the relaxation times become shorter. Note: within a column, values sharing a letter (A-D) do not differ significantly (P > 0.05); values with different letters differ significantly (P < 0.05).
The effect of different starch additions on the percentage of the T2 relaxation peak areas is shown in Table 4. Compared with fresh meat, the relaxation peak area of the free (mobile) water was significantly reduced in every experimental group (P < 0.05). This might be due to the added salt or other excipients, which raise the osmotic pressure inside the cells and affect the distribution of hydrogen protons. In addition, with increasing starch the peak area percentages of bound and immobilized water increased significantly (P < 0.05) while that of free water decreased significantly. This is probably a consequence of the strong water absorption of starch: as more starch is added, the water retention and water-holding capacity of the product gradually increase and the extracellular free water is converted into bound and immobilized water [22]. The change in the T2 relaxation time distribution with salt addition is shown in Figure 7. Compared with fresh meat, the relaxation peaks of the bound and immobilized water moved in the fast-relaxation direction as more salt was added, showing that the relaxation times of these two water populations shortened and their binding to the meat proteins was enhanced (P < 0.05). With increasing salt these peaks shifted further to the left, indicating that the association of these water populations with the muscle proteins became stronger and stronger. The figure also shows that, compared with fresh meat, the relaxation peak areas of all water populations were significantly reduced, most markedly for the immobilized water. Note: within a column, values sharing a letter (A-D) do not differ significantly (P > 0.05); values with different letters differ significantly (P < 0.05).
The effect of different salt additions on the T2 relaxation times of the product is shown in Table 5. Compared with fresh meat, the three water populations (T2b, T21, T22) moved in the fast-relaxation direction as the salt addition increased, i.e. their relaxation times shortened. T2b decreased from 1.73 ms to 1.60 ms, but the change was not significant (P > 0.05), because this water fraction is already tightly bound to the meat proteins and is difficult to affect by salt addition; this is consistent with the findings of Wu Liangliang [23]. T21 decreased from 51.60 ms to 35.35 ms and T22 from 173.75 ms to 125.75 ms, indicating that with increasing salt the binding of the immobilized and free water to the muscle proteins was significantly strengthened (P < 0.05) and the mobility of the water molecules was significantly reduced (P < 0.05), thereby improving the water retention and yield of the product [24]. Compared with fresh meat, the relaxation times T2b, T21 and T22 were also shortened in the group with 0 g/100 g added salt, which may be related to the other excipients added in that group. Note: within a column, values sharing a letter (A-D) do not differ significantly (P > 0.05); values with different letters differ significantly (P < 0.05).
The effect of different salt additions on the T2 relaxation peak area percentages is shown in Table 6. Compared with fresh meat, the relaxation peak areas of all three water populations were significantly reduced in every experimental group (P < 0.05), possibly because the added salt or other excipients affect the distribution of hydrogen protons in the meat. With increasing salt, the peak area percentages of T2b and T21 increased gradually (P < 0.05) while that of T22 decreased gradually.
The peak area fractions of the bound water and the free water were very small, while that of the immobilized water was the largest, which illustrates that salt treatment converts other water fractions into immobilized water [25]. At a salt addition of 2.5 g/100 g, the relaxation peak areas of the bound and immobilized water reached their maximum and that of the free water its minimum, and the water retention of the product was best [26]. The effect of different SPI additions on the colour of the product is shown in Figure 8. For raw meat (left), the experimental groups had lower a* values and higher b* values than fresh meat, possibly because some of the excipients or binders cannot be completely absorbed by the product and adhere to its surface, affecting its colour. With increasing SPI, the L* value and redness of the samples decreased significantly (P < 0.05) and the b* value increased significantly (P < 0.05). This may be because SPI and water combine to form a mixture; as the amount of SPI increases, the SPI can no longer be fully taken up by the water and appears as lumps, covering the surface of the minced meat with yellow streaks and affecting the colour of the product [27].
Effect of Soy Protein Isolate, Starch and Salt on Color Difference
For cooked meat (right), as the amount of SPI increased, the L* value of the product decreased from about 30.3 at 0 g/100 g to about 23.5 at 3.0 g/100 g, the a* value decreased from about 10.6 to about 8.6, and the b* value increased from about 8.5 to about 13.0; the decreases in L* and redness and the increase in b* were significant (P < 0.05). This may be because the soy protein isolate is denatured during baking and the mixture it forms with water solidifies, changing the colour of the product. The effect of different starch additions on the colour of the product is shown in Figure 9. For raw meat (left), the experimental groups had lower L* values and higher b* values than fresh meat, possibly because some excipients or binders cannot be completely absorbed by the meat and adhere to its surface, affecting its colour. At the same time, with increasing starch the L* value of the samples increased gradually and the a* value decreased gradually (P < 0.05), possibly because the starch dissolves in water to form a transparent colloidal solution that attaches to the surface of the meat. For cooked meat (right), as the starch addition increased, the L* value of the product rose from about 23.3 at 0 g/100 g to about 28.5 at 3.0 g/100 g, while the a* value fell from about 11.6 to about 7.5; during baking the L* value increased and the a* value decreased significantly (P < 0.05). This may be because the starch swells gradually during baking until it is completely gelatinized; the gelatinized starch becomes a translucent, somewhat viscous colloidal solution that covers the surface of the meat pieces and gradually solidifies on heating, changing the colour of the product and increasing the L* value [28]. The effect of different salt additions on the colour of the product is shown in Figure 10; the amount of salt added had a significant effect on colour. For raw meat (left), as the salt addition increased, the L* value of the product decreased from about 29 at 0 g/100 g to about 24 at 2.5 g/100 g, and the a* value increased from about 19 to about 24; the L* value decreased while the a* value increased (P > 0.05). This may be because added salt increases the water-holding capacity of the meat product, and the water uptake of the meat gel affects its colour: the higher moisture content lowers the oxygen content of the gel, more of the myoglobin is surrounded by water molecules, and the proportion of deoxymyoglobin in the gel system increases, so the colour of the meat gel darkens and the redness increases [29].
For cooked meat (right), the L* value did not change significantly with increasing salt (P > 0.05), probably because the same baking temperature and time were used for all groups, so no difference in appearance was visible to the naked eye. The a* value increased significantly with salt addition (P < 0.05), and b* first decreased and then increased, possibly because the three forms of myoglobin interconvert through oxidation and reduction during heating, which ultimately determines the surface colour of the meat [30]. Figure 10 also shows that the changes in L*, a* and b* were not significant (P > 0.05) when the salt addition was between 0.5 g/100 g and 1.5 g/100 g. Note: within a column, values sharing a letter (A-D) do not differ significantly (P > 0.05); values with different letters differ significantly (P < 0.05).
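The paper reports L*, a* and b* separately; a common way to summarize the overall colour change of a treatment relative to a reference is the CIE76 colour difference. The sketch below is only an illustration of that calculation, and the numbers in it are invented placeholders rather than measured values.

```python
# CIE76 colour difference between a treatment and a reference (e.g. fresh meat).
# The formula is the standard dE*ab definition; the L*, a*, b* values below are
# illustrative only, not data from the study.
import math

def delta_e_cie76(lab1, lab2):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

fresh   = (30.3, 10.6, 8.5)    # illustrative L*, a*, b*
treated = (23.5, 8.6, 13.0)
print(f"dE*ab = {delta_e_cie76(fresh, treated):.2f}")
```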
Effects of SPI, Starch and Salt on the Quality Characteristics
The effect of different SPI additions on the texture properties of the product is shown in Table 7. SPI addition had a significant effect on the texture characteristics (P < 0.05): as the amount of SPI increased, the texture properties of the product improved markedly. This may be attributed to the gelling and foaming properties of SPI, whose gels are obtained by heating, cooling, dialysis and alkali treatment of the dispersed soy protein; the higher the protein content, the firmer and more elastic the gel. The foaming property allows the protein molecules to reach the interface and spread rapidly, which further improves the texture of the product. Note: within a column, values sharing a letter (A-D) do not differ significantly (P > 0.05); values with different letters differ significantly (P < 0.05).
The effect of different starch additions on the texture properties of the product is shown in Table 8. Starch addition had a significant effect on the texture characteristics (P < 0.05): as more starch was added, the hardness and elasticity of the product increased and its texture improved markedly. This may be due to the water absorption, swelling and gelatinization of the starch granules during heating. Because the gelatinization temperature of the starch granules is higher than the denaturation temperature of the muscle proteins, by the time the starch gelatinizes the muscle proteins have already reached their denaturation and coagulation temperature and have gradually formed a three-dimensional network. The colloid formed by starch gelatinization is then fixed in the gaps of this network, where it combines with the residual moisture inside and outside the muscle structure to form a larger and more complex colloid. The inherent moisture of the muscle is thereby immobilized and is not easily lost during subsequent heat treatment, so the water-holding capacity and adhesion of the meat product are improved; the muscle tissue is bonded together and voids are filled, giving the finished product an attractive appearance and a good structure [31]. Note: within a column, values sharing a letter (A-D) do not differ significantly (P > 0.05); values with different letters differ significantly (P < 0.05).
The effect of different salt additions on the texture characteristics of the product is shown in Table 9. The amount of salt added had a significant impact on texture: as more salt was added, the hardness and elasticity of the product increased significantly and its texture improved. This may be because salt acts on the meat protein system, increasing the amount of myofibrillar protein eluted, promoting cross-linking between protein polypeptide chains and strengthening protein-protein interactions, so that a stable three-dimensional network is formed; this improves textural characteristics such as hardness and chewiness. Note: within a column, values sharing a letter (A-C) do not differ significantly (P > 0.05); values with different letters differ significantly (P < 0.05).
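The superscript letters in Tables 7-9 denote multiple-comparison groupings at P < 0.05. Such groupings are typically produced by a one-way ANOVA followed by a post-hoc test such as Tukey's HSD; the sketch below shows that workflow with invented hardness values, since the paper's raw data and its exact post-hoc procedure are not given in this excerpt.

```python
# Sketch of the significance analysis behind the texture tables (P < 0.05):
# one-way ANOVA followed by Tukey's HSD across addition levels.
# The hardness values are invented placeholders, not the study's data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

levels = ["0", "1.0", "2.0", "3.0"]                      # g/100 g added
hardness = {
    "0":   [2100, 2150, 2080],
    "1.0": [2400, 2380, 2450],
    "2.0": [2700, 2760, 2690],
    "3.0": [2950, 2900, 2980],
}
groups = [hardness[k] for k in levels]
f_stat, p_val = stats.f_oneway(*groups)
print(f"one-way ANOVA: F = {f_stat:.1f}, P = {p_val:.4f}")

values = np.concatenate(groups)
labels = np.repeat(levels, [len(g) for g in groups])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))     # letter-style grouping
```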
Effect of SPI, Starch and Salt on Sensory Evaluations
The effect of different SPI additions on the sensory evaluation of the product is shown in Table 10. Compared with the control group, as the amount of SPI increased the colour of the product became brighter, the taste improved, and the forming (pressing) properties improved as well. At an SPI addition of 2.0 g/100 g the sensory indices were best and the overall acceptability was highest. With further SPI addition the colour became slightly dark, the product acquired a slight beany flavour, yellow streaks appeared between the meat pieces, the mouthfeel deteriorated and the sensory indices declined (P < 0.05). This may be because continued SPI addition further increases the water retention of the product and the tenderness keeps rising, giving a soft texture with reduced chewiness, an unpleasant mouthfeel and a beany taste. The optimum addition of soy protein isolate was therefore determined to be 2.0 g/100 g. Note: within a column, values sharing a letter (A-C) do not differ significantly (P > 0.05); values with different letters differ significantly (P < 0.05).
The effect of different starch additions on the sensory evaluation of the product is shown in Table 11. Compared with the control group, as the amount of starch increased the colour became brighter, the taste improved, and the forming properties improved as well. At a starch addition of 2.0 g/100 g the sensory indices were best and the overall acceptability was highest. With further starch addition, white streaks appeared between the meat pieces, the mouthfeel deteriorated and the sensory indices declined (P < 0.05), probably because excessive starch makes the product rough, hard, inelastic, pale and poor in taste. The amount of starch should therefore be controlled in production to ensure product quality. Taking the quality characteristics and sensory indices together, the optimal starch addition was determined to be 2.0 g/100 g. Note: within a column, values sharing a letter (A-C) do not differ significantly (P > 0.05); values with different letters differ significantly (P < 0.05).
The effect of different salt additions on the sensory evaluation of the product is shown in Table 12. The amount of salt added had a significant effect on the sensory quality of the beef products (P < 0.05). With no added salt, the colour was dark, the saltiness was insufficient and the mouthfeel was poor, because meat contains many flavour components such as proteins and fats whose flavour is only brought out against a certain saltiness [32]; in addition, the meat samples were brittle during baking, formed poorly and were difficult to accept. As the salt addition increased, the sensory scores rose and the quality of the meat became better and better. At a salt addition of 1.5 g/100 g the quality was best: the colour was a bright reddish brown, the meaty flavour was prominent, the saltiness was moderate, the pieces did not become brittle during baking, the formability was good and the sensory evaluation scores were highest. With further salt addition the colour darkened, the product became too salty, the meat flavour was masked by the salt and the sensory scores began to decline, possibly because continued salt addition further increases water retention, giving a soft texture, reduced chewiness and an over-salty taste. This indicates that salt addition has a significant impact on the quality characteristics and sensory quality of the product. Based on the individual sensory indices and the overall acceptability, the optimal salt addition was determined to be 1.5 g/100 g.
Mechanism of the Effect of SPI on the Quality Characteristics
SPI is a high-purity soy protein product obtained from defatted soybean meal. Its protein content (on a dry basis) is over 90%, the highest of all soybean protein products, and it has very good functional properties; experiments have shown that, among soy protein products, essentially only SPI has useful gel-forming ability. In meat processing it can retain or emulsify the fat in the product and bind moisture, improving the structure so that the interior of the meat product is fine, well bonded and elastic, the slicing properties are good, the surface is smooth and delicate, and the tenderness is improved; at the same time the fat is emulsified, and the water retention and yield of the meat product are improved.
This experiment explored the effect of SPI addition on the quality and sensory properties of the product. The results showed that with increasing SPI the yield and bond strength increased gradually (P < 0.05), the thawing loss and water activity decreased gradually (P < 0.05), and the texture characteristics of the product improved (P < 0.05). This is because SPI has strong water absorption and gelation: the interaction between its active groups and the muscle proteins forms a more stable gel network, which increases the viscosity of the product, and at the same time more water is retained, enhancing the water retention of the product. This is similar to the results of Ma Yuxiang et al. [33], who added soy protein isolate to ham and found that it significantly increased the yield of the ham sausage as well as its water-holding and oil-holding capacity.
Mechanism of the Effect of Starch on the Quality Characteristics
As a food additive, starch enhances gel strength, improves tissue structure, increases water retention, raises yield and reduces production cost. The addition of starch can also prevent fat from seeping out of the product and gives it a smooth, slightly sticky mouthfeel, thereby improving product quality, so starch is widely used in meat products. Different types of starch, however, affect meat products differently. The adhesion and hardness of a meat batter increase with increasing viscosity and water retention, and also with increasing amylopectin content: potato starch, which is rich in amylopectin, produces much higher gel binding and elasticity than wheat starch, which is rich in amylose, and the tensile strength of the gel likewise increases with the amylopectin content. For this reason, potato starch with a high amylopectin content was selected in this experiment, and it is of interest to explore the effect of different potato starch additions on the properties of the ready-to-eat restructured beef products.
The results showed that with increasing starch addition the yield and bond strength of the product increased significantly (P < 0.05), the thawing loss and water activity decreased significantly (P < 0.05), and the texture properties improved (P < 0.05). This is because starch gelatinizes: during baking the starch granules swell and gelatinization takes place, and the resulting transparent colloid is fixed in the protein network, where it combines with the remaining moisture in the muscle structure to become a stronger colloid. The bond strength and texture of the product are thereby improved, and this colloid also immobilizes the water inside the muscle, improving the water retention of the product. The amount of starch added should nevertheless not be too high; too much starch makes the product rough, hard, inelastic, pale and poor in mouthfeel, so the amount of starch added must be controlled during production to ensure product quality.
Mechanism of the Effect of Salt on the Quality Characteristics
Salt was initially used in meat products for preservation and flavouring: added to fresh meat it increases palatability, and added to cured meat it acts as a preservative. As research into meat processing progressed, a significant interaction between salt and phosphate was found, the main manifestation being that salt allows the phosphate to exert its full effect in the system. In addition, salt is an important extractant of the salt-soluble myofibrillar proteins in muscle. As the concentration of added salt increases, the gel performance improves and a product with good texture is obtained. However, the extraction of functional protein saturates: once a certain ionic strength is reached, a further increase in salt concentration no longer brings a significant increase in the amount of dissolved muscle protein, which remains at a relatively stable level.
The results showed that with increasing salt addition the yield and bond strength of the product increased gradually (P < 0.05), the thawing loss and water activity decreased gradually (P < 0.05), and the texture properties improved (P < 0.05). This is because myofibrillar protein is salt-soluble, so the concentration of extractable myofibrillar protein rises with the amount of salt added. The extracted myofibrillar protein and water form a sticky exudate on the surface of the meat pieces, which increases the adhesion. Because salt acts on the protein system of the meat, more myofibrillar protein is dissolved, cross-linking between protein polypeptide chains is promoted, and the protein-protein interactions that build a stable three-dimensional network are strengthened; at the same time the binding of water is enhanced, so the texture properties and water retention of the product are improved [34].
Conclusion
With increasing soy protein isolate addition, the yield and bond strength of the product increased gradually (P < 0.05), the thawing loss and water activity decreased gradually (P < 0.05), the texture properties improved (P < 0.05), and the brightness and redness values of both raw and cooked meat gradually decreased. In addition, T2b and T21 moved towards faster relaxation as more soy protein isolate was added. The sensory scores were highest at an SPI addition of 2.0 g/100 g, so in the production of ready-to-eat restructured beef products the amount of soy protein isolate added should not exceed 2.0 g/100 g. With increasing starch addition, the yield and bond strength increased significantly (P < 0.05), the thawing loss and water activity decreased significantly (P < 0.05), the texture properties improved (P < 0.05), and the brightness value of raw and cooked meat gradually increased while the redness value decreased (P < 0.05). T2b and T21 again moved towards faster relaxation as more starch was added. The sensory score was highest at a starch addition of 2.0 g/100 g, so in the production of ready-to-eat restructured beef products the amount of starch added should not exceed 2.0 g/100 g. With increasing salt addition, the yield and bond strength increased gradually (P < 0.05), the thawing loss and water activity decreased gradually (P < 0.05), the texture properties improved (P < 0.05), and the brightness value of raw and cooked meat increased while the redness value decreased (P < 0.05). T2b and T21 also moved towards faster relaxation with increasing salt. The sensory scores were highest at a salt addition of 1.5 g/100 g, so in the production of ready-to-eat restructured beef products the optimal salt addition is 1.5 g/100 g.
The historical consciousness of student youth and evaluation of the events of the Great Patriotic War
For several years now, the information space has been marked by attempts to revise evaluations of the events of the Great Patriotic War. Student youth are developing actively and rapidly, adapting to new social realities. That is why the issues of preserving historical memory and developing the historical consciousness of contemporary young people have become so relevant. The present work is based on an analysis of the results of a survey of first- and second-year students conducted at the State University of Management (SUM, Moscow, Russia), and continues earlier research devoted to how University students perceive and evaluate the end results of the Great Patriotic War. An attempt was made to determine the level of students' knowledge of the main facts of the Great Patriotic War. Within the framework of this study, the results of similar works by domestic and foreign authors were also considered. As a rule, Russian studies emphasize the problem of awareness of the importance of the Great Patriotic War and its results, including how these issues are perceived by student youth. At the same time, several foreign authors raise the issue of re-evaluating the results of the Second World War and do so with a clear politicization of the proposed conclusions, primarily in assigning historical responsibility for unleashing the war. The authors of the present study propose strengthening the role of archival documents, memoirs, chronicles, and testimonies of direct participants and victims of the wartime events in military-historical education, which should not allow the historical feat of the USSR and its decisive contribution to the defeat of Nazi Germany to be distorted.
Introduction
The issues of historical memory and historical consciousness are important in the formation of active civic-minded young people. Currently, these issues are becoming even more relevant. By the beginning of the 90s, the events of the Great Patriotic War were largely interpreted in the same way in Russia and abroad. However, in recent years, several Western countries have been reviewing the place and role of various states in the Second World War (WWII). Current information technologies allow spreading these opinions to the widest possible audience, and, first of all, the young people, who, unlike their parents, know little about the events 75 years ago.
Methods
The hypothesis of the present study is that the number of people with a distorted or fragmentary view of the historical events related to the Great Patriotic War is growing, caused not least by intensified attempts to falsify the historical past.
A special place is occupied by the issue of historical responsibility for the outbreak of WWII, of which the Great Patriotic War was an integral part. Contemporary foreign studies promote the theory that the cooperation of England and the USA played the key role in the defeat of Nazi Germany [1]. Increasingly, WWII becomes not a historical but a political reference point for understanding the international system of the 21st century, which leads to distortions of the past [2].
The above has formed the ideological platform that defined the questions of the new updated survey.
Results
A previous study conducted in September 2020 has shown that to form mature civic-minded students, free from ideological clichés and political conjuncture, as well as for their subsequent activities in professional communities, it is necessary to resist attempts to distort and falsify history, including that in the information space [3]. The study has identified a problem associated with a decrease in the level of knowledge of young people about specific historical facts, including the Great Patriotic War.
First- and second-year students of the State University of Management were involved in the survey. A total of 202 people were interviewed. The questionnaire consisted of three questions.
The first question requested respondents to indicate the full dates of the start and the end of the Great Patriotic War. The results were ambiguous. Thus, 73.8% of students were able to give answers; 9.9% of respondents indicated correctly only the start and end years of the Great Patriotic War; 3.5% of students indicated correctly only the start date; 1.5% of students -only the end date, while 11.3% of respondents gave an incorrect answer.
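With a sample of n = 202, each reported share also carries a sampling margin of error. The study does not report uncertainties; the sketch below only illustrates how normal-approximation 95% confidence intervals for these proportions could be computed, under the assumption (not made by the authors) of a simple random sample.

```python
# 95% confidence intervals (normal approximation) for the reported shares,
# assuming a simple random sample of n = 202. Illustrative only; the original
# study does not report sampling uncertainties.
import math

N = 202
shares = {"full dates correct": 0.738, "years only": 0.099,
          "start date only": 0.035, "end date only": 0.015,
          "incorrect": 0.113}

for label, p in shares.items():
    half_width = 1.96 * math.sqrt(p * (1 - p) / N)
    print(f"{label:20s}: {100*p:5.1f} % +/- {100*half_width:4.1f} %")
```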
Among the most common mistakes, the end date of the Great Patriotic War was indicated as 7.05.1945. The next common mistake was specifying the start and the end dates of the WWII instead of the requested dates. Finally, some dates had nothing to do with the question. Here one should note that O.A. Verevkina in her research, devoted to the peculiarities of the historical memory of young people about the events of the Great Patriotic War, also indicated a low level of knowledge of historical dates [4].
The second question was: "Whom of the heroes of the Great Patriotic War do you know?" Four rows of historical figures with four surnames each, selected from different periods of Russian history, were offered as answers. Not the most obvious characters were listed. This was done to assess the depth of the students' personalized knowledge [5]. Thus, the first row in the list indicated the Decembrists, the second row -the heroes of the Patriotic War of 1812, while the third row included names of the Great Patriotic War marshals, and the last row listed names of commanders of the First World War.
Results were as follows: 45.5% of the surveyed students chose the correct names, 10.9% of students did not answer the question. At that, 11.9% of students emphasized different surnames from different rows. Some students indicated several options at once: 3% of students chose the first two rows; 3.5% of students noted the second and the third rows. 13.3% of respondents chose the Decembrists, while 11.9% of students named the heroes of the Patriotic War of 1812.
It is obvious that the historical memory of today's students produces such mistakes, and this is confirmed by other studies. For example, in January-February 2020, the Far Eastern Federal University conducted a study analysing ideas about the war formed through literary works and art. The survey data showed that some of the interviewed young people classified the monument to Minin and Pozharsky, the cruiser Aurora, etc. as monuments of the Great Patriotic War [3].
The third question of the questionnaire was: "Which of the countries does, in your opinion, bear the main historical responsibility for the outbreak of WWII, whose integral part is the Great Patriotic War of the Soviet people?" Students were asked to select one or more countries from the list or to specify their own version in the "other" section. According to the answers, 75.7% of respondents chose Germany, while 9.4% chose the USSR. Noteworthy is the comment of one respondent, who justified his choice as follows: "The USSR. (The Molotov-Ribbentrop Pact was the trigger for the outbreak of the Second World War, at the beginning of which the USSR waged a war of conquest together with the Third Reich, against Poland)." Such maxims are not uncommon today; they are distributed through the media, which replicate political statements about WWII outside the historical paradigm. Moreover, 4% of the students surveyed chose the USA and 2% chose the UK, while 8.9% of respondents preferred the item "other", in which they either indicated various combinations of the countries proposed in the response options or gave their own version.
Discussion
The conclusions made by the authors coincide with the research of many specialists. Thus, N.V. Vorobyova, in the course of her research on citizenship and patriotism of student youth, concluded that from year to year, a pronounced trend leading to a decrease in citizenship and patriotism was revealed, despite the increase in academic performance. This is influenced by the reduction of classroom hours for teaching history and the reduction of disciplines of the general humanitarian cycle [6]. Based on a sociological study, G.S. Shirokalova believes that the recently formed patriotic self-identification of Russians contains more of a verbal expression of loyalty to the generally accepted opinion [7]. I.A. Batanina notes that currently the demand for politically patriotic perception of the results of the Great Patriotic War prevails in society [8]. According to V.V. Kovrov, the formation of ideas about heroes and the heroic in the minds of young people is influenced by the falsification and deheroization technologies used in various mass media [9].
In the study conducted by S.D. Lebedev and I.S. Shapovalova within the framework of the regional monitoring of the Russian Society of Sociologists entitled "What do we know about the Great Patriotic War" (2005-2020), respondents were offered the statement that "the liberation of the Baltic states and Eastern Europe during 1944-1945 was occupation". Some 21% of those interviewed considered this statement debatable but worth discussing, 45.4% of respondents were undecided, and 6.7% of the interviewed students agreed with the statement [10].
However, A.L. Andreev in his research claims that the student youth of Belarus perceive with sympathy and respect the years of spiritual uplift associated with the Victory in the Great Patriotic War and the liberation of European countries from fascism [11]. Today, most Poles continue to believe that the memory of prejudice and discrimination associated with Germany during the WWII still has great influence on the attitude of Poles towards Germans [12]. Many in contemporary Europe are convinced that the WWII was certainly a global event, but its history was considered and studied from national standpoints for a long time, which does not allow today reaching a compromise in evaluations of this historical event [13][14][15].
Conclusion
In summary of the conducted study, the following conclusions can be drawn: 1. The ideas of student youth about the main facts and results of the Great Patriotic War are, on the whole, historically substantiated, which is supported both by the power of state propaganda and by the subject courses delivered at educational institutions; 2. Some young people nevertheless lack basic knowledge of the facts and evaluations concerning the Great Patriotic War, and their number is growing; 3. Since attempts to re-evaluate this historical event will only increase as the events of WWII and the Great Patriotic War recede further into the past, it is necessary to strengthen considerably the role of archival documents, memoirs, and testimonies of participants and victims of the wartime events in military-historical education, which should not allow the historical feat of the multinational Soviet people and their decisive contribution to the defeat of Nazi Germany to be belittled.
Both questionnaires were addressed to the students for whom the study of history in the framework of their educational areas was not a profile subject. In the future, it is advisable to address cadets of military universities, as well as students majoring in history, as respondents for such surveys. This will expand the sociological base of the research and allow covering the part of the student youth who are motivated towards historical science.
Hermitian-Yang-Mills connections on some complete non-compact K\"ahler manifolds
We give an algebraic criterion for the existence of projectively Hermitian-Yang-Mills metrics on a holomorphic vector bundle $E$ over some complete non-compact K\"ahler manifolds $(X,\omega)$, where $X$ is the complement of a divisor in a compact K\"ahler manifold and we impose some conditions on the cohomology class and the asymptotic behaviour of the K\"ahler form $\omega$. We introduce the notion of stability with respect to a pair of $(1,1)$-classes which generalizes the standard slope stability. We prove that this new stability condition is both sufficient and necessary for the existence of projectively Hermitian-Yang-Mills metrics in our setting.
Introduction
The celebrated Donaldson-Uhlenbeck-Yau theorem [8,28] says that on a compact Kähler manifold (X, ω), an irreducible holomorphic vector bundle E admits a Hermitian-Yang-Mills (HYM) metric if and only if it is ω-stable. After this pioneering work, there have been several results aiming to generalize this to non-compact Kähler manifolds [25,1,22,21,15,19]. A key issue is to understand what role stability plays in the existence of projectively Hermitian-Yang-Mills (PHYM) metrics. An interesting special case in the non-compact setting is when both X and E can be compactified, i.e. X is the complement of a divisor in a compact Kähler manifold X̄ and E is the restriction of a holomorphic vector bundle Ē on X̄, and when the Kähler metric has a known asymptotic behaviour. Under these assumptions, one wants to build a relation between the existence of PHYM metrics on E and some algebraic data on Ē. In this paper, we prove a result in this setting.
Let X̄ be an n-dimensional (n ≥ 2) compact Kähler manifold, D a smooth divisor and X = X̄\D the complement of D in X̄. Let Ē be a holomorphic vector bundle on X̄, which we always assume to be irreducible unless otherwise mentioned, and let E, Ē|_D denote its restrictions to X and D respectively. Suppose the normal bundle N_D of D in X̄ is ample. On X we consider complete Kähler metrics ω = ω_0 + dd^c ϕ satisfying Assumption 1 (see Section 2 for a precise definition). Roughly speaking, we assume ω_0 is a smooth closed (1,1)-form on X̄ vanishing when restricted to D, ϕ is a smooth function on X, and ω is asymptotic to certain model Kähler metrics given explicitly on the punctured disc bundle of N_D. Typical examples satisfying these assumptions are Calabi-Yau metrics on the complement of an anticanonical divisor of a Fano manifold and their generalizations [27,14,13] (see Section 6.2 for a sketch).
To state our theorem, we need two ingredients: the existence of a good initial hermitian metric on E and the definition of stability with respect to a pair of classes. The following lemma is proved in Section 4.
Lemma 1.1. If Ē|_D is c_1(N_D)-polystable, then there is a hermitian metric H_0 on E satisfying:
(1) there is a hermitian metric H̄_0 on Ē and a function f ∈ C^∞(X) such that H_0 = e^f H̄_0;
(2) |Λ_ω F_{H_0}| = O(r^{-N_0}), where r denotes the distance function to a fixed point induced by the metric ω and N_0 is the number in Assumption 1-(3).
We call H_0 conformal to a smooth extendable metric if it satisfies the first condition in Lemma 1.1. A key feature used in this paper is that the induced metric on End(E) is conformally invariant with respect to metrics on E; therefore the two hermitian metrics H_0 and H̄_0 induce the same metric on End(E), and this is the norm used in Lemma 1.1-(2). Then naturally (following [25]) one wants to find a PHYM metric in the set P_{H_0} of metrics associated with H_0, where we use √−1 su(E, H_0) to denote the subbundle of End(E) consisting of the trace-free endomorphisms that are self-adjoint with respect to H_0. Though H_0 is in general not unique, we will show that if we fix the induced metric on det E, then the set P_{H_0} is uniquely determined as long as H_0 satisfies the conditions in Lemma 1.1 (see Proposition 4.7).
Next we define stability with respect to a pair of (1,1)-classes, which generalizes the standard slope stability defined for Kähler classes in [16, Chapter 5]. In the following, we use μ_α(S) to denote the slope of a torsion-free coherent sheaf S with respect to a class α ∈ H^{1,1}(M) on a compact Kähler manifold M (see Section 3.2 for a more detailed discussion), i.e.
μ_α(S) := (1 / rank(S)) ∫_M c_1(det S) ∧ α^{n−1}.
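For convenience, the standard slope-stability condition that the (α, β)-stability of Definition 1.2 generalizes can be written out as follows. This is textbook material (cf. [16, Chapter 5]) rather than the paper's own Definition 1.2, whose precise (α, β)-inequality is not reproduced here.

```latex
% Standard slope and slope stability on a compact Kahler n-fold M with respect
% to a class \alpha \in H^{1,1}(M); Definition 1.2 generalizes this condition
% to a pair of (1,1)-classes.
\[
  \mu_\alpha(\mathcal S) := \frac{1}{\operatorname{rank}\mathcal S}
    \int_M c_1(\det\mathcal S)\wedge\alpha^{\,n-1},
\]
\[
  E \ \text{is $\alpha$-stable} \iff
  \mu_\alpha(\mathcal S) < \mu_\alpha(E)
  \ \text{for every coherent subsheaf}\ \mathcal S\subset E
  \ \text{with}\ 0<\operatorname{rank}\mathcal S<\operatorname{rank}E .
\]
```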
The main result of this paper is Theorem 1.3. By the definition of (α, β)-stability in Definition 1.2, we have the following consequence. Now let us give a brief outline of the proof of Theorem 1.3. For the "if" direction, we follow the argument in [25,19] by solving Dirichlet problems on a sequence of domains exhausting X. A key issue here is to prove a uniform C^0-estimate. For this we rely on a weighted Sobolev inequality in [27, Proposition 2.1] and on Lemma 5.4, which builds a relation between weakly holomorphic projection maps over X and coherent subsheaves over X̄. For the "only if" direction, we use integration by parts to show that the curvature form on E can be used to compute the degree of Ē with respect to [ω_0] (see Lemma 5.3). For both directions, the asymptotic behaviour of the Kähler metric ω plays an essential role.
Let us now compare Theorem 1.3 with some results existing in the literature. In [25] and [19], under certain conditions on the base Kähler manifold (X, ω) and on an initial hermitian metric on E, it was proved that for an irreducible vector bundle E the existence of a PHYM metric is equivalent to a stability condition called analytic stability. In our case, since we assume that E has a compactification Ē on X̄, the existence of good initial metrics is guaranteed by the polystability assumption on Ē|_D. Moreover, the stability used in Theorem 1.3 is a condition on Ē which is purely algebraic, i.e. independent of the choice of metrics. In [1], for asymptotically conical Kähler metrics on X, it was proved that if Ē|_D is c_1(N_D)-polystable, then there exist PHYM metrics on E; no extra stability condition is needed in this case. Therefore the necessity of stability conditions depends on the geometry of (X, ω) at infinity. Another typical example of such a phenomenon is the problem of the existence of bounded solutions of the Poisson equation on noncompact manifolds; see Section 6.1 for a brief discussion.
The paper is organized as follows. In Section 2, we discuss the assumptions on the Kähler manifold (X, ω) and prove a weighted mean value inequality for nonnegative almost subharmonic functions. In Section 3, we give a brief review of some standard results for hermitian holomorphic vector bundles and give a detailed discussion on (α, β)-stability used in Theorem 1.3. In Section 4, Lemma 1.1 is proved and we also show that the assumption in Lemma 1.1 is necessary. In Section 5, we prove Theorem 1.3 and give an example which does not satisfy the stability assumption. In Section 6.1, we discuss some other results on the existence of PHYM metrics. In Section 6.2, we discuss some Calabi-Yau metrics satisfying Assumption 1. In Section 6.3, we prove a counterpart of Theorem 1.3 in a different setting where X is a compact Kähler surface and c 1 (N D ) is trivial. In Section 6.4, we discuss some problems for further study.
Notations and conventions.
• B_r(p) denotes the geodesic ball centered at p with radius r; if the basepoint p is clear from the context, we will simply write B_r.
• In this paper, we identify a holomorphic vector bundle with the sheaf formed by its holomorphic sections.
• When we integrate on a Riemannian manifold (M, g), we will typically omit the volume element dV_g.
• Let (M, ω) be a Kähler manifold and (E, H) a hermitian holomorphic vector bundle over M. We use C^∞(M, E) to denote smooth sections of E, and W^{k,p}(M, E; ω, H) (respectively W^{k,p}_{loc}(M, E; ω, H)) to denote sections of E which are W^{k,p} (respectively W^{k,p}_{loc}) with respect to the metrics ω and H. If the bundles or metrics are clear from the context, we will omit them.
On the asymptotic behaviour of ω
As mentioned in the introduction, the asymptotic behaviour of the Kähler metric on the base manifold is crucial to make the argument in this paper work. In this section we discuss these assumptions. Let X̄ be an n-dimensional (n ≥ 2) compact Kähler manifold, D a smooth divisor and X = X̄\D the complement of D in X̄. Let L_D denote the holomorphic line bundle determined by D.
From now on, we assume that the normal bundle of D, i.e. N_D = L_D|_D, is ample unless otherwise mentioned. Then c_1(D) is nef and big. We fix a hermitian metric h_D on N_D with positive curvature and let ω_D denote the induced Kähler form on D. We are mainly interested in the punctured disc bundle C ⊂ N_D, i.e. the region where |ξ|_{h_D} is small and nonzero, which will be viewed as a model of X at infinity. On C we have a well-defined positive smooth function t, and using the obvious projection map p : C → D we view ω_D as a form on C. Then for every smooth function F : (0, ∞) → R with F′ > 0 and F″ > 0, the form ω_F defined in (2.3) is a Kähler form on C. Let g_F denote the corresponding Riemannian metric and r_F the induced distance function to a fixed point p.
We need a diffeomorphism to identify a neighborhood of D in N D with a neighborhood of D in X. For this, we use the following definition introduced by Conlon and Hein in [6,Definition 4.5].
Now we can state the assumptions for the Kähler metric ω on X. We consider a special class of potentials:
H := { F(t) = A t^a : A > 0 a constant, a ∈ (1, n/(n−1)] }.   (2.4)
Assumption 1. Let ω be a Kähler form on X and g the corresponding Riemannian metric. We assume that
(1) the sectional curvature of g is bounded;
(2) ω can be written as ω_0 + √−1 ∂∂̄ϕ, where ϕ is a smooth function on X and ω_0 is a smooth (1,1)-form on X̄ with ω_0|_D = 0;
(3) there exist an exponential-type map Φ from a neighborhood of D in N_D to a neighborhood of D in X̄ and a potential F ∈ H such that Φ*ω is asymptotic to ω_F at the rate prescribed in (2.5).
There are (lots of) other potentials F besides those given in (2.4) that make the argument in this paper work, but for simplicity of the statement and of some computations we only consider potentials in H. The order in (2.5) is not optimal either; again we just choose the number 8 for a neat statement. From now on, unless otherwise mentioned, N_0 denotes the number in (2.5).
Here are the main properties we will use for Kähler metrics defined by the potentials in H. For simplicity of notation, we omit the subscript indicating the dependence on F.
Proposition 2.3. For the Kähler metric defined by a potential F = At^a ∈ H, we have:
(1) the metric is complete as |ξ|_{h_D} → 0;
(4) if θ is a smooth form with θ|_D = 0, then |θ|_g = O(e^{−δt}) for some δ > 0.
Conditions (1)-(3) follow directly from (2.3) and (2.4). Condition (4) can be proved directly by doing computation in local coordinates on D as in [14,Section 3]. For completeness and later reference, we include some details.
Proof of (4): We choose local holomorphic coordinates z = {z_i}_{i=1}^{n−1} on the smooth divisor D and fix a local holomorphic trivialization e_0 of N_D with |e_0|_{h_D} = e^{−ψ}, where ψ is a smooth function on D satisfying √−1 ∂∂̄ψ = ω_D. We then get local holomorphic coordinates {z_1, ..., z_{n−1}, w} on C by writing a point as ξ = w e_0(z). In these coordinates one can write (2.3) explicitly and check the estimates recorded in (2.7) by a direct computation; (4) then follows directly from (2.7).
Remark 2.4. Actually, from the proof of Proposition 2.3-(4), we can give an effective lower bound for δ. For example, for 2-forms, δ can be chosen to be any positive number sufficiently close to (and less than) 1/2. However δ > 0 is sufficient for our later use.
Remark 2.5. Although not needed in this paper, we mention that, following the computation in [27, Section 4] or [3, Section 3], one can show that |Rm| ≤ C r^{2(1/a − 1)}.
In Assumption 1, we only assume the asymptotics of the Kähler forms. To get the asymptotic behaviour of the corresponding Riemannian metrics, we need to show that the complex structures of X and of D are sufficiently close with respect to the metric g_F. When D is an anticanonical divisor, the following result is proved in [14, Proposition 3.4]. For a general smooth divisor D, the author learned the following proof from Song Sun.
Lemma 2.6. Let J_D and J_X denote the complex structures on D and X respectively, and let Φ*J_X := (dΦ)^{−1} ∘ J_X ∘ dΦ denote the pullback of J_X under an exponential-type map Φ. Then the estimate (2.8) holds.
Proof. Since dΦ_p is complex linear for all p ∈ D, we know that Φ*J_X − J_D vanishes along D, but this alone is not enough to obtain the bound claimed in (2.8). We will use the integrability of Φ*J_X and property (3) in Definition 2.1. In the following we ignore the pull-back notation. Around a fixed point in D we can choose local holomorphic coordinates {w, z_1, ..., z_{n−1}} of the total space of N_D so that the zero section is given by w = 0. Then we can write, for α = 1, ..., n−1, an expansion in which the coefficients P_α and Q_{αβ} are linear functions of w and w̄, i.e. there are smooth functions p_α and p′_α of {z_i} such that P_α = p_α w + p′_α w̄, and similarly for Q_{αβ}. There are no type (1,0) vectors in the linear term of the right-hand side, because the corresponding vector is still of type (1,0) with respect to J_X, which coincides with J_D when restricted to D; therefore ∂_w P_α = p_α = 0. By properties (2) and (3) in Definition 2.1 and the standard exact sequence of holomorphic vector bundles on D
0 → T^{1,0}D → T^{1,0}X̄|_D → N_D → 0,
we know that on D the dz_α component of ∂̄_{J_X} ∂_w is tangential to D. Note that by definition ∂̄_{J_X} ∂_w = L_{∂_w} J_X; since on D the dz_α component of ∂̄_{J_X} ∂_w is tangential to D, we obtain p′_α = 0. So, for α = 1, ..., n−1, we obtain (2.9). Now on D we consider the local basis of holomorphic vector fields (with respect to J_C): e_n = w ∂_w, e_α = ∂_{z_α}, α = 1, ..., n−1, and correspondingly ē_n, ē_α the conjugate vector fields, e^n, e^α the dual frame, etc., where the indices i, j range over 1, ..., n, 1̄, ..., n̄. Then (2.9) implies that J^j_i = O(|w|) for all i, j. The lemma then follows from the explicit expression of the Kähler metric, see (2.7).
From the assumption (2.5) on the Kähler form and (2.8) on the complex structure asymptotics, we obtain that for the corresponding Riemannian metric It is also useful to write down the Riemannian metric g F explicitly in real coordinates. Note that the set ξ ∈ N D : Then we can write the Riemannian metric g F as follows where g D is the corresponding Riemannian metric for ω D and θ is a connection 1-form on Y such that dθ = ω D . From the asymptotic of the Riemannian metric tensor (2.11), the explicit expression of the Riemannian metric g F in (2.12) and conditions in Assumption 1, one can directly show the following result.
Lemma 2.7. Suppose (X, ω, g) satisfy Assumption 1, then (1) the volume growth of g is at most 2, i.e. there exists a constant C > 0 such that Vol(B R (p)) ≤ CR 2 for all R sufficiently large.
(2) for large numbers K, α = 2 and β = 4 a − 2, (X, ω) is of (K, α, β)-polynomial growth if θ is a smooth form on X vanishing when restricted to D, then That (M, g) is of (K, α, β)-polynomial growth is important for us since we need the weighted Sobolev inequality in [27, Proposition 2.1] to prove a weighted mean value inequality in the next subsection.
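As a sanity check for Lemma 2.7-(1), here is a heuristic volume count on the model end, assuming the coordinate expressions sketched after Proposition 2.3 and the monomial potential F(t) = At^a; it also makes visible why the restriction a ≤ n/(n−1) (see Subsection 2.2 below) corresponds exactly to volume growth of order at most 2.

```latex
% Radial distance versus the potential variable t:
\[
  dr \simeq \sqrt{F''(t)}\,dt \sim t^{(a-2)/2}\,dt
  \quad\Longrightarrow\quad
  r \sim t^{a/2}, \qquad t \sim r^{2/a}.
\]
% Volume form and volume of sublevel sets of t:
\[
  \omega_F^{\,n} \sim F''(t)\,\big(F'(t)\big)^{n-1}\,
  \sqrt{-1}\,\partial t\wedge\bar\partial t\wedge \omega_D^{\,n-1},
  \qquad
  \mathrm{Vol}\big(\{t \le T\}\big) \sim \int^{T} F''\,(F')^{n-1}\,dt \sim T^{\,n(a-1)} .
\]
% Rewriting in terms of the distance R (with T \sim R^{2/a}):
\[
  \mathrm{Vol}(B_R) \sim R^{\,2n(a-1)/a},
  \qquad
  \frac{2n(a-1)}{a} \le 2
  \;\Longleftrightarrow\;
  a \le \frac{n}{n-1}.
\]
```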
2.1. A weighted mean value inequality. In this subsection, using a weighted Sobolev inequality from [27], we prove a weighted mean value inequality for nonnegative functions which are almost subharmonic. This is important when we run Simpson's argument to get a uniform C^0-estimate. As usual, r denotes the distance function to a fixed base point induced by the Riemannian metric.
Lemma 2.8. Let (X, g) be a Riemannian manifold which is of (K, α, β)-polynomial growth as defined in [27]. Let u be a nonnegative compactly supported Lipschitz function satisfying ∆u ≤ f in the weak sense. Suppose that |f | = O(r −N ), for some N ≥ 2 + α + β, then there Proof. The following argument is the standard Moser iteration with the help of the weighted Sobolev inequality in [27, Proposition 2.1].
Let γ = 2n + 1 2n − 1 . Note that we haveˆu p ∆u ≤ˆu p f for any p ≥ 1. Integration by parts and using that |f | = O(r −N ), we havê Let dµ = (1 + r) −N dV g and without loss of generality, we may assume dµ has total mass 1.
Then the weighted Sobolev inequality, together with the triangle inequality and the Hölder inequality, yields an iteration inequality relating the L^{pγ}(dµ)- and L^p(dµ)-norms of u. Let p_i = γ^i, i = 0, 1, · · ·. Then either there exists a sequence p_{i_j} → ∞ such that ∫ u^{p_{i_j}} dµ ≤ 1, which implies that ‖u‖_{L^∞} ≤ 1, or there exists a smallest i_0 such that ∫ u^{p_i} dµ > 1 for i ≥ i_0. In the second case, iterating the inequality gives the claimed bound.
2.2. The assumption on the degree a. The only reason why we need to assume a ≤ n/(n−1) is that the volume growth of the corresponding Riemannian metric is at most 2. In fact we have the following easy but useful degree vanishing property for Riemannian manifolds with at most quadratic volume growth. Lemma 2.9. Let (M, g) be a complete Riemannian manifold with volume growth order at most 2. Let u ∈ C^∞(M) satisfy |∇u| ∈ L^2 and ∆u ∈ L^1. Then ∫_M ∆u dV_g = 0.
Proof. Since |∇u| ∈ L^2, by the Cauchy-Schwarz inequality and the assumption on the volume growth there is a sequence R_i → ∞ such that ∫_{∂B_{R_i}} |∇u| → 0. On the other hand, since ∆u ∈ L^1, we have ∫_{B_{R_i}} ∆u dV_g → ∫_M ∆u dV_g for any sequence R_i going to infinity. Using Stokes' theorem, ∫_{B_{R_i}} ∆u dV_g = ∫_{∂B_{R_i}} ∂_ν u, which tends to 0 along the chosen sequence, and hence ∫_M ∆u dV_g = 0.
2.3. Assumption on Φ and ω. By Proposition 2.3 and the assumption on the decomposition (2.14), and integrating this exact 2-form, we can show the following result, whose proof is similar to that given in [14, Lemma 3.7].
Lemma 2.10. There exists a real 1-form η outside a compact set of C with And it suffices to write θ = dη with |η| . We identify C with R + × Y in such a way that the Riemannian metric g F can be written as dr 2 + g r , where r is the coordinate function on R + and g r is a metric on {r} × Y 2n−1 that depends on r. Then θ is supported on the region {r > r 0 } for some r 0 > 0. Then there exist a 1-form α and a 2-form β supported on the region {r > r 0 } such that ∂ r α = 0 and θ is closed, therefore dα + ∂ r β = 0 and then one can directly check that θ = dη. Since dr ∧ α is perpendicular to β and we assumed |θ| Fix a smooth background Riemannian metricḡ on Y . Then from (2.12) and (2.11), we obtain the following estimate Then the estimate for |η| g F follows from a direct computation.
Remark 2.11. A similar argument can be applied to dd c ϕ directly on X (using Assumption 1 ) and we obtain that there exists a cut-off function χ supported on a compact set and a smooth real 1-form ψ supported outside a compact set satisfying |ψ| = O(r 1+ 1 a ) such that dd c ϕ = dd c (χϕ) + dψ This is quite useful when we want to integrate by parts on X.
We assumed that ω 0 is a closed (1,1)-form on X and vanishes when restricted to D. In particular,ˆX c 1 (D) ∧ ω n−1 0 = 0. Then by the Lelong-Poincaré formula, we obtain the following.
Lemma 2.12. Let S ∈ H 0 (X, L D ) be a defining section of D and h be any smooth hermitian
Hermitian holomorphic vector bundles
Firstly let us recall the definition of projectively Hermitian-Yang-Mills metrics. Given a hermitian metric H on a holomorphic vector bundle E, there is a unique connection compatible with these two structures and it is called the Chern connection of (E, H). Let F H denote the curvature of the Chern connection and we call it the Chern curvature of (E, H). Let E be a holomorphic vector bundle on a Kähler manifold (X, ω). A hermitian metric H is called an ω-projectively Hermitian-Yang-Mills metric (ω-PHYM) if for some constant λ. We also use the notation F ⊥ H to denote the trace-free part of the curvature form, i.e. F ⊥ 3.1. Basic differential inequalities. Let E be a holomorphic vector bundle and H, K be two hermitian metrics on E, then we have an endomorphism h defined by We will write this as H = Kh and h = K −1 H interchangeably. Note that h is positive and self-adjoint with respect to both H and K. Let ∂ H and ∂ K denote the (1, 0) part of the Chern connection determined by H and K respectively. By abuse of notation, we use the same notation to denote the induced connection on End(E). Simpson showed that (2) and (3), if det(h) = 1 then the curvatures can be replaced by the trace-free curvatures F ⊥ .
Slope stability.
If |tr(Λ ω F H )| ∈ L 1 , the ω-degree of (E, H) and ω-slope of (E, H) are defined to be Now let us assume M is compact. Integration by parts shows that the degree defined above is independent of the metric H and only depends on the cohomology class of [ω] ∈ H 2 (X, R), i.e. by the Chern-Weil theory, And for any coherent subsheaf S of E, one can define its ω-degree as follows (see [16,Chapter 5]). It is shown that det S := (∧ r S) * * is a line bundle, where r is the rank of S and define And as before we define µ ω (S), the ω-slope of S, to be deg ω (S) rank(S) . Note that for the definition of ω-degree and ω-slope, we do not need ω to be a Kähler form at all, and a real closed (1, 1)-form is enough. That is for every real closed (1, 1)-form α, we can define The slope µ α (S) is defined similarly as before and we will use the notation µ(S, α) and µ α (S) interchangeably.
We have the following definition, which generalizes the standard slope stability defined for Kähler classes in [16, Chapter 5]. From the definition, we know that if β = 0, then E is (α, β)-stable if and only if it is α-stable; if E is α-stable, then it is (α, β)-stable for any class β. In applications, typically the first class α has some positivity. For example, in our Theorem 1.3, α = c_1(D) is nef and big.
Remark 3.4. For every coherent subsheaf S of a holomorphic vector bundle E, we have an exact sequence of sheaves: 0 → S → S * * → S * * /S → 0, where S * * /S is a torsion sheaf and supports on an analytic set with codimension at least 2. Then by [16,Section 5.6], we know det S = det(S * * ). In particular, we know that E is α-stable (respectively (α, β)-stable) if and only if the conditions in (1) (respectively (2)) hold for every coherent subsheaf of E.
3.3. Coherent subsheaves and weakly holomorphic projection maps. Let (E, H) be a hermitian holomorphic vector bundle over a Kähler manifold (M, ω). Suppose S is a coherent subsheaf of E, since E is torsion-free, then S is torsion free and hence locally free outside Σ which is a closed analytic set of codimension at least 2. Moreover on X\Σ we have an induced orthogonal projection map π = π H S satisfying Outside the singular set Σ, the Chern curvature of (S, H| S ) is related to the Chern curvature (E, H) by (3.5) Let us mention a result in current theory: ). Let Σ be a closed analytic subset of codimension at least 2 in a Kähler manifold (M, ω). Assume T is a closed positive current on M\Σ of bidegree (1, 1), i.e a (1, 1)form with distribution coefficients, then the mass of T is locally finite in a neighborhood of Σ. More precisely, every p ∈ Σ has a neighborhood U ⊆ M such that U T ∧ ω n−1 < ∞.
Applying the above theorem to tr √ −1∂π ∧∂π , one gets And in general we call π ∈ W 1,2 loc (M, End(E); ω, H) is a weakly holomorphic projection map if it satisfies (3.4) almost everywhere. By the discussion above, we know that for a coherent subsheaf S of E, π H S is a weakly holomorphic projection map. A highly nontrivial result due to Uhlenbeck and Yau [28] is that the converse is also true (see also [24]).
Theorem 3.6 ( [28]). Suppose there is a weakly holomorphic projection map π, then there exists a coherent subsheaf S of E such that π = π H S almost everywhere. If X is compact, deg ω (S) defined in (3.3) can be computed using the curvature form F S,H . The following result is well-known, see [16,Section 5.8]. We include a simple proof using Theorem 3.5.
Proof. Let r denote the rank of S. Since S is a subsheaf of E, there is a natural sheaf homomorphism Φ : (∧^r S)^{**} → ∧^r E. Note that Φ is only injective on M\Σ in general. Let ∧^r H denote the metric on ∧^r E induced from H; then Φ^*(∧^r H) defines a singular hermitian metric on (∧^r S)^{**} which is smooth outside Σ and whose curvature form is equal to tr(F_{S,H}). Since Φ is a holomorphic bundle map, by choosing a local holomorphic basis of (∧^r S)^{**} and ∧^r E, it is easy to show that Φ^*(∧^r H) = f K, where K is a smooth hermitian metric on (∧^r S)^{**} and the function f is positive and smooth outside Σ and converges to 0 polynomially along Σ. Then by Theorem 3.5, it suffices to prove the estimate (3.8), which is stated for every smooth positive function. Note that |log f| ∈ L^2; then (3.8) follows from the Cauchy-Schwarz inequality and the existence of good cut-off functions. More precisely, since Σ has real codimension at least 4, it is well-known that there exists a sequence of cut-off functions χ_ǫ such that 1 − χ_ǫ is supported in the ǫ-neighborhood of Σ and we have a uniform L^2 bound on ∆χ_ǫ. We briefly describe a construction of these cut-off functions. Let s be a regularized distance function to Σ, in the sense that s : M → R_{≥0} is smooth, comparable to the distance function d(·, Σ), and there exist positive constants C_k such that |∇^k s| ≤ C_k d(·, Σ)^{1−k}. The existence of such a regularized distance function can be derived from [26, Theorem 2 on page 171]. After a rescaling, we may assume s < 1 on M. For every ǫ > 0, let ρ_ǫ be a smooth function which is equal to one on the interval (−∞, ǫ^{−1}) and zero on (2 + ǫ^{−1}, ∞). Moreover we can have |ρ'_ǫ| + |ρ''_ǫ| ≤ 10. Then we define χ_ǫ = ρ_ǫ(log(−log s)), and one can directly check that these functions satisfy the desired properties.
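For concreteness, here is a brief check that cut-off functions of this log-log type have uniformly L^2-bounded Laplacians near a set of real codimension at least 4. This is a heuristic sketch under the comparability and derivative bounds for the regularized distance stated above; it is not the paper's own display.

```latex
% With \chi_\epsilon = \rho_\epsilon(\log(-\log s)),\; |\rho'_\epsilon|+|\rho''_\epsilon|\le 10,\; s \simeq d := d(\cdot,\Sigma):
\[
  |\nabla\chi_\epsilon| \;\lesssim\; \frac{1}{d\,|\log d|},
  \qquad
  |\Delta\chi_\epsilon| \;\lesssim\; \frac{1}{d^{2}\,|\log d|},
\]
% both supported where \log(-\log s)\in[\epsilon^{-1},\,2+\epsilon^{-1}], i.e. where d is
% extremely small.  Since \mathrm{codim}_{\mathbb R}\Sigma \ge 4, the tube \{d \le \tau\}
% has volume O(\tau^4), so by the co-area formula
\[
  \int_{\{d\le\tau\}} |\Delta\chi_\epsilon|^{2}
  \;\lesssim\; \int_0^{\tau} \frac{\sigma^{3}}{\sigma^{4}\,|\log\sigma|^{2}}\,d\sigma
  \;=\; \int_0^{\tau} \frac{d\sigma}{\sigma\,|\log\sigma|^{2}} \;<\;\infty ,
\]
% uniformly in \epsilon; in fact the integral tends to 0 as \epsilon \to 0 because the
% support of \nabla\chi_\epsilon shrinks into \Sigma.
```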
Motivated by the above result, Simpson [25] uses the right hand side of (3.7) to define an analytic ω-degree of a coherent subsheaf on a noncompact Kähler manifold. Typically one needs to assume |Λ ω F H | ∈ L 1 to ensure the first term of (3.5) is integrable. Then the degree of a coherent subsheaf is either a finite number or −∞ depending on whether |∂π| ∈ L 2 . In general, this analytic degree depends on the choice of the background metric H. And a key observation in this paper is that when E has a compactification and H is conformal to a smooth extendable hermitian metric, this analytic degree does have an algebraic interpretation, see Lemma 5.3 and Lemma 5.4.
Dirichlet problem.
We have the following important theorem of Donaldson: . Given a hermitian holomorphic vector bundle (E, H 0 ) over (Z, ω) which is a compact Kähler manifold with boundary, there is a unique hermitian metric H on E such that (1) As observed in [19], one can do conformal changes to H to fix the induced metric on det E and still have it to be a projectively Hermitian-Yang-Mills metric. Proposition 3.9 ( [19]). Given a hermitian holomorphic vector bundle (E, H 0 ) over (Z, ω) which is a compact Kähler manifold with boundary, there is a unique hermitian metric H on E such that (1) Donaldson functional for manifolds with boundary. Next we recall Simpson's construction [25] for Donaldson functional. We follow the exposition in [19,Subsection 2.5] and focus on compact Kähler manifolds with boundary.
Let (Z, ω) be a compact Kähler manifold with boundary, (E, H 0 ) be a hermitian holomorphic vector bundle. Let b be a smooth section of End(E) which is self-adjoint with respect to H 0 . Then for any smooth function f : as follows: at each point p ∈ Z, choose an orthonormal basis Let Ψ(λ 1 , λ 2 ) denote the smooth function (3.9) Then put Mochizuki proved the following important result 3.6. Bando-Siu's interior estimate. The following result shows that to get a local uniform bound for a sequence of PHYM metrics, it suffices to have a uniform C 0 -bound.
Existence of a good initial metric
In this section, we continue to use notations in Section 2 and always assume the Kähler metric on X satisfies the Assumption 1. We begin by working on the model space (C, ω C ), where ω C = dd c F (t) for some potential F (t) ∈ H. Using the explicit expression of ω C in (2.3), it is easily to show that Lemma 4.1. Let E be a holomorphic vector bundle on D and p * (E) be its pull back to C. Let E be a holomorphic vector bundle on D. We still use p to denote the projection map from D to D. Then we can compare the holomorphic structure on E and p * (E| D ) as follows. In a neighborhood of D, which we may assume to be C, we fix a bundle map Φ : E → p * (E| D ) such that Φ| D is the canonical identity map and Φ is an isomorphism as maps between complex vector bundles. Then Φ pulls back the holomorphic structure on p * (E| D ) to E. Now we have two holomorphic structures on E and denote them by∂ 1 and ∂ 2 . Then the difference β =∂ 2 −∂ 1 is a smooth section of A 0,1 (End(E)) and is 0 when restricted to D. Locally near a point in D, choose holomorphic coordinates {z 1 , · · · , z n−1 , w} such that D = {w = 0}. Using these coordinates, β can be written as f i dz i + hdw where f i and h are smooth sections of End(E) and f i | w=0 = 0. Now suppose we have a Hermitian metric H on p * (E| D )| C , then via Φ we view it as a metric on E| C . Let ∂ i denote the (1, 0) part of the Chern connection determined by∂ i and H. Then one can check that where β * H denote the smooth section of A 1,0 (End(E)) obtained from β by taking the metric adjoint for the End(E) part and taking conjugate for the 1-form part. Locally Extend H D smoothly to get a hermitian metric H 0 on E. Using the diffeomorphism Φ given in the Assumption 1 -(3) we get a positive smooth function on X by abuse of notations stilled denoted by t, which is equal to (Φ −1 ) * t outside a compact set on X.
Define a hermitian metric on E using Then we claim that From the construction, Recall that for a 2 form θ, Since we assume that |Φ * (ω) − ω C | = O(r −N 0 ), then (4.4) will follow from the following estimate on C: there exist a δ > 0 such that Using the same argument as we did before Proposition 4.3, we can show that there exists a δ > 0 such that . Remark 4.5. From the above discussion, we also obtain that |F H 0 | = O(r 1−a ). In general, we can not expect a higher decay order for the full curvature tensor F H 0 since it has nonvanishing component along the directions tangential to D, but if n = 2 we actually proved that From the proof given above, the assumption that E| D is c 1 (N D )-polystable is used crucially to have a good initial metric H 0 satisfying (1) and (2) in Lemma 1.1, which both are important for the proof. We show the assumption that E| D is c 1 (N D )-polystable is also necessary subject to the conditions in Lemma 1.1. More precisely, we have that Proof. By these two assumptions, we have is a smooth bundle valued (1,1)-form on X, its pull-back under Φ to D is a smooth bundle valued 2-form satisfying that its restriction to D is of type (1,1) and . From the explicit expression of the Kähler form ω F in (2.3) and the assumption on the potential F in Assumption 1, we know that Next we can show that the set P H defined in (1.1) is unique if we fix the induced metric on the determinant line bundle. More precisely, we have In particular, there exists a constant C > 0 such that C −1 H 0 ≤ H 1 ≤ CH 0 .
Proof. By condition (1) in Lemma 1.1, we know that there are smooth hermitian metrics H 0 and H 1 and smooth functions f 0 and f 1 on X such that for i = 0, 1 where ∇ denotes the induced connection on End(E) from the Chern connection on (E, H 0 ). From this and noting that H 0 is conformal to an extendable metric, we can check directly that |∇h| ∈ L 2 (X; ω, H 0 ) Then from the definition of P H 0 , we obtain that P H 0 = P H 1 .
Proof of the main theorem
We first prove a lemma on the degree vanishing property. Proof. Fix a base point p ∈ X and let ρ R be a smooth cut-off function which is 1 on B R (p), 0 outside B 2R (p) and |∇ρ R | ≤ C R where C is a constant independent of R. Integrating by parts, we have the following which is bounded by CˆB |β| ω n . And this term tends to 0 as R → ∞ since |β| ∈ L 1 .
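The statement of Lemma 5.1 and the displayed estimate in its proof did not survive in this copy. A minimal sketch of the intended cut-off argument, assuming the lemma asserts that ∫_X dβ ∧ ω^{n−1} = 0 whenever β is a smooth 1-form with |β|_ω ∈ L^1 and dβ ∧ ω^{n−1} absolutely integrable:

```latex
% Cut-off argument (sketch), using d\omega^{n-1} = 0 and |\nabla\rho_R| \le C/R \le C:
\[
  \int_X \rho_R\, d\beta\wedge\omega^{n-1}
  \;\longrightarrow\; \int_X d\beta\wedge\omega^{n-1}
  \quad(\text{dominated convergence, since } d\beta\wedge\omega^{n-1}\in L^1),
\]
\[
  \Big|\int_X \rho_R\, d\beta\wedge\omega^{n-1}\Big|
  = \Big|\int_X d\rho_R\wedge\beta\wedge\omega^{n-1}\Big|
  \;\le\; C\int_{B_{2R}\setminus B_R} |\beta|\,\omega^{n}
  \;\longrightarrow\; 0
  \quad(|\beta|\in L^1),
\]
% so the total integral vanishes.  This is the step that fails when |\beta| only has
% polynomial growth, which is why the variant below (Lemma 5.2) instead uses the
% decay |\beta| = O(r^{-N_0}).
```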
From now on, we assume ω = ω 0 + dd c ϕ is a Kähler form satisfying the Assumption 1 in Section 2. Note that we only proved that dd c ϕ = dψ for a smooth form ψ with |ψ| = O(r 1+ 1 a ) (see Remark 2.11). Therefore we can not apply Lemma 5.1 directly. Typically we have a definite decay order for |β|, so we can still use integration by parts to show some degree vanishing properties. More precisely, we have Proof. By a similar integration by part argument as in the proof of Lemma 5.1, it suffices to show that lim This follows from the facts that |β| = O(r −N 0 ), |ψ| = O(r 1+ 1 a ) and the volume growth order of ω is at most 2.
The following two lemmas are crucial for us since they relate information on X and that on X. Lemma 5.3. Let H 0 be the metric constructed in Lemma 1.1. One has the following equality: Proof. Firstly, recall that By the construction in Lemma 1.1, we know that |Λ ω tr(F H 0 )| = O(r −N 0 ). Since the volume growth order of ω is at most 2, we know that Λ ω tr(F H 0 ) is absolutely integrable. Therefore the left hand side of (5.3) is well-defined. By the Chern-Weil theory, for any smooth hermitian metric H 0 on E we havê By the construction (4.3), H 0 = e Ct H 0 for some constant C and t defined in Section 4. Moreover by Remark 4.4, t = − log |S| 2 h for some smooth hermitian metric on L D . By Lemma 2.12, we obtain thatˆX dd c t ∧ ω n−1 Using (5.2) and ω = ω 0 +dd c ϕ, to prove (5.1), it suffices to show that for any k = 1, · · · , n−1, Since ω 0 vanishes when restricted to D, by Lemma 2.7, we know that |ω 0 | = O(r −N 0 ). Combining this with Remark 4.5, we know that tr(F H 0 ) ∧ ω n−1−k 0 is a closed (n − k, n − k)-form with decay order at least r −N 0 . Therefore Lemma 5.2 implies that its integral is 0. Case 2. k = n − 1. If n = 2, then by (4.8) we can still apply Lemma 5.2. If n ≥ 3, note that though |Λ ω tr(F H 0 )| = O(r −N 0 ), | tr(F H 0 )| is not in L 1 in general. So we can not apply Lemma 5.2 directly. Instead we shall use the asymptotic behaviour of tr(F H 0 ) obtained from the construction. Integrating by parts and pulling back via Φ, we know that Then by (2.5), Lemma 2.10 and the assumption N 0 > 8, we obtain that the right hand side of the above equality equals By (4.6) and (4.7), we know that it equals Note that when restricted to the level set of t, (1). Let S be a coherent reflexive subsheaf of E. If S| D is locally free and a splitting factor of E| D , then∂π H 0 S ∈ L 2 (X; ω, H 0 ). (2). Let π ∈ W 1,2 loc (X, E * ⊗ E; ω, H 0 ) be a weakly holomorphic projection map. If∂π ∈ L 2 (X; ω, H 0 ), then there exists a coherent reflexive subsheaf S of E such that π = π H 0 S a.e. and S| D is a splitting factor of E| D .
Proof. A crucial point here is that H 0 is conformal to a smooth extendable metric H 0 . In particular, for a coherent subsheaf S of E, the projections induced by H 0 and H 0 are the same. Note that by [4, Lemma 3.23 and Remark 3.25], for every coherent reflexive subsheaf S of E, S| D is torsion free and can be naturally viewed as a subsheaf of E| D .
(1) Let π = π H 0 S . Then π is smooth in a neighborhood of D and∂π| D = 0 by assumption. Note that π H 0 S = π| X , so it suffices to show∂π ∈ L 2 (X, ω, H 0 ). Fix small balls U i of X covering D such that there are holomorphic coordinates {z 1 , · · · , z n−1 , w} on each U i with D ∩U i = {w = 0} and E is trivial on each ball U i . Under these coordinates and trivializations we can write∂ π =∂ z πdz +∂ w πdw, where we view∂ z π and∂ w π as matrices of smooth functions and∂ z π| w=0 = 0. So we have |∂ z π| ≤ C|w| and |∂ w π| ≤ C. Then the result follows from the explicit estimate given in (2.7).
Since |π| H 0 ≤ 1 and by [7,Lemma 7.3], it suffices to show∂π ∈ L 2 (X; ω X , H 0 ). By (2.8) and (2.11), we may assume in local coordinates around D the Kähler metric ω is exactly given by the model space. We choose local holomorphic coordinates z = {z i } n−1 i=1 on the smooth divisor D and fix a local holomorphic trivialization e 0 of N D with |e 0 | h D = e −ψ , where ψ is a smooth function on D satisfying √ −1∂∂ψ = ω D . Then we get local holomorphic coordinatea {z 1 , · · · , z n−1 , w} on C by writing a point ξ = we 0 (z). Choose a basis of (0, 1)forms dz 1 · · · dz n−1 , dw w −∂ z i ψdz i . Then we can writē where f i and h are sections of End(E). Notice that dz i is perpendicular to the dw Since∂π is in L 2 with respect to ω, by (2.7) we know that Then we know that f i −h∂ z i ψ, h w are all L 2 -integrable with respect to the Lebesgue measure. Therefore the claim is proved: Then Uhlenbeck-Yau's result (Theorem 3.6) implies that there exists a coherent subsheaf S of E such that π = π H 0 S outside the singular set of S. Taking the double dual, we may assume S is reflexive. By the integrability condition (5.5),∂π H 0 S | D = 0, which means that S| D is a splitting factor of E| D since E| D is polystable. Now we are ready to prove the main theorem. We decompose it into two propositions.
(When E is a vector bundle, this follows from the fact that the first Chern class c 1 (D) is the Poincaré dual of the homology class defined by the divisor D. For a general reflexive sheaf, the key point is to show that c 1 (E)| D = c 1 (E| D ) using the fact that E is locally free outside an analytic set of (complex) codimension at least 3.) Therefore we have µ(S| D , c 1 (N D )) ≥ µ(E| D , c 1 (N D )). (5.6) By assumption, E| D is c 1 (N D )-polystable, so (5.6) implies that S| D is locally free and is a splitting factor of E| D . Then by Lemma 5.4, we havē ω, H). For simplicity of notation, in the following we omit the dependence on S. By the definition of P H 0 and H ∈ P H 0 , we know that H = H 0 e s with s L ∞ + ∂ s L 2 < ∞. The claim follows directly from the following pointwise inequality (outside the singular set Σ of S) where C is a constant independent of points and all the norms are with respect to H 0 . Let r 0 , r denote the rank of S and E respectively. Near any given point p ∈ X\Σ, we can find a local holomorphic basis {e 1 , · · · , e r 0 , e r 0 +1 , · · · , e r } of E such that S = Span{e 1 , · · · , e r 0 }, e i , e j H 0 (p) = δ ij , ∂ e i , e j H 0 (p) = 0 for 1 ≤ i, j ≤ r 0 and r 0 + 1 ≤ i, j ≤ r .
In the following we use Einstein summation convention and use i, j to denote numbers from 1 to r, α, β to denote numbers from 1 to r 0 . Under this basis π H 0 can be written as Similarly, π H can be written as e ∨ α ⊗ e α + H iβ H βα e ∨ i ⊗ e α . Note that as a matrix H = H 0 h, where h is the matrix representation of e s under the basis which gives (5.7). Let π = π H S . Using the Chern-Weil formula and the fact that H ∈ P H 0 is PHYM, we have and consequently is L 1 .
Assume this for a moment; then by Lemma 5.3 we obtain the corresponding degree comparison, and equality holds if and only if ∂̄π = 0. Suppose ∂̄π = 0. Then, again by [7, Lemma 7.3], there is a global holomorphic section of End(E), which is still denoted by π, such that π = π_S^H a.e. and π^2 = π. Note that since rank(π) = tr(π) is real valued and holomorphic, it follows that rank π is a constant. Thus E holomorphically splits as the direct sum of ker π and Im π, which contradicts our assumption that E is irreducible. Therefore the desired strict inequality holds. Proof of the claim: since H ∈ P_{H_0}, we have tr(F_{S,H}) − tr(F_{S,H_0}) = ∂∂̄u for a bounded real-valued smooth function u with |∇u| ∈ L^2. By Lemma 2.9, ∫ tr(F_{S,H}) ∧ ω^{n−1} = ∫ tr(F_{S,H_0}) ∧ ω^{n−1}.
By the same argument as in Lemma 5.3, we can show the corresponding identity; hence we complete the proof of the claim. Proof. Uniqueness is obvious. Suppose we have two ω-PHYM metrics H_1, H_2 ∈ P_{H_0}, and let h = H_1^{−1}H_2. By the definition of P_{H_0}, we know that det h = 1, h is bounded from above and below, and |∂h| ∈ L^2. Then, taking the trace of the differential equality in Lemma 3.2 and applying Lemma 2.9, we obtain ∂h = 0, and since h is self-adjoint with respect to H_i, it is parallel with respect to the Chern connection determined by (∂, H_i). Then its eigenspaces give a holomorphic decomposition of E, which contradicts the assumption that E is irreducible unless h is a multiple of the identity map. Since det h = 1, it must be that h is the identity map, i.e. H_1 = H_2.
For the existence part, we follow Simpson and Mochizuki's argument [25,19]. For completeness, we include some details. Let {X i } be an exhaustion of X by compact domains with smooth boundary and we solve Dirichlet problems on every X i using Donaldson's theorem (Theorem 3.8). Then we have a sequence of PHYM metrics H i on E| X i such that H i | ∂X i = H 0 | ∂X i and det H i = det H 0 . Let s i be the endomorphism determined by H i = H 0 h i = H 0 e s i . Then we have s i | ∂X i = 0 and tr(s i ) = 0.
We argue by contradiction to prove a uniform C 0 -estimate for s i . First note that by Lemma 3.2, e s i satisfies the elliptic differential inequality ∆ log(tr(e s i )) ≤ |ΛF ⊥ H 0 | (5.8) therefore tr(e s i ) satisfies the weighted mean value inequality in Lemma 2.8. Since tr(e s i ) and |s i | are mutually bounded, we know that |s i | also satisfies the weighted mean value inequality (2.13). Lemma 2.8 plays an essential role since it ensures that after normalization we can have a nontrivial limit in W 1,2 loc . Suppose there is a sequence s i such that sup Then by Lemma 2.8, we obtain Let u i = l −1 i s i . Then by Lemma 2.8 again we obtain there is a constant C independent of i such thatˆX i |u i |(1 + r) −N 0 = 1 and |u i | ≤ C, (5.9) where the norms are with respect to the back ground metric H 0 . Then following Simpson's argument, we can show that Lemma 5.7. After passing to a subsequence, u i converge weakly in W 1,2 loc to a nonzero limit u ∞ . The limit u ∞ satisfies the following property: if Φ : R × R → R is a positive smooth function such that Φ (λ 1 , By the definition of Ψ in (3.9), we know that as l → ∞, lΨ (lλ 1 , lλ 2 ) increases monotonically Fix a Φ as in the statement of the lemma. We know that for all A > 0 there exists l A such that if |λ i | ≤ A and l > l A , then we have Since sup |u i | are bounded, its eigenvalues are also bounded. Then by (5.11) and (5.12), we obtain that for i sufficiently large Again since sup |u i | is bounded we can find Φ satisfying the assumption in the lemma and Φ(u i ) = c 0 for all i, where c 0 a fixed small positive number. Then by (5.13) and the construction of H 0 , there exists a positive constant C such that Therefore by a diagonal sequence argument and after passing to a subsequence we may assume u i converge weakly in W 1,2 loc to a limits u ∞ withˆX |∂u ∞ | 2 ≤ C. We claim that u ∞ = 0. Indeed by (5.9), there exists a compact set K ⊆ X independent of i such that Since on compact sets the embedding from W 1,2 to L 1 is compact, after taking the limit, we In particular u ∞ = 0.
Next we prove (5.10). By the uniform boundedness of u i , the O(r −N 0 ) decay property of |ΛF H 0 | and the nonnegativity of the second term of the left hand side in (5.13), we know that there exists ǫ i → 0 such that for any j ≥ i, we have Note that Φ (u j ) ∂ u j ,∂u j H 0 = |Φ 1 2 (u j )(∂u j )| 2 H 0 . By [25, Proposition 4.1], we know that on each fixed X i , Φ 1 2 (u j ) → Φ 1 2 (u ∞ ) in Hom L 2 , L q for any q < 2. Since∂u j converge weakly in L 2 (X i ) to∂u ∞ , we obtain that Φ 1 2 (u j )(∂u j ) converge weakly to Φ 1 2 (u ∞ )(∂u ∞ ) in L q (X i ) for any q < 2. Then we know that for any q < 2, Letting i → ∞, the inequality (5.10) is proved.
Simpson's argument in [25, Lemma 5.5 and Lemma 5.6] can be applied verbatim to the infinite volume case, so we have Lemma 5.8 ([25]). Let u ∞ be a limit obtained in the previous lemma. Then we have (1) The eigenvalues of u ∞ are constant and not all equal.
Moreover using (5.10), Simpson proved that Lemma 5.9 ([25]). There exists at least one γ such that By Lemma 5.4, we get a filtration of E by coherent reflexive subsheaves S i whose restrictions to D are splitting factors of E| D . Since we assume that E| D is c 1 (N D )-polystable, we know that for every i µ(S i | D , c 1 (N D )) = µ(E| D , c 1 (N D )).
Then by Lemma 5.3, (5.15) which contradicts with the (c 1 (D), [ω 0 ])-stability assumption. Therefore we do have a uniform C 0 -estimate for s i .
Bando-Siu's interior regularity result Theorem 3.11 can be applied to get local uniform estimate for all derivatives of s i . Then we can take limits to get a smooth section s ∈ End(E), which is self-adjoint with respect to H 0 and tr(s) = 0 and more importantly s L ∞ < ∞ and H = H 0 e s is a PHYM metric.
Indeed, taking the trace of the equality in Lemma 3.2-(2), where ν_i denotes the outward unit normal vector of ∂X_i, integrating (5.16) over X_i and using Stokes' theorem on the left-hand side, we obtain the corresponding integral identity over X_i. Since we have a uniform C^0-estimate for s_i = log h_i, there exist constants C_1 and C_2 independent of i such that the resulting integrals are uniformly bounded.
On the stability condition. Note that global semistability is known [18], if we assume the restriction to D is semistable. There do exist irreducible holomorphic vector bundles which are polystable when restricted to D but not globally stable, even under the more restrictive assumptions that X is Fano and D ∈ |K_X^{−1}|. Example 5.10. One can take E corresponding to a non-splitting exact sequence of holomorphic vector bundles whose restriction to D splits as a direct sum of two line bundles with the same degree. Therefore E itself is not c_1(D)-stable but E|_D is c_1(N_D)-polystable. Such an E is irreducible, because if E = L_1 ⊕ L_2, then deg(L_i, c_1(D)) = deg(L_i|_D) = 0 since E|_D is polystable of degree 0, which implies that S has to be one of the L_i and Q is the other one. This contradicts the construction of E.
6. Discussion
6.1. More results on the existence of PHYM metrics. By Donaldson's theorem on the solvability of the Dirichlet problem (Theorem 3.8), the elliptic differential inequality (Lemma 3.2-(3)), the maximum principle and Bando-Siu's interior estimate (Theorem 3.11), we get the following well-known existence result. There are many examples for which (6.1) has a positive solution and even bounded solutions [1,21,23].
for some ǫ > 0, then (6.1) admits a bounded solution. In particular, if (M, g) has nonnegative Ricci curvature, volume growth order greater than 2, |ΛF ⊥ H 0 | = O(r −2 ) and |ΛF ⊥ H 0 | ∈ L 1 , then (6.1) admits a bounded solution. Theorem 6.1 can not be applied to (X, ω, g) satisfying Assumption 1 since we do not know whether (6.1) admits a positive solution (for this volume growth order at most 2 is a key issue). And actually Theorem 1.3 tells us that there are some obstructions for the existence of ω-PHYM metrics which are mutually bounded with the initial metric.
Such a phenomenon also appears when we seek a bounded solution for the Poisson equation on a complete noncompact Riemannian manifold (M, g) with nonnegative Ricci curvature. Suppose f is compactly supported for simplicity, then we know that (1) if the volume growth order is greater than 2, i.e. there is a constant c > 0 such that Vol(B r ) ≥ cr 2+ǫ for some ǫ > 0, then (6.2) admits a bounded solution. (Since by Li-Yau [17], (M, g) admits a positive Green's function which is O(r −ǫ ) at infinity, a bounded solution of (6.2) is obtained by the convolution with the Green's function.) (2) if the volume growth order does not exceed 2, i.e. there is a constant C > 0 such that Vol(B r ) ≤ C(r + 1) 2 , then (6.2) admits a bounded solution if and only ifˆM f = 0.
(For the "if" direction, see [11, Theorem 1.5]. For the "only if" direction, suppose we have a bounded function u and a compactly supported function f such that ∆u = f. Then by Cheng-Yau's gradient estimate [5], we obtain |∇u| ≤ C/r for some C > 0 independent of r. Multiplying both sides of (6.2) by u and integrating by parts, we obtain that |∇u| ∈ L^2. Then Lemma 2.9 implies ∫_M f = 0.) Next we discuss another result whose proof is similar to the proof of Theorem 1.3. Let (X, ω) be an n-dimensional (n ≥ 2) compact Kähler manifold and D be a smooth divisor. Let ω_D = ω|_D denote the restriction of ω to D and X = X\D denote the complement of D in X. Let L_D be the line bundle determined by D and S ∈ H^0(X, L_D) be a defining section of D. Fix a hermitian metric h on L_D. Then after scaling h, the function t = −log|S|^2_h is smooth and positive on X. For any smooth function F : (0, ∞) → R with |F'(t)| → 0 as t → ∞ and F''(t) ≥ 0, there exists a large constant A such that the form (6.3) is a Kähler form on X. By scaling ω we may assume A = 1. One can easily check that ω is complete if and only if ∫_1^∞ √(F'') dt = ∞, and it always has finite volume. In the following, we always assume the function F satisfies |F'(t)| → 0 as t → ∞ and F''(t) ≥ 0. Then we can state the assumptions on ω.
Assumption 2.
Let ω be the Kähler form defined by (6.3) and g be the corresponding Riemannian metric. We assume that (1) the sectional curvature of g is bounded.
A consequence of these assumptions is that (X, g) is complete and of (K, α, β)-polynomial growth defined in [27, Definition 1.1], so we can use the weighted Sobolev inequality as we did for the proof of Lemma 2.8.
Let E be an irreducible holomorphic vector bundle on X such that E| D is ω D -polystable. Then by Donaldson-Uhlenbeck-Yau theorem, there exists a hermitian metric H D on E| D such that Extend H D smoothly to get a smooth hermitian metric H 0 on E. Then by (6.4) and Assumption 2 -(2), one can easily show that . Then we have the following result Theorem 6.3. Suppose (X, ω) satisfies Assumption 2 and E| D is ω D -polystable. Let H 0 be a hermitian metric as above and P H 0 be defined by (1.1). Then there exists an ω-PHYM metric in P H 0 if and only if E is ω-stable.
Using the argument in Proposition 5.5, the "only if" direction follows from Lemma 2.9 and the following lemma.
Lemma 6.4. For every smooth closed (1,1)-form θ on X, we havê X θ ∧ ω n−1 =ˆX θ ∧ ω n−1 (6.5) Proof. Firstly note that since there exists a positive number c > 0 such that ω > cω and ω n < ∞, the left hand side of (6.5) is well-defined. Therefore it suffices to show that for Let S ǫ denote the level set {|S| h = ǫ}. By integration by part, it suffices to show that Case 1. k = 1. Note that with respect to the smooth back ground metric ω, Vol(S ǫ ) = O(ǫ) and |d c F | ≤ C|F ′ (t)|ǫ −1 on S ǫ . Then (6.6) follows from the assumption that |F ′ | → 0 as t → ∞. Case 2. 2 ≤ k ≤ n − 1. Then (6.6) follows from the fact that |F ′ (t)| → 0 as t → ∞ and For the "if" direction, the argument in Proposition 5.6 applies. We will not give the details and just point out the following two observations which make the argument work in this setting. The key points are (1) Assumption 2 and Lemma 6.2 ensure that we can apply the weighted mean value inequality proved in Lemma 2.8. (2) We have L 2 (X, ω) ⊂ L 2 (X, ω) since ω ≥ cω for some c > 0, therefore by Uhlenbeck-Yau's theorem (Theorem 3.6) a weakly projection map π of E over X with |∂π| ∈ L 2 (X, ω) defines a coherent torsion free sheaf S of E.
6.2. Calabi-Yau metrics satisfying Assumption 1. As mentioned in the Introduction, there do exist interesting Kähler metrics satisfying Assumption 1, which contain Calabi-Yau metrics on the complement of an anticanonical divisor of a Fano manifold and its generalizations [27,14,13]. We will call them Tian-Yau metrics. Here we give a sketch of the construction of these Calabi-Yau metrics and refer to [13, Section 3] for more details. Let X be an n-dimensional (n ≥ 2) projective manifold, D ∈ |K_X^{−1}| be a smooth divisor and X = X\D be the complement of D in X. Suppose that the normal bundle of D in X, N_D = K_X^{−1}|_D, is ample. Fix a defining section S ∈ H^0(X, K_X^{−1}) of the divisor D, whose inverse can be viewed as a holomorphic volume form Ω_X on X with a simple pole along D.
Let Ω D be the holomorphic volume form on D given by the residue of Ω X along D. Using Yau's theorem [29] , there is a hermitian metric h D on K −1 X | D such that its curvature form is a Ricci-flat Kähler metric ω D with by rescaling S if necessary. One can show that the hermitian metric h D extends to a global hermitian metric h X on K −1 X such that its curvature form is nonnegative and positive in a neighborhood of D.
By glueing a smooth positive constant on a compact set, we get a global positive smooth function z which is equal to (− log |S| 2 h X ) 1 n outside a compact set. For any A ∈ R, we denote h A = h X e −A and v A = n n + 1 n , which is viewed as a smooth function defined outside a compact set on X. We denote by H 2 c,+ (X) the subset of Im(H 2 c (X, R) → H 2 (X, R)) consisting of classes k such thatˆY k p > 0 for any compact analytic subset Y of X of pure dimension p > 0. Then Hein-Sun-Viaclovsky-Zhang proved the following result Theorem 6.5. [13] For every class k ∈ H 2 c,+ (X), there is a unique Kähler metric ω ∈ k such that (1) ω n = ( √ −1) n 2 Ω X ∧ Ω X , and for some δ, A > 0 and all l ≥ 0.
And from the construction in [13]-Section 3, we have the decomposition ω = ω 0 + dd c ϕ, where ω 0 is a smooth (1,1)-form on X vanishing when restricted to D. And by Theorem 6.5 and the estimate in [14,Proposition 3.4], one can directly check that these Kähler metrics satisfy Assumption 1.
Remark 6.6. It was proved in [14] that Tian-Yau metrics ω T Y can be realized as the rescaled pointed Gromov-Hausdorff limits of a sequence of Calabi-Yau metrics ω k on a K3 surface. We expect that ω T Y -PHYM connections we obtained in this paper give models for the limits of ω k -HYM connections on the K3 surface.
6.3. On the ampleness assumption of the normal bundle N D . In this subsection, we first explain why we assume the normal bundle of D is ample and then discuss the case where the normal bundle is trivial on compact Kähler surfaces.
In order to have the above equality, a natural (possibly the only reasonable) choice is that α = c 1 (D).
To make the argument in this paper work, we also need the following property: if a vector bundle F on D is polystable with respect to α| D and S is a coherent subsheaf of F with the same α| D -degree as F , then S is a vector bundle and is a splitting factor of F . (Note that this does not follow from the definition since α| D may not be a Kähler class. For example if α| D is 0, then definitely it does not satisfy this property.) In general in order to have this property, we need α| D to be a Kähler class. This is one of the reasons why we assume that the normal bundle of D is ample, i.e. c 1 (D)| D is a Kähler class. Another reason is that by assuming N D is ample, on the punctured disc bundle C we have explicit exact Kähler forms, which give models of the Kähler forms on X.
However if X is a compact complex surface, in which case the divisor D now is a smooth Riemann surface, then the property mentioned above always holds. Note that on a Riemann surface D, the slope of a vector bundle is canonically defined and independent of the choice of cohomology classes on D.
Lemma 6.7. Let X be a compact Kähler surface and D be a smooth divisor. Suppose E| D is polystable. Let S be a coherent reflexive subsheaf of E. Then µ(S, c 1 (D)) = µ(E, c 1 (D)) if and only if S| D is a splitting factor of E| D .
Using this, most of the arguments in Section 5 can be modified to work for divisors D with c 1 (N D ) = 0 in complex dimension 2. In the following, we assume c 1 (N D ) = 0 in H 2 (D, R). Then it is easy to see that c 1 (D) is nef and by the global ∂∂-lemma on D, we know that there exists a hermitian metric h D on N D with vanishing curvature. Let L D be the line bundle determined by D and S ∈ H 0 (X, L D ) be a defining section of D. Then we can extend h D smoothly to get a smooth hermitian metric h on L D and after a rescaling, we may assume that t = − log |S| 2 h is positive on X. In this case, we can consider (at least) all monomials potentials with degree greater than 1 H := {F (t) = At a : A > 0 is a constant and a > 1} . (6.8) Assumption 3. Let ω be a Kähler form on X and g be the corresponding Riemannian metric. We assume that (1) the sectional curvature of g is bounded.
Suppose (X, ω, g) satisfies Assumption 3, then we have the following consequences: • the Riemannian metric g is complete and has volume growth order at most 2, • (X, g) is of (K, 2, β)-polynomial growth as defined in [27, Definition 1.1] for some positive constants K and β. Let E be an irreducible holomorphic vector bundle over X such that E| D is polystable with degree 0. Then by Donaldson-Uhlenbeck-Yau theorem (for Riemann surfaces this was first proved by Narasimhan and Seshadri [20]), there exists a hermitian metric H D on E| D such that Λ ω D F H D = 0.
Since D is a Riemann surface, this is equivalent to say that H D gives a flat metric on E| D , i.e. F H D = 0. (6.9) Extending H D smoothly to get a hermitian metric H 0 on E then by (6.9) and the proof of Lemma 1.1, we know that H 0 is already a good initial metric in the following sense: |F H 0 | = O(e −δt ). (6.10) Then we have the following result, whose proof is essentially the same as that for Theorem 1.3. We just point out the difference. The argument in Section 5 can be applied if Lemma 5.3 still holds. The analog of Lemma 5.3 in this case is the following lemma, for which we need to assume E| D is flat. Lemma 6.9. Suppose (X, ω) satisfies Assumption 3 and E| D is flat. Let H 0 be a hermitian metric as above. Then we have the following equality: Proof. By Chern-Weil theory, it suffices to show that X √ −1 2π tr(F H 0 ) ∧ dd c ϕ = 0. (6.11) The argument in Lemma 2.10 can be used again to show that there exists a cut-off function χ supported on a compact set and a smooth 1-form ψ supported outside a compact set such that dd c ϕ = dd c (χϕ) + dψ. Moreover |ψ| grows at most in a polynomial rate of r. Then (6.11) follows from integration by parts and (6.10).
Example 6.10. Let X = CP 1 × D, where D is a compact Riemann surface. Then D = {∞} × D is a smooth divisor with trivial normal bundle. Fix a Kähler form ω D on D and also view it as a form on CP 1 × D via the pull-back of the obvious projection map. Note that up to a scaling [ω D ] ∈ c 1 (CP 1 ) in H 1,1 (X). We can consider asymptotically cylindrical metrics on X = C × D given by the Kähler forms where z denotes the coordinate function on C and ϕ = Φ ′′ is a positive smooth function defined on R such that ϕ(t) = e t when t is sufficiently negative and ϕ(t) = 1 for t sufficiently positive. Then one can easily check that (X, ω) satisfies Assumption 3 with F (t) = t 2 . Let E be an irreducible holomorphic vector bundle on CP 1 × D such that E| D is flat. Then by Theorem 6.8, we know that Similar examples as in Example 5.10 show that the condition c 1 (D), c 1 (CP 1 ) -stability is non-trivial. More specifically, let D be a Riemann surface with genus g ≥ 1 and k ≥ 2 be an integer. Then similar argument as in Example 5.10 shows that there exists a non-splitting extension 0 −→ O −→ E −→ p * 1 (O P 1 (−k)) −→ 0. whose restriction to D splits. Then one can easily check that E is irreducible and not c 1 (D), c 1 (CP 1 ) -stable.
6.4. Some problems for further study. Let (X, ω) satisfy Assumption 1. As illustrated by Theorem 6.5, it is more natural to assume a stronger condition on the background Kähler metric ω. More precisely, we assume that in (2.5) the right-hand side is replaced by O(e^{−δ_0 r^{α_0}}) for some δ_0, α_0 > 0 and that we also have the same bound for higher order derivatives. Under these assumptions, and motivated by the result of Hein [12] for solutions of complex Monge-Ampère equations, we make the following conjecture. Note that the key issue is to prove that |s| decays exponentially, since all of the higher order estimates will follow from standard elliptic estimates.
Effects of glass fibers reinforced and non-reinforced composite resin on fracture behavior of severely destructed primary incisors and restored with post and core system
Objective To evaluate the fracture resistance and failure type of coronally rehabilitated primary incisors restored with EverX Flow or Grandio Core post and core, with or without a fiber post. Materials and Methods Forty-eight extracted maxillary primary incisors were root canal treated and obturated with Metapex. The coronal 4 mm of Metapex was removed to create a 3-mm intracanal post space. Next, the coronal enamel and radicular dentin surfaces were acid-etched, and a bonding agent was applied and light-cured. Based on the intracanal post and 2-mm-high core buildup materials, specimens were divided equally (n = 12) into 4 groups as follows: Group I (EverX Flow), Group II (Grandio Core), Group III (Fiber post and EverX Flow), and Group IV (Fiber post and Grandio Core). The coronal restorations were finalized to 4-mm height using G-aenial Anterior composite, and the specimens were tested for fracture resistance. The force required to induce fracture was recorded and the failure type was examined. Results The fracture resistance of the Fiber post and EverX Flow group was significantly higher than that of the other tested groups. However, the fracture resistance of the EverX Flow group did not differ significantly from that of the Fiber post and Grandio Core group. Regarding failure type, no specimen presented root fracture, and all failures were favorable and repairable. Conclusion EverX Flow post and core, with or without a fiber post, enhanced the fracture resistance of restored primary incisors compared to Grandio Core alone. Clinical relevance The EverX Flow post and core system, with or without a fiber post, could be a promising restorative option for severely destructed primary incisors.
Introduction
Severe destruction of maxillary primary incisors is a common sequela of early childhood caries, and in the available literature no standardized restorative technique has been documented for these teeth (Alamdari et al., 2023; Baghalian et al., 2014; Memarpour and Shafiei, 2013; Mehra et al., 2016). Different types of posts have been proposed for intracanal retention when restoring severely destructed primary incisors (Mittal et al., 2015; Vafaei et al., 2016). However, the development of the fiber post was a turning point in the restoration of severely destructed anterior teeth (Sawant et al., 2017). The modulus of elasticity and mechanical properties of fiber posts, which are close to those of dentin, decreased the possibility of the root fractures associated with metallic posts (Jacob et al., 2021).
Insertion of fiber posts inside root canal requires use of post drills which not only remove additional radicular dentin beyond need for root canal treatment but also produce cracks inside root canal (Fernandes et al., 2021;Fráter et al., 2020).In addition, the space originally occupied by dentin is replaced with mechanically inferior cement than that of dentin (Lassila et al., 2020a).For these reasons, post-debonding was frequently associated with adhesively cemented fiber posts (Salama et al., 2021).
Grandio Core (GC Group, Tokyo, Japan) is a flowable dual-cure composite material that combines quick self-curing as well as ondemand light-curing features (Säilynoja et al., 2021).The higher filler content of this material improved its mechanical properties and showed promising results alone or with fiber posts in permanent teeth (Fráter et al., 2021a;Lassila et al., 2020a).Accordingly, Grandio Core could overcome post-debonding failure and ensure the longevity of restored anterior primary teeth (Jacob et al., 2021).
A direct composite post is another way to overcome post-debonding failure, as it makes an exact copy of the canal space without the need for luting cement (Fráter et al., 2021a). However, high polymerization shrinkage and low fracture toughness of particulate-filled composite (PFC) were the main causes of failure (Ibrahim and Nourallah, 2020; Salama et al., 2021). Short fiber-reinforced composites (SFRCs) are bulk-fill materials that contain randomly oriented glass fibers embedded within a resin matrix, which could eliminate the need for a post (Alshabib et al., 2022).
Recently, EverX Flow (GC Dental Corp., Tokyo, Japan) is a flowable version of SFRCs that can be used in limited spaces, such as root canals (Alshabib et al., 2022).The application of EverX Flow with fiber post improved fracture behavior of restored bovine or human incisors (Suni et al., 2023;Uctasli et al., 2021).Accordingly, the application of EverX Flow could improve resistance of restored primary incisors to fracture.Therefore, the present study aimed to assess fracture resistance of restored primary incisors with EverX Flow or Grandio Core post and core foundation with or without fiber post under the PFC veneer layer.It was hypothesized that resistance to fracture and type of failure of restored primary incisors with tested post and core buildup techniques would be different.
Ethical approval and Sample size
The present study was conducted after approval of the Dental Research Ethics Committee of the Faculty of Dentistry, Mansoura University, with code number (M109023PP). The sample size of the present study was calculated based on the previously published study of Alamdari et al., who compared the fracture resistance of incisors restored with different post and core materials (Alamdari et al., 2023). The analysis was performed using the G*Power program, version 3.1.9.7, at 80 % power and a 0.05 significance level, and showed that the fracture resistance of each group could be evaluated using 12 primary incisors.
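For readers who want to reproduce this kind of sample-size estimate, the sketch below shows an equivalent a priori power analysis for a four-group one-way ANOVA in Python. The effect size is a placeholder assumption (the value entered in G*Power is not reported here), so the resulting n is illustrative only; only alpha = 0.05, power = 0.80 and k = 4 come from the text.

```python
# Illustrative a priori power analysis for a 4-group one-way ANOVA.
# NOTE: effect_size below is a placeholder assumption, not a value reported
# in the paper.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
total_n = analysis.solve_power(
    effect_size=0.5,   # Cohen's f (placeholder assumption)
    k_groups=4,        # four post-and-core groups
    alpha=0.05,        # significance level stated in the paper
    power=0.80,        # power stated in the paper
)
print(f"Total sample size: {total_n:.1f} "
      f"(about {total_n / 4:.1f} incisors per group)")
```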
Specimens' collection
Forty-eight extracted primary maxillary central incisors collected from outpatient dental clinics were used in this study.The incisors were selected based on presence of at least intact two-thirds of root length and sound cervical third of the crown and the absence of any previous pulp therapies.The collected incisors were cleaned, disinfected in 0.5 % thymol, and kept in normal saline until used.
Specimens' preparation and pulpectomy procedure
The coronal portion of the incisors was cut 1 mm above the cemento-enamel junction (CEJ) with a low-speed diamond disc, and the orifices of the root canals were enlarged with a size 3 Mani Gates-Glidden bur (Mani Inc, Japan). With a working length 1 mm short of the apex, the canals were manually instrumented using Mani H-files (Mani Inc, Japan) up to size 35 and irrigated with normal saline after each file. After cleaning and shaping, the canals were obturated with Metapex (Meta Biomed Co., Republic of Korea), as it is the most effective and applicable obturating material. Then, the coronal 4 mm of Metapex was completely removed from the root canals, and 1 mm of glass ionomer cement was placed over its apical part (GC Fuji I; Tokyo, Japan).
Post and core restorative techniques and grouping
The enamel border and intracanal dentin of incisors were acidetched for 15 sec with 37 % Scotchbond gel (3 M, MN, USA), water rinsed, and air dried but left moist.According to manufacturer's instructions, G-Premio Bond (GC Group, Tokyo, Japan) was applied over etched areas, air dried, and light cured for 20 sec.Next, prepared incisors were divided equally (n = 12) into 4 groups based on post and core materials as follows: Group I: EverX Flow.Approximately 3 mm of EverX Flow was injected into canal and light cured for 40 sec.Then, 2 mm core height was built up for each incisor with EverX Flow and light cured for 20 s.
Group II: Grandio Core.The post and core were directly built up with Grandio Core as described in group I.
Group III: Fiber post and EverX Flow.A 5-mm length of Fiber Post (GC Group, Tokyo, Japan) was cut with high-speed diamond bur under water spray.The post surface was conditioned for 15 s with 37 % Scotchbond etchant, water rinsed, and air dried.The EverX Flow was first injected inside root canal and followed by fiber post, and EverX Flow was light cured for 40 s.Then, 2-mm EverX Flow core was built up and light cured for 20 s.
Group IV: Fiber post and Grandio Core.Post and core buildup were performed as described in group III except Grandio Core was used.
Final restoration and thermocycling
The coronal restoration of incisors was finalized to 4-mm height using G-aenial Anterior composite (GC Group, Tokyo, Japan), lightcured for 20 s, and finally finished with Sof-Lex discs (3 M, MN, USA).Then, each incisor was vertically inserted up to 1 mm below CEJ in acrylic resin block, and all specimens were thermocycled for 1000 cycle between 5 • -55 • C and 30 s dwell time.
Fracture resistance test and failure type
After that, the restored incisors were subjected to loading forces at 148 • to their long axis and 0.5 mm/min cross-head speed using Instron machine (model 8500, Instron Co, USA) with 2-mm diameter metallic rod.The rod tip was applied to mid -palatal surface of restoration and force-inducing fracture was recorded in Newton.All specimens were examined visually for failure type that was categorized into: type 1; partial fracture of coronal restoration but intact post, type 2; coronal fracture of post and restoration, type 3: post-debonding with restoration, and type 4; root fracture (Pamato et al., 2023).
Statistical analysis
The data were analyzed statistically using the SPSS software program, version 25 (SPSS for Windows, Chicago, USA). One-way ANOVA and Tukey's post hoc test were used to compare the mean fracture resistance values of the four groups at P-value ≤ 0.05.
The fracture resistance results (Table 2) revealed that the fracture resistance of the Fiber post and EverX Flow group was significantly higher than those of the EverX Flow, Grandio Core, and Fiber post and Grandio Core groups (P = 0.021, 0.000, and 0.002, respectively). However, the fracture resistance of the EverX Flow group was not significantly different from that of the Fiber post and Grandio Core group (P = 0.811).
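For readers who wish to reproduce this type of analysis, the sketch below shows the same workflow (one-way ANOVA followed by Tukey's HSD) in Python. The force values are invented placeholders, not the study's measurements, and serve only to illustrate the pipeline.

```python
# Sketch of the statistical workflow: one-way ANOVA + Tukey HSD post hoc test.
# The force values (in Newtons) below are made-up placeholders, NOT the study's data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "EverX Flow":                [310, 295, 330, 342, 288, 305, 318, 299, 325, 311, 307, 320],
    "Grandio Core":              [250, 262, 241, 255, 270, 248, 239, 266, 258, 245, 252, 260],
    "Fiber post + EverX Flow":   [380, 395, 372, 388, 401, 376, 390, 384, 397, 379, 386, 392],
    "Fiber post + Grandio Core": [300, 288, 312, 295, 305, 290, 298, 308, 285, 302, 296, 310],
}

# Global test: does mean fracture resistance differ among the four groups?
f_stat, p_value = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise comparisons with Tukey's HSD at alpha = 0.05.
forces = np.concatenate([np.array(v, dtype=float) for v in groups.values()])
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(endog=forces, groups=labels, alpha=0.05))
```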
Failure types
Table 3 summarizes the frequency and percentage of failure types among the four groups. Partial fracture of the coronal restoration was predominantly observed in the EverX Flow group and the Fiber post and EverX Flow group (66.6 % and 83.3 %, respectively), as shown in Fig. 1 A and B. While coronal fracture of the post with the restoration was 75 % in the Grandio Core group (Fig. 1 C), post-debonding was 58.3 % in the Fiber post and Grandio Core group (Fig. 1 D).
Discussion
Rehabilitation of severely destructed primary incisors is a clinical challenge for pediatric dentists, and the limited residual tooth structure necessitates a post and core (Mehra et al., 2016). Although restoration of anterior permanent teeth with fiber posts increased their fracture resistance, their fracture rate is three times higher than that of posterior teeth due to the greater horizontal forces (Garcia et al., 2019; Jurema et al., 2022). The application of EverX Flow inside the root canal of primary incisors has not yet been studied in the literature, so it might be essential to evaluate the effects of EverX Flow on the fracture resistance of primary incisors restored with a post and core system.
In this study, an attempt was made to rehabilitate severely destructed primary incisors with restorations that mimic lost dentin-enamel structure (Uctasli et al., 2021;Singer et al., 2023).EverX Flow contains random microscale glass fibers embedded within resin matrix which could mimic dentin collagen fibers within hydroxyapatite matrix (Lassila et al., 2020b).Besides close fracture toughness of EverX Flow to that of dentin, its protruding fibers at its interface with PFC layer could mimic dentin-enamel junction (Fráter et al., 2021c;Lassila et al., 2020b).Moreover, SFRCs remain attached after fracture and preserve their strength even after repair more than other bulk-fill composite (Alshabib et al., 2022).
In this study, the post depth was standardized to 3 mm and separated from the Metapex with a 1-mm base to avoid interaction with the composite materials (Ravikumar et al., 2017). A fiber post that could fit the coronal third of the root canal without the use of post drills was selected to preserve radicular dentin, as this could affect the fracture resistance of the incisors (Fráter et al., 2021a). In addition, the fiber post surface was conditioned for 15 s, as this enhances its adhesion without damaging its integrity (Jacob et al., 2021).
The rationale behind placing a PFC final restoration instead of a crown was that a crown could mask the fracture resistance of the post and core foundation (Uctasli et al., 2021). The loading force was applied to the mid-palatal surface at an oblique angle (148°) to the long axis of the restored incisors to simulate normal incisal force (Alamdari et al., 2023). Applying the force at this angle also represents a worst-case scenario for fracture resistance, as it places heavy stress on the coronal portion of the incisor as well as on the post-root canal interface (Fráter et al., 2021b).
The null hypothesis of this study was accepted, as the fracture resistance results showed statistically significant differences between the Fiber post and EverX Flow group and the other tested groups. However, the fracture resistance of the EverX Flow group did not differ significantly from that of the Fiber post and Grandio Core group, which could indicate that the biomechanical behavior of a fiber post with Grandio Core is similar to that of the microglass fibers within the resin matrix of EverX Flow (Fráter et al., 2021a; Säilynoja et al., 2021). Regarding failure type, partial fracture of the coronal restoration predominated in the EverX Flow groups, especially the Fiber post and EverX Flow group, and this fracture can simply be re-restored with the same composite material without additional cost (Alshabib et al., 2022; Fráter et al., 2021c). On the other hand, complete fracture of the coronal restoration or post-debonding predominated in the Grandio Core groups (Doshi et al., 2019). This failure can also be re-restored, but it increases the cost for the patient and the dentist (Uctasli et al., 2021).
No previous studies in the existing literature have evaluated the effect of EverX Flow on the fracture resistance of primary incisors restored with a post and core system. However, the outcomes of this study are partly supported by the studies of Fráter et al. (2021a), Lassila et al. (2020a) and Suni et al. (2023), in which the application of EverX Flow as a core or post-luting material improved the fracture resistance of restored incisors.
In line with this study, Alamdari et al. (2023) revealed that a Fiber post and EverX Flow core increased the fracture resistance of restored incisors more than Bulk fill composite post and core, EverX Flow post and core, Fiber post and Bulk fill composite core, and Conventional composite post and core. Also, Uctasli et al. (2021) assessed the fracture resistance of anterior permanent teeth restored with different post and core systems using either Grandio Core or EverX Flow as post-luting materials; their results revealed that EverX Flow increased the fracture resistance of incisors compared to Grandio Core. On the other hand, the results of this study were inconsistent with those of Garoushi et al. (2009) and Bijelic et al. (2013), who reported that incisors restored with an SFRC post and core showed an increase in fracture resistance compared to a fiber post with a PFC core. However, the different restorative materials, tooth specimens, post types, and loading forces used in this study could explain this discrepancy.
The favorable outcomes of EverX Flow in this study could be explained from several perspectives. Firstly, the glass fibers in EverX Flow provided multidirectional isotropic reinforcement of the restoration, which re-directed crack propagation toward the restoration periphery (Lassila et al., 2020a). Secondly, conditioning of the post surface provided a micromechanical bond between the post and EverX Flow (Fernandes et al., 2021). Thirdly, the effective light transmission of the fiber post ensured optimal polymerization of EverX Flow inside the canal (Fráter et al., 2021c). Finally, mechanical interlocking of the protruding fibers of the EverX Flow core with the PFC veneer allowed uniform stress distribution along the restoration without detrimental effects (Doshi et al., 2019).
Conclusions
1. Incisors restored with the Fiber post and EverX Flow system showed promising results regarding fracture resistance.
2. EverX Flow post and core improved restoration resistance more than Grandio Core.
3. Incisors restored with a Fiber post and Grandio Core showed fracture resistance comparable to that of EverX Flow alone.
4. Incisors restored with EverX Flow, with or without a fiber post, showed the least severe type of fracture.
Fig. 1.
Schematic diagram illustrating the predominant type of failure among the four tested groups. A. Partial fracture of the coronal restoration with an intact post in the EverX Flow group. B. Partial fracture of the coronal restoration with an intact post in the Fiber post and EverX Flow group. C. Coronal fracture of the post and restoration in the Grandio Core group. D. Fiber post-debonding with the restoration in the Fiber post and Grandio Core group.
Table 1
Mean and standard deviations (SD) of fracture resistance values in Newtons (N) for the four tested groups.
* Level of significance was set at P-value ≤ 0.05.
Table 2
Pairwise comparison between mean and standard deviations (SD) of fracture resistance values for four tested groups.
* Level of significance was set at P-value ≤ 0.05.
Table 3
Number (N) and Percentage (%) of failure types among the four tested groups.
Water Molecular Dynamics in the Porous Structures of Ultrafiltration/Nanofiltration Asymmetric Cellulose Acetate–Silica Membranes
This study presents the characterization of water dynamics in cellulose acetate–silica asymmetric membranes with very different pore structures that are associated with a wide range of selective transport properties of ultrafiltration (UF) and nanofiltration (NF). By combining 1H NMR spectroscopy, diffusometry and relaxometry and considering that the spin–lattice relaxation rate of the studied systems is mainly determined by translational diffusion, individual rotations and rotations mediated by translational displacements, it was possible to assess the influence of the porous matrix’s confinement on the degree of water ordering and dynamics and to correlate this with UF/NF permeation characteristics. In fact, the less permeable membranes, CA/SiO2-22, characterized by smaller pores induce significant orientational order to the water molecules close to/interacting with the membrane matrix’s interface. Conversely, the model fitting analysis of the relaxometry results obtained for the more permeable sets of membranes, CA/SiO2-30 and CA/SiO2-34, did not evidence surface-induced orientational order, which might be explained by the reduced surface-to-volume ratio of the pores and consequent loss of sensitivity to the signal of surface-bound water. Comparing the findings with those of previous studies, it is clear that the fraction of more confined water molecules in the CA/SiO2-22-G20, CA/SiO2-30-G20 and CA/SiO2-34-G20 membranes of 0.83, 0.24 and 0.35, respectively, is in agreement with the obtained diffusion coefficients as well as with the pore sizes and hydraulic permeabilities of 3.5, 38 and 81 kg h−1 m−2 bar−1, respectively, reported in the literature. It was also possible to conclude that the post-treatment of the membranes with Triton X-100 surfactants produced no significant structural changes but increased the hydrophobic character of the surface, leading to higher diffusion coefficients, especially for systems associated with average smaller pore dimensions. Altogether, these findings evidence the potential of combining complementary NMR techniques to indirectly study hydrated asymmetric porous media, assess the influence of drying post-treatments on hybrid CA/SiO2 membrane’ surface characteristics and discriminate between ultra- and nano-filtration membrane systems.
Introduction
It is established that the structure and dynamic properties of pore-confined molecules are greatly affected by the morphology of porous media [1][2][3]. In membrane separation, the state of water within a membrane's three-dimensional porous network plays a role in elucidating the mechanisms of its selective mass transfer. Accordingly, the separation performance of a membrane can be gauged by the interplay of factors such as the pore size, electrical charge, and the hydrophilic/hydrophobic characteristics of the membrane polymeric or hybrid matrix and of the solutes [4,5]. Therefore, the membranes' porous structure and the state of water within the porous matrix are crucial to understanding the mechanisms of membrane selective transport.
The determination of the accurate morphological features of porous media still represents a challenge as many properties depend not only on the void size's distribution but also on their connectivity and liquid-surface interactions [6]. Although there is a vast amount of scientific literature focused on microscopic and spectroscopic characterisation for elucidating the mechanisms of membrane selective transport in the active layer structures of integrally skinned cellulose acetate (CA) or cellulose esters membranes [4,[7][8][9][10][11][12], this subject is more complex in the study of hybrid CA and silica, CA/SiO 2 , asymmetric membranes constituting the system of this work [13,14]. Previous studies by de Pinho et al. [15,16] on the characterisation of the water order and dynamics in asymmetric CA/SiO 2 hybrid membranes, covering a wide range of ultrafiltration (UF) and nanofiltration (NF) permeation properties, pointed to an essential indication that Nuclear Magnetic Resonance (NMR) relaxometry observables, which are strongly dependent on water-surface interactions due to confinement, can be reliably correlated with the membranes' asymmetric porous structures and selective permeation performance.
Nuclear Magnetic Resonance (NMR) relaxometry is a widely used experimental technique that enables the study of a large variety of chemical compounds, such as liquid crystals, polymers, ionic liquids and complex food systems, just to name a few [17][18][19][20]. The 1 H NMR longitudinal relaxation rate dispersion (R 1 as a function of the 1 H Larmor frequency) is sensitive to molecular motions occurring at timescales ranging from milli- to picoseconds, from slower collective motions in liquid crystalline phases to fast molecular rotations. 1 H NMR relaxometry is especially sensitive to the existence of some degree of confinement, enabling an indirect study of a confining matrix by introducing a well-known liquid, usually water, into its structure. Relaxation-inducing interactions of the probing liquid with the surrounding surfaces, often referred to as rotations mediated by translational displacements, enable the characterization of a given matrix in terms of the effective mean square displacement of the liquid molecules confined in the porous system, as well as of the degree of order induced by these interactions [21][22][23][24].
In the present work, the main objective is to probe the water molecular dynamics within the porous structure of asymmetric CA/SiO 2 hybrid membranes over a wide range of UF and NF permeation properties by 1 H NMR relaxometry as a means to assess the effect of the drying post-treatments on the membranes' asymmetric structure modification.
Membrane Preparation and Characterization
A series of flat asymmetric CA/SiO 2 hybrid membranes were made in a laboratory by coupling the wet phase inversion [25] with sol-gel techniques [26]. The synthesis methodology is described by de Pinho et al. [13]. Membranes were made from casting solutions containing 16.4 wt.% cellulose acetate (CA) polymer (≈30,000 average molecular weight), supplied by Sigma-Aldrich (Steinheim, Germany), a SiO 2 content equal to 5 wt.%, and three different solvent system ratios of formamide (enhancing pore-forming agent) and acetone. The acid catalysed hydrolysis of the SiO 2 alkoxide sol-gel precursor was promoted in situ by adding deionised water, tetraethyl orthosilicate (TEOS), supplied by Sigma-Aldrich (Steinheim, Germany), and nitric acid to the polymer casting solution. All chemicals were of reagent grade and 65% nitric acid was of technical grade. Membrane films were cast with the aid of a 250 µm calibrated doctor blade, followed by evaporation for 30 s before coagulation in an ice-cold deionised water bath. Table 1 shows the membranes' casting solution compositions and film-casting conditions used in the preparation of three membranes with distinct UF porous structures, labelled as CA/SiO 2 -22, CA/SiO 2 -30 and CA/SiO 2 -34. In these membrane labels, the second field is represented by numbers 22, 30 and 34, which correspond to the formamide contents of 21.3%, 29% and 32.9% (wt.%), respectively, in the casting solutions. Following preparation, the asymmetric CA/SiO 2 hybrid membranes were conditioned in surfactant mixtures by a procedure adapted from Vos et al. [27]. This treatment was carried out using aqueous solutions of non-ionic surface-active agents composed of glycerol, supplied by PanReac (Darmstadt, Germany), and/or triton X-100, supplied by VWR (Briare, France). In that regard, membrane films were immersed for 15 min in one of the following solutions: (a) an aqueous solution of glycerol 20 vol.% (G20) or (b) an aqueous solution of triton X-100 4 vol.% and glycerol 20 vol.% (GT). All chemicals used in the treatments were of reagent grade and the conductivity of the deionised water was lower than 10 µS cm −1 . For NMR sample preparation, to access the water behavior within the membranes' porous matrices, the membrane films were immersed in deionised water for 48 h. Excess surface water was gently removed before enclosing a roll of hydrated membrane film in a sealed 5 mm outer diameter NMR tube. The membranes are identified throughout this work by a three-field code: the first code refers to the membrane hybrid matrix (CA/SiO 2 ), followed by a second field relative to the formamide content (in wt.%) in the casting solutions (of 22, 30 and 34), and the third corresponds to the drying membrane post-treatment of G20 or GT.
The membranes were characterised in terms of pure water hydraulic permeability (L p ) and a molecular weight cut-off (MWCO) referring to the molecular weight of the solute that is 95 % retained by the membrane. Details on the characterisation of the membranes studied are described in da Silva et al. [15].
Methods
1 H NMR Spectroscopy: The series of spectra obtained from the high resolution 1 H NMR relaxometry experiments performed at 7T was analyzed in order to extract the number of Lorentzian components and their respective longitudinal relaxation rates and signal contribution.
1 H NMR Diffusometry: At controlled temperatures and using a probe head with field gradient coils (Bruker Diff 30, Billerica, MA, USA) and a Bruker 7T superconductor connected to a Bruker Avance III NMR console, it was possible to measure the self-diffusion coefficient, D, of water molecules entrapped in the membrane matrix. The applied Pulsed Gradient Stimulated Echo (PGSE) sequence produces an attenuation of the signal intensity for increasing magnetic field gradient strengths, expressed by Equation (1), where γ 1H is the proton gyromagnetic ratio, g is the gradient strength, δ is the length of the gradient pulses and ∆ is the delay between the pulsed gradients. Expression (1) does not take into account that the water molecules are confined, which means that the obtained diffusion coefficients should be viewed as apparent values whose order of magnitude is well estimated. More exact estimation of the diffusion coefficients would require robust models that take the experimental conditions, namely the magnetic field gradient pulse durations, into account, which, as far as the authors know, have not yet been developed. In the case of the studied systems, except for pure water, multi-exponential decays were observed, which led to the addition of the corresponding number of components to Equation (1).
1 H NMR Relaxometry: The longitudinal relaxation rate, R 1 , was measured across a broad frequency range at controlled temperatures. For 1 H Larmor frequencies between 10 kHz and 9 MHz, the measurements were made using a home-developed Fast Field Cycling (FFC) relaxometer [28]. For the remaining frequencies, the conventional inversion recovery technique was applied using the Bruker Avance II console paired with a variable field iron-core Bruker BE-30 electromagnet (10-100 MHz) or with a Bruker Widebore 7T superconductor magnet for the measurements at 300 MHz.
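For reference, the attenuation produced by a pulsed gradient (stimulated) echo experiment under free, Gaussian diffusion is commonly written in the Stejskal-Tanner form; using the variables defined above, this standard expression reads

\[
\frac{I(g)}{I(0)} \;=\; \exp\!\left[-\,\gamma_{1\mathrm{H}}^{2}\, g^{2}\, \delta^{2}\left(\Delta-\frac{\delta}{3}\right) D\right],
\]

and, for a sample containing several water populations, the observed attenuation becomes a population-weighted sum of such terms, one per diffusion coefficient, which corresponds to the multi-component extension of Equation (1) referred to in the text.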
Table 2 shows the hydraulic permeability, L p , and the molecular weight cut-off, MWCO, of the asymmetric CA/SiO 2 hybrid membranes.
Table 2. Characteristics of the asymmetric CA/SiO 2 hybrid membranes [15] (columns: Membrane; Hydraulic Permeability, L p ; Molecular Weight Cut-Off, MWCO; values not reproduced here).
As can be observed by looking at the hydraulic permeabilities previously obtained by de Pinho et al. [15] for the membrane systems studied in the present work, the CA/SiO 2 -30 and CA/SiO 2 -34 membranes present marked ultrafiltration characteristics, whereas the CA/SiO 2 -22 membrane has a hydraulic permeability that is one order of magnitude lower, thus standing at the border between nano- and ultrafiltration.
1 H NMR Spectroscopy
Generally, the results from relaxometry experiments are obtained by integrating over the entire 1 H NMR spectrum and fitting the varying amplitudes, proportional to the magnetization along the fixed external magnetic field, to Equation (2). In Figure 1, the model fitting results following spectral integration are exemplified.
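As a point of reference, and assuming that Equation (2) has the usual form of a longitudinal relaxation law, the fitted amplitudes for each spectral component would follow a single-exponential recovery of the type

\[
M_z(t) \;=\; M_{\mathrm{eq}} + \left[M_z(0)-M_{\mathrm{eq}}\right] e^{-R_1 t},
\]

with one such term, weighted by its population fraction, for each resolved water component.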
In the case of the present work, the high resolution spectrum, obtained at a 7T external magnetic field, was divided into the minimum number of Lorentzian components for which it was possible to determine the longitudinal relaxation rate and the fraction of the population corresponding to each contribution. The obtained results are presented in Appendix A.1. For the majority of the studied systems, two components are observed. In these cases, the fraction of more confined water molecules, q, relates to the component with the shortest relaxation time. For samples CA/SiO 2 -22 G20 and CA/SiO 2 -34 G20, four and three contributions were, respectively, detected. In the case of sample CA/SiO 2 -22 G20, one of the contributions was immediately disregarded in view of its extremely small T 1 (0.038 s), which would make it undetectable at lower frequencies.
The other contribution having the shortest longitudinal relaxation time represented a very small percentage of the signal (3%) and was therefore also not considered. For the CA/SiO 2 -34 G20 system, we simply considered the contribution with the shortest relaxation time to be that of water molecules in a more confined environment. The fractions of more confined water considered for each membrane are listed in Table 3.
Table 3. Considered fraction of more confined water molecules applied in the 1 H NMR relaxometry analysis.
As can be immediately concluded from the results presented in Table 3, the CA/SiO 2 -22 G20 membrane is dramatically different from the CA/SiO 2 -30 G20 and CA/SiO 2 -34 G20 systems in terms of the more confined population fraction or, in other words, the surface-to-volume ratio is much larger for CA/SiO 2 -22 G20. This result is consistent with the smaller pores observed for the CA/SiO 2 -22 membranes and the consequent lower hydraulic permeability of this system (see Table 2). The post-treatment with triton X-100 (GT) appears to have uniformized the confined population ratio for the three membrane compositions.
Figure 2 shows the model fitting analyses made for each of the studied hydrated membranes, and Table 4 presents the obtained diffusion coefficients. The model fitting to the diffusometry and relaxometry data was performed using the open access online platform fitteia® at fitteia.org (accessed on 1 September 2022), which applies the non-linear least squares minimization method with a global minimum target provided by the MINUIT numerical routine from the CERN library [29,30]. As can be observed, all hydrated membranes present at least two diffusion coefficients, which can be associated with water molecules experiencing different degrees of confinement. For the CA/SiO 2 -30 (G20 and GT) and CA/SiO 2 -34 (G20 and GT) systems, three diffusion components were observed; in all these cases, the third, residual component can only be observed on a logarithmic scale. In the case of CA/SiO 2 -22 (G20 and GT), the slowest component is probably not observable due to its smaller value, which may fall outside the measurable range of this technique. From Table 4, it is possible to conclude that the CA/SiO 2 -22 systems present much smaller diffusion coefficients than the CA/SiO 2 -30 and CA/SiO 2 -34 systems, which is expected in view of their smaller pores. Membranes CA/SiO 2 -30 and CA/SiO 2 -34 seem to be harder to distinguish in terms of the diffusion coefficient, possibly because their higher permeability increases the relative amount of less confined water. The slower and intermediate diffusion coefficients, D slow and D int , respectively, seem to be smaller for the CA/SiO 2 -30 systems, which is consistent with the smaller pore sizes observed for these membranes [15]. However, the faster diffusion component is larger for membranes CA/SiO 2 -30 than for membranes CA/SiO 2 -34, which might be a consequence of the pore size distribution in membranes CA/SiO 2 -30 varying across a broader range of characteristic lengths. The fact that previous SEM studies have shown a wide distribution of pore sizes in these membranes makes it difficult to compare the 1 H NMR diffusometry results obtained for membranes CA/SiO 2 -30 and CA/SiO 2 -34 [15]. Nevertheless, the CA/SiO 2 -22 systems are markedly less permeable and lead to much smaller diffusion coefficients, rendering the comparison between this system and the CA/SiO 2 -30 and CA/SiO 2 -34 systems meaningful.
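A minimal Python sketch of the kind of multi-component analysis described above is given below; it fits a two-component Stejskal-Tanner attenuation with scipy, and the gradient-encoding (b) values and signal amplitudes are synthetic placeholders rather than the measured decays.

import numpy as np
from scipy.optimize import curve_fit

def biexp(b, f_fast, d_fast, d_slow):
    # Two-component attenuation: S/S0 = f*exp(-b*D_fast) + (1-f)*exp(-b*D_slow),
    # with b = (gamma*g*delta)^2 * (Delta - delta/3).
    return f_fast * np.exp(-b * d_fast) + (1.0 - f_fast) * np.exp(-b * d_slow)

# Synthetic, noiseless example data (placeholders).
b = np.logspace(8, 11, 25)                     # s/m^2
signal = biexp(b, 0.7, 1.0e-9, 1.0e-10)

popt, pcov = curve_fit(
    biexp, b, signal,
    p0=[0.5, 5.0e-10, 5.0e-11],                # initial guesses
    bounds=([0.0, 1e-12, 1e-13], [1.0, 1e-8, 1e-9]),
)
f_fast, d_fast, d_slow = popt
print(f"fast fraction = {f_fast:.2f}, D_fast = {d_fast:.2e} m^2/s, D_slow = {d_slow:.2e} m^2/s")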
Raw Data and Theoretical Models
In Figure 3, the 1 H NMR relaxometry profiles obtained for the membranes studied in the present work are presented. In order to enable a comparison between the profiles of membranes that were subjected to a different drying process, the results previously obtained by de Pinho et al. [16] for membranes dried using the solvent exchange procedure were also added to the figure. As can be immediately concluded from the longitudinal relaxation profiles displayed in Figure 3, for the systems studied in the present work (black circles, G20, and blue squares, GT), the CA/SiO 2 -30 and CA/SiO 2 -34 membranes present rather similar relaxometry profiles, while the CA/SiO 2 -22 membranes present significantly different results.
It is also possible to see that post-treatment with triton X-100 leads to very small differences for CA/SiO 2 -30 and CA/SiO 2 -34 membranes, while it produces a significant R 1 decrease across the lower frequency range in the case of membranes CA/SiO 2 -22.
Furthermore, comparing the results obtained in the present work with those for the solvent-exchange (SE) dried membranes (red diamonds), it is possible to observe a significant difference for the CA/SiO 2 -34 membranes, while the CA/SiO 2 -22 systems seem almost insensitive to the drying process. This result may be explained by the fact that more permeable membranes, such as CA/SiO 2 -34, are bound to be more affected by the drying process than systems with smaller pores. In fact, the hydraulic permeabilities found for these systems, reported in previous studies, also show that the permeability of the CA/SiO 2 -22 membrane is almost unaffected by post-treatment drying processes, while the permeabilities obtained for the CA/SiO 2 -30 and CA/SiO 2 -34 systems vary over a wider range of values, especially when comparing the SE drying process to the G20 and GT post-treatments [15].
The curves presented in Figure 3, representing the longitudinal relaxation rate, R 1 , obtained at different magnetic fields (or 1 H Larmor frequencies) and called NMR dispersion (NMRD) curves, encode information on the molecular dynamics of the systems under analysis. In the present work, it was considered that the water entrapped in the membranes' pores may relax as a result of rotational and translational diffusion and of rotations mediated by translational displacements, the latter being driven by the interactions of the water molecules with the porous matrix. Furthermore, assuming that these mechanisms are effective at different time scales and are thus independent of one another, the total relaxation rate may be written as the sum of the individual rates (Equation (3)), where q is the fraction of water molecules interacting with the pore walls, determined from the analysis of the spectral components of the signals obtained at a 1 H Larmor frequency of 300 MHz. The three contributions are described below (a sketch of the standard BPP and RMTD functional forms is given after this list):
• Rotational diffusion (Rot): The model by Bloembergen, Purcell and Pound, better known as the BPP model, was applied in order to describe the rotations of water molecules in the membranes [31,32]. The contribution of this mechanism to the NMR dispersion curves of the water 1 H spins is given by Equation (4). The prefactor A Rot depends on the effective intramolecular distance between 1 H nuclear spins, r eff (1.58 Å in the case of the water molecule), via Expression (5), with µ 0 denoting the vacuum magnetic permeability (4π × 10^-7 H/m), γ I the gyromagnetic ratio of the nucleus with spin I, and ħ = h/(2π) the reduced Planck constant (1.0545718 × 10^-34 m^2 kg/s). Given that A Rot can be estimated and fixed, the only parameter in Equation (4) that needs to be determined via the model-fitting analysis is the rotational correlation time, τ Rot .
• Translational Diffusion (SD): Self-diffusion of water molecules may be accounted for using the Torrey model [33,34]. Torrey assumed that molecules have equal probabilities of jumping in any direction from an initial state into another, reaching a random jump-like solution. The associated longitudinal relaxation rate frequency dependence is described by Equation (6).
Parameter n is the 1 H spin density, and d is the average intermolecular interspin distance. τ D , the translational diffusion correlation time, < r 2 >, the mean square jump distance, and the diffusion coefficient, D, are related by the following equation.
The functions j (i) (ω, τ D , d, r, n) are the spectral density functions described in references [33,34].
• Rotations mediated by translational displacements (RMTD): The water motion in the confined system gives rise to a relaxation mechanism associated with rotations mediated by translational displacements. This model describes the movement of water molecules near the pore walls and is therefore related to the interaction of those molecules with the membranes' surfaces. The contribution of this mechanism to the longitudinal relaxation rate is given in references [35,36]. This contribution exhibits one high cut-off frequency, ν max , and one low cut-off frequency, ν min , which are associated, respectively, with the largest and smallest possible translational relaxation modes and, therefore, with the smallest and largest possible average displacements: 1/ν max = π l min ²/(2D) and 1/ν min = π l max ²/(2D), where D is the diffusion coefficient and l is the average displacement. The exponent p can vary between 0.5 and 1, where p = 0.5 corresponds to an isotropic distribution of coupled rotations and self-diffusion motions along the pore/channel surfaces, while for p = 1 there is a preferential orientation of the rotation/translation relaxation modes along the constraining surfaces. The parameter A RMTD is inversely proportional to the square root of the diffusion coefficient and to the range of wave numbers related to the motional modes induced by the surface, ∆q, and it is proportional to the square of the fraction of molecules interacting with the surface and to the square of the order parameter, which represents the long-time-limit residual correlation of the restricted tumbling.
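As a sketch of the standard functional forms usually associated with these contributions (the exact prefactor conventions used in the fitteia analysis may differ, so the expressions below should be read as indicative), the BPP rotational term and the power-law regime of the RMTD term can be written as

\[
R_1^{\mathrm{Rot}}(\omega) = A_{\mathrm{Rot}}\left[\frac{\tau_{\mathrm{Rot}}}{1+\omega^{2}\tau_{\mathrm{Rot}}^{2}} + \frac{4\,\tau_{\mathrm{Rot}}}{1+4\,\omega^{2}\tau_{\mathrm{Rot}}^{2}}\right],
\qquad
R_1^{\mathrm{RMTD}}(\omega) \propto A_{\mathrm{RMTD}}\,\omega^{-p}
\quad (2\pi\nu_{\mathrm{min}} \lesssim \omega \lesssim 2\pi\nu_{\mathrm{max}}),
\]

with the RMTD contribution levelling off below ν min and becoming negligible above ν max.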
Model Fitting
In Figures 4 and 5, the model fitting results produced by the Master module of the online platform fitteia® [37] are presented. Figure 4 shows the results obtained for pure water and for the CA/SiO 2 -22 G20 and GT hydrated membranes, while the model fitting analysis of the CA/SiO 2 -30 and CA/SiO 2 -34 systems is presented in Figure 5. The model fitting parameters resulting from the analysis of the NMRD curves are summarized in Table 5.
Table 5. Parameters obtained from the NMRD model fitting analysis made on the studied membranes.
The model fitting was performed considering an uncertainty of the relaxation rate equal to 10% of its value. The 1 H spin density, n, needed for the Torrey model for translational self-diffusion was fixed to the calculated value of 6.69 × 10^28 1 H nuclear spins per cubic meter. Parameters D SD and D RMTD were fixed to the D fast and D int diffusion coefficients presented in Table 4, respectively. In the case of pure water, the diffusion coefficient was fixed to that presented in Figure 4. The fraction q, representing the more confined water, was also not a free parameter, and its value was set to that presented in Table 3 for each studied hydrated membrane. The parameter A Rot was also calculated and fixed, as explained in the description of the rotational model. The model proposed in this work and the combination of the 1 H NMR relaxometry and diffusometry experimental techniques allowed for a consistent analysis of all the studied hydrated membranes, as can be concluded from the good quality of the fits.
In Figure 4, it is possible to observe the striking NMRD profile difference when comparing pure water with confined water. Water molecules entrapped in the matrix have the additional RMTD relaxation pathway, which significantly increases the longitudinal relaxation rate. Moreover, confined water presented diffusion coefficients that are up to three orders of magnitude smaller than that measured for free water (see inserted image in Figure 4a and Table 5).
The parameter q was fixed to the value obtained from the analysis of the spectral components. D SD and D RMTD were set equal to the values of D fast and D int obtained from the diffusometry analysis and presented in Table 4, respectively. D SD corresponds to a less confined fraction of water that does not interact directly with the matrix, while D RMTD corresponds to a more confined fraction of water that relaxes as a result of interactions with the surface.
Despite the apparent similarities between the relaxometry profiles obtained for the G20 and GT versions of membranes CA/SiO 2 -30 and CA/SiO 2 -34, the model-fitting analysis evidences a decrease in the self-diffusion relaxation rate contribution for the membranes that were post-treated with triton X-100 (compare the dashed red line of the subfigures with that of the respective inserted image in Figure 5). This contribution decrease is more significant for the CA/SiO 2 -22 membranes, as observed in Figure 4. This observation is consistent with triton X-100 increasing the hydrophobicity of the cellulose acetate matrix, making the water less bound to it and leading to higher diffusion coefficients (see D SD and D RMTD in Table 5 or, respectively, D fast and D int in Table 4). The fact that this increase is more significant for the CA/SiO 2 -22 hydrated membranes may be explained by the fact that smaller pore sizes relate to a larger ratio of water/surface interactions. Furthermore, the increased hydrophobicity suggested from this model-fitting analysis might explain the uniformization of the bound water fraction, q, found for the GT porous membranes (see Table 3).
This analysis enabled the estimation of the characteristic pore size given by the parameter l max , that, on average, induces more effective 1 H NMR relaxation through rotations mediated by translational displacements. As it can be observed, the additional treatment with triton X-100 does not significantly affect this dimension, except in the case of CA/SiO 2 -34 systems. Combining the previously described increased hydrophobicity with the fact that this membrane is the most permeable, it is possible that the signal from more bound water molecules is masked by the signal of unbound water, leading to an apparently larger characteristic dimension.
Regarding the fact that A RMTD is inversely proportional to the square root of the diffusion coefficient, the values obtained for this parameter seem to be consistent for all the samples and further support the increased hydrophobicity conferred upon treating the matrix with triton X-100 (GT).
Parameter p shows that there is an isotropic distribution of coupled rotations and self-diffusion motions along the matrix' pores for all systems, except for the CA/SiO 2 -22 pair, where some degree of anisotropy is detected. The fact that CA/SiO 2 -22 membranes have smaller pores is expected to increase the degree of confinement, thus evidencing water-ordering induced by the surface.
Conclusions
In this study, 1 H NMR spectroscopy, diffusometry and relaxometry were successfully combined in order to consistently analyze three pairs of hydrated ultrafiltration/nanofiltration asymmetric cellulose acetate-silica membranes. Each CA/SiO 2 -22, CA/SiO 2 -30 and CA/SiO 2 -34 pair of membranes was composed of one membrane in which the post-treatment involved an aqueous solution of glycerol with 4 vol.% of triton X-100 (GT) and another where triton was not involved in the post-treatment (G20).
The results seem to be consistent with the post-treatment with triton X-100 rendering the matrix surfaces more hydrophobic and increasing the self-diffusion coefficients obtained for water molecules in different confinement environments. This impact is more significant when the characteristic pore sizes are smaller given the increased probability of water/matrix interactions.
Comparing the results obtained in the present work with those related to membranes dried using the solvent-exchange procedure, presented in previous studies, it becomes clear that the drying process has a much less pronounced impact on the cellulose acetate-silica matrix when the pores are characterized by smaller dimensions.
The surface-bound water population variation observed between the CA/SiO 2 -22 G20 and the two analogous membrane systems is in line not only with the diffusion coefficients obtained in the present work but also with the hydraulic permeabilities reported in previous studies.
On the whole, this work evidences the advantage of combining complementary experimental techniques with a relatively simple relaxation model to study and differentiate between ultrafiltration/nanofiltration porous media and track their sensitivity to different post-treatments/drying processes.
Student teachers ’ understanding and acceptance of evolution and the nature of science
The focus of this study was student teachers at a South African university enrolled in a Bachelor of Education (B.Ed.) programme and a Postgraduate Certificate in Education (PGCE), respectively. The purpose of this study was to explore students’ understanding and acceptance of evolution and beliefs about the nature of science (NOS), and to discover if these understandings and acceptances changed with the level of their studies. In so doing, we wished to determine if there is a relationship between their understanding of evolution and the NOS, and their level of acceptance of evolution. The study is located within a quantitative framework. Questionnaires were administered to pre-service teachers, who were enrolled in the School of Education. All participants had chosen Biology as their teaching specialisation. Three instruments were included in the questionnaires. The findings revealed that students in the B.Ed. programme have a poorer understanding of evolution and NOS than the graduate group (PGCE), and that there is no significant difference in understanding between different levels within the B.Ed. group. A further significant finding was that acceptance of evolution is independent of changes in conceptual understanding of evolution and independent of changes in beliefs about the NOS.
Introduction and Background
It is widely believed that the process of evolution is fundamental to an understanding of Biology, as it allows scientists to understand both past and present observations within an explanatory framework. Evolution has acquired the status of a scientific theory because a convincing body of evidence has accumulated to support it. However, it is important to understand that all scientific knowledge is tentative, and theories may be modified as new evidence emerges. Hence the need to understand the principles underpinning the NOS as well.
Prior to the implementation of the National Curriculum Statement in 2006, evolution was not included in the South African school curriculum. With the introduction of this curriculum, evolution and the NOS were subsequently included. However, many teachers in other African and Asian countries, as well as in the United States of America (USA) (Clément, 2013; Lovely & Kondrick, 2008; Mpeta, De Villiers & Fraser, 2014; Trani, 2004), have experienced problems teaching evolution. These problems include a lack of understanding of the concept, and consequently of the ability to teach the topic competently, and problems with regard to the acceptance of evolution as the main organising principle in Biology, because many people believe it contradicts their religious beliefs. This study is concerned with the former problem, as South African learners did not have the opportunity prior to 2006 to learn about evolution. Consequently, students entering teacher education programmes have been educated by teachers who themselves often did not have an adequate understanding of evolution or the NOS. The onus is therefore on higher education institutions to provide the necessary foundation for students to become competent teachers of Biology.
As a developing country, South Africa strives to compete globally in many areas, including teacher education, and competent biology teachers should demonstrate an understanding of fundamental concepts and processes, such as evolution, and the NOS.Currently, South Africa spends in excess of five percent of its gross domestic product (GDP) on education, which is a severe drain on resources.Teacher education programmes should therefore ensure that student teachers are adequately educated, and in the case of biology teachers, this would entail knowledge and understanding of evolution and the NOS.Such education is important for South African teachers to be able to compete with teachers in both the developing and developed world, with regard to their knowledge and skills pertaining to their disciplines.
The focus of this study was student teachers at a South African university enrolled in a B.Ed. programme and a PGCE, respectively. The purpose of this study was to explore students' understanding of evolution and acceptance of evolutionary theory, as well as their beliefs about the NOS, and to discover if these understandings and acceptances changed with the level of their studies, specifically the second and fourth years of study towards a B.Ed. degree, and graduate students in the PGCE. In so doing, we wished to determine if there was a relationship between the students' understanding of evolution and their beliefs about the NOS and the level of acceptance of evolution. From an international perspective, it is important to understand how South African students, as future teachers, compare to pre-service teachers in both developing and developed countries. The critical questions that the research attempts to answer are:
• What is the difference in the level of understanding of evolution and the nature of science in students at different levels of study?
• What is the difference in the level of acceptance of evolution in students at different levels of study?
• What is the relationship between students' understanding of evolution and NOS and their acceptance of evolution?
Literature Review
Rutledge and Mitchell (2002) correctly state that evolution is the central and unifying theme of the discipline of Biology and should provide the framework for all biology courses. While this view is supported in the scientific community, the general public does not hold this view, and often rejects the teaching of evolution at secondary level (Scott, 2007). This situation has led to a proliferation of research on the teaching of evolution in schools (Aguillard, 1998; Anderson, 2007; Banet & Ayuso, 2003; Bryner, 2005; Clough, 1994; Farber, 2003; Lawson, 1999; Rutledge & Mitchell, 2002; Rutledge & Warden, 2000).
Research has found that many South African teachers have reservations about teaching evolution, due to the negative views they hold.These views may be due to the fact that they do not accept the theory of evolution (Coleman, 2006), or due to their fear of teaching a topic for which they feel under-prepared (Ngxola & Sanders, 2008).Studies have shown that for evolution to be taught effectively, teachers require a deep understanding of both the NOS and evolutionary theory (Lederman, 1992).Teachers who lack understanding of the NOS have difficulty teaching evolution for scientific understanding (Eick, 2000;Rutledge & Warden, 2000).In addition, a biology teacher's acceptance or rejection of evolutionary theory is important in terms of the central role that it plays in the biology curriculum (Rutledge & Warden, 1999).Teachers who lack understanding of the theory of evolution and the basic NOS may present the topic to learners in an isolated manner, leaving room for misinterpretations and misconceptions.
With regard to attitudes towards evolution, a number of researchers (Clough, 1994;Lawson, 1999;Sinclair & Pendarvis, 1998) are of the view that the differences between evolutionists and antievolutionists should be addressed in such a way that these differences are diminished.This may be achieved by adopting a constructivist approach to teaching, which requires teachers to discover what their learners know and believe (Alters & Nelson, 2002).Most South African learners are generally religious, and as a consequence, feel that they cannot accept the theory of evolution (Coleman, 2006).One way of addressing this is the adoption of sound pedagogical strategies, where evolution is presented in a scholarly manner, without attacking religion (Woods & Scharmann, 2001).
The level of understanding of the NOS and evolutionary theory and the relationship between these two aspects as well as the acceptance of evolution, have been the focus of a number of studies.Rutledge and Mitchell (2002) are of the view that specific courses in evolution and NOS should be a requirement of the subject matter preparations of biology teachers.In the South African context, it is also important that teacher education programmes include both evolution and the NOS, so as to prepare student teachers to teach biology effectively.
While addressing the needs of learners is of paramount importance, it is essential to address the factors that impact the teaching of evolution, which relate to teachers as well (Rutledge & Mitchell, 2002).While a considerable body of knowledge exists with regard to teacher opinions and attitudes concerning the evolution-creation controversy (Banet & Ayuso, 2003;Bryner, 2005;Van Koevering & Stiehl, 1989), very little research has been conducted on teachers' understanding of evolutionary theory (Rutledge & Warden, 2000).Many high school biology teachers in the South African context have no formal training in the principles and mechanism of evolution as a biological process.Keke (2014), in a study of 147 secondary biology teachers, found that teachers expressed the greatest need for professional development in topics relating to evolution from all the topics in the Grade 10-12 life sciences curriculum.
Research suggests that conceptual understanding in Science is facilitated when the Science learnt is deemed interesting to the learner as well as relevant to his or her everyday life (Taylor, 2001). Earlier work by Hewson (1981) and by Posner, Strike, Hewson and Gertzog (1982) attempted to explain how conceptual change occurs by defining a theory of conceptual exchange (or accommodation). This theory foregrounds a notion of 'competition' between the conceptions that students hold and any new concepts with which they are confronted. For students to change their existing conceptions, these researchers believe accommodation needs to occur. For accommodation to occur, Posner et al. (1982) are of the view that four conditions need to be met. The first step is that a student develops dissatisfaction with the existing conceptions he/she may hold. Once this occurs, the student may exchange this conception for a new conception on condition that the competing conception is intelligible, plausible and fruitful. More recent work on the nature of conceptual understanding has been reported by Clark (2006), DiSessa, Gillespie and Esterly (2004) and others, who propose different models of how conceptual change occurs. One such model views understanding in terms of collections of multiple quasi-independent elements (Özdemir & Clark, 2007).
The question arises as to whether these existing models of conceptual change apply in situations where the topic being studied presents concepts that are counter-intuitive.Evolution is such a topic.Students find it very difficult to align their existing views with the new views presented to them when engaging with evolutionary biology.This creates a resistance to the new concepts, and understandably hampers understanding.Under these circumstances it may be more helpful to examine student conceptual understanding within a framework of resistance.An example of such a framework is a model proposed by Jegede (1995), known as collateral learning.This model defines collateral learning as an accommodative mechanism for the conceptual resolution of potentially conflicting tenets within a person's cognitive structures.Collateral learning was proposed as an explanatory model to understand the conflict arising when learners from non-western cultures are faced with a western world-view.It represents the process whereby a learner in a non-western classroom simultaneously constructs, with minimal interference and interaction, western meanings of a simple concept (Jegede, 1995).This model facilitates the holding of a scientific as well as a traditional view of the world.This is in contrast with the conceptual change framework, where learners would have to replace their prior concepts with currently accepted western science concepts.Students often see very little relevance in learning about evolution (Coleman, 2006) as they are not able to relate it to their everyday lives.Jegede's (1995) model of collateral learning may therefore apply to the learning of evolution, as many evolutionary concepts are counter-intuitive.Previous research has found that even teachers often find it difficult to understand concepts pertaining to evolution (Kirsten, 2014).
The difficulty teachers face with regard to the teaching of evolution appears to be compounded by their poor understanding of the NOS.Abd-El-Khalick and Lederman (2000) have demonstrated that both teacher and learner beliefs about the NOS are inconsistent.The way a teacher understands the NOS may influence the way he/she teaches science, and in particular, evolution.This in turn, has an influence on the way learners understand Science, and this may be particularly problematic with a topic such as evolution, where so many misconceptions abound.Similar findings have been obtained by Brickhouse (1990), Shulman (1986) and Singh (1998).Hammrich (1997) is also of the view that the conceptions teachers hold of the NOS shape their understanding of science, as well as of how science should be taught.These conceptions are firmly entrenched as their epistemologies with regard to science were influenced by their socialisation as teachers, as well as the way they were taught as learners.
Lederman is of the view that the NOS may be regarded as the cornerstone in the teaching of Science as a subject.It is for this reason that science curricula in many countries agree on the "development of an adequate understanding of the NOS" (Lederman, 1992:331).An important observation made by Lederman (1992), was that teaching experience does not contribute to a teacher's understandings of the NOS.A teacher's view of the NOS does, therefore, not change through experience, but because of a change in his/her view of what constitutes science.It is therefore important that education courses address the issue of the NOS, as research shows the knowledge that pre-service teachers have of the NOS to be inadequate (Irez, 2006).Research by Abd-El-Khalick ( 2001) and Abd-El-Khalick and Akerson ( 2004) has further shown that implicit teaching of NOS through enquiry-based courses is less successful than explicit teaching of how to teach the NOS as covered in methods courses in science education programmes.It is thus incumbent on the developers of science education courses to include the teaching of the NOS in their curricula.This study will contribute to the understanding of teachers' views of the NOS, obtained from the courses they attended at university.
While a lack of understanding may influence students' acceptance of evolution, religious views or the views of the community from which the students come may have a similar impact (Evans, 2001). On the other hand, a better understanding of evolutionary biology does not necessarily lead to a general social acceptance of evolution as a scientific fact (Bishop & Anderson, 1990).
Methods
This study replicates a study conducted by Cavallo and McCall (2008), who reported that ninth-grade biology students improved their knowledge of evolution after a unit of instruction on evolution, but did not change their beliefs about the NOS or acceptance of evolution.The research is located within a quantitative framework.Questionnaires were administered to 200 pre-service teachers (hereafter referred to as students), who were enrolled in the School of Education at a tertiary institution in South Africa in 2012.Incomplete questionnaires were discarded.As a result, the responses of 164 students were analysed.
Sample
Participants were either enrolled in a B.Ed. degree (n = 128), or in a PGCE programme (n = 36).Both groups were Biology/Life Sciences majors.The B.Ed. degree is a professionally focused degree, in which students construct their subject matter knowledge as appropriate for teachers.They do not attend mainstream science courses, which are intended for students studying towards a Bachelor of Sciences degree.Their biology programme includes a substantial theme on evolution and all modules are approached from the premise that evolution is the underlying principle that informs all biology teaching.Postgraduate Certificate in Education (PGCE) students have Bachelor of Science (B.Sc.) degrees, and only focus on education in the PGCE programme.The minimum requirements for registration as a biology teacher are at least two years of study in disciplines related to Biology.The extent to which students are exposed to the NOS and evolution depends on the structure of the degree, which may differ between various institutions.The students who constituted the sample of the study, were representative of the student population of the university at which the research was conducted, that is, approximately 75% of the students do not have English as their home language.Data concerning gender and year of matriculation was collected.Year of matriculation proved particularly important, since 2008 marked the first cohort of school-leavers who had studied evolution during their schooling.
The students participating in the study were registered for the modules Biological Sciences for Educators BIO210, BIO410 and BIO610. The BIO210 group of students were in their second year of study towards a B.Ed., but in the first year of study for the course BIO210. Females constituted 56% of the class, and males 44%. The majority of students (93%) had matriculated between 2008 and 2010, with only 7% having matriculated in 2007 or earlier.
The BIO410 group of students were in their fourth year of study towards a B.Ed. degree, and the third year of studying BIO410.Females constituted 59% of the class, and males 41% of the class.A minority of students (39%) had matriculated in 2008, with the majority (61%) having matriculated in 2007, or earlier.
The BIO610 group were studying towards a PGCE, and all were registered for the Biology Teaching Specialisation module.Females constituted 58%, and males 42% of the group.All students had matriculated in 2007 or earlier.
The three groups were therefore similar in gender representation, but differed in exposure to evolution in their schooling.Most of the BIO210 group had been taught evolution during schooling, while only 39% of the BIO410 group and none of the BIO610 group had experience studying evolution as part of their schooling.All three groups had a similar proportion of English home-language speakers.Statistical comparisons among the three groups were therefore valid.
Data Collection and Procedure
The questionnaire consisted of five sections: Section A of the questionnaire collected basic demographic data.In addition, students were asked whether evolutionary ideas were in conflict with their religious beliefs.Section B, which covered students' worldviews, was not utilised for the present study.We administered three instruments as follows.
Section C
To assess students' acceptance of evolutionary theory, the Measure of the Acceptance of the Theory of Evolution (MATE) (Rutledge & Warden, 1999) was administered.This section of the questionnaire consisted of 20 statements and was scored using a Likert scale.The response indicating strongest degree of acceptance of the theory of evolution received a score of 5, and the response indicating least degree of acceptance of the theory of evolution received a score of 1.Thus, the possible range of total scores was 20-100.
Section D
To assess students' beliefs about the NOS, a questionnaire consisting of 17 items was used (Rutledge & Warden, 2000).The items were scored using a Likert scale.The response most congruent with science received a score of 5, and the response least congruent with science received a score of 1.Thus, the possible range of total scores was 17-85.
Section E
To assess students' understanding of evolutionary theory, 21 multiple choice items were administered to students (Rutledge & Warden, 2000).These items addressed various aspects of evolutionary theory.For each item, a choice of five possible answers was presented.The number of correct responses was tallied, and a cumulative score obtained.The possible range of scores was 0-21.This variable will hereafter be referred to as 'MCQ' (multiple choice questions).
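A brief Python sketch of how total scores for the three instruments can be computed is given below; the column names, the answer key and the choice of reverse-keyed Likert items are illustrative assumptions only, since they are not specified here.

import pandas as pd

def likert_total(df, items, reverse_keyed=()):
    # Sum Likert items (scored 1-5), flipping reverse-keyed items so that 5 always
    # denotes the response most accepting of evolution / most congruent with science.
    scored = df[items].copy()
    for col in reverse_keyed:
        scored[col] = 6 - scored[col]
    return scored.sum(axis=1)

def mcq_total(df, answer_key):
    # Count correct multiple-choice answers (possible range 0-21).
    return sum((df[item] == key).astype(int) for item, key in answer_key.items())

responses = pd.read_csv("responses.csv")             # hypothetical response table
mate_items = [f"mate_{i}" for i in range(1, 21)]      # 20 MATE items, total range 20-100
nos_items = [f"nos_{i}" for i in range(1, 18)]        # 17 NOS items, total range 17-85
mcq_key = {f"mcq_{i}": "A" for i in range(1, 22)}     # placeholder answer key, 21 items

responses["MATE"] = likert_total(responses, mate_items, reverse_keyed=("mate_2",))
responses["NOS"] = likert_total(responses, nos_items)
responses["MCQ"] = mcq_total(responses, mcq_key)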
Data Analysis
Data were scanned and cleaned before statistical analysis was conducted. Candidates' responses were deleted if they did not comply with the following criteria:
1. Completed all three surveys;
2. Omitted fewer than five questions in the Evolution Understanding test.
The second criterion was introduced after it became apparent that a number of students failed to answer questions after they reached a certain point. It was not clear whether this was due to test fatigue, lack of knowledge or refusal to answer the questions. The number of students eliminated by each criterion is shown in Table 1. The excluded students are important, in that they indicate that answering the NOS survey was more acceptable to all three groups than answering the MATE or, for the 210 group, the Understanding Evolution questions. It is also significant that 45.7% of the 610 group omitted more than four questions on the multiple-choice test, and were therefore excluded from the final sample. Some students satisfied two or more criteria for exclusion. This process eliminated 22 students from the 210 group, two from the 410 group, and 16 from the 610 group. The percentage of participants who therefore completed the questionnaire was 75.6 percent.
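The two exclusion criteria translate directly into boolean filters on such a response table; the sketch below assumes the hypothetical column names introduced earlier.

import pandas as pd

responses = pd.read_csv("responses.csv")              # hypothetical response table
mcq_items = [f"mcq_{i}" for i in range(1, 22)]         # 21 Evolution Understanding items

# Criterion 1: completed all three surveys (no missing total scores).
completed_all = responses[["MATE", "NOS", "MCQ"]].notna().all(axis=1)

# Criterion 2: omitted fewer than five multiple-choice questions.
few_omissions = responses[mcq_items].isna().sum(axis=1) < 5

clean = responses[completed_all & few_omissions]
print(f"retained {len(clean)} of {len(responses)} respondents")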
The instruments used had been developed and validated by previous researchers (Cavallo & McCall, 2008; Rutledge & Warden, 2000). Minor adaptations related to simplifying the language in the questionnaire were made to suit the South African context. However, the reading level required to answer many of the questions may have resulted in many incomplete questionnaires, since the majority of students participating in the study were not English first-language speakers. There was evidence of resistance to participation in the collection of data, and one group of students (the 310 class) was abandoned entirely, due to the sheer number of incomplete questionnaires.
Although the respondents were mostly students of one of the researchers, questionnaires were completed voluntarily. Furthermore, respondents were assured of their anonymity. The ethics committee of the university where the research was conducted gave permission for the research to be conducted.
Results
Table 2 provides the results, which enabled us to answer the first two research questions. The mean scores obtained on each questionnaire were compared among the three groups of students, 210, 410 and 610. Summary statistics are shown in Table 2. A one-way analysis of variance was conducted to compare the mean scores across student groups in each part of the survey. The mean scores for the understanding of evolution were low across the three groups, given that the maximum possible score is 21. However, the graduate group (610) obtained significantly higher scores than did the two B.Ed. groups (210 and 410). The mean score obtained by fourth-year students (410) was non-significantly higher than the mean score obtained by second-year students (210). Students in the B.Ed. programme therefore have a poorer understanding of evolution than the graduate group (it ought to be mentioned that this research was conducted before the section on evolution was covered in the 410 module; however, at this stage, students had studied four biology modules).
Beliefs about the NOS revealed that the graduate group (610) also obtained significantly higher mean scores than did the two B.Ed. groups (210 and 410). The mean score obtained by fourth-year students (410) was non-significantly higher than that obtained by second-year students (210).
Mean scores for the Acceptance of Evolution were high in all three groups (over 70% in all groups), and did not differ significantly among the three groups. This suggests that factors other than level of knowledge about evolution and the NOS are implicated in students' acceptance of evolution. A more nuanced analysis is possible if the proportion of students choosing each option (1-5) is analysed. This is shown in Figure 1 for acceptance of evolution.
The frequency of selection of each level from 1 (Strongly Disagree) to 5 (Strongly Agree) was plotted as percentages of the total number of selections made by students in each module (Figure 1). Over 60% of answers given in all three classes accepted the theory of evolution (scored as 4 or 5), while less than 20% of the answers indicated rejection of the theory of evolution (scored as 1 or 2). All three groups had similar profiles, shown in Figure 1. The chi-squared statistic comparing the proportion of choices for levels 1 to 5 in the three groups was significant (chi-squared = 16.72, p < 0.05). The z-scores indicated that there was no difference among the three groups in the selection for levels 1, 2 and 3. Level 4 (Agree) was significantly more frequently selected by the 410 group than the 610 group, with the 210 group straddling both groups. However, the reverse was found for level 5 (Strongly Agree), with the 610 group being significantly more likely to choose this option than the 210 group, and the 410 group straddling the 210 and 610 groups.
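The chi-squared comparison reported above can be reproduced with standard statistical tools. The following sketch uses an invented contingency table of response-level counts purely for illustration; the actual counts are those underlying Figure 1.

# Chi-squared test of response-level frequencies (levels 1-5) across the three groups.
# The counts below are invented for illustration only; the real data underlie Figure 1.
import numpy as np
from scipy.stats import chi2_contingency

# rows: groups (210, 410, 610); columns: Likert levels 1..5
observed = np.array([
    [40, 55, 120, 310, 210],   # hypothetical counts for BIO210
    [20, 40,  90, 330, 180],   # hypothetical counts for BIO410
    [15, 30,  70, 240, 260],   # hypothetical counts for BIO610
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-squared = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# Follow-up z-tests on column proportions, as described in the text, would compare the
# proportion of choices at each level between pairs of groups.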
Figure 1 and the chi-squared analysis reveal an increase in the acceptance of evolution between the 210 and 410 modules, as indicated by fewer 410 students refusing to answer this questionnaire, and more 410 students agreeing or strongly agreeing with statements about evolution. The 610 students were more certain about their acceptance of evolution than any other group, but they were equally as likely to disagree as B.Ed. students.
Figure 1
Frequency of selection of each level from 1 (Strongly Disagree) to 5 (Strongly Agree) with statements about acceptance of evolution
Figure 2 shows the results of the NOS survey, which indicates a greater difference between the PGCE and B.Ed. students than was seen in the results of the MATE survey. This observation is supported by the results of a chi-squared test comparing the proportion of choices for levels 1 to 5 (1 = non-scientific, 5 = scientific) in the three student groups being highly significant (chi-squared = 33.81, p < 0.001). The 210 group was significantly more likely to have strongly non-scientific beliefs than either the 410 or 610 groups, but the groups did not differ significantly on moderately non-scientific beliefs or indecision. The 610 group was significantly more likely to hold scientific beliefs than the 210 group, with the 410 group straddling both groups. The 610 group was significantly more likely to hold strongly scientific beliefs (level 5) than either of the other two groups. The marked difference between the B.Ed. and the PGCE groups is made apparent when the total "agreement" is calculated: 54% of the responses for the 610 group indicated acceptable beliefs with regard to the NOS, while 44% of 410 students and 42% of 210 students had the same responses. The results shown here support the findings of the ANOVA that the 610 group displayed better understanding of the NOS than either the 210 or 410 group. The results of the chi-squared test show that the 410 group had a somewhat better understanding of the NOS than the 210 group, but it is not a convincing improvement. The 210 group were equally split between scientific and non-scientific beliefs about the nature of science.
A linear regression analysis was performed to investigate the relationship among the three variables for all three groups combined. These results enabled us to answer our third critical question. The predictions were that knowledge of evolution would correlate positively with acceptance of evolution and understanding of the NOS. Pearson correlation coefficients are shown in Table 3.
All variables are positively correlated with one another, but only one correlation is highly significant, namely that between understanding of evolution and beliefs about the NOS (p < 0.001). Acceptance of evolution is not significantly positively correlated with either evolution understanding or acceptable beliefs about the NOS. Table 3 shows that both evolution understanding and acceptable beliefs about the NOS were significantly higher in the graduate group than in the undergraduate groups, but that acceptance of evolution did not differ by group of students. The linear regression reinforces that acceptance of evolution is independent of evolution understanding and independent of beliefs about the NOS.
Discussion and Conclusion
Compared with the data collected by Cavallo and McCall (2008) using the same three instruments used here, South African pre-service teachers have more acceptable beliefs about the NOS, as well as a higher level of acceptance of evolution. This is in accordance with the research conducted by Mpeta et al. (2014), who found that there was a moderate acceptance of evolution by learners in the Limpopo province of South Africa; but contrary to the findings of Peker, Comert and Kence (2010), who report low levels of acceptance of students in Turkey. Schröder (2013) is of the view that low levels of acceptance of evolution may be interpreted as resistance to change, regardless of increase in knowledge. This study supports this view, as high levels of acceptance of evolution were evident, irrespective of the level of students' understanding. Sinatra, Southerland, McConaughy and Demastes (2003) report similar findings. In the South African context, the reason for this may be that, in spite of low levels of understanding, attitudes to evolution may be changing. While evolution was formerly treated as a controversial issue, and met with resistance when introduced into the South African school curriculum (Sanders, 2008), it would appear that it is no longer as controversial. Students enter university courses having been exposed to evolution concepts and are taught courses in Biology by staff who accept evolution as a scientific fact. Students' acceptance of evolution is therefore possibly strengthened by the measure of exposure they have had to teachers and lecturers who teach evolution as an integral part of the discipline, irrespective of the level of knowledge of evolution concepts. This is a positive development, as acceptance of such a fundamental aspect of the discipline is very important for student teachers who wish to become competent teachers of Biology.
With regard to the understanding of evolution, only the graduate students in our study match the level of understanding of evolution achieved in the post-test by the ninth-grade students sampled by Cavallo and McCall (2008). This lack of knowledge of concepts relevant to evolution is clearly a cause for concern, as all teachers of Biology/Life Science should demonstrate adequate knowledge of evolution and the NOS. Many students were eliminated from the data analysis because they were unable or unwilling to answer many of the questions in the questionnaire. It is also a concern that the B.Ed. students had a relatively poor understanding of the NOS, which was considerably stronger in the PGCE students. The reason for students' poor understanding of evolution may be twofold. Firstly, students' lack of understanding of the NOS may contribute to their lack of understanding of evolution. This was confirmed by the research conducted by Eick (2000), and Rutledge and Warden (2000), who respectively found that teachers who lack understanding of the NOS have difficulty teaching evolution for scientific understanding. However, the students in this study have acceptable beliefs about the NOS, although their knowledge of concepts related to the NOS may be lacking. Secondly, students may resist conceptual change, because the concepts related to evolution are counter-intuitive. Deeper engagement with concepts related to evolution and the NOS is required in order to facilitate conceptual change. This explains why the PGCE students have a better understanding of evolution and the NOS than the B.Ed. students, as they probably engaged more deeply with these concepts in a pure science degree. This research points to a possible lack of emphasis on evolution and the NOS in biology courses. More time should be spent on these concepts, not only as important concepts, but also as guiding principles in the design of these courses. If evolution and the NOS are taught as an integral part of Biology in all modules, including method modules, students may be able to engage at a deeper level, and develop a better understanding of these important concepts. This is especially applicable to students in B.Ed. programmes. While the focus may be education, it is important that students gain in-depth knowledge and understanding of the concepts pertaining to their specialisations. As a developing country, South Africa cannot afford to produce teachers who lack fundamental knowledge of the disciplines they will be teaching.
This study provides baseline data on the level of understanding of the NOS and the level of understanding and acceptance of evolution in life sciences student teachers at a single higher education institution in South Africa. The findings of this study may provide valuable information for tertiary institutions more generally, in terms of the design of their biology programmes for student teachers, and in addition allow comparisons to be made between South African pre-service teachers and those in other countries. Furthermore, a country such as South Africa, which experiences a number of extreme economic constraints, ought to consider the allocation of funding for professional development courses for teachers very carefully. The findings of this study may therefore contribute to the process of identifying where in the country the need for professional development of life sciences teachers is greatest.
Notes
i. The first post-apartheid curriculum implemented in South Africa.
Figure 2
Frequency of selection of each level from 1 (Strongly Disagree) to 5 (Strongly Agree) with statements about the nature of science
Table 2
Summary statistics and results of ANOVA comparison of group means for each variable
A route to minimally dissipative switching in magnets via THz phonon pumping
Advanced magnetic recording paradigms typically use large temperature changes to drive switching, which is detrimental to device longevity, hence finding non-thermal routes is crucial for future applications. By employing atomistic spin-lattice dynamics simulations, we show efficient coherent magnetisation switching triggered by THz phonon excitation in insulating single-species materials. The key ingredient is excitation near the P-point of the spectrum, in conditions where spins typically cannot be excited and when manifold k phonon modes are accessible at the same frequency. Our model predicts the necessary ingredients for low-dissipative switching and provides new insight into THz-excited spin dynamics.
Ultrafast switching of magnetic materials has been shown to be predominantly thermally driven, but excess heating limits the energy efficiency of this process. By employing atomistic spin-lattice dynamics simulations, we show that efficient coherent magnetisation switching of an insulating magnet can be triggered by a THz pulse. We find that switching is driven by excitation near the P-point of the phonon spectrum in conditions where spins typically cannot be excited and when manifold k phonon modes are accessible at the same frequency. Our model determines the necessary ingredients for low-dissipative switching and provides new insight into THz-excited spin dynamics with a route to energy efficient ultrafast devices.
The control of magnetic order on an ultrafast timescale in an efficient and robust manner is crucial for the development of the next generation of magnetic devices [1]. The interaction of magnetic materials with femtosecond laser pulses has shown multiple fundamental effects that culminate in the ability to switch the magnetisation by means of purely optical excitation [2,3]. However, in metallic systems this is accompanied by a large, rapid temperature increase, which is detrimental to the long-term usage of the device. The use of insulators can be beneficial in this respect; however, routes for energy efficient switching, involving minimal energy losses, still need to be found.
A number of new possibilities have been presented recently using ultrafast excitations in the terahertz (THz) regime by means of femtosecond lasers or THz sources [4]. One of the fundamental questions in this research is understanding the angular momentum transfer between spins and lattice, with recent results demonstrating that angular momentum transfer between both systems primarily takes place on an ultrashort timescale (around 200fs) [5], in contrast with previously considered timescales in the range of 100 ps [6]. Therefore, at the picosecond timescale and below, the dynamics of spin and lattice occurs simultaneously and one system can excite the other. Firstly, excitation of the lattice can change magnetic parameters such as exchange [7][8][9] or anisotropy [10] and therefore can excite magnetisation dynamics. Reciprocally, ultrafast excitation of the spin system can produce excitation of the lattice at the same timescale. An example of that is the ultrafast Einstein-de Haas effect [5] or localised spin-Peltier effect in antiferromagnets [11].
The most exciting results, however, are related to the possibility of magnetisation switching via THz phonon pumping. Such a possibility was anticipated by theoretical investigations [13,19] based on a phenomenological model which includes magneto-elastic anisotropy. Recent experiments by Stupakiewicz et al [14] demonstrated ultrafast magnetisation switching in the magnetic insulator YIG by means of resonant pumping of specific longitudinal optical phonon modes. The switching was explained by excitations of local stresses that induce magneto-elastic anisotropy. The results also suggest a new universal ultrafast switching mechanism which may be applied to a wide range of materials.
Thus, energy-efficient magnetisation switching is particularly interesting to explore in insulators. This is due to the fact that, firstly, the electronic system with its low specific heat can have little participation in the energy uptake and, secondly, that excitation of the spin-phonon system on the subpicosecond timescale will have minimal interaction with the outside world in terms of energy diffusion. Therefore, one can try to find conditions for almost dissipationless ("cold") switching when the angular momentum is efficiently transferred from phonons to spin and the energy is used to switch magnetisation without losses. In this Letter, we demonstrate magnetisation switching under the application of a THz phonon pulse, as shown schematically in Fig. 1. The modeling is done within the molecular dynamics approach in a self-consistent spin-lattice framework [15]. We demonstrate the possibility of an energy efficient switching in the conditions when phonons are excited with high k-values and THz frequencies, corresponding to a maximum in the density of states with no available spin-wave modes. The mechanism of switching is via local magneto-elastic fields created by atomic displacements due to the phonon pulse. In this condition, the absence of magnon excitations means that practically all phonon angular momentum is transferred to precessional magnetisation switching. The mK spin temperature developed during the switching process shows that the switching process can be considered nondissipative.
The computational model for this work has been previously employed in the systematic investigation of equilibrium and dynamic properties of BCC Fe [15]. The Fe parameters were chosen as they are well studied in the literature and good parameterisations exist from theory and experiments, providing a realistic spin-phonon spectrum. However, for the sake of proof of concept, the system is effectively treated as an insulator: no electronic damping is considered on the spin system and the only thermostat is applied to the lattice. The system size used in the simulations is 10 × 10 × 10 BCC unit cells. For phonon interactions, we consider a Harmonic Potential (HP) and a Morse Potential (MP). The HP is defined as U(r_ij) = V_0 (r_ij − r0_ij)^2, where V_0 has been parametrised for BCC Fe in [16]; the MP is parameterised for BCC Fe in [17] (D = 0.4174 eV, a = 1.3885 Å, r_0 = 2.845 Å). For both potentials the interaction range was restricted to r_c = 7.8 Å. Magnon properties, such as damping and equilibrium magnetisation, have been shown not to be influenced by the choice of potential [15]. The spin Hamiltonian used in our simulations consists of contributions from the exchange interaction, Zeeman and anisotropy energies, as described in [15]. In our model system, we used a uniaxial anisotropy with the easy axis e parallel to the z-direction to mimic the switching experiments. Finally, the spin-lattice coupling term (H_c) was taken to have a pseudo-dipolar form [15]. The magnitude of the interactions is assumed to decay as f(r_ij) = C J_0 / r_ij^4, as presented in [16], with C taken as a constant, for simplicity, measured relative to the exchange interaction J_0 (C J_0 = 0.452 eV Å^4). The coupling strength C can be parameterised via the strain-dependent magneto-elastic anisotropy or via the value of the damping of magnon/phonon origin [15]. For the spin degrees of freedom we solve only the precessional Landau-Lifshitz equation, ∂S_i/∂t = −γ S_i × H_i, without the Gilbert damping term, as damping appears intrinsically in our model via the direct coupling to the lattice.
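A minimal numerical sketch of the ingredients just listed is shown below: the harmonic and Morse pair potentials with the quoted BCC Fe parameters, and a damping-free precessional Landau-Lifshitz update. The explicit Morse form, the unit convention assumed for the parameter a, the effective-field construction and the simple integrator are illustrative assumptions; the production simulations use the full spin-lattice dynamics framework of Ref. [15].

# Sketch of the pair potentials and a damping-free precessional Landau-Lifshitz step.
# The explicit Morse form and its unit convention, the effective field H and the simple
# Euler integrator are illustrative assumptions; the production simulations use the full
# spin-lattice dynamics framework of Ref. [15].
import numpy as np

def harmonic_potential(r, r0, V0):
    """Harmonic pair potential U = V0 * (r - r0)**2 (V0 parametrised in Ref. [16])."""
    return V0 * (r - r0) ** 2

def morse_potential(r, D=0.4174, a=1.3885, r0=2.845):
    """Standard Morse form, assuming a acts as an inverse-length stiffness parameter."""
    return D * (np.exp(-2.0 * a * (r - r0)) - 2.0 * np.exp(-a * (r - r0)))

def ll_precession_step(S, H, gamma, dt):
    """One explicit Euler step of dS/dt = -gamma * S x H (no Gilbert damping),
    followed by renormalisation of the spin length."""
    S_new = S + dt * (-gamma) * np.cross(S, H)
    return S_new / np.linalg.norm(S_new, axis=-1, keepdims=True)

# Example: a single spin precessing about an effective field along z (reduced units)
S = np.array([[1.0, 0.0, 0.0]])
H = np.array([[0.0, 0.0, 1.0]])
for _ in range(100):
    S = ll_precession_step(S, H, gamma=1.0, dt=0.01)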
To model the effect of THz phonon excitation, we apply a periodic external force to each atom, gated by a Heaviside step function over the rectangular pulse duration t_p; the index α labels the x, y, z components of the force. The excitation force F^x_THz is made non-homogeneous in space by choosing different k-vectors and, for simplicity, is applied along the x direction.
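The exact expression of the driving force is not reproduced here; the sketch below uses one plausible functional form consistent with the description (a sinusoidal force in time, modulated in space by the chosen k-vector, and gated by a Heaviside factor of duration t_p). Both the functional form and the numerical values are assumptions made for illustration only.

# Hypothetical, illustrative form of the spatially modulated THz driving force along x:
# periodic in time, k-dependent in space, switched off after the pulse duration t_p.
# This is an assumed expression, not the exact force used in the simulations.
import numpy as np

def thz_force_x(t, r, f0=0.05, freq_thz=8.3, k=(3.14, 3.14, 3.14), t_p=100e-12):
    """Force on an atom at position r (lattice units) at time t (seconds)."""
    omega = 2.0 * np.pi * freq_thz * 1e12      # angular frequency of the THz excitation
    gate = 1.0 if t <= t_p else 0.0            # rectangular (Heaviside) pulse of duration t_p
    return f0 * np.sin(omega * t) * np.cos(np.dot(k, r)) * gate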
The complete magnon-phonon spectrum is presented in Fig. 2. Note that for the HP, the phonon and magnon spectrum intersect, however the MP does not present this feature. The power spectral density of the autocorrelation function in the frequency domain [15] (see also Fig. S1 in Supplementary Material) reveals a peak for phonons at frequencies ca. 8.3 THz for the HP and a broad-band excitation for the MP around the similar frequency (with no available spin-wave modes), hence we first excite our systems at around 8THz. The application of the THz force drives the atom displacement to excite phonons within the system. Within the spin-lattice framework, these phonons break the local symmetry of the lattice which, through the pseudo-dipolar coupling term, generates an internal field capable of switching the magnetisation. The k-vector corresponding to the P-point (obelisk symbol in Fig. 2) and the application of the force on the x direction was selected as it gives rise to movement of the atoms entirely out of phase along the x-direction, with oscillations of the atoms around their equilibrium position up to 7% of the lattice spacing (for f x 0 = 0.05) -Supplementary Material, Fig. S2. In the case of the HP - Fig.2, panel b), forced excitation with a THz pulse of 8.3 THz leads only to a response at the same frequency for both magnons (spin S x ) and phonons (position r x ) - Fig.2, panel d). For the MP, although we excite the system only on the x direction and at the P point in the Brillouin zone, we observe multiple spin and phonon modes ( Fig.2, panel c) reflecting the decay of the forced excitation into other modes, along the P − Γ path. This gives rise to displacements in all three directions, (Fig. S2, Supplementary Material) and a more complicated switching pattern and additional heating (as observed later in Fig. 3, 4). Note that, the analysis of the coupling fields (see Supplementary Material, Fig. S1) shows a broadening of the spectra at the excitation frequency for the MP, in contrast with the peak observed for the HP, suggesting a much weaker coupling to the THz force (hence a lower peak amplitude at the excitation frequency).
In spite of these differences in the excitation spectrum, we have observed magnetisation switching for both potentials. Fig. 3 shows examples of switching cases for the HP (panel a) and the MP (panel c). We observe that the component of the coupling field H_c parallel to the magnetisation is zero, and that the THz excitation leads to the appearance of a perpendicular component H_c^⊥, which will drive the switching. In the case of the HP (panel a), we observe that at around 40 ps there is an increase in H_c^⊥, leading to the initiation of switching. For the case shown in panel b, the perpendicular field developed is smaller and cannot lead to switching. For the MP, we observe that the perpendicular coupling field fluctuates due to the wider range of excited phonons. The switching is triggered when this field reaches a relatively constant, large value, which is a random event. For the case of panel d, no switching is triggered since the in-plane component of the perpendicular field is rotating (see Fig. S5 in the Supplemental Material). Importantly, after the THz excitation, the magneto-elastic fields are almost zero, showing that all the energy was transferred and used during the switching. After this time, the only effective field that leads the magnetic system back to saturation is a small uniaxial anisotropy, H_a^z = 0.05 T, which defines a slow relaxation of the system at much longer timescales (> 30 ns) than those presented here.
To characterize the spin system we calculate the spin temperature [18], see Fig. 3, bottom panels. We observe that the increase in temperature is in the mK range, proving the lack of heating during this process. Moreover, the spin temperature in the case of the HP is raised to several mK, while approximately 10 times larger temperatures (ca. 10 mK and up to 28 mK) are created for the MP. In the case of the HP, the very small temperature increase is due to the fact that no magnons can be excited in the spin system, and thus the energy goes efficiently into the magneto-elastic fields created by the spin-phonon coupling, with the rest of the increase in temperature coming from switching only, as shown in Fig. S3, Supplementary Material. This underlines the main differences between the HP and the anharmonic MP. In the latter case, large non-linearities are present in the system. Since our excitations are strong, these non-linearities act as an efficient scattering mechanism for phonons, which finally adds a temperature-like effect with quite slow relaxation. Fig. 4 shows the corresponding phase diagram at T = 0 K for HP (a) and MP (b) in terms of the excitation frequency and the pulse duration. An important difference between the two potentials is a much larger switching region for the MP, however with scattered points due to scattered phonon modes and finite-size effects, which we will discuss below. We note that the excitation at this frequency produces a large phonon response, up to 5-7% of the interatomic distance (see Supplementary Material, Fig. S4), and many phonon modes are available at these frequencies. We underline that magnons do not have modes at these frequencies and the corresponding k-point, hence the switching is triggered by phonons. During the switching, the change in the magnetisation length is less than 0.04%, suggesting once more a non-thermal switching.
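The spin temperature referred to above is commonly estimated from the instantaneous spin configuration and effective fields; a sketch of the widely used estimator attributed to Ref. [18] is shown below. Whether the paper applies exactly this expression, and the unit conventions used here, are assumptions.

# Sketch of an instantaneous spin-temperature estimator of the kind cited as Ref. [18]:
# T_s = sum_i |S_i x H_i|^2 / (2 k_B sum_i S_i . H_i)
# Assumes the effective fields H_i are expressed in energy units consistent with k_B.
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def spin_temperature(S, H):
    """S, H: (N, 3) arrays of unit spins and effective fields (eV per spin)."""
    numerator = np.sum(np.linalg.norm(np.cross(S, H), axis=1) ** 2)
    denominator = 2.0 * K_B * np.sum(np.einsum("ij,ij->i", S, H))
    return numerator / denominator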
One of the main characteristics of the switching is that a minimum duration of the THz pulse, ca. 50 ps is required, as shown by the phase diagram. A detailed examination shows that the switching is precessional due to the fact that the application of the force on the x direction for the k-vector corresponding to the P point in the Brillouin zone was selected to generate a coupling field that acts in plane, on the x direction (see Fig. 3). Additional proof is shown in Fig. S6, Supplementary Material, where we impose an x displacement on atoms of the same magnitude, frequency and phase as in the case of the numerical simulations with a HP. This periodic motion creates a perpendicular coupling field to the magnetisation and leads to precessional switching. Panels c) and d) in Fig.4 shows the final magnetisation state for different pulse widths (points) after the application of a THz laser pulse. For longer pulse widths, there is an oscillatory behavior of the magnetisation which corroborates the precessional and coherent nature of the switching. The fluctuation of the in-plane coupling field for MP is responsible for random switching events visible in the phase diagram of Fig.4, in contrast to the regular behavior of HP. The random switching effects are especially visible in small systems, as is the case of our simulations. Analysing panels c and d in Fig. 4, we confirm that in the case of the MP -panel c, the scattering of the phonon modes and subsequent heating leads to a scattered final magnetic state with each realisation, while for the HP, a similar magnetic state is obtained with each realisation. A similar "randomization" has been produced when we analysed the switching diagrams at non-zero temperatures for the HP, see Supplementary Material, Fig. S2. This "random" switching diagram has been also reported for a macrospin nanoparticle with magneto-elastic anisotropy term at non-zero temperature [19].
Finally, Fig.5 presents the results of lower frequency THzphonon excitation near the Γ-point (at the red asterix symbol in Fig. 2) for the MP and for two excitation strengths. This is a characteristic example of what typically occurs in both systems. A small excitation, f x 0 = 0.02, does not develop any transverse coupling field and produces no effect except small-amplitude spinwaves. Larger excitations (for f x 0 = 0.03, average H t c = 1.5T) randomises the spin system leading to a large increase of its temperature. Hence, although excitation close to the Γ-point produces random (non-coherent) switching events, for some particular excitation strengths, the process is associated with subsequent heating. The final spin temperature increases when the excitation strength increases. In conclusion, using a spin-phonon model with angular momentum transfer, we investigated the effect of magnetisation switching driven by phonon excitations with minimal energy dissipation. SLD models are crucial for the investigation of magnetisation switching via THz phonons, as we are able to access the magnetisation dynamics corresponding to the excitation of individual or collective phonon modes. Our results suggest that ferromagnetic materials that present a flat phonon spectral region where a large number of phonon modes can be efficiently excited, are good candidates for THzassisted "cold" switching. The key factor is excitation with THz phonons with frequencies and k-points at a maximum in the phonon density of states and no spin excitations. The mechanism corresponds to the phonon-driven generation of magneto-elastic fields with components perpendicular to the magnetisation producing precessional switching on the 100 ps timescale. Importantly, we predict this possibility for the case of single species materials unlike the first experimental observations [14] where the switching was due to optical phonon excitation. In this regard, the important factor is related to the possibility of experimental access to the excitation with k-vectors close to this "flat" phonon spectrum, in our case the P-point. Our prediction may be of crucial importance for nextgenerations of eco-friendly storage devices since heat production is one of the major problems for large data storage centers.
Financial support of the Advanced Storage Research Consortium and ARCHER2-eCSE06-6 is gratefully acknowledged. MOAE gratefully acknowledges support in part from EPSRC through grant EP/S009647/1. The simulations were undertaken on the VIKING cluster, at the University of York. SR, RWC and RFLE acknowledge funding from the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No. 737709. The authors acknowledge the networking opportunities provided by the European COST Action CA17123 "Magnetofon" and the short-time scientific mission awarded to MS.
SUPPLEMENTARY INFORMATION
Fig. S1 shows the power spectrum of magnons (S_x), phonons (v_x) and the coupling field (H_c^x) in the case of the Morse Potential (MP) and the Harmonic Potential (HP). In the case of the HP, since the phonons present a large peak due to the flat phonon spectrum, the coupling field will inherit this peak and will lead to a strong magnon-phonon coupling, which translates into a strong spin response when applying the THz pulse. This is not the case with the Morse potential, where the coupling has lower intensity peaks, and hence the magnons are not so efficiently coupled to the THz pulse. The temperature effect leads to a randomisation of the phase diagram (panel a) in comparison with the 0 K results. To reduce the thermal noise, the switching phase diagram is averaged over multiple realisations (panel b). We observe that switching occurs in the same frequency range as for T = 0 K (Fig. 4); however, the initial pulse width necessary for switching decreases by approximately 10 ps. The decrease can be due to the fact that the phase diagram is averaged over only 4 realisations, hence a small change like this of the pulse width can still exhibit an effect of the thermal fluctuations. To investigate how much heating occurs during magnetisation switching, in Fig. S3 we show the increase in the spin temperature when applying a magnetic field of −1 T. We observe that the increase in the spin temperature is of about 0.5 mK, the same order of magnitude as the increase in temperature when switching occurs via THz excitation in the case of the Harmonic Potential. Since the spin system is coupled to a thermostat in this case (so that it allows for fast switching dynamics), after the switching the system goes to the temperature of the thermostat of 1 K.
During the application of the THz pulse, the lattice displacements are relatively low (less than 5-7%, depending on the frequency of the THz pulse). Fig. S4 shows the evolution of the displacements. For the Harmonic potential, excitation around the P point in the Brillouin zone (obelisk symbol in Fig. 2) leads to a periodic motion of the atoms in the x direction (with no displacements on the y and z coordinates). This gives rise to the FFT peak observed in Fig. 2, panel d). For the Morse potential (Fig. S4, panel b), although initially the displacements are in the x direction, with the frequency corresponding to the one given by the THz excitation, this mode rapidly decays into phonon modes along the P-Γ path, as shown by the FFT of the x displacement in Fig. 2, panel c. The decay of the phonon modes will lead to a coupling field that strongly fluctuates in time, leading to a scattered switching phase diagram.
Figure S4. Temporal evolution of the displacements of the atom at position (0,0,0), normalised to the lattice constant, for a Harmonic (panel a) and Morse potential (panel b). The THz frequencies are 8.3 THz for the Harmonic potential and 8.6 THz for the Morse potential. The THz pulse is applied continuously during the simulation.
The perpendicular xyz components of the coupling field in the case of Morse potential (switching and non-switching events presented in panels c, d in Fig. 3) are shown below. For the non-switching case -panel b in Fig.S5 we observe that the coupling field fluctuates around zero. This suggests that the coupling field is rotating, and hence is not sufficient to trigger switching. Panel a, that corresponds to the switching case, still presents fluctuations, but after 50ps, when switching is triggered, the in-plane fluctuations are around a finite field value. To show that the developing of the coupling field is solely responsible for the precessional switching, we have imposed on our system a variation of the x displacement with a frequency of 8.4THz and phase and amplitude as in the numerical simulation. This simple harmonic motion on the x direction leads to precessional switching, as observed in Fig.S6, which is triggered by the magneto-elastic field.
Collision Avoidance Considering Iterative Bézier Based Approach for Steep Slope Terrains
Agri-food production requires more efficient and autonomous processes, and robotics will play a significant role in this transition. Deploying agricultural robots on the farm is still a challenging task, particularly in steep slope terrains, where it is crucial to avoid obstacles and dangerous steep slope zones. Path planning solutions may fail under several circumstances, such as the appearance of a new obstacle. This work proposes a novel open-source solution called AgRobPP-CA to autonomously perform obstacle avoidance during robot navigation. AgRobPP-CA works in real-time for local obstacle avoidance, allowing small deviations to avoid unexpected obstacles or dangerous steep slope zones, which could cause the robot to fall. Our results demonstrate that AgRobPP-CA is capable of avoiding obstacles and high slopes in different vineyard scenarios, with low computational requirements. For example, in the last trial, AgRobPP-CA avoided a steep ramp that could have caused the robot to fall.
I. INTRODUCTION
The sector of agriculture is crucial for the global economy. With the continuous growth of the world's population [1], and the decrease in the availability of human resources for agricultural labour [2], the agricultural sector requires a sustainable increase in productivity. The strategic European research agenda for robotics [3] states that robots will have an important role in this task with the automatisation and optimisation of agricultural tasks. A recent literature review on the subject of agricultural robots [4] found that robots have been most explored for harvesting and weeding. The study infers that optimisation and further development is vital. However, the deployment of robots in agriculture is still a challenge, particularly in the context of steep slope vineyards (placed in the Douro Demarcated Region (Portugal), a UNESCO Heritage site). Localisation and path planning, which are two crucial tasks to achieve fully autonomous robot navigation, present several challenges in these scenarios: Global Navigation Satellite Systems (GNSS) are frequently blocked by the hills, providing unstable positioning estimations, and the irregular steep slope terrain presents a challenge for path planning algorithms. To tackle some of these issues, we have proposed VineSlam [5] and the Agricultural Robotics Path Planning framework (AgRobPP) [6]. AgRobPP performs path planning operations considering the robot centre of mass and the terrain slopes, but requires a complete previous representation of the terrain (map). Such a map could be obtained with a Simultaneous Localisation and Mapping (SLAM) process, such as VineSlam, or by resorting to high-resolution aerial images, as proposed in a previous work [7]. VineSlam is able to provide a fully detailed map, but the construction process in extensive agricultural fields would represent a time-consuming or impractical procedure. It becomes easier to extract a map of the entire terrain with satellite images, but the result will never have the same level of detail. Even if a previous high-resolution representation of the terrain were available, unexpected events would occur, such as the appearance of new obstacles or changes in the terrain. AgRobPP is able to change the original trajectory with the appearance of new obstacles. Nevertheless, the framework always relies on the previous map information to find an alternative path. To tackle the problem, this work proposes AgRobPP-CA, an independent collision-avoidance system that relies on real-time local observations (obstacles and elevation) and uses parametric Bézier curves to draw an alternative trajectory. This tool will be used with AgRobPP, and it will proceed to make corrections in the trajectory generated by the path planner whenever necessary. AgRobPP-CA takes into account new obstacles and dangerous inclinations that may not be visible in the map used to generate the initial path. In this paper, Section II presents the related work of collision avoidance strategies used in agricultural contexts, Section III briefly describes the path planning algorithm AgRobPP, and Section IV introduces the collision avoidance tool AgRobPP-CA.
Section V reveals the tests and results of AgRobPP-CA under several simulations, and Section VI presents the conclusions of this work.
II. RELATED WORK
The concept of robotic path planning is recurrent in the literature [8]- [10], and consists of finding a collision-free path between an initial and a final point, optimising time, distance, and energy parameters. Path planning may be based on different concepts, such as potential field [11], samplingbased methods [12] and cell decomposition [13]. Independently, path planning algorithms can be divided into two categories: off-line (global) or online (local). Off-line path planners require a previous full environment representation with all the obstacles information, while, in the second category, the map can be constructed/updated during the navigation [8]. A literature review under the topic of robotic path planning in agriculture [14] reveals that most approaches have been resorting to off-line path planning solutions, where it is assumed a full knowledge of the surrounding environment. To make such an assumption may be dangerous for autonomous robot navigation in harsh agricultural terrains, so online methods are preferable. Zhang et al. [15] proposed an improved Dynamic Window Approach (DWA) to guarantee an optimal collision avoidance system of a global path planning system, transforming an off-line path planning approach into an online method. In the agrarian context, in 2011, Pingzeng et al. [16] presented an obstacle avoidance system based on multi-sensor information, such as ultrasonic, light detection and ranging (LiDAR) and camera. The retrieved data is used to detect the obstacles and their posture. Combined with a fuzzy control algorithm, the agricultural robot navigates automatically in a non-structural environment.
More recently, the work of Juman et al. [17] approaches the collision avoidance problem with a D* algorithm to navigate autonomously through an oil palm plantation. The tree detection was performed using the Viola and Jones framework with a Kinect camera. The planner did not consider all of the robot's constraints, which led to the incapability of following the generated path in some scenarios.
Fleckenstein et al. [18] approached this topic using a robot with adjustable wheel position. Using a search guidance heuristic, the authors efficiently handled the high number of degrees of freedom in a complex environment with many obstacles.
Ohi et al. [19] resorted to a Dijkstra search algorithm in Voronoi graphs to assure precision pollination with a mobile robot inside a greenhouse. The authors choose a DWA based approach to avoid local obstacles.
A velocity control strategy for collision avoidance was proposed by Xue et al. [20]. The authors performed collision prediction in dynamic environments with an improved obstacle space-time grid map and used a velocity generator for collision avoidance with a cloud model.
An online path planning algorithm was proposed for automated tractor steering control, following points localised on the ground and using structures provided by the environment for orientation [21]. Saeed et al. [22] presented a solution for long-term path planning tasks in agriculture. Their study is composed of a method to generate a path to visit all the selected places on a farm, combined with an online path optimisation scheme for re-planning and scheduling the new path at run-time.
In the context of vineyards, an automatic path planning method was studied to perform agricultural tasks in the hilly vineyards of Italy. The maps are generated with Unmanned Aerial Vehicle (UAV) images, and an A* algorithm is used to plan a path between an initial and a goal point. However, the authors did not address the collision-avoidance in the run-time problem [23]. In a similar context, also in the hilly vineyards of Italy, Mammarella et al. [24] describes cooperation between aerial and ground robots, exploiting the DWA based on a receding-horizon scheme to perform local path planning.
Another approach presented a local motion planner for vineyards using an RGB-D camera-based algorithm to generate a proportional control of the robot. A deep learning backup algorithm resilient to illumination variations takes over in case of failure of the RGB-D system [25]. Ravankar et al. [26] suggest an algorithm that relies on LiDAR data and can avoid dynamic obstacles in a flat vineyard while smoothing the robot's trajectory.
The works presented in this short literature review do not address the problem of collision avoidance in steep slope vineyards considering the robot's centre of mass and, to the best of our knowledge, such an approach does not exist. Our previous work, AgRobPP [6], is a path planning tool aware of the robot centre of mass, based on the cell decomposition method with an A* search algorithm. It considers a complete knowledge of the environment obstacles and the terrain elevation model, with a slight adaptation to change the trajectory with the appearance of new obstacles. AgRobPP may be classified as an online path planning method. Nonetheless, it does not consider changes in the elevation model of the terrain. So, this paper studies a way to perform real-time collision avoidance based on the robot's sensors local observations, considering the robot's centre of mass and the real-time perception of the terrain slant.
III. AgRobPP -AGRICULTURAL PATH PLANNING
AgRobPP is an open-source path planning framework built on ROS (Robot Operating System), described in detail in previous works [6], [7], and available at [27]. The framework is composed of three separate tools: • AgRob Vineyards Detector - a satellite image segmentation tool [7].
This paper focuses on the AgRob Path Planning tool and intends to improve its capacity for collision avoidance in real time. The AgRob Path Planning tool consists of an A* algorithm for robot navigation in steep slope vineyards characterised by uneven terrain. As Fig. 1 depicts, this algorithm subscribes to an Occupancy Grid Map, the Altitude/elevation Map, the robot's localisation (or starting point) and a goal point. With this data, AgRobPP generates a safe path, considering the robot's centre of mass and its dimensions, so that the robot does not suffer a large vertical deviation and, consequently, a severe fall. The output provides a safe path described by a set of discrete way-points or by a set of parametric curves. AgRobPP also has a point cloud translator algorithm called PC2GD (Point Cloud to Grid Map and DEM), which extracts the Occupation Grid Map and Digital Elevation Model (DEM) from a point cloud.
IV. AgRobPP-CA -COLLISION AVOIDANCE
The diagram in Fig. 1 mentions a Local Safety Re-Planner used in AgRobPP for local obstacle avoidance. When a new obstacle is detected, it places an order to cancel the current trajectory. Then a goal point is generated to perform a new path planning operation considering the recent obstacle.
This approach has two main drawbacks: changes in the elevation map are not taken into consideration, and real-time performance is not achieved. The A* algorithm search processing time depends upon several factors (i.e. the size of the new obstacle), so it may not be feasible for a real-time collision avoidance task. Moreover, the obstacle expansion process (described in [6], [13]) cannot be performed instantaneously, as it is executed in the 16 layers of the occupation grid map to consider the different possible headings of the robot.
AgRobPP-CA is an independent framework that uses local observations from the robot's sensors to detect obstacles and dangerous slopes. This tool is also open-source and is available in [27]. The goal is to perform slight deviations from the original trajectory to reach the destination point in safety. Fig. 2 shows that AgRobPP-CA has three inputs: the LiDAR observations, the parametric curve that the robot is following, and the robot's localisation. The parametric curve may be provided by AgRobPP, but it is not mandatory. Currently, AgRobPP-CA is prepared to work with four-wheel differential robots.
AgRobPP-CA is composed of the following steps, which are detailed in the sections below:
• Definition of the local safety zone;
• Prior obstacle detection;
• Construction of the local elevation map;
• Detection of dangerous slopes;
• Alternative path for collision avoidance.
The results presented either here, in the step-by-step demonstration, or in Section V, were obtained with a Gazebo simulation. The robot has a four-wheel differential configuration and represents a simulated version of AgRobv16 [28], shown in Fig. 3, equipped with a VLP-16 Puck LiDAR. The robot navigates autonomously using trajectory controller software based on parametric curves, previously developed by Pinto et al. [29]. In this section, the environment is a simulation of a steep slope vineyard created with modelling software, Fig. 4 [30].
The framework contains different coordinate frames, namely the Map frame M , the Sensor frame S, the Base frame B, and the Footprint frame F. The frames location is illustrated in Fig. 5.
The map frame M is fixed in the environment, assuming that the real terrain is stationary relative to this frame. The base frame B is attached to the centre of the robot's base and is related to the map frame M through a translation τ_mb and a rotation ω_mb. This transformation is provided by the localisation system, which, for this particular scenario, is given by Gazebo's ground truth. The remaining relations between frames are static and expressed in equation (1). The frames F and S are related to the frame B without any rotation and with the respective static translations τ_bs and τ_bf.
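As an illustration of the frame relations just described, the snippet below transforms a LiDAR point from the sensor frame S to the map frame M by chaining the static sensor-to-base translation with the base-to-map rotation and translation provided by the localisation system. Variable names, numerical values and the use of plain NumPy (rather than the ROS tf2 utilities a node would normally rely on) are illustrative choices.

# Illustrative transformation of a LiDAR point from the sensor frame S to the map frame M:
# p_B = p_S + tau_bs (static translation only between S and B),
# p_M = R_mb @ p_B + tau_mb (rotation and translation provided by the localisation system).
# Plain NumPy is used here instead of the ROS tf2 utilities a node would normally rely on.
import numpy as np

def sensor_to_map(p_S, tau_bs, R_mb, tau_mb):
    p_B = p_S + tau_bs            # sensor frame S -> base frame B (static)
    return R_mb @ p_B + tau_mb    # base frame B -> map frame M (from localisation)

# Example with made-up values
p_S = np.array([2.0, 0.5, -0.3])                  # LiDAR return in the sensor frame
tau_bs = np.array([0.2, 0.0, 0.6])                # hypothetical LiDAR mounting offset
yaw = np.deg2rad(30.0)                            # hypothetical robot heading
R_mb = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                 [np.sin(yaw),  np.cos(yaw), 0.0],
                 [0.0,          0.0,         1.0]])
tau_mb = np.array([10.0, 5.0, 1.2])               # hypothetical robot position in the map
print(sensor_to_map(p_S, tau_bs, R_mb, tau_mb))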
A. SAFETY REGION DEFINITION
The initial step is to define the region around the robot that will be considered for collision avoidance purposes. At this stage, characteristics of the robot must be provided as well, such as the robot's whole dimension, footprint and centre of mass. The region around the robot is rectangular, and the measurements are provided in meters. Given this data, AgRobPP-CA defines a local grid with a previously provided resolution. The example presented in Fig. 6 shows the LiDAR observations in a square safety region of 100 m² around the robot frame B, represented in a grid with a 5 cm/pixel resolution.
B. PRIOR OBSTACLE DETECTION
The initial obstacle detection is simply based on the local observation of the LiDAR. So, the LiDAR measurements close to the robot that could lead to a collision are marked as obstacles. Considering the referential centred in the robot's base, frame B, every z-axis measurement between zero and the robot's height is viewed as an obstacle, as long as the planar Euclidean distance to the robot is less than 3 meters. Fig. 7 shows the simulation of the robot with a vineyard and an obstacle and its representation in the safety region. The LiDAR observations marked as potential obstacles are identified with the yellow colour. In this figure, the safety map classifies the ramp access to the vineyard as a potentially dangerous zone. With the construction of the local elevation map, in the posterior stage, all obstacles are analysed to verify if they actually represent a collision or fall danger.
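The initial obstacle test can be sketched as a simple filter over the LiDAR returns expressed in the base frame B, following the two conditions above (height between zero and the robot's height, planar distance below 3 m). The robot height and the way cells are marked are illustrative assumptions; the grid dimensions merely echo the example of Fig. 6.

# Sketch of the prior obstacle detection: LiDAR points (in the base frame B) are marked
# as potential obstacles when 0 < z < robot_height and the planar distance is below 3 m.
# The safety grid parameters echo the example above (10 m x 10 m, 0.05 m/cell); the
# robot height used here is a placeholder value.
import numpy as np

def mark_potential_obstacles(points_B, robot_height=1.0, max_dist=3.0,
                             grid_size=10.0, resolution=0.05):
    n_cells = int(grid_size / resolution)
    grid = np.zeros((n_cells, n_cells), dtype=bool)   # robot at the grid centre
    for x, y, z in points_B:
        if 0.0 < z < robot_height and np.hypot(x, y) < max_dist:
            col = int(x / resolution) + n_cells // 2
            row = int(y / resolution) + n_cells // 2
            if 0 <= row < n_cells and 0 <= col < n_cells:
                grid[row, col] = True
    return grid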
C. LOCAL ELEVATION MAP CONSTRUCTION
The elevation map is constructed with continuous observations of the LiDAR. This part requires a localisation system, which is often a requirement for any global path planning system. The elevation map is saved into a grid centred in the robot frame B with the size and resolution of the local safety region, with the z-axis information in the map frame M. The pseudo-code of this process is presented in Algorithm 1. In this pseudo-code, L represents a data structure in a list format, and G is the matrix where the elevation map is saved. Algorithm 1 is activated when the robot moves, running in a loop at a pre-defined rate in ROS. We must ensure that the rate of this loop is sufficient to run the entire process at the maximum robot speed with a specific grid resolution. Considering the resolution of the grid elevation map map_res = 0.05 m, we should assure not only that the code is executed often enough to capture information every 0.05 m, but also that the code is not executed so fast that the information will be repeated in several iterations.
Considering a speed v_robot = 0.5 m/s, the ROS loop must have a period of at most map_res/v_robot seconds, that is, a rate of at least 10 Hz. Considering this rate of 10 Hz, the execution of the loop needs to occur in every iteration. At half of the speed, v_robot = 0.25 m/s, the algorithm only has to be executed every (map_res/v_robot) × Rate iterations, that is, every 2 iterations. Fig. 8 shows the result of a local elevation map constructed while the robot travelled through the trajectory marked with the orange line, at a speed of 0.25 m/s with a 0.05 m resolution. In this visualisation, the white colour represents the maximum altitude, while the black represents zero elevation.
Algorithm 2 Dangerous Slopes Detection Strategy
1: Discretize the input parametric curve C with eq. (2)
2: Create dC by transforming all points from frame M to B: ω_mb, τ_mb
3: For all points ∈ dC ahead of the robot:
4:   Collect altitude information of surrounding points for normal vector estimation
5:   Estimate roll and pitch
6:   Check safety of centre of mass [6]
     if point is not safe and is not obstacle: mark point as obstacle/danger
     else if point is safe and is obstacle: mark point as safe
D. DANGEROUS SLOPE DETECTION
During the autonomous robot navigation process, the remaining trajectory and its surroundings are constantly verified, both to detect dangerous slopes unknown to the robot and to detect false obstacles previously marked in the initial stage of the collision avoidance process.
The process of detecting a dangerous slope uses the information from the local elevation map to estimate the terrain inclination. It verifies the robot centre of mass projection to avoid a large vertical deviation. The method is detailed in a previous work [6] applied to a global map.
Algorithm 2 presents a brief description of the procedure. Here C represents the input parametric curve that the robot is following. dC is a set of points which contains the discrete representation of C, in the base frame B. The input parametric curve follows the format expressed in equation (2), where n represents the degree of the curve. We are considering a maximum of 4th-degree curves.
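A compact sketch of the slope test behind Algorithm 2 is given below: the local terrain gradient is estimated from neighbouring elevation cells, roll and pitch follow from it, and the cell is flagged when either angle exceeds a limit. The full centre-of-mass projection test of AgRobPP [6] is reduced here to a simple maximum-inclination threshold, and the threshold value itself is an arbitrary placeholder.

# Simplified sketch of the dangerous-slope test of Algorithm 2. The terrain gradient is
# estimated by finite differences on the local elevation map, roll and pitch are derived
# from it, and the cell is flagged when either angle exceeds a maximum safe inclination.
# The threshold and the reduction of the centre-of-mass test of [6] to a single angle
# limit are simplifying assumptions.
import numpy as np

def slope_is_safe(elevation, row, col, resolution=0.05, max_incline_deg=20.0):
    """elevation: 2D local elevation map (metres); (row, col): interior cell to test."""
    dz_dx = (elevation[row, col + 1] - elevation[row, col - 1]) / (2.0 * resolution)
    dz_dy = (elevation[row + 1, col] - elevation[row - 1, col]) / (2.0 * resolution)
    pitch = np.degrees(np.arctan(dz_dx))   # inclination along the x direction of the grid
    roll = np.degrees(np.arctan(dz_dy))    # inclination along the y direction of the grid
    return abs(pitch) < max_incline_deg and abs(roll) < max_incline_deg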
E. COLLISION AVOIDANCE METHOD
The problem is structured into a state machine, Fig. 9, with four states: WAITING_FOR_PATH, FOLLOW_ORIGINAL_PATH,
RECALCULATE_PATH and FOLLOW_ALTERNATIVE_PATH.
In the initial state, WAITING_FOR_PATH, the robot waits for the reception of a parametric curve and is stopped. Once the input is received, the robot starts following it, changing the state to FOLLOW_ORIGINAL_PATH. Now, either the robot completes the trajectory and returns to the initial state, or it finds an obstacle ahead and goes to the RECALCULATE_PATH state. Here, the framework tries to design an alternative safe path to contour the obstacle. If there is no success, the robot stops and returns to the initial state. When an alternative is found, the machine moves to the FOLLOW_ALTERNATIVE_PATH state. Once the new path gets travelled, the tool goes to FOLLOW_ORIGINAL_PATH, unless a different obstacle is detected. In this case, RECALCULATE_PATH is called again. At any moment, when the user or the robot's controller software sends a stop signal, the state machine is rebooted into its initial state. The collision avoidance procedure that occurs in the state RECALCULATE_PATH is performed with a heuristic approach using parametric Bézier curves (Equation (3)). The Bézier representation is helpful to define a curve based on several control points. Nevertheless, the robot's software controller reads the simpler parametric form, so the same curve is later converted to the format of Equation (2). Every alternative path is composed of a 4th-degree Bézier curve (3). Fig. 10 depicts the generation of an alternative path using a Bézier curve. The original trajectory is drawn in red. The control points P0 to P4, from Equation (3), are represented in the picture with green circles. The yellow circle denotes an obstacle, and the red point Pc stands for the collision point. Pm represents the middle point of the new path, and its location is crucial to avoid a possible collision. The resulting alternative trajectory is illustrated in green.
The definition of the control points of the new Bézier curve obeys the following procedure: • P 0 is placed at the current robot position, that is, at the origin of the base frame B.
• P c is the trajectory point intersected by the obstacle.
• Length 1 is the linear distance from P 0 to P c (along the trajectory).
• Length 2 = Length 1 and is the linear distance from P c to P 4 (along the trajectory).
• P 4 is placed on the original trajectory at P c + Length 2.
• P 1 is placed at a fixed distance (dist(P 0 , P 4 )/4) after P 0 , following the direction of the curve (6).
• P 3 is placed at the same fixed distance (dist(P 0 , P 4 )/4) before P 4 , following the direction of the curve (6). The control point P 2 is calculated from the point P m , which is determined iteratively. Given Equation (4), which defines the alternative parametric Bézier curve, we consider that the point P m lies in the middle of the trajectory, since Length 1 = Length 2.
With this assumption, at the point P m the variable t of equation (4) equals 0.5. Then, the control point P 2 is extracted from equation (5). P 0 and P 4 are already defined. P 1 and P 3 are defined in equation (6), where A P 0 and A P 4 are the angles of the trajectory heading at the respective points P 0 and P 4 . D 1 and D 3 represent the respective distances P 0 –P 1 and P 3 –P 4 .
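The exact form of equation (5) is not reproduced in this excerpt; assuming the standard Bernstein form of a 4th-degree Bézier curve, evaluating it at t = 0.5 gives the relation below, which is also the form used in the sketch that follows the next list (this explicit expression is our assumption, not a quotation of equation (5)).

```latex
B(0.5) = \tfrac{1}{16}\left(P_0 + 4P_1 + 6P_2 + 4P_3 + P_4\right) = P_m
\quad\Longrightarrow\quad
P_2 = \tfrac{1}{6}\left(16\,P_m - P_0 - 4P_1 - 4P_3 - P_4\right)
```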
1) Create a vector perpendicular to the original trajectory at the point P c with length P L (initially P L = robot's width).
2) Place P m at the extremity of the vector, and use this value to extract P 2 from equation (5).
3) Generate a new Bézier curve, and perform a safety verification against the previously created local grid safety region.
4) If the first curve is not valid, rotate the vector by 180° and repeat the process.
5) If both options are not valid, increment P L and repeat the whole process until one of the curves is valid, or the Euclidean distance between P m and P c becomes greater than 2 meters. We consider this situation to be the maximum deviation allowed.
If a valid trajectory is achieved, the Bézier curve is converted to the parametric format of equation (2), according to the formulation in (7).
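Under the same assumptions, the iterative search can be sketched as follows. The control points P 0 , P 1 , P 3 and P 4 are taken as already placed according to the rules above; is_safe stands for the lookup in the local grid safety region and perp for the unit vector perpendicular to the original trajectory at P c — both hypothetical helpers, not functions named in the paper.

```python
import numpy as np
from math import comb

def bezier4(ctrl, t):
    """Evaluate a 4th-degree Bézier curve at parameter values t (ctrl: 5x2 array)."""
    t = np.atleast_1d(t)
    basis = np.stack([comb(4, i) * (1 - t) ** (4 - i) * t ** i for i in range(5)], axis=1)
    return basis @ np.asarray(ctrl)

def p2_from_midpoint(p0, p1, p3, p4, pm):
    """Solve B(0.5) = pm for the middle control point of a quartic Bézier."""
    return (16 * np.asarray(pm) - p0 - 4 * np.asarray(p1) - 4 * np.asarray(p3) - p4) / 6.0

def find_alternative(p0, p1, p3, p4, pc, perp, is_safe, robot_width, max_dev=2.0, step=0.1):
    """Push P_m away from the collision point Pc along `perp` (trying both sides)
    until a safe curve is found or the deviation exceeds `max_dev` metres."""
    pl = robot_width
    while pl <= max_dev:
        for side in (+1, -1):                       # try one side, then the 180° rotation
            pm = np.asarray(pc) + side * pl * np.asarray(perp)
            ctrl = np.array([p0, p1, p2_from_midpoint(p0, p1, p3, p4, pm), p3, p4])
            if all(is_safe(pt) for pt in bezier4(ctrl, np.linspace(0, 1, 50))):
                return ctrl                         # valid alternative path (control points)
        pl += step
    return None                                     # no safe deviation within the limit
```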
Finally, the curve is ready to be sent to the robot. However, we need to recover the remainder of the original path. With the initial trajectory defined in the format of equation (2), it is known that the transition from the obstacle avoidance path to the original one is performed at the point P 4 , which corresponds to t = t P 4 , a known value. Then, the equation is transformed so that t ∈ [t P 4 , 1], which is achieved with the following change of variable: t = t P 4 + (1 − t P 4 )t. By applying this variable shift, the resulting equation (8) expresses the remainder of the initial trajectory from the point P 4 to the final point. Once equations (7) and (8) are sent to the trajectory controller, the robot is able to contour the obstacle and reach its destination.
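If the original curve is available as a callable, the variable shift above can be applied without recomputing coefficients, as in this small sketch (eval_curve is any function evaluating equation (2) at a parameter value):

```python
def remaining_path(eval_curve, t_p4):
    """Return a callable for equation (8): the original curve restricted to
    [t_p4, 1], traversed as the new parameter runs over [0, 1]."""
    return lambda t: eval_curve(t_p4 + (1.0 - t_p4) * t)
```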
V. TESTS AND RESULTS
The tests of AgRobPP-CA were performed with the simulated version of the robot AgRobv16 (Fig. 3) in different simulated environments, such as empty space, a straight flat vineyard, a curved vineyard and a simulation of an actual steep slope vineyard (Quinta do Seixo). In all trials, the trajectory is manually designed. In all the pictures portraying AgRobPP-CA results, the robot AgRobv16 is following either an orange or a red line. The orange line corresponds to the original trajectory, while the red line portrays the alternative path found by AgRobPP-CA. The pictures also show blue circles, which represent the beginning and end of the trajectory, and the LiDAR observations as several coloured points. The basic scenario, shown in Fig. 11, consists of open space with one and two simple obstacles. The application of AgRobPP-CA is shown in Fig. 12, where the LiDAR observation of the obstacles is visible as green points.
The second test scenario (Fig. 13) consists of a simulation of a simple vineyard with a human being in the middle of the row as an obstacle. The local observations of AgRobPP-CA are shown in Fig. 14 with the visualisation of the safety region points and the representation of the local elevation map. The resulting obstacle avoidance appears in Fig. 15. The larger yellow circle represents the estimated collision point. The third scenario is similar, with a curved vineyard depicted in Fig. 16. The result is expressed in Fig. 17. The last test scene (Fig. 18) represents a simulation of ''Quinta do Seixo'', a real steep slope vineyard located in the Douro region of Portugal (41.166775, −7.555080). The simulated scenario is depicted in Fig. 19.
The first test demonstrates a path placed on an uphill trail blocked by an obstacle. The Gazebo simulation is visible in Fig. 20. The visualisation of the trajectory and the obstacle avoidance from AgRobPP-CA is presented in Fig. 21. As the larger yellow circle represents the expected collision point, the figures indicate that AgRobPP-CA performed a small deviation in an initial phase. However, with the continuous information acquisition from the sensors, a new collision point is estimated, forcing AgRobPP-CA to perform a bigger deviation.
In the second trial, the path was purposely designed on top of a dangerous slope without any artificial obstacles. The robot AgRobv16 cannot perform this trajectory safely, as Fig. 22 demonstrates.
With AgRobPP-CA activated, a new, safe path is generated. The process is similar to that of Fig. 21: the robot starts with a small contour of the dangerous zone and increases the deviation as new information is acquired (Fig. 23).
The corresponding Gazebo simulation is visible in Fig. 24, with a video demonstration of these two situations available on Zenodo. 3 The plots below show the robot's velocities and poses over time while performing the dangerous trajectory from Fig. 22. Fig. 25 and 26 present the linear and angular velocity of the robot without and with AgRobPP-CA, respectively. The results show greater stability when AgRobPP-CA is activated (Fig. 26).
The robot's pose is plotted only along the Z-axis for better resolution, as this is the most relevant value to observe when the robot is trying to approach a ramp. These plots are visible in Fig. 27 and 28. It is visible that without AgRobPP-CA the robot cannot reach the maximum altitude, as it does not overcome the steep ramp.
The robot's orientation is represented by the respective Euler angles (roll, pitch and yaw) in radians. Fig. 29 and 30 plot these values, indicating that the robot is safer with AgRobPP-CA. Fig. 29 shows that the robot's pitch approaches 1 rad (∼57 degrees), which is dangerous and may cause the platform to fall. With AgRobPP-CA (Fig. 30), the worst registered case is a peak at approximately 0.5 rad (∼28 degrees).
VI. CONCLUSION
The current paper presents an approach for real-time collision avoidance in steep slope agricultural fields based on local observations. For this purpose, we have proposed AgRobPP-CA. This tool constantly verifies the robot's current trajectory, looking for predictable collisions or dangerously inclined zones, as AgRobPP-CA considers the robot's centre of mass. The local observations consist of LiDAR measurements for obstacle and harmful slope detection. Once a collision is detected, AgRobPP-CA uses an iterative approach to generate an alternative path with parametric Bézier curves. The method was tested in different simulated scenarios, including a straight vineyard, a curved vineyard, and an actual steep slope vineyard (''Quinta do Seixo''). The results demonstrated that the tool is capable of avoiding obstacles in all scenarios. In the tests performed with the steep slope vineyard, AgRobPP-CA generated an alternative path to avoid a dangerous slope. As future work, we will extend the tests to an actual vineyard. Other sensors, such as an OAK-D camera 4 with a deep learning model to detect people and animals, will be considered. A new LiDAR (Livox MID-70 5 ) will be pointed to the floor to improve the perception of the elevation map.
How Does Obesity Influence the Risk of Vertebral Fracture? Findings From the UK Biobank Participants
ABSTRACT Obesity and osteoporotic‐related fractures are two common public health problems, although it is unclear how obesity affects the risk of vertebral fractures. The purpose of this study was to examine the association between different measures of obesity and the risk of vertebral fracture, and to establish the various clinical factors that can predict such risk. We analyzed data obtained from 502,543 participants in the UK Biobank (229,138 men and 273,405 women), aged 40 to 69 years. Imaging information was available in a subset of this cohort (5189 participants: 2473 men and 2716 women). We further examined how BMD and geometry of the vertebrae were related to body fat measures. It was shown that a larger waist circumference (WC), but not BMI, was associated with an increase in fracture risk in men, but in women, neither BMI nor WC affected the risk. Trunk fat mass, visceral adipose tissue (VAT) mass, and limb fat mass were negatively associated with vertebral body BMD and geometry in men and women. BMD and geometry are related to vertebral strength, but may not be directly related to the risk of fractures, which is also influenced by other factors. The binary logistic regression equation established in this study may be useful to clinicians for the prediction of vertebral fracture risks, and may provide further information to supplement the fracture risk assessment tool, which assesses general fracture risks. © 2020 The Authors. JBMR Plus published by Wiley Periodicals, Inc. on behalf of American Society for Bone and Mineral Research.
Introduction
Obesity and osteoporosis are two very common public health problems. Obesity is sometimes thought to have a protective effect against osteoporotic fractures. (1) A higher body weight may impose larger mechanical loading on bone and consequently help improve bone health and reduce the risk of fracture. (2) However, recent studies show that when the mechanical loading effect of total body weight is accounted for, fat mass actually has a negative effect on bone health. (3,4) Recent epidemiological evidence also reveals that the relation between obesity and bone health may be site dependent. (5) Obesity has been shown to increase the risk of fractures at the ankle and upper leg in postmenopausal women, (5) but how it affects the risk in the vertebral column is still not clear.
Obesity is often believed to be beneficial to bone health because of the positive effect of mechanical loading conferred by body weight on bone formation. (6) However, adipose tissue may have negative effects on bone metabolism. (6) A number of previous studies have shown that fat mass was associated with a decrease of bone mass and bone quality at spine, (7)(8)(9) leading to lower vertebral bone strength. The interaction of the different effects of mechanical loading and adiposity is still unclear. (10) The underlying mechanism between obesity and bone health is likely to be complex, and may be different in men and women. Obesity in men is more characterized by central adiposity in comparison with women. Visceral adipose tissue (VAT) is particularly detrimental to bone health as it is associated with a number of hormones and cytokines that contribute to bone loss. (6) A number of studies have shown that obesity is more consistently associated with increased prevalence of vertebral fracture when obesity is assessed using visceral fat mass. (11)(12)(13) On the other hand, obesity has been found to be associated with increases in vertebral fracture risk in women, but not in men. (11,(13)(14)(15)(16) Waist circumference (WC) has been a reliable clinical parameter for predicting visceral fat, (17) whereas BMI has a stronger correlation with nonabdominal and abdominal subcutaneous fat. (17) The correlation between obesity and the risk of vertebral fracture is likely to be dependent on whether obesity is measured by BMI or WC. There is thus a need to clarify such correlation.
Vertebral body strength is related to its BMD and the geometry of the bone. (18)(19)(20)(21) Previous studies have examined how fat mass influences vertebral body BMD, which is only a "proxy measure" of the risk of fractures. (13,16,18) It would also be useful to examine how fat mass affects the geometry of the vertebrae. The smaller vertebral size in women has been suggested as one of the reasons for the higher prevalence of vertebral fractures in women. (22) However, there is no information about how obesity may affect vertebral body geometry. There is clearly a need to study such a relationship, as it would provide additional insights into how fat mass may affect bone strength and potentially the risk of vertebral fractures.
Although the risk of vertebral fracture is possibly related to mechanical loading and adiposity as discussed above, various other clinical factors will need to be considered to provide an accurate prediction of the risk of vertebral fractures. They may include history of prior fractures, age, gender, smoking, alcohol use, glucocorticoid use, rheumatoid arthritis, and secondary osteoporosis. The fracture risk assessment tool (FRAX) has been developed to evaluate osteoporotic fracture risk in untreated postmenopausal women and men aged >50 years, (23) although the algorithm is not specifically developed for vertebral fractures. Some previous studies attempted to predict vertebral fracture risk. (24,25) They showed that fracture risks are related to morphological factors such as vertebral size or kyphosis. But clinically this information may not be available for fracture prediction. Their sample sizes were also generally small with limited power. This study will further explore the prediction of vertebral fractures considering a range of clinical factors as used in FRAX.
The aim of this study was (i) to examine the association between obesity and the risk of vertebral fracture, and whether this association was influenced by the methods of measuring obesity, (ii) to predict the risks of vertebral fractures using various clinical factors, and (iii) to study how vertebral BMD and geometry, which are both related to vertebral strength, are associated with body fat measures.
Study design and sample
The UK Biobank is a health resource aiming to provide data for researchers around world to study the cause of a wide range of diseases such as cancer, cardiovascular diseases, diabetes, arthritis, osteoporosis, eye disorders, depression, and dementia (https://www.ukbiobank.ac.uk). The UK Biobank is based on a prospective cohort consisting of around 500,000 UK volunteer participants aged 40 to 69 years who were first recruited and assessed from 2006 to 2010. Subsets of this original cohort were then repeatedly assessed over several time periods. The current study was based on data sets collected from two time periods: 2006 to 2010 and 2014 to 2019. It was conducted in November 2016 after approval was obtained to access the data.
The full data set consisted of 502,543 participants (229,138 men and 273,405 women) aged 40 to 69 years who were assessed using a self-completion questionnaire and physical measurements from 2006 to 2010. The current study used this data set to examine the incidence of vertebral fractures in participants with different body weights. The data subset was a subset of this cohort, consisting of 5189 participants (2473 men and 2716 women) who were followed up in an imaging study (from 2014 to 2019) that provided DXA data of the body. This allowed us to further study BMD and the geometry of the vertebrae of the participants, as these data were not available for every participant in the full data set.
Clinical information from the full data set
Anthropometric measurements
Height (standing), weight, and WC were obtained for all participants.
Incidence of fractures
Each participant was asked to fill in a self-completion questionnaire in baseline assessment which included questions asking whether they had fractured/broken bones in the last 5 years and where the fractured bone sites were (eg, spine, hip, wrist, leg, ankle, arm, or others).
Other information
Categorical data, including smoking status (never, previous, or current smoker), daily alcohol consumption of three or more units (yes or no), history of rheumatoid arthritis (yes or no), secondary osteoporosis (yes or no), type 2 diabetes (yes or no), hormone-replacement therapy (yes or no), and menopause (yes or no) were obtained from a self-completion questionnaire.
Imaging information from the data subset
Vertebral body BMD and geometry
DXA images (GE-Lunar iDXA, Madison, WI, USA) were collected to obtain numerical measures of vertebral body size and areal BMD at the whole spine (C4 to L4) and the lumbar spine (L1 to L4) in the anteroposterior (AP) direction. The measures from the lumbar spine AP scan included L1 to L4 BMD, the L1 to L4 area (ie, the estimated projected area of L1 to L4 in the AP scan), L1 to L4 average height (ie, the vertebral height from the bottom of L4 to the top of L1), and L1 to L4 average width (ie, the average width of the four lumbar vertebrae L1 to L4). The measures from the whole-spine AP scan included the spine BMD and the spine bone area. The vertebral body BMD and geometry data were obtained from 5189 participants (men = 2473, women = 2716).
Body composition
Body composition data were also obtained from this data subset. The measures used in this study included trunk fat mass, VAT mass, and limb fat mass, which is the sum of leg fat mass and arm fat mass. These measurements were not normalized to body weight or height.
Data analysis
Participants (N = 502,543) were categorized into underweight, normal weight, and obese using BMI and WC. When BMI was used, both male and female participants were categorized according to the same criteria: underweight (BMI < 25 kg/m²), normal weight (25 kg/m² ≤ BMI < 30 kg/m²), and obese (BMI ≥30 kg/m²). When WC was used, male and female participants were categorized using different criteria. Women were categorized as underweight (WC <80 cm), normal weight (80 cm ≤ WC < 88 cm), and obese (WC ≥88 cm); men were categorized as underweight (WC <94 cm), normal weight (94 cm ≤ WC < 102 cm), and obese (WC ≥102 cm). (26) The association of the various categories of BMI and WC with incidence of vertebral fracture was examined in men and women using chi-square tests.
The relationship between vertebral fractures and various clinical risk factors was studied using full data sets, including age, gender, body weight, height, history of hip and other limb fractures (they were studied separately as the risks of fractures were site-dependent (14) ), smoking, alcohol consumption, rheumatoid arthritis, type 2 diabetes, and secondary osteoporosis. The significance of these relations was examined using chi-square tests for category data and logistic regression for continuous data. The ORs of each risk factor were determined.
Multivariate logistic regression was employed to predict the risks of vertebral fractures using the clinical factors identified above (enter method). However, only those statistically significant clinical factors were entered into the regression equation.
The imaging data subset provided information that allowed us to study fat mass, vertebral body BMD, and geometry, which was not available in the full data set. Linear regressions were employed to look at how BMD and geometry were related to trunk fat mass, VAT, and limb fat mass. Each of these fat mass measures was entered into regression analysis individually, while using age, weight, height, smoking status, hormone-replacement therapy (for women only), and menopause (for women only) as covariates. Linear regression analysis was conducted on men and women separately. Multicollinearity between independent variables was checked by a variance inflation test (VIF <10). SPSS 22.0 (IBM, Armonk, NY, USA) was used for all statistical analyses. Data from any participant with missing values were not included in the statistical analyses. The level of statistical significance was set at p < 0.05.
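Purely as an illustration of the analysis pipeline described above (the study itself used SPSS), the sketch below reproduces the sex-specific WC categorisation, the chi-square test of association and the multivariate logistic regression; the DataFrame, file and column names are hypothetical.

```python
import pandas as pd
import numpy as np
from scipy.stats import chi2_contingency
import statsmodels.api as sm

def wc_category(row):
    """Sex-specific WC categories from the text: 0 underweight, 1 normal, 2 obese."""
    cuts = (94, 102) if row["sex"] == "male" else (80, 88)
    return int(np.digitize(row["wc_cm"], cuts))

df = pd.read_csv("ukb_subset.csv")                 # hypothetical extract of the cohort
df["wc_group"] = df.apply(wc_category, axis=1)

# Chi-square test of association between WC category and incident vertebral fracture (0/1)
table = pd.crosstab(df["wc_group"], df["vertebral_fracture"])
chi2, p, dof, _ = chi2_contingency(table)

# Multivariate logistic regression on the significant clinical risk factors (complete cases)
predictors = ["age", "sex_male", "weight_kg", "height_cm",
              "prior_hip_fracture", "prior_limb_fracture",
              "smoker", "rheumatoid_arthritis", "secondary_osteoporosis"]
df = df.dropna(subset=predictors + ["vertebral_fracture"])
X = sm.add_constant(df[predictors].astype(float))
model = sm.Logit(df["vertebral_fracture"], X).fit()
print(model.summary())                             # odds ratios: np.exp(model.params)
```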
Results
Obesity and risk of vertebral fracture
Characteristics of the participants in the full data set and the imaging subset are provided in Tables 1 and 2, respectively. The ethnic background for the majority of participants was white (94.1% for baseline assessment and 96.9% for imaging study).
There were 479 vertebral fractures in 229,138 male participants and 645 vertebral fractures in 273,405 female participants in the previous 5 years, which resulted in an incidence rate of vertebral fracture of 4.2 per 10,000/year in men and 4.7 per 10,000/year in women.
Chi-square analysis was conducted on BMI data from 496,812 (226,945 men and 269,867 women) participants out of 502,543 participants and WC data from 500,383 (228,062 men and 272,321 women) participants out of 502,543 participants because of missing data. There was no significant association between BMI and incident vertebral fracture in men (χ² = 0.94, p = 0.625) or in women (χ² = 4.28, p = 0.118; Table 3). There was a significant association between WC and incident vertebral fracture in men (χ² = 8.51, p = 0.014), but not in women (χ² = 0.71, p = 0.701; Table 4). Obese men (WC ≥102 cm) had a higher vertebral fracture incidence (5.0 per 10,000/year) than normal weight men (3.7 per 10,000/year) and underweight men (3.8 per 10,000/year).
The ORs of the various clinical risk factors are given in Table 5. All these factors were entered into the logistic regression equation, with the exception of alcohol consumption and type 2 diabetes, which were not shown to be significantly related to vertebral fracture risks. The logistic regression model was found to be statistically significant (omnibus test, p = 0.000), and was therefore a good predictor of vertebral fractures.
Vertebral body BMD and geometry
Because of missing values, the multiple linear regression analysis was conducted on data from 4849 participants (2277 men and 2572 women).
Vertebral body BMD and geometry generally showed negative associations with VAT mass, trunk fat mass, and limb fat mass in both men and women (p < 0.05; Table 6). However, spine bone area appeared to show a positive association with VAT mass and trunk fat mass, while its association with limb fat mass remained negative (p < 0.01). The associations of limb fat mass with vertebral body BMD and geometry, compared with VAT mass and trunk fat mass, appear to be stronger, with larger correlation coefficients. It should also be noted that the associations between L1-to-L4 BMD and VAT mass were weak and not statistically significant in both men and women (p > 0.05).
Discussion
A strength of the present study is that it utilized data from a large cohort and attempted to answer the important clinical question of how obesity may affect the risk of vertebral fractures. BMI and WC are commonly used clinical measures to assess obesity, but only WC appears to influence the risk of vertebral fractures in men. Obese men with WC over 102 cm had a significantly higher vertebral fracture incidence compared with normal weight and underweight men. We also showed that trunk fat mass, VAT mass, and limb fat mass were negatively associated with vertebral body BMD and geometry, but the negative association was strongest for limb fat mass.
The current study provides important clinical information about how various clinical risk factors are related to and may predict the risk of vertebral fractures. These risk factors are in agreement with previous findings. (27) The binary regression equation derived in the present study may be used by clinicians to predict the risk of vertebral fractures, providing information additional to FRAX, which assesses the general risk of fractures. It is noteworthy that a previous history of hip fractures is the most significant predictive factor among all the variables in the equation. This finding is in agreement with those of previous studies that the risks of fractures of these two body regions are closely related. (28) The current study provides support for previous findings that obesity measured by BMI is not associated with vertebral fracture risk. (5,29) However, in previous studies, there were inconsistent observations about the effect of BMI on the risk of fractures. Some studies reported BMI was associated with increased risk, (9,15,16) whereas others found BMI was negatively correlated with the risk. (14) When obesity was measured by different measures, especially those related to central adiposity such as WC, trunk fat mass, and VAT mass, previous studies found that obesity was associated with an increased prevalence of vertebral fracture. (11)(12)(13) This is in line with our findings from this study. Therefore, our study, together with others, suggests that central adiposity may be an important risk factor for the risk of vertebral fracture. In addition, the binary regression equation revealed that the risk of vertebral fractures is higher in men than in women, and this is in general agreement with the observation reported previously. (13) However, previous studies reported that obesity only affects the risks in women, but not in men. (11,12,14,16) This is in contrast to our finding that WC affects men only. The effect of obesity in different genders is likely to be affected by how we measure or define obesity. Another explanation is that we looked at the risk of vertebral fractures, whereas the previous studies examined other anatomical sites.
Our findings are also in line with a previous study that found that lumbar spine BMD was negatively associated with trunk fat mass and limb fat mass, but not with abdominal fat mass. (30) However, some previous studies found that lumbar spine BMD was negatively correlated with VAT mass. (4,7,8,31) The different findings may be because of the different methods used in measuring vertebral body BMD and VAT mass. Although the data employed in the current study were based on DXA measurement, (30) CT was used in those studies where different results were found. (4,7,8,31) Although DXA is a valid method to estimate body composition, it may not be as accurate as CT when assessing abdominal fat. (32) There were few studies that examined the effect of fat mass on vertebral geometry, and their findings were inconsistent. One recent study found that whole-body fat mass was negatively associated with AP vertebral diameter of lumbar spine in both men and women aged 60 to 64 years, (33) whereas another study found that there was no association between total body fat mass and cross-sectional area of lumbar vertebrae in teenagers and young adults. (34) Our current study provided clear evidence that fat mass had a negative association with vertebral geometry in the lumbar spine and the whole spine.
The results from the imaging data subset showed that limb fat and trunk fat mass had a greater effect on vertebral body BMD and geometry than VAT fat mass, suggesting that visceral fat may have less influence on vertebral strength in comparison with other fat tissues. However, the results from the full data set showed that WC, which is related to visceral fat, is the only measure that is related to the risk of vertebral fracture in men. These two observations may appear to be in disagreement, but this clearly shows that BMD and the risk of fractures are not directly related to each other. Obese subjects have been found to have increased prevalence of vertebral fracture based on poor bone quality, despite normal BMD. (9) The risk of fractures is clearly not affected by BMD only, but also a range of clinical factors including smoking, alcohol use, glucocorticoid use, rheumatoid arthritis, and secondary osteoporosis. (23) Moreover, in obese patients, the accuracy of BMD measurement using DXA images has been shown to be adversely influenced by the thickness of VAT. (35) In summary, it is suggested that BMD does not adequately assess the risk of vertebral fracture, especially in obese subjects.
In this study, we observed weak associations between fat mass and vertebral body BMD and geometry in both men and women. This implies there are other factors that may also influence BMD and geometry. Biomechanical factors may play a role in the associations between obesity and vertebral fracture risk. Spinal loads depend on trunk mass and the distance from the trunk center-of-mass to the vertebrae, both of which were found to be significantly larger in obese subjects. (36,37) It has been shown that for the same body weight a larger WC, which is related to increased visceral fat mass, can significantly move the center-of-mass forward and increase the spinal loads. (38) It is possible that the increased spinal loads, together with the reduced BMD and smaller vertebral geometry associated with obesity, are responsible for the increased incidence of vertebral fractures.
The current study has some limitations. The incidence of vertebral fracture was obtained from self-report questionnaires, and there was no information about how the reported vertebral fractures were diagnosed. It is possible that not all vertebral fractures were reported in the questionnaires as vertebral fracture is generally underdiagnosed. (39) However, the incidence rate of vertebral fracture observed in the current study is comparable to a previous study that was based on medical records. (40) This previous study found that for a UK population of 5 million adults the incidence rate of vertebral fracture was 3.2 per 10,000/year for men and 5.6 per 10,000/year for women, whereas our study found that the incidence rate was 4.0 per 10,000/year for men and 4.7 per 10,000/year for women. Another limitation of the current study is that the logistic regression equation was derived from data obtained within a short period (between 2006 and 2010), and therefore does not represent the prospective risks as compared with FRAX, which provides a 10-year risk prediction. However, the model is the only one at the moment that can assess vertebral fracture risk, and may be used clinically in conjunction with FRAX. Finally, low serum vitamin D level in the obese may be an important factor that may contribute to bone fragility, (7) but we were unable to include this as a risk factor in our analysis because this information was not available from the UK Biobank.
(Table 6 footnote: Values are the standardized regression coefficients from linear regression models adjusted for age, weight, height, smoking status, hormone-replacement therapy (for women only), and menopause (for women only). *p < 0.05; **p < 0.01. VAT = visceral adipose tissue.)
The effects of single versus combined therapy using LIM-kinase 2 inhibitor and type 5 phosphodiesterase inhibitor on erectile function in a rat model of cavernous nerve injury-induced erectile dysfunction
We aimed to determine whether combination of LIM-kinase 2 inhibitor (LIMK2i) and phosphodiesterase type-5 inhibitor (PDE5i) could restore erectile function through suppressing cavernous fibrosis and improving cavernous apoptosis in a rat model of cavernous nerve crush injury (CNCI). Seventy 12-week-old Sprague–Dawley rats were equally distributed into five groups as follows: (1) sham surgery (Group S), (2) CNCI (Group I), (3) CNCI treated with daily intraperitoneal administration of 10.0 mg kg−1 LIMK2i (Group I + L), (4) daily oral administration of 20.0 mg kg−1 udenafil, PDE5i (Group I + U), and (5) combined administration of 10.0 mg kg−1 LIMK2i and 20.0 mg kg−1 udenafil (Group I + L + U). Rats in Groups I + L, I + U, and I + L + U were treated with respective regimens for 2 weeks after CNCI. At 2 weeks after surgery, erectile response was assessed using electrostimulation. Penile tissues were processed for histological studies and western blot. Group I showed lower intracavernous pressure (ICP)/mean arterial pressure (MAP), lower area under the curve (AUC)/MAP, decreased immunohistochemical staining for alpha-smooth muscle (SM) actin, higher apoptotic index, lower SM/collagen ratio, increased phospho-LIMK2-positive fibroblasts, decreased protein kinase B/endothelial nitric oxide synthase (Akt/eNOS) phosphorylation, increased LIMK2/cofilin phosphorylation, and increased protein expression of fibronectin, compared to Group S. In all three treatment groups, erectile responses, protein expression of fibronectin, and SM/collagen ratio were improved. Group I + L + U showed greater improvement in erectile response than Group I + L. SM content and apoptotic index in Groups I + U and I + L + U were improved compared to those in Group I. However, Group I + L did not show a significant improvement in SM content or apoptotic index. The number of phospho-LIMK2-positive fibroblasts was normalized in Groups I + L and I + L + U, but not in Group I + U. Akt/eNOS phosphorylation was improved in Groups I + U and I + L + U, but not in Group I + L. LIMK2/cofilin phosphorylation was improved in Groups I + L and I + L + U, but not in Group I + U. Our data indicate that combined treatment of LIMK2i and PDE5i immediate after CN injury could improve erectile function by improving cavernous apoptosis or eNOS phosphorylation and suppressing cavernous fibrosis. Rectification of Akt/eNOS and LIMK2/cofilin pathways appears to be involved in their improvement.
[…] pathway plays a role in cavernous fibrosis through coordination with transforming growth factor (TGF)-β1 after CN injury. [19][20][21][22] However, LIMK2 inhibition does not normalize erectile function or cavernous fibrosis completely, 21,22 suggesting that a combination therapy of LIMK2 inhibitor (LIMK2i) and other agents might be needed to restore erectile function in animal models of ED after CN injury. We therefore noted that treatment with PDE5i in a rat model of ED induced by CN injury alleviated ED through an anti-apoptotic effect or a positive effect on endothelial nitric oxide synthase (eNOS). 4,23 In this context, we hypothesized that erectile function in a rat model of ED after CN injury could be further improved by combined therapy of LIMK2i and PDE5i, with its known anti-apoptotic effect or positive effect on eNOS phosphorylation, compared to administration of LIMK2i alone or PDE5i alone. 4,23 Thus, the objective of this study was to determine whether combined administration of LIMK2i and PDE5i could further improve erectile function through an anti-fibrotic effect and improvement of cavernous apoptosis or a positive effect on eNOS.
Seventy 12-week-old Sprague-Dawley rats were equally divided into the following five groups (n = 14 per group): (1) sham surgery (Group S), rats were treated with daily intraperitoneal administration of saline vehicle and daily oral administration of saline vehicle, (2) bilateral CN crush injury (Group I), rats were treated with daily intraperitoneal administration of saline vehicle and daily oral administration of saline vehicle, (3) rats with bilateral CN crush injury were treated with daily intraperitoneal administration of 10.0 mg kg −1 LIMK2i (LX-7101, Cellagen Technology, San Diego, CA, USA) 21,22 and daily oral administration of saline vehicle (Group I + L), (4) rats with bilateral CN crush injury were treated with daily intraperitoneal administration of saline vehicle and daily oral administration of 20.0 mg kg −1 udenafil (PDE5i, Dong-A, Seoul, Korea) (Group I + U), and (5) rats with bilateral CN crush injury were treated with combined administration of 10.0 mg kg −1 LIMK2i 21,22 and 20.0 mg kg −1 udenafil (Group I + L + U). After anesthetizing rats with intraperitoneal injection of zoletil (10.0 mg kg −1 ; Vibac Laboratories, Carros, France) and isoflurane (Abbott Laboratories, North Chicago, IL, USA) inhalation, a lower abdominal midline incision and pelvic dissection were made by the same trained surgeon. For rats in Group S, bilateral CNs were dissected without any direct injury to the CNs. Crush injury was induced by mechanical compression of the bilateral CNs at a location 3-4 mm distal to the major pelvic ganglion using a microsurgical vascular clamp (Solco, Pyeongtaek, Korea). The microsurgical vascular clamp was held closed twice for 70 s each time. Treatment was started from the day after surgery. It was interrupted 2 days before in vivo assessment of erectile function (a 48-h washout period). Our animal studies were approved by the Institutional Animal Care and Use Committee of the Clinical Research Institute at the Seoul National University Hospital (Seoul, Korea), an Association for Assessment and Accreditation of Laboratory Animal Care (AAALAC)-accredited facility.
Care for these animals was conducted in accordance with the National Research Council guidelines for the care and use of laboratory animals.
Evaluation of in vivo erectile function
Erectile function was determined using a standardized model by electrical stimulation of the CNs at 2 weeks after surgery to generate erectile responses as described previously. 10,[20][21][22] After a lower midline incision, major pelvic ganglions and the CNs were isolated. Then, a platinum bipolar electrode (Grass Instrument Company, Quincy, MA, USA) was placed around the CN distal to the site of nerve injury. Stimulation parameters were as follows: 1.0 V, 2.5 V, and 4.0 V at 16 Hz with a square wave duration of 0.3 ms for 30 s. Erectile responses were expressed as intracavernous pressure (ICP) and the area under the curve (AUC) during the entire erectile response normalized with mean arterial pressure (MAP). In vivo assessment of erectile function was performed for eight rats in each group. Total penectomy was performed for the remaining six rats in each group for western blot and histological analyses. The middle part of the skin-denuded penile shaft undamaged by a needle was harvested, fixed in 10% formaldehyde solution (Sigma-Aldrich, St. Louis, MO, USA) overnight, and embedded in paraffin (Sigma-Aldrich) for histological staining. The remaining cavernous tissues were stored at −80°C until processing.
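As an illustration of how these normalised response measures can be derived from a recorded pressure trace (this is not the authors' acquisition or analysis software), a minimal sketch:

```python
import numpy as np

def erectile_response(icp_trace, map_trace, dt):
    """Summarise one stimulation trial: peak ICP/MAP and AUC/MAP.
    icp_trace and map_trace are sampled pressures (mmHg); dt is the
    sampling interval in seconds. Illustrative post-processing only."""
    mean_map = np.mean(map_trace)
    icp_map = np.max(icp_trace) / mean_map
    auc_map = np.trapz(icp_trace, dx=dt) / mean_map   # area under the ICP curve, normalised by MAP
    return icp_map, auc_map
```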
Histological examinations
To determine the ratio of SM/collagen fibril in the cavernous tissue, Masson's trichrome staining (Abcam, Cambridge, UK) was performed as described previously. 19,20,22 For each stained slide, areas of SM (stained in red) and collagen fibril (stained in blue) were analyzed. To determine SM content in the cavernous tissue, immunohistochemical staining was carried out using a primary antibody against α-SM actin (α-SMA) (1:100, Dako, Glostrup, Denmark) as described previously. 10,19,20 For histological studies, each stained slide was reviewed through quantitative analysis for the image of the penis comprising the corpora cavernosum half at ×40 using ImagePro Plus 4.5 software (Medica Cybernetics, Silver Spring, MD, USA). We analyzed stained slides of six rats from each group (two tissue sections per animal). An independent examiner reviewed these slides in a blinded fashion using the same standard.
Evaluation of cavernous apoptosis
To evaluate apoptosis in the cavernous tissue, we performed terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) assay using ApopTag® Red In Situ Apoptosis Detection Kit (Merck Millipore, Darmstadt, Germany) as described previously. 20 Cell nuclei were labeled with 4,6-diamidino-2-phenylindole (DAPI, stained in blue). Six rats from each group were evaluated, and two tissue sections per animal were analyzed. For each slide, four nonoverlapping zones were randomly selected at ×400 and examined using a confocal laser microscope (Leica TCS SP8; Leica Microsystems, Mannheim, Germany). The apoptotic index was presented as a ratio of apoptotic nuclei (stained in pink) to the total number of nuclei counted in the given area.
Double immunofluorescent microscopy
To determine the content of fibroblasts positive for phosphorylated LIMK2 in the cavernous tissue, we performed double immunofluorescent labeling for cavernous tissue with antibodies to vimentin and phosphorylated LIMK2 as described previously. 20,22 Paraffin-embedded tissue sections (2.5 µm) were incubated with primary antibodies to vimentin (a fibroblast marker, 1:100, Dako) and phospho-LIMK2 (phospho T505, 1:50; Abcam). After several washings with phosphatebuffered saline, sections were incubated with two secondary antibodies (anti-mouse IgG 488 and goats' anti-rabbit IgG 594, Abcam) in 1% bovine serum albumin at room temperature for 1 h. We acquired images through confocal laser scanning using a confocal microscope. Nuclear staining was performed with DAPI. For each slide, four nonoverlapping zones were randomly selected at ×400. Among DAPI-positive (blue or purple color in merged or magnified images, respectively) cells in the cavernous tissue, the number of fibroblasts positive for phosphorylated LIMK2 (yellow color) was determined by an independent examiner who was blinded to group allocation.
Western blot assay
As described previously, 10,[19][20][21][22] western blot assay was carried out to determine protein expression levels. Briefly, after cavernous tissue samples were homogenized, equal amounts of protein (50 µg in each well) were electrophoresed on Mini Protean TGX gels (7.5% or 12.0%; Bio-Rad, Hercules, CA, USA). They were run at 200 V for 30 min and transferred to polyvinylidene fluoride (PVDF) membrane (Merck Millipore) at 100 V for 60 min. After adding primary antibodies, they were incubated overnight at 4°C. After adding secondary antibody, they were incubated for 60 min. Primary antibodies were as follows: anti-Akt […]
Statistical analyses
Results in bar graphs of figures are reported as a median (interquartile range). We analyzed statistical differences among groups using Mann-Whitney U test or Kruskal-Wallis test. P < 0.05 was considered statistically significant. All statistical tests were two-sided. SPSS version 20.0 (SPSS Inc., Chicago, IL, USA) was used for all data analyses.
Effect of combined administration of LIMK2i and udenafil on erectile response
Rats in Group I showed lower ICP/MAP and lower AUC/MAP after stimulation at 1.0 V, 2.5 V, and 4.0 V, compared to those in Group S (ICP/MAP: P = 0.001; AUC/MAP: P = 0.001; Figure 1). In all treatment groups, ICP/MAP and AUC/MAP after stimulation at 1.0 V, 2.5 V, and 4.0 V were improved compared to those in Group I (ICP/MAP: P < 0.05; AUC/MAP: P < 0.05). However, they did not recover to values observed in Group S (Figure 1). After stimulation at 4.0 V, rats in both I + U and I + L + U groups showed higher ICP/MAP and AUC/MAP compared to those in Group I + L (ICP/MAP: P = 0.018 in Group I + U and P = 0.001 in Group I + L + U; AUC/MAP: P = 0.022 in Group I + L + U). However, there were no differences in ICP/MAP or AUC/MAP among Groups I + L, I + U, and I + L + U after stimulation at 1.0 V. There were no statistically significant differences in ICP/MAP or AUC/MAP at any stimulation parameter between Groups I + U and I + L + U.
Effect of daily administration of udenafil alone, LIMK2i alone, or combined administration of the two drugs on cavernous apoptosis and eNOS phosphorylation
Rats in Group I showed decreased immunohistochemical staining of α-SMA (P < 0.001), higher apoptotic index (P < 0.001), decreased Akt and eNOS phosphorylation (P < 0.001 and P = 0.002, respectively), decreased protein expression of total Akt, total eNOS and total nNOS (P = 0.048, P = 0.033, and P < 0.001, respectively), and increased nNOS phosphorylation (P = 0.025) compared to those in Group S (Figure 2a, 2b and 3). Regarding immunohistochemical staining of α-SMA, SM contents in Groups I + U and I + L + U were improved compared to those in Group I (P < 0.001 in Group I + U and P < 0.001 in Group I + L + U; Figure 2a). However, Group I + L did not show improvement in the SM content compared to Group I. Results of TUNEL analysis showed that apoptotic indices in Groups I + U and I + L + U were restored to sham control level, whereas Group I + L did not show such improvement (Figure 2b). According to densitometry, protein expression of total Akt, Akt phosphorylation level, protein expression of total eNOS, and eNOS phosphorylation level in Groups I + U (all P < 0.05) and I + L + U (all P < 0.05) were improved compared to those in Group I. However, those in Group I + L did not show such improvement (Figure 3). Protein expression level of total nNOS or NOS phosphorylation level did not change in any of these three treatment groups.
Daily administration of udenafil alone, LIMK2i alone, or combined administration of the two drugs can improve cavernous fibrosis
Rats in Group I showed a lower SM/collagen ratio (P < 0.001), increased protein expression of fibronectin (P = 0.001), increased amounts of fibroblasts positive for phosphorylated LIMK2 (P < 0.001), and increased LIMK2 and cofilin phosphorylation (P = 0.001 and P < 0.001, respectively) compared to those in Group S (Figure 2c, 3 and 4). Regarding SM/collagen ratios and protein expression of fibronectin, all three treatment groups showed improvement compared to Group I (P < 0.001 and P = 0.002 in Group I + L; P = 0.021 and P = 0.024 in Group I + U; P < 0.001 and P = 0.004 in Group I + L + U, respectively). However, they did not recover to levels observed in Group S (Figure 2c). Both Groups I + L and I + L + U had greater improvement in SM/collagen ratio (P = 0.014 in Group I + L; P < 0.001 in Group I + L + U) and protein expression of fibronectin (P = 0.033 in Group I + L; P = 0.029 in Group I + L + U), compared to Group I + U. After double immunofluorescent staining of cavernous tissue with antibodies to phospho-LIMK2 and vimentin, the amounts of fibroblasts positive for phosphorylated LIMK2 in Groups I + L and I + L + U were normalized, but not in Group I + U (Figure 4). LIMK2/cofilin phosphorylation levels in Groups I + L and I + L + U, but not those in Group I + U, were restored to sham control levels (Figure 3).
DISCUSSION
To date, ED induced by CN injury during RP remains a difficult-to-treat disease due to a few significant pathophysiologic conditions including cavernous apoptosis and fibrosis, although some potential therapies have been tested in animal models of post-RP ED. 5,[7][8][9][10][15][16][17][18] With this background, the current study aimed to identify whether combined treatment of LIMK2i with PDE5i could further improve erectile function through both suppression of cavernous fibrosis and improvement of cavernous apoptosis in a rat model of ED induced by CN injury, compared to administration of LIMK2i alone or PDE5i alone. The present study has three main findings. First, combined administration of udenafil with LIMK2i for 2 weeks immediately after injury showed greater improvement in erectile function compared to administration of LIMK2i alone, although erectile function was not completely normalized. Meanwhile, there was no statistically significant difference in improvement of erectile function between the administration of udenafil alone and combined administration of the two drugs. Second, daily administration of udenafil alone or combined administration of udenafil with LIMK2i immediately postinjury restored cavernous apoptosis, protein expression of total eNOS, and eNOS phosphorylation to sham control levels. Third, all three treatments (LIMK2i alone, udenafil alone, and a combination of udenafil with LIMK2i) significantly alleviated cavernous fibrosis at 2 weeks after CN injury. However, the groups treated with LIMK2i alone or a combination of the two drugs showed a greater degree of improvement in cavernous fibrosis by normalizing the LIMK2/cofilin signaling pathway and the content of fibroblasts positive for phosphorylated LIMK2, compared to the group treated with udenafil alone.
The combined administration of udenafil with LIMK2i for 2 weeks immediately postinjury did not completely normalize erectile function. Furthermore, the combined administration of the two drugs did not show a significantly additive effect on improvement of erectile function at 2 weeks postinjury. There are some plausible explanations for this finding. First, during the acute phase (1 week to 2 weeks) after CN injury, recovery from cavernous apoptosis and endothelial dysfunction might play a more important role in the improvement of erectile function than recovery from cavernous fibrosis. This can be a potential explanation for our finding, given that apoptosis of cavernous SM or endothelium is well known to occur primarily during the early postinjury period (1 week to 2 weeks) while cavernous fibrosis progresses over time. 2,5,19,24,25 Thus, further time-course studies with long-term follow-up are needed to elucidate the relative roles of pathophysiologies (endothelial dysfunction, cavernous apoptosis, and fibrosis) in ED at the chronic phase after CN injury. Second, previous studies have shown that daily administration of PDE5is from the early post-injury period can reduce SM/collagen ratio, collagen deposition, and TGF-β1 protein expression, thus improving ED. 15,23 Because PDE5is have, to some degree, an anti-fibrotic effect in addition to their anti-apoptotic effect or positive effect on eNOS phosphorylation, 4,23 a significantly additive effect of combined treatment with udenafil and LIMK2i might not be observed for further improvement of erectile function during the acute phase after CN injury. Third, combined administration of LIMK2i with PDE5i did not improve protein expression of nNOS in cavernous tissue at 2 weeks after CN injury. Thus, this therapeutic strategy may not result in complete recovery of erectile function at the acute phase after CN injury.
(Figure 3 caption: Representative immunoblot images and bar graphs (median and interquartile range) showing the comparison in the protein expression of (a) phosphorylated Akt/total Akt and total Akt, (b) phosphorylated eNOS/total eNOS and total eNOS, (c) phosphorylated nNOS/total nNOS and total nNOS, (d) phosphorylated LIMK2/total LIMK2 and total LIMK2, (e) phosphorylated cofilin/total cofilin and total cofilin, and (f) fibronectin from the cavernous tissues among the five groups using densitometry. The data were normalized by β-actin expression and presented as fold changes over controls. Six rats in each group were evaluated. * P < 0.05, the indicated group compared to I group; and # P < 0.05, the indicated group compared to S group.)
As expected, our results confirmed that daily administration of udenafil alone or combination of LIMK2i with udenafil improved ED through alleviation of cavernous apoptosis, protein expression of total Akt or eNOS and dysregulated Akt/eNOS phosphorylation in a rat model of CN injury. Our finding is consistent with results of previous studies showing that daily administration of PDE5is can mend erectile function by rectifying cavernous apoptosis and eNOS phosphorylation in rat models of CN injury. 4,15,23 Interestingly, daily administration of LIMK2i alone or combination of LIMK2i with udenafil provided greater degree of improvement in cavernous fibrosis through normalizing amounts of fibroblasts positive for phosphorylated LIMK2 and LIMK2/cofilin phosphorylation compared to daily administration of udenafil alone. Treatment with udenafil alone did not have any significant impact on amounts of fibroblasts positive for phosphorylated LIMK2 or LIMK2/cofilin phosphorylation. This suggests that rectification of dysregulated TGF-β1-driven pathways other than the LIMK2/cofilin pathway might be involved in the moderate degree of improvement in cavernous fibrosis. However, further studies are needed to elucidate the precise molecular mechanism by which PDE5i improves cavernous fibrosis in ED induced by CN injury, although previous studies have suggested the restoration of the TGF-β1-driven pathway as a candidate mechanism. 4,23 There are several limitations in the present study. First, the results of the present study fell short of our expectation that the combination therapy of LIMK2i with PDE5i could normalize erectile function at acute phase after CN injury in a rat. Therefore, to render our therapeutic strategy meaningful, subsequent studies are needed to determine the therapeutic effect of the combination therapy at chronic phase after CN injury. Combined therapies of LIMK2i with antifibrotic effect, other anti-apoptotic agents (i.e., JNK inhibitor, sonic hedgehog, and dipyridamole) and agents for promoting angiogenesis/ neurogenesis (i.e., hepatic growth factor and basic fibroblast factor) need to be tested in the future studies. Second, the treatment with LIMK2i or PDE5i did not improve protein expression of total nNOS in cavernous tissue. Meanwhile, the phosphorylation (at Ser1417: positively regulatory site) of nNOS was increased in Group I as well as the three treatment groups after CN injury. This increase in the phosphorylation at Ser1417 of nNOS appears to function as a compensatory mechanism after CN injury. Given that a recent study showed a significant increase in phosphorylation on negatively regulatory site of nNOS after CN injury, 26 the dysregulated nNOS phosphorylation might contribute to incomplete recovery from ED induced by CN injury despite the combined treatment with LIMK2i and PDE5i. However, the status of phosphorylation on negatively regulatory site of nNOS was not evaluated in our study. Thus, in the future, the more detailed evaluation including assessment of phosphorylation status on both positively and negatively regulatory sites of nNOS in an animal model of ED induced by CN injury is needed to draw a solid conclusion regarding it. Furthermore, the effect of adding an agent for the restoration of both nNOS phosphorylation status and total nNOS protein expression to our combined regimens on the recovery of erectile function may need to be determined in animal models of ED induced by CN injury.
CONCLUSION
Our data indicate that combined treatment with LIMK2i and udenafil from the immediate postinjury period after CN injury can improve erectile function by suppressing cavernous apoptosis, improving eNOS phosphorylation and suppressing cavernous fibrosis. Rectification of Akt/eNOS and LIMK2/cofilin pathways appears to be involved in their improvement. However, such combined treatment did not completely normalize the erectile function at acute phase after CN injury, although the degree of improvement by the combined treatment was somewhat greater than that by treatment with LIMK2i alone. Furthermore, the combined treatment did not appear to provide statistically greater improvement in erectile function than treatment with udenafil during the acute phase after CN injury. Further studies are needed to determine therapeutic effects of the combined treatment on ED at a chronic phase after CN injury and determine whether other combined regimens, including other antiapoptotic agents, agents for promoting angiogenesis/neurogenesis and an agent for the restoration of both nNOS phosphorylation status and total nNOS protein expression, could restore erectile function.
AUTHOR CONTRIBUTIONS
MCC carried out substantial contributions to conception/design, animal experiments, data acquisition, data analysis, interpretation, drafting the manuscript, and statistical analysis. JL carried out substantial contributions to conception/design and data interpretation. JP carried out substantial contributions to conception/design and data interpretation. SO carried out substantial contributions to statistical analysis and critical revision of the manuscript for scientific and factual content. JSC helped animal experiments and data acquisition. HS carried out substantial contributions to conception/design and data interpretation. JSP carried out substantial contributions to conception/design and data interpretation. SWK carried out substantial contributions to conception/design, data interpretation, and critical revision of the manuscript for scientific and factual content, and helped to draft the manuscript. All authors read and approved the final manuscript.
Design and Implementation of Mine Information Management System Based on Wireless Network
With the increasing demand for mineral resources in China, how to manage mineral resources scientifically, efficiently and intelligently has become a practical problem faced by governments, enterprises and academia. As the most advanced GIS technology of recent years, Web GIS can collect, store, process, analyze and visualize heterogeneous, multi-source and massive spatial geographic data. Therefore, introducing Web GIS technology into mine information management and building an information system for massive, multi-source and heterogeneous mine information is of great significance for governments and enterprises to improve the management efficiency of mineral resources and to realize accurate and scientific planning and management of mineral resources.
Introduction
Ore blending is an approach to planning and managing ore quality. It improves the uniformity and stability of the quality of minerals and mineral products, makes full use of mineral resources, reduces the volatility of ore grades, raises the labour productivity of ore dressing, improves product quality and reduces production costs. There are two aspects to ore quality control. The first is the short-term quality plan of production in a mine, made to conform to the annual plan and the conditions of the stope; this plan is drawn up, organized and implemented daily, weekly and monthly. The second is process control in mine production, implemented by gradual control of all processes of exploitation and processing according to the output of ore and the characteristics of the different production processes. Thus, mineral quality control is a process of making an ore blending plan and ensuring comprehensive control in implementing this plan [1].
Today, ore blending management systems in developed countries have matured. In some mines, special measures adapted to local conditions have been taken to control ore quality. In particular, computer technologies are applied to control and manage all processes of mining, transportation, processing and storage. For example, in the Minntac Mine, USA, an orderly approach of loading by shovel and unloading from railway wagons has been adopted to balance ore quality. The ore quality control system dispatches trucks and railway cars by computer according to production and ore blending indicators, and monitors the condition of equipment by remote control. This system simultaneously maintains ore quality and keeps operating equipment in an optimal state. Computer networks for ore quality control have also been applied in other developed mines, such as the Newman iron ore and Paraburdoo mines in Australia, which keep on-site working information seamlessly connected with a control center. Production in the stope strictly follows the ore production instructions released by the quality control center.
Another example is the dispatch system of the Hibbing Taconite Company in America, in which the computer control center is connected to the dispatched vehicles. With the communication terminals of the dispatched vehicles and other information terminals, the operators have good command of each process in ore blending according to changes in ore quality [2]. The fundamental reason for the success of such ore blending management is a strict and meticulous quality control system together with scientific, advanced technical means to enforce it.
In most domestic mines, ore quality control systems have only recently adopted computer-related technologies to establish ore blending models, such as linear programming and 0-1 integer programming. These models support short-term quality production plans in mines; examples are the computer-aided optimum ore blending systems for the Baiyun iron ore, for low-grade pyrite ore in Yanfu and for the Zhujiabaobao iron ore. However, these systems cannot control and manage the entire ore production process in real time, and control of the entire ore production process is one of the most important aspects of maintaining stability in ore quality. Our study therefore proposes the use of GIS/GPS/GPRS technologies and a linear programming model. The proposed system can control ore quality in real time, automatically draw up a scientific and rational short-term ore blending plan, and dynamically track, dispatch and manage production equipment to implement ore blending in an open pit mine. This is of great importance for improving the level of ore quality control in the digital mines of our country.
GIS/GPS/GPRS
The Geographic Information System (GIS) is a kind of computer system for gathering, storing, managing, analyzing, displaying and applying geographic information. It is a general technology that can analyze and process enormous amounts of geographic data. GIS has been applied in many fields to establish all kinds of spatial databases and decision support systems and to answer formal spatial queries, perform spatial analyses and assist planning and decision-making. To date, GIS has gradually been applied in open pit mines [3].
The task of a dynamic management system of ore blending in open pits is to track, monitor and manage production equipment, which depends largely on spatial geographic information. Therefore GIS plays an important role in visual supervisory systems of ore blending, real-time dynamic management and assistance in decision analyses.
The Global Positioning System (GPS) is a satellite-based navigation system made up of a network of 24 satellites placed in orbit; its ground stations are managed by the U.S. Department of Defense [4]. With four or more satellites in view, a GPS receiver can determine the 3D position of a shovel (latitude, longitude and elevation) and track its movements. Once the position of the shovel has been determined, the GPS unit can calculate other information, such as speed, bearing, track, trip distance, sunrise and sunset time and more. GPS works in all weather conditions, anywhere in the world, 24 hours a day, and there are no subscription fees or establishment charges for using it. With GPS, the dynamic management system of ore blending obtains accurate position (latitude and longitude), speed, bearing, time, track and other basic information of trucks and shovels. In addition, the accurate positions of muck-pile boundaries and blasting holes can be calculated by GPS units, so the ore quality of these pile boundaries and blasting holes can be monitored on electronic maps.
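As a simple illustration of how speed and trip distance can be derived from successive GPS fixes, the following minimal sketch applies the haversine great-circle formula to two timestamped positions. The coordinates and the 10-second interval are invented for illustration; the actual terminal firmware is not described in the paper.

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 positions."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Two consecutive fixes from a shovel terminal, assumed 10 s apart
d = haversine_m(34.6200, 111.1300, 34.6203, 111.1304)
speed_kmh = d / 10.0 * 3.6
print(f"segment distance {d:.1f} m, speed {speed_kmh:.1f} km/h")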
The General Packet Radio Service (GPRS) is a realization of the GSM Phase 2+ standard and provides fast data transmission. It offers end-to-end, wide-area wireless IP connections and has remarkable advantages in many other areas. The dynamic management system of ore blending in open pit mines exploits several advantages of GPRS, such as its speed, its always-available connections and its charging by the amount of data transferred. It provides real-time wireless transmission of GPS position information without a dial-up modem connection. This is important because each GPS position message contains only a small amount of data but must be transmitted frequently. The system therefore makes good use of GPRS to transmit GPS position information as well as other information such as dispatching commands and ore grades.
Principle of the Dynamic Management System of Ore Blending
Structure of Ore Blending Management System
The dynamic management system of ore blending is composed of mobile terminals carried by vehicles, a communication network and a dispatch center, as shown in Figure 1. In this system the mobile terminals receive GPS signals and then calculate the latitude, longitude, angle, elevation and speed of the vehicles. The expansion interfaces of the mobile terminals can also connect to various monitoring and control lines to obtain information from the vehicles. Each kind of information is transmitted to the monitoring center through GPRS and the internet. GPRS, as the telecommunications network between the mobile terminals and the dispatch center, mainly transmits information on the position and condition of the vehicles, as well as alarm information, to the dispatch center, which in turn transmits dispatch and control commands to the vehicles. In the dispatch center, the communication server, the database server and the console are connected by a 100 Mbit/s local area network. The dispatch center, under control of the system software, receives and processes all kinds of information coming from the controlled vehicles. The position, ore grade and other information of the shovels are displayed on multimedia monitors and electronic maps in the dispatch center, from which the vehicles can be monitored and dispatched [6].
Modeling
We established a model for a number of shovels and crushing stations in an actual mine. Assume that the number of shovels is m, the number of crushing stations is n, the amount of ore from shovel i to crushing station j is x_ij, the ore grade supplied by shovel i is g_fi, and the ore grade needed by crushing station j is g_sj. The constraint conditions are as follows [7]:
1) Amount of ore supplied and production capacity of shovels. The amount of ore supplied by each shovel should be within its production capacity and meet its production requirement, where x_ij is the amount of ore supplied by shovel i to crushing station j, A_i is the maximum amount of ore shovel i can supply and Q_i is the production task of shovel i.
2) Minimum task of crushing stations. In the ore blending plan, each crushing station is assigned a minimum task; the minimum production task of crushing station j is denoted Q_jr.
3) Non-negativity constraints. The amount of ore supplied by a shovel to a crushing station cannot be negative.
4) Objective function. The target ore grade of the mine is g and its error range should be less than 5%. According to the actual production of the mine, the deviation of the actual ore grade from the target ore grade at every crushing station should be minimized by the objective function, where g_fi is the ore grade supplied by shovel i and Q_j is the task of crushing station j. According to the actual mine requirement, the model is further refined by adding a constraint that keeps this deviation within the allowed range.
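The explicit formulas did not survive extraction; a minimal reconstruction consistent with the variable definitions above (the published forms may differ in detail, and the objective shown here is only one plausible choice) is:

\[ Q_i \le \sum_{j=1}^{n} x_{ij} \le A_i \quad (i = 1, \dots, m), \qquad \sum_{i=1}^{m} x_{ij} \ge Q_{jr} \quad (j = 1, \dots, n), \qquad x_{ij} \ge 0, \]
\[ \min Z = \sum_{j=1}^{n} \left| \frac{1}{Q_j} \sum_{i=1}^{m} g_{fi}\, x_{ij} - g \right|, \qquad \left| \frac{1}{Q_j} \sum_{i=1}^{m} g_{fi}\, x_{ij} - g \right| \le 0.05\, g \quad (j = 1, \dots, n). \]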
Solution
The model is solved by a two-stage method. In the first stage, new non-negative artificial variables (x_{n+1}, x_{n+2}, ..., x_{n+h}) are added to the model. Their purpose is to ensure that an m-order identity submatrix is contained in the coefficient matrix A = (b_ij)_{m×(n+h)} (i = 1, 2, ..., m; j = 1, 2, ..., n, n+1, ..., n+h) of the newly composed initial simplex tableau.
In this first stage, the sum of all added artificial variables is minimized, i.e., the objective function is Z_1 = x_{n+1} + x_{n+2} + ... + x_{n+h}. If an optimal solution with Z_1 = 0 is obtained, all the added artificial variables are non-basic variables, while m of the original variables are basic variables. This leaves an m-order identity submatrix in the coefficient matrix when the columns corresponding to the artificial variables are deleted, and it is taken as the initial feasible basis B_0. We then turn to the second stage of the two-phase method to solve the problem. Otherwise, there is no feasible solution to the problem.
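In practice, the same blending model can be set up and solved with an off-the-shelf LP solver rather than a hand-coded two-phase simplex. The sketch below is a minimal illustration using SciPy's HiGHS backend; the shovel capacities, grades and station tasks are invented toy numbers (not the paper's Table 1 data), and the absolute grade deviation is linearized with one auxiliary variable per crushing station.

import numpy as np
from scipy.optimize import linprog

m, n = 3, 2                             # shovels, crushing stations (toy sizes)
g_f = np.array([0.10, 0.13, 0.15])      # ore grade supplied by each shovel (%)
A   = np.array([6000., 7000., 5000.])   # maximum ore each shovel can supply (t)
Q_s = np.array([3000., 3000., 2000.])   # production task of each shovel (t)
Q_r = np.array([5000., 6000.])          # minimum task of each crushing station (t)
g   = 0.12                              # target ore grade (%)

nx = m * n                              # decision vars x_ij, then one deviation d_j per station
c = np.concatenate([np.zeros(nx), np.ones(n)])   # minimize total grade deviation

def col(i, j):                          # column index of x_ij in the decision vector
    return i * n + j

A_ub, b_ub = [], []

# Shovel capacity (sum_j x_ij <= A_i) and shovel task (sum_j x_ij >= Q_s_i)
for i in range(m):
    row = np.zeros(nx + n)
    row[[col(i, j) for j in range(n)]] = 1.0
    A_ub.append(row);  b_ub.append(A[i])
    A_ub.append(-row); b_ub.append(-Q_s[i])

# Station minimum task: sum_i x_ij >= Q_r_j
for j in range(n):
    row = np.zeros(nx + n)
    row[[col(i, j) for i in range(m)]] = -1.0
    A_ub.append(row); b_ub.append(-Q_r[j])

# Grade deviation per station, kept linear via metal content: |sum_i (g_fi - g) x_ij| <= d_j
for j in range(n):
    for sign in (1.0, -1.0):
        row = np.zeros(nx + n)
        for i in range(m):
            row[col(i, j)] = sign * (g_f[i] - g)
        row[nx + j] = -1.0
        A_ub.append(row); b_ub.append(0.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None)] * (nx + n), method="highs")
print(res.x[:nx].reshape(m, n))         # tonnes from each shovel to each station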
Dynamic management system of ore blending in the Sandaozhuang open pit mine
The Sandaozhuang open pit mine is part of the Luoyang Luanchuan Molybdenum Industry Group Inc. Its output is 10 million tons. It is 2350 m long and 1350 m wide, and its mining elevation is between 1114 and 1630.8 m, so mining takes place over a vertical distance of 516.8 m with a bench height of 12 m. Rotary drilling machines, shovels and trucks are used in the mining process. The transportation system consists of a number of trucks, three crushing stations, an ore pass and electric locomotives. The dynamic management system of ore blending in this open pit consists mainly of terminals and ore blending management software; the software is composed of modular subsystems.
Dynamic Ore Blending Management Subsystem
The function of this subsystem is mainly to implement the production management of ore blending for shovels in the dispatch center, including automatic drawing up of an ore blending plan and real-time dynamic monitoring and control of the shovels' mining operations. Its main functions are as follows: 1) Map operation: zoom in, zoom out, roam and display layered maps; display the coordinates of an arbitrary point on the map; compute the distance between two arbitrary points; compute the area of an arbitrary polygon; obtain information about a geographic target, etc.
2) Management of muck-piles: import the coordinate data of the blast holes, the boundaries of muck-piles and the ore grades of the holes, so that the holes and boundaries can be shown on electronic maps; given the ore grades of the holes and other properties, arbitrarily shaped ore blocks can be selected on the map by circles or polygons, and their average ore grades and quantities are calculated automatically (a minimal sketch of this calculation is given after the subsystem descriptions below).
3) Drawing isolines of ore grade: on the basis of the muck-pile data, arbitrary grade values can be set and isolines of ore grade drawn in different colors on the map; the amount of ore in the different areas delimited by the isolines can also be calculated. 4) Display of shovels: display each shovel and its current ore grade in real time, using different symbols, colors and marks; hide shovels and ore grades on command. 5) Playback of historical paths and ore grades: play back the travel path of any shovel over any period and display the historical ore grade mined on the electronic map. 6) Location of shovels: query the current position, operating radius, condition and driver of any shovel at any moment. 7) Instruction dispatch: given the real-time ore grade of any shovel on the electronic map, the dispatch center can send dispatch instructions as text to the shovels and call any terminal; the terminal carried by the shovel indicates and displays the dispatch instructions with a red light and the phone. 8) Terminal information feedback: the terminal carried by the shovel can upload predefined messages from its operation panel as feedback to the dispatch center, which handles the information in a timely fashion. 9) Making an ore blending plan: the linear program draws up the daily ore blending plan according to the current ore grade, the loose-ore coefficient and ore lithology of the work zone, the production capacity of the shovels, and the task, capacity and target ore grade of the crushing stations.
10) Other functions: through the system's expansion interface, the subsystem can be integrated with the GPS truck monitoring system and the ore weighing system; the amount and grade of crushed ore can be queried at any time, which helps the dispatcher dynamically track the progress of the ore blending plan.
4.1.2 Data communication control server
The communication control server mainly gathers, transmits and routes data through TCP/IP and handles the communication protocol and the distribution of data. It is also in charge of handling traffic (monitoring, dispatch, etc.) and other data connections (localization data input, vehicle condition and status updates, etc.).
4.1.3 Database management system
The database management system mainly manages the database and adds, deletes, modifies and queries data on shovels, drivers and operators; it regularly backs up the data before purging it.
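As referenced in the muck-pile management function above, the average grade of a user-selected block reduces to a point-in-polygon test over the blast holes followed by a tonnage-weighted average. The sketch below illustrates this with invented hole data and an invented selection polygon; the actual system presumably performs the equivalent operation through MapX.

def point_in_polygon(x, y, poly):
    """Ray-casting test; poly is a list of (x, y) vertices."""
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

holes = [  # (easting, northing, grade %, tonnes) - illustrative values only
    (100.0, 200.0, 0.11, 450.0),
    (120.0, 210.0, 0.14, 500.0),
    (140.0, 190.0, 0.09, 480.0),
]
selection = [(90.0, 180.0), (150.0, 180.0), (150.0, 220.0), (90.0, 220.0)]

picked = [h for h in holes if point_in_polygon(h[0], h[1], selection)]
tonnes = sum(h[3] for h in picked)
avg_grade = sum(h[2] * h[3] for h in picked) / tonnes
print(f"selected {len(picked)} holes, {tonnes:.0f} t at {avg_grade:.3f} % average grade")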
System Deployment
The entire system deployment is as follows: 1) Terminal installation: high-precision GPS terminals were installed in 20 shovels in an earlier phase. Each terminal consists mainly of a high-performance mainframe, a GPS antenna, a GPRS antenna, a display, a red indicator light, earphones, a loudspeaker and a microphone. After installation, the terminals are commissioned, which mainly involves setting the mainframe parameters (terminal number, IP address, port, etc.) with the operating handset.
2) Software deployment: there are three sub-systems in the project. The data communication control system is deployed on the company's web server, the database management system on the company's database server, and the dynamic management system of ore blending in the dispatch center of the open pit.
3) Electronic map: after obtaining the electronic map in TAB format, the map is saved as a Geoset using MapX 5.0 (Fig. 2).
System Application
The dynamic management system of ore blending in the Sandaozhuang open pit mine has been in use since May 2008, and continuous on-site testing shows that the entire system works well. Typical applications are: 1) Making a daily ore blending plan: making the ore blending plan is essentially a human-computer interactive process, so if unusual conditions occur, dispatchers can flexibly adjust the ore blending parameters at any moment, which ensures the rationality and practicality of the plan. According to the requirements of the Luoyang Luanchuan Molybdenum Industry Group Inc., the ore grade should meet the established standard and is not permitted to deviate by more than 5%. Many benches are mined in the open pit and the ore grades vary widely between benches. There are 8-10 shovels working and three crushing stations in each shift (three shifts per day). The relevant parameters of the ore blending plan for a particular day are shown in Table 1, in which 10 shovels and three crushing stations are involved. The target ore grade should be between 0.114% and 0.126% and the amount of ore between 16.005 and 16.995 thousand tons. The resulting ore blending plan is presented in Table 2, which shows the optimal solution of the linear program for the actual production of the open pit.
Table 1. Key parameters of ore blending. Note: Gp is the grade provided by each shovel; Cs is the target amount that each shovel must provide to each crushing station. The production tasks of the crushing stations are 5000 t (1#), 8000 t (2#) and 3500 t (3#).
2) Dynamic tracking and control of ore blending production: in the dynamic management interface of ore blending, the ore grades and real-time locations where the shovels are working are monitored and dynamically displayed on the electronic map, so dispatchers can promptly dispatch the shovels according to their current ore grade. Through the ore weighing system and the truck dispatching system, dispatchers can track the amount of ore remaining in the current muck-piles and the actual loads of the shovels in real time, and dynamically adjust and dispatch the shovels and trucks according to the ore blending plan. In addition, the ore grade and amount of ore handled by each shovel are displayed on the shovel's terminal screen, which is very useful for the operators to know the current ore grade and actual workload. The state of the shovels is shown in Figure 3.
Table 2. Note: Tc (t) is the average load of each type of truck; the amount of ore supplied by a shovel to a crushing station is given in numbers of truck loads.
3) Application results: the ore quality in the Sandaozhuang open pit used to vary considerably due to factors such as ore blending based on the subjective experience of operators, cavities formed by earlier underground mining and nonstandard production in the stope. After the application of this system, ore blending was based on an overall consideration of various factors, such as ore reserves in the stope, the distribution of ore quality, the status of ore mining, the blasting situation, the status of the shovels and the production planning indicators. On the one hand, the system overcomes the shortcoming of making ore blending decisions only from incidental experience and places the ore blending plan on a scientific footing; on the other hand, it can monitor and control the ore production process and regulate production in the stope in an orderly fashion. The statistical data show that the deviation of the ore grade from the target decreased from 15.82% to 4.35%. The system thus guarantees the stability and uniformity of ore quality and provides a good foundation for flotation in ore dressing. Dynamic monitoring of the ore grade is shown in Figure 4.
Conclusions
1) The dynamic management system of ore blending in an open pit based on GIS/GPS/GPRS uses space, wireless positioning, wireless communication and computer technologies to control ore quality in an open pit mine and ensure the stability of the ore grade. It improves the productivity of the mine and considerably reduces the production costs of the enterprise.
2) By means of linear programming, the system moves away from irrational, subjective, experience-based ore blending planning. It reduces the effect of human factors in the production process and provides a scientific basis for ore blending in open pit mines.
3) The system is of great importance for the realization of automation and information technology in this open pit, and it can be applied to other open pit mines.
Effect of pH on the Leaching of Potentially Toxic Metals from Different Types of Used Cooking Pots
Humans are exposed to Potentially Toxic Metals (PTMs) through many routes. Cooking foods in cookware that is prone to material leaching can be an exposure route to PTMs. This study assessed the effect of pH on the leaching of some PTMs from used cooking pots into deionized water. A series of deionized waters was prepared from pH 3 to 7. Each water was brought to a boil in a clay, non-stick, stainless steel, cast aluminum, pressed aluminum and glass pot, respectively. The PTMs leached into each water sample were determined by Inductively Coupled Plasma-Optical Emission Spectrometry (ICP-OES) (Agilent Technologies 700 Series). The deionized water from the cast aluminum pot and the non-stick pot gave the highest concentrations of aluminum (2273 µg/L) and zinc (24.39 µg/L), respectively, the water from the clay pot gave the highest concentrations of chromium and nickel (7.27 and 22.63 µg/L), and the water from the stainless-steel pot gave the highest concentrations of iron (237 µg/L) and lead (24.39 µg/L). No PTM was found in the deionized water from the glass pot. The results showed that more leaching of PTMs into deionized water occurred at lower pH (pH 3 to 5) than at neutral pH for almost all the pots. Thus, cooking acidic foods in these pots, except when glass pots are used, should be avoided. The results of this study therefore reveal the health implications associated with using metal pots for cooking slightly acidic foods, as metals can easily be leached from the pots into the foods.
Introduction
Potentially toxic metals (PTMs) are significant environmental pollutants which may enter the human body by ingestion, inhalation and dermal contact. Ingested metals are hazardous to humans because they tend to bio-accumulate and cause harm to internal organs [1]. Their concentrations increase in biological systems over time because they are slowly metabolized or excreted and are therefore stored in the system [2]. Metals, unlike organic molecules, do not require bio-activation or enzymatic modification to produce reactive chemical species for the detoxification process [3,4] but use other mechanisms, such as long-term storage (for example of cadmium and iron) and biliary and/or urinary excretion. Though metal toxicity to a biological system depends on the amount ingested, exposure to certain metals, such as cadmium (Cd) and lead (Pb), may lead to severe toxic effects even at low concentrations [5,6].
There are several sources/routes of exposure to PTMs [7]. Leaching of PTMs into food and water from cookware during cooking presents an additional exposure. Foods are usually cooked or processed in pots to make them edible, and this process may leach metals into the food [8]. Different types of pots are currently available for cooking, varying from locally fabricated cast pots (commonly called ikoko irin) and clay pots to industrially produced aluminum pots, non-stick pots, stainless steel pots and glass Pyrex pots. The fabricated cast pots in Nigeria are produced from aluminum scrap (such as cans from food packaging and roofing sheets, among others) by melting and molding it in sand moulds [9], while the clay pots are molded from wet clay and fired to make them strong; they are not glazed when they are to be used for cooking [10].
Street et al. [11] in South Africa investigated the risk of metal exposure from the use of artisanal cookware produced from metal scrap and e-waste. Using XRF and inductively coupled plasma mass spectrometry, total and leached silver (Ag), arsenic (As), barium (Ba), Cd, cobalt (Co), chromium (Cr), copper (Cu), iron (Fe), mercury (Hg), manganese (Mn), molybdenum (Mo), nickel (Ni), Pb, antimony (Sb), selenium (Se), tin (Sn), vanadium (V), Al and zinc (Zn) were evaluated. The mean total Al was 509 mg/L, over 100 times the EU maximum permissible level allowed for cookware. Pb was detected in the leachates of the pot samples, with some concentrations higher than the EU maximum permissible level for Pb (10 µg/L) in the 1st, 2nd and 3rd migrations. Cd and Hg were also detected in the leachates from the pots. Chagas et al. [12] investigated the leaching of Pb from clay pots of Brazilian origin using inductively coupled plasma optical emission spectroscopy (ICP-OES); the concentration of Pb leached was higher than 2.0 mg/kg, the value regulated by the Brazilian Health Regulatory Agency, and the concentration increased with increasing contact time of food with the pot. In another study, Odularu et al. [13] measured the Al concentration of rice cooked in old and new aluminum, stainless steel, steel and clay pots and showed that Al was leached from all their pot samples, with concentrations varying between 186.83 ± 75.18 and 350 ± 130 µg/g. In 2016, Ojezele et al. [14] also analyzed the levels of metals (Fe, Zn, Cd, Ni, Mn, Cr, Co, Pb, Cu and Al) in rice cooked in iron, stainless steel, aluminum and clay pots and observed increased levels of metals in rice cooked in these pots compared to the all-glass pot used as a control. Kamerud et al. [15] in 2013 also found that stainless steel pots leached metals: nickel and chromium increased 24- and 35-fold, respectively, in tomato sauce cooked in stainless steel pots; however, the amount leached decreased with subsequent use and depended on factors such as the grade of the pot, the length of cooking and cookware usage. Lar et al. [16] found that varying amounts of metals (Al, Ca, Fe, Mg and Na) leached from clay pots and cast pots made in different parts of Nigeria (clay: Ife, Ilorin, Minna, Sokoto, Makurdi, Lokoja, Plateau and Calabar; cast pots: Plateau, Nassarawa, Anambra, Kaduna) when distilled water was boiled in them.
Previous research on the leaching of metals from different cookware used specific foods, such as tomato sauce [15,17], fish stew [12], 3% acetic acid solution [11], rice [13], vegetables [18], fruit juices [19] and beans [20], except for Lar et al. [16], Alabi et al. [21], Alabi and Adeoluwa [22] and Noemie et al. [17], who used water. With food substances, parameters such as pH cannot be easily monitored or varied. Although Lar et al. [16] used water in their study, they only determined metals leached from cast and clay pots, while Alabi et al. [21] and Alabi and Adeoluwa [22] only studied aluminum pots; Noemie et al. [17] studied the effect of salty water and tomato juice on leaching from cast aluminum pots. So far, no study in Nigeria has examined the PTMs leached from non-stick pots or leaching at varying pH, despite the presence of these pots in virtually every home. Ingestion is the main route of toxic metal exposure for the human body [23], so there is a need to study the leaching of metals into foods from cooking pots. Therefore, the aim of this study is to determine the effect of pH on the amount of PTMs leached from different cooking pots into food during cooking.
Sampling
Samples of used pressed aluminum, clay, stainless steel, non-stick and cast aluminum cooking pots of similar sizes (1 liter) were randomly collected from different households in Lagos state, Nigeria (Plate 1). The pots were properly washed with soap, rinsed with tap water followed by distilled water, and allowed to air dry. An all-glass Pyrex pot was used as the control for this study.
Sample preparation
One liter of distilled, de-ionized water was boiled in each pot (clay, non-stick, stainless steel, cast aluminum (koko-irin), pressed aluminum and glass), and aliquots were stored in cleaned plastic storage bottles (plastics that had been soaked in 0.1 M nitric acid overnight) and analyzed within 24 hours by ICP-OES (Agilent Technologies 700 Series) for aluminum (Al), chromium (Cr), iron (Fe), nickel (Ni), lead (Pb) and zinc (Zn). The pH values of the water samples were adjusted to 3, 4, 5, 6 and 7 using 1 M sodium hydroxide and 1 M hydrochloric acid with a calibrated pH meter. These pH values were chosen to simulate the pH of foods usually eaten. Since there is no standardized test for the study of metals leached from pots, to simulate cooking conditions one liter of water at pH 3, 4, 5, 6 and 7 was brought to a boil at 100 °C for 30 minutes in each of the different pots.
ICP-OES Analysis
Extracts from the study were kept at 4 °C for metallic element analysis using the Inductively Coupled Plasma-Optical Emission Spectrometer (ICP-OES) (Agilent Technologies 700 Series). All operating parameters of the ICP-OES were optimized for the sample solutions, which were based on de-ionized water: axial plasma observation, modified Lichte nebulizer, cyclonic spray chamber, quartz torch injector, plasma power, coolant gas, auxiliary gas and nebulizer gas were all optimized. The solution uptake rate was set at 2.0 mL/min, the replicate read time was 45 s and the pre-flush time was set at 60 s [24]. The analytical method was validated using the instrumental detection limit (IDL), limit of detection (LOD), limit of quantification (LOQ), precision and accuracy studies. In this study, the IDL for each metal was calculated from the analysis of seven replicates of the blank: IDL = 3×Sbl, where Sbl is the standard deviation of the seven calibrated blank solutions. The LOD for each metal was determined using seven replicates of the method blank, which were digested using the same procedures as the samples: LOD = 3×Sbl, where Sbl is the standard deviation of the seven method blank solutions. The LOQ is the lowest concentration of an analyte in the sample which can be quantitatively determined with an acceptable level of uncertainty; it was obtained from seven method blanks digested in the same way as the actual samples: LOQ = 10×Sbl, where Sbl is the standard deviation of the seven method blank solutions [25].
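For illustration, the IDL/LOD/LOQ calculation described above amounts to multiplying the standard deviation of replicate blank readings by 3 or 10. The sketch below uses invented blank readings, not the study's data.

import statistics

calibration_blanks = [0.21, 0.18, 0.24, 0.20, 0.19, 0.22, 0.23]  # 7 instrument blanks (µg/L)
method_blanks      = [0.35, 0.31, 0.38, 0.33, 0.36, 0.34, 0.32]  # 7 digested method blanks (µg/L)

idl = 3 * statistics.stdev(calibration_blanks)   # instrumental detection limit
lod = 3 * statistics.stdev(method_blanks)        # method limit of detection
loq = 10 * statistics.stdev(method_blanks)       # limit of quantification

print(f"IDL = {idl:.3f}, LOD = {lod:.3f}, LOQ = {loq:.3f} µg/L")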
Quality Control
Appropriate measures were taken to prevent contamination and ensure the reliability of the data. These included the use of a glass pot as a control, and the use of deionized water that had first been distilled as the blank throughout the experiment. All glassware was soaked overnight, rinsed with 0.1 M HNO3 and allowed to dry, and was also rinsed with the solutions to be measured in it prior to use. Recovery studies were carried out by spiking de-ionized water with standards, and recoveries between 75 and 100% were obtained.
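A spike recovery is simply the fraction of an added standard that is measured back; the small sketch below illustrates the calculation with invented numbers, since the paper does not report its individual spike results.

def recovery_percent(spiked_result, unspiked_result, amount_added):
    """Percentage of the added standard recovered in the spiked sample."""
    return (spiked_result - unspiked_result) / amount_added * 100.0

# e.g. 10 µg/L added, 9.1 µg/L measured in the spiked water, 0.4 µg/L in the unspiked water
print(f"{recovery_percent(9.1, 0.4, 10.0):.0f} % recovery")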
RESULTS AND DISCUSSION
The glass, pressed aluminum, cast aluminum, stainless steel, clay and non-stick pots were used to boil distilled water at pH 3, 4, 5, 6 and 7. Foods are usually within the pH range of 3 to 7. The amounts of PTMs leached were quantified as described in the methodology, and the results are shown in Table 1 and Figures 1 to 5 (and Tables 1S to 5S; supplementary data).
Effect of pH on the amount of PTMs leached from glass pot
The glass pot was used as a control for comparison with the other pots, which were all made of metal, and the results are shown in Table 1. No metal was found to leach from the glass pot (the leached metals were all below the limit of detection for each metal). This may be due to the type of glass used in this study. The glass material was a Pyrex-type sodium borosilicate, known for its improved thermal shock resistance, excellent weathering resistance, very low solubility, chemical inertness and excellent chemical durability against most leaching solutions. Such glasses are commonly used for cookware, chemical laboratory ware, flat panel devices and fluorescent lamps and are considered among the best glasses. They are usually made of 80.6% SiO2, 12.6% B2O3, 4.2% Na2O, 2.2% Al2O3, 0.1% CaO, 0.1% Cl, 0.05% MgO, and 0.04% Fe2O3 [26].
Effect of pH on the amount of PTMs leached from aluminum pot
The aluminum pot, sometimes referred to as a pressed aluminum pot, was used to boil distilled water of pH 3 to 7, and the results of the analyses of the water samples for leached metals are shown in Table 1S (Supplementary data) and Figure 1. At pH 3 and 4, Zn was found to leach, but from pH 5 to 7 no Zn leached; the other elements monitored did not leach at any pH from 3 to 7. Aluminum may not have been found in the leachate because of the quality of the aluminum pot used and the remediating effect of hot water on aluminum pots. Karbouj et al. [27] found in their study on aluminum cookware that boiling water in the cookware prior to cooking decreases the amount of aluminum leached, thus remediating aluminum leaching. In this study, the leaching tests on the aluminum pot (and the other pots) at the different pH values were carried out one after another in the same pot with deionized water; this process must have inhibited the leaching of aluminum. However, Alabi et al. [21] found that when water was boiled in new aluminum pots for 1 hour, 0.023 mg/L of aluminum was leached, while three-year-old pressed aluminum pots leached 0.029 mg/L and six-year-old aluminum pots leached 0.048 mg/L of aluminum. Lomolinon et al. [24] also found that the amount of aluminum leached from aluminum cooking materials depends on the quality of the aluminum material they are made of.
Effect of pH on the amount of PTMs leached from cast aluminum pot
Cast aluminum pots are produced by the informal (artisanal) sector in many African countries, including Nigeria, from recycled scrap metal and e-waste. The cast pot is neither as smooth nor as compact in appearance as the industrially manufactured pressed aluminum pot; cast pots can crack if struck, unlike pressed pots, and they also flake when water is stored in them for a while. This is probably due to the smelting temperature, the type of mould and the purity or type of the starting material: the temperature employed in the informal sector is lower, sand moulds are used and the starting materials are aluminum scraps. The cast aluminum pot (koko irin) was used to boil de-ionized water of pH 3 to 7, and the results of the analyses of the water samples for leached metals are shown in Figure 2 (Table 2S of Supplementary data). The aluminum leached increased from pH 3 to 5 and then decreased at pH 6 before increasing slightly at pH 7, remaining lower than at pH 3; thus pH 5 leached the highest amount of Al (2273 µg/L). In a study by Verissimo et al. [8], increased leaching of Al was observed when food samples were cooked at low pH with different acidic additives (lemon juice, wine vinegar and cider apple vinegar) in a cast aluminum pot. A similar trend was observed in this study: the amounts of aluminum leached at pH 3 to 5 were higher than those leached at pH 6 to 7. Cr was leached only at pH 7, while the other metals were not leached at any of the pH values tested. In the study by Street et al. [11], the average concentration of Al leached from cast pots was 509,000 µg/L, above the EU maximum permissible limit of 5000 µg/kg for cookware, and Pb was detected in all their samples, in some instances at concentrations higher than 10 µg/L. Weidenhamer et al. [28] also detected lead in the extracts from cast aluminum pots made from scrap in Cameroon. However, in this study the concentrations of Al leached from the cast pot were between 1147 and 2273 µg/L, lower than the values reported by Street et al. [11] and lower than the EU permissible value for pots.
Effect of pH on the amount of PTMs leached from clay pot
Deposits of clay exist in many parts of Nigeria and are usually used in pottery [29]. In the clay pot, Pb was not detected at any of the pH values tested. The concentration of Al gradually decreased from pH 3 to pH 4 and then increased up to pH 7, as shown in Figure 3 (Table 3S, Supplementary data); clay is known to be a source of aluminum [30]. The concentration of Fe gradually increased from pH 3 to 5, decreased at pH 6 and increased again at pH 7. Boisa et al. [31] observed that the concentrations of Fe leached upon exposure of an Ara-Ekiti clay pot to water of acidic, neutral and alkaline pH were 1.15, 0.16 and 0.08 mg/L, respectively. Aleksanyan et al. [32] reported the tendency of Fe to leach at near-neutral to neutral pH conditions. The concentration of Zn was higher at pH 5 than at pH 7; a similarly varying trend in the concentration of Zn leached at different pH conditions was also recorded by Boisa et al. [31]. However, Chagas et al. [12], in their study of Pb leached from clay pots, found levels higher than 2.0 mg/kg, which they attributed to the substance used for glazing the pots; the pot used in this study was not glazed but was an oven-baked hardened pot. The high level of Fe recorded in this study may be attributed to the abundance of the metal in the earth's crust: soils from Nigeria are known to be rich in iron [31,33], and the pot used was made in Nigeria.
Effect of pH on the amount of PTMs leached from non-stick pot
The concentrations of Al, Fe, Cr, Ni and Pb leached were all below the limit of detection, while Zn (24.39 µg/L) was detected only at pH 3, as shown in Figure 4 (Table 4S). The Zn may have come from the exposed part of the handle inside the pot that was in contact with the water. Most of the inner surface of a non-stick pot is coated with polytetrafluoroethylene (PTFE, Teflon) or its substitutes, which are usually organic in nature [34].
3.6. Effect of pH on the amount of PTMs leached from stainless steel pot
Stainless steel cookware is made from a metal alloy consisting mostly of iron and chromium along with differing percentages of molybdenum, nickel, titanium, copper and vanadium. Stainless steel is known to exhibit good thermal conductivity and good corrosion resistance due to its ability to readily form an iron/chromium-enriched passive layer [35]. The results for the stainless steel pot are given in Table 5S (Supplementary data). Okazaki et al. [36] conducted a seven-day immersion test using several solutions on stainless steel and found that the quantities of Fe and Ni released gradually decreased with increasing pH from 2 to 7.5. Hedberg et al. [37] stated that acidic pH changes the ionic strength, affects corrosion and dissolution processes, affects ligand conformation and adsorption behavior, and changes the surface hydroxide, all of which result in increased corrosion of stainless steel. Dan and Ebong [18] also found that stainless steel pots leached more Fe than aluminum pots, which tallies with the results obtained in this study; they noted that the Fe content in cooked food was higher than in uncooked food and attributed this to the over fifty percent iron content of stainless steel. In the study by Herting [38], Cr and Pb leached from stainless steel surfaces into acetic acid solutions, which are also acidic; in this study, Pb was likewise detected at an acidic pH of 3 for solutions boiled in the stainless steel pot. Lead is a very dangerous heavy metal poison that can accumulate in the body over time and cause neurological, behavioral, renal and cardiovascular dysfunction. Thus, it is important that its levels are well monitored and kept well below toxic levels in foods at all times.
The concentration of Zn (1.09 µg/L) suggests that the stainless steel pot may not directly cause or contribute to the toxic effects of zinc in the body, as the recommended intake values for zinc range between 7 and 16 mg/day for adults and for pregnant and lactating women, depending on sex and dietary phytate intake [39], while for infants and children it ranges from 2.9 to 14.2 mg/day [40].
The type of food, with respect to its pH, was found to be an important factor in the study of metal leachability from cooking pots. Semwal et al. [41] also found that the migration of metals into food was higher with acidic foods than with alkaline foods; they found that more metals leached when food of low pH (4.25) was cooked in an aluminum pot and that the concentration of Al increased by 20.3 mg/kg. Verissimo et al. [8] likewise reported that red cabbage samples cooked with different acidic additives (lemon juice, wine vinegar and cider apple vinegar) showed increased leaching of aluminum. Similarly, this study showed that the leaching of PTMs was considerably higher at more acidic pH than at neutral pH for most of the pots. Thus, cooking of acidic foods in these pots should generally be avoided. Long-term exposure to PTMs can lead to gradually progressing physical, muscular and neurological degenerative processes that imitate diseases such as multiple sclerosis, Parkinson's disease, Alzheimer's disease and muscular dystrophy [42]. Repeated long-term exposure to some metals and their compounds may even cause cancer [43].
Conclusion
The results obtained from this study show that the pots used in the analysis (pressed aluminum, cast aluminum, clay, non-stick and stainless steel) leached considerable concentrations of PTMs into the distilled water at varying pH, especially at acidic pH values, while no PTM was leached from the glass pot. More leaching was observed between pH 3 and 5 than between pH 6 and 7 for most pot types. Since PTMs are known to bioaccumulate, there are health implications associated with the amounts leached from the pots. The public should therefore be enlightened about exposure to metals from foods, and this should be a public health priority.
Continuous Amplitude-Integrated Electroencephalographic Monitoring Is a Useful Prognostic Tool for Hypothermia-Treated Cardiac Arrest Patients
Supplemental Digital Content is available in the text.
Since therapeutic hypothermia (TH) was shown to effectively improve the neurological outcome of comatose cardiac arrest survivors, 1,2 TH has become the standard of care for a subset of these patients. 3 However, the range of neurological outcomes remains wide, and prognostication has become more complex. 4 Currently, neurological outcome prediction in these patients has primarily focused on end-of-life decisions, such as the withdrawal of life-sustaining therapies (LSTs), 5 and such prognostication should be delayed beyond the previously recommended 72 hours after cardiac arrest. 6 However, early positive prognostication during the first few hours after the return of spontaneous circulation (ROSC) is important for treating physicians when counseling families and making appropriate treatment decisions, although not when deciding whether to withhold or withdraw LSTs because of a perceived poor neurological prognosis. The importance of the timing of the neurological assessment for prognostication is related to the earliest time at which the brain structures can recover function to enable reliable clinical assessment. 7 Therefore, a good predictor should be based on a test that continuously reflects the status of the brain. Current recommendations state that EEG should be performed, promptly interpreted, and monitored frequently or continuously in comatose patients after ROSC. 3 cEEG is a noninvasive technique that can be used to monitor the postischemic brain after cardiac arrest. However, early cEEG monitoring in these patients has remained challenging because it requires serial surveillance by experienced specialists, who are often unavailable or expensive. Amplitude-integrated electroencephalography (aEEG) provides a simplified and, therefore, more readily available brain function monitoring tool for perinatal hypoxic-ischemic encephalopathy in neonates and cardiac arrest in adults. 10-14 In neonates, the time required after birth for the aEEG to recover to a normal background pattern was the best predictor of poor neurological outcome, and all infants who did not recover a normal background pattern by 36 to 48 hours either died or survived with severe disability. 10,11 Regarding aEEG in adult patients undergoing TH, a continuous pattern at registration and at normothermia was associated with good neurological outcome, and burst suppression (BS) or status epilepticus (SE) during the normothermia period indicated poor neurological outcome. 14 The present study aimed to assess whether the time from ROSC to a normal trace (TTNT), as measured via continuous aEEG monitoring, represents a neurological outcome predictor for TH-treated adult patients with cardiac arrest. The second aim was to determine the association between malignant aEEG patterns and poor neurological outcome, with a particular focus on the time of the occurrence of these patterns.
Study Design and Patients
This was a prospective observational study of TH-treated adult patients with cardiac arrest at a single tertiary hospital from September 2010 to April 2013. During this period, all unconscious adult (age >19 years) patients receiving successful resuscitation were considered eligible for TH, and all TH-treated patients were monitored via aEEG. This study included consecutive patients with TH, but the patients were excluded if (1) they died within 72 hours after cardiac arrest, (2) their cardiac arrest occurred as a result of spontaneous or traumatic brain injury, or (3) they had a known history of neurological diseases, such as epilepsy.
This study was approved by the Institutional Review Board of Seoul St. Mary's Hospital. Informed consent from each patient's next of kin was obtained; subsequently, if the patient recovered consciousness, consent was reobtained from the patient.
Therapeutic Hypothermia Protocol
All patients who were resuscitated were considered eligible for TH at 33°C for 24 hours according to the current recommendations. 3 Before the induction of TH, sedation (midazolam, 0.08 mg/kg intravenously) and paralysis (rocuronium, 0.8 mg/kg intravenously) for shivering control were immediately administered, followed by continuous infusion of midazolam (0.04-0.2 mg·kg⁻¹·h⁻¹) and rocuronium (0.3-0.6 mg·kg⁻¹·h⁻¹). The target temperature of 33°C was maintained for 24 hours. After the completion of the TH maintenance period, controlled rewarming at a rate of 0.25°C/h was performed until the patient's temperature reached 36.5°C. Sedation and paralysis were reduced during rewarming and were discontinued as soon as the central temperature reached 35°C.
aEEG Monitoring and Analysis
As performed in our previous study, 13 all patients were monitored via aEEG using a combined single-channel aEEG/EEG digital device (Olympic Medical CFM 6000, Natus, Inc, Seattle, WA) as soon as possible by attending emergency physicians in the emergency department; subdermal needle electrodes were applied across the forehead to record EEG channels Fp3-Fp4. Recording continued until the patient regained consciousness, the patient died, or at least 72 hours had passed since ROSC. Clinically concerning or seizurelike activity on the aEEG or raw EEG scan resulted in the treatment of the patient according to the local protocol, and cEEG was initiated instead of aEEG if there were no limitations related to technician support for EEG. The patients experiencing SE were initially treated with boluses of valproic acid, levetiracetam, and clonazepam, followed by maintenance dosing. Pentobarbital was administered to refractory SE cases.
After clinical interpretation during treatment, all aEEG/EEG recordings were reinterpreted by an experienced neurologist (Y.M.S.) who was blinded to the neurological outcome and the clinical data. The aEEG background patterns were classified into the following categories by using the voltage method 11-13 (Figure 1): continuous normal voltage (CNV), discontinuous normal voltage, low voltage, flat trace, BS, and SE. CNV was defined as continuous cortical activity on the raw EEG scan; in addition, the upper margin of the aEEG scan, referred to as the aEEG maximum, was >10 μV, and the lower margin of the aEEG scan, referred to as the aEEG minimum, was >5 μV. Discontinuous normal voltage was defined as cortical activity, with the exception of discontinuous intermittent periods displaying a low amplitude on the EEG scan, with an aEEG maximum >10 μV and an aEEG minimum ≤5 μV. The low-voltage pattern was defined as an aEEG maximum ≤10 μV, and flat trace was defined as isoelectric activity. We defined BS as the virtual absence of activity (<2 μV) between bursts of high voltage (>25 μV). SE was defined as repetitive epileptiform discharges with amplitudes >50 μV and a median frequency ≥1 Hz for >30 minutes, producing an aEEG trace exhibiting a sawtooth-like appearance with continuously narrowing bandwidths and increasing peak-to-peak amplitudes or with an abrupt elevation in the aEEG levels from the continuous background pattern. According to our definition, periodic epileptiform discharges were classified as SE. The aEEG background patterns from the beginning to the end of monitoring were analyzed according to their time of occurrence. To evaluate TTNT as a predictor of good neurological outcome for TH-treated adult patients with cardiac arrest, we considered only CNV as a normal trace. 11-13
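The margin-based part of this classification can be expressed as a small decision rule. The sketch below is only an illustration of the thresholds quoted above; in practice BS and SE require inspection of the raw EEG morphology, so they are passed in here as pre-identified flags, and the <2 μV cutoff used for a flat trace is an assumption rather than a value stated in the text.

def classify_aeeg(upper_margin_uv, lower_margin_uv,
                  burst_suppression=False, status_epilepticus=False):
    """Classify an aEEG background pattern from its margin voltages (µV)."""
    if status_epilepticus:
        return "SE"
    if burst_suppression:
        return "BS"
    if upper_margin_uv < 2:            # essentially isoelectric activity (assumed cutoff)
        return "flat trace"
    if upper_margin_uv <= 10:
        return "low voltage"
    if lower_margin_uv > 5:
        return "CNV"                   # continuous normal voltage
    return "DNV"                       # discontinuous normal voltage

print(classify_aeeg(14.0, 7.0))   # -> CNV
print(classify_aeeg(12.0, 3.0))   # -> DNV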
Neurological Outcomes
In all patients, the prognosis after cardiac arrest treated with TH was determined based on a combination of predictors of poor neurological outcome. However, in no patient was care withdrawn based on the results of these predictors before hospital discharge; the treatment team provided sufficiently prolonged life support to patients who did not recover consciousness after rewarming. Neurological outcome at 6 months after resuscitation was evaluated by the authors (K.N.P., S.H.K., and S.H.O.) via a telephone interview. 15 The neurological outcome measure was the score on the 5-point Glasgow-Pittsburgh Cerebral Performance Category (CPC) scale at 6 months after ROSC. Neurological outcome was dichotomized as good or poor. Good neurological outcome was defined as a CPC score of 1 or 2, and poor neurological outcome was defined as a CPC score of 3, 4, or 5. If the patients who presented as CPC 1 or 2 ultimately died of rearrest within 6 months, we used the highest CPC score for classification.
Statistical Analysis
The categorical variables were expressed as numbers and percentages, and the continuous variables were expressed as means and standard deviations or as medians with 25th (Q1) and 75th (Q3) quartiles, depending on whether they were normally distributed. Univariate comparisons of neurological outcome were performed using χ² tests for categorical variables or t tests for continuous variables, as required. The performance of the neurological outcome predictors was evaluated based on their sensitivity, specificity, positive predictive value (PPV), and negative predictive value using an exact binomial 95% confidence interval (CI). To evaluate the prognostic value of the TTNT, receiver operating characteristic analysis was performed; we determined the best TTNT threshold for the prediction of good neurological outcome, and 100% was used as the threshold of specificity for poor neurological outcome. The 95% CI was calculated for the area under the curve. Evolution-specific aEEG patterns and their time points were analyzed to evaluate the prognostic value of these factors for poor neurological outcome. All statistical analyses were performed using SPSS version 16.0 (SPSS, Chicago, IL) and the Medcalc program (Medcalc Software, Mariakerke, Belgium). All reported P values are 2-sided. A P value <0.05 was considered to be significant.
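For illustration, the predictive-value statistics named above follow from a 2×2 table, with exact binomial (Clopper-Pearson) confidence intervals obtainable from the beta distribution. The counts below are invented and are not the study's data.

from scipy.stats import beta

def exact_ci(k, n, alpha=0.05):
    """Clopper-Pearson exact binomial confidence interval for k successes out of n."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

tp, fn, fp, tn = 30, 5, 2, 40      # predictor result vs. dichotomized outcome (toy counts)
for name, k, n in [("sensitivity", tp, tp + fn), ("specificity", tn, tn + fp),
                   ("PPV", tp, tp + fp), ("NPV", tn, tn + fn)]:
    lo, hi = exact_ci(k, n)
    print(f"{name}: {k / n:.1%} (95% CI {lo:.1%}-{hi:.1%})")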
Characteristics of the Study Population
During the study period, 166 TH-treated adult patients with cardiac arrest were monitored via aEEG; 36 patients were excluded from this study because of death within 72 hours after ROSC. Ultimately, 130 patients were included in this study; a portion of this study cohort (55 patients) overlapped with that of our previous study. 13 Of the included patients, 83 (63.8%) were male, and the mean patient age was 51.5±16.6 years. A majority of the patients (86.9%) experienced an out-of-hospital cardiac arrest; 45 patients (34.6%) exhibited an initial shockable rhythm, and the mean time from cardiac arrest to ROSC was 30.9±18.4 minutes. The median interval from ROSC to the initial aEEG reading was 134.5 minutes (Q1-Q3, 71.8-239.8 minutes). At 6 months after ROSC, 55 patients (42.3%) exhibited a good neurological outcome, and 75 patients (57.7%) exhibited a poor neurological outcome. The baseline characteristics of the included patients and the comparison between those exhibiting good and poor neurological outcome are shown in Table 1. Significant differences in the presence of a witness, the initial rhythm, the cardiac arrest etiology, and the time from arrest to ROSC were observed between the good and poor neurological outcome groups.
EEG Evolution in Both Neurological Outcome Groups
Figure 2 presents the EEG evolution in all patients over time between the good and poor neurological outcome groups. In most patients (98, 75.4%), the background pattern changed during monitoring. In only 32 patients did the initial background pattern persist without any evolution, and among these patients, most (24, 75.0%) initially exhibited a CNV trace and had a good neurological outcome. Eight patients who initially exhibited a low-voltage pattern ultimately had a poor neurological outcome without any evolution. A CNV trace was initially observed in 25 of the 130 patients (19.2%), most of whom exhibited a good neurological outcome. The initial observation of a CNV trace resulted in a PPV of 96.0% (sensitivity and specificity of 43.6% and 98.7%, respectively). One patient initially exhibiting a CNV trace developed circulatory shock without recovering consciousness and ultimately died.
Of the 105 patients not initially exhibiting a CNV trace, 51 patients exhibited a CNV trace within 72 hours. Among these patients, 31 patients exhibited a good neurological outcome, and 20 patients exhibited a poor neurological outcome. A difference in TTNT was observed between the good and poor neurological outcome groups ( Figure 3). All patients experiencing a good neurological outcome developed a CNV trace within 36 hours.
TTNT as a Neurological Outcome Predictor
A short TTNT predicted good neurological outcome. Receiver operating characteristic analysis revealed that the diagnostic performance of the TTNT for neurological outcome was good, displaying an area under the curve of 0.97 (95% CI, 0.92-0.99; Figure 4). The achievement of TTNT within specific time windows in both neurological outcome groups is shown in Table 2.
Other Predictors of Poor Neurological Outcome
The occurrence of BS and SE in both neurological outcome groups was also examined. The combination of negative predictors, consisting of no CNV development within 36 hours or the occurrence of BS or SE, in which poor neurological outcome was predicted if at least 1 of these 3 criteria was met, improved prognostic performance. These negative predictors were observed in 92.0% (69/75) of the patients who exhibited a poor neurological outcome, at a median of 6.2 hours after ROSC (Q1-Q3, 2.5-18.7 hours). These combined aEEG-based negative indicators predicted poor neurological outcome with a specificity, PPV, and negative predictive value of 96.4%, 97.2%, and 89.8%, respectively (Table 3).
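As a rough consistency check, the quoted specificity, PPV, and negative predictive value for the combined negative predictors can be reproduced from a reconstructed 2 × 2 table, assuming 75 patients with poor outcome (69 of whom met at least one criterion) and 55 with good outcome (2 of whom met a criterion). These counts are inferred from the reported percentages; they are an assumption, not values taken from Table 3.

```python
# Consistency check for the combined negative predictors
# (counts reconstructed from the reported percentages).
tp, fn = 69, 6     # poor outcome: 69/75 flagged by at least one negative predictor
fp, tn = 2, 53     # good outcome: specificity of 96.4% implies 53/55 not flagged

sensitivity = tp / (tp + fn)   # 0.920
specificity = tn / (tn + fp)   # 0.964
ppv = tp / (tp + fp)           # 0.972
npv = tn / (tn + fn)           # 0.898
print(f"Se={sensitivity:.3f}, Sp={specificity:.3f}, PPV={ppv:.3f}, NPV={npv:.3f}")
```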
Discussion
We evaluated the prognostic value of aEEG using a single-channel frontal montage in TH-treated cardiac arrest survivors during the initial 72 hours after ROSC, without withholding or withdrawal of LST. Our study demonstrated that aEEG enabled early prediction of neurological outcome in these patients. First, when the aEEG displayed a normal CNV trace within 24 hours, physicians were able to predict a good neurological outcome with a PPV of 88.1%. Second, the occurrence of SE or BS at any time and the lack of development of a normal CNV trace within 36 hours were associated with a poor neurological outcome with high sensitivity. The combination of these negative predictors may improve prognostic performance at an earlier stage.
Our findings were consistent with those of previous studies. The cEEG background pattern was strongly associated with the recovery of consciousness. Cloostermans et al 16 reported that cEEG monitoring during the first 24 hours after resuscitation contributes to the prediction of both good and poor neurological outcome. In that study, continuous activity patterns within 12 hours predicted good neurological outcome, and isoelectric or low-voltage activity after 24 hours predicted poor neurological outcome. Rundgren et al 14 evaluated aEEG at a median of 8 hours after cardiac arrest and after the patients achieved a normal temperature. In that study, an initial continuous pattern and the return of a continuous pattern at normothermia served as good predictors of the recovery of consciousness. In our previous study, based on a small number of cases, all of the patients exhibiting a good neurological outcome displayed a CNV trace within 26 hours. 13 However, the threshold TTNT for the prognosis of good neurological outcome was unknown when the normalization of the background pattern was delayed. In term infants with perinatal asphyxia treated with hypothermia, some investigators found that the recovery time to a normal background pattern was the best predictor of poor neurological outcome, with a suggested threshold for aEEG normalization of 36 to 48 hours of age. 10,11 These findings were similar to our results, in which all patients experiencing a good neurological outcome developed a normal trace within 36 hours. We used a continuous sedation protocol during hypothermia. Of the 55 patients exhibiting good outcome, 53 achieved a normal trace within the mean time of the initiation of sedative reduction (within 28.5 hours after ROSC), and 2 patients exhibited a TTNT before sedative withdrawal (34.4 and 35.1 hours, respectively). Therefore, we believe that the effect of sedation on the TTNT threshold is minimal.
Most investigators agree that, in patients treated with TH, the time to prognostication should be delayed beyond 72 hours after rewarming. [17][18][19][20] Early prognostication during TH should focus on good rather than poor neurological outcome. 21 However, in some reports, LST was withdrawn before the resumption of normothermia by family request, or treatment was even suspended for some patients who were given a poor prognosis during TH. 22,23 Our study showed that the presence of a normal aEEG pattern within 24 hours after ROSC was a predictor of a good neurological outcome. This finding can impact treatment decisions even in cases in which the withdrawal of LST is considered to be consistent with the caregiver's wishes without delayed prognostication.
The definition of BS was inconsistent between many studies, potentially influencing the relevance of the observed predictive value of BS. 13,14,16,24 According to the definition of BS in American Clinical Neurophysiology Society's Standardized Critical Care EEG Terminology: 2012 version, suppression was defined as a period in which the voltage was <10 μV. 25 Alternatively, we defined BS as the virtual absence of activity (<2 μV) between bursts of high voltage (>25 μV) on aEEG. With the use of our revised definition, the occurrence of a BS pattern accurately predicted poor neurological outcome in all but 1 case.
Recent studies have demonstrated that SE is common in these patients and is associated with poor neurological outcome, 14,26-28 despite exceptional reports of recovery. 29,30 It is unknown whether prolonged SE contributes to secondary brain injury after cardiac arrest or whether SE is simply an epiphenomenon of severe brain injury. Our results suggest an answer to this question. Rossetti et al 30,31 described benign postanoxic SE in cases involving a reactive background. Rundgren et al 14 identified SE evolving from a continuous pattern in 10 patients and suggested that this type of SE reflects a less injured and potentially salvageable brain. In our study, 5 patients developed SE from a CNV trace via the sequential evolution of the EEG pattern. Although these patients exhibited an evolution pattern similar to that of other patients with good neurological outcome, only 1 patient, contrary to our expectations, exhibited a good neurological outcome. We propose that, in these cases, SE serves as a contributor to poor neurological outcome rather than as a simple epiphenomenon of severe brain injury and that appropriate treatment is necessary to recover consciousness. To distinguish this pattern of SE from more malignant SE patterns, cEEG monitoring may be an essential component of postcardiac arrest care.
The introduction of mild hypothermia and standardized treatment protocols in the past decade has improved neurological outcomes in survivors of cardiac arrest. [1][2][3] In 2013, new strategies based on a near-normal temperature (36°C) not displaying differences in comparison with mild hypothermia (33°C) were introduced. 32 However, in both intervention groups, a significant number of patients did not regain consciousness after treatment. To improve neurological outcome in these patients, it is quite clear that tailored therapies according to the extent of brain injury are needed. Some investigators have attempted to categorize patients according to brain injury severity based on EEG. 33,34 According to our results, based on early aEEG monitoring, these patients can be categorized early. The initial or early restoration of CNV predicts good neurological outcome in circumstances involving the maintenance of active care. Additionally, the development of CNV between 24 and 36 hours after ROSC and SE originating from CNV render prognostication difficult. However, for cooled patients exhibiting an abnormal voltage trace during TH, a good neurological outcome remains possible even if normalization of the background patterns is delayed beyond 24 hours. Alternatively, the lack of the development of a normal trace within 36 hours and the occurrence of BS or SE originating from BS indicated a poor neurological outcome using the present cooling strategies. However, good neurological recovery in these patients may remain possible with advancements in the existing cooling strategies that have been associated with poor neurological outcome. 33 Because the risk of extensive hypoxic brain injury increases over time, to spare these patients, it is important to identify those expected to exhibit a poor prognosis very early. Interestingly, using the combination of negative predictors via continuous aEEG monitoring, prognostication was possible at a median 6.2 hours after ROSC. To the best of our knowledge, our study represents the first attempt to address the issues of the time at which physicians determine the severity of brain injury using EEG and the time of prognostication. Future studies should continue to define patient subgroups and evaluate the benefit of tailored therapies for each patient's injury. 35 Several features of our study deserve further mention. First, withdrawal of LST based on prognostication was not applied to our patients. Currently, the practice of withdrawal of LST is widespread based on recommendations and guidelines. 5,36 However, most prognostication studies were performed in Western countries, which have a social consensus regarding the withdrawal of LST, and these studies did not adequately address certain important limitations concerning the risk of bias. A self-fulfilling prophecy is present in most prognostication studies of cardiac arrest, in which the treating physicians are not blinded to the results of the neurological outcome prediction and use this prediction to make decisions regarding the withdrawal of LST. 7,37,38 In South Korea, the withdrawal of LST for patients who have a terminal illness remains under debate, and the social consensus and legislative processes are developing; the withdrawal of LST for postcardiac arrest patients based on prognostication is not currently permitted. 39 Our clinical and ethical situation resulted in the natural neurological outcome for these patients. 
Second, this study included consecutive patients with TH during the study period, but patients who died within 72 hours after cardiac arrest were excluded. The aim of this study was to evaluate the prognostic value of continuous aEEG (from immediately after ROSC to 72 hours later) according to time. Early death after ROSC is often caused by persistent hemodynamic instability leading to multiple organ failure. 40,41 If patients with good brain function but poor cardiorespiratory function were included in the study, the prognostic value of aEEG in the early phase would be confounded. However, our analysis did not include additional potential variables affecting neurological outcome, such as initial organ system dysfunction in 72-hour survivors. For these reasons, our results should be interpreted cautiously.
Limitations
This study contained several limitations. First, we only performed frontal EEG monitoring via single-channel aEEG. Although SE was detected in 21.5% of our patients, a result similar to that of other reports using multichannel cEEG monitoring, 14,26-28 our rate of good outcome was slightly lower than that of other studies. 30,42 Our technique using single-channel monitoring may hinder the detection of focal epileptic activity, and this may influence the rate of good outcome in SE patients. However, reducing the number of channels used for bedside aEEG monitoring is crucial for facilitating the monitoring of the cerebral cortex and for its immediate application after ROSC. Because the frontal cortices are better protected than the parietal and occipital cortices, 43 and because continuous activity appeared first in the frontal leads after ROSC, 44 the frontal cortices may represent neural recovery during the early stage. Second, this study used a single-center design in a country with a unique healthcare environment. This study design raises important questions regarding the generalizability of these results. A large, multicenter study including Western countries may provide more precise prognostic values and may help determine the utility of aEEG in this population. Third, although physicians are not permitted to withdraw LST in South Korea, there was an inevitable risk of bias because the treating team was not blinded to the aEEG data during treatment. Finally, especially regarding SE, our study could not differentiate periodic epileptiform discharges from SE and did not evaluate clinical manifestations (such as myoclonus) that were examined in previous studies using cEEG. 22,27,28
Conclusion
Early aEEG monitoring of adult patients with cardiac arrest receiving TH enabled the early prediction of neurological outcome. Based on these results, a TTNT within 24 hours after ROSC was associated with good neurological outcome. The lack of CNV development within 36 hours and the occurrence of SE or BS within 72 hours after ROSC contributed to the prediction of poor neurological outcome. The combination of these negative predictors via aEEG monitoring may improve their prognostic performance at an earlier stage.
Anatomical Correlates of Uncontrollable Laughter With Unilateral Subthalamic Deep Brain Stimulation in Parkinson’s Disease
Introduction: Subthalamic nucleus deep brain stimulation (STN-DBS) is a well-established treatment for the management of motor complications in Parkinson's disease. Uncontrollable laughter has been reported as a rare side effect of STN stimulation. The precise mechanism responsible for this unique phenomenon remains unclear. We examined in detail the DBS electrode position and stimulation parameters in two patients with uncontrollable laughter during programming after STN-DBS surgery and illustrated the anatomical correlates of the acute mood changes with STN stimulation. Case report: Unilateral STN-DBS induced uncontrollable laughter with activation of the most ventral contacts in both patients. However, the location of the electrodes responsible for this adverse effect differed between the patients. In the first patient, the DBS lead was placed more inferiorly and medially within the STN. In the second patient, the DBS lead was implanted more anteriorly and inferiorly than initially planned, at the level of the substantia nigra reticulata (SNr). Conclusion: Unilateral STN-DBS can induce acute uncontrollable laughter with activation of electrodes located more anterior, medial, and inferior in relationship with the standard stereotactic STN target. We suggest that stimulation of the ventral and medial STN, surrounding limbic structures, or the SNr is the most plausible anatomical substrate responsible for this acute mood and behavioral change. Our findings provide insight into the complex functional neuroanatomical relationship of the STN and adjacent structures important for mood and behavior. DBS programming with more dorsal and lateral contacts within the STN should be entertained to minimize the emotional side effects.
INTRODUCTION
Subthalamic nucleus deep brain stimulation (STN-DBS) is an established and effective procedure for the treatment of motor complications in Parkinson's disease (PD). Because of the small size and functional organization of the STN, STN-DBS may cause a variety of sensory and emotional changes associated with stimulation of limbic regions and other surrounding structures (1)(2)(3). While there are numerous reports of neuropsychiatric and affective changes associated with STN-DBS, it has been difficult to delineate the specific functional neuroanatomy in cases of acute mood elevation (4,5). In addition, reports lack detailed information on the active contacts or clear post-operative imaging allowing assessment of the potential factors and structures responsible for this uncommon side effect (4,5). Marked differences among patients and methods, and inconsistent reports, limit concrete conclusions (Table 1).
DBS provides a unique opportunity to analyze the effects of electrical stimulation on neuronal structures and their connections. Stimulation of the ventral-medial STN region might result in spread of current into the limbic STN territories and therefore contribute to acute mood and behavioral changes (3,4,6). In this report, we present two patients with reproducible uncontrollable, "mirthful" laughter after STN-DBS and review prior reports of similar symptoms in the literature (4,5,7). Uncontrollable laughter has been considered a pseudobulbar effect or a neuropsychiatric effect of STN-DBS surgery (8)(9)(10). We aim to assess the location of the responsible contacts to provide further insight into the complex basal ganglia mechanisms driving emotional responses in PD.
Case 1
A 52-year-old right-handed man with a history of idiopathic PD for 20 years presented with severe motor fluctuations including end-of-dose wearing off, freezing of gait, and peak-dose dyskinesia. He had a history of impulse-control disorders associated with dopamine agonists and a history of depression. After comprehensive multidisciplinary DBS assessment, he had undergone staged bilateral STN-DBS based on the ability to reduce dopaminergic medications with the target compared with the GPi. He underwent unilateral left STN-DBS placement (Activa SC DBS implanted with a 3389 Medtronic lead) followed by right STN-DBS 6 months later. Microelectrode recording was conducted sequentially, and typical STN-neuronal discharges were recorded. Kinesthetic cells were encountered and macro stimulation after final placement of the lead was unremarkable with non-specific side effects including dizziness at high levels followed by corticobulbar side effects with dysarthria. During initial DBS monopolar review 1 month after surgery, he noticed a sudden onset of "giddiness and euphoria" best described as a need to laugh that was overwhelming and precluded him from talking when activating right STN contact 0. The programming was done unilaterally and the contralateral DBS was off during the session. Symptoms appeared around 3.0 V [pulse width (PW) of 60 µs and frequency of 140 Hz] and became increasingly prominent as voltage was increased to 3.5 V. The patient reported a feeling of happiness and joy associated with involuntary laughing. Despite the jovial nature of his symptoms, he was uncomfortable and the sensation was not pleasant. He reported mild nausea and recurrence of euphoric, loose, and giddy feelings when activating adjacent contact 1 at 3.8 V. At 4.3 V, this feeling became persistent and more intense. Partial benefit in Parkinsonism was observed with ventral contacts and the patient developed right foot dyskinesia, mild reduction in bradykinesia, and reduction in rigidity around 3.5 V. The rest of his monopolar review was unremarkable with improvement in Parkinsonism with contacts 2 and 3 and dyskinesia at 2.5 V with contact 2. The uncontrollable laughter was reproducible after 6 months. Corticospinal side effects and non-specific dizziness were the most common side effects associated with stimulation of contacts 2 and 3. The final programming settings were bipolar with contact 3 positive, contact 2 negative, PW 90 µs, and frequency 140 Hz. His UPDRS-III score off medications at 1 year improved from 42 to 15 points.
Case 2
The patient was a 39-year-old right-handed woman who received bilateral STN stimulation for progressive Parkinsonism. She reported progressive symptoms that had started 6 years earlier, with development of early motor fluctuations and dyskinesia. She underwent simultaneous placement of bilateral STN-DBS 5 years after her diagnosis of PD (Activa SC DBS, implanted with a 3389 Medtronic lead). While performing monopolar review 1 month post-operatively, she noted sudden and uncontrollable laughter when activating contact 0 at 2.0 V (PW of 60 µs and frequency of 140 Hz) on the left side. Her symptoms were intermittent, with sudden laughter at higher voltages and episodes of normal mood. Similarly, the contralateral DBS was off during the programming session. Uncontrollable laughter was reproducible with unilateral left STN stimulation but did not occur with right STN stimulation. Programming of the other contacts provided improvement in parkinsonism without acute mood effects. The final programming settings were bipolar with contact 3 positive, contact 2 negative, amplitude 2.5 V, PW 60 µs, and frequency 140 Hz. Her UPDRS-III score off medications at 1 year improved from 51 to 26 points.
Anatomical Location of the Leads
Post-operative outlining and labeling of subthalamic structures were performed by identifying the SN and STN from the contrast between white and gray matter on stereotactic MRI. The anatomical location of the four contacts was determined after matching the post-operative imaging with the pre-operative stereotactic MRI (BrainLab, Germany, and WayPoint™ Navigator Software, USA). After matching, the anatomical location was determined contact by contact in a three-dimensional anatomical environment for each patient (Figures 1 and 2). The standard STN coordinates (X, Y, Z) commonly used for indirect targeting are 12 mm lateral from the midcommissural point (MCP), 3-4 mm posterior from the MCP, and 3-4 mm below the intercommissural line. The standard trajectory used an anterior angle of 60° and a lateral angle of 15° from the mid-sagittal plane. The trajectory of the electrode in the first patient showed an anterior angle of 59° and a lateral angle of 16°. The active contact 0 was located 9.48 mm lateral to the MCP, 4.32 mm posterior to the MCP, and 5.43 mm inferior to the MCP. Analysis of the post-surgical images showed that contact 0 was a ventrally and medially located electrode within the STN (Figure 1). In the second patient, the trajectory of the electrode demonstrated an anterior angle of 51° and a lateral angle of 12°. The anatomical localization of contact 0 was 11.64 mm lateral, but 1.61 mm posterior and 6.10 mm inferior to the MCP, at the level of the lateral SNr (Figure 2).
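For illustration, the offsets of the active contacts from the standard indirect STN target can be computed directly from the MCP-relative coordinates quoted above. In this sketch, the standard target is taken as the midpoint of the quoted ranges (12, 3.5, 3.5 mm); this midpoint is an assumption for illustration, not the planning coordinates actually used for these patients.

```python
import numpy as np

# MCP-relative coordinates in mm, ordered (lateral, posterior, inferior).
standard_target = np.array([12.0, 3.5, 3.5])   # midpoints of the quoted indirect-targeting ranges (assumption)
contact0_case1  = np.array([9.48, 4.32, 5.43])
contact0_case2  = np.array([11.64, 1.61, 6.10])

for label, contact in [("case 1", contact0_case1), ("case 2", contact0_case2)]:
    # A negative posterior offset means the contact lies anterior to the standard target.
    offset = contact - standard_target
    print(f"{label}: offset (lat, post, inf) = {offset} mm, "
          f"distance = {np.linalg.norm(offset):.2f} mm")
```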
DISCUSSION
Uncontrollable laughter after STN-DBS in PD occurred in two patients with electrodes located more anteriorly, medially, and inferiorly than originally planned. In both instances, uncontrollable laughter appeared acutely with stimulation of the most ventral contacts. Post-operative lead location and clinical features suggest that stimulation of the ventral STN or closely related structures accounted for this reproducible emotional side effect. In patient 2, the responsible contacts were decisively inferior to the STN (Figures 1 and 2). Uncontrollable laughter has been associated with a direct effect of stimulation of the STN or the lesion effect of DBS surgery (9,11). The STN is a small structure, and electrical current can spread from one territory to another, especially when high-amplitude stimulation is used (6,12). It is considered an integral part of the indirect pathway by which the striatum controls the output of the basal ganglia and motor function (13,14). There is evidence indicating that the STN is a critical component of networks controlling motor function as well as cognition and emotion (8,15,16). The STN has several functional regions, including the dorsolateral area for motor control and the ventral-medial region interfacing with limbic circuits (3,17). The medial STN is also adjacent to the lateral hypothalamus. Therefore, stimulation of the ventral and medial STN can impact emotion and cognition networks, inducing mood and behavior changes (7,15,18). The medial forebrain bundle (MFB) is a key structure of the mesolimbic-dopamine system that is related to affective disorders (19). It is located in the lateral wall of the hypothalamus and bidirectionally connected with the hypothalamus, ventral striatum, accumbens nucleus, and septal area (20,21). Stimulation of the MFB, which is located medially and anteriorly in relation to the STN limbic territories, has been associated with acute hypomania (22,23). Consequently, stimulation of the ventral and medial STN can result in spreading of electrical current to co-activate the MFB. The trajectory of the DBS electrodes could affect the anatomical position of the contacts. The double oblique trajectory of the quadripolar electrode places the lower contacts more medial and ventral, closer to the limbic STN (3,6,24). Stimulation of limbic-related brain structures outside of the STN could also explain the emotional and behavioral effects (25). The substantia nigra (SN) is one of the major dopamine-producing areas of the brain and the main output of the basal ganglia, with connections and functions that extend beyond motor control (26,27). It is also thought to play important roles in behaviors including learning, drug addiction, and emotion. The pars reticulata of the substantia nigra (SNr) is located in the lateral SN (28). It is an important processing center in the basal ganglia. The GABAergic neurons in the SNr convey the final processed signals of the basal ganglia to the thalamus and superior colliculus (29,30). These GABAergic neurons also spontaneously fire action potentials; in rats, the frequency of these action potentials is roughly 25 Hz (31). The purpose of these spontaneous action potentials is to inhibit targets of the basal ganglia, and decreases in inhibition are associated with movement (32). The STN gives excitatory input to the SNr and modulates the rate of firing of these spontaneous action potentials (33). Acute mood change has been reported with stimulation of the SN (26).
In this report, the acute mood change in patient 2 was noticed with stimulation of the most ventral contact of an anteriorly and inferiorly located lead. The anatomical localization of the active contact is within the SN, probably in the SNr.
Three reports have been published regarding uncontrollable laughter after STN-DBS (Table 2). Most case studies report the phenomenon occurring with bilateral STN stimulation (4,5,7). All of the reports implicated the most ventral contact as the cause except for one by Krack et al., who reported onset of mirthful laughter in a patient with stimulation of contact 3 from 3.2 V/60 μs/130 Hz to 5.0 V/60 μs/130 Hz on the right side (4). However, no detailed anatomical location of the postoperative leads has been reported in previously published cases. When comparing different DBS targets, ventral STN stimulation may lead to more mood and cognitive changes than GPi stimulation, owing to possible spreading of stimulation to the limbic region (34). This is also consistent with our findings that unilateral ventral and medial STN stimulation induced acute mood change with uncontrollable laughter. Additionally, cerebral infarctions can cause pathological laughing and crying. Previous reports indicate that post-stroke pathological laughing and crying occurs with bilateral, multiple hemispheric lesions, and in the pons, specifically the bilateral paramedian basal and basal-tegmental areas (35). This is consistent with the postulated anatomical localization of the centers for facial expression residing in the lower brainstem, along with the thalamus and hypothalamus (36).
CONCLUSION
Taken together, our findings suggest that acute uncontrollable laughter can be elicited by stimulation of different limbic structures contiguous to the STN. Our report highlights that, in addition to the limbic STN, the MFB and the SNr might play an important role in the induction of acute mood changes. It is unclear why positive valence was noted with stimulation as opposed to depressed mood. In addition, it would be valuable to monitor whether the uncontrollable laughter is replicated at long-term follow-up. Stimulation through medially and inferiorly placed contacts, or through electrodes located inferior and anterior to the STN, should be considered as a possible cause when acute behavioral symptoms are elicited with neurostimulation. Clinicians should consider using more dorsal contacts for DBS programming if this unusual side effect is encountered.
ETHICS STATEMENT
The patients gave signed consent for the release of their health information in the format of this case report. The authors consulted the Albany Medical Center IRB Office. Additional ethics review was not required for this case study.
AUTHOR CONTRIBUTIONS
YH and AR-Z analyzed the data and wrote the manuscript. JP and LG collected all the data. JA analyzed the data and made the figures. JD and EM edited the manuscript.
ACKNOWLEDGMENTS
We would like to thank the patients for agreeing to allow use of their available clinical data for scientific use and publication.
FUNDING
Research reported in this publication was supported by Diamond research foundation at Dartmouth-Hitchcock Medical Center.
A metacognitive approach to the study of motion-induced duration biases reveals inter-individual differences in forming confidence judgments
Our ability to estimate the duration of subsecond visual events is prone to distortions, which depend on both sensory and decisional factors. To disambiguate between these two influences, we can look at the alignment between discrimination estimates of duration at the point of subjective equality and confidence estimates when the confidence about decisions is minimal, because observers should be maximally uncertain when two stimuli are perceptually the same. Here, we used this approach to investigate the relationship between the speed of a visual stimulus and its perceived duration. Participants were required to compare two intervals, report which had the longer duration, and then rate their confidence in that judgment. One of the intervals contained a stimulus drifting at a constant speed, whereas the stimulus embedded in the other interval could be stationary, linearly accelerating or decelerating, or drifting at the same speed. Discrimination estimates revealed duration compression for the stationary stimuli and, to a lesser degree, for the accelerating and decelerating stimuli. Confidence showed a similar pattern, but, overall, the confidence estimates were shifted more toward higher durations, pointing to a small contribution of decisional processes. A simple observer model, which assumes that both judgments are based on the same sensory information, captured well inter-individual differences in the criterion used to form a confidence judgment.
Introduction
Deciding about the relative duration of two or more brief visual events can be influenced by both purely visual manipulations, such as visual adaptation (Bruno & Cicchini, 2016;Johnston, Arnold, & Nishida, 2006), and more cognitive factors, such as attention (Cicchini & Morrone, 2009;Pariyadath & Eagleman, 2007;Tse, Intriligator, Rivest, & Cavanagh, 2004). To some extent, the formulation of any visual duration judgment requires contributions from both sensory and cognitive components, but the relevance of each component in forming that judgment is often difficult to estimate. More specifically, we still do not know how to assess the perceptual nature of duration distortions, disentangling it from the cognitive interpretation of the elicited sensations, which can also affect our decisions.
In this study, we were interested in the effect of visual motion on duration judgments. The ability to accurately estimate the duration of an object that moves at a constant or changing speed in our visual environment is essential in many everyday activities. For example, when we need to cross the road, to avoid a collision we need to be able to estimate the time to arrival of approaching cars or bicycles, integrating, among other things, information regarding their driving speed. As much as we can perform the task in this context (Sudkamp, Bocian, & Souto, 2021), the temporal metrics for perceived duration and anticipatory actions were shown to be dissociable (Marinovic & Arnold, 2012). In fact, when we are required to compare objects moving at different speeds, our perception of their duration is often biased. A moving visual object is perceived to last longer than a stationary object of equal duration and with otherwise identical features (Brown, 1931;Kanai, Paffen, Hogendoorn, & Verstraten, 2006;Kaneko & Murakami, 2009;Roelofs & Zeeman, 1951). Also, the duration of an interval containing a visual stimulus with increasing or decreasing speed over time is judged to be different from that of an interval that embeds an identical stimulus moving at a constant speed (Binetti, Lecce, & Doricchi, 2012;Binetti, Tomassini, Friston, & Bestmann, 2020;Bruno, Ayhan, & Johnston, 2015;Matthews, 2011;Sasaki, Yamamoto, & Miura, 2013), even when the two stimuli have the same average speed. At the same time, the precision of duration judgments remains constant across different speed profiles, indicating that differences in speed affect perceived duration but not duration discrimination. These results support the idea, proposed by a content-sensitive clock model (Johnston, 2010;Johnson, 2014) and by a neural network model (Roseboom et al., 2019), that the sensory content of an interval (for example, the speed profile of a stimulus) and not just its onset and offset is taken into account when a decision about its duration is formulated in our brain. Beyond biases in sensation, however, there might as well be a contribution of more decisional factors, such as if participants base their response on an irrelevant feature of the stimulus. These could introduce biases in the response even if perception was in fact unaffected.
Our ability to reflect on our own performance allows us to assign levels of confidence to our decisions. These self-evaluations contribute to guide our future behavior (Lisi, Mongillo, Milne, Dekker, & Gorea, 2021). Traditionally, a confidence judgment has been thought to reflect our beliefs in the correctness of these decisions. In the visual modality, much attention has been focused on how confidence evaluations are informed by perception (Arnold, Saurels, Anderson, & Johnston, 2021;de Gardelle & Mamassian, 2014;Mamassian & de Gardelle, 2021;Song, Kanai, Fleming, Weil, Schwarzkopf, & Rees, 2011;Spence, Dux, & Arnold, 2016;Zylberberg, Barttfeld, & Sigman, 2012). An alternative view is that confidence reflects instead the consistency of our perceptual experience (Caziot & Mamassian, 2021).
It has been recently proposed that the way confidence judgments map onto performance in perceptual tasks might disambiguate between perceptual biases and systematic decision biases (Gallagher, Suddendorf, & Arnold, 2019). The authors adapted two groups of participants to either coherent motion or random motion and instructed only the latter group to always provide the same response when uncertain about the motion direction of the test stimulus. In this way, the shift in the reported direction of motion in the condition with random motion (which was purely "decisional") was found to be comparable to that induced by adapting to coherent motion, but the uncertainty peaks (i.e., stimulus levels that elicited minimal confidence) were aligned with performance only after adaptation to coherent motion, revealing a dissociation between the two measures. This means that, when the observed effect is little or not affected by decisional factors, confidence judgments can be equally good estimators of perceptual changes as discrimination judgments. The assumption is that, when judgments are primarily based on sensory information, our confidence in those judgments should be minimal when there is no sensory difference being detected (i.e., when we see the two stimuli as being identical along the dimension of interest). On the other hand, when the judgments are subject to a decisional bias, we would expect a dissociation between the point of minimal confidence and the point where the two stimuli are seen to be identical. For example, when required to compare the duration of two stimuli that appear equally long, we may repeatedly respond that the faster stimulus is of a shorter duration, inducing a bias in discrimination, but our confidence in those judgments would still be minimal at this point, indicating that the distortion is primarily due to a decisional but not a sensory difference.
So far, this approach has been employed in various contexts, such as to show that implied motion aftereffect depends more on decision making than perceptual processes (Gallagher, Suddendorf, & Arnold, 2021), to rule out a gaze-contingent response bias in the effect of pursuit eye movements on perceived background motion (Luna, Serrano-Pedraza, Gegenfurtner, Schutz, & Souto, 2021), or to demonstrate that both motor and sensory adaptation influence numerosity perception at a pre-decisional stage (Maldonado Moscoso, Cicchini, Arrighi, & Burr, 2020). Also, confidence estimates were shown to closely follow perceptual decisions both after sensory adaptation (to orientation or color) and after manipulation of prior statistics of the presented stimuli (Caziot & Mamassian, 2021). To our knowledge, however, no study has yet approached duration perception using the same method.
Here, in two experiments, we asked our participants to compare the relative duration of two sequentially displayed visual intervals, one of them containing a stimulus with the same duration (500 ms) across trials and different speed profile across conditions (drifting at a constant rate, stationary, accelerating or decelerating) and the other containing a stimulus with variable duration and drifting at a constant speed. Participants were required to judge the relative duration of the two intervals and then rate their confidence in the correctness of their decision. We estimated, for each speed profile, the point of perceived equality of the durations of the two stimuli and the point that elicited the minimum confidence, and we used them as our measure of perceived duration as estimated by discrimination and confidence, respectively. We then compared the alignment between these two measures. We found that both measures were affected by stimulus speed in the expected direction (i.e., strong duration compression for stationary stimuli, milder compression for accelerating and decelerating stimuli). However, the magnitude of these shifts was overall smaller for confidence, indicating a small contribution of decisional processes to the effect of speed on perceived duration. Finally, we describe the predictions of a simple observer model, which assumes that both types of judgments depend on the discriminability of the same sensory signals but that high confidence requires an internal criterion (Mamassian & de Gardelle, 2021) to be exceeded. We found that we can use this criterion to account for inter-individual differences in confidence and to estimate the contributions of different components to the formation of a confidence judgment.
Participants
Two separate groups of observers took part in either Experiment 1 or Experiment 2. Participants were either University of York students and participated for course credit or were recruited through Prolific (Palan & Schitter, 2018) at https://www.prolific.co/ and received monetary compensation for their participation (£7 per hour). We selected participants who were fluent in the English language and who reported normal or corrected-to-normal vision. For Experiment 2, we also restricted the age range for participation to 18 to 50 years old. The study was conducted in accordance with the tenets of the Declaration of Helsinki and approved by the ethical committee of the University of York. Informed consent was sought from all participants prior to the experiment. The individual and processed data presented in this manuscript are available at https://osf.io/f2ta4/.
Apparatus
Participants completed the experiment on the online platform Gorilla (Anwyl-Irvine, Massonnie, Flitton, Kirkham, & Evershed, 2020). At the beginning of the experiment, we estimated the screen size of each participant by using the "credit card" method (Li, Joo, Yeatman, & Reinecke, 2020). Participants were instructed to place a credit card (or any other card of the same format, with a width of 85.6 mm) against a credit card image on their screen and to adjust the image size using a slider (controlled by their mouse) so that the image matched the size of their actual card. We also estimated their viewing distance by asking them to report how far away from the screen they were sitting. We suggested a simple way to estimate this distance using their arm as reference: a picture of an arm was shown on the screen together with instructions informing them that, on average, the length of a forearm, measured from the elbow to the tip of the middle finger, is ∼43 to 48 cm. We stressed the importance of keeping the same distance from the screen throughout the experiment. We used these two estimates (i.e., screen size and viewing distance) to adjust the absolute size of our stimuli so that their size in degrees of visual angle was kept constant across participants.
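A calibration of this kind is typically used to convert on-screen sizes into degrees of visual angle. The sketch below is a minimal illustration of that conversion; the function name and the example numbers (screen resolution, measured width, viewing distance) are hypothetical and are not the values used in the study.

```python
import math

def pixels_per_degree(screen_width_px, screen_width_cm, viewing_distance_cm):
    """Number of pixels subtended by one degree of visual angle at fixation."""
    cm_per_px = screen_width_cm / screen_width_px
    # Size on screen (in cm) of a 1-deg visual angle centered on the line of sight.
    cm_per_deg = 2 * viewing_distance_cm * math.tan(math.radians(0.5))
    return cm_per_deg / cm_per_px

# Hypothetical calibration: a 1920-px-wide screen measured at 34.4 cm,
# viewed from 45 cm (roughly a forearm length, as suggested to participants).
ppd = pixels_per_degree(1920, 34.4, 45)
print(f"{ppd:.1f} px/deg -> a 5-deg stimulus is about {5 * ppd:.0f} px wide")
```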
Stimuli and procedure
Participants were required to fixate a black cross in the center of the screen while two test stimuli were sequentially displayed, separated by a 500-ms blank page ( Figure 1a). One of the intervals, containing the standard stimulus, had the same duration across trials (500 ms), whereas the duration of the other interval, containing the comparison stimulus, varied on a trial-by-trial basis to generate a psychometric function. Both the presentation order (standard first or standard second) and the relative spatial location of the two test stimuli (standard left or standard right) were randomized across trials to avoid a response bias and visual adaptation, respectively. Participants were required to first report which interval appeared to have the longer duration (discrimination judgment) and then indicate whether their confidence in their judgment was either high or low (confidence judgment).
The comparison stimulus drifted at a constant speed of 5°/s, whereas we manipulated the speed profile of the standard stimulus across four experimental conditions ( Figure 1b). In the drifting condition, the standard drifted at the same constant speed as the comparison; in the stationary condition, the standard was stationary; in the accelerating condition, the speed of the standard increased linearly across the interval between 0 and 10°/s (average speed: 5°/s); and, finally, in the decelerating condition, the speed of the standard decreased linearly across the interval between 10°/s and 0°/s (average speed, 5°/s). Standard and comparison always drifted in opposite left-right directions (except for the stationary condition, where no drifting was associated with the standard).
The stimuli were generated in PsychoPy (Peirce et al., 2019), and each trial type was saved as an MP4 video file (H.264 format, 60 frames per second) and then uploaded in the experimental setup on Gorilla. A trial-type video contained the following sequence of events: initial blank page (500 ms), first test interval (standard or comparison), blank page between the tests (interstimulus interval = 500 ms), second test interval (standard or comparison), then final blank page (300 ms). For each participant and for each trial, we calculated the difference between the start and end times of the video playback in Gorilla and compared it with the expected trial duration to make sure that the timing of the video presentation was not substantially distorted (see next section).
Figure 1. (a) The procedure adopted in Experiments 1 and 2. Two visual stimuli (luminance-modulated Gabors similar to those depicted here in Experiment 1; simple gratings in Experiment 2) were sequentially presented (separated by a 500-ms blank page) and participants were required to make two decisions using their computer keyboard. First, they had to judge the relative duration of the intervals that contained the test stimuli ("Which is longer?"), and then they rated their confidence in their discrimination judgment ("How confident are you your response was correct?"). The standard duration was fixed across trials (500 ms), whereas the comparison duration varied on a trial-by-trial basis (between 200 and 800 ms in Experiment 1 and between 50 and 950 ms in Experiment 2). (b) Speed profiles over time for the standard stimulus in the four experimental conditions. In the drifting condition, the speed of the standard remained constant at 5°/s across the interval. In the stationary condition, there was no motion associated with the standard. In the accelerating condition, the speed of the standard increased linearly across the interval from 0°/s to 10°/s (average speed, 5°/s). In the decelerating condition, the speed of the standard decreased linearly across the interval from 10 to 0°/s (average speed, 5°/s). In all of the experimental conditions, the comparison stimulus always drifted at a constant rate of 5°/s.
Exclusion criteria
For both experiments, we adopted four different types of exclusion criteria to make sure that the parameters we extracted from psychometric fits were of sufficient quality. First, participants were prevented from starting the online experiment unless they responded correctly to three comprehension checks regarding where they needed to look, what characteristic of the stimulus they needed to respond to, and what the confidence judgment referred to. Second, participants were excluded according to the following goodness-of-fit criteria (based on pilot data): if half of the confidence interval for either the duration point of subjective equality (PSE) or the point of minimal confidence (PMC) estimate was larger than 325 ms; if the R 2 for either the duration PSE or the PMC estimate was lower than 0.15; and if the proportion of low confidence responses for both the shortest and the longest interval was larger than 0.5. Third, we ran an outlier analysis, and participants were excluded if the PSE, the PMC, the just noticeable difference (JND), or the full width at half height (FWHH) differed by more than three scaled median absolute deviations (MADs) from the group median in any given condition. Finally, participants were excluded if the mean difference between the mean duration of the video playback in Gorilla (averaged across all trials) and the actual mean video duration (averaged across all video durations) exceeded ±10% of the actual mean video duration (which was 2300 ms for both Experiments 1 and 2).
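The outlier rule based on scaled median absolute deviations can be implemented in a few lines. The sketch below assumes the usual 1.4826 consistency scaling for the MAD (the paper only specifies "scaled" MADs) and uses hypothetical PSE values.

```python
import numpy as np

def mad_outliers(values, n_mads=3.0):
    """Flag values more than n_mads scaled MADs from the median.
    The 1.4826 scaling (consistency constant for normally distributed data)
    is an assumption; the paper only states 'scaled median absolute deviations'."""
    x = np.asarray(values, dtype=float)
    med = np.median(x)
    scaled_mad = 1.4826 * np.median(np.abs(x - med))
    return np.abs(x - med) > n_mads * scaled_mad

# Hypothetical PSEs (ms) for one condition: only the last value would be excluded.
pse = [480, 510, 495, 505, 520, 515, 500, 900]
print(mad_outliers(pse))
```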
To our knowledge, Experiment 1 was the first attempt at measuring confidence judgments in an online study on time perception. Therefore, the sample size we deemed acceptable for Experiment 1 was based on previous psychophysical and neuroimaging studies on the effects of stimulus speed on time perception (Binetti et al., 2012;Binetti et al., 2020;Bruno et al., 2015;Matthews, 2011;Sasaki et al., 2013), where the sample size varied between 5 and 28 participants. We decided to opt for a larger sample size as we expected higher inter-individual variability with data collected online, under less controlled conditions. For Experiment 2, we ran a power analysis using G*Power 3.1 (Faul, Erdfelder, Buchner, & Lang, 2009). We calculated that, for a repeated-measures analysis of variance (ANOVA), we needed 53 participants to be able to detect a small to medium effect size (F = 0.2) with an alpha error probability of 0.05 and a power of 0.8. Our estimate of the correlation between repeated measures was based on what we observed in Experiment 1 (average Pearson's r = 0.25).
Pre-registration
Experiment 2 was pre-registered (https: //osf.io/59ywt). The following statistical analyses were exploratory and we did not pre-register them: one-way ANOVAs on precision estimates and confidence peak heights, Deming regression analysis, and all of the paired-samples t-tests. The description of the model was not pre-registered, either.
Experiment 1: Stimulus speed affects both duration discrimination and confidence judgments
Methods
In Experiment 1, participants had to pay attention to the relative duration of two subsecond intervals containing luminance-modulated Gabor gratings (vertically oriented, spatial frequency = 1 c/°), displayed 5° away from the center of the monitor on the horizontal midline. The diameter of the grating window was 5° of visual angle, the standard deviation of the Gaussian spatial envelope was 0.83° of visual angle, and the Michelson contrast was 100%. The standard had a fixed duration (500 ms) across trials and a variable speed profile across conditions. The comparison interval always contained a drifting grating (constant speed of 5°/s), whereas its duration varied on a trial-by-trial basis in seven steps (200, 300, 400, 500, 600, 700, or 800 ms). At the end of each trial, participants had to report, first, a discrimination judgment ("Which was the longer interval?") and then a confidence judgment ("How confident are you your duration judgment was correct?"). Psychometric fits were determined for each participant.
In Experiment 1, the combination of speed profiles, interval durations, relative locations, and presentation orders yielded a total of 112 different trial types (4 experimental conditions × 7 durations × 2 standard/comparison relative locations × 2 presentation orders). Each trial was saved as an MP4 video file (H.264 format, 60 frames per second) and then uploaded in the experimental setup on Gorilla. Participants completed an initial block of 28 practice trials (one repetition for each of the seven comparison durations for each of the four experimental conditions) to familiarize themselves with the task, followed by four experimental blocks (one for each experimental condition) of 140 trials each (corresponding to 20 repetitions for each of the seven comparison durations presented in a randomized order), for a total of 560 experimental trials. The order of the blocks was randomized across participants.
Data analysis
We fitted cumulative Gaussian functions through the individual and mean discrimination data. They described the proportion of trials where the interval containing the comparison was judged to be longer than the interval containing the standard, as a function of the comparison duration. The PSE (defined as the 50% point on the fitting function) was our discrimination measure of perceived duration. The JND, corresponding to half the distance between the 25% and 75% points on the psychometric function, was our measure of the precision of participants' discrimination estimates.
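A cumulative Gaussian fit of this kind, with the PSE read off as the 50% point and the JND as half the distance between the 25% and 75% points, can be sketched as follows. The response proportions are hypothetical, and the fitting routine (least squares via scipy) is an assumption, as the paper does not state which fitting procedure was used.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Proportion of "comparison longer" responses per comparison duration (hypothetical data).
durations = np.array([200, 300, 400, 500, 600, 700, 800], dtype=float)
p_longer = np.array([0.05, 0.10, 0.30, 0.55, 0.80, 0.95, 1.00])

def cum_gauss(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(cum_gauss, durations, p_longer, p0=[500.0, 100.0])

pse = mu  # 50% point of the fitted function
jnd = 0.5 * (norm.ppf(0.75, mu, sigma) - norm.ppf(0.25, mu, sigma))  # half the 25-75% distance
print(f"PSE = {pse:.1f} ms, JND = {jnd:.1f} ms")
```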
We fitted raised Gaussian functions through the individual and mean confidence data, which described the proportion of "low confidence" responses as a function of the comparison duration. The peak center of the Gaussian fit, which is the PMC, was our confidence measure of perceived duration. The FWHH of the Gaussian fit was our measure of the precision of participants' confidence estimates.
Figure 2. Group data in Experiment 1. (a) Cumulative Gaussian functions (fitted through data from all participants at once; blue fits) are plotted for discrimination judgments together with the mean proportions of "comparison longer" responses (averaged across all participants; blue circles) for all seven comparison durations and for the four experimental conditions. Note that those fits are shown here and in panel b for reference but the statistics are based on psychometric fits to individual data. The vertical blue lines represent the PSEs (i.e., corresponding to the 50% points on the functions), and the vertical dashed lines (here and in panel b) indicate the actual standard duration (i.e., 500 ms). (b) Raised Gaussian functions (red fits) are plotted for confidence judgments together with the mean proportions of "low confidence" responses for all of the comparison durations and all of the experimental conditions. The vertical red lines point to the duration levels corresponding to the PMCs (i.e., the peaks of "low confidence" responses). Error bars indicate ±1 SEM.
All of the duration and precision estimates used for statistical analyses were derived from individual fits. For reference, we provide the fitted functions based on the data of all participants in Figures 2 and 4.
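The raised Gaussian fit used for the confidence data can be sketched analogously. Here the PMC is taken as the fitted peak center and the FWHH is computed from the fitted width of the Gaussian component above its baseline; the functional form (a Gaussian plus a constant baseline) and the data values are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

durations = np.array([200, 300, 400, 500, 600, 700, 800], dtype=float)
p_lowconf = np.array([0.10, 0.20, 0.55, 0.75, 0.60, 0.25, 0.15])  # hypothetical proportions

def raised_gauss(x, baseline, amplitude, mu, sigma):
    return baseline + amplitude * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

p0 = [0.1, 0.6, 500.0, 150.0]
(baseline, amplitude, mu, sigma), _ = curve_fit(raised_gauss, durations, p_lowconf, p0=p0)

pmc = mu                                          # duration eliciting maximal uncertainty
fwhh = 2 * np.sqrt(2 * np.log(2)) * abs(sigma)    # width of the Gaussian component at half its height
peak_height = baseline + amplitude                # proportion of "low confidence" responses at the PMC
print(f"PMC = {pmc:.1f} ms, FWHH = {fwhh:.1f} ms, peak height = {peak_height:.2f}")
```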
We ran a repeated-measures ANOVA (4 × 2) to test the effect of stimulus type (stationary, drifting, decelerating, or accelerating), judgment (discrimination or confidence), and their interaction on our central tendency (i.e., PSE and PMC) measures. As we observed a small but consistent bias in the drifting condition (which we considered our baseline condition as there was no difference between the speed profiles of the standard and comparison intervals), we also tested the same effects on the changes in PSE and PMC relative to the same measures in the drifting condition. In this case, the stimulus type factor had three levels, corresponding to the perceived duration change for the stationary, accelerating, and decelerating conditions. Where required, we applied the Greenhouse-Geisser correction for violation of sphericity. Significant effects of stimulus type on the measures of central tendency were followed up by planned contrasts testing that stationary < accelerating < decelerating < drifting (i.e., three tests) for each judgment type. The Bonferroni correction for multiple comparisons was applied to the significance level of the planned contrasts (p = 0.05/3 = 0.0167) and of the paired-sample t-tests between the PSEs and the PMCs for each of the four speed conditions (p = 0.05/4 = 0.0125), as well as those between the PSE and PMC changes in the stationary, accelerating, and decelerating conditions relative to the drifting condition (p = 0.05/3 = 0.0167). We ran separate one-way ANOVAs to test the effect of stimulus type on our precision measures (i.e., JND and FWHH) and on the peak heights of the confidence functions (i.e., curve height at PMC). The F statistic of the Brown-Forsythe test was reported when the assumption of homogeneity of variances was violated.
Figure 3. (a) Box plots of the individual and mean PSE (blue) and PMC (red) estimates presented here for all of the experimental conditions. Here and in panel b, the boxes are drawn from the first to the third quartile, the blue and red horizontal lines depict the means, and the whiskers are drawn to the highest and lowest data points within 1.5 times the distance between the first and third quartile (i.e., the interquartile range) from above the third quartile and from below the first quartile, respectively. Circular symbols represent individual PSE and PMC estimates. The horizontal dashed line indicates the actual standard duration. (b) Box plots for the JNDs of the discrimination judgments (blue, left panel) and the FWHHs for the confidence judgments (red, right panel) for all of the experimental conditions.
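The 4 × 2 repeated-measures design described above can be analyzed, for example, with statsmodels' AnovaRM. This is an illustrative sketch with simulated, balanced data, not the software actually used in the study, and it does not include the Greenhouse-Geisser correction that the paper applied when sphericity was violated.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format table, one row per participant x stimulus type x judgment type.
# Column names and the simulated values are placeholders, not the study data.
rng = np.random.default_rng(0)
rows = [{"subject": s, "stimulus": stim, "judgment": judg,
         "estimate": 500 + rng.normal(0, 30)}
        for s in range(20)
        for stim in ("drifting", "stationary", "accelerating", "decelerating")
        for judg in ("discrimination", "confidence")]
df = pd.DataFrame(rows)

# 4 x 2 repeated-measures ANOVA (stimulus type x judgment type).
res = AnovaRM(df, depvar="estimate", subject="subject",
              within=["stimulus", "judgment"]).fit()
print(res)
```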
Results
In Figure 2, we plotted the mean psychometric fits and data points (averaged across all participants) for the two judgment types (i.e., discrimination and confidence) and for the four speed profiles of the standard stimulus (i.e., drifting, stationary, accelerating, and decelerating). As far as the discrimination judgments are concerned (Figure 2a), the mean psychometric functions appeared steep and ordered, and the errors associated with the mean data points are small. The mean data points for the confidence judgments (Figure 2b) were well captured by raised Gaussian functions. As expected, the degree of uncertainty was associated with the duration difference between the standard and comparison interval, so that smaller duration differences yielded a higher proportion of low confidence judgments.
It is noticeable that even the highest proportion of "low confidence" responses (i.e., the curve height at the point of minimal confidence) for all speed profiles was substantially smaller than 1, suggesting that participants were generally overconfident about the correctness of their duration judgments even when there was no actual difference between the standard and comparison durations. On the other hand, near the tails of the distribution, when the comparison duration was substantially different from that of the standard, the participants' uncertainty was above 0, indicating a lack of confidence, especially for very long comparison durations.
In this study, we were mainly interested in the degree of alignment between the PSE, which was our discrimination measure of perceived duration, and the PMC, which was our confidence measure of perceived duration. Figure 2 shows that there was a fair degree of alignment between the two measures of central tendency. We used the individually determined estimates to analyze this effect statistically (individual and mean estimates for PSE and PMC are plotted in Figure 3a).
The same pattern of results emerged when the drifting condition was used as a baseline. By subtracting the PSE and PMC of the drifting condition from those of the other speed conditions for each participant and then testing the effects of speed condition and judgment type, we once again observed significant main effects for both speed condition, F(1.65, 52.92) = 33.92, p < 0.0001, and judgment type, F(1, 32) = 8.01, p = 0.008, but no significant interaction, F(1.5, 47.89) = 0.664, p = 0.478. Also, none of the comparisons between PSE and PMC changes relative to the drifting condition reached statistical significance after correcting for multiple comparisons (Bonferroni-corrected p = 0.0167; for paired-samples t-tests: stationary duration change, t(32) = 1.95, p = 0.06; accelerating duration change, t(32) = 1.65, p = 0.108; and decelerating duration change, t(32) = 2.52, p = 0.017).
The JNDs extracted from the psychometric functions for the discrimination judgments (Figure 3b, blue symbols and box plots) did not differ across conditions, F(3, 128) = 0.86, p = 0.466. Similarly, no difference was detected for the FWHHs for the confidence judgments (Figure 3b, red symbols and box plots), F(3, 128) = 1.42, p = 0.239, indicating that the different speed profiles of our stimuli had comparable effects on the precision of both the discrimination and confidence judgments. The proportion of "low confidence" responses peaked at values that were substantially smaller than 1 (mean peak heights: drifting, 0.75 ± 2.46; stationary, 0.71 ± 0.21; accelerating, 0.75 ± 0.21; decelerating, 0.74 ± 0.21), implying that participants were generally overconfident even when their performance was at chance. This tendency was not influenced by the speed profile of the test stimuli, F(3, 128) = 0.21, p = 0.891.
Experiment 2: Participants did not use feedback to calibrate their confidence
To help our participants to better calibrate their confidence judgments, in Experiment 2 we let participants know if their duration judgment was correct at the end of each of the 32 practice trials. We also used a wider range of durations for the comparison interval, adding anchor points (i.e., durations that were very clearly shorter or longer than 500 ms), and we interleaved trials from the different speed conditions within the same block (rather than having all the trials from one condition in the same block, as in Experiment 1).
We deemed these latter changes necessary, as in the drifting condition of Experiment 1, where no change in perceived duration was expected (as both standard and comparison had the same speed profile), we actually observed a small but significant duration dilation for both the discrimination judgments, with mean PSE = 521.78 ± 30.89 ms and a one-sample t-test against 500 ms, t(32) = 4.05, p < 0.001, and the confidence judgments, with mean PMC = 525.72 ± 38.78 ms, t(32) = 3.81, p = 0.001. We thought this unexpected bias might have been due to two factors: first, the blocked presentation and, second, the narrow duration range. If we look at the panel corresponding to the drifting condition in Figure 2a, we can see that, when the standard and comparison intervals had the same duration, participants were at chance, as expected, but they underestimated the duration of the longer intervals (especially 800 ms), and this shifted the PSE toward higher values.
Methods
The overall structure of Experiment 2 was identical to that of Experiment 1, with the following exceptions. The stimuli were not Gabors but simple luminance-modulated gratings. The duration of the comparison stimulus varied across trials in nine steps (50, 162, 275, 388, 500, 612, 725, 838, and 950 ms). Participants completed five experimental blocks of 144 trials each, for a total of 720 experimental trials (4 experimental conditions × 9 durations × 2 standard/comparison relative locations × 2 presentation orders × 5 repetitions). We interleaved trials from the different conditions within the same block. The initial practice block consisted of 32 trials (four repetitions for each of the eight comparison durations, excluding trials where standard and comparison had the same duration, from the drifting condition only), and, at the end of each practice trial (which required a duration judgment only), participants received feedback about their performance. No feedback was provided for experimental trials.
Data analysis
After conducting the same statistical analyses as in Experiment 1, the higher number of participants and comparison durations in Experiment 2 also allowed us to further explore our data by performing an orthogonal (or Deming) regression (Deming, 1943; Hall, 2014; Kane & Mroch, 2020) between our discrimination and confidence estimates of perceived duration for each speed condition. This analysis can be used to determine the equivalence of measurement instruments. Unlike linear regression, orthogonal regression assumes that both the dependent and the independent variables (which are supposed to be linearly correlated) are measured with error (as is the case in the present study), and it minimizes the distances of the data points in both the x and y directions from the fitted line; that is, it minimizes the sum of squared orthogonal deviations. It also produces confidence interval estimates for the slope and the intercept of the orthogonal fit, which can be used to test whether the two parameters are significantly different from 1 and 0, respectively, indicating a deviation from a perfect linear correlation between the two measures. In addition, we determined the Bayes factor, which gave us the amount of evidence favoring the reduced model (with slope fixed to 1 and intercept fixed to 0) over the orthogonal model given the data. To calculate the Bayes factor, we used the large sample approximation method (Burnham & Anderson, 2004). A similar application of this method can be found, for example, in Schütz, Kerzel, and Souto (2014). We first determined the Bayesian information criterion (BIC) (Schwarz, 1978) for both models: BIC = n · ln(RSS/n) + k · ln(n), where n corresponds to the number of participants, RSS is the residual sum of squares, and k is the number of free parameters (0 for the reduced model and 2 for the orthogonal model). Then, for each model i, we determined the posterior probability p i = exp(−0.5 · ΔBIC i ) / Σ r exp(−0.5 · ΔBIC r ), where ΔBIC i is the difference between the BIC for that model and the lower of the two BICs (i.e., ΔBIC for the minimum-BIC model is 0). Finally, the Bayes factor was calculated as the ratio between the two posterior probabilities: BF 10 = p reduced / p orthogonal .
Results
Figure 4 shows the mean psychometric functions for both judgment types and for all the speed conditions of Experiment 2. As in Experiment 1, the functions for the discrimination judgments were steep and ordered, and the confidence in the correctness of participants' decisions, as predicted by a raised Gaussian function, was determined by the magnitude of the difference between standard and comparison durations.
As for Experiment 1, this pattern of results remained unchanged when we ran the same analyses on the differences in PSE and PMC relative to the drifting condition. We observed significant main effects for speed condition, F(1.24, 64.41) = 54.84, p < 0.0001, and judgment type, F(1, 52) = 9.54, p = 0.003, as well as for the interaction between these two factors, F(2, 104) = 3.91, p = 0.023. Only for the stationary condition did the comparison between PSE and PMC changes reach statistical significance after correcting for multiple comparisons (Bonferroni-corrected p = 0.0167; for paired-samples t-tests: stationary duration change, t(52) = 3.73, p < 0.0001; accelerating duration change, t(52) = 0.73, p = 0.472; and decelerating duration change, t(52) = 2.43, p = 0.019).
Figure 5. (a) PSE and PMC estimates presented here for all of the experimental conditions. Here and in panel b, the boxes are drawn from the first to the third quartile, the blue and red horizontal lines depict the means, and the whiskers are drawn to the highest and lowest data points within 1.5 times the distance between the first and third quartile (i.e., the interquartile range) from above the third quartile and from below the first quartile, respectively. Circular symbols represent individual PSE and PMC estimates. The horizontal dashed line indicates the actual standard duration. (b) Box plots for the JNDs of the discrimination judgments (blue, left panel) and the FWHHs for the confidence judgments (red, right panel) for all of the experimental conditions.
To gain a better understanding of the underlying functional relationship between our two central tendency estimates, we conducted an orthogonal or Deming regression analysis (Deming, 1943; Hall, 2014; Kane & Mroch, 2020), which can simply be thought of as a linear regression between two dependent variables (see Data analysis section). As orthogonal regression assumes that the two variables are linearly correlated, we first made sure this was the case for all of the speed conditions (all Pearson's r > 0.45, all p < 0.0001). The orthogonal fits (Figure 6a, blue lines) showed positive correlations that were not perfect. In fact, the 95% confidence intervals derived from the orthogonal regression (Figure 6b) crossed both the 1 line for the slope and the 0 line for the intercept only for the drifting and decelerating conditions. Furthermore, we determined the Bayes factor, which quantifies the evidence supporting the null hypothesis (that the data are better fitted by a reduced model with fixed slope = 1 and fixed intercept = 0, indicating a perfect correspondence between the two estimates) over the alternative hypothesis that an orthogonal model with free-to-vary slope and intercept should be favored. The Bayes factor analysis provided strong and moderate evidence for the null hypothesis for the drifting (BF 10 = 26.03) and accelerating (BF 10 = 8.05) conditions, respectively, indicating that in those conditions the equality line was the best fitting model. However, there was moderate and anecdotal evidence favoring the alternative hypothesis for the stationary (BF 10 = 0.137) and decelerating (BF 10 = 0.8857) conditions, respectively, implying that the two estimates were not perfectly correlated. As in Experiment 1, neither the JNDs for the discrimination judgments (Figure 5b), F(3, 208) = 0.41, p = 0.742, nor the FWHHs for the confidence judgments, F(3, 155.4) = 0.17, p = 0.918, differed across conditions. Also, the feedback during training did not improve the calibration of participants' confidence, as the peak heights were still substantially smaller than 1 (mean peak heights: drifting, 0.71 ± 1.79; stationary, 0.64 ± 0.19; accelerating, 0.70 ± 0.18; decelerating, 0.67 ± 0.19), indicating overconfidence when their performance was at chance. The amount of overconfidence did not change across speed conditions, F(3, 208) = 1.41, p = 0.242.
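A minimal sketch of the Deming regression and the BIC-based Bayes factor described in the Data analysis section is given below; the arrays pse and pmc are hypothetical (one discrimination and one confidence estimate per participant), the error-variance ratio is assumed to be 1, and vertical residuals are used for both models when computing RSS, so this is an illustration rather than the exact computation performed in the study.

```python
# Illustrative sketch (not the authors' code) of the orthogonal (Deming)
# regression and the BIC-based Bayes factor described in the Data analysis
# section. The arrays `pse` and `pmc` are hypothetical (one discrimination and
# one confidence estimate per participant); the error-variance ratio is assumed
# to be 1 and vertical residuals are used for both models when computing RSS.
import numpy as np

def deming_fit(x, y):
    # Closed-form Deming regression (error-variance ratio = 1): minimises the
    # summed squared perpendicular distances of the points from the line.
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    return slope, y.mean() - slope * x.mean()

def bic(rss, n, k):
    # Large-sample BIC approximation based on the residual sum of squares.
    return n * np.log(rss / n) + k * np.log(n)

def bayes_factor_reduced_vs_orthogonal(x, y):
    # BF10 = p(reduced) / p(orthogonal): evidence that the equality line
    # (slope = 1, intercept = 0) is as good as the free orthogonal fit.
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    slope, intercept = deming_fit(x, y)
    rss_orth = np.sum((y - (slope * x + intercept)) ** 2)
    rss_red = np.sum((y - x) ** 2)            # fixed slope = 1, intercept = 0
    delta = np.array([bic(rss_red, n, 0), bic(rss_orth, n, 2)])
    delta -= delta.min()                      # delta-BIC (minimum-BIC model = 0)
    post = np.exp(-0.5 * delta)
    post /= post.sum()
    return post[0] / post[1]
```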
Modeling: A simple observer model captures inter-individual differences in the confidence criterion
We designed an observer model to explain what factors influenced the precision of discrimination and confidence judgments. The model assumed that both discrimination and confidence judgments were based on duration discriminability (individually estimated as the JND based on the actual discrimination judgment). It then simply compared the duration signals associated with the standard and comparison intervals to decide which was longer. For confidence judgments, the difference between the two duration signals divided by our discriminability measure had to exceed an internal confidence criterion for the model to report high confidence.
Methods
We created a simulated dataset consisting of the same number of participants as in Experiment 2. For each judgment type and speed condition, the model generated a psychometric function using the same number of trials used in Experiment 2. For each trial, the model generated a simulated duration signal (SDS), the value of which was randomly sampled from a normal distribution with the actual duration as the mean and the real JND (extracted from the function of the real participant whose discrimination and confidence the model aimed to predict) as the standard deviation:
SDS ∼ N(Actual Duration, JND real )
The model predicted a "comparison longer" judgment if the SDS for the comparison duration exceeded that for the standard duration:
SDS Comp > SDS Stand
The model predicted a "high confidence" judgment if the ratio of the absolute difference between the SDSs for the two tests to the real JND exceeded the confidence criterion:
|SDS Comp − SDS Stand | / JND real > Confidence Criterion
The confidence criterion was the only free parameter of our model, and for each simulated participant we chose the value that minimized the absolute difference between the FWHH of the simulated confidence curve and that of the real confidence curve. After having determined the criterion for each participant, we extracted the JNDs for the discrimination judgments and the FWHHs for the confidence judgments from the simulated dataset. For each speed condition, we then ran a linear regression analysis to test how well the simulated JNDs and FWHHs could predict the real ones. BICs were calculated as described above for a linear model with the slope and intercept free to vary and for a reduced model with fixed slope = 1 and fixed intercept = 0; the Bayes factor was determined for each speed condition.
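The following sketch illustrates the simulation logic just described; the grid search over the criterion, the Gaussian fit used to extract the FWHH, and the trial counts are illustrative assumptions rather than the authors' exact implementation.

```python
# Compact sketch of the simulation logic described above; the trial counts, the
# Gaussian fit used to extract the FWHH, and the grid search over the criterion
# are illustrative assumptions rather than the authors' exact implementation.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
STANDARD = 500.0
COMPARISONS = np.array([50, 162, 275, 388, 500, 612, 725, 838, 950], float)

def simulate_low_confidence(jnd_real, criterion, n_trials=20):
    # Proportion of simulated "low confidence" responses per comparison duration.
    props = []
    for comp in COMPARISONS:
        sds_stand = rng.normal(STANDARD, jnd_real, n_trials)
        sds_comp = rng.normal(comp, jnd_real, n_trials)
        high_conf = np.abs(sds_comp - sds_stand) / jnd_real > criterion
        props.append(1.0 - high_conf.mean())
    return np.array(props)

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fwhh(props):
    # Gaussian fit to the (peaked) low-confidence curve; FWHH = 2.355 * sigma.
    p0 = [props.max(), COMPARISONS[np.argmax(props)], 150.0]
    (amp, mu, sigma), _ = curve_fit(gaussian, COMPARISONS, props, p0=p0,
                                    maxfev=5000)
    return 2.355 * abs(sigma)

def fit_criterion(jnd_real, fwhh_real, grid=np.arange(0.2, 4.01, 0.05)):
    # Choose the criterion minimising |simulated FWHH - real FWHH|.
    errors = [abs(fwhh(simulate_low_confidence(jnd_real, c)) - fwhh_real)
              for c in grid]
    return grid[int(np.argmin(errors))]
```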
Results
For the discrimination judgments (Figure 7a), virtually all of the variance in the real data was predicted by the simulation (all R 2 > 0.9991). For all of the speed conditions, the Bayes factor was 0, indicating extremely large evidence against the reduced model. For the confidence judgments, between 75% and 80% of the variance in the real data was captured by the simulation. Bayes factors revealed anecdotal evidence supporting the reduced model in the stationary condition (BF 10 = 1.12), whereas they provided strong to very strong evidence in favor of the alternative model in the accelerating (BF 10 = 0.0545), drifting (BF 10 = 0), and decelerating (BF 10 = 0.0004) conditions. The mean confidence criterion estimate ranged from 1.3 to 1.46 across speed conditions, indicating that, on average, the difference between the comparison and standard duration signals had to be almost 1.5 times as big as the JND for our participants to report high confidence in their decision.
When we looked more closely at individual differences in the confidence criterion estimates, a clear pattern emerged. Figure 8 shows that the performance of our model in minimizing the absolute difference between real and simulated FWHHs was substantially better when the confidence criterion estimate was higher than 1. A linear regression analysis conducted on the two criterion ranges separately revealed that almost all of the variance in the real data was predicted by the model for participants with criterion > 1 (R 2 values: drifting, 0.996; stationary, 0.915; accelerating, 0.996; decelerating, 0.986), whereas about 70% of the variance was explained for those with criterion < 1 (R 2 values: drifting, 0.739; stationary, 0.693; accelerating, 0.686; decelerating, 0.671). A criterion of 1 or higher meant that the difference between the standard and comparison duration signals had to be at least as big as the JND for a participant to report high confidence. In other words, those with a criterion estimate > 1 based their confidence judgment almost exclusively on the perceptual discriminability between the two test durations, as assumed by our model. Those with a criterion estimate < 1, though, reported high confidence even when the perceptual difference between the two test durations was smaller than the JND, indicating that their confidence judgment was also influenced by other factors that we did not include in our model.
Figure 8. The real FWHHs are plotted as a function of the FWHHs predicted by our model. Data are the same as in Figure 7b, but they were split into two groups: participants with a confidence criterion lower (red symbols) or higher (blue symbols) than 1. Linear regression lines are plotted in red and blue. The R 2 values for all of the speed conditions are also reported.
Discussion
In this study, we used confidence judgments to probe the perceptual nature of the effect of the speed profile of a visual object on the apparent duration of a subsecond interval that contains it. The two experiments we described here yielded a similar pattern of results. First, we found that the confidence estimates of perceived duration were affected by stimulus speed in a similar way as discrimination estimates: An interval containing a stationary stimulus appeared substantially shorter than that containing a stimulus drifting at a constant rate, whereas intervals that embedded linearly accelerating or decelerating stimuli appeared more mildly compressed. However, duration estimates extracted from confidence judgments were overall higher than those extracted from discrimination judgments. Second, the precision of both discrimination and confidence judgments did not change across speed conditions. Third, an orthogonal regression analysis revealed that the discrimination and confidence measures of perceived duration were positively correlated, but the hypothesis that the regression lines coincided with the equality line was not strongly supported across conditions. Finally, we described an observer model that assumes that both the discrimination and confidence judgments depend on duration discriminability, which is extracted from the discrimination judgment, but the latter also require an additional step, where this information is compared against an internal criterion. The model predicted all of the variance in the real precision estimates for the discrimination judgments. For the confidence judgments, over 75% of the variance in the real data was predicted by the model, which takes the internal confidence criterion as the only free parameter. A clear pattern of inter-individual differences in the confidence criterion estimates in our dataset was highlighted by the predictions of our model, revealing a distinction between those participants that based their judgments entirely on duration discriminability and those who did not.
In both Experiments 1 and 2, we replicated the previous finding that the duration of an interval containing a stationary visual object appears substantially compressed when compared to the duration of an interval containing a temporally modulated sensory stimulus (Brown, 1931;Kanai et al., 2006;Kaneko & Murakami, 2009;Roelofs & Zeeman, 1951). An interval with an accelerating visual pattern was previously reported to appear compressed relative to an interval containing a stimulus that changes over time at a constant rate (Binetti et al., 2012;Binetti et al., 2020;Bruno et al., 2015;Matthews, 2011;Sasaki et al., 2013), as we observed here, in both experiments. As far as the perceived duration of an interval containing a decelerating pattern is concerned, the reported effects differ across studies: Some of them have reported duration compression (Matthews, 2011;Matthews, 2013), and other studies have reported no change or mild dilation (Binetti et al., 2012;Bruno et al., 2015). Here, we used similar stimuli and procedure as in a previous study (Bruno et al., 2015), where the differences in perceived duration between intervals with accelerating and decelerating visual objects were substantially more pronounced than those reported here, as acceleration induced strong duration compression (up to ∼30% for a 600-ms interval), whereas deceleration induced only a very mild expansion (less than ∼10%). The lack of replication of the magnitude (and direction, in the case of deceleration) of these effects in the present study may be ascribed to the different speed range used here for the accelerating and decelerating conditions. In fact, in the condition eliciting the largest difference in apparent duration in Bruno et al. (2015), the minimum speed was 0 and the maximum speed was 20°/s (average speed, 10°/s), whereas, in the present study we used a lower maximum speed of 10°/s (average speed, 5°/s) to make sure that the stimuli were displayed online without distortions. In the same study, the authors showed that the magnitude of the duration changes was contingent on the speed range rather than acceleration or deceleration per se, as it did not change when they kept the initial and final speeds constant and varied the standard stimulus duration.
More generally, the results of Experiment 2, where we used a wider comparison duration range and where participants received feedback about their performance in the practice trials, replicated the pattern observed in Experiment 1 for both perceived duration and precision. Previous reports showed that people are aware of mistakes in their decisions even without explicit feedback (Rabbitt, 1966;Yeung & Summerfield, 2012). In addition to that, we showed here that participants did not seem to use feedback to calibrate their confidence. In fact, in both experiments, the curve heights at the PMCs were substantially smaller than 1 (Figures 2 and 4), indicating a tendency to overestimate the correctness of very difficult judgments, which was previously reported for perceptual decisions (Baranski & Petrusic, 1994;Baranski & Petrusic, 1999;Bruno & Baker, 2021;Harvey, 1997). The same studies also reported a tendency to underestimate the correctness of very easy judgments, which we also observed for very short and, especially, very long durations. Previous studies suggest that personality traits (Pallier, Wilkinson, Danthiir, Kleitman, Knezevic, Stankov, & Roberts, 2002), but not cognitive styles (Blais, Thompson, & Baranski, 2005), play a role in over-and underconfidence biases in discrimination tasks. Interestingly, in the present study, the simulated observers generated by our model showed an even larger overconfidence (curve height at PMC = 0.624 ± 0.022) than the real participants (0.68 ± 0.03), with a main effect of participant type, F(1, 52) = 8.172, p = 0.006 (data not shown), which might suggest that sensory noise contributed to the overconfidence effect more than the participants' personality.
The mild but significant bias in perceived duration observed in the drifting condition, where no change was expected because standard and comparison had the same speed profile, was mainly due to a tendency to report very long comparison durations as being longer than the standard duration more often than reporting very short durations (with the same absolute distance from the center of the range) as being shorter. In fact, although the proportion of "comparison longer" responses for the longest comparison duration was smaller than 1, participants' performance was at chance when the two tests had the same duration (see Figures 2 and 4). This bias, too, did not disappear either with the feedback provided in Experiment 2 or by extending the duration range to include extremely long and extremely short durations. The fact that the confidence estimates were also shifted the same amount argues against a decisional bias. As the direction of the said effect was opposite to that observed for the main effects of the present study (i.e., duration overestimation vs. underestimation), it is unlikely that it created any form of confound. Also, we showed that the pattern of results remained the same when we used the drifting condition as baseline, indicating that the differences between the discrimination and confidence measures were not exaggerated by comparing them against the standard duration. We can only surmise that having randomized both the presentation order of the two tests and their relative spatial location might have played a role, as it seems clear that participants perceived the very long duration as being longer than the standard duration, but somehow they attributed it to the wrong location. If, for example, we assume a memory buffer that stores the duration of the first test interval, together with the location of the embedded stimulus, to compare it with the duration of the second test (and its location), then this buffer might have a limited capacity, and when that is nearly exceeded (e.g., when one of the test intervals is very long) then duration information is favored relative to spatial information, which might be more easily forgotten.
The speed profile of our stimuli had a similar impact on how both discrimination and confidence judgments estimated perceived duration. For both, we observed stronger duration compression for the stationary stimuli than for the accelerating and decelerating stimuli. The mean PMCs for confidence were slightly misaligned relative to the mean PSEs for discrimination, though, indicating that discrimination judgments were affected by both perceptual and decisional processes (Gallagher et al., 2019). Orthogonal regression helped us further analyze the relationship between our two measures of central tendency at the individual level by testing the equivalence of the two measures. It showed that a positive orthogonal correlation existed between our two measures of perceived duration for all of the speed conditions, but for most of our speed conditions the slope and intercept of the regression lines differed from 1 and 0, respectively. In their paper on the effect of pursuit eye movements on perceived background motion, Luna et al. (2021) observed a similar pattern, where a linear regression analysis revealed that their measures of central tendency for discrimination and confidence were positively correlated, but the best fit was never the equality line. As their mean estimates of central tendency for discrimination and confidence did not differ, they concluded that this positive correlation further argued against the existence of a decision bias. In the present study, the two mean estimates are instead different (Figures 3a and 5a); therefore, we cannot exclude the influence of decisional processes, but the positive (orthogonal) correlations between the individual PSEs and PMCs indicate a more predominant perceptual component.
The observation that the effect of speed profile on discrimination and confidence measures of perceived duration followed a similar pattern suggests that both types of judgments were informed by sensory information. However, to decide whether one has high or low confidence in the correctness of their discrimination judgment, the sensory difference between the two durations has to be compared against an internal criterion. We modeled the duration information associated with each test interval with sensory noise. We assumed that a simple comparison between the two resulting duration signals was enough to formulate a discrimination judgment. For the confidence judgment, the same comparison had to be weighted by the JND (i.e., an approximation of d′) and compared against a confidence criterion, modeled as a threshold to exceed to formulate a high confidence judgment. Note that the assumption of a separate criterion for confidence did not entail the two judgments being based on different types of sensory information, as suggested by some studies (De Martino, Fleming, Garrett, & Dolan, 2013; Fleming, Ryu, Golfinos, & Blackmon, 2014; Li, Hill, & He, 2014). In fact, our model assumed that both types of judgment are based on duration discriminability and are therefore affected by the same sensory noise.
The idea that we need a separate criterion to determine how confident we are in our decisions has been previously proposed in different forms. In a recent study, Arnold et al. (2021) measured changes in perceived orientation and precision after adaptation to contrast-modulated Gabors using both discrimination and confidence judgments. Their results were well predicted by a labeled-line observer model (consisting of several channels, each maximally responding to a given stimulus orientation) that assumed that the two judgments were based on different magnitudes (i.e., different criteria) of the same kind of sensory information. This means that a high confidence judgment would require a larger sensory difference between the stimuli than that required to formulate a discrimination judgment. Mamassian and de Gardelle (2021) proposed a generative model based on signal detection theory that contains both a sensory criterion and a confidence criterion and assumes that a confidence decision is affected by both sensory and confidence noise.
The main difference between these models and ours is that we included the confidence criterion as the only free parameter to predict confidence judgments, which amounts to assuming that confidence decisions are based on duration discriminability and sensory noise. The discrimination judgment, on the other hand, was modeled to be entirely based on duration discriminability. Overall, between 75% and 80% of the variance in our real data was captured by this simple assumption (Figure 7). More importantly, the predictions of our model allowed us to highlight a clear pattern of inter-individual differences in weighting sensory evidence to form a confidence judgment. In fact, the model explained virtually all of the variance (all R 2 > 0.915) in the real data for participants with a predicted criterion higher than 1 (Figure 8). This value is not arbitrary, as a criterion of 1 or higher indicates that, to have high confidence, the difference between the two duration signals (which are affected only by sensory noise, according to our model) has to be at least as large as the JND between the two durations. Therefore, participants with a criterion higher than 1 based their confidence judgments on the same sensory information they used for their discrimination judgments, and, in fact, their FWHHs were very well captured by our model. For these participants, the confidence criteria ranged between 1 and 4. This finding indicates that, as shown by Arnold et al. (2021) for tilt perception, high confidence in a perceptual decision requires a different magnitude of the same sensory information (i.e., a larger difference in duration between the two test intervals relative to the JND). Also, it shows that individual participants set their internal thresholds at different distances (in sensory units) from the JND, pointing to a tendency to be more or less conservative in their confidence criterion. It is worth stressing that, even though we did not include a random component to account for this tendency, our model was still able to capture this variability. In fact, it predicted participants' FWHHs equally well when the estimated confidence criterion was substantially larger than 1.
On the other hand, participants with a criterion lower than 1 tended to report high confidence even when the difference between the two duration signals was smaller than the JND, implying that their judgment was also affected by other components that we did not include in our model. The correlations between the individual JNDs and confidence criteria (data not shown) reached statistical significance for participants with a criterion higher than 1 but not for those with a criterion smaller than 1, further suggesting that confidence judgments in these two groups are affected by different factors or computations. Our model could account for about 70% of the variance in the data of participants with a criterion estimate lower than 1. One possibility is that the rest of the variance might be explained by sensory factors that are not used for the discrimination decision. In fact, Mamassian and de Gardelle (2021) proposed that there might be some additional sensory information used to form a confidence judgment that would be acquired after the perceptual decision and would boost the participants' confidence (they refer to this component as "confidence boost," to distinguish it from "confidence noise," which would instead reduce confidence). Alternatively, the remaining variance could be due to non-sensory noise components that have been shown to specifically affect confidence judgments (Shekhar & Rahnev, 2021). Bang, Shekhar, and Rahnev (2019) recently showed that, perhaps counterintuitively, higher levels of sensory noise in a perceptual task can lead to higher metacognitive efficiency, measured using meta-d′, which is the ratio between the signal and a combination of sensory and confidence noise, and M ratio , which is the ratio of meta-d′ and d′ (Maniscalco & Lau, 2012). They suggested that this finding supports the idea that confidence judgments are affected by independent metacognitive noise. If that is the origin of our unexplained variance, it would be interesting to investigate why only some participants are affected by this confidence noise but other participants (i.e., those with a confidence criterion > 1) do not seem to show this influence.
In two studies where biases were induced in perceptual appearance with either adaptation or by manipulating the prior statistics of the presented stimuli, Caziot and Mamassian (2021) showed that confidence judgments, which aligned well with the perceptual reports (indicating that both judgments were based on the same sensory evidence), were modulated more by the subjective sensory distance of the test stimulus (i.e., the distance in sensory units of the stimulus from the participant's PSE) than by its objective sensory distance (i.e., the distance in sensory units of the stimulus from the physical equality of the two tests). In our model, we took the distance of the simulated duration signal for the comparison interval from that of the standard (see Modeling section), which, in the framework of Caziot and Mamassian, would correspond to the objective sensory distance, as the subjective sensory distance would correspond to the distance of the SDS for the comparison from the PSE. One of the predictions that might be drawn from Caziot and Mamassian's observation is that the performance of our model in predicting the real confidence data should be worse for the conditions where the average difference between the PSE and the physical equality between comparison and standard durations was larger (e.g., in the stationary condition, where the mean perceived duration was substantially shorter than 500 ms). This did not seem to be the case. In fact, the amount of variance explained by our model in the stationary condition did not differ from that captured in the other conditions (see Figure 7b). However, it must be noted that here we only focused on predicting the FWHH for the confidence curves, whereas Caziot and Mamassian focused their analysis on their measures of central tendency.
More generally, we did not intend for our model to account for all aspects of confidence. More complex models, such as those cited above, can do that much better. We were mainly interested in seeing how much variability in our dataset we could explain with the fewest assumptions. We believe our very simple model served this purpose quite well. We showed that, for about half of our participants, the only assumption of a confidence criterion based solely on sensory information was enough to account for all of the individual variability. It must be noted that our model makes assumptions only on our estimates of precision but not on those of central tendency. Therefore, it can only make sensible predictions regarding the link between the discrimination JNDs and the confidence FWHHs. Future developments of the model will include both a content-based (Johnston, 2010;Johnston, 2014;Roseboom et al., 2019) explanation of the speed-related changes in perceived duration and a decisional noise component to account for the differences between discrimination and confidence estimates of central tendency.
Conclusions
We showed here that the effect of stimulus speed on apparent duration contains both perceptual and decisional components. The perceptual component was substantially more pronounced. We proposed a simple observer model that assumes that the same type of sensory information informs both discrimination and confidence judgments and that the latter require an internal criterion based on discriminability. The criterion estimates revealed a clear pattern of inter-individual differences between those participants who relied entirely on perceptual differences to rate their confidence and those who also used information that did not influence their discrimination judgments.
The effect of Cu-doping on the corrosion behavior of NiTi alloy arch wires under simulated clinical conditions
Allergy to nickel-based alloy arch wires, which is largely induced by their corrosion behavior, can cause severe problems during orthodontic treatment. However, no consensus has been reached in comparisons of the anti-corrosion behavior of Nickel-Titanium (NiTi) and Copper Nickel-Titanium (CuNiTi) alloy arch wires. Herein, the anti-corrosion behavior of NiTi and CuNiTi arch wires was studied simultaneously in artificial saliva under loading stress to simulate clinical conditions. A scanning electron microscope (SEM) was used to examine the surface morphology, and x-ray diffraction (XRD), electrochemical impedance spectroscopy (EIS), and x-ray photoelectron spectroscopy (XPS) were then used to evaluate the anti-corrosion tendency of the arch wires. The results showed that the CuNiTi arch wire had more defects on its surface yet, intriguingly, released less Ni than the NiTi arch wire after the test. Both groups of arch wires were more corroded when loaded with clinic-simulating stress; nevertheless, the doping of Cu can reduce the release of Ni to some extent, which is conducive to lowering the probability of metal allergy and supplies meaningful guidance for manufacturers and orthodontists.
Introduction
Since the Nickel-Titanium (NiTi) alloy was first introduced into the orthodontic clinic by Andreasen [1,2] in 1971, research in this field has flourished owing to its unique shape memory and superelasticity. Because the crystal structure of the NiTi alloy is determined by temperature and mechanical stress, a reversible transformation between two crystal arrangements, the austenite and martensite phases, as well as superelasticity, can be achieved when the alloy is exposed to the oral environment.
However, NiTi arch wires have some unavoidable shortcomings, the most important being excessive release of Ni, which may cause negative effects on patients, including allergic reactions, as well as unsatisfactory corrosion rates and mechanical properties [3]. To find a way out of this dilemma, researchers found that doping with Cu could reduce the sensitivity of the phase transition temperature and thermal hysteresis to composition and improve the mechanical properties to a certain extent [4][5][6]. Therefore, CuNiTi arch wires can apply stress more accurately, thereby providing a stable corrective force and improving the efficiency of tooth movement [7].
Though a small amount of Cu release confers slight anti-bacterial properties on the CuNiTi arch wire, disadvantages including insufficient surface hardness and undesired surface roughness, which may lead to unexpected fractures, are hard to ignore [8][9][10]. Moreover, ideal corrosion resistance, which is determined by surface morphology and composition, is a prerequisite that metal materials should meet, especially when they are used in clinical applications. Previous studies showed that the surface roughness of the CuNiTi arch wire was larger than that of NiTi arch wires, whereas inconsistent results were observed in studies on the corrosion resistance of CuNiTi arch wires [9]. Deo K. Pun found that the corrosion rate of the CuNiTi arch wire was higher than that of its NiTi counterpart [11], while other researchers reported that Cu doping would reduce the corrosion resistance of the NiTi arch wire [12,13]. Moreover, Iijima and Zheng YF reported that a minute amount of doped Cu regulates the superelastic properties of the alloy rather than its corrosion resistance in a simulated clinical environment [14,15]. In fact, the differences in corrosion resistance and the specific corrosion mechanisms of CuNiTi and NiTi arch wires are not yet clear. Thus, Damon CuNiTi and Smart NiTi arch wires were selected as samples to determine whether surface roughness or composition weighs more heavily in the corrosion resistance of arch wires, by exploring their corrosion resistance and Ni release in artificial saliva under stress loading, which simulates the clinical conditions of orthodontic treatment, thereby providing an experimental reference and theoretical guidance for the application and selection of orthodontic arch wires.
Preparation of artificial saliva
Artificial saliva was prepared according to the ISO/TR10271 standard, and its composition is shown in table 1. After the artificial saliva was prepared according to this formula, the pH value was adjusted to 6.75 with lactic acid.
Stress-loading device
Damon CuNiTi (Ormco, USA) and Smart NiTi (Beijing Smart Technology Co. LTD., China) arch wires were selected as samples in this experiment, and their main compositions, according to the instructions provided by the manufacturers, are listed in table 2. U-shaped bending was used to simulate the stress-loading environment of the NiTi arch wire in orthodontic practice. The stress-strain curve of the NiTi arch wire was found to be linear within a certain strain range. When the linear strain region is exceeded, the relationship between strain and stress changes and permanent deformation may occur. In order to select an appropriate stress value, we cut the straight parts at the ends of the arch wires as samples, which were fixed in the holes of a resin board with adhesive, with 13 mm exposed to artificial saliva, and set the distance between the two holes to 10 mm (figure 1). We then made sure that no permanent deformation occurred on the arch wires after stress loading.
Surface morphology and composition analysis
We selected the straight ends (length, 10 mm) of the two arch wires as samples, cleaned them ultrasonically in ethanol (75 wt.%) and deionized water for 5 min each, and then dried them with a cool air blow. A scanning electron microscope (SEM, Merlin compact, Germany) was used to observe the surface morphology of the arch wires, and energy-dispersive x-ray spectroscopy (EDS, Merlin compact, Germany) was used to analyze the surface composition of the samples. X-ray diffraction (XRD, Bruker D8 ADVANCE, China) analysis was used to determine the phase structure, and the elemental composition, valence states, and chemical bonds of the sample surface were analyzed by x-ray photoelectron spectroscopy (XPS, Thermo Fisher, China).
Electrochemical test
The straight ends (length, 20 mm) of the two arch wires were cut as samples and then exposed to artificial saliva over a length of 10 mm. After confirming that the conductivity was satisfactory, we cleaned them following the same steps as mentioned in section 2.3. The electrochemical test method was used to study the corrosion resistance of the material. The classic three-electrode system was applied: a 10 mm arch wire served as the working electrode with a 20 mm × 20 mm platinum auxiliary electrode, and a saturated calomel electrode (SCE, 232-01, China) was used as the reference electrode. The artificial saliva served as the erosion environment, and the temperature was kept constant at 37 °C in a water bath. At the beginning of the experiment, the open circuit potential (OCP) curve was measured for 10 min. After the OCP had stabilized, the electrochemical impedance spectroscopy (EIS, CS2350H, China) and potentiodynamic polarization curves were measured. The amplitude of the excitation potential for the EIS test was 10 mV and the frequency sweep range was 100 kHz − 10 mHz. The potentiodynamic polarization curve was recorded at a potential scanning rate of 5 mV s −1 over a scanning range of −0.5 to 2 V (versus V OCP ). To ensure repeatability, the above experiment was repeated three times.
Immersing experiment
We divided the samples of both arch wires into NiTi-stress-loading, NiTi-nonstress-loading, CuNiTi-stress-loading and CuNiTi-nonstress-loading groups, briefly called the NiTi-L, NiTi-NL, CuNiTi-L and CuNiTi-NL groups, respectively. All groups were then immersed in artificial saliva at 37 °C; SEM was used to observe the surface morphology of the samples soaked for 7, 14 and 28 d, respectively, and XPS was used to analyze the surface composition of the samples soaked for 28 d. The leaching solutions were tested for Ni concentration by inductively coupled plasma mass spectrometry (ICP-MS, NexION 350D, China).
Surface morphology and composition analysis
Pitting and point defects could be observed on the surfaces of the two unused arch wires (figures 2 and 3); the pits were unevenly distributed over the surfaces, with few point defects. Compared with those on the NiTi arch wire, the point defects on the CuNiTi arch wire were larger and deeper.
The EDS results showed that the surface of the NiTi arch wire was mainly composed of Ni and Ti, with a dominant Ni content of 56.56% and a Ti content of 42.06% (table 3), while that of the CuNiTi arch wire was mainly composed of Ni, Ti and Cu, with contents of 49.62%, 41.09% and 6.99%, respectively (table 4). There was also an oxide layer on the surfaces of both wires. The Ni content of the NiTi arch wire was higher than that of the CuNiTi arch wire, and there was only a small amount of Cu on the NiTi arch wire (tables 3 and 4).
The XRD results of the two arch wires showed that both accorded with the characteristics of the cubic NiTi alloy with no impurity phases, indicating that the samples had good crystallinity and an essentially single-phase composition. Despite the doping of a small amount of Cu, the alloy still mainly showed the NiTi alloy phase, indicating that Cu did not cause significant changes in the structure of the CuNiTi arch wire (figure 4).
Comparison of the corrosion resistance of two arch wires without stress loading
The OCP curves of the two arch wires, which were soaked in artificial saliva for 0.5 h in advance, showed that after 10 min of testing the NiTi and CuNiTi arch wires gradually stabilized at −0.2598 V and −0.28073 V, respectively (figure 5), indicating that the corrosion resistance of the CuNiTi arch wire was slightly lower than that of the NiTi arch wire.
The EIS results showed that the Nyquist diagrams (figure 6) of both alloys were incomplete semicircles, and the CuNiTi arch wire had a smaller impedance arc radius than its NiTi counterpart, indicating worse corrosion resistance. In addition, the Bode diagrams (figure 7) showed a relatively wide phase-angle plateau, indicating that the passivation films of the two materials are relatively dense. The EIS data were fitted with Z-view software, and the equivalent circuit is shown in figure 8. Among the circuit parameters, Rs is the solution resistance and CPE is the interfacial capacitance; the similar fitted values of these two parameters confirmed that the test systems were essentially the same. Rp is the polarization resistance, commonly used to evaluate the corrosion resistance of the alloy's passivation film: the larger Rp is, the better the corrosion resistance of the passivation film. The Rp values of the NiTi and CuNiTi arch wires were 2858.6 kΩ·cm 2 and 1452.4 kΩ·cm 2 , respectively, indicating that the NiTi wire had better corrosion resistance (table 5).
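A minimal sketch of how such an equivalent-circuit fit can be performed is given below; it is not the Z-view procedure used in the study, and the synthetic spectrum, starting values, and parameter bounds are illustrative assumptions.

```python
# Minimal sketch (not the Z-view procedure used in the study) of fitting the
# Rs + (CPE || Rp) equivalent circuit of figure 8 to an impedance spectrum; the
# synthetic spectrum, starting values and parameter bounds are assumptions.
import numpy as np
from scipy.optimize import least_squares

def z_model(freq_hz, rs, rp, q, n):
    # Z(w) = Rs + Rp / (1 + Rp * Q * (j*w)^n); the CPE impedance is 1/(Q*(j*w)^n).
    w = 2 * np.pi * freq_hz
    return rs + rp / (1 + rp * q * (1j * w) ** n)

def residuals(params, freq_hz, z_meas):
    # Stack real and imaginary parts so the optimiser sees a real-valued vector.
    z_fit = z_model(freq_hz, *params)
    return np.concatenate([(z_fit - z_meas).real, (z_fit - z_meas).imag])

def fit_eis(freq_hz, z_meas, x0=(50.0, 1e6, 1e-5, 0.9)):
    # Returns Rs, Rp (ohm cm2), the CPE magnitude Q and its exponent n.
    bounds = ([0.0, 0.0, 0.0, 0.5], [np.inf, np.inf, 1.0, 1.0])
    fit = least_squares(residuals, x0, args=(freq_hz, z_meas), bounds=bounds)
    return dict(zip(['Rs', 'Rp', 'Q', 'n'], fit.x))

# Example over the frequency sweep used in this study (100 kHz to 10 mHz):
freqs = np.logspace(5, -2, 50)
z_obs = z_model(freqs, 40.0, 2.8e6, 8e-6, 0.92)   # synthetic "measured" data
print(fit_eis(freqs, z_obs))
```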
The polarization curves of the two arch wires in artificial saliva are shown in figure 9. The curve patterns of the two wires were similar and both were in line with typical passivation characteristics, indicating that both wires exhibited similar passivation behavior in artificial saliva. The self-corrosion potential (E corr ) and corrosion current density (I corr ) were calculated and listed in table 6. The results showed that the I corr of the NiTi arch wire was lower than that of the CuNiTi arch wire, indicating that the NiTi arch wire had better corrosion resistance; the E corr corresponds to the OCP in the steady state and carries a similar meaning to the OCP results mentioned above.
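For illustration, a hedged sketch of how E corr and I corr can be extracted from a polarization curve by fitting the Butler-Volmer/Tafel relation is given below; the Tafel slopes, fitting window, and synthetic data are assumptions and not values from this study.

```python
# Hedged sketch of extracting E_corr and I_corr from a potentiodynamic
# polarization curve by fitting the Butler-Volmer/Tafel relation near E_corr;
# the Tafel slopes, the fitting window and the synthetic data are assumptions,
# not values from this study.
import numpy as np
from scipy.optimize import curve_fit

def polarization_current(E, i_corr, e_corr, beta_a, beta_c):
    # i(E) = i_corr * (10^((E - E_corr)/beta_a) - 10^(-(E - E_corr)/beta_c))
    eta = E - e_corr
    return i_corr * (10 ** (eta / beta_a) - 10 ** (-eta / beta_c))

def fit_tafel(E, i, p0=(1e-7, -0.25, 0.12, 0.12), window=0.15):
    # Fit only a window around the zero-current potential, where the
    # Butler-Volmer form is a reasonable description of the curve.
    e_zero = E[np.argmin(np.abs(i))]
    mask = np.abs(E - e_zero) < window
    popt, _ = curve_fit(polarization_current, E[mask], i[mask], p0=p0,
                        maxfev=20000)
    return dict(zip(['i_corr', 'E_corr', 'beta_a', 'beta_c'], popt))

# Synthetic example: potential in volts vs SCE, current density in A cm^-2.
E = np.linspace(-0.5, 0.0, 200)
i_synth = polarization_current(E, 2e-7, -0.26, 0.10, 0.15)
print(fit_tafel(E, i_synth))
```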
Comparison of corrosion mechanism of two arch wires under stress loading
SEM results showed that, as the immersion time increased, the point defects on the surfaces of the arch wires in all NiTi groups gradually became more numerous and larger. At the same immersion time, the NiTi-L group showed deeper surface defects and dents than the NiTi-NL group. Similar results were observed for the CuNiTi groups (figure 10).
The XPS results for the NiTi arch wire are shown in figure 11. The main elements before immersion were Ti, Ni and O, which is consistent with the EDS test results. The Ti element had two main high-resolution peaks, both of which were peaks of TiO 2 : the peak appearing at 458.8 eV represented Ti2p 3/2 , and the latter one represented Ti2p 1/2 . The Ni element also showed two peaks. After immersion, the spectra of both groups changed significantly, more obviously in the NiTi-L group than in the NiTi-NL group, while the corresponding peak of the 529 eV metal-oxygen bond in the O1s spectrum continuously strengthened.
The XPS results for the CuNiTi arch wire are shown in figure 12. The main elements on the surface of the arch wire before immersion were Ti, Ni, Cu and O, which is consistent with the EDS test results. The main high-resolution peaks of Ti showed similar results to those of the NiTi wire. Only the NiO peak was observed in the Ni spectrum, and the Cu element peaked at 933.6 eV, representing Cu2p 3/2 , indicating that the main component of the outermost oxide film was TiO 2 with small amounts of CuO and NiO. After the wires had been soaked in artificial saliva for 28 d, the Cu signal peaks of both groups almost disappeared; the peak signal of the Ni spectrum also weakened; and the peak of the O1s spectrum remained stable, with no obvious deviation found between the CuNiTi-L and CuNiTi-NL groups. The peak of the Ti spectrum showed only a small deviation, which was smaller than that of the NiTi-L group.
Comparison of ion release behavior of two arch wires under stress loading
As the soaking time increased, the Ni release of the nonstress-loading groups gradually increased, and that of the NiTi-NL group was higher than that of its CuNiTi counterpart; the difference was statistically significant at all timepoints (P < 0.05) (figure 13(a)). A similar trend in Ni release was seen in the stress-loading groups: the amount of Ni released grew steadily with corrosion time, and the difference between the two alloys was statistically significant at all timepoints (P < 0.05) (figure 13(b)). It was also noteworthy that both arch wires released significantly more Ni under stress loading than without. In addition, the Ni release in artificial saliva for all groups was below the European Union (EU) Ni release standard (table 7). In contrast, there was no significant difference between the Cu release of the CuNiTi-NL and CuNiTi-L groups at any time point (figure 14), and no obvious trend in Cu release was observed in either group over the whole period.
Discussion
The effect of Cu doping on the corrosion mechanism and surface morphology of the alloy is not clear. SEM results showed that the surface roughness of the CuNiTi arch wire was greater than that of the NiTi arch wire. This may be due to the doping of Cu, which requires changes in the surface processing technology. Many scholars have also found more or fewer defects on the surfaces of NiTi alloys [16,17]. These defects are prone to pitting corrosion in the complex oral environment [18,19]. A small O signal from the two samples in the EDS results showed that an oxide film existed on the surfaces of the arch wires. The XRD test results confirmed that the crystal structures of the two NiTi alloys were basically the same; the identical phase composition of the two arch wires therefore ensured good comparability in this study. In this study, the two arch wires showed similar corrosion behavior in the same testing environment, and the corrosion resistance of the CuNiTi arch wire was slightly worse than that of the NiTi arch wire according to the EIS, OCP and polarization curve results, which is in accordance with some of the studies mentioned in the introduction while contradicting others. Some researchers [20][21][22] found that the Cu content of CuNiTi leads to different phase transition behaviors, and J. Briceño's results [23] showed that the presence of the martensite phase improved corrosion resistance and significantly reduced the release of Ni, in addition to the effect of surface morphology. Hence, the contradictory results may be due to different proportions of Cu in the alloy and to the various alloy processing technologies [24].
The XPS and SEM results indicated that, although corrosion pitting appeared on the surface of the arch wire, the corrosion intensity was low and did not remarkably affect the stability of the Ti in the NiTi arch wire in the NiTi-NL group. Ti is easily oxidized to form a dense oxide film, which prevents the metal from being corroded and ions from entering the solution and thus protects the alloy from corrosion, so the NiTi alloy generally has good corrosion resistance. The valence states of Ti and O changed in the NiTi-L group, which indicated that the Ti oxide film on its surface might have been greatly damaged. This change was consistent with the SEM results. In addition, after the corrosion, the ratio of Ni 2+ to Ni 3+ changed, which confirmed that the corrosion caused a certain change in the valence state of nickel on the surface and that some Ni 2+ /Ni 3+ might have been eluted in both NiTi groups.
It is worth noting that, although the CuNiTi arch wire corroded more remarkably than the NiTi arch wire according to its morphology changes under the same conditions, the XPS results showed that Ti2p and O1s remained stable in the CuNiTi-NL group. Even in the CuNiTi-L group, the binding energy of Ti shifted only slightly, indicating good stability. Before the corrosion, the Cu2p signal on the surface of the arch wire was obvious, but there was almost no corresponding peak after the corrosion in either CuNiTi group. Therefore, it can be inferred that the obvious corrosion pitting on the surface of the CuNiTi arch wire was mainly caused by the preferential corrosion of Cu, which protected the stability of the CuNiTi alloy's main structure and reduced the release of Ni. A previous study also showed that the amount of Ni released from a NiTi alloy depends on the content of Ni in the solid solution, but that it can be influenced by the presence of Cu [6,25]. This could be explained by the chemical bonding energy of Cu being higher than that of Ni and Ti, so that the ability of Cu to lose electrons is stronger than that of Ni and Ti; as a result, when corrosion occurs, it takes place preferentially on the Cu.
The ion release and SEM analyses revealed two facts: the stress-loading condition enhanced the Ni release in both groups, indicating that additional stress corrosion cracking (SCC) occurred; and the amount of Ni released from the CuNiTi arch wire was significantly lower than that from the NiTi arch wire, indicating that the addition of Cu to the alloy imposed a certain inhibitory effect on the release of Ni. Both results are consistent with the preceding analysis. Firstly, stress corrosion cracking (SCC) requires three conditions to be met at the same time, namely a specific medium, a sensitive alloy and a certain stress. In this experiment, all three necessary elements were in place, and thus the differences in corrosion behavior between the stress-loading and nonstress-loading groups can be explained by this phenomenon. Several researchers have also shown that stress can increase nickel release [26]. The density and width of the surface oxide cracks dramatically increased with decreasing radius of the U-shaped bending device [27]. The inner metal surface was then exposed to the corrosive medium before an oxide film could be formed in time, and stress corrosion thus occurred, resulting in faster ion release. Moreover, Jianqiu Wang proved, through failure analysis of deformed arch wires, that NiTi arch wires do undergo stress corrosion; high stress and acidic saliva contributed greatly to the stress-related metal corrosion [28]. The rupture of the passive film may act as a major cause of corrosion cracking. Secondly, the Ni release of the NiTi arch wire was higher than that of its CuNiTi counterpart because of the difference in chemical bonding energy mentioned above. On the other hand, the Cu release remained steady during the soaking process and no significant difference was observed between the stress-loading and nonstress-loading groups. In other words, the Cu release was at a similar level whether or not the stress-loading condition was present. This is probably because the content of Cu in the alloy was rather limited and the Cu release therefore reached a plateau soon after corrosion began. The experimental results showed that both arch wires met the requirement for the release of Ni, but it should be noted that the EN 1811:2011 standard was set for jewelry and there is no standard for the release of Ni from metallic materials in the oral environment. The corrosion rate of metals and the release of Ni are bound to be affected by pH, stress, temperature, saliva conditions, mechanical loads, microorganisms, enzymes, and bacterial acidic substances, and galvanic corrosion may occur when other metallic materials are present [11,[29][30][31][32][33]. Therefore, the actual amount of Ni released in the oral environment may be higher than the amount detected in this study. In clinical practice, attention should be paid to the release of Ni.
Conclusions
In summary, the corrosion behavior of NiTi and CuNiTi alloy wires under simulated clinical conditions was systematically investigated in this study. The main conclusions are as follows: 1. Although the corrosion resistance of the CuNiTi arch wire is slightly worse than that of the NiTi arch wire, both are within an acceptable range.
Radiation-Associated Toxicities in Obese Women with Endometrial Cancer: More Than Just BMI?
Purpose. The study characterizes the impact of obesity on postoperative radiation-associated toxicities in women with endometrial cancer (EC). Material and Methods. A retrospective study identified 96 women with EC referred to a large urban institution's radiation oncology practice for postoperative whole pelvic radiotherapy (WPRT) and/or intracavitary vaginal brachytherapy (ICBT). Demographic and clinicopathologic data were obtained. Toxicities were graded according to RTOG Acute Radiation Morbidity Scoring Criteria. Follow-up period ranged from 1 month to 11 years (median 2 years). Data were analyzed by χ 2, logistic regression, and recursive partitioning analyses. Results. 68 EC patients who received WPRT and/or ICBT were analyzed. Median age was 52 years (29–73). The majority were Hispanic (71%). Median BMI at diagnosis was 34.5 kg/m2 (20.5–56.6 kg/m2). BMI was independently associated with radiation-related cutaneous (p = 0.022) and gynecologic-related (p = 0.027) toxicities. Younger women also reported more gynecologic-related toxicities (p = 0.039). Adjuvant radiation technique was associated with increased gastrointestinal- and genitourinary-related toxicities but not gynecologic-related toxicity. Conclusions. Increasing BMI was associated with increased frequency of gynecologic and cutaneous radiation-associated toxicities. Additional studies to critically evaluate the radiation treatment dosing and treatment fields in obese EC patients are warranted to identify strategies to mitigate the radiation-associated toxicities in these women.
Introduction
Endometrial cancer (EC) is the most common gynecologic cancer in the United States with nearly 50,000 new diagnoses estimated in 2013 [1][2][3]. Radiation therapy for EC is one of the fundamental adjuvant treatment modalities and typically includes personalized field design based on patient's pathological and clinical characteristics [2,3]. Radiotherapy strategies broadly include external beam radiotherapy and intracavitary brachytherapy. Typical treatment courses may include either modality or some combination of the two. External beam radiotherapy includes either whole pelvis radiation therapy (WPRT), with or without extended field radiotherapy (EFRT) to include the para-aortic lymph node regions [2].
A sizeable body of literature supports the use of adjuvant radiotherapy in EC to achieve local control, particularly when high risk features are present: deep myometrial invasion, histologic grades 2-3, and older age. Despite certain nuances in study design, 3 pivotal trials support the use of adjuvant radiation, either WPRT, ICBT, or some combination of WPRT/ICBT [4][5][6][7][8]. Interpretation of data from these studies in the context of patient's clinical and pathological characteristics forms the basis for the prescribed radiation treatment plan for EC patients [4][5][6][7][8].
Obesity is a growing public health problem and is a well-reported risk factor for developing EC [9,10]. Analysis of the Multiethnic Cohort Study (MEC) found that obese women (BMI ≥ 30 kg/m 2 ) had a 3.5-fold increased risk of EC and that this magnitude of risk varied by ethnicity [11,12]. However, little is known about the relationship between increasing BMI and the toxicity of adjuvant radiation treatments.
Patient survival after treatment for early stage EC is high (∼80%) [5][6][7]; therefore, complications associated with treatment for EC are of particular concern for survivors and their treating oncologists. While radiation-associated toxicities can be generally classified by organ system (e.g., gastrointestinal, gynecologic, and genitourinary) and by onset (e.g., acute, delayed, and late), the specific relationship between BMI and radiation-associated toxicity is poorly understood and is the focus of the current study.
Materials and Methods
After Institutional Review Board (IRB) approval, all patients with EC treated by the radiation oncology service at our institution from 1999 to 2010 were identified for inclusion in this review. Patients were included for analysis if they met the following criteria: pathologic diagnosis of endometrial cancer, hysterectomy with bilateral salpingo-oophorectomy, receiving adjuvant radiotherapy at our institution, and having available radiotherapy records (i.e., treatment plans, dosage, and weekly symptom reports). Patients who received concurrent chemotherapy were excluded from final analysis as were patients treated with extended field radiotherapy or any patient whose radiation record was incomplete. All patients were treated with the standard pelvic 3-dimensional conformal radiation (3D-CRT) technique for WPRT incorporating the tumor bed and regional pelvic lymph nodes. No intensity modulated radiation treatment was performed for gynecologic malignancies during this time frame at the County Hospital.
Patient demographic data, including anthropometric measurements, were obtained from medical records. Radiotherapy data, including patient-reported symptoms, were culled from radiation records, and radiation-related toxicities were reviewed and graded by two radiation oncologists using the RTOG Acute Radiation Morbidity Scoring Criteria [13]. Data from weekly radiation treatment visits were evaluated, and the maximum acute radiation toxicity was scored according to the RTOG criteria. Acute side effects from radiation occur during treatment and within the first three months posttreatment. Maximum acute radiation toxicity was used as the variable to analyze because it could be assessed from patient charts during the standard weekly on treatment radiation clinic notes as well as from follow-up during the first three months after radiation treatment. There was variability in time to radiation after surgery in our patient population. There was also variability in documentation of rate of timing for onset and severity of acute side effects and so maximum acute radiation toxicity was chosen as the consistent variable. Obesity was classified using WHO criteria.
Results

Mean BMI was associated with reported radiation-related toxicities for GYN and skin (Figure 1(a)). A higher mean BMI was significantly associated with more severe (i.e., higher grade) GYN (p = 0.027) and skin toxicity (p = 0.022). GI and GU toxicity was not associated with mean BMI on logistic regression. GI and GU toxicities were more dependent on the adjuvant radiation technique, with the use of WPRT significantly associated with higher and more frequent GI (p < 0.0001) and GU (p < 0.0001) toxicities (Figure 1(b)).
Logistic regression also showed that GYN toxicities were significantly correlated with younger age (Figure 2(a)). There was also a relationship between younger age and increased BMI. We used recursive partitioning analysis to model the interaction of age and BMI. The first significant branch point was for BMI > 45.2 kg/m 2 , suggesting that patients above this branch point may be at particularly high risk for GYN toxicities. A second branch point was identified at age <38 years, implicating a potential age threshold at which point a treating radiation oncologist may be more attuned to early management of GYN-related symptoms (Figure 2(b)). Taken together, the highest chance of grade 2 GYN toxicity was observed in young morbidly obese women.
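As a rough illustration of how a recursive partitioning analysis of this kind can be reproduced, the sketch below fits a shallow decision tree to a hypothetical cohort table; the file name, column names, and any thresholds it learns are placeholders rather than the study's actual data.

```python
# Illustrative sketch only: mimics the recursive partitioning described above
# on a hypothetical table of patient data. Columns and the CSV file are
# placeholders, not the study's actual dataset.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical columns: bmi (kg/m^2), age (years), gyn_toxicity_grade2 (0/1)
df = pd.read_csv("toxicity_cohort.csv")
X = df[["bmi", "age"]]
y = df["gyn_toxicity_grade2"]

# A shallow tree keeps only the strongest branch points (e.g., a BMI cut
# followed by an age cut), mirroring the reported analysis.
tree = DecisionTreeClassifier(max_depth=2, min_samples_leaf=5, random_state=0)
tree.fit(X, y)

# Print the learned splits; in the study the first split was BMI > 45.2 kg/m^2
# and the second was age < 38 years.
print(export_text(tree, feature_names=["bmi", "age"]))
```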
Discussion
The main findings from our study of this urban largely Hispanic obese population revealed that patients with increased BMI experienced more radiation-associated gynecologic and cutaneous toxicities. Recursive partitioning analysis suggests that the gynecologic toxicities may be especially increased for the morbidly obese young woman. GI and GU toxicity was associated with the use of WPRT and we did not observe any increased risk associated with increasing BMI in this cohort. As overall survival for most patients with EC, particularly the well-differentiated Type I EC, is high (∼80%) [14], the potential for long-term consequences of treatment-related toxicities is high. Whether or not obesity is an independent predictor of increased risk of recurrence or death remains controversial [12]. Therefore, given the high frequency of obesity among EC patients, a comparably high frequency of EC survivors will also be obese and likely be subject to radiation-related toxicities as well. Given the multitude of other medical comorbidities experienced and reported by obese patients, particular attention to recognize and/or prevent these complications is paramount.
Treatment field design may contribute to radiation toxicities. Jereczek-Fossa and colleagues suggested that 4-field radiotherapy may be associated with fewer late bowel toxicities; however, their findings did not retain statistical significance on multivariate analysis. Overall, 85% of the patients in their study received either WPRT + ICBT or WPRT alone and 61% had some grade 1-2 toxicity defined by RTOG criteria [15]. Our patient population reports lower (51%) than expected frequencies of radiation-associated GI toxicities among patients receiving whole pelvic radiotherapy by either 4-field box or 2-field techniques, though this may be an underestimate given patients' self-reported symptoms. New technologies have emerged that may abrogate some of these toxicities. Intensity modulated radiation treatment (IMRT) can modulate where the hot spots of radiation are placed in the treatment field and also minimize the radiation dose to nearby normal structures such as bowel [16]. None of our patients received IMRT. At our County Hospital IMRT use started in late 2009-2010 and could potentially help deliver a homogeneous dose to patients with significantly high BMI in the future. In lieu of IMRT for pelvic radiation, a recent paper suggests that the field-in-field (FIF) technique may also be utilized (over the standard 3D-CRT used in our study) to improve dose homogeneity and reduce radiation to critical normal structures in the pelvis (i.e., bowel, bladder, and bone marrow), especially in obese patients (BMI 30-39.9). Similar to IMRT, the FIF technique could also help ameliorate acute radiation toxicities in young endometrial cancer patients with high BMI and is now standard for other treatment sites, such as breast cancer, to improve homogeneity [17][18][19].
In addition to IMRT, image guided radiation treatment (IGRT) that can allow daily visualization of the patient anatomy and allow for tighter margins may also decrease toxicity [16]. How we use new technology in the obese population is still under investigation. Some have reported more setup errors (e.g., positioning either rotational or translational) in the obese population and a suggested planning treatment margin of 7-10 mm may miss tumor in the moderately and severely obese [16].
A recent SEER analysis shows progressive decline in the use of external beam radiation with a corresponding increase in the use of vaginal brachytherapy since 2000 [20]. This practice change may also help reduce GI and GU toxicities but there will still be a concern for GYN toxicity especially in the moderately and severely obese population. Moreover, the concern for additional radiation-associated toxicities in the obese population is associated with a commensurate trend for less radical surgery (TAH without pelvic lymph node dissection) and less radiation (trend toward more ICBT over WPRT) in this population [10,21,22]. Whether BMI negatively influences overall survival and surgical outcomes in the obese patient is still debatable. Some investigators have suggested that the less frequent use of adjuvant radiation in the obese population may have contributed to a decrease in cancer-specific survival [21] but more data are needed to better define this relationship.
Although the current study is limited by the relatively small sample size and its retrospective design, the nature of this single institution radiation oncology practice also provides a homogeneous treatment pattern for these patients, thereby simplifying the interpretation of maximum acute radiation toxicities relative to radiation fields. All radiationassociated acute toxicities were assessed and scored by 2 radiation oncologists.
Another strength of our study is in the unique ethnic distribution of our patient population. Compared to historic randomized trials, our study population is primarily Hispanic (71%) versus the GOG-99 in which 83% were Caucasian [4,7]. Our study begins to offer some insight into a diverse patient population not commonly enrolled in historic randomized studies. Our study population may be more comparable to the MEC because approximately 19% of the study population was Hispanic and 32% was Asian which is similar to our patient characteristics and is not mentioned in Phase 3 studies [11]. In the MEC, among the Hispanics, the endometrial cancer risk was the highest in those patients who had a BMI gain of ≥18.46% whereas for Japanese Americans a BMI gain of only ∼5% was associated with a 2.17-fold higher endometrial cancer risk. The currently available MEC data is limited by the restricted radiation treatment and radiation-associated toxicity information available. However, a direct comparison of the side effects of adjuvant radiation treatment in this cohort as compared to our group may help illuminate the impact of BMI on radiation-associated toxicities in ethnically diverse populations.
Overall, adjuvant, postoperative radiation remains a key treatment modality for endometrial cancer. However, due to high survival rates especially in early stage EC, predicting and mitigating toxicities are important. Our results show that younger morbidly obese women are likely to have more toxicity and thus careful attention needs to be paid to this population.
Modulation of macrophage polarization by iron-based nanoparticles
Abstract Macrophage polarization is an essential process involved in immune regulation. In response to different microenvironmental stimulation, macrophages polarize into cells with different phenotypes and functions, most typically M1 (pro-inflammatory) and M2 (anti-inflammatory) macrophages. Iron-based nanoparticles have been widely explored and reported to regulate macrophage polarization for various biomedical applications. However, the influence factors and modulation mechanisms behind are complicated and not clear. In this review, we systemically summarized different iron-based nanoparticles that regulate macrophage polarization and function and discussed the influence factors and mechanisms underlying the modulation process. This review aims to deepen the understanding of the modulation of macrophage polarization by iron-based nanoparticles and expects to provide evidence and guidance for subsequent design and application of iron-based nanoparticles with specific macrophage modulation functions.
Introduction
Macrophages are efficient effector cells of the innate immune response, secreting various molecules that regulate the inflammatory response, host defense, and immune homeostasis [1]. The diversified immune effect of macrophage is achieved by cell polarization, during which process cell subsets with different phenotypes are produced in response to microenvironmental stimulations. Currently, the most commonly studied macrophage polarization mode is the M1/M2 dichotomy, which means primary macrophages are polarized into the M1 phenotype with pro-inflammatory functions or the M2 phenotype with anti-inflammatory functions. The plasticity of macrophage polarization allows them to adapt to various physiological and pathological conditions. Therefore, understanding the regulation of macrophage polarization provides an important rationale for the development of immune therapeutic strategies for many diseases.
Iron-based nanoparticles are mainly divided into iron monomer, alloy, iron oxide and iron complex [2], which are widely used in industrial and medical fields because of their unique physicochemical properties. Among them, iron oxide nanoparticles (IONPs), including magnetite (Fe 3 O 4 ), hematite (α-Fe 2 O 3 ) and maghemite (γ-Fe 2 O 3 ) nanoparticles, exhibit unique superparamagnetism and excellent biocompatibility and are widely used in the field of nanomedicine. Based on their superparamagnetic properties, iron oxide nanoparticles can be used in magnetic resonance imaging, separation of bio-molecules, magnetic hyperthermia, and magnetically targeted delivery of drugs [3][4][5].
Various iron-based nanoparticles have been shown to regulate the polarization and function of macrophages, thus holding the potential to be applied as an immune regulator for disease treatment. For example, the iron supplement ferumoxytol (2.73 mg Fe/mL) co-injected with cancer cells was found to promote the polarization of M1 macrophage in tumor microenvironment and inhibit the tumour growth and metastasis of subcutaneous adenocarcinoma in mice [6]. While a 34 nm-sized Prussian blue (PB) nanozyme (50 μg/mL or 500 μg/mL) applied to skin wounds could promote the anti-inflammatory phenotype of macrophages, relieving skin inflammation and accelerating wound healing and tissue regeneration [7]. The regulation of iron-based nanoparticles on macrophage polarization is a complex process, with significant variation under different conditions, which can be attributed to the interaction of multiple regulatory factors. This review starts with the characteristics and mechanism of macrophage polarization, summarizes the existing studies on the regulation of polarization by iron-based nanoparticles, and focuses on the material properties and possible mechanisms affecting iron-based nanoparticle-mediated macrophage polarization. This review may provide informative insight for further studies to develop novel immune modulation strategies using iron-based nanoparticles.
Polarization of macrophage Origin of macrophage polarization
Tissue macrophages have a dual origin. Most tissueresident macrophages are established before birth from yolk sacs or fetal precursors, which are independent of monocytes and capable of self-renewal [8]. While other adult-derived macrophages are terminally differentiated from circulating monocytes that arise from bone marrow and are released into the peripheral blood [9]. Embryonic macrophages mainly participate in tissue remodeling, whereas differentiated mononuclear macrophages and their ancestors constitute the mononuclear macrophage system, which acts as short-lived effector cells in tissues and assists in host defense [10]. Differentiated macrophages have plasticity, which can be further polarized into different phenotypes to play their roles. Polarization is a process in which macrophages respond functionally by producing different phenotypes to microenvironmental stimuli and signals in specific tissues [11]. In the 1960s, Mackness [12] reported an antimicrobial macrophage activation state known as classically activated macrophages (CAM, also known as M1). In 1992, the alternatively activated macrophages (AAM, also known as M2) were first proposed [13]. Subsequently, studies on the polarization of macrophages were deepened continuously, and it was found that the polarization of macrophages was varied, among which M1 macrophages and M2 macrophages were two extremely activated states [14].
Phenotypes and functions of macrophage polarization
Macrophages are polarized to M1 or M2 macrophages upon stimulation by the local cytokine milieu, inducing a proinflammatory response or promoting immune regulation and tissue remodelling. M1 macrophages are induced by bacterial lipopolysaccharide (LPS) or other cytokines, such as granulocytemacrophage colony-stimulating factor (GM-CSF), interferonγ (IFN-γ) and tumor necrosis factor-α (TNF-α). The typical phenotype of M1 macrophages involves the secretion of high levels of pro-inflammatory cytokines (TNF-α, IL-1β, IL-6, IL-12, IL-23, IL-1α), high levels of Th1-recruiting chemokines (CXCL9, CXCL10, CXCL11), high expression of major histocompatibility complex class II (MHC II) and co-stimulatory molecules (CD40, CD80, CD86) and low levels of IL-10 [15]. Under physiological and pathological conditions, M1 macrophages mainly play their roles in bactericidal, tumor killing, tissue damaging, and hindering tissue regeneration and wound healing. For example, when infected by pathogens, M1 macrophages could activate the nicotinamide adenine dinucleotide phosphate (NADPH) oxidase system and produce reactive oxygen species (ROS) to facilitate their antibacterial function [16]. Moreover, tumor-infiltrating M1 macrophages function as tumor killers by secreting tumor growth inhibitors, including TNF and nitric oxide (NO) [17].
Regulatory factors of macrophage polarization
Macrophage polarization is a dynamic process that regulates the body in a balanced and integrated way [24]. Phenotypes of M1-M2 macrophage polarization can be reversed in vivo and in vitro. The M1/M2 axis balance of macrophages is the basis of maintaining body homeostasis. Once the balance is broken, chronic diseases and inflammation will be induced [25]. Therefore, understanding the regulatory factors affecting macrophage polarization is particularly crucial, which would provide an important rationale for developing disease treatment strategies.
Single regulatory factors
When macrophages encounter invading microorganisms or inflammatory microenvironment, their gene expression profiles change dramatically [26]. In this process, the direction of macrophage polarization can be regulated through the activation or inhibition of signaling pathways and downstream transcription factors. In addition, epigenetic mechanisms also mediate macrophage polarization by altering gene expression profiles [11]. Various single factor-mediated regulations at the transcription and post-transcription levels contribute to the dynamic and reversible polarization of macrophages.
Signaling molecules and transcription factors coordinate to regulate macrophage polarization. Different molecules have various effects on changing the polarization phenotype of macrophages. For example, alterations in AKT1/2, SHP-1/2, and TNF can regulate the polarization of macrophages. Different Akt kinase isoforms regulate macrophage polarization differentially, with Akt1 ablation promoting M1 phenotype and Akt2 knockout causing M2 phenotype [27]. By decreasing the production of TNF mRNA, which can block M2 gene expression in macrophages, the amount of M2 macrophages is enhanced [28]. In addition, the lack of STAT6, IRF4, JMJD3, PPARδ, and PPARγ blocks the M2 polarization phenotype and inhibits their anti-inflammatory effects. Studies have shown that PPARγ is required for alternatively activated macrophage maturation. Destruction of PPARγ impairs macrophage activation towards M2 [29]. Various genes associated with the mouse M2 macrophage phenotype are modulated by STAT6, such as arginase 1 (Arg1) and macrophage mannose-receptor 1 (Mrc1) [26]. Furthermore, the molecular switches of certain factors can completely reverse the polarization phenotype, such as RBP-J, Btk1, KLF4/6, let-7c, and DAB2 [30]. In particular, RBP-J inhibits the expression of the M2 macrophage characteristic genes and induces the activation of the M1 phenotype [31]. KLF6 promotes the M1 phenotype by cooperating with NF-κB and suppresses M2 targets by inhibiting PPAR-γ expression [32].
Epigenetics changes gene function by inducing or modifying the information encoded in DNA [33]. The epigenetic mechanisms involved in the modulation of macrophage polarization mainly include MicroRNA (miRNA), DNA methylation (DNAm), and histone modification. MiRNAs are short non-coding RNA molecules regulating gene expression at the post-transcriptional level. For instance, increased expression of miR-155 induces the M1 subtype with the secretion of TNF-α, whereas macrophages shift to M2 phenotype after miR-155 knockout [34,35]. Besides, the abnormalities in DNAm patterns, especially the chemical modification of DNA cytosine residues, significantly affect the behavior of macrophages. Modification modes include DNA hypermethylation and DNA hypomethylation [36]. Various research proved that DNA hypermethylation was a determinant of macrophage polarization, leading to the development of inflammatory diseases [37]. Furthermore, the genes encoding enzymes catalyze post-translational modification of histones, which induce gene activation and gene silencing and are differentially expressed in M1/M2 macrophages [37]. For example, the promoters of pro-inflammatory cytokines TNF-α and IL-6 have a histone 3 lysine 36 (H3K36) dimethylation effect under the modification of specific methyltransferase SET, which inhibits NF-κB and ERK signaling and leads to decreased M1 polarization [38].
Systematic regulatory factors
There are significant metabolic differences between M1 and M2 macrophages, which make metabolic modulation a powerful factor in regulating the polarization of macrophages. In glycometabolism, M1 macrophages enhance the absorption of glucose, accelerating the aerobic glycolytic pathway and pentose phosphate pathway with the production of lactic acid and NADPH. At the same time, reactive oxygen species (ROS) and NO are produced intracellularly, which provide the cells with rapid energy and bactericidal activity [39,40]. Under low oxygen levels, macrophages modulate polarization by changing the level of glucose metabolism at the site of inflammation. In this process, the transcription factor HIF is the key mediator for macrophages to adapt to hypoxic conditions. HIF-1α regulates glycolysis through the NF-κB pathway, resulting in the production of pro-inflammatory cytokines and M1 phenotypes [41]. In contrast, M2 macrophages upregulate glucose metabolism to meet the energy demand. Apart from glycometabolism, M2 macrophages also obtain fuel through fatty acid oxidation, enhancing anti-inflammatory function [42]. In addition, the differential metabolism of arginine is one of the most accurate differentiators between M1 and M2 macrophages. Arginine is the common substrate of Arg1 and inducible nitric oxide synthase (iNOS). According to different activation states, different enzymes responsible for arginine metabolism induce different phenotypes [43]. In M1 macrophages, iNOS upregulation leads to the breakdown of arginine into citrulline and NO, which resists bacterial infection. On the contrary, in M2 macrophages, Arg1 is induced to produce polyamines and ornithine, which promote wound healing [44]. In addition, serine, glycine, and glutamine are also vital metabolic regulators of macrophage polarization [45]. As for iron metabolism, M1 macrophages express proteins connected with iron absorption and storage (ferritin, natural resistance-associated macrophage protein 1, and divalent metal transporter-1), restricting the utilization of iron for bacterial growth. In contrast, M2 macrophages up-regulate molecules relevant to iron circulation and release (transferrin receptor, heme oxygenase-1, and ferroportin), contributing to cell proliferation and wound healing [46].
In addition to metabolic modulation, physical factors also systematically regulate macrophage polarization. Changes in the physical microenvironment, including the structure, morphology, and stiffness of the extracellular matrix, affect the polarization phenotype of macrophages. Matrix stiffness affects cell adhesion, contraction, migration and differentiation by changing the elastic modulus of substrate and density of surface adhesion ligand and indirectly regulates cell polarization [47]. Furthermore, substrate pattern and surface roughness also affect cell phenotypes. Previous studies have reported that macrophages showed biphasic polarization in response to the size of substrate microgrooves [48], and the surface roughness synergically up-regulated the secretion of all inflammatory cytokines [49].
Iron-based nanoparticles modulate macrophage polarization
Numerous studies reported the role of iron-based nanoparticles in modulating macrophage polarization and function. Macrophage polarization is a dynamic and reversible process and involves the changes of a series of markers and signals. Macrophage polarization is not a dualistic model, while M1 and M2 represent the two extremes of the phenotype. By means of phenotypic loss, phenotypic induction, and phenotypic reversal, iron-based nanoparticles polarize macrophages toward either a pro-or anti-inflammatory phenotype ( Figure 1).
Polarization towards the pro-inflammatory phenotype
A variety of iron-based nanoparticles have been found to promote a pro-inflammatory phenotype of macrophages by means of phenotype induction and phenotype reversal (Table 1). In most cases, IONPs promote the pro-inflammatory polarization of macrophages, while in rare cases PBNPs with specific coating materials [50] can also induce an M1-like phenotype. For example, polyethylenimine-coated superparamagnetic iron oxide nanoparticles (SPION) induced M1 polarization dramatically, characterized by notable upregulation of typical M1-related genes such as CD80, IL-1β, TNF-α and so on [51]. Likewise, ferucarbotran and ferumoxytol, two kinds of clinically used SPION, were reported to promote an M1-like inflammatory response both in vitro and in vivo [52]. Furthermore, PBNPs coated with low molecular weight hyaluronic acid (LMWHA, molecular weight [MW] < 5 kDa) have been shown to induce the pro-inflammatory phenotype and inhibit tumor growth [50,53].
For diseases that are attributed to the lack of active pro-inflammatory M1 macrophages, reprogramming macrophages from immunosuppressive M2 to the killing mode of M1 could be an ideal strategy. It is widely reported that most of the tumor-associated macrophages (TAMs) in the tumor microenvironment are of the typical M2 phenotype [54][55][56]. By tuning macrophages from M2-like TAMs to M1, iron-based nanoparticles can reverse the immunosuppressive status and promote effective tumour killing. For example, ferumoxytol inhibited tumor progression effectively by the promotion of M1 macrophages and inhibition of TAMs in mice [6]. Likewise, Hou et al. [57] designed and synthesized biological membrane-coated hollow mesoporous Prussian blue nanomaterials (PBNPs) for cancer therapy. This biomimetic nanosystem demonstrated dramatic effects on turning TAMs into M1 macrophages and suppressing tumor growth both in vitro and in vivo.
Polarization towards the anti-inflammatory phenotype
On the other hand, some iron-based nanoparticles are capable of inhibiting the inflammatory response and inducing M2 polarization, which has great potential to be applied for damaged tissue repair, wound healing, antiinflammation, etc (Table 2). Iron-based nanoparticles can induce an anti-inflammatory phenotype of macrophage in three ways, loss of M1, phenotype induction from M0 to M2, and phenotype reversal from M1 to M2. Loss of M1 could be achieved by inhibiting pro-inflammatory cytokines release or reducing the numbers of M1 macrophages. For example, Park et al. reported that hyaluronan-coated PBNPs significantly suppressed both the function and population of M1 macrophage in an LPS-induced murine model, with great therapeutic effect for murine peritonitis [58]. Similarly, Chen et al. found that both 10 and 30 nm PEG-coated SPIONs inhibited the expression of LPS-induced pro-inflammatory factor, IL-6 and TNF-α, in a dose-dependent way [59]. However, in these models, whether the loss of M1 phenotype is associated with increased numbers and function of M2 macrophages remains unknown. In addition to phenotypic loss, treatment of iron-based nanoparticles could directly induce the M2-like phenotype from either M0 or M1 phenotype. Liu et al. [60] developed self-assembled Fe 3+ -catechin nanoparticles (Fe-cat NPs) with a strong ability to facilitate M2 polarization from M0. By secreting anti-inflammatory cytokines, the Fe-cat NPs-treated macrophages contributed to efficient bone repair. For inflammation-related diseases characterized by massive accumulation of M1, the shift from M1 to M2 by iron-based nanoparticles could strongly diminish inflammation and reverses disease conditions. Fan et al. developed hollow-structured manganese prussian blue nanozyme with the ability to convert macrophages from M1 to M2, resulting in effective treatment of osteoarthritis [61]. In addition, Huang et al. also applied polyvinylpyrrolidone (PVP)-coated PBNPs to eliminate inflammation in the hepatic ischemiareperfusion injury model, mainly by promoting M2 macrophages [62].
Influence factors of macrophage polarization by iron-based nanoparticles
As we summarized in Tables l and 2, different types of ironbased nanoparticles exert different influences on macrophage polarization. The modulation of iron-based nanoparticles on macrophage polarization is a complex process relying on the coordination of various factors [82][83][84][85]. The inherent physicochemical properties of designed nanoparticles, such as composition, size, and surface characteristics, largely affect the interactions between nanoparticles and macrophages and are attributed to the differences in polarization (Figure 2).
Composition
The different compositions and structures of iron-based nanoparticles notably affect their functions on macrophage polarization. In general, IONPs tend to induce a proinflammatory M1 phenotype, while PBNPs tend to induce an anti-inflammatory M2 phenotype. For example, one of the FDA-approved IONPs ferumoxytol strongly induced the M1 phenotype at a dose of 2.73 mg Fe/mL in vivo for effective inhibition of the growth and metastasis of mammary mouse tumor [6]. On the other hand, polyvinylpyrrolidone-coated PBNPs with an average size of 80 nm exhibited a notable induction of M2 phenotype in LPS-treated macrophage and was successfully applied for alleviating hepatic ischemiareperfusion injury in mouse model [62]. However, the general principle is not always followed when other influencing factors become dominant, such as size, surface modifications, heterogeneous compositions, and external factors. Therefore, exceptions were found in special cases where PBNPs triggered M1 polarization and Fe 3 O 4 nanoparticles induced M2 polarization. For instance, the PMA-coated mesoporous hollow Fe 3 O 4 nanoparticles induced M0 to M2, which is largely affected by the addition of an alternating magnetic field [78]. In addition, the PB-based nanoparticles composed of hollow mesoporous Prussian blue, hydroxychloroquine (HCQ), and mannose decoration (Man-HMPB/ HCQ) are reported to induce TAM to M1, probably due to the strong autophagy suppression effect of HCQ since inhibition of autophagy is known to contribute to the phenotypic reversal of TAMs [57]. Furthermore, the differences in valence states of iron ions in nanoparticles also influence whether and how they induce macrophage polarizations. It was generally believed that magnetite IONPs (Fe 3 O 4 ) are much more effective in inducing M1 polarization than hematite IONPs (Fe 2 O 3 ) [68]. To compare the impact of states of iron (II and III) on macrophages polarization, Yu et al. synthesized the Fe 2 O 3 @D--SiO 2 and Fe 3 O 4 @D-SiO 2 nanoparticles with similar size (about 40 nm), parallel core-shell structures, identical ellipsoidal morphology, as well as the same surface modification by large-pore dendritic silica shell (D-SiO 2 ). They found that only Fe 3 O 4 @D-SiO 2 achieved significant M1 polarization from M0 or M2 macrophages, resulting in an excellent tumor suppression effect in the melanoma mouse model. Compared with Fe 2 O 3 @D-SiO 2, Fe 3 O 4 @D-SiO 2 induced much higher intracellular iron accumulation and thus triggered the interferon regulatory factor 5 (IRF-5) pathway, which is one of the key transcriptional factors that promoted the M1 polarization ( Figure 3A). The reason for these different intracellular iron levels largely depends on their diversity in the dynamic process of iron endocytosis, intracellular degradation, and export rate from cells. In this model, Fe 3 O 4 @SiO 2 exhibited a relatively higher uptake ability than Fe 2 O 3 @SiO 2 , which is probably due to the magnetism-induced aggregation [86].
Size
Iron-based nanoparticle-induced macrophage polarization is typically affected by the size properties of the materials. In particular, nanoparticles with similar composition but different sizes could exhibit different functions on macrophage polarization [87,88]. For instance, Cheng et al. synthesized two groups of amphiphilic polymers (PMA)modified Fe 3 O 4 NPs and Au NPs with two different sizes, 4 and 14 nm, to investigate the influence of nanoparticles' size on macrophage polarization [63]. They demonstrated that the 4 nm Fe 3 O 4 NPs triggered M1 polarization more effectively than 14 nm Fe 3 O 4 NPs, which might be due to the higher cellular uptake rate and intracellular accumulation of small nanoparticles. Meanwhile, the same trend was found with Au NPs ( Figure 3B). Besides, Dalzon et al. compared the effects of two carboxymaltose-modified Fe 2 O 3 nanoparticles with different sizes (20 and 100 nm) on the macrophages. They found that at a concentration of 1 mg/mL, the 100 nm Fe 2 O 3 nanoparticles significantly inhibited the secretion of proinflammatory cytokines such as IL-6, TNF-α, and NO, as compared to smaller ones [89]. These results demonstrated that the size of iron-based nanoparticles indeed has an essential influence on the degree of macrophage polarization. However, whether and how the size factor contribute to the direction of macrophage polarization is unclear and requires further exploration in the future.
Surface
As another important characteristic of nanoparticles, the surface property is of great significance in influencing macrophage polarization, including surface charge, morphology, hydrophilicity, etc. In general, nanoparticles with a charged surface induce the polarization of M1 or M2 more easily than neutral ones. For example, Saw et al. explored the impact of the surface charge of nanoparticles on macrophage polarization by modifying ferumoxytol with different charges. They demonstrated that both positively and negatively charged SPIONs established a strong effect to repolarize macrophage from M2-like TAM to M1 and significantly suppressed tumor growth in the mouse model, while the neutral ones failed to induce polarization of macrophage [90]. It was presumed that the neutral surface limited the nanoparticles' internalization by cells, and thus restrained the M1 polarization of ferumoxytol ( Figure 3C). Consistently, a previous study has reported that PMA-coated IONPs significantly induced M1 polarization in RAW264.7, while further modifying IONPs with PEG outside the original layer dramatically reduced the effect of M1 polarization [63]. This is likely due to the limited uptake and endocytosis of PEGylation of nanoparticles by macrophages, resulting in less amount of particle accumulation and decreased activation of cellular signal and immune response [91]. The difference in cellular uptake of distinct surface modified-nanoparticles is supported by another research, since polyethyleneimine (PEI)-coated ultrasmall superparamagnetic IONPs induced much higher cellular uptake than PEG-modified IONPs ( Figure 4) [87,91].
In addition, certain coating materials are known to have a strong macrophage modulating effect, therefore iron-based nanoparticles coated with them are inclined to gain the same effect as the coating material. For example, LMWHA can induce macrophage polarization from M2 to M1 phenotype [92,93]. Therefore, iron-based nanoparticles coated with LMWHA could promote M1 polarization macrophages. Zhang's group synthesized the LMWHA-modified mesoporous PBNPs (LMWHA-MPB) with an average size of 127 nm and further loaded them with photosensitizer indocyanine green (ICG) to form LMWHA-MPB/ICG. Both LMWHA-MPB and LMWHA-MPB/ICG promoted a M2 to M1 phenotype reversal in RAW264.7 cells [50,53], which are opposite to most PBNPs reported by other studies [7,62,79], suggesting that the functional modifications on the surface of iron-based nanoparticles largely shift their effect on macrophage polarization.
Mechanisms of macrophage polarization by iron-based nanoparticles
The interactions between macrophages and iron-based nanoparticles involve multiple procedures and diverse modulation pathways ( Figure 5), including cellular signaling sensing and transduction, intracellular redox level rebalancing, lysosomal-autophagy modulation and iron metabolism alteration, etc. [84,94].
Membrane receptors-mediated modulation
Surface receptors are the first-line sensors by which macrophages identify and respond to external signals. Various macrophage surface receptors including pattern recognition receptors (PRRs), scavenger receptors, complement receptors, cytokine receptors, etc., are responsible for phagocytosis and immunomodulation, mainly through the modulation of macrophage polarization. As important group members of PRRs, the Toll-like receptors (TLRs) family consists of several subtypes and is responsive to different danger signals. Among them, the activation of TLR-4 by lipopolysaccharide (LPS) strongly triggers M1 polarization to facilitate the killing of invaded bacteria, while blockage of TLR-4 limits its function effectively [95].
Upregulation of M1-related transcriptional factors directly results in M1 polarization and the secretion of pro-inflammatory cytokines. For example, the hyaluronic acid-decorated SPIONs has a notable effect on shifting macrophages from M2 to M1 phenotype mainly through the activation of NF-κB [73]. In addition, Fe 3 O 4 @D-SiO 2 was found to promote the activation of IRF-5, thus inducing the polarization to M1 [68]. Similarly, Cheung et al. reported that IRF-5 was highly elevated by carboxymethylated IONPs, resulting in polarization from M2 to M1 [98]. Furthermore, bioinformatic gene analysis revealed that transcription factors like NF-κB and AP-1 were significantly upregulated by DMSA-coated Fe 3 O 4 magnetic nanoparticles, and thus M1-like pro-inflammatory reactions were induced [99]. On the other hand, suppressing key transcriptional factors restricted the ability of macrophage polarization and their specific functions. For instance, hollow Prussian blue nanozymes were able to inhibit NF-κB and efficiently limited the M1 polarization induced by LPS, with down-regulated expression of inducible nitric oxide synthase (iNOS), cyclooxygenase-2 and IL-1β [77].
ROS modulation
During M1 macrophage-mediated anti-microbicidal process, reactive oxygen species (ROS) generate and act as the main weapon to destroy foreign pathogens. In addition, it is revealed that intracellular levels of ROS are involved in the modulation of macrophage polarization [100,101]. In general, excessive intracellular ROS induces M1 polarization, mainly by activation of MAPK, STAT1 and NF-κB signaling, while ROS scavenging suppresses M1-mediated pro-inflammatory effects and leads to an M2 phenotype. During the differentiation process of macrophages from monocytes, the existence of ROS is essential for the subsequent M2 polarization, since the complete inhibition of ROS during differentiation strongly limited the later M2 or TAMs polarization [102]. As for the function of mitochondrial ROS on macrophage polarization, they could induce M1 or M2 polarization according to different reports, which are required for further studies to explain [103,104].
It is well-known that IONPs can produce massive ROS by triggering Fenton reactions. Therefore, a variety of IONPs induces macrophage polarization through the modulation of ROS. For instance, Kim et al. demonstrated that the combination of dextran-coated IONPs with ascorbic acid was capable of eliciting vast ROS production and facilitating the pro-inflammatory phenotype of macrophages, which exhibited excellent bactericidal function against intracellular staphylococcus aureus [105]. In addition, Fan et al. found that alternating magnetic field further amplified ROS level in macrophages treated by ferrimagnetic vortexdomain iron oxide nanoring and graphene oxide (FVIOs-GO), and stimulated macrophage polarization from TAM to M1 [106]. Besides, the Prussian blue-modified ferritin nanoparticles (PB-Ft NPs) possessed ROS-inducing abilities by their peroxidase-like activity, particularly under 808 nm laser irradiation, and thus promoted M1 polarization. The enhanced peroxidase-like activity of PB-Ft NPs largely depends on the increased temperature under external laser irradiation [107].
In contrast, the elimination of excessive ROS by iron-based nanoparticles reverses the M1 phenotype and relieves the inflammatory condition. It is reported that a series of PBNPs exert ROS scavenging effects and suppress M1 polarization [7,58,79,80,108]. Due to their excellent anti-inflammatory ability, PBNPs have been widely studied for the treatment of peritonitis, maxillofacial infection, vascular restenosis and skin wound in different tissue. For example, Fan et al. reported that the hollow-structured manganese Prussian blue nanozyme (HMPBzymes) with intracellular ROS clearing activities induced M1 to M2 macrophage polarization under the hypoxia condition and suppressed inflammation in mice osteoarthritic model [61,109].
Lysosomal-autophagy modulation
Nanoparticles' internalization, transportation, and degradation greatly relied on the intracellular membrane structurebased organelle system such as endosomes, lysosomes and autophagosomes, which also acted as important regulatory pathways for macrophage polarization. The co-localization of nanoparticles with lysosomes sometimes causes lysosome dysfunctions with high-permeable membranes. Thus, the damaged lysosomes would release their containing enzymes like cathepsin-B and trigger NLRP3-mediated inflammation, which contributes to M1-like polarization ( Figure 7A) [110]. For example, Song et al. demonstrated that PMA-coated IONPs caused M1 polarization mainly by inducing lysosomal damage. The same phenomenon was also observed in Au nanoparticles-treated macrophages [63]. Furthermore, the dysfunction of specific secretory lysosomes may block the transportation pathway of certain cytokines, thus inhibiting the function of macrophages. The commercially-used IONPs, Resovist, was reported to inhibit LPS-induced IL-1β generation by hindering the secretory lysosome-mediated excretion, and thus partially reduced the inflammatory phenotype in murine microglial cells [111].
As an essential intracellular process for energy recycles and stress handling, autophagy is commonly involved in the modulation of immune response including macrophage polarization. In general, the high level of autophagy induces macrophage polarization to the M1 phenotype. Ai's group has conducted a series of studies to clarify the modulation effect of autophagy on polarization. They found that both Resovist (ferucarbotran) and Feraheme (ferumoxytol), the two clinically applied SPION, can trigger the autophagy-mediated proinflammatory response in vitro and in vivo. Importantly, suppression of autophagy by the autophagy inhibitor, chloroquine, dramatically inhibited the SPION-mediated secretion of inflammatory cytokines, strongly suggesting that autophagy acted as one of the important modulation factors for M1-like polarization ( Figure 7B) [52,96]. Recently, further studies demonstrated that compared with bare IONPs, IONPs with carboxyl modification on the surface induced a much lower level of autophagy in macrophages, leading to reduced immune activation [96].
Ion modulation
Most iron-based nanoparticles ultimately undergo degradation by acid organelles and release iron ions into cells. These released ions would further lead to a series of physiological changes and serve as an important regulator of macrophage polarization [112,113]. Studies have proved that, after the treatment with IONPs, intracellular iron levels were increased in a dose-and time-dependent manner [114]. Theoretically, endocytosis of iron-based nanoparticles with higher iron leaching tends to contribute to a higher level of intracellular irons and trigger M1 polarization. Notably, iron chelate desferrioxamine (DFO) blocked the polarization effect of SPIONs in macrophages [70,112]. In addition, Resovistinduced polarization from M2 to M1 was strongly limited by the pre-chelation of cellular iron in human-derived macrophages. Also, with the addition of DFO, carboxydextran-coated SPIONs failed to increase the expression of M1 makers, such as ferritin and cathepsin L in THP-1 [70]. These results strongly highlight the crucial roles of degraded iron ions in modulating macrophage polarization.
Summary and perspectives
Since the increasing studies on the design, synthesis, and biomedical applications of iron-based nanoparticles, their immune modulation effects have received much attention. After entering human bodies, most nanoparticles will be captured and metabolized by the reticuloendothelial system (RES), such as monocytes and macrophages. Upon uptake of iron-based nanoparticles, the polarization and function of macrophages can be modulated. In this review, we systemically summarized different iron-based nanoparticles that could regulate macrophage polarization and function and discussed the influence factors and mechanisms underlying the modulation process, with the expectation of providing evidence and guidance for subsequent design and application of iron-based nanoparticles with specific macrophage modulation functions.
In summary, the effect of iron-based nanoparticles on macrophage polarization is a systemic result influenced by the intrinsic physicochemical properties of nanoparticles including composition, size, surface, and so on. Besides, the addition of various external factors such as magnetic fields, laser irradiation, and temperature can trigger specific properties of iron-based nanoparticles and influence macrophage polarization in unconventional ways. Multiple mechanisms are involved in the modulation of macrophage polarization by iron-based nanoparticles, including membrane receptor interference, transcriptional modulation, ROS rebalancing, the lysosomal-autophagy pathway, and iron ion release. Although some of the regularity has been revealed, there are still essential issues requiring further studies: (1) Among all the influencing factors and regulating mechanisms, which is dominant in macrophage polarization over others under specific conditions? (2) How do different factors and mechanisms coordinate with each other and determine the final direction of macrophage polarization? (3) Under different biological conditions, do the same iron-based nanoparticles present identical or different functions on macrophage polarization? (4) How to rationally design iron-based nanoparticles to achieve precise modulation and control of macrophage polarization? In the future, more in-depth research and studies considering the above issues are expected to gain a better understanding of iron-based nanoparticles' modulation on macrophages and promote their better applications to biomedicine.
Fog-Removing Method for the Monitoring Traffic Image along with Mathematical Analysis of Retinex Algorithm
In this paper, we propose an improved fog-removal technique, together with a mathematical analysis of the Retinex algorithm, so that processed images become clearer and easier to recognize. The proposed technique combines the Retinex algorithm and the wavelet transform. It first uses the Retinex algorithm to enhance the image, then the wavelet transform is employed to enhance the details of the image; finally, a clear, defogged image is obtained after the non-essential coefficients are reduced. Analysis of the PSNR (peak signal-to-noise ratio) of the image contrast shows that images processed by our proposed technique have higher PSNR values than those produced by standard Retinex algorithms. In addition, we develop a novel image-defogging algorithm by directly predicting the haze thickness of enhanced images instead of adopting prior assumptions or constraints. The proposed method reduces halo artifacts by adaptively limiting the boundary of an arbitrary haze image. A new multi-scale image fusion method for single-image dehazing has also been proposed to produce a more natural visual recovery effect. The results show that this method outperforms state-of-the-art haze removal methods in terms of both efficiency and the dehazing visual effect.
Inspired by several experiments, Land and McCann found that an effective way to compute the lightness value of a pixel i in an image is to consider a certain number of paths, starting at random points and ending at i, and then to compute the average of the products of ratios between the intensity values of subsequent points along the paths, with the following corrections:

1) If the ratio does not differ from 1 by more than a fixed threshold value, the ratio is treated as unitary.

2) If the chain of ratios passes the value 1 at some point of the path, the cumulated relative lightness is forced to 1, and the computation restarts from that point.

The first correction is called the threshold mechanism and, thanks to it, the computation ignores smooth changes in shading due, for example, to smooth gradients of the illuminant. The second correction is known as the reset mechanism, and it is responsible for the so-called white-patch behaviour of Retinex, meaning that the points that trigger the reset mechanism become the local references for white. Here $j_k$ denotes the starting point of the $k$-th path ($i$ is the end point of every path).

B. General Background for the Formula

Given a digital image, consider a collection of $N$ oriented paths $\gamma_k$ made of ordered chains of pixels starting at $j_k$ and ending at $i$. Let $n_k$ be the number of pixels traversed by the path $\gamma_k$, and let $t_k = 1, \ldots, n_k$ be its parameter, i.e., $\gamma_k : \{1, \ldots, n_k\} \to \mathrm{Image} \subset \mathbb{R}^2$, with $\gamma_k(1) = j_k$ and $\gamma_k(n_k) = i$. For simplicity, write two subsequent pixels of the path as $\gamma_k(t_k) = x_{t_k}$ and $\gamma_k(t_k + 1) = x_{t_k+1}$, for $t_k = 1, \ldots, n_k - 1$. Consider, in each fixed chromatic channel $c \in \{R, G, B\}$, the pixel intensities $I(x_{t_k})$ and $I(x_{t_k+1})$, and then compute the ratio $R_{t_k} = I(x_{t_k+1}) / I(x_{t_k})$. For technical reasons put $R_0 = 1$ and normalize the intensities so that they take values in the real unit interval (the normalization factor is 1/255 if 8 bits are used per pixel in each chromatic channel).

Formula: We claim that the (normalized) value of lightness given by Retinex for a generic pixel $i$, in every fixed chromatic channel $c$, can be obtained by the formula

$$L(i) = \frac{1}{N} \sum_{k=1}^{N} \prod_{t_k=1}^{n_k - 1} \delta_k(R_{t_k}),$$

where $\delta_k$ is defined by the following four alternatives. The first alternative applies when the intensity of the pixel $x_{t_k+1}$ is clearly smaller than the intensity of the pixel $x_{t_k}$; then $\delta_k$ reproduces the value of the ratio $R_{t_k}$. The second alternative occurs when only a small change in intensity is measured between two subsequent pixels. In this case, $\delta_k(R_{t_k})$ is defined to be 1, so that the product of the ratios remains exactly the same as in the previous step. This is the numerical implementation of the threshold mechanism. The third alternative refers to the situation when the ratio $R_{t_k}$ is greater than $1 + \varepsilon$ but the product $\delta_k(R_1)\,\delta_k(R_2) \cdots \delta_k(R_{t_k-1})\, R_{t_k}$ is not greater than $1 + \varepsilon$. In this situation, $\delta_k$ reproduces the value of $R_{t_k}$ as in the first alternative. Finally, the fourth alternative holds when $\delta_k(R_1)\,\delta_k(R_2) \cdots \delta_k(R_{t_k-1})\, R_{t_k} > 1 + \varepsilon$, and in this case $\delta_k$ resets the chain of products to 1 because a "local white pixel" has been reached. This alternative implements the reset mechanism (hence the white-patch behaviour) of the algorithm. It is useful to write the contribution of the single path $\gamma_k$ to $L(i)$ as $L_k(i) = \prod_{t_k=1}^{n_k - 1} \delta_k(R_{t_k})$, so that the formula above reduces simply to the average of these contributions:

$$L(i) = \frac{1}{N} \sum_{k=1}^{N} L_k(i).$$
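The following sketch illustrates the path-based formula above for a single chromatic channel. The path-sampling scheme (simple random walks), the number and length of paths, and the threshold ε are illustrative assumptions, not the settings of the original algorithm.

```python
# Minimal sketch of the path-based Retinex lightness formula, for one channel.
# Path generation (random walks), n_paths, path_len and eps are illustrative
# choices, not the exact sampling scheme of the original algorithm.
import numpy as np

def retinex_lightness(img, i, j, n_paths=50, path_len=100, eps=0.05, seed=0):
    """Estimate L(i) at pixel (i, j) of a normalized image (values in (0, 1])."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    contributions = []
    for _ in range(n_paths):
        # Build a random path ending at (i, j) by walking backwards from it.
        y, x = i, j
        path = [(y, x)]
        for _ in range(path_len - 1):
            dy, dx = rng.integers(-1, 2, size=2)
            y = int(np.clip(y + dy, 0, h - 1))
            x = int(np.clip(x + dx, 0, w - 1))
            path.append((y, x))
        path.reverse()  # path now starts at a random point and ends at (i, j)

        product = 1.0
        for (y0, x0), (y1, x1) in zip(path[:-1], path[1:]):
            ratio = img[y1, x1] / img[y0, x0]
            if abs(ratio - 1.0) <= eps:
                ratio = 1.0              # threshold mechanism
            if product * ratio > 1.0 + eps:
                product = 1.0            # reset mechanism: local white reached
            else:
                product *= ratio
        contributions.append(product)
    return float(np.mean(contributions))  # average over the N paths
```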
C. The Wavelet Transform Algorithm
The Fourier theory uses sines and cosines as basis functions to analyse a signal. Because these basis functions extend infinitely, the Fourier transform (FT) is best suited to stationary signals, which are generally assumed to be periodic. The Fourier theory is therefore a purely frequency-domain approach: a signal f(t) is represented by its frequency spectrum
F(ω) = ∫ f(t) e^(−jωt) dt,
and, under suitable conditions, the original signal can be recovered by the inverse Fourier transform
f(t) = (1/2π) ∫ F(ω) e^(jωt) dω.
Discrete-time versions of both the direct and inverse transforms exist as well. The 2-D discrete wavelet transform is a well-known technique in image processing. It applies a high-pass filter and a low-pass filter twice, in the horizontal and vertical directions respectively, and the decomposition yields four sub-bands: the approximation component A, the horizontal detail component H, the vertical detail coefficients V, and the diagonal detail component D. The approximation coefficients represent the background of the image, which has the lowest frequency content, while the detail coefficients represent the scene information, which carries the high-frequency content.
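As a concrete illustration of the decomposition just described, the following sketch uses the PyWavelets library; the choice of the Haar wavelet and of the fraction of detail coefficients to keep are assumptions for the example, since the paper does not state them.

```python
import numpy as np
import pywt  # PyWavelets

def dwt2_decompose(image):
    """Single-level 2-D DWT: approximation A and horizontal (H),
    vertical (V) and diagonal (D) detail sub-bands."""
    cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')
    return cA, cH, cV, cD

def suppress_small_details(image, keep_fraction=0.9):
    """Reconstruct the image after zeroing the smallest detail coefficients
    (the 'non-essential coefficients' mentioned in the text)."""
    cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')
    kept = []
    for band in (cH, cV, cD):
        thr = np.quantile(np.abs(band), 1.0 - keep_fraction)
        kept.append(np.where(np.abs(band) >= thr, band, 0.0))
    return pywt.idwt2((cA, tuple(kept)), 'haar')
```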
II. LITERATURE SURVEY A. Introduction
Conventional image-capture schemes produce degraded images in bad weather conditions, and such images are difficult to restore. Haze removal from a single image remains a challenging task because fog depends on unknown depth information. Over the years many researchers have tried to overcome this problem. R. Fattal proposed a method that can restore the image and also derive a reliable transmission map for additional applications such as image refocusing and novel view synthesis. Based on a refined model, the image is divided into regions of constant albedo, and it is assumed that surface shading and medium transmission are statistically uncorrelated. The method uses a single input image; its results are physically plausible and generally good, although it cannot handle heavily hazy images and fails when the assumption of statistically uncorrelated surface shading and medium transmission does not hold. Tan's method observes that a haze-free image must have higher contrast than the input image and therefore maximizes local contrast; the dark channel prior is used in this technique. Atmospheric light is estimated from the sky region, and the transmission is estimated by refining a coarse map into a fine one. Two simple filters are combined on the basis of local pixel information, which reduces the computational cost. The results are visually appealing but not physically valid: they tend to be over-saturated and the transmission may be underestimated. Tarel introduced a technique that improves meteorological visibility distances measured in foggy weather using a camera on a moving vehicle. It dynamically implements Koschmieder's law, which relates the apparent contrast of an object against the sky background, at a known observation distance, to its intrinsic contrast and to the atmospheric transmissivity. The meteorological visibility distance is defined by the International Commission on Illumination (CIE) as the distance beyond which a black object of suitable size is perceived with a contrast of less than 5%. The method is statistically better than [4] in terms of visibility levels, but it uses a median filter to estimate the atmospheric veil, which introduces strong discontinuities in the estimated veil. Zhang performed visibility enhancement using image filtering, building on Tarel et al.'s approach and improving it by using dimensionality reduction to correct the preliminary haze-layer estimate. He developed a new filtering approach based on projection onto the signal subspace spanned by the first K eigenvectors; noise reduction and texture reduction are also performed, although the method takes longer to compute than Tarel's. He et al. used guided image filtering and proposed a simple yet effective haze-removal method based on the dark channel prior: most haze-free outdoor images contain pixels with very low intensity in at least one colour channel, so the thickness of the haze can be estimated directly. The output of one filter can serve as the input (guide) of the next guided filter. The approach can be used for edge-preserving smoothing, gives better results than the popular bilateral filter, has significantly faster processing times, and also produces a high-quality depth map.
The method may not work for images containing objects that are inherently similar to the atmospheric light, in which case the transmission is underestimated because the dark channel prior is statistical in nature. The basic haze image model used for fog removal is
I(x) = J(x) t(x) + A (1 − t(x)),
where I(x) is the observed hazy image, J(x) the scene radiance, A the airlight, and t(x) the medium transmission. The direct-attenuation term vanishes as t(x) tends to zero; to avoid this ambiguity, t(x) is restricted to a lower bound t0.
2) Methods Based on the Depth Relationship for Fog Image Restoration: the depth information of the image is an important clue for restoring fog-degraded images. Depending on whether the scene depth is known, such recovery methods fall into two categories. One assumes that the scene depth information is known; this approach was first proposed by Oakley. The other uses auxiliary information, for example interactive depth estimation or a known 3-D model, to obtain the scene depth; Kopf's method, for instance, obtains the depth of field from a known 3-D model in order to restore the foggy image.
3) Image Restoration Based on Prior Information: many researchers focus on how to remove fog completely from a single image according to the variation in fog concentration. Early work in this direction was done by Tan. In addition, Fattal and others, under the assumption that the transmission and the surface-shading component of the scene are locally uncorrelated, estimate the scene irradiance and from it derive the transmission map.
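The haze model I(x) = J(x)t(x) + A(1 − t(x)) and the dark channel prior mentioned above can be illustrated with the following sketch. The patch size, omega = 0.95, and t0 = 0.1 are typical values from the dark-channel literature, not parameters reported in this paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel: per-pixel minimum over the RGB channels and a local patch."""
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_transmission(img, A, omega=0.95, patch=15, t0=0.1):
    """t(x) = 1 - omega * dark_channel(I/A), clipped below at t0 to avoid
    amplifying noise in the recovery step."""
    t = 1.0 - omega * dark_channel(img / A, patch)
    return np.clip(t, t0, 1.0)

def dehaze(img, patch=15):
    """Recover J(x) = (I(x) - A) / t(x) + A from the haze model.
    `img` is an RGB image with values in [0, 1]."""
    dc = dark_channel(img, patch)
    n = max(1, int(dc.size * 0.001))                       # brightest 0.1 % of dark-channel pixels
    idx = np.unravel_index(np.argsort(dc, axis=None)[-n:], dc.shape)
    A = img[idx].mean(axis=0)                              # estimated airlight
    t = estimate_transmission(img, A, patch=patch)
    return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)
```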
4) Automatic Image De-weathering Using Curvelet-Based Vanishing-Point Detection: under poor weather conditions (rain, fog, and so on), both the contrast and the colour of an image are degraded. To reverse these effects, spatially adaptive algorithms have previously been developed to enhance images captured under such conditions. Because of the scattering of light, the degradation of an image is proportional to the distance from which the image was taken, so de-weathering algorithms need depth information extracted from the image. Previously implemented techniques were limited in that an interactive step was required in which the user selected both the horizon colour and the vanishing point of the image. Using new tools for geometric data representation (curvelets), an automatic de-weathering algorithm has been developed. Previous Work - Turbid Media: poor weather conditions (fog, rain, and so on) in the scene can be regarded as a turbid medium that scatters the photons travelling through it. Photons passing through a scattering medium can be classified into three types: ballistic photons undergo no scattering and travel in a direct line of sight to the image plane; snake photons undergo slight scattering but retain their directional information; diffuse photons undergo a great deal of scattering and arrive at the image plane having lost most of their directional information.
C. Proposed System
A fog-removal method for traffic image monitoring is proposed that combines the Retinex algorithm and the wavelet transform. The proposed method first uses the Retinex algorithm to enhance the image; the wavelet transform is then used to enhance the details of the image, and finally a clear, defogged image is obtained after discarding the non-essential coefficients. The proposed method can effectively remove haze from images taken in heavy fog, and the evaluation shows that it is better than traditional algorithms such as the Retinex algorithm [2,3] and the dark channel prior.
III. ANALYSIS OF FOG REMOVING AND MATHEMATICAL ANALYSIS OF RETINEX ALGORITHM
The fog-removal method combines the Retinex algorithm and the wavelet transform to obtain a clear, realistic image in adverse weather for transport monitoring and visibility systems. The setup used for defogging the foggy images is as follows:
1) Collected foggy images.
2) Experiments carried out on an i5 processor at 2.3 GHz with 8 GB RAM.
3) MATLAB 2.0 used for the development of this project. The traditional Retinex algorithm [2,3] and dark channel algorithm [8] are used for comparison with the proposed method (the R+WT method), which combines the Retinex and wavelet transform algorithms.
A. Retinex Algorithm
The Retinex algorithm [2,3] has shown a good ability to remove haze from images. Its purpose is to reduce the effect of the incident illumination on the image and to strengthen the reflectance image, as follows:
Rl(x, y) = log Il(x, y) − log[F(x, y) * Il(x, y)],     (1)
where Rl(x, y) is the output corresponding to channel l, Il(x, y) is the input luminance value of channel l at pixel (x, y), * denotes the convolution operation, n is the number of colour channels, and F(x, y) is the centre/surround function, represented by a Gaussian as in formulation (2):
F(x, y) = K exp[−(x² + y²)/σ²],     (2)
with K a normalization constant. The parameter σ controls the extent of the centre/surround function: the smaller its value, the sharper the centre/surround function.
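A minimal single-scale Retinex sketch corresponding to formulations (1) and (2), assuming a log-domain centre/surround operation with a Gaussian surround (the value of sigma below is only an illustrative choice, not a parameter from the paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(channel, sigma=80.0):
    """R(x, y) = log I(x, y) - log[F(x, y) * I(x, y)], with the Gaussian
    centre/surround function F applied via gaussian_filter."""
    I = channel.astype(float) + 1.0            # avoid log(0)
    return np.log(I) - np.log(gaussian_filter(I, sigma))

def retinex_rgb(img, sigma=80.0):
    """Apply SSR to each channel and stretch the result back to [0, 255]."""
    out = np.stack([single_scale_retinex(img[..., c], sigma) for c in range(3)],
                   axis=-1)
    out -= out.min()
    if out.max() > 0:
        out *= 255.0 / out.max()
    return out.astype(np.uint8)
```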
B. The Wavelet Transform Algorithm
The 2-D discrete wavelet transform is a well-known technique in image processing. It applies a high-pass filter and a low-pass filter twice, in the horizontal and vertical directions respectively, and the decomposition yields the approximation component A, the horizontal detail component H, the vertical detail coefficients V, and the diagonal detail component D.
Fig: wavelet transform algorithm block diagram
The approximation coefficients represent the background of the image, which has the lowest frequency content, while the detail coefficients represent the scene information, which carries the high-frequency content.
C. Proposed Method
The Retinex algorithm can enhance most of the information in an image, but because it only improves the overall appearance, the details of the image are not outstanding. Wavelet-based enhancement, on the other hand, smooths the low-frequency information of the image and enhances the high-frequency information, improving image detail while reducing image noise at the same time. We therefore propose an improved fog-removal method that combines the advantages of the Retinex algorithm and the wavelet transform: it first uses the Retinex algorithm to enhance the overall appearance of the image, then applies wavelet-based enhancement to extract and boost the high-frequency information of the Retinex-processed image, and finally obtains a clearer, defogged image. The foggy image is given as input and passed first through the dark channel block, then to the Retinex block, which enhances the image and recovers colour and detail. The output of the Retinex stage is then passed to the wavelet transform block, and the combined R+WT processing produces the final defogged output image, which is clearer and more realistic than the fog-contaminated input. PSNR is then used to measure the effectiveness of the fog-removal technique and the performance of the proposed method. A sketch of this pipeline is given below.
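Assembling the stages gives, roughly, the pipeline sketched here. This is only an illustrative sketch built on the earlier dehaze and retinex_rgb fragments; the detail gain of 1.5, the Haar wavelet, and the coefficient-pruning quantile are assumptions, not values taken from the paper.

```python
import numpy as np
import pywt

def rwt_enhance(img, detail_gain=1.5, prune_quantile=0.1):
    """Illustrative R+WT pipeline: dark-channel dehazing, Retinex enhancement,
    then wavelet-domain detail boosting with pruning of small coefficients.
    `dehaze` and `retinex_rgb` are the sketches given earlier; `img` is uint8 RGB."""
    dehazed = dehaze(img.astype(float) / 255.0)
    enhanced = retinex_rgb((dehazed * 255.0).astype(np.uint8))
    channels = []
    for c in range(3):
        cA, (cH, cV, cD) = pywt.dwt2(enhanced[..., c].astype(float), 'haar')
        kept = []
        for band in (cH, cV, cD):
            band = band * detail_gain                        # boost high-frequency detail
            thr = np.quantile(np.abs(band), prune_quantile)  # drop the smallest coefficients
            kept.append(np.where(np.abs(band) >= thr, band, 0.0))
        rec = pywt.idwt2((cA, tuple(kept)), 'haar')
        channels.append(rec[:enhanced.shape[0], :enhanced.shape[1]])
    return np.clip(np.stack(channels, axis=-1), 0, 255).astype(np.uint8)
```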
D. Mathematical Analysis of Retinex Algorithm
A mathematical analysis of the Retinex algorithm in the presence of the threshold is hard to carry out: to make predictions about the behaviour of Retinex we would need to know how many ratios fall below the threshold, and this cannot be known unless we know the image and the topology of the paths. If, however, we neglect the threshold, then with the simplified version of the formula we can make predictions about the qualitative behaviour of Retinex and also about its behaviour in relation to its parameters.
1) Simplification of the Formula for ε = 0: when ε = 0, the meaning of the functions δk becomes much simpler.
With zero threshold, δk acts either as the identity function or as the reset function. To avoid cumbersome notation we drop the subscript k and fix attention on a given path γ starting at γ(1) = j and ending at γ(n) = i. Let H be the value of the parameter such that x_H is the pixel with the highest intensity along the whole path; this pixel triggers the reset mechanism. Suppose first that no reset occurs before x_H; then the first ratios cancel each other telescopically,
I(x_2)/I(x_1) · I(x_3)/I(x_2) ··· I(x_H)/I(x_(H−1)) = I(x_H)/I(x_1),
and the ratio I(x_H)/I(x_(H−1)) is certainly greater than 1 because, by hypothesis, x_H is the pixel with the highest intensity in γ; moreover, because of the cancellations, the product of this ratio with the previous ones reduces to I(x_H)/I(x_1), which is also greater than 1. The reset mechanism is therefore triggered and the chain of products is set back to 1; from that point onwards the same telescopic cancellation gives a final contribution of I(i)/I(x_H). If there is more than one pixel with the same highest intensity, the same considerations apply to the last such pixel visited by γ, and the conclusion is unchanged. All of this shows that the contribution of the path reduces simply to I(i)/I(x_H). Since the same argument applies to every path, the contributions are L_k(i) = I(i)/I(x_(H_k)), where x_(H_k) is the pixel with the highest intensity traversed by γ_k, for every k = 1, ..., N. The formula for L(i) therefore becomes
L(i) = (1/N) Σ_{k=1}^{N} I(i)/I(x_(H_k)).
Finally, recall that the intensity values are normalized, so I(x_(H_k)) ≤ 1 for every k = 1, ..., N, and hence I(i)/I(x_(H_k)) ≥ I(i). It follows that for every pixel i, L(i) ≥ I(i): this is a rigorous proof of the fact that an image filtered with Retinex without threshold is always brighter than or equal to the original one.
4) R+WT Image
Finally, with this method we obtain clearer and more realistic images than with the previous existing methods. Since visual clarity alone is not an objective criterion, we also use a quantitative measure to evaluate the effectiveness of the fog removal, namely the PSNR. Peak signal-to-noise ratio, often abbreviated PSNR, is an engineering term for the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation. Because many signals have a very wide dynamic range, PSNR is usually expressed on the logarithmic decibel scale. PSNR is most easily defined through the mean squared error (MSE). Given a noise-free m×n monochrome image I and its noisy approximation K, the MSE is defined as
MSE = (1/(mn)) Σ_{i=0}^{m−1} Σ_{j=0}^{n−1} [I(i, j) − K(i, j)]²,
and the PSNR (in dB) is
PSNR = 10 · log10(MAX_I² / MSE),
where MAX_I is the maximum possible pixel value of the image (255 for 8-bit data). Using this PSNR formula we can measure the effectiveness of the image quality (Table: PSNR of experiments). The effectiveness of the traditional algorithms and of the proposed method has been evaluated successfully using the PSNR measure.
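The MSE/PSNR definitions above translate directly into code; a short sketch:

```python
import numpy as np

def mse(reference, test):
    """Mean squared error between a reference m x n image I and its estimate K."""
    return np.mean((reference.astype(float) - test.astype(float)) ** 2)

def psnr(reference, test, max_value=255.0):
    """PSNR = 10 * log10(MAX^2 / MSE), in decibels."""
    err = mse(reference, test)
    return float('inf') if err == 0 else 10.0 * np.log10(max_value ** 2 / err)
```

With 8-bit images, identical inputs give an infinite PSNR, and larger values indicate a restoration closer to the reference.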
Paenibacillus polymyxa Antagonism towards Fusarium: Identification and Optimisation of Antibiotic Production
An antibiotic produced by Paenibacillus polymyxa 7F1 was studied. The 7F1 strain was isolated from the rhizosphere of a wheat field and showed broad-spectrum activity against several plant pathogens. The strain was identified on the basis of 16S rRNA gene and gyrB gene sequencing. Antibiotic production was optimized by one-factor-at-a-time (OFAT) and response surface methodology (RSM) approaches: suitable production conditions were first screened by the OFAT method, and the individual and interaction effects of three independent variables, culture temperature, initial pH, and culture time, were then optimized with a Box-Behnken design. The 16S rRNA gene sequence (1239 nucleotides) and the gyrB gene sequence (1111 nucleotides) determined for strain 7F1 shared their highest identities with those of Paenibacillus polymyxa. The results showed that the optimal fermentation conditions for antibiotic production by Paenibacillus polymyxa 7F1 were a culture temperature of 38 °C, an initial pH of 8.0, and a culture time of 8 h. The antibiotics produced by Paenibacillus polymyxa 7F1 include lipopeptides such as iturin A and surfactin. The results provide a theoretical basis for the development of bacteriostatic biological agents and the control of mycotoxins.
Introduction
Fusarium head blight (FHB) is a severe disease of wheat, corn, barley, and other grains and occurs in all regions worldwide. Researchers have found that 17 species of Fusarium are associated with the symptoms of FHB [1]. The main causal species of FHB are Fusarium graminearum, Fusarium avenaceum, Microdochium nivale, Fusarium culmorum, and Fusarium poae, especially the Fusarium graminearum species complex [2][3][4]. These pathogens reduce wheat yield and quality by infecting wheat spikes. In addition, FHB pathogens produce substances that are toxic to humans and other animals, such as deoxynivalenol (DON) and nivalenol (NIV) [5]. The incidence of food poisoning and mycotoxin contamination varies considerably from year to year and is influenced by environmental growing conditions, local agronomic systems, and interactions between the two. Since Fusarium can survive saprotrophically on crop residues of host plants such as maize and wheat, no-tillage or reduced-tillage regimes favor FHB infection.
Over the past few decades, there has been a growing desire to reduce the use of harmful chemical fungicides and to control plant diseases by other means. This desire has led scientists to study more and more microbial agents for controlling plant diseases. Previous studies have shown that certain microorganisms can inhibit the growth of Fusarium on wheat; these microorganisms do not pollute the environment and reduce the use of, and dependence on, chemical pesticides, thereby slowing the development of pathogen resistance to chemical pesticides [6]. Several bacteria and fungi have been found to inhibit the growth of Fusarium graminearum [7]. Among the antimicrobial agents identified, Bacillus is the most compelling antibiotic-producing genus, and it has more advantages than other biocontrol microorganisms because of its inherent endospore formation and resistance to extreme conditions. The activity of Bacillus strains has been demonstrated by in vitro antibacterial assays [8], by reduction of disease severity through in situ suppression of spikelet infection [9], and by identification of lipopeptides [10]. In studies of the antibacterial mechanism, the production of antifungal compounds is considered the primary mode of action. Response surface methodology (RSM) is a method for approximating an unknown limit-state surface through a series of deterministic tests with multiple variables. The main advantage of RSM is that it fits a complex unknown functional relationship in a small region with a simple first- or second-order polynomial model, which is easy to compute and can be analysed continuously at every level of the experiment. It is widely used in optimizing process variables [11][12][13][14][15][16]. The Box-Behnken design has three levels and needs fewer experiments; its operating cost is lower than that of central composite designs with the same number of factors, and it is widely used by researchers [17,18].
Several conditions, such as initial pH, culture temperature, and culture time, must be optimized to maximize levels of antibiotic produced by Paenibacillus polymyxa 7F1. In this study, a strain of Paenibacillus polymyxa with an antagonistic effect was identified, and the conditions for producing the antibiotic were optimized by OFAT and RSM methods.
Identification of Antagonistic Bacteria 7F1
Strain 7F1 was isolated and purified from the rhizosphere of a wheat field in Nanjing and showed a robust antagonistic effect against the Fusarium graminearum GZ3639 strain. Physiological and biochemical tests identified strain 7F1 as a Gram-positive, catalase-positive, facultatively anaerobic, milky, rod-shaped bacterium with a central oval endospore. Table 1 shows that strain 7F1 could not use inositol or citrate as carbohydrate sources to produce gas and acid. The salt-tolerance test showed that strain 7F1 was not able to grow above a 4% NaCl concentration in Luria-Bertani (LB) medium. Comparison of a 1239 bp region of the 16S rRNA sequence and an 1111 bp region of the gyrB sequence of 7F1 (GenBank accession no. KP410736) with those of known microorganisms confirmed that 7F1 shared 99% sequence similarity with Paenibacillus polymyxa (Figures 1 and 2). Strain 7F1 was therefore identified as Paenibacillus polymyxa.
Antifungal Activity and Culture Conditions for Antibiotic
Paenibacillus polymyxa 7F1 showed high antifungal activity against all fungal strains tested in this study. Figure 3 shows the antifungal activity of Paenibacillus polymyxa 7F1 against seven pathogenic fungi. The results of the time-course experiment revealed that antibiotic production by Paenibacillus polymyxa 7F1 was growth dependent. At a culture temperature of 37 °C and an initial pH of 7.0, the antifungal activity and OD600 value increased with the extension of culture time, and the isolate had the highest antifungal activity and OD600 value at 28 h of incubation (Figure 4).
Model Fitting and ANOVA
The response values for each set of variable combinations are shown in Table 2, and the statistical analysis shows that the response values are best described by a second-order polynomial model, which was therefore adopted. The models had satisfactory levels of adequacy (R²). The antagonistic diameter varied from 8.37 to 16.46 mm (Table 2). All models for the antibiotic are shown in Tables 3 and 4 and were found to be significant. The regression analysis of the optimization study indicated that the model terms X1, X2, X3, X1X2, X2X3, X1², and X3² were significant (p < 0.05), whereas X1X3 and X2² were not significant (p > 0.05). These results indicate that the interactions between culture temperature, initial pH, and culture time are directly related to antibiotic production. The coefficient of determination R² was 0.9945, indicating that 99.45% of the sample variation was attributable to the variables and only about 1% of the total variance could not be explained by the model. Regression models with R² values greater than 0.9 are considered to have a high correlation [12]; the R² value therefore reflects a very good fit between the observed and predicted responses. The adjusted determination coefficient (adjusted R² = 98.47%) also satisfactorily confirmed the significance of the model. Regression analysis of the experimental data was carried out to fit the mathematical model and determine the optimal region of the studied response. The predicted response Y could be expressed in coded values by a second-order polynomial of the form
Y = β0 + β1X1 + β2X2 + β3X3 + β12X1X2 + β13X1X3 + β23X2X3 + β11X1² + β22X2² + β33X3²,
where Y is the antagonistic diameter, X1, X2, and X3 are the coded variables for culture temperature, initial pH, and culture time, respectively, and the β coefficients are the fitted regression coefficients listed in Table 3. In general, exploration and optimization of a fitted response surface can produce poor or misleading results unless the model exhibits a good fit, so it is critical to check the adequacy of the model [19]. The F-ratio in the ANOVA table is the ratio of the mean square of a term to the pure error obtained from the replicates at the design centre. The significance of the F-value depends on the number of degrees of freedom (DF) in the model and is shown in the p-value column (95% confidence level); terms with p-values below 0.05 therefore have a significant effect [20]. Table 4 lists the analysis of variance (ANOVA) for the fitted quadratic polynomial model of the antagonistic diameter. The lack of fit is an essential term in assessing the functional relationship between the factors and the response variable of a regression model. As shown in Table 4, the p-value of the lack of fit was 0.32, meaning that it is not significant relative to the pure error and suggesting that the model equation is adequate for predicting the antagonistic diameter for any combination of variable values. The p-value was used as a tool to check the significance of each coefficient and the strength of the interaction between the independent variables [21]; the better the fit statistics, the better the correlation between the observed and predicted values [22].
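The least-squares fit behind Tables 3 and 4 can be reproduced outside Design Expert. The sketch below fits the ten-term quadratic model to a coded design matrix X (17 × 3) and a response vector y, and reports R²; the arrays X and y are placeholders to be filled with the values from Table 2, which are not reproduced here.

```python
import numpy as np

def quadratic_design_matrix(X):
    """Columns: 1, X1, X2, X3, X1X2, X1X3, X2X3, X1^2, X2^2, X3^2 (coded units)."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

def fit_quadratic(X, y):
    """Least-squares fit of the second-order polynomial; returns coefficients and R^2."""
    D = quadratic_design_matrix(X)
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    y_hat = D @ beta
    ss_res = float(np.sum((y - y_hat) ** 2))
    ss_tot = float(np.sum((y - y.mean()) ** 2))
    return beta, 1.0 - ss_res / ss_tot

# X: (17, 3) coded design matrix from Table 2, y: (17,) antagonistic diameters (mm)
# beta, r2 = fit_quadratic(X, y)
```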
Analysis of Response Surfaces
Since the lack of fit of the models was not significant, the response values could be adequately explained by the regression equation. The regression models allowed the prediction of the effects of the three parameters on the antagonistic diameter. The relationship between the independent and dependent variables was illustrated by three-dimensional response surfaces and the corresponding two-dimensional contour plots. In coded units, when the factors interact with each other, the inhibition-zone diameter of the antibiotic has a stationary point corresponding to the maximum value of the response.
Figure 5a indicates that the maximum antagonistic diameter occurred at the optimum culture temperature combined with a high initial pH; hence the effect of the initial pH depended on the culture temperature. Figure 5b shows that increasing the culture temperature and culture time increased the antagonistic diameter, but the trend reversed with further increases in culture temperature and culture time. A similar trend was observed for the interaction between initial pH and culture time (Figure 5c); hence increasing the initial pH increased the antagonistic diameter at the optimum culture time and temperature.
Optimization of the Models
The primary goal of the optimization process was to maximize the response value. The results for the three analysed parameters, together with the maximum predicted and experimental values, are given in Table 5. Additional experiments using the predicted optimum conditions for the antagonistic diameter were carried out: a culture temperature of 38.83 °C, an initial pH of 8.00, and a culture time of 7.87 h, for which the model predicted a maximum response of 16.46 mm. Verification experiments were then performed using slightly rounded optimal conditions, a culture temperature of 38 °C, an initial pH of 8, and a culture time of 8 h, to ensure that the prediction did not bias the practical value; a mean value of 16.38 ± 0.084 mm (n = 5) was attained, in close agreement with the predicted value (p > 0.05), verifying the validity of the RSM model. These results show that the response model adequately reflects the expected optimization results and that the model is satisfactory and accurate.
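The numerical search that produced the predicted optimum (coded settings corresponding to 38.83 °C, pH 8.00, 7.87 h) can be sketched as follows, reusing the quadratic_design_matrix helper from the earlier fragment. The multi-start bound-constrained search is an assumption about how such an optimum might be located, not a description of Design Expert's internal algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def maximize_response(beta, bounds=((-1.0, 1.0),) * 3, n_starts=20, seed=0):
    """Maximize the fitted quadratic response over the coded design region
    with a simple multi-start bound-constrained search."""
    def negative_y(x):
        d = quadratic_design_matrix(x.reshape(1, 3))   # helper from the earlier sketch
        return -float((d @ beta)[0])
    rng = np.random.default_rng(seed)
    best = None
    for start in rng.uniform(-1.0, 1.0, size=(n_starts, 3)):
        res = minimize(negative_y, start, bounds=bounds)
        if best is None or res.fun < best.fun:
            best = res
    return best.x, -best.fun    # coded optimum and predicted maximum diameter
```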
Identification of Antibiotic
Antibiotics produced by Paenibacillus polymyxa 7F1 were separated using HPLC-MS. Three peaks were analysed to confirm the active peak, and the molecular weights of the compounds were elucidated by LC-ESI mass spectrometry [23,24]. The positive-ion mass spectra showed that the antibiotics produced by Paenibacillus polymyxa 7F1 contain a variety of compounds; three peaks among them showed a good antagonistic effect. These three active peaks were analysed by tandem mass spectrometry, giving [M+H]+ ion peaks at m/z 1057.7, 1008.8, and 1022.8, respectively (Figure 6), corresponding to C14 iturin A, C13 surfactin, and C14 surfactin.
Discussion
FHB has always been an important disease threatening the yield and quality of grain; in 2010 and 2012 in particular it caused substantial reductions in grain production [25]. There is still no fully satisfactory solution for controlling FHB: carbendazim and other chemical pesticides have been used for a long time, and no variety completely immune to FHB exists so far, so alternative methods to control FHB are urgently needed [26]. Biological control refers to the use of natural organisms or their metabolites to control plant diseases and pests; it mainly includes three categories, namely controlling diseases with microorganisms, controlling insects with insects, and controlling insects with microorganisms [6]. Resistance breeding and chemical control have achieved some success in controlling FHB. With the rapid development of ecological pathology, finding biological control measures that combine disease prevention, yield increase, and environmentally friendly, sustainable development from the perspective of microorganisms has become a focus of attention [7]. Using beneficial microorganisms to control many kinds of plant diseases, and thereby achieving comprehensive plant-disease management, has been a very active and promising field, and the use of biocontrol Bacillus to control plant diseases has been a developing trend in recent years. Bacillus has strong stress resistance, is harmless to humans and animals, and does not pollute the environment [27]. It is a good biocontrol bacterium and can produce a variety of antagonistic substances during fermentation, including bacteriocins, antibiotics, antibacterial proteins, degrading enzymes, and so on.
In this study, Paenibacillus polymyxa 7F1 was isolated from wheat rhizosphere soil, and the antibiotic it produced had a good inhibitory effect on Fusarium graminearum. The cultivation conditions were investigated through OFAT and RSM. Based on the OFAT results, RSM was used to estimate and optimize the experimental variables: culture temperature (°C), initial pH, and culture time (h). To gain a better understanding of the effects of the variables on the antibiotic, the fitted model was presented as response-surface plots [28,29]. The terms X1, X2, X3, X1X2, X2X3, X1², and X3², that is, culture temperature, initial pH, culture time, the interaction of culture temperature and initial pH, the interaction of initial pH and culture time, and the quadratic terms of culture temperature and culture time, had significant effects on the response value. A quadratic polynomial model with a high correlation was obtained and could be used to predict antibiotic production by Paenibacillus polymyxa 7F1. The optimal conditions for antibiotic production were determined as a culture temperature of 38 °C, an initial pH of 8, and a culture time of 8 h. The ability of the model equation to predict the optimal response value was confirmed by verification experiments under the optimal conditions [30,31]; under these conditions, the observed antagonistic diameter was 16.38 ± 0.084 mm, in close agreement with the predicted value.
According to reports, the antibacterial substances produced by Bacillus mainly include lipopeptides, antibacterial proteins, polyketides, and others [32,33]; among these, lipopeptide antibiotics such as fengycin, iturin, and surfactin, synthesized by the non-ribosomal pathway [34], are the main antibiotics produced by Bacillus fermentation and play an important role in inhibiting fungal diseases. Ongena et al. found that these lipopeptides can destroy the cell membrane of yeast cells, allowing potassium ions and other important substances to leak out and causing the death of the yeast cells [35]. The antifungal lipopeptide produced by Bacillus subtilis (an important active substance of the iturin family) can inhibit the growth of various fungi, with the strongest effect against Aspergillus flavus (Aspergillus spp.) [36]. Fengycin can affect the surface tension of fungal cell membranes, leading to the formation of micropores and the leakage of potassium and other important ions and causing cell death, although it has no significant effect on the morphology and cell structure of Fusarium oxysporum [37]. Zhao et al. concluded that the fengycin and surfactin produced by B. subtilis SG6 played a major role in the inhibition of Fusarium graminearum growth [38]. The mass spectrometry results indicated that the antibiotic produced by Paenibacillus polymyxa 7F1 contained lipopeptides such as iturin A and surfactin; the identification results in this paper are consistent with those in the literature.
Among biological control agents against plant diseases, antagonistic bacteria perform more prominently than other control materials, for the following reasons. First, studies have pointed out that biological control works mainly through antibiosis, competition, increasing the solubility of inorganic nutrients, and inducing systemic resistance in the host plant [39]; when antagonistic bacteria exert biocontrol, several mechanisms often act at the same time, which makes it difficult for pathogens to develop resistance. Second, antagonistic bacteria are mostly sourced from farmland, so they have the same ecological adaptability as the pathogens and colonize better [40]; moreover, bacteria multiply relatively rapidly, making it easier for them to outcompete pathogens and achieve disease prevention and control, with a longer control period than other microorganisms. Third, bacteria have short metabolic cycles and reproduce and propagate rapidly [10], so antagonistic substances are produced very quickly; in field experiments and applications, the synergy between the living bacteria and their antagonistic metabolites can greatly improve the biocontrol potential of antagonistic bacteria while saving time and control costs. Finally, the use of beneficial microorganisms for biological control is friendlier to the ecological environment and beneficial to the ecological balance [41]: the antagonistic substances they produce are highly specific, targeting only certain pathogens, do not react with other beneficial microorganisms, are generally harmless to the human body, and pose little or no harm to the ecosystem. Therefore, using antagonistic microorganisms to replace traditional chemical pesticides has great potential and promising market application prospects.
Strain and Culture Conditions
The antagonistic bacterium 7F1 was a wild-type strain initially isolated and identified from the rhizosphere soil of wheat infected with FHB and stored at −70 °C in 20% glycerol in the laboratory. Fusarium graminearum GZ3639 was supplied by the Department of Agriculture, Agriculture Research Service, Peoria, IL, USA.
The frozen strain was streaked onto Luria-Bertani agar, pH 7.0. After a 24 h incubation at 30 °C, cells were taken from the surface of the medium and inoculated with an inoculation loop into 100 mL of Luria-Bertani liquid medium (10 g tryptone/L; 5 g yeast extract/L; 10 g NaCl/L; pH adjusted to 7.0) in a 250 mL shake flask, then cultivated at 37 °C and 170 r/min for 24 h as a pre-culture [4]. The bacterial concentration of the pre-culture reached 10⁸-10⁹ CFU/mL. Then 10 mL of the pre-culture was transferred into 100 mL of Luria-Bertani liquid medium in a 250 mL Erlenmeyer flask and cultured under the same conditions to produce the antibacterial substances.
Extraction of Antibiotic
A total of 100 mL of fermentation broth was centrifuged at 10,000 rpm for 20 min. Ammonium sulfate was slowly added to the supernatant in an ice bath to 60% saturation, the mixture was kept in a refrigerator at 4 °C overnight, and the precipitate was collected by centrifugation at 10,000 rpm for 20 min. The supernatant was discarded and the precipitate was resuspended in phosphate buffer (pH 6.8). The suspension was placed in a dialysis bag (8,000-14,000 Da molecular-weight cut-off), fully desalted at 4 °C, and freeze-dried; the antibiotic thus obtained was dissolved in 1 mL of sterile water until use [37,38].
Antifungal Test
Antifungal activity was determined by the agar-well diffusion method [42]. Potato glucose agar (PDA) medium, prepared from 200 g potato, 20 g glucose, and 15 g agar dissolved in 1000 mL of distilled water, was used for the antifungal experiments. PDA plates were prepared containing Fusarium graminearum GZ3639 spores at 10⁴ per mL. Wells of 6.0 mm diameter were cut into the plates using a sterile steel borer, 50 µL of the antibiotic was added to each well, and the plates were incubated aerobically at 28 °C for 48 h. The same volume of PBS (pH 6.8) was used as a control. The antifungal effect was judged by observing the inhibition zone, and the diameter of the inhibition zone for each treatment was measured with a caliper. The other fungal test strains were Fusarium equiseti (CGMCC 3.6911), Fusarium verticillioides (CGMCC 3.7987), Fusarium semitectum (CGMCC 3.6808), Colletotrichum gloeosporioides (IVFCAAS PP08050601), Fusarium proliferatum (CGMCC 3.4741), and Fusarium oxysporum (CGMCC 3.6855).
Selection of the Suitable Conditions for Antibiotics by OFAT Approach
Initial screening of the appropriate conditions for antibiotic production was performed by the OFAT method [15]. Five conditions were investigated: culture time, initial pH, culture temperature, shaking speed, and medium volume per flask. The incubation procedure was as described above. Samples were collected and tested for antifungal activity.
Optimization of Antibiotic by RSM
RSM was used to optimize antibiotic production by Paenibacillus polymyxa 7F1. Three variables were selected for a Box-Behnken design (a three-factor, three-level response-surface method): culture temperature (X1), initial pH (X2), and culture time (X3). The levels of the selected factors were determined from the single-factor experiments. The three factors were coded at three levels (−1, 0, 1), giving an experimental design of 17 runs. The experimental data were fitted to a second-order polynomial model containing linear, quadratic, and two-factor interaction terms. Design Expert 8.0.5 (Stat-Ease Inc., Minneapolis, MN, USA) was used for designing the experiments and for the statistical analysis (ANOVA) [43]. Outliers were identified and excluded when necessary when fitting the models. The model equation for the response (Y) as a function of the independent variables, as used by the Design Expert (8.05b) software, was
Y = β0 + Σ_{i=1}^{n} βi Xi + Σ_{i=1}^{n} βii Xi² + Σ_{i<j} βij Xi Xj,
where Y is the dependent variable, β0 the constant coefficient, βi the linear coefficients (main effects), βii the quadratic coefficients, βij the two-factor interaction coefficients, n the number of factors studied and optimized in the experiment, and Xi, Xj the coded independent variables. The quality of the fit was evaluated by the coefficient of determination (R²) and by ANOVA. Response surfaces and contour plots were obtained from the quadratic polynomial equation fitted by regression analysis, keeping two independent variables constant at their stationary-point values while varying the other two. The response values predicted by the model equations were compared with the measured values, and the percentage error between the predicted and actual values was obtained. Optimization of the process parameters was carried out with Design Expert 8.0.5 by numerical optimization coupled with the desirability function.
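For readers without access to Design Expert, the 17-run Box-Behnken matrix and the decoding of coded levels to actual settings can be generated as in the sketch below; the centre values and half-ranges in the usage comment are hypothetical, since the exact low/high factor levels are not listed here.

```python
import numpy as np

def box_behnken_3(center_runs=5):
    """17-run Box-Behnken design for three factors in coded units (-1, 0, +1):
    12 edge midpoints plus `center_runs` centre-point replicates."""
    edge = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
    runs = []
    for i, j in [(0, 1), (0, 2), (1, 2)]:          # the two factors varied in each block
        block = np.zeros((4, 3))
        block[:, i] = edge[:, 0]
        block[:, j] = edge[:, 1]
        runs.append(block)
    runs.append(np.zeros((center_runs, 3)))        # centre points
    return np.vstack(runs)

def decode(coded, centres, half_ranges):
    """Coded units back to actual settings: actual = centre + coded * half_range."""
    return centres + coded * half_ranges

# design = box_behnken_3()                                  # shape (17, 3)
# actual = decode(design, centres=np.array([37.0, 7.0, 24.0]),
#                 half_ranges=np.array([2.0, 1.0, 8.0]))    # hypothetical factor levels
```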
Identification of Antibiotic
The accurate molecular mass and structure of the purified antibiotic were determined using reversed-phase capillary LC coupled directly to a mass spectrometer (LC-MS) [11]. The antibiotic obtained in Section 4.3 was filtered through a 0.22 µm aqueous membrane and analysed by high-performance liquid chromatography coupled directly to the mass spectrometer (HPLC-MS) (TRIPLE QUAD 3500, AB SCIEX, Framingham, MA, USA). A Diamonsil C18 reversed-phase column (250 mm × 4.6 mm, 5 µm; Dikema Technology Inc., Beijing, China) was used. The mobile phase, consisting of 0.1% aqueous trifluoroacetic acid and acetonitrile (B), was run at a flow rate of 0.6 mL/min, and the injection volume was 20 µL. Antibiotics were eluted with a gradient: 0-15 min, linear gradient 30-45% B; 15-40 min, linear gradient 45-55% B. The antibiotics were monitored at a wavelength of 280 nm. MS analysis was conducted by electrospray ionization in positive-ion mode.
Statistical Analysis
The resulting data are shown as the mean ± standard deviation (n = 5), and SPSS 18.0 software (Munich, Bavaria, Germany) was used to analyse the results. ANOVA was used for the statistical analysis, with α = 0.05 as the significance level.
Hemotoxic effects of polyethylene microplastics on mice
Micro- or nanoplastics, which are fragmented or otherwise tiny plastic materials, have long been a source of environmental worry. Microplastics (MPs) have been well documented to alter the physiology and behavior of marine invertebrates. The effects of some of these factors are also seen in larger marine vertebrates, such as fish. More recently, mouse models have been used to investigate the potential impacts of micro- and nanoplastics on host cellular and metabolic damages as well as mammalian gut flora. The impact on erythrocytes, which carry oxygen to all cells, has not yet been determined. Therefore, the current study aims to ascertain the impact of exposure to various MP exposure levels on hematological alterations and biochemical indicators of liver and kidney functions. In this study, a C57BL/6 murine model was concentration-dependently exposed to microplastics (6, 60, and 600 μg/day) for 15 days, followed by 15 days of recovery. The results demonstrated that exposure to 600 μg/day of MPs considerably impacted RBCs’ typical structure, resulting in numerous aberrant shapes. Furthermore, concentration-dependent reductions in hematological markers were observed. Additional biochemical testing revealed that MP exposure impacted the liver and renal functioning. Taken together, the current study reveals the severe impacts of MPs on mouse blood parameters, erythrocyte deformation, and consequently, anemic patterns of the blood.
Introduction
Plastic particles with a diameter of less than 5 mm are now widely acknowledged as a threat to the environment and a health risk to human populations, ranging from oxidative stress to DNA damage (Rochman et al., 2013;Katsnelson 2015;Smith et al., 2018;Prata et al., 2020;Ibrahim et al., 2021;Blackburn and Green 2022). There are two major sources of microplastics (MPs): 1) cosmetics, detergents, sunscreens, and medicine delivery systems all containing plastic powders or particles (Galloway 2015) and 2) bigger plastic pieces breaking down in the environment due to UV radiation, mechanical abrasion, and biological deterioration (Andrady 2011).
MPs can reach human populations either directly through the environment or indirectly through food. According to numerous studies (Thompson et al., 2009;Cole et al., 2013) and food chains (Setälä et al., 2014), a variety of marine organisms (bivalves, fish, etc.) consume MPs. MPs are therefore anticipated to accumulate in the environment and increase the risk of exposure for wild creatures and human populations over time due to their extensive use and durability.
Although most research on the toxic effects of MPs has focused on aquatic organisms, studies demonstrating the potential health risk and tissue accumulation of MPs in mammals are scarce. Accumulation of MPs in tissues can have various negative effects, including physical harm: plastics have been detected in most of the marine animals examined, and it has been hypothesized that malnutrition, or rupture of the stomach after entrapment of debris, contributes to their death (De Stephanis et al., 2013). Many species of birds, reptiles, and fish have been found to ingest plastic directly, which may block their stomach and intestines (Hämer et al., 2014); this may inhibit growth and development (Koelmans et al., 2014) and cause energy deficiency (Galloway 2015). Cellular impacts include modifications to immune responses, the lysosomal compartment, peroxisome proliferation, the antioxidant system, neurotoxic effects, and the onset of genotoxicity (Avio et al., 2015; Hamed et al., 2019; Hamed et al. 2020; Hamed et al. 2021; Sayed et al., 2021; Ammar et al., 2022). Microplastic-induced reactive oxygen species (ROS) have been shown to act as inducers of oxidative stress in some marine organisms (Wright et al., 2013). MPs have also been found to severely affect the feeding and swimming behavior of fish as well as their metabolism; hence it has been concluded that polystyrene nanoparticles severely affect both behavior and metabolism (Mattsson et al., 2015). Therefore, information on MP tissue accumulation in mammalian models is crucial for determining the risk of MPs to human health (Prata et al., 2020; Ibrahim et al., 2021; Blackburn and Green 2022).
The toxic effects of MPs on erythrocytes (RBCs) have yet to be verified. Hence, the current study determined how exposure to different MP concentrations influenced hematological changes and biochemical indicators of liver and kidney function, as a proxy for the effects of microplastics on human health.
Animals
This study used 60 male C57BL/6 mice purchased from the Tudor Institute in Cairo. They were 2 months of age (Li et al., 2020) and weighed about 20 g. The animals were divided into four groups (15 mice in each group). The work was carried out at the Molecular Biology Research and Studies Institute, Assiut University, in accordance with the established ethics regulations on the care and treatment of animals. The animals were housed at a temperature of 23 ± 2 °C with a 12:12 light:dark cycle, with feed and drinking water freely available.
Chemicals
Microplastics (MPs) were purchased from Toxemerge Pty Ltd. (Australia) as powders of irregular particles (>90% of the microplastics were larger than 100 nm in size). The microplastics were characterized using light and transmission electron microscopy at TEMU, Assiut University (Hamed et al., 2019).
Stock preparation and characterization for microplastics
A stock solution was prepared following the manufacturer's procedure and kept at room temperature. Before each use, the stock solution (0.1 g MPs/500 mL distilled water) was homogenized with a magnetic stirrer. From this stock, 30 μL (corresponding to a dose of 6 μg), 300 µL (60 μg), and 3 mL (600 μg) were taken just before the start of each experiment. MP particles were characterized using a light microscope.
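The dosing volumes quoted above follow directly from the stock concentration; a small check, assuming the stated 0.1 g of MPs in 500 mL:

```python
# Stock: 0.1 g of MPs in 500 mL of distilled water -> 0.2 µg per µL.
stock_ug_per_ul = 0.1 * 1e6 / (500 * 1e3)        # = 0.2 µg/µL

for dose_ug in (6, 60, 600):
    volume_ul = dose_ug / stock_ug_per_ul
    print(f"{dose_ug:>4} µg/day -> {volume_ul:,.0f} µL of stock")
# 6 µg -> 30 µL, 60 µg -> 300 µL, 600 µg -> 3,000 µL (3 mL), matching the text.
```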
Experimental approach
Fifteen mice were assigned to each of four groups: group 1, the control group; group 2, which received 6 μg of the MP extract orally each day; group 3, which received 60 μg of MPs; and group 4, which received 600 μg of MPs, for 15 days (Li et al., 2020; Ragusa et al., 2021; Huang et al., 2022; Pironti et al., 2023).
RBC's alterations
Blood was drawn, and smears were prepared, dried, fixed in 100% methanol for 10 min, and then stained with hematoxylin and eosin. Slides were chosen on the basis of staining quality and graded randomly to maintain anonymity. Following Al-Sabti and Metcalfe (1995), 3,000 cells (a minimum of 100 cells per slide) were analyzed in each group under a ×40 objective to identify morphologically altered red blood corpuscles. Morphological changes of erythrocytes, including acanthocytes, sickle-shaped cells, crenated cells, enlarged cells, and changes in nuclear morphology, were recorded using a VE-T2 microscope and photographed with a 14 MP OMAX camera (MN: A35140U3, China).
Qualitative examination of microplastics
To identify MPs qualitatively and ascertain their presence in the digestive system, small, medium, and large intestinal fragments were cut and placed in a hydrogen peroxide solution (30%) and then kept in a hot water bath at 70°C for 2 hours.
MPs were detected in the second, third, and fourth groups; the fourth group contained the largest amounts, while the second and third groups contained smaller amounts.
Statistical evaluation
Minimum, maximum, mean, standard error, and range were computed as basic descriptive statistics for the measured parameters. Homogeneity of variance was assumed for the raw data. One-way ANOVA was used to assess differences between all treatments and the control group in the absence of interactions. Tukey's and Dunnett's tests were used for multiple comparisons. Analyses were performed at a significance level of <0.05 using IBM SPSS software version 21 (IBM-SPSS, 2012) and Excel spreadsheets.
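The following is a hedged sketch of this analysis pipeline (one-way ANOVA followed by Tukey's post-hoc test). The group values below are placeholders, not the study's data, and the functions are standard scipy/statsmodels calls rather than code from the authors:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder data: four groups of 15 animals each, as in the study design.
rng = np.random.default_rng(0)
groups = {
    "control":   rng.normal(5.0, 0.5, 15),
    "6 ug/ml":   rng.normal(5.5, 0.5, 15),
    "60 ug/ml":  rng.normal(6.5, 0.5, 15),
    "600 ug/ml": rng.normal(7.5, 0.5, 15),
}

# One-way ANOVA across all treatments and the control.
f_stat, p_value = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3g}")

# Tukey's post-hoc pairwise comparisons at alpha = 0.05.
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```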
Quantification and characterization of microplastics
Images taken with the light microscope at ×40 magnification revealed that the microplastic particles had irregular shapes (Figure 1). Compared to the control, the treated groups had more MPs in their guts. In contrast to the exposure groups, MPs were not found following the recovery period.
Erythrocyte alterations
The blood smears of mice from all groups, stained with hematoxylin and eosin, are displayed in Figure 2. The blood smear in the control group represents the erythrocytes' typical structure. According to Figure 2A, the blood comprises rounded, biconcave, non-nucleated erythrocytes (Er) and various leucocyte types (L).
Additionally, echinocytes (Lechner and Ramler), an indication of uremia, exhibited the most distinctive change in red blood cell morphology: they increased significantly to 13 in the 60 μg/ml MPs group and spontaneously recovered to 1.6 after 15 days. Helmet cells (HE) increased significantly to 7.9 in the 6 μg/ml MPs group and spontaneously recovered to 0.033 after 15 days. Teardrop-like cells (Pollastro and Pillmore, 1987), which are considered indicators of myelofibrosis, increased significantly to 4.9 in the 6 μg/ml MPs group and spontaneously recovered to 1.5 after 15 days; they also increased significantly to 1.86 in the 60 μg/ml MPs group and recovered to 0.6 after 15 days (Figure 3; Table 1). After 15 days of recovery under normal conditions, the 6 μg/ml MPs group showed a marked improvement in teardrop and boat-shaped cells, while schistocytes, helmet cells, ovalocytes, keratocytes, SC poikilocytes, and sickle cells had completely disappeared (SC poikilocytes are cells of variable appearance that are usually dense and may resemble sickle cells; they often have single or multiple angulated branches, some of which may instead have straight edges, and classical SC poikilocytes may be quite rare, so they must be actively sought). The recovery rate was 10.24%. In the 60 μg/ml MPs group, there was a marked improvement in teardrop-like cells, helmet cells, ovalocytes, and echinocytes, while boat-shaped, triangular, and folded cells and stomatocytes had completely disappeared. The recovery rate was 15.63%. In the 600 μg/ml MPs group, there was a marked improvement in helmet cells and ovalocytes, but a significant increase in the numbers of boat-shaped cells, teardrop-shaped cells, sickle cells, and echinocytes. The apparent recovery rate at this concentration was 109.6%, which does not reflect genuine recovery given the persistently high numbers of abnormal blood cells; rather, it is evidence that MPs remained present and that the animals could not eliminate the high concentration of MPs from their bodies. We therefore conclude that a 15-day recovery period helped mitigate the impacts of microplastics at low doses but was insufficient to eliminate them, and at high concentrations it was not sufficient at all (Table 2).
Moreover, a decrease in RBC diameter indicates several types of anemia. In this study, the diameter of the RBCs was measured in all animal groups and showed a significant decrease in the MPs-treated groups: 3.8, 3.3, and 3.4 μm for 6, 60, and 600 μg/ml MPs, respectively (Figure 4). Together, these findings demonstrate how altered RBC shape transitions might be associated with lower RBC deformability.
Hematological parameters
In the treated groups, the concentration of 6 µg/ml MPs affected neutrophils and lymphocytes (p > 0.05); the concentration of 60 µg/ml MPs significantly affected RBCs, Hb, HCT, and monocytes (p < 0.05) and had highly significant effects on neutrophils, lymphocytes, and the N/L ratio (p < 0.00001). By contrast, the concentration of 600 µg/ml MPs had a significant effect on monocytes only and highly significant effects on RBCs, Hb, HCT, neutrophils, lymphocytes, and the N/L ratio (Table 3).
Only RBCs, HCT, Hb, neutrophils, lymphocytes, monocytes, and the N/L ratio remained affected by the 60 and 600 μg/ml MPs treatments during the recovery period. The hematological parameters of the MPs-exposed animals showed substantial fluctuations compared with those of the control and of the 6 μg/ml MPs group. Except for RBCs and HCT, which still showed substantial alterations after recovery from the pollutant, there was no other significant variation in the hematological parameters; in contrast to the exposed animals, these changes may have been brought about by individual differences. In conclusion, the concentration of MPs plays a significant role in determining how harmful they are, and the 15-day recovery period improved the hematological features.
Biochemical parameters
The metabolic profile of the animals exposed to MPs changed, and the liver enzyme activity, particularly elevated aspartate aminotransferase (AST) and alanine aminotransferase (ALT) (Connes, Lamarre, et al.), increased considerably with the dose. A comparable pattern of the considerable rise in serum glucose was seen. When comparing the treated animals with those of the control group, the treated animals' creatinine levels were higher. Animals exposed to MPs displayed noticeably higher levels of total protein. Additionally, a considerable rise in the neutrophil to lymphocyte ratio (N/L), a reliable immunological indicator of inflammation, was dose-dependently seen in the MPs-treated mice (Table 4).
Discussion
In recent years, a significant increase in the negative impact of environmental contaminants on human health has been observed (Inhorn and Patrizio, 2015). High MP concentrations have been found in freshwater (0-1 × 10^6 items/m³) and marine (0-1 × 10^4 items/m³) waterbodies. MPs have also been observed in various animal species consumed by humans, such as mussels and fish (Desforges et al., 2014; Deng et al., 2017). MPs can spread through the aquatic food chain, which is likely to cause biological accumulation of the substance. Microplastics (MPs), as environmental pollutants, cause toxicity in the liver, kidneys, and gastrointestinal system of animals and aquatic organisms (Hou et al., 2021; Wang et al., 2021; Zhao et al., 2021). A recent study has shown that MPs harm the male reproductive system (Hou et al., 2021). However, little is known about how microplastics affect vascular biology or humans/mammals. A recent study revealed that MPs fundamentally affect the hematological system in mice and that these alterations were connected to changes in gene expression (Sun et al., 2021). However, further research is needed to determine how MPs impact blood cells and other hematological variables. Red blood cell deformability has a significant impact on blood circulation at the microcirculation level; consequently, any reduction in RBC deformability could affect flow resistance, tissue perfusion, and oxygenation (Nader et al., 2019). In this investigation, we found that MP particle accumulation in the gastrointestinal system was primarily dose dependent. Particle size strongly influences the distribution and tissue accumulation kinetics of MPs, and they accumulate in the liver, kidneys, and gut (Deng et al., 2017). Whole blood is a two-phase liquid made up of cellular components suspended in plasma, an aqueous solution that includes organic compounds, proteins, and salts (Baskurt and Meiselman 2003). The cellular phase of blood comprises erythrocytes, leukocytes, and platelets. White blood cells and platelets influence blood rheology, but under typical circumstances erythrocytes (RBCs) have the most significant impact (Pop et al., 2002). The physical characteristics of these two phases and their proportional contributions to the total blood volume determine the rheological characteristics of the blood. Additionally, hematocrit, plasma viscosity, the capacity of RBCs to deform under flow, and RBC aggregation-disaggregation characteristics all affect blood viscosity (Baskurt and Meiselman 2003; Cokelet et al., 2007).
Teardrop-like cells (Pollastro and Pillmore, 1987), helmet cells (HE), stomatocytes (Sto), sickle cells (Sic), schistocytes (Sch), folded cells (Gregorio et al., 2009), boat-shaped cells (Bo), ovalocytes (Ov), and echinocytes (Lechner and Ramler) are only a few of the RBC alterations that were observed in the groups exposed to MPs. Teardrop-like cells, which are thought to be a marker of myelofibrosis, increased significantly in the 6 μg/ml and 60 μg/ml MPs groups before declining on their own after 15 days. Echinocytes, spiculated RBCs present in high numbers in the 60 μg/ml MPs blood samples, indicate widespread electrolyte depletion. The distortion of red blood cells into echinocytes is due to bilayer membrane alterations resulting from a protective mechanism (Svetina 2012). An alternative explanation is that high ROS levels readily promote lipid peroxidation, because RBC membranes contain many polyunsaturated fatty acids; this further damages RBCs by disrupting membrane integrity and lowering their resistance to injury (Remigante et al., 2022). The cationic surfactant benzalkonium chloride can be used to alter the surface state of the cell membrane, since it can integrate into the erythrocyte membrane and change the shape of erythrocytes in saline solution (Rudenko and Saeid 2010). A reduction in intracellular erythrocyte potassium (K+) leads to red blood cell dehydration and echinocyte formation (Glader and Sullivan 1979; Gallagher 2017). Most of these alterations in the 6 μg/ml and 60 μg/ml MPs groups were reversed after 15 days, demonstrating the blood's rapid natural repair processes. The recovery time, however, was insufficient for the 600 μg/ml MPs group to return the RBCs to their characteristic morphology (Figure 4: histogram of erythrocyte diameters in the exposure and recovery groups of the C57BL/6 murine model). Thus, a 15-day recovery period helped lessen the impacts of microplastics at lower doses but was insufficient to eliminate them; by contrast, it was not adequate at all at high concentrations (Table 4). The observed RBC alterations impaired normal blood flow because they affected the cells' capacity for deformation (Figure 3C) and subsequently led to the formation of cellular aggregations. Blood viscosity is linearly related to hematocrit, since it depends on the quantity (and number) of erythrocytes in the blood (Nader et al., 2019). At low shear rates (such as in the veins), hematocrit (HCT) has a more significant effect on blood viscosity than at high shear rates (such as in the arteries) (Cokelet et al., 2007). According to estimates, a one-unit increase in hematocrit at high shear rates would result in a 4% increase in blood viscosity (if RBC rheological properties remain the same). In the current investigation, the 60 and 600 μg/ml MPs groups had considerably lower HCT levels. Additionally, at the higher MP concentrations (60 and 600 μg/ml MPs), hemoglobin (Hb) levels were much lower, which is thought to be an indication of sickle cell anemia (Magrì et al., 2018). Furthermore, RBCs become more fragile and susceptible to hemolysis when they lose their deformability (Connes et al., 2014). Stiff, sickle-shaped RBCs can cause both vaso-occlusion and pre-capillary obstruction (Rees et al., 2010).
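To make the cited rule of thumb concrete, here is a back-of-envelope illustration of our own (not from the paper); it assumes the roughly 4% change in viscosity per hematocrit unit compounds multiplicatively, which the source does not specify:

```python
# Back-of-envelope illustration (ours): relative high-shear viscosity change for
# a given hematocrit change, assuming ~4% per unit applied multiplicatively.
for delta_hct in (+1, +5, -5):
    factor = 1.04 ** delta_hct
    print(f"HCT change {delta_hct:+d} units -> relative viscosity x {factor:.2f}")
```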
After exposure to MPs, the biochemical markers (creatinine, AST, ALT, glucose, and total protein) significantly increased. Creatinine can be used as a biomarker for renal impairment and as an indicator of glomerular filtration rate (Lien et al., 2006).
Enzymes (AST and ALT) are found in the cells of several organs throughout the body (Lenaerts et al., 2005). Their release and increased blood levels are signs of damaged cell membranes (Lenaerts et al., 2005). According to our findings, paraquat and/or microplastic particles increased the activity of intracellular enzymes (ALT and AST), which may be a sign that cell plasma membranes have been damaged (Cheng et al., 2022). Proteins are essential for preserving physiological homeostasis and for preventing blood leakage from the circulatory system (Bergmeier and Hynes 2012). Increased total protein levels result from MP exposure and suggest impaired kidney and liver function.
At low MP doses, microplastic buildup and hemato-biochemical changes improved after the recovery period when compared to the control group. However, the high-dose group remained negatively affected by MPs. The recovery period has effects ranging from cells to tissues, where defense mechanisms have been reported (Hamed et al., 2019; Hamed et al., 2020; Hamed et al., 2021; Sayed et al., 2021; Ammar et al., 2022).
Conclusion
In C57BL/6 mice, microplastics produced a range of toxic consequences, including anemia and changes in hemato-biochemical parameters, and may induce severe toxic effects in all organs at higher concentrations and over extended periods. Our findings show that MPs had damaging effects on the RBCs of mice, reflecting the potentially dangerous implications of these MPs for human health. The current study may prompt future comprehensive studies on the impacts of MPs on other body systems.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The animal study was reviewed and approved by the Research Ethical Committee of the Molecular Biology Research and Studies Institute (IORG0010947-MB-21-10-A), Assiut University, Assiut, Egypt.
Descending Necrotizing Mediastinitis Resulting From Sialadenitis Without Sialolithiasis
Descending necrotizing mediastinitis (DNM) is an uncommon and life-threatening condition that arises from an oropharyngeal infection and descends along the cervical fascial planes into the mediastinum. Without aggressive surgical management, a high mortality rate exists. We report a case of an otherwise healthy 49-year-old male who presented with an abscess formation of the right submandibular gland secondary to sialadenitis without sialolithiasis. Computed tomography revealed fluid collection around the right submandibular gland suggestive of sialadenitis without sialolithiasis with severe inflammation and leftward deviation of the aerodigestive tract. Despite multiple drainages, the infection eventually progressed inferiorly into the mediastinum, resulting in DNM. After multiple takebacks to the operating room for exploration and washout of the neck and chest, intensive care unit management, and aggressive IV antibiotic therapy, the patient eventually had a successful recovery and was discharged home. In this paper, the etiology, anatomy, pathophysiology, and management of DNM are discussed. To our knowledge, this is the first report in the literature of DNM developing from sialadenitis without sialolithiasis in the submandibular gland.
Introduction
Descending necrotizing mediastinitis (DNM) is a virulent polymicrobial infection that originates from an oropharyngeal source and descends into the mediastinum creating a potentially life-threatening condition. Delayed diagnosis and failure to appropriately recognize DNM can lead to rapid progression of sepsis and eventual death [1,2]. Mortality rates have been reported as between 20 and 50 percent and have generally not improved even in the era of antibiotics and 3D imaging [1,2]. DNM has been shown to statistically increase mortality rates in deep neck infections [3]. The criteria for diagnosing DNM include clinical evidence of severe pharyngeal infection, radiographic evidence of mediastinitis on CT scan, documenting the necrotizing mediastinal infection pre-and post-operatively, and an established relationship of the oropharyngeal infection and its development into DNM [4,5]. Risk factors include poor dentition, diabetes, AIDS, IV drug use, and excessive alcohol use. DNM presents more often in males than females [6][7][8]. Successful treatment consists of aggressive surgical drainage of the deep cervical spaces and mediastinum, broad-spectrum antimicrobial therapy, and upper airway management. The majority of DNM cases are odontogenic in origin, with less frequent causes including peritonsillar abscesses, retropharyngeal abscesses, cervical trauma, epiglottitis, and sialadenitis [2,9,10]. A glandular infection has been shown to be a higher risk factor for the development of DNM than an odontogenic infection [10]. Very few cases of DNM report submandibular sialadenitis with sialolithiasis. In this report, we discuss the successful treatment of DNM as a complication of sialadenitis without sialolithiasis. To our knowledge, this is the first reported case of DNM from a primary submandibular gland infection without sialolithiasis as a precipitating cause.
Case Presentation
A 49-year-old male with no significant past medical history or comorbidities presented to the emergency department with a chief complaint of right neck swelling and difficulty swallowing for four days. On physical examination, he was toxic appearing, restless, and unable to tolerate secretions, with hoarseness and dysphonia. Trismus was minimal to none. Remarkable vital signs were a temperature of 103.1°F, pulse rate of 109 beats per minute, respiratory rate of 22 breaths per minute, blood pressure of 137/87, and oxygen saturation of 97% on room air. White blood cell count was 18.66/mm³ with 83% neutrophils, and C-reactive protein was 239 mg/dL (Table 1). COVID-19 and HIV results were negative. There was swelling of the right submandibular region with overlying erythema of the anterior neck extending inferiorly to the chest. The swelling was indurated and painful to palpation, particularly over the right submandibular gland. Oral examination revealed edema and erythema of the right tonsils and uvula. The teeth appeared healthy, and manipulation of the right submandibular gland failed to elicit any salivary drainage. Flexible laryngoscopy showed a sluggish and partially paralyzed right vocal cord with edema and partial obstruction of the glottis. Computed tomography (CT) scanning of the patient's head and neck showed significant enlargement and uniform enhancement of the right submandibular gland and a mass effect deviating the aerodigestive tract to the left. Several spaces, including the paratracheal and parapharyngeal spaces, contained rim-enhancing collections, in addition to the submandibular, submental, and sublingual spaces (Figures 1, 2). Findings were suggestive of acute suppurative sialadenitis without sialolithiasis.
FIGURE 1: Axial view of CT scan at initial presentation
There was significant enlargement and uniform enhancement of the right submandibular gland representing sialadenitis without sialolithiasis with severe surrounding inflammatory changes and mass effect upon the upper aerodigestive tract with severe deviation to the left. Multiloculated fluid (white arrows) was noted within the right aspect of the neck extending into the upper aerodigestive tract in several spaces, including the submandibular, submental, sublingual, paratracheal, and retropharyngeal spaces.
FIGURE 2: Coronal view of CT scan at initial presentation
White arrows represent fluid collection surrounding the right submandibular gland.
The patient was emergently taken to the operating room for neck exploration and washout. Frank purulence (approximately 35 cc) was encountered in the right masticator, submandibular, and retropharyngeal spaces, as well as paratracheal and anterior neck compartments. The neck anatomic structures were found to be grossly inflamed and, due to gross purulence, were difficult to identify. A thorough washout with normal saline of the deep neck spaces was performed, and multiple gravity-dependent drains were placed in addition to iodoform packing. Iodoform packing changes were performed several times a day throughout the patient's hospital course. The gland excision was postponed at the initial drainage because the definitive source of infection had not been elucidated at the time, as well as to decrease the chances of facial, lingual, and hypoglossal nerve injury. Ultimately, the submandibular gland was not excised since the infection eventually spread well outside of the gland, involving other critical structures, then ultimately resolved. The patient remained intubated postoperatively and was transferred to the surgical intensive care unit for close airway monitoring. The patient initially showed improvement and was extubated on postoperative day 2. Initial wound cultures speciated Parvimonas and Prevotella. Ampicillin/sulbactam and vancomycin daily were initiated.
On hospital day 5, increased amounts of purulence were appreciated from the neck drains, and labs were remarkable for an up-trending white cell count (now 23.1/mm 3 ). Repeat CT neck and chest showed multiple fluid collections extending from the right submandibular space into the right carotid space, right prevertebral space, and the right retropharyngeal space. Investigation of the chest CT showed communication of the right neck collections crossing midline and below the thyroid into the retropharyngeal space and inferiorly into the anterior and middle mediastinum compartments creating a 1.2 x 3.4 cm rim-enhancing collection (Figures 3-5). The patient was taken back to the operating room for reexploration and washout of deep neck space infections, as well as mediastinal exploration and washout by thoracic surgery via right video-assisted thoracic surgery (VATS). Postoperatively chest and gastrostomy tubes were placed. Widespread necrosis of the superficial fascia in the submandibular, cervical, and mediastinal regions was encountered during the procedure. An extended neck exploration was performed intraoperatively, including dissection of the carotid sheath, retropharyngeal space, and the anterior neck compartment with dissection communicating with the mediastinum. Of note, minimal trismus was noted as the infection did not originate in the oral cavity. Therefore, intubations were performed with relative ease, and a tracheostomy did not need to be performed throughout the hospital course. The patient remained intubated in the cardiothoracic intensive care unit and was extubated on hospital day 7. On hospital day 10, the patient again showed signs of clinical deterioration with a temperature of 101.2° F, white cell count of 32/mm 3 , complaints of unremitting chest pain, and purulence appreciated from his right superior pleural drain. A repeat CT neck and chest showed multiple small right supraclavicular loculations and a 5.2x7.0 cm right rim-enhancing infraclavicular collection involving the posterior mediastinum. Reexploration and washout of the deep neck spaces and mediastinum were performed, and retropharyngeal and superior mediastinal drains were placed in direct communication to the pre-existing cervical drains. Antibiotics were transitioned to linezolid, piperacillin/tazobactam, and fluconazole. Significant clinical improvement was seen in the days to follow, and on hospital day 16, he was taken to the operating room for a final washout with the removal of neck packing. In total, the patient was taken to the operating room four times for drainage, washouts, and packing. Antibiotics were discontinued on hospital day 17. On hospital day 20, a peripherally inserted central catheter (PICC) line was placed, and all remaining drains and tubes were removed. On hospital day 22, he was discharged home. He received four weeks of outpatient IV ertapenem 1 g daily, and his recovery was satisfactory.
Discussion
Descending necrotizing mediastinitis (DNM) is a major complication that can result from an uncontrolled episode of infection that originates in or travels to the deep cervical spaces. Fortunately, the disease process is rare, with approximately 100 cases having been reported in the literature [2]. Odontogenic infection is the most common source of DNM, occurring in 58% [1]. Other sources can include abscess of the retropharyngeal or peritonsillar spaces, abscess of the salivary or thyroid glands, lymphadenitis, traumatic intubation, or infection secondary to IV drug use [2]. Mortality rates remain high in this disease process despite advancements in diagnosis and care.
Typically, descending necrotizing mediastinitis originates from an odontogenic infection in the third molar region as a complication of Ludwig's angina. However, there are several other etiologies of DNM that can result from a primary deep neck infection, such as a retropharyngeal abscess, sialadenitis, or another deep cervical phlegmon/abscess [11]. Reported mortality rates of DNM are between 20% and 50%, even with prompt treatment, accurate diagnostic imaging, and appropriate antibiotics, making early recognition critical [11][12][13].
Certain deep cervical fascial spaces are interconnected to the mediastinum, providing a pathway for infection [1,2,11]. There are three layers of the deep cervical fascia; from superficial to deep they are the pretracheal, retrovisceral, and the prevertebral. These three layers create three potential spaces, named the pretracheal, perivascular, and retrovisceral (prevertebral) spaces, in which infection can descend inferiorly into the mediastinum. The pretracheal, perivascular, and retrovisceral spaces travel to the anterior, middle, and posterior mediastinum, respectively [1,11,14,15]. The retrovisceral space can be further divided into the retropharyngeal space and the danger (alar) space; these two entities are separated by the alar fascia. Rupture of the alar fascia provides access to the danger space, thus providing a pathway to the diaphragm and pleural spaces [11]. The carotid sheath is formed by the fusion of the three layers of the deep cervical fascia and surrounds the perivascular space, providing possible vascular complications [11,16,17]. Gravity and negative intrathoracic pressures contribute to mediastinal spread [2]. Approximately 70% of primary odontogenic infections that travel into the mediastinum do so via the retrovisceral space, and 8% do so via the pretracheal space [2,8].
Cultures of patients with DNM typically result in polymicrobial flora with aerobic and anaerobic bacteria [1,11,18]. In the case of our patient, initial wound cultures grew Parvimonas and Prevotella.
Management of DNM includes prompt diagnosis and urgent surgical intervention. Given the polymicrobial nature of DNM, broad-spectrum antibiotics are warranted. Surgical management of the neck should include source control as well as drainage, debridement, and washout. Intraoral and extraoral approaches may be used for adequate access. Surgical management and access to the mediastinum may include cervical drainage alone, but often more extensive intervention such as subxiphoid incision, open thoracotomy, VATS procedure, or transpleural drainage is necessary [1,2]. Access and extent of incision and drainage sites should be determined by the involved spaces on CT scan; however, clinicians should appreciate the rapidly evolving nature of the disease process and understand that the clinical extent of the disease may be more substantial than radiographic extent. Additionally, the threshold for operating room takeback should be low, and patients will often require multiple re-explorations in the operating room.
A classification system has been developed to describe different presentations of DNM. DNM type I has been described as located above the tracheal bifurcation, DNM type IIA has been described as extending into the anterior mediastinum below the tracheal bifurcation, and DNM type IIB has been described as extending into both the anterior and posterior mediastinum below the tracheal bifurcation [19]. These classifications may be used by cardiothoracic surgeons to guide the treatment of mediastinal involvement. Tracheostomy should be performed if there is any concern for airway compromise.
Despite advances in the understanding of the disease process and CT imaging, the mortality rate of DNM remains significant. Studies of DNM in the post-CT scan era have shown mortality rates of between 15% and 40%, which is similar to mortality rates in the pre-CT scan era (33%) [1,7]. Thus, early surgical intervention and open communication with other surgical teams are critical for adequate treatment outcomes of DNM.
In the case of our patient, the submandibular gland was found to be the primary source of infection. There have been reported cases of DNM with submandibular or parotid gland abscess as the primary source [6,10]. However, all these reported cases developed abscesses secondary to a salivary stone or did not explicitly state if a stone was involved. To the best of our knowledge, our patient is the first reported in the English language literature who developed DNM secondary to acute sialadenitis without sialolithiasis, which is a condition typically linked to salivary duct strictures, anticholinergic medication use, or predisposing factors such as diabetes mellitus, hypothyroidism, or Sjogren's syndrome [20]. This case is unique as the patient did not have any of the typical comorbidities associated with DNM, and it is unknown why the infection spread past the typical anatomical boundaries.
For the management of our patient, a combination of transcervical and transthoracic approaches was used for surgical access and drainage, and multiple takebacks to the operating room were required for adequate drainage. In addition, broad-spectrum antibiotics and multiple daily packing with dressing changes were required. The patient did not require a tracheostomy as his airway remained patent throughout his hospital course.
Conclusions
Descending necrotizing mediastinitis (DNM) is a rare but feared complication of a primary deep neck infection. Due to anatomical connections between spaces in the deep neck and mediastinum, the infection can travel freely, requiring extensive drainage. Treatment includes prompt airway management, source control with aggressive surgical intervention, and tailored broad-spectrum antibiotics. We present the first case in the literature of sialadenitis without sialolithiasis that resulted in DNM. The patient was successfully treated with aggressive exploration and washouts of the cervical and superior mediastinal compartments, intensive care management with careful airway monitoring, and continuous antibiotic therapy. Despite early recognition, the patient required a lengthy hospital stay and multiple takebacks to the operating room.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Twistorial monopoles & chiral algebras
Introduction
It has been recently appreciated that four-dimensional quantum field theories, possibly coupled to gravity, on asymptotically flat spacetimes have classical asymptotic symmetries which organize into chiral algebras [1,2].The operator product on algebra elements arises from certain collinear limits of scattering amplitudes of massless states.Utilizing the standard map between 4d null momenta and points on the sphere at asymptotic null infinity invites a possible interpretation of the chiral algebra as the holomorphic symmetry algebra of an exotic two-dimensional conformal field theory supported on the celestial sphere, which is proposed to be holographically dual to the four-dimensional "bulk" theory.
The impact of quantum corrections on these classical chiral algebras, and their fourdimensional origin, has been recently explored by several groups, e.g.[3][4][5][6][7][8][9].It was observed in [7] that in a formulation of self-dual Yang Mills (SDYM) theory, the chiral algebra of asymptotic symmetries, when deformed to include the theory's one-loop collinear splitting amplitude, failed to be associative.This "quantum" failure of associativity was directly related to an obstruction to lifting the four-dimensional theory to a local holomorphic theory on twistor space.Such an obstruction manifests as a gauge anomaly in the six-dimensional holomorphic theory [10].Conversely, one can couple SDYM to a certain scalar field with a quartic kinetic term, which cancels the 6d gauge anomaly via a Green-Schwarz-like mechanism on twistor space and obtain an associative chiral algebra, including loop effects [6][7][8].In this work we focus on theories with associative chiral algebras: either self-dual theories at tree-level or, when we wish to move beyond classical physics, such integrable "twistorial" 4d theories, with lifts to local, non-anomalous holomorphic theories on twistor space, guaranteeing chiral algebra associativity.Such theories can sometimes be uplifted to genuine topological string theories [11], and produce integrable toy models of flat space holography [12], along the lines of the twisted holography program [13,14].It will be interesting to understand if associativity can be restored by other means, or if such chiral algebra axioms can be somehow relaxed to accommodate an exotic CFT dual, perhaps along the lines of [15].
Among the many open questions relating to the physics of celestial CFTs and their associated chiral algebras is how to formulate the 2d avatar of magnetically charged states in four dimensions.Soft theorems in (abelian) gauge theories with magnetically charged objects have been studied in [16,17].In this work, we will instead emphasize the chiral algebraic formulation of conformally soft modes of such states, leveraging the twistorial construction of [6].Along the way, we will unearth a host of novel or under-explored phenomena in the representation theory of non-unitary 2d chiral algebras.
To proceed, let us recall part of the basic setup of [6].Twistor space is a complex manifold given by the nontrivial fibration PT := O(1) ⊕ O(1) → CP 1 (1.1) i.e.PT is the total space of two copies of the line bundle O(1) → P1 , and as a real manifold is equivalent to We may study various local, anomaly-free holomorphic theories on PT, which reduce to the 4d theories of interest upon dimensionally reducing along the CP 1 .In particular, we focus on the 6d theories studied in [10] given by holomorphic BF theories with particular gauge algebras g, each coupled to a free limit of the Kodaira-Spencer theory of "gravity" (complex structure deformations) [18] via a holomorphic Chern-Simons term with a carefully chosen coefficient.These reduce to non-unitary CFTs in 4d, given by SDYM theories coupled to a quartic axion-like field.The 2d chiral algebra of asymptotic symmetries was constructed by twistorial methods explained in [6], and is supported on the zero section of the twistor fibration, which in turn can be identified with the celestial sphere at null infinity.
An alternative view of this chiral algebra presented in [6], which is well-suited to our present purposes, is as follows.Consider PT\CP 1 ≃ S 3 × R >0 × CP 1 , where we remove the sphere corresponding to the origin of R 4 .We consider an alternative compactification of the 6d theory, down to 3d dimensions, by reducing along S 3 ; in contrast to the usual reduction along CP 1 , there is an infinite tower of KK modes that must be included to access all the (infinitely many) states in the vacuum module of the chiral algebra.Compactifying a 6d holomorphic "gravitational" theory in this manner yields a 3d holomorphic-topological nongravitational theory on R >0 × CP 1 , where the chiral algebra furnishes the algebra of boundary local operators supported on the holomorphic CP 1 at infinity in the radial coordinate.3d holomorphic-topological theories are known to arise from certain supersymmetric twists of 3d N = 2 theories [19] and in fact, it is easy to identify "downstairs" which 3d N = 2 theory can be twisted to yield the resulting holomorphic-topological theory with the "celestial" chiral algebra as its boundary chiral algebra [6].This 3d perspective will be our primary window into the physics of magnetic states and their impact on the chiral algebra.
On the other hand, three-dimensional N = 2 gauge theories have long been intense objects of study. They often flow to strongly interacting superconformal field theories and enjoy a host of infrared dualities, as well as qualitative similarities to their non-supersymmetric cousins, with useful applications to condensed matter physics. Monopoles in these 3d theories have been studied from many points of view (see, e.g., the classic papers [20,21]). Particularly important for our purposes, 1/2-BPS boundary conditions of these theories have been studied extensively in [22-25], and in [25] it was shown that Dirichlet boundary conditions on the gauge field support nontrivial boundary monopole operators as well. Such 1/2-BPS boundary conditions are compatible with the holomorphic-topological twist, and may be studied directly in the first-order formalism from this point of view [26]. Further, the resulting bulk (associative, commutative) chiral algebras of local operators, including monopoles, and boundary (associative, non-commutative) chiral algebras, which may also include monopoles, were explored in [26]. In non-abelian gauge theories, boundary monopoles remain difficult to work with directly (they are naturally described in terms of the affine Grassmannian Gr_G associated to the gauge group G, with values in a certain line bundle; for non-abelian G, the non-trivial geometry of Gr_G makes it difficult to make the chiral algebra structure explicit), but 3d abelian bulk and boundary monopoles can be accessed concretely in the twisted formalism [27]. The primary purview of this note is to study monopoles in 3d holomorphic-topological theories which support "celestial" chiral algebra boundary conditions. By pulling these states back to twistor space and then "pushing" them back down to four dimensions, we can explore various aspects of the flat space holographic dictionary, including the corresponding nonperturbative field configurations.
The plan for the rest of this note is as follows.In Section 2, we review the physics of 3d monopoles in holomorphic-topologically twisted theories, and how monopoles localized on holomorphic boundaries implement spectral flow automorphisms in perturbative boundary chiral algebras, at least in the abelian setting.We also emphasize how boundary monopoles enforce integrality of the charge of modules of the perturbative chiral algebra (in appropriate conventions), and impose relations among such modules.Classes of such modules are realized physically in 3d theories with boundary via Wilson lines ending on the boundary chiral algebra plane, and we construct, via Koszul duality, the analogous (holomorphic) Wilson line modules on twistor space associated to the perturbative celestial chiral algebra in 3.1.4.These modules will interact with, and be constrained by, the celestial boundary monopoles that generate the spectral flow automorphisms.
In Section 3 we review the uplift of 4d self-dual gauge theories to 6d local holomorphic theories, and in turn recall how the latter can also be viewed as 3d holomorphic-topological theories with boundary, with an infinite number of fields, following [6].We discuss possible nonperturbative extensions of perturbative 3d boundary algebras (in this context, equivalently, 4d celestial chiral algebras) in self-dual QED, and we further discuss 4d abelian self-dual magnetically charged states and some puzzles that arise when trying to map such states to 3d boundary algebras.Finally, in Section 4 we study self-dual non-abelian gauge theories, and show how 4d magnetically charged states lead to spectral flow automorphisms on the celestial chiral algebra.The spectral flow automorphisms on the celestial chiral algebra (and their inverses) arising from the insertion of a 4d self-dual magnetically charged state are presented in (4.31), (4.32).We then conclude with open questions and future directions.
To aid in readability, we include the following table of relevant mathematical symbols. In this table, M is a 3-manifold with a THF structure; G is an algebraic group (over C); and X is a complex manifold with Dolbeault operator ∂̄ and E is a holomorphic vector bundle over X.

Ω = Ω(M)/(dz): DG algebra of forms on M modulo forms proportional to dz
Ω^(j): Ω twisted by dz^j
ΠV: parity shift of a super vector space V
K = C((z)): algebra of formal Laurent series in a variable z
B: the formal bubble
H_•(X, E): Dolbeault homology of X with values in E
Monopoles in 3d Gauge Theories
We begin by reviewing basic aspects of monopoles in 3d N = 2 gauge theories, focusing mostly on abelian gauge theories, as well as aspects of boundary monopoles on Dirichlet boundary conditions.For more details on the latter, see [25,26].
Local operators in 3d gauge theories have a natural flavor symmetry, called the topological flavor symmetry, with an associated conserved charge called monopole number.

A particularly important class of monopoles for our later applications are those that arise in twisted N = 2 gauge theories. Recall that the 3d N = 2 supersymmetry algebra has two fermionic generators Q_α, Q̄_α that transform as spinors with respect to Spin(3) ≅ SU(2); their anti-commutators (in the absence of central charges) take the form

{Q_α, Q̄_β} = 2 (σ^μ)_{αβ} P_μ,

where (σ^μ)_α^β are the Pauli matrices and we raise and lower SU(2) indices with the Levi-Civita tensor ε_{αβ}, with the convention that ε^{+−} = ε_{+−} = 1, so that χ^α = ε^{αβ} χ_β and χ_α = χ^β ε_{βα}. The U(1)_R R-symmetry rotates Q_α (resp. Q̄_α) with weight −1 (resp. 1). We note that Q_HT := Q̄_+ is a square-zero supercharge and, moreover, the momenta P_z = ½(P_1 + iP_2) = {Q_HT, ½ Q_+} and P_t = P_3 = {Q_HT, −Q_−} belong to the image of the twisting supercharge Q_HT. In particular, we can consider the Q_HT twist (i.e. Q_HT-cohomology) of our 3d N = 2 theory; the resulting twisted theory behaves holomorphically in the x^1-x^2 plane and topologically along the remaining x^3 direction, and the twist by Q_HT is correspondingly called the holomorphic-topological (HT) twist.
Due to the holomorphic-topological nature of the twisted theory, we can expect to naturally put it on a 3-dimensional manifold M that locally looks like C_{z,z̄} × R_t, with transition functions (z, z̄, t) → (w(z), w̄(z̄), s(t, z, z̄)) compatible with the complex structure. Manifolds with this structure, compatible with the HT twist, are said to admit a transverse holomorphic foliation (THF) [19]. To perform the twist, we need to use the U(1)_R R-symmetry to define a twisted spin Spin(2)′ rotating C_{z,z̄} under which the supercharge Q_HT is a scalar; this choice is called a twisting homomorphism, and we take J = ½ R − J_3. With this choice, the various fields of the N = 2 theory have their transformation properties modified to reflect their twisted spin; for example, a scalar of R-charge r becomes a section of K^{r/2}, the (r/2)-th power of the canonical bundle, i.e. it naturally comes with a factor (dz)^{r/2}.
We will focus on N = 2 theories of gauged chiral multiplets, often coupled with a superpotential. The data of such a theory is: a choice of compact gauge group G_c (with complexification G) and unitary (complex) representation R of G_c (G); a choice of Chern-Simons levels k ∈ H^4(BG_c), whose precise quantization depends on the choice of R (roughly, k is a collection of integers or half-integers for each simple factor of G_c); and a choice of gauge-invariant holomorphic function W : R → C, the superpotential. We will further require that the theory realizes the U(1)_R R-symmetry of the N = 2 algebra. This implies that there is an R-charge decomposition of the chiral multiplets R = ⊕_r R_r, where R_r is the subrepresentation of chiral multiplets of R-charge r ∈ Z, so that W has R-charge 2.
The work [19] provided a remarkably compact description of the HT twist of this class of N = 2 gauge theories in the Batalin-Vilkovisky (BV) formalism; see also the more recent work [26]. To write down the theory, we need to introduce some notation. Let Ω(M) denote the differential graded algebra of forms on M. The THF on M implies that there is a natural subalgebra of differential forms proportional to dz; we denote by Ω the quotient of all differential forms by those proportional to dz. The THF on M also gives us a distinguished line bundle K, the canonical bundle on the complex plane C (or, more generally, on the complex leaves of the THF), and we denote by Ω^(j) the quotient of all differential forms with values in K^j by those forms proportional to dz. There is a natural product Ω^(j) × Ω^(j′) → Ω^(j+j′) induced by the wedge product of forms, as well as a natural integration map ∫ : Ω^(1) → C. In local coordinates, sections of Ω^(j) look like elements of C^∞(R^3)[dt, dz̄] dz^j, where dt and dz̄ are treated as fermionic variables and dz as a bosonic variable. We denote by d′ the fermionic differential on Ω^(j) induced by the de Rham differential; in local coordinates it takes the form d′ = dt ∂_t + dz̄ ∂_z̄. Similarly, we denote by ∂ the natural bosonic derivation Ω^(j) → Ω^(j+1) induced by the holomorphic derivative, ∂ = dz ∂_z. With this notation, we can finally state the model of [19] for the HT twist of the N = 2 gauge theory above. First, a chiral multiplet Φ of R-charge r turns into a bosonic field Φ ∈ Ω^(r/2) and a fermionic field Ψ ∈ ΠΩ^(1−r/2), where Π denotes a parity shift: the lowest component of Ψ is a fermion whereas the lowest component of Φ is a boson. In terms of the components with homogeneous form degree, we can write Φ = φ + η* + ψ* and Ψ = ψ + η + φ*; here φ and ψ are identified with the complex scalar in the chiral multiplet and a component of the fermion in the anti-chiral multiplet, respectively. The higher form fields are the holomorphic-topological descendants of φ and ψ. For example, the components of η are identified with derivatives of the conjugate scalar φ̄. In the same vein, a vector multiplet V turns into an adjoint-valued fermionic field A ∈ ΠΩ^(0) ⊗ g and a coadjoint-valued bosonic field B ∈ Ω^(1) ⊗ g*; we expand these fields in components of homogeneous form degree. In this expansion, the 1-form A = A_t dt + A_z̄ dz̄ is identified with components of the physical gauge field (A_t is actually complexified by the real scalar σ), c is the BRST ghost, and B is identified with F_zt up to Chern-Simons terms. In pure Chern-Simons theory, or the HT twist of its N = 2 supersymmetric enhancement, B = B^a T_a is identified with K_ab A^b_z̄, where K_ab is the pairing used in the Chern-Simons action, cf. [19, Section 2.3]. The HT-twisted theory is described by an action of the schematic form

S = ∫ B F′(A) + (k/2) Tr(A ∂A) + Ψ d′_A Φ + W(Φ),

in which d′_A = d′ + A is the covariant exterior derivative and F′(A) = d′A + A² is its curvature. We have suppressed the natural gauge-invariant pairings of a representation and its dual; Tr denotes the non-degenerate bilinear form on g used for the Chern-Simons terms, e.g. the Killing form with a suitable normalization, with k the (possibly vanishing) level.
In the BV formalism, the equations of motion are encoded in a square-zero supercharge Q acting on the fields. In the present context, the supercharge Q is a combination of two supercharges: the BV/BRST supercharge of the physical theory Q_BV as well as the HT supercharge Q_HT. Finally, we note that this theory has a natural Z grading, traditionally called ghost number, which actually contains contributions from the R-charge r and the form degree; the vector multiplet fields A and B are in degree 1 and 0, respectively, while the fields Φ and Ψ in a chiral multiplet of R-charge r are in degree r and 1−r, respectively. The ghost numbers of the component fields are uniquely determined by the convention that the differential forms dt, dz̄ are given degree 1 whereas dz is given degree 0. We note that this grading is cohomological, i.e. the supercharge Q has ghost number 1.
Perturbative local operators in the HT twist can be realized by Q-cohomology classes of G-invariant operators built from the lowest components φ, ψ, ∂c, B of the fundamental fields and their ∂ derivatives. In order to access the full, non-perturbative algebra of local operators, it is useful to use a state-operator correspondence to relate these operators to states of the theory on a sphere P^1. The space of such states can be obtained from the above twisted description via geometric quantization of the moduli space of solutions to the equations of motion on P^1.
Consider a spacetime of the form P^1 × R_t. The ghost number 0 part of the equation of motion F′(A) = 0 reads ∂_t A_z̄ = 0 in a gauge where A_t = 0. Thus, the covariant derivative D_z̄ = ∂_z̄ + A_z̄ is a connection defining a holomorphic G bundle E on P^1, and this bundle is locally constant in t. One salient feature of this analysis is that the moduli space of solutions naturally fibers over the moduli space Bun_G(P^1) of holomorphic G bundles on P^1; many interesting aspects of local operators are encoded in this simple fact.
It is worth noting that it is common to use a slightly different moduli space of G bundles.Instead of considering a finite sphere P 1 , it is more convenient to work with the "formal bubble" or "raviolo" B. Roughly speaking, the formal bubble is the non-Hausdorf space obtained by gluing two infinitesimally small disks (alias "formal disks") away from the origin.The formal bubble naturally arises upon consideration of the state-operator correspondence in any mixed holomorphic-topological quantum field theory: states on the formal bubble B are expected to be identified with local operators at the center of the bubble.In essence, any sufficiently small neighborhood in a THF 3-manifold takes the form of a small cylinder D × I; the holomorphic-topological nature of the QFT implies that field configurations away from operator insertions are holomorphic on D and constant along I.In particular, the size of the interval I is inconsequential and the data of the field configuration is encapsulated by data on D. In the presence of a local operator, however, this holomorphic data on D can jump.This jump cannot be arbitrary and should reflect the local nature of the operator: the holomorphic data should be equivalent away from the insertion point of the local operator.Putting this together, field configurations on the boundary of this infinitesimal cylinder can be identified with two copies of the necessary holomorphic data on D and an isomorphism of this data away from the origin.See e.g. the discussion just before Section 3.1 of [27] for more details.
The formal bubble can be constructed in a manner analogous to P^1 by gluing two open sets to one another. One first replaces the complex plane C, with its algebra of functions given by polynomials in a single variable C[z], by the formal disk D, with algebra of functions given by formal power series in a single variable O = C[[z]]; one should think of D as an algebro-geometric avatar of the above infinitesimally small disk. Just as P^1 is realized by gluing two copies of C over C^×, the bubble B is realized by gluing two copies of the formal disk D over the "formal punctured disk" D^◦. The algebra of functions on the formal punctured disk is given by formal Laurent series in a single variable K = C((z)), just as functions on C^× are Laurent polynomials. Aside from working with series rather than polynomials, the main difference between the formal bubble and P^1 is that the transition function relates the two formal disks D_z and D_w via w = z, rather than the more familiar w = 1/z used for P^1.
The convenience of working with the formal bubble B is that it provides a rather concrete realization of the moduli space of holomorphic bundles as a particularly nice coset. If we first trivialize the bundle on the two formal disks, it suffices to give the transition function, a gauge transformation on the formal punctured disk D^◦; the moduli space of holomorphic bundles is then realized as a quotient by changes of trivialization (gauge transformations on the two patches or formal disks). The group of gauge transformations on D^◦ will be denoted G_K; when G is a matrix group, elements of G_K are simply matrices with Laurent series entries. Similarly, the group of gauge transformations on D will be denoted G_O; this is the group with Taylor series entries. Together, we see that the moduli space of G bundles on B is the double coset

Bun_G(B) ≅ G_O \ G_K / G_O,

where the single coset Gr_G = G_K / G_O is known as the affine Grassmannian. The two factors of G_O act on the left and right of G_K: (h_1, h_2) · g = h_1 g h_2^{−1}. Practically speaking, this moduli space is much easier to manage than its P^1 counterpart simply because formal series are easier to handle than Laurent polynomials. For example, ensuring that a matrix of Laurent polynomials is invertible is much harder than ensuring that a matrix of Laurent series is invertible; in the example of G = C^×, the group G[z, z^{−1}] only contains elements proportional to z^n, whereas G_K contains all nonzero Laurent series. Geometrically, this difficulty arises from the fact that the transition function defining the bundle must be invertible everywhere except at 0 and ∞; in a sense, the formal bubble B only contains those points, i.e. we do not have to worry about singularities at finite values of z, and so the choice of transition function is much less constrained.
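A small sympy check of the point just made (our own illustration with an arbitrary example series, not from the paper): any Taylor series with nonzero constant term can be inverted order by order in C[[z]], which is the basic fact that makes G_K and G_O so easy to work with.

```python
# Illustration (ours): a Taylor series with nonzero constant term is a unit in C[[z]].
import sympy as sp

z = sp.symbols("z")
h = 2 + z + 3*z**2                                 # nonzero constant term
h_inv = sp.series(1/h, z, 0, 5).removeO()          # inverse, truncated at order 5
print(sp.series(sp.expand(h * h_inv), z, 0, 5))    # 1 + O(z**5)
```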
Abelian monopoles
Consider the case G_c = U(1), G = C^×; the inequivalent holomorphic C^× bundles on B are labeled by integers m ∈ Z. For any non-zero Laurent series g(z) = a z^m + ... ∈ C^×_K there is an invertible Taylor series of the form h(z) = a + ... ∈ C^×_O (invertibility means it has a non-zero constant term) such that g(z) h(z)^{−1} = z^m. In particular, the moduli space of solutions to the above equations of motion has disconnected components EOM = ⊔_m EOM_m, where EOM_m denotes the component of the space of solutions for which the underlying gauge bundle has transition function z^m. The states arising from EOM_m correspond to operators with magnetic charge (or, better, monopole number) m. The integer m is naturally identified with a cocharacter of C^×.
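Concretely, the factorization g(z) = z^m × (unit in C[[z]]) means that the monopole number is just the order of vanishing of g at z = 0. A tiny sketch (ours, not from the paper) reading off m from the coefficients of a Laurent series:

```python
# Illustration (ours): the monopole number m of an invertible Laurent series
# g(z) = a*z**m + ... is its z-adic valuation, i.e. the exponent of the leading term.
def monopole_number(coeffs):
    """coeffs: dict {exponent: coefficient} of a nonzero Laurent series."""
    return min(n for n, c in coeffs.items() if c != 0)   # g = z**m * (unit in C[[z]])

# g(z) = 3*z**-2 + z**-1 + 5 has magnetic charge m = -2
print(monopole_number({-2: 3, -1: 1, 0: 5}))
```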
We note that from the perspective of P 1 , the above bundles are identified with the line bundles O(m).This line bundle is characterized by the requirement that sections σ transform as σ → λ m σ under the homogeneous scaling of projective coordinates z α → λz α .In terms of the affine coordinates z, w = 1/z, this corresponds to saying a section of O(m) transforms as σ z (z) → σ w (w) = w m σ z (1/w) under the change of coordinates z → w.The line bundle O(m) over P 1 is the holomorphic avatar of the famous (charge m) monopole bundle over S 2 .
For more general abelian gauge group T c = U (1) r the story is nearly identical; the moduli space of equations of motion has disconnected components labeled by r-tuples of integers which give rise to states of the associated monopole number.Recent work of Zeng [27] described an explicit geometric quantization of the moduli space EOM on P 1 for the HT twist of abelian N = 2 theories, thereby realizing (the vector space of) local operators in these twisted theories.
Before moving on, we note that the collision of monopole operators is inherited from the group structure on G_K. We consider two bubbles B, corresponding to two monopole operators, and stack them on top of one another; the moduli space of bundles on the double-bubble is then the double coset G_O\G_K ×_{G_O} G_K/G_O, where the π_i are (the maps on the quotient coming from) the natural projection maps from G_K × G_K onto its two factors. An important consequence of the above convolution structure is that monopole operators act by modifying the gauge bundle at the point of the monopole operator. For the simplest monopole operators, this corresponds physically to performing a large gauge transformation (mathematically this process is called a Hecke modification). For example, we could perform a meromorphic gauge transformation of the form g = z^m; such a gauge transformation maps a section s of O(n) to a section gs of O(n + m), defined in terms of affine coordinates as (gs)_z(z) = z^m s_z(z).
Non-abelian monopoles
The story for general gauge group G_c is somewhat more complicated. We choose a maximal torus T_c ⊆ G_c. Any holomorphic G bundle is equivalent to a T = (T_c)_C bundle of the above form. (We can always perform G gauge transformations on the two disks to bring the transition function into the chosen maximal torus.) In particular, for every magnetic charge/cocharacter m ∈ Hom(C^×, T) ≅ Z^r, where r is the rank of G, we get a holomorphic G bundle whose transition function is again g = z^m. Two such cocharacters m, m′ yield equivalent G bundles when m and m′ differ by the action of the Weyl group. If we choose a collection of positive roots, we can therefore label such bundles by a dominant cocharacter, one that pairs non-negatively with all positive roots. Passing from the magnetic charge m to the monopole number γ ∈ π_1(G_c) corresponds to taking the image of m under the quotient map from the cocharacter lattice Hom(C^×, T) to its quotient by the coroot lattice, which is naturally identified with π_1(G_c). To orient readers, we note that, for G_c = U(r) and T_c = U(1)^r the diagonal torus, cocharacters of T can be identified with r-tuples of integers m = (m_1, ..., m_r), dominant cocharacters have m_i ≥ m_j if j ≥ i (for the standard choice of positive roots), and the monopole number is simply the sum m_1 + ... + m_r ∈ Z. Similarly, for SU(r) (with its diagonal torus) cocharacters are identified with r-tuples satisfying m_1 + ... + m_r = 0, and for PSU(r) ≅ SU(r)/Z_r (with its diagonal torus) they are identified with r-tuples (m_1, ..., m_r) modulo shifts by (1, ..., 1). The monopole number is trivial for SU(r), since it is simply connected, whereas the monopole number for PSU(r) is simply m_1 + ... + m_r mod r.
A description of holomorphic G bundles in terms of dominant cocharacters as a discrete set of points is inadequate to capture the intricacies of non-abelian monopoles; it is possible to describe a sequence of holomorphic G bundles with a given magnetic charge m (a dominant cocharacter) such that the limit bundle has a magnetic charge m′ different from m, a phenomenon known as monopole bubbling. As an example of this phenomenon, consider the GL(2, C) bundle with transition function g(z) = z^{(2,0)} corresponding to a monopole operator with magnetic charge m = (2, 0), i.e. g is the diagonal 2 × 2 matrix with entries z^2 and 1. We can pre- and post-compose with any non-singular gauge transformation to arrive at an equivalent bundle; for example, we can consider a one-parameter family of holomorphic GL(2) bundles whose transition function simply differs from the above bundle by a gauge transformation on the top disk. If we take the limit a → ∞, we find the gauge bundle with magnetic charge m′ = (1, 1): away from a = 0 we can perform a gauge transformation on the lower disk from which the a → ∞ limit is straightforward. In particular, we arrive at a gauge bundle with magnetic charge m′ = (1, 1). More generally, the magnetic charge m′ is always dominated by m (i.e. m − m′ is dominant) and projects to the same monopole number γ. To contrast this with the abelian case above, the moduli space of holomorphic G bundles for general G has disconnected components labeled by π_1(G): monopole number, and not magnetic charge, is a good quantum number for local operators in 3d.
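For readers who want something computable, here is a minimal sketch of the same bubbling phenomenon in a slightly different but standard presentation (ours, with the degeneration at a = 0 rather than a → ∞; assuming SymPy, and all helper names are ours): over O = C[[z]], the G_O × G_O orbit of g ∈ GL(2, K) is determined by the z-valuations of the gcd of its entries and of its determinant, and for the family g_a = ((z, a), (0, z)) these give magnetic charge (2, 0) whenever a ≠ 0, degenerating to (1, 1) at a = 0.

    # Monopole bubbling for GL(2): the charge of the family [[z, a], [0, z]] jumps at a = 0.
    import sympy as sp

    z = sp.symbols('z')

    def valuation(p):
        """Order of vanishing of a polynomial in z at z = 0 (infinity for p = 0)."""
        p = sp.Poly(sp.expand(p), z)
        return sp.oo if p.is_zero else min(m[0] for m in p.monoms())

    def magnetic_charge(g):
        """Dominant cocharacter (m1, m2) of a 2x2 matrix over C[z] inside GL(2, C((z)))."""
        d1 = min(valuation(e) for e in g)                   # valuation of gcd of entries
        d2 = valuation(sp.Matrix(2, 2, list(g)).det())      # valuation of the determinant
        return (d2 - d1, d1)                                # dominant ordering m1 >= m2

    for a in (1, 0):
        print(a, magnetic_charge([z, a, 0, z]))             # 1 (2, 0)  and  0 (1, 1)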
One of the primary obstacles to the study of monopole operators in HT-twisted N = 2 theories with non-abelian gauge groups is having to contend with the highly non-trivial structure of the moduli space of bundles. A related problem arises in the topological A-twist of standard N = 4 theories, but the increased amount of control offered by the topological twist makes it possible to make progress: the full, non-perturbative algebra of local operators in these theories was described explicitly by Braverman-Finkelberg-Nakajima [29,30]; see [31] for a physical analysis of this algebra, related to the work of Braverman-Finkelberg-Nakajima via fixed-point localization in equivariant cohomology.
Boundary monopoles
Although monopole operators in the bulk of a 3d gauge theory are themselves quite fascinating, we will be more interested in their appearance in the algebra of local operators on boundary conditions of these gauge theories.In 3d N = 2 gauge theories, the BPS equations F = ⋆Dσ and D ⋆ σ = 0 for the gauge field strength F and real scalar σ admit non-trivial, monopole solutions when the vector multiplets are given half-BPS Dirichlet boundary conditions; see e.g.Section 3.4 of [25] for more details.Local operators on this Dirichlet boundary condition can be obtained in a similar fashion to bulk local operators; we can realize them via a state-operator correspondence as states on a hemisphere attached to the boundary.Again, this space of states can be determined via geometric quantization of a suitable space of equations of motion.We note that the chiral algebras studied in [32], realized as algebras of local operators on an interval compactification of 3d N = 2 theories rather than on a boundary, share many of the features we describe below.
It is convenient to work with an infinitesimal version of the hemisphere, i.e. a formal disk D with the boundary conditions imposed on its boundary formal punctured disk • D; the Dirichlet boundary conditions require that the holomorphic gauge bundle is equipped with a trivialization on the boundary.Following the argument above, we see that the moduli space of solutions to the equations of motion naturally fibers over the affine Grassmannian Gr G , which parameterizes such bundles.In particular, we again see that these equations of motion have disconnected components labeled by monopole number π 1 (G) -this is again a good quantum number for boundary local operators.The fact that we arrive at the affine Grassmannian, rather than its quotient by G O , is due to the fact that we are not required to impose gauge invariance of local operators on Dirichlet boundary conditions.
The fact that the space of solutions to the equations of motion fibers over the affine Grassmannian Gr_G can be a powerful conceptual tool, but the full, non-perturbative algebra of local operators can still be quite difficult to describe. The work [26] provided a conjectural description of the algebra of local operators as follows; see Section 7 of loc. cit. for more details. Suppose the algebra of local operators in the absence of gauge fields realizes some vertex algebra V (it need not have a holomorphic stress tensor); V necessarily admits an action of G_O. We can then use this action to construct an interesting vector bundle V_{Gr_G} over the affine Grassmannian as the associated bundle V_{Gr_G} = G_K ×_{G_O} V; for a given holomorphic G bundle P, the fiber V_P over P should be thought of as V coupled to the bundle P. The algebra of local operators on this Dirichlet boundary condition is then conjectured to be

H_{•,∂̄}(Gr_G, V_{Gr_G} ⊗ L^{−κ}),   (2.9)

the Dolbeault homology of Gr_G with coefficients in this vector bundle twisted by a certain power of the determinant line bundle L, where, for a complex manifold X, the Dolbeault homology H_{n,∂̄}(X, E) is defined to be the linear dual of the Dolbeault cohomology valued in the dual bundle, H_{n,∂̄}(X, E) := H^n_{∂̄}(X, E^*)^*, cf. Eq. (7.13) of loc. cit. See, e.g., [33, Section 1.5] for a detailed description of the line bundle L → Gr_G. The power κ denotes the effective Chern-Simons level, cf. [25, Eq. 3.32], and encodes the anomaly for the G flavor symmetry on the boundary. Practically speaking, the effect of the correction L^{−κ} is to give magnetically charged operators (boundary monopoles) a non-trivial electric charge in the presence of a Chern-Simons term. It is important to note that Eq. (2.9) is merely a description of the vector space of local operators. Equipping this vector space with the structure of a vertex algebra is far from trivial; this is sketched in [26, Section 7.2] in the absence of matter fields.
Boundary monopoles in abelian gauge theories and spectral flow
The relative simplicity of the affine Grassmannian Gr T for abelian gauge group T once again implies that the description of boundary local operators in Eq. (2.9) can be made fairly concrete.Recall that Gr T is essentially the lattice Z r associated to the holomorphic T = (C × ) r bundles with transition function z m ; the vector space H •,∂ (Gr T , V Gr T ⊗ L −κ ) is thus graded by magnetic charge/monopole number Z r .
Local operators with vanishing magnetic charge are simply the perturbative local operators V built from the fundamental fields; since magnetic charge is additive, the full, non-perturbative algebra furnishes a module for this perturbative subalgebra. By construction, this subalgebra has abelian currents J^a, a = 1, ..., r, that generate the action of the T flavor symmetry on the boundary. The statement that this flavor symmetry has an anomaly κ translates to the OPEs

J^a(z) J^b(w) ∼ κ_{ab}/(z − w)^2.   (2.10)

The OPEs of these currents with a perturbative local operator O built from the matter fields simply measure its charges q_a under T, J^a(z) O(w) ∼ q_a O(w)/(z − w). Given the OPEs with J^a, we can immediately write down the result of coupling to the gauge bundle with magnetic charge m ∈ Z^r by performing the large gauge transformation z^m. Let m^a ∈ Z denote the components of the magnetic charge. If a local operator O has charges q_a then it transforms as O → z^{⟨q,m⟩} O, where ⟨q, m⟩ = q_a m^a is the natural pairing of the weight q ∈ Hom(T, C^×) and the cocharacter m ∈ Hom(C^×, T). The currents J^a, on the other hand, transform as J^a → J^a − κ_{ab} m^b/z; if κ_{ab} is non-degenerate, one could say that κ_{ab} J^b transforms as a connection. Indeed, this is no mistake: the current J^a can be identified with the boundary value of the field B^a appearing in Section 2.1, which itself is identified with the component A^a_z of the gauge field as B^a = κ_{ab} A^b_z. It is important to note that the transformations just described are automorphisms σ_m of the (mode algebra of the) perturbative subalgebra known as spectral flow automorphisms; for every abelian current J, there is a lattice of such automorphisms. More generally, if B is an abelian current, then there is a family of automorphisms of its mode algebra that sends B → B + λ/z; if B generates a current subalgebra of a larger vertex algebra V, such an automorphism need not lift to an automorphism of all of V, cf. Section 2.4 of [34]. For example, if O ∈ V is a current algebra primary of weight q then we need λq ∈ Z. The inverse of σ_m is simply σ_{−m}. Given the automorphism σ_m and a module M, we can construct a new module σ_m(M), called the spectral flow of M, by pre-composing with the automorphism: the action of an operator O on the module σ_m(M) is given by the action of σ_{−m}(O) on M. In the present situation, we are interested in the spectral flows of the vacuum module V; we claim that, as a module for the perturbative algebra V, the full, non-perturbative algebra of local operators is simply the direct sum over all such spectral flows, ⊕_{m ∈ Z^r} σ_m(V). It is worth noting that the notion of spectral flow and spectral flow modules is totally independent of the presence of a 3d bulk; for example, (the symmetry algebra of) a 2d free boson ϕ(z, z̄) has spectral flow automorphisms and the resulting modules are generated by the usual vertex operators :e^{λϕ}:. When a vertex algebra V with spectral flow automorphisms σ_n is realized as the boundary vertex algebra for a 3d TQFT, the spectral flows of the vacuum σ_n(V) are identified with vortex line operators and generators of a 1-form symmetry of the 3d TQFT, cf. Physics Proposition 6.2 of loc. cit.
To illustrate how this works in practice, consider the simplest case of a single abelian current J at level κ ∈ Z_{>0} without any matter fields. The vacuum module of this current algebra is generated by the vacuum vector |0⟩, on which the modes J_n of J(z) = Σ_n J_n z^{−n−1} act as J_n |0⟩ = 0 for n ≥ 0. The vacuum module V is then spanned by vectors of the form J_{n_1} ... J_{n_m} |0⟩ where n_1 ≤ ... ≤ n_m < 0. The spectral flow automorphism σ_m takes the form σ_m(J)(z) = J(z) − κm/z and is simply a shift in the zero mode J_0. This implies that the spectral flow of the vacuum module σ_m(V) is generated by the spectral flow of the vacuum vector σ_m(|0⟩), on which the modes act as J_n σ_m(|0⟩) = 0 for n > 0 and J_0 σ_m(|0⟩) = κm σ_m(|0⟩). The spectral flow of the vacuum vector σ_m(|0⟩) corresponds to an operator V_m(0), the boundary monopole of magnetic charge m; this action of the modes J_n corresponds to the OPE J(z) V_m(w) ∼ κm V_m(w)/(z − w). As claimed above, the effect of a non-trivial level κ is to give the magnetically charged operator V_m(w) a non-trivial electric charge κm.
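Since the mode action just written down is completely explicit, it can be checked mechanically. The following minimal sketch (Python; all names and the example values of κ and m are ours) realizes states of σ_m(V) as dictionaries of creation-mode labels and verifies the commutator [J_p, J_q] = κ p δ_{p+q,0} together with J_0 acting by κm on the flowed vacuum.

    from collections import Counter

    KAPPA, M = 3, 2          # level kappa and magnetic charge m (illustrative values)

    def J(n, state):
        """Apply the mode J_n to a state = dict {tuple of creation modes: coefficient}."""
        out = Counter()
        for modes, c in state.items():
            if n < 0:                                   # creation: append J_n
                out[tuple(sorted(modes + (n,)))] += c
            elif n == 0:                                # J_0 is central: eigenvalue kappa*m
                out[modes] += KAPPA * M * c
            else:                                       # annihilation: contract with each J_{-n}
                for i, k in enumerate(modes):
                    if k == -n:
                        out[modes[:i] + modes[i+1:]] += KAPPA * n * c   # [J_n, J_{-n}] = kappa*n
        return dict(out)

    def commutator(p, q, state):
        a, b = J(p, J(q, state)), J(q, J(p, state))
        diff = {k: a.get(k, 0) - b.get(k, 0) for k in set(a) | set(b)}
        return {k: v for k, v in diff.items() if v}

    vac_m = {(): 1}                                     # the spectral-flowed vacuum V_m
    psi = J(-1, J(-1, J(-3, vac_m)))                    # a sample descendant
    print(commutator(2, -2, psi))                       # {(-3, -1, -1): 6}, i.e. kappa*2 times psi
    print(commutator(1, 3, psi))                        # {}  (J_1 and J_3 commute)
    print(J(0, vac_m))                                  # {(): 6} = {(): kappa*m}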
The above analysis readily generalizes to more interesting examples. That said, it is important to note that the precise OPEs of the boundary monopole operators are not encoded in the action of the perturbative algebra on the sectors of non-trivial magnetic charge. It is believed that the spectral flow morphisms σ_m are compatible with the fusion of modules for the perturbative algebra V: given two modules M_1 and M_2, one expects σ_{m_1}(M_1) × σ_{m_2}(M_2) ≅ σ_{m_1+m_2}(M_1 × M_2). In particular, since V × V = V, the fusion of the modules σ_m(V) respects the grading by magnetic charge m. The direct sum over the σ_m(V) is thus a simple current extension of the perturbative algebra V.
In practice, the precise OPEs of the fields in σ m 1 (V) and σ m 2 (V) are determined by a free-field realization of V.In the above example of the abelian current J, we can realize it as J = ∂ϕ for a chiral boson ϕ(z); the boundary monopole operators V m (z) are then identified with the vertex operators : e mϕ :.The full, non-perturbative algebra in this example is thus identified with a lattice VOA.See e.g.[34][35][36] for examples of how this process can be implemented in more interesting examples; the work [35] studies the algebra of local operators on a Dirichlet boundary condition in the (topological) B-twist of 3d N = 4 SQED, whereas the more recent [34,36] apply these same techniques to study yet more involved abelian N = 4 gauge theories.The much earlier work [37] studies related simple current extensions from a purely VOA perspective.
Before moving on, we note that the sorts of simple current extensions that arise from spectral flows of the vacuum module have an important physical role to play. General aspects of these simple current extensions were described in [38,39], which we now sketch. For starters, given any module M of the perturbative algebra we can consider the direct sum ⊕_m σ_m(M). Importantly, not every ⊕_m σ_m(M) will yield a module for the full non-perturbative algebra: the OPE of a monopole of magnetic charge m and a weight q operator O takes the form V_m(z) O(w) ∼ (z − w)^{qm} (...). To ensure we get an honest module for the simple current extension, rather than a twisted module, the exponent qm must be integral for all m, i.e. q must be integral. In particular,

1. boundary monopoles will disallow the modules M that do not have integral weights.

Moreover, given a perturbative module M that induces a module for the non-perturbative algebra, the map M → ⊕_m σ_m(M) is not injective: the modules M and σ_{m′}(M) will both yield the same non-perturbative module. If we wish to describe modules for the non-perturbative algebra in terms of modules for the perturbative algebra, we see that
2. boundary monopoles identify perturbative modules that differ by spectral flow.
One often phrases this phenomenon physically as "Wilson lines are screened by gauge vortex lines." To make this more explicit, consider again the case of U(1)_k Chern-Simons theory. The perturbative algebra is simply a Heisenberg algebra at level k; the full non-perturbative algebra obtained by extending by spectral flows of the vacuum can be identified with the lattice VOA based on √k Z. There is a Fock module F_q for the perturbative Heisenberg algebra for each infinitesimal weight q ∈ C, viewed as a Wilson line for a representation of weight q; spectral flow acts by shifting the weight, σ_m(F_q) = F_{q+km}. A straightforward computation shows that σ_m(F_q) defines a module for the lattice VOA so long as q ∈ Z, i.e. we can only consider Wilson lines of integer charge. Finally, since F_q and F_{q+km} induce the same module for the lattice VOA, we see that gauge vortex lines screen Wilson lines whose charge is a multiple of k, i.e. that modules for the lattice VOA are labeled by integers mod k.
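As a small worked instance of this counting (the value k = 2 is chosen purely for illustration): spectral flow acts on the Fock modules as σ_m(F_q) = F_{q+2m}, so every integral-weight module is equivalent to F_0 or F_1, and the lattice VOA based on √2 Z has exactly two inequivalent modules, matching the familiar count of k anyon lines in U(1)_k Chern-Simons theory (here the vacuum and the semion).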
Boundary monopoles in non-abelian gauge theories
Perhaps unsurprisingly, boundary monopoles in non-abelian gauge theories are notably more difficult than the case of abelian gauge theories.This increase in difficulty is directly tied to the non-trivial geometry and topology of the affine Grassmannian for non-abelian groups.
Even in the case of a lone current algebra, i.e. in the absence of the matter VOA V, the story of boundary monopoles is not well understood. To illustrate the difficulty, consider for simplicity SU(2) Chern-Simons theory at level κ, a positive integer. Perturbatively, the algebra of boundary local operators on the above Dirichlet boundary condition is generated by three holomorphic currents J^H, J^E, J^F (labeled by the three Chevalley generators of sl(2) denoted by H, E, F), with singular OPEs

J^H(z) J^H(w) ∼ 2κ/(z − w)^2,    J^E(z) J^F(w) ∼ κ/(z − w)^2 + J^H(w)/(z − w),
J^H(z) J^E(w) ∼ 2 J^E(w)/(z − w),    J^H(z) J^F(w) ∼ −2 J^F(w)/(z − w).   (2.17)

On the other hand, the full, non-perturbative boundary algebra is expected to be a WZW current algebra [25,26]; this is a simple VOA obtained by removing singular vectors, essentially :(J^E)^{κ+1}: and its descendants, from the perturbative current algebra. We immediately see a dramatic difference from the abelian setting: the full, non-perturbative algebra is a quotient of the perturbative algebra, rather than the simple-current extension appearing in the abelian setting. More generally, when the gauge group G is not simply connected, the non-perturbative algebra is an extension of the simple quotient by simple currents furnishing the fundamental group π_1(G); the simple-current extension is thus graded by the (abelian) group π_1(G), capturing the decomposition into sectors of a given monopole number.
With this said, we note that there are still spectral flow automorphisms σ m which can be thought of as considering perturbations around the holomorphic SL(2) bundle with transition function z (m,−m) ; equivalently, we can think of this as performing a large gauge transformation on the perturbative algebra by the diagonal matrix We again identify the J a with the boundary values of the gauge fields A a z via J a = κK ab A b z , where K ab = Tr(t a t b ) is the Killing form.Thus, the currents transform as We note that there is a square root σ 1/2 of the spectral flow automorphism σ 1 .This square root should be thought of as performing a large gauge transformation by the diagonal matrix z (1,0) ∼ z (0,−1) in a related SO(3) ∼ = P SU (2) gauge theory.Although we will not address the full problem here, we note that there are hints that it is possible to connect the two perspectives: it should be possible to extend the perturbative subalgebra by modules E m labeled by cocharacters m together with a differential encoding the geometry of the affine Grassmannian Gr G .The modules E m should be thought of as contributions from the torus fixed points on Gr G (the torus action arising from left multiplication on G K ) with the differential encoding how these fixed points fit into honest homology classes.We are unaware of a description of the E m , but mention that the spectral flows of the vacuum are likely submodules thereof: the index computations of [25, Section 7.1] illustrate that, upon analytic continuation, the characters of the full non-perturbative algebra can be matched with the direct sum of these spectral flows.
The fact that they only match after analytic continuation is an indication that the spectral flow modules are extended by modules whose characters vanish upon analytic continuation. For a simple illustration of this fact, consider the algebra C[X] of polynomials in a bosonic variable X and the module M_m = X^m C[X]. If we count powers of X with a fugacity q, the character of M_m is simply

ch M_m = Σ_{n ≥ m} q^n.   (2.20)

So long as |q| < 1 this character converges to the rational function q^m/(1 − q); analytic continuation to |q| > 1 and re-expansion as a series in powers of q yields −1 times the character of the quotient module C(X)/M_m, namely −Σ_{n < m} q^n. Of course, the modules M_m and C(X)/M_m are very different, but they fit into the exact sequence of C[X]-modules

0 → M_m → C(X) → C(X)/M_m → 0.

In particular, we see that the module M_m is a submodule of a resolution 0 → M_m → C(X) of the module C(X)/M_m obtained by analytically continuing its character; the character of C(X) = C[X, X^{-1}] is the formal delta function δ(q) = Σ_{i∈Z} q^i and vanishes upon analytic continuation to |q| > 1.
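The character manipulation above is easy to verify symbolically; the following short check (assuming SymPy; the choice m = 2 and the truncation orders are ours) compares the geometric series with the rational function for |q| < 1 and then re-expands the same rational function around q = ∞, recovering minus the character of the quotient module.

    import sympy as sp

    q = sp.symbols('q')
    m, N = 2, 10
    partial = sum(q**n for n in range(m, N))                       # truncated character of M_m
    print(sp.expand(sp.series(q**m/(1 - q), q, 0, N).removeO() - partial))   # 0
    # Re-expanding the same rational function around q = infinity (the |q| > 1 regime):
    print(sp.series(q**m/(1 - q), q, sp.oo, 5))                    # -q - 1 - 1/q - ..., i.e. -ch of C(X)/M_m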
Translating this lesson to the case of interest, we arrive at the expectation that the spectral flows of the vacuum σ m (V) are themselves submodules of the E m used to resolve the non-perturbative algebra.Even though the σ m (V) do not constitute the entirety of the non-perturbative algebra outside of abelian gauge theories, they are still an important part to understand.
Twistorial Monopole Operators
We now apply the above analysis to the 3d theories arising from compactifying holomorphic BF theory on twistor space PT as described in [6].See Appendix A for our twistorial conven-tions.We will be principally interested in the effect of non-perturbative monopole operator insertions on the 3d boundary chiral algebra.We remind the reader that precisely this chiral algebra is the asymptotic symmetry algebra of (conformally) soft modes [1], supported on the celestial sphere at asymptotic null infinity from the perspective of the 4d BF theory [6].
Self-dual gauge theory from PT
We start with a brief reminder of the realization of self-dual 4d gauge theory in terms of a holomorphic field theory on PT; we mostly follow [6].Our goal for this subsection is to relate these self-dual gauge theories to 3d theories reminiscent of the twisted theories described in Section 2.1 whereby the celestial chiral algebras of asymptotic symmetries are realized as boundary chiral algebras of the 3d theory, cf.Section 6 of loc.cit.
Consider self-dual gauge theory with gauge Lie algebra g.This is the 4d theory of a gauge field A ∈ Ω 1 (R 4 , g) and self-dual 2-form B ∈ Ω 2 (R 4 , g * ) − with a BF action: where F (A) − is the anti self-dual part of the curvature F (A).The Penrose-Ward correspondence relates this self-dual gauge theory to a 6d field theory on twistor space PT; the fields of the 6d theory include a (0, 1) gauge field A ∈ Ω (0,1) (PT, g) together with a (3, 1)-form field B ∈ Ω (3,1) (PT, g * ) with a holomorphic BF action given by where F (0,2) (A) = ∂A + A 2 is the (0, 2) part of the curvature of A. When formulated in the BV formalism, we can account for the anti-fields, ghosts, and anti-ghosts by extending A and B to Ω (0,•) (PT, g) [1] and Ω (3,•) (PT, g * ) [1], respectively.We denote the extended fields by the same characters and use the convention that integration is only non-vanishing on top forms; with this choice, the BV action takes the same functional form as above.
Anomaly on PT
It is important to note that the above 6d holomorphic theories suffer from perturbative anomalies [10,11]. These anomalies do not say that the 4d self-dual gauge theory is ill-defined. Rather, they imply that the Penrose-Ward correspondence fails to hold quantum mechanically. Consequently, we should not expect the 3d theory obtained by reduction from 6d to describe the corresponding 4d self-dual gauge theory beyond tree-level computations. As shown in [10], it is possible to remedy this anomaly via a Green-Schwarz-like mechanism for certain special choices of gauge Lie algebra g. In particular, when the gauge Lie algebra is sl(2), sl(3), so(8), or an exceptional algebra, one can introduce a Kodaira-Spencer-like field (and its BV partners) η ∈ Ω^{(2,•)}(PT) satisfying ∂η = 0, with a full 6d action in which λ_g is a g-dependent constant. From the 4d perspective, the new field η introduces an axionic field with a quartic kinetic term. Since the majority of our computations are at tree level, we will mostly ignore the contributions from this axionic field: the classical theory with the Kodaira-Spencer field is inconsistent, as the 1-loop anomaly of the holomorphic BF theory is canceled by a tree-level anomaly involving η.
Reduction to 3d
In order to connect the above 4d and 6d theories to the discussion of Section 2, we consider a radial slicing of R^4\{0} and compactify the 6d theory on PT\P^1 ≃ (R^4\{0}) × P^1 along the radial S^3 slices. The resulting 3d theory on R_{>0} × P^1 remains holomorphic on P^1 and becomes topological along R_{>0}; the Kaluza-Klein modes along the S^3 organize themselves into representations of the SU(2) isometry of S^3. The field content of this mixed holomorphic-topological 3d theory can be determined in many ways; see Section 6 of [6] for a slightly different approach to the one presented below. Our approach will be to mimic the algebraic reduction outlined in [40] in the context of higher-dimensional Kac-Moody algebras; the recent paper [41] of Zeng describes how this reduction can be done in a smooth, rather than algebraic, setting; as usual, the algebraic classes are dense in the smooth classes, cf. Appendix B of loc. cit. We use the notation of Section 2.1. We start by (locally) placing the theory on the open set C × (C^2\{0}) ⊂ C^3. The ring of functions, i.e. the cohomology of the structure sheaf, on this open set has cohomology in nonzero degree, reflecting the fact that C^2\{0} is not affine. Indeed, a standard computation shows that

H^0(C^2\{0}, O) ≅ C[v^1, v^2],    H^1(C^2\{0}, O) ≅ ⊕_{m_1, m_2 ≥ 0} C · (v^1)^{-1-m_1} (v^2)^{-1-m_2}.

The fact that the degree 0 part of this ring of functions agrees with the ring of functions on all of C^2 is a manifestation of Hartogs' Lemma, which states that any holomorphic function on C^n\{0} extends to a holomorphic function on C^n, so long as n > 1. Note that the degree 1 subspace can be identified with the dual space to the degree 0 part using the residue pairing.
When we reduce to 3d by compactifying around the S 3 inside C 2 \{0} ≃ R >0 × S 3 , each of these cohomology classes will produce a 3d field.For example, the gauge field A reduces to two towers of fields: where m 1 , m 2 ≥ 0; the 3d fermionic field A[m 1 , m 2 ] corresponds to the coefficient of the degree 0 cohomology class (v 1) m 1 (v 2) m 2 and the 3d bosonic field Φ[m 1 , m 2 ] corresponds to the degree 1 cohomology class (v 1) −(1+m 1 ) (v 2) −(1+m 2 ) .Importantly, these fields inherit a 10 Explicitly, the constant λg satisfies Tr(X 4 ) = λ 2 g tr(X 2 ) 2 for Tr the trace in the adjoint representation and tr is the trace in the fundamental representation.The existence of a non-zero solution implies the above restriction on the gauge Lie algebra g. non-trivial 3d spin from the fact that the coordinates v α transform a sections of O(1) → P 1 , i.e. the coordinate v α has 3d spin 1 2 .Correspondingly, . In a similar fashion, B produces towers ).The KK modes with fixed m 1 + m 2 transform in the spin m 1 +m 2 2 representation of the SU (2) rotating the S 3 slices.Passing the action S 6d through this compactification, we arrive at a theory of the form described in Section 2.1, albeit with an infinite number of fields coming from the KK modes.In order to write it down concisely, it is convenient to collect the KK modes into generating functions and similarly for the towers coming from B, cf.Eq. (6.2.3) of [6].With this notation, we can write the action of the 3d theory in a remarkably compact form: where we view A(v α) as a partial connection for the infinite-dimensional Lie algebra g with covariant exterior derivative d ′ A(v α) and curvature F ′ (A(v α)).In terms of the component fields, this action takes the following form: (3.8)
The celestial chiral algebra as a boundary chiral algebra
With this reduction from 6d to 3d, it is a straightforward task to realize the celestial chiral algebra of asymptotic symmetries as a boundary chiral algebra for the above 3d theory using the tools outlined in Section 2. This phenomenon is quite general: the work [6] says that the celestial chiral algebra of asymptotic symmetries of any twistorial theory can be realized as the boundary chiral algebra of a suitable 3d mixed holomorphic-topological theory. The celestial chiral algebra controlling (tree-level) scattering amplitudes is realized by imposing Dirichlet boundary conditions on the gluon "vector multiplets" (A, B) and imposing Neumann boundary conditions on the gluon "chiral multiplets" (Φ, Λ). In analogy with [26], we package the resulting boundary operators into generators J^a[f] labeled by f ∈ C[[v^α]], and similarly for J̃^a[f]. The generating functions J[λ̃] used in [6,7] correspond to f = e^{ω λ̃_α v^α}. With this notation, the non-trivial OPEs of these generators can be written down explicitly; the OPEs with J^a[f] are particularly important: they encode the action of infinitesimal, holomorphic (bosonic) gauge transformations. Recall that the Lie algebra of constant (from the 3d perspective), infinitesimal gauge transformations is the ring of holomorphic maps from C^2\{0} to g, whose bosonic subalgebra is simply g[[v^α]]. The group of constant, finite gauge transformations is simply the group G = G[[v^α]] of series-valued group elements. With this, it is straightforward to verify that the infinitesimal action of gauge transformations is captured by the simple poles in these OPEs.
This can be straightforwardly generalized to holomorphic (from the 3d perspective) bosonic gauge transformations: the Lie algebra of infinitesimal, holomorphic gauge transformations can be identified with g O = g[[z; v α]] and the group of finite, holomorphic gauge transformations is the group of invertible series As with constant gauge transformations, the above OPEs are invariant under the action of holomorphic gauge transformations.
Wilson line modules from Koszul duality
We can engineer modules for the (celestial) chiral algebras by introducing additional defects transverse to the chiral algebra plane, as suggested in §5.3 of [6].In that work it was noted that a holomorphic Wilson defect of the form v 1 −λv 2 =−ǫ A supported on a noncompact surface of the form v 1 − λv 2 + ǫ = 0 sources a state on twistor space of conformal dimension 1 12 of the form dzδ z=z 0 1 v 1 +λv 2 +ǫ .(More generally, operators of positive integral conformal dimension are anticipated to be related to Goldstone modes conjugate to the conformally soft tower and have been discussed from a 4d perspective in e.g.[42,43]).In this subsection, we will study a closely related family of modules arising from holomorphic Wilson lines.These differ from the Wilson "defects" of [6] because of the presence of a nontrivial integral kernel, which is essential for constructing a partial connection on a complex manifold like twistor space (see, e.g.[44][45][46] for more details on the construction of holomorphic Wilson lines and Wilson loops in twistor space).
We will derive the action of the chiral algebra elements on modules corresponding to 6d holomorphic Wilson lines using the technique of Koszul duality; for related analyses of modules by way of Koszul duality 13 , see e.g.[47][48][49].Here, we will describe the modules realized by Wilson lines from both 3d and 6d realizations of these chiral algebras; in brief, 6d (holomorphic) Wilson lines will reduce to 3d Wilson lines, with the representation of g[v α] in 3d encoding both the 6d representation and the support of the Wilson line. 14e will warm up with the 3d Wilson line computation and then generalize to the case of modules arising from a 6d holomorphic Wilson line.Consider a Wilson line intersecting/ending on a universal holomorphic defect in one of the 3d mixed holomorphic-topological gauge theories described in Section 2. Focusing on the gauge fields, the universal holomorphic defect has local operators J a (z) and J a (z) coupling to the bulk fields A, B as (3.11) We then consider a local operator M R on the holomorphic defect that is also attached to a Wilson line W R extended in the topological direction. 15Requiring this configuration to be gauge invariant then constrains the OPEs of J a J a with the local operator M R : where ρ a are the representation matrices for the g action on the representation R; we used integration-by-parts and Stokes' theorem in the last equality.We see that the OPE of J a , J a and M R take the following form: Namely, M R is necessarily a primary operator for the g current algebra generated by J a that has non-singular OPEs with J a .We now turn to the 6d analogue of this computation.There are two important differences.First, unlike the above example, there is a non-trivial choice of support for the 6d Wilson line: along with the location w on the the chiral algebra plane, we must choose a holomorphic curve in the transverse C 2 .Additionally, we must include a tower of junction local operators that couple to the fluctuations of the Wilson line in the directions mutually transverse to the chiral algebra plane and the defect.For simplicity, consider placing the Wilson line along the v 1 plane at v 2 = 0.The coupling between the junction local operators and the Wilson line is then given by n≥0 where Notice that to define a holomorphic Wilson line we must wedge the gauge field with an appropriate meromorphic differential, which results in a nontrivial integral kernel depending on the support of the Wilson line.
We can derive the module structure by imposing gauge invariance of this coupling together with the universal defect coupling Due to the similarity to the above computation, we simply state the result: the non-trivial OPEs are given by We see that the tower of operators M R [n] can be identified with primary operators for the g[v α] currents J a [m 1 , m 2 ] induced by the representation R[v α]/(v 1).More generally, if we place the Wilson line along the curve v 1 − λv 2 = 0 we find the module induced from R[v α]/(λv 1 − v 2).We will denote the more general module elements M R,λ [n]; the parameter λ naturally lives on the Riemann sphere P 1 with the above case corresponding to λ = ∞.In terms of the notation introduced above, we can write this action as follows: As mentioned above, we arrive at the conclusion that these 6d holomorphic Wilson lines reduce to Wilson lines in the 3d theory obtained by reduction on S 3 .Before moving on, we note the quantum numbers of the operators M R,λ [n].For starters, the local operator M λ [n](w) has spin − n 2 and scaling dimension −n; this follows from the spin and scaling dimension of the J a [ m] together with the fact that the Wilson line is rotation invariant.The action of the SU (2) rotating the v α as a doublet does not preserve these modules; geometrically, this is because the locus v 1 − λv 2 = 0 isn't invariant.Instead, this SU (2) rotates these modules into one another via its natural action on P 1 .The module at λ = ∞ (resp.λ = 0) is compatible with the diagonal torus that rotates The modules associated to other points of P 1 also admit a grading coming from the U (1) ⊂ SU (2) stabilizing λ, but the operators J a [ m], J a [ m] will no longer be weight vectors.
Celestial monopole operators
In this subsection, we will describe non-perturbative corrections to the celestial chiral algebra.From the perspective of the above 3d theory, we find such corrections arise from boundary monopole operators.To simplify the discussion, and to avoid the difficulties of non-abelian monopoles described in Section 2, we will restrict our attention to self-dual abelian gauge theory; to make it somewhat more interesting, we introduce electron and positron fields to realize a self-dual version of QED.
Reduction to 3d and the celestial chiral algebra
The reduction of the 6d gauge fields A, B to 3d is the same as in pure self-dual gauge theory.The electron fields produce four towers: the fields arising from Y will be denoted and 3+m 1 +m 2
2
) and the fields arising from X will be denoted ).The decomposition of the positron is identical.The fields Ψ, Π, Ψ, Π are fermionic and the fields X, Y, X, Y are bosonic.The 6d action in Eq. (3.18) reduces to the following action (we suppress the dependence on v α): The generalization of the above boundary conditions corresponds to imposing Dirichlet boundary conditions on the photon "vector multiplets" (A, B) and the electron/positron "chiral multiplets" (X, Ψ), (Y, Π), (X, Ψ), (Y, Π) while imposing Neumann boundary conditions on photon "chiral multiplets" (Φ, Λ).In analogy with [26, (corresponding to positive/negative helicity electrons and positrons).As before, we denote B, φ by J, J to match the notation of [6].Using the notation described in the previous section, the non-trivial OPEs of these generators take the following form: The Lie algebra of constant (from the 3d perspective), infinitesimal (bosonic) gauge transformations is simply The group of constant, finite gauge transformations is simply the group of invertible series G = C[[v α]] × , where the group structure comes from multiplication of series; a series g(v α) ∈ C[[v α]] is invertible if and only if g(0) = 0.With this, it is straightforward to verify that the action of the constant gauge transformations g The Lie algebra of holomorphic, infinitesimal gauge transformations can similarly be identified with and the group of holomorphic, finite gauge transformations is the group of invertible series As before, a series g(z; v α) is invertible if it is non-vanishing at z = v α = 0.The action of this holomorphic gauge transformation follows from the above.For example, the action on Π[f ] takes the following form: As with constant gauge transformations, the above OPEs are invariant under the action of holomorphic gauge transformations.We note that this tree-level chiral algebra has two gradings beyond spin and electric charge q: there is the axial charge q A and the auxiliary Z grading a of BF -like theories that scales the cotangent directions with weight 1 (this is the "number of Js" grading used in [6, Lemma 9.0.1]).Note that the latter grading does not survive the deformation away from self-dual gauge theory.
Table 2: Electric charge (q), axial charge (q A ), and auxiliary grading (a) of the generators of perturbative celestial chiral algebra for tree-level self-dual QED.
Celestial spectral flow in self-dual QED
We now move to non-perturbative operators in the celestial chiral algebra.From the perspective of this chiral algebra as living on the boundary of a 3d theory, these non-perturbative corrections will correspond to boundary monopole operators.
As mentioned above, the group of holomorphic gauge transformations in our 3d incarnation of self-dual Maxwell theory is the group of invertible series in the three coordinates z, v α.From the perspective of the celestial chiral algebra, these holomorphic gauge transformations preserve the vacuum module.That said, the algebra of modes is invariant under a larger symmetry group: the group of meromorphic (from the 3d perspective) gauge transformations ] × ; such a Laurent series g(z; v α) is invertible so long as the coefficient of the lowest power of z is an invertible series in v α.
The quotient of the group of meromorphic gauge transformations by the group of holomorphic gauge transformations is the affine Grassmannian Gr_G = G_K/G_O. It is fairly straightforward to see that the (closed points of the) affine Grassmannian for G = C[[v^α]]^× can be identified with Z: let g(z; v^α) ∈ G_K be any meromorphic gauge transformation and let m_0 be the smallest power of z with a non-vanishing (and necessarily invertible) coefficient. It follows that g(z; v^α) and z^{m_0} differ by the holomorphic gauge transformation g′ = z^{-m_0} g and, moreover, no non-trivial element of G_O relates z^{m_0} and z^{m_0′} for m_0 ≠ m_0′, whence the quotient is Z.
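The identification just described is easy to implement; the following minimal sketch (assuming SymPy; the function name, the scan range, and the example series are ours) extracts m_0 for a meromorphic gauge transformation whose coefficients are series in v^1, v^2.

    import sympy as sp

    z, v1, v2 = sp.symbols('z v1 v2')

    def monopole_number(g, zmin=-10, zmax=10):
        """Smallest power m0 of z in g(z; v) whose coefficient is invertible in C[[v1, v2]]."""
        for m in range(zmin, zmax + 1):
            c = sp.expand(g).coeff(z, m)
            if c.subs({v1: 0, v2: 0}) != 0:     # invertible coefficient: nonzero constant term
                return m
            if c != 0:                          # lowest power has a non-invertible coefficient
                raise ValueError("not an element of G_K")
        return None

    g = (1 + v1) / z + 2 + z * v2 + 3 * z**2    # coefficient of z**-1 is 1 + v1, invertible
    print(monopole_number(g))                   # -1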
In analogy with the discussion in Section 2.2, the point z^m in the affine Grassmannian Gr_G gives a spectral flow operation σ_m, whose action is realized by applying the meromorphic gauge transformation z^m to the generators of the perturbative algebra. We will denote the local operator arising from spectral flow of the vacuum vector by V_m; the integer m encodes the 3d magnetic charge/monopole number of the monopole. We take as our ansatz that the full, non-perturbative chiral algebra is realized by extending the perturbative celestial chiral algebra by these modules. Once we extend the celestial chiral algebra by these generators, we gain another Z grading by 3d magnetic charge m; the perturbative chiral algebra corresponds to the subalgebra of charge 0.
We expect these boundary monopoles to serve two roles, in the same way as outlined in Section 2.2.1. First, they should enforce the quantization of electric charge: modules for the full, non-perturbative celestial chiral algebra must have integral electric charges, whereas the perturbative analysis described in Section 3.1.4 allows Wilson lines with arbitrary complex charge. At the level of OPEs, we find V_m(z) O_q(w) ∼ (z − w)^{qm} (...), whence qm must be integral. Second, these celestial monopoles will identify modules for the perturbative algebra that differ from one another by spectral flow. We note that because the abelian currents in the celestial chiral algebra have vanishing level, the boundary monopoles have vanishing electric charge and therefore do not identify modules with different electric charges as in 3d Chern-Simons theories.
4d interpretation
We attempt to understand the interpretation of the 4d states corresponding to these spectral flow modules by way of the 6d bridge established in [6].We start by noting that the states in the mth spectral flow of the vacuum are in 1-to-1 correspondence with the states in the vacuum module, but their spins are shifted: for example, we saw and so the spin of this field, and hence its modes Ψ[m 1 , m 2 ] n and the states they generate, are shifted by m -the field whereas its mth spectral flow has spin . The quantum numbers m 1 , m 2 are unchanged -our spectral flow morphisms only alter the quantum numbers for the half of the 4d spin group Spin(4) ≃ SU (2) × SU (2) rotating the twistor sphere.
Importantly, we see that the resulting states are precisely those arising from reducing a 6d field σ_m(Y) valued in O(−2m − 3), corresponding to a massless 4d field of helicity −1/2 − m. More generally, spectral flow will take a 6d field Z valued in O(2h − 2) with electric charge q to another 6d field σ_m(Z) valued in O(2h − 2 + 2qm). Thus, the states in the mth spectral flow module can be identified with states resulting from perturbing around the holomorphic gauge bundle O(2m) → PT in 6d. Coupling to this bundle results in an apparent shift in helicity h → h + qm.
The shift in helicity due to the presence of a non-trivial gauge bundle is a familiar phenomenon: this is exactly the shift in angular momentum experienced by a electrically charged particle in the presence of a magnetic monopole [50][51][52]!Somewhat more precisely, one should say that a 2-particle state of an electrically charged particle and a magnetically charged particle has a non-trivial "pairwise helicity."We understand this as follows, see, e.g., [53,Section 2] for a detailed discussion of these notions. 17If we boost to the center of momentum frame, or any frame where the momenta of the particles point towards antipodal points on the celestial sphere, it is easy to see that any such configuration is preserved by an SO(2) little group rotating a transverse plane; 2-particle states are thus labeled by a pair of 1particle data as well as an additional quantum number describing its possible transformations under this "pairwise little group."Namely, under such a rotation by angle φ, a general 2particle state acquires a phase e i(s 1 +s 2 +h 12 )φ , where s i are the spins/helicities of the two particles and h 12 is the aforementioned pairwise helicity.The case of h 12 = 0 describes the direct product of two 1-particle states.More generally, if particle 1 has electric charge q ∈ Λ weight and particle 2 has magnetic charge m ∈ Λ ∨ weight , then one finds exactly h 12 = 1 2 m, q .In the present setting, we interpret the insertion point of the operator V m as a point on the celestial sphere; placing a another operator of electric charge q at z, we see that the shift in spin by qm precisely accounts for the expected additional phase accrued when rotating the celestial sphere.Curiously, the natural spectral flows σ m appear to realize even magnetic charges, corresponding to the fact that they realize a coupling to O(2m).There are spectral flow morphisms for the remaining magnetic charges, corresponding to coupling to the bundles O(2m + 1), but these lead to twisted modules, cf. the spectral flow maps exchanging Ramond and Neveu-Schwarz sectors in 2d superconformal theories, and hence cannot be used to extend the celestial chiral algebra.It would be interesting to see if these twisted modules can be related to the twisted sectors proposed by [54] to describe out-states in the scattering of charged particles off of heavy monopoles.
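As a quick numerical check of this identification (the charges are chosen purely for illustration): take the celestial monopole V_1, which couples to O(2) and so carries 4d magnetic charge 2, and place an operator of electric charge q = 1 nearby on the celestial sphere. The pairwise helicity is h_12 = (1/2) · 2 · 1 = 1, which matches the shift in spin by qm = 1 found above for the m = 1 spectral flow, so a rotation by angle φ of the transverse plane produces the extra phase e^{iφ} on top of the usual spin factors e^{i(s_1 + s_2)φ}.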
Absent this shift in helicity, the 4d interpretation of our celestial chiral algebra generators V m is not immediately clear.An interesting and immediate question raised by our 3d magnetically charged operators realized by spectral flow is whether there are magnetic analogs J ∨ of the currents J that implement Strominger's magnetic soft theorem [16] (or the non-abelian extension proposed in [17]).In the case of pure abelian self-dual gauge theory, it is clear that there can be no such operator without adding additional generators to the chiral algebra: the operators V m (z) corresponding to the states σ m (|0 ) have regular OPEs with everything.A tantalizing possibility is that there is such an operator after the inclusion of suitable Goldstone fields that are canonically conjugate to the J's, cf.Section 3 of [55].As seen in [56], suitable coherent states of these Goldstone fields form interesting modules in the gravitational celestial chiral algebra realizing self-dual Kerr Taub-NUT backgrounds.In the case of pure self-dual abelian gauge theory, there are analogous Goldstone fields S that are canonically conjugate to the Js; the modules realized as coherent states of these fields lead to Faddeev-Kulish dressings factors [57,58], cf.Section 3.3 of [59]; see also [60].Our monopoles V m serve a complementary role where the monopole modules are coherent states built from J, cf.3d monopoles are vertex operators e mϕ for the current J = ∂ϕ, rather than from S. As the Goldstone modes S are canonically conjugate to the J's, there is an OPE and hence We see that the V m (w) are charged under the symmetry generated by Goldstone boson S[0, 0].Such Goldstone modes 18 in gauge theory generate a large gauge transformation at null infinity, resulting in a shift of the 4d vacuum.The nontrivial action on the spectral flow vertex operator is natural, as the V m should interact with the nontrivial 4d background of QED created by the Goldstone.Although these operators carry 3d magnetic charge, they are not the right candidates for constructing the dual "magnetic" soft currents in 4d.As pointed out by Footnote 16 of [59], further incorporating the magnetic soft theorem seems to require a further extension or doubling of the algebra generated by J, S; how this could arise in a twistorial set-up is an open question.
Another perspective on our monopoles V m comes from reducing the 6d gauge bundles down to 4d.If we start with the bundle O(m) → PT, we can attempt to pass it to 4d through the correspondence between PT and (complexified) spacetime.We consider the correspondence space F = P 1 × R 4 (or P 1 × C 4 ); this space fits into a natural diagram: where It is straightforward to check that pulling back O(m) → PT to the correspondence space F simply results in the product of the trivial bundle on R 4 and the bundle O(m) → P 1 .Pushing forward this bundle along π 2 is somewhat more delicate: we to compute the pushforward along π : P 1 → point of the bundle O(m), which naively computes the space of global sections.Of course, because P 1 isn't affine, we should really consider the derived pushforward, which computes sheaf cohomology.We conclude that the pushforward of π * 2 O(m) along π 1 produces H • (P 1 , O(m)) copies of the trivial bundle on R 4 .A straightforward computation shows that this cohomology is Translating this computation to the twistor correspondence, the 6d bundle O(m) on PT becomes (a suitable symmetric power of) the positive chirality spin bundle S + or its dual S * + : 19 18 An infinite tower of such modes of positive integer conformal weight, symplectically paired with the conformally soft modes comprising the perturbative chiral algebra, has been explored recently in [43].There it was shown that suitably constructed Goldstone-dressed states trivialize the full tower of conformally soft theorems.It will be interesting if these properties can help us better understand the 4d interpretation of the spectral flow of the 2d vacuum. 19We thank L. Mason for explaining the identification with the spinor bundle to us.
The easiest way to see this is to note that the sheaf cohomology of O(m) for non-negative m is identified with degree m homogeneous polynomials in the z^α; Serre duality identifies the cohomology of O(m) for m ≤ −2 with the dual of the cohomology of O(−m − 2) (of dimension −m − 1); and the bundle O(−1) has no cohomology and hence reduces to a rank 0 vector bundle, i.e. the constant sheaf whose fiber is the 0-dimensional vector space. Our results have so far been at the level of bundles associated to the principal U(1) bundle, and we would like to understand things directly in terms of the principal bundle. A major difficulty in relating this bundle to a familiar 4d gauge field configuration via the usual twistor correspondence is the fact that this gauge bundle has non-vanishing Chern class over the twistor sphere at each point in spacetime. Sparling [61] has explored extensions of the twistor correspondence in the non-abelian case and in situations where the Chern class is non-vanishing at isolated points in spacetime, which he calls jumping points. There, he finds singularities in the gauge field, which lead to a breaking of the Yang-Mills symmetry. In such cases, Sparling shows that one can obtain solutions to the Yang-Mills equations for symmetry subgroups of the original gauge group, plus certain on-shell massless matter fields. For instance, simple jumping loci for SU(2) result in solutions to two self-dual Maxwell equations plus additional fields charged under the two gauge groups. It is unclear to us how to generalize his reasoning to our situation, but we might hope that our principal U(1) bundle emerges by a similar "symmetry breaking" phenomenon, and perhaps could be obtained by reducing the structure group of the spin bundle to the diagonal U(1) ⊂ SU(2)_+.
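For reference, the dimension count used in this identification can be tabulated mechanically; the helper functions below are ours and simply encode h^0(P^1, O(m)) = max(m + 1, 0) and h^1(P^1, O(m)) = max(−m − 1, 0), checking the Euler characteristic χ(O(m)) = m + 1 along the way.

    # Dimensions of the sheaf cohomology of O(m) on P^1, as used above.
    def h0(m):
        return max(m + 1, 0)    # degree-m polynomials in the homogeneous coordinates

    def h1(m):
        return max(-m - 1, 0)   # Serre dual of h0(-m - 2)

    for m in range(-4, 4):
        assert h0(m) - h1(m) == m + 1        # Euler characteristic chi(O(m)) = m + 1
        print(m, h0(m), h1(m))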
Self-dual monopoles
Before moving on, it important to mention that there have already been investigations into self-dual field configurations with 4d magnetic charge.Magnetic monopoles in self-dual abelian gauge theory cannot solely have magnetic charge: the self-duality equation implies that there must be an accompanying electric charge so that E +iB = 0. Thus, it is reasonable to ask for a holomorphic line bundle on (an open set of) twistor space PT that realizes such a 4d "self-dual" monopole.Thankfully, such a line bundle is known by work of Penrose and Sparling [62]; see also Chapter 8.4 of [63] and references therein. 21e start with a self-dual monopole with worldline along the x 4 axis.The corresponding field strength can be realized by taking the twistor function ln(v 2 − v 1 z ); passing this twistor function through the Penrose transform yields a field strength with self-dual part where r 2 = (x 1 ) 2 + (x 2 ) 2 + (x 3 ) 2 , as desired.This twistor function admits a natural generalization to arbitrary momentum p α α by replacing the twistor function with The corresponding self-dual field strength then takes the form This field strength is invariant under scaling p → λp for λ ∈ R >0 , so it only depends on the worldline and not the momentum, and vanishes if we choose a null momentum p 2 = 0. We would like to ask how the insertion of one of these self-dual monopoles alters the celestial chiral algebra.The twistor function f p above should be interpreted as the logarithm of a single transition function describing an underlying line bundle L p on PT; although we will not describe them, the line bundles L p are constructed explicitly in [62].The construction of the bundle in [62] works twistor sphere by twistor sphere: for each twistor sphere away from the worldline, they introduce a four-set open cover and glue them with certain holomorphic transition functions.
Unfortunately, we do not expect the line bundle L p admits an algebro-geometric description.Indeed, it is a (complex) line bundle over a non-complex submanifold of PT: it is defined on the complement of R × P 1 in PT.Although this manifold is not complex, it inherits a structure from the holomorphic structure on twistor space.The transition functions of the quadrille are compatible with this inherited structure.Because of the compatibility between the transition functions for L p and the inherited complex structure, it is possible to define Dolbeault cohomology with values in L p .
Reducing this 6d field configuration to 3d would involve computing tangential Cauchy-Riemann cohomology of S 3 \{N, S} ⊂ PT\(R × P 1 ) with values in L p , cf. [41].We expect this should introduce some sort of module for the celestial chiral algebra whose structure depends on the precise insertion point of the module on the chiral algebra plane, thereby encoding the 4d support of the self-dual monopole.We expect that this module, arising from the insertion of a charged source in spacetime, will have similar features to the configuration in self-dual gravity studied in [64]; in both gauge theory and gravity, it would be desirable to have a description of the module that allows us to write down the action of all modes of the perturbative chiral algebra.
One structural aspect of this module is that it should not be preserved by either the SU (2) rotating the twistor sphere or the SU (2) rotating the KK modes.Indeed, the worldline of the self-dual monopole isn't invariant under all of Spin(4) ≃ SU (2) × SU (2): it is only invariant under a Spin(3) ≃ SU (2) subgroup thereof that fixes the worldline (i.e. the subgroup rotating the linking sphere).Correspondingly, this should be a feature for any module associated with a 4d line operator.In the example where the worldline is along the x 4 axis, we should only expect to preserve the diagonal SU (2) subgroup.As a result, elements in such a module would need a compensating translation in the fiber directions v α to be invariant under translations in the chiral algebra z plane: the worldline is only invariant under ∂ z + v 2∂ 1, corresponding to the translation (z, v 1, v 2) → (z − w, v 1 − wv 2, v 2).Similarly, the action of rotations is modified for the existence of such a splitting.Note that upon restriction to the P 1 above x µ , the exponent γ 4 becomes γ which is independent of x 4 .We start by choosing a splitting γ 4 = h 4 − h 4 into a part h 4 holomorphic away from z = ∞ and a part h 4 holomorphic away from z = 0, say 3) The existence of the desired splitting matrices then depends on the coefficient of z 0 of the Laurent series expansion of In particular, the points x µ where this series coefficient vanishes are exactly where the splitting matrices do not exist or, equivalently, where the 4d gauge field is singular.
It is straightforward to extract this series coefficient in closed form in terms of r, where r^2 = (x^1)^2 + (x^2)^2 + (x^3)^2. Note that it never vanishes for real x^1, x^2, x^3, and so this holomorphic bundle on PT induces non-singular gauge fields in 4d. In terms of this series coefficient, the 4d gauge field can be expressed in R-gauge; in these expressions D_{α α̇} denotes the covariant derivative for the holomorphic line bundle with transition function e^{γ_4}, i.e. D_µ = ∂_µ − i δ_{µ4}. Note that, because γ_4 and F_4 are independent of x^4, it follows that the corresponding 4d gauge field A_{α α̇} is independent of x^4. Plugging this into the self-duality equations, it follows that the gauge field obeys the Bogomolny equation, where D_a = ∂_a − A_a is the covariant derivative for A_a, a = 1, 2, 3. These are the equations for a BPS monopole (with A_4 identified with the Higgs field)! Explicitly, the resulting solution can be written in terms of the series coefficient above.
Celestial spectral flow
We saw in the previous section how to encode a monopole with worldline given by the x^4 axis into a certain holomorphic bundle on twistor space with transition function g_4. The states in the presence of the above holomorphic gauge bundle can be obtained by simply applying the above gauge transformation g_4. For example, consider the state corresponding to the field configuration for t_a, a = 1, 2, 3, a basis of sl(2); gauge transformations act on this state by conjugating this field configuration. It will be convenient to consider the Chevalley basis t_a = {E, F, H}. The state with t_a = E has the easiest transformation. The next easiest is that of t_a = H. Finally, the most involved transformation is that of t_a = F. We now translate this transformation of states into a statement about the transformation properties of the (perturbative) generators of the celestial chiral algebra associated to the SU(2) holomorphic BF theory on twistor space (equivalently, self-dual Yang-Mills theory in 4d). Because the propagator connects A_a[m_1, m_2] and B_a[m_2, m_1], the above field configuration is sourced by a boundary insertion whose 0-form component we denote, as above, by J_a[f]. (The higher form components are never Q-closed and trivialize the t and z dependence of this operator in cohomology.) The remaining 1-particle states of interest correspond to boundary insertions which source the fields Λ_a[m_1, m_2]; their 0-form component will be denoted by J̃_a[f] to match the above. Translating the above action of large gauge transformations on states to the generators J, J̃ results in a spectral flow automorphism σ_4 of the perturbative chiral algebra. For brevity, we only write down the action on J; the action on J̃ can be obtained by replacing J → J̃.
First, the generator J_E[λ̃] is mapped as shown, where we have suppressed the dependence of the current J and of γ_4 on the insertion point z_0. Similarly, the spectral flow of J_H[f] is given by (4.17). Although we have presented a relatively compact expression for the above spectral flow morphism, it is important to note that it has a rather non-trivial expression in terms of the component fields J_a[m_1, m_2]. Even the simple action on J_E[f] involves an infinite number of fields when expressed in terms of the component fields (conformally soft modes). We note that if we set to zero the J_a[m_1, m_2] with (m_1, m_2) ≠ (0, 0) (this is equivalent to restricting to v^α = 0), the spectral flow morphism takes a simpler form: it is the usual spectral flow automorphism of Section 2.2 combined with a holomorphic gauge transformation.
Scaling and rotations of the twistor sphere
Note that the spectral flow σ_4 has neither a definite scaling dimension nor a definite spin, but this should be no surprise: the 4-momentum p = (0, 0, 0, 1) parameterizing the chosen holomorphic bundle is not preserved by scaling R^4 or rotating the twistor P^1. More generally, we find bundles parameterized by a general momentum, obtained by replacing γ_4 by a momentum-dependent exponent γ_p. The corresponding spectral flow operation σ_p is obtained by simply replacing γ_4 by γ_p. It is straightforward to check that the σ_p have the desired transformation rules with respect to scaling and rotations. For example, performing a scale transformation x^µ → Λ x^µ sends v^α → Λ v^α and therefore acts accordingly on the J_a. It follows that conjugating the spectral flow σ_p by such a scale transformation g_Λ results in a rescaled spectral flow, compatible with the expected scaling of momenta. Rotations of the twistor sphere z → e^{iθ} z are coupled with rotations of the fiber coordinates v^α → e^{iθ/2} v^α. Since these currents correspond to positive helicity gluons, they have spin +1, and the generating function transforms accordingly. Thus, conjugating σ_p by such a rotation is the same as rotating the momentum as p^1_α → e^{iθ/2} p^1_α and p^2_α → e^{−iθ/2} p^2_α:

σ_{e^{iθ} p} = g_θ σ_p g_θ^{−1}.  (4.26)
Pairwise helicity
In the presence of the above monopole, the notion of helicity/spin of the gluons must be refined; this is reflected in the fact that σ_p(J_a[m_1, m_2]) does not have a definite 3d spin. From a 4d perspective, an insertion of the current J_a[m_1, m_2](z_0) corresponds to a positive helicity gluon with null momentum parameterized by the energy ω and the point z_0 on the twistor sphere P^1. If we consider the spectral flow operator for this momentum (or any scalar multiple thereof), we find that γ_p vanishes at the antipodal point z = −1/z_0 (and only at z = −1/z_0). In particular, the spectral flow σ_{p(ω, −1/z_0)} acts on the currents J_a[f](z_0) in a particularly simple manner. We see that, in the presence of such an operator, the generating function J_a[f](z_0) has an apparent shift in spin/helicity proportional to its weight with respect to the Cartan generator J_H[0, 0].
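For orientation, one common spinor-helicity parameterization of such a null momentum by an energy ω and a point z_0 on the sphere is the following (this is a standard convention and not necessarily the one used in this paper):

```latex
% A standard parameterization of a null 4-momentum by an energy \omega and a
% point z_0 on the celestial/twistor sphere; conventions vary between papers.
\begin{equation}
  p_{\alpha\dot\alpha} \;=\; \omega\,\lambda_\alpha\,\tilde\lambda_{\dot\alpha},
  \qquad
  \lambda_\alpha = (1,\ z_0), \qquad
  \tilde\lambda_{\dot\alpha} = (1,\ \bar z_0).
\end{equation}
```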
For the above, we see that the apparent shift in helicity is exactly reproduced by taking q = α ∈ Λ_root ≅ 2Z ⊂ Λ_weight ≅ Z and m = 1 ∈ Λ^∨_weight ≅ Z. In particular, we expect that correlation functions with this celestial monopole operator (obtained by the spectral flow σ_p) should correspond to scattering amplitudes in the presence of a 4d conformal primary of magnetic charge m = 1, i.e. a 4d monopole. Moreover, scattering amplitudes involving states of non-trivial electric and magnetic charge (in the presence of some 4d local operator O) should be captured by correlation functions of the full, non-perturbative celestial chiral algebra (in the conformal block associated to O).
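As background, the pairwise helicity alluded to in this subsection is conventionally defined as follows; this is the standard definition from the scattering theory of mutually non-local charges, quoted here for orientation rather than taken from this paper:

```latex
% Standard pairwise-helicity assignment for a pair of dyons with
% electric/magnetic charges (q_i, m_i); the Dirac-Schwinger-Zwanziger
% quantization condition makes it half-integer. Normalizations vary.
\begin{equation}
  q_{ij} \;=\; q_i\, m_j \;-\; q_j\, m_i \;\in\; \tfrac{1}{2}\mathbb{Z}.
\end{equation}
```

In particular, for a fixed unit-charge monopole background, the extra helicity carried by a gluon is proportional to the product of its electric charge (its Cartan weight) and the monopole's magnetic charge, matching the weight-proportional shift found above.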
Celestial OPE of gluons and monopoles
Now that we have identified monopole operators in the celestial chiral algebra, e.g. states in the spectral flow of the vacuum module of the perturbative algebra, with magnetically charged scattering states, we expect that the OPEs of the currents J, J̃ and the operator M_p associated to the state σ_p(|0⟩) should encode the singularities of form factors involving scattering of gluons and a magnetic monopole with momentum p. We will be particularly interested in the case where the momentum p = p(ω, w) is null. Following the analysis of Section 4.1, it is straightforward to determine the 4d gauge field associated to such a null momentum, where, as above, a = 1, 2, 3. Note that this gauge field is singular at points x ∈ R^4 where p · x is a non-zero integer multiple of π.
We will denote the corresponding spectral flow operation for the momentum p(ω, w) by σ_{p(ω,w)}. The above discussion can be generalized to other gauge Lie algebras in a fairly straightforward manner. Namely, there is an analogous g-bundle on PT resulting in a magnetically charged excitation for any embedding sl(2) ↪ g, and hence a spectral flow of the corresponding celestial current algebra. The gauge bundles arising from conjugate embeddings are equivalent to one another, as are the corresponding spectral flow modules.
It is an open problem, in both mathematics and physics, to understand how these families of spectral flow automorphisms, labeled by monopole momenta, help organize the module categories of perturbative celestial chiral algebras, as well as how these spectral flows play with the possible action of dualities.
Conclusions & Speculations
We conclude by highlighting some major open questions and future directions raised by the investigation presented here. We have argued that it is fruitful to view celestial chiral algebras that admit twistorial uplifts as boundary chiral algebras of certain 3d holomorphic-topological theories. Our hope is that the 3d perspective, reviewed in Section 2, may shed some light on outstanding mysteries in the celestial holography program (at least for twistorial theories).

One open issue concerns the description of the non-perturbative algebra as a current extension of a quotient module, where one removes singular vectors. However, the analogous result for zero-level Chern-Simons theory, which arises in the celestial setting, has not yet been determined; moreover, in the case of zero-level abelian Chern-Simons theory, using the standard vertex algebra inner product, all states in the perturbative chiral algebra are null although the vacuum module character is non-vanishing. From a spacetime point of view, there are singular vectors in the perturbative algebra, and their removal is crucial for the implementation of Ward identities for soft theorems [67,68], but it is important to note that the relevant conjugation operation on the celestial currents is not the same as the natural one for a vertex algebra and instead reflects the pairing of incoming and outgoing scattering states, cf. Section 3 of [69]. The definition of this pairing for integer conformal dimension, which is the case for the twistorial celestial chiral algebras discussed in this paper, is particularly subtle and still being developed [43].
We expect that many of these issues can be resolved by a more concrete description of 3d non-abelian monopoles supplementing the abstract proposal of [26]. This is an active area of research in derived algebraic geometry and in the representation theory of vertex algebras. It is our hope that such mathematical conjectures will ultimately be translated, via the twistorial correspondence, into concrete selection rules for four-dimensional physics: with a better understanding of the boundary chiral algebras for non-abelian gauge theories as well as their conformal blocks, the perspective of [6] offers a direct window into the scattering of 4d states with general electric and magnetic charges.
For twistorial theories, one can also characterize celestial chiral algebras at the quantum level [7-9]. It would be interesting to explore quantum deformations of the full non-perturbative chiral algebras (i.e. including 3d boundary monopole operators); such a characterization will necessarily include the generators E[r, s], F[r, s] associated to the anomaly-cancelling axion of [6,10], which restores associativity. At the classical level, these generators are gauge-invariant and should be unchanged by the spectral flow morphism.
The twistorial studies discussed above are in fact recent contributions in a long history of work exploring the quantization of holomorphic BF and Chern-Simons theories on twistor (and ambitwistor) space, recently summarized in [70]; see also [71] for earlier connections to asymptotic symmetries. It will be fascinating to flesh out further connections between twisted holography, celestial holography, and (ambi)twistor string theory, as well as the 3d holomorphic-topological perspective espoused in this note.
Finally, it will be very interesting to explore, from this 3d perspective, possible non-perturbative enhancements of the celestial chiral algebra for self-dual gravity, whose twistorial uplift has been recently studied in [9,72]; for further progress on understanding the celestial symmetries of gravity from a twistorial point of view, see [73,74]. For instance, the twistorial "quadrille" construction for self-dual dyons reviewed in Section 3.2.4 has long been known to admit Schwarzschild or Kerr-like analogues in the nonlinear graviton theory of Penrose [75] (they can perhaps be thought of more precisely as self-dual Taub-NUT solutions, with a fixed relationship between the Schwarzschild mass and NUT charge). It would be very interesting to study the 3d HT and chiral algebraic interpretations of such configurations along the lines of this work, and perhaps connect to the recent work of [56,64]; the higher-spin multipole moments encoded in the charges under the w_{1+∞} algebra for the self-dual black hole solutions studied in loc. cit. may admit a geometric realization in terms of relative cohomology in a non-Hausdorff twistor space.
Table 1: Collection of commonly used mathematical symbols and their meaning.
The monopole number is determined by surrounding the given local operator by a sphere S^2 and measuring the topological type of the gauge field sourced by this operator. If G_c is the (compact, connected) gauge group, a topological classification of principal bundles on S^2 is given by π_1(G_c): if we cover S^2 with two charts covering the northern and southern hemispheres, the transition function on the overlap is topologically equivalent to a map from the equatorial S^1 to G_c. Local operators with non-trivial monopole number are known as monopole operators.

2.1 Monopoles in HT-twisted N = 2
CONTROL OF COOLING WATER STABILITY WITH RESPECT TO CARBONATE DEPOSITS
It is shown that, according to the analysis of deposits on the condensers of 22 power plants, the main component of deposits is calcium carbonate. Magnesite is practically absent from the deposits: Mg ions in circulating water behave like chlorides, being concentrated during evaporation. Note that the existing methods for determining the stability of cooling water (CW) provide only qualitative information about its stability; they rely on the use of total hardness and give incorrect stability values. A method for the quantitative determination of the degree of CW stability is suggested, which is based on measurements of the concentration of Ca ions in the CW and in the feed water and on the determination of the degree of concentration of salts in the CW. Based on these parameters, the CW stability index is calculated in relation to CaCO3 formation. Having current values of the stability index, it is proposed to promptly develop and correct measures to achieve the required level of CW stability, for example, to determine the value of acidification or blowdown of the circulating cooling system which is sufficient for achieving an expected degree of stability.
Introduction
Circulating cooling systems (CCS) are the most efficient element of water-supply technology for thermal and nuclear power plants (Shabalin, 1972; Kucherenko, 1980; Kochmarskii and Pospelov, 1986; Andronov, 2004; Gayevskii and Kochmarskii, 2018) with respect to the rational use of water and the reduction of chemical and thermal pollution of the water basin. However, due to the evaporation of water and the associated concentration of salts, as well as the heterogeneous hydrodynamic regime in water mains and coolers, CCS have specific operational problems (Kochmarskii and Pospelov, 1986; Gayevskii and Kochmarskii, 2018): 1) carbonate, sulfate and silicate deposits on heat exchange surfaces and water mains due to the concentration of the relevant components; 2) fouling of structural elements by fungi, microscopic algae and dreissena mussels as a result of the increased temperature of the cooling water (CW); 3) mechanical deposits of dispersions in hydrodynamically stagnant zones.
Since surface waters, which are mainly calcium-carbonate systems, are most often used for cooling, the main contaminants of heat exchange surfaces are deposits of sparingly soluble salts such as CaCO3. Fig. 1 (after Kochmarskii and Pospelov, 1986) shows the typical composition of deposits: 1 - SiO2, 2 - Fe2O3, 3 - CaCO3, 4 - organic compounds, 5 - MgCO3, 6 - CaSO4, 7 - CaO, 8 - P2O5. Deposits reduce the efficiency of heat exchangers (HE) and other equipment by lowering their heat transfer coefficient, increasing the hydraulic resistance of networks and increasing the energy consumption for their operation. In addition, pitting corrosion occurs under the deposits in the pipe systems (PS), which, for example, at nuclear power plants has led to the need to replace steam generators: because of deposits, their service life was reduced from 30-40 to 10-15 years.
Analysis of the existing situation and problem statement
Deposits, in particular carbonate deposits, occur due to the deviation of the state of the CW from equilibrium, i.e. its instability. Today, the supersaturation index relative to calcium carbonate, SI, and the Langelier index, LI, are used to characterize the state of CW relative to deposits (Alekin, 1970; RD 34.37.307-87, SPO Soyuztekhenergo, 1989). In these indices, K1 and K2 are the thermodynamic dissociation constants of carbonic acid of the first and second degrees; the activities of the corresponding ions are expressed in g-ion/dm3; (Hst) and (H) are the activities of hydrogen ions corresponding to the equilibrium (stable) and the current state of the CW; and LCaCO3 is the solubility product of CaCO3.
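The displayed definitions (1) referenced here are not reproduced above; in standard water-chemistry form, consistent with the symbols just listed, the two indices read approximately as follows (the paper's own expressions may differ in detail):

```latex
% Conventional forms of the calcite supersaturation index and the
% Langelier index; (X) denotes the activity of species X.
\begin{equation}
  SI \;=\; \frac{(\mathrm{Ca}^{2+})\,(\mathrm{CO}_3^{2-})}{L_{\mathrm{CaCO_3}}},
  \qquad
  LI \;=\; \mathrm{pH} - \mathrm{pH}_s \;=\; \log\frac{(\mathrm{H}_{st})}{(\mathrm{H})}.
\end{equation}
```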
It is the indices (1) that are recommended for characterizing the stability of CW in the guiding documents on the operation of CCS and in industry standards, for example (RD 34.37.307-87, SPO Soyuztekhenergo, 1989). According to (1), CW is considered stable when SI ≈ 1 and LI ≈ 0. In order to develop effective measures for removing deposits, we need quantitative characteristics of the degree of stability of CW, which could be determined on the basis of standard measurements performed during the operation of the CCS.
Therefore, the task of determining the quantitative degree of stability of CW is a pressing one in the operation of CCS and related heat exchange equipment, and it is the development of quantitative methods for determining the stability of CW that this work considers.
Control of CW stability
In order to develop effective measures for reducing the growth rate of deposits, it is necessary to have a quantitative characteristic of the stability of CW. This problem was studied in the Physical and Technological Laboratory of Water Systems (Kochmarskii and Kochmarskii, 2009; Pat. UA 114060, 2017; Pat. UA 128018; Kochmarskii et al., 2017), where, for laboratory conditions, a method for the quantitative determination of CW stability was developed based on the evaporation of a CW sample at a temperature of 60-70 °C. The method is characterized by the stability index, where StI is the water stability index and PrI is the deposit (instability) index; CCa(t) and CCa(0) are, respectively, the concentrations of calcium ions at time t and at the initial moment t = 0; kev(t) is the water evaporation coefficient, kev(t) = V(0)/V(t); and V(0), V(t) are the volumes of the water sample initially and at time t. It is obvious that the StI and PrI indices are interrelated: StI + PrI = 1. For circulating CCS water, the operative value of the stability index is determined similarly to (2); only the concentrations entering formula (2) are different (Kochmarskii and Kochmarskii, 2009; Pat. UA 114060, 2017): CCa(t) and CCl(t) are, respectively, the concentrations of calcium ions and chlorides in the CW at time t, while CCa0(t) and CCl0(t) are the same concentrations in the feed water of the CCS at the same time.
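Formulas (2) and (4) themselves are not reproduced above. A minimal computational sketch, assuming that the stability index is the fraction of calcium remaining in solution relative to a conservatively concentrated tracer (the evaporation coefficient in the laboratory test, chloride in the operating CCS), could look as follows; the variable names are illustrative:

```python
def sti_lab(c_ca_t, c_ca_0, v_0, v_t):
    """Laboratory stability index from an evaporation test.

    Assumes StI is the measured calcium concentration divided by the
    concentration expected if calcium behaved conservatively under
    evaporation (k_ev = V(0)/V(t)); PrI = 1 - StI.
    """
    k_ev = v_0 / v_t
    sti = c_ca_t / (c_ca_0 * k_ev)
    return sti, 1.0 - sti

def sti_ccs(c_ca, c_ca0, c_cl, c_cl0):
    """Operational stability index of circulating water.

    Assumes StI is the calcium concentration factor divided by the chloride
    concentration factor, chloride acting as a conservative tracer.
    """
    k_ca = c_ca / c_ca0
    k_cl = c_cl / c_cl0
    return k_ca / k_cl

# Example: calcium concentrated 1.8x while chloride concentrated 2.6x relative
# to the feed water -> StI ~ 0.69, i.e. about 31 % of the calcium has left the
# solution as potential deposit.
print(round(sti_ccs(1.8, 1.0, 2.6, 1.0), 2))
```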
The stability index can also be expressed through the CCS mode of operation. To do this, we use the relationship between the concentration coefficient of salts k and the water consumption for feeding, Gf, and blowdown, Gb (Gayevskii and Kochmarskii, 2018), where Gb(t) and Gf(t) are the blowdown and feed water discharges, respectively. A slight difference between the values of kCl and kMg is within the measurement error. The values of StICa by (4) and of StIJ determined from the total hardness by (7), in accordance with the data of Fig. 2, are shown in Fig. 4. We see that determining the stability of the CW in terms of hardness overestimates the real stability by 25-45 % and disorients the operating personnel.
Note that, comparing the data of curves 1 in Fig. 2 with the definition of StI, it is clear that a high concentration of Ca2+ ions in the CW corresponds to their absence in the solid forms that create deposits. Thus, high saturation or Langelier indices are more likely to indicate the use of deposit inhibitors, which keep the corresponding substances in ionic or molecular form and thereby stabilize the water. Note that the latter is not always realized by operating personnel and researchers. There is another advantage of using the stability index, even with the correct interpretation of the value of the saturation index. In fact, curve 1 in Fig. 4 indicates that the CW stability in the first half of the year was not higher than 73 %, and in general it did not exceed 75 %. This means that in November-February only 65 % of the Ca2+ ions were in solution, and 35 % were released in a solid form capable of forming deposits. Therefore, stabilization of the CW during this period was insufficient, and additional stabilization measures would be required, such as acidification of the CW with strong acids, in particular sulfuric acid, or adjustment of the blowdown according to (6).
The acid concentration can be determined by using (4). To do this, take into account that each gram-ion of acid binds an equivalent amount of calcium ions; the concentration of acid Ck that must be present in the circulating water, and the corresponding concentration in the feed water, are then determined by formulas in which maxStI is the maximum (desired by the station personnel) CW stability index.
Taking, for example, maxStI = 0.9 and the data of curve 1 in Fig. 4, the required concentration of 100 % H2SO4 in the feed water is calculated, which is sufficient to increase the stability of the CW from the values of curve 1 in Fig. 4 to 0.9 (90 %). The result of the calculation is shown in Fig. 5.
Mathematical modeling and optimization
We see that the acid concentration varies, depending on the current CW stability, in the range from 8 to 17 g/m3. Therefore, using the suggested method of determining the stability of CW, one can quickly adjust the dosage of inhibitors and acids for acidification of the CW, avoiding overconsumption of reagents and excessive environmental pollution. Note that the stability of the CW can also be corrected by adjusting the degree of concentration of salts in the CW, i.e. the amount of blowdown. In this case, under the condition that CCa(t) and kCa(t) = CCa(t)/CCa0(t) are invariant, we obtain from (6) and the condition Gev = const the expression for calculating the blowdown that increases the degree of stability from StI(t) to maxStI, where Gbm and Gb are the blowdown water discharges corresponding to the mode of operation of the CCS with maxStI and with the current stability, respectively. Using the data of Fig. 2, one can obtain the dependence of the blowdown on the number of the month, similar to Fig. 5. For a specific case, maxStI = 0.9, StI = 0.7, kCa = 1.8, we obtain Gbm/Gb = 1.57, i.e. the blowdown must be increased by 57 %.
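Expression (6) is not reproduced above, but its use can be sketched. Assuming chloride behaves as a conservative tracer, so that kCl = Gf/Gb with Gf = Gev + Gb, and that StI = kCa/kCl, the required increase of the blowdown follows from keeping Gev and kCa fixed; the sketch below reproduces the worked example Gbm/Gb = 1.57:

```python
def blowdown_ratio(sti_current, sti_target, k_ca):
    """Factor by which the blowdown flow Gb must be increased to raise the CW
    stability index from sti_current to sti_target.

    Assumes StI = k_Ca / k_Cl, k_Cl = Gf / Gb, Gf = Gev + Gb, and that the
    evaporation loss Gev and the calcium concentration factor k_Ca stay fixed.
    """
    k_cl_now = k_ca / sti_current     # current chloride concentration factor
    k_cl_new = k_ca / sti_target      # chloride factor needed for the target StI
    # Gev/Gb = k_Cl - 1, hence Gb_target / Gb_current = (k_cl_now - 1)/(k_cl_new - 1)
    return (k_cl_now - 1.0) / (k_cl_new - 1.0)

# Worked example from the text: maxStI = 0.9, StI = 0.7, k_Ca = 1.8
print(round(blowdown_ratio(0.7, 0.9, 1.8), 2))   # -> 1.57, i.e. +57 % blowdown
```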
Conclusions

3. The peculiarity of the suggested method of determining the stability of CW against carbonate deposits is its simplicity, efficiency and the ability to create a database to automate the process of stabilization of the CW and minimize deposits in the CCS.
4. Data on the degree of CW stability with respect to calcium carbonate allow us to quickly develop and apply measures to minimize deposits, for example by adjusting the concentration of inhibitors, by acidification of the CW or by regulating the blowdown of the CCS.
5. The current methods of assessing the stability of water, which are based on the concept of total water hardness and are prescribed by the Regulations, are not correct.
Low insulin-like growth factor 1 is associated with low high-density lipoprotein cholesterol and metabolic syndrome in Chinese nondiabetic obese children and adolescents: a cross-sectional study
Low serum high-density lipoprotein cholesterol (HDL-C) is an independent risk factor for developing cardiovascular disease. Insulin-like growth factor 1 (IGF-1) levels have been proven to be positively associated with HDL-C, but few studies were based on datasets of children or adolescents. The aim of this study is to investigate the relationship among IGF-1, HDL-C and the metabolic syndrome in Chinese nondiabetic obese children and adolescents. As a cross-sectional study, this study includes 120 obese Chinese children and adolescents and 120 healthy ones. The obese subjects were divided into two groups using 1.03 mmol/L as a threshold value for HDL-C. Clinical and laboratory examinations were assessed for all participants. Obese subjects had significantly lower IGF-1 SDS and higher Height SDS than those in the control group. Among 120 obese children and adolescents, 22 (18.3 %) subjects had an HDL-C level <1.03 mmol/L. IGF-1 SDS was significantly lower (P = 0.001) in obese subjects with low HDL-C. According to the results of multivariate logistic regression analysis, IGF-1 SDS is significantly associated with low HDL-C (OR 0.518, 95 % CI 0.292-0.916; P = 0.024), after being adjusted for age, gender, pubertal status, BMI SDS, SBP, DBP, HOMA-IR, total cholesterol, low density lipoprotein-cholesterol, triglycerides, ALT and uric acid. In addition, IGF-1 SDS is significantly correlated with the level of serum HDL-C in the study population (r = 0.19, P = 0.003). Based on logistic regression analysis with adjustment for age, gender and pubertal status, increased IGF-1 SDS was associated with a decreased probability of metabolic syndrome (OR 0.555, 95 % CI 0.385-0.801; P = 0.002) and hypertriglyceridemia (OR 0.582, 95 % CI 0.395-0.856; P = 0.006), but showed no significant correlation with hypertension. Obese children had lower IGF-1 SDS and taller stature compared with the control group. Low levels of IGF-1 SDS were associated with low levels of HDL-C in Chinese nondiabetic obese children and adolescents, independent of insulin resistance as well as other traditional cardiovascular disease risk markers.
Background
The prevalence and severity of childhood obesity are increasing rapidly, and they have reached epidemic levels globally. In developed countries, the prevalence of overweight and obesity has been reported to be 23.8 % among boys and 22.6 % among girls in 2013 [1]. A previous survey conducted by Chinese scholars showed that the prevalence of obesity was 49.1 % for boys and 30.8 % for girls in the center of Shanghai city [2]. The ever-growing number of obese children will cause many serious problems, since obesity is recognized as a central feature of the metabolic syndrome and is always associated with insulin resistance, hypertension, dyslipidaemia, hyperglycemia and cardiovascular disease [3]. Low serum high-density lipoprotein cholesterol (HDL-C), which is identified as an independent risk factor for developing cardiovascular disease, is also linked with obesity.
As an important mediator of growth hormone (GH) action, insulin-like growth factor 1 (IGF-1) is primarily synthesized in the liver and plays a key role in normal growth and development. Nowadays, increasing evidence suggests that a lower IGF-1 level is associated with obesity [4], insulin resistance [5], metabolic syndrome [6,7], impaired glucose tolerance [8], nonalcoholic fatty liver disease (NAFLD) [9] and cardiovascular disease [10]. Similarly, HDL-C and IGF-1 share many common features: both are secreted partially from the liver, and HDL-C is also linked to the same cluster of cardio-metabolic disorders. Moreover, a positive correlation between IGF-1 and HDL-C has been proposed in some studies [11-14]. However, previous studies on the relationship between HDL-C and IGF-1 remain controversial, and they did not focus on samples of children and adolescents, especially obese ones. Therefore, in the present study, we aimed to investigate the association between IGF-1 and HDL-C and the metabolic syndrome in nondiabetic obese children and adolescents.
Subjects
This retrospective cross-sectional study was carried out at the Department of Pediatrics of Shandong Provincial Hospital affiliated to Shandong University. We investigated a group of obese subjects and a group of healthy controls. A total of 120 obese children and adolescents (aged 10 to 16 years) was recruited from July 2011 to November 2015. Obesity was diagnosed in children whose body mass index (BMI) was above the 95th percentile for age and sex, based on the national reference data [15]. The exclusion criteria for our study were: 1) children who had ever had renal disease, liver failure, cancer, or other severe systemic diseases; 2) children with hypothalamic diseases, pituitary diseases, thyroid dysfunction, diabetes mellitus, chromosome abnormalities or any kind of syndrome; 3) obese children who had been smokers, who had a history of chronic alcohol intake, or who used drugs that may influence lipid metabolism, blood pressure, liver function, insulin action, glucose or weight. The control group consisted of 120 non-obese healthy children and adolescents. The obesity group and the control group were matched on age, pubertal status and gender. The study was approved by the Ethics Committee of the Shandong Provincial Hospital Affiliated to Shandong University. Written informed consent was signed by all participants' parents.
Anthropometric measurements
Anthropometric measurements, including weight, height, systolic blood pressure (SBP), diastolic blood pressure (DBP), and pubertal staging, were assessed for each participant. Body weight was measured to the nearest 0.1 kg with all subjects wearing light clothing and no shoes. The degree of obesity was calculated as body mass index (BMI, kg/m2). To minimize the confounding effects of age and sex, we transformed BMI into BMI standard deviation scores (BMI SDS) based on normative values for Chinese children [15]. Height was measured in the morning to the nearest 0.1 cm and was expressed as height standard deviation scores (Height SDS) based on normative values for Chinese children [16]. An average of three blood pressure measurements was taken while the subjects were seated after 5 min of rest. The stage of puberty was assessed according to Tanner criteria [17]. Boys with pubic hair and gonadal stage 1, and girls with pubic hair and breast stage 1, were considered prepubertal [18,19].
Laboratory measurements
Fasting blood samples were obtained for all participants after an overnight fast to measure serum IGF-1 level and other metabolic parameters.
Serum concentration of IGF-1 was estimated using a chemiluminescence assay (IMMULITE 2000, Siemens Healthcare Diagnostics, USA). The intra-assay and inter-assay coefficients of variation declared by the manufacturer were 2.5-7.6 % for the IGF-1 measurement. IGF-1 was represented as IGF-1 standard deviation scores (IGF-1 SDS), based on the age- and sex-specific normal ranges of the population reported in previous research [20]. Total cholesterol (TC), HDL-C, low density lipoprotein-cholesterol (LDL-C), triglycerides (TG), fasting plasma glucose (FPG), alanine aminotransferase (ALT), and uric acid (UA) were determined using an Auto Biochemical Analyzer (AU5400, Beckman Coulter, Tokyo, Japan). Fasting insulin was assayed using a chemiluminescent immunometric assay (Cobas E170, Roche Diagnostics, Mannheim, Germany). The intra-assay and inter-assay coefficients of variation were <7.0 % in these assays. Estimates of insulin resistance were calculated using the homeostasis model assessment of insulin resistance: HOMA-IR = (insulin (uIU/ml) × glucose (mmol/l))/22.5 [21]. An oral glucose tolerance test (OGTT) was performed with 1.75 g of glucose per kg of body weight (to a maximum of 75 g) in subjects with fasting glucose ≥ 5.6 mmol/L in order to rule out diabetes mellitus.
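As a small illustration, the HOMA-IR formula above translates directly into code; the example values below are hypothetical:

```python
def homa_ir(fasting_insulin_uiu_ml: float, fasting_glucose_mmol_l: float) -> float:
    """Homeostasis model assessment of insulin resistance (HOMA-IR)."""
    return fasting_insulin_uiu_ml * fasting_glucose_mmol_l / 22.5

# Hypothetical example: fasting insulin 15 uIU/ml, fasting glucose 5.0 mmol/l
print(round(homa_ir(15.0, 5.0), 2))   # -> 3.33
```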
Definition
The metabolic syndrome was defined as the presence of any three or more of the following five constituent risks, according to the criteria of the International Diabetes Federation [22]: 1) abdominal obesity; 2) low HDL-C; 3) hypertriglyceridemia; 4) hypertension; 5) impaired fasting glucose. Low HDL-C was defined as an HDL-C level <1.03 mmol/L. Hypertriglyceridemia was defined as triglycerides ≥ 1.7 mmol/L. Hypertension was defined as SBP ≥ 130 mmHg or DBP ≥ 85 mmHg. Impaired fasting glucose was defined as fasting glucose ≥ 5.6 mmol/L.
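A minimal sketch of this classification rule, using the thresholds listed above; abdominal obesity is passed as a flag because no waist-circumference cut-off is specified in the text:

```python
def has_metabolic_syndrome(abdominal_obesity: bool, hdl_mmol_l: float,
                           tg_mmol_l: float, sbp_mmhg: float,
                           dbp_mmhg: float, fpg_mmol_l: float) -> bool:
    """True if three or more of the five constituent risks are present."""
    criteria = [
        abdominal_obesity,                    # 1) abdominal obesity (flag)
        hdl_mmol_l < 1.03,                    # 2) low HDL-C
        tg_mmol_l >= 1.7,                     # 3) hypertriglyceridemia
        sbp_mmhg >= 130 or dbp_mmhg >= 85,    # 4) hypertension
        fpg_mmol_l >= 5.6,                    # 5) impaired fasting glucose
    ]
    return sum(criteria) >= 3
```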
Statistical analysis
Normally distributed variables were expressed as mean ± standard deviation (SD). Skewed variables were natural-log transformed and expressed as mean ± SD; skewed variables that could not be transformed to a normal distribution were expressed as median (interquartile range). Groups were compared using Student's t tests or the Mann-Whitney U test. Categorical variables were compared by the chi-square test. The relationships among serum IGF-1 SDS and clinical and metabolic variables were evaluated with Pearson's or Spearman's correlation analysis. A multivariable logistic regression analysis was used to determine the association between IGF-1 SDS and low HDL-C (<1.03 mmol/L) after adjustment for anthropometric and metabolic variables. In the multiple logistic models, only HOMA-IR (representing fasting glucose and insulin) was used, to avoid collinearity. Logistic regression analysis was also applied to demonstrate the association among the metabolic syndrome, hypertriglyceridemia, hypertension and IGF-1 SDS. The presence of the metabolic syndrome, hypertriglyceridemia or hypertension was used as a dependent variable and IGF-1 SDS as an independent variable, after adjustment for age, gender and pubertal status. A P value <0.05 was considered statistically significant. Statistical analyses were all carried out using SPSS version 17.0 (SPSS Inc., Chicago, USA).
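The analyses were run in SPSS; for illustration only, an equivalent adjusted logistic regression could be sketched as follows in Python, with purely hypothetical file and column names standing in for the study variables (sex and pubertal status assumed numerically coded):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical dataset: one row per obese participant; column names are
# illustrative and are not the authors' actual variable names.
df = pd.read_csv("obese_cohort.csv")
df["low_hdl"] = (df["hdl_mmol_l"] < 1.03).astype(int)

covariates = ["igf1_sds", "age", "sex", "pubertal", "bmi_sds", "sbp", "dbp",
              "homa_ir", "total_chol", "ldl_c", "tg", "alt", "uric_acid"]
X = sm.add_constant(df[covariates])
fit = sm.Logit(df["low_hdl"], X).fit()

# Odds ratios with 95 % confidence intervals (exponentiated coefficients)
or_table = pd.DataFrame({"OR": np.exp(fit.params),
                         "CI_low": np.exp(fit.conf_int()[0]),
                         "CI_high": np.exp(fit.conf_int()[1])})
print(or_table.loc["igf1_sds"])
```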
Results
The characteristics of the study population are shown in Table 1. Gender, age, pubertal status and fasting glucose were similar in the obesity group and the control group. In the obese group, significant percentages of clinical and metabolic alterations were observed. Notably, the obese group had significantly lower IGF-1 SDS (P < 0.001) and HDL-C (P < 0.001) than the control group. In contrast, the obese group showed significantly higher levels of Height SDS, SBP, DBP, BMI SDS and HOMA-IR than the control group. The levels of total cholesterol, low density lipoprotein-cholesterol, triglycerides, uric acid and ALT were also significantly higher in the obese group than in the control group.
A total of 120 obese participants with a mean age of 12.09 ± 1.35 years was included in the study. Among the 120 obese participants, 101 (84.2 %) were male. The majority of children were prepubertal (81, 67.5 %). Low levels of HDL-C were observed in 22 of the 120 obese participants (18.3 %). Twenty-six subjects had triglycerides ≥ 1.7 mmol/L (21.7 %, 26/120), while hypertension was observed in 39.2 % (47/120) of the assessed individuals and the metabolic syndrome in 28.3 % (34/120) of them. In addition, 18 of the 120 obese participants (15 %) had impaired fasting glucose after type 1 and type 2 diabetes mellitus were excluded.
Obese children were further divided into two groups according to HDL-C level. The characteristics of the two groups are shown in Table 2. IGF-1 SDS was significantly lower (P = 0.001) in obese subjects in the low HDL-C group than in subjects with HDL-C ≥1.03 mmol/L, whereas triglycerides were higher in obese children with HDL-C < 1.03 mmol/L than in obese children with HDL-C ≥1.03 mmol/L. No difference existed in age, gender, pubertal status, Height SDS, SBP, DBP, BMI SDS, HOMA-IR, fasting glucose, total cholesterol, low density lipoprotein-cholesterol, uric acid or ALT between the two groups. In the final multiple logistic regression model, after adjusting for age, gender, pubertal status, BMI SDS, SBP, DBP, HOMA-IR, total cholesterol, low density lipoprotein-cholesterol, triglycerides, ALT and uric acid, increased IGF-1 SDS was associated with a decreased probability of low HDL-C (OR 0.518, 95 % CI 0.292-0.916; P = 0.024).
We performed a bivariate correlation analysis to examine the relationship between IGF-1 SDS and cardiometabolic variables in the study population (Table 3). Logistic regression analysis was also applied to study the association among IGF-1 SDS and the metabolic syndrome, hypertriglyceridemia and hypertension, adjusted for gender and pubertal status. Increased IGF-1 SDS was associated with a decreased probability of metabolic syndrome (OR 0.555, 95 % CI 0.385-0.801; P = 0.002) and hypertriglyceridemia (OR 0.582, 95 % CI 0.395-0.856; P = 0.006), but showed no significant correlation with hypertension (OR 0.765, 95 % CI 0.567-1.031; P = 0.079).
Discussion
In this study, we found that low IGF-1 SDS is positively correlated with low HDL-C levels in nondiabetic obese children and adolescents. More importantly, the positive relationship remains significant after controlling for several confounders.
Obesity represents a major worldwide health problem, and it is always associated with the incidence of multiple comorbidities. In the present study, in accordance with previous reports [3], we found that obese children had significantly lower HDL-C, higher levels of SBP, DBP, BMI SDS, HOMA-IR, uric acid and ALT, and an adverse lipid profile compared with the control group.
Apart from obesity-related comorbidities, obesity is also associated with endocrine perturbations. The influence of obesity on the GH-IGF-1 axis and growth has been recognized. There is ample evidence that obesity is associated with reduced stimulated GH release [23,24] as well as reduced endogenous GH secretion [25]. This GH deficiency associated with obesity is relative, and GH secretion is fully restored with weight loss [24]. However, unlike the unequivocal finding of significantly lower GH in patients with obesity, the relationship between obesity and IGF-1 remains controversial: reduced [26,27], normal [28] or high [29] IGF-1 in obese subjects has been reported previously. Accordingly, we further assessed the relationship between IGF-1 SDS and obesity, and our study showed that the obese subjects had lower IGF-1 SDS compared with the control group. The result is consistent with those of a previous epidemiological study based on 3328 subjects (aged 19-72 years), which reported a similar negative correlation between IGF-1 and obesity-related anthropometric markers in both men and women [26].
Another study, based on a small sample, also showed that obese adolescents had lower total IGF-1 values compared with lean adolescent subjects [27]. Plausible reasons for the differences from some previous studies are that: (1) we used the IGF-1 SDS as a proxy of IGF-1; (2) heterogeneity among study samples may produce different results. Interestingly, despite low GH secretion and the abnormalities in the peripheral GH-IGF system in obese children, a normal or tall stature is usually observed [30]. Several studies have shown that obese children have higher height velocity and taller stature than non-obese children during the prepubertal years, while similar final heights are usually observed between obese and non-obese children [31,32]. Our study is mainly based on a prepubertal population, and its results showed that the obese group had higher Height SDS compared with the control group, which is also consistent with previous studies. The underlying mechanisms of this paradoxical combination of an abnormal GH-IGF axis and tall stature remain unknown. Several mechanisms, such as increased leptin [33] and insulin [30], an increased amount of free IGF-1 [27], and upregulation of GH receptors [34], have been suggested.
In this study, we provided evidence that lower IGF-1 is positively associated with low HDL-C (HDL-C < 1.03 mmol/L) in Chinese nondiabetic obese children and adolescents. First, patients with lower HDL-C had lower IGF-1 SDS. Second, IGF-1 SDS was significantly correlated with the serum HDL-C level. Third, logistic regression analysis further showed that lower IGF-1 SDS significantly contributed to the risk of low HDL-C. Decreased IGF-1 and HDL-C levels have been reported to be associated with insulin resistance [5,35-37]. Our results showed that decreased levels of IGF-1 SDS in obese children and adolescents were associated with low HDL-C independent of insulin resistance, as well as of other traditional cardiovascular disease risk markers. A causal relationship between HDL-C and IGF-1 levels was not established in this cross-sectional series. However, we speculate that IGF-1 might exert some direct or indirect effects on HDL-C metabolism.
The exact mechanisms of the relationship between IGF-1 and HDL-C in obesity are not fully understood at present. Possible explanations include the following. 1) IGF-1 may influence HDL-C at least partially by improving insulin sensitivity [38]. IGF-1 shares structural homology and downstream pathways with insulin, and low plasma IGF-1 is associated with reduced insulin sensitivity and increased insulin resistance, as has been shown in clinical studies [35,39]. Insulin resistance is considered to affect HDL-C metabolism and to result in lower HDL-C [40,41]. 2) Another possible explanation for the association between IGF-1 and HDL-C is driven by GH, since IGF-1 serves as a proxy for pulsatile GH secretion. Basal and peak stimulated GH levels were positively associated with HDL-C in several studies [42-44], and GH treatment has in some studies been shown to be correlated with a higher level of HDL-C [45,46]. This supports the notion that IGF-1 is involved in modulating the serum HDL concentration. 3) IGF-1 inhibits expression of the hepatic scavenger receptor class B type I (SR-BI) on the surface of HepG2 liver cells, leading to a reduction of HDL reverse transport and an increased circulating HDL-C concentration [47].
Our study highlights the importance of IGF-1 in regulating HDL-C. As a component of the metabolic syndrome, low HDL-C may partially account for the association between IGF-1 and the risk of metabolic syndrome. Several studies have suggested that IGF-1 has robust negative relationships with the metabolic syndrome and its components [6,12,48-51]. Saydah et al. [48] and Parekh et al. [49] noted that IGF-1 concentrations were negatively correlated with the metabolic syndrome as well as its individual components among NHANES III participants. In another study, among patients with impaired glucose tolerance and subjects with and without diabetes, low plasma IGF-1 concentrations were significantly associated with the presence of the metabolic syndrome and were inversely associated with a number of individual metabolic syndrome components [12]. However, some other clinical reports showed inconsistent results regarding the relationship of IGF-1 with the metabolic syndrome [52] and its components [53]. A possible reason for this discrepancy might be the different characteristics of the study groups. In Chinese nondiabetic obese children and adolescents, we observed that the association between low IGF-1 SDS and the metabolic syndrome was independent of age, gender and pubertal status. We further demonstrated that low IGF-1 SDS is negatively correlated with hypertriglyceridemia independent of age, gender and pubertal status, but is not significantly associated with hypertension. This is the first study in obese children concerning IGF-1 and the metabolic syndrome and its components.
The association of low IGF-1 with a low level of HDL-C and with the metabolic syndrome observed in this study suggests that a low IGF-1 level contributes to the increased risk of cardiovascular disease in obese children and adolescents, implying that IGF-1 may be an important biomarker for clinicians to identify subjects at risk of early cardiovascular disease among obese children. Further research in prospective studies is needed to better understand these relationships.
The present study has several strengths and fills some research gaps. First, to the best of our knowledge, our study is one of the first to focus on the relationship between circulating IGF-1 levels and serum HDL-C in a dataset of obese children, independent of several confounding factors. Second, several confounding variables have been reported to affect IGF-1 levels; in the current study, in order to limit the confounding effects of age and sex, we used the IGF-1 SDS as a proxy of IGF-1. Standardized IGF-1 levels have not been applied in previous studies on the relationship between IGF-1 and HDL-C. Third, our study included a relatively large sample with detailed clinical characterization.
However, our study has some potential limitations. The first limitation is that the cross-sectional design of this study does not allow us to determine causality. Secondly, we did not analyze GH, IGF binding proteins (IGFBPs) or free IGF-1. Thirdly, serum HDL-C and IGF-1 were measured without replication.
Conclusion
In conclusion, our results suggest that obese children and adolescents had lower IGF-1 SDS and taller stature compared with the control group, and that there is an independent association among lower IGF-1, low HDL-C and the metabolic syndrome. Thus, the inclusion of measurements of IGF-1 and HDL-C levels might be warranted in the assessment protocols for obese or overweight children and adolescents, in order to verify possible complications of early cardiovascular alterations. Also, developing new prevention and treatment strategies for increasing serum IGF-1 levels might help in controlling low HDL-C and the metabolic syndrome.
A Review of Teacher Self-Efficacy, Pedagogical Content Knowledge (PCK) and Out-of-Field Teaching: Focussing on Nigerian Teachers
Teachers are crucial to the success of any educational system and the success of any nation in general. In fact, it is not an overstatement to say the teacher is the most important educational resource in school. The world is not static but dynamic; therefore, systems in a dynamic world are changing every day. Based on this conjecture, this paper reviewed three educational constructs related to teacher development in a changing world: teacher self-efficacy, pedagogical content knowledge (PCK) and out-of-field teaching. The paper observed that these constructs are paramount to the success of any teacher because studies indicate their influence on students' academic performance. The conclusion of the paper was that these constructs are yet to be taken seriously by the stakeholders in the Nigerian educational system. The paper suggested some recommendations for improving teachers' self-efficacy and PCK and for reducing out-of-field teaching in Nigeria.
Introduction
The most important educational resource is the teacher [10]. The authors of [1] and [40] were of the opinion that a teacher can significantly influence students' achievement. [32] said teachers have an important role to play in adequately preparing students to play their roles in society and to achieve the national set objectives. The quality of any educational system depends to a great extent on the quality of teachers in terms of qualifications, experience, competency and the level of dedication to their primary functions [33].
The success of any teaching and learning process that influences students' academic performance depends on how effective and efficient the teachers are [42]. Teachers are the facilitators who are to impart to students the concepts expected to be learnt [34]. Teachers are the most important factor in the effectiveness of schools and the quality of a child's education [2]. This paper will review these constructs in detail together with their possible relationship with students' academic performance.
Teacher Self-Efficacy
Teacher self-efficacy is the set of beliefs a teacher has about his or her perceived capability to undertake certain teaching tasks. It is the beliefs a teacher has about his or her ability to accomplish a particular teaching task [29]. Self-efficacy is the set of beliefs a person holds regarding his or her capabilities to produce desired outcomes and influence events that affect his or her life [4].
Teachers' self-efficacy is the set of beliefs a teacher holds regarding his or her abilities and competencies to teach and to influence student behaviour and achievement regardless of outside influences or obstacles [47]. Many of the teachers we have in science classes today are teachers with low self-efficacy, and that is why many topics in science are left untaught by the time students are about to write the final examination.
[37] said teachers with a high level of self-efficacy have been shown to be more resilient in their teaching and more likely to persist in difficult times to help all students reach their academic potential. The authors believed that a teacher with strong beliefs in his or her efficacy would be resilient, able to solve problems and, most importantly, able to learn from experience. [29] believed that self-efficacy affects teachers' level of effort and persistence when learning difficult tasks. Teachers who do not trust their efficacy will try to avoid dealing with academic problems and instead turn their effort inward to relieve their emotional distress [5]. Teachers with high efficacy persist with low-achieving students and use better teaching strategies that allow such students to learn more efficiently [45].
The lower level of achievement often recorded in some science subjects today could be traced to low teacher self-efficacy, as argued by [48]. The author said that teacher self-efficacy has proved to be powerfully related to meaningful educational outcomes such as students' achievement. [15] emphatically said low teacher self-efficacy leads to low academic achievement.
Every teacher must believe that he or she has the capability to teach the subject, or else he or she should not be a teacher. [21] observed that teachers' beliefs about themselves and their capabilities can be influential in the quality of their performance. It is not an overstatement to say that one cannot separate the poor academic performance often recorded among physics students in Nigerian secondary schools from teachers' low self-efficacy. Teacher self-efficacy has consistently been associated with students' academic achievement [23]. Teachers' self-efficacy differs significantly according to their qualifications [3]. Teachers' self-efficacy is central to effective teaching [47].
There is no way a teacher with low self-efficacy can be effective in the classroom, and that is why looking at the relationship between teacher self-efficacy and teacher effectiveness is critical. The question is: is there any relationship between teacher self-efficacy and teacher effectiveness?
Relationship Between Teacher Self-Efficacy and Effectiveness
Studies have shown that teacher efficacy is an important source of variation in teacher effectiveness that is consistently related to teacher behaviours and student outcomes [11]. The assumption by some people that a teacher who has low self-efficacy cannot be effective is supported by [39]. The author argued that high-efficacy teachers are more apt to produce better student outcomes because they are more persistent in helping students who have problems. Studies revealed that teachers who have a high level of self-efficacy regarding their ability to teach can produce superior student achievement across a range of academic disciplines [11]. [5] believed that teachers who have high self-efficacy will spend more time on student learning, support students in their goals and reinforce intrinsic motivation. [8] posited that there is a positive correlation between self-efficacy and teacher effectiveness, and teacher self-efficacy accounts for individual differences in teacher effectiveness [11]. Many teachers who have low self-efficacy depend on reading from textbooks when teaching students, but no effective teacher will read a textbook to his or her students while teaching. In support of this point, [13] said highly efficacious teachers are found to use inquiry-based and student-centred teaching strategies rather than teacher-directed strategies such as the lecture method and reading from the text. When one comes across a teacher who teaches from the textbook in class, it suggests that the teacher is not sure of his or her ability and, therefore, may score very low on an efficacy scale.
[51] opined that teacher self-efficacy is a reliable predictor of the improvement of the personality characteristics of teachers. According to [11], teacher self-efficacy is a strong self-regulatory characteristic that enables teachers to use their potentials to enhance students learning. Self-efficacy is informed by the teachers' understanding of what effective teaching is [38]. Teachers' self-efficacy is an important motivational construct that shapes teacher effectiveness in the classroom [37].
Having considered teacher self-efficacy, it is imperative to consider what a teacher is teaching and how he or she teaches it. The subject content, and how the teacher transfers this content knowledge to students, is crucial in education.
Pedagogical Content Knowledge (PCK)
According to [19], PCK was first introduced by Shulman as the dimension of subject-matter knowledge for teaching. [46] considered PCK a special amalgamation of content and pedagogy that is especially the province of teachers, their own special form of professional understanding. PCK is a characteristic of teacher knowledge of how to teach the subject matter [28]. In a related vein, [36] viewed PCK as professional knowledge for teachers. PCK embodies a unique form of teacher professional knowledge [28]. PCK is specific to professional teachers because it guides the teachers' actions when dealing with subject matter in the classroom [50]. It is a particular body of teacher knowledge required to teach successfully within complex and varied contexts [35].
PCK is the knowledge that teachers develop over time, and through experience, about how to teach particular content in particular ways so as to enhance student understanding [27]. PCK is not a single entity that is the same for all teachers of a given subject area; rather, it is a particular expertise with individual idiosyncrasies and significant differences that are influenced by (at least) the teaching context, content, and experience [27].
PCK stands out as different and distinct from knowledge of pedagogy, or knowledge of content alone. Pedagogical content knowledge is a form of practical knowledge that is used by teachers to guide their actions in highly contextualized classroom settings [27].
PCK, according to [31], combines knowledge of a particular discipline with the teaching of that discipline. The author further stressed the need for the teacher to be able to blend content knowledge with pedagogical knowledge. [42] underscored the importance of PCK to teaching and learning as a construct that helps our thinking about what teachers continue to learn as they study their teaching practice.
From a different perspective, [20] called PCK an amalgamation or transformation, but not an integration, of subject matter, pedagogical and context knowledge; the context knowledge here refers to the school and the students, according to the authors. According to [49], PCK is a construct that is surrounded by knowledge of the subject matter, general pedagogical knowledge, and contextual knowledge. PCK is considered by [12] to be a knowledge of teaching that is domain specific; it is making what teachers know about their subject matter known to students. [35] identified five components of PCK: knowledge of students' thinking about science, the science curriculum, science-specific instructional strategies, assessment of students' science learning, and orientations towards teaching science. [6] viewed these components as imperative because they work together to help teachers represent specific subject matter in ways that make it comprehensible to students.
[9] viewed PCK as the knowledge base required for teaching, comprising subject matter knowledge and pedagogical knowledge. It consists of knowledge of the curriculum, knowledge of the learning difficulties of students, and knowledge of instructional strategies and activities [9].
PCK is important in teacher education, as [49] described PCK as a knowledge base for teaching. The author further said that PCK is not just knowledge of the subject matter but includes an understanding of learning difficulties and student conceptions. No matter how brilliant a teacher may be, the moment he or she cannot interpret subject-matter knowledge so as to facilitate student learning, he or she has not achieved anything. Therefore, PCK is referred to as teachers' interpretation and transformation of subject-matter knowledge to facilitate student learning [49]. PCK is a heuristic for teacher knowledge that can be useful in understanding the complexities of what teachers know about teaching and how it changes over broad spans of time [42].
Assessment is vital to teaching and learning; based on this fact, [19] observed that PCK is an important resource for teachers engaging in formative assessment. However, [9] found that the teachers under training lacked the necessary pedagogical knowledge to teach relevant science topics to students.
PCK is not only important in the classroom but also helps teachers to do better professionally. Teachers' content knowledge or pedagogical knowledge alone does not contribute to their professional development [31] unless the two are merged. From these submissions, it is very clear that PCK is essential for all teachers. Students' success depends on what teachers know about a subject and how well they can impart what they know to the students.
Experience and research show that school administrators transfer teachers from one class to another because they have good PCK. However, having the right PCK in mathematics does not mean a teacher should be made a physics teacher when he or she was not trained to teach physics. The next construct we shall discuss is out-of-field teaching.
Out-of-Field Teacher
These are teachers assigned to teach subjects for which they do not have adequate training and qualifications [25]. [16] defines out-of-field teachers as teachers teaching outside their field of qualification; this field might be a specific subject or a year level. There is a problem of out-of-field teaching in Nigeria, especially in physics, because of the lack of qualified physics teachers.
Holders of the NCE are trained to teach in primary or, at most, junior secondary school, yet today most of the teachers teaching physics in rural senior secondary schools are NCE holders. Out-of-field teaching is a problem of poorly prepared teachers [26]. Interestingly, out-of-field teaching is not a Nigerian problem alone; it happens even in developed countries such as the U.S. and Australia, and also in South Africa. Hobbs, Silva and Loveys, cited in [18], noted that 16% and 30% of science teachers in Australia and South Africa, respectively, were unsuitably qualified. The author added that 31.4% of physics teachers in the United Kingdom were not suitably qualified. In any given year, out-of-field teaching may occur in more than half of all secondary schools in the U.S. [24].
In Nigeria, it is common practice to see a qualified teacher teaching a subject he or she was never trained for; at that point, such a teacher becomes unqualified. [16] supports this by referring to the concept of out-of-field teaching, whereby qualified teachers become unqualified when they are assigned to teach subjects or year groups for which they lack suitable qualifications. Darby-Hobbs, cited in [18], was of the opinion that out-of-field teachers are still in the process of developing and are less suited to teach the subjects they are not qualified to teach.
Out-of-field teaching has been suggested to be indicative of a teacher's inadequate subject-matter knowledge, and inadequate subject-matter knowledge has been found by some to be a critical factor lowering the standard of quality teaching [14]. Out-of-field teaching is a problem for our educational system, and most of the problems caused by this phenomenon are so great that we may not be able to quantify them. The most significant consequences of out-of-field teaching are probably those not easily quantified [24]. There are many consequences of out-of-field teaching, as highlighted by [24]. Some of these consequences, as pointed out by the author, are: a decrease in preparation time for teaching; a decrease in time for teaching; and a decrease in teacher morale and commitment. The assignment of teachers to teach fields in which they have no training can change the allocation of their preparation time across all of their courses. They may reduce the time meant for other courses in order to prepare for the one(s) for which they have no background. An out-of-field teacher who concentrates his or her efforts on subject content that is new to him or her has less time to focus on understanding students' needs and interests [41]. Out-of-field teachers have low self-esteem; they feel they do not meet the requirements or expectations [17]. Pillay, Goddard and Wills, cited in [22], posited that it is possible for out-of-field teaching to compromise 'teaching competence' and disrupt a teacher's identity, self-efficacy, and well-being. [30] made it clear that out-of-field teaching is a factor that contributes to stress for teachers. Webster and Mark, cited in [30], succinctly pointed out that the problem of out-of-field teaching prevents us from knowing the true extent of the shortage of teachers. [22] observed that a lack of science teachers has led to an increase in the number of teachers teaching outside their subject areas. The author said this has an influence on the quality of educational outcomes and on teacher well-being. Out-of-field teachers are often not confident to take risks in unfamiliar subjects or year levels because they feel exposed in unfamiliar subject territories [18].
These teachers may not have knowledge of the subject matter or the skill to teach the subject because they are not qualified. [22]'s understanding was that out-of-field teachers lack content knowledge and pedagogical skills. [17] contended that out-of-field teachers are insecure because they lack pedagogical knowledge and are not qualified in the subject or year level to which they are assigned.
The negative effect of out-of-field teaching also falls on the teachers themselves, as [17] observed that out-of-field teaching influences teachers' development opportunities. These authors argued further that anything restricting the professional development of teachers also restricts educational development.
Conclusion
Teacher self-efficacy and PCK are so closely linked that a teacher who is not adequate in PCK will surely have low self-efficacy. A teacher who is very sound in subject content and can impart it well to students through proper teaching strategies will have the confidence to teach any concept in the curriculum. Out-of-field teaching is not good for our educational system because it affects both students and teachers' professional development.
Each teacher should teach the subject(s) he or she was trained for and also maintain the appropriate class level. In Nigeria, the situation in which a holder of the Nigerian Certificate in Education (NCE) teaches in senior secondary school is not the best. Engineers are in classrooms teaching mathematics and physics in Nigeria, even though they are not trained to teach in primary and secondary schools; this is one of the reasons the government has failed to realize that there is a shortage of teachers.
The following recommendations are therefore paramount based on this review: there should be reforms in the pre-service teacher education programs of all our teacher training institutions. These reforms should aim at strengthening both the content and the pedagogical knowledge of pre-service teachers.
Teachers in service should always avail themselves of every opportunity to attend seminars, conferences and workshops to develop themselves. School libraries should be equipped with journals for the benefit of developing teachers' knowledge of new ideas in the teaching profession.
The government should organize seminars, conferences and workshops on teacher self-efficacy from time to time. Many teachers do not know what teacher self-efficacy is and therefore may not see the need to attend such seminars, conferences and workshops. However, once they have attended seminars, conferences and workshops on self-efficacy, they will never remain the same in their classes.
Functional Convergence of U-Processes with Size-Dependent Kernels
We consider sequences of U -processes based on symmetric kernels of a fixed order, that possibly depend on the sample size. Our main contribution is the derivation of a set of analytic sufficient conditions, under which the aforementioned U -processes weakly converge to a linear combination of time-changed independent Brownian motions. In view of the underlying symmetric structure, the involved time-changes and weights remarkably depend only on the order of the U -statistic, and have consequently a universal nature. Checking these sufficient conditions requires calculations that have roughly the same complexity as those involved in the computation of fourth moments and cumulants. As such, when applied to the degenerate case, our findings are infinite-dimensional extensions of the central limit theorems (CLTs) proved in de Jong (1990) and Döbler and Peccati (2017). As important tools in our analysis, we exploit the multidimensional central limit theorems established in Döbler and Peccati (2019) together with upper bounds on absolute moments of degenerate U -statistics by Ibragimov and Sharakhmetov (2002), and also prove some novel multiplication formulae for degenerate symmetric U -statistics — allowing for different sample sizes — that are of independent interest. We provide applications to random geometric graphs and to a class of U -statistics of order two, whose Gaussian fluctuations have been recently studied by Robins et al. (2016), in connection with quadratic estimators in non-parametric models. In particular, our application to random graphs yields a class of new functional central limit theorems for subgraph counting statistics, extending previous findings in the literature. Finally, some connections with invariance principles in changepoint analysis are established.
Overview
Consider a sequence {X i : i = 1, 2, ...} of independent and identically distributed (i.i.d.) random variables with values in some space (E, E). The aim of this paper is to prove a class of Gaussian functional central limit theorems (FCLTs) involving general sequences of U -processes with symmetric kernels, that is, of càdlàg processes on the time interval [0, 1], obtained by progressively revealing the argument of a symmetric U -statistic of order p ≥ 1 based on the sample (X 1 , ..., X n ), for n ≥ 1.
The type of weak convergence we deal with is in the large sample limit n → ∞, and holds in the sense of the Skorohod space D[0, 1] of càdlàg mappings on [0, 1], endowed with Skorohod's J 1 topology (see e.g. [Bil99,p. 123]). The specific difficulty tackled in our work -marking a difference with previous contributions (see e.g. [Neu77,Hal79,MT84,NP87,AG93]) -is that we allow the kernels of the considered U -statistics to explicitly depend on the sample size n, and we do not assume a priori any form of Hoeffding degeneracy.
Despite the generality of the above setup, the limit processes displayed in our results always have the form (1.1), where each α k,p ∈ [0, ∞) is a constant depending on the sequence of U -statistics under study, and {Z k,p (t) : t ∈ [0, 1], 1 ≤ k ≤ p} denotes a class of independent centered Gaussian processes obtained as follows: first consider a sequence {B k (t) : t ∈ [0, 1], 1 ≤ k ≤ p} of independent standard Brownian motions on [0, 1], and then time-change and reweight them as described below. Note that, in particular, the processes (Z(t)) t∈[0,1] appearing in (1.1) all have continuous paths. Such a rigid asymptotic structure originates from the fact that we exclusively focus on symmetric U -statistics and i.i.d. samples: these strong assumptions then yield the emergence of the 'universal' time-changes t → t k and time-dependent weights t → t p−k from purely combinatorial considerations. One should compare this situation with the reference [Bas94], where a Gaussian FCLT is proved for sequences of non-symmetric homogeneous sums, displaying as possible weak limits arbitrarily time-changed Brownian motions. We will see in Section 3 that our results contain a (weaker) version of Donsker's Theorem for sums of i.i.d. random variables (see [Bil99, p. 90]).
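For orientation, a plausible explicit form of the limit process just described, consistent with the stated time-changes t → t^k and weights t → t^(p−k) (the exact arrangement of the display in (1.1) may differ), is the following sketch:

```latex
% Sketch: limit process as a linear combination of time-changed, reweighted
% independent Brownian motions B_1, ..., B_p, consistent with the text above.
Z(t) \;=\; \sum_{k=1}^{p} \alpha_{k,p}\, Z_{k,p}(t),
\qquad
Z_{k,p}(t) \;:=\; t^{\,p-k}\, B_k\!\bigl(t^{k}\bigr),
\qquad t \in [0,1].
```

In particular, under this sketch one has E[Z_{k,p}(s) Z_{k,p}(t)] = s^(p−k) t^(p−k) (s ∧ t)^k for s, t ∈ [0, 1], so each Z_{k,p} is a centered continuous Gaussian process.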
The sufficient conditions for weak convergence discussed above are stated in the forthcoming Theorem 3.1 and Theorem 3.4, and are expressed in terms of the contraction kernels canonically associated with a given U -statistic (see Section 2.2 for definitions). We will see in Section 5 that the conditions derived in our paper are a slight strengthening of the sufficient conditions for one-dimensional CLTs derived in [DP19, Section 3] -see Remark 3.6 below for a full discussion of this point. Some of the additional requirements with respect to [DP19] (in particular, Assumption (a) in Theorem 3.1 and Assumption (a') in Theorem 3.4) are necessary and sufficient for the pointwise convergence of the covariance functions of the degenerate U -processes associated with a generic U -statistic via its Hoeffding decompositon: such a technical assumption (that can in principle be relaxed at the cost of more technical statements -see Remark 3.6-(ii)) is unavoidable in the case of degenerate U -statistics, and is automatically verified in the applications developed in Section 4.
As discussed in Section 5 -and similarly to the main findings of [DP19] -when applied to Hoeffding degenerate U -processes (see Section 2), the conditions expressed in Theorems 3.1 and 3.4 are roughly equivalent to the requirement that the joint cumulants of order ≤ 4 associated with the finite-dimensional restrictions of the U -processes under consideration converge to those of an appropriate Gaussian limit, and that such a convergence takes place at a rate of the type O(1/n α ), where α > 0. As such, our findings can be regarded as functional versions of the well-known de Jong CLT for degenerate U -statistics, first proved in [dJ90] and then substantially extended in [DP17] (to which we refer for an overview of the relevant literature). To the best of our knowledge, apart from the reference [Bas94] (that only deals with homogeneous sums), ours is the first functional version of de Jong's CLT proved in the literature.
In the last four decades, numerous FCLTs for U -processes have been derived by several groups of authors; yet, to the best of our knowledge, none of them have a nature that is directly comparable to our findings. Among the large set of contributions in this domain, we refer the reader to the following relevant sample. References [Neu77,Hal79,MT84] contain functional limit theorems for sequences of degenerate U -statistics with a kernel independent of the sample size n: in such a framework, consistently with the known one-dimensional results (see e.g. [DM83]), the limit process lives in a Wiener chaos of order > 1 and is therefore non-Gaussian. The already mentioned paper [Bas94] proves Gaussian FCLTs (in a spirit close to [dJ90]) for sequences of homogeneous sums: in this case, there is no overlap with our work since symmetric homogeneous sums (that are in principle contemplated in our setting) are necessarily multiples of degenerate U -statistics with a kernel not depending on the sample size n, and their asymptotic behaviour is consequently non-Gaussian (by virtue of [Neu77,Hal79]). References [NP87,AG93] are influential general contributions to the theory of U -processes, containing uniform FCLTs for sequences of U -processes indexed by function classes not depending on the sample size, both in the non-degenerate and degenerate case. The recent contribution [CK17] deals with suprema of U -processes indexed by non-degenerate symmetric function classes possibly depending on the sample size and not necessarily verifying a FCLT, and also contains a detailed review of further relevant literature. See also [GM07,Che18], as well as [Bor96,DlPG99] for general references.
As discussed above, our main results are expressed in terms of explicit analytical quantities (e.g. norms of contraction kernels), and they are therefore particularly well-adapted to applications. As a demonstration of this fact, in Section 4 we deduce two new classes of FCLTs, respectively related to subgraph counting in geometric random graphs (retrieving novel functional versions of one-dimensional CLTs from [JJ86, Pen04, BG92, DP19]), and to quadratic U -statistics emerging e.g. in the non-parametric estimation of quadratic functionals of compactly supported densities (see e.g. [BR88,LM00,RLTvdV16]). In Section 3.2, we also illustrate some connections with invariance principles related to changepoint analysis (see e.g. [CH88,CH97,Gom04]).
We eventually mention the challenging problem of deriving explicit rates of convergence for the FCLTs derived in this paper. While some promising partial results seem to be obtainable by adapting the infinite-dimensional 'generator approach' to Stein's method developed in [Bar90,BJ09,Kas17], we prefer to consider this point as a separate issue, and leave it open for subsequent research.
Notation and tightness criteria
From now on, every random object considered in the paper is assumed to be defined on a common probability space (Ω, F, P), with E denoting expectation with respect to P. Given a collection of stochastic processes {X, X n : n ≥ 1} with values in D[0, 1], we write X n =⇒ X to indicate that X n weakly converges to X, meaning that E[ϕ(X n )] → E[ϕ(X)], as n → ∞, for every bounded mapping ϕ : D[0, 1] → R which is continuous with respect to the Skorohod topology. Given two positive sequences {a n , b n }, we write a n ∼ b n whenever a n /b n → 1, as n → ∞. We will also use the notation a n ≲ b n to indicate that there exists an absolute finite constant C such that a n ≤ C b n for every n.
In several places of the present paper, tightness in the space D[0, 1] is established by using the following criterion. The argument in the proof reproduces the strategy adopted in [NN18, Lemma 3.1], and is reported for the sake of completeness.
Lemma 1.1. Let X n = {X n (t) : t ∈ [0, 1]}, n ∈ N, be a sequence of stochastic processes with paths a.s. in D[0, 1]. Suppose that there is a stochastic process X = {X(t) : t ∈ [0, 1]} whose paths are a.s. continuous and such that the finite-dimensional distributions of X n , n ∈ N, converge to those of X, as n → ∞. Then, we have X n =⇒ X, if there are constants C > 0, β > 0 and α > 0 such that, for all n ∈ N sufficiently large and for all 0 ≤ s < t ≤ 1, condition (1.5) holds. In particular, in this case the sequence (X n ) n∈N is tight in D[0, 1].
Proof. We are going to use the following well-known criterion from [Bil99]: let X n = {X n (t) : t ∈ [0, 1]}, n ∈ N, be a sequence of stochastic processes with paths a.s. in D[0, 1] such that there is a stochastic process X = {X(t) : t ∈ [0, 1]} whose paths are a.s. continuous and such that the finite-dimensional distributions of X n , n ∈ N, converge to those of X, as n → ∞. Then, the sequence (X n ) n∈N , converges in distribution with respect to the Skorohod topology to X, if there are finite and strictly positive constants C 1 , α and γ such that, for all n ∈ N sufficiently large and for all 0 ≤ r ≤ s ≤ t ≤ 1, (1.6) Note that (1.6) is in fact a more specialized instance of formula (13.14) in [Bil99]. Now assume (1.5) and observe that where the last inequality follows from an argument used in the proof of [NN18, Lemma 3.1]. Hence, we conclude that (1.6) holds true with γ = β/2 and with C 1 = 3 1+α C.
Acknowledgments
We are grateful to Yannick Baraud, Andrew D. Barbour, Ivan Nourdin and Omar El-Dakkak for useful discussions. The research developed in this paper is supported by the FNR grant FoRGES (R-AGR-3376-10) at Luxembourg University.
Plan
Section 2 contains some general information about U-statistics and U-processes and several useful estimates for contraction operators. The main results of the paper, which give sufficient conditions for functional convergence of U-processes, are presented in Section 3.1; some connections to changepoint analysis are described in Section 3.2, whereas multidimensional extensions are discussed in Section 3.3. Section 4 deals with applications of our main results to subgraph counting in random geometric graphs and U-statistics of order 2 with a dominant diagonal component. Finally, Section 5 contains some further ancillary results, as well as the proofs of the main results.
2 General setup
2.1 Symmetric U -statistics and U -processes
As before, we assume that X 1 , X 2 , . . . is a sequence of i.i.d. random variables taking values in a measurable space (E, E) (that we fix for the rest of this paper), and denote by µ their common distribution. For a fixed p ∈ N, let ψ : E p → R be a symmetric and measurable kernel of order p. By "symmetric" we mean that ψ(x 1 , . . . , x p ) = ψ(x σ(1) , . . . , x σ(p) ) for all x = (x 1 , . . . , x p ) ∈ E p and each σ ∈ S p (the symmetric group acting on {1, . . . , p}). In general, the kernel ψ = ψ n can (and will most of the time) depend on an additional parameter n ∈ N (the size of the sample in the argument of the associated U -statistic), but we will often suppress such a dependence, in order to simplify the notation. We use the symbol µ p to denote the p-th power of µ (which is a measure on (E p , E ⊗p )).
In what follows, we will write X := {X i : i ∈ N} and, for p, ψ, X as above and n ∈ N, we define where D p (n) indicates the subsets of size p of {1, ..., n}. We say that the random variable J (n) p (ψ) is the U -statistic of order p, based on X 1 , . . . , X n and generated by the kernel ψ. For p = 0 and a constant c ∈ R we further let J 0 (c) := c.
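For orientation, the U-statistic just described can be written out as in the following sketch, summing the kernel over all subsets of size p of the first n sample points; the normalisation used in the original display may differ:

```latex
% Sketch: the U-statistic of order p generated by the kernel psi, as described above.
J^{(n)}_{p}(\psi)
  \;:=\; \sum_{J \in \mathcal{D}_p(n)} \psi\bigl((X_j)_{j \in J}\bigr)
  \;=\; \sum_{1 \le i_1 < \cdots < i_p \le n} \psi\bigl(X_{i_1}, \ldots, X_{i_p}\bigr).
```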
Let p ≥ 1, and let ψ ∈ L 1 (µ p ) be symmetric. The kernel ψ is called (completely) degenerate or canonical with respect to µ, if E ψ(x 1 , x 2 , . . . , x p )dµ(x 1 ) = 0 for µ p−1 -a.a. (x 2 , . . . , x p ) ∈ E p−1 , or, equivalently, if E ψ(X 1 , . . . , X p ) X 1 , . . . , X p−1 = 0, P-a.s.. Now assume that ψ ∈ L 1 (µ p ) is a symmetric but not necessarily degenerate kernel. In this case, the random variable J (n) p (ψ) can be written as the sum of its expectation and a linear combination of symmetric U -statistics with degenerate kernels of respective orders 1, . . . , p. More precisely, one has the following Hoeffding decomposition of J (n) p (ψ): and the symmetric functions g l : E l → R are defined by g l (y 1 , . . . , y l ) := E ψ(y 1 , . . . , y l , X 1 , . . . , X p−l ) , in such a way that, for 1 ≤ k ≤ p, ψ k is symmetric and degenerate of order k. In particular, one has g 0 ≡ ψ 0 ≡ E ψ(X 1 , . . . , X p ) and g p = ψ. See e.g. [Ser80,Vit92] Whenever ψ is a symmetric element of L 2 (µ p ), we will also make use of the notation one has that each ϕ (k) is a degenerate kernel and, using the notation V k (t) := J ( nt ) k (ϕ (k) ), one infers the following useful representation of W It is immediately verified that, for each n ∈ N, both W n := {W n (t) : t ∈ [0, 1]} and U n := {U n (t) : t ∈ [0, 1]} are random elements with values in D[0, 1]. As anticipated, the aim of this paper is to deduce verifiable analytical conditions, under which the sequence {W n : n ∈ N} converges in distribution to some continuous Gaussian process Z = {Z(t) : t ∈ [0, 1]} of the form (1.1).
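As a point of reference, a standard way of writing the Hoeffding decomposition and the normalised U-process discussed above is sketched below (cf. Serfling's classical treatment); the paper's own displays, in particular the kernels ϕ^(k) and the exact placement of binomial factors, may distribute the constants differently:

```latex
% Sketch (classical form): Hoeffding decomposition of a symmetric U-statistic
% and a natural normalisation of the associated U-process; the constants may be
% arranged differently in the original displays.
J^{(n)}_p(\psi) \;=\; \sum_{k=0}^{p} \binom{n-k}{p-k}\, J^{(n)}_k(\psi_k),
\qquad
\psi_k(x_1,\ldots,x_k) \;:=\; \sum_{l=0}^{k} (-1)^{k-l}
  \sum_{J \subseteq \{1,\ldots,k\},\, |J| = l} g_l\bigl((x_i)_{i \in J}\bigr),

U_n(t) \;:=\; J^{(\lfloor nt \rfloor)}_{p}(\psi),
\qquad
W_n(t) \;:=\; \frac{U_n(t) - \mathbb{E}\,U_n(t)}{\sigma_n},
\qquad
\sigma_n^2 \;:=\; \operatorname{Var}\bigl(U_n(1)\bigr).
```

For p = 2 this gives, in particular, ψ_1 = g_1 − g_0 and ψ_2(x_1, x_2) = g_2(x_1, x_2) − g_1(x_1) − g_1(x_2) + g_0, which matches the notation used later in the proofs.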
We observe that ψ 0 p ψ = ψ 2 is square-integrable if and only if ψ ∈ L 4 (µ p ). As a consequence, ψ l r ϕ might not be in L 2 (µ p+q−r−l ) even though ψ ∈ L 2 (µ p ) and ϕ ∈ L 2 (µ q ). Moreover, if l = r = p, then ψ p p ψ = ψ 2 L 2 (µ p ) is constant. The next result, taken from [DP19], lists the properties of contraction kernels that are useful for the present work.
(i) For all 0 ≤ l ≤ r ≤ p ∧ q the function ψ l r ϕ given by (2.10) is well-defined, in the sense specified at the beginning of the present subsection.
(ii) For all 0 ≤ l ≤ r ≤ p ∧ q one has that where both sides of the inequality might assume the value +∞.
(iii) For all 0 ≤ l ≤ r ≤ p ∧ q one has that where both sides of the inequality might assume the value +∞.
3 Weak convergence of U -processes with symmetric kernels
Main results
For the rest of this section, we let p ≥ 1 be an integer. Moreover, for positive integers 1 ≤ r, i, k ≤ p and 0 ≤ l ≤ p such that 0 ≤ l ≤ r ≤ i ∧ k, we let Q(i, k, r, l) be the set of quadruples (j, m, a, b) of nonnegative integers such that the following hold: (1) j ≤ i and m ≤ k.
The next statement is the main result of the paper.
Theorem 3.1 (Functional convergence of general U -statistics, I). Let the assumptions and notation of Section 2 prevail, define the sequence according to (2.7), and assume that the following three conditions (expressed by means of the notation (2.4)) are satisfied: (a) for all 1 ≤ k ≤ p, the real limit Var g k (X 1 , . . . , X k ) exists; (b) for all 1 ≤ v ≤ u ≤ p and all pairs (l, r) and quadruples (j, m, a, b) of integers such that 1 ≤ r ≤ v, (c) there exists some > 0, such that, for all 1 ≤ r ≤ p, all 0 ≤ l ≤ r − 1 and all quadruples (j, m, a, b) ∈ Q(r, r, r, l), the sequence is bounded.
Then, as n → ∞, one has that W n =⇒ Z, where Z = {Z(t) : t ∈ [0, 1]} is the centered Gaussian process defined in (1.1), for Remark 3.3. In the case p = 2, after removing redundant terms, verifying conditions (b) and (c) of Theorem 3.1 boils down to checking that the following quantities converge to zero, as n → ∞: and that the following sequences are bounded for some > 0: i) n → n 5/2+ σ 2 n (Eg 2 (X 1 , X 2 )) 2 ii) n → n 5/2+ Recall also that in this case we have that g 2 = ψ.
When the Hoeffding decomposition of a given U -statistic is directly provided, it is more convenient to work with the kernels {ψ k } defined in formula (2.3), rather than with the class {g k }. The next statement allows one to obtain the same conclusion as in Theorem 3.1 by checking only conditions related to the family {ψ k } defined in (2.3). (a') for all 1 ≤ k ≤ p, the real limit b 2 k := lim n→∞ (c') there exists some ε > 0, such that, for all 1 ≤ r ≤ p and 0 ≤ l ≤ r − 1, the sequence is bounded. and the following sequences are bounded for some ε > 0: Remark 3.6. (i) By inspection of our proofs, one sees that Conditions (a) and (b) in Theorem 3.1 (resp. Conditions (a') and (b') in Theorem 3.4) are sufficient conditions for the convergence of the finite-dimensional distributions of W n towards those of Z, whereas Conditions (c) and (c') therein imply tightness. Condition (b') in Theorem 3.4 implicitly appears in [DP19, Section 4 and Section 5], as an analytical sufficient condition ensuring that (in the notation of the present paper) W n (1) converges in distribution to a one-dimensional standard Gaussian random variable. On the other hand, Condition (b) in Theorem 3.1 is a substantial improvement of the sufficient conditions for one-dimensional asymptotic normality that can be deduced from [DP19, Theorem 5.2]. The difference between the conditions emerging from [DP19, Theorem 5.2] and those deduced in the present paper is explained by the fact that our findings use instead Lemma 5.7 below, which is a refined version of [DP19, Lemma 5.1].
(ii) It will become clear from the discussion to follow that Condition (a) in Theorem 3.1 and Condition (a') in Theorem 3.4 are equivalent for the same values of b 2 k , that is: for every k = 1, ..., p, the limit lim n→∞ n 2p−k σ 2 n Var g k (X 1 , . . . , X k ) exists and is finite if and only if the same holds for lim n→∞ , and in this case the two limits coincide.
(iii) The proofs of Theorems 3.1 and 3.4 will show that Conditions (b) and (c) in Theorem 3.1 imply Conditions (b') and (c') in Theorem 3.4, whereas the opposite implication does not hold in general.
(iv) (Relaxing Conditions (a) and (a')) Suppose that all the assumptions of Theorem 3.4 are verified, except for Condition (a'). In such a situation, we can define b 2 .., p, and observe that, for every k, the mapping n → b 2 k (n) is bounded (by virtue of (2.8)). For every n ≥ 1, we set moreover Z n to be the Gaussian process obtained from (1.1), by replacing the coefficient Then, a standard compactness argument combined with Theorem 3.1 yields the following conclusion: if ρ is any distance metrizing weak convergence on D[0, 1] (see e.g. [Dud02, Section 11.3]), one has that where ρ(W n , Z n ) is shorthand for the distance between the distributions of W n and Z n , as random elements with values in D[0, 1]. By virtue of Points (ii)-(iii) of the present remark, the exact same conclusion holds if one supposes that all assumptions of Theorem 3.1 are verified, except for Condition (a). In view of the content of Point (i) above, it follows that Theorem 3.1 and Theorem 3.4 contain and substantially extend the one-dimensional qualitative CLT stated in [DP19, Section 5].
One should notice that the techniques developed in [DP19] also allow one to deduce explicit rates of convergence, and that such a feature does not extend to our infinite-dimensional results.
The following corollary deals with the (simpler) situation of a degenerate kernel.
Corollary 3.7 (Degenerate kernels). Let the assumptions and notation of Section 2 prevail, define the sequence W n , n ∈ N, according to (2.7), and suppose in addition that the kernel ψ is degenerate. Assume that, for all pairs (l, r) of integers such that 1 ≤ r ≤ p and 0 ≤ l ≤ r ∧ (2p − r − 1), and that, for all 0 ≤ l ≤ p − 1 and for some > 0, the sequence is bounded. Then, as n → ∞, and, for any ∈ 0, 1 2 , the sequence is bounded.
(ii) If p ≥ 2 and the degenerate kernel ψ does not depend on n, then condition (3.2) is not satisfied for l = r = 1, since This phenomenon is consistent with the known non-Gaussian fluctuations of degenerate U -processes of orders p ≥ 2 having a kernel ψ independent of the sample size -see [Neu77,Hal79,DM83,MT84].
Connection to changepoint analysis
The techniques developed in the present paper can be used to characterize the weak convergence of families of processes that are more general than the ones defined in (2.7) and, in particular, to deal with limit theorems related to changepoint analysis (see e.g. [CH88, CH97, Gom04]).
In order to illustrate such a connection, we will show how to suitably adapt our results in order to generalise an influential invariance principle for order 2 symmetric U -statistics, originally proved by Csörgö and Horvath in [CH88]. As explained e.g. in [CH97,Gom04,RW19] such an invariance principle has been the starting point of a fruitful line of research, focussing on changepoint testing procedures based on generalisations of Wilcoxon-Mann-Whitney statistics. Further possible extensions of the results of this section, involving in particular antisymmetric kernels [CH88, GH95, CH97, Fer01] and rescaled U -processes [CH88,Fer94,RW19], are outside the scope of the present paper and will be investigated elsewhere.
As before, we start by fixing a sequence of i.i.d. random variables X 1 , X 2 , ... with values in (E, E), and with common distribution µ. We also consider a sequence of kernels {ψ (n) : n ≥ 1}, such that each mapping ψ (n) : E 2 → R is symmetric and square-integrable with respect to µ 2 . For applications, ψ (n) is typically chosen in such a way that the quantity ψ (n) (x, y) is small whenever x, y are close, e.g. ψ (n) (x, y) = x − y β , β > 0 (assuming E is a normed space), but such a property has no impact on the convergence results discussed below. Here, to simplify the notation we assume from the start that each ψ (n) is centered, that is, E[ψ (n) (X 1 , X 2 )] = 0 for every n. We are interested in the family of U -processes {Y n : n ≥ 1} given by Defining ψ (n) u , u = 1, 2, according to (2.3) and writing γ 1 (n) := ψ (n) 1 L 2 (µ) and γ 2 (n) := ψ (n) 2 We also set γ 2 n := Var(Y n (1/2)) 1 , and Y n := Y n /γ n . The next statement corresponds to one of the main findings in [CH88].
The following statement (containing Theorem 3.9 as a special case) shows that, by allowing the kernels ψ (n) to explicitly depend on n, one can obtain a larger class of functional limit theorems. We recall that a centered Gaussian process Theorem 3.10. Let the above assumptions and notation prevail, and assume moreover that: (I) the kernels ψ verify the asymptotic relations expressed in Conditions (b') and (c') of Theorem 3.4 for p = 2, and (II) as n → ∞, Theorem 3.10 is proved in Section 5.5. An alternate class of FCLTs displaying Brownian bridges as limits can be found in [GH95]. In the forthcoming Corollary 4.4, we will present a direct application of Theorem 3.10 to edge counting in random geometric graphs.
Extension to vectors of U -processes
In this subsection we state multivariate extensions of Theorems 3.1 and 3.4. We first introduce the setup and some notation: Fix a dimension d ≥ 1 and, for 1 ≤ i ≤ d, let p i be a positive integer and suppose that ψ(i) = ψ (n) (i) ∈ L 2 (µ p i ) is a symmetric kernel (that may again depend on the sample size n). Define the corresponding kernels g k (i) and ψ s (i) (which may also depend on n) for all 0 ≤ k ≤ p i and 1 ≤ s ≤ p i in the obvious way. W.l.o.g. we may assume that Var(U i (1)) and W i := (W i (t)) t∈[0,1] . Then, with obvious notation we have that In this section, the vector-valued Gaussian limiting processes Z = (Z 1 , . . . , Z d ) T will have zero mean, and a covariance structure that is given by ) . The next two statements are multidimensional counterparts to Theorem 3.1 and Theorem 3.4.
Theorem 3.11 (Functional convergence of vectors of general U -statistics, I). Let the assumptions and notation of this subsection prevail, and assume that the following three conditions are satisfied: is bounded.
Gaussian process defined by the covariance structure (3.5) and where, for exists; is bounded.
Subgraph counting in random geometric graphs
Random geometric graphs are graphs whose vertices are random points scattered on some Euclidean domain, and whose edges are determined by some explicit geometric rule; in view of their wide applicability (for instance, to the modelling of telecommunication networks), these objects represent a very popular alternative to the combinatorial Erdös-Rényi random graphs. We refer to the texts [Pen04,PR16] for an introduction to this topic, and for an overview of related applications. Our aim is to use our main findings (Theorem 3.1 and Theorem 3.4) in order to establish a new collection of FCLTs for arbitrary subgraph counting statistics associated with generic sequences of random graphs. These FCLTs -whose statements appear in Theorem 4.2 below -hold in full generality and with minimal restrictions with respect to the already existing one-dimensional CLTs; as such, they substantially extend the one-dimensional CLTs proved in [Pen04, Section 3.5 and Section 3.4], as well as in [JJ86, BG92, DP19].
We fix a dimension d ≥ 1 as well as a bounded and Lebesgue almost everywhere continuous probability density function f on R d . Let µ(dx) := f (x)dx be the corresponding probability measure on (R d , B(R d )) and suppose that X 1 , X 2 , . . . are i.i.d. with distribution µ. Let X := {X j : j ∈ N}. We denote by {t n : n ∈ N} a sequence of radii in (0, ∞) such that lim n→∞ t n = 0. For each n ∈ N, we denote by G(X; t n ) the random geometric graph obtained as follows. The vertices of G(X; t n ) are given by the set V n := {X 1 , . . . , X n }, which P-a.s. has cardinality n, and two vertices X i , X j are connected if and only if 0 < X i − X j 2 < t n . Furthermore, let p ≥ 2 be a fixed integer and suppose that Γ is a fixed connected graph on p vertices. For each n we denote by G n (Γ) the number of induced subgraphs of G(X; t n ) which are isomorphic to Γ. Recall that an induced subgraph of G(X; t n ) consists of a non-empty subset V n ⊆ V n with an edge set precisely given by the set of edges of G(X; t n ) whose endpoints are both in V n . We will also have to assume that Γ is feasible for every n ≥ p. This means that the probability that the restriction of G(X; t n ) to X 1 , . . . , X p is isomorphic to Γ is strictly positive for n ≥ p. Note that feasibility depends on the common distribution µ of the points. The quantity G n (Γ) is a symmetric U -statistic of where ψ Γ,tn (x 1 , . . . , x p ) equals 1 if the graph with vertices x 1 , . . . , x p and edge set {{x i , x j } : 0 < x i − x j 2 < t n } is isomorphic to Γ, and equals 0 otherwise. We denote the corresponding normalized U -process by For obtaining asymptotic normality one typically distinguishes between three different asymptotic regimes: Note that we could rephrase the regimes (R1) and (R2) as follows: where, for positive sequence a n and b n we write a n b n , n ∈ N, if and only if lim n→∞ a n /b n = 0. It turns out that, under regime (R2) one also has to take into account whether the common distribution µ of the X j is the uniform distribution , or not. To deal with this peculiarity, we will therefore distinguish between the following four cases: (C3) nt d n → ∞ as n → ∞, and µ is not a uniform distribution.
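To make the subgraph counting statistics concrete, the following Python sketch simulates the (partially revealed) edge count of G(X; t_n), i.e. the case where Γ is a single edge, and normalises it by Monte Carlo estimates of its mean and of the standard deviation at t = 1; all parameter choices and function names are illustrative only and are not part of the original construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_count(points, radius):
    """Number of pairs {i, j} with 0 < ||X_i - X_j|| < radius, i.e. edges of G(X; t_n)."""
    m = len(points)
    if m < 2:
        return 0
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    iu = np.triu_indices(m, k=1)
    return int(((d2[iu] > 0.0) & (d2[iu] < radius ** 2)).sum())

def edge_count_path(points, radius, grid):
    """t -> edge count among the first floor(n t) points, evaluated on a grid in [0, 1]."""
    n = len(points)
    return np.array([edge_count(points[: int(n * t)], radius) for t in grid])

n, d, reps = 300, 2, 200
t_n = n ** (-0.7)                      # illustrative radius choice
grid = np.linspace(0.0, 1.0, 11)

paths = np.array([edge_count_path(rng.random((n, d)), t_n, grid) for _ in range(reps)])
mean_path = paths.mean(axis=0)         # Monte Carlo estimate of E[U_n(t)]
sigma_n = paths[:, -1].std()           # Monte Carlo estimate of the sd of U_n(1)
W = (paths - mean_path) / sigma_n      # normalised U-process paths
print(np.round(W[0], 2))               # one sample path of the normalised edge-count process
```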
The following important variance estimates will be needed. Except for the case (C2), which needs a special consideration, these have already been derived in the book [Pen04]. Since it does not make the argument much longer, we provide the whole proof.
for a constant c ∈ (0, ∞). Moreover, there exist constants c 1 , c 2 , c 3 , c 4 ∈ (0, ∞) such that, as n → ∞, Proof. For notational convenience, for k = 0, 1, . . . , p we simply denote by g k the function corresponding to the kernel ψ Γ,tn , i.e. we suppress the dependence on n and on the graph Γ. We will make use of formula (2.9). First note that, for 1 ≤ k ≤ p, Hence, by (2.9), we have (4.1) For k = 1, . . . , p, we have that (e.g. by dominated convergence) where, for 1 ≤ k ≤ p, Also, from [Pen04, Proposition 3.1] we know that This implies that (4.5) Since 0 < t n < 1, for 2 ≤ k ≤ p, this yields that In order to discuss the case k = 1 we have to carefully compare d 1 to ν. Note that we have By Jensen's inequality we have whereas, if µ is a uniform distribution we can only conclude that Var g 1 (X 1 ) (t d n ) 2p−2 but, in general, we cannot give any lower asymptotic bound on Var g 1 (X 1 ) . Note that, for 1 ≤ k ≤ p − 1 we have (4.6) This implies that there are positive constants c 1 , c 3 and c 4 such that whereas, in case (C2) we can conclude (as claimed) that there is a positive constant c 2 such that (4.8) The next collection of FCLTs, extending the one-dimensional CLT proved in [Pen04,DP19], is the main result of the section. Note that, in view of the large number of parameters, in the forthcoming Theorem 4.2 we choose to express the distribution of the limit process Z directly in terms of its covariance function (1.3), rather than using the representation (1.1). We will also need the following definition: For fixed ρ ∈ (0, ∞), introduce the positive definite function Ψ : [0, 1] × [0, 1] → R : (s, t) → Ψ(s, t) given by where d k , 1 ≤ k ≤ p, and ν have been defined in (4.3) and (4.4), respectively.
Remark 4.3. Note that, interestingly, in the case (C4) the covariance function Ψ of the limiting process depends not only on ρ but also on the difference d 1 −ν 2 . In particular, the analytic properties of Ψ depend on whether µ is a uniform distribution or not.
In particular, (4.10) and (4.11) make sure that condition (a) of Theorem 3.1 is always satisfied in this example and that the covariance function Γ of the potential limiting process given by coincides with the one in the statement. Now fix integers 1 ≤ v ≤ u ≤ p and l, r such that 1 ≤ r ≤ v and 0 ≤ l ≤ r ∧ (u + v − r − 1). The computations on pages 4196-4197 of [LRP13] show that for all where the second relation follows from 0 < t n < 1 and the inequality j + m + a − b ≤ v + u + r − l (we observe that the authors of [LRP13] actually deal with the rescaled measure n · µ, which is why they obtain another power of n as a prefactor). Now suppose that (j, m, a, b) ∈ Q(v, u, r, l) ∩ P . We are going to repeatedly use (4.13) and Proposition 4.1 for the following estimates: In case (C1) we have where we have used that v + u + r − l ≤ 3p for the second inequality and, hence, under the assumptions of the theorem it follows that In case (C2) we obtain where we have used that (v + u + r − l) ≥ 3. Hence, under the assumptions of the theorem it follows that In case (C3) we similarly obtain where we have again used that (v + u + r − l) ≥ 3. Finally, in case (C4) we have Eventually, we have to deal with the quadruples (j, m, a, b) ∈ Q(v, u, r, l) \ P . In order to do this, we first remark that we have the asymptotic relations Relation (4.14) follows from the computation f (x 1 + t n u l )f (x 1 + t n v l )du l dv l ψ Γ,1 (0, y 2 , . . . , y m , u m+1 , . . . , u p )ψ Γ,1 (0, y 2 , . . . , y m , v m+1 , . . . , v p ) du l dv l ψ Γ,1 (0, y 2 , . . . , y m , u m+1 , . . . , u p )ψ Γ,1 (0, y 2 , . . . , y m , v m+1 , . . . , v p ) where made use of the translation invariance and scaling property of the kernel ψ Γ,tn as well as of the a.e.
-continuity of f . The derivation of (4.15) is similar but easier and is for this reason omitted. First, if a = b = 0 and j, m ≥ 1, then we have Now note that by the definition of the set Q(v, u, r, l) we further have that which provides a bound of the same order as (4.13). If a = b = j = 0 and m ≥ 1, then using m = j + m − a − b ≤ u + v − r − l and r ≥ 1, , which again yields a bound of the same order as (4.13). The only remaining possibility is that 1 ≤ a = b = m = j ≤ p − 1. In this case, we first claim that Indeed, if j < u, then 2j < u+v ≤ u+v +r −l since j ≤ v. On the other hand, if j = u, then j = v and we must also have r = j and l ≤ r −1 = j −1 since j = a ≤ r ≤ v = j and 0 ≤ l ≤ u+v −r −1 = j −1 = r −1. Hence, u + v + r − l ≥ 2j + r − l ≥ 2j + 1. Thus, we obtain that 4p−(u+v+r−l)−1 which is the same bound as in (4.13). Since all these bounds are at most the same as the bound in (4.13) we conclude that the above estimates continue to hold for all (j, m, a, b) ∈ Q(v, u, r, l) \ P . Since the estimates just proven are independent of the variables v, u, c and r this implies that conditions (b) and (c) of Theorem 3.1 are satisfied in the asserted cases.
As announced, the following statement is a consequence of Theorem 3.10 and provides a changepoint counterpart to the previous theorem, in the special case of edge counting.
The proof of Corollary 4.4 (whose details are left to the reader) follows from the fact that, under the regime (C1) and in the notation of Theorem 3.10, one has that γ 2 n ∼ σ 2 n /2, and also c 1 = 0 and c 2 = 2. Writing k := nt for a fixed t, the sum S(n, t) := 1≤i≤ nt <j≤n 1 {0< X i −X j ≤tn} counts the number of edges in G(X; t n ) such that one endpoint belongs to the set {X 1 , ..., X k } and the other belongs to {X k+1 , ..., X n }; a small value of S(n, t) implies that most distances between the elements of the two blocks of variables are larger than t n . For testing procedures related to changepoint analysis (see [Fer01,CH97]), one is typically interested in understanding the asymptotic distribution of such quantities as M n := max (−T n (t)), or A n := argmax where argmax t∈[0,1] g(t) stands conventionally for the smallest maximizer of a function g admitting a maximum 2 . Corollary 4.4 immediately implies that M n and A n converge in distribution to m := √ 2 max b t and a := argmax b t , respectively. It is a well-known fact (see e.g. [Fer95,CH97]) that m/ √ 2 is distributed according to the Kolmogorov-Smirnov law, whereas a is uniformly distributed on [0, 1]. More general limit theorems (involving in particular an independent process A, as in Theorem 3.10) could be obtained by considering an adequately renormalized version of T n under the remaining regimes (C2)-(C4).
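As a concrete illustration of the changepoint quantities above, the sketch below evaluates S(n, t) on a grid, forms a crudely normalised stand-in for T_n(t) (centred by its Monte Carlo mean and scaled by the standard deviation at t = 1/2, in the spirit of the γ_n of Theorem 3.10), and computes M_n and A_n; all parameter choices and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def cross_edge_count(points, radius, k):
    """S(n, t): edges of G(X; t_n) with one endpoint among the first k points
    and the other among the remaining n - k points."""
    n = len(points)
    if k == 0 or k == n:
        return 0
    d2 = ((points[:k, None, :] - points[None, k:, :]) ** 2).sum(-1)
    return int(((d2 > 0.0) & (d2 < radius ** 2)).sum())

n, d, reps = 300, 2, 200
t_n = n ** (-0.7)
grid = np.linspace(0.0, 1.0, 21)

paths = np.array([
    [cross_edge_count(P, t_n, int(n * t)) for t in grid]
    for P in rng.random((reps, n, d))
])

# Illustrative normalisation: centre by the Monte Carlo mean and scale by the
# standard deviation at t = 1/2 (a stand-in for gamma_n).
T = (paths - paths.mean(axis=0)) / paths[:, len(grid) // 2].std()

M = (-T).max(axis=1)            # M_n = max_t (-T_n(t))
A = grid[(-T).argmax(axis=1)]   # A_n = (smallest) maximiser of -T_n(t) on the grid
print("median M_n:", round(float(np.median(M)), 3), " mean A_n:", round(float(A.mean()), 3))
```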
General statements
In the paper [RLTvdV16], a remarkable collection of one-dimensional CLTs was proved for sequences of U -statistics of order 2 displaying size-dependent kernels, as well as dominant non-linear Hoeffding components. The Gaussian fluctuations of the U -statistics considered in [RLTvdV16] emerge asymptotically from the fact that the corresponding kernels tend to concentrate around a diagonal, a phenomenon leading to Gaussianity if one assumes some additional Lyapounov-type condition. The scope of the applications developed in [RLTvdV16] covers e.g. the estimation of quadratic functionals of densities and regression functions, as well as the estimation of mean responses with missing data (see Section 4.2.2 below, as well as [RLTvdV16, Section 3], and [BR88, Lau96, Lau97, LM00]).
Our aim in this section is to use our Theorem 3.1 in order to prove a functional version of the forthcoming Theorem 4.5, corresponding to a special (but fundamental) case of [RLTvdV16, Theorem 2.1]. Two explicit examples related to kernels based on wavelets and on Fourier bases, respectively, are studied in full detail in Section 4.2.2.
In order to state the announced results, we adopt a notation similar to [RLTvdV16] and consider a sequence of i.i.d. random variables {X i : i ≥ 1} with values in the measurable space (E, E) and with common distribution µ. We also consider a sequence {K n : n ≥ 1} ⊂ L 2 (µ 2 ) of symmetric kernels. For every n, we define the constant σ n and the processes U n = {U n (t) : t ∈ [0, 1]} and W n = {W n (t) : t ∈ [0, 1]} according to (2.5)-(2.9), in the special case p = 2 and ψ = K n , that is: (4.16) We write k n := E K 2 n (X 1 , X 2 ) = K n 2 L 2 (µ 2 ) , and assume that where • op denotes the operator norm of the Hilbert-Schmidt operator f → E K n (·, y)f (y)µ(dy), and Assumptions (4.17) and (4.18) are easily checkable conditions implying that the linear part of the Hoeffding decomposition of W n (1) vanishes in L 2 (P) as n → ∞. Assumption (4.19) can be relaxed (see e.g. formula (10) and Lemma 2.1 in [RLTvdV16]), but we decided to avoid such a level of generality (which is not needed for the examples developed below) in order to keep our paper within bounds. Under these assumptions, Theorem 4.5 states that, as n → ∞, σ 2 n ∼ n 2 k n /2 and W n (1) converges in distribution to Z, where Z is a standard Gaussian random variable.
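The quantities entering these assumptions can be estimated numerically. The sketch below approximates k_n and the operator norm of f ↦ E[K_n(·, Y)f(Y)] by discretising the kernel on an i.i.d. sample from µ, for a simple band-type kernel that concentrates on the diagonal; both the kernel and all names and parameters are illustrative stand-ins, not taken from [RLTvdV16].

```python
import numpy as np

rng = np.random.default_rng(2)

def band_kernel(x, y, h):
    """Illustrative symmetric kernel concentrating on the diagonal: (1/h) * 1{|x - y| <= h/2}."""
    return (np.abs(x[:, None] - y[None, :]) <= h / 2.0) / h

# mu = uniform distribution on [0, 1]; the bandwidth h shrinks as the kernel concentrates.
m, h = 1500, 0.02
x = rng.random(m)
K = band_kernel(x, x, h)

off_diag = ~np.eye(m, dtype=bool)
k_n = (K[off_diag] ** 2).mean()               # Monte Carlo estimate of E[K_n(X1, X2)^2]

# The integral operator f -> E[K_n(., Y) f(Y)] is approximated by the matrix K / m;
# its largest eigenvalue in absolute value estimates the operator norm.
op_norm = np.abs(np.linalg.eigvalsh(K / m)).max()

print("k_n      ~", round(float(k_n), 1))     # of order 1/h, so it grows as h -> 0
print("op. norm ~", round(float(op_norm), 3)) # stays bounded, so op. norm / sqrt(k_n) -> 0
```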
The main abstract result of the present section is the following functional version of the previous statement.
As we will point out in Remark 4.7 below each of the four assumptions (4.24), (4.25), (4.26) and (4.27) plays a substantially different role in the proof.
Proof of Theorem 4.6. Using the notation introduced in this section, for every n we define and write Q c n = Mn m=1 (X m,n ×X m,n ) c in order to denote the complement of Q n in E 2 . We set A n := K n 1 Qn and B n := K n − A n = K n 1 Q c n , and define T n := {T n (t) : t ∈ [0, 1]} and R n := {R n (t) : t ∈ [0, 1]} as in such a way that W n = T n + R n . Our first remark is that, for every f ∈ L 2 (µ) with unit norm, one has that from which we infer that sup n A n op , sup n B n op < ∞, (4.28) where we have applied (4.18) and the triangle inequality in order to deal with B n . Using the identity (2.9) (in the case ψ = B n ) together with (4.28), with the relations σ 2 n ∼ n 2 k n /2 and n/k n → 0 and with (4.20), shows immediately that, as n → ∞, E[R n (t) 2 ] = o(σ 2 n ) for every t ∈ [0, 1]. This in turn implies that σ 2 n ∼ Var(T n (1)). We will now study R n and T n separately, and prove that (ii) the sequence {T n : n ≥ 1} verifies the assumptions of Theorem 3.1 in the case p = 2, with α 1,2 = 0 and α 2,2 = 1, and therefore T n weakly converges to B(t 2 ) in D[0, 1].
[Proof of (i)] We first define the functions g 0 , g 1 , g 2 according to (2.4), in the case p = 2 and ψ = B n , so that the Hoeffding decomposition of the U -statistic R n (t), t ∈ [0, 1], is Now fix 0 ≤ s < t ≤ 1. Then, writing g 1 − g 0 := ψ 1 , as before, and using as always the symbol c in order to denote an absolute finite constant whose exact value might change from line to line, We can assume without loss of generality that α 1 ∈ (0, 1]; we have that where we have used (4.18) and (4.26) to deduce the last inequality. Analogously, one shows that where the last inequality follows again from (4.18) and from (4.17), as well as from the fact that nt − ns n ∈ [0, 1].
In order to deal with R , we adopt as before the notation ψ 2 (X i , X j ) := g 2 (X i , X j ) − g 1 (X i ) − g 1 (X j ) + g 0 , and observe that, for every a ≥ 1, E[|ψ 2 (X i , X j )| a ] ≤ c E E |B n | a dµ 2 , for some absolute constant c depending solely on a. For every n and every 0 ≤ s < t ≤ 1, we define the set of integers In order to bound such a quantity, we use orthogonality of the summands in the above sum, fix an α 2 > 0 such that condition (4.27) is satisfied, and note that (4.31) We have therefore shown (in (4.29), (4.30) and (4.31)) that {R n } satisfies the tightness criterion of Lemma 1.1, for α = min (α 1 , α 2 ) and β = 2, and the proof of Point (i) is concluded.
[Proof of (ii)] In this part of the proof, we denote by g 0 , g 1 , g 2 the functions obtained from (2.4) by selecting p = 2 and ψ = A n . Note that each of the three kernels g i implicitly depends on n and that, by virtue of (4.28), one has sup n E[g 1 (X 1 ) 2 ] + |g 0 | < ∞. (4.32) Since g 2 = A n , σ 2 n ∼ k n n 2 /2 and (4.20) is in order, we see immediately that the constants b 1 and b 2 appearing at Point (a) of Theorem 3.1 are such that b 1 = 0 and b 2 2 = 2, yielding α 1,2 = 0 and α 2,2 = 1. In order to conclude our proof, we have now to check that the quantities appearing at Points 1.-6. of Remark 3.3 all converge to zero as n → ∞ and that the quantities in points i)-iv) of the same remark are bounded for some > 0. This is immediately done for the quantities at Points 1., i) and ii), by virtue of (4.32). To deal with the quantity at Point iii), we note that, for some > 0, (g 1 (x)g 2 (x, y)) 2 µ(dx)µ(dy) by (4.24) and (4.28), showing that the quantity at Point 2. vanishes. We can deal at once with the quantities at Point 3. and 5. by means of the following considerations. For a fixed n, denote by {λ j : j ≥ 1} and {e j : j ≥ 1}, respectively, the sequence of eigenvalues (taken in decreasing order) and eigenfunctions of the Hilbert-Schmidt operator on L 2 (µ) given by f → E A n (·, y)f (y)µ(dy). Then, such eigenfunctions form an orthonormal system in L 2 (µ), and one has that A n = g 2 = i λ i e i ⊗ e i , with convergence in L 2 (µ 2 ). Such a relation yields that A n op = λ 1 , g 2 1 1 g 2 = i λ 2 i e i ⊗ e i , g 1 = i λ i µ i e i (where µ i := E e i dµ) and g 1 1 1 g 2 = i λ 2 i µ i e i . Since |µ i | ≤ 1 (by Cauchy-Schwarz), we infer that g 1 1 1 g 2 L 2 (µ) , g 2 1 1 g 2 L 2 (µ 2 ) ≤ i λ 4 i ≤ A n op A n L 2 (µ 2 ) , and the desired convergence to zero follows from (4.17), (4.20) and (4.28). The vanishing of the quantity at Point 4. follows from n 3/2 σ 2 n g 2 0 1 g 2 L 2 ∼ 1 n 1/2 k n m Xn,m Xn,m Xn,m K 2 n (x, y)K 2 n (x, z)µ(dx)µ(dy)µ(dz) Xn,m Xn,m Xn,m where we have applied (4.19) and (4.22). One has also that, for some > 0, (4.19), (4.20), (4.22), (4.23) and (4.25) -this yields the boundedness of the sequence at Point iv). Finally, the convergence to zero of the quantity at Point 6. is a direct consequence of (4.17) and (4.20).
Remark 4.7. By inspection of the previous proof, one sees that Assumptions (4.26) and (4.27) imply that the off-diagonal part of the U -process W n is tight in the space D[0, 1]. On the other hand, Assumptions (4.24) and (4.25) are needed in order to ensure that the (dominating) diagonal component of W n meets the requirements of Theorem 3.1. Note that Assumption (4.24) is such that (a) it does not appear in [RLTvdV16], and (b) it would be needed if one wanted to prove a one-dimensional CLT for W n (1) by using the techniques developed in [DP19]. This slight discrepancy between the assumptions of [DP19] and [RLTvdV16] is explained by the fact that the sufficient conditions discovered in [DP19] would imply not only a CLT for W n (1), but also that E[W n (1) 4 ] → 3, and consequently need to be stronger.
Two examples
As an application of Theorem 4.6, we will consider two families of kernels satisfying the set of sufficient conditions for functional convergence pointed out in the previous section. As explained in [RLTvdV16, Section 3] both types of U -statistics can be used in the non-parametric estimation of quadratic functional of densities -see also [BR88,LM00] (I) (Wavelet-based kernels) Following [RLTvdV16, Section 4.1], we consider expansions of functions f ∈ L 2 (R d ) on an orthonormal basis of compactly supported, bounded wavelets of the form The functions ψ v i,j are orthogonal for different indices (i, j, v) and given by scaled and translated versions of the 2 d base functions ψ v 0,0 : We concentrate on functions f with support in E = [0, 1] d . As noted in [RLTvdV16, Section 4.1], for each resolution level i and vector v, only the order 2 id elements ψ v i,j are nonzero in E. We denote the corresponding set of indices j by J i . We then truncate the expansion at the level of resolution i = I and look at the kernel (II) (Kernels based on Fourier expansions) Any function f ∈ L 2 [−π, π] can be represented through the Fourier series f = j∈Z f j e j for e j (x) = e ijx/ √ 2π and f j = π −π f e j dλ, where λ is the Lebesgue measure. We can write f k = |j|≤k f j e j to obtain an orthogonal projection of f onto a 2k + 1dimensional space. Assuming that k depends on n, we can also write down the corresponding kernel as: and note that K n (x, y) = D k (x − y), where D k is the well-known Dirichlet kernel.
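The Fourier projection kernel in (II) can be evaluated directly and checked against the closed-form Dirichlet kernel; the following sketch does so for an illustrative truncation level, using the convention e_j(x) = e^(ijx)/√(2π) stated above (function names and parameters are illustrative).

```python
import numpy as np

def projection_kernel(x, y, k):
    """K_n(x, y) = sum_{|j| <= k} e_j(x) * conj(e_j(y)), with e_j(x) = exp(i j x) / sqrt(2 pi)."""
    j = np.arange(-k, k + 1)
    return np.real(np.exp(1j * np.outer(x - y, j)).sum(axis=1)) / (2.0 * np.pi)

def dirichlet(u, k):
    """Closed form D_k(u) = sin((k + 1/2) u) / (2 pi sin(u / 2)), equal to (2k + 1)/(2 pi) at u = 0."""
    u = np.asarray(u, dtype=float)
    out = np.full_like(u, (2 * k + 1) / (2.0 * np.pi))
    nz = np.abs(np.sin(u / 2.0)) > 1e-12
    out[nz] = np.sin((k + 0.5) * u[nz]) / (2.0 * np.pi * np.sin(u[nz] / 2.0))
    return out

k = 7                                   # so that k_n = 2k + 1
x = np.array([0.3, -1.2, 2.5])
y = np.array([0.3, 0.4, -2.0])
print(np.allclose(projection_kernel(x, y, k), dirichlet(x - y, k)))   # True
```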
Theorem 4.8. Let the above assumption and notation prevail.
1. Let µ be any probability measure on [0, 1] d with a Lebesgue density that is bounded and bounded away from zero. The sequence of wavelet-based kernels {K n : n ≥ 1} defined at Point (I) above satisfies the assumptions of Theorem 4.5, with respect to µ, as soon as n ≪ k n ≪ n 2 , for k n = 2 Id . Moreover, a sufficient condition for such a sequence to satisfy the assumptions of Theorem 4.6 is n 1+γ 1 ≪ k n ≪ n 2−γ 2 , for some γ 1 , γ 2 > 0.
2. Let µ be any measure on R with a bounded Lebesgue density and k n = 2k + 1. The sequence of Fourier-based kernels {K n : n ≥ 1} defined at Point (II) above satisfies the assumptions of Theorem 4.5 as soon as n ≪ k n ≪ n 2 . In addition, a sufficient condition for such a sequence to meet the assumptions of Theorem 4.6 is n 1+η 1 ≪ k n ≪ n 2−η 2 , for some η 1 , η 2 > 0.
Proof.
1. For n k n n 2 and K n defined in point (I) above, the assumptions of Theorem 4.5 are verified in [RLTvdV16, Proposition 4.1]. The authors note that, by assumption, each function ψ v I,j is supported within a set of the form 2 −I (C + j) for a given cube C that depends on the type of the wavelet, for any v. They take X n,m to be blocks (cubes) of l d n adjacent cubes 2 I (C + j), giving M n = O(k n /l d n ) sets X n,m . In order for the assumptions (4.18)-(4.23) to be satisfied, the authors require that Now assume n 1+γ 1 k n n 2−γ 2 for some γ 1 , γ 2 > 0. Condition (4.26) is then automatically satisfied. As noted in the proof of [RLTvdV16, Proposition 4.1], µ(X m,n ) is of order 1 Mn . Now, it is also noted in the proof of [RLTvdV16,Proposition 4.1] that, if K n (x 1 , x 2 ) = 0 then there exists some j such that x 1 , x 2 ∈ 2 −I (C + j). Moreover, the set of (x 1 , x 2 ) in the complement of m X n,m × X n,m where K n (x 1 , x 2 ) = 0 is contained in the union U of all cubes 2 −I (C + j) that intersect the boundary of some X n,m . It is also noted that the number of such cubes is of order M 1/d n k 1−1/d n and that µ 2 −I (C + j) 1 kn . Therefore, using K n ∞ k n , we note that, for any α 2 > 0, n α 2 k n K n (x, y)1 (x, y) ∈ (X n,m × X n,m ) c 2 µ(dx)µ(dy) Condition (4.27) requires this quantity to be bounded for some α 2 > 0. It will indeed be bounded for α 2 ≤ 1 2d if we choose M n = k 1/2 n . Moreover, for M n = k 1/2 n , Mn kn → 0. Also, kn Mnn = k 1/2 n n → 0, as k n n 2 . Under the same assumption, M n → ∞ and n 1/2+γ 1 /2 M n n 1−γ 2 /2 n and so conditions (4.24) and (4.25) are also satisfied (as µ(X m,n ) is of order 1 Mn ). Therefore all the conditions a), b), c), d) from above, as well as conditions (4.24)-(4.27), are satisfied. This finishes the proof. Now, assume that n 1+η 1 k n n 2−η 2 , for some η 1 , η 2 > 0. This makes condition (4.26) readily satisfied. In order for condition (4.24) to be satisfied, we require that n 1/2+ 1 δ is bounded for some which is bounded, if n α 2 /2 k 1/2 n n −α 2 . Each of the remaining triangles in the complement of m X n,m × X n,m has sides of length of order . Hence, for a typical triangle ∆ and an interval I of length of the order , n α 2 k n ∆ |K n (x, y)| 2 dxdy v=y u=x−y n α 2 k n I 0 |D n (u)| 2 dudv n α 2 k n k n = n α 2 . (4.33) There are 2(M n − 1) such triangles. Therefore, condition (4.26) will be satisfied if, in addition to n α 2 /2 k 1/2 n n −α 2 , M n n α 2 = 2π n α 2 δ is bounded, i.e. n α 2 δ.
Technical results and proofs of main statements
Unless otherwise specified, for the rest of the section we adopt the same conventions and notation put forward in Section 2.
A new product formula
We start by proving a new product formula for symmetric U-statistics with arguments of possibly different sizes. In order to state it, we need to recall the Hoeffding decomposition of not necessarily symmetric kernel functions: Let $f \in L^1(\mu^p)$. Then, $f$ can be decomposed as follows: For all $(x_1, \dots, x_p) \in E^p$ one has
$$f(x_1, \dots, x_p) = \sum_{J \subseteq [p]} f_J\big((x_i)_{i \in J}\big),$$
where we follow the convention that in $(x_i)_{i \in J}$ the coordinates $i$ appear in increasing order, i.e. if $J = \{i_1, \dots, i_k\}$ with $k = |J|$ and $1 \leq i_1 < \dots < i_k \leq p$, then $(x_i)_{i \in J} = (x_{i_1}, \dots, x_{i_k})$. The kernels $f_J$, $J \subseteq [p]$, are given by
$$f_J\big((x_i)_{i \in J}\big) = \sum_{L \subseteq J} (-1)^{|J| - |L|}\, \mathbb{E}\big[f(z_1, \dots, z_p)\big], \qquad \text{where } z_i = x_i \text{ for } i \in L \text{ and } z_i = X_i \text{ for } i \notin L,$$
and they are canonical with respect to $\mu$ in the sense that for each $\emptyset \neq J \subseteq [p]$ with $|J| = k$, each $j \in J$ and all $(x_i)_{i \in J \setminus \{j\}} \in E^{|J|-1}$ one has that
$$\int_E f_J\big((x_i)_{i \in J}\big)\,\mu(dx_{i_l}) = 0,$$
where we again suppose that $J = \{i_1, \dots, i_k\}$, $1 \leq i_1 < \dots < i_k \leq p$ and where $i_l = j$. For a detailed discussion and proofs of these facts we refer the reader to [Maj13, Chapter 9]. Note that, if the kernel $f$ is symmetric as in Section 2, then we can define the (symmetric) functions $g_k$, $0 \leq k \leq p$, by $g_k(y_1, \dots, y_k) := \mathbb{E}\big[f(y_1, \dots, y_k, X_1, \dots, X_{p-k})\big]$ as before, and we obtain that, for every subset $J \subseteq [p]$ with $1 \leq k := |J| \leq p$,
$$f_J\big((x_i)_{i \in J}\big) = f_k(x_{i_1}, \dots, x_{i_k}),$$
where the symmetric and degenerate kernel $f_k$ has been defined in (2.3).
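For concreteness, here is the special case $p = 2$ written out explicitly; this worked example is standard material and is ours, not part of the original text. For a symmetric $f \in L^1(\mu^2)$, set
$$g_0 := \mathbb{E}\,f(X_1, X_2), \qquad g_1(x) := \mathbb{E}\,f(x, X_2),$$
$$f_1(x) := g_1(x) - g_0, \qquad f_2(x_1, x_2) := f(x_1, x_2) - f_1(x_1) - f_1(x_2) - g_0,$$
so that
$$f(x_1, x_2) = g_0 + f_1(x_1) + f_1(x_2) + f_2(x_1, x_2),$$
with $\int_E f_1\,d\mu = 0$ and $\int_E f_2(x_1, x_2)\,\mu(dx_2) = 0$ for every $x_1$, i.e. $f_1$ and $f_2$ are canonical (degenerate) kernels of orders one and two.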
For the statement of our product formula we have to fix some more notation: Let us fix two positive integers $p$ and $q$. Then, for nonnegative integers $l, n, m, r$ such that $n \leq m$, $r \leq p \wedge q$, $l \geq p + q - 2r$ and sets $L \subseteq [m]$ with $|L| = l$, we denote by $\Pi_{r,n,m}(L)$ the collection of all triples $(A, B, C)$ such that $L$ is the disjoint union of $A$, $B$ and $C$.
Proposition 5.1 (Product formula). Let $p, q \geq 1$ be positive integers and assume that $\psi \in L^2(\mu^p)$ and $\varphi \in L^2(\mu^q)$ are degenerate, symmetric kernels of orders $p$ and $q$, respectively. Moreover, let $n \geq p$ and $m \geq q$ be positive integers with $m \geq n$. Then, whenever $n \geq p + q$, the product admits an explicit Hoeffding decomposition. Moreover, for such an $M$, we further have the bound (5.5), and the product formula reduces to the one in [DP19, Proposition 2.6]. The main difference in general is that, in the situation of Proposition 5.1 and for $n = m$, the product is no longer (in general) a finite sum of degenerate and symmetric U-statistics. However, its Hoeffding decomposition (in the sense of not necessarily symmetric statistics - see e.g. [KR82, DP17]) is still completely explicit and hence suitable for providing useful bounds.
Proof of Proposition 5.1. We write out the respective Hoeffding decompositions of $W$ and $V$.
From Theorem 2.6 in [DP17] we know that the Hoeffding decomposition of V W is given by it follows from M ⊆ J ∪ K that r ≤ p + q − k. Moreover, since |J ∪ K| = |J ∩ K| + |J∆K| and J∆K ⊆ L ⊆ M , it follows again from (5.8) that 2r ≥ p + q − l ≥ p + q − k. In particular, we have l ≥ |p − q| ∨ (p + q − 2r) = p + q − 2r. Moreover, note that Note that we have The claim now follows from the facts that |M ∩ [n]| = k − s, Thus we have proved that Now, suppose that (A, B, C) ∈ Π r,n,m (M ) such that, in particular, |A| = 2r + k − p − q. Moreover, suppose that Then, it is easy to see that B ∪ C ⊆ L, that Now, recall that by the Hoeffding decomposition for non-symmetric kernels, for each (A, B, C) ∈ Π r,n,m (M ) we have that Thus, from (5.10), (5.11) and (5.12) we can conclude that as claimed. The bound (5.5) then follows immediately from and from the fact that In the next subsection, we focus on convergence of finite-dimensional distributions (f.d.d.) for processes of the form (2.7). Our approach makes important use of the general (quantitative) CLTs from [DP19].
A general qualitative multivariate CLT
Fix a positive integer $d$ and, for $1 \leq i \leq d$ and $n \in \mathbb{N}$, let $p_i \leq n_i = n_{i,n} \leq n$ be positive integers. We will always assume that the sequences $\{n_{i,n} : n \in \mathbb{N}\}$ diverge to $\infty$ as $n \to \infty$, for each $i = 1, \dots, d$. Moreover, let $\psi^{(i)} = \psi^{(i,n)} \in L^4(\mu^{p_i})$ be degenerate kernels. Define $Y = Y_n := \big(Y(1), \dots, Y(d)\big)^T$ with $Y(i) := J_{p_i}^{(n_i)}\big(\psi^{(i)}\big)$. Then, $Y$ is a centered random vector with components in $L^4(\mathbb{P})$. We will write $V = V_n = \{v_{i,k} : 1 \leq i, k \leq d\}$ for its covariance matrix. Throughout the section, we denote by $Z = Z_n = (Z_1, \dots, Z_d)^T \sim N_d(0, V)$ a centered Gaussian vector with the same covariance matrix as $Y$. Note that, due to degeneracy, we have $v_{i,k} = 0$ unless $p_i = p_k$. The following finite-dimensional CLT is one of our crucial tools.
Proposition 5.3. With the above notation and definitions, assume that $C := \lim_{n \to \infty} V_n \in \mathbb{R}^{d \times d}$ exists. Then, $Y_n$ converges in distribution to $N_d(0, C)$, provided Conditions (i)-(iii) below hold for all $1 \leq i \leq k \leq d$:

(i) $\lim_{n \to \infty} n^{a/2 - r}\,\big\|\psi^{(i,n)} \star_r^{a-r} \psi^{(k,n)}\big\|_{L^2(\mu^{p_i + p_k - a})} = 0$ for all pairs $(a, r)$ of integers such that $1 \leq a \leq \min\big(p_i + p_k - 1,\, 2(p_i \wedge p_k)\big)$ and $\frac{a}{2} \leq r \leq a \wedge p_i \wedge p_k$,

(ii) the corresponding limit equals $0$ for all pairs $(a, r)$ of integers such that $1 \leq a \leq 2p_i - 1$ and $\frac{a}{2} \leq r \leq a \wedge p_i$, and

Proof. For $1 \leq i, k \leq d$, we use a shorthand notation for the Hoeffding decomposition of $Y(i)\,Y(k)$. The following bound is taken from Lemma 4.1 in [DP19]: for $h \in C^3(\mathbb{R}^d)$ whose partial derivatives up to order three are all bounded, there exist constants such that (5.14) holds. Using the inequality (5.14), we obtain the required estimate, where the symmetric and degenerate kernels $\psi_k : E^k \to \mathbb{R}$ of order $k$ are given as in (2.3) and the symmetric functions $g_l : E^l \to \mathbb{R}$ are defined by $g_l(y_1, \dots, y_l) := \mathbb{E}\big[\psi(y_1, \dots, y_l, X_1, \dots, X_{p-l})\big]$.
Without loss of generality, we can thus assume that $t_1 > 0$ and also that $0 < \|\psi\|_{L^2(\mu^p)} < +\infty$, which implies that $\mathrm{Var}\big(J_p^{(n)}(\psi)\big) > 0$ for all $n \geq p$. We will further fix some auxiliary notation; with this notation at hand, we define the random vector $Y$. In this way, our notation is fitted to the framework of Subsection 5.2.1. We are going to reformulate the conditions from Proposition 5.3. First note that $\mathbb{E}[Y_i Y_j] = 0$ whenever $s_i \neq s_j$, due to the degeneracy of the involved kernels. On the other hand, if $s_i = s_j = s$, then the covariance can be computed explicitly. Since $n_i = \lfloor n t_i \rfloor$ for $1 \leq i \leq d$, the covariance matrix of $Y$ thus converges to some limit $\Gamma \in \mathbb{R}^{d \times d}$ if and only if the corresponding real limit exists, which is implied by (5.15). Taking into account the corresponding representation for $j = 1, \dots, m$, and that $(W_1, \dots, W_m)^T$ is obtained from $(Y_1, \dots, Y_d)^T$ by applying a linear functional, from Proposition 5.3 we thus deduce the following result. Note that we also apply the reindexing $l := a - r$.
Theorem 5.6. With the above notation and definitions, the vector $(W_1, \dots, W_m)^T$ converges, as $n \to \infty$, to a multivariate normal distribution, whenever the following conditions hold for all $1 \leq v \leq u \leq p$. Due to the complicated expressions of the kernels $\psi_s$, the following result, which is a rectified version of Lemma 5.1 of [DP19], is often useful for bounding the contraction norms appearing in Theorem 5.6. Recall the set $Q(i, k, r, l)$ defined before Theorem 3.1.
The next result is a direct consequence of Theorem 5.6, of Lemma 5.7 and of a fact that holds for all $1 \leq v \leq p$. Note further that, using (2.8), we can conclude that, for fixed $s, t \in [0, 1]$, the sequence $\mathrm{Cov}\big(W_n(s), W_n(t)\big)$, $n \in \mathbb{N}$, is always bounded.
Criteria for tightness
We are going to establish tightness using Lemma 1.1 and, as a result, obtain the following theorem.

Theorem 5.10 (Tightness of general U-processes). Let $p \in \mathbb{N}$ and suppose that $\psi = \psi(n) \in L^4(\mu^p)$, $n \geq p$, is a sequence of symmetric kernels. For $t \in [0, 1]$ let $U(t) := U_n(t) := J_p^{(\lfloor nt \rfloor)}(\psi)$ and define $W(t) = W_n(t)$ by (2.7), where $\sigma_n^2 := \mathrm{Var}(U_n(1)) = \mathrm{Var}\big(J_p^{(n)}(\psi)\big)$. Suppose that there is an a.s. continuous Gaussian process $Z = (Z(t))_{t \in [0,1]}$ such that the finite-dimensional distributions of $W_n$, $n \in \mathbb{N}$, converge to those of $Z$. Consider the following conditions:

(i) There is an $\varepsilon > 0$ such that for all $1 \leq r \leq p$ and all $0 \leq l \leq r - 1$, the sequence
$$\frac{n^{2p - r - \frac{r-l}{2} + \varepsilon}}{\sigma_n^2}\, \big\|\psi_r \star_r^{l} \psi_r\big\|_{L^2(\mu^{r-l})}$$
is bounded.
(ii) There is an $\varepsilon > 0$ such that for all $1 \leq r \leq p$, all $0 \leq l \leq r - 1$ and for all quadruples $(j, m, a, b) \in Q(r, r, r, l)$, the sequence obtained by multiplying $n^{2p - r - \frac{r-l}{2} + \varepsilon}\,\sigma_n^{-2}$ by the corresponding contraction norm is bounded.
Then, one has that (ii) implies (i), and also that (i) is sufficient in order for the sequence $W_n$, $n \in \mathbb{N}$, to be tight in $D[0, 1]$.
The proof of Theorem 5.10 is detailed in the forthcoming Subsections 5.3.1 and 5.3.2. There, we are however not going to establish (1.6) directly, but will show that there is a finite constant $C_1 > 0$ such that, under the assumptions of Theorem 5.10, for all $n \in \mathbb{N}$ and all $0 \leq s \leq t \leq 1$ we have the inequality
$$\mathbb{E}\big|W_n(t) - W_n(s)\big|^4 \;\leq\; C_1 \left(\frac{\lfloor nt \rfloor - \lfloor ns \rfloor}{n}\right)^{1 + \varepsilon}, \qquad (5.19)$$
where $\varepsilon$ is the same as in the statement of Theorem 5.10. This is sufficient by Lemma 1.1.
Proof of Theorem 5.10, I: degenerate kernels
Throughout the present and the subsequent section, we can assume without loss of generality that $\varepsilon \in (0, 1]$. Let us first assume that $W_n$ is a U-process of order $p$ based on a degenerate kernel $\varphi$. Now, if, for some $\varepsilon \in (0, 1]$, there is a $C_1 = C_1(\varepsilon) \in (0, \infty)$ such that for all $n \in \mathbb{N}$ we have
$$n^{2p}\, \|\varphi\|^4_{L^2(\mu^p)} \;+\; \max_{k = 1, \dots, p} n^{2p - k + \varepsilon}\, \big\|\varphi \star_p^{p-k} \varphi\big\|^2_{L^2(\mu^k)} \;\leq\; C_1, \qquad (5.24)$$
we conclude from (5.23) that (5.19) is satisfied. This concludes the argument in the case of degenerate kernels.
Remark 5.11. Incidentally, one can show that inequality (5.22) also holds in the opposite direction when the constant $B$ appearing there is replaced with a small enough positive constant $C$, which only depends on $p$. Our way of bounding $\mathbb{E}|W_n(t) - W_n(s)|^4$ is therefore optimal with respect to the order in $n$.
Then, one has that $Y_n(t) = U_n(1) - U_n(t) - I_n(t)$, in such a way that the tightness of $\{Y_n\}$ in $D[0, 1]$ follows from a direct application of Theorem 5.10, first to $U_n/\gamma_n$ and then to $I_n/\gamma_n$. The asymptotic Gaussianity of the finite-dimensional distributions of $Y^{(n)}$ now follows from Remark 5.5, and one can check that the covariance function of $Y^{(n)}$ converges to that of $c_1 A + c_2 b$ by a direct computation.
Status of meat alternatives and their potential role in the future meat market — A review
Plant-based meat analogues, edible insects, and cultured meat are promising major meat alternatives that can be used as protein sources in the future. It is also believed that the importance of meat alternatives will continue to increase because of concerns on limited sustainability of the traditional meat production system. The meat alternatives are expected to have different roles based on their different benefits and limitations. Plant-based meat analogues and edible insects can replace traditional meat as a good protein source from the perspective of nutritional value. Furthermore, plant-based meat can be made available to a wide range of consumers (e.g., as vegetarian or halal food products). However, despite ongoing technical developments, their palatability, including appearance, flavor, and texture, is still different from the consumers’ standard established from livestock-based traditional meat. Meanwhile, cultured meat is the only method to produce actual animal muscle-based meat; therefore, the final product is more meat-like compared to other meat analogues. However, technical difficulties, especially in mass production and cost, remain before it can be commercialized. Nevertheless, these meat alternatives can be a part of our future protein sources while maintaining a complementary relationship with traditional meat.
INTRODUCTION
Meat can be defined as "the flesh of an animal destined for our consumption as food" and includes edible parts of animal carcass, such as lean meat, fat, intestines, etc. [1,2]. Historically, as a food resource, meat has contributed to human evolution and development [3]. Meat is composed of essential nutrients, especially proteins, which are necessary for various physiological functions in the human body [4]. It provides approximately 15% of the proteins consumed in our diet and contains all the essential amino acids as well as various fatty acids and micronutrients (e.g., vitamin B complex, Fe, Zn, and Se) [5,6]. Moreover, meat protein has high digestibility with a corrected amino acid score reaching 0.92 [3]. In addition, it is flavorful and known to have important social and cultural meanings in human society [7][8][9]. Therefore, without doubt, meat is not only an important food for humans but is also an essential part of our lives.
Currently, the world population is growing fast and will reach 9 billion by 2050 [10]. It is estimated that we will need at least double the amount of meat we are producing now. This rapid increase in the global demand for meat is attributed not only to population growth, but also to the economic development of developing countries [11,12]. Taking these factors into consideration, we must soon find ways to increase meat production. Earlier, industrialization of livestock farming fulfilled the increasing demand for meat and its products [10]. However, it is no longer possible to increase meat production for future demands because of the limited land and water resources for sustainable livestock farming, the rapid increase in animal welfare issues, and the undesirable impact on the environment and climate change [1]. Based on the gap between future demand and the present capability to supply meat, there is an increasing need for producing meat alternatives as protein sources. Furthermore, the expansion of halal and kosher markets will also require the development of meat alternatives instead of livestock-based traditional meat, as the number of people consuming such foods might exceed 30% of the world population by 2025 [13].
Consequently, several efforts have been made to increase the production of conventional meat and/or different meat alternatives (Table 1) [10]. Among them, plant-based meat analogues, edible insects, and cultured meat are garnering the interest of most consumers, although cultured meat is still under development for commercialization. Therefore, in this review, the major meat alternatives (e.g., plant-based meat analogues, edible insects, and cultured meat) are introduced as promising protein sources that can be utilized in the near future for supporting and complementing the limited sustainability of the traditional meat production system.
PLANT-BASED MEAT ANALOGUES Definition and present features
Plant-based meat analogues can be manufactured using protein extracted from plants [10]. Wheat, soybean, legumes, oil seeds, and fungi are known to be the main sources of plantbased meat analogues (Table 2) [13]. In fact, plant protein is one of the oldest food sources in our history. Tofu was first consumed in 965 CE, and several products, including wheat gluten, yuba, and tempeh, have been used for decades in different countries and regions [14,15]. Moreover, plant-based products have been suggested as a meat substitute since 1888. However, most of them had very different characteristic features compared to traditional meat, especially with respect to flavor and texture. Therefore, these products did not succeed in the market until the 1900s: the consumption of plant-based meat analogues was only limited to its economic benefits and social demands related to health, religion, and ethical reasons; however, its consumption did not have a pleasurable effect as far as flavor and texture were concerned [13].
In recent times, the market for plant-based meat analogues is expanding with increasing social demands, and constant efforts are being taken to improve their sensory qualities [14,15]. Introduction of texturized vegetable protein (TVP) produced using various ingredients led to the development of plant-based meat analogues; currently, it occupies the biggest market among the different meat alternatives, and it is believed that the market will increase to over $21.23 billion US dollars by 2025 [14,15].
Benefits as meat alternatives
The major reason for meat consumption is to obtain nutrition [13]. Thus, it is very important to manufacture plant-based meat analogues that meet the nutrient specifications of traditional meat [16]. In general, plant protein is limited in nutritional value because of the lack of several essential amino acids such as lysine, methionine, and/or cysteine, and has low bioavailability [17].
Based on their nutritional values and functions, wheat gluten and soybean proteins are the most-used sources among different plant proteins to prepare plant-based meat analogues [13]. Wheat, containing 8% to 17.5% proteins, is one of the most important crops. Gluten (subdivided into gliadin and glutenin) from wheat can be produced during the wet processing of flour and is approved as "Generally Recognized as Safe" (GRAS) grade. When it is heated above 85°C, gluten can be coagulated, resulting in gel formation without loss of its structural order. Moreover, as gluten can form a cohesive blend between protein and the other ingredients, it can be utilized as a plant protein to produce meat analogues. Meanwhile, soybean protein is derived from leguminous plants, as are clover, peas, and alfalfa. It is recently attracting the interest of consumers as a good protein source with economic benefits. Malav et al [18] reported that soybeans have 35% to 40% of high-quality proteins, 15% to 20% of fats, 30% carbohydrates, as well as Fe, Ca, Zn, and vitamin B groups. Liu et al [19] suggested that soybean protein can be used as an alter- native to meat products because of its excellent capacity for rehydration, oil absorption, emulsification, and water absorption.
In the current market, several products are successful as plant-based meat analogues and seem to provide a sufficient amount of protein in our diet as meat alternatives. Bohrer [14] investigated the nutritional contents of four major types (beef burger products, beef meatballs, pork ham, and chicken nuggets) of traditional meat and plant-based meat analogues in the market. They found that each beef patty in a burger contains 23.33 g of protein, whereas a meat analogue patty has approximately 19.46 g of protein (Figure 1). However, plant-based meat analogues have less cholesterol and more dietary fiber, which can be appealing to consumers. The other types of products (beef meatballs, pork ham, and chicken nuggets) also showed similar overall results (see Bohrer [14] for more detailed information). Therefore, as far as nutritional aspects are concerned, especially the protein contents, plant-based meat analogues are likely to be good substitutes for traditional meat. The products will be beneficial to consumers who cannot eat traditional meat and meat products, mostly owing to their religious and ethical beliefs. In particular, when considering the massive market expansion for halal and kosher food products as well as the increasing interest in animal welfare, among the various meat alternatives, protein sources devoid of animal protein will be in high demand as plant-based meat analogues in the future.
Research trends and challenges
Despite the good nutritional value and continuous development of plant-based meat analogues, their palatability remains a critical obstacle for consumer acceptability. For improving the texture and flavor of plant-based meat analogues, different ingredients are added during the manufacturing process (Table 3). Regarding texture, different techniques such as spinning, thermoplastic extrusion, and steam texturization have also been applied for the structural organization of plant protein, as plants are mainly composed of amorphous tissue [20,21]. Among these, extrusion is the most frequently used technique, as it is an economical method and can manufacture different shapes and sizes of meat analogues. The process is based on a screw system within a barrel [22] by means of which plant proteins are compressed, heated to be restructured into a striated, layered, and cross-linked mass, ultimately leading to the production of TVP [13,23]. Previous research suggested that utilizing wheat gluten and soybean protein as TVP ingredients could impart an appearance, texture, taste, and nutritional value similar to that of traditional meat [24]. In addition, proteins produced from starch by-products using fungi (a.k.a. mycoprotein) have structures and diameters similar to those of muscle fibers of meat with almost a similar texture [13,21].
The flavor of traditional meat is mainly derived from flavor-related compounds such as free amino acids, free fatty acids, nucleotides, and reducing sugars. Besides, vitamin B1 and myoglobin also affect the flavor of meat [25]. Therefore, when plant-based meat analogues are produced, flavor enhancers are added (Table 3). According to Kyriakopoulou et al [26], when volatile compounds in traditional meat are isolated after a combination of various thermal processes, a flavor concentrate of meat is obtained. Subsequently, different techniques have been investigated and developed to incorporate such flavor concentrates into plant-based meat analogues to achieve a meat flavor. Addition of fat/oil (e.g., canola oil, coconut oil, and sunflower oil) can also affect the formation of flavor in plant-based meat analogues as well as their texture and mouthfeel [13,14].
Another challenge for plant-based meat analogues is the appearance, especially color. The color of meat and meat analogues is an important attribute at the point of purchase in the market [27]. To represent the color of red meat, some meat analogue products contain beet juice extract or tomato paste [14]. However, meat color does not always appear red, and it changes depending on the chemical state of myoglobin, which is primarily responsible for the meat color. Despite fresh meat possessing a bright red color due to high oxymyoglobin content, the meat color changes to brown, and metmyoglobin content increases when meat is cooked [28]. Some researchers have proposed that meat analogues should have color attributes similar to those of traditional raw or cooked meat [26]. Thus, the meat industry produces and uses leghemoglobin, which has a similar chemical state and structure as myoglobin. A representative product containing leghemoglobin is the Impossible Burger (Impossible Foods Inc., Redwood City, CA, USA). When leghemoglobin is added to a meat analogue product, it imparts cooked-color characteristics similar to those of traditional meat [14,29]. Myoglobin also affects meat flavor. Thus, Fraser et al [30] reported that the use of leghemoglobin, which is similar to myoglobin, provided a distinct meat flavor to meat analogues. In addition, leghemoglobin was shown to be free of toxicity as examined by in vitro chromosomal aberration tests and in vivo systemic toxicity test [30].
As plant protein and food-grade ingredients are mainly used during manufacture of plant-based meat analogues, their safety is approved, and production cost is feasible [31]. However, several anti-nutrients (e.g., protease inhibitors, α-amylase inhibitors, lectin, polyphenols, and phytic acid) are present in plant-based meat analogues. Although these compounds are known for their positive effects, such as anticarcinogenic, anti-obesity, lymphocyte stimulation, antioxidant effects, and others, their negative effects have also been reported [13]. For example, polyphenols can decrease the activities of digestive enzymes as well as bioavailability of proteins and amino acids. Phytic acid can induce mineral depletion and micronutrient deficiency as it reduces the bioavailability of essential minerals and binds micronutrients (e.g., Fe, Zn, K, Cu, Co, Mg, and Ca). Furthermore, food allergies to plant protein need to be addressed, since plant proteins, especially legume proteins themselves contain some allergens.
Interestingly, when compared with natural beef, plant-based meat analogues have higher energy values, total fat, saturated fat, and Na and Fe contents [14], perhaps because of the addition of excess fat and/or oil (e.g., coconut oil and cocoa butter) for mimicking animal fat, coloring agents, and spices to the meat analogues during the processing of plant proteins (Table 3). These results reveal that the manufacture of plant-based meat analogues may reduce the benefits of nutrients present in the original plant protein itself. In the absence of such a processing step, the fat and saturated fat contents of plant protein varied from 0.5 to 8 and 0 to 0.9 g/100 g, respectively [32]. Nevertheless, these challenges can be overcome by advanced technological development, and plant-based meat analogues will be important protein sources in the future.
Edible insects
Definition and present features: Insects are one of the largest living resources on the earth, with a total of 5.5 million species [33]. Among them, almost 2,000 species of insects are consumed in 113 countries, especially Africa, South America, and Southeast Asia [34,35]. In such regions, eating insects is an ancient custom (so-called entomophagy) from at least 3,000 years ago. Insects have been used as a valuable protein resource for their high protein content with essential amino acids sufficient for our daily requirement [36][37][38]. The most frequently consumed species of insects are coleoptera (beetles), lepidoptera (caterpillars), hymenoptera (ants, wasps, and bees), orthoptera (locusts, grasshoppers, and crickets), hemiptera (leafhoppers, planthoppers, and cicadas), isoptera (termites), odonata (dragonflies), and diptera (flies) [39,40].
However, the acceptance of eating insects is low among Western consumers, mostly because of a negative image of insects, especially as a food component. Therefore, entomophagy has decreased in our diet, as various food product options have become increasingly available with the development of food science and technology [36,37,41]. At the same time, there is an urgent need for meat alternatives, given the importance of traditional meat as a staple in our diet [42], and the importance of edible insects has consequently re-emerged with the increasing need for alternative protein sources. In recent years, the market for insects has been steadily increasing and is expected to exceed US $522 million by 2023 [43].
Benefits as meat alternatives
The major purpose of the consumption of insects by humans is to provide an excellent source of proteins. The nutritional values of edible insects vary depending on their species, sex, metamorphosis state (e.g., larvae, pupae, and adults), origin, diet, and different methods of processing due to their large diversities (Table 4) [35,44]. Xiaoming et al [45] reported that protein content in 100 different species ranged from 13% to 77% on the basis of their dry matter. Analyses of 87 insect species in Mexico revealed a protein content of 15% to 81% with high digestibility [46]. de Castro et al [44] reviewed the nutritional value of frequently consumed insects (e.g., beetles, flies, bugs, bees, wasps, sawflies and ants, butterflies and moths, grasshoppers, crickets, and locusts) and found large variations (1% to 81%) among the protein contents. The bioavailability of insect protein is also high with good digestibility (76% to 96%), which is a little less than that of egg or beef protein (95% and 98%, respectively) [35,47]. Thus, undoubtedly, insects can serve as a fine protein source in our diet. In Central Africa, there was a time when about 50% of dietary proteins were obtained from insects [42]. Compared to plant protein, insect protein has nutritional benefits with respect to total protein levels, essential amino acids, and bioavailability. Kouřimská and Adámková [35] stated that some species of insects have high lysine, tryptophan, and threonine contents, which are not found in some plants.
Edible insects can provide other beneficial nutrients such as fats with highly unsaturated fatty acids, vitamins, and minerals [35,44]. In insects, fat is the second abundant nutrient (approximately 10% to 60%, on the basis of dry matter) followed by proteins. In general, the fats can be classified into 80% triglycerides and 20% phospholipids, which play a role in energy reserves, cell membrane structure, and regulatory physiology [35,48]. The profile of unsaturated fatty acids in edible insects is comparable to that of poultry and fish; however, insects have more polyunsaturated fatty acids [42,49]. Rumpold and Schlüter [49] reported that major omega-3 fatty acids, including eicosapentaenoic acid and docosahexaenoic acid, were not detected in most insects; however, their levels could be increased with feed modifications during insect rearing. In addition, edible insects are rich in Fe, Zn, Na, Ca, P, Mg, Mn, Cu, riboflavin, pantothenic acid, and biotin [49,50]. The benefits of edible insects are not only limited to their high nutritional content, but also to high feed/meat conversion rate and lower requirements of land, water, and feed [44,51,52]. In addition, they have a high fecundity rate with year-round breeding and small space requirements. In some species (e.g., palm weevil larvae), the byproducts can be used for other livestock and/or humans, resulting in high recycling capability.
Research trends and challenges
Many studies have been conducted on the use of edible insects as human food or food ingredients. However, despite constant efforts to expand their market and consumption, eating insects may not become a mainstream dining option [43]. People are hesitant to consume insects owing to a skeptical attitude towards novel foods [42,52]; this is a part of food neophobia, which can determine the acceptance of edible insects as meat alternatives [53]. Consumers who have not experienced consuming edible insects perceive insects to be dirty, disgusting, and dangerous, ultimately rejecting them as a food resource [54]. This phenomenon is a main challenge for the consumption of edible insects, especially in Western countries [43]. According to Verbeke [55], only a few consumers (12.8% of males and 6.3% of females) in Western society accept edible insects as a food item. Post [56] also reported that most of the insects in the Netherlands are used as pet food rather than in the human diet. To overcome food neophobia related to insects, regular inclusion of insects in the daily diet can be helpful, while increasing their positive perception [44,52]. Imparting information on the benefits of edible insects with respect to nutrition, environment, and culture is considered another solution [51,57]. However, its actual effect is still negligible, and Western civilization is not ready to eat insects in their intact forms [52]. The development of insect-based ingredients/products rather than intact forms can facilitate the adoption of insects as a food resource [57][58][59]. Therefore, several studies have been conducted to process insects as new food ingredients and to include them in familiar foods or in the processing of food products [36,37]. These methods involve raw material processing, protein processing, and oil processing [53], which can improve the quality characteristics (e.g., flavor) and functional properties (e.g., angiotensin I converting enzyme inhibitory activity and antimicrobial and antioxidant functions) when applied to food ingredients [44,60]. Recently, raw material processing through drying and/or milling is the most widely used method for applying edible insects as a food product. When insects are converted to a dry powder, their volume is lower than that of the original product, resulting in easier transportation; in addition, the product can be stored for a long time owing to low water activity. Meanwhile, protein and oil-processing methods have been investigated to extract proteins and oils from insects. These extraction processes not only enhance nutritional value but also increase technical functional properties [37]. Various edible insects are added as ingredients to foods such as bread, cookies, and sausages to enhance their nutritional value and food quality. Therefore, insects can be used to enhance the aforementioned properties of food products without the negative impact of their image [52].
Nonetheless, safety issues of edible insects, such as antinutrients (e.g., chitin and toxic substances [cryptotoxics and phanerotoxics]), microbial risk, and allergens, still exist [42,44]. Sufficient data to confirm the safety of anti-nutrients in insects should be obtained in future studies. In particular, since studies on food allergies of insects are limited, further investigations are needed for the growth of the edible insect industry [39]. Till date, some allergic cross-reactive proteins of arthropods (arachnids and crustaceans) are known [61].
Cultured meat
Definition and present features: Cultured meat (also called in vitro meat, synthetic meat, lab-grown meat, bioartificial muscle, and Frankenstein meat) is the latest emerging meat alternative. It can be defined as artificial meat produced using stem cell technology [62]. The idea of cultured meat was first mentioned in 1932 by Winston Churchill, a former prime minister of the UK. Cell and tissue engineering techniques have been developed for medical purposes. However, recently, because of advanced technological inputs, they have been applied in the field of food technology [63,64] for large-scale culturing [56]. Based on such developments, the first beef patty cultured from bovine muscle cells was introduced to the public in 2013. The patty was made of muscle cells with the addition of beet juice and saffron to make a meat-like product; however, the production cost was extremely high [63,65].
So far, cultured meat has not been commercialized owing to technical difficulties in its mass production and cost. The patty (approximately 85 g) made by Dr. Post required US $330,000 in 2013, and a meatball (approximately 1 kg), which was recently unveiled by Memphis Meat, cost US $40,000 [63,65]. Therefore, to launch cultured meat in the market, its production cost should be lowered, and its quality characteristics should be improved. The whole production process of cultured meat should first be optimized [66]. Once cultured meat is produced with a quality similar to that of traditional meat, it may play an important role in increasing meat supplies because it will be the only meat alternative that consists of actual animal protein [67,68]. Mosameat, Memphis Meats, Super Meat, Integriculture, Just, and others are major companies manufacturing cultured meat; they are planning to release their products from 2021. Various types of cultured meat, such as meatballs, burgers, and sausages may be launched, and their market size is expected to be US $4.3 million for meatballs, US $3.7 million for burgers, and US $3.3 million for sausages. One article reported that the advent of cultured meat is expected to change trends in the global meat market, as it is expected to occupy 35% of the global meat market in the next 20 years [69].
Benefits as meat alternatives
The biggest merits of cultured meat are its similarities to traditional meat, as it is derived from farm animals, and its potential environmental sustainability [67,68]. This product can meet both the nutritional and sensory preferences of consumers because of its superior taste and texture compared with other meat alternatives [62]. In this respect, cultured meat can attract consumers who do not want to change their traditional style of meat consumption. Besides, according to Zhang et al [68], during the production of cultured meat, a single cell can proliferate many times; therefore, fewer animals are needed than in livestock farming.
In addition, there are other advantages of cultured meat. Bhat et al [63] suggested that cultured meat may be utilized for several other applications, such as creation of functional and designer meat, quick production, availability of exotic meat, vegan meat, efficient nutrient, and energy conversion. Besides, the benefits of cultured meat include public support, animal welfare, reduction in zoonotic and food borne disease, reduction in resource use and ecological foot print, and reforestation and wild life protection; it can also be used for space missions and settlements. Although the development of cultured meat is still in progress, it may be possible to control the ingredients in the products to have more health benefits without long farming processes [63]. In addition, all processes in culturing meat are conducted under sterile conditions employing various food quality and safety management systems such as Good Manufacturing Practice and Hazard Analysis and Critical Control Points. Therefore, it is possible to produce safer products devoid of hazards such as contamination, antibiotic abuse, infectious diseases, and food poisoning [68].
Research trends and challenges
Although cultured meat is about to be released in a few years, technologies for its processing are still insufficient. The most urgent challenge could possibly be the development and optimization of mass production process with reasonable pricing. From the choice of cells to tissue engineering techniques (Table 5) (See Specht et al [70] for more detailed information), uncertainties in cell culture and muscle development should be studied and further optimized for the mass production of cultured meat [63,65]. Gaydhane et al [71] suggested cells, culture media, scaffolds, bioreactors, culture conditions, and processing (also called mimicking) as the key factors for producing cultured meat; this report mostly agrees with other studies [68,[72][73][74]. As the range of studies conducted on such factors is quite wide and comprehensive details are not yet clear, only a brief introduction on the culture media, scaffolds, and bioreactors will be discussed in this review based on the currently-available literatures.
During cell culture, optimal formulation of culture media is important, as it can affect growth rate of cells [71]. Culture media contain various nutrients, hormones, sera with growth factors, and other components for cell growth [73]. Among them, the use of serum (e.g., fetal bovine serum, horse serum) in culture media is a cause for concern. Serum is a necessary component in culture media, as it can facilitate the growth of muscle satellite cells. However, researchers have suggested that its use in culture media should be replaced or eliminated, as it is variable and expensive and is a main reason of high production cost of cultured meat [67]. In addition, its production process may not be ethical and sustainable, as it is derived from calves. Therefore, alternative ingredients for serum in culture media, especially serum-free media, have been one of the main research areas for cultured meat.
Bioreactor and scaffolds are other important factors in mass production of cultured meat [75]. In general, a bioreactor is applied for large-scale cell growth under controlled conditions of temperature, pH, oxygen partial pressure, and shear stress, providing a more homogeneous environment during cell proliferation and/or differentiation with detailed monitoring of its conditions [76,77]. Previous studies have reported that different types and conditions of bioreactors can affect mass production of cultured meat. In the last few years, various types of bioreactors (e.g., stirred tank bioreactor [a.k.a. spinner flask], High-Aspect-Ratio-Vessel bioreactor, fluidized bed bioreactor, hollow fiber bioreactors, and packed bed bioreactors) have been developed with different sizes [76,78]. Moreover, not only temperature and pH, but also oxygen partial pressure and shear stress are important for optimal conditions of a bioreactor. For example, low oxygen partial pressure decreases the differentiation rate of cells, but increases their proliferation. In the case of shear stress, its application with increasing impeller size and rpm as well as its location and the internal vessel used can affect cell damage. Therefore, low shear stress and stable oxygen perfusion should be set up in a bioreactor even at large volumes [72]. Furthermore, efficiencies of bioreactors varied for different cell lines. Therefore, customized bioreactors and their proper use should be investigated for optimization of mass production of cultured meat. Scaffolding is a method that can impart more meat-like texture to cultured meat instead of complex co-culture of connective tissue [79]. Scaffolds consist of biopolymers, and their application is known to be best suited for cultured meat. Cell-attached scaffolds are suspended in a bioreactor with culture media, producing cultured meat on a large-scale [64,68]. When considering the requirements for scaffolds, collagen is the most frequently used material, and plant-based sources (e.g., alginate, cellulose, or chitosan) have also been developed [79]. However, so far, scaffolding cannot be used to prepare a highly-structured product and is only capable of producing ground and/or emulsified products; therefore, improvement of a highly-developed structure of cultured meat is one of the challenges in future [63,72].
CONCLUSION
There is no doubt that livestock-based traditional meat and meat products are the best protein sources, with excellent palatability and ample consumption. However, changes in consumers' perception and the value of land/water resources and environmental sustainability will lead to the development of meat alternatives. Consequently, to conserve the limited supply of traditional meat, meat alternatives, including plant-based meat analogues, edible insects, and cultured meat, will play important roles, depending on the degree of their technical development and consumer acceptance, while maintaining a complementary relationship with traditional meat (Figure 2).
CONFLICT OF INTEREST
We certify that there is no conflict of interest with any financial organization regarding the material discussed in the manuscript.
ACKNOWLEDGMENTS
This study was supported by "High Value-added Food Technology Development Program (Project No. 118042-03)", Korea Institute of Planning and Evaluation for Technology in Food, Agriculture, Forestry and Fisheries and the BK21 Plus Program of the Department of Agricultural Biotechnology, Seoul
A Study of the External Process of Specialized Document Production
The process of producing a specialized document can be considered to consist of an internal and an external part. The internal process is the mental or cognitive side, not accessible to direct observation, whilst the external process is all which can be witnessed by an observer. We model the internal part as a process of decision-making which is steered by controlling influences. These originate in the external process. The way in which the task, the agents and the controlling influences interrelate is then elaborated on in an empirical analysis of the production process of Patient Information Leaflets.
Focus on the Process
In the present study we look into the process of creating specialized documents. This process has two sides which we call the internal and the external process. The internal process is the mental or cognitive activity required for a person to produce a document. The external process is all which can be noted by an outside observer. It may be roughly equated with the workflow 1 .
It is our objective to describe some of the ways in which the two sides of the overall process are interrelated. To this end, we discuss a combination of two models which depict the decision process and the controlling influences between the external and the internal process (section 2). We then elaborate on the models using the document type of the Patient Information Leaflet (PIL) as an example (section 3). In a short conclusion we try to bring the theoretical and practical findings together (section 4).
The present contribution is a short study in which we try to sketch our theoretical viewpoint and to substantiate it with an analysis of a single set of actual materials from professional practice. This article cannot, however, give a comprehensive account of the field. To provide such an account, large-scale further study is required.
Choosing as its object of study the process of producing documents with a specialized content, this study positions itself in specialized communication studies. This is a fairly new discipline which still lacks a consolidated name in English 2 . It came into being when its two precursor strands - the study of languages for special purposes for the monolingual and translation studies for the multilingual perspective - to some extent converged where they both investigate oral and written communication with content from specialized domains. This is an area where it quite frequently is unclear and very often impossible to establish whether a certain text or document is an original or a translation 3 , whether it was created as a coherent workpiece by a single author, by a team of co-operating authors or maybe by a documentation manager who recombined elements of documents from a content management system previously created at different points in time by different authors and as components of different documentations 4 . The latter kind of re-use of components is typical for large sectors of technical documentation, especially where the techniques of single-source publishing and cross-media publishing are applied 5 . In section 3, we analyse pharmaceutical documentation. Given its highly standardized nature, similar techniques of component re-use are applicable in that field as well.
Both forerunner disciplines of specialized communication studies have at some point in their development been mainly concerned with the workpiece, that is, the text or document viewed as a static object, and only later turned their interest towards the activity and the process in which the workpiece is created. One of the common ways of modelling work processes is focusing on the decisions made by the acting person. We adopt this view and try to describe the document production as a decision-making process. This may seem contradictory, since we stated that we are concerned with the external process, whilst decision-making quite obviously is part of the internal process. Yet decisions can be controlled or at least influenced by external factors and an analysis of the external process has to account for the ways in which it exerts this control over the internal.
The Process of Specialized Document Production
In specialized communication, authors, technical writers, translators and other specialists carry out work which is subject to many conditions, norms, constraints and other factors which we subsume under the notion of controlling influences (Schubert 2007: 136). The controlling influences constitute a major difference between these professional writers and the schoolchildren and essay-writers whose behaviour is often studied in writing process research. To capture this professional work process, we sketch a model of decision-making (2.1.) and a related model of the controlling influences (2.2.).
A Model of Decision Processes
We have proposed a model of decision processes which takes into account both internal and external influences 6 .
3 Cf. House's well-known concept of covert translation (House 1977: 188; 1997: 29). - Cf. Knapp/Knapp-Potthoff (1985: 451), Hatim/Mason (1990: 16-19), "the translator's invisibility" (Venuti 1995). 4 We use the term workpiece to reserve the word product for the object (engine, software system, drug, procedure...) dealt with in the text or document. - We use the term recombination for processes in which previously stored document components are reassembled to form new documents, whereby the reading path may come to differ from the writing path (cf. Schubert 2003: 232 note 8). 5 Single-source publishing and cross-media publishing are techniques commonly used in technical documentation, website management, software localization and other areas of specialized communication. The first basic principle is producing, editing and, where applicable, translating text in small portions, called contents or topics. These units are normally stored in a database or some other kind of repository from which they can be retrieved, re-used and recombined to make up new texts. The second basic principle is storing the text and its formatting information separately. These two principles allow for the production of different versions of documents, in part or wholly identical in text, but different in appearance. For instance, a user's manual can be produced in print, as a PDF document and as a webpage, with essentially the same text but with small differences such as "see chapter 7" in the print version and a hyperlink in the electronic documents. This is the cross-media aspect. Although the eventual documents differ in appearance and, in some parts, in content, each unit of text is stored only a single time. This is the single-source aspect. The techniques are used to reduce labour and production cost and to achieve consistency. Content management systems provide a software environment which can accommodate the entire process of single-source and cross-media publishing. - See for instance Hennig/Tjarks-Sobhani (eds) (1998, s.v. Produktion, medienneutrale), Albers (2003), Williams (2003), Closs (2005; 2007), Ferlien (2006).
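The single-source/cross-media idea sketched in footnote 5 can be illustrated with a toy example (ours, not taken from the cited literature, with hypothetical topic names): each topic is stored once without formatting, and cross-references are resolved differently per output medium, e.g. as "see chapter 7" in print and as a hyperlink on the web.

```python
# Toy illustration of single-source, cross-media publishing (hypothetical data).
topics = {
    "safety-note": "Disconnect the device before cleaning. See {ref:maintenance}.",
}
cross_refs = {
    "maintenance": {"print": "chapter 7",
                    "html": '<a href="#maintenance">Maintenance</a>'},
}

def render(topic_id, medium):
    # The text unit is stored only once; only the reference is rendered per medium.
    text = topics[topic_id]
    for key, targets in cross_refs.items():
        text = text.replace("{ref:" + key + "}", targets[medium])
    return text

print(render("safety-note", "print"))
print(render("safety-note", "html"))
```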
The basic idea of this model is conceiving of deciding as the process of selecting one out of a given number of possible options. Depending on the task, the number of possible options may be smaller or larger, including the infinite. The set of possible options is called the decision space. Each option has a number of features. It is then assumed that there is a (mental or automated) decision mechanism which consists of rules that comprise criteria. The mechanism will then match the features of the options against the criteria of the rules. If the criteria and the features are sufficiently distinctive, a single option will be selected. If not, arbitrary criteria will be resorted to, such as (in a mental mechanism) the nicest option or (in an automated mechanism) the first-encountered option. (Schubert 2009b: 27-28) This model may give a rather deterministic impression. However, it is not at all meant to imply any equivalence between human and automated processes. Its main purpose in our present reasoning is to provide some concepts and terms by which to account for the decision process.
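As a rough, purely illustrative formalisation of this mechanism (our own sketch, not part of Schubert's model), the matching of option features against rule criteria, including the fallback to an arbitrary choice, could look as follows:

```python
from dataclasses import dataclass, field

@dataclass
class Option:
    name: str
    features: set = field(default_factory=set)

def decide(decision_space, criteria):
    # Keep the options whose features satisfy all criteria; if the criteria are
    # not distinctive (several or no options remain), fall back to an arbitrary
    # choice -- here simply the first-encountered option.
    matching = [o for o in decision_space if criteria <= o.features]
    if len(matching) == 1:
        return matching[0]
    return (matching or decision_space)[0]

# Hypothetical example: choosing a wording for an instruction in a leaflet.
options = [
    Option("imperative", {"concise", "direct"}),
    Option("passive", {"formal", "impersonal"}),
    Option("infinitive", {"concise", "impersonal"}),
]
print(decide(options, {"concise", "direct"}).name)  # imperative
```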
In the creation of specialized documents, decisions need to be made at various levels and concerning a number of different features of the workpiece at hand and concerning characteristics of the activity. Decisions are required as to what to say, in which order to say it, what to express by means of language and for what to prefer images, graphics, videos and other forms of illustrations, at which level of speciality to express it, in which language to write the text, by which linguistic means to word it, in which way to arrange the text on the sheet or screen and eventually by which tools to carry out the work and how to organize the process. This list of decisions is based on the finding that specialized communication, and along with it to some extent other types of communication as well, can in a meaningful way be described in terms of four dimensions. These are the dimension of the technical content, the dimension of the linguistic form, the dimension of the technical medium and the dimension of the work processes (Schubert 2007: 248). Obviously the list of decisions is by no means exhaustive. Its length depends on the granularity of analysis applied in a given case.
Using the model given above, the decision-making in the creation of specialized documents can be described roughly as follows.
For each of the decisions to be made in an overall document production process, the model assumes that there is a set of all possible options from which to choose. This set is the decision space. The options contained in the decision space have features. From a systematic point of view it is important to establish whether each option can be uniquely identified by means of its features. Only if this is the case are the features distinctive, and only then can a meaningful decision be achieved by means of the features. If such distinctiveness cannot be ascertained, one either has to refine the set of features or to resort to arbitrary decisions. Both approaches are legitimate and both can be assumed to occur in everyday decision-making.
Decision-making is a two-sided act. On the one side, there is the decision space, made up of the options, which in turn are characterized by their features. On the other side, there is the agent, who is equipped with (explicit or implicit, conscious or unconscious) decision rules which in turn are based on criteria. At the core of the overall decision-making act, the features are matched against the criteria.
With this rather abstract account of the decision process in mind, the production of specialized documents can now be described in a more systematic and at the same time more concrete way.
In the dimension of the technical content, decisions have to be made as to which portions of content to include, in which sequential or networked structure to arrange them and which access structure to provide 7 . The options in this dimension can be described by means of features worded in terms of the macrostructure. The more the description takes the microstructure into account, the more features can be taken from speech act theory 8 and other models which classify propositions or portions of content at the semantic or pragmatic level. In technical writing, information structuring techniques such as Information Mapping 9 or Funktionsdesign 10 are commonly applied which take their basic concepts explicitly from speech act theory. While the design of these two techniques has its starting point in a semantic and pragmatic approach to communication, the more technically oriented newer techniques and data formats such as the Darwin Information Typing Architecture (DITA) 11 , which has been very much en vogue in the technical communication of this decade, are no longer aware of (or do not explicitly acknowledge) their roots in speech act theory, but nevertheless refer to it indirectly by quoting the preceding structured-writing techniques.
In the dimension of the linguistic form, the decisions concern the choice of words, especially the choice between terms and common words, between various degrees of specialization and formality. They further concern the choice of syntactic constructions including syntactic complexity. At the text level, decisions are made as to the use of the instruments of cohesion and coherence, among which are the verbatim repetition of words versus stylistic variation, the use of anaphora, cataphora and other pro-forms, and the arrangement of theme-rheme structures.
In the dimension of the technical medium, the typography and the layout need to be decided upon, along with the general design of the document including, where applicable, the webdesign and the hypertext functionality. This comprises the placement of illustrations, captions and other components which accompany the body of the text.
In the dimension of the work processes, an important set of decisions is concerned with the choice and use of tools. The main tools in specialized document production are software systems such as word processing and desktop publishing systems, XML and HTML editors and terminology management, translation memory, authoring memory, machine translation, content management and workflow management systems. This dimension, however, is not only concerned with tools and their impact on the process. It also includes decisions on the specialists to be assigned tasks in the overall process, on the route the workpiece should take from one specialist to the other, on secondary processes to be started and subsequently on the workpieces from these processes to be fed into the primary process etcetera 12 .
Above we outlined a model of decision processes in general terms which we then applied to specialized document production. The idea of describing communicative acts in terms of decision-making is not new. It has roots in many disciplines, of which translation studies is closest to our present discussion. This discipline emerged in the late 1940s and early 1950s, primarily in response to machine translation 13, from which it inherited the procedural view on translation. With a view to technical and scientific text types, Jumpelt (1961: 186) notes the desirability of a theory which would describe the aspects steering the translator's decisions.
9 Information Mapping is a structured-writing technique (The Top Ten Things 2008: 9) developed for technical communication by Robert E. Horn. It uses a classification of speech acts derived from classical speech act theory, but restricted to and refined for the speech acts needed in technical documentation. The essential principle is building up complex documents from small, monothematic units, called blocks, which are assembled into larger units, maps. Information Mapping is a commercially exploited trademark, so that most of the publications from its author and his team have a commercial rather than a scholarly tenor. By the author: Horn (1986, 1989, 1999). Independent publications: Jansen (2002), Information Mapping (2006?), Böhler (2008).
10 Funktionsdesign, too, is a structured-writing technique for technical communication. Its authors are Jürgen Muthig and Robert Schäflein-Armbruster. Their approach is inspired by speech act theory and controlled languages. The technique analyses documents into four layers, of which the funktionale Einheit 'functional unit' is a small chunk comparable to the block of Information Mapping. These units are arranged in sequences which in turn form documents. Funktionsdesign is a commercially exploited trademark, so that independent publications are scarce. Publications by its authors: Schäflein-Armbruster (2004), Muthig/Schäflein-Armbruster (2008).
11 DITA is a data structure for accommodating the components of documents written in a structured way. In the literature on DITA, which is mainly concerned with the dimension of the technical medium, only a few scattered paragraphs summarize some of the deliberations underlying its design, thereby positioning this technique in the dimensions of the technical content and the linguistic form as well: DITA (2007: 14), Closs (2007: 112). Closs's information concerning the authors of the speech act theory must be a blunder, but she is right in connecting DITA to this theory.
12 A complex work process normally consists of several tasks. We consider these tasks elements of a single process as long as they handle the same workpiece. Tasks and sequences of tasks which have a different workpiece are considered to make up a separate process. If the workpiece or deliverable of one process is used in another process, the former is called secondary to the latter (Schubert 2007: 9). For instance, a terminographic process which delivers a set of entries in a termbank can be secondary to a translation process in which these terms are used. The translation process may in turn be secondary to a document production process, in which translations of the original document are ordered and subsequently assembled to make up a single multilingual documentation. The distinction of primary and secondary processes thus is a relative one.
Coming from literary translation, Levý (1967) suggests describing translating as a decision process 14. His short article is remarkable in several respects. Levý adopts a pragmatic vantage point and speaks of "the working situation of the translator" (Levý 1967: 1171), an unheard-of category in the linguistic discourse of his day. Levý introduces the term paradigm for "the class of possible solutions" (Levý 1967: 1171). This term corresponds to the decision space in our model. Levý's model is less clear as to the distinction of what we call features and criteria. He uses the term instruction to denote various functions which both define the features of the possible solutions and the criteria by means of which the translator chooses among them. The decision-making approach catches on in translation studies. It is used, varied and developed 15. In a much more elaborate form it is continued in the approach advocated by Gerzymisch-Arbogast and Mudersbach (1998). This latter direction is especially relevant to our research interest, since from the methods approach there is a connection to specialized communication research as pursued by Kalverkämper (1998: 1-2) 16.
Controlling Influences
The description of the specialized document production process sketched in 2.1. in terms of the suggested model primarily focuses on decision-making and thereby on the internal process. This now allows for an analysis of the ways in which the external process exerts an impact on the internal one, so that the two eventually form a single whole. The external part of the overall process is that which can be observed by others, that is, all physical rather than mental actions carried out by the document-producing specialist.
Many among the activities in the external process have an effect on the decisions made by the document producer. We call these effects controlling influences (Schubert 2007: 136; 2009b: 23-25). Before we proceed, it may be worth considering the concept of the controlling influence, which is central to our argument. The concept is taken from the Integrative Model of Specialized Communication suggested by Schubert (2007: 136 et passim: lenkender Einfluss). Generalizing what was discussed in the above paragraphs, one can say that the term denotes every kind of stimulus or constraint affecting a document-producing professional's decisions which originates from any other person or group of persons. Influences of this kind can be positive in the sense that they prescribe a certain option, and they can be negative in the sense that they forbid a specific option. Maybe the word constraint would sound more familiar. However, constraint is not as neutral and open as influence. A constraint is more on the negative side, often denoting a restriction, whereas control and influence comprise both positive and negative meanings. Another term to consider in this connection is the norm, amply discussed by Chesterman (1997: 54-59). With a reference to Bartsch (1987: 76), Chesterman deliberates the possible prescriptive and descriptive readings of the term and opts for the latter. In his words, the concept of a norm is "descriptive of particular practices within a given community" (Chesterman 1997: 54). Defined in such a way, the term is very useful in specialized communication studies 17. Yet for our present study we need a term which includes all influences which control the professional's decisions: the prescriptive ones (such as standards and legislation) and the habitual ones (such as a specific industry's best practice). For this, we choose the term controlling influence.
13 Fedorov (1953/1968), Kade (1968: 7), Wilss (1988a: 2), Gerzymisch-Arbogast (2002: 18; 2003: 25), Schubert (2007: 163-173).
14 Levý (1967) is often referred to as being the first to view translation as a form of decision-making. In general translation studies, Levý's work has indeed become seminal. However, the book by Jumpelt (1961) appeared earlier, and to the study of specialized communication it is even more pertinent than Levý's. The importance of Jumpelt's work is emphasized among others by Oettinger (1963), Kade (1968: 7), Stolze (1994: 71-72), Chesterman (1997: 41), Schubert (2007: 176) and Olohan (2009).
15 See for instance Reiß (1976/1993; 1981/2000), Kußmaul (1986/1994), Wilss (1988a: 92-107; 1988b; 1998), Gerzymisch-Arbogast (1996); cf. Shuttleworth/Cowie (1997).
16 Concerning these connections cf. Schubert (2007: 200).
Most of the controlling influences become relevant for the process when the document producer interacts with other persons. We therefore review some of the major influences by discussing the agents who take part in the process, directly or indirectly and in some cases even unknowingly.
Specialized document production normally is a form of mediated communication (Schubert 2007: 136). The document producer carries out an assignment received from an external customer or a department within the same enterprise or organization. We call this agent the initiator. The main instrument of the controlling influences from the initiator's side is the assignment brief. This can be a letter, a fax, an e-mail message, a telephone call or the like in which the initiator specifies the assignment. Leaving aside the business elements of the brief, such as price and deadline, the main content is instructions on what kind of document to create, about which topic, in which language or languages, for which audience, using which sources of information, and applying which resources such as termbanks, authoring memories, content repositories etc. If the instructions for the document producer are more numerous or more detailed, and especially if the same set of instructions will be used for many assignments, it is common to compile them into a style guide which then becomes the main instrument of these controlling influences 18.
A second group of agents from whom controlling influences can originate is the recipients or audience. An audience analysis is a common task in the overall process. From this analysis derive controlling influences which set criteria for the decisions within the dimensions of the technical content and the linguistic form. Comprehensibility requirements may for instance control the choice of common words rather than terms (linguistic form), and if terms are inevitable, they may result in a decision to add explanations (technical content). Whilst it normally is assumed that the audience analysis is the document producer's duty, it is worth noting that Göpferich in her Karlsruhe Comprehensibility Concept lists this among the information the initiator has to provide (Göpferich 2001; 2009: 34, Fig. 1).
A third group of agents to be considered is the team in which the document producer works. In specialized communication, the workpieces are frequently much too large and the deadlines too short for a single person to carry out the entire assignment. Therefore, teams are employed, which leads to consistency requirements in all dimensions and, in addition, co-ordination requirements in the dimension of the work processes. Another form of team work in a very wide sense is the use of content management systems and similar repositories. In processes supported by such systems, documents may be created by a recombination of previously produced documents or document components, which then often originate from various authors who, when producing their workpieces, did not know when, by whom and in which assignments these would be re-used.
The fourth important group of agents is the informants. We use this term in a broad sense for all persons with whom the document producer is in contact when researching information. These can be experts consulted for content-matter clarifications or for information on terms or other questions of language use in the relevant speciality community. Much of the information research is of course done by searching libraries, archives, the Internet and other sources of printed or written documents, and we count the authors of such documents among the informants. They play a role in the document production process normally without even knowing.
The model of decision-making allows analysing the controlling influences more precisely. Take as an example the assignment brief. The more detailed it is and the more dimensions in which it gives instructions, the more it controls the document producer's decisions. In the model of decision-making there are two prominent elements where controlling influences can take effect. These are the decision criteria and the decision space. If the decision criteria are influenced, the effect will be that out of a given set of options, some specific options will be preferred and others dispreferred. If an option is dispreferred, it will not be chosen, as long as there are other, preferred options. By contrast, an influence on the decision space will make an option either selectable or unselectable. An option removed from the decision space cannot be chosen, even if there is no other option. In this way, controlling influences which affect the decision space are more rigorous than influences on the criteria.
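The distinction between influences on the decision criteria and influences on the decision space can be made concrete in a small computational sketch. The sketch below is purely our own illustration: the names (Option, decide) and the toy features are hypothetical and not part of any cited framework. It only shows that a constraint on the decision space removes options outright, whereas a criterion merely ranks the remaining ones.

```python
# Minimal illustrative sketch of the decision model described above.
# All names (Option, decide, the toy features) are our own and purely
# hypothetical; they are not part of the cited framework.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    features: dict  # e.g. {"passive": True, "uses_terms": True}

def decide(decision_space, criteria, constraints):
    """criteria: scoring functions over features (influence preferences);
    constraints: predicates that remove options from the decision space."""
    # Influences on the decision space: options failing a constraint
    # become unselectable, even if no alternative remains.
    selectable = [o for o in decision_space
                  if all(c(o.features) for c in constraints)]
    if not selectable:
        return None  # no admissible option left
    # Influences on the criteria: the remaining options are only ranked,
    # so a dispreferred option is still chosen when nothing better exists.
    return max(selectable, key=lambda o: sum(cr(o.features) for cr in criteria))

# Example: a style guide forbids passive wording (decision space) and
# prefers common words over terms (decision criteria).
options = [
    Option("passive, term-heavy", {"passive": True, "uses_terms": True}),
    Option("active, plain wording", {"passive": False, "uses_terms": False}),
]
no_passive = lambda f: not f["passive"]
prefer_plain = lambda f: 0 if f["uses_terms"] else 1
print(decide(options, [prefer_plain], [no_passive]).name)  # active, plain wording
```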
When the initiator in the assignment brief instructs the document producer to write a package insert of a drug, this restricts the decision space for the choice of content drastically. When the text type in itself involves the use of a prescribed macrostructure or if such a requirement is explicitly worded in the brief, this reduces the decision space for the sequencing and arrangement of the content. Both controlling influences lie in the dimension of the technical content. Other influences in this dimension originate from the use of information structuring techniques such as Information Mapping or Funktionsdesign, which recommend or prescribe patterns of content sequencing at the macro- and at the microlevel. Macrolevel influences also derive from standards, manuals and legislation. This can be seen in detail in section 3.
In the dimension of the linguistic form very many different controlling influences of varying degrees of rigour can be observed. Recommendations for preferred wording are often contained in style guides, corporate-identity manuals and similar instruments. They affect the decision criteria. When these instructions are more rigorous, and especially when they are enforced by means of software systems which simply do not allow for the dispreferred words and phrases to be used, the influence affects the decision space. A specific and more elaborate form of controlling influences in the dimension of the linguistic form is contained in the use of controlled languages. These are derived from normal ethnic languages by means of lexical and syntactic reductions (Lehrndorfer 1996, Huijsen 1998). When prescribed for a document production process, they narrow the decision space.
In the dimension of the technical medium, controlling influences come mainly from the initiator who orders a workpiece to be arranged according to a given model and who prescribes its data format. These influences can restrict the decision space, and they are especially rigorous when they are enforced by means of software templates or the like provided by the initiator.
In the dimension of the work processes, the controlling influences may consist of a prescribed sequence of tasks. This kind of influence can originate from the initiator, but also from the document producer's own organization. It is most rigorous when the workflow is steered by means of a software environment which enforces a certain sequence of tasks, as is the case in authoring memory and translation memory systems with workflow or team functionality, and possibly still more so when content management software is applied in single-source and cross-media publishing processes. These software-based processes affect the decision space.
This short account shows a multitude of controlling influences. An analysis of the purposes and objectives of those who exert the influences is far beyond the scope of this contribution. Many of the controlling influences aim at optimizing the communication (cf. Schubert 2009a).
Production of a Patient Information Leaflet as a Specialized Document: an Analysis
To illustrate how the production of specialized documents is affected by controlling influences that determine the producer's decision process, we will analyze the production process of a text type within pharmaceutical documentation, i.e. the Patient Information Leaflet (PIL), also called Package Information Leaflet or package insert. A thorough empirical study of this text type 19 has been carried out as a preparatory step to the development of a software tool that aims at optimizing this text type with regard to different features of the dimension of the technical content and the dimension of the linguistic form. The tool will help to introduce the prescribed structure (including mandatory headings and sentences), to reduce redundancy, to use common words instead of scientific terminology and to formulate instructions and warnings in an unambiguous way. In developing the software it was necessary to take into account all aspects that influence the production process of this text type: prescriptive influences, constraints and the workflow within a pharmaceutical company 20. The analysis in this section is based on the insights gained from the many contacts with pharmaceutical companies, insights that have not yet been described systematically in the scientific literature. This section consists of three subsections. First of all we will define the PIL as a specialized document. Then we will analyze the impact of the controlling influences on each of the four dimensions discussed in section 2.1., and finally, we will study the controlling influences of the agents, their relation to the work process as well as their interrelations.
The PIL as a Specialized Document
Before analyzing the production process of a PIL, it is important to justify why we consider this text type a specialized document. The PIL is a text with a specific content and with specific objectives: it informs the consumer about a medicine as such and shows the necessary instructions for a proper use. At the same time it aims at ensuring legal coverage and constitutes an integral component in the registration procedure of the medicine. Three characteristics, mentioned in section 1, manifestly apply to the PIL. First of all, it is very difficult to find out whether the text of a PIL is an original text or a translation. On the European level, particularly in the case of the centralized procedure 21, the English version has to be considered as the original one; the versions in other languages as translations. In the case of national procedures, the document is drawn up in the official language or in one of the official languages of the authorities of the country concerned. As the scientific discussion is carried out in English, also those texts that are drawn up in any language other than English will include parts that are translated from English or based on studies written in English. This leads to the second characteristic mentioned in section 1. It is very difficult to discover whether the text is written by one author or by a team of authors. The PIL is a text with a high degree of intertextuality, among other things, because the content has to correspond to that of the Summary of Product Characteristics (SmPC), of which it is an adaptation 22. Moreover, as a result of the work process (cf. sections 3.2. and 3.3.) the text will rarely be, if ever so, the work of one single author. A more in-depth analysis will show that the PIL is a highly standardized document, which is its third characteristic. In the next subsection, we deal with a list of documents that contain regulations, requirements (including required headings and standard sentences) and recommendations that control and limit the options during the decision process and that are the same for all PIL texts within the European framework.
19 For previous empirical research see Van Vaerenbergh (2007a) and (2007b).
20 The software tool has been developed within the framework of the ABOP project funded by IWT Vlaanderen.
21 On the European level there are three procedures for marketing authorization applications: centralized procedure (CP), mutual recognition procedure (MRP) and decentralized procedure (DCP). Medicines authorized through the centralized procedure are registered for all EU countries. In the case of MRP, the marketing authorization is given by the competent authority of one of the EU Member States (called the reference Member State, RMS) and can be recognized in an abridged procedure by the competent authority of other Member States. In the case of DCP, identical dossiers are submitted in all Member States that want to receive a marketing authorization. A RMS is selected by the applicant. In the case of MRP and DCP, the dossier contains beside the English version of the PIL a version in the language of the RMS as well. (Definitions by the authors of the article)
22 About intertextuality in technical texts: Ostapenko (2007); about intertextuality in European text types: Schippel (2006); about the intertextual and intergeneric relation between the product summary (SmPC), a scientific document composed by experts and meant for other experts, and the PIL, a document meant for laymen: Askehave/Zethsen (2002).
Controlling Influences in the Production Process of the PIL
In each of the four dimensions discussed in section 2.1. (the technical content, the linguistic form, the technical medium and the work processes) the decision process is affected by controlling influences. These influences originate from (1) the Directive 2004/27/EC, (2) the Guideline on the readability (European Commission 2009), (3) the QRD templates (EMEA 2009) 25, together with some other documents listed under the title of "QRD reference documents" 26, and finally (4) circulars with annexes sent by the national authorities. These circulars are intended to draw attention to the valid regulations and to give additional instructions and explanations for the use of the templates. Because these circulars only have an explanatory function and are country-bound, they do not introduce new elements regarding the controlling influences. We mention them for the sake of a complete overview, but do not analyze them further.
The Directive 2004/27/EC includes a few articles concerning the PIL (European Commission 2004, L 136/48-49). They have an impact on the dimension of the technical content and the dimension of the work processes. Article 59 (1) stipulates which elements a PIL has to include and in which way sections and elements have to be ordered. This means that article 59 (1) particularly determines features of the macrostructure. Furthermore, article 59 (1) starts with the stipulation that the PIL "shall be drawn up in accordance with the summary of product characteristics" (L 136/48). This phrase does not only express a requirement of the technical content, but it also refers to the dimension of the work processes: drawing up the SmPC precedes the writing of the PIL. The dimension of the work processes is also affected by the content of article 59 (3) and article 61 (1): "The package leaflet shall reflect the results of consultations with target patient groups to ensure that it is legible, clear and easy to use." (Art. 59 (3), L 136/49) And article 61 (1) respectively: "... The results of assessments carried out in cooperation with target patient groups shall also be provided to the competent authority." (Art. 61 (1), L 136/49) Consulting target patient groups in the form of a user testing 27 is a constituent part of the production process of a PIL.
Concrete support for writing the information and performing a user testing is provided by the Guideline on the readability (European Commission 2009). This guideline is based on the information design concept developed at the Communication Research Institute of Australia (CRIA), particularly on the work of Sless/Wiseman (1997), which gives actual guidelines for people writing Consumer Medicine Information. The principles applied by Sless/Wiseman largely resemble those of Information Mapping and of Funktionsdesign 28, but they have been especially designed for application in consumer medicine information. Moreover, the work of Sless/Wiseman does not only include recommendations for writing the information, but also contains a guideline for the performance of diagnostic tests, in the same way the European guideline does.
Chapter 1, section A of the Guideline on the Readability (8-10) encompasses recommendations for: (1) type size and font, (2) design and layout, (3) headings, (4) print colour, (5) syntax, (6) style, (7) paper, and (8) use of symbols and pictograms. Most of the recommendations (with the exception of those on syntax and style) have an impact on the dimension of the technical medium. The recommendations with regard to syntax and style influence, on the one hand, the microstructural organization of the technical content. On the other hand, they control the dimension of the linguistic form. Regarding the syntax, the Guideline on the Readability recommends splitting up long paragraphs, pointing out the side effects by frequency of occurrence, starting with the highest frequency, and using bullet points for lists (9). However, other recommendations concerning syntax rather belong to the dimension of the linguistic form: it is recommended to use simple words of few syllables and to avoid long sentences (9). On the one hand, paragraph 6 on style deals with the structure of directive speech acts: "When writing, an active style should be used, instead of passive. … Instructions should come first, followed by the reasoning, for example: 'take care with X if you have asthma - it may bring on an attack'." (European Commission 2009: 9) On the other hand, it deals with the choice of words: instead of repeating the name of the medicine, it is recommended to use "your medicine, this medicine" (9). Uncommon abbreviations and acronyms should be avoided and medical terms should be explained "by giving the lay term with a description" (9-10). Chapter 1, section A ends by referring to the QRD templates. These templates have to ensure consistency in the information "across a number of different medicines and across Member States" (11).
Chapter 3 of the Guideline on the Readability is linked to article 59 (3) and to article 61 (1) of the Directive 2004/27/EC stipulating the requirement of a user testing, and it can be considered as a "Guidance concerning consultations with target patient groups for the package leaflet" (19). As an illustration, one possible way of undertaking a user testing is outlined in the annex. Five aspects of the testing are explained: performing the test, recruiting participants, suggested testing procedure, preparing for the test and success criteria (24-27). What is described here is the external work process of a consumer testing that, by itself, is a constituent part of the external process of the PIL production.
Whereas the Guideline on the Readability relies on the Directive 2004/27/EC, the documents produced by the Quality Review of Documents (QRD) working group transpose the requirements and recommendations of the Directive 2004/27/EC and the Guideline on the Readability into a practical writing help and style guide with a binding effect. The QRD template (EMEA 2009: 14-16) provides a model for the PIL and it is available for the producer in an electronic form. It determines the macrostructure of the technical content, which must consist of an introduction and six sections with headings, listed in a preceding table of contents. On a microstructural level, the QRD template provides standard sentences to be used not only for headings and subheadings, but also for the expression of directives such as instructions and safety warnings, such as "Do not <take> <use> X <if …>" or "Take special care with X <if you…> / <when …>" (14). Two other documents published under the heading of QRD reference documents have to be considered as additional to the templates. The Convention to be followed 29 (EMEA 2007) includes e.g. an explanation of the bracketing convention used in the templates (<…> or {…}) and other instructions with respect to the technical medium. The Compilation of QRD decisions on stylistic matters in product information (EMEA 2008) consists of a list of QRD solutions meant to solve specific problems. These can concern the content as well as the linguistic form or technical aspects. This will be illustrated by an example of each of these three dimensions.
(1) Dimension of the technical content: Can general information on health or disease be included in the package leaflet in certain justified cases? (EMEA 2008: 2)
(2) Dimension of the linguistic form: The patient or physician is often referred to as "he" (= problem). "He/she" should be used if no other neutral gender locution is possible. Patients can be referred to as "he" or "she" when the medicinal product is exclusively for use by males or females (= solution suggested). (EMEA 2008: 1)
(3) Dimension of the technical medium: Different languages use different number separators (a comma or a dot) to distinguish between thousands and decimals. Style of number must correspond to language used. (EMEA 2008: 2)
The controlling influences of the documents discussed in this section do not only affect the decision space and the decision criteria of the producer, but they also have an effect on other agents in the external process of the PIL production, as will be shown in the next section.
Controlling Influences and the Agents in the Production Process of the PIL
In section 2.2. we have listed and discussed four agents or groups of agents that exert controlling influences: the initiator, the recipients or audience, the team and the informants. In the case of the PIL, the role of these agents is determined and restricted by the documents dealt with in section 3.2. This can be demonstrated by the role of the initiator. When a person or a department of a pharmaceutical company assigns the task to produce a PIL text to a colleague, the assignment brief can be very concise. It suffices to refer to the Directive 2004/27/EC, the Guideline on the Readability and the QRD reference documents that contain all requirements, instructions and recommendations necessary for writing a PIL text. Furthermore, these documents introduce a further, fifth agent or group of agents and determine their role. It is the person or the team in a service company, assigned by the pharmaceutical company (initiator), that performs a consumer testing in accordance with the relevant articles in the Directive 2004/27/EC and the recommendations of the Guideline on the Readability.
As mentioned at the beginning of section 3, the contacts with pharmaceutical companies, indispensable for the development of the software tool, have contributed to better insights into the production process of the PIL. From these contacts we know that within a pharmaceutical company, the assignment to produce a PIL text originates from a steering committee (initiator) and that the work is carried out by somebody in the Regulatory Affairs department. The author knows that, on the basis of the scientific report, i.e. the Summary of Product Characteristics (SmPC), he has to write a comprehensible text for a large audience of laymen. This text has to fulfil the requirements of the documents discussed in 3.2. The text produced in the Regulatory Affairs department is read through and revised by several other departments. On the basis of its own specialization, each department pays particular attention to specific parts and elements in the text. The text production is actually a matter of team work, not only because it makes use of the content of an already existing document (the SmPC), but also because of staff collaboration. The revised version produced by the pharmaceutical company does not represent the final stage. In many cases the service company which has performed a consumer testing, and/or the competent authority involved, returns the text to the pharmaceutical company asking for further adaptations and corrections. In that way new additional initiators appear. Adaptations to the text are often made by the service companies themselves as well. These adaptations are implemented before the consumer testing is performed and also between the different test rounds 30. If these adaptations are made in consultation with the original producer 31, the continuity between the text production in the pharmaceutical company and the text production in the service company can also be considered a kind of team work.
The fourth group of agents is the informants. Almost all of them have already been mentioned before: the colleagues of the different departments in the pharmaceutical company, the authors of the SmPC, the organizers of the consumer testing, the competent authorities, the authors of the Directive 2004/27/EC and the Guideline on the Readability, and the QRD working group. We only have to add the authors of studies not mentioned before.
The presence of several initiators and different forms of team work has a considerable impact on the external production process. To some extent, the authorities responsible for the Directive 2004/27/EC, the Guideline on the Readability and the style prescriptions act as initiators and informants. They play a decisive role regarding the assignment, in determining the comprehensibility requirements with respect to the audience and in stipulating the requirements for the organizers of the consumer testing. This means that their controlling influences affect to a large degree the external process of the PIL production as well as the role of the other agents. All the agents involved restrict the decision space of the document producer. This is a characteristic that the production of the PIL has in common with the production of other specialized documents. As in the case of other specialized documents, the aim is to optimize the quality of communication by means of controlling influences 32. Whether and to what degree these controlling influences actually contribute to the optimization of communication quality is an interesting and important issue, though not within the scope of this article.
Conclusion
In this article, we studied the external process of specialized document production and the internal decision-making process based on four dimensions: technical content, linguistic form, technical medium and work processes, as well as on four agents or groups of agents: the initiator(s), the recipients, the team and the informants. We showed the external process as a source of influences controlling the internal process and postulated that the controlling influences affect the decision space as well as the decision criteria.
The analysis of the production process of the text type Patient Information Leaflet showed how external factors have an impact on the decision space within each of the four dimensions as well as on the different agents. With the theoretical model it is possible to name the different stages of the workflow and to describe more systematically the decision-making process as well as the actual controlling influences.
It would be very interesting to study, for one or more PILs, the whole work process from the first draft to the last version after consumer testing. Such an ethnographic study would bring insights into the contribution of each of the text producers and proofreaders, into the collaboration within the team, into the relation between writing and translating, and into the nature of the additions and changes, i.e. whether they concern the technical content, the linguistic form, the technical medium or the work processes. To carry out this study the collaboration with one or more pharmaceutical companies is indispensable, because they have to provide the material and to give permission for the research and the publication. Up to now, we have not found companies that have systematically collected the different versions of a PIL with metadata. The point will be to convince them of the importance of an ethnographic study.
Association of Source of Memory Complaints and Increased Risk of Cognitive Impairment and Cognitive Decline: A Community-Based Study
Background: Memory complaint is common in the elderly. Recently, it was shown that self-reported memory complaint was predictive of cognitive decline. This study aimed to investigate the predictive value of the source of memory complaints for the risk of cognitive impairment and cognitive decline in a community-based cohort. Methods: Data on memory complaints and cognitive function were collected from 1840 Chinese participants (aged ≥55 years old) in an urban community at baseline interview and 5-year follow-up. Incident cognitive impairment was identified based on education-adjusted Mini-Mental State Examination score. A logistic regression model was used to estimate the association between the source of memory complaints and risk of cognitive impairment conversion and cognitive decline, after adjusting for covariates. Results: A total of 1840 participants were included in this study, comprising 1713 cognitively normal participants and 127 participants with cognitive impairment in 2009. Among the 1713 normal participants in 2009, 130 participants converted to cognitive impairment after 5 years of follow-up. In 2014, 606 participants were identified as having cognitive decline. Both self- and informant-reported memory complaints were associated with an increased risk of cognitive impairment (odds ratio [OR] = 1.60, 95% confidence interval [CI]: 1.04–2.48) and cognitive decline (OR = 1.30, 95% CI: 1.01–1.68). Furthermore, this association was more significant in males (OR = 2.10, 95% CI: 1.04–4.24 for cognitive impairment and OR = 1.87, 95% CI: 1.20–2.99 for cognitive decline) and at higher education levels (OR = 1.79, 95% CI: 1.02–3.15 for cognitive impairment and OR = 1.40, 95% CI: 1.02–1.91 for cognitive decline). Conclusions: Both self- and informant-reported memory complaints were associated with an increased risk of cognitive impairment conversion and cognitive decline, especially in persons with male gender and high educational background.
better predicted cognitive impairment than self-complaint. [8] Furthermore, Gifford et al. [9] recently found that participants with normal cognition who had both self- and informant-reported complaints were at the highest risk of conversion to dementia. Cognitive complaints are an important early sign of imminent AD, especially in persons with a high level of education, [10] who perform well in objective cognitive assessments because of high cognitive reserve. Therefore, measuring the predictive value of different sources of memory complaint is important for the evaluation of future cognitive function change.
We performed a prospective population-based study on participants aged 55 years or older in an urban Chinese community, aiming to investigate the predictive value of the source of memory complaints on cognitive impairment conversion and cognitive decline.
Ethical approval
The prospective study was approved by the Ethics Committee of Ruijin Hospital affiliated to Shanghai Jiao Tong University School of Medicine. Written informed consent was obtained from all participants in this study.
Study population
This study was part of the Wuliqiao (urban) Community Epidemiological Study that began in 2009. [9] In this community, a total of 3176 patients aged ≥55 years were enrolled in 2009 for the cognitive complaint study. [11] After 5 years of follow-up, in 2014, the Mini-Mental State Examination (MMSE) was reassessed in 1840 participants (response rate of 57.9%). The response rate was low for several reasons, including being too busy to complete the MMSE assessment, death, moving to another place, hearing problems, unwillingness to be followed up, and hospitalization. A detailed flowchart of the study is depicted in Figure 1.
Assessment of cognitive complaint
Specially trained investigators interviewed all participants face to face in 2009. The questions about memory complaints were: "Do you think that you have any problems with your memory?" for the participant and "Do you believe the subject has any problems with memory?" for the spouse or the close relative who accompanied the participant to the interview. The source of memory complaints was classified into three categories according to the replies to the above questions: no complaint (neither the participant nor his/her spouse or relatives reported any problems with memory), self- or informant-reported memory complaint (either the participant or his/her spouse or relatives reported problems with memory), and both self- and informant-reported complaints (both the participant and his/her spouse or relatives reported problems with memory).
Assessment of cognitive function and covariates
After training, the MMSE was administered by local doctors for cognitive function assessment. Participants were screened for cognitive function at baseline in May 2009 and reassessed at follow-up in June 2014. Information about age, gender, and self-reported depression was obtained through a questionnaire administered by local doctors.
The diagnosis of cognitive impairment was based on MMSE scores with different cutoffs for education level: MMSE ≤17 for illiterates; MMSE ≤20 for primary school graduates (≤6 years of education); and MMSE ≤24 for junior high school graduates or above (>6 years of education). [11] The diagnosis of cognitive impairment conversion was defined as people with normal cognitive function in 2009 converting into cognitive impairment in 2014 based on education-adjusted MMSE score. The diagnosis of cognitive decline was defined as MMSE score decreased by ≥2 points during the 5-year follow-up.
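As a compact illustration, the education-adjusted classification rule and the definition of cognitive decline described above can be expressed as follows. This is only a sketch with illustrative function names, not code used in the study.

```python
# Illustrative sketch of the education-adjusted MMSE cutoffs and the
# cognitive decline criterion described above; names are our own.
def cognitively_impaired(mmse_score, education_level):
    # cutoffs: <=17 for illiterates, <=20 for primary school graduates
    # (<=6 years of education), <=24 for junior high school or above
    cutoffs = {"illiterate": 17, "primary": 20, "junior_high_or_above": 24}
    return mmse_score <= cutoffs[education_level]

def cognitive_decline(mmse_2009, mmse_2014):
    # decline: MMSE score decreased by >=2 points over the 5-year follow-up
    return (mmse_2009 - mmse_2014) >= 2

print(cognitively_impaired(21, "primary"))            # False
print(cognitive_decline(mmse_2009=27, mmse_2014=24))  # True
```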
Statistical analysis
All data analyses were performed using SPSS version 20.0 (SPSS Inc., Chicago, IL, USA). The demographic features were compared by Student's t-test for measurement data and the Chi-square test for categorical data. A logistic regression model was used to investigate the association of the source of memory complaints with the risk of cognitive impairment conversion or cognitive decline after adjusting for covariates. A P < 0.05 was considered statistically significant.
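The original analyses were run in SPSS; purely for illustration, an equivalent adjusted logistic regression could be specified as in the following sketch. The data file and column names are hypothetical and not taken from the study.

```python
# Hypothetical re-implementation of the adjusted logistic regression;
# the original analyses were run in SPSS 20.0, and the data file and
# column names below are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wuliqiao_followup.csv")  # assumed data file

# complaint_source coded as "none", "self_or_informant", or "both";
# impaired_2014 coded as 0/1 for incident cognitive impairment
model = smf.logit(
    "impaired_2014 ~ C(complaint_source, Treatment('none')) "
    "+ age + C(gender) + depression + mmse_baseline",
    data=df,
).fit()

odds_ratios = np.exp(model.params)    # e.g. OR for 'both' vs. no complaint
conf_int = np.exp(model.conf_int())   # 95% confidence intervals for the ORs
print(odds_ratios)
print(conf_int)
```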
Results
A total of 1840 participants (559 males and 1281 females; mean age: 71.1 ± 10.0 years) were included in this study. Among the 1840 participants, there were 1713 cognitively normal participants and 127 participants with cognitive impairment in 2009. After 5 years of follow-up, 130 previously normal participants had converted to cognitive impairment and 606 showed cognitive decline. Baseline demographic characteristics of all the participants in the cognitive impairment conversion study are presented in Table 1, and baseline demographic characteristics of all participants in the study of cognitive decline are presented in Table 2.
For the cognitive impairment conversion study, among the 1713 normal participants in 2009, 130 participants converted to cognitive impairment after 5 years of follow-up. Both self- and informant-reported memory complaints were associated with a higher risk of cognitive impairment conversion (odds ratio [OR] = 1.60, 95% confidence interval [CI]: 1.04-2.48, P = 0.030), after adjustment for age, gender, depression complaint, and baseline MMSE score [Table 3]. In subgroup analysis, both self- and informant-reported memory complaints conferred a higher risk of cognitive impairment conversion in males (OR = 2.10, 95% CI: 1.04-4.24, P = 0.040) and in people with a high education level (OR = 1.79, 95% CI: 1.02-3.15, P = 0.040; Table 3).
Similar results were found in the study of cognitive decline, in which 606 people were identified with cognitive decline after 5 years of follow-up. Both self- and informant-reported memory complaints were associated with a higher risk of cognitive decline (OR = 1.30, 95% CI: 1.01-1.68, P = 0.045), after adjustment for age, gender, depression complaint, education level, and baseline MMSE score [Table 3]. In subgroup analysis, both self- and informant-reported memory complaints conferred a higher risk of cognitive decline in males (OR = 1.87, 95% CI: 1.20-2.99, P = 0.008) and in people with a high education level (OR = 1.40, 95% CI: 1.02-1.91, P = 0.040; Table 3).
Discussion
This study found that both self- and informant-reported complaints of cognitive impairment were associated with subsequent cognitive impairment conversion and cognitive decline, especially for males with a high educational level. These findings, combined with other studies, [6,8,10,12,13] support the idea that subjective memory complaints (SMCs) might be used as an appropriate measurement to predict further memory decline. The probable explanation is that subtle underlying pathological changes might be present in people with memory complaints but normal cognitive function. In line with that, Kryscio et al. [14] reported that SMC reporters had more severe AD neuropathological burdens in an autopsy-based longitudinal study. It was also found that SMCs were associated with hippocampal volume change in another magnetic resonance imaging-based longitudinal study.
Furthermore, this study suggested that the predictive value of SMCs for cognitive change was more evident in people with a high educational level, which is consistent with the findings of van Oijen et al. [10] Highly educated persons usually perform well on cognitive screening tests because of high cognitive reserve. [15,16] In addition, Sajjad et al. [17] indicated that highly educated persons were more likely to notice subtle changes in their memory than less educated ones, which makes memory complaints an appropriate measurement to evaluate subtle cognitive impairment in highly educated elders.
Another interesting finding was that SMCs were more predictive of cognitive impairment in males than in females in this community-based study. Gender differences with regard to the risk of dementia in SMCs have been reported. Pérès et al. [18] suggested that women with SMC were more likely to develop dementia, but others reported different results. [19,20] Although loss of the protective estrogen after menopause might make women more vulnerable to memory disorders, [21] women are twice as likely as men to suffer from depressive symptoms, [22,23] especially those in the peri- or postmenopausal stage. [24][25][26][27] Therefore, it is possible that some SMCs in women were due to depressive mood, which might decrease the predictive value of SMCs in women.
The strengths of this study were its population-based prospective design, its large number of participants from representative urban communities, and the availability of data over the follow-up period. Factors that may confound the assessment of the association between cognitive complaint and risk of cognitive impairment, such as depression and objective memory function, were evaluated as covariates at the baseline interview.
However, some limitations still existed in this study. First, this study did not investigate an informant-only complaint cohort separately from self-only complaint, because the study did not require a spouse or partner to accompany the participant and complete the questionnaire, which probably contributed to the limited number of informant-only complaint cases. Second, cognitive complaint was judged by means of a single question, which might lead to less well-defined SMCs. Third, the MMSE scale alone is not sensitive enough to assess objective cognitive impairment, though it is commonly used in epidemiological cognitive screening. Well-designed studies with more extensive cognitive assessment are needed to overcome these problems in the future.
In conclusion, the findings of this study highlight the predictive value of the source of cognitive complaints for progression from cognitively normal to cognitive impairment or cognitive decline, especially when both self- and informant-reported memory complaints are present in participants with a high education level and male gender. The findings might help identify, at an early stage, community-dwelling elders with those characteristics who do well in cognitive tests at baseline but are at high risk of dementia.
Two Randomized, Double Blind, Placebo-Controlled Trials Evaluating the Efficacy of Red 635nm Low Level Laser for the Treatment of Low Back Pain
Objectives: Low back pain (LBP) is the global leading cause of disability with over eighty percent of the population experiencing an episode of LBP. A new emerging technology, low-level laser therapy (LLLT) has demonstrated promising results for the treatment of various chronic musculoskeletal conditions. The objective of the following review was to analyze data from two separate double-blind placebo-controlled trials using red 635 nm low-level laser for the treatment of chronic low back pain. Materials and Methods: Subjects (n=120) were males and females, ages 18 years or older with episodic chronic low back pain of at least 3 months duration. Subjects received eight laser treatments to the lower back region consisting of two procedures per week, 3 to 4 days apart. The low-level laser device contained three independent 17 mW, 635 nm red laser diodes (Erchonia® FX-635™; Erchonia Corporation, Melbourne, FL). The primary efficacy assessment was the change in visual analog scale (VAS) pain scores. The predefined outcome measure was the proportion of subjects achieving a ≥30% change in VAS pain scores at a 2-month follow-up assessment. Overall study success was predefined as a ≥35% between-group difference in the proportion of subjects achieving treatment success. Results: 80% of subjects treated with the low-level laser achieved a ≥30% decrease in low back pain VAS scores vs. 28% of placebo-treated subjects (52% difference; p<0.00001). The mean decrease in low back pain VAS scores was 36.59 points for subjects treated with the laser vs. 8.70 points for placebo-treated subjects (27.89-point difference; p<0.0001). Conclusion: The data demonstrates the clinical utility of low-level laser for treatment of LBP. Based on efficacy and safety, 635 nm red laser has received Food and Drug Administration (FDA) market clearance for use to provide relief of minor chronic low back pain. (K180197).
Introduction
Low back pain (LBP) is the global leading cause of disability, with over 80 percent of the population experiencing an episode of LBP at some time during life [1]. An estimated 264 million days of work per year are lost due to LBP [2]. The condition also presents a major challenge to the United States health system, with total costs estimated to be between 100-200 billion dollars annually, two-thirds of which are due to decreased wages and productivity [3]. The point prevalence has been shown to increase with advancing age, from 4.2% among individuals 24 to 39 years old to 19.6% among those 20 to 59 years old [4]. It is among the ten leading causes of years lived with disability in every country surveyed [5]. In the United States, LBP has a point prevalence of approximately 12%, a 1-month prevalence of 23%, a 1-year prevalence of 38% and a lifetime prevalence of 40% [6]. Among all types of disorders in the United States, low back pain ranks third for disability-adjusted life-years and first by years lived with disability [7]. One common treatment for LBP is the use of non-steroidal anti-inflammatory drugs (NSAIDs). For acute LBP, traditional NSAIDs do reduce pain, without any clear evidence that one agent is superior to another [8]. However, NSAID treatment should be used for the shortest time at the lowest dose that achieves adequate pain relief [8]. Short-term use of NSAIDs is appropriate in most cases of acute LBP [8]. The toxicities of chronic NSAID administration are concerning; hence only a very small number of chronic LBP patients should use NSAIDs beyond an as-needed basis [8]. The burden of toxicity from NSAIDs belies what had been the widely held perspective that NSAIDs are "safer" analgesics. Both Cox-1 and Cox-2 inhibitors have adverse drug reactions in both the short term and the long term. It has been reported that as many as 107,000 hospitalizations and 16,500 deaths annually in the United States may be attributable to NSAID toxicity [9]. Through the 1990s it was suggested that for every dollar spent on NSAIDs, treatment of NSAID toxicity cost $1.25 [10].
In a meta-analysis of 35 randomized placebo-controlled trials [11], NSAIDs reduced spinal pain and disability but provided clinically unimportant effects over placebo. For one participant to achieve clinically important pain reduction, six participants need to be treated with NSAIDs rather than placebo [11]. Opioids appear to have short-term efficacy for treating chronic LBP, with much less evidence supporting long-term use [12], possibly due to tolerance [13], and may not provide additional benefits over the use of NSAIDs alone [14]. Half of patients treated with opioids discontinue using them due to lack of efficacy or adverse events [15], including constipation, nausea, sedation, addiction, and overdose-related mortality [12].
Noninvasive, drug-free treatments for LBP are becoming more prevalent and recommended. The American College of Physicians (ACP) developed a guideline to present evidence and provide clinical recommendations on noninvasive treatment of low back pain [16]. Included in the list of strong recommendations is the use of low-level laser therapy [16]. In addition, low-level laser became a more recognized option for LBP in 2018 when the Food and Drug Administration (FDA) cleared the first laser device (Erchonia® FX-635™) as an adjunct to provide relief of minor chronic low back pain of musculoskeletal origin (K180197) [17]. The cleared treatment administers red 635 nm non-thermal laser light. The objective of the following review is to analyze data from two separate double-blind placebo-controlled studies totaling 120 subjects assessing the improvement in chronic low back pain following treatment with the red 635 nm non-thermal laser.
Methods
Data are compiled from two IRB-approved clinical trials which included four different independent investigator test sites. Both clinical trials were of double-blind, placebo-controlled design. A computer program was used to perform the participant group assignment. The double-blind component of the study was established by including a treatment investigator and an assessment investigator. The treatment investigator was responsible for administering the active and placebo interventions. That investigator was the only individual present in the room during the treatment phase and did not participate in the pre- or post-treatment evaluation activities. The assessment investigator was responsible for conducting the pre- and post-procedure evaluations and determining the diagnosis and eligibility of the participants for study participation. The assessment investigator was never aware of the participants' group allocation. Additionally, the study participants were never informed of their group assignment and wore darkened protective glasses designed to filter out the laser light during the treatment procedure. Study subjects were male or female, ≥18 years old and recruited from among each investigator's pool of patients and individuals responding to local recruitment flyers and print ads. Qualifying subjects received financial compensation for completed study compliance and participation. Low back pain was of musculoskeletal origin involving lumbar sprain, strain or stretch injury to the ligaments, tendons, and/or muscles of the low back in the absence of nerve root compromise. Each subject was required to have primary pain located in the left, right or both sides of the lower back, defined as the area between the lowest rib and the crease of the buttocks. Diagnosis included a history of initial LBP onset occurring after one or more of the following events: known injury, such as an accident or fall; overexertion of a muscle, such as after unusual amounts of exercise or unaccustomed activity, or sustained positioning (strain injury); or sudden force or movement exerted upon ligaments, such as unusual turning or twisting (sprain injury). Subjects experienced at least two of the following: pain and/or loss of function such as inability to turn, twist or bend normally; pain located along the lower back and upper buttocks which may radiate into surrounding tissue; pain that worsens with activity; painful muscle spasms that can worsen with activity or at night while asleep; or a history of prior back injury.
Diagnosis was further based on a physical examination which revealed at least three of the following features: inability or difficulty straightening into normal posture while standing; activities such as sitting, standing, walking or driving are limited, difficult or impossible; palpation of muscles in the lower lumbar area reveals local tenderness and muscle spasm while lying in the prone position; change in sensation and/or motor function of the knees and ankles; raising the straight leg from the supine position produces sciatica; or, upon observation, no notable posture, spinal alignment or other back deformities. The LBP was chronic, defined as ongoing over the ≥3 preceding months, with pain having occurred on ≥15 days of each preceding month, and each episode lasting ≥24 hours followed by a subsequent period of ≥24 hours without pain. Other inclusion criteria included a self-reported score of ≥40 on the 100-point Visual Analog Scale (VAS) for pain; ability to refrain from consuming analgesic, anti-inflammatory or muscle-relaxant medications throughout the study except for the study-related pain relief medication; refraining from other therapies for managing LBP, such as physical therapy, occupational therapy, hot or cold packs, chiropractic care or acupuncture; and ability to complete a daily patient diary.
Subjects with LBP known to be caused by the following etiologies were excluded from study participation: mechanical (apophyseal osteoarthritis, thoracic or lumbar spinal stenosis, spondylolisthesis), inflammatory (ankylosing spondylitis, rheumatoid arthritis, infection), neoplastic (primary or metastatic bone tumors, intradural spinal tumors), metabolic (osteoporotic fractures, osteomalacia, chondrocalcinosis) or psychosomatic conditions (tension myositis syndrome). Other exclusion criteria included use of the muscle relaxants cyclobenzaprine, diazepam or meprobamate within the prior 30 days; use of the muscle relaxants carisoprodol or metaxalone within the prior 7 days; initiation of the antidepressant duloxetine or a tricyclic or serotonin-selective reuptake inhibitor within the prior 30 days; systemic corticosteroid therapy or narcotics within 30 days; infection, wound or other external trauma to the planned treatment area; prior back or spine surgery; history of alcohol or other substance abuse; pregnancy, breast feeding, or planning pregnancy prior to the end of the study; and participation in a clinical study or other type of research during the past 30 days.
Intervention
The low-level laser device used in this study comprised three independent 17 mW, 635 nm red laser diodes mounted in a scanner device with flexible arms (Erchonia® FX-635™; Erchonia Corporation, Melbourne, FL). Internal mechanics collect the light emitted from each laser diode and process it through a proprietary patented lens that redirects it into a line-generated beam. The device then sweeps each line of laser light in a spiraling circular pattern that is random and independent of the other diodes. The device delivers 10.2 joules to each of the three treated areas, consisting of the lower spine and both hip flexors. As the device mechanically scans the three areas simultaneously, the estimated total energy density delivered is 0.0865 J/cm². The placebo group participants were treated using the same multi-head device; however, the placebo group instead received treatment from light-emitting diodes (LEDs), which produced non-coherent light of the same color when activated. Eye protection was provided for use by the investigator and the subject (Laser Safety Industries; St. Paul, MN).
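As a rough consistency check on the reported dose figures, the implied scanned area per region and the maximum energy a single diode could emit over a 20-minute session can be computed as below. This is only a back-of-envelope sketch; the treated-area size and the diode duty cycle are not stated in the source, and the reading that the 0.0865 J/cm² figure applies to each treated area is an assumption.

```python
# Back-of-envelope dose check (illustrative; area and duty cycle are inferred, not reported).
diode_power_w = 0.017          # 17 mW per diode
session_s = 20 * 60            # one 20-minute treatment
energy_per_area_j = 10.2       # reported energy delivered to each treated area
fluence_j_per_cm2 = 0.0865     # reported estimated energy density

# Area implied by the reported energy and fluence (if the fluence applies per treated area)
implied_area_cm2 = energy_per_area_j / fluence_j_per_cm2    # ~118 cm^2

# Energy one diode could emit if on continuously for the whole session
max_energy_per_diode_j = diode_power_w * session_s          # ~20.4 J

print(f"Implied scanned area per region: {implied_area_cm2:.0f} cm^2")
print(f"Maximum continuous-output energy per diode: {max_energy_per_diode_j:.1f} J")
```

The reported 10.2 J per area corresponds to about 600 s of continuous 17 mW output, roughly half of what one diode could emit in 20 minutes, which would be consistent with the scanning pattern sharing each diode's output across the session rather than continuous dwell on one spot.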
Procedures
Eligible subjects entered a 2-day pretreatment washout phase, abstained from non-study-related medications for low back pain, and began using the as-needed study rescue medication, acetaminophen 325 mg tablets (Tylenol®; McNeil Consumer Healthcare, Fort Washington, PA), which continued until the end of the post-treatment evaluation phase. Upon waking on these 2 days, subjects recorded their pain severity using the 0-100 VAS scale and completed the daily diary documenting study compliance. Subjects were then randomized to receive treatment with the active or placebo light in double-blind fashion. Each subject received eight 20-minute treatments of their assigned intervention applied to the lower back region over a consecutive 4-week period, consisting of two procedures per week, 3-4 days apart. Each procedure was administered at the investigator's test site.
Outcome Measures
Pain severity was assessed using a 0-100 Visual Analog Scale (VAS). The VAS is a 100 mm horizontal line on which the patient's pain intensity is represented by a point between the extremes of "no pain at all" and "worst pain imaginable". The VAS is widely used across a broad range of populations and clinical settings and has been well accepted as a generic pain measure for many years [18]. The VAS evaluation was completed within ten minutes following each study procedure and repeated at the study endpoint 2 months post-treatment. The following analysis is based on the change from mean pre-treatment VAS scores at the study endpoint.
Efficacy Endpoint
The aim of each of these studies was to determine whether treatment with the Erchonia laser device in the active group was more effective than placebo treatment for alleviating LBP. The primary efficacy outcome measure was predefined as the difference in the proportion of subjects between the test and control groups who achieved a clinically meaningful and statistically significant decrease in self-reported VAS low back pain rating of 30% or greater at the study endpoint relative to baseline. The clinical relevance of a 30% change in VAS score was previously established by the U.S. Food and Drug Administration Division of Surgical, Orthopedic and Restorative Devices through numerous pre-investigational device exemption (IDE) reviews. Overall study success was predefined as at least a 35% difference in the proportion of individual subject successes between procedure groups.
Statistical Analysis
A t-test for independent samples was used to analyze between-group differences in demographics and baseline characteristics. A Fisher's exact test for two independent proportions was used to analyze primary efficacy, and an ANCOVA was used to analyze the mean change in low back pain VAS scores. As every randomized subject completed all study visits and procedures and had all study measurements recorded through the final evaluation, only an intent-to-treat analysis was performed for primary outcome success.
Ethics
The study protocols and related materials were approved by a commercial institutional review board (Western Institutional Review Board, Olympia, WA; IRB numbers 20120787 and 20151815) and conformed to the Good Clinical Practice guidelines of the International Conference on Harmonization. All subjects provided signed informed consent prior to participating in any study-related activities.
Demographics
The 120 participating subjects were randomized to the active (n=60) and placebo treatment groups (n=60). All subjects completed the study according to protocol. Demographics and baseline characteristics of enrolled subjects are summarized in Table 1. A t-test for independent samples revealed no statistically significant between-group differences for any parameter.
Primary Efficacy Measure
At the end of the study, 80% of subjects treated with low level laser achieved a ≥30% decrease in baseline LBP VAS scores vs. 28% of subjects treated with the placebo device, a difference of 52% (p<0.00001). The mean decrease in LBP VAS scores was 36.59 points for subjects treated with the laser vs. 8.70 points for subjects treated with the placebo device, a difference of 27.89 points (p<0.001) ( Table 2 and Chart 1).
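For illustration, the reported primary-endpoint comparison can be reproduced with a Fisher's exact test, the test specified in the analysis plan. The responder counts below (48/60 and 17/60) are approximations reconstructed from the reported 80% and 28% and are not taken from the source data.

```python
# Illustrative re-check of the primary endpoint with approximate responder counts.
from scipy.stats import fisher_exact

laser_responders, laser_n = 48, 60      # ~80% achieved a >=30% VAS reduction (approximate)
placebo_responders, placebo_n = 17, 60  # ~28% achieved a >=30% VAS reduction (approximate)

table = [
    [laser_responders, laser_n - laser_responders],
    [placebo_responders, placebo_n - placebo_responders],
]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"Odds ratio: {odds_ratio:.2f}, p-value: {p_value:.1e}")
```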
Primary Safety Measure
No adverse events were reported by any subject throughout the duration of the study.
Visual Analog Scale Low Back Pain Scores
Among subjects treated with LLLT, there was a progressive and substantial decrease in mean LBP VAS scores throughout the duration of the study (Chart 1). In contrast, there was a small decrease in VAS scores among placebo-treated subjects which was not clinically meaningful.
Subject Satisfaction
Subjects rated their satisfaction with the change in LBP at the study endpoint. Using the 5-point Likert scale in response to the question "Overall, how satisfied or dissatisfied are you with any change in the pain in your lower back following the study procedures with the study laser device?," 47 subjects randomized to active treatment were satisfied vs. 19 placebo-treated subjects (Table 3).
Discussion
The process of LLLT is based on a photochemical reaction in which discrete units called photons are absorbed within the visible spectrum (380-700 nm). The photon-induced chemistry ultimately gives rise to the observable effect at the biological level [19]. If light of a particular wavelength is not absorbed by a system, no photochemistry will occur and no photobiological effects will be observed, no matter how long one irradiates with that wavelength of light [20]. The wavelength used in this review was 635 nm (red) laser light, which is in the visible spectrum.
The enzyme cytochrome c oxidase (CCO) has been shown to be activated in vitro by red laser light (633 nm) [21]. Therefore, optimal biological stimulation can be achieved utilizing a device that emits light within the red spectrum. UV and visible light are absorbed by proteins and pigments, whereas the absorption of infrared light can be attributed to water molecules [22]. Water has a narrow window of transparency which includes the visible light spectrum (400-700 nm) [23]. There is no physical mechanism which produces transitions in the visible light spectrum, as it is too energetic for the vibrations of the water molecule and below the energies needed to cause electronic transitions [23]. The infrared light spectrum exhibits strong absorption from vibrations of the water molecule. The result of infrared absorption is heating of the tissue, since it increases molecular vibrational activity [23]. Infrared radiation does penetrate the skin further than visible light [23]. The primary mechanism for the absorption of visible light photons is the elevation of electrons to higher energy levels [23]. Simply put, while visible light can produce photochemical effects, infrared only produces molecular rotations and vibrations [24].
The benefits of infrared wavelengths on low back pain are still unsubstantiated. One study showed that infrared laser combined with exercise is more beneficial than exercise alone for chronic low back pain; however, there was no difference between the laser-alone group and the placebo laser after six weeks of intervention [25]. A systematic review of twelve randomized controlled studies, all using infrared wavelengths for pain associated with nonspecific low back pain [26], concluded that the current evidence does not support the use of laser to decrease pain and disability in people with non-specific LBP. Another type of light source being marketed for low back pain is the light-emitting diode (LED).
Based on the outcomes in the two reviewed double-blind, placebo-controlled studies, in which the placebo group received LED treatment, it can be concluded that laser is superior to LED in reducing pain associated with low back pain and should be the first line of therapy. Currently there are no FDA-cleared infrared lasers or light-emitting diodes for low back pain.
The intracellular effects generated by the absorption of 635 nm irradiation are responsible for reducing the inflammatory phase and for the expression of genes involved in tissue repair. These changes are described as Laser Pharmacology™, the discipline in which a series of interactions caused by Erchonia laser photons produces a change in physiology through biological pathways similar to, if not the same as, those of pharmaceutical drugs. The mechanism of action is completely non-thermal.
Following trauma to the low back, the inflammatory phase is mediated by the enzyme cyclooxygenase-2 (COX-2). Inhibition of cyclooxygenase (COX) and prostaglandin E2 (PGE2) protects cells against injury from inflammation and oxidative stress, which is the most likely mechanism of action for NSAID-mediated analgesia [27]. Comparable effects have been documented following exposure to red LLLT, where a significant reduction in COX-2 mRNA expression was found in subplantar (~2.5-fold) and brain (4.84-9.67-fold) tissues [28]. Normally, lesion-induced pain subsides and does not develop into chronic pain. A probable factor in the pathophysiology of low back pain and the transition to a chronic state is considered to be a lack of nitric oxide (NO) [29]. Irradiation with a 635 nm laser has been shown to produce a significant upregulation of iNOS after a single treatment post inflammation induction, whereas other wavelengths (785, 808 and 905 nm) were not significantly different from the control group [30]. Considering the low quantum energy per photon for the 785 to 905 nm range, equal to 1.52 to 1.37 eV, these wavelengths apparently do not induce direct photochemistry, as the minimum quantum energy for cis-trans isomerization is on the order of 1.7 eV [30].
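The quantum-energy comparison in the preceding paragraph follows from the Planck relation E = hc/λ. The short sketch below reproduces it; the exact values quoted in the source may differ slightly depending on the constants and rounding used.

```python
# Photon energy E = h*c/lambda, expressed in electron-volts.
h = 6.626e-34      # Planck constant, J*s
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electron-volt

for wavelength_nm in (635, 785, 808, 905):
    energy_ev = h * c / (wavelength_nm * 1e-9) / eV
    print(f"{wavelength_nm} nm -> {energy_ev:.2f} eV")

# Only the 635 nm photon (~1.95 eV) exceeds the ~1.7 eV threshold cited
# for cis-trans isomerization; the near-infrared photons fall below it.
```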
The modulation of transcription factors has become a common therapeutic strategy to prevent or provoke the expression of specific genes, and the approach could potentially provide a means to treat a wide assortment of medical disorders. Red laser light can play a direct role in gene expression by first stimulating cytochrome c oxidase, which accelerates electron transport leading to increased ATP production [31,32]. At the same time, this photochemistry is linked to the generation of ROS [33]. At higher concentrations ROS can be cytotoxic; at lower concentrations, however, they can result in the activation of various transcription factors such as NF-κB, AP-1, and HSP [33], which increases signaling-pathway activity and gene expression, leading to increased protein synthesis [33] and cell homeostasis [34]. A substantial amount of evidence has been published that supports the theory that laser irradiation within the red spectrum plays a unique role in the expression of specific genes. Perhaps the most significant study was that of Zhang et al., who used a cDNA microarray technique to investigate the gene expression profiles of human fibroblasts irradiated by low-intensity red light [35]. The gene expression profiles revealed that 111 genes were regulated by the red-light irradiation and can be grouped into 10 functional categories [35]. The affected genes were related to cell growth, collagen production, microcirculation, anti-apoptotic processes, DNA repair, and antioxidation.
Conclusion
Based on the data of 120 subjects, the use of red 635 nm non-thermal laser is an effective means of reducing episodic chronic low back pain of musculoskeletal origin. At the end of the study, most subjects treated with the low-level laser (80%) achieved a ≥30% decrease in baseline LBP VAS scores vs. 28% of subjects treated with the placebo device. In addition, one of the reviewed studies documented changes in the Oswestry Disability Index, which demonstrated a clinically meaningful improvement in the LLLT-treated group [36]. Although additional studies are warranted, the noninvasive nature of laser therapy enables this technology to serve as a primary treatment of chronic low back pain.
Metastatic Acetabular Fracture: A Rare Disease Presentation of Recurrent Head and Neck Paraganglioma
We present a case of a rare metastatic bone lesion of the acetabulum, associated with a pathologic fracture, found to be metastasis from a malignant carotid body paraganglioma upon histological analysis. We present a report of the patient’s clinical course following the identification of metastatic disease to the right acetabulum, as well as a review of paragangliomas and their propensity for metastasis.
Introduction
Paragangliomas are rare tumors of neuroendocrine cells arising from autonomic paraganglia outside the adrenal glands. A distinction is made from intra-adrenal endocrine tumors, defined as pheochromocytomas, which are distinguished by adrenergic phenotypes, lower rates of malignancy, and a stronger association with hereditary phenotypes [1]. Paraganglioma tumors can be seen in the sympathetic chain ganglia associated with catecholamine secretion, but can also be found in the suboccipital and neck region in parasympathetic ganglia, where they are often asymptomatic [2]. The high vascularity of these tumors causes them to have a predilection for the carotid body, and histological analysis often reveals excess catecholamine metabolites, primarily norepinephrine [3][4][5]. Excess catecholamine secretion is also seen in pheochromocytoma, an intra-adrenal neoplasm that mirrors the symptoms of paragangliomas and can be difficult to distinguish [6]. The classic presentation typically occurs as a painless neck mass. Cranial nerve deficits may also be present.
Case Presentation
A 28-year-old male presented to the emergency department (ED) with a chief complaint of right hip pain following a restrained motor vehicle accident (MVA). The patient had a past medical history significant for resection of right carotid body paraganglioma approximately ten years prior. At that time, the patient presented to the primary care clinic with complaints of severe generalized headaches, syncopal episodes, and associated blurry vision. Initial complaints of non-tender, right-sided neck mass resulted in a biopsy, at that time diagnosed as a benign lymph node. Upon continued symptoms, he was eventually referred to the oncology clinic for follow up for his neck mass. After a subsequent biopsy of the lesion, a carotid body paraganglioma was diagnosed and excised, and he was treated with two months of radiation without chemotherapy.
Upon presentation to the ED, a pathologic acetabular fracture was discovered. On physical exam, the patient demonstrated tenderness to palpation of the right hip. He noted recent night sweats, as well as weight loss. Anteroposterior (AP) X-rays of the pelvis demonstrated a right acetabular lytic lesion ( Figure 1). Additionally, computed tomography angiogram (CTA) of the neck demonstrated a left carotid body lesion, approximately 13 mm in diameter splaying the left internal and external carotid artery, consistent with paraganglioma ( Figure 2). The patient underwent open reduction internal fixation (ORIF) to the right pathological posterior wall acetabulum fracture and biopsy of bone marrow. The bone marrow biopsy demonstrated the presence of infiltration from a metastatic malignant paraganglioma via histopathological analysis (Figures 3-7).
Discussion
Accounting for approximately 0.3% of all neoplasms, paragangliomas are rare tumors arising from the extra-adrenal paraganglia [7]. The rarity of metastatic paragangliomas was exhibited by a National Cancer Database study identifying only 10 cases over the period 1985-1996 [8]. We present a case of a 28-year-old male with a history of carotid body paraganglioma presenting with recurrence and metastasis to the acetabulum. CTA of the neck, as seen in Figure 2, demonstrates a classical sign, the Lyre sign, associated with carotid body tumors, typified by splaying of the internal and external carotid arteries [9].
Pheochromocytomas and extra-adrenal paragangliomas share identical histological morphology. However, a distinction between the two is important, as the latter have a greater propensity for malignancy, fewer adrenergic symptoms, and a weaker association with hereditary syndromes [1]. Hematoxylin and eosin (H&E) staining demonstrated a classical pattern associated with paraganglioma, as seen in Figure 3. Previous studies have demonstrated a Ki-67 index > 3% to be indicative of metastatic potential [10,11]. Staining for Ki-67 from this patient, upon pathological examination of the bone biopsy, demonstrated a Ki-67 index between 1% and 4%.
Metastasis of paragangliomas is a proportionally rare event, with only 10-30% of cases advancing to metastatic disease, depending on the study parameters [12,13]. Accordingly, attempts have been made to delineate the histopathological features predictive of metastasis. Though the majority of cases are surgically resectable, those that advance to a malignant state have an intractable clinical course. Secondary to long latency periods between the discovery of primary tumor and metastasis, patients suspected of having metastatic disease require lifelong surveillance [14]. Previous studies have demonstrated the mean time to recurrence after initial resection to be approximately 3-6 years [10,15]. Our patient underwent an uncomplicated clinical course following ORIF of the right acetabulum, with discharge instructions for follow up with the orthopedic clinic, otolaryngology clinic, and hematology/oncology for future imaging surveillance.
Conclusions
We present a rare case of a 28-year-old male with a history of carotid body paraganglioma resection, who presented to the ED with a chief complaint of right hip pain, following a restrained MVA. Subsequent imaging demonstrated a pathological fracture of the acetabulum with imaging suspicious for metastatic disease to the bone. The patient subsequently underwent ORIF to the right pathological posterior wall acetabulum fracture and biopsy of bone marrow, demonstrating the presence of infiltration from a metastatic malignant paraganglioma. This case highlights some of the features of metastatic paragangliomas with a discussion of the propensity for metastasis, as well as the histopathological analysis. Patients found to have metastatic paragangliomas will require life-long surveillance.
Additional Information Disclosures
Human subjects: Consent was obtained from all participants in this study.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Optimized Vivid-derived Magnets photodimerizers for subcellular optogenetics
Light-inducible dimerization protein modules enable precise temporal and spatial control of biological processes in non-invasive fashion. Among them, Magnets are small modules engineered from the Neurospora crassa photoreceptor Vivid by orthogonalizing the homodimerization interface into complementary heterodimers. Both Magnets components, which are well-tolerated as protein fusion partners, are photoreceptors requiring simultaneous photoactivation to interact, enabling high spatiotemporal confinement of dimerization with a single-excitation wavelength. However, Magnets require concatemerization for efficient responses and cell preincubation at 28°C to be functional. Here we overcome these limitations by engineering an optimized Magnets pair requiring neither concatemerization nor low temperature preincubation. We validated these “enhanced” Magnets (eMags) by using them to rapidly and reversibly recruit proteins to subcellular organelles, to induce organelle contacts, and to reconstitute OSBP-VAP ER-Golgi tethering implicated in phosphatidylinositol-4-phosphate transport and metabolism. eMags represent a very effective tool to optogenetically manipulate physiological processes over whole cells or in small subcellular volumes.
INTRODUCTION
Macromolecular interactions between and amongst proteins and organelles mediate a considerable amount of biochemical signaling processes. A principal method of testing the physiological significance of such interactions is to drive their association with a user-supplied stimulus such as light or drugs. Typically, two different components, each fused to a specific protein, come together ("heterodimerize") to reconstitute a given protein-protein interaction following addition of a small molecule (DeRose et al., 2013;Putyrski and Schultz, 2012;Spencer et al., 1993) or upon light illumination (Losi et al., 2018;Rost et al., 2017). Light offers much greater spatial and temporal resolution than drugs, and as such, optogenetic dimerizers are generally used to probe phenomena at cellular and subcellular scales. At the organism scale, light is much less invasive but suffers from penetration issues.
One popular photodimerizer pair is "Magnets", engineered from the Neurospora crassa Vivid photoreceptor, which comprises an N-terminal Ncap domain responsible for homodimerization and a C-terminal light-oxygen-voltage-sensing (LOV) domain (Kawano et al., 2015). Magnets employ the ubiquitous cofactor flavin adenine dinucleotide (FAD) as the light-sensing moiety.
The Magnets pair was engineered from the Vivid homodimer by introducing complementary charges, giving rise to nMag (negative Magnet) and pMag (positive Magnet). The two Magnets components are quite small (150 aa) for photodimerizers, exhibit relatively fast association and dissociation kinetics, and function when fused to a broad range of proteins, including peripheral and intrinsic membrane proteins (Benedetti et al., 2018;Kawano et al., 2015Kawano et al., , 2016. Furthermore, heterodimerization of Magnets requires light-dependent activation of both components, rather than just one. This property results in low levels of background activity and allows induction of dimer formation with single-wavelength excitation in small cytoplasmic volumes (Benedetti et al., 2018).
However, the Magnets system has two prominent shortcomings. First, the low thermodynamic stability of the Magnets components precludes their proper expression and folding at 37°C. Thus, they cannot be used in mammals. When used in cultured mammalian cells they require a preincubation at low temperature (28°C) for 12 hours to allow expression and folding. Second, as the Magnets components heterodimerize with low efficiency, robust activation requires concatemerization (Furuya et al., 2017;Kawano et al., 2015), which may affect trafficking, motility and function of target proteins, create vector payload constraints, and give rise to recombination and/or silencing of the sequence repeats.
Here, we overcome these limitations of the Magnets by structure-guided protein engineering and validation by cellular assays. The resulting reagents, "enhanced Magnets" (eMags), have greater thermal stability and dimerization efficiency, as well as faster association and dissociation kinetics. We confirmed their effectiveness in a variety of applications including protein recruitment to different organelles, the generation/expansion of organelle contact sites, and the rapid and reversible reconstitution of VAP-dependent inter-organelle tethers that have key regulatory functions in lipid transport.
Optimization of the Magnets heterodimer interface
Optimal photo-heterodimerizer performance convolves together several parameters: i) efficient, fast interaction of the two different components upon light stimulus, ii) little or no formation of homodimers, which would compete with productive heterodimer complexes, iii) low background before light stimulus; and ideally, iv) fast heterodimer dissociation following light offset. The existing Magnets systems, especially the Fast1 and Fast2 variants with fast dissociation kinetics (Kawano et al., 2015), have weak dimerization efficiency and thus perform poorly on the first criterion, necessitating the use of concatemers (usually 3 copies) of either or both monomers to achieve acceptable reconstitution in a number of settings (Benedetti et al., 2018;Furuya et al., 2017;Kawano et al., 2015).
A pair with greater dimerization efficiency would be desirable, ideally allowing single copies of the complementary Magnets to suffice. With the goal of engineering such a pair, we first established a robust screen for reconstitution of Magnets dimerization using light-dependent accumulation of a protein at the outer mitochondrial membrane (Benedetti et al., 2018) (Fig. 1A), which is readily visible and quantifiable. The nMagHigh1 monomer, tagged with the green fluorescent protein EGFP, was used as bait on the outer mitochondrial membrane by fusion to the transmembrane C-terminal helix from OMP25 ("nMag-EGFP-Mito") (Supp. Fig. 1A and Supp. Table 1). The pMagFast2 monomer, tagged with the red fluorescent protein TagRFP-T (Shaner et al., 2008), was used as the cytoplasmic prey ("pMag-TagRFP-T") (Supp. Table 2). We co-expressed both constructs in HeLa cells by co-transfection, grew cells at 28°C for 24 hours, and tested light-dependent prey capture and release by the bait (Fig. 1B).
Importantly, excitation light for TagRFP-T, as well as that for mCherry and the infrared fluorescent protein iRFP (Shcherbakova and Verkhusha, 2013), is well outside the action spectrum of LOV domain proteins (400-500 nm light excitation) (Losi et al., 2018); EGFP excitation light is coincident with Magnets activation and is thus used sparingly in these experiments.
Next, we began the process of Magnets redesign by optimizing the placement of charge-complementing amino acids in the Vivid dimer interface, using the crystal structure of the light-activated dimer (PDB ID 3RH8) (Vaidya et al., 2011) (Supp. Fig. 3A-C) as a guide, and mitochondrial recruitment as the testbed. The original Magnets pair was built upon the mutations Ile52 and Met55 to Arg (positive Magnet) and Ile52 to Asp and Met55 to Gly (negative Magnet) within the Ncap domain (see Supp. Fig. 3A), which mediates dimerization. To achieve more efficient dimerization, we first sought to optimize charge placement at the interface. Substitution of Asp52 to Glu in nMag-Asp52Glu to modify the position of the negative charges somewhat disrupted heterodimerization, consistent with Kawano et al. (2015). We next tried to introduce two negative charges into nMag, at the same two sites where positive charges had been introduced into pMag. nMag-Gly55Glu completely inhibited heterodimerization, whereas nMag-Gly55Asp somewhat improved it (Supp. Fig. 3D). Adding a third positive charge to pMag at position 48 also completely disrupted heterodimerization. In the end, we left the charges alone and instead sought to improve heterodimer interface packing and helical preference with nMag-Gly55Ala, which indeed improved both heterodimerization efficiency and association kinetics, more so than nMag-Gly55Asp. In fact, the nMag-Gly55Ala mutation alone sufficiently improved mitochondrial recruitment after preincubation at 28°C so that it functioned well as a monomer (Supp. Fig. 3D).
Thermostabilization of the Magnets proteins
Having improved the system to allow single-copy use at 28°C, we next sought to improve the temperature stability of the proteins to allow experiments at 37°C. As before, recruitment to the mitochondrial membrane in HeLa cells was used as the cellular assay: nMagHigh1-Gly55Ala-EGFP-OMP25 and pMagFast2-TagRFP-T were co-expressed on the outer mitochondrial membrane and in the cytoplasm, respectively, of HeLa cells by co-transfection. Identical amounts of DNA, in the same plasmid ratio, were used to allow side-by-side quantification of expression level, background association in the dark, heterodimerization efficiency, and kinetics of association and dissociation. Cells were preincubated at 28°C, 33°C, 35°C, or 37°C for 12-24 hours and then imaged at 37°C to quantify mitochondrial accumulation. We made and tested a number of mutants (Supp. Fig. 3A). Thr69Leu, Met179Ile, and Ser99Asn (all from thermophilic homologues) each improved dimerization efficiency at 28°C, and the latter allowed it at 33°C. Thr69Leu is in the interface and improves hydrophobic interactions (Supp. Fig. 5A,B), Met179Ile is in the hydrophobic core and improves packing (Supp. Fig. 5C,D), and Ser99Asn is surface-exposed and optimizes hydrogen bonding and secondary-structure preference (Supp. Fig. 5E,F). Combining these three mutations substantially increased dimerization at both 28°C and 33°C, and all further variants were tested on top of this combination. Mutations of Asn133 to lysine or phenylalanine (the latter from thermophiles) both enhanced dimerization at 33°C, with Asn133Phe facilitating it at 35°C, but with slower dissociation kinetics. The additional Tyr94Glu mutation (from thermophiles, improves helical preference) permitted weak dimerization at 37°C with dissociation kinetics comparable to the original Magnets molecules. The adjacent mutations Asn100Arg/Ala101His (from thermophiles, improve helical preference) allowed stronger 37°C dimerization. Finally, Tyr126Phe (from thermophiles, improves helical preference) and Arg136Lys (from thermophiles, improves helical preference and electrostatics with the FAD cofactor; Supp. Fig. 5G,H) further increased dimerization efficiency.
We selected a pair of variants, eMags, with these nine mutations (Thr69Leu, Tyr94Glu, Ser99Asn, Asn100Arg, Ala101His, Tyr126Phe, Asn133Phe, Arg136Lys, and Met179Ile) added to nMagHigh1-Gly55Ala and pMagFast2. eMags supports dimerization upon growth at 37°C without preincubation at a lower temperature, while the original Magnets variants were completely nonfunctional after these growth conditions (Fig. 1C,D and Supp. Fig. 6). eMags show greater dimerization efficiency (~4-5x), as judged by greater prey accumulation on mitochondria (p=0.0004, Kruskal-Wallis and Dunn's multiple comparison post hoc tests; Fig. 1D), and faster association and dissociation kinetics (τON = 3.6 ± 0.3 s, τOFF = 23.1 ± 0.6 s) than the original Magnets in cells preincubated at 28°C (τON = 7.6 ± 0.3 s, τOFF = 32.0 ± 1.3 s; p < 0.0001 for both τON and τOFF, unpaired Student's t-test; Fig. 1C). Omission of the Tyr126Phe mutation in eMags produced eMagsF, with dimerization efficiency similar to but slightly lower than that of eMags, but significantly faster association and dissociation kinetics (τON = 2.8 ± 0.3 s, τOFF = 14.0 ± 0.6 s; p < 0.0001 for both τON and τOFF, unpaired t-test; Fig. 1C). A 3x prey concatemer (i.e., nMagHigh1-EGFP-OMP25 and pMagFast2(3x)-TagRFP-T), still requiring preincubation at 28°C, is needed to bring the prey recruitment of the original Magnets in line with that of monomeric eMags and eMagsF (Fig. 1D). This concatemerized original Magnets also suffers from slower dissociation kinetics (τON = 5.6 ± 0.5 s, τOFF = 45.9 ± 1.4 s; p < 0.0001 for both τON and τOFF, unpaired t-test; Fig. 1C,D). We refer to nMagHigh1-Gly55Ala and pMagFast2 with these nine mutations as eMagA (Acidic heterodimerization interface) and eMagB (Basic heterodimerization interface), respectively.
eMags enables rapid, local and reversible control of protein recruitment to subcellular compartments
We selected a pair of variants, eMags, with these nine mutations (Thr69Leu, Tyr94Glu, Ser99Asn, Asn100Arg, Ala101His, Tyr126Phe, Asn133Phe, Arg136Lys, and Met179Ile) added to nMagHigh1-Gly55Ala and pMagFast2. eMags supports dimerization upon growth at 37°C without preincubation at a lower temperature, while the original Magnets variants were completely nonfunctional after these growth conditions (Fig. 1C,D and Supp. Fig. 6). eMags show greater dimerization efficiency (~4-5x), as judged by greater prey accumulation on mitochondria (p=0.0004, Kruskal-Wallis and Dunn's multiple comparison post hoc tests; Fig. 1D) and faster association and dissociation kinetics (τON = 3.6 0.3 s, τOFF = 23.1 0.6 s) than original Magnets in cells preincubated at 28°C (τON = 7.6 0.3 s, τOFF = 32.0 1.3 s; p < 0.0001 for both τON and τOFF, unpaired Student's t-test; Fig. 1C). Omission of the Tyr126Phe mutation in eMags produced eMagsF, with similar but slightly lower dimerization efficiency as eMags, but significantly faster association and dissociation kinetics (τON = 2.8 0.3 s, τOFF = 14.0 0.6 s; p < 0.0001 for both τON and τOFF, unpaired t-test; Fig. 1C). A 3x prey concatemer (i.e. nMagHigh1-EGFP-OMP25 and pMagFast2(3x)-TagRFP-T)still requiring preincubation at 28°Cis needed to bring the prey recruitment of original Magnets in line with that of monomeric eMags and eMagsF (Fig. 1D). This concatemerized original Magnets also suffers from slower dissociation kinetics (τON = 5.6 0.5 s, τOFF = 45.9 1.4 s; p = < 0.0001 for both τON and τOFF, unpaired t-test; Fig. 1C,D). We refer to nMagHigh1-Gly55Ala and pMagFast2 with these nine mutations as eMagA (Acidic heterodimerization interface) and eMagB (Basic heterodimerization interface), respectively. eMags enables rapid, local and reversible control of protein recruitment to subcellular compartments We then sought to establish performance of the new eMags constructs in a variety of experimental contexts. In the first, we used eMags to conditionally recruit cytosolic proteins to intracellular organelles other than mitochondria. For the endoplasmic reticulum (ER), we selected the N-terminal transmembrane domain of cytochrome P450 (Szczesna-Skorupa and Kemper, 2000), which displays on the cytoplasmic face of the ER, as bait (fused to EGFP). Coexpression of this construct, ER-EGFP-eMagA, with eMagB-TagRFP-T (prey) in COS7 cells showed large, rapid, reversible accumulation of prey to the ER upon whole-cell illumination ( Fig. 2A, Supp. Fig. 2B) (See Supp. Fig. 1A,B, Supp. Tables 1,2, Methods for a complete list and detailed information on bait and prey constructs used in these experiments). With focal illumination, robust prey accumulation occurred only in the irradiated ER region (Fig. 2B), in spite of the known rapid diffusion of proteins within the ER network (Nehls et al., 2000).
Optogenetic regulation of inter-organellar contacts
In another set of applications, we validated the efficiency of eMags to induce organelle contacts ( Fig. 3A, Supp. Fig. 1D). Conditional induction or expansion of such contacts may help elucidate the contribution of inter-organelle contacts and signaling to a variety of biochemical pathways.
We first designed a light-inducible ER-lysosome tethering system. Using the targeting sequences above (Fig. 2), ER-mCherry-eMagA and Lys-eMagB-iRFP were co-transfected into COS7 cells. Before blue light activation, ER-lysosome overlap, as detected by mCherry and iRFP overlap, was minimal (Fig. 3B); during 1 min. irradiation, colocalization rapidly increased by ~50% (τON = 7.5 ± 0.8 s, N=14 cells, 3 independent experiments), most likely through expansion of pre-existing contacts or by stabilization and expansion of new contacts. Following light offset, ER-lysosome colocalization declined quickly to baseline (τOFF = 35.9 ± 1.7 s; Fig. 3B). The longer time courses of organelle association-dissociation (tens of seconds), relative to cytoplasmic protein recruitment (seconds), are consistent with a combination of slower mobility of organelles than free protein and the processive assembly and disassembly of membrane contacts.
Using a similar targeting strategy, ER-mCherry-eMagA and eMagB-iRFP-Mito were used to drive ER-mitochondrial association (Fig. 3C). In HeLa cells, used for these experiments, ER and mitochondria form a closely interacting network even in control conditions. Upon 2 min. irradiation, however, overlap increased by ~20%, with kinetics (τON = 28.0 ± 1.9 s, τOFF = 49.1 ± 2.5 s, N=14 cells, 3 independent experiments; Fig. 3C) on the order of that seen for ER-lysosome contacts.
Finally, for mitochondrion-lysosome manipulation, we used eMagA-mCherry-Mito and Lys-eMagB-iRFP. In HeLa cells, baseline colocalization was quite low (Fig. 3D); such contacts are typically transient and involve small contact area (Wong et al., 2018). Upon activation, increased associations between lysosomes and mitochondria were observed, revealing contact expansion (τON = 40.1 ± 2.6 s, τOFF = 58.4 ± 2.6 s, N=17 cells, 3 independent experiments). In some cases, movement of lysosomes away from mitochondria resulted in the elongation of tubules from mitochondria, and even in their fission (Fig. 3E), indicating strong association.
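The organelle-contact measurements above rely on quantifying overlap between two labeled channels. The exact overlap metric is not specified in this passage, so the snippet below is only a generic sketch of one common approach, a Manders-style fraction of thresholded pixels in one channel that also exceed the threshold in the other; all names, data, and thresholds are hypothetical.

```python
# Generic sketch of two-channel overlap quantification (not the authors' exact pipeline).
import numpy as np

def overlap_fraction(channel_a, channel_b, thresh_a, thresh_b):
    """Fraction of above-threshold pixels in channel_a that are also
    above threshold in channel_b (Manders-style coefficient)."""
    mask_a = channel_a > thresh_a
    mask_b = channel_b > thresh_b
    if mask_a.sum() == 0:
        return 0.0
    return float(np.logical_and(mask_a, mask_b).sum() / mask_a.sum())

# Random images standing in for lysosome (iRFP) and ER (mCherry) channels.
rng = np.random.default_rng(0)
lyso = rng.random((512, 512))
er = rng.random((512, 512))
print(overlap_fraction(lyso, er, thresh_a=0.8, thresh_b=0.8))
```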
Control of the PI4P Golgi pool by reconstitution of VAP (Opto-VAP)
In a final application, we tested eMags for acute manipulation of intracellular PI4P via reconstitution of an ER-trans-Golgi network (TGN) tether. Key components of this tether are the ER protein VAMP-associated protein (VAP) and Oxysterol-binding protein 1 (OSBP1). OSBP1, which binds VAP (via an FFAT motif) and membranes of the TGN (via a PI4P-binding PH domain), also contains an ORD domain (OSBP-related domain) that promotes exchange of TGN PI4P for ER cholesterol (Murphy and Levine, 2016). Following shuttling to the ER, PI4P is degraded by the phosphatidylinositide phosphatase Sac1 (Mesmin et al., 2013;Saint-Jean et al., 2011;Zewe et al., 2018). This model of ER-Golgi PI4P transport is supported by biochemical, pharmacological, and genetic studies (Dong et al., 2016;Mesmin et al., 2013;Strating et al., 2015). We sought to use the eMags tools to offer direct optogenetic control over this PI4P-cholesterol exchange through regulation of VAP-OSBP1 binding interactions.
The overall design strategy was to replace endogenous VAP with a split version, which could be reconstituted by eMag dimerization and would then associate with OSBP1 to drive transport.
Unlike the earlier examples, this necessitated careful consideration of the domain architectures of VAP and OSBP1, to best ensure 1) that split-VAP would not reconstitute in the absence of light activation and 2) that the eMagA and eMagB fusions would not interfere with either VAP reconstitution or OSBP1 interaction. VAP is an integral membrane protein composed of a cytosolic major sperm protein (MSP) domain (which binds FFAT motif-containing proteins), a coiled-coil domain and a C-terminal membrane anchor (Kaiser et al., 2005;Kim et al., 2010) (Fig. 4A and Supp. Fig. 1E). Two distinct VAP genes exist in the vertebrate genome: VAPA and VAPB, which can form either homomers or heteromers with one another. OSBP1 has an N-terminal PH domain that preferentially binds PI4P (Mesmin et al., 2013;Murphy and Levine, 2016;Venditti et al., 2019), an internal FFAT motif, and a C-terminal ORD domain which binds PI4P and cholesterol in a competitive manner.
Given this domain structure, we opted to convert VAPB into a cytosolic version through deletion of the C-terminal transmembrane helix (leaving VAPB(1-218)); we retained the MSP and coiled-coil domains as both may contribute to VAP dimerization (Kim et al., 2010) (Fig. 4B). We fused TagRFP-T to the N-terminus of this cytosolic fragment, and eMagB to its C-terminus (TagRFP-T-VAPB(1-218)-eMagB; Fig. 4B, Supp. Fig. 1E and Table S2). We then used ER-eMagA-EGFP to recruit VAPB(1-218) to the ER upon blue light irradiation, where it could interact with OSBP1.
We refer to this pair of constructs as "Opto-VAP".
We first tested the efficiency of Opto-VAP by transfecting both components into HeLa cells and imaging them by confocal microscopy. The prey protein (TagRFP-T-VAPB(1-218)-eMagB) was imaged throughout the experiment, while ER-eMagA-EGFP was imaged only during optogenetic activation. Before blue light irradiation, the prey protein was homogeneously distributed throughout the cytosol, with focal accumulation around the Golgi (Fig. 4C). We interpret this observation as reflecting interaction of VAPB with endogenous OSBP1, which is abundant in the Golgi, where it binds the PI4P-rich TGN membranes via its PH domain (Mesmin et al., 2013).
The cytosolic VAPB(1-218) prey, with its MSP domain, could compete with endogenous VAP for binding to the FFAT motif of OSBP1 (Fig. 4A,B). A robust presence of PI4P in the TGN under
To confirm that the observed PI4P transfer was indeed mediated by OSBP and Opto-VAP, cells were preincubated for 30 min with 10 μM itraconazole (ITZ), an antifungal and anticancer agent that occludes the lipid-transport domain of OSBP and thus blocks its lipid trafficking properties (Strating et al., 2015). After ITZ treatment, no change was detected in the accumulation of the PI4P probe (iRFP-P4C) at the Golgi (graph in Fig. 4C and Supp. Fig. 7, 8A), despite the efficient recruitment of TagRFP-T-VAPB(1-218)-eMagB to the ER membrane (N=16 cells, 2 independent experiments).
We next tested the Opto-VAP system in gene-edited HeLa cells lacking both VAP genes (VAP double-KO cells). It was reported that in these cells the Golgi complex is partially disrupted, with formation of PI4P-enriched hybrid Golgi-endosome structures (Dong et al., 2016), a finding that we have confirmed in cells kept in the dark (Fig. 4D, bottom). Blue light activation led to rapid recruitment of TagRFP-T-VAPB(1-218)-eMagB to the ER (Supp. Fig. 8B), whose reticular appearance was less obvious in these cells (Fig. 4D). Concomitantly, a decrease in PI4P probe fluorescence at the hybrid Golgi-endosome structures was observed (Fig. 4D), indicating PI4P loss. Thus, Opto-VAP is able to fully restore the activity of the deleted VAPA and VAPB genes in recruiting OSBP1 to perform PI4P-cholesterol exchange. After blue-light interruption, both Opto-VAP localization and PI4P levels reversed to baseline (τOFF = 93.7 ± 5.0 s) (Fig. 4D) (N=20 cells, 4 independent experiments). As before, ITZ completely inhibited PI4P transport but had no effect on Opto-VAP recruitment (N=16 cells, 3 independent experiments) (Fig. 4D and Supp. Fig. 8B). The time courses of Opto-VAP recruitment and recovery, and of PI4P loss and recovery, are similar between the wild-type and double-KO cells, suggesting that Opto-VAP assembly and function are largely independent of endogenous levels of VAPA and VAPB.
As a final verification of the necessity of the ORD domain for the observed PI4P transport, we constructed TagRFP-T-eMagB-PHOSBP, containing the PH domain of OSBP1 but not the ORD domain (Fig. 4A, Supp. Fig. 1E, Supp. Fig. 9A and Table S2). In both wild-type and VAP-DKO HeLa cells, blue-light activation induced rapid prey recruitment to the ER, but with no accompanying changes in iRFP-P4C fluorescence (Supp. Fig. 9B,C; n=16 cells for HeLa, n=17 for VAP-DKO, 2 independent experiments). Thus, the ORD domain is critical for PI4P transport, with the PH domain alone having no effect.
CONCLUDING REMARKS
In this work, we have both engineered a dramatically improved photodimerizer pair and used it in a set of experiments elucidating details of organellar interactions and cellular lipid metabolism and transport. In a previous study (Benedetti et al., 2018), we had compared multiple optogenetic dimerizer reagents and found that the Magnets system, based on orthogonalization of the Vivid LOV domain homodimer (Kawano et al., 2015), offers major advantages over other systems in several different assays. Magnets have rapid association and dissociation kinetics and require both monomers to undergo blue-light activation to permit dimerization. These properties make the background activation of Magnets low, so that they are well-suited to optogenetic modulation of small volumes and sub-cellular organelles. However, the existing Magnets tools have two critical disadvantages, which preclude their wider adoption: 1) their weak dimerization efficiency necessitates the use of concatemers, which can perturb target proteins and slow kinetics, and 2) the low thermodynamic stability means that expression and maturation must occur at reduced temperatures, complicating cell culture experiments and ruling out mammalian in vivo work entirely.
To overcome these limitations, we established a robust cell-culture screen that captures dimerization efficiency, association and dissociation kinetics, and folding and maturation. This screen allowed us to identify variants encompassing mutations across the whole protein with particular focus on the dimer interface. Mutations were selected based on sequence alignments with thermophilic fungal Vivid domains and structure-guided design. After several rounds of mutagenesis and screening, we selected final "enhanced Magnets" (eMag) variants with nine mutations over the starting scaffolds. The eMag reagents showed greater dimerization efficiency allowing use as monomers instead of concatemers, full function after their folding and maturation at 37°C, and faster association and dissociation kinetics than the original Magnets.
We thoroughly validated the eMag constructs in a range of cellular assays involving protein recruitment to different membranes, inter-organellar association, and bilayer lipid metabolism and trafficking. The success of the engineering effort validates the design strategy and shows that many mutations from thermophilic fungi grafted well to the scaffold of the Vivid photoreceptor of Neurospora crassa, a mesophilic fungus. These mutations improved packing, hydrogen bonding, and secondary structure preference. These improved optogenetic dimerizers will be broadly applicable and useful for applications across diverse fields.
B. Localized and global recruitment of a soluble prey to an ER-targeted bait in a HeLa cell.
Localized activation was achieved by illuminating the cell within a 3 µm x 3 µm ROI with 200 ms blue-light pulses at 0.5 Hz for 60 seconds. The cell was then allowed to recover in the absence of blue light for 2 min prior to global illumination. Scale bar: 5 µm.
C. Recruitment of a soluble prey to lysosomes in a DIV14 primary hippocampal neuron. The left two fields show colocalization of the lysosomally anchored bait with the lysosomal marker Lamp1-iRFP. Recruitment of the prey to a single lysosome, or to all lysosomes, was achieved by local and global illumination, respectively. Following localized illumination delivered as in (B), the cell was allowed to recover in the absence of blue light for 1 min, and then globally illuminated. Scale bar: 5 µm.
D. Schematic representation of the strategy and constructs used to induce PI(4,5)P2 depletion at the plasma membrane via the eMagF-dependent recruitment of an inositol 5-phosphatase. iRFP-PHPLCδ is a PI(4,5)P2 probe.
during, and after Opto-VAP activation, with or without ITZ treatment (N=20 and 16, respectively; from 3 independent experiments).
nMagHigh1-EGFP-Mito was generated through PCR amplification of the nMagHigh1-EGFP coding sequence from nMagHigh1-EGFP-CAAX and insertion into a pGFP-OMP25 (Nemoto and De Camilli, 1999) vector at the NheI and XhoI sites. pMagFast2(1x)-TagRFP-T was generated through PCR amplification of the third unit of pMagFast2(3x) and TagRFP-T in pMagFast2(3x)-TagRFP-T (Benedetti et al., 2018) and insertion into the same vector at the HindIII and XbaI sites. In order to recreate an optimal Kozak sequence, Met and Gly were added before the initial His at the N-terminus of pMagFast2 in this construct. All nMagHigh1 and pMagFast2 mutants eMagAF-EGFP-PM and eMagA-EGFP-PM were generated by replacing nMagHigh1 in nMagHigh1-EGFP-CAAX with the engineered variants at the HindIII and XbaI sites. mCherry-eMagBF-5ptaseOCRL was synthesized by digesting mCherry-pMagFast2(3x)-5ptaseOCRL (Benedetti et al., 2018) with NotI and PvuI and ligating it with eMagBF amplified from eMagBF-TagRFP-T. iRFP-PHPLCδ plasmids were previously described (Idevall-Hagren et al., 2012).
Light-dependent induction of contacts between ER and lysosomes was achieved by transfecting COS7 cells with ER-mCherry-eMagA and Lys-eMagB-iRFP at a 2:1 ratio in OptiMEM-I (1:4 DNA:lipofectamine ratio). ER-mitochondria contacts were elicited in HeLa cells transfected with ER-mCherry-eMagA and eMagB-iRFP-Mito at a 1:1 ratio in OptiMEM-I (1:4 DNA:lipofectamine ratio). Mitochondria-lysosome contacts were evoked in HeLa cells transfected with eMagA-mCherry-Mito and Lys-eMagB-iRFP. Cells were incubated with the transfection mix for 1 hour. Subsequently, the serum-free medium was replaced by complete DMEM with no phenol red, and imaging was performed in the same medium between 16 and 28 hours after transfection.
Confocal microscopy
All optogenetic experiments, with the exception of the experiments with Opto-VAP and its controls and the light-dependent induction of inter-organellar contacts, were performed using the Improvision
Image Analysis and Statistics
Association and dissociation rates for each dimerization system were calculated from changes in prey fluorescence inside a cytosolic ROI before, during, and after photoactivation. Statistical analyses were carried out in GraphPad Prism 8.2.1 (GraphPad Software).
Kinetics analysis
We found that the apparent kinetics of the Magnets variants reported in this study fit well to an exponential decay model. We used the curve-fitting tool (cftool) in MATLAB to determine the kinetic rate constants, τON and τOFF, by fitting the curve to the following equation:

S(t) = S(∞) + ΔS · exp(−(t − t0)/τ)

where τ = τON or τOFF, t0 is the time at which the light is turned on or off (for on- or off-kinetics, respectively), S0 is S at time t0, and ΔS = S0 − S(∞). During the fitting process, each point is given a weight proportional to 1/s.e.m.². The parameters of the fit can be found in Supplementary Table 4. For all the datasets acquired in this work, the R² values obtained for exponential fits are always larger than 0.86, with a median of 0.98.
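The same exponential model can also be fitted outside MATLAB; the sketch below uses SciPy's curve_fit purely as an illustration, with synthetic data and placeholder names rather than the authors' code.

```python
# Illustrative exponential fit of on/off kinetics, S(t) = S_inf + dS * exp(-(t - t0)/tau).
import numpy as np
from scipy.optimize import curve_fit

t0 = 0.0  # time at which the light is switched on or off

def decay(t, s_inf, d_s, tau):
    return s_inf + d_s * np.exp(-(t - t0) / tau)

# Synthetic data standing in for normalized prey fluorescence in a cytosolic ROI.
t = np.linspace(0.0, 120.0, 61)
s = decay(t, 1.0, 0.8, 23.0) + np.random.default_rng(1).normal(0.0, 0.02, t.size)

popt, _ = curve_fit(decay, t, s, p0=(1.0, 0.5, 10.0))
print(f"Fitted tau: {popt[2]:.1f} s")
```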
Energy and Infrared Radiation Characteristics of the Sandstone Damage Evolution Process
The mechanical characteristics and mechanisms of rock failure involve complex rock mass mechanics problems involving parameters such as energy concentration, storage, dissipation, and release. Therefore, it is important to select appropriate monitoring technologies to carry out the relevant research. Fortunately, infrared thermal imaging monitoring technology has obvious advantages in the experimental study of rock failure processes and of energy dissipation and release characteristics under load damage. Therefore, it is necessary to establish the theoretical relationship between the strain energy and infrared radiation information of sandstone and to reveal its fracture energy dissipation and disaster mechanism. In this study, an MTS electro-hydraulic servo press was used to carry out uniaxial loading experiments on sandstone. The characteristics of dissipated energy, elastic energy, and infrared radiation during the damage process of sandstone were studied using infrared thermal imaging technology. The results show that (1) the transition of loaded sandstone from one stable state to another occurs in the form of an abrupt change. This sudden change is characterized by the simultaneous occurrence of elastic energy release, a surge in dissipated energy, and a surge in the infrared radiation count (IRC), and it has the characteristics of a short duration and a large amplitude of variation. (2) With the increase in the elastic energy variation, the surge in the IRC of sandstone samples presents three different development stages, namely fluctuation (stage Ⅰ), steady rise (stage Ⅱ), and rapid rise (stage Ⅲ). (3) The more obvious the surge in the IRC, the greater the degree of local damage of the sandstone and the greater the range of the corresponding elastic energy change (or dissipated energy change). (4) A method of sandstone microcrack location and propagation pattern recognition based on infrared thermal imaging technology is proposed. This method can dynamically generate the distribution nephogram of tension-shear microcracks in the bearing rock and accurately evaluate the real-time process of rock damage evolution. Finally, this study can provide a theoretical basis for rock stability, safety monitoring, and early warning.
Introduction
The fracturing of surrounding rock is the fundamental cause of coal pillar instability and mine water inrush [1]. A large number of engineering measurements and tests have shown that the fracture mechanical characteristics and mechanisms of rock involve complex mechanical problems with respect to the rock mass, such as energy concentration, storage, dissipation, and release [2,3]. Thus, rock fracture mechanics is the most valuable way to study coal rock failure and instability from an energy perspective. Numerous scholars have contributed to basic research on the energy dissipation and release laws of rock masses in the process of mechanical failure. Zhao et al. [4] proposed the minimum energy principle of rock mass dynamic failure. Xie et al. [5,6] concluded that the combined action of energy dissipation and energy release results in rock deformation and failure, and they proposed the strength loss criterion and global failure criterion based on energy dissipation and releasable strain energy, respectively. Liang et al. [7] found that energy dissipation causes rock damage, lithology deterioration, and loss of strength; however, the internal cause of sudden rock failure is energy release. Chen et al. [8] proposed a damage coefficient based on the energy evolution mechanism, and the mechanical properties and damage evolution process of rock were analyzed from the perspective of energy. Zhang [9] obtained the evolution and distribution rules of the elastic energy properties and dissipation energy of rock samples in the process of deformation and failure. Kong et al. [10] found that the formation of macroscopic cracks requires greater dissipation energy than the breeding and initiation of microcracks; further, the elastic energy drops sharply and the damage reaches its maximum when macroscopic cracks are formed.
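As context for the energy terms used throughout this paper, the decomposition commonly applied in this literature splits the total input strain energy density U (the area under the stress-strain curve) into a releasable elastic part Ue and a dissipated part Ud = U − Ue, with Ue often approximated as σ²/(2Eu) using an unloading modulus Eu. The sketch below is a generic illustration of that bookkeeping with synthetic data, not code or parameters from the studies cited above; for simplicity the unloading modulus is assumed constant.

```python
# Generic strain-energy bookkeeping for a uniaxial stress-strain record (illustrative only).
import numpy as np

def energy_partition(strain, stress_mpa, unload_modulus_mpa):
    """Return total, elastic and dissipated energy densities (MJ/m^3, since
    1 MPa = 1 MJ/m^3) at each point of a uniaxial stress-strain curve."""
    # Total input energy density: running trapezoidal integral of stress over strain.
    u_total = np.concatenate(([0.0], np.cumsum(
        0.5 * (stress_mpa[1:] + stress_mpa[:-1]) * np.diff(strain))))
    # Releasable elastic energy density, approximated with the unloading modulus.
    u_elastic = stress_mpa ** 2 / (2.0 * unload_modulus_mpa)
    return u_total, u_elastic, u_total - u_elastic

# Synthetic softening curve: strain dimensionless, stress in MPa, peak ~60 MPa.
strain = np.linspace(0.0, 0.01, 200)
stress = 60.0 * (strain / 0.005) * np.exp(1.0 - strain / 0.005)
u, ue, ud = energy_partition(strain, stress, unload_modulus_mpa=33000.0)
print(f"At peak stress: U = {u[np.argmax(stress)]:.3f}, Ue = {ue.max():.3f} MJ/m^3")
```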
Considering the highly nonlinear process of rock failure, it is often difficult to invert the whole failure process effectively with only a single evaluation index. It is necessary to use a variety of monitoring equipment (stress-strain, acoustic emission, thermal infrared, etc.) to carry out research and tests with respect to the failure laws of stressed rocks at the laboratory scale and to identify reliable signals of failure precursors. Infrared thermal imaging is a nondestructive remote sensing monitoring technology [11][12][13] that has advantages in the study of rock failure processes and energy dissipation and release characteristics under load and has been widely used in rock failure warning systems. Shen et al. [14], Cao et al. [15], Wang et al. [16], and Lai et al. [17] proposed that spatial differentiation infrared thermal images, obvious turning of the average infrared radiation temperature curve, and the migration characteristics of abnormal regions of infrared radiation could be taken as the precursors of rock failure. Cai et al. [18] found that the existence of water promoted the release of heat energy. Aiming to reveal the law of infrared radiation of rock under an impact load, Zhou et al. [19] adopted the separated Hopkinson pressure bar (SHPB) system to carry out impact tests on sandstone under different strain rates and found that with the increase in the strain rate, the average infrared radiation temperature increment keeps increasing, and the relationship between them is a power function. Based on the characteristics of spatial differentiation in infrared thermal images before rock failure, Liu et al. [20], Ma et al. [21], Wu et al. [22], and Sun et al. [23] proposed the characteristic roughness index for infrared thermal images, differential infrared radiation variance, infrared temperature variation field, and infrared radiation count, respectively. These indexes quantitatively describe the evolution characteristics of the infrared radiation temperature field of rocks under load and provide a new idea for the infrared radiation analysis of rock disaster processes.
Few reports have associated the energy release law with infrared radiation characteristics. In this study, firstly, after analyzing the infrared radiation characteristics of rock and the release and dissipation characteristics of fracture energy under uniaxial compression, a quantitative relationship between infrared radiation parameters and rock strain energy parameters is established from the perspective of macroscopic energy conservation. Based on this relationship, the mechanical mechanism of the rock fracture degree is described so as to better reveal the fracture characteristics of rock. Secondly, a method based on infrared thermal imaging technology is proposed to identify the location and propagation mode of sandstone microcracks. This method can be used to dynamically generate the distribution cloud map of tension-shear microcracks of the bearing rock and accurately evaluate the real-time process of rock damage evolution. Finally, the results can provide a theoretical basis for rock stability, safety monitoring, and early warning.
Experimental Samples
Due to the uneven distribution of pores and microcracks, the internal structure of coal measure sandstone is quite different from that of other rocks. As a result, the dispersion of infrared observation experimental results with respect to such sandstone under uniaxial loading is large. In this study, sandstone samples with uniform texture were selected to ensure that the dispersion of the experimental results was small. The sandstone samples used in this study were from a quarry in the city of Jinan in Shandong Province and were extracted from a single block of rock. The sample size was 70 mm × 70 mm × 140 mm, and the samples were numbered as Ai (i = 1~13). All sandstone was processed in accordance with the requirements of the test procedures for rock physical and mechanical properties, as shown in Figure 1, and the mechanical properties of the tested sandstone samples are shown in Table 1. After processing, a grinding machine and sandpaper were used to carefully grind the two end-surfaces, and the longitudinal parallelism of both ends was not greater than 5 × 10^-5 m. The sandstone was placed in the laboratory in advance so that the temperature of the sandstone was consistent with the ambient temperature of the laboratory.
Experimental System and Method
The uniaxial compression test of sandstone involved the equal displacement control method for loading, and the loading rate was controlled at 0.2 mm/min. A VarioCAM HD head 880 uncooled infrared thermal imager was used to collect the infrared radiation information of the sandstone during the whole process from loading to failure under uniaxial compression. The main parameters of the infrared thermal imager were as follows: thermal sensitivity > 0.02 °C; pixels, 1240 × 768; acquisition rate, 25 frames/s; and measuring band, 7.5~14 µm.
In the lead-up to the experiment, plastic films were arranged on the upper and lower contact surfaces between the sandstone and the loading machine to reduce the friction and heat conduction effect between the sandstone and the loading head during the experiment. The advantage of this was that the end effect could be reduced while the mechanical properties of the sandstone did not change. The reference sample was placed parallel to the left of the loaded sandstone, but the reference sandstone was not loaded, as shown in Figure 2. No one was allowed to move around during the experiment, and the windows, curtains, and all radiation-generating lighting sources of the laboratory were closed [24].
Variation Characteristics of Strain Energy in Sandstone Fracture Process
Assuming that there was no energy exchange between the sandstone and the external environment during the loading process, according to the first law of thermodynamics [5,6]:

U = U_e + U_d,

where U is the total energy input by the loading machine, U_e is the elastic energy stored in the sandstone, and U_d is the dissipation energy of the sandstone. Therefore, the total energy absorbed under uniaxial loading is as follows [5,6]:

U = ∫ σ_1 dε_1,

where σ_1 is stress and ε_1 is strain. The elastic energy can be expressed as follows [5,6]:

U_e = σ_1^2 / (2E_0),

where E_0 is the initial elastic modulus. Therefore, the dissipation energy is as follows:

U_d = U − U_e.

Hence, the energy value of the sandstone in the loading process could be calculated based on the abovementioned energy calculation equations. The energy curve variation characteristics corresponding to the compaction stage, linear elastic deformation stage, plastic deformation stage, and post-peak stage of all the sandstone samples had similar characteristics, as shown in Figure 3. From Figure 3, it can be seen that in the linear elastic stage (I), most of the work performed by external forces was converted into releasable elastic energy and stored in the sandstone, so the increase rate of elastic energy was faster and the slope of the elastic energy curve was larger, while the slope of the dissipation energy curve remained approximately constant. In Figure 3, it can also be seen that the total strain energy and elastic energy increased almost in the same trend, and the increase rates of both were greater than that of the dissipation energy.
In the plastic deformation stage (II) shown in Figure 3, plastic deformation occurred at the same time as elastic deformation, and the process was also accompanied by the initiation and development of microcracks, so the releasable elastic energy and dissipation energy increased gradually. This stage was the key stage for new crack breeding, formation, and propagation in the sandstone; therefore, the total input energy was mainly transformed into the dissipation energy driving the failure of the sandstone, which was manifested as the growth rate of the dissipation energy and the slope of the dissipated energy curve increasing. After the formation of cracks in the sandstone, the material deteriorated and the energy storage capacity decreased, so the growth of the elastic energy tended to slow down. When the total stress-strain curve of some sandstone samples reached the peak stress, there was a slight drop in the stress, the stored elastic energy was released quickly with a small amplitude, and there was a small surge increase in energy dissipation (as shown in Figure 3c,d).

In the post-peak stage (III) shown in Figure 3, the sandstone required more energy to drive the crack coalescence, so the dissipation energy increased sharply, and the slope of the dissipation energy curve became larger. After the formation of large-scale cracks, the sandstone lost the ability to store energy, and the releasable elastic energy stored in the sandstone was released in large quantities and converted into dissipation energy. Hence, in Figure 3, the elastic energy suddenly decreased, and the dissipation energy and its growth rate gradually increased.
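To make the energy bookkeeping above concrete, the following short Python sketch (not taken from the paper; the array names, the synthetic stress-strain curve, and the choice of fitting the initial elastic modulus E0 to the early linear part of the record are all assumptions made for illustration) computes U, U_e, and U_d from a digitized uniaxial loading curve.

import numpy as np

def strain_energy_partition(strain, stress, linear_fraction=0.3):
    """Return total, elastic and dissipated energy densities (same units as
    stress, e.g. MPa = MJ/m^3) at every point of the loading curve."""
    strain = np.asarray(strain, dtype=float)
    stress = np.asarray(stress, dtype=float)

    # Initial elastic modulus E0 from a linear fit to the early portion.
    n0 = max(2, int(linear_fraction * len(strain)))
    E0 = np.polyfit(strain[:n0], stress[:n0], 1)[0]

    # Total input energy density: cumulative integral of sigma d(epsilon).
    U = np.concatenate(([0.0], np.cumsum(np.diff(strain) *
                                         0.5 * (stress[1:] + stress[:-1]))))
    U_e = stress ** 2 / (2.0 * E0)      # releasable elastic energy
    U_d = U - U_e                       # dissipated energy
    return U, U_e, U_d

# Hypothetical usage with a synthetic loading curve:
eps = np.linspace(0.0, 0.01, 200)
sig = 3.0e4 * eps * np.exp(-eps / 0.008)   # MPa, rises then softens
U, U_e, U_d = strain_energy_partition(eps, sig)
print(f"peak elastic energy {U_e.max():.3f} MJ/m^3, "
      f"final dissipated energy {U_d[-1]:.3f} MJ/m^3")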
Infrared Radiation Index
The infrared radiation information of bearing sandstone is composed of effective signals (changes in infrared radiation caused by the generation of microcracks) and noise signals (changes in infrared radiation caused by environmental temperature changes and the drift of the uncooled infrared focal plane array response with time), and the infrared radiation variation amplitude of the reference sandstone is due only to noise signals [25,26]. In order to eliminate the interference of noise signals in relation to the effective signals of the bearing sandstone in this study, the threshold value of the infrared radiation temperature matrix of the bearing sandstone at the corresponding time was determined by the maximum value in the infrared radiation temperature matrix of each frame of the reference sandstone. When a value in the infrared radiation temperature matrix of the bearing sandstone was greater than the threshold value, it was regarded as an infrared radiation temperature change caused by damage to the bearing sandstone, and the temperature value is known as the infrared radiation sudden change temperature. The IRC is obtained by counting the values in the infrared radiation temperature change matrix of the sandstone that are larger than the threshold value [23]. The physical significance of the IRC is the number of cracks produced by the sandstone at a certain time in the process of the damage evolution of the sandstone. The greater the IRC at a certain time, the more cracks produced inside the sandstone at that time, and the more serious the damage.
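A minimal sketch of how such an IRC could be computed from two registered thermal-image sequences is given below. It is an assumed implementation for illustration only (the authors' processing chain is not published here), and the variable names, frame size, and synthetic data are hypothetical.

import numpy as np

def irc_for_frame(loaded_dT, reference_dT):
    """loaded_dT, reference_dT: 2-D arrays of temperature change (K)
    relative to the first frame, for the loaded and reference samples."""
    threshold = reference_dT.max()           # noise level of this frame
    sudden_change = loaded_dT > threshold    # pixels attributed to damage
    return int(sudden_change.sum()), sudden_change

# Hypothetical usage on a random frame pair:
rng = np.random.default_rng(0)
ref = rng.normal(0.0, 0.02, (768, 1024))     # noise-only reference sample
load = rng.normal(0.0, 0.02, (768, 1024))
load[300:330, 500:520] += 0.3                # a warming patch (e.g. shear crack)
irc, mask = irc_for_frame(load, ref)
print("IRC of this frame:", irc)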
Characteristics of Infrared Radiation Time Sequence Change
The relationship between the IRC and stress is shown in Figure 4. The periodic variation characteristics of the IRC of all the sandstone samples were similar. Due to limited space, only sandstone A 2 is given as an example. When the loading times were 436.0 s (where the stress was 92.2% of the peak stress) and 455.8 s (where the stress was 94.2% of the peak stress), the IRC had a pulse surge with a sudden stress drop in the loading curve (the IRC immediately reverted to its pre-surge state).
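As an illustration of how such synchronized events might be flagged automatically, the sketch below marks moments where a sudden stress drop and an IRC pulse coincide. It is a hypothetical implementation; the drop and surge thresholds and the synthetic series are arbitrary choices made for illustration, not values used in the experiment.

import numpy as np

def synchronized_surges(time, stress, irc, drop_frac=0.01, irc_factor=5.0):
    """Return times at which the stress drops by more than `drop_frac` of
    the current peak stress while the IRC exceeds `irc_factor` times its
    running baseline."""
    stress = np.asarray(stress, float)
    irc = np.asarray(irc, float)
    peak_so_far = np.maximum.accumulate(stress)
    stress_drop = np.diff(stress) < -drop_frac * peak_so_far[:-1]
    baseline = np.median(irc[irc > 0]) if np.any(irc > 0) else 1.0
    irc_surge = irc[1:] > irc_factor * baseline
    return np.asarray(time)[1:][stress_drop & irc_surge]

# Hypothetical usage with synthetic series:
t = np.arange(0, 500, 0.5)
sigma = np.minimum(0.1 * t, 45.0)
sigma[t > 436] -= 2.0                                   # a small stress drop
counts = np.full_like(t, 50.0)
counts[(t > 435.5) & (t < 437)] = 800.0                 # synchronous IRC pulse
print(synchronized_surges(t, sigma, counts))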
Spatial Variation Characteristics of Infrared Radiation
In order to study the spatial distribution characteristics of infrared radiation in the loading process of sandstone, a cloud map of the IRC spatial distribution of sandstone A 2 in the compaction stage, elastic stage, plastic stage, and the moment of IRC surge is shown in Figure 5a-d, respectively. An IRC spatial distribution cloud map can show the location of new cracks in sandstone at a certain time. The higher the IRC, the more cracks there are in the sandstone and the more serious the damage is. Before the IRC surge (as shown in Figure 5a,b), cracks were disordered in the spatial distribution cloud map of the IRC, and the number of cracks was at a low scale. Only when the IRC mutated were the cracks mainly distributed in a certain area in the spatial distribution cloud map of the IRC. According to the moment of the IRC surge in the sandstone, the spatial distribution map of the IRC at the corresponding time could be obtained, and the damaged and broken areas of the sandstone could be accurately located.
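A possible way to turn the per-frame sudden-change mask into a located damage region is sketched below; the bounding-box criterion is an assumption made for illustration and reuses the mask produced in the earlier IRC sketch.

import numpy as np

def damage_region(mask):
    """mask: boolean 2-D array of sudden-change pixels for one frame.
    Returns (row_min, row_max, col_min, col_max) of the flagged pixels,
    or None if the frame contains no sudden-change pixels."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    return rows.min(), rows.max(), cols.min(), cols.max()

# Hypothetical usage with the mask computed in the previous sketch:
# box = damage_region(mask)
# print("damaged area (pixel bounding box):", box)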
Correlation between Infrared Radiation and Strain Energy in Sandstone Loading Process
The relationship between the IRC and strain energy of the loaded sandstone over time is shown in Figure 6. In the compaction stage and elastic stage, the IRC did not show obvious surge characteristics with the accumulation of elastic energy and dissipation energy. In the plastic stage and failure stage, the elastic energy release, dissipation energy surge, and IRC surge occurred synchronously in all the sandstone samples. Taking sandstone A 1 as an example, the IRC surge occurred synchronously at 427.4 s and 456.9 s during the elastic energy release and dissipation energy surge, as shown in Figure 6a. In the plastic stage, the dissipation energy began to rise rapidly after the first IRC surge, and at the end of loading, the dissipation energy reached its maximum. The above phenomena indicate that the transition of the loaded sandstone from a stable state to another stable state occurred in the form of a sudden change, which was specifically manifested as the simultaneous occurrence of the elastic energy release, the surge in dissipated energy, and the surge in the IRC. This phenomenon represents a transient change in state with the characteristics of a short period and large amplitude.
Quantitative Relationship between Infrared Radiation and Strain Energy in Sandstone Stress Drop
When the stress drop occurred in the sandstone, its energy was released and dissipated at the same time as the internal structure was damaged. Meanwhile, the IRC mutated simultaneously. As a result, there was good correspondence between the elastic energy release, dissipation energy surge, and IRC surge of the sandstone. There must be a quantitative relationship between the strain energy change information and IRC surge information. For the sake of exploring the quantitative relationship between the variation in strain energy and the surge of infrared radiation during the generation of the stress drop in the sandstone, three variation indexes based on elastic energy, dissipated energy, and IRC were proposed in this study, and the corresponding three indexes of each stress drop process were calculated and counted.
The variation in elastic energy (∆U_e) is the difference between the highest value (U_e^a) and the lowest value (U_e^b) of the elastic energy during the generation of the stress drop in the sandstone:

∆U_e = U_e^a − U_e^b.

The variation in dissipation energy (∆U_d) is the difference between the highest energy value (U_d^a) and the lowest energy value (U_d^b) of the dissipation energy during the generation of the stress drop in the sandstone:

∆U_d = U_d^a − U_d^b.

The variation in the IRC (∆IRC) refers to the absolute difference between the highest value (IRC_a) and the lowest value (IRC_b) of the IRC at the corresponding moment when elastic energy release occurs in the sandstone:

∆IRC = |IRC_a − IRC_b|.

The ∆U_e, ∆U_d, and ∆IRC values of the sandstone samples are shown in Table 2. The phenomena of elastic energy release, dissipation energy surge, and IRC surge occurred simultaneously 21 times in the 13 sandstone samples. Among them, eight samples showed one instance of elastic energy release, surge in dissipation energy, and synchronous IRC surge, while multiple elastic energy releases, dissipated energy surges, and IRC surges occurred synchronously in the other five samples. The variation in the elastic energy of the sandstone samples ranged from 0.002 to 0.449 MJ/m^3 (average 0.083 MJ/m^3), and the variation in dissipation energy ranged from 0.003 to 0.561 MJ/m^3 (average 0.092 MJ/m^3); the variation in dissipation energy was slightly higher than that of the elastic energy. The stress ratios at which the above phenomena occurred ranged from 34.63% to 100% (average 91%).

Figures 7 and 8 show the relationship between ∆IRC and ∆U_e, and between ∆IRC and ∆U_d, for all the sandstone samples, respectively. The trends in the two figures were basically the same; therefore, only the relationship between ∆IRC and ∆U_e is discussed here. With the increase in ∆U_e, ∆IRC showed three development stages with different increasing trends: stage I, fluctuation; stage II, steady rise; and stage III, rapid ascent (as shown in Figure 7).

Stage I, fluctuation: In eleven cases (about 52.4% of the total), ∆IRC tended to oscillate with the growth in ∆U_e. In this stage, ∆U_e ranged from 0.002 to 0.017 MJ/m^3 (average 0.008 MJ/m^3), and ∆IRC ranged from 24 to 508 (average 180.4).

Stage II, steady rise: In seven cases (about 33.3% of the total), ∆IRC tended to rise steadily with the growth in ∆U_e. In this stage, ∆U_e ranged from 0.032 to 0.129 MJ/m^3 (average 0.084 MJ/m^3), and ∆IRC ranged from 49 to 559 (average 312.3).

Stage III, rapid ascent: In three cases (about 14.3% of the total), ∆IRC tended to ascend rapidly with the growth in ∆U_e. In this stage, ∆U_e ranged from 0.223 to 0.449 MJ/m^3 (average 0.355 MJ/m^3), and ∆IRC ranged from 1223 to 15,519 (average 8061.3).

In summary, ∆IRC increased with the growth in ∆U_e or ∆U_d. These results indicate that the magnitude of the stress drop in the sandstone depended on the local damage degree of the sandstone samples; namely, the more obvious the IRC surge, the greater the local damage degree of the sandstone and the larger the corresponding elastic energy change (or dissipation energy change).
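Assuming the elastic energy, dissipation energy, and IRC are available as time series aligned with the loading record, the three variation indexes over one stress-drop window could be evaluated as in the following hypothetical sketch (the window indices and array names are illustrative assumptions, not the authors' code).

import numpy as np

def variation_indexes(U_e, U_d, irc, i0, i1):
    """Variation indexes over the stress-drop window [i0, i1] (frame indices)."""
    w = slice(i0, i1 + 1)
    dU_e = U_e[w].max() - U_e[w].min()        # released elastic energy
    dU_d = U_d[w].max() - U_d[w].min()        # surged dissipated energy
    dIRC = abs(irc[w].max() - irc[w].min())   # surge of the infrared count
    return dU_e, dU_d, dIRC

# Hypothetical usage with arrays from the earlier sketches:
# dU_e, dU_d, dIRC = variation_indexes(U_e, U_d, counts, 870, 880)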
Figure 9 shows the variation law of the average infrared radiation temperature (AIRT) and IRC of sandstone A7 after de-noising during the loading process. The AIRT can directly reflect the overall infrared radiation intensity of the bearing sandstone surface, which is an important feature reflecting the change in infrared radiation. Therefore, the average infrared radiation temperature can be used as a quantitative analysis index to reflect the change characteristics of infrared radiation in the bearing sandstone.

Figure 9. Sensitivity of sandstone sample A7 to infrared radiation information.
The physical significance of the IRC is that the number of cracks in the sandstone at a certain time in the process of damage evolution can be obtained based on infrared thermal imaging technology. As can be seen from Figure 9, the variation characteristics of the AIRT and IRC indexes were different in the different loading stages of the sandstone. From the initial loading to the first stress drop, the AIRT of sandstone A7 had an obvious downward trend, from the initial value of 0 °C to −0.88 °C, while the IRC showed no significant change. This indicates that at this stage, the AIRT index was more sensitive to the change in the sandstone state than the IRC. During the time from the first stress drop to the final failure of the sandstone, the thermal image of sandstone A7 showed obvious anomalies at 691.8 s and 713.9 s; the change trend of the AIRT was slow and had no obvious change characteristics, while the IRC showed a significant synchronous mutation.
The reasons for the difference and asynchronism between the AIRT anomaly and infrared thermal image anomaly are as follows. First, the generation of infrared radiation information is closely related to the failure form of sandstone. When there are thermal effects with opposite trends on the surface of a sandstone failure (i.e., shear cracks have a heating effect and tension cracks have a cooling effect [20]), such opposite thermal effects offset each other, resulting in insignificant changes in the AIRT [20]. Second, when the sandstone was broken, the infrared thermal image only showed anomalies in small areas, and the overall infrared radiation temperature of the sandstone surface changed very little.
In contrast, the AIRT had a better response to both the compaction stage and elastic stage of the sandstone, while the IRC had a better response to the generation of the stress drop in the plastic stage and failure stage of the sandstone. The amplitude of the IRC changed little before the fracture (stress drop) of the sandstone, and abrupt changes occurred with the stress drop near the moment of sandstone failure. Moreover, the spatial distribution map of the IRC reflected the evolution and differentiation characteristics of the sandstone surface in the process of fracture and failure. As a consequence, the IRC has more advantages in terms of the identification of damage information and infrared radiation precursors of sandstone, and this makes it easier to capture the precursors of the failure of bearing sandstone. It is suitable for use as an indicator to find the precursors of the failure of bearing sandstone and establish the index of the quantitative relationship between the damage information of sandstone and the infrared radiation information.
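For completeness, a simple assumed implementation of the de-noised AIRT (the mean temperature of the loaded sample with the mean of the reference sample subtracted, referred to the first frame) is sketched below. The de-noising step is only one plausible reading of the procedure described above, and the array names are hypothetical.

import numpy as np

def airt(loaded_frames, reference_frames):
    """loaded_frames, reference_frames: arrays of shape (n_frames, H, W)
    with absolute IR temperatures.  Returns one AIRT value per frame (degC
    relative to the first frame)."""
    load_mean = loaded_frames.reshape(len(loaded_frames), -1).mean(axis=1)
    ref_mean = reference_frames.reshape(len(reference_frames), -1).mean(axis=1)
    drift_free = load_mean - ref_mean            # remove ambient drift
    return drift_free - drift_free[0]            # relative to the first frame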
Identification of Location and Propagation Pattern of Sandstone Microcracks
There are great uncertainties in the location and propagation pattern recognition of sandstone microcracks. The variation amplitude of the infrared radiation temperature in sandstone is closely related to the propagation mode of the microcracks: shear cracks have a heating effect and tension cracks have a cooling effect [20]. If the points where the temperature rises and falls can be screened from the infrared radiation temperature field, the location and propagation mode of sandstone microcracks can be identified. Previous studies have shown that the region above 0 °C in differential infrared thermography represents the warming region, while the region below 0 °C represents the cooling region [21]. However, it is difficult to distinguish the local infrared radiation temperature variation in differential infrared thermal image sequences. Taking sandstone A2 as an example, the differential infrared thermal image at the corresponding moment in Figure 5 is shown in Figure 10, and the warming and cooling zones are difficult to identify in Figure 10a,b,d. Aiming to solve the above problems, a method is presented in this paper to identify the location and propagation mode of sandstone microcracks. Firstly, the location information of the cracks in the spatial distribution cloud map of the sandstone IRC was determined, and then the corresponding positions of these cracks in the differential thermal map were determined as warming points or cooling points. Warming points are shown in red in the spatial distribution cloud map of the sandstone IRC (representing a shear crack), and cooling points are shown in blue (representing a tensile crack). According to the abovementioned method, the spatial distribution diagram of the sandstone tensile and shear cracks at the corresponding moment in Figure 5 is shown in Figure 11. The distribution of the abnormal region in the spatial distribution map of the tensile crack based on the IRC was basically the same as that of the differential infrared thermal image sequence map.
The spatial distribution cloud map of tensile cracking based on the IRC could easily and intuitively identify the shear crack region and tension crack region on the surface of sandstone, which could more clearly reflect the information of damage evolution and fracturing failure of sandstone and could accurately locate the damage and fracturing regions of sandstone.
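The tension-shear recognition described above can be summarized in a few lines of code. The sketch below is a hypothetical implementation, assumed for illustration only, in which flagged IRC pixels are labelled by the sign of the differential temperature.

import numpy as np

def classify_microcracks(irc_mask, differential_dT):
    """irc_mask: boolean map of sudden-change pixels at the surge moment.
    differential_dT: differential infrared thermal image for the same frame.
    Returns an int map: 0 = background, +1 = shear (warming), -1 = tension
    (cooling)."""
    labels = np.zeros(irc_mask.shape, dtype=int)
    labels[irc_mask & (differential_dT > 0)] = 1     # shear crack (red)
    labels[irc_mask & (differential_dT < 0)] = -1    # tension crack (blue)
    return labels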
Engineering Value and Practical Significance
The experimental results showed that when a stress drop occurred in the sandstone, the release and dissipation of strain energy were generated synchronously with the IRC mutation. The spatial distribution map of the IRC at the corresponding time can be drawn to accurately locate the damaged and broken area of sandstone, which is important for the monitoring and early warning of surrounding rock instability in mines, tunnels, and other geotechnical projects. The essence of the instability failure of engineering rock mass is the process of crack development until the loss of the peak bearing capacity, such as coal pillar instability, roadway surrounding rock instability, slope instability, etc. The rock mass in this study experienced crack closure, elastic deformation, stable crack development, unstable crack propagation, and a post-peak stage under the action of stress. In the phase of unstable crack growth, internal macroscopic cracks began to form, combine, and expand. The rock mass was irreversibly damaged, and its inherent strength was reduced. At this time, a large amount of elastic energy was dissipated and converted into other forms of energy (including infrared energy), which caused the temperature of the infrared radiation temperature field to rise, and the IRC underwent obvious mutations. Therefore, in practical application, thermal images can be used for the real-time monitoring of engineering rock mass, such as coal pillars, roadway-surrounding rock, and side slopes. The significant abrupt change in the IRC indicated an abrupt change in the rock fracture state (occurring in an unstable extension of the rock). It should be noted that the time interval between the IRC mutation and peak stress in the experiment was relatively short. This means that if a significant IRC mutation is observed, then the rock is close to instability. Therefore, corresponding necessary measures should be taken in time in actual projects, such as strengthening supports and evacuating nearby workers and equipment so as to avoid unnecessary losses.
Conclusions
The infrared radiation information of rock is related to its energy dissipation and release, which can reflect the deformation and failure processes of rock. Previous studies did not correlate the law of strain energy release with infrared radiation characteristics. In this study, the quantitative relationship between infrared radiation parameters and rock strain energy parameters was established from the perspective of macroscopic energy conservation, and based on this relationship, the mechanical mechanism of the rock fracture degree was described. Furthermore, a method based on infrared thermal imaging was proposed to accurately evaluate the real-time process of rock damage evolution. The results provided a theoretical basis for rock stability, safety monitoring, and early warning. The conclusions are as follows: (1) The relationship between infrared radiation parameters and rock strain energy parameters in the process of sandstone stress was determined. The simultaneous phenomenon of elastic energy release, dissipated energy surge, and infrared radiation count surge occurred when a stress drop occurred in all the sandstone samples. Among these, 61.5% of the sandstone samples exhibited the simultaneous occurrence of elastic energy release, surging of dissipated energy, and surging of the infrared radiation count, while 38.5% of the samples exhibited multiple simultaneous occurrences of elastic energy release, surging of dissipated energy, and surging of the infrared radiation count.
(2) Quantitative analysis indexes of the variation in elastic energy and mutation of the infrared radiation count were proposed, which can quantify the relationship between infrared radiation parameters and rock strain energy parameters. With the increase in the elastic energy variation, the mutation of the infrared radiation count of the sandstone samples showed three different development stages: stage I, fluctuation; stage II, steady rise; and stage III, rapid ascent. In general, the greater the mutation of the infrared radiation count, the greater the local damage degree of the sandstone, and the greater the amplitude of the corresponding elastic change (or dissipated energy change).
(3) Based on the relationship between the infrared radiation parameters and the rock strain energy parameters, the mechanical mechanisms of the rock fracture degree were described, and the fracture characteristics of the rock mass were better revealed. Sandstone loading is the process of a transition from a stable state to another stable state, and this process takes place in the form of mutation. This sudden change is a transient state, which is characterized by a short period and large amplitude.
(4) A method based on infrared thermal imaging was proposed to identify the location and propagation mode of sandstone microcracks, which can dynamically generate a distribution cloud map of tensile and shear microcracks of bearing rock, accurately evaluate the real-time process of rock damage evolution, easily and intuitively identify the shear crack regions and tension crack regions of the surface of sandstone, more clearly reflect the information of sandstone damage evolution and fracturing failure, and accurately locate regions of sandstone damage and fracturing.
However, this study was completed under uniaxial loading conditions, which do not fully reflect the real influence of mining modes and engineering disturbances. In future research, the quantitative relationship between the infrared radiation parameters and strain energy parameters of sandstone under different stress conditions such as shear, biaxial, and triaxial stress can be considered. In addition, attention should be paid to the fact that in mine engineering, sandstone is often in different water-bearing states. The presence of water can not only change the characteristics of the variation in sandstone strain but can also affect the infrared radiation characteristics of sandstone under stress. Subsequent studies should therefore fully consider the influence of water on the quantitative relationship between the infrared radiation parameters and the strain energy parameters of sandstone.
Data Availability Statement:
The data presented in this study are available upon request from the corresponding author.
|
2023-06-29T06:15:54.748Z
|
2023-06-01T00:00:00.000
|
{
"year": 2023,
"sha1": "f7465837ec547a3e696b298b3e6ffd47338dc0c8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/16/12/4342/pdf?version=1686798626",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b35845ef65a015d8b19b1799d8da0c98f1d2fa0e",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
125179303
|
pes2o/s2orc
|
v3-fos-license
|
Contributions of electromagnetic and strong anomalies to the η(η′) → γγ decays
We study contributions of the electromagnetic and strong axial anomalies in the radiative decays of the η and η′. Applying a dispersive approach, we derive the anomaly sum rule for the singlet axial current, where both electromagnetic and strong parts of the axial anomaly exist. A low-energy theorem is generalized to the case of mixing states and is applied to evaluate the subtraction part of the strong anomaly. We found a relatively small contribution of the strong anomaly to the two-photon decay amplitudes of the η and η′ mesons.
Introduction.
An important property of QCD is the presence of an axial (chiral) anomaly: the violation of the axial symmetry of the classical Lagrangian in the quantum theory. The two-photon decay of the π0 is a well-known example of a process that happens mainly due to the axial anomaly. In fact, it is the pion decay problem that led to the discovery of the axial anomaly [1,2] and triggered further studies that revealed the tremendous role of anomalies in quantum field theory. Precise measurements of the pion decay width [3,4] provide a valuable test for theoretical approaches at low energies (e.g. chiral perturbation theory), as the usual perturbative QCD methods encounter difficulties in this region due to confinement. Two-photon decays of the η and η′ mesons are another important example of processes which are governed by the axial anomaly and are closely related to the problems of mixing and chiral symmetry breaking.
Besides the case of real photons, the axial anomaly plays an important role in processes involving off-shell photons. In particular, the transitions γγ* → π0 (η, η′) are known to be connected with the axial anomaly. One of the methods to study these processes is based on the anomaly sum rules (ASRs) [6,7], the result of a dispersive treatment of the axial anomaly [5]. The anomaly sum rules approach was developed for the study of the π0 [8] as well as the η and η′ [9,10,11,12] transition form factors in the space-like and time-like regions of the photon virtuality. Recent experimental progress in measurements of the π0, η, and η′ meson transition form factors has led to extensive theoretical studies [13].
The η and η′ mesons, which we deal with in this work, manifest significant mixing, i.e. they are not the pure states associated with the 8th (octet) component of the octet of axial currents, J^(8)_µ5, and the singlet axial current, J^(0)_µ5. The divergence of the 8th component of the octet of axial currents acquires an electromagnetic anomaly term, while the divergence of the singlet axial current additionally acquires a strong (gluon) anomaly term proportional to GG̃; here F and G are the electromagnetic and gluon field strength tensors respectively, F̃ and G̃ are their duals, and C^(a) are the corresponding charge factors. The distinctive feature of the octet of axial currents, the absence of the strong component of the axial anomaly, significantly simplifies the study of the anomaly-related processes corresponding to these currents, leading to exact ASRs in the cases of the 3rd and the 8th components. On the other hand, η and η′ two-photon processes are related to the singlet axial current as well. This gives us a good opportunity to study the role of the electromagnetic and strong anomalies in these processes. In order to do this, based on a dispersive representation of the axial anomaly, we develop the ASR for the singlet axial current.
2. Dispersive approach to axial anomaly for the singlet axial current

Let us outline the derivation of the anomaly sum rule in the singlet channel of the axial current. Consider the triangle graph amplitude T_αµν(k, q), composed of the axial current J_α5 with momentum p = k + q and two vector currents with momenta k and q. This amplitude can be decomposed [14] (see also [15,16]) into tensor structures with coefficients F_j = F_j(p^2, k^2, q^2; m^2), j = 1, ..., 6, which are the Lorentz-invariant amplitudes constrained by current conservation and Bose symmetry. Note that the latter includes the interchange µ ↔ ν, k ↔ q in the tensor structures and k^2 ↔ q^2 in the arguments of the scalar functions F_j. An anomalous axial Ward identity can be written for T_αµν(k, q) for the singlet axial current J^(0)_µ5(p) and photons γ(k, ε^(k)), γ(q, ε^(q)); it introduces the form factors G and N, while the FF̃-γγ transition is point-like up to QED corrections.
In the kinematical configuration with one real photon (k^2 = 0), which we consider in the rest of this section, the above anomalous Ward identity can be rewritten in terms of the form factors G, F_3, F_4, and N. We can write the form factors G, F_3, F_4 as dispersive integrals without subtractions: in the case of the isovector and octet channels (free from the gluon anomaly) this can be shown explicitly [7], and for the singlet current it can be shown using dimensional arguments. At the same time, one cannot claim the existence of an unsubtracted dispersion integral for the form factor N. We rewrite it in a form with one subtraction, where the new form factor R can be written as an unsubtracted dispersive integral. By taking the imaginary part of (11) with respect to p^2 (s in the complex plane), dividing the obtained equation by (s − p^2), integrating it over s ∈ [0, +∞), and using the dispersive relations for the form factors F_3, F_4, G, R, in the end we arrive at a relation containing the subtraction term and the integral of Im R over s.
Comparing (13) with (11), we can write down the anomaly sum rule for the singlet current, Eq. (14). Hereafter, we limit our consideration to the case of real photons (k^2 = q^2 = 0).
3. Low-energy theorem generalized for mixing states
The form factor N in the ASR (14) represents the strong anomaly and is related to the matrix element <0|GG̃|γγ>. A rigorous QCD calculation of it is not known yet because of the difficulties of confinement. Despite this, it is possible to estimate it in the limit p_µ = 0. The idea (see, e.g., [17]) is as follows. Supposing that there are no massless particles in the singlet channel in the chiral limit (i.e. no η meson contribution in the singlet channel), as the η′ meson remains massive, one must get lim_{p→0} p^µ <0|J_µ5(p)|γγ> = 0. This corresponds to <0|∂^µ J_µ5|γγ> = 0, and therefore one immediately relates the matrix elements <0|GG̃|γγ> and <0|FF̃|γγ> in the considered limits using the expression for the divergence of the singlet axial current in the chiral limit (m_q = 0). In reality, the η meson has a significant contribution to the singlet channel (because of the η-η′ mixing), spoiling this low-energy theorem. Nevertheless, we can follow the line of reasoning of the theorem for a specifically constructed current with no contribution of the states yielding the poles in the chiral limit (namely, η in our approximation). Requiring that the projection of this current onto the η state vanishes, we obtain the current that is suitable for the theorem; here b is an (arbitrary) constant, and the decay constants f^i_M are defined as the currents' projections onto the meson states (i = 8, 0; M = η, η′). So, for this current we can conclude that even in the chiral limit lim_{p→0} p^µ <0|J_µ5(p)|γγ> = 0, and therefore, using (2) and (3) in the chiral limit, at p_µ = 0 we obtain a relation between the matrix elements of GG̃ and FF̃. This yields the value of the form factor N (9).

4. Two-photon decays of η and η′ and analysis of the ASR

In order to draw conclusions for the processes of two-photon decays of the η and η′, let us saturate the l.h.s. of the ASR (14) with resonances according to quark-hadron duality. The first (lowest) contributions are given by the η and η′, while the rest (higher) states are represented by the integral with a lower limit s_0; for the sake of brevity, we introduce the notations B_0 and B_1 defined below. The decay amplitudes of the two-photon decays of the η and η′ mesons, A_M (M = η, η′), can be expressed in terms of their decay widths. Let us have a closer look at the obtained ASR (19). The term B_0 stands for the subtraction constant in the dispersion representation of the gluon anomaly, and we evaluated it from the low-energy theorem (18). The term B_1 consists of the integral representing the spectral part of the gluon anomaly and the term covering the higher resonances. The value of the lower limit s_0 ("continuum threshold") in the last integral of B_1 should range between the masses squared of the last resonance taken into account (η′) and the first resonance included in the integral term, s_0 ≈ 1 GeV^2. This results in an α_s^2 suppression of this integral compared with the first integral term in B_1, as the form factor F_3 is described by a triangle graph (no α_s corrections) plus diagrams with additional boxes (∝ α_s^2 for the first box term). By making use of the ASR for the 8th component of the axial current [11], we can express the two-photon decay amplitudes. In order to do a numerical analysis of the ASR (19), we use the experimental values of the two-photon decay widths of the η and η′ mesons and the decay constant values f^(a)_M. We employ the values of decay constants estimated in different analyses based on the octet-singlet (OS) mixing scheme [11], the quark-flavor (QF) mixing scheme [11,19], or mixing-scheme-free analyses [11,18].
The B_0 + B_1 term can be evaluated directly from Eq. (19). The low-energy theorem additionally gives an estimate of B_0 (18), so we can evaluate B_0 and B_1 separately. The results are shown in Table 1. These results demonstrate that B_0 and B_1 enter the ASR with opposite signs and almost cancel each other, giving only a small total contribution to the two-photon decay widths of the η and η′. The contribution of the gluon anomaly and higher-order resonances (expressed by the B_0 + B_1 term) to the two-photon decay amplitudes appears to be rather small numerically in comparison with the contribution of the electromagnetic anomaly.
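As a purely illustrative numerical aside (not part of the paper's formalism), the conversion of measured two-photon widths into decay amplitudes can be sketched as follows, assuming the common normalization Γ = m^3 |A|^2 / (64π) for an amplitude defined through the antisymmetric tensor structure; the rounded input widths and masses are PDG-like values, not quantities taken from this work.

import math

def amplitude_from_width(width_keV, mass_GeV):
    # Assumed convention: Gamma = m^3 |A|^2 / (64 pi); returns |A| in GeV^-1.
    width_GeV = width_keV * 1e-6
    return math.sqrt(64.0 * math.pi * width_GeV / mass_GeV ** 3)

for name, m, gamma in [("eta", 0.548, 0.516), ("eta'", 0.958, 4.28)]:
    print(f"A_{name} ~ {amplitude_from_width(gamma, m):.3f} GeV^-1")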
Conclusions
Using the dispersive approach to the axial anomaly in the singlet current, we have obtained the sum rule with the electromagnetic and strong anomaly contributions. The strong anomaly contribution consists of the spectral part (originating from the p^2-dependent term) and the subtraction constant (independent of p^2). The gluon matrix element <0|GG̃|γγ> is related to the electromagnetic amplitude <0|FF̃|γγ> in the chiral limit at p_µ = 0 by means of the low-energy theorem, which we generalized to the case of mixing states. This gives us an estimate of the subtraction constant of the gluon anomaly contribution in the dispersive form of the axial anomaly.
Combining the low-energy theorem and the anomaly sum rule in the singlet channel, we determined separately the subtraction and the spectral parts of the gluon anomaly. The spectral part of the gluon anomaly is found to be significant: it is of the order of the electromagnetic anomaly contribution. However, it is almost canceled out by the subtraction term, resulting in an overall small contribution of the gluon anomaly to the η(η′) → γγ decays.

The contributions of the gluon and photon anomalies in the η(η′) → γγ decays have been evaluated using the anomaly sum rule for the singlet axial current. We found a relatively small contribution of the gluon anomaly part.
|
2019-04-22T13:08:50.764Z
|
2017-12-01T00:00:00.000
|
{
"year": 2017,
"sha1": "930325c656a819c121e9f168307c1811c6fe1990",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/938/1/012052/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "eb923ae708ddc41e7605cc351215be7966f38988",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
220883998
|
pes2o/s2orc
|
v3-fos-license
|
Can Non-expert Physicians Use the Japan Narrow-band Imaging Expert Team Classification to Diagnose Colonic Polyps Effectively?
Objectives: In 2014, the Japan narrow-band imaging expert team (JNET) proposed the first unified colorectal narrow-band imaging magnifying classification system, the JNET classification. The clinical usefulness of this system has been well established in JNET member institutions, but its suitability for use by “non-expert physicians” (physicians with no expertise in the use of JNET classification) remains unclear. This study aimed to examine the clinical usefulness of the JNET classification by “non-expert physicians”. Methods: We retrospectively analyzed 852 consecutive patients who underwent screening colonoscopy following a positive fecal occult blood test between January 2017 and May 2018. Endoscopic results from colon polyp diagnosis by physicians who started using the JNET classification (JNET group) were compared with those of physicians who did not (control group). Mann-Whitney U test and Fisher's exact test were used to compare continuous and categorical variables, respectively. Results: The median patient age was 68 years, and the male-to-female ratio was 1:0.84. When no lesions were found, the median withdrawal time was significantly different between groups (JNET group: 12 min; control group: 15 min; P < 0.01). The number of resected adenomas per colonoscopy was significantly higher in the JNET group (1.7) than in the control group (1.2; P < 0.01). Among the resected lesions, 8.9% in the JNET group and 17% in the control group were non-neoplastic lesions that did not require resection (P < 0.01). Conclusions: Colon polyp diagnosis using the JNET classification can reduce unnecessary resection during magnifying colonoscopy when conducted by “non-expert physicians”.
Introduction
Colorectal cancer (CRC) is the third most commonly diagnosed cancer globally, accounting for about 1.3 million new cases and 700,000 estimated deaths, annually [1]. Polypectomy of all identified adenomatous polyps of the colon or rectum reduces the incidence and mortality rate of CRC [2]. To remove adenomatous polyps quickly and efficiently, the physician performing the colonoscopy needs to be skilled at distinguishing between adenomatous polyps and non-neoplastic polyps, which do not require resection.
The usefulness of narrow-band imaging (NBI) systems with magnification for differentiating between adenomatous and non-neoplastic polyps has been previously reported [3-12]. To establish a common diagnostic strategy for using magnifying NBI (M-NBI), a committee of 38 magnifying colonoscopy specialists in Japan, the Japan NBI Expert Team (JNET), was created in 2011. In 2014, JNET proposed the first unified colorectal M-NBI classification, the JNET classification [13]. Sumimoto et al. reported the usefulness of the JNET classification at an educational hospital representing a JNET member institution [12]. Iwai et al. reported that the proportion of non-neoplastic lesions among the resected diminutive polyps was 7.9% at a JNET participating tertiary cancer center [14]. It has been reported that the JNET classification is useful when implemented by experts in magnifying colonoscopy. However, for it to be considered clinically useful in routine settings, non-expert physicians must also be able to use it effectively; there are currently no reports concerning this issue.
Therefore, in this study, we aimed to clarify the advantages of introducing the JNET classification into colonoscopy examinations performed by physicians who are not experts in JNET classification. The study was performed in general hospitals that were not educational facilities and did not belong to the JNET network.
Participating physicians (JNET and control groups)
In our hospital (the Ise Red Cross Hospital, Mie, Japan), the qualitative diagnostic strategies for colonic polyps using colonoscopy are not unified. The use of the JNET classification for polyp diagnosis depends on the preference of individual physicians.
After 2 years of clinical training in our hospital, three trainees decided to become gastroenterologists. In the next 2 years, they learned the skills of colonoscope insertion; in the latter half of these 2 years, they learned about the JNET classification for colon polyp diagnosis. The training for scope insertion was supervised by experienced gastroenterologists (S.S., M.T., J.O., and A.K.), and study of the JNET classification was led by S.S., who had not received special education or lectures on the JNET classification from magnifying colonoscopy experts; these 4 physicians referred to the figures in the article by Sano et al. to learn the classification [13]. While learning the JNET classification, they classified the vessel pattern observed using NBI and the surface structure observed by NBI and indigo carmine spray according to the JNET classification. In addition, these 4 physicians were required to compare the JNET classification with the pathological findings, thereby increasing diagnostic accuracy. From January 2017, they decided to perform polyp diagnosis according to the JNET classification, using only NBI. At that time, the three trainees inserted the colonoscope without supervision. In this retrospective study, these three trainees and S.S. were classified into the JNET group (those who started using the JNET classification in January 2017). Among the 10 physicians performing colonoscopies at our hospital, the 4 in the JNET group started using the JNET classification for qualitative diagnosis of polyps, whereas 6 based their diagnoses on their own clinical experience; these 6 physicians were classified into the control group (those who determined the need for polyp removal based on their clinical experience, using characteristics such as size, morphology, color, and location). The 4 physicians in the JNET group had, on average, fewer years of colonoscopy experience (3, 3, 3, and 9 years) than the 6 in the control group (16, 11, 9, 5, 4, and 3 years). We defined the JNET group physicians as "non-expert physicians" with respect to the JNET classification because they had no experience in using the JNET classification until January 2017, i.e., the first day of this retrospective study period.
Endoscopic procedure and pathological evaluation
All procedures were performed using high-resolution magnifying colonoscopes (CF-H260AZI, CF-H290ZI, or PCF-H290ZI colonoscope, EVIS LUCERA ELITE System; Olympus, Tokyo, Japan). Polyp location, size, and morphology [15] were recorded by physicians in both groups. Locations were categorized as being on the right side of the colon (cecum, ascending, and transverse colon); on the left side of the colon (descending and sigmoid colon); or within the rectum, and were classified based on the Japanese Classification of Colorectal Carcinoma [16]. Pathological evaluations were performed according to the Vienna classification and the classifications of the Japanese Society for Cancer of the Colon and Rectum [16].
Endoscopic diagnosis and treatment strategy in the JNET group
In the JNET group, if any lesion was found during colonoscopy, the endoscopist immediately decided whether to remove it based on the real-time diagnosis, including findings from M-NBI endoscopy. The vessel and surface patterns were evaluated according to the JNET classification using M-NBI. Type 1 JNET polyps larger than 6 mm in diameter and located on the right side of the colon were resected using cold snare polypectomy (CSP), endoscopic mucosal resection (EMR), or endoscopic submucosal dissection (ESD). These criteria were adopted for the JNET group so that SSA/Ps would not be missed when introducing the JNET classification. JNET type 2A polyps were considered adenomatous and were treated with cold forceps polypectomy (CFP), EMR, or ESD. Because JNET type 2B polyps were likely to be high-grade adenomas or intra-mucosal cancers, they were treated with EMR or ESD, unless obvious massive submucosal invasion was found. When JNET type 3 was identified, the polyp was further evaluated using high magnification endoscopy with 0.05% crystal violet staining. JNET type 3 polyps with obvious massive submucosal invasion were surgically resected because of their likelihood of being deep submucosal invasive cancer.
Endoscopic diagnosis and treatment strategy in the control group
In the control group, if any lesion was found during colonoscopy, the endoscopist immediately decided whether to remove it based on the real-time diagnosis, and findings, such as size, morphology, color, and location. Physicians in the control group used the chromoendoscope or M-NBI without the JNET classification; however, there was no common diagnostic strategy.
In the control group, participants underwent CFP, CSP, EMR, or ESD according to polyp size. CFP was used for diminutive (1-5 mm) polyps, CSP was used for small (6-9 mm) polyps, and EMR and ESD were used for polyps >10 mm in size. When a deep depression or a coarse nodule was identified on the surface of a polyp, it was further evaluated using high magnification endoscopy with 0.05% crystal violet staining. Polyps with obvious massive submucosal invasion were surgically resected. If the endoscopist determined that a lesion should be removed by surgery, tattooing and biopsy were performed at the time of the colonoscopy. As a common procedure in both groups, all participants underwent resection even for diminutive adenomas. This decision was made by each individual endoscopist, with or without consideration of the JNET classification.
Non-neoplastic lesion resection rate
To evaluate the efficiency of colonoscopies, we compared the non-neoplastic lesion resection rate (NNR) for each physician. We defined NNR as the proportion of unnecessarily resected non-neoplastic polyps among the total number of resected polyps.
Study population
Patients who underwent screening colonoscopy after positive fecal occult blood tests for cancer screening, and who lacked clinical symptoms (e.g., abdominal pain, diarrhea, and fresh blood in the stool), between January 2017 and May 2018 at the Ise Red Cross Hospital, Mie, Japan, were considered eligible for enrollment in this retrospective study. Baseline colonoscopy was defined as the first colonoscopy in a patient's life, for patients with no previous history of colonoscopy in their medical records. Exclusion criteria were as follows: (i) patients who underwent colonoscopy by a physician during scope insertion training; (ii) patients whose entire colon could not be observed because of obstruction due to advanced colon cancer; (iii) patients with ulcerative colitis; and (iv) patients who had undergone colon resection. During the study period, 15 of the 852 patients underwent a second colonoscopy 1 year after the first, following the resection of five or more adenomas at the first colonoscopy. European guidelines for quality assurance in colorectal cancer screening and diagnosis recommend that if more than five adenomatous polyps are removed, the next surveillance colonoscopy should be performed within 1 year [17]. Surveillance intervals after the removal of colorectal tumors have not been established in Japan; therefore, our hospital provides surveillance colonoscopy within 1 year for patients who have had more than five neoplastic polyps removed. Consequently, in this retrospective study, the total number of colonoscopies was 867.
Statistical analyses
The Mann-Whitney U test and Fisher's exact test were used to compare continuous and categorical variables, respectively. Statistical significance was defined as P < 0.05. All data analyses were performed using R software (version 2.15.2, R Core Team, Foundation for Statistical Computing, Vienna, Austria).
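For illustration, a minimal sketch of the two-group comparisons described above is given below, using Python/SciPy in place of the R workflow named in the text. The 2x2 table uses the non-neoplastic resection counts reported later in the Results; the withdrawal-time vectors are placeholder values, since the per-patient data are not reproduced here.

```python
# Minimal sketch (Python/SciPy rather than the R workflow described above).
# The 2x2 table uses the non-neoplastic resection counts reported in the
# Results (JNET group: 64 of 723 resections; control group: 130 of 774);
# the withdrawal-time vectors are placeholder values for illustration only.
from scipy.stats import fisher_exact, mannwhitneyu

# Fisher's exact test for a categorical outcome
# (non-neoplastic vs. neoplastic resections per group).
table = [[64, 723 - 64],     # JNET group
         [130, 774 - 130]]   # control group
odds_ratio, p_categorical = fisher_exact(table)

# Mann-Whitney U test for a continuous outcome (e.g. withdrawal time, min).
jnet_withdrawal = [11, 12, 12, 13, 14]       # placeholder values
control_withdrawal = [14, 15, 15, 16, 17]    # placeholder values
u_stat, p_continuous = mannwhitneyu(jnet_withdrawal, control_withdrawal,
                                    alternative="two-sided")

print(f"Fisher's exact test: OR = {odds_ratio:.2f}, P = {p_categorical:.3g}")
print(f"Mann-Whitney U test: U = {u_stat:.1f}, P = {p_continuous:.3g}")
```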
Ethical considerations
Participants gave their written informed consent. The study protocol was approved by the research institute's committee on human research. The Institutional Review Board of the Ise Red Cross Hospital gave ethical approval for this study (Institutional code: 30-54).
Patient characteristics and endoscopic examination results
A total of 852 consecutive patients (867 colonoscopies) who underwent screening colonoscopy after positive fecal occult blood tests for cancer screening were included in the final analysis in this study. Patient characteristics are shown in Table 1. The median age of patients was 68 (range: 24-90) years, and the male-to-female ratio was 1:0.84. The cecal intubation rate was 100% (867/867). The median cecal intubation and withdrawal times were 8 (range: 1-60) and 13 (range: 4-31) min, respectively. At least one adenoma was removed in 58% of patients (496/852), and in 59% of cases (386/655) when limited to baseline colonoscopies. Since we did not record all lesion data obtained during follow-up without resection, the precise adenoma detection rate could not be calculated; however, it was not less than 59% (386/655). Table 2 shows the endoscopic examination results of the two study groups. The 4 JNET group physicians had fewer years of colonoscopy experience (3, 3, 3, and 9 years) than the 6 control group physicians (16, 11, 9, 5, 4, and 3 years). The median cecal intubation time was 7 (range: 1-50) min in the JNET group and 8 (range: 2-60) min in the control group (P < 0.01). The median withdrawal time with no lesions was significantly different between the two groups (P < 0.01) (JNET group: 12 min vs. control group: 15 min). However, the proportion of patients with at least one adenoma removed was not significantly different (60% in the JNET group vs. 57% in the control group; P = 0.25). The number of removed adenomas per colonoscopy was significantly higher in the JNET group (JNET group: 1.7 vs. control group: 1.2; P < 0.01).
Characteristics of removed polyps (JNET group vs. control group)
The histological diagnosis of all resected polyps was reviewed by an experienced gastrointestinal pathologist (TY).
The characteristics of the polyps removed in each group are shown in Table 3. In the control group, four out of five deep submucosal invasive (SM-d) carcinomas that should have been resected by radical surgery were removed by ESD. Additionally, one submucosal superficial (SM-s) invasive carcinoma that should have been resected by ESD was instead resected by radical surgery. Among the resected lesions, the proportion of non-neoplastic lesions that did not require resection was 8.9% (64/722) in the JNET group and 17% (130/768) in the control group (P < 0.01). Physicians in the JNET group removed adenomatous polyps more efficiently within a shorter time and with fewer medical resources.
Non-neoplastic lesion resection rate according to physician
The NNRs for each physician are summarized in Table 4. The NNR was 8.9% (64/723) in the JNET group and 16.8% (130/774) in the control group (P < 0.01). One control group physician with 16 years of colonoscopy experience showed a higher NNR than one of the JNET group physicians with only 3 years of colonoscopy experience.
Characteristics of removed polyps in the JNET group
The results of M-NBI diagnosis for 725 lesions using the JNET classification (performed by the JNET group) are summarized in Table 5. Histologically, 2, 3, and 15 type 1 lesions were identified as tubular adenomas, SSA/P, and non-neoplastic polyps, respectively. Moreover, 1, 12, 634, 2, 5, and 49 type 2A lesions were identified as superficial submucosal invasive carcinomas, intramucosal carcinomas, tubular adenomas, SSA/P, traditional serrated adenomas, and non-neoplastic polyps, respectively. There were no lesions with JNET classification type 2B; there were 2 type 3 cases.
Discussion
According to this study, the JNET classification is useful for avoiding the unnecessary resection of non-neoplastic polyps in a general hospital setting (one that does not belong to the JNET network and is not an educational hospital). In other words, the JNET classification is a useful and applicable clinical tool. Physicians with a sufficiently high adenoma detection rate should also be aware that unnecessary polypectomy leads to increased risks for the patient. With the increasing use of antithrombotic drugs, endoscopic procedures at high risk of bleeding can lead to death [18]. This study shows that even physicians who are not experts in the use of the JNET classification can eliminate unnecessary resection without lowering the adenoma detection rate. As shown in Table 2, the proportion of patients with at least one adenoma removed was not significantly different (60% in the JNET group vs. 57% in the control group; P = 0.25). We have not verified the quality of the JNET classification itself; what we want to show here is that the adenoma detection rate of the JNET group did not decline. Even if unnecessary resection can be reduced, it would be of little value if the adenoma detection rate decreased. In addition, to maintain the quality of colonoscopy, the proportion of resected SSA/Ps should not differ between the two groups; in our cohort, the number of removed SSA/Ps per patient was not significantly different between the two groups. For submucosal invasive cancer, there is a consensus in Japan that submucosal superficial (SM-s) carcinoma is appropriate for endoscopic resection, and that SM-d carcinoma should be surgically resected due to the possibility of lymph node metastasis. In this study, only 11 lesions with submucosal invasive cancer were included; therefore, definitive conclusions could not be made with respect to these cases. However, in the control group, four out of five SM-d carcinomas, which should have been resected by radical surgery, were removed by unnecessary ESD, and one SM-s carcinoma, which should have been resected by ESD, was resected by unnecessary radical surgery. Although this occurred in only a few cases, such incorrect choices of resection methods due to misdiagnosis of invasion depth were not observed in the JNET group.
Although the incidence of adverse events from EMR or CSP for colorectal polyps is not high, such events do occur [19]. Colonoscopy quality indicators are primarily based on the number of adenomas removed [20,21]. Avoiding the unnecessary removal of non-neoplastic polyps while identifying and removing adenomas is important for patients, particularly when considering the recent increase in antithrombotic drug administration. Appropriate diagnosis of JNET type 1 lesions as hyperplastic polyps was found to be useful for avoiding unnecessary polypectomy. In contrast, there is a concern that the resection rate of adenomatous polyps could decrease owing to their misidentification as JNET type 1 lesions. In our study, the proportion of patients with at least one adenoma removed was not significantly different between the two groups (JNET group, 60% vs. control group, 57%; P = 0.25), and the number of adenomas removed per colonoscopy was significantly higher in the JNET group (JNET group: 1.7 vs. control group: 1.2; P < 0.01). These results suggest that the JNET group detected more lesions in less time than the control group, but the causal relationship between this finding and the use of the JNET classification is not clear. The JNET classification is applied only after polyps are found, so it cannot by itself explain the statistically significant difference in the number of adenomas removed per colonoscopy. Notably, the JNET group was able to significantly reduce the removal of non-neoplastic polyps while removing a similar number of adenomas as the control group. In addition, both groups exceeded the target level set by the American Society for Gastrointestinal Endoscopy and the Bowel Cancer screening program [20,21].
A total of 20 JNET type 1 lesions were resected due to the possibility of SSA/P. JNET type 1 lesions can be considered important for both efficient colonoscopy and for reducing the risk of unnecessary complications. Regarding JNET type 2A lesions, Sumimoto et al. reported that 1% (17/1888), 86% (1626/1888), and 12% (230/1888) of type 2A lesions were identified as hyperplastic/sessile serrated polyps, low-grade dysplasia, and high-grade dysplasia, respectively [12]. However, in this study, suspected JNET type 2A lesions that were identified by "non-expert physicians" consisted of 7.0% (49/703) non-neoplastic lesions, 90.0% (634/703) tubular adenomas, and 1.8% (13/703) carcinomas. The proportion of non-neoplastic lesions among suspected type 2A lesions appears to be high when diagnosed by non-expert physicians. In addition, JNET type 2A lesions consist of various lesion types, including hyperplastic polyps, adenomas, intramucosal carcinomas, and invasive SM-s carcinomas. In contrast, the target lesions of interest for endoscopic treatment are adenomas, intramucosal carcinomas, and invasive SM-s carcinomas. Given that JNET 2A lesions cannot be accurately detected, there are minimal clinical disadvantages to removing all neoplastic lesions.
Regarding the diagnostic performance for differentiating between neoplastic and non-neoplastic lesions, the JNET classification, when used by "non-expert physicians", showed diagnostic accuracy similar to that of previous studies [13,22]. Even when "non-expert physicians" used the JNET classification for diagnosis, the number of unnecessary polypectomies was reduced, but the diagnostic accuracy of JNET type 2A lesions was not as high as that for the control group [12]. One of the reasons for this was that an educational system for improving the understanding of the JNET classification has not been established for physicians.
As shown in Table 4, the NNR was 8.9% (64/723) in the JNET group and 16.8% (130/774) in the control group (P < 0.01). Within each group, however, the NNR did not appear to be physician dependent (P = 0.36 across the JNET group and P = 0.45 across the control group). We therefore consider that the statistically significant difference in NNR was not caused by differences between specific individual physicians but by the introduction of the JNET classification in one group. In addition, one control group physician with 16 years of colonoscopy experience showed a higher NNR than one of the JNET group physicians with only 3 years of experience in colonoscopy. This tendency was confirmed among other physicians in the JNET and control groups. Many endoscopists may therefore be motivated to use the JNET classification, as it may help overcome some degree of inexperience.
The present study has several notable limitations. First, there were insufficient patient numbers to evaluate JNET type 2B and type 3 lesions. The JNET classification is used to reduce unnecessary resection in clinical practice and to identify deep invasive carcinomas not eligible for endoscopic resection; this study addressed only one part of its usefulness. Second, this was a retrospective study conducted at a single institution. Finally, we have no pathological data regarding the non-removed polyps and will never know the number of adenomas that remained; we consider this to be one of the biggest limitations of this study. Future studies should be conducted at multiple centers, with larger sample sizes and with data on non-resected polyps. Despite the issue of unresected polyps, the introduction of the JNET classification to colonoscopy examinations performed by physicians who are not experts in this classification did not reduce the number of removed adenomas per patient. In conclusion, compared with experience-based diagnosis, colon polyp diagnosis performed by "non-expert physicians" using the JNET classification may reduce the rate of unnecessary resections of non-neoplastic lesions.
Observing Isotopologue Bands in Terrestrial Exoplanet Atmospheres with the James Webb Space Telescope---Implications for Identifying Past Atmospheric and Ocean Loss
Terrestrial planets orbiting M dwarfs may soon be observed with the James Webb Space Telescope (JWST) to characterize their atmospheric composition and search for signs of habitability or life. These planets may undergo significant atmospheric and ocean loss due to the superluminous pre-main-sequence phase of their host stars, which may leave behind abiotically-generated oxygen, a false positive for the detection of life. Determining if ocean loss has occurred will help assess potential habitability and whether or not any O2 detected is biogenic. In the solar system, differences in isotopic abundances have been used to infer the history of ocean loss and atmospheric escape (e.g. Venus, Mars). We find that isotopologue measurements using transit transmission spectra of terrestrial planets around late-type M dwarfs like TRAPPIST-1 may be possible with JWST, if the escape mechanisms and resulting isotopic fractionation were similar to Venus. We present analyses of post-ocean-loss O2- and CO2-dominated atmospheres, containing a range of trace gas abundances. Isotopologue bands are likely detectable throughout the near-infrared (1-8 um), especially 3-4 um, although not in CO2-dominated atmospheres. For Venus-like D/H ratios 100 times that of Earth, TRAPPIST-1 b transit signals of up to 79 ppm are possible by observing HDO. Similarly, 18O/16O ratios 100 times that of Earth produce signals at up to 94 ppm. Detection at S/N=5 may be attained on these bands with as few as four to eleven transits, with optimal use of JWST NIRSpec Prism. Consequently, H2O and CO2 isotopologues could be considered as indicators of past ocean loss and atmospheric escape for JWST observations of terrestrial planets around M dwarfs.
INTRODUCTION
In the near future, terrestrial exoplanets around small M dwarf stars will be observed by the James Webb Space Telescope (JWST) and extremely large ground-based telescopes (Cowan et al. 2015; Quanz et al. 2015; Snellen et al. 2015; Greene et al. 2016; Lovis et al. 2017; Morley et al. 2017; Lincowski et al. 2018). A number of nearby transiting targets have been discovered, including the seven-planet TRAPPIST-1 system (Gillon et al. 2016, 2017; Luger et al. 2017), which provides a plausible opportunity for studying the liquid water habitable zone (HZ) and planetary evolution in a single system. However, the atmospheres of these planets will likely be heavily evolved from their primordial composition due to the long, superluminous pre-main-sequence evolution (Baraffe et al. 2015) and life-long stellar activity (Tarter et al. 2007) of their M dwarf host stars.
The superluminous pre-main-sequence phase of M dwarf stars may drive significant loss of a planet's surface water and atmosphere, and potentially produce large quantities of atmospheric oxygen. This superluminous phase could last for up to one billion years for the smallest stars (Baraffe et al. 2015). During this time, ocean-bearing planets that formed in what is presently the habitable zone would have been subjected to fluxes up to 100 times the stellar irradiation of the main-sequence. This would cause a runaway greenhouse environment and severe hydrodynamic escape , which occurs when heating by extreme UV absorption induces an escape flow (e.g. Hunten et al. 1987). During this period, the TRAPPIST-1 planets could have lost up to twenty Earth oceans of water and generated thousands of bars of oxygen (c.f. Bolmont et al. 2017;Wordsworth et al. 2018;Lincowski et al. 2018). This oxygen could remain in the atmosphere, but is more likely to be severely reduced by a number of planetary processes. These processes include oxidation of the surface, interaction with a magma ocean that reincorporates the O 2 into the mantle (Schaefer et al. 2016;Wordsworth et al. 2018), or loss of O 2 to space either via hydrodynamic escape early on or by a number of ongoing escape mechanisms (e.g. Hunten 1982;Lammer et al. 2007;Ribas et al. 2016;Airapetian et al. 2017;Dong et al. 2017;Garcia-Sage et al. 2017;Egan et al. 2019).
Any atmospheric oxygen left by these sequestration/loss processes could constitute a false positive biosignature if the planet is being assessed for signs of life (Meadows 2017;. Even with sequestration and loss processes, ocean loss may leave behind several bars of O 2 (Schaefer et al. 2016;Wordsworth et al. 2018), which could be potentially discriminated from the modest amounts generated by a photosynthetic biosphere via detection of O 2 -O 2 collision-induced absorption bands. The O 2 -O 2 bands, particularly at 1.06 and 1.27 µm, are more prominent in massive O 2 atmospheres because the absorption cross-section is proportional to the density of gas squared (Schwieterman et al. 2016;Lincowski et al. 2018). Detection of these bands alone does not prove that the planet lacks a surface ocean, only that large amounts of oxygen have been liberated, and this is likely from water vapor photolysis and subsequent hydrogen escape.
Evidence of past atmospheric escape and ocean loss may help test the concept of the habitable zone, that region around a star where liquid water could exist on the surface of an Earth-like planet (Kasting et al. 1993; Kopparapu et al. 2013). The inner edge of the habitable zone (IHZ) is conservatively defined by the moist greenhouse limit (where stratospheric H2O exceeds 1000 ppm and so water vapor is easily lost to space), which lies close to Earth (0.99 au around a Sun-like star). However, an optimistic "recent Venus" limit can be defined under the assumption that Venus may have had surface water prior to approximately one billion years ago, before the last global resurfacing event, when the Sun was fainter (Solomon & Head 1991; Kasting et al. 1993). Additionally, there have been a number of modeling studies positing revised limits for the IHZ that depend on perturbations to some of the parameter assumptions, including planetary mass (Kopparapu et al. 2014) and rotation rates (e.g. Kopparapu et al. 2016). Since TRAPPIST-1 d is between the recent Venus limit and the conservative inner edge as defined by Kopparapu et al. (2013), it is a valuable target for probing the position of the inner edge of the habitable zone around M dwarf stars.
In a multi-planet system like TRAPPIST-1, evidence of past atmospheric escape from the inner planet(s) could inform the suitability for more difficult follow-up observations of a habitable zone target (e.g. TRAPPIST-1 e). It is much easier to characterize the inner planets due to the possibility of obtaining more transit observations and the larger scale heights afforded by the hotter atmospheres (Morley et al. 2017;Lincowski et al. 2018). The survival of an atmosphere inward of the IHZ could be an indicator for atmospheric survival of the other planets. For example, if TRAPPIST-1 b still has an atmosphere, then planets farther away-including those in the habitable zone-have a higher likelihood of also hosting an atmosphere. However, the presence of an atmosphere on a habitable zone planet does not guarantee the planet is habitable, as even habitable zone planets may have undergone complete ocean loss (Lincowski et al. 2018).
Another piece of evidence that can indicate a planetary environment lost its surface water, and so is not likely to be habitable, is severe isotopic fractionation. Both Venus and Mars once likely had surface oceans (e.g. De Bergh et al. 1991; Wordsworth 2016), evidence for which includes isotopic fractionation in the observed atmospheric deuterium to hydrogen ratios (D/H) compared to Vienna Standard Mean Ocean Water (VSMOW) for Earth. Compared to VSMOW, the atmosphere of Venus is enhanced by a factor of 120-140 (De Bergh et al. 1991; Matsui et al. 2012) and the atmosphere of Mars is enhanced by a factor of ∼4 (Owen et al. 1988; Villanueva et al. 2015; Encrenaz et al. 2018). These enhancements likely occurred from near-complete loss of their available water reservoirs (Hunten 1982; Owen et al. 1988), because any primordial reservoir would dilute fractionated gas and reduce the total observed fractionation. Note that Earth has the lowest D/H ratio among the solar system terrestrials; VSMOW has D/H ∼8 times the solar abundance (c.f. Hagemann et al. 1970; Asplund et al. 2009). Measurements of large isotopic fractionations in the atmospheres of nearby exoplanets relative to their host stars would also likely represent departures from primordial compositions (Mollière & Snellen 2019).
Atmospheric water vapor observed in transmission is suggestive but not definitive proof of the presence of an ocean (c.f. Earth, Venus; Lincowski et al. 2018). Unlike reflectance spectroscopy, transmission spectroscopy cannot detect surface absorption or reflectance features-additional observations would be needed to determine the likelihood of surface liquid water.
Atmospheric escape is the only known mechanism capable of the extreme mass-dependent fractionation observed in D/H in our solar system (see summary in Mollière & Snellen 2019, and references therein). Similarly, fractionation of oxygen (18O/16O) during hydrodynamic or nonthermal escape of oxygen generated from photolysis of vaporized ocean water is another potential signature of past ocean loss. If the abiotic oxygen generated via ocean loss is not lost to space, but is instead dissolved into a magma ocean (Schaefer et al. 2016; Wordsworth et al. 2018) or oxidizes the surface, then only a comparably small level of fractionation would occur. Unlike atmospheric loss, adsorption by the surface sequesters heavier isotopes slightly more than lighter isotopes (e.g. Sharp 2017), so would impart a small fractionation signature opposing that of escape. Measuring the hydrogen and/or oxygen isotope fractionation for planets orbiting M dwarfs could provide additional evidence of past extreme ocean loss and atmospheric escape in systems very different from our own.
Since O 2 and its isotopologues are likely to be difficult to observe with JWST, isotopologues of CO 2 can be used as a more easily observed proxy for oxygen fractionation. Laboratory experiments (e.g. Shaheen et al. 2007) and numerical modelling (Liang et al. 2007) of CO 2 in the stratosphere of Earth have demonstrated rapid isotopic equilibrium (on the order of days) between CO 2 and O 2 , indicating that fractionation in CO 2 can be efficiently induced if co-existing with heavily fractionated O 2 . Since this process is UV-driven, it is likely to also occur efficiently in the atmospheres of planets orbiting M dwarfs.
Isotopic fractionation may therefore be a useful indicator for past ocean loss on terrestrial exoplanets and may help to observationally test the inner edge of the habitable zone. Here we assess how large isotopic fractionation could be observed spectroscopically in terrestrial exoplanet atmospheres with JWST. We focus primarily on the two TRAPPIST-1 planets most likely to produce the strongest transit signals, due to their large atmospheric scale heights and small semimajor axes: TRAPPIST-1 b, which receives approximately twice the irradiation of Venus, and d, which lies between the conservative and optimistic IHZ limits. We also assess the more observationally challenging TRAPPIST-1 e, a habitable zone candidate. The TRAPPIST-1 system is scheduled to be observed with JWST and is likely to produce a favorable signal that could be used to characterize evolved atmospheres (Morley et al. 2017;Lincowski et al. 2018). In §2, we summarize our models and methods, in §3 we show our results, in §4 we discuss the implications of our results for observations with JWST, and in §5 we summarize our findings.
METHODS
To produce simulated transit transmission spectra (see e.g. Robinson 2017, and references therein) with increased isotopic fractionation, we use a 1D line-by-line radiative transfer model and adjust the input line list isotopologue abundances. To assess simulated observations for JWST, we use an instrument noise model. We describe our models and inputs in the following subsections.
We adopt the following isotope geochemistry convention (e.g. Sharp 2017) for describing the isotopic fractionation of hydrogen (note this does not include the multiplier of 1000 typically used in isotope geochemistry, due to the extreme values we adopt here):

$$\delta\mathrm{D} = \frac{(\mathrm{D/H})_{\rm sample}}{(\mathrm{D/H})_{\rm VSMOW}} - 1,$$

and similarly, δ18O is the notation for changes in 18O/16O. "VSMOW" refers to the Vienna Standard Mean Ocean Water isotopic standard (Coplen 1995).
Radiative Transfer
To generate transmission spectra for analysis, we use the Spectral Mapping Atmospheric Radiative Transfer model (SMART, Meadows & Crisp 1996;Crisp 1997, developed by D. Crisp). SMART is a 1D, line-by-line, multi-stream, multiscattering model, which incorporates the Discrete Ordinate Radiative Transfer code (DISORT, Stamnes et al. 1988;Stamnes et al. 2000) to solve the radiative transfer equation. SMART has been shown to faithfully reproduce observed spectra of Mars (Tinetti et al. 2005), Earth (Robinson et al. 2011), and Venus (Meadows & Crisp 1996;Arney et al. 2014). SMART incorporates extinction from Rayleigh scattering and absorption from UV-visible electronic transitions, rotational-vibrational transitions, and collision-induced absorption (CIA). SMART can produce transit transmission spectra, including refraction (Robinson 2017).
Rotational-vibrational absorption coefficients are calculated from line lists using our line-by-line absorption coefficients code (LBLABC, Crisp 1997). Here we have updated the partition functions (Gamache et al. 2017) and draw from the HITRAN2016 line list (Gordon et al. 2017), which are appropriate for the temperatures and pressures in these terrestrial atmospheres. We consider isotopic fractionation δD of 10-100, and δ 18 O of 2-100, depending on the model atmosphere environment (see §2.5), and adjust the abundances of the isotopologues accordingly (see §2.2).
Calculating Isotopologue Abundances
We determine the modified abundances for all isotopologues available for the molecules H 2 O, CO 2 , O 3 , CO, and O 2 in the HITRAN2016 line lists by calculating the abundances for each isotopologue given the specified isotopic fractionation. For oxygen we assume standard terrestrial linear mass fractionation, such that δ 17 O = 0.5δ 18 O (Sharp 2017), although around M dwarfs this may depend on the details of atmospheric escape for a given planet. We assume that enhancement of doubly-fractionated molecules (e.g. D 2 O) is stochastic, and therefore proportional to the production of the isotopic fractionation for each affected atom, which is generally a good assumption, especially at higher temperatures (Eiler 2007).
To compute isotopologue abundances for each molecule, we numerically solve for the multipliers (X) for the isotopologues of a given molecule to adjust its VSMOW abundances given in HITRAN. These multipliers are not wholly independent, because we assume each substitution is proportional to the abundance. For example, substitution of hydrogen for deuterium in H2O, H2^17O, or H2^18O is equally likely per molecule, in proportion to the abundance of hydrogen in each molecule. The multiplier is squared for doubly-substituted isotopes, because it requires the probability that one atom is substituted and the other is also substituted. In a generic notation, where square brackets (e.g. [A]) denote abundance and x_A is the multiplier for a particular isotope A, the abundance multiplier of an isotopologue reduces to the product of the multipliers of its substituted isotopes. For our example of D2O (also the example in box 1 of Eiler 2007), if x_D is the multiplier for the enhancement in deuterium, the abundance multiplier of D2O is x_D^2. For a molecule with different isotopic substitutions, such as HD^17O, the abundance adjustment would be x_D · x_17O, where here we have set x_17O = 0.5 x_18O. The multipliers x are solved for simultaneously using a standard minimization code under the constraint that the total number of atoms of each isotope for a given family of molecules (e.g. H2O and its isotopologues) satisfies the desired fractionation criterion (i.e. for δD = 100, that the ratio of abundances for deuterium across all isotopologues of a given molecule is 100 times greater than for VSMOW). We assume the line intensities (and absorption coefficients) of the adjusted isotopologues scale directly with their abundances, in accordance with the Boltzmann equation,

$$n \propto g\, e^{-E/kT},$$

where n is the number of molecules in a given energy state (directly proportional to the absorption coefficient), g represents the multiplicity of states (here, the abundance of molecules of a given isotopologue), E is the energy of each state, k is the Boltzmann constant, and T is the temperature.
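To make the procedure concrete, a minimal sketch is given below (not the authors' code), restricted to the hydrogen-bearing water isotopologues and ignoring oxygen substitutions. The abundance values are approximate, illustrative stand-ins for the HITRAN entries, and the second constraint (conservation of the total abundance of the family) is an assumption made here to close the system; the paper states only the fractionation criterion explicitly.

```python
# Illustrative sketch: solve for hydrogen multipliers x_H and x_D such that
# the atomic D/H ratio of the water isotopologue family is enhanced by a
# factor of 100 over VSMOW. Abundances are approximate, not exact HITRAN values.
from scipy.optimize import fsolve

# (VSMOW-like abundance, number of H atoms, number of D atoms)
ISO = {
    "H2O": (9.973e-1, 2, 0),
    "HDO": (3.107e-4, 1, 1),
    "D2O": (2.420e-8, 0, 2),
}

TARGET_DD = 100.0  # desired D/H enhancement relative to VSMOW


def d_to_h(abundances):
    """Atomic D/H ratio summed over the isotopologue family."""
    d = sum(abundances[k] * ISO[k][2] for k in abundances)
    h = sum(abundances[k] * ISO[k][1] for k in abundances)
    return d / h


def residuals(x):
    x_h, x_d = x
    # Each isotopologue is scaled by x_H**nH * x_D**nD (squared for D2O).
    new = {k: a * x_h**nh * x_d**nd for k, (a, nh, nd) in ISO.items()}
    old = {k: a for k, (a, _, _) in ISO.items()}
    return [
        d_to_h(new) - TARGET_DD * d_to_h(old),   # fractionation criterion
        sum(new.values()) - sum(old.values()),   # conserve total abundance (assumption)
    ]


x_h, x_d = fsolve(residuals, x0=[1.0, TARGET_DD])
print(f"x_H = {x_h:.6f}, x_D = {x_d:.2f}")
print(f"HDO multiplier = {x_h * x_d:.2f}, D2O multiplier = {x_d**2:.1f}")
```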
JWST Instrument Simulator
We model the noise expected from our simulated spectral signals for various JWST observing modes using the JWST time-series spectroscopy simulator, PandExo (Batalha et al. 2017). PandExo uses Pandeia, the core of the Exposure Time Calculator of the Space Telescope Science Institute (Pontoppidan et al. 2016). We consider only the optimal JWST observing modes for these atmospheres as determined by Lustig-Yaeger et al. (2019), who conducted a comprehensive analysis of JWST observing modes for the suite of atmospheres generated by Lincowski et al. (2018). As in Batalha et al. (2018) and Lustig-Yaeger et al. (2019), for the NIRSpec Prism, we also consider an observing mode with a high-efficiency readout pattern by using a larger number of "groups" (n_groups = 6) that allows saturation near the peak of the TRAPPIST-1 spectral energy distribution (SED), and therefore offers an improved duty cycle for the unsaturated spectral intervals. We do not impose a noise floor (c.f. Greene et al. 2016), as the on-orbit performance and systematic errors for JWST are not currently known.
Model Planetary Atmospheres and Stellar Inputs
To model post-ocean-loss atmospheres that have undergone isotopic fractionation, we use nominal 10 bar atmospheres for TRAPPIST-1 b, d, and e from Lincowski et al. (2018). The choice of 10 bars is consistent with the findings of O2 sequestration by Wordsworth et al. (2018), though other stable climate states with different compositions and higher or lower surface pressures and temperatures are also possible (c.f. Wolf 2017; Turbet et al. 2018; Wunderlich et al. 2019). TRAPPIST-1 b and d are the planets that require the fewest transits to observe molecular absorption features, due to larger expected scale heights resulting from higher atmospheric temperatures, and a low surface gravity for d (Lincowski et al. 2018; Lustig-Yaeger et al. 2019). Planet d also sits between the current conservative and recent Venus estimates for the inner edge of the habitable zone, and so could help constrain the actual inner limit of the HZ for the TRAPPIST-1 system. TRAPPIST-1 e is firmly in the conservative HZ, and is perhaps most likely to be temperate (Wolf 2017; Turbet et al. 2018; Lincowski et al. 2018). Though some or all of the TRAPPIST-1 planets could have much larger water abundances because their densities are generally lower than the density of Earth, here we assume initially terrestrial bulk compositions, as their 3σ error bars also encompass the density of Earth (Grimm et al. 2018). For this isotopic fractionation detection study, we have assumed that all three of these planets, including TRAPPIST-1 e, have lost their surface water due to an early runaway greenhouse phase and atmospheric escape during the superluminous pre-main-sequence of the host star, and so are not habitable (Lincowski et al. 2018). For all three planets, we modeled O2-dominated atmospheres, both desiccated and outgassing, and clear-sky Venus atmospheres. A cloudy/hazy Venus case is not included for the detectability studies considered here, as haze opacity (due to Mie scattering at < 2.5 µm and absorption by H2SO4 at > 2.5 µm; Palmer & Williams 1975; Pollack et al. 1993; Ehrenreich et al. 2012; Lincowski et al. 2018) likely precludes detection of the isotopologue bands. Furthermore, due to high temperatures, H2SO4 will not likely condense in the atmosphere of a Venus-like TRAPPIST-1 b (Lincowski et al. 2018). We model a second set of spectra for the O2-dominated atmospheres that reduce the water abundances for b and d by a factor of 100 and 10 respectively, to simulate lower outgassing rates (and a drier stratospheric water abundance of ∼1 ppm) than those assumed in Lincowski et al. (2018).

Note (Table 1): Climatically and photochemically self-consistent environments from Lincowski et al. (2018) simulated for spectral analysis, dominated either by O2 or CO2, assuming a range of trace species outgassing. TRAPPIST-1 b, d, and e were simulated for all environments with the listed fractionation levels. We model a reduced H2O case for the O2 outgassing environments, where the water vapor is scaled down by a factor of 100 (TRAPPIST-1 b) or 10 (TRAPPIST-1 d), which reduces the water vapor to the level observed in the stratospheres of Venus and Earth (∼1-3 ppm). As a result, the reduced H2O atmospheres do not have self-consistent climates/photochemistry. There was no reduced H2O case for TRAPPIST-1 e because the water vapor was already ∼1-5 ppm.
The model atmospheres are detailed in Lincowski et al. (2018) and listed in Table 1. Briefly, they contain 58-66 levels from the surface to 0.01 Pa. The O 2 desiccated atmosphere is 95% O 2 , 0.5% CO 2 , and 4.5% N 2 in photochemical-kinetic equilibrium with the primary photolytic products CO and O 3 (see Lincowski et al. 2018, their Figure 4). The O 2 outgassing atmosphere is 95% O 2 and 4.5-5% N 2 , with Earth-like volcanic outgassing fluxes at the surface for H 2 O, CO 2 , SO 2 , and other molecules not detectable in these spectra (see Lincowski et al. 2018, their Figures 4 and 14). Note that the O 2 atmospheres with reduced water abundance are not climatically or photochemically self-consistent. The Venus-like atmosphere is 96.5% CO 2 , 3.5% N 2 , with trace amounts of H 2 O, SO 2 , and others fixed to values at the surface consistent with Venusian abundances at 10 bar (Lincowski et al. 2018, their figure 6).
These atmospheres were generated using a 1D line-by-line, multi-stream, multi-scattering radiative-convective equilibrium climate model coupled to a photochemical-kinetics model. More information about the climate-photochemical and spectral modeling of these model atmospheres can be found in Lincowski et al. (2018), particularly their §2 (Model and Input Descriptions) and §3 (Results). As in Lincowski et al. (2018), we use the updated planetary masses and radii from Grimm et al. (2018), and the semi-major axes and stellar radius from Gillon et al. (2017) to compute transit spectra.
Isotopic Fractionation
Since isotopologues have not been considered in M dwarf terrestrial planetary atmospheric escape modeling (Lammer et al. 2007;Schaefer et al. 2016;Ribas et al. 2016;Airapetian et al. 2017;Bolmont et al. 2017;Dong et al. 2017Dong et al. , 2018Wordsworth et al. 2018;Lincowski et al. 2018;Egan et al. 2019), we use Venus observations to constrain plausible fractionation values for our spectral modeling. While the Venus literature is mostly in agreement that hydrodynamic loss was responsible for the primordial loss of water (Hunten 1982;Kasting & Pollack 1983;Kasting 1988;Chassefière 1996;Gillmann et al. 2009;Chassefière et al. 2012;Bullock & Grinspoon 2013;Lichtenegger et al. 2016;Lammer et al. 2018) and may have been responsible for fractionation of D/H (c.f. equation (17) Hunten et al. 1987), the specific mechanism(s) that caused the current D/H fractionation are uncertain (e.g. Kasting & Pollack 1983;Kasting 1988;Grinspoon 1993;Gillmann et al. 2009;Collinson et al. 2016;Lichtenegger et al. 2016 and reviews by Chassefière et al. 2012;Bullock & Grinspoon 2013;Lammer et al. 2018). These processes generally cause some degree of fractionation and favor escape of the lighter elements, either due to the larger escape energy required by heavier species or due to the diffusive stratification of the homosphere and resultant higher abundances of lighter elements near the exobase (i.e. Rayleigh fractionation, Rayleigh 1896;Hunten 1982;Sharp 2017).
Higher fractionation values compared to Venus may be possible due to the more extreme stellar radiation environment experienced by M dwarf planets. M dwarfs have a much longer superluminous pre-main-sequence phase (Baraffe et al. 2015) and they emit comparatively more XUV and FUV flux than our Sun (e.g. Ribas et al. 2016), which enhances atmospheric escape over time through stronger ionospheric heating from XUV absorption, and stronger photolysis of water vapor by FUV absorption, a key limiting factor defining the diffusion-limited escape flux. Terrestrial planets in and around the habitable zones of M dwarf stars could lose hundreds of bars of oxygen to nonthermal escape processes, such as CME- or solar-wind-driven ion pick-up (c.f. Kulikov et al. 2006; Lammer et al. 2007; Gillmann et al. 2009) and polar winds (c.f. Collinson et al. 2016; Airapetian et al. 2017), both of which may be aggravated for planets without strong (i.e. Earth-like) magnetic fields. Although these studies have not considered isotopes or isotopic fractionation, the large loss potential for oxygen and other heavier species may plausibly result in severe isotopic fractionation. Without detailed calculations of the possible isotopic fractionation in the atmospheres of planets around M dwarf stars, we assume a range of values for δD and δ18O both consistent with and more severe than Venus. For the environments with water vapor, we conservatively simulate δD up to 100 times VSMOW, consistent with Venus (∼120-140). For cases where δD = 100, we also simulate δ18O up to 10 times VSMOW (note Venus exhibits no oxygen fractionation). In our most severe case for atmosphere and ocean loss, we simulate δ18O up to 100 for the desiccated, O2-dominated environment. For this extreme case, we assume that early complete ocean and hydrogen loss was followed by continued escape of oxygen. This severe value may not be possible, but it is useful to calculate a range of values in these spectral experiments to demonstrate the thresholds for detection. We include lesser fractionation values as appropriate for each case.
RESULTS
We present noiseless simulated transit transmission spectra at 1 cm^-1 resolution demonstrating the signal present due to different levels of extreme isotopic fractionation for δD up to 100 VSMOW (similar to Venus) and δ18O up to 100 VSMOW. These spectra are presented as "relative transit depth", given as (c.f. Winn 2010; Lincowski et al. 2018)

$$\frac{dF_a}{F} = \frac{(R_p + R_a)^2 - R_p^2}{R_*^2},$$

where F is the stellar flux, dF_a is the difference in stellar flux due to occultation by the atmosphere of the transiting planet, R_p is the planet solid-body radius, R_a is the atmospheric height from the surface of the planet, and R_* is the radius of the star. In this work, the relative transit signals discussed are generally relative to the transit signal calculated with the nominal VSMOW abundances. The spectra and data are available online using the VPL Spectral Explorer, or upon request. We assess the detectability of the isotopically-enhanced features of these spectra propagated through the PandExo JWST instrument noise simulator (Batalha et al. 2017).
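For an order-of-magnitude sense of this quantity, the short sketch below evaluates the expression for a thin annulus of atmosphere on a TRAPPIST-1 b-like planet; the radii and the assumed effective atmospheric height are round, illustrative numbers, not the values used to generate the spectra in this work.

```python
# Order-of-magnitude evaluation of the relative transit depth defined above.
# All numbers are approximate/illustrative: R_p ~ 1.12 R_Earth and
# R_* ~ 0.12 R_Sun for TRAPPIST-1 b and its host star, and R_a is an assumed
# effective atmospheric height of 100 km (a few scale heights).
R_EARTH = 6.371e6  # m
R_SUN = 6.957e8    # m

R_p = 1.12 * R_EARTH   # planet solid-body radius (approximate)
R_star = 0.12 * R_SUN  # stellar radius (approximate)
R_a = 1.0e5            # assumed effective atmospheric height (m)

depth = ((R_p + R_a)**2 - R_p**2) / R_star**2
print(f"relative transit depth ~ {depth * 1e6:.0f} ppm")
```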
Isotopologue Abundances
We solve for the isotopologue abundances of the model atmospheres we adopt from Lincowski et al. (2018) numerically as described in §2.2. The calculated abundances for all fractionation values for the primary detectable species, H 2 O and CO 2 , are listed in Table 2, which are listed in columns for each iteration of δD and δ 18 O.
Simulated Spectra
We assessed isotopic fractionation of δD up to 100 times VSMOW for two outgassing, O2-dominated atmospheres, and a (clear-sky) Venus-like, CO2-dominated atmosphere, for TRAPPIST-1 b, d, and e. We assessed δ18O up to 10 for these atmospheres, and up to 100 VSMOW for a completely desiccated O2-dominated atmosphere. Justification for these values was discussed in §2.5 and these environments are listed in Table 1.
Noiseless simulated spectra are shown in Figures 1-2. In Figure 3 we show simulated spectra for TRAPPIST-1 d and e for the O2-dominated atmospheres with outgassing. The atmospheres of both planets have smaller absorption features due to lower water vapor abundances and temperatures, and due to the refraction of stellar photons. TRAPPIST-1 e has very small isotopologue features here compared to VSMOW abundances, but this simulated, uninhabitable environment has a temperate climate and stratospheric water abundance similar to a potentially habitable planet, which makes it an important comparison case.
The transit signals of isotopic fractionation in our individual atmospheres vary considerably and depend on abundances of water vapor and CO2 (Figure 1). In an O2-dominated atmosphere with some water vapor still present due to outgassing, Venus-like fractionation of δD could be observable. With ∼500 ppm stratospheric H2O (Figure 1, upper panel), the modeled TRAPPIST-1 b atmosphere may exhibit a transit signal up to 74 ppm in the broad 3.7 µm HDO band. There are also weaker features due to HDO at 1.5 and 2.4 µm if HDO is sufficiently abundant. If fractionation in oxygen was also present, the 18O12C16O features between 3.0-4.1 µm would overlap with the 3.7 µm HDO band, though the band widths and shapes are different. The climate-photochemistry models of Lincowski et al. (2018) calculated lower water abundances in the stratosphere of this environment for TRAPPIST-1 d (10-50 ppm) and e (∼1 ppm), and the transit signal for enhanced HDO is similarly smaller (55 and 11 ppm, respectively). These transit signals are also reduced due to refraction for both planets and a much smaller scale height for e. With reduced stratospheric water vapor (to ∼1-5 ppm each for b and d), compared to VSMOW the spectra exhibited isotopologue transit signals of 79 and 29 ppm for b and d, respectively.
For a more severe case in a desiccated (water-free) O2-dominated environment (Figure 2, upper panel), enhancements to δ18O in 18O12C16O are evident throughout the NIR (strongest at 1.7, 2.2, and 3-4 µm; labeled as 18OCO), exhibiting transit signals compared to VSMOW of up to 94 ppm for b and 72 ppm for d. With a signal of 29 ppm, this was the only TRAPPIST-1 e atmosphere to exhibit a transit signal near 30 ppm compared to VSMOW (i.e. the putative noise floor for NIRSpec, Greene et al. 2016). There are also strong 18O12C16O isotopologue bands in these atmospheres at 6.0, 7.3, and 7.9 µm (not shown), though this spectral region is less favorable for JWST observations (Morley et al. 2017; Lustig-Yaeger et al. 2019).
For all three modeled planets (b, d, and e), Venus-like atmospheres are thoroughly dominated by CO 2 , with minimal differences in spectral features due primarily to 18 O 12 C 16 O (less than 30 ppm) in the transit signal compared to VSMOW (see e.g. Figure 2, lower panel). These differences were at the same wavelengths exhibited by the O 2 atmospheres. With CO 2 absorption saturating the NIR spectrum, HDO was not detectable in our Venus-like cases.
While HDO and 18 O 12 C 16 O were the primary detectable isotopologues, many other bands were included in our spectral models, but were generally not present in the simulated transit transmission spectra, including the O 2 A-band. No isotopologues containing 17 O were distinctly present, due to its lower abundance compared to 18 O. Isotopologue absorption by 12 C 18 O at 2.4 µm is distinguishable in the completely desiccated atmosphere (see Figure 1), but not likely individually discernible with JWST.
Detectability Assessment
Lustig-Yaeger et al. (2019) conducted a comprehensive detectability study for JWST instruments using the suite of model spectra from Lincowski et al. (2018) and identified several useful instrument modes for these atmospheres. These optimal instrument modes were used here, and are listed along the bottom of Figure 4, which provides a summary of the number of transits required to attain an expected signal-to-noise (S/N) of five compared to the transit spectra with nominal VSMOW abundances for each model atmosphere. The number of transits required for a different S/N can be calculated as

$$N' = N \left(\frac{(S/N)'}{S/N}\right)^{2},$$

where N is the number of transits quoted here (at S/N = 5) and N' is the number of transits required to attain (S/N)'. Except for our CO2-dominated, Venus-like environments, Venus-like levels of fractionation of D/H consistent with past ocean loss for each TRAPPIST-1 b and d atmosphere may be detectable with JWST (see e.g. Figure 5). The completely desiccated, O2-dominated atmospheres modeled for TRAPPIST-1 b and d have 18O12C16O bands that are also accessible to JWST. Assuming volcanically-outgassed species in the O2-dominated atmospheres allows detection of HDO bands. Fractionation in oxygen, in addition to hydrogen, increases the detectability of isotopologue bands due to complementary signal increases in both HDO and 18O12C16O bands. As in Lustig-Yaeger et al. (2019), we find that NIRSpec Prism SUB512 (512x512 subarray) with a partial saturation strategy (allowing partial saturation at the SED peak to provide higher signal at other wavelengths), with six groups per integration, is generally the ideal instrument/mode for these detections. NIRSpec G395H grism is nearly as good, or in some cases better, due to coverage of the 3-4 µm region containing broad HDO and 18O12C16O bands. Similarly, due to its wavelength coverage, NIRCam F322W2 is also an acceptable instrument for these detections.
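As a quick illustration of this scaling, the snippet below converts the eleven transits quoted in the Discussion for the HDO (δD = 100) detection on TRAPPIST-1 b at S/N = 5 into the transits needed at other signal-to-noise levels; the rounding up to whole transits is our own choice.

```python
# N' = N * ((S/N)'/(S/N))**2, with N = 11 transits at S/N = 5 taken from the
# Discussion (HDO detection on TRAPPIST-1 b with NIRSpec Prism).
import math

N_REF, SNR_REF = 11, 5.0

for snr_target in (3.0, 5.0, 10.0):
    n_needed = math.ceil(N_REF * (snr_target / SNR_REF) ** 2)
    print(f"(S/N)' = {snr_target:>4}: {n_needed} transits")
```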
With lower levels of stratospheric water vapor similar to Venus and Earth in an O 2 -dominated atmosphere (here, 1-5 ppm H 2 O), a similar number of transits are required for TRAPPIST-1 b compared to the nominal outgassing atmospheres of Lincowski et al. (2018). This is significantly worse for TRAPPIST-1 d, which may be in part due to the higher refraction altitude exhibited by d.
The Venus-like atmosphere with Venus-like isotopic fractionation is likely very difficult to detect with JWST, even without aerosols. This is due to the small scale height and high opacity of a CO2-dominated atmosphere, which confines the transmission to the upper stratosphere and strongly reduces the signal from H2O and HDO. To detect a fractionation signal, such a Venus-like planet would also have to exhibit substantial fractionation in oxygen (unlike Venus itself), which could enhance the transit depths of the 18O12C16O bands. In a cooler atmosphere such as that of TRAPPIST-1 d, the CO2 bands are not as broadened, and the occupancy of the lower energy states of isotopologue bands is higher (Mollière & Snellen 2019), so detecting 18O12C16O in a Venus-like (albeit clear-sky) atmosphere of d would require fewer transits than b.
For TRAPPIST-1 e, detecting isotopic fractionation in H 2 O would be difficult, requiring more than 100 transits for all fractionation values, atmospheric environments, and instrument modes considered here. Fractionation in CO 2 oxygen isotopes in the desiccated O 2 -dominated atmosphere may be possible, if such extreme fractionation is possible. The HDO and 18 O 12 C 16 O features in the model atmospheres for TRAPPIST-1 e are substantially masked by atmospheric refraction. The cooler temperatures and higher surface gravity compared to TRAPPIST-1 b and d result in small scale heights, which conspire to produce shallow transmission depths.
DISCUSSION
We have shown that an enhancement of 100 in either δD (via water vapor) or δ 18 O (via CO 2 ), which would likely be due to extreme ocean or atmospheric loss, may be detectable with JWST for the inner planets of the TRAPPIST-1 system. Simulated spectra for our O 2 -dominated atmospheres exhibit transit signals up to 79 ppm and 94 ppm for δD and δ 18 O respectively for TRAPPIST-1 b (55 and 72 ppm respectively for d; 11 and 29 ppm respectively for e). With optimal use of NIRSpec Prism, these signals could be detected at S/N = 5 in eleven and four transits respectively for b (25 and five for d). In the desiccated, O 2 -dominated case with δD = 100 and δ 18 O = 10, the isotopic enhancement can be distinguished from the nominal Earth VSMOW abundances in as few as eight transits for b (13 for d). This is only a few more transits than the two transits required with NIRSpec Prism SUB512 (n groups = 6) to detect an atmosphere by ruling out a featureless spectrum at S/N = 5 for b or d (Lustig-Yaeger et al. 2019). It is more difficult to observe a fractionation signature in the atmosphere of a habitable zone planet like TRAPPIST-1 e; only extreme fractionation of oxygen is possibly detectable (in 28 transits), due to a smaller atmospheric scale height, lower levels of stratospheric water vapor, and the refraction of stellar rays through the atmosphere during transit (Bétrémieux & Kaltenegger 2013; Misra et al. 2014; Robinson 2017).
The strongest transit signal is due to rotational-vibrational bands in the NIR, 1-5 µm, consistent with JWST NIRSpec. Transit observations depend on the stellar photons for signal, and this is the spectral region with the most stellar photons from M dwarf stars. These rotational-vibrational transitions are subject to mass-dependent spectral shifts for the bands between 1-3 µm, because the transition frequencies scale inversely with the square root of the reduced mass of the molecule. Asymmetric molecules such as HDO and 18 O 12 C 16 O gain new degrees of freedom because the symmetry of H 2 O and CO 2 is broken, which produces new absorption bands between 3-8 µm.
The severe fractionation considered in this work would not likely be caused by any other known fractionation mechanism, including the composition of the host star (e.g. Mollière & Snellen 2019, and references therein). Observed isotopic abundances in the atmosphere of an exoplanet must be interpreted in relation to the host star. However, nearby stars differ in D/H by less than a factor of ∼ 2 compared to the Sun (c.f. Linsky et al. 2006; Asplund et al. 2009). Fractionation of isotopic abundances compared to the host star value would indicate that a planet evolved from the primordial conditions, which could include a small effect due to formation. However, a large fractionation compared to the host star would likely indicate an evolved atmosphere, though at this time we cannot say which mechanism may be responsible.
The ability to observe isotopic fractionation in H 2 O or CO 2 may help to more robustly identify worlds without surface oceans. Transmission spectroscopy cannot directly probe the surface composition of a planet, and water vapor observed in transmission is suggestive, but not conclusive proof, that an ocean is present. Current planetary hydrogen loss (though also not conclusive evidence of a surface ocean) could be identified via UV measurements of Lyman-α (e.g. Jura 2004). The detection of O 2 -O 2 collision-induced absorption is indicative of a large inventory of oxygen that is most likely produced during the loss of oceans of water (Schwieterman et al. 2016). A large O 2 inventory does not preclude the possibility that a significant amount of surface water remains, especially for more volatile-rich worlds, which may include the TRAPPIST-1 planets. However, the identification of isotopic fractionation in H 2 O and/or CO 2 in conjunction with the detection of O 2 -O 2 would be strong evidence that the planet lost the bulk of its available water inventory, and that it is unlikely that surface water remains. This is because an extant inventory of surface water would dilute the isotopic fractionation signal. Isotopic fractionation signals could therefore complement observations of O 2 -O 2 for ocean loss planets, and these two sets of observations may be attainable with JWST, at least for the inner planets of TRAPPIST-1. Lustig-Yaeger et al. (2019) found that O 2 -O 2 features in O 2 -dominated atmospheres may be detected in as few as ten transits (for TRAPPIST-1 b) in the outgassing case with the NIRSpec Prism SUB512 (n groups = 6), equal to the ten transits required to distinguish HDO bands compared to VSMOW. The NIRSpec Prism is particularly useful because the extra wavelength coverage (0.6-5.3 µm) can provide simultaneous evidence of the O 2 abundance (via O 2 -O 2 at 1.06 and 1.27 µm) and of isotopic fractionation of hydrogen in H 2 O or oxygen in CO 2 . Here we have shown that for a planet like TRAPPIST-1 b, the number of transits required to identify these individual O 2 -O 2 bands is sufficient to begin distinguishing isotopologues as well, without additional observing time beyond that necessary to identify a large (abiotic) inventory of O 2 .
Figure 5: Simulated transmission spectra for the O 2 outgassing environment for TRAPPIST-1 b with ∼500 ppm stratospheric H 2 O, comparing δD = 100 (blue) vs. VSMOW (black). These spectra were processed through the PandExo JWST instrument simulator (Batalha et al. 2017) and are presented with error bars at the native resolution for NIRSpec Prism (nominal resolving power R ∼ 100), using n groups = 6, with 11 transits co-added. The difference between these spectra is detectable at S/N = 5. The broad HDO band at 3.7 µm provides significant contribution to the overall detection. The O 2 -O 2 bands at 1.06 and 1.27 µm can also be seen here.
Although our motivation for assessing atmospheric and ocean loss through measuring the D/H ratio was inspired by remote-sensing measurements of D/H in the atmosphere of Venus, it will not likely be possible to conduct these measurements in the atmospheres of Venus-like exoplanets using transit observations. Sulfuric acid aerosol formation truncates the atmospheric levels that can be probed in transmission, and the saturated CO 2 bands largely blanket the NIR-MIR spectra of CO 2 -dominated atmospheres. The effective greenhouse causes high temperatures, which disfavor the occupation of isotopologue ro-vibrational energy states (Mollière & Snellen 2019).
Isotopologue observations may help constrain the location of the inner edge of the habitable zone by providing evidence for whether water vapor detected in the atmosphere of a planet is consistent with ocean loss or with a primordial reservoir (i.e. surface water or outgassing). Since TRAPPIST-1 d sits between the recent Venus and moist greenhouse limits, it is a valuable target for this observation. The observation of water vapor with no evidence of fractionation in the atmosphere of an IHZ planet like TRAPPIST-1 d would support the recent Venus limit, though the lack of fractionation could also be due to outgassing fluxes or a high volatile inventory; i.e. it could be a water world in a permanent runaway greenhouse state, which would be indicated by a large abundance of stratospheric water vapor. Conversely, isotopic evidence of ocean loss would strongly support the moist greenhouse limit as the true inner edge. Robust observational evidence for the location of the IHZ would require a survey of multiple planets and multiple systems with high-fidelity observations to constrain both the H 2 O abundances and D/H ratios.
It may be difficult to observe isotopologues in the atmospheres of planets within the habitable zone. We found that TRAPPIST-1 e exhibited only small signals, and even in the optimistic assessment we conducted that neglected any systematic noise floor, it would generally require greater than 100 transits to identify isotopologues at S/N =5. This is likely to apply to HZ planets in general, due to the lower atmospheric temperatures (and resultant lower atmospheric scale height), additional refraction due to distance from the star, and lower stratospheric water vapor.
The fractionation we have modeled may not be attainable for all small planets that may have undergone ocean and atmospheric loss in and around the habitable zones of M dwarf stars. If a planet targeted for observation is more volatile-rich than Earth, the volatile inventory available to the atmosphere may never be lost, and as discussed above, would not impart a significant fractionation signature on the atmosphere. This may be the case for one or more of the TRAPPIST-1 planets, which may be more volatile rich than Earth due to slightly lower nominal densities (Grimm et al. 2018) and dynamical evidence of possible migration (Luger et al. 2017). However, within the calculated error, these densities are still within the range of terrestrial values in a mass-radius relationship (Grimm et al. 2018).
An observation of oxygen fractionation in CO 2 could indicate extreme atmospheric loss but may be difficult to achieve. The fractional mass difference between 18 O and 16 O is small, and Venus does not exhibit fractionation in oxygen (Hoffman et al. 1980;Bezard et al. 1987;Iwagami et al. 2015). So significant fractionation in oxygen may require atmospheric escape mechanisms unknown in our solar system, or operating over longer time periods or at higher fluxes than for the solar system planets. However, if it occurred, oxygen fractionation could be retained within CO 2 if oxygen loss occurred in the presence of CO 2 . Without surface liquid water, surface weathering processes (i.e. carbonate-silicate weathering, Walker et al. 1981) would no longer draw outgassed CO 2 from the atmosphere. Since the oxygen in CO 2 would dilute the isotopic signal from oxygen loss, the remaining CO 2 inventory must be small compared to the remaining oxygen to prevent significant dilution of the oxygen isotope abundances.
In addition to potential dilution of oxygen isotopic fractionation, atmospheres with large inventories of CO 2 would make it more difficult to observe any isotopic fractionation signal considered here. Unlike the other typical terrestrial bulk atmospheric gases O 2 and N 2 , CO 2 exhibits significant absorption throughout the NIR-MIR, which masks weaker absorption lines, including isotopologues. Large quantities of water vapor can similarly interfere with observing other trace gases. While Earth-like abundances of CO 2 and H 2 O may be the most favorable for distinguishing and characterizing important isotopologue bands, low CO 2 abundance may not be likely for a planet that experienced total ocean loss because the lack of liquid surface water would reduce surface weathering that draws outgassed CO 2 from the atmosphere.
Other factors not considered in this work can affect the possibility of detecting isotopic fractionation, such as surface pressure and clouds. Morley et al. (2017) showed that for atmospheres with surface pressure of 1 bar and lower, the number of transits required to detect features increased with decreasing pressure. In Lincowski et al. (2018) the amplitudes of transit transmission features did not change significantly as a result of higher surface pressure at pressures higher than 10 bars. Clouds and hazes, which may form at high altitude, can significantly truncate transit transmission spectra, particularly hazes such as those in the atmosphere of Venus, which form at much higher altitude than water clouds (e.g. Lincowski et al. 2018). For Venus-like clouds in particular, H 2 SO 4 has many absorption bands longward of 2.5 µm. These absorption bands are not the same as CO 2 or H 2 O, but together with these gases and due to the high altitude of haze aerosols, the presence of H 2 SO 4 aerosols can effectively eliminate primary detectable isotopologue features in the 2.8-4.3 µm range, between CO 2 bands. While TRAPPIST-1 d and e may form aerosols, Lincowski et al. (2018) showed that, due to high atmospheric temperatures, the most likely condensates in these oxidized atmospheres, H 2 O and H 2 SO 4 , would not condense in the atmosphere of a Venus-like TRAPPIST-1 b. While other metal aerosols have been suggested for hotter exoplanets (such as sodium and potassium-based condensates), these are not plausible for the temperature regimes considered here, as it would not be possible to evaporate them from the planetary surface (c.f. Schaefer & Fegley 2009). Hydrogen-dominated atmospheres could support other types of aerosols, particularly if these planets reached higher temperatures (e.g. He et al. 2018), but the TRAPPIST-1 planets are not likely to have H-dominated atmospheres (de Wit et al. 2016;de Wit et al. 2018;Moran et al. 2018), with the possible exception of TRAPPIST-1 g (Moran et al. 2018), which we do not consider here. Consequently, TRAPPIST-1b is not likely to support the majority of hypothesized aerosols, and so is more likely to be clear sky than other planets in the system.
Here TRAPPIST-1 b, d, and e were used as sample planets to assess the possibility of detecting isotopologue bands in exo-terrestrial atmospheres. The results could be extended to other planets in the system, in light of the results of Morley et al. (2017), Lincowski et al. (2018), and Lustig-Yaeger et al. (2019), or to other systems. While TRAPPIST-1 c is one of the inner planets with parameters similar to Venus, it is more difficult to observe features in the atmosphere of TRAPPIST-1 c than b and d, due to the higher density of c. Given the results for TRAPPIST-1 e, it is unlikely that isotopologue bands could be observed in the atmospheres of the outer planets, TRAPPIST-1 f, g and h. The results of this work could be used to assess the detectability of isotopologue bands in the atmospheres of other M dwarf targets of interest, both those currently known and those yet to be discovered by SPECULOOS (Delrez et al. 2018), TESS (Barclay et al. 2018), CHEOPS (Broeg et al. 2013;Benz et al. 2018), and PLATO (Rauer et al. 2014).
We have shown that isotopic fractionation signals inferred from HDO or 18 O 12 C 16 O may be feasible to observe with JWST and may provide important clues to the evolutionary history of planets around M dwarfs. Measurements with ground-based high-resolution instruments may also be able to search for isotopic signatures for the very nearest M dwarf planets (Mollière & Snellen 2019). The values we have considered can guide observers considering different levels of fractionation. These levels could also be used to assess the possibility of detecting isotopic fractionation if future comprehensive modeling of atmospheric escape demonstrates at what level hydrogen or oxygen fractionation is possible, and serve as a testable hypothesis for fractionation due to ocean loss. After first determining that a planet has an atmosphere (Lincowski et al. 2018; Lustig-Yaeger et al. 2019), the next important assessment of the planetary environment may be to search for signs of severe ocean loss. Isotopic measurements can contribute evidence for assessing atmosphere and ocean loss of M dwarf planets, which will soon be observed with JWST. Although severe atmospheric escape is likely to afflict all small planets in or near M dwarf habitable zones, caveats against such fractionation occurring also exist. Observations of one or more inner planets in a multiple-planet system could be used to inform the suitability of more time-consuming follow-up observations of a habitable zone sibling.
CONCLUSIONS
We have shown that for Venus-like isotopic fractionation of D/H, or a similar fractionation in 18 O/ 16 O, isotopologue bands may be observable and distinguished from Earth-like isotopic abundances with JWST in as few as ten transits (δD = 100) or four transits (δ 18 O = 100) for a clear-sky atmosphere not dominated by CO 2 . The large fractionation values considered here are meant to demonstrate the potential for detecting these bands and discriminating them from Earth-like abundances. These fractionation values would require an ocean-free surface and ocean loss at least as severe as experienced by Venus. A detection of these bands in transit transmission spectra would be evidence of a lack of a surface ocean and the extreme atmospheric loss and oxygen buildup that has been proposed by a number of authors. This would provide valuable constraints on atmospheric escape models, on the location of the inner edge of the habitable zone (whether recent Venus or moist greenhouse), and on the habitability of M dwarf planets. Researchers currently preparing JWST observing proposals, and those who will be conducting retrievals on future JWST observations, may want to consider different isotopic abundances in line lists used as inputs to retrieval pipelines. Further work modeling atmospheric escape that includes elemental isotopes is also warranted, to understand to what degree isotopic fractionation is theoretically possible in the atmospheres of planets orbiting M dwarfs. A thorough analysis of atmospheric escape should consider thermal and nonthermal escape, including photochemistry and vertical transport, life-long outgassing, and the possibility of deep surface reservoirs (i.e. water worlds).
We thank Rodrigo Luger and David Crisp for useful discussions that helped improve this paper. This work was performed as part of the NASA Astrobiology Institute's Virtual Planetary Laboratory, supported by the National Aeronautics and Space Administration through the NASA Astrobiology Institute under solicitation NNH12ZDA002C and Cooperative Agreement Number NNA13AA93A, and by the NASA Astrobiology Program under grant 80NSSC18K0829 as part of the Nexus for Exoplanet System Science (NExSS) research coordination network. A.P.L. acknowledges support from NASA Headquarters under the NASA Earth and Space Science Fellowship Program -Grant 80NSSC17K0468. This work was facilitated though the use of advanced computational, storage, and networking infrastructure provided by the Hyak supercomputer system at the University of Washington. We also thank the anonymous reviewer, whose thoughtful comments helped us greatly improve the manuscript.
Diagnostic accuracy of maternal anthropometric measurements as predictors for dystocia in nulliparous women
Background: Dystocia is one of the important causes of maternal morbidity and mortality in low-income countries. This study aimed to determine the diagnostic accuracy of maternal anthropometric measurements as predictors for dystocia in nulliparous women. Materials and Methods: This prospective cohort study was conducted on 447 nulliparous women who were referred to Omolbanin hospital. Several maternal anthropometric measurements, such as height, transverse and vertical diameters of the Michaelis sacral rhomboid area, foot length, head circumference, vertebral and lower limb length, symphysio-fundal height, and abdominal girth, were taken at cervical dilatation ≤5 cm. Labor progression was controlled by a researcher blind to these measurements. After delivery, the accuracy of individual and combined measurements in prediction of dystocia was analyzed. Dystocia was defined as cesarean section and vacuum or forceps delivery for abnormal progress of labor (cervical dilatation less than 1 cm/h in the active phase for 2 h, and, during the second stage, a duration beyond 2 h or fetal head descent less than 1 cm/h). Results: Among the different anthropometric measurements, transverse diameter of the Michaelis sacral rhomboid area ≤9.6 cm, maternal height ≤155 cm, height to symphysio-fundal height ratio ≤4.7, lower limb length ≤78 cm, and head circumference to height ratio ≥35.05, with accuracy of 81.2%, 68.2%, 65.5%, 63.3%, and 61.5%, respectively, were better predictors. The best predictor was obtained by combination of maternal height ≤155 cm or transverse diameter of the Michaelis sacral rhomboid area ≤9.6 cm with Johnson's formula estimated fetal weight ≥3255 g, with an accuracy of 90.5%, sensitivity of 70%, and specificity of 93.7%. Conclusions: Combination of other anthropometric measurements and estimated fetal weight with maternal height, in comparison to maternal height alone, leads to a better predictor for dystocia.
Dystocia, or abnormal progress of labor, is the most common maternal problem and cause of burden in low-income countries. [1,2] It is estimated that 600,000 maternal deaths occur due to pregnancy and delivery disorders each year in the world, [3] of which 95% are reported from developing countries, and in 30% of the cases the problem is cephalopelvic disproportion. [3,4]

MATERIALS AND METHODS

The maternal anthropometric measurements included height, foot length, head circumference, vertebral and lower limb length, symphysio-fundal height, abdominal girth, transverse and vertical diameters of the Michaelis sacral rhomboid area, and intertrochanteric diameter, taken during labor at cervical dilatation ≤5 cm (latent phase until the first stage of the active phase) by a single researcher. Mothers' foot length was measured with a wooden ruler, head circumference as the distance between the highest occipital peak and the mid-forehead line, the length of the vertebral column as the distance between the first cervical vertebra and the end of the sacrum, and the length of the lower limbs on the right side of the body as the distance between the femoral trochanter and the heel. The transverse diagonal of the Michaelis sacral rhomboid area (the distance between the two notches of the superior posterior iliac spines at the two transverse ends of the sacrum) and the vertical diagonal of the Michaelis sacral rhomboid area (the distance between the fifth lumbar and the last sacral vertebra) were measured. All of these measurements were made using a measuring tape, with the mothers in a standing position [Figure 1]. Mothers' height was measured in standing position, following standard height measurement procedures. The intertrochanteric diameter was measured with a Breisky pelvimeter in standing position. Symphysio-fundal height was measured with a measuring tape, after ensuring the mother's bladder was voided and correcting any uterine deviation, as the distance between the superior edge of the symphysis and the uterine fundus; abdominal girth was measured at the level of the umbilicus in the supine position. Maternal prepregnancy weight, or that of the first trimester, was collected from the recorded data in maternal prenatal files, and BMI was calculated from it.

Data related to labor and delivery were collected through constant monitoring of the patients during labor and delivery. Estimation of fetal weight was conducted by two methods. In Johnson's formula, the height of the uterus was measured and, in case of fetal engagement, 12 was subtracted from it and the result multiplied by 155. In the other method used in the present study, fetal weight was calculated by multiplying the uterine height by the mother's abdominal circumference. In order to eliminate tool error and researcher bias, all the subjects' anthropometric measurements were repeated and their mean was taken as the final value. The data associated with these anthropometric measurements were not given to the researcher conducting labor control. Labor control was conducted by the same researcher, who observed and recorded cervical dilatation and fetal head descent every hour. For labor ending in CS or vacuum delivery, in addition to the existence of efficient contractions of the uterus, cervical dilatation <1 cm/h for two consecutive hours in the active stage of delivery, fetal head descent <1 cm/h in the second stage of delivery, and/or a second stage lasting more than 2 h [18,19] were considered as criteria for dystocia. The number, severity, and length of uterine contractions were assessed by manual palpation of the uterine fundus. In the active stage of delivery (cervical dilatation ≥4 cm), the existence of three to five contractions in 10 min lasting ≥40 s, and, when touching the fundus of the uterus during a moderately severe contraction, the researcher's fingers not being able to dent
the abdominal muscle, were considered as efficient uterine contractions. After delivery, the subjects were divided into two groups, normal delivery and dystocia. Type of delivery was considered as the gold standard of pelvic capacity, and the diagnostic value of the maternal anthropometric measurements was calculated on that basis. It should be noted that in the selection of the cut-off points for the anthropometric measurements, the numerical values obtained for these measurements were of great importance. Because precise detection of both healthy and diseased individuals matters, to prevent unnecessary referral of healthy individuals and/or failure to refer unhealthy individuals, the cut-off points were chosen as the values with the highest accuracy in addition to high sensitivity and specificity (>50%). Accordingly, sensitivity and specificity were calculated for percentiles and quarters of the various anthropometric measurements in the study population, and in the second percentile, transverse diagonal of the Michaelis sacral rhomboid area ≤9.6 cm was considered as its cut-off point. In the third percentile, maternal height ≤155 cm was considered as its cut-off point. The remaining cut-off points were: in the fourth percentile, intertrochanteric diameter ≤31 cm; in the fourth percentile, ratio of height to fundal height ≤4.7; in the third percentile, lower limb length ≤78 cm; in the sixth percentile, ratio of head circumference to height ≥35.05; in the first quarter, foot length ≤23 cm; in the second quarter, fetal weight estimated by Johnson's formula ≥3255 g; in the second quarter, fetal weight estimated by multiplication of uterine height by abdominal circumference ≥3255 g; in the sixth percentile, fundal height >33 cm; in the second quarter, mother's BMI >22 kg/m 2 ; in the second quarter, vertebral length ≤58.5 cm; in the sixth percentile, abdominal circumference >98.6 cm; in the second quarter, head circumference >55 cm; in the second quarter, vertical diagonal of the Michaelis sacral rhomboid area ≤9.5 cm; and in the sixth percentile, the lower limb length to height ratio.
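As a simple illustration of the two weight-estimation rules described above, the sketch below implements them directly; the function names are ours, and only the engaged-head case (subtracting 12) stated in the text is handled.

# Illustrative sketch of the two fetal-weight estimates described above.
# Inputs are in centimeters; outputs are in grams.

def johnson_efw(symphysio_fundal_height_cm):
    # Johnson's formula as used here: with the fetal head engaged,
    # subtract 12 from the symphysio-fundal height and multiply by 155.
    return (symphysio_fundal_height_cm - 12.0) * 155.0

def product_efw(symphysio_fundal_height_cm, abdominal_girth_cm):
    # Alternative estimate: symphysio-fundal height times abdominal girth.
    return symphysio_fundal_height_cm * abdominal_girth_cm

# A fundal height of 33 cm gives Johnson EFW = (33 - 12) * 155 = 3255 g,
# which is exactly the >=3255 g cut-off used for the combined predictors.
print(johnson_efw(33))        # 3255.0
print(product_efw(33, 98.6))  # 3253.8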
RESULTS
In this research, 527 nulliparous women entered the study, of whom 80 were excluded due to CS for any cause other than dystocia, such as meconium-stained amniotic fluid (n = 25), fetal distress (n = 16), fetal macrosomia (n = 4), placental abruption (n = 2), severe vaginal bleeding (n = 3), brow presentation (n = 1), birth weight <2500 g (n = 1), no response of ineffective uterine contractions to oxytocin (n = 3), and severe pelvic contraction (n = 25). Finally, 447 had delivery, of whom 56 subjects (12.5%) had dystocia, including 9 subjects (2%) with vacuum delivery and 47 subjects (10.5%) with CS. A total of 391 subjects (87.5%) had normal delivery. Mean mothers' height (P = 0.002), transverse diagonal of the Michaelis sacral rhomboid area (P = 0.000), lower limb length (P = 0.016), and height/symphysio-fundal height ratio (P = 0.001) were significantly lower in the dystocia group. Mean vertebral length was lower in the dystocia group, but not significantly (P = 0.0691). Mean ratio of head circumference to height was significantly higher in the dystocia group (P = 0.012), and mean symphysio-fundal height (P = 0.059), estimation of fetal weight by Johnson's method (P = 0.059), estimation of fetal weight by multiplication of symphysio-fundal height by abdominal girth (P = 0.072), and mean abdominal girth (P = 0.310) were higher in the dystocia group, but not statistically significantly. Mean vertical diagonal of the Michaelis sacral rhomboid area, intertrochanteric diameter, BMI, head circumference, foot length, vertebral length to height ratio, and lower limb length to height ratio were not significantly different between the two groups [Table 1].
On evaluation of the diagnostic values of height and each of the anthropometric measurements alone, the transverse diagonal of the Michaelis sacral rhomboid area, with sensitivity of 60.7%, specificity of 84.1%, and accuracy of 81.2%, had the highest diagnostic value. After that, mothers' height, with sensitivity of 50%, specificity of 70.8%, and accuracy of 68.2%, was in the second rank. The ratio of height to fundal height with accuracy of 63.5%, lower limb length with accuracy of 63.3%, and head circumference to height ratio with accuracy of 61.5% had diagnostic values almost similar to that of mothers' height. The other pelvic measurements presented lower diagnostic values [Table 2].
Combination of mothers' height with most of the maternal anthropometric measurements, in comparison with the diagnostic value obtained for height and each of the anthropometric measurements alone, led to a better predictor of dystocia; of these, combination of the third percentile of mothers' height ≤155 cm with the second percentile of the transverse diagonal of the Michaelis sacral rhomboid area ≤9.6 cm, with a sensitivity of 58.3%, specificity of 89.9%, and accuracy of 86.2%, was the best predictor for dystocia. Combination of mothers' height with uterine height >33 cm with an accuracy of 73.7%, combination of mothers' height with estimation of fetal weight by Johnson's method with an accuracy of 73.3%, combination of mothers' height with lower limb length with an accuracy of 71.5%, combination of mothers' height with abdominal circumference with an accuracy of 71.5%, and combination of mothers' height with foot length with an accuracy of 69.2%, in comparison with mothers' height alone, led to better predictors for dystocia. Combination of height with the other anthropometric measurements, in comparison with mothers' height alone, did not result in a better predictor [Table 3]. Combination of estimated fetal weight by Johnson's method with the pair combinations of height and other anthropometric measurements led to better predictors for dystocia, and the highest diagnostic value obtained in the present study was for combination of the third percentile of mothers' height with the second percentile of the transverse diagonal of the Michaelis sacral rhomboid area and the second quarter of fetal weight estimated through Johnson's formula, with a sensitivity of 70%, specificity of 93.7%, and accuracy of 90.5% [Table 4].
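The combined predictors above are simple decision rules applied to the chosen cut-offs. The sketch below shows how the best combined rule (height ≤155 cm or transverse Michaelis diameter ≤9.6 cm, together with Johnson estimated fetal weight ≥3255 g) and its diagnostic values could be computed; the variable names and the small example arrays are illustrative and are not the study data.

# Illustrative sketch: evaluate the combined rule from this study and compute
# sensitivity, specificity, and accuracy against the delivery outcome.
import numpy as np

def combined_rule(height_cm, michaelis_cm, johnson_efw_g):
    # (height <= 155 cm OR transverse Michaelis diagonal <= 9.6 cm) AND EFW >= 3255 g
    return ((height_cm <= 155) | (michaelis_cm <= 9.6)) & (johnson_efw_g >= 3255)

def diagnostic_values(predicted, dystocia):
    predicted = np.asarray(predicted, dtype=bool)
    dystocia = np.asarray(dystocia, dtype=bool)
    tp = np.sum(predicted & dystocia)
    tn = np.sum(~predicted & ~dystocia)
    fp = np.sum(predicted & ~dystocia)
    fn = np.sum(~predicted & dystocia)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(predicted)
    return sensitivity, specificity, accuracy

# Toy example with made-up values for four women:
height = np.array([152, 160, 154, 158])
michaelis = np.array([9.4, 10.2, 9.8, 9.5])
efw = np.array([3400, 3100, 3300, 3500])
dystocia = np.array([True, False, True, False])
print(diagnostic_values(combined_rule(height, michaelis, efw), dystocia))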
DISCUSSION
In the present study, with the goal of achieving better predictors for dystocia, in addition to height we measured the following maternal anthropometric measurements: mothers' head circumference, head circumference to height ratio, lower limb length, lower limb to height ratio, vertebral length, vertebral length to height ratio, transverse and vertical diagonals of the Michaelis sacral rhomboid area, intertrochanteric diameter, height to symphysio-fundal height ratio, and abdominal girth.
The accuracy obtained for mothers' height ≤155 cm was 68.2%, with a sensitivity of 50% and specificity of 70.8%. Among the various maternal anthropometric measurements, the transverse diagonal of the Michaelis sacral rhomboid area ≤9.6 cm, with an accuracy of 81.2%, sensitivity of 60.7%, and specificity of 84.1%, was the best predictor for dystocia and had higher accuracy, sensitivity, and specificity compared to mothers' height. Height to symphysio-fundal height ratio ≤4.7, lower limb length ≤78 cm, maternal head circumference to height ratio ≥35, fetal weight estimation ≥3255 g (Johnson's method), fetal weight estimation ≥3255 g by multiplication of symphysio-fundal height with abdominal girth, symphysio-fundal height >33 cm, mothers' BMI >22 kg/m 2 , vertebral length ≤58.5 cm, abdominal girth >98.6 cm, intertrochanteric diameter ≤31 cm, and vertical diagonal of the Michaelis sacral rhomboid area ≤9.5 cm had a higher sensitivity in the prediction of dystocia compared to mothers' height, but their specificity and accuracy were lower than those of mothers' height. Mothers' foot length ≤23 cm and head circumference >55 cm had lower sensitivity, specificity, and accuracy compared to mothers' height. Few studies have compared the diagnostic value of maternal anthropometric measurements with height. In the study of Liselele et al. (2000), [3] the cut-off points of pelvic diameters were selected based on the 10th percentile of their population, and the sensitivity, specificity, and positive likelihood ratio obtained for mothers' height were 21.%, 93.8%, and 3.5, respectively. Transverse diagonal of the Michaelis sacral rhomboid area, with a sensitivity of 42.9%, specificity of 91.1%, and positive likelihood ratio of 4.8, and intertrochanteric diameter, with a sensitivity of 38.1%, specificity of 89.4%, and positive likelihood ratio of 3.6, were better predictors for dystocia compared to mothers' height. [3] In the report of Rozenholc et al. (2007), mothers' height, with a sensitivity of 28.6%, specificity of 98.4%, and positive likelihood ratio of 18.4, was the best predictor for dystocia, and the other maternal anthropometric measurements had lower diagnostic value. For the transverse diagonal of the Michaelis sacral rhomboid area, they found a sensitivity of 45.9%, specificity of 92.7%, and positive likelihood ratio of 6.3. Although it had a higher sensitivity compared to mothers' height, its specificity and positive likelihood ratio were lower, and the intertrochanteric diameter, with a sensitivity of 26.5%, specificity of 88.9%, and positive likelihood ratio of 2.4, had a lower diagnostic value compared to mothers' height. [15] The sensitivity calculated for mothers' height and the transverse diagonal of the Michaelis sacral rhomboid area in our study was higher than the sensitivity obtained in the aforementioned studies and is not consistent with them, possibly due to the different cut-off points determined in our study, which were obtained based on the best sensitivity, specificity, and accuracy calculated over various percentiles and quarters. In our study, the diagnostic value of the transverse diagonal of the Michaelis sacral rhomboid area was higher than the diagnostic value of mothers' height, which is consistent with the results of Liselele et al. [3]

Benjamin et al. (2011) determined the cut-off points of maternal anthropometric measurements based on the ROC curve, and calculated sensitivity, specificity, and positive predictive value for mothers' height ≤155.5 cm as 70.4%, 52.1%, and 15.4%, respectively. They reported that mothers' foot length ≤23 cm, with a sensitivity of 77.8%, specificity of 58.6%, and positive predictive value of 18.6%, had a better predictive value for dystocia compared to mothers' height, [20] which is not consistent with the present study. In the studies of Rozenholc et al. (2007) and Benjamin et al. (2011), symphysio-fundal height, mothers' foot length, vertical diagonal of the Michaelis sacral rhomboid area ≤10.1, fetal weight estimation by Johnson's formula, and abdominal girth had lower specificity compared to mothers' height. [15,20] In the study of Van Bogaert et al. (1999), the mean lengths of the lower limb in the groups of natural delivery and dystocia were 91.3 and 89.3 cm, respectively (P = 0.014), the mean lengths of the vertebral column in the normal delivery and dystocia groups were 75.2 and 73.8 cm, respectively (P = 0.0003), and the mean mothers' heights in the normal delivery and dystocia groups were 157.6 and 154.1 cm, respectively (P = 0.0001). [21] In the study of Barnhard et al. (1997), the mean ratios of height to symphysio-fundal height in the normal delivery and dystocia groups were 7 and 3.7, respectively (P = 0.02). [22] In the study of Connolly et al. (2003), the mean mothers' head circumference values in the normal delivery and dystocia groups were not significantly different, but the mean ratios of head circumference to height in the normal delivery and dystocia groups were 34 and 35.1, respectively (P = 0.001). [23] Their results are in line with ours. Some researchers have argued that an increase in the ratio of head circumference to height in animals is a risk factor for dystocia. They reasoned that women with a high ratio of head circumference to height have possibly faced a growth disorder in their fetal period, leading to an imbalance in their ratio of head circumference to height, and consequently, this growth disorder may have affected their pelvis size. [23] The ratio of lower limb to height is an important predictor of the nutrition and health status of individuals, so women with malnutrition face shortness of the vertebral column and acute shortness of the lower limbs as well as a reduction in the ratio of lower limb to height. [24] Malnutrition in childhood is an important risk factor for impaired bone growth and shortness, which can be associated with growth disorder of the pelvic bones. [17] Gustav Adolf Michaelis first suggested the importance of the Michaelis sacral rhomboid area in the evaluation of pelvic capacity in 1851. [25] Abnormal size of the Michaelis sacral rhomboid area is a predictor of the mother's build and abnormal pelvic size, [25,26] and in pelvises with stenosis, its transverse diameter is shorter than its vertical diameter. [26] The distance between the femoral great trochanters is associated with the transverse pelvic diameter and in a number of studies has been reported to have a better diagnostic value compared to mothers' height. [3] Fetal size is estimated through different measurement methods such as measurement of symphysio-fundal height, abdominal girth, calculation of fetal weight by Johnson's formula, and multiplication of symphysio-fundal height with abdominal girth. Fetal size alone is not counted as an appropriate criterion for an unsuccessful delivery, as in most cases cephalopelvic disproportion is observed among fetuses with weight in the normal range. [19] Therefore, evaluation of the imbalance between fetal size and the mother's pelvis can be a better criterion to predict dystocia compared to fetal weight alone. [27]
These results are in accordance with those of the present study, which showed that the ratio of height to symphysio-fundal height was a better predictor for dystocia compared to symphysio-fundal height alone. In the present study, combination of mothers' height with different anthropometric measurements led to better predictors for dystocia compared to mothers' height alone. Combination of mothers' height ≤155 cm with transverse diagonal of the Michaelis sacral rhomboid area ≤9.6 cm, with an accuracy of 86.2%, sensitivity of 58.3%, and specificity of 89.9%, was the best paired predictor; in previous studies, combination of mothers' height with a transverse diagonal of the Michaelis sacral rhomboid area ≤10.4 was not a better predictor. [3,15] Benjamin et al. (2011) suggested combination of mothers' height with fetal weight estimation by Johnson's formula and measurement of mothers' foot length as better predictors compared to mothers' height alone. [20] In the present study, maternal anthropometric measurements in addition to height and fetal weight estimation by Johnson's formula were combined, and among the triple combinations, the highest accuracy (90.5%) was for the combination of mothers' height, transverse diagonal of the Michaelis sacral rhomboid area, and estimated fetal weight by Johnson's formula, with a sensitivity of 70% and specificity of 93.7%, which was better than the combination of height and transverse diagonal of the Michaelis sacral rhomboid area concerning specificity and accuracy. Triple combination of estimated fetal weight with height and lower limb length, with a sensitivity of 64%, specificity of 78.7%, and accuracy of 76.3%; triple combination of estimated fetal weight with height and vertebral length, with a sensitivity of 73.6%, specificity of 71.6%, and accuracy of 72%; and triple combination of estimated fetal weight with height and intertrochanteric diameter, with a sensitivity of 75%, specificity of 70%, and accuracy of 70.6%, resulted in better predictors concerning sensitivity, specificity, and accuracy, compared to each of the paired combinations.
Our results are consistent with those of Benjamin et al. (2011). [20] In the investigation of the diagnostic value of each maternal anthropometric measurement, the accuracy obtained for mothers' height was 68.2%. In the present study, the best predictors concerning sensitivity, specificity, and accuracy were obtained by triple combination of maternal anthropometric measurements and height with fetal weight estimation by Johnson's method, and the best predictor was the combination of the third percentile of mothers' height (≤155 cm), the second percentile of the transverse diagonal of the Michaelis sacral rhomboid area (≤9.6 cm), and estimated fetal weight (≥3255 g) by Johnson's method, which had an accuracy of 90.5%, sensitivity of 70%, and specificity of 93.7%.
CONCLUSION
Based on the results of the present study, mothers' height alone is not an appropriate predictor for dystocia, and its combination with other maternal anthropometric measurements and with estimated fetal weight yields better predictors of dystocia.
Table 1 : Mean of maternal anthropometric measurements in the two groups, normal delivery and dystocia
SD: Standard deviation
Table 3 : Diagnostic values of combining of maternal height with other maternal anthropometric measurements
Table 4 : Diagnostic values of combining different deciles and percentiles of maternal height with pelvic diameters and pelvic diameters with each other by the highest validity
Except for the transverse diagonal of the Michaelis sacral rhomboid area, the other anthropometric measurements had lower diagnostic values compared to mothers' height. Combination of mothers' height with the other anthropometric measurements led to better predictors compared to mothers' height alone. The best predictor was the paired combination of mothers' transverse diagonal of the Michaelis sacral rhomboid area and height, with a sensitivity of 58.3%, specificity of 89.9%, and accuracy of 86.2%. Combinations of mothers' height with symphysio-fundal height (accuracy = 73.4%), lower limb length (accuracy = 71.5%), and abdominal girth (accuracy = 71.5%) were the best predictors compared to mothers' height alone.
Dynamic Interpretable Change Point Detection
Identifying change points (CPs) in a time series is crucial to guide better decision making across various fields like finance and healthcare and facilitating timely responses to potential risks or opportunities. Existing Change Point Detection (CPD) methods have a limitation in tracking changes in the joint distribution of multidimensional features. In addition, they fail to generalize effectively within the same time series as different types of CPs may require different detection methods. As the volume of multidimensional time series continues to grow, capturing various types of complex CPs such as changes in the correlation structure of the time-series features has become essential. To overcome the limitations of existing methods, we propose TiVaCPD, an approach that uses a Time-Varying Graphical Lasso (TVGL) to identify changes in correlation patterns between multidimensional features over time, and combines that with an aggregate Kernel Maximum Mean Discrepancy (MMD) test to identify changes in the underlying statistical distributions of dynamic time windows with varying length. The MMD and TVGL scores are combined using a novel ensemble method based on similarity measures leveraging the power of both statistical tests. We evaluate the performance of TiVaCPD in identifying and characterizing various types of CPs and show that our method outperforms current state-of-the-art methods in real-world CPD datasets. We further demonstrate that TiVaCPD scores characterize the type of CPs and facilitate interpretation of change dynamics, offering insights into real-life applications.
Introduction
In domains such as healthcare and finance, real-world time-series data is highly influenced by change points (Truong et al., 2018). Identifying and analyzing these changes in data distribution not only enhances our comprehension of the underlying patterns but also helps mitigate risks and improve decision-making. Change point detection (CPD) methods segment a time-series into distinct intervals with varying underlying properties. Precisely inferring time points associated with such transitions is essential to decipher the behaviors of the processes being modeled (Aminikhanghahi & Cook, 2017). With a substantial increase in the volume of time-series data collected in a variety of domains such as finance (Lavielle & Teyssière, 2007) and healthcare (Yang et al., 2007), the importance of CPD methods that automatically capture changes in the signal has grown. CPD provides a way for the identification and localization of sudden changes in signals, such as patient vital signs, without the need for labeled information. For instance, in a medical setting, CPD enables the timely identification of significant variations such as changes in heart rate or declines in oxygen saturation levels, alerting doctors to potential health issues that demand immediate attention. Most existing CPD methods fail to consider the underlying variability in the properties and root causes of change points (CPs) and therefore cannot generalize effectively to time-series with complex change dynamics. CPs can be characterized as changes in the distribution of the measurements over time. Alternatively, they could also be the result of changes in the correlation structure between features. The former is more studied in the literature, but the latter is also of significance in many applications. For instance, among physiological signals, the Heart Rate Variability (HRV) measure always shows a negative correlation with the Heart Rate (HR) measurement, but in some rare situations, they might exhibit a positive correlation, indicating a concerning change in the underlying health state of an individual. At the same time, changes in the average HR can be indicative of other conditions. Methods that rely solely on detecting changes in marginal distributions will fail to identify such scenarios.
In this paper, we propose a statistical CPD scoring method called TiVaCPD that captures different types of CPs in time-series without the need for labeled instances of change. TiVaCPD offers a non-parametric solution that does not require any distributional assumption of the generative process, and can therefore generalize to various scenarios. TiVaCPD, as shown in Figure 1, assigns a CP score to each time-point that consists of two parts: 1) change in the correlation of features and 2) change in the underlying distribution of the time-series features. Each part of the score is interpretable, allowing us to characterize and classify CPs with similar underlying properties and understand the cause of the change. To identify changes in the joint distribution of features over time, we rely on the hypothesis that changes in feature interactions can be effectively captured via correlation networks constructed from adjacent windows of time. To this end, we employ a dynamic network inference method, time-varying graphical lasso (TVGL) (Hallac et al., 2017), to acquire sparse time-varying precision matrices to detect changes in feature correlation patterns. To identify changes in the probability distributions of adjacent windows, we build on recent statistical results by Schrab et al. (2021) in the theory of non-parametric two-sample MMD tests. Unlike existing CPD methods that employ a fixed-size sliding-window approach, we dynamically establish the window size, accounting for the variable length of states between CPs. Fixing window sizes introduces issues: small windows are likely to compromise the power of the statistical test, and larger windows are at risk of aggregating different distributions. By dynamically adjusting the sliding window size, we address these concerns. We propose to ensemble different components of the TiVaCPD score. Briefly, the ensemble method adaptively assigns weights to scores based on their dissimilarity, placing greater emphasis on scores that capture changes not detected by other components. We evaluate our method's ability to identify various categories of CPs on 4 simulated and 2 real-life time-series datasets and compare its performance against 3 state-of-the-art CPD methods, showing that our method outperforms all competitors on the real-world datasets and one complex simulated dataset. There are four main contributions of this work:

• We detect and characterize different types of changes in feature dynamics and/or distribution. The categorization and visualization of the underlying CP cause enhances interpretability, providing valuable insights into the observed patterns.

• We present a novel use of Time-Varying Graphical Lasso for quantifying changes in feature interactions. We demonstrate its ability to detect CPs that occur due to changes in the covariance of the joint distribution of features over time.

• We introduce a dynamic window selection method that effectively addresses the limitations of static windows when detecting changes in data distribution.

• We propose a novel ensemble technique to best aggregate unsupervised CP scores from different statistical tests. We also introduce a post-processing procedure to smooth the score estimate while preserving local minima and maxima, using the Savitzky-Golay filter (Press & Teukolsky, 1990).

Figure 1: TiVaCPD overview, showing an abstract illustration of the generation process for DistScore, CovScore, and the ensemble weights W used to aggregate the components of the model. SG stands for Savitzky-Golay filter.
Related Work
There is abundant literature on CPD methods (Truong et al., 2018;Aminikhanghahi & Cook, 2017;Reeves et al., 2007). CPD methods consider a time-series to be a collection of random variables with abrupt changes in distributional properties over time. Most of these methods are parametric (Yamanishi & Takeuchi, 2002;Kawahara et al., 2007) and involve estimating the underlying probability density function of the signal, which limits detection to certain type of distributions and is usually computationally expensive. Non-parametric methods (Chang et al., 2019;Cheng et al., 2020;Matteson & James, 2014) are used where the time-series dynamics cannot be easily modeled and prior assumptions about the data distribution cannot be made. An optimal transport-based method proposed by Cheng et al. (2020), conducts two-sample Wasserstein tests between the cumulative distribution of contiguous subsequences. It uses fixed-size sliding windows to compute the test statistic. However, basing CP decisions on the local maxima of this statistic can result in a higher false positive rate. Moreover, this method projects the data onto one-dimension and uses the mean statistic, potentially leading to the loss of detection power.
Deep learning-based methods are another type of non-parametric approach that recently gained popularity due to the increasing amount of available data. For example, Time-Invariant Representation (TIRE) (De Ryck et al., 2021), an autoencoder-based CPD approach, learns a partially time-invariant representation of time series and computes CPs using a dissimilarity measure. Another deep learning technique, referred to as T S − CP 2 (Deldari et al., 2021a), utilizes contrastive learning to detect CPs, leveraging the representation of time series acquired from temporal convolutional networks.
Other deep learning-based CPD methods use kernel functions (Li et al., 2015) for greater flexibility in representing the density functions of intervals of time. One such method, KLCPD (Chang et al., 2019) uses deep generative models to increase the test power of the kernel two-sample MMD test statistic (Gretton et al., 2007). It overcomes limitations of prior kernel-based CPD methods by removing the need of a fixed number of CPs or relying on prior knowledge of a reference or training set for kernel calibration. However, its performance depends on the choice of kernel and kernel bandwidths.
The lack of interpretability in deep learning-based CPD methods hinders our understanding of why and how these methods make predictions. Moreover, current methods often fail to capture changes in correlation patterns that occur due to evolving dynamics of multivariate time series. To address such CPs, Gibberd & Nelson (2015) introduced GraphTime, a Group-Fused Graphical Lasso estimator for grouped estimation of CPs in dependency structures of a time-series captured by a dynamic Gaussian Graphical Model. As the estimated graph topology is piece-wise constant, this is useful only when we are interested in detecting jump points and abrupt changes, and it leads to an excessive number of false positives for other, gradual CPs. Another CPD method (Roy et al., 2017) looks for dependencies between spatial or temporal variables using a high-dimensional Markov random-field model. This method relies on two assumptions: a known covariance structure and stationarity of the data. These are strong assumptions that might not hold in many real-world applications.
Method
Problem Formulation Consider a multivariate time-series sample X ∈ R d×T to be a sequence of random variables [X 1 , X 2 , ..., X T ] with d indicating the number of features in X. T represents the total number of measurements over time. To identify change points in time steps of a data sample X, a score S[t], ∀t ∈ [T ] is estimated for each time step that measures the amount of change in the underlying generative distribution of the data.
Our CPD Algorithm -TiVaCPD In this section, we introduce our CPD algorithm called Time Variable Change Point Detection (TiVaCPD). The score S generated by TiVaCPD is composed of two components: 1) a score that measures change in correlation-CovScore (Algorithm 1), 2) a score that measures change in the distribution-DistScore (Algorithm 2). In the rest of the section we introduce each component separately, explain how each results in a unifying score that captures a variety of CP types, and demonstrate how to interpret the score to better understand the CPs.
Detecting changes in the correlation structure (CovScore)
A CP can be caused by a change in the correlation between features. This results in a change in the covariance of the joint distribution that can be identified as a state change in the feature network. The evolving dynamics of features can be modeled using graphical models, i.e. at every time point, the interactions of features can be modeled as a graph network, with nodes and links corresponding to each feature and correlation between sets of variables, respectively ( Figure 2a). However, in a time-series setting, estimating the graphical network at every time step is computationally challenging.
To estimate these networks and detect CPs, we use Time-Varying Graphical Lasso (TVGL) (Hallac et al., 2017), an efficient algorithm for estimating the inverse covariance matrix (precision matrix) of multivariate time-series with time-varying structures. TVGL infers the structure of graph networks by estimating a sparse time-varying inverse covariance matrix, P_t = Σ_t^{-1} (Σ_t indicating the covariance matrix at time t), of the variables for all t ∈ [T]. TVGL extends the graphical lasso problem to dynamic networks by allowing the covariance Σ to vary over time, taking into account how the relationships between signals evolve. The TVGL method enforces sparsity in the precision matrix for lower computational cost and easier interpretability of the graphical network structure. A scalable message-passing algorithm, the Alternating Direction Method of Multipliers (Banerjee et al., 2008), is employed to estimate the sparse inverse covariance. Additional details on the TVGL method can be found in the Supplementary Material.

Figure 2: TiVaCPD method breakdown. (b) Dynamic window: The expanding window (gray) and fixed-size future observation window (blue) enlarge to include more samples from the generative distribution as the algorithm proceeds. Once a CP is detected at t, the window size reverts to its initial size.
Learning the inverse covariance estimates from windows of data reveals the underlying evolutionary patterns present in the time series. We employ the TVGL technique to identify points of change in feature interactions between adjacent local windows of time. To promote the identification of shifts in the covariance pattern of features, we integrated an L2-norm penalty function into the estimation of the matrix inverse. Moreover, to ensure the invertibility of Σ_t, we applied feature standardization and removed highly correlated features before feeding them to TVGL. The partial correlation of two features i and j can be estimated from the joint precision matrix entries as -P_t,ij / sqrt(P_t,ii P_t,jj), which means that contrasting consecutive precision matrix entries over time quantifies the change in the features' correlation. By taking the difference between the absolute values of adjacent precision matrices, we quantify these changes. A value close to 0 in this matrix indicates nearly identical estimations of the network and therefore no CP in that feature interaction (see Algorithm 1). A negative value indicates an increase in absolute correlation and a positive value indicates a decrease. Algorithm 1 (Estimating CovScore) summarizes this procedure: given the multivariate time series X, features with a correlation higher than 0.95 are removed, the sparse inverse covariances P = TVGL(X) are estimated for all t ∈ [1, ..., T], and the differences between the absolute values of adjacent precision matrices yield the CovScore.
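A minimal sketch of this idea is shown below. It stands in for the TVGL estimates with scikit-learn's GraphicalLasso fitted on adjacent sliding windows; the window length, regularization strength, and the reduction to a scalar score are illustrative choices rather than the exact settings used by TiVaCPD.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def cov_score(X, w=20, alpha=0.1):
    """Change-in-correlation score: entry-wise difference of the absolute
    precision matrices estimated on adjacent windows (a sliding-window
    stand-in for the time-varying TVGL estimates). X has shape (d, T)
    and its features are assumed standardized."""
    d, T = X.shape
    score = np.zeros(T)
    for t in range(w, T - w):
        past = X[:, t - w:t].T      # (w, d) window before t
        future = X[:, t:t + w].T    # (w, d) window after t
        p_past = GraphicalLasso(alpha=alpha).fit(past).precision_
        p_future = GraphicalLasso(alpha=alpha).fit(future).precision_
        diff = np.abs(p_past) - np.abs(p_future)  # sign gives direction of change
        score[t] = np.abs(diff).sum()             # collapse to one scalar per time step
    return score
```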
Detecting shift in distribution (DistScore)
Assuming that in a time-series sample X each X_t is independently generated from a joint probability distribution p_t(·), a CP occurs at time t* if observations after t* are generated from a different distribution. To compare the probability distributions of adjacent windows, we employ a nonparametric two-sample testing procedure called MMD Aggregate (MMDAgg), introduced in Schrab et al. (2021). Let ∆- represent the initial size of the window of X before a query point t, and let this prior window be denoted by X^t_∆- = {X_(t-∆-+1), ..., X_(t-1), X_t}. Similarly, the window of future observations can be denoted by X^t_∆+ = {X_(t+1), ..., X_(t+∆+-1), X_(t+∆+)}, where ∆+ represents the length of the future window. Kernel-based MMD tests serve as a measure of discrepancy between two probability distributions. With a statistical test threshold α, if the null hypothesis H_0 is rejected, the time series may be partitioned by a CP at t*, signifying that measurements in the window X_(t*-∆-:t*) come from a different distribution than measurements in X_(t*:t*+∆+). The performance of a single kernel-based MMD test typically depends on the choice of kernel and kernel bandwidth. Since we compare adjacent windows with a restricted number of samples, any loss of data to kernel bandwidth selection can be detrimental to our method's performance. To overcome this problem, MMDAgg aggregates multiple MMD tests using different kernel bandwidths, ensuring maximized test power over the collection of kernels used and eliminating the need for data splitting or arbitrary kernel selection. The method considers a finite collection of bandwidths, and the aggregated test is defined as a test that rejects H_0 if one of the tests over a given bandwidth rejects H_0.
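The MMD statistic between the two windows can be computed directly; the sketch below uses a single Gaussian kernel with a fixed bandwidth for brevity, whereas MMDAgg aggregates tests over a collection of bandwidths.

```python
import numpy as np

def rbf_kernel(A, B, bandwidth):
    """Gaussian kernel matrix between the rows of A (n, d) and B (m, d)."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * bandwidth ** 2))

def mmd2(X_prev, X_next, bandwidth=1.0):
    """Biased estimate of MMD^2 between the prior window X_prev and the
    future window X_next (rows are time steps, columns are features)."""
    k_xx = rbf_kernel(X_prev, X_prev, bandwidth).mean()
    k_yy = rbf_kernel(X_next, X_next, bandwidth).mean()
    k_xy = rbf_kernel(X_prev, X_next, bandwidth).mean()
    return k_xx + k_yy - 2.0 * k_xy
```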
We propose to dynamically establish the window size based on the presence of CPs (Algorithm 2). Let ∆- represent the size of the dynamic window of data points from the last estimated CP, t̂, up until the current time point t. Starting with a constant ∆+ and a small ∆- window, the length of the running window increases with each new observation until a new CP occurs according to the MMD test. If a significant change in distribution is not detected by the MMD test, i.e. the MMD score is smaller than a pre-defined threshold ϵ, the two sub-sequences are combined and compared against the next sub-sequence in the series. This process is also illustrated in Figure 2b. Our dynamic windowing method eliminates the need for repetitive fixed-window comparisons and utilizes a growing sample set for the MMD test.
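The dynamic windowing logic can be sketched as below, reusing the mmd2 helper from the previous sketch; the initial window sizes and the threshold value are illustrative choices, not the paper's tuned settings.

```python
def dist_score(X, delta_plus=10, eps=0.05, bandwidth=1.0):
    """DistScore with a growing prior window: the prior window expands from
    the last detected CP until the MMD against the fixed-size future window
    exceeds eps, at which point the prior window is reset (see Figure 2b)."""
    d, T = X.shape
    score = [0.0] * T
    last_cp = 0
    for t in range(1, T - delta_plus):
        prev = X[:, last_cp:t + 1].T                 # growing prior window
        future = X[:, t + 1:t + 1 + delta_plus].T    # fixed-size future window
        score[t] = mmd2(prev, future, bandwidth)
        if score[t] > eps:     # distribution shift detected
            last_cp = t        # reset the prior window at the new CP
    return score
```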
For determining the final CP score, we need to meaningfully ensemble the DistScore with the CovScore, which is challenging because the covariance score is bounded while MMD is a positive unbounded score. Hence, TiVaCPD incorporates kernel normalization into the MMDAgg algorithm. We use a generalization of cosine normalization (Ah-Pine, 2010) to normalize our kernels so as to obtain a similarity index. For a given kernel function K, K_z(x, y) represents the normalized kernel of order z. We use the generalized mean with exponent z = 1 (the arithmetic mean), which gives K_(z=1)(x, y) = K(x, y) / ((K(x, x) + K(y, y))/2). This normalization technique projects the objects from the feature space onto a unit hypersphere and guarantees |K_(z=1)(x, y)| ≤ 1.
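In code, the arithmetic-mean normalization reads as follows (a one-line sketch; K is any positive kernel function):

```python
def normalized_kernel(K, x, y):
    """Cosine-type normalization with the arithmetic mean (z = 1):
    K1(x, y) = K(x, y) / ((K(x, x) + K(y, y)) / 2), so that |K1(x, y)| <= 1."""
    return K(x, y) / (0.5 * (K(x, x) + K(y, y)))
```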
Ensemble Score and CP Categorization
Effective methods for combining unsupervised CPD scores are poorly studied. Here, we use an ensemble method that exploits the differences between scores to highlight the ones that contribute the most to representing the CPs. Algorithm 3 describes our ensemble technique for generating a unified score S by combining CovScore and DistScore. We use four score variants derived from CovScore and DistScore to generate dynamic dissimilarity weights W that effectively aggregate the scores into a unified one (as shown in Figure 1). The four variants are as follows: a) Normalize(CovScore) and b) Normalize(DistScore): CP scores standardized with z-score normalization to bring them onto the same scale; c) SG(CovScore): since CovScore is sensitive to small distributional changes that can lead to false change point detection, we mitigate the risk of detecting spurious CPs by applying a Savitzky-Golay (SG) smoothing filter, a widely used method for smoothing patterns and reducing noise in time-series data (Press & Teukolsky, 1990); d) SG_combined: we also apply the filter to the sum of the filtered CovScore and DistScore, which effectively reduces noise and improves CPD performance. The ensemble approach uses a weighted average of these four scores. The importance weight W is determined by computing the mean absolute difference between scores; by assigning greater weight to scores that are more dissimilar, we can effectively identify CPs that may have been missed by other scoring methods. The weights are calculated for each 21-time-point window, as the distribution of the scores' importance may vary over time. To locate the exact time of the CPs, we look for peaks in the ensemble score by searching for local maxima, with a threshold used to remove false-positive CPs created by noise. Using a threshold for peak detection is common practice and requires careful tuning to the circumstances.
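A compact sketch of the ensemble step is given below; the Savitzky-Golay parameters, the peak-detection threshold, and the exact weighting rule are simplifications of Algorithm 3 rather than a verbatim implementation.

```python
import numpy as np
from scipy.signal import savgol_filter, find_peaks

def ensemble_score(cov_score, dist_score, window=21, height=0.5):
    """Combine the four score variants with dissimilarity weights and
    locate CPs as peaks above a threshold."""
    z = lambda s: (s - s.mean()) / (s.std() + 1e-8)         # z-score normalization
    variants = np.vstack([
        z(cov_score),                                        # a) normalized CovScore
        z(dist_score),                                       # b) normalized DistScore
        savgol_filter(z(cov_score), window, polyorder=3),    # c) smoothed CovScore
        savgol_filter(z(cov_score) + z(dist_score), window, polyorder=3),  # d) combined
    ])
    # weight each variant by its mean absolute difference from the other variants
    w = np.array([np.abs(v - variants).mean() for v in variants])
    w /= w.sum()
    S = w @ variants
    cps, _ = find_peaks(S, height=height)   # threshold removes noisy false peaks
    return S, cps
```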
Understanding the change points and interpreting TiVaCPD score
TiVaCPD offers valuable insights into the underlying nature of the observed change points. Detecting both changes in correlation and changes in data distribution, such as an inverse correlation between heart rate variability and heart rate, is crucial in clinical practice, as it can indicate an adverse health event. The DistScore detects changes in the underlying distribution of the time series, while the CovScore detects changes in the correlation structure of feature pairs, providing a detailed analysis of the feature dynamics at each time step. This information allows for the categorization of CPs, thereby enhancing interpretability, as shown in Figure 3. This figure presents a multivariate time-series sample showcasing TiVaCPD results, featuring CovScore, DistScore, and the weighted ensemble score. In addition, the CovScore heatmap illustrates the feature pairs that caused a CP, and TiVaCPD identifies the direction of the correlation change at each CP. It also includes a comparative analysis with other CPD methods, such as KLCPD, Roerich, GraphTime, and TIRE, providing a comprehensive evaluation of the TiVaCPD results against alternative CPD methods. The first two CPs (CP_1 and CP_2) are caused by changes in the correlation between features 0 and 1, where CP_1 corresponds to a negative change in correlation and CP_2 to a positive change. CP_3 is caused by changes in the mean of features 0 and 1, as the DistScore is high. CP_4 is caused by changes in a combination of variance, correlation, and mean, which are simulated to resemble real-world data.
Datasets and Hyper-parameter Settings
We demonstrate the performance of our method compared to multiple baselines on four simulated and two real-life multivariate datasets commonly used in the CPD literature. Different simulations test the functionality of CPD methods on a variety of potential CP scenarios. All hyper-parameters are determined based on random search over 10% of the datasets (more details on best parameters and sensitivity to hyperparameter change are provided in the supplementary material).
Simulated Data: We created four different datasets to simulate different types of CPs. In all datasets, each time-series sample X ∈ R^(d×T) consists of d = 3 features, and each X_t is sampled independently from a Gaussian distribution x_(i,t) ∼ N(µ_(i,t), σ²_(i,t)).
• Jumping Mean: For this dataset, the variance is assumed to be constant over time and across all features and is set to σ² = 0.5. The ground-truth CPs correspond to abrupt jumps in the mean that can happen independently in any of the features.
• Changing Variance: In this dataset, all three features are generated with constant mean µ = 1, but their distribution variance changes over time. CPs are indicated as time points with changes in σ².
• Changing Correlation: This dataset consists of a multivariate time series generated with constant σ² and µ. To introduce correlation changes between variables, feature 2 is modified as a function of feature 1 and a time-varying coefficient ρ_t (a toy construction is sketched after this list), where ρ_t controls the correlation between the two features, can vary over time, and is randomly sampled from [-1, 1]. Here, the ground-truth CPs correspond to points in time where the correlation ρ_t changes.
• Arbitrary CPs: This dataset consists of a multivariate time series with CPs due to varying µ, σ², or correlations between pairs of variables, resulting in a mixture of CPs scattered over time.
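The exact rule used to modify feature 2 is not spelled out here, so the sketch below uses one common construction, mixing feature 1 into feature 2 with a segment-wise coefficient ρ_t, purely for illustration.

```python
import numpy as np

def simulate_changing_correlation(T=500, n_segments=5, seed=0):
    """Toy data with piecewise-constant correlation between features 1 and 2.
    The mixing rule x2 = rho*x1 + sqrt(1 - rho^2)*noise is an illustrative
    choice, not necessarily the construction used for the original dataset."""
    rng = np.random.default_rng(seed)
    bounds = np.linspace(0, T, n_segments + 1, dtype=int)
    x0, x1 = rng.normal(size=T), rng.normal(size=T)
    x2 = np.empty(T)
    cps = []
    for a, b in zip(bounds[:-1], bounds[1:]):
        rho = rng.uniform(-1, 1)  # segment-wise correlation coefficient
        x2[a:b] = rho * x1[a:b] + np.sqrt(1 - rho ** 2) * rng.normal(size=b - a)
        if a > 0:
            cps.append(a)          # ground-truth CP at each segment boundary
    return np.vstack([x0, x1, x2]), cps
```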
Real-world Data:
• Bee dance (Min Oh et al., 2008): This dataset consists of six three-dimensional time series of bees' positions while performing three-stage waggle dances. The bees communicate through actions such as left/right turns and waggles; the transitions between the stages represent the ground-truth CPs.
• The second real-world dataset provides recordings from 3-axial linear acceleration and angular velocity sensors, for a total of 6 features. The ground-truth CPs are labeled as the transitions between activities.
Baseline Methods
We compare the performance of TiVaCPD with SOTA CPD methods on various types of CPs. The selected SOTA approaches include those that measure a change in distribution and those that focus on the graphical structure of features over time. In addition, we conducted an ablation study on the components of the TiVaCPD score to analyse the impact of each component.
Kernel Change Point Detection (KLCPD) (Chang et al., 2019) is a kernel learning framework for CPD that uses a two-sample test and optimizes a lower bound of test power via an auxiliary generative model. For this method, we used window sizes w ∈ [10, 25] for all experiments and trained the model for 25 epochs, unless more training led to improved results. Consistent with our own post-processing steps, we performed peak detection to detect the exact time of change.
Roerich (Hushchyn & Ustyuzhanin, 2021) is a CPD method based on direct density ratio estimation to detect the change in distribution. We set all parameters to default and use window sizes w ∈ [10, 25] for all experiments.
Group Fused Graph Lasso (GraphTime) (Gibberd & Nelson, 2015) is a time-varying graphical model based on the group fused lasso. Similar to TiVaCPD, it uses a graphical model to capture the dependencies of variables in a time series; GraphTime models the temporal dependencies between variables, while TiVaCPD models the pairwise dependencies to identify CPs.

Following the evaluation protocol of prior work (Deldari et al., 2021b), given a user-defined margin of error, M > 0, an estimated CP is a True Positive (TP) if the distance between the ground truth (t*) and the estimated CP (t̂) is smaller than the margin, i.e. |t* - t̂| ≤ M. As explained in Figure 4, if an estimated CP falls outside the margin, it is counted as a false positive.

Our method, TiVaCPD, effectively detects various types of CPs in simulated datasets by using different components of the score to capture specific CP types. The CovScore component excels at detecting CPs caused by changes in correlation, the DistScore component detects CPs caused by changes in distribution, and the ensemble score combines the strengths of both components for improved overall performance. Additionally, each component of the score individually outperforms baseline methods, indicating that the performance boost is not just the result of ensembling the scores: each component on its own does a better job at detecting the relevant CPs compared to baseline methods. Roerich performs well in detecting simple CPs but fails to address more complex CPs in both synthetic and real-world data; our method is the only one that performs well in detecting complex CPs. The performance results on the real-life datasets are reported in Table 3 for all baselines with a margin value of 5. We show that our method outperforms all baselines in detecting the exact time of CPs in real-world and simulated datasets. Figure 3 shows a graphical representation and comparison of the different CPD methods for a time-series sample (row 1), and demonstrates how the different methods generate scores for CPs. The different components of TiVaCPD are shown in rows 2 and 3 and show which category of CPs they identify. The heatmaps of CovScore can be used to identify the pair of features in which the change in correlation occurred, and also in which direction the change was.
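A minimal version of this margin-based matching, as described above, might look as follows (the greedy matching of predictions to ground-truth CPs is an implementation choice):

```python
def f1_with_margin(true_cps, pred_cps, margin=5):
    """Count a predicted CP as a true positive if it lies within `margin`
    time steps of a still-unmatched ground-truth CP; remaining predictions
    are false positives and unmatched ground-truth CPs are false negatives."""
    unmatched = list(true_cps)
    tp = 0
    for p in pred_cps:
        hit = next((t for t in unmatched if abs(t - p) <= margin), None)
        if hit is not None:
            tp += 1
            unmatched.remove(hit)
    fp = len(pred_cps) - tp
    fn = len(unmatched)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
```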
Discussion
In this paper, we introduce TiVaCPD, a novel CPD method for detecting and characterizing various types of CPs in time-series data. By capturing changes in feature distribution, dynamics, and correlation networks, TiVaCPD provides valuable insights into the underlying causes of CPs, enhancing interpretability for end users. This is particularly crucial in domains like healthcare, where the type of CP significantly influences downstream decision making. The method is currently designed for offline settings to retrospectively detect changes. For future work, we intend to extend TiVaCPD to the online setting, where real-time measurements are acquired. Moreover, we plan to incorporate techniques for imputing missing data by leveraging correlated features and temporal dynamics.
A study on improving the current density performances of CO2 electrolysers
Electrochemical CO2 reduction reaction (CO2RR) technology can reduce CO2 emissions by converting excess electrical energy into high-value-added chemicals; however, the performance of the electrolyser cell still needs further improvement. In this work, extensive factors were explored in continuous CO2 electrolysers. Gold, one of the benchmark materials for CO2RR to produce CO, was used as the catalyst. Electrolyser configuration and membrane type have significant influences on cell performance. A compact MEA-constructed gas-phase electrolyser showed better catalytic performance and lower energy consumption. The gas diffusion electrode with a 7:1 mass ratio of total catalyst to polytetrafluoroethylene (PTFE) ionomer exhibited the best performance. At a low total cell voltage of 2.2 V, the partial current density of CO production reached 196.8 mA cm−2, with 90.6% current efficiency and 60.4% energy efficiency for CO production. Higher CO selectivity can be achieved using anion exchange membranes, while higher selectivity for hydrogen and formate products is obtained with cation exchange membranes. This research points out a way to improve CO2RR catalytic performance in flow cells, leaving aside the characteristics of the catalyst itself.
In this paper, extensive attempts have been made to obtain larger current densities at lower cell potentials. First, two electrolyser structures and two different cathode feeding methods were tested and compared. The zero-gap MEA structure with low resistance was preferred, and humidified gas-phase CO2 was fed directly into the cathode to alleviate the mass-transfer limitation caused by the low solubility of carbon dioxide in aqueous solution. Then, through tests and comparison of Nafion and PTFE binders, it was found that adding a small amount of PTFE can improve the hydrophobicity of the GDE and yield higher selectivity for CO production. Besides, the ion transfer mechanism of the anion exchange membrane was more advantageous for the catalytic reduction of CO2 to CO.
Results and discussion
Influence of the electrolyser structures and the cathode feeding method. CO 2 electrolysers with two different structures were adopted in this paper, as shown in Fig. 1. Both adopted a common membrane electrode assembly (MEA) configuration 7,21 , except that a liquid buffer layer was added at the cathode side of the second one. Two different cathode feeding methods were used in the MEA configuration (Fig. 1a). For the second one (Fig. 1b), there was a 2 mm thick liquid buffer layer between the membrane and cathode, CO 2 gas (dry) diffused from the back of the gas diffusion electrode to the catalyst surface.
The cell performance of the CO2 electrolyser under three different conditions (Figure S3) is shown in Fig. 2. As shown in Fig. 2a, the total current densities obtained with humidified CO2 and with the liquid buffer layer were almost linearly related to the cell potential (resistance polarization control), and the resistances were 5.6 Ω and 16.6 Ω, respectively. The total current density (j_total) obtained with CO2-saturated KHCO3 increased rapidly when the cell potential exceeded 2.2 V, reaching the highest value of 340.9 mA cm−2 at 2.6 V. Combined with the product detection results (Fig. 2b), the current increase in this case mainly came from the hydrogen evolution side reaction. The main products were CO and H2; the current efficiency of H2 formation is not indicated in Fig. 2b.
The different cathode feeding methods affected the available CO2 concentration at the catalyst surface. As shown in Fig. 2b, when the amount of available reactant gas CO2 was sufficient (humidified-CO2 feeding method), a current efficiency for CO production (COCE) of more than 80% was obtained between 1.8 V and 2.4 V, and the COCE gradually decreased from 2.2 V. Due to the low solubility of carbon dioxide in aqueous solution, there was much less reactant gas available to produce CO with the CO2-saturated KHCO3 feeding method. Therefore, the total reduction current density obtained between 1.6 and 2.2 V was much lower than that of the gas-phase feeding method. When the cell voltage was raised to 2.4 V and 2.6 V, the H2 evolution selectivity increased and the total current density increased rapidly. The j_CO followed the order (see Fig. 2c): humidified CO2 > with diffusion layer > CO2-saturated KHCO3. The corresponding j_CO values were 128.4 mA cm−2, 54.1 mA cm−2 and 0.4 mA cm−2 at 2.6 V cell potential, respectively.
By adding a thin liquid pH buffer layer, a triple-phase boundary can be formed. The gas-phase CO2 molecules can diffuse quickly to the surface of the catalyst (compared with diffusion in the liquid phase); in this way the CO2RR catalytic selectivity can be improved and the hydrogen evolution reaction can be partially suppressed 22. According to Weekes et al. 23, mass-transfer limitations can be alleviated by using a gas-phase stream, thereby increasing the current density. This can explain why the j_CO obtained with CO2-saturated KHCO3 was the lowest (less than 4 mA cm−2), as the mass transfer of CO2 molecules under this condition was the worst 24. When the cell potential was between 1.6 and 2.2 V, the energy efficiency for producing CO (COEE) obtained with humidified CO2 remained above 60% (see Fig. 2d). As shown in Fig. 3a, the overall resistance between the two electrodes (R_s) increased significantly (from 1.06 to 8.24 Ω) after adding a liquid buffer layer; this large resistance explains the decrease in current density and also implies increased energy consumption in industrial applications. The R_s of the electrolyte feeding mode was smaller than that of the gas-phase mode. That is, under the same total cell voltage, the actual potential applied to the cathode with the electrolyte feeding mode was slightly higher than with the gas-phase feeding mode, which also has some influence on product selectivity. The buffer layer needs to be extremely thin for better application 8. As shown in Fig. 3b, after iR compensation, the best performance was still obtained with the MEA structure and the humidified-CO2 feeding method, so this mode was adopted in the following research.
Besides, small amounts of hydrocarbons (CH4, C2H4 and C2H6) were detected when using CO2-saturated KHCO3 as the catholyte. As shown in Figure S4, the current efficiency for CH4 production was 2.3% at 2.4 V. In contrast, the CE of hydrocarbon products was negligible (less than 0.05%) in the tests under the other two conditions.

Binder types and contents. Cell performance. Two commonly used binders were used to prepare gas diffusion electrodes (GDEs). The morphologies of the GDEs prepared with different Nafion contents are shown in Fig. 4. The catalyst layer was uniformly distributed on the surface of the gas diffusion electrode using the airbrush method. There was no obvious difference in the morphologies of the electrodes prepared with different Nafion contents.
As shown in Fig. 5a,c, j_total and j_CO were both in the order 10:1 > 7:1 > 5:1 > 3:1. That is, the more Nafion binder added, the lower the current density. There was no obvious difference between the four samples in Fig. 5b, except for the much lower COCE of the 3:1 sample at high cell potentials (2.4 V and 2.6 V). The range of the four binder ratios was narrow, and the inherent properties of the Au/CN catalyst had a greater impact on catalytic performance; the COCE of the four samples followed the same trend. The j_CO of the 3:1 sample (the highest amount of Nafion added) was much lower than that of the other three samples; especially at 2.6 V, the j_CO was only 165.7 mA cm−2, while that of the 10:1 sample was 259.5 mA cm−2 (a difference of 93.8 mA cm−2). This may be ascribed to Nafion being a hydrophilic resin with no hydrophobic gas-phase channels. Especially in high current density regions, adding too much Nafion may make it difficult for the reactant gas CO2 to be transported to the catalyst surface 25. The COEE results are shown in Fig. 5d, and the highest energy efficiency (approximately 70%) was achieved at 1.8 V. When the mass ratio of total catalyst to Nafion was 7:1, the j_CO reached 116.0 mA cm−2 at 2.0 V cell potential, and the COCE and COEE were 90.6% and 66.4%, respectively.
Similarly, four GDEs with different PTFE binder contents were prepared. The morphologies of the GDEs are shown in Figure S5, and there was no obvious difference in appearance. As shown in Fig. 6a, there was not much difference in the total current density of the four electrodes; the total current density of the 3:1 sample was the lowest, similar to the results obtained with the Nafion binder. As the cell potential increased, the COCE of the four electrodes increased first and then decreased, reaching a maximum value (~90%) at 2.0 V (see Fig. 6b). With increasing PTFE content, the COCE of the four electrodes also first increased and then decreased, reaching the maximum value when the mass ratio of total catalyst to PTFE was 7:1, except that at 2.6 V the COCE of the 3:1 sample was slightly higher than that of the 5:1 sample. When the cell voltage was between 1.6 and 2.4 V, the j_CO of the 3:1 sample was lower than that of the 5:1 sample, but at 2.6 V the j_CO of the 3:1 sample was higher than that of the 5:1 sample (see Fig. 6c). The j_CO of the 7:1 sample reached 122.7 mA cm−2 at 2.0 V cell potential, with 93.7% COCE and 68.7% COEE. As shown in Fig. 6d, the maximum energy efficiency (~70%) was reached at 1.8 V, consistent with the results obtained with the Nafion ionomer (see Fig. 5d).
Electrochemical and hydrophilicity characterization. Too much polymeric binder may have a cladding effect on the gold particles, which would reduce the active surface area of the gold catalyst. If an Au particle is completely covered by ionomer, or a C particle loaded with Au particles is covered by ionomer, it is difficult to conduct electrons with the surrounding C particles. Under such conditions these Au particles cannot take part in electrochemical reactions, and their surface areas cannot be measured from the CV curves 26. As shown in Figure S8, the CVs of the electrodes prepared with different binder contents coincide, and the calculated active surface area of gold was about 3.6 cm2 (see Table S1). The slight differences in electrochemical surface area (ECSA) may come from weighing or pipetting errors. The addition ratios in this research either did not have a coating effect on the surface of the gold particles or they are at the same cladding level. An addition of 20% to 35% Nafion polymeric binder is generally considered suitable in water electrolysers and fuel cells 27,28. The highest ratio (3:1) used in this study falls within this optimal range. Therefore, it can be considered that none of these four binder ratios hindered the utilization of the gold catalysts. The Tafel plots are shown in Figure S9, and the corresponding Tafel slopes and exchange current densities (i_0) are given in Table S1. For the Nafion binder, the Tafel slope of the 3:1 sample was significantly higher than the others, while the Tafel slope of the 7:1 sample was the minimum. For the PTFE ionomer, the Tafel slopes followed the order, with slight differences, 10:1 < 7:1 < 5:1 < 3:1. A lower Tafel slope indicates a faster first-electron transfer step 29, while a higher i_0 indicates easier electrode polarization 30. The electrochemical reduction of CO2 to CO on gold under neutral to alkaline pH can be written as a sequence of elementary steps [31][32][33], where * denotes an adsorption site. If Eq. (1) or (3) were the rate-determining step, the Tafel slope should be 116 or 39 mV dec−1, respectively 34. The Tafel slope for all electrodes was between 143 and 187 mV dec−1, so the rate-determining step was closer to Eq. (1).
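The elementary steps referred to as Eqs. (1)-(3) are not reproduced in this excerpt; for orientation, a commonly cited mechanism for CO2 reduction to CO on gold in neutral-to-alkaline media, consistent with the quoted Tafel slopes, is shown below (an assumption about the intended steps, not necessarily the authors' exact notation):

```latex
\begin{align}
\mathrm{CO_2(g)} + {*} + e^- &\rightarrow {*}\mathrm{CO_2^{\cdot-}} \tag{1}\\
{*}\mathrm{CO_2^{\cdot-}} + \mathrm{H_2O(l)} &\rightarrow {*}\mathrm{COOH} + \mathrm{OH^-} \tag{2}\\
{*}\mathrm{COOH} + e^- &\rightarrow {*}\mathrm{CO} + \mathrm{OH^-} \tag{3}
\end{align}
```

On this reading, a rate-determining first electron transfer (step 1) gives a Tafel slope of b = 2.303RT/(αF) ≈ 118 mV dec−1 at 25 °C for α ≈ 0.5, while a rate-determining second electron transfer (step 3) gives b = 2.303RT/((1+α)F) ≈ 39 mV dec−1, in line with the 116 and 39 mV dec−1 values quoted above.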
The static contact angles of water on the gas diffusion electrodes are shown in Fig. 7, and the corresponding angle values are given in Table S1. For both Nafion and PTFE binders, the hydrophobicity of the electrode decreased as the binder content increased. GDEs prepared with the PTFE binder were more hydrophobic than those prepared with the Nafion binder. The water and gas distribution management of the GDE is important; it must transport gas, discharge liquid and conduct electrons 35. Moderate hydrophobicity can improve the management of gas and liquid distribution and decrease the possibility of electrode flooding. Nafion can conduct protons but not electrons, and the increased local proton concentration may promote the hydrogen evolution reaction. PTFE can conduct neither electrons nor protons, but it can form hydrophobic pores, which is beneficial to water and gas distribution management 16. However, adding a large amount of PTFE will increase the internal resistance of the electrode, so there should be an optimum in the balance between conductivity and hydrophobicity. In this study, the best performance was obtained with the 7:1 mass ratio of total catalyst to PTFE. As shown in Table 1, the CO2 electrolysers in this research exhibited excellent cell performance. RT means room temperature, and if no reference electrode was specified, then the potential signifies the total cell voltage. The mass ratio of catalyst to binder was estimated from the parameters given in each study. In this work, higher partial current density, current efficiency, and energy efficiency for producing CO were obtained at a lower cell potential, and no heating source was required. The partial current densities for CO production obtained at 2.0 V, 2.2 V and 2.4 V were 122.7 mA cm−2, 196.8 mA cm−2 and 247.7 mA cm−2, respectively. At a total cell potential as low as 2.2 V, a high mass activity for CO production of 985 A/g Au was achieved at room temperature.
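The quoted mass activity can be checked against the catalyst loading given in the Methods (1.0 mg cm−2 total catalyst at 20 wt% Au); the short sketch below reproduces the figure using only numbers taken from this paper:

```python
j_co = 196.8e-3          # A cm^-2, partial CO current density at 2.2 V (this work)
loading_total = 1.0e-3   # g cm^-2, total catalyst loading (Methods)
au_fraction = 0.20       # Au mass fraction of the Au/CN catalyst (Methods)

mass_activity = j_co / (loading_total * au_fraction)
print(f"{mass_activity:.0f} A/g_Au")  # ~984 A/g_Au, matching the quoted 985 A/g_Au
```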
The influence of membranes. The CO2RR performances with different membranes are shown in Fig. 8; the j_total values were in the order G60 > FAA50 > N115 > PK75 (see Fig. 8a). The polarization curves of the FAA50 and G60 membranes conform to ohmic polarization, and the calculated resistances are 5.6 Ω and 2.8 Ω, respectively. As shown in Fig. 8b,c, the N115 membrane exhibited the poorest catalytic performance for converting CO2 to CO. Between 1.6 and 2.6 V, the COCE and j_CO obtained with the N115 membrane were less than 40% and 10 mA cm−2, respectively, and the COEE obtained with the N115 membrane was the lowest among the four membranes (see Fig. 8d). This observation is consistent with Kutz et al. 15, in that the anion exchange membrane (AEM) exhibited better catalytic performance in the conversion of CO2 to CO. With an AEM, H+ is not transported to the cathode, so the hydrogen evolution reaction is suppressed 9. Among the three AEMs, the G60 membrane exhibited excellent catalytic performance. At the same cell potential (from 1.8 to 2.6 V), the total current density obtained with the G60 membrane was almost twice that of the FAA50 membrane. Using the G60 membrane, the j_CO reached 149.6 mA cm−2 at 2.0 V, with high current efficiency (95.0%) and energy efficiency (69.7%) in the conversion of CO2 to CO. This result is not surprising, as the G60 membrane was developed especially for CO2 electrolysis 15 and has been commercialized and adopted by more and more researchers [38][39][40].
Combined with the EIS results in Figure S10, the total current density obtained with a smaller R_s was higher, the same trend as in the previous tests with different configurations. According to the corresponding technical datasheets, the area resistance of the PK75 membrane (1.2-2.0 Ω cm2 in the Cl− form) is larger than that of the FAA50 membrane (0.6-1.5 Ω cm2 in the Cl− form), and the FAA50 membrane (45-55 µm) is thinner than the PK75 membrane (70-80 µm). This may explain why the R_s values measured with the two membranes differed by nearly a factor of four. The best-performing G60 membrane is not only thin (50 µm) but also has the lowest average area resistance (0.045 Ω cm2) under alkaline conditions. Besides, the hydrogen evolution reaction was more likely to occur when using the N115 membrane, as was formate production. As shown in Figures S13 and S14, the current efficiency of formate obtained with the N115 membrane was 23.6% at 2.2 V. No formate accumulation was observed when using anion exchange membranes. As shown in Figure S15, the formate concentration decreased with sample collection and deionized water replenishment.
Discussion
In this work, beyond the properties of the catalyst, we extensively explored the influence of many other factors on the selectivity of CO production in continuous CO2 electrolysers. Compared with an H-cell of similar structure and operation mode, adding a thin liquid buffer layer between the cathode and the membrane can improve the catalytic performance by promoting the diffusion of CO2 gas to the catalyst surface. A compact gas-phase MEA-structure CO2 electrolyser was preferred, as it has a lower R_s and excellent CO2 gas mass transfer. PTFE was more suitable than Nafion as a binder for CO2RR GDE preparation. When the mass ratio of total catalyst to PTFE was 7:1, the total current density reached 131.0 mA cm−2 at a low cell potential of 2.0 V, and the current efficiency and energy efficiency of CO production were 93.72% and 68.7%, respectively. Through the tests of different ion exchange membranes, anion exchange membranes proved more suitable for converting CO2 to CO.

Characterizations. The morphologies and the phase identification of the Au/CN catalysts were examined by transmission electron microscopy (TEM, Hitachi HT7700) and X-ray diffraction (Haoyuan, DX-27mini), respectively. X-ray photoelectron spectroscopy (Thermo ESCALAB 250XI) measurements were also performed.
The actual mass ratio of gold in the catalyst was determined by an inductively coupled plasma emission spectrometer (ICP, Optima8300DV). The morphologies of the spray-prepared cathode were analyzed by scanning electron microscopy (SEM, Hitachi TM3030). The hydrophobic and hydrophilic performance of the prepared electrodes were characterized by static contact angles (Dataphysics, JY-82B Kruss DSA100).
Catalysts and gas diffusion electrodes preparation. The Au nanoparticles supported on N-doped carbon (Au/CN) were synthesized by following our previous report 7. The gold mass ratio of the Au/CN catalysts was 20 wt% and the particle size of the gold nanoparticles was mainly distributed around 2 nm. Hydrophobic carbon paper (Toray, TGP-H-60) was used as the substrate, and no microporous layer was constructed. The Au/CN catalyst was dispersed in a solvent comprised of 1:1 (volume ratio) ethanol and water. After 20-min ultrasonication, a 4 mg mL−1 ink (calculated based on the total mass of catalyst) was obtained. For experiments on different cathode feeding methods and electrolyser structures, Nafion dispersion was used as the binder, and the mass ratio of total catalyst to binder was 3:1. To investigate the influence of the binder on CO2RR performance, different amounts of Nafion or PTFE dispersion (mass ratios of total catalyst to binder of 3:1, 5:1, 7:1 and 10:1, respectively) were added. The total catalyst loading of the gas diffusion electrode was 1.0 ± 0.1 mg cm−2.
The as-prepared electrodes were sintered for one hour before use, at 130 °C for the Nafion binder and 330 °C for the PTFE binder. Nickel foam (1 × 1 cm2) was used as the anode catalyst to facilitate oxygen evolution. For the three-electrode tests, 6.3 μL of ink was dropped onto a glassy carbon electrode with a diameter of 4 mm and dried at room temperature, giving a mass loading of 0.2 mg cm−2 (calculated based on the total mass of catalyst).
Full-cell tests.
For the MEA structure, humidified CO2 or CO2-saturated 0.5 M KHCO3 was introduced into the cathode chamber, while 2 M KOH was circulated in the anode chamber. Under the three operation conditions (Figure S3), the flow rates of gas and liquid were set to 15 sccm and 30 mL min−1, respectively. Four different polymer electrolyte membranes, Fumasep FAA-3-PK-75, Nafion 115, Fumasep FAA-3-50, and Sustainion X37-50 Grade 60, were used in this research. For ease of illustration, they are referred to as PK75, N115, FAA50, and G60, respectively. Except for the membrane-related tests, the Fumasep FAA-3-50 membrane was used in all other tests. At each given cell voltage (1.6 V, 1.8 V, 2.0 V, 2.2 V, 2.4 V, and 2.6 V), a 20-min electrolytic test was performed using an electrochemical workstation (IVIUM, CompactStat.h A32718). The outlet gas was passed into deionized water to absorb liquid-phase products. The actual gas outlet flow rate was monitored by a mass flow meter (Sevenstar, D07), and 1 mL of the dried effluent gas was sampled automatically into a gas chromatograph (GC-2030) every ten minutes. At the end of electrolysis at each given cell voltage, 3 mL of the water absorption liquid was extracted and 3 mL of deionized water was replenished. The formate concentration was examined by UV spectrophotometer (Metash, UV-5800H). At least two freshly made parallel electrodes were tested for each sample. The electrochemical impedance spectroscopy (EIS) measurements were conducted from 100 kHz to 1 Hz with 10 mV amplitude at open circuit potential. There was no reference electrode included in the flow cell and no iR compensation was made unless otherwise specified. All the experiments were carried out at room temperature (25 °C) and ambient pressure.
Electrochemical measurements. For the three-electrode system tests, a Pt foil and a saturated calomel electrode (SCE) were used as the counter electrode and reference electrode, respectively. Linear sweep voltammetry (LSV) scans were performed at 1 mV s−1 in 0.5 M KHCO3 saturated with CO2 (pH 7.3). The ECSA of the Au catalyst was calculated from the reduction peak area measured in 0.1 M HClO4, with 390 µC cm−2 used as the reference charge value for Au 41.
The current efficiency (CE) of a specific product is defined as the ratio of the charge consumed in forming that product to the total charge consumed. The energy efficiency (EE) is defined as the ratio of the thermodynamic voltage to the practical cell voltage; the energy efficiency for CO (COEE) is this voltage ratio multiplied by the CO current efficiency 7.
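A sketch of how these definitions translate into numbers is given below; it assumes the two-electron CO2-to-CO reaction, and the equilibrium cell voltage used for the energy efficiency is an illustrative parameter rather than a value quoted in this paper.

```python
F = 96485.0  # C mol^-1, Faraday constant

def co_current_efficiency(n_co_mol, q_total_c, n_electrons=2):
    """CE: charge consumed forming CO (2 e- per CO molecule) over the total charge."""
    return n_electrons * F * n_co_mol / q_total_c

def co_energy_efficiency(ce_co, cell_voltage, e_equilibrium=1.34):
    """COEE: ratio of the thermodynamic (equilibrium) cell voltage to the applied
    cell voltage, multiplied by the CO current efficiency. The default
    e_equilibrium is an assumed illustrative value for CO2 -> CO paired with
    oxygen evolution, not a number taken from this paper."""
    return (e_equilibrium / cell_voltage) * ce_co
```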
To haft and to hold: Evidence for the hafting of Clovis fluted points
Clovis fluted points vary considerably in technology and morphology, but also share a set of attributes, the most diagnostic of which are the flute scars, the remnants of the flake removals from the basal region that travelled up towards the tip. Fluting on Clovis and Clovis-like points generally extends no further than a third of the way up the face of the point. Finished points are usually ground smooth along the base and lower edges, suggesting facilitation of the hafting (attachment) to a wooden shaft or handle by way of an ivory or bone socket. The points may have been hafted directly to a main-shaft and used as a thrusting spear during close-encounter attacks, or held in the hand as a knife or butchery tool. Alternatively, an intermediary shaft, or foreshaft, may have been used to secure the point. The suggestion of foreshafts being used by Clovis hunters received support after the discovery of bone rods in association with mammoth remains and Clovis points at the type site at Blackwater Draw, New Mexico in 1936. Several other Clovis-aged sites across North America have yielded ivory and beveled rods that have also been associated with foreshafts and the hafting of Clovis points. Scratches present on a couple of Clovis points made on varieties of obsidian have been identified as "hafting abrasion" evidence; this roughening of the surface would have helped in securing the point into the shaft or socket. In one example from the Hoyt site in Oregon, remains of a "pitch" or hafting adhesive were discovered in the abrasions in the fluted area of the point.
Introduction
Clovis fluted points (see Howard 1990) are found across all of contiguous North America and are now generally accepted as dating to ca. 11,500 14 C years BP (e.g., Waters & Stafford 2007, but see Haynes et al. 2007). Two primary technologies dominated Clovis stone tool flaking, bifacial and blade (Collins 1999a). Bifacial flaking was used to produce the large flake blanks or preforms on which fluted points were produced, and it is these points that will be the main focus of this paper. The other technology produced long regular pieces, known as blades, which were shaped into various tool forms such as scrapers, burins, gravers and other small unifacial tools. There is a considerable variation within Clovis-aged fluted points (see Buchanan et al. 2014;Miller et al. 2013), and research into the causes of the variation and the morphological forms play an important role in contemporary studies in early Paleoindian archaeology (e.g., Amick 2017;Buchanan & Hamilton 2009;Prascianus 2011). Understanding variability in Clovis point shape and size not only assists in establishing material culture that is vital for archaeological studies (see Buchanan et al. 2014;Miller et al. 2013), but can also reveal Clovis landscapes, hunting practices and social behaviour (see Buchanan et al. 2011;Morrow & Morrow 1999). One particular suggestion for Clovis point variability concerns how they were affected by the hafting process (Buchanan et al. 2012). Hafting Clovis points could influence the size and shape of the basal section, whilst not affecting the blade section (Judge 1973;Keeley 1982;Musil 1988), the basal area being the most diagnostic section of the point. A recent study of Clovis and Clovis-like points carried out on basal morphology and basal concavity morphometrics, supported this hypothesis (Slade 2018). It was during research on certain Clovis fluted point specimens for that study, that evidence for hafting was recognised on certain specimens and led to a presentation at the conference that in turn led to the inclusion of this paper in this volume (Slade 2016). This paper will look at the material evidence for Clovis hafting that is available; this includes examples of Clovis fluted points that display evidence of hafting and bone and ivory artefacts associated with the hafting process, the various sites and locations where this evidence is present, and as part of the current study, whether there is a suggestion that hafting does affect the point's morphology.
Osseous rods as Clovis foreshafts
The exact way in which Clovis points were employed has been the subject of much discussion almost since the first discoveries were made at Blackwater Draw, Locality No. 1, in a gravel pit in New Mexico back in the 1930s (Hester 1972) and Clovis was recognised (e.g., Frison 1991a). The points may have been hafted directly to a wooden main-shaft and used as thrusting weapons at close quarters. Alternatively, an intermediate shaft, or foreshaft, may have been used to secure the point, whilst the opposite end would have either been spliced onto or inserted into the main-shaft (e.g., Lahren & Bonnichsen 1974; Stanford 1996), and used as a projectile weapon, such as with an atlatl, allowing attack from a safe distance. Both of these methods were possibly available to Clovis hunters and used in hunting situations (Frison 1991a). Another theory was put forward based on the assumption that bi-beveled rods were indeed foreshafts (Pearson 1999, but see Lyman & O'Brien 1999). It proposed combining two bi-beveled rods on their ventral sides to form a clothes peg-like foreshaft, with two 'v'-shaped openings permitting the insertion of a Clovis point and a main shaft. With a Clovis point securely attached to the composite foreshaft, it becomes an efficient hand-held thrusting weapon or spear-like cutting tool. The strength of this proposal is that it links each characteristic of the bi-beveled rods to a specific purpose and to their function as a whole. The other two ideas rely on pieces or sections of the composite tool that do not appear in the archaeological record (i.e., wooden or bone splints, antler bits, foreshaft sockets, etc.). However, an antler artefact from an Indiana peat bog was sent to the Smithsonian Institution for identification (Stanford 1996: 45) and was recognised as a possible foreshaft socket that would fit perfectly on a single-beveled osseous rod, such as the examples from the Anzick site in Montana (Wilke et al. 1991). Although the AMS date taken from a portion of extracted collagen postdates Clovis, the hafting technology of which it may be a part may well resemble that employed during Clovis times (Stanford 1996: 46).
The discovery in 1936 at the Blackwater Draw site of two cylindrical bone rods in direct association with mammoth bones and fluted points strengthened the suggestion of a foreshaft and evidence of hafting (Cotter 1937). Cotter proposed that the rods, with either one or both ends beveled, served as the foreshafts of Clovis spears. The suggestion was further supported by Taylor (1969) and by the reconstruction models proposed by Lahren & Bonnichsen (1974). Known examples of these osseous rods, and their possible foreshaft association, come from other Clovis-aged locations across North America, most notably the Sheaman site in Wyoming (Frison 1982), East Wenatchee in Washington (Gramly 1993), and the Aucilla River sites in Florida (Dunbar & Webb 1996), and from various site types which include caches, campsites and kill sites (Table 1). These tools are the most common non-lithic artefacts found in the Clovis archaeological record, but vary in size and shape. Some rods are beveled at only one end, some at both, while others are beveled at one end and pointed at the other (Figure 1). Some specimens are very long and thin, others are shorter and fatter (see Lyman et al. 1998). Since the discoveries of the osseous rods, the idea of them as foreshafts has never seriously been challenged. However, some researchers have questioned this description and suggested that the rods were commonly and erroneously referred to as "foreshafts" (e.g., Hemmings 2004). Several alternative ideas of their function have been put forward (see Boldurian & Cotter 1999; Bradley 1995; Pearson 1999). It was suggested that they were used as projectile points (e.g., Frison & Stanford 1982; Jenks & Simpson 1941), as tip breaks on some of the examples have been found in kill sites, in direct association with mammoth bones, and also in campsites (Bradley 1995). Wilke et al. (1991) put forward the idea that the bone rods from Anzick, Montana, were handles for pressure flakers, while Taylor (1969) originally suggested the Anzick specimens were fleshing tools. Another idea was that they were used as pry bars: a bone crowbar used in megafauna butchery (e.g., Saunders & Daeschler 1994). Another theory, developed from the East Wenatchee specimens, was that they served as shoes for the underside of sled runners (Gramly 1993), and Bradley (1995) suggested that the rods from East Wenatchee may have been ceremonial staffs that held some spiritual significance. These latter two suggestions are not supported by many Paleoindian specialists, and so the most widely accepted hypothesis remains that of Cotter in 1937 (Lyman et al. 1998; Pearson 1999; Stanford 1991).
An interesting find, and currently the only one of its kind from a Clovis context, is a bone tool discovered at the Murray Springs site in Arizona (Haynes & Hemmings 1968). The shape and structure of the tool appear to be well suited to the purpose of straightening wooden shafts. Experimentation with casts of the bone tool indicates that it would be highly effective for straightening shafts (Haynes & Hemmings 1968: 187).
The fluted point evidence
Several of the osseous rods display evidence of criss-crossed grooves or cross-hatching on the beveled ends (Haynes 1982: 390). Roughening the surface like this would increase friction with the adjoining, opposing bevel, strengthening the tool and making it more effective, as the binding would have something to grip onto; if covered with a resin-like pitch acting as an adhesive, the cross-hatching on the beveled ends of the rods would aid the securing of the flat fluted area of the point to the foreshaft (Lahren & Bonnichsen 1974: 149). At least one of the rods from the Anzick site has remains of a black substance present in the cross-hatching on the beveled ends; this material is believed to be an adhesive pitch (Wilke et al. 1991: 258). On one of the rods, cat #88.08.10 (Wilke et al. 1991: 260), incised lines occur at right angles on the back of the beveled surface, suggesting the incisions were made to prevent slippage of the binding, and on another, cat #88.68.13 (Wilke et al. 1991: 261), the short diagonal cuts to the side of the bevel could have functioned as slots where traces of a pitch, used as a binding agent, were found (Lahren & Bonnichsen 1974). For the purpose of this paper the term "pitch" is used as a synonym to describe a tree resin and other natural adhesive substances.

Figure 1. One of the Clovis osseous rods from the East Wenatchee cache that might be a foreshaft to which Clovis fluted points were hafted (after Gramly 1993).

Traces of an adhesive to bind fluted points to the beveled osseous rods interpreted as foreshaft components were discovered in scratches on the channel-flake scars of an obsidian Clovis point recovered from the Hoyt site in Oregon. This Clovis point (Figure 2) provided the first suggestion of original hafting adhesive preserved on the surface of a point (Rondeau 2009a; 2009b; Tankersley 1994). The Hoyt site is part of a large Clovis workshop which may be part of the same campsite complex that includes the Dietz site (Fagan 1986), which also has specimens of Clovis points that exhibit similar scratches in the fluting area (Rondeau 2008) and had possible evidence of hafting adhesive present. The Hoyt Clovis point was found by an amateur archaeologist, Mr. J. Dyck, who made the point available for study. It is made on an opaque black obsidian, and both faces of the point have scratches on the fluted surface. During analysis of the point, traces of the resinous material were found, believed to be an amber-like tree resin that served as a binding adhesive (Tankersley 1994, but see Beck 1996; Tankersley 1996). The texture and position of the substance suggested it was a hafting adhesive; an amber-like substance had also previously been reported from the later Paleoindian Folsom site Lindenmeier (Wilmsen & Robert 1978), but should also be disregarded as resembling amber (Beck 1996). The outline of the scratches on the point corresponds morphologically and metrically to the dimensions of the beveled ends of the Clovis osseous rods, further supporting the foreshaft hypothesis and the possible evidence for hafting (Tankersley 1994: 123).

The scratches on the fluted surface of Clovis points are found most commonly on specimens made from obsidian, and these seem to be limited to the far west (see Frison 1991a: 44; Harrington 1948; Wormington 1957: 61). Examples have been recorded in Oregon, California, Nevada, and Utah (Table 2). However, some obsidian Clovis points that display these flute scratches have been recorded further east (Table 2).
One of the best examples is an obsidian point, specimen #107 (Frison & Bradley 1999: 19), from the Fenn cache, found somewhere along the borders of Utah, Wyoming and Idaho (Figure 3); it has been suggested that the purpose of the scratches on this point may have been to facilitate the binding of the point to the foreshaft (Frison 1991b: 330). This Clovis point was also reported to have traces of a pitch in the striations in the fluted area, similar to those of the Hoyt specimen (Frison & Bradley 1999). Another Clovis fluted point with flute scratches was identified by this author in the Blackwater Draw, New Mexico assemblage (Figure 4) whilst carrying out my research on Clovis fluted point variability (Slade 2010, 2018). The point was discovered in the 1930s by George Roberts and donated to the Colorado Museum of Natural History (now the Denver Museum of Nature and Science) in 1936; although believed to come from the main Blackwater Draw Locality No. 1 site, it is possible that it was collected from one of the nearby blowouts at Blackwater Draw (Holen 2004). I was unable to examine the original, but I did have access to a very good quality epoxy resin cast replica that had the flute scratches and abrasions present (Slade 2017). The cast was part of the Blackwater Draw Clovis fluted point assemblage, part of the C.V. Haynes Cast Collection, Arizona State Museum, Tucson. I believe that this is the first time the scratches and their association with the point's hafting have been reported anywhere. The original specimen is made on an obsidian sourced in Utah but was found in New Mexico (Holen 2004), and it is thought that this is the first instance that the flute scratches on this specimen have been identified. Two other Clovis points that display flute scratches, and were until now unrecognised, can be recorded (Table 2). Both specimens are in private collections, but good quality casts have been made and were available to study (Slade 2017). The Utah Clovis fluted point was identified and studied by several Paleoindian specialists whilst in the Smithsonian Institution, but no mention was made of the flute scratches on both faces of the point. The large Clovis fluted point, or possibly a hafted knife, was found only 12 km from the East Wenatchee site in Washington and was recorded in the publication, but again no mention of the flute scratches and the hafting association was made (Gramly 1993). To date there is only one recorded non-obsidian Clovis point that displays flute scratches (Rondeau & Temple 2010). The specimen is an isolated surface find from the Shell Rock Butte area of Malheur County. It is made on a semi-translucent mottled variety of agate (Rondeau 2009c).
* This is the first instance to the authors knowledge that the scratches on these specimens have been reported, and associated with the possible hafting to osseous foreshafts. There are three lines of evidence supporting the Clovis point hafting model associated with flute scratches. First, the scratches on the fluted surface of the point form a rectangular pattern that is consistent with the patterns of the bevelled ends of the osseous rods in Clovis (Figure 1). Second, the width of the scratches on the fluted surface compare with the width of the bevelled ends of the rods. And thirdly, the direction of the scratches on the fluted surface of the points are at right angles to the marks on the rods, this is expected if two areas were bound together with an adhesive (Lahren & Bonnichsen 1974;Stanford 1996). Flute scratches have had surprisingly limited attention and the argument of them being associated with the facilitation of hafting remains largely speculative (Rondeau & Temple 2010). When the Borax Lake Clovis-like fluted points were first reported (Harrington 1948), the scratches were noticed, but not elaborated on further and were not associated with the hafting process at the time. It was a few years later that the first reference was made to the scratches on the Californian points ( Figure 5) being possible hafting evidence (Wormington 1957: 61). Flute scratches and their purpose were not discussed again until the Dietz site in Oregon was reported (Fagan 1986: 4). Since then, there have several reported cases from further sites in Oregon, Utah, Idaho ( Figure 6), California, and Nevada (Table 2). More research needs to be done on the nature and range of flute scratches, and to look at more Clovis and Clovis-like points that are made on obsidian in existing collections, and see if they display any evidence of flute scratches, and or traces of the adhesive pitch. It may also be possible to carry out a study on some non-obsidian Clovis fluted point assemblages to see if the scratches exist on more specimens other than the Shell Rock Butte, Oregon agate specimen (Rondeau 2009c;Rondeau & Temple 2010). It may be, however, that other materials used to produce Clovis points, such as chert and chalcedony produce roughened surfaces when knapped, and it was just not necessary to abrade the fluting areas of the point, as this provided sufficient friction for the hafting process (Tankersley 1994: 122).
Discussion and concluding remarks
As we have seen, since their discovery the cylindrical osseous tools have been termed foreshafts (e.g., Cotter 1937; Dunbar 1991; Lahren & Bonnichsen 1974), points (Cotter 1954; Jenks & Simpson 1941), rods (Gramly 1993), pins (Dunbar et al. 1989) and wedges used for tightening up loose haft bindings (Lyman et al. 1998). The differences in terminology reflect the difficulty Paleoindian archaeologists have had in trying to interpret the functionality of these implements, and there is no reason to limit their function to just one of these possibilities. Although the true function of the bi-bevelled rods remains a matter of some debate, researchers recognise the importance of these objects as an element of the Clovis toolkit (see Boldurian & Cotter 1999; Haynes 2002; Stanford 1991). Several of the osseous rods display evidence of criss-crossed grooves or cross-hatching, as roughening the surface would increase friction with the adjoining, opposing bevel and so strengthen the haft and make the tool more effective (Haynes 1982: 390). Other engravings found on some specimens are distinctive patterns, such as the zigzag designs on both sides of an ivory rod from the Aucilla River, Florida (Haynes 1982: 390), and the zipper designs found on some of the East Wenatchee specimens (Gramly 1993). Experimental analysis of the feasibility of the hafting procedures has been carried out on Clovis fluted points and bevelled rods through replication projects (e.g., Lahren & Bonnichsen 1974). Casts of replica Clovis fluted points were used, along with scale replicas of shafts, wooden and ivory foreshafts, and splints. The wooden splint was made to fit onto the fluted surface of the point and extended up the foreshaft. Both of the fluted point surfaces were coated with an adhesive, and the bevelled ends of the foreshaft were set on the point's surfaces (Lahren & Bonnichsen 1974: 149).
The osseous tools of the Upper Palaeolithic in Europe are recognised as projectile point technologies in the Aurignacian that change shape over time (Knecht 1993; Peyrony 1933). The earliest industries are split-based with distinctive haft widths and lengths (Peterkin 1993). Later examples are more simply lozenge-shaped and spindle-shaped, and do not have bevelled ends. The earliest bevelled-based hafts appear in Gravettian assemblages (Knecht 1993; Pike-Tay & Bricker 1993). By the time of the Magdalenian in western Europe these implements were numerically very common in the archaeological record, and the size ranges are remarkably consistent among the types with various bases (Peterkin 1993). The Clovis-aged specimens from North America, although similar, do not include the split-based or lozenge-shaped bases. If the specimens from the Wizards Beach Clovis site in Nevada (Table 1) are made from mammoth ivory and bone, then the ranges of shape and size of the New World osseous tools are conspicuously similar to those from the Old World (see Haynes 2002). Amber or similar fossil resins have been found in eastern Upper Palaeolithic sites (e.g., Soffer 1985), and it seems likely that the use of an adhesive can be traced from Clovis sites in North America back to the European Upper Palaeolithic, adding another shared cultural trait between Clovis and the Old World. The bone shaft straightener, or wrench, from the Murray Springs site has obvious similarities with the "bâton de commandement", or "bâtons percés", of the Upper Palaeolithic Gravettian and Magdalenian, such as the examples from the Czech Republic and the Ukraine (see Augusta & Burian 1960; Boriskovsky 1958). Western European examples are similar in size but vary in shape, and are often engraved. These European bâtons are generally thought of as shaft straighteners as well as having other uses (Haynes 2002; Leroi-Gourhan 1957; Oakley 1982).
As yet there is no definite archaeological evidence of whether and how Clovis points were hafted. Perhaps with all the current work being carried out on the submerged sites in the southeast (e.g., Hemmings et al. 2004), and the work on the submerged landscapes of the eastern seaboard (e.g., Lowery et al. 2010), we might soon have direct evidence of hafting. Different hafting methods and techniques were perhaps employed on Clovis points of various shapes and sizes, and for varying functions (e.g., throwing spears, thrusting weapons, knives, etc.). Indeed, this could go some way towards explaining the variability within Clovis fluted points in North America (but see Buchanan et al. 2012).
Extensive evidence for the hafting of Clovis unifacial tools is present in the archaeological record, although it was originally thought that regular hafting by colonising hunter-gatherers and foragers would have decreased their toolkit portability (see Kuhn 1994; Morrow 1996). In the Great Lakes region of the Midcontinent of North America, recent research supports the hypothesis that Clovis people habitually hafted unifacial tools (Eren 2012). In this case, there is no reason to suggest that Clovis groups from elsewhere across North America were not hafting Clovis fluted points as well.
MAPkinases regulate secondary metabolism, sexual development and light dependent cellulase regulation in Trichoderma reesei
The filamentous fungus Trichoderma reesei is a prolific producer of plant cell wall degrading enzymes, which are regulated in response to diverse environmental signals for optimal adaptation, but also produces a wide array of secondary metabolites. Available carbon source and light are the strongest cues currently known to impact secreted enzyme levels and an interplay with regulation of secondary metabolism became increasingly obvious in recent years. While cellulase regulation is already known to be modulated by different mitogen activated protein kinase (MAPK) pathways, the relevance of the light signal, which is transmitted by this pathway in other fungi as well, is still unknown in T. reesei as are interconnections to secondary metabolism and chemical communication under mating conditions. Here we show that MAPkinases differentially influence cellulase regulation in light and darkness and that the Hog1 homologue TMK3, but not TMK1 or TMK2 are required for the chemotropic response to glucose in T. reesei. Additionally, MAPkinases regulate production of specific secondary metabolites including trichodimerol and bisorbibutenolid, a bioactive compound with cytostatic effect on cancer cells and deterrent effect on larvae, under conditions facilitating mating, which reflects a defect in chemical communication. Strains lacking either of the MAPkinases become female sterile, indicating the conservation of the role of MAPkinases in sexual fertility also in T. reesei. In summary, our findings substantiate the previously detected interconnection of cellulase regulation with regulation of secondary metabolism as well as the involvement of MAPkinases in light dependent gene regulation of cellulase and secondary metabolite genes in fungi.
MAPkinases are known to be subject to feedback inhibition, which contributes to signal fidelity and is often achieved by phosphatases dephosphorylating and hence inactivating the MAPkinases 13. Evaluation of the functions of the pheromone MAPkinase pathway in Aspergillus flavus showed that its members (steC, mkkB, mpkB and steD) act as a complex and are required for aflatoxin B1 production, while in the respective deletion mutants an increase in production of leporin B and aspergillicins was observed 9. Mechanistic investigation of the role of this pathway in aflatoxin production revealed that the regulatory impact of this kinase targeted biosynthesis of precursors rather than regulation of the aflatoxin gene cluster 16. In contrast, deletion of the Hog1-type MAPkinase SakA in A. flavus caused an increase in aflatoxin production 17. Components of the cell wall integrity pathway are involved in regulation of secondary metabolism in many fungi, where they are often required for metabolite production 10. Already these few examples show that regulation of secondary metabolism is a common trait of MAPkinase pathway function in fungi.
Fungi use chemicals to communicate with mating partners and competitors 18,19. Importantly, a considerable part of the functions of MAPKs is aimed at appropriate communication with the environment, which is crucial not only for competition, but also for virulence and pathogenicity 7,20. While the correct function of such communication can be detected relatively easily by genetic screenings and microscopic analysis, the compounds responsible for the interaction, i.e. the chemical(s) eliciting the response, are much harder to identify. One example is the chemotropic growth of the phytopathogen Fusarium oxysporum towards plants, which is regulated by the CWI MAPkinase pathway and for which a peroxidase was found to be responsible 21,22, although the peroxidase is unlikely to be the chemical that is actually detected. Another case of chemical communication is represented by the rhythmic activation of MAPkinases upon communication between Neurospora crassa hyphae 23. This interaction mechanism is conserved between N. crassa and B. cinerea 24, although here too the chemical compounds mediating the interaction are not yet known.
The rotation of the Earth, causing night and day, represents one of the most important environmental cues for life, including fungi 1. Thereby, organisms do not simply respond to the increasing light intensity in the morning, but prepare for both dusk and dawn using a circadian clock, which keeps running even in the dark 25,26. Light is essential for entraining the clock, and a light pulse resets it, which impacts the whole gene regulation machinery as well 25,27. MAPkinases play an important role in circadian rhythmicity due to their rhythmic activation and their role in phosphorylation of clock proteins 28. They are a crucial output pathway of the circadian clock 29.
Both under constant light conditions and during a time course reflecting circadian rhythmicity, discrepancies between mRNA abundance and protein abundance were observed 30,31, and metabolism-related genes also oscillate during the circadian day 32. With respect to circadian rhythmicity, it is particularly interesting that the rhythmic activation of the osmosensing MAPK pathway influences regulation of translation in dependence of osmotic stress 33.
The Hog-pathway transmits the phytochrome-related red light signal independently of its function as a stress signaling factor in Aspergillus nidulans 34 .
In Trichoderma, light profoundly influences physiology 41,42 with respect to growth [43][44][45] , asexual and sexual development 46,47 , regulation of plant cell wall degrading enzymes 48 , secondary metabolism 49,50 and stress response [51][52][53] . Moreover, the MAPkinase encoding gene tmk3 is induced by light in a photoreceptor dependent manner in T. atroviride 54 and in T. reesei 55 and early, transient phosphorylation of TMK3 occurs in T. atroviride 56 . Also the photoreceptor gene env1 and the photolyase gene phr1 have strongly increased transcript levels in a strain lacking tmk3, hence indicating a dampening effect of the HOG pathway on light response and potentially increased light sensitivity in deletion strains 56 .
In S. cerevisiae, the MAPkinase of the pheromone pathway is Fus3 12, the homologue of T. reesei TMK1. Upstream of the S. cerevisiae MAPkinase cascade, the G-protein beta and gamma subunits mediate transmission of the pheromone signal to the MAPkinases 5. In filamentous fungi, not only Fus3 homologues but also components of other MAPkinase pathways were shown to be required for proper sexual development. The MAPkinase mediating the cell wall integrity (CWI) pathway in N. crassa was found to be required for formation of protoperithecia if a strain was meant to assume the female role in a cross 57. Moreover, Slt2 homologues are required for female fertility in F. graminearum 58 and Magnaporthe grisea 59. In F. graminearum, lack of the Hog-pathway MAPkinase blocked sexual development 60. Crosstalk was observed among the CWI and pheromone response pathways in N. crassa 61. Hence, while the pheromone response pathway has a central function in sexual development, all three MAPkinases contribute to the process of sexual reproduction.
Induction of sexual development in T. reesei deviates from methods used in other fungi in that so far no protoperithecia or similar early female stages have been observed in this fungus 62,63. However, the prominent wild-type strain QM6a is unable to assume the female role in a cross due to a defect in the scaffolding protein HAM5 64,65 and is therefore considered female sterile 67.
In Trichoderma, three MAPkinase pathways were detected, which are conserved in the genus 40,67. Early investigations showed that T. virens TmkA and TmkB are required for full antagonistic potential against fungal phytopathogens 68,69 and that TmkA is needed for inducing full systemic resistance 70. In T. atroviride, lack of Tmk1 reduced mycoparasitic activity, yet resulted in higher antifungal activity attributed to low molecular weight substances including 6-pentyl-α-pyrone (6PP) and peptaibol antibiotics 71. Recently, T. atroviride Tmk3 and Tmk1 were implicated in the polarity stress response during hyphal interaction upon mycoparasitism and in the chemotropic interaction between individual hyphae in this process 72. Another case of antagonism was shown for T. atroviride with Drosophila melanogaster larvae, which fed on the fungal mycelium. Tmk3 was required for secondary metabolite production in T. atroviride, which was the reason for larvae preferentially feeding on a tmk3 mutant, although the mortality of larvae doing so was increased compared to feeding on the wild-type 73. Tmk3 was also required for proper response to cell wall stress, especially upon exposure to light 56, which suggests a certain interrelationship of the cell wall integrity pathway (represented by Tmk2) and the osmosensing pathway. Investigation of the functions of the MAPkinase pathways in T. reesei as well as selected upstream signaling processes revealed roles in cell wall integrity, stress response, glycogen accumulation and asexual development [74][75][76][77]. Previously, TMK1 (Fus3-like), TMK2 (Slt2-like) and TMK3 (Hog1-like) were shown to impact regulation of cellulase gene expression: TMK3 was reported to exert a strongly positive influence on cellulase production 76, while the influence of TMK2 on transcript abundance of cellulase genes is minor, despite its negative influence on secreted cellulase activity 74. TMK1 also negatively influences cellulase production 75,77, although a positive effect of TMK1 was shown on transcript levels of major cellulase and xylanase genes 77.
Despite the fact that the influence of light on MAPkinase dependent regulation of stress response and secondary metabolism has been shown previously, this environmental cue was not considered in previous studies of the topic with T. reesei. Consequently, we investigated the impact of light on regulation of cellulase production, and we show significant differences between growth in light and growth in darkness. Our study further revealed that MAPkinases are required for female fertility upon mating in T. reesei and that MAPkinases differentially impact secondary metabolite production under mating conditions, hence reflecting an influence on chemical communication.
Results
Information on environmental cues is transmitted via multiple signaling cascades in fungi, one of which are the MAPkinase cascades. Although the MAPkinase genes of T. reesei do not show significant regulation by light 49,78, previous work revealed an involvement of phosphorylation events in general, and specifically of MAPkinase cascades, in light response and circadian rhythmicity 25,28. Additionally, we showed that the random mutant QM9414 is less light sensitive with respect to cellulase production than the wild-type strain QM6a 79. Therefore we deleted the MAPkinase encoding genes tmk1, tmk2 and tmk3 in the wild-type background of QM6a by replacement with the hygromycin selection marker cassette 80. Throughout our study, we investigated the phase of active growth and cellulase production of QM6a, which grows somewhat more slowly than QM9414 and produces lower levels of cellulases, but has the advantage that the machinery of cellulase-regulation-associated signaling and gene regulation is not altered.
MAPkinases impact growth and sporulation. As expected, tmk1, tmk2 and tmk3 were not essential in QM6a, and the deletion strains grew well on malt extract agar plates (Fig. 1A). Analysis of biomass formation in liquid cultivations with cellulose as carbon source revealed strikingly different impacts in constant light and constant darkness. While in darkness Δtmk3 formed considerably less biomass (Fig. 1B), a similar effect was observed in light for Δtmk2 (Fig. 1C). This clear difference in the functions of TMK2 and TMK3 in modulating growth in light and darkness strengthens the need for cultivation under controlled light conditions. Moreover, the three MAPkinase pathways of T. reesei obviously exert signal transmission tasks for which it is crucial whether the fungus grows in the dark or in light.
We also found that lack of tmk3 in the genome causes abolishment of the typical green pigmentation of spores (Fig. 1A), which is in agreement with data from T. atroviride 56. Hence, we were interested in whether this is due to an impact of MAPkinases on regulation of pks4, the polyketide synthase responsible for this pigmentation 81.
RTqPCR confirmed our hypothesis (Fig. 1D,E), showing that deletion of tmk3, which results in a white phenotype, also correlates with abolishment of pks4 transcription in light and darkness. Interestingly, we also found that pks4 transcript levels are strongly increased in a strain lacking tmk2, both in light and darkness, and that Δtmk1 shows elevated pks4 levels only in darkness. Consequently, MAPkinases crucially impact spore pigmentation, both in light, the preferred sporulation condition, and in darkness.
TMK3 is required for chemotropic response to glucose. Glucose represents an important nutrient for T. reesei; it represses cellulase gene expression and elicits carbon catabolite repression 82,83. However, genome analysis revealed that T. reesei lacks a direct homologue of the prototypical glucose sensors GPR-4 or Git1 67. Investigation of G-protein coupled receptors (GPCRs) implicated two class XIII (DUF300 domain) GPCRs, CSG1 and CSG2, in glucose sensing due to their impact on cellulase regulation on cellulose and lactose 78. This function was supported by the requirement of CSG1 and CSG2 for chemotropic responses to specific concentrations of glucose 84. Since a role in the chemotropic reaction to glucose was shown for FMK1, the Fusarium oxysporum homologue of the filamentation pathway MAPkinase 22, we were interested in the role of T. reesei MAPkinases in chemotropic reactions to glucose.
Interestingly, in T. reesei TMK3, but not TMK1, the homologue of FMK1, is required for chemotropic response to glucose. As for the F. oxysporum homologue MPK1 22 , lack of the cell wall integrity pathway MAPkinase TMK2 in T. reesei does not perturb chemotropic response to glucose ( Fig. 2A).
Since also the GPCRs CSG1 and CSG2 are required for chemotropic reactions to glucose 84 , the signaling pathway triggering this reaction in T. reesei might not be exclusively channeled through the G-protein pathway but may be subject to biased signaling 85 .
MAPkinases regulate cellulase transcription and secreted activity differentially in light and darkness. An involvement of T. reesei MAPkinases in cellulase regulation was shown previously [74][75][76]. However, in these studies the relevance of light for cellulase regulation was not considered, and T. reesei TU-6, a parental strain derived from QM9414 with decreased and probably altered light response 79, was used. Therefore, we aimed to evaluate these previous results under controlled light conditions with cellulose as carbon source, and we tested for a potential relevance of MAPkinases in the strong down-regulation of cellulases in light. We observed that lack of tmk3 in the genome virtually abolished specific cellulase activity in darkness (Fig. 2B), which is in agreement with the strongly decreased biomass formation of Δtmk3 under these conditions (Fig. 1B). Due to the strong effect of TMK3 on cellulase regulation, chemotropic response to glucose and biomass formation upon growth on cellulose, we were interested whether the growth defect of Δtmk3 is a general phenomenon or condition-specific, i.e. carbon-source-specific. Analysis of hyphal extension of Δtmk3 on malt extract medium (3% w/v) showed a colony size decreased by 48 ± 1% (standard deviation of 3 biological replicates), on carboxymethylcellulose the decrease was considerably stronger at 86 ± 1%, and on glucose Δtmk3 showed no growth after the 48 h in darkness of the experiment used in parallel for the other measurements. Consequently, the growth defect caused by the lack of TMK3 is obvious on all media used, albeit the extent of the retardation depends on the carbon source. The more severe growth defect on carboxymethylcellulose compared to the full medium (malt extract) is in agreement with the strong decrease of cellulase expression in Δtmk3. The fact that Δtmk3 no longer reacts chemotropically to glucose, a degradation product of cellulose, is in agreement with its growth defect on glucose, as the strain obviously has problems sensing it, which may well be connected to perturbed cellulase regulation and the subsequent intra- and/or extracellular liberation of glucose.
Deletion of tmk1 caused increased cellulase activity, and for Δtmk2 we found a positive trend (Fig. 2B). In the wild-type QM6a, cellulase activity in light decreases to levels around or below the detection limit 79, which did not change in deletion strains of tmk1, tmk2 or tmk3 (data not shown). Consequently, MAPkinases are not involved in the (posttranscriptional) mechanism responsible for the block of cellulase formation in light, although they do influence cbh1 transcript abundance. Transcript abundance of cbh1, the major cellobiohydrolase gene of T. reesei, correlated with the results for specific cellulase activity in darkness, with significantly increased cbh1 levels in Δtmk2, hence supporting the positive trend of cellulase activity in Δtmk2 (Fig. 2C). In light, cbh1 transcript levels are decreased in all three MAPkinase mutants (Fig. 2D), reflecting a clear difference to the situation in darkness.
In darkness, transcript levels of the major cellulase transcription factor gene xyr1 correlate with those of cbh1 (Fig. 2E), as was shown for other conditions previously 86. Also for xyr1, the situation is different in light (Fig. 2F): the correlation with cbh1 was not observed, and in contrast to the down-regulation of cbh1 transcript levels in Δtmk2, xyr1 transcript levels in this strain follow the up-regulation seen for cbh1 and xyr1 in darkness. Therefore, it is tempting to speculate that TMK1 and TMK3, but not TMK2, are relevant for the function of XYR1 in cellulase regulation in light. Since XYR1 comprises MAPK phosphorylation sites 76, this would not be without precedent.
In case of the carbon catabolite repressor gene cre1, we also found clear differences in gene regulation by TMK1, TMK2 and TMK3 in light and darkness (Fig. 2G,H). The lack of significant regulation of cre1 in darkness does not indicate a relevance of MAPkinases for carbon catabolite repression at the level of modulation of transcript abundance of cre1 (Fig. 2G). In light, cre1 transcript abundance decreases in all three deletion strains (Fig. 2H), the relevance of which is difficult to interpret, due to the very low levels of expressed cellulases in light on cellulose.
MAPkinases are involved in sorbicillin production. An involvement of MAPkinases of T. reesei in regulation of secondary metabolism has not been tested previously. Sorbicillin production is connected to the regulation of cellulase gene expression and carbon catabolite repression in T. reesei 50,87,88. Therefore, we assessed this function with a photometric screening for yellow pigments representing mainly sorbicillin derivatives, which show a typical light absorbance maximum close to 370 nm. These compounds are biosynthesised by the products of the SOR secondary metabolite cluster 50,89,90 upon growth on liquid media with cellulose as carbon source (Fig. 3A,B).
We found that both TMK2 and TMK3 positively influence sorbicillinoid production in darkness upon growth on cellulose (Fig. 3C), which correlates with the difference in biomass production in case of Δtmk3. In light, the situation is reversed for TMK2 (Fig. 3D), which has a considerably negative effect on the production of sorbicillin derivatives. This prompted us to investigate a possible influence of MAPkinases on secondary metabolism in more detail.
MAPkinases impact regulation of secondary metabolism. Among the most crucial regulators of secondary metabolism is VEL1, which regulates sexual development and secondary metabolism in T. reesei 91 , shows a regulatory interaction with the photoreceptor ENV1 92 and is essential for cellulase gene expression 93 . Therefore, we asked whether the regulatory function of the MAPkinases might be connected to the role of VEL1 by testing transcript abundance of vel1 in deletion strains of tmk1, tmk2 and tmk3.
Indeed, we found a light dependent regulation of vel1 in all MAPkinase mutants, with differential impacts either in constant light or in constant darkness (Fig. 3E,F). The regulation pattern of vel1 did not correlate with production of sorbicillin derivatives (Fig. 3A,E) as the clear increase of vel1 transcript abundance in Δtmk3 should rather result in an increased level of sorbicillinoid production in case of a direct correlation, which is not the case. Consequently, the regulatory impact of the MAPkinases on sorbicillin production is unlikely to be mediated by VEL1.
MAPkinases are required for normal sexual development. An involvement of MAPkinases in regulation of sexual development was shown previously in fungi. Since the parental strain QM6a is female sterile due to a defect in the MAPkinase scaffolding protein HAM5 64,65 , we outcrossed this defect by mating with the fully fertile QM6a derivative FF1. The resulting strains with fully fertile strain background were confronted under conditions favouring sexual development. All strains were able to form fruiting bodies with the fully fertile wild-type strains CBS999.97 MAT1-1 and CBS999.97 MAT1-2 (Fig. 4). However, none of the strains lacking a MAPkinase gene could mate with a female sterile strain of the respective compatible mating type (FS69 or QM6a) or with another strain lacking a MAPkinase. Therefore, we conclude that deletion of tmk1, tmk2 or tmk3 causes female sterility.
In homozygous crosses of strains lacking TMK2 or crosses between Δtmk2 and Δtmk3 of either mating type we observed a small but visible clearing zone. This finding suggests that the clear effects in regulation of secondary metabolism under different conditions by TMK2 and TMK3 also affect chemical communication and potentially cause a retardation of growth or decrease in aerial hyphae formation prior to contact. The minor effects of TMK1 on secondary metabolism are unlikely to be relevant for chemical communication. However, it has to be noted that for example fatty acid derived secondary metabolites would not be detected in our assay and hence we cannot fully exclude an influence of TMK1 on certain compounds not observed here.
MAPkinases contribute to regulation of chemical communication. Secondary metabolite production changes under fermentative conditions in T. reesei, which was also shown for sorbicillinoids 94,95, the compounds responsible for the yellow coloration of liquid and solid media inoculated with T. reesei wild-types 89. Our analyses showed that TMK1 is required for production of at least one metabolite, which is also decreased upon lack of TMK3. Deletion of tmk2 further resulted in a shift of abundance of certain secondary metabolites (Fig. 5). The most striking effect was found for Δtmk3 (Fig. 5A), revealing that in this strain the production of all compounds detected in the wild-type was downregulated or abolished. Using a reference compound 95, we could identify the sorbicillin derivative trichodimerol as strongly regulated by TMK3 (Fig. 5A and Figure S1). Hence, the hypothesis that MAPkinases contribute to regulation of chemical communication of T. reesei by secreting (secondary) metabolites to the environment is well supported. However, although a correlation of defects in secondary metabolite secretion with perturbed mating behavior was reported previously 91,94, the precise role of these secondary metabolites in the initiation of sexual development still remains to be clarified.
Considering the results for growth in liquid media with cellulose as carbon source, we conclude that MAPkinases represent important signaling cascades, differentially integrating signals with varying relevance upon growth on different carbon sources, on surfaces or submerged and in dependence of light.
MAPkinases regulate production of trichodimerol and (21S)-bisorbibutenolide. Besides trichodimerol as product of the SOR cluster, several other compounds also showed alterations in one or more MAPkinase deletion strains. Hence, we were interested in the nature of these compounds and aimed at isolation and structural elucidation of one strongly regulated and hence most interesting changing peak. Due to the complexity of the different structures of sorbicillinoids, which nevertheless show similar UV spectra, we aimed to purify the compound of interest to enable unequivocal assignment of its structure. The yellow color of the compound selected for detailed analysis indicated that it is likely to be a sorbicillinoid, and mass spectrometry indicated a similarity with bisorbibutenolide, which required more in-depth investigation. 1D and 2D NMR measurements led to a total number of six methyl, zero methylene and eleven methine groups and eleven quaternary carbon atoms, resulting in three additional non-carbon-bound protons. Further investigation of the UV and NMR spectroscopic as well as MS spectrometric data implies a molecular structure of an unsymmetric dimer of sorbicillinol. The central moiety of this dimer is identified as a bicyclo[2.2.2]octane skeleton. This structure can be determined in HMBC by the 2J C-H and 3J C-H couplings of the protons in its positions 4, 7 and 8 as well as of the protons in the two methyl substituents in positions 1 and 5 (Fig. 5B). Namely, the methyl group at position 1 shows couplings to the carbons C-1, C-2, C-6 and C-7, while the methyl group at position 5 shows couplings to C-4, C-5 and C-6. Protons H-4, H-7 and H-8 each show eight or nine C-H long range couplings to the corresponding carbons via two or three covalent bonds, respectively (Figure S2). Some of these couplings even reach carbon atoms in substituents bound to the bicyclo[2.2.2]octane skeleton. Additionally, chemical shifts of δC 210.7 and 197.4 as well as the multiplicities of carbons C-2 and C-6 indicate the presence of ketone functionalities in these positions. Furthermore, the chemical shift and the multiplicity of C-5 indicate that, apart from the methyl group, a hydroxy group is bound in this position.
An (E,E)-hexa-2,4-dienoyl (sorbyl) substituent is attached in position 7 to the bicyclo[2.2.2]octane. This substituent can be identified by 3J H-H couplings in COSY (Figure S3) as well as in HSQC by the 2,3J C-H couplings within this moiety and to the methine group in position 7 (Figure S2). The E configurations of both double bonds result in particular from the quite large 3J H-H coupling constants between the sp2-hybridised methine groups. A second (E,E)-hexa-2,4-dienoyl substituent can be identified to be bound in position 3. However, this moiety is predominantly present as an enol tautomer between C-9 and C-3, which emerges from the chemical shifts and multiplicities of these two carbon atoms. The presence of these two diene-conjugated carbonyl chromophores is confirmed by UV absorption at 372 nm (Figure S4). Furthermore, an enolized 3-oxo-2,4-dimethylbutanolide ring is bound to C-8. The carbon skeleton of this moiety can be identified by the 2J C-H and 3J C-H couplings of the protons in the methyl groups bound to C-21 and C-23. The chemical shifts of C-22, C-23 and C-24 (δC 188.8, 92.3 and 180.2, respectively) further clearly indicate the enolization in this structural moiety.
The relative stereochemistry of (21S)-bisorbibutenolide was determined using NOEs recorded in the NOESY spectrum (Figure S5). The stereochemistry at positions 4, 5, 7 and 8 in the bicyclo[2.2.2]octane skeleton can especially be explained by NOEs between the CH3 group at C-5 and the protons H-10 and H-11, as well as by the missing NOEs from this methyl group to H-7 and H-8. Furthermore, H-8 shows an NOE to H-16, and H-7 shows an NOE to the methyl group at position 21. The absolute stereochemistry was deduced from the stereochemistry of S-sorbicillinol, which is as yet the only reported enantiomer of this natural product 96 (SciFinder, 2022). This results in the (1R,4S,5S,7R,8S) configuration for the stereocenters in the central moiety of bisorbibutenolide (Fig. 5C), which is in agreement with those reported earlier 97,98 for the same molecular structure. Furthermore, the stereochemistry at position 21 in the butanolide moiety was determined with regard to Maskey et al. 98. They have shown that a 21S configuration causes the deprotonation of the OH group in position 22 with a concomitant enolisation of C-22, C-23 and C-24. This is caused by a spatial proximity of the deprotonated hydroxy group at C-22 to the hydroxy group at C-9 as well as to the ketone at C-3. In case of a 21R configuration, such deprotonation occurs to a significantly lesser extent, since the described spatial proximity between C-3, C-9 and C-22 is not possible.
Overall, the structure is that of (21S)-bisorbibutenolide, shown in Fig. 5D. All recorded spectroscopic data are summarized in section "Materials and Methods" and the spectra are shown in the Supplementary Material (Figures S2-S11). These data are consistent with those reported by Maskey et al. 98 for (21S)-bisorbibutenolide as well as with those reported in 97 for the structurally identical "trichotetronine". Thus, we assume that all three independently determined structures are identical.
Discussion
Fungi have to react to multiple environmental cues to succeed in competition, balancing resources between investment in biomass formation and colonization, reproduction, and warfare, i.e. the production of secondary metabolites to defend nutrients, mating partners and reproductive structures. Our study revealed that the MAPkinase pathways of T. reesei are central to regulation of these tasks, as they differentially integrate signals and coordinately rather than separately modulate their output pathways (Fig. 6). The different functions which TMK1, TMK2 and TMK3 assume are all influenced by light. This is in perfect agreement with the crucial functions of their homologues in light response and circadian clocks in other fungi. Importantly, the MAPkinase pathway acts downstream of the circadian clock and hence also of the photoreceptor complex members as its core components 28,99. Thereby, the MAPkinases obviously provide important information on the environment, which is integrated with the light signals perceived by photoreceptors to achieve an appropriate response in light or darkness.
For TMK1 we see a small, but significant increase in specific cellulase activity in darkness and a corresponding trend in slightly elevated cbh1 and xyr1 transcript levels, while in light cbh1 transcript levels decrease, which may have contributed to the lack of detection of an effect of TMK1 in previous work 75 .
TMK2 negatively influences cellulase expression upon growth on wheat bran combined with Avicel; however, biomass formation of this strain is unclear and data on specific activity are not available in that study 74. Deletion of tmk2 caused decreased growth in the presence of lactose and glucose, but not glycerol, in T. reesei 77. We could now confirm the negative impact of TMK2 on cellulase regulation in T. reesei upon growth on cellulose. This regulatory effect is reflected in an increase of transcript abundance of cbh1 and xyr1 as well as a positive trend in specific cellulase activity in Δtmk2, in contrast to the previously detected only minor effect of TMK2 on cbh1 transcript abundance 76. The strongly positive influence of TMK3 on cellulase production reported in the same study 76 is also seen here, although the regulation pattern we observed is more severe, with activity and transcript levels in Δtmk3 barely detectable anymore. Again, random light pulses during cultivation and harvesting may have alleviated the strongly decreased values we found.
MAPkinases are well known to act at higher levels of the signaling cascade, above the transcription factors of the downstream pathways, which may be impacted directly by phosphorylation or indirectly via regulation of positive or negative factors influencing them. However, a potential feedback regulation acting via a nutrient sensing pathway might still influence regulation of MAPkinase genes at the transcriptional level. We therefore checked available transcriptome data from comparable conditions for indications of such a feedback 50,100-102, but since we did not find significant regulation of tmk1, tmk2 or tmk3 in these data, we conclude that this is not the case.
Interestingly, in N. crassa the OS pathway, corresponding to the Hog1-pathway in yeast and comprising a homologue of TMK3 has no significant influence on cellulase production 103 , which is in contrast to our results.
In summary, our data obtained with experiments under controlled light conditions clearly show a light dependent regulatory function of all three MAPkinases on cellulase gene regulation and secreted cellulase activity, which is jeopardized by random light pulses.
The GPCR CSG1, which is essential for the chemotropic response of T. reesei to glucose 84 , was shown to be required for posttranscriptional regulation of cellulase gene expression 78 . Importantly, this GPCR is not related to other known glucose sensing GPCRs like GPR-1 in N. crassa or Gpa2 of S. cerevisiae 78 . In contrast, the function of CSG1 as a member of class XIII of GPCRs was for the first time characterized as posttranscriptional regulation of cellulases 78 . Here we found that also TMK3 is needed for the chemotropic response to glucose, although here, in contrast to the situation with CSG1 78 , not only cellulase activity, but also transcript abundance decrease strongly (Fig. 2B,C). Hence, we assume that perturbed chemotropic reaction to glucose does not necessarily correlate with diminished cellulase transcript abundance, but is likely to be important for regulating the amount of produced cellulases at different levels.
Interestingly, research with F. oxysporum showed a dependence of the chemotropic response to glucose on TMK1 22, which we did not observe, and the relevance of TMK3 for this process has not been studied yet. Due to the different habitats and ecological functions of these two fungi (F. oxysporum being a plant pathogen and T. reesei mainly a saprotroph), glucose sensing may have a different relevance for them. However, the widespread presence and conservation of MAPkinase pathways from yeast to man rather speaks against such a hypothesis, and the reason for this discrepancy remains to be investigated.
We found that the glucose signal is transmitted via the class XIII GPCR CSG1, which is also essential for the chemotropic response to glucose 84. Our results for TMK3 reveal that this chemotropic response is not exclusively channeled through the heterotrimeric G-protein pathway, but also through the MAPkinase pathway. Hence, a potential role of biased GPCR signaling 85 in the chemotropic response to glucose is worth exploring in T. reesei.

Female sterility is defined as the inability to assume the female role during sexual development and can have diverse physiological reasons 104, including a defect in hyphal fusion, for example due to mutations in the ham5 gene 105,106. In fungi like N. crassa, formation of protoperithecia is induced in the female strain prior to fertilization with conidia of the male strain to assess male and female fertility. In T. reesei this method is not applicable, because no growth condition is known under which such structures are formed. Consequently, tests for male or female fertility are performed by assessment of mating and fruiting body formation with strains comprising a female sterile strain background in addition to the deletion of the gene of interest, or as mating partners 63. Defects in sexual development due to lack of MAPkinases were shown for all three pathways in N. crassa 107 as well as in other fungi. Sexual development is consistently impacted by all three MAPkinases in T. reesei, which are obviously required for the ability to mate with a partner having a defect in female fertility such as mutations in HAM5. HAM5 acts as a scaffolding protein for MAPkinase pathways and is crucial for their function 106. Consequently, the phenotype we see upon deletion of tmk1, tmk2 and tmk3 is in agreement with the loss of female fertility caused by defects in the pathway involving HAM5, which is also responsible for the sexual defect of T. reesei QM6a 64,65,67.
Since at least the TMK1 and TMK2 mutant strains in S. macrospora and N. crassa are fusion mutants, as are those lacking HAM5 105,108, it would not be without precedent if the sexual defect of the T. reesei MAPkinase mutants were due to an abolished ability for hyphal fusion in these strains as well.
Carbon catabolite repression was recently reported to be impacted by the high osmolarity MAPK pathway, which contributes to a protein complex regulating CreA cellular localization and dissociates upon addition of glucose 109 . In N. crassa, genetic and omics analyses showed that the MAPkinase pathway is not acting through the canonical carbon catabolite repressor CRE-1 103 . Hence, the minor changes in transcript abundance we found for regulation of T. reesei cre1 by MAPkinases in light gives a hint to their relevance, but does not reflect the full mechanism of regulation, which may be considerably more significant at the protein-and interaction level also in T. reesei. However, the abolished chemotropic response to glucose in a strain lacking TMK3 suggests that the Hog pathway may be connected to glucose signal transmission also in T. reesei. Additionally, the differences between light and darkness we see in our experiments indicate that both conditions should be investigated in fungi to obtain a comprehensive picture.
As previously shown in T. reesei, interaction with potential mating partners of opposite mating types involves specifically changing secondary metabolite patterns 91,94. We chose conditions enabling sexual development for our assay to allow conclusions as to altered chemical communication by strains lacking one of the MAPkinases. Among the compounds regulated via TMK3 is the sorbicillinoid bisorbibutenolide 110. Bisorbibutenolide (or bislongiquinolide) deters the aphid Schizaphis graminum from feeding 111 and showed significant growth inhibitory activity against cancer cell lines through a cytostatic rather than cytotoxic effect 112. The production of bisorbibutenolide is hence likely to be aimed at fending off competitors, which is in agreement with findings in T. atroviride on larvae preferentially feeding on tmk3 mutants 73. However, the SOR cluster, which is mainly responsible for sorbicillinoid production in T. reesei, was acquired through lateral gene transfer and is subject to strong evolutionary selection 113. This cluster is not present in T. atroviride and consequently, a conservation of this phenomenon between T. reesei and T. atroviride remains to be shown.
Materials and methods
Strains and cultivation conditions. The wild-type strain used in this study is QM6aΔku80 50 (deficient in non-homologous end joining). For analysis of the impact of TMK1, TMK2 and TMK3 on gene regulation, enzymatic activity and biomass formation, strains were grown in liquid cultivation in constant light (white light; 1700 lx) or constant darkness at 200 rpm and 28 °C for 96 h. Before inoculation, strains were grown on 3% (w/v) malt extract (MEX) agar plates in constant darkness for 14 days (to exclude influences of the circadian rhythm). For liquid culture, 10⁹ conidia/L were inoculated in Mandels Andreotti minimal medium 114 with 1% (w/v) microcrystalline cellulose (Alfa Aesar, Karlsruhe, Germany) as carbon source, 5 mM urea and 0.1% peptone to induce germination. After 96 h, mycelia and supernatants were harvested; for the constant darkness cultures, only a very low red safety light (darkroom lamp, Philips PF712E, red, 15 W) was used as the single light source.
Construction of recombinant strains. Deletion of tmk1, tmk2 and tmk3 was done in QM6aΔku80 following the procedure described previously 80, with the hygromycin (hph) marker cassette constructed by yeast recombination of the 1 kb flanking regions up- and downstream of the gene of interest and the hph marker. Transformation was done by protoplasting, with 50 µg/mL hygromycin B as selection reagent (Roth, Karlsruhe, Germany) 115. Protoplasts were isolated three to six days after transformation and subjected to a minimum of two rounds of single spore isolation. Successful deletion was confirmed by the absence of the gene by PCR (Table S1). All three mutants were confirmed to have only a single integration of the deletion cassette by copy number determination 102.

Crossing and selection for fully fertile progeny for assessment of sexual development. All crosses for the analysis of sexual development were performed on 60 mm 2% MEX agar plates at 22 °C and 12 h light-dark cycles as previously described 116. To obtain progeny carrying the deletion in both mating types with a functional ham5 gene, the mutant strains in the QM6a (MAT1-2, defective ham5 copy) background were crossed with the female fertile strain FF1 (MAT1-1, functional ham5 copy). The FF1 strain was obtained by backcrossing the female fertile strain CBS999.97 (described in detail previously 65) 10 times with QM6a to acquire sexual fertility while retaining the QM6a phenotype 91. Ascospore derived progeny were analyzed for the presence of the gene deletion and the mating type by PCR (Table S1). The functionality of the ham5 gene was confirmed by high resolution melt curve (HRM) analysis, performed as described previously 117.

Isolation of nucleic acids and RTqPCR. Isolation of RNA was done from mycelia from liquid culture using the Qiagen RNeasy Plant mini kit following the manufacturer's guidelines. After DNase digestion (ThermoFisher) of 1 µg total RNA and cDNA synthesis (GoScript reverse transcriptase, Promega, Madison, WI, USA), RT-qPCR was performed using the GoTaq® qPCR Master Mix (Promega) as previously described, with sar1 as reference gene and the other primers listed in Table S1 94,118. For RT-qPCR three biological and three technical replicates were considered; for cbh1, twice three technical replicates were included, and the CFX Maestro analysis software was used for the analysis. Isolation of DNA for mutant and progeny screening was done following the rapid minipreparation protocol for fungal DNA described previously 119.
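Since the relative quantification model is only given by reference above, the following minimal Python sketch illustrates how such RT-qPCR data are commonly evaluated, assuming the widely used 2^-ΔΔCt calculation with sar1 as reference gene; the function name and the Cq values are hypothetical, and the cited protocol may normalise differently (e.g. with efficiency correction).

```python
# Minimal sketch of relative transcript quantification from Cq values,
# assuming the common 2^-ddCt model with sar1 as reference gene; the cited
# protocol may use a different (e.g. efficiency-corrected) normalisation.
import statistics

def rel_expression(cq_target, cq_ref, cq_target_ctrl, cq_ref_ctrl):
    """Fold change of a target gene versus a control condition (2^-ddCt)."""
    d_ct_sample = cq_target - cq_ref              # normalise to sar1 in the sample
    d_ct_control = cq_target_ctrl - cq_ref_ctrl   # normalise to sar1 in the control
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Hypothetical Cq values (three technical replicates each)
cbh1_mut = [24.1, 24.3, 24.2]   # e.g. a deletion strain in darkness
sar1_mut = [20.0, 20.1, 19.9]
cbh1_wt  = [22.5, 22.4, 22.6]   # wild-type QM6a, same condition
sar1_wt  = [20.2, 20.0, 20.1]

fold = rel_expression(statistics.mean(cbh1_mut), statistics.mean(sar1_mut),
                      statistics.mean(cbh1_wt), statistics.mean(sar1_wt))
print(f"cbh1 fold change vs. wild-type: {fold:.2f}")
```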
Analysis of enzyme activity and biomass formation. Enzymatic activity was measured from supernatants of liquid cultures using the CMC-cellulose kit (S-ACMC-L, Megazyme), which measures endo-1,4-ß-D-glucanases. For specific cellulase activities, the activities were related to the biomass produced, which was determined from frozen mycelia in the presence of insoluble cellulose 45. Briefly, mycelia were frozen in liquid nitrogen and ground with pestle and mortar before sonication and incubation in 0.1 M NaOH to break up the cells. The freed protein content was measured using the Bradford method.
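As an illustration of the normalisation described above, the short sketch below relates secreted endoglucanase activity to biomass protein to obtain a specific activity; all numbers, units and function names are hypothetical and only serve to show the arithmetic.

```python
# Minimal sketch of the specific-activity calculation: secreted endoglucanase
# activity normalised to biomass protein (Bradford). Values and units are
# hypothetical; the kit readout and protein units used in the study may differ.
import statistics

def specific_activity(activity_u_per_ml, protein_mg_per_ml):
    """Units of activity per mg biomass protein for one replicate."""
    return activity_u_per_ml / protein_mg_per_ml

# Hypothetical biological triplicates (activity in U/mL, protein in mg/mL)
activities = [0.82, 0.75, 0.90]
proteins   = [1.9, 1.7, 2.1]

spec = [specific_activity(a, p) for a, p in zip(activities, proteins)]
print(f"specific activity: {statistics.mean(spec):.2f} "
      f"+/- {statistics.stdev(spec):.2f} U/mg protein")
```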
Chemotropic response assay. The chemotropism assay was done essentially as described previously 22, except that the water agar was supplemented with 0.0025% peptone as optimized previously 84. The chemoattractant (1% glucose) was applied onto the plates, with water as a control on the opposite side. The orientation of germ tubes was determined under the microscope (VisiScope TL524P microscope; 200× magnification) and chemotropic indices were calculated from a minimum of 3 biological replicates, counting a minimum of 400 germ tubes per plate, as previously described 22.
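The chemotropic index itself is only defined by reference above; the sketch below assumes the commonly used definition (germ tubes oriented towards the attractant minus those oriented towards the solvent control, divided by the total number scored), with hypothetical counts, and the exact scoring rules of the cited protocol may differ.

```python
# Minimal sketch of a chemotropic index from germ tube counts, assuming the
# commonly used definition; counts are hypothetical.
import statistics

def chemotropic_index(towards_attractant, towards_control, total_scored):
    """Percentage bias of germ tube orientation towards the attractant."""
    return 100.0 * (towards_attractant - towards_control) / total_scored

# Hypothetical counts for three biological replicates (>= 400 germ tubes each)
replicates = [(230, 180, 410), (245, 175, 420), (238, 182, 430)]
indices = [chemotropic_index(*r) for r in replicates]
print(f"chemotropic index: {statistics.mean(indices):.1f} "
      f"+/- {statistics.stdev(indices):.1f} %")
```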
Photometric analysis of sorbicillinoid production. Supernatants of liquid cultivations were centrifuged for 5 min at 10,000 g to remove residual cellulose, and absorbance at 370 nm, indicative of yellow sorbicillinoids, was measured from biological triplicates.
Isolation of (21S)-bisorbibutenolide. The dry crude extract (350 mg) was dissolved in 2 mL pure methanol (MeOH) and the obtained suspension centrifuged at 14,000 rpm for 3 min. The supernatant was subsequently subjected to column chromatography over Sephadex LH20, eluted isocratically with pure MeOH. A total of 30 fractions of 5 mL each were collected. Fractions 17 to 21 were pooled (11.3 mg) and finally purified by preparative thin layer chromatography (precoated glass plates, silica gel 60, F254, 0.25 mm thickness) developed in CHCl3/MeOH (95:5). This step afforded 4.3 mg of (21S)-bisorbibutenolide. All separation steps were monitored by HPLC.
Secondary metabolite analysis by HPLC.
For the extraction of secondary metabolites, strains were grown on 3% malt extract medium in constant darkness for 14 days. For each strain three biological replicates were used. For each sample, two agar plugs of 1.8 cm² were taken from 3 plates. Agar plugs were collected in 15 mL tubes, 3 mL of 50% acetone in water (v/v) was added, and the tubes were put into an ultrasonic bath for 15 min for better dissolution. Subsequently 1 mL of chloroform was added. Tubes were then centrifuged at 4 °C at 1000 g for 1 min for phase separation. The organic phase was transferred to a glass vial and the chloroform extraction was repeated twice before the vials were left to evaporate overnight. The dry extracts were redissolved in 140 μL methanol and stored in glass vials at −20 °C before analysis. Analytical HPLC measurements were performed on an Agilent 1100 series instrument coupled with UV-diode array detection at 230 nm and a Hypersil BDS column (100 × 4 mm, 3 µm grain size). An aqueous buffer (15 mM H3PO4 and 1.5 mM Bu4NOH) (A) and MeOH (B) were used as eluents. The following elution system was applied: from 55 to 95% B within 8 min, then 95% B was kept for 5.0 min, with a flow rate of 0.5 mL min−1. The injection volume was 5.0 µL.
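Purely as an illustration of the gradient programme described above, the following sketch encodes the eluent composition over the run time; instrument control is of course performed in the vendor software, and the helper function is hypothetical.

```python
# Purely illustrative encoding of the analytical HPLC gradient described above
# (55-95 % B over 8 min, then 95 % B held for 5 min).
def percent_B(t_min):
    """Eluent B (MeOH) fraction in % at time t (min) for the applied gradient."""
    if t_min <= 8.0:
        return 55.0 + (95.0 - 55.0) * t_min / 8.0   # linear ramp 55 -> 95 % B
    elif t_min <= 13.0:
        return 95.0                                  # hold at 95 % B
    raise ValueError("time outside the 13 min method")

for t in (0, 4, 8, 13):
    print(f"t = {t:4.1f} min : {percent_B(t):.1f} % B")
```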
Statistics.
Statistical significance was evaluated by the t-test in RStudio (compare_means, ggpubr version 0.4.0); **p value < 0.01, *p value < 0.05. At least three biological replicates were considered in every assay.

NMR spectroscopy. For NMR spectroscopic measurements, (21S)-bisorbibutenolide was dissolved in CD3OD (~ 4.2 mg in 0.7 mL) and transferred into 5 mm high precision NMR sample tubes. All spectra were measured on a Bruker DRX-600 at 600.18 MHz (1H) or 150.91 MHz (13C) and processed using the Topspin 3.5 software. The measurement temperature was 298 K ± 0.05 K. 1D spectra were recorded by acquisition of 64 k data points, and the Fourier transformed spectra were obtained with a spectral range of 7200 Hz (1H) and 32,000 Hz (13C), respectively. To determine the 2D COSY, TOCSY, NOESY, HMQC, and HMBC spectra, 128 experiments with 2048 data points each were recorded, zero filled and Fourier transformed to 2D spectra with a range of 6000 Hz (1H) and 24,000 Hz (HSQC) or 32,000 Hz (HMBC) (13C), respectively. Residual CD2HOD was used as internal standard for 1H NMR measurements (δH 3.34) and CD3OD for 13C NMR measurements (δC 49.0).
Mass spectrometry.
Mass spectra were measured on a high resolution time-of-flight (hr-TOF) mass spectrometer (maXis, Bruker Daltonics) by direct infusion electrospray ionization (ESI) in positive and negative ionization mode (mass accuracy +/− 5 ppm). TOF MS measurements were performed within the selected mass range of m/z 100-2500. ESI was performed with a capillary voltage of 4 kV to maintain a (capillary) current between 30 and 50 nA. The nitrogen temperature was maintained at 180 °C using a flow rate of 4.0 L min−1 and the N2 nebulizer gas pressure at 0.3 bar. Numbering of protons and carbons is shown in Fig. 5D and in agreement with that used previously 98. All data as well as the naming of the compound are in agreement with those reported earlier for this compound 97,98 (there named "trichotetronine"). It should be noted that the naming of this compound, particularly with regard to the stereochemistry at position 21, as well as of structurally and biosynthetically closely related compounds, is not entirely consistent throughout the literature. 1D and 2D NMR spectra are shown in Figures S2, S3 and S5-S9, HR ESI MS spectra (positive and negative mode) are shown in Figures S10 and S11, and the chromatogram as well as the UV spectrum are shown in Figure S4.
Data availability
The datasets generated and analysed during the current study are available from the corresponding author on reasonable request.
Numerical analysis of a new formulation for the Oseen equations in terms of vorticity and Bernoulli pressure
A variational formulation is introduced for the Oseen equations written in terms of vorticity and Bernoulli pressure. The velocity is fully decoupled using the momentum balance equation, and it is later recovered by a post-process. A finite element method is also proposed, consisting in equal-order N\'ed\'elec finite elements and piecewise continuous polynomials for the vorticity and the Bernoulli pressure, respectively. The {\it a priori} error analysis is carried out in the $\mathrm{L}^2$-norm for vorticity, pressure, and velocity; under a smallness assumption either on the convecting velocity, or on the mesh parameter. Furthermore, an {\it a posteriori} error estimator is designed and its robustness and efficiency are studied using weighted norms. Finally, a set of numerical examples in 2D and 3D is given, where the error indicator serves to guide adaptive mesh refinement. These tests illustrate the behaviour of the new formulation in typical flow conditions, and they also confirm the theoretical findings.
Introduction
In this paper, we propose a reformulation of the Oseen equations using only vorticity and Bernoulli pressure. A similar splitting of the unknowns has been recently proposed in [9] for the Brinkman equations. We extend those results for the Oseen problem and propose a residual-based a posteriori error estimator whose properties are studied using a weighted energy norm, as well as the L 2 -norm.
There is an abundant body of literature dealing with numerical methods for incompressible flow problems using the vorticity as a dedicated unknown. These include spectral elements [6,10], stabilised and least-squares schemes [5,15], and mixed finite elements [8,[22][23][24]30], to name a few. Works specifically devoted to the analysis of numerical schemes for the Oseen equations in terms of vorticity include the non-conforming exponentially accurate least-squares spectral method for Oseen equations proposed in [27], the least-squares method proposed in [32] for Oseen and Navier-Stokes equations, the family of vorticity-based first-order Oseen-type systems studied in [18], the enhanced accuracy formulation in terms of velocity-vorticity-helicity investigated in [12], and the recent mixed and DG discretisations for Oseen's problem in velocity-vorticity-pressure form, proposed in [7].
The method advocated in this article focuses on Nédélec elements of order k ≥ 1 for the vorticity and piecewise continuous polynomials of degree k for the Bernoulli pressure. An abridged version of the analysis for this formulation has been recently advanced in [3]. In contrast, here we provide details on the a priori error estimates rigorously derived for the finite element discretisations in the L 2 -norm, under sufficient regularity and under a smallness assumption on the mesh parameter. Furthermore, we prove error estimates in the L 2 -norm for two post-processes for the velocity field. The first one is similar to that used in [9] for the Brinkman equations: it exploits the momentum equation and direct differentiation of the discrete vorticity and Bernoulli pressure. For the second post-process we solve an additional elliptic problem emanating from the constitutive equation defining the vorticity; it uses the continuity equation, with the discrete vorticity appearing on the right-hand side. This problem is discretised with, e.g., piecewise linear and continuous polynomials.
On the other hand, we address the construction of residual-based a posteriori error estimators which are reliable and efficient. Adaptive mesh refinement strategies based on a posteriori error indicators have a significant role in computing numerical solutions to partial differential equations, and this is of high importance in the particular context of incompressible flow problems. Robust and efficient error estimators make it possible to restore the optimal convergence of finite element methods, specifically when complex geometries or singular coefficients are present (which could otherwise lead to non-convergence or to the generation of spurious solutions) [33], and they can provide substantial enhancement to the accuracy of the approximations [1]. A posteriori error analyses for vorticity-based equations are already available from the literature (see, e.g., [4,5,8,16]), but considering formulations substantially different from the one we put forward here. Our analysis complements these works by establishing upper and lower bounds in different norms, and using an estimator that is scaled according to the expected regularity of the solutions (which in turn also depends on the regularity of the domain). Reliability of the a posteriori error estimator is proved in the L 2 -norm, and local efficiency of the error indicator is shown by using a standard technique based on bubble functions.
We further remark that the present method has the advantage of direct computation of vorticity, and it is relatively competitive in terms of computational cost (for instance, when compared with the classical MINI-element or Taylor-Hood schemes). The type of vorticity-based formulations we use here can be of additional physical relevance in scenarios where boundary effects are critical, for example as in those discussed in [21,28]. Moreover, the corresponding analysis is fairly simple, only requiring classical tools for elliptic problems.
We have structured the contents of the paper in the following manner. We present the model problem as well as the two-field weak formulation and its solvability analysis in Section 2. The finite element discretisation is constructed in Section 3, where we also derive stability and convergence bounds and present two post-processes for the velocity field. Section 4 is devoted to the analysis of reliability and efficiency of a weighted residual-based a posteriori error indicator, and we close in Section 5 with a set of numerical tests that illustrate the properties of the proposed numerical scheme in a variety of scenarios, including validation of the adaptive refinement procedure guided by the error estimator.

We consider the Oseen equations written in terms of velocity, vorticity, and Bernoulli pressure (see, e.g., [7,29]), stated as system (2.1) in Ω, where ν > 0 is the kinematic viscosity, and a linearisation and backward Euler time stepping explain the terms σ > 0 as the inverse of the time step, and β as an adequate approximation of the velocity (representing, for example, the velocity at a previous time step). The Bernoulli pressure relates to the true fluid pressure P as p := P + ½ u · u − λ, where λ is the mean value of ½ u · u. The structure of (2.1) suggests introducing the rescaled vorticity vector ω := √ν curl u as a new unknown. Thus, the Oseen problem can be formulated as: Find u, ω, p such that (2.2)-(2.5) hold. The vector of external forces f absorbs the contributions related to previous time steps and to the fixed states in the linearisation procedure that leads from the Navier-Stokes to the Oseen equations. Along with the Dirichlet boundary condition for the velocity on Γ, the additional condition (p, 1) 0,Ω = 0 is required to have uniqueness of the Bernoulli pressure. We will also assume that the data are regular enough: f ∈ L 2 (Ω) 3 and β ∈ L ∞ (Ω) 3 . However, we do not restrict the behaviour of div β. For different assumptions on β we refer to, e.g., [11,19,20,32].
For the sake of conciseness of the presentation, the analysis in the sequel is carried out for homogeneous boundary conditions on the velocity, i.e. g = 0 on Γ. Non-homogeneous boundary data, as well as mixed boundary conditions, will be considered in the numerical examples in Section 5, below.
Here, L 2 0 (Ω) represents the set of L 2 (Ω) functions with mean value zero. In addition, for the sake of the subsequent analysis, it is convenient to introduce the space V.

Lemma 2.1 The space V, endowed with the norm (·, ·) V defined in (2.6), is a Hilbert space.
Proof. Note that (2.6) is in fact a norm, as (θ, q) V = 0 implies (θ, q) = (0, 0) a.e. Now, it is easy to check that the norm satisfies the parallelogram identity and hence, by the polarisation identity, it induces an inner product. Therefore, V equipped with this inner product is an inner product space. To complete the proof, it remains to show that this space is complete. To this end, one takes an arbitrary Cauchy sequence {(θ n , q n )} n∈N in V and verifies that its limit belongs to V.

While our whole development will focus on this vorticity-pressure formulation, we stress that from (2.8) we can immediately obtain an expression for the velocity (see (2.12)). The reason for scaling the vorticity with √ν is now apparent from the structure of the variational form in (2.10). On the other hand, if we write instead ω̃ := curl u, then (2.9) can be written in a completely analogous form, and the analysis of this problem follows the same structure as that of (2.9).
Let us first provide an auxiliary result to be used in the derivation of a priori error estimates.
Lemma 2.2
The multilinear form A satisfies, for all (θ, q) ∈ V, the lower bound (2.13) as well as the continuity bound
A((ω, p), (θ, q)) ≤ (ω, p) V (θ, q) V . (2.14)
Proof. From the definition of A(·, ·) we readily obtain an intermediate relation; subsequently, an appeal to the Cauchy-Schwarz inequality leads to the asserted bounds.
We next establish the stability of problem (2.9) (Lemma 2.3). Proof. Choose (θ, q) = (ω, p) in (2.9). From (2.13) with (2.16), and a direct bound, we obtain a first estimate. Then, we note that p 0,Ω ≤ C s ∇p −1,Ω , and invoking the definition of the H −1 -norm it is observed that ∇p can be controlled by the remaining terms of the momentum balance. Using integration by parts for the second term in this last relation, and using that q ∈ H 1 0 (Ω) as well as ∇ · curl ω = 0, so that ( √ ν curl ω, ∇q) = 0, we end up with the desired stability bound. Altogether, this completes the proof.
On the other hand, for the existence we note that the multilinear form A(·, ·) is both coercive and bounded in V with respect to (·, ·) V because of Lemmas 2.2 and 2.3. Therefore, an appeal to the Lax-Milgram lemma completes the rest of the proof.
Remark 2.2 Even if β violates (2.16) we can still address the well-posedness of problem (2.9). Since problem (2.1) with the boundary condition u = 0 on Γ is equivalent to (2.9) under the assumption of sufficient regularity, the unique solvability of (2.1) implies that of (2.9). Now, denoting by P the Leray projection operator that maps L 2 onto a divergence-free space, we can see that the problem L(u) := P(−ν∆u + curl u × β + σu + ∇p) = Pf admits a Fredholm alternative (see, for instance, [17]). Note also that, as long as zero is not in the spectrum of L, the operator L is invertible. With the null space of L being trivial, the operator L is indeed an isomorphism onto the dual space Z′ of Z. Finally, p is recovered in a standard way.
In any case, for the rest of the paper we will simply assume that (A) The problem (2.9) has a unique weak solution (ω, p) ∈ V.
Finite element discretisation and error estimates
This section focuses on finite element approximations and their a priori error estimates.
Galerkin scheme and solvability
Let {T h (Ω)} h>0 be a shape-regular family of partitions of the polyhedral region Ω, by tetrahedrons T of diameter h T , with mesh size h := max{h T : T ∈ T h (Ω)}. In what follows, given an integer k ≥ 1 and a subset S of R 3 , P k (S) will denote the space of polynomial functions defined locally in S and of total degree ≤ k. Now, for any T ∈ T h (Ω) we recall the definition of the local Nédélec space, in which R k (T ) := {p ∈ P̃ k (T ) 3 : p(x) · x = 0}, and where P̃ k is the subset of homogeneous polynomials of degree k. With this we define the discrete spaces Z h and Q h for the vorticity and the Bernoulli pressure, and remark that functions in Z h have continuous tangential components across the faces of T h (Ω).
Let us recall that for s > 1/2 the Nédélec global interpolation operator N h : H s (curl; Ω) → Z h (cf. [2]) satisfies the approximation property (3.2). On the other hand, for all s > 1/2, the usual Lagrange interpolant satisfies the corresponding estimate (3.3). The Galerkin approximation of (2.9) then reads as problem (3.4), where the multilinear form A : V h × V h → R and the linear functional F : V h → R are specified as in (2.10) and (2.11), respectively.
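As an illustration of how the pair Z h × Q h can be realised in practice, the following is a minimal sketch in legacy FEniCS (dolfin) syntax. The mesh, the parameter values, and the generic curl/grad building blocks shown are placeholders, and the actual multilinear form A from (2.10) is not reproduced here.

```python
# Equal-order Nedelec / continuous Lagrange pair for (vorticity, Bernoulli pressure).
# Hypothetical set-up: unit-cube mesh and parameter values are illustrative only.
from dolfin import (UnitCubeMesh, FiniteElement, MixedElement, FunctionSpace,
                    TrialFunctions, TestFunctions, Constant, curl, grad, inner, dx)

k = 1
mesh = UnitCubeMesh(8, 8, 8)                         # stand-in for T_h(Omega)
Zk = FiniteElement("N1curl", mesh.ufl_cell(), k)     # Nedelec elements for omega_h
Qk = FiniteElement("Lagrange", mesh.ufl_cell(), k)   # continuous P_k for p_h
W = FunctionSpace(mesh, MixedElement([Zk, Qk]))

(omega, p) = TrialFunctions(W)
(theta, q) = TestFunctions(W)
nu, sigma = Constant(0.1), Constant(100.0)           # viscosity and 1/time-step

# Generic building blocks of curl-curl / grad-grad type from which vorticity-
# pressure forms of this kind are assembled; the true A(.,.) also carries the
# beta-dependent convective coupling, which is omitted here.
blocks = (nu*inner(curl(omega), curl(theta))*dx
          + sigma*inner(omega, theta)*dx
          + inner(grad(p), grad(q))*dx)
```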
Next, let us prove that the discrete formulation (3.4) is well-posed.
Before that, we address the stability of the discrete problem.
Lemma 3.1 (discrete stability). For h sufficiently small, the solution (ω h , p h ) of (3.4) is bounded in terms of the data, with constants independent of h.

Proof. A use of the Cauchy-Schwarz inequality, together with the estimate (2.17), eventually leads to the bound (3.5). In order to complete the proof, we require an estimate for σ 1/2 ω h 0,Ω . For this we apply the Aubin-Nitsche duality argument to an adjoint problem whose solution (after assuming the natural additional regularity ω ∈ H δ (curl; Ω) and p ∈ H 1+δ (Ω), for some δ ∈ (1/2, 1]) satisfies a regularity bound with a uniform constant C reg > 0. Then, we set (θ, q) = (ω h , p h ) and find that, for all (θ h , q h ) ∈ Z h × Q h , the relation (3.6) holds. In this way, we obtain (3.7), and on substitution of (3.7) into (3.6) we readily see that there is a positive h 0 such that for 0 < h ≤ h 0 the desired discrete stability holds with some positive γ 0 , independent of h. This completes the rest of the proof.
Proof. Since the assembled discrete problem (3.4) is a square linear system, it is enough to establish uniqueness of solution. Considering f = 0 and using (θ h , q h ) := (ω h , p h ) as a test function in (3.4), the discrete stability result in Lemma 3.1 (which is valid assuming (2.16)) immediately implies that ω h = 0 and p h = 0, thus concluding the proof.
Remark 3.1 When the condition (2.16) is satisfied, we modify the stability proof of Lemma 3.1 as follows: from (3.5), we use the Young inequality on the right-hand side; applying Young's inequality once again, and appealing to (2.17), we obtain the desired stability result.
Note that in this case, we do not need a smallness condition on the mesh parameter h.
A priori error estimates
In this subsection, using a classical duality argument we bound the error measured in the L 2 -norm by the error in the norm (·, ·) V . Then, we establish an energy error estimate that eventually yields an optimal bound in L 2 .
Lemma 3.2 Let (ω, p) ∈ V and (ω h , p h ) ∈ V h be the unique solutions to the continuous and discrete problems (cf. (2.9) and (3.4)), respectively. Then, the error in the L 2 -norm can be bounded in terms of the error in the norm (·, ·) V .

Proof. We appeal again to the Aubin-Nitsche duality argument. For this, let us consider the adjoint continuous problem (3.9). In addition, let us suppose that (3.9) is well-posed, that ω ∈ H δ (curl; Ω) and p ∈ H 1+δ (Ω), and that there exists a constant C reg > 0 such that the corresponding regularity bound holds. Next, we proceed to test the adjoint problem (3.9) against (θ, q) := (ω − ω h , p − p h ) and to use the error equation. Here, we have used (3.2) and (3.3) with s = δ, and this completes the rest of the proof.
An error estimate in the energy norm can also be derived in the following manner.
Theorem 3.2 Assume that problem (2.9) has a unique solution (ω, p) satisfying the additional regularity ω ∈ H s (curl; Ω) and p ∈ H 1+s (Ω), for some s ∈ (1/2, k]. Then, there exists C > 0, independent of h, such that the stated error estimates hold for h small enough.

Proof. We first rewrite (3.8), then use boundedness, (3.2) and (3.3) to arrive at (3.11). Next, for the term on the left-hand side of (3.11), we apply (2.13); a use of Lemma 3.2 then yields the desired bound. Choosing h small, the term within brackets can be made positive, which concludes the proof.
As a consequence of Theorem 3.2, with e ω := ω − ω h and e p := p − p h , we obtain the following inf-sup condition: there exists γ 0 > 0, independent of h, such that the corresponding discrete inf-sup inequality holds.
Convergence of the post-processed velocity
Let (ω h , p h ) ∈ V h be the unique solution of the discrete problem (3.4). Then, following (2.12), we can recover the discrete velocity as the element-wise discontinuous function defined in (3.13) for each T ∈ T h (Ω). Consequently, we can state an error estimate for the post-processed velocity.
Theorem 3.3 Let (ω, p) ∈ V be the unique solution of (2.9), and (ω h , p h ) ∈ V h be the unique solution of (3.4). Assume that ω ∈ H s (curl; Ω), p ∈ H 1+s (Ω) and f ∈ H s (Ω) 3 , for some s ∈ (1/2, k]. Then, there exists a positive constant C, independent of h, such that the corresponding error bound for the post-processed velocity holds.

Proof. From (2.12), (3.13), and the triangle inequality, the result follows from standard estimates satisfied by P h , as well as from Theorem 3.2.
An issue with the post-process (3.13) is that it requires numerical differentiation (taking the curl of ω h and the gradient of p h ). A possible way of getting around this problem is to introduce the space U h defined in (3.14) and to recover the discrete velocity in this space, using the discrete versions of (2.3), (2.4), and (2.5).
This results in finding ũ h ∈ U h such that (3.15) holds. The discrete velocity produced by (3.15) not only satisfies the expected approximation property but also, thanks to the identity relating vector Laplacians with curl and divergence, −∆Φ = curl curl Φ − ∇(div Φ), one can show, using duality arguments, an improved estimate in which s and δ are given as in Theorem 3.2.
A posteriori error analysis for the 2D problem
In this section, we propose a residual-based a posteriori error estimator. For the sake of clarity, we restrict our analysis to the two-dimensional case (the extension to 3D can be carried out in a similar fashion). Therefore, the functional space Z considered in the a priori error analysis now becomes Z := H 1 (Ω), with the corresponding discrete space given in (4.1). We note that in the 2D case, the duality arguments presented in Section 3 hold for any δ ∈ (0, 1]. In particular, this fact will be considered in the definition of the local a posteriori error indicator. Moreover, to keep the notation clear, in this section we will denote by N h the usual Lagrange interpolant in Z h . For each T ∈ T h we let E(T ) be the set of edges of T , and we denote by E h the set of all edges in T h , split as E h = E h (Ω) ∪ E h (Γ), where E h (Ω) := {e ∈ E h : e ⊂ Ω} and E h (Γ) := {e ∈ E h : e ⊂ Γ}. In what follows, h e stands for the diameter of a given edge e ∈ E h and t e = (−n 2 , n 1 ), where n e = (n 1 , n 2 ) is a fixed unit normal vector of e. Now, let q ∈ L 2 (Ω) be such that q| T ∈ C(T ) for each T ∈ T h ; then, given e ∈ E h (Ω), we denote by [q] the jump of q across e, that is, [q] := (q| T )| e − (q| T ′ )| e , where T and T ′ are the triangles of T h sharing the edge e. Moreover, let v ∈ L 2 (Ω) 2 be such that v| T ∈ C(T ) 2 for each T ∈ T h . Then, given e ∈ E h (Ω), we denote by [v · t] the tangential jump of v across e, that is, [v · t] := ((v| T )| e − (v| T ′ )| e ) · t e , where T and T ′ are the triangles of T h sharing the edge e.
Next, let k ≥ 1 be an integer and let Z h , Q h and U h be given by (4.1), (3.1), and (3.14), respectively. Let (ω, p) ∈ Z × Q and (ω h , p h ) ∈ Z h × Q h be the unique solutions to the continuous and discrete problems (2.9) and (3.4), with data satisfying f ∈ L 2 (Ω) 2 and f ∈ H 1 (T ) 2 for each T ∈ T h . For each T ∈ T h and for δ ∈ (0, 1], we introduce the local a posteriori error indicator η T defined in (4.2). Let us now establish reliability and quasi-efficiency of (4.2).
Reliability
This subsection focuses on proving the reliability of the estimator in the L 2 -norm, and we note that this bound holds for δ ∈ (0, 1].
Theorem 4.1 There exists a positive constant C rel , independent of the discretisation parameter h, such that the reliability bound (4.3) holds.

Proof. Note that A((e ω , e p ), (θ, q)) = R(θ, q), (4.4) where R : Z × Q → R is the corresponding residual operator. Integration by parts on this residual yields its element-wise and edge-wise contributions. For the estimate (4.3), we appeal to the Aubin-Nitsche argument, using (3.9) with (θ, q) = (e ω , e p ). Then, we can rewrite the residual accordingly, and an application of the Cauchy-Schwarz inequality together with the approximation properties (3.2), (3.3) and (3.10) completes the rest of the proof.
Efficiency
This subsection deals with the efficiency of the a posteriori error estimator in the weighted V-norm depending on δ ∈ (0, 1) (a result that we call quasi-efficiency), and a bound in the L 2 -norm, valid for δ = 1.
Theorem 4.2 (Quasi-efficiency)
There is a positive constant C eff , independent of h, such that for δ ∈ (0, 1] the quasi-efficiency bound holds, where h.o.t. denotes higher-order terms and h δ T h (e ω , e p ) V denotes the weighted error norm obtained by scaling the local contributions of (e ω , e p ) V with h δ T , T ∈ T h . The second efficiency result is stated as follows.
Theorem 4.3 (Efficiency)
There is a positive constant C eff , independent of h, such that for δ = 1 the efficiency bound holds in the L 2 -norm.

A major role in the proof of efficiency is played by element and edge bubbles (locally supported non-negative functions), whose definition we recall in what follows. For T ∈ T h (Ω) and e ∈ E(T ), let ψ T and ψ e , respectively, be the interior and edge bubble functions defined as in, e.g., [1]. Let ψ T ∈ P 3 (T ) with supp(ψ T ) ⊂ T , ψ T = 0 on ∂T and 0 ≤ ψ T ≤ 1 in T . Moreover, let ψ e | T ∈ P 2 (T ) with supp(ψ e ) ⊂ Ω e := {T ∈ T h (Ω) : e ∈ E(T )}, ψ e = 0 on ∂T \ e, and 0 ≤ ψ e ≤ 1 in Ω e . Again, let us recall an extension operator E : C 0 (e) → C 0 (T ) that satisfies E(q) ∈ P k (T ) and E(q)| e = q for all q ∈ P k (e) and for all k ∈ N ∪ {0}.
We now summarise the properties of ψ T , ψ e and E (Lemma 4.1); for a proof, see [1] or [33]. (i) For T ∈ T h and for v ∈ P k (T ), there is a positive constant C 1 such that the corresponding inverse estimate involving ψ T holds. (ii) For e ∈ E h and v ∈ P k (e), there exists a positive constant, again denoted C 1 , such that the analogous estimate involving ψ e holds. (iii) For T ∈ T h with e ∈ E(T ) and for all v ∈ P k (e), there is a positive constant, again denoted C 1 , such that the corresponding trace-type estimate holds.

Proof of Theorem 4.2. With the help of the L 2 (T ) 2 -orthogonal projection P T onto P ℓ (T ) 2 , for ℓ ≥ k, with respect to the weighted L 2 -inner product (ψ T f , g), for f , g ∈ L 2 (T ) 2 , we bound the element residuals R 1 and R 2 . For the second term on the right-hand side, a use of Lemma 4.1 shows the required control, and in a similar manner we can derive the remaining bounds. We then choose (θ, q) = ψ T (P T R 1 , P T R 2 ) in (4.4) and invoke estimate (i) of Lemma 4.1; altogether, we arrive at (4.6). Regarding the estimates associated with J h,1 and J h,2 , we introduce, respectively, P T and P e as the weighted L 2 -orthogonal projections (say, with respect to the weighted inner product (ψ e f, g) e ) onto P ℓ (T ) 2 and P ℓ (e), for ℓ ≥ k. Then, we can bound J h,1 and J h,2 in terms of [ P e (J h,1 ) · t] 2 0,e + [ P e (J h,2 ) · n] 2 0,e . (4.7) In order to estimate the first term on the right-hand side of (4.7) we use the trace inequality. Finally, we substitute (4.8) and (4.9) in (4.7), and then combine the result with (4.6) to complete the rest of the proof.
Remark 4.1 Note that the a posteriori lower bound derived in Theorem 4.3 is valid only under the assumption of H 2 -regularity, that is, for δ = 1. When δ ∈ (0, 1), obtaining an efficiency result for the a posteriori error indicator in the L 2 -norm is much more involved, essentially due to the presence of corner singularities. For instance, a reliable and efficient estimator using weighted L 2 -norms is available for the Poisson equation in [34]. A similar analysis could eventually be carried out in the present case, provided additional regularity is established using weighted Sobolev spaces and appropriate interpolation results. However, here we restrict ourselves to verifying these properties numerically in the next section.
In addition, the result of Theorem 4.2 does indicate that the estimator is quasi-efficient, as the error in the L 2 -norm, (σ 1/2 e ω , e p ) 0,Ω , is proportional to C (e ω , e p ) 0,Ω .
Numerical tests
In this section, we report the results of some numerical tests carried out with the finite element method proposed in Section 3. The solution of all linear systems is carried out with the multifrontal massively parallel sparse direct solver MUMPS.
The discrete formulation is extended to the case of mixed boundary conditions, assuming that the domain boundary is disjointly split into two parts Γ 1 and Γ 2 such that (2.5) is replaced by condition (5.1) (see similar treatments in [13,14]), and the condition of zero average is imposed on the Bernoulli pressure, using a real Lagrange multiplier approach, only if Γ 2 = ∅. Using (5.1), the linear functional F h : V h → R defining the finite element scheme adopts a modified specification.

Example 1. First, we construct a manufactured solution in the two-dimensional domain Ω = (−1, 1) 2 , assess the convergence properties, and verify the rates anticipated in Lemma 3.2 and Theorems 3.2 and 3.3. We compute individual errors and convergence rates as usual for all fields on successively refined partitions of Ω. For this test we assume that Γ 1 is composed of the horizontal edges and the right edge, whereas Γ 2 is the rest of the boundary. We propose closed-form and smooth solutions satisfying u = 0 on Γ 1 . In addition, we consider a given convecting field β together with the model parameters σ = 100 and ν = 0.1, which in turn fulfil (2.16). These exact solutions lead to a nonzero right-hand side that we use to verify the accuracy of the finite element approximation.
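As a small aside on how the convergence rates reported in the tables below can be computed, the sketch that follows evaluates rate = log(e_i/e_{i+1})/log(h_i/h_{i+1}) from successive errors and mesh sizes; the numbers used are illustrative only, not the values of Table 5.1.

```python
# Observed convergence rates from errors on successively refined meshes.
# Illustrative data only; replace h and err with the measured values.
import numpy as np

def observed_rates(h, err):
    h, err = np.asarray(h, float), np.asarray(err, float)
    return np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])

h   = [0.2, 0.1, 0.05, 0.025]
err = [2.1e-2, 1.05e-2, 5.2e-3, 2.6e-3]
print(observed_rates(h, err))   # values close to 1 indicate O(h) convergence
```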
We report in Table 5.1 the error history of the method in the L 2 - and V-norms, where we also show the convergence of the post-processed velocity using the direct computation (3.13), producing u h ∈ U h , and the alternative post-processing through solving the auxiliary problem (3.15), giving ũ h ∈ U h . It can be clearly seen that the optimal order of convergence is reached for all fields for both polynomial degrees k = 1 and k = 2, which confirms the sharpness of the theoretical error bounds.

Example 2A. Next, we consider smooth closed-form solutions defined through a given function ϕ, and we take ν = 10 −3 , σ = 10, and β(x, y) := curl ϕ. Only Dirichlet velocity conditions are considered in this example (that is, Γ 2 is empty), which amounts to adding a real Lagrange multiplier imposing the condition of zero average for the Bernoulli pressure. In Table 5.2 we collect the error history of the method, including individual errors and convergence rates, as well as the errors analysed in Theorems 4.1, 4.2, and 4.3. As the estimator and the quasi-efficiency depend on the value of δ, we explore three cases δ ∈ {1/10, 1/2, 1}. The robustness is assessed by computing the effectivity indexes defined in (5.2), i.e., the ratios between the estimator and the corresponding error norms. The results confirm that the estimator is robust with respect to the weighted V-norm for all values of δ, but the second-last column of the table indicates that the estimator is not necessarily efficient in the L 2 -norm for δ < 1.
Next, as Examples 2B and 2C, we consider exact solutions with higher gradients and examine how the estimator performs in guiding adaptive mesh refinement as well as in restoring optimal convergence rates. For this we follow the standard procedure: solve the discrete problem → estimate the error → mark cells for refinement → refine the mesh → solve again. The marking is based on the equi-distribution of the error, in such a way that the diameter of each new element (contained in a generic triangle T of the initial coarse mesh) is proportional to the initial diameter times the ratio η h /η T , where η h is the mean value of η over the initial mesh [33]. The refinement is then done on the marked elements as well as on an additional small layer in order to maintain the regularity of the resulting grid. An extra smoothing step is also applied after the refinement step. A schematic version of this size-field rule is sketched below.
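The following minimal Python sketch encodes only the marking/size-field rule quoted above (solve, estimate, and remeshing are not reproduced); the element indicators used are synthetic stand-ins for η T from (4.2).

```python
# Schematic size-field computation for the adaptive loop
# (solve -> estimate -> mark -> refine), reduced to the marking rule.
import numpy as np

def target_sizes(h, eta):
    """New element diameters: old diameter times mean(eta)/eta_T,
    never enlarging an element (refinement only)."""
    h, eta = np.asarray(h, float), np.asarray(eta, float)
    return h * np.minimum(1.0, eta.mean() / eta)

# synthetic example: 10 elements, one of them carrying a large local indicator
h0   = np.full(10, 0.1)
eta0 = np.array([0.01]*9 + [0.2])
print(target_sizes(h0, eta0))   # only the flagged element is asked to shrink
```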
For Example 2B we concentrate on the L-shaped domain Ω = (−1, 1) 2 \ (0, 1) 2 , and use the exact solutions also to compute boundary data and right-hand side forcing terms. We keep the values of ν, σ from Example 1. The regularity of the coupled problem (due to the corner singularity) indicates that δ = 2/3. We collect the results in Table 5.3, showing similar trends as those seen in Table 5.2, that is, optimal convergence for all fields and robustness of the a posteriori error estimator in the V-norm. Samples of approximate vorticity, Bernoulli pressure, and post-processed velocity, also for the case of δ = 2/3 and after six steps of adaptive mesh refinement, are shown in Figure 5.1.
For Example 2C, starting from a coarse initial triangulation of the domain, we construct sequences of uniformly and adaptively refined meshes and compute errors between the approximate solutions and closed-form solutions exhibiting a vertical inner layer near the central axis of the domain (see [11]), where p 0 is such that the average of p over Ω is zero, and we take ν = 10 −4 , σ = 10, and β(x, y) := curl ϕ. Again we take Dirichlet velocity conditions everywhere on ∂Ω. Figure 5.2 shows the error history in both cases, confirming that the method constructed upon adaptive mesh refinement provides rates of convergence slightly better than the theoretical optimal, whereas under uniform refinement the lack of smoothness in the exact solutions substantially hinders the error decay, exhibiting sublinear convergence in all cases and even stagnating for the vorticity. The top left plot portrays the individual errors and, for reference, the optimal error decay for the case of less regular solutions (that is, O(h)); the right panel shows the error in the V-norm and the effectivity index eff 2 defined in (5.2). In addition, the bottom panels of Figure 5.2 display the outputs of mesh refinement, indicating a higher concentration of elements where the large gradients are located.
Example 3. Next, we conduct the well-known test of flow past a backward-facing step. This is also a 2D example where the domain is Ω = (0, 6) × (0, 2) \ (0, 1) 2 . For this case we choose a method with k = 2 and assume that β is the discrete velocity at the previous time iteration of a backward Euler time step. Assuming that no external forces are applied, we then have f = σβ and after each time step characterised by σ = (∆t) −1 = 100, we update the current velocity β ← u. The flow regime is determined by a moderate viscosity ν = 0.05 and we prescribe Γ 2 as the right edge (the outlet of the channel) where we set p 0 = 0 and a = 0. The remainder of the boundary constitutes Γ 1 : on the left edge (the inlet of the channel) we impose a parabolic profile g = (4(y − 1)(2 − y), 0) T and on the remainder of Γ 1 (the channel walls) we set g = 0.
The system is run until the final time t = 1 and samples of the obtained numerical results are collected in Figure 5.3. As expected for this test, a fully developed profile (seen in the plot of post-processed velocity) exits the outlet while an important recirculation occurs at the bottom-left corner, right after the expanding region. The vorticity has a very high gradient at the reentrant corner of the channel, but this is well captured by the numerical scheme. We also show the Bernoulli pressure and the classical pressure (which coincides with the expected pressure profiles for this example). In addition, in Figure 5.4 we portray examples of adaptively refined meshes using the indicator (4.2). One can observe local refinement near the reentrant corner and, at later times, a clustering of elements near the horizontal walls of the channel.
Example 4. For our next application we study the flow patterns generated on a channel with three obstacles (using the domain and boundary configuration from the micro-macro models introduced in [31]). Here the flow is now generated only through pressure difference between the inlet (the bottom horizontal section of the boundary defined by (0, 1) × {−2}) and the outlet (the vertical segment on the top left part of the boundary, defined by {−2}×(0, 1)). No other boundary conditions are set. As in the previous test case, β is the discrete velocity at the previous pseudo-time iteration. We take σ = 10 and ν = 0.02 and increase the pressure at the inlet with the pseudo time, reaching after 10 steps the value p in = 3 and set zero Bernoulli pressure at the outlet. The avoidance of the obstacles and accumulation of vorticity near them is a characteristic behaviour of the phenomenon that we can observe in Figure 5.5. These plots were generated with k = 2.
Example 5. Our last test exemplifies the performance of the numerical scheme in 3D. We use as computational domain the geometry of a femoral end-to-side bypass segmented from 3T MRI scans [26]. We generate a volumetric mesh of 68351 tetrahedra. The boundaries of this arterial bifurcation are considered as an inlet Γ in , an outlet Γ out , the arterial wall Γ wall , and an occluded section Γ occl . On the occlusion section and on the walls we set no-slip velocity. A parabolic velocity profile is considered at the inlet surface whereas a mean pressure distribution is prescribed on the outlet section. The last two conditions are time-dependent and periodic with a period of 50 time steps (we employ σ = 100 and run the system for 100 time steps). Moreover we use a blood viscosity of ν = 0.035 (in g/cm 3 ), which represents an average Reynolds number between 144 and 380 [26]. The computations were carried out with the first-order scheme, and the results are
Polarization-controlled selective excitation of Mie resonances of dielectric nanoparticle on a coated substrate
High-index spherical nanoparticles with low material losses support sharp high-Q electric and magnetic resonances and exhibit a number of interesting optical phenomena. Developments in fabrication techniques have enabled the further study of their properties and the investigation of related optical effects. After deposition on a substrate, the optical properties of a particle change dramatically due to mutual interaction. Here, we consider a silicon spherical nanoparticle on a dielectric one-layered substrate. At the normal incidence of light, the layer thickness controls the contribution of the nanoparticle's electric and magnetic multipoles to the subsequent optical response. We show that changing the polarization of incident light at a specific excitation angle and layer thickness leads to switching between the multipoles. We further observe a related polarization-driven control over the direction of the scattered radiation.
I. INTRODUCTION
Plasmonic and high-index dielectric resonant nanoparticles are among the key building blocks of modern nanophotonics 1,2 . They offer efficient control over light at the nanoscale, whether single or packed in an array, and have a number of applications 3-6 . For example, plasmonic nanoparticles exhibit strong field localization 7,8 and are used for SERS 9,10 and sensing 11,12 , as well as for chemical and biological applications 13,14 . However, the high intrinsic losses of plasmonic materials limit their use for certain applications. High-index dielectric nanoparticles, on the other hand, have significantly lower losses and support both electric and magnetic responses in the visible and near-IR regions 15,16 . This leads to several new phenomena such as the Kerker effect 17-19 , directional scattering 18,20,21 , and the excitation of surface plasmon-polaritons 22,23 . Finally, dielectric nanoparticles have enabled many nonlinear photonic effects 24-28 and the implementation of subwavelength room-temperature lasers 29 .
The optical properties of spherical nanoparticles are described by Mie theory 30,31 , where high-index dielectric nanoparticles exhibit pronounced optical resonances. Their fundamental mode (with the lowest frequency) is a magnetic dipole (MD) resonance, followed by an electric dipole (ED), a magnetic quadrupole (MQ), and so on into higher-order multipoles. Mie theory predicts the spectral position and quality factor for each multipole separately, but there is some overlap of multipoles and several multipoles are excited at the same wavelength. In some cases, selective excitation of multipoles is necessary 28,32 , such as the selective excitation of the MD to enhance second 33 and third harmonic generation 34 . Another example is the enlarged optical pulling and pushing forces 35 caused by manipulation of the ED, MD, and MQ of dielectric nanoparticles. Finally, control over the contribution of multipoles to scattering provides multicolor pixels at the nanoscale 36-39 .
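To make the multipole decomposition above concrete, the sketch below evaluates the free-space Mie coefficients a1 (ED) and b1 (MD) and their contributions to the scattering efficiency for a sphere with a constant, lossless refractive index; the index value, radius, and wavelength grid are placeholders, and no substrate or material dispersion is included, so the numbers are only indicative.

```python
# Free-space Mie dipole coefficients a1 (ED) and b1 (MD) for a lossless sphere.
# Placeholder parameters: n_p = 3.9 (Si-like, non-dispersive), R = 85 nm.
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def psi(n, x):
    return x * spherical_jn(n, x)

def dpsi(n, x):
    return spherical_jn(n, x) + x * spherical_jn(n, x, derivative=True)

def xi(n, x):
    return x * (spherical_jn(n, x) + 1j * spherical_yn(n, x))

def dxi(n, x):
    return (spherical_jn(n, x) + 1j * spherical_yn(n, x)
            + x * (spherical_jn(n, x, derivative=True)
                   + 1j * spherical_yn(n, x, derivative=True)))

def mie_ab(m, x, n=1):
    a = (m*psi(n, m*x)*dpsi(n, x) - psi(n, x)*dpsi(n, m*x)) / \
        (m*psi(n, m*x)*dxi(n, x) - xi(n, x)*dpsi(n, m*x))
    b = (psi(n, m*x)*dpsi(n, x) - m*psi(n, x)*dpsi(n, m*x)) / \
        (psi(n, m*x)*dxi(n, x) - m*xi(n, x)*dpsi(n, m*x))
    return a, b

R, n_p = 85e-9, 3.9
wl = np.linspace(450e-9, 800e-9, 351)
x = 2*np.pi*R/wl
a1, b1 = mie_ab(n_p, x)
Q_ED = (2.0/x**2) * 3.0 * np.abs(a1)**2   # electric-dipole part of Q_sca
Q_MD = (2.0/x**2) * 3.0 * np.abs(b1)**2   # magnetic-dipole part of Q_sca
```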
There are a number of methods for the selective excitation of multipole resonances of high-index particles. One of them is structured light in the form of tightly focused cylindrical vector beams 40 . Alternatively, radially polarized light 41 or even simple plane-wave excitation 32,38,42 is also possible. An efficient way to control multipoles was proposed by Xiang et al. 38 , where the authors suggested using the evanescent field for excitation by illuminating the nanoantenna from the substrate side under the total internal reflection condition. Here, we use far-field excitation, which is more practical. Sinev et al. in Ref. 42 demonstrated the polarization-controlled enhancement of the MD of a silicon nanoparticle on a plasmonic substrate. Alternatively, Van de Groep and Polman 32 showed theoretically that a dielectric one-layered substrate can be used to control the ED and MD contributions to the scattering. However, they only considered the case of normal incidence, where, due to the selection rules, only modes with the azimuthal number m = ±1 were present. In terms of group symmetry, one can say that only the modes from the E 1 irreducible representation were excited 43 .
Here, we use obliquely incident light, which contains all azimuthal harmonics and thus excites the modes with m = 0, ±1, ±2, etc. This gives an additional degree of freedom for selective multipole excitation and manipulation of their interference. We show that the angle of incidence and the polarization of the excitation light, combined with adjusting the silica spacer that separates the nanoantenna from the silicon substrate, provide flexible control over the relative amplitudes of the MD and ED and over their interference, which governs the scattered field. The thickness of the SiO 2 spacer defines the phase shift between the ED and MD, while the incident angle defines their relative amplitudes. Thus, we can strongly enhance or almost completely suppress the ED or MD component in the scattered field, or maximize the scattering in certain directions. For example, the demonstrated enhancement of scattering towards the light source can be used for retroreflectors of subwavelength thickness 44 .
II. RESULTS AND DISCUSSION
We consider scattering from a silicon spherical nanoparticle with fixed radius R = 85 nm on top of a silica layer of variable thickness backed by the silicon substrate (see Fig. 1). We employ the T-matrix method for scattering from a single particle and the scattering-matrix method for propagation in a layered structure. These methods have previously been implemented in the open-source numerical software "Smuthi" using Python 45-47 . Simulation of the electromagnetic fields inside the particle was performed in the COMSOL Multiphysics package.
It is important to note that an analytical solution of the scattering problem for a small dielectric or plasmonic particle can indeed be obtained even for the case of a multilayered substrate and an obliquely incident plane wave, in a point-dipole approximation using the Green's function formalism 48 . We used this method, for example, in Refs. 22,23 . However, the analytical solutions are quite cumbersome for the analysis, as they require the calculation of Sommerfeld integrals and the solution of a transcendental equation for the complex poles of the leaky modes. Therefore, numerical analysis is almost unavoidable. Moreover, in Sec. II.C we analyze the directivity of the scattered field taking into account the magnetic quadrupole resonance. These regimes are beyond the dipole approximation and also require numerical calculations for the correct analysis of scattering. Finally, the analytical solution in a point-dipole approximation contains a fitting parameter: the height at which the point dipole should be positioned. The scattering from the nanoantenna strongly depends on this parameter (see additional materials to Ref. 48 ).
A. Normal incidence
First, varying the thickness of the layer h l , we investigate the upper half-space scattering efficiency Q sca at normal incidence, where the scattering efficiency is the scattering cross-section normalized to the geometrical cross-section.
At zero h l , with the particle located directly on the silicon substrate, the scattering is enhanced in comparison with the free-space case. In spite of the interaction with the substrate, there is no spectral shift of the ED and MD resonances (see Appendix A, Fig. 9). Infinite h l corresponds to the case of a particle on glass, where the scattering enhancement is negligible. For intermediate cases, the layer modulates the standing wave in the upper half-space, which results in a modulation of the scattering (see Fig. 2a). As the MD (at 670 nm) and the ED (at 540 nm) interact with the standing wave, along the h l axis we note oscillating behavior of Q sca with a resonant condition in which n l is the refractive index of the layer, m is an integer, and d 0 is associated with an initial scattering phase. The presence of the initial scattering phase, which is different for the ED and MD, makes it possible to find layer thicknesses where simultaneous enhancement or suppression of both dipoles occurs (see Fig. 3a). Alternatively, dipole enhancement takes place at different layer thicknesses. For example, enhancement of the ED can be achieved at h l = 40 nm, and of the MD at h l = 160 nm (see Figs. 3b and 3c). In order to quantify the relative contributions of the dipoles, we introduce the selectivity, defined in terms of Q(λ ED ) and Q(λ M D ), the scattering efficiencies at the wavelengths of the ED and MD resonances (see Fig. 2b). At thicknesses of 40 nm and 160 nm, the selectivity is 0.23 and −0.36, respectively. In these states, we fit Q sca by two Fano-like curves to represent the ED and MD contributions (thin red and blue dashed lines in Figs. 3b and 3c). For example, at the wavelength of the enhanced ED, the contribution of the suppressed MD to the scattering efficiency is negligible (see Fig. 3b), the enhanced-MD case being similar. The dominance of one dipole over the other also appears as a characteristic pattern in the field distribution inside the particle (see insets in Figs. 3b and 3c). It should be noted that the absolute value of the selectivity is relatively small due to the overlap of the enhanced and suppressed dipoles at the wavelength of the latter. As we further increase the layer thickness, the selectivity oscillates, reaching extreme values of 0.6 and −0.7 (see Fig. 2b). At the same time, these values do not reflect the presence of the additional spectral features associated with Fabry-Perot modes in thick layers. We therefore have to use additional degrees of freedom in order to achieve a regime where only one dipole contributes to the scattering.
B. Oblique incidence
A single-dipole scattering regime can be achieved at oblique incidence. In this case, the polarization of the incident radiation begins to play a significant role, allowing us to switch the selectivity while keeping the angle of incidence and the thickness of the layer constant. In order to find the optimal values of the incidence angle as well as of the layer thickness, we simulate the difference between the selectivity for TE and TM polarization at different incidence angles and buffer-layer thicknesses, where S T E and S T M are the selectivity values for TE and TM incident polarization. Since the selectivity itself is an analog of the contrast between the contributions of the electric and magnetic dipoles to the scattering, the selectivity variation allows us to determine the parameters that simultaneously achieve single-dipole scattering regimes and pronounced polarization-driven switching between them. Therefore, scattering efficiencies at the ED and MD resonant wavelengths for both polarizations are used for Fig. 4. There we see that the absolute value of the selectivity variation grows with increasing incidence angle, while the dependence on the layer thickness is periodic in character. Next, we demonstrate polarization switching of the selectivity with a layer thickness equal to 430 nm and an incident angle of 75° (white dot in Fig. 4). Scattering efficiency maps in wavelength-incident-angle coordinates for both TE and TM polarization are shown in Figs. 5a and 5c. At normal incidence, the ED and MD make approximately equal contributions to the scattering, and this remains the case for both polarizations as the angle of incidence increases. However, in the vicinity of 75°, single-ED or single-MD scattering regimes occur. Figure 5a shows that, for TE polarization, the enhanced MD almost completely dominates the suppressed ED, a situation which is reversed with TM polarization (see Fig. 5c).
We explain this effect by considering the angular dependence of the background field of the standing wave at the wavelengths of the ED and MD in the plane z = R, where z = 0 coincides with the upper boundary of the layer. We calculate the normalized fields using Fresnel coefficients and then choose the determining components of the fields. In the case of TE polarization (see Fig. 5b), E y (λ ED ) has a maximum at 30° and then subsides, while H z (λ ED ) reaches its maximum at 70°. Since the magnitude of the dipole is proportional to the applied field, the vertical magnetic dipole provides most of the scattering, so the MD-only scattering regime is achieved.
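A rough sketch of this kind of Fresnel estimate is given below: it evaluates the TE standing-wave amplitude |E_y| at a height z above an air/SiO2/Si stack. The constant, real indices (1.45 for silica, 3.9 for silicon) and the example numbers are placeholders; the actual calculation uses dispersive material data and also the TM field components.

```python
# TE (s-polarized) standing-wave amplitude above a single layer on a substrate,
# built from Fresnel coefficients. Indices and geometry are illustrative only.
import numpy as np

def r_s(ni, ci, nt, ct):
    # Fresnel reflection coefficient for s polarization
    return (ni*ci - nt*ct) / (ni*ci + nt*ct)

def standing_Ey(wl, h_layer, theta0, z, n0=1.0, n1=1.45, n2=3.9):
    s0, c0 = np.sin(theta0), np.cos(theta0)
    c1 = np.sqrt(1 - (n0*s0/n1)**2 + 0j)        # Snell's law in the layer
    c2 = np.sqrt(1 - (n0*s0/n2)**2 + 0j)        # and in the substrate
    r01, r12 = r_s(n0, c0, n1, c1), r_s(n1, c1, n2, c2)
    delta = 2*np.pi*n1*h_layer*c1/wl            # one-way phase across the layer
    r = (r01 + r12*np.exp(2j*delta)) / (1 + r01*r12*np.exp(2j*delta))
    kz = 2*np.pi*n0*c0/wl
    return np.abs(np.exp(-1j*kz*z) + r*np.exp(1j*kz*z))   # incident + reflected

# |E_y| at the particle-centre height z = R for the MD wavelength, 430 nm spacer
print(standing_Ey(670e-9, 430e-9, np.deg2rad(75.0), 85e-9))
```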
Additionally, due to the suppression of the ED it becomes possible to distinguish the MQ contribution to the scattering (dashed line in Fig. 6, TE polarization). The ED-only scattering regime for TM polarization occurs similarly. Changing the polarization angle from TE to TM, we can continuously tune the scattering regime from MD-only to ED-only (see Fig. 6 and the inset in Fig. 4).
C. Directivity of scattered radiation
In the previous section we found an angle of incidence and a layer thickness that enabled us to tune the scattering regimes in the upper half-space. The contribution of the three main features, the MQ, MD, and ED, is controlled by the polarization direction of the excitation. In this section, we investigate the directivity of the scattered radiation in these peculiar regimes. In the upper half-space we simulate the differential scattering cross-section (DSCS) normalized to the geometrical cross-section of the particle. The most distinctive patterns are plotted in Fig. 7 in polar coordinates. The polar angle of the scattered radiation θ is plotted along the radial coordinate. The azimuthal angle ϕ increases in the counterclockwise direction. The direction of the incident radiation, with θ = 75° and ϕ = 180°, is indicated by the dots in Fig. 7.
The first regime of interest occurs in the vicinity of the MQ, at λ = 520 nm, TE polarization. While the contribution of the ED is vastly suppressed for TE-polarized excitation, the MQ begins to noticeably affect the DSCS pattern. Interference between radiation scattered by the ED and the MQ results in negative-angle scattering (see Fig. 7a), which is plotted by a red curve in Fig. 8, where the directivity pattern is plotted in the plane of incidence (xz-plane). It should be noted that if we only consider ED and MD in the simulation, negative-angle scattering disappears (see Appendix B, Fig. 10). Switching the polarization to TM significantly increases the contribution of the ED, and a dipole-like DSCS with small asymmetry is achieved (see Fig. 7b). The pattern becomes symmetric at the wavelength of the ED resonance (see Fig. 7d), and similarly at a wavelength of λ M D , TE polarization (see Fig. 7e). Switching polarization back to TM, we suppress the MD and the contributions of both dipoles become comparable, leading to directional upward scattering (see Fig. 7f and Fig. 8, blue curve).
III. CONCLUSION
In this article we show that a one-layered substrate is an effective platform for the manipulation of the resonances of a dielectric nanoparticle. At normal incidence, the thickness of the layer controls the enhancement and suppression of the ED and the MD. At oblique incidence, it becomes possible to control the contribution of the ED and the MD to the optical response through the polarization of the incident light. We further present conditions where one dipole resonance is almost completely suppressed while the other is enhanced, and where a smooth transition to the reverse situation is also possible. Finally, we present negative-angle and upward scattering regimes, and show that adjusting the ED and MD contributions controls the directivity of the scattered radiation.
IV. ACKNOWLEDGEMENT
This work is supported by RFBR, project number 18-29-20063.

Appendix A. Here, we simulate the scattering efficiency in the upper half-space of a spherical silicon nanoparticle with radius R = 85 nm placed in air, on a silicon substrate, and on a glass substrate. On the silicon substrate the scattering is enhanced in comparison with the free-space case, with no spectral shift of the ED and MD resonances. For the particle on glass, the scattering enhancement is negligible.

Appendix B. In order to show that the negative-angle scattering occurs due to the interference between the radiation scattered by the ED and the MQ, we simulated the DSCS for the silicon nanoparticle on the one-layered substrate, at λ = 520 nm and TE polarization, with the parameters given in the article, taking into account different maximal orders of multipoles. In S2(a) it is seen that, considering only the dipole terms, we do not obtain negative-angle scattering. Taking the quadrupole into account, we obtain negative-angle scattering. Upon a further increase of the maximal order of multipoles, the DSCS pattern does not change.
Time‐Dependent Weakening of Granite at Hydrothermal Conditions
The evolution of a fault's frictional strength during the interseismic period is a critical component of the earthquake cycle, yet there have been relatively few studies that examine the time‐dependent evolution of strength at conditions representative of seismogenic depths. Using a simulated fault in Westerly granite, we examined how frictional strength evolves under hydrothermal conditions up to 250°C during slide‐hold‐slide experiments. At temperatures ≤100°C, frictional strength generally increases with hold duration but, at 200 and 250°C, an initial increase in strength transitions to rapid time‐dependent weakening for holds longer than 14 hr. Forward modeling of long hold periods at 250°C using the rate and state friction constitutive equations requires a second, strongly negative, state variable with a long evolution distance. This implies that significant hydrothermal alteration is occurring at 250°C, consistent with microstructural observations of dissolution and secondary mineral precipitation.
Methods
Laboratory SHS tests consist of alternating periods of induced fault slip and quasi-static holds during which time-dependent strength recovery occurs. We express strength changes in terms of secant friction, μ = τ/(σ n − P p ), where τ and σ n are the shear and normal stress resolved on the fault surface and P p is the pore pressure. The magnitude of restrengthening (Δμ) is quantified as the difference between the peak failure strength upon resumption of slip and the steady-state sliding strength preceding the hold. Triaxial SHS experiments were conducted at temperatures from 22 to 250°C (Table S1 in Supporting Information S1) on cylindrical (2.54 cm diameter) Westerly granite samples containing saw-cuts inclined at 30°. Simulated fault surfaces were roughened with #240 grit sandpaper to attain similar starting surfaces with RMS roughness of 5-10 μm. Tests were run at constant confining and pore pressures of 30 and 10 MPa, respectively. Deionized water was used as the pore fluid and a 2.4 mm diameter borehole provided fluid access to the fault surface. A 24-hr thermal equilibration period preceded each experiment to allow the pore fluid and rock to approach chemical equilibrium.
The experiments began with 1 mm of slip, during which a thin (∼5 μm) layer of ultrafine gouge developed. Multiple hold periods were then employed, separated by 250 μm of axial displacement at 0.1 μm/s. Hold durations ranged from 100 to 5 × 10 5 s and experiments were designed to repeat each hold duration up to three times. Mechanical data were corrected for elastic deformation of the loading system, jacket strength, the confining-pressure dependence of piston seal friction, and the reduction in contact area during deformation, as explained in Tembe et al. (2010). Further details of our experimental methodology can be found in Supporting Information S1 (Text S1).
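As an illustration of how the restrengthening Δμ for a single hold could be extracted from a friction record, a small sketch is given below; the indices, window lengths, and the idea of averaging a short pre-hold interval are assumptions for illustration, not the authors' actual processing pipeline.

```python
# Delta-mu for one slide-hold-slide cycle: peak friction after re-slide minus
# the steady-state friction just before the hold. Window sizes are illustrative.
import numpy as np

def delta_mu(mu, i_hold_start, i_reslide_start, pre_window=200, post_window=500):
    mu = np.asarray(mu, dtype=float)
    mu_ss = mu[i_hold_start - pre_window:i_hold_start].mean()          # pre-hold steady state
    mu_peak = mu[i_reslide_start:i_reslide_start + post_window].max()  # failure peak
    return mu_peak - mu_ss

# synthetic example record: steady sliding at mu ~ 0.60, hold, then a peak of 0.62
mu_rec = np.concatenate([np.full(1000, 0.60), np.full(300, 0.55), np.full(500, 0.62)])
print(delta_mu(mu_rec, i_hold_start=1000, i_reslide_start=1300))   # ~0.02
```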
Friction Data
Sliding was stable at 22, 100, and 250°C (Figure 1; Figure S1 in Supporting Information S1) but oscillatory at 200°C, where failure upon reloading was often associated with large stress drops (Figure 1c), consistent with previous studies of the frictional properties of Westerly granite at hydrothermal conditions (Blanpied et al., 1991, 1995). Due to the tendency for unstable slip, steady-state sliding friction at 200°C had to be extrapolated (Text S2 in Supporting Information S1). When stable sliding was observed it generally followed a consistent strain-hardening trend (Figure 1; Figures S1 and S2 in Supporting Information S1). However, there are some exceptions. At 250°C, following holds ≥5,000 s, the sliding friction was initially less than the frictional strength preceding the hold period, and the magnitude of the decrease in sliding friction increased with hold duration (Figure 1). The magnitude of strain hardening following hold periods longer than 50,000 s was noticeably greater than that following shorter holds (Figure S2 in Supporting Information S1), so that friction would presumably return to the pre-hold value, but this often required more than the allotted 250 μm of slip. Reductions in post-hold sliding friction, exceeding the expected variation, were also seen at 100 and 200°C after 500,000 s holds (Figures 1f and 1g).
At temperatures from 22 to 200°C the failure strength initially increased in proportion to the logarithm of hold duration (Figure 2), as reported in many previous studies for a variety of materials (e.g., Carpenter et al., 2016; Dieterich, 1972; Mitchell et al., 2013). Failure strength did not vary systematically with displacement (Figure S3 in Supporting Information S1). However, at 200°C, for holds longer than 50,000 s the failure strength decreased with hold duration (Figure 2). This weakening behavior was observed at 250°C for holds longer than 1,000 s, and little to no strengthening was seen for shorter hold periods. At 250°C the failure strength was often less than the sliding frictional strength measured before the hold period (Figure 1h). This resulted in negative values of Δμ.
The creep rate at the start of all hold periods was 0.1 μm/s, which decayed rapidly, so the average creep rate measured for 100 s holds at all temperatures clustered around 10 −2 μm/s. The average creep rate decreased with hold duration but remained similar at all temperatures for holds ≤1,000 s. For holds longer than 1,000 s, a clear increase in both the amount of creep and the associated average creep rate during the hold period is observed at 250°C (Figure 1h; Figure S4 in Supporting Information S1). For 500,000 s holds, the average creep rate at temperatures of 22-200°C clusters around 10 −5 μm/s but is 10 −4 μm/s at 250°C for the same hold duration.
Microstructural Observations
The sawcut surfaces of several samples that had undergone slide and hold periods were characterized using a scanning electron microscope (SEM) (Figure 3). Each of these specimens was removed from the pressure vessel at the end of an extended hold period (∼100,000 s). For samples E34 (22°C) and E36 (250°C) the final hold period was preceded by a sequence of slides and holds reaching a cumulative displacement of 7 and 6.7 mm, respectively. E30 (200°C) had only experienced ∼1 mm of shear displacement followed by a 109,000 s hold with no additional shearing. In all three samples, abrasive wear features were apparent, including slickenlines, grooved surfaces (Figures 3a and 3d), and gouge development (Figure 3b). At 200 and 250°C there was evidence of dissolution in the form of curved grain boundaries and possible pitting (Figures 3c and 3d). Additionally, at 250°C there was widespread evidence of secondary mineral precipitation (Figures 3e and 3f). Clusters of fibrous minerals were concentrated near the fluid borehole but were also seen in other areas. Secondary mineral development was not observed in the lower-temperature experiments. Energy-dispersive X-ray measurements using the SEM indicated that some of the fibrous deposits contained Na, S, Ca, and Al (Figure 3e), while in other places they were enriched in Fe, Cr, Ni, Cu, Zn, and Mg (Figure 3f).
Time-Dependent Friction
Previous studies have described SHS experiments on Westerly and other granitic samples. Most of these studies were conducted at room temperature and nominally dry conditions (Beeler et al., 1994; Dieterich, 1972; Ryan et al., 2018), with some work done on water-saturated samples (Carpenter et al., 2016). To the best of our knowledge, SHS tests on granite at elevated temperatures (up to 600°C) have only been conducted in the presence of water vapor (Mitchell et al., 2013, 2016). These studies all reported time-dependent increases in Δμ. An empirical expression (Equation 1) can be used to quantify the rate of restrengthening, where β defines the time-dependent rate of restrengthening, t h is the duration of the hold period, and t c is the characteristic time delay beyond which the logarithmic dependence on time is observed (Dieterich, 1978; Nakatani & Scholz, 2004). At room temperature, healing rates for granitic rock determined from published data range from 0.003 per e-fold increase in hold time for a water-saturated, 2.6-mm-thick layer of Westerly granite gouge (Carpenter et al., 2016) to an average of 0.01 ± 0.001 per e-fold for initially bare-surface granite rock under nominally dry conditions (Beeler et al., 1994; Dieterich, 1972; Mitchell et al., 2013). In heated, nominally dry tests, β is 0.009 per e-fold at 100 and 200°C and 0.007 at 250°C (Mitchell et al., 2013). In all cases the cutoff time t c is on the order of 1 s or less. Fitting Equation 1 to our experiments, using only data from hold periods where sliding friction was constant before and after the hold and Δμ increases with time, yields healing rates of 0.005, 0.006, and 0.008 per e-fold at 22, 100, and 200°C, respectively, with cutoff times on the order of 1-10 s (Figure 2). This is generally consistent with previous work, suggesting the increase in strength with time occurs due to the growth of real contact area caused by mainly mechanical processes such as subcritical crack growth (e.g., Dieterich & Kilgore, 1994). We do not fit the short-t h data at 250°C since a period of initial restrengthening is not clearly observed.
A transition to time-dependent weakening has not previously been reported for granite at room temperature under either saturated or nominally dry conditions, or at elevated temperatures under nominally dry conditions. We conclude that the higher-temperature weakening behavior is caused by a different mechanism that is temperature-dependent and requires the presence of liquid-phase water. Hydrothermal SHS tests have been conducted on materials other than granite. In experiments on initially bare-surface quartz (Jeppson, Lockner, Beeler, & Hickman, 2023) and 3-mm-thick layers of pure quartz gouge (>63 μm diameter) (Nakatani & Scholz, 2004) at temperatures up to 200°C, only time-dependent increases in Δμ were observed. Olsen et al. (1998) conducted SHS tests on gouge layers composed of a mixture of quartz and labradorite sand at temperatures of 200 and 250°C. While they did not observe time-dependent weakening, they did not observe time-dependent strengthening either. This suggests that mineralogy is important to the underlying weakening mechanism.
Assuming the mechanisms controlling the observed strengthening and weakening operate in parallel, Equation 1 can be expanded to capture the time-dependent evolution of friction observed in our data: Δμ = β1 ln(1 + t_h/t_c1) + β2 ln(1 + t_h/t_c2), (2) where β1 and t_c1 relate to the initial restrengthening behavior and β2 and t_c2 relate to the subsequent weakening behavior. Between 22 and 200°C we have sufficient data to constrain β1 and t_c1, but the process controlling β2 and t_c2 appears to be too sluggish to be constrained in the hold times available. At 250°C the weakening process dominates, masking any strengthening that occurs at short timescales. However, assuming that the strengthening mechanism is an Arrhenius process, as is expected, for example, for subcritical crack growth (Lawn, 1993), we extrapolate that β1 and t_c1 at 250°C are 0.008 and 9 s, respectively (Figures 2b and 2c). We note that Mitchell et al. (2013) identified a similar rate (0.007) for nominally dry Westerly granite at 250°C. This is consistent with a mechanism, such as subcritical crack growth, that occurs in the presence of both liquid and gaseous water. Applying these constraints and fitting Equation 2 to the 250°C data yields β2 = −0.02 per e-fold and t_c2 = 346 s. Equation 2 was also fit to the 200°C experiment, but due to the limited observations of weakening at this lower temperature, β2 and t_c2 are poorly constrained. The data do indicate that the cutoff time of the weakening mechanism scales with temperature, consistent with an Arrhenius relation.
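A corresponding sketch of the two-term fit of Equation 2, holding the extrapolated strengthening parameters fixed and solving only for the weakening term; again, the data arrays are placeholders, not the 250°C measurements.

```python
# Sketch of fitting Equation 2 with the strengthening term fixed
# (beta_1, t_c1 extrapolated) and only the weakening term free.
# Data values are placeholders, not the 250 degree C measurements.
import numpy as np
from scipy.optimize import curve_fit

BETA_1, T_C1 = 0.008, 9.0   # strengthening parameters held fixed

def eq2(t_h, beta_2, t_c2):
    return (BETA_1 * np.log(1.0 + t_h / T_C1)
            + beta_2 * np.log(1.0 + t_h / t_c2))

t_h = np.array([1e2, 1e3, 1e4, 1e5, 5e5])               # hold durations, s
dmu = np.array([0.01, 0.01, -0.01, -0.06, -0.10])       # net change over hold

popt, _ = curve_fit(eq2, t_h, dmu, p0=[-0.02, 300.0])
print(f"beta_2 = {popt[0]:.3f} per e-fold, t_c2 = {popt[1]:.0f} s")
```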
The significance of these rates and cutoff times is limited, as the parameters are interdependent and none are well constrained. This empirical relation could be improved with additional measurements to define the underlying mechanisms and inclusion of expected rates for those mechanisms (e.g., Barbot, 2022). It is apparent that in polymineralic hydrothermal systems the evolution of frictional strength with time and temperature is complex because of the interactions among multiple mechanisms. Potentially competing mechanisms include pressure solution enhanced by the presence of phyllosilicates (e.g., Anzalone et al., 2006; Hickman & Evans, 1995; Meyer et al., 2006; Rutter & Wanten, 2000), chemical leaching, preferential dissolution of mechanically strong contacts, fabric development (Jordan, 1987), and secondary-mineral precipitation. We find clear evidence of secondary mineral phases developing at 250°C. Some mineral growths, like those shown in Figure 3f, contain Cr, Fe, Ni, and Cu, suggesting they formed as a result of pore fluid reacting with the 17-4 steel end caps used in the sample assembly. These parts are located outside of the high temperature zone and should have limited effect on fault chemistry. Other secondary minerals lack these metals (e.g., Figure 3e). Weak secondary mineral phases could result in reduced friction (e.g., Bomberger, 2013; Carpenter et al., 2016; Morrow et al., 2017; Shreedharan et al., 2023), even at low concentrations, if they form preferentially at load-bearing contacts.
Rate and State Friction
In rate-stepping tests, Blanpied et al. (1995) observed a reduction in the frictional sliding strength of water-saturated samples at temperatures above 300°C. The temperature at which weakening is first observed increases with slip rate (their Figure 6). We observed that during 500,000 s hold periods at 250°C the average creep rate drops as low as 1 × 10⁻⁴ μm/s, significantly less than the slowest slip velocity (0.01 μm/s) examined by Blanpied et al. (1995).
The same mechanisms that caused weakening in Blanpied et al.'s rate-stepping tests may also be responsible for weakening in our SHS experiments. Due to the very low creep rates attained during our extended hold periods, we are able to observe the weakening at temperatures as low as 200°C.
At temperatures above 350°C, rate-and-state (RS) modeling of Blanpied et al.'s (1995) rate-stepping tests required the addition of a second state variable (Blanpied et al., 1998). At 400°C, the scaling parameter for the first state variable, b1, was positive and increased with temperature, whereas the parameter for the second state variable, b2, was negative and became more negative with increasing temperature. While the rate constant (β) examined in this paper is not the same as the state variable scaling parameter (b), it is expected that b ∼ β (Ikari et al., 2016; Marone, 1998; Paterson & Wong, 2005), so a negative β implies that the corresponding b2 would also be negative. Further, Blanpied et al. (1998) found that b2 was associated with a large characteristic displacement (Dc) that was positively correlated with temperature, consistent with the prolonged evolution of sliding friction at 250°C in our SHS tests.
To further characterize the strength evolution in SHS tests, we consider constraints on the RS friction parameters from the sequence of holds between 100 and 500,000 s at 250°C (Figure 1d). Strengthening during the hold is resolved only for holds greater than 500 s and less than 5,000 s (Figure 1d; Figure 2a). Figure 4a shows an RS simulation of a 1,000 s SHS hold using parameters comparable to those inferred by Blanpied et al. (1998) for rate-stepping tests at 250°C (a = 0.0125, b1 = 0.005, Dc1 = 0.25 μm, b2 = 0.013, Dc2 = 5 μm). Blanpied et al. did not observe weakening at this temperature, requiring both b1 and b2 to be positive. The primary difference in the parameter values used in the simulations in Figure 4a lies in the values of Dc. Dc1 is slightly outside of the uncertainty associated with the 1.6-8.6 μm range of Blanpied et al. (1998). The smaller Dc1 is necessary to produce the post-peak slip event with stress drop following the 1,000 s hold (Figure 4a). Dc2 is much smaller than the 560-908 μm range of Blanpied et al. (1998). Our data lack evidence for an equivalently long weakening distance following short hold periods. Otherwise, the parameters are within the published ranges. The data could also be reasonably well represented by an RS simulation with a single positive state variable. In contrast, for holds greater than 1,000 s the fault shows net strength losses over each SHS sequence by amounts that increase with hold duration (Figures 1d and 2a). These strength losses require significantly different RS parameters to adequately represent the weakening, as shown in Figure 4b (a = 0.0125, b1 = 0.0025, Dc1 = 2 μm, b2 = −0.07, Dc2 = 190 μm). An extended description of the simulations is included in Supporting Information S1 (Text S3).
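The simulations in Figure 4 use a spring-slider rate-and-state model with two state variables. The sketch below shows one common way to integrate such a model (aging law) through a slide-hold-slide history; the reference friction μ0 and the normal stress used to convert the machine stiffness into friction units are assumed values, and the code is illustrative rather than the implementation actually used for Figure 4.

```python
# Spring-slider slide-hold-slide simulation with two state variables
# (aging law). Parameter values follow the blue curves of Figure 4; mu0
# and sigma_n are assumptions used only to close the model and to convert
# the machine stiffness (MPa/um) into friction change per micron of slip.
import numpy as np
from scipy.integrate import solve_ivp

a, b1, Dc1, b2, Dc2 = 0.0125, 0.005, 0.25, 0.013, 5.0   # Dc in microns
mu0, V0 = 0.70, 0.1          # assumed reference friction and velocity (um/s)
k, sigma_n = 0.149, 100.0    # stiffness (MPa/um) and assumed normal stress (MPa)
kp = k / sigma_n             # stiffness in friction units (1/um)
V_load, t_hold = 0.1, 1000.0 # load-point velocity (um/s) and hold length (s)

def v_load(t):
    """Slide for 50 s, hold for t_hold, then slide again."""
    return 0.0 if 50.0 < t < 50.0 + t_hold else V_load

def rhs(t, y):
    mu, th1, th2 = y
    # Invert the friction law for the instantaneous slip rate.
    V = V0 * np.exp((mu - mu0
                     - b1 * np.log(V0 * th1 / Dc1)
                     - b2 * np.log(V0 * th2 / Dc2)) / a)
    dmu = kp * (v_load(t) - V)        # elastic loading of the interface
    dth1 = 1.0 - V * th1 / Dc1        # aging (Dieterich) state evolution
    dth2 = 1.0 - V * th2 / Dc2
    return [dmu, dth1, dth2]

y0 = [mu0, Dc1 / V0, Dc2 / V0]        # start at steady-state sliding
sol = solve_ivp(rhs, (0.0, 50.0 + t_hold + 200.0), y0,
                method="LSODA", max_step=1.0)
i_end = np.searchsorted(sol.t, 50.0 + t_hold)
print("friction at end of hold:", sol.y[0][i_end])
```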
The parameters used to simulate the 1,000 s hold are comparable to those used for lower temperature experiments (Blanpied et al., 1998), whereas the parameters used for long hold times are dramatically different. This implies that significant hydrothermal alteration occurs in these 250°C experiments. The underlying mechanism appears to be a function of time, not slip, as is indicated by the dependence of the weakening behavior on hold duration and the change in creep rate after 1,000 s (Figure S4 in Supporting Information S1). Most formulations of the RS friction constitutive equations cannot accommodate time-dependent changes, as they lack a term that is representative of the t_c parameter present in Equation 2. Capturing the time-dependent weakening behavior would require reformulation of the RS friction equations to incorporate the characteristic time delay, possibly due to the strain-rate sensitivity of plastic yielding (e.g., Brechet & Estrin, 1994) or the reaction rate of a chemical process (e.g., Rimstidt & Barnes, 1980).
Conclusions
Frictional weakening is consistently observed at temperatures ≥200°C in hydrothermal SHS experiments and at temperatures ≥300°C in rate-stepping tests on Westerly granite (Blanpied et al., 1995). Rate and state friction simulations of long-duration holds at 250°C can only be represented using a two-state-variable RS friction model in which the second state variable is negative and associated with a large characteristic displacement (>100 μm). This negative state variable is not required to simulate shorter hold periods or lower temperatures, suggesting that significant hydrothermal alteration is occurring at 250°C. While the underlying mechanisms that control the frictional behavior cannot be positively identified at this time, it is evident that the mechanisms must be water-assisted, temperature-dependent, and chemically complex with a characteristic time delay on the order of 1,000 s. This indicates that the observed frictional weakening is the product of a solution-transfer process. An understanding of the processes that control frictional behavior at hydrothermal conditions, and how they relate to and can be incorporated in the RS friction equations, will help us better understand and model earthquake behavior in a variety of materials and over a range of environmental conditions.
Figure 1 .
Figure 1. Evolution of coefficient of friction. Coefficient of friction (μ) versus displacement resolved on the fault surface for selected experiments on water-saturated samples run at temperatures of (a) 22°C, (b) 100°C, (c) 200°C, and (d) 250°C. Steady-state sliding secant friction is indicated by the red dashed lines. Hold periods are indicated by gray dashed lines with hold duration indicated. Dark gray boxes highlight the 500,000 s (315,000 s at 22°C) hold periods shown in e-h. The amount of creep that occurred during the hold period (ΔS) and the change in friction (Δμ) are shown. Full curves for all experiments examined in this study are provided in Figure S1 in Supporting Information S1.
Figure 2 .
Figure 2. Time-dependent evolution of friction. (a) Changes in frictional strength (Δμ), defined as the difference between failure strength and sliding friction preceding the hold period. Unfilled symbols indicate the data used to determine the initial restrengthening rates using Equation 1. Fits of Equations 1 and 2 are indicated by the dashed lines. The dependence of the resulting (b) healing (β1) and weakening (β2) rates and (c) cutoff times on temperature. The gray dashed line shows the fit of the Arrhenius-type relation used to predict β1 and t_c1 at 250°C (indicated by x's). Error bars indicate two standard errors determined using jackknife resampling. If error bars are not visible, uncertainty is less than the marker size.
Figure 4 .
Figure 4. Slide-hold-slide simulations. RS slider block simulations of (a) 1,000 s and (b) 500,000 s holds at 250°C. Simulated loading velocity was 0.1 μm/s with a stiffness of 0.149 MPa/μm. Data from experiment E36 at 250°C are shown in dark gray. Simulations are shown as blue curves using a = 0.0125, b1 = 0.005, Dc1 = 0.25 μm, b2 = 0.013, Dc2 = 5 μm and as red curves using a = 0.0125, b1 = 0.0025, Dc1 = 2 μm, b2 = −0.07, Dc2 = 190 μm, respectively. The light-gray dashed line indicates the start of the hold period. The blue model parameters fit the short hold data, while the red parameters provide a better fit to the long hold data and indicate that time-stationary RS parameters are insufficient to represent the observations.
Experimental studies on the deformation and rupture of thin metal plates subject to underwater shock wave loading
In this paper, the dynamic deformation and rupture of thin metal plates subject to underwater shock wave loading are studied using high-speed 3D digital image correlation (3D-DIC). An equivalent device consisting of a gas gun and a water-filled anvil tube was used to supply an exponentially decaying pressure, in lieu of an explosive detonation, acting on the panel specimen. The thin metal plate is clamped on the end of the shock tube by a flange. The deformation and rupture process of the metal plates subject to underwater shock waves is recorded by two high-speed cameras. The shape, displacement fields and strain fields of the metal plates under dynamic loading are obtained using the VIC-3D digital image correlation software. Strain gauges were also used to monitor the structural response at selected positions for comparison. The DIC data and the strain gauge results show a high level of correlation, and 3D-DIC is proven to be an effective method to measure the 3D full-field dynamic response of structures under underwater impact loading. The effects of pre-notches on the failure modes of thin circular plates are also discussed.
Introduction
Plated structures are an important basic element of marine structures and battleships, which may be subjected to an underwater explosion from the attack of a torpedo or a depth charge [1]. In ship design, the ship plate shell with a small curvature is supported by welded stiffeners at its edges, so the shell between the stiffeners can be considered a flat panel [2]. Non-contact underwater explosions are an important source of threat to ship structures, and understanding the response, including the deformation and failure modes, provides an important reference for design.
The response of clamped plate structures subjected to air and underwater blast loading has been studied for many years. In the underwater impact area, Ramajeyathilagam et al. [3][4][5] carried out many experimental and numerical investigations on the deformation and tensile tearing of air-backed metal plates and shell panels. The results indicated that the shock impulse can be estimated in accordance with Cole's empirical formula [6] and Taylor's plate theory [7]. Hung et al. [8] also presented experimental studies on the dynamic elastic response of aluminium alloy panels subjected to blast loading in a water tank. That research reports the underwater pressure history, the acceleration history and the strain history of the plate surface.
For underwater blast experiments at laboratory scale, reproducing the underwater environment, producing an ideal blast impulse loading and ensuring safety protection are all difficult and limited by the available technology. With the development of technology, an experimental apparatus incorporating fluid-structure interaction (FSI) effects was recently developed to test scaled structures by Espinosa et al. [9]. The set-up allows characterization of the response of solid and sandwich structures subjected to underwater blast impulse loading. Calibration plate impact experiments confirmed that the FSI setup can generate an exponentially decaying pressure history. The full-field out-of-plane deformation profile of annealed steel plates was recorded in real time by shadow Moiré and high-speed photography. McShane et al. [10] also designed a similar equivalent device based on impact-produced underwater shock wave loading. The effect of a polymer coating on the resistance of pure copper plates subjected to underwater impact loading was analyzed using this setup, and four typical failure modes were recorded by high-speed photography.
However, the level of understanding of the response of these structures at such high loading rates is not as established as that under static conditions. In this paper, an FSI apparatus developed by Xiang [11], following Espinosa's work [9], is used to generate an exponentially decaying pressure in lieu of explosive detonation. Combined with this equivalent device, 3D DIC technology and high-speed photography, the dynamic response of air-backed copper circular plates with pre-notches subjected to underwater impulsive loading was recorded in real time. Comparison of the strain gauge results with the DIC results clearly demonstrates the accuracy and advantages of the DIC technique. The final failure modes for specimens with different pre-notches are evaluated by recovering the specimens, and the effects of pre-notches on the failure modes of thin circular plates are discussed. This work will be useful to those involved in research into the response of structures to explosive loading, a subject that has become increasingly important with heightened public awareness of potential explosive threats to civilian safety.
Experimental details
As seen from Fig. 1, the simplified FSI equipment consists of two parts: a gas gun and a conical anvil tube filled with water. A piston seals the inlet of the conical anvil tube opposite the gas gun, and the disc specimens are clamped by the flange at the other end, as shown in Fig. 2a. The flyer is driven by the gas gun and impacts the piston; the flyer speed and the piston thickness determine the peak pressure and the decay time in the tube, respectively [11]. In this paper, the strength of the shock wave loading is controlled by changing the speed of the flyer (which depends on the pressure of the gas gun). The speed of the flyer is measured by two pairs of laser gauges positioned at the outlet of the gas gun; the flyer then passes through a laser trigger that synchronously triggers the cameras. Pressure gauges near the plate and in the center of the conical anvil tube are used to measure the pressure of the water shock wave during the loading event. The peak pressure value at a certain position A in the anvil can be predicted by Eq. (1), with Z_w = 40.82 × 10^6 and Z_p = 1.46 × 10^6, where V_0 is the flyer impact velocity, Z_p and Z_w are the acoustic impedances of the piston and the water, and D and D_A are the diameters of the tube at the impact location and at position A. Two Fastcam SA5 (Photron, Japan) high-speed digital cameras were mounted in a stereo configuration (at an angle of about 20°) to record synchronized images of the deformation and rupture process of the specimens during the blast loading. The cameras are placed far from the explosive vessel and behind observation windows in order to prevent vibration interference and to protect the cameras during testing. In order to maximize the common field of view and the level of correlation, the two cameras were rotated and focused. The capture rate was 50,000 or 75,000 frames per second. Three halogen lamps with a power of 1 kW were used as the lighting source. Strain gauges were used to measure the in-plane strain along the radial and tangential directions, as shown in Fig. 2b.
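Because the printed form of Eq. (1) is not fully legible in this copy, the snippet below only illustrates the kind of one-dimensional impedance estimate commonly used for such flyer-piston-water setups; the formula and the example velocity are assumptions, not a transcription of Eq. (1).

```python
# Illustrative 1D impedance estimate of the peak water pressure generated
# by a flyer impacting the piston. This is an assumed textbook relation,
# not a transcription of Eq. (1), whose exact form is unreadable here.
def peak_pressure(v0, z_w=40.82e6, z_p=1.46e6):
    """Peak pressure (Pa) for flyer impact velocity v0 (m/s), with water
    and piston acoustic impedances z_w, z_p (kg m^-2 s^-1) as quoted above."""
    # The product-over-sum form is symmetric in the two impedances.
    return v0 * z_w * z_p / (z_w + z_p)

print(f"{peak_pressure(10.0) / 1e6:.1f} MPa for a 10 m/s impact")
```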
In this paper, two circular plates made of red copper, with a diameter of 292 mm and the same thickness of 1 mm, were prepared for the tests under underwater shock wave loading. By changing the type of specimen surface (no pre-notch or pre-notch), the effect of a pre-crack on the performance of specimens under underwater shock loading is examined at different pressure strengths. The shape of the pre-crack is a cross located in the center of the plate. The length of the cross crack is 30 mm, the width is 1 mm, and the depth is 0. A plate with a diameter of 200 mm and the same size of cross pre-notch, subjected to a lower loading, was also tested; this specimen was only fixed, with the bolts not passing through the plate, and is hereinafter referred to as "un-fully clamped". Table 1 is a summary of the experimental conditions for all specimens, including the support condition, the pre-notch of the metal plates, the impact velocity V_0, the theoretical peak pressure P_0 and the measured peak pressure at position A, P_A. Before preparing the random speckle pattern, the sample surface must be cleaned and polished. A high-contrast speckle pattern is then produced by spraying white paint and marking random points on the plate surface with a black marker pen; black speckle dots with a diameter of about 5 pixels, as seen by the two cameras, are considered ideal for the random high-contrast pattern. As shown in Fig. 2b, three areas on the specimen surface and the flange surface were selected as the AOI (area of interest), which is used to analyze the deformation fields by comparing the difference before and after deformation. During the shock, the shock absorbers could not completely prevent the motion of the anvil tube, so the DIC-measured motion on the specimen surface is an absolute displacement that includes both the plate deflection and the rigid motion of the tube. The areas on the flange surface were used to calculate the rigid displacement of the anvil tube.
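The rigid-motion correction described above can be expressed compactly; in the sketch below the arrays are placeholders, and the mean out-of-plane displacement of the flange area of interest is taken as the rigid-body motion of the anvil tube.

```python
# Sketch of removing the anvil-tube rigid motion from the DIC field:
# the mean out-of-plane displacement of the flange AOI is treated as the
# rigid-body component and subtracted frame by frame. Arrays are placeholders.
import numpy as np

n_frames, ny, nx = 50, 200, 200
w_specimen = np.zeros((n_frames, ny, nx))   # out-of-plane field on the plate AOI
w_flange = np.zeros((n_frames, 40, 40))     # out-of-plane field on the flange AOI

rigid = w_flange.reshape(n_frames, -1).mean(axis=1)   # per-frame rigid motion
deflection = w_specimen - rigid[:, None, None]        # plate deflection proper
```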
DIC results of no pre-notched plate
The evolution of the displacement (W) fields on the surface of the specimen without a pre-notch at different moments is shown in Fig. 3. The major component is the out-of-plane displacement (absolute displacement). At 0.05 ms, a visible symmetric deformation ring can be found at the boundary of the AOI, and the deformation region then rapidly spreads to the center along the radial direction until 0.25 ms. It is worth noting that the deformation area is at first nearly flat except for the deformation ring, and it evolves into a symmetric curved shape after 0.35 ms with the propagation of the deformation ring that appeared at the boundary. The displacement ring can be seen as the plastic yield line of the first bending mode shape of the plate. Figure 4 shows the comparison between the deflection history of the central point O and the pressure history at position B for the same specimen. The initial moment of the pressure history at position B is aligned with the initial moment of the pressure loading on the wet side of the specimen. At 0.1 ms, the first loading from the incident shock wave has vanished, but the plate continues to deform due to multiple subsequent incident and reflected shock waves and the inertia of the plate. This indicates that the resultant plastic deformation is caused by the combined effect of the reflected shock, the incident shock and the support condition of the structure.
Strain is used to describe the relative change of the material's shape as a normalization of the deformation [35]. The evolution of the in-plane principal strain fields of the specimen without a pre-notch at different moments is shown in Fig. 5. It can be seen that until 0.15 ms the higher principal strains occur close to the boundaries. During the loading, the incident underwater impulse forces all points of the panel to move outward, but points close to the clamping area are constrained, so higher strains occur in the boundary areas. Later, when inertia forces and the reflected waves take over, strains develop in the centre of the plate, reaching their first peak values at 0.55 ms in Fig. 5. Somewhat counter-intuitively, the entire central and boundary region of the field of view subsequently shows a higher strain than the surrounding plastically deformed circular region.
Figure 6 shows the comparison of the normal strains ε_xx and ε_yy between the DIC results and the strain gauge results at the strain gauge locations shown in Fig. 2. The DIC results are acquired from the strain data at points symmetric about the center. From this figure, the strain histories of ε_xx and ε_yy from the DIC and the strain gauges correlate well until 0.6 ms. After this moment, the strain history curves stop increasing and even decrease, possibly because the strain gauges fell off the specimen surface. The comparison indicates that DIC is a dependable measurement method.
Recovering and discussion
After the blast tests, all specimens were collected for further analysis. Figure 7 shows photographs of four recovered specimens. Three types of failure modes can be identified. For the specimen without a pre-notch, only large ductile deformation is observed (Fig. 7a); for the pre-notched specimen, large ductile deformation and local petalling at the location of the pre-notch are observed (Fig. 7b); for the un-fully clamped specimen, in addition to large ductile deformation, local necking in the center is also observed (Fig. 7c), confined to the location of the pre-notch.
Conclusions
This paper focuses on the dynamic deformation and rupture of pre-notched circular copper plates subjected to underwater impulsive loading. High-speed photography combined with 3D-DIC has been successfully used to acquire the out-of-plane displacement and in-plane principal strain on the surface of the circular plates during the blast process. Comparison of the strain gauge results with the DIC results clearly demonstrates the accuracy and advantages of the DIC technique. By changing the clamping condition and the loading intensity, the evolution of the failure modes has been observed and discussed, along with the effects of pre-notches on the failure modes of thin circular plates. 3D-DIC is shown to be an effective method to measure the 3D full-field dynamic response of structures under underwater impact loading.
The simplified FSI equipment was developed by Xiang et al. (School of Aerospace Engineering, Beijing Institute of Technology). The diagram of the experimental setup is shown in Fig. 1.
Figure 3 .
Figure 3. Principle diagram of the experiment setup.
Figure 4 .
Figure 4. Diagram of the deflection history of point O and the pressure history of position B.
Table 1 .
Summary of experimental conditions of all specimens.
Simultaneous Equation Model for Economic Calculation of Households of Independent Rubber Farmers in Mineral Land in Kampar Regency, Riau Province, Indonesia
Rubber is a plantation crop that is a major source of community income in Kampar Regency. As a source of household income, rubber farming is managed independently by households. This study generally aims to design models and government policy strategies for the development of smallholder rubber plantations on mineral land, focusing on the economic decision making of rubber farmer households. Specifically, this study was conducted to analyze the characteristics of independent smallholders and the dominant internal and external factors that influence the allocation of working time, income and household expenses of rubber farmers. The research was conducted using a survey method in Kampar Regency. The data used consisted of primary data obtained through interviews. Samples were taken by a simple random sampling method, with 60 rubber farmers. Descriptive analysis and an economic decision model of rubber farmer households using the simultaneous equation model approach with Two-Stage Least Squares (2SLS) were performed to answer the research objectives. The results showed that only internal factors of farm households are responsive to household economic decisions; no external factors included in the model are responsive to the economic decisions of rubber farming households in Kuantan Singingi Regency with regard to production, working time allocation, income or expenditure. From the aspect of production, no responsive internal or external factors were found, but the largest effect came from the number of productive rubber stems. From the aspect of work time allocation, the responsive internal factors are the total outpouring of farmer work, the outpouring of farm family work in the business and the household workforce. From the aspect of farmer household income, the responsive internal factor is the farmer's household income in the business. Household expenditure is influenced by the outpouring of work in the business, farmer education, the wife's education and total rubber farmer income. The policy implications are that increasing rubber prices and the outpouring of family work in the business have the most positive impact, while an increase in wages for workers outside the family has a negative impact on the household economy.
Introduction
The agricultural sector in Indonesia continues to be expected to play a role in the national economy through the formation of Gross Domestic Product (GDP), foreign exchange earnings, the supply of food and industrial raw materials, poverty alleviation, employment provision and increasing community income.
At the provincial level, Riau does not differ greatly from the national picture. In 2016, rubber ranked second after oil palm in area, with 504,553 ha. In the last five years (2012-2016), the total area, production and number of farmers cultivating rubber in Riau Province tended to decrease. In 2012, the area of rubber plantations was 128,520 ha with a production of 392,781 tons, decreasing to 90,877 ha with a production of 333,155 tons. The number of farmers cultivating rubber also declined from 276,210 households to 244,560 households (Badan Pusat Statistik, 2017b). The situation in Kampar Regency is not much different from the provincial level. In 2012 the largest plantation area was occupied by oil palm, with 190,486 ha, while rubber was in second place with 92,509 ha. In the period 2012-2016 the area of rubber plantations decreased, from 91,328 ha in 2012 to 91,143 ha in 2016 (Badan Pusat Statistik, 2017a). The decreasing area and production of rubber, as well as the declining number of farmers working rubber plantations, are thought to be due to the conversion of rubber land to oil palm. Development in the plantation sector is directed at further accelerating the rate of production growth from large private and state plantations, community nucleus estates and self-managed plantations to support industrial development, as well as increasing the utilization and preservation of natural resources (SDA) in the form of land and water. The plantation sector plays a large role in supporting farmers, supplying raw materials for the domestic industry and providing a source of foreign exchange (Heriyanto, 2017).
Various problems that occur will affect production, the allocation of work time, income, and the level of welfare of farmers. The level of welfare of farmers can be seen from household consumption expenditure. In other words, households are faced with the problem of allocating work time, income and expenses. The economic decisions of rubber farming households in relation to the allocation of work time, household income and expenditure are theoretically influenced by internal and external factors. A rubber farming household that uses labor from outside the household expects workers with high productivity but low wages; conversely, a worker tends to expect a job with a high level of wages.
The price level of rubber also determines the decision of rubber farmers to continue the rubber plantation business or not. If the price of the rubber produced is quite high while input prices are relatively cheap, so that production costs are less than the gross income obtained, then the business is profitable. The higher the level of profit obtained, the more the rubber plantation business will develop. Various external shocks that affect the production process will affect the allocation of work time, which will in turn affect income and ultimately the amount and pattern of household expenditure.
Based on the background and problems described above, the general objective of this research is to analyze the household economy, which includes the allocation of working time, income and expenditure of rubber farmer households on mineral land. Specifically, the purpose of this study is to analyze the characteristics of rubber farmers and the dominant internal and external factors that affect the allocation of work time, income and household expenditure.
The concept of agricultural development includes land resources, germplasm, water, technology, financing and human resources (HR). Agricultural development aims to increase farmers' income and welfare through increasing agricultural production. This increase in agricultural production, in addition to meeting the raw material needs of a growing domestic industry, also aims to increase foreign exchange from exports of agricultural products. One of the steps that can be taken to increase the contribution of the agricultural subsector is the production of plantation crops (Soekanda, 2001).
The Agricultural-Led Growth strategy (Poonyth, Hassan, & Calcaterra, 2001) emphasizes the agricultural sector as a leading sector in economic development because it is a driver of economic growth. Therefore the agricultural sector needs to receive greater attention than other sectors because of its potential to drive economic growth and job creation. Development of a productive agricultural sector and better rural areas is key to the growth of the agricultural sector and is a precondition for successful economic development.
The Agriculture-Based Development strategy (Romeo, 2000, 2001) is based on the consideration that in many low-income countries the majority of the population lives in rural areas, where the agricultural sector is the main source of livelihood. This strategy is more effective than an import substitution strategy or an export-led industrialization strategy, based on the consideration that it provides opportunities for income generation, directly or indirectly, for rural populations. Through this strategy, public resources are increasingly allocated to the agricultural and rural sectors, which is expected to increase agricultural productivity and the income of rural populations. The role of the agricultural sector in economic development includes: (1) increasing the availability of food or food surplus for domestic consumption, (2) releasing excess labor to the industrial sector, (3) being a market for industrial products, (4) increasing domestic savings, (5) increasing trade (a source of foreign exchange), and (6) improving the welfare of rural people (Jighan, 1994).
Research on the household economy of farmers has been done by other researchers, for example on the household economy of rice farmers. Production in paddy farming households is determined by labor in the family, the amount of seed, fertilizers and pesticides. The distinguishing feature is that households of paddy rice farmers use more labor from within the family (Elinur, Asrol, & Heriyanto, 2017; Heriyanto, 2018).
Heriyanto (2017) conducted research on the efficiency of rubber production factors in Kampar Regency, Riau Province. The results showed that the dominant factors affecting rubber production in Kampar Regency were the number of plants, the age of the plants, the number of workers and investment. The production factors of the number of plants and the number of workers were technically, allocatively and economically inefficient. The use of fertilizers tended to be technically and economically efficient, but allocatively inefficient.
Rubber farm household economic research analyzes production, farm household work time allocation, the use of non-family labor, non-farm income, and household expenditure, including food and non-food expenditure. This research will produce a comprehensive economic model of smallholder farmers' households that has not been studied by researchers before. This study also recommends policies relating to the development of smallholder rubber in the context of increasing the household income of rubber farmers.
Household Economy
Understanding farm households is very important because their characteristics are unique and complex. A household has resources that can provide satisfaction and can be shared among household members. In addition, in increasing their satisfaction, households must have alternatives so that they have many choices. Household economic activities include production activities as a farming enterprise, consumption activities as consumers, and the supply of labor. In carrying out these activities the household follows the principle of utility maximization under budget or resource constraints (Nakajima, 1989).
A farm household is an economic unit that acts as both producer and consumer. As a producer it carries out production activities and as a consumer it carries out consumption activities simultaneously. This differs from a company, which as an economic unit only produces goods and services to achieve maximum profit. Becker (1965) formulated an agricultural household model that integrates production and consumption activities as a whole, with the use of family labor preferred. This household economic model uses a number of assumptions: first, household satisfaction in consuming is determined not only by the goods and services obtained in the market, but also by various commodities produced in the household; second, the elements of satisfaction include not only goods and services but also time; third, time and goods or services can be used as factors of production in household production activities; and fourth, households act as producers as well as consumers.
Meanwhile, Barnum and Squire (1978) revealed that the household economic model can be used to analyze the economic behavior of agricultural companies that use paid labor and sell all the products they produce to the market, unlike subsistence agriculture, which relies on family labor so that there is no market surplus. Singh and Strauss (1986) arranged the agricultural household economic model as a basic model of the household economy. In that model, household utility is determined by the consumption of goods and services produced by the household, the consumption of goods and services purchased in the market, and the consumption of leisure time.
Farm household economic models include activities of production, consumption and the allocation of family labor carried out simultaneously, with more complex estimation techniques. Estimation of the model uses two-stage least squares (2SLS) or three-stage least squares (3SLS) estimation techniques. Rice farm household economics studies using 2SLS estimation techniques were carried out by Faradesi (2004) and Rochaeni and Erna M. (2005) in Cianjur Regency and Bogor City. Economic research on farm households has also been applied to farm households growing plantation crops, such as rubber and oil palm farm households (Elinur & Asrol, 2015a; Husin & Dwi Wulan, 2011; Khaswarina, 2017).
Rice farm household economics research analyzes the allocation of work time in farming, production and household expenditure. The allocation of work time consists of equations for the outpouring of family labor in lowland rice farming and the outpouring of family labor outside farming. The production equation is influenced by the use of rice production factors. The household expenditure equations consist of food and non-food expenditure. However, this research has not accommodated household expenditure on health, education and leisure (Faradesi, 2004; Rochaeni & Erna M, 2005). The economic research on rubber farmer households covers the farmer household's working time on rubber farming and non-farming activities. Household income consists of income from rubber farming, non-rubber farming and non-farming sources. Rubber farmer household expenses consist of food expenditure, non-food expenditure, education expenditure, farm investment and farmer household savings. That model does not yet accommodate the demand for workers outside the family or expenditure on clothing, housing, health and leisure, and the research is still at the village scope (Husin & Dwi Wulan, 2011; Khaswarina, 2017; Ningsih et al., 2020).
The economic research on oil palm farmer households builds a model consisting of equations for the allocation of farm household work time, oil palm production, labor demand outside the farmer's family, farmer household income and farmer household expenditure. The study of the household economics of oil palm farmers includes four aspects. First, the demand for labor distinguishes labor within the family from labor outside the family. Second, it includes the outpouring of family work outside the business and income from outside the oil palm farm. Third, it includes business investment, education investment and household savings, i.e., saving money in financial institutions, on the household expenditure side. Fourth, household consumption consists of food consumption, non-food consumption and recreation (Elinur & Asrol, 2015a).
This economic study of rubber farming households combines the economic models of rice farming households and of rubber and oil palm farming households. The economic model of rubber farming households consists of a system of equations that accommodates household expenditure in accordance with household economic phenomena. First, the rubber production equation is influenced by production factors consisting of the number of plants, fertilizers, pesticides and family labor. Second, the household time allocation of rubber farmers comprises the allocation of working time in farming and outside farming. Third, household income of rubber farmers consists of rubber farming income, non-rubber farming income and non-farming income. Fourth, household expenditure consists of food expenditure, clothing expenditure, education expenditure, and health and recreation expenses. Fifth, the rubber farming household economic model also includes expenditure for farm investment, because rubber farmers in the study area generally set aside part of their income for the farm. Compared with previous farm household economic studies, this study has similarities and differences. The similarity is that this research accommodates all farm household economic activities, covering production, consumption and work time allocation. Its advantage is that it includes clothing, health and recreation expenses, which have not been accommodated by previous studies.
Review of Previous Studies on Home Economics
Studies of the household economy have been carried out both partially and simultaneously (Chuzaimah, 2006; Elinur, 2004; Heriyanto, 2017; Husin & Sari, 2011; Koestiono, 2004; Siti & Erna, 2005), analyzing policy simulations of the agricultural household economy. The results of the policy simulations imply that the policy of increasing output prices is not effective in increasing the amount of production that can be sold to the market, because the additional benefits of rising output prices and technological improvements are largely allocated to labor costs. Priyanti et al. (2007) conducted a study of a farm household economic model in a crop-livestock integration system. The results showed that the farm household economic model is able to explain farm household income obtained from maximizing satisfaction subject to constraints on production, time allocation and income distribution. This includes aspects of production, allocation of family labor, use of inputs and production costs, income and farm household expenses. The model is very useful for identifying the factors that influence farm household decisions, especially in increasing income simultaneously and in an integrated way between crop and livestock enterprises. Husin and Sari (2011) studied the economic behavior of rubber farmer households in Prabumulih in workforce allocation, production and consumption, with the result that household time allocation behavior was influenced by total household expenditure, rubber land area, non-rubber farm land area, rubber farming income and the number of children under five. Farmer household production behavior was influenced by rubber land area, non-rubber farming land area, the outpouring of family labor on rubber farming, and the use of fertilizers and pesticides. Farmer household consumption behavior was influenced by total household income, the time spent working by household members on rubber farming and the number of household members. Variables to which the flow of working time responded elastically were rubber farming income, total household expenditure and non-rubber farming land area, whereas the variables to which household expenditure responded elastically were total household income and expenditure on food consumption.
Elinur and Asrol (2015b) studied the economic decisions of oil palm farmer households in Indra Sakti Village, Tapung District, Kampar Regency. The household economic model they built includes aspects of production, allocation of work time, use of labor within and outside the family, and household expenses consisting of food and non-food expenditure. The research did not include expenditure on clothing, housing, education, health and recreation; overall expenditure was aggregated into food expenditure. That research was still at the village scope. Khaswarina (2017) and Wahyudy (2019) conducted research on the household economics of ex-UPP TCSDP rubber farmers in Koto Damai Village, Kampar District. The household economic model built there included the production equation, allocation of work time, income within and outside farming, and food expenditure, education expenditure, non-food expenditure and household savings. The model does not yet accommodate the demand for workers outside the family or expenditure on clothing, housing, health and leisure. This research is also still at the village scope.
Research methods
The location of the study was determined proportionally: Kampar Regency was chosen considering that it is the second largest rubber plantation area after Kuantan Singingi Regency in Riau Province. To achieve optimal results, this research is expected to be funded within one year.
Fig 1. Number of Rubber Farmers Samples in Kampar District
Sampling in this study was conducted using a multi-stage purposive sampling method, with the criteria of having an area of 1-3 ha and a rubber plant age of 13-25 years. Samples were taken in three districts, namely Kampar Kiri Hulu, Kampar Kiri Hilir and XIII Koto Kampar, because these three districts are rubber production centers in Kampar Regency. Twenty rubber farmers were taken from each sub-district, for a total sample of 60 rubber farmers. A clearer scheme of the rubber farming household sampling is presented in the figure above.
The type of data collected is cross-sectional data. Primary data were obtained from direct interviews with respondents, namely rubber farming households, using a prepared questionnaire. In addition, secondary data were collected from a number of related institutions, such as the Plantation Agency, the Central Statistics Agency and other sources. Secondary data are used to sharpen and support the analysis in this study.
Data analysis
To address the objective concerning the characteristics of independent rubber farmers, the study used descriptive analysis with the tabulation method, focused on explaining the pattern of work time allocation, income contribution and household expenditure patterns. The description of the pattern of rubber farm household work time allocation includes the length (percentage) of work time allocated to activities within and outside the rubber plantation business. Furthermore, the allocation of working time can be disaggregated according to household members (husband, wife and children).
Meanwhile, the descriptive analysis of income contribution is intended to give a picture of the contribution of income from within and outside the rubber plantation business to the total income of rubber farmer households. The descriptive analysis of household expenditure patterns focuses on the allocation of income that is reinvested in the rubber plantation business, consumption expenditure, savings, investment and leisure. The pattern of household consumption expenditure is further broken down by commodity group, namely food and non-food consumption. In addition, a descriptive analysis was also carried out of the general identity of the sample (age, education, number of family members and work experience).
The framework for the economic analysis of rubber farmer households in Kampar Regency, analyzed with the simultaneous-equation econometric model using the 2SLS estimation method, can be seen in the figure below.
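To make the estimation step concrete, the following sketch implements a textbook two-stage least squares fit with NumPy for a single structural equation; the variable names and simulated data are hypothetical stand-ins for the survey variables, not the actual model specification used in this study.

```python
# Minimal 2SLS sketch for one structural equation of a simultaneous
# system: an endogenous regressor (household income) is first projected
# onto the instruments, and the fitted values replace it in the second
# stage. Variable names and data are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 60                                        # sample size (60 farm households)
z = np.column_stack([np.ones(n),              # instruments and exogenous vars
                     rng.normal(size=n),      # e.g. productive rubber stems
                     rng.normal(size=n)])     # e.g. rubber price
x_exog = rng.normal(size=n)                   # e.g. household labour force
income = z @ np.array([1.0, 0.5, 0.8]) + rng.normal(size=n)   # endogenous
hours = 2.0 + 0.6 * income + 0.3 * x_exog + rng.normal(size=n)

# First stage: project the endogenous regressor on instruments + exogenous.
Z = np.column_stack([z, x_exog])
income_hat = Z @ np.linalg.lstsq(Z, income, rcond=None)[0]

# Second stage: OLS of the outcome on fitted income and exogenous regressors.
X2 = np.column_stack([np.ones(n), income_hat, x_exog])
beta = np.linalg.lstsq(X2, hours, rcond=None)[0]
print("2SLS estimates [const, income, labour]:", beta)
```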
Characteristics of Independent Rubber Farmers
The profile of the sample rubber households (hereinafter referred to as rubber farm households) can be seen in Table 1. Based on Table 1, the average age of the heads of rubber farmer families is 48 years; thus the average rubber farmer is at a productive age. The average rubber farmer started the rubber gardening business at the age of 32 years, so the average rubber farmer has been in the business for 16 years and can be said to have enough experience in running a rubber farm. Table 2 shows that the formal education of rubber farmers and of their wives is 10 years and 9 years, respectively. Thus the education of rubber farmers and their wives is still low, i.e., only junior high school graduates.
The average rubber farm household has 6 members, including 4 who belong to the labor force and 2 school children. In general, the household workforce of rubber farmers works both inside and outside the rubber farming business. The majority of rubber farming households, 57 households, come from villages within the regency; only 3 rubber farmer households come from outside Kampar Regency. This indicates that the rubber farming business is dominated by people from within Kampar Regency.
The farmers working this rubber have an average of 2 hectares of garden area. The rubber plantation land is their own property, and the rubber plantations are generally worked by the farmers and their families. Table 2 shows the results of the estimation of the rubber production equation: rubber production is not responsive to the outpouring of household work in the business (positive) or to the number of productive rubber stems (positive). Although the elasticity values are not responsive, rubber production is more sensitive to changes in the number of productive rubber stems than to changes in the outpouring of rubber farm family work in the business.
Internal and External Dominant Factors That Affect Working Time Allocation, Income and Expenditures of Rubber Farmer Households
Variations in the outpouring of rubber farming family work in the rubber farming business, labor outside the family and the number of productive rubber stems have a positive effect on production. This illustrates that if the outpouring of family work in the rubber farming business, labor outside the family and the number of productive rubber stems increase, then rubber production will tend to increase. From the aspect of work time allocation, the outpouring of family work in the business is not responsive to household income in the business (positive) or to the outpouring of family work outside the business (negative). Furthermore, the outpouring of family work outside the business is responsive to the outpouring of household work outside the business (negative) and to the number of household workers (positive) (Table 2).
The results of the estimation of the equation for the use of workers outside the family show that the use of workers outside the family is not responsive to changes in the household income of rubber farmers outside the business (positive). However, the use of labor outside the family in the business is responsive to changes in the total outpouring of farmer work (negative).
Furthermore, the estimation results for the household income equation show that the household income of rubber farmers outside the business is responsive to changes in the household income of rubber farmers in the business (negative). However, the household income of rubber farmers outside the business is not responsive to changes in the outpouring of rubber farming family work outside the business (positive) (Table 2).
Furthermore, the results of estimating the expenditure of rubber farmer households show that the food consumption of rubber farmer households is not responsive to changes in the number of rubber farmer family members (positive), rubber farmer household recreation expenses (positive) or the education of rubber farmers' wives (negative). The equation for non-food consumption shows that non-food consumption of rubber farmer households is not responsive to changes in the total income of rubber farmer households (positive), the number of rubber farmer family members (positive), the education of rubber farmers' wives (positive) or the education investment of rubber farmer households (negative). Table 2 shows that the education investment of rubber farmer households is not responsive to changes in the number of school children in rubber farmer households (positive). Several studies on farm household economics show that household education expenditure is significantly influenced by the number of school children and total household income, with both variables positively related to education expenditure (Adevia, Bakce, & Hadi, 2017; Husin & Dwi Wulan, 2011; Khaswarina, 2017; Putra, Bakce, & Rifai, 2012). Thus the results of this study are in accordance with the results of previous studies.
For the equation of rubber farm household business investment, it can be stated that business investment is not responsive to changes in the total income of rubber farm households (positive) or to the outpouring of rubber farm family work in the business (positive). The results of Putra et al. (2012) show that the rubber farming investment variable is influenced by the total income of rubber farm households and the number of school children, both positively related, and that both variables are not responsive to investment in rubber farming. Thus this research is similar to Adevia et al. (2017) and Putra et al. (2012), where the income variable in farming is part of the total income of farm households.
Meanwhile, from the estimation results for the equation of rubber farmer household recreation expenditure, it can be stated that recreation expenditure is responsive to changes in the outpouring of rubber farm family work in the business (negative), the education of the rubber farmer (positive) and the education of the rubber farmer's wife (positive), but not responsive to changes in the outpouring of family work outside the business (negative). Based on the estimation results for the rubber farmer household saving equation, it can be stated that the amount of rubber farmer household savings is responsive to changes in the total income of rubber farmer households (positive) but not responsive to changes in the total consumption of rubber farmer households (negative) or to changes in interest rates (positive).
Conclusion
Based on the results of the analysis and discussion above, the following conclusions can be drawn: 1) Characteristics of rubber farmers: the average rubber farmer is at a productive age (48 years), with 9 years of education for the farmer and 9 years for the farmer's wife, 16 years of work experience, 6 household members, 4 household members in the labor force, 2 school children and 2 hectares of land.
2) Internal factors of farm households are responsive to household economic decisions. There are no external factors included in the model that are responsive to the economic decisions of rubber farming households in Kuantan Singingi Regency with regard to production, working time allocation, income or expenditure. From the aspect of production, no responsive internal or external factors were found, but the largest effect came from the number of productive rubber stems. From the aspect of work time allocation, the responsive internal factors are the total outpouring of farmer work, the outpouring of farm family work in the business and the household workforce. From the aspect of farmer household income, the responsive internal factor is the farmer's household income in the business. Household expenditure is influenced by the outpouring of work in the business, farmer education, the wife's education and total rubber farmer income.
Accommodating Minorities into Sri Lanka’s Post-Civil War State System: Government Initiatives and Their Failure
Many observers view the defeat of the Liberation Tigers of Tamil Eelam (LTTE) in May 2009 as a significant turning point in the protracted ethnic conflict that had troubled Sri Lanka. The armed struggle and the consequences of war have encouraged the state and society to address the group rights of ethnic minorities and move towards state reconstitution. The Tamil minority and the international community expect the Government of Sri Lanka (GOSL) to introduce inclusive policies as a solution to the ethnic conflict. They believe the state should take measures to avoid another major contestation by applying the lessons learned from the civil war. The study is a qualitative analysis based on text analysis. Against this backdrop, this paper examines the attempts made to include minorities in the state system in post-civil war Sri Lanka, which would contribute to finding a resolution to the ethnic conflict. The study reveals that numerous attempts were made at various times to introduce inclusive policies to achieve state reconstitution, but those initiatives failed to deliver sustainable peace. The study also explores problems pertaining to contemporary policy attempts.
Introduction
Sri Lanka is known all over the world for the protracted ethnic conflict and civil war that lasted over three decades. The war caused the deaths of about 100,000 persons, most of whom were members of minority communities, and the violence also resulted in the forcible displacement of hundreds of thousands of people internally and to other countries (Nadeeka & Rodney, 2010; Imthiyas & Iqbal, 2011). Since 1948, when Ceylon (as it was then known) gained independence, there has been much political debate on transforming and reconstituting the state and society from its unitary character into one that reflects a more plural character. When Tamil minority political elites demanded regional
Research Method
This study is based on the qualitative method that uses text analysis along with limited observation. In this section, the authors discuss the methodology and its relevance to Sri Lanka. The data for this research were collected using multiple methods, including an extensive literature survey to gather documents written on the post-independent state formation and state-minority contestations that occurred in the country, and further supplemented with a process of limited field observations and reflections. This two-step approach was followed for the qualitative data collection carried out during the period 2013 to 2017. Qualitative data were analyzed by adopting critical and interpretative approaches. Empirical materials and data were classified, weighed, and combined in these approaches. These processes were successfully followed in the current study. Thus, this paper reviews the new political trends that resulted after the end of the war, which provided opportunities to reconstitute the state based on the lessons learnt from the war, but were not utilized by the state.
Results and Discussion
This section reviews the findings of the study in respect of each post-civil war attempt made to find a solution to the protracted conflict in Sri Lanka.
Lessons Learnt and Reconciliation Commission (LLRC)
Following internal armed conflicts, several truth commissions have been established in various countries to investigate and record the violence that occurred during the conflict and to formulate recommendations to bring about reconciliation on the basis of devolution of power. There are some notable examples of truth commissions that functioned in East Timor, Guatemala, Rwanda and South Africa (International Justice Resource Center [IJRC], n.d) at different times. When we compare these with the Sri Lankan truth commission, we find that the outcome is entirely different here. In the aftermath of the civil war in Sri Lanka, the Lessons Learnt and Reconciliation Commission (LLRC) was appointed by the Government of Sri Lanka (GOSL) in May 2010 (Ratwatte, 2012).
Post-civil war LLRC and its mandate provided a new avenue to examine the issues pertaining to state reconstitution in Sri Lanka by exploring the following questions: Why did the Government of Sri Lanka appoint the LLRC at the end of the war? What changes were recommended pertaining to devolution of centralized power and state reconstitution to heal the wounds of war and promote reconciliation after the protracted ethnic conflict? To what extent have the LLRC's recommendations on the matter of state reconstitution been implemented in the last four years? The following section attempts to answer these questions.
The defeat of the LTTE did not bring the ethnic conflict to an end, as due to the political manoeuvrings of the Tamil Diaspora there has been consistent international pressure on Sri Lanka to work out some form of a political solution and show a willingness to investigate issues of humanitarian and human rights violations during the final stage of the war (Thiranagama, 2013). Several foreign countries, U.N. bodies, International and Local Non-Governmental Organizations, and civil society organizations have been applying heavy pressure upon the State of Sri Lanka in the following ways: "The call for an investigation into the deadly conflict began when Secretary-General Ban Ki-Moon of the United Nations expressed his intention to appoint a panel of experts in March 2010. On May 31 2010, United Nations (U.N.) High Commissioner for Human Rights, Navi Pillay called on the Sri Lankan government to allow an international inquiry into the Government's offensive against the Tamil Tigers. Western governments, including the United States, also applied pressure on the Sri Lankan government to launch an impartial investigation into allegations of war crimes perpetrated by the state security forces and the LTTE. International Civil rights organizations, including Amnesty International and Human Rights Watch, joined the U.N. officials' calls for accountability into crimes committed in Sri Lanka. On May 17 2010, International Crisis Group (ICG) released a report entitled War Crimes in Sri Lanka, appealing for a concerted effort by the international community, led by the United Nations, to further investigate alleged war crimes by Sri Lankan security forces and the LTTE and prosecute those responsible" (United Nations Regional Information Centre for Western Europe [UNRICFWE], 2016).
The Government, however, stated it engaged in a "humanitarian rescue operation" with a policy of "zero civilian casualties" (U.N., 2011, p. ii). To appease the international community, Sri Lankan President Mahinda Rajapaksa appointed the Lessons Learnt and Reconciliation Commission (LLRC) on May 15, 2010 (LLRC, 2011) to investigate the failure of the Cease Fire Agreement (CFA) of 2002. The final report of the LLRC was submitted to the President on November 15, 2011, but it remained unpublished until December 16, 2011 (Fowsar, 2015). The commission also had to report on the sequence of events that occurred from the time the CFA was signed up to May 19 2009 and point out the lessons that should be learnt from them. The institutional, administrative and legislative areas were also looked into, in order to prevent any misunderstanding and to promote national unity and reconciliation among the diverse communities in future (LLRC, 2011; Ratwatte, 2012). According to the report, the commission also recommended that the Government deal with the issues of devolution. In this regard, "in Chapter 8, a sub-topic titled 'The Need for Devolution of Power', adopts a positive approach and dismisses the government stance of being unsure as to whether devolution is needed" (LLRC, 2011, as cited in Kelegama, 2015). The LLRC clearly states that devolution is needed and there is no doubting it. The following is a brief account of the recommendations given in the LLRC report about the devolution of power. A devolution-based political settlement must address the ethnic and other serious problems that threaten democratic institutions. In order to ensure sustainable reconciliation, the commission wishes to underline the critical issues in the devolution process. Therefore, the commission recommends that the present opportunity should be utilized both for maximum possible devolution to the periphery at the grassroots level and for power-sharing at the center. To this end, the Government must take the initiative to have a serious and structured dialogue with all political parties, including the minorities (LLRC, 2011, pp. 305-307).
The National Plan of Action (NPA) was released by the GOSL to implement the recommendations of the LLRC on July 26 2012 ("The LLRC", 2015, March 17). An action plan was formulated to address the need for devolution or state reconstitution as set out in recommendations 9.236 and 9.237: "Take the initiative to arrange a serious and structured dialogue with all political parties, and those representing minorities, in particular, to develop consensus on devolution. The dialogue must take place at a high political level and with adequate technical back-stopping" (Government of Sri Lanka, 2015, p. 13). This plan was formulated during Mahinda Rajapaksa's term of office. However, more than three years later, the overall implementation status of the LLRC recommendations has been disappointing. Meanwhile, newly elected President Maithripala Sirisena and the national Government pledged to implement the recommendations of the LLRC. Against this backdrop, the next section examines Rajapaksa's other commitment to address the ethnic conflict by appointing a Parliamentary Select Committee to prepare a proposal to resolve this issue.
Government-TNA Talks: A Stalemate Situation
One of the negative aspects of Sri Lankan politics in 2011 was the abortive attempt by the Government to find a political solution to the pressing problems of Tamils. The talks that commenced between the Government and the Tamil National Alliance (TNA), one year after Mahinda Rajapaksa was re-elected as president for a second term in office, did not make any progress. Although the TNA was able to win a majority of parliamentary seats in the North and East in 2010 and proved itself as a credible representative of the Tamils, President Rajapaksa showed no interest in initiating a dialogue with the TNA throughout the year 2010 (Marcelline & Uyangoda, 2013, as cited in Fazil, 2019a). Nevertheless, there was substantial international pressure being exerted on the Government to arrive at a political solution by engaging in dialogue with the TNA. Appearing to oblige, President Rajapaksa invited the TNA for talks that began in January 2011. The 14 meetings that are reported to have taken place between both parties in 2011 produced no tangible results. The TNA submitted two sets of political proposals to the Government in February and March, but received no response from the Government despite the fact that there were seven meetings between January and August 2011. The TNA insisted on (a) devolution of police and land powers to the provincial councils with full implementation of the existing thirteenth amendment, (b) empowering provincial councils by transferring the list of concurrent powers to the provincial list of powers, and (c) re-merging of the northern and eastern provinces into a single unit of devolution. The Government's failure to respond to the proposal of the TNA can be construed as a rejection of its contents. According to media reports, President Rajapaksa and his constituent partners in the United People's Freedom Alliance (UPFA) had expressed their unwillingness to transfer police and land powers to the provincial councils. Based on political analysis, President Rajapaksa had appeared inclined to dilute the 13th amendment by removing police and land powers from the provincial councils' list of powers. A proposal of this nature, aimed at reducing the power of provincial councils, would undermine the existing power-sharing framework integrated into the constitution.
The abortive outcomes of Government-TNA meetings can be attributed to this atmosphere of uncertainty. In 2011, the TNA requested the Government to provide a written response to the proposals it made in February 2011 as an initial measure to pave the way for further continuance of the bilateral talks. The meetings that had taken place in August 2011 were a showcase of conflicting ideas between both sides, and their respective approaches to talks too were divergent, and the result was that the differences sharpened. It is reported that the TNA leader expressed his disappointment over the measure of seriousness the Government was taking with regard to the talks while blaming the Government by stating that the talks were being used as a camouflage to create an impression locally and internationally that they were in a serious process of reconciliation whereas the real situation was contrary to this image (Marcelline & Uyangoda, 2013).
One main element pertinent to the discussion above was the absence of the LTTE in the war front, and this had the effect of eliminating any sense of urgency in finding a solution to the ethnic conflict. This led to the nature of the relations between the Government and the ethnic minorities being altered significantly (Marcelline & Uyangoda, 2013). After May 2009, the key factor in the ethnic conflict in Sri Lanka was not the relationship between the state and the LTTE, but the relationship between the state and the ethnic minorities.
It appeared that the Sri Lankan government no longer recognized the urgency of a political solution, a negotiated settlement and regional autonomy, which had been priorities only because of the military capability of the LTTE and the resultant threat to the Sri Lankan state. With the military defeat of the LTTE, the threat posed to the state was removed, and the armed struggle that had given the LTTE a central role in defining and shaping relations between the state and the ethnic minorities also disappeared. Therefore, the parameters and conditions after the war ended were tailor-made in favour of the state. In this situation of altered political conditions with respect to ethnic relations, the necessity and urgency of negotiations and a political solution were hardly felt by the state as they had been before. The UPFA government was evidently indifferent, with no practical initiative being taken. This apathy presents a risk of the re-emergence of violence if the situation persists.
Parliamentary Select Committee (PSC)
Later on, the GOSL insisted that power-sharing and devolution must be addressed by a Parliamentary Select Committee (PSC) (ICG, 2013, p. 21). It is necessary to highlight at this point that the TNA gradually assumed the position of the alternative Tamil social force in the post-LTTE period. After the release of the interim report of the LLRC, the state was pushed to find a way forward by implementing its recommendations. It was in this background that President Rajapaksa and R. Sampanthan, the TNA leader, met together on September 2, 2011, on which occasion the president suggested his ideas on a political solution. President Rajapaksa proposed at this meeting the idea of setting up a Parliamentary Select Committee (PSC), which will make the final decision on the political solution to be taken. Soon the proposal for a PSC led to further disagreements between Rajapaksa and Sampanthan when the latter suggested that a bilateral consensus between the Government and the TNA should be presented to the PSC. While the Government wanted to proceed with the PSC even before arriving at a consensus with the TNA on a political solution, the TNA insisted that without such a consensus, the entire PSC process would be another futile exercise as had been the case with many such committees and commissions in the past. The TNA's refusal to send its nominees to the PSC quickly developed into a major political row, even raising a likelihood of the discontinuation of bilateral talks.
After several months of suspension in the talks, the two sides agreed to meet on January 17, 18 and 19, 2012. However, the Government did not send its delegation to the meeting, indicating that the dialogue had reached a serious impasse. It became apparent in the controversy that the two sides had somewhat exclusivist ideas regarding the implementation of the PSC proposal. The Government stressed that a political solution to the majority-minority conflict in Sri Lanka should be found by way of "an inclusive process with the participation of all political parties, not just the TNA" (Sunday Leader, January 17 2012, as cited in Marcelline & Uyangoda, 2013, pp. 320-321). The TNA, on the other hand, held the view that a consensus between the two sides was a necessary precondition for a successful outcome of the PSC process and that the Government should give priority to a basic understanding with the TNA, the elected representatives of the Tamil people, before summoning the PSC. Thus, the Government-TNA dialogue remained stalled, despite international pressure on both sides to resume the dialogue and work together for reconciliation and a political settlement.
The National Plan of Action (NPA) released by the Government in July 2012 to implement the LLRC recommendations emphasized the need to refer the matter of devolution to the PSC first to obtain its approval. Anyway, there was a deadlock in early 2012 when the Government refused to send its delegation to the meeting. The GOSL used this situation as a tactic to avoid taking up any position of its own and to prevent any expansion in the devolution. In this situation, South Africa stepped in and offered its support to the reconciliation process in Sri Lanka. This is discussed in the following section.
South African Initiatives
The South African Government participated as a facilitator in the first third-party initiative in post-war Sri Lanka, and it was anticipated with optimism that the state reconstitution attempt would resolve the ethnic grievances of minorities. South African involvement took shape around the time President Jacob Zuma visited Sri Lanka for the Commonwealth Heads of Government Meeting. A delegation also interacted with officials of the Department of International Relations and Cooperation (DIRCO), who shared with them their experiences of the truth and reconciliation process (Tamil Guardian, 2014).
South African President Zuma, in his speech in parliament in February 2014, stated that in response to the appeal of the Sri Lankan Government, he was assigning Cyril Ramaphosa as a special representative to Sri Lanka. In June 2014, the special representative arrived in Sri Lanka, making his commitment clear with regard to the peace-building role. South Africa's role in Sri Lanka was explained to all levels of the ruling party at the African National Congress (ANC) annual convention in April 2014 (Perera, 2014). In his first public comments on his role, the Deputy President of the ruling ANC, Ramaphosa said, "We are truly honoured to be chosen among many countries to go and make this type of contribution to the people of Sri Lanka. We have a wonderful story to tell, and it is this wonderful story that the Sri Lankans see." He also said, "As South Africans, we do not impose any solution on anyone around the world. All we ever do is to share our own experience and tell them how, through negotiation, through compromise, and through giving and taking, we were able to defeat the monster of Apartheid." He added, "We think we can share those experiences, but of course, in the end, it is up to the people of Sri Lanka to find their own peace" (Perera, 2014).
The peace and reconciliation initiatives of South Africa could be classified as a third-party contribution in Sri Lanka. Previously, India and Norway took part as mediators and promulgated ceasefires between the GOSL and the LTTE to promote reconciliation and peace in the country. As a result of the defeat of the LTTE, a ceasefire was no longer necessary. Nevertheless, the South African attempt was a hopeful initiative to bring reconciliation (The Sunday Times, 2014).
Ramaphosa and his team met the president, prominent representatives of the Government and Wickremesinghe, the Opposition Leader. Notably, other important discussions were held with the TNA, with leaders such as R. Sampanthan and Chief Minister, Justice C.V. Wigneswaran. During his visit to Jaffna, Ramaphosa talked with Maj. Gen. (Rtd) G.A. Chandrasiri, Northern Province Governor and Udaya Perera, Security Forces Commander ("The Great South", 2014, July 10). South African initiatives were an acceptable way to move towards the post-war reconciliation process, and it was expected that this might provide a true sharing between both nations. Nonetheless, the process had been criticized by some of the majority leaders and radical movements. The South African delegation's meeting with Northern Province Chief Minister, Wigneswaran brought forth a host of race-centred criticisms, even five years after the end of the war.
It was expected that the post-war scenario offered an opportunity to resolve the ethnic conflict by devolving powers to the minorities. However, the inclusive mechanisms initiated by the GOSL failed as usual, and Rajapaksa and his Government continued to apply various stratagems to centralize power, such as passing the 18th amendment and the Divi Neguma Bill, and conducting impeachment proceedings against the former Chief Justice of the country. These centralization activities are briefly examined below (Fazil, 2019).
A drastic change embodied in the form of the 18th amendment to the 1978 Constitution came in the wake of the presidential and parliamentary elections. Its purpose was to allow a third presidential term for President Rajapaksa, who led the UPFA coalition regime. The enactment of the 18th amendment constituted a crucial development in Sri Lanka's post-civil war state reconstitution process as it was in the direction of further centralization of state power in the office of the president and in the hands of the person who holds that office. A key feature of the 18th amendment was the repeal of the 17th amendment, which had provided for a constitutional mechanism known as the Constitutional Council, to check some powers of the Executive President, such as the power to make key public service appointments. The 18th amendment also revised the powers of several important public service bodies such as the Public Service Commission, the National Police Commission and the Elections Commission, by which more power was transferred to the executive.
Regime Change and the National Government in Power
After the war victory in 2009, Rajapaksa was elected to a second term as president by majority votes in 2010, and then his party won the parliamentary elections too. He was in an ideal situation to rehabilitate, reconstruct and develop the country. However, President Mahinda Rajapaksa focused on other things, and his family domination started expanding to cover the entire ruling system of the country. This inevitably led to an increase in corruption and soft-authoritarianism in his regime.
President Mahinda Rajapaksa lost the presidential election in January 2015 owing to his unpopularity over charges of corruption, oppression of the minorities, an undemocratic ruling style, militarisation and the centralization of state power. "President Rajapaksa's former Minister Maithripala Sirisena secured a surprise win as the common opposition candidate on the promise of implementing a 100-day program of constitutional and governance reforms, after which parliamentary elections would be held". During the oath-taking ceremony held at Independence Square on January 09 2015, the newly elected president was sworn in as the sixth Executive President of Sri Lanka before the Chief Justice. Ranil Wickremesinghe, leader of the United National Party (UNP), was sworn in as Prime Minister before President Maithripala Sirisena (Adaderana, January 09 2015). In a historic turn of events, the main opposition political party stitched together a coalition government with others that comprised 11 cabinet ministerial, five state ministerial and ten deputy ministerial positions. Ministers were sworn in before President Maithripala Sirisena and in the presence of Prime Minister Ranil Wickremesinghe ("The More the Merrier," March 23 2015). In August 2015, the 8th general elections were held, but no party won a majority in parliament. Eventually, a national government was formed by a coalition of parties. Interestingly, the prominent Tamil party, the TNA, secured the position of the main opposition party in parliament.
Looking back, national and international factors contributed to the defeat of the Rajapaksa regime. The United National Party (UNP), People's Liberation Front -Janatha Vimukthi Peramuna (JVP), Jathika Hela Urumaya (JHU), Sri Lanka Muslim Congress (SLMC), All Ceylon Makkal Congress (ACMC) and the Tamil parties contributed to this significant victory and change. The crucial role played by the Western countries also contributed to this change as they tried to counter the growing Chinese influence in Sri Lanka and South Asia. The other motivation for regime change was the growth in Buddhist nationalism, which triggered anti-Muslim riots and human rights violations against minorities in post-war Sri Lanka. The newly elected Government promised to eradicate corruption, address the root causes of conflict and find a lasting solution to it, and exercise good governance during its period of office.
In the January 08 2015 election, Rajapaksa gained votes mainly from Sinhala Buddhists, while Maithripala Sirisena obtained votes from both majority and minority communities. "But one could easily argue that Maithri's victory was mainly due to the minority votes" (Kalansooriya, January 09 2015). Minorities in large numbers favoured Maithripala and, in doing so, expressed their opinion of Mahinda Rajapaksa. The intention behind this voting pattern of the minorities was not the same as in the case of the Sinhalese, because most of the Sinhalese who voted for regime change wanted democracy, good governance and an end to corruption. They were truly fed up with the Rajapaksa dynasty and wanted change.
Where the ethnic minorities were concerned, they had specific hopes behind their decision, such as the Tamils in the badly affected post-war areas who were eager for a return to normalcy. They demanded freedom of speech and association and in general, the freedom to do as they pleased within legal boundaries. Of course, this could be a matter for further deliberation based on a radicalization point of view, but Northerners have certainly prayed for freedom from the clutches of the LTTE as well as from military authoritarianism (Kalansooriya, January 09 2015). Implicitly they demand freedom from majoritarian rule and reconstitution of the state. As Tamil political parties felt they could obtain a positive response to their traditional demand for a homeland within the unitary state system from the new Government, they were in favour of this regime. But, "Surprisingly, Tamil and Muslim parties that backed Maithripala Sirisena, thereby ensuring his electoral victory, did not insist or bargain for any commitment to devolution" (Uyangoda, October 17 2015).
The Muslims had different expectations about securing their own identity and culture. As the Rajapaksa government did not clamp down on the anti-Muslim drive of certain forces, or protect the Muslims during the riots, their support shifted to the common candidate. People naturally expect the Government to safeguard them from marauding religious extremist groups.
Western countries played a behind the scenes role in the dramatic regime change in Sri Lanka. The foreign policy of the Rajapaksa government leaned towards non-western countries such as China, Russia, Pakistan, Iran and India in the Asian region, which caused much concern to the United States (U.S.) and its allies. The West repeatedly requested the GOSL to conduct an impartial inquiry into war crimes allegations and maintain the rule of law, democracy, freedom of expression and good governance. Despite their earnest promptings, the GOSL was not willing to pay any attention to the western demands. There is a strong suspicion that the United States of America (USA) was behind the overthrow of the Rajapaksa regime ("More Evidence" February 21 2015).
The sudden emergence of Sirisena as a "common opposition candidate" was an orchestrated affair. Sirisena, the serving Health Minister declared himself as a candidate in the election. He was backed by the UNP, the opposition and other parties after Rajapaksa announced the election date on November 20 2014. The World Socialist Web Site (WSWS) detailed the involvement of Washington, which acted through former President Chandrika Kumaratunga and UNP Leader Ranil Wickremesinghe in this election. The Obama administration worked strongly against Rajapaksa's ties with Beijing to ensure that Sri Lanka is fully integrated into the U.S. "pivot to Asia" as this would assist the military build-up against China. More proof of Washington's hand in Rajapaksa's removal has now come to light ("More Evidence," February 21 2015).
India, the regional power, had been alarmed because of China's security and economic relations with the Rajapaksa government. The media and intelligence sources disclosed strong evidence that India was on board in the removal of Rajapaksa and his Government (Ratnayake, 2015).
On the one hand, the USA, United Kingdom (U.K.) and European Union (E.U.) maintained good relations with the new Government. Diplomatic relations continued without interruption, and cordial mutual visits by leaders increased in number, and economic ties were also in favour of Sri Lanka. Remarkably, the frenzy of U.S. sponsored resolutions calmed down to some extent (Kirubakaran, 2015). On the other hand, neighbouring India, blamed by the ex-President as being a country that favoured the regime change, has not received its reasonable dividend. India is not happy about the recent developments in Sri Lanka. "Regarding the ethnic conflict, they took a very lenient path, but that trick didn't work with the new government" (Kirubaharan, 2015). The new Government wished to maintain its relationship with India and China in an equal manner. Sri Lanka could not sideline China in its endeavour to achieve post-war economic development and this disappointed India. Anyway, India's foreign policy decision-makers were carefully observing silent diplomacy while Sri Lanka was watchful of its giant neighbour's next move.
However, the new Government was concerned with bringing about a return to normalcy, democracy and protection of basic human rights that will ultimately meet the expectations of its citizens. The biggest challenge to the coalition government was in managing these hopes of the people, by handling the conflicting political agendas of coalition members adroitly. Moreover, the president, after assuming power took initiatives to fulfil the people's social and economic needs as was promised during the election period. Some of the initiatives were, increasing the salaries of the public servants, addressing the pension issue and tackling the high cost of living by bringing down the prices of some essential items. The new Government also launched an anti-corruption drive by starting to investigate and identify those involved in bribery, corruption and misuse of power during the Rajapaksa regime.
19 th amendment
The last three amendments to the constitution were significant in many ways. In 2001, the Seventeenth Amendment to the constitution was enacted during the period when Chandrika Bandaranaike was president, for the purpose of setting up an independent appointment mechanism. But it was replaced by the Eighteenth Amendment to the constitution in 2010 by the Rajapaksa government. Then in 2015, the 'Good Governance' coalition government introduced the Nineteenth Amendment. This amendment set high expectations in respect of further reform of politics and administration (Gunatilleke, 2019).
The presidential election manifesto of 2010 had given priority to state reconstitution while the 2015 manifesto had given up on the idea. Anyway, it was expected that Maithripala Sirisena's Government would move forward on the post-war agenda on government reconstitution and state reconstitution. But after he came to power, he focused only on the government reconstitution agenda mentioned in his policy manifesto. State reconstitution was not being addressed to meet the expectations of the minorities who voted in large numbers for his victory with high hopes of greater devolution of power as a solution to the ethnic conflict in the island.
Crucially, Sirisena gave special attention to the abolition of the Executive Presidential system and to electoral reform. He stated in his manifesto, "The new constitutional structure would essentially be an Executive allied with the parliament through the cabinet, unlike the present autocratic Executive Presidential System. Under this system, the president would be equal to all other citizens before the law" (Maithri, 2015, p. 14). Accordingly, he repealed the 18th amendment to the constitution by enacting the 19th amendment, which reduced the power of the executive president and the period of office while increasing the power of the Prime Minister. The key changes made by this amendment can be seen in Chapter VII, Chapter VII A, and Chapter VIII of the Constitution (Senaratne, 2019). The 19th amendment can be summarised as follows: a number of long-overdue reforms were introduced through it. Significantly, the presidential term was reduced from six to five years, while the two-term office limit was restored. Although the president can call for another presidential election after four years in office during his first term, the parliament's term has also been reduced to five years. A strict condition is that, unless a two-thirds parliamentary majority approves, the president cannot dissolve parliament until the expiration of 4½ years of its term. By establishing more or less fixed presidential and parliamentary terms, these provisions restrict presidential discretion and at the same time strengthen the separation of powers. Presidential immunity from suit has been marginally reduced by extending the Supreme Court's right to make legal decisions and judgments on official acts of the president. The repeal of the urgent bill procedure is among the other provisions of the amendment. The amendment also restricts the number of cabinet ministers to thirty. If the first and second-largest parties represented in parliament come together to form a national government, the size of the cabinet can be enlarged through an act of parliament.
Freedom of information has been added to the third chapter of the constitution, making it an enforceable legal right. In order to provide the institutional apparatus to facilitate the practice and promotion of the constitutional right to information, freedom of information legislation was proposed as part of the 100-day program. The de-politicization framework that was set up with the Constitutional Council and the independent commissions has proved to be a powerful feature of the 19th amendment. The Constitutional Council is a body chaired by the Speaker and comprising ten members. It oversees and ensures that top public officials and the commissions recognized under the constitution are appointed only on the recommendation of the council's members (Senaratne, 2019). These arrangements, which existed under the 17th amendment and were suspended by the 18th amendment, were re-introduced in the 19th amendment. However, Maithripala Sirisena's regime could not dismantle the presidential system entirely because of a constitutional restriction, which calls for a referendum to make further changes. In any case, if the President and Prime Minister come from the same political party, this amendment will be weak and meaningless (Senaratne, 2019). Other than that, it is an excellent mechanism to ensure proper checks and balances.
Announcement of the New Constitution
The new regime's agenda for state reconstitution on the basis of devolution of power emerged from Sri Lanka's Prime Minister (PM) Ranil Wickremesinghe, who declared after assuming duty that his Government would implement the 13th amendment to the country's constitution. Over a period of nearly 30 years, this amendment, which provided a measure of devolution of power to the Tamil minorities in the Northern and Eastern provinces, was never implemented fully (Wijesiriwardena, 2015, January 27). Not surprisingly, the promise made by the present PM has also not materialized so far, and its outcome was no different from the outcomes of the same promises made by previous leaders.
Remarkably, another historic policy document was released by the UNP-led coalition government on July 23 2015. "The manifesto indicates the policy of the UNP-led front for the next five years. Its five-point programs are economic growth, fighting corruption, ensuring freedom for all, promoting infrastructure investments and improving the education system" (Asian Mirror, July 23 2015). The third point of the document was "Ensuring Freedom for All," which meant "steps would be taken to introduce a new constitution. The United National Party said measures would be taken to grant maximum devolution of power to the provinces with everyone's consent under a singular state" (News 1st, July 23 2015). This standpoint of the new GOSL was in accordance with both the national commission (LLRC) and the United Nations Human Rights Council (UNHRC) resolutions made during the post-war years, which urge the need to "reach a political settlement" to the ethnic conflict (LLRC, 2011; UNHRC, 2012).
The GOSL made an attempt to draft a new constitution, as mentioned in its policy manifesto. The initiative of drafting a new constitution, together with the minority expectation of state reconstitution, once again opened the space for debate on the sharing of state power between the majority and minorities (Smantha, 2016). President Maithripala Sirisena proposed a sub-committee of the cabinet under the Prime Minister, who would be in charge of preparing a "conceptual note" on constitutional changes to be submitted for approval to the Cabinet of Ministers. In accordance with this proposal, the PM appointed a cabinet sub-committee on December 02 2015. The concept note was to review the necessity of, and the way forward for, a new constitution (Balachandran, December 03 2015). The cabinet sub-committee comprised representatives from the UNP, SLFP, SLMC, ACMC, JHU, and the Tamil Progressive Alliance (TPA). Prime Minister Ranil Wickremesinghe headed the cabinet sub-committee, which comprised 11 members, of whom 7 were Sinhalese, 2 were Tamils, and 2 were Muslims. They were Nimal Siripala de Silva, Lakshman Kiriella, Rauff Hakeem, Susil Premajayantha, Rishad Bathiudeen, Patali Champika Ranawaka, Wijeyadasa Rajapakshe, D.M. Swaminathan, Mano Ganesan and Malik Samarawickrama (Balachandran, December 03 2015). This committee was directed by the president to consult various political groups and representatives of public organizations first. The sub-committee then prepared a conceptual note and submitted it to the PM.
The cabinet approved the conceptual note prepared by the cabinet sub-committee on January 09 2016, around the time President Sirisena completed his first year in office, and PM Wickremesinghe then placed the proposal in parliament. The framework was for forming a new constitution and converting the parliament into a "Constitutional Assembly," thus commencing the formal procedures to implement President Sirisena's policy document prepared for the election. The PM spoke in parliament on the occasion of his submission of the draft resolution, saying "we will have the whole Parliament formulating the constitution unlike in the previous instances when constitutions were drafted outside of the Parliament" (Singh, 2016). Two months later, on March 09, 2016, the parliament passed the PM's proposal. "The Government boasted that the resolution was passed unanimously, but the process dragged on for two months amid infighting within the ruling elites. About two dozen parliamentarians aligned with Rajapaksa opposed the resolution unless it incorporated their demands" (De Silva, 2016).
A joint opposition in the parliament was formed by the UPFA with the alliance of Rajapaksa's followers from the SLFP. After the Government agreed to delete the preamble and wording referring to the abolition of the executive presidency and to a "constitutional resolution of the national issue", they backed the resolution to write a new constitution. "The preamble of the original resolution that talked of providing a constitutional resolution to the Tamil question had been removed" (Ramakrishnan, 2016, March 10). It was obvious that the GOSL was committed to resolving the minority Tamils' grievances, but the ruling elites of the majoritarian community opposed and removed the preamble that favoured Tamils. It was a case of history repeating itself, as with every resolution that attempted to meet the Tamils' demands in the post-independence era. However, this time around there was genuine concern regarding the Tamil grievances, which the Government expected to address. President Sirisena supported the devolution of power to the provinces through the new constitution within a united Sri Lanka (Singh, 2016). Speaking on the devolution of power, he said that it is the practice in developed nations and that it is not good to centralize power. He also stated that devolving power is effective in terms of democracy, independence, human rights and fundamental rights. President Sirisena, in his address to the Parliament on January 09, 2016, had observed, "We need a constitution that suits the needs of the 21st century as that will ensure that all communities live in harmony." Likewise, on January 15 2016, Prime Minister Wickremesinghe noted, "We are ready to devolve power (to minority Tamils) and protect democracy. The Constitutional Assembly will discuss with all, including (Tamil-dominated) provincial councils to have a new constitution. We will do that in a transparent manner" (Singh, 2016).
Historically, this was the first time in post-independence Sri Lanka that the Tamil political parties and Tamil civil society movements showed eagerness to participate in new constitution-making. As the Government declared, the new constitution would provide a constitutional resolution to the ethnic conflict, and that would be a very positive development. It must be noted that the Tamils refused to participate in the constitution-making process in both 1972 and 1978. The reason for boycotting the 1972 constitution-making was that the Government of Sirimavo Bandaranaike refused the demand of the Tamils to amend the official language clause in the draft constitution. As the Tamils had elevated their demand to a separate Eelam instead of better representation within a united Sri Lanka, they boycotted the 1978 constitution-making as well (Singh, 2016).
Currently, the Tamil elites represented by the TNA are seeking a so-called political solution through internal self-determination within a united Sri Lanka. Mr. Sampanthan, the leader of the TNA, has repeatedly declared the current opinion of his community: that the Sri Lankan Tamils have abandoned their fight for a separate state and that the ethnic problem will be resolved through a united and indivisible Sri Lanka. He also decisively declared that the Tamil-speaking minorities include the Muslims too and that "the Tamil speaking people have historically inhabited the North and East of Sri Lanka and are entitled to have it as one unit of devolution" (Sanmugathas, 2016). The Muslims' opinion was stated by the SLMC General Secretary Hasen Ali on January 10 2016: "The Sri Lanka Muslim Congress (SLMC) will submit a proposal to the Constitutional Assembly for a unit of devolution for the Muslims of the North and East based on the founder leader of the party, M.H.M. Ashraff's demand. A unit of devolution encompassing the non-contiguous geographic areas of domicile of the Muslims of the two provinces, with power-sharing arrangements on par with the Tamil community, has been the SLMC's demand from the inception." Mahinda Rajapaksa and his followers were opposed to making any meaningful concessions to the Tamils. His speeches have indicated that he is against devolving the crucial land and police powers to the Provincial Councils. The "joint opposition" has manifested increasingly Sinhala chauvinist sentiments and accuses the Government of dividing the country by trying to hand over more powers to provincial councils (De Silva, 2016). All these issues pressured the President and PM into agreeing to modify the resolution accordingly, thereby gaining the support needed to pass it. On March 09 2016, without calling for a vote, the Sri Lankan Parliament unanimously approved the conversion of parliament into a Constitutional Assembly (C.A.) to draft a new constitution for the country.
"WHEREAS there is broad agreement among the people of Sri Lanka that it is necessary to enact a constitution for Sri Lanka -this parliament resolves that there shall be a committee which shall have the powers of a committee of the whole parliament consisting of all Members of Parliament (M.P.s), to deliberate and seek the views and advice of the people, on a constitution for Sri Lanka, and preparing a draft of a constitution bill for the consideration of parliament in the exercise of its powers under Article 75 of the Constitution" (Parliament, 2016; ColomboPage, March 09 2016). Welikala (2016) explains the nature of the C.A. and its activities in his scholarly work as follows -C.A. will act as a separate institution, but it will comprise all the Members of Parliament (MPs). This mechanism is based on inclusivity and flexibility. Thus, all M.P.s can have a vital role to play in the C.A. on constitution-making process. At the same time, the C.A. could avoid the rigidity of parliamentary procedure and standing orders. Prime Minister chaired a steering committee with all parliamentary party leaders and other M.P.s, and the same committee directed the C.A. The CA has a number of subcommittees headed by senior M.P.s and reported on fundamental rights, the judiciary, public finance, the public service, law and order, and centre-periphery relations. The steering committee dealt with matters on electoral reform, devolution, and the central executive directly. Both the steering committee and sub-committees learnt their assigned responsibilities by considering the opinions and evidence of experts and civil society. The sub-committees submitted reports at the end of July 2016.
On December 29 2015, 24 members were appointed to the Public Representations Committee on Constitutional Reforms (PRCCR) by the Prime Minister. The committee comprised academics, lawyers, civil society representatives and political party representatives, who were expected to gather public opinion on the formation of a new constitution. The PRCCR worked to collect public opinion at the grassroots level from January 18 to February 29 2016 (Singh, 2016). The committee submitted its final report of 333 pages to the Government and released it to the public as well. According to its records, over 2500 people and organizations participated and shared their opinions orally and in writing. A further 800 opinions were shared via e-mail, 150 via fax messages and 700 by post or hand delivery (Report on Public Representations on Constitutional Reforms, May 2016).
The work on the new constitutional draft was carried out by the steering committee. News media reports highlighted that PM "Ranil Wickremesinghe planned to present the draft constitution bill by the end of 2016, according to Lal Wijenayake, Chairman of the Public Representations Committee on Constitutional Reforms" (Ramakrishnan, June 03 2016; Eyesrilanka, June 03 2016). He further clarified that the various subjects spelt out by the steering committee were dealt with by six sub-committees. On completion of its work, each sub-committee would submit its findings to the steering committee. In addition, the steering committee would report to the C.A. with the draft proposal of the new constitution (Ramakrishnan, June 02 2016; Eyesrilanka, June 03 2016).
Finally, the expert panel of the Steering Committee on the constitutional draft released its second report on January 11 2019. This latest report or draft clearly proposed a federal constitutional solution and included most aspects of federalism. Some prominent political elites of the Sinhala majority reacted by openly criticizing the draft and condemning its contents. Thus, this study anticipated that the present draft would run into the following shortcomings: the persistence of a strong unitary state with a highly centralized and inefficient bureaucracy; the possibility of a regrouping of Tamil militant social forces and the consequent need to strengthen state security; a lack of elite consensus, electoral considerations and the influence of the majority community; an unstable Government; radical hardline forces; and the continuation, as before, of the problems of regional minorities in the North and East.
The joint opposition comprising Rajapaksa and his team formed a new political party called the Sri Lanka Podujana Peramuna (SLPP) and worked hard to return to power. Rajapaksa resorted to nationalism to obtain the massive support of the majority and campaigned on the claim that the constitutional reforms posed a threat to the unity of the country. The result worked out in favour of the SLPP in the local government elections conducted under the new electoral system in February 2018. Then, President Sirisena decided to support the joint opposition, which moved a 'no confidence' motion against Ranil Wickremesinghe (Rajasingham, 2019).
Because of this, debates and internal conflicts arose within the coalition government. The president and the Prime Minister espoused different ideas on the same political issues, the disagreements gradually increased, and the two leaders were unable to work together. The president suddenly appointed Mahinda Rajapaksa as Prime Minister on October 20 2018 (Senaratne, 2019). This was very clearly unconstitutional and illegal, and the UNP challenged the appointment in court. Senaratne (2019) notes that the Court of Appeal issued an interim order on the appointment. Later, the Supreme Court declared that the president had acted unconstitutionally in issuing such a proclamation. The president was thereby compelled to re-appoint Wickremesinghe as Prime Minister.
Following this, the SLPP chose Gotabaya Rajapaksa, the former Defense Secretary, as its candidate for the presidential election in 2019. He had always opposed constitutional reform as a means of conflict resolution, believing that economic development would serve as a better solution to the conflict than constitutional reform (Rajasingham, 2019). Gotabaya Rajapaksa won the presidential election in November 2019 (Lewis, 2020), following which Mahinda Rajapaksa was again appointed Prime Minister of the country.
Not long after the presidential election, Sri Lanka, like so many other countries across the world, was facing the Covid-19 pandemic. The Government's attention turned to this problem rather than to accommodating minorities into the state system. Nevertheless, the TNA met with the Prime Minister on May 04 2020, the first such meeting in nine years ("TNA meets PM," May 04 2020). As the TNA had boycotted negotiations with the ruling party in 2011, all talks had ended then. At this meeting, however, the TNA expressed its willingness to cooperate with the Government in fighting the spread of the Covid-19 disease. The TNA brought up many issues, such as the handling of the Covid-19 pandemic, the release of political prisoners, the livelihood issues of Tamil minorities during the Covid-19 lockdown period, a new constitution to resolve the Tamil minority's problems, and the general elections. Further, a statement signed by the four constituent parties of the TNA was handed over to the Prime Minister ("TNA meets PM," May 04 2020).
These incidents convey a clear message to the people that a sustainable resolution to the conflict through accommodating the minorities will take a long time. Ongoing contestations and the forthcoming general elections could determine the level of accommodation the minorities can expect within the state system.
Conclusion
This paper analyzed the recent political developments in post-civil war Sri Lanka. As the argument of this study shows, the ending of the civil war in Sri Lanka does not appear to have delivered either the environment or the political motivation to introduce inclusive policies to address the ethnic conflict through dialogue and consensus. Many local and international observers assumed that the defeat of the LTTE and the lessons learned from the destructive civil war would impress on the Government the need to accommodate minorities into the state system by means of inclusive policies. Unfortunately, this approach was abandoned by the Sri Lankan government because it found itself in a dominant position after the defeat of the LTTE. In this environment, most of the important ideas and proposals discussed above were not given a chance to bring peace to the country. These failed attempts also indicate that any proposed solutions have to be considered within a system of centralized state power. Most of the minority political parties that had formed coalitions with the Rajapaksa regime had also agreed to the centralization of power and the unitary state concept, with the exception of the mainstream Tamil political party, the TNA. The Government's only initiative after forming the new parliament in 2015 was to establish a Constitutional Assembly and task its experts with preparing and releasing the draft of a new constitution. But this process also remains stillborn, without any positive outcome. Therefore, the present study suggests that Sinhala, Tamil and Muslim political elites should focus on an inclusive approach that gives equal consideration to all of the ethnic communities living in the country. This is absolutely essential to ensure a peaceful and prosperous Sri Lanka in which all communities can live together in harmony.
Quantifying differences in iron deficiency-attributable anemia during pregnancy and postpartum
Summary
Pregnant women in resource-limited settings are highly susceptible to anemia and iron deficiency, but the etiology of postpartum anemia remains poorly defined. To inform the optimal timing for anemia interventions, changes in iron deficiency-attributable anemia through pregnancy and postpartum need to be understood. In 699 pregnant Papua New Guinean women attending their first antenatal care appointment and followed up at birth and at 6 and 12 months postpartum, we undertake logistic mixed-effects modeling to determine the effect of iron deficiency on anemia and population attributable fractions, calculated from odds ratios, to quantify the contribution of iron deficiency to anemia. Anemia is highly prevalent during pregnancy and 12 months postpartum, with iron deficiency increasing the odds of anemia during pregnancy and, to a lesser extent, postpartum. Iron deficiency accounts for ≥72% of anemia during pregnancy and 20%–37% postpartum. Early iron supplementation during and between pregnancies could break the cycle of chronic anemia in women of reproductive age.
In brief
Davidson et al. report that anemia is highly prevalent in pregnant Papua New Guinean women and during the first 12 months postpartum. Iron deficiency is the main contributor to anemia in pregnancy but less so postpartum. Iron supplementation early during and between pregnancies could alleviate anemia in women of reproductive age.
INTRODUCTION
Anemia in pregnancy is a major global public health problem, particularly in resource-limited regions, where every second pregnant woman is estimated to be anemic. 1,2 Anemia in pregnancy contributes significantly to maternal morbidity and mortality and increases the risk of adverse neonatal outcomes. [3][4][5][6] Consequently, reducing anemia by 50% in women of reproductive age is the second goal of the World Health Organization's (WHO) ''Global Nutrition Targets for 2025.'' 7 Approximately half of all anemia cases in pregnancy worldwide are attributed to iron deficiency. 2 Pregnant women have an increased susceptibility to iron deficiency due to the high iron requirements of pregnancy. 8,9 Whether women remain susceptible in the postpartum period, and for how long, is unknown. In high-income settings, the postpartum period is typically considered a time of low iron deficiency risk, as the iron stores of healthy women who take iron supplements typically return to prepregnancy levels within weeks of birth. 10,11 However, it remains unclear how hemoglobin and iron levels change from pregnancy through to the postpartum period in settings with a high burden of infections and under-nutrition. 12 Continued anemia postpartum consigns women to poor health and increases the likelihood of entering subsequent pregnancies already anemic. 13,14 Understanding hemoglobin and iron level changes in pregnancy and into the postpartum period will inform when anemia interventions will be most effective.
The WHO's principal recommendation for the prevention of maternal anemia is universal oral iron and folate supplementation throughout pregnancy. 15 This is a widely implemented recommendation that is supported by a large Cochrane Review of iron supplementation trials in pregnancy. 16 Postpartum, the WHO recommends iron supplementation for the first 6-12 weeks in settings where anemia is a moderate or severe public health problem (population prevalence ≥20%). 12 Despite this, iron supplementation is not considered part of routine postpartum care. 12 To further understand the benefit of extending iron supplementation into the postpartum period, there is an urgent need for longer-term information on the prevalence of anemia and iron deficiency in the first year postpartum.
Furthermore, anemia has a complex etiology. In addition to iron deficiency, there are other important causes including micronutrient deficiencies (vitamins A and B12), genetic conditions (e.g., thalassemia), and infectious diseases. [17][18][19] In malaria endemic settings, Plasmodium spp. (species) infection is a major determinant of anemia in pregnancy, 19 with key prevention strategies including intermittent preventative treatment in pregnancy and use of insecticide-treated bed nets. 20 Thus, in settings with more than one cause of anemia, iron supplementation may not be enough to reduce the burden of anemia. Determining the relative contributions of these risk factors to anemia, during pregnancy and postpartum periods, is important for the planning and implementation of anemia prevention strategies.
To better understand anemia in pregnancy and postpartum and to inform anemia prevention strategies, we determined the prevalence of anemia and iron deficiency during pregnancy and the first 12 months postpartum in a prospective cohort of women in Papua New Guinea and quantified the time-varying effect of iron deficiency on hemoglobin and anemia in pregnancy and postpartum.
Hemoglobin and ferritin dynamics, and burden of anemia and iron deficiency in pregnancy and postpartum
At the population level, mean hemoglobin concentration remained stable during pregnancy (adjusted mean difference of −0.44 g/L; 95% confidence interval [CI]: −2.37, 1.49; p = 0.65 at birth compared with enrollment; Figure 2; Tables S2 and S3) and then increased in the postpartum period (adjusted mean difference of 11.63 g/L; 95% CI: 9.62, 13.65; p < 0.001 and 9.63 g/L; 95% CI: 7.38, 11.87; p < 0.001 at 6 and 12 months postpartum, respectively, compared with birth). At the individual level, hemoglobin concentration showed substantial variation over time (within-woman SD = 13.02 g/L); compared with at birth, hemoglobin levels were lower at 6 months postpartum in 25% of women, higher in 74%, and did not change in 1%. The population mean ferritin concentration in pregnancy was dynamic, with a 2.3-fold increase in the geometric mean ferritin from enrollment to birth (95% CI: 2.11, 2.50; p < 0.001) and a further increase by 6 months postpartum (1.22-fold increase in adjusted geometric mean compared with birth; 95% CI: 1.12, 1.34; p < 0.001) before stabilizing (0.93-fold change in geometric mean at 12 months compared with 6 months; 95% CI: 0.84, 1.03; p = 0.14).
Associations between iron deficiency, hemoglobin levels, and anemia
To quantify the association between iron stores and the outcomes, hemoglobin levels and anemia, univariable and multivariable mixed-effects modeling was performed (Table 2). Models included ferritin and hemoglobin measurements from enrollment, birth, and 6 and 12 month postpartum evaluation times; thus, the effect measures represent the averages across all evaluation times. It was assumed that a concurrent measurement of ferritin/iron deficiency affects hemoglobin/anemia (see causal diagram in Figure 1). In multivariable analysis, iron deficiency was associated with a lower mean hemoglobin level over the entire study period (−8.07 g/L; 95% CI: −10.12, −6.01; p < 0.001). This corresponded to a 4.60-fold increased odds (95% CI: 2.79, 7.58; p < 0.001) of anemia in those who were iron deficient compared with iron-replete women over the entire study period. When the anemia outcome measure of moderate-to-severe anemia was used, the odds were still increased for iron-deficient individuals during pregnancy and postpartum (Table S4). Other variables associated with lower mean hemoglobin levels (and anemia) over pregnancy and postpartum included being multigravida (−2.63 g/L; 95% CI: −5.26, −0.01; p = 0.05; compared with primigravida), Plasmodium spp. infection at enrollment (−4.12 g/L; 95% CI: −7.30, −0.93; p = 0.01; compared with uninfected), and having α+-thalassemia (−5.78 g/L; 95% CI: −8.48, −3.08; p < 0.001; compared with wild types) (Table 2).
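The linear mixed-effects specification described here can be sketched in a few lines of Python; the column names, file name, and exact covariate set below are illustrative assumptions rather than the study's analysis code.

```python
# Sketch of a linear mixed-effects model for hemoglobin with a random intercept
# per woman, roughly mirroring the modeling described above (assumed data layout).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cohort_visits.csv")  # hypothetical file: one row per woman per evaluation time

model = smf.mixedlm(
    "hemoglobin ~ iron_deficient + multigravida + pf_infection "
    "+ alpha_thalassemia + C(time)",      # concurrent covariates plus evaluation time
    data=df,
    groups=df["woman_id"],                # random intercept for each woman
)
result = model.fit()
print(result.summary())
```

A logistic analogue with anemia as the outcome yields the odds ratios quoted in the text.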
Associations between the anemia risk factors gravidity, Plasmodium spp. infection, α+-thalassemia, and ferritin levels/iron deficiency were assessed through multivariable mixed-effects modeling (Table S5). In multivariable analysis, multigravida women had increased odds of iron deficiency, while Plasmodium spp. infection at enrollment was associated with decreased odds of iron deficiency. α+-Thalassemia showed no significant associations with iron status, and there were no significant interactions between these risk factors and iron deficiency. A similar trend was observed for the association between iron deficiency and moderate-to-severe anemia during pregnancy (Table S8). At 6 months postpartum, the odds of anemia increased 3.28-fold (95% CI: 1.68, 6.43; p = 0.001) for those who were iron deficient versus iron replete (Table 3). By 12 months postpartum, the odds of anemia were only 2-fold increased for women who were iron deficient versus iron replete (aOR = 1.81; 95% CI: 0.75, 4.38; p = 0.19) (Table 3). The odds of moderate-to-severe anemia were increased for iron-deficient individuals at both 6 (aOR = 4.06; 95% CI: 2.34, 7.03; p = 0.001) and 12 months postpartum (aOR = 2.36; 95% CI: 1.22, 4.57; p = 0.01) (Table S8).
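The population attributable fractions quoted throughout the paper combine these odds ratios with the exposure prevalence. The sketch below uses Levin's formula with the adjusted odds ratio substituted for the relative risk, an approximation that, as the limitations section notes, overestimates the fraction when the outcome is common; the numbers are placeholders, not the study's exact inputs.

```python
# Levin's population attributable fraction, using an odds ratio in place of the
# relative risk (an assumption; it inflates the PAF when the outcome is common).
def population_attributable_fraction(exposure_prevalence: float, odds_ratio: float) -> float:
    excess = exposure_prevalence * (odds_ratio - 1.0)
    return excess / (1.0 + excess)

# Illustrative values only: if ~75% of women were iron deficient and the aOR for
# anemia were 4.60, roughly 73% of anemia would be attributable to iron deficiency.
print(round(population_attributable_fraction(0.75, 4.60), 2))
```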
To investigate how long-lasting the effect of iron stores on hemoglobin levels is, iron stores were included in the mixed-effects models as lagged effects, i.e., iron stores from the preceding evaluation time (depicted in Figure S2). No discernible associations were found, indicating that concurrent iron stores are the more important determinant of hemoglobin levels and anemia (Tables S9 and S10).
DISCUSSION
Pregnant women are at risk of anemia and iron deficiency due to physiological changes and increased iron demands that occur in pregnancy. However, the burden of anemia in the postpartum period and the contribution of iron deficiency to postpartum anemia are largely unknown. In the current study of Papua New Guinean women, hemoglobin levels remained low throughout the study, with anemia prevalent in ≥69% of women from enrollment through to 12 months postpartum. Iron deficiency was associated with the greatest increase in odds of anemia throughout pregnancy and up to 12 months postpartum, demonstrating that it is a key risk factor for anemia, alongside Plasmodium spp. infection and α+-thalassemia. Notably, the relative contribution of iron deficiency to anemia changed over time; iron deficiency contributed to 72% of anemia at first antenatal visit but only 20% of anemia by 12 months postpartum. Current anemia prevention strategies delivered during antenatal care in this setting where women present later in pregnancy are not sufficient to address the high burden of anemia during pregnancy and postpartum periods. Anemia prevention strategies delivered earlier in pregnancy, postpartum, or in women of reproductive age targeting anemia etiologies relevant for reproductive stage may also be warranted to reduce the burden of anemia.
Global estimates suggest that approximately 50% of all anemia in pregnancy is attributable to iron deficiency 1 ; in the present study, 72%-89% of anemia in pregnancy was attributed to iron deficiency. This suggests that a large proportion of anemia in pregnancy in this setting would be amenable to iron supplementation. Daily prenatal iron and folic acid supplementation was recommended to the women in this study as per Papua New Guinea national guidelines. 22 In self-reported data on iron supplement use collected after birth, 65% (343/534) of women reported taking iron supplements ''most days'' during their pregnancy. However, almost all (93%, 316/343) of these women only started taking iron supplements at their first antenatal care appointment (enrollment), which typically occurred late in pregnancy (median 30 [IQR 28-32] gestational weeks), when women would likely be at, or close to, the nadir of iron status in pregnancy. Iron supplementation may have contributed to the observed increase in ferritin concentration between enrollment and birth. Despite this increase, the proportion of women classified as anemic or iron deficient did not change between enrollment and birth, potentially because not enough time had elapsed, or not enough doses had been taken, after the first antenatal care appointment to achieve clinically meaningful benefits. Given that the majority of women were already iron deficient at their first antenatal care appointment and presented to this appointment during the second or third trimester of pregnancy, the WHO's primary recommendation of universal supplementation during pregnancy may not be sufficient to prevent maternal anemia in settings where first antenatal care appointments tend to occur later in pregnancy.
Another opportunity to prevent anemia exists postpartum, between pregnancies, when women are attending regular medical appointments for infant checkups and immunizations. Iron supplementation is not routinely provided as part of postnatal care in Papua New Guinea. In line with this, only ~1.5% of women in this study reported that they were taking iron folate supplements at 6 and 12 months postpartum. Iron supplementation could be extended into the postpartum period for women with moderate-to-severe anemia, through existing health systems, to improve women's hemoglobin and iron reserves before their next pregnancy. This intervention strategy could reduce the risk of anemia in subsequent pregnancies, as well as address postpartum anemia caused by iron deficiency. There is a paucity of literature on the prevalence and etiology of postpartum anemia in resource-limited settings worldwide. The high burden of postpartum anemia (>70%) observed highlights that the postpartum period should not be considered a time of low anemia risk in this setting. However, the proportion of anemia attributable to iron deficiency was lower postpartum than in pregnancy, ~20% by 12 months postpartum. Aside from iron deficiency, Plasmodium spp. infection was a key risk factor for anemia in this cohort. Thus, the effective implementation of malaria prophylaxis should be encouraged, as per standard of care. Bed net use in this cohort was moderate (63% reported using a net the night prior to enrollment) but could be strengthened further as an anemia prevention strategy, along with regular rapid diagnostic testing for malaria. While Plasmodium spp. infection (and also α+-thalassemia) were significant risk factors for anemia, their prevalence was relatively low and could not account for all the remaining attributable risk. Other potential anemia etiologies that were not assessed here, such as infection with intestinal helminths and other micronutrient deficiencies (e.g., vitamins A and B12), may also be important. 23,24 Quantifying these intervenable risk factors will inform a potential suite of interventions that address infections and multiple nutritional needs and further reduce the significant burden of anemia in pregnancy and postpartum.
The consistently low population mean hemoglobin levels throughout the study period suggest that women of reproductive age in this setting experience a cycle of chronic anemia, with hemoglobin levels never being fully restored between pregnancies. In order to substantially reduce anemia in settings where it is highly prevalent (≥20%), the WHO recently proposed that iron supplementation be provided to all menstruating women 7 rather than just during pregnancy and postpartum periods. This strategy has not been widely implemented but has proved successful in regions of Vietnam and India, which saw anemia reductions of 20% and 24%, respectively, in women after 12 months of iron supplementation. 25,26 Another option is to target adolescent girls through schools. This approach was adopted in Ghana, where a cohort of adolescent girls received weekly iron-folic acid supplementation, resulting in a 26% reduction in anemia after 9 months of implementation. 27 These programs provided intermittent iron supplementation, which has similar efficacy to the daily regime but with fewer side effects. 16 Similar interventions in Papua New Guinea may be valuable but need to consider challenges in health service access, constraints in infrastructure, and low community outreach frequency, especially in remote and rural areas. However, given the public health problem anemia poses in this setting, it is a high priority for improving the health and well-being of women. This is particularly relevant in light of the WHO Global Nutrition target to reduce anemia in women of reproductive age by 50% by 2025 7 ; a goal that is unlikely to be achieved in the near term in settings like Papua New Guinea that experience a high burden of maternal anemia without a push for further action or new intervention strategies.
[Table 2 note: estimates from linear and logistic mixed-effects models with a random intercept per woman, adjusted for the enrollment variables listed and evaluation time; anemia defined as hemoglobin <110 g/L in pregnancy and <120 g/L postpartum; ferritin log2-transformed, so coefficients reflect a 2-fold increase; iron deficient: ferritin <15 µg/L; iron replete: ferritin ≥15 µg/L with CRP ≤10 mg/L in pregnancy and ≤5 mg/L postpartum; Het/Hom, heterozygous/homozygous; CR1, complement receptor 1 (H/H high, H/L intermediate, L/L low expression); SAO, Southeast Asian ovalocytosis; fever refers to self-reported fever during pregnancy before the first antenatal visit.]
As an alternative to oral iron supplementation, iron can be given intravenously. Modern intravenous iron formulations have been deemed safe and effective in preventing anemia in both pregnancy and postpartum periods in recent randomized controlled trials (reviewed in Qassim et al., 28 Lewkowitz et al., 29 and Sultan et al. 30 ). The key advantage of this strategy is that a single infusion of intravenous iron is required, so there are no compliance concerns. However, significant health system capacity building and strengthening would need to take place to successfully deliver intravenous iron to all moderately to severely anemic pregnant and postpartum women attending health care facilities in Papua New Guinea.
Limitations of the study
In East New Britain, Papua New Guinea, ~90% of women attend antenatal care, and our study participants were representative of pregnant women attending five antenatal clinics that provide >75% of antenatal services in the province. A limitation of our study was that it did not capture women not attending antenatal care, typically women living in hard-to-reach areas with presumed poorer health, which may lead us to underestimate the true population prevalence of anemia and iron deficiency. Loss to follow-up (48% by 12 months postpartum) may also introduce bias. However, in our analyses, we used mixed-effects modeling with maximum likelihood estimation, which uses all participant outcome data regardless of completeness across time points. This estimation method provides unbiased effect estimates in the presence of attrition assuming a ''missing-at-random'' missing data mechanism. Furthermore, given that enrollment characteristics were similar at follow-up time points, attrition should not significantly impact study estimates or study conclusions. It should also be noted that the population attributable fractions were calculated using ORs, and given that anemia was a common outcome in our study population, this will result in an overestimation of the population attributable fractions. In terms of external validity, the relative contribution of iron deficiency, genetic polymorphisms, malaria, and other anemia risk factors will vary with local conditions. In accordance with this, prevention and control strategies for maternal anemia will need to be tailored to the setting-specific anemia etiology. Despite these limitations, some generalizations from this study can be made. The fact that 69% of women in this study were still anemic at birth in this cohort, coupled with the stagnant global prevalence of anemia in pregnancy over the last two decades, suggests that the WHO primary recommendation of universal antenatal iron folic acid supplementation is either not being implemented successfully, or early enough, or is not effective enough to prevent maternal anemia in resource-limited settings. Based on the high prevalence of anemia observed during the first 12 months postpartum in this study, we can postulate that populations experiencing a similarly high burden of anemia during pregnancy will continue to have high anemia prevalence in the postpartum period without intervention. Given that every second pregnant woman is estimated to be anemic in resource-limited settings, 1 anemia in the postpartum period is likely to be a widespread yet neglected condition globally.
[Figure note: prevalence (95% CI) of anemia, iron deficiency, and iron-deficiency anemia at enrollment, birth, and 6 and 12 months postpartum; anemia defined as Hb <110 g/L in pregnancy and <120 g/L postpartum, iron deficiency as ferritin <15 µg/L; severity in pregnancy: mild Hb 100-109, moderate 70-99, severe <70 g/L; postpartum: mild 110-119, moderate 80-109, severe <80 g/L. Adjusted ORs for anemia were derived from multivariable logistic mixed-effects models with a random intercept per woman and an iron deficiency by time interaction, adjusted for age, mid-upper arm circumference, gravidity, smoking status, residence, fever, and genetic polymorphisms; population attributable fractions for other risk factors such as Plasmodium spp. infection and α+-thalassemia could not be determined due to low numbers of observations in the exposed, non-anemic groups.]
Conclusion
Maternal anemia was highly prevalent in pregnancy and postpartum in our population, with iron deficiency a major risk factor in pregnancy, but less so in the postpartum period. Iron supplementation provided both early during pregnancy and between pregnancies, in conjunction with malaria prevention strategies, could break the cycle of chronic anemia in women of reproductive age. Research is needed to determine the effectiveness and feasibility of providing iron supplementation earlier in pregnancy, as part of routine postpartum care, and the optimal frequency and duration of supplementation required to improve the health of women in this region, as well as other settings where maternal anemia is highly prevalent.
STAR+METHODS
Detailed methods are provided in the online version of this paper and include the following:
Auto Search Indexer for End-to-End Document Retrieval
Generative retrieval, a new paradigm for document retrieval, has recently attracted research interest, since it encodes all documents into the model and directly generates the retrieved documents. However, its power is still underutilized because it heavily relies on "preprocessed" document identifiers (docids), which limits its retrieval performance and its ability to retrieve new documents. In this paper, we propose a novel fully end-to-end retrieval paradigm. It can not only learn the best docids for existing and new documents automatically via a semantic indexing module, but also perform end-to-end document retrieval via an encoder-decoder-based generative model, namely Auto Search Indexer (ASI). Besides, we design a reparameterization mechanism to combine the above two modules into a joint optimization framework. Extensive experimental results demonstrate the superiority of our model over advanced baselines on both public and industrial datasets and also verify its ability to deal with new documents.
Introduction
Search engines are widely deployed on web applications to meet users' daily information requirements (Wang et al., 2022).Given a user query, search engines usually first retrieve candidate documents from a huge document collection and then rank them to return a ranking list.Consequently, the performance and efficiency of document retrieval are essential to the final search quality.
Recently, a new end-to-end document retrieval framework named Generative Retrieval is proposed to develop a differentiable indexer, which directly maps a given query to the relevant document identifiers (docids) via a seq2seq model (Tay et al., 2022).Specifically, some policies are first applied to preprocess all the existing documents for docids such as assigning unique integers for documents (Tay et al., 2022;Zhou et al., 2022).Given the preprocessed docids, a Transformer-based model is employed to encode the document-docid mapping information into its parameters, and meanwhile is trained to generate relevant docids directly from a given query.As such, by adding a preprocessing phase, it turns the whole index-retrieve process into a generation task.
Despite the great success of these methods, the power of generative retrieval is still underutilized since they rely on the pre-processed docids, thus leading to the following limitations.(1) New documents cannot be seamlessly retrieved by an existing trained indexer.On the one hand, docids are preprocessed so that new documents cannot obtain their docid assignments directly from the retrieval model.On the other hand, even if their docids are obtained by the same "pre-processing" policy, these new docids are usually unknown semantics to the retrieval model.(2) Existing preprocessing policies are confined to one-to-one mapping between documents and docids.Accordingly, only one single document can be retrieved for each retrieval calculation.It deviates from the intention of the retrieving-ranking framework, i.e., a groups of relevant documents are expected to be efficiently retrieved in the retrieving stage (Guo et al., 2022).We argue to assign similar documents with same docid, which supports retrieving more documents at the same computational cost.(3) The preprocessing phase is independent of the index-retrieve process.Consequently, the caused semantic gap between the docids in preprocessing phase and the embedding space in index-retrieve process limits the performance of generative retrieval.However, it is not trivial to automatically learn the best docids within a joint framework, since the docids, which is served as the generation ground-truths, cannot maintain the gradient flow because they must appear in discrete form by an argmax function.Therefore, the docid learning process and the index-retrieve process are still independent of each other even if they are integrated together.
In this paper, we propose a novel fully end-toend generative retrieval paradigm, Auto Search Indexer (ASI).It combines both the end-to-end learning of docids for existing and new documents and the end-to-end document retrieval into a generative model based joint optimization framework.Specifically, we model document retrieval problem as a seq2seq task, i.e., with user queries as input, it outputs the docids of retrieved documents.Then, we design a semantic indexing module, which learns to automatically assign docids to existing / new documents.Besides, we design two semantic-oriented losses for it, which makes semantically similar documents share the same docids and assigns different docids to dissimilar documents.As such, the new document will be assigned an existing docid based on its content, or a new docid but belonging to the same semantic space as other docids.Furthermore, a reparameterization mechanism is proposed to enable gradient to flow backward through the semantic indexing module, thus supporting joint training for all modules.Extensive experiments on public and industrial datasets show that the proposed ASI outperforms the state-of-the-art (SOTA) baselines by a significant margin in document retrieval, and demonstrate that our semantic indexing module automatically learns meaningful docids for documents.
The contributions are summarized as follows: • To the best of our knowledge, we are the first to propose a fully end-to-end pipeline, Auto Search Indexer (ASI), which supports both end-to-end docid assigning and end-to-end document retrieval within a joint framework.
• To this end, we propose a semantic indexing module as well as two novel semanticoriented losses to automatically assign documents with docids, and develop a reparameterization mechanism to make the individual modules optimize jointly.
• Extensive experiments demonstrate that our ASI can learn the best docid for documents, and meanwhile achieves the best document retrieval compared to the SOTA methods on both public dataset and real industrial dataset.
Related Work
Studies about document retrieval can be roughly divided into three categories: sparse retrieval, dense retrieval and generative retrieval, which are briefly introduced as follows.
Sparse Retrieval
Early studies are mostly based on inverted index and retrieve documents with term matching metrics, e.g., TF-IDF [45].BM25 (Robertson and Zaragoza, 2009) measures term weights and computes relevance scores based on TF-IDF signal.Recent studies design to leverage the word embeddings to help build inverted index (Zheng and Callan, 2015;Dehghani et al., 2017;Dai and Callan, 2020b,a).
To alleviate the mismatch problem between query and document words, which is the key weak point for sparse retrieval, researchers attempt to augment possible terms before building the inverted index, e.g., Doc2Query (Nogueira et al., 2019b).
Dense Retrieval
In another line to relieve the mismatch problem, solutions based on deep learning first embed the queries and documents to dense vectors and then retrieve documents per vector similarity (Lu et al., 2020;Ma et al., 2022;Ni et al., 2022b;Zhan et al., 2022;Li et al., 2023a;Wang et al., 2023).These methods especially benefit from recent advances in pretrained language models (PLMs).For instance, SimCSE (Gao et al., 2021) is a simple but effective contrastive learning framework that employs BERT (Devlin et al., 2019) or RoBERTa (Liu et al., 2019).To improve the inference latency, dense retrieval methods are usually equipped with approximate nearest neighbor (ANN) (Subramanya et al., 2019) or maximum inner product search (MIPS) algorithms (Shrivastava and Li, 2014) to retrieve relevant documents within a sub-linear time cost.
Generative Retrieval
Recently, an alternative architecture has been proposed to map user queries end-to-end to relevant docids with a Transformer-based autoregressive model. Specifically, Tay et al. (2022) and Wang et al. (2022) propose to preprocess the documents into atomic identifiers or hierarchical semantic identifiers with a hierarchical k-means algorithm. Differently, SEAL (Bevilacqua et al., 2022) devises to leverage all n-grams in a passage as its identifiers; another line of work retrieves by generating sentence identifiers, namely GERE. Ultron (Zhou et al., 2022) designs both keyword-based identifiers and semantic-based identifiers and develops a three-stage training workflow. SE-DSI (Tang et al., 2023) proposes to use summarization texts as document identifiers. MINDER (Li et al., 2023b) assigns documents multiview identifiers. Concurrently with our work, Sun et al. (2023) also investigated a different framework, GenRet, which uses a codebook and a discrete auto-encoder with progressive training to learn the docid assignments within the retrieval stage, while it relies on a clustering-based initialization. In a word, these methods suffer from the docid preprocessing phase, forming only a nominally end-to-end framework. In this paper, we propose a fully end-to-end paradigm, ASI. It not only supports end-to-end document retrieval by a generative model, but also end-to-end learns the best docids for documents within a joint optimization framework.
3 Our Proposed Method
Overview
In this subsection, we present an overview of our novel Auto Search Indexer.The basic idea is, as illustrated in Figure 1, to build a fully end-to-end pipeline to automatically learn the meaningful docids for documents, perform end-to-end document retrieval, and combine them into a joint framework.
In detail, our ASI adopts encoder-decoder architecture to encode the user query q and directly generate relevant docids id (i) , i = 1, 2, • • • .Distinguished from existing preprocessing-based methods, a semantic indexing module is integrated to automatically assign docids to existing / new documents.To encode semantics into the docids, we creatively design a discrete contrastive loss and a sequence-oriented reverse cross-entropy loss for the semantic indexing module, which helps to assign semantically meaningful docids and break the limitation of one-to-one mapping between documents and docids.Moreover, a reparameterization mechanism is proposed to enable gradient flowing through the indexing module to support joint optimization, thus saving the decoder from falling into meaninglessly mimicking the indexing module.In other words, the decoder thus gains the ability to surpass the indexing module on document retrieval.
Basic Architecture
Noticing the outstanding advances of generative PLMs, ASI adopts a seq-to-seq framework.
Specifically, ASI follows a "query-to-docid" paradigm, i.e., ASI takes the user query as input and generates several relevant docids, each represented as a sequence of id tokens. To this end, with the help of a transformer-based encoder-decoder architecture, the query q is encoded by the encoder and the generation probability is estimated by the decoder as

h_i = Decoder(Encoder(q), h_{<i}),  (1)
P(id_i | q, id_{<i}; Θ_{e,d}) = softmax(W^T h_i),  (2)

where id_i denotes the i-th token of the currently given docid id of length m, which is obtained by the semantic indexing module, Θ_{e,d} denotes the trainable parameters of the encoder-decoder architecture, and W ∈ R^{d×V_id} is the linear parameter that classifies the hidden state h_i into the docid vocabulary of size V_id. Here we treat all docid tokens as tokens distinct from the encoder vocabulary, so the encoder and decoder do not share a vocabulary space, which improves decoding efficiency. Finally, to maximize the likelihood of the target docid sequence, we adopt a cross-entropy loss and optimize the generation objective

L_seq = − Σ_{i=1}^{m} log P(id_i | q, id_{<i}; Θ_{e,d}),  (3)

where the target docid id is obtained from document d by the semantic indexing module, described next.
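A compact sketch of this generation objective in PyTorch is shown below; the tensor shapes, argument names, and the stand-alone projection W are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the docid-generation loss of Eqs. (1)-(3): decoder hidden states are
# projected into the docid vocabulary and scored against the assigned docid.
import torch
import torch.nn.functional as F

def generation_loss(decoder_hidden: torch.Tensor, W: torch.Tensor, target_docid: torch.Tensor):
    """decoder_hidden: (m, d) hidden states h_i; W: (d, V_id) projection;
    target_docid: (m,) id tokens assigned by the semantic indexing module."""
    logits = decoder_hidden @ W                    # (m, V_id) scores over the docid vocabulary
    return F.cross_entropy(logits, target_docid)   # mean of -log P(id_i | q, id_{<i}) over positions
```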
Semantic Indexing Module
Previous works focus on the end-to-end retrieval phase, while neglecting to build an end-to-end learning framework for docid indexing, i.e., they have to "preprocess" the documents for docids (Tay et al., 2022;Wang et al., 2022;Zhou et al., 2022).Therefore, they can hardly deal with new documents, which are common and unavoidable in practical applications.To tackle the above problem, in this subsection, we propose a semantic indexing module to automatically assign docids to existing / new documents, as illustrated in Figure 2.
Specifically, given an input document d ∈ D and its representation x from the encoder, the semantic indexing module assigns the corresponding i-th id token based on the probability distribution

P(id_i | d; Θ_{e,i}) = Linear_i(x),  (4)
id_i = argmax P(id_i | d; Θ_{e,i}),  (5)

where Θ_{e,i} denotes the model parameters of the encoder and the semantic indexing module. Note that this semantic indexing module could be designed more elaborately; since this paper focuses on the "fully" end-to-end framework, the module is deliberately kept simple.
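A minimal sketch of such an indexing head follows: m independent linear classifiers over a shared document representation, with an argmax per position when a hard docid is needed. The dimensions and class name are assumptions for illustration.

```python
# Sketch of the semantic indexing module: one linear head per docid position,
# following P(id_i | d) = Linear_i(x), with an argmax for hard assignment.
import torch
import torch.nn as nn

class SemanticIndexer(nn.Module):
    def __init__(self, d_model: int = 768, docid_len: int = 4, id_range: int = 256):
        super().__init__()
        self.heads = nn.ModuleList([nn.Linear(d_model, id_range) for _ in range(docid_len)])

    def forward(self, x: torch.Tensor):
        # x: (batch, d_model) document embedding -> one distribution per id position.
        return [head(x).softmax(dim=-1) for head in self.heads]

    @torch.no_grad()
    def assign(self, x: torch.Tensor) -> torch.Tensor:
        # Hard docid used to index existing or new documents: (batch, docid_len).
        return torch.stack([p.argmax(dim=-1) for p in self(x)], dim=-1)
```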
Semantic-Oriented Losses
Existing indexing policies commonly enforce that each docid uniquely refers to one document, which reduces the retrieval efficiency.We argue to assign similar documents with same docid, thus supporting to retrieve more documents at the same computational cost.
To this end, as depicted in Figure 2, we propose a discrete contrastive loss and a sequence-oriented reverse cross-entropy loss for semantic indexing module to softly encourage the assignment of different docids to different documents, rather than utilizing manual rules to force it.
Discrete Contrastive Loss First of all, the premise of assigning similar documents the same docid is that the encoder should learn semantics-based representations for documents. Accordingly, we propose a discrete contrastive loss to help learn different embeddings for documents of different semantics, where "different" is measured by query-document pairs. Formally, given a set of training examples D = {(q, d)} composed of query-document pairs, the discrete contrastive objective pulls together the docid id_d of a document and the pseudo query id id_q of its paired query, both calculated by Eq. (5), and pushes non-matching pairs apart by at least a margin α, where α is a hyperparameter. Note that it is hard to measure the distance between two discrete ids and compute the corresponding gradient; we therefore compute distances between the probability distributions of the id tokens, where the distribution P(·) refers to Eq. (4).
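Since the loss equation itself did not survive extraction, the sketch below is only one plausible margin-based reading of this description, with the distance function, negative-sampling scheme, and names all assumed:

```python
# One possible form of the discrete contrastive loss: distances are computed
# between per-position id distributions; matched query-document pairs are pulled
# together and in-batch negatives are pushed apart by at least the margin alpha.
import torch
import torch.nn.functional as F

def docid_distance(p_a, p_b):
    # p_a, p_b: lists of (batch, V_id) distributions, one per docid position.
    return sum(((a - b) ** 2).sum(dim=-1) for a, b in zip(p_a, p_b))

def discrete_contrastive_loss(p_query, p_doc, p_negative_doc, margin: float = 3.0):
    positive = docid_distance(p_query, p_doc)           # matched query-document pair
    negative = docid_distance(p_query, p_negative_doc)  # in-batch negative document
    return (positive + F.relu(margin - negative)).mean()
```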
Sequence-Oriented Reverse Cross-Entropy Loss Intuitively, considering minimizing crossentropy loss is usually used to make a variable closer to a given label, we can conversely maximize it to make an id token away from the given label (Pang et al., 2018).Considering that docid is formed as a sequence of id tokens, it is not necessary to guarantee that every token of the two sequences is different, but only one of the tokens is different.Therefore, given a pair of docids, we can find the maximum value from the cross-entropy of all id token pairs, which means this pair is most likely to become different.Then, our sequenceoriented reverse cross-entropy loss is proposed to maximize this maximum value.
Formally, given two documents d_j, d_k ∈ D, let id_j and id_k denote their docids, respectively. We can diversify their docids by minimizing the negative of the largest per-position cross-entropy between the two id sequences. For mini-batch training, the document pairs are selected from within a batch.
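With the original equation missing, the following sketch shows one way to realize this idea: per-position cross-entropies between the id distributions of one document and the hard id tokens of another, with only the largest one pushed up. Names and shapes are assumptions.

```python
# One plausible form of the sequence-oriented reverse cross-entropy loss: make at
# least one docid position of a document pair clearly different by maximizing the
# largest per-position cross-entropy (i.e., minimizing its negative).
import torch
import torch.nn.functional as F

def reverse_cross_entropy_loss(p_doc_j, id_doc_k: torch.Tensor, eps: float = 1e-8):
    # p_doc_j: list of (batch, V_id) id distributions of document j.
    # id_doc_k: (batch, m) hard docid tokens of document k.
    ce_per_position = torch.stack(
        [F.nll_loss((p + eps).log(), id_doc_k[:, i], reduction="none")
         for i, p in enumerate(p_doc_j)],
        dim=-1,
    )                                           # (batch, m) cross-entropy per id position
    return -ce_per_position.max(dim=-1).values.mean()
```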
Reparameterization Mechanism
The above semantic indexing module is expected to learn the best docids jointly with L_seq. However, the obtained docids are used as generation ground-truths, which cannot maintain the gradient flow through a softmax but must appear in discrete one-hot form via a gradientless argmax function. Consequently, the gradient cannot propagate back from the decoder to the semantic indexing module, so the optimization of the decoder and the semantic indexing module remains decoupled. This means that directly treating the one-hot docids as the training target probably makes the decoder fall into meaninglessly mimicking, rather than surpassing, the indexing module, since ∂L_seq/∂Θ_i = 0. Inspired by DALL-E (Ramesh et al., 2021), we devise a simple but effective reparameterization mechanism to support our end-to-end learning framework via the Straight-Through Estimator (STE) (Hinton, 2012). Specifically, suppose the semantic indexing module outputs the i-th id token id_i for a document d. By the chain rule,

∂L_seq/∂Θ_i = ∂L_seq/∂îd_i · ∂îd_i/∂P(id_i|d) · ∂P(id_i|d)/∂Θ_i,

where îd_i denotes the one-hot vector of id_i and the middle term ∂îd_i/∂P(id_i|d) is non-differentiable. STE suggests defining the non-differentiable term as "1". To this end, the output one-hot vector is reparameterized in the forward pass as

îd_i = P(id_i|d) + detach(îd_i − P(id_i|d)),

where detach() excludes a tensor from backpropagation. As such, STE uses the gradient of P(id_i|d) to replace the gradient of the argmax. Finally, Eq. (3) is rewritten with Θ_{e,d,i}, the trainable parameters of the encoder, decoder and indexing modules, in place of Θ_{e,d}.
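The reparameterization described here is the standard straight-through trick and can be written in one line; the sketch below is a generic PyTorch rendering, not the authors' code.

```python
# Straight-through estimator for the docid assignment: the forward pass emits the
# hard one-hot vector, while gradients flow to the soft distribution P(id_i | d).
import torch
import torch.nn.functional as F

def straight_through_docid(p: torch.Tensor) -> torch.Tensor:
    # p: (batch, V_id) distribution over id tokens for one docid position.
    hard = F.one_hot(p.argmax(dim=-1), num_classes=p.size(-1)).to(p.dtype)
    return p + (hard - p).detach()   # forward: hard one-hot; backward: gradient of p
```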
Model Training & Inference
For model training, we merge the above objective functions into

L = L_seq + γ_c · L_cont + γ_r · L_rce,

where γ_c and γ_r are scaling coefficients. In the inference (retrieval) phase, when a user query is input to retrieve documents, we apply the encoder to encode the query, then run beam search on the decoder to generate relevant docids, and finally look up the docid-document mapping to output the retrieved documents.
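A sketch of how the joint objective and the retrieval step could look is given below; the coefficient values follow the implementation details reported later, while the function names and the docid-to-document lookup structure are assumptions.

```python
# Joint objective: generation loss plus the two semantic-oriented losses.
def total_loss(loss_seq, loss_cont, loss_rce, gamma_c: float = 0.2, gamma_r: float = 0.2):
    return loss_seq + gamma_c * loss_cont + gamma_r * loss_rce

# Inference: beam-search docids for the query, then expand each docid into the
# documents it points to via the learned docid-to-document mapping.
def retrieve(query, generate_docids, docid_to_docs, beam_size: int = 10):
    # generate_docids is an assumed callable wrapping the encoder and a beam-search
    # decoder; it returns up to beam_size docids as tuples of id tokens.
    docids = generate_docids(query, beam_size)
    return [doc for docid in docids for doc in docid_to_docs.get(docid, [])]
```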
As for when a new document appears and should be incorporated into the existing document collection, we apply the encoder followed by the semantic indexing module to assign a docid to it.It is worth noting that this docid belongs to the same semantic space as others.As a result, even if it has never appeared in the training document collection, there is no need to retrain the model for this docid.
Datasets
We evaluate the empirical performance on a public dataset and an industrial dataset for document retrieval.The statistics of the data are reported in Table 1 and a brief introduction is as follows.
MS MARCO Document Ranking Task (Nguyen et al., 2016) It is a large-scale dataset for machine reading comprehension, which contains a total of 3.2 million candidate documents. We use the official split of the dataset. Besides, for fair comparison, following Zhou et al. (2022), we apply DocT5Query (Nogueira et al., 2019a) for query generation.
ADS It is a real-world large-scale dataset collected from the Bing sponsored search engine, which provides organic web results in response to user queries and then supplements them with sponsored ads. We collect query-ad pairs where the ads are the concatenation of the title and abstract of the sponsored ads corresponding to the user query.
Baselines
To validate the effectiveness of our proposed ASI, we compare it with the following three groups of strong document retrieval baselines.
Sparse Retrieval BM25 (Robertson and Zaragoza, 2009) is a difficult-to-beat baseline which uses TF-IDF feature to measure term weights.DocT5Query (Nogueira et al., 2019a) utilizes T5 to generate pseudo query for document to expand document information and then applies BM25 for document retrieval.
Generative Retrieval DSI (Tay et al., 2022) is the first generative retrieval framework to directly output docids with the query as input. We compare ASI with two DSI variants, which construct docids with random unique integers and hierarchical clusters, namely DSI-Atomic and DSI-Semantic, respectively. DSI-QG (Zhuang et al., 2022) bridges the gap between indexing and retrieval for the differentiable search index with a query generation technique. SEAL (Bevilacqua et al., 2022) regards all n-grams contained in documents as their identifiers. Ultron (Zhou et al., 2022) designs a three-stage training workflow where the docids are built by reversed URLs or product quantization (Zhan et al., 2021) on document embeddings. We denote these two variants as Ultron-URL and Ultron-PQ, respectively. NCI (Wang et al., 2022) invents a tailored prefix-aware weight-adaptive decoder architecture, better suited to its hierarchical clustering-based docids and beam search-based generator. GenRet (Sun et al., 2023) uses a codebook and discrete auto-encoder with progressive training to learn the docid assignments within the retrieval stage.
Metrics
We evaluate model performance with the following common metrics for document retrieval. Recall@K (R@1/5/10) counts a query as a true positive when the decoder generates the same docid as that assigned by the semantic indexing module to a relevant document. Moreover, for the ADS dataset, the Quality Score between the query and the retrieved documents is measured by an online quality estimation tool in Bing. Considering that ASI allows each docid to point to multiple documents, we report the micro- and macro-averaged Quality Score, denoted Mi-QS and Ma-QS respectively. Besides, we also report the average number of retrieved documents per query when generating the Top-10 docids, denoted D/Q.
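Scoring Recall@K under this setup can be sketched as follows; this is an illustration of the metric as described, with names assumed, not the evaluation script.

```python
# Recall@K with docids: a query is a hit if the docid assigned to a relevant
# document by the semantic indexing module appears among the top-k generated docids.
def recall_at_k(generated_docids, gold_docid, k: int = 10) -> float:
    return float(gold_docid in generated_docids[:k])

def mean_recall_at_k(all_generated, all_gold, k: int = 10) -> float:
    hits = [recall_at_k(gen, gold, k) for gen, gold in zip(all_generated, all_gold)]
    return sum(hits) / len(hits)
```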
Detailed Implementation
In terms of model architecture, we build ASI with a 6-layer encoder and a 6-layer decoder, where the encoder is initialized with a pretrained 6-layer BERT and the decoder is optimized from scratch since its vocabulary is changed to id tokens. For encoder-only or decoder-only baselines, i.e., RepBERT, Sentence-T5, DPR, we set the layer number to 12 for fair comparison. For encoder-decoder models, we set the encoder/decoder layer number to 6, i.e., 12 layers in total. For other settings, we set the max length of the input sequence to 64 for our model and follow the settings for baselines in Zhou et al. (2022). We set the margin α=3, the loss coefficients γ_c=γ_r=0.2, the length of the docid m=4, and the range of each id token to [0, 256). For model training, we set the batch size to 4096 and the learning rate to 1e-4 with the AdamW optimizer. For inference, we apply vanilla beam search without constraints and set the beam size to 10.
[Table note: results marked accordingly are taken from Sun et al. (2023) or Tang et al. (2023), and "‡" denotes results we reproduce with their official implementations.]
Retrieval Performance on MS MARCO
We evaluate the model performance for document retrieval on the MS MARCO dataset in this subsection, as reported in Table 2. The major findings from the results are summarized as follows: (1) ASI outperforms all the competitive baselines by a significant margin across four different metrics on MS MARCO. Especially on the R@1 metric, ASI achieves almost twice the performance of the strongest baseline, i.e., Ultron-Atomic, which validates the superiority of our fully end-to-end pipeline. We attribute this surprising gain to both its tailored design and the more suitable docids learned for generative retrieval.
(2) Furthermore, considering that ASI allows each docid to point to multiple documents, it is somewhat unfair for baselines to compare directly on Recall. Therefore, we modify the metrics for further comparison: for a retrieved docid that points to multiple documents, we randomly sample one single document for it. As such, ASI would also retrieve the same number of documents as the baselines. Besides, in order to further eliminate the occasional performance fluctuation caused by the random sampling strategy, we choose to report the expectation of Recall/MRR in Table 2 (refer to ASI (Expectation)). It can be seen that even if the influence of the one-to-many mapping of docids is excluded, our ASI is still significantly better than all the strong baselines. This further demonstrates the superiority of our design.
Retrieval Performance on ADS
Considering the sparsity of the supervision signal in datasets, the document that was not interacted with the given query according to the dataset is not definitely an "inappropriate" retrieval result.Therefore, in this subsection, we evaluate ASI on ADS dataset and focus on the retrieval quality.
Settings. For datasets, we first conduct experiments on the collected dataset, i.e., we train the model on the training set and perform document retrieval on the about 13M documents from the training & validation sets. Furthermore, we also evaluate this checkpoint on the full collection of about 689M documents in the Bing platform. For baselines, we select two representative baselines, i.e., SimCSE and SEAL. Furthermore, we modify SimCSE to support docid retrieval, where we use the average of the document embeddings corresponding to every docid as the docid embedding, based on the document-docid mapping relationship learned by our ASI, namely SimCSE_docid.
For the Recall@K metrics, as reported in Table 3, ASI outperforms the selected baselines by a very large margin. It cannot be ruled out that this is partly because the one-to-many docids reduce the retrieval difficulty, so we also add a further comparison for ASI. For Quality Score, SEAL outperforms all the methods including ours. We analyze that this is because SEAL is based on n-grams. Limited by the sparse supervision signal in the dataset, it cannot obtain the expected Recall@K performance, but the n-grams guarantee that its retrieved documents are of high quality. After adding the docid mappings learned by ASI, SimCSE_docid greatly improves the Quality Score and even achieves better performance than ours, which validates the effectiveness of the docids learned by ASI in terms of quality. However, these baselines still suffer from low retrieval efficiency. Specifically, SEAL can only generate 10 documents in one generation process (setting topk=10 for beam search in the decoder), and SimCSE also incurs high computational and storage costs even when equipped with ANN search, while our ASI can retrieve a large number of documents in one generation process without any extra storage. It is worth noting that we can retrieve 1344 documents of competitive quality for each query at the same computational cost, which is more than 100 times the efficiency of SEAL.
Additionally, we apply this semantic indexing module to encode the full collection of documents, and evaluate the quality score.One can see that we can obtain more impressive retrieval efficiency and the price we need to pay is only a little bit of acceptable quality degradation.It demonstrates the advantages of our model in terms of both effectiveness and efficiency in practical usage.
Analysis for New Documents
As mentioned before, thanks to the semantic indexing module with two semantic-oriented losses, ASI can better deal with the new documents.Specifically, there are two possible results when dealing with new texts.One is to assign new documents with an existing docid, which means they are new in content rather than semantics.The other is that the new documents are assigned with a new docid, which means they are new in semantics.Hence, we split the validation set into three subsets, including "Existing" denoting the documents that are contained in training set, "NewContent" denoting the first category of new documents and "NewSemantic" denoting the second category.
As shown in Table 4, compared with the performance on the full validation set of ADS, ASI performs better on group "Existing".Surprisingly, ASI also performs better in "NewContent".For the "NewSemantic" group, our ASI still achieves desirable performance.These results validate the semantic indexing module does not mechanically copy the docids contained in the training set, but fully learns the relationship between document semantics and docids.That's why our model has such a satisfying ability to handle new documents.Due to space limitations, we report the performance of baselines on new documents in Appendix A.
Comparison of Variants
We compare our ASI on ADS with the following variants to study the effectiveness of each module.Specifically, "ASI-Unique" denotes the variant using unique id tokens for different id positions like "0,256,512,768", which is our proposed model; "ASI-Share" denotes the variant using shared id tokens for different id positions like "0,0,0,0"; "w/o Cont" denotes the variant removing the discrete contrastive loss; "w/o RCE" denotes the variant removing the sequence-oriented reverse crossentropy loss; "w/o Repara" denotes the variant removing the reparameterization mechanism.We also add a metric Accuracy (Acc) that counts if the semantic indexing module assigns the same docid to the query and document.As reported in Table 5, we can draw the following conclusions.
As reported in Table 5, "ASI-Unique" slightly outperforms "ASI-Share", indicating the unique id tokens for different id positions carry more semantic information.When the discrete contrastive loss is removed, the model completely fails to be trained, referring to "w/o Cont", while "w/o RCE" performs better in terms of Recall but the Quality Score drops since more documents are assigned with a same docid (i.e., the D/ID becomes larger).The removal of the reparameterization mechanism causes a certain decrease of both Recall and Quality.These results verify the effectiveness of each module.It is more noteworthy that the R@1 of "w/o Repara" is lower than Acc while others are not.It implies that the reparameterization mechanism exactly makes the semantic indexing module be trained jointly with encoder-decoder, so that the decoder achieves superior performance than directly using semantic indexing module.
Case Study for Docid Assignment
In this subsection, we provide three docid cases from ADS to uncover ASI's docid assignments.
As illustrated in Table 6, ASI can assign the same docid to semantically similar documents, e.g., documents of Docid 1 are all about sapphire rings.Besides, ASI also assigns similar docids for similar topics.For example, Docid 1 and Docid 2 differ only in the third position, so the topics are similar, i.e., sapphire rings and sapphire earrings.These cases demonstrate that our proposed model can effectively capture the document semantics and assign meaningful docids for them.Please refer to Appendix B for more detailed analysis of docid assignment, where more case analysis can be found in Appendix B.3.
Conclusion
In this paper, we make the first attempt to propose a fully end-to-end pipeline, Auto Search Indexer (ASI), which supports both end-to-end docid indexing and end-to-end document retrieval within a joint framework.Extensive experiments on the public and industrial datasets show that our ASI outperforms the SOTA methods for document retrieval.Besides, the experiments also demonstrate the superiority of ASI for handling new documents and verify the effectiveness of its docid assignments.
Limitations
Hierarchical Docids The docid assigned by our ASI does not have a hierarchy due to the equivalent multiple linear layers in semantic indexing module.As studied in previous work (Tay et al., 2022;Wang et al., 2022), hierarchical docids might be more suitable for beam search-based generators.We expect this can be implemented by a hierarchical neural clustering-based indexing module.We left it for future work as we focus on the "fully" endto-end framework in this paper.
Docid Interpretability
The docid is represented as an integer sequence with no semantic information that humans can understand.As observed in the case study (referring to 4.6 and B.3), these integers might have become a machine language that only the model itself can understand and use during training and inference.Therefore, we will study how these integer sequences can be interpreted for humans.
Post-Retrieval Filtering Strategy While permitting a single docid to correspond to numerous documents may improve the efficiency of the recall phase, retrieving an excessive number of documents could place significant pressure on the subsequent ranking phase. This implies that a post-retrieval filtering step might be required to reduce the number of retrieved results.
A Baseline Performance on new documents
In this section, we report the performance on new documents of some selected baselines.Specifically, Table 7 and 8 show the performances of SimCSE and SEAL, respectively.As shown in the above tables, the baselines similarly perform the best on group "Existing" and perform the worst in "NewSemantic".Compared these results with those in Table 4, ASI outperforms the baselines in the four groups of validation sets on the metric R@K and shows competitive performance on the metric Quality Score.More importantly, we highlight that ASI can simultaneously retrieve far more documents than the baselines with the same computational costs.
B Analysis on Docid Assignment
In this section, we make a thorough analysis on the docid assignment of ASI on the dataset ADS.
B.1 Assignment Density
In ASI, each docid is allowed to point to several documents.In this section, we study the assignment density of docids.
As illustrated in Figure 3, most of the docids point to a single document, while a few of the docids point to hundreds of documents. In other words, the density of docid assignments follows a long-tail distribution.
B.2 Docid Visualization
In this section, we visualize the assigned docids, where each docid is regarded as a continuous vector and visualized by t-SNE (Maaten and Hinton, 2008).
As depicted in Figure 4, there is no obvious clustering phenomenon on the learned docids.On the contrary, the distribution of docids is nearly uniform.It indicates that the proposed semantic indexing module as well as the two semantic-oriented losses can evenly learn from different documents.Besides, it verifies the ability of docids to distinguish different document semantics from the side and also guarantees that each docid has a large coverage of semantics.
B.3 More Cases
We provide more docid cases from dataset ADS in Table 9, 10 and 11.
As shown in the tables, among the four docid cases in Table 9, the first three id tokens are the same while the fourth id token differs. The docid cases in Table 10 all involve "swimsuits", while focusing on the finer-grained topics "high neck swimsuit", "women's swim dress", "swimwear", "bikini" and "women's one piece swimsuits". It is worth noting that the shared id tokens in Table 10 appear in the first, second and fourth positions, while the differing id token appears in the third position. This is because the id tokens at different positions are assigned equivalently by our semantic indexing module.
[Table excerpts: Docid 6: 10,194,81,99 (women's swim dress); Docid 7: 10,194,203,99 (swimwear); Docid 8: 10,194,225,99 (bikini); Docid 9: 10,194,231,99 (women's one piece swimsuits); Docid 17: 11,82,244,137 (napkins); Docid 18: 11,21,140,203 (dinnerware).]
In addition, the two groups of docids in the two tables are totally different, which is also in line with the fact that the coarser-grained topics of the two are different, i.e., "tablecloths" and "swimsuits".
Moreover, we provide more cases whose docids start with "11" in Table 11. As illustrated in the table, the topics of the 9 docids are "bowls", "table number", "marble coffee table", "glass coffee table", "disposable plates", "plastic tablecloth", "paper plates", "napkins" and "dinnerware", respectively. Together with the cases in Table 9, these docids starting with "11" are all related to "tableware". This implies that a single id token corresponds to a coarser concept, while a group of id tokens points to a finer concept. These cases verify the ability of our model to learn semantically meaningful docids for documents.
B.4 Quantitative Analysis
In this section, we conduct a quantitative analysis of the docid assignment. In ASI, the semantic indexing mechanism allows a docid to point to multiple documents, following the principle of sharing docids among similar documents and assigning distinct docids to dissimilar ones. We therefore measure the averaged cosine similarity of document pairs based on TF-IDF vectors and BERT embeddings. Specifically, "Random" denotes completely random sampling of document pairs, "ASI" denotes sampling pairs that share the same docid, and "p-value" indicates whether the difference between the two is statistically significant under a t-test (conventionally, p < 0.01 is considered highly significant).
As shown in Table 12, the similarity between documents sharing the same docid is significantly larger than that between random documents, which demonstrates the effectiveness of the docid assignments learned by ASI.
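The measurement above can be reproduced with a short script. The sketch below is illustrative only and assumes hypothetical inputs: `docs`, a list of raw document strings, and `docids`, a parallel list of assigned docid strings; it uses TF-IDF vectors, and the BERT variant would simply swap in sentence embeddings.

```python
import random
from collections import defaultdict

import numpy as np
from scipy.stats import ttest_ind
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def docid_similarity_check(docs, docids, n_pairs=10000, seed=0):
    """Compare TF-IDF cosine similarity of same-docid pairs vs. random pairs."""
    rng = random.Random(seed)
    vecs = TfidfVectorizer().fit_transform(docs)          # sparse (n_docs, vocab)

    groups = defaultdict(list)                            # docid -> document indices
    for idx, docid in enumerate(docids):
        groups[docid].append(idx)
    shared = [g for g in groups.values() if len(g) >= 2]  # docids pointing to >= 2 documents

    same, rand = [], []
    for _ in range(n_pairs):
        i, j = rng.sample(rng.choice(shared), 2)          # pair sharing a docid
        same.append(cosine_similarity(vecs[i], vecs[j])[0, 0])
        i, j = rng.sample(range(len(docs)), 2)            # completely random pair
        rand.append(cosine_similarity(vecs[i], vecs[j])[0, 0])

    _, p_value = ttest_ind(same, rand, equal_var=False)   # Welch's t-test
    return np.mean(same), np.mean(rand), p_value
```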
C Parameter Sensitivity
There are two essential hyper-parameters in ASI, i.e., the length of the docid, m, and the range of each id token. In this section, we study the impact of these two hyper-parameters.
C.1 Impact of Docid Length
We evaluate Recall@K and the Quality Score for ASI with docid lengths from 2 to 6, with the docid range set to 256. As depicted in Figure 5(a), as the docid length increases, the micro- and macro-averaged Quality Scores improve considerably while Recall@1/5/10 decreases noticeably. This is because model capacity increases with the docid length, while the difficulty of generation for the decoder also increases due to the longer target docid. Consequently, ASI can assign different docids to documents based on their finer-grained semantics (see Figure 5(b)), supporting better retrieval quality at the expense of recall.
C.2 Impact of Docid Range
We evaluate the performance of ASI with docid ranges from [0, 32) to [0, 512), with the length set to 4. As illustrated in Figure 6, the observations for different docid ranges are similar to those above. However, the performance of ASI is not very sensitive to the docid range. We attribute this to the fact that model capacity grows exponentially with the docid length but only polynomially with the docid range.
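To make the capacity argument concrete, the number of addressable docids is range**length, so it grows exponentially in the length m but only polynomially in the id-token range; a quick illustration over the grids studied above:

```python
# Capacity of the docid space is range ** length: exponential in the docid length m
# but only polynomial in the id-token range, matching the sensitivity observed above.
for m in range(2, 7):                       # docid lengths studied in Figure 5
    print(f"length={m}, range=256 -> {256 ** m:,} distinct docids")
for r in (32, 64, 128, 256, 512):           # id-token ranges studied in Figure 6
    print(f"length=4, range={r} -> {r ** 4:,} distinct docids")
```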
All in all, the two experiments above imply a trade-off between efficiency and effectiveness in practical applications. Therefore, in the main body of this paper, we set the docid length to m = 4 and the range of each id token to [0, 256).
Figure 2: Illustration of the proposed semantic indexing module and two semantic-oriented losses.
Figure 5: Performance and docid statistics under different docid lengths.
Figure 6: Performance and docid statistics under different docid ranges.
Table 1: Statistics of datasets.
Table 2: Performance on dataset MS MARCO. The best two results are shown in bold and the third best are underlined. "†" denotes that the performance is referred from Zhou et al. (2022).
Table 3: Performance on dataset ADS.
Table 4: Performance on new documents from ADS.
Table 5: Comparisons of different variants on dataset ADS.
Table 6: Three docid cases from ADS.
Table 7: Performance of SimCSE on new documents from ADS.
Table 8: Performance of SEAL on new documents from ADS.
Table 9: Four docid cases about tablecloth from ADS.
Docid 1: 11,200,244,50 (green tablecloth) green tablecloth bed bath beyond what product can we help you find n clearance ... wayfair green outdoor tablecloths you ll love in 2023 get it by tue feb 7 ... buy green solid tablecloths online at overstock our best table linens decor deals ... green kitchen tablecloths bed bath beyond clearance ... green modern elegant tablecloths luxury tablecloths bloomingdale's ...
Docid 2: 11,200,244,53 (gingham tablecloth) gingham tablecloth serena and lily over 150 new arrivals to explore the fresh start event enjoy ... wayfair blue gingham table linens you ll love in 2023 get it by thu feb 16 ... wayfair 100 cotton gingham tablecloths you ll love in 2023 i am in absolute love with ... food network woven gingham tablecloth create a rustic dining atmosphere ... wayfair classic farmhouse gingham tablecloths you ll love in 2022 n shop wayfair ...
Docid 3: 11,200,244,137 (table skirt) table skirts party tablecloths target exclusions apply ... cloth table skirts wayfair ... cloth table skirts pleated wayfair get it by thu feb 9 ... 9ft natural raffia table skirt tropical table skirts for hawaiian decoration tableclothsfactory ... snap drape table skirting covers clips napkins more snap drape also called sdi brands was founded ...
Docid 4: 11,200,244,254 (table linens) sale tablecloths table linens kohl s n enjoy free shipping and easy returns every day ... table linens tagged tablecloth page 3 elrene home fashions showing items 57 ... cyber monday special tablecloth and table linens macy's ... tablecloth and table linens macy's ... table linens tagged tablecloths elrene home fashions ...
Table 10: Five docid cases about high neck swimsuit from ADS.
Docid 5: 10,194,75,99 (high neck swimsuit) high neck swimsuit in living art venus a feminine silhouette that offers fuller coverage ... women's high neck swimsuits lululemon need it fast use available near you to buy and pick ... waterside high neck one piece swimsuit medium bum coverage online ... sailor blue high neck zip up bikini top bikini venus high neck with zipper at the ... high neck swimsuit in living art venus r n shop high neck swim dress ...
Docid 6: 10,194,81,99 (women's swim dress) swim dresses women's swimsuit dresses target exclusions apply ... sale womens swimdress kohl s n enjoy free shipping and easy returns every day ... swim dresses women's clothing dillard's ... womens swim dress lightinthebox com ... swimdress women's swimwear macy's new plus size cape town ...
Docid 7: 10,194,203,99 (swimwear) designer bonpoint swimwear saks fifth avenue n designer bonpoint swimwear at saks enjoy free shipping ... best beverly swimwear coupon codes online stop searching start saving why scroll when you can save ... ocean dream swimwear shop the world's largest collection of fashion shopstyle n shop 7 top ocean ... ashanti swimwear promo codes deals discounts for free january 2023 install capital one shopping to apply ... swimwear and beachwear zimmermann net a porter claim your exclusive discount code when you subscribe to ...
Docid 8: 10,194,225,99 (bikini) bikini shorts swim shorts for women venus get sexy bikini shorts that keep you on the ... full coverage swim shorts bikini sweet dreams venus twisted bodice accentuates your curves while adding ... swim shorts bikini aqua reef venus achieve a more modest beach day look in this pair ... women's swimsuits micro bikinis beach cover ups asos getting ready for your holidays wherever you re ... women's swimwear bikinis tankinis fatface us the warmest days of the year are on their way ...
Docid 9: 10,194,231,99 (women's one piece swimsuits) women's one piece swimsuits lululemon need it fast use available near you to buy and pick ... one piece swimsuits for women macy's one piece swimsuits are a must have for any woman's ... black one piece women's swimsuits swimwear macy's new women's linked in colorblocked oceanus ... one piece women's swimsuits swimwear macy's new women's bias stripe bandeau one piece swimsuit ... one piece swimsuits for women target exclusions apply n whether you re planning a beach vacation ...
Table 11: More docid cases starting with the "11" id token from ADS.
Docid 10 (bowls): plastic serving bowls williams sonoma sugg price 59 95 ... disposable bowls walmart com green walmart com n shop for disposable bowls walmart com ... plastic bowls round seagreen 2oz 100 count box my cart ...
Docid 11: 11,186,149,34 (table number) top 10 best wedding table numbers gold of 2023 ... table number cards zazzle new instant downloads n weddings n invitations cards ... letters table numbers efavormart decorations prove to be the most important aspect of a party ...
Docid 12: 11,170,45,92 (marble coffee table) pierre marble coffee table williams sonoma buy in monthly payments with affirm on orders over 50 ... madison park signature marble coffee table with its luxurious marble top this madison park ... buy modern contemporary marble coffee tables online at overstock our best living room furniture ...
Docid 13: 11,170,0,92 (glass coffee table) buy glass square coffee tables online at overstock our best living room furniture ... st germain glass coffee table serena and lily buy in monthly payments with affirm on orders ... glass coffee tables raymour flanigan a coffee table is for more than just coffee ...
Docid 14: 11,135,112,203 (disposable plates) disposable plates efavormart 5 10 off all folding chair covers ... our 10 best disposable plates in the us january 2023 bestproductsreviews com ... blue panda disposable plates 48 pack paper plate party supplies ...
Docid 15 (plastic tablecloth): plastic tablecloth rolls in plastic tablecloths walmart com ... free deals discounts on clear plastic tablecloth roll w self cutter wide thick disposable table cover ... plastic table cloths etsy ...
Docid 16 (paper plates): our 10 best paper plates in the us january 2023 bestproductsreviews com ... black and white paper plates wayfair get it by tue feb 14 ... paper plates white walmart com way to celebrate white paper dessert plates 7in 24ct ...
Docid 17: 11,82,244,137 (napkins) table linens napkins williams sonoma earn 10 back in rewards with the new williams sonoma credit ... linen napkins when it comes to event décor every detail counts high quality banquet napkins ... farmhouse cloth napkins wayfair i thought this was a great buy for that many napkins ...
Docid 18: 11,21,140,203 (dinnerware) world tableware dinnerware foodservice products webstaurantstore for years ... restaurant dinnerware wholesale plates bowls dishes step up the elegance at your restaurant ... tabletop dinnerware serveware kohl s n enjoy free shipping and easy returns every day ...
Ensemble Learning for Fusion of Multiview Vision with Occlusion and Missing Information: Framework and Evaluations with Real-World Data and Applications in Driver Hand Activity Recognition
Multi-sensor frameworks provide opportunities for ensemble learning and sensor fusion to make use of redundancy and supplemental information, helpful in real-world safety applications such as continuous driver state monitoring which necessitate predictions even in cases where information may be intermittently missing. We define this problem of intermittent instances of missing information (by occlusion, noise, or sensor failure) and design a learning framework around these data gaps, proposing and analyzing an imputation scheme to handle missing information. We apply these ideas to tasks in camera-based hand activity classification for robust safety during autonomous driving. We show that a late-fusion approach between parallel convolutional neural networks can outperform even the best-placed single camera model in estimating the hands' held objects and positions when validated on within-group subjects, and that our multi-camera framework performs best on average in cross-group validation, and that the fusion approach outperforms ensemble weighted majority and model combination schemes.
Introduction
Manual (hand-related) activity is a significant source of crash risk while driving; driver distraction contributes to around 65% of safety-critical events (crashes and near crashes) [1], and more than 3,000 deaths in 2022 [2]. Furthermore, given recent consumer adoption of early-stage autonomy in vehicles, driver hand activity has been shown to lead to various incidents even in these semiautonomous vehicles. Drivers in vehicles supported by partial autonomy show high propensity to engage in distracting activities when supported by automation [3] and show increased likelihood of crashes or near-crashes when engaged in distracting activity [1]. Moreover, it is important to consider the manner of transitions when the driver must take manual control of semi-autonomous vehicles, as drivers demonstrate a slowness or inability to handle these control transitions safely when occupied with non driving-related tasks, often involving the hands [4] [5].
Accordingly, analysis of hand position and hand activity occupation is a useful component to understanding a driver's readiness to take control of a vehicle. Visual sensing through cameras provides a passive means of observing the hands, but its effectiveness varies depending on the camera location.
In this paper, we present a multi-camera sensing framework and machine learning solution, which we apply to the problem of robust driver state monitoring for autonomous driving safety. Our real-world, constrained application represents just one use case for this framework, as it can readily be extended to an ensemble of N domain-agnostic data sources and models for similar tasks; accordingly, we provide both a domain-specific and a generalized formulation of the sensing framework and learning problem in the following sections.
Consider an intelligent vehicle which classifies a driver's hand activity for a downstream safety application. By constraints imposed by vehicle manufacturing, we may have multiple cameras (in our case, four) which observe the driver from varying angles: head-on from the steering wheel, diagonally from the rearview mirror, diagonally from the dashboard, and peripherally from the central console. It is readily apparent that, depending on the driver's current position, there are instances where: 1. Only one of the four cameras has any view of the driver's hands, or 2. Multiple cameras have a view of the driver's hands. An ideal intelligent system would recognize which of the cases is present, and in the former, choose to use the visible information to make an estimate, and in the latter, form an estimate made with the joint information of the multiple views which may be helpfully redundant (both cameras observing the fingers grasping the wheel, from multiple directions) or supplemental (one camera observes the fingers clasped to the wheel rim, the other observes the wrist resting on the wheel center) to the task at hand. This redundancy is closely related to the concept of homogeneity in [6]. Because this redundant or supplemental information can be present or absent between instances, we refer to this particular "missing data" phenomenon as irregular redundancy. Conceptually, this is similar to situations where data streams which operate under noisy conditions or at different sample rates are provided as input to a model which must provide output despite lost frames due to sampling rate or corruption.
More generally, we may describe this as a problem of sensor fusion, by which we must handle data to best leverage the accompanying noise, variance, and redundancy between samples to create an optimal estimate. In this theme, here we pose our framework as a system in which we have multiple data sources of the same event, and our goal is to learn an optimal model which accurately estimates a property of the event.
Here, we are left with a few choices:
1. A model learned from one of these sources may tend to provide the best estimate, and we use only this source for future inference.
2. Models learned independently from each of these sources can each provide an independent estimate, and we can interpret their respective estimates to reach a group-informed consensus estimate.
3. A single model can be learned simultaneously between the sources, exploiting moments of redundancy and uncertainty in the data sources, such that the model provides an estimate with intelligence in selecting relevant features from data sources at any given instance dependent on the state of the other sources.
This question, described as the multimodal reasoning problem [6], is thoroughly investigated in the work of Seeland and Mäder [7], as will be discussed in the following section. Their analysis of multi-view classification utilizes datasets with complete data; here we seek to extend their work by answering a further question critical to real-world, real-time tasks: can models and learning paradigms generalize to cases of multiple data sources when significant data is missing?
This problem of missing, corrupted, or asynchronized multi-modal data is found in many domains, ranging from biomedical imaging modalities like photoacoustic and computed tomography and optical microscopy [8] to autonomous systems dealing with temporally-calibrated LiDAR, vision, and radar [9] and identification of crop disease from satellite imagery with significantly different capture frequencies or resolutions [10].
In our analysis, we examine "best-of-N" performance from collections of N independent models, as well as schemes which negotiate between the logits of N independent models, and a model which learns between hidden features derived from the N data sources jointly, known as late fusion. We adopt the term "ensemble" to refer to the N models respectively learned from the N data sources, which may be combined to generate a prediction. Critically, under our condition of irregular redundancy, the number of views available varies between instances, thus requiring the introduction of our method for multi-view ensemble learning with missing data. Further, because there are multiple simultaneous tasks involved in driver monitoring, we examine the task relevance of each modality [6] in our analysis.

Table 1: Multiview fusion methods. Homogeneity (from [6]) refers to the extent that the abstract information presented in one view is equivalent to the information presented in another, toward the intended task(s). High homogeneity is highly redundant, while medium homogeneity refers to cases where some combinations of views may have the same information, but this information may be excluded from other views. Low homogeneity refers to situations where information between views is primarily supplemental. Our presented method is notable in having only medium homogeneity in support of its task, and frequent appearance of incomplete sets.

Method                     | Modalities    | Tasks | Homogeneity | Incompleteness
Various Fusions [7]        | 2-5 RGB       | 1     | High        | None
Late Fusion [11]           | 4 IR + 1 RGB  | 1     | High        | None
Temporal Score Fusion [12] | 3 RGB         | 1     | High        | None
Late Fusion [13]           | 5 RGB         | 1     | Medium      | None
Slice Fusion [14]          | Depth Slices  | 1     | Low         | None
Ours                       | 4 IR          | 4     | Medium      | Frequent
To summarize our contributions, we (1) perform comparative analysis between single-view, ensemble voting-based, and late-fusion learning on data from four real-world, continuous-estimation safety tasks, using sensors operating with irregular redundancy, (2) provide a generalized formulation of the real-world, real-time multimodal problem such that our methods can be applied to similar tasks in both autonomous driving and other domains, and (3) evaluate the performance of these models with respect to human-centric safety systems by examining task performance on human drivers outside of the training datasets.
Sensor Fusion
Sensor fusion describes integration of data from multiple sensor sources, like LiDAR or cameras, towards a task. In the intelligent vehicles domain, research in methods of combining output from multiple sensors to improve tasks in prediction and estimation is well-established. Chen et al. designed a Multi-View 3D Network (MV3D) that fuses LiDAR point cloud and RGB image data to perform 3D object detection in autonomous driving scenarios [15]. Their deep fusion of camera and LiDAR data uses FractalNet, a CNN architecture that is an alternative to other state-of-the-art CNNs like ResNet [16]. Similarly, Liang et al. fuse LiDAR and image feature maps using a continuous convolution fusion layer [17]. This fusion process creates a bird's-eye-view (BEV) feature map that is fed into a 3D object detection model. Pointpainting is another prominent example of a fusion process of LiDAR and image data [18]. Pointpainting takes image data and performs semantic segmentation to compactly summarize the features of the image. To fuse the LiDAR and image data, the LiDAR data is projected onto the semantic segmentation output. In all these methods, the sensors are LiDARs and cameras. There is a different class of work which aims to do sensor fusion using the same modality or sensor type, but with data collected from different sensors or sensor views, like [19], which combines LiDAR point clouds from the bird's-eye view and perspective view to learn fused features. In our work, we learn image features fused from different camera views. The features can be combined at different stages in the network, giving rise to different ensembles, as explained in the next section.
Ensemble Learning
In addition to fusion of output from N sensors to reduce uncertainty of the observed information, we also explore ways that the models learned from the input of these N sensors can share information during the learning process, such that the collective ensemble is optimized to the task.
We present here Sagi and Rokach's survey definition of Ensemble Learning: "Ensemble learning is an umbrella term for methods that combine multiple inducers to make a decision,...The main premise of ensemble learning is that by combining multiple models, the errors of a single inducer will likely be compensated by other inducers, and as a result, the overall prediction performance of the ensemble would be better than that of a single inducer." [20] It is worth noting that there are a variety of methods for generating such ensembles [20]. For example, the high-level learning system may:
1. Vary the training data provided to each inducer (e.g., by sampling from a shared pool),
2. Vary the model architecture between inducers,
3. Vary the learning methodology [28] or hyper-parameters [29] between inducers, or
4. Vary some combination of the above between inducers.
In this research, we freeze the model architecture, learning methodology, and hyperparameters; we vary only the training data provided to the inducers. However, in this case, the training data is not sampled or refactored from some shared pool; instead, each ensemble member has access to its own set of training data. These training data are not independent, though; the training data is unified as a collection per instance, where each member of the collection is a different representation of the same base observation (e.g. different cameras taking simultaneous photos of the same object).
From the ensemble of inducers, inductions can be combined and learned-from in a variety of ways:
Bayesian model averaging and combination
Bayesian Model Averaging (BMA) allows formation of predictions with many candidate models without losing information as an all-or-none selection would. Using Bayesian model averaging, the probability of a prediction y given training data D can be defined as

p(y \mid D) = \sum_{k=1}^{K} p(y \mid f_k, D) \, p(f_k \mid D),

where f_k is the prediction of the kth model. The posterior probabilities p(f_k \mid D) can be treated as weights w_k for each of the separate models since \sum_{k=1}^{K} p(f_k \mid D) = 1. Previously researched applications using these methods include weather forecasting [30], flood insurance rate maps [31], risk assessment in debris flow [32], and crop management [33].
As mentioned by Monteith et al., Bayesian Model Averaging can be thought of more as a model selection algorithm, as ultimately the importance of each model is determined by the posterior probability weight [34]. To develop an approach that is more inherent to ensemble learning, there are various strategies that can be used for model combination rather than model averaging. Bayesian model combination has found success in reinforcement learning by combining multiple expert models [35], speech recognition [36], and other tasks (notably functioning on non-probabilistic models and combinations of models observing different datasets [37]).
Voting, Weighted Majority Algorithm
In ensemble learning, it is crucial to learn the value of each individual model by assigning different weights to these models in order to increase performance. In this algorithm, weighted votes are collected from each model of the ensemble. Then, a class prediction is made based on which prediction has the highest vote. All models which made incorrect predictions will be discounted by a factor β where 0 < β < 1 [38]. Weighted majority algorithms have been used to combine model predictions to identify power quality disturbances for a hydrogen energy-based microgrid [39], calendar scheduling [40], profitability and pricing [41], and other applications. The weighted majority algorithm is used to combine the predictions from the expert models and see how effective each expert model is.
In addition to single-view results, we compare performance in our hand classification task under naive voting, Bayesian model combination, and weighted majority voting.
Machine Learning from Multiple Cameras
Seeland and Mäder thoroughly investigate image classification performance gains afforded by network fusion at different process levels (early, late, and score-based) when using multiple views of an object [7]. They apply their methodology to datasets comprising cars (shot from 5 views), plants (shot from 2 views), and ants (shot from 3 views). They find that late fusion provides the strongest performance gain for the car and plant datasets, and that an early fusion (slightly misnamed in this case, as it occurs at the final convolution) leads to a very marginal gain compared to late fusion on the ant dataset. In general, the authors' results support late fusion as the dominant methodology, with early fusion often leading to worse performance compared to baseline. In their research, each data instance is referred to as a "collection" (i.e. collection of N images). Critically, each collection analyzed is complete; that is, no view is missing from any given collection. This is where our problem framework and approach differ; as illustrated in Figure 2, we consider situations in which collections may be incomplete, and seek to learn correct labels despite missing data.
The late fusion approach for visual patterns has found success in multiple application domains, as presented in Table 1, and even in other domains such as cross-modal information retrieval [42] [43] [44] [45].
Shortcomings of Non-Camera Methods
At the time of writing, most commercial in-vehicle systems which monitor the driver's hands use pressure and torque sensors embedded in the steering wheel to detect the presence of a driver's hands. However, this method of sensing leaves multiple safety vulnerabilities:
1. Especially in the case of non-capacitive sensing, these sensors can be spoofed by placing weighted objects on the wheel, leading to recent fatal accidents.
2. Even when effective for determining whether the hands are on or off the wheel, these sensors still cannot distinguish between different hand activities taking place off of the wheel, and recognizing these activities is critical for estimating important metrics like driver readiness and take-over time. Hand locations and held objects imply hand activities, crucial to inferring a driver's state, and this information is lost when reduced to hands-on-wheel and hands-off-wheel.
Camera Methods
Camera-based methods of driver hand analysis allow for observation of the hands without steering wheel engagement. Past systems for classification of driver activity and identification of driver distraction use traditional machine learning approaches; for instance, Ohn-Bar et al. demonstrated systems that utilize both static and dynamic hand activity cues in order to classify activity in one of three regions [46], and that extract various hand cues in regions of interest and fuse them using an SVM classifier [47]. Borgi et al. use infrared steering wheel images to detect hands using a histogram-based algorithm [48]. More recent works expand on the aforementioned classifiers and utilize deep learning in order to identify and classify driver distraction in a more robust manner. Eraqi et al., among others, have developed systems that operate in real time to identify driver distraction via a CNN-based localization method [49]. Shahverdy et al. also use a CNN-based system in order to differentiate between driving styles (normal, aggressive, etc.) in order to alert the driver accordingly [50]. Building on this, Weyers et al. demonstrate a system for driver activity recognition based on analysis of key body points of the driver and a recurrent neural network [51], and Yang et al. further demonstrate a spatial- and temporal-stream based CNN to classify a driver's activity and the object/device causing driver distraction [52]. A comprehensive survey outlining current driver behavior analysis using in-vehicle cameras was done by Wang et al. [53].

Figure 2: While traditional learning problems (a) may seek to learn a model (gray) which makes a prediction (blue) from input (yellow), in the multi-modal setting (b), we seek to learn a model which makes a prediction from multiple inputs. However, in the case where a sensor fails, becomes occluded, or operates at a different rate, the input set goes from complete to incomplete. In this research, we explore techniques for dealing with such incomplete sets (c), important for systems which are relied upon for always-online output prediction.
Recent pose detection models provide another helpful tool in understanding the hands of the driver. As defined by Dang et al., 2D pose detection involves detecting important human body parts from images or videos [54]. Chen et al. describe three ways to define human poses: skeleton-based models, contour-based models, and 3D-based volume models [55]. Our research uses a skeleton-based model, which describes the human body by identifying the locations of body joints through 2D coordinates. A deep learning approach to pose detection through a skeleton-based model is to first detect the human location through object detection models like Faster-RCNN and then perform pose estimation on a cropped version of the human. Some successful approaches to pose detection include HRNet, which is successful for pose detection problems since it maintains high-resolution representations of the input image throughout a deep convolutional neural network [56]. Toshev and Szegedy perform this pose estimation by implementing the model DeepPose, which refines initial joint predictions via a Deep Neural Network regressor using higher resolution sub-images [57]. Yang et al. design a Pyramid Residual Model for pose estimation which learns convolutional filters on various scales from input features [58].
Though consumer and commercial vehicles have begun integrating inside-facing cameras for a variety of tasks, such as attention monitoring and distraction alerts, these methods are not without their own challenges. A single camera may be well-suited to a particular task, but different situations may call for different camera placements. While one view may be ideal for a particular task within design constraints, this view may sacrifice a complete view of a different driver aspect and may not offer redundancies if a camera is obstructed or blocked. For example, an ideal hand view (taken from above the driver) would not be suitable for assessing a driver's eyegaze, but a camera that can see the driver's eyes may also have at least a partial view of the driver's hands.
Safety and Advanced Driver Assistance Systems
Recent works in safety and advanced driver assistance systems utilize deep learning techniques in order to perform driver analysis. In particular, deep learning allows researchers to extract driver state information and determine if they are distracted through analyzing driver characteristics such as eye-gaze, hand activity, or posture [53]. Estimating driver readiness is another vital aspect of safe partial autonomy, and a key component to understanding driver readiness is hand activity, as a distracted driver often has their hands off the wheel or on other devices like a phone. Illustrated in Figure 4, Rangesh et al. [5] and Deo & Trivedi [59] show that driver hand activity is the most important component of models for prediction of driver readiness and takeover time, two metrics critical to safe control transitions in autonomous vehicles [60] [61]. Such driver-monitoring models take hand activity classes and held-object classes as input, among other components, as illustrated in Figure 3. These classes can be inferred from models such as HandyNet [62] and Part Affinity Fields [63], using individual frames of a single camera view as input. Critically, this view is taken to be above the driver, centered in the cabin and directed towards the lap, a typically unobstructed view of the hands.
Application of multi-view and multi-modal learning to safe, intelligent vehicles ([65], [66]) brings two benefits: increased flexibility in field-of-view for individual component cameras, and increased accuracy in classification for observable activity. Both benefits arise from the ability of the system to reason between views, allowing occluded or otherwise compromised images from one view to be substantiated by images from additional views in cooperation.

Figure 3: Following [64], logits of hand activity and location classes play a useful role in predicting a driver's readiness to take control of a vehicle.

Figure 4: In an ablative study, Rangesh et al. [5] show that various individual features and combinations of features associated with the hands, including hand region (HR), distance to wheel (DW), and held object (HO), are most informative to models for predicting cues associated with vehicle takeovers from automated to manual control. In the case of control transitions, these fractional-second gains are critical for a driver's reflexes to safety alerts.

Figure 5: The image preprocessing pipeline prior to learning involves four steps, carried out individually for each camera stream. First, the image is captured; then, the driver is detected and their pose extracted, allowing crops around the hands to be generated. In this example, because the left hand is not visible to the particular camera, the method of single imputation is used to replace the frame with a frame of zeros. We note that because the method uses only the image of the hands towards its learning, it is possible to anonymize the driver by blurring the face, as we have done in the above example, for the cropped frames that serve as model input.
Methods
The general hand activity inference stage is organized in four steps: multi-view capture, pose extraction, hand cropping, and CNN-based classification.
Feature Extraction: Pose and Hands
The inference stage is illustrated in Figure 5. Following data capture, we extract the pose of the driver in each frame, where "pose" is a collection of 2D keypoint coordinates associated with the driver's body, such as the wrists, elbows, shoulders, eyes, etc. This problem is broken into two steps: first, we must detect the driver in the frame, then detect the driver's pose. Each step requires its own neural network; for driver detection, we first use the Faster-RCNN [67] model with Feature Pyramid Networks [68], using a ResNet-50 backbone [69] to detect the driver. We note that this network will output any humans detected in the frame, so we apply a post-processing step (based on the camera view) to only include detections corresponding to the driver's seat. For joint detection, we employ HRNet [70], a robust top-down pose detection model, which predicts 2D coordinates of various points of the body such as the wrists, elbows, shoulders, eyes, etc. The results of driver and keypoint detection are illustrated in Figure 7.
Hyperparameter Selection: Hand Crop Dimensions
We crop images around each of the hands, centered at the wrist and extending 100 pixels in each direction. The width of the crop is a hyperparameter which can be changed to add or reduce spatial context. Only these hand crops are fed into the activity classification pipelines.

Figure 6: Classification pipeline. Following image capture, we perform image processing to detect the driver using Faster-RCNN with Feature Pyramid Networks (FPN) with a ResNet-50 backbone, extract the driver pose using HRNet, and crop the hands 100 px from the center of the wrist joints. In the inference stage, we utilize CNNs for classification, beginning from a pre-trained ResNet fine-tuned on our dataset. For the single-view model, we make direct inference, and for the multi-view models, we pass the logits to ensemble algorithms, or pass the CNN-output feature maps to a neural network for late fusion. In our experiments, we use Bayesian Combination and Weighted Majority Averaging as the ensemble learning algorithms, and late fusion via fully-connected neural network layers.
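A minimal sketch of the cropping step described above, assuming a hypothetical image array `frame` (H x W x C) and a `wrist` (x, y) keypoint from the pose model; the 100 px half-width matches the hyperparameter discussed above, and the zero-padded border handling is our assumption.

```python
import numpy as np


def crop_hand(frame, wrist, half_width=100):
    """Return a (2*half_width)-square patch centered on the wrist keypoint.

    The crop is clamped to the image border and zero-padded back to the full
    crop size, so every patch fed to the classifier has identical dimensions.
    """
    h, w = frame.shape[:2]
    x, y = int(wrist[0]), int(wrist[1])
    x0, x1 = max(0, x - half_width), min(w, x + half_width)
    y0, y1 = max(0, y - half_width), min(h, y + half_width)

    patch = np.zeros((2 * half_width, 2 * half_width) + frame.shape[2:], dtype=frame.dtype)
    oy, ox = y0 - (y - half_width), x0 - (x - half_width)   # offsets into the padded patch
    patch[oy:oy + (y1 - y0), ox:ox + (x1 - x0)] = frame[y0:y1, x0:x1]
    return patch
```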
Single-View Models
The cropped images from a particular camera are classified by two convolutional neural networks trained on images of that view. One network outputs probabilities that the hands are holding one of three objects: Phone, Beverage, Tablet; or holding nothing. The second network (identical in architecture to the first, except for number of classes) predicts the probability that the hand is in one of five hand location classes: Steering Wheel, Lap, Air, Radio, or Cupholder. The classes Radio and Cupholder are reserved for the right hand only. For single-view model evaluation, each hand is assigned the class of maximal probability.
In cases where there is no image available, the model is provided an image of proper dimension containing only the value 0. This is a variation of the method referred to as single-imputation [8], in which a single value is used to replace any instances of missing data. The intention behind this decision is that the network will learn a prior over the training data in situations when the view is occluded; that is, each time a blank image is presented, it infers that the sample should be classified in one of the typically occluded positions, with probability representative of the distribution of the training data.
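A sketch of this single-imputation rule under assumed names: `crops_by_view` maps each of the four view indices to a hand crop, or to None when no crop could be produced for that view; the crop shape is a placeholder.

```python
import numpy as np

CROP_SHAPE = (200, 200, 3)  # assumed crop dimensions: 2 * 100 px, channel count hypothetical


def build_collection(crops_by_view, num_views=4):
    """Assemble the N-view collection, imputing an all-zero image for missing views."""
    return [crops_by_view[v] if crops_by_view.get(v) is not None
            else np.zeros(CROP_SHAPE, dtype=np.uint8)
            for v in range(num_views)]
```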
Naive Voting
In the naive voting scheme, all four single-view models make a prediction using their respective image from a given collection, noting that up to N − 1 images may be blank. The prediction made by the ensemble is taken to be

\hat{y} = \arg\max_{i \in \{1, \dots, M\}} \sum_{j=1}^{N} p_{ij},

where M is the number of classes, N the number of models, and p_{ij} the probability of the ith class from the jth model. This method gives each model equal vote.

Figure 7: Prior to classifying driver hand activity, the system must detect the driver. We use Faster-RCNN to generate the bounding box shown in green. Following driver detection, we apply HRNet to identify the 2D pose skeleton, shown as keypoints and connecting lines on the driver's body.
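The equal-vote rule amounts to summing per-class probabilities across models and taking the argmax; a minimal sketch with a hypothetical probability matrix `p`:

```python
import numpy as np


def naive_vote(p):
    """p: (N_models, M_classes) array of per-view class probabilities
    (a blank view contributes that model's learned prior). Returns the class index."""
    return int(np.argmax(p.sum(axis=0)))
```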
Weighted Majority Voting
Using Weighted Majority Voting, we seek to combine the decisions of the 4 models weighted by a discount factor d_i. This discount factor is based on the number of mistakes m_i made by the model during validation; in the spirit of the weighted majority algorithm [38], each mistake discounts the model's weight by β, giving

d_i = \beta^{m_i}, \quad 0 < \beta < 1.

Then, each collection prediction is made using

\hat{y} = \arg\max_{c \in \{1, \dots, M\}} \sum_{i=1}^{N} d_i \, p_{ci}.
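A sketch of this weighted vote, under the assumption stated above that each validation mistake discounts a model's weight by β; `p` and `mistakes` are hypothetical inputs.

```python
import numpy as np


def weighted_majority_vote(p, mistakes, beta=0.9):
    """p: (N_models, M_classes) class probabilities; mistakes: validation error
    counts m_i per model. Weights d_i = beta ** m_i discount error-prone models."""
    d = beta ** np.asarray(mistakes, dtype=float)
    return int(np.argmax(d @ p))
```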
Bayesian Model Combination
Using Bayesian Model Combination, we combine the decisions of the 4 models weighted by a factor representing the likelihood of the particular model given the observed data, P_i ∼ p(f_i | D). In cases where the hands are not detected in a certain view i, we consider model f_i to have low likelihood and therefore set P_i to zero. If n models have P_i set to zero, then the P_i of the remaining models is distributed uniformly as

P_i = \frac{1}{N - n},

where N is the total number of views.
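A sketch of this combination rule, with a hypothetical boolean mask `hand_detected` indicating in which views the hand was found:

```python
import numpy as np


def bayesian_combination_vote(p, hand_detected):
    """p: (N_models, M_classes); hand_detected: boolean per view. Views without a
    detected hand get weight zero; the rest share uniform weight 1 / (N - n)."""
    weights = np.asarray(hand_detected, dtype=float)
    if weights.sum() == 0:          # no view saw the hand: fall back to equal weights
        weights[:] = 1.0
    weights /= weights.sum()
    return int(np.argmax(weights @ p))
```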
Multi-view Late Fusion
For the late fusion scheme, we use a neural network architecture composed of four parallel sets of convolutional layers (ResNet-50 backbones), which act on each of the four image views. Following the convolutional layers, each parallel track is fed to its own fully-connected layer of 512 nodes (followed by a ReLU activation). These layers are joined together by a fully-connected layer with 2048 nodes (followed by a softmax activation); this is the point of fusion, where the features extracted from the four views are combined and the relationships between the multiple views are learned.
We call this late fusion as it is done at the penultimate layer, late in the pipeline. This was done to make sure the fusion could leverage high level features present deeper in the pipeline. We use two fused models as before; one which outputs probabilities that the hand is holding one of the 3 objects and another which outputs the location of the hand. The maximal probability class is chosen as the classification output.
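A sketch of one possible reading of this architecture in PyTorch, using torchvision ResNet-50 backbones; the exact head sizes and the placement of the softmax (here deferred to the loss) are assumptions where the description is ambiguous, and missing views enter as zero images exactly as in the single-view case.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights


class LateFusionNet(nn.Module):
    """Four parallel ResNet-50 tracks fused at the penultimate layer (sketch)."""

    def __init__(self, num_classes, num_views=4):
        super().__init__()
        self.backbones = nn.ModuleList()
        self.heads = nn.ModuleList()
        for _ in range(num_views):
            backbone = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)
            backbone.fc = nn.Identity()                  # expose the 2048-d feature vector
            self.backbones.append(backbone)
            self.heads.append(nn.Sequential(nn.Linear(2048, 512), nn.ReLU()))
        # Point of fusion: concatenated per-view features pass through a joint layer.
        self.fusion = nn.Sequential(nn.Linear(512 * num_views, 2048), nn.ReLU())
        self.classifier = nn.Linear(2048, num_classes)   # softmax applied in the loss

    def forward(self, views):
        # views: list of num_views tensors of shape (B, 3, H, W); missing views are zero images
        feats = [head(bb(v)) for bb, head, v in zip(self.backbones, self.heads, views)]
        return self.classifier(self.fusion(torch.cat(feats, dim=1)))
```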
Comparison of Single-View and Ensemble Techniques
Using four cameras, we collect a dataset of 19 subjects engaged in various hand placements and object-related activities.
Altogether, we collect approximately 81,000 frames corresponding to hand zone activity, and 128,000 frames corresponding to held object activity. We divide these into training, validation, and test sets using approximately 80%, 10%, and 10% of the data respectively (with marginal differences to account for dropped frames). The distribution of the data between views is shown in Figure 8. We note that the challenges of selecting camera views for this task are readily apparent in the proportions of the data; one camera view (rearview) has significantly fewer frames where the pose is reliably estimated, while the steering wheel view has many. However, the availability of frames does not necessarily correspond to the ability of that view to be informative to the task at hand, nor to its generalizability to other tasks in the autonomous driving domain.
Using this data, we trained the above-described neural networks for hand location and held object classification into the defined zones (3 location zones for the left hand, 5 location zones for the right hand, and 4 held objects [including null] for each hand).
Single-View Models
We evaluate the four single-view models on images from the test set, including blank images when no image is available. From this, we compute an average model accuracy by taking the average of each per-class accuracy for each of the four tasks (left hand location, right hand location, left hand held object, right hand held object). For each task, we report the performance of the best-performing model, the worst-performing model, and the average across the four models. This highlights foremost the importance of camera view selection for this particular task, but also provides a point of comparison to see how the ensemble learning and fusion methods may enhance the overall performance of the models to their task. Results are provided in Table 6.
Ensemble Methods: Naive Voting, Weighted Majority Voting, Bayesian Model Combination, and Multi-View Late Fusion
We evaluate the four methods described in the Methods section, as well as an additional method which employs both Weighted Majority Voting and Bayesian Model Combination simultaneously. We evaluate the performance of these models on two different sets: first, only on collections which have all N images available, and second, on collections with any number of images (1 to N) available. Results are provided in Tables 4 and 5.
Importantly, only a very small fraction (less than 3%) of each of our task test sets consists of complete collections, as shown in Table 3. In fact, some task classes are never simultaneously observed from all views, so the results in Table 4 indicate performance on a limited number of classes from the actual task at hand, and at that, only for complete collections! While the models may be great at making inference when they have a clear view of the object of interest, this suggests a significant performance gap for a safety system expected to make continuous inference across all classes, not just inference when data is complete. By contrast, Table 5 represents performance across every sample of the test set. We include both tables to illustrate the point that while the voting-based methods begin to fail, the late fusion method performs just as well even when data is missing from a collection. Our original question was: can these methods overcome situations where data is missing from a collection? Table 5 provides our answer. When data is missing, the voting-based methods struggle significantly due to the falsely-placed confidence given to the model output. The largest challenge with these approaches is recognizing which view is dominantly correct in a particular situation and leveraging that view appropriately; otherwise, too much weight may be given to a model which has false confidence, and a model's vote may be a reflection of the intrinsic difficulty of that particular view. Able to better leverage information between views, the best performance comes from the multi-view late fusion approach. The late fusion model both (1) maintains near-perfect performance on the four tasks, even when 1 to N − 1 frames are missing from the collection, and (2) exceeds the performance of all single-view cameras for each task. These two results suggest that the late-fusion model is successfully learning complementary information that is unavailable in a single view; that is, the model is effective in combining different sources of information to make a better-informed prediction on the task. Additionally, it is able to do so despite missing data, suggesting that the model has learned to leverage remaining sources of information when frames are dropped. In prior work, Greer et al. [71] show that multi-view late fusion models give superior results over single-view models because the network can learn from more perspectives. Late fusion is particularly effective as all the camera views have high-level, richer features deeper in the pipeline.

Table 4: Classification accuracies (averaged across all classes) of different ensemble methods on four hand classification tasks, evaluated only when all N views are available. In this "complete-view-only" test set, 2 classes from left hand location, 4 from right hand location, and 1 each from left and right hand held object are completely unrepresented. Performance on the held object tasks may be poor due to the uncertainty in less-informative views bringing down the overall confidence of the system towards the correct class (or artificially raising confidence in the incorrect class). Naive voting may outperform weighted majority voting when challenging examples found in the validation set are unrepresented in this test set, thereby discounting models which would otherwise be "correct". This table also serves to illustrate how often frames are missing in these tasks, demonstrating the importance of a method which is robust to missing data.
The multi-view late fusion model was successful in classifying zones and objects when the training and test subjects were the same. But in real-world scenarios, models need to generalize to unseen subjects. We elaborate on our approach for evaluating performance on cross-subject classifications in the next section, and provide recommendations for such systems in the following discussion.
Multiple Subject Validation: Generalizing to Unseen Drivers
In the first set of experiments, we show that multi-view late fusion models give superior results over single-view models because the network can learn from more perspectives. Late fusion is particularly effective as all the camera views have high-level, richer features deeper in the pipeline. The multi-view late fusion model was successful in classifying zones and objects when the training and test subjects were the same. But in real-world scenarios, models need to generalize to unseen subjects. Here, we evaluate performance on cross-subject classifications, and provide recommendations for such systems in the following discussion.
Greer et al. [71] evaluate the late-fusion model performance on a substantial set of test data derived from the same capture system and subjects as the training and validation data, but in intelligent vehicle applications, it may be impractical to collect training data on each individual driver. An ideal model would generalize to all drivers that may use the vehicle.
A typical risk in end-to-end learning on overparameterized systems involving human subjects is that such a deep neural network is not typically "explainable" [72] [42]. When the model learns from humans, it can overfit to particular features associated with an individual subject, rather than learning actual patterns of interest (e.g. the model becomes really good at learning how to recognize Subject A's hand holding Subject A's cell phone, rather than a more general prototype of any hand holding any cell phone).
Machine learning models are commonly evaluated using k-fold cross validation, but this evaluation has shortcomings when data from the same subjects are contained in both train and validation sets, since (as described above) the model can overfit to the subject's unique signature instead of the latent activity. Accordingly, techniques of subject cross validation are preferred [73]. In typical k-fold cross validation, data is divided into k sets, and each of these k sets have a turn being left out of the training process (used only for evaluation). The summary statistics to describe the goodness of the model is then the average model performance on the k validation sets.
In our case, we utilize a dataset of 19 subjects. Here, we discuss evaluation choices made on splitting the data. We first constrain evaluation such that any subject being used in validation is unseen during training. We note that early stopping is controlled by a subset of the training data to prevent significant overfitting; while it may be beneficial to let yet another unseen subject (or subjects) determine the training stop-point, this introduces the bias of model performance to that particular driver (or drivers), which will not necessarily translate to performance on the unseen evaluation driver.
We use varying values of k on each task to handle computational constraints, rotating a left-out subject from each of the k model trainings for each task. For each model, we take the average accuracy among all of the classification categories (the so-called macro-averaged precision), and then average this value among the k models. We evaluate using k = 8 for the left and right hand location tasks, k = 17 for the left hand held object task, and k = 13 for the right hand held object task.
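The evaluation loop can be sketched as follows; `train_model`, `evaluate_per_class`, and the subject-keyed data split are hypothetical placeholders standing in for our training and evaluation code.

```python
import numpy as np


def cross_subject_score(subjects, data_by_subject, train_model, evaluate_per_class, k):
    """Train k models, each excluding one subject, and evaluate on that unseen subject."""
    scores = []
    for held_out in subjects[:k]:
        train_data = [d for s in subjects if s != held_out for d in data_by_subject[s]]
        model = train_model(train_data)                   # early stopping on a training subset
        per_class_acc = evaluate_per_class(model, data_by_subject[held_out])
        scores.append(np.mean(per_class_acc))             # macro average over classes
    return float(np.mean(scores)), float(np.var(scores))
```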
We report this averaged performance for each of the four single camera views as well as the late-fusion multiview model, in Table 6.
We first note that, with the exception of the rearview-mounted camera on two left hand tasks, single-view models do not seem to generalize well across subjects; classification on the unseen subject tends to collapse to a few classes, likely due to overfit to a nearest-neighbor image in the training data. Practically speaking, this would suggest that models for driver state estimation which rely on a single camera would indeed benefit from fine-tuning on data from the driver of interest; we know that the model can train to near-perfect accuracy on data it has seen, it's the generalizability that causes the issue.

Table 6: Classification accuracies (averaged across all classes) of single-camera and late-fusion multiview models on four hand activity tasks. Evaluations are averaged over 19 models in which the evaluated subject is unseen during training, with mean and variance provided. View 1 is taken from the dashboard center, view 2 from the dashboard facing the driver, view 3 from the steering wheel, and view 4 from the rearview mirror.

Now, to our primary question: can late-fusion multiview models overcome the generalizability challenge? Our results suggest that the late-fusion multiview model does outperform the best of the single-view models for unseen drivers on right-hand related tasks, though the rearview-mirror placed camera is excellent at left-hand related tasks. The late-fusion multiview model exceeds the average between the four single cameras on every task.
• For the left hand location task, the late-fusion multiview model is 9.6% less accurate than the best-performing rearview model, but 33% more accurate than the average across camera views.
• For the right hand location task, the late-fusion multiview model is 10.8% more accurate than the best-performing rearview model, and 45% more accurate than the average across camera views.
• For the left hand held object task, the late-fusion multiview model is 6% less accurate than the best-performing rearview model, but 30% more accurate than the average across camera views.
• For the right hand held object task, the late-fusion multiview model is 4.3% more accurate than the best-performing dashboard-center-view model, and 15% more accurate than the average across camera views.
Discussion and Concluding Remarks
System designers often have an interest in selecting the optimal number (and placement) of sensors for a given task, and in this research, we explored methods of leveraging the irregular redundancy of multiple sensors observing the same scene. While one camera placed expertly may be sufficient at a single task (say, observing the hands), there are many other tasks relevant to safe driving, such as estimating eye-gaze, passenger seating occupation and positioning, and distraction identification.
What this framework contributes is a method of making stronger inference when information is missing from one source, and the system can recognize and leverage the fact the information is missing to then make better use of information in other sources. In fact, missing information often informs the other models; if a hand is not visible to one camera, then it is more likely to be within the view of another. Further considerations for enhanced accuracy include exploring weighting schemes for weighted majority voting, hyperparameter sweeps for crop sizes and model architecture, and model likelihood estimation for Bayesian model averaging.
Late-fusion approaches which use our method of replacement of missing data with a zero-placeholder may effectively learn a prior distribution given a missed reading from a sensor or camera. This is particularly relevant in cases where multiple perspectives are necessary for complete observation, or when multimodal systems are used which sample at different rates. We see plenty of examples of this in existing technology; many phones and laptop computers use both RGB and IR cameras for securely identifying the user, and thermal cameras are often used as an additional modality for medical applications, but cameras operating on different spectra (or media) typically operate at different rates.
No camera perfectly captures an event, but by using ensemble learning and fusion, safety systems (where every inch of accuracy counts) may exploit the benefits in redundancy and completeness of multi-view or multi-modal observations.
Efficient systems of multiple models
In this research, we evaluate performance of four models related to the driver's hands: two for held object (one for each hand), and two for hand location (one for each hand). While each model may function independently, cascading the models gives a better idea of a driver's current activity. For example, an image of the hand does not necessarily need to pass through both a held-object and hand-location model. In applications, the models can be cascaded such that first a held object can be determined, and if it is the case that no object is held, the image is passed forward to the location classification module. In fact, some applications may successfully "short-circuit" for efficiency depending on their use; a left-hand holding a cell phone may be sufficient to send an advisory without necessary inference on its location nor the right hand's activity.
Of further note, the system bottleneck most strongly occurs at the level of 2D pose estimation. To review, the model first detects the driver, then estimates the driver's pose, and from this pose classifies smaller regions pertaining to the hands (or, for other applications, eyes and other keypoints of interest). Fortunately, a system will only need to pass through this bottleneck once per inference time, since the remaining downstream models all utilize the same predicted pose information (and are much less computationally expensive). Because this system is modular, continued research from the computer vision community on efficient 2D pose estimation will translate directly and smoothly to performance gains in such human driver analysis systems.
Further research may benefit from an analysis of the Vision Transformer architecture for this problem, since the Transformer is particularly adept at selecting which features should be attended to. However, the Transformer is notably computationally expensive, so any performance gains must be balanced with increased inference delays to meet application requirements. The application of attention maps for (near) human-explainable reasoning from multimodal streams is considered an open challenge within multimodal learning [6], and these sample tasks and irregularly redundant data sources may be a strong candidate for future experiments.
Onboard vs. Cloud Processing
There is strong interest in moving compute from onboard processing toward cloud-based computing of driver monitoring data, but of key concern for consumer support and adoption of such data schemes is the preservation of driver privacy. To this end, we highlight that our presented framework allows for extraction of particular features (rather than complete images), which then allows for the anonymization via blurring or pixel value adjustment of the driver's face or similar privacy-sensitive content before sharing toward network computers, since these components are unused in model training and inference.
System design recommendations from experimental results
Our results lead us to the following system design recommendations for applications involving camera-based driver state estimation:
• When possible, collect data and finetune models using the driver of interest. Generalizability is a difficult task since the real world may violate the i.i.d. assumptions that allow for excellent performance from neural networks. Unseen data may not come from the same distribution as prior training; the simpler case is to fit the model to data that most closely matches the expected distribution (i.e. images of the intended driver).
• If design constraints allow, opt for multiple cameras observing the driver to leverage complementary information between views, alternative views of occluded zones, and redundant information to provide improved accuracy and generalizability.
• If restricted to a single camera view, an overhead view from a camera placed near the rearview mirror may be optimal. If unavailable, a view facing the driver from behind the steering wheel may provide the best performance on estimating whether the driver's hands are on the wheel or elsewhere. However, this view is less well-suited to infer what the driver is doing with their hands if off-wheel; for this, a camera view facing the cabin from the rearview area or dashboard is better suited. The selected view should be informed by the intended application use case.
• While outside the scope of this research, we encourage applications implementing this framework to explore different hyperparameter values for the crop size around the hands (or other features of interest); a sketch of such a sweep follows this list. Differences in camera distance affect how much of the hand (and surrounding context) is visible within a particular crop size, and it may be worthwhile to vary these sizes for specific use cases depending on the objects and locations of interest.
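A simple way to explore the crop-size hyperparameter mentioned in the last recommendation is to sweep a fixed classifier over a validation set at several sizes, as in the hedged sketch below; the data format, cropping function, and candidate sizes are assumptions.

```python
def sweep_crop_sizes(samples, classifier, crop_fn, sizes=(64, 96, 128, 160)):
    """`samples` is an iterable of (image, hand_keypoint, label) tuples;
    `crop_fn(image, keypoint, size)` returns a crop of the requested size.
    Returns validation accuracy per candidate crop size."""
    scores = {}
    for size in sizes:
        correct, total = 0, 0
        for image, keypoint, label in samples:
            correct += int(classifier(crop_fn(image, keypoint, size)) == label)
            total += 1
        scores[size] = correct / max(total, 1)
    return scores   # e.g. {64: 0.81, 96: 0.86, 128: 0.84, 160: 0.79} (illustrative)
```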
Post-processing considerations for downstream applications
Systems that seek to reliably estimate the state of the driver's hands (or similar driver attributes) will have to apply robust thresholds and denoising techniques to distinguish between genuine distractions and momentary lapses in attention.
Filtering
We suggest low-pass filtering to reduce the effects of noisy patterns from inference (that is, small "blips" between classes lasting fractions of a second). This yields a steadier prediction by averaging over moving windows of time, where the window size is a hyperparameter that can be tuned based on the observed duration of a typical inference mistake made by the network.
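For categorical predictions, a majority vote over a sliding window acts as the low-pass filter described above; the window length below is an illustrative default, not a recommended value.

```python
from collections import Counter, deque


class PredictionSmoother:
    """Majority vote over the most recent `window` per-frame class predictions."""

    def __init__(self, window=15):     # e.g. ~0.5 s of history at 30 frames per second
        self.buffer = deque(maxlen=window)

    def update(self, prediction):
        self.buffer.append(prediction)
        return Counter(self.buffer).most_common(1)[0][0]
```

A caller would simply pass each raw per-frame prediction through `update` and act on the smoothed value it returns.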
Thresholding
We also suggest a thresholding step to distinguish between momentary lapses of attention (such as a driver quickly reaching for an object) and a prolonged period of distraction, which warrants an alert. The permissible interval of sustained distraction is another hyperparameter that should be tuned according to the goals of the automaker or driver policy.
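One minimal realization of this thresholding step is to require that the distracted state persist for a configurable number of seconds before raising a flag; the state labels and the three-second default below are assumptions.

```python
def sustained_distraction(timestamps, states, min_duration=3.0):
    """Return True only if the 'distracted' state persists for at least
    `min_duration` seconds without interruption."""
    start = None
    for t, state in zip(timestamps, states):
        if state == "distracted":
            if start is None:
                start = t
            if t - start >= min_duration:
                return True
        else:
            start = None
    return False
```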
Alerting
If it is decided that the driver may be distracted, the system can then issue a standard request for driver attentiveness, or employ other downstream safety mechanisms. It is recommended that the alert system employs a method of alerting aligned to the standards of human-machine interface research; these techniques are outside the scope of this research, but we emphasize that this is a modular endpiece, and the presented framework can be applied for any downstream alerting mechanism.
Additional applications
The hand activity framework requires multi-view capture, driver detection and pose estimation in its upstream steps. These tasks can be applied to several additional safety-critical scenarios. As the driver detection step detects all individuals in the car, it can be used to estimate seat occupancy or passenger positioning. The cameras also capture the driver's gaze, which can provide another signal of driver attentiveness. The pose estimation module can provide data for studying safe airbag deployment in crashes. We believe demonstrating the effectiveness of multi-camera hand-distraction detection can lead to further research in these applications to create holistic, robust, end-to-end systems for driver safety.
There are many further layers of analysis to problems of irregular redundancy; in this research, we move beyond complete sets to emphasize approaches which are applicable to incomplete sets. Future work should incorporate temporal dynamics into this analysis, towards making systems which show even stronger generality to new subjects. However, this temporal dependency differs from that described in [6], where the goal is "to accumulate multimodal information across time so that long-range cross-modal interactions can be captured through storage and retrieval from memory"; rather, we seek to retain short-range information from the collective representation, such that iterative predictions are consistent with prior predicted states. There is further promise in the ability of ensemble techniques to generalize to an entirely different (and relevant) class of what is "unseen": in addition to generalizing to new subjects, domain-adaptive ensemble methods have also been shown to be effective learners on entirely new views [74], making them highly appropriate for driver monitoring tasks, where the same views may not be guaranteed between vehicle designs.
We conclude that the late fusion technique is a strong baseline toward problems where multiple data streams, possibly under noise and dropped instances, are sampled simultaneously for continuous task inference.
Sire Distribution of Calves in a Beef Herd with Use of Fixed Time Artificial Insemination Followed by Immediate Bull Exposure for Natural Service in Cows and Heifers
Abstract
Use of fixed time (FT) artificial insemination (AI) followed by immediate exposure of females to bulls for natural service can be a useful management strategy for commercial cow-calf producers to limit labor and time related to bull turnout and increase pregnancy rates earlier in the breeding season. Considering the influence of bull fertility and the time to and length of estrus in females, expectations for outcomes in natural service sire versus AI sire parentage are relatively unknown. Our objective was to determine the relative percentages of calves sired by either a natural service or FTAI sire within the same estrous period. In two consecutive years, heifers and cows were synchronized and inseminated using the 7-day CO-Synch + controlled internal drug release (CIDR) FTAI protocol. All females were inseminated by one AI technician using one sire for heifers and a different sire for cows. Females were exposed to natural service bulls immediately after insemination. After calving, DNA was collected from a random subset of calves born in the first 21 days of the calving season for parentage analysis (calves born from heifers in Year 1 = 59 and in Year 2 = 82; calves born from cows in Year 1 = 89, Year 2 = 102). The percentage of calves sired by AI and natural service was determined following parentage verification. In Year 1, for calves born from heifers in the first 21 days of the calving season, 5.1% (n = 3/59) were sired by natural service. For calves born from cows, 14.6% (n = 13/89) were sired by natural service. In Year 2, for calves born from heifers, 9.8% (n = 8/82) were sired by natural service, whereas 20.6% of calves born from cows (n = 21/102) were sired by natural service. If commercial producers use FTAI followed by immediate bull exposure, the proportion of calves sired by natural service bulls may be greater in cows than in heifers.
Introduction
Development of fixed time artificial insemination (FTAI) protocols has provided beef producers with tools to harness genetic improvement benefits from the use of AI sires and economic benefits from cows calving earlier in the subsequent calving season, while eliminating the need for estrus detection.
Fixed time AI followed by immediate exposure of females to bulls for natural service can be a beneficial management strategy for cow-calf producers. It has the potential to limit labor and time related to bull turnout, as well as to increase the proportion of females becoming pregnant early in the breeding season. When natural service sires are exposed to females immediately after FTAI, potential variations in bull fertility, time to estrus onset, and length of estrus in females will likely influence whether the female conceives to the AI sire or the natural service sire. Expectations for outcomes in natural service sire versus AI sire parentage are relatively unknown. Our objective was to determine the relative percentages of calves sired by either natural service sires or FTAI sires within the same estrous period when natural service sires are exposed to females immediately after FTAI.
Experimental Procedures
During the spring breeding seasons in two consecutive years at a ranch in Kansas, commercial Angus cows and heifers from a single producer were part of an FTAI program then immediately were exposed to bulls for natural service. In Year 1, cows ranged from two to five years of age and averaged 2.6 years. In Year 2, cows ranged from two to six years of age and averaged 3.2 years of age. Heifers were approximately 15 months of age at insemination in both years.
In both years, cows and heifers were synchronized using the 7-day CO-Synch + CIDR FTAI protocol. Heifers were artificially inseminated approximately 52-56 hours, and cows approximately 60-66 hours, following the removal of the CIDR and prostaglandin injections. The same AI technician inseminated all females in both years. The FTAI procedure used a single Angus sire for heifers and a different single Angus sire for cows. Different Angus AI sires were used in Year 1 and Year 2, but a single sire was used within a single year for both heifers and cows. All females were exposed to natural service sires immediately following insemination.
All natural service sires passed a breeding soundness exam before exposure to females. Natural service sires ranged in age from one to five years. Bull to female ratios were kept between 1:30 and 1:15 based on sire ages. Natural service bulls remained with the females for the 90-day breeding season. Pregnancy detection was performed via rectal palpation 60 days after bull removal, and all open cows and heifers were culled.
At calving, all calves born in the first 21 days of the calving season were weighed, tagged, and any color markings recorded. Calves born in these first 21 days received a tag with a different color for ease of identification at DNA collection.
To determine the proportion of calves sired by the FTAI sire and natural service sires, blood was collected from a subset of calves born in the first 21 days of the calving season (Year 1: calves born from heifers n = 59; calves born from cows n = 89. Year 2: calves born from heifers n = 82; calves born from cows n = 102). SeekSire (Neogen) parentage testing was used to determine percentage of calves from this subset born in the first 21 days of the calving season that were sired by AI or natural service bulls.
Results and Discussion
In Year 1, among calves born from heifers, the actual percentage sired by natural service was 5.1% (n = 3/59). Among calves born from cows, the actual percentage sired by natural service was 14.6% (n = 13/89). In Year 2, among calves born from heifers, the actual percentage sired by natural service was 9.8% (n = 8/82). Among calves born from cows, the actual percentage sired by natural service was 20.6% (n = 21/102). The percentage of calves born from natural service sires in Year 1 was less than in Year 2 for both cows and heifers (Figure 1 and Figure 2). Although natural service sires sired varying percentages of calves, it is unknown whether those calves represented additional pregnancies early in the breeding season or whether other factors influenced fertilization and resulted in fewer AI-sired calves. Other literature has shown increased pregnancy rates with FTAI along with immediate exposure to natural service sires in heifers when compared to natural service alone (Kasimanickam et al., 2021), and increases in conception rate with AI compared to natural service in indigenous cows (Washaya et al., 2019). These studies, however, did not assess parentage of calves to determine if they were sired by an AI sire or a natural service sire. Similarly, Gutierrez et al. (2014) demonstrated increased breeding season pregnancy rates in heifers when using AI and natural service sires compared to natural service sires alone, and Sa Filho et al. (2013) demonstrated this same concept in cows.
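As a quick check on the arithmetic, the reported percentages follow directly from the parentage counts above; the snippet below simply recomputes them.

```python
# Natural-service-sired calves out of calves sampled in the first 21 days of calving.
counts = {
    ("Year 1", "heifers"): (3, 59),
    ("Year 1", "cows"): (13, 89),
    ("Year 2", "heifers"): (8, 82),
    ("Year 2", "cows"): (21, 102),
}
for (year, group), (natural, total) in counts.items():
    print(f"{year}, {group}: {100 * natural / total:.1f}% natural-service sired")
# Prints 5.1%, 14.6%, 9.8%, and 20.6%, matching the values reported in the text.
```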
Implications
If commercial producers use FTAI followed by immediate bull exposure in heifers, natural service sires may sire 5 to 10% of calves born early in the calving season. In cows, producers may expect 15 to 20% of calves born early in the calving season to be sired by natural service. These data provide estimates of the parentage of calves from AI and natural service sires with use of FTAI followed by immediate bull exposure. This strategy can reduce time and additional steps related to bull turnout and increase pregnancies earlier in the breeding season.

Figure 1: Comparison of the percentage of calves sired by AI sires versus natural service sires born to heifers bred following a 7-day CO-Synch + CIDR fixed time AI protocol and immediate bull exposure in a two-year study.

Figure 2: Comparison of the percentage of calves sired by AI sires versus natural service sires born to cows bred following a 7-day CO-Synch + CIDR fixed time AI protocol and immediate bull exposure in a two-year study.
Acute Liver Failure Induced by Joss Paper Ingestion.
We present a case of liver failure secondary to ingestion of Joss paper. A 44-year-old female initially presented with fever, nausea and vomiting and was subsequently diagnosed with acute liver failure. Prior to presentation she had consumed 1.3 grams of acetaminophen and 800 mg of ibuprofen. Her acetaminophen level was 18 mcg/mL initially and on repeat check was <10 mcg/mL, and all viral hepatitis antibodies and antigens were negative. History revealed that the patient ingested a ceremonial paper, Joss paper, daily, which is typically painted with heavy metals. Her mercury level was subsequently found to be elevated at 12 ug/L. Mercury can cause depletion of glutathione (GSH) through production of reactive oxygen species. Acetaminophen metabolism requires sufficient GSH to bind to a reactive metabolite to prevent cell death and hepatic injury. Daily exposure to the mercury present in the Joss paper likely accumulated in our patient's body and allowed hepatic injury from even therapeutic doses of acetaminophen.
Introduction
Mercury has been recognized by the World Health Organization as a major public health concern. Exposure to mercury through ingestion, inhalation or physical contact can lead to neurologic and somatic symptoms [1]. While there have been some observational studies relating elevated liver enzyme levels and levels of mercury in the body, there are no known documented cases of fulminant liver failure in an adult attributed to mercury ingestion.
Acute liver failure has been attributed to a variety of causes, many of them drugs and toxins. In the United States, almost 50% of liver failure is a result of acetaminophen overdose. It is generally accepted that therapeutic doses of acetaminophen do not typically result in liver failure. Here, we report a case of fulminant liver failure in a patient with elevated mercury levels, which likely potentiated acetaminophen hepatotoxicity [2].
Case Presentation
A 44-year-old Vietnamese woman initially presented to her primary care physician with nausea, vomiting, myalgia and a fever of 102°F. She had a medical history of hypertension and hypothyroidism and was prescribed hydrochlorothiazide and levothyroxine. The patient reported that she had taken 6 tablets of acetaminophen 325 mg (1.3 grams total) and 2 ibuprofen tablets (800 mg total). Two days prior to presentation, she had consumed tuna sushi and drunk sake. She denied illicit drug use or tobacco use but did occasionally drink alcohol. She worked as an accountant and had last travelled to Vietnam 1 year prior. At her primary care physician's office, she was noted to be hypotensive and was sent to the hospital emergency room. On arrival at the hospital, the patient was awake and fully oriented, with mild diffuse abdominal tenderness.
Investigations
On admission, the patient's AST was 7565, ALT 4891, total bilirubin 3.9, alkaline phosphatase 77, INR 1.8, and creatinine 1.7. Her acetaminophen level was 18 ug/mL, and salicylate and ethanol levels were both <10. Urine toxicology was negative. She was then transferred to a tertiary care centre intensive care unit for further management of presumed acute liver failure.
Upon transfer, the patient's AST was 6556, ALT 3335, total bilirubin 4.3, alkaline phosphatase 7, INR 2.9, creatinine 5.1, ammonia 176, and lactate 4.7. All viral serology including hepatitis A, hepatitis B, hepatitis C, human immunodeficiency virus, Epstein-Barr virus, adenovirus, and influenza returned negative. ANA, Anti-smAb and Anti-mAb were negative. Serum and urine copper and ceruloplasmin were within normal limits. Ultrasound guided liver biopsy revealed macro vesicular steatosis.
Family had suspected foul play on the part of the common-law husband, and on further inquiry it was revealed that the patient burned and ingested a sacred ceremonial paper called Joss paper as tea as part of a daily worship ritual. Joss paper, traditionally made of bamboo paper or rice paper, is painted with metal foil or with ink seals of various sizes. A sample of the Joss paper she used was analysed, which detected mercury at 0.02 mcg/g along with traces of copper, arsenic and iron.
Treatment
The patient was admitted to the intensive care unit, treated with N-acetylcysteine despite the low acetaminophen level, and started on broad-spectrum antibiotics as well as IV fluids. However, despite treatment she developed encephalopathy and altered mental status requiring intubation and mechanical ventilation for airway protection. She was given lactulose for hepatic encephalopathy but continued to decline.
Outcome and Follow-up
While in the ICU, the patient developed seizures, requiring several doses of lorazepam and a continuous propofol infusion for treatment. She was evaluated by the liver transplant team and was initially listed for transplant. However, she then became unresponsive despite discontinuation of sedatives. CT scan of the brain showed evidence of brain herniation, and her exam was notable for loss of brainstem reflexes. An intracranial pressure monitor was placed, with the initial pressure elevated at 90 mmHg. The patient was subsequently pronounced brain dead. The patient became a medical examiner case as per family wishes. Liver sections from autopsy were stained for iron, which was negative, and showed steatosis, inflammation and necrosis (Figure 1). Post-mortem examination concluded that the cause of death was liver failure due to a combination of acetaminophen use with concomitant mercury ingestion.
Discussion
Mercury toxicity can cause a variety of symptoms depending on the magnitude and duration of exposure, principally impacting the central nervous system and the kidneys. Typically, exposure to mercury is in the form of consumption of mercury-containing fish or dental amalgam use. Thus, exposure is to organic mercury such as methyl or dimethyl mercury [3]. This mercury is absorbed into the bloodstream through the intestine, adheres to sulfhydryl groups, such as on cysteine, and is distributed to peripheral tissues. Concentration of mercury occurs in the brain, liver, and kidneys, and it is slowly excreted, mostly through the stool. One mechanism of mercury toxicity is secondary to the production of reactive oxygen species and the subsequent reduction of glutathione (GSH) [4].
The pathway of acetaminophen metabolism involves the creation of a reactive metabolite, N-acetyl-p-benzoquinone imine (NAPQI). NAPQI is then bound to the sulfhydryl group of glutathione, which is then excreted in the urine. At supratherapeutic doses, excess NAPQI can deplete GSH stores and instead bind to mitochondrial proteins and ion channels, which leads to cell death and hepatic injury [5,6]. In a patient with mercury accumulation, such as our patient, levels of GSH are likely already partially depleted due to the production of reactive oxygen species. Thus, lower doses of acetaminophen may be sufficient to completely deplete GSH stores, even at therapeutic doses. In our case, the patient was known to use Joss paper, a sacred ceremonial paper that has been used in China to communicate with gods and goddesses in other worlds. It is traditionally made of coarse bamboo paper or rice paper and often has metal foil or ink seals incorporated [7]. In the sample of paper provided by the patient's family, there was evidence of several heavy metals including 0.02 ug/g of mercury. Compared to the mercury level of certain foods, such as swordfish, which has 100 ug/g of mercury, this is quite low. However, daily exposure to the small doses of mercury present in the Joss paper that our patient ingested likely accumulated in her body to a level that was clinically significant [8].
Conclusion
In our globalized society, physicians may not always have knowledge of the cultural rituals of their patients. Physicians should be aware of the diversity of these rituals and the potential for inadvertent heavy metal ingestion. In the case of acute liver failure, which is commonly caused by ingestion of drugs or other substances, obtaining a thorough history is important to eliciting the diagnosis. In addition, a medication history must include everything that the patient may be ingesting or inhaling, including over the counter medications and herbal supplements. As per current acute liver failure guidelines, N-acetylcysteine should be given to all patients where acetaminophen was known to be ingested or is suspected as the cause of liver injury. Mercury should be considered as a potential toxin that can be involved in the pathogenesis of liver injury.
Learning Points/Take Home Messages
When evaluating a patient with acute liver failure it is important to take a thorough history of all medications as well as anything the patient may be ingesting or inhaling.
N-acetylcysteine should be given to all patients where acetaminophen was known to be ingested or suspected even when checked levels are low or if full history is unknown.
Knowledge of our patient's cultural background can be helpful in eliciting a diagnosis. Mercury should be considered as a potential toxin that can be involved in the pathogenesis of liver injury.
Audio Delivery and Territoriality in Collaborative Digital Musical Interaction
This paper explores the design of collaborative musical software through an evaluation of the effects different audio delivery mechanisms have on the way groups of co-located musicians work together in real time via a software environment. Ten groups of three musically proficient users created music using three experimental interfaces. Logs of interaction provide evidence that changing the means of audio delivery had a statistically significant effect on the way users worked together and shared musical contributions. In addition, interview transcripts indicate a number of experiential differences between the audio delivery configurations. The findings and design guidelines presented in this paper are intended to inform future systems for musical collaboration, and also have implications more broadly for the design of multi-user interfaces for which sound is a fundamental component.
INTRODUCTION
Music is a fundamental part of human expression (Makelberge 2010) and, although not always a collaborative activity (Makelberge 2010), the creation, performance and enjoyment of music is highly social (Healey et al. 2005; Jordà 2005). Musical interaction is frequently identified as creative, open-ended, process oriented and problem-seeking (Makelberge 2010; Sawyer 2003). Within the computer music community there is a history of developing collaborative musical interfaces (Weinberg 2005; Jordà 2005), whilst more recently laptop orchestras (Wang et al. 2009) and multi-touch interfaces (Xambó et al. 2011) have been the focus of much development. However, there is still a paucity of research concerning the evaluation of such systems, and there is limited research into human interaction during computer-supported collaborative music making.
This paper contributes a study to investigate how different audio delivery configurations (speakers, headphones) can afford understanding of the location, authorship and origin of musical contributions during real-time musical collaboration. Understanding the implications of how to present audio is essential for any form of sound-based human-computer interaction; however, to date there have been no detailed or controlled user studies investigating the effect of different audio delivery mechanisms on the process of collaborative digital musical interaction. The study focuses on groups of co-located musicians using a networked graphical interface distributed across multiple computers. The findings are contextualised with reference to the concepts of awareness (Gutwin and Greenberg 2002), territory (Tse et al. 2004) and privacy (Dourish and Bellotti 1992). Quantitative analysis of interaction logs is used to study the effects different audio delivery configurations have on the way participants interact with the software, the degree to which they share their contributions, and their tendency to edit contributions made by other users. A multiple-choice questionnaire is used to gauge general preferences, and extracts from interviews with the participants are used to elaborate on relevant discussion points.
Privacy and Awareness
The Workspace Awareness (WA) framework (Gutwin and Greenberg 2002) describes the means by which people working together in shared physical workspaces gather awareness information about each other's activities. For instance, the sounds produced during the execution of a task may indicate to other people that certain individuals are currently occupied or that certain artefacts are currently in use.
Musicians playing acoustic instruments provide a rich source of awareness through the gestures and movements associated with using their instruments, the sounds they produce and the way they orient around each other (Healey et al. 2005). However, where musical interaction is mediated via software, generic input devices such as mice and keyboards may reduce opportunities for gathering and displaying awareness information. For instance, Merritt et al. (2010) observe ensembles of skilled electronic music performers relying on crude visual indicators such as level meters to glean an understanding of who is responsible for which sounds in an unfolding musical improvisation.
Collaborative interfaces for musical interaction should therefore provide additional awareness mechanisms to support users during real-time interaction (Fencott and Bryan-Kinns 2010; Bryan-Kinns and Hamilton 2009), although there are to date few studies investigating the design of such features.
Space and Territory
Many collaborative activities feature territorial behaviour. Territory has been identified as a signal of ownership or responsibility for objects, artefacts or spaces in shared document writing and other forms of collaborative activities (Thom-Santelli et al. 2009). Within HCI, territorial interaction can be reinforced through the re-appropriation of existing software functionality (Thom-Santelli et al. 2009), or may be the result of natural spatial partitions within an activity. For instance, Tse et al. (2004) showed that users of Single Display Groupware partitioned their workspace to avoid interfering with one another's work. The study presented in this paper investigates the role that spatial audio can play in providing information about the authorship of musical contributions within a shared interface. Our analysis also identifies and characterises territorial behaviour within a shared graphical interface.
Audio Delivery Devices for HCI
There is a paucity of research into auditory delivery in HCI. Kallinen and Ravaja (2007) investigate the effects of presenting news reports on headphones or speakers, reporting that headphones cause people to become more immersed in the task at hand and less conscious of their surroundings. Nelson and Nilsson (1990) report similar results in a single-user simulated driving activity. Morris et al. (2004) compare shared speakers and shared speakers plus individual headphones to deliver audio in a collaborative multi-touch system, showing that parallel working styles were adopted when users were given a personal audio channel via an in-ear headphone bud. Alexandraki and Kalantzis (2007) use a questionnaire to ascertain musicians' preferences for audio delivery, noting a slight preference for multi-channel audio. Blaine and Perkis (2000) compare headphones and speakers through informal user testing and suggest that headphones caused participants to be less communicative and more isolated from the group. They also argue that spatialised audio might help users attribute ownership to musical contributions, although their results show that non-musicians encountered difficulties identifying the effects of their actions. Conversely, Merritt et al. (2010) state that their groups of laptop musicians rejected the idea of personal speaker channels in favour of a combined mix from a single set of speakers. Finally, the importance of coupling the performer with a localisable sound source has been advocated by laptop orchestras (Wang et al. 2009), although this research lacks evaluation.
STUDY
We conducted a study to investigate how different audio delivery configurations contribute to the way groups of musicians engage in musical collaboration. This section describes the collaborative music software developed to run the study, and presents the hypotheses and experimental design.
Collaborative Music Software
In order to conduct the study we developed a music environment which allows users working on separate computers to create music via a shared workspace. Our justification for developing a piece of bespoke software is presented in Fencott and Bryan-Kinns (2012). The interface was written in Java and SuperCollider. To make music, virtual musical instruments (drum machines and step-sequencer based synthesisers) are created in the on-screen workspace, which is duplicated on the screens of all connected computers (see Figure 1). The 'instruments' (also referred to as 'Modules') provide control over the contents of a looping musical bar, and offer various sound synthesis controls. The looping nature of the sequencer and synthesised sounds makes the software especially suited to the creation of 'electronica' style music. Multiple instantiations of each instrument can be created to build up complex layered and harmonised parts, and instruments can be patched through audio effects to create rich sonic textures and provide opportunities for musical contributions from different users to be interconnected and inter-associated with each other. Instruments and effects appear in the same location on all users' screens, and changes to the parameters (e.g. slider movements) are immediately updated for all users. The software also features a tempo control which operates globally for all users. Incorporating fine-grained control over tone, metric placement of beats and chromatic pitch of notes conforms to the notion that 'good instruments should be able to make bad sounds' (Overholt 2009), and is in contrast to previous studies of collaborative musical interaction which have typically focused on non-musician users (Weinberg 2005; Bryan-Kinns and Hamilton 2009).
The software features Public and Personal audio outputs for each user to facilitate individual- and group-level working. Patching a module to the Public output causes its audio to be routed to all participants, whereas if a user patches a module to their Personal output it is routed exclusively to their headphones or speaker. The Public output is represented as a grey rectangle in the centre of the workspace, while Personal outputs appear in the bottom left of the screen. Patching modules to the Public and Personal outputs also affects their availability for editing. Modules patched to the Public output are freely editable by all users, while modules patched to a user's Personal output are rendered non-editable by other users, and appear as grey boxes on all other participants' computer screens.
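The routing and editability rules described above can be summarised in a minimal sketch; the class and method names below are assumptions for illustration, not the Java/SuperCollider implementation used in the study.

```python
class Module:
    """A virtual instrument whose output routing determines both audibility and editability."""

    def __init__(self, owner):
        self.owner = owner
        self.output = "public"              # "public" or ("personal", user)

    def route_to_personal(self, user):
        self.output = ("personal", user)

    def route_to_public(self):
        self.output = "public"

    def audible_to(self, user):
        return self.output == "public" or self.output == ("personal", user)

    def editable_by(self, user):
        # Public modules are editable by everyone; personal ones only by their listener.
        return self.output == "public" or self.output == ("personal", user)
```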
Hypotheses
The study investigated the following hypotheses:
H1: Presenting audio exclusively through headphones will encourage more individual work. Indicators of individual work are: more use of the Personal output; less co-editing of contributions; fewer verbal acknowledgements.
H2: An interface with the most auditory distinction between public and private audio channels will be preferred by participants. This hypothesis is based on the assumption that privacy is a key factor in the user's preference (Fencott and Bryan-Kinns 2010).
H3: An interface which presents personal and public audio entirely through speakers will cause participants to work more collectively, as evidenced by: more audio played in the Public channel; less use of privacy features; increased discussion of audio sources.
Methodology
The study employed a three-condition within-subjects design in which all participants were exposed to each of the conditions. Aside from the three configurations of headphones and speakers for audio delivery described below, the musical software remained identical throughout all conditions. To counteract ordering effects, the sequence of conditions was permuted between groups.
Condition C1 - Speakers only: Each participant has their own speaker, which is used to present both Personal audio and Public audio. This is similar to conventional instrumental playing, where each person's instrument comes from a distinct spatial location. Using individual speakers creates a situation in which participants each have a personalised version of the music which can be overheard by the other participants.
Condition C2 - Headphones and Public Speakers:
Each participant has their own speaker for Public audio. Personal audio for each participant is played through headphones. This arrangement is similar to the DJ practice of using headphones to cue new records in private before crossfading to speakers for the audience to hear (Pfadenhauer 2009).
Condition C3 - Headphones Only: Public and Personal audio are routed through headphones. Musical contributions routed to the Public channel go to all headphones, while contributions routed to a user's Personal channel are only played through that user's headphones.
Participants and Recruitment
Thirty individuals were recruited via e-mail lists.
The recruitment e-mail asked for 'people with an interest in creating music, for instance composers, musicians, DJs, and students of Music, Music Technology or related fields'. Each participant received financial compensation for taking part. The decision to study users with knowledge of music and music technology represents a departure from studies of non-musician users engaging with simplified musical interfaces (e.g. Weinberg 2005).
Participants were organised into groups of three. 68% of participants were male (based on 29 responses). The average age was 33 (based on 25 responses). 69% of participants could play a musical instrument. 22% classified themselves as of 'beginner' level musical proficiency, 40% as 'intermediate', and approximately 18% each as 'semi-professional' and 'professional'. 86% of participants had composed songs individually, while 75% had composed songs with others. 58% of participants identified their level of computer literacy as 'intermediate', and 37% identified as 'expert'. One participant identified as a 'beginner'. 27.59% of participants had not previously used collaborative software. 25% had played online multi-player computer games. 14.29% had used collaborative document editors and 7.14% had used collaborative writing software. 50% had used collaborative music software. (Percentages do not add up to 100 for multiple-choice questions.)
Experiment Task
To spark discussion and provide a common ground for the participants, a challenge was set to compose music to complement a short video animation displayed in the top left of the interface (see top left in Figure 1). A different video animation was used for each condition (the videos were created using 'Mother', http://www.onar3d.com/mother/), and the sequence of video animations was ordered independently of the condition ordering.
Measures
Questionnaire data
A post-test questionnaire based on the Mutual Engagement Questionnaire (Bryan-Kinns and Hamilton 2009) gathered information about the participants' experiences with the experimental conditions by asking participants to order the experimental conditions in terms of how they related to a list of statements.
Interaction Log Analysis
The following interaction features were logged by the software: creating a module, deleting a module, modifying a module control (e.g. moving a slider, pressing a button), connecting a module to the Public output, connecting a module to a Personal output, patching to and from effects, movement and spatial position of modules, and tempo changes.
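For concreteness, such an interaction log could be stored as one timestamped event per action, as in the hedged sketch below; the field names and example values are assumptions, not the logging format actually used.

```python
import json
import time


def log_event(logfile, user, action, target=None, value=None):
    """Append one JSON line per logged interaction event."""
    event = {"t": time.time(), "user": user, "action": action,
             "target": target, "value": value}
    logfile.write(json.dumps(event) + "\n")

# Example usage (illustrative identifiers):
# log_event(f, "user_B", "modify_control", target="drum_machine_2", value=0.7)
# log_event(f, "user_A", "patch", target="synth_1", value="public_output")
```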
Group Discussions
Video-recorded discussions were held at the end of each session. The discussions focused on preferences, perceived differences between the speakers and headphones, use of the Personal and Public channels, awareness of each other's activities, roles, working strategies and spatial use of the shared on-screen workspace.
Procedure and Apparatus
Sessions started with a verbal introduction by the researcher. Participants were then presented with a pre-test questionnaire to collect demographic information. A 10-minute training period with the software followed. Participants were then given fifteen minutes with each experimental condition. The post-test questionnaire was presented after the conditions, and finally a group discussion was held. The researcher sat in a visually occluded control room while participants completed the questionnaires and engaged in the experimental conditions.
The software ran on three Apple Mac-Mini computers with 21" widescreen displays, placed on a large round table (see Figure 2). The displays were lowered to allow participants to see over them. Studio-quality Yamaha MSP5 monitors were used for conditions requiring speakers. These were positioned to the right of each display. Sony MDR-7509HD headphones were used for conditions C2 and C3. These headphones could be worn over the head or held to the ear in the style of a DJ.
Post-Test Questionnaires
Twenty-eight participants completed the post-test questionnaire in full; one participant provided responses to almost all statements and one participant provided no responses. The Friedman test was used to identify statements which elicited a statistically significant trend. A significant number of participants identified condition C2 as the one in which they 'lost track of time' (p = 0.0406, df = 2, χ²r = 6.41). A significant number of participants (p = 0.0211, df = 2, χ²r = 7.72) rated condition C2 as the one in which they 'had the most privacy'. No other statements provoked a statistically significant effect. Table 1 presents the post-test questionnaire results.
Table 1: Post-test questionnaire results.

Statement | C1 | C2 | C3 | P
The best music | 2 | 1.9 | 2.2 | 0.56
I felt most involved with the group | 1.9 | 1.9 | 2.1 | 0.52
I enjoyed myself the most | 2.1 | 1.9 | 2.1 | 0.66
I felt out of control | 2.2 | 1.8 | 2 | 0.34
I understood what was going on | 1.9 | 2 | 2.1 | 0.8
I worked mostly on my own | 2 | 1.9 | 2.1 | 0.64
I lost track of time | 2.2 | 1.6 | 2.1 | 0.04
Other people ignored my contributions | 1.9 | 2 | 2.1 | 0.58
We worked most effectively | 2.1 | 2 | 2 | 0.9
The interface was most complex | 1.9 | 2.1 | 2 | 0.79
I had the most privacy | 2.4 | 1.7 | 1.9 | 0.02
I knew what other people were doing | 2 | 2 | 2 | 0.97
We edited the music together | 2 | 2.1 | 1.9 | 0.57
I made my best contributions | 2 | 2 | 2 | 0.07
I was influenced by the other people | 2 | 1.9 | 2 | 0.9
The condition I preferred the most | 2.1 | 1.9 | 2 | 0.73
Interaction Log Analysis
Interaction log analysis using the Friedman test showed that where participants were given Speakers Only (Condition 1) they made significantly less use of the Personal channel to listen to musical contributions (p = 0.0253, df = 2, χ²r = 7.35). There was no significant difference in the number of modules created (p = 0.7225, df = 2, χ²r = 0.65), the number of modules deleted (p = 0.5169, df = 2, χ²r = 1.32), the amount of editing which individuals performed on their own modules (p = 0.8395, df = 2, χ²r = 0.35), or the instances of co-editing which took place (p = 0.3413, df = 2, χ²r = 2.15). There was no significant effect on the number of module coordinate position movements (p = 0.4677, df = 2, χ²r = 1.52). There was no significant difference in the use of the tempo control between conditions (p = 0.5916, df = 2, χ²r = 1.05). Finally, there was no significant effect on the number of times participants patched modules to the Public channel (p = 0.1496, df = 2, χ²r = 3.8). Table 2 summarises these results.
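The within-subjects comparison above can be reproduced with standard statistical tooling; the sketch below shows the shape of such an analysis using SciPy's Friedman test, with entirely made-up per-group counts standing in for the real interaction log data.

```python
from scipy.stats import friedmanchisquare

# One entry per group, one list per audio condition (placeholder values only).
personal_channel_use = {
    "C1": [2, 1, 0, 3, 1, 0, 2, 1, 1, 0],
    "C2": [5, 4, 2, 6, 3, 2, 4, 5, 3, 2],
    "C3": [4, 5, 3, 5, 2, 3, 4, 4, 2, 3],
}
stat, p = friedmanchisquare(personal_channel_use["C1"],
                            personal_channel_use["C2"],
                            personal_channel_use["C3"])
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
```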
Spatial Workspace Organisation
Visualisations were produced to plot the coordinate position of each music module over the course of the interaction. Colour (red, green, blue) was used to signify which of the three participants created each module (see Figure 3). A variety of spatial patterns were informally identified, the most common being contributions by individuals arranged in corners of the screen, in horizontal or vertical stripes, or randomly spaced. The visualisations were manually coded as 'grouped' or 'intermingled' sets based on the degree to which the areas of coloured dots appeared to be grouped together. The categorisations were performed independently by a third-party rater, and a Cohen's Kappa test produced an inter-rater reliability of 0.6667 (0 indicates total disagreement, 1 indicates complete agreement). The Mann-Whitney U test was then used to compare the interaction logs for grouped and intermingled data sets. It was found that groups who intermingled their modules performed significantly more co-editing than groups who spatially separated their modules (Ua = 1354.5, z = -2.76, p1 = 0.0029, p2 = 0.0058). Groups with more pronounced spatial partitioning also created more modules (Ua = 696, z = 2.55, p1 = 0.0054, p2 = 0.0108) and made more use of the public channel (Ua = 697.5, z = 2.54, p1 = 0.0054, p2 = 0.0111).
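Both the reliability check and the between-category comparison follow standard recipes; the sketch below shows how they could be computed, with placeholder counts and codings rather than the study's data.

```python
from scipy.stats import mannwhitneyu
from sklearn.metrics import cohen_kappa_score

# Placeholder co-editing counts split by the two layout categories.
coedits_intermingled = [14, 9, 11, 16, 12]
coedits_grouped = [4, 6, 3, 7, 5]
u_stat, p = mannwhitneyu(coedits_intermingled, coedits_grouped, alternative="greater")

# Placeholder codings from the two raters for the inter-rater reliability check.
rater_1 = ["grouped", "intermingled", "grouped", "grouped", "intermingled"]
rater_2 = ["grouped", "intermingled", "grouped", "intermingled", "intermingled"]
kappa = cohen_kappa_score(rater_1, rater_2)

print(f"U = {u_stat:.1f}, p = {p:.4f}; inter-rater kappa = {kappa:.2f}")
```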
Video Observation and Group Interviews
Video of the interaction and group discussion was manually transcribed and coded using Grounded Theory methods (Muller and Kogan 2010), although space limitations prevent a full report of this data. Instead, extracts from the interviews and group interaction are presented throughout the following section to elaborate on specific findings.
DISCUSSION
The results suggest that manipulating the way audio was delivered changed the way groups collaborated, and that it also influenced the perceived quality of the interaction. This section begins by using questionnaire, interaction log and transcription extracts to assess the study hypotheses, before expanding on more general issues surrounding the findings of the study.
Hypothesis H1
Hypothesis H1 stated that 'Participants will work more individually when audio is presented exclusively through headphones'. Analysis of the interaction logs indicates that the experimental conditions did not influence the amount of editing or co-editing which took place, and there was also no effect on the use of Personal audio channels when audio was routed through headphones. Participants did, however, report having least privacy in the speakers-only condition, and during discussion the participants identified a number of distinctions between working in headphones and working in speakers. Some participants noted that the headphones encouraged more concentrated listening than with speakers, causing them to 'hear close things', or focus on 'texture' and 'tiny details'. The property of headphones to facilitate a more intimate, close and immersive listening experience was also identified in (Kallinen and Ravaja 2007). The tendency for headphones to promote more focused listening had contradictory effects on reported experiences of involvement in the group. On the one hand, some participants reported being less involved with the group as they became more focused on the details of the sounds they were creating, at the expense of engaging with the publicly shared music. For other participants, more concentrated listening resulted in their becoming more attentive to the changes made by others, and consequently they noted feeling as though they were working more as a group when using headphones. These two effects are illustrated by the following transcript extracts:
B: yeah, and I think you're much more aware of, of other people's changes, in the last one, having the headphones on [...] and I think, my feeling was that that encourages, encouraged us to change more in each others' [...] that was my feeling anyway, that there was a bit more collaboration with each others' sounds
C: well, erm, I see, the headphones definitely, shut me off from, erm, I mean I was able to concentrate solely on what I was doing, but I wasn't as involved as a group
B describes a situation in which headphones caused him/her to engage more with the group, as the close listening drew attention to the changes others were making to the sounds. Conversely, C states that the headphones allowed him/her to concentrate on his/her own ideas, but at the expense of less group involvement. The contradictory statements from participants do provide reassurance that the interview questions were not leading participants towards particular answers (Furniss et al. 2011); however, they also suggest that there was a wide range of different experiences of the interaction.
Hypothesis H2
Hypothesis H2 posits that an interface which provides the greatest auditory separation between personal and public work would be preferred by participants. The post-test questionnaire data does not indicate that any of the conditions affected preference, based on the results for the statements 'The condition I preferred the most' or 'I enjoyed myself the most'. Analysis of post-test responses indicates that a significant proportion of participants rated C2 (Speakers + Headphones) as the one they most lost track of time in, which has previously been interpreted as an indicator of engagement (Bryan-Kinns and Hamilton 2009).
Participants expressed mixed responses to the C2 condition. Some participants expressed difficulty switching between headphones and speakers, noting that it caused a disruption in their ability to focus on the shared aspects of the music. One participant described using headphones initially to experiment with the software, before switching back to speakers to work with the group.
C: yeah yeah, because I, I was just trying to get, see how it all worked, and, which I preferred, and then I found it was much better to just be on the same wavelength as everybody else
Another participant noted that the headphones made them concentrate more on what they were doing individually, at the expense of formulating a musical contribution which was coherent with the music playing through the public channel on speakers:
D: I was more concentrating on what I was doing, and when I tried to erm, add it to the, you know, public thing, it was, it just didn't sound right.
It seems more accurate to state that participants noted feeling more involved with the group when they were most aware of and attentive to the changes being made by others. This appears to have occurred more often when audio was presented entirely via speakers or entirely via headphones, rather than when public and personal audio were split across separate devices.
Hypothesis H3
Hypothesis H3 proposed that 'An interface which presents personal and public audio entirely through speakers will cause participants to work more collectively'. When Public and Personal audio was presented entirely through speakers (C1), interaction log analysis showed that participants made significantly less use of the Personal channel to listen to musical contributions than they did in either of the other conditions. This suggests that the personal channel was less useful when delivered via speaker, or perhaps that working with speakers discouraged personal working. This supports the hypothesis that participants would make less use of the privacy functions when audio was delivered this way. Furthermore, response to the questionnaire statement 'I had the most privacy' shows participants reported experiencing least privacy in Condition 1. During discussion, participants noted difficulty determining the spatial location of sounds from individual speakers.
Workspace Organisation
Participants were free to organise the on-screen music modules arbitrarily, and it is important to emphasise that the spatial position of modules did not influence the musical outcome of the software. The categorised visualisations provide evidence that participants in half the groups employed strong spatial organisation within the shared workspace to separate contributions based on ownership. The visualisations also show that groups used a similar spatial arrangement in every condition, and the spatial arrangements appear not to have been influenced by the audio presentation modes. During interviews, individuals often seemed aware that they were working in a particular area of the screen, although they did not always know where other group members were working. Studying the video recordings taken during engagement with the software, it appears that in many cases the spatial organisation was the result of unspoken, tacit or implicit agreement, rather than verbal negotiation.
Unlike in the case of people sitting or standing around a shared screen or table, the circular seating arrangement (see Figure 2), combined with the consistent spatial position of modules within the shared and distributed workspace, means that the physical position of participants around the table could not have contributed to the way participants organised the spatial layout of elements on the screen, while the primarily auditory activity of music-making presents no inherent cues or suggestions towards specific spatial arrangements. It is possible that the central position of the Public output patching block may have directed participants towards a natural spatial partitioning strategy around the centre of the screen; however, the plurality of layout approaches (corners, horizontal and vertical stripes, non-uniform) suggests that this was not a major contributing factor.
Participants may have spatially partitioned the interface to reduce interference with one another's work. This statement is supported by interaction log evidence; using the Mann-Whitney test to compare instances of co-editing between the sets of partitioned and intermingled groups reveals that participants in groups with weaker spatial organisation strategies were more inclined to edit modules created by others than were participants in groups with more strict spatial organisation. However, conversation during the sessions indicates that the role of spatial partitioning stretched beyond the minimisation of interference, and was used to signify and help manage awareness of authorship of musical contributions. This follows Thom-Santelli et al. (2009), who argue that during collaboration, territoriality serves the communicative function of indicating ownership over a particular object or space. The following extract demonstrates how participants adopted a spatial ordering strategy to counter difficulties maintaining awareness of each other's actions.
E: it's just hard to keep up with so much, what's going on. You don't know who's, who's is doing what, you know?
F: (laughs)
G: ok, ok well how about this, how about this.
E: uh uh
G: Why doesn't everybody, like, lets say, you go to one side with yours, I go to one side with mine, and you go to one side with yours. Like, it's to move it to one side so we know what everyone is doing.
E: yeah, yeah
G: that make sense?
F: yeah
This extract demonstrates the way in which participants negotiated an informal mechanism or agreement to scaffold authorship awareness through partitioning of the workspace. The participants used the spatial division of the shared interface to create or claim personal workspaces for themselves within the shared interface, even though such workspaces were not provided explicitly via the interface. This extract also highlights a common vocabulary used by participants to discuss the screen layout. During interviews, participants often described themselves in terms of working 'in the bottom left', 'top right', and so on, although some participants appeared not to have any sense of territory within the interface and talked about working 'all over the place', or 'putting stuff anywhere there was space'. These were points at which the spatial organisation broke down; for instance, one participant noted during the interview:
K: you watch around for space, and start just putting stuff wherever
Participants occasionally used the spatially congruent layout of the interface as a resource to discuss aspects of the arrangement, as demonstrated in the following excerpt. Here H uses the spatially consistent workspace as a resource to draw J's attention to a particular music module:
H: (pointing with both fingers all over his screen) I can see what everyone else has got on their parameters
J: yeah, that's right
H: (pointing at right hand side of screen) because I've put, I've got, the second step sequencer, down on the right hand side, I've put the notes where your kicks are, (pointing to left of screen) so it's sort of
J: ah, right, paralleled
Due to the physical arrangement of screens on the circular table, J is unable to ground H's deictic reference to the information on his screen. H therefore verbally refers to the music module's spatial position within the shared on-screen workspace, by stating 'the second step sequencer, down on the right hand side' of the workspace. In this way H and J use their intersubjective knowledge of the spatially consistent layout of the workspace to discuss an aspect of the shared interface. J then notes that H's sequencer is 'paralleled', to acknowledge the observation that H's step sequencer is playing notes at the same time as the kick drums from J's drum module. This extract also demonstrates that participant H was using visual access to J's music modules as a resource for creating musically coherent contributions.
Identification of Contributions
In this study, musical material was created by editing on-screen sequencers. This poses two problems related to the gathering of workspace awareness: a lack of feedthrough awareness at the time of creation and, following this, the existence of an autonomous agent (the sequence) which proceeds to generate music independently of its creator and provides limited information about this autonomous process. Even though the interface was consistently distributed across computer screens, participants reported having difficulty identifying specific music modules within the interface. Some participants reported using their personal channel to discover which modules were responsible for which sounds, and during group interviews participants talked more about the importance of knowing what was making a specific sound than they talked about knowing who was responsible for a contribution.
Using a mouse to create a sequence does not indicate to other co-located musicians what activities are being undertaken, and indeed, depending on the physical layout of workstations, the action of moving the mouse may itself be non-visible. One participant in this study noted correlating onscreen activities such as fader movements with feedthrough awareness provided by the sounds of other participants' mouse-clicks to attribute authorship to certain music modules.
Key Findings
• The form of audio delivery influenced the degree to which participants used their personal audio channel. When personal audio was routed to individual speakers next to each participant and public audio was routed to all speakers, participants made significantly less use of the personal channel.
• The spatially and visually consistent layout of the interface was exploited in several ways to support collaboration; primarily as an aid to joint attention, and as a means of indicating ownership over specific music modules.
• Most groups adopted similar spatial arrangements in each interface condition, although there was no evidence that the layout was influenced by the way audio was delivered.
• Strong territorial behaviour was identified in half the groups. These groups performed less co-editing, created more modules and made more use of the personal audio channel (a minimal sketch of this kind of group comparison is given after this list).
• Identification of contributions and identification of ownership appear to be two distinct issues.
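To make the kind of nonparametric group comparison referenced above concrete, the following is a minimal sketch of a Mann-Whitney test on per-participant co-editing counts. It is not the study's analysis code, and the counts and variable names are invented purely for illustration.

```python
# Hypothetical sketch: comparing co-editing counts between participants from
# groups with strict spatial partitioning and groups with intermingled layouts.
from scipy.stats import mannwhitneyu

partitioned_coedits = [1, 0, 2, 1, 0, 3, 1, 2, 0]    # invented counts
intermingled_coedits = [4, 6, 3, 5, 7, 2, 5, 6, 4]   # invented counts

stat, p_value = mannwhitneyu(partitioned_coedits, intermingled_coedits,
                             alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
```

A significant result of this kind would indicate that participants in intermingled groups edited others' modules more often than those in strictly partitioned groups.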
Layout Features
Given the importance of module layout during collaboration, a redesigned interface could incorporate additional layout and organisation features to support additional scaffolding for collaboration, awareness and joint attention. These features would not have a direct influence on the sonic output of the software, but could aid groups of people in structuring a collection of interface elements. Layout features could include user-configurable dividers, workspaces, partitions, annotations and colour-coded areas. The ability to group or bundle associated modules together (e.g. a collection of drum sequencers forming a rhythm) could be another useful feature. A new research direction might be to investigate the extent to which these organisational features need to be consistently duplicated across all connected computers, and how groups or individuals might exploit the affordances of these features.
Multiple Devices for Audio Delivery
In single-user performance contexts such as DJing, the separation of audio into different devices has been identified as a central aspect of the practice. However, in a real-time, co-located collaborative context, where multiple people are listening to a variety of sound sources simultaneously, splitting audio across multiple devices appears to be problematic from a design and usability perspective.
Although results from previous studies implied that this separation might be beneficial (e.g. Fencott and Bryan-Kinns 2010), our study suggests that the separation of audio into different devices is detrimental to a group's ability to coordinate and manage their collaboration, and may contribute to feelings of less involvement and less awareness.
Secondly, problems arose due to switching between headphones and speakers, balancing the level between speakers and headphones, and the disruption that wearing headphones caused to conversation and to monitoring of audio played through the speakers. These issues could be counteracted with less acoustically isolated headphones, providing individuals with control over the level of their audio outputs, and using wireless headphones to make switching between headphones and speakers less awkward.
Finally, a clear limitation of this study is that although the participants were musically inclined, they had limited experience of collaborative software and limited experience using the software developed for this study. Had the participants become accustomed to the split audio design of Condition 2, they may have developed ways to deal with the problems they encountered. Consequently, a strong implication is that for first-time users, audio should be presented via a single device where possible (either headphones or speakers), as it appears to encourage stronger feelings of group involvement and a greater sense of awareness.
Single Device for Audio Delivery
Interaction log analysis of the data suggests that using speakers for shared and individual audio presentation discouraged people from using personal audio channels, although previous research suggests that incorporating the ability to work in auditory isolation allowed participants to formulate more complete contributions before sharing them (Fencott and Bryan-Kinns 2010). Designers must therefore balance the choice to discourage individual work against the benefits of allowing users to control how and when their ideas are shared with the rest of the group. If the system is intended to promote open and collective group interaction then using speakers might be preferable, while a system which is designed to promote more focused individual work might benefit from headphone presentation, as this allows users to concentrate on their own contributions and take advantage of the detailed sound provided by headphones. A collaborative system could also incorporate a switching mechanism which allows a group to jointly transfer audio from their individual headphones to a speaker system at a point where they can combine their contributions.
Ownership and Identification
Participants seemed less concerned with who was contributing what, although they commented on using the personal audio channel to discover which interface elements were responsible for which sounds. This suggests that interfaces should provide separate mechanisms for identifying 'who' is doing what, and 'what' is doing what within the interface.
Our subsequent research has pursued this issue.
CONCLUSION
Designing to support group musical interaction necessitates a careful consideration of how audio should be presented. Using an experimental design, this study has identified a number of ways in which different audio delivery mechanisms influence group musical interaction among ten groups of musically inclined users. This informed the synthesis of design implications for the way sound should be presented to support collaboration. In addition, analysis of the way groups configured, managed and discussed the shared interface points to a number of other design considerations for future collaborative systems.
Figure 1: Screenshot of the collaborative interface
Figure 2: Equipment used for user study
Figure 3: Visualisation of workspace territory. Circles represent position of modules, colours indicate individuals.
Table 1: Post-Test questionnaire results summarised to rank averages. Significance of p<0.05 highlighted in bold.
Table 2: Interaction log results summarised to rank averages. df=2 in all cases. Significance of p<0.05 in bold.
Impact of the COVID-19 Pandemic on Persons Living with HIV in Western Washington: Examining Lived Experiences of Social Distancing Stress, Personal Buffers, and Mental Health
Pandemic-related stressors may disproportionately affect the mental health of people with HIV (PWH). Stratified, purposive sampling was used to recruit 24 PWH who participated in a quantitative survey on COVID-19 experiences for in-depth interviews (IDIs). IDIs were conducted by Zoom, audio recorded and transcribed. Thematic analysis was used to develop an adapted stress-coping model. Participants experienced acute stress following exposure events and symptoms compatible with COVID-19. Social isolation and job loss were longer-term stressors. While adaptive coping strategies helped promote mental health, participants who experienced multiple stressors simultaneously often felt overwhelmed and engaged in maladaptive coping behaviors. Healthcare providers were important sources of social support and provided continuity in care and referrals to mental health and social services. Understanding how PWH experienced stressors and coped during the COVID-19 pandemic can help healthcare providers connect with patients during future public health emergencies, address mental health needs and support adaptive coping strategies. Supplementary Information The online version contains supplementary material available at 10.1007/s10461-024-04273-7.
Introduction
The first case of COVID-19 in the United States was reported in Western Washington in January 2020 [1]. The World Health Organization declared COVID-19 a global pandemic in March 2020 and, shortly thereafter, Washington State implemented a wide range of social distancing measures [1,2]. The COVID-19 pandemic has since laid bare social and health inequities throughout the United States, including in Washington State. Numerous published reports demonstrate how COVID-19 disproportionately impacted vulnerable populations such as racial and ethnic minorities, individuals experiencing homelessness, persons who inject drugs, and other marginalized groups [3][4][5][6]. These communities are also vulnerable to and disproportionately impacted by HIV [7][8][9].
Understanding the psychosocial impacts of the pandemic for people living with HIV (PWH) is critical, as they are already at increased risk for common mental health disorders such as depression and anxiety [10][11][12], and poor mental health is often a predictor of negative HIV-related outcomes [13]. At the same time, a growing body of literature supports an increased susceptibility of people with pre-existing mental health conditions to stressors associated with COVID-19, relative to the general population [14].
The theory of stress and coping described by Lazarus and Folkman provides a powerful and useful framework for understanding how individuals cognitively appraise potentially stressful demands or events and utilize both internal and external resources to manage these stressors [15]. Multiple stressors, such as adhering to treatment regimens, disclosing HIV status, and managing HIV-related symptoms, have been shown to impact the mental health of PWH [10][11][12]. Stress and coping models guided by the Lazarus and Folkman theory [15] have been used to better understand PWH coping strategies and to develop and test interventions related to coping [16][17][18]. These studies have shown that effective coping strategies used by PWH include problem-focused coping, which addresses the stressor directly, and emotion-focused coping techniques such as positive reappraisal, meditation and exercise to alleviate stress and improve mood [16][17][18]. Findings related to the effectiveness of social support in its various forms remain mixed [16,17]. The Lazarus and Folkman framework [15] informed the current study's aims to examine pandemic-related stressors, adaptive and maladaptive coping strategies, and mental health related outcomes among PWH.
Qualitative methods provide powerful analytical approaches that complement epidemiological studies, and may offer important, contextual insights regarding mental health and other determinants of clinical outcomes [19]. And yet, according to a scoping review published in May 2021, only 4.5% of research investigating the impacts of COVID-19 on PWH within the first 12 months of the pandemic was focused on stress and mental health, and less than 8% involved qualitative research [12]. A deeper understanding of how PWH made meaning of, responded to, and coped with the COVID-19 pandemic will help clinicians examine the ways in which adaptive and maladaptive coping strategies may have impacted care engagement and clinical outcomes. We conducted a qualitative study to explore the lived experiences of PWH in Western Washington, in order to better understand how the pandemic impacted daily life, identify resources that led to or mitigated symptoms of anxiety and depression, and develop an adapted stress-coping model of how COVID-19-related stress impacted mental health in this population.
Study Design and Population
This qualitative research study evaluates data collected as part of the University of Washington (UW) HIV and COVID Study, a mixed-methods assessment of COVID-19 impacts on PWH in Western Washington. The study included a cross-sectional, online survey in REDCap to evaluate experiences with and impacts of the COVID-19 pandemic and related social distancing on a range of health outcomes, as well as interviews with a purposively selected subset of survey participants. Individuals were eligible to participate in the survey if they were 18 or older, had internet access, and had enrolled in the UW HIV patient registry, which recruits UW clinic patients to participate in the Centers for AIDS Research Network of Integrated Clinical Systems (CNICS) cohort, a longitudinal, observational cohort of PWH in care at eight geographically distinct HIV research sites in the US from January 1995 to the present (http://www.uab.edu/cnics/) [20]. The HIV patient registry in Seattle includes patients who are 84% male and 67% white, with a median age of 52 (range 18-89). Survey recruitment was done via email and text message. After consenting, participants were asked a series of questions related to sociodemographic characteristics, COVID-19 impact, personal buffers, mental health, substance use, sexual health, and HIV treatment outcomes. Individuals who participated in the survey were asked if they were willing to participate in an in-depth interview (IDI). Participants who completed the survey by November 6, 2020, and who consented to participate in an IDI, were eligible to participate in this qualitative study.
In the survey, we included a seven-item COVID stress scale created by selecting items from the longer 36-item COVID Stress Scale to capture several dimensions of COVID-19 stress while minimizing participant burden [21]. This brief scale assessed different types of stress due to the pandemic (e.g., "I'm afraid of getting COVID-19") and related social distancing measures (e.g., "Social distancing has resulted in increased mental stress"). For each item, participants indicated the intensity of the stress using a five-point Likert scale, ranging from 0 (not at all) to 4 (extremely). Responses to the seven items were summed to derive a total COVID-19 stress score, ranging from 0 (no impact) to 28 (high impact). Participants were then categorized as belonging to one of three tertiles: "Low Stress" (2-11), "Medium Stress" (12-16) or "High Stress" (17-24). As of October 31, 2020, 327 participants had completed the survey and 277 (85%) had consented to IDI invitation: 103 participants scored as having "Low Stress", 95 as "Medium Stress", and 79 as "High Stress." To capture a wide range of experiences, and to better examine factors that led to or mitigated symptoms of anxiety and depression, we used stratified, purposive sampling to select participants from this population who scored in the "Low Stress" and "High Stress" categories, aiming for representative coverage of the sample by age, sex, race/ethnicity, and sexual orientation.
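As an illustration of the scoring just described, the following minimal sketch sums the seven Likert items and assigns the tertile categories used for sampling. It is not the study's code, the example responses are invented, and the tertile cut-points simply reproduce the observed ranges reported above.

```python
# Hypothetical sketch of the 7-item COVID stress score and tertile assignment.
def covid_stress_category(item_responses):
    """item_responses: seven Likert ratings, each 0 (not at all) to 4 (extremely)."""
    assert len(item_responses) == 7 and all(0 <= r <= 4 for r in item_responses)
    total = sum(item_responses)        # possible range 0-28
    if total <= 11:
        return total, "Low Stress"     # observed range 2-11
    elif total <= 16:
        return total, "Medium Stress"  # observed range 12-16
    return total, "High Stress"        # observed range 17-24

print(covid_stress_category([3, 2, 4, 1, 3, 2, 3]))  # -> (18, 'High Stress')
```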
Data Collection
The HIV and COVID study was guided by the Lazarus and Folkman theory of stress and coping [15]. Our initial model focused on how stressors resulting from the COVID-19 pandemic (e.g., COVID-19 illness, social distancing stress, job loss, housing challenges, reduced quality of life) may have been mitigated by adaptive coping. We were interested in how personal buffers (i.e., coping resources) such as COVID-19 knowledge, support for social distancing, technology access and skills, social support, and prior experience with coping strategies would impact health outcomes such as mental health, substance use, sexual health, and HIV treatment adherence. The stress-coping model applied in this qualitative analysis posited that each individual's COVID-19 experience was the result of a stressful/traumatic life event (i.e., the COVID-19 pandemic), and that personal buffers were protective factors buffering the negative effects of COVID-19 on health outcomes.
Based on this model, we developed a four-part semistructured interview guide that captured: (1) participants' knowledge of COVID-19, (2) their personal experiences and beliefs about COVID-19, (3) how social distancing measures influenced daily life, and (4) how the pandemic impacted their motivation and ability to access health care and other services, including those related to mental health and substance use treatment.
IDIs were conducted by SS and MM between December 2020 and May 2021 using HIPAA-compliant online Zoom or phone-to-Zoom. Participants were recruited for IDIs via email. All IDIs were conducted in English, and were audio recorded via Zoom with participants' consent. Participants who completed an interview received $40 for their time. After the first 5 interviews, the interview guide was revised to optimize phrasing and question flow, and several probes were added, in order to better capture participant experiences and perspectives (Supplemental File 1). The average length of IDIs was 55 min, ranging from 35 min to 1 h and 33 min.
Following each IDI, a structured debrief report was written to capture observations and summarize participant responses. Otter.ai software was used to provide first-draft transcriptions of the recorded interviews. Draft transcripts were reviewed and revised by SS or NB to ensure accuracy and quality.
The study protocol, consent form, and IDI topic guide were reviewed and approved by the UW Human Subjects Division. All study participants provided written informed consent, with oral confirmation upon starting the interview.
Data Analysis
We conducted thematic analysis [22], grounded in our adapted COVID-19 stress-coping model, to identify pandemic-related stressors, personal buffers that acted as protective factors mitigating negative effects, and reflections on personal mental health. All transcripts were coded using the qualitative analysis software ATLAS.ti (ATLAS.ti version 8, Scientific Software Development GmbH, Berlin, 2020). A preliminary codebook was developed deductively based on the domains of our conceptual model, then expanded through open coding and identification of data-driven codes. During three rounds of consensus coding, discrepancies in code application were resolved through discussion with the larger analysis team, until a final codebook and coding strategy were agreed upon. Thereafter, the remaining transcripts were divided between two analysts (SS and AN) and imported into ATLAS.ti for independent coding. After all transcripts were independently coded, both analysts reviewed each other's codes and text segmentation, and resolved disagreements [22].
Salient themes were identified through generating and reviewing queries of codes within each conceptual area (stressors, personal buffers, and mental health outcomes) [22]. Interpretation of findings was discussed among the analysts and by the broader study team. Finally, a data display was developed to include illustrative quotes corresponding to stressors, personal buffers, and coping strategies [22].
Results
Of the 59 participants invited to interview, 24 (41%) completed an interview: 11 were individuals who had low COVID stress scores and 13 were individuals who had high COVID stress scores. Table 1 presents characteristics of these participants. Median age was 48 years (range 28-67 years), and 18 (75%) identified as male. Sixteen (67%) identified as white, and 16 (67%) identified as gay, homosexual or lesbian. Due to our purposive sampling approach, these demographics closely matched those of the overall sample who participated in the online survey (median age 47, 82% male, 65% white, and 66% gay, homosexual or lesbian). The majority of participants (80%) had at least some college, an associate's degree, or technical school training. Notably, while many lived in low-income housing, no participants experienced homelessness during the early months of the pandemic, although two younger participants reported fear and/or loss of housing. Both PHQ-8 and GAD-7 scores were higher among participants with high COVID stress scores, compared to those with low scores.
While participants experiencing high and low COVID stress differed in their perceived amount of stress and their personal appraisal of their ability to cope, they did not differ in how they described the types of stressors or the relationships between stressors and corresponding strategies used to cope. Therefore, results for all participants are presented together. Most participants expressed having felt some level of anxiety and depression; however, the degree to which they experienced mental health challenges was related to their specific stressors and success with adopting positive coping strategies. Overall, among 3 main topic areas, we identified multiple themes as participants described their experiences, which are further described in the text below. Table 2 provides additional support for each theme.
Topic 1. Pandemic-Related Stressors
The COVID-19 pandemic brought about a range of acute and chronic stressors, including exposure events, social isolation, and disruptions in routine services.
Exposure Events and Symptoms Compatible with COVID-19 Acted as Powerful Short-Term Stressors
On the individual level, common pandemic-related stressors included exposure events, onset of symptoms compatible with COVID-19, and COVID-19 illness. Many participants described being fearful of COVID-19, often stemming from their personal experiences living with a chronic viral infection and the uncertainty of how COVID-19 could affect their health. Most participants reported strictly adhering to their antiretroviral regimens even before the pandemic began; as a result, these participants did not cite non-adherence as a personal risk factor for contracting COVID-19 and developing severe complications. Rather, their living and working environments, as well as any comorbidities they had, more closely informed their personal perceptions of risk.
For most participants, COVID-19-related fear and stress were manageable on a day-to-day basis, provided they could adhere to social distancing, practice regular hygiene routines, and limit visiting crowded public spaces (e.g., grocery stores, public transportation). When outbreaks occurred in their living or working environments, or when participants experienced flu-like symptoms, their worry and stress intensified. Symptoms led to heightened stress even among participants who perceived their overall risk of severe COVID-19 to be low. After testing negative, participants often maintained or increased adherence to social distancing.
Many who perceived high risk for mortality due to COVID-19 cited HIV as an underlying health condition that increased their risk. Highly stressed participants reported anxiety, with "tipping points" that felt beyond control due to constant worry and fear.
"I mean, I'm probably like 80-or 90-pounds overweight, and the smoking, and then who knows what it
would really do with the HIV?...Like I said, on a daily basis, it doesn't scare me.But whenever I've had to go in for a test, I mean, that's when all those thoughts are going on.It's like, 'Oh, my God, I'll probably die if I get it.'And then like right now, like today, with the ICU so overloaded and stuff, I wonder if they had to triage me, where they would put me, right?Am I just at more of a risk of dying, so they're not going to treat me…?"-White man, mid-40s, high COVID stress "You know every cough, every sniffle, every itch in your nose or itch in your throat was perceived as I have COVID.And even though we weren't going anywhere, and we weren't socializing with anyone, it was just that initial fear and panic...I really had a hard time with the anxiety and the depression about that.And, you know, it was a super huge trigger from an already existing gaping wound of "When am I going to die?"-White woman, late 30s, high COVID stress To cope with heightened worry and manage fears of contracting COVID-19, some participants developed compulsive behaviors.
"I started making sure that I wiped down everything with Clorox wipes that came into the house as much as possible or washed it with soap and water.Everything.Fruits and vegetables, boxed goods, canned goods, everything, everything.I started washing it and sterilizing things as much as possible, and I've carried that on ever since."-Whiteman, early 50s, high COVID stress When asked if they knew PWH in their communities who had been diagnosed with COVID-19, many participants said 'no'; in fact, many were surprised that the pandemic had not impacted PWH as much as they had initially anticipated.Despite not knowing many who had fallen ill with COVID-19, several participants feared contracting COVID-19 from routine clinic visits.
"I try as much as I can not to go [to the hospital],
because even if there is a protocol and we are taking as much as we can as precaution…you're gonna sit where other people sit and breathe, and you're in the same huge room.In my mind, even if you're wearing a mask in the room, you can catch COVID in some way, in a small percentage.It's not a 0% risk."-Whiteman, late 20s, high COVID stress Of the 24 study IDI participants, only four were diagnosed with COVID-19 prior to their interview.As such, most participants did not experience stress related to actual illness, treatment, or long-term complications.For the four participants who contracted COVID-19, all recovered (two after hospitalization) but expressed fears of contracting it again.
Social Distancing and Concern for Others Were Sources of Chronic Stress, Regardless of Fears about Contracting COVID-19
Nearly all participants expressed their support for, and followed, social distancing. However, for many, adhering to social distancing led to isolation and loneliness. This distress stemmed from several sources: reduced face-to-face interaction with family, limited means of supporting loved ones who were struggling, and few opportunities for community gatherings and group-based activities.
Limitations in one's ability to provide support to family members and friends who were unwell were commonly brought up. For some, this element of social distancing was extremely painful and led to sadness and helplessness. Anxiety and depressive symptoms intensified when family or close friends experienced severe COVID-19 symptoms.
In addition to feeling lonely, sad, or "down" due to being unable to engage in certain activities or social gatherings, many participants felt their families and communities were not taking the pandemic and social distancing guidelines as seriously as they ought to, which brought about feelings of anger and frustration. Several participants expressed concern over the political climate and the disproportionate impact of the pandemic on racial and ethnic minority groups.
Changes in Insurance and Disruptions in Medical, Dental and Social Services were Common Challenges
Overall, participants reported few challenges accessing COVID-19 testing and treatment or STI testing and treatment; however, many reported a wide variety of system navigation challenges related to changes in insurance and accessing unemployment benefits if needed. Participants also described challenges scheduling medical and dental appointments. A few also reported feeling that their health needs and concerns were invalidated by clinic and social service staff.
"Imagine that my social worker, my case manager was yelling and screaming at me, just because I have more appointments that I have to go to every week that I wanted to get some more help for bus tickets…And they refuse me for this, that and those services…That is extremely stressful."-Asian man, mid-60s, high COVID stress
Topic 2. Personal "Buffers" That Acted as Protective Factors
Participants who perceived their risk of contracting COVID-19 to be largely based on personal behavior, and therefore mostly in their control, described feeling safer as they learned more about COVID-19. Along these lines, participants described a range of protective factors that helped ease COVID-19-related stresses.
Housing and Financial Security were Essential Personal Buffers
Only a few participants expressed concern about the possibility of being evicted from their homes. These participants had lost their jobs early in the pandemic and remained unemployed for several months. Despite participants eventually finding other jobs, these experiences had a substantial impact on their finances and personal well-being. While many other participants lived in low-income housing, none were homeless at the time of their interview, and housing and finances were not major sources of stress for them. Numerous participants described their homes as a refuge, and some felt this helped protect them from COVID-19 exposures.
"I'm lucky to be living in a place that even though my contract at [employer] ended, I don't need to worry about being homeless. I don't need to worry about going to food banks and being at an extra risk. I'm really blessed to be in a stable environment."-White woman, early 40s, low COVID stress
One participant described how his low-income housing facility not only provided on-site COVID-19 testing, but also provided food and other forms of support for those experiencing symptoms compatible with COVID-19. Importantly, many participants who were retired, received social security/disability benefits or had lost their jobs, had spouses or other family members who provided financial assistance as needed. This added layer of financial security enabled participants to purchase food, pay bills (e.g., phone, internet, TV), and buy other necessities. Other factors that enhanced quality of life included employment (especially when the work environment was perceived as safe), home delivery of groceries or medications, and COVID-19 financial relief, among others.
Access to Technology Improved the Ability to Maintain Social Connections
For many, technology was the primary means of maintaining social connection early in the pandemic. Having a phone or computer and internet access was also critical for participants who were anxious about going to medical facilities and preferred to use telehealth services. Notably, though engagement in virtual mental health services was mixed, participants who attended virtual counseling sessions often felt they benefited from valuable coping tools or practices.
"I have a really fantastic therapist that I see once a week online.And I feel so fortunate.So, I have a great thing with her, and she's really helpful for me…it's one of those things where, after going to therapy for a long time, you get a really good toolbox of tools that you can use emotionally."-Whitewoman, late 30s, high COVID stress
Topic 3. Coping Strategies
Participants employed adaptive and maladaptive coping strategies to alleviate pandemic-related stress.
Cognitive Coping Techniques, Physical Exercise, Social Support, and Engaging in Meaningful Activities at Home were Helpful in Alleviating Stress and Building Resilience
Participants frequently drew parallels between when they were first diagnosed with HIV and the COVID-19 pandemic. Most had developed an array of techniques to help them manage their physical and mental health based on years of experience coping with HIV, in addition to other chronic conditions. These preexisting strategies allowed them to feel more prepared for coping with the COVID-19 pandemic.
Cognitive coping techniques and emotion-focused activities (e.g., positive reframing, meditation, mindfulness) were particularly useful for stressors perceived to be out of one's personal control. For example, after experiencing symptoms and while awaiting test results, cognitive coping techniques helped participants remain calm.
"It always goes back to HIV.With those of us that have HIV, it's that same anxiety you feel and fears that you feel when you've taken the test that, back then we had to wait seven days, and fortunately with this, we only had to wait 24 hours.And so, there was a lot of high anxiety, but I just started doing my self-meditation and thinking that, you know, there's no sense in stressing about it until you hear."-AmericanIndian/Alaska Native man, late 60s, high COVID stress Emotion-focused strategies were helpful for coping with the open-ended nature of the pandemic.For example, many participants spoke about the pandemic's early months as being the most difficult, especially as there was limited information on how long it would last.Meditation and mindfulness were frequently cited by participants as helping them practice gratitude and positive thinking over time.
For some participants, physical activity was one of the most impactful coping strategies for combatting prolonged stress and building resilience. Participants who exercised often reported improvements in depressive and anxiety symptoms. For many, exercise provided more than physical health benefits, serving as a distraction and means of keeping busy. Participants who worked, and especially those who worked outside their homes, also expressed less social distancing stress compared to those who spent much of their time at home. For participants who did not regularly exercise, common reasons provided centered around (1) not feeling safe exercising in public spaces, or (2) having underlying health conditions that made engaging in prolonged physical activity difficult.
"I was a member in a gym, and I canceled it.And then I was trying to workout outside the house.But like no one, like even if I'm running or doing some exercise outside the house, in the park or wherever, people don't wear a mask.And even working out or running with the mask is something that I honestly don't like.And I feel like I can't breathe.So, I stopped doing that."-Whiteman, late 20s, high COVID stress Participants found ways to stay connected with friends, family or neighbors via the following forms of social support: emotional (e.g., physical or virtual presence), instrumental (e.g., food deliveries, financial assistance), and informational (e.g., sharing COVID-19 information and advice), which helped alleviate depressive and anxiety symptoms.
Finally, many participants also engaged in meaningful activities at home, including home improvement projects and hobbies such as making art, reading, watching TV, and cooking new recipes. Many such activities served as important opportunities for self-reflection, personal growth and self-care.
"I think, in a weird way, this isolation at home alone has given me a really long time to recover on my own in a way that has helped me specifically, gathering my own thoughts and look back on things, and, what's the word I'm looking for?A chance to review things long term and come to some really important life decisions and choices that have helped me in a tremendous way."-White man, early 50s, high COVID stress
Job Loss and Social Distancing Stress Increased Depressive Symptoms and Exacerbated Preexisting Tendencies to Engage in Maladaptive Coping
Participants reported feeling down for a number of reasons, including job loss, inability to socialize in person or take vacations, and loss of loved ones. Some participants with a history of depression reported recurrent depressive symptoms and restarting therapy.
"I lost my job because of COVID.And so, you know, I was really depressed through the whole summer, which was part of the job loss, but I mean, just lonely.You know?I'm not a huge social person, like, I don't go out to bars and stuff.But I usually take a vacation.I go somewhere every summer and during the winter, I definitely go somewhere sunny to fight my seasonal depression.And so now, that's not happening.I'm just sort of worried about my grandma a lot…I went on a second pill for my depression."-Whiteman, mid-40s, high COVID stress While most participants had either been sober for many years or had not changed their consumption of alcohol, tobacco, marijuana, or illicit substances during the pandemic, some participants who engaged in maladaptive coping strategies prior to the pandemic (e.g., harmful or hazardous alcohol consumption, harmful substance use, excessive gambling) noted that pandemic-related stressors increased their frequency of engaging in these unhealthy habits.For some participants, a combination of stressors acting together contributed to unhealthy consumption.
"So, I drink two glasses of wine per night to keep my blood sugar down…Before though, when the pandemic first started, it was like two bottles of wine.You know, you can get really cheap wine nowadays.And I feel like that was because of the pandemic, like I don't have to be at work tomorrow, and well, I might as well get drunk.It was kind of like, I feel like that's only because of the pandemic.I felt like it's okay to do that."-Blackman, early 30s, low COVID stress Fortunately, several participants who engaged in maladaptive coping strategies were connected to, or increased participation in, mental health counseling, Alcoholics Anonymous, or other behavioral health services.
Updated Stress-Coping Model
Lived experiences of participants were used to expand our initial stress-coping model to accommodate the complex interactions between individual stressors, personal buffers, coping strategies, and mental health outcomes among PWH during the COVID-19 pandemic (Fig. 1). Our adapted model includes the following stressors: individual-level stressors (e.g., anxiety about acquiring COVID-19), inter-personal level stressors (e.g., social distancing stress or conflict with family or friends), community-level stressors (e.g., limited social opportunities) and systems-level stressors (e.g., system navigation challenges, political climate). These stressors were either short-term or longer term, and they ebbed and flowed with the pandemic. Exposure events and symptoms compatible with COVID-19 usually led to short-term stress, while most participants described prolonged stress stemming from social distancing and limited opportunities for interpersonal support. Participants who experienced multiple stressors often engaged in maladaptive coping, especially after job loss or some other adverse event. Personal buffers such as stable housing and job security protected those who had these advantages. Regardless of personal buffers, adaptive coping strategies helped most participants cope with these stressors in positive ways and reduced the negative impacts of COVID-19 on mental health outcomes. When navigating systems-level challenges, healthcare providers played a critical role in supporting PWH, by providing continuity in care as well as referrals to mental health and social services.

Fig. 1: Updated COVID-19 stress-coping model
Discussion
Our qualitative evaluation of lived experiences of PWH in Western Washington suggests that a complex interplay of stressors at the individual, interpersonal, community and systems levels affected PWH experiences coping with the COVID-19 pandemic. Participants identified both adaptive and maladaptive coping strategies used to alleviate stressors and reduce impacts on mental health outcomes. Personal buffers, including knowledge and beliefs about COVID-19, job security, and access to technology, helped promote adaptive coping responses and reduce negative mental health impacts. Building upon previous stress-coping research, participant experiences were used to expand an initial stress-coping model to characterize the interactions between identified stressors, personal buffers, coping strategies and mental health impacts among this population. To our knowledge, this is one of the first research studies to provide a stress-coping model that articulates common mechanisms through which the COVID-19 pandemic impacted the mental health of PWH. While our model is similar to the ecological model published by Cowan et al., illustrating individual, network, community, and structural features of the pandemic impacting the health of individuals with opioid use disorder [23], this and other models published in the literature do not include successful coping strategies or other factors which promote resilience [24].
Our findings align with other research studies suggesting that problem-focused and emotion-focused coping activities that improve mood, such as finding gratitude via mindfulness, exercising, and having a supportive social network, promoted well-being during the COVID-19 pandemic [25][26][27][28][29]. Similar to other studies, we found that having access to technology was an invaluable resource for maintaining social connections and giving and receiving social support during the pandemic [29]. While these strategies are not necessarily different or novel from those shown to be effective for coping with HIV prior to the COVID-19 pandemic [16][17][18], these findings underscore the importance of developing and maintaining skills and behaviors to effectively manage short and longer-term stressors. Importantly, results from this study are also supported by previous research which suggests that prior experience coping with HIV and other chronic conditions helped some PWH feel more prepared to cope with pandemic-related stressors [30,31].
Although some participants remained physically active during the pandemic, many participants did not describe physical activity as part of their daily routine. This finding is supported by a recent study conducted by Wion et al., which found that aspects of HIV self-management that were made the most difficult by the COVID-19 pandemic were "the ability to exercise, ability to manage affective symptoms, and ability to maintain social support networks" [32]. This is cause for concern, as a growing body of literature suggests that exercise improves depressive and anxiety symptoms, increases quality of life, and promotes treatment adherence among PWH [32][33][34]. Equitable access to physical activity should be further explored for PWH who have limited outdoor recreational opportunities or don't feel safe exercising outside of their homes. Likewise, additional research is needed to evaluate the clinical effectiveness of virtual physical therapy sessions, as well as patient barriers to using these services [35][36][37]. Addressing these gaps in knowledge will be particularly important for individuals who have physical limitations in their ability to exercise.
It is important to note that the COVID stress scores used for purposive sampling in this study were not a proxy for having experienced stressful events such as job loss or traumatizing events such as domestic violence. In fact, some participants who were categorized as having "low COVID stress" had experienced many of these difficulties, but these experiences did not necessarily impact their perceptions of stress related to the COVID-19 pandemic specifically. The level of perceived COVID-19 stress for each participant depended on individual-level and other contextual factors, which limited characterization and comparison of different stressors or coping strategies by COVID-19 stress score category.
While all participants described access to COVID-19 testing, PWH described challenges accessing social services, such as insurance and unemployment benefits, and scheduling routine medical and dental appointments. Thus, our findings suggest that an evaluation of how changing clinic and social service policies impacted patients' sense of safety and service access during the pandemic would be beneficial. Community advisory boards and other avenues for patient feedback will be critically important in developing patient-centered solutions that are equitable and inclusive, and in improving patient-provider relationships in future COVID-19 waves or future pandemics.
We acknowledge several limitations to this study. First, our data reflect the experiences of individuals who had access to e-mail and were willing to be contacted for an interview, responded to our invitation, and consented to use either online Zoom or telephone-to-Zoom to participate. Second, we note that the median age of survey participants was lower than that of HIV patient registry participants, at 47 versus 52, which may be due to differences in technology access or skills. Third, the response rate to our interview invitations was relatively low, at 41% overall. This sampling bias was, in part, unavoidable due to social distancing restrictions, as in-person recruitment and data collection were not possible during the study period. As a result of this limitation, we likely under-sampled individuals who experienced homelessness or extreme mental health and substance use challenges, and our results may have limited generalizability to more vulnerable populations. However, our study draws on the experiences of individuals who reflect the socio-demographic profile of many HIV clinic patients in Western Washington, in terms of age, race/ethnicity, gender identity, and sexual orientation. Fourth, the seven-item COVID stress scale used in this study was created at a time when validated tools were not available, and it has not been formally validated. Finally, of the 24 participants, only four had been diagnosed with COVID-19 prior to being interviewed. As such, we cannot provide generalizable findings regarding experiences with illness, stigma, or other adverse outcomes for those diagnosed with COVID-19.
Conclusion
In summary, we collected interview data from 24 individuals who receive their HIV care at UW clinics in Western Washington. Cognitive techniques, physical activity, and social support appeared to be the most impactful coping strategies for participants during the first year of the pandemic. Although COVID-19 vaccines and other treatments are now available and social distancing guidelines have been lifted, understanding how PWH experienced stressors and coped during the COVID-19 pandemic can help healthcare providers connect with their patients, address mental health needs and support adaptive coping strategies during future public health emergencies.
Table 1: Sociodemographic characteristics of participants, by stress category and overall. Abbreviations: AA Associate of Arts; GAD-7 general anxiety disorder 7; GED general educational development test; IQR interquartile range; N/A not applicable; PHQ-8 patient health questionnaire 8; USA United States of America. (a) Other countries of birth included China, Lebanon, and Brazil.
Table 2: Illustrative quotes corresponding to pandemic-related stressors, personal buffers, and coping strategies.

Topic 1. Pandemic-related stressors

Fear of COVID-19 acquisition: "There's still that little element, about 5%, that thinks "is this it, is this what's going to take me out?" Here I am surviving 35 years of HIV, and COVID comes in and within a week takes me out. So there is, that's still in the back of your mind whenever I go out, so even to doctor's appointments, to the dentist, and QFC [local grocery store], I'm a little nervous, especially if somebody gets on the streetcar… if somebody gets on the streetcar and is not wearing a mask, one time I got off and waited for the next streetcar, so." -American Indian/Alaska Native man, late 60s, high COVID stress

Impact of social distancing guidelines on inter-personal relationships: "I am a very social person. So, not being able to get out or do anything is really damping my spirit… I'm the person that my friends and family tend to call when they want that hug or when they need that good cry, and we haven't been able to do that. So, I spend a lot of my time on video calls, zoom calls, and things like that, trying to keep the connection open, but this has been very hard, you know, not seeing my nephew has been very, very, very hard." -Black woman, late 30s, high COVID stress

Direct experiences with COVID-19 illness: "Yeah, when my husband's brother was in the hospital, he was hospitalized for like, two months. We were praying and like, we were worried. Because we thought he was gonna die, and well, thank God he didn't. But we were anxious and very sad." -White man, late 20s, high COVID stress

Uncertainty in pandemic response management: "I think probably about three or four months in, I had a little bit of a freakout. Just because there didn't seem to be a plan in place. You know, we could see cases going up, we could see the hospitals filling up, we could see, you know, this was during that, and there was no plan. There was no plan for testing, there was no plan for anything. And I had a little bit of a moment, then. You know, but, and I think regularly, anytime I had to listen to Trump talk, I was, you know, I felt my level of anxiety and anger going "My husband and I decided that instead of just numbing ourselves by eating whatever treat we wanted, or, by having wine in the evening while we watch TV… we decided, this is ridiculous, we've had enough… it's been really nice, because we've had a little bit more socialization in little bite size interactions at the gym. But it's all still very what we feel is controlled within reason… And losing my COVID weight, and now I'm chipping away at the baby weight that I didn't lose. It feels really good to be able to focus on something other than COVID." -White woman, late 30s, high COVID stress

Helping others: "I would call them and tell them, if they need anything, let me know, I can like drop it off in front of their front door, if they need to. You know, I'm obviously not gonna go over there and give them a hug. But, you know, I wanted to make sure they knew that they weren't alone… So, I would just offer that up to them." -White man, mid-40s, low COVID stress

Accessing social support: "I still spend a lot of time on the telephone… I spend a lot of time having long conversations with people, especially those nearest and dearest, you know… And it's just kind of that thing that sometimes that call out of the blue, that interaction you weren't planning for is the thing that really lifts your spirits." -White man, early 50s, high COVID stress

Engaging in meaningful activities: "Well, there's not really much that I can do. But I pretty much tried to divert from baking and go more just up. And then when I heard the Woodward tapes of Trump, I was infuriated… So I guess, as far as like, medically, I haven't had a lot of freak outs, maybe one or two, but politically and information wise, that's probably been where the majority of my negative feelings have come from." -White man, early 50s, high COVID stress

Topic 2. Personal "buffers" or protective factors

Stable income and housing: "We haven't had any stress because my daughter's still working and she makes good money. And we're in low income housing. So that helps and… if we didn't have any income, our rent would only be $50 at the minimum, so we would have been fine." -White woman, early 60s, low COVID stress

Economic independence: "You know, I've been ridiculously fortunate. I mean, my business is doing better than before. My housing has gotten better. I'm healthier than I was a year ago. I have a great relationship… I already worked at home. I own my own business and worked from home or from coffee shops. So now, I just work from home using Zoom." -White man, mid-50s, low COVID stress

Pandemic relief: "There's been some interesting things that have come across that have been helpful, like the stimulus packages, and like, they have, DSHS has something called P-EBT [pandemic electronic benefits transfer]. So, I have two children who are in grade school, and they are on reduced lunch. And so we get paid in food stamps for every day they should have been in school getting reduced lunch." -Black gender-queer/gender non-conforming, early 30s, low COVID stress

Obtaining information: "I think the worry has changed. In the early days, I was a lot more stressed. And worried. Because of lack of information, or that nobody really knew details about it as much as they do now. So, I feel like now I'm more prepared to be able to protect myself better and make more educated choices." -White woman, early 40s, low COVID stress

Topic 3. Coping strategies

Adaptive strategies

Cognitive coping: "I mean, I take a couple of deep breaths and get aware. Like, okay,… get really tuned into what's really in front of me… What's really happening here?… As opposed to what I'm afraid of, or what some perception is. And often it's like, 'Oh, actually, I started coughing a little bit. I've been sneezing. It feels like I have a cold. I will go get tested. And I don't have a result yet. And there's nothing I can do about it until I know and I'll know, one way or another, and if it is COVID, I will be able to accept… I think that's how I usually deal with things like that. It's like, okay, what's true, and what's made up? And what's unknown? And if it is COVID, then [what] are the things for me to do?" -White man, mid-50s, low COVID stress

Physical exercise: "…like cooking, because a lot of baked goods have a lot of carbs and sugars, and I'm trying to get more into the proteins and the grains. So now like, I guess, like having to make that gravitation is giving me something to do. And pretty much just bettering my life is the only thing I've been trying do during this pandemic." -Black man, early 30s, low COVID stress
Maladaptive strategies (when in excess)

Harmful or hazardous alcohol use: "[W]hen COVID started, it took away my new job, and then it closed down my gym, and then it closed down all the trails that I was hiking at. So, it took away not only my job, but took away everything that I could do for fun. So. I had already drank a lot. So COVID turned drinking into a full-time hobby, which was not healthy for me… Not seeing my boyfriend for about half a year, that was really hard." -White man, early 30s, low COVID stress

Compulsive behaviors: "I felt myself sort of compensating by becoming overly, what's the word I want to use, compulsive about certain I don't see anybody. No. You can't. Because that's congregating. No friends, nobody to see. No, no. Even I don't like it. This is not about like or not. That's a life and death. You either observe the social distancing and quarantine or you don't. Do you have a choice to die? Go dig a grave, I would say. And I told people, too, please do not invite each other. You invite me, I'm going to report you because you're congregating." -Asian man, mid-60s, high COVID stress
Antioxidants and Male Fertility: From Molecular Studies to Clinical Evidence
Spermatozoa are physiologically exposed to reactive oxygen species (ROS) that play a pivotal role on several sperm functions through activation of different intracellular mechanisms involved in physiological functions such as sperm capacitation associated-events. However, ROS overproduction depletes sperm antioxidant system, which leads to a condition of oxidative stress (OS). Subfertile and infertile men are known to present higher amount of ROS in the reproductive tract which causes sperm DNA damage and results in lower fertility and pregnancy rates. Thus, there is a growing number of couples seeking fertility treatment and assisted reproductive technologies (ART) due to OS-related problems in the male partner. Interestingly, although ART can be successfully used, it is also related with an increase in ROS production. This has led to a debate if antioxidants should be proposed as part of a fertility treatment in an attempt to decrease non-physiological elevated levels of ROS. However, the rationale behind oral antioxidants intake and positive effects on male reproduction outcome is only supported by few studies. In addition, it is unclear whether negative effects may arise from oral antioxidants intake. Although there are some contrasting reports, oral consumption of compounds with antioxidant activity appears to improve sperm parameters, such as motility and concentration, and decrease DNA damage, but there is not sufficient evidence that fertility rates and live birth really improve after antioxidants intake. Moreover, it depends on the type of antioxidants, treatment duration, and even the diagnostics of the man’s fertility, among other factors. Literature also suggests that the main advantage of antioxidant therapy is to extend sperm preservation to be used during ART. Herein, we discuss ROS production and its relevance in male fertility and antioxidant therapy with focus on molecular mechanisms and clinical evidence.
Introduction
The mammalian spermatozoon is a cell with a high demand for energy to perform its function. Spermatozoa obtain their energy by two main metabolic pathways: glycolysis, which occurs in the principal piece of the flagellum, and oxidative phosphorylation (OXPHOS), which takes place in the mitochondria located at the midpiece of the flagellum [1]. Spermatozoa contain between 50 and 75 mitochondria in the midpiece.
Sources of ROS in Spermatozoa
Several situations result in nonphysiological levels of ROS overwhelming the natural scavenger systems (Figure 1). For example, lifestyle habits, such as alcohol consumption, smoking, and exposure to toxicants, or pathologies and conditions such as obesity, varicocele, stress, and ageing, have been associated with increased production of ROS in seminal plasma [8]. The presence of leucocytes in semen, as well as a high percentage of spermatozoa with morphological anomalies [9] or of immature spermatozoa with cytoplasmic droplets containing high amounts of enzymes, are some examples associated with high ROS levels [9][10][11][12].
Currently, human infertility is a global health problem that has led to exponential growth in the use of ART in recent years to overcome fertility problems. However, ART protocols imply sample centrifugation, light exposure, changes of oxygen concentration, pH, or temperature, and the use of culture media with a metal content that can produce hydroxyl radicals by the Haber-Weiss and Fenton reactions (see explanation below). Hence, optimization of ART protocols has been proposed to minimize artificial ROS production, for instance, by decreasing the g-force during sperm selection [13], by decreasing the spermatozoa incubation time during in vitro fertilization (IVF), which in turn decreases the time during which aberrant spermatozoa that produce more ROS are in contact with the oocyte, and by decreasing the sperm concentration or the atmospheric oxygen concentration during embryo culture under in vitro conditions [7]. In order to reduce human leucocyte contamination in raw semen, paramagnetic bead technology (Dynabead®) can be used: magnetic beads coated with antibodies against the leukocyte antigen CD45 decrease leukocyte contamination [14,15], doubling the percentage of spermatozoa-oocyte penetration, as shown in a heterologous assay using hamster oocytes [16].
Bivalent Role of ROS on Sperm Function
Mammalian spermatozoa are extraordinary cells able to survive in a different body from where they were created. They are very specialized cells having the sole purpose to deliver the paternal genome into the oocyte. However, after ejaculation, spermatozoa must undergo a complex process within the female reproductive tract named capacitation, which allows spermatozoa to fertilize the oocyte [17,18]. Capacitation is a cascade of different cellular events that imply high production and consumption of energy. Although there is controversy on the preponderant metabolic pathway, glycolysis or OXPHOS, used by spermatozoa to generate energy in the form of ATP, it seems that there are species-specific preferences [1]. OXPHOS is the most efficient pathway, obtaining about 30 molecules of ATP by oxidizing one molecule of glucose, while during glycolysis, only two molecules of ATP are obtained per molecule of glucose. It has been described that OXPHOS is the major source of ROS in spermatozoa [19]. Furthermore, ROS might play a bivalent role in sperm function: mild ROS levels boost different intracellular events that culminate in oocyte fertilization, while higher ROS levels induce sperm DNA damage and embryo miscarriage [20,21]. In a comprehensive review, Ford summarized the physiological functions of ROS in sperm capacitation [22]. It is known that soluble adenylyl cyclase (sAC) is activated by bicarbonate and Ca2+, converting ATP into cAMP and subsequently activating the PKA pathway that mediates the phosphorylation of proteins on tyrosine residues, which is used as a hallmark of sperm capacitation [23,24]. It has been proposed that ROS participate in the activation of the cAMP/PKA pathway by increasing cAMP levels, although the mechanism of cAMP production is still not clear in spermatozoa [22]. In adipocytes, it has been proposed that the mechanism of action is through inhibition of phosphodiesterase activity [25]. In human spermatozoa, it was proven that ROS action is mediated by PKA [26]. Thus, the induction of tyrosine phosphorylation was suppressed by a PKA inhibitor (H89), and the responsiveness to progesterone (sperm-oocyte fusion) when spermatozoa were coincubated with NADPH proved it to be a ROS generator [26]. In a different study, capacitated human spermatozoa showed increased levels of cAMP, an effect that was mimicked in vitro by exposure of spermatozoa to superoxide anions (O2−). Superoxide dismutase (SOD) addition inhibited cAMP levels and the sperm acrosome reaction in a concentration-dependent manner [27]. These results were confirmed by others, where superoxide anions increased cAMP concentration and capacitated spermatozoa produced H2O2, leading to an increase in protein tyrosine phosphorylation [28].
Nevertheless, when ROS production overcomes antioxidant defenses, the detrimental effects on spermatozoa can be summarized as increased LPO and DNA damage and reduced sperm motility, which are associated with lower sperm fertility (reviewed by [29]). Thus, ROS homeostasis is pivotal for male reproductive potential: ROS mediate important sperm functions, such as capacitation, but when their levels surpass these biological limits, they readily oxidize lipids and proteins at membranes and compromise sperm quality and fertilization capacity (Figure 2).
Mechanism of ROS Defense in Spermatozoa
Spermatozoa differentiation is achieved during spermiogenesis as they gradually lose their cytoplasm. By the end of the process, the cytoplasm content is very small compared to other cells, where most of the space is occupied by DNA (sperm head). This special feature results in spermatozoa possessing low intracellular antioxidant activity consisting of superoxide dismutase (SOD), nuclear glutathione peroxidase (GPx), peroxiredoxin (PRDX), thioredoxin (TRX), and thioredoxin reductase (TRD) [30]. Therefore, sperm ROS scavenger activity basically depends on the antioxidant content of the seminal plasma, which is formed mainly by a trio of enzymes where SOD converts the superoxide anion (O2−) to hydrogen peroxide (H2O2), preventing the formation of the hydroxyl radical that is an inductor of LPO. However, the H2O2 generated is a strong membrane oxidant that is rapidly eliminated either by catalase (CAT) or GPx activities, giving H2O as a product. Finally, seminal plasma also contains nonenzymatic antioxidant components such as α-tocopherol (vitamin E), ascorbic acid (vitamin C), pyruvate, urate, taurine, and hypotaurine [31].
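For reference, the enzymatic trio just described can be summarised by the following standard reactions (textbook biochemistry added here for clarity; they are not reproduced from the original article):

\[
2\,\mathrm{O_2^{\cdot-}} + 2\,\mathrm{H^+} \xrightarrow{\ \mathrm{SOD}\ } \mathrm{H_2O_2} + \mathrm{O_2},
\qquad
2\,\mathrm{H_2O_2} \xrightarrow{\ \mathrm{CAT}\ } 2\,\mathrm{H_2O} + \mathrm{O_2},
\qquad
\mathrm{H_2O_2} + 2\,\mathrm{GSH} \xrightarrow{\ \mathrm{GPx}\ } 2\,\mathrm{H_2O} + \mathrm{GSSG}.
\]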
It should be noted that most ART involves washing steps, meaning that all the natural antioxidant defenses contained in seminal plasma are removed. Likewise, this also happens after natural insemination. During ejaculation, spermatozoa are surrounded by antioxidant molecules coming from seminal plasma but once the ejaculate reaches the vagina, seminal plasma is diluted, leading in both cases to spermatozoa facing ROS. Although spermatozoa possess antioxidant scavenger systems, it seems that they are not strong enough when ROS levels exceed physiological levels, subsequently making spermatozoa highly susceptible to OS.
Lipid Peroxidation
The sperm plasma membrane contains a high proportion of polyunsaturated fatty acids (PUFAs) to generate the fluidity needed in order to accomplish the membrane fusion events associated with fertilization. This high PUFA content makes spermatozoa especially susceptible to LPO [32,33]. The highly reactive hydroxyl radical (•OH) is an inductor of LPO produced through two consecutive reactions (Figure 3): the first is the Haber-Weiss reaction, in which a ferric ion (Fe3+) in the presence of a superoxide radical (O2−) is reduced to a ferrous ion (Fe2+), followed by the Fenton reaction, where Fe2+ reacts with hydrogen peroxide (H2O2), forming Fe3+ and a hydroxyl radical. Secondary products are formed during LPO: malondialdehyde (MDA), propanol, hexanol, and 4-hydroxynonenal (4-HNE) [34], which are highly reactive and may attack other nearby PUFAs, thus initiating a chain reaction with harmful effects that eventually disrupts membrane fluidity. These secondary products are used as lipid oxidative stress biomarkers.
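Written out explicitly (standard chemistry added for clarity, not reproduced from the original article), the two consecutive reactions are the iron-reduction step of the Haber-Weiss cycle followed by the Fenton reaction:

\[
\mathrm{Fe^{3+}} + \mathrm{O_2^{\cdot-}} \rightarrow \mathrm{Fe^{2+}} + \mathrm{O_2},
\qquad
\mathrm{Fe^{2+}} + \mathrm{H_2O_2} \rightarrow \mathrm{Fe^{3+}} + \mathrm{OH^{-}} + {}^{\cdot}\mathrm{OH},
\]

with the hydroxyl radical then initiating LPO of membrane PUFAs.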
Nowadays, cryopreservation is becoming an important issue for the success of ART in humans and livestock. Although cryopreservation is routinely used, it is a tough procedure associated with deleterious effects on sperm function due to an increase of ROS production linked to LPO and thus an increase of membrane permeability [35][36][37]. In this context, the use of antioxidants as additives during cryopreservation/thawing procedure is a common strategy to counteract negative effects of ROS on sperm function.
Effects of Oral Antioxidant Intake on Male Reproductive Outcome
Currently, there is a growing trend of oral antioxidant intake to counteract the high levels of ROS found in spermatozoa and seminal plasma of subfertile or infertile men. This approach is supported by several works that describe an improvement of sperm parameters after oral antioxidant intake; among the reported improvements are sperm concentration, motility, and a decrease of DNA damage (reviewed by [38]). However, only a few works have shown the effect of antioxidant therapy on fertility outcomes. Here, we discuss the major findings of oral antioxidant intake on reproductive outcome and its endpoints, such as fertility and live birth (summarized in Table 1; abbreviations used there: NI: natural insemination, IVF: in vitro fertilization, ICSI: intracytoplasmic sperm injection, IU: international unit, PVE: prostate-vesiculo-epididymitis, LC: L-carnitine, LAC: L-acetyl-carnitine, LPO: lipid peroxidation, ↑ increase, ↓ decrease).
Carnitines
Carnitines are synthesized by the organism and found in seminal plasma at higher concentration than in spermatozoa. The l-carnitine (LC) isomer is the bioactive form [54], with a pivotal role in mitochondrial β-oxidation, acting as a shuttle of the activated long-chain fatty acids into the mitochondria [55]; l-acetyl-carnitine (LAC) is an acyl derivative of LC. Long-chain fatty acids provide energy to mature spermatozoa (with positive effects on sperm motility) and during maturation and the spermatogenic process [56]. Oral intake of LC (1 g twice/day) and LAC (0.5 g twice/day) for three months reduced ROS levels in spermatozoa and improved pregnancy (11.7%) in patients with abacterial prostate-vesiculo-epididymitis (PVE) with normal values of leucocytes, but it did not improve pregnancy at all (0%) in those PVE patients with high levels of leucocytes [40]. A year later, the same group tested patients diagnosed with abacterial PVE concomitant with high levels of leucocytes and showed that pretreatment for two months with a nonsteroidal anti-inflammatory followed by two months of oral carnitine intake achieved 23.1% pregnancy, in comparison with the four-month carnitine intake group (0%), the nonsteroidal anti-inflammatory group (6.2%), and the group receiving nonsteroidal anti-inflammatory compounds and carnitines for four months (3.8%) [41]. In another study, the effect of daily intake of LC (3 g), LAC (3 g), or a combination of LC (2 g) and LAC (1 g) was evaluated over six months, and results were followed up 9 months after the intervention, in idiopathic asthenozoospermic men (n = 60) [42]. Treated men improved the total oxyradical scavenging capacity of their seminal fluid [42]. Overall, LAC or the combined LAC + LC treatment achieved the better improvement of sperm motility and concentration. Nevertheless, although patients with lower basal values of sperm motility had a higher probability of responding to the treatment, the pregnancy rate was not improved by any treatment in comparison with the placebo control group [42]. Recently, coadministration of LC fumarate (2 g), LAC (1 g), and clomiphene citrate (50 mg) concurrently with vitamins and minerals in patients with idiopathic oligo- and/or astheno- and/or teratozoospermia (n = 173) enhanced sperm concentration, especially in those patients with multiple impaired semen parameters (oligoasthenoteratozoospermic patients), but did not improve morphology, progressive sperm motility, or pregnancy rates in comparison with the control group [44]. A meta-analysis concerning carnitine used as an oral antioxidant therapy concluded that this molecule might be effective for improving pregnancy rates, bearing in mind the limits of the patient inclusion criteria and the low number of men evaluated in each study [57].
Vitamins
The interest in vitamin E and its use as an antioxidant is due to its protective activity against ROS, which subsequently decreases LPO and therefore exerts positive effects on sperm functions, such as sperm concentration and motility [58]. However, its effects on fertility are less clear. For example, in a small clinical trial (n = 30), oral administration of vitamin E (300 mg twice daily) for three months raised the levels of vitamin E in blood serum, although human seminal plasma levels were not modified, questioning its possible effects on reproductive parameters [50]. Nevertheless, in this clinical trial, vitamin E treatment achieved an improvement in the zona pellucida binding test without any other improvement described, including ROS levels [50]. Similarly, 15 normospermic infertile men improved their fertilization rate after IVF (19.3 ± 23.3 pretreatment versus 29.1 ± 22.2 post-treatment) after one month of daily consumption of 200 mg of vitamin E. Those results were associated with lower sperm LPO levels in comparison with preintervention values [52]. In another work, oral administration of vitamin E (100 mg thrice daily) to patients with asthenospermia (n = 52) yielded three different groups of men according to the results: (i) men without improvement of their sperm motility (40%); (ii) men with improved sperm motility who did not achieve pregnancy (39%); and (iii) men with improved motility who achieved pregnancy (21%), of which 81.8% of pregnancies finished in live birth. The placebo control group did not achieve any pregnancies [47]. Later, daily intake of a combination of vitamin E and C (1 mg of each component) for two months was studied in patients where intracytoplasmic sperm injection (ICSI) had previously failed (n = 38). The results showed two different populations: (i) those where the antioxidant treatment decreased the percentage of sperm DNA damage (n = 29) and (ii) those where the treatment did not affect this parameter (n = 9) [49]. The most interesting result was observed in the responsive group, where, after ICSI, the pregnancy rate (6.9 vs. 49.3%) and implantation rate (2.2 vs. 19.2%) were improved compared with the pretreatment values, although no differences were found in embryo quality [49]. In a nonplacebo-controlled and nondouble-blind trial, daily intake of a combination of selenium (200 µg) and vitamin E (400 IU) followed for 3.5 months by infertile men (n = 690) achieved 10.8% spontaneous pregnancy [50].
Several studies have been performed looking for beneficial effects from combinations of compounds with antioxidant activity. For example, a formulation using a mix of several compounds with antioxidant activity (vitamin C, vitamin E, carnitine, folic acid, lycopene, selenium, and zinc) was evaluated using a mouse Gpx5 knock-out (KO) subjected to a second stress: scrotal heat (KO + SH; 42 °C for 30 min) [58]. Although the exact ingested quantity of this antioxidant combination could not be determined, its effects include the reversion of the sperm DNA oxidation induced in KO + SH animals and protection of the seminiferous tubules. The results showed that supplemented KO + SH animals, compared with nonsupplemented animals, had double the fertilization rate (73.7 vs. 35.2%), and fetal resorption was halved (8.9 vs. 17.8%) [58]. In another trial, infertile human patients with oligo- and/or astheno- and/or teratozoospermia, with or without varicocele (n = 104), taking a combination of antioxidants (vitamin C 90 mg, vitamin B12 1.5 µg, LC 1 mg, fumarate 725 mg, LAC 500 mg, fructose 1000 mg, CoQ10 20 mg, zinc 10 mg, and folic acid 200 µg) were studied for six months. The results showed that the individuals from the treated group, regardless of whether they suffered from varicocele or not, presented improved sperm concentration and total sperm motility [43]. Moreover, after treatment, 22.2% (10/45) of supplemented patients achieved pregnancy, while in the control group, only 4.1% (2/49) of the couples were pregnant [43]. A closer analysis of the men from the supplemented group revealed that only 4.8% (1/21) of the patients suffering from varicocele achieved pregnancy after treatment, while the nonvaricocele group achieved 37.5% (9/24) pregnancy [43]. A different group studied the effect of a commercial multiantioxidant supplement (vitamin E 400 IU, vitamin C 100 mg, lycopene 6 mg, zinc 25 mg, selenium 26 µg, folate 0.5 mg, garlic 1000 mg) for three months on 60 men with high levels of DNA fragmentation and poor sperm motility and membrane integrity [51]. The treatment roughly doubled the pregnancy rate (63.9 vs. 37.5%), implantation rate (46.2 vs. 24%), and viable pregnancy rate (38.5 vs. 16%) versus the placebo group, without any modification of sperm parameters, fertilization, or embryo quality rates [51]. However, this work was later criticized because of the experimental design, particularly the low number of individuals in the trial, the unequal distribution of individuals between the placebo (n = 16) and treatment (n = 36) groups, and the suitability of the statistical analysis used [59].
Contradictory results were found when men were supplemented with different oral antioxidants after varicocelectomy. Oral intake of vitamin E (300 mg twice/day) for 12 months (n = 40) improved sperm concentration and the percentage of motile spermatozoa, although these improvements were not significant compared with the control [60]. Recently, a multiple antioxidant combination (l-carnitine fumarate 1 g, acetyl-l-carnitine HCl 0.5 g, fructose 1 g, citric acid 50 mg, vitamin C 90 mg, zinc 10 mg, folic acid 200 µg, selenium 50 µg, coenzyme Q-10 20 mg, and vitamin B12 1.5 µg) was tested for six months after varicocelectomy (n = 90) [45]. Surgery improved the following sperm parameters: sperm concentration, the percentage of motile spermatozoa and progressive motility, and the percentage of spermatozoa with normal morphology. Moreover, treated men achieved 29% pregnancy versus 17.9% in the placebo group [45].
Zinc
Zinc is a metalloprotein cofactor for DNA transcription and protein synthesis. Moreover, zinc is necessary for the maintenance of spermatogenesis and the optimal function of the testis, prostate, and epididymis [61], in addition to its antioxidant properties preventing LPO [62]. A trial using zinc sulphate as an antioxidant therapy administered orally (250 mg twice daily) for three months reported an improvement in the reproductive outcome of asthenozoospermic men (n = 100), particularly in the sperm parameters of concentration, motility, and membrane integrity (hypoosmotic swelling test). A decrease of antisperm antibodies in seminal plasma was also noticed, without modification of zinc levels in seminal plasma [53]. Pregnancies were also improved in couples where the men underwent treatment when compared with placebo, 22.5% (11/49) versus 4.3% (2/48), respectively [53]. In another trial with only 14 patients and no control group, sperm parameters were improved after zinc treatment (220 mg daily for four months), 21.4% (3/14) of patients achieved pregnancy, and zinc levels in seminal plasma increased [52]. Although beneficial effects on reproductive outcome have been reported after zinc intake, the low number of studies and of treated subjects, often without a proper control, does not allow firm conclusions about the possible positive effects of zinc intake on reproductive outcome.
Natural Compounds-Traditional Medicine
Natural compounds have been used traditionally to treat diseases. For instance, beneficial effects on reproductive outcome have been reported using products derived from tea (Camellia sinensis (L.)), which is the second most consumed beverage after water [63]. For example, an in vitro experiment using green tea extract or epigallocatechin-3-gallate (EGCG) added to human spermatozoa media improved sperm capacitation hallmarks, such as tyrosine phosphorylation and cholesterol efflux, through the estrogen receptor pathway [64]. EGCG has also been shown to have beneficial effects when extreme stresses are applied to male rodents [65,66]. Interestingly, adverse effects induced by artificial testicular hyperthermia were ameliorated by oral administration of green tea extract [65]. Positive effects were visible after 28 days of heat stress induction, improving sperm concentration, the percentage of motile and progressive spermatozoa, and sperm membrane integrity [65]. Another example of the beneficial effects of EGCG was described when intraperitoneal administration (50 mg/kg) protected against testicular injury induced by ionizing radiation in rats [66]. Treated animals recovered testicular function, with an improvement in the number of pups per litter and reductions in LPO (TBARS) and protein carbonyl levels [66]. EGCG's mechanism of action is via the mitogen-activated protein kinase/BCL2 family/caspase 3 pathway [66]. In another work, two different tea extracts, white and green, were evaluated as additives to improve the storage of rat sperm at room temperature for ART. The authors found doubled levels of epigallocatechin (EGC) and EGCG in white tea in comparison with green tea [67], highlighting the variability associated with the type of tea extract used. Moreover, although both extracts had positive effects, the white tea extract had better ferric reducing antioxidant power than the green tea extract and the control. The beneficial effects were proportional to the concentration used, with 1 mg/mL of white tea extract being the best concentration tested for improving sperm survival and decreasing LPO over 72 hours of storage at room temperature [67]. Encouraged by the antioxidant effects of white tea on sperm parameters, the same group explored the potential of oral administration of the extract to improve the reproductive features of prediabetic (PreDM) males, which are known to be impaired due to oxidative stress [68]. PreDM is characterized by mild hyperglycemia, glucose intolerance, and insulin resistance and has been related to infertility or subfertility problems in males [69]. Using the rat as an animal model, drinking white tea counteracted the negative effects of PreDM on the male reproductive tract. For example, white tea consumption improved testicular antioxidant power and decreased lipid peroxidation and protein oxidation [68]. Ingestion of white tea also restored sperm motility and returned the fraction of sperm showing morphological anomalies to normal levels [68].
Antioxidants as a Tool to Improve Male ART Outcomes
Human infertility already affects one in six couples worldwide [70], and male factors contribute to 20-50% of infertility cases [71]. Infertile men tend to have higher ROS levels than fertile men. To counteract fertility problems, different ART have been developed, mainly IVF and ICSI. In both cases, gametes are extracted from the body and incubated under in vitro conditions and, after a while, an embryo is transferred into the uterus. It should be noted that, due to legislation and ethical issues, it is easier to perform experiments in animal models than in humans to test antioxidant effects on different ART. The interest in the use of antioxidants to improve sperm parameters is not new. As early as 1943, in a study focused on sperm metabolism and oxygen consumption, MacLeod showed that sperm produce hydrogen peroxide, which has a deleterious effect on sperm motility and can be counterbalanced by addition of catalase to the media [72]. Later, some authors followed the same rationale and tried to adapt MacLeod's hypothesis to different ART, such as cryopreservation, IVF, and ICSI.
Sperm conservation for long periods of time in liquid nitrogen (cryopreservation) is designed to keep sperm viable. From a practical point of view, cryopreservation is a tool to preserve male fertility before, for example, chemotherapy, radiotherapy, vasectomy, or exposure to toxicants, or simply to allow time to screen donors for infectious agents, such as the human immunodeficiency or hepatitis B viruses [73]. On the other hand, from the animal industry point of view, cryopreservation aims to maximize the number of services (inseminations) that can be performed from a single ejaculate, ensuring the quality of the genetic material preserved, and allowing the transportation of this genetic material to distant places. Cryopreservation is also of special interest for preserving endangered species. However, cryopreservation is not a harmless technique, inducing DNA damage, LPO, and other adverse effects [74]. Moreover, cryopreservation, like other ART, involves centrifugation, which is associated with the production of ROS [13], and the removal of seminal plasma, which contains the main sperm antioxidant scavenger systems.
Antioxidant supplementation of cryopreservation media has been proposed as a way to overcome ROS production and OS status in spermatozoa (summarized in Table 2). For example, supplementation with a synthetic phenolic antioxidant, butylated hydroxytoluene (BHT), during boar sperm cryopreservation improved post-thawing sperm survival and decreased MDA levels at a concentration of 0.4 mM BHT, and embryo development was improved (28.8% vs. 15.8%) without modification of the embryo cleavage percentage in comparison to the control [75]. Later, it was described that 1 mM BHT improved sperm antioxidant activity, pregnancy rate (86.7 vs. 63.6%), the number of gilts farrowing (86.7 vs. 45.4%), and the number of piglets born (10.8 ± 1.6 vs. 8.2 ± 2.2) after performing intrauterine artificial insemination (IUI) using cryopreserved sperm versus control [76]. Subsequently, in a multitest in which four different compounds with antioxidant activity (BHT 2 mM, ascorbic acid 8.5 mg/mL, hypotaurine 10 mM, and cysteine 5 mM) were added during goat sperm cryopreservation, LPO was decreased, but only ascorbic acid and BHT significantly improved fertility in comparison with the control after performing artificial insemination (AI) [77]. The interest in the amino acid cysteine in the fight against the impact of ROS on the cell is due to the fact that it is a limiting substrate for glutathione synthesis [90]. The antioxidant properties of cysteine (2 mM) and taurine (2 mM; a cysteine derivative) were controversial when they were used during the cryopreservation of bull spermatozoa [80]. Taurine decreased GSH and SOD levels, while CAT levels were five times higher than in the control, but MDA levels were also higher. However, cysteine increased SOD and CAT levels without an effect on MDA levels [80]. The nonreturn rate after IUI was not modified by either of the compounds; however, a nonsignificant (p > 0.05) tendency towards improvement was observed in cysteine-treated straws, 74.54% (41/55), in comparison to the control, 57.14% (28/49) [80]. Similar results were obtained when a higher concentration of cysteine (5 mM) and trehalose (25 mM) were added to bull cryopreservation media: the antioxidant features of these compounds were not proved, as neither MDA nor GPx levels were improved [82]. Furthermore, no improvement in the nonreturn rate was found after IUI [82]. Similarly, using cysteamine (5 µM), a decarboxylated derivative of cysteine, and lycopene (500 µg/mL) during bull sperm cryopreservation, no differences were found in the nonreturn rate [83]. In another study, the authors used N-acetyl-l-cysteine (NAC), an acetylated cysteine derivative which has been shown to effectively reduce ROS formation when H2O2 stress was applied to thawed bull spermatozoa [87]. However, neither sperm DNA integrity nor the number of blastocysts was improved after performing ICSI using spermatozoa cryopreserved in the presence of NAC [87]. Nevertheless, in an IVF study on mice using fresh spermatozoa, where gametes and embryos were stressed by incubation under a 20% oxygen atmosphere (above the physiological levels of 2-8% in the oviduct and uterus [91]), a combination of substances with antioxidant activity was tested (LAC 10 µM, NAC 10 µM, α-lipoic acid 5 µM) in either IVF media, embryo culture media, or both. Treated samples had lower intracellular levels of H2O2, accelerated embryo development, and significantly increased trophectoderm (TE) cell numbers, inner cell mass (ICM) cell numbers, and total cell numbers [88].
All these effects were enhanced when the antioxidant combination was added during the whole process [88].
Positive effects were also described when thawed bull spermatozoa were supplemented with an antioxidant combination (zinc chloride 10 µg/mL, D-aspartic acid 500 µg/mL, and coenzyme-Q10 40 µg/mL), obtaining a better percentage of total motile spermatozoa and progressive motility and a decrease of DNA fragmentation throughout sperm incubation [89]. Moreover, antioxidant supplementation improved embryo development. Although no differences were found in the cleavage percentage, the number of blastocysts that reached the eight-cell stage was 37.1% in the control versus 51.7% in the treated group [89].
Following the rationale of MacLeod [72], antioxidant enzymes were added to counteract the adverse effects of ROS on spermatozoa and thereby improve sperm cryopreservation. Enzymes with antioxidant properties were added to bull cryopreservation media - 0.5 and 1.0 mM of reduced glutathione (GSH), or a combination of 0.5 mM of GSH and 100 U/mL of SOD - but did not modify the nonreturn rates [84]. In another study, CAT (200 IU/mL) was used to cryopreserve epididymal spermatozoa of the ibex (Capra pyrenaica) obtained postmortem [79]. At this concentration, no differences were found in sperm parameters, but negative effects were described on fertility: fewer pronuclear zygotes (25.5% control vs. 13.2% treated) and fewer cleaved embryos (16.7% control vs. 7.6% treated) were obtained from treated samples after IVF [79].
Natural compounds with antioxidant activity have also been tested in ART. Metformin, a biguanide isolated from Galega officinalis and used worldwide as a treatment for type II diabetes [92], was recently added to chicken sperm cryopreservation media due to its antioxidant properties, among others [93]. Cryopreserved mouse spermatozoa treated with metformin displayed better motility and sperm viability, a doubled fertilization rate and embryo development, and a halved DNA fragmentation rate [86]. These promising results of supplementing cryopreservation media with metformin appeared to be related to the activation of 5'AMP-activated protein kinase (AMPK). However, negative results have recently been described when metformin (1 and 10 mM) was used to improve boar sperm preservation at 17 °C, decreasing sperm motility and mitochondrial potential [94]. In an in vitro study performed on human spermatozoa kept at physiological temperature, metformin (10 mM) induced a reduction of sperm motility, where the mechanism of action was associated with PKA pathway inhibition [95]. Boar spermatozoa were also coincubated during cryopreservation with rosemary (Rosmarinus officinalis) extract, cysteine (10 mM), or a combination of both [81]. Although both compounds enhanced some sperm properties, the most noticeable effects were found with the rosemary extract, which enhanced total sperm motility and progressive motility and prevented acrosome membrane damage three hours post-thawing in comparison to the control [81]. Rosemary-treated spermatozoa yielded better cleavage percentages without affecting the blastocyst formation rate after performing IVF [81].
Melatonin (MLT) is a hormone endogenously synthesized mainly by the pineal gland. It has been detected in human seminal fluid [96], and melatonin receptors have been described in the sperm of several species [97]. MLT's antioxidant property was tested in cryopreserved human spermatozoa [98]. MLT increased the expression of the antioxidant-related gene Nrf2 as well as its downstream genes SOD2, CAT, HO-1, and GSTM1, leading to lower ROS levels and LPO [98]. On the other hand, MLT (1 µM) used during boar semen preservation at 17 °C only showed a modest membrane-protective effect [99]. By contrast, cryopreserved ram sperm supplemented with MLT achieved higher viability rates, higher percentages of total motile and progressively motile spermatozoa, and higher DNA integrity [85]. However, after IVF, only a faster first embryonic division, without any other difference in embryo output, was observed in those samples supplemented with MLT [85].
Antioxidants as a Therapy to Improve Reproduction Outcome
Sperm produce ROS as a consequence of their high aerobic metabolism. ROS production at nonphysiological levels overwhelms cellular scavenger systems and results in deleterious effects, such as lipid and protein peroxidation and DNA damage. Infertile men are known to possess pathological ROS levels, leading to sperm DNA fragmentation and lower ART outcomes [29]. Thus, to deal with ROS overproduction and its deleterious effects at the cellular level in the male reproductive system, different strategies have been tested: (i) oral antioxidant consumption and (ii) antioxidants used as additives to media during ART.
The literature concerning the use of compounds with antioxidant activity and the improvement of sperm function is extensive, and many studies report benefits. Nevertheless, others have found negative results [101,102], questioning the beneficial impact of antioxidant prescription and arguing that there is no clear evidence supporting the prescription of antioxidants [103], or even that overexposure to antioxidants can lead to other pathologies [104]. Others have found that administration of high doses of antioxidants has harmful effects on health [105,106]. Most trials have the handicap of using a low number of men or are not double-blind or placebo-controlled. Moreover, the heterogeneity of the treatments and concentrations used, as well as of the experimental designs, makes it hard to establish solid conclusions. Studies with greater numbers of patients should be performed, including large control groups, to address the effects of oral antioxidant consumption on reproductive outcome. Moreover, arbitrary formulations of antioxidants should be avoided, and classical pharmacological concentration-dependent experiments should be performed in order to find effective concentrations of antioxidants. Rather than with oral consumption, better reproductive outcomes are described when antioxidants are implemented in ART, especially during cryopreservation-thawing procedures. Antioxidant supplementation decreased LPO and improved reproductive outcome. Antioxidant concentrations should be adapted to each form of ART. The future of antioxidant therapy to improve ART involves the development of nonintrusive technologies that can discern between sperm with or without lipid peroxidation or DNA damage, allowing physicians to inject healthy sperm into the oocyte by ICSI.
|
2019-04-10T07:24:45.914Z
|
2019-04-01T00:00:00.000
|
{
"year": 2019,
"sha1": "d5cb9c178cf688dfb82f923b30090bba90b1d7f9",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/antiox8040089",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d5cb9c178cf688dfb82f923b30090bba90b1d7f9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
118038475
|
pes2o/s2orc
|
v3-fos-license
|
eLISA: Astrophysics and cosmology in the millihertz regime
This document introduces the exciting and fundamentally new science and astronomy that the European New Gravitational Wave Observatory (NGO) mission (derived from the previous LISA proposal) will deliver. The mission (which we will refer to by its informal name "eLISA") will survey for the first time the low-frequency gravitational wave band (about 0.1 mHz to 1 Hz), with sufficient sensitivity to detect interesting individual astrophysical sources out to z = 15. The eLISA mission will discover and study a variety of cosmic events and systems with high sensitivity: coalescences of massive black hole binaries, brought together by galaxy mergers; mergers of earlier, less-massive black holes during the epoch of hierarchical galaxy and black-hole growth; stellar-mass black holes and compact stars in orbits just skimming the horizons of massive black holes in galactic nuclei of the present era; extremely compact white dwarf binaries in our Galaxy, a rich source of information about binary evolution and about future Type Ia supernovae; and possibly most interesting of all, the uncertain and unpredicted sources, for example relics of inflation and of the symmetry-breaking epoch directly after the Big Bang. eLISA's measurements will allow detailed studies of these signals with high signal-to-noise ratio, addressing most of the key scientific questions raised by ESA's Cosmic Vision programme in the areas of astrophysics and cosmology. They will also provide stringent tests of general relativity in the strong-field dynamical regime, which cannot be probed in any other way. This document not only describes the science but also gives an overview of the mission design and orbits.
1 Introduction

Figure 1: The eLISA orbits. The constellation is shown trailing the Earth by about 20 degrees (or 5 × 10^10 m) and is inclined by 60 degrees with respect to the ecliptic. The trailing angle will vary over the course of the mission duration from 10 degrees to 25 degrees. The separation between the spacecraft is L = 1 × 10^9 m.
length. For practical reasons, this measurement is broken up into three distinct parts (see figure 3): the measurement between the spacecraft, i.e. between the optical benches that are fixed to each spacecraft, and the measurements between each of the test masses and its respective optical bench. These measurements are recombined in a way that allows us to reconstruct the distance between the test masses in a manner that is insensitive to noise in the position of the spacecraft with respect to the test masses.
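Schematically, and ignoring signs, calibration factors and the details of the actual data combination (this is our own illustrative paraphrase, not an equation from the text), the recombination can be thought of as

\[
\delta L_{\mathrm{TM_1\,TM_2}} \;\simeq\; \delta L_{\mathrm{SC_1\,SC_2}} \;+\; \delta x_{\mathrm{TM_1/SC_1}} \;+\; \delta x_{\mathrm{TM_2/SC_2}},
\]

so that displacements of each spacecraft relative to its test mass, which enter both the inter-spacecraft and the local measurements, cancel in the combination.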
A second key feature of the eLISA concept is that the test masses are protected from disturbances as much as possible by a careful design and the "drag-free" operation. To establish the drag-free operation, a housing around the test mass senses the relative position of test mass and spacecraft, and a control system commands the spacecraft thrusters to follow the free-falling mass. Drag-free operation reduces the time-varying disturbances to the test masses caused by force gradients arising in a spacecraft that is moving with respect to the test masses. The requirement on the power spectral density of the residual acceleration of the test mass is expressed as a frequency-dependent limit, where f denotes the frequency.
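The explicit form of this requirement is not reproduced in the text above. A form commonly quoted for eLISA/NGO in the literature, and consistent with the noise model used for the analytic sensitivity sketch given later, is (an assumption on our part, not a value taken from this document):

\[
\sqrt{S_{\mathrm{acc}}(f)} \;\lesssim\; 3\times10^{-15}\,\sqrt{1+\frac{10^{-4}\,\mathrm{Hz}}{f}}\;\;\mathrm{m\,s^{-2}\,Hz^{-1/2}},
\]

i.e. a flat acceleration-noise floor over the measurement band that is allowed to relax towards frequencies below about 0.1 mHz.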
The third key feature, the distance measuring system, is a continuous interferometric laser ranging scheme, similar to that used for radar-tracking of spacecraft. The direct reflection of laser light, such as in a normal Michelson interferometer, is not feasible due to the large distance between the spacecraft. Therefore, lasers at the ends of each arm operate in a "transponder" mode. A laser beam is sent out from the central spacecraft to an end spacecraft. The laser in the end spacecraft is then phase-locked to the incoming beam, thus returning a high-power phase replica. The returned beam is received by the central spacecraft and its phase is in turn compared to the phase of the local laser. A similar scheme is employed for the second arm. In addition, the phases of the two lasers serving the two arms are compared within the central spacecraft. The combined set of phase measurements, together with some auxiliary modulation, allows the relative optical path changes to be determined with simultaneous suppression of the laser frequency noise and clock noise below the secondary (acceleration and displacement) noise. The displacement noise has two components, one of which is the shot noise, with a required power spectral density specified by the mission requirements.
Figure 3: Partition of the eLISA measurement. Each measurement between two test masses is broken up into three different measurements: two between the respective test mass and the spacecraft and one between the two spacecraft (S/C). As the noise in the measurement is dominated by the shot noise in the S/C-S/C measurement, the noise penalty for the partitioning of the measurement is negligible. The blue (solid) dots indicate where the interferometric measurements are taken.

Figure 4: Sensitivity of eLISA (averaged over all sky locations and polarisations) versus frequency: the solid red curve is obtained numerically using the simulator LISACode 2.0 (Petiteau et al., 2008) and the dashed blue curve is the analytic approximation based on equation 5. For reference, we also depict the sensitivity curve of LISA (dotted, green curve).
According to the requirements, eLISA achieves the strain noise amplitude spectral density (often called sensitivity) shown in figure 4, which can be analytically approximated as h̃(f) = 2 δL̃(f)/L = √S(f), with S(f) given by equation 5. This allows the detection of a strain of about 3.7 × 10^-24 in a 2-year measurement with an SNR of 1 (displacement sensitivity of 11 × 10^-12 m/√Hz over a path length of 1 × 10^9 m). The feasible reduction of disturbances on the test masses and the displacement sensitivities achievable by the laser ranging system yield a useful measurement frequency bandwidth from 3 × 10^-5 Hz to 1 Hz (the requirement is 10^-4 Hz to 1 Hz; the goal is 3 × 10^-5 Hz to 1 Hz).
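Since equation 5 itself is not reproduced above, the following short Python sketch evaluates an analytic eLISA sensitivity curve using the noise model commonly quoted for eLISA/NGO in the literature (arm length L = 1 × 10^9 m, acceleration noise, shot noise and other measurement noise). The numerical coefficients are assumptions on our part, chosen to be consistent with the 11 pm/√Hz displacement sensitivity quoted above, and may differ in detail from the official equation 5.

import numpy as np

C = 299792458.0   # speed of light [m/s]
L = 1.0e9         # eLISA arm length [m]

def elisa_sensitivity(f):
    """Approximate sky- and polarisation-averaged strain sensitivity [1/sqrt(Hz)]."""
    f = np.asarray(f, dtype=float)
    # Residual test-mass acceleration noise as an equivalent displacement PSD
    # [m^2/Hz]; the coefficient and the 1/f relaxation below 1e-4 Hz are assumed.
    s_acc = 1.37e-32 * (1.0 + 1.0e-4 / f) / f**4
    # Shot noise and other measurement noise [m^2/Hz]; together they correspond
    # to roughly the 11 pm/sqrt(Hz) displacement sensitivity quoted in the text.
    s_sn, s_omn = 5.25e-23, 6.28e-23
    # Response roll-off above the arm transfer frequency ~0.41 c / (2L).
    roll_off = 1.0 + (f / (0.41 * C / (2.0 * L)))**2
    s_h = (20.0 / 3.0) * (4.0 * s_acc + s_sn + s_omn) / L**2 * roll_off
    return np.sqrt(s_h)

if __name__ == "__main__":
    for f in (1e-4, 1e-3, 1e-2, 1e-1):
        print(f"{f:.0e} Hz : {elisa_sensitivity(f):.2e} Hz^-1/2")

Evaluating this sketch gives a sensitivity floor of order 10^-20 Hz^-1/2 in the millihertz range, which can be compared qualitatively against the dashed analytic curve in figure 4.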
Ultra-Compact Binaries

1 Overview
The most numerous sources in the low-frequency gravitational wave band are ultra-compact binary stars: double stars in which two compact objects, such as white dwarfs and neutron stars, orbit each other with short periods. They have relatively weak gravitational wave signals in comparison to massive black hole binaries, but are numerous in the Galaxy and even the Solar neighbourhood.
Several thousand systems are expected to be detected individually, with their parameters determined to high precision, while the combined signals of the millions of compact binaries in the eLISA band will form a foreground signal. This is in contrast to less than 50 ultra-compact binaries known today. The number of detections will allow for detailed study of the entire WD binary population. In particular, the most numerous sources are double white dwarfs, which are one of the candidate progenitors of type Ia supernovae and related peculiar supernovae. eLISA will determine the merger rate of these binaries. The detailed knowledge of the ultra-compact binary population also constrains the formation of these binaries and thus many preceding phases in binary evolution. This has a strong bearing on our understanding of many high-energy phenomena in the Universe, such as supernova explosions, gamma-ray bursts and X-ray sources, as they share parts of the evolution history of the binaries detectable by eLISA.
As many of the Galactic sources are rather close (within a few kpc), they will be detectable at high SNR (often larger than 50), allowing detailed studies of individual binaries. For many hundreds, the frequency and phase evolution can be studied, enabling the study of the physics of tides and mass transfer in unprecedented detail. The extreme conditions of short orbital periods, strong gravitational fields and high mass-transfer rates are unique in astrophysics.
The information provided by eLISA will be different from what can be deduced by electromagnetic observations. In particular, eLISA's capability to determine distances and inclinations, as well as the fact that the gravitational wave signals are unaffected by interstellar dust, provide significant advantages over other detection techniques. Compared to Gaia, eLISA will observe a quite different population. Gravitational wave observations allow us to determine the distances to binaries that are right in the Galactic centre rather than to those close to the Sun. The distance determinations will make it possible to map the distribution of many compact binaries in the Galaxy, providing a new method to study Galactic structure. The inclination determinations allow the study of binary formation by comparing the average angular momentum of the binaries to that of the Galaxy. Electromagnetic observations and gravitational wave observations are complementary to one another; dedicated complementary observing programs as well as public data releases will allow simultaneous and follow-up electromagnetic observations of binaries identified by eLISA.
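The reason gravitational wave observations yield distances is not spelled out above; the standard leading-order argument (textbook general relativity, added here for clarity) is that for a nearly monochromatic binary the frequency drift determines the chirp mass ℳ, after which the measured strain amplitude fixes the distance d:

\[
\dot f = \frac{96}{5}\,\pi^{8/3}\left(\frac{G\mathcal{M}}{c^{3}}\right)^{5/3} f^{11/3},
\qquad
h \simeq \frac{4}{d}\,\frac{(G\mathcal{M})^{5/3}}{c^{4}}\,(\pi f)^{2/3}.
\]

Measuring f, ḟ and h for a binary therefore gives ℳ and then d directly, without any electromagnetic calibration.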
A number of guaranteed detectable sources are known to date from electromagnetic observations. Some of these can be used to verify instrument performance by looking for a gravitational signal at twice the orbital period and comparing the signal with expectations. In addition, once eLISA has detected several nearby binaries and determined their sky position they can be observed optically, thus providing an additional quantitative check on instrument sensitivity.
Instrument verification
There are currently about 50 known ultra-compact binaries. They come in two flavours: systems in which the two stars are well apart, called detached binaries, and systems in which the two stars are so close together that mass is flowing from one star to the other, called interacting binaries (see figure 5).
A subset of the known ultra-compact binaries have been recognised as instrument verification sources, as they should be detected in a few weeks to months and thus can be used to verify the performance of the instrument (Stroeer and Vecchio, 2006). The most promising verification binaries, shown as green squares in figure 6, are the shortest-period interacting binaries HM Cnc (RX J0806.3+1527), V407 Vul, ES Cet and the recently discovered 12 minute period detached system SDSS J0651+28 (Brown et al., 2011), whose lightcurve is shown in figure 8. For a decade it has remained unclear if the measured periods of HM Cnc and V407 Vul were actually orbital periods, but recent results from the Keck telescope on HM Cnc show conclusively that this system has an orbital period of 5.4 minutes. As V407 Vul has almost identical properties, this implies that this also really is a binary with an orbital period of 9.5 minutes. As the signal from the verification binaries is essentially monochromatic with a well known frequency within the eLISA mission time, astrophysical effects such as those discussed in section 4 will not hamper their detection. As more and more wide-field and synoptic surveys are completed, the number of ultra-compact binaries is gradually increasing and is expected to continue to do so in the future. Already several new binaries have been found in the SDSS and the PTF (Levitan et al., 2011), while surveys such as Pan-STARRS, the EGAPS and, in the future, the LSST will also find new systems. However, most of the systems found so far have relatively long orbital periods (longer than about 30 minutes). Two pilot surveys in principle capable of finding ultra-compact binaries with periods less than 30 minutes are underway or will start soon: the RATS (Barclay et al., 2011) and the OmegaWhite survey.
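As a quick check of why these systems are ideal verification sources (our own arithmetic, not taken from the text): a circular binary radiates predominantly at twice its orbital frequency, so

\[
f_{\mathrm{GW}} = \frac{2}{P_{\mathrm{orb}}} \approx \frac{2}{5.4\times 60\ \mathrm{s}} \approx 6.2\ \mathrm{mHz}
\]

for HM Cnc, about 3.5 mHz for V407 Vul (P = 9.5 min) and about 2.8 mHz for SDSS J0651+28 (P = 12 min), all well inside the most sensitive part of the eLISA band.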
Interacting ultra-compact binaries with neutron star accretors are strong X-ray sources and new discoveries are expected, both through the continued monitoring of the sky to search for X-ray transients with RXTE, MAXI and other satellites, and through dedicated X-ray and optical surveys of the Galactic bulge that are currently under way (Jonker et al., 2011). With these developments we expect that several tens of verification sources should be available for eLISA, allowing detailed tests of the performance of the instrument.
eLISA as a workhorse: thousands of new binaries
Ultra-compact binaries will completely dominate the number of source detections by eLISA. Current estimates suggest the number of resolved compact binaries that will be detected by eLISA to be in the thousands (Webbink, 2010). We provide a visual impression in figure 6 by showing the 100 (red dots) and 1000 (black dots) strongest binaries from a Monte Carlo realization of the Galactic compact white dwarf binary population. The shortest period systems will be the most numerous, the majority having periods between 5 and 10 minutes. eLISA will revolutionise our knowledge of such a population, especially given that only two of the known fifty sources have periods less than ten minutes. As these systems are relatively short lived and faint, there is no hope of detecting them in significant numbers by any means other than gravitational radiation, as only several thousand are expected to exist in the whole Galaxy. Their detection will allow us to test different models for the common-envelope phase, a significant uncertainty in our understanding of binary evolution and many high-energy phenomena. The internal statistical accuracy delivered by the sheer number of detected sources will ensure that the common-envelope phase will be put to the most critical test expected in the midterm future. The same population can be used to constrain models for type Ia supernovae and peculiar supernovae, as well as the formation of ultra-compact binaries in globular clusters.
The outcome of the common envelope phase
Only a minority of the stars in the Universe are single, leaving the majority to be part of a binary, a triple or a higher-order system. On the order of half of all binaries form with orbital separations small enough that the stars will interact as the components evolve into giants or supergiants. Especially for low-mass stars, the majority of interactions are unstable and will lead to runaway mass transfer. Based on the observed short orbital periods of binaries that have passed this stage, it is argued that somehow the companion of the giant ends up inside the giant's outer layers. During that common envelope phase, (dynamical) friction reduces the velocity of the companion, leading to orbital shrinkage and transfer of angular momentum from the orbit into the envelope of the giant. Along with angular momentum, orbital energy is deposited in the envelope, whose matter is then unbound from the giant's core, leading to a very compact binary consisting of the core of the giant and the original companion (Paczynski, 1976).
Figure 6: Strain amplitude spectral density versus frequency for the verification binaries and the brightest binaries expected from a simulated Galactic population of ultra-compact binaries. The solid line shows the sensitivity of eLISA. Verification binaries are indicated as green squares with their names; blue squares are other known binaries. The strongest 100 simulated binaries are shown in red, the strongest 1000 as black dots. The integration time for the binaries is two years. Based on Brown et al. (2011) and Roelofs et al. (2006, 2010) for the known binaries and Nelemans et al. (2004) for the simulation.
Virtually all compact binaries and most of the systems giving rise to high-energy phenomena (such as X-ray binaries, relativistic binary pulsars and possibly gamma-ray bursts) have experienced at least one common-envelope phase. Given the importance of this phase in high-energy astrophysics, our understanding of the physics and our ability to predict the outcome of the common-envelope phase are poor. Theoretical progress to understand the phase from first physical principles is slow (e.g. Ricker, 2010, Taam and Sandquist, 2000) and the standard formalism described above has been challenged by observational tests (De Marco et al., 2011, Nelemans and Tout, 2005). Comparison of the parameters of the thousands of binaries detected by eLISA with model predictions will provide a direct test of the different proposed outcomes of the common-envelope phase and of our understanding of the preceding binary evolution in general.
Type Ia supernovae and sub-luminous supernovae
Type Ia supernovae have been the heralds of a new paradigm in Cosmology: cosmic acceleration (Riess, 1999, Riess et al., 1998), for which the 2011 Nobel Prize in Physics was awarded. However, there are different scenarios proposed for the progenitors of SN Ia. One is the merger of two (carbon-oxygen) white dwarfs that are brought together via gravitational wave radiation (Pakmor et al., 2010), which is exactly the population eLISA will be probing. eLISA will determine the number of such systems in the Galaxy and their period distribution, and hence the rate at which they merge. By comparing that to the inferred SN Ia rate for an Sbc galaxy, the viability of this progenitor scenario will be determined. The significant efforts in the past decade to find more supernovae and the advent of wide field optical surveys have revealed a host of new types of supernovae (Sullivan et al., 2011). Some of these have been suggested to originate in the interaction between two white dwarfs at very short periods, again exactly the population to which eLISA is sensitive (Perets et al., 2010, Waldman et al., 2011).
Formation of ultra-compact binaries in globular clusters
Globular clusters have a strong overabundance of bright X-ray sources per unit mass compared to the field, probably due to dynamical interactions. Many of these have turned out to be so-called ultra-compact X-ray binaries, in which a neutron star accretes material from a white dwarf companion in a very compact orbit, exactly the type of source that eLISA may see. However, it is not clear if the same enhancement will operate for the much more numerous white dwarf binaries. The angular resolution that can be achieved with eLISA is such that globular clusters can be resolved, so that the cluster sources can be distinguished from the Galactic disc sources. This enables eLISA to determine the number of ultra-compact binaries in globular clusters and thus to provide a direct test of the overabundance of white dwarf binaries in globular clusters. That in turn can be used to test models for dynamical interactions in clusters.
The foreground of Galactic gravitational waves
At frequencies below a few mHz the number of sources in the Galaxy is so large (6 × 10^7 to 8 × 10^7, see e.g. Ruiter et al., 2010, Yu and Jeffery, 2010) that only a small percentage, the brightest sources, will be individually detected. The vast majority will form an unresolved foreground signal in the detector, which is quite different from and much stronger than any diffuse extragalactic background (Farmer and Phinney, 2003). This foreground is often described as an additional noise component, which is misleading for two reasons. The first is that there is a lot of astrophysical information in the foreground. The overall level of the foreground is a measure of the total number of ultra-compact binaries, which gives valuable information given the current uncertainty levels in the normalisation of the population models. The spectral shape of the foreground also contains information about the homogeneity of the sample, as simple models of a steady state with one type of binary predict a very distinct shape. In addition, the geometrical distribution of the sources can be detected by eLISA.
Due to the concentration of sources in the Galactic centre and the inhomogeneity of the eLISA antenna pattern, the foreground is strongly modulated over the course of a year (see figure 7), with time periods in which the foreground is more than a factor of two lower than during other periods (Edlund et al., 2005). The characteristics of the modulation can be used to learn about the distribution of the sources in the Galaxy, as the different Galactic components (thin disk, thick disk, halo) contribute differently to the modulation, and their respective amplitudes can be used, for example, to set upper limits on the halo population (e.g. Ruiter et al., 2009).
Studying the astrophysics of compact binaries using eLISA
Although the effect of gravitational radiation on the orbit will dominate the evolution of the binaries detected by eLISA, additional physical processes will cause strong deviations from the simple point-mass approximation. The two most important interactions that occur are tides - when at least one of the stars in a binary system is not in co-rotation with the orbital motion or when the orbit is eccentric - and mass transfer. Because many binaries will be easily detected, these interactions do not hamper their discovery, but instead allow tests of the physics underlying these deviations. By providing a completely complementary approach, gravitational wave measurements are ideally suited to the study of short-period systems, in contrast to the current bias towards bright electromagnetic systems and events.
Physics of tidal interaction
eLISA measurements of individual short-period binaries will give a wealth of information on the physics of tides and the stability of the mass transfer. For detached systems with little or no interaction, the frequency evolution is well understood as that of two point masses. The strain amplitude h, the frequency f and its derivatives are then connected by

h = 4 (G M)^(5/3) (π f)^(2/3) / (c^4 D),    ḟ = (96/5) π^(8/3) (G M / c^3)^(5/3) f^(11/3),    (7)

where M = (m_1 m_2)^(3/5) / (m_1 + m_2)^(1/5) is the chirp mass, m_1, m_2 are the masses of the binary constituents and D is the distance. Thus the measurement of h, f and ḟ provides the chirp mass and the distance; the additional measurement of the second derivative f̈ gives a direct test of the dominance of gravitational wave radiation in the frequency evolution. Tidal interaction between white dwarfs in detached systems before the onset of mass transfer will give rise to distinct deviations of the frequency evolution as compared to systems with no or little tidal interaction. The strength of the tidal interaction is virtually unknown, with estimates ranging over many orders of magnitude (Marsh et al., 2004), although the high temperature of the white dwarf in the recently discovered 12 min double white dwarf may suggest efficient tidal heating (Piro, 2011). Knowledge of the strength of the tides is not only important for understanding the physics of tides in general and of white dwarf interiors, it also has important consequences for the tidal heating (and possibly optical observability) of eLISA sources and for the stability of mass transfer between white dwarfs (Fuller and Lai, 2011, Marsh, 2011, Racine et al., 2007, Willems et al., 2010). In globular clusters, dynamical interactions may produce eccentric double white dwarf systems, which can be used to constrain white dwarf properties and masses (Valsecchi et al., 2011).
Figure 8: Lightcurve of SDSS J0651+28, folded on the 12 minute orbital period. Except for the two eclipses at phase φ = 0 and φ = 0.5, the lightcurve shows the sinusoidal variation due to the tidal distortion of the primary white dwarf. From Brown et al. (2011).
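These leading-order relations are easy to evaluate numerically. The sketch below is illustrative only: it assumes the standard quadrupole-formula expressions quoted above, and the component masses, period and distance are hypothetical values loosely inspired by a 5.4-minute double white dwarf, not numbers taken from this document.

```python
import numpy as np

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8          # speed of light [m/s]
MSUN = 1.989e30      # solar mass [kg]
PC = 3.086e16        # parsec [m]

def chirp_mass(m1, m2):
    """Chirp mass M = (m1 m2)^(3/5) / (m1 + m2)^(1/5), same units as the inputs."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def fdot(mc, f):
    """Leading-order frequency drift df/dt driven by gravitational wave emission."""
    return (96.0 / 5.0) * np.pi ** (8.0 / 3.0) * (G * mc / c ** 3) ** (5.0 / 3.0) * f ** (11.0 / 3.0)

def strain(mc, f, d):
    """Leading-order strain amplitude of a circular binary at distance d."""
    return 4.0 * (G * mc) ** (5.0 / 3.0) * (np.pi * f) ** (2.0 / 3.0) / (c ** 4 * d)

# Hypothetical example: a 5.4-minute double white dwarf at 5 kpc
m1, m2 = 0.55 * MSUN, 0.27 * MSUN
mc = chirp_mass(m1, m2)
f = 2.0 / (5.4 * 60.0)            # GW frequency is twice the orbital frequency
print(f"f    ~ {f * 1e3:.2f} mHz")
print(f"fdot ~ {fdot(mc, f):.1e} Hz/s")
print(f"h    ~ {strain(mc, f, 5000.0 * PC):.1e}")
```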
Physics of mass-transfer stability
Detached ultra-compact binaries will evolve to shorter and shorter periods due to the angular momentum loss through gravitational wave radiation. At sufficiently short orbital periods (a few minutes) one of the stars becomes larger than its Roche lobe - the equipotential surface that passes through the inner Lagrangian point between the two stars - and material leaks out of the potential well of one star and falls onto the other star. Depending on how the radius of this star and that of its Roche lobe respond to the mass transfer, there may be positive or negative feedback, leading either to limited, stable mass transfer or to a runaway mass-transfer instability.
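As a rough quantitative illustration of when this happens, the sketch below estimates the orbital period at which a white dwarf donor first fills its Roche lobe. It uses the Eggleton (1983) approximation for the Roche-lobe radius and a crude zero-temperature white dwarf mass-radius relation; both are standard approximations brought in for illustration and are not taken from this document, and the masses are hypothetical.

```python
import numpy as np

G = 6.674e-11
MSUN = 1.989e30
RSUN = 6.957e8

def roche_lobe_fraction(q):
    """Eggleton (1983) approximation to R_L / a for mass ratio q = M_donor / M_accretor."""
    return 0.49 * q ** (2.0 / 3.0) / (0.6 * q ** (2.0 / 3.0) + np.log(1.0 + q ** (1.0 / 3.0)))

def wd_radius(m):
    """Very rough zero-temperature white dwarf mass-radius relation (illustrative only)."""
    return 0.0126 * RSUN * (m / MSUN) ** (-1.0 / 3.0)

def contact_period(m_donor, m_accretor):
    """Orbital period at which the donor white dwarf just fills its Roche lobe."""
    a = wd_radius(m_donor) / roche_lobe_fraction(m_donor / m_accretor)
    return 2.0 * np.pi * np.sqrt(a ** 3 / (G * (m_donor + m_accretor)))

p = contact_period(0.25 * MSUN, 0.55 * MSUN)
print(f"Mass transfer starts at P_orb ~ {p / 60.0:.1f} minutes")
```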
For double white dwarfs and white dwarf-neutron star binaries the stability of the ensuing mass transfer has important consequences for the number of detectable sources, as well as for a number of open astrophysical questions. The stable systems will form interacting binaries, AM CVn systems or ultra-compact X-ray binaries, that can be detected through their gravitational wave emission. eLISA will detect a number of detached double white dwarfs and AM CVn systems that are so close to the onset of mass transfer that the stability of the mass transfer can be tested directly by comparing their numbers. In addition, eLISA will detect several ultra-compact X-ray binaries at the very early stages of mass transfer, providing a test of the mass transfer stability in these systems as well (Marsh, 2011).
For AM CVn systems, a major uncertainty in the mass-transfer stability is again the tidal interaction between the two white dwarfs. Most likely the mass transfer will proceed via the direct impact configuration: due to the proximity of the two stars, the mass transfer stream lands directly on the surface of the accreting white dwarf, rather than wrapping around the accreting star and interacting with itself to form a flat accretion disk in the plane of the orbit (Marsh and Steeghs, 2002, Webbink, 1984). The stability of the mass transfer depends critically on the tidal interaction between the two white dwarfs (Marsh et al., 2004): in the absence of any tidal interaction, there will be additional angular momentum loss from the orbit due to the transfer of angular momentum from the orbit to the accreting star, which will consequently spin up. This is different from cases where the accretion is via a disc, for which most of the angular momentum generally is stored in the disc and eventually put back into the orbit via very efficient tidal interaction. Efficient tidal coupling between the accreting star and the companion has the ability to return the angular momentum back to the orbit (see D'Souza et al., 2006, Racine et al., 2007), thus reducing the magnitude of the spin-up. The difference between efficient and inefficient tidal coupling is rather dramatic: the fraction of double white dwarfs estimated to survive the onset of mass transfer can drop from about 20 % to 0.2 % (Nelemans et al., 2001) depending on assumptions about the tidal coupling. This difference is easily measurable with eLISA. Short-term variations in the secular evolution of the systems experiencing mass transfer will change the frequency evolution, but are likely to be rare and will not prevent the detection of these systems (Stroeer and Nelemans, 2009).
For ultra-compact X-ray binaries (see e.g. figure 9), the stability issue is completely different. At the onset, the mass transfer is orders of magnitude above the Eddington limit for a neutron star (the mass transfer rate at which the potential energy liberated in the accretion can couple to the infalling gas to blow it away). For normal stars and white dwarfs, this would likely lead to a complete merger of the system, but the enormous amount of energy liberated when matter is falling into the very deep potential well of a neutron star allows matter to be dumped on it at rates up to a thousand times the Eddington limit if the white dwarf has a low mass (see Yungelson et al., 2002). This allows the formation of ultra-compact X-ray binaries from white dwarf-neutron star pairs. eLISA will unambiguously test this prediction by detecting several tens of ultra-compact X-ray binaries with periods between 5 and 20 minutes.
Double white dwarf mergers
The 80% to 99.8% of the double white dwarfs that experience run-away mass transfer and merge give rise to quite spectacular phenomena. Mergers of double white dwarfs have been proposed as progenitors of single subdwarf O and B stars, R Coronae Borealis stars and maybe all massive white dwarfs (e.g. Webbink, 1984). In addition, the merger of a sufficiently massive double white dwarf can be a trigger for type Ia supernova events (see Pakmor et al., 2010). Alternatively, if the merger does not lead to an explosion, a (rapidly spinning) neutron star will be formed. This is one possible way to form isolated millisecond radio pulsars as well as magnetars, which have been proposed as sites for short gamma-ray bursts (e.g. Levan et al., 2006).
Although it is not expected that eLISA will witness the actual merger of a double white dwarf, as the event rate in our Galaxy is too low, it will certainly detect the shortest-period binaries that exist, expected at periods of about two minutes, and give an extremely good estimate of their merger rate. In addition, if the actual merger takes many orbits, as recently found in simulations (Dan et al., 2011), eLISA may observe them directly.
By measuring (chirp) masses and coalescence times, eLISA will directly determine the merger rate for double white dwarfs with different masses, which can then be compared with the rates and population of their possible descendants determined by other means (Stroeer et al., 2011).
Neutron star and black hole binaries
The current observational and theoretical estimates of the formation rate of neutron star binaries are highly uncertain and predict several tens of neutron star binaries to be detected by eLISA (e.g. Belczynski et al., 2010, Nelemans et al., 2001). The number of ultra-compact stellar-mass black hole binaries in the Galaxy is even more uncertain (e.g. Belczynski et al., 2002); furthermore, these binaries are likely to be detectable only through their gravitational wave emission as they are electromagnetically quiet. eLISA will thus constrain the formation rate estimates and the numbers of neutron star binaries and ultra-compact stellar mass black hole binaries. As these systems can be seen throughout the Galaxy, the samples for all these populations will be complete at the shortest periods. Thus the sample will be independent of selection effects, such as those present in radio pulsar surveys and X-ray surveys that pick up only transient X-ray sources. In addition, by the time eLISA flies, Advanced LIGO and Virgo will likely have detected a number of double neutron star mergers from far away galaxies, so these measurements together will test our ability to extrapolate our population models from our own Galaxy to the rest of the Universe.
Figure 9: Imprint of the 40 min orbital period on the arrival times of the X-ray pulsations in the ultra-compact X-ray binary XTE J0929-314. From Galloway et al. (2002).
A special situation might arise for the case of millisecond X-ray pulsars in ultra-compact X-ray binaries. In the last decade, observations of X-ray pulsations from many ultra-compact X-ray binaries have enabled astrophysicists to determine the rotation rate of the neutron star in the binary using the NASA mission RXTE (Wijnands, 2010). As had been expected on theoretical grounds, neutron stars are spinning rapidly (several hundred times per second) due to the angular momentum gained from infalling matter. The measurements give credence to the idea that the rapidly spinning neutron stars observed as millisecond radio pulsars are the descendants of accreting neutron stars in binary systems (e.g. Bhattacharya and van den Heuvel, 1991). However, the exact role of ultra-compact binaries in the formation of these pulsars has yet to be established. The distribution of spin periods discovered in X-ray binaries suggests additional neutron star angular momentum loss on top of the plasma physics interaction between the accretion flow and the magnetic field of the spinning neutron stars (Chakrabarty et al., 2003), which could be due to strong gravitational wave emission (Bildsten (1998); but see Watts et al. (2008) and Patruno et al. (2011)). In that case, ultra-compact X-ray binaries might be the only sources that could be studied simultaneously with eLISA and ground based detectors, with eLISA detecting the orbital period and the ground based detector detecting the neutron star spin period.
Studies of galactic structure with eLISA
One of the major capabilities of eLISA is that it will determine distances for hundreds of compact binaries by measuring their ḟ (see equation 7). The ability of eLISA to determine distances depends on the mission lifetime, as longer lifetimes lead to more accurate ḟ measurements. The directional dependence of the Galactic foreground as well as the directional accuracy for the resolved systems allow a statistical assessment of the contributions of the different Galactic components, such as the Galactic bulge (with its bar), the thin and thick disc and the Galactic halo (for a realistic Milky Way model, see figure 10).
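To make the distance determination explicit, the sketch below inverts the same leading-order relations as above: a measured f and ḟ give the chirp mass, after which the measured strain amplitude fixes the distance. The "measured" numbers are hypothetical values consistent with the earlier example, not results from this document.

```python
import numpy as np

G, c = 6.674e-11, 2.998e8
MSUN, PC = 1.989e30, 3.086e16

def chirp_mass_from_fdot(f, fdot):
    """Invert the leading-order fdot relation for the chirp mass (in kg)."""
    gm = c ** 3 * (5.0 * fdot / (96.0 * np.pi ** (8.0 / 3.0) * f ** (11.0 / 3.0))) ** (3.0 / 5.0)
    return gm / G

def distance_from_h(f, fdot, h):
    """Once the chirp mass is known from fdot, the strain amplitude fixes the distance."""
    gm = G * chirp_mass_from_fdot(f, fdot)
    return 4.0 * gm ** (5.0 / 3.0) * (np.pi * f) ** (2.0 / 3.0) / (c ** 4 * h)

# Hypothetical 'measured' values for a ~6 mHz double white dwarf
f, fdot, h = 6.2e-3, 7.3e-16, 1.3e-22
print(f"chirp mass ~ {chirp_mass_from_fdot(f, fdot) / MSUN:.2f} Msun")
print(f"distance   ~ {distance_from_h(f, fdot, h) / (1e3 * PC):.1f} kpc")
```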
The Galactic centre is one of the most interesting areas of the Galaxy, with a central massive black hole surrounded by a dense assembly of stars with intriguing properties. Dynamical effects, in particular mass segregation, will lead to many interactions close to the central black hole, so that wide binaries will become tighter or will be disrupted (for a review see Alexander, 2005). This likely leads to an increase in the number of ultra-compact binaries as well as the possibility of EMRIs (see section 2). eLISA will put much more stringent constraints on these populations than current observations (see e.g. Roelofs et al., 2007), which are limited by the electromagnetic faintness of the sources, or theoretical predictions, which are limited by our current understanding of the processes leading to compact binary formation. Distance determinations for the many ultra-compact binaries around the Galactic centre will also allow an independent determination of the distance to the Galactic centre.
The level and shape of the double white dwarf foreground as well as the distribution of resolved sources will provide information on the scale height of the ultra-compact binary population (Benacquista and Holley-Bockelmann, 2006) in the disc of the Galaxy.
The distribution of sources in the Galactic halo will be significantly different from that of the other Galactic components. In principle the halo population is expected to be much smaller than that of the rest of the Galaxy (Ruiter et al., 2009, Yu and Jeffery, 2010), but it might be enhanced, as the formation and evolution of binaries in the halo may have been quite different. Such an old and metal-poor population can be studied locally only in globular clusters, where the formation and evolution of binaries is generally completely altered by dynamical effects. Two of the known AM CVn systems may belong to the halo. They have very low metal abundances and anomalous velocities. If true, this implies that a large number of AM CVn stars are in the halo, maybe as many as in the rest of the Galaxy.
The eLISA directional sensitivity will immediately pick up any strong halo population if it exists.
Finally, for many of the resolved sources the eLISA measurements will also provide an accurate estimate of their orbital inclination. For the first time, this will give hints on the dynamics of the formation of binaries from interstellar clouds, because the angular momentum vectors of the binaries are related (in a statistical way) to the overall angular momentum of the Galaxy.
Astrophysical black holes appear to come in two flavours: the "stellar mass" black holes of 3 M⊙ to approximately 100 M⊙, resulting from the core collapse of very massive stars, and the "supermassive" black holes of 10^6 M⊙ - 10^9 M⊙ that, according to the accretion paradigm, power the luminous QSOs. The former light up the X-ray sky, albeit only in our neighbourhood, as stellar mass black holes fade below detection limits outside our Local Group. The latter are detected as active nuclei over the whole cosmic time accessible to our current telescopes. Electromagnetic evidence of black holes in the mass range 10^2 M⊙ - 10^6 M⊙ is less common, due to the intrinsic difficulty of detecting such faint sources in external galaxies. However, it is in this mass interval, elusive to electromagnetic observations, that the history of supermassive black hole growth is imprinted.
Supermassive black holes inhabit bright galaxies, and are ubiquitous in our low-redshift Universe. The discovery of close correlations between the mass of the supermassive black hole and key properties of the host has led to the notion that black holes form and evolve in symbiosis with their galaxy host. In agreement with the current paradigm of hierarchical formation of galactic structures and with limits imposed by the cosmic X-ray background light, astrophysical black holes are believed to emerge from a population of seed black holes with masses in the range 100 M⊙ - 10^5 M⊙, customarily called intermediate mass black holes. The mass and spin of these black holes change sizably as they evolve over cosmic time through intermittent phases of copious accretion and merging with other black holes in galactic halos. In a galactic merger, the black holes that inhabit the two colliding galaxies spiral in under the action of dynamical friction, and pair on sub-galactic scales forming a Keplerian binary: binary black holes thus appear as the inescapable outcome of galaxy assembly. When two massive black holes coalesce, they become one of the loudest sources of gravitational waves in the Universe.
eLISA is expected to target coalescing binaries of 10^5 M⊙ - 10^7 M⊙ during the epoch of widespread cosmic star formation and up to z ∼ 20, and to capture the signal of a coalescing binary of 10^4 M⊙ - 10^5 M⊙ beyond the era of the earliest known QSO (z ∼ 7). Gravitational waveforms carry information on the spins of the black holes, which eLISA will measure with exquisite precision, providing a diagnostic of the mechanism of black hole growth. The detection of coalescing black holes not only will shed light on the phases of black hole growth and QSO evolution, but will pierce deep into the hierarchical process of galaxy formation.
2 Black holes in the realm of the observations
Dormant and active supermassive black holes
QSOs are active nuclei so luminous that they often outshine their galaxy host. They are sources of electromagnetic energy, with radiation emitted across the spectrum, almost equally, from X-rays to the far-infrared, and in a fraction of cases, from γ-rays to radio waves. Their variability on short timescales revealed that the emitting region is compact, only a few light hours across.
There is now scientific consensus that the electromagnetic power from QSOs and from the less luminous AGN results from accretion onto a supermassive black hole of 10^6 M⊙ - 10^9 M⊙ (Krolik, 1999, Salpeter, 1964, Zel'dovich and Novikov, 1964). Escaping energy in the form of radiation, high velocity plasma outflows, and ultra relativistic jets can be generated with high efficiency (ε ∼ 10 %, higher than nuclear reactions) just outside the event horizon, through viscous stresses on parcels of gas orbiting in the gravitational potential of the black hole. The accretion paradigm has thus been, and still is, at the heart of the hypothesis of black holes as being "real" sources in our cosmic landscape. eLISA will offer the new perspective of revealing these black holes as powerful sources of gravitational waves, probing the smallest volumes of the large scale Universe.
Massive black holes are tiny objects compared to their host galaxies. The event horizon of a Kerr black hole of mass M• scales as R_horizon ∼ GM•/c^2, and it is far smaller than the optical radius of the galaxy host: R_horizon ∼ 10^-11 R_gal. The distance out to which a black hole affects the kinematics of stars and gas (the gravitational influence radius), R_grav ∼ GM•/σ^2, is also small compared to the optical radius of the host, R_grav ∼ 10^-4 R_gal (where σ is the velocity dispersion of the stars of the galactic bulge).
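These scalings are straightforward to evaluate numerically. The sketch below is purely illustrative: the black hole mass and bulge velocity dispersion are hypothetical example values, not measurements quoted in this document.

```python
G = 6.674e-11
c = 2.998e8
MSUN = 1.989e30
PC = 3.086e16

def r_horizon(m):
    """Gravitational radius GM/c^2, the scale of the event horizon."""
    return G * m / c ** 2

def r_influence(m, sigma):
    """Gravitational influence radius GM/sigma^2 for bulge velocity dispersion sigma."""
    return G * m / sigma ** 2

# Hypothetical example: a 10^8 Msun black hole in a bulge with sigma = 200 km/s
m, sigma = 1e8 * MSUN, 200e3
print(f"R_horizon ~ {r_horizon(m) / PC:.1e} pc")
print(f"R_grav    ~ {r_influence(m, sigma) / PC:.1f} pc")
```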
For a long time, the QSO and, more generally, the less luminous AGN phenomena were understood as caused by a process exclusively confined to the nuclear region of the host. This picture of disjoint black hole and galaxy evolution changed with the advent of the HST (Ferrarese and Ford, 2005).
Figure 11: The correlation between the black hole mass M• and the luminosity of the host galaxy's stellar bulge (left), and the host galaxy's bulge velocity dispersion σ (right), for all detections in galaxies near enough for current instruments to resolve the region in which the black hole mass dominates the dynamics (adapted from Gültekin et al., 2009).
Observations of almost all bright galaxy spheroids in the near universe reveal that the velocities of stars and gas start to rise in a Keplerian fashion at their centres, highlighting the presence of a dark point-mass which dominates the central gravitational potential. The same observations provide the mass of this dark object, hypothesised to be a quiescent black hole. The proximity of these galaxies to Earth allowed for a full optical characterisation of the host, and this ultimately led to the discovery of tight correlations - depicted in figure 11, from Gültekin et al. (2009) - between the black hole mass M• and the optical luminosity and velocity dispersion σ of the stars measured far from the black hole (Ferrarese and Merritt, 2000, Gebhardt et al., 2000, Graham et al., 2011, Gültekin et al., 2009). The relations state that galaxy spheroids with higher stellar velocity dispersions, i.e. with deeper gravitational potential wells and higher stellar masses and luminosities, host heavier central black holes, with little dispersion in the correlation. Thus more massive galaxies grow more massive black holes: the black hole sees the galaxy that it inhabits, and the galaxy sees the black hole at its centre despite its small influence radius (Häring and Rix, 2004, Magorrian et al., 1998, Marconi and Hunt, 2003).
A consensus is emerging that the M• − σ relation of figure 11 is fossil evidence of a co-evolution of black holes and galaxies. The relation may have been established along the course of galactic mergers and in episodes of self-regulated accretion (Di Matteo et al., 2005, 2008, Johansson et al., 2009, Lamastra et al., 2010, Mihos and Hernquist, 1996, Somerville et al., 2008). However, the origin of the M• − σ relation (Ciotti et al., 2010, King, 2003, Silk and Rees, 1998) and its evolution at look-back times are still unclear (Peng, 2007, Treu et al., 2007, Woo et al., 2008). The similarity between the evolution, over cosmic time, of the luminosity density of QSOs and the global star formation rate (Terlevich, 1998, Kauffmann and Haehnelt, 2000) points to the presence of a symbiotic growth, which is still under study (Schawinski et al., 2010, 2011).
The census of black holes, from the study of the kinematics of stars and gas in nearby galaxies, has further led to an estimate of the black hole local mass density: ρ• ∼ 2 × 10^5 M⊙ Mpc^-3 - 5 × 10^5 M⊙ Mpc^-3 (Aller and Richstone, 2002, Lauer et al., 2007, Marconi et al., 2004, Tundo et al., 2007). Whether this mass density traces the initial conditions, i.e. the mass since birth, obtained at most by rearranging individual masses via coalescences, or the mass acquired via major episodes of accretion in active AGN phases can only be inferred using additional information: that resulting from the AGN demographics and from studies of the X-ray cosmic background. Two arguments provide information about how much of the black hole growth occurred through accretion of gas, in phases when the black hole is active as an AGN. The first is the existence of a limiting luminosity for an accreting black hole, corresponding to when the radiation pressure force equals gravity. Above this limit the material that would be responsible for the emission cannot fall onto the black hole, as it is pushed away. This limit is the Eddington luminosity L_E = 4πGM• m_p c/σ_T ∼ 10^46 erg s^-1 (M•/10^8 M⊙) (σ_T and m_p are the Thomson cross section and the proton mass). The AGN luminosity L is normally a fraction f_E ≤ 1 of the Eddington luminosity since, as soon as L approaches L_E, the radiation pressure force against gravity self-regulates the accretion flow to L ∼ L_E, providing also a lower bound on M•. The second argument is that "light is mass", i.e. that any light output from accretion (at a luminosity level L = εṀc^2) increases the black hole's mass at a rate dM•/dt = (1 − ε)Ṁ, where Ṁ is the rest-mass accreted per unit time and ε the accretion efficiency, i.e. how much of the accreted mass is converted into radiation. Accordingly, the black hole's mass increases exponentially in the self-regulated flow, with an e-folding time τ_BH ≈ 4.7 × 10^8 ε [f_E (1 − ε)]^-1 yr. For ε ≈ 0.1 - typical of radiatively efficient accretion onto a non-rapidly rotating black hole (Shapiro and Teukolsky, 1979) - and f_E ≈ 0.1, this timescale is short (about 3 %) compared to the age of the Universe, indicating that black holes can enhance their mass via accretion by orders of magnitude.
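As a numerical illustration of these two arguments, the sketch below evaluates the Eddington luminosity and the e-folding (Salpeter-like) growth time; the seed mass, growth time and Eddington ratio in the example are hypothetical values chosen for illustration, not figures from this document.

```python
import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
M_P = 1.673e-27        # proton mass [kg]
SIGMA_T = 6.652e-29    # Thomson cross section [m^2]
MSUN = 1.989e30        # kg
YR = 3.156e7           # seconds per year

def eddington_luminosity(m):
    """L_E = 4 pi G M m_p c / sigma_T, returned in erg/s."""
    return 4.0 * np.pi * G * m * M_P * c / SIGMA_T * 1e7

def efolding_time(eps=0.1, f_edd=0.1):
    """e-folding time of Eddington-limited growth: tau ~ 4.5e8 yr * eps / (f_edd (1 - eps))."""
    t_salpeter = SIGMA_T * c / (4.0 * np.pi * G * M_P) / YR     # ~4.5e8 yr
    return t_salpeter * eps / (f_edd * (1.0 - eps))

def mass_after(m_seed, t_yr, eps=0.1, f_edd=0.1):
    """Mass after growing for t_yr years at a fixed Eddington ratio f_edd."""
    return m_seed * np.exp(t_yr / efolding_time(eps, f_edd))

print(f"L_E(10^8 Msun) ~ {eddington_luminosity(1e8 * MSUN):.1e} erg/s")
print(f"tau_BH         ~ {efolding_time():.1e} yr")
print(f"10^3 Msun seed after 0.8 Gyr at f_E = 1: ~ {mass_after(1e3, 8e8, f_edd=1.0):.1e} Msun")
```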
Active black holes in galaxies are known to contribute to the rise of a cosmic X-ray background resulting mostly from unresolved and obscured AGN of mass 10^8 M⊙ - 10^9 M⊙ in the redshift interval 0.5 < z < 3 (Merloni, 2004). As energy from accretion is equivalent to mass, the X-ray light present in the background mirrors the increment experienced by the black holes over cosmic history due to accretion. This mass-density increment is found to be ∆ρ• ≈ 3.5 × 10^5 (ε/0.1)^-1 M⊙ Mpc^-3 (Marconi et al., 2004, Soltan, 1982, Yu and Tremaine, 2002). As the contribution to the local (zero redshift) black hole mass density ρ• results from black holes of comparable mass 10^8 M⊙ - 10^9 M⊙, the close match between the two independent measures, ρ• and ∆ρ•, indicates that radiatively efficient accretion (ε ≈ 0.1) played a large part in the building of supermassive black holes in galaxies, from redshift z ∼ 3 to now. It further indicates that information residing in the initial mass distribution of the, albeit unknown, black hole seed population is erased during events of copious accretion along the course of cosmic evolution.
Massive black holes in the cosmological framework
These key findings point to the existence, at any redshift, of an underlying population of black holes of a smaller variety, with masses of 10^4 M⊙ - 10^7 M⊙, that grew in mass along cosmic history inside their galaxies, through episodes of merging and accretion. The evolution of black holes closely mimics that of their host galaxies within the currently favoured cosmological paradigm: a Universe dominated by cold dark matter (CDM).
Observations show that the mass content of the Universe is dominated by CDM, with baryons contributing only at the 10 % level to the CDM, and that the spectrum of primordial density perturbations contains more power at lower masses (Mo et al., 2010). Thus, at the earliest epochs, the Universe was dominated by small density perturbations. Regions with higher density grow in time, to the point where they decouple from the Hubble flow, collapse and virialise, forming self-gravitating halos. The first objects that collapse under their own self-gravity are small halos that grow bigger through mergers with other halos and accretion of surrounding matter. This is a bottom-up path, and the process is known as hierarchical clustering. As halos cluster and merge to build larger ones, baryons follow the CDM halo potential well and, similarly, black holes form and evolve in the same bottom-up fashion (Haehnelt et al., 1998, Haiman and Loeb, 1998, Volonteri et al., 2003, White and Rees, 1978, Wyithe and Loeb, 2002). State-of-the-art hydrodynamical cosmological simulations (Di Matteo et al., 2008) illustrate (figure 12) where and when the massive black holes form and how they are connected with the evolving background baryonic density field. As illustrated in figure 12, and as inferred in statistical models based on the extended Press-Schechter (EPS) formalism, most of the black holes transit into the mass interval to which eLISA is sensitive during their cosmic evolution (Volonteri et al., 2003). Figure 13 sketches and simplifies conceptually the complex network of events terminating with the formation of a bright galaxy at zero redshift, highlighting the sites where black holes form, cluster within halos, pair with other black holes, and eventually coalesce.
Figure 12: A state-of-the-art hydrodynamical simulation by Di Matteo et al. (2008) visualising the cosmic evolution of the baryonic density field and of the embedded black holes, in the ΛCDM cosmology. Each panel shows the same region of space (33.75 h^-1 Mpc on a side) at different redshifts, as labelled. The circles mark the positions of the black holes, with a size that encodes the mass, as indicated in the top left panel (numerical force resolution limits the lowest black hole mass to 10^5 M⊙). The projected baryonic density field is colour-coded with brightness proportional to the logarithm of the gas surface density. The images show that the black holes emerge in halos starting at high redshift (as early as z ∼ 10) and later grow by accretion driven by gas inflows that accompany the hierarchical build-up of ever larger halos through merging. As the simulation evolves, the number of black holes rapidly increases and larger halos host increasingly larger black holes. No black holes as massive as 10^9 M⊙ are present in the simulated box because they are extremely rare.
Figure 13: A cartoon of the merger-tree history for the assembly of a galaxy and its central black holes. Time increases along the arrow. Here the final galaxy is assembled through the merger of twenty smaller galaxies housing three seed black holes, and four coalescences of binary black holes.
Black holes in the sensitivity window of eLISA
Is there any observational evidence of black holes of this variety in the Universe that may be observed by eLISA? The Milky Way hosts in its bulge a black hole of (4 ± 0.06 ± 0.35) × 10^6 M⊙ (Ghez et al., 2005, Gillessen et al., 2009), providing
an example of a black hole that does not fall into the population that can be traced by luminous QSOs. Black holes in the mass range 10^5 M⊙ - 10^7 M⊙ are now increasingly found in low mass spiral galaxies and dwarfs with and without a bulge (Barth et al., 2004, Greene and Ho, 2004, Greene et al., 2008, Jiang et al., 2011a,b, Kuo et al., 2011, Xiao et al., 2011), and evidence exists that some of these low mass black holes of M• < 10^5 M⊙ cohabit with nuclear star clusters (Barth et al., 2009, Bekki and Graham, 2010, Ferrarese et al., 2006, Graham and Spitler, 2009, Seth et al., 2008, Wehner and Harris, 2006). Dwarf galaxies in the galactic field are believed to undergo a quieter merger and accretion history than their brighter analogues. They may represent the closest example of the low mass halos from which galaxy assembly took off. Late type dwarfs are thus the preferred site for the search for pristine black holes (Volonteri and Natarajan, 2009). NGC 4395, a close-by bulgeless, disky dwarf, houses in its centre a black hole of only 3.6 × 10^5 M⊙ (Peterson et al., 2005). This key discovery shows that nature provides a channel to black hole formation also in potential wells much shallower than those of the massive spheroids.
These middleweight black holes are numerous at high redshifts (Di Matteo et al., 2008), but are invisible with today's instrumentation, given their low intrinsic luminosity and their great distances. Furthermore, they become invisible to electromagnetic observations near z ∼ 11, as close to this redshift the intergalactic medium becomes opaque to their light, due to intervening absorption by neutral hydrogen (Fan et al., 2006a, Miralda-Escude, 1998). ULAS J1120+0641 holds the record of being the most distant known QSO, at redshift z = 7.085 ± 0.003, and hosts a bright, very massive black hole of ∼ 2 × 10^9 M⊙ (Mortlock et al., 2011). Its light was emitted before the end of reionisation, i.e. before the theoretically predicted transition of the intergalactic medium from an electrically neutral to an ionised state (Fan et al., 2006a).
Galaxy mergers and black hole coalescence
A grand collision between two galaxies of comparable mass (called a major merger) is not a destructive event, but rather a transformation: the two galaxies, after merging, form a new galaxy with a new morphology. Individual stars do not collide during the merger, as they are tiny compared to the distances between them. The two galaxies pass through each other and complex, time-varying gravitational interactions redistribute the energy of each star in such a way that a new bound galaxy forms. Gas clouds instead do collide along the course of the merger: new stars form, and streams of gas flow into the nuclear region of the newly forming galaxy. The massive black holes in the grand collision behave like stars. A key question for the eLISA science case is: do black holes coalesce as their galaxies merge?
The fate of black holes in merging galaxies can only be traced using numerical simulations at the limits of current numerical resolution. Not only are isolated black holes tiny, but binary black holes are as well. They form a tight binary system within a galaxy when the mass in stars enclosed within the binary orbit becomes negligible compared to the total mass of the binary M, and their Keplerian velocity exceeds the velocity of the stars, σ. This occurs when their relative separation a_B decays below about GM/σ^2, i.e. when a_B ∼ 10^-4 - 10^-5 R_gal. Binary black holes on the verge of coalescing within less than a Hubble time are even smaller, as they touch when their separation is of the size of the event horizon. The timescale for coalescence by gravitational waves only is a sensitive function of the binary separation, scaling as a^4 (Peters, 1964). Therefore, gravitational waves guide the inspiral only when a is less than a critical value a_GW ∼ 0.003 a_B (M/10^6 M⊙)^(1/4) (using scaling relations), that is 0.01 pc - 0.001 pc for a circular binary in the eLISA mass interval. Typical orbital periods at a_GW are of a few years to tens of years, and the holes' relative velocities are as high as 3000 km/s - 5000 km/s. Black holes thus have to travel, within a galaxy, from distances of 0.1 kpc - 10 kpc down to 0.01 pc - 0.001 pc before entering the gravitational wave inspiral regime. Given the huge dynamical range, different physical mechanisms guide their sinking. We can distinguish four phases for the dynamics of black holes on their way to, and after, merging; these are described below.
Figure 14: The different stages of the merger between two identical Milky-Way-like gas-rich disc galaxies (from Mayer et al., 2007). The panels show the density maps of the gas component in logarithmic scale, with brighter colours for higher densities. The four panels to the left show the large-scale evolution at different times. The boxes are 120 kpc on a side (top) and 60 kpc on a side (bottom). During the interaction tidal forces tear the galactic discs apart, generating spectacular tidal tails and plumes. The panels to the right show a zoom-in of the very last stage of the merger, about 100 million years before the two cores have fully coalesced (upper panel), and 2 million years after the merger (middle panel), when a massive, rotating nuclear gaseous disc embedded in a series of large-scale ring-like structures has formed. The boxes are now 8 kpc on a side. The two bottom panels, with a grey colour scale, show the detail of the inner 160 pc of the middle panel; a massive nuclear disc (of 10^9 M⊙), shown edge-on (left) and face-on (right), forms in the aftermath of the merger. The two black holes continue to sink inside the disc and form a Keplerian binary; they are shown in the face-on image.
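The steepness of the a^4 scaling quoted above can be illustrated with a short numerical sketch using Peters' circular-orbit coalescence time; the binary mass and separations used here are hypothetical examples, not values advocated in this document.

```python
import numpy as np

G = 6.674e-11
c = 2.998e8
MSUN = 1.989e30
PC = 3.086e16
YR = 3.156e7

def t_coalescence(a, m1, m2):
    """Peters (1964) gravitational-wave coalescence time of a circular binary."""
    return 5.0 * c ** 5 * a ** 4 / (256.0 * G ** 3 * m1 * m2 * (m1 + m2))

# Hypothetical equal-mass 10^6 Msun binary at a range of separations
m = 1e6 * MSUN
for a_pc in (0.1, 0.01, 0.001):
    t = t_coalescence(a_pc * PC, m, m)
    print(f"a = {a_pc:6.3f} pc  ->  t_GW ~ {t / YR:.1e} yr")
```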
In major mergers of galaxies the black holes pair under the action of dynamical friction against the dark matter background that brakes the disc/bulge which they inhabit (Begelman et al., 1980, Chandrasekhar, 1943, Colpi et al., 1999, Ostriker, 1999). Pairing occurs on the typical timescale of a galactic merger of a few billion years. A few million years after the new galaxy has formed, a Keplerian binary forms on the scale of 1 pc - 10 pc, under the action of dynamical friction by stars and gas. Figure 14 shows the evolution of the two gas discs in merging galaxies similar to the Milky Way. The galaxies each host a central black hole, and the black holes end up forming a Keplerian binary embedded in a massive nuclear disc. The subsequent hardening of the binary orbit (phase II) is controlled by the inflow of stars from larger radii, and by the gas rotating in a circum-binary disc (Callegari et al., 2009, Dotti et al., 2006, 2009b, Escala et al., 2004, Merritt and Milosavljević, 2005). In gas rich environments, and for black holes of mass smaller than about 10^7 M⊙, gas-dynamical torques on the binary suffice to drive the system down to the gravitational wave inspiral domain (Armitage and Natarajan, 2005, Cuadra et al., 2009, Gould and Rix, 2000, Hayasaki et al., 2008, Hayasaki and Okazaki, 2009, Ivanov et al., 1999, MacFadyen and Milosavljević, 2008), provided the gas does not fragment into stars (Lodato et al., 2009).
Stars are ubiquitous, and in stellar bulges the black holes lose orbital energy and angular momentum by ejecting stars that scatter individually off the black holes (Berczik et al., 2005, Makino and Funato, 2004, Merritt, 2006, Merritt and Milosavljević, 2005, Merritt and Poon, 2004, Milosavljević and Merritt, 2001, Perets and Alexander, 2008, Quinlan, 1996, Sesana et al., 2006, 2007a). These stars approach the binary on nearly radial orbits, and shrink the binary down to the gravitational wave phase if they are present in sufficient number to carry away the energy needed for the binary to decay down to a_GW. These stars, ejected with high velocities, are lost by the galaxy, and the timescale of sinking of the binary depends on the rate at which new stars are supplied from far-out distances. Self-consistent high resolution direct N-body simulations (Berczik et al., 2006, Khan et al., 2011) indicate that the stellar potential of the remnant galaxy retains, in response to the anisotropy of the merger, a sufficiently high degree of rotation and triaxiality to guarantee a large reservoir of stars on centrophilic orbits that can interact with the black holes down to the transition from the binary phase to the gravitational wave phase. This seems to be a universal process. When coalescence occurs, the merger remnant "recoils" because of the anisotropic emission of gravitational waves (Baker et al., 2008), moving away from the gravitational centre of the galaxy. The kicked black hole may return after a few oscillations down to the nuclear regions of the host galaxy, or escape the galaxy, depending on the magnitude of the kick (Blecha et al., 2011, Blecha and Loeb, 2008, Gualandris and Merritt, 2008, Guedes et al., 2011).
4 Dual, binary and recoiling AGN in the cosmic landscape
Surprisingly, the closest example of an imminent merger is in our Local Group. Andromeda (M31), along with a handful of lesser galaxies, does not follow Hubble's law of cosmic expansion: it is falling toward us at a speed of about 120 km/s. M31 is a member of a group of galaxies, including the Milky Way, that form a gravitationally bound system, the Local Group. M31 and the Milky Way each house a massive black hole (van der Marel et al., 1994) and are on a collision course, with a merger possibly before the Sun expands into a red giant (∼ 4 billion years) (Cox and Loeb, 2008).
Observations are now revealing the presence of many colliding galaxies in the Universe, and in a number of cases two active black holes are visible through their X-ray or radio emission.
The existence of binary AGN, i.e. of two active black holes bound in a Keplerian fashion, is still debatable at the observational level, as they are rare objects. Two cases deserve attention. The first case is 0402+379, a radio source in an elliptical galaxy showing two compact flat-spectrum radio nuclei, only 7 pc apart (Rodriguez et al., 2006, 2009). The second case is OJ 287, a source displaying a periodic variability of 12 years (Valtonen et al., 2006, 2008, 2010, 2011) that has been interpreted as being a Keplerian binary with evidence of orbital decay by emission of gravitational waves. A number of sub-parsec binary black hole candidates have been proposed (Eracleous et al., 2011, Tsalmantza et al., 2011) based on the recognition that gas clouds orbiting one or two black hole(s) can leave an imprint in the optical spectra of the AGN (Barrows et al., 2011, Bogdanović et al., 2008, Boroson and Lauer, 2009, Decarli et al., 2010, Montuori et al., 2011, Shen and Loeb, 2010, Shields et al., 2009). Follow-up observations will be necessary to assess their true nature. Recoiling AGN, i.e. recoiling black holes observed in an active phase (Loeb, 2007, Merritt et al., 2009b, O'Leary and Loeb, 2009), have been searched for recently, and there has been a claim of a discovery (Komossa et al., 2008), even though alternative interpretations are also viable (Bogdanović et al., 2009, Dotti et al., 2009a). Two spatially offset AGN have been found in deep surveys with kinematic properties that are consistent with being two recoiling black holes (Civano et al., 2010, Jonker et al., 2010).
Figure 15: Active black holes in colliding galaxies. Arp 299 (upper left panel) is the interacting system resulting from the collision of two gas-rich spirals, and hosts a dual AGN, i.e. two black holes "active" during the pairing phase. The accreting black holes are visible in the X-rays and are located at the optical centres of the two galaxies, at a separation of 4.6 kpc (Ballo et al., 2004). X-ray view of NGC 6240 (lower left panel), an ultra luminous infrared galaxy considered to be a merger in a well advanced phase (Komossa et al., 2003). X-ray observations with the Chandra Observatory led to the discovery of two strong, hard X-ray unresolved sources embedded in the diluted soft X-ray emission (red) of a starburst. The dual AGN are at a separation of 700 pc. Composite X-ray (blue)/radio (pink) image of the galaxy cluster Abell 400 (upper right panel) showing radio jets immersed in a vast cloud of multimillion degree X-ray emitting gas that pervades the cluster. The jets emanate from the vicinity of two supermassive black holes (a dual radio-loud AGN) housed in two elliptical galaxies in the very early stage of merging. Composite optical and X-ray image of NGC 3393 (lower right panel), a spiral galaxy with no evident signs of interaction. In its nucleus, two active black holes have been discovered at a separation of only 150 pc (Fabbiano et al., 2011). The closeness of the black holes embedded in the bulge provides a hitherto missing observational point in the study of galaxy-black hole evolution: the phase when the black holes are close to forming a Keplerian binary. The regular spiral morphology and predominantly old circum-nuclear stellar population of this galaxy indicate that a merger of a dwarf with a large spiral led to the formation of the binary (Callegari et al., 2009).
The most remarkable, albeit indirect, evidence of coalescence events is found in bright elliptical galaxies that are believed to be the product of mergers. Bright elliptical galaxies show light deficits (cores) in their surface brightness profiles, i.e. a lack of stars in their nuclei, and this missing light correlates with the mass of the central black hole (Merritt, 2006). Thus, cores are evidence of binary black holes scouring out the nuclear stars via three-body scattering, or even via post-merger relaxation following a kick (Boylan-Kolchin et al., 2004, Gualandris and Merritt, 2008, Guedes et al., 2009). Lastly, mergers change the black holes' spin directions due to conservation of angular momentum. Reorientation of the black hole spin following coalescence is now believed to be at the heart of X-shaped radio galaxies, where an old jet coexists with a new jet of different orientation (Liu, 2004, Merritt and Ekers, 2002). This would again be a sign of a fully accomplished coalescence event.
Seed black holes
Models of hierarchical structure formation predict that galaxy-sized dark matter halos start to become common at redshift z ∼ 10 − 20 (Mo et al., 2010). This is the beginning of the nonlinear phase of density fluctuations in the Universe, and hence also the epoch of baryonic collapse leading to star and galaxy formation. Different populations of seed black holes have been proposed in the range 100 M⊙ - 10^6 M⊙ (Volonteri, 2010).
Small mass seeds (100 M⊙ - 1000 M⊙) may result from the core collapse of the first generation of massive stars (Pop III) that form from unstable metal-free gas clouds, at z ∼ 20 and in halos of 10^6 M⊙ (Abel et al., 2002, Bromm et al., 2002, Haiman et al., 1996, Omukai and Palla, 2001, Ripamonti et al., 2002, Tegmark et al., 1997). Pop III stars as massive as 260 M⊙ or larger collapse into a black hole of similar mass after only about 2 Myr (Heger et al., 2003, Madau and Rees, 2001). The formation of Pop III stars remains a poorly understood process (Zinnecker and Yorke, 2007) and the maximum mass reached by individual stars is unknown (see, e.g., Clark et al., 2011).
Large seeds form in heavier halos (of 10^8 M⊙) from the collapse of unstable gaseous discs of 10^4 M⊙ - 10^6 M⊙. This route, ending with the formation of a very massive quasi-star, assumes that fragmentation is suppressed, possibly by turbulence and by an intense ultraviolet background light, in an environment of low metallicity (Begelman et al., 2006, Bromm and Loeb, 2003, Dijkstra et al., 2008, Dotan et al., 2011, Haehnelt and Rees, 1993, Koushiappas et al., 2004, Lodato and Natarajan, 2006, Loeb and Rasio, 1994). Collapsing clouds can have significant angular momentum (Bullock et al., 2001), and thus additional angular momentum transport is required for the self-gravitating disc to form a supermassive star (Shlosman, 2009, Shlosman et al., 1989). The very massive quasi-star of about 10^5 M⊙ burns hydrogen and helium in its core and, once formed, collapses into a black hole when the metallicity is below some threshold, as the alternative is its complete explosion (Montero et al., 2011, Shibata and Shapiro, 2002). Recently, other formation routes have been explored for the large seeds (10^3 M⊙ - 10^4 M⊙). They comprise the formation of a massive star via stellar runaway collisions in young dense star clusters (Gürkan et al., 2004, Portegies Zwart and McMillan, 2002), and the relativistic collapse of stellar mass black holes in nuclear star clusters invaded by a large inflow of gas (Davies et al., 2011). Lastly, much heavier seeds resulting from the direct collapse of nuclear gas in a gas-rich galactic merger have been proposed as the origin of black holes (Mayer et al., 2010).
The subsequent step is to follow the evolution of the black hole seeds according to the growth of the halos they inhabit, and the mode of accretion (Volonteri and Begelman, 2010). This is an inherently model-dependent process. Observations of nearby dormant black holes and of AGN at higher redshift help in constraining their evolution, but theoretical models still have some unconstrained degrees of freedom.
Evolving massive black hole spins via coalescence and accretion events
Astrophysical black holes are fully described by the mass M• and angular momentum J, referred to as spin. The modulus J of J is usually specified in terms of the dimensionless spin parameter a•, defined so that J = a• (GM•^2/c). For a specified mass M•, a black hole described by GR cannot have a• > 1 without showing a naked singularity (and this is forbidden by the Cosmic Censorship conjecture).
Both coalescences and accretion change M•, J (or a•) and the orientation of J in a significant manner (Berti and Volonteri, 2008, Volonteri et al., 2007).
Spins in black hole coalescences
With the advent of numerical relativity, it became possible to accurately determine the evolution of the initial spins of the black holes to the final spin of the remnant black hole in a merger event (Baker et al., 2006, Campanelli et al., 2006, Centrella et al., 2010, Pretorius, 2005, Rezzolla et al., 2008a). Numerical relativity simulations for equal mass, non-spinning black holes find a spin a• = 0.68646 ± 0.00004 (Scheel et al., 2009) for the merged black hole, resulting from the angular momentum of the orbit. Extrapolation of black hole coalescences with large initial spins (larger than approximately 0.9) exactly aligned with the orbital angular momentum finds a final a• = 0.951 ± 0.004 (Marronetti et al., 2008). When mergers occur with retrograde and prograde orbits equally distributed, as is expected in the case of astrophysical black holes (Hughes and Blandford, 2003), the average spin of the merger remnant is about 0.7, close to the expectation for non-spinning holes (Berti and Volonteri, 2008).
For almost any configuration of spins and mass ratio, the emission pattern of the gravitational waves is anisotropic, leading to a gravitational recoil (Campanelli et al., 2007, González et al., 2007, Lousto and Zlochower, 2011b).
Numerical studies show that initially nonspinning black holes, or binaries with spins aligned with the orbital angular momentum, recoil with a velocity below about 200 km/s. By contrast, the recoil is dramatically larger, up to approximately 5000 km/s, for binaries of comparable mass and black holes with large spins in peculiar non-aligned configurations (Lousto and Zlochower, 2011a). Thus, unexpectedly, spins (regulated by coalescence and accretion) affect the retention fraction of black holes in galactic halos, and this has consequences for the overall evolution of black holes in galaxies (Kesden et al., 2010, Schnittman, 2007, Schnittman and Buonanno, 2007).
Spins and black hole accretion
The evolution of the mass and the spin of astrophysical black holes is strongly correlated, also when considering accretion. Spins directly determine the radiative efficiency ε(a•), and so the rate at which mass is increasing. In radiatively efficient accretion discs (Shakura and Sunyaev, 1973) ε varies from 0.057 (for a• = 0) to 0.151 (for a• = 0.9) and 0.43 (for a• = 1). Accretion on the other hand determines black hole spins, since matter carries with it angular momentum (the angular momentum at the innermost stable circular orbit of a Kerr black hole). A non-rotating black hole is spun up to a• = 1 after increasing its mass by a factor √6, for prograde accretion (Bardeen, 1970). Conversely, a maximally rotating black hole is spun down by retrograde accretion to a• ∼ 0, after growing by a factor √(3/2).
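The dependence of the radiative efficiency on spin can be sketched from the standard Kerr innermost-stable-circular-orbit (ISCO) expressions; the formulas used below are the textbook Bardeen, Press & Teukolsky (1972) relations, brought in here for illustration rather than taken from this document, and small differences with respect to the rounded numbers quoted above are expected.

```python
import numpy as np

def r_isco(a, prograde=True):
    """ISCO radius in units of GM/c^2 for dimensionless spin a (Bardeen, Press & Teukolsky 1972)."""
    z1 = 1.0 + (1.0 - a ** 2) ** (1.0 / 3.0) * ((1.0 + a) ** (1.0 / 3.0) + (1.0 - a) ** (1.0 / 3.0))
    z2 = np.sqrt(3.0 * a ** 2 + z1 ** 2)
    sign = -1.0 if prograde else 1.0
    return 3.0 + z2 + sign * np.sqrt((3.0 - z1) * (3.0 + z1 + 2.0 * z2))

def efficiency(a, prograde=True):
    """Radiative efficiency eps = 1 - E_ISCO for a thin disc around a Kerr black hole."""
    r = r_isco(a, prograde)
    s = 1.0 if prograde else -1.0
    e_isco = (r ** 1.5 - 2.0 * r ** 0.5 + s * a) / (r ** 0.75 * np.sqrt(r ** 1.5 - 3.0 * r ** 0.5 + 2.0 * s * a))
    return 1.0 - e_isco

for a in (0.0, 0.9, 0.998):
    print(f"a = {a:5.3f}:  r_isco = {r_isco(a):5.2f} GM/c^2,  eps = {efficiency(a):.3f}")
```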
Accretion imposes limits on the black hole spin. Gas accretion from a geometrically thin disc limits the black hole spin to a_•^acc = 0.9980 ± 0.0002, as photons emitted by the disc with angular momentum anti-parallel to the black hole spin are preferentially captured, having a larger cross section, and limit its rotation (Thorne, 1974). The inclusion of a jet, as studied in magnetohydrodynamic simulations, reduces this limit to a_•^jet ≈ 0.93 (Gammie et al., 2004), and changes in the accretion geometry produce a similar effect (Popham and Gammie, 1998).
How black holes are fed from the large scale down to the hole's influence radius (R_grav) is presently unknown, and the spin is sensitive to the way gas is accreted with time (Volonteri et al., 2007). Two limiting modes of accretion can occur. Coherent accretion refers to accretion from a geometrically thin disc, lasting longer than a few black hole mass growth e-folding times. During coherent accretion the black hole can more than double its mass, bringing its spin up to the limit imposed by basic physics, either a_•^acc or a_•^jet. By contrast, chaotic accretion refers to a succession of accretion episodes that are incoherent, i.e. randomly oriented. The black hole can then spin up or down, and spin-down occurs when counter-rotating material is accreted, i.e. when the angular momentum L of the disc is strongly misaligned with respect to J (i.e. J·L < 0). If accretion proceeds via short-lived, uncorrelated episodes with co-rotating and counter-rotating material equally probable, spins tend to be small (King and Pringle, 2006; Moderski et al., 1998): counter-rotating material spins the black hole down more than co-rotating material spins it up, as the innermost stable orbit of a counter-rotating test particle is located at a larger radius than that of a co-rotating particle, and accordingly carries a larger orbital angular momentum. The direction of the black hole spin is also an important element in the study of black holes. In a viscous accretion disc that is misaligned with the spin of the black hole, Lense-Thirring precession of the orbital plane of fluid elements warps the disc, forcing the gas close to the black hole to align (either parallel or anti-parallel) with the spin of the black hole. Warping is a rapid process that causes alignment of the disc out to 100-1000 R_horizon, depending on a_• (Bardeen and Petterson, 1975). Following conservation of total angular momentum, the black hole responds to the warping through precession and alignment, due to dissipation in the disc (Perego et al., 2009; Scheuer and Feiler, 1996), evolving into a configuration of minimum energy where the black hole and disc are aligned (King et al., 2005). This process is short (about 10^5 yr) compared to the typical accretion time scale, allowing astrophysical black holes to evolve into a quasi-aligned spin-orbit configuration prior to coalescence.
According to these theoretical findings, masses and spins evolve dramatically following coalescence and accretion events. The spin offers the best diagnostic of whether the black holes prior to coalescence have experienced coherent or chaotic accretion episodes. Both mass and spin are directly encoded in the gravitational waves emitted during the merger process. eLISA will measure the masses and spins of the black holes prior to coalescence, offering unprecedented detail on how black hole binaries have evolved along the course of galactic mergers and across cosmic history.
7 Cosmological massive black hole merger rate
As eLISA creates a new exploratory window on the evolution of black holes, covering a mass and redshift range that is out of the reach of current (and planned) instruments, its expected detection rate is observationally unconstrained. Today we can probe dormant black holes down to masses of about 10^5 M_⊙ (Magorrian et al., 1998; Xiao et al., 2011) in the local Universe only, and their massive (i.e. heavier than 10^8 M_⊙) active counterparts out to redshift 6 (Fan et al., 2001, 2003, 2004, 2006b). Any estimate of the eLISA detection rate necessarily has to rely on extrapolations based on theoretical models matching the properties of the observable black hole population. Observationally, the black hole merger rate can be inferred only at relatively low redshift, by counting the fraction of close pairs in deep galaxy surveys. Given a galaxy density per co-moving megaparsec cube n_G, a fraction of close pairs φ, and a characteristic merger timescale T_M, the merger rate density of galaxies (number of mergers per year per co-moving megaparsec cube) is given by ṅ_M = φ n_G / (2 T_M).
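For orientation, a back-of-the-envelope evaluation of this estimate is sketched below (Python); the input values are illustrative assumptions, not survey measurements.

```python
# Illustrative evaluation of n_dot_M = phi * n_G / (2 * T_M).
# All three inputs are assumed, order-of-magnitude placeholder values.
n_G = 1e-2    # co-moving number density of massive galaxies [Mpc^-3]
phi = 0.05    # fraction of such galaxies found in close pairs
T_M = 0.5     # characteristic merger timescale [Gyr]

n_dot_M = phi * n_G / (2 * T_M)
print(f"merger rate density ~ {n_dot_M:.1e} mergers Mpc^-3 Gyr^-1")
# ~5e-4, at the low end of the observed range quoted below
```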
Estimates of ṅ_M have been produced by several groups in the last decade (Bell et al., 2006; De Propris et al., 2007; de Ravel et al., 2009, 2008; Patton et al., 2002; Xu et al., 2011), using deep spectroscopic galaxy surveys like COMBO, COSMOS and DEEP2. Surveys are obviously flux limited, and usually an absolute magnitude cutoff (which translates into a stellar mass lower limit) is applied to obtain a complete sample of galaxy pairs across a range of redshifts. The galaxy merger rate is therefore fairly well constrained only at redshift z ≲ 1 for galaxies with stellar mass larger than approximately 10^10 M_⊙. From a compilation of all the measurements (Xu et al., 2011), typical average massive-galaxy merger rates ṅ_M at z < 1 lie in the range 5 × 10^-4 < ṅ_M < 2 × 10^-3 h^3_100 Mpc^-3 Gyr^-1. By applying the black hole-host relations (Gültekin et al., 2009), the galaxy stellar mass cutoff is converted into a lower limit on the hosted black hole mass. Assuming a black hole occupation fraction of one (appropriate for massive galaxies) and integrating over the appropriate co-moving cosmological volume, this translates into an observational estimate of the massive black hole merger rate for z < 1 and M > a few × 10^6 M_⊙.
These estimates can be compared to the rate predicted by Monte Carlo merger trees (Volonteri et al., 2003) based on the EPS formalism (Lacey and Cole, 1993; Press and Schechter, 1974; Sheth and Tormen, 1999), which are used to reconstruct the black hole assembly, and thus to infer eLISA detection rates, in the ΛCDM cosmology. The evolutionary path (outlined in the previous sections) can be traced back to very high redshift (z > 20) with high resolution via numerical EPS Monte Carlo realisations of the merger hierarchy. Sesana et al. (2008b) carried out a detailed comparison of the merger rate predicted by such models in the z < 1 and M > a few × 10^6 M_⊙ range with those inferred from galaxy pair counting, finding a generally broad consistency within a factor of 2.
On the theoretical side, massive black hole merger rates can be computed from semi-analytic galaxy formation models coupled to massive N-body simulations tracing the cosmological evolution of dark matter halos (Bertone et al., 2007; Guo et al., 2011), such as the Millennium Run. Such models are generally bound to the limiting resolution of the underlying N-body simulations, and are therefore complete only for galaxy masses larger than approximately 10^10 M_⊙. In a companion study, Sesana et al. (2009) also showed that the merging black hole mass functions predicted by EPS-based merger trees are in excellent agreement with those extracted from semi-analytic galaxy formation models in the mass range M > 10^7 M_⊙, where semi-analytic models can be considered complete.
Merger rates obtained by EPS merger trees are therefore firmly anchored to low redshift observations and to theoretical galaxy formation models. Nevertheless, the lack of observations in the mass range of interest for eLISA leaves significant room for modelling, and theoretical astrophysicists have developed a large variety of massive black hole formation scenarios that are compatible with observational constraints (Begelman et al., 2006; Koushiappas et al., 2004; Lodato and Natarajan, 2006; Volonteri et al., 2003). The predicted coalescence rate in the eLISA window depends on the peculiar details of the models, ranging from a handful up to a few hundred events per year (Enoki et al., 2004; Haehnelt, 1994; Koushiappas and Zentner, 2006; Rhook and Wyithe, 2005; Sesana et al., 2004, 2007b). A recent compilation, encompassing a wide variety of assembly history models, can be found in the literature.
8 Massive black hole binaries as gravitational wave sources: what can eLISA discover?
In the eLISA window of detectability, massive black hole binary coalescence is a three-step process comprising the inspiral, merger, and ring-down (Flanagan and Hughes, 1998). The inspiral stage is a relatively slow, adiabatic process in which the black holes spiral together on nearly circular orbits. The black holes have a separation wide enough that they can be treated analytically as point particles through the Post-Newtonian (PN) expansion of their binding energy and radiated flux (Blanchet, 2006). The inspiral is followed by the dynamical coalescence, in which the black holes plunge and merge together, forming a highly distorted, perturbed remnant. At the end of the inspiral, the black hole velocities approach v/c ∼ 1/3. At this stage the PN approximation breaks down, and the system can only be described by a numerical solution of the Einstein equations. The distorted remnant settles into a stationary Kerr black hole as it rings down, by emitting gravitational radiation. This latter stage can again be modelled analytically in terms of black hole perturbation theory. At the end of the ring-down the final black hole is left in a quiescent state, with no residual structure besides its Kerr spacetime geometry.
In recent years there has been a major effort in constructing accurate waveforms inclusive of the inspiral, merger and ring-down phases (Baker et al., 2006; Campanelli et al., 2006; Pretorius, 2005). Even a few orbital cycles of the full waveform are computationally very demanding. "Complete" waveforms can be designed by stitching together analytical PN waveforms for the early inspiral with a (semi)phenomenologically described merger and ring-down phase (Damour et al., 2011; Pan et al., 2011), calibrated against available numerical data. In the following estimates we will mostly employ phenomenological waveforms constructed in the frequency domain, as described in Santamaría et al. (2010). Self-consistent waveforms of this type (the so-called PhenomC waveforms) are available for non-spinning binaries and for binaries with aligned spins. In the case of binaries with misaligned spins, we use "hybrid" waveforms obtained by stitching precessing PN waveforms for the inspiral with PhenomC waveforms for the merger/ring-down. This stitching is performed by projecting the orbital angular momentum and individual spins onto the angular momentum of the distorted black hole after merger. Given a waveform model, a first measure of the eLISA performance is the SNR of a binary merger with parameters in the relevant astrophysical range. Figure 16 shows eLISA SNRs for equal-mass, non-spinning coalescing binaries. Here we use PhenomC waveforms and we compute the SNR as a function of the rest-frame total binary mass M and of the redshift z, averaging over all possible source sky locations and wave polarisations, for two-year observations. The plot highlights the exquisite capabilities of the instrument in covering almost all the mass-redshift parameter space relevant to massive black hole astrophysics. It is important to emphasise that current electromagnetic observations are probing only the tip of the massive black hole distribution in the Universe. Our current knowledge of massive black holes is bound to instrument flux limits, probing only the mass range 10^7 M_⊙ - 10^9 M_⊙ at 0 ≲ z ≲ 7. Conversely, eLISA will be able to detect the gravitational waves emitted by sources with total mass (in the source rest frame) as small as 10^4 M_⊙ at cosmological distances inaccessible to any other astrophysical probe. A binary with total mass in the interval 10^4 M_⊙ - 10^7 M_⊙ can be detected out to a redshift as remote as z ∼ 20 with an SNR ≥ 10. By contrast, a binary as massive as a few × 10^8 M_⊙ can be detected with high SNR in our local Universe (z ≲ 1). Binaries with total mass between 10^5 M_⊙ and 10^7 M_⊙ can be detected with SNR ≳ 100 at 0 ≲ z ≲ 5. These intervals in mass and redshift can be considered as optimal for a deep and extensive census of the black hole population in the Universe. Figure 17 shows constant-contour levels of the SNR expected from binaries with different mass ratios q (defined as q = m_2/m_1, where m_2 is the mass of the less massive black hole in the binary) located at redshift z = 1 and z = 4. The plots show the SNR reduction that occurs with decreasing q, as unequal-mass binaries have lower strain amplitudes than equal-mass binaries. They also show how the SNR decreases with increasing redshift, and thus with increasing luminosity distance. Notice however that even at z = 4, binaries in the mass range 10^5 M_⊙ - 10^7 M_⊙ with mass ratio q ≳ 10^-1 can be detected with SNR > 20.
(Figure 17 caption: M is the total mass of the binary in the source rest frame and q is the mass ratio; the SNR is computed from the full non-spinning PhenomC waveform, inclusive of inspiral, merger and ring-down, as in figure 16.)
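For reference, the SNR values quoted in figures 16 and 17 follow the standard matched-filtering definition used throughout the gravitational wave literature (written here in LaTeX for clarity; h̃(f) is the frequency-domain waveform and S_n(f) the one-sided eLISA noise power spectral density):

```latex
\mathrm{SNR}^2 \;=\; 4 \int_0^{\infty} \frac{|\tilde{h}(f)|^2}{S_n(f)}\,\mathrm{d}f .
```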
Parameter estimation
Figures 16 and 17 describe the detectability of single events, and for these individual events it is possible to extract information on the physical parameters of the source. Waveforms carry information on the redshifted mass (the mass measured at the detector is (1 + z) times the mass at the source location) and on the spins of the individual black holes prior to coalescence. The measurement of mass and spin is of importance in astrophysics. Except for the Galactic Centre (Ghez et al., 2005; Gillessen et al., 2009), the mass of astrophysical black holes is estimated with uncertainties ranging from 15 % to a factor of about 2, depending on the technique used and the type of source. As far as spin is concerned, its measurement is only indirect, and it is derived through modelling of the spectrum, or of the shape of emission lines, mainly by fitting the skewed relativistic Kα iron line. There are a few notable examples, but uncertainties are still large. By contrast, spins leave a distinctive imprint in the waveform. In sections 5 and 6 we explored different routes to seed black hole formation and to their subsequent assembly and growth through mergers and accretion episodes. Different physically motivated assumptions lead to different black hole evolution scenarios, and, as we highlighted above, the lack of observational constraints has allowed theoretical astrophysicists to develop a large variety of massive black hole formation scenarios. To assess the astrophysical impact of eLISA, we simulate observations assuming a fiducial set of four cosmological black hole evolution scenarios: SE refers to a model where the seeds have small (S) mass of about 100 M_⊙ (from Pop III stars only) and accretion is coherent, i.e. resulting from extended (E) accretion episodes; SC refers to a model where seeds are small but accretion is chaotic (C), i.e. resulting from uncorrelated episodes; and finally, LE and LC refer to models where the seed population is heavy (L stands for large seeds of 10^5 M_⊙) and accretion is extended or chaotic, respectively. The models are almost the same as those used in previous studies by the LISA Parameter Estimation Task Force (Arun et al., 2009). The only difference is that in the extended accretion models, spins are not assumed to be perfectly aligned with the binary orbital angular momentum. The angles of misalignment relative to the orbit are drawn randomly in the range 0 to 20 degrees, consistent with the findings of recent hydrodynamical simulations of binaries forming in wet mergers. These models encompass a broad range of plausible massive black hole evolution scenarios, and we use them as a testbed for eLISA capabilities in a fiducial astrophysical context. Each massive black hole binary, coalescing at redshift z, is characterised by the (rest frame) total mass M = m_1 + m_2 (with m_1 and m_2 the masses of the primary and secondary black holes), mass ratio q = m_2/m_1, and spin vectors J_1 and J_2; spin magnitudes are denoted by a_1 and a_2. The orientations of the spins are drawn as described above for the extended (E) accretion models, and completely at random for the chaotic (C) accretion models. Here we generate several Monte Carlo realisations of each model and we sum up all the generated sources in a single "average" catalogue (we will consider models separately in the next section).
Catalogues are generated by selecting M, q, z, a_1, a_2 according to the distributions predicted by the individual models, and by randomising the other source parameters (sky location, polarisation, inclination, initial phase, coalescence time) according to the appropriate distribution. Figure 18 shows the average source SNR as a function of the source redshift. According to the simulated models, eLISA will detect sources with SNR ≳ 10 out to z ∼ 10. Note that the astrophysical capabilities of eLISA are not limited by the detector design, but by the population of astrophysical sources. If there were a coalescing black hole binary of 10^4 M_⊙ - 10^6 M_⊙ out to redshift z ∼ 20, eLISA would reveal such a source. Our models, and accordingly our SNR distribution, do not have such an event. Figure 19 shows error distributions in the source parameter estimation, for all the events in the combined catalogue. We used a hybrid approach of joining inspiral with PhenomC waveforms, as described above, to evaluate uncertainties based on the Fisher matrix approximation.
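The Fisher matrix approximation mentioned above estimates, in the high-SNR limit, the parameter covariances from derivatives of the waveform h with respect to the source parameters θ_i; schematically (standard notation, with the same noise-weighted inner product as in the SNR definition given earlier):

```latex
\Gamma_{ij} = \left(\frac{\partial h}{\partial\theta_i}\,\middle|\,\frac{\partial h}{\partial\theta_j}\right),
\qquad
(a|b) = 4\,\mathrm{Re}\int_0^{\infty}\frac{\tilde{a}(f)\,\tilde{b}^{*}(f)}{S_n(f)}\,\mathrm{d}f,
\qquad
\langle\Delta\theta_i\,\Delta\theta_j\rangle \simeq \left(\Gamma^{-1}\right)_{ij}.
```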
The figure illustrates the importance and power of including the full waveform modelling, comprising all the stages of the binary merger, in order to reach a high level of measurement precision. It is found that the individual black hole redshifted masses can be measured with unprecedented precision, i.e. with an error of 0.1 % - 1 % on both components. No other astrophysical tool has the capability of reaching a comparable accuracy. As far as spins are concerned, the analysis shows that the spin of the primary massive black hole can be measured with exquisite accuracy, to an absolute uncertainty of 0.01 - 0.1. This precision mirrors the fact that the primary black hole leaves a bigger imprint on the waveform. The measurement is more problematic for a_2, which can either be determined to an accuracy of 0.1, or remain completely undetermined, depending on the source mass ratio and spin amplitude. We emphasise that the spin measurement is a clean, direct measurement that does not involve complex, often degenerate, multi-parametric fits of high-energy emission processes.
The error on the source luminosity distance D_L has a wide spread, usually ranging from about 50 % down to a few percent. Note that this is a direct measurement of the luminosity distance to the source, which, again, cannot be obtained directly (for cosmological objects) at any comparable accuracy by any other astrophysical means. eLISA is a full-sky monitor, and the localisation of the source in the sky is also encoded in the waveform pattern. The sky location accuracy is typically estimated to be in the range 10 - 1000 square degrees.
9 Reconstructing the massive black hole cosmic history through eLISA observations
eLISA will be an observatory. The goal is not only to detect sources, but also to extract valuable astrophysical information from the observations. While measurements for individual systems are interesting and potentially very useful for making strong-field tests of GR (see section 2), it is the properties of the set of massive black hole binary mergers that are observed which will carry the most information for astrophysics. Gravitational wave observations of multiple binary mergers may be used together to learn about their formation and evolution through cosmic history.
As any observatory, eLISA will observe a set of signals. After signal extraction and data analysis, these observations will provide a catalogue of coalescing binaries, with measurements of several properties of the sources (masses, mass ratios, spins, distances, etc.) and estimated errors. The interesting questions to ask are the following: can we discriminate among different massive black hole formation and evolution scenarios on the basis of gravitational wave observations alone? Given a set of observed binary coalescences, what information can be extracted about the underlying population? For example, will gravitational wave observations alone tell us something about the mass spectrum of the seed black holes at high redshift, which are inaccessible to conventional electromagnetic observations, or about the poorly understood physics of accretion? These questions were extensively tackled in the context of LISA.
Selection among a discrete set of models
First we consider a discrete set of models. As argued above, in the general picture of massive black hole cosmic evolution, the population is shaped by the seeding process and the accretion history. The four models we study here are the SE, SC, LE, and LC models introduced in the previous section (Arun et al., 2009). As a first step, we test here if eLISA observations will provide enough information to enable us to discriminate between those models, assuming that the Universe is well described by one of them.
Each model predicts a theoretical distribution of coalescing massive black hole binaries. A given dataset D of observed events can be compared to a given model A by computing the likelihood p(D|A) that the observed dataset D is a realisation of model A. When testing a dataset D against a pair of models A and B, we assign probability p_A = p(D|A)/(p(D|A) + p(D|B)) to model A, and probability p_B = 1 − p_A to model B. The probabilities p_A and p_B are a measure of the relative confidence we have in models A and B, given an observation D. Once eLISA data are available, each model comparison will yield this single number, p_A, which is our confidence that model A is correct. Since the eLISA data set is not currently available, we can only work out how likely it is that we will achieve a certain confidence with future eLISA observations.
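A minimal sketch of this comparison step is given below (Python); the probability densities and expected event numbers would in practice come from the full model distributions described in the following paragraph, and the Poisson weighting is an assumption of this sketch rather than a statement about the published analysis.

```python
import numpy as np

def log_likelihood(events, model_pdf, n_expected):
    """log p(D|model): each observed event contributes the model's probability
    density at its measured parameters; a Poisson factor accounts for the
    expected total number of detections (an assumption of this sketch)."""
    return (-n_expected + len(events) * np.log(n_expected)
            + sum(np.log(model_pdf(e)) for e in events))

def confidence_in_A(events, pdf_A, n_A, pdf_B, n_B):
    """p_A = p(D|A) / (p(D|A) + p(D|B)), evaluated via log-likelihoods."""
    log_pA = log_likelihood(events, pdf_A, n_A)
    log_pB = log_likelihood(events, pdf_B, n_B)
    return 1.0 / (1.0 + np.exp(log_pB - log_pA))
```

Repeating this over many simulated realisations of a given model, and counting how often p_A exceeds 0.95, gives the fractions reported in table 1.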
We therefore generate 1000 independent realisations of the population of coalescing massive black hole binaries in the Universe predicted by each of the four models. We then simulate gravitational wave observations by producing datasets D of observed events (including measurement errors), which we statistically compare to the theoretical models. We consider only sources that are observed with SNR larger than eight in the detector. We set a confidence threshold of 0.95, and we count what fraction of the 1000 realisations of model A yield a confidence p A > 0.95 when compared to an alternative model B. We repeat this procedure for every pair of models. For simplicity, in modelling gravitational wave observations, we focus on circular, non-spinning binaries; therefore, each coalescing black hole binary in the population is characterised by only three intrinsic parameters -redshift z, mass M, and mass ratio q -and we compare the theoretical trivariate distribution in these parameters predicted by the models to the observed values in the dataset D. In terms of gravitational waveform modelling, our analysis can therefore be considered extremely conservative.
Results are shown in the left-hand panel of table 1, for a one-year observation. The vast majority of the pair comparisons yield a 95 % confidence in the true model for almost all the realisations; that is, we can perfectly discriminate among different models. Similarly, we can always rule out the alternative (false) model at a 95 % confidence level. Noticeable exceptions are the comparisons of models LE to LC and SE to SC, i.e., among models differing by accretion mode only. This is because the accretion mode (coherent versus chaotic) particularly affects the spin distribution of the coalescing systems, which was not considered here. To extend this work, we added to our analysis the distribution of the merger remnant spins S_r, and compared the theoretical distribution predicted by the models to the observed values (including determination errors once again). The spin of the remnant can be reasonably determined in only about 30 % of the cases; nevertheless, by adding this information, we are able to almost perfectly discriminate between the LE and LC and the SE and SC models, as shown in the right-hand panel of table 1.
Constraints on parametric models
In the preceding section we demonstrated the potential of eLISA to discriminate among a discrete set of "pure" models given a priori. However, the true massive black hole population in the Universe will probably result from a mixture of known physical processes, or even from a completely unexplored physical mechanism. A meaningful way to study this problem is to construct parametric models that depend on a set of key physical parameters λ_i, describing, for instance, the seed mass function and redshift distribution, the accretion efficiency, etc., and to investigate the potential of eLISA to constrain these parameters. Such a parametric family of models is not available at the moment, but we can carry out a similar exercise by mixing two of our pure models, A and B, to produce a model in which the number of events of a particular type is given by F [A] + (1 − F) [B], where [A] is the number of events of that type predicted by model A, [B] is the corresponding number predicted by model B, and F is the "mixing fraction". In this case we generate datasets D from a mixed model with a certain unknown F, and we estimate the F parameter by computing the likelihood that the data D are drawn from a mixed distribution, as a function of F. A specific example is shown in figure 20.
5 Extreme mass ratio inspirals and astrophysics of dense stellar systems
1 The Galactic Centre: a unique laboratory
The discovery, in the local Universe, of dark, massive objects lurking at the centres of nearly all bright galaxies is one of the key findings of modern-day astronomy, the most spectacular case being the dark object in our own Galaxy (Eisenhauer et al., 2005; Ghez et al., 2003, 2008; Gillessen et al., 2009; Schödel et al., 2003). The nucleus of the Milky Way is one hundred times closer to Earth than the nearest large external galaxy, Andromeda, and one hundred thousand times closer than the nearest QSO. Due to its proximity, it is the only nucleus in the Universe that can be studied and imaged in great detail. The central few parsecs of the Milky Way house gas cloud complexes in both neutral and hot phases, a dense luminous star cluster, and a faint radio source, SgrA*, of extreme compactness (3 - 10 light minutes across). Observations, using diffraction-limited imaging and spectroscopy in the near-infrared, have been able to probe the densest region of the star cluster and measure the stellar dynamics of more than two hundred stars within a few light days of the dynamical centre. The latter is coincident, to within 0.1 arcsec, with the compact radio source SgrA*. The stellar velocities increase toward SgrA* following a Kepler law, implying the presence of a (4 ± 0.06 ± 0.35) × 10^6 M_⊙ central dark mass (Gillessen et al., 2009). This technique has also led to the discovery of nearly thirty young stars that orbit the innermost region: the so-called S0 (or S) stars. These young stars are seen to move on Keplerian orbits, with S02 (or S2), the showcase star, orbiting the putative black hole on a highly eccentric (0.88) orbit with a period of 15.9 years. The periapsis of this orbit requires a lower limit on the density of the dark point-like mass concentration of more than 10^13 M_⊙ pc^-3 (Maoz, 1998). Additionally, a lower limit of more than 10^18 M_⊙ pc^-3 can be inferred from the compactness of the radio source (Genzel et al., 2010). These limits provide compelling evidence that the dark point mass at SgrA* is a black hole. A cluster of dark stars of this mass and density (e.g. composed of neutron stars, stellar black holes or sub-stellar entities such as brown dwarfs, planets and rocks) cannot remain in stable equilibrium for longer than 10^7 years (Genzel et al., 2000; Maoz, 1998), and the only remaining, albeit improbable, hypothesis is a concentration of heavy bosons (a boson star; Colpi et al., 1986) or of hyperlight black holes with M_• < 0.005 M_⊙ (Maoz, 1998). Overall, the measurements at the Galactic Centre are consistent with a system composed of a massive black hole and an extended, close-to-isotropic star cluster, with the young S0 (or S) stars the only population showing a collective rotation pattern in their orbits (Genzel et al., 2010).
Extreme Mass Ratio Inspirals in galactic nuclei
Can we probe the nearest environs of a massive black hole other than the Galactic Centre? Massive black holes are surrounded by a variety of stellar populations, and among them are compact stars (stellar black holes, neutron stars and white dwarfs). White dwarfs, neutron stars, and stellar-mass black holes all share the property that they reach the last stable orbit around the central massive black hole before they are tidally disrupted. A compact star can either plunge directly toward the event horizon of the massive black hole, or gradually spiral in and fall into the hole, emitting gravitational waves. The latter process is the one of primary interest for eLISA. Gravitational waves produced by inspirals of stellar-mass compact objects into massive black holes are observable by eLISA. The mass of the compact object is typically of the order of a few solar masses, while the mass of the central black holes detectable by eLISA ranges from 10^4 M_⊙ to 10^7 M_⊙. Because the mass ratio for these binaries is typically of order 10^5, these sources are commonly referred to as EMRI.
The extreme mass ratio ensures that the inspiralling object essentially acts as a test particle in the background spacetime of the central massive black hole. EMRI detections thus provide the best means to probe the environment of an astrophysical black hole and its stellar surroundings. White dwarfs, neutron stars, and stellar-mass black holes can all in principle lead to observable EMRI signals. However, stellar-mass black holes, being more massive, are expected to dominate the observed rate for eLISA, for two reasons: mass segregation tends to concentrate the heavier compact stars nearer the massive black hole, and black hole inspirals have higher signal-to-noise, and so can be seen within a much larger volume.
Three different mechanisms for the production of EMRI have been explored in the literature. The oldest and best-understood mechanism is the diffusion of stars in angular-momentum space, due to two-body scattering. Compact stars in the inner 0.01 pc will sometimes diffuse onto very high eccentricity orbits, such that gravitational radiation will then shrink the orbit's semi-major axis and eventually drive the compact star into the massive black hole. Important physical effects setting the overall rate for this mechanism are mass segregation, which concentrates the more massive stellar-mass black holes (about 10 M_⊙) close to the central black hole, and resonant relaxation, which increases the rate of orbit diffusion in phase space (Hopman and Alexander, 2006b): the orbits of stars close to a massive black hole are nearly Keplerian ellipses, and these orbits exert long-term torques on each other, which can modify the angular momentum distribution of the stars and enhance the rate of EMRI formation (Gürkan and Hopman, 2007). However, subtle relativistic effects can reduce the estimated rates from relaxation processes. In addition to the two-body scattering mechanism, other proposed channels for EMRI are the tidal disruption of binaries that pass close to the central black hole (Miller et al., 2005), and the creation of massive stars (and their rapid evolution into black holes) in the accretion discs surrounding the central massive black hole (Levin, 2007). Tidal break-up of incoming stellar binaries may already have been seen in the Milky Way following the remarkable discovery of a number of so-called hypervelocity stars observed escaping from our Galaxy (e.g., Brown et al., 2009). They are believed to be the outcome of an ejection following the break-up of two bound stars by the tidal field of SgrA*. All these mechanisms give specific predictions for the eccentricity and inclination of EMRI events that can be extracted from the gravitational wave signal (Miller et al., 2005).
When the orbiting object is close enough (within a few horizon radii of the large black hole), gravitational radiation dominates the energy losses from the system, and the semimajor axis of the orbit shrinks. Radiation is emitted over hundreds of thousands of orbits as the object inspirals to the point where it is swallowed by the central massive black hole. Over short periods of time, the emitted radiation can be thought of as a snapshot that contains detailed information about the physical parameters of the binary. The detection of the emitted gravitational wave signal will give us very detailed information about the orbit, the mass and spin of the massive black hole, as well as the mass of the test object (Hopman, 2009; Preto and Amaro-Seoane, 2010).
The measurement of even a few EMRI will give astrophysicists a totally new and different way of probing dense stellar systems, determining the mechanisms that shape stellar dynamics in galactic nuclei, and will allow us to recover information on the emitting system with a precision which is not only unprecedented in the history of astrophysics, but beyond that of any other technique (Amaro-Seoane et al., 2007; Babak et al., 2010; Porter, 2009).
A probe of galactic dynamics
The centre-most part of the stellar spheroid, i.e. the galactic nucleus, constitutes an extreme environment in terms of stellar dynamics. With stellar densities higher than 10^6 M_⊙ pc^-3 and relative velocities exceeding 100 km/s, collisional processes (i.e. collective gravitational encounters among stars) are important in shaping the density profiles of stars. The mutual influence between the massive black hole and the stellar system occurs through various mechanisms. Some are global, like the capture of stars via collisional relaxation, the accretion of gas lost by stars through stellar evolution, or the adiabatic adaptation of stellar orbits to the increasing mass of the black hole. Others involve a very close interaction, like the tidal disruption of a star or the formation of an EMRI.
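The importance of such collisional processes is usually expressed through the two-body relaxation time; in its standard single-mass form (the numerical prefactor varies slightly between authors)

```latex
t_{\mathrm{rlx}} \;\approx\; 0.34\, \frac{\sigma^{3}}{G^{2}\, m_{\star}\, \rho\, \ln\Lambda},
```

where σ is the stellar velocity dispersion, ρ the local stellar mass density, m_⋆ the typical stellar mass and ln Λ the Coulomb logarithm; for the densities and velocities quoted above this evaluates to of order a few Gyr, which is why relaxation-driven processes matter in galactic nuclei.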
The distribution of stars around a massive black hole is a classical problem in stellar dynamics (Bahcall and Wolf, 1976, 1977), and of importance for EMRI is the distribution of stellar black holes. Objects more massive than the average star, such as stellar black holes, tend to segregate at the centre of the stellar distribution in the attempt to reach, through long-distance gravitational encounters, equipartition of kinetic energy. A dense, strongly mass-segregated cusp of stellar black holes is expected to form near a massive black hole, and such a cusp plays a critical role in the generation of EMRI. The problem of the presence of a dark cusp has been addressed, for the Galactic Centre, by different authors, from semi-analytical and numerical standpoints (e.g., Hopman and Alexander, 2006a; Miralda-Escudé and Gould, 2000; Sigurdsson and Rees, 1997). A population of stellar black holes can leave an imprint on the dynamics of the S0 (or S) stars at the Galactic Centre, inducing a Newtonian retrograde precession of their orbits (Mouawad et al., 2005). Current data are not sufficient to provide evidence of such deviations from Keplerian orbits, so that the existence of a population of stellar black holes is yet to be confirmed (Gillessen et al., 2009; Merritt et al., 2009a).
A probe of the masses of stellar and massive black holes
It is very difficult to measure the mass of black holes, both of the massive and of the stellar variety. In the case of massive black holes, methods based on following the innermost kinematics are difficult for low-mass black holes in the range 10^5 M_⊙ - 10^7 M_⊙. These black holes have low intrinsic luminosities even when they are active, making detection hard. Performing dynamical measurements at these masses through stellar kinematics requires extremely high spatial resolution. Nowadays, with adaptive optics, we could optimistically hope to get a handful of measurements through stellar kinematics out to about 5 Mpc, although future 20 m - 30 m telescopes can reach out to the Virgo cluster (16.5 Mpc). Exquisite gas-dynamical measurements are possible for only a handful of active black holes using water megamaser spots in a Keplerian circumnuclear disk (Kuo et al., 2011). Still, the black hole in the centre of our own Galaxy lies in this range, and placing constraints on the mass function of low-mass black holes has key astrophysical implications. Observations show that the masses of black holes correlate with the mass, luminosity and stellar velocity dispersion of the host (Gültekin et al., 2009). These correlations imply that black holes evolve along with their hosts throughout cosmic time. One unanswered question is whether this symbiosis extends down to the lowest galaxy and black hole masses, due to changes in the accretion properties (Mathur and Grupe, 2005), dynamical effects (Volonteri et al., 2007), or cosmic bias (Volonteri and Natarajan, 2009). eLISA will discover the population of massive black holes in galaxies smaller than the Milky Way, which are difficult to access using other observational techniques, and provide insights on the co-evolution of black holes and their hosts. Difficulties, albeit of a different nature, exist in measuring the masses and mass distribution of stellar black holes. Stellar black holes are observed as accreting X-ray sources in binaries. According to stellar evolution, black holes result from the core collapse of very massive stars, and their mass is predicted to be in excess of the maximum mass of a neutron star, which is still not fully constrained. Depending on the state of nuclear matter, this limit varies from about 1.6 M_⊙ to about 3 M_⊙ (Shapiro and Teukolsky, 1986). The maximum mass of a stellar black hole is not constrained theoretically, and is known to depend sensitively on the metallicity of the progenitor star. The masses of stellar black holes are inferred using Kepler's third law, or through spectral analysis of the emission from the hole's accretion disc. These techniques can be used only for black holes in a binary system. Current measurements indicate a range for stellar black hole masses from about 5 M_⊙ up to 20 M_⊙, but uncertainties in the estimates can be as large as a factor of two (Orosz, 2003). In addition, stellar black holes in interacting binaries are a very small and probably strongly biased fraction of the total stellar black hole population. They are formed from stars that have lost their hydrogen mantle due to mass transfer (and thus formed in a different way than the vast majority of stellar black holes). eLISA will measure the masses of stellar black holes, again with unprecedented precision, providing invaluable insight into the process of star formation in the dense nuclei of galaxies, where conditions appear extreme.
5 Detecting extreme mass ratio inspirals with eLISA
EMRI are compact stars moving on relativistic orbits around a massive black hole. As the compact object spends most of its time in the strong-field regime, its orbit is very complex and difficult to model. For the estimates presented here we use approximate, "analytic kludge" (AK) waveforms, which fully capture the complexity of the model. These waveforms are defined by a 14-dimensional parameter set, of which the most physically relevant are the masses of the central black hole and of the compact object, M_• and m respectively, the spin of the massive black hole a_•, the eccentricity of the orbit at plunge e_p, the sky position of the source with respect to the detector, and the luminosity distance to the source, D_L. In addition to these approximate models, more accurate EMRI waveform models have been computed using black hole perturbation theory, in which the inspiralling object is regarded as a small perturbation to the background spacetime of the large black hole. The perturbation theory framework was first outlined in (Teukolsky, 1973) and gave rise to the Teukolsky equation. However, the solution of this equation is computationally expensive, and results have only recently been obtained for a selection of generic orbits (Drasco and Hughes, 2006). Nonetheless, results have been fully tabulated for certain restricted types of orbit. For the calculations described here we will use data for circular-equatorial orbits (Finn and Thorne, 2000; Gair, 2009a). We can use both models to compute the maximum detectable redshift, or the horizon for EMRI detection, as a function of mass.
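At the crudest level, the slow orbital decay that these waveform models describe can be illustrated with the leading-order, orbit-averaged Peters (1964) equations; the sketch below (Python) integrates them for an illustrative 10 M_⊙ + 10^6 M_⊙ system and is in no way a substitute for the AK or Teukolsky-based models, which it is only meant to make tangible.

```python
import numpy as np
from scipy.integrate import solve_ivp

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30
M, m = 1e6 * Msun, 10 * Msun            # central hole and compact object (illustrative)
r_g = G * M / c**2                      # gravitational radius of the central hole

def peters_rhs(t, y):
    """Orbit-averaged da/dt and de/dt from Peters (1964), leading (Newtonian) order."""
    a, e = y
    pref = G**3 * m * M * (m + M) / (c**5 * a**3 * (1 - e**2)**3.5)
    dadt = -(64 / 5) * pref * (1 + 73 / 24 * e**2 + 37 / 96 * e**4)
    dedt = -(304 / 15) * e * pref * (1 - e**2) / a * (1 + 121 / 304 * e**2)
    return [dadt, dedt]

def reached_isco(t, y):
    return y[0] - 6 * r_g               # stop near the Schwarzschild ISCO
reached_isco.terminal = True

sol = solve_ivp(peters_rhs, [0, 1e10], [12 * r_g, 0.3], events=reached_isco, rtol=1e-8)
print(f"leading-order inspiral time from 12 r_g: ~{sol.t_events[0][0] / 3.15e7:.1f} yr")
```

Even this crude estimate shows that the final stage of the inspiral lasts of order years, during which the object completes of order 10^5 orbits, consistent with the number of wave cycles quoted elsewhere in this document.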
To calculate the detection limit of EMRI for eLISA using the AK waveforms, we must perform a Monte Carlo simulation over the waveform parameters. We explore the mass range 10^4 M_⊙ ≲ M_• ≲ 5 × 10^6 M_⊙. As not much is known about the distribution of spins or eccentricities for EMRI, we consider uniform distributions for the spins in the range −0.95 ≤ a_• ≤ 0.95, and for the eccentricities at plunge in the interval 0.05 ≤ e_p ≤ 0.4. We fix the mass of the inspiralling body to 10 M_⊙ to represent the inspiral of a stellar black hole, as these are expected to dominate the event rate (Gair et al., 2004). The detection horizon for neutron star and white dwarf inspirals is significantly smaller than for black holes. The final assumption required is to set a detection threshold. While an SNR threshold of 30 was thought to be justified in the past, advances in search algorithms have recently demonstrated that EMRI with SNR of about 20 are sufficient for detection (Babak et al., 2010; Cornish, 2011; Gair et al., 2008), allowing us to assume an SNR threshold of 20 in this analysis. Assuming a mission lifetime T of two years, and plunge times between 0 yr ≤ t_p ≤ 5 yr, a large-scale Monte Carlo simulation was run over all 14 parameters. In figure 21 (left) we plot the maximum detectable redshift z (also referred to as the horizon) as a function of the intrinsic mass of the massive black hole. Systems with intrinsic mass in the range 10^4 M_⊙ ≤ M_• ≤ 5 × 10^6 M_⊙ are detectable in the local Universe at redshifts z ≲ 0.1, while systems in the range 10^5 M_⊙ ≤ M_• ≤ 10^6 M_⊙ should be detectable by eLISA to z ∼ 0.7, corresponding to a co-moving volume of about 70 Gpc^3. Figure 21 (left) also shows the maximum detectable redshift z as a function of the mass of the central massive black hole, computed for circular-equatorial inspirals using the Teukolsky equation for the same masses of the inspiralling compact object and massive black holes. This curve shows the sky-averaged horizon, i.e., the maximum redshift at which the SNR averaged over inclinations and orientations of the EMRI system reaches the threshold value of 20. Tabulated Teukolsky results are only available for selected values of the spin of the central black hole, so we show the horizon assuming all the central black holes have either spin a_• = 0 or a_• = 0.9. The Teukolsky horizon appears significantly lower than the AK horizon, but this is a result of the sky-averaging approximation: the sky-averaged SNR is expected to be a factor of about 2.5 lower than the SNR of an "optimally oriented" binary. The AK horizon was computed using a Monte Carlo simulation over orientations and sky locations for the source and will therefore approach the value for an optimally oriented binary. The difference between the sky-averaged Teukolsky horizon and the AK horizon is therefore consistent with the expected level of difference. The maximum horizon for the Teukolsky curves is at a similar value of the mass of the central black hole as the AK results, somewhat lower for a_• = 0 and higher for a_• = 0.9, as we would expect, since inspirals into more rapidly spinning black holes emit radiation at higher frequencies, which shifts the peak sensitivity to higher masses. For the same reason, we see that the eLISA horizon is at a higher redshift for more rapidly spinning central black holes.
In figure 21 (right) we plot the distribution of maximum SNR as a function of redshift for the Monte Carlo simulation performed using the AK waveforms. A nearby EMRI will be detectable with an SNR of many tens, with SNRs of 30 being available out to z = 0.5. EMRI can be detected with an SNR of 20 up to z ≈ 0.7, i.e. within a volume of about 70 Gpc^3, encompassing the last 6 billion years of the Universe.
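The quoted volume can be checked directly from the redshift; for instance with astropy (the choice of package and of the Planck15 cosmology are ours, and the exact figure depends on the adopted cosmological parameters):

```python
import astropy.units as u
from astropy.cosmology import Planck15

# Co-moving volume enclosed within the EMRI detection horizon z ~ 0.7
V = Planck15.comoving_volume(0.7).to(u.Gpc**3)
print(V)   # of order 70 Gpc^3, consistent with the figure quoted above
```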
EMRI are the most complex sources to model and to search for. However, if they can be detected, this complexity will allow us to estimate the parameters of the system with great accuracy (Babak et al., 2010; Cornish, 2011; Gair et al., 2008). For an EMRI detected with a certain SNR, the parameter estimation accuracy does not depend strongly on the detector configuration, since any detected EMRI will be observed for many waveform cycles. For this reason the parameter estimation accuracy achievable with eLISA is essentially the same as reported in the published LISA literature (Barack and Cutler, 2004; Huerta and Gair, 2009). For any EMRI observed with an SNR above the detection threshold of 20, we expect to measure the mass M_• and spin a_• of the central massive black hole with a precision better than a part in 10^4. This is illustrated in figure 22, which shows the results of a Markov Chain Monte Carlo analysis (Cornish and Porter, 2006) of the recovered parameters of an EMRI source: the mass and spin of the central black hole, the mass m/M_• of the stellar black hole, and the eccentricity at plunge e_p. Our analysis also shows that the luminosity distance D_L to the source is determined with an accuracy of better than 1 %, and the source sky location can be determined to around 0.2 square degrees. While the SNR is quite low for this source, the accuracy in the estimation of parameters is very good.
6 Estimating the event rates of extreme mass ratio inspirals for eLISA
We can use the horizon distances described in the preceding section to compute the likely number of EMRI events that eLISA will detect, if we make further assumptions about the EMRI occurring in the Universe. This depends on the black hole population and on the rate at which EMRI occur around massive black holes with particular properties. The latter is poorly known, and we will use results from Hopman (2009) and Amaro-Seoane and Preto (2011) for the rate of inspirals involving black holes. The rate Γ_• is found to scale with the central black hole mass M_• as Γ_• ∼ 400 Gyr^-1 (M_•/3 × 10^6 M_⊙)^-0.19. We do not consider neutron star and white dwarf inspirals in these rate estimates, as the expected number of detections with eLISA is less than one in both cases, due to the considerably reduced horizon distance for these events. We therefore fix the mass of the inspiralling body at 10 M_⊙, as in the previous section.
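For orientation, the scaling just quoted implies per-black-hole EMRI rates of order one event every few million years; a direct transcription into Python:

```python
def emri_rate_per_bh(M_bh):
    """EMRI rate per massive black hole [Gyr^-1] from the scaling quoted above,
    Gamma ~ 400 Gyr^-1 (M_bh / 3e6 Msun)^-0.19 (Hopman 2009; Amaro-Seoane & Preto 2011)."""
    return 400.0 * (M_bh / 3e6) ** (-0.19)

for M_bh in (1e5, 1e6, 4e6):            # masses in units of Msun
    rate = emri_rate_per_bh(M_bh)
    print(f"M = {M_bh:.0e} Msun: ~{rate:.0f} EMRIs per Gyr, one every ~{1e3 / rate:.1f} Myr")
```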
To model the black hole population, we take the mass function of black holes to span the intrinsic mass range 10^4 M_⊙ ≲ M_• ≲ 5 × 10^6 M_⊙. Using the assumption that there is no evolution in the black hole mass function, we sampled sources from a uniform distribution in co-moving volume. These assumptions are consistent with the mass function derived from the observed galaxy luminosity function using the M_• − σ relation, and excluding Sc-Sd galaxies (Aller and Richstone, 2002; Gair, 2009a; Gair et al., 2004). For the results using the AK waveform model, we choose the spin of the central object uniformly in the range 0 ≤ a_• ≤ 0.95, the eccentricity of the orbit at plunge uniformly in the range 0.05 ≤ e_p ≤ 0.4, and all angles to be uniform or uniform in cosine as appropriate. For the Teukolsky-based results we do not need to specify the angles, as we use a sky- and orientation-averaged sensitivity, and we do not specify the eccentricity or inclination, as the orbits are all circular and equatorial (although we assume equal numbers of prograde and retrograde inspirals). As before, the Teukolsky results are available for fixed values of the spin only, so we estimate the event rate assuming that all the black holes have spin 0, 0.5 or 0.9.
It is also important to correctly randomise over the plunge time of the EMRI. For the AK calculation, we choose the plunge time uniformly in 0 yr ≤ t_p ≤ 5 yr, with time measured relative to the start of the eLISA observation and assuming an eLISA lifetime of 2 years. Although sufficiently nearby events with plunge times greater than 5 years could in principle be detected, it was found that such events contribute less than one event to the total event rate. For the Teukolsky calculation, we evaluated the observable lifetime for every event, i.e. the span of time during the inspiral at which eLISA observations could start and still accumulate sufficient SNR over the mission lifetime to allow a detection (Gair, 2009a).
In table 2 we give the results of this calculation for different waveform models and black hole spins. The predicted number of events depends on the assumptions about the waveform model and the spin of the black holes, but it is in the range of 25 - 50 events in two years. The number of events predicted for the AK model is higher because the presence of eccentricity in the system tends to increase the amount of energy radiated in the eLISA band. The analytic kludge estimates include randomisation over the black hole spin, orbital eccentricity and inclination, so the true detection rate is likely to be closer to this number, although this depends on the unknown astrophysical distribution of EMRI parameters. Even with as few as 10 events, Gair et al. (2010) show that the slope of the mass function of massive black holes in the mass range 10^4 M_⊙ - 10^6 M_⊙ can be determined to a precision of about 0.3, which is the current level of observational uncertainty.
Black hole coalescence events in star clusters
In closing this section on astrophysical black holes, we briefly explore the possibility that an instrument like eLISA will detect coalescences between low-mass massive black holes (also called intermediate-mass black holes) in the mass range 10^2 M_⊙ - 10^4 M_⊙. These coalescence events do not result from the assembly of dark matter halos; rather, they are local coalescence events occurring in star clusters under extreme (and largely unexplored) astrophysical conditions. Given the tiny radius of gravitational influence (about 0.01 pc) of such light black holes on the surrounding dense stellar environment, their detection is extremely difficult, and their existence has never been confirmed, though evidence has been claimed in a number of globular clusters (see Miller, 2009; Miller and Colbert, 2004, and references therein).
An intermediate-mass black hole may form in a young cluster if the most massive stars sink to the cluster's centre due to mass segregation before they evolve and explode. There, they start to physically collide. The most massive star gains more and more mass and forms a runaway star that may collapse to form an intermediate-mass black hole (Gürkan et al., 2004; Portegies Zwart and McMillan, 2000). Intermediate-mass black holes can be observed by eLISA via the inspiral of compact objects such as stellar-mass black holes (Konstantinidis et al., 2011), or when they form a binary. The formation of an intermediate-mass black hole binary can occur via collisions between star clusters, like those found in the Antennae galaxy (Amaro-Seoane et al., 2010a), or via formation in situ (Gürkan et al., 2006). eLISA will observe intermediate-mass black hole binaries with SNR > 10 out to a few Gpc, and it will detect stellar-mass black holes plunging into an intermediate-mass black hole in a massive star cluster in the local Universe (Konstantinidis et al., 2011). Event rates are hard to predict due to large uncertainties in the dynamical formation of intermediate-mass black holes in star clusters, but we may observe as many as a few events per year. The detection of even a single event would have great importance for astrophysics, probing the existence of black holes in this unexplored mass range and shedding light on the complex dynamics of dense stellar clusters.
Confronting General Relativity with Precision Measurements of Strong Gravity
1 Setting the stage
GR is a theory of gravity in which gravitational fields are manifested as curvature of spacetime. GR has no adjustable parameters other than Newton's gravitational constant, and it makes solid, specific predictions. Any test can therefore potentially be fatal to its viability, and any failure of GR can point the way to new physics. Confronting GR with experimental measurements, particularly in the strong gravitational field regime, is therefore an essential enterprise. In fact, despite its great successes, we know that GR cannot be the final word on gravity, since it is a classical theory that necessarily breaks down at the Planck scale. As yet there is no complete quantum theory of gravity, and gravitation is not unified with the other fundamental forces. Under such a premise, several stress tests of GR have been proposed, each of them potentially fatal to the theory; however, all of them involve low energies and length scales much larger than the Planck scale.
Although so far GR has passed all the tests to which it has been subjected, most of these tests were set in the weak-field regime, in which the parameter ε = v^2/c^2 ∼ GM/(Rc^2) is much smaller than one. Here v is the typical velocity of the orbiting bodies, M their total mass, and R their typical separation.
For the tests of GR that have been carried out in our Solar System, the expected second-order GR corrections to the Newtonian dynamics are of order ε ∼ 10^-6 - 10^-8, and so to date it has been sufficient to expand the GR equations to the first Post-Newtonian (PN) order. Solar System tests are completely consistent with GR to this order of approximation.
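For orientation, the field-strength parameter ε introduced above can be evaluated for the kinds of systems discussed in this section; the orbital values below are rounded, illustrative numbers.

```python
G, c = 6.674e-11, 2.998e8
Msun, Rsun, AU = 1.989e30, 6.96e8, 1.496e11

systems = {                                   # (total mass [kg], separation [m])
    "Mercury around the Sun":      (1.0 * Msun,  0.39 * AU),
    "PSR 1913+16 binary pulsar":   (2.8 * Msun,  1.4 * Rsun),
    "MBH binary near merger":      (2e6 * Msun,  10 * G * 2e6 * Msun / c**2),
}
for name, (M, R) in systems.items():
    print(f"{name:28s}  eps ~ GM/(R c^2) ~ {G * M / (R * c**2):.1e}")
# ~1e-8 for the Solar System, ~1e-6 for binary pulsars, ~0.1 in the strong-field regime
```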
Binary pulsars, which are essentially very stable and accurate clocks with typical orbital velocities v/c ∼ 10^-3 (ε ∼ 10^-6), are excellent laboratories for precision tests of GR (Lorimer, 2008). Current observations of several binary pulsars are perfectly consistent with the GR predictions, with orbits again calculated to the first PN order. Observations of the first binary pulsar to be discovered, PSR 1913+16, also provided the first astrophysical evidence for gravitational radiation, a 2.5-PN-order effect. Loss of energy due to gravitational wave emission (radiation reaction) causes the binary orbit to shrink slowly; its measured period derivative Ṗ agrees with GR predictions to within the observational error bars (Weisberg and Taylor, 2005). Another binary system, the double pulsar PSR J0737-3039 A and B, allows additional tests of GR that were not available prior to its discovery (Kramer et al., 2006). In that system, the orbital period derivative is consistent with GR at the 0.3 % level, and the Shapiro delay agrees to within 0.05 % with the predictions of GR (Kramer and Wex, 2009). However, the gravitational fields responsible for the orbital motion in known binary pulsars are not much stronger than those in the Solar System: the semimajor axis of the orbit of PSR 1913+16 is about 1.4 R_⊙. Such weak fields limit the ability of binary pulsars to probe nonlinear GR dynamics. They do provide important tests of strong-field static gravity, as the redshift at the surface of a neutron star is of order 0.2. eLISA observations of coalescing massive black hole binaries, or of stellar-mass compact objects spiralling into massive black holes, will allow us to confront GR with precision measurements of physical regimes and phenomena that are not accessible through Solar System or binary pulsar measurements. The merger of comparable-mass black hole binaries produces an enormously powerful burst of gravitational radiation, which eLISA will be able to measure with an amplitude SNR as high as a few hundred, even at cosmological distances.
(Displaced figure caption: total mass of the system M(1 + z) = 2 × 10^6 M_⊙, mass ratio m_1/m_2 = 2, spin magnitudes a_1 = 0.6 and a_2 = 0.55, misalignment between the spins and the orbital angular momentum of a few degrees, source redshift z = 5. The inset shows the signal on a larger data span.)
In the months prior to merger, eLISA will detect the gravitational waves emitted during the binary inspiral; from that inspiral waveform, the masses and spins of the two black holes can be determined to high accuracy. Given these physical parameters, numerical relativity will predict very accurately the shape of the merger waveform, and this can be compared directly with observations, providing an ideal test of pure GR in a highly dynamical, strong-field regime. Stellar-mass compact objects spiralling into massive black holes will provide a qualitatively different test, but an equally exquisite one. The compact object travels on a near-geodesic of the spacetime of the massive black hole. As it spirals in, its emitted radiation effectively maps out the spacetime surrounding the massive black hole. Because the inspiralling body is so small compared to the central black hole, the inspiral time is long and eLISA will typically be able to observe of order 10^5 cycles of the inspiral waveform, all of which are emitted as the compact object spirals from 10 horizon radii down to a few horizon radii. Encoded in these waves is an extremely high precision map of the spacetime metric just outside the central black hole. Better opportunities than these for confronting GR with actual strong-field observations could hardly be hoped for.
The LIGO and Virgo detectors should come online around 2015, and their sensitivity is large enough that they should routinely observe stellar mass black hole coalescences, where the binary components are of roughly comparable mass. However, even the brightest black hole mergers that LIGO and Virgo should observe will still have an amplitude SNR about 10 to 100 times smaller than the brightest massive black hole coalescences that eLISA will observe. The precision with which eLISA can measure the merger and ringdown waveforms will correspondingly be better by the same factor when compared to ground-based detectors. The situation is similar for the EMRI described in the previous section: while ground-based detectors may detect binaries with mass ratios of about 10 −2 (e.g., a neutron star spiralling into a 100 M black hole), in observations lasting approximately 10 2 -10 3 cycles, the precision with which the spacetime can be mapped in such cases is at least two orders of magnitude worse than what is achievable with eLISA's EMRI sources. Thus eLISA will test our understanding of gravity in the most extreme conditions of strong and dynamical fields, and with a precision that is two orders of magnitude better than that achievable from the ground.
GR has been extraordinarily fruitful in correctly predicting new physical effects, including gravitational lensing, the gravitational redshift, black holes and gravitational waves. GR also provided the overall framework for modern cosmology, including the expansion of the Universe. However, our current understanding of the nonlinear, strong gravity regime of GR is quite limited. Exploring gravitational fields in the dynamical, strong-field regime could reveal new objects that are unexpected, but perfectly consistent with GR, or even show violations of GR.
The best opportunity for making such discoveries is with an instrument of high sensitivity. Ground-based detectors like LIGO and Virgo will almost certainly always have to detect signals by extracting them from deep in the instrumental noise, and they will therefore depend on prior predictions of waveforms. eLISA, on the other hand, will have enough sensitivity that many signals will show themselves well above noise; unexpected signals are much easier to recognize with such an instrument.
2 Testing strong-field gravity: The inspiral, merger, and ringdown of massive black hole binaries

eLISA's strongest sources are expected to be coalescing black hole binaries where the components have roughly comparable masses, 0.1 < m_2/m_1 < 1. Their signal at coalescence will be visible by eye in the data stream, standing out well above the noise, as illustrated in figure 23.
As discussed in section 8, black hole binary coalescence can be schematically decomposed into three stages (inspiral, merger, and ringdown), all of which will be observable by eLISA for a typical source. The inspiral stage is a relatively slow, adiabatic process, well described by the analytic PN approximation. The inspiral is followed by the dynamical merger of the two black holes, which forms a single, highly distorted remnant black hole. Close to merger, the black hole velocities approach v/c ∼ 1/3 and the PN approximation breaks down, so the waveform must be computed by solving the full Einstein equations via advanced numerical techniques. The distorted remnant black hole settles down into a stationary rotating solution of Einstein's equations (a Kerr black hole) by emitting gravitational radiation. This is the so-called "ringdown" phase, where the gravitational wave signal is a superposition of damped exponentials (quasinormal modes, QNM), and therefore similar to the sound of a ringing bell.
While numerical relativity is required to understand the gravitational radiation emitted during merger, the post-merger evolution, i.e., the black hole "quasinormal ringing", can be modelled using black hole perturbation theory. The final outcome of the ringdown is the Kerr geometry, with a stationary spacetime metric that is determined uniquely by its mass and spin, as required by the black hole "no-hair" theorem.
For equal-mass black hole binaries with total mass M in the range 2 × 10 5 M < M(1 + z) < 2 × 10 6 M , where z is the cosmological redshift of the source, the inspiral SNR and post-inspiral (merger plus ring-down) SNR are within an order of magnitude of each other. From a typical eLISA observation of the inspiral part of the signal, it will be possible to determine the physical parameters of the binary to extremely high accuracy. Using these parameters, numerical relativity can predict very precisely the merger and ringdown waves. Measurements of the individual masses and spins will allow us to predict the mass and the spin of the remnant black hole (Rezzolla et al., 2008b), which can be directly tested against the corresponding parameters extracted from the ringdown. The merger and ringdown waveforms will typically have an SNR of 10 2 -10 3 for binary black holes with total mass 10 5 M < M(1 + z) < 6 × 10 8 M at z = 1, so an extremely clean comparison will be possible between the observed waveforms and the predictions of GR.
The inspiral stage: comparing inspiral rate with predictions of General Relativity
With orbital velocities v/c typically in the range 0.05-0.3, most of the inspiral stage can be well described using high-order PN expansions of the Einstein equations. The inspiral waveform is a chirp: a sinusoid that increases in frequency and amplitude as the black holes spiral together. Depending on the source parameters, eLISA will be able to observe the final stages of the inspiral, for up to one year in some favourable cases. To give a practical reference, when the gravitational-wave frequency sweeps past 0.3 mHz, the time remaining until merger is, at leading PN order, approximately t ≈ 7 days × [10^6 M☉/M(1 + z)]^(5/3) (0.25/η), where, as above, M = m_1 + m_2 is the total mass of the binary and η = m_1 m_2/M^2 is the symmetric mass ratio. eLISA will observe the last 10^2 - 10^4 gravitational wave inspiral cycles, depending on the total mass and distance of the source. Since the inspiral signal is quite well understood theoretically, matched filtering can be used to recognise these inspirals up to a year before the final merger, at a time when the total SNR is still small. Moreover, as the total SNR in the inspiral is quite large in many cases, and such signals are long lived, matched filtering based on the inspiral waveform alone can determine the system parameters to very high accuracy. Both masses can be determined to within a fractional error of about 10^-2 - 10^-1, and the spin of the primary black hole can be measured to an accuracy of 10 % or better.
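A minimal numerical sketch of the leading-order relation quoted above: the time left before merger once the observed gravitational-wave frequency passes a given value, and the corresponding rough number of inspiral cycles. The example masses and frequency are illustrative only.

```python
import math

G, C, M_SUN = 6.674e-11, 2.998e8, 1.989e30

def time_to_merger(f_gw, m_total_sun, eta, z=0.0):
    """Leading-order (Newtonian chirp) time to merger, in seconds, once the
    observed GW frequency sweeps past f_gw [Hz]."""
    m_sec = G * m_total_sun * (1 + z) * M_SUN / C**3   # redshifted total mass in seconds
    return 5 / 256 * (math.pi * f_gw)**(-8/3) * m_sec**(-5/3) / eta

def cycles_to_merger(f_gw, m_total_sun, eta, z=0.0):
    """Approximate number of GW cycles left, N ~ (8/5) f * t_merge at leading order."""
    return 8 / 5 * f_gw * time_to_merger(f_gw, m_total_sun, eta, z)

# Equal-mass binary with M(1+z) = 1e6 Msun at f = 0.3 mHz:
# about a week to merger and a few hundred remaining cycles.
print(time_to_merger(3e-4, 1e6, 0.25) / 86400.0, "days")
print(cycles_to_merger(3e-4, 1e6, 0.25), "cycles")
```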
The nonlinear structure of GR (and possible deviations from GR) could be encoded in a phenomenological way by considering hypothetical modifications of the gravitational wave amplitude and phasing, as proposed by different authors (Arun et al., 2006, Yunes and Pretorius, 2009). The relatively large strength of the inspiral gravitational wave signal will allow a sensitive test of GR by comparing the rate of the observed inspiral (phase evolution) to the predictions of the PN approximation to GR (Huwyler et al., 2011, Mishra et al., 2010).
The merger stage: spectacular bursts
The inspiral is followed by a dynamical merger that produces a burst of gravitational waves. This is a brief event, comprising a few cycles and lasting about 5 × 10^3 s (M/10^6 M☉)(0.25/η), yet it is very energetic: during the merger the gravitational wave luminosity is L_GW ∼ 10^23 L☉, more power than is emitted by all the stars in the observable Universe. The final merger of massive binaries occurs in the very strong-field, highly nonlinear and highly dynamical regime of GR, and is the strongest gravitational wave source that eLISA is expected to see. eLISA will be able to see the merger of two 10^4 M☉ black holes beyond redshift z = 20, and for mergers of two 10^6 M☉ black holes at z = 1 the SNR will be about 2000. As mentioned above, eLISA observations of the inspiral yield a good measurement of the masses and spins of the black holes. With these in hand, numerical relativity will make a very specific prediction for the merger and ringdown radiation from the system. Comparison with the waveform that eLISA actually observes will allow us to confront the predictions of GR with an ultra-high precision measurement in the fully nonlinear and dynamical regime of strong gravity for the first time.
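The scalings quoted above are easy to evaluate directly; the snippet below is a trivial numerical check of the burst duration scaling and of the quoted luminosity, with illustrative numbers.

```python
L_SUN_W = 3.83e26   # solar luminosity in watts

def burst_duration_s(m_total_sun, eta):
    """Merger burst duration ~ 5e3 s * (M / 1e6 Msun) * (0.25 / eta), as quoted above."""
    return 5e3 * (m_total_sun / 1e6) * (0.25 / eta)

print(burst_duration_s(1e6, 0.25))     # ~5000 s for an equal-mass 1e6 Msun binary
print(1e23 * L_SUN_W, "W")             # L_GW ~ 1e23 L_sun expressed in watts (~4e49 W)
```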
The ringdown stage: black hole spectroscopy
Although numerical relativity waveforms from colliding holes naturally include the ringdown waves, these waves are also well understood analytically. GR predicts, as a consequence of the "no-hair" theorem, that every excited black hole emits gravitational waves until it reaches a time-independent state characterised entirely by its mass and spin. These ringdown waves consist of a set of superposed black hole QNM waves with exponentially damped sinusoidal time dependence, plus a far weaker "tail". The modes are strongly damped as their energy is radiated away to infinity, so the final ringdown stage is brief, lasting only a few cycles. The QNM of a Kerr black hole can be computed using perturbation theory: the spacetime metric is written as the Kerr metric plus a small perturbation, and Einstein's equations are expanded to first-order in that perturbation. The solutions can be decomposed into a sum of damped exponentials with complex eigenfrequencies (Chandrasekhar and Detweiler, 1975) that can be computed to essentially arbitrary accuracy (Leaver, 1985).
While there are infinitely many modes (corresponding to the angular order and overtone number of the perturbation from the stationary state), the lowest-order modes are the most readily excited and the least strongly damped, so in practice only a few modes are likely to be observed.
The frequencies and damping times of these ringdown QNM (tabulated in Berti et al., 2009) are completely determined by the mass and the spin of the remnant black hole.
A data analysis strategy based on multi-mode searches will be necessary for an accurate estimation of the mass and spin of the final black hole (Berti et al., 2007). Furthermore, if we can measure at least two different QNM in a ringdown signal, the ringdown radiation itself will provide a strong-field test of the hypothesis that the central massive objects in galactic nuclei are indeed Kerr black holes. The reason is that a two-mode signal contains four parameters (the frequencies and damping times of each mode), which must all be consistent with the same mass and spin values (Dreyer et al., 2004). Just like we can identify chemical elements via spectroscopic measurements, we can uniquely identify a black hole (determine its mass and spin) from the spectrum of its ringdown radiation.
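As a sketch of this black hole spectroscopy idea, the snippet below evaluates the frequency and quality factor of the fundamental l = m = 2 mode from a remnant's mass and spin, using approximate fitting coefficients of the kind tabulated by Berti and collaborators (the coefficients and example numbers here are indicative only). A two-mode no-hair test would invert two such (frequency, damping) pairs and check that they yield consistent mass and spin values.

```python
import math

G, C, M_SUN = 6.674e-11, 2.998e8, 1.989e30

def qnm_l2m2n0(mass_sun, spin):
    """Fundamental l=m=2 quasinormal mode of a Kerr remnant.
    Uses an approximate fit of the form M*omega = f1 + f2*(1-j)**f3 and
    Q = q1 + q2*(1-j)**q3 (coefficients quoted from memory of published fits).
    Returns (frequency in Hz, quality factor)."""
    m_sec = G * mass_sun * M_SUN / C**3                  # mass in seconds
    m_omega = 1.5251 - 1.1568 * (1.0 - spin)**0.1292     # dimensionless frequency
    quality = 0.7000 + 1.4187 * (1.0 - spin)**(-0.4990)  # quality factor
    return m_omega / (2 * math.pi * m_sec), quality

# A 1e6 Msun remnant with spin 0.7 rings at ~17 mHz with Q ~ 3, i.e. in the eLISA band.
print(qnm_l2m2n0(1e6, 0.7))
```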
If GR is correct but the observed radiation is emitted from a different source (exotic proposals include boson stars and gravastars, among others), the spectrum would most certainly be inconsistent with the QNM spectrum of Kerr black holes in GR (Chirenti and Rezzolla, 2007, Yoshida et al., 1994). The same should occur if GR does not correctly describe gravity in the extremes of strong fields and dynamical spacetimes. The fact that black hole oscillations should produce different radiation spectra in different theories of gravity is true in general (Barausse and Sotiriou, 2008), and the spectrum has been studied in some specific extensions of GR, such as Einstein-dilaton-Gauss-Bonnet gravity. The possibility of testing the no-hair theorem with QNM depends on the accuracy with which frequencies and damping times can be measured, which in turn depends on the SNR of the ringdown signal. As shown in (Berti, 2006, Berti et al., 2007), an SNR larger than 50 should be sufficient to identify the presence of a second mode and use it for tests of the no-hair theorem. This is only marginally achievable with advanced Earth-based detectors, but SNR of this order should be the norm for the black hole mergers detectable by eLISA. Furthermore, recent work showed that multi-mode ringdown waveforms could encode information on parameters of the binary before merger, such as the binary's mass ratio (Kamaretsos et al., 2011), and this would provide further consistency checks on the strong-field dynamics of general relativity.
3 Extreme mass ratio inspirals: precision probes of Kerr spacetime

EMRI are expected to be very clean astrophysical systems, except perhaps in the few percent of galaxies containing accreting massive black holes, where interactions with the accretion disk could possibly affect the EMRI dynamics. Over timescales of the order of a day, the orbits of the smaller body are essentially geodesics in the spacetime of the massive black hole. On longer timescales, the loss of energy and angular momentum due to gravitational-wave emission causes the smaller body to spiral in; i.e., the geodesic's "constants" of motion change slowly over time. Over a typical eLISA observation time (years), EMRI orbits are highly relativistic (radius smaller than 10 Schwarzschild radii) and display extreme forms of periastron and orbital plane precession due to the dragging of inertial frames by the massive black hole's spin. Figure 24 shows two sample waveforms, corresponding to short stretches of time.
Given the large amount of gravitational wave cycles collected in a typical EMRI observation (about 10 5 ), a fit of the observed gravitational waves to theoretically calculated templates will be very sensitive to small changes in the physical parameters of the system. As mentioned above, this sensitivity makes the search computationally challenging, but it allows an extremely accurate determination of the source parameters, once an EMRI signal is identified. Assuming that GR is correct and the central massive object is a black hole, eLISA should be able to determine the mass and spin of the massive black hole to fractional accuracy of about 10 −4 -10 −3 for gravitational wave signals with an SNR of 20 .
This level of precision suggests that we can use EMRI as a highly precise observational test of the "Kerr-ness" of the central massive object. That is, if we do not assume that the larger object is a black hole, we can use gravitational waves from an EMRI to map the spacetime of that object.
[Figure 24 caption (Drasco and Hughes, 2006): the plus-polarised waves produced by a test mass orbiting a 10^6 M☉ black hole that is spinning at 90 % of the maximal rate allowed by general relativity, a distance D from the observer. Top panel: slightly eccentric and inclined retrograde orbit modestly far from the horizon. Bottom panel: highly eccentric and inclined prograde orbit much closer to the horizon. The amplitude modulation visible in the top panel is mostly due to Lense-Thirring precession of the orbital plane. The bottom panel's more eccentric orbit produces sharp spikes at each pericentre passage.]
The spacetime outside a stationary axisymmetric object is fully determined by its mass moments M_l and current multipole moments S_l. Since these moments fully characterise the spacetime, the orbits of the smaller object and the gravitational waves it emits are determined by the multipolar structure of the spacetime. By observing these gravitational waves with eLISA we can therefore precisely characterise the spacetime of the central object. Extracting the moments from the EMRI waves is analogous to geodesy, in which the distribution of mass in the Earth is determined by studying the orbits of satellites. Black hole geodesy, also known as holiodesy, is very powerful because Kerr black holes have a very special multipolar structure. A Kerr black hole with mass M_• and spin parameter a_• (in units with G = c = 1) has multipole moments given by M_l + i S_l = M_•^(l+1) (i a_•)^l. Thus, M_0 = M_•, S_1 = a_• M_•^2, and M_2 = −a_•^2 M_•^3, and similarly for all other multipole moments; they are all completely determined by the first two moments, the black hole mass and spin. This is nothing more than the black hole "no-hair" theorem: the properties of a black hole are entirely determined by its mass and spin.
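The multipole relation above is easy to tabulate; the sketch below lists the first few Kerr moments in units G = c = 1, making explicit that every moment follows from the mass and spin alone.

```python
def kerr_multipoles(mass, spin, l_max=4):
    """Mass (M_l) and current (S_l) multipole moments of a Kerr black hole,
    M_l + i*S_l = mass**(l+1) * (1j*spin)**l, with G = c = 1 and `spin`
    the dimensionless spin parameter a = J / M^2."""
    moments = {}
    for l in range(l_max + 1):
        z = mass**(l + 1) * (1j * spin)**l
        moments[l] = (z.real, z.imag)   # (M_l, S_l); one of the two vanishes for each l
    return moments

# Example: M_0 = M, S_1 = a*M^2, M_2 = -a^2*M^3, S_3 = -a^3*M^4, ...
print(kerr_multipoles(1.0, 0.9))
```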
For inspiraling trajectories that are slightly eccentric and slightly non-equatorial, in principle all the multipole moments are redundantly encoded in the emitted gravitational waves (Ryan, 1995), through the time-evolution of the three fundamental frequencies of the orbit: the fundamental frequencies associated with the r, θ, and φ motions (Drasco and Hughes, 2004), or, equivalently, the radial frequency and the two precession frequencies.
The mass quadrupole moment M_2 of a Kerr black hole can be measured to within ΔM_2 ≈ 10^-2 M_•^3 - 10^-4 M_•^3 for signals with an SNR of 30. At the same time, ΔM_•/M_• and ΔS_1/M_•^2 will be estimated to an accuracy of 10^-4 - 10^-3.
Any inconsistency with the Kerr multipole structure could signal a failure of GR, the discovery of a new type of compact object, or a surprisingly strong perturbation from some other material or object. For a review of the different hypotheses regarding the nature of the central object see (Sopuerta, 2010).
Other tests of the Kerr nature of the central massive object have also been proposed. EMRI signals can be used to distinguish definitively between a central massive black hole and a boson star (Kesden et al., 2005). In the black hole case the gravitational wave signal "shuts off" shortly after the inspiraling body reaches the last stable orbit (and then plunges through the event horizon), while for a massive boson star, the signal does not fade, and its frequency derivative changes sign, as the body enters the boson star and spirals toward its centre. Similarly, if the central object's horizon is replaced by some kind of membrane (this is the case for the so-called gravastars) the orbital radiation produced by the orbiting body could resonantly excite the QNM of the gravastar, with characteristic signatures in the gravitational wave energy spectrum that would be detectable by eLISA .
Other studies within GR considered axisymmetric solutions of the Einstein field equations for which the multipole moments can differ from the Kerr metric, such as the Manko-Novikov solution. These studies revealed ergodic orbital motion in some parts of the parameter space (Gair, 2009b) as a result of the loss of the third integral of motion. A similar study suggested that the inspiralling body could experience an extended resonance in the orbital evolution when the ratio of intrinsic frequencies of the system is a rational number (Lukes-Gerakopoulos et al., 2010). If detected, these features would be a robust signature of a deviation from the Kerr metric.
These and similar studies of "bumpy" Kerr black holes - spacetime metrics with a multipolar structure that deviates from the Kerr spacetime by some "tunable" amount (Collins and Hughes, 2004, Glampedakis and Babak, 2006, Hughes, 2006, Ryan, 1995, Vigeland et al., 2011, Vigeland and Hughes, 2010) - focussed on understanding whether the best fit to eLISA data is consistent with the Kerr solution within general relativity. However, an even more exciting prospect is that modifications in EMRI waveforms might arise because the true theory of gravity is in fact different from GR. For example, black holes in dynamical Chern-Simons theory (a parity-violating, quantum-gravity inspired extension of GR) deviate from Kerr black holes in the fourth multipole moment (l = 4). This affects geodesic motion, and therefore the phasing of the gravitational wave signal (Pani et al., 2011, Sopuerta and Yunes, 2009). Gravitational wave observations of black hole-black hole binaries cannot discriminate between GR and scalar-tensor theories of gravity. The reason is that black holes do not support scalar fields; i.e., they have no scalar hair. However, eLISA could place interesting bounds on scalar-tensor theories using observations of neutron stars spiralling into massive black holes (Berti et al., 2005, Yagi and Tanaka, 2010). These limits will be competitive with - but probably not much more stringent than - Solar System and binary pulsar measurements (Esposito-Farèse, 2004).
Finally, eLISA observations of compact binaries could provide interesting bounds on Randall-Sundrum inspired braneworld models (McWilliams, 2010, Yagi et al., 2011). A general framework to describe deviations from GR in different alternative theories and their imprint on the gravitational wave signal from EMRI can be found in (Gair and Yunes, 2011).
Most high-energy modifications to GR predict the existence of light scalar fields (axions). If such scalar fields exist, as pointed out long ago by Detweiler and others (Detweiler, 1980), rotating black holes could undergo a superradiant "black hole bomb" instability for some values of their spin parameter. Depending on the mass of axions, string-theory motivated "string axiverse" scenarios predict that stable black holes cannot exist in certain regions of the mass/angular momentum plane (Arvanitaki and Dubovsky, 2011). Furthermore, this superradiant instability could produce a surprising result: close to the resonances corresponding to a superradiant instability the EMRI would stop, and the orbiting body would float around the central black hole. These "floating orbits" (for which the net gravitational energy loss at infinity is entirely provided by the black hole's rotational energy) are potentially observable by eLISA, and they could provide a smoking gun of high-energy deviations from general relativity .
In conclusion we remark that, if GR must be modified, the "true" theory of gravity should lead to similar deviations in all observed EMRI. For this reason, statistical studies of EMRI used to test GR would alleviate possible disturbances that may cause deviations in individual systems, such as interactions with an accretion disk (Barausse and Rezzolla, 2008, Barausse et al., 2007, Kocsis et al., 2011) or perturbations due to a second nearby black hole or a nearby star, which could also allow us to investigate different models of how stars are distributed around a massive black hole (Amaro-Seoane et al., 2012).
Intermediate mass ratio binaries
A loud gravitational wave source for eLISA would be the intermediate mass ratio inspiral (IMRI) of a binary comprising a middleweight (or equivalently intermediate-mass) black hole, with mass in the range of a few times 10^2 M☉ to a few times 10^4 M☉, together with either a massive black hole (10^6 M☉) or a solar-mass black hole. Currently there is no fully convincing evidence for the existence of intermediate-mass black holes, primarily due to the enormous observational difficulties of resolving the central region of dwarf galaxies and/or globular clusters, the two most likely places where they might reside. eLISA is one of the most promising observatories for discovering these middleweight black holes.
The strength of the gravitational wave signal from an IMRI lies between that of massive black hole binaries and EMRI, and the signal itself carries features of both limiting types, including a relatively fast frequency evolution and comparable contribution of several harmonics to the total strength of the signal. According to the proposed eLISA sensitivity, IMRI could be seen up to redshift z ∼ 4. There are good reasons to expect that IMRI orbits may have measurable eccentricity (Amaro-Seoane, 2006, Amaro-Seoane et al., 2010b, Amaro-Seoane et al., 2009, Amaro-Seoane and Santamaría, 2010, Sesana, 2010). It may also be possible in some cases to observe the gravitational spin-spin coupling between the two black holes (equivalent to the Lense-Thirring effect). The precision in the measurements of the source parameters will lie between that of EMRI and comparable-mass binaries.
The mass of the graviton
In GR, gravitational waves travel with the speed of light and the graviton is hence massless. Alternative theories with a massive graviton predict an additional frequency-dependent phase shift of the observed waveform. The dominant effect can be expressed at 1-PN order, and would change the PN coefficient ψ_2 in the stationary-phase approximation to the Fourier transform of the waveform as follows: ψ_2 → ψ_2 − 128 η π^2 D M/(3 λ_g^2 (1 + z)) (in units G = c = 1), where η is again the symmetric mass ratio. This term alters the time of arrival of waves of different frequencies, causing a dispersion, and a corresponding modulation in the phase of the signal that depends on the Compton wavelength λ_g and the distance D to the binary. Hence, by tracking the phase of the inspiral waves, eLISA should set bounds in the range λ_g ∈ [2 × 10^16 km, 10^18 km] on the graviton Compton wavelength (Huwyler et al., 2011), improving the current Solar System bound on the graviton mass, m_g < 4 × 10^-22 eV (λ_g > 3 × 10^12 km), by several orders of magnitude. Statistical observations of an ensemble of black hole coalescence events could be used to yield stringent constraints on other theories whose deviations from GR are parametrized by a set of global parameters: examples considered so far in the literature include theories with an evolving gravitational constant (Yunes et al., 2010), massive Brans-Dicke theories (Alsing et al., 2011) and Lorentz-violating modifications of GR (Mirshekari et al., 2011).
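A small sketch of the bookkeeping behind the graviton-mass bound quoted above: converting a Compton-wavelength bound into a mass bound via m_g c^2 = h c / λ_g. The constants are standard; the example wavelengths are the ones quoted in the text.

```python
H_PLANCK = 6.626e-34   # J s
C = 2.998e8            # m/s
EV = 1.602e-19         # J

def graviton_mass_ev(lambda_g_km):
    """Graviton mass (in eV/c^2) corresponding to a Compton wavelength in km."""
    return H_PLANCK * C / (lambda_g_km * 1e3) / EV

print(graviton_mass_ev(2.8e12))   # ~4e-22 eV, roughly the Solar System bound
print(graviton_mass_ev(2e16))     # ~6e-26 eV, lower end of the projected eLISA range
```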
Cosmology

1 New physics and the early Universe
Gravitational waves penetrate all of cosmic history, which allows eLISA to explore scales, epochs, and new physical effects not accessible in any other way (see figure 25). Indeed a detectable gravitational wave background in the eLISA band is predicted by a number of new physical ideas for early cosmological evolution (Hogan, 2006, Maggiore, 2000). Two important mechanisms for generating stochastic backgrounds are phase transitions in the early Universe and cosmic strings. Gravitational waves produced after the Big Bang form a fossil radiation: expansion prevents them from reaching thermal equilibrium with the other components because of the weakness of the gravitational interaction. Important information on the first instants of the Universe is thus imprinted in these relics and can be decoded. The mechanical effect of expansion is simply to redshift the corresponding frequency. Assuming that the wavelength is set by the apparent horizon size c/H_* = c a/ȧ at the time of production (when the temperature of the Universe is T_*), the redshifted frequency observed today is f_0 = (a_*/a_0) H_* ≈ 10^-4 Hz × (T_*/1 TeV)(g_*/100)^(1/6), where g_* is the number of relativistic degrees of freedom at T_*. Thus, the eLISA frequency band of about 0.1 mHz to 100 mHz today corresponds to the horizon at and beyond the Terascale frontier of fundamental physics. This allows eLISA to probe bulk motions at times about 3 × 10^-18 - 3 × 10^-10 seconds after the Big Bang, a period not directly accessible with any other technique. Taking a typical broad spectrum into account, eLISA has the sensitivity to detect cosmological backgrounds caused by new physics active in the range of energy from 0.1 TeV to 1000 TeV, if more than a modest fraction Ω_GW of about 10^-5 of the energy density is converted to gravitational radiation at the time of production.
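A quick numerical check of the redshift estimate above, assuming a standard radiation-dominated thermal history; the present-day temperature, Planck mass and g_* = 100 used below are approximate, so only the order of magnitude is meaningful.

```python
import math

M_PLANCK_GEV = 1.22e19       # Planck mass
T0_GEV = 2.35e-13            # present CMB temperature (~2.7 K)
GEV_TO_HZ = 1.52e24          # 1 GeV / hbar

def f_today_hz(t_star_tev, g_star=100.0):
    """Present-day frequency of a wave whose wavelength was the horizon c/H_*
    when the Universe had temperature T_* (radiation domination assumed)."""
    t_star = t_star_tev * 1e3                                        # GeV
    h_star = 1.66 * math.sqrt(g_star) * t_star**2 / M_PLANCK_GEV     # Hubble rate, GeV
    f_emit = h_star * GEV_TO_HZ                                      # f_* = c / (c/H_*)
    redshift = (T0_GEV / t_star) * (3.91 / g_star)**(1/3)            # a_*/a_0
    return f_emit * redshift

print(f_today_hz(1.0))      # ~1e-4 Hz for T_* ~ 1 TeV, i.e. in the eLISA band
print(f_today_hz(1000.0))   # roughly 0.1 Hz for T_* ~ 1000 TeV
```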
Various sources of gravitational wave background of cosmological origin are presented in detail in Binétruy et al. (2012). Here we will only briefly summarize the main mechanisms leading to the potentially observable backgrounds.
A standard example of new physics is a first-order phase transition resulting in bubble nucleation and growth, and subsequent bubble collisions and turbulence. Phase transitions also often lead to the formation of one-dimensional topological defects known as cosmic strings. Among possible topological defects, cosmic strings are unique from a cosmological point of view: whereas the energy density of a non-interacting string network would come to dominate over the other components during the expansion, strings interact and form loops which decay into gravitational waves. Thus cosmic strings tend to form networks with a typical scaling behaviour, losing energy mainly through gravitational radiation with a very broad and uniquely identifiable spectrum. Besides topological defects, cosmic strings could also find their origin among the fundamental objects of string theory, the theory that aims to provide a unified framework for all particles and forces of nature. Indeed, although fundamental strings were devised as submicroscopic objects, it has been progressively realized (Copeland et al., 2004) that some of these strings could be stretched to astronomical size by the cosmic expansion. eLISA will be our most sensitive probe for these objects, by several orders of magnitude, and so offers the possibility of detecting direct evidence of fundamental strings.
In order to distinguish backgrounds of gravitational waves from the waves emitted by point sources, it is essential to make use of the successive positions of eLISA around the Sun, and thus to wait a sufficient amount of time (of the order of a few months). It is more difficult to disentangle an isotropic cosmological (or astrophysical) background from an instrumental one, all the more so because the eLISA "Mother-Daughter" configuration, providing only two measurement arms, does not allow the use of Sagnac calibration (Hogan and Bender, 2001). Luckily, in the case of phase transitions as well as cosmic strings, the spectral dependence of the signal is well predicted and may allow us to distinguish cosmological backgrounds as long as they lie above the eLISA sensitivity curve.
Abundant evidence suggests that the physical vacuum was not always in its current state, but once had a significantly higher free energy. This idea is fundamental and general: it underlies symmetry breaking in theories such as the Standard Model and its supersymmetric extensions, and cosmological models including almost all versions of inflation. Common to all these schemes is the feature that a cold, nearly uniform free energy contained in the original (false) vacuum is liberated in a phase transition to a final (true) vacuum, and eventually converted into thermal energy of radiation and hot plasma.
In many theories beyond the Standard Model, the conversion between vacuum states corresponds to a first-order phase transition. In an expanding Universe this leads to a cataclysmic process. After supercooling below the critical temperature T_* for the transition, a thermal or quantum jump across an energy barrier leads to the formation of bubbles of the new phase. The bubbles rapidly expand and collide. The internal energy is thus converted to organised flows of mass-energy, whose bulk kinetic energy eventually dissipates via turbulence and finally thermalises. The initial bubble collision and subsequent turbulent cascade lead to relativistic flows and acceleration of matter that radiate gravitational waves on a scale not far below the horizon scale (Caprini et al., 2009, Hogan, 1986, Huber and Konstandin, 2008, Kamionkowski et al., 1994, Witten, 1984). The gravitational wave energy density Ω_GW typically depends on two parameters: H_*/β, the duration of the transition in Hubble units, and α, the fraction of energy density available in the source (false vacuum, relativistic motion). Typically Ω_GW ∼ Ω_rad (H_*/β)^2 (κ α)^2/(1 + α)^2, where Ω_rad is the fraction of radiation energy today and κ is the fraction of vacuum energy which is converted into bulk kinetic energy during the phase transition. Strong first order phase transitions are obtained for α ≳ 1 but, in the context of specific models, increasing α may increase β as well.
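The order-of-magnitude scaling quoted above can be turned into a one-line estimate; the value of Ω_rad and the example parameters below are assumptions chosen purely for illustration.

```python
def omega_gw_estimate(alpha, beta_over_h, kappa, omega_rad=8.5e-5):
    """Order-of-magnitude phase-transition background,
    Omega_GW ~ Omega_rad * (H_*/beta)^2 * (kappa*alpha)^2 / (1+alpha)^2."""
    return omega_rad * (kappa * alpha)**2 / ((1.0 + alpha)**2 * beta_over_h**2)

# A strong transition with alpha ~ 1, beta/H_* ~ 100, kappa ~ 0.5  ->  ~5e-10
print(omega_gw_estimate(alpha=1.0, beta_over_h=100.0, kappa=0.5))
```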
Dynamics of warped sub-millimetre extra dimensions
Superstring theory provides examples of strong first order phase transitions in the Terascale region. It requires, for mathematical consistency, several extra dimensions. The sizes of these dimensions, their shapes, and how they are stabilised are yet to be determined. If they exist, gravity can penetrate into them, so they must be small or warped, with a size below the sub-millimetre scale limit set by direct laboratory tests of the gravitational inverse-square law. The scales probed by Standard Model particles and fields are much smaller than this, but fields other than gravity might be confined to a 3-dimensional subspace or (mem)brane embedded in the higher-dimensional space.
Since the Hubble length at the Terascale is about a millimetre, the current threshold where possible new effects of extra dimensions might appear happens to be about the same for experimental gravity in the laboratory as for the cosmological regime accessible to eLISA. It is even possible that new properties of gravity on this scale are related to cosmic dark energy, whose energy density is about (0.1 mm)^-4 in particle physics units.
The dynamics associated with the stabilisation of extra dimensions at a certain size or warp radius might introduce a source of free internal energy released coherently on a mesoscopic (i.e. sub-millimetre to nanometre) scale, leading to a detectable background (Hogan, 2000, Randall and Servant, 2007). If the extra dimensions are much smaller than the Hubble length when the stabilisation occurs, the behaviour of the extra dimensions is nearly equivalent to scalar field behaviour as viewed in conventional 3-dimensional space, with effects similar to the phase transitions discussed above (see figure 26).
Backgrounds, bursts, and harmonic notes from cosmic strings
As we have seen above, models of physics and cosmology based on string theory, as well as their field-theory counterparts, often predict the cosmological formation of cosmic superstrings (Copeland et al., 2004) that form after inflation and are stretched to enormous length by the cosmic expansion. In equivalent field-theory language, cosmic strings arise from certain types of phase transitions, and stable relics of the high-energy phase persist as topological defects: in the form of one-dimensional strings that resemble flux tubes or trapped vortex lines.
The primordial network of strings produces isolated, oscillating loops that ultimately radiate almost all of their energy into gravitational waves. Their gravitational radiation is mainly governed by a single dimensionless parameter Gµ/c^4 reflecting the fundamental physics of the strings, where G is Newton's constant and µ is the energy per unit length, or tension. This parameter is known to be very small, as current limits on gravitational wave backgrounds already indicate that if cosmic strings exist, they must be so light that they would have few observable effects apart from their gravitational radiation. Figure 27 compares the eLISA sensitivity (in red) with predicted stochastic background spectra in two distinct scenarios: large loops in blue (where newly formed loops are about α = 0.1 times the horizon size) for two values of Gµ/c^4 spanning a range of scenarios motivated by brane world inflation, and small loops (dashed, with α = 50 Gµ) for one value of Gµ/c^4. We note that the spectrum from cosmic strings is distinguishably different from that of phase transitions or any other predicted source: it has nearly constant energy per logarithmic frequency interval over many decades at high frequencies, and falls off after a peak at low frequencies, since large string loops are rare and radiate slowly. In the small loop scenario, the peak frequency shifts to lower values as the loop size increases, whereas the amplitude decreases with Gµ/c^4. This allows an interesting interplay between measurements at eLISA, ground interferometers and millisecond pulsar arrays: depending on the parameters, one may have detection of the string background at one, two or three of these different types of detectors. In the large loop scenario, the eLISA sensitivity in terms of Gµ/c^4 is several orders of magnitude deeper than even the best possible future sensitivity from pulsar timing.
If the strings are not too much lighter than Gµ/c^4 ∼ 10^-10, occasional distinctive bursts might be seen from loops, produced by a sharply bent bit of string moving at nearly the speed of light (Damour and Vilenkin, 2005, Siemens et al., 2006). These rare events, known as kinks or cusps, are recognisable, if they are intense enough to stand out above the background, from their universal waveform, which derives just from the geometry of the string. Cusps are localized in time whereas kinks propagate along the strings. In the case of fundamental strings, the presence of junctions between strings leads to a proliferation of kinks (Binétruy et al., 2010, Bohé, 2011).
Although individual burst events, if detected, give the clearest signature of a string source, the first detectable sign of a superstring loop population is likely their integrated stochastic background as shown in figure 27.
Terascale inflationary reheating
Inflation represents an extraordinarily coherent behaviour of an energetic scalar field that is nearly uniform across the observable Universe. After inflation, the internal potential energy of this field is converted into a thermal mix of relativistic particles, in a process known as reheating. The reheating temperature might be as cool as 1 TeV, especially in some braneworld models where the Planck scale is itself not far above the Terascale.
There is no reason to assume a quiet, orderly reheating process: the decay of the inflaton energy may be violently unstable. In many scenarios, the conversion begins with macroscopically coherent but inhomogeneous motions that eventually cascade to microscopic scales. Quantum coherent processes such as preheating transform the energy into coherent classical motions that can generate backgrounds on the order of 10^-3 or more of the total energy density (Dufaux et al., 2007, Easther and Lim, 2006, Garcia-Bellido and Figueroa, 2007, Khlebnikov and Tkachev, 1997). The characteristic frequency of the background can fall in the eLISA band if the final reheating occurred at 0.1 TeV to 1000 TeV.
Exotic inflationary quantum vacuum fluctuations
The amplification of quantum vacuum fluctuations during inflation leads to a background of primordial gravitational waves. An optimistic estimate of this background in the case of conventional inflation limits it to less than about 10^-10 of the CMB energy density, far below eLISA's sensitivity; in many inflation models it is much less (Chongchitnan and Efstathiou, 2006). However, some unconventional versions of inflation, particularly pre-Big-Bang or bouncing brane scenarios, predict possibly detectable backgrounds in the eLISA band (see e.g. Brustein et al., 1995, Buonanno, 2003, Buonanno et al., 1997). Although some key parameters remain unknown, which limits the predictive power of these models, they are significantly constrained by gravitational wave backgrounds. If such a background is detected, its spectrum also contains information about the Universe at the time perturbations re-enter the horizon (the second horizon intersection in figure 25).
Cosmological measurements with eLISA
As discussed in section 4, we can probe the assembly of cosmic structures through observations of black hole binaries up to high redshifts. In addition to that, gravitational wave sources could serve as standard sirens for cosmography (Holz and Hughes, 2005), because chirping binary systems allow direct measurements of the luminosity distance to the source. The principle is elegant and simple (Schutz, 1986): the chirping time τ of an inspiral/merger event, together with its orbital frequency ω and strain h, gives the absolute luminosity distance to the source, D_L ∼ c/(ω^2 τ h), with a numerical factor depending on details of the system that are precisely determined by the measured waveform. However, eLISA cannot independently determine the redshift of a source, since in gravitational wave astronomy the measured source frequency and chirp time always appear combined with the cosmic redshift, ω = ω_source/(1 + z), τ = (1 + z) τ_source, i.e., the redshift is degenerate with the source's intrinsic parameters. An independent measurement of redshift is therefore needed. This may be accomplished by getting the optical redshift to the host galaxy, for instance by identifying an electromagnetic radiation counterpart to the event.
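The standard-siren scaling above is easy to evaluate; the snippet below uses made-up input values purely to show how an absolute distance follows from the observables, while the redshift degeneracy still has to be broken by an electromagnetic counterpart.

```python
C = 2.998e8          # m/s
MPC = 3.086e22       # m

def luminosity_distance_estimate(omega, tau, h):
    """Order-of-magnitude standard-siren distance, D_L ~ c / (omega^2 * tau * h);
    the waveform-dependent numerical prefactor of order unity is omitted."""
    return C / (omega**2 * tau * h)

# Illustrative (made-up) observables: angular frequency, chirp time and strain.
d = luminosity_distance_estimate(omega=1e-2, tau=3e4, h=1e-18)
print(d / MPC, "Mpc")   # ~3000 Mpc, i.e. a cosmological distance
```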
In the last decade, several mechanisms producing electromagnetic counterparts to black hole binary coalescences have been proposed (e.g., Armitage and Natarajan, 2002, Milosavljević and Phinney, 2005, Phinney, 2009); an exhaustive review can be found in (Schnittman, 2011). While there are still uncertainties in the nature and strength of such counterparts, we might expect some of them to be observable at least in the local Universe (say, z ≤ 1). Our parameter estimation simulations show that, at low redshift, we could expect to localize at least 50 % of the inspiralling black holes to better than 400 square degrees and about 11 % to better than 10 square degrees. Merger and ringdown (if observed) should further improve those numbers. As a practical example, wide area surveys like LSST (LSST Science Collaborations et al., 2009) in optical or the VAST project using the Australian Square Kilometer Array Pathfinder (Johnston et al., 2007) in radio will have the capability of covering such a large area in the sky to high depth several times per day during and right after the merger event, looking for distinctive transients. Any identified counterpart will provide precise measurements of the source redshift and sky location. We can use this information to perform a directional search (fixing the sky location of the gravitational wave source) in the eLISA data, and the resulting uncertainty in the luminosity distance drops to less than 1 % for 60 % of the sources (and to less than 5 % for 87 %). Those numbers are comparable with (or even lower than) the weak lensing error at these low redshifts (Wang et al., 2002). Ultra-precise measurements of the redshift and the luminosity distance will allow us to cross-check the SNIa measurements (Riess, 1999, Riess et al., 1998), and because of the very different systematics from the usual cosmological distance ladder estimates, will be a strong check on hidden systematic errors in these measurements. This will improve the estimation of cosmological parameters, such as H_0 and w.
Without electromagnetic identification of the host, we can check statistical consistency between all the possible hosts detected within the measurement error box to infer cosmological parameters, as has been suggested in the literature. To realize this scheme one needs a rather good source sky location and distance determination, which is possible with eLISA only at low redshifts (z < 2). In the local Universe, the same technique applied to EMRI will allow a precision measurement of H_0 (MacLeod and Hogan, 2008) at a level of a few percent.
Conclusions: science and observational requirements
In this document we have presented the science that eLISA will be able to do, which ranges from ultra-compact binaries to cosmology and tests of GR.
In particular, we note that eight of the known ultra-compact binaries will be detected by eLISA as verification binaries. Upcoming wide-field and synoptic surveys will most likely discover more verification binaries before eLISA's launch. eLISA will detect about 3,000 double white dwarf binaries individually. Most have orbital periods between 5 and 10 minutes and have experienced at least one common-envelope phase, so they will provide critical tests of physical models of the common-envelope phase. These sources are exactly the population which has been proposed as progenitors of normal as well as peculiar (type Ia) supernovae. eLISA will tell us whether the formation of all ultra-compact binaries is enhanced in globular clusters by dynamical interactions. The millions of ultra-compact binaries that will not be individually detected by eLISA will form a detectable foreground from which the global properties of the whole population can be determined. The binaries detected by eLISA will improve our knowledge of tidal interactions in white dwarfs, mass-transfer stability and white dwarf mergers. eLISA will unravel the Galactic population of short-period neutron star and black hole binaries, and thus determine their local merger rate. eLISA will measure the sky position and distance of several hundred binaries, constraining the mass distribution in the Galaxy and providing an independent distance estimate to the Galactic centre. The level and shape of the Galactic foreground will constrain the relative contributions of thin disc, thick disc and halo populations and their properties. For several hundred sources the orbital inclination will be determined to better than 10 degrees, allowing us to test whether binaries are statistically aligned with the Galactic disc.
One of the most promising science goals of the mission concerns supermassive black holes, which appear to be a key component of galaxies. They are ubiquitous in nearby bright galaxies and share a common evolution. The intense accretion phase that supermassive black holes experience when shining as QSOs and AGN erases information on how and when the black holes formed. eLISA will unravel precisely this information. Very massive black holes are expected to transit into the mass interval to which eLISA is sensitive along the course of their cosmic evolution. eLISA will then map and mark the loci where galaxies form and cluster, using black holes as clean tracers of their assembly by capturing gravitational waves emitted during their coalescence, which travelled undisturbed from the sites where they originated. On the other hand, middleweight black holes of 10^5 M☉ are observed in the nearby universe, but our knowledge of these systems is rather incomplete. eLISA will investigate a mass interval that is not accessible to current electromagnetic techniques, and this is fundamental to understanding the origin and growth of supermassive black holes. Due to the transparency of the universe to gravitational waves at any redshift, eLISA will explore black holes of 10^5 M☉ - 10^7 M☉ out to a redshift z ∼ 20, tracing the growth of the black hole population. eLISA will also shed light on the path of black holes to coalescence in a galaxy merger. This is a complex process, as various physical mechanisms involving the interaction of the black holes with stars and gas need to be at play and work effectively, acting on different scales (from kpc down to 10^-3 pc). Only at the smallest scales are gravitational waves the dominant dissipative process driving the binary to coalescence. eLISA will trace the last phase of this evolution. Dual AGN, i.e. active black holes observed during their pairing phase, offer the view of what we may call the galactic precursors of black hole binary coalescences. They are now being discovered in increasing numbers in large surveys. By contrast, evidence of binary and recoiling AGN is poor, as the true nature of a number of candidates is not yet fully established. Only eLISA will offer a unique view of an imminent binary merger by capturing its loud gravitational wave signal.
There exist major uncertainties in the physical mechanism(s) conducive to the gravitational collapse of a star (or perhaps of a very massive quasi-star) leading to the formation of the first black holes in galaxies. The mass of seed black holes ranges from a few hundred to a few thousand solar masses. Seed black holes later grow, following different evolutions according to their different formation paths and clustering inside dark matter halos, and eLISA aims at disentangling the different routes of evolution. eLISA will considerably reduce uncertainties on the nature of the seed population, as the number of observed mergers and the inferred masses will allow us to decide among the different models or, in the case of concurrent models, determine their prevalence.
According to the theoretical findings we have presented, massive black hole masses and spins evolve through coalescence and accretion events. Black hole spins offer the best opportunity to determine whether accretion episodes prior to coalescence are coherent or chaotic. Masses and spins are directly encoded into the gravitational waves emitted during the merger process. eLISA will measure the masses and spins of the black holes prior to coalescence, offering unprecedented details on how black hole binaries have been evolving via mergers and accretion along cosmic history. At present, coalescence rates, as a function of redshift and in different mass bins, can only be inferred theoretically, using statistical models for the hierarchical build-up of cosmic structures. These models, firmly anchored to low redshift observations, indicate that the expected detection rates for eLISA range between a few and a few hundred per year.
Current electromagnetic observations are probing only the tip of the massive black hole distribution in the universe, targeting black holes with large masses, between 10^7 M☉ and 10^9 M☉. Conversely, eLISA will be able to detect the gravitational waves emitted by black hole binaries with total mass (in the source rest frame) as small as 10^4 M☉ and up to 10^7 M☉, out to a redshift as remote as z ∼ 20. eLISA will detect fiducial sources out to redshift z ∼ 10 with SNR ∼ 10 and so it will explore almost all the mass-redshift parameter space relevant for addressing scientific questions on the evolution of the black hole population. Redshifted masses will be measured to an unprecedented accuracy, up to the 0.1-1 % level, whereas absolute errors in the spin determination are expected to be in the range 0.01-0.1, allowing us to reconstruct the cosmic evolution of massive black holes. eLISA observations hence have the potential of constraining the astrophysics of massive black holes along their entire cosmic history, in a mass and redshift range inaccessible to conventional electromagnetic observations.
On smaller scales, eLISA will also bring a new revolutionary perspective, in this case relative to the study of galactic nuclei. eLISA will offer the deepest view of galactic nuclei, exploring regions to which we are blind using current electromagnetic techniques and probing the dynamics of stars in the space-time of a Kerr black hole, by capturing the gravitational waves emitted by stellar black holes orbiting the massive black hole. EMRI detections will allow us to infer properties of the stellar environment around a massive black hole, so that our understanding of stellar dynamics in galactic nuclei will be greatly improved. Detection of EMRIs from black holes in the eLISA mass range, which includes black holes similar to the Milky Way's, will enable us to probe the population of central black holes in an interval of masses where electromagnetic observations are challenging. eLISA's EMRIs can be detected up to z ∼ 0.5-0.7, allowing us to explore a volume of several tens of Gpc^3 and discover massive black holes in dwarf galaxies that are still elusive to electromagnetic observations. eLISA may also measure the mass of stellar-mass black holes. This will provide invaluable information on the mass spectrum of stellar black holes, and on the processes giving rise to compact stars. eLISA will detect EMRI events out to redshift z ∼ 0.7, in normal galaxies with high SNR, and in the mass interval 10^4 M☉ ≲ M ≲ 5 × 10^6 M☉. eLISA will measure the mass and spin of the large, massive black hole with a precision better than a part in 10^4. This will enable us to characterise the population of massive black holes in nuclei in an interval of masses where electromagnetic observations are poor, incomplete or even missing, providing information also on their spins. eLISA will also measure with equivalent precision the mass of the stellar black hole in the EMRI event, and also the orbital eccentricity at plunge. These observations will provide insight on the way stars and their remnants are forming and evolving in the extreme environment of a galactic nucleus. The estimated detection rates, based on the best available models of the black hole population and the EMRI rate per galaxy, are about 50 events with a two year eLISA mission, with a factor of 2 uncertainty from the waveform modelling and lack of knowledge about the likely system parameters (larger uncertainties are of astrophysical nature). Even with a handful of events, EMRIs will be a powerful astrophysical probe of the formation and evolution of massive and stellar black holes. We also note that the detection with eLISA of even a single coalescence event involving two intermediate mass black holes in colliding star clusters, present in the very local universe, would be a major discovery, and it would have a strong impact in the field of stellar dynamics and stellar evolution in star forming regions.
General Relativity has been extensively tested in the weak field regime both in the solar system and by using binary pulsars. eLISA will provide a unique opportunity to confront GR in the highly dynamical, strong field regime of massive black holes. eLISA will be capable of detecting the inspiral and/or merger plus ringdown parts of the gravitational wave signal from coalescing massive black hole binaries of comparable mass. For nearby events (z ∼ 1) the last several hours of the gravitational wave signal will be clearly seen in the data, allowing direct comparison with the waveforms predicted by GR. The inspiral phase could be observed by eLISA up to a year before the final merger with relatively large SNR. Comparison of the observed inspiral rate with the predictions of GR will provide a valuable test of the theory in the regime of strong, dynamical gravitational fields.
The merger of two black holes could be observed by eLISA throughout the Universe if it falls into the detector band. The observation of the merger could be confronted directly with the predictions of GR and, if the inspiral is also observed, could be used for a consistency check between the two parts of the gravitational wave signal. According to GR the merger leads to a single ringing Kerr black hole characterised by its mass and spin. Detecting two or more quasinormal modes (the individual damped exponential components of the so-called ringdown radiation) will allow us to check whether the final object is indeed described by only two parameters, in accord with the no-hair theorem of GR. eLISA will give us a unique opportunity to observe middleweight black holes in the local Universe. If observed, these systems would provide an additional testbed for GR. eLISA will be capable of setting an upper limit on the mass of the graviton that is at least four orders of magnitude better than the current limit based on observations in the Solar System. The discovery of coalescing binary black holes, signposts of (pre-)galactic mergers, will test, albeit indirectly, the hypothesis which is at the heart of the current paradigm of galaxy formation, i.e. their assembly in a bottom-up fashion. Furthermore, coalescing binary black holes can be regarded as standard sirens, and they may allow a direct measurement of the luminosity distance to the source. If coalescence is accompanied by an electromagnetic signal that permits the measurement of the optical redshift of the source, eLISA will improve upon the estimation of cosmological parameters, such as the Hubble constant and the dark-energy parameter w. eLISA will have unique capabilities in detecting signatures from (or setting meaningful constraints on) a wide range of cosmological phenomena and fundamental physics. Gravitational radiation backgrounds are predicted in cosmological models that include first order phase transitions, late-ending inflation, and dynamically active mesoscopic extra dimensions. eLISA will provide the most sensitive direct probes of such phenomena near TeV energies.
We now state the eLISA science requirements (SR), which summarize the research needed to fulfill the eLISA objectives. For each science requirement, one or more observational requirements (OR) are defined. The observational requirements are stated in terms of observable quantities necessary to meet the science requirements, and in terms of the precision with which such quantities must be measured.
• Galactic binaries
  - SR 1.1: Elucidate the formation and evolution of Galactic stellar-mass compact binaries and thus constrain the outcome of the common envelope phase and the progenitors of (type Ia) supernovae.
    * OR 1.1.1: eLISA shall have the capability to detect at least 1000 binaries at SNR > 10 with orbital periods shorter than approximately six hours and determine their period. eLISA shall maintain this detection capability for at least one year.
    * OR 1.1.2: eLISA shall detect all neutron star and black hole binaries in the Milky Way with periods shorter than 35 minutes, if they exist.
    * OR 1.1.3: eLISA shall have the capability to measure the level of the unresolved Galactic foreground. eLISA shall maintain this detection capability for at least one year.
-SR 1.2 : Determine the spatial distribution of stellar mass binaries in the Milky Way. * OR 1.2.1 : eLISA shall have the capability to determine the position of at least 500 sources with better than ten square degree angular resolution and the frequency derivative to a fractional uncertainty of 10 %. * OR 1.2.2 : eLISA shall measure the inclination of at least 500 binaries to better than 10 degrees.
-SR 1.3 : Improve our understanding of white dwarfs, their masses, and their interactions in binaries, and enable combined gravitational and electromagnetic observations. * OR 1.3 : eLISA shall have the capability to measure the frequency derivative of all detected binary systems with gravitational wave frequencies above 10 mHz to better than 10 %.
• Massive black hole binaries -SR 2.1 : Trace the formation, growth and merger history of massive black holes with masses 10^5 M⊙ − 10^7 M⊙ during the epoch of growth of QSOs and widespread star formation (0 < z < 5) through their coalescence in galactic halos. * OR 2.1.1 : eLISA shall have the capability to detect the mergers of similar-mass massive black holes (mass ratio m_2/m_1 > 0.1) with total mass in the range 10^5 M⊙ < m_1 + m_2 < 10^7 M⊙ up to redshift z = 20. The SNR of those sources with redshift z < 5 should be sufficient to enable determination of the massive black hole masses (relative errors smaller than 1 %) and the spin of the largest massive black hole (error smaller than 0.1) and an estimation of the luminosity distance (relative error smaller than 50 %). * OR 2.1.2 : eLISA shall have the capability to detect the mergers of massive black holes with total mass in the range 10^5 M⊙ < m_1 + m_2 < 10^7 M⊙ and mass ratio m_2/m_1 of about 0.01 up to redshift z = 8. The SNR of those sources with redshift z < 5 shall be sufficient to enable determination of the massive black hole masses (relative errors smaller than a few percent).
-SR 2.2 : Capture the signal of coalescing massive black hole binaries with masses 2 × 10^4 M⊙ − 10^5 M⊙ in the range 5 < z < 10, when the universe is less than 1 Gyr old. * OR 2.2.1 : eLISA shall have the capability to detect the mergers of comparable-mass massive black holes (mass ratio m_2/m_1 > 0.1) with total mass in the range 2 × 10^4 M⊙ < m_1 + m_2 < 10^5 M⊙ beyond redshift z = 5 and up to z = 15 for equal-mass systems, with sufficient SNR to enable determination of the massive black hole masses (relative errors smaller than 1 %) and the spin of the largest massive black hole (error smaller than 0.1) and an estimation of the luminosity distance (relative error smaller than 50 %). * OR 2.2.2 : eLISA shall have the capability to detect some of the mergers of massive black holes with total mass in the range 2 × 10^4 M⊙ < m_1 + m_2 < 10^5 M⊙ and mass ratio 0.01 < m_2/m_1 < 0.1 beyond redshift z = 5 with sufficient SNR to enable determination of the massive black hole masses with relative errors smaller than a few percent.
• Extreme (and intermediate) mass ratio inspirals -SR 3.1 : Characterise the immediate environment of massive black holes in z < 0.7 galactic nuclei from EMRI capture signals. * OR 3.1 : eLISA shall have the capability to detect gravitational waves emitted during the last two years of inspiral for a stellar-mass compact object (m_2 ∼ 5 M⊙ − 20 M⊙) orbiting a massive black hole (m_1 ∼ 10^5 M⊙ − 10^6 M⊙) up to z = 0.7 with an SNR > 20. The detection of those sources shall be sufficient to determine the mass of the massive black hole with a relative error smaller than 0.1 %, the spin of the massive black hole with an error smaller than 10^−3, and the mass of the compact object with a relative error smaller than 0.1 %, as well as the orbital eccentricity before the plunge with an error smaller than 10^−3.
-SR 3.2 : Discovery of intermediate-mass black holes from their capture by massive black holes. * OR 3.2 : eLISA shall have the capability to detect gravitational waves emitted by a 10^2 M⊙ − 10^4 M⊙ intermediate-mass black hole spiralling into a massive black hole with mass 3 × 10^5 M⊙ − 10^7 M⊙ out to z ∼ 2 − 4 (for a mass ratio around 10^−2 to 10^−3).
• Confronting General Relativity with Precision Measurements of Strong Gravity
-SR 4.1 : Detect gravitational waves directly and measure their properties precisely. * OR 4.1.1 : eLISA shall have the capability to detect and study three or more optically observable verification binaries between 1 mHz and 10 mHz with SNR > 10 in two years of mission lifetime. * OR 4.1.2 : eLISA shall be capable of observing the gravitational waves from at least 50 % of all z ∼ 2 coalescing binary systems consisting of compact objects with masses between 10^5 M⊙ and 10^6 M⊙ and mass ratios between 1 : 1 and 1 : 3. eLISA shall detect these systems with SNR ≥ 5 in each of five equal logarithmic frequency bands between 0.1 mHz (or the lowest observed frequency) and the highest inspiral frequency.
-SR 4.2 : Test whether the central massive objects in galactic nuclei are consistent with the Kerr black holes of General Relativity. * OR 4.2 : eLISA shall have the capability to detect gravitational waves emitted during the last year of inspiral for a 10 M⊙ black hole orbiting a 10^5 M⊙ − 10^6 M⊙ black hole up to z = 0.7 with SNR > 20. eLISA shall have a science mission duration with adequate observation time for extreme mass-ratio inspirals (EMRIs) to sweep over a range of r/M to map space-time.
-SR 4.3 : Perform precision tests of dynamical strong-field gravity. * OR 4.3.1 : eLISA shall have the capability to observe the inspiral radiation from massive black holes with masses between 10^5 M⊙ and 10^6 M⊙ and mass ratio m_2/m_1 > 1/3 to z ≤ 5 with an average SNR > 30, measuring the mass to better than 1 % and the spin parameters to better than 0.1. The SNR should be sufficient to check the consistency of the inspiral waveform with the predictions of the General Theory of Relativity. * OR 4.3.2 : eLISA shall have the capability to observe the merger and ring-down radiation from massive black holes with masses between 10^5 M⊙ and 10^6 M⊙ and mass ratio m_2/m_1 > 1/3 to z ≤ 8 with an average SNR > 60, measuring the mass to better than 1 % and the spin parameters to better than 0.3. The SNR should be sufficient to check consistency with the predictions of the General Theory of Relativity based on inspiral measurements.
• Cosmology -SR 5.1 : Measure the spectrum of cosmological backgrounds, or set upper limits on them in the 10^−4 Hz − 10^−1 Hz band. * OR 5.1 : eLISA shall be capable of setting an upper limit on the spectrum of a stochastic gravitational wave background in the 10^−4 Hz − 10^−1 Hz band.
-SR 5.2 : Search for gravitational wave bursts from cosmic string cusps and kinks. * OR 5.2 : eLISA shall be capable of detecting gravitational wave bursts from cusps or kinks, or of setting cosmologically interesting constraints on cosmic (super-)strings.
• Discovery -SR 6.1 : Search for unforeseen sources of gravitational waves * OR 6.1 : eLISA shall be sensitive over discovery space for unforeseen effects (e.g. even at frequencies where we cannot predict likely signals from known classes of astrophysical sources). eLISA shall allow for reliable separation of real strain signals from instrumental and environmental artifacts.
Retroperitoneal extrarenal angiomyolipoma at the surgical bed 8 years after a renal angiomyolipoma nephrectomy: A case report and review of literature
INTRODUCTION
Extrarenal retroperitoneal space is the third most common primary site for angiomyolipoma (AML). Sixty cases of extrarenal AML (ERAML) have been reported since the first case report in 1982 by Friis and Hjortrup, [1,2] of which only 16 were retroperitoneal extrarenal angiomyolipoma (RERAML). RERAML is difficult to differentiate from the liposarcoma however the history of AML and few characteristic imaging features indicate RERAML which can be managed with mechanistic target of rapamycin (mTOR) kinase inhibitor therapy and surveillance rather than surgery. We report a patient with AML recurrence in the surgical bed 8 years after nephrectomy for a large renal AML. We found no reports in literature with a similar postoperative presentation.
CASE REPORT
A 35-year-old asymptomatic female had a history of a large left renal AML which presented as acute left-sided abdominal pain and massive retroperitoneal hemorrhage on computed tomography (CT) abdomen imaging at another hospital (images unavailable). She had a left nephrectomy for the same 8 years ago, details of which were unavailable when she presented to our institution. She was then followed up at the same hospital annually for 2 years, with CT abdomen showing thin-walled cysts in the basal segments of both lungs [Figure 1], suggestive of lymphangioleiomyomatosis (LAM). She was followed up by a pulmonologist for LAM with regular pulmonary function tests, and CT chest 6 years later showed a mild interval increase in the number of cysts in both lungs [Figure 2], however without any pulmonary symptoms or complications since her diagnosis. She was worked up for tuberous sclerosis, including genetic testing for tuberous sclerosis complex (TSC) 1 and 2 and MR brain imaging, which were negative. She had no significant past medical, social, or family history. Abdomen CT during the first 2 years postnephrectomy showed no residual or recurrent disease in the left renal bed, after which she had no further imaging follow-up. She had annual vascular endothelial growth factor-D (VEGF-D) levels which were within normal limits since nephrectomy. Now, she presented to our hospital 8 years after nephrectomy, when her admission VEGF-D level was elevated at 1197 pg/ml (normal <600 pg/ml). CT abdomen showed a new 9.1 cm × 3.5 cm enhancing fat-containing, mixed-density lesion at the left renal bed [Figure 3a and b]. The retroaortic left renal vein extended into the lesion and arborized, with the left renal artery seen as a short, blind-ending stump. The contralateral right kidney was normal. The imaging diagnosis favored retroperitoneal ERAML, and she was placed on a trial of everolimus, an mTOR kinase inhibitor, since she was asymptomatic. Follow-up VEGF-D level at 4 months decreased to 623 pg/ml, and ultrasound (US) showed a mild decrease in the size of the retroperitoneal lesion [Figure 4], and hence embolization/surgery was deferred.
On subsequent 6-month follow-ups for the next 2 years, everolimus levels remained within the therapeutic range, there were no changes on US, and the patient continued to be symptom free.
DISCUSSION
AML is a benign neoplasm of monoclonal origin, sometimes referred as renal hamartoma, choristoma, or perivascular epithelioid cell tumors. It has triphasic morphology on histology that includes mature adipose tissue, thick-walled blood vessels, and perivascular spindle cells. [3][4][5][6] They show immunoreactivity to smooth muscle actin, HMB-45, and melanocytic marker Melan-A. [7] AMLs can occur in inherited forms in association with tuberous sclerosis (10%-20%); however, the sporadic form is the most common (80%-90%). [1,[7][8][9][10][11] ERAML are rare and usually present as incidentalomas on imaging. [3,12] These rare tumors were reported in liver, retroperitoneum, adrenal glands, colon, urinary bladder, hilar lymph nodes, lungs, ribs, oral and nasal cavity, abdominal wall, fallopian tube, uterus, and skin. [1,3,5,9,[12][13][14] Literature review reports only 60 cases of ERAML since its first description in 1982 by Friis and Hjortrup in the above-mentioned locations, of which only 16 cases have been reported in the retroperitoneum, which is the 2 nd most common ERAML primary location. [2,3,13] RERAML are usually >10 cm and asymptomatic in view of their retroperitoneal location. [1,12,13,15] However, they can less commonly present with nonspecific symptoms such as vague abdominal pain, weight gain, fullness in epigastric region/abdomen, hematuria, and constipation, and rarely with enlarged abdomen or ureteric obstruction or retroperitoneal hemorrhage. [4,9,13] Imaging has a crucial part in the diagnosis of RERAML since they are mostly asymptomatic and difficult especially in obese patients and helps in determining the extent of the tumor as well as guides surgical planning. [13] It is very helpful in the surveillance of the postnephrectomy AML patients. US determines the size and extent of the mass, mass effect on adjacent organs, presence of metastatic lesion, and guides biopsy. [4] The classical sonographic findings are well-defined hyperechogenic mass with acoustic shadowing. [1] Cross-sectional imaging assessment with CT, CT angiography, and/or MR imaging (MRI) includes size, internal characterization of the mass, margins, extent, involvement of regional vessels, and also guides biopsies. [3,4] Thin-section non-enhanced CT (NECT) is preferred for identifying the intralesional fat content. [6] On contrast-enhanced CT, these present as noncalcified macroscopic fat-containing hyperdense mass which can help in differentiating them from renal cell carcinoma (RCC). [1,[6][7][8]10] The challenge to CT imaging is radiation concerns since the commonly affected population with this condition are young females. MRI features include heterogeneous signal intensity on T1-weighted images due to fat content and foci of hemorrhage, low signal intensity on T2-weighted images due to its smooth muscle component, signal dropout on gradient-echo or spin-echo images with fat suppression, India Ink artifact at fat-water interface on chemical shift imaging, profound diffusion restriction on diffusion-weighted imaging, and/or rapid arterial enhancement with contrast administration. [6,8,10] MRI uses the fat suppression techniques such as inversion recovery and chemical saturation to identify the intratumoral fat component and differentiate it from intratumoral hemorrhage. [8] Although not pathognomonic, RERAML demonstrates aneurysmal dilatation of intratumoral vessels, linear vascularity, bridging veins, and/or hematomas on angiography. 
[3,4] Although most of these RERAML have been benign, it is difficult to exclude malignancy which is the closest differential diagnosis due to similarities in imaging appearance, and the common differentials include lipoma, liposarcoma, papillary RCC, and adrenal myelolipoma. [2,3,8,10,12,13,14] No serum biochemistry or urinalysis investigation is specific for RERAML. [4] The most common differential, liposarcoma, is difficult to differentiate from RERAML even on positron emission tomography/CT and histopathological examination. [15] CT and MRI help in the diagnostic dilemma by the following: liposarcomas arise from outside the Gerota's fascia whereas RERAML arise from perinephric fat; features favoring RERAML include history of AML, microscopic fat, heterogeneity on imaging, hemorrhage, absence of calcifications, NECT hyperdensity, T2 low signal intensity, dilated intratumoral vessels. [6,9,16] Although not mandatory for making a diagnosis, HMB-45 positivity of RERAML and positive FISH test for MDM2 amplification in liposarcomas can help in differentiation. [3,5,12,13] RCC can be differentiated from RERAML by the presence of calcifications, enhancing intratumoral nodules, and invasion into renal vein or inferior vena cava being more common in the former than the latter. [7,8,16] Most common and dreaded complication of RERAML is retroperitoneal hemorrhage. [9,12,13] Rarely malignant degeneration and metastasis occur with recurrence considered very rare in ERAML with only two cases in literature describing distant metastasis to mediastinum, liver, and bone. [3,13] Usually, AML patients do not have recurrence after renal sparing nephrectomy or embolization even at 5-year follow-up period as described in literature. [3] However, our patient had RERAML presenting as AML recurrence at the surgical bed following total nephrectomy for renal AML. Close follow-up with CT imaging during the 1 st year following surgery with continued follow-up to 5 years has been recommended. [3,4] It is recommended that management of AML should be dependent on the size of lesion, symptomatology, and estimated compliance with follow-up, though historically size >4 cm was considered as universal standard cutoff for invasive treatment given its risk for rupture and life-threatening retroperitoneal hemorrhage. [3,7,10,11,14] Treatment options include minimally invasive techniques such as radiofrequency ablation, cryoablation, microwave ablation, selective angioembolization (SAE), and surgery; however, the latter two have proven to be the most effective for symptomatic AML. [4,7,[9][10][11][12] SAE is usually performed and reserved for patients with active bleeding, large retroperitoneal hemorrhage, plan for subsequent staged surgical resection, or large tumors with the advantage of short recovery period and preserved renal function. [3,4,7] Recently, hormonal or targeted therapies including sirolimus have been used for downgrading tumors by decreasing the size of the tumor and as an adjuvant treatment to patients undergoing SAE; however, controversies exist due to the increase in the size of tumor once the treatment is discontinued. [7,11,13] RERAML management is similar to AML, and current recommendations by the International TSC Consensus group for the management of AML are embolization and corticosteroids as first-line therapy in acute hemorrhage. 
However, for asymptomatic patients with an AML >3 cm that is growing and/or seen in association with tuberous sclerosis, an mTOR inhibitor is recommended as first-line therapy, followed by SAE when unresponsive to mTOR therapy. [4] When metastatic, aggressive surgery with resection and vascular reconstruction can improve outcome. [13] However, RERAML can be biopsied and/or safely followed up with imaging if the imaging findings are highly suggestive of RERAML, given its benign nature. Our patient had an asymptomatic postnephrectomy surgical-site mass with imaging features highly suggestive of RERAML. Therefore, surgery was deferred and she was started on an mTOR inhibitor trial with a good response to therapy.
CONCLUSION
RERAML is a rare entity, especially when presenting as recurrence following nephrectomy; however, it should be considered in the differential diagnosis when it occurs. It is difficult to differentiate RERAML from retroperitoneal liposarcoma by imaging and histology. Imaging features such as heterogeneity, hemorrhage, absence of calcifications, hyperdensity on NECT, low signal intensity on T2-weighted MR images, and dilated intratumoral vessels can help in making a confident diagnosis of RERAML on imaging. Imaging helps to decide medical management over invasive surgery in asymptomatic patients, with surgical resection reserved for larger lesions with impending risk of bleeding or suspicious features for malignancy. If surgically resected, it is important to follow up these patients with imaging and laboratory tests for at least 5 years from surgical resection to ensure stability and no recurrence.
Reservoir Characterization Using Acoustic Impedance Inversion and Multi-Attribute Analysis in Nias Waters, North Sumatra
The seismic method is one of the most frequently applied geophysical methods in the process of oil and gas exploration. This research is conducted in Nias Waters, North Sumatra, using one 2D post-stack time migration seismic line and data from two wells. Reservoir characterization is carried out to obtain physical parameters of the rocks affected by fluid content and lithology. Seismic inversion is used as a technique to create the acoustic impedance distribution, using seismic data as input and well data as control. As a final product, multi-attribute analysis is applied to integrate the inversion results with the seismic data to determine the lateral distribution of other parameters contained in the well data. In this research, multi-attribute analysis is used to determine the distribution of NPHI as a validation of hydrocarbon source rocks. In that area, there is a gas hydrocarbon prospect in limestone lithology at a depth of around 1450 ms. Based on the results of the sensitivity analysis, the cross-plot between acoustic impedance and NPHI is sensitive in separating rock lithology; the target rock, in the form of limestone, has physical characteristics in the form of acoustic impedance values in the range of 20,000-49,000 ((ft/s)*(g/cc)) and NPHI values in the range of 5-35 %. The results of the cross-plot between acoustic impedance and resistivity are able to separate fluid-containing rocks, with resistivity values in the range of about 18-30 ohmm. The result of acoustic impedance inversion using the model based method shows the potential for hydrocarbons in the well FYR-1, with acoustic impedance in the range 21,469-22,881 ((ft/s)*(gr/cc)).
INTRODUCTION
Hydrocarbons are a strategic energy commodity for Indonesia. In addition to providing energy supplies, oil and natural gas constitute one of the country's largest economic sectors, so oil and gas stakeholders are expected to keep increasing national production. However, many reserves have not yet been discovered, and replacing produced hydrocarbon reserves is not easy. Hydrocarbon exploration is therefore still being carried out in order to increase the discovery of new reserves. Seismic reflection is a geophysical method that records seismic waves reflected from the boundary between two media. The method is used because seismic waves penetrate the subsurface and can produce images of rock structures as well as stratigraphic and depositional features relevant to detecting hydrocarbon reservoirs; the method mainly exploits low-frequency data (Manik, 2012). The magnitude of a seismic reflection is directly related to the change in acoustic impedance between the two media (Sanjaya et al., 2014): the greater the contrast between the two media, the stronger the reflected wave. Acoustic impedance expresses the ability of rocks to transmit seismic waves, so the harder and denser the rock, the greater its acoustic impedance value (Alifudin et al., 2016).
Seismic data have limitations in describing rock layers thinner than the tuning thickness. The tuning thickness is the minimum thickness of a rock layer that can be resolved by seismic data, so seismic data cannot separate rock layers that are thinner than the tuning thickness. Seismic inversion is therefore required to obtain a better picture of the rock layers and their physical properties. Well data are used to supply the impedance information missing from the seismic data (Ogagarue, 2016). According to Sukmono (2000), there are three types of inversion methods: model based, band limited, and sparse spike. This research focuses on the model based method, which correlates well with seismic data because it incorporates the low-frequency component of the acoustic impedance model (Shankar et al., 2016).
A multi-attribute method is basically a process of extracting several attributes from seismic data that has a good correlation with well data (Malik et al., 2019). The ability to analyze using multi-attribute methods can be used to predict the log parameters to then see the distribution in a seismic section. In this research, multiattribute analysis is able to predict the resistivity parameters that can indicate the hydrocarbons.
REGIONAL GEOLOGY
Nias Island's forearc basin near Nias Island is underlain by a Pre-Miocene accretionary complex and probably is not underlain by any upper plate crystalline rock (Kieckhefer et al., 1980). This field is located in the northwest part of Sumatra Island, on an emerging portion of the outer arc ridge of the Sunda arc system (Moore, 1979). The regional geology of the Nias and Sibolga regions refer to two geological map sheets as listed on the Sinabang Sheet Geological Map ( Figure 1).
The stratigraphic sequence of the Geological Map of the Nias Sheet is as follows:
• Alluvium (Qa) consists of river, swamp and beach sediments composed of chunks of limestone, sand, mud and clay.
• The Gunungsitoli Formation (QTg) is composed of reef limestone, silty limestone, limestone, sandstone, fine quartz sandstone, marl and sandy loam; finely layered and weakly folded. This formation is Plio-Pleistocene in age and was deposited in a shallow marine environment.
• The Gomo Formation (Tmpg) is composed of claystone, marl, sandstone and tuffaceous limestone, with tuff and peat intercalations. It is well bedded and strongly folded, with sedimentary structures in the form of parallel lamination. Based on its fossil content, the age of this formation is Early Miocene to Middle Pliocene, and it was deposited in a sublittoral-bathyal environment. The lower part of this formation interfingers with the Lelematua Formation, while the upper part is overlain unconformably by the Gunungsitoli Formation.
• The Lelematua Formation (Tml) consists of sandstone, claystone, siltstone, conglomerate and tuff, interbedded with coal and shale. The upper part of the formation interfingers with the Gomo Formation, while the lower part overlaps the Bancuh Block unconformably.
• The Bancuh Block (Tomm) consists of chunks of rocks of various types and sizes, comprising peridotite, serpentinized gabbro, serpentinite, basalt, schist, shale, graywacke, conglomerate, breccia, limestone, sandstone and chert in a scaly clay matrix. Based on its stratigraphic position, this unit is interpreted to have formed in the Early Oligocene-Miocene.
Basement
The basement in the Nias forearc basin consists of several types of rocks. On Nias Island and its surroundings the basement consists of igneous rocks, while in the FYR-1 well the basement is a strongly deformed black rock similar to that exposed on Sumatra Island (Karig et al., 1977; Rose, 1983; Vail et al., 1977). Regionally the basement age is Mesozoic (Jurassic), characterized by Belemnite fossils.
Paleogene rocks
Paleogene sedimentary rocks may only be found in the Pini Sub-Basin in the southern part. Nias Bedrocks are similar to Simeulue Island, which is Baru Melange covered by Pinang Conglomerate Member. Gabbro (Sibau Gabbro Group) is an exotic block on the Melange, which based on K-Ar dating around 35.4±3.6 Ma and 40.1 ± 2.7 Ma, which is identical to the Late Eocene. Similar outcrops were also observed in the Sibolga area, namely the development of conglomerate volcanic rock units which comprise the lower Sibolga Formation. The unit is estimated to be in the form of debris flows and riverbed deposits (braided stream). While the upper part is a sandstone unit deposited braided stream which is comparable to the upper Sibolga Formation. The formation is estimated to be comparable to well FYR-1 as Eocene-Oligocene age.
Miocene rocks
Miocene age sedimentary rocks are generally separated by angular misalignment with Paleogenous sediments. Several drilling wells have penetrated this unit, one of which is FYR-1 well. In general, Miocene sedimentary rocks are predominantly fine fractions such as claystone and shale, sandstone and limestone insertion. Some of the early Miocene age inner-outer sub-litoral sandstone inserts are thought to act as reservoirs. Early Miocene limestone growth in this region was rather slow, this was due to high sedimentation from the mainland. The growth of reef limestone is very intensive in Middle-Late Miocene. In some limestone beds, these are the main target of the reservoirs.
Pliocene-Pleistocene rocks
Pliocene and Miocene sedimentary rocks are generally separated by unconformity. This Mio-Pliocene misalignment is characterized by acceleration of growth in accretion areas due to the inclination of oceanic plates. Precipitation in this period shows a pattern of regressive deposits, such as in the Simeulue and Meulaboh sub-basins, while in FYR-1 well subbasin occurs transgressive and regressive, possibly due to the active Batee Fault. In the FYR-1 well area, the existence of Toba tuff is very dominant. This situation continues from the west of the sub-basin (FYR-1 well) where the bottom is deposited with sandstones. There are 3 important tectonic cycles that can be recognized in the Nias Basin, namely Paleogenic Orogenics, Neogene Subsidence and Late Tertiary tectonics (Beaudry and Moore, 1985). These tectonic events were also followed by three cycles of sediment transgression -major regressions related to sea level changes.
BASIC THEORY Acoustic Impedance
Acoustic impedance is a physical parameter that describes the ability of rocks to pass or reflect waves (Zain et al., 2017). This parameter can be used as an indicator of changes in lithology and porosity, as well as fluid content, pressure and temperature (Sanjaya et al., 2014). An acoustic impedance inversion cross section has many advantages for separating and describing the reservoir because it is obtained by integrating data related to rock properties (Alabi and Enikanselu, 2019). In general, the harder the rock, the higher its acoustic impedance. The acoustic impedance is formulated as

Z = ρ Vp, (1)

where ρ is the density and Vp is the P-wave velocity.
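As a minimal illustration of Eq. (1), the snippet below computes the acoustic impedance from density and P-wave velocity log samples. The values are illustrative only (not the actual FYR-1/FYR-2 logs) but are kept in the (ft/s)*(g/cc) units used throughout this paper.

import numpy as np

def acoustic_impedance(density_gcc, vp_ft_s):
    # Eq. (1): Z = rho * Vp, here in (ft/s)*(g/cc) to match the units in the text
    return np.asarray(density_gcc) * np.asarray(vp_ft_s)

# illustrative log samples (not actual FYR-1/FYR-2 values)
density = np.array([2.30, 2.45, 2.60])          # g/cc
vp      = np.array([9500.0, 11500.0, 15500.0])  # ft/s
print(acoustic_impedance(density, vp))          # -> [21850. 28175. 40300.]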
Seismic Inversion
Seismic inversion is a modeling of the structure and physical properties of subsurface rocks using seismic data as input and well data as a control. Basically, the seismic inversion method is the opposite of forward modeling, which is concerned with creating synthetic seismograms from an earth model (Febridon et al., 2019). The high-frequency information from the well data is not used to obtain the information contained in the seismic data (Huuse and Feary, 2005). Based on the data source, seismic inversion is divided into two types, pre-stack inversion and post-stack inversion, whereas based on the method, seismic inversion is divided into three: model based, band limited, and sparse spike (Sukmono, 2000). This study uses 2D post-stack time migration seismic data and the model based inversion method. The basic concept of the model based method is to make a geological model to be compared with the seismic data; the comparison results are then used to update the initial model iteratively until a suitable model is obtained (Russel, 1988).
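The sketch below illustrates the model based idea in a strongly simplified, linearized form: a synthetic trace is forward-modelled by convolving the reflectivity of an impedance model with a wavelet, and the model is updated iteratively until the synthetic matches the observed trace. It is an illustration only, with a toy zero-phase wavelet, a blocky impedance model and a plain steepest-descent update, and is not the actual inversion algorithm used in this study.

import numpy as np

def forward(refl, wavelet):
    # synthetic seismic trace: reflectivity convolved with the wavelet
    return np.convolve(refl, wavelet, mode="same")

def invert_reflectivity(trace, wavelet, n_iter=2000, step=0.2):
    # linearized compare-and-update loop: steepest descent on 0.5*||W r - d||^2;
    # for this symmetric wavelet the adjoint of convolution is convolution itself
    r = np.zeros_like(trace)
    for _ in range(n_iter):
        residual = forward(r, wavelet) - trace
        r -= step * np.convolve(residual, wavelet, mode="same")
    return r

def impedance_from_reflectivity(ai_first, refl):
    # integrate reflection coefficients back to an impedance series
    ai = [ai_first]
    for rc in refl:
        ai.append(ai[-1] * (1.0 + rc) / (1.0 - rc))
    return np.array(ai)

# illustrative blocky model in (ft/s)*(g/cc), not the actual FYR-1 section
true_ai = np.array([22000.0] * 10 + [40000.0] * 10 + [25000.0] * 10)
true_r  = (true_ai[1:] - true_ai[:-1]) / (true_ai[1:] + true_ai[:-1])
wavelet = np.array([-0.1, 0.5, 1.0, 0.5, -0.1])   # toy zero-phase wavelet
trace   = forward(true_r, wavelet)

r_est  = invert_reflectivity(trace, wavelet)
ai_est = impedance_from_reflectivity(true_ai[0], r_est)
print(np.round(ai_est[[5, 15, 25]]))   # approximately recovers the 22000/40000/25000 blocks

With a realistic zero-mean wavelet the low-frequency impedance trend is not constrained by the trace at all, which is why the initial model built from the well data (see the initial model analysis below) is essential in practice.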
Multi-attribute Analysis
Multi-attribute analysis is a statistical method that uses more than one attribute to predict physical properties of the subsurface. In this analysis, logs that correlate highly with the seismic data are used, so that with this method the log parameters measured in a well can be mapped at all locations in a seismic cross section. In the most common case, one searches for a function that converts several different attributes into the desired property, which can be written mathematically as

P(x, y, z) = F [A_1(x, y, z), ..., A_m(x, y, z)], (2)

where P is a log property, x, y and z are coordinates, F is a function that states the relationship between the seismic attributes and the log property, and A_n is the n-th attribute, with n = 1, 2, …, m.
In the simplest case, the relationship between the log property and the seismic attributes can be expressed as a weighted linear sum,

P = w_0 + w_1 A_1 + … + w_m A_m, (3)

where w_n are the m + 1 weights and A_n are the m attributes, with n = 1, 2, …, m.
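A minimal sketch of Eq. (3) is given below: the weights w_n are estimated by least squares where the target log is known (the well locations) and then applied to the attributes extracted along the seismic section. The attribute and log values are invented for illustration, and the step-wise attribute selection and cross-validation used later in this study are omitted.

import numpy as np

def fit_weights(attributes, target):
    # least-squares estimate of w_0..w_m in P ~ w_0 + w_1*A_1 + ... + w_m*A_m,
    # trained where the target log is known (the well locations)
    A = np.column_stack([np.ones(len(target))] + [np.asarray(a) for a in attributes])
    w, *_ = np.linalg.lstsq(A, np.asarray(target), rcond=None)
    return w

def predict(attributes, w):
    # apply the weights to attributes extracted along the seismic section
    A = np.column_stack([np.ones(len(attributes[0]))] + [np.asarray(a) for a in attributes])
    return A @ w

# invented training samples: acoustic impedance and an amplitude attribute
# at the well, against the measured resistivity log
ai_attr  = [40000.0, 35000.0, 28000.0, 22000.0, 25000.0]   # (ft/s)*(g/cc)
amp_attr = [0.8, 0.7, 1.2, 1.5, 1.1]
res_log  = [8.0, 9.0, 16.0, 28.0, 14.0]                    # ohmm

w = fit_weights([ai_attr, amp_attr], res_log)
print(predict([ai_attr, amp_attr], w))   # predicted resistivity at the training points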
METHODOLOGY
A flow chart of the well analysis, post-stack seismic inversion and multi-attribute workflow is shown in Figure 3.
Target Zone Analysis
Qualitative analysis of the target zone is done by examining the log curves of the parameters available in the target well. In this study, one 2D post-stack time migration seismic section and two wells (FYR-1 and FYR-2) are used in the multi-attribute analysis (Figure 4). The post-stack time migration method gives better continuity of the seismic section, eliminates incoherent noise, and improves the seismic cross-section resolution (Farfour et al., 2015). The density log curve and NPHI are correlated in determining the target zone. This is because hydrocarbons have a low hydrogen index, so the neutron device records high-energy neutrons in the formation; in areas that are filled with hydrocarbons the neutron log therefore shows a lower value. The density log, in contrast, shows a relatively higher value in hydrocarbon-filled areas, so that a positive separation occurs in the area filled with hydrocarbons (Irawan and Utama, 2009). This log pair is able to indicate the type of fluid present, such as gas, oil or water. In addition to the relationship between the two logs above, validation from other logs such as the gamma ray and resistivity logs is needed. The target zone, which is generally located in limestone and sandstone lithology, has a low gamma ray value, so this log curve can be used to validate the target zone, while the resistivity log plays a role in determining the fluid contained in the rock formation (Figure 5).
Tuning Thickness Analysis
Tuning thickness is controlled by the wavelength of the seismic wave passing through the rock; mathematically it equals λ/4, one quarter of the wavelength of the seismic wave that passes through the layer. Determining the tuning thickness requires the average P-wave velocity and the dominant frequency of the layer (Nainggolan and Winardhi, 2012).
A target layer thinner than the tuning thickness cannot be properly resolved in the seismic section (Table 1). This can be overcome by performing acoustic impedance inversion, which incorporates low-frequency information from the log data, whereas seismic data cover only a limited frequency range.
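For concreteness, the λ/4 tuning thickness can be evaluated as in the short example below; the interval velocity and dominant frequency are illustrative values, not the ones listed in Table 1.

def tuning_thickness(v_interval_m_s, f_dominant_hz):
    # quarter-wavelength tuning thickness: lambda/4 = V / (4 * f)
    return v_interval_m_s / (4.0 * f_dominant_hz)

# illustrative values: 3000 m/s interval velocity, 30 Hz dominant frequency
print(tuning_thickness(3000.0, 30.0))   # -> 25.0 m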
Sensitivity Analysis
The qualitative and quantitative sensitivity analysis stage can be seen from the results of cross-plots and cross sections to look for sensitive parameters that are able to separate lithology and hydrocarbons and get a cut-off value of these parameters (Kurniawan et al., 2013).
The areas that contain hydrocarbons are indicated by red zones in Figure 6. An anomaly in the form of resistivity values higher than those of the surrounding environment marks a target zone, because hydrocarbons are characterized by high resistivity.
Well Seismic Tie
The well seismic tie stage is carried out to tie the well data to the seismic data, so that each horizon in the seismic cross section is placed at its actual depth. A high correlation increases the depth accuracy of both data sets.
From the well seismic tie results obtained with various wavelets, the statistical wavelet has the highest correlation coefficient on line X08, with a value of 0.720 (Table 2).
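The quality of the tie is quantified by the normalized (zero-lag) cross-correlation between the synthetic seismogram and the seismic trace at the well, as sketched below on invented example traces; the 0.720 quoted above corresponds to this kind of coefficient computed on the real data.

import numpy as np

def tie_correlation(synthetic, seismic):
    # zero-lag normalized cross-correlation between the synthetic seismogram
    # and the seismic trace extracted at the well location
    s = (synthetic - synthetic.mean()) / synthetic.std()
    d = (seismic - seismic.mean()) / seismic.std()
    return float(np.mean(s * d))

# invented example traces: a noisy copy of a trace correlates highly with it
rng = np.random.default_rng(0)
trace = rng.standard_normal(500)
synthetic = trace + 0.7 * rng.standard_normal(500)
print(round(tie_correlation(synthetic, trace), 3))   # ~ 0.8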
Initial Model Analysis
Making the initial model provides information in the form of the distribution of acoustic impedance values in seismic data with well data as a control. In this study, the initial model was made using three horizons that were obtained in the previous stage and using a 10/ 15 Hz high-cut filter to create an initial AI model for the inversion (Figure 7). The use of hard constraints in making the initial model is useful for combining low frequency data that matches acoustic impedance so as to minimize inversion mismatches (Li and Peng, 2017).
Acoustic Impedance Inversion
Inversion analysis is determination of inversion parameters before inversion is performed. This is useful to determine suitable parameters used at the inversion stage so that the correlation between the initial model with high inversion results is obtained.
The results of testing several seismic methods show a high correlation on model based inversion (Table 3). The existence of a high correlation will also help increase the suitability of inversion results with the geological form of the earth (Sukmono, 2000).
Multi-attribute Analysis
The multi-attribute analysis begins by loading the well data to be used in the multi-attribute method. The acoustic impedance inversion results obtained in the previous stage are used as an external attribute in this processing. Suitable attributes are then determined; this process is repeated several times to obtain the best multi-attribute composition for all wells, guided by the validation error: the smaller the error value, the better the correlation between the multi-attribute components. Figure 8 shows the error value for each multi-attribute composition; the lowest point, touching the x-axis, indicates a poor correlation. The multi-attribute analysis results are thus assessed through this error level.
Acoustic Impedance Inversion
The results of seismic inversion can be seen from the distribution of acoustic impedance values in the seismic cross section shown in Figure 9. Based on the distribution of acoustic impedances in the cross section, physical characteristics of the target zone can be seen. A change in the acoustic impedance value is an indication of the different layers in the rock. Based on the analysis of the target zone in the two wells it can be seen that the FYR-1 well has a hydrocarbon potential, this is seen from the intersection of the density curve and NPHI, while the remaining wells are dry wells. Figure 9 shows the target zone which is indicated by a red circle. In this zone, the acoustic impedance value decreases from an impedance value of around 40,000 ((ft/s)*(gr/cc)) to 21,469-28,881 ((ft/s)*(gr/cc)). The target zone area which is at a depth of around 1450 ms has a low impedance value distribution spread from CDP 1946 to 1990. To ensure that the area is a target zone containing hydrocarbons then a resistivity distribution is made from the multi-attributes.
Multi-attribute Analysis
Analyzing a number of reservoir properties (porosity, resistivity and Vshale) makes it possible to refine the boundaries of the target zone containing hydrocarbons (Alao and Oludare, 2015). In this research, multi-attribute analysis is used to create the resistivity log distribution along seismic line X08; porosity and Vshale analysis could not be done in this study due to limited data in both wells. The multi-attribute analysis uses the linear regression method with a step-wise regression technique to look for the attributes with the smallest validation error value. Based on the results in Figure 8, the predicted distribution has the best correlation for a composition of 2 (two) attributes with 5 (five) points. The results of the multi-attribute linear validation show a very good correlation of 0.81.
Based on Figure 10, there is a red colored zone at a depth of 1450 ms which has a higher resistivity value than the surrounding area. The cross-plot results show that the target zone has a resistivity value in the range of 18-30 ohmm. The results of the resistivity distribution using multi-attribute analysis have resistivity values in the range of 19-30 ohmm, the existence of resistivity higher than the target zone allows the insertion of other rocks in the layer. By combining the results of the acoustic impedance distribution and the resistivity distribution, it can be seen that the zone is a target zone containing hydrocarbons.
DISCUSSION
Both wells are crossed by seismic line X08, which has an azimuth of 100° and a length of 23.3 km. The azimuth of the seismic line is the ship's heading: an azimuth of 100° means the ship was moving from the west (280°) to the east (100°). Based on the target zone analysis, hydrocarbons were found in the FYR-1 well, within zones N17-N16. The target zone in the FYR-1 well consists of gas trapped in limestone at a depth of about 1450 ms. The parameters yielded by the sensitivity analysis are used to separate lithology and indicate the presence of hydrocarbons, so they can serve as a reference for mapping the hydrocarbon distribution in the inversion and multi-attribute cross sections. The hydrocarbon zone is characterized by acoustic impedance values between 18,000 and 35,000 ((ft/s)*(g/cc)) and resistivity values in the range of 18-31 ohmm. The model based acoustic impedance inversion on 2D seismic line X08 gives better results than the other methods. Based on this inversion, there is a bright spot with an acoustic impedance value between 21,469 and 28,881 ((ft/s)*(g/cc)) at a depth of about 1450 ms that extends from east to west towards the FYR-1 well. The bright spot has acoustic impedance values that match the cross-plot analysis results, so the area can be assumed to be a target zone containing hydrocarbons. Multi-attribute analysis is then performed to map the resistivity distribution based on the best attribute composition available in the seismic data (Altowairqi et al., 2017).
Multi-attribute analysis produces a map of the resistivity distribution as shown in Figure 10. This resistivity distribution can be used as a validation of the hydrocarbon distribution that has been mapped in the results of the previous inversion. In Figure 10, the area indicated by a red circle has resistivity values between 18 and 32 ohmm. These values are compatible with the range obtained from the sensitivity analysis. It can thus be seen that the zone at a depth of around 1450 ms, elongated from southeast to northwest, is a zone containing hydrocarbons.
CONCLUSIONS
The results of cross-plot analysis between acoustic impedance (AI) and resistivity show that these parameters are sensitive in separating the target zones which containing hydrocarbons in limestone. Based on the results of cross-plot analysis show the target zone has a physical character in the form of acoustic impedance values in the range of 18,000-35,000 ((ft / s)*(g/cc)) and the value of resistivity in the range of 18-31 ohmm. Acoustic impedance inversion results are able to describe the distribution of hydrocarbons with a range of value 21,469-28,881 ((ft/s)*(g/cc)). Afterward, the resistivity distribution of multi-attribute analysis results is used as validation of acoustic impedance inversion result data in the target zone. The target zone resistivity distributions show the values which correspond to the characteristics of the cross-plot results, in the range of 13-30 ohmm. Based on the range of those parameters, the hydrocarbon zone elongated from east to west toward the FYR-1 well.
ACKNOWLEDGEMENT
Sincere appreciation and many thanks to honorable Head of Marine Geological Institute for the trusting and supervising to the authors. Our truly appreciation to Chief Scientist Riza Rahardiawan, scientists, technicians, and MV Geomarin 3 crew member for hard work and total support.
Charge and current orders in the spin-fermion model with overlapping hot spots
Experiments carried over the last years on the underdoped cuprates have revealed a variety of symmetry-breaking phenomena in the pseudogap state. Charge-density waves, breaking of $C_{4}$ rotational symmetry as well as time-reversal symmetry breaking have all been observed in several cuprate families. In this regard, theoretical models where multiple non-superconducting orders emerge are of particular interest. We consider the recently introduced (Phys. Rev. B 93, 085131 (2016)) spin-fermion model with overlapping 'hot spots' on the Fermi surface. Focusing on the particle-hole instabilities we obtain a rich phase diagram with the chemical potential relative to the dispersion at $(0,\pi);\;(\pi,0)$ and the Fermi surface curvature in the antinodal regions being the control parameters. We find evidence for d-wave Pomeranchuk instability, d-form factor charge density waves as well as commensurate and incommensurate staggered bond current phases similar to the d-density wave state. The current orders are found to be promoted by the curvature. Considering the appropriate parameter range for the hole-doped cuprates, we discuss the relation of our results to the pseudogap state and incommensurate magnetic phases of the cuprates.
I. INTRODUCTION
Origin of the pseudogap state 1-3 remains one of the main puzzles in the physics of the high-T c cuprate superconductors. First observed in NMR measurements 4,5 , it is characterized by the loss of the density of states due to the opening of a partial gap at the Fermi level below the pseudogap temperature T * > T c . Studies of the pseudogap by means of ARPES 3,6 and Raman scattering 7,8 have revealed that it opens around (0, π) and (π, 0) points of the 2D Brillouin zone, the so-called antinodal regions. With increasing hole doping both T * and the gap magnitude decrease monotonously and eventually disappear. However, modern experiments add many more unconventional details to this picture, showing that a crucial role in the pseudogap state is played by the various ordering tendencies.
To begin with, the point-group symmetry of the CuO 2 planes appears to be broken. Namely, scanning tunneling microscopy (STM) 9,10 and transport 11,12 studies show the absence of C 4 rotational symmetry in the pseudogap state. More recently, magnetic torque measurements of the bulk magnetic susceptibility 13 confirmed C 4 breaking occurring at T * . Additionally, an inversion symmetry breaking associated with pseudogap has been discovered by means of second harmonic optical anisotropy measurement 14 .
Other experiments suggest that an unconventional time-reversal symmetry breaking is inherent to the pseudogap. Polarized neutron diffraction studies of different cuprate families reveal a magnetic signal commensurate with the lattice appearing below T * and interpreted as being due to a Q = 0 intra-unit cell magnetic order 15,16 . The signal has been observed to develop above T * with a finite correlation length 17 and breaks the C 4 symmetry 18 (note, however, that a recent report 19 does not bear evidence of such a signal). Additionally, at a temperature T K that is below T * but shares a similar doping dependence polar Kerr effect has been observed 20,21 , which implies 22,23 that time-reversal symmetry is broken. Additional signatures of a temporally fluctuating magnetism below T * are also available from the recent µSr studies 24,25 .
While the signatures described above indicate that the pseudogap state is a distinct phase with a lower symmetry, there exist only few experiments 1,26 that yield a thermodynamic evidence for a corresponding phase transition. On the other hand, transport measurements suggest the existence of quantum critical points (QCPs) of the pseudogap phase 27 , accompanied by strong mass enhancement 28 in line with the existence of a QCP.
Additionally, in the recent years the presence of charge density waves (CDW) has been discovered in a similar doping range. The CDW onset temperature can be rather close to T * 29 but has been shown to have a distinct dome-shaped doping dependence in YBCO 30 and Hg-1201 31 . Diverse probes such as resonant 29,[31][32][33] and hard X-ray 30,34-36 scattering, STM 10,37-40 and NMR 41,42 have observed CDW with similar properties in most of the hole-doped cuprate compounds with the exception of La-based ones (in which the spin and charge modulations are intertwined 43 ). Generally, the CDWs have the following common properties. The modulation wavevectors are oriented along the Brillouin zone axes (axial CDW) and decrease with doping. While the modulations along both directions are usually observed, there is an experimental evidence 36,40,44 that the CDW is unidirectional locally. The intra-unit cell structure of the CDW is characterized by a d-form-factor 45 with the charge being modulated at two oxygen sites of the unit cell in antiphase with each other.
From the theoretical perspective, one of the initial interpretations was that the pseudogap was a manifestation of fluctuating superconductivity, either in a form of preformed Bose pairs 46,47 or strong phase fluctuations 48 . However, the onset temperatures of superconducting fluctuations observed in the experiments [49][50][51] are considerably below T * and have a distinct doping dependence. Another scenario dating back to the seminal paper 52 attributes the pseudogap to the strong short-range correlations due to strong on-site repulsion 53,54 . Numerical quantum Monte Carlo simulations [55][56][57][58] of the Hubbard model support this idea. However, this scenario 'as is' does not explain the broken symmetries of the pseudogap state. More recently, these results have been interpreted as being due to topological order 59,60 , that can also coexist with the breaking of discrete symmetries 61 .
A different class of proposals for explaining the pseudogap behavior involves a competing symmetry-breaking order. One of the possible candidates discussed in the literature is the Q = 0 orbital loop current order 62 . Presence of circulating currents explicitly breaks the time reversal symmetry, allowing one, with appropriate modifications, to describe the phenomena observed in polarized neutron scattering 63 and polar Kerr effect 64 experiments. However, it does not lead to a gap on the Fermi surface at the mean-field level. Numerical studies of the threeband Hubbard model give arguments both for 65,66 and against [67][68][69] this type of order. Other proposals for Q = 0 magnetic order include spin-nematic 15,70 , oxygen orbital moment 71 or magnetoelectric multipole 72,73 order.
Charge nematic order 74 and the related d-wave Pomeranchuk instability 75,76 of the Fermi surface have also been considered in the context of the pseudogap state. It breaks the C 4 rotational symmetry of the CuO 2 planes in agreement with numerous experiments [9][10][11][12][13] . While not opening a gap, fluctuating distortion of the Fermi surface can result in an arc-like momentum distribution of the spectral weight and non-Fermi liquid behaviour [77][78][79] . Evidence for this order comes from numerical studies of the Hubbard model with functional renormalization group 75 , dynamical mean-field theory 80,81 and other 82,83 methods as well as analytical studies of forward-scattering 84 and spin-fermion 85,86 models.
Another possibility is the CDW [87][88][89] . More recent studies focus on the important role of the interplay between CDW and superconducting fluctuations 90 , preemptive orders and time reversal symmetry breaking 91 (that can result in the polar Kerr effect 92,93 ), vertex corrections for the interactions 94,95 , CDW phase fluctuations 96 and possible SU(2) symmetry [97][98][99] . Additionally, pair density wave -a state with modulated Cooper pairing amplitude has been proposed to explain the pseudogap and CDW [100][101][102] , which can be also understood with the concept of 'intertwined' SC and CDW orders 103 .
An interesting alternative is the d-density wave 104 (DDW) state (also known as flux phase 105,106 ) which is characterized by a pattern of bond currents modulated with the wavevector Q = (π, π) that is not generally accompanied by a charge modulation. This order leads to a reconstructed Fermi surface consistent with the transport 27,107 and ARPES 108 signatures of the pseudogap. Moreover, the time-reversal symmetry is also broken and a modified version of DDW can explain the polar Kerr effect 109 observation. Additionally, model calculations show 110,111 that the system in the DDW state can be unstable to the formation of axial CDWs. Studies aimed at a direct detection of magnetic moments created by the DDW have yielded results both supporting 112,113 and against 114 their existence (or with the conclusion that the signal is due to impurity phases 115 ). However, theoretical estimates of the resulting moments are model-dependent 116,117 . There also exists indirect evidence from superfluid density measurements 118 . Theoretical support for the DDW comes from renormalization group 119 and variational Monte Carlo 120 studies of the Hubbard model, DMRG studies of t − J ladders 121 , and mean-field studies of t − J 122 as well as single 123 and three-orbital 124 models. However, the regions of DDW stability found in these studies vary significantly and depend on the value of particular interactions 123,124 or details of the Fermi surface 120 .
Overall, the question of possible competing orders in the cuprates has turned out to be a rather complicated one. Interestingly, state-of-art numerical calculations comparing different methods show that the energy difference between distinct ground states can be miniscule 125 explaining some of the difficulties. Thus, analytical approaches which allow one to study the influence of different parameters in detail can be of interest.
In this paper, we deduce leading non-superconducting orders using a low-energy effective theory for fermions interacting with antiferromagnetic (AF) fluctuations. While such theories can in principle be derived from the microscopic Hubbard or t-J Hamiltonians 126 , we employ here a semiphenomenological approach in the spirit of the widely used spin-fermion (SF) model 90,91,97,127,128 . Our take on this problem differs in that we relax the usual assumption that the interaction, being peaked at (π, π), singles out eight isolated 'hot spots' on the Fermi surface. In contrast, we consider that neighboring hot spots may strongly overlap and form antinodal 'hot regions'. This assumption agrees well with the ARPES results 3 demonstrating that the pseudogap covers the full antinodal region without pronounced maxima at the 'hot spots' of the standard SF model. Moreover, the electron spectrum in the antinodal regions has been found 108,129 to be shallow with respect to the pseudogap energy scale for the hole-underdoped samples, i.e. the pseudogap opens also at points that are not in the immediate vicinity of the Fermi surface. From the spin fluctuation perspective this can be anticipated if the AF fluctuation correlation length is small enough that the resulting interaction between fermions is uniformly smeared, covering the full antinodal regions. Indeed, the neutron scattering experiments 130,131 show that the correlation lengths at the temperatures and dopings relevant for the pseudogap amount to several unit-cell lengths.
SF model with overlapping hot spots has been introduced in our recent publications 85,86 , where we have considered normal state properties as well as charge orders corresponding to intra-region particle-hole pairing. For the case of a small Fermi surface curvature, it has been shown that the d-wave Pomeranchuk instability is the leading one for sufficiently shallow electron spectrum in the hot regions. This is in contrast to the diagonal dform factor CDW usually being the leading particle-hole instability in the standard SF model 90,97,128 . As a result of Pomeranchuk transition, the C 4 symmetry gets broken by a deformation of the Fermi surface and an intra-unitcell charge redistribution. Additionally, as the Pomeranchuk order leaves the Fermi surface ungapped, we have shown that at lower temperatures an axial CDW with dominant d-form factor and d-wave superconductivity may appear. These results are in line with the simultaneous observation of the commensurate C 4 breaking 9-13 and axial d-form factor CDWs 39,45 . At the same time, these order parameters, although being in agreement with the experimental observations, do not readily explain the time-reversal symmetry breaking phenomena as well as the possible Fermi surface reconstruction into hole pockets 27 .
In this paper, we consider the possibility of inter-region particle-hole pairing, akin to the excitonic insulator proposed long ago 132 . The resulting state is similar in its properties to the d-density wave 104 , having staggered bond currents. In addition, we also find evidence for an incommensurate version thereof. It turns out (Secs. III, IV) that the Fermi surface curvature in the antinodal regions (assumed to be small in Refs. 85 and 86) is the most important ingredient that stabilizes this state against the charge orders, thus leading to a rich phase diagram. We further discuss the relation of our findings to the pseudogap state in Sec. V.

The paper is organized as follows. In Sec. II we present the model and assumptions that we use. In Sec. III we analyze a simplified version of the model ignoring retardation effects and identify the emerging orders. In Sec. IV we present the results for the full model and discuss the approximations used. In Sec. V we discuss the relation of our results to the physics of underdoped cuprates, and in Sec. VI we summarize our findings.
II. MODEL.
The spin-fermion model describes the low-energy physics of the cuprates in terms of low-energy fermions interacting via antiferromagnetic paramagnons. The latter are assumed to be remnants of the parent insulating AF state destroyed by hole doping 127 . The resulting interaction is strongly peaked at the wavevector Q 0 = (π, π) corresponding to the antiferromagnetic order periodicity and is described by a propagator of the form D(ω, q) ∝ [ω 2 /v s 2 + (q − Q 0 ) 2 + ξ AF −2 ] −1 (cf. Appendix C). One can then identify eight 'hot spots' on the Fermi surface mutually connected by Q 0 , where the interaction is expected to be strongest (see the left part of Fig. 1). A conventional approximation, motivated by the proximity to the AF quantum critical point (QCP) where ξ AF → ∞, is to consider only small δp ∼ 1/ξ AF vicinities of the 'hot spots' to be strongly affected by the interaction. However, at temperatures relevant for the pseudogap state this argument does not have to hold, since the experimentally reported correlation lengths 130,131 are rather small. Moreover, ARPES experiments 3 show that the effects of the pseudogap extend well beyond the 'hot spots' to the Brillouin zone edges (π, 0), (0, π) without being significantly weakened. A different approach has been introduced in Refs. 85 and 86. As ξ AF becomes smaller, the 'hot spots' expand and can eventually overlap and merge, forming two 'hot regions' (see the right part of Fig. 1). For the latter to occur the fermionic dispersion in the antinodal region should be shallow (see also the formal definition below), which is supported by the experimental data 108,129 . To describe this situation we consider a Lagrangian, Eq. (2.1), in which the fermionic fields c ν p in the two 'hot regions' (enumerated by the index ν) are coupled to the bosonic (paramagnon) field ϕ q , with ε ν (p) the fermionic dispersion in region ν. Additionally, as the quantity |ε((0, π)/(π, 0)) − µ| has been observed 3,108 to be of the order of the pseudogap energy or smaller, one expects the fermions in the whole region to participate in the interaction. Consequently, one has to consider a dispersion relation that is not linearized near the Fermi surface. Due to the saddle points present at (π, 0) and (0, π), the minimal model for the dispersion, Eq. (2.2), is a quadratic saddle-point form in which α has the meaning of the inverse fermion mass and β/α controls the Fermi surface curvature. Note that the chemical potential of the system is determined by the full Fermi surface. As the Fermi energies (measured from the Γ-point) in hole-doped cuprates are quite large, we neglect the temperature dependence of the chemical potential and, consequently, of µ. On the other hand, as the relevant temperatures for the pseudogap onset are still sizable, we shall not consider the effects of the simultaneous development of multiple instability channels due to the van Hove singularities 133 . In Refs. 85 and 86 the limit β → 0 has been considered. For this case the condition of 'shallowness' leading to the merging of the hot spots reads α/ξ AF 2 ≫ µ. However, in order to keep the simple quadratic form of the paramagnon dispersion we also assume ξ AF ≫ a 0 , where a 0 is the lattice spacing. Here we will consider the consequences of finite β for the model (2.1) under the same assumption of strong hot-spot overlap. As in Refs. 85 and 86 we will concentrate on the particle-hole (non-superconducting) orders. Note that when the interaction is mediated by the antiferromagnetic paramagnons only, d-wave superconductivity is expected to overcome the particle-hole orders 90,128 .
However, additional interactions present in real systems, such as nearest-neighbor 134 or remnant low-energy Coulomb repulsion 85 , should suppress it with respect to the particle-hole orders.
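To make the geometry of the 'hot region' construction concrete, the short numerical sketch below evaluates an assumed quadratic saddle-point dispersion near an antinode and checks the strong-overlap condition α/ξ² ≫ µ. The functional form ε(p) = αp_y² − βp_x² − µ and all parameter values are illustrative assumptions, not quantities taken from Eqs. (2.1)-(2.2).

```python
import numpy as np

# Illustrative (assumed) parameters; none are taken from the paper.
alpha, beta = 1.0, 0.3   # inverse mass and curvature parameter (beta/alpha is the key ratio)
mu          = 0.02       # saddle-point energy measured from the Fermi level
xi_af       = 2.5        # AF correlation length in units of the lattice spacing

def eps_antinodal(px, py):
    """Assumed saddle-point dispersion near (pi, 0); momenta measured from the saddle."""
    return alpha * py**2 - beta * px**2 - mu

# Strong-overlap ('shallowness') condition quoted in the text: alpha / xi^2 >> mu, T.
scale = alpha / xi_af**2
print(f"alpha/xi^2 = {scale:.3f}, mu = {mu:.3f} -> strong overlap: {scale > 10 * mu}")

# Fraction of the sampled antinodal patch lying within the interaction scale of the Fermi level.
px, py = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201), indexing="ij")
hot = np.abs(eps_antinodal(px, py)) < scale
print(f"hot fraction of the sampled antinodal patch: {hot.mean():.2f}")
```

Varying β/α and ξ in such a sketch makes it easy to see how the flat antinodal segments become 'hot' as a whole once α/ξ² exceeds µ and T.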
III. PHASE DIAGRAM FOR A SIMPLIFIED MODEL.
To address the qualitative features of the emerging orders we can use a simplified version of the model (2.1), also introduced previously by us 85 . It amounts to substituting the paramagnon part of the Lagrangian with a constant inter-region interaction between the fermions. This is also equivalent to taking the ξ → 0 limit for the paramagnon propagator. Additionally, we neglect the self-energy effects in this case.
We start from the Lagrangian (3.1), in which the paramagnon propagator is replaced by such a constant inter-region coupling. The spin structure of the interaction is taken here in full analogy to the original spin-fermion model. Additionally, the integrals that appear below are cut off at momenta p x , p y ∼ 1/ξ and we assume that the inequality α/ξ 2 ≫ µ, T (strong overlap of hot spots) is fulfilled. In addition to the d-wave superconductivity, one can identify two attractive singlet channels (the triplet channels, as in the SF model, are subleading, with the effective coupling being three times weaker). The corresponding order parameters can be written in terms of intra-region, Eq. (3.2), and inter-region, Eq. (3.3), particle-hole averages (spin indices are suppressed). The interaction for the orders without a sign change between the regions (corresponding to s-form factor charge order) is repulsive, and consequently such orders are not expected to appear on the mean-field level. As is shown in detail below, the order parameter D, Eq. (3.3), on the other hand, leads to a pattern of bond currents modulated with wavevector (π, π) + Q without any charge density modulation. In the case Q = 0 (see Fig. 3a) this state is similar to the DDW 104 or the staggered flux phase 106 . Additionally, the order parameter is purely imaginary in this case (D = −D * ) and therefore breaks only a discrete symmetry. Finite Q ≠ 0 corresponds to a modulation of the current pattern incommensurate with the lattice (see Fig. 3b), which results in the breaking of a continuous rather than a discrete symmetry. We will call the resulting state the incommensurate DDW (IDDW).
In order to obtain a phase diagram containing the orders discussed above for the model described by the Lagrangian (3.1), we calculate the critical temperatures of the corresponding transitions using the linearized mean-field equations, Eqs. (3.4) and (3.5), where ν 0 = S/(2π) 2 , S being the area of the 2D system, and the corresponding susceptibilities, Eq. (3.6), where ω n = (2n + 1)πT is the fermionic Matsubara frequency. Evaluating the susceptibilities (3.6) one can find the critical temperatures for the instabilities considered here. To map out the phase diagram we fix the leading instability temperature T ins and identify the order having the largest χ as the leading one. The control parameters are then µ/T ins and β/α.
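A minimal numerical sketch of this procedure is given below: it evaluates the particle-hole bubble −T Σ_n ∫ d²p G_a G_b at Q = 0 on a truncated momentum grid for an intra-region (Pomeranchuk/CDW-like) and an inter-region (DDW-like) pairing of propagators. The saddle-point dispersions, the cutoff handling, and the parameter values are assumptions chosen for illustration and are not the actual Eqs. (3.4)-(3.6).

```python
import numpy as np

# Assumed parameters (units of temperature T = 1); purely illustrative.
alpha, beta, mu, T = 20.0, 6.0, 0.5, 1.0
cutoff = 1.0                  # momentum cutoff ~ 1/xi, so alpha/xi^2 = 20 >> mu, T
n_max, n_p = 200, 120         # Matsubara and momentum grid sizes

def eps1(px, py):             # assumed dispersion of hot region 1 (near (pi, 0))
    return alpha * py**2 - beta * px**2 - mu

def eps2(px, py):             # assumed dispersion of hot region 2 (near (0, pi))
    return alpha * px**2 - beta * py**2 - mu

p = np.linspace(-cutoff, cutoff, n_p)
px, py = np.meshgrid(p, p, indexing="ij")
dp2 = (p[1] - p[0]) ** 2 / (2 * np.pi) ** 2

def bubble(e_a, e_b):
    """Particle-hole bubble -T sum_n int d^2p G_a G_b on the truncated grids."""
    chi = 0.0
    for n in range(-n_max, n_max):
        w = (2 * n + 1) * np.pi * T
        chi += np.sum(1.0 / ((1j * w - e_a) * (1j * w - e_b)))
    return float((-T * chi * dp2).real)

# Intra-region pairing (Pomeranchuk/CDW channel at Q = 0) vs inter-region pairing (DDW).
chi_intra = bubble(eps1(px, py), eps1(px, py))
chi_inter = bubble(eps1(px, py), eps2(px, py))
print(f"chi_intra = {chi_intra:.4f}, chi_inter = {chi_inter:.4f}")
```

Repeating such an evaluation on a grid of (µ/T, β/α) values, and comparing the channels at a fixed T_ins, is the kind of bookkeeping behind a phase diagram like Fig. 4.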
In Fig. 4 we present the resulting phase diagram, where χ W,D have been calculated using (α/ξ 2 )/T = 50. In contrast to the previous studies of the SF model 90,128,134 , where a diagonal d-form factor CDW has been universally found to be the leading particle-hole order, one now obtains three novel instabilities: the d-wave Pomeranchuk phase with a deformed Fermi surface and intra-unit-cell charge nematicity, as well as current orders in the form of the DDW and its incommensurate variation (IDDW). Note that the experimentally observed 39,45 axial d-form factor CDW is a subleading instability, which is, however, expected to occur at lower temperatures within the Pomeranchuk 85 or DDW 110 phases.
Two general trends are evident from Fig. 4. First, the charge order is generally favorable at low Fermi surface curvature β/α, while the current orders dominate at larger curvature. Second, small values of µ are seen to favor the Pomeranchuk and IDDW phases among the charge and current phases, respectively.
The qualitative reason for the dominance of the DDW at moderate β/α can be seen in the dispersion (2.2): for β = α one has ε 1 = −ε 2 for αp 2 ≫ µ. As α/ξ 2 ≫ µ, the Fermi surfaces of the two regions are nearly nested in a large part of the regions, in contrast to a CDW, for which nesting is restricted to the vicinity of a single point in p-space. It is, however, surprising that the current phases start dominating at β/α considerably smaller than 1. Below we provide the analytical results leading to the phase diagram of Fig. 4 as well as a detailed description of the emerging orders.
• Charge Density Wave is represented by Eq. (3.2) with a finite value of Q. Due to the sign change between the two FS regions, the amplitude of the on-site charge modulation, proportional to Σ p W (p) ∼ W 1 + W 2 , vanishes.
It has been shown 85,90 , however, that the charge modulation on the oxygen sites is related to bond operators in the single-band model, Eq. (3.7). Let us now compute the corresponding susceptibility, Eq. (3.6). First of all, one can conclude from Eq. (3.4) that only Q x = Q y ≡ Q satisfies χ 1 W = χ 2 W , i.e. the wavevector is directed along the diagonal. This is in line with the previous results on the spin-fermion model 90,128 . The momentum integrals for the present model with overlapping hot spots can be evaluated explicitly in two limiting cases (for the details of the calculation see Appendix A). In the limit β → 0 one obtains Eq. (3.8), while in the opposite limit β/ξ 2 ≫ µ, T one gets Eq. (3.9). The expression under the sum in Eq. (3.9) is obtained for ω n ≪ β/ξ 2 ; however, as the resulting sum is logarithmically large for β/ξ 2 ≫ µ, T , we can simply disregard the contribution from higher Matsubara frequencies.
• Pomeranchuk instability corresponds to the anomalous average (3.2) with Q = 0. It leads to a d-wave-like deformation of the Fermi surface, breaking the C 4 symmetry without opening a gap. Additionally, the Pomeranchuk order should be accompanied by an intra-unit-cell redistribution of the charge between the two oxygen orbitals, as can be readily seen from Eq. (3.7).
The expressions for the susceptibilities can be obtained from Eqs. (3.8) and (3.9) by taking the Q → 0 limit. Moreover, the sign of ∂χ CDW /∂Q 2 | Q=0 allows one to check the stability of the Pomeranchuk phase with respect to the CDW. For β → 0 we obtain Eqs. (3.10) and (3.11). Numerical calculation shows that the expression (3.11) changes sign, becoming negative for µ/T < 1.1. Therefore, the Pomeranchuk phase is stable for µ/T ins < 1.1 for β → 0 (in agreement with Ref. 85). In the opposite limit β/ξ 2 → ∞, however, the expansion of (3.9) yields Eq. (3.12), which is always positive. Thus, to obtain the phase boundary between the CDW and Pomeranchuk phases at finite β, one needs to perform the momentum integration assuming finite values of β/ξ 2 . The general result is rather cumbersome and is presented in Appendix A (Eq. (A1)). For µ/T ≪ 1 a simple expression, Eq. (3.13), is found. One can see that (µ/T ) cr decreases for β/α < 0.5 and then starts to increase. However, as we shall see, this upturn is located in the region where the DDW is the leading instability. Note that in Fig. 4 the Pomeranchuk/CDW boundary is found numerically from the full expression (A1) for (α/ξ 2 )/T = 50.
• Current Phases (DDW/IDDW) are represented by the anomalous average (3.3). This order parameter does not result in any charge modulation on either the copper or the oxygen orbitals. In the former case this is guaranteed by the d-wave symmetry, while for the oxygen orbitals (e.g., O x ) the corresponding bond average vanishes as well. However, it can be shown that the order parameter D induces a staggered pattern of currents flowing through the lattice. The current between the lattice sites i and j is given by 124 the imaginary part of the corresponding bond average multiplied by the hopping parameter t ij . For example, the current I i,i+x between nearest-neighbor sites along x can be estimated as being proportional to tD (see Appendix B). In Fig. 3 an illustration of the current patterns is presented. Note that, in general, currents between non-nearest neighbors are induced too. However, as this effect depends on the structure of the DDW amplitude in the entire Brillouin zone, we will not consider it in this work.
To calculate the resulting magnetic fields one should, however, calculate the current density j q rather than the current. For a square lattice with nearest-neighbor hopping t one can obtain the result (3.16), neglecting the smearing of the atomic wavefunctions with respect to the current variation length (see also Refs. 116 and 117), where e x(y) is a unit vector along the x(y) axis and K n is a reciprocal lattice vector. Additionally, one can calculate the magnetic field B z along the z-direction produced by the DDW, Eq. (3.17), assuming that the DDWs are aligned in phase along the z axis. Note that in the full model (Sec. IV) the order parameter is frequency-dependent due to retardation effects, and D has to be calculated using the corresponding anomalous Green's function. Thus, D is not, in general, simply related to the magnitude of the pseudogap or T * , as would be the case for a constant interaction.
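As a rough, purely illustrative estimate (not the paper's Eqs. (3.16)-(3.17)), one can gauge the field produced by such a staggered bond-current pattern by replacing each plaquette with a small current loop and summing magnetic-dipole fields; the loop current, lattice constant, and evaluation point below are arbitrary assumed numbers.

```python
import numpy as np

MU0 = 4e-7 * np.pi          # vacuum permeability, SI
a0  = 3.8e-10               # assumed in-plane lattice constant, m
I   = 1e-11                 # assumed circulating current per plaquette, A (illustrative)

# Staggered plaquette currents -> staggered out-of-plane dipoles with moment m = I * a0^2.
N = 20                      # lattice patch of N x N plaquettes
xs = (np.arange(N) - N / 2) * a0
X, Y = np.meshgrid(xs, xs, indexing="ij")
m = I * a0**2 * (-1.0) ** (np.add.outer(np.arange(N), np.arange(N)))   # alternating signs

def Bz(x, y, z):
    """z-component of the summed dipole fields at point (x, y, z) above the plane."""
    rx, ry, rz = x - X, y - Y, z
    r2 = rx**2 + ry**2 + rz**2
    return np.sum(MU0 / (4 * np.pi) * m * (3 * rz**2 - r2) / r2**2.5)

# Field just above one plaquette, at a height of half a lattice spacing.
print(f"B_z ~ {Bz(a0 / 2, a0 / 2, a0 / 2):.3e} T")
```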
Let us now present the results for the thermodynamic susceptibilities. For the commensurate (Q = 0) state, one obtains Eq. (3.18) in the limit β → 0 and Eq. (3.19) in the opposite limit β/ξ 2 → ∞. For the case Q ≠ 0, let us first consider the stability of the DDW with respect to an infinitesimal discommensuration vector Q. The general expression, obtained in Appendix A, yields a condition for the sign change of the Q 2 coefficient of χ D (Q). Numerical solution of the resulting equation is represented by the DDW/IDDW critical line in the phase diagram of Fig. 4. Qualitatively, low values of µ favor the IDDW. For µ/T ≪ 1, α − β ≪ α the result can also be expressed analytically, Eq. (3.20). Furthermore, we have studied numerically the dependence of the direction and magnitude of the Q that maximizes χ DDW (Q). For this purpose, the expressions (A5)-(A7) have been used. In Fig. 5 the result is presented for µ/T = 0.1. Interestingly, the orientation of Q in the IDDW phase is almost always along the axes. While there seems to be a transition to a diagonal phase at low curvatures, the charge order is dominant in that region, as is shown below.
FIG. 5. The dashed line is (βQ 2 /T ) max for Q along the diagonal. Vertical lines mark transitions between different phases: in the leftmost region (D) Q has a diagonal orientation, in the middle region (A) it is axial, and in the rightmost region the DDW is commensurate.
• Competition between charge and current orders
We are now in a position to compare the tendencies to form charge (CDW/Pomeranchuk) and current (DDW/IDDW) order. For β ≪ α, comparing (3.8) and (3.10) to (3.18), one finds an additional factor α/ξ 2 in the former. After the summation this translates into a large parameter of order (α/ξ 2 )/[T, µ] present in the susceptibilities for the charge instabilities. Incommensurability of the DDW does not change this conclusion (see Eq. (A3)).
In the case β ∼ α, β/ξ 2 ≫ µ, T , one gets large logarithmic contributions in the Matsubara sums for both charge and current orders. Therefore, one can estimate the transition line by equating the prefactors of the sums in Eqs. (3.9) and (3.19). Numerical solution of the resulting equation yields (β/α) cr ≈ 0.29, this value being independent of µ/T . The finite slope of the charge/current boundary in Fig. 4 results from the finite values of (α/ξ 2 )/T taken in the numerical calculations. One can, however, show that the slope is strongly suppressed, being proportional to log −1 ((β/ξ 2 )/T ).
IV. CHARGE AND CURRENT ORDERS IN THE FULL MODEL
We now turn to the analysis of the full SF model (2.1). Let us first consider the effects of interactions in the normal state. The interactions renormalize the Green's function G of the fermions and the propagator D of the paramagnons, leading to the dressed forms of Eq. (4.1), where α, β are fermion spin indices, ν = 1, 2 is the 'hot region' index, and m, m ′ = (x, y, z) enumerate the components of the paramagnon field ϕ. Additionally, f (ε n , p) = ε n + iΣ(ε n , p) and Ω(ω n , q) = ω n 2 + v s 2 Π(ω n , q), with Σ(ε n , p) and Π(ω n , q) being the fermionic self-energy and the polarization operator for the paramagnons, respectively. In this section ε n = (2n + 1)πT and ω n = 2nπT stand for the fermionic and bosonic Matsubara frequencies, respectively.
To calculate the self-energies we use the approximations illustrated diagrammatically in Fig. 6. This is justified by the small parameter [T, µ, v s 2 /α]/(α/ξ 2 ) for β ≪ α (see Appendix C 1). To study the formation of the DDW we will, however, use these approximations for all values of β, assuming the results to be nevertheless correct at least qualitatively. The resulting momentum-dependent self-consistency equations have the same form as the ones presented in Ref. 85 . Assuming the strong overlap of the hot spots expressed by the inequality α/ξ 2 ≫ µ, one can perform the momentum integration in the self-energies. In the limit β ≪ α the latter can be shown to be momentum-independent. For larger β we approximate the self-energies by their values at zero incoming momentum. Additionally, we have evaluated the momentum integral in the polarization operator for the paramagnons without any cutoff. It turns out that to reproduce our previous results 85 one needs to introduce a cutoff Λ such that αΛ 2 → ∞ while βΛ 2 → 0. Physically, Λ is related to the deviation of the fermionic spectrum from the form (2.2) outside the 'hot regions'.
Here we will assume for simplicity that βΛ 2 ≫ µ, T , which should be valid for not too small β. Further details of the calculations can be found in Appendix C 2. Introducing an energy scale Γ = λ 2 v s 2 /α (note that in our previous work 85 a different scale, (λ 2 v s / √ α) 2/3 , has been used), the resulting equations can be cast in the dimensionless form (4.2), where all quantities are normalized by Γ to the appropriate power, a = v s 2 /ξ 2 , Ω ω = ω 2 + v s 2 Π(ω), f + = (αf ε+ω + βf ε )/(α + β) and f − = (αf ε + βf ε+ω )/(α + β). The value Ω(0) has been absorbed into a redefinition of 1/ξ 2 . Now we proceed to the analysis of the emergence of the particle-hole orders. The general mean-field equation for the Pomeranchuk order has been derived in Ref. 85 ; an analogous equation holds for the charge density wave order parameter W Q (ε, p), while similar equations can be written for the DDW and IDDW. These equations can be used to study the full temperature, momentum, and Matsubara frequency dependence of the order parameters, provided the expressions for the normal self-energies are suitably modified (see Ref. 85 for the case of the Pomeranchuk order at small β). Here we will concentrate on the critical temperatures of the emerging orders in order to check whether the general trends observed in Sec. III still hold. Therefore, we simply use Eqs. (4.2) for the normal-state self-energies in our study.
Assuming the order parameters to be momentum-independent, we integrate over momenta (for details see Appendix C 2) in the equations for the charge and current orders. The resulting equations for the Pomeranchuk P (ε), CDW C(ε), DDW D(ε) and IDDW D I (ε) order parameters near the critical temperature are rather cumbersome and we present them in Appendix C 2 (Eq. (C6)). All quantities in (C6) are normalized by Γ. Additionally, to obtain a closed-form answer for the IDDW, we have assumed Q along the diagonal. While this assumption does not allow one to study the orientation of Q, for small Q it is expected to yield the correct critical temperature, allowing one to draw conclusions about the stability of the commensurate DDW. From Eq. (C6) it is already evident that for small β/ξ 2 the right-hand side of the equation for the charge orders contains a large factor (cf. the factor ∼ (α/ξ 2 )/[T, µ] found in Sec. III), ultimately meaning that the current orders do not appear in this case. In Fig. 7 the results of the numerical solution of the equations (C6) are presented for two sets of parameters. The values of the parameters characterizing the incommensurability for the CDW and IDDW have been chosen to maximize the critical temperatures. Considering the qualitative character of our approximations for finite β, below we analyze only the general features of the obtained results. One can see that the Pomeranchuk instability is considerably more robust to increasing β than expected from the simplified model (3.13). Additionally, the IDDW seems to play a more important role. Both results can be qualitatively understood as a consequence of the renormalization of the fermionic self-energy, resulting in the replacement ε → f (ε). At low frequencies one obtains Re f (ε) ∼ ε/Z, where Z < 1. Therefore, the parameter µ is effectively renormalized to a smaller value Zµ. As smaller values of µ qualitatively favor the Pomeranchuk and IDDW phases, this explains the observed tendency.
As for the charge/current order competition, one can conclude that the boundary between the charge and current phases, (β/α) cr , appears to be remarkably close to the one obtained in the simplified model. The dip in the critical temperatures at intermediate β/α is actually also qualitatively present in the simplified model; however, there we concentrated on the competition between phases at a given T . One could expect that in this region superconductivity will re-emerge as the leading instability even if the remnant Coulomb interaction acts against it. One should also keep in mind that the closeness of the different phases in energy may induce strong fluctuations that can modify the results obtained here in the mean-field approximation. These effects, however, should be important only close to the phase boundaries. Thus, we suggest that the fluctuations will not change the results qualitatively, but leave detailed investigations for future studies.
V. DISCUSSION.
Considering the obtained non-superconducting phases, two tentative scenarios can be anticipated for the pseudogap state. In the first one, the Pomeranchuk instability is the leading one and is expected to occur at T Pom ≳ T * . Then, at T * the pseudogap would open due to the formation of an axial CDW 85 . However, time-reversal symmetry breaking does not appear naturally in this scenario unless more complicated form factors for the CDW are considered 91,93 . Moreover, while in some compounds 29 T CDW has been observed to be close to T * , this does not seem to be the general case 30,31 . Additionally, more recent transport data suggest that a Fermi surface reconstruction taking place at T CDW < T < T * 27 is distinct from the one caused by the CDW 135 .
In the other scenario the leading instability is the DDW, which has its onset at T * . This is consistent with the transport 27 and ARPES 108 signatures of the pseudogap. It has also been shown that an axial d-form factor CDW can emerge on the Fermi surface reconstructed by the DDW 110,111 . The DDW breaks time-reversal symmetry but, as it also breaks translational symmetry, additional Bragg peaks at (π + Q x , π + Q y ) are expected to appear in the DDW phase. While definitive experimental evidence for these peaks seems to be lacking 112-115 , we note that the magnitude of the signal predicted by BCS-like theories 104 should change in the case of a strongly frequency-dependent order parameter, such as the one in the SF model. Thus it is possible that the magnitude of the additional peaks predicted using a frequency-independent order parameter could be overestimated. Moreover, the Q = 0 signal observed experimentally 15,16 might originate from higher-order processes, but this possibility has not been investigated theoretically so far.
Additionally, at low dopings there is evidence from neutron scattering experiments 130 for a (π + Q x , π + Q y ) order not accompanied by a CDW. We suggest that this observation can be explained in terms of the incommensurate IDDW order studied in the present work. Unlike previous works where a similar state has been suggested 122 , in our case the IDDW is not accompanied by a charge modulation.
Let us now discuss the values of the parameters of the model (2.1) that might be most suitable for the cuprates. Relevant values of µ have been identified in our previous work 85 and are usually of the order of T * for the underdoped case. The Fermi surface curvature β/α now appears to be another crucial parameter controlling the phase diagram. One can relate β/α to the tight-binding parametrization of the dispersion in the full Brillouin zone, ε(p) = −2t(cos p x + cos p y ) − 4t ′ cos p x cos p y − 2t ′′ (cos 2p x + cos 2p y ), with hopping parameters taken from the literature 140 . In any case, the curvatures obtained in this way are rather low and seemingly constrain us to the regime where the Pomeranchuk instability is the leading one. However, drawing quantitative conclusions about the appearance of the current orders demands taking into account the renormalization of the Fermi surface curvature by the low-energy interactions, which is beyond the scope of the present paper.
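For a given tight-binding fit, the two principal curvatures at the antinodal saddle point, and hence an estimate of β/α, can be extracted by simple finite differences, as sketched below. The hopping values are illustrative assumptions (t′/t = −0.3 is a commonly quoted ballpark; t″ is set to zero here), and identifying the positive and negative curvatures with α and β follows the saddle-point convention of Eq. (2.2), which is itself an assumption.

```python
import numpy as np

t, tp, tpp = 1.0, -0.3, 0.0     # assumed hoppings in units of t (illustrative values only)

def eps(px, py):
    """Tight-binding dispersion quoted in the text (full Brillouin zone)."""
    return (-2 * t * (np.cos(px) + np.cos(py))
            - 4 * tp * np.cos(px) * np.cos(py)
            - 2 * tpp * (np.cos(2 * px) + np.cos(2 * py)))

# Second derivatives at the saddle point (pi, 0) by central finite differences.
h = 1e-3
d2x = (eps(np.pi + h, 0) - 2 * eps(np.pi, 0) + eps(np.pi - h, 0)) / h**2
d2y = (eps(np.pi, h) - 2 * eps(np.pi, 0) + eps(np.pi, -h)) / h**2

# With the assumed convention eps ~ alpha*p_par^2 - beta*p_perp^2 near the antinode,
# alpha is half the positive curvature and beta half the magnitude of the negative one.
alpha, beta = max(d2x, d2y) / 2, abs(min(d2x, d2y)) / 2
print(f"d2/dpx2 = {d2x:.3f}, d2/dpy2 = {d2y:.3f}, beta/alpha = {beta / alpha:.2f}")
```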
VI. CONCLUSION
We have studied particle-hole instabilities in the spin-fermion model in the regime where the shallowness of the antinodal dispersion combined with a finite AF correlation length leads to a strong overlap of the 'hot spots' on the Fermi surface. A rich phase diagram has been obtained as a function of the chemical potential (doping) relative to the dispersion saddle points and of the Fermi surface curvature in the antinodal regions. The phases obtained include Pomeranchuk and current-ordered phases not previously encountered in the SF model. We have shown that for small curvatures, β/ξ 2 ≪ µ, T , an Eliashberg-like approximation is justified by the small parameter [T, µ, v s 2 /α]/(α/ξ 2 ). The self-energy effects have been found to promote the Pomeranchuk and incommensurate current orders. The current orders possess attractive features for an explanation of the pseudogap phase, namely a particle-hole asymmetric gap in the antinodal regions, time-reversal symmetry breaking, and a Fermi surface reconstruction into hole pockets. Moreover, the incommensurate current order obtained in this work can potentially explain the incommensurate magnetism observed at low dopings. Finally, we expect our results to be of relevance also to other itinerant systems with strong antiferromagnetic fluctuations.
ACKNOWLEDGMENTS
The authors gratefully acknowledge the financial support of the Ministry of Education and Science of the Russian Federation in the framework of Increase Competitiveness Program of NUST "MISiS" (Nr. K2-2017-085).
Appendix A: Matsubara Susceptibilities for Simplified Model
Charge Orders
In the limit β/ξ 2 → 0 the integral over one of the momenta in (3.6) yields a factor of 2/ξ, while for the second one the limits can be taken to ±∞ for α/ξ 2 ≫ µ, ω n . As the resulting sum over ω n converges, we can neglect the contribution from ω n ≳ α/ξ 2 , resulting in Eq. (3.8).
The resulting integrals over momenta converge for all x. Consequently, one can exchange the integration order. The sum over Matsubara frequencies appears to diverge at large ω n , but only logarithmically. This allows one to obtain the leading contribution to χ CDW (Q) by introducing a cutoff at ω n ∼ β/ξ 2 in the sum and neglecting the region ω n ≳ β/ξ 2 , provided that β/ξ 2 ≫ µ, T .
To study the stability of the Q = 0 phase at finite β we expand the CDW susceptibility in Eq. (3.6) in powers of Q.
Current Orders

Assuming only α/ξ 2 ≫ µ, ω n , one can evaluate the integral over momenta analytically. From the resulting expression one can obtain (3.18) and (3.19). For finite Q a closed form for χ D (Q) can be obtained for β/ξ 2 ≫ T, µ by means of a partial-fraction decomposition of the product 1/(a 1 a 2 ) of the two fermionic denominators. To change the integration order in the momentum integral in (3.6) we need to assume Q 1 , Q 2 ≠ 0, as ±(α + β)x + (α − β)/2 can vanish inside the x integration region while (α − β)x ± (α + β)/2 does not cross zero for any x. One can then simplify the calculation by shifting the integration variables. First we integrate over p 1 ; the answer depends on the sign of (α + β)x + (α − β)/2. The remaining integral over p 2 is then performed. Combining the results above, one obtains two contributions. After a change of variables x → (α − β)x/(α + β)/2 and some algebra, the first contribution can be written in terms of Q 2 = (Q 1 2 + Q 2 2 )/2 and δQ 2 = (Q 2 2 − Q 1 2 )/2; for δQ 2 = 0 the corresponding integral can be evaluated analytically. The second contribution is treated similarly.
For δQ 2 = 0 the integral in I 2 can be evaluated analytically. Combining this with the first contribution we obtain (A3). The expression (A3) can be used to calculate the IDDW/DDW phase boundary because for small Q one can show that χ D (Q → 0) ≈ a + b(Q x 2 + Q y 2 ). From the condition ∂ 2 χ D /∂Q 1 2 = 0 we obtain, after evaluation of the Matsubara sum, Eq. (A4). This expression has been used to calculate the IDDW/DDW boundary numerically. As is evident from Fig. 4, µ/T becomes small for (α − β) ≪ α. Expanding (A4) for µ/T ≪ 1, (α − β) ≪ α one obtains Eq. (3.20). For the numerical calculations of χ DDW and of the orientation of Q we have evaluated the Matsubara sum before the integrals in I 1 , I 2 . Using T Σ ω 1/(iω + a) = tanh(a/2T )/2 one obtains the expression (A5) for I 1 . Calculating the derivative dχ 1 DDW /dδQ 2 one finds that it is negative for δQ 2 > 0 and positive for δQ 2 < 0. Consequently, I 1 has a global maximum at δQ 2 = 0. Moreover, one can see that χ 1 DDW increases with Q 2 . For I 2 the resulting Matsubara sum diverges logarithmically; however, one can subtract the divergent part T Σ ω 1/|ω|, which is then evaluated with a cutoff at ω = β/ξ 2 , while the remaining convergent sum yields (A6). The expressions (A5), (A6) have been used to calculate χ DDW (Q) numerically via (A7). The orientation and magnitude of Q are found by maximizing χ DDW (Q). One can get some analytical insight into the possible orientation of Q in the case β ≪ α. I 2 can be approximately evaluated by taking x ≈ ±(α − β)/[2(α + β)] in the integral. At T, µ ≪ βQ 2 one can go over to an integration over ω. The second terms in the brackets yield, after integration over ω, a contribution ∼ (β/α) × const + O(µ/βQ 2 ). On the other hand, χ 1 DDW (Q) can be shown to be bounded from above by π 2 /[2(α + β)]. Consequently, for β/ξ 2 ≫ βQ 2 ≫ µ, T the contribution from χ 2 DDW is dominant and maximizing δQ 2 is favorable. The maximal absolute value of δQ 2 is Q 2 , which is reached for Q along one of the BZ axes. However, as β decreases, χ 2 DDW eventually becomes less important due to the factor β/α. As χ 1 DDW has been shown above to be maximal for δQ 2 = 0, one expects a transition to a diagonal Q at low β/α. This is in line with the results of the numerical calculation shown in Fig. 5.
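The logarithmic behaviour of the subtracted divergent part can be checked numerically. The snippet below sums T Σ 1/|ω_n| over fermionic Matsubara frequencies up to a cutoff and verifies the expected (1/π) ln(cutoff) growth; the temperature and cutoff values are arbitrary illustrative choices.

```python
import numpy as np

# S(Lambda) = T * sum_{|omega_n| < Lambda} 1/|omega_n| over fermionic frequencies
# omega_n = (2n+1) pi T grows as (1/pi) ln(Lambda) plus a constant.
T = 0.01

def matsubara_sum(cutoff):
    n = np.arange(0, int(cutoff / (2 * np.pi * T)) + 1)
    w = (2 * n + 1) * np.pi * T
    w = w[w < cutoff]
    return 2 * T * np.sum(1.0 / w)          # factor 2 accounts for +/- frequencies

cutoffs = np.array([1.0, 10.0, 100.0, 1000.0])
values = np.array([matsubara_sum(c) for c in cutoffs])
slope = np.polyfit(np.log(cutoffs), values, 1)[0]
print(f"fitted slope = {slope:.3f}, expected 1/pi = {1 / np.pi:.3f}")
```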
Appendix B: Expression for the current density
The definition of the current density follows directly from the standard expression for the magnetic part of the action, S mag , in the presence of a vector potential A(r), Eq. (B1). Here we derive the bare part S 0 [A] of the action S in the presence of an external potential A(r) and obtain the expression for the current using this standard electrodynamic formula. For simplicity we consider the energy operator to be of the form (B2), in which a x and a y are lattice vectors directed along the x and y bonds of the CuO lattice, respectively, and |a x | = |a y | = a 0 . The energy operator for the system with the vector potential can easily be written using the minimal coupling equivalent to the Peierls substitution in Eq. (B2), and the energy operator then takes the corresponding form. The current operator should be defined by calculating the term linear in A(r) in the expansion of the action S mag . As the vector potential A(r) does not commute with the gradient, the expansion in A(r) is not trivial and we use time-ordered products. For any non-commuting operators A and B one has the corresponding operator identity, in which T α and T A α are the time-ordering and anti-time-ordering operators, respectively. Introducing the operators Ã(r, α) = e −αa x ∇ A(r) e αa x ∇ = A(r − αa x ), one arrives at Eq. (B8). Using Eq. (B8) we rewrite the energy operator ε(−i∇ − (e/c)A(r)). This leads to the corresponding expression for the action. Now, expanding the exponentials in the vector potential A(r) and comparing the term linear in A(r) with S mag , Eq. (B1), we bring the correlation function for the current density to the form (B10), where j x,y are the x- and y-components of the current density. The angular brackets in Eq. (B10) stand for averaging with the action S of the system. We will use for this averaging the action in the mean-field approximation.
We emphasize that the current density j x,y (τ, r) is a function of the continuous coordinate r and Eq. (B10) is valid not only at the sites of the lattice. This is very important because in some cases non-zero circulating currents turn to zero at these points.
In order to calculate physical quantities we can expand the fields c(r) in the Bloch functions ψ P (r), c(r) = ∫ c P ψ P (r) dP, where ψ P (r) has the standard form, u P (r) is a periodic function with the periods a x,y , and P is a quasimomentum in the first Brillouin zone. Note that in the main text the quasimomenta are defined in units of the inverse lattice spacing. However, as we use the spectrum, Eq. (B4), corresponding to a tight-binding limit, the eigenfunctions of the Hamiltonian are localized near the lattice sites and it is convenient to expand the Bloch functions in Wannier functions w R n (r), representing the functions ψ P (r) as in Eq. (B14), where R n = a x n x + a y n y , n = (n x , n y ), n x , n y = 0, ±1, ±2, ±3, . . ., and N is the total number of sites. The functions w R n (r) are localized near the sites with coordinates R n . The function w 0 (r) is localized near r = 0, and w R n (r) = w 0 (r − R n ). The Wannier functions are normalized in the standard way. Taking the Fourier transform of the current, Eq. (B16), we substitute Eq. (B14) into Eq. (B10) and the latter into Eq. (B16). Then we shift r → r − αa x,y in the first term in j x,y (r) and r → r + αa x,y in the second one and use the fact that the product w R n1 (r) w R n2 (r ± a x,y ) is essentially different from zero only for R n1 = R n2 ∓ a x,y . The integral over r then reduces to a simple expression for |q| ≪ l c −1 , where l c is the localization radius of the function w 0 (r). The calculation of the sum over R n is performed using the Poisson summation formula, where K n is a vector of the reciprocal lattice, K n = (2π/a 0 )(n x , n y ). The summation over the vectors of the reciprocal lattice is important because q is not necessarily located in the first Brillouin zone.
As a result, we arrive at the expression (B19) for the current density (as it does not depend on time, we omit from now on the variable τ ), where e x,y = a x,y /a 0 is the unit vector along the x or y bond. In Eq. (B19) the integration is performed over all P and P ′ inside the first Brillouin zone. The effective velocity v x,y q (P) is proportional to Ja 0 ; in the limit q → 0 it reduces to the conventional velocity v x,y 0 (P) = Ja 0 sin (Pa x,y ).
Averaging in Eq. (B19), we reduce the latter to the form of Eq. (B22); the factor 2 in Eq. (B22) is due to spin. The main contribution to the integral over P in Eq. (B19) comes from the hot regions, and it is again convenient to change to the variables c 1,2 of Eq. (2.1) and to the momenta p counted from the middle of the edges of the reciprocal lattice. Then, using the corresponding symmetry relation, and with Q x = (π/a 0 , 0), Q y = (0, π/a 0 ), Eq. (B22) can be written in the form (B26), j x,y (q) = 2e e x,y e iQ x a x,y Σ K n ∫ dp/(2π) 2 ṽ x,y q (p) g 12 , where ṽ x,y q (p) is the corresponding effective velocity. Integrating in Eq. (B26) over p, we reduce this equation to a simpler form. As the order parameter, Eq. (3.3), is imaginary, the coefficient D is real. The current has peaks at the points Q AF + K nx and Q AF + K ny . The function j x,y (q) can be further simplified at the peak values and represented in the form j x,y (q) = −8etDi e iQ x a x,y e x,y (B30). Nevertheless, the sine function can be relevant if the peaks are smeared. Particle conservation imposes the continuity condition q · j(q) = 0. The current density can also be written in real space. It is important to emphasize that q is a momentum (not a quasimomentum) and therefore the current density in real space, j(r), can be written for the continuous coordinate r using the standard Fourier transform, Eq. (B16). A simple calculation leads to the expression (B32), which describes currents circulating around the elementary cells. The currents oscillate with the period set by Q AF and form a bond-current antiferromagnet. This picture corresponds to the one proposed in Ref. 104.
In order to avoid confusion we would like to note that j x,y (τ, r) is the two-dimensional current density. The three-dimensional current density J x,y (r) is obtained by summing the planar contributions, where z is the coordinate perpendicular to the planes and c 0 is the distance between the layers. Provided the currents in different layers are in phase (the function j x,y (r, m) does not depend on the layer index m), one can, for rough estimates, approximate the 3D current density by the expression (B34). In the next subsection we use this approximation in order to visualize roughly the structure of the magnetic field. The circulating currents produce magnetic fields that can be measured by various techniques. As the explicit expression for the spontaneous currents has been obtained, the magnetic field can be determined without difficulty. The Fourier transform of the magnetic field, B(τ, q), can easily be written using the Maxwell equation ∇ × B = (4π/c) J. With the approximation (B34), only the z-component B z of the field is non-zero.
Appendix C: Details of calculations for the SF model
Estimate of vertex corrections
Here we show that the vertex corrections contain a small parameter [T, µ, v s 2 /α]/(α/ξ 2 ) if β ≪ α. As an example we compare the two self-energy diagrams presented in Fig. 8 (a: without, b: with vertex corrections) for β = 0. The bare propagators for the fermions, G a = (iε + µ − αp a 2 ) −1 , are assumed to be much 'sharper' in momentum space than the bosonic ones, D = (ω 2 /v s 2 + p 2 + 1/ξ 2 ) −1 , due to α/ξ 2 ≫ µ. The incoming momenta p are taken to be ∼ (µ/α) 1/2 ≪ 1/ξ in magnitude. This allows one to simplify the resulting integrals by neglecting the dispersion in the bosonic propagators along a, b, or both. Using the strong-overlap condition 1/ξ ≫ ([µ, T ]/α) 1/2 we can neglect p and p ′ a in the first bosonic propagator and p ′ a and p ′′ a in the second one. All quantities having dimensions of energy are hereafter normalized to Γ.
Here p and p ′ a are neglected in the first bosonic propagator, and (p ′ − p ′′ ) a and p ′′ b in the second. Using the definition of Γ we get a factor 1 in front of the sum for (a) and (v s 2 /α)/Γ for (b). Let us now estimate the Matsubara sums in two cases. For T ∼ Γ ≫ µ, v s /ξ the sums in (a) and (b) are both of order 1 and we get the total result (b) ∼ (v s 2 /α)/Γ · (a). For the calculations in Sec. IV a more relevant regime is T ∼ µ, v s 2 /α ≪ Γ. It then follows that v s /ξ = [(v s 2 /α)(α/ξ 2 )] 1/2 ≫ µ, T, v s 2 /α. The sums in (a) are estimated as follows. For the sum over ε ′ one can neglect ε ′ in the bosonic propagators. For ε ′′ the sum evaluated this way diverges; however, for an estimate one can use (v s /ξ)/Γ as a high-frequency cutoff, with T Σ ε 1/√ε ∼ √ε max . Comparing the resulting expressions we obtain (b)/(a) ∼ T /(α/ξ 2 ) ≪ 1. For non-zero β the fermionic propagators start to disperse along both directions in each region and consequently the argument is not valid. E.g., for β ∼ α one can ignore the momentum dependence of the first bosonic propagator in (a) completely, leading to the same overall form of the answer as in (b). On the other hand, if β/ξ 2 ≪ µ one can ignore β in the fermionic propagators, vindicating the argument. Thus the vertex corrections can be neglected at least for β/ξ 2 ≪ µ.
If α − β ∼ α we can neglect the dependence of the bosonic propagator on p 1 (p 2 ) for the first (second) term in the integral, and we can neglect the momenta in the bosonic propagator altogether for the third term. We then use our assumptions α − β ∼ α and |(if + µ)/α| ≪ |Ω + 1/ξ 2 | to further simplify the intermediate result. The outcome is reminiscent of the expression for χ DDW in the toy model. To study qualitatively the presence of the IDDW within the spin-fermion model we use an equation that can easily be derived assuming β/ξ 2 ≫ |if + µ| and an incommensurability (Q, Q) along the diagonal, using the result (A3). Introducing the energy scale Γ = λ 2 v s 2 /α (note that in our previous work a larger scale, (λ 2 v s / √ α) 2/3 , has been used), we can bring the equations to a dimensionless form. The equations have been solved numerically by an iteration method, with nonlinearities 1/(10|D(ε ′ )| 2 + 1) and 1/(10|P (ε ′ )| 2 + 1) introduced into the right-hand sides of the order-parameter equations to enforce convergence below the critical temperature. The values of the critical temperatures are not affected by this procedure. The number of Matsubara frequencies taken has been 300 + 1/(πT ), but not larger than 800.
While solving the equations numerically, two obstacles were encountered. First, the equations contain non-analytic functions of a complex argument. To exclude ambiguity, we cut the complex plane along the half-axis Re z < 0, Im z = 0 and check that the arguments never cross it. In practice this means that the square roots should be evaluated from combinations like if + µ, which always have an imaginary part and never cross Re z < 0, Im z = 0 as functions of ε, but not from products like (if 1 + µ)(if 2 + µ) or ratios (if 1 + µ)/(if 2 + µ), as these can cross the negative axis as functions of ε.
The second obstacle is that at low µ spurious solutions for D appear, which do not converge even for large numbers of iterations. However, the convergence can be greatly improved by the following trick, which is a simplified version of Newton's method. For T > T Pom,DDW we have a linear equation of the form D = AD, and Newton's method would update D using the inverse of (1 − A). Evaluating the matrix (1 − A) −1 is, however, a rather slow operation. In our case A ij ∼ 1/(Ω i−j + a), so the diagonal elements dominate, and we can approximately divide the residual by the diagonal elements (1 − A ii ) only. This allows us to get rid of the spurious solutions and improve the convergence, obtaining consistent T Pom/DDW values.
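A schematic implementation of this acceleration is sketched below for a toy regularized self-consistency equation on a Matsubara grid. The kernel, coupling, and saturating nonlinearity are stand-ins chosen only to mimic the diagonal dominance described above; they are not Eq. (C6), and in this well-behaved toy both update rules converge easily, so the example only illustrates the form of the diagonal-Newton step.

```python
import numpy as np

# Toy self-consistency equation D_i = sum_j A_ij * D_j / (1 + 10 D_j^2), mimicking the
# regularized gap equations of the appendix (all numbers are illustrative assumptions).
n, T, a, g = 101, 0.05, 0.3, 2.0
A = g * T / (2 * np.pi * T * np.abs(np.subtract.outer(np.arange(n), np.arange(n))) + a)

def G(D):
    return A @ (D / (1 + 10 * D**2))

def solve(accelerate, tol=1e-10, max_iter=100_000):
    D = 0.1 * np.ones(n)
    for it in range(1, max_iter + 1):
        r = D - G(D)
        if np.max(np.abs(r)) < tol:
            return D, it
        if accelerate:
            # simplified Newton step: keep only the diagonal of the Jacobian (1 - A),
            # i.e. divide the residual elementwise by (1 - A_ii) instead of inverting.
            D = D - r / (1.0 - np.diag(A))
        else:
            D = G(D)          # plain fixed-point iteration
    return D, max_iter

D_plain, it_plain = solve(accelerate=False)
D_fast, it_fast = solve(accelerate=True)
print(f"iterations: plain = {it_plain}, diagonal-Newton = {it_fast}")
print("same fixed point:", np.allclose(D_plain, D_fast, atol=1e-6))
```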
Molecular epidemiology and genotyping of TT virus isolated from Saudi blood donors and hepatitis patients
BACKGROUND In Saudi Arabia, the epidemiology and clinical significance of Torque Teno virus (TTV) infection alone and in patients with hepatitis virus infections have not been determined in a single study. In this paper, we molecularly investigated the rate and genotypes of TTV infection among Saudi Arabian blood donors and patients with viral hepatitis. The effect of TTV coinfection on viral hepatitis was also examined. SUBJECTS AND METHODS DNA was extracted from the sera of 200 healthy blood volunteers, 45 hepatitis B virus patients, 100 hepatitis C virus patients, 19 hepatitis G virus patients, and 56 non-A-G hepatitis patients. TTV DNA was amplified using primers derived from the ORF1 and 5′UTR regions. The alanine aminotransferase (ALT) level was determined for each specimen. Sequencing of ORF1 amplicons was carried out to investigate TTV genotypes. RESULTS Using primers derived from ORF1 and 5′UTR, TTV DNA was detected in 5.5% and 50.5%, respectively, of healthy blood donors, in 2.2% and 88.8% of hepatitis B patients, in 2.0% and 70% of hepatitis C patients, in 15.8% and 100% of hepatitis G patients, in 5.4% and 12.5% of non-A-G hepatitis patients, and in 4.8% and 56.4% overall. No detrimental effect of TTV coinfection in viral hepatitis patients was noted. Phylogenetic analysis indicated that the most common genotype of TTV among Saudis is 2c. CONCLUSION The rate of TTV infection among Saudi Arabians seems to be lower than that stated in previous reports on Saudi Arabia and in some other countries. The virus does not seem to worsen the status of those who are suffering from viral hepatitis infection.
The TTV genome contains a coding region of 2.6 kb and a non-coding region of about 1.2 kb. 15,16 The virus was reported to persist in patients with multiple transfusions and in renal transplant patients. 17,18 These and other studies in hemodialysis patients and hemophiliacs indicate that TTV is probably a transmissible blood-borne virus. However, some investigators reported other possible routes of transmission, including vertical, 19 sexual, 20 and oral-fecal. 21 Worldwide reports have described a TTV prevalence from 1% to 100%, depending on the region of the genomic DNA detected and on the geographical location. 11 In Saudi Arabia, a report described a TTV prevalence of 19% among 48 blood donors residing in Riyadh. 7 In a letter to the editor, Simmonds et al 22 described a prevalence of 100% among the same blood donors using primers derived from the 5'UTR region. In this study, in which we used a reasonably high number of samples compared to previous observations, 7,11,22 we report our findings on the rate of TTV infection among Saudi Arabian nationals, using the original primers derived from the ORF1 region of the virus genome for genotyping and for comparison with primers derived from the 5'UTR region. The study was performed on serum samples from adult Saudi Arabian nationals who were healthy blood donors and on hepatitis patients coming from various parts of the country for treatment in our tertiary care medical center. Through the determination of alanine aminotransferase (ALT) levels, we also investigated the effect of TTV positivity on the status of patients with hepatitis of various viral etiologies.
Subjects and Methods
Specimens. Two hundred serum samples were taken from healthy, male, volunteer blood donors who were negative for HIV, HBV, and HCV markers (Table 1). Forty-five serum samples were taken from HBV-positive patients (28 males, 17 females), as determined by HBsAg positivity and confirmed by polymerase chain reaction (PCR), 23 and were negative for all other viral hepatitis markers. One hundred serum samples from HCV-positive patients (65 males, 35 females), as determined by both antibodies and PCR, 24 were negative for all other viral hepatitis markers. Nineteen serum samples from HGV-positive patients (13 males, 6 females), as determined by PCR, 25 were negative for all other viral hepatitis markers. Fifty-six serum samples from non-A-G (cryptogenic) hepatitis patients (35 males, 21 females), as determined by serology and PCR, had elevated ALT. All samples were from adults and were stored at -80ºC until processed. All TTV-DNA-ORF1-positive samples that were sequenced were each termed KSA, followed by the sample number and the designation "-BD" if from a blood donor, "-C" if from an HCV-positive patient, "-Cr" if from a non-A-G hepatitis patient, and "-G" if from an HGV-positive patient. All samples were obtained with consent from Saudi Arabian nationals coming from different geographical areas of Saudi Arabia. Our institutional review board approved the study.
TTV DNA detection by PCR. Viral DNA from 100 µL of each serum sample was extracted with the QIAamp system (Qiagen, Germany) according to the manufacturer's instructions. Using the original primers derived from the ORF1 region (NG059, NG061 and NG063, as described by Nishizawa et al 1 ), a semi-nested PCR was performed on all samples. Using primers derived from the 5'UTR (NG054/NG147 for the first round and NG133/NG132 for the second round, as described by Okamoto et al 26 ), nested PCR was performed on all samples. For both procedures, the first round of amplification was carried out in a total volume of 50 µL using 5 µL of extracted DNA and 45 µL of master mix (10 mM Tris-HCl, pH 8.3, 50 mM KCl, 1.5 mM MgCl 2 , 0.01% gelatin, 0.2 mM of each dNTP, 20 pmol of each of the sense [NG059 for ORF1 and NG054 for 5'UTR] and antisense [NG061 for ORF1 and NG147 for 5'UTR] primers, and 1 unit of AmpliTaq DNA polymerase) for 35 cycles. Each cycle consisted of denaturation at 94ºC for 1 min, annealing at 60ºC for 1 min, and extension at 72ºC for 1 min. The second round was performed with 20 pmol of each of the sense primer (NG061 for ORF1 and NG133 for 5'UTR) and antisense primer (NG063 for ORF1 and NG132 for 5'UTR), using 2.5 µL of the round 1 PCR product as a template, for 25 cycles with the same cycling parameters as the first round, except that the final extension step was increased to 7 minutes to allow complete formation of duplex molecules. Plasmid-cloned TTV DNA sequences were prepared in our laboratory and used as a positive control. Negative controls consisted of a negative serum and the PCR reaction mixture. Preparation of the PCR mixture, viral DNA extraction, thermal cycling, and post-PCR analysis were done in separate areas. Amplified products (10 µL) were electrophoresed on 2% agarose, stained with 1 µg/mL ethidium bromide, and visualized under UV illumination.
Sequencing and phylogenetic analysis. PCR amplicons obtained from the amplification of the ORF1 region were subcloned into the pCR 2.1 TA vector (Invitrogen, San Diego, USA). Two clones from each of 17 of the 20 positive samples were sequenced in both directions using the dideoxynucleotide chain-termination method (Taq Dye Deoxy Terminator Cycle Sequencing Kit, Applied Biosystems, Weiterstadt, Germany). Primers NG061 and NG063 were used for sequencing in an ABI 373A Autosequencer (Applied Biosystems). Nucleotide sequences of the positive amplicons and the published TTV prototype sequences were compared by multiple sequence alignment using the Lasergene Navigator computer program (DNAStar Sequence Analysis Software Package, Madison, WI). Nucleotide similarity and phylogenetic analysis were determined using the CLUSTAL W algorithm.
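As an illustration of how this type of analysis can be reproduced with current open-source tools (not the software actually used in the study), a neighbor-joining tree can be built from a multiple sequence alignment with Biopython; the input file name below is a placeholder for an alignment of the 222-bp ORF1 amplicons and reference strains.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Placeholder input: an alignment of the ORF1 amplicon sequences plus reference strains,
# e.g. produced by a Clustal alignment and saved in FASTA format (file name is hypothetical).
alignment = AlignIO.read("ttv_orf1_alignment.fasta", "fasta")

# Pairwise distances from simple sequence identity, then a neighbor-joining tree.
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)
constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)

# ASCII rendering of the resulting cladogram; clades should cluster by genotype/subtype.
Phylo.draw_ascii(nj_tree)
```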
Results were expressed as mean ± standard deviation. Differences in proportions were tested by the chi-square test. Mean quantitative values were compared by Student's t test. For all tests, P<0.05 was considered statistically significant.
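For readers who wish to set up the same kinds of comparisons, the sketch below shows the mechanics with SciPy, using as an example the ORF1-based detection counts reported in the Results (11/200 blood donors vs 9/220 hepatitis patients); this is purely a methodological illustration and not an analysis reported by the authors. Comparisons of mean ALT levels can be set up analogously with scipy.stats.ttest_ind_from_stats from the published means, standard deviations, and group sizes.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Example 2x2 table: ORF1-primer TTV DNA detection in blood donors (11/200) versus all
# hepatitis patients combined (9/220); counts taken from the Results section, used here
# only to illustrate how a chi-square test of proportions is run.
table = np.array([[11, 200 - 11],
                  [9, 220 - 9]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, P = {p:.3f}")
```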
Results
Prevalence of TTV DNA. When ORF1 region primers were used, TTV DNA was detected in 11 of 200 (5.5%) healthy volunteer blood donors, 1 of 45 (2.2%) HBV-positive patients, 2 of 100 (2.0%) HCV-positive patients, 3 of 19 (15.8%) HGV-positive patients, and 3 of 56 (5.4%) non-A-G hepatitis patients, with an overall rate of 4.8% (Table 1). When 5'UTR primers were used, TTV DNA was detected in 101 of 200 (50.5%) healthy volunteer blood donors, 40 of 45 (88.8%) HBV-positive patients, 70 of 100 (70.0%) HCV-positive patients, 19 of 19 (100%) HGV-positive patients, and 7 of 56 (12.5%) non-A-G hepatitis patients, with an overall rate of 56.4%. Effect of TTV infection on the liver. The effect of TTV infection (as determined by 5'UTR positivity) on the liver was investigated by measuring alanine aminotransferase (ALT) levels for each patient in all groups, positive and negative for TTV DNA (Table 2). Blood donors infected with TTV (n=101) had relatively lower levels of ALT (33.4±10.7) than those who were not infected (n=99, 37.4±10.3). In the HBV, HCV, and non-A-G patient groups, subjects positive for TTV infection had slightly lower levels of ALT than those with single HBV, HCV, or non-A-G infection. In the HGV group, all patients were positive for TTV DNA. Taken together, but excluding HGV patients, hepatitis patients (all with ALT levels >45 U/L, n=201) who were positive for TTV DNA (n=117) had relatively lower levels of ALT (58.7±10.1) than those who were negative for TTV DNA (n=84, ALT 66.8±12.2). However, none of these differences was statistically significant. Nucleotide sequence analysis of ORF1 of the TTV genome. Based on availability, semi-nested PCR products, using N22 primers from ORF1, were sequenced from all but three TTV DNA-positive samples (all 11 positive blood donor samples and 2 samples each from the HCV, HGV, and non-A-G groups). Gene sequences were deposited in GenBank (accession numbers AY256663-AY256679). Nucleotide sequence alignments were performed on 222-bp segments of the amplicons (excluding sequencing primers) and the corresponding region of TTV sequences taken from GenBank, using the CLUSTAL W algorithm. 27 The pairwise distance matrices showed that the TTV isolates from blood donors were closely related to each other, with an overall nucleotide sequence homology of 83.2% to 100%. Sequence diversity was highest between the two HGV-positive patients, with a sequence divergence of 55.9%. Patients positive for HCV showed a sequence divergence of 19.7%, whereas the positive non-A-G cryptogenic hepatitis patients showed a sequence divergence of only 1.4% (unpublished, but available on request).
Phylogenetic analysis of TTV isolates. Sequences of the local TTV isolates and 16 reference sequences taken from the GenBank database were subjected to phylogenetic analysis by the neighbor-joining method. 28 The TTV isolates formed five distinct clades (Figure 1), which corresponded to 5 major genotypes separated by an evolutionary distance of >0.3. The genotypes were further divided into subtypes separated by an evolutionary distance of >0.15. It was found that all isolates from blood donors belonged to genotype 2c. Of the two TTV isolates from HCV-positive patients, one was found to be of genotype 1a and the other belonged to 1b. Of the two TTV isolates from HGV-positive patients, one was genotype 1a and the other was genotype 2c. Both TTV isolates from non-A-G patients were genotype 1a. No other genotype/subtype was found. The results are summarized in Table 3.
Discussion
TTV is a novel single-stranded DNA virus that is transmitted both parenterally and non-parenterally. There is no clear association with liver disease or any other disease. 14,15,29,30 TTV infection among adult Saudi Arabians was investigated in this study. Using primers derived from the ORF1 and 5'UTR regions of the virus genome, respectively, prevalence rates of 5.5% and 50.5% were determined among healthy blood donors, and 4.1% and 61.8% among patients with elevated ALT (hepatitis patients), for an overall prevalence of 4.8% and 56.4% (20 and 237 of 420 samples). We decided to determine the prevalence of TTV infection by using the original primers derived from the ORF1 region 1 as well as primers derived from the 5'UTR, since it was found that primers from the latter region detect more cases of TTV infection. 26 We observed that this is true, since the 5'UTR is a conserved region while the N22 region of ORF1 is variable. Prescott et al 7 reported the prevalence of TTV in many countries, including Saudi Arabia, where a prevalence of 19% (9 of 48 samples) was found. The primers they used were from the same ORF1 region that we used in our study. Simmonds et al 22 reported a prevalence of 100% among Saudi Arabians, using primers derived from the 5'UTR region. However, a relatively small number of samples (n=48) was used in both reports, and the samples were obtained only from blood donors living in Riyadh, which may have included non-Saudi Arabians. Early studies in Japan used primers based on the ORF1 region of the N22 clone. 1,2 Later, it was found that primers deduced from the 5'UTR region showed a much higher prevalence in almost all geographical regions studied. 11,12,31 For example, in neighboring Egypt the reported blood donor prevalence was 29% and 85% when primers from the ORF1 and 5'UTR regions, respectively, were used. 32,33 A recent study from Taiwan 34 as well as a study of renal transplant patients 18 reported the same observation. A recent study of 449 serum samples from various populations in the neighboring United Arab Emirates showed that nationals have a lower rate of infection than non-nationals, using 5'UTR-based primers. 35 Like our study, they found a high TTV positivity in patients with hepatitis B and C.
The discovery of TTV led to thoughts that this virus might be a cause of unexplained post-transfusion hepatitis. Accumulating reports, however, showed that the virus is ubiquitous, is found in a wide range of tissues, and is not clearly associated with any overt disease. 36,37 By reporting ALT levels in this study, we have observed that TTV does not affect the outcome or the pathogenesis of other, well-characterized hepatotropic viruses. In fact, the average ALT levels of HBV, HCV, and non-A-G hepatitis patients with a TTV co-infection were slightly lower than the average ALT level of those with no TTV co-infection. We are therefore in agreement with previous observations that TTV infection does not seem to be related to abnormal ALT levels, and that ALT abnormality was mainly attributable to HCV, HBV or HGV and not to TTV infection. 38,39 The presence or absence of serum TTV DNA does not seem to affect the clinical course of patients with chronic HBV or HCV infection, and TTV viremia with any genotype was not associated with HBV, HCV or HGV infection or abnormal ALT levels. 40,41

Figure 1. Cladogram constructed with the nucleotide sequences derived from the ORF1 region of TTV isolates and reference strains by the neighbor-joining method. 28 For genotype 1a, the reference strains are TA278 (AB 0008394) and N22 (AB 017767). For genotype 1b, the reference strains are TX011 (AB 017769), JaBD28 (AB 018930), and JaHCC59 (AB 0188916). For genotypes 2a, 2b, 2c, and 2d, the reference strains are TS003 (AB 017770), NA004 (AB 017771), US35 (AF 124020), and TKB212 (AB 017772), respectively. For genotype 3, the reference strains are Bhu96 (AF 084107) and Gab441 (AF 084108). For genotype 4, the reference strains are JaBD43 (AB 018938) and JaBD98 (AB 018960). For genotype 5, the reference strains are JaM21 (AB 017887) and JaNBNC10 (AB 019861).

For sequencing and phylogenetic analysis, we used the semi-nested PCR products of the ORF1 region, since these are employed for the characterization of the major genotypes. 31 Most of our TTV-positive samples (17/20) were cloned, sequenced and subjected to phylogenetic analysis in our laboratories (all 11 TTV-positive blood donor samples, the 2 TTV-positive samples from HCV-infected patients, 2 of the 3 TTV-positive samples from the HGV-infected patients, and 2 of the 3 TTV-positive samples from the non-A-G cryptogenic hepatitis patients). All isolates were found to be of TTV genotypes 1 and 2. The TTV-positive blood donor samples (n=11) were all genotype 2, subtype c (2c). However, the TTV-positive samples from hepatitis patients (n=6) were mostly genotype 1, subtype a (1a), with two that were 1b and 2c. Prescott et al 7 reported the genotypes of 8 of 9 TTV-positive samples from Saudi Arabia. They found that 4 of the 8 samples tested belonged to genotype 1a, 3 samples were genotype 1 but could not be subtyped, and 1 sample could not be typed. Although 28 genotypes have been described thus far, 36 our results are in agreement with the literature in that most of the genotypes reported from all over the world are G1 and G2, with occasional G3 and rarely other types, and that there are no significant differences in the distribution of these types based on gender or clinical manifestations. 11 This study showed that the prevalence of TTV among Saudi Arabians using primers from the ORF1 region is rather low compared to studies from other countries. It also demonstrated that the major TTV genotype is 2c, especially among healthy blood donors, and illustrated that TTV co-infection did not aggravate other hepatitis infections. Further investigations are needed to augment these results with additional insights into the impact of TTV on the health of our population and to contribute to the global understanding of the unclear role of this virus in pathogenesis.
This study was partially supported by a grant from King Abdulaziz City for Science and Technology [LGP-5-45]. The authors are grateful to the Research Center Administration for providing the facilities, equipment and approvals. We thank Maria Cristina Rasing and Hanan Shaarawi for secretarial and logistic assistance.
Table 1. Rate of TTV infection among study groups as determined by PCR methods using primers derived from the ORF1 and 5'UTR regions.
Table 2. Effect of TTV positivity (using 5'UTR primers) on the outcome of hepatitis infection as measured by ALT levels.
Oral Ondansetron versus Domperidone for Acute Gastroenteritis in Pediatric Emergency Departments: Multicenter Double Blind Randomized Controlled Trial
The use of antiemetics for vomiting in acute gastroenteritis in children is still a matter of debate. We conducted a double-blind randomized trial to evaluate whether a single oral dose of ondansetron vs domperidone or placebo improves outcomes in children with gastroenteritis. After failure of initial oral rehydration administration, children aged 1–6 years admitted for gastroenteritis to the pediatric emergency departments of 15 hospitals in Italy were randomized to receive one oral dose of ondansetron (0.15 mg/kg) or domperidone (0.5 mg/kg) or placebo. The primary outcome was the percentage of children receiving nasogastric or intravenous rehydration. A p value of 0.014 was used to indicate statistical significance (and 98.6% CI were calculated) as a result of having carried out two interim analyses. 1,313 children were eligible for the first attempt with oral rehydration solution, which was successful for 832 (63.4%); 356 underwent randomization (the parents of 125 children did not give consent): 118 to placebo, 119 to domperidone, and 119 to ondansetron. Fourteen (11.8%) needed intravenous rehydration in the ondansetron group vs 30 (25.2%) and 34 (28.8%) in the domperidone and placebo groups, respectively. Ondansetron reduced the risk of intravenous rehydration by over 50%, both vs placebo (RR 0.41, 98.6% CI 0.20–0.83) and domperidone (RR 0.47, 98.6% CI 0.23–0.97). No differences for adverse events were seen among groups. In a context of emergency care, 6 out of 10 children aged 1–6 years with vomiting due to gastroenteritis and without severe dehydration can be managed effectively with administration of oral rehydration solution alone. In children who fail oral rehydration, a single oral dose of ondansetron reduces the need for intravenous rehydration and the percentage of children who continue to vomit, thereby facilitating the success of oral rehydration. Domperidone was not effective for the symptomatic treatment of vomiting during acute gastroenteritis.
Introduction
Acute gastroenteritis (AGE) is the main cause of acute vomiting in children under the age of 3 years and one of the most important reasons for admission to the pediatric emergency department (ED) and the hospital [1,2]. In the USA, 1.5 million children under 5 years are diagnosed with AGE annually and this condition accounts for 13% of all hospital admissions [1]. The most frequent complication is dehydration. In Europe, at least 230 deaths and over 87,000 hospitalizations of children under 5 years of age are reported every year [3]. In the initial phase of AGE, vomiting is reported in 75% of children with rotavirus infection [4], and is distressing for both patients and their families. Vomiting is a direct cause of fluid loss and can also hamper successful treatment with oral rehydration solution (ORS). Symptomatic pharmacological treatment for vomiting is still a matter of debate and is not systematically included in current practice recommendations for pediatric AGE [5-7]. Physicians and parents in ED favor intravenous fluid therapy (IVT) for mild or moderate dehydration when vomiting is the major symptom [8,9]. Thus, effective antiemetic treatment would lead to an important reduction in the use of IVT.
Various antiemetic agents are available and are often used off-label to prevent or reduce vomiting in children with AGE [10,11]. In France, Spain, Italy and in other European countries, the dopamine receptor antagonist domperidone is the preferred antiemetic treatment [12]. Ondansetron is administered to only a small proportion of children and its use varies significantly among institutions [13,14].
Literature evaluating the efficacy of symptomatic drugs in reducing acute vomiting for pediatric AGE focuses mainly on ondansetron [15-19]. Evidence exists that ondansetron, compared with placebo, increases the proportion of patients with cessation of vomiting and reduces the immediate hospital admission rate and the need for IVT. However, not all of these studies evaluate first-line oral rehydration therapy (ORT) during hospital stay before the administration of the antiemetic [18], and an adequate comparative evaluation between domperidone and ondansetron is missing [4,20]. Concerning the use of domperidone, only a few studies are available, with small sample sizes, low methodological quality, and inconsistent results [4,15,17-23].
The aim of the current trial was to assess whether the oral administration of ondansetron vs domperidone or placebo, after a first attempt with ORS, prevents IVT or nasogastric rehydration in children with vomiting during AGE.
Study design
This prospective, multicenter, double-blind randomized controlled trial involved children admitted to 15 pediatric EDs in Italy.
The study was coordinated by the Institute for Maternal and Child Health-IRCCS Burlo Garofolo (Trieste) and by the Maternal and Child Health Laboratory-IRCCS-Istituto di Ricerche Farmacologiche Mario Negri (Milan). The protocol (S1 Protocol) has been previously published [24]. The Italian Medicines Agency (AIFA) funded the trial, including the reimbursement of the costs of production of the drugs by Monteresearch S.r.l (www.monteresearch.it), a pharmaceutical development service licensed by AIFA to produce and manage medical products for clinical trials according to Good Manufacturing Practice (GMP), with no role in trial design and conduct. The trial received no commercial funding. A written information document was handed to parents or legal surrogates prior to enrollment; it contained detailed information on the study and on the burden of the intervention, including the length of ED stay in case of participation and the possible adverse events. Written informed consent was obtained from each child's parent or legal surrogate.
A multidisciplinary steering committee (1 epidemiologist, 1 clinician, 1 pharmaco-epidemiologist) was established to monitor the data, ensure patient safety and act as a reference for the participating units. The members of the committee were not directly involved in the actual field work.
The exclusion criteria were the use of antiemetics or antidiarrheal drugs in the 6 hours prior to access to the ED, underlying chronic diseases (i.e., malignancy, gastroesophageal reflux, migraine, renal failure, hypoalbuminemia, liver disease), severe dehydration defined by a standard clinical score of ≥18 for children 12-24 months or ≥16 for children ≥24 months of age (S1 Table) [25], known hypersensitivity to ondansetron or domperidone, previous enrollment in the study, concomitant use of drugs that prolong the QT interval, and language barriers or inability to perform the telephone follow-up. The last two exclusion criteria were added after discussion with the Bioethics Committee of the Coordinating Center (Trieste).
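For illustration only, the dehydration-score exclusion rule could be expressed as a small function; the thresholds and age cut-offs are those quoted above, while the function name and interface are our own invention, and the behavior at exactly 24 months is an assumption (the published ranges overlap at that boundary).

```python
def excluded_for_severe_dehydration(score: int, age_months: int) -> bool:
    """Apply the severe-dehydration exclusion rule (S1 Table clinical score)."""
    if 12 <= age_months < 24:
        return score >= 18
    if age_months >= 24:
        return score >= 16
    raise ValueError("the trial enrolled children aged 1-6 years")

# Example: a 30-month-old with a clinical score of 17 would be excluded.
print(excluded_for_severe_dehydration(17, 30))  # True
```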
Randomization and masking
Patients were randomly assigned in fixed blocks of nine to receive ondansetron, domperidone or placebo in a 1:1:1 ratio. The randomization list was generated using the STATA software and was stratified according to participating centers. The randomization procedure was centralized. The randomization sequence was transmitted to the pharmaceutical development service (Monteresearch S.r.l.), which prepared the active drugs and placebo and sent them directly to the participating hospitals in closed, opaque and consecutively numbered bags. Drug preparations were indistinguishable by taste, odor and appearance. A syrup was preferred to other possible formulations (i.e. tablets) because it allows the preparation of solutions at different concentrations, which makes it possible to administer the same volume, based only on the child's weight (ml/kg), regardless of the allocation group. Each bag contained a graduated drug dispenser. For each randomization, the amount of syrup provided allowed for a second administration, within 15 min of the first dose, in children who vomited. After confirmation of first-line ORT failure, the next available bag containing the drug preparation was opened and a weight-appropriate dose was administered to the patient. Study investigators and participants were unaware of the randomization list and blind to the pharmaceutical preparations assigned.
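The sketch below generates a center-stratified allocation list in fixed blocks of nine with a 1:1:1 ratio, which is the scheme described above. The original list was produced with STATA; this Python re-implementation, including the seed, number of blocks and center labels, is purely hypothetical.

```python
import random

random.seed(2011)  # arbitrary seed, for reproducibility of the illustration only

ARMS = ["ondansetron", "domperidone", "placebo"]

def stratified_blocked_list(centers, n_blocks_per_center, block_size=9):
    """Permuted-block randomization (1:1:1) stratified by participating center."""
    assert block_size % len(ARMS) == 0
    allocation = {}
    for center in centers:
        sequence = []
        for _ in range(n_blocks_per_center):
            block = ARMS * (block_size // len(ARMS))
            random.shuffle(block)
            sequence.extend(block)
        allocation[center] = sequence
    return allocation

lists = stratified_blocked_list([f"center_{i:02d}" for i in range(1, 16)],
                                n_blocks_per_center=4)
print(lists["center_01"][:9])  # the first permuted block for one center
```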
Procedures
After checking for inclusion and exclusion criteria, a first ORT attempt was carried out following the standard protocol (S2 Text). In case of failure of the initial ORS administration, defined as vomiting after ORS or fluid refusal after three attempts, patients were randomized to receive a single oral administration of: 1. ondansetron syrup (0.15 mg/kg); 2. domperidone syrup (0.5 mg/kg); or 3. placebo syrup.
The dosages of ondansetron and domperidone were those indicated by the Summary of Product Characteristics. Children who vomited again within 15 minutes of receiving the drug were given a second dose. A new ORT attempt was carried out 45 to 60 minutes after the first treatment. Patients were reassessed at 30-minute intervals for a minimum of 6 hours and data were collected at each assessment. Forty-eight hours after discharge, a blinded research assistant phoned the child's family to assess, using a standard form, the evolution of AGE, the need for hospitalization or readmission to the ED and the final outcome.
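As a minimal illustration of the weight-based dosing used in the trial (0.15 mg/kg for ondansetron, 0.5 mg/kg for domperidone, as stated in the abstract), the snippet below computes the milligram dose for a given weight. The syrup concentrations that would convert this dose to a volume are not reported here, so that step is omitted.

```python
DOSE_MG_PER_KG = {"ondansetron": 0.15, "domperidone": 0.5}

def dose_mg(drug: str, weight_kg: float) -> float:
    """Single oral dose in mg for the trial's weight-based regimen."""
    return DOSE_MG_PER_KG[drug] * weight_kg

for drug in DOSE_MG_PER_KG:
    print(f"{drug}: {dose_mg(drug, 14.0):.2f} mg for a 14 kg child")
```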
Outcomes
According to the published protocol [24], the primary outcome was the percentage of patients who were administered nasogastric or intravenous rehydration after symptomatic oral treatment failure, defined as vomiting or fluid refusal after the second ORT attempt. Secondary outcomes were: a) the percentage of subjects remaining in ED for observation stay for more than 6 hours; b) the percentage of subjects requiring hospital admission during the ED stay; c) subjects with episodes of vomiting and number of episodes in the 3 treatment groups during the ED stay and during the 48-hour follow-up period; d) the percentage of subjects presenting adverse events during ED stay and during the 48-hour follow-up period. Two further outcomes were evaluated: the rate of success at the second ORT attempt and the percentage of subjects requiring laboratory tests during ED stay. Moreover, the subjects with episodes of diarrhea and the number of episodes in the 3 treatment groups were evaluated both during ED stay and during the 48-hour follow-up period.
The information on the number of vomiting and diarrhea episodes after ED discharge was referred to the last 24 hours of follow-up with the aim of verifying real symptom resolution.
Safety profile
In case of serious or medically relevant clinical adverse events or abnormal laboratory test values registered during the course of the study or in the post-treatment period, the investigators were obliged to inform the Coordinating Units immediately. The Coordinating Units were responsible for sending the reports on suspected unexpected serious adverse reactions to all participating investigators, to AIFA and to the Ethics Committees, in accordance with international and Italian laws and regulations as well as with International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH)/ Good Clinical Practice (GCP) guidelines.
Statistical analysis
To estimate the sample size we initially referred to the Roslund RCT, which implemented a similar protocol, enrolling subjects with AGE who failed initial ORS administration in the ED [26]. We estimated, using the Fleiss method with continuity correction, that the enrollment of 540 children (i.e. 180 patients in each arm) would provide the study with a statistical power of 80% to detect a change from 50% in the placebo group to 35% in the domperidone group and 20% in the ondansetron group in the proportion of children receiving nasogastric or intravenous rehydration, given a two-sided type I error of 0.05. Given the lack of efficacy estimates, the effect of domperidone was estimated as intermediate between ondansetron and placebo.
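The sketch below re-derives these figures under the assumption that the standard Fleiss formula for two independent proportions with continuity correction was used; small differences from the 180-per-arm figure in the protocol may come from rounding conventions or the exact software routine, which are not stated.

```python
from scipy.stats import norm

def fleiss_n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Sample size per arm for two proportions (Fleiss formula, continuity-corrected)."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    pbar = (p1 + p2) / 2
    n = (za * (2 * pbar * (1 - pbar)) ** 0.5
         + zb * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2 / (p1 - p2) ** 2
    # Continuity correction
    return n / 4 * (1 + (1 + 4 / (n * abs(p1 - p2))) ** 0.5) ** 2

print(round(fleiss_n_per_arm(0.50, 0.35)))  # placebo vs domperidone: ~182 per arm
print(round(fleiss_n_per_arm(0.50, 0.20)))  # placebo vs ondansetron: ~45 per arm
```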
In the original protocol no interim analyses were planned. However, given the difficulty in enrolling patients due to the unexpected success of the first ORT, in accordance with the study sponsor (AIFA), in June 2013 we amended the protocol, adding 2 blind interim analyses and 1 possible final analysis following the O'Brien-Fleming criteria [27]. The first interim analysis was planned for July 2013, two years after the enrollment of the first subject in the study (critical value p = 0.0005). A second interim analysis was planned for November 4th, 2013 (the expected date for the end of the enrollment) if: 1) the first analysis had not achieved the necessary significance, or 2) the initially estimated sample size had not been reached. The second interim analysis (critical value p = 0.014) allowed us to close the study with a final sample size of 356 children (Table A and Table B in S2 Table). The decision was taken after consultation with the study steering committee. The two interim analyses were carried out at the Epidemiology and Biostatistics Unit of the coordinating center by a statistician not involved in the study and blinded to the allocation group.
Numbers, percentages and, when appropriate, relative risks and confidence intervals (CI) are presented for categorical data and medians and interquartile ranges (IQR) for continuous data. For categorical outcomes, differences between ondansetron vs placebo and ondansetron vs domperidone were evaluated using the chi-square test and for continuous outcomes using the non-parametric Mann-Whitney U test, since data were not normally distributed. All p values and estimates of treatment effects were based on separate comparisons, so no adjustments were made for multiple comparisons. Analyses were performed with SPSS software (version 21.0) according to the intention-to-treat principle. All p values are two-sided. According to the O'Brien-Fleming criteria, a p value of less than 0.014 was used to indicate statistical significance and 98.6% CI were calculated.
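To make the 98.6% confidence intervals concrete, the sketch below recomputes the primary-outcome relative risks from the counts reported in the Results (14/119 for ondansetron, 34/118 for placebo and 30/119 for domperidone), using the standard log-RR normal approximation; whether this is exactly the estimator produced by the authors' SPSS analysis is an assumption on our part.

```python
from math import exp, log, sqrt
from scipy.stats import norm

def relative_risk_ci(a, n1, b, n2, conf=0.986):
    """Relative risk of group 1 vs group 2 with a Wald CI on the log scale."""
    rr = (a / n1) / (b / n2)
    se = sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    z = norm.ppf(1 - (1 - conf) / 2)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

# Ondansetron vs placebo, need for IV rehydration: 14/119 vs 34/118
print(relative_risk_ci(14, 119, 34, 118))   # ~ (0.41, 0.20, 0.83)
# Ondansetron vs domperidone: 14/119 vs 30/119
print(relative_risk_ci(14, 119, 30, 119))   # ~ (0.47, 0.23, 0.97)
```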
Results
Participants were recruited between July 7, 2011 (first randomization) and November 3, 2013 (last follow-up). A total of 1438 children with AGE who had accessed the 15 EDs were assessed; 1313 were eligible for the first ORT attempt, which was successful for 832 (63.4%). Of the remaining 481 children, 125 (25.9%) were excluded because parents or guardians did not give their consent, while 356 were randomly assigned to the study groups: 119 to domperidone, 119 to ondansetron and 118 to placebo (Fig 1). The baseline characteristics of the groups were similar (Table 1).
All the randomized subjects received the first dose of the study medication. For the majority of patients (315/356, 88.5%) vomiting did not interfere with the first administration, whereas 22 patients receiving domperidone (18.5%), 8 ondansetron (6.7%) and 11 placebo syrup (9.3%) needed a second dose within 15 minutes after the first one. Five children (four in the domperidone and one in the ondansetron group) vomited immediately after the second dose. Three of these received one dose of open-label ondansetron, and two were treated with intravenous rehydration. According to the intention to treat approach, these children were considered to belong to their respective randomization group (Fig 1). The average amount of syrup was 3.8 ml (SD 1.2) for ondansetron, 3.8 ml (SD 1.1) for domperidone, and 3.9 ml (SD 1.0) for placebo (p = 0.66).
A stratified analysis showed no effect of the dehydration status (no vs mild to moderate dehydration) on the primary study outcome.
Ondansetron halved the risk of the observation stay exceeding 6 hours versus both domperidone and placebo, while the number of patients admitted to hospital from the ED was similar in all three groups (Table 2). Ondansetron significantly reduced the number of subjects with episodes of vomiting during the ED stay and led to greater success of the second attempt with ORS vs domperidone and placebo; furthermore, ondansetron significantly reduced the need for laboratory tests compared with placebo (Table 2). No statistically significant difference was seen for the occurrence of diarrhea, while the median number of diarrhea episodes was higher in the ondansetron group (2.0 episodes vs 1.0 in the domperidone and 1.5 in the placebo group).
After discharge, no statistically significant differences were seen among the three groups for subjects readmitted to the ED for the same illness, subjects with episodes of vomiting and diarrhea in the 48-hour follow-up, and the number of episodes of vomiting and diarrhea in the last 24 hours of follow-up (Table 3).
No serious adverse event was observed. A total of 13 patients had one mild adverse effect: 6 after ondansetron, 5 after domperidone, and 2 after placebo. Episodes of drowsiness, asthenia, irritability, diarrhea or abdominal pain were common to all the three investigated groups (S3 Table). In one case (ondansetron group) the blinding was opened following an adverse event.
Discussion
Our study indicates a 63.4% success rate of the first attempt with ORS in over 1300 children with AGE without severe dehydration. This means that, in an ED setting, 6 out of 10 children aged 1-6 years with vomiting due to AGE and no or mild to moderate dehydration can be successfully treated with oral rehydration solution alone, without the need for drugs. This finding is consistent with the estimates of the Cochrane review [17,18]. In children who continue to vomit after the first ORT attempt, a single oral dose of ondansetron improves the chances of success of ORT. Ondansetron reduces by over 50% the number of patients requiring IVT vs both domperidone and placebo (Table 2), in agreement with the results of other RCTs [15-19,25]. Our results provide clear evidence of benefit of ondansetron also with respect to the other study outcomes. Hospital admission rates were lower in the ondansetron group vs both domperidone and placebo, although the differences among groups did not reach the statistical significance observed in the meta-analysis (RR 0.41; 95% CI 0.29 to 0.59) [18]. In the present study, the need for an observation stay lasting more than six hours is statistically significantly lower in the ondansetron group compared with the domperidone and placebo groups.
In agreement with the results of other RCTs, no difference in the percentage of patients readmitted to the ED within 48 hours of discharge was seen among the three groups. This percentage is lower than the one reported in Freedman's RCT and similar to that reported in another study [25,28].
Only a few mild adverse events occurred after ondansetron administration, all of which resolved quickly without any consequence for the children. In particular, although the ondansetron group presented a higher median number of episodes of diarrhea (one additional episode on average), this increase is smaller than the one described in other RCTs [17,18] and seems to have no clinical relevance, especially when weighed against the drug's significant effect on the reduction of vomiting. Unfortunately, our study does not have the statistical power to detect rare but serious adverse events, such as cardiac arrhythmias, and further studies, i.e. post-marketing surveillance, should be carried out to address this issue. Although outside the context of diarrhea, the FDA black box alert published in September 2011 recommends electrocardiogram monitoring in patients with potential "electrolyte abnormalities" receiving ondansetron, due to the risk of developing prolongation of the QT interval, which can lead to an abnormal and potentially fatal heart rhythm, including Torsade de Pointes [4]. However, there is evidence that routine ECG and electrolyte screening before the administration of a single dose of oral ondansetron in children without known risk factors (i.e. history of arrhythmias, concomitant use of QT-prolonging drugs) are not necessary [29].

Domperidone did not appear to be superior in any of the primary and secondary outcomes (Table 2). The available evidence on the efficacy of domperidone consists of a few studies with small sample sizes, low methodological quality and inconsistent results [4,15,17-23]. The only randomized trial comparing oral domperidone with placebo showed that the drug, used in combination with ORT, does not reduce vomiting in the early stages of AGE [23].
We chose to include domperidone in our RCT since it is commonly prescribed to children with gastroenteritis in several countries, including Italy [10-12, 23, 30], and is licensed in Europe for the "treatment of nausea and vomiting", also in the pediatric population, despite the lack of evidence on efficacy. Recently the authorization for the use of domperidone for these clinical conditions has been subjected to restrictions because of the possible risks of severe arrhythmias, particularly in the case of drug overdose [4,31]. Furthermore, case reports and post marketing surveillances have reported the occurrence of extrapyramidal reactions associated with the use of domperidone [32,33].
The main limitation of our study is the premature closure of the enrollment and the consequent failure to reach the initially estimated sample size. This has already been described for publicly-funded trials in the UK [34]. In our study this was due to the success of the first ORT, which made it difficult to find patients to randomize, and to the non-occurrence of the annual rotavirus diarrhea outbreak during the second year of the study. Furthermore, in the absence of literature evidence, the efficacy of domperidone assumed in the sample size calculation was estimated to be intermediate between ondansetron and placebo, but the study findings did not confirm this hypothesis. However, our RCT is the largest carried out on this topic so far.
The use of placebo in our RCT could be questioned. When the study started, available evidence, albeit weak and unreliable [17], suggested the efficacy of ondansetron for the symptomatic treatment of vomiting during AGE in children, but a proper evaluation of domperidone, largely used in clinical practice, was lacking. Furthermore, at the time, no formal indication for the use of ondansetron in the treatment of AGE was given by clinical practice guidelines. Consequently, and in accordance with the document on ethical considerations for clinical trials in the pediatric population [35], we felt that the use of placebo was legitimate. This matter was also discussed by the Bioethics Committee of the coordinating centre before the approval of the study protocol.
Our trial included a large number of patients with a limited age range (1-6 years), with no or mild to moderate dehydration. This reflects the actual clinical setting in Italy. The selected age range allowed us to enroll most of the children with community-acquired AGE, which is prevalent in younger children, and to ensure an adequate safety profile, since ondansetron is approved for vomiting (after chemotherapy) in children over 6 months. This study contributes to providing evidence for better management of this young population, which is characterized by a high incidence of AGE associated with dehydration and by scanty evidence on effective therapeutic approaches.
Our study presents several strengths. First, the complete independence of the study, which was made possible by the public funding received from AIFA. Second, the initial use of ORS before administering the drugs and the inclusion in the trial only of children who had failed this first attempt. This made it possible to adequately evaluate the role of the drugs under study and to confirm in a field study the role and applicability of ORT in children with AGE without severe dehydration.
The results of our study have relevant implications for practice. Our trial fully confirms the results of the most recent systematic review on the use of ondansetron which suggests that the clinical practice guidelines for the treatment of children with AGE should be revised to include the use of a single dose of oral ondansetron in the case of continued vomiting after a first attempt with ORS [18]. This simple intervention reduces the need for IVT and laboratory tests, the discomfort of vomiting and the time spent in the ED. Currently, some guidelines, in particular those from North America [7], already suggest the use of ondansetron in pediatric emergency departments in infants and children between six months and 12 years of age, while others, including ESPGHAN and NICE [4,6], are more conservative as a result of the FDA warning on the potentially severe adverse effects of ondansetron.
In view of the ineffectiveness demonstrated in the present trial, the inconclusive results of previous studies and the possible side effects reported in literature, domperidone should not be used for the symptomatic treatment of vomiting as a consequence of AGE in children.
The aims of our work did not include a pharmacoeconomic analysis, but a previous study showed that the administration of oral ondansetron to children with dehydration and vomiting secondary to AGE led to significant monetary savings compared to a no-ondansetron policy [36]. Appropriate strategies are needed to successfully incorporate oral ondansetron into clinical practice in order to maximize its potential benefits [37]. Further studies are needed to understand if, in the real context of ED care, the use of ondansetron in children at high risk of dehydration can effectively reduce the number of cases receiving IVT. Indeed, despite the increasing use of ondansetron over the years in the United States and Canada, the percentage of children requiring IVT doesn't seem to have decreased [38].
In conclusion, our trial showed that, in a context of emergency care, 6 out of 10 children aged 1-6 years with vomiting due to AGE and without severe dehydration can be managed effectively with the administration of ORS alone. In children who continue to vomit or refuse ORT, a single oral dose of ondansetron reduces the need for IVT, the percentage of children who vomit and the number of episodes of vomiting, thereby facilitating the success of ORT. A small, not clinically relevant, increase in the number of episodes of diarrhea was observed in the ondansetron group. Domperidone was not effective for the symptomatic treatment of vomiting during AGE.
Hyperfine tensors of nitrogen-vacancy center in diamond from ab initio calculations
We determine and analyze the charge and spin density distributions of the nitrogen-vacancy (N-V) center in diamond for both the ground and excited states by ab initio supercell calculations. We show that the hyperfine tensor of the 15N nuclear spin is negative and strongly anisotropic in the excited state, in contrast to previous models used extensively to explain electron spin resonance measurements. In addition, we detect a significant redistribution of the spin density due to excitation that has serious implications for the quantum register applications of the N-V center.
Nitrogen-vacancy (N-V) centers in diamond have numerous peculiar properties that make them a very attractive solid state system for fundamental investigations of spin based phenomena. Recently, this defect has been proposed for several applications, like quantum information processing [1,2,3,4], ultrasensitive magnetometry [5,6], and measurement of zero-point fluctuations or preparing quantum-correlated spin states over macroscopic distances [7]. In these measurements, a room temperature read-out of single nuclear spins in diamond has been achieved by coherently mapping nuclear spin states onto the electron spin of a single N-V center [3,8], which can be optically polarized and read out with long coherence time [9,10]. In particular, this has been the basis for the realization of a nuclear-spin-based quantum register [11] and multipartite entanglement among single spins at room temperature [12]. The polarization of a single nuclear spin has been achieved by using either a combination of selective microwave excitation and controlled Larmor precession of the nuclear-spin state [11] or a level anticrossing in the excited state [13]. Understanding the spin states and levels is of critical importance for optical control of N-V centers in both the ground and excited states. Especially, the hyperfine interaction couples the electron spin and nuclear spin; thus the determination of hyperfine tensors of the nuclei with non-zero nuclear spin plays a key role both in the creation of entangled states and in the decoherence process [3,14].
Recently, the hyperfine signals in the ground [13,15] and excited [16] states have been detected in N-V centers but with contradicting interpretations. In a conventional electron paramagnetic resonance (EPR) spectrum on an ensemble of N-V centers, the 15N signal was assumed to be positive with slight anisotropy in the ground state, while Fuchs et al. and Jacques et al. assumed an isotropic negative hyperfine constant for 15N in the ground state [13,16], based on previous EPR and optically detected magnetic resonance (ODMR) measurements [17,18]. Recently, Fuchs et al. have reported that the hyperfine splitting of 15N should be ∼20× larger in the excited state than in the ground state [16]. The excited-state 15N hyperfine signal was assumed to be isotropic in their model [16]. While we already addressed the hyperfine tensor of 14N in the ground state [19], the lack of a detailed study of the 15N hyperfine signal and of the proximate 13C isotopes in both the ground and excited states has prohibited the understanding of the intriguing physical properties of this defect. In this Letter, we thoroughly investigate the hyperfine tensors of proximate 15N and 13C isotopes of the N-V center both in the ground and excited states by means of high level ab initio supercell all-electron plane wave calculations. In addition, we analyze the overall charge and spin density distributions before and after the optical excitation. We show that the hyperfine constant of 15N is positive and possibly slightly anisotropic in the ground state, while negative and strongly anisotropic in the excited state. In addition, the hyperfine splittings of the proximate 13C isotopes also change significantly, which has serious implications both for the interpretation of excited-state spectroscopy signals and for quantum-information applications.
The negatively charged N-V center in diamond [20] consists of a substitutional nitrogen atom associated with a vacancy at an adjacent lattice site (Fig. 1a). The ground state has 3A2 symmetry, where one a1 defect level in the gap is fully occupied by electrons, while the doubly degenerate e level above it is occupied by only two electrons with parallel alignment of spins (Fig. 1b). Thus, this defect has an S = 1 high-spin ground state [21]. Promoting one electron from the a1 level to the e level results in the excited 3E state. Both states can be described by conventional density functional methods: the ground state can be readily described by spin density functional theory, while the excited state can be obtained by constrained occupation of states [19]. We optimized the geometry for both the ground and excited states, and calculated the charge and spin densities at the optimized geometries.
We applied the PBE functional [24] to calculate the spin density of the defect. First, the diamond primitive lattice was optimized, then a simple cubic 512-atom supercell was constructed from it. Finally, we placed the negatively charged N-V defect into the supercell and optimized it for the given electronic configuration. We utilized the VASP code for geometry optimization [25]. We applied a plane wave basis set (cut-off: 420 eV) with the PAW method [26]. During the optimization of the lattice constant we applied twice as high a plane wave cut-off and a 12×12×12 Monkhorst-Pack k-point set [27]. For the 512-atom supercell we used the Γ-point, which provided a convergent charge density. We plugged the optimized geometry into the CPPAW supercell plane wave code with the PAW method, which provides the hyperfine tensors [28].

[Fig. 1 caption (partial, panels b-e): b) Schematic single-particle picture of the ms = 1 high-spin states in the ground state (3A2, gs) and excited state (3E, es). c) Fine structure of the 3A2 and 3E states at room temperature due to spin-spin interaction; the zero-field splittings are Dgs = 2.88 GHz [22] and Des = 1.42 GHz [23]. During optical excitation the fluorescence is predominantly active for the ms = 0 ground state (|0⟩) due to a non-radiative intersystem crossing of the |±1⟩ es states with the many-body singlet states (not shown). d) Splittings of the es substates in the presence of a magnetic field (B); a LAC is expected between the |0⟩ and |−1⟩ states. e) Simplified energy-level diagram including the hyperfine structure associated with the 15N nuclear spin states |↑⟩ and |↓⟩ in the LAC regime of the applied B field. At the LAC, precession at frequency Ω between the excited-state sublevels |0,↓⟩ and |−1,↑⟩ can lead to a nuclear-spin flip, which can be transferred to the ground state through non-radiative intersystem crossing.]
We applied the same basis set and projectors in both codes, yielding virtually equivalent spin densities of the defects. Other technical details are given in Ref. 19. The charge density distribution was analyzed by the Bader method [29]. We briefly mention here that we provide the principal values of the hyperfine tensors, A11, A22, and A33, which are found by diagonalization of the hyperfine tensors. If the hyperfine field has C3v symmetry then A11 = A22 = A⊥ and A33 = A∥, where ∥ means that the hyperfine field coincides with the symmetry axis of the defect. The Fermi-contact term (a) is proportional to the spin density at the site of the nucleus, a ∼ γN ns [19].

First, we discuss the geometry of the defect. The obtained distance between the carbon atoms is 1.54 Å in perfect diamond. In the nitrogen-vacancy defect there are three carbon atoms (Ca) and one nitrogen atom (N) closest to the vacant site, each possessing a dangling bond (Fig. 1a). The defect conserves its C3v symmetry during the outward relaxation, and the dangling bonds of the Ca and N atoms point toward the vacant site. The symmetry axis goes through the vacant site and the N atom, which is the [111] direction in our particular working frame (see Fig. 1a). We found in our PBE calculations that the Ca atoms are closer to the vacant site (1.64 Å) than the N atom (1.69 Å) in the ground state. We obtained a Bader charge of 5.97e on the N atom, which is 0.97e larger than the number of valence electrons of the neutral N atom. The N atom is more electronegative than the C atom. Indeed, the three C atoms bound to the N atom (Cb) have a Bader charge of 3.69e, so there is a significant charge transfer from the Cb atoms toward the N atom (∼0.93e in total). That is the main source of the negative polarization of the N atom. We found that the negative charge is distributed over many atoms around the defect. The Ca atoms are even slightly positively polarized (3.97e), which finally induces a dipole moment in the defect.

Next, we briefly discuss the spin density of the ground state. The spin density originates primarily from the unpaired electrons on the e defect level in the gap. Due to symmetry reasons [19,30] the e level is only localized on the Ca atoms but not on the N atom. Therefore, a large spin density is expected on the Ca atoms, while a negligible one is expected on the N atom. Indeed, very small hyperfine splittings were found for 14N [17] and 15N [15,18]. However, the sign of the Fermi-contact term of the hyperfine interaction for 15N was contradictory: Rabeau et al. assumed a negative value [18], while Felton et al. have recently proposed a positive value [15]. The gyromagnetic factor of 15N (γN) is negative, thus a negative (positive) Fermi-contact hyperfine value (a) indicates a positive (negative) spin density on the N atom (ns), because a ∼ γN ns. In our previous LDA calculation [19] we detected a negative spin density on the N atom. Our improved PBE calculation justifies this scenario. Due to symmetry reasons the direct spin polarization of the N atom does not occur in the ground state, but the large spin density on the Ca atoms can polarize the core electrons of the N atom, i.e., it is an indirect and weak spin polarization. In our PBE calculation we can also detect a slight dipole-dipole interaction for 15N, which is outside of our error bar (∼0.3 MHz [19]). Our conclusions agree with the findings of Felton et al. [15]: 15N has positive hyperfine splittings and it is slightly anisotropic (see Table I).
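As a small illustration of how the principal values relate to the isotropic (a) and dipolar (b) parts for an axially symmetric tensor, the sketch below diagonalizes a 3×3 hyperfine matrix and solves A∥ = a + 2b, A⊥ = a − b. The numerical input uses the excited-state 15N values quoted later in the text (A⊥ ≈ −39 MHz, A∥ ≈ −58 MHz) purely as an example; it is not a reproduction of Table I.

```python
import numpy as np

def principal_values(A):
    """Principal hyperfine values A11 <= A22 <= A33 of a symmetric 3x3 tensor (MHz)."""
    return np.sort(np.linalg.eigvalsh(A))

def axial_decomposition(A_perp, A_par):
    """Isotropic (Fermi-contact-like) part a and dipolar part b for an axial tensor."""
    a = (A_par + 2 * A_perp) / 3.0
    b = (A_par - A_perp) / 3.0
    return a, b

# Example: an axially symmetric tensor with its symmetry axis along z (values in MHz).
A_perp, A_par = -39.0, -58.0
A = np.diag([A_perp, A_perp, A_par])
print(principal_values(A))                   # [-58., -39., -39.]
print(axial_decomposition(A_perp, A_par))    # a ~ -45.3 MHz, b ~ -6.3 MHz
```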
The calculated hyperfine tensors for C a atoms agree nicely with the recent experimental values recorded at low temperature [15]. The fair qualitative agreement between the experiment and theory for the ground state allows us to study the less known excited state with our tools.
In the excited state we found a significant rearrangement of atoms compared to the ground state. The Ca atoms are now farther from the vacant site (1.70 Å) than the N atom (1.63 Å), in contrast to the ground state. The N atom attracts fewer electrons from its neighboring C atoms: the Bader charge of the N atom (5.88e) is 0.09e less than in the ground state. Consequently, the Bader charge of the Cb atoms increases by 0.03e. It is important to note that the Bader charge of the Ca atoms increases from 3.97e to 4.04e. Thus, the excitation induces a change in the dipole moment of the defect.

Next, we discuss the change in the spin density distribution upon excitation. Fig. 2 shows the calculated difference of the spin densities in the excited and ground states. As one can see, the spin density is strongly enhanced around the N atom (red lobes) while it drops around the Ca atoms (blue lobes). This can be explained by the hole left on the a1 defect level in the gap after excitation. The a1 defect level is significantly localized on the N atom [19]. Thus, the spin polarization of the a1 defect level will spin polarize the N atom considerably. Consequently, the spin polarization of the Ca atoms will be smaller. According to the calculations, the hyperfine constants of the 13C isotopes drop by around 50% (see Table I). However, the overall magnetization density of the N and Ca atoms remains about 95% the same in the ground and excited states. In other words, the spin density is mostly redistributed between the N atom and the three Ca atoms upon excitation. Indeed, it has very recently been found by excited-state spectroscopy in 15N-enriched diamond samples that the 15N hyperfine signal is ≈20 times larger (∼60 MHz) in the excited state than in the ground state [16]. Our calculations can explain this feature. However, the model Hamiltonian applied to describe the EPR of the 15N hyperfine signal (A(N)) was incomplete in Refs. 13, 16. Fuchs et al. and Jacques et al. studied individual N-V centers by confocal photoluminescence microscopy, where the actual defect was aligned along the [111] axis and the applied magnetic field was parallel to this axis; thus the angular dependence of A(N) was not measured. They assumed an isotropic A(N) for the excited state, while our study shows that it is strongly anisotropic. Fuchs et al. also noticed that A(N) should have the opposite sign in the ground and excited states [16]. Our study shows that A(N) is positive (negative) in the ground (excited) state, in contrast to the previous assumptions [13,16].

Now, we discuss the consequence of our findings in the light of recent experiments on the dynamic polarization of single nuclear spins of the N-V center [13,16]. It has been demonstrated that the effective nuclear-spin temperature corresponds to a µK in this process [13], decoupled from the ambient temperature, which can be the basic physical process in the measurement of zero-point fluctuations [7]. In these measurements the de-polarization of the nuclear spins of 15N [13,16] and 13Ca [13] has been demonstrated. This has been achieved by the level anticrossing (LAC) of the electron spin ms sublevels in the excited state. The LAC effect may appear if the ms sublevels cross at a given external magnetic field (see Fig. 1c,d). We show a refined and corrected model of Ref. 13 that accounts for the LAC. We study the de-polarization of the 15N isotope, but it can be generalized to the 13C isotopes straightforwardly.
The Hamiltonian of the system (neglecting the nuclear Zeeman splitting) can be written as [13]

Ĥ = D_es Ŝ_z² + g_e μ_B B Ŝ_z + Ŝ A_es Î ,   (1)

where Ŝ and Î are the electron and nuclear-spin operators, D_es the excited-state zero-field splitting, g_e the electron g factor, μ_B the Bohr magneton, and A_es the excited-state hyperfine coupling. We assume a positive B field. Because A_es is anisotropic, the A_es Ŝ Î term can be written with the a_es and b_es hyperfine splittings and the spin-shift operators as

A_es Ŝ Î = (a_es + 2b_es) Ŝ_z Î_z + (a_es − b_es)(Ŝ₊Î₋ + Ŝ₋Î₊)/2 .   (2)

The hyperfine field of 15N is parallel to the symmetry axis, and (a_es − b_es) = A⊥ ≈ −39 MHz while (a_es + 2b_es) = A∥ ≈ −58 MHz. According to a recent study [23], D_es = +1.42 GHz (the ms = 0 sublevel lies below the ms = ±1 sublevels), so we can restrict our study to the excited-state ms = 0 and ms = −1 sublevels (see Fig. 1d). In the basis [|−1,↓⟩; |−1,↑⟩; |0,↓⟩; |0,↑⟩] and by choosing the origin of energy at the level |0,↑⟩, the Hamiltonian described by Eqs. 1 and 2 can be written, with c = g_e μ_B B and d = A⊥/√2, as

H = [ D_es − c + A∥/2        0                 0     0 ]
    [ 0                      D_es − c − A∥/2   d     0 ]
    [ 0                      d                 0     0 ]
    [ 0                      0                 0     0 ] .   (3)

The eigenstates of this Hamiltonian are |0,↑⟩, |−1,↓⟩, |+⟩ = α|0,↓⟩ + β|−1,↑⟩ and |−⟩ = β|0,↓⟩ − α|−1,↑⟩. By following the arguments in Ref. 13, the transition from the ground state |0,↑⟩ to the excited state remains nuclear-spin conserving, whereas the transition from |0,↓⟩ results in (α|+⟩ + β|−⟩) in the excited state (see Fig. 1e). This superposition state then starts to precess between the appropriate states at frequency Ω, where h is Planck's constant. The precession frequency depends on B via the electron Zeeman effect (c in our notation); it is minimal at the LAC resonance, where it equals |d|/h = |A⊥|/(√2 h). Jacques et al. assumed an isotropic hyperfine splitting for 15N and therefore applied ≈60 MHz in this formula [13]. Our analysis shows that rather |A⊥| ≈ 39 MHz should be substituted here. Nevertheless, this precession frequency is still of the same order of magnitude as the excited-state decay rate (the lifetime is 12 ns) [16]. Thus, the spin-flip process is very efficient between the |0,↓⟩ and |−1,↑⟩ states, and we can explain the de-polarization of 15N found in the experiments [10,13,16]. It has been found that the probability of the de-polarization effect significantly depends on the misalignment of the magnetic field from the symmetry axis [13]. This may be partially explained by the anisotropy of the 15N hyperfine splitting, beside the mixing of the spin states.
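To make the level-anticrossing picture concrete, the sketch below diagonalizes the 4×4 model Hamiltonian of Eq. 3 numerically as a function of magnetic field, using the excited-state parameters quoted above (D_es = 1.42 GHz, A∥ ≈ −58 MHz, A⊥ ≈ −39 MHz) and the standard electron Zeeman factor of ~2.8 MHz/G. This is only an illustrative re-implementation of the model, not part of the ab initio calculations themselves.

```python
import numpy as np

D_ES = 1.42                        # excited-state zero-field splitting (GHz)
A_PAR, A_PERP = -0.058, -0.039     # 15N excited-state hyperfine parameters (GHz)
GE_MUB = 2.8e-3                    # electron Zeeman factor, ~2.8 MHz/G, in GHz per gauss

def hamiltonian(B_gauss):
    """4x4 model Hamiltonian of Eq. 3 in the basis [|-1,dn>, |-1,up>, |0,dn>, |0,up>]."""
    c = GE_MUB * B_gauss
    d = A_PERP / np.sqrt(2.0)
    return np.array([
        [D_ES - c + A_PAR / 2, 0.0,                  0.0, 0.0],
        [0.0,                  D_ES - c - A_PAR / 2, d,   0.0],
        [0.0,                  d,                    0.0, 0.0],
        [0.0,                  0.0,                  0.0, 0.0],
    ])

def coupled_gap(B_gauss):
    """Energy splitting of the two levels mixed by the hyperfine flip-flop term."""
    ev = np.linalg.eigvalsh(hamiltonian(B_gauss)[1:3, 1:3])
    return ev[1] - ev[0]

fields = np.linspace(400.0, 600.0, 2001)          # magnetic-field scan (gauss)
gaps = np.array([coupled_gap(B) for B in fields])
B_lac = fields[gaps.argmin()]
print(f"LAC near {B_lac:.0f} G, minimum splitting {gaps.min()*1e3:.1f} MHz")
# prints roughly: LAC near 517 G, minimum splitting 55.2 MHz (= sqrt(2)|A_perp|)
```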
We found other intriguing properties of the spin density in the excited state. As is apparent in Fig. 2, the spin density and the corresponding hyperfine tensors also change considerably for the proximate 13C isotopes. We show the hyperfine tensors only for the most significant change, that of the Ca atoms, in Table I. Beside that, a new 13C site above the N atom becomes active that has negligible spin density in the ground state. In addition, the spin density of the sets of 6 (3) C atoms at R = 3.9 Å decreases (increases) due to excitation, where R is the distance from the vacant site. According to our previous study [19], one of these 13C isotopes was manipulated in the qubit and quantum register applications [3,11]. Our study shows that during the optical set and read-out processes the spin density of the addressed proximate 13C isotopes changes, indicating an effective magnetic field oscillating at the inverse lifetime of the excited state. This may also influence the decoherence of the entangled electron-nuclear spin state, which has not yet been considered [14].
AG acknowledges support from Hungarian OTKA No. K-67886. The fruitful discussion with Jeronimo Maze is appreciated.
Multivariate Analysis of the Predictors of Survival for Patients with Hepatocellular Carcinoma Undergoing Transarterial Chemoembolization: Focusing on Superselective Chemoembolization
Objective While the prognostic factors of survival for patients with hepatocellular carcinoma (HCC) who underwent transarterial chemoembolization (TACE) are well known, the clinical significance of performing selective TACE for HCC patients has not been clearly documented. We tried to analyze the potential factors of disease-free survival for these patients, including the performance of selective TACE. Materials and Methods A total of 151 patients with HCC who underwent TACE were retrospectively analyzed for their disease-free survival (a median follow-up of 23 months, range: 1-88 months). Univariate and multivariate analyses were performed for 20 potential factors by using the Cox proportional hazard model, including 19 baseline factors and one procedure-related factor (conventional versus selective TACE). The parameters that proved to be significant on the univariate analysis were subsequently tested with the multivariate model. Results Conventional or selective TACE was performed for 40 and 111 patients, respectively. Univariate and multivariate analyses revealed that tumor multiplicity, venous tumor thrombosis and selective TACE were the only three independent significant prognostic factors of disease-free survival (p = 0.002, 0.015 and 0.019, respectively). Conclusion In our study, selective TACE was a favorable prognostic factor for the disease-free survival of patients with HCC who underwent TACE.
Surgical resection or transplantation has been considered as the gold standard for the treatment of hepatocellular carcinoma (HCC) in those patients who are good surgical candidates (1, 2). However, the surgical indications are usually very limited, and consequently transarterial chemoembolization (TACE) has been widely practiced for the treatment of patients with unresectable HCC (3-8). The additional survival benefit of TACE over conservative management has been analyzed in many previous randomized studies (8-14). Furthermore, the potential prognostic factors affecting the survival of patients with HCC who undergo TACE have been analyzed in many previous studies (15-18). Since the 1990s, selective TACE, such as segmental or subsegmental TACE, has been performed to achieve favorable clinical results with regard to local tumor progression and overall survival (19-24). However, as far as we know, the survival benefit of selective TACE compared to conventional TACE has not been clearly reported in the literature.
In this study, we evaluated potential prognostic factors of disease-free survival for patients with HCC who underwent TACE, focusing on the role of selective TACE.
Patient Selection
The requirements for patients to enter the study were as follows: (a) adult patients with hepatic cirrhosis and HCC, (b) a prothrombin time ratio (i.e., the normal time divided by the patient's time) greater than 40%, (c) a platelet count higher than 40,000 per cubic millimeter (40×10⁹/L), (d) newly diagnosed patients with no previous treatment for the HCC, (e) the patient was ineligible for surgical resection or transplantation, and (f) the patient agreed to undergo TACE.
From August 1, 1998 to July 15, 2006, a total of 151 consecutive patients with HCC who underwent TACE in our hospital met the inclusion criteria, and the medical records of the patients were retrospectively reviewed. All of the 151 patients were monitored from the time of diagnosis to the date of death or to the time of study closure, if they were still alive. The study was censored on July 15, 2007. All of the patients had a known 1-year or longer survival status. The median follow-up period was 23 months (range: 1-88 months). All but seven patients were male. Their age ranged from 44 to 82 years (mean ± SD [standard deviation]: 64.2 ± 8.5 years). The other characteristics are shown in Table 1. The Institutional Review Board (IRB) of our hospital approved this study.
The diagnosis of HCC was verified histologically by performing a percutaneous needle biopsy for 12 patients (7.9%). For the other patients, a diagnosis was established based on the characteristic radiological features on at least two imaging examinations. These examinations included ultrasound, contrast-enhanced dynamic computed tomography (CT), magnetic resonance imaging (MRI) and hepatic angiography (for the hypervascular tumors seen on two or more imaging modalities), or the use of a single imaging technique with positive findings for HCC and an associated serum α -fetoprotein level > 400 ng/mL (25). In the majority of patients, the etiology of the cirrhosis was chronic viral hepatitis B or hepatitis C (Table 1).
Chemoembolization Techniques
All patients had enhanced dynamic CT performed within four weeks prior to TACE. Informed consent was obtained for all of the patients before the procedure. All the TACE procedures were performed by two interventional radiologists with ten and six years of experience, respectively. Hepatic angiography was performed using 5 Fr angiographic catheters, followed by superselection of arterial feeders using a microcatheter (mainly Progreat TM α ; Terumo; Tokyo, Japan).
We administered an iodized oil-doxorubicin hydrochloride (Adriamycin; Kyowa Hakko Kogyo, Tokyo, Japan) emulsion into the feeders. The volume of the iodized oil ranged from 3 to 10 ml, and the amount of doxorubicin ranged from 20 to 70 mg. Once the flow became sluggish, gelatin sponge particles (Gelfoam; Upjohn, Kalamazoo, MI) that were mixed with mitomycin-C (Kyowa Hakko Kogyo, Tokyo, Japan) and the contrast material were additionally administered into the feeders.
For selective TACE, chemoembolization was performed as selectively as possible in the distal arteries that fed the tumor (24). While performing selective TACE, attempts were made to completely occlude the arterial feeders. A small amount of saline solution was then injected slowly to confirm the complete occlusion of the segmental or subsegmental arterial feeder. If the retained contrast media was partially washed out after the saline injection, then additional gelatin sponge particles were infused until complete stasis of flow was achieved. Conventional TACE was defined as TACE at the level of the right or left lobar hepatic artery or the proper hepatic artery. When catheterization of a segmental tumor feeder failed, then TACE was performed through the right or left hepatic arteries. Conventional TACE was performed more frequently when the tumors were supplied by multiple segmental arterial feeders.
For bilobar disease, we tried to treat all the tumors by selective TACE, if possible. If the patient's liver function was as poor as Child-Pugh class B or C, then only the larger tumors were selectively treated to preserve the liver function. When performing conventional TACE, occlusion of the arterial feeders was not intended, and only stasis of flow was obtained at the end of the procedure. This was done to minimize possible damage to the liver parenchyma.
Imaging Interpretation and Follow-up
The CT examinations were performed with an 8-slice multidetector CT scanner (Lightspeed; GE Medical Systems, Milwaukee, WI) with 5-mm collimation and 17.5 mm/sec table speed, or with a single-detector helical scanner (Prospeed Advantage; GE Medical Systems, Milwaukee, WI) with 10-mm collimation and a 10-mm/sec table speed. All the patients underwent both non-enhanced and contrast-enhanced three-phase helical CT one-month after their TACE. Two radiologists with twelve and seven years of experience, respectively, interpreted the CT and angiographic images independently, and the final decisions were reached by consensus.
A residual viable tumor was judged to be present when an enhanced portion was seen within or around the original tumor on a one-month follow-up CT scan. If no definite evidence of residual tumor was noted on this one-month follow-up CT, then 3-phase contrast-enhanced CT was performed at a 3- or 4-month interval thereafter. Local tumor progression was judged to be present when eccentric focal disappearance of the iodized oil from the lesion was seen, or when an enhanced portion was seen within or at the margin of the original mass on the next follow-up CT scans after the first one-month follow-up CT scan (26, 27). Radiofrequency ablation was considered first for a recurred small tumor that was ≤ 3 cm in maximal diameter, if the tumor was not located in a difficult location. Additional TACE procedures were performed for other tumors. Repeated procedures were based on the tumor response and the patient's tolerance, and were not performed at a fixed time interval.
Analysis of the Prognostic Factors for Disease-Free Survival and the Image Interpretation
Disease-free survival was calculated by considering any death or recurrence as an event (28). All the patients were followed up with a standard protocol of surveillance that included performing a contrast-enhanced dynamic CT scan at one month after TACE, followed by a liver function test, a test for the serum α -fetoprotein level, a dynamic CT scan and chest radiography every three months or when the serum α -fetoprotein level was significantly increased (29). When recurrence was indicated by any of these examinations, the patients underwent hepatic angiography.
The disease-free survival was the only end point of this study, and this was analyzed for 20 potential prognostic factors, including 14 baseline patient factors (the patient's age and gender, hepatitis B infection, hepatitis C infection, the presence of ascites, the serum aspartate aminotransferase [AST] level, the serum alanine aminotransferase [ALT] level, the serum albumin level, the total bilirubin level, the platelet count, the prothrombin time [INR], the Child-Pugh class, the presence of portal hypertension and the performance status score), five baseline tumor factors (the serum α-fetoprotein level, the tumor location that was either unilobar or bilobar, multiplicity of the tumors, the maximal tumor diameter and portal or hepatic vein tumor thrombosis), and finally one procedure-related factor (conventional versus selective chemoembolization).
Portal hypertension was defined by the presence of either esophageal varices or splenomegaly with a platelet count < 100,000/ml (30). The performance status assessment followed the guidelines of the Eastern Cooperative Oncology Group (ECOG) (31). The number of tumors was determined from the pre-embolization CT. Tumor size was determined as the maximal diameter of the nodule that was measured on the pre-embolization CT. Vascular invasion was assessed by dynamic CT and hepatic angiography. Lymph node invasion and distant metastases were assessed via a routine screening study such as ultrasonography, dynamic CT and chest X-ray. Bone scintigraphy or a brain CT was performed if suggestive symptoms were present. Abdominal lymph nodes with the shortest diameter being 10 mm or greater were regarded as metastatic nodes.
Statistical Analysis
For the 20 potential prognostic factors of disease-free survival, univariate and multivariate analyses were performed using the Cox proportional hazards regression model. The main focus of the analysis was on the role of selective TACE. The parameters that proved to be significant on univariate analysis were subsequently tested with the multivariate model. The backward stepwise selection (likelihood ratio) technique was used for the multivariate test. For the survival analysis, multivariate analysis was performed twice, with and without inclusion of the procedure-related factor. If no independently significant baseline prognostic factor was dropped by the addition of the procedure-related factor, then the confounding between the baseline and procedure-related factors was regarded as insignificant. The existence of variance inflation was also checked. For continuous variables, the cut-off was set at the median value for both the univariate and multivariate analyses, with consideration given to the clinical context.
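The analysis reported here was run in SPSS; purely as an illustration of the same workflow (univariate screening followed by backward elimination in a Cox model), the sketch below uses the Python lifelines package. The data frame, file name and column names are assumptions, and the simple Wald p-value criterion stands in for the likelihood-ratio criterion used in the paper.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical data frame: one row per patient, with disease-free survival
# time in months, an event flag (death or recurrence = 1), and candidate
# covariates such as multiplicity, venous_thrombosis, selective_tace, ...
df = pd.read_csv("hcc_tace_cohort.csv")

candidates = ["multiplicity", "venous_thrombosis", "selective_tace",
              "ascites", "afp_over_40", "bilobar", "tumor_size"]

# Univariate Cox models: keep covariates with p < 0.05
univariate_keep = []
for var in candidates:
    cph = CoxPHFitter()
    cph.fit(df[["dfs_months", "event", var]],
            duration_col="dfs_months", event_col="event")
    if cph.summary.loc[var, "p"] < 0.05:
        univariate_keep.append(var)

# Backward stepwise multivariate model: drop the least significant
# covariate until all remaining p-values are below 0.05
selected = list(univariate_keep)
while selected:
    cph = CoxPHFitter()
    cph.fit(df[["dfs_months", "event"] + selected],
            duration_col="dfs_months", event_col="event")
    worst = cph.summary["p"].idxmax()
    if cph.summary.loc[worst, "p"] <= 0.05:
        break
    selected.remove(worst)

cph.print_summary()  # hazard ratios and 95% CIs for the retained factors
```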
P-values less than 0.05 were considered statistically significant. The SPSS software package (version 10.0; SPSS Inc., Chicago, IL) was used for the statistical analysis.
RESULTS
Among the 20 potential prognostic factors affecting disease-free survival, univariate analysis revealed that the presence of ascites, a higher serum α-fetoprotein level (> 40 ng/mL), a bilobar tumor distribution, tumor multiplicity, the tumor size and venous tumor thrombosis were the only significant baseline prognostic factors (p = 0.046, 0.016, 0.006, 0.000, 0.022 and 0.001, respectively). In addition, selective TACE was a significant procedure-related factor on the univariate analysis (p = 0.000) (Table 2).
Multivariate analysis for the prognostic factors that affected disease-free survival on the univariate analysis revealed that tumor multiplicity, venous tumor thrombosis and selective chemoembolization were the only three independently significant prognostic factors of disease-free survival (p = 0.002, 0.015 and 0.019, respectively) (Table 3). The two independently significant baseline factors were the same irrespective of including selective TACE in the multivariate analysis, except for minute numerical changes. The variance inflation was also minimal for the three significant factors after the addition of selective TACE in the multivariate analysis.
DISCUSSION
The survival benefit of TACE over conservative management has been analyzed in many previous randomized studies and review articles (8)(9)(10)(11)(12)(13)(14). However, to the best of our knowledge, there have been no randomized trials or controlled studies for determining the survival benefit of selective TACE over conventional TACE. Considering the additional cost and procedural time of segmental or subsegmental TACE, it is necessary to evaluate whether selective TACE can provide an additional overall or disease-free survival benefit that compensates for the additional cost and time of the procedure.
We think that an improved disease-free survival status can be expected from the potential merits of selective TACE as follows: (1) damage to the liver parenchyma can be restricted to the specific liver segments, and (2) the tumoricidal effect can be potentiated because the chemoembolic agents are concentrated in specific liver segments (21,22). Despite these potential merits, it was not clear whether selective TACE could enhance the disease-free survival status of patients, when compared to conventional TACE. The beneficial effect of initial selective TACE might have been weakened by the high rate of intra-hepatic tumor recurrence, especially for patients with viral hepatitis-originated HCC (32). However, this study showed that selective TACE could improve the disease-free survival status of patients with inoperable HCC.
In this study, the significant baseline prognostic factors for disease-free survival were similar to the factors for tumor recurrence determined by previous studies on TACE (19,20,22). Furthermore, in our study, the most powerful adverse prognostic factor for survival was portal or hepatic venous thrombosis, and similar results have been found in previous studies of patients who were treated with liver resection or transplantation (33)(34)(35).
The limitations of this study are as follows. First, this was a retrospective analysis and not a randomized controlled trial. However, we did not perform a randomized controlled study between selective and conventional TACE because of the anticipated potential advantages of selective TACE. Although we did not perform a controlled study, the impact of selective TACE on patient survival was adjusted by the multivariate analysis. Second, the number of patients who underwent conventional TACE was relatively small when compared to those patients who underwent selective TACE. Third, the intrahepatic or extrahepatic tumor recurrence might have been underestimated. However, the main focus of this study was not to evaluate the tumor recurrence rate itself, but to evaluate the prognostic impact of procedure-related factors on the disease-free survival.
In conclusion, selective TACE could prolong the disease-free survival period of patients with inoperable HCC, as was shown by the multivariate analysis. A future larger-scale controlled study comparing conventional TACE and selective TACE will be helpful to further evaluate this subject.
Dose-dependent effects of vitamin D on transdifferentiation of skeletal muscle cells to adipose cells
Fat infiltration within muscle is one of a number of features of vitamin D deficiency, which leads to a decline in muscle functionality. The origin of this fat is unclear, but one possibility is that it forms from myogenic precursor cells present in the muscle, which transdifferentiate into mature adipocytes. The current study examined the effect of the active form of vitamin D3, 1,25-dihydroxyvitamin D3 (1,25(OH)2D3), on the capacity of the C2C12 muscle cell line to differentiate towards the myogenic and adipogenic lineages. Cells were cultured in myogenic or adipogenic differentiation media containing increasing concentrations (0, 10−13, 10−11, 10−9, 10−7 or 10−5 M) of 1,25(OH)2D3 for up to 6 days and markers of muscle and fat development were measured. Mature myofibres were formed in both adipogenic and myogenic media, but fat droplets were only observed in adipogenic media. Relative to controls, low physiological concentrations (10−13 and 10−11 M) of 1,25(OH)2D3 increased fat droplet accumulation, whereas high physiological (10−9 M) and supraphysiological concentrations (≥10−7 M) inhibited fat accumulation. This increased accumulation of fat with low physiological concentrations (10−13 and 10−11 M) was associated with a sequential up-regulation of Pparγ2 (Pparg) and Fabp4 mRNA, indicating formation of adipocytes, whereas higher concentrations (≥10−9 M) reduced all these effects, and the highest concentration (10−5 M) appeared to have toxic effects. This is the first study to demonstrate dose-dependent effects of 1,25(OH)2D3 on the transdifferentiation of muscle cells into adipose cells. Low physiological concentrations (possibly mimicking a deficient state) induced adipogenesis, whereas higher (physiological and supraphysiological) concentrations attenuated this effect.
Introduction
Vitamin D (VitD) is a key nutrient for maintaining the health of the musculoskeletal system, with VitD deficiency leading to myopathy, classically characterised by hypotonia, weakness and atrophy of skeletal muscle, and a deterioration in physical capacity (Ceglia 2008). Muscle biopsies from VitD-deficient adults demonstrate enlarged onset of sarcopenia is associated with an increase in fat deposition within the tissue (Ryall et al. 2008).
It is of major concern that VitD deficiency is particularly prevalent amongst the elderly population, as its effects on the musculoskeletal system compound the degenerative effects of sarcopenia (Holick 2007). This can have major consequences for their welfare, as the resultant decline in basic muscle function leads to an increased risk of falls and bone fractures. This deterioration in muscle strength and functionality is thought to result from not just the loss in muscle fibres but also a progressive infiltration of fat within the tissue (Goodpaster et al. 2001, Ryall et al. 2008). This fat infiltration has been shown to directly impact on muscle strength and functionality and is a key independent risk factor for metabolic diseases such as insulin resistance and diabetes (Goodpaster et al. 2003, Zoico et al. 2010).
In the blood circulation, VitD is found in two main forms: calcidiol or 25-hydroxyvitamin D (25(OH)D), which is an inactive precursor form, and calcitriol or 1,25-dihydroxyvitamin D (1,25(OH)2D), which is the active form. Blood concentrations of calcidiol range between 30 and 50 nM (i.e. 3–5×10−8 M; McLeod & Cooke 1989), whereas calcitriol is much lower, ranging between ~2 and 350 pM (i.e. 2×10−12–3.5×10−10 M; Zittermann et al. 2009). The active form, calcitriol, is formed when circulating 25(OH)D is hydroxylated by the 1α-hydroxylase enzyme (CYP27b1), present mainly in the kidney, to form the active 1,25(OH)2D (Takeyama & Kato 2011). Activation then enables 1,25(OH)2D3 to bind to the VitD receptor (VDR), a type of nuclear receptor, and thereby regulate transcription of a number of VitD target genes, which is thought to be the principal mechanism of action of VitD. Down-regulation of this response occurs via activation of the 24-hydroxylase enzyme (CYP24a1), which hydroxylates various forms of VitD at carbon 24, resulting in inactivation and targeting for excretion (Holick 2007). De novo synthesis of components required for calcium cycling, phospholipid metabolism and cell proliferation/differentiation in muscle is thought to be mediated by VDR-driven mechanisms operating at the level of gene transcription (Drittanti et al. 1989, Ceglia 2008), thereby playing an important role in maintaining muscle structure and functionality. The effects of VitD deficiency are reversible, and studies have shown that VitD supplementation increases the relative number and size of type II fibres in aged skeletal muscle, which can improve balance, increase overall muscle strength and ultimately reduce the incidence of falls (Bischoff et al. 2003, Harwood et al. 2004, Sato et al. 2005, Moreira-Pfrimer et al. 2009). However, it is not known what effect VitD supplementation has on fat infiltration within muscle. Indeed, the origin of these adipose cells and the mechanism by which they mature within muscle remain unclear. It may correspond to aberrant transdifferentiation of myogenic precursor cells into adipocytes, resulting in the formation of fat within the intermuscular space (Vettor et al. 2009). Certainly, numerous studies have demonstrated that myogenic precursor cells retain the potential to transdifferentiate towards the adipogenic lineage (Hu et al. 1995, Grimaldi et al. 1997, Holst et al. 2003, Seale et al. 2008, Vettor et al. 2009). Previous work has shown that VitD has potent effects on both adipogenesis (Ishida et al. 1988, Sato & Hiragun 1988, Lenoir et al. 1996, Blumberg et al. 2006, Kong & Li 2006, Thomson et al. 2007, Zhuang et al. 2007) and myogenesis (Capiati et al. 1999, Garcia et al. 2011). Most have used physiologically relevant concentrations of 1,25(OH)2D3, but some have tended to use high physiological or supraphysiological concentrations. Importantly, it is not known whether VitD affects the transdifferentiation of myogenic precursor cells into adipocytes. In order to address this question, the current study investigated the effect of a broad range of concentrations of the active form of VitD3 (1,25(OH)2D3) on the capacity of the murine C2C12 muscle cell line to differentiate or transdifferentiate towards the myogenic or adipogenic lineages respectively.
We included 1,25(OH)2D3 concentrations covering the physiological range (10−13–10−9 M), as well as the supraphysiological concentrations (10−7 and 10−5 M) used previously in other cell culture studies (Blumberg et al. 2006, Zhuang et al. 2007, Garcia et al. 2011).
Materials and methods
Cell culture and reagents
C2C12 cells were cultured in six-well plates in growth media consisting of DMEM (Sigma) supplemented with 10% heat-inactivated fetal bovine serum (hi-FBS), 100 units/ml penicillin and 0.1 mg/ml streptomycin, and maintained at 37°C under 5% CO2 until 70-80% confluent. This corresponded to day 0 of differentiation, at which stage cells were switched to either myogenic or adipogenic differentiation media and incubated for up to 6 days at 37°C under 5% CO2. Myogenic differentiation media consisted of DMEM containing 2% horse serum and antibiotics and was changed every 2 days until the end of the experiment. For induction of adipogenic differentiation, cells were cultured in DMEM containing 10% hi-FBS, antibiotics, 0.5 mM isobutylmethylxanthine, 1 mM dexamethasone, 850 nM insulin, 10 nM tri-iodothyronine (T3) and 1 mM rosiglitazone (peroxisome proliferator-activated receptor γ (PPARγ) agonist) from days 0 to 2. This induction medium was then replaced with DMEM containing 10% hi-FBS, antibiotics, 850 nM insulin, 10 nM T3 and 1 mM rosiglitazone from day 2 onwards and changed every 2 days until the end of the experiment. Both myogenic and adipogenic differentiation media were supplemented with increasing concentrations (0, 10−13, 10−11, 10−9, 10−7 or 10−5 M) of 1,25(OH)2D3 (Sigma), which was dissolved and diluted in DMSO. All treatments (including control) contained 0.1% (v/v) DMSO.
Oil Red-O and haematoxylin staining
Accumulation of lipid droplets was monitored by phase contrast microscopy. After 6 days of exposure to myogenic or adipogenic differentiation media, cells were stained with Oil Red-O to identify lipid droplets and counterstained with haematoxylin to delineate nuclear and myofibre structures. Briefly, media were removed and cells were fixed in 3.7% formaldehyde at room temperature for 30 min. Cells were washed twice with pre-warmed (to 37°C) PBS and then with 60% isopropanol, before staining with 0.3% Oil Red-O (Sigma) for 30 min at room temperature. Cells were then washed once in 60% isopropanol and twice in tap water, before counterstaining with Harris haematoxylin (Sigma) for 3 min, all at room temperature. Excess stain was removed by washing twice in distilled water and then another two times in Scott's tap water (0.24 M sodium bicarbonate and 0.03 M magnesium sulphate). Images were captured using an Olympus SZH10 microscope (Olympus, Southend-on-Sea, UK) and analysed using Image Pro (version 5.1, Rockville, MD, USA) for the quantification of Oil Red-O staining. The Image Pro software detected the Oil Red-O staining using a fixed threshold and then calculated the percentage area of staining for each image. Five field-of-view images from each of four replicate wells (i.e. 20 in total) were quantified for each treatment.
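The quantification above was done in Image Pro with a fixed threshold; the sketch below is only a minimal Python/scikit-image equivalent of that idea. The file names, colour-channel thresholds and the simple "red but not blue" rule are assumptions for illustration, not the settings used in the study.

```python
import numpy as np
from skimage import io

# Illustrative re-implementation of fixed-threshold Oil Red-O quantification;
# the file name and threshold values are assumptions.
def oil_red_o_area_percent(path, red_min=150, other_max=120):
    img = io.imread(path)                      # RGB image of one field of view
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # Fixed threshold: pixels that are strongly red but not blue/purple
    # (haematoxylin counterstain) are counted as Oil Red-O positive.
    stained = (r > red_min) & (g < other_max) & (b < other_max)
    return 100.0 * stained.sum() / stained.size

# Average over the five fields of view per well, as in the protocol above
fields = [f"well_A_field_{i}.png" for i in range(1, 6)]
print(np.mean([oil_red_o_area_percent(f) for f in fields]))
```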
Quantitative real-time PCR analysis
At the required timepoints (days 0, 2, 4 and 6), total RNA was extracted from C2C12 cells using a High Pure RNA isolation kit (Roche), according to the manufacturer's protocol. Samples were then stored frozen (−80°C) before first-strand cDNA synthesis using random hexamer primers, as described previously (Hemmings et al. 2009). Quantitative PCR was performed in duplicate using SYBR Green Master mix (Roche), 384-well microplates and the LightCycler LC480 (Roche) configured for SYBR Green detection, again as described previously (Tonge et al. 2010). The primers for each transcript were designed using Primer Express (Applied Biosystems) and are shown in Table 1. A standard curve was produced using serial dilutions of a pool of cDNA made from all samples to check the linearity and efficiency of the PCR for every gene, while melt curve analysis was used to check whether there was only a single product. Transcript abundance was determined using the standard curve method as described previously (Brown et al. 2012). Previous studies in this cell line (Brown et al. 2012) had demonstrated that expression of all the common 'housekeeper' genes changes during differentiation; therefore, all data were normalised to total first-strand cDNA content, measured using Oligreen, as described previously (Brown et al. 2012).
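As a minimal numerical illustration of the standard-curve method referred to above, the sketch below fits the dilution series and converts sample Ct values back to relative abundance, normalised to total cDNA rather than a housekeeper gene. The dilution factors, Ct values and cDNA amount are invented solely for the example.

```python
import numpy as np

# Sketch of the standard-curve method: Ct values from a serial dilution of
# pooled cDNA define a line Ct = slope*log10(amount) + intercept; sample Cts
# are then converted back to relative transcript abundance.
std_amount = np.array([1, 0.2, 0.04, 0.008, 0.0016])   # relative input (assumed)
std_ct     = np.array([18.1, 20.5, 22.8, 25.2, 27.6])  # illustrative Ct values

slope, intercept = np.polyfit(np.log10(std_amount), std_ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1      # ~1.0 corresponds to 100% PCR efficiency

def relative_abundance(ct):
    return 10 ** ((ct - intercept) / slope)

# Normalise to total first-strand cDNA content instead of a housekeeper gene
sample_ct, cdna_ng = 23.4, 85.0            # illustrative values
print(relative_abundance(sample_ct) / cdna_ng)
```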
Creatine kinase, DNA and protein assays
At the required timepoint (day 4), C2C12 cells from each well were scraped into 1 ml of cold (4°C) tri-sodium citrate buffer (0.05 M, pH 6.8) and sonicated on ice for 15 s using a benchtop ultrasonicator (Soniprep 150; MSE (UK) Ltd., London, UK), and were stored frozen (−80°C) before analysis. Creatine kinase (CK) activity (IU/well) was measured using a CK assay kit (Thermo Scientific, Cramlington, Northumberland, UK), as described previously (Brown et al. 2012). Briefly, thawed samples were transferred onto a 96-well microtitre plate and 200 µl of reaction buffer was added according to the manufacturer's instructions. Absorption at 340 nm was measured at 30°C every 5 min over a 30-min period. DNA content (µg/well) of the thawed cell lysate was measured using a fluorescence plate reader assay adapted from Rago et al. (Rago et al. 1990, Hurley et al. 2006). Protein content (mg/well) was measured via the Lowry method (Lowry et al. 1951) adapted for the 96-well plate format. Usually, we normalise the CK activity data to DNA content to account for differences in the numbers of cells in each well. However, in this case, normalising the CK activity data to DNA resulted in the differences in CK activity observed between media types (myogenic vs adipogenic) being exaggerated due to the obvious differences in DNA contents (i.e. numbers of cells and their differing morphology), as there were clear differences in the types of cells present. We therefore opted to report the three measurements (CK activity, protein and DNA) separately on a per-well basis.
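For readers unfamiliar with kinetic plate assays, the sketch below shows one way the 340 nm readings could be converted into an activity in IU/well. It assumes the kit's standard NAD(P)H-coupled chemistry (molar absorptivity 6.22 mM−1 cm−1 at 340 nm); the absorbance values, light path and well volume are illustrative assumptions, not values from the study.

```python
import numpy as np

# Sketch of converting kinetic absorbance readings into CK activity.
time_min = np.array([0, 5, 10, 15, 20, 25, 30])
a340     = np.array([0.12, 0.19, 0.26, 0.33, 0.41, 0.48, 0.55])  # illustrative

slope_per_min = np.polyfit(time_min, a340, 1)[0]   # delta A340 per minute

EXT_NADPH = 6.22        # mM^-1 cm^-1, molar absorptivity of NAD(P)H at 340 nm
PATH_CM   = 0.56        # assumed light path for 200 ul in a 96-well plate
WELL_ML   = 0.2         # reaction volume in ml

# 1 IU = 1 umol of product formed per minute:
# (delta A / min) / (epsilon * path) gives mM/min = umol/ml/min; times volume -> umol/min
iu_per_well = slope_per_min / (EXT_NADPH * PATH_CM) * WELL_ML
print(f"CK activity: {iu_per_well:.4f} IU/well")
```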
Statistical analysis
Data were analysed by one- or two-way ANOVA for effects of 1,25(OH)2D3 concentration, media type (myogenic or adipogenic) and/or timepoint (as appropriate) using SPSS Statistical Software (19th Edition, Portsmouth, Hampshire, UK). Percentage area of Oil Red-O staining was analysed by one-way ANOVA (1,25(OH)2D3 concentration only). CK activities, DNA and protein contents were analysed by two-way ANOVA (1,25(OH)2D3 concentration and media type). All real-time qPCR data were analysed by two-way ANOVA (1,25(OH)2D3 concentration and timepoint), with the data from myogenic and adipogenic media being analysed separately. A post-hoc Bonferroni test was used when appropriate (i.e. no significant interactions). All means are for either n=3 (CK activity, protein and DNA contents) or n=4 (Oil Red-O staining and all mRNA expression) replicates (i.e. wells) for each treatment and media type at each timepoint. P<0.05 was considered statistically significant.
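The analyses were performed in SPSS; the sketch below only illustrates the same design (two-way ANOVA for concentration, day and their interaction, with Bonferroni-corrected follow-up comparisons) using Python's statsmodels and scipy. The input file, column names and the use of simple pairwise t-tests against the control are assumptions made for the example.

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multitest import multipletests
from scipy import stats

# Hypothetical long-format table: one row per replicate well, with the
# 1,25(OH)2D3 concentration, the day of differentiation and the measurement.
df = pd.read_csv("qpcr_long_format.csv")   # columns: conc, day, value

# Two-way ANOVA for concentration, day and their interaction
model = ols("value ~ C(conc) * C(day)", data=df).fit()
print(anova_lm(model, typ=2))

# If the interaction is not significant, compare each concentration with the
# control and apply a Bonferroni correction to the resulting p-values.
control = df[df.conc == 0]["value"]
pvals = [stats.ttest_ind(df[df.conc == c]["value"], control).pvalue
         for c in sorted(df.conc.unique()) if c != 0]
print(multipletests(pvals, method="bonferroni")[1])   # adjusted p-values
```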
Effects of 1,25(OH)2D3 on myotube formation and lipid accumulation
Myotube formation was clearly evident in control cells after 6 days in either myogenic or adipogenic differentiation media (Fig. 1A and G). Phase contrast images of control cells showed extensive myotube formation by day 4 in myogenic media, which did not appear to increase any further on day 5 (data not shown). By contrast, myotubes were still visibly forming throughout days 4 and 5 of differentiation in control cells in adipogenic media (data not shown), suggesting that myogenic differentiation was slightly delayed, but there appeared to be no difference in myotube formation between myogenic and adipogenic media on day 6 (Fig. 1A and G).
Positive Oil Red-O staining confirmed the presence of lipid droplets in control cells grown for 6 days in adipogenic media (Fig. 1G), but there was no Oil Red-O staining of cells grown in myogenic media (either without or with 1,25(OH)2D3) for the same length of time (Fig. 1A, B, C, D, E and F). Similarly, phase contrast images taken at earlier timepoints demonstrated the formation of lipid droplets after 4 days, but only in cells cultured in adipogenic media (data not shown).
Dose-dependent effects of 1,25(OH)2D3 supplementation for 6 days were observed on myotube formation, with addition of supraphysiological concentrations (10−7 and 10−5 M) appearing to inhibit myotube formation in both myogenic and adipogenic media (Fig. 1E, F, K and L). Relative to the control cells grown for 6 days in adipogenic media, supplementation with increasing concentrations of 1,25(OH)2D3 induced a bimodal effect on lipid droplet accumulation in cells cultured in adipogenic media, as determined by the percentage area of Oil Red-O staining (Fig. 2). At the lowest physiological concentration (10−13 M), 1,25(OH)2D3 significantly increased lipid droplet accumulation compared with controls (P<0.01, Bonferroni; Figs 1H and 2), whereas the highest concentrations (10−9 M and above), corresponding to high physiological and supraphysiological concentrations, inhibited lipid droplet formation (P<0.001, Bonferroni; Figs 1J, K, L and 2).
Expression of gene markers of white adipocytes
As observed for lipid droplet formation (Fig. 1G, H, I, J, K and L), C2C12 cells grown in adipogenic media demonstrated induction of genes indicative of differentiation towards white adipocytes (Fig. 3A, B and C). For clarity, we only include figures for the expression of genes in adipogenic media, but the figures showing expression of these same genes by cells cultured in myogenic media are included in the Supplementary Figure 1A, B, C and D, see section on supplementary data given at the end of this article. In adipogenic media, expression of all three adipose-specific marker genes, Pparg2 (Pparg; Fig. 3A), fatty acid binding protein 4 (Fabp4; Fig. 3B) and adiponectin/Adipoq (Fig. 3C), was induced in a 1,25(OH)2D3 concentration- and time-dependent manner (P<0.001 for all three 1,25(OH)2D3 concentration×day interactions), but only in cells grown in adipogenic media. Low physiological concentrations (10−13 and 10−11 M) of 1,25(OH)2D3 increased expression of all three genes, with levels of PPARγ2 peaking at day 2 and declining over days 4 and 6 (Fig. 3A), while expression of the PPARγ2 target genes, Fabp4 and Adipoq, peaked 2 days later at day 4 of differentiation (Fig. 3B and C). Interestingly, the high physiological (10−9 M) and supraphysiological (10−7 and 10−5 M) concentrations of 1,25(OH)2D3 decreased expression of all three genes, corresponding to the observed decrease in lipid droplet formation. By contrast, expression of these genes by cells incubated in myogenic media was either undetectable (PPARγ2; Supplementary Figure 1A) or only induced in the presence of supraphysiological concentrations of 1,25(OH)2D3 (FABP4 and ADIPOQ; Supplementary Figure 1B and C). Leptin mRNA was not detectable in any of the cultures (in adipogenic or myogenic media) throughout the 6-day period (data not shown). In contrast to PPARγ2, which is known to be adipocyte specific, Pparg1 mRNA was detected in all cultures (in both myogenic and adipogenic media) and was found to be higher at day 2 than at day 6 (P=0.002 for day effect; Fig. 3D) in adipogenic media, and higher in controls and 10−5 M 1,25(OH)2D3 than at the other concentrations (P=0.005 for 1,25(OH)2D3 concentration effect; Fig. 3D), again in adipogenic media.
Effects of 1,25(OH)2D3 on myogenic differentiation in myogenic and adipogenic media
It was apparent from the morphological studies that supplementation with supraphysiological concentrations (10−7 and 10−5 M) of 1,25(OH)2D3 inhibited myotube formation in both myogenic and adipogenic media (Fig. 1E, F, K and L). We therefore determined CK activities after 4 days of treatment, as a quantitative measure of myogenic differentiation. Lower CK activities were observed for control cells in adipogenic compared with myogenic media (Fig. 4A), but supplementation of the adipogenic media with 1,25(OH)2D3 increased CK activity to levels approaching those observed in control cells in myogenic media, and the magnitude of this increase was similar between 10−13 and 10−7 M (P<0.001 for media×1,25(OH)2D3 concentration interaction; Fig. 4A). By contrast, cells exposed to 10−5 M 1,25(OH)2D3 in adipogenic media showed a decrease in CK activity compared with controls (P<0.001 for media×1,25(OH)2D3 concentration interaction; Fig. 4A), indicating an attenuation of myogenic differentiation, consistent with the observed absence of myotubes in these cultures (Fig. 1L). In myogenic media, addition of 1,25(OH)2D3 at 10−7 and 10−5 M was also associated with a reduction in CK activity (P<0.001 for media×1,25(OH)2D3 concentration interaction; Fig. 4A), as well as reductions in total protein (P<0.01 for media×1,25(OH)2D3 concentration interaction; Fig. 4B) and total DNA (P<0.001 for media×1,25(OH)2D3 concentration interaction; Fig. 4C) contents at day 4. Protein (Fig. 4B) and DNA (Fig. 4C) contents were greater in cells cultured in adipogenic compared with myogenic media, indicative of increased cell proliferation, probably due to the higher FBS content of the adipogenic media. Similar to the effects in myogenic media, increasing 1,25(OH)2D3 concentrations in adipogenic media decreased both protein (P<0.01 for media×1,25(OH)2D3 concentration interaction; Fig. 4B) and DNA (P<0.001 for media×1,25(OH)2D3 concentration interaction; Fig. 4C) contents in a dose-dependent manner, but particularly at the highest (supraphysiological) concentrations.
Expression of myogenic marker genes
Consistent with myotube formation being observed in both types of differentiation media (Fig. 1A, B, C, D, E, F, G and H), muscle-specific genes were expressed in cells grown in either myogenic or adipogenic media. For clarity, we will mainly describe the results from studies using adipogenic media here, but figures for expression of the same genes by cells cultured in myogenic media are included in the Supplementary Figure 1E, F, G and H. The effects of 1,25(OH)2D3 (particularly physiological concentrations) on myogenic marker genes were more pronounced in adipogenic media (Fig. 5A, B, C and D) compared with myogenic media (Supplementary Figure 1E, F, G and H), consistent with the results obtained for CK activity. Similar to CK activity, CK mRNA expression was lower in adipogenic media compared with myogenic media (Fig. 5A and Supplementary Figure 1E). An increase in CK mRNA was observed at day 4 in control cells (in adipogenic media), but supplementation of adipogenic media with physiological concentrations (10−13, 10−11 and 10−9 M) of 1,25(OH)2D3 increased CK mRNA expression, particularly on day 4, whereas 10−5 M 1,25(OH)2D3 blocked/inhibited differentiation at all timepoints (P<0.001 for 1,25(OH)2D3 concentration×day interaction; Fig. 5A). A similar pattern was observed for myogenin expression (Fig. 5B). Adipogenic media induced a smaller increase in myogenin mRNA on day 2 compared with myogenic media (Fig. 5B and Supplementary Figure 1F), but myogenin mRNA continued to increase on days 4 and 6 in adipogenic media and actually exceeded the levels observed in myogenic media (Fig. 5B and Supplementary Figure 1F). Supplementation of adipogenic media with 10−13, 10−11, 10−9 or 10−7 M 1,25(OH)2D3 increased myogenin mRNA, particularly on day 2, whereas 10−5 M 1,25(OH)2D3 blocked differentiation at all timepoints (P<0.001 for 1,25(OH)2D3 concentration×day interaction; Fig. 5B). In adipogenic media, there was no change in Myod1 (MyoD) expression in control cells throughout the 6 days of differentiation (Fig. 5C), but supplementation with 10−13, 10−11, 10−9 or 10−7 M 1,25(OH)2D3 increased MyoD on days 2 and 4, whereas 10−5 M 1,25(OH)2D3 blocked these effects (P=0.001 for 1,25(OH)2D3 concentration×day interaction; Fig. 5C). Finally, Myf5 mRNA was up-regulated in control cells in adipogenic media at days 2 and 4 (Fig. 5D), but this was inhibited by increasing concentrations of 1,25(OH)2D3 in a dose-dependent manner, with 10−5 M 1,25(OH)2D3 appearing to block differentiation (P<0.001 for 1,25(OH)2D3 concentration×day interaction; Fig. 5D).
In summary, there were only relatively small effects of 1,25(OH)2D3 on cells incubated in myogenic media (see Supplementary Figure 1E, F, G and H), mainly due to the effects of the highest (supraphysiological) concentration (10−5 M). However, supplementation of adipogenic media with physiological (10−13, 10−11 and 10−9 M) concentrations of 1,25(OH)2D3 increased CK, MyoD and myogenin mRNA, suggesting an induction of muscle differentiation. Importantly, 10−5 M 1,25(OH)2D3 blocked/inhibited expression of all four myogenic marker genes in adipogenic media, suggesting that this concentration (10−5 M) may have a different effect to the other concentrations, possibly involving anti-differentiation, pro-apoptotic and/or toxic effects.
Expression of gene markers of brown adipocytes
As 1,25(OH)2D3 was shown to induce gene markers of white adipocytes, we also considered its effect on activation of genes relating to brown adipocytes. Once again, for clarity, only the data from studies in adipogenic media are included, but data for myogenic media are provided in the Supplementary Figure 2A, B, C and D, see section on supplementary data given at the end of this article. Expression of the brown fat-specific marker, uncoupling protein 1 (UCP1), was below detectable levels in cells grown in either myogenic or adipogenic media (data not shown). Likewise, expression of PRD1-BF1-RIZ1 homologous domain containing 16 (Prdm16), previously shown to be required for the transdifferentiation of C2C12 cells to brown adipocytes (Seale et al. 2008), was also below detectable limits (data not shown). However, expression of other brown fat marker genes, Elovl3 and Cidea, was detectable in cells cultured in adipogenic media, but only after 6 days (P<0.001 for day effect for both genes; Fig. 6A and B, respectively). This was preceded by a slight decrease in the expression of CCAAT enhancer binding protein β (C/ebpb) mRNA (P<0.001 for day effect; Fig. 6C) at days 2-6, but no change in expression of PPARγ coactivator 1α (PGC1α; Fig. 6D) mRNA. In contrast to the changes observed in expression of the white adipogenic marker genes, treatment with 1,25(OH)2D3 had no significant effect (P>0.05) on expression of any of the brown adipocyte marker genes, suggesting that 1,25(OH)2D3 treatment was not inducing conversion of myoblasts to brown adipocytes, although a longer time frame may be required to be completely sure.
Figure 3
Dose-dependent effects of 1,25(OH)2D3 on expression of white adipocyte marker genes. Expression of white adipocyte marker genes was determined by quantitative RT-PCR analysis. Levels of (A) Pparg2, (B) Fabp4, (C) Adipoq/adiponectin and (D) Pparg1 mRNAs were quantified in C2C12 cells cultured in the absence or presence of 10−13, 10−11, 10−9, 10−7 or 10−5 M 1,25(OH)2D3 for 2, 4 or 6 days in adipogenic differentiation media. Expression at day 0 (before differentiation media and 1,25(OH)2D3 were added) is also included and is indicated by a bar (in some instances, this was very low). Significant two-way interactions between day of differentiation and 1,25(OH)2D3 concentration were observed for PPARγ2, FABP4 and AdipoQ (P<0.001 for all). For PPARγ1, there was a significant effect of stage of differentiation (P=0.002) and a significant effect of 1,25(OH)2D3 concentration (P=0.005), but no interaction.
Expression of VDR and the VitD hydroxylating enzymes 1α-hydroxylase and 24-hydroxylase
As the activity of the VitD system is dependent on the levels of VDR and metabolising enzymes, we also determined their expression in the cell cultures. Once again, for clarity, the data from adipogenic media will mainly be described here, but the data from myogenic media are provided in the Supplementary Figure 2E, F and G. VDR mRNA was expressed in the C2C12 cells and there was no difference in the level of expression between control cells incubated in either myogenic or adipogenic media (Fig. 7A and Supplementary Figure 2E). In adipogenic media, VDR expression was increased by the highest concentration (10−5 M) of 1,25(OH)2D3, particularly on day 2 (P<0.001 for VitD concentration×day interaction; Fig. 7A). Basal levels of expression of the 25(OH)D3-activating enzyme, CYP27B1 (1α-hydroxylase) mRNA, increased in control cells incubated in adipogenic media throughout the 6 days of differentiation (P<0.001 for day effect; Fig. 7B), but there was no significant effect of 1,25(OH)2D3 on CYP27B1 mRNA expression (P>0.05; Fig. 7B).
In contrast to the activating enzyme, expression of the VitD-inactivating enzyme, CYP24a1 (also called 24-hydroxylase), was not detectable in control cells incubated in adipogenic media and did not change with day/stage of differentiation (Fig. 7C), but treatment with 10−5 M 1,25(OH)2D3 induced CYP24A1 mRNA expression (P<0.001 for 1,25(OH)2D3 concentration effect; Fig. 7C). Hence, the cells appear to respond to the very high levels of active VitD by increasing the levels of this inactivating enzyme to avoid or minimise potential toxicity effects.
Adipogenic induction
For the first time, this study shows a bimodal dose-response effect of the active form of VitD3, 1,25(OH)2D3, in modulating the capacity of C2C12 cells to transdifferentiate into adipocytes. The adipogenic potential of C2C12 cells has been shown previously, with exposure to thiazolidinediones and fatty acids found to induce transdifferentiation to mature adipocytes (Teboul et al. 1995). A key finding from these studies was that physiologically relevant, sub-nanomolar (10−13 M) concentrations of 1,25(OH)2D3 potently induced accumulation of lipid droplets in adipogenic media. Importantly, this was preceded by a clear up-regulation in expression of the adipogenic marker genes, PPARγ2 and FABP4, which were undetectable in cells grown in myogenic media. PPARγ2 has previously been shown to be pivotal in inducing the transdifferentiation of myogenic precursor cells to the adipogenic lineage (Hu et al. 1995, Yu et al. 2006). This study showed that expression of Pparg2 mRNA peaked on day 2 of differentiation, followed by a sequential up-regulation in expression of the downstream target genes, Fabp4 and Adipoq, both of which peaked at day 4 in adipogenic media. Interestingly, decreases in lipid accumulation (Oil Red-O percentage area) and adipocyte marker gene expression (Pparg2 and Fabp4) were observed at higher physiological concentrations (between 10−13 and 10−9 M) of 1,25(OH)2D3 without any change in DNA or protein contents, whereas supraphysiological concentrations (10−7 and 10−5 M) of 1,25(OH)2D3 completely inhibited lipid droplet accumulation and decreased expression of adipogenic marker genes but also reduced DNA and protein contents. Hence, the mechanisms for the anti-differentiation effects of the highest concentrations are potentially due to anti-proliferative and/or pro-apoptotic/toxic effects, but this appears not to be the case for the lower, more physiological concentrations. Therefore, the bimodal effects of 1,25(OH)2D3 in altering the transdifferentiation to an adipogenic lineage occurred over a physiologically relevant concentration range and appeared not to be due to anti-proliferative or toxic effects, suggesting that our findings may be indicative of effects in vivo.
2006, Thomson et al. 2007) and were associated with decreases in C/ebpα and Pparg2 mRNA expression (Blumberg et al. 2006, Kong & Li 2006, Thomson et al. 2007). Interestingly, 3T3-L1 preadipocytes were only receptive to this inhibitory effect in the early stages of differentiation (i.e. the induction phase), with no effect observed when 1,25(OH)2D3 was administered from 48 h onwards (Kong & Li 2006). This receptive period in 3T3-L1 cells may relate to temporal changes in VDR expression, which is rapidly up-regulated in preadipocytes in the first 4-8 h of differentiation and then progressively down-regulated in the following 48 h (Kong & Li 2006). We observed something similar, with higher VDR expression at day 2 compared with days 4 and 6 in adipogenic media. Interestingly, only the highest supraphysiological (10−5 M) concentration of 1,25(OH)2D3 increased expression of VDR, particularly at day 2 of differentiation in adipogenic media.
Having established that C2C12 cells appeared to be induced to transdifferentiate into adipocytes, a number of brown fat-specific marker genes were also measured to clarify whether the cells being formed were white or brown adipocytes. Recent findings from lineage tracing studies have shown that brown adipocytes develop in vivo from a MYF5-positive progenitor cell (Seale et al. 2008), suggesting that these Myf5-expressing C2C12 cells might also be converting to brown adipocytes. Previous work showed that ectopic overexpression of PRDM16 in C2C12 cells induced Myf5 gene expression and this was associated with diversion of these cells to the brown fat lineage (Seale et al. 2008). In this study, Myf5 mRNA expression in C2C12 cells was up-regulated in adipogenic media compared with myogenic media at all timepoints over the 6-day culture period, but Prdm16 mRNA was expressed at very low levels in all cultures, below the threshold at which expression could be accurately quantified (data not shown). Similarly, Ucp1 expression was not detectable in these cultures (data not shown). Other brown adipocyte-specific marker genes, Elovl3 and Cidea, were activated in cells cultured in adipogenic media, but not until day 6 of differentiation, and their expression was much more variable, as indicated by the larger error bars. This suggests that activation of brown adipogenic genes may be occurring, but at a much later stage in the developmental process, for which further investigation is required. However, the absence of any effect of 1,25(OH)2D3 treatment on any of the brown fat genes measured suggests that the dose-dependent changes in lipid accumulation observed relate to white rather than brown adipogenesis.
Figure 6
Dose-dependent effects of 1,25(OH)2D3 on expression of brown adipocyte marker genes. Expression of brown adipocyte marker genes was determined by quantitative RT-PCR analysis. Levels of (A) Elovl3, (B) Cidea, (C) C/ebpb and (D) Pgc1a mRNAs were quantified in C2C12 cells cultured in the absence or presence of 10−13, 10−11, 10−9, 10−7 or 10−5 M 1,25(OH)2D3 for 2, 4 or 6 days in adipogenic differentiation media. Expression at day 0 (before differentiation media and 1,25(OH)2D3 were added) is also included for reference and is indicated by a bar. Significant effects of day of differentiation were observed for Elovl3, Cidea and C/ebpb (P<0.001 for all).
Myogenic differentiation
Our results show that exposure to adipogenic media induces C2C12 cells to transdifferentiate into cells that accumulate lipid droplets and thus resemble mature adipocytes. This indicates some degree of plasticity in the lineage potential of this 'muscle cell line', but it should be noted that extensive myotube formation was still evident, even in the presence of adipogenic media. Previous studies in which C2C12 cells were induced to form adipocytes following exposure to thiazolidinediones and fatty acids indicated that this transdifferentiation to mature adipocytes was associated with an inhibition of myogenic differentiation (Teboul et al. 1995). An initial myogenesis-inhibitory effect of exposing C2C12 cells to adipogenic media was evident in our studies, as CK activity was reduced in control cells exposed to adipogenic compared with myogenic media. This was associated with delayed activation of myogenesis, as indicated by reduced myogenin and Ck mRNA at day 2, and was possibly a consequence of increased cell proliferation, as indicated by the increase in DNA content observed, presumably due to the higher FBS content of the adipogenic media. However, this study indicated that the three physiological concentrations (10−13–10−9 M) of 1,25(OH)2D3 appeared to induce myogenesis (in adipogenic media only), rather than inhibit it. Indeed, the low physiological (10−13 M) concentration of 1,25(OH)2D3 appeared to increase both adipogenesis and myogenesis, whereas the high physiological (10−9 M) concentration only increased myogenesis and the highest supraphysiological (10−5 M) concentration inhibited both. This is consistent with previous work in which a high physiological concentration (10−9 M) of 1,25(OH)2D3 was shown to increase fusion/differentiation of chick embryo myoblasts during late stages of myogenesis (Capiati et al. 1999).
Local regulation of activity
Circulating levels of 1,25(OH)2D3 are tightly regulated by the activity of the hydroxylating enzymes, 1α-hydroxylase (CYP27b1) and 24-hydroxylase (CYP24a1). In this study, the activating enzyme, Cyp27b1 mRNA, was found to be expressed by C2C12 cells and expression increased with day/stage of differentiation in adipogenic media, but there was no effect of 1,25(OH)2D3. Although the highest concentration (10−5 M) appeared to reduce expression of Cyp27b1 mRNA, this was not statistically significant. Previous work (Turunen et al. 2007) has shown that 1,25(OH)2D3 administered at supra-nanomolar levels (10−8 M) suppressed 1α-hydroxylase (CYP27b1) promoter activity in HEK 293 cells. It should be noted that there was no detectable expression in the C2C12 cells of the inactivating enzyme, CYP24a1 (24-hydroxylase), apart from when 1,25(OH)2D3 was supplemented at levels in excess of the physiological range (10−7–10−5 M), when an induction was observed. This likely occurs as a mechanism to protect the cells from the toxic effects of such high non-physiological levels of the active vitamin. Similarly, VDR mRNA was very low (but detectable) and was induced at the highest (10−5 M) concentration. This would appear to suggest that there were toxic effects of the highest concentration(s) of 1,25(OH)2D3, but that this was not the case for the other, more physiological concentrations (10−13–10−9 M), indicating that their effects on morphology and gene expression are unlikely to be via anti-proliferative/pro-apoptotic mechanisms.
Figure 7
Dose-dependent effects of 1,25(OH)2D3 on expression of VDR, CYP27B1 (1α-hydroxylase) and CYP24A1 (24-hydroxylase) enzymes. Expression of VitD-related genes was determined by quantitative RT-PCR analysis. Levels of (A) Vdr, (B) Cyp27b1 and (C) Cyp24a1 mRNAs were quantified in C2C12 cells cultured in the absence or presence of 10−13, 10−11, 10−9, 10−7 or 10−5 M 1,25(OH)2D3 for 2, 4 or 6 days in adipogenic differentiation media. Expression at day 0 (before differentiation media and 1,25(OH)2D3 were added) is also included for reference and is indicated by a bar. A significant two-way interaction (P<0.001) between day of differentiation and 1,25(OH)2D3 concentration was observed for VDR only. There was a significant effect of day of differentiation (P=0.001) on expression of Cyp27b1 mRNA, but no effect of 1,25(OH)2D3 concentration. By contrast, there was a significant effect (P<0.001) of 1,25(OH)2D3 concentration on Cyp24a1 mRNA, but no effect of day of differentiation.
Physiological relevance
It is difficult to extrapolate from this study to the likely effects of VitD deficiency on the level of transdifferentiation of muscle precursor cells to adipocytes that occurs in muscle in vivo. Our findings suggest that low (deficient) levels (Zittermann et al. 2009) of 1,25(OH)2D3 (i.e. 10−13 M) may actually enhance adipogenic transdifferentiation, whereas high (sufficient) levels (i.e. 10−9 M) inhibit adipogenesis, thereby potentially impacting on fat infiltration and muscle function. A number of studies have demonstrated the ability of primary muscle cells to form adipocytes, but the mechanisms involved are not clear (Asakura et al. 2001, Csete et al. 2001, Aguiari et al. 2008). However, the C2C12 cell model is likely to be a conservative model of adipogenic transdifferentiation, as primary satellite cells isolated from pig muscle demonstrate a much higher degree of plasticity, with a greater number of adipocytes and a lower number of myotubes formed in response to adipogenic media (Redshaw et al. 2010). The exposure of myogenic precursor cells to adipogenic regulatory factors may be an important factor in contributing to the increased fat infiltration seen in muscle, for example during ageing and VitD deficiency, although infiltration by already committed adipocyte populations is also possible. VitD has been shown to play a key regulatory role in myogenesis and is likely to be important in muscle fibre repair (Capiati et al. 1999, Garcia et al. 2011). A speculative interpretation of the bimodal response of C2C12 cells to 1,25(OH)2D3 observed in vitro is that it represents an energy-conserving mechanism in vivo that has evolved in response to the changing seasons and enables extra energy to be repartitioned into fat depots. Certainly for our ancestors, a significant quantity of VitD was primarily obtained from exposure to u.v./sunlight, which induces the conversion of 7-dehydrocholesterol to VitD3. Low levels of VitD in the body, which possibly occur during periods of low u.v. exposure such as winter (Moosgaard et al. 2005, Aguiari et al. 2008), may act as an important regulatory cue in inducing muscle precursor cells to form adipocytes rather than myofibres and enable extra fat depots to be stored in the body during periods of austerity. This speculative hypothesis needs testing in an appropriate animal model. In conclusion, this is the first study to show that low physiological concentrations (0.1-10 pM or 10−13–10−11 M) of 1,25(OH)2D3, which may represent a VitD-deficient state, induce myoblasts to transdifferentiate into the adipogenic lineage, apparently involving activation of PPARγ2. These findings have implications for muscle health and function as well as whole-body energy metabolism because an increase in fat infiltration within skeletal muscle has been linked with a decrease in functional strength and impairment of glucose tolerance, leading to an increased susceptibility to obesity and type II diabetes (Goodpaster et al. 2003, Hilton et al. 2008). High concentrations (1 nM or 10−9 M and above) of 1,25(OH)2D3 appeared to block adipogenic transdifferentiation, suggesting that changes in physiological concentrations of 1,25(OH)2D3 have a major impact on the determination of cell fate of myogenic precursor cells. Furthermore, our data indicate that levels of 1,25(OH)2D3 in the serum and muscle are likely to be important biomarkers linking VitD intakes and optimal muscular health.
Given the widespread prevalence of VitD deficiency, particularly in the elderly population, understanding the role of this vitamin in muscle differentiation processes throughout life will be key to defining nutritional parameters for maintaining life-long health and well-being.
A Compact Two-Frequency Notch Filter for Millimeter Wave Plasma Diagnostics
Sensitive millimeter wave diagnostics in magnetic confinement plasma fusion experiments need protection from gyrotron stray radiation in the plasma vessel. Modern electron cyclotron resonance heating (ECRH) systems take advantage of multi-frequency gyrotrons. This means that the frequency band of some millimeter wave diagnostics contains more than one narrow-band gyrotron-frequency line, which needs to be effectively suppressed. A compact standard waveguide notch filter based on coupled waveguide resonators with rectangular cross-section is presented which can provide very high suppression of several gyrotron frequencies and has low insertion loss of the passband.
Introduction
Modern multi-frequency megawatt-class gyrotrons can operate at several frequencies in the millimeter wave range [1][2][3][4]. These frequencies correspond to the various transmission maxima of their single-disk chemical vapor deposition (CVD) diamond window. In recent years, high-power gyrotrons became very attractive as sources for electron cyclotron resonance heating (ECRH) systems in thermonuclear fusion plasma experiments, since they allow for more flexibility with respect to the applied magnetic field in the magnetic confinement fusion devices [5]. Sensitive millimeter wave diagnostic systems need protection against ECRH stray radiation [6]. When applying multi-frequency gyrotrons, more than one frequency must be suppressed. A specific problem concerning these filters is the frequency chirp of high-power gyrotrons due to cavity heating and expansion, specifically at the beginning of the pulse, which can range from tens to hundreds of MHz [7]. For ECRH systems applying many different gyrotrons, the notches need to be at least as wide as the specified accuracy provided by the gyrotron manufacturer, which is typically ± 250 MHz. These requirements are hard to fulfill with the available filter technology. Tunable coupled-cavity filters are difficult to tune for more than one specific stopband. Multi-frequency filters like optical Fabry-Perot resonance filters can provide various but only very narrow notches [6]. In principle, it is possible to switch from one single-frequency notch filter to another one, depending on the applied gyrotron frequency, or to connect two single-frequency notch filters in series, with the latter leading to an increase in the overall insertion loss. However, for application of two gyrotron frequencies, it is preferable to have a single filter providing two stopbands. Two-frequency notch filters for the electron cyclotron emission (ECE) diagnostic systems both at the ASDEX Upgrade tokamak experiment at IPP Garching and the W7-X stellarator at IPP Greifswald were successfully realized applying advanced waveguide Bragg reflectors [8,9]. In the present paper, a much simpler and compact approach based on standard waveguide resonators with rectangular cross-section [10][11][12][13] is presented. The filter described in this paper is designed for application in ECE systems which are operated over a wide frequency range. The width of the stopbands is not critical for ECE as long as they are wide enough to suppress all gyrotron frequencies of the ECRH system. There are other diagnostics, e.g., collective Thomson scattering systems, which would require a design with much narrower notches in order to allow for measurements very close to the gyrotron frequencies, but with no interest in wide passbands.
Filter Design
The goal of the present notch filter design is to reject the two frequencies of the ASDEX Upgrade and W7-X ECRH systems, which are 105 and 140 GHz. The filter should have no additional stopbands over the full waveguide band (D-band). This can be realized with in-waveguide resonators as described in [9], i.e., standard rectangular waveguides with symmetric steps in the width. In this case, there is no coupling to TM modes and an incident TE1,0 fundamental waveguide mode only couples to TE2m+1,0 modes, providing a very limited mode spectrum. To calculate the step-type coupling, a mode matching formalism [9] is applied, taking into account both propagating and evanescent modes. As an example, Fig. 1 shows the coupling at a symmetric step from the fundamental waveguide with width a = 1.651 mm (WR-06) to the widened resonator waveguide. A resonant cavity at both 105 and 140 GHz can be created by a stepped waveguide where the length l_r of the step corresponds to approximately half of the guide wavelength of the TE30 mode at 105 GHz and of the TE50 mode at 140 GHz. The optimum dimensions of the waveguide cavity can be determined by calculating the power transmission at both frequencies as a function of both the width a_r and the length l_r of a single cavity. Figure 2 shows the transmission minima at both frequencies; the red lines mark the domains where the transmitted power is below −23 dB, and the arrow marks the point where the minima for both frequencies meet and therefore gives the correct dimensions of the desired cavity (a_r = 6.72 mm, b_r = 0.826 mm, l_r = 1.68 mm). Figure 3 shows the calculated frequency characteristic of the resonator with only two notches over the full D-band. The modal amplitude distributions along the waveguide circuit at the 105 and 140 GHz resonances are plotted in Fig. 4.
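The half-guide-wavelength condition quoted above can be checked with a short back-of-envelope calculation. The sketch below is not the mode-matching code used for the design; it simply evaluates the TE_m0 guide wavelength for the quoted resonator width a_r = 6.72 mm and compares the half-wave estimate with the optimized cavity length.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def guide_wavelength_mm(freq_ghz, width_mm, m):
    """Guide wavelength of the TE_m0 mode in a rectangular waveguide of given width."""
    lam0 = C / (freq_ghz * 1e9) * 1e3          # free-space wavelength in mm
    lam_cutoff = 2.0 * width_mm / m            # TE_m0 cutoff wavelength
    return lam0 / math.sqrt(1.0 - (lam0 / lam_cutoff) ** 2)

a_r = 6.72  # resonator width in mm (value quoted in the text)

# Half-guide-wavelength estimate of the cavity length for both resonances
for freq, m in [(105.0, 3), (140.0, 5)]:
    l_half = 0.5 * guide_wavelength_mm(freq, a_r, m)
    print(f"TE{m}0 at {freq:.0f} GHz: l_r ~ {l_half:.2f} mm")

# Both estimates come out near 1.8 mm, close to the l_r = 1.68 mm found by the
# full mode-matching optimization; the remaining difference reflects the
# reactive loading of the symmetric steps, which the simple half-wave
# condition ignores.
```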
As can be seen from Fig. 4, there exist resonant mode mixtures with the dominant modes being TE30 at 105 GHz and TE50 at 140 GHz, but evanescent TEm0 cutoff modes also have to be taken into account in the generalized scattering matrix calculations. The resonant electric field distributions over the horizontal waveguide cross-section at the two resonances are plotted in Fig. 5.
Multi-Cavity Filter
The signal suppression in the two stopbands can be further increased by combining several cavities. Figure 6 shows the geometry of a filter with 5 resonators, where each cavity has the same dimensions. Figure 7 presents a comparison of the frequency characteristics of filters with a different number of cavities. Both the width of the stopbands and the steepness of the slopes increase with an increasing number of cavities. The transmission coefficient in the frequency range below 110 GHz, which is below the D-band and close to cutoff for the propagating TE10 mode, shows a strong dependence on the varying reflection. This is probably due to the varying standing wave ratio in the waveguide.
With 5 cavities, the theoretical notch depth at both resonant frequencies (105 and 140 GHz) is below −120 dB.
Measurements
A 5-cavity filter was fabricated in split-block technology and tested. The filter geometry has been cut out of a sheet metal plate with a thickness of 0.826 mm, corresponding to the height of a standard WR-06 waveguide. The plate has been sandwiched between two metallic blocks to form the filter geometry. Figure 8 shows photos of the filter and its separate parts. The material is brass. First measurements with the assembled notch filter were performed in the frequency band 117.5-150 GHz using a GYCOM GGBWO-78/178 BWO as source and an ELMIKA D-band detector diode as receiver (Fig. 9). The first measurement revealed a higher insertion loss of about 10 dB compared with theory as well as a broadened and downshifted upper notch around 140 GHz (red curve). After polishing and reassembling of the filter parts, the insertion loss was reduced by 5 dB and the width of the stopband narrowed (green curve). To further reduce the surface losses and improve the contact between the center plate and both the upper and lower block, all the parts were gold plated and assembled again. The measurement with the gold-plated filter is given by the black curve, which is again closer to the theoretical values (blue curve). Since the dynamic range of these measurements was only about −40 dB, we performed additional measurements at IGVP Stuttgart using an ABmm 8-350 Vector Network Analyzer (VNA) in the frequency range from 96 to 156 GHz and at KIT Karlsruhe using a PNA N5222B VNA from 135 to 170 GHz. These measurements are shown in Fig. 10 together with the theoretical frequency characteristic. Both measurements are in excellent agreement and their frequency dependence compares well with the calculation. The dynamic range of the ABmm VNA is limited to about −60 dB around 140 GHz, where its background noise dominates, while the dynamic range of the KIT measurement was −80 dB. Half of the conductivity of ideal gold was assumed in the calculation to account for surface roughness. However, the measured insertion loss is still somewhat higher than calculated (3-4 dB in the range 110-140 GHz and about 2 dB between 140 and 170 GHz). This is most probably due to imperfect contact between the two blocks and the center sheet over the common surface in this split-block construction, also leading to a downshift of the resonance frequencies (see also Fig. 9). Figures 11 and 12 give more details of the two rejection bands around 105 and 140 GHz. The frequency range of all 8 ASDEX Upgrade gyrotrons, including their frequency chirps and drifts during the pulse, is indicated by the dashed lines. All gyrotron frequencies are within the stopbands.
Summary

A compact rectangular waveguide two-frequency notch filter has been designed, fabricated, and tested. The filter provides suppression of two narrow frequency intervals in the D-band. This could be realized by introducing symmetric steps in the width of a standard WR-06 waveguide. The measured notch depth is about −60 dB at 105 GHz and −80 dB at 140 GHz. No additional tuning was required.
Clinical Outcomes of Percutaneous Coronary Intervention for Bifurcation Lesions According to Medina Classification
Background Coronary bifurcation lesions (CBLs) are frequently encountered in clinical practice and are associated with worse outcomes after percutaneous coronary intervention. However, there are limited data around the prognostic impact of different CBL distributions. Methods and Results All CBL percutaneous coronary intervention procedures from the prospective e‐Ultimaster (Prospective, Single‐Arm, Multi Centre Observations Ultimaster Des Registry) multicenter international registry were analyzed according to CBL distribution as defined by the Medina classification. Cox proportional hazards models were used to compare the hazard ratio (HR) of the primary outcome, 1‐year target lesion failure (composite of cardiac death, target vessel–related myocardial infarction, and clinically driven target lesion revascularization), and its individual components between Medina subtypes using Medina 1.0.0 as the reference category. A total of 4003 CBL procedures were included. The most prevalent Medina subtypes were 1.1.1 (35.5%) and 1.1.0 (26.8%), whereas the least prevalent was 0.0.1 (3.5%). Overall, there were no significant differences in patient and procedural characteristics among Medina subtypes. Only Medina 1.1.1 and 0.0.1 subtypes were associated with increased target lesion failure (HR, 2.6 [95% CI, 1.3–5.5] and HR, 4.0 [95% CI, 1.6–9.0], respectively) at 1 year, compared with Medina 1.0.0, prompted by clinically driven target lesion revascularization (HR, 3.1 [95% CI, 1.1–8.6] and HR, 4.6 [95% CI, 1.3–16.0], respectively) as well as cardiac death in Medina 0.0.1 (HR, 4.7 [95% CI, 1.0–21.6]). No differences in secondary outcomes were observed between Medina subtypes. Conclusions In a large multicenter registry analysis of coronary bifurcation percutaneous coronary intervention procedures, we demonstrate prognostic differences in 1‐year outcomes between different CBL distributions, with Medina 1.1.1 and 0.0.1 subtypes associated with an increased risk of target lesion failure.
Coronary bifurcation lesions (CBLs) are some of the most challenging and frequently encountered lesion subsets in interventional practice, representing nearly 20% of all percutaneous coronary intervention (PCI) procedures. 1,2 Although several classifications of CBLs exist, the Medina classification, endorsed by major bodies such as the European Bifurcation Club, is the most widely used. [2][3][4][5] This classification assigns a binary value (0 or 1) to the proximal and distal main branches (MBs) as well as the side branch (SB), in that respective order, based on the presence (1) or absence (0) of significant plaque burden (≥50% stenosis) in that vascular segment. 4 Furthermore, CBLs can be classified into true bifurcation lesions, if both the MB and SB have significant stenosis, and nontrue bifurcation lesions if either the MB or SB is not significantly stenosed.
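As a concrete reading of this convention (illustrative only; the helper below is not from the paper), a Medina code can be mapped to true/nontrue bifurcation status as follows:

```python
# Minimal sketch of the Medina convention described above (illustrative only).
# A Medina code lists proximal main branch, distal main branch, side branch,
# each set to 1 if that segment carries >=50% stenosis and 0 otherwise.

def is_true_bifurcation(medina: str) -> bool:
    """True bifurcation: the side branch and at least one main-branch segment are diseased."""
    proximal, distal, side = (int(x) for x in medina.split("."))
    return side == 1 and (proximal == 1 or distal == 1)

if __name__ == "__main__":
    for code in ("1.1.1", "0.1.1", "1.0.1", "1.1.0", "1.0.0", "0.1.0", "0.0.1"):
        kind = "true" if is_true_bifurcation(code) else "nontrue"
        print(f"Medina {code}: {kind} bifurcation")
```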
Although previous studies have examined clinical outcomes of CBL PCI according to lesion complexity (simple versus complex bifurcations) or true versus nontrue bifurcations, [6][7][8][9] none have systematically examined the prognostic impact of disease distribution within the CBL on mid- or long-term outcomes following PCI. [10][11][12] The present study sought to compare the impact of disease distribution according to Medina classification on 1-year clinical outcomes adjudicated by an independent clinical events committee among patients undergoing bifurcation PCI within the e-Ultimaster (Prospective, Single-Arm, Multi Centre Observations Ultimaster Des Registry).
METHODS

Study Data Set
The e-Ultimaster study is a large, international, prospective observational study that enrolled 37 198 patients between October 2014 and June 2018 in 378 hospitals from 50 countries including sites in Europe, Asia, Africa, the Middle East, South America, and Mexico. Eligibility criteria were minimal to enroll an all-comer population, and included (1) age ≥18 years and (2) an indication for a PCI according to routine hospital practice. The Ultimaster sirolimus-eluting coronary stent (Terumo, Tokyo, Japan) is made of cobalt-chromium with a strut thickness of 80 μm. Sirolimus is released from an abluminally applied bioresorbable polymer coating (poly-D,L-lactic acid polycaprolactone) that is fully metabolized through dl-lactide and caprolactone into carbon dioxide and water in 3 to 4 months. The study was approved by the ethical committees of the participating sites, and all patients provided written informed consent. The clinicaltrials.gov identifier is NCT02188355. The data used for the purpose of this study are only available to designated researchers and cannot be shared with other researchers. However, all efforts were made to describe the methods in detail.
CLINICAL PERSPECTIVE

What Is New?

• This is the first study to examine the prognostic difference in 1-year clinical outcomes after coronary bifurcation lesion percutaneous coronary intervention according to Medina classification.

Study Population and Follow-Up

All patients with a single CBL at the index procedure from the e-Ultimaster study were included, stratified into 7 permutations according to their Medina classification (true bifurcations: 0.1.1, 1.0.1, 1.1.1; nontrue bifurcations: 0.0.1, 0.1.0, 1.0.0, 1.1.0) (Figure S1). There were no restrictions on the clinical indication, the number of lesions to be treated, or the number of stents to be implanted. Because the Medina classification was not available for 56 (1.4%) patients, they were excluded from our analysis. Follow-up was up to 1 year, except for patients in whom no Ultimaster stent was implanted, for whom follow-up was only available to discharge. The event rates (see Outcomes) at 1-year follow-up were calculated based upon the number of patients who had a 1-year follow-up visit or experienced a clinical event.
All primary outcome-related adverse events were adjudicated by an independent clinical events committee.
Outcomes
The primary outcome measure was target lesion failure (TLF), defined as the composite of cardiac death, target vessel-related myocardial infarction, and clinically driven target lesion revascularization. Secondary outcomes included target vessel failure (TVF), a composite of cardiac death, target vessel myocardial infarction, and clinically driven target vessel revascularization, and the patient-oriented composite end point, defined as a composite of all-cause death, any myocardial infarction (MI), and any revascularization. For MI, the extended historical definition was applied, which primarily uses creatine kinase myocardial band, or if not available troponin, as the cardiac biomarker criterion. 13 All deaths, MI, target lesion revascularizations (TLRs) or target vessel revascularizations, and stent thrombosis were adjudicated by an independent clinical events committee.
Statistical Analysis
Continuous variables are reported as mean along with SD and compared using ANOVA, whereas categorical variables are reported as frequency and percentage and compared using the χ2 test. Analyses were performed with SAS version 9.4 software (SAS Institute, Cary, NC). Cox proportional hazards models were performed using Medina 1.0.0 as the reference category to assess the hazard ratio (HR) of 1-year outcomes across different Medina subtypes, adjusting for PCI indication (chronic coronary syndrome, unstable angina, non-ST-segment-elevation myocardial infarction, and ST-segment-elevation myocardial infarction), CBL vessel location (right coronary artery, left main stem, left anterior descending artery, and left circumflex artery), radial access, stenting strategy (1 versus 2 stents), and use of intracoronary imaging and/or proximal optimization technique.
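The analysis itself was run in SAS; purely as an illustration of the model structure (the file name, data frame and column names are invented for this sketch), an equivalent Cox model with Medina 1.0.0 as the reference category could look like this in Python:

```python
# Minimal sketch, not the authors' SAS code: Cox model with Medina 1.0.0 as the
# reference category. The file name and column names are assumptions.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("ultimaster_cbl.csv")  # hypothetical per-patient extract

# One-hot encode the Medina subtype and drop the 1.0.0 column so it acts as reference.
medina = pd.get_dummies(df["medina"], prefix="medina", dtype=float)
medina = medina.drop(columns=["medina_1.0.0"])

# Adjustment covariates named in the text (encodings are assumed).
covariates = pd.get_dummies(
    df[["pci_indication", "vessel_location", "radial_access",
        "two_stent_strategy", "imaging_or_pot"]],
    drop_first=True, dtype=float,
)

model_df = pd.concat([df[["time_to_tlf_days", "tlf_event"]], medina, covariates], axis=1)

cph = CoxPHFitter()
cph.fit(model_df, duration_col="time_to_tlf_days", event_col="tlf_event")
cph.print_summary()  # hazard ratios for each Medina subtype versus 1.0.0
```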
RESULTS
A total of 4003 patients undergoing PCI for CBLs at the index procedure were included in the analysis. The most prevalent lesion subtypes were Medina 1.1.1 (n=1420, 35.5%) and Medina 1.1.0 (n=1064, 26.8%), whereas the least prevalent subtype was Medina 0.0.1 (n=137, 3.5%) (Figure 1). In terms of procedural characteristics, the rate of use of intracoronary imaging was highest in the Medina 1.1.0 subtype (14.5%) and lowest in the Medina 1.0.1 subtype (7.6%) (Table 1). Compared with Medina 1.0.0, only the Medina 1.1.1 and 0.0.1 subtypes were associated with an increased risk of target lesion failure at 1 year (Figure 3). No other differences in risk of adverse clinical outcomes (target vessel MI, cardiac death, and stent thrombosis) were observed between Medina 1.0.0 and other Medina subtypes.
Subgroup Analysis
Comparison by Medina subtype found no difference between stenting techniques (1 versus 2 stents) except
DISCUSSION
The present study is the first to outline the impact of lesion distribution according to Medina classification on 1-year clinical outcomes among >4000 patients undergoing CBL PCI using the same contemporary stent platform. Our findings can be summarized as follows (Figure 4). First, we found that Medina 1.1.1 (35.5%) and Medina 1.1.0 (26.8%) were the most prevalent CBL distributions, whereas Medina 0.0.1 (3.5%) was the least prevalent. Second, no clinically meaningful differences in patient characteristics were observed following stratification for Medina subtypes. Despite this, PCI of the Medina 1.1.1 and 0.0.1 subtypes was associated with significantly higher crude rates of TLF compared with all other bifurcation distributions and was independently associated with an increased hazard of TLF (2.6- and 4-fold, respectively) at 1 year.
Although previous studies have examined differences in procedural outcomes between true and nontrue CBLs, there are limited data on the prognostic impact of disease distribution in CBLs. [10][11][12] Our findings highlight differences in 1-year outcomes between CBL distributions, with the greatest hazard for TLF and TVF observed among Medina 1.1.1 and Medina 0.0.1, even after adjustment for baseline differences between Medina subtypes. Given the limited previous literature comparing outcomes between CBL lesion distributions, it is difficult to place our findings within the context of other studies. A study of 2897 patients undergoing CBL PCI reported higher crude rates of major acute cardiovascular events and cardiac death/MI among specific Medina subtypes (Medina 1.1.1: 12.4% and 4.5%, respectively; Medina 0.1.1: 13.9% and 3.7%, respectively). 10 However, the authors did not compare outcomes between different Medina subtypes except within true CBL groups (Medina 1.1.1 and Medina 0.1.1 versus Medina 1.0.1) and reported no differences between these groups. Furthermore, their findings were derived from an old procedural cohort (enrolled between 2003 and 2009), managed with older (first and second) generation drug-eluting stents. In contrast, Todara et al reported no difference in in-hospital and 12-month cardiac death, TLR, and reinfarction between Medina 1.1.1 and all other subtypes combined. 11 However, their analysis was based on a small sample size (n=120, including n=25 with Medina 1.1.1) from an old procedural cohort (2005-2006), where 25% of patients were not managed with a drug-eluting stent.
The finding that 0.0.1 isolated SB disease has the highest hazard of TLF is interesting and deserves further comment. Our subgroup analysis suggests that the worse outcomes associated with 0.0.1 are observed irrespective of whether a single- or 2-stent approach is used. Isolated SB disease is complex to treat, because lesions are often fibrocalcific with significant recoil, which makes achieving a good minimal luminal area challenging, particularly if the lesions are not adequately prepared. 14,15 Furthermore, isolated SB lesions often supply a limited myocardial territory (<10%), with only 1 in 5 non-left main artery SB lesions shown to supply >10% fractional myocardial mass in a multicenter registry analysis of 482 patients undergoing computed tomography coronary angiography. 16 Isolated SB lesions are also difficult to treat because identification of the ostium may be challenging, resulting in geographical or ostial miss, or compromise of the main vessel, which may account for the 27% of isolated SB lesions treated with a 2-stent approach. Furthermore, stent underexpansion or recoil, smaller SB vessel size in relation to the MB, and SB length may all contribute to the worse outcomes observed in this group. [17][18][19] The mean stent diameter for SB 0.0.1 lesions in our study was 2.79 mm compared with >3 mm used to treat MB lesions. Consequently, many operators may choose to treat these lesions conservatively or use alternative interventional strategies such as drug-eluting balloons, although there is limited evidence to support the latter. An observational study of 49 patients with Medina 0.0.1 lesions and associated ischemia showed that careful predilatation with a cutting balloon followed by drug-eluting balloon use was sufficient in the majority of cases, with only 14% requiring stent implantation for acute recoil/coronary dissection. 20 Nevertheless, their reported rate of TLF was still high at 14%, 1 year after the procedure. The present findings highlight the complexity of this lesion subset, for which there are limited evidence-based therapies, and emphasize the need for prospective work around alternative management strategies for isolated SB lesions.
Limitations
There are several limitations to the present study. First, the outcomes that we report are only in patients treated with PCI; our data do not inform on outcomes of lesions treated medically, which is particularly relevant for isolated SB lesions. Second, although the events in our study were independently adjudicated, it is possible that some events were underreported. However, measures to preserve the quality of data reporting, including on-site and remote monitoring, and close communication with the participating sites were in place. Furthermore, the Medina classification was identified based on data submitted in the electronic case report form and not assessed from a core laboratory. Third, some of the CIs in our results are wide because of the relatively small sample size for individual Medina subgroups. Finally, our findings are based on 1-year outcomes, and it is possible that further prognostic differences are observed between Medina subtypes on longer follow-up.

Figure S1. Bifurcation subtypes as per the Medina classification.
Genital self-amputation—its psychological urge
Abstract Genital self-mutilation (GSM) is a rare phenomenon that can occur in patients with severe mental health illness. This case report highlights a rare case of self-inflicted bilateral testicular amputation and partial penile amputation in a patient who is a transwoman with a psychiatric history. The patient initially presented to urology in extremis with bilateral testicular amputation. The patient was resuscitated but required emergency surgery in the form of bilateral inguinal approach to ligate the cord and control haemostasis. The testes were not re-implanted as the patient refused and, after psychiatric discussion, was deemed to have capacity. She then re-presented within a week with self-inflicted partial amputation of penis. On both admissions, the patient had psychiatric evaluation but she was sectioned under the mental health act the second time. This case demonstrates how one can control haemostasis in the emergency scenarios of GSM and emphasizes the importance of psychiatric illness and evaluation in patients presenting with GSM.
INTRODUCTION
Genital self-mutilation (GSM) in males is one of the rarest urological presentations in the world. GSM occurs in various forms, from simple laceration to complete bilateral testicular amputation and even penile amputation. It causes major functional and psychological consequences for the patient's overall quality of life. To date, just over a hundred cases have been recorded in the literature, dating back to the 20th century [1][2][3][4].
Although the British Association of Urological Surgeons (BAUS) consensus statement suggests attempted re-implantation in centres with micro-vascular surgical skills and in patients presenting within 12 hours, this is often not feasible due to the patient being unstable, a lack of time for safe transfer, and a lack of expertise [5].
CASE REPORT
A Caucasian transwoman was referred from the Emergency Department to Urology with self-amputation of both testicles. The patient had identified as female for many years, undergone hormone manipulation, and had a significant psychiatric background with episodes of severe depression, self-mutilation, alcohol abuse and suicidal ideation causing previous admissions under the mental health act.
The patient had previously seen the gender reassignment team, but they had refused immediate surgery on the basis of her alcohol intake and unstable mental health, instead arranging further review after a period of abstinence and stability. Despite abstinence and a period of stability for the prior 6 months, the patient did not see any other way out than GSM. She researched and obtained the necessary tools (local anaesthetic, surgical scissors and a scalpel) from the internet in preparation and, over the course of a few weeks, had been injecting local anaesthetic into her scrotum to ensure adequate numbing to allow a scalpel incision.
On the day of this incident, the patient attended the Emergency Department (ED) stating her intention to perform GSM, where she was referred to the crisis team for help, but she chose not to wait and left. She denied any thought of suicide; her intention was only to remove her genitals. She had approached the ED multiple times before with the same intention but had never acted upon it.
Later that day, after infiltrating local anaesthetic and using a scalpel bought online, she made an incision in her hemiscrotum to deliver the testicle, which was then cut flush with the scalpel and removed. The process was then repeated on the contralateral side. As neither cord had been tied, she began to exsanguinate, at which point she phoned for emergency help. The patient did not regret performing the act and it was not done with suicidal intent, but after seeing the volume of blood she was concerned for her life.
In the hospital, the patient was resuscitated in the ED until haemodynamically stable, a compressive dressing was applied, and she was referred to the urology team. The patient received a stat dose of intravenous antibiotics, tranexamic acid and 1 unit of blood transfusion.
The patient was reviewed and observations were by then stable. She stated she was very frustrated with waiting for her gender reassignment surgery and hence did it herself. The patient stated she would never want her testes back and, if re-implantation were attempted, she would remove them again. The risks and benefits of exploration, haemostasis and re-implantation of the testes were thoroughly discussed. The patient understood this and expressed her wish for her life to be saved by controlling the bleeding, but she did not consent to attempted re-implantation, nor did she want sperm cryo-preservation. She would not consent to surgery if we were to attempt re-implantation of her testes.
We had a telephone consultation with the on-call psychiatrist about the capacity of the patient. A key component of this decision was that she had not attempted to take her own life. The patient had ongoing severe bleeding that required urgent surgical intervention and was consenting to life-saving surgery but not attempted re-implantation. The psychiatric team agreed the patient had capacity and advised proceeding to surgery without attempted re-implantation; there was currently no need for sectioning.
The patient underwent scrotal exploration under general anaesthesia. On inspection, a grossly enlarged scrotum and two cuts were present, with clots filling the scrotum. The scrotal wounds were extended, clots were evacuated, and severe oedema of the cord and connective tissues was seen with ongoing bleeding, causing difficulty in identifying the retracted cord. It was then decided to use an inguinal approach in order to quickly control the bleeding: the cord was identified and double ligated at the deep inguinal ring, and the same was performed on the contralateral side. The scrotal wound was washed and closed in layers, leaving a corrugated drain.
During the hospital stay, the patient expressed wishes to perform self-amputation of her penis. She was then referred to the Crisis team, who advised restarting her antipsychotics and counselled her. The patient wanted to self-discharge on day 5, but the urology team felt that the patient was a risk to herself. The Crisis team was informed about this decision and advised that the patient could be discharged as she had capacity to make this decision.
Two days later, the patient returned with an attempted amputation of the penis. The injuries were only superficial and she had degloved the penis without any other significant injury. The bleeding had settled and there was no threat to life. The patient initially refused any surgical intervention, but later gave consent for a repair. After being reviewed by the psychiatric team, she was sectioned under the mental health act for compulsory psychiatric evaluation.
DISCUSSION
GSM is a rare phenomenon that mostly occurs within the background of severe mental health illness. The term Klingsor Syndrome has been suggested for GSM associated with religious delusions [6]. Klingsor was a magician who wanted to be accepted as a Knight of the Grail, a religious brotherhood. He castrated himself because of his inability to remain chaste to be accepted into this brotherhood [7,8].
One of the issues with this patient was her waiting period for transgender surgical treatment: she felt that it was unfair and that she could not wait any longer, and being a transgender woman, with the residual social stigma around the subject and the everyday persecution she felt, she saw no way out other than to perform GSM.
Treatment of GSM necessitates a multidisciplinary approach, usually between the urologist, psychiatrist, psychologist and the primary care physicians. The main goals of surgical treatment are life-saving measures as the first imperative, followed by restoration of anatomical continuity and function as far as possible.
In this scenario, one of the more challenging surgical aspects was control of haemostasis, hence the need for inguinal incisions. In terms of testicular genital mutilation not undergoing attempted re-implantation, if the cord is not visible due to oedema or retraction, then an inguinal approach may be necessary to quickly and safely control the cord pedicle and bleeding, as in this reported case.
Furthermore, although BAUS consensus statement recommends attempted re-implantation, in this case, the cord was retracted and oedematous, and without microsurgical expertise, this would be unlikely to be achievable [5]. Certainly, if reimplantation was to be attempted, it would have to be in a stable patient and life-saving treatment should not be delayed to allow the transfer to a centre with such expertise.
Though surgical treatment is very important, addressing mental health issue is another important modality of treatment to avoid readmission and further patient harm. Patients presenting acutely with GSM provide a challenge in assessing the capacity for surgical management and require strong MDT working and documentation to ensure they receive the correct care in a timely manner.
CONCLUSION
GSM is an infrequent form of non-suicidal self-injury that occurs within a spectrum of severity. When dealing with a case of GSM, clinicians should explore the motivations underlying the injury and consider a variety of psychiatric diagnoses along with the surgical diagnosis. Even though the patient may not want re-implantation, it should be discussed and documented thoroughly, as the patient may not have capacity at the time of the incident, and capacity should always be formally assessed. Obtaining a psychiatric opinion before the procedure is important but should not delay any life-saving surgery. The inguinal approach can be used in cases of ongoing bleeding to control haemostasis quickly and safely. The patient should receive a full psychiatric evaluation before discharge in order to prevent further episodes of GSM [9].
Corneal Biomechanics Assessment Using High Frequency Ultrasound B-Mode Imaging
Assessment of corneal biomechanics pre- and post-refractive surgery is of great clinical importance. Corneal biomechanics affect the vision quality of the human eye and are influenced by many factors such as age, corneal diseases and corneal refractive surgery. There is a need for non-invasive in-vivo measurement of corneal biomechanics, which preserves corneal shape, as opposed to ex-vivo measurements that destroy corneal tissue. In this study, a new approach for assessing corneal biomechanics in vivo and non-invasively is proposed, using an ultrasound estimation method with a 100 kHz frame rate. Three models are used in conjunction with each other to study the biomechanical behavior of corneal tissue pre- and post-refractive surgery: a cornea FEM, a cornea scatterer model and an ultrasound transducer model. Nine corneal models with different elastic moduli are constructed to achieve this goal: 140 kPa, 300 kPa, 600 kPa and 800 kPa as post-refractive surgery models and 1 MPa, 1.5 MPa, 2 MPa, 2.5 MPa and 3 MPa as pre-refractive surgery models. Time-To-Peak (TTP) deformation, deformation amplitude (DA), deformation amplitude ratio at 2 mm (DA ratio 2 mm) and shear wave speed (SWS) are estimated for each of the nine corneal models in this study. Results show that TTP decreases (from 6.9 ms at 140 kPa to 5.3 ms at 3 MPa) as the elastic modulus of corneal tissue increases. DA also decreases (from 2 mm at 140 kPa to 0.5 mm at 3 MPa) with increasing elastic modulus. The DA ratio likewise decreases with increasing elastic modulus, reflecting the difficulty of the corneal tissue to deform uniformly at higher elastic moduli. The estimated SWS shows an average accuracy of 98% relative to the theoretical SWS.
I. INTRODUCTION
The cornea is the transparent component of the human ocular system that acts as a protective envelope for other ocular components such as the lens. It preserves the shape of the human eye and accounts for most of its refractive power [1], [2]. Shape, transparency and uniformity of the human cornea are important factors when assessing its role in vision. However, human vision degradation is caused by alteration of corneal shape due to age, corneal diseases and refractive surgery [1]. Developing post-surgery ectasia is common among patients undergoing refractive surgeries. Hence, assessing corneal biomechanics post-refractive surgery is of great importance to ophthalmologists [3]-[5]. Refractive surgery outcomes depend on corneal biomechanics [6]-[9].
Corneal behavior has been extensively investigated using many models and techniques [10]. Age-related factors are studied in [11]-[13]. X-ray diffraction is used to investigate the inter-fibril spacing of stromal collagen fibrils and the inter-molecular spacing of collagen molecules in [11]. The inter-fibril spacing of stromal collagen is found to decrease with age, as reported in [12]. In the work presented in [13], the diameter of collagen fibrils is observed to be age dependent, with a change of about 9% in subjects older than 65 years compared to those under 65 years. Corneal stiffness in relation to age is investigated in [14] and [15], where the cornea is found to become stiffer with age.
In terms of refractive surgery, correction of myopic astigmatism is achieved by small-incision lenticule extraction (SMILE) [15]. SMILE is claimed to have minimal effect on corneal biomechanics compared to other refractive surgeries such as LASIK (laser-assisted in-situ keratomileusis) and femto-LASIK (femto-second laser-assisted in-situ keratomileusis) [16], [17]. SMILE has shown less effect on corneal biomechanics than femto-LASIK [17]. Corneal safety and sensitivity are studied in [18]-[20]. LASEK (laser-assisted sub-epithelial keratectomy) is reported to have less effect on corneal biomechanics, with minimal risk of developing ectasia as well [21]. However, some cases are reported to have corneal biomechanics alteration after LASEK [22]. Moreover, both LASEK and SMILE are reported to induce changes in corneal biomechanics, where SMILE has the advantage of inducing fewer biomechanical changes than LASEK due to the preservation of the stiffer anterior stroma [6].
Ultrasound techniques are used to estimate corneal biomechanics. The main algorithm for estimating soft tissue biomechanics is to acquire a stack of deformed frames of the tissue along with a reference frame utilizing high frame rate B-mode imaging [23], [24]. A transient ultrasound acoustic radiation force impulse (ARFI) is applied by means of ultrasound focusing techniques [25]. This ARFI generates a deformation wave inside the tissue under study whose propagation is governed by the tissue's mechanical properties, such as Young's modulus (E) and shear modulus (µ). The induced shear wave speed is related to the tissue shear modulus by Eqns. (1) and (2):

C = √(µ / ρ)   (1)

µ = E / (2 (1 + ϑ))   (2)

where C is the shear wave speed, µ is the shear modulus, ρ is the density in kg/m³, E is the Young's modulus and ϑ is the Poisson ratio of the tissue under investigation. Supersonic Time-Of-Flight (TOF) [26], [27], lateral Time-To-Peak (TTP) [28], [29] or cross-correlation [29]-[32] algorithms are implemented to estimate the speed of the resulting tissue deformation wave, i.e. the shear wave speed (SWS).
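A minimal numerical sketch of Eqns. (1) and (2) follows; the density and Poisson ratio are typical soft-tissue assumptions rather than values quoted in this excerpt:

```python
# Minimal sketch of Eqns. (1)-(2): shear wave speed from elastic constants.
# The density and Poisson ratio below are assumed soft-tissue values.
import math

def shear_modulus(young_modulus_pa: float, poisson_ratio: float) -> float:
    """mu = E / (2 (1 + nu))."""
    return young_modulus_pa / (2.0 * (1.0 + poisson_ratio))

def shear_wave_speed(young_modulus_pa: float, poisson_ratio: float, density_kg_m3: float) -> float:
    """C = sqrt(mu / rho)."""
    return math.sqrt(shear_modulus(young_modulus_pa, poisson_ratio) / density_kg_m3)

if __name__ == "__main__":
    rho = 1000.0   # assumed corneal density, kg/m^3
    nu = 0.49      # assumed near-incompressible Poisson ratio
    for e_kpa in (140, 300, 600, 800, 1000, 1500, 2000, 2500, 3000):
        c = shear_wave_speed(e_kpa * 1e3, nu, rho)
        print(f"E = {e_kpa:5d} kPa -> theoretical SWS = {c:5.2f} m/s")
```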
To date, there is no non-invasive clinical in-vivo ultrasound device that assesses corneal biomechanics as a single modality combining the pushing element and the imaging element. All available approaches are still proofs of concept and utilize two different elements, an external one for generating the push and another for capturing the deformed frames [33]. Moreover, these approaches do not give a spatial distribution of corneal biomechanics; instead they give a mean estimate for the whole corneal tissue area.

In our research, we propose a new technique where the same ultrasound transducer is used to generate a highly localized ARFI, acting as an internal actuator to induce tissue deformation, and to perform the B-mode imaging procedure that captures the propagation of the deformation wave. In addition, the investigated zone of the cornea can be changed continuously to cover the whole corneal area and obtain a spatial distribution of corneal biomechanics.
In this paper, a 3D FEM in conjunction with a 2D scatterer model of the human cornea and an ultrasound transducer model are implemented to study the effect of refractive surgery on the biomechanics of corneal tissue using ultrasound B-mode imaging. Different elastic moduli are assigned to the 3D FEM to simulate different corneal tissue biomechanics [1], [27], [34], [35]. Cornea dimensions are adopted from the medical literature for the average human cornea [36], [37].
The paper is organized as follows. Section I gives an introduction to the cornea, the effect of its biomechanics and related refractive surgeries, and a brief survey of ultrasound techniques used for assessing corneal biomechanics. Section II gives a full description of the proposed approach. Section III describes the methodology, how the three proposed models interact with each other, the acquisition sequence and the shear wave speed estimation method. Experimental results are reported in Section IV. Finally, the discussion and conclusion are presented in Sections V and VI, respectively.
II. THE PROPOSED APPROACH
Our proposed system is visualized in Fig. 1(a), where the ultrasound transducer is placed on top of the human eyelid [38]. The same ultrasound transducer is used to generate the ARFI and to perform the B-mode imaging procedure. The ultrasound transducer is fixed in the same position during the whole corneal biomechanics estimation process. At the beginning of the estimation process, the transducer is used to generate the ARFI at the corneal apex; then the ordinary B-mode imaging procedure is initiated to capture the departing deformation wave across the lateral direction.

The proposed algorithm for estimating corneal biomechanics for each elastic modulus is presented in the block diagram shown in Fig. 1(b). It shows three main blocks: the first represents the mechanical model of the cornea, the second represents the ultrasound transducer model and the third represents the scatterer model of the cornea. The three models interact mutually with each other to produce a deformed video stack of the cornea.
A. CORNEA MECHANICAL FEM
A vertical cross section of the human cornea is modeled as a 3D FEM and shown in Fig. 2(a). The model dimensions are assigned to be equal to those of the average human cornea, whose thickness ranges from 440 µm to 650 µm with an average of 540±30 µm [40], [41]. The mechanical properties are set to those of the cornea pre- and post-refractive surgery and are listed in Table 1. A triangular mesh is used by the COMSOL software to mesh the volume of the cornea model. The mesh is set to be finer in the volume of interest to obtain accurate results and coarser in the remaining volume to speed up the processing time. The cornea mesh is shown in Fig. 2(b).
B. FEM DRAWING
The human cornea is modeled using FEM as a vertical cross section to simulate its behavior pre- and post-refractive surgery. Corneal biomechanics change from pre-refractive to post-refractive surgery in terms of Young's modulus, and this change affects the cornea's response to externally applied forces. The FEM is constructed from two intersecting shapes, a circle and an ellipse, in the 2D plane. The intersection is then revolved around the z-axis to construct the volume of the model. The complete model drawing in the 2D plane is presented in Fig. 3 and its dimensions are listed in Table 2.
C. FEM SOLID MECHANICS
An isotropic linear visco-elastic approach is adopted for this model as it simulates the Kelvin-Voigt mathematical model efficiently in Finite Element Analysis (FEA) software. The peak ARFI is applied at the apex of the cornea, which is set to be at the focal point of the ultrasound beam. The fixed boundaries, simulating the behavior of the ciliary muscles of the cornea, are set on the lower face of the cornea at 6 vertex points. This is shown in Fig. 4.
D. ULTRASOUND TRANSDUCER MODEL
The ultrasound transducer is used to generate the internal push inside the corneal tissue that induces tissue deformation. It is also used to perform B-mode imaging of the deformed tissue at subsequent time stamps to capture the propagation of the deformation wave. Therefore, the ultrasound transducer performs two functions in the transient elastography procedure: acting as an internal actuator to induce tissue deformation, and imaging to capture the propagation of the induced displacement.
In this model, two MATLAB toolboxes are used to simulate the behavior of the ultrasound transducer, both for generating the ARFI and for performing the B-mode linear imaging procedure. The FOCUS toolbox is used to simulate the ultrasound transducer while generating the ARFI, whereas the Field II toolbox is used to simulate the ultrasound imaging procedure. The transducer parameters used in both simulations are listed in Table 3.
The purpose of the ARFI is to generate a highly localized ultrasound push inside the tissue. This localization gives rise to a quantitative estimation of the tissue's mechanical properties. The imparted ultrasonic power due to the focused push decays exponentially away from the focal push location in all directions. The generated ARFI induces the corneal tissue to deform, yielding a deformation (displacement) wave that propagates away from the focal point. The generated wave is considered to be the shear wave.
The Fast Near-field Method (FNM) from the FOCUS toolbox is used to simulate the behavior of the ultrasound linear transducer while generating the ARFI. The pressure map simulated for the ultrasound transducer is presented in Fig. 5. The focal point is set to be around 28.5 mm in the axial (depth) direction. The focal point is the point where the peak ultrasound pressure generated by the transducer is applied inside the tissue. It is the imparted ultrasonic pressure that is responsible for inducing the tissue deformation and hence the shear wave. We have simulated the transducer 2D pressure map, the transverse cross-sectional pressure distribution across the 2D map at different axial depths, the focal axial pressure at the center and the average axial pressure. In addition, the focal axial intensity, average axial intensity, focal axial force and average axial force are simulated as well. The resulting force field is then applied to the 3D FEM to induce deformation and to simulate the tissue behavior under such an applied force for different elastic moduli. Tissue deformation at different consecutive time frames is then picked up by B-mode imaging with the ultrasound transducer model implemented using the Field II toolbox. The force is applied for about 1 ms and the B-mode imaging procedure starts just after the ARFI is applied. The captured B-mode images of the wave propagation give rise to an estimate of the shear wave speed (SWS) that is related to the tissue biomechanics.
The 2D pressure distribution is shown in Fig. 5, where the highest pressure value is observed around the preset focal point of 28.5 mm. Multiple transverse cross-sectional pressure distributions at different focal planes are shown in Fig. 6(a), where the highest and smoothest pressure distribution is observed at 33.35 mm, which is the closest focal plane to the focal point. The axial pressure distribution at the focal beam (blue line) along with the averaged axial pressure distribution (red line) are shown in Fig. 6(b), where the peak axial pressure value is observed at 28.5 mm, coincident with the preset focal point. The axial intensity distribution at the focal beam (blue line) along with the averaged axial intensity distribution (red line) are shown in Fig. 6(c), where the peak axial intensity value is observed at 28.5 mm. The axial acoustic force distribution at the focal beam (blue line) along with the averaged axial force distribution (red line) are shown in Fig. 6(d), where the acoustic force is calculated according to Eqn. (3):

F = 2 α I / C   (3)

where α is the acoustic absorption coefficient in dB/(cm·MHz), I is the temporal average intensity of the beam in W/m², and C is the speed of sound in tissue in m/s.
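The relation can be evaluated directly; in the sketch below the absorption coefficient, center frequency, intensity and sound speed are illustrative assumptions, and a dB-to-neper conversion is included because α is quoted in dB/(cm·MHz):

```python
# Minimal sketch of the radiation-force relation reconstructed above (F = 2*alpha*I/C).
# All numeric values here are illustrative assumptions, not parameters from the paper.
import math

def radiation_force(alpha_db_cm_mhz: float, freq_mhz: float,
                    intensity_w_m2: float, sound_speed_m_s: float) -> float:
    """Body force per unit volume (N/m^3) from absorption, intensity and sound speed."""
    # Convert alpha from dB/(cm*MHz) at the given frequency to Np/m.
    alpha_np_m = alpha_db_cm_mhz * freq_mhz * 100.0 / (20.0 * math.log10(math.e))
    return 2.0 * alpha_np_m * intensity_w_m2 / sound_speed_m_s

if __name__ == "__main__":
    f_body = radiation_force(alpha_db_cm_mhz=0.5, freq_mhz=7.5,
                             intensity_w_m2=1.0e4, sound_speed_m_s=1540.0)
    print(f"radiation body force ~ {f_body:.1f} N/m^3")
```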
E. CORNEA SCATTERER MODEL
The Field II toolbox is used to generate the equivalent scatterer model of the 3D FEM of the cornea. The FEM consists of nodes with random spatial locations in the interior and specific spatial locations at the boundaries. It is at these nodes that the ultrasound echoes are calculated to reconstruct an ultrasound line in the B-mode image. The nodes' spatial locations are exported to a comma-separated value (csv) file with a fixed file structure, where the x, y and z positions of each node are listed in a single row. Each row starts with the node's stationary spatial location in the reference frame in the first three columns, and the subsequent columns hold the node's new spatial locations at each time frame of the simulation. This file is then imported into MATLAB, where the scatterers' spatial locations in both the reference frame and each deformed frame are read in order to construct the B-mode images. A snapshot of the corresponding file structure is presented in Table 4, where only the x and z locations are shown, as they are responsible for the deformation in our FEM. The complete scatterer model as seen by the ultrasound transducer is shown in Fig. 7.
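A minimal sketch of how such a file could be parsed is given below; the file name and the assumption that all three coordinates are stored follow the description above rather than the actual export:

```python
# Minimal sketch: parse the scatterer-position file described above.
# File name and number of frames are assumptions for illustration.
import numpy as np

def load_scatterer_positions(path: str):
    """Return (reference_xyz, frames_xyz) from a CSV whose rows hold
    x0, y0, z0, x1, y1, z1, ... per node (reference frame first)."""
    table = np.loadtxt(path, delimiter=",")            # shape: (n_nodes, 3 * (1 + n_frames))
    n_frames = table.shape[1] // 3 - 1
    reference = table[:, :3]                           # stationary positions, one row per node
    frames = table[:, 3:].reshape(table.shape[0], n_frames, 3)
    return reference, np.transpose(frames, (1, 0, 2))  # (n_frames, n_nodes, xyz)

if __name__ == "__main__":
    ref, frames = load_scatterer_positions("cornea_nodes.csv")  # hypothetical FEM export
    print(ref.shape, frames.shape)
```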
III. METHODOLOGY
Corneal biomechanics are estimated from the speed of the shear wave propagating in response to the acoustic radiation force impulse (ARFI) generated by the ultrasound transducer. This force impulse is generated by a focused ultrasound push and applied to the corneal tissue transiently for about 1 ms. The ARFI causes deformation of the corneal tissue as a wave propagating away from the focal point of application. The resulting deformation is then tracked by the ultrasound transducer via a high frame rate B-mode imaging procedure. A monitoring sequence is then obtained for the cornea model over a complete 10 ms of B-mode imaging, resulting in a video stack of the deformation propagation. The deformation is tracked laterally in either direction around the cornea apex by means of two fixed points in the video stack: one point is the focal point and the other is laterally distal from the cornea apex in either lateral direction. This process is repeated for the different elastic moduli cornea models introduced in this study.
The proposed methodology is as follows:

i. The 3D FEM, 2D scatterer and ultrasound transducer models are implemented in parallel to simulate the average human cornea dimensions and a linear array ultrasound transducer, respectively, as discussed in section 2.1.

ii. The ultrasound transducer model is used to: a. obtain a reference frame for the cornea before applying the ARFI (unloaded cornea frame); b. generate the ARFI to act as an internal stimulus for the corneal tissue to deform.

iii. The generated ARFI is then applied to the 3D FEM to simulate the mechanical behavior of the cornea (loaded cornea) for different time frames corresponding to the frame rate of the ultrasound B-mode imaging. We use a frame rate of 100 kHz for high frame rate imaging to reduce the error in the values estimated from ultrasound imaging, as discussed in section 2.3.

iv. The simulated time frames from the mechanical FEM model are then used to update the spatial locations of the scatterers inside the scatterer model.

v. The ultrasound transducer is used to image the updated scatterer model to obtain the different time frames.

vi. The corneal tissue biomechanics are estimated from the reference frame and the stack of deformed frames.

vii. This process is repeated for every cornea elastic modulus included in this study.
A. ACQUISITION SEQUENCE
The acquisition sequence starts by acquiring a reference frame of the corneal tissue under study. This reference frame is acquired by transmitting a plane wave into the medium of interest (the corneal tissue). Then an ARFI is sent into the corneal tissue for a short duration of about 1 ms, resembling a transient force. This transient force is used to induce corneal tissue deformation, giving rise to shear wave propagation inside the tissue. In this study we apply the ARFI at a focal depth of 28.5 mm inside the tissue. Just after the ARFI is applied to the corneal tissue, high frame rate B-mode imaging starts to capture the migrating shear wave in both lateral directions away from the focal zone. The frame rate used in this study is 100 kHz, enabling accurate tracking of the wave peak through the acquisition time of 10 ms. The complete timing diagram of the B-mode imaging procedure is shown in Fig. 8.
Apodization of the ultrasonic wave is performed to help reduce both the grating and side lobes of the ultrasound beam.
B. SHEAR WAVE SPEED ESTIMATION
The shear wave speed is estimated using the lateral Time-To-Peak displacement (lateral TTP) technique. The wave speed is estimated by dividing the lateral distance between two probing nodes by the difference in the times of wave-peak arrival at these nodes. The two probing nodes are chosen to be the focal zones of two distal ultrasound beams: one node is at the focal zone of the central ultrasound beam and the other is at the focal zone of one of the lateral beams. The second probing point can be in either lateral direction, i.e. to the left or the right of the central beam. Time-To-Peak displacement is a property of the tissue biomechanics, where tissues with different elastic moduli yield different TTPs. It is worth noting that the ARFI value only changes the amplitude of the tissue deformation: two tissues with different elastic moduli subjected to the same ARFI will yield two different peak deformation values, and these two tissues will also yield two different TTP values. The TTP of the tissue under study is estimated from the displacement curve obtained for the probing node, with proper tracking-frequency considerations. Equation (4) is used to calculate the estimated SWS:

C_avg = Δx / Δt   (4)

where C_avg is the average velocity across the lateral position, and Δx and Δt are the difference in distance between the probing nodes and the difference in the times of peak arrival at these nodes, respectively.

Figure 9 shows the reference frame and the peak axial deformation frame after applying the ARFI for the 140 kPa corneal model. The focal point is observed to reach its peak deformation along the axial direction in Fig. 9(b). The reference frame shown in Fig. 9(a) is used as a base for estimating the axial deformation of the cornea through the subsequent time frames after applying the ARFI, and for estimating the peak axial deformation as well. It is considered the starting point of the transient elastography imaging procedure. This reference frame is obtained for all cornea models with different elastic moduli, namely 140 kPa, 300 kPa, 600 kPa, 800 kPa, 1 MPa, 1.5 MPa, 2 MPa, 2.5 MPa and 3 MPa. The field of view (FOV) is limited to the lateral distance covered by the ultrasound probe during B-mode imaging; in this experiment it is about 8 mm in the lateral direction and about 12 mm in the axial direction. The corneal tissue is observed to occupy axial locations from 28 mm to 28.5 mm at the apex, which is consistent with the literature value of about 0.5 mm for the corneal thickness at the apex.
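A minimal sketch of this lateral TTP estimate, assuming two displacement-versus-time traces sampled at the 100 kHz tracking rate (the traces, their peak times and the 2 mm node spacing are synthetic illustrations):

```python
# Minimal sketch of lateral Time-To-Peak shear wave speed estimation (Eqn. 4).
# The displacement traces and node spacing are illustrative assumptions.
import numpy as np

def sws_from_ttp(disp_focal: np.ndarray, disp_lateral: np.ndarray,
                 lateral_distance_m: float, frame_rate_hz: float) -> float:
    """Average shear wave speed from the peak-arrival delay between two probing nodes."""
    t_peak_focal = np.argmax(disp_focal) / frame_rate_hz
    t_peak_lateral = np.argmax(disp_lateral) / frame_rate_hz
    return lateral_distance_m / (t_peak_lateral - t_peak_focal)

if __name__ == "__main__":
    fr = 100e3                        # 100 kHz tracking frame rate
    t = np.arange(0, 10e-3, 1 / fr)   # 10 ms acquisition
    # Synthetic displacement pulses peaking 0.29 ms apart at nodes 2 mm apart (~6.9 m/s).
    disp_a = np.exp(-((t - 3.00e-3) / 0.5e-3) ** 2)
    disp_b = 0.7 * np.exp(-((t - 3.29e-3) / 0.5e-3) ** 2)
    print(f"estimated SWS ~ {sws_from_ttp(disp_a, disp_b, 2.0e-3, fr):.2f} m/s")
```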
IV. RESULTS
Peak corneal deformation after applying the ARFI is shown in Fig. 9(b), where the 8 mm wide FOV is considered optimal for monitoring the lateral deformation of the cornea. This is an empirical result obtained by trial and error. The shear wave is clearly observed in this frame, propagating in either lateral direction along the corneal tissue.
A complete simulation of the shear wave propagation inside the 140 kPa corneal model over the temporal domain is performed and shown in Fig. 10. The shear wave propagation is shown for only eight different time stamps, along with the reference frame at the beginning, for convenience of presentation. The monitoring sequence starts with the reference frame at 0 s, before which the ARFI is applied, and ends at 10 ms.
A complete simulation time sequence with a 100 kHz frame rate is obtained for each corneal model. The deformation along these frames is tracked at each time frame and compared with the reference frame. Complete 1D deformation curves for both the focal point and the lateral point are then constructed from the peak deformation tracking procedure and are shown in Fig. 11. The time axis runs from 3 ms to 10 ms rather than from 0 s, since there is no significant information to be shown along the curve before the deformation. Figure 12 shows the estimated times at which the focal peak deformation takes place for each corneal model. These times enable the identification of the frame at which the peak deformation happens, and hence the accurate estimation of the focal peak deformation itself (the deformation amplitude). This is achieved by means of the reference frame and the estimated peak deformation frame. The reference frame along with the peak deformation frame for each corneal model is shown in Fig. 13. These frames are used to estimate the peak deformation value for each cornea model as well as the deformation amplitude ratio at 2 mm in either lateral direction distal from the apex.
The 1D deformation curves allow the SWS to be estimated for each cornea model by dividing the spatial distance between the two probing nodes by the time difference between their TTPs. The theoretical and estimated shear wave speeds, along with the speed estimation accuracy for each cornea model, are listed in Table 5.
The deformation amplitude is estimated accurately for each cornea model by comparing the ultrasound B-mode focal beam of the reference frame and the corresponding deformation frame. The reference beam along with all deformed beams for each cornea model are presented in Fig. 14. The deformation amplitude is taken as the difference between the axial location at which the corneal tissue appears in the reference frame and the axial location at which it appears in the deformation frame of each cornea model. The deformation amplitudes are shown in Fig. 15. Figure 16 shows the deformation amplitude ratios for each elastic modulus cornea model. They are estimated from the B-mode images where the focal peak axial deformation happens, as the ratio of the apex deformation to the peak axial deformation 2 mm nasal or temporal to the apex. The deformation amplitude ratio characterizes how the cornea apex deforms with respect to the 1 mm or 2 mm nasal or temporal deformation of the corneal tissue.
V. DISCUSSION
From Fig. 10 it is observed that the corneal tissue experiences one complete cycle of wave propagation within 10 ms at 140 kPa. The higher the elastic modulus of the corneal tissue, the higher the number of cycles of wave propagation. It is also observed that the 140 kPa model reaches its peak deformation at t3 when the ARFI is applied. The tissue starts with no deformation at the beginning of the simulation, the deformation increases gradually to its peak value, and it then dampens again until the no-deformation state is reached. Several periods of ARFI application can be performed to obtain an average tissue behavior and more accurate results for the biomechanical parameters.
The TTP values for the focal peak axial displacement are 6.9 ms at 140 kPa and 5.3 ms at 3 MPa, a difference of 1.6 ms between 140 kPa and 3 MPa. This narrow time difference makes it nearly impossible to estimate the corneal tissue biomechanics with ordinary ultrasound transducers operating at frame rates of several tens or even hundreds of frames per second. However, it is possible to achieve adequate temporal resolution with transducers operating at several thousand or even hundreds of thousands of frames per second, as in this study. Here, the 100 kHz frame rate gives about 160 frames within the 1.6 ms difference between the TTPs of 140 kPa and 3 MPa, which is sufficient to differentiate between them. The TTP decreases as the tissue elasticity increases, as shown in Fig. 12.
From Fig. 11 and Fig. 13 it is observed that the force is damped rapidly in both the temporal and spatial domains: the complete B-mode frame shows high localization of the force around the point of application, and the deformation curves show the rapid damping in time of the force after its application.
From Fig. 15 we observe that the deformation amplitudes decrease as the Young's modulus of the corneal models increases. As the elastic moduli shift into the MPa range, the difference between two consecutive deformation amplitudes becomes smaller. Differentiating between these deformation amplitudes is subject to the transducer's axial resolution. The lateral resolution affects only the shear wave tracking process in the lateral direction, where the probing nodes are tied to the beam width. The beam width determines the lateral resolution of the transducer and hence the lateral distance between the probing nodes. A smaller lateral distance yields worse temporal separation between the deformation curves of the two probing nodes used for the wave speed estimate, while a larger lateral distance yields better temporal separation.
We conclude from Fig. 14 that the transducer's axial resolution is capable of differentiating between four of the major corneal layers.

Our estimation accuracy of the SWS was highest at 800 kPa, with a value of 99.8%, and lowest at 2 MPa, with a value of 97.6%, with respect to the theoretical SWS calculated from Eqn. (1).
As shown in Fig. 16, the DA ratio at 2 mm represents how the cornea apex deforms with respect to the paracentral regions. It is observed that cornea models with a low elastic modulus have high DA ratio values, while the DA ratio decreases as the elastic modulus increases, which matches experimental results obtained in relevant studies. The DA ratio gives objective information about how the whole corneal tissue deforms in response to an ARFI or any external force. This means that for the cornea pre-refractive surgery, where the stiffness is considered high, the DA ratio at 2 mm is expected to be small, while the cornea post-refractive surgery is expected to have a high DA ratio at 2 mm, as its stiffness is decreased by the surgery.
Depending on only one of the estimated parameters to assess corneal biomechanics is not recommended, because it can lead to a misleading estimation, as in the case of the 300 kPa cornea model: the estimated deformation amplitude ratio at 2 mm is expected to be smaller than that of the 140 kPa model, while the estimated value is observed to be higher. Relying only on the deformation amplitude ratio at 2 mm would suggest that the estimated corneal elastic modulus should be lower than 140 kPa. In contrast, relying on all the estimated parameter values when assessing corneal biomechanics leads to an accurate estimation of the corneal biomechanics and an accurate assessment of the post-refractive surgery cornea.
VI. CONCLUSION
In this research, the shear wave speed, deformation amplitude, Time-To-Peak deformation and deformation amplitude ratio are estimated for cornea models with different elastic moduli. Two models are used in parallel to study the biomechanical behavior of the cornea pre- and post-refractive surgery, namely a FEM in conjunction with a scatterer model. A third, ultrasound transducer model is used to simulate the behavior of the ultrasound transducer while imaging corneal tissue undergoing transient elastography; it is also used to simulate the transducer behavior while generating the acoustic radiation force impulse that excites the corneal tissue. Nine FEMs are used to represent the cornea in different biomechanical states pre- and post-refractive surgery: 140 kPa, 300 kPa, 600 kPa and 800 kPa as post-refractive surgery corneal models, and 1 MPa, 1.5 MPa, 2 MPa, 2.5 MPa and 3 MPa as pre-refractive surgery corneal models.
The ARFI is applied transiently to each of the nine cornea models to induce deformation wave propagation. This wave of tissue deformation is tracked using a B-mode imaging procedure that yields, for each cornea model, a video stack representing the wave propagation. The B-mode video stack containing the deformation wave is tracked at a 100 kHz frame rate. The resulting wave speed is tracked through two fixed probing points assigned on two fixed ultrasound beams: one point is at the cornea apex and the other is distal from the cornea apex in either lateral direction. The number of probing points is subject to the lateral resolution of the transducer, where involving more probing points inside the B-mode image frame requires a finer lateral resolution of the utilized transducer.
Focal TTP deformation values are estimated from the TTP deformation curves of the focal apical probing point. It is observed that increasing corneal stiffness yields a decrease in the TTP deformation values.
Focal peak axial deformations are estimated from the focal ultrasound beams by estimating the corneal tissue axial location in both the reference frame and all the peak deformation frames for each cornea model. The deformation amplitude values are observed to decrease with increasing corneal stiffness.
Similarly, paracentral axial deformations are estimated, which enables the estimation of the deformation amplitude ratio at 2 mm. The deformation amplitude ratios at 2 mm are shown to decrease with increasing elastic modulus of the corneal tissue.
Shear wave speed values are estimated using the TTP deformation curves of both probing points. The lateral distance between the probing points divided by the difference in peak arrival times gives an estimate of the corresponding shear wave speed of the cornea model under study. The elastic moduli can then be estimated from the resulting shear wave speeds, as they are related by Eqns. (1) and (2).
It is recommended to rely on all of the estimated parameter values (TTP deformation, deformation amplitude, deformation amplitude ratio at 2 mm and shear wave speed) when assessing corneal biomechanics after refractive surgery. Depending on only one of these values may be misleading, as in the case of the 300 kPa cornea model.